Optimizing Mappings


Informatica PowerCenter 9.1 Optimizing Mappings covers the following topics:

*  Optimizing Mappings Overview

*  Optimizing Flat File Sources

*  Configuring Single-Pass Reading

*  Optimizing Pass-Through Mappings

*  Optimizing Expressions

*  Optimizing External Procedures

*  Optimizing Filters

*  Optimizing Datatype Conversions

Optimizing Mappings Overview:

Mapping-level optimization may take time to implement, but it can significantly boost session performance.

Focus on mapping-level optimization after you optimize the targets and sources. Generally, you optimize a mapping by reducing the number of transformations it contains: configure the mapping with the fewest transformations and expressions that can still do the required work, and delete unnecessary links between transformations to minimize the amount of data moved.

Optimizing Flat File Sources:

Complete the following tasks to optimize flat file sources:

*  Optimize delimited flat file sources.

*  Optimize XML and flat file sources.

*  Optimize the line sequential buffer length.

Optimizing the Line Sequential Buffer Length:

If the session reads from a flat file source, you can improve session performance by setting the number of bytes the Integration Service reads per line. By default, the Integration Service reads 1024 bytes per line. If each line in the source file is less than the default setting, you can decrease the line sequential buffer length in the session properties.
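
If you want to confirm that a lower setting is safe, one way is to measure the longest line in the source file first. The following is a minimal sketch, not part of PowerCenter; the file name customers.dat is a made-up example:

```python
# Minimal sketch: find the longest line in a flat file before lowering the
# line sequential buffer length from its 1024-byte default.
# "customers.dat" is a hypothetical file name used only for illustration.

def max_line_bytes(path):
    """Return the byte length of the longest line, excluding line terminators."""
    longest = 0
    with open(path, "rb") as source:
        for line in source:
            longest = max(longest, len(line.rstrip(b"\r\n")))
    return longest

if __name__ == "__main__":
    longest = max_line_bytes("customers.dat")
    print(f"Longest line: {longest} bytes (default buffer is 1024)")
```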

Optimizing Delimited Flat File Sources:

If a source is a delimited flat file, you must specify the delimiter character to separate columns of data in the source file. You must also specify the escape character. The Integration Service reads the delimiter character as a regular character if you include the escape character before the delimiter character. You can improve session performance if the source flat file does not contain quotes or escape characters.
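
As an illustration of what escape handling means for a delimited file, the sketch below uses Python's csv module as a stand-in for the Integration Service; the sample rows are made up:

```python
# Sketch: how an escape character turns a delimiter into ordinary data.
# Python's csv module stands in for the Integration Service here; the
# sample rows are made up for illustration.
import csv
import io

# Comma-delimited data where "\," is an escaped comma inside a field.
raw = "ACME\\, Inc.,1000\nGlobex,2000\n"

reader = csv.reader(io.StringIO(raw), delimiter=",", escapechar="\\",
                    quoting=csv.QUOTE_NONE)
for row in reader:
    print(row)
# ['ACME, Inc.', '1000']   <- escaped delimiter read as a regular character
# ['Globex', '2000']
```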

Optimizing XML and Flat File Sources:

XML files are usually larger than flat files because of the tag information. The size of an XML file depends on the level of tagging in the XML file. More tags result in a larger file size. As a result, the Integration Service may take longer to read and cache XML sources.
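
For a rough sense of that overhead, here is a small sketch that compares the size of one made-up record in delimited and XML form:

```python
# Sketch: the same made-up record as delimited text and as XML, to show
# the extra bytes that tagging adds.
flat_row = "1001,Smith,2500.00\n"
xml_row = (
    "<employee>"
    "<id>1001</id>"
    "<name>Smith</name>"
    "<salary>2500.00</salary>"
    "</employee>\n"
)

print(len(flat_row.encode("utf-8")))  # 19 bytes
print(len(xml_row.encode("utf-8")))   # 77 bytes, most of it tag markup
```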

Configuring Single-Pass Reading:

Single-pass reading allows you to populate multiple targets with one source qualifier. Consider using single-pass reading if you have multiple sessions that use the same sources. You can combine the transformation logic for each mapping in one mapping and use one source qualifier for each source. The Integration Service reads each source once and then sends the data into separate pipelines. A particular row can be used by all the pipelines, by any combination of pipelines, or by no pipelines.

For example, you have a purchasing source table, and you use that source daily to perform an aggregation and a ranking. If you place the Aggregator and Rank transformations in separate mappings and sessions, you force the Integration Service to read the same source table twice. However, if you include the aggregation and ranking logic in one mapping with one source qualifier, the Integration Service reads the source table once, and then sends the appropriate data to the separate pipelines.

When changing mappings to take advantage of single-pass reading, you can optimize this feature by factoring out common functions from mappings. For example, if you need to subtract a percentage from the Price ports for both the Aggregator and Rank transformations, you can minimize work by subtracting the percentage before splitting the pipeline. You can use an Expression transformation to subtract the percentage, and then split the mapping after the transformation.

[Figure: mapping that uses one source qualifier to feed both the Aggregator and Rank pipelines]
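
The following plain-Python sketch is only an analogy for this pattern, not PowerCenter itself: the source file is read once, the shared price adjustment is applied before the data splits, and both the aggregation and the ranking come from that single pass. The file name, column names, and discount value are all made up.

```python
# Sketch: a plain-Python analogy for single-pass reading. The source is
# read once, the shared expression (subtracting a percentage from the
# price) is applied once, and both branches work from that single pass.
import csv
from collections import defaultdict

DISCOUNT = 0.10          # shared logic factored out before the pipelines split
TOP_N = 3

totals = defaultdict(float)   # branch 1 (Aggregator-style): total per product
rows = []                     # branch 2 (Rank-style): rows to rank by price

with open("orders.csv", newline="") as source:          # read the source once
    for row in csv.DictReader(source):
        price = float(row["PRICE"]) * (1 - DISCOUNT)    # shared Expression logic
        totals[row["PRODUCT"]] += price                 # aggregation branch
        rows.append((row["PRODUCT"], price))            # ranking branch

top_rows = sorted(rows, key=lambda r: r[1], reverse=True)[:TOP_N]
print(dict(totals))
print(top_rows)
```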

Optimizing Filters:

Use one of the following transformations to filter data:

*  Source Qualifier transformation. The Source Qualifier transformation filters rows from relational sources.

*  Filter transformation. The Filter transformation filters data within a mapping. The Filter transformation filters rows from any type of source.

If you filter rows from the mapping, you can improve efficiency by filtering early in the data flow. Use a filter in the Source Qualifier transformation to remove the rows at the source. The Source Qualifier transformation limits the row set extracted from a relational source.

If you cannot use a filter in the Source Qualifier transformation, use a Filter transformation and move it as close to the Source Qualifier transformation as possible to remove unnecessary data early in the data flow. The Filter transformation limits the row set sent to a target.
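
To make the difference concrete, here is a small sketch that uses sqlite3 as a stand-in for a relational source, with a made-up orders table: filtering in the source query (like a Source Qualifier filter) keeps unneeded rows from ever entering the pipeline, while a later filter extracts every row first.

```python
# Sketch: filtering at the source versus filtering after extraction.
# sqlite3 stands in for the relational source; table and column names
# are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "OPEN", 100.0), (2, "CLOSED", 50.0), (3, "OPEN", 75.0)])

# Like a Source Qualifier filter: only the needed rows leave the source.
at_source = conn.execute(
    "SELECT id, amount FROM orders WHERE status = 'OPEN'").fetchall()

# Like a Filter transformation placed later: every row is extracted first,
# then the unwanted ones are dropped inside the pipeline.
all_rows = conn.execute("SELECT id, status, amount FROM orders").fetchall()
in_pipeline = [(r[0], r[2]) for r in all_rows if r[1] == "OPEN"]

assert at_source == in_pipeline
print(at_source)
```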

Avoid using complex expressions in filter conditions. To optimize Filter transformations, use simple integer or true/false expressions in the filter condition. For example, a condition such as IIF(AMOUNT > 100, TRUE, FALSE) can usually be written more simply as AMOUNT > 100.

Note: You can also use a Filter or Router transformation to drop rejected rows from an Update Strategy transformation if you do not need to keep rejected rows.
