Last Updated: May 25, 2021


A variety of factors are considered when assessing the success of a project. Naming standards are an important, but often overlooked, component. Applying and enforcing naming standards not only establishes consistency in the repository, but also provides a friendly environment for developers. Choosing a good naming standard and adhering to it ensures that the repository can be easily understood by all developers.


Having a good naming convention facilitates smooth migrations and improves readability for anyone reviewing or carrying out maintenance on the repository objects. It helps them to understand the processes being affected. If consistent names and descriptions are not used, significant time may be needed to understand the workings of mappings and transformation objects. If no description is provided, a developer is likely to spend considerable time going through an object or mapping to understand its objective.

The following pages offer suggested naming conventions for various repository objects. Whatever convention is chosen, it is important to make the selection very early in the development cycle and communicate the convention to project staff working on the repository. The policy can be enforced by peer review and at test phases by adding processes to check conventions both to test plans and to test execution documents.
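Convention checks of this kind can be partly automated. As an illustrative sketch (the patterns and function below are hypothetical, loosely based on the prefixes suggested later in this document, and should be adapted to whatever standard a project actually adopts), a short Python script could flag repository object names that break the agreed standard:

```python
import re

# Hypothetical patterns based on the conventions suggested in this document;
# adjust to match the standard your project adopts.
NAME_PATTERNS = {
    "mapping":    re.compile(r"^m_[A-Z0-9_]+$"),
    "expression": re.compile(r"^EXP_[A-Z0-9_]+$"),
    "lookup":     re.compile(r"^U?LKP_[A-Z0-9_]+$"),
    "workflow":   re.compile(r"^(wkf|wf)_[A-Za-z0-9_]+$"),
}

def check_name(object_type: str, name: str) -> bool:
    """Return True if the name matches the convention for its object type."""
    pattern = NAME_PATTERNS.get(object_type)
    if pattern is None:
        raise ValueError(f"no convention registered for {object_type!r}")
    return bool(pattern.match(name))

# Example: validate a few candidate names.
assert check_name("mapping", "m_LOAD_CRM_CUSTOMER_DIM")
assert not check_name("expression", "expression1")
```

A script like this can run as part of peer review or test execution, reporting any object whose name fails its pattern.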

Suggested Naming Conventions

Designer Objects



Mapping

m_{PROCESS}_{SOURCE_SYSTEM}_{TARGET_NAME}, or suffix with _{descriptor} if there are multiple mappings for a single target table.




Target Definition

t_{UPDATE_TYPE(S)}_{TARGET_NAME}. This naming convention should only be used within a mapping, as the actual target object name determines the table that PowerCenter accesses.

Aggregator Transformation

AGG_{FUNCTION} that leverages the expression and/or a name that describes the processing being done.

Application Source Qualifier Transformation

ASQ_{TRANSFORMATION}_{SOURCE_TABLE1}_{SOURCE_TABLE2} represents data from an application source.

Custom Transformation

CT_{TRANSFORMATION} name that describes the processing being done.

Data Quality Transform

IDQ_{descriptor}_{plan} with the descriptor describing what this plan is doing with the optional plan name included if desired.

Expression Transformation

EXP_{FUNCTION} that leverages the expression and/or a name that describes the processing being done.

External Procedure Transformation


Filter Transformation

FIL_ or FILT_{FUNCTION} that leverages the expression or a name that describes the processing being done.

Flexible Target Key




Idoc Interpreter

idoci_{Descriptor}_{IDOC Type} defining what the idoc does and possibly the idoc message.

Idoc Prepare

idocp_{Descriptor}_{IDOC Type} defining what the idoc does and possibly the idoc message.

Java Transformation

JTX_{FUNCTION} that leverages the expression or a name that describes the processing being done.

Joiner Transformation


Lookup Transformation

LKP_{TABLE_NAME} or suffix with _{descriptor} if there are multiple look-ups on a single table. For unconnected look-ups, use ULKP in place of LKP.

Mapplet Input Transformation

MPLTI_{DESCRIPTOR} indicating the data going into the mapplet.

Mapplet Output Transformation

MPLTO_{DESCRIPTOR} indicating the data coming out of the mapplet.

MQ Source Qualifier Transformation

MQSQ_{DESCRIPTOR} defines the messaging being selected.

Normalizer Transformation

NRM_{FUNCTION} that leverages the expression or a name that describes the processing being done.

Rank Transformation

RNK_{FUNCTION} that leverages the expression or a name that describes the processing being done.

Router Transformation


SAP DMI Prepare

dmi_{Entity Descriptor}_{Secondary Descriptor} defining what entity is being loaded and a secondary description if multiple DMI objects are being leveraged in a mapping.

Sequence Generator Transformation

SEQ_{DESCRIPTOR}. If generating keys for a target table entity, refer to that entity in the descriptor.

Sorter Transformation


Source Qualifier Transformation

SQ_{SOURCE_TABLE1}_{SOURCE_TABLE2}. Using all source tables can be impractical if there are many tables in a source qualifier, so refer instead to the type of information being obtained, for example SQ_SALES_INSURANCE_PRODUCTS for a certain type of product.

SQL Transformation

SQL_{Query function to be performed}

Stored Procedure Transformation


Transaction Control Transformation

TC_{DESCRIPTOR} indicating the function of the transaction control.

Union Transformation


Unstructured Data Transform

UD_{descriptor} with the descriptor identifying the kind of data being parsed by the UDO transform.

Update Strategy Transformation

UPD_{UPDATE_TYPE(S)} or UPD_{UPDATE_TYPE(S)}_{TARGET_NAME} if there are multiple targets in the mapping (e.g., UPD_UPDATE_EXISTING_EMPLOYEES).

Web Service Consumer


XML Generator Transformation

XG_{DESCRIPTOR} defines the target message.

XML Parser Transformation

XP_{DESCRIPTOR} defines the messaging being selected.

XML Source Qualifier Transformation

XSQ_{DESCRIPTOR} defines the data being selected.


Port Names

Port names should remain the same as the source unless some other action is performed on the port. In that case, the port should be prefixed with the appropriate name.

When the developer brings a source port into a lookup, the port should be prefixed with in_ or i_. This helps the user immediately identify input ports without having to line them up with the input checkbox. In any other transformation, if the input port is transformed into an output port of the same name, prefix the input port with in_ or i_.

Generated output ports can also be prefixed with out_ or o_. This helps trace the port value throughout the mapping as it may travel through many other transformations. If the autolink feature is to be used to link ports by name, however, outputs may be better left with the name of the target port in the next transformation. For variables inside a transformation, the developer can use the prefix v, v_, or var_ plus a meaningful name.

To highlight the input, output, and variable port prefixes, lowercase may be used for them even when an uppercase naming standard is used for ports.

With some exceptions, port standards apply when creating a transformation object. The exceptions are the Source Definition, the Source Qualifier, the Lookup, and the Target Definition ports, which must not change since the port names are used to retrieve data from the database or filesystem and sometimes these are case sensitive.

Other transformations that are not applicable to the port standards are:

  • Normalizer - The ports created in the Normalizer are automatically formatted when the developer configures it.
  • Sequence Generator - The ports are reserved words.
  • Router - Because output ports are created automatically, prefixing the input ports with an I_ prefixes the output ports with I_ as well. Port names should not have any prefix.
  • Sorter, Update Strategy, Transaction Control, and Filter - These ports are always input and output. There is no need to rename them unless they are prefixed. Prefixed port names should be removed.
  • Union - The group ports are automatically assigned to the input and output; therefore prefixing with anything is reflected in both the input and output. The port names should not have any prefix.

All other transformation object ports can be prefixed or suffixed with:

  • in_ or i_ for Input ports
  • o, o_, or _out for Output ports
  • io_ for Input/Output ports
  • v, v_, or var_ for variable ports
  • lkp_ for returns from look ups
  • mplt_ for returns from mapplets

Prefixes are preferred because they are generally easier to notice; developers do not need to expand the columns to see the suffix for longer port names.
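Because the prefixes are fixed strings, port-name checks are easy to mechanize. A minimal sketch, assuming the prefix list above (the function name and example port names are hypothetical):

```python
# Map port-name prefixes to port roles, per the convention above.
PORT_PREFIXES = {
    "in_": "input", "i_": "input",
    "o_": "output", "out_": "output",
    "io_": "input/output",
    "v_": "variable", "var_": "variable",
    "lkp_": "lookup return",
    "mplt_": "mapplet return",
}

def port_role(port_name: str) -> str:
    """Return the role implied by a port's prefix, or 'passthrough' if none."""
    # Check longer prefixes first so that, e.g., 'io_' wins over 'i_'.
    for prefix in sorted(PORT_PREFIXES, key=len, reverse=True):
        if port_name.startswith(prefix):
            return PORT_PREFIXES[prefix]
    return "passthrough"

assert port_role("in_CUSTOMER_ID") == "input"
assert port_role("v_TAX_RATE") == "variable"
assert port_role("CUSTOMER_NAME") == "passthrough"
```

A check like this can also flag prefixed ports in the transformations (Router, Union, Sorter, and so on) where the prefixes should not appear.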

Transformation object ports can also:

  • Have the Source Qualifier port name.
  • Be unique.
  • Be meaningful.
  • Be given the target port name.

Transformation Descriptions

This section defines the standards to be used for transformation descriptions in the Designer.

Source Qualifier Descriptions

  • Should include the aim of the source qualifier and the data it is intended to select. Should also indicate if any overrides are used. If so, it should describe the filters or settings used. Some projects prefer items such as the SQL statement to be included in the description as well.

Lookup Transformation Descriptions

  • Describe the lookup along the lines of the [lookup attribute] obtained from [lookup table name] to retrieve the [lookup attribute name].


  • Lookup attribute is the name of the column being passed into the lookup and is used as the lookup criteria.
  • Lookup table name is the table on which the lookup is being performed.
  • Lookup attribute name is the name of the attribute being returned from the lookup. If appropriate, specify the condition when the lookup is actually executed.
  • It is also important to note lookup features such as persistent cache or dynamic lookup.

Expression Transformation Descriptions

Must adhere to the following format:

  • This expression [explanation of what transformation does].

Expressions can be distinctly different depending on the situation; therefore the explanation should be specific to the actions being performed.

Within each Expression, transformation ports have their own description in the format:

  • This port [explanation of what the port is used for]

Aggregator Transformation Descriptions

Must adhere to the following format:

  • This Aggregator [explanation of what transformation does].

Aggregators can be distinctly different, depending on the situation; therefore the explanation should be specific to the actions being performed.

Within each Aggregator, transformation ports have their own description in the format:

  • This port [explanation of what the port is used for].

Sequence Generators Transformation Descriptions

Must adhere to the following format:

  • This Sequence Generator provides the next value for the [column name] on the [table name].


  • Table name is the table being populated by the sequence number, and the
  • Column name is the column within that table being populated.

Joiner Transformation Descriptions

Must adhere to the following format:

  • This Joiner uses [joining field names] from [joining table names].


  • Joining field names are the names of the columns on which the join is done, and the
  • Joining table names are the tables being joined.

Normalizer Transformation Descriptions

Must adhere to the following format:

  • This Normalizer [explanation].


  • explanation describes what the Normalizer does.

Filter Transformation Descriptions

Must adhere to the following format:

  • This Filter processes [explanation].


  • explanation describes what the filter criteria are and what they do.

Stored Procedure Transformation Descriptions

  • Explain the stored procedure's functionality within the mapping (i.e., what does it return in relation to the input ports?).

Mapplet Input Transformation Descriptions

  • Describe the input values and their intended use in the mapplet.

Mapplet Output Transformation Descriptions

  • Describe the output ports and the subsequent use of those values. For example, for an exchange rate mapplet, describe what currency the output value will be in. Answer questions like: Is the currency fixed or based on other data? What kind of rate is used? Is it a fixed inter-company rate? An inter-bank rate? A business rate or a tourist rate? Has the conversion gone through an intermediate currency?

Update Strategies Transformation Descriptions

  • Describe the Update Strategy and whether it is fixed in its function or determined by a calculation.

Sorter Transformation Descriptions

  • Explanation of the port(s) that are being sorted and their sort direction.

Router Transformation Descriptions

  • Describes the groups and their functions.

Union Transformation Descriptions

  • Describe the source inputs and indicate what further processing on those inputs (if any) is expected to take place in later transformations in the mapping.

Transaction Control Transformation Descriptions

  • Describe the process behind the transaction control and the function of the control to commit or rollback.

Custom Transformation Descriptions

  • Describe the function that the custom transformation accomplishes and what data is expected as input and what data will be generated as output. Also indicate the module name (and location) and the procedure which is used.

External Procedure Transformation Descriptions

  • Describe the function of the external procedure and what data is expected as input and what data will be generated as output. Also indicate the module name (and location) and the procedure that is used.

Java Transformation Descriptions

  • Describe the function of the java code and what data is expected as input and what data is generated as output. Also indicate whether the java code determines the object to be an Active or Passive transformation.

Rank Transformation Descriptions

  • Indicate the columns being used in the rank, the number of records returned from the rank, the rank direction, and the purpose of the transformation.

XML Generator Transformation Descriptions

  • Describe the data expected for the generation of the XML and indicate the purpose of the XML being generated.

XML Parser Transformation Descriptions

  • Describe the input XML expected and the output from the parser and indicate the purpose of the transformation.

Mapping Comments

These comments describe the source data obtained and the structure (file, table, or facts and dimensions) that it populates. Remember to use business terms along with technical details such as table names. This is beneficial when maintenance is required or if issues arise that need to be discussed with business analysts.

Mapplet Comments

These comments explain the process that the mapplet carries out. Always be sure to see the notes regarding descriptions for the Input and Output transformations.

Repository Objects

Repositories, as well as repository-level objects, should also have meaningful names. Repository names should be prefixed with either L_ for local or G_ for global, followed by a descriptor. Descriptors usually include information about the project and/or the environment level (e.g., PROD, TEST, DEV).

Folders and Groups

Working folder names should be meaningful and include the project name and, if there are multiple folders for one project, a descriptor. User groups should also include the project name and descriptors, as necessary. For example, folders DW_SALES_US and DW_SALES_UK could both have TEAM_SALES as their user group. Individual developer folders or non-production folders should be prefixed with z_ so that they are grouped together and not confused with working production folders.

Shared Objects and Folders

Any object within a folder can be shared across folders and maintained in one central location. These objects are sources, targets, mappings, transformations, and mapplets. To share objects in a folder, the folder must be designated as shared. In addition to facilitating maintenance, shared folders help reduce the size of the repository since shortcuts are used to link to the original, instead of copies.

Only users with the proper permissions can access these shared folders. These users are responsible for migrating the folders across the repositories and, with help from the developers, for maintaining the objects within the folders. For example, if an object is created by a developer and is to be shared, the developer should provide details of the object and the level at which the object is to be shared before the Administrator accepts it as a valid entry into the shared folder. The developers, not necessarily the creator, control the maintenance of the object, since they must ensure that a subsequent change does not negatively impact other objects.

If the developer has an object to use in several mappings or across multiple folders, like an Expression transformation that calculates sales tax, the developer can place the object in a shared folder, then use it in other folders by creating a shortcut to it. In this case, the naming convention is sc_ (e.g., sc_EXP_CALC_SALES_TAX). The folder itself should be prefixed with SC_ to identify it as a shared folder and to keep all shared folders grouped together in the repository.

Workflow Manager Objects 

Workflow Objects

Suggested Naming Convention



Command Object



Worklet:

wk or wklt_{DESCRIPTOR}


Workflow:

wkf or wf_{DESCRIPTOR}

Email Task:

email_ or eml_{DESCRIPTOR}

Decision Task:

dcn_ or dt_{DESCRIPTOR}

Assign Task:


Timer Task:

timer_ or tmr_{DESCRIPTOR}

Control Task:

ctl_{DESCRIPTOR} Specify when and how the PowerCenter Server is to stop or abort a workflow by using the Control task in the workflow.

Event Wait Task:

wait_ or ew_{DESCRIPTOR} Waits for an event to occur. Once the event triggers, the PowerCenter Server continues executing the rest of the workflow.

Event Raise Task:

raise_ or er_{DESCRIPTOR} Represents a user-defined event. When the PowerCenter Server runs the Event-Raise task, the Event-Raise task triggers the event. Use the Event-Raise task with the Event-Wait task to define events.

ODBC Data Source Names

All Open Database Connectivity (ODBC) data source names (DSNs) should be set up in the same way on all client machines. PowerCenter uniquely identifies a source by its Database Data Source (DBDS) and its name. The DBDS is the same name as the ODBC DSN since the PowerCenter Client talks to all databases through ODBC.

Also be sure to set up the ODBC DSNs as system DSNs so that all users of a machine can see the DSN. This reduces the chance of discrepancies arising when users work on different (i.e., colleagues') machines and would otherwise have to recreate the DSN on each one.

If ODBC DSNs differ across machines, there is a risk of analyzing the same table under different names. For example, machine1 has ODBC DSN Name0 pointing to database1; TableA analyzed on machine1 is uniquely identified as Name0.TableA in the repository. Machine2 has ODBC DSN Name1 pointing to the same database1; TableA analyzed on machine2 is uniquely identified as Name1.TableA. The result is that the repository may refer to the same object by multiple names, creating confusion for developers, testers, and potentially end users.
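The identity problem can be sketched directly: the repository keys an analyzed source by its DSN name plus table name, so the same physical table reached through two differently named DSNs yields two distinct repository identities (the DSN and table names below are the hypothetical ones from the example):

```python
# The repository identifies an analyzed source by its ODBC DSN name
# plus the table name.
def repository_id(dsn: str, table: str) -> str:
    return f"{dsn}.{table}"

# Same physical table, analyzed through differently named DSNs on two machines:
id_machine1 = repository_id("Name0", "TableA")
id_machine2 = repository_id("Name1", "TableA")
assert id_machine1 != id_machine2  # the repository sees two different objects

# With an identically named system DSN on every client, identities agree:
assert repository_id("SALES_DW", "TableA") == repository_id("SALES_DW", "TableA")
```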

Also, refrain from using environment tokens in the ODBC DSN. For example, do not call it dev_db01. When migrating objects from dev, to test, to prod, PowerCenter can wind up with source objects called dev_db01 in the production repository. ODBC database names should clearly describe the database they reference to ensure that users do not incorrectly point sessions to the wrong databases.

PowerCenter Connection Information

Security considerations may dictate using the name of the database or project instead of {user}_{database name}, except for developer scratch schemas, which are not found in test or production environments. Be careful not to include machine names or environment tokens in the database connection name. Database connection names should be generic so that they remain understandable and ensure a smooth migration.

The naming convention should be applied across all development, test, and production environments. This allows seamless migration of sessions when migrating between environments. If an administrator uses the Copy Folder function for deployment, session information is also copied. If the Database Connection information does not already exist in the folder the administrator is copying to, it is also copied. So, if the developer uses connections with names like Dev_DW in the development repository, they are likely to eventually wind up in the test, and even the production repositories as the folders are migrated. Manual intervention is then necessary to change connection names, user names, passwords, and possibly even connect strings.

Instead, if the developer just has a DW connection in each of the three environments, when the administrator copies a folder from the development environment to the test environment, the sessions automatically use the existing connection in the test repository. With the right naming convention, sessions can be migrated from test to the production repository without manual intervention.
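The effect of environment-agnostic connection names can be sketched as a lookup: each environment defines the same logical connection name against its own physical database, so migrated sessions resolve without manual changes (all connection and database names below are hypothetical):

```python
# Each environment defines the same logical connection name, pointing at
# its own physical database; folder copies then need no manual remapping.
connections = {
    "DEV":  {"DW": "dev_dw_db01"},
    "TEST": {"DW": "test_dw_db01"},
    "PROD": {"DW": "prod_dw_db01"},
}

def resolve(env: str, logical_name: str) -> str:
    """Resolve a session's logical connection name in a given environment."""
    return connections[env][logical_name]

# A session referencing "DW" works unchanged after migration:
assert resolve("DEV", "DW") == "dev_dw_db01"
assert resolve("PROD", "DW") == "prod_dw_db01"
```

Had the development repository used a connection literally named Dev_DW, the copied session would carry that name into test and production, forcing the manual cleanup described above.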

At the beginning of a project, have the Repository Administrator or DBA set up all connections in all environments based on the issues discussed in this Best Practice. Then use permission options to protect these connections so that only specified individuals can modify them. Whenever possible, avoid allowing developers to create their own connections, which can introduce differing conventions and duplicate connections.
