Last Updated Date: May 25, 2021

Using Trust Settings and Validation Rules

Trust is a designation of confidence in the relative accuracy of a particular piece of data. Validation determines whether a particular piece of data is valid. Trust and validation work together to determine “the best version of the truth” among multiple sources of data. This article provides a brief overview of how trust settings and validation rules work together and includes best practice recommendations and examples.

This article is recommended reading for administrators and implementers.

Using Trust Levels

This section describes how to determine appropriate trust levels for an individual piece of data coming from a given data source. Use the Trust tool in the Hub Console to configure trust levels. To learn more, see the Informatica MDM Configuration Guide.

About Trust Levels

Trust is a designation of confidence in the relative accuracy of a particular piece of data. For each column from each source, define a trust level represented by a number between 0 and 100, with 0 being the least trustworthy and 100 being the most trustworthy. By itself, this number has no meaning. It becomes meaningful only when compared with another trust number to determine which is higher.

Trust is used to determine:

  • Survivorship when two or more records are merged, including group merges.
  • Whether updates from a source system are reliable enough to update the “best version of the truth” record.
  • The MDM Hub’s ongoing management of the “best of breed” record, which uses trust rules to assess updates from source systems according to their trust weightings.

How Trust Works

In a merge, Informatica MDM Hub calculates a trust score for both records being merged together (merge source and merge target). Informatica MDM Hub compares the trust score of the merge source with the trust score of the merge target and changes the survived value in the base object only if the merge source has a higher trust score than the merge target. If the trust score of the merge target is higher, then the value of the merge target remains unchanged.

Consider the following example. When two base object records merge, the MDM Hub calculates the trust score for each trusted column in the two base object records being merged.

Cells (the intersection of a column and record) with the highest values survive in the final merged record.



When an update comes in from a source system, the MDM Hub calculates the trust score on the incoming data and compares it to the trust score of the data in the base object.

Updates are applied to the base object only for cells that have the same or higher trust score on the incoming data.
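The per-cell comparison described above can be sketched in a few lines of Python (an illustrative simplification, not the MDM Hub's internal implementation; the column names and scores are hypothetical):

```python
def apply_update(base_cells, incoming_cells):
    """Sketch of per-cell survivorship on update: an incoming value
    replaces the base object value only when its trust score is the
    same or higher. Cells are (value, trust_score) pairs by column."""
    result = dict(base_cells)
    for column, (value, trust) in incoming_cells.items():
        if column in result and trust >= result[column][1]:
            result[column] = (value, trust)
    return result

base = {"Last_Name": ("Hoare", 90), "City": ("Toronto", 40)}
incoming = {"Last_Name": ("Hoa", 18), "City": ("Boston", 60)}
merged = apply_update(base, incoming)
# Last_Name keeps "Hoare" (90 > 18); City takes "Boston" (60 > 40)
```

Note that on an exact tie the incoming value wins here, matching the "same or higher" behavior described above.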


How Decay Periods Affect Trust Levels

Depending on the configured decay period, a small difference (such as one day) in the age of two records does not affect survivorship immediately, especially if the merge date is very close to the source last update date (src_lud), because little time has passed for the trust level to move down the curve. With linear decay, the impact of age remains constant. With RISL (rapid initial, slow later) and SIRL (slow initial, rapid later) decay, the impact of age changes as the trust level moves down the curve.

However, trust levels are affected by the way time units are specified. The more granularly the time units are specified, the more sensitive the graph is to small changes in age, although that sensitivity does decrease with longer decay periods. For example, the following table shows trust settings based on different ways to configure the decay period. For all of these examples, the maximum trust setting is 90 and the minimum trust setting is 10.

(The graph images that appeared in the Graph Type column are not reproduced here.)

Decay Period    Graph Type          Trust Level
One year        (graph not shown)   trust = 90
12 months       (graph not shown)   trust = 89.60
365 days        (graph not shown)   trust = 89.56

1 year          (graph not shown)   trust = 90
12 months       (graph not shown)   trust = 89.8
365 days        (graph not shown)   trust = 89.78

3 years         (graph not shown)   trust = 90 after one day
36 months       (graph not shown)   trust = 89.93 (actually 89.9333, but the system rounds to two decimal places)
1095 days       (graph not shown)   trust = 89.93 (actually 89.9269)
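For the linear graph type, the decayed trust level can be reproduced with a simple proportional formula (a sketch under the assumption of pure linear decay; RISL and SIRL follow curves and are not covered here). With maximum trust 90, minimum trust 10, and a record one day old, it reproduces the 365-day and 1095-day values above:

```python
def linear_decay(max_trust, min_trust, age, decay_period):
    """Linear decay sketch: trust falls in equal steps from max_trust
    to min_trust over the decay period. The age and the decay period
    must be expressed in the same time units (for example, days)."""
    if age >= decay_period:
        return min_trust
    return max_trust - (max_trust - min_trust) * age / decay_period

print(round(linear_decay(90, 10, 1, 365), 2))   # 89.78 (one-year decay period)
print(round(linear_decay(90, 10, 1, 1095), 2))  # 89.93 (three-year decay period)
```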

Ranking Source Systems According to Trustworthiness

Before defining trust settings, it is necessary to analyze the data source systems and rank them in descending order of reliability. The goal is to define the relative (not absolute) level of reliability of data in these source systems. Ranking is by attribute. For each attribute, the ranking of source systems might differ. Levels need not be exclusive; there can be more than one system rated at the same level.

Consider doing this process on a whiteboard. List the attributes and rank the source systems either per attribute or per group of related attributes (such as address data).

When ranking the reliability of source systems, consider the following issues:

  • What are the processes for updating the source data? For example, if the source system has three screens for updating all of the data, then data on the first, most frequently-used screen is likely to be updated more frequently than values on any subsequent screens.
  • What information goes into each source system? How is data validated? What is the process for updating data? Do the attributes that the user wants to bring into Informatica MDM Hub exist in the source system (if so, then more unwanted or incorrect data may be encountered)? How clean is the data in the source system and how clean can the data be made by removing junk data? It is important to understand what the source systems and the ETL process are doing to cleanse data in the source system.
  • Look at systems that are highly rated. Are there conditions that are defined as part of the data analysis that result from the most reliable source? Note those conditions as part of the analysis.
  • Focus on one base object at a time. Within the base object, focus on each trusted attribute. Rank the source systems for that attribute according to their relative trustworthiness.
  • Ask on-site business experts and/or data specialists to provide practical knowledge about the data sources so that trust rankings can be more effectively defined. Consider conducting one or more trust workshops with these experts to help clarify the trust rankings. Make sure to document any decisions, particularly trade-off decisions, and obtain sign-off approval from the participants.
  • Analyze data for invalid conditions. Conduct a frequency analysis to determine how often such conditions occur per source. The goal is to identify what is the more correct data, not just the more correctly formatted data.

Note: Be sure to distinguish between invalid data conditions that can be remedied through data cleansing and those that cannot. Consider focusing on trust and validation rules for conditions that cannot be remedied.

Determine which columns require trust settings and which do not. Define trust on a column if any of the following conditions apply:

  • There are two or more data sources for that column and they are not equally reliable (or equally unreliable).
  • The Last Update Date (LUD) must be taken into account in determining survivorship.
  • A data steward must be able to select or promote the surviving trust value in the Merge Manager / Data Manager.

Consider the performance impact of configuring trust columns.

  • The larger the number of configured trust columns and validation rules, the longer it takes to complete the load and merge processes.
  • The larger the number of trusted columns, the longer it takes to complete the update for the control table.

Identify logical trust groups in the data and assign every member of a group the same trust levels and validation downgrades.

  • For example, address fields should all belong to the same logical trust group so that all parts of the address are always taken from the same source record. This is because the granular components of an address are dependent on each other for their meaning. Nonsensical addresses could result if parts of the address were taken from one source and other parts of the address were taken from a different source.

Note: The logical trust group for address should include a validation status indicator if it is being used to determine a downgrade percentage in a validation rule.

  • Names (First Name, Middle Name and Last Name) usually do not belong to a logical trust group. This is because components of a full name are not dependent on each other for their meaning. A source system that provides, for example, good information on last names might provide only an initial letter for middle name, while another source system that provides lower-quality last names might provide full and valid middle names.

Informatica MDM Hub handles delete flags in two different ways. For details, see the “Hard Delete Detection” chapter of the Informatica MDM Configuration Guide:

  • Direct delete: flagging any cross-reference record for delete results in the base object record being flagged for delete as well.
  • Consensus delete: a base object record is flagged as fully inactive only if all of its cross-reference records are flagged as deleted. In this model, base object records that have some but not all cross-references flagged as deleted are flagged as partially deleted.

Trust Best Practices

Trust values are run-time calculations. Trust is planned in the Discover and Design phases and verified and fine-tuned in the Build phase.

Choosing the correct trust levels is a complex process. It is not enough to consider one system in isolation. Ensure that the trust settings for all of the source systems that contribute to a particular column combine to produce the desired behavior.

  1. During the Discover phase, talk with as many subject matter experts as possible about the data.
  2. Use the Data Quality Audit questionnaire in the Analysis phase. Question the system owners, including maintenance, data steward, and sales liaison representatives.
    1. For each table/file, determine the table/file name, the total number of records in the inspected set, and the total number of records in the full data population.
    2. For each column in the table, determine the column name, number of distinct values, number of NULL values, percentage of NULL values, text length (maximum, minimum, and average), types of non-alphanumeric characters found, number of values that indicate “unknown” or “undefined”, the top ten values (the ten values that occur most frequently), and any other notes regarding the visual inspection of the data.
  3. Use the Trust Matrix to record all relevant information that goes toward determining trust settings for each source system. The Trust Matrix asks a number of questions about the source data. Each question is designed to elicit information about the probable reliability of the source system. Here are some of the questions that should be considered:
    1. Does the source system validate this data value? How reliably does it do this?
    2. How important is this data value to the users of the source system, as compared with other data values? Users are likely to put the most effort into validating the data that is central to their work.
    3. How frequently is the source system updated?
    4. How frequently is a particular attribute likely to be updated?
  4. Rank the systems in relation to the source system of highest trust based on the attributes that will be used.
  5. For each column in each base object table, it is possible to enable or disable trust using the Trust tool in the Hub Console.
    1. If trust is disabled, Informatica MDM Hub will always use the most recently loaded value, regardless of which source system it comes from.
    2. For most columns that come from multiple source systems, enable trust because some systems are more reliable sources of that particular information than others. By enabling the trust for a column, the trust settings for each source system that could update the column are also specified.
  6. For a data steward to override settings from sources, enable trust and use a special source system called “Admin” that represents manual updates that the data steward makes within Informatica MDM Hub. This source system can contribute data to any column that has trust enabled in the Trust tool. Specify trust settings for the Informatica MDM Admin system. It may be helpful to set the trust settings for this system to high values to ensure that manual updates override any existing values from source systems.
  7. Trust and validation can cause situations in which values survive in the base object even though they are no longer in any of the cross-references. A validation downgrade can mean that a source does not update a cell even if it previously provided the cell value. The survived value in a base object might not have the same value as the corresponding cross-reference, and there might not be any cross-reference with the same value or trust as the base object. This situation causes problems in the following areas:
    1. Delete indicators – making sure the right value is in the base object
    2. Removing the influence of inactivated records from base object

Configuring Trust Levels

This section covers issues associated with configuring trust levels. Use the Trust tool in the Hub Console to configure trust levels. To learn more, see the Informatica MDM Configuration Guide.

Guidelines for Configuring Trust Settings

Consider the following guidelines for configuring trust settings:

  • If a column receives data from multiple sources, then enable trust for that column if required. You must then specify the relative trust level for each of the source systems that update the column.

  • If extensive cleansing is needed for data from one source but not another, reduce trust for the data source that requires more cleansing, after receiving the appropriate approvals from the business.
  • If setting a long decay period for data, there might be difficulty picking up small fluctuations in the trust level. Balance this consequence against the reasons for setting a long decay period.
  • Some groups of data form logical trust groups. For example, the components of an address form a logical trust group. All the elements of an address must have the same settings: trust codes, decay values, etc. Do not pick up pieces of an address from different sources. Also, if a postal service database returns an indicator that some part of the address data is invalid, then grouping the data means that all parts of the address will be downgraded the same amount.
  • With staging tables, if there are logical trust groups, enable the Allow Null Update flag for the members of that group. For example, suppose an Address Line 2 column contains the value Suite 2 and then a user corrects the record by removing the Suite 2 value. If Allow Null Update is not enabled for that column, then the Suite 2 value would remain in the cell, resulting in an inaccurate record.
  • Avoid assigning numbers that are too close together. Make sure that the trust levels are set far enough apart (a minimum difference of five; ten is better) to avoid rounding problems that might occur during trust calculations. In the course of calculating trust as it degrades, Informatica MDM Hub rounds these numbers and, if the numbers are too close together, rounding errors can obscure the differences.

Defining Trust Settings

When defining trust settings, it is essential to:

  • Determine the ranking of the attributes (or groups of attributes). See Ranking Source Systems According to Trustworthiness for more information.
  • Assign trust values based on these rankings. See Guidelines for Configuring Trust Settings for more information.
  • Assign decay values based on the analysis of the continuing reliability of the data. See Enabling Cell Update for more information.

The following example shows ranking source systems for customer name. (The ranking table image is not reproduced here.)

To define trust settings:

  1. Review the data source analysis and note the criteria that distinguished the highly rated systems. The criteria that result in the most reliable sources become the validation rules (see Using Validation). Using these criteria, make sure that data from sources that conform to those rules prevails over less reliable data sources.
  2. Quantify these rules by applying a numerical designation of trust to those source systems using a scale of 0 (lowest trust) to 100 (highest trust). Remember that these numbers have no meaning in themselves—they are meaningful only in the relative ranking of the source systems in relation to each other.
  3. Once the validation rules have been identified, define the decay type and rate.
    1. The most common decay type is SIRL (slow initial, rapid later). This decay type makes the most sense for most data.
    2. Another common scenario is when data that comes from the source system (in the form of updates) must always prevail over the existing data. In this case, consider disabling trust. This will guarantee that the newest incoming data from the source system will overwrite the data already in the MDM Hub.
  4. Define maximum / minimum trust settings and decay curves.
    1. To do so, identify the cross-over points where decay curves would intersect each other. Leave a buffer at the top and bottom of the ranges (avoid setting the maximum trust to 100 or the minimum trust to 0). Leave a buffer between source systems as well. This buffer makes it easier to tweak trust settings and to add more sources later. A suggested gap between settings is at least five, preferably ten.

Enabling Cell Update

By default, when Informatica MDM Hub receives an updated value for a column on a record from a source system, it recalculates the trust values for all trusted columns for that source from maximum trust, based on the last update date of the record. Because Informatica MDM Hub does not check whether the actual cell values have changed, an update in one column is regarded as confirming the values in the other columns, which restarts the decay curve for all of the record's values. To make Informatica MDM Hub check whether the actual column value has changed before updating the column and recalculating its trust level from maximum trust, enable cell update using the Schema Manager in the Hub Console.

Enable cell update on the staging table if some parts of the record coming from source systems are regularly updated while other parts are not. Generally, users rarely look at the parts that are not regularly updated. Enabling cell update lets those parts of the record continue to decay, while the updated parts have their trust values reset appropriately.

For example, suppose a source system has three screens for updating all the data. Anything that is not on the first, most frequently-used screen is probably updated much less frequently. In this case, enabling cell update allows the trust value for these infrequently updated cells to continue to decay.

Using Validation

This section describes how to use validation rules to determine the validity of an individual piece of data coming from a given data source. Use the Schema Manager in the Hub Console to configure validation rules. To learn more, see the Informatica MDM Configuration Guide.

About Validation Rules

A validation rule tells the MDM Hub the condition under which a data value is not valid. If data meets the criterion specified by the validation rule, then the trust value for that data is downgraded by the percentage specified in the validation rule. If the Reserve Minimum Trust flag is set for the column, then the trust score cannot be downgraded below the column’s minimum trust setting.

How Validation Works

When validation rules are combined with trust settings, cells that meet the condition defined in a validation rule have their trust scores downgraded by the percentage specified for that rule, according to the following algorithm.

Final trust = Trust - (Trust * Validation_Downgrade / 100)

For example, with a validation downgrade percentage of 50%, and a trust level calculated at 60:

Final Trust Score = 60 - (60 * 50 / 100)


Final Trust Score = 60 - 30 = 30

Validation rules are evaluated in sequence, and the last validation rule that is met provides the validation downgrade that is applied. The order of the validation rules is therefore important. For example, the following two validation rule lists have different results for the same input data.

Sequence 1

  1. 'Downgrade trust on First_Name by 50% if Length < 3'
  2. 'Downgrade trust on First_Name by 75% if Delete_ind=Y'

Sequence 2

  1. 'Downgrade trust on First_Name by 75% if Delete_ind=Y'
  2. 'Downgrade trust on First_Name by 50% if Length < 3'

For a given record that is flagged as deleted and where the value in the First_Name column is 'MK', the final trust score for each of the sequences given above is calculated as follows:

  • Sequence 1: Final Trust Score = (Trust - (Trust * 75 / 100))
    • If Trust was calculated as 60, then for Sequence 1, Final Trust = (60 - 45) = 15.
  • Sequence 2: Final Trust Score = (Trust - (Trust * 50 / 100))
    • If Trust was calculated as 60, then for Sequence 2, Final Trust = (60 - 30) = 30.

If it is more important to downgrade the trust score for deleted records than for records with short first names, then Sequence 1 is the better approach.
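The two sequences can be worked through in a short sketch (illustrative only; in practice the MDM Hub evaluates rule conditions as SQL in the database). The last matching rule supplies the downgrade:

```python
def downgraded_trust(trust, rules, record):
    """Apply validation rules in sequence; the last rule whose
    condition matches supplies the downgrade percentage."""
    downgrade = 0
    for condition, pct in rules:
        if condition(record):
            downgrade = pct
    return trust - trust * downgrade / 100

record = {"First_Name": "MK", "Delete_ind": "Y"}
seq1 = [(lambda r: len(r["First_Name"]) < 3, 50),   # rule 1
        (lambda r: r["Delete_ind"] == "Y", 75)]     # rule 2
seq2 = list(reversed(seq1))

print(downgraded_trust(60, seq1, record))  # 15.0
print(downgraded_trust(60, seq2, record))  # 30.0
```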

Ordering and Grouping Validation Rules

The order of the validation rules is critical. Order validation rules from those with the lowest impact (the smallest downgrade) to those with the highest impact (the largest downgrade). In many cases, downgrades are mixed and matched across rules, so the goal is to determine how to order them by level of severity.

Consider the following set of example validation rules:

  • Rule 1: Downgrade FName by 20%, downgrade ID by 60% WHEN fieldA = 'BAD'
  • Rule 2: Downgrade FName by 40%, downgrade ID by 40%, downgrade FLAG_A by 80% WHEN FLAG_A = 'N'
  • Rule 3: Downgrade FName by 10%, downgrade ID by 70% WHEN FLAG_B = 'N'

In this set of validation rules, rule 1 downgrades two columns, rule 2 downgrades three columns, and rule 3 downgrades two columns. If all three rules are satisfied, then the final outcome of the downgrade is a combination across rules, such as:

"downgrade FName by 10%, downgrade ID by 70%, and downgrade FLAG_A by 80%"

The downgrade process sequentially applies each downgrade rule whose condition is met and stores the downgraded results in a temporary table. In this example, the values inserted first are for Rule 1, which covers only the FName and ID columns. Rule 2 then overwrites those values for this rowid_object with values for FName, ID, and FLAG_A. Rule 3 then overwrites the same record with values for only the FName and ID columns. This processing is why the final downgrade values span multiple rules.

If all the downgrade rules are met, then only the values from one downgrade rule per column (not always the same one) will be applied. Therefore, the downgrade values are not cumulative.

Group and order the downgrade rules by defining validation rules that affect the same columns together. As a result, you might end up defining multiple rules with the same WHERE clause, which increases the number of validation rules. The previous example would be broken down as follows:

  • Rule 1: Downgrade FName by 10% WHEN FLAG_B = 'N'
  • Rule 2: Downgrade FName by 20% WHEN fieldA = 'BAD'
  • Rule 3: Downgrade FName by 40% WHEN FLAG_A = 'N'
  • Rule 4: Downgrade ID by 40%, downgrade FLAG_A by 80% WHEN FLAG_A = 'N'
  • Rule 5: Downgrade ID by 60% WHEN fieldA = 'BAD'
  • Rule 6: Downgrade ID by 70% WHEN FLAG_B = 'N'

Compared to the previous example, if all rules were met, this grouping gives a final result of "downgrade FName by 40%, downgrade ID by 70%, and downgrade FLAG_A by 80%".
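The regrouped rules can be verified with a sketch (illustrative; it assumes, as described above, that each matching rule overwrites the stored downgrade for the columns it names, so the last matching rule per column wins):

```python
def column_downgrades(rules, record):
    """For each matching rule, overwrite the stored downgrade for the
    columns it names; downgrades are not cumulative across rules."""
    result = {}
    for condition, downgrades in rules:
        if condition(record):
            result.update(downgrades)
    return result

# Regrouped rules 1-6 from above, ordered from lowest to highest impact.
rules = [
    (lambda r: r["FLAG_B"] == "N", {"FName": 10}),
    (lambda r: r["fieldA"] == "BAD", {"FName": 20}),
    (lambda r: r["FLAG_A"] == "N", {"FName": 40}),
    (lambda r: r["FLAG_A"] == "N", {"ID": 40, "FLAG_A": 80}),
    (lambda r: r["fieldA"] == "BAD", {"ID": 60}),
    (lambda r: r["FLAG_B"] == "N", {"ID": 70}),
]
record = {"fieldA": "BAD", "FLAG_A": "N", "FLAG_B": "N"}
print(column_downgrades(rules, record))
# {'FName': 40, 'ID': 70, 'FLAG_A': 80}
```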

Best Practices for Validation Rules

This section describes best practices for validation rules.

Using Cross-Column Validation

Consider how the data arrives in terms of grouping. Do all columns come in together from staging tables and PUT calls? If not, then cross-column validation rules are not valid.

Using Complex Validation Rules

It is essential to have foreign keys when using complex validation rules.

Validation and Its Effect on Load and Merge Performance

Validation rules have an impact on the performance of Load and Merge jobs because they involve running more queries and maintaining more metadata. Therefore, use validation rules judiciously and only where needed. Consider the following issues:

  • Use validation rules for a column only when they are truly required.
  • Limit the number of validation rules per column.
  • If a Load job is slow, manually create indexes in the database on the staging table for columns used as criteria.
  • Joining to other tables involves significant overhead. If a join is necessary, join only low-volume tables. It is better to handle that data in the ETL process than in the validation process.

Using SQL in Validation Rules

Make sure that any SQL used in a validation rule is well formed and well-tuned. For example:

  • If a validation rule contains multiple conditions, enclose the rule in parentheses, especially if it contains OR conditions. The SQL fragment defined in a validation rule is appended to an existing SQL fragment in the MDM Hub, and badly formed queries can produce unexpected results and long-running queries.
  • Use the following syntax:

x IN (value1, value2, value3)

instead of the following syntax:

(x = value1 or x = value2 or x = value3)

as it is more efficient for the RDBMS to evaluate a set membership test than multiple OR conditions.


Using Trust and Validation Together

This section describes using trust levels and validation rules together.

Scenarios Involving Trust and Validation for a Column

This section describes the following scenarios:

  • Column with No Configured Trust Levels or Validation Rules
  • Column Configured With Validation Rules But No Trust Levels
  • Column Configured With Trust Levels But No Validation Rules
  • Column Configured With Trust Levels and One or More Validation Rules

Column with No Configured Trust Levels or Validation Rules

If a given column has no configured trust settings or validation rules, then the most recently loaded source value for the cell is always the winner, and the cell will be updated in the base object. In a merge, the value from the record that the MDM Hub deems to be the merge source will survive after the merge.

Column Configured With Validation Rules But No Trust Levels

If a given column is configured with one or more validation rules but no trust settings, then the following will occur:

  • If the validation rule is specified as 100% downgrade without the Reserve Minimum Trust option, then a cell that meets the validation rule condition (meaning that the data is invalid) will not survive in the base object. If no other source exists that can provide an update value for the cell in the base object, then the default value specified for the cell survives in the base object. If no default is specified, then the surviving value is NULL.
  • If the validation rule is specified as something other than a 100% downgrade, and/or if the rule has the Reserve Minimum Trust option, then the most recently updated source value for the cell is always the winner.

Column Configured With Trust Levels But No Validation Rules

If a given column is configured with trust settings but no validation rules, then the decayed trust score is calculated based on the last update date of the source record, and the trust settings for the column for that system. The winning cell is the one with the highest trust score after decay.

Column Configured With Trust Levels and One or More Validation Rules

If a given column is configured with trust settings and one or more validation rules, then the validation downgrade is applied for the most severe rule (defined by the validation rule sequence in the Hub Console) that fails validation, and then the trust score for that data is downgraded by that percentage. If the new trust score is below the minimum trust for the rule, then the minimum trust setting is the final trust score. Finally, the two cell trust scores are compared and the data in the cell with the highest trust score is chosen as the winning data that updates the cell.
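The combined calculation can be sketched as follows (a simplified illustration of the described behavior; the parameter names are hypothetical):

```python
def final_trust(trust, downgrade_pct, min_trust, reserve_minimum):
    """Apply the matched rule's downgrade percentage, then floor the
    result at the column's minimum trust when the Reserve Minimum
    Trust option is set."""
    score = trust - trust * downgrade_pct / 100
    if reserve_minimum:
        score = max(score, min_trust)
    return score

print(final_trust(60, 75, 25, reserve_minimum=True))   # 25
print(final_trust(60, 75, 25, reserve_minimum=False))  # 15.0
```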

What Happens When a Record Is Updated

When a record is updated, the cross-reference records for the data are always updated. The base object records are updated only by data that has a higher trust level than the existing data in the target base object. Whenever the Load procedure updates the base object, it also updates the control and history tables associated with the base object.

Note: Load allows a NULL value to come in only if the initial load base object has the NULL value or if allow_null_update is enabled.

The cross-reference will always get updated for the source system. The base object will get updated only if the trust score of the latest update for the cell is higher than the trust score of the base object cell.

Example Using Trust Levels and Validation Rules Together

This section provides an example of using trust levels and validation rules together for a column based on the Scenarios Involving Trust and Validation for a Column.

When merging record A into record B, if no trust or validation settings are configured, then all of the data from record A will be kept. This is not always desirable when there are numerous data sources of differing levels of trustworthiness providing potential values for the consolidated record. To achieve a goal of greater data reliability, trust and validation must be implemented.

Consider the following data.

Column          Record A    Record B    Final Output
First_Name      Mark        Mark        Mark
Middle_Name     L           Lawrence    L
Last_Name       Hoare       Hoa         Hoa
isRegistered    N           Y           N

Suppose trust and validation were enabled on all four columns and the following validation rules were created:

Rule Name: "Downgrade trust on short Middle Name"
Rule Type: Custom
Rule Columns: Middle_Name
Rule SQL: Where Length(S.Middle_Name) < 3
Downgrade Percentage: 80

Rule Name: "Downgrade trust on short Last Name"
Rule Type: Custom
Rule Columns: Last_Name
Rule SQL: Where Length(S.Last_Name) <= 3
Downgrade Percentage: 80

To keep this example simple, assume that the source for each record is the MDM Hub Admin System and that the Maximum Trust is set to 90 on all columns.

Consider the trust scores after these records are loaded/inserted for the Admin System.




For Record A, the trust score for Last_Name is 90 because the value “Hoare” does not result in a trust downgrade. For Record B, the trust score for Last_Name is 90 - (90 * 80/100) = 18 after the validation downgrade.


For Record B, the trust score for Middle_Name is 90 because the value "Lawrence" does not result in a trust downgrade. For Record A, the trust score for Middle_Name is 90 - (90 * 80/100) = 18 after the validation downgrade.

The following results from merging the two records with these settings:

First_Name: Mark

Middle_Name: Lawrence

Last_Name: Hoare

isRegistered: N

Notice that the prevailing value for Middle Name was selected from the record with the highest final trust score for that cell (Record B). The winning value for Last Name was selected from the record with the highest final trust score for that cell (Record A).

Because validation rules were not defined for the First_Name or isRegistered columns, the surviving values were picked from the most recently updated source record.
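The outcome of this example can be reproduced with a per-column survivorship sketch (illustrative only; the final trust scores are taken from the example above, and ties are broken in favor of Record A here to stand in for the Hub's recency-based selection):

```python
def merge(record_a, record_b):
    """For each column, the value with the higher final trust score
    survives; on a tie, Record A's value is kept (see note above)."""
    return {col: (a_val if a_trust >= record_b[col][1] else record_b[col][0])
            for col, (a_val, a_trust) in record_a.items()}

# (value, final trust score) per column, from the worked example.
record_a = {"First_Name": ("Mark", 90), "Middle_Name": ("L", 18),
            "Last_Name": ("Hoare", 90), "isRegistered": ("N", 90)}
record_b = {"First_Name": ("Mark", 90), "Middle_Name": ("Lawrence", 90),
            "Last_Name": ("Hoa", 18), "isRegistered": ("Y", 90)}

print(merge(record_a, record_b))
# {'First_Name': 'Mark', 'Middle_Name': 'Lawrence', 'Last_Name': 'Hoare', 'isRegistered': 'N'}
```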
