The Informatica 10.5 release is for all customers and prospects who want to take advantage of the latest PowerCenter, Data Engineering Integration, Data Engineering Quality, Data Engineering Streaming, Enterprise Data Catalog, and Enterprise Data Preparation capabilities. This update provides the latest ecosystem and connectivity support, security enhancements, cloud support, and performance enhancements while improving the user experience.
Enterprise Data Catalog and Enterprise Data Preparation are aligned within the Data Catalog product family.
Data Engineering Integration (DEI)
- Mapping audit: You can validate the consistency and validity of mapping jobs by creating audit rules and conditions. You can schedule rules to run before or after a mapping runs, from either the Developer tool or infacmd.
- File manager utility: You can administer the preprocessing and file watching capabilities for a cloud ecosystem. The utility eliminates the need for a complex setup or custom script. You can use the credentials and connections set up within Informatica Administrator for file management.
- CLAIRE recommendations: Enterprises that use Enterprise Data Catalog to tag columns as sensitive can share that information with developers. This information empowers developers to take appropriate masking actions to secure data in the Data Engineering pipeline.
- CI/CD: You can compare objects across services in the same domain and between services across domains.
- Debugging: The LogPacker utility can now aggregate logs from ephemeral cloud clusters and Spark job servers. A new API is also available to trigger and perform log collection programmatically.
- Dynamic flattening in mappings: You can flatten complex hierarchical data types in dynamic mappings. Dynamic flattening improves the reusability of mappings when files are processed with hierarchical data types.
- Multi-match lookups: Helps developers leverage the full functionality of feature-rich Informatica lookups in mappings that run on the Spark engine.
- Databricks warm pool: Users of the Databricks pushdown capabilities can leverage warm pool instances to shorten cluster startup time with ephemeral or standard clusters from Data Engineering Integration.
- File watcher: Added support for file watcher and file-preprocessing commands that allow file copy, read, list, rename, move, and watch operations.
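The mapping audit feature above is configured in the Developer tool or through infacmd; as a rough illustration of the concept only, the sketch below shows generic pre-run and post-run audit rules applied around a job. All names, the stats dictionary, and the rule shapes are hypothetical, not the product's API.

```python
# Illustrative sketch only: not Informatica's mapping-audit API. It shows the
# general idea of audit rules evaluated before and after a job runs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AuditRule:
    name: str
    check: Callable[[dict], bool]  # receives job stats, returns pass/fail

def run_with_audit(job: Callable[[], dict],
                   pre: list[AuditRule],
                   post: list[AuditRule]) -> dict:
    """Evaluate pre-run rules, run the job, then evaluate post-run rules."""
    stats = {"source_rows": 1000}  # hypothetical pre-run stats
    failures = [r.name for r in pre if not r.check(stats)]
    if failures:
        raise RuntimeError(f"pre-run audit failed: {failures}")
    stats = job()
    failures = [r.name for r in post if not r.check(stats)]
    if failures:
        raise RuntimeError(f"post-run audit failed: {failures}")
    return stats

# Example: a post-run rule that flags dropped rows.
rows_match = AuditRule("rows_match",
                       lambda s: s["source_rows"] == s["target_rows"])
result = run_with_audit(lambda: {"source_rows": 1000, "target_rows": 1000},
                        pre=[], post=[rows_match])
```

A failed rule stops the pipeline with a descriptive error, which mirrors the consistency-and-validity checking the feature describes.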
Data Engineering Streaming (DES)
Streaming Data Integration
- Support for high precision decimal numbers
- Support for logical data types in Avro data format
- Support for a periodic refresh of lookup cache in long-running streaming mappings
- Enhanced parsers for CSV, XML, JSON, and Avro data format for addressing complex use cases
- Support for offset header port in Kafka source
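The periodic lookup-cache refresh mentioned above can be pictured with a small sketch: a cache that rebuilds itself from the lookup source once a configured interval elapses, so a long-running streaming mapping picks up changed reference data. This is a generic illustration, not the Informatica engine; the class, parameter names, and injectable clock are all assumptions for the example.

```python
# Sketch of the periodic lookup-cache refresh idea (not Informatica's
# implementation): the cache is rebuilt once its refresh interval elapses.
import time
from typing import Callable

class RefreshingLookupCache:
    def __init__(self, loader: Callable[[], dict], interval_s: float,
                 clock: Callable[[], float] = time.monotonic):
        self._loader = loader      # rebuilds the cache from the lookup source
        self._interval = interval_s
        self._clock = clock        # injectable clock, handy for testing
        self._cache = loader()
        self._loaded_at = clock()

    def get(self, key):
        if self._clock() - self._loaded_at >= self._interval:
            self._cache = self._loader()  # periodic refresh
            self._loaded_at = self._clock()
        return self._cache.get(key)

# Usage with a fake clock to show a refresh picking up new lookup data.
now = [0.0]
versions = iter([{"id1": "old"}, {"id1": "new"}])
cache = RefreshingLookupCache(lambda: next(versions),
                              interval_s=60, clock=lambda: now[0])
first = cache.get("id1")   # cached value
now[0] = 61.0              # simulate 61 seconds passing
second = cache.get("id1")  # refreshed value
```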
PowerCenter
- Parameterized connect string: Provides the ability to parameterize the connect string attribute in relational connections for the Oracle database.
- Oracle Multitenant CDB/PDB support: Applies to the domain and the PowerCenter Repository Service.
- Integration of system trace and stack trace: Improved debugging with system trace integration and stack trace collection on the Linux platform.
- TLS-enabled mail server support: Send secure emails through SMTP servers that use the TLS protocol.
- GENERATE_UUID(): Supports the GENERATE_UUID() function for pushdown optimization with ODBC for Google BigQuery (GBQ). Use it for use cases such as autogenerating string values as surrogate keys in tables.
- Error logging: Improved error logging for the PowerCenter Integration Service to enhance supportability.
- Diffie-Hellman ciphers: Supports Diffie-Hellman ciphers in PowerCenter services to mitigate the forward secrecy issue.
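The TLS-enabled mail server item above rests on the standard STARTTLS pattern. The sketch below shows that pattern in Python; the host, addresses, and subject line are placeholders, and this is not Informatica's mail-server configuration.

```python
# Minimal sketch of sending mail over a TLS-enabled SMTP server.
# Host and addresses are placeholders, not product configuration.
import smtplib
import ssl
from email.message import EmailMessage

def build_alert(sender: str, recipient: str, body: str) -> EmailMessage:
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Workflow notification"
    msg.set_content(body)
    return msg

def send_secure(msg: EmailMessage, host: str, port: int = 587) -> None:
    context = ssl.create_default_context()
    with smtplib.SMTP(host, port) as server:
        server.starttls(context=context)  # upgrade the connection to TLS
        server.send_message(msg)

msg = build_alert("ops@example.com", "admin@example.com", "Session completed.")
# send_secure(msg, "smtp.example.com")  # requires a reachable TLS-enabled server
```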
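GENERATE_UUID() in BigQuery returns a random UUID as a string. As a rough analogue of the surrogate-key use case the release note describes, the Python sketch below attaches a generated string key to each row; the function and column names are illustrative, not from the product.

```python
# Python analogue of the surrogate-key pattern behind GENERATE_UUID():
# attach an autogenerated string key to each row. Names are illustrative.
import uuid

def with_surrogate_keys(rows: list[dict]) -> list[dict]:
    """Prepend an autogenerated string surrogate key to each row."""
    return [{"sk": str(uuid.uuid4()), **row} for row in rows]

keyed = with_surrogate_keys([{"name": "a"}, {"name": "b"}])
```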
Enterprise Data Catalog (EDC)
- Profiling pushdown to Databricks: Support for pushdown to Databricks cluster for column profiling and data domain discovery.
- Enhancements to similarity discovery such as grouping resources, reducing false positives, and computing similarity on enabled features.
Re-architecture of Enterprise Data Catalog
- Deprecated support for installation on internal and external Hadoop clusters.
- Support to back up metadata store, search database, and other stores either separately or in parallel.
User Experience enhancements
- Search result page: Enhanced with the following features to improve the user experience of searching for and identifying assets of interest:
- Enhanced search bar with customizable search pre-filters.
- Simplified search result filter pane, search result asset content layout, and pagination.
- Additional information pane to show important details about the selected asset.
- Asset notification: Improvements include filtering and export options.
- Data Asset Analytics enhancements
- Data Flow Analytics (Technical Preview)
- Advanced Scanners integration:
- Install Advanced Scanners, which are bundled with the Enterprise Data Catalog installer binary files.
- Implemented native models for the following Advanced Scanners:
- Code: Oracle, SQL Server, Teradata, IBM DB2, Netezza, and Sybase.
- BI: SAS, Microsoft SSAS, and SSRS.
- Legacy: COBOL and JCL.
- ETL: Oracle Data Integrator, Talend DI, IBM DataStage, and Microsoft SSIS.
- Support for connection-less configuration for the dependent systems.
- Added support for S3-compatible file systems that are Scality RING certified.
- New Snowflake scanner with advanced view SQL parsing and cross-resource connection assignment capabilities.
- An SAP S/4HANA scanner that works for both SAP ECC and SAP S/4HANA is now available with support for data profiling.