Following are the new features in Enterprise Data Catalog (EDC) 10.5:
EDC Architecture Change in 10.5: Re-architecture of EDC
- Deprecated support for installation on internal and external Hadoop clusters.
- Replaced HBase with MongoDB as the metadata store.
- Nomad by HashiCorp was introduced as the orchestration framework.
- Support to back up metadata store, search database, and other stores separately or in parallel.
- Effective in version 10.5, EDC uses mTLS in place of Kerberos for improved security.
Advanced Scanners integration
- Install Advanced Scanners, which are bundled with the EDC installer binary files.
- Implemented native models for the following Advanced Scanners:
- Code: Oracle, SQL Server, Teradata, IBM DB2, Netezza, and Sybase.
- BI: SAS, Microsoft SSAS, and SSRS.
- Legacy: Cobol and JCL.
- ETL: Oracle Data Integrator, Talend DI, IBM DataStage, and Microsoft SSIS.
- Support for connection-less configuration for the dependent systems.
- Support for reference resources.
- Support for connection assignments with other resource endpoints.
- Axon scanner enhancement: Ability to filter lifecycle statuses for each object type.
- Added support for S3 compatible filesystem that is Scality Ring certified.
- New Snowflake scanner with advanced view SQL parsing and cross resource connection assignment capabilities.
- New Advanced scanner for Oracle Data Integrator and Talend DI.
- SAP S/4HANA scanner that works for both SAP ECC and SAP S/4HANA is now available with support for data profiling.
- Catalog export utility: The utility is available in the following directory where you extracted the installer binary files:
<Extracted installer binary file location>/properties/utils/upgrade/EDC/export.jar
- Tableau Resource Enrichments Migration Utility: From the Akamai Download Manager, download and extract the Informatica_1050_TableauEnrichmentMigrationUtility.zip file.
- Primary key-Foreign key Enrichments Migration Utility: From the Akamai Download Manager, download and extract the Informatica_1050_PkFkEnrichmentMigrationUtility.zip file on the domain machine.
- GenerateCustomSslUtility: utility bundled with the installer to perform the following tasks:
- Generate the required custom SSL certificates that can sign other certificates.
- Sign the certificates using a CA bundled within the utility.
- Copy the generated certificates to the required folders.
location: <Location of installer files>/properties/utils/CustomSslCertsUtility/
The enhancements in EDC 10.5 are as follows:
User Experience Enhancements
Search result page: Enhanced with the following features to improve user experience in searching and identifying assets of interest:
- Enhanced search bar with customizable search pre-filters.
- Simplified search result filter pane, search result asset content layout, and pagination.
- Additional information pane to show important details about the selected asset.
Asset notification: Improvements include filtering and export options.
Clone a resource configuration: Accelerates the creation of resources with similar types and settings.
New Catalog Administrator contextual walkthroughs: Accelerate the onboarding of Catalog Administrators:
- Introduction to Home page
- Create a Resource
- Create a Custom Attribute
- Create a Data Domain
- Overview of Security and Permissions Management
New EDC contextual walkthroughs: Provides an overview of the following features:
- Application Configuration
- Business Term Overview
- Data Domain Overview
- Profiling pushdown to Databricks: Support for pushdown to Databricks cluster for column profiling and data domain discovery.
- Enhancements to similarity discovery include grouping resources, reducing false positives, and computing similarity on enabled features.
- For more information, see the "EDC Concepts" chapter in the Informatica 10.5 Enterprise Catalog Administrator Guide.
Data Asset Analytics enhancements
Expose and document the Data Asset Analytics views that allow users to connect to report-type datasets through any business intelligence tool.
Data Asset Analytics repository support for the following databases:
Microsoft SQL Server Named Instance.
Oracle RAC with SCAN type connection.
Enhancements to improve user experience.
The following Data Flow Analytics capabilities are available as a Technical Preview:
- Data Flow Analytics for data mapping insight and discoveries with AI/ML to accelerate data modernization, improve data mapping efficiency, and reduce operational cost.
- Data Flow Analytics for PowerCenter mappings automates the following discoveries:
Similar mapping groups and the representative mapping within each group.
Reusable transformation candidates
User-defined functions candidates
Technical preview functionality is supported for evaluation purposes but is unwarranted and is not production-ready. Informatica recommends that you use it in non-production environments only. Informatica intends to include the preview functionality in an upcoming release for production use but might choose not to under changing market or technical circumstances. For more information, contact Informatica Global Customer Support.
For more information, you can refer to the What’s New and Changed document available for 10.5.
Product Availability Matrix
Informatica's products rely upon and interact with an extensive range of products supplied by third-party vendors. Examples include database systems, ERP applications, web and application servers, browsers, and operating systems. Support for significant third-party product releases is determined and published in Informatica's Product Availability Matrix (PAM).
The PAM states which third-party product releases are supported in combination with a specified version of an Informatica product.
To obtain the installers download link, contact the Informatica Shipping Team.
The following installers are required for EDC 10.5:
Linux 64-bit Server Installer
Extended Scanner Binaries
Catalog Agent Installer for Windows
Tableau Resource Enrichment Migration Utility
Primary key-Foreign key Enrichments Migration Utility
SAP Scanner Binaries
You can download the installers based on your operating system type and version.
Note: It is recommended to run the Informatica server on machines separate from the Informatica Cluster Service nodes.
As per the Informatica 10.5 PAM, the Informatica Cluster Service is supported on the following operating systems:
RHEL 6.7 to RHEL 8.3
SUSE 11 SP4 to SUSE 12 SP2
CentOS 6.5 to CentOS 7.0
It is mandatory to rename the binary file as follows:
Rename ScannerBinaries_[OStype].zip to ScannerBinaries.zip. For example, rename ScannerBinaries_RHEL.zip to ScannerBinaries.zip
Ensure that the renamed binary is copied to the <Installer>/source/ directory before you start the upgrade from 10.4.0 or 10.4.1.
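As a sketch, the rename-and-copy step above looks like the following; the working directory and the installer path are placeholders for your actual extracted installer location:

```shell
# Sketch with placeholder paths: rename the OS-specific scanner archive to the
# generic name the installer expects, then copy it into <Installer>/source/.
WORKDIR=$(mktemp -d)
cd "$WORKDIR"
touch ScannerBinaries_RHEL.zip          # stand-in for the downloaded archive
mkdir -p installer/source               # stand-in for <Installer>/source/
mv ScannerBinaries_RHEL.zip ScannerBinaries.zip
cp ScannerBinaries.zip installer/source/
```

The installer looks for the generic file name ScannerBinaries.zip, which is why the OS-specific suffix must be dropped before the upgrade starts.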
You can directly upgrade to EDC 10.5 from Informatica 10.4.0 and 10.4.1.
If you are on a version earlier than 10.4.0, first upgrade to 10.4.0 or 10.4.1 before upgrading to 10.5.
You can upgrade EDC deployed on both existing and embedded clusters to version 10.5.
An embedded cluster refers to the Hadoop cluster bundled with the Informatica EDC installer. An existing cluster refers to any EDC-supported version of a Hadoop cluster that you have set up in your organization.
The following are the upgrade path steps:
- If you are currently on EDC version 10.4.0 or 10.4.1, follow the steps listed here to upgrade EDC.
- If you installed Data Engineering Integration 10.1 or later without EDC, first upgrade to Data Engineering Integration 10.4.1. Then, install EDC 10.4.1 if required, following the steps listed in the Upgrade Data Engineering Integration and Install EDC section to install EDC on the upgraded domain. Then you can upgrade to 10.5.
- If you installed Data Engineering Integration 10.1 or later with EDC on an external cluster, follow the steps in the EDC Upgrade Guide to upgrade EDC to 10.4.0 or 10.4.1 on an existing cluster. Then you can upgrade to 10.5.
- If you installed Data Engineering Integration 10.1 or later with EDC on an embedded cluster, follow the steps listed in the EDC Upgrade Guide to upgrade to EDC 10.4.0 or 10.4.1 on an existing cluster. Then you can upgrade to 10.5.
If the product version that is currently installed cannot be upgraded to Informatica 10.4.0 or 10.4.1, you must first upgrade to a supported version.
Note: Customers who have the DPM service must upgrade to 10.4.0.
Customers who have the DPM service must have an internal cluster in 10.4.x before upgrading to 10.5.
Refer to the Data Privacy Management 10.5 Upgrade Guide for more details.
For customers who have only EDC, we recommend upgrading to 10.4.1.3, as it covers most of the known bugs.
Note: Click here for Support EOL statements
The upgrade checklist summarizes the tasks that you must perform to complete an upgrade.
We can perform either an in-place (inline) upgrade or a parallel upgrade of the domain:
In-place upgrade: Install 10.5 on top of 10.4.x. Shut down 10.4.x and upgrade.
Parallel upgrade: Clone 10.4.x to a new machine and set up a new 10.5 installation, so that 10.4.x and 10.5 run in parallel.
Note: Cluster hardware cannot be shared between ICS and IHS. This means you cannot run 10.4.x and 10.5 concurrently on the same cluster servers.
You can upgrade from EDC deployed on both external and internal clusters to version 10.5.
The following are the scenarios:
- The external cluster is shared with other products (both Informatica and third-party). In this case, we expect the customer to provide clean cluster nodes: procure new hardware, or decommission some cluster nodes from the external cluster and repurpose the hardware.
- The external cluster is dedicated to Catalog Service, and the steps are documented. When you upgrade from an external cluster, you can plan to deploy EDC on one, three, or six nodes in a cluster.
- Internal cluster: We own all the cluster nodes, and the steps are documented.
Before upgrading the domain and server files, complete the following pre-upgrade tasks:
- Log in to the machine with the same user account that you used to install the previous version.
- Review the Operating System specific requirements. Review the prerequisites and environment variable configuration.
Unset the following environment variables before you start the Upgrade: INFA_HOME, INFA_DOMAINS_FILE, DISPLAY, JRE_HOME, INFA_TRUSTSTORE, INFA_TRUSTSTORE_PASSWORD
Verify that LD_LIBRARY_PATH does not contain earlier versions of Informatica.
Verify that the PATH environment variables do not contain earlier versions of Informatica.
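For a bash shell, the unset and verification steps above can be sketched as:

```shell
# Unset Informatica-related environment variables before starting the upgrade,
# then confirm that PATH and LD_LIBRARY_PATH carry no earlier Informatica version.
unset INFA_HOME INFA_DOMAINS_FILE DISPLAY JRE_HOME INFA_TRUSTSTORE INFA_TRUSTSTORE_PASSWORD

# grep prints any matching path entries; no output means the variable is clean.
echo "$PATH" | grep -i informatica || echo "PATH clean"
echo "${LD_LIBRARY_PATH:-}" | grep -i informatica || echo "LD_LIBRARY_PATH clean"
```

Run these in the same shell session from which you launch the installer, since unset only affects the current process and its children.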
- Ensure an additional 100 GB of free disk space on the machine where the Informatica domain runs.
- Install the following applications and packages on all nodes before you upgrade EDC: Bash shell, libtirpc-devel, rsync, libcurl, xz-libs.
- A Java Development Kit (JDK), minimum version 1.8, must be installed on all cluster nodes.
- Verify that ntpd is synchronized between the Informatica domain node and the cluster nodes.
- Verify that you have the new license key for Informatica Cluster Service.
- Copy the 10.5 installer binary files from the Akamai download location mentioned in the fulfillment email and extract the files to a directory on the machine where you plan to upgrade EDC.
- Copy the new scanner binaries from the Akamai Download Manager to your installation/source directory.
- Clear the configured values for the INFA_TRUSTSTORE and INFA_TRUSTSTORE_PASSWORD environment variables if the domain is enabled for Secure Sockets Layer (SSL).
Note: In a multi-node domain, upgrade a gateway node before you upgrade other nodes.
- Verify that the firewall is disabled before starting ICS. Check the firewall status in all the cluster machines using the command service firewalld status.
- Make sure, hostname -f returns a FQDN on all cluster nodes.
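The firewall and FQDN checks above can be combined into a quick per-node sketch; treat it as illustrative rather than the documented procedure:

```shell
# Per-node pre-check sketch: firewall should be stopped and
# `hostname -f` should return a fully qualified domain name.
service firewalld status 2>/dev/null || true   # expect "not running" / inactive

FQDN=$(hostname -f 2>/dev/null || hostname)
case "$FQDN" in
  *.*) echo "OK: $FQDN is fully qualified" ;;
  *)   echo "WARN: $FQDN has no domain part" ;;
esac
```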
- For customers with a large MRS, 8 GB of heap memory is needed for the MRS upgrade. Explicitly increase the heap to 8 GB before triggering the upgrade. You can increase the MRS heap size in the Administrator console under MRS > Properties > Advanced Properties > Maximum Heap Size.
If the backup contains a large amount of profiling data (similarity data), the infacmd process needs 4 GB of heap. You can set this through the ICMD_JAVA_OPTS environment variable, which applies to the infacmd command line program.
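A minimal sketch of setting the heap through ICMD_JAVA_OPTS, assuming a bash shell:

```shell
# Give infacmd a 4 GB maximum heap for similarity-heavy backups.
# ICMD_JAVA_OPTS is read by the infacmd command line program when it starts its JVM.
export ICMD_JAVA_OPTS="-Xms64m -Xmx4096m"
echo "$ICMD_JAVA_OPTS"
```

Export the variable in the same session in which you run infacmd, or add the line to infacmd.sh as described later in this article.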
- Verify that you have taken the following backups before you plan to upgrade.
- Domain Backup
- MRS backup
- Back Up Catalog Data Using the Export Utility
- LDM backup using infacmd ldm BackupContents command
- Back up the domain truststore and keystore files
- Back up the default keystore files
- Back up all the keystores and truststores used by different services
- Back up the sitekey, domains.infa, server.xml, and nodemeta.xml files
- As a best practice, take a complete backup of the INFA_HOME binaries in case of an in-place upgrade.
- Take a backup of the enrichments for the Tableau resource
- Back up the DAA database schema
- Back up Primary key-Foreign key enrichments using the Migration Utility
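As an illustrative sketch only (not the documented procedure), the configuration-file backups above could be staged into a dated directory; all paths are placeholders, and using find instead of hard-coded locations is an assumption, since file locations vary by install:

```shell
# Sketch: stage key domain configuration files into one dated backup directory.
INFA_HOME=${INFA_HOME:-/opt/Informatica}        # placeholder install location
BACKUP_DIR=/tmp/infa_cfg_backup_$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"

for f in domains.infa nodemeta.xml server.xml; do
  # Locate the first matching file under the install tree, if any.
  src=$(find "$INFA_HOME" -name "$f" 2>/dev/null | head -n 1)
  if [ -n "$src" ]; then
    cp "$src" "$BACKUP_DIR/"
  fi
done
```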
Here are some of the additional checks to be performed:
- In a multi-node domain, make sure the binaries are copied to all nodes, and perform the upgrade on the master gateway node before you upgrade other nodes.
- Make sure to take the catalog service backup using the export utility and command line.
- Rename the binaries zip files and move to <InstallerExtractedDirectory>/source before starting the upgrade.
Rename ScannerBinaries_[OStype].zip to ScannerBinaries.zip. For example, rename ScannerBinaries_RHEL.zip to ScannerBinaries.zip
Note: You can contact the Informatica Shipping team for download links to the Informatica 10.5 server and client installers.
Plan for the Number of Nodes in the ICS Deployment
You can deploy Informatica Cluster Service on a single data node or three or six data nodes to automatically enable the service's high availability. A data node represents a node that runs the applications and services. If you plan to deploy Data Privacy Management with EDC, you can arrange a six-data-node deployment. In a six-node deployment, the nodes are split equally between EDC and Data Privacy Management.
You can configure a maximum of three service instances of Nomad, Apache Solr, and MongoDB to install EDC. You cannot configure multiple instances of the same service on a node.
Note: You cannot configure more than one PostgreSQL database instance.
A processing node represents a node where profiling jobs or metadata scan jobs run. There are no restrictions on the number of processing nodes that you can configure in the deployment.
The Informatica Cluster Service uses the following services to run and manage EDC: Nomad, Apache Solr, MongoDB, ZooKeeper, and PostgreSQL.
Prepare to Configure Custom SSL Certificates
You can use the default SSL certificates included with the Informatica domain or use SSL certificates of your choice to secure Informatica Cluster Service. If you plan to use SSL certificates of your choice, referred to as custom SSL certificates, review the following scenarios:
Scenario 1. Custom SSL certificate that can sign other certificates.
During install or upgrade, you have a custom SSL certificate for the Informatica domain that you can use to sign other certificates. For scenario 1, you do not need to perform any manual steps.
Scenario 2. Custom SSL certificate that cannot sign other certificates, and you want to generate CA-signed certificates.
You have a custom SSL certificate for the Informatica domain during an install or upgrade, but you cannot use this certificate to sign other certificates. You want to generate CA-signed certificates for the cluster and clients.
Scenario 3. Custom SSL certificates cannot sign other certificates, and you want to use a different set of custom SSL certificates that can sign other certificates.
During an upgrade, you have a custom SSL certificate for the Informatica domain, but you cannot use this certificate to sign other certificates. You have a set of other custom SSL certificates that you want to use to sign other certificates.
Refer to the Upgrade Guide for more information.
Upgrading EDC deployed in an internal cluster involves the following steps:
Upgrading EDC deployed on an external cluster involves the following steps:
Preparing backups involves the following steps:
- Prepare the Domain. Back up the domain and verify database user account permissions.
- Prepare the Model repository, back up the Model repository.
- Back up the Catalog using the export utility: Refer to this article.
- Back up the Catalog using the LDM backupContents command.
- Note: You must back up the Catalog using both methods listed above to upgrade and restore data successfully, or to restore data if an upgrade fails.
- Because of the change from Hadoop to MongoDB/Nomad, we must create an LDM backup in the form of an export folder. The export folder is created using a backup utility that converts the existing 10.4.x Hadoop data into a format that can be imported into 10.5. We cannot take a 10.4.1 ldm backupContents backup and directly restore it into 10.5 using the restore command. We must use the migrateContents command, which takes the export folder as a command-line argument and migrates the catalog content from 10.4.x to 10.5.
- Note: Irrespective of the upgrade type (in-place, parallel, or a fresh 10.5 install that restores the catalog content from 10.4.x), it is mandatory to take the catalog backup using the export utility.
- Optional: Take the HDFS backup of the Service Cluster Name, which can be used in situations where the usual approach cannot be followed due to unexpected errors. For more information, refer to HOW TO: Perform an HDFS backup and restore for the Catalog service.
- If you are upgrading on a cluster enabled for Kerberos and SSL, back up the domain truststore files.
- If you are upgrading on an SSL-enabled cluster, take a backup of the default keystore files.
- If you are using the default SSL certificate to secure the Informatica domain, copy the default.keystore file to a directory that you can access after the upgrade.
- Back up all the keystores and truststores used by different services. Also, back up the sitekey, domains.infa, server.xml, and nodemeta.xml files.
- Additionally, it is recommended to take a complete backup of the INFA_HOME directory for fallback purposes.
- Take a backup of the enrichments for the Tableau resource. Refer to KB article: HOW TO: Use Tableau Enrichment restore Utility in EDC.
- Take a backup of the DAA database schema: a database export or database dump.
- Back Up Primary key-Foreign key Enrichments Using the Migration Utility. Refer to this document.
- If you have configured the repository for Advanced Scanners, perform a backup of the Advanced Scanners repository database.
Disabling Application Services
Disable the following application services:
- Catalog Service
- Informatica Cluster Service
- Content Management Service
- Data Integration Service
- Model Repository Service
To disable an application service, select the service in Informatica Administrator and click Actions > Disable Service.
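As a sketch, the same services can also be disabled from the command line with the infacmd isp DisableService command; the domain, user, password, and service names below are placeholders, and the commands are only echoed rather than executed:

```shell
# Sketch: disable each application service via infacmd isp DisableService.
# All names are placeholders for your environment; remove `echo` to actually run.
DOMAIN=Domain_EDC
ADMIN=Administrator
PASS='***'

for SVC in CatalogService ICS CMS DIS MRS; do
  echo ./infacmd.sh isp DisableService -dn "$DOMAIN" -un "$ADMIN" -pd "$PASS" -sn "$SVC"
done
```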
Delete the Contents in the Cluster
If you are reprovisioning the internal Hadoop cluster: after taking backups, shut down the Hadoop cluster, then clean up and repurpose the cluster nodes.
Disable IHS and perform a "Clean Cluster" operation using the infacmd ihs cleanCluster command.
Disable the PostgreSQL Service
Use the following command on the machine where Informatica Administrator runs to disable the PostgreSQL Service: service postgresql-<PostgreSQL version> stop.
For example, if you are using PostgreSQL version 9.6, use the following command: service postgresql-9.6 stop.
ICS installs PostgreSQL 12. It is mandatory to remove all old PostgreSQL instances from the cluster machine if they exist.
To show the status of PostgreSQL: systemctl status postgresql
To remove old versions of PostgreSQL: sudo yum remove postgre*
To remove the data of older PostgreSQL versions: sudo rm -rf /var/lib/pgsql/*
Shut Down the Informatica Domain
Shut down the domain. You must shut down the domain before you upgrade. To shut down the domain, stop the Informatica service process on each node in the domain.
Complete the Default SSL Configuration
If the Informatica domain is enabled for SSL using the default SSL certificates, you can configure the Informatica Cluster Service and Catalog Service to use the 10.5 keystore. To configure the services to use the 10.5 keystore, perform the following steps:
- Edit the process options for the services in Informatica Administrator.
- In the infatruststore.jks file, remove the existing value configured for the infa_dflt property, and provide the path to the 10.5 default.keystore file.
Pre validation checks
- Run the Informatica Upgrade Advisor: Informatica provides utilities to facilitate the Informatica services installation process, and you can run this utility before you upgrade Informatica services. The Informatica Upgrade Advisor validates the services and checks for obsolete services in the domain before an upgrade. It is packaged with the installer, and you can select to run it.
- Run the Pre-Installation (i10Pi) System Check Tool to verify whether the domain machine meets the system requirements for a fresh installation. It is part of the installer, and you can choose this option while running the installer for a fresh installation.
- Run the ICS pre-validation utility to verify whether the cluster machine meets the system requirements for ICS installation. The utilities are present under <Location of installer files>/properties/utils/prevalidation
To run the utility, edit the input.properties file with appropriate cluster node information.
Then run the following command:
java -jar InformaticaClusterValidationUtility.jar -in <Location of the properties file configured in the prerequisites>
Upgrading the domain
You can upgrade in console mode or silent mode to upgrade the domain on the same machine with the same domain configuration repository database. Refer to this document for more information.
Complete the following post-upgrade tasks after you upgrade the Informatica domain:
- Set the INFA_TRUSTSTORE and INFA_TRUSTSTORE_PASSWORD environment variable values if the domain is enabled for Secure Sockets Layer (SSL).
- Copy the third-party JAR or ZIP files that you had configured for resources such as Teradata, JDBC, and IBM Netezza from the location <Informatica installation directory>/services/CatalogService/ScannerBinaries to the same location on the machine that hosts the upgraded Informatica domain.
- Add the Informatica Domain License.
- Update all the application services with the new license.
- Enable the application services, except the Catalog Service. From Informatica Administrator, enable the following application services and upgrade them:
- Model Repository Service
- Data Integration Service
- Content Management Service
- Verify the Gateway User Prerequisites: The gateway user must be a non-root user with sudo access. You must enable a passwordless SSH connection between the Informatica domain and the gateway host for the gateway user.
- Update the sudoers file to configure sudo privileges to the ICS gateway user.
- Enable the Informatica Cluster Service if you upgraded from an internal cluster. If you upgraded from an external cluster, create and enable the Informatica Cluster Service. When you create the Informatica Cluster Service, you must use the license for the 10.5 version of the Informatica Cluster Service. Then associate the Informatica Cluster Service with the Catalog Service.
- Enable the Catalog Service. Enable the email service if you had configured it for the Catalog Service. Upgrade the Catalog Service. A warning appears in the Catalog Service once the upgrade succeeds.
Migrate the Backed-Up Catalog Content.
After the Catalog Service upgrade, use the infacmd.sh LDM migrateContents command to migrate the data that you backed up using the export utility. The migrateContents command makes the 10.4.x catalog content compatible with 10.5, which is why you run it after upgrading the Catalog. Refer to this document for more information.
Performance Parameters for Migrate Content
If you want to tune the performance parameters for migrating data or skipping failed resources, modify the parameters specified in the MigrationModuleConfigurations.properties file available at the following location: <INFA HOME>/services/CatalogService/Binaries.
The default memory configuration is sufficient for the catalog restore process with 20 million assets on a 32 GB domain machine, or with 50 million assets on a 64 GB domain machine. If the domain node is shared across multiple applications (DPM, EDP, EDC), the maximum heap memory of the export.jar process can be limited by updating the corresponding Migration Module Configuration property.
For Similarity Content Restore
It is recommended to set a minimum of 4 GB of heap in the infacmd.sh file: set the maximum Java heap memory allocation pool to 4096m and the initial Java heap memory allocation pool to 64m.
In infacmd.sh, update: ICMD_JAVA_OPTS="-Xms64m -Xmx4096m"
- Verify the Migrated Content: You can use the infacmd migrateContents -verify command as shown in the following sample to verify the migrated content: ./infacmd.sh LDM migrateContents -un Administrator -pd Administrator -dn Domain -sn CS -id /data/Installer1050/properties/utils/upgrade/EDC/export -verify.
- Remove the Custom Properties for the Catalog Service related to the Hadoop cluster.
Q1. What are the changes included in 10.5 that I should be aware of?
A. In 10.5, the embedded cluster components previously shipped with EDC (Hortonworks Data Platform) are replaced by a different technology stack. This set of components is known as the Informatica Cluster Services.
Q2. Why are we making the architecture change from Hadoop to Nomad?
A. With HDP planned to reach end of life at the end of 2021, there is an opportunity to rethink EDC's underlying components. With Hadoop getting less adoption in the market than a few years ago, it makes sense to choose newer technologies that will be better supported and maintained in the future.
Q3. What is the support model for the component replacing the Hortonworks Data Platform (HDP)?
A. EDC will provide full support for the deployment and maintenance of the component shipped with EDC 10.5. This includes:
- Nomad & mTLS
Q4. Will EDC 10.5 continue to support deployment on the external cluster?
A. No. All new customers will have to provide the necessary hardware and deploy on the embedded cluster. Customers who are running with an external cluster will be invited to migrate to the embedded cluster deployment model as they upgrade to 10.5. Communication will be sent to customers who use an external cluster to help plan 10.5 adoption before the end of 2020.
Q5. What will be the upgrade process from the version before 10.5 to 10.5?
A. For customers using the embedded cluster today, the upgrade involves the following steps: back up the content, clean up the cluster nodes (using provided scripts), upgrade to 10.5, deploy the new service components, restore the catalog content, and upgrade the content. For customers using the external cluster, cleanup of the cluster nodes is not necessary.
Q6. Should Informatica-related environment variables be unset before upgrading?
A. Yes, it is recommended to unset all the environment variables related to Informatica before proceeding with the upgrade.
Q7. If we skip configuring the Advanced Scanner repository during the 10.5 upgrade or installation, can we install the Advanced Scanner later?
A. Yes, we can install it later. Refer to this document for more information.
Q8. Do we need to install a PostgreSQL database on the cluster machine, and if yes, what version?
A. No, PostgreSQL is bundled with the Informatica installer.
Q9. Can we use the root user for installing the 10.5 EDC domain?
A. No, we need to use a non-root user for the EDC installation. If we use the root user, the PostgreSQL installation fails; this is a requirement from the PostgreSQL side.
Q10. Do we need to install MongoDB on the cluster machine?
A. No, MongoDB is bundled with the Informatica installer.
Q11. Is it mandatory to rename ScannerBinaries_[OStype].zip?
A. Yes, it is mandatory to rename the binary files as follows.
Rename ScannerBinaries_[OStype].zip to ScannerBinaries.zip. For example, rename ScannerBinaries_RHEL.zip to ScannerBinaries.zip.
Q12. How do I use Technical Preview features in EDC?
A. Contact the Informatica Shipping Team to obtain a Technical Preview license to use Technical Preview features in EDC.
Q13. Where can I see the Advanced Scanner option in the domain?
A. The Advanced Scanner is not a domain service. The Advanced Scanner URL is a separate web app that you can access at <domain host>:<MDX port>, where the port is provided during installation.
Q14. How do you proceed when, after installing the Advanced Scanner as part of the EDC installation, the Advanced Scanner UI throws a "No License available" error?
A. After accessing the URL, set the global variables in the Advanced Scanner UI to point to the EDC URL, username, and password, and then restart the Metadex server. Please refer to this document for more information.
Q15. Where can I find the Advanced Scanner binaries?
A. The Advanced Scanner binaries are present in the following location: <INFA_HOME>/services/CatalogService/AdvancedScannersApplication/app
Q16. How do you restart the Advanced Scanner?
A. To stop the Advanced Scanner: <INFA_HOME>/services/CatalogService/AdvancedScannersApplication/app/server.sh stop
To start the Advanced Scanner: <INFA_HOME>/services/CatalogService/AdvancedScannersApplication/app/server.sh &
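For convenience, the stop and start commands above can be wrapped in a small restart sketch; INFA_HOME is a placeholder, and the existence check is an assumption added to keep the sketch safe to run:

```shell
# Sketch: restart the Advanced Scanner via the bundled server.sh.
APP="${INFA_HOME:-/opt/Informatica}/services/CatalogService/AdvancedScannersApplication/app"

if [ -x "$APP/server.sh" ]; then
  "$APP/server.sh" stop      # stop the running Advanced Scanner
  "$APP/server.sh" &         # start it again in the background
else
  echo "server.sh not found under $APP"
fi
```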
Q17. Where can I find the logs for the services such as MongoDB, Apache Solr, ZooKeeper, Nomad, and PostgreSQL?
A. The log files are present in the following directory by default: /opt/Informatica/ics. If you configured a custom directory for the services, the log files are present in the /ics directory in the custom directory.
Q18. Will there be Kerberos in the Nomad cluster?
A. No Kerberos; security is handled through another mechanism, with mTLS authentication and encryption implemented.
Q19. What is the use of the ICS Status File?
A. The ICS status file is created at /etc/infastatus.conf and includes the domain name and ICS cluster service name. It is created on all cluster nodes and is used by ICS to ensure that another ICS does not share the cluster node.
Q20. Can we add/delete processing/data nodes?
A. Only processing nodes (Nomad client nodes) can be added or deleted. The number of data nodes cannot be changed, not even increased.
Q21. What if the user wants to add/delete the data node?
A. Workaround: Back up the data, clean the cluster, and create a new ICS with the new number of data nodes. For more FAQs, please refer to the following link.