Known Issues

The CCP 2.0.1 release has the following known issues:

Installing Spark 2 fails
Workaround: Before attempting to install Spark 2 with Ambari, install the MySQL Connector manually and ensure that Ambari can find it. Run the following commands on the node hosting the Ambari server:
yum -y install mysql-connector-java
ln -s /usr/share/java/mysql-connector-java.jar /var/lib/ambari-server/resources/mysql-connector-java.jar
Kerberos authentication against HDFS from Metron's Storm topologies can fail.

If this issue occurs, the Storm worker is unable to present a valid Kerberos ticket to authenticate against HDFS. This impacts the Enrichment and Batch Indexing topologies, each of which interacts with HDFS.

To mitigate this problem, before starting the Metron topologies in a secured cluster that uses Kerberos authentication, perform the following additional installation steps (a sketch of a suitable cron entry follows the list):

  1. Schedule a periodic job that obtains and caches a Kerberos ticket.
  2. Schedule the job on each node hosting a Storm Supervisor.
  3. Run the job as the metron user.
  4. Ensure that the job performs a kinit using the Metron keytab, which is typically located at /etc/security/keytabs/metron.headless.keytab.
  5. Schedule the job to run at least as frequently as the ticket lifetime so that a valid ticket is always cached and available to the topologies.
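As a minimal sketch, the following crontab entry for the metron user obtains a fresh ticket every six hours; the Kerberos principal (metron@EXAMPLE.COM) and the six-hour interval are assumptions that must be adjusted to your realm and ticket lifetime:

# Renew and cache a Kerberos ticket for the Metron topologies.
# Install in the metron user's crontab; metron@EXAMPLE.COM is a placeholder principal.
0 */6 * * * /usr/bin/kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM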

During CCP installation, some versions of Zeppelin might fail to install.
Workaround: If the Zeppelin notebooks are not installed, import the Apache Zeppelin Notebook manually, as shown in the sketch below.
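A minimal sketch of a manual import, assuming Zeppelin's notebook REST API is reachable on port 9995 and the notebook JSON has been saved locally as notebook.json (the host, port, and file name are placeholders; adjust them for your cluster):

# Import a notebook JSON file through the Zeppelin notebook REST API.
# zeppelin-host, port 9995, and notebook.json are illustrative placeholders.
curl -X POST http://zeppelin-host:9995/api/notebook/import -d @notebook.json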

The Kerberization process might lock Solr directories.
If this occurs, Solr alerts do not appear in the Alerts UI and the Solr logs contain a message similar to: is locked (lockType=hdfs). Throwing exception.
Workaround: Remove the write.lock file located at /solr/bro/core_node1/data-index/write.lock or, in Ambari, navigate to Solr > config > Advanced solr-hdfs and select the Delete write.lock files on HDFS checkbox. After you have deleted the write.lock file, restart Solr.
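Because the reported lock type is hdfs, the write.lock file resides on HDFS. A minimal sketch of removing it by hand, using the example path from this issue (your collection path may differ):

# Remove the stale Solr write lock from HDFS, then restart Solr.
# The path is the example from this issue; adjust it for your collection.
hdfs dfs -rm /solr/bro/core_node1/data-index/write.lock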

When running a large PCAP query, the REST API can die silently if the result set exceeds the memory available to the REST server.

On Kerberized clusters, Storm rebalances can fail to distribute Kerberos tickets correctly.
Workaround: Run storm upload-credentials against each running topology, as shown in the sketch below.
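As a sketch, assuming the default Metron topology names (these are assumptions; list the topologies actually running on your cluster with storm list):

# Push fresh Kerberos credentials to each running topology.
# The topology names below are typical Metron defaults, not guaranteed.
storm upload-credentials enrichment
storm upload-credentials batch_indexing
storm upload-credentials random_access_indexing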