Known Issues in Cloudera Search

This topic describes known issues and unsupported features for using Cloudera Search in this release of Cloudera Runtime.

Known Issues

Cloudera Bug ID:
CDPD-20577
Summary:
Splitshard of HDFS index checks local filesystem and fails
Description:
When performing a shard split on an index that is stored on HDFS, SplitShardCmd still evaluates free disk space on the local file system of the server where Solr is installed. This may cause the command to fail because it incorrectly concludes that there is not enough disk space to perform the shard split.
Workaround:
None
Cloudera Bug ID:
OPSAPS-58059
Summary:
Solr log rotation counts the number of retained log files daily instead of globally
Description:
With CDP 7.1.1, Search moved to Log4Jv2. This has affected Solr log rotation behavior in an unwanted way. With the default configuration, Solr log file names include a date and a running index, for example: solr-cmf-solr-SOLR_SERVER-solrserver-1.my.corporation.com.log.out.2020-08-31-9. The number of retained log files is configured in Cloudera Manager, however the configured number now applies for each day, instead of applying globally for all log files of the particular server.
Workaround:
Using Cloudera Manager, edit the Solr Server Logging Advanced Configuration Snippet (Safety Valve) property of your Solr service and add a new line containing: appender.DRFA.filePattern=${log.dir}/${log.file}.%i
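With that line in place, the relevant part of the generated Log4j2 configuration looks roughly like the following. The surrounding appender lines are illustrative; only the filePattern line is the actual workaround. The %i pattern makes the rotation index global rather than per-day, so the configured retention count applies across all log files of the server:

```properties
# Illustrative Log4j2 RollingFile appender fragment; only filePattern is changed
appender.DRFA.type = RollingFile
appender.DRFA.name = DRFA
appender.DRFA.fileName = ${log.dir}/${log.file}
appender.DRFA.filePattern = ${log.dir}/${log.file}.%i
```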
Cloudera Bug ID:
DOCS-5717
Summary:
Lucene index handling limitation
Description:
The Lucene index can only be upgraded by one major version. Solr 8 will not open an index that was created with Solr 6 or earlier.

Workaround:
There is no workaround; you must reindex the affected collections.
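Reindexing generally means recreating the collection on the new version and re-ingesting the source data. A minimal sketch using solrctl, where the collection name, shard count, and configuration name are placeholders:

```shell
# Placeholders: my_collection, my_config. Adjust to your environment.
solrctl collection --delete my_collection
solrctl collection --create my_collection -s 2 -c my_config
# Then re-index the source data, for example with your NRT ingest pipeline.
```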

Cloudera Bug ID:
CDH-82042
Summary:
Solr service with no added collections causes the upgrade process to fail
Description:
If there are no collections present in Solr, the upgrade fails during the bootstrap collections step of the solr-upgrade.sh script with the error message:
Failed to execute command Bootstrap Solr Collections on service Solr
Workaround:
If your Solr service has no collections, remove it from the cluster before you start the upgrade.
Cloudera Bug ID:
CDH-66345
Summary:
Solr SQL, Graph, and Stream Handlers are Disabled if Collection Uses Document-Level Security
Description:
The Solr SQL, Graph, and Stream handlers do not support document-level security, and are disabled if document-level security is enabled on the collection. If necessary, these handlers can be re-enabled by setting the following Java system properties, but document-level security is not enforced for these handlers:
  • SQL: solr.sentry.enableSqlQuery=true
  • Graph: solr.sentry.enableGraphQuery=true
  • Stream: solr.sentry.enableStreams=true
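A sketch of passing these as Java system properties in the Solr startup options; in practice you would set them through a Java configuration options field in Cloudera Manager, and the SOLR_OPTS variable name here is illustrative. Remember that document-level security is not enforced for the re-enabled handlers:

```shell
# Illustrative: append the overrides to the Solr JVM options.
# Document-level security is NOT enforced for these handlers once re-enabled.
SOLR_OPTS="$SOLR_OPTS -Dsolr.sentry.enableSqlQuery=true"
SOLR_OPTS="$SOLR_OPTS -Dsolr.sentry.enableGraphQuery=true"
SOLR_OPTS="$SOLR_OPTS -Dsolr.sentry.enableStreams=true"
echo "$SOLR_OPTS"
```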
Workaround:
None.
Cloudera Bug ID:
CDH-34050
Summary:
Collection Creation No Longer Supports Automatically Selecting A Configuration If Only One Exists
Description:

Before CDH 5.5.0, a collection could be created without specifying a configuration. If no -c value was specified, then:

  • If there was only one configuration, that configuration was chosen.

  • If the collection name matched a configuration name, that configuration was chosen.

Search now includes multiple built-in configurations. As a result, there is no longer a case in which only one configuration can be chosen by default.

Workaround:
Explicitly specify the collection configuration to use by passing -c <configName> to solrctl collection --create.
Cloudera Bug ID:
CDH-22190
Summary:
CrunchIndexerTool which includes Spark indexer requires specific input file format specifications
Description:

If the --input-file-format option is specified with CrunchIndexerTool, then its argument must be text, avro, or avroParquet, rather than a fully qualified class name.
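For example, where the class name in the rejected form is hypothetical:

```
# Accepted: one of the keywords text, avro, or avroParquet
--input-file-format avro

# Rejected: a fully qualified class name
--input-file-format org.example.MyAvroInputFormat
```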

Workaround:
None.
Cloudera Bug ID:
CDH-19923
Summary:
The quickstart.sh file does not validate ZooKeeper and the NameNode on some operating systems
Description:

The quickstart.sh script uses the timeout function to determine whether ZooKeeper and the NameNode are available. To ensure this check can be completed as intended, quickstart.sh first determines whether the operating system on which it is running supports timeout. If the script detects that the operating system does not support timeout, it continues without checking whether the NameNode and ZooKeeper are available. If your environment is configured properly or your operating system supports timeout, this issue does not apply.

Workaround:

This issue only occurs on some operating systems. If timeout is not available, quickstart.sh continues, and final validation is always performed by the MapReduce jobs and Solr commands that the script runs.

Cloudera Bug ID:
CDH-26856
Summary:
Field value class guessing and automatic schema field addition are not supported with either the MapReduceIndexerTool or the HBaseMapReduceIndexerTool
Description:
The MapReduceIndexerTool and the HBaseMapReduceIndexerTool can be used with a Managed Schema created via NRT indexing of documents or via the Solr Schema API. However, neither tool supports adding fields automatically to the schema during ingest.
Workaround:
Define the schema before running the MapReduceIndexerTool or HBaseMapReduceIndexerTool. In non-schemaless mode, define the schema using the schema.xml file. In schemaless mode, either define the schema using the Solr Schema API or index sample documents using NRT indexing before invoking the tools. In either case, Cloudera recommends that you verify the schema is what you expect, using the List Fields API command.
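In schemaless mode, fields can be defined up front through the Solr Schema API. A sketch using curl, where the host, collection name, and field definition are placeholders; the second request uses the List Fields API to verify the result:

```shell
# Placeholders: localhost:8983, my_collection, and the "title" field.
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/my_collection/schema' \
  --data-binary '{"add-field": {"name": "title", "type": "text_general", "stored": true}}'

# Verify the schema with the List Fields API
curl 'http://localhost:8983/solr/my_collection/schema/fields'
```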
Cloudera Bug ID:
CDH-19407
Summary:
The Browse and Spell Request Handlers are not enabled in schemaless mode
Description:
The Browse and Spell Request Handlers require certain fields to be present in the schema. Since those fields cannot be guaranteed to exist in a Schemaless setup, the Browse and Spell Request Handlers are not enabled by default.
Workaround:

If you require the Browse and Spell Request Handlers, add them to the solrconfig.xml configuration file. Generate a non-schemaless configuration to see the usual settings and modify the required fields to fit your schema.
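A /browse handler definition in solrconfig.xml looks roughly like the following; copy the complete definition from a non-schemaless sample configuration, since the defaults shown here are illustrative and must be adapted to fields that exist in your schema:

```xml
<!-- Illustrative; take the full definition from a non-schemaless sample solrconfig.xml -->
<requestHandler name="/browse" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="df">text</str>
  </lst>
</requestHandler>
```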

Cloudera Bug ID:
CDH-17978
Summary:
Enabling blockcache writing may result in unusable indexes
Description:

It is possible to create indexes with solr.hdfs.blockcache.write.enabled set to true. Such indexes may appear corrupt to readers, and attempting to read them may irrecoverably corrupt the indexes. Blockcache writing is disabled by default.

Workaround:

None.

Cloudera Bug ID:
CDH-58276
Summary:
Users with insufficient Solr permissions may receive a "Page Loading" message from the Solr Web Admin UI
Description:
Users who are not authorized to use the Solr Admin UI are not shown a page explaining that access is denied; instead, they receive a web page that never finishes loading.
Workaround:

None.

Cloudera Bug ID:
CDH-15441
Summary:
Using MapReduceIndexerTool or HBaseMapReduceIndexerTool multiple times may produce duplicate entries in a collection
Description:
Repeatedly running the MapReduceIndexerTool on the same set of input files can result in duplicate entries in the Solr collection. This occurs because the tool can only insert documents and cannot update or delete existing Solr documents. This issue does not apply to the HBaseMapReduceIndexerTool unless it is run with more than zero reducers.
Workaround:
To avoid this issue, use HBaseMapReduceIndexerTool with zero reducers. This must be done without Kerberos.
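A sketch of a zero-reducer invocation, where the jar name, indexer file, and ZooKeeper ensemble are placeholders; with --reducers 0 the tool writes directly to the live Solr servers, so re-runs update documents instead of duplicating them:

```shell
# Placeholders: jar name, indexer file, and ZooKeeper address.
hadoop jar hbase-indexer-mr-job.jar \
  --hbase-indexer-file morphline-hbase-mapper.xml \
  --reducers 0 \
  --zk-host zk01.example.com:2181/solr
```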
Cloudera Bug ID:
CDH-58694
Summary:
Deleting collections might fail if hosts are unavailable
Description:
It is possible to delete a collection while hosts that store parts of that collection are unavailable. After such a deletion, if the previously unavailable hosts are brought back online, the deleted collection may be restored.
Workaround:
Ensure all hosts are online before deleting collections.
Cloudera Bug ID:
CDPD-4139
Summary:
Collection state goes down after Solr SSL
Description:

If you enable TLS/SSL on a Solr instance with existing collections, the collections will break and become unavailable. Collections created after enabling TLS/SSL are not affected by this issue.

Workaround:
Recreate the collection after enabling TLS.
Cloudera Bug ID:
CDPD-13923
Summary:
Every Configset is Untrusted Without Kerberos
Description:
Solr 8 introduces the concept of ‘untrusted configset’, denoting configsets that were uploaded without authentication. Collections created with an untrusted configset will not initialize if <lib> directives are used in the configset.
Workaround:
Select one of the following options if you would like to use untrusted configsets with <lib> directives:
  • If the configset contains external libraries, but you do not want to use them, simply upload the configsets after deleting the <lib> directives.

  • If the configset contains external libraries, and you want to use them, choose one of the following options:

    • Secure your cluster before reuploading the configset.

    • Add the libraries to Solr’s classpath, then reupload the configset without the <lib> directives.
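Removing the <lib> directives before upload can be scripted. A minimal sketch using sed on a copy of the configset; the file layout and contents are illustrative, and the approach assumes each <lib> directive occupies its own line:

```shell
# Sketch: strip <lib> directives from a copy of the configset before reupload.
# Paths and file contents are illustrative.
mkdir -p myconfig/conf
cat > myconfig/conf/solrconfig.xml <<'EOF'
<config>
  <lib dir="/opt/custom-libs" regex=".*\.jar"/>
  <luceneMatchVersion>8.4.1</luceneMatchVersion>
</config>
EOF
# Delete every line containing a <lib> directive
sed -i '/<lib /d' myconfig/conf/solrconfig.xml
cat myconfig/conf/solrconfig.xml
```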

Unsupported Features

The following Solr features are currently not supported in Cloudera Data Platform:

Limitations

Default Solr core names cannot be changed
Although it is technically possible to assign user-defined Solr core names during core creation, this should be avoided in the context of Cloudera Search. Cloudera Manager expects core names in the default "collection_shardX_replicaY" format. Altering core names prevents Cloudera Manager from fetching Solr metrics for the given core, which may eventually corrupt data collection for co-located cores, or even shard- and server-level charts.