Known Issues in Solr
Learn about the known issues in Apache Solr, their impact or changes to functionality, and any available workarounds.
- HBase Indexer does not work with JDK 17
- Depending on the Cloudera Manager version used with CDP, HBase Indexer (KS Indexer) may have compatibility issues with JDK 17.
- Splitshard operation fails after CDH 6 to CDP upgrade
Collections are not reindexed during an upgrade from CDH 6 to CDP 7 because Lucene 8 (CDP) can read Lucene 7 (CDH 6) indexes.
If you try to execute a SPLITSHARD operation against such a collection, it fails with an error message similar to the following. This happens because a segment created using a Lucene 7 index cannot be merged into a Lucene 8 index:
```
o.a.s.h.a.SplitOp ERROR executing split:
 => java.lang.IllegalArgumentException: Cannot merge a segment that has been created with major version 7 into this index which has been created by major version 8
	at org.apache.lucene.index.IndexWriter.validateMergeReader(IndexWriter.java:3044)
java.lang.IllegalArgumentException: Cannot merge a segment that has been created with major version 7 into this index which has been created by major version 8
	at org.apache.lucene.index.IndexWriter.validateMergeReader(IndexWriter.java:3044) ~[lucene-core-8.11.2.7.1.9.3-2.jar:8.11.2.7.1.9.3-2 a6ff93f9665115dffbdad0ad7f222fd1978d495d - jenkins - 2023-12-02 00:05:23]
	at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:3110) ~[lucene-core-8.11.2.7.1.9.3-2.jar:8.11.2.7.1.9.3-2 a6ff93f9665115dffbdad0ad7f222fd1978d495d - jenkins - 2023-12-02 00:05:23]
	at org.apache.solr.update.SolrIndexSplitter.doSplit(SolrIndexSplitter.java:318) ~[solr-core-8.11.2.7.1.9.3-2.jar:8.11.2.7.1.9.3-2 a6ff93f9665115dffbdad0ad7f222fd1978d495d - jenkins - 2023-12-02 00:16:28]
	at org.apache.solr.update.SolrIndexSplitter.split(SolrIndexSplitter.java:184) ~[solr-core-8.11.2.7.1.9.3-2.jar:8.11.2.7.1.9.3-2 a6ff93f9665115dffbdad0ad7f222fd1978d495d - jenkins - 2023-12-02 00:16:28]
	at org.apache.solr.update.DirectUpdateHandler2.split(DirectUpdateHandler2.java:922) ~[solr-core-8.11.2.7.1.9.3-2.jar:8.11.2.7.1.9.3-2 a6ff93f9665115dffbdad0ad7f222fd1978d495d - jenkins - 2023-12-02 00:16:28]
	at org.apache.solr.handler.admin.SplitOp.execute(SplitOp.java:165) ~[solr-core-8.11.2.7.1.9.3-2.jar:8.11.2.7.1.9.3-2 a6ff93f9665115dffbdad0ad7f222fd1978d495d - jenkins - 2023-12-02 00:16:28]
	at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:367) ~[solr-core-8.11.2.7.1.9.3-2.jar:8.11.2.7.1.9.3-2 a6ff93f9665115dffbdad0ad7f222fd1978d495d - jenkins - 2023-12-02 00:16:28]
```
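For reference, the failing operation corresponds to a SPLITSHARD request such as the following SolrJ sketch; this is illustrative only, and the collection name, shard name, and ZooKeeper address are placeholders:
```java
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class SplitShardExample {
    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper ensemble; adjust for your cluster.
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                java.util.Collections.singletonList("zk1.example.com:2181"),
                java.util.Optional.empty()).build()) {
            // Issues SPLITSHARD against shard1 of the collection. On a
            // collection still carrying Lucene 7 segments, this fails with
            // the IllegalArgumentException shown above.
            CollectionAdminRequest.splitShard("mycollection")
                    .setShardName("shard1")
                    .process(client);
        }
    }
}
```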
- Changing the default value of Client Connection Registry HBase configuration parameter causes HBase MRIT job to fail
-
If the value of the HBase configuration property Client Connection Registry is changed from the default ZooKeeper Quorum to Master Registry, then the YARN job started by HBase MRIT fails with an error message similar to the following:
```
Caused by: org.apache.hadoop.hbase.exceptions.MasterRegistryFetchException: Exception making rpc to masters [quasar-bmyccr-2.quasar-bmyccr.root.hwx.site,22001,-1]
	at org.apache.hadoop.hbase.client.MasterRegistry.lambda$groupCall$1(MasterRegistry.java:244)
	at org.apache.hadoop.hbase.util.FutureUtils.lambda$addListener$0(FutureUtils.java:68)
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
	at java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:792)
	at java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2153)
	at org.apache.hadoop.hbase.util.FutureUtils.addListener(FutureUtils.java:61)
	at org.apache.hadoop.hbase.client.MasterRegistry.groupCall(MasterRegistry.java:228)
	at org.apache.hadoop.hbase.client.MasterRegistry.call(MasterRegistry.java:265)
	at org.apache.hadoop.hbase.client.MasterRegistry.getMetaRegionLocations(MasterRegistry.java:282)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.locateMeta(ConnectionImplementation.java:900)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:867)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.relocateRegion(ConnectionImplementation.java:850)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:981)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:870)
	at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:319)
	... 21 more
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed contacting masters after 1 attempts.
Exceptions:
java.io.IOException: Call to address=quasar-bmyccr-2.quasar-bmyccr.root.hwx.site/172.27.19.4:22001 failed on local exception: java.io.IOException: java.lang.RuntimeException: Found no valid authentication method from options
	at org.apache.hadoop.hbase.client.MasterRegistry.lambda$groupCall$1(MasterRegistry.java:243)
	... 35 more
```
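As a hedged sketch of a possible mitigation (assuming the Cloudera Manager setting maps to the hbase.client.registry.impl property used by HBase 2.x, which is an assumption, not confirmed by this document), a client job can pin the registry back to the ZooKeeper-based default in its own configuration:
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RegistryWorkaroundSketch {
    public static Configuration zkRegistryConf() {
        Configuration conf = HBaseConfiguration.create();
        // Assumption: hbase.client.registry.impl selects the client
        // connection registry. This pins the MRIT client back to the
        // ZooKeeper registry (the default) instead of the Master Registry.
        conf.set("hbase.client.registry.impl",
                "org.apache.hadoop.hbase.client.ZKConnectionRegistry");
        return conf;
    }
}
```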
- Solr does not support rolling upgrade from releases lower than 7.1.9
-
Solr supports rolling upgrades from release 7.1.9 and higher. Upgrading from a lower version means that all Solr Server instances are shut down, the parcels are upgraded and activated, and then the Solr Servers are started again. This causes a service interruption of several minutes; the actual length depends on the cluster size.
Services that depend on Solr, such as Atlas and Ranger, may face issues because of this service interruption.
- Unable to see single-valued and multivalued empty string values when querying collections after upgrade to CDP
- After upgrading from CDH or HDP to CDP, you are not able to see single-valued and multivalued empty string values in CDP. This behavior is due to the remove-blank processor present in solrconfig.xml in Solr 8.
- Cannot create multiple heap dump files because of file name error
- Heap dump generation fails with an error message similar to the following:
```
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /data/tmp/solr_solr-SOLR_SERVER-fc9dacc265fabfc500b92112712505e3_pid{{PID}}.hprof ...
Unable to create /data/tmp/solr_solr-SOLR_SERVER-fc9dacc265fabfc500b92112712505e3_pid{{PID}}.hprof: File exists
```
The cause of the problem is that {{PID}} does not get substituted with an actual process ID during dump file creation, so a generic file name is generated. This causes the next dump file creation to fail, because the existing file with the same name cannot be overwritten.
- Solr coreAdmin status throws Null Pointer Exception
You get a NullPointerException with a stack trace similar to the following. This is caused by an error in handling the Solr admin core STATUS request after collections are rebuilt:
```
Caused by: java.lang.NullPointerException
	at org.apache.solr.core.SolrCore.getInstancePath(SolrCore.java:333)
	at org.apache.solr.handler.admin.CoreAdminOperation.getCoreStatus(CoreAdminOperation.java:324)
	at org.apache.solr.handler.admin.StatusOp.execute(StatusOp.java:46)
	at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:362)
```
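The STATUS call that can surface this exception looks roughly as follows in SolrJ; the core name and base URL are placeholders:
```java
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.client.solrj.response.CoreAdminResponse;

public class CoreStatusExample {
    public static void main(String[] args) throws Exception {
        // Placeholder Solr base URL (no collection or core in the path).
        try (HttpSolrClient client = new HttpSolrClient.Builder(
                "http://solr1.example.com:8983/solr").build()) {
            // Requests core STATUS; after a collection rebuild, this is the
            // call that can hit the NullPointerException shown above.
            CoreAdminResponse status =
                    CoreAdminRequest.getStatus("mycollection_shard1_replica_n1", client);
            System.out.println(status.getCoreStatus());
        }
    }
}
```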
- Applications fail because of mixed authentication methods within dependency chain of services
- Using different types of authentication methods within a dependency chain, for example, configuring your indexer tool to authenticate using Kerberos while your Solr Server uses LDAP authentication, may cause your application to time out and eventually fail.
- API calls fail with error when used with alias, but work with collection name
- API calls fail with an error message similar to the following when used with an alias, but they work when made using the collection name:
```
[   ] o.a.h.s.t.d.w.DelegationTokenAuthenticationFilter Authentication exception: User: xyz@something.example.com is not allowed to impersonate xyz@something.example.com
[c:RTOTagMetaOdd s:shard3 r:core_node11 x:RTOTagMetaOdd_shard3_replica_n8] o.a.h.s.t.d.w.DelegationTokenAuthenticationFilter Authentication exception: User: xyz@something.example.com is not allowed to impersonate xyz@something.example.com
```
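As a workaround sketch, make the same call with the underlying collection name rather than the alias; the collection name and base URL below are placeholders:
```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class QueryByCollectionName {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client = new HttpSolrClient.Builder(
                "http://solr1.example.com:8983/solr").build()) {
            SolrQuery query = new SolrQuery("*:*");
            // Pass the concrete collection name (the one behind the alias)
            // instead of the alias itself.
            QueryResponse response = client.query("RTOTagMetaOdd", query);
            System.out.println(response.getResults().getNumFound());
        }
    }
}
```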
- CrunchIndexerTool does not work out of the box if /tmp is mounted in noexec mode
- When you try to run CrunchIndexerTool with the /tmp directory mounted in noexec mode, it throws a snappy-related error.
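One possible mitigation sketch, assuming the snappy-related error comes from the snappy-java library (which honors the org.xerial.snappy.tempdir system property for native-library extraction), is to redirect extraction to a directory that is not mounted noexec; the path below is a placeholder:
```java
public class SnappyTempDirWorkaround {
    public static void main(String[] args) {
        // Assumption: snappy-java extracts its native library into the
        // directory named by org.xerial.snappy.tempdir. This must be set
        // before any Snappy class is first loaded.
        System.setProperty("org.xerial.snappy.tempdir", "/var/tmp/snappy");
        // ... launch the indexing work here ...
    }
}
```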
- Mergeindex operation with --go-live fails after CDH 6 to CDP upgrade
-
During an upgrade from CDH 6 to CDP, collections are not reindexed because Lucene 8 (CDP) can read Lucene 7 (CDH 6) indexes.
If you try to execute MapReduceIndexerTool (MRIT) or HBase Indexer MRIT with --go-live against such a collection, you get an error message similar to the following:
```
Caused by: java.lang.IllegalArgumentException: Cannot merge a segment that has been created with major version 8 into this index which has been created by major version 7
	at org.apache.lucene.index.IndexWriter.validateMergeReader(IndexWriter.java:2894)
	at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2960)
	at org.apache.solr.update.DirectUpdateHandler2.mergeIndexes(DirectUpdateHandler2.java:570)
	at org.apache.solr.update.processor.RunUpdateProcessor.processMergeIndexes(RunUpdateProcessorFactory.java:95)
	at org.apache.solr.update.processor.UpdateRequestProcessor.processMergeIndexes(UpdateRequestProcessor.java:63)
```
This happens because CDP MRIT and the HBase indexer use Solr 8 as embedded Solr, which creates a Lucene 8 index. It cannot be merged (using MERGEINDEXES) into an older Lucene 7 index.
- Apache Tika upgrade may break morphlines indexing
- The upgrade of Apache Tika from 1.27 to 2.3.0 brought potentially breaking changes for morphlines indexing. Duplicate/triplicate key names were removed, and certain parser class names were changed (for example, org.apache.tika.parser.jpeg.JpegParser changed to org.apache.tika.parser.image.JpegParser).
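A defensive way to cope with the rename in custom Java code is to resolve the new class name first and fall back to the old one; a minimal sketch, using only the class names from the issue above:
```java
public class TikaParserLookup {
    // Tries the Tika 2.x class name first, then falls back to the
    // pre-2.x name, so the same code works against either version.
    public static Class<?> resolveJpegParser() throws ClassNotFoundException {
        try {
            return Class.forName("org.apache.tika.parser.image.JpegParser");
        } catch (ClassNotFoundException e) {
            return Class.forName("org.apache.tika.parser.jpeg.JpegParser");
        }
    }
}
```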
- CDPD-28006: Solr access via Knox fails with impersonation error though auth_to_local and proxy user configs are set
- Currently, the names of system users that perform impersonation with Solr must match the names of their respective Kerberos principals.
- CDH-77598: Indexing fails with socketTimeout
-
Starting from CDH 6.0, the HTTP client library used by Solr has a default socket timeout of 10 minutes. Because of this, if a single request sent from an indexer executor to Solr takes more than 10 minutes to be serviced, the indexing process fails with a timeout error.
This timeout has been raised to 24 hours. Nevertheless, there still may be use cases where even this extended timeout period proves insufficient.
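For custom SolrJ clients (as opposed to the bundled indexer tools), the socket timeout can be raised explicitly when the client is built; a minimal sketch with a placeholder URL and timeout values:
```java
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class LongTimeoutClient {
    public static HttpSolrClient build() {
        return new HttpSolrClient.Builder("http://solr1.example.com:8983/solr")
                // Socket timeout in milliseconds; raised well above the
                // 10-minute default mentioned above (here: 1 hour).
                .withSocketTimeout(3_600_000)
                .withConnectionTimeout(60_000)
                .build();
    }
}
```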
- CDPD-12450: CrunchIndexerTool Indexing fails with socketTimeout
- The HTTP client library uses a socket timeout of 10 minutes. The Spark Crunch Indexer does not override this value, so if a single batch takes more than 10 minutes, the entire indexing job fails. This can happen especially if the morphlines contain DeleteByQuery requests.
- CDPD-29289: HBaseMapReduceIndexerTool fails with socketTimeout
- The HTTP client library uses a socket timeout of 10 minutes. The HBase Indexer does not override this value, so if a single batch takes more than 10 minutes, the entire indexing job fails.
- CDPD-20577: Splitshard operation on HDFS index checks local filesystem and fails
-
When performing a shard split on an index that is stored on HDFS, SplitShardCmd still evaluates free disk space on the local file system of the server where Solr is installed. This may cause the command to fail, perceiving that there is not enough disk space to perform the shard split.
- DOCS-5717: Lucene index handling limitation
- The Lucene index can only be upgraded by one major version. Solr 8 will not open an index that was created with Solr 6 or earlier.
- CDH-22190: CrunchIndexerTool which includes Spark indexer requires specific input file format specifications
- If the --input-file-format option is specified with CrunchIndexerTool, then its argument must be text, avro, or avroParquet, rather than a fully qualified class name.
- CDH-26856: Field value class guessing and automatic schema field addition are not supported with the MapReduceIndexerTool or the HBaseMapReduceIndexerTool
- The MapReduceIndexerTool and the HBaseMapReduceIndexerTool can be used with a Managed Schema created via NRT indexing of documents or via the Solr Schema API. However, neither tool supports adding fields automatically to the schema during ingest.
- CDH-19407: The Browse and Spell Request Handlers are not enabled in schemaless mode
- The Browse and Spell Request Handlers require certain fields to be present in the schema. Since those fields cannot be guaranteed to exist in a Schemaless setup, the Browse and Spell Request Handlers are not enabled by default.
- CDH-17978: Enabling blockcache writing may result in unusable indexes
- It is possible to create indexes with solr.hdfs.blockcache.write.enabled set to true. Such indexes may appear corrupt to readers, and reading these indexes may irrecoverably corrupt them. Blockcache writing is disabled by default.
- CDH-58276: Users with insufficient Solr permissions may receive a "Page Loading" message from the Solr Web Admin UI
- Users who are not authorized to use the Solr Admin UI are not shown a page explaining that access is denied; instead, they receive a web page that never finishes loading.
- CDH-15441: Using MapReduceIndexerTool or HBaseMapReduceIndexerTool multiple times may produce duplicate entries in a collection
- Repeatedly running the MapReduceIndexerTool on the same set of input files can result in duplicate entries in the Solr collection. This occurs because the tool can only insert documents and cannot update or delete existing Solr documents. This issue does not apply to the HBaseMapReduceIndexerTool unless it is run with more than zero reducers.
- CDH-58694: Deleting collections might fail if hosts are unavailable
- It is possible to delete a collection while some of the hosts that store parts of it are unavailable. After such a deletion, if the previously unavailable hosts are brought back online, the deleted collection may be restored.
- CDPD-13923: Every Configset is Untrusted Without Kerberos
- Solr 8 introduces the concept of ‘untrusted configset’, denoting configsets that were uploaded without authentication. Collections created with an untrusted configset will not initialize if <lib> directives are used in the configset.
Unsupported features
- Panel with security info in admin UI's dashboard
- Incremental backup mode
- Schema Designer UI
- Package Management System
- HTTP/2
- Solr SQL/JDBC
- Graph Traversal
- Cross Data Center Replication (CDCR)
- SolrCloud Autoscaling
- HDFS Federation
- Saving search results
- Solr contrib modules (Spark, MapReduce and Lily HBase indexers are not contrib modules but part of the Cloudera Search product itself, therefore they are supported).
Limitations
- Default Solr core names cannot be changed
- Although it is technically possible to give user-defined Solr core names during core creation, this should be avoided in the context of Cloudera's distribution of Apache Solr. Cloudera Manager expects core names in the default "collection_shardX_replicaY" format. Altering core names results in Cloudera Manager being unable to fetch Solr metrics for the given core, and this may corrupt data collection for co-located core-level, shard-level, and server-level charts.