What's New in Schema Registry
Learn about the new features for Schema Registry in Cloudera Runtime 7.1.9.
Schema Registry supports connections to databases secured using TLS 1.2 and TCPS
Schema Registry can connect to TLS-enabled MySQL, MariaDB, or PostgreSQL databases and to TCPS-enabled Oracle databases. To connect to a TLS/TCPS-enabled database while adding the Schema Registry service to a cluster, see Configure TLS 1.2 for Schema Registry. You can also enable TLS/TCPS on an existing database and then configure Schema Registry to connect to it. See Set up and configure TLS 1.2 for Schema Registry. For more information about Oracle TCPS, see How to connect CDP components to a TCPS-enabled Oracle database.
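TLS requirements for these databases are usually expressed in the JDBC connection URL. The URLs below are an illustrative sketch only, with placeholder hosts and database names; the exact parameter names depend on your driver version, so check the linked configuration guides for the properties Schema Registry actually uses:

```properties
# Hypothetical JDBC URLs; hosts, ports, and database names are placeholders.
# MySQL Connector/J 8.x: enforce TLS and verify the server certificate
jdbc:mysql://db.example.com:3306/registry?sslMode=VERIFY_CA

# PostgreSQL JDBC driver: require TLS with full certificate verification
jdbc:postgresql://db.example.com:5432/registry?ssl=true&sslmode=verify-full

# Oracle TCPS: the protocol appears in the connect descriptor
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=db.example.com)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=registry)))
```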
Schema Registry instances behind load balancer
You can now use a load balancer in front of Schema Registry instances. Running multiple instances of the same application behind a load balancer is a common pattern: it provides failover in high availability environments and helps distribute load between instances. A load balancer can also be used in front of Schema Registry instances in environments with Kerberos or SSL enabled.
AvroConverter support for KConnect logical types
AvroConverter now converts between Connect and Avro temporal and decimal types.
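Converters are selected in the Kafka Connect worker or connector configuration. The fragment below is only a sketch of where that setting lives; the converter class and URL property names are placeholders, not the documented Cloudera names, so consult the Connect documentation for the actual values:

```properties
# Placeholder names; substitute the AvroConverter class and registry URL
# property documented for your distribution.
value.converter=com.example.kafka.connect.AvroConverter
value.converter.schema.registry.url=http://registry.example.com:7788/api/v1
```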
Support for alternative Jersey connectors in SchemaRegistryClient
connector.provider.class can be configured in the Schema Registry Client. If it is configured, schema.registry.client.retry.policy should also be configured to a value different from the default.
This also fixes the issue with some third party load balancers where the client is expected to follow redirects and authenticate while doing that.
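As a rough sketch, the two properties named above might be set together in the client configuration. The registry URL key and the retry policy value below are illustrative assumptions; only connector.provider.class and schema.registry.client.retry.policy come from this note:

```properties
# Assumed client URL property; verify against your client documentation.
schema.registry.url=https://registry.example.com:7790/api/v1

# Use an alternative Jersey connector, e.g. the Apache HttpClient-based one
connector.provider.class=org.glassfish.jersey.apache.connector.ApacheConnectorProvider

# When a custom connector is configured, also set a non-default retry policy
schema.registry.client.retry.policy=<non-default retry policy class>
```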
Schema Registry with Knox uses round-robin load balancing
When multiple instances of Schema Registry are running, Knox uses round-robin load balancing to forward requests.
Upgraded Avro version to 1.11.1
Avro was upgraded from version 1.9.1 to 1.11.1.
KafkaAvroSerializer and KafkaAvroDeserializer improvements
- KafkaAvroSerializer can now handle null values without Avro
- KafkaAvroSerializer now supports a configuration property called null.passthrough.enabled, which is false by default. If enabled, null data is handled as null, and no schema is sent to Schema Registry. This behavior enables client applications to write tombstone messages into compacted topics. The KafkaAvroDeserializer also handles null values by returning null without any regard to the schema.
- Support deserialization when the topic and schema names don't match
- From now on, the KafkaAvroDeserializer uses the schema version's ID in the Avro byte stream to access the actual schema and disregards schema names.
- Logical types conversion for the KafkaAvroSerializer and KafkaAvroDeserializer
- The KafkaAvroSerializer and KafkaAvroDeserializer can now properly handle and convert Avro logical types at the record level. This means that if you have a record that has a field with a built-in Avro logical type (for example, a BYTES type with a decimal logical type), you can now properly serialize the record. After deserialization, a GenericRecord is returned, including the typed BigDecimal field, instead of a ByteBuffer. Logical type conversion can be enabled using the logical.type.conversion.enabled property. This property is set to false by default for backward compatibility.
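The two opt-in properties described above might be enabled together in a client configuration, for example as follows. This is a sketch: the property names come from this note, but their exact placement (for example, any serializer-specific prefix) may vary by setup:

```properties
# Allow null values (tombstones) to pass through without contacting
# Schema Registry; disabled by default.
null.passthrough.enabled=true

# Convert Avro logical types (e.g. decimal) to typed Java values such as
# BigDecimal after deserialization; disabled by default for compatibility.
logical.type.conversion.enabled=true
```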
Principal mapping rules can be defined without quotes
The SSL Client Authentication Mapping Rules (schema.registry.ssl.principal.mapping.rules) property now accepts rules that are defined without quotes. As a result, when adding multiple rules, you no longer need to enclose each rule in quotes.
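As an illustration, assuming the rule syntax follows the convention of Kafka's ssl.principal.mapping.rules (a hedged assumption, not confirmed by this note), unquoted rules could look like this:

```properties
# Hypothetical example: extract the CN from the client certificate DN,
# otherwise keep the principal unchanged. No quotes around the rules.
schema.registry.ssl.principal.mapping.rules=RULE:^CN=(.*?),.*$/$1/,DEFAULT
```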
Modules section is removed from the registry.yaml configuration structure
In previous versions, the registry.yaml configuration file contained a modules section. This section was used to list pluggable modules that extended Schema Registry's functionality. However, modules were never fully supported and were removed in a previous release. The modules section in registry.yaml was kept for backwards compatibility. Starting with this version, the modules section is removed from the registry.yaml configuration structure.