Setting up the Debezium SQL Server Source connector
Learn about the Cloudera-specific setup steps required before you can deploy the Debezium SQL Server Source connector.
Before deploying an instance of the Debezium SQL Server Source connector in Cloudera, you must download and deploy the SQL Server JDBC driver on all Kafka Connect hosts. Otherwise, the connector deployment fails. The following steps walk you through this process.
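The driver deployment described above can be sketched as the following shell commands, to be run on each Kafka Connect host. The driver version and the Connect library directory are assumptions; check your Cloudera deployment for the directory that Kafka Connect actually loads JDBC drivers from, and restart the Kafka Connect role (for example, through Cloudera Manager) after copying the jar.

```shell
# Hypothetical driver version -- use the version appropriate for your
# SQL Server and JRE. The Maven Central URL pattern below is standard
# for the com.microsoft.sqlserver:mssql-jdbc artifact.
DRIVER_VERSION="12.4.2.jre8"
DRIVER_JAR="mssql-jdbc-${DRIVER_VERSION}.jar"

# Assumed location -- substitute the directory your Kafka Connect
# role is configured to load additional libraries from.
CONNECT_LIB_DIR="/var/lib/kafka_connect_jdbc"

# Download the driver and place it where Kafka Connect can load it.
curl -fsSL -o "/tmp/${DRIVER_JAR}" \
  "https://repo1.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/${DRIVER_VERSION}/${DRIVER_JAR}"
sudo mkdir -p "${CONNECT_LIB_DIR}"
sudo cp "/tmp/${DRIVER_JAR}" "${CONNECT_LIB_DIR}/"
```

Repeat on every Kafka Connect host, then restart the Kafka Connect roles so the driver is picked up.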
For more information regarding how the Debezium SQL Server connector works, as well as its configuration and properties, see the Debezium connector for SQL Server in the Debezium documentation.
History topic producer and consumer properties reference
This section collects the pass-through producer and consumer properties required for the schema history topic. When you configure the Debezium SQL Server connector in a secure environment, you must configure both the producer and the consumer properties.
database.history.kafka.bootstrap.servers=${cm-agent:ENV:KAFKA_BOOTSTRAP_SERVERS}
database.history.kafka.topic=sqlserver.schema-changes.history
database.history.producer.security.protocol=SASL_SSL
database.history.producer.ssl.keystore.location=${cm-agent:ENV:CONNECT_SSL_SERVER_KEYSTORE_LOCATION}
database.history.producer.ssl.keystore.password=${cm-agent:ENV:CONNECT_SSL_SERVER_KEYSTORE_PASSWORD}
database.history.producer.ssl.truststore.location=${cm-agent:ENV:CONNECT_SSL_SERVER_TRUSTSTORE_LOCATION}
database.history.producer.ssl.truststore.password=${cm-agent:ENV:CONNECT_SSL_SERVER_TRUSTSTORE_PASSWORD}
database.history.producer.ssl.key.password=${cm-agent:ENV:CONNECT_SSL_SERVER_KEYSTORE_KEYPASSWORD}
database.history.producer.sasl.kerberos.service.name=${cm-agent:ENV:kafka_service_user_name}
database.history.producer.sasl.mechanism=GSSAPI
database.history.producer.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true serviceName="${cm-agent:ENV:kafka_service_user_name}" keyTab="${cm-agent:keytab}" principal="${cm-agent:ENV:kafka_connect_service_principal}"
database.history.consumer.security.protocol=SASL_SSL
database.history.consumer.ssl.keystore.location=${cm-agent:ENV:CONNECT_SSL_SERVER_KEYSTORE_LOCATION}
database.history.consumer.ssl.keystore.password=${cm-agent:ENV:CONNECT_SSL_SERVER_KEYSTORE_PASSWORD}
database.history.consumer.ssl.truststore.location=${cm-agent:ENV:CONNECT_SSL_SERVER_TRUSTSTORE_LOCATION}
database.history.consumer.ssl.truststore.password=${cm-agent:ENV:CONNECT_SSL_SERVER_TRUSTSTORE_PASSWORD}
database.history.consumer.ssl.key.password=${cm-agent:ENV:CONNECT_SSL_SERVER_KEYSTORE_KEYPASSWORD}
database.history.consumer.sasl.mechanism=GSSAPI
database.history.consumer.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true serviceName="${cm-agent:ENV:kafka_service_user_name}" keyTab="${cm-agent:keytab}" principal="${cm-agent:ENV:kafka_connect_service_principal}"
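As an illustration, a connector instance carrying properties like the ones above can be created through the Kafka Connect REST API. Everything in this sketch is an assumption: the host, port, connector name, and all database connection values are placeholders, and in a Cloudera deployment you would typically create the connector through Streams Messaging Manager instead, where the ${cm-agent:...} placeholders are resolved for you.

```shell
# Sketch only: create a Debezium SQL Server connector instance via the
# Kafka Connect REST API. Every hostname, port, credential, and database
# name below is a placeholder -- substitute values from your environment.
curl -s -X POST http://connect-host.example.com:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "sqlserver-source",
    "config": {
      "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
      "database.hostname": "sqlserver-host.example.com",
      "database.port": "1433",
      "database.user": "debezium",
      "database.password": "********",
      "database.dbname": "testDB",
      "database.history.kafka.topic": "sqlserver.schema-changes.history",
      "database.history.kafka.bootstrap.servers": "kafka-host.example.com:9093"
    }
  }'
```

In a secure environment, the history topic producer and consumer properties listed above would be included in the same "config" object.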
cloudera.offset.flush.interval.ms
Specifies the interval, in milliseconds, at which connector task offsets are committed for this connector. When set, this value overrides the role-level Offset Flush Interval (offset.flush.interval.ms) setting.
Type: Long
Default: inherits the role-level offset.flush.interval.ms value when not set.
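For example, a connector that should commit its task offsets every 10 seconds, regardless of the role-level Offset Flush Interval, would include the following line in its configuration (the value is illustrative):

```
cloudera.offset.flush.interval.ms=10000
```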
