Configuring Cloudera Streaming Analytics - Kubernetes Operator for FIPS mode
Run Cloudera Streaming Analytics - Kubernetes Operator on a FIPS-enabled OpenShift cluster by using FIPS-approved cryptographic providers and custom container images.
Deploying Cloudera Streaming Analytics - Kubernetes Operator on a FIPS-enabled environment requires manual preparation to ensure every component starts, operates correctly, and uses FIPS-approved cryptographic libraries.
This procedure targets FIPS-enabled Red Hat OpenShift clusters that run on FIPS-enabled Red Hat nodes. The base container images ship with Red Hat builds of OpenJDK.
The steps assume the following FIPS providers are available and supplied by your organization:
- com.safelogic.cryptocomply.jcajce.provider.CryptoComplyFipsProvider (ccj-3.0.2.1.jar)
- org.bouncycastle.jsse.provider.BouncyCastleJsseProvider (bctls.jar)
If you use different providers, adjust the configuration steps accordingly.
The manual workflow includes the following actions:
- Supplying the crypto provider JAR files
- Creating a modified java.security file
- Building custom container images and pushing them to a registry that your cluster can access
- Creating truststores in .bcfks format
- Applying updated manifests and Helm values to enable components to consume the new assets
The examples assume that Cloudera Streaming Analytics - Kubernetes Operator is installed in the flink namespace with a command similar to the following:
helm install -n flink --set flink-kubernetes-operator.watchNamespaces={flink} -f values.yaml csa-operator helm/csa-operator
Your values.yaml file must at minimum provide license and image repository details, for example:
flink-kubernetes-operator:
  clouderaLicense:
    ...
ssb:
  sqlRunner:
    ...
Verify that you can push custom images to [***YOUR-DOCKER-REGISTRY***] and that the cluster can pull from it. Commands include the --platform linux/amd64 option to support building on a different architecture, such as macOS on Apple Silicon.
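One way to confirm push and pull access before you start building is sketched below; the registry-check repository name and the hello-world image are hypothetical examples, and the check assumes your workstation can reach a source registry for the test image.
docker login [***YOUR-DOCKER-REGISTRY***]
# Tag and push a small throwaway image to prove write access, then delete the tag from the registry.
docker pull hello-world
docker tag hello-world [***YOUR-DOCKER-REGISTRY***]/flink/registry-check:tmp
docker push [***YOUR-DOCKER-REGISTRY***]/flink/registry-check:tmp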
- Access to a FIPS-enabled Red Hat OpenShift cluster that runs on FIPS-enabled Red Hat nodes
- Availability of the ccj-3.0.2.1.jar and bctls.jar provider files
- Credentials for a Docker registry that the cluster can access
- Ability to run docker, kubectl, helm, keytool, and Cloudera SQL clients from an administrative workstation
- A valid Cloudera license and access to the values.yaml file used by your Helm deployment
- Cert-manager installed if you plan to keep the Flink Operator webhook enabled
-
Prepare the FIPS security provider artifacts and update the Java security configuration for the Flink Kubernetes Operator.
You create a working directory, copy the provider JARs, and fetch the baseline java.security file from the running operator pod.
-
Create the workspace and copy the provider files.
mkdir fips
cd fips
# Copy the provider JARs into this directory.
ls
bctls.jar  ccj-3.0.2.1.jar
-
Retrieve the java.security file from the operator pod.
kubectl -n flink get pods
NAME                                         READY   STATUS    RESTARTS   AGE
flink-kubernetes-operator-5b74769568-lb7ts   2/2     Running   0          14m
kubectl -n flink exec -it pod/flink-kubernetes-operator-5b74769568-lb7ts -- find /etc -name java.security
/etc/java/java-11-openjdk/java-11-openjdk-11.0.25.0.9-7.el9.x86_64/conf/security/java.security
kubectl -n flink cp flink-kubernetes-operator-5b74769568-lb7ts:/etc/java/java-11-openjdk/java-11-openjdk-11.0.25.0.9-7.el9.x86_64/conf/security/java.security .
-
Edit the security providers in java.security.
Add your provider classes to the top of the security.provider and fips.provider lists and renumber the remaining entries. Set fips.keystore.type to BCFKS.
# List of providers and their preference orders.
security.provider.1=com.safelogic.cryptocomply.jcajce.provider.CryptoComplyFipsProvider
security.provider.2=org.bouncycastle.jsse.provider.BouncyCastleJsseProvider fips:CCJ
security.provider.3=SUN
security.provider.4=SunRsaSign
...
# Security providers used when FIPS mode support is active.
fips.provider.1=com.safelogic.cryptocomply.jcajce.provider.CryptoComplyFipsProvider
fips.provider.2=org.bouncycastle.jsse.provider.BouncyCastleJsseProvider fips:CCJ
fips.provider.3=SunPKCS11 ${java.home}/conf/security/nss.fips.cfg
fips.provider.4=SUN
...
# Default keystore type used when global crypto-policies are set to FIPS.
fips.keystore.type=BCFKS
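As an optional sanity check before building images, you can confirm the provider order that the modified file produces; the following is a minimal sketch that assumes a local JDK 11 or later and that the provider JARs and the edited java.security sit in the current directory. The ListProviders file name is a hypothetical example.
cat <<'EOF' > ListProviders.java
import java.security.Provider;
import java.security.Security;

public class ListProviders {
    public static void main(String[] args) {
        // Print the providers in the order the JVM resolved them.
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName() + " " + p.getVersionStr());
        }
    }
}
EOF
java --class-path ccj-3.0.2.1.jar:bctls.jar \
  -Djava.security.properties==java.security \
  ListProviders.java
If the configuration is correct, the FIPS provider appears first in the output.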
-
Build and push a custom flink-kubernetes-operator image that bundles the FIPS provider configuration.
You create a Dockerfile that copies the provider JARs and the customized java.security file into the image, then build the image and push it to your registry.
-
Create the Dockerfile.
cat <<'EOF' > Dockerfile-flink-kubernetes-operator-fips
FROM container.repository.cloudera.com/cloudera/flink-kubernetes-operator:1.20.1-csaopX.X.X-bXX
COPY --chown=flink:flink bctls.jar $CLOUDERA_LIB/bctls.jar
COPY --chown=flink:flink ccj-3.0.2.1.jar $CLOUDERA_LIB/ccj-3.0.2.1.jar
COPY --chown=flink:flink java.security $CLOUDERA_LIB/java.security
ENV FIPS_JARS="/opt/flink/cloudera/bctls.jar:/opt/flink/cloudera/ccj-3.0.2.1.jar"
ENV FIPS_ARGS="-Djava.security.properties==/opt/flink/cloudera/java.security"
EOF
-
Build and push the image.
docker build --platform linux/amd64 -f Dockerfile-flink-kubernetes-operator-fips -t [***YOUR-DOCKER-REGISTRY***]/flink/flink-kubernetes-operator:fips-1 .
docker push [***YOUR-DOCKER-REGISTRY***]/flink/flink-kubernetes-operator:fips-1
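Before deploying, you can optionally confirm that the image carries the FIPS settings; the following sketch assumes the image provides /bin/sh. Note that the double equals sign in -Djava.security.properties== makes the custom file replace the default java.security entirely rather than being appended to it.
# Inspect the FIPS environment variables and the copied files.
docker run --rm --platform linux/amd64 --entrypoint /bin/sh \
  [***YOUR-DOCKER-REGISTRY***]/flink/flink-kubernetes-operator:fips-1 \
  -c 'echo "FIPS_JARS=$FIPS_JARS"; echo "FIPS_ARGS=$FIPS_ARGS"; ls -l /opt/flink/cloudera'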
-
Convert the webhook keystore to BCFKS format and update the Kubernetes secret.
-
Retrieve the keystore password and the existing PKCS12 file.
cd flink-kubernetes-operator/fips
kubectl get secret -n flink flink-operator-webhook-secret -o jsonpath='{.data.password}' | base64 -d
password1234
kubectl get secret -n flink webhook-server-cert -o jsonpath='{.data.keystore\.p12}' | base64 -d > keystore.p12
-
Review the PKCS12 keystore contents.
keytool -list -keystore keystore.p12 -storetype PKCS12 -storepass password1234
Keystore type: PKCS12
Keystore provider: SUN

Your keystore contains 1 entry
...
-
Convert the keystore to BCFKS format.
keytool -importkeystore \
  -srckeystore keystore.p12 \
  /* Lines 186-192 omitted */ \
  -providerpath ccj-3.0.2.1.jar
Importing keystore keystore.p12 to keystore.bcfks...
Entry for alias 1 successfully imported.
Import command completed: 1 entries successfully imported, 0 entries failed or cancelled
-
Validate the BCFKS keystore and update the secret with the new file.
keytool -list \
  -keystore keystore.bcfks \
  /* Lines 201-204 omitted */ \
  -providerpath ccj-3.0.2.1.jar
Keystore type: BCFKS
Keystore provider: CCJ

Your keystore contains 1 entry
...
# If your base64 implementation wraps lines (GNU coreutils), add -w0 or pipe
# through tr -d '\n' so the patch payload stays on a single line.
kubectl patch secret -n flink webhook-server-cert \
  --patch="{\"data\": {\"keystore.bcfks\": \"$(cat keystore.bcfks | base64)\"}}"
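For reference, the omitted flags follow the standard keytool options for a cross-store import. A complete invocation typically looks like the following sketch; the store passwords are taken from the webhook secret retrieved above, while the exact flag values are assumptions to adapt to your environment.
# Hypothetical full conversion command; adjust passwords and paths as needed.
keytool -importkeystore \
  -srckeystore keystore.p12 \
  -srcstoretype PKCS12 \
  -srcstorepass password1234 \
  -destkeystore keystore.bcfks \
  -deststoretype BCFKS \
  -deststorepass password1234 \
  -providerclass com.safelogic.cryptocomply.jcajce.provider.CryptoComplyFipsProvider \
  -providerpath ccj-3.0.2.1.jar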
-
Update the flink-kubernetes-operator deployment to reference the custom image and BCFKS keystore.
You edit the deployment to configure both containers to use the new image, adjust the environment variables for the webhook keystore, and mount the BCFKS file from the secret.
kubectl edit -n flink deployment flink-kubernetes-operator
- Set each container image to [***YOUR-DOCKER-REGISTRY***]/flink/flink-kubernetes-operator:fips-1.
- Update WEBHOOK_KEYSTORE_FILE to keystore.bcfks and ensure the corresponding secret item exposes the file with that name. Set WEBHOOK_KEYSTORE_TYPE to bcfks if it is defined.
- Reference the new keystore.bcfks key in the projected secret volume.
After you save and exit, Kubernetes restarts the pod. Confirm that the webhook container logs report the BCFKS keystore.
2025-07-15 08:57:22,325 o.a.f.k.o.s.ReloadableSslContext [INFO ] Creating keystore with type: bcfks
2025-07-15 08:57:22,329 o.a.f.k.o.s.ReloadableSslContext [INFO ] Loading keystore from file: /certs/keystore.bcfks
2025-07-15 08:57:22,718 o.a.f.k.o.s.ReloadableSslContext [INFO ] Initializing key manager with keystore and password
Jul 15, 2025 8:57:23 AM org.bouncycastle.jsse.provider.PropertyUtils getStringSystemProperty
INFO: Found string system property [java.home]: /usr/lib/jvm/java-11-openjdk-11.0.25.0.9-3.el9.x86_64
Jul 15, 2025 8:57:23 AM org.bouncycastle.jsse.provider.PropertyUtils getStringSystemProperty
INFO: Found string system property [javax.net.ssl.trustStoreType]: pkcs12
Jul 15, 2025 8:57:23 AM org.bouncycastle.jsse.provider.PropertyUtils getStringSystemProperty
INFO: Found string system property [java.home]: /usr/lib/jvm/java-11-openjdk-11.0.25.0.9-3.el9.x86_64
Jul 15, 2025 8:57:23 AM org.bouncycastle.jsse.provider.PropertyUtils getStringSystemProperty
INFO: Found string system property [javax.net.ssl.trustStoreType]: pkcs12
2025-07-15 08:57:23,125 o.a.f.k.o.a.FlinkOperatorWebhook [INFO ] Keystore path is resolved to real filename: keystore.bcfks
2025-07-15 08:57:23,131 o.a.f.k.o.f.FileSystemWatchService [INFO ] Starting watching path: /certs
2025-07-15 08:57:23,132 o.a.f.k.o.f.FileSystemWatchService [INFO ] Path is resolved to real path: /certs
2025-07-15 08:57:23,429 o.a.f.k.o.a.FlinkOperatorWebhook [INFO ] Webhook listening at 0:0:0:0:0:0:0:0:9443
-
Build and push a custom flink-extended image.
-
Create the Dockerfile.
cat <<'EOF' > Dockerfile-flink-extended-fips
FROM container.repository.cloudera.com/cloudera/flink-extended:1.20.1-csaopX.X.X-bXX
COPY --chown=9999:0 bctls.jar $FLINK_HOME/lib/bctls.jar
COPY --chown=9999:0 ccj-3.0.2.1.jar $FLINK_HOME/lib/ccj-3.0.2.1.jar
COPY --chown=9999:0 java.security $FLINK_HOME/lib/java.security
ENV FIPS_JARS="/opt/flink/lib/bctls.jar:/opt/flink/lib/ccj-3.0.2.1.jar"
ENV FIPS_ARGS="-Djava.security.properties==/opt/flink/lib/java.security"
EOF
-
Build and push the image.
docker build --platform linux/amd64 -f Dockerfile-flink-extended-fips -t [***YOUR-DOCKER-REGISTRY***]/flink/flink-extended:fips-1 .
docker push [***YOUR-DOCKER-REGISTRY***]/flink/flink-extended:fips-1
-
Build and push a custom ssb-sse image.
-
Create the Dockerfile.
cat <<'EOF' > Dockerfile-ssb-sse-fips
FROM container.repository.cloudera.com/cloudera/ssb-sse:1.20.1-csaopX.X.X-bXX
COPY --chown=9999:0 bctls.jar $FLINK_HOME/lib/bctls.jar
COPY --chown=9999:0 ccj-3.0.2.1.jar $FLINK_HOME/lib/ccj-3.0.2.1.jar
COPY --chown=9999:0 java.security $FLINK_HOME/lib/java.security
ENV FIPS_JARS="/opt/flink/lib/bctls.jar:/opt/flink/lib/ccj-3.0.2.1.jar"
ENV FIPS_ARGS="-Djava.security.properties==/opt/flink/lib/java.security"
EOF
-
Build and push the image.
docker build --platform linux/amd64 -f Dockerfile-ssb-sse-fips -t [***YOUR-DOCKER-REGISTRY***]/flink/ssb-sse:fips-1 .
docker push [***YOUR-DOCKER-REGISTRY***]/flink/ssb-sse:fips-1
-
Update Helm values to reference the FIPS images and redeploy the release.
You edit values.yaml to point to the custom image tags and run helm upgrade.
flink-kubernetes-operator:
  image:
    repository: [***YOUR-DOCKER-REGISTRY***]/flink/flink-kubernetes-operator
    tag: fips-1
ssb:
  sqlRunner:
    image:
      repository: [***YOUR-DOCKER-REGISTRY***]/flink/flink-extended
      tag: fips-1
  sse:
    image:
      repository: [***YOUR-DOCKER-REGISTRY***]/flink/ssb-sse
      tag: fips-1
helm upgrade -n flink --set flink-kubernetes-operator.watchNamespaces={flink} -f values.yaml csa-operator helm/csa-operator
Validate the deployment by running a simple SQL job.
DROP TABLE IF EXISTS datagen_table_1;
DROP TABLE IF EXISTS print_sink_1;
CREATE TABLE `datagen_table_1` (
  `col_str` STRING,
  /* Lines 348-349 omitted */
  `col_ts` TIMESTAMP(3)
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '1'
);
CREATE TABLE `print_sink_1` (
  `col_str` STRING,
  /* Lines 357-358 omitted */
  `col_ts` TIMESTAMP(3)
) WITH (
  'connector' = 'print'
);
INSERT INTO print_sink_1 SELECT * FROM datagen_table_1;
-
Configure a Kafka connection that uses a BCFKS truststore.
You create a truststore from the Kafka cluster certificate, expose it through a secret, update Helm values, and validate sampling in the UI.
-
Create the BCFKS truststore and verify its contents.
keytool -importcert \
  -alias kafka \
  /* Lines 380-386 omitted */ \
  -providerpath ccj-3.0.2.1.jar
keytool -list \
  -keystore kafka.truststore.bcfks \
  /* Lines 391-394 omitted */ \
  -providerpath ccj-3.0.2.1.jar
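For reference, a complete import typically looks like the following sketch; the kafka-ca.crt file name is a hypothetical example for your Kafka cluster certificate, and the changeit password matches the truststore password used in the sampling configuration below.
# Hypothetical full import; adjust the certificate file, alias, and password.
keytool -importcert \
  -noprompt \
  -alias kafka \
  -file kafka-ca.crt \
  -keystore kafka.truststore.bcfks \
  -storetype BCFKS \
  -storepass changeit \
  -providerclass com.safelogic.cryptocomply.jcajce.provider.CryptoComplyFipsProvider \
  -providerpath ccj-3.0.2.1.jar
-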
Create the Kubernetes secret.
kubectl -n flink create secret generic kafka-tls --from-file=kafka.truststore.bcfks
-
Reference the secret in values.yaml and run helm upgrade.
ssb:
  podVolumes:
    # Configure volume mounts that expose the kafka-tls secret.
helm upgrade -n flink --set flink-kubernetes-operator.watchNamespaces={flink} -f values.yaml csa-operator helm/csa-operator
-
Enable sampling in the Cloudera SQL Stream Builder UI.
In the Cloudera SQL Stream Builder UI, navigate to Configuration > Sampling and set the following values:
- Enabled: yes
- Brokers: [***YOUR-KAFKA-BROKER***]
- Protocol: SSL
- Custom TrustStore: yes
- Kafka TrustStore: /opt/flink/tls/kafka.truststore.bcfks
- Kafka TrustStore Type: bcfks
- Kafka TrustStore Password: changeit
Select Validate to confirm the configuration, then select Save.
-
Test the Kafka connection with a simple job.
DROP TABLE IF EXISTS kafka_1;
DROP TABLE IF EXISTS print_sink_kafka_1;
CREATE TABLE `kafka_1` (
  `id` INT,
  `name` STRING
) WITH (
  'connector' = 'kafka',
  /* Lines 457-464 omitted */
  'format' = 'json'
);
CREATE TABLE `print_sink_kafka_1` (
  `id` INT,
  `name` STRING
) WITH (
  'connector' = 'print'
);
INSERT INTO print_sink_kafka_1 SELECT * FROM kafka_1;
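To see rows arrive in the print sink, you can publish a few matching JSON records to the source topic; the following sketch assumes the Kafka command line tools are installed, that client-ssl.properties holds a client SSL configuration, and that [***YOUR-TOPIC***] is a placeholder for the topic set in the omitted kafka_1 connector options.
cat <<'EOF' > records.json
{"id": 1, "name": "Alice"}
{"id": 2, "name": "Bob"}
EOF
kafka-console-producer \
  --bootstrap-server [***YOUR-KAFKA-BROKER***] \
  --producer.config client-ssl.properties \
  --topic [***YOUR-TOPIC***] < records.json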
-
Configure secure access to S3-compatible storage with FIPS truststores.
You create a BCFKS truststore for the S3 endpoint, expose it, update Helm values, and configure SQL jobs to use the truststore.
-
Create and validate the truststore.
keytool -importcert \
  -alias s3 \
  /* Lines 490-496 omitted */ \
  -providerpath ccj-3.0.2.1.jar
keytool -list \
  -keystore truststore.bcfks \
  /* Lines 501-504 omitted */ \
  -providerpath ccj-3.0.2.1.jar
-
Create the Kubernetes secret.
kubectl create secret generic truststore -n flink --from-file=truststore.bcfks
-
Add storage configuration values and redeploy.
ssb:
  storageConfiguration:
    # Provide S3 access properties and mount the truststore secret.
helm upgrade -n flink --set flink-kubernetes-operator.watchNamespaces={flink} -f values.yaml csa-operator helm/csa-operator
-
Run a job that sets the Java truststore properties globally.
DROP TABLE IF EXISTS `datagen_table_2`;
DROP TABLE IF EXISTS `print_sink_2`;
CREATE TABLE `datagen_table_2` (
  `col_str` STRING,
  /* Lines 549-550 omitted */
  `col_ts` TIMESTAMP(3)
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '1'
);
CREATE TABLE `print_sink_2` (
  `col_str` STRING,
  /* Lines 558-559 omitted */
  `col_ts` TIMESTAMP(3)
) WITH (
  'connector' = 'print'
);
SET 'high-availability.type' = 'kubernetes';
SET 'high-availability.storageDir' = 's3://[***BUCKET***]/flink/recovery';
SET 'state.checkpoints.dir' = 's3://[***BUCKET***]/flink/checkpoints';
SET 'state.savepoints.dir' = 's3://[***BUCKET***]/flink/savepoints';
SET 'execution.checkpointing.interval' = '3s';
SET 'env.java.opts.all' = '-Djavax.net.ssl.trustStore=/opt/flink/tls/truststore.bcfks -Djavax.net.ssl.trustStoreType=bcfks -Djavax.net.ssl.trustStorePassword=changeit';
INSERT INTO print_sink_2 SELECT * FROM datagen_table_2;
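After the job has produced a few checkpoints, you can confirm that state is being written over the trusted connection; the following sketch assumes the AWS CLI is configured with credentials for your S3-compatible storage and that [***S3-ENDPOINT***] is a placeholder for its endpoint URL.
# List checkpoint objects written by the job.
aws s3 ls s3://[***BUCKET***]/flink/checkpoints/ \
  --recursive \
  --endpoint-url https://[***S3-ENDPOINT***]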
-
Configure LDAP authentication with a BCFKS truststore.
-
Create and verify the LDAP truststore.
keytool -importcert \
  -alias ldap \
  /* Lines 589-595 omitted */ \
  -providerpath ccj-3.0.2.1.jar
keytool -list \
  -keystore ldap_truststore.bcfks \
  /* Lines 600-603 omitted */ \
  -providerpath ccj-3.0.2.1.jar
-
Create the secret that stores LDAP connection properties.
kubectl create secret generic ssb-ldap -n flink \
  --from-literal=SSB_LDAP_URL=ldaps://[***YOUR-LDAP-SERVER***]:636 \
  /* Lines 611-621 omitted */ \
  --from-literal=SSB_LDAP_SSL_TRUSTSTORE_TYPE=bcfks
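Before redeploying, you can verify that the certificate the LDAP server presents chains to the CA you imported into ldap_truststore.bcfks; a quick sketch using standard OpenSSL tooling.
# Show the server certificate; compare its subject and issuer with the imported CA.
openssl s_client -connect [***YOUR-LDAP-SERVER***]:636 -showcerts </dev/null \
  | openssl x509 -noout -subject -issuer
-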
Update values.yaml and redeploy.
ssb:
  userManagement:
    # Reference the ssb-ldap secret and enable LDAP authentication.
helm upgrade -n flink --set flink-kubernetes-operator.watchNamespaces={flink} -f values.yaml csa-operator helm/csa-operator
-
Use JDBC and CDC connectors with PostgreSQL over SSL.
-
Create a secret that contains the PostgreSQL CA certificate.
kubectl create secret generic postgresql-ssl -n flink --from-file=ca.crt=postgres-ca.crt
-
Mount the secret in values.yaml.
ssb:
  podVolumes:
    # Reference the postgresql-ssl secret so Flink jobs can read ca.crt.
-
Configure PostgreSQL for logical decoding.
psql -U [***USERNAME***] -d [***DATABASE***] -c "ALTER SYSTEM SET wal_level = logical;"
# wal_level changes only take effect after a PostgreSQL server restart.
psql -U [***USERNAME***] -d [***DATABASE***] -c "SHOW wal_level;"
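You can also confirm that the CA certificate validates the server before wiring it into Flink jobs; the following sketch assumes psql runs on your workstation and that [***YOUR-POSTGRES-HOST***] is a placeholder for your database host.
# verify-full checks both the certificate chain and the host name.
psql "host=[***YOUR-POSTGRES-HOST***] port=5432 dbname=[***DATABASE***] user=[***USERNAME***] sslmode=verify-full sslrootcert=postgres-ca.crt" -c "SELECT version();"
-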
Run a JDBC example job.
DROP TABLE IF EXISTS jdbc_projects;
DROP TABLE IF EXISTS print_sink_jdbc;
CREATE TABLE jdbc_projects (
  id VARCHAR NOT NULL,
  /* Lines 679-682 omitted */
  active_environment_id INT
) WITH (
  'connector' = 'jdbc',
  /* Lines 685-688 omitted */
  'password' = '[***PASSWORD***]'
);
CREATE TABLE `print_sink_jdbc` (
  id VARCHAR NOT NULL,
  /* Lines 693-696 omitted */
  active_environment_id INT
) WITH (
  'connector' = 'print'
);
INSERT INTO print_sink_jdbc SELECT * FROM jdbc_projects;
-
Run a CDC example job.
DROP TABLE IF EXISTS projects_cdc;
DROP TABLE IF EXISTS cdc_print;
CREATE TABLE projects_cdc (
  `id` VARCHAR(32),
  /* Lines 719-722 omitted */
  `active_environment_id` INT
) WITH (
  'connector' = 'postgres-cdc',
  /* Lines 725-734 omitted */
  'debezium.properties.database.sslrootcert' = '/opt/flink/postgresql-ssl/ca.crt'
);
CREATE TABLE `ssb`.`ssb_default`.`cdc_print` (
  `id` VARCHAR(32),
  /* Lines 739-742 omitted */
  `active_environment_id` INT
) WITH (
  'connector' = 'print'
);
INSERT INTO cdc_print SELECT * FROM projects_cdc;
-
Use JDBC and CDC connectors with MySQL over SSL.
-
Create and validate the MySQL truststore.
keytool -importcert \
  -alias mysql \
  /* Lines 758-764 omitted */ \
  -providerpath ccj-3.0.2.1.jar
keytool -list \
  -keystore mysql_truststore.bcfks \
  /* Lines 769-772 omitted */ \
  -providerpath ccj-3.0.2.1.jar
-
Create the Kubernetes secret.
kubectl create secret generic mysql-truststore -n flink --from-file=mysql_truststore.bcfks=mysql_truststore.bcfks
-
Update values.yaml.
ssb:
  podVolumes:
    # Reference the mysql-truststore secret for Flink jobs.
-
Create sample data in MySQL.
USE demo;
CREATE TABLE users (
  id INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(255)
);
INSERT INTO users (name) VALUES
  ('Alice'),
  /* Lines 808-809 omitted */
  ('Charlie');
SELECT * FROM users;
-
Run the JDBC example job.
DROP TABLE IF EXISTS jdbc_projects;
DROP TABLE IF EXISTS print_sink_jdbc;
CREATE TABLE jdbc_projects (
  id INT NOT NULL,
  name VARCHAR
) WITH (
  'connector' = 'jdbc',
  /* Lines 824-827 omitted */
  'password' = 'password'
);
CREATE TABLE `print_sink_jdbc` (
  id INT NOT NULL,
  name VARCHAR
) WITH (
  'connector' = 'print'
);
INSERT INTO print_sink_jdbc SELECT * FROM jdbc_projects;
-
Run the CDC example job and review the current limitation.
DROP TABLE IF EXISTS projects_cdc;
DROP TABLE IF EXISTS cdc_print;
CREATE TABLE projects_cdc (
  `id` INT,
  `name` VARCHAR(255)
) WITH (
  'connector' = 'mysql-cdc',
  /* Lines 851-860 omitted */
  'debezium.database.ssl.truststore.password' = 'changeit'
);
CREATE TABLE `ssb`.`ssb_default`.`cdc_print` (
  `id` INT,
  `name` VARCHAR(255)
) WITH (
  'connector' = 'print'
);
INSERT INTO cdc_print SELECT * FROM projects_cdc;
