Complete list of model-related configurations for setting up the Hue SQL AI Assistant
Review the service, model, and semantic search-related configurations used to customize the AI services and models that you want to use with the SQL AI Assistant, and learn how to specify them in the Hue Advanced Configuration Snippet in Cloudera Manager.
List of service and model-related configurations
You can configure the AI services and models you want to use by going to the Hue Advanced Configuration Snippet in Cloudera Manager and adding the following lines:
[desktop]
[[ai_interface]]
[***CONFIG-KEY1***]='[***VALUE***]'
[***CONFIG-KEY2***]='[***VALUE***]'
[[semantic_search]]
[***CONFIG-KEY1***]='[***VALUE***]'
[***CONFIG-KEY2***]='[***VALUE***]'
AI interface-related configurations
Here is the complete list of configurations under the [[ai_interface]] section, which allow you to specify the service and model to be used (an example snippet follows the table):
AI interface config key | Description |
---|---|
service | The API service to be used for AI tasks. AI is disabled when no service is configured. For example, Cloudera AI Workbench and Cloudera AI Inference are API services. |
service_version | API service version to be used for AI tasks |
trusted_service | Indicates whether the LLM is trusted. Turn this on to disable the warning. The default value is True. |
model | The AI model you want to use for AI tasks. For example, gpt and llama. |
model_name | The fully qualified name of the model to be used. For example, gpt-3.5-turbo-16k. |
model_ref | Placeholder for adding the access key of the specific model you want to use. |
base_url | Service API base URL. |
add_table_data | When enabled, sample rows from the table are added to the prompt. The default value is True. |
table_data_cache_size | Size of the LRU cache used for storing table sample data. |
auto_fetch_table_meta_limit | Number of tables to load initially from a database. |
token | Service API secret token. |
token_script | Provides a secure way to get the service API secret token. |
enabled_sql_tasks | A comma-separated list of SQL-related AI tasks available in the Editor. |
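For illustration, here is a minimal sketch of an [[ai_interface]] section with concrete values. The model and model_name values reuse the examples from the table above; the service identifier, base URL, and token are hypothetical placeholders, and the correct values depend on the AI service you use:
[desktop]
[[ai_interface]]
# Hypothetical service identifier; use the identifier for your AI service
service='openai'
# Model family and fully qualified model name (examples from the table above)
model='gpt'
model_name='gpt-3.5-turbo-16k'
# Hypothetical endpoint and secret token for the service API
base_url='https://api.example.com/v1'
token='example-secret-token'
# Mark the LLM as trusted to disable the warning (the default is True)
trusted_service=True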
Semantic search-related configurations
Specify the semantic search-related configurations used for retrieval-augmented generation (RAG) under the [[semantic_search]] section, as listed in the following table (an example snippet follows the table):
Semantic search config key | Description |
---|---|
relevancy | The technology you want to use for semantic search. Acceptable values are vector_search or vector_db. |
embedding_model | The model you want to use for data embedding. It must be compatible with SentenceTransformer. |
top_k | Number of top-ranking items returned by semantic search. |
cache_size | Size of the LRU cache used for storing embeddings. |
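As a sketch, a [[semantic_search]] section might look like the following. The relevancy value is one of the two acceptable values from the table above; the embedding model, top_k, and cache_size values are illustrative (all-MiniLM-L6-v2 is one example of a SentenceTransformer-compatible model) and should be tuned for your deployment:
[desktop]
[[semantic_search]]
# One of the acceptable values: vector_search or vector_db
relevancy='vector_search'
# Example of a SentenceTransformer-compatible embedding model
embedding_model='all-MiniLM-L6-v2'
# Illustrative values: number of top-ranking items to return and embedding cache size
top_k=5
cache_size=500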