Service and model-related configurations for setting up the Hue SQL AI Assistant
Review the service, model, and semantic search-related configurations that you can use to customize the AI services and models for the SQL AI Assistant, and learn how to specify them in the Hue Advanced Configuration Snippet in Cloudera Manager.
List of service and model-related configurations
You can configure the AI services and models you want to use by going to Cloudera Manager > Clusters > Hue service > Configuration > Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini and adding the following lines:
[desktop]
[[ai_interface]]
[***CONFIG-KEY1***]='[***VALUE***]'
[***CONFIG-KEY2***]='[***VALUE***]'
[[semantic_search]]
[***CONFIG-KEY1***]='[***VALUE***]'
[***CONFIG-KEY2***]='[***VALUE***]'
Specify the service and model-related configurations under the [[ai_interface]] section as listed in the following table:
| AI interface config key | Description |
|---|---|
| service | API service to be used for AI tasks. AI is disabled when a service is not configured. For example, `azure`, `openai`, `bedrock`, and `ai_assistant`. |
| trusted_service | Indicates whether the LLM service is trusted. Turn on to disable the warning. The default value is `True`. |
| model | The AI model you want to use for AI tasks. For example, `gpt` and `llama`. |
| model_name | The fully qualified name of the model to be used. For example, `gpt-3.5-turbo-16k`. |
| base_url | Service API base URL. |
| add_table_data | When enabled, sample rows from the table are added to the prompt. The default value is `True`. |
| table_data_cache_size | Size of the LRU cache used for storing table sample data. |
| auto_fetch_table_meta_limit | Number of tables to load initially from a database. |
| token | Service API secret token. |
| token_script | Provides a secure way to obtain the service API secret token. |
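As an illustration, a completed [[ai_interface]] section for the OpenAI service might look like the following. The token script path, cache size, and table limit shown here are placeholder values, not recommendations:

```ini
[desktop]
[[ai_interface]]
# Example service selection; azure, bedrock, and ai_assistant are also accepted
service='openai'
# Suppress the untrusted-LLM warning
trusted_service='True'
model='gpt'
model_name='gpt-3.5-turbo-16k'
base_url='https://api.openai.com/v1'
# Include sample table rows in the prompt
add_table_data='True'
# Illustrative cache and fetch limits
table_data_cache_size='100'
auto_fetch_table_meta_limit='50'
# Either set token directly, or point token_script at an executable
# that prints the secret token (hypothetical path shown)
token_script='/path/to/print_token.sh'
```

Using token_script instead of token keeps the secret out of the configuration file itself.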
List of semantic search-related configurations
Specify the semantic search-related configurations used for retrieval-augmented generation (RAG) under the [[semantic_search]] section, as listed in the following table:
| Semantic search config key | Description |
|---|---|
| relevancy | The technology you want to use for semantic search. Acceptable values are `vector_search` or `vector_db`. |
| embedding_model | The model you want to use for data embedding. This must be compatible with SentenceTransformer. |
| cache_size | Size of the LRU cache used for storing embeddings. |
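A hypothetical [[semantic_search]] section is sketched below. The embedding model named here is one example of a SentenceTransformer-compatible model, and the cache size is illustrative:

```ini
[desktop]
[[semantic_search]]
# vector_search or vector_db
relevancy='vector_search'
# Any SentenceTransformer-compatible model (example value)
embedding_model='all-MiniLM-L6-v2'
# Illustrative LRU cache size for stored embeddings
cache_size='1000'
```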