Bedrock

Description:

Invokes different types of models with the given prompt via Amazon Bedrock.

Tags:

Bedrock, Amazon, AWS, AI

Properties:

The list below describes each property, its default value (if any), whether it is sensitive, and whether it supports the NiFi Expression Language.

Display Name: AWS Access Key ID
API Name: AWS Access Key ID
Description: AWS Access Key ID to be used for authentication.
Sensitive Property: true
Supports Expression Language: true (will be evaluated using Environment variables only)

Display Name: AWS Secret Key
API Name: AWS Secret Key
Description: AWS Secret Key to be used for authentication.
Sensitive Property: true
Supports Expression Language: true (will be evaluated using Environment variables only)

Display Name: Region
API Name: Region
Default Value: us-west-2
Description: The AWS Region to connect to.

Display Name: Model
API Name: Model
Default Value: name
Description: Model to be used during invoke.

Display Name: Prompt
API Name: Prompt
Description: The prompt to generate completions for, encoded as a string. If left empty, the flowfile content will be used instead.
Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)

Display Name: Temperature
API Name: Temperature
Default Value: 0.5
Description: The temperature is a number between 0 and 1 which affects how deterministic the model's response is. With lower temperatures, the response becomes more deterministic. Higher temperature leads to a more creative and random response.
Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)

Display Name: Top P
API Name: Top P
Default Value: 1
Description: Instead of temperature sampling, the model considers the outcomes of tokens that have the top_p probability mass. For example, a Top P value of 0.1 indicates that only the tokens representing the highest 10% probability mass are taken into consideration.
Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)

Display Name: Max Tokens
API Name: Max Tokens
Default Value: 300
Description: Determines the maximum number of tokens that can be used to generate the response by the model.
Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)

Display Name: GuardRail Identifier
API Name: GuardRail Identifier
Description: Identifier of the GuardRail to be used during invoke.
Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)

Display Name: GuardRail Version
API Name: GuardRail Version
Description: Version of the GuardRail to be used during invoke.
Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)
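
For illustration, the sketch below shows how these properties could map onto a request against the Bedrock runtime API using boto3. The model ID, request-body schema, and credential handling are assumptions for a text model; the exact body format differs per model family, and this is not the processor's actual implementation.

```python
import json

import boto3

# Hypothetical values standing in for the properties described above.
region = "us-west-2"               # Region
model_id = "anthropic.claude-v2"   # Model (example ID only; use one available in your region)
prompt = "Summarize the following text: ..."  # Prompt
temperature = 0.5                  # Temperature
top_p = 1.0                        # Top P
max_tokens = 300                   # Max Tokens

# Credentials correspond to the AWS Access Key ID / AWS Secret Key properties;
# boto3 can also pick them up from the environment instead.
client = boto3.client(
    "bedrock-runtime",
    region_name=region,
    aws_access_key_id="...",
    aws_secret_access_key="...",
)

# The body below follows the Anthropic Claude text-completion schema;
# other model families on Bedrock expect different request bodies.
body = {
    "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
    "temperature": temperature,
    "top_p": top_p,
    "max_tokens_to_sample": max_tokens,
}

response = client.invoke_model(modelId=model_id, body=json.dumps(body))
print(json.loads(response["body"].read()))
```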

Example Use Cases:

Use Case:

Invokes different types of models with the given prompt via Amazon Bedrock.

Keywords:

bedrock, amazon, aws, embedding, vector, text

Configuration:

Configure an AWS Credential Service with the required credential data.

Select a region.

Set "Model" with a model that is supported in the given region and should be used during invoke.

Set "Prompt" with data that should be sent to Bedrock.

Text models have additional properties to set: "Temperature", "Top P" and "Max Tokens". Adjusting these properties can help fine-tune the request.
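
These three properties map onto model-specific fields in the request body. As an illustration, the snippet below assumes the Amazon Titan Text request schema; the field names vary for other model families.

```python
# Hypothetical mapping of the Temperature, Top P and Max Tokens properties onto
# the Amazon Titan Text request body; other model families use different field names.
titan_body = {
    "inputText": "Summarize the following text: ...",
    "textGenerationConfig": {
        "temperature": 0.5,    # Temperature property
        "topP": 1.0,           # Top P property
        "maxTokenCount": 300,  # Max Tokens property
    },
}
```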

Optionally, Guardrails can be used during model invocation. To include them, populate the "GuardRail Identifier" and "GuardRail Version" properties with a valid guardrail identifier and version.
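
At the API level, these two properties correspond to guardrail parameters on the Bedrock runtime invoke call. A minimal sketch with boto3 follows; the model ID, request body, guardrail ARN and version are placeholders, not values from the processor.

```python
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

# The guardrail identifier and version below are placeholders corresponding to
# the "GuardRail Identifier" and "GuardRail Version" properties.
response = client.invoke_model(
    modelId="anthropic.claude-v2",
    body=json.dumps({
        "prompt": "\n\nHuman: Hello\n\nAssistant:",
        "max_tokens_to_sample": 300,
    }),
    guardrailIdentifier="arn:aws:bedrock:us-west-2:123456789012:guardrail/example",
    guardrailVersion="1",
)
print(json.loads(response["body"].read()))
```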