
The YouTube video for this section is still under creation. Please be patient ^^
Representation of an LLM agent that interacts with the Ollama inference server.

Name | Type | Description |
---|---|---|
name | str | Name of the agent. |
model_name | str | Name of the LLM model. |
system_prompt | str \| None | System prompt defining the LLM's behavior. |
model_settings | ModelSettings | Model configuration settings. |
api_token | str | API token for authentication. |
headers | dict | Custom headers for requests. |
endpoint | str \| None | Endpoint URL for the inference server. |
runtime_config | Dict | Runtime configuration for the agent. |
task_runtime_config | Dict | Runtime configuration for tasks. |
history | History | The conversation history. |
_tags | List[str] | Internal list of tags. |
thinking_tokens | Tuple[str, str] \| None | A tuple containing the start and end tokens of a thinking LLM, for instance `("<think>", "</think>")`. Setting this prevents the framework from getting sidetracked during the thinking steps and helps maintain focus on the final result. |

Parameter name | Parameter type | Parameter description |
---|---|---|
name | str | Name of the agent. Use something short and meaningful that doesn't contradict the system prompt. |
model_name | str | Name of the LLM model that will be sent to the inference server (e.g., 'llama3.1:latest' or 'mistral:latest'). |
system_prompt | str \| None (Optional) | Defines the way the LLM will behave (e.g., "You are a pirate" to have it talk like a pirate). Defaults to None. |
endpoint | str (Optional) | The Ollama endpoint URL. Defaults to "http://127.0.0.1:11434". |
headers | dict (Optional) | Custom headers to be sent with the inference request. Defaults to None. |
model_settings | OllamaModelSettings (Optional) | All settings that Ollama currently supports as model configuration. Defaults to None. |
runtime_config | Dict \| None (Optional) | Runtime configuration for the agent. Defaults to None. |
thinking_tokens | Tuple[str, str] \| None (Optional) | A tuple containing the start and end tokens of a thinking LLM, for instance `("<think>", "</think>")`. Setting this prevents the framework from getting sidetracked during the thinking steps and helps maintain focus on the final result. |

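Below is a minimal sketch of how these constructor parameters fit together. The class name `OllamaAgent` and the `yacana` import path are assumptions made for illustration; only the parameter names come from the table above.

```python
# Minimal sketch, assuming the agent class is named OllamaAgent and is
# importable from the yacana package.
from yacana import OllamaAgent, OllamaModelSettings

settings = OllamaModelSettings(temperature=0.4, num_ctx=4096)

agent = OllamaAgent(
    name="Assistant",                         # short, meaningful name
    model_name="llama3.1:latest",             # model tag known to the Ollama server
    system_prompt="You are a helpful AI assistant.",
    endpoint="http://127.0.0.1:11434",        # default local Ollama endpoint
    model_settings=settings,
    thinking_tokens=("<think>", "</think>"),  # only needed for 'thinking' models
)
```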

Parameter name | Parameter type | Parameter description |
---|---|---|
file_path | str | Path of the file in which to save the data. Specify the path + filename. Be wary when using relative paths. |
strip_api_token | bool (Optional) | If True, removes the API token from the exported data. Defaults to False. |
strip_headers | bool (Optional) | If True, removes headers from the exported data. Defaults to False. |

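For completeness, here is a hedged sketch of exporting the agent's state with the parameters above. The method name `export_state` is an assumption, not confirmed by this reference; `file_path`, `strip_api_token`, and `strip_headers` come from the table.

```python
# Sketch only: the method name `export_state` is assumed.
# The parameters are the ones documented above.
agent.export_state(
    file_path="./saved_agents/assistant.json",  # path + filename
    strip_api_token=True,                       # do not persist the API token
    strip_headers=True,                         # do not persist custom headers
)
```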
Representation of an LLM agent that interacts with the OpenAI API.

Name | Type | Description |
---|---|---|
name | str | Name of the agent. |
model_name | str | Name of the LLM model. |
system_prompt | str \| None | System prompt defining the LLM's behavior. |
model_settings | ModelSettings | Model configuration settings. |
api_token | str | API token for authentication. |
headers | dict | Custom headers for requests. |
endpoint | str \| None | Endpoint URL for the inference server. |
runtime_config | Dict | Runtime configuration for the agent. |
task_runtime_config | Dict | Runtime configuration for tasks. |
history | History | The conversation history. |
_tags | List[str] | Internal list of tags. |
thinking_tokens | Tuple[str, str] \| None | A tuple containing the start and end tokens of a thinking LLM, for instance `("<think>", "</think>")`. Setting this prevents the framework from getting sidetracked during the thinking steps and helps maintain focus on the final result. |

Parameter name | Parameter type | Parameter description |
---|---|---|
name | str | Name of the agent. Use something short and meaningful that doesn't contradict the system prompt. |
model_name | str | Name of the LLM model that will be sent to the inference server (e.g., 'gpt-4' or 'gpt-3.5-turbo'). |
system_prompt | str \| None (Optional) | Defines the way the LLM will behave (e.g., "You are a pirate" to have it talk like a pirate). Defaults to None. |
endpoint | str \| None (Optional) | The OpenAI endpoint URL. Defaults to None (uses OpenAI's default endpoint). |
api_token | str (Optional) | The API token for authentication. Defaults to an empty string. |
headers | dict (Optional) | Custom headers to be sent with the inference request. Defaults to None. |
model_settings | OpenAiModelSettings (Optional) | All settings that OpenAI currently supports as model configuration. Defaults to None. |
runtime_config | Dict \| None (Optional) | Runtime configuration for the agent. Defaults to None. |
thinking_tokens | Tuple[str, str] \| None (Optional) | A tuple containing the start and end tokens of a thinking LLM, for instance `("<think>", "</think>")`. Setting this prevents the framework from getting sidetracked during the thinking steps and helps maintain focus on the final result. |

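A minimal sketch of an OpenAI-backed agent using the parameters above. The class name `OpenAiAgent` and the import path are assumptions; reading the key from an environment variable is just one option.

```python
import os

# Minimal sketch, assuming the class is named OpenAiAgent and lives in the
# yacana package.
from yacana import OpenAiAgent, OpenAiModelSettings

agent = OpenAiAgent(
    name="Assistant",
    model_name="gpt-4",
    system_prompt="You are a helpful AI assistant.",
    api_token=os.environ.get("OPENAI_API_KEY", ""),
    model_settings=OpenAiModelSettings(temperature=0.7, max_completion_tokens=512),
)
```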

Parameter name | Parameter type | Parameter description |
---|---|---|
file_path | str | Path of the file in which to save the data. Specify the path + filename. Be wary when using relative paths. |
strip_api_token | bool (Optional) | If True, removes the API token from the exported data. Defaults to False. |
strip_headers | bool (Optional) | If True, removes headers from the exported data. Defaults to False. |

A class representing a task to be solved by an LLM agent.

Name | Type | Description |
---|---|---|
prompt | str | The task to solve. It is the prompt given to the assigned LLM. |
agent | GenericAgent | The agent assigned to this task. |
json_output | bool | If True, will force the LLM to answer as JSON. |
structured_output | Type[BaseModel] \| None | The expected structured output type for the task. |
tools | List[Tool] | A list of tools that the LLM will get access to when trying to solve this task. |
medias | List[str] \| None | An optional list of paths pointing to images on the filesystem. |
llm_stops_by_itself | bool | Only useful when the task is part of a GroupSolve(). Signals the assigned LLM that it will have to stop talking by its own means. |
use_self_reflection | bool | Only useful when the task is part of a GroupSolve(). Allows keeping the self-reflection process done by the LLM in the next GS iteration. |
forget | bool | When True, the Agent won't remember this task after completion. Useful for routing purposes. |
streaming_callback | Callable \| None | Optional callback for streaming responses. |
runtime_config | Dict \| None | Optional runtime configuration for the task. |
tags | List[str] \| None | Optional list of tags that will be added to all message(s) corresponding to this task. |

Parameter name | Parameter type | Parameter description |
---|---|---|
prompt | str | The task to solve. It is the prompt given to the assigned LLM. |
agent | GenericAgent | The agent assigned to this task. |
json_output | bool (Optional) | If True, will force the LLM to answer as JSON. Defaults to False. |
structured_output | Type[BaseModel] \| None (Optional) | The expected structured output type for the task. If provided, the LLM's response will be validated against this type. Defaults to None. |
tools | List[Tool] (Optional) | A list of tools that the LLM will get access to when trying to solve this task. Defaults to an empty list. |
medias | List[str] \| None (Optional) | An optional list of paths pointing to images on the filesystem. Defaults to None. |
llm_stops_by_itself | bool (Optional) | Only useful when the task is part of a GroupSolve(). Signals the assigned LLM that it will have to stop talking by its own means. Defaults to False. |
use_self_reflection | bool (Optional) | Only useful when the task is part of a GroupSolve(). Allows keeping the self-reflection process done by the LLM in the next GS iteration. Defaults to False. |
forget | bool (Optional) | When True, the Agent won't remember this task after completion. Useful for routing purposes. Defaults to False. |
streaming_callback | Callable \| None (Optional) | Optional callback for streaming responses. Defaults to None. |
runtime_config | Dict \| None (Optional) | Optional runtime configuration for the task. Defaults to None. |
tags | List[str] \| None (Optional) | Optional list of tags that will be added to all message(s) corresponding to this task. Defaults to None. |

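A minimal sketch of assigning a Task to an agent. The `.solve()` call and the fact that it returns a message with a `.content` attribute are assumptions made for illustration; the constructor arguments are the ones documented above.

```python
from yacana import Task  # import path assumed

# `agent` is any previously created agent (a GenericAgent subclass).
message = Task(
    "What is the capital of France? Answer in one word.",
    agent,
    tags=["geography"],  # tags attached to the resulting message(s)
).solve()                # assumed method that runs the task and returns a message

print(message.content)
```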

Parameter name | Parameter type | Parameter description |
---|---|---|
tool | Tool | The tool to add to the task's tool list. |

A class representing a tool that can be used by an LLM to perform specific tasks.

Name | Type | Description |
---|---|---|
tool_name | str | The name of the tool. |
function_description | str | A description of the tool's functionality. |
function_ref | Callable | Function reference that the tool will call. |
optional | bool | Indicates if the tool is optional. |
usage_examples | List[dict] | A list of usage examples for the tool. The dict keys should match the function parameters. |
max_custom_error | int | Maximum number of custom errors (raised from the function) allowed before stopping the task. |
max_call_error | int | Maximum number of call errors (e.g., Python can't find the function) allowed before stopping the task. |

Parameter name | Parameter type | Parameter description |
---|---|---|
tool_name | str | A name for the tool. Should be concise and related to what the tool does. |
function_description | str | A description for the tool. Should be concise and related to what the tool does. May contain an example of how to use it. Refer to the documentation. |
function_ref | Callable | The reference to a Python function that will be called with parameters provided by the LLM. |
optional | bool (Optional) | Allows the LLM, to a certain extent, to choose whether or not to use the given tool depending on the task to solve. Defaults to False. |
usage_examples | List[dict] (Optional) | A list of Python dictionary examples of how the tool should be called. The examples will be given to the LLM to help it call the tool correctly. Use this if the LLM struggles to call the tool successfully. Defaults to an empty list. |
max_custom_error | int (Optional) | The max number of errors a tool can raise. A tool should raise a ToolError(...) exception with a detailed explanation of why it failed. The LLM will get the exception message and try again, taking into account the new knowledge it gained from the error. When the max is reached, the MaxToolErrorIter() exception is thrown and the task is stopped. Defaults to 5. |
max_call_error | int (Optional) | The max number of times Yacana can fail to call a tool correctly. Note that Yacana uses the parameters given by the LLM to call the tool, so if they are invalid Yacana will have a hard time fixing the situation. You should give the LLM examples of how to call the tool, either in the tool description or with the @usage_examples attribute, to help the model. Defaults to 5. |

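A minimal sketch of a tool wired into a task. The function body and the `.solve()` call are illustrative assumptions; the Tool constructor arguments and the ToolError behavior follow the tables above.

```python
from yacana import Task, Tool, ToolError  # import paths assumed

def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    if not isinstance(city, str) or not city:
        # A detailed ToolError message lets the LLM correct itself and retry.
        raise ToolError("Parameter 'city' must be a non-empty string.")
    return f"The weather in {city} is sunny, 24°C."

weather_tool = Tool(
    tool_name="get_weather",
    function_description="Returns the current weather for a given city name.",
    function_ref=get_weather,
    usage_examples=[{"city": "Paris"}],  # dict keys match the function parameters
    max_custom_error=5,
    max_call_error=5,
)

# `agent` was created earlier; the tool is made available for this task only.
answer = Task("What is the weather like in Paris?", agent, tools=[weather_tool]).solve()
```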
For Yacana users or simple text-based interactions.
The smallest entity representing an interaction with the LLM. Can be manually added to the history.

Name | Type | Description |
---|---|---|
id | str | The unique identifier of the message. |
role | MessageRole | Who the message is from. |
content | str \| None | The actual message content. |
tool_calls | List[ToolCallFromLLM] \| None | List of tool calls associated with the message. |
medias | List[str] | List of media file paths. |
structured_output | Type[T] \| None | Pydantic model for structured output. |
tool_call_id | str \| None | ID of the associated tool call. |
tags | List[str] | List of tags associated with the message. |

Parameter name | Parameter type | Parameter description |
---|---|---|
role | MessageRole | The role of the message sender. |
content | str | The content of the message. |
tags | List[str] (Optional) | Optional list of tags associated with the message. |

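A minimal sketch of building a message and appending it to an agent's history. The `history.add_message(...)` method name is an assumption; Message, MessageRole, and the `history` attribute are documented in this reference.

```python
from yacana import Message, MessageRole  # import path assumed

note = Message(
    MessageRole.USER,
    "Remember that my favorite color is blue.",
    tags=["preferences"],
)

# Assumed method name for the "add a message to the history" operation
# described later in this reference.
agent.history.add_message(note)
```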

Parameter name | Parameter type | Parameter description |
---|---|---|
tags | List[str] | The tags to add to the message. |

Parameter name | Parameter type | Parameter description |
---|---|---|
members | Dict | Dictionary containing the message data. |

Parameter name | Parameter type | Parameter description |
---|---|---|
tag | str | The tag to remove from the message. |

Use for duck typing only.
The smallest entity representing an interaction with the LLM.
Use the child class type to determine what type of message this is, and the .role member to know who the message is from.

Name | Type | Description |
---|---|---|
id | str | The unique identifier of the message. |
role | MessageRole | Who the message is from. |
content | str \| None | The actual message content. |
tool_calls | List[ToolCallFromLLM] \| None | List of tool calls associated with the message. |
medias | List[str] | List of media file paths. |
structured_output | Type[T] \| None | Pydantic model for structured output. |
tool_call_id | str \| None | ID of the associated tool call. |
tags | List[str] | List of tags associated with the message. |

Parameter name | Parameter type | Parameter description |
---|---|---|
role | MessageRole | Who the message is from. See the MessageRole enum. |
content | str \| None (Optional) | The actual message content. Can be None if tool_calls is provided. |
tool_calls | List[ToolCallFromLLM] \| None (Optional) | An optional list of tool calls that are sent by the LLM to the user. |
medias | List[str] \| None (Optional) | An optional list of paths pointing to images or audio on the filesystem. |
structured_output | Type[T] \| None (Optional) | An optional pydantic model that can be used to store the result of a JSON response by the LLM. |
tool_call_id | str \| None (Optional) | The ID of the tool call this message is associated with. |
tags | List[str] \| None (Optional) | Optional list of tags associated with the message. |
id | uuid.UUID \| None (Optional) | The unique identifier of the message. If None, a new UUID will be generated. |


Parameter name | Parameter type | Parameter description |
---|---|---|
tags | List[str] | The tags to add to the message. |

Parameter name | Parameter type | Parameter description |
---|---|---|
members | Dict | Dictionary containing the message data. |

Parameter name | Parameter type | Parameter description |
---|---|---|
tag | str | The tag to remove from the message. |

ENUM: The available types of message creators.
User messages are the ones that are sent by the user to the LLM.
Assistant messages are the ones that are sent by the LLM to the user.
System messages are the ones that define the behavior of the LLM.
Tool messages are the ones containing the result of a tool call, which is then sent to the LLM. Not all LLMs support this type of message.

Name | Type | Description |
---|---|---|
USER | str | User messages are the ones that are sent by the user to the LLM. |
ASSISTANT | str | Assistant messages are the ones that are sent by the LLM to the user. |
SYSTEM | str | System messages are the ones that define the behavior of the LLM. |
TOOL | str | Tool messages are the ones containing the result of a tool call, which is then sent to the LLM. |

Container for an alternation of Messages representing a conversation between the user and an LLM.
To be precise, the history is a list of slots rather than actual messages. Each slot contains one or more messages.
This class does its best to hide the HistorySlot implementation, meaning that many methods allow you to deal with the messages directly, but under the hood it always manages the slot wrapper.

Name | Type | Description |
---|---|---|
slots | List[HistorySlot] | List of history slots. |
_checkpoints | Dict[str, list[HistorySlot]] | Dictionary of checkpoints for the history. |

Parameter name | Parameter type | Parameter description |
---|---|---|
message | GenericMessage | The message to add to the history. |

Parameter name | Parameter type | Parameter description |
---|---|---|
history_slot | HistorySlot | The slot to add to the history. |
position | int \| SlotPosition (Optional) | The position where to add the slot. Can be an integer or a SlotPosition enum value. Defaults to SlotPosition.BOTTOM. |


Parameter name | Parameter type | Parameter description |
---|---|---|
members | Dict | Dictionary containing the history data. |

Parameter name | Parameter type | Parameter description |
---|---|---|
message | Message | The message to delete. |

Parameter name | Parameter type | Parameter description |
---|---|---|
message_id | str | The ID of the message to delete. If the ID does not exist, it logs a warning. |

Parameter name | Parameter type | Parameter description |
---|---|---|
slot | HistorySlot | The slot to delete. |

Parameter name | Parameter type | Parameter description |
---|---|---|
slot_id | str | The ID of the slot to delete. |

Parameter name | Parameter type | Parameter description |
---|---|---|
index | int | The index of the message to return. |

Parameter name | Parameter type | Parameter description |
---|---|---|
tags | List[str] | The tags to filter messages by. |
strict | bool (Optional) | Controls the matching mode. If False (default), returns messages that have ANY of the specified tags; for example, searching for ["tag1"] will match messages with ["tag1", "tag2"]. This is useful for broad filtering. If True, returns messages that have EXACTLY the specified tags (and possibly more); for example, searching for ["tag1", "tag2"] will match messages with ["tag1", "tag2", "tag3"] but not messages with just ["tag1"] or ["tag2"]. This is useful for precise filtering. |

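A short sketch of the two matching modes described above. The accessor name `get_messages_by_tags` is an assumption; only the `tags` and `strict` parameters are documented here.

```python
# Assumed accessor name; the parameters are the documented ones.
broad = agent.history.get_messages_by_tags(["tag1"])                       # ANY-match (default)
exact = agent.history.get_messages_by_tags(["tag1", "tag2"], strict=True)  # must carry both tags
```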

Parameter name | Parameter type | Parameter description |
---|---|---|
id | str | The ID of the slot to return. |

Parameter name | Parameter type | Parameter description |
---|---|---|
index | int | The index of the slot to return. |

Parameter name | Parameter type | Parameter description |
---|---|---|
message | GenericMessage | The message to search for. |

Parameter name | Parameter type | Parameter description |
---|---|---|
uid | str | The unique identifier of the checkpoint to load. |

A slot is a container for messages. It can contain one or more messages.
Most of the time it will only contain one message, but when using `n=2` or `n=x` with the OpenAI API, it will contain multiple variations, hence multiple messages.

Name | Type | Description |
---|---|---|
id | str | The unique identifier for the slot. |
creation_time | int | The timestamp when the slot was created. |
messages | List[GenericMessage] | List of messages in the slot. |
raw_llm_json | str \| None | The raw LLM JSON response for the slot. |
main_message_index | int | The index of the main message in the slot. |

Parameter name | Parameter type | Parameter description |
---|---|---|
messages | List[GenericMessage] (Optional) | A list of messages. Each message is a variation of the main message (defined by the @main_message_index parameter). |
raw_llm_json | str (Optional) | The raw LLM JSON response for the slot. This is the raw JSON from the inference server. When using OpenAI this may contain more than one message hence the slot system acts as a container for the messages. |

Parameter name | Parameter type | Parameter description |
---|---|---|
message | GenericMessage | The message to add to the slot. |

Parameter name | Parameter type | Parameter description |
---|---|---|
members | Dict | Dictionary containing the slot data. |

Parameter name | Parameter type | Parameter description |
---|---|---|
message_index | int \| None (Optional) | The index of the message to return. If None, returns the currently selected message. |

Parameter name | Parameter type | Parameter description |
---|---|---|
message_index | int | The index of the message to select as the main message. |

Parameter name | Parameter type | Parameter description |
---|---|---|
raw_llm_json | str | The raw JSON response from the LLM. |

ENUM: The position of a slot in the history. This is only syntactic sugar to make the code more readable.

Name | Type | Description |
---|---|---|
BOTTOM | int | The slot is at the bottom of the history. |
TOP | int | The slot is at the top of the history. |

This class allows multiple agents to enter a conversation with each other.

Name | Type | Description |
---|---|---|
tasks | List[Task] | All tasks that must be solved during group chat. |
mode | EndChatMode | The modality to end a chat with multiple agents. |
reconcile_first_message | bool | Whether the first message from each LLM should be made available to the other. Only useful in dual chat. |
max_iter | int | The max number of iterations in a conversation. An iteration is complete when we get back to the first speaker. |
shift_owner | Task | The Task to which the shift message should be assigned. In the end it is rather the corresponding Agent than the Task that is involved here. |
shift_content | str \| None | A custom message to use as the shift message content instead of the opposite agent's response. |

Parameter name | Parameter type | Parameter description |
---|---|---|
tasks | List[Task] | All tasks that must be solved during group chat. |
end_chat | EndChat | Defines the modality of how and when LLMs stop chatting. |
reconcile_first_message | bool (Optional) | Whether the first message from each LLM should be made available to the other. Only useful in dual chat. Defaults to False. |
shift_message_owner | Task (Optional) | The Task to which the shift message should be assigned. In the end it is rather the corresponding Agent than the Task that is involved here. Defaults to None. |
shift_message_content | str (Optional) | A custom message to use as the shift message content instead of the opposite agent's response. Defaults to None. |

Defines the modality of how and when LLMs stop chatting.

Parameter name | Parameter type | Parameter description |
---|---|---|
mode | EndChatMode | The modality to end a chat with multiple agents. |
max_iterations | int (Optional) | The max number of iterations in a conversation. An iteration is complete when we get back to the first speaker. Defaults to 5. |

ENUM: All types of group chat completion.

Name | Type | Description |
---|---|---|
ALL_TASK_MUST_COMPLETE | str | The chat continues until all LLMs with @llm_stops_by_itself=True say they are finished. Set precise completion goals in the task prompt if you want this to actually work. |
ONE_LAST_CHAT_AFTER_FIRST_COMPLETION | str | After one agent reaches completion, the other agent gets the opportunity to answer one last time. |
ONE_LAST_GROUP_CHAT_AFTER_FIRST_COMPLETION | str | After the first completion arrives, all agents get one last table turn to speak before exiting the chat. |
END_CHAT_AFTER_FIRST_COMPLETION | str | Immediately stops the group chat after an agent has reached completion. |
MAX_ITERATIONS_ONLY | str | Agents won't be asked if they have fulfilled their objectives; instead the chat loops until the max number of iterations is reached. The max iteration count can be set in the EndChat() class. |

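A minimal sketch of a two-agent conversation combining GroupSolve, EndChat, and EndChatMode. The `.solve()` call on GroupSolve is an assumption; `agent1` and `agent2` are any previously created agents.

```python
from yacana import Task, GroupSolve, EndChat, EndChatMode  # import path assumed

task1 = Task("Argue that tabs are better than spaces.", agent1)
task2 = Task(
    "Argue that spaces are better than tabs. Stop once you both agree on a compromise.",
    agent2,
    llm_stops_by_itself=True,  # this agent decides when the goal is reached
)

GroupSolve(
    [task1, task2],
    EndChat(EndChatMode.ALL_TASK_MUST_COMPLETE, max_iterations=5),
).solve()  # assumed method that runs the conversation
```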
Settings for Ollama model configuration.

Name | Type | Description |
---|---|---|
mirostat | int (Optional) | Controls the model's creativity level (0: off, 1: on, 2: extra on). |
mirostat_eta | float (Optional) | Adjusts how quickly the model learns from context (e.g., 0.1). |
mirostat_tau | float (Optional) | Controls topic adherence (e.g., 5.0). |
num_ctx | int (Optional) | Determines context window size (e.g., 4096). |
num_gqa | int (Optional) | Controls parallel task handling (e.g., 8). |
num_gpu | int (Optional) | Sets GPU utilization (e.g., 50). |
num_thread | int (Optional) | Controls parallel processing (e.g., 8). |
repeat_last_n | int (Optional) | Controls repetition prevention window (e.g., 64). |
repeat_penalty | float (Optional) | Penalty for repeated content (e.g., 1.1). |
temperature | float (Optional) | Controls response randomness (e.g., 0.7). |
seed | int (Optional) | Random seed for reproducibility (e.g., 42). |
stop | List[str] (Optional) | Stop sequences for generation. |
tfs_z | float (Optional) | Controls response randomness reduction (e.g., 2.0). |
num_predict | int (Optional) | Maximum tokens to generate (e.g., 128). |
top_k | int (Optional) | Limits token selection (e.g., 40). |
top_p | float (Optional) | Controls token selection probability (e.g., 0.9). |

Parameter name | Parameter type | Parameter description |
---|---|---|
mirostat | int (Optional) | Controls the model's creativity level (0: off, 1: on, 2: extra on). |
mirostat_eta | float (Optional) | Adjusts how quickly the model learns from context (e.g., 0.1). |
mirostat_tau | float (Optional) | Controls topic adherence (e.g., 5.0). |
num_ctx | int (Optional) | Determines context window size (e.g., 4096). |
num_gqa | int (Optional) | Controls parallel task handling (e.g., 8). |
num_gpu | int (Optional) | Sets GPU utilization (e.g., 50). |
num_thread | int (Optional) | Controls parallel processing (e.g., 8). |
repeat_last_n | int (Optional) | Controls repetition prevention window (e.g., 64). |
repeat_penalty | float (Optional) | Penalty for repeated content (e.g., 1.1). |
temperature | float (Optional) | Controls response randomness (e.g., 0.7). |
seed | int (Optional) | Random seed for reproducibility (e.g., 42). |
stop | List[str] (Optional) | Stop sequences for generation. |
tfs_z | float (Optional) | Controls response randomness reduction (e.g., 2.0). |
num_predict | int (Optional) | Maximum tokens to generate (e.g., 128). |
top_k | int (Optional) | Limits token selection (e.g., 40). |
top_p | float (Optional) | Controls token selection probability (e.g., 0.9). |

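A short sketch of tuning a few of these settings; all keyword arguments are taken from the table above, while the import path is assumed.

```python
from yacana import OllamaModelSettings  # import path assumed

settings = OllamaModelSettings(
    temperature=0.7,      # response randomness
    num_ctx=4096,         # context window size
    repeat_penalty=1.1,   # discourage repeated content
    seed=42,              # reproducibility
    stop=["\n\n"],        # stop sequences
)
```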

Parameter name | Parameter type | Parameter description |
---|---|---|
members | Dict | Dictionary containing the settings and the 'type' field indicating which subclass to instantiate. |

Settings for OpenAI model configuration.

Name | Type | Description |
---|---|---|
audio | Any (Optional) | Parameters for audio output when using audio modality. |
frequency_penalty | float (Optional) | Penalty for token frequency (-2.0 to 2.0). |
logit_bias | Dict (Optional) | Token bias adjustments (-100 to 100). |
logprobs | bool (Optional) | Whether to return token log probabilities. |
max_completion_tokens | int (Optional) | Maximum tokens to generate. |
metadata | Dict (Optional) | Additional metadata (max 16 key-value pairs). |
modalities | List[str] (Optional) | Output types to generate (e.g., ["text", "audio"]). |
n | int (Optional) | Number of completion choices to generate. |
prediction | Any (Optional) | Configuration for predicted output. |
presence_penalty | float (Optional) | Penalty for token presence (-2.0 to 2.0). |
reasoning_effort | str (Optional) | Reasoning effort level ("low", "medium", "high"). |
seed | int (Optional) | Random seed for reproducibility. |
service_tier | str (Optional) | Latency tier for processing ("auto" or "default"). |
stop | str \| List (Optional) | Stop sequences for generation. |
store | bool (Optional) | Whether to store completion output. |
stream_options | Any (Optional) | Options for streaming response. |
temperature | float (Optional) | Sampling temperature (0 to 2). |
top_logprobs | int (Optional) | Number of top tokens to return (0 to 20). |
top_p | float (Optional) | Nucleus sampling parameter. |
user | str (Optional) | End-user identifier. |
web_search_options | Any (Optional) | Web search configuration. |

Parameter name | Parameter type | Parameter description |
---|---|---|
audio | Any (Optional) | Parameters for audio output when using audio modality. |
frequency_penalty | float (Optional) | Penalty for token frequency (-2.0 to 2.0). |
logit_bias | Dict (Optional) | Token bias adjustments (-100 to 100). |
logprobs | bool (Optional) | Whether to return token log probabilities. |
max_completion_tokens | int (Optional) | Maximum tokens to generate. |
metadata | Dict (Optional) | Additional metadata (max 16 key-value pairs). |
modalities | List[str] (Optional) | Output types to generate (e.g., ["text", "audio"]). |
n | int (Optional) | Number of completion choices to generate. |
prediction | Any (Optional) | Configuration for predicted output. |
presence_penalty | float (Optional) | Penalty for token presence (-2.0 to 2.0). |
reasoning_effort | str (Optional) | Reasoning effort level ("low", "medium", "high"). |
seed | int (Optional) | Random seed for reproducibility. |
service_tier | str (Optional) | Latency tier for processing ("auto" or "default"). |
stop | str \| List (Optional) | Stop sequences for generation. |
store | bool (Optional) | Whether to store completion output. |
stream_options | Any (Optional) | Options for streaming response. |
temperature | float (Optional) | Sampling temperature (0 to 2). |
top_logprobs | int (Optional) | Number of top tokens to return (0 to 20). |
top_p | float (Optional) | Nucleus sampling parameter. |
user | str (Optional) | End-user identifier. |
web_search_options | Any (Optional) | Web search configuration. |

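Similarly, a short sketch for the OpenAI settings; the keyword arguments come from the table above and the import path is assumed.

```python
from yacana import OpenAiModelSettings  # import path assumed

settings = OpenAiModelSettings(
    temperature=0.2,
    max_completion_tokens=512,
    presence_penalty=0.5,
    n=2,        # two completion choices, stored as two messages in one HistorySlot
    seed=42,
)
```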

Parameter name | Parameter type | Parameter description |
---|---|---|
members | Dict | Dictionary containing the settings and the 'type' field indicating which subclass to instantiate. |

Manages logging configuration for the application.

Name | Type | Description |
---|---|---|
LOG_LEVEL | str | Default log level for the application. |
AVAILABLE_LOG_LEVELS | list[str] | List of valid log levels that can be set. |

Parameter name | Parameter type | Parameter description |
---|---|---|
library_name | str | The name of the target library. |
log_level | str | The desired log level. Must be one of: "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL", or None. |

Parameter name | Parameter type | Parameter description |
---|---|---|
log_level | str \| None | The desired log level. Must be one of: "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL", or None. If None, logging will be disabled. |

Parameter name | Parameter type | Parameter description |
---|---|---|
log_level | str \| None (Optional) | The desired log level. Must be one of: "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL", or None. If None, uses the default LOG_LEVEL. Defaults to None. |

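A short sketch of the logging controls. The method names `set_log_level` and `set_library_log_level` follow the parameter tables above but are assumptions, as is the import path.

```python
from yacana import LoggerManager  # import path assumed

LoggerManager.set_log_level("DEBUG")                     # verbose framework logging
LoggerManager.set_log_level(None)                        # disable logging entirely
LoggerManager.set_library_log_level("httpx", "WARNING")  # quiet a third-party library
```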
Exception raised when a tool encounters incorrect parameters.

Parameter name | Parameter type | Parameter description |
---|---|---|
message | str | A descriptive message that will be given to the LLM to help fix the issue. Example: 'Parameter xxxx expected type int but got type string'. |

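A short sketch of raising ToolError from inside a tool function so the LLM can read the message and retry with corrected parameters. The import path is assumed.

```python
from yacana import ToolError  # import path assumed

def add_numbers(a, b) -> int:
    try:
        return int(a) + int(b)
    except (TypeError, ValueError):
        # The message is forwarded to the LLM so it can fix its next call.
        raise ToolError(f"Parameters 'a' and 'b' must be integers but got a={a!r}, b={b!r}.")
```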
Exception raised when maximum error iterations are reached.

Parameter name | Parameter type | Parameter description |
---|---|---|
message | str | Information about which specific iteration counter reached its maximum. |

Exception raised when the framework is used in an incoherent way.

Parameter name | Parameter type | Parameter description |
---|---|---|
message | str | A description of the illogical configuration or usage. |

Exception raised when a task has been successfully completed.
Exception raised when the model refuses to complete a task.

Parameter name | Parameter type | Parameter description |
---|---|---|
message | str | The reason for the task completion refusal. |

Exception raised when the model returns an unknown response.