Technical documentation - Yacana 0.2.1


OllamaAgent

Representation of an LLM agent that interacts with the Ollama inference server.


▶️ Member variables

Name Type Description
name str   Name of the agent.
model_name str   Name of the LLM model.
system_prompt str | None   System prompt defining the LLM's behavior.
model_settings ModelSettings   Model configuration settings.
api_token str   API token for authentication.
headers dict   Custom headers for requests.
endpoint str | None   Endpoint URL for the inference server.
runtime_config Dict   Runtime configuration for the agent.
task_runtime_config Dict   Runtime configuration for tasks.
history History   The conversation history.
_tags List[str]   Internal list of tags.
thinking_tokens Tuple[str, str] | None   A tuple containing the start and end tokens of a thinking LLM. For instance, "<think>" and "</think>" for Deepseek-R1.
Setting this prevents the framework from getting sidetracked during the thinking steps and helps maintain focus on the final result.

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
name str   Name of the agent. Use something short and meaningful that doesn't contradict the system prompt.
model_name str   Name of the LLM model that will be sent to the inference server (e.g., 'llama3.1:latest' or 'mistral:latest').
system_prompt str | None   Optional Defines the way the LLM will behave (e.g., "You are a pirate" to have it talk like a pirate).
Defaults to None.
endpoint str   Optional The Ollama endpoint URL. Defaults to "http://127.0.0.1:11434".
headers dict   Optional Custom headers to be sent with the inference request. Defaults to None.
model_settings OllamaModelSettings   Optional All settings that Ollama currently supports as model configuration. Defaults to None.
runtime_config Dict | None   Optional Runtime configuration for the agent. Defaults to None.
thinking_tokens Tuple[str, str] | None   Optional A tuple containing the start and end tokens of a thinking LLM. For instance, "<think>" and "</think>" for Deepseek-R1.
Setting this prevents the framework from getting sidetracked during the thinking steps and helps maintain focus on the final result.
Return type: OllamaAgent

export_to_file(...)

Exports the current agent configuration to a file.

Parameter name Parameter type Parameter description
file_path str   Path of the file in which to save the data. Specify the path + filename.
Be wary when using relative paths.
strip_api_token bool   Optional If True, removes the API token from the exported data. Defaults to False.
strip_headers bool   Optional If True, removes headers from the exported data. Defaults to False.
Return type: None
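
A minimal usage sketch based on the parameters documented above. It assumes OllamaAgent can be imported from the top-level yacana package and that a local Ollama server is running on the default endpoint; the model tag and output file name are illustrative.

```python
from yacana import OllamaAgent

# Create an agent backed by a local Ollama server (default endpoint http://127.0.0.1:11434).
agent = OllamaAgent(
    "ExampleAgent",
    "llama3.1:latest",
    system_prompt="You are a helpful assistant.",
    thinking_tokens=("<think>", "</think>"),  # only needed for thinking models such as Deepseek-R1
)

# Persist the agent configuration; the API token and headers can be stripped from the export.
agent.export_to_file("./example_agent.json", strip_api_token=True, strip_headers=True)
```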


OpenAiAgent

Representation of an LLM agent that interacts with the OpenAI API.


▶️ Member variables

Name Type Description
name str   Name of the agent.
model_name str   Name of the LLM model.
system_prompt str | None   System prompt defining the LLM's behavior.
model_settings ModelSettings   Model configuration settings.
api_token str   API token for authentication.
headers dict   Custom headers for requests.
endpoint str | None   Endpoint URL for the inference server.
runtime_config Dict   Runtime configuration for the agent.
task_runtime_config Dict   Runtime configuration for tasks.
history History   The conversation history.
_tags List[str]   Internal list of tags.
thinking_tokens Tuple[str, str] | None   A tuple containing the start and end tokens of a thinking LLM. For instance, "<think>" and "</think>" for Deepseek-R1.
Setting this prevents the framework from getting sidetracked during the thinking steps and helps maintain focus on the final result.

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
name str   Name of the agent. Use something short and meaningful that doesn't contradict the system prompt.
model_name str   Name of the LLM model that will be sent to the inference server (e.g., 'gpt-4' or 'gpt-3.5-turbo').
system_prompt str | None   Optional Defines the way the LLM will behave (e.g., "You are a pirate" to have it talk like a pirate).
Defaults to None.
endpoint str | None   Optional The OpenAI endpoint URL. Defaults to None (uses OpenAI's default endpoint).
api_token str   Optional The API token for authentication. Defaults to an empty string.
headers dict   Optional Custom headers to be sent with the inference request. Defaults to None.
model_settings OpenAiModelSettings   Optional All settings that OpenAI currently supports as model configuration. Defaults to None.
runtime_config Dict | None   Optional Runtime configuration for the agent. Defaults to None.
thinking_tokens Tuple[str, str] | None   Optional A tuple containing the start and end tokens of a thinking LLM. For instance, "<think>" and "</think>" for Deepseek-R1.
Setting this prevents the framework from getting sidetracked during the thinking steps and helps maintain focus on the final result.
Return type: OpenAiAgent

export_to_file(...)

Exports the current agent configuration to a file.

Parameter name Parameter type Parameter description
file_path str   Path of the file in which to save the data. Specify the path + filename.
Be wary when using relative paths.
strip_api_token bool   Optional If True, removes the API token from the exported data. Defaults to False.
strip_headers bool   Optional If True, removes headers from the exported data. Defaults to False.
Return type: None
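
A similar sketch for the OpenAI backend, assuming OpenAiAgent is importable from the top-level yacana package; the API token is a placeholder that must be replaced with a real key, and the model name is only an example.

```python
from yacana import OpenAiAgent

# Create an agent that talks to the OpenAI API (endpoint=None uses OpenAI's default endpoint).
agent = OpenAiAgent(
    "ExampleAgent",
    "gpt-4",
    system_prompt="You are a helpful assistant.",
    api_token="sk-...",  # placeholder; supply a real OpenAI API key
)

# Export the configuration without the API token so the file can be shared safely.
agent.export_to_file("./openai_agent.json", strip_api_token=True)
```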


Task

A class representing a task to be solved by an LLM agent.


▶️ Member variables

Name Type Description
prompt str   The task to solve. It is the prompt given to the assigned LLM.
agent GenericAgent   The agent assigned to this task.
json_output bool   If True, will force the LLM to answer as JSON.
structured_output Type[BaseModel] | None   The expected structured output type for the task.
tools List[Tool]   A list of tools that the LLM will get access to when trying to solve this task.
medias List[str] | None   An optional list of paths pointing to images on the filesystem.
llm_stops_by_itself bool   Only useful when the task is part of a GroupSolve(). Signals the assigned LLM
that it will have to stop talking by its own means.
use_self_reflection bool   Only useful when the task is part of a GroupSolve(). Allows keeping the self
reflection process done by the LLM in the next GS iteration.
forget bool   When True, the Agent won't remember this task after completion. Useful for
routing purposes.
streaming_callback Callable | None   Optional callback for streaming responses.
runtime_config Dict | None   Optional runtime configuration for the task.
tags List[str] | None   Optional list of tags that will be added to all message(s) corresponding to this task.

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
prompt str   The task to solve. It is the prompt given to the assigned LLM.
agent GenericAgent   The agent assigned to this task.
json_output bool   Optional If True, will force the LLM to answer as JSON. Defaults to False.
structured_output Type[BaseModel] | None   Optional The expected structured output type for the task. If provided, the LLM's response
will be validated against this type. Defaults to None.
tools List[Tool]   Optional A list of tools that the LLM will get access to when trying to solve this task.
Defaults to an empty list.
medias List[str] | None   Optional An optional list of paths pointing to images on the filesystem. Defaults to None.
llm_stops_by_itself bool   Optional Only useful when the task is part of a GroupSolve(). Signals the assigned LLM
that it will have to stop talking by its own means. Defaults to False.
use_self_reflection bool   Optional Only useful when the task is part of a GroupSolve(). Allows keeping the self
reflection process done by the LLM in the next GS iteration. Defaults to False.
forget bool   Optional When True, the Agent won't remember this task after completion. Useful for
routing purposes. Defaults to False.
streaming_callback Callable | None   Optional Callback for streaming responses. Defaults to None.
runtime_config Dict | None   Optional Runtime configuration for the task. Defaults to None.
tags List[str] | None   Optional List of tags that will be added to all message(s) corresponding to this task. Defaults to None.
Return type: Task

add_tool(...)

Add a tool to the list of tools available for this task.

Parameter name Parameter type Parameter description
tool Tool   The tool to add to the task's tool list.
Return type: None

solve()

Execute the task using the assigned LLM agent.
Return type: None


Tool

A class representing a tool that can be used by an LLM to perform specific tasks.


▶️ Member variables

Name Type Description
tool_name str   The name of the tool.
function_description str   A description of the tool's functionality.
function_ref Callable   Function reference that the tool will call.
optional bool   Indicates if the tool is optional.
usage_examples List[dict]   A list of usage examples for the tool. The dict keys should match the function parameters.
max_custom_error int   Maximum number of custom errors (raised from the function) allowed before stopping the task.
max_call_error int   Maximum number of call errors (e.g., Python cannot find the function) allowed before stopping the task.

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
tool_name str   A name for the tool. Should be concise and related to what the tool does.
function_description str   A description for the tool. Should be concise and related to what the tool does.
May contain an example of how to use. Refer to the documentation.
function_ref Callable   The reference to a python function that will be called with parameters provided by the LLM.
optional bool   Optional Allows the LLM, to a certain extent, to choose whether or not to use the given tool depending on the task to solve.
Defaults to False.
usage_examples List[dict]   Optional A list of python dictionary examples of how the tool should be called.
The examples will be given to the LLM to help it call the tool correctly.
Use if the LLM struggles to call the tool successfully. Defaults to an empty list.
max_custom_error int   Optional The max errors a tool can raise.
A tool should raise a ToolError(...) exception with a detailed explanation of why it failed.
The LLM will get the exception message and try again, taking into account the new knowledge it gained from the error.
When reaching the max iteration the MaxToolErrorIter() exception is thrown and the task is stopped. Defaults to 5.
max_call_error int   Optional The max number of times Yacana can fail to call a tool correctly.
Note that Yacana calls the tool with the parameters provided by the LLM, so if they are invalid Yacana will have a hard time fixing the situation.
You should give the LLM examples of how to call the tool, either in the tool description or via the @usage_examples attribute, to help the model.
Defaults to 5.
Return type: Tool
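
A sketch of a tool definition, assuming Tool, ToolError and the agent classes are importable from the top-level yacana package; the adder function, its usage example and the prompt are illustrative.

```python
from yacana import OllamaAgent, Task, Tool, ToolError

def add_numbers(a, b):
    # Raise ToolError with a clear explanation so the LLM can correct its next call.
    if not isinstance(a, int) or not isinstance(b, int):
        raise ToolError("Parameters 'a' and 'b' must both be integers.")
    return a + b

adder = Tool(
    "Adder",
    "Adds two integers and returns the result.",
    add_numbers,
    usage_examples=[{"a": 2, "b": 3}],  # dict keys match the function parameters
)

agent = OllamaAgent("MathAgent", "llama3.1:latest")
Task("What is 17 + 25? Use the Adder tool.", agent, tools=[adder]).solve()
```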


Message

For Yacana users and simple text-based interactions.
The smallest entity representing an interaction with the LLM. Can be manually added to the history.


▶️ Member variables

Name Type Description
id str   The unique identifier of the message.
role MessageRole   Who the message is from.
content str | None   The actual message content.
tool_calls List[ToolCallFromLLM] | None   List of tool calls associated with the message.
medias List[str]   List of media file paths.
structured_output Type[T] | None   Pydantic model for structured output.
tool_call_id str | None   ID of the associated tool call.
tags List[str]   List of tags associated with the message.

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
role MessageRole   The role of the message sender.
content str   The content of the message.
tags List[str]   Optional List of tags associated with the message.
Return type: Message

add_tags(...)

Add one or more tags to the message.

Parameter name Parameter type Parameter description
tags List[str]   The tags to add to the message.
Return type: None

create_instance(...)

@  static method
Create a new instance of a GenericMessage subclass from a dictionary.

Parameter name Parameter type Parameter description
members Dict   Dictionary containing the message data.
Return type: GenericMessage

get_as_pretty()

Get a pretty-printed string representation of the message.
Return type: str

get_message_as_dict()

Convert the message to a dictionary format.
Return type: dict

remove_tag(...)

Remove a tag from the message. Tags are used to filter messages in the history.

Parameter name Parameter type Parameter description
tag str   The tag to remove from the message.
Return type: None
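
A sketch of manually crafting a Message and appending it to an agent's history, assuming Message and MessageRole are exposed at the top level of the yacana package; the message content and tag are illustrative.

```python
from yacana import OllamaAgent, Message, MessageRole

agent = OllamaAgent("Assistant", "llama3.1:latest")

# Build a user message by hand and tag it so it can be filtered later.
note = Message(MessageRole.USER, "Remember that my favorite color is green.", tags=["preferences"])

# add_message() wraps the message in a new HistorySlot under the hood.
agent.history.add_message(note)

print(note.get_as_pretty())
```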


GenericMessage

Use for duck typing only.
The smallest entity representing an interaction with the LLM.
Use the child class type to determine what type of message this is and the .role member to know who the message is from.


▶️ Member variables

Name Type Description
id str   The unique identifier of the message.
role MessageRole   Who the message is from.
content str | None   The actual message content.
tool_calls List[ToolCallFromLLM] | None   List of tool calls associated with the message.
medias List[str]   List of media file paths.
structured_output Type[T] | None   Pydantic model for structured output.
tool_call_id str | None   ID of the associated tool call.
tags List[str]   List of tags associated with the message.

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
role MessageRole   Who the message is from. See the MessageRole enum.
content str | None   Optional The actual message content. Can be None if tool_calls is provided.
tool_calls List[ToolCallFromLLM] | None   Optional An optional list of tool calls that are sent by the LLM to the user.
medias List[str] | None   Optional An optional list of paths pointing to images or audio files on the filesystem.
structured_output Type[T] | None   Optional An optional pydantic model that can be used to store the result of a JSON response by the LLM.
tool_call_id str | None   Optional The ID of the tool call this message is associated with.
tags List[str] | None   Optional List of tags associated with the message.
id uuid.UUID | None   Optional The unique identifier of the message. If None, a new UUID will be generated.
Return type: GenericMessage

add_tags(...)

Add one or more tags to the message.

Parameter name Parameter type Parameter description
tags List[str]   The tags to add to the message.
Return type: None

create_instance(...)

@  static method
Create a new instance of a GenericMessage subclass from a dictionary.

Parameter name Parameter type Parameter description
members Dict   Dictionary containing the message data.
Return type: GenericMessage

get_as_pretty()

Get a pretty-printed string representation of the message.
Return type: str

get_message_as_dict()

Convert the message to a dictionary format.
Return type: dict

remove_tag(...)

Remove a tag from the message. Tags are used to filter messages in the history.

Parameter name Parameter type Parameter description
tag str   The tag to remove from the message.
Return type: None


MessageRole

ENUM: The available types of message creators.
User messages are the ones that are sent by the user to the LLM.
Assistant messages are the ones that are sent by the LLM to the user.
System messages are the ones that define the behavior of the LLM.
Tool messages are the ones containing the result of a tool call and then sent to the LLM. Not all LLMs support this type of message.


▶️ Member variables

Name Type Description
USER str   User messages are the ones that are sent by the user to the LLM.
ASSISTANT str   Assistant messages are the ones that are sent by the LLM to the user.
SYSTEM str   System messages are the ones that define the behavior of the LLM.
TOOL str   Tool messages are the ones containing the result of a tool call and then sent to the LLM.

▶️ Methods

__init__()

Return type: MessageRole


History

Container for an alternation of Messages representing a conversation between the user and an LLM.
To be precise, the history is a list of slots rather than actual messages. Each slot contains one or more messages.
This class does its best to hide the HistorySlot implementation, meaning that many methods let you deal with the messages directly while the slot wrapper is managed under the hood.


▶️ Member variables

Name Type Description
slots List[HistorySlot]   List of history slots.
_checkpoints Dict[str, list[HistorySlot]]   Dictionary of checkpoints for the history.

▶️ Methods

__init__()

Return type: History

add_message(...)

Adds a new message to the history by creating a new slot.

Parameter name Parameter type Parameter description
message GenericMessage   The message to add to the history.
Return type: HistorySlot

add_slot(...)

Adds a new slot to the history at the specified position.

Parameter name Parameter type Parameter description
history_slot HistorySlot   The slot to add to the history.
position int | SlotPosition   Optional The position where to add the slot. Can be an integer or a SlotPosition enum value.
Defaults to SlotPosition.BOTTOM.
Return type: None

clean()

Resets the history, preserving only the initial system prompt if present.
Return type: None

create_check_point()

Creates a checkpoint of the current history state.
Return type: str

create_instance(...)

@  static method
Creates a new instance of History from a dictionary.

Parameter name Parameter type Parameter description
members Dict   Dictionary containing the history data.
Return type: History

delete_message(...)

Deletes a message from all slots in the history.

Parameter name Parameter type Parameter description
message Message   The message to delete.
Return type: None

delete_message_by_id(...)

Deletes a message from all slots in the history by its ID.

Parameter name Parameter type Parameter description
message_id str   The ID of the message to delete. If the ID does not exist, it logs a warning.
Return type: None

delete_slot(...)

Deletes a slot from the history.

Parameter name Parameter type Parameter description
slot HistorySlot   The slot to delete.
Return type: None

delete_slot_by_id(...)

Deletes a slot from the history by its ID. If the ID does not exist, it logs a warning.

Parameter name Parameter type Parameter description
slot_id str   The ID of the slot to delete.
Return type: None

get_all_messages()

Returns all messages in the history.
Return type: List[GenericMessage]

get_last_message()

Returns the last message in the history.
Return type: GenericMessage

get_last_slot()

Returns the last slot of the history. Handy syntactic sugar for getting the last item from the conversation.
Return type: HistorySlot

get_message(...)

Returns the message at the given index.

Parameter name Parameter type Parameter description
index int   The index of the message to return.
Return type: GenericMessage

get_messages_as_dict()

Returns all messages in the history as a list of dictionaries.
Return type: List[dict]

get_messages_by_tags(...)

Returns messages that match the given tags based on the matching mode.

Parameter name Parameter type Parameter description
tags List[str]   The tags to filter messages by.
strict bool   Optional Controls the matching mode:
- If False (default), returns messages that have ANY of the specified tags.
For example, searching for ["tag1"] will match messages tagged ["tag1", "tag2"].
This is useful for broad filtering.
- If True, returns messages that have ALL of the specified tags (and possibly more).
For example, searching for ["tag1", "tag2"] will match messages tagged ["tag1", "tag2", "tag3"]
but not messages tagged with just ["tag1"] or ["tag2"].
This is useful for precise filtering.
Return type: List[GenericMessage]

get_slot_by_id(...)

Returns the slot with the given ID.

Parameter name Parameter type Parameter description
id str   The ID of the slot to return.
Return type: HistorySlot

get_slot_by_index(...)

Returns the slot at the given index.

Parameter name Parameter type Parameter description
index int   The index of the slot to return.
Return type: HistorySlot

get_slot_by_message(...)

Returns the slot containing the given message.

Parameter name Parameter type Parameter description
message GenericMessage   The message to search for.
Return type: HistorySlot

load_check_point(...)

Loads a checkpoint of the history. Perfect for a timey wimey rollback in time.

Parameter name Parameter type Parameter description
uid str   The unique identifier of the checkpoint to load.
Return type: None

pretty_print()

Prints the history to stdout with colored output.
Return type: None
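
A sketch combining checkpoints and tag filtering, assuming top-level imports from the yacana package and an OllamaAgent as shown earlier; only methods listed above are used, and the prompt and tag names are illustrative.

```python
from yacana import OllamaAgent, Task

agent = OllamaAgent("Assistant", "llama3.1:latest")

# Save the current state of the conversation before an experimental exchange.
checkpoint_id = agent.history.create_check_point()

Task("Summarize the plot of Hamlet in two sentences.", agent, tags=["summary"]).solve()

# Retrieve only the messages produced by the tagged task.
summaries = agent.history.get_messages_by_tags(["summary"])

# Roll the conversation back to the saved state if the detour is no longer needed.
agent.history.load_check_point(checkpoint_id)
agent.history.pretty_print()
```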


HistorySlot

A slot is a container for messages. It can contain one or more messages.
Most of the time it will only contain one message, but when using `n=2` or `n=x` in the OpenAI API, it will contain multiple variations, hence multiple messages.


▶️ Member variables

Name Type Description
id str   The unique identifier for the slot.
creation_time int   The timestamp when the slot was created.
messages List[GenericMessage]   List of messages in the slot.
raw_llm_json str | None   The raw LLM JSON response for the slot.
main_message_index int   The index of the main message in the slot.

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
messages List[GenericMessage]   Optional A list of messages. Each message is a variation of the main message (defined by the
@main_message_index parameter).
raw_llm_json str   Optional The raw LLM JSON response for the slot. This is the raw JSON from the inference server.
When using OpenAI this may contain more than one message hence the slot system acts as a
container for the messages.
Return type: HistorySlot

add_message(...)

Adds a new message to the slot.

Parameter name Parameter type Parameter description
message GenericMessage   The message to add to the slot.
Return type: None

create_instance(...)

@  static method
Creates an instance of the HistorySlot class from a dictionary.
Mainly used to import the object from a file.

Parameter name Parameter type Parameter description
members Dict   Dictionary containing the slot data.
Return type: HistorySlot

get_all_messages()

Returns all the messages in the slot.
Return type: List[GenericMessage]

get_main_message_index()

Returns the index of the main message in the slot.
Return type: int

get_message(...)

Returns the main message of the slot or the one at the given index if index is provided.

Parameter name Parameter type Parameter description
message_index int | None   Optional The index of the message to return. If None, returns the currently selected message.
Return type: GenericMessage

keep_only_selected_message()

Keeps only the currently selected message in the slot and deletes all the others.
If there's only one message, this method does nothing.
Return type: None

set_main_message_index(...)

A slot can contain any number of concurrent messages, but only one can be the main slot message and actually be part of the History.
This method sets the index of the main message within the list of available messages in the slot.

Parameter name Parameter type Parameter description
message_index int   The index of the message to select as the main message.
Return type: None

set_raw_llm_json(...)

Sets the raw LLM JSON response for the slot.
This is the raw JSON from the inference server. When using OpenAI this may contain more than one message hence the slot system acts as a container for the messages.

Parameter name Parameter type Parameter description
raw_llm_json str   The raw JSON response from the LLM.
Return type: None
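
A sketch of inspecting a slot, assuming top-level imports from the yacana package and a history that already contains at least one exchange. With Ollama there is usually a single message per slot, so selecting an alternative index mainly matters when OpenAI's n > 1 is used; the prompt is illustrative.

```python
from yacana import OllamaAgent, Task

agent = OllamaAgent("Assistant", "llama3.1:latest")
Task("Give me a one-line haiku about the sea.", agent).solve()

# The last slot wraps the latest LLM answer (possibly several variations with OpenAI's n > 1).
slot = agent.history.get_last_slot()

print(slot.get_message().content)    # currently selected (main) message
print(len(slot.get_all_messages()))  # number of variations stored in the slot

# If several variations exist, pick another one as the slot's main message.
if len(slot.get_all_messages()) > 1:
    slot.set_main_message_index(1)
    slot.keep_only_selected_message()
```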


SlotPosition

ENUM: The position of a slot in the history. This is only syntactic sugar to make the code more readable.


▶️ Member variables

Name Type Description
BOTTOM int   The slot is at the bottom of the history.
TOP int   The slot is at the top of the history.

▶️ Methods

__init__()

Return type: SlotPosition


GroupSolve

This class allows multiple agents to enter a conversation with each other.


▶️ Member variables

Name Type Description
tasks List[Task]   All tasks that must be solved during group chat.
mode EndChatMode   The modality to end a chat with multiple agents.
reconcile_first_message bool   Whether the first message from each LLM should be made available to the other. Only useful in dual chat.
max_iter int   The max number of iterations in a conversation. An iteration is complete when we get back to the first speaker.
shift_owner Task   The Task to which the shift message should be assigned. In the end it is rather the corresponding Agent than the Task that is involved here.
shift_content str | None   A custom message instead of using the opposite agent response as shift message content.

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
tasks List[Task]   All tasks that must be solved during group chat.
end_chat EndChat   Defines the modality of how and when LLMs stop chatting.
reconcile_first_message bool   Optional Whether the first message from each LLM should be made available to the other. Only useful in dual chat.
Defaults to False.
shift_message_owner Task   Optional The Task to which the shift message should be assigned. In the end it is rather the corresponding Agent
than the Task that is involved here. Defaults to None.
shift_message_content str   Optional A custom message instead of using the opposite agent response as shift message content. Defaults to None.
Return type: GroupSolve

solve()

Starts the group chat and allows all LLMs to solve their assigned tasks.
Return type: None


EndChat

Defines the modality of how and when LLMs stop chatting.


▶️ Member variables

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
mode EndChatMode   The modality to end a chat with multiple agents.
max_iterations int   Optional The max number of iterations in a conversation. An iteration is complete when we get back to the first speaker.
Defaults to 5.
Return type: EndChat


EndChatMode

ENUM: All types of group chat completion.


▶️ Member variables

Name Type Description
ALL_TASK_MUST_COMPLETE str   Chat will continue until all LLMs with @llm_stops_by_itself=True say they are finished.
Set precise completion goals in the task prompt if you want this to actually work.
ONE_LAST_CHAT_AFTER_FIRST_COMPLETION str   One agent will have the opportunity to respond after another agent reaches completion, allowing it to answer one last time.
ONE_LAST_GROUP_CHAT_AFTER_FIRST_COMPLETION str   All agents will have one last turn around the table to speak before exiting the chat after the first completion arrives.
END_CHAT_AFTER_FIRST_COMPLETION str   Immediately stops group chat after an agent has reached completion.
MAX_ITERATIONS_ONLY str   Agents won't be asked if they have fulfilled their objectives; instead, the chat loops until the max number of iterations is reached.
The max iterations can be set in the EndChat() class.

▶️ Methods

__init__()

Return type: EndChatMode
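
A sketch of a two-agent group chat, assuming GroupSolve, EndChat, EndChatMode and the other classes are importable from the top-level yacana package; the agent names, prompts and iteration count are illustrative.

```python
from yacana import OllamaAgent, Task, GroupSolve, EndChat, EndChatMode

proposer = OllamaAgent("Proposer", "llama3.1:latest", system_prompt="You propose ideas.")
critic = OllamaAgent("Critic", "llama3.1:latest", system_prompt="You challenge ideas constructively.")

# Each agent gets its own task; with MAX_ITERATIONS_ONLY the chat simply loops until
# the max number of iterations is reached.
GroupSolve(
    [
        Task("Propose a name for a space exploration board game.", proposer),
        Task("Critique the proposed name and suggest improvements.", critic),
    ],
    EndChat(EndChatMode.MAX_ITERATIONS_ONLY, max_iterations=3),
).solve()
```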


OllamaModelSettings

Settings for Ollama model configuration.


▶️ Member variables

Name Type Description
mirostat int   Optional Controls the model's creativity level (0: off, 1: on, 2: extra on).
mirostat_eta float   Optional Adjusts how quickly the model learns from context (e.g., 0.1).
mirostat_tau float   Optional Controls topic adherence (e.g., 5.0).
num_ctx int   Optional Determines context window size (e.g., 4096).
num_gqa int   Optional Controls parallel task handling (e.g., 8).
num_gpu int   Optional Sets GPU utilization (e.g., 50).
num_thread int   Optional Controls parallel processing (e.g., 8).
repeat_last_n int   Optional Controls repetition prevention window (e.g., 64).
repeat_penalty float   Optional Penalty for repeated content (e.g., 1.1).
temperature float   Optional Controls response randomness (e.g., 0.7).
seed int   Optional Random seed for reproducibility (e.g., 42).
stop List[str]   Optional Stop sequences for generation.
tfs_z float   Optional Controls response randomness reduction (e.g., 2.0).
num_predict int   Optional Maximum tokens to generate (e.g., 128).
top_k int   Optional Limits token selection (e.g., 40).
top_p float   Optional Controls token selection probability (e.g., 0.9).

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
mirostat int   Optional Controls the model's creativity level (0: off, 1: on, 2: extra on).
mirostat_eta float   Optional Adjusts how quickly the model learns from context (e.g., 0.1).
mirostat_tau float   Optional Controls topic adherence (e.g., 5.0).
num_ctx int   Optional Determines context window size (e.g., 4096).
num_gqa int   Optional Controls parallel task handling (e.g., 8).
num_gpu int   Optional Sets GPU utilization (e.g., 50).
num_thread int   Optional Controls parallel processing (e.g., 8).
repeat_last_n int   Optional Controls repetition prevention window (e.g., 64).
repeat_penalty float   Optional Penalty for repeated content (e.g., 1.1).
temperature float   Optional Controls response randomness (e.g., 0.7).
seed int   Optional Random seed for reproducibility (e.g., 42).
stop List[str]   Optional Stop sequences for generation.
tfs_z float   Optional Controls response randomness reduction (e.g., 2.0).
num_predict int   Optional Maximum tokens to generate (e.g., 128).
top_k int   Optional Limits token selection (e.g., 40).
top_p float   Optional Controls token selection probability (e.g., 0.9).
Return type: OllamaModelSettings

create_instance(...)

@  static method
Create a new instance of a ModelSettings subclass from a dictionary.

Parameter name Parameter type Parameter description
members Dict   Dictionary containing the settings and the 'type' field indicating
which subclass to instantiate.
Return type: ModelSettings

get_settings()

Get all current settings as a dictionary.
Return type: dict

reset()

Reset all settings to their initial values.
Return type: None
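
A sketch of passing model settings to an agent, assuming OllamaModelSettings and OllamaAgent are importable from the top-level yacana package; the values shown are the examples listed above, not recommendations.

```python
from yacana import OllamaAgent, OllamaModelSettings

settings = OllamaModelSettings(
    temperature=0.7,  # response randomness
    num_ctx=4096,     # context window size
    seed=42,          # reproducibility
    stop=["\n\n"],    # stop sequences
)

agent = OllamaAgent("Assistant", "llama3.1:latest", model_settings=settings)

print(settings.get_settings())  # current settings as a dictionary
settings.reset()                # back to initial values
```

OpenAiModelSettings follows the same pattern, with the OpenAI-specific parameters listed in the next section.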


OpenAiModelSettings

Settings for OpenAI model configuration.


▶️ Member variables

Name Type Description
audio Any   Optional Parameters for audio output when using audio modality.
frequency_penalty float   Optional Penalty for token frequency (-2.0 to 2.0).
logit_bias Dict   Optional Token bias adjustments (-100 to 100).
logprobs bool   Optional Whether to return token log probabilities.
max_completion_tokens int   Optional Maximum tokens to generate.
metadata Dict   Optional Additional metadata (max 16 key-value pairs).
modalities List[str]   Optional Output types to generate (e.g., ["text", "audio"]).
n int   Optional Number of completion choices to generate.
prediction Any   Optional Configuration for predicted output.
presence_penalty float   Optional Penalty for token presence (-2.0 to 2.0).
reasoning_effort str   Optional Reasoning effort level ("low", "medium", "high").
seed int   Optional Random seed for reproducibility.
service_tier str   Optional Latency tier for processing ("auto" or "default").
stop str | List   Optional Stop sequences for generation.
store bool   Optional Whether to store completion output.
stream_options Any   Optional Options for streaming response.
temperature float   Optional Sampling temperature (0 to 2).
top_logprobs int   Optional Number of top tokens to return (0 to 20).
top_p float   Optional Nucleus sampling parameter.
user str   Optional End-user identifier.
web_search_options Any   Optional Web search configuration.

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
audio Any   Optional Parameters for audio output when using audio modality.
frequency_penalty float   Optional Penalty for token frequency (-2.0 to 2.0).
logit_bias Dict   Optional Token bias adjustments (-100 to 100).
logprobs bool   Optional Whether to return token log probabilities.
max_completion_tokens int   Optional Maximum tokens to generate.
metadata Dict   Optional Additional metadata (max 16 key-value pairs).
modalities List[str]   Optional Output types to generate (e.g., ["text", "audio"]).
n int   Optional Number of completion choices to generate.
prediction Any   Optional Configuration for predicted output.
presence_penalty float   Optional Penalty for token presence (-2.0 to 2.0).
reasoning_effort str   Optional Reasoning effort level ("low", "medium", "high").
seed int   Optional Random seed for reproducibility.
service_tier str   Optional Latency tier for processing ("auto" or "default").
stop str | List   Optional Stop sequences for generation.
store bool   Optional Whether to store completion output.
stream_options Any   Optional Options for streaming response.
temperature float   Optional Sampling temperature (0 to 2).
top_logprobs int   Optional Number of top tokens to return (0 to 20).
top_p float   Optional Nucleus sampling parameter.
user str   Optional End-user identifier.
web_search_options Any   Optional Web search configuration.
Return type: OpenAiModelSettings

create_instance(...)

@  static method
Create a new instance of a ModelSettings subclass from a dictionary.

Parameter name Parameter type Parameter description
members Dict   Dictionary containing the settings and the 'type' field indicating
which subclass to instantiate.
Return type: ModelSettings

get_settings()

Get all current settings as a dictionary.
Return type: dict

reset()

Reset all settings to their initial values.
Return type: None


LoggerManager

Manages logging configuration for the application.


▶️ Member variables

Name Type Description
LOG_LEVEL str   Default log level for the application.
AVAILABLE_LOG_LEVELS list[str]   List of valid log levels that can be set.

▶️ Methods

__init__()

Return type: LoggerManager

set_library_log_level(...)

@  static method
Set the logging level for a specific Python library.

Parameter name Parameter type Parameter description
library_name str   The name of the target library.
log_level str   The desired log level. Must be one of: "DEBUG", "INFO", "WARNING",
"ERROR", "CRITICAL", or None.
Return type: None

set_log_level(...)

@  static method
Set the logging level for the application.

Parameter name Parameter type Parameter description
log_level str | None   The desired log level. Must be one of: "DEBUG", "INFO", "WARNING",
"ERROR", "CRITICAL", or None. If None, logging will be disabled.
Return type: None

setup_logging(...)

@  static method
Initialize logging configuration for the application.

Parameter name Parameter type Parameter description
log_level str | None   Optional The desired log level. Must be one of: "DEBUG", "INFO", "WARNING",
"ERROR", "CRITICAL", or None. If None, uses the default LOG_LEVEL.
Defaults to None.
Return type: None
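
A sketch of adjusting log verbosity, assuming LoggerManager is importable from the top-level yacana package; the third-party library name passed to set_library_log_level is illustrative.

```python
from yacana import LoggerManager

# Show debug output from Yacana itself.
LoggerManager.set_log_level("DEBUG")

# Quiet down a noisy third-party library (illustrative target).
LoggerManager.set_library_log_level("httpx", "WARNING")

# Disable Yacana logging entirely.
LoggerManager.set_log_level(None)
```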


ToolError

Exception raised when a tool encounters incorrect parameters.


▶️ Member variables

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
message str   A descriptive message that will be given to the LLM to help fix the issue.
Example: 'Parameter xxxx expected type int but got type string'.
Return type: ToolError


MaxToolErrorIter

Exception raised when maximum error iterations are reached.


▶️ Member variables

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
message str   Information about which specific iteration counter reached its maximum.
Return type: MaxToolErrorIter
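
A sketch of handling repeated tool failures, assuming MaxToolErrorIter and the other classes are importable from the top-level yacana package and reusing the Tool pattern shown earlier; the deliberately failing tool and the prompt are illustrative.

```python
from yacana import OllamaAgent, Task, Tool, ToolError, MaxToolErrorIter

def get_user_age(name):
    # Always fails on purpose to demonstrate the error loop.
    raise ToolError(f"No record found for user '{name}'. Provide a registered user name.")

age_tool = Tool("GetUserAge", "Returns the age of a registered user.", get_user_age, max_custom_error=2)

agent = OllamaAgent("Assistant", "llama3.1:latest")

try:
    Task("How old is Bob?", agent, tools=[age_tool]).solve()
except MaxToolErrorIter as exc:
    # Raised once the tool has failed more times than max_custom_error allows.
    print(f"Task aborted: {exc}")
```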


IllogicalConfiguration

Exception raised when the framework is used in an incoherent way.


▶️ Member variables

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
message str   A description of the illogical configuration or usage.
Return type: IllogicalConfiguration


ReachedTaskCompletion

Exception raised when a task has been successfully completed.


▶️ Member variables

▶️ Methods

__init__()

Return type: ReachedTaskCompletion


TaskCompletionRefusal

Exception raised when the model refuses to complete a task.


▶️ Member variables

▶️ Methods

__init__(...)


Parameter name Parameter type Parameter description
message str   The reason for the task completion refusal.
Return type: TaskCompletionRefusal


UnknownResponseFromLLM

Exception raised when the model returns an unknown response.


▶️ Member variables

▶️ Methods

__init__()

Return type: UnknownResponseFromLLM