Technical documentation - Yacana 0.2.0
OllamaAgent
▶️ Member variables
| Name | Type | Description |
|---|---|---|
| name | str | Name of the agent. Can be used during conversations. Use something short and meaningful that doesn't contradict the system prompt. |
| model_name | str | Name of the LLM model that will be sent to the inference server, for instance 'llama3.1:latest' or 'mistral:latest'. |
| system_prompt | str \| None | Defines the way the LLM will behave. For instance, set the system prompt to "You are a pirate" to have it talk like a pirate. |
| model_settings | ModelSettings | All settings that Ollama currently supports as model configuration. This allows modifying deep behavioral patterns of the LLM. Still needs to be tested with other inference servers. |
| headers | dict | Custom headers to send with inference requests. |
| endpoint | str \| None | Endpoint URL of the inference server. |
| runtime_config | Dict | Runtime configuration for the agent. |
| task_runtime_config | Dict | Runtime configuration for tasks. |
| history | History | The conversation history. |
▶️ Methods
➡️ __init__(...)
Initialize a new OllamaAgent instance.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| name | str | Name of the agent. Use something short and meaningful that doesn't contradict the system prompt. |
| model_name | str | Name of the LLM model that will be sent to the inference server (e.g., 'llama3.1:latest' or 'mistral:latest'). |
| system_prompt | str \| None | Defines the way the LLM will behave (e.g., "You are a pirate" to have it talk like a pirate). Defaults to None. |
| endpoint | str | The Ollama endpoint URL. Defaults to "http://127.0.0.1:11434". |
| headers | dict | Custom headers to be sent with the inference request. Defaults to None. |
| model_settings | OllamaModelSettings | All settings that Ollama currently supports as model configuration. Defaults to None. |
| runtime_config | Dict \| None | Runtime configuration for the agent. Defaults to None. |
Return type: OllamaAgent
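A minimal construction sketch, assuming the class is exported from the top-level yacana package (agent name and prompt are illustrative):

```python
from yacana import OllamaAgent  # assumed top-level export

# Agent pointed at a local Ollama server (the default endpoint shown explicitly).
agent = OllamaAgent(
    "pirate_agent",
    "llama3.1:latest",
    system_prompt="You are a pirate.",
    endpoint="http://127.0.0.1:11434",
)
```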
➡️ export_to_file(...)
Exports the current agent configuration to a file.
This contains all the agent's data and history, meaning you can use the
import_from_file method to load this agent back again and continue where you left off.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| file_path | str | Path of the file in which to save the data (path + filename). Be wary when using relative paths. |
| strip_api_token | bool | If True, removes the API token from the exported data. Defaults to False. |
| strip_headers | bool | If True, removes headers from the exported data. Defaults to False. |
Return type: None
➡️ import_from_file(...)
Loads the state previously exported from the export_to_file method.
This will return an Agent in the same state as it was before it was saved,
allowing you to resume the agent conversation even after the program has exited.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| file_path | str | The path to the file from which to load the Agent. |
Return type: OllamaAgent
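A save/restore round trip might look like this sketch (file path is illustrative; the call style assumes import_from_file is usable as a class-level constructor, as the return type suggests):

```python
# Persist the agent's full state, history included, without the API token.
agent.export_to_file("./pirate_agent.json", strip_api_token=True)

# Later, resume the conversation exactly where it left off.
restored_agent = OllamaAgent.import_from_file("./pirate_agent.json")
```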
OpenAiAgent
▶️ Member variables
| Name | Type | Description |
|---|---|---|
| name | str | Name of the agent. Use something short and meaningful that doesn't contradict the system prompt. |
| model_name | str | Name of the LLM model that will be sent to the inference server (e.g., 'gpt-4' or 'gpt-3.5-turbo'). |
| system_prompt | str \| None | Defines the way the LLM will behave (e.g., "You are a pirate" to have it talk like a pirate). |
| model_settings | OpenAiModelSettings | All settings that OpenAI currently supports as model configuration. |
| api_token | str | The API token for authentication. |
| headers | dict | Custom headers to be sent with the inference request. |
| endpoint | str \| None | The OpenAI endpoint URL. |
| runtime_config | Dict | Runtime configuration for the agent. |
| task_runtime_config | Dict | Runtime configuration for tasks. |
| history | History | The conversation history. |
▶️ Methods
➡️ __init__(...)
Initialize a new OpenAiAgent instance.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| name | str | Name of the agent. Use something short and meaningful that doesn't contradict the system prompt. |
| model_name | str | Name of the LLM model that will be sent to the inference server (e.g., 'gpt-4' or 'gpt-3.5-turbo'). |
| system_prompt | str \| None | Defines the way the LLM will behave (e.g., "You are a pirate" to have it talk like a pirate). Defaults to None. |
| endpoint | str \| None | The OpenAI endpoint URL. Defaults to None (uses OpenAI's default endpoint). |
| api_token | str | The API token for authentication. Defaults to an empty string. |
| headers | dict | Custom headers to be sent with the inference request. Defaults to None. |
| model_settings | OpenAiModelSettings | All settings that OpenAI currently supports as model configuration. Defaults to None. |
| runtime_config | Dict \| None | Runtime configuration for the agent. Defaults to None. |
Return type: OpenAiAgent
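A minimal sketch, assuming the class is exported from the top-level yacana package and that the API key is read from the environment:

```python
import os

from yacana import OpenAiAgent  # assumed top-level export

agent = OpenAiAgent(
    "assistant",
    "gpt-4",
    system_prompt="You are a helpful assistant.",
    api_token=os.environ["OPENAI_API_KEY"],
)
```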
➡️ export_to_file(...)
Exports the current agent configuration to a file.
This contains all the agent's data and history, meaning you can use the
import_from_file method to load this agent back again and continue where you left off.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| file_path | str | Path of the file in which to save the data (path + filename). Be wary when using relative paths. |
| strip_api_token | bool | If True, removes the API token from the exported data. Defaults to False. |
| strip_headers | bool | If True, removes headers from the exported data. Defaults to False. |
Return type: None
➡️ import_from_file(...)
Loads the state previously exported from the export_to_file method.
This will return an Agent in the same state as it was before it was saved,
allowing you to resume the agent conversation even after the program has exited.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| file_path | str | The path to the file from which to load the Agent. |
Return type: OpenAiAgent
Task
▶️ Member variables
| Name | Type | Description |
|---|---|---|
| prompt | str | The task to solve. It is the prompt given to the assigned LLM. |
| agent | GenericAgent | The agent assigned to this task. |
| json_output | bool | If True, forces the LLM to answer as JSON. |
| structured_output | Type[BaseModel] \| None | The expected structured output type for the task. If provided, the LLM's response will be validated against this type. |
| tools | List[Tool] | A list of tools that the LLM gets access to when trying to solve this task. |
| medias | List[str] \| None | An optional list of paths pointing to images on the filesystem. |
| llm_stops_by_itself | bool | Only useful when the task is part of a GroupSolve(). Signals the assigned LLM that it will have to stop talking by its own means. |
| use_self_reflection | bool | Only useful when the task is part of a GroupSolve(). Keeps the self-reflection produced by the LLM in the next GroupSolve iteration. |
| forget | bool | When True, the agent won't remember this task after completion. Useful for routing purposes. |
| streaming_callback | Callable \| None | Optional callback for streaming responses. |
| runtime_config | Dict \| None | Optional runtime configuration for the task. |
| tags | List[str] \| None | Optional list of tags added to all messages corresponding to this task. |
▶️ Methods
➡️ __init__(...)
Initialize a new Task instance.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| prompt | str | The task to solve. It is the prompt given to the assigned LLM. |
| agent | GenericAgent | The agent assigned to this task. |
| json_output | bool | If True, forces the LLM to answer as JSON. Defaults to False. |
| structured_output | Type[BaseModel] \| None | The expected structured output type for the task. If provided, the LLM's response will be validated against this type. Defaults to None. |
| tools | List[Tool] | A list of tools that the LLM gets access to when trying to solve this task. Defaults to an empty list. |
| medias | List[str] \| None | An optional list of paths pointing to images on the filesystem. Defaults to None. |
| llm_stops_by_itself | bool | Only useful when the task is part of a GroupSolve(). Signals the assigned LLM that it will have to stop talking by its own means. Defaults to False. |
| use_self_reflection | bool | Only useful when the task is part of a GroupSolve(). Keeps the self-reflection produced by the LLM in the next GroupSolve iteration. Defaults to False. |
| forget | bool | When True, the agent won't remember this task after completion. Useful for routing purposes. Defaults to False. |
| streaming_callback | Callable \| None | Optional callback for streaming responses. Defaults to None. |
| runtime_config | Dict \| None | Optional runtime configuration for the task. Defaults to None. |
| tags | List[str] \| None | Optional list of tags added to the message(s) corresponding to the user's prompt. Defaults to None. |
Return type: Task
➡️ add_tool(...)
Add a tool to the list of tools available for this task.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| tool | Tool | The tool to add to the task's tool list. |
Return type: None
➡️ solve()
Execute the task using the assigned LLM agent.
This method will call the assigned LLM to perform inference on the task's prompt.
If tools are available, the LLM may use them, potentially making multiple calls
to the inference server.
Return type: Message
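A usage sketch, reusing an agent built as shown earlier (prompts are illustrative):

```python
from yacana import Task  # assumed top-level export

# One Task == one prompt solved by the assigned agent.
message = Task("Tell me a joke about parrots.", agent).solve()
print(message.content)

# json_output=True forces the answer to be JSON.
raw_json: str = Task("List 3 pirate names as a JSON array.", agent, json_output=True).solve().content
```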
▶️ Properties
➡️ uuid
Get the unique identifier for this task.
Return type: str
History
▶️ Member variables
| Name | Type | Description |
|---|---|---|
| slots | List[HistorySlot] | List of history slots. |
▶️ Methods
➡️ add_slot(...)
Adds a new slot to the history at the specified position.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| history_slot | HistorySlot | The slot to add to the history. |
| position | int \| SlotPosition | The position at which to add the slot. Can be an integer or a SlotPosition enum value. Defaults to SlotPosition.BOTTOM. |
Return type: None
➡️ delete_slot(...)
Deletes a slot from the history.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| index | int | The index of the slot to delete. |
Return type: None
➡️ get_last_slot()
Returns the last slot of the history. Convenient syntactic sugar for getting the last item of the conversation.
Return type: HistorySlot
➡️ get_slot_by_index(...)
Returns the slot at the given index.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| index | int | The index of the slot to return. |
Return type: HistorySlot
➡️ get_slot_by_id(...)
Returns the slot with the given ID.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| id | str | The ID of the slot to return. |
Return type: HistorySlot
➡️ get_slot_by_message(...)
Returns the slot containing the given message.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| message | GenericMessage | The message to search for. |
Return type: HistorySlot
➡️ add_message(...)
Adds a new message to the history by creating a new slot.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| message | GenericMessage | The message to add to the history. |
Return type: None
➡️ get_messages_as_dict()
Returns all messages in the history as a list of dictionaries.
Return type: List[Dict]
➡️ pretty_print()
Prints the history to stdout with colored output.
Return type: None
➡️ create_check_point()
Creates a checkpoint of the current history state and returns its unique identifier.
Return type: str
➡️ load_check_point(...)
Loads a checkpoint of the history. Perfect for a timey wimey rollback in time.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| uid | str | The unique identifier of the checkpoint to load. |
Return type: None
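A rollback sketch, with the agent from earlier (the uid is whatever create_check_point() returned):

```python
from yacana import Task  # assumed top-level export

# Snapshot the conversation before a risky detour.
checkpoint_uid: str = agent.history.create_check_point()

Task("Explore a dead-end idea.", agent).solve()

# Rewind the history as if the detour never happened.
agent.history.load_check_point(checkpoint_uid)
```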
➡️ get_message(...)
Returns the message at the given index.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| index | int | The index of the message to return. |
Return type: GenericMessage
➡️ get_messages_by_tags(...)
Returns messages that match the given tags based on the matching mode.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| tags | List[str] | The tags to filter messages by. |
| strict | bool | Controls the matching mode. If False, returns messages that have ANY of the specified tags. If True, returns only messages that have ALL of the specified tags (they may also carry others). Defaults to False. |
Return type: Sequence[GenericMessage]
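A filtering sketch (tag names are illustrative; agent as above):

```python
from yacana import Task  # assumed top-level export

# Tag the task so its messages can be found later.
Task("Summarize our conversation.", agent, tags=["summary"]).solve()

for msg in agent.history.get_messages_by_tags(["summary"]):
    print(msg.get_as_pretty())
```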
➡️ get_last_message()
Returns the last message in the history.
Return type: GenericMessage
➡️ get_all_messages()
Returns all messages in the history.
Return type: List[Message]
➡️ clean()
Resets the history, preserving only the initial system prompt if present.
Return type: None
HistorySlot
▶️ Member variables
| Name | Type | Description |
|---|---|---|
| id | str | The unique identifier for the slot. |
| creation_time | int | The timestamp when the slot was created. |
| messages | List[GenericMessage] | List of messages in the slot. |
| raw_llm_json | str \| None | The raw LLM JSON response for the slot. |
| main_message_index | int | The index of the main message in the slot. |
▶️ Methods
➡️ __init__(...)
Initialize a new HistorySlot instance.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| messages | List[GenericMessage] \| None | A list of messages. Each message is a variation of the main message defined by the @main_message_index parameter. Defaults to None. |
| raw_llm_json | str \| None | The raw JSON response from the inference server. With OpenAI this may contain more than one message, hence the slot acts as a container for them. Defaults to None. |
Return type: HistorySlot
➡️ set_main_message_index(...)
A slot can contain any number of concurrent messages, but only one can be the main slot message and actually be part of the History.
This method sets the index of the main message within the list of available messages in the slot.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| message_index | int | The index of the message to select as the main message. |
Return type: None
➡️ get_main_message_index()
Returns the index of the main message in the slot.
Return type: int
➡️ add_message(...)
Adds a new message to the slot.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| message | GenericMessage | The message to add to the slot. |
Return type: None
➡️ get_message(...)
Returns the main message of the slot or the one at the given index if index is provided.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| message_index | int \| None | The index of the message to return. If None, returns the currently selected message. |
Return type: GenericMessage
➡️ get_all_messages()
Returns all the messages in the slot.
Return type: List[GenericMessage]
➡️ set_raw_llm_json(...)
Sets the raw LLM JSON response for the slot.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| raw_llm_json | str | The raw JSON response from the LLM. |
Return type: None
➡️ delete_message_by_index(...)
Deletes a message from the slot by index.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| message_index | int | The index of the message to delete. |
Return type: None
➡️ delete_message_by_id(...)
Deletes a message from the slot by id.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| message_id | str | The ID of the message to delete. |
Return type: None
➡️ keep_only_selected_message()
Keeps only the currently selected message in the slot and deletes all the others.
If there's only one message, this method does nothing.
Return type: None
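A sketch of arbitrating between message variations in a slot (assumes the last inference produced several candidates, e.g., an OpenAI request with n > 1; agent as above):

```python
slot = agent.history.get_last_slot()

# Inspect every candidate message held by the slot.
for i, msg in enumerate(slot.get_all_messages()):
    print(i, msg.content)

# Promote the second variation and drop the others.
slot.set_main_message_index(1)
slot.keep_only_selected_message()
```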
Message
▶️ Member variables
| Name | Type | Description |
|---|---|---|
| id | str | The unique identifier of the message. |
| role | MessageRole | Who the message is from. |
| content | str \| None | The actual message content. Can be None if tool_calls is provided. |
| tool_calls | List[ToolCallFromLLM] \| None | List of tool calls associated with the message. |
| medias | List[str] | List of media file paths. |
| structured_output | Type[T] \| None | Pydantic model for structured output. |
| tool_call_id | str \| None | ID of the associated tool call. |
| tags | List[str] | List of tags associated with the message. |
▶️ Methods
➡️ __init__(...)
Initialize a new Message instance.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| role | MessageRole | The role of the message sender. |
| content | str | The content of the message. |
| tags | List[str] | Optional list of tags associated with the message. Defaults to None. |
Return type: Message
➡️ get_message_as_dict()
Convert the message to a dictionary format.
Return type: dict
➡️ get_as_pretty()
Get a pretty-printed string representation of the message.
Return type: str
➡️ add_tags(...)
Add tags to the message.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| tags | List[str] | The tags to add to the message. |
Return type: None
➡️ remove_tag(...)
Remove a tag from the message. Tags are used to filter messages in the history.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| tag | str | The tag to remove from the message. |
Return type: None
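A sketch of hand-crafting a tagged message (assumes Message and MessageRole are top-level exports and that MessageRole has a USER member; agent as above):

```python
from yacana import Message, MessageRole  # assumed top-level exports

note = Message(MessageRole.USER, "Remember: the captain hates rum.", tags=["lore"])
note.add_tags(["pinned"])

# Slot the hand-made message into the agent's conversation history.
agent.history.add_message(note)
```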
Tool
▶️ Member variables
| Name | Type | Description |
|---|---|---|
| tool_name | str | A name for the tool. Should be concise and related to what the tool does. |
| function_description | str | A description for the tool. Should be concise and related to what the tool does. May contain an example of how to use it. Refer to the documentation. |
| function_ref | Callable | The reference to a Python function that will be called with parameters provided by the LLM. |
| optional | bool | Lets the LLM, to a certain extent, decide whether or not to use the tool depending on the task to solve. |
| usage_examples | List[dict] | A list of Python dictionaries showing how the tool should be called. These examples are given to the LLM to help it call the tool correctly. |
| max_custom_error | int | The maximum number of errors a tool can raise. A tool should raise a ToolError(...) exception with a detailed explanation of why it failed. |
| max_call_error | int | The maximum number of times Yacana can fail to call the tool correctly. |
▶️ Methods
➡️ __init__(...)
Initialize a new Tool instance.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| tool_name | str | A name for the tool. Should be concise and related to what the tool does. |
| function_description | str | A description for the tool. Should be concise and related to what the tool does. May contain an example of how to use it. |
| function_ref | Callable | The reference to a Python function that will be called with parameters provided by the LLM. |
| optional | bool | Whether the tool is optional. Defaults to False. |
| usage_examples | List[dict] \| None | Examples of how to use the tool. Defaults to None. |
| max_custom_error | int | Maximum number of custom errors allowed. Defaults to 5. |
| max_call_error | int | Maximum number of call errors allowed. Defaults to 5. |
Return type: Tool
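A sketch of wrapping a plain Python function as a tool (the adder function is illustrative; note the didactic ToolError so the LLM can correct itself; agent as above):

```python
from yacana import Task, Tool, ToolError  # assumed top-level exports

def add_numbers(a, b) -> int:
    # LLMs sometimes send strings; fail loudly with an explanation it can act on.
    if not isinstance(a, int) or not isinstance(b, int):
        raise ToolError("Parameters 'a' and 'b' must both be integers.")
    return a + b

adder = Tool("Adder", "Adds two integers and returns their sum.", add_numbers)
result = Task("What is 42 + 58? Use the available tool.", agent, tools=[adder]).solve()
print(result.content)
```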
OllamaModelSettings
▶️ Member variables
| Name | Type | Description |
|---|---|---|
| mirostat | int \| None | Enables Mirostat sampling for controlling perplexity (0: disabled, 1: Mirostat, 2: Mirostat 2.0). |
| mirostat_eta | float \| None | Adjusts how quickly the model learns from context (e.g., 0.1). |
| mirostat_tau | float \| None | Controls topic adherence (e.g., 5.0). |
| num_ctx | int \| None | Determines the context window size (e.g., 4096). |
| num_gqa | int \| None | Number of GQA groups in the transformer layer; required by some models (e.g., 8). |
| num_gpu | int \| None | Number of layers to offload to the GPU (e.g., 50). |
| num_thread | int \| None | Number of CPU threads used during computation (e.g., 8). |
| repeat_last_n | int \| None | How far back the model looks to prevent repetition (e.g., 64). |
| repeat_penalty | float \| None | Penalty for repeated content (e.g., 1.1). |
| temperature | float \| None | Controls response randomness (e.g., 0.7). |
| seed | int \| None | Random seed for reproducibility (e.g., 42). |
| stop | List[str] \| None | Stop sequences for generation. |
| tfs_z | float \| None | Tail-free sampling; reduces the impact of less probable tokens (e.g., 2.0). |
| num_predict | int \| None | Maximum tokens to generate (e.g., 128). |
| top_k | int \| None | Limits token selection to the k most likely tokens (e.g., 40). |
| top_p | float \| None | Nucleus sampling probability threshold (e.g., 0.9). |
▶️ Methods
➡️ __init__(...)
Initialize a new OllamaModelSettings instance.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| mirostat | int \| None | Enables Mirostat sampling for controlling perplexity (0: disabled, 1: Mirostat, 2: Mirostat 2.0). Defaults to None. |
| mirostat_eta | float \| None | Adjusts how quickly the model learns from context (e.g., 0.1). Defaults to None. |
| mirostat_tau | float \| None | Controls topic adherence (e.g., 5.0). Defaults to None. |
| num_ctx | int \| None | Determines the context window size (e.g., 4096). Defaults to None. |
| num_gqa | int \| None | Number of GQA groups in the transformer layer; required by some models (e.g., 8). Defaults to None. |
| num_gpu | int \| None | Number of layers to offload to the GPU (e.g., 50). Defaults to None. |
| num_thread | int \| None | Number of CPU threads used during computation (e.g., 8). Defaults to None. |
| repeat_last_n | int \| None | How far back the model looks to prevent repetition (e.g., 64). Defaults to None. |
| repeat_penalty | float \| None | Penalty for repeated content (e.g., 1.1). Defaults to None. |
| temperature | float \| None | Controls response randomness (e.g., 0.7). Defaults to None. |
| seed | int \| None | Random seed for reproducibility (e.g., 42). Defaults to None. |
| stop | List[str] \| None | Stop sequences for generation. Defaults to None. |
| tfs_z | float \| None | Tail-free sampling; reduces the impact of less probable tokens (e.g., 2.0). Defaults to None. |
| num_predict | int \| None | Maximum tokens to generate (e.g., 128). Defaults to None. |
| top_k | int \| None | Limits token selection to the k most likely tokens (e.g., 40). Defaults to None. |
| top_p | float \| None | Nucleus sampling probability threshold (e.g., 0.9). Defaults to None. |
Return type: OllamaModelSettings
➡️ get_settings()
Get all current settings as a dictionary.
Return type: dict
➡️ reset()
Reset all settings to their initial values.
Return type: None
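A configuration sketch (parameter values are illustrative):

```python
from yacana import OllamaAgent, OllamaModelSettings  # assumed top-level exports

# Short, mostly deterministic answers: low temperature, fixed seed, capped length.
settings = OllamaModelSettings(temperature=0.2, seed=42, num_predict=128)

agent = OllamaAgent("concise_agent", "llama3.1:latest", model_settings=settings)
print(settings.get_settings())  # all current settings as a dict
```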
OpenAiModelSettings
▶️ Member variables
| Name | Type | Description |
|---|---|---|
| audio | Any \| None | Parameters for audio output when using audio modality. |
| frequency_penalty | float \| None | Penalty for token frequency (-2.0 to 2.0). |
| logit_bias | Dict \| None | Token bias adjustments (-100 to 100). |
| logprobs | bool \| None | Whether to return token log probabilities. |
| max_completion_tokens | int \| None | Maximum tokens to generate. |
| metadata | Dict \| None | Additional metadata (max 16 key-value pairs). |
| modalities | List[str] \| None | Output types to generate (e.g., ["text", "audio"]). |
| n | int \| None | Number of completion choices to generate. |
| prediction | Any \| None | Configuration for predicted output. |
| presence_penalty | float \| None | Penalty for token presence (-2.0 to 2.0). |
| reasoning_effort | str \| None | Reasoning effort level ("low", "medium", "high"). |
| seed | int \| None | Random seed for reproducibility. |
| service_tier | str \| None | Latency tier for processing ("auto" or "default"). |
| stop | str \| List \| None | Stop sequences for generation. |
| store | bool \| None | Whether to store completion output. |
| stream_options | Any \| None | Options for streaming response. |
| temperature | float \| None | Sampling temperature (0 to 2). |
| top_logprobs | int \| None | Number of top tokens to return (0 to 20). |
| top_p | float \| None | Nucleus sampling parameter. |
| user | str \| None | End-user identifier. |
| web_search_options | Any \| None | Web search configuration. |
▶️ Methods
➡️ __init__(...)
Initialize a new OpenAiModelSettings instance.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| audio | Any \| None | Parameters for audio output when using audio modality. Defaults to None. |
| frequency_penalty | float \| None | Penalty for token frequency (-2.0 to 2.0). Defaults to None. |
| logit_bias | Dict \| None | Token bias adjustments (-100 to 100). Defaults to None. |
| logprobs | bool \| None | Whether to return token log probabilities. Defaults to None. |
| max_completion_tokens | int \| None | Maximum tokens to generate. Defaults to None. |
| metadata | Dict \| None | Additional metadata (max 16 key-value pairs). Defaults to None. |
| modalities | List[str] \| None | Output types to generate (e.g., ["text", "audio"]). Defaults to None. |
| n | int \| None | Number of completion choices to generate. Defaults to None. |
| prediction | Any \| None | Configuration for predicted output. Defaults to None. |
| presence_penalty | float \| None | Penalty for token presence (-2.0 to 2.0). Defaults to None. |
| reasoning_effort | str \| None | Reasoning effort level ("low", "medium", "high"). Defaults to None. |
| seed | int \| None | Random seed for reproducibility. Defaults to None. |
| service_tier | str \| None | Latency tier for processing ("auto" or "default"). Defaults to None. |
| stop | str \| List \| None | Stop sequences for generation. Defaults to None. |
| store | bool \| None | Whether to store completion output. Defaults to None. |
| stream_options | Any \| None | Options for streaming response. Defaults to None. |
| temperature | float \| None | Sampling temperature (0 to 2). Defaults to None. |
| top_logprobs | int \| None | Number of top tokens to return (0 to 20). Defaults to None. |
| top_p | float \| None | Nucleus sampling parameter. Defaults to None. |
| user | str \| None | End-user identifier. Defaults to None. |
| web_search_options | Any \| None | Web search configuration. Defaults to None. |
Return type: OpenAiModelSettings
➡️ get_settings()
Get all current settings as a dictionary.
Return type: dict
➡️ reset()
Reset all settings to their initial values.
Return type: None
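The same idea on the OpenAI side (values are illustrative; token handling as in the OpenAiAgent example above):

```python
import os

from yacana import OpenAiAgent, OpenAiModelSettings  # assumed top-level exports

oa_settings = OpenAiModelSettings(temperature=0.3, max_completion_tokens=256, seed=7)

agent = OpenAiAgent(
    "assistant",
    "gpt-4",
    api_token=os.environ["OPENAI_API_KEY"],
    model_settings=oa_settings,
)
```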
EndChatMode
All types of group chat completion.
The difficulty in making agents talk to each other is not to have them talk but to have them stop talking.
Note that only tasks that have @llm_stops_by_itself=True are actually impacted by the mode set here.
Use in conjunction with EndChat().
| Name | Value | Description |
|---|---|---|
| ALL_TASK_MUST_COMPLETE | "ALL_TASK_MUST_COMPLETE" | The chat continues until all LLMs with @llm_stops_by_itself=True say they are finished. Set precise completion goals in the task prompt if you want this to actually work. |
| ONE_LAST_CHAT_AFTER_FIRST_COMPLETION | "ONE_LAST_CHAT_AFTER_FIRST_COMPLETION" | After the first agent reaches completion, the other agent gets the opportunity to answer one last time. |
| ONE_LAST_GROUP_CHAT_AFTER_FIRST_COMPLETION | "ONE_LAST_GROUP_CHAT_AFTER_FIRST_COMPLETION" | All agents get one last table turn to speak before exiting the chat after the first completion arrives. |
| END_CHAT_AFTER_FIRST_COMPLETION | "END_CHAT_AFTER_FIRST_COMPLETION" | Immediately stops the group chat after an agent has reached completion. |
| MAX_ITERATIONS_ONLY | "MAX_ITERATIONS_ONLY" | Agents won't be asked if they have fulfilled their objectives; instead they loop until the maximum number of iterations is reached. The maximum can be set in the EndChat() class. |
EndChat
▶️ Member variables
| Name | Type | Description |
|---|---|---|
| mode | EndChatMode | The modality to end a chat with multiple agents. |
| max_iterations | int | The max number of iterations in a conversation. An iteration is complete when we get back to the first speaker. |
▶️ Methods
➡️ __init__(...)
Defines the modality of how and when LLMs stop chatting.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| mode | EndChatMode | The modality to end a chat with multiple agents. |
| max_iterations | int | The max number of iterations in a conversation. An iteration is complete when we get back to the first speaker. Defaults to 5. |
Return type: EndChat
GroupSolve
▶️ Member variables
| Name | Type | Description |
|---|---|---|
| tasks | List[Task] | All tasks that must be solved during the group chat. |
| mode | EndChatMode | The modality to end a chat with multiple agents. |
| reconcile_first_message | bool | Whether the first message from each LLM should be made available to the other. Only useful in dual chat. |
| max_iter | int | The max number of iterations in a conversation. An iteration is complete when we get back to the first speaker. |
| shift_owner | Task | The Task to which the shift message should be assigned. In the end it is the corresponding Agent, rather than the Task, that is involved here. |
| shift_content | str \| None | A custom message to use instead of the opposite agent's response as the shift message content. |
▶️ Methods
➡️ __init__(...)
Initialize a new GroupSolve instance.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| tasks | List[Task] | All tasks that must be solved during the group chat. |
| end_chat | EndChat | Defines the modality of how and when LLMs stop chatting. |
| reconcile_first_message | bool | Whether the first message from each LLM should be made available to the other. Only useful in dual chat. Defaults to False. |
| shift_message_owner | Task | The Task to which the shift message should be assigned. In the end it is the corresponding Agent, rather than the Task, that is involved here. Defaults to None. |
| shift_message_content | str \| None | A custom message to use instead of the opposite agent's response as the shift message content. Defaults to None. |
Return type: GroupSolve
➡️ solve()
Starts the group chat and allows all LLMs to solve their assigned tasks.
Note that 'dual chat' and '3 and more' chat have a different way of starting.
Refer to the official documentation for more details.
Return type: None
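An end-to-end sketch (writer_agent and critic_agent are assumed to be agents built as shown earlier; prompts are illustrative):

```python
from yacana import EndChat, EndChatMode, GroupSolve, Task  # assumed top-level exports

writer_task = Task("Write a haiku about the sea.", writer_agent)
critic_task = Task(
    "Critique the haiku until you judge it good, then stop.",
    critic_agent,
    llm_stops_by_itself=True,  # this task decides when the chat can end
)

GroupSolve(
    [writer_task, critic_task],
    EndChat(EndChatMode.ALL_TASK_MUST_COMPLETE, max_iterations=5),
).solve()
```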
LoggerManager
▶️ Member variables
| Name | Type | Description |
|---|---|---|
| LOG_LEVEL | str | Default log level for the application. |
| AVAILABLE_LOG_LEVELS | list[str] | List of valid log levels that can be set. |
▶️ Static Methods
➡️ set_log_level(...)
Set the logging level for the application.
This method allows changing the logging level at runtime. All log levels have
specific colors, and for INFO level, the color depends on whether it's a user
prompt or an AI answer. Setting log_level to None will disable logging.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| log_level | str \| None | The desired log level. Must be one of: "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL", or None. If None, logging will be disabled. |
Return type: None
➡️ set_library_log_level(...)
Set the logging level for a specific Python library.
This is useful to control logging from external libraries independently
from the main application logging level.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| library_name | str | The name of the target library. |
| log_level | str | The desired log level. Must be one of: "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL", or None. |
Return type: None
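A logging sketch ('httpx' is only an example of a third-party library name):

```python
from yacana import LoggerManager  # assumed top-level export

LoggerManager.set_log_level("DEBUG")  # verbose framework output
LoggerManager.set_library_log_level("httpx", "WARNING")  # quiet a noisy dependency
LoggerManager.set_log_level(None)  # disable logging entirely
```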
▶️ ToolError
Exception raised when a tool encounters incorrect parameters.
This exception is raised by the user from inside a tool when the tool determines
that the given parameters are incorrect. The error message should be didactic
to help the LLM fix its mistakes, otherwise it may loop until reaching
MaxToolErrorIter.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| message | str | A descriptive message that will be given to the LLM to help fix the issue. Example: 'Parameter xxxx expected type int but got type string'. |
▶️ MaxToolErrorIter
Exception raised when maximum error iterations are reached.
This exception is raised when the maximum number of allowed errors has been
reached. This includes both errors raised from the tool itself and errors
when Yacana fails to call the tool correctly.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| message | str | Information about which specific iteration counter reached its maximum. |
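A handling sketch (adder and agent as in the Tool example above):

```python
from yacana import MaxToolErrorIter, Task  # assumed top-level exports

try:
    Task("What is 42 + 58? Use the available tool.", agent, tools=[adder]).solve()
except MaxToolErrorIter:
    # The tool (or Yacana's calls to it) failed too many times in a row.
    print("Tool kept failing; falling back to a plain answer.")
```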
▶️ ReachedTaskCompletion
Exception raised when a task has been successfully completed.
This exception is used to signal that the current task has reached its
completion state. It is typically used to break out of processing loops
or to indicate successful task termination.
▶️ IllogicalConfiguration
Exception raised when the framework is used in an incoherent way.
This exception indicates that the framework has been configured or used
in a way that doesn't make logical sense. The stacktrace message should
provide details about the specific configuration issue.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| message | str | A description of the illogical configuration or usage. |
▶️ TaskCompletionRefusal
Exception raised when the model refuses to complete a task.
This exception is raised when the model explicitly refuses to complete
the requested task, typically due to ethical, safety, or capability reasons.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| message | str | The reason for the task completion refusal. |
▶️ UnknownResponseFromLLM
Exception raised when the model returns an unknown response.
| Parameter name | Parameter type | Parameter description |
|---|---|---|
| message | str | The unknown response that was received from the LLM. |