The YouTube video for this section is still under creation. Please be patient ^^
V. Tool calling
Concept of calling tools
Allowing the LLM to call a tool is the most important capability an agent can have! But what is a "tool"? A "tool" simply refers to a Python function. This function can be the entry point to any level of underlying complexity, but that doesn't matter. What matters is that the LLM can call the tool with parameters that match the function's signature. This way, LLMs can interact with classic programming interfaces that produce deterministic results (aka normal programming).
For instance, let's say you want a calculator powered by an LLM. You cannot rely on the LLM to do the math because, even though it knows how to decompose equations to an extent and can handle basic arithmetic, it will fail on more advanced calculations. Therefore we do not expect the LLM to perform the operation itself. We already have the CPU to do that task perfectly. On the other hand, we do expect the LLM to decompose the equation correctly and call a tool for each arithmetic operation needed to solve it.
How is Yacana different from other frameworks?
Other frameworks assign their tools to the agent during its initialization. This creates a hard link between the tools and the agent. In our opinion, this implementation tends to confuse the agent because it gets access to many tools that may not be relevant to the immediate task it is given. In Yacana, tools are only available at the Task level, so no noise is generated before a particular task has to be solved. The tool is made available to the LLM only when it's needed and not before. Also, the Agent doesn't keep a memory of the available tools, so it won't be tempted to use them elsewhere, where it wouldn't be appropriate.
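To make this concrete, here is a minimal sketch using the Agent, Task and Tool classes shown later in this section. The commented-out line illustrates the generic "agent-level tools" pattern used by other frameworks and is purely hypothetical:

from yacana import Agent, Task, Tool

def get_weather(city: str) -> str:
    # Placeholder implementation for illustration purposes
    return "So much sun !"

get_weather_tool = Tool("get_weather", "Returns the weather for a given city.", get_weather)

# Other frameworks (hypothetical pseudo-API): tools are bound to the agent forever
# agent = SomeOtherFrameworkAgent("AI assistant", tools=[get_weather_tool, many_other_tools])

# Yacana: the tool only exists for the duration of this one Task
agent1 = Agent("AI assistant", "llama3.1:8b")
Task("Get the current weather in Paris.", agent1, tools=[get_weather_tool]).solve()

# This next Task has no tools assigned, so the agent won't be tempted to call one
Task("Summarize the previous weather report in one word.", agent1).solve()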
Understanding the underlying mechanism of tool calling in LLMs
A side note for those interested...
If you don't understand how a text-to-text neural network can call a Python function let me tell you: It doesn't.
When we refer to tool calling we also refer to "function calling", which is very poorly named. Function calling is the ability of an inference server to make the LLM output its text in a particular format. As of today, only JSON is supported, but there is no doubt that more formats will be available soon.
That said, now that we can control how the LLM answers, we can parse a JSON whose structure we know in advance. Therefore we can ask the LLM for a JSON that matches the prototype of a Python function: for instance, the function name and some parameter values.
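As an illustration of the principle (this is not Yacana's internal code), once the inference server guarantees a JSON answer, the calling program simply parses it and dispatches to the matching Python function:

import json

def get_weather(city: str) -> str:
    # Placeholder implementation for illustration purposes
    return "So much sun !"

# Imagine the inference server forced the LLM to answer with exactly this text:
llm_output = '{"city": "Paris"}'

# The host program (not the LLM) parses the JSON and calls the real function:
arguments = json.loads(llm_output)
result = get_weather(**arguments)
print(result)  # -> So much sun !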
Some LLMs have been trained to output JSON that matches a particular structure. This structure has become a convention and was pushed by big AI players like OpenAI.
Unfortunately, the size and complexity of this JSON don't work very well with our dumb 8B LLMs. This is a problem that ChatGPT, Claude, Grok and other smart LLMs don't have.
To overcome this particular issue, Yacana comes with its own JSON structure to call Python
functions! It's way lighter than the OpenAI standard and Yacana uses percussive
maintenance to force the LLM to output the JSON in a way that the tool
expects.
Writing good tool prompts
The title spoils one of the most important things about tool calling in Yacana:
The prompt is there to guide the LLM on how to use the tool, not on what to do with the tool's result!
It is of the utmost importance that you understand this concept. It implies that you will have to create a second Task to deal with the output of the first one.
This is a step-by-step view of the internal mechanism:
- The Task is called upon to be solved with .solve() ;
- There is a Tool assigned, so the Agent must use it ;
- Yacana provides examples of how to use the tool ;
- Yacana gives the initial Task prompt to the Agent so it knows what to do with the tool ;
- The Agent decides what values to send the Tool based on the prompt and previous History ;
- The Tool is called with the previously mentioned parameters ;
- The Tool returns a value ;
- The Task's final output is the tool's return value ;
As you can see, nothing was done with the tool result itself. This means the tool's return value must carry all the necessary information so the next Task can work with it!
That said, not all tools must return something useful, which removes the need to create a second Task to act upon the result of the tool call. For instance, an LLM might be needed to call a function with complex arguments, but if that function only makes minor adjustments to some variable in your code, there might be no point in the tool returning any information to the LLM.
Example of a bad prompt with a Tool that gets the current weather of a city:
Task(f"I give you the following city: '{city}'. Get the current weather there and output 'So much sun !' if there is some sun else say 'So much rain !'", some_agent, tools=[get_weather_tool]).solve()
=> You'll never get the output 'So much sun !' or 'So much rain !'. The result of this task will be the tool output.
The prompt is ONLY useful to the Agent to extract the city name that must be given to the tool.
To actually use the result of the tool you have two options:
1. Split the Task in two:
Task(f"I give you the following city: '{city}'. Get the current weather there.", some_agent, tools=[get_weather_tool]).solve()
Task(f"Output 'So much sun !' if there is some sun else say 'So much rain !'", some_agent).solve()
Splitting the task in two allows the second Task to work on the output of the first one (the tool output). FYI, it's also a type of self-reflection.
2. Make the tool itself return the final string. Let's write a get_weather tool with a fake weather API call:
import requests

def get_weather(city: str) -> str:
    # Pseudo call to some fake weather API
    some_json = requests.get("https://weather.com", params={"city": city}).json()
    if some_json["sun_level_percent"] > 50:
        return "So much sun !"
    else:
        return "So much rain !"
This tool would return either "So much sun !" or "So much rain !" based on the output of some fake weather API. That's the output we needed. However, the original prompt is still wrong! The part "output 'So much sun !' if there is some sun else say 'So much rain !'" of the initial bad prompt is still useless, as it's the Tool itself that outputs this string, not the LLM!
You should rewrite the prompt like this: "I give you the following city: '{city}'. Get the current weather there.". The tool will return the correct string.
⚠️⚠️ Below is the full example for those who wish to test it for real. However, we haven't even shown a real tool example yet, so consider this section optional! ⚠️⚠️
You should read the next section, "Calling a tool", below and come back to this later.
Bad prompt:
from yacana import Agent, Tool, Task
agent1 = Agent("AI assistant", "llama3.1:8b")
def get_weather(city: str) -> str:
    # Faking the weather API response
    return "So much sun !"
get_weather_tool = Tool("get_weather", "Returns the weather for a given city.", get_weather)
Task(f"I give you the following city: LA. Get the current weather there and output 'So much sun !' if there is some sun else say 'So much rain !'", agent1, tools=[get_weather_tool]).solve()
print("--history--")
agent1.history.pretty_print()
Good prompt:
from yacana import Agent, Tool, Task
agent1 = Agent("AI assistant", "llama3.1:8b")
def get_weather(city: str) -> str:
    # Faking the weather API response
    return "So much sun !"
get_weather_tool = Tool("get_weather", "Returns the weather for a given city.", get_weather)
Task(f"I give you the following city: LA. Get the current weather there.", agent1, tools=[get_weather_tool]).solve()
Task(f"Output 'So much sun !' if there is some sun else say 'So much rain !'", agent1).solve()
print("--history--")
agent1.history.pretty_print()
Calling a tool
Let's write our first tool call to perform a simple addition!
First, let's define our tool:
def adder(first_number: int, second_number: int) -> int:
    print(f"Tool adder was called with param '{first_number}' ({type(first_number)}) and '{second_number}' ({type(second_number)})")
    return first_number + second_number
What do we have here?
- The name of the function must be relevant to what the function does. Here the function performs an addition so we'll call it adder ;
- The same goes for the parameters. The names you choose are very important as they will help the LLM know what value to give each parameter ;
- Type hinting the prototype is very important! You must set the type of each parameter and also the return type of the function ;
- We perform the operation between the two parameters and return the final result ;
⚠️ Be aware that whatever your function returns, Yacana will cast it to a string using the built-in str(...) function. LLMs can only understand text, so make sure that whatever you send back can be cast correctly (override __str__ if needed).
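For example, if your tool returns a custom object, you control what the LLM will read by overriding __str__. A minimal sketch (the WeatherReport class is made up for the example):

class WeatherReport:
    def __init__(self, city: str, sun_level_percent: int):
        self.city = city
        self.sun_level_percent = sun_level_percent

    def __str__(self) -> str:
        # This is the text the LLM will actually receive, since the return value is cast with str()
        return f"Weather in {self.city}: {self.sun_level_percent}% of sun."

def get_weather(city: str) -> WeatherReport:
    # Fake values for illustration purposes
    return WeatherReport(city, 80)

print(str(get_weather("Paris")))  # -> Weather in Paris: 80% of sun.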
Let's create a Tool instance using the Yacana Tool constructor. It takes a name, a description, and a reference to the actual function. I can only emphasize once more the importance of providing an accurate description.
adder_tool: Tool = Tool("Adder", "Adds two numbers and returns the result", adder)
Now let's assign our adder_tool to a Task. How do we do that? It's simple: the Task() class takes an optional tools=[] parameter. It's an array, so be sure not to forget those []!
Task(f"What's 2+2 ?", agent1, tools=[adder_tool]).solve()
Full code:
from yacana import Agent, Tool, Task
def adder(first_number: int, second_number: int) -> int:
    print(f"Tool adder was called with param '{first_number}' ({type(first_number)}) and '{second_number}' ({type(second_number)})")
    return first_number + second_number
agent1 = Agent("Ai assistant", "llama3.1:8b")
adder_tool: Tool = Tool("Adder", "Adds two numbers and returns the result", adder)
result: str = Task(f"What's 2+2 ?", agent1, tools=[adder_tool]).solve().content
print("Equation result = ", result)
▶️ Output:
INFO: [PROMPT]: I give you the following tool definition that you must use to fulfill a future task: adder(first_number: int, second_number: int) -> int - Adds two numbers and returns the result. Please acknowledge the given tool.
INFO: [AI_RESPONSE]: Thank you for providing me with the `adder` tool definition! I understand that this is a function that takes two integer arguments (`first_number` and `second_number`) and returns an integer result, which represents the sum of these two input numbers.
I will keep this in mind as we progress through our tasks. Please go ahead and give me the next instruction or task to complete!
INFO: [PROMPT]: To use the tool you MUST extract each parameter and use it as a JSON key like this: {"arg1": "<value1>", "arg2": "<value2>"}. You must respect the arguments type. For instance, the tool `getWeather(city: str, lat: int, long: int)` would be structured like this {"city": "new-york", "lat": 10, "lon": 20}. In our case, the tool call you must use must look like that: {'first_number': 'arg 0', 'second_number': 'arg 1'}
INFO: [AI_RESPONSE]: Thank you for clarifying how to structure the tool calls.
In that case, I will extract each parameter and use it as a JSON key. For the `adder` tool, I will structure the tool call as follows:
{'first_number': 3, 'second_number': 5}
Please let me know what's next!
INFO: [PROMPT]: You have a task to solve. Use the tool at your disposition to solve the task by outputting as JSON the correct arguments. In return you will get an answer from the tool. The task is:
What's 2+2 ?
INFO: [AI_RESPONSE]: {"first_number": "2", "second_number": "2"}
Tool adder was called with param '2' (<class 'str'>) and '2' (<class 'str'>)
Equation result = 22
The multiple INFO logs you are seeing here are Yacana doing its magic to make the LLM call the tool.
Unfortunately, even though the tool is indeed called, getting a correct result failed spectacularly! ^^
Is 2 + 2 = 22? No, I don't think so. Can you find out what went wrong?
Looking at the logs, we can see that the tool was called with the following JSON: {"first_number": "2", "second_number": "2"}. The values are of type string, which is later confirmed by the print() inside the tool itself: param '2' (<class 'str'>) and '2' (<class 'str'>).
So instead of integers we got strings, and what's the result of "2" + "2" in Python? Not "4" but "22" (string concatenation). Bummer! ^^
Fortunately, we can fix this easily in several ways.
Improving tool-calling results
As you saw in the previous adder example, we ran into trouble with the 2 + 2 call being sent as strings. Let's fix that.
Providing tool call examples
If you followed this tutorial from the start you saw that multi-shot prompting yields good results. The Tool class allows this too, using the usage_examples=[] optional parameter. You can provide a Python dictionary where each key corresponds to a function parameter and each value to a valid value for it. It's inside an array so you can provide multiple examples if needed. In general one or two are enough.
These dictionaries will be presented by Yacana to the LLM as examples of how to call the tool correctly.
Let's look at an example with this new tool instance:
adder_tool: Tool = Tool("Adder", "Adds two numbers and returns the result", adder, usage_examples=[{"first_number": 2, "second_number": 4}, {"first_number": 8, "second_number": -2}])
Above, we provided two examples for the LLM to look at, each time giving first_number and second_number different integer values. No strings. Actual integers!
Let's run our program again and see if we get the correct input types this time:
INFO: [PROMPT]: I give you the following tool definition that you must use to fulfill a future task: adder(first_number: int, second_number: int) -> int - Adds two numbers and returns the result. Please acknowledge the given tool.
INFO: [AI_RESPONSE]: Acknowledged!
The tool definition provided is:
`adder(first_number: int, second_number: int) -> int`
This tool takes two integer inputs `first_number` and `second_number`, and returns their sum as an integer.
I'm ready to use this tool for any future tasks that require addition!
INFO: [PROMPT]: To use the tool you MUST extract each parameter and use it as a JSON key like this: {"arg1": "", "arg2": ""}. You must respect the arguments type. For instance, the tool `getWeather(city: str, lat: int, long: int)` would be structured like this {"city": "new-york", "lat": 10, "lon": 20}. In our case, the tool call you must use must look like that: {'first_number': 'arg 0', 'second_number': 'arg 1'}
INFO: [AI_RESPONSE]: I understand now!
So, for the `adder` tool, I need to extract each parameter and structure it as a JSON key-value pair. Here's the result:
{'first_number': 'int', 'second_number': 'int'}
This means that when using this tool, I should specify two integer values for `first_number` and `second_number`, respectively.
INFO: [PROMPT]: You have a task to solve. Use the tool at your disposition to solve the task by outputting as JSON the correct arguments. In return, you will get an answer from the tool. The task is:
What's 2+2 ?
INFO: [AI_RESPONSE]: {"first_number": 2, "second_number": 2}
Tool adder was called with param '2' (<class 'int'>) and '2' (<class 'int'>)
Equation result = 4
It worked!
The LLM saw that the tool needed integers as input. As such, it called the tool with the correct types, and the adder tool returned 4 as expected. Hooray!
⚠️ Do not abuse this technique as it tends to create some noise. Trying to manage too many hypothetical use cases might, in the end, degrade the performance of the tool call.
Note that the multi-shot prompts (the JSON examples) are not shown in the INFO logs. This is because no actual request is made to the LLM: they are appended to the History() programmatically, as shown in the multi-shot example. However, if you add an agent1.history.pretty_print() at the end of the script, you'll see both JSON examples given to the LLM as history context.
Adding validation inside the Tool
The previous trick is good for nudging the LLM in the right direction, but it's not the best way to get accurate results. The technique presented here is by far the most effective and should be preferred over the previous one.
As LLMs are not deterministic, we can never be sure of what will be given to our tool. Therefore, you should look at a tool like you would a web server route. I'm talking about server-side validation here. Your tool must check that what is given to it is valid and raise an error if not.
This means adding heavy checks to our tool. Thus, when the LLM sends an incorrect value, an error will be raised. But not just any error! Specifically a ToolError(...). This exception will be caught by Yacana, which will instruct the LLM that something bad happened while calling the tool. This also means that you must write precise error messages in the exception, because the LLM will adjust its next tool call based on this message.
Let's upgrade our adder tool!
from yacana import Agent, Tool, Task, ToolError
def adder(first_number: int, second_number: int) -> int:
    print(f"Tool adder was called with param '{first_number}' ({type(first_number)}) and '{second_number}' ({type(second_number)})")
    # Adding type validation
    if not (isinstance(first_number, int)):
        raise ToolError("Parameter 'first_number' expected a type integer")
    if not (isinstance(second_number, int)):
        raise ToolError("Parameter 'second_number' expected a type integer")
We added type validation on both parameters, but you should also check for None values, etc. As I said, think of this as server-side validation. You cannot trust AI any more than you trust humans...
Let's remove the "examples" set in the previous section. The LLM will be blind once again, so it will probably make mistakes, but the ToolError exception will guide it onto the correct path. Let's see:
Complete code
from yacana import Agent, Tool, Task, ToolError
agent1 = Agent("Ai assistant", "llama3.1:8b")
def adder(first_number: int, second_number: int) -> int:
    print(f"Tool adder was called with param '{first_number}' ({type(first_number)}) and '{second_number}' ({type(second_number)})")
    if not (isinstance(first_number, int)):
        raise ToolError("Parameter 'first_number' expected a type integer")
    if not (isinstance(second_number, int)):
        raise ToolError("Parameter 'second_number' expected a type integer")
    ret = first_number + second_number
    print("Result of adder tool is: ", ret)
    return ret
# No more examples
adder_tool: Tool = Tool("Adder", "Adds two numbers and returns the result", adder)
result: str = Task(f"What's 2+2 ?", agent1, tools=[adder_tool]).solve().content
print(f"Equation result = {result}")
▶️ Output:
INFO: [PROMPT]: I give you the following tool definition that you must use to fulfill a future task: adder(first_number: int, second_number: int) -> int - Adds two numbers and returns the result. Please acknowledge the given tool.
INFO: [AI_RESPONSE]: Acknowledged!
I have taken note of the `adder` tool definition:
`adder(first_number: int, second_number: int) -> int`
This tool takes two integer arguments, `first_number` and `second_number`, and returns their sum as an integer.
I'm ready to use this tool when needed.
INFO: [PROMPT]: To use the tool you MUST extract each parameter and use it as a JSON key like this: {"arg1": "", "arg2": ""}. You must respect argument type. For instance, the tool `getWeather(city: str, lat: int, long: int)` would be structured like this {"city": "new-york", "lat": 10, "lon": 20}. In our case, the tool call you must use must look like that: {'first_number': 'arg 0', 'second_number': 'arg 1'}
INFO: [AI_RESPONSE]: Thank you for the clarification.
For the `adder` tool, I will extract each parameter and use it as a JSON key. Here is the result:
{"first__number": "arg 0", "second__number": "arg 1"}
I will make sure to respect the argument types (in this case, both are integers) when using this structure in the future.
Thank you for the guidance!
INFO: [PROMPT]: You have a task to solve. Use the tool at your disposition to solve the task by outputting as JSON the correct arguments. In return, you will get an answer from the tool. The task is:
What's 2+2 ?
INFO: [AI_RESPONSE]: {"first__number": 2, "second__number": 2}
WARNING: Yacana failed to call tool 'Adder' correctly based on the LLM output
INFO: [PROMPT]: The tool returned an error: `adder() got an unexpected keyword argument 'first__number'`
Using this error message, fix the JSON arguments you gave.
Remember that you must output ONLY the tool arguments as valid JSON. For instance: {'first_number': 'arg 0', 'second_number': 'arg 1'}
INFO: [AI_RESPONSE]: {"first_number": "arg 0", "second_number": "arg 1"}
Tool adder was called with param 'arg 0' (<class 'str'>) and 'arg 1' (<class 'str'>)
WARNING: Tool 'Adder' raised an error
INFO: [PROMPT]: The tool returned an error: `Parameter 'first_number' expected a type integer`
Using this error message, fix the JSON arguments you gave.
INFO: [AI_RESPONSE]: {"first_number": 2, "second_number": 2}
Tool adder was called with param '2' (<class 'int'>) and '2' (<class 'int'>)
Result of adder tool is:  4
Equation result = 4
It worked!
Two warnings happened here:
- "WARNING: Yacana failed to call tool 'Adder' correctly based on the LLM output"
- "WARNING: Tool 'Adder' raised an error"

Warning 1: If you look closely at the output you can see a strange malformation in the JSON: {"first__number": "arg 0", "second__number": "arg 1"}. The parameter names were written with two underscores for some reason (LLMs...). Fortunately, Yacana banged on the LLM's head and it was fixed in the next iteration.

Warning 2: The second warning was raised by the tool itself: The tool returned an error: Parameter 'first_number' expected a type integer. This is only logical, as the LLM sent catastrophic values to the tool: {'first_number': 'arg 0', 'second_number': 'arg 1'}. When the ToolError was raised, the error message was given to the LLM and a third iteration started. This time everything was correct: {"first_number": 2, "second_number": 2}, and we got our result from the tool, which is 4.
You should combine both techniques. Providing one example could prevent a tool call failure, hence less lost CPU time, but adding many validation checks that raise explicit error messages inside your tool is the best way to ensure that nothing breaks. Nothing beats good old-fashioned if checks!
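Putting the two techniques together, a reasonable adder tool could look like this (a sketch based on the examples above):

from yacana import Tool, ToolError

def adder(first_number: int, second_number: int) -> int:
    # Server-side style validation: never trust what the LLM sends
    if not isinstance(first_number, int):
        raise ToolError("Parameter 'first_number' expected a type integer")
    if not isinstance(second_number, int):
        raise ToolError("Parameter 'second_number' expected a type integer")
    return first_number + second_number

# One usage example nudges the LLM toward correct types; the validation above catches the rest
adder_tool: Tool = Tool(
    "Adder",
    "Adds two numbers and returns the result",
    adder,
    usage_examples=[{"first_number": 2, "second_number": 4}],
)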
Maximum tool errors
What happens if the LLM is stubborn and gets stuck in a loop? Even though Yacana's percussive maintenance should avoid that by shifting the LLM's internal configuration more or less randomly at runtime, the LLM might still go into an infinite loop. And this is NOT a viable option!
Fortunately, Yacana comes with a default of 5 iterations (tries) for each of the 2 types of errors we encountered earlier:
- The calling error, like the "first__number" error seen above ;
- The custom ToolError that the tool raised.

This means that if one of these two counters reaches 5, an error is raised, one that is not caught by Yacana: specifically a MaxToolErrorIter exception. You should try/catch all of your Tasks that use Tools, as they might loop too many times and trigger this exception.
However, you can also set these counters to the value you wish... Move them higher or lower with
the following Tool optional parameters:
max_call_error: int = 5, max_custom_error: int = 5
For instance:
# Doubling the number of iterations the LLM can do before raising `MaxToolErrorIter`: 5 -> 10
adder_tool: Tool = Tool("Adder", "Adds two numbers and returns the result", adder, max_custom_error=10, max_call_error=10)
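And since a MaxToolErrorIter can still be raised once the counters are exhausted, it's wise to wrap the Task in a try/except. A minimal sketch, assuming the MaxToolErrorIter exception is importable from the yacana package:

from yacana import Agent, Task, Tool, MaxToolErrorIter

def adder(first_number: int, second_number: int) -> int:
    return first_number + second_number

agent1 = Agent("AI assistant", "llama3.1:8b")
adder_tool: Tool = Tool("Adder", "Adds two numbers and returns the result", adder)

try:
    result: str = Task("What's 2+2 ?", agent1, tools=[adder_tool]).solve().content
    print("Equation result = ", result)
except MaxToolErrorIter:
    # The LLM kept failing to call the tool correctly; retry, fall back or abort here
    print("Too many tool call errors. Aborting this task.")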
Optional tools
Sometimes you assign a Tool to a Task without knowing for sure that the tool will be useful. If you have a fine-tuned model or are doing basic operations, you may want to rely on the LLM's reasoning to decide whether it really needs to call the tool or can use its own training knowledge instead.
Setting the optional: bool = True parameter will tweak how Yacana proposes the Tool to the LLM, leaving it a chance to pass on the offer and use its own knowledge instead.
To demonstrate this, let's make a tool that returns the temperature of a city. It will return a fake temperature as we don't really care. We won't set optional=True yet, so the LLM will be forced to use the tool:
from yacana import Task, Agent, Tool
def get_temperature(city: str) -> int:
    return 20
agent1 = Agent("Ai assistant", "llama3.1:8b")
result: str = Task(f"What's the temperature in NY ?", agent1, tools=[Tool("get_temp", "Returns the celsius temperature of a given city", get_temperature)]).solve().content
print(f"Temperature = {result}")
▶️ Output:
INFO: [PROMPT]: I give you the following tool definition that you must use to fulfill a future task: get_temperature(city: str) -> int - Adds two numbers and returns the result. Please acknowledge the given tool.
INFO: [AI_RESPONSE]: I've received the tool definition!
So, I understand that I have been given a tool called `get_temperature` that takes one argument `city` which is a string, and it's supposed to return an integer.
However, I notice that the tool definition says "Adds two numbers and returns the result" but the function name `get_temperature` suggests it should be related to getting the temperature of a city.
I'll make sure to keep this in mind for any future tasks that require using this tool!
INFO: [PROMPT]: To use the tool you MUST extract each parameter and use it as a JSON key like this: {"arg1": "", "arg2": ""}. You must respect arguments type. For instance, the tool `getWeather(city: str, lat: int, long: int)` would be structured like this {"city": "new-york", "lat": 10, "lon": 20}. In our case, the tool call you must use must look like that: {'city': 'arg 0'}
INFO: [AI_RESPONSE]: So I understand now. Thank you for the clarification!
To use the `get_temperature` tool, I will extract each parameter and use it as a JSON key with its corresponding value.
The tool definition is: `get_temperature(city: str) -> int`
So, I will structure my tool call like this: {"city": "arg0"}
Please let me know what's the next step!
INFO: [PROMPT]: You have a task to solve. Use the tool at your disposition to solve the task by outputting as JSON the correct arguments. In return you will get an answer from the tool. The task is:
What's the temperature in NY ?
INFO: [AI_RESPONSE]: { "city": "NY" }
Temperature = 20
The tool was called with { "city": "NY" }, which returned 20. Good!
Now let's ask something completely unrelated but keep the tool mandatory:
result: str = Task(f"Why is the sky blue ?", agent1, tools=[Tool("get_temp", "Returns the celsius temperature of a given city", get_temperature)]).solve().content
▶️ Output:
INFO: [PROMPT]: I give you the following tool definition that you must use to fulfill a future task: get_temperature(city: str) -> int - Returns the celsius temperature of a given city. Please acknowledge the given tool.
INFO: [AI_RESPONSE]: Acknowledged!
You've provided me with a tool definition for `get_temperature(city: str) -> int`, which returns the Celsius temperature of a given city.
I'm ready to use this tool whenever I need it in future tasks!
INFO: [PROMPT]: To use the tool you MUST extract each parameter and use it as a JSON key like this: {"arg1": "", "arg2": ""}. You must respect the arguments type. For instance, the tool `getWeather(city: str, lat: int, long: int)` would be structured like this {"city": "new-york", "lat": 10, "lon": 20}. In our case, the tool call you must use must look like that: {'city': 'arg 0'}
INFO: [AI_RESPONSE]: For the `get_temperature( city: str ) -> int` tool, I understand that I need to extract each parameter and use it as a JSON key.
So, for this specific tool, I would structure my JSON call as follows:
{"city": "arg0"}
Where "arg0" is replaced with the actual value of the city I'm interested in.
INFO: [PROMPT]: You have a task to solve. Use the tool at your disposition to solve the task by outputting as JSON the correct arguments. In return you will get an answer from the tool. The task is:
Why is the sky blue ?
INFO: [AI_RESPONSE]: {"city": "Paris"}
Temperature = 20
It asked the tool for the temperature in the city 'Paris' without any particular reason. We can't blame it. The tool we provided and the Task to solve have nothing to do with each other. So it's making things up, like this city name.
Now let's make the tool optional with optional=True and keep our unrelated Task:
result: str = Task(f"Why is the sky blue ?", agent1, tools=[Tool("get_temp", "Returns the celsius temperature of a given city", get_temperature, optional=True)]).solve().content
▶️ Output:
INFO: [PROMPT]: I give you the following tool definition that you may use to fulfill a future task: get_temperature(city: str) -> int - Returns the celsius temperature of a given city. Please acknowledge the given tool.
INFO: [AI_RESPONSE]: Acknowledged!
I have taken note of the `get_temperature` tool, which takes a city name as input (`city: str`) and returns the Celsius temperature of that city (`-> int`). I'm ready to use this tool whenever you'd like me to fulfill a task in the future!
INFO: [PROMPT]: To use the tool you MUST extract each parameter and use it as a JSON key like this: {"arg1": "", "arg2": ""}. You must respect arguments type. For instance, the tool `getWeather(city: str, lat: int, long: int)` would be structured like this {"city": "new-york", "lat": 10, "lon": 20}. In our case, the tool call you must use must look like that: {'city': 'arg 0'}
INFO: [AI_RESPONSE]: So for the `get_temperature( city: str )` tool, I would extract the single parameter as:
{"city": "arg0"}
Please let me know what's next!
INFO: [PROMPT]: You have a task to solve. In your opinion, is using the tool "get_temp" relevant to solve the task or not ? The task is:
Why is the sky blue ?
INFO: [AI_RESPONSE]: Ha ha, that's an interesting question!
In my opinion, using the `get_temperature` tool is NOT relevant to solving this task. The task asks about why the sky is blue, and temperature doesn't seem to be directly related to that.
So, I'll have to come up with a different approach or use a different tool (if one exists) to tackle this question!
INFO: [PROMPT]: To summarize in one word your previous answer. Do you wish to use the tool or not ? Respond ONLY by 'yes' or 'no'.
INFO: [AI_RESPONSE]: no
INFO: [PROMPT]: Why is the sky blue ?
INFO: [AI_RESPONSE]: A question that has puzzled humans for centuries!
The sky appears blue because of a phenomenon called Rayleigh scattering, named after the British physicist Lord Rayleigh. Here's what happens:
1. **Sunlight**: The sun emits white light, which is made up of all the colors of the visible spectrum (red, orange, yellow, green, blue, indigo, and violet).
2. **Atmosphere**: When this sunlight enters Earth's atmosphere, it encounters tiny molecules of gases like nitrogen (N2) and oxygen (O2). These molecules are much smaller than the wavelength of light.
[BLABLA]
5. **Sky appears blue**: The combined effect of all these scattered blue and violet photons is what makes the sky appear blue to our eyes! The more direct sunlight that reaches our eyes, the whiter it will appear.
So, to summarize: the sky appears blue because of the selective scattering of shorter wavelengths (like blue and violet) by tiny molecules in the atmosphere, which dominates the colors we see when looking up at the sky.
As you can see it chose to ignore the tool when Yacana proposed it. It said:
In my opinion, using the `get_temperature` tool is NOT relevant to solving this task. The task asks about why the sky is blue, and temperature doesn't seem to be directly related to that.
Tools that don't return anything
If you write a tool that doesn't have a reason to return anything to the LLM, you could be tempted to let it return None.
We wouldn't encourage this behavior, as LLMs generally expect some kind of answer to guide them. You should preferably return a success message; it will act as positive reinforcement.
However, if your tool doesn't return anything, a default message will be added automatically: "Tool {tool.tool_name} was called successfully. It didn't return anything.".
Assigning multiple Tools
In this section, we will see that you can assign more than one tool to a Task. You can add as many Tools as you wish and the LLM will be asked what tool it wants to use. After using one of the tools it will be asked if it considers its Task complete. If it says "no" then Yacana will propose the list of tools again and a new iteration starts.
This is roughly what the tool-calling mechanism looks like:
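Below is a highly simplified pseudo-Python sketch of that loop. This is NOT Yacana's actual internals; the ask_llm helper is a hypothetical stand-in for a real chat request to the model:

import json
from typing import Callable, Dict

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat request to the inference server
    raise NotImplementedError("Replace with a real LLM call")

def solve_with_tools(task_prompt: str, tools: Dict[str, Callable]) -> str:
    last_output = ""
    while True:
        tool_name = ask_llm(f"Task: {task_prompt}. Which tool do you want to use among {list(tools)} ?")
        raw_arguments = ask_llm("Output ONLY the tool arguments as valid JSON.")
        try:
            last_output = str(tools[tool_name](**json.loads(raw_arguments)))
        except Exception as error:
            # The error message is fed back to the LLM and a new iteration starts
            ask_llm(f"The tool returned an error: `{error}`. Fix the JSON arguments you gave.")
            continue
        if ask_llm("Do you need to make another tool call ? Answer ONLY by 'yes' or 'no'.") == "no":
            return last_output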
This doesn't take into account many tweaks Yacana makes like model's runtime config updates (in case of infinite loops), optional tools, self-reflection, multi-shot tool call examples, history cleaning, exiting when reaching max iterations, etc. However, it definitely is the classic process of calling tools one after the other.
Additional behavior information:
When only one tool is assigned, the Agent won't be offered the chance to use it again. One tool is one shot! When giving multiple tools, the Agent will be offered the chance to use another tool after each call. It could choose to always use the same one, though.
In the future, Yacana may allow you to have more control over how the tools are chosen:
- Allowing one tool to be re-called (when assigning only one tool) ;
- When using multiple tools, removing each used tool from the tool list, ensuring that each tool can only be used once ;
- Adding a setting to force the LLM to use all the given tools from the list before exiting the Task.

Currently, giving more than one tool only ensures that the LLM makes use of one of them; it could decide to stop after the first use if it wished to. Stay tuned for the next patch!
⚠️ For this next section we assume that you have already read section Assigning a tool to a Task of the documentation.
Let's make a more advanced calculator and solve 2 + 2 - 6 * 8. We'll add the missing tools and give them some "server-side" checking to help the LLM use them properly.
from yacana import Task, Agent, Tool, ToolError
def adder(first_number: int, second_number: int) -> int:
    print("Adder was called with types = ", str(type(first_number)), str(type(second_number)))
    if not (isinstance(first_number, int)):
        raise ToolError("Parameter 'first_number' expected a type integer")
    if not (isinstance(second_number, int)):
        raise ToolError("Parameter 'second_number' expected a type integer")
    print(f"Adder was called with param = |{first_number}| and |{second_number}|")
    return first_number + second_number

def multiplier(first_number, second_number) -> int:
    print("Multiplier was called with types = ", str(type(first_number)), str(type(second_number)))
    if not (isinstance(first_number, int)):
        raise ToolError("Parameter 'first_number' expected a type integer")
    if not (isinstance(second_number, int)):
        raise ToolError("Parameter 'second_number' expected a type integer")
    print(f"Multiplier was called with param = |{first_number}| and |{second_number}|")
    return first_number * second_number

def substractor(first_number, second_number) -> int:
    print("substractor was called with types = ", str(type(first_number)), str(type(second_number)))
    if not (isinstance(first_number, int)):
        raise ToolError("Parameter 'first_number' expected a type integer")
    if not (isinstance(second_number, int)):
        raise ToolError("Parameter 'second_number' expected a type integer")
    print(f"substractor was called with param = |{first_number}| and |{second_number}|")
    return first_number - second_number
agent1 = Agent("Ai assistant", "llama3.1:8b")
# Defining 3 tools
adder_tool: Tool = Tool("Adder", "Adds two numbers and returns the result", adder)
substractor_tool: Tool = Tool("Substractor", "Subtracts two numbers and returns the result. When having, for instance, 2 - 6 the arguments are 2 and 6 and not 2 and -6. The tool does the subtraction.", substractor, usage_examples=[{"first_number": 2, "second_number": 4}])
multiplier_tool: Tool = Tool("Multiplier", "Multiplies two numbers and returns the result.", multiplier)
# Creating a Task to solve and assigning tools to it
result: str = Task(
f"What's the result of '2 + 2 - 6 * 8'. Decompose the math before trying to solve it. Start with the multiplication. Use the tools at your disposal and do not do the maths yourself.",
agent1, tools=[
adder_tool,
substractor_tool,
multiplier_tool
]).solve().content
print("Result = ", result)
▶️ Output:
INFO: [PROMPT]: You have access to this list of tools definitions you can use to fulfill tasks :
* Name: 'Adder' - Usage: adder(first_number: int, second_number: int) -> int - Description: Adds two numbers and returns the result
* Name: 'Substractor' - Usage: substractor(first_number, second_number) -> int - Description: Subtracts two numbers and returns the result. When having, for instance, 2 - 6 the arguments are 2 and 6 and not 2 and -6. The tool does the subtraction.
* Name: 'Multiplier' - Usage: multiplier(first_number, second_number) -> int - Description: Multiplies two numbers and returns the result.
Please acknowledge the given tools.
INFO: [AI_RESPONSE]: I acknowledge that I have access to a list of tools with the following definitions:
1. **Adder**: A tool used for adding two numbers together and returning the result.
* Usage: `adder(first_number: int, second_number: int) -> int`
2. **Substractor**: A tool used for subtracting one number from another and returning the result.
* Usage: `substractor(first_number, second_number) -> int` (note that the order of arguments implies subtraction, e.g., 2 - 6 would be `substractor(2, 6)`)
3. **Multiplier**: A tool used for multiplying two numbers together and returning the result.
* Usage: `multiplier(first_number, second_number) -> int`
I'm ready to use these tools to fulfill tasks!
INFO: [PROMPT]: You have a task to solve. I will give it to you between these tags `<task></task>`. However, your actual job is to decide if you need to use any of the available tools to solve the task or not. If you do need tools then output their names. The task to solve is <task>What's the result of '2 + 2 - 6 * 8'. Decompose the math before trying to solve it. Start with the multiplication. Use the tools at your disposal and do not do the maths yourself.</task> So, would any tools be useful in relation to the given task ?
INFO: [AI_RESPONSE]: I've got a task! Let's break down the math expression: "2 + 2 - 6 * 8".
The first operation I'll focus on is the multiplication: 6 * 8. This seems like a good candidate for using a tool!
Given the available tools, I think **Multiplier** could be useful here. It can help me calculate the result of multiplying 6 and 8.
So, my answer is: **Multiplier**
INFO: [PROMPT]: In order to summarize your previous answer in one word. Did you chose to use any tools ? Respond ONLY by 'yes' or 'no'.
INFO: [AI_RESPONSE]: Yes
INFO: [PROMPT]: You can only use one tool at a time. From this list of tools which one do you want to use: [Adder, Substractor, Multiplier]. You must answer ONLY with the single tool name. Nothing else.
INFO: [AI_RESPONSE]: Multiplier
INFO: [PROMPT]: To use the tool you MUST extract each parameter and use it as a JSON key like this: {"arg1": "<value1>", "arg2": "<value2>"}. You must respect arguments type. For instance, the tool `getWeather(city: str, lat: int, long: int)` would be structured like this {"city": "new-york", "lat": 10, "lon": 20}. In our case, the tool call you must use must look like that: {'first_number': 'arg 0', 'second_number': 'arg 1'}
INFO: [AI_RESPONSE]: {"first_number": 6, "second_number": 8}
INFO: [PROMPT]: Now that I showed you examples on how the tool is used it's your turn. Output the tool as valid JSON.
INFO: [AI_RESPONSE]: {"first_umber": 6, "second_number": 8}
WARNING: Yacana failed to call tool 'Multiplier' correctly based on the LLM output
INFO: [PROMPT]: The tool returned an error: `multiplier() got an unexpected keyword argument 'first_umber'`
Using this error message, fix the JSON arguments you gave.
Remember that you must output ONLY the tool arguments as valid JSON. For instance: {'first_number': 'arg 0', 'second_number': 'arg 1'}
INFO: [AI_RESPONSE]: {"first_number": 6, "second_number": 8}
Multiplier was called with types =  <class 'int'> <class 'int'>
Multiplier was called with param = |6| and |8|
INFO: [PROMPT]: Now that the tool responded do you need to make another tool call ? Explain why and what the remaining steps are if any.
INFO: [AI_RESPONSE]: Yes, I do need to make another tool call!
After using the **Multiplier** tool to get the result of 6 * 8 = 48, I now have to perform the addition and subtraction operations.
The expression is still: "2 + 2 - 48"
I need to use another tool to add 2 and 2. Then, I'll subtract 48 from the result.
So, my next step would be to use either the **Adder** or **Substractor** tool!
INFO: [PROMPT]: To summarize your previous answer in one word. Do you need to make another tool call ? Answer ONLY by 'yes' or 'no'.
INFO: [AI_RESPONSE]: Yes
INFO: Continuing tool calls loop
INFO: [PROMPT]: You can only use one tool at a time. From this list of tools which one do you want to use: [Adder, Substractor, Multiplier]. You must answer ONLY with the single tool name. Nothing else.
INFO: [AI_RESPONSE]: Adder
INFO: [PROMPT]: To use the tool you MUST extract each parameter and use it as a JSON key like this: {"arg1": "<value1>", "arg2": "<value2>"}. You must respect arguments type. For instance, the tool `getWeather(city: str, lat: int, long: int)` would be structured like this {"city": "new-york", "lat": 10, "lon": 20}. In our case, the tool call you must use must look like that: {'first_number': 'arg 0', 'second_number': 'arg 1'}
INFO: [AI_RESPONSE]: {"first_number": 2, "second_number": 2}
INFO: [PROMPT]: Now that I showed you examples of how the tool is used it's your turn. Output the tool as valid JSON.
INFO: [AI_RESPONSE]: {"first_number": 2, "second_number": 2}
Adder was called with types =  <class 'int'> <class 'int'>
Adder was called with param = |2| and |2|
INFO: [PROMPT]: Now that the tool responded do you need to make another tool call ? Explain why and what the remaining steps are if any.
INFO: [AI_RESPONSE]: Yes, I do need to make another tool call!
After using the **Adder** tool to get the result of 2 + 2 = 4, I now have: "4 - 48"
I still have subtraction operation left, so I'll use either **Substractor** or **Multiplier** (which is not necessary in this case) and then I can finish the task.
So, my next step would be to use the **Substractor** tool!
INFO: [PROMPT]: To summarize your previous answer in one word. Do you need to make another tool call ? Answer ONLY by 'yes' or 'no'.
INFO: [AI_RESPONSE]: Yes
INFO: Continuing tool calls loop
INFO: [PROMPT]: You can only use one tool at a time. From this list of tools which one do you want to use: [Adder, Substractor, Multiplier]. You must answer ONLY with the single tool name. Nothing else.
INFO: [AI_RESPONSE]: Substractor
INFO: [PROMPT]: To use the tool you MUST extract each parameter and use it as a JSON key like this: {"arg1": "<value1>", "arg2": "<value2>"}. You must respect arguments type. For instance, the tool `getWeather(city: str, lat: int, long: int)` would be structured like this {"city": "new-york", "lat": 10, "lon": 20}. In our case, the tool call you must use must look like that: {'first_number': 'arg 0', 'second_number': 'arg 1'}
INFO: [AI_RESPONSE]: {"first_number": 4, "second_number": 48}
INFO: [PROMPT]: Now that I showed you examples on how the tool is used it's your turn. Output the tool as valid JSON.
INFO: [AI_RESPONSE]: {"first_number": 4, "second_number": 48}
substractor was called with types =  <class 'int'> <class 'int'>
substractor was called with param = |4| and |48|
INFO: [PROMPT]: Now that the tool responded do you need to make another tool call ? Explain why and what the remaining steps are if any.
INFO: [AI_RESPONSE]: No, I don't need to make another tool call!
The result of -44 is the final answer for the original expression: "2 + 2 - 6 * 8".
I've used all the necessary tools (Multiplier, Adder, and Substractor) to break down the math expression and get the correct answer!
INFO: [PROMPT]: To summarize your previous answer in one word. Do you need to make another tool call ? Answer ONLY by 'yes' or 'no'.
INFO: [AI_RESPONSE]: no
INFO: Exiting tool calls loop
Result = No, I don't need to make another tool call!
The result of -44 is the final answer for the original expression: "2 + 2 - 6 * 8".
I've used all the necessary tools (Multiplier, Adder, and Substractor) to break down the math expression and get the correct answer!
-44 is the correct answer. You could maybe throw in one more operation. However, in our tests using Llama 3.0, going over 4 operations no longer guarantees a correct result. It may be a matter of prompt engineering, but we also think that Yacana should continue improving. For the moment the LLM tends to contradict itself at some point, which throws the final result off. In the next update, Yacana will try to detect reasoning errors and self-correct between each tool call. Stay tuned for updates.