II. Agents & Tasks

Creating Agents and Tasks to solve

Creating an Agent

Now that you have an Ollama server running and Yacana installed, let's create our first agent!

Create a Python file with this content:


from yacana import OllamaAgent

agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant", endpoint="http://127.0.0.1:11434")
					

The OllamaAgent() class takes...

2 mandatory parameters:
  1. The agent name: Choose something short that reflects the agent's global focus.
  2. A model name: The Ollama model that this Agent will use. You may have multiple Agents running different models. Some models are better suited to specific jobs, so it can be interesting to mix LLM models. Use ollama list to list the models you have downloaded.
And many optional parameters that we will discover in this tutorial. Here we can see 2 of them:
  1. The system prompt: Helps define the personality of the Agent.
  2. The endpoint: The URL of your Ollama instance. It points by default to your localhost and to the Ollama default port. If you are running Ollama on your own computer, you can remove this optional parameter and the default value will be used (see the minimal example below).
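
If Ollama runs locally on the default port, both optional parameters can simply be omitted. A minimal version using only the two mandatory arguments would look like this:


from yacana import OllamaAgent

# System prompt and endpoint are optional: the default endpoint
# (http://127.0.0.1:11434) will be used.
agent1 = OllamaAgent("AI assistant", "llama3.1:8b")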

Creating Tasks

The whole concept of the framework lies here. If you understand this section then you have mastered 80% of Yacana's building principle. Like in LangGraph, where you create nodes that you link together, Yacana has a Task() class which takes as argument a task to solve. There are no hardcoded links between the Tasks, so it's easy to refactor and move things around. The important concept to grasp here is that through these Tasks you will give instructions to the LLM in a way that the result must be computable. Meaning instructions must be clear and the prompt must reflect that. It's a Task, it's a job, it's something that needs solving, but written like it is given as an order! Let's see some examples:


from yacana import OllamaAgent, Task

# First, let's make a basic AI agent
agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")

# Now we create a task and assign the agent1 to the task
task1 = Task(f"Solve the equation 2 + 2 and output the result", agent1)

# For something to happen, you must call the .solve() method on your task.
task1.solve()
					

What's happening above?

  • First, we instantiated an Agent with the llama3.1:8b model. You might need to update that depending on which LLM you downloaded from Ollama.
  • Second, we instantiated a Task.
  • Third, we asked that the Task be solved.

To ease the learning curve, the default logging level is INFO. It shows what is going on in Yacana. Note that NOT ALL prompts are shown.

The output should look like this:


INFO: [PROMPT][To: AI assistant]: Solve the equation 2 + 2 and output the result

INFO: [AI_RESPONSE][From: AI assistant]: The answer to the equation 2 + 2 is... (drumroll please)... 4!
							
So, the result of solving the equation 2 + 2 is indeed 4.
							

If your terminal is working normally you should see the Task's prompts in green, starting with the '[PROMPT]' string. The LLM's answer should appear in purple and start with the '[AI_RESPONSE]' string.

Task parameters

The Task class takes 2 mandatory parameters:

  • The prompt: It is the task to be solved. Use imperative language, be precise, ask for step-by-step thinking for complex Tasks, and describe the expected output if needed.
  • The agent: The Agent that will be assigned to this Task and will be in charge of solving it.

Many other parameters can be given to a Task. We will see some of them in the following sections of this tutorial. But you can already check out the Task class documentation.

In what way is this disruptive compared to other frameworks?

In the above code snippet, we assigned the agent to the Task. So it's the Task that leads the direction the AI takes. In most other frameworks it's the opposite: you assign work to an existing agent. This reversed approach gives you fine-grained control over each resolution step, as the LLM only follows breadcrumbs (the Tasks). The pattern will become even more obvious when we get to the Tool section of this tutorial. As you'll see, Tools are also assigned at the Task level and not to the Agent directly.

Compared with LangGraph, we cannot generate a call graph as an image because we don't bind the Tasks together explicitly. However, Yacana's way gives more flexibility and allows a hierarchical way of scheduling Tasks while keeping control of the flow. It also allows creating new Tasks dynamically if the need arises. You should rely on your programming skills and good OOP to keep your code clean and your Task ordering sound. Yacana will never impose hard-linked code or flat configurations.
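
For instance, because Tasks are plain Python objects, nothing prevents you from creating them dynamically inside regular control flow. Here is a minimal sketch (the topic list and prompts are purely illustrative):


from yacana import OllamaAgent, Task

agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")

# Hypothetical list of topics: one Task is created dynamically per topic.
topics = ["volcanoes", "glaciers"]

for topic in topics:
    summary: str = Task(f"Give me one short fact about {topic}.", agent1).solve().content
    print(f"{topic}: {summary}")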


Getting the result of a Task

Although the logs appear in the terminal's standard output, we still need to extract the LLM's response to the Task in order to use it.
Getting the string message out of it is quite easy as the .solve() method returns a Message() instance.
Maybe you are thinking "Oh no, another class to deal with". Well, let me tell you that it's always better to have an OOP class than some semi-random Python dictionary whose keys you'll forget in a matter of minutes. Also, the Message class is very straightforward. It exposes a content attribute. Update the current code to look like this:


from yacana import OllamaAgent, Task, Message

# First, let's make a basic AI agent
agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")

# Now we create a task and assign the agent1 to the task
task1 = Task(f"Solve the equation 2 + 2 and output the result", agent1)

# So that something actually happens you must call the .solve() method on your task
my_message: Message = task1.solve()

# Printing the LLM's response
print(f"The AI response to our task is : {my_message.content}")
						

There you go! Give it a try.

Note that we used type hints to annotate all variable declarations with their type (my_message: Message). Yacana's source code is fully type-hinted so that your IDE always knows what type it's dealing with and can propose the best methods and arguments. We recommend that you do the same, as it is an industry best practice.


Don't like having 100 lines of code for something simple? Then chain them all!


from yacana import OllamaAgent, Task

# First, let's make a basic AI agent
agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")

# Creating the task, solving it, extracting the result
result: str = Task(f'Solve the equation 2 + 2 and output the result', agent1).solve().content
# Print the result
print(f"The AI response to our task is: {result}")
					

Chaining Tasks

Agents keep track of the history of what they did (aka, all the Tasks they solved). So just create a second Task and assign the same Agent to it. For instance, let's multiply the result of the initial Task by 2. Append this to our current script:


task2_result: str = Task(f'Multiply by 2 our previous result', agent1).solve().content
print(f"The AI response to our second task is : {task2_result}")
					

You should get:


The AI response to our second task is : If we multiply the previous result of 4 by 2, we get:
							
8
					

Without tools, this relies solely on the LLM's ability to do the maths and depends on its training.


See? The assigned Agent remembered that it had solved Task1 previously and used this information to solve the second task.
You can chain as many Tasks as you need. You can build anything now!
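
For example, nothing stops you from extending the chain with a third Task that builds on the previous two (the prompt below is only an illustration):


# A third Task reusing the same Agent: it still remembers the two previous results.
task3_result: str = Task("Subtract 3 from that last result and output ONLY the number.", agent1).solve().content
print(f"The AI response to our third task is : {task3_result}")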


Logging levels

As entering the AI landscape can get a bit hairy, we decided to leave the default log level at INFO. This logs all the requests made to the LLM to the standard output.
Note that NOT everything of Yacana's internal magic appears in these logs. We don't show everything because many time-traveling things happen inside an Agent's history, and printing a log at the time it is generated wouldn't always make sense.
However, we try to log as much information as possible to help you understand what is happening internally and to let you tweak your prompts accordingly.

Nonetheless, you are the master of what is logged and what isn't. You probably don't want Yacana logging everything when your app runs in production.
There are five log levels, plus None to disable logging entirely:

  1. "DEBUG"
  2. "INFO" Default
  3. "WARNING"
  4. "ERROR"
  5. "CRITICAL"
  6. None No logs

To configure the log level, simply add this line at the start of your program:


from yacana import LoggerManager

LoggerManager.set_log_level("INFO")
					

Note that Yacana utilizes the Python logging package. This means that setting the level to "DEBUG" will print other libraries' logs at the debug level too.

If you need a library to stop spamming, you can try the following:


from yacana import LoggerManager

LoggerManager.set_library_log_level("httpx", "WARNING")
					

The above example sets the logging level of the httpx network library to WARNING, thus reducing the log spam.


Let's build

Using what we know, let's build a simple chat interface:


from yacana import OllamaAgent, Task, GenericMessage, LoggerManager

LoggerManager.set_log_level(None)

ollama_agent = OllamaAgent("AI assistant", "llama3.2:latest", system_prompt="You are a helpful AI assistant", endpoint="http://127.0.0.1:11434")
print("Ask me questions!")
while True:
    user_input = input("> ")
    message: GenericMessage = Task(user_input, ollama_agent).solve()
    print(message.content)
                    
Output:

Ask me questions!
> Why do boats float (short answer)
Boats float due to their displacement in water, which creates an upward buoyant force equal to the weight of the fluid (water) displaced. According to Archimedes' Principle, any object partially or fully submerged in a fluid will experience an upward buoyant force that equals the weight of the fluid it displaces.
>
                    
If we change the agent's system prompt we can talk to a pirate!

ollama_agent = OllamaAgent("AI assistant", "llama3.2:latest", system_prompt="You are a pirate", endpoint="http://127.0.0.1:11434")
                    
Output:

Ask me questions!
> Where is the map ?
*looks around cautiously, then leans in close*

Ahoy, matey! I be thinkin' ye be lookin' fer me trusty treasure map, eh? *winks*

Alright, I'll let ye in on a little secret. It's hidden... *pauses for dramatic effect* ...on the island o' Tortuga! Ye can find it at the old windmill on the outskirts o' town. But be warned, matey: ye won't be the only scurvy dog afterin' that map!

Now, I be trustin' ye to keep this little chat between us, savvy?
>
                    


Configuring LLM's settings

For advanced users, Yacana provides a way to tweak the LLM's runtime behavior!
For instance, lowering the temperature setting makes the model less creative in its responses. On the contrary, raising it will make the LLM more chatty and creative.
Yacana provides a class that exposes all the possible LLM properties. If you need a good explanation of each of them, I recommend the excellent video Matt Williams made on this subject.

These settings are set at the Agent level so that you can have the same underlying model used by two separate agents and have them behave differently.

The OllamaModelSettings class is tailored for the Ollama backend. You can use OpenAiModelSettings to configure non-Ollama LLMs with their own set of available settings.

We use the OllamaModelSettings class to configure the settings we need.

For example, let's lower the temperature of an Agent to 0.4:


from yacana import OllamaModelSettings, OllamaAgent

ms = OllamaModelSettings(temperature=0.4)

agent1 = OllamaAgent("Ai assistant", "llama3.1:8b", model_settings=ms)
					

If you're wondering what the default values are when not set: Ollama sets the defaults for you. They can also be overridden in the model config file (which looks like a Dockerfile but for LLMs), and finally, you can set them through Yacana at runtime.

A good way to show how this can have a real impact on the output is by setting the num_predict parameter. It controls the maximum number of tokens the LLM should generate. Let's run the same Task twice but with different num_predict values:


from yacana import OllamaModelSettings, OllamaAgent, Task

# Setting temperature to 0.4 and max tokens to 100
ms = OllamaModelSettings(temperature=0.4, num_predict=100)

agent1 = OllamaAgent("Ai assistant", "llama3.1:8b", model_settings=ms)
Task("Why is the sky blue ?", agent1).solve()

print("-------------------")

# Setting max tokens to 15
ms = OllamaModelSettings(num_predict=15)

agent2 = OllamaAgent("Ai assistant", "llama3.1:8b", model_settings=ms)
Task("Why is the sky blue ?", agent2).solve()
					

▶️ Output:

INFO: [PROMPT]: Why is the sky blue ?

INFO: [AI_RESPONSE]: The sky appears blue because of a phenomenon called Rayleigh scattering, named after the British physicist Lord Rayleigh. Here's what happens:

1. **Sunlight**: When sunlight enters Earth's atmosphere, it contains all the colors of the visible spectrum (red, orange, yellow, green, blue, indigo, and violet).
2. **Molecules**: The atmosphere is made up of tiny molecules of gases like nitrogen (N2) and oxygen (O2). These molecules are much smaller than

-------------------

INFO: [PROMPT]: Why is the sky blue ?

INFO: [AI_RESPONSE]: The sky appears blue because of a phenomenon called Rayleigh scattering, named after
					

As you can see above, the two agents didn't output the same number of tokens.
For OpenAiAgent the method is the same, but you use OpenAiModelSettings instead of OllamaModelSettings. For more information please refer to the Other inference servers section.
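
As a quick sketch (assuming OpenAiAgent accepts a model_settings parameter the same way OllamaAgent does, and that OpenAiModelSettings exposes a temperature setting; the model name and token below are placeholders):


from yacana import OpenAiModelSettings, OpenAiAgent

# Assumption: OpenAiModelSettings takes a 'temperature' keyword like OllamaModelSettings does.
ms = OpenAiModelSettings(temperature=0.4)

# Assumption: OpenAiAgent accepts 'model_settings' just like OllamaAgent; replace the model and token with your own.
agent1 = OpenAiAgent("AI assistant", "gpt-4o-mini", api_token="your-api-token", model_settings=ms)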


Accessing the underlying client library

Yacana cannot recreate every functionality made available by the underlying client library. This is why we provide direct access to the underlying library. This way, if you need more control over the LLM, you can set the desired parameters yourself.

You can pass any parameter supported by the underlying library to the Agent during runtime.

Configuration of the runtime can be done at two separate levels using the runtime_config parameter:
  • At the Agent level:

vllm_agent = OpenAiAgent("AI assistant", "meta-llama/Llama-3.1-8B-Instruct", endpoint="http://127.0.0.1:8000/v1", api_token="", runtime_config={"extra_body": {'guided_decoding_backend': 'outlines'}})
                    

  • At the task level:

Task("Tell me 2 facts about Canada.", agent, runtime_config={"extra_body": {'guided_decoding_backend': 'xgrammar'}}).solve()
                    

Configuration at the Agent level will be effective during the whole agent's life span.
Configuration at the Task level will be effective for the current Task only and will override parameters given at the Agent level.
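
To make the precedence concrete, here is a small sketch combining both levels, reusing the vLLM example above (the backend values are illustrative): the Task-level runtime_config overrides the Agent-level one for that single Task only.


from yacana import OpenAiAgent, Task

# Agent-level runtime_config: applies to every Task this agent solves.
vllm_agent = OpenAiAgent("AI assistant", "meta-llama/Llama-3.1-8B-Instruct", endpoint="http://127.0.0.1:8000/v1", api_token="", runtime_config={"extra_body": {'guided_decoding_backend': 'outlines'}})

# This Task uses the Agent-level configuration ('outlines').
Task("Tell me 2 facts about Canada.", vllm_agent).solve()

# This Task overrides it for this one call only ('xgrammar'); the Agent-level value applies again afterwards.
Task("Tell me 2 facts about Japan.", vllm_agent, runtime_config={"extra_body": {'guided_decoding_backend': 'xgrammar'}}).solve()
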
To look at a more complete example, please refer to the VLLM inference server section.


Routing

Concepts of routing

Other frameworks tend to make abstractions for everything, even things that don't need any. That's why I'll show you how to do routing with only what we have seen so far. Yacana doesn't provide a routing abstraction because there is no need for one.

But what is routing? Well, having LLMs solve a Task and then chaining many others in a sequence is good, but to be efficient you have to create conditional workflows, in particular when using local LLMs that don't have the power to solve all Tasks with only one prompt. You must create an AI workflow in advance that will go forward step by step and converge to some expected result. AI allows you to deal with some level of unknown, but you can't expect a master brain (like in crewAI + ChatGPT) that distributes tasks to agents and achieves an expected result. It's IMPOSSIBLE with local LLMs. They are too dumb! Therefore they need you to help them along their path. This is why LangGraph works well with local LLMs, and Yacana does too. You should create workflows and, when conditions are met, switch from one branch to another to handle more specific cases.


The most common routing mechanic is "yes" / "no". Depending on the result, your program can do different things next. Let's see an example:

from yacana import OllamaAgent, Task

agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")

# Let's invent a question about 'plants'
question: str = "Why do leaves fall in autumn ?"

# Ask if the question is plant related: yes or no
router_answer: str = Task(f"Is the following question about plants ? <question>{question}</question> Answer ONLY by 'yes' or 'no'.", agent1).solve().content

if "yes" in router_answer.lower():
    print("Yes, question is about plants")
    # next step in the workflow that involves plants

elif "no" in router_answer.lower():
    print("No, question is NOT about plants")
    # Next step in the workflow that DOESN'T involve plants

You should get the following output:


INFO: [PROMPT]: Is the following question about plants? <question>Why do leaves fall in autumn ?</question> Answer ONLY by 'yes' or 'no'.

INFO: [AI_RESPONSE]: yes
Yes, question is about plants

➡️ Many things are happening here. We didn't implement an abstraction to simplify things, so the downside is that you must learn a few tricks:

  1. Always compare with a lowercased string: Because LLMs have a mind of their own, they do not always answer a straight "yes". Sometimes you get "Yes" or even all-caps "YES" for no reason.
  2. Always start by searching for "yes": We do a substring match using Python's in keyword because the LLM doesn't always respect the instruction to output ONLY 'yes' or 'no'. Sometimes you'll get "yes!" or "Great idea, I say yes". A substring match will catch "yes" anywhere in the LLM's answer.
    But what if you looked for "no" instead and the LLM generated "Not sure but I would say yes"? 🤔
    => Because we search for substrings, the condition would match the "no" part of the word "Not" even though the LLM said yes.
    We could use a regex to fix this, but it's easier to just start the condition by looking for "yes", as there are no English words that contain "yes" as a substring (at least no common ones ^^).
  3. Force the LLM to respect the instruction: Tell it to answer ONLY with 'xx'. See the all-caps "ONLY"? Also, the single quotes around the possible choices 'yes' and 'no' help the LLM, which sees them as delimiters.
  4. Use formatting tags: The question mentioned in the prompt is given in custom <question> tags. LLMs love delimiters. This way the LLM knows where the question starts and where it ends. This technique helps differentiate your prompt from the dynamic part. You don't have to add tags everywhere, but they can prove useful. Do not abuse them or the LLM might start using them in its response. Just keep this trick in mind.

This is all basic prompt engineering. If you wish to build an app with local models, you will definitely have to learn these tricks. LLMs are unpredictable.
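
If you route on "yes"/"no" often, you can wrap these tricks in a small helper of your own. This is not a Yacana feature, just a hypothetical utility applying the rules above:


from yacana import OllamaAgent, Task

def is_yes(llm_answer: str) -> bool:
    # Lowercase the answer and search for "yes" first: searching for "no" could
    # accidentally match the "no" inside words like "Not", even when the LLM meant yes.
    return "yes" in llm_answer.lower()

agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")
question: str = "Why do leaves fall in autumn ?"

router_answer: str = Task(f"Is the following question about plants ? <question>{question}</question> Answer ONLY by 'yes' or 'no'.", agent1).solve().content

if is_yes(router_answer):
    print("Yes, question is about plants")
else:
    print("No, question is NOT about plants")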


Self-reflection routing

As local models are a bit dumb, you need to let them think on their own before making a decision. This is called self-reflection. It will cost one more Task to solve, but you'll get significantly better results during routing, in particular when routing on more complex things (other than "yes"/"no").

Let's update the routing section of our code to look like this:

# Asking for a reasoning step
Task(f"Is the following question about plants ? {question}\nExplain your reasoning.", agent1).solve()

# Basic yes/no routing based on the previous reasoning
router_answer: str = Task(f"To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.", agent1).solve().content

We added one more Task that executes BEFORE the router.

You should get this type of output:


INFO: [PROMPT][To: AI assistant]: Is the following question about plants ? Why do leaves fall in autumn ?
Explain your reasoning

INFO: [AI_RESPONSE][From: AI assistant]: A great question!

Yes, I believe this question is indeed about plants. Specifically, it's related to the fascinating process of leaf senescence and abscission that occurs during autumn (or fall) in many plant species.

Here's why:

1. The question focuses on leaves, which are a crucial part of plant biology.
2. The term "autumn" is a season when deciduous plants typically shed their leaves as the weather cools down and daylight hours shorten.
3. The context suggests that the questioner wants to understand the underlying mechanism or reason behind this natural process.

Given these cues, I'm confident that the question is about plant biology, specifically the behavior of leaves during autumn.

INFO: [PROMPT][To: AI assistant]: To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.

INFO: [AI_RESPONSE][From: AI assistant]: Yes
Yes, question is about plants

See how the LLM had an "intense" reflection on the subject? This is very good. You want LLMs to reason like this. It will improve the overall result for the Tasks that follow.

▶️ The prompt engineering techniques used here are:

  1. Make it think: The instruction "Explain your reasoning." makes it generate a logical answer. Note that if the model is bad at reasoning or makes a mistake during this step, it may result in extremely bad situations. But fear not, failsafes can be built to limit bad reasoning. For instance, having another LLM check the logic and interact with the original Agent (see GroupSolve later on) to show it its mistake. You could also give tools to the Agent that will help it achieve the truth and not rely solely on its reasoning abilities (see Tools later on).
  2. Make it a two-shot process: Now that we have 2 Tasks instead of one, the second one can focus on its subtask: choosing between "yes" or "no". Cutting objectives into multiple sub-tasks gives better performance. This is why using an agentic framework is great, but it's also why it consumes a lot of tokens and why having "free" local LLMs is great!

Full code:

from yacana import OllamaAgent, Task

agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")

# Let's invent a question about 'plants'
question: str = "Why do leaves fall in autumn ?"

# Asking for a reasoning step
Task(f"Is the following question about plants ? {question}\nExplain your reasoning.", agent1).solve()

# Basic yes/no routing based on the previous reasoning
router_answer: str = Task(f"To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.", agent1).solve().content

if "yes" in router_answer.lower():
    print("Yes, question is about plants")
    # next step in the workflow that involves plants

elif "no" in router_answer.lower():
    print("No, question is NOT about plants")
    # Next step in the workflow that DOESN'T involve plants


Cleaning history

Keeping the self-reflection prompt and the associated answer is always good. It helps guardrail the LLM. The "yes"/"no" router, on the other hand, adds unnecessary noise to the Agent's history. Moreover, local models don't have huge context windows, so removing useless interactions is always good.
The "yes"/"no" router is only useful once. Then we should make the Agent forget it ever happened after it answered. No need to keep that… This is why the Task class offers an optional parameter: forget=<bool>.

Update the router line with this new parameter:


router_answer: str = Task(f"To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.", agent1, forget=True).solve().content
					

Now, even though you cannot see it, the Agent doesn't remember solving this Task. In the next section, we'll see how to access and manipulate the history. Then, you'll be able to see what the Agent remembers!


Routing demonstration

For this demo, we'll make an app that takes a user query (replace the static string with a Python input() if you wish) and checks whether the query is about plants.
If it is not, we end the workflow there. However, if it is about plants, the flow branches and checks whether a plant type/name was given. If one was, it is extracted and knowledge about the plant is shown before answering the original question. If not, the app simply answers the query as is.


Read from bottom ⬇️ to top ⬆️. (Though, the Agent and the question variables are defined globally at the top)

from yacana import OllamaAgent, Task

# Declare agent
agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")


# Asking a question
question: str = "Why do leaves fall in autumn ?"


# Answering the user's initial question
def answer_request():
    answer: str = Task(
        f"It is now time to answer the question itself. The question was {question}. Answer it.",
        agent1).solve().content
    print(answer)


# Getting info on the plant to brief the user beforehand
def show_plant_information(plant_name: str):
    # Getting info on the plant from the model's training (should be replaced by a call tool returning accurate plant info based on the name; We'll see that later.) 
    plant_description: str = Task(
        f"What do you know about the plant {plant_name} ? Get me the scientific name but stay concise.",
        agent1).solve().content

    # Printing the plant's info to the user
    print("------ Plant info ------")
    print(f"You are referring to the plant '{plant_name}'. Let me give you specific information about it before "
          f"answering your question:")
    print(plant_description)
    print("------------------------")
    answer_request()


# Checking if the question has a specific plant specified
def check_has_specific_plant():
    # Self-reflection
    Task(
        f"In your opinion, does the question mention a specific plant name or one that you can identify ?",
        agent1).solve()

    # Yes / no routing again.
    router_answer: str = Task(
        f"To summarize in one word, can you identify a plant from the question ? Answer ONLY by 'yes' or 'no'.",
        agent1, forget=True,).solve().content

    # Routing
    if "yes" in router_answer.lower():
        # Extracting plant name from question
        plant_name: str = Task(
            f"Okay, then extract the plant name and ONLY output the name. Nothing else.",
            agent1, forget=True).solve().content
        show_plant_information(plant_name)

    elif "no" in router_answer.lower():
        # No plant name was found. Let's just answer the question.
        print("No specific plant specification was given. I'll just answer your question then.")
        answer_request()


# Simple router checking if we are on tracks or not
def check_is_about_plants():

    # self-reflection
    Task(f"Is the following question about plants ? {question}\nExplain your reasoning.",
         agent1).solve()

    # Actual router based on the previous reflection
    router_answer: str = Task(
        f"To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.",
        agent1, forget=True,).solve().content

    # yes / no routing
    if "yes" in router_answer.lower():
        print("Question is about plants !")
        check_has_specific_plant()

    elif "no" in router_answer.lower():
        print("Question is NOT about plants sorry.")
        # We stop here; This app is only about plants!


# Starting point
check_is_about_plants()
					

Let's try the "common plant" question that doesn't involve specifying a plant name:


question: str = "Why do leaves fall in autumn ?"
						

▶️ Outputs :

INFO: [PROMPT][To: AI assistant]: Is the following question about plants ? <question>Why do leaves fall in autumn ?</question>
Explain your reasoning.

INFO: [AI_RESPONSE][From: AI assistant]: A great question!

Yes, I believe this question is indeed about plants! Here's why:

* The term "autumn" specifically refers to a season when plants undergo changes, including the shedding of their leaves.
* Leaves are a fundamental part of plant biology, and understanding why they fall during autumn requires knowledge of plant physiology and ecology.

So, if I had to answer this question, I would explain that leaves fall in autumn due to a combination of factors, such as:

1. Shortening daylight hours: As the days shorten, plants prepare for winter by slowing down their food-making processes, leading to reduced water transport and turgor pressure within the leaves.
[BLABLA]
4. Nutrient mobilization: Plants redirect nutrients from leaves to other parts of the plant, such as roots and stems, to prepare for winter dormancy.

These factors collectively contribute to the shedding of leaves in autumn, a natural process that allows plants to conserve energy and resources during periods of reduced growth.

INFO: [PROMPT][To: AI assistant]: To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.

INFO: [AI_RESPONSE][From: AI assistant]: Yes
Question is about plants !

INFO: [PROMPT][To: AI assistant]: In your opinion, does the question mention a specific plant name or one that you can identify ?

INFO: [AI_RESPONSE][From: AI assistant]: Upon re-reading the question, I realize that it doesn't mention a specific plant name. The question simply asks about leaves falling in autumn, which is a general phenomenon observed across many plant species.

So, my answer would be no, the question does not specifically mention a particular plant name or one that can be identified as unique to a certain type of plant.

INFO: [PROMPT][To: AI assistant]: To summarize in one word, can you identify a plant from the question ? Answer ONLY by 'yes' or 'no'.

INFO: [AI_RESPONSE][From: AI assistant]: no
No specific plant specification was given. I'll just answer your question then.

INFO: [PROMPT][To: AI assistant]: It is now time to answer the question itself. The question was Why do leaves fall in autumn ?. Answer it.

INFO: [AI_RESPONSE][From: AI assistant]: I'd be delighted to!

Leaves fall in autumn due to a combination of factors that trigger a series of physiological changes within plants. Here's a breakdown of the main reasons:

1. **Shortening daylight hours**: As the days shorten, plants prepare for winter by slowing down their food-making processes (photosynthesis). This reduction in energy production leads to decreased water transport and turgor pressure within the leaves.
[BLABLA]
4. **Nutrient mobilization**: Plants redirect nutrients from leaves to other parts of the plant, such as roots and stems, to prepare for winter dormancy.

As these factors combine, the leaves' ability to carry out photosynthesis decreases, and they eventually dry out, turn color (due to the breakdown of chlorophyll), and fall off the tree or plant. This process is a natural adaptation that allows plants to:

* Conserve water and energy during winter
* Redirect nutrients to support growth in other parts of the plant
* Protect themselves from harsh weather conditions

So, there you have it! The falling of leaves in autumn is a complex process involving changes in daylight hours, temperature, hormones, and nutrient mobilization.
I'd be delighted to!

Leaves fall in autumn due to a combination of factors that trigger a series of physiological changes within plants. Here's a breakdown of the main reasons:

1. **Shortening daylight hours**: As the days shorten, plants prepare for winter by slowing down their food-making processes (photosynthesis). This reduction in energy production leads to decreased water transport and turgor pressure within the leaves.
[BLABLA]
4. **Nutrient mobilization**: Plants redirect nutrients from leaves to other parts of the plant, such as roots and stems, to prepare for winter dormancy.

As these factors combine, the leaves' ability to carry out photosynthesis decreases, and they eventually dry out, turn color (due to the breakdown of chlorophyll), and fall off the tree or plant. This process is a natural adaptation that allows plants to:

* Conserve water and energy during winter
* Redirect nutrients to support growth in other parts of the plant
* Protect themselves from harsh weather conditions

So, there you have it! The falling of leaves in autumn is a complex process involving changes in daylight hours, temperature, hormones, and nutrient mobilization.

Later we'll see how to remove the automatic "[INFO]" prints so that only YOUR prints are shown. For now they're quite useful for debugging and understanding how things work, so we'll leave them on in this tutorial section.
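
As a reminder, hiding them is the same LoggerManager call we used in the chat example earlier:


from yacana import LoggerManager

# Disables Yacana's automatic [INFO] logs so only your own print() calls are shown.
LoggerManager.set_log_level(None)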

Updating the question to match the specific "plant info" workflow:

question: str = "Why do poppies fade so quickly ?"

▶️ Outputs :

INFO: [PROMPT][To: AI assistant]: Is the following question about plants ? <question>Why do poppies fade so quickly ?</question>
Explain your reasoning.

INFO: [AI_RESPONSE][From: AI assistant]: A lovely question!

Based on the language used, I would categorize this question as being about plants. Here's why:

1. The question mentions a specific type of plant: "poppies". This suggests that the inquiry is related to botany.
2. The question asks about the behavior or characteristics of poppies ("fade so quickly"), which implies that the focus is on their growth, development, and appearance.

Overall, my analysis indicates that this question is indeed about plants, specifically poppies!

INFO: [PROMPT][To: AI assistant]: To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.

INFO: [AI_RESPONSE][From: AI assistant]: Yes
Question is about plants !

INFO: [PROMPT][To: AI assistant]: In your opinion, does the question mention a specific plant name or one that you can identify ?

INFO: [AI_RESPONSE][From: AI assistant]: I'd say that the question does mention a specific plant name: "poppies". Poppies are a well-known type of flowering plant, and the name is distinctive enough to be easily identified.

INFO: [PROMPT][To: AI assistant]: To summarize in one word, can you identify a plant from the question ? Answer ONLY by 'yes' or 'no'.

INFO: [AI_RESPONSE][From: AI assistant]: Yes

INFO: [PROMPT][To: AI assistant]: Okay, then extract the plant name and ONLY output the name. Nothing else.

INFO: [AI_RESPONSE][From: AI assistant]: Poppies

INFO: [PROMPT][To: AI assistant]: What do you know about the plant Poppies ? Get me the scientific name but stay concise.

INFO: [AI_RESPONSE][From: AI assistant]: Poppies belong to the family Papaveraceae. The most common species is Papaver rhoeas, also known as the Common Poppy or Shirley Poppy. It's an annual flowering plant native to Europe and Asia.
------ Plant info ------
You are referring to the plant 'Poppies'. Let me give you specific information about it before answering your question:
Poppies belong to the family Papaveraceae. The most common species is Papaver rhoeas, also known as the Common Poppy or Shirley Poppy. It's an annual flowering plant native to Europe and Asia.
------------------------

INFO: [PROMPT][To: AI assistant]: It is now time to answer the question itself. The question was Why do poppies fade so quickly ?. Answer it.

INFO: [AI_RESPONSE][From: AI assistant]: Poppies are known for their ephemeral nature, and there are several reasons why they might appear to fade quickly:

1. **Natural lifespan**: Poppies are annuals, which means they complete their life cycle within a year. As a result, their flowers typically bloom for only a short period before wilting.
[BLABLA]
5. **Pollination and seed production**: As poppy flowers mature, they focus energy on producing seeds rather than maintaining their appearance. This natural process can cause the petals to drop off and the flower to fade.

These factors combined might contribute to the perception that poppies fade quickly. However, it's essential to remember that this is a natural part of the plant's life cycle, and their fleeting beauty is what makes them so special!
Poppies are known for their ephemeral nature, and there are several reasons why they might appear to fade quickly:

1. **Natural lifespan**: Poppies are annuals, which means they complete their life cycle within a year. As a result, their flowers typically bloom for only a short period before wilting.
[BLABLA]
5. **Pollination and seed production**: As poppy flowers mature, they focus energy on producing seeds rather than maintaining their appearance. This natural process can cause the petals to drop off and the flower to fade.

These factors combined might contribute to the perception that poppies fade quickly. However, it's essential to remember that this is a natural part of the plant's life cycle, and their fleeting beauty is what makes them so special!

Updating the question to match the "not related to plants" workflow:

question: str = "Why is the sky blue ?"

▶️ Outputs :

INFO: [PROMPT][To: AI assistant]: Is the following question about plants ? <question>Why is the sky blue ?</question>
Explain your reasoning.

INFO: [AI_RESPONSE][From: AI assistant]: No, the question "Why is the sky blue?" is not about plants. My reasoning is that the topic of the question is the color of the sky, which is a characteristic of the atmosphere and weather phenomena, rather than any aspect of plant biology or botany. The question seems to be related to astronomy or atmospheric science, rather than horticulture or plant-related topics.

INFO: [PROMPT][To: AI assistant]: To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.

INFO: [AI_RESPONSE][From: AI assistant]: No
Question is NOT about plants sorry.
				
