
The YouTube video for this section is still under creation. Please be patient ^^
Now that you have an Ollama server running and Yacana installed, let's create our first agent!
Create a Python file with this content:
from yacana import OllamaAgent
agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant", endpoint="http://127.0.0.1:11434")
The OllamaAgent() class takes the agent's name and the model to use as its first two arguments; the system_prompt and endpoint parameters are optional. Run ollama list in a terminal to list the models you have downloaded.
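The output should look something like this (illustrative; your models, IDs and sizes will differ):
NAME            ID              SIZE      MODIFIED
llama3.1:8b     a1b2c3d4e5f6    4.9 GB    2 days ago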
The whole concept of the framework lies here. If you understand the following section then you have mastered 80% of Yacana's building principles. Like in LangGraph, where you create nodes that you link together, Yacana has a Task() class which takes as argument a task to solve. There are no hardcoded links between the Tasks, so it's easy to refactor and move things around. The important concept to grasp is that through these Tasks you will give instructions to the LLM in a way that makes the result computable. That means instructions must be clear and the prompt must reflect that. It's a Task, it's a job, it's something that needs solving, but written as an order! Let's see some examples:
from yacana import OllamaAgent, Task
# First, let's make a basic AI agent
agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")
# Now we create a task and assign the agent1 to the task
task1 = Task("Solve the equation 2 + 2 and output the result", agent1)
# For something to happen, you must call the .solve() method on your task.
task1.solve()
What's happening above?
We used the llama3.1:8b model. You might need to update that depending on what LLM you downloaded from Ollama.
To ease the learning curve, the default logging level is INFO. It shows what is going on inside Yacana. Note that NOT ALL prompts are shown.
The output should look like this:
INFO: [PROMPT][To: AI assistant]: Solve the equation 2 + 2 and output the result
INFO: [AI_RESPONSE][From: AI assistant]: The answer to the equation 2 + 2 is... (drumroll please)... 4!
So, the result of solving the equation 2 + 2 is indeed 4.
If your terminal is working normally, you should see the Task's prompts in green, starting with the '[PROMPT]' string. The LLM's answer should appear in purple and start with the '[AI_RESPONSE]' string.
The Task class takes 2 mandatory parameters: the prompt describing the job to do, and the Agent that will solve it.
Many other parameters can be given to a Task. We will see some of them in the following sections of this tutorial. But you can already check out the Task class documentation.
In the above code snippet, we assigned the agent to the Task. So it's the Task that leads the direction the AI takes. In most other frameworks it's the opposite: you assign work to an existing agent. This reversed way allows fine-grained control over each resolution step, as the LLM only follows breadcrumbs (the Tasks). The pattern will become even more obvious when we get to the Tool section of this tutorial. As you'll see, Tools are also assigned at the Task level and not to the Agent directly.
Compared with LangGraph, we cannot generate a call graph as an image because the Tasks are not bound together explicitly. However, Yacana's way gives more flexibility: it allows scheduling Tasks hierarchically in plain code, keeping control of the flow, and even creating new Tasks dynamically if the need arises. Rely on your programming skills and good OOP to keep the code clean and the Task ordering sound. Yacana will never impose hard-linked code or flat configurations.
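For instance, here is a minimal sketch (the questions list is just an illustration) showing that Tasks can be spawned dynamically from plain Python control flow:
from yacana import OllamaAgent, Task

agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")

# Tasks can be created on the fly from any Python data structure
questions: list[str] = ["What is 2 + 2 ?", "What is 3 * 3 ?"]

for question in questions:
    # Each loop iteration creates and solves a brand-new Task
    print(Task(question, agent1).solve().content)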
Although the logs appear in the terminal's standard output, we still need to extract the LLM's response to the Task in order to use it.
Getting the string message out of it is quite easy, as the .solve() method returns a Message() class. Maybe you are thinking "Oh nooo, another class to deal with". Well, let me tell you that it's always better to have an OOP class than some semi-random Python dictionary where you'll forget what keys it contains in a matter of minutes. Also, the Message class is very straightforward. It exposes a content attribute. Update the current code to look like this:
from yacana import OllamaAgent, Task, Message
# First, let's make a basic AI agent
agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")
# Now we create a task and assign the agent1 to the task
task1 = Task("Solve the equation 2 + 2 and output the result", agent1)
# So that something actually happens you must call the .solve() method on your task
my_message: Message = task1.solve()
# Printing the LLM's response
print(f"The AI response to our task is : {my_message.content}")
There you go! Give it a try.
Note that we used type hints to annotate each variable declaration with its type (my_message: Message). Yacana's source code is fully type-hinted so that your IDE always knows what type it's dealing with and proposes the best methods and arguments. We recommend that you do the same, as it follows industry best practices.
Don't like having 100 lines of code for something simple? Then chain them all!
from yacana import OllamaAgent, Task
# First, let's make a basic AI agent
agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")
# Creating the task, solving it, extracting the result
result: str = Task('Solve the equation 2 + 2 and output the result', agent1).solve().content
# Print the result
print(f"The AI response to our task is: {result}")
Agents keep track of the history of what they did (aka, all the Tasks they solved). So just create a second Task and assign the same Agent to it. For instance, let's multiply the result of the initial Task by 2. Append this to our current script:
task2_result: str = Task('Multiply by 2 our previous result', agent1).solve().content
print(f"The AI response to our second task is : {task2_result}")
You should get:
The AI response to our task is: If we multiply the previous result of 4 by 2, we get:
8
Without tools this only relies on the LLM's ability to do the maths and is dependent on its training.
See? The assigned Agent remembered that it had solved Task1 previously and used this
information to solve the second task.
You can chain as many Tasks as you need. You can build anything now!
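For example, here is a minimal sketch (assuming two Agents; each Agent keeps its own separate history) where the first Agent's result is injected into the second Agent's prompt:
from yacana import OllamaAgent, Task

solver = OllamaAgent("Solver", "llama3.1:8b", system_prompt="You are a helpful AI assistant")
reviewer = OllamaAgent("Reviewer", "llama3.1:8b", system_prompt="You are a strict math teacher")

# The first agent solves the equation
result: str = Task("Solve the equation 2 + 2 and output the result", solver).solve().content

# The reviewer knows nothing about the solver's history, so we pass the result explicitly
review: str = Task(f"A student answered the following to '2 + 2': <answer>{result}</answer>. Is the answer correct ?", reviewer).solve().content
print(review)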
As entering the AI landscape can get a bit hairy, we decided to leave the INFO log level on by default. This logs to the standard output all the requests made to the LLM.
Note that NOT everything of Yacana's internal magic appears in these logs. We don't show everything because many time-traveling things are going on inside an Agent's history, and printing a log at the time it is generated wouldn't always make sense.
However, we try to log a maximum of information to help you understand what is happening internally and allow you to tweak your prompts accordingly.
Nonetheless, you are the master of what is logged and what isn't, and you probably don't want Yacana logging everything when your app runs in production.
There are 5 levels of logs:
- "DEBUG"
- "INFO" (default)
- "WARNING"
- "ERROR"
- "CRITICAL"

Setting the level to None disables logging entirely.
To configure the log level, simply add this line at the start of your program:
from yacana import LoggerManager
LoggerManager.set_log_level("INFO")
Note that Yacana utilizes the Python logging package. This means that setting the level to "DEBUG" will print other libraries' logs at the debug level too.
If you need a library to stop spamming, you can try the following:
from yacana import LoggerManager
LoggerManager.set_library_log_level("httpx", "WARNING")
The above example sets the logging level of the httpx network library to WARNING, thus reducing log spam.
Using what we know, let's build a simple chat interface:
from yacana import OllamaAgent, Task, GenericMessage, LoggerManager
LoggerManager.set_log_level(None)
ollama_agent = OllamaAgent("AI assistant", "llama3.2:latest", system_prompt="You are a helpful AI assistant", endpoint="http://127.0.0.1:11434")
print("Ask me questions!")
while True:
    user_input = input("> ")
    message: GenericMessage = Task(user_input, ollama_agent).solve()
    print(message.content)
Output:
Ask me questions!
> Why do boats float (short answer)
Boats float due to their displacement in water, which creates an upward buoyant force equal to the weight of the fluid (water) displaced. According to Archimedes' Principle, any object partially or fully submerged in a fluid will experience an upward buoyant force that equals the weight of the fluid it displaces.
>
If we change the agent's system prompt we can talk to a pirate!
ollama_agent = OllamaAgent("AI assistant", "llama3.2:latest", system_prompt="You are a pirate", endpoint="http://127.0.0.1:11434")
Output:
Ask me questions!
> Where is the map ?
*looks around cautiously, then leans in close*
Ahoy, matey! I be thinkin' ye be lookin' fer me trusty treasure map, eh? *winks*
Alright, I'll let ye in on a little secret. It's hidden... *pauses for dramatic effect* ...on the island o' Tortuga! Ye can find it at the old windmill on the outskirts o' town. But be warned, matey: ye won't be the only scurvy dog afterin' that map!
Now, I be trustin' ye to keep this little chat between us, savvy?
>
For advanced users, Yacana provides a way to tweak the LLM's runtime behavior!
For instance, lowering the temperature setting makes the model less creative in its responses. On the contrary, raising this setting will make the LLM more chatty and creative.
Yacana provides you with a class that exposes all the possible LLM properties. Also, if you need a good explanation for each of them, I would recommend the excellent video Matt Williams did on this subject.
These settings are set at the Agent level so that you can have the same underlying model used by two separate agents and have them behave differently.
The OllamaModelSettings class is tailored for the Ollama backend. You can use OpenAiModelSettings to configure non-Ollama LLMs with their own set of available settings.
We use the OllamaModelSettings class to configure the settings we need.
For example, let's lower the temperature of an Agent to 0.4:
from yacana import OllamaModelSettings, OllamaAgent
ms = OllamaModelSettings(temperature=0.4)
agent1 = OllamaAgent("AI assistant", "llama3.1:8b", model_settings=ms)
If you're wondering what the default values are when these are not set: Ollama sets the defaults for you. They can also be overridden in the model config file (which looks like a Dockerfile but for LLMs), and finally you can set them through Yacana at runtime.
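For reference, overriding a default in such a model config file looks something like this (a minimal sketch of Ollama's Modelfile syntax; see the Ollama documentation for the full list of parameters):
FROM llama3.1:8b
PARAMETER temperature 0.4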
A good way to show how this can have a real impact on the output is by setting the num_predict parameter. This one controls how many tokens the LLM may generate. Let's run the same Task twice but with different num_predict values:
from yacana import OllamaModelSettings, OllamaAgent, Task
# Setting temperature to 0.4 and max tokens to 100
ms = OllamaModelSettings(temperature=0.4, num_predict=100)
agent1 = OllamaAgent("AI assistant", "llama3.1:8b", model_settings=ms)
Task("Why is the sky blue ?", agent1).solve()
print("-------------------")
# Setting max tokens to 15
ms = OllamaModelSettings(num_predict=15)
agent2 = OllamaAgent("AI assistant", "llama3.1:8b", model_settings=ms)
Task("Why is the sky blue ?", agent2).solve()
INFO: [PROMPT]: Why is the sky blue ?
INFO: [AI_RESPONSE]: The sky appears blue because of a phenomenon called Rayleigh scattering, named after the British physicist Lord Rayleigh. Here's what happens:
1. **Sunlight**: When sunlight enters Earth's atmosphere, it contains all the colors of the visible spectrum (red, orange, yellow, green, blue, indigo, and violet).
2. **Molecules**: The atmosphere is made up of tiny molecules of gases like nitrogen (N2) and oxygen (O2). These molecules are much smaller than
-------------------
INFO: [PROMPT]: Why is the sky blue ?
INFO: [AI_RESPONSE]: The sky appears blue because of a phenomenon called Rayleigh scattering, named after
As you can see above the two agents didn't output the same number of tokens.
For OpenAiAgent the method is the same, but instead of using OllamaModelSettings you can use OpenAiModelSettings. For more information please refer to the Other inference servers section.
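As a minimal sketch (assuming OpenAiModelSettings also accepts a temperature argument and that OpenAiAgent takes a model_settings parameter like its Ollama counterpart):
from yacana import OpenAiAgent, OpenAiModelSettings

# Hypothetical mirror of the Ollama example above; available settings depend on the backend
ms = OpenAiModelSettings(temperature=0.4)
agent1 = OpenAiAgent("AI assistant", "gpt-4o-mini", model_settings=ms, api_token="<your token>")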
Yacana cannot recreate every functionality made available by the underlying client library. This is why we provide direct access to it: if you need more control over the LLM, you can set the desired parameters yourself through the runtime_config parameter, available both on the Agent and on a Task:
vllm_agent = OpenAiAgent("AI assistant", "meta-llama/Llama-3.1-8B-Instruct", endpoint="http://127.0.0.1:8000/v1", api_token="", runtime_config={"extra_body": {'guided_decoding_backend': 'outlines'}})
Task("Tell me 2 facts about Canada.", vllm_agent, runtime_config={"extra_body": {'guided_decoding_backend': 'xgrammar'}}).solve()
Other frameworks tend to make abstractions for everything, even things that don't need any. That's why I'll show you how to do routing with only what we have seen earlier. Yacana doesn't provide a routing abstraction because there is no need for one.
But what is routing? Having LLMs solve a Task and then chaining many others in a sequence is good, but to be efficient you have to create conditional workflows, in particular when using local LLMs that don't have the power to solve all Tasks with only one prompt. You must design an AI workflow in advance that moves forward step by step and converges to some expected result. AI allows you to deal with some level of unknown, but you can't expect a master brain (like in crewAI + ChatGPT) that distributes tasks to agents and achieves an expected result. It's IMPOSSIBLE with local LLMs. They are too dumb! Therefore they need you to help them along their path. This is why LangGraph works well with local LLMs, and Yacana does too. You should create workflows and, when conditions are met, switch from one branch to another, treating more specific cases.
The most common routing mechanic is "yes" / "no". Depending on the result, your program can do different things next. Let's see an example:
from yacana import OllamaAgent, Task
agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")
# Let's invent a question about 'plants'
question: str = "Why do leaves fall in autumn ?"
# Ask if the question is plant related: yes or no
router_answer: str = Task(f"Is the following question about plants ? <question>{question}</question> Answer ONLY by 'yes' or 'no'.", agent1).solve().content
if "yes" in router_answer.lower():
    print("Yes, question is about plants")
    # Next step in the workflow that involves plants
elif "no" in router_answer.lower():
    print("No, question is NOT about plants")
    # Next step in the workflow that DOESN'T involve plants
You should get the following output:
INFO: [PROMPT]: Is the following question about plants? <question>Why do leaves fall in autumn ?</question> Answer ONLY by 'yes' or 'no'.
INFO: [AI_RESPONSE]: yes
Yes, question is about plants
➡️ Many things are happening here. We didn't implement an abstraction to simplify things, but the downside is that you must learn a few tricks:
- Always lowercase the LLM's answer before testing it. We asked for 'yes', but sometimes you get "Yes" or even a full-cap "YES" for no reason.
- Use the 'in' keyword of Python, because the LLM doesn't always respect the instruction of outputting ONLY 'yes' or 'no'. Sometimes you'll get "yes!" or "Great idea, I say yes". Substring match will match "yes" anywhere in the LLM answer.
- Ask the LLM to answer ONLY with 'xx'. See the use of the upper cap on "ONLY"? Also, the single quotes around the possible choices 'yes' and 'no' help the LLM, which sees them as delimiters.
- We wrapped the dynamic part of the prompt in <question> tags. LLMs love delimiters. This way the LLM knows when the question starts and when it ends, which helps differentiate your prompt from the dynamic part. You don't have to add tags everywhere, but they can prove useful. Do not abuse them or the LLM might start using them in its response. Just keep this trick in mind.
This is all basic prompt engineering. If you wish to build an app with local models you will definitely have to learn those tricks. LLMs are unpredictable.
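If you'd rather keep these tricks in one place, here is a small helper (plain Python, not part of Yacana) that applies the lowercase + substring checks described above:
def parse_yes_no(llm_answer: str) -> bool | None:
    # Lowercase first: the LLM may answer "Yes", "YES" or "yes!"
    normalized = llm_answer.lower()
    # Substring match because the instruction to output ONLY 'yes' or 'no' isn't always respected
    if "yes" in normalized:
        return True
    if "no" in normalized:
        return False
    # Neither word found: let the caller decide what to do (retry the Task, for example)
    return None

You could then route with if parse_yes_no(router_answer): and keep the exact same behavior as the snippet above.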
As local models are a bit dumb, you need to let them think on their own before making a decision. This is called self-reflection. It costs one more Task to solve, but you'll get significantly better results during routing, in particular when routing on more complex things (other than "yes"|"no").
Let's update the routing section of our code to look like this:
# Asking for a reasoning step
Task(f"Is the following question about plants ? {question} \nExplain your reasoning.", agent1).solve()
# Basic yes/no routing based on the previous reasoning
router_answer: str = Task("To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.", agent1).solve().content
We added one more Task that executes BEFORE the router.
You should get this type of output:
INFO: [PROMPT][To: AI assistant]: Is the following question about plants ? Why do leaves fall in autumn ?
Explain your reasoning
INFO: [AI_RESPONSE][From: AI assistant]: A great question!
Yes, I believe this question is indeed about plants. Specifically, it's related to the fascinating process of leaf senescence and abscission that occurs during autumn (or fall) in many plant species.
Here's why:
1. The question focuses on leaves, which are a crucial part of plant biology.
2. The term "autumn" is a season when deciduous plants typically shed their leaves as the weather cools down and daylight hours shorten.
3. The context suggests that the questioner wants to understand the underlying mechanism or reason behind this natural process.
Given these cues, I'm confident that the question is about plant biology, specifically the behavior of leaves during autumn.
INFO: [PROMPT][To: AI assistant]: To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.
INFO: [AI_RESPONSE][From: AI assistant]: Yes
Question is about plants
See how the LLM had an "intense" reflection on the subject. This is very good. You want LLMs to do reasoning like this. It will improve the overall result for the next Tasks to solve.
▶️ The prompt engineering techniques used here are: a first Task asking for explicit reasoning (self-reflection), then a router Task asking to "summarize in one word" and to answer ONLY by 'yes' or 'no', which keeps the answer easy to parse.
Full code:
from yacana import OllamaAgent, Task
agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")
# Let's invent a question about 'plants'
question: str = "Why do leaves fall in autumn ?"
# Asking for a reasoning step
Task(f"Is the following question about plants ? {question} \nExplain your reasoning.", agent1).solve()
# Basic yes/no routing based on the previous reasoning
router_answer: str = Task("To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.", agent1).solve().content
if "yes" in router_answer.lower():
    print("Yes, question is about plants")
    # Next step in the workflow that involves plants
elif "no" in router_answer.lower():
    print("No, question is NOT about plants")
    # Next step in the workflow that DOESN'T involve plants
Keeping the self-reflection prompt and the associated answer is always good: it helps guardrail the LLM. But the "yes"/"no" router, on the other hand, adds unnecessary noise to the Agent's history. Moreover, local models don't have huge context window sizes, so removing useless interactions is always good.
The "yes"/"no" router is only useful once. Then we should make the Agent forget it ever happened after it answered. No need to keep that… This is why the Task class offers an optional parameter: forget=<bool>.
Update the router line with this new parameter:
router_answer: str = Task("To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.", agent1, forget=True).solve().content
Now, even though you cannot see it, the Agent doesn't remember solving this Task. In the next section, we'll see how to access and manipulate the history. Then, you'll be able to see what the Agent remembers!
For this demo, we'll make an app that takes a user query (feel free to replace the static string with a Python input() if you wish) and checks whether the query is about plants. If it is not, we end the workflow there. However, if it is about plants, the flow branches and checks whether a plant type/name was given. If one was given, it is extracted and knowledge about that plant is shown before answering the original question. If not, the app simply answers the query as is.
Read from bottom ⬇️ to top ⬆️. (Though, the Agent and the question variables are defined globally at the top)
from yacana import OllamaAgent, Task

# Declare agent
agent1 = OllamaAgent("AI assistant", "llama3.1:8b", system_prompt="You are a helpful AI assistant")

# Asking a question
question: str = "Why do leaves fall in autumn ?"

# Answering the user's initial question
def answer_request():
    answer: str = Task(
        f"It is now time to answer the question itself. The question was {question}. Answer it.",
        agent1).solve().content
    print(answer)

# Getting info on the plant to brief the user beforehand
def show_plant_information(plant_name: str):
    # Getting info on the plant from the model's training (should be replaced by a tool call returning accurate plant info based on the name; we'll see that later)
    plant_description: str = Task(
        f"What do you know about the plant {plant_name} ? Get me the scientific name but stay concise.",
        agent1).solve().content
    # Printing the plant's info to the user
    print("------ Plant info ------")
    print(f"You are referring to the plant '{plant_name}'. Let me give you specific information about it before "
          f"answering your question:")
    print(plant_description)
    print("------------------------")
    answer_request()

# Checking if the question has a specific plant specified
def check_has_specific_plant():
    # Self-reflection
    Task(
        "In your opinion, does the question mention a specific plant name or one that you can identify ?",
        agent1).solve()
    # Yes / no routing again
    router_answer: str = Task(
        "To summarize in one word, can you identify a plant from the question ? Answer ONLY by 'yes' or 'no'.",
        agent1, forget=True).solve().content
    # Routing
    if "yes" in router_answer.lower():
        # Extracting plant name from question
        plant_name: str = Task(
            "Okay, then extract the plant name and ONLY output the name. Nothing else.",
            agent1, forget=True).solve().content
        show_plant_information(plant_name)
    elif "no" in router_answer.lower():
        # No plant name was found. Let's just answer the question.
        print("No specific plant specification was given. I'll just answer your question then.")
        answer_request()

# Simple router checking if we are on track or not
def check_is_about_plants():
    # Self-reflection
    Task(f"Is the following question about plants ? <question>{question}</question>\nExplain your reasoning.",
         agent1).solve()
    # Actual router based on the previous reflection
    router_answer: str = Task(
        "To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.",
        agent1, forget=True).solve().content
    # yes / no routing
    if "yes" in router_answer.lower():
        print("Question is about plants !")
        check_has_specific_plant()
    elif "no" in router_answer.lower():
        print("Question is NOT about plants sorry.")
        # We stop here; this app is only about plants!

# Starting point
check_is_about_plants()
Let's try the "common plant" question that doesn't involve specifying a plant name:
question: str = "Why do leaves fall in autumn ?"
▶️ Outputs:
INFO: [PROMPT][To: AI assistant]: Is the following question about plants ? <question>Why do leaves fall in autumn ?</question>
Explain your reasoning.
INFO: [AI_RESPONSE][From: AI assistant]: A great question!
Yes, I believe this question is indeed about plants! Here's why:
* The term "autumn" specifically refers to a season when plants undergo changes, including the shedding of their leaves.
* Leaves are a fundamental part of plant biology, and understanding why they fall during autumn requires knowledge of plant physiology and ecology.
So, if I had to answer this question, I would explain that leaves fall in autumn due to a combination of factors, such as:
1. Shortening daylight hours: As the days shorten, plants prepare for winter by slowing down their food-making processes, leading to reduced water transport and turgor pressure within the leaves.
[BLABLA]
4. Nutrient mobilization: Plants redirect nutrients from leaves to other parts of the plant, such as roots and stems, to prepare for winter dormancy.
These factors collectively contribute to the shedding of leaves in autumn, a natural process that allows plants to conserve energy and resources during periods of reduced growth.
INFO: [PROMPT][To: AI assistant]: To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.
INFO: [AI_RESPONSE][From: AI assistant]: Yes
Question is about plants !
INFO: [PROMPT][To: AI assistant]: In your opinion, does the question mention a specific plant name or one that you can identify ?
INFO: [AI_RESPONSE][From: AI assistant]: Upon re-reading the question, I realize that it doesn't mention a specific plant name. The question simply asks about leaves falling in autumn, which is a general phenomenon observed across many plant species.
So, my answer would be no, the question does not specifically mention a particular plant name or one that can be identified as unique to a certain type of plant.
INFO: [PROMPT][To: AI assistant]: To summarize in one word, can you identify a plant from the question ? Answer ONLY by 'yes' or 'no'.
INFO: [AI_RESPONSE][From: AI assistant]: no
No specific plant specification was given. I'll just answer your question then.
INFO: [PROMPT][To: AI assistant]: It is now time to answer the question itself. The question was Why do leaves fall in autumn ?. Answer it.
INFO: [AI_RESPONSE][From: AI assistant]: I'd be delighted to!
Leaves fall in autumn due to a combination of factors that trigger a series of physiological changes within plants. Here's a breakdown of the main reasons:
1. **Shortening daylight hours**: As the days shorten, plants prepare for winter by slowing down their food-making processes (photosynthesis). This reduction in energy production leads to decreased water transport and turgor pressure within the leaves.
[BLABLA]
4. **Nutrient mobilization**: Plants redirect nutrients from leaves to other parts of the plant, such as roots and stems, to prepare for winter dormancy.
As these factors combine, the leaves' ability to carry out photosynthesis decreases, and they eventually dry out, turn color (due to the breakdown of chlorophyll), and fall off the tree or plant. This process is a natural adaptation that allows plants to:
* Conserve water and energy during winter
* Redirect nutrients to support growth in other parts of the plant
* Protect themselves from harsh weather conditions
So, there you have it! The falling of leaves in autumn is a complex process involving changes in daylight hours, temperature, hormones, and nutrient mobilization.
I'd be delighted to!
Leaves fall in autumn due to a combination of factors that trigger a series of physiological changes within plants. Here's a breakdown of the main reasons:
1. **Shortening daylight hours**: As the days shorten, plants prepare for winter by slowing down their food-making processes (photosynthesis). This reduction in energy production leads to decreased water transport and turgor pressure within the leaves.
[BLABLA]
4. **Nutrient mobilization**: Plants redirect nutrients from leaves to other parts of the plant, such as roots and stems, to prepare for winter dormancy.
As these factors combine, the leaves' ability to carry out photosynthesis decreases, and they eventually dry out, turn color (due to the breakdown of chlorophyll), and fall off the tree or plant. This process is a natural adaptation that allows plants to:
* Conserve water and energy during winter
* Redirect nutrients to support growth in other parts of the plant
* Protect themselves from harsh weather conditions
So, there you have it! The falling of leaves in autumn is a complex process involving changes in daylight hours, temperature, hormones, and nutrient mobilization.
Later we'll see how to remove the automatic "[INFO]" prints so that only YOUR prints are shown. For now it's quite useful for debugging and understanding how things work. So we'll leave them in this tutorial section.
Updating the question to match the specific "plant info" workflow:
question: str = "Why do poppies fade so quickly ?"
▶️ Outputs:
INFO: [PROMPT][To: AI assistant]: Is the following question about plants ? <question>Why do poppies fade so quickly ?</question>
Explain your reasoning.
INFO: [AI_RESPONSE][From: AI assistant]: A lovely question!
Based on the language used, I would categorize this question as being about plants. Here's why:
1. The question mentions a specific type of plant: "poppies". This suggests that the inquiry is related to botany.
2. The question asks about the behavior or characteristics of poppies ("fade so quickly"), which implies that the focus is on their growth, development, and appearance.
Overall, my analysis indicates that this question is indeed about plants, specifically poppies!
INFO: [PROMPT][To: AI assistant]: To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.
INFO: [AI_RESPONSE][From: AI assistant]: Yes
Question is about plants !
INFO: [PROMPT][To: AI assistant]: In your opinion, does the question mention a specific plant name or one that you can identify ?
INFO: [AI_RESPONSE][From: AI assistant]: I'd say that the question does mention a specific plant name: "poppies". Poppies are a well-known type of flowering plant, and the name is distinctive enough to be easily identified.
INFO: [PROMPT][To: AI assistant]: To summarize in one word, can you identify a plant from the question ? Answer ONLY by 'yes' or 'no'.
INFO: [AI_RESPONSE][From: AI assistant]: Yes
INFO: [PROMPT][To: AI assistant]: Okay, then extract the plant name and ONLY output the name. Nothing else.
INFO: [AI_RESPONSE][From: AI assistant]: Poppies
INFO: [PROMPT][To: AI assistant]: What do you know about the plant Poppies ? Get me the scientific name but stay concise.
INFO: [AI_RESPONSE][From: AI assistant]: Poppies belong to the family Papaveraceae. The most common species is Papaver rhoeas, also known as the Common Poppy or Shirley Poppy. It's an annual flowering plant native to Europe and Asia.
------ Plant info ------
You are referring to the plant 'Poppies'. Let me give you specific information about it before answering your question:
Poppies belong to the family Papaveraceae. The most common species is Papaver rhoeas, also known as the Common Poppy or Shirley Poppy. It's an annual flowering plant native to Europe and Asia.
------------------------
INFO: [PROMPT][To: AI assistant]: It is now time to answer the question itself. The question was Why do poppies fade so quickly ?. Answer it.
INFO: [AI_RESPONSE][From: AI assistant]: Poppies are known for their ephemeral nature, and there are several reasons why they might appear to fade quickly:
1. **Natural lifespan**: Poppies are annuals, which means they complete their life cycle within a year. As a result, their flowers typically bloom for only a short period before wilting.
[BLABLA]
5. **Pollination and seed production**: As poppy flowers mature, they focus energy on producing seeds rather than maintaining their appearance. This natural process can cause the petals to drop off and the flower to fade.
These factors combined might contribute to the perception that poppies fade quickly. However, it's essential to remember that this is a natural part of the plant's life cycle, and their fleeting beauty is what makes them so special!
Poppies are known for their ephemeral nature, and there are several reasons why they might appear to fade quickly:
1. **Natural lifespan**: Poppies are annuals, which means they complete their life cycle within a year. As a result, their flowers typically bloom for only a short period before wilting.
[BLABLA]
5. **Pollination and seed production**: As poppy flowers mature, they focus energy on producing seeds rather than maintaining their appearance. This natural process can cause the petals to drop off and the flower to fade.
These factors combined might contribute to the perception that poppies fade quickly. However, it's essential to remember that this is a natural part of the plant's life cycle, and their fleeting beauty is what makes them so special!
Updating the question to match the "Not related to plants" workflow:
question: str = "Why is the sky blue ?"
▶️ Outputs:
INFO: [PROMPT][To: AI assistant]: Is the following question about plants ? <question>Why is the sky blue ?</question>
Explain your reasoning.
INFO: [AI_RESPONSE][From: AI assistant]: No, the question "Why is the sky blue?" is not about plants. My reasoning is that the topic of the question is the color of the sky, which is a characteristic of the atmosphere and weather phenomena, rather than any aspect of plant biology or botany. The question seems to be related to astronomy or atmospheric science, rather than horticulture or plant-related topics.
INFO: [PROMPT][To: AI assistant]: To summarize in one word, was the question about plants ? Answer ONLY by 'yes' or 'no'.
INFO: [AI_RESPONSE][From: AI assistant]: No
Question is NOT about plants sorry.