In this tutorial, we show how to use the uAgents framework together with Google's Gemini API to build a lightweight, event-driven AI agent architecture. We start by applying nest_asyncio to enable nested event loops, then configure the Gemini API key and instantiate the genai client. Next, we define our communication contracts, the Question and Answer Pydantic models, and spin up two uAgents: a “gemini_agent” that listens for incoming Question messages, invokes Gemini's flash model to generate answers, and emits Answer messages; and a “client_agent” that fires off a query on startup and handles the incoming response. Finally, we learn how to run these agents concurrently using Python's multiprocessing utility and to gracefully stop the event loop once the exchange completes, illustrating uAgents' seamless orchestration of inter-agent messaging.
!pip install -q uagents google-genai
We install the uAgents framework and the Google GenAI client library, providing the tools needed to build and run event-driven AI agents backed by Gemini. The -q flag runs the installation quietly, keeping your notebook output clean. Check out the Notebook here.
import os, time, multiprocessing, asyncio
import nest_asyncio
from google import genai
from pydantic import BaseModel, Field
from uagents import Agent, Context
nest_asyncio.apply()
We set up our Python environment by importing the essential modules: system utilities (os, time, multiprocessing, asyncio), nest_asyncio to enable nested event loops (critical in notebooks), the Google GenAI client, Pydantic for schema validation, and the core uAgents classes. Finally, nest_asyncio.apply() patches the event loop so that asynchronous workflows run seamlessly in interactive environments. Check out the Notebook here.
os.environ["GOOGLE_API_KEY"] = "Use Your Own API Key Here"
client = genai.Client()
Here, we set our Gemini API key in the environment. Make sure to replace the placeholder with your actual key, then initialize the genai client, which will handle all subsequent requests to Google's Gemini models. This step ensures our agent has authenticated access to generate content via the API.
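As a safer alternative to hardcoding the key, you can read it from the environment and fail fast if it is missing. This is a minimal sketch, assuming GOOGLE_API_KEY has already been exported (for example via export GOOGLE_API_KEY=... or a notebook secrets manager):
import os
from google import genai
# Read the key from the environment instead of hardcoding it in the notebook.
api_key = os.getenv("GOOGLE_API_KEY")
if not api_key:
    raise RuntimeError("GOOGLE_API_KEY is not set; export it before running the agents.")
# The google-genai client also accepts the key explicitly.
client = genai.Client(api_key=api_key)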
class Question(BaseModel):
question: str = Field(...)
class Answer(BaseModel):
answer: str = Field(...)
These Pydantic models define the structured message formats our agents exchange. The Question model carries a single question string field and the Answer model a single answer string field. By using Pydantic, we get automatic validation and serialization of incoming and outgoing messages, ensuring each agent always works with well-formed data.
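As a quick illustration of what the models buy us, and not part of the agent code itself, the sketch below shows Pydantic's validation in action: a well-formed Question is accepted, while a missing field raises a ValidationError before any message is ever sent.
from pydantic import ValidationError
# A well-formed message passes validation and exposes typed fields.
q = Question(question="What is the capital of France?")
print(q.question)
# A message missing its required field is rejected up front.
try:
    Question()
except ValidationError as e:
    print("Validation failed:", e)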
ai_agent = Agent(
name="gemini_agent",
seed="agent_seed_phrase",
port=8000,
endpoint="http://127.0.0.1:8000/submit"
)
@ai_agent.on_event("startup")
async def ai_startup(ctx: Context):
ctx.logger.info(f"{ai_agent.name} listening on {ai_agent.address}")
def ask_gemini(q: str) -> str:
resp = client.models.generate_content(
model="gemini-2.0-flash",
contents=f"Answer the question: {q}"
)
return resp.text
@ai_agent.on_message(model=Question, replies=Answer)
async def handle_question(ctx: Context, sender: str, msg: Question):
ans = ask_gemini(msg.question)
await ctx.send(sender, Answer(answer=ans))
In this block, we instantiate the “gemini_agent” uAgent with a unique name, a seed phrase (for a deterministic identity), a listening port, and an HTTP endpoint for receiving messages. We then register a startup event handler that logs when the agent is ready, giving visibility into its lifecycle. The synchronous ask_gemini helper wraps the genai client call to Gemini's “flash” model. Finally, the @ai_agent.on_message handler deserializes incoming Question messages, invokes ask_gemini, and asynchronously sends a validated Answer payload back to the original sender. Check out the Notebook here.
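Because ask_gemini is a blocking call running inside an async handler, one optional refinement is to offload it to a worker thread and guard it with basic error handling. The sketch below is a variant of handle_question (it would replace the handler above rather than be registered alongside it), using only asyncio.to_thread from the standard library:
@ai_agent.on_message(model=Question, replies=Answer)
async def handle_question(ctx: Context, sender: str, msg: Question):
    try:
        # Run the blocking Gemini call in a worker thread so the agent's
        # event loop stays responsive to other incoming messages.
        ans = await asyncio.to_thread(ask_gemini, msg.question)
    except Exception as e:
        ctx.logger.error(f"Gemini call failed: {e}")
        ans = "Sorry, I could not generate an answer."
    await ctx.send(sender, Answer(answer=ans))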
client_agent = Agent(
name="client_agent",
seed="client_seed_phrase",
port=8001,
endpoint="http://127.0.0.1:8001/submit"
)
@client_agent.on_event("startup")
async def ask_on_start(ctx: Context):
await ctx.send(ai_agent.address, Question(question="What is the capital of France?"))
@client_agent.on_message(model=Answer)
async def handle_answer(ctx: Context, sender: str, msg: Answer):
print("📨 Answer from Gemini:", msg.answer)
# Use a more graceful shutdown
asyncio.create_task(shutdown_loop())
async def shutdown_loop():
await asyncio.sleep(1) # Give time for cleanup
loop = asyncio.get_event_loop()
loop.stop()
We set up a “client_agent” that, on startup, sends a Question to the gemini_agent asking for the capital of France, then listens for an Answer, prints the received response, and gracefully stops the event loop after a brief delay. Check out the Notebook here.
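If you would like the client to ask several questions and only stop once every reply has arrived, one approach is to track a pending counter in the agent's key-value storage (ctx.storage). The sketch below is a hedged variant that would replace the two client handlers above; the question list is purely illustrative:
QUESTIONS = [
    "What is the capital of France?",
    "What is the capital of Japan?",
]
@client_agent.on_event("startup")
async def ask_on_start(ctx: Context):
    ctx.storage.set("pending", len(QUESTIONS))
    for q in QUESTIONS:
        await ctx.send(ai_agent.address, Question(question=q))
@client_agent.on_message(model=Answer)
async def handle_answer(ctx: Context, sender: str, msg: Answer):
    print("📨 Answer from Gemini:", msg.answer)
    # Decrement the pending count and only shut down after the last answer.
    remaining = (ctx.storage.get("pending") or 1) - 1
    ctx.storage.set("pending", remaining)
    if remaining <= 0:
        asyncio.create_task(shutdown_loop())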
def run_agent(agent):
agent.run()
if __name__ == "__main__":
p = multiprocessing.Process(target=run_agent, args=(ai_agent,))
p.start()
time.sleep(2)
client_agent.run()
p.join()
Finally, we define a run_agent helper function that calls agent.run(), then use Python's multiprocessing to launch the gemini_agent in its own process. After giving it a moment to spin up, we run the client_agent in the main process, blocking until the exchange completes, and finally join the background process to ensure a clean shutdown.
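When running from a plain Python script rather than a notebook, an alternative to multiprocessing is uAgents' Bureau, which hosts several agents on a single shared event loop. A minimal sketch (the tutorial itself sticks with the multiprocessing approach above):
from uagents import Bureau
# Register both agents with a single Bureau and run them together.
bureau = Bureau()
bureau.add(ai_agent)
bureau.add(client_agent)
if __name__ == "__main__":
    bureau.run()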
In conclusion, with this uAgents-focused tutorial we now have a clear blueprint for building modular AI services that communicate through well-defined event hooks and message schemas. You have seen how uAgents simplifies agent lifecycle management, registering startup events, handling incoming messages, and sending structured responses, all without boilerplate networking code. From here, you can extend your uAgents setup to include more sophisticated conversation workflows, multiple message types, and dynamic agent discovery.
Check out the Notebook here. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don't forget to join our 100k+ ML SubReddit and subscribe to our Newsletter.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of an artificial intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.
