In this tutorial, we show how to build an intelligent query-handling agent step by step using LangGraph and Gemini 1.5 Flash. The core idea is to structure the AI's reasoning as a stateful workflow, where an incoming query passes through a series of purposeful nodes: routing, analysis, research, response generation, and validation. Each node operates as a functional block with a well-defined role, making the agent not just reactive but analytically aware. Using LangGraph's StateGraph, we orchestrate these nodes to create a looping system that can re-analyze and improve its output until the response is validated as complete or a maximum iteration threshold is reached.
!pip install langgraph langchain-google-genai python-dotenv
First, the command. !pip install langgraph langchain-google-genai python-dotenv installs three Python packages essential for building intelligent agent workflows: langgraph enables graph-based orchestration of AI agents, langchain-google-genai provides integration with Google's Gemini models, and python-dotenv allows secure loading of environment variables from a .env file.
import os
from typing import Dict, Any, List
from dataclasses import dataclass
from langgraph.graph import Graph, StateGraph, END
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.schema import HumanMessage, SystemMessage
import json
os.environ["GOOGLE_API_KEY"] = "Use Your API Key Here"
We import the essential modules and libraries for building agent workflows, including ChatGoogleGenerativeAI for interacting with Gemini models and StateGraph for managing conversational state. The line os.environ["GOOGLE_API_KEY"] = "Use Your API Key Here" assigns the API key to an environment variable, allowing the Gemini model to authenticate and generate responses.
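Since python-dotenv was installed earlier, a common alternative is to read the key from the environment rather than hardcode it. A minimal sketch (the demo value below is a placeholder, not a real key); note that os.environ is a mapping, so assignment uses square brackets, not parentheses:

```python
import os

# The tutorial hardcodes the key; instead it can be exported in the shell
# (or loaded from a .env file with python-dotenv) and read here.
os.environ["GOOGLE_API_KEY"] = "use-your-api-key-here"  # item assignment uses []
key = os.environ.get("GOOGLE_API_KEY")                  # safe lookup with default None
print(key is not None)  # → True
```
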
@dataclass
class AgentState:
    """State shared across all nodes in the graph"""
    query: str = ""
    context: str = ""
    analysis: str = ""
    response: str = ""
    next_action: str = ""
    iteration: int = 0
    max_iterations: int = 3
This AgentState dataclass defines the shared state that persists across different nodes in a LangGraph workflow. It tracks key fields, including the user's query, retrieved context, any analysis performed, the generated response, and the recommended next action. It also includes an iteration counter and a max_iterations limit to control how many times the workflow can loop, enabling iterative reasoning or decision-making by the agent.
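A quick sketch of how this dataclass behaves: every field has a default, so a state can be created from just the query, and the iteration counter later drives the loop guard in _should_continue. (The class below mirrors the tutorial's definition so the snippet is self-contained.)

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    """State shared across all nodes in the graph (mirrors the tutorial's class)."""
    query: str = ""
    context: str = ""
    analysis: str = ""
    response: str = ""
    next_action: str = ""
    iteration: int = 0
    max_iterations: int = 3

# Only the query is required; everything else starts at its default.
state = AgentState(query="Explain quantum computing")
print(state.iteration, state.max_iterations)  # → 0 3

# The guard used later by _should_continue:
should_end = state.iteration >= state.max_iterations
print(should_end)  # → False
```
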
class GraphAIAgent:
    def __init__(self, api_key: str = None):
        if api_key:
            os.environ["GOOGLE_API_KEY"] = api_key
        self.llm = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",
            temperature=0.7,
            convert_system_message_to_human=True
        )
        self.analyzer = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",
            temperature=0.3,
            convert_system_message_to_human=True
        )
        self.graph = self._build_graph()
    def _build_graph(self) -> StateGraph:
        """Build the LangGraph workflow"""
        workflow = StateGraph(AgentState)
        workflow.add_node("router", self._router_node)
        workflow.add_node("analyzer", self._analyzer_node)
        workflow.add_node("researcher", self._researcher_node)
        workflow.add_node("responder", self._responder_node)
        workflow.add_node("validator", self._validator_node)
        workflow.set_entry_point("router")
        workflow.add_edge("router", "analyzer")
        workflow.add_conditional_edges(
            "analyzer",
            self._decide_next_step,
            {
                "research": "researcher",
                "respond": "responder"
            }
        )
        workflow.add_edge("researcher", "responder")
        workflow.add_edge("responder", "validator")
        workflow.add_conditional_edges(
            "validator",
            self._should_continue,
            {
                "continue": "analyzer",
                "end": END
            }
        )
        return workflow.compile()
    def _router_node(self, state: AgentState) -> Dict[str, Any]:
        """Route and categorize the incoming query"""
        system_msg = """You are a query router. Analyze the user's query and provide context.
        Determine if this is a factual question, creative request, problem-solving task, or analysis."""
        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"Query: {state.query}")
        ]
        response = self.llm.invoke(messages)
        return {
            "context": response.content,
            "iteration": state.iteration + 1
        }
    def _analyzer_node(self, state: AgentState) -> Dict[str, Any]:
        """Analyze the query and determine the approach"""
        system_msg = """Analyze the query and context. Determine if additional research is needed
        or if you can provide a direct response. Be thorough in your analysis."""
        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"""
            Query: {state.query}
            Context: {state.context}
            Previous Analysis: {state.analysis}
            """)
        ]
        response = self.analyzer.invoke(messages)
        analysis = response.content
        if "research" in analysis.lower() or "more information" in analysis.lower():
            next_action = "research"
        else:
            next_action = "respond"
        return {
            "analysis": analysis,
            "next_action": next_action
        }
    def _researcher_node(self, state: AgentState) -> Dict[str, Any]:
        """Conduct additional research or information gathering"""
        system_msg = """You are a research assistant. Based on the analysis, gather relevant
        information and insights to help answer the query comprehensively."""
        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"""
            Query: {state.query}
            Analysis: {state.analysis}
            Research focus: Provide detailed information relevant to the query.
            """)
        ]
        response = self.llm.invoke(messages)
        updated_context = f"{state.context}\n\nResearch: {response.content}"
        return {"context": updated_context}
    def _responder_node(self, state: AgentState) -> Dict[str, Any]:
        """Generate the final response"""
        system_msg = """You are a helpful AI assistant. Provide a comprehensive, accurate,
        and well-structured response based on the analysis and context provided."""
        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"""
            Query: {state.query}
            Context: {state.context}
            Analysis: {state.analysis}
            Provide a complete and helpful response.
            """)
        ]
        response = self.llm.invoke(messages)
        return {"response": response.content}
    def _validator_node(self, state: AgentState) -> Dict[str, Any]:
        """Validate the response quality and completeness"""
        system_msg = """Evaluate if the response adequately answers the query.
        Return 'COMPLETE' if satisfactory, or 'NEEDS_IMPROVEMENT' if more work is needed."""
        messages = [
            SystemMessage(content=system_msg),
            HumanMessage(content=f"""
            Original Query: {state.query}
            Response: {state.response}
            Is this response complete and satisfactory?
            """)
        ]
        response = self.analyzer.invoke(messages)
        validation = response.content
        return {"context": f"{state.context}\n\nValidation: {validation}"}
    def _decide_next_step(self, state: AgentState) -> str:
        """Decide whether to research or respond directly"""
        return state.next_action

    def _should_continue(self, state: AgentState) -> str:
        """Decide whether to continue iterating or end"""
        if state.iteration >= state.max_iterations:
            return "end"
        if "COMPLETE" in state.context:
            return "end"
        if "NEEDS_IMPROVEMENT" in state.context:
            return "continue"
        return "end"

    def run(self, query: str) -> str:
        """Run the agent with a query"""
        initial_state = AgentState(query=query)
        result = self.graph.invoke(initial_state)
        return result["response"]
The GraphAIAgent class defines a LangGraph-based AI workflow that uses Gemini models to analyze, research, respond to, and validate answers to user queries. It uses modular nodes, such as the router, analyzer, researcher, responder, and validator, to reason through complex tasks and refine responses through controlled iterations.
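The control flow of the validate-and-retry loop can be sketched without LangGraph itself. The snippet below is a pure-Python illustration of the _should_continue logic; the sequence of validator verdicts is an assumption made up for the demonstration:

```python
# Pure-Python sketch of the validator loop: keep re-analyzing until the
# validator marks the response COMPLETE, or max_iterations is reached.
def should_continue(context: str, iteration: int, max_iterations: int = 3) -> str:
    if iteration >= max_iterations:
        return "end"
    if "COMPLETE" in context:
        return "end"
    if "NEEDS_IMPROVEMENT" in context:
        return "continue"
    return "end"

# Simulated verdicts from the validator node on successive passes (assumed):
verdicts = ["Validation: NEEDS_IMPROVEMENT", "Validation: COMPLETE"]
iteration, decisions = 1, []
for verdict in verdicts:
    decision = should_continue(verdict, iteration)
    decisions.append(decision)
    if decision == "end":
        break
    iteration += 1

print(decisions)  # → ['continue', 'end']
```

The first verdict sends the workflow back to the analyzer; the second ends it. The iteration cap guarantees termination even if the validator never returns COMPLETE.
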
def main():
    agent = GraphAIAgent("Use Your API Key Here")
    test_queries = [
        "Explain quantum computing and its applications",
        "What are the best practices for machine learning model deployment?",
        "Create a story about a robot learning to paint"
    ]
    print("🤖 Graph AI Agent with LangGraph and Gemini")
    print("=" * 50)
    for i, query in enumerate(test_queries, 1):
        print(f"\n📝 Query {i}: {query}")
        print("-" * 30)
        try:
            response = agent.run(query)
            print(f"🎯 Response: {response}")
        except Exception as e:
            print(f"❌ Error: {str(e)}")
        print("\n" + "=" * 50)

if __name__ == "__main__":
    main()
Finally, the main() function initializes the GraphAIAgent with a Gemini API key and runs it on a set of test queries covering technical, strategic, and creative tasks. It prints each query and the AI-generated response, showing how the LangGraph-driven agent processes diverse input types using Gemini's reasoning and generation capabilities.
In conclusion, by combining LangGraph's structured state machine with the power of Gemini's conversational intelligence, this agent represents a new paradigm in AI workflow engineering, one that mirrors human cycles of reasoning, analysis, and validation. The tutorial provides a modular and extensible template for developing advanced AI agents that can autonomously handle diverse tasks, from answering complex queries to generating creative content.
Check out the Notebook here. All credit for this research goes to the researchers of this project.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent venture is the launch of an artificial intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a broad audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.
