In this tutorial, we build a complete multi-agent research team system using LangGraph and the Google Gemini API. We create role-specific agents, a researcher, an analyst, a writer, and a supervisor, each responsible for a distinct part of the research pipeline. Together, these agents collaboratively gather data, analyze information, synthesize a report, and coordinate the workflow. We also incorporate features such as memory persistence, agent coordination, custom agents, and performance monitoring. By the end of the setup, we can run automated, intelligent research sessions that generate structured reports on any given topic.
!pip install langgraph langchain-google-genai langchain-community langchain-core python-dotenv
import os
from typing import Annotated, List, Tuple, Union
from typing_extensions import TypedDict
import operator
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
import functools
import getpass
GOOGLE_API_KEY = getpass.getpass("Enter your Google API Key: ")
os.environ("GOOGLE_API_KEY") = GOOGLE_API_KEY
We start by installing the required libraries, including LangGraph and the LangChain Google Gemini integration. We then import the essential modules and configure our environment by entering the Google API key securely through the getpass module. This ensures we can authenticate our Gemini LLM without hardcoding the key in the notebook.
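Since python-dotenv is part of the install step but otherwise unused, here is a minimal optional sketch of an alternative setup that loads the key from a local .env file and falls back to the interactive prompt only when the variable is missing; the .env location is an assumption.

from dotenv import load_dotenv

load_dotenv()  # reads GOOGLE_API_KEY=... from a .env file in the working directory, if present
if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Google API Key: ")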
class AgentState(TypedDict):
    """State shared between all agents in the graph"""
    messages: Annotated[list, operator.add]
    next: str
    current_agent: str
    research_topic: str
    findings: dict
    final_report: str

class AgentResponse(TypedDict):
    """Standard response format for all agents"""
    content: str
    next_agent: str
    findings: dict

def create_llm(temperature: float = 0.1, model: str = "gemini-1.5-flash") -> ChatGoogleGenerativeAI:
    """Create a configured Gemini LLM instance"""
    return ChatGoogleGenerativeAI(
        model=model,
        temperature=temperature,
        google_api_key=os.environ["GOOGLE_API_KEY"]
    )
We define two TypedDict classes to maintain the structured state and responses shared across all agents in the LangGraph. AgentState tracks the messages, workflow status, topic, and collected findings, while AgentResponse standardizes each agent's output. We also create a helper function to instantiate the Gemini LLM with a specified model and temperature, ensuring consistent behavior across all agents.
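To see why messages is declared as Annotated[list, operator.add], here is a tiny illustration, not part of the original tutorial, of the reducer semantics LangGraph applies: fields annotated with a reducer are merged with it rather than overwritten when a node returns a partial state.

# Illustration only: LangGraph merges a node's returned messages into the
# existing list with operator.add instead of replacing them.
existing = [HumanMessage(content="Research the topic: AI in Healthcare")]
update = [AIMessage(content="Initial findings ...")]
merged = operator.add(existing, update)  # equivalent to existing + update
print(len(merged))  # 2 -- both messages are preserved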
def create_research_agent(llm: ChatGoogleGenerativeAI) -> callable:
    """Creates a research specialist agent for initial data gathering"""
    research_prompt = ChatPromptTemplate.from_messages([
        ("system", """You are a Research Specialist AI. Your role is to:
        1. Analyze the research topic thoroughly
        2. Identify key areas that need investigation
        3. Provide initial research findings and insights
        4. Suggest specific angles for deeper analysis
        Focus on providing comprehensive, accurate information and clear research directions.
        Always structure your response with clear sections and bullet points.
        """),
        MessagesPlaceholder(variable_name="messages"),
        ("human", "Research Topic: {research_topic}")
    ])
    research_chain = research_prompt | llm

    def research_agent(state: AgentState) -> AgentState:
        """Execute research analysis"""
        try:
            response = research_chain.invoke({
                "messages": state["messages"],
                "research_topic": state["research_topic"]
            })
            findings = {
                "research_overview": response.content,
                "key_areas": ["area1", "area2", "area3"],
                "initial_insights": response.content[:500] + "..."
            }
            return {
                "messages": state["messages"] + [AIMessage(content=response.content)],
                "next": "analyst",
                "current_agent": "researcher",
                "research_topic": state["research_topic"],
                "findings": {**state.get("findings", {}), "research": findings},
                "final_report": state.get("final_report", "")
            }
        except Exception as e:
            error_msg = f"Research agent error: {str(e)}"
            return {
                "messages": state["messages"] + [AIMessage(content=error_msg)],
                "next": "analyst",
                "current_agent": "researcher",
                "research_topic": state["research_topic"],
                "findings": state.get("findings", {}),
                "final_report": state.get("final_report", "")
            }

    return research_agent
We now create our first specialized agent, the Research Specialist AI. This agent is prompted to analyze a given topic in depth, extract key areas of interest, and suggest directions for deeper exploration. Using a ChatPromptTemplate, we define its behavior and connect it to our Gemini LLM. The research_agent function executes this logic, updates the shared state with findings and messages, and passes control to the next agent in line, the analyst.
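Because each agent is just a function from AgentState to AgentState, we can smoke-test it outside the graph. A minimal sketch, assuming the cells above have run (the topic is arbitrary and this makes one real Gemini call):

researcher = create_research_agent(create_llm())
test_state: AgentState = {
    "messages": [HumanMessage(content="Research the topic: Edge AI")],
    "next": "researcher",
    "current_agent": "start",
    "research_topic": "Edge AI",
    "findings": {},
    "final_report": ""
}
out = researcher(test_state)
print(out["next"])  # "analyst"
print(out["findings"]["research"]["initial_insights"][:200])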
def create_analyst_agent(llm: ChatGoogleGenerativeAI) -> callable:
    """Creates a data analyst agent for deep analysis"""
    analyst_prompt = ChatPromptTemplate.from_messages([
        ("system", """You are a Data Analyst AI. Your role is to:
        1. Analyze data and information provided by the research team
        2. Identify patterns, trends, and correlations
        3. Provide statistical insights and data-driven conclusions
        4. Suggest actionable recommendations based on analysis
        Focus on quantitative analysis, data interpretation, and evidence-based insights.
        Use clear metrics and concrete examples in your analysis.
        """),
        MessagesPlaceholder(variable_name="messages"),
        ("human", "Analyze the research findings for: {research_topic}")
    ])
    analyst_chain = analyst_prompt | llm

    def analyst_agent(state: AgentState) -> AgentState:
        """Execute data analysis"""
        try:
            response = analyst_chain.invoke({
                "messages": state["messages"],
                "research_topic": state["research_topic"]
            })
            analysis_findings = {
                "analysis_summary": response.content,
                "key_metrics": ["metric1", "metric2", "metric3"],
                "recommendations": response.content.split("recommendations:")[-1] if "recommendations:" in response.content.lower() else "No specific recommendations found"
            }
            return {
                "messages": state["messages"] + [AIMessage(content=response.content)],
                "next": "writer",
                "current_agent": "analyst",
                "research_topic": state["research_topic"],
                "findings": {**state.get("findings", {}), "analysis": analysis_findings},
                "final_report": state.get("final_report", "")
            }
        except Exception as e:
            error_msg = f"Analyst agent error: {str(e)}"
            return {
                "messages": state["messages"] + [AIMessage(content=error_msg)],
                "next": "writer",
                "current_agent": "analyst",
                "research_topic": state["research_topic"],
                "findings": state.get("findings", {}),
                "final_report": state.get("final_report", "")
            }

    return analyst_agent
We now define the Data Analyst AI, which digs deeper into the research findings generated by the previous agent. This agent identifies patterns, trends, and key metrics, offering actionable, evidence-backed insights. Using a tailored system prompt and the Gemini LLM, the analyst_agent function enriches the state with a structured analysis, laying the groundwork for the report writer to synthesize everything into a final document.
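One caveat: the string split above checks the lowercased text but splits on the original casing, so a heading such as "Recommendations:" slips through. A hypothetical helper, not in the original code, that matches the heading case-insensitively:

import re

def extract_recommendations(text: str) -> str:
    """Return everything after a 'recommendations' heading, matched case-insensitively."""
    match = re.search(r"recommendations:?", text, flags=re.IGNORECASE)
    if match:
        return text[match.end():].strip()
    return "No specific recommendations found"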
def create_writer_agent(llm: ChatGoogleGenerativeAI) -> callable:
    """Creates a report writer agent for final documentation"""
    writer_prompt = ChatPromptTemplate.from_messages([
        ("system", """You are a Report Writer AI. Your role is to:
        1. Synthesize all research and analysis into a comprehensive report
        2. Create clear, professional documentation
        3. Ensure proper structure with executive summary, findings, and conclusions
        4. Make complex information accessible to various audiences
        Focus on clarity, completeness, and professional presentation.
        Include specific examples and actionable insights.
        """),
        MessagesPlaceholder(variable_name="messages"),
        ("human", "Create a comprehensive report for: {research_topic}")
    ])
    writer_chain = writer_prompt | llm

    def writer_agent(state: AgentState) -> AgentState:
        """Execute report writing"""
        try:
            response = writer_chain.invoke({
                "messages": state["messages"],
                "research_topic": state["research_topic"]
            })
            return {
                "messages": state["messages"] + [AIMessage(content=response.content)],
                "next": "supervisor",
                "current_agent": "writer",
                "research_topic": state["research_topic"],
                "findings": state.get("findings", {}),
                "final_report": response.content
            }
        except Exception as e:
            error_msg = f"Writer agent error: {str(e)}"
            return {
                "messages": state["messages"] + [AIMessage(content=error_msg)],
                "next": "supervisor",
                "current_agent": "writer",
                "research_topic": state["research_topic"],
                "findings": state.get("findings", {}),
                "final_report": f"Error generating report: {str(e)}"
            }

    return writer_agent
We now create the Report Writer AI, responsible for transforming the collected research and analysis into a polished, structured document. This agent synthesizes all the prior findings into a clear, professional report with an executive summary, detailed findings, and conclusions. By invoking the Gemini model with a structured prompt, the writer agent stores the final report in the shared state and hands control back to the supervisor for review.
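As a small convenience, not part of the original tutorial, we could persist the writer's output to disk; a sketch with an assumed reports/ directory and filename scheme:

from pathlib import Path

def save_report(report: str, topic: str, out_dir: str = "reports") -> Path:
    """Write the final report to reports/<topic_slug>.md and return the path."""
    Path(out_dir).mkdir(exist_ok=True)
    slug = topic.lower().replace(" ", "_")
    path = Path(out_dir) / f"{slug}.md"
    path.write_text(report, encoding="utf-8")
    return path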
def create_supervisor_agent(llm: ChatGoogleGenerativeAI, members: List[str]) -> callable:
    """Creates a supervisor agent to coordinate the team"""
    options = ["FINISH"] + members
    supervisor_prompt = ChatPromptTemplate.from_messages([
        ("system", f"""You are a Supervisor AI managing a research team. Your team members are:
        {', '.join(members)}
        Your responsibilities:
        1. Coordinate the workflow between team members
        2. Ensure each agent completes their specialized tasks
        3. Determine when the research is complete
        4. Maintain quality standards throughout the process
        Given the conversation, determine the next step:
        - If research is needed: route to "researcher"
        - If analysis is needed: route to "analyst"
        - If report writing is needed: route to "writer"
        - If work is complete: route to "FINISH"
        Available options: {options}
        Respond with just the name of the next agent or "FINISH".
        """),
        MessagesPlaceholder(variable_name="messages"),
        ("human", "Current status: {current_agent} just completed their task for topic: {research_topic}")
    ])
    supervisor_chain = supervisor_prompt | llm

    def supervisor_agent(state: AgentState) -> AgentState:
        """Execute supervisor coordination"""
        try:
            response = supervisor_chain.invoke({
                "messages": state["messages"],
                "current_agent": state.get("current_agent", "none"),
                "research_topic": state["research_topic"]
            })
            next_agent = response.content.strip().lower()
            if "finish" in next_agent or "complete" in next_agent:
                next_step = "FINISH"
            elif "research" in next_agent:
                next_step = "researcher"
            elif "analy" in next_agent:
                next_step = "analyst"
            elif "writ" in next_agent:
                next_step = "writer"
            else:
                current = state.get("current_agent", "")
                if current == "researcher":
                    next_step = "analyst"
                elif current == "analyst":
                    next_step = "writer"
                elif current == "writer":
                    next_step = "FINISH"
                else:
                    next_step = "researcher"
            return {
                "messages": state["messages"] + [AIMessage(content=f"Supervisor decision: Next agent is {next_step}")],
                "next": next_step,
                "current_agent": "supervisor",
                "research_topic": state["research_topic"],
                "findings": state.get("findings", {}),
                "final_report": state.get("final_report", "")
            }
        except Exception as e:
            error_msg = f"Supervisor error: {str(e)}"
            return {
                "messages": state["messages"] + [AIMessage(content=error_msg)],
                "next": "FINISH",
                "current_agent": "supervisor",
                "research_topic": state["research_topic"],
                "findings": state.get("findings", {}),
                "final_report": state.get("final_report", "")
            }

    return supervisor_agent
We now bring in the Supervisor, which oversees and orchestrates the entire multi-agent workflow. This agent assesses the current progress, noting which team member just completed its task, and intelligently decides the next step: whether to continue researching, run analysis, start report writing, or mark the project complete. By parsing the conversation context and using Gemini for reasoning, the supervisor ensures smooth transitions and quality control throughout the research pipeline.
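Since the supervisor falls back to a fixed researcher, analyst, writer, FINISH order whenever the LLM's reply matches no keyword, we can verify that logic deterministically. The sketch below mirrors, rather than imports, the routing branch above, so it runs without any API calls:

def route_decision(llm_reply: str, current_agent: str) -> str:
    """Standalone mirror of the supervisor's keyword routing, for offline testing."""
    reply = llm_reply.strip().lower()
    if "finish" in reply or "complete" in reply:
        return "FINISH"
    if "research" in reply:
        return "researcher"
    if "analy" in reply:
        return "analyst"
    if "writ" in reply:
        return "writer"
    fallback = {"researcher": "analyst", "analyst": "writer", "writer": "FINISH"}
    return fallback.get(current_agent, "researcher")

assert route_decision("analyst", "researcher") == "analyst"
assert route_decision("unintelligible reply", "writer") == "FINISH"  # fallback path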
def create_research_team_graph() -> StateGraph:
    """Creates the complete research team workflow graph"""
    llm = create_llm()
    members = ["researcher", "analyst", "writer"]
    researcher = create_research_agent(llm)
    analyst = create_analyst_agent(llm)
    writer = create_writer_agent(llm)
    supervisor = create_supervisor_agent(llm, members)
    workflow = StateGraph(AgentState)
    workflow.add_node("researcher", researcher)
    workflow.add_node("analyst", analyst)
    workflow.add_node("writer", writer)
    workflow.add_node("supervisor", supervisor)
    workflow.add_edge("researcher", "supervisor")
    workflow.add_edge("analyst", "supervisor")
    workflow.add_edge("writer", "supervisor")
    workflow.add_conditional_edges(
        "supervisor",
        lambda x: x["next"],
        {
            "researcher": "researcher",
            "analyst": "analyst",
            "writer": "writer",
            "FINISH": END
        }
    )
    workflow.set_entry_point("supervisor")
    return workflow

def compile_research_team():
    """Compile the research team graph with memory"""
    workflow = create_research_team_graph()
    memory = MemorySaver()
    app = workflow.compile(checkpointer=memory)
    return app

def run_research_team(topic: str, thread_id: str = "research_session_1"):
    """Run the complete research team workflow"""
    app = compile_research_team()
    initial_state = {
        "messages": [HumanMessage(content=f"Research the topic: {topic}")],
        "research_topic": topic,
        "next": "researcher",
        "current_agent": "start",
        "findings": {},
        "final_report": ""
    }
    config = {"configurable": {"thread_id": thread_id}}
    print(f"🔍 Starting research on: {topic}")
    print("=" * 50)
    try:
        final_state = None
        for step, state in enumerate(app.stream(initial_state, config=config)):
            print(f"\n📍 Step {step + 1}: {list(state.keys())[0]}")
            current_state = list(state.values())[0]
            if current_state["messages"]:
                last_message = current_state["messages"][-1]
                if isinstance(last_message, AIMessage):
                    print(f"💬 {last_message.content[:200]}...")
            final_state = current_state
            if step > 10:
                print("⚠️ Maximum steps reached. Stopping execution.")
                break
        return final_state
    except Exception as e:
        print(f"❌ Error during execution: {str(e)}")
        return None
We now assemble and execute the entire multi-agent workflow using LangGraph. First, we define the research team graph, which consists of a node for each agent, researcher, analyst, writer, and supervisor, connected by logical transitions. Then we compile this graph with memory using MemorySaver to persist conversation history. Finally, the run_research_team() function initializes the process with a topic and streams the execution step by step, allowing us to follow each agent's contribution in real time. This orchestration yields a fully automated, collaborative research pipeline.
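Because the graph is compiled with a MemorySaver checkpointer, each thread_id keeps its state across invocations, and we can inspect it with LangGraph's get_state API. One caveat, reflected in the sketch below: compile_research_team() builds a fresh in-memory checkpointer on every call, so a checkpoint is only found on the same compiled app object that ran the session.

app = compile_research_team()
config = {"configurable": {"thread_id": "research_session_1"}}
snapshot = app.get_state(config)  # StateSnapshot; .values is empty if this thread never ran on this app
if snapshot.values:
    print("Last agent:", snapshot.values.get("current_agent"))
    print("Report length:", len(snapshot.values.get("final_report", "")))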
if __name__ == "__main__":
    result = run_research_team("Artificial Intelligence in Healthcare")
    if result:
        print("\n" + "=" * 50)
        print("📊 FINAL RESULTS")
        print("=" * 50)
        print(f"🏁 Final Agent: {result['current_agent']}")
        print(f"📋 Findings: {len(result['findings'])} sections")
        print(f"📄 Report Length: {len(result['final_report'])} characters")
        if result['final_report']:
            print("\n📄 FINAL REPORT:")
            print("-" * 30)
            print(result['final_report'])
def interactive_research_session():
    """Run an interactive research session"""
    app = compile_research_team()
    print("🎯 Interactive Research Team Session")
    print("Enter 'quit' to exit\n")
    session_count = 0
    while True:
        topic = input("🔍 Enter research topic: ").strip()
        if topic.lower() in ['quit', 'exit', 'q']:
            print("👋 Goodbye!")
            break
        if not topic:
            print("❌ Please enter a valid topic.")
            continue
        session_count += 1
        thread_id = f"interactive_session_{session_count}"
        result = run_research_team(topic, thread_id)
        if result and result['final_report']:
            print(f"\n✅ Research completed for: {topic}")
            print(f"📄 Report preview: {result['final_report'][:300]}...")
            show_full = input("\n📖 Show full report? (y/n): ").lower()
            if show_full.startswith('y'):
                print("\n" + "=" * 60)
                print("📄 COMPLETE RESEARCH REPORT")
                print("=" * 60)
                print(result['final_report'])
        print("\n" + "-" * 50)
def create_custom_agent(role: str, instructions: str, llm: ChatGoogleGenerativeAI) -> callable:
    """Create a custom agent with specific role and instructions"""
    custom_prompt = ChatPromptTemplate.from_messages([
        ("system", f"""You are a {role} AI.
        Your specific instructions:
        {instructions}
        Always provide detailed, professional responses relevant to your role.
        """),
        MessagesPlaceholder(variable_name="messages"),
        ("human", "Task: {task}")
    ])
    custom_chain = custom_prompt | llm

    def custom_agent(state: AgentState) -> AgentState:
        """Execute custom agent task"""
        try:
            response = custom_chain.invoke({
                "messages": state["messages"],
                "task": state["research_topic"]
            })
            return {
                "messages": state["messages"] + [AIMessage(content=response.content)],
                "next": "supervisor",
                "current_agent": role.lower().replace(" ", "_"),
                "research_topic": state["research_topic"],
                "findings": state.get("findings", {}),
                "final_report": state.get("final_report", "")
            }
        except Exception as e:
            error_msg = f"{role} agent error: {str(e)}"
            return {
                "messages": state["messages"] + [AIMessage(content=error_msg)],
                "next": "supervisor",
                "current_agent": role.lower().replace(" ", "_"),
                "research_topic": state["research_topic"],
                "findings": state.get("findings", {}),
                "final_report": state.get("final_report", "")
            }

    return custom_agent
We round out our system with execution and customization capabilities. The main block lets us trigger a run directly, which makes it easy to test the pipeline on a real topic such as Artificial Intelligence in Healthcare. For more dynamic use, interactive_research_session() accepts multiple topic queries in a loop, simulating real-time exploration. Finally, the create_custom_agent() function lets us integrate new agents with unique roles and instructions, making the framework flexible and extensible for specialized workflows.
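To show how create_custom_agent() plugs into a LangGraph workflow, here is a hedged sketch of a minimal one-node graph built around a hypothetical Fact Checker role; the role, instructions, and wiring are assumptions, not part of the original pipeline:

def build_custom_pipeline():
    """Minimal sketch: a single custom agent wired into its own graph."""
    llm = create_llm()
    fact_checker = create_custom_agent(
        "Fact Checker",
        "Verify the claims in the conversation and flag anything unsupported.",
        llm
    )
    workflow = StateGraph(AgentState)
    workflow.add_node("fact_checker", fact_checker)
    workflow.set_entry_point("fact_checker")
    workflow.add_edge("fact_checker", END)
    return workflow.compile()

# Usage: invoke it with the same state shape the team graph uses
# app = build_custom_pipeline()
# app.invoke({"messages": [HumanMessage(content="Check: GPUs always beat CPUs")],
#             "next": "", "current_agent": "start",
#             "research_topic": "Check: GPUs always beat CPUs",
#             "findings": {}, "final_report": ""})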
def visualize_graph():
    """Visualize the research team graph structure"""
    try:
        app = compile_research_team()
        graph_repr = app.get_graph()
        print("🗺️ Research Team Graph Structure")
        print("=" * 40)
        print(f"Nodes: {list(graph_repr.nodes.keys())}")
        print(f"Edges: {[(edge.source, edge.target) for edge in graph_repr.edges]}")
        try:
            print(graph_repr.draw_mermaid())
        except Exception:
            print("📊 Visual graph requires mermaid-py package")
            print("Install with: !pip install mermaid-py")
    except Exception as e:
        print(f"❌ Error visualizing graph: {str(e)}")
import time
from datetime import datetime
def monitor_research_performance(topic: str):
    """Monitor and report performance metrics"""
    start_time = time.time()
    print(f"⏱️ Starting performance monitoring for: {topic}")
    result = run_research_team(topic, f"perf_test_{int(time.time())}")
    end_time = time.time()
    duration = end_time - start_time
    metrics = {
        "duration": duration,
        "total_messages": len(result["messages"]) if result else 0,
        "findings_sections": len(result["findings"]) if result else 0,
        "report_length": len(result["final_report"]) if result and result["final_report"] else 0,
        "success": result is not None
    }
    print("\n📊 PERFORMANCE METRICS")
    print("=" * 30)
    print(f"⏱️ Duration: {duration:.2f} seconds")
    print(f"💬 Total Messages: {metrics['total_messages']}")
    print(f"📋 Findings Sections: {metrics['findings_sections']}")
    print(f"📄 Report Length: {metrics['report_length']} chars")
    print(f"✅ Success: {metrics['success']}")
    return metrics
def quick_start_demo():
    """Complete demo of the research team system"""
    print("🚀 LangGraph Research Team - Quick Start Demo")
    print("=" * 50)
    topics = [
        "Climate Change Impact on Agriculture",
        "Quantum Computing Applications",
        "Digital Privacy in the Modern Age"
    ]
    for i, topic in enumerate(topics, 1):
        print(f"\n🔍 Demo {i}: {topic}")
        print("-" * 40)
        try:
            result = run_research_team(topic, f"demo_{i}")
            if result and result['final_report']:
                print(f"✅ Research completed successfully!")
                print(f"📊 Report preview: {result['final_report'][:150]}...")
            else:
                print("❌ Research failed")
        except Exception as e:
            print(f"❌ Error in demo {i}: {str(e)}")
        print("\n" + "=" * 30)
    print("🎉 Demo completed!")

quick_start_demo()
We finalize the system by adding utilities for graph visualization, performance monitoring, and a quick-start demo. The visualize_graph() function prints a structural overview of the agent connections, ideal for debugging or presentations. The monitor_research_performance() function tracks run time, message volume, and report size, helping us assess the system's efficiency. Finally, quick_start_demo() runs several sample research topics in sequence, showing how seamlessly the agents collaborate to generate insights.
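To exercise these utilities together, a short usage sketch (the topic is arbitrary, and the monitoring call makes real Gemini requests):

visualize_graph()
metrics = monitor_research_performance("Renewable Energy Storage")
print(f"Run finished in {metrics['duration']:.1f}s, success={metrics['success']}")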
In conclusion, we successfully built and tested a fully functional, modular research assistant framework using LangGraph. With clear agent roles and automated task routing, we streamline research from a raw topic to a well-structured final report. Whether we run the quick-start demo, hold interactive sessions, or monitor performance, this system lets us manage complex research tasks with minimal intervention. We are now equipped to adapt or extend this setup by integrating custom agents, visualizing workflows, or even deploying it in real applications.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of an artificial intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable to a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.
