In this tutorial, we provide a practical guide to implementing LangGraph, a streamlined, graph-based orchestration framework, integrated seamlessly with Anthropic's Claude API. Through detailed, executable code optimized for Google Colab, developers learn how to build and visualize AI workflows as interconnected nodes performing distinct tasks, such as generating concise answers, critically analyzing responses, and automatically composing technical blog content. The compact implementation highlights LangGraph's intuitive node architecture, which can manage complex sequences of Claude-powered natural-language tasks, from basic question-answering scenarios to advanced content-generation pipelines.
from getpass import getpass
import os
anthropic_key = getpass("Enter your Anthropic API key: ")
os.environ["ANTHROPIC_API_KEY"] = anthropic_key
print("Key set:", "ANTHROPIC_API_KEY" in os.environ)
We securely prompt users to enter their Anthropic API key using Python's getpass module, ensuring the sensitive value is never displayed. The code then sets this key as an environment variable (ANTHROPIC_API_KEY) and confirms it was stored successfully.
import os
import json
import requests
from typing import Dict, List, Any, Callable, Optional, Union
from dataclasses import dataclass, field
import networkx as nx
import matplotlib.pyplot as plt
from IPython.display import display, HTML, clear_output
We import the essential libraries for building and visualizing structured AI workflows. These include modules for data handling (json, requests, dataclasses), graph creation and visualization (networkx, matplotlib), interactive notebook display (IPython.display), and type annotations (typing) for clarity and maintainability.
try:
    import anthropic
except ImportError:
    print("Installing anthropic package...")
    !pip install -q anthropic
    import anthropic
from anthropic import Anthropic
We ensure the anthropic Python package is available for use. The code attempts to import the module and, if it is not found, installs it automatically using pip in the Google Colab environment. After installation, it imports the Anthropic client, which is essential for interacting with Claude models via the Anthropic API.
@dataclass
class NodeConfig:
    name: str
    function: Callable
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)
The NodeConfig dataclass defines the structure of each node in the LangGraph workflow. Each node has a name, an executable function, optional inputs and outputs, and an optional config dictionary for storing additional parameters. This setup allows for modular, reusable node definitions for graph-based AI tasks.
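As a quick illustration of the dataclass in isolation, a minimal node can be declared by pairing a plain Python function with a NodeConfig instance. The sketch below restates the tutorial's NodeConfig definition so it runs standalone; the "shout" node itself is a hypothetical example, not part of the tutorial's pipeline:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Any, Callable

# Same structure as the tutorial's NodeConfig dataclass
@dataclass
class NodeConfig:
    name: str
    function: Callable
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)

# A hypothetical transformation node: uppercase whatever is in state["text"]
def shout(state, **kwargs):
    return state["text"].upper()

node = NodeConfig(name="shout", function=shout,
                  inputs=["text"], outputs=["shout_output"])

print(node.name)                        # the key the graph will store it under
print(node.function({"text": "hello"}))  # → HELLO
```

Because inputs, outputs, and config all use default factories, a node can be declared with just a name and a function and wired up later.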
class LangGraph:
    def __init__(self, api_key: Optional[str] = None):
        self.api_key = api_key or os.environ.get("ANTHROPIC_API_KEY")
        if not self.api_key:
            try:
                from google.colab import userdata
                self.api_key = userdata.get('ANTHROPIC_API_KEY')
                if not self.api_key:
                    raise ValueError("No API key found")
            except Exception:
                print("No Anthropic API key found in environment variables or Colab secrets.")
                self.api_key = input("Please enter your Anthropic API key: ")
                if not self.api_key:
                    raise ValueError("Please provide an Anthropic API key")
        self.client = Anthropic(api_key=self.api_key)
        self.graph = nx.DiGraph()
        self.nodes = {}
        self.state = {}

    def add_node(self, node_config: NodeConfig):
        self.nodes[node_config.name] = node_config
        self.graph.add_node(node_config.name)
        for input_node in node_config.inputs:
            if input_node in self.nodes:
                self.graph.add_edge(input_node, node_config.name)
        return self

    def claude_node(self, name: str, prompt_template: str, model: str = "claude-3-7-sonnet-20250219",
                    inputs: List[str] = None, outputs: List[str] = None, system_prompt: str = None):
        """Convenience method to create a Claude API node"""
        inputs = inputs or []
        outputs = outputs or [name + "_response"]

        def claude_fn(state, **kwargs):
            prompt = prompt_template
            for k, v in state.items():
                if isinstance(v, str):
                    prompt = prompt.replace(f"{{{k}}}", v)
            message_params = {
                "model": model,
                "max_tokens": 1000,
                "messages": [{"role": "user", "content": prompt}]
            }
            if system_prompt:
                message_params["system"] = system_prompt
            response = self.client.messages.create(**message_params)
            return response.content[0].text

        node_config = NodeConfig(
            name=name,
            function=claude_fn,
            inputs=inputs,
            outputs=outputs,
            config={"model": model, "prompt_template": prompt_template}
        )
        return self.add_node(node_config)

    def transform_node(self, name: str, transform_fn: Callable,
                       inputs: List[str] = None, outputs: List[str] = None):
        """Add a data transformation node"""
        inputs = inputs or []
        outputs = outputs or [name + "_output"]
        node_config = NodeConfig(
            name=name,
            function=transform_fn,
            inputs=inputs,
            outputs=outputs
        )
        return self.add_node(node_config)

    def visualize(self):
        """Visualize the graph"""
        plt.figure(figsize=(10, 6))
        pos = nx.spring_layout(self.graph)
        nx.draw(self.graph, pos, with_labels=True, node_color="lightblue",
                node_size=1500, arrowsize=20, font_size=10)
        plt.title("LangGraph Flow")
        plt.tight_layout()
        plt.show()
        print("\nGraph Structure:")
        for node in self.graph.nodes():
            successors = list(self.graph.successors(node))
            if successors:
                print(f"  {node} → {', '.join(successors)}")
            else:
                print(f"  {node} (endpoint)")
        print()

    def _get_execution_order(self):
        """Determine execution order based on dependencies"""
        try:
            return list(nx.topological_sort(self.graph))
        except nx.NetworkXUnfeasible:
            raise ValueError("Graph contains a cycle")

    def execute(self, initial_state: Dict[str, Any] = None):
        """Execute the graph in topological order"""
        self.state = initial_state or {}
        execution_order = self._get_execution_order()
        print("Executing LangGraph flow:")
        for node_name in execution_order:
            print(f"- Running node: {node_name}")
            node = self.nodes[node_name]
            inputs = {k: self.state.get(k) for k in node.inputs if k in self.state}
            result = node.function(self.state, **inputs)
            if len(node.outputs) == 1:
                self.state[node.outputs[0]] = result
            elif isinstance(result, (list, tuple)) and len(result) == len(node.outputs):
                for i, output_name in enumerate(node.outputs):
                    self.state[output_name] = result[i]
        print("Execution completed!")
        return self.state
def run_example(question="What are the key benefits of using a graph-based architecture for AI workflows?"):
    """Run an example LangGraph flow with a predefined question"""
    print(f"Running example with question: '{question}'")
    graph = LangGraph()

    def question_provider(state, **kwargs):
        return question

    graph.transform_node(
        name="question_provider",
        transform_fn=question_provider,
        outputs=["user_question"]
    )
    graph.claude_node(
        name="question_answerer",
        prompt_template="Answer this question clearly and concisely: {user_question}",
        inputs=["user_question"],
        outputs=["answer"],
        system_prompt="You are a helpful AI assistant."
    )
    graph.claude_node(
        name="answer_analyzer",
        prompt_template="Analyze if this answer addresses the question well: Question: {user_question}\nAnswer: {answer}",
        inputs=["user_question", "answer"],
        outputs=["analysis"],
        system_prompt="You are a critical evaluator. Be brief but thorough."
    )
    graph.visualize()
    result = graph.execute()
    print("\n" + "="*50)
    print("EXECUTION RESULTS:")
    print("="*50)
    print(f"\n🔍 QUESTION:\n{result.get('user_question')}\n")
    print(f"📝 ANSWER:\n{result.get('answer')}\n")
    print(f"✅ ANALYSIS:\n{result.get('analysis')}")
    print("="*50 + "\n")
    return graph
The LangGraph class implements a lightweight framework for building and executing graph-based AI workflows powered by Claude from Anthropic. It lets users define modular nodes, either Claude-prompting nodes or custom transformation functions, connect them via dependencies, visualize the entire pipeline, and execute them in topological order. The run_example function demonstrates this with a simple answer-and-evaluate flow, showcasing the clarity and modularity of LangGraph's architecture.
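The topological ordering that execute() relies on can be seen in isolation with networkx alone. This minimal sketch builds a three-node dependency chain shaped like the one in run_example (treating node names directly as graph dependencies, which is an assumption of this illustration) and asks for a valid execution order:

```python
import networkx as nx

# A dependency DAG shaped like run_example's pipeline:
# the answerer needs the provider, and the analyzer needs both.
g = nx.DiGraph()
g.add_edge("question_provider", "question_answerer")
g.add_edge("question_provider", "answer_analyzer")
g.add_edge("question_answerer", "answer_analyzer")

# topological_sort yields nodes only after all their predecessors;
# with these constraints there is exactly one valid order.
order = list(nx.topological_sort(g))
print(order)
# → ['question_provider', 'question_answerer', 'answer_analyzer']
```

If an edge were added back from the analyzer to the provider, the graph would contain a cycle and nx.topological_sort would raise NetworkXUnfeasible, which _get_execution_order converts into a ValueError.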
def run_advanced_example():
    """Run a more advanced example with multiple nodes for content generation"""
    graph = LangGraph()

    def topic_selector(state, **kwargs):
        return "Graph-based AI systems"

    graph.transform_node(
        name="topic_selector",
        transform_fn=topic_selector,
        outputs=["topic"]
    )
    graph.claude_node(
        name="outline_generator",
        prompt_template="Create a brief outline for a technical blog post about {topic}. Include 3-4 main sections only.",
        inputs=["topic"],
        outputs=["outline"],
        system_prompt="You are a technical writer specializing in AI technologies."
    )
    graph.claude_node(
        name="intro_writer",
        prompt_template="Write an engaging introduction for a blog post with this outline: {outline}\nTopic: {topic}",
        inputs=["topic", "outline"],
        outputs=["introduction"],
        system_prompt="You are a technical writer. Write in a clear, engaging style."
    )
    graph.claude_node(
        name="conclusion_writer",
        prompt_template="Write a conclusion for a blog post with this outline: {outline}\nTopic: {topic}",
        inputs=["topic", "outline"],
        outputs=["conclusion"],
        system_prompt="You are a technical writer. Summarize key points and include a forward-looking statement."
    )

    def assembler(state, introduction, outline, conclusion, **kwargs):
        return f"# {state['topic']}\n\n{introduction}\n\n## Outline\n{outline}\n\n## Conclusion\n{conclusion}"

    graph.transform_node(
        name="content_assembler",
        transform_fn=assembler,
        inputs=["topic", "introduction", "outline", "conclusion"],
        outputs=["final_content"]
    )
    graph.visualize()
    result = graph.execute()
    print("\n" + "="*50)
    print("BLOG POST GENERATED:")
    print("="*50 + "\n")
    print(result.get("final_content"))
    print("\n" + "="*50)
    return graph
The run_advanced_example function showcases a more sophisticated use of LangGraph by orchestrating multiple Claude-powered nodes to generate a complete blog post. It starts by selecting a topic, then creates an outline, an introduction, and a conclusion, all using structured Claude prompts. Finally, a transformation node assembles the content into a formatted blog post. This example demonstrates how LangGraph can automate complex, multi-step content-generation tasks with modular nodes connected in a clear, executable flow.
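The structured prompts these nodes send to Claude come from the substitution loop inside claude_fn, which can be sketched on its own: each {key} placeholder in the template is replaced with the matching string value from the shared state, using the same str.replace logic as the tutorial's code:

```python
def fill_template(prompt_template, state):
    """Replace {key} placeholders with string values from state,
    mirroring the substitution loop inside claude_fn."""
    prompt = prompt_template
    for k, v in state.items():
        if isinstance(v, str):       # only string values are substituted
            prompt = prompt.replace(f"{{{k}}}", v)
    return prompt

state = {"topic": "Graph-based AI systems",
         "outline": "1. Intro\n2. Core ideas"}
template = "Create a brief outline for a technical blog post about {topic}."
print(fill_template(template, state))
# → Create a brief outline for a technical blog post about Graph-based AI systems.
```

Note that any placeholder without a matching string key in the state is left untouched, so a missing upstream output shows up verbatim in the prompt rather than raising an error, a deliberately forgiving design for notebook experimentation.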
print("1. Running simple question-answering example")
question = "What are the three main advantages of using graph-based AI architectures?"
simple_graph = run_example(question)
print("\n2. Running advanced blog post creation example")
advanced_graph = run_advanced_example()
Finally, we trigger the execution of both defined LangGraph workflows. First, the code runs the simple question-answering example by passing a predefined question to the run_example() function. Then, it initiates the more advanced blog-post-creation workflow using run_advanced_example(). Together, these calls demonstrate LangGraph's practical flexibility, from prompt-based interactions to multi-step content automation using Anthropic's Claude API.
In conclusion, we have implemented LangGraph integrated with Anthropic's Claude API, illustrating how easy it is to design modular AI workflows that leverage powerful language models in structured, graph-based pipelines. By visualizing task flows and separating responsibilities among nodes, such as question processing, analytical evaluation, content drafting, and assembly, developers gain hands-on experience in building maintainable, scalable AI systems. LangGraph's clear dependency structure and Claude's sophisticated language capabilities provide an efficient solution for orchestrating complex AI processes, especially for rapid prototyping and execution in environments like Google Colab.
Check out the Colab Notebook. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don't forget to join our 95k+ ML SubReddit and subscribe to our Newsletter.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of an artificial intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
