A Coding Implementation with Arcade: Integrating Gemini Developer API Tools into LangGraph Agents for Autonomous AI Workflows

by Brenden Burgess


Arcade transforms your LangGraph agents from static conversational interfaces into dynamic, action-oriented assistants by providing a rich suite of ready-to-use tools, including web scraping and search, as well as specialized APIs for finance, maps, and more. In this tutorial, we will learn how to initialize the ArcadeToolManager, retrieve individual tools (such as Web.ScrapeUrl) or entire toolkits, and integrate them seamlessly with Google's Gemini Developer API chat model via LangChain's ChatGoogleGenerativeAI. In a few steps, we install the dependencies, securely load your API keys, retrieve and inspect the tools, configure the Gemini model, and assemble a ReAct-style agent with checkpointed memory. Throughout, Arcade's intuitive Python interface keeps your code concise and your focus squarely on building powerful, real-world workflows, with no low-level HTTP calls or manual parsing required.

!pip install langchain langchain-arcade langchain-google-genai langgraph

We install all the core libraries you need in one go, including the main LangChain features, the Arcade integration for retrieving and managing external tools, the Google GenAI connector for API-key access to Gemini, and the LangGraph orchestration framework.

from getpass import getpass
import os
if "GOOGLE_API_KEY" not in os.environ:
    os.environ("GOOGLE_API_KEY") = getpass("Gemini API Key: ")
if "ARCADE_API_KEY" not in os.environ:
    os.environ("ARCADE_API_KEY") = getpass("Arcade API Key: ")

We securely prompt for your Gemini and Arcade API keys without echoing them to the screen. The keys are set as environment variables, and we only ask for them if they are not already defined, keeping your credentials out of the notebook code.

from langchain_arcade import ArcadeToolManager
manager = ArcadeToolManager(api_key=os.environ["ARCADE_API_KEY"])
tools = manager.get_tools(tools=["Web.ScrapeUrl"], toolkits=["Google"])
print("Loaded tools:", [t.name for t in tools])

We initialize the ArcadeToolManager with your API key, then fetch both the Web.ScrapeUrl tool and the complete Google toolkit. Finally, we print the names of the loaded tools so you can confirm which capabilities are now available to your agent.
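
To sanity-check what you loaded, you can also inspect each tool's description and argument schema. This is a small optional sketch that assumes the Arcade tools come back as standard LangChain tool objects exposing name, description, and args.

# Optional: inspect each retrieved tool's description and argument schema.
for tool in tools:
    print(f"- {tool.name}: {tool.description}")
    print(f"  args: {tool.args}")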

from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.checkpoint.memory import MemorySaver
model = ChatGoogleGenerativeAI(
    model="gemini-1.5-flash",  
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
)
bound_model = model.bind_tools(tools)
memory = MemorySaver()

We initialize the Gemini Developer API chat model (gemini-1.5-flash) with a temperature of zero for deterministic responses, bind in your Arcade tools so the agent can call them during its reasoning, and configure a MemorySaver to persist the agent's state checkpoint by checkpoint.
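
Before wiring up the full agent, you can optionally probe the bound model directly to confirm that Gemini proposes a tool call for a scraping request. This is a minimal sketch; the prompt and URL are purely illustrative, and the exact tool-call payload depends on how Arcade registers the tool.

# Optional probe: the returned AIMessage exposes any proposed tool calls via .tool_calls.
probe = bound_model.invoke("Scrape https://example.com and summarize the page.")
print(probe.tool_calls)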

from langgraph.prebuilt import create_react_agent
graph = create_react_agent(
    model=bound_model,
    tools=tools,
    checkpointer=memory
)

We create a LangGraph ReAct agent that combines your bound Gemini model, the retrieved Arcade tools, and the MemorySaver checkpointer, allowing the agent to iterate through reasoning, tool invocation, and reflection with persistent state across calls.
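
Because the graph is checkpointed, conversation state persists across calls that share the same thread_id. The sketch below, using a hypothetical thread ID and prompts, illustrates the idea; the full streaming example with authorization handling follows in the next section.

# Hypothetical two-turn exchange: both calls share thread_id "demo",
# so the second question can build on context from the first.
demo_config = {"configurable": {"thread_id": "demo", "user_id": "user@example.com"}}
first = graph.invoke({"messages": [("user", "Scrape https://example.com")]}, demo_config)
follow_up = graph.invoke({"messages": [("user", "Summarize what you just found.")]}, demo_config)
print(follow_up["messages"][-1].content)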

from langgraph.errors import NodeInterrupt
config = {
    "configurable": {
        "thread_id": "1",
        "user_id": "user@example.com"
    }
}
user_input = {
    "messages": (
        ("user", "List any new and important emails in my inbox.")
    )
}
try:
    for chunk in graph.stream(user_input, config, stream_mode="values"):
        chunk("messages")(-1).pretty_print()
except NodeInterrupt as exc:
    print(f"\n🔒 NodeInterrupt: {exc}")
    print("Please update your tool authorization or adjust your request, then re-run.")

We set up the agent's configuration (a thread ID and user ID) and the user prompt, then stream the ReAct agent's responses, pretty-printing each chunk as it arrives. If a tool call hits an authorization guard, we catch the NodeInterrupt and prompt you to update your tool authorization or adjust the request before trying again.
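
Once the underlying authorization has been granted (for example, by completing the OAuth flow Arcade points you to), a common LangGraph pattern is to resume the same thread by streaming with None as the input so the agent continues from its last checkpoint. The snippet below is a sketch of that resume step under that assumption, not part of the original run.

# After authorizing the tool, resume the interrupted thread: passing None as the
# input tells LangGraph to continue from the saved checkpoint for this thread_id.
for chunk in graph.stream(None, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()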

In conclusion, by centering our agent architecture on Arcade, we gain instant access to a plug-and-play ecosystem of external capabilities that would otherwise take days to build from scratch. The bind_tools pattern merges Arcade's tool set with Gemini's natural-language reasoning, while LangGraph's ReAct framework orchestrates tool invocation in response to user requests. Whether you are crawling websites for real-time data, automating routine research, or integrating domain-specific APIs, Arcade scales with your ambitions, letting you swap in new tools as your use cases evolve.


Check out the Colab Notebook.



Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent venture is the launch of an artificial intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.
