Function calling lets an LLM act as a bridge between natural-language prompts and real-world code or APIs. Instead of simply generating text, the model decides when to invoke a predefined function, emits a structured JSON call with the function's name and arguments, then waits for your application to execute that call and return the result. This back-and-forth can loop, potentially invoking several functions in sequence, enabling rich, multi-step interactions entirely under conversational control. In this tutorial, we will implement a weather assistant with Gemini 2.0 Flash to show how to set up and manage this cycle, covering several variants of function calling.

By integrating function calls, we turn a chat interface into a dynamic tool for real-time tasks, whether retrieving live weather data, checking order statuses, scheduling appointments, or updating databases. Users no longer fill in complex forms or navigate through multiple screens; they simply describe what they need, and the LLM orchestrates the underlying actions transparently. This natural-language automation makes it easy to build AI agents that can access external data sources, perform transactions, or trigger workflows, all within a single conversation.
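To make the cycle concrete before diving into the SDK, here is a minimal sketch of the two halves of a function call: the structured JSON payload a model emits, and the application-side dispatch that executes it. The function name, arguments, and stubbed result are purely illustrative, not part of any real API.

```python
import json

# Hypothetical payload an LLM might emit: the function's name plus its arguments.
model_output = '{"name": "get_weather_forecast", "args": {"location": "Berlin", "date": "2025-03-04"}}'

def get_weather_forecast(location: str, date: str) -> dict:
    # Stubbed result for illustration; a real implementation would call a weather API.
    return {"location": location, "date": date, "temperature_c": 6.5}

# The application maps function names to real implementations.
registry = {"get_weather_forecast": get_weather_forecast}

# Parse the model's call, dispatch it, and capture the result.
call = json.loads(model_output)
result = registry[call["name"]](**call["args"])
print(result)  # this result would be sent back to the model for the final answer
```

The registry pattern keeps the model's decision ("which function, which arguments") cleanly separated from your code's execution of it, which is exactly the split the rest of this tutorial builds on.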
Function Calling with Google Gemini 2.0 Flash
!pip install "google-genai>=1.0.0" geopy requests
We install the Gemini Python SDK (google-genai ≥ 1.0.0), along with geopy to convert location names into coordinates and requests to make HTTP calls, ensuring that all core dependencies for our Colab weather assistant are in place.
import os
from google import genai
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY", "Use_Your_API_Key")
client = genai.Client(api_key=GEMINI_API_KEY)
model_id = "gemini-2.0-flash"
We import the Gemini SDK, set the API key, and create a genai.Client instance configured to use the "gemini-2.0-flash" model, establishing the foundation for all subsequent function-calling requests.
res = client.models.generate_content(
    model=model_id,
    contents="Tell me 1 good fact about Nuremberg."
)
print(res.text)
We send a user prompt ("Tell me 1 good fact about Nuremberg.") to the Gemini 2.0 Flash model via generate_content, then print the model's text response, demonstrating an end-to-end generation call with the SDK.
Function Calling with a JSON Schema
weather_function = {
    "name": "get_weather_forecast",
    "description": "Retrieves the weather using the Open-Meteo API for a given location (city) and a date (yyyy-mm-dd). Returns a dictionary with the time and temperature for each hour.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g., San Francisco, CA"
            },
            "date": {
                "type": "string",
                "description": "The forecast date in yyyy-mm-dd format"
            }
        },
        "required": ["location", "date"]
    }
}
Here, we define a JSON schema for our get_weather_forecast tool, specifying its name, a description that guides Gemini on when to use it, and the exact input parameters (location and date) with their types, descriptions, and required fields, so that the model can emit valid function calls.
from google.genai.types import GenerateContentConfig
config = GenerateContentConfig(
    system_instruction="You are a helpful assistant that uses tools to access and retrieve information from a weather API. Today is 2025-03-04.",
    tools=[{"function_declarations": [weather_function]}],
)
We create a GenerateContentConfig that tells Gemini to act as a helpful assistant and registers the weather function under tools. From then on, the model knows it can generate structured function calls when asked for forecast data.
response = client.models.generate_content(
    model=model_id,
    contents="Whats the weather in Berlin today?"
)
print(response.text)
This call sends the bare prompt ("Whats the weather in Berlin today?") without the configuration (and therefore no function definitions), so Gemini falls back to plain text completion, offering generic advice instead of invoking the weather tool.
response = client.models.generate_content(
    model=model_id,
    config=config,
    contents="Whats the weather in Berlin today?"
)
for part in response.candidates[0].content.parts:
    print(part.function_call)
By passing config (which includes the JSON-schema tool), Gemini recognizes that it should call get_weather_forecast rather than answer in plain text. The loop over response.candidates[0].content.parts then prints each part's function_call object, showing exactly which function the model decided to invoke (with its name and arguments).
from google.genai import types
from geopy.geocoders import Nominatim
import requests
geolocator = Nominatim(user_agent="weather-app")
def get_weather_forecast(location, date):
    location = geolocator.geocode(location)
    if location:
        try:
            response = requests.get(f"https://api.open-meteo.com/v1/forecast?latitude={location.latitude}&longitude={location.longitude}&hourly=temperature_2m&start_date={date}&end_date={date}")
            data = response.json()
            return {time: temp for time, temp in zip(data["hourly"]["time"], data["hourly"]["temperature_2m"])}
        except Exception as e:
            return {"error": str(e)}
    else:
        return {"error": "Location not found"}

functions = {
    "get_weather_forecast": get_weather_forecast
}

def call_function(function_name, **kwargs):
    return functions[function_name](**kwargs)
def function_call_loop(prompt):
    contents = [types.Content(role="user", parts=[types.Part(text=prompt)])]
    response = client.models.generate_content(
        model=model_id,
        config=config,
        contents=contents
    )
    for part in response.candidates[0].content.parts:
        contents.append(types.Content(role="model", parts=[part]))
        if part.function_call:
            print("Tool call detected")
            function_call = part.function_call
            print(f"Calling tool: {function_call.name} with args: {function_call.args}")
            tool_result = call_function(function_call.name, **function_call.args)
            function_response_part = types.Part.from_function_response(
                name=function_call.name,
                response={"result": tool_result},
            )
            contents.append(types.Content(role="user", parts=[function_response_part]))
            print("Calling LLM with tool results")
            func_gen_response = client.models.generate_content(
                model=model_id, config=config, contents=contents
            )
            contents.append(types.Content(role="model", parts=func_gen_response.candidates[0].content.parts))
    return contents[-1].parts[0].text.strip()
result = function_call_loop("Whats the weather in Berlin today?")
print(result)
This implements a complete "agentic" loop: it sends the prompt to Gemini, inspects the response for a function call, executes get_weather_forecast (using geopy plus an Open-Meteo HTTP request), then feeds the tool's result back into the model to produce and return the final conversational answer.
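The loop above handles a single round of tool use. To support several sequential calls, as mentioned in the introduction, you can keep looping while the model keeps requesting tools. The sketch below illustrates only the control flow: fake_model is a stand-in that requests two tool calls before answering, so the example runs offline; in practice you would replace it with a real client.models.generate_content call each iteration.

```python
# Sketch of a multi-step agentic loop. fake_model is a stub standing in for the
# LLM: it requests the "lookup" tool twice, then produces a final text answer.
def fake_model(history):
    tool_turns = sum(1 for turn in history if turn["role"] == "tool")
    if tool_turns == 0:
        return {"function_call": {"name": "lookup", "args": {"q": "Berlin"}}}
    if tool_turns == 1:
        return {"function_call": {"name": "lookup", "args": {"q": "weather"}}}
    return {"text": "Final answer based on both tool results."}

def lookup(q):
    # Illustrative tool; a real one would hit an API or database.
    return f"result for {q}"

tools = {"lookup": lookup}

def agent_loop(prompt, max_steps=5):
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = fake_model(history)
        if "function_call" in reply:
            call = reply["function_call"]
            # Execute the requested tool and append its output to the history.
            output = tools[call["name"]](**call["args"])
            history.append({"role": "tool", "content": output})
        else:
            return reply["text"]
    return "Step limit reached."

final = agent_loop("Whats the weather in Berlin today?")
print(final)
```

The max_steps cap is a common safeguard in such loops: it prevents a model that keeps requesting tools from running forever.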
Function calling using Python functions
from geopy.geocoders import Nominatim
import requests
geolocator = Nominatim(user_agent="weather-app")
def get_weather_forecast(location: str, date: str) -> dict:
    """
    Retrieves the weather using the Open-Meteo API for a given location (city) and a date (yyyy-mm-dd). Returns a dictionary with the time and temperature for each hour.

    Args:
        location (str): The city and state, e.g., San Francisco, CA
        date (str): The forecast date in yyyy-mm-dd format

    Returns:
        Dict[str, float]: A dictionary with the time as key and the temperature as value
    """
    location = geolocator.geocode(location)
    if location:
        try:
            response = requests.get(f"https://api.open-meteo.com/v1/forecast?latitude={location.latitude}&longitude={location.longitude}&hourly=temperature_2m&start_date={date}&end_date={date}")
            data = response.json()
            return {time: temp for time, temp in zip(data["hourly"]["time"], data["hourly"]["temperature_2m"])}
        except Exception as e:
            return {"error": str(e)}
    else:
        return {"error": "Location not found"}
The get_weather_forecast function first uses geopy's Nominatim to convert a city-and-state string into coordinates, then sends an HTTP request to the Open-Meteo API to retrieve hourly temperature data for the given date, returning a dictionary that maps each timestamp to its corresponding temperature. It also handles errors gracefully, returning an error message if the location is not found or if the API call fails.
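To see what the dict comprehension inside get_weather_forecast produces, here is the same parsing logic applied to a small sample payload shaped like Open-Meteo's hourly response (the temperature values are invented for illustration):

```python
# Sample payload mimicking the shape of Open-Meteo's hourly forecast response;
# the timestamps and temperatures here are made up for demonstration.
sample = {
    "hourly": {
        "time": ["2025-03-04T00:00", "2025-03-04T01:00", "2025-03-04T02:00"],
        "temperature_2m": [4.2, 3.9, 3.7],
    }
}

# The same mapping the tutorial's function builds: timestamp -> temperature.
forecast = {t: temp for t, temp in zip(sample["hourly"]["time"],
                                       sample["hourly"]["temperature_2m"])}
print(forecast)
```

Because zip pairs the two parallel lists element by element, each hourly timestamp ends up keyed to its temperature, which is a convenient shape to hand back to the model as a tool result.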
from google.genai.types import GenerateContentConfig
config = GenerateContentConfig(
    system_instruction="You are a helpful assistant that can help with weather related questions. Today is 2025-03-04.",  # gives the LLM context on the current date
    tools=[get_weather_forecast],
    automatic_function_calling={"disable": True}
)
This configuration registers the Python get_weather_forecast function as a callable tool. It sets a clear system prompt (including the date) for context, while disabling automatic_function_calling so that Gemini returns the function-call payload instead of invoking the tool internally.
r = client.models.generate_content(
    model=model_id,
    config=config,
    contents="Whats the weather in Berlin today?"
)
for part in r.candidates[0].content.parts:
    print(part.function_call)
By sending the prompt with this custom configuration (which includes the Python tool but has automatic calling disabled), this snippet captures Gemini's raw function-call decision. It then loops over each response part to print the function_call object, letting you inspect exactly which tool the model wants to invoke and with what arguments.
from google.genai.types import GenerateContentConfig
config = GenerateContentConfig(
    system_instruction="You are a helpful assistant that uses tools to access and retrieve information from a weather API. Today is 2025-03-04.",  # gives the LLM context on the current date
    tools=[get_weather_forecast],
)
r = client.models.generate_content(
    model=model_id,
    config=config,
    contents="Whats the weather in Berlin today?"
)
print(r.text)
With this configuration (which includes the get_weather_forecast function and leaves automatic function calling enabled by default), the generate_content call has Gemini run the weather tool behind the scenes and then return a natural-language answer. Printing r.text displays that final response, including actual temperature forecasts for Berlin on the specified date.
from google.genai.types import GenerateContentConfig
config = GenerateContentConfig(
    system_instruction="You are a helpful assistant that uses tools to access and retrieve information from a weather API.",
    tools=[get_weather_forecast],
)
prompt = f"""
Today is 2025-03-04. You are chatting with Andrew, you have access to more information about him.
User Context:
- name: Andrew
- location: Nuremberg
User: Can i wear a T-shirt later today?"""
r = client.models.generate_content(
    model=model_id,
    config=config,
    contents=prompt
)
print(r.text)
We extend the assistant with personal context, telling Gemini the user's name and location (Andrew, in Nuremberg) and asking whether it is T-shirt weather, while still using the get_weather_forecast tool under the hood. It then prints the model's natural-language recommendation based on the actual forecast for that day.
In conclusion, we now know how to define functions (via a JSON schema or Python signatures), configure Gemini 2.0 Flash to detect and emit function calls, and implement the "agentic" loop that executes those calls and composes the final response. With these building blocks, we can extend any LLM into a capable, tool-aware assistant that automates workflows, retrieves live data, and interacts with your code or APIs as effortlessly as chatting with a colleague.
