In this tutorial, we walk through a seamless integration of AutoGen and Semantic Kernel with Google's Gemini Flash model. We begin by setting up our GeminiWrapper and SemanticKernelGeminiPlugin classes to bridge Gemini's generative power with AutoGen's multi-agent orchestration. From there, we configure specialized agents, ranging from code reviewers to creative analysts, and demonstrate how to use AutoGen's ConversableAgent API alongside Semantic Kernel's decorated functions for text analysis, summarization, code review, and creative problem solving. By combining AutoGen's robust agent management with Semantic Kernel's function-based approach, we build an advanced AI assistant that adapts to a variety of tasks and returns structured, actionable insights.
!pip install pyautogen semantic-kernel google-generativeai python-dotenv
import os
import asyncio
from typing import Dict, Any, List
import autogen
import google.generativeai as genai
from semantic_kernel import Kernel
from semantic_kernel.functions import KernelArguments
from semantic_kernel.functions.kernel_function_decorator import kernel_function
We start by installing the core dependencies: pyautogen, semantic-kernel, google-generativeai, and python-dotenv, ensuring we have all the libraries required for our multi-agent and semantic-function setup. We then import the essential Python modules (os, asyncio, typing), along with AutoGen for agent orchestration, genai for Gemini API access, and the Semantic Kernel classes and decorators used to define our AI functions.
GEMINI_API_KEY = "Use Your API Key Here"
genai.configure(api_key=GEMINI_API_KEY)
config_list = [
    {
        "model": "gemini-1.5-flash",
        "api_key": GEMINI_API_KEY,
        "api_type": "google",
        "api_base": "https://generativelanguage.googleapis.com/v1beta",
    }
]
We define our placeholder GEMINI_API_KEY and immediately configure the genai client so that all subsequent Gemini calls are authenticated. We then build a config_list holding the Gemini Flash model parameters, the model name, API key, endpoint type, and base URL, which we will hand to our agents for LLM interactions.
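Hard-coding the key is fine for a quick Colab demo, but since python-dotenv is already installed, a safer pattern is to read the key from the environment. A minimal sketch of that alternative (the helper name and fallback value are our own, not part of the tutorial code):

```python
import os

def get_gemini_key(fallback: str = "") -> str:
    # Prefer an environment variable (e.g. set via Colab secrets or a .env
    # file loaded with python-dotenv) over hard-coding the key in the notebook.
    return os.environ.get("GEMINI_API_KEY", fallback)
```

With this in place, `genai.configure(api_key=get_gemini_key())` keeps the secret out of the notebook source.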
class GeminiWrapper:
"""Wrapper for Gemini API to work with AutoGen"""
def __init__(self, model_name="gemini-1.5-flash"):
self.model = genai.GenerativeModel(model_name)
def generate_response(self, prompt: str, temperature: float = 0.7) -> str:
"""Generate response using Gemini"""
try:
response = self.model.generate_content(
prompt,
generation_config=genai.types.GenerationConfig(
temperature=temperature,
max_output_tokens=2048,
)
)
return response.text
except Exception as e:
return f"Gemini API Error: {str(e)}"
We wrap all Gemini Flash interactions in a GeminiWrapper class, where we initialize a GenerativeModel for our chosen model and expose a simple generate_response method. In that method, we pass the prompt and temperature to Gemini's generation API (capped at 2048 output tokens) and return either the raw text or a formatted error message.
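Because generate_response catches every exception and returns a string, callers never need their own try/except around it. A hypothetical offline sketch of that contract, using a stub in place of the real Gemini client (both stub classes are ours, for illustration only):

```python
class StubModel:
    """Stands in for genai.GenerativeModel; always fails, to exercise the error path."""
    def generate_content(self, prompt, generation_config=None):
        raise RuntimeError("quota exceeded")

class StubWrapper:
    """Mirrors GeminiWrapper.generate_response's try/except contract."""
    def __init__(self):
        self.model = StubModel()

    def generate_response(self, prompt: str, temperature: float = 0.7) -> str:
        try:
            response = self.model.generate_content(prompt)
            return response.text
        except Exception as e:
            return f"Gemini API Error: {str(e)}"

print(StubWrapper().generate_response("hello"))  # Gemini API Error: quota exceeded
```

The same pattern means downstream agent code can treat every response as displayable text, at the cost of having to inspect strings to detect failures.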
class SemanticKernelGeminiPlugin:
"""Semantic Kernel plugin using Gemini Flash for advanced AI operations"""
def __init__(self):
self.kernel = Kernel()
self.gemini = GeminiWrapper()
@kernel_function(name="analyze_text", description="Analyze text for sentiment and key insights")
def analyze_text(self, text: str) -> str:
"""Analyze text using Gemini Flash"""
prompt = f"""
Analyze the following text comprehensively:
Text: {text}
Provide analysis in this format:
- Sentiment: (positive/negative/neutral with confidence)
- Key Themes: (main topics and concepts)
- Insights: (important observations and patterns)
- Recommendations: (actionable next steps)
- Tone: (formal/informal/technical/emotional)
"""
return self.gemini.generate_response(prompt, temperature=0.3)
@kernel_function(name="generate_summary", description="Generate comprehensive summary")
def generate_summary(self, content: str) -> str:
"""Generate summary using Gemini's advanced capabilities"""
prompt = f"""
Create a comprehensive summary of the following content:
Content: {content}
Provide:
1. Executive Summary (2-3 sentences)
2. Key Points (bullet format)
3. Important Details
4. Conclusion/Implications
"""
return self.gemini.generate_response(prompt, temperature=0.4)
@kernel_function(name="code_analysis", description="Analyze code for quality and suggestions")
def code_analysis(self, code: str) -> str:
"""Analyze code using Gemini's code understanding"""
prompt = f"""
Analyze this code comprehensively:
```
{code}
```
Provide analysis covering:
- Code Quality: (readability, structure, best practices)
- Performance: (efficiency, optimization opportunities)
- Security: (potential vulnerabilities, security best practices)
- Maintainability: (documentation, modularity, extensibility)
- Suggestions: (specific improvements with examples)
"""
return self.gemini.generate_response(prompt, temperature=0.2)
@kernel_function(name="creative_solution", description="Generate creative solutions to problems")
def creative_solution(self, problem: str) -> str:
"""Generate creative solutions using Gemini's creative capabilities"""
prompt = f"""
Problem: {problem}
Generate creative solutions:
1. Conventional Approaches (2-3 standard solutions)
2. Innovative Ideas (3-4 creative alternatives)
3. Hybrid Solutions (combining different approaches)
4. Implementation Strategy (practical steps)
5. Potential Challenges and Mitigation
"""
return self.gemini.generate_response(prompt, temperature=0.8)
We encapsulate our Semantic Kernel logic in the SemanticKernelGeminiPlugin, where we initialize both the Kernel and our GeminiWrapper to power custom AI functions. Using the @kernel_function decorator, we declare methods such as analyze_text, generate_summary, code_analysis, and creative_solution, each of which builds a structured prompt and delegates the heavy lifting to Gemini Flash. This plugin lets us register and invoke advanced AI operations within our Semantic Kernel environment.
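Note the deliberate temperature per function: 0.2 for code analysis, 0.3 for text analysis, 0.4 for summaries, and 0.8 for creative work. If you add your own kernel functions, one way to keep that convention in a single place is a small lookup table; a sketch of our own (the table and helper are not part of the plugin, though the values are taken from it):

```python
# Temperatures mirrored from the plugin above: lower = more deterministic output.
TASK_TEMPERATURES = {
    "code_analysis": 0.2,
    "analyze_text": 0.3,
    "generate_summary": 0.4,
    "creative_solution": 0.8,
}

def temperature_for(task: str, default: float = 0.7) -> float:
    """Return the temperature convention for a task, falling back to a neutral default."""
    return TASK_TEMPERATURES.get(task, default)
```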
class AdvancedGeminiAgent:
"""Advanced AI Agent using Gemini Flash with AutoGen and Semantic Kernel"""
def __init__(self):
self.sk_plugin = SemanticKernelGeminiPlugin()
self.gemini = GeminiWrapper()
self.setup_agents()
def setup_agents(self):
"""Initialize AutoGen agents with Gemini Flash"""
gemini_config = {
"config_list": ({"model": "gemini-1.5-flash", "api_key": GEMINI_API_KEY}),
"temperature": 0.7,
}
self.assistant = autogen.ConversableAgent(
name="GeminiAssistant",
llm_config=gemini_config,
system_message="""You are an advanced AI assistant powered by Gemini Flash with Semantic Kernel capabilities.
You excel at analysis, problem-solving, and creative thinking. Always provide comprehensive, actionable insights.
Use structured responses and consider multiple perspectives.""",
human_input_mode="NEVER",
)
self.code_reviewer = autogen.ConversableAgent(
name="GeminiCodeReviewer",
llm_config={**gemini_config, "temperature": 0.3},
system_message="""You are a senior code reviewer powered by Gemini Flash.
Analyze code for best practices, security, performance, and maintainability.
Provide specific, actionable feedback with examples.""",
human_input_mode="NEVER",
)
self.creative_analyst = autogen.ConversableAgent(
name="GeminiCreativeAnalyst",
llm_config={**gemini_config, "temperature": 0.8},
system_message="""You are a creative problem solver and innovation expert powered by Gemini Flash.
Generate innovative solutions, and provide fresh perspectives.
Balance creativity with practicality.""",
human_input_mode="NEVER",
)
self.data_specialist = autogen.ConversableAgent(
name="GeminiDataSpecialist",
llm_config={**gemini_config, "temperature": 0.4},
system_message="""You are a data analysis expert powered by Gemini Flash.
Provide evidence-based recommendations and statistical perspectives.""",
human_input_mode="NEVER",
)
self.user_proxy = autogen.ConversableAgent(
name="UserProxy",
human_input_mode="NEVER",
max_consecutive_auto_reply=2,
is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
llm_config=False,
)
def analyze_with_semantic_kernel(self, content: str, analysis_type: str) -> str:
"""Bridge function between AutoGen and Semantic Kernel with Gemini"""
try:
if analysis_type == "text":
return self.sk_plugin.analyze_text(content)
elif analysis_type == "code":
return self.sk_plugin.code_analysis(content)
elif analysis_type == "summary":
return self.sk_plugin.generate_summary(content)
elif analysis_type == "creative":
return self.sk_plugin.creative_solution(content)
else:
return "Invalid analysis type. Use 'text', 'code', 'summary', or 'creative'."
except Exception as e:
return f"Semantic Kernel Analysis Error: {str(e)}"
    def multi_agent_collaboration(self, task: str) -> Dict[str, str]:
"""Orchestrate multi-agent collaboration using Gemini"""
results = {}
agents = {
"assistant": (self.assistant, "comprehensive analysis"),
"code_reviewer": (self.code_reviewer, "code review perspective"),
"creative_analyst": (self.creative_analyst, "creative solutions"),
"data_specialist": (self.data_specialist, "data-driven insights")
}
for agent_name, (agent, perspective) in agents.items():
try:
prompt = f"Task: {task}\n\nProvide your {perspective} on this task."
                response = agent.generate_reply([{"role": "user", "content": prompt}])
                results[agent_name] = response if isinstance(response, str) else str(response)
            except Exception as e:
                results[agent_name] = f"Agent {agent_name} error: {str(e)}"
return results
    def run_comprehensive_analysis(self, query: str) -> Dict[str, Any]:
"""Run comprehensive analysis using all Gemini-powered capabilities"""
results = {}
        analyses = ["text", "summary", "creative"]
        for analysis_type in analyses:
            try:
                results[f"sk_{analysis_type}"] = self.analyze_with_semantic_kernel(query, analysis_type)
            except Exception as e:
                results[f"sk_{analysis_type}"] = f"Error: {str(e)}"
        try:
            results["multi_agent"] = self.multi_agent_collaboration(query)
        except Exception as e:
            results["multi_agent"] = f"Multi-agent error: {str(e)}"
        try:
            results["direct_gemini"] = self.gemini.generate_response(
                f"Provide a comprehensive analysis of: {query}", temperature=0.6
            )
        except Exception as e:
            results["direct_gemini"] = f"Direct Gemini error: {str(e)}"
return results
We wrap our end-to-end AI orchestration in the AdvancedGeminiAgent class, where we initialize our Semantic Kernel plugin and Gemini wrapper and configure a roster of specialized AutoGen agents (assistant, code reviewer, creative analyst, data specialist, and user proxy). With simple methods for Semantic Kernel bridging, multi-agent collaboration, and direct Gemini calls, we enable a complete analysis pipeline for any user query.
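The analyze_with_semantic_kernel bridge above uses an if/elif chain; an equivalent, easily extensible pattern is a dispatch dictionary mapping analysis types to handlers. A minimal stand-alone sketch with stub handlers (the names and stub behavior are illustrative, not from the tutorial):

```python
from typing import Callable, Dict

def make_bridge(handlers: Dict[str, Callable[[str], str]]) -> Callable[[str, str], str]:
    """Build a dispatcher equivalent to the if/elif bridge in analyze_with_semantic_kernel."""
    def analyze(content: str, analysis_type: str) -> str:
        handler = handlers.get(analysis_type)
        if handler is None:
            return f"Invalid analysis type. Use one of: {', '.join(sorted(handlers))}."
        try:
            return handler(content)
        except Exception as e:
            return f"Semantic Kernel Analysis Error: {str(e)}"
    return analyze

# Stub handlers; in the real class these would be the plugin methods.
bridge = make_bridge({
    "text": lambda c: f"[text analysis of {len(c)} chars]",
    "summary": lambda c: f"[summary of {len(c)} chars]",
})
print(bridge("hello", "text"))  # [text analysis of 5 chars]
```

Adding a new analysis type then only means registering another handler, with no change to the dispatch logic.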
def main():
"""Main execution function for Google Colab with Gemini Flash"""
print("🚀 Initializing Advanced Gemini Flash AI Agent...")
print("⚡ Using Gemini 1.5 Flash for high-speed, cost-effective AI processing")
try:
agent = AdvancedGeminiAgent()
print("✅ Agent initialized successfully!")
except Exception as e:
print(f"❌ Initialization error: {str(e)}")
print("💡 Make sure to set your Gemini API key!")
return
    demo_queries = [
        "How can AI transform education in developing countries?",
        "def fibonacci(n): return n if n <= 1 else fibonacci(n-1) + fibonacci(n-2)",
        "What are the most promising renewable energy technologies for 2025?"
    ]
print("\n🔍 Running Gemini Flash Powered Analysis...")
for i, query in enumerate(demo_queries, 1):
print(f"\n{'='*60}")
print(f"🎯 Demo {i}: {query}")
print('='*60)
try:
results = agent.run_comprehensive_analysis(query)
for key, value in results.items():
if key == "multi_agent" and isinstance(value, dict):
print(f"\n🤖 {key.upper().replace('_', ' ')}:")
for agent_name, response in value.items():
print(f" 👤 {agent_name}: {str(response)(:200)}...")
else:
print(f"\n📊 {key.upper().replace('_', ' ')}:")
print(f" {str(value)(:300)}...")
except Exception as e:
print(f"❌ Error in demo {i}: {str(e)}")
print(f"\n{'='*60}")
print("🎉 Gemini Flash AI Agent Demo Completed!")
print("💡 To use with your API key, replace 'your-gemini-api-key-here'")
print("🔗 Get your free Gemini API key at: https://makersuite.google.com/app/apikey")
if __name__ == "__main__":
main()
Finally, we run the main function, which initializes the AdvancedGeminiAgent, prints status messages, and iterates through a set of demo queries. For each query, we collect and display the results of the Semantic Kernel analyses, the multi-agent collaboration, and the direct Gemini responses, giving a clear, step-by-step showcase of our multi-agent workflow.
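main() truncates each result with raw slicing ([:200], [:300]); if you reuse this demo, a tiny helper keeps that display logic in one place and only appends an ellipsis when text was actually cut. This helper is our own addition, not part of the tutorial:

```python
def preview(value, limit: int = 200) -> str:
    """Truncate a result for console display, appending an ellipsis only when cut."""
    s = str(value)
    return s if len(s) <= limit else s[:limit] + "..."

print(preview("short result"))         # short result
print(preview("x" * 500, limit=10))    # xxxxxxxxxx...
```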
In conclusion, we have shown how AutoGen and Semantic Kernel complement each other to produce a versatile multi-agent system powered by Gemini Flash. We highlighted how AutoGen simplifies the orchestration of diverse expert agents, while Semantic Kernel provides a clean, declarative layer for defining and invoking advanced AI functions. By uniting these tools in a Colab notebook, we enabled rapid experimentation and prototyping of complex AI workflows without sacrificing clarity or control.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent venture is the launch of an artificial intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a broad audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.
