Build an intelligent multi-tool AI agent interface using Streamlit for seamless real-time interaction

by Brenden Burgess


In this tutorial, we will create a powerful, interactive Streamlit application that brings together LangChain's capabilities, the Google Gemini API, and a suite of advanced tools to build an intelligent AI assistant. Using Streamlit's intuitive interface, we will create a chat-based system that can search the web, retrieve Wikipedia content, perform calculations, remember key details, and manage conversation history, all in real time. Whether we are developers, researchers, or simply exploring AI, this setup lets us interact with a multi-agent system directly from the browser, with minimal code and maximum flexibility.

!pip install -q streamlit langchain langchain-google-genai langchain-community
!pip install -q pyngrok python-dotenv wikipedia duckduckgo-search
!npm install -g localtunnel


import streamlit as st
import os
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import Tool, WikipediaQueryRun, DuckDuckGoSearchRun
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import PromptTemplate
from langchain.callbacks.streamlit import StreamlitCallbackHandler
from langchain_community.utilities import WikipediaAPIWrapper, DuckDuckGoSearchAPIWrapper
import asyncio
import threading
import time
from datetime import datetime
import json

We start by installing all the required Python and Node.js dependencies. This includes Streamlit for the front end, LangChain for the agent logic, and tools such as Wikipedia, DuckDuckGo, and ngrok/localtunnel for external search and hosting. Once installed, we import all the modules needed to start building our interactive multi-tool agent.

GOOGLE_API_KEY = "your-gemini-api-key-here"   # replace with your Gemini API key
NGROK_AUTH_TOKEN = "your-ngrok-auth-token-here"   # replace with your ngrok authtoken
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY

Next, we configure our environment by defining the Google Gemini API key and the ngrok authentication token. We assign these credentials to variables and export GOOGLE_API_KEY as an environment variable so that the LangChain agent can securely access the Gemini model at runtime.
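Since `python-dotenv` was installed in the setup step, we can avoid hardcoding credentials altogether. The sketch below, with a hypothetical `.env` file, loads the keys from the environment and falls back to the placeholders only when nothing is set:

```python
import os

try:
    # python-dotenv is optional here; it was installed in the setup cell above
    from dotenv import load_dotenv
    load_dotenv()  # reads key=value pairs from a local .env file, if present
except ImportError:
    pass

# Fall back to the placeholder only if the variable isn't already set
GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY", "your-gemini-api-key-here")
NGROK_AUTH_TOKEN = os.environ.get("NGROK_AUTH_TOKEN", "your-ngrok-auth-token-here")
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY
```

This keeps secrets out of the notebook itself, which matters if we later share the Colab link publicly.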

class InnovativeAgentTools:
    """Advanced tool collection for the multi-agent system"""

    @staticmethod
    def get_calculator_tool():
        def calculate(expression: str) -> str:
            """Calculate mathematical expressions safely"""
            try:
                # Restrict input to arithmetic characters before calling eval
                allowed_chars = set('0123456789+-*/.() ')
                if all(c in allowed_chars for c in expression):
                    result = eval(expression)
                    return f"Result: {result}"
                else:
                    return "Error: Invalid mathematical expression"
            except Exception as e:
                return f"Calculation error: {str(e)}"

        return Tool(
            name="Calculator",
            func=calculate,
            description="Calculate mathematical expressions. Input should be a valid math expression."
        )

    @staticmethod
    def get_memory_tool(memory_store):
        def save_memory(key_value: str) -> str:
            """Save information to memory"""
            try:
                key, value = key_value.split(":", 1)
                memory_store[key.strip()] = value.strip()
                return f"Saved '{key.strip()}' to memory"
            except ValueError:
                return "Error: Use format 'key: value'"

        def recall_memory(key: str) -> str:
            """Recall information from memory"""
            return memory_store.get(key.strip(), f"No memory found for '{key}'")

        return [
            Tool(name="SaveMemory", func=save_memory,
                 description="Save information to memory. Format: 'key: value'"),
            Tool(name="RecallMemory", func=recall_memory,
                 description="Recall saved information. Input: key to recall")
        ]

    @staticmethod
    def get_datetime_tool():
        def get_current_datetime(format_type: str = "full") -> str:
            """Get current date and time"""
            now = datetime.now()
            if format_type == "date":
                return now.strftime("%Y-%m-%d")
            elif format_type == "time":
                return now.strftime("%H:%M:%S")
            else:
                return now.strftime("%Y-%m-%d %H:%M:%S")

        return Tool(
            name="DateTime",
            func=get_current_datetime,
            description="Get current date/time. Options: 'date', 'time', or 'full'"
        )

Here, we define the InnovativeAgentTools class to equip our AI agent with specialized capabilities. We implement a Calculator tool for safe expression evaluation, memory tools to save and recall information across turns, and a DateTime tool to retrieve the current date and time. These tools let our agent reason, remember, and respond contextually, much like a real assistant. Check out the complete notebook here.
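Before wiring these tools into an agent, it is worth exercising their logic on their own. The stand-alone sketch below replicates the tool functions without the LangChain `Tool` wrapper, so it runs with no dependencies:

```python
# Stand-alone sketch of the tool logic above, without the LangChain Tool wrapper
def calculate(expression: str) -> str:
    # Only arithmetic characters are allowed before the eval call
    allowed_chars = set('0123456789+-*/.() ')
    if all(c in allowed_chars for c in expression):
        return f"Result: {eval(expression)}"
    return "Error: Invalid mathematical expression"

memory_store = {}

def save_memory(key_value: str) -> str:
    key, value = key_value.split(":", 1)
    memory_store[key.strip()] = value.strip()
    return f"Saved '{key.strip()}' to memory"

def recall_memory(key: str) -> str:
    return memory_store.get(key.strip(), f"No memory found for '{key}'")

print(calculate("15 * 8 + 32"))             # Result: 152
print(save_memory("favorite color: blue"))  # Saved 'favorite color' to memory
print(recall_memory("favorite color"))      # blue
```

Note that the character whitelist blocks anything containing letters, so attempts like `__import__('os')` are rejected before `eval` ever sees them.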

class MultiAgentSystem:
   """Innovative multi-agent system with specialized capabilities"""
  
   def __init__(self, api_key: str):
       self.llm = ChatGoogleGenerativeAI(
           model="gemini-pro",
           google_api_key=api_key,
           temperature=0.7,
           convert_system_message_to_human=True
       )
       self.memory_store = {}
       self.conversation_memory = ConversationBufferWindowMemory(
           memory_key="chat_history",
           k=10,
           return_messages=True
       )
       self.tools = self._initialize_tools()
       self.agent = self._create_agent()
  
    def _initialize_tools(self):
        """Initialize all available tools"""
        tools = []

        tools.extend([
            DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper()),
            WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
        ])

        tools.append(InnovativeAgentTools.get_calculator_tool())
        tools.append(InnovativeAgentTools.get_datetime_tool())
        tools.extend(InnovativeAgentTools.get_memory_tool(self.memory_store))

        return tools
  
   def _create_agent(self):
       """Create the ReAct agent with advanced prompt"""
       prompt = PromptTemplate.from_template("""
🤖 You are an advanced AI assistant with access to multiple tools and persistent memory.


AVAILABLE TOOLS:
{tools}


TOOL USAGE FORMAT:
- Think step by step about what you need to do
- Use Action: tool_name
- Use Action Input: your input
- Wait for Observation
- Continue until you have a final answer


MEMORY CAPABILITIES:
- You can save important information using SaveMemory
- You can recall previous information using RecallMemory
- Always try to remember user preferences and context


CONVERSATION HISTORY:
{chat_history}


CURRENT QUESTION: {input}


REASONING PROCESS:
{agent_scratchpad}


Begin your response with your thought process, then take action if needed.
""")
      
       agent = create_react_agent(self.llm, self.tools, prompt)
       return AgentExecutor(
           agent=agent,
           tools=self.tools,
           memory=self.conversation_memory,
           verbose=True,
           handle_parsing_errors=True,
           max_iterations=5
       )
  
    def chat(self, message: str, callback_handler=None):
        """Process user message and return response"""
        try:
            if callback_handler:
                response = self.agent.invoke(
                    {"input": message},
                    {"callbacks": [callback_handler]}
                )
            else:
                response = self.agent.invoke({"input": message})
            return response["output"]
        except Exception as e:
            return f"Error processing request: {str(e)}"

In this section, we build the heart of our application, the MultiAgentSystem class. Here, we integrate the Gemini Pro model via LangChain and initialize all the essential tools, including web search, memory, and calculator functions. We configure a ReAct-style agent with a custom prompt that guides tool usage and memory management. Finally, we define a chat method that lets the agent process user input, invoke tools when needed, and generate intelligent, context-aware responses.
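The `k=10` argument to `ConversationBufferWindowMemory` means only the ten most recent exchanges are fed back into the prompt. The illustrative sketch below, using a plain `deque` in place of the LangChain class, shows this windowing behaviour:

```python
from collections import deque

# Illustrative sketch of the k-windowing behaviour of ConversationBufferWindowMemory:
# only the most recent k exchanges (user/assistant pairs) are kept.
class WindowMemory:
    def __init__(self, k: int = 10):
        self.exchanges = deque(maxlen=k)  # deque drops the oldest entry automatically

    def save(self, user_msg: str, ai_msg: str):
        self.exchanges.append((user_msg, ai_msg))

    def history(self):
        return list(self.exchanges)

mem = WindowMemory(k=3)
for i in range(5):
    mem.save(f"question {i}", f"answer {i}")

print(len(mem.history()))   # 3
print(mem.history()[0][0])  # question 2
```

This keeps prompt size bounded no matter how long the conversation runs, at the cost of forgetting older turns (which is exactly why the explicit SaveMemory/RecallMemory tools are useful).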

def create_streamlit_app():
   """Create the innovative Streamlit application"""
  
   st.set_page_config(
       page_title="🚀 Advanced LangChain Agent with Gemini",
       page_icon="🤖",
       layout="wide",
       initial_sidebar_state="expanded"
   )
  
   st.markdown("""
   
   """, unsafe_allow_html=True)
  
    st.markdown("""
    Powered by LangChain + Gemini API + Streamlit
    """, unsafe_allow_html=True)

    with st.sidebar:
        st.header("🔧 Configuration")
        api_key = st.text_input(
            "🔑 Google AI API Key",
            type="password",
            value=GOOGLE_API_KEY if GOOGLE_API_KEY != "your-gemini-api-key-here" else "",
            help="Get your API key from https://ai.google.dev/"
        )
        if not api_key:
            st.error("Please enter your Google AI API key to continue")
            st.stop()
        st.success("✅ API Key configured")

        st.header("🤖 Agent Capabilities")
        st.markdown("""
        - 🔍 **Web Search** (DuckDuckGo)
        - 📚 **Wikipedia Lookup**
        - 🧮 **Mathematical Calculator**
        - 🧠 **Persistent Memory**
        - 📅 **Date & Time**
        - 💬 **Conversation History**
        """)

        if 'agent_system' in st.session_state:
            st.header("🧠 Memory Store")
            memory = st.session_state.agent_system.memory_store
            if memory:
                for key, value in memory.items():
                    st.markdown(f"{key}: {value}")
            else:
                st.info("No memories stored yet")

    if 'agent_system' not in st.session_state:
        with st.spinner("🔄 Initializing Advanced Agent System..."):
            st.session_state.agent_system = MultiAgentSystem(api_key)
            st.success("✅ Agent System Ready!")

    st.header("💬 Interactive Chat")

    if 'messages' not in st.session_state:
        st.session_state.messages = [{
            "role": "assistant",
            "content": """🤖 Hello! I'm your advanced AI assistant powered by Gemini. I can:

• Search the web and Wikipedia for information
• Perform mathematical calculations
• Remember important information across our conversation
• Provide current date and time
• Maintain conversation context

Try asking me something like:
- "Calculate 15 * 8 + 32"
- "Search for recent news about AI"
- "Remember that my favorite color is blue"
- "What's the current time?"
"""
        }]

    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

    if prompt := st.chat_input("Ask me anything..."):
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.markdown(prompt)

        with st.chat_message("assistant"):
            callback_handler = StreamlitCallbackHandler(st.container())
            with st.spinner("🤔 Thinking..."):
                response = st.session_state.agent_system.chat(prompt, callback_handler)
                st.markdown(response)
                st.session_state.messages.append({"role": "assistant", "content": response})

    st.header("💡 Example Queries")
    col1, col2, col3 = st.columns(3)
    with col1:
        if st.button("🔍 Search Example"):
            example = "Search for the latest developments in quantum computing"
            st.session_state.example_query = example
    with col2:
        if st.button("🧮 Math Example"):
            example = "Calculate the compound interest on $1000 at 5% for 3 years"
            st.session_state.example_query = example
    with col3:
        if st.button("🧠 Memory Example"):
            example = "Remember that I work as a data scientist at TechCorp"
            st.session_state.example_query = example

    if 'example_query' in st.session_state:
        st.info(f"Example query: {st.session_state.example_query}")

In this section, we bring everything together by creating an interactive web interface with Streamlit. We configure the page layout, define custom CSS styles, and set up a sidebar for entering the API key and displaying the agent's capabilities. We initialize the multi-agent system, maintain message history, and provide a chat interface that lets users interact in real time. To make it even easier to explore, we also provide example buttons for search, math, and memory-related queries, all in a responsive, nicely styled UI.
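The core of the chat loop is the append-user-turn, respond, append-assistant-turn pattern backed by `st.session_state`. The plain-Python sketch below emulates `st.session_state` with a dict and stubs out the agent, so the pattern can be seen (and tested) outside Streamlit:

```python
# Plain-Python sketch of the Streamlit chat pattern above: st.session_state is
# emulated with a dict, and the responder is a stub in place of the real agent.
session_state = {}

if 'messages' not in session_state:
    session_state['messages'] = [{"role": "assistant", "content": "Hello! Ask me anything."}]

def handle_prompt(prompt: str, respond) -> str:
    """Append the user turn, generate a reply, append the assistant turn."""
    session_state['messages'].append({"role": "user", "content": prompt})
    reply = respond(prompt)
    session_state['messages'].append({"role": "assistant", "content": reply})
    return reply

handle_prompt("What's 2 + 2?", lambda p: "4")
print([m["role"] for m in session_state['messages']])  # ['assistant', 'user', 'assistant']
```

In the real app, `respond` is `st.session_state.agent_system.chat`, and Streamlit re-renders the full message list on every rerun, which is why the history must live in session state rather than a local variable.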

def setup_ngrok_auth(auth_token):
   """Setup ngrok authentication"""
   try:
       from pyngrok import ngrok, conf
      
       conf.get_default().auth_token = auth_token
      
       try:
           tunnels = ngrok.get_tunnels()
           print("✅ Ngrok authentication successful!")
           return True
       except Exception as e:
           print(f"❌ Ngrok authentication failed: {e}")
           return False
          
   except ImportError:
       print("❌ pyngrok not installed. Installing...")
       import subprocess
        subprocess.run(['pip', 'install', 'pyngrok'], check=True)
       return setup_ngrok_auth(auth_token)


def get_ngrok_token_instructions():
   """Provide instructions for getting ngrok token"""
   return """
🔧 NGROK AUTHENTICATION SETUP:


1. Sign up for an ngrok account:
  - Visit: https://dashboard.ngrok.com/signup
  - Create a free account


2. Get your authentication token:
  - Go to: https://dashboard.ngrok.com/get-started/your-authtoken
  - Copy your authtoken


3. Replace 'your-ngrok-auth-token-here' in the code with your actual token


4. Alternative methods if ngrok fails:
  - Use Google Colab's built-in public URL feature
  - Use localtunnel: !npx localtunnel --port 8501
  - Use serveo.net: !ssh -R 80:localhost:8501 serveo.net
"""

Here, we set up a helper function to authenticate with ngrok, which lets us expose our local Streamlit application to the internet. We use the pyngrok library to configure the authentication token and verify the connection. If the token is missing or invalid, we print detailed instructions on how to obtain one and suggest alternative tunneling methods, such as localtunnel or serveo, so we can host and share our application from environments like Google Colab.
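The fallback strategy here (try ngrok, then localtunnel, then serveo) is a general pattern: try each method in order and keep the first that succeeds. A minimal sketch, with stub functions standing in for the real tunnel launchers and a placeholder URL:

```python
# Sketch of the tunnelling fallback strategy: try each method in order and
# return the first that succeeds. The method stubs and URL are illustrative.
def first_successful(methods):
    for name, start in methods:
        try:
            return name, start()
        except Exception:
            continue  # this method failed; fall through to the next one
    return None, None

def failing_ngrok():
    raise RuntimeError("auth token missing")

def working_localtunnel():
    return "https://example.loca.lt"  # placeholder public URL

name, url = first_successful([
    ("ngrok", failing_ngrok),
    ("localtunnel", working_localtunnel),
])
print(name, url)  # localtunnel https://example.loca.lt
```

Structuring the fallbacks as a data-driven list makes it easy to add or reorder tunnel providers without touching the control flow.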

def main():
   """Main function to run the application"""
   try:
       create_streamlit_app()
   except Exception as e:
       st.error(f"Application error: {str(e)}")
       st.info("Please check your API key and try refreshing the page")

The main() function acts as the entry point of our Streamlit application. We simply call create_streamlit_app() to launch the complete interface. If something goes wrong, such as a missing API key or a failed tool initialization, we catch the error gracefully and display a helpful message, ensuring the user knows how to recover and keep using the application.

def run_in_colab():
   """Run the application in Google Colab with proper ngrok setup"""
  
   print("🚀 Starting Advanced LangChain Agent Setup...")
  
   if NGROK_AUTH_TOKEN == "your-ngrok-auth-token-here":
       print("⚠️  NGROK_AUTH_TOKEN not configured!")
       print(get_ngrok_token_instructions())
      
       print("🔄 Attempting alternative tunnel methods...")
       try_alternative_tunnels()
       return
  
   print("📦 Installing required packages...")
   import subprocess
  
    packages = [
        'streamlit',
        'langchain',
        'langchain-google-genai',
        'langchain-community',
        'wikipedia',
        'duckduckgo-search',
        'pyngrok'
    ]
  
   for package in packages:
       try:
            subprocess.run(['pip', 'install', package], check=True, capture_output=True)
           print(f"✅ {package} installed")
       except subprocess.CalledProcessError:
           print(f"⚠️  Failed to install {package}")
  
    app_content = '''
import streamlit as st
import os
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import Tool, WikipediaQueryRun, DuckDuckGoSearchRun
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import PromptTemplate
from langchain.callbacks.streamlit import StreamlitCallbackHandler
from langchain_community.utilities import WikipediaAPIWrapper, DuckDuckGoSearchAPIWrapper
from datetime import datetime


# Configuration - Replace with your actual keys
GOOGLE_API_KEY = "''' + GOOGLE_API_KEY + '''"
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY


class InnovativeAgentTools:
   @staticmethod
   def get_calculator_tool():
       def calculate(expression: str) -> str:
           try:
               allowed_chars = set('0123456789+-*/.() ')
               if all(c in allowed_chars for c in expression):
                   result = eval(expression)
                   return f"Result: {result}"
               else:
                   return "Error: Invalid mathematical expression"
           except Exception as e:
               return f"Calculation error: {str(e)}"
      
       return Tool(name="Calculator", func=calculate,
                  description="Calculate mathematical expressions. Input should be a valid math expression.")
  
    @staticmethod
    def get_memory_tool(memory_store):
        def save_memory(key_value: str) -> str:
            try:
                key, value = key_value.split(":", 1)
                memory_store[key.strip()] = value.strip()
                return f"Saved '{key.strip()}' to memory"
            except ValueError:
                return "Error: Use format 'key: value'"

        def recall_memory(key: str) -> str:
            return memory_store.get(key.strip(), f"No memory found for '{key}'")

        return [
            Tool(name="SaveMemory", func=save_memory, description="Save information to memory. Format: 'key: value'"),
            Tool(name="RecallMemory", func=recall_memory, description="Recall saved information. Input: key to recall")
        ]
  
   @staticmethod
   def get_datetime_tool():
       def get_current_datetime(format_type: str = "full") -> str:
           now = datetime.now()
           if format_type == "date":
               return now.strftime("%Y-%m-%d")
           elif format_type == "time":
               return now.strftime("%H:%M:%S")
           else:
               return now.strftime("%Y-%m-%d %H:%M:%S")
      
       return Tool(name="DateTime", func=get_current_datetime,
                  description="Get current date/time. Options: 'date', 'time', or 'full'")


class MultiAgentSystem:
   def __init__(self, api_key: str):
       self.llm = ChatGoogleGenerativeAI(
           model="gemini-pro",
           google_api_key=api_key,
           temperature=0.7,
           convert_system_message_to_human=True
       )
       self.memory_store = {}
       self.conversation_memory = ConversationBufferWindowMemory(
           memory_key="chat_history", k=10, return_messages=True
       )
       self.tools = self._initialize_tools()
       self.agent = self._create_agent()
  
    def _initialize_tools(self):
        tools = []
        try:
            tools.extend([
                DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper()),
                WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
            ])
        except Exception as e:
            st.warning(f"Search tools may have limited functionality: {e}")

        tools.append(InnovativeAgentTools.get_calculator_tool())
        tools.append(InnovativeAgentTools.get_datetime_tool())
        tools.extend(InnovativeAgentTools.get_memory_tool(self.memory_store))
        return tools
  
   def _create_agent(self):
       prompt = PromptTemplate.from_template("""
🤖 You are an advanced AI assistant with access to multiple tools and persistent memory.


AVAILABLE TOOLS:
{tools}


TOOL USAGE FORMAT:
- Think step by step about what you need to do
- Use Action: tool_name
- Use Action Input: your input
- Wait for Observation
- Continue until you have a final answer


CONVERSATION HISTORY:
{chat_history}


CURRENT QUESTION: {input}


REASONING PROCESS:
{agent_scratchpad}


Begin your response with your thought process, then take action if needed.
""")
      
       agent = create_react_agent(self.llm, self.tools, prompt)
       return AgentExecutor(agent=agent, tools=self.tools, memory=self.conversation_memory,
                          verbose=True, handle_parsing_errors=True, max_iterations=5)
  
    def chat(self, message: str, callback_handler=None):
        try:
            if callback_handler:
                response = self.agent.invoke({"input": message}, {"callbacks": [callback_handler]})
            else:
                response = self.agent.invoke({"input": message})
            return response["output"]
        except Exception as e:
            return f"Error processing request: {str(e)}"


# Streamlit App
st.set_page_config(page_title="🚀 Advanced LangChain Agent", page_icon="🤖", layout="wide")


st.markdown("""

""", unsafe_allow_html=True)


st.markdown("""
Powered by LangChain + Gemini API
""", unsafe_allow_html=True)

with st.sidebar:
    st.header("🔧 Configuration")
    api_key = st.text_input("🔑 Google AI API Key", type="password", value=GOOGLE_API_KEY)
    if not api_key:
        st.error("Please enter your Google AI API key")
        st.stop()
    st.success("✅ API Key configured")

    st.header("🤖 Agent Capabilities")
    st.markdown("- 🔍 Web Search\\n- 📚 Wikipedia\\n- 🧮 Calculator\\n- 🧠 Memory\\n- 📅 Date/Time")

    if 'agent_system' in st.session_state and st.session_state.agent_system.memory_store:
        st.header("🧠 Memory Store")
        for key, value in st.session_state.agent_system.memory_store.items():
            st.markdown(f"{key}: {value}")

if 'agent_system' not in st.session_state:
    with st.spinner("🔄 Initializing Agent..."):
        st.session_state.agent_system = MultiAgentSystem(api_key)
        st.success("✅ Agent Ready!")

if 'messages' not in st.session_state:
    st.session_state.messages = [{
        "role": "assistant",
        "content": "🤖 Hello! I'm your advanced AI assistant. I can search, calculate, remember information, and more! Try asking me to: calculate something, search for information, or remember a fact about you."
    }]

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt := st.chat_input("Ask me anything..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    with st.chat_message("assistant"):
        callback_handler = StreamlitCallbackHandler(st.container())
        with st.spinner("🤔 Thinking..."):
            response = st.session_state.agent_system.chat(prompt, callback_handler)
            st.markdown(response)
            st.session_state.messages.append({"role": "assistant", "content": response})

# Example buttons
st.header("💡 Try These Examples")
col1, col2, col3 = st.columns(3)
with col1:
    if st.button("🧮 Calculate 15 * 8 + 32"):
        st.rerun()
with col2:
    if st.button("🔍 Search AI news"):
        st.rerun()
with col3:
    if st.button("🧠 Remember my name is Alex"):
        st.rerun()
'''

    with open('streamlit_app.py', 'w') as f:
        f.write(app_content)

    print("✅ Streamlit app file created successfully!")

    if setup_ngrok_auth(NGROK_AUTH_TOKEN):
        start_streamlit_with_ngrok()
    else:
        print("❌ Ngrok authentication failed. Trying alternative methods...")
        try_alternative_tunnels()

In the run_in_colab() function, we make it easy to deploy the Streamlit application directly from a Google Colab environment. We start by installing all the required packages, then dynamically generate and write the complete Streamlit application code to a streamlit_app.py file. We check for a valid ngrok token to enable public access to the app from Colab, and if it is missing or invalid, we walk through fallback tunneling options. This setup lets us interact with our AI agent from anywhere, all within a few Colab cells.
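The generate-and-write step is simple but worth sanity-checking: the app source is assembled as one string and written to disk before `streamlit run` is invoked on it. A minimal round-trip sketch, using a temp directory and a trivial stand-in app source:

```python
import os
import tempfile

# Minimal sketch of the generate-and-write step in run_in_colab(): the app
# source is assembled as a string and written to disk, then later launched
# with `streamlit run`. Here we verify the round trip in a temp directory.
app_source = "import streamlit as st\nst.title('Demo')\n"

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "streamlit_app.py")
    with open(path, "w") as f:
        f.write(app_source)
    with open(path) as f:
        written = f.read()

print(written == app_source)  # True: the file matches what we generated
```

One caveat of this approach in the real function: because the API key is interpolated into the string, the generated file contains the key in plain text, so it should not be committed or shared.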

def start_streamlit_with_ngrok():
   """Start Streamlit with ngrok tunnel"""
   import subprocess
   import threading
   from pyngrok import ngrok
  
   def start_streamlit():
        subprocess.run(['streamlit', 'run', 'streamlit_app.py', '--server.port=8501', '--server.headless=true'])
  
   print("🚀 Starting Streamlit server...")
   thread = threading.Thread(target=start_streamlit)
   thread.daemon = True
   thread.start()
  
   time.sleep(5)
  
   try:
       print("🌐 Creating ngrok tunnel...")
       public_url = ngrok.connect(8501)
       print(f"🔗 SUCCESS! Access your app at: {public_url}")
       print("✨ Your Advanced LangChain Agent is now running publicly!")
       print("📱 You can share this URL with others!")
      
       print("⏳ Keeping tunnel alive... Press Ctrl+C to stop")
       try:
           ngrok_process = ngrok.get_ngrok_process()
           ngrok_process.proc.wait()
       except KeyboardInterrupt:
           print("👋 Shutting down...")
           ngrok.kill()
          
   except Exception as e:
       print(f"❌ Ngrok tunnel failed: {e}")
       try_alternative_tunnels()


def try_alternative_tunnels():
   """Try alternative tunneling methods"""
   print("🔄 Trying alternative tunnel methods...")
  
   import subprocess
   import threading
  
   def start_streamlit():
        subprocess.run(['streamlit', 'run', 'streamlit_app.py', '--server.port=8501', '--server.headless=true'])
  
   thread = threading.Thread(target=start_streamlit)
   thread.daemon = True
   thread.start()
  
   time.sleep(3)
  
   print("🌐 Streamlit is running on http://localhost:8501")
   print("\n📋 ALTERNATIVE TUNNEL OPTIONS:")
   print("1. localtunnel: Run this in a new cell:")
   print("   !npx localtunnel --port 8501")
   print("\n2. serveo.net: Run this in a new cell:")
   print("   !ssh -R 80:localhost:8501 serveo.net")
   print("\n3. Colab public URL (if available):")
   print("   Use the 'Public URL' button in Colab's interface")
  
   try:
       while True:
           time.sleep(60)
   except KeyboardInterrupt:
       print("👋 Shutting down...")


if __name__ == "__main__":
   try:
       get_ipython()
       print("🚀 Google Colab detected - starting setup...")
       run_in_colab()
   except NameError:
       main()

In this final part, we set up the execution logic so the application can run either in a local environment or inside Google Colab. The start_streamlit_with_ngrok() function launches the Streamlit server in the background and uses ngrok to expose it publicly, making it easy to access and share. If ngrok fails, the try_alternative_tunnels() function kicks in with fallback options such as localtunnel and serveo. With the __main__ block, we automatically detect whether we are in Colab and launch the appropriate setup, making the whole deployment process fluid, flexible, and shareable from anywhere.
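The Colab detection trick relies on `get_ipython()` being defined only inside IPython-based environments (Colab, Jupyter). Wrapped as a reusable predicate, the idea looks like this:

```python
# Sketch of the notebook detection used in the __main__ block above:
# get_ipython exists only inside IPython-based environments (Colab, Jupyter),
# so referencing it raises NameError in a plain Python script.
def running_in_ipython() -> bool:
    try:
        get_ipython  # noqa: F821 - defined only inside IPython
        return True
    except NameError:
        return False

mode = "notebook setup" if running_in_ipython() else "plain script"
print(mode)
```

Note that this detects any IPython environment, not Colab specifically; a stricter check could inspect whether the `google.colab` module is importable.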

In conclusion, we have a fully functional AI agent running inside a sleek Streamlit interface, capable of answering queries, remembering user input, and even sharing its services publicly via ngrok. We have seen how easily Streamlit lets us integrate advanced AI features into an engaging, user-friendly application. From here, we can extend the agent's tools, connect it to larger workflows, or deploy it as part of our own smart applications. With Streamlit as the front end and LangChain powering the logic, we have built a solid foundation for the next generation of interactive AI experiences.




Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent venture is the launch of an artificial intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.
