A Step-by-Step Coding Guide to Integrate Dappier AI's Real-Time Search and Recommendation Tools with the OpenAI Chat API

by Brenden Burgess


In this tutorial, we will learn how to harness the power of Dappier AI, a suite of real-time search and recommendation tools, to enhance our conversational applications. By combining Dappier's DappierRealTimeSearchTool with its DappierAIRecommendationTool, we can query the latest information from across the web and surface personalized article suggestions from custom data models. We guide you step by step through setting up our Google Colab environment, installing dependencies, securely loading API keys, and initializing each Dappier module. We will then integrate these tools with an OpenAI chat model (e.g., GPT-3.5-Turbo), build a composable prompt chain, and run end-to-end queries, all within nine concise notebook cells. Whether we need minute-by-minute news retrieval or AI-driven content curation, this tutorial provides a flexible framework for building intelligent chat experiences.

!pip install -qU langchain-dappier langchain langchain-openai langchain-community langchain-core openai

We bootstrap our Colab environment by installing the core LangChain libraries, both the Dappier extensions and the community integrations, alongside the official OpenAI client. With these packages in place, we will have seamless access to Dappier's real-time search and recommendation tools, the latest LangChain runtime, and the OpenAI API, all within a single environment.

import os
from getpass import getpass


os.environ("DAPPIER_API_KEY") = getpass("Enter our Dappier API key: ")


os.environ("OPENAI_API_KEY") = getpass("Enter our OpenAI API key: ")

We securely capture our Dappier and OpenAI API credentials at runtime, avoiding hard-coding sensitive keys in our notebook. Using getpass, the prompts ensure our inputs remain hidden, and setting them as environment variables makes them available to all subsequent cells without exposing them in logs.
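As an optional sanity check (not part of the original notebook cells), we can confirm both variables are actually set before moving on:

# Optional: confirm both keys are present and non-empty
for key in ("DAPPIER_API_KEY", "OPENAI_API_KEY"):
    assert os.environ.get(key), f"{key} is not set"
print("✅ API keys loaded")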

from langchain_dappier import DappierRealTimeSearchTool


search_tool = DappierRealTimeSearchTool()
print("Real-time search tool ready:", search_tool)

We import Dappier's real-time search module and create an instance of DappierRealTimeSearchTool, enabling our notebook to execute live web queries. The print statement confirms that the tool initialized successfully and is ready to handle search requests.
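To understand how the tool will be presented to the LLM later on, we can inspect its metadata via the standard LangChain tool attributes, a quick optional check:

# Optional: inspect the metadata the LLM will see for this tool
print("Name:", search_tool.name)
print("Description:", search_tool.description)
print("Arguments schema:", search_tool.args)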

from langchain_dappier import DappierAIRecommendationTool


recommendation_tool = DappierAIRecommendationTool(
    data_model_id="dm_01j0pb465keqmatq9k83dthx34",
    similarity_top_k=3,
    ref="sportsnaut.com",
    num_articles_ref=2,
    search_algorithm="most_recent",
)
print("Recommendation tool ready:", recommendation_tool)

We configure the Dappier-powered recommendation engine by specifying our custom data model, the number of similar articles to retrieve, and the source domain for context. The DappierAIRecommendationTool instance will now use the "most_recent" algorithm to pull in the top-k relevant articles (here, two from our specified reference domain), ready for query-driven content suggestions.
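If recency is not the right ranking signal for a given use case, the search_algorithm parameter can be swapped out. The sketch below is illustrative only: the "semantic" algorithm name and the parameter values are assumptions to be verified against Dappier's current documentation for your data model.

# A hypothetical variant: rank by semantic similarity instead of recency.
# The "semantic" algorithm name and values here are illustrative; check
# Dappier's docs for the options supported by your data model.
semantic_tool = DappierAIRecommendationTool(
    data_model_id="dm_01j0pb465keqmatq9k83dthx34",
    similarity_top_k=5,
    ref="sportsnaut.com",
    num_articles_ref=2,
    search_algorithm="semantic",
)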

from langchain.chat_models import init_chat_model


llm = init_chat_model(
    model="gpt-3.5-turbo",
    model_provider="openai",
    temperature=0,
)
llm_with_tools = llm.bind_tools([search_tool])
print("✅ llm_with_tools ready")

We instantiate an OpenAI chat model using GPT-3.5-Turbo with a temperature of 0 to ensure consistent responses, then bind the previously initialized search tool so that the LLM can invoke real-time search. The final print statement confirms that our LLM is ready to call Dappier's tools within our conversation flows.
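Because the search tool is now bound, a time-sensitive question should make the model emit a tool call rather than answer from its training data. An optional way to see this in action (the query is just an example):

# Optional: ask a time-sensitive question and inspect the requested tool calls
msg = llm_with_tools.invoke("What are today's top tech headlines?")
print(msg.tool_calls)  # expect one entry naming the Dappier search tool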

import datetime
from langchain_core.prompts import ChatPromptTemplate


today = datetime.datetime.today().strftime("%Y-%m-%d")
prompt = ChatPromptTemplate([
    ("system", f"You are a helpful assistant. Today is {today}."),
    ("human", "{user_input}"),
    ("placeholder", "{messages}"),
])


llm_chain = prompt | llm_with_tools
print("✅ llm_chain built")

We build the conversational "chain" by first constructing a ChatPromptTemplate that injects the current date into a system prompt and defines placeholders for the user's input and prior messages. By piping the template into our llm_with_tools with the | operator, we create an llm_chain that automatically formats prompts, invokes the LLM (with real-time search capability), and handles responses in a seamless workflow. The final print confirms that the chain is ready to drive end-to-end interactions.
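To verify the template before wiring in tool execution, we can render it directly, an optional check with a sample input (the "placeholder" slot is optional, so omitting "messages" is fine):

# Optional: render the prompt to verify the system date and placeholders
print(prompt.invoke({"user_input": "Hello!"}).to_messages())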

from langchain_core.runnables import RunnableConfig, chain


@chain
def tool_chain(user_input: str, config: RunnableConfig):
    ai_msg = llm_chain.invoke({"user_input": user_input}, config=config)
    tool_msgs = search_tool.batch(ai_msg.tool_calls, config=config)
    return llm_chain.invoke(
        {"user_input": user_input, "messages": (ai_msg, *tool_msgs)},
        config=config
    )


print("✅ tool_chain defined")

We define an end-to-end tool_chain that first sends our prompt to the LLM (capturing any requested tool calls), then executes those calls via search_tool.batch, and finally feeds both the initial AI message and the tool outputs back into the LLM for a coherent response. The @chain decorator turns this into a single runnable pipeline, letting us simply call tool_chain.invoke(...) to handle both reasoning and search in one step.
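Note that if the model answers directly without requesting a search, ai_msg.tool_calls is empty and the second LLM round trip is wasted. A defensive variant, a sketch rather than part of the original notebook, could short-circuit in that case:

@chain
def tool_chain_safe(user_input: str, config: RunnableConfig):
    ai_msg = llm_chain.invoke({"user_input": user_input}, config=config)
    if not ai_msg.tool_calls:  # model answered directly; no search needed
        return ai_msg
    # Execute the requested searches, then let the LLM synthesize the results
    tool_msgs = search_tool.batch(ai_msg.tool_calls, config=config)
    return llm_chain.invoke(
        {"user_input": user_input, "messages": [ai_msg, *tool_msgs]},
        config=config,
    )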

res = search_tool.invoke({"query": "What happened at the last Wrestlemania"})
print("🔍 Search:", res)

We demonstrate a direct query to Dappier's real-time search engine, asking "What happened at the last Wrestlemania," and immediately print the structured result. It shows how easily we can leverage search_tool.invoke to fetch up-to-the-minute information and inspect the raw response in our notebook.

rec = recommendation_tool.invoke({"query": "latest sports news"})
print("📄 Recommendation:", rec)


out = tool_chain.invoke("Who won the last Nobel Prize?")
print("🤖 Chain output:", out)

Finally, we showcase both our recommendation and full-chain workflows in action. First, we call recommendation_tool.invoke with "latest sports news" to fetch relevant articles from our custom data model, then print the suggestions. Next, we run tool_chain.invoke("Who won the last Nobel Prize?") to perform an end-to-end LLM query combined with real-time search, printing the AI's synthesized answer that integrates live data.

In conclusion, we now have a robust baseline for integrating Dappier's capabilities into any conversational workflow. We have seen how Dappier's real-time search lets our LLM access fresh facts, while the recommendation tool lets us deliver contextually relevant insights from proprietary data sources. From here, we can customize the search parameters (e.g., refining query filters) or tune the recommendation settings (e.g., adjusting similarity thresholds and reference domains) to fit our domain.





