This tutorial shows how to implement the self-refine technique using large language models (LLMs) with Mirascope, a powerful framework for building structured prompt workflows. Self-refine is a prompt engineering strategy in which the model evaluates its own output, generates feedback, and improves its response based on that feedback. This refinement loop can be repeated several times to gradually improve the quality and accuracy of the final answer.
The self-refine approach is particularly effective for tasks involving reasoning, code generation, and content creation, where incremental improvements lead to noticeably better results. Check out the full codes here.
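Before the Mirascope-specific code, here is a rough, framework-agnostic sketch of the loop; the llm callable and the function name below are hypothetical placeholders, not part of Mirascope's API:

def self_refine_sketch(llm, query: str, depth: int = 1) -> str:
    # llm is any function that maps a prompt string to a model completion (hypothetical)
    answer = llm(f"Answer the query:\n{query}")  # 1. initial draft
    for _ in range(depth):
        feedback = llm(f"Critique this answer to the query '{query}':\n{answer}")  # 2. self-feedback
        answer = llm(
            f"Query: {query}\nPrevious answer: {answer}\n"
            f"Feedback: {feedback}\nWrite an improved answer."
        )  # 3. refine using the feedback
    return answer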
Installing the Dependencies
!pip install "mirascope[openai]"
OpenAI API Key
To get an OpenAI API key, visit https://platform.openai.com/settings/organization/api-keys and generate a new key. If you are a new user, you may need to add billing details and make a minimum payment of $5 to activate API access. Check out the full codes here.
import os
from getpass import getpass
os.environ["OPENAI_API_KEY"] = getpass('Enter OpenAI API Key: ')
Basic Self-Refine Implementation
We start by implementing the self-refine technique using Mirascope's @openai.call and @prompt_template decorators. The process begins by generating an initial response to a user query. This response is then evaluated by the model itself, which provides constructive feedback. Finally, the model uses this feedback to generate an improved response. The self_refine function lets us repeat this refinement process for a specified number of iterations, improving the quality of the output at each cycle. Check out the full codes here.
from mirascope.core import openai, prompt_template
from mirascope.core.openai import OpenAICallResponse
@openai.call(model="gpt-4o-mini")
def call(query: str) -> str:
    # Generate the initial response to the user query
    return query
@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    Here is a query and a response to the query. Give feedback about the answer,
    noting what was correct and incorrect.
    Query:
    {query}
    Response:
    {response}
    """
)
def evaluate_response(query: str, response: OpenAICallResponse): ...
@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    For this query:
    {query}
    The following response was given:
    {response}
    Here is some feedback about the response:
    {feedback}
    Consider the feedback to generate a new response to the query.
    """
)
def generate_new_response(
    query: str, response: OpenAICallResponse
) -> openai.OpenAIDynamicConfig:
    # Get feedback on the previous response and inject it into the prompt as a computed field
    feedback = evaluate_response(query, response)
    return {"computed_fields": {"feedback": feedback}}
def self_refine(query: str, depth: int) -> str:
    # Produce an initial answer, then refine it `depth` times using self-feedback
    response = call(query)
    for _ in range(depth):
        response = generate_new_response(query, response)
    return response.content
query = "A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?"
print(self_refine(query, 1))
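To inspect what the model criticizes at each step, a small variant of the loop (an illustrative sketch, not part of the original code) can print the intermediate feedback before refining; note that generate_new_response also computes its own feedback internally, so this variant spends one extra call per iteration:

def self_refine_verbose(query: str, depth: int) -> str:
    # Same loop as self_refine, but prints the feedback produced at each iteration
    response = call(query)
    for i in range(depth):
        feedback = evaluate_response(query, response)
        print(f"--- Feedback (iteration {i + 1}) ---\n{feedback.content}\n")
        response = generate_new_response(query, response)
    return response.content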
Enhanced Self-Refine with Response Model
In this enhanced version, we define a structured MathSolution response model using Pydantic to capture both the solution steps and the final numerical answer. The enhanced_generate_new_response function refines the output by incorporating model-generated feedback and shaping the improved response into a well-defined schema. This approach ensures clarity, consistency, and better downstream usability of the refined answer, especially for tasks like mathematical problem solving. Check out the full codes here.
from pydantic import BaseModel, Field

class MathSolution(BaseModel):
    steps: list[str] = Field(..., description="The steps taken to solve the problem")
    final_answer: float = Field(..., description="The final numerical answer")
@openai.call(model="gpt-4o-mini", response_model=MathSolution)
@prompt_template(
    """
    For this query:
    {query}
    The following response was given:
    {response}
    Here is some feedback about the response:
    {feedback}
    Consider the feedback to generate a new response to the query.
    Provide the solution steps and the final numerical answer.
    """
)
def enhanced_generate_new_response(
    query: str, response: OpenAICallResponse
) -> openai.OpenAIDynamicConfig:
    feedback = evaluate_response(query, response)
    return {"computed_fields": {"feedback": feedback}}
def enhanced_self_refine(query: str, depth: int) -> MathSolution:
    # Refine the answer `depth` times, parsing each iteration into a structured MathSolution
    response = call(query)
    for _ in range(depth):
        solution = enhanced_generate_new_response(query, response)
        response = f"Steps: {solution.steps}\nFinal Answer: {solution.final_answer}"
    return solution
# Example usage
result = enhanced_self_refine(query, 1)
print(result)
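Because result is a MathSolution instance, its fields can also be printed in a more readable form; this is just one illustrative way to present the structured output:

# Pretty-print the structured solution
for i, step in enumerate(result.steps, start=1):
    print(f"Step {i}: {step}")
print(f"Final answer: {result.final_answer}")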
The enhanced self-refine technique proved effective at accurately solving the given mathematical problem:
"A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?"
With a single refinement iteration, the model delivered a logically sound, step-by-step solution arriving at the correct answer of 60 km/h (a quick numeric check is sketched after the list below). This illustrates several key advantages of the self-refine approach:
- Improved accuracy through iterative, feedback-driven refinement.
- Clearer reasoning steps, covering variable setup, equation formulation, and application of the quadratic formula.
- Greater transparency, which helps users understand and trust the solution.
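As a quick sanity check on the reported answer (not part of the original code): the problem reduces to 120/v - 120/(v + 20) = 0.5, which simplifies to v^2 + 20v - 4800 = 0 and can be solved directly:

# Sanity check: solve v^2 + 20v - 4800 = 0 for the positive root
import math

a, b, c = 1, 20, -4800
v = (-b + math.sqrt(b**2 - 4 * a * c)) / (2 * a)
print(v)  # 60.0 km/h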
In broader applications, this technique holds promise for tasks that demand accuracy, structure, and iterative improvement, ranging from technical problem solving to creative and professional writing. However, implementers should stay mindful of the trade-off in computational cost and tune the refinement depth and feedback prompts to match their specific use case.
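To make the cost trade-off concrete: in this implementation, each refinement iteration issues one feedback call and one regeneration call, so a run costs roughly 1 + 2 × depth LLM calls (a rough estimate; actual token usage depends on response length):

# Rough call-count estimate for a given refinement depth (illustrative only)
def estimated_llm_calls(depth: int) -> int:
    return 1 + 2 * depth  # 1 initial answer + (feedback + regeneration) per iteration

for d in (1, 2, 3):
    print(d, estimated_llm_calls(d))  # 1 -> 3, 2 -> 5, 3 -> 7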
Check out the full codes here. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don't forget to join our 100k+ ML SubReddit and subscribe to our Newsletter.
