In this tutorial, we guide users through building a robust, production-ready SDK. It starts by showing how to install and configure essential asynchronous HTTP libraries (aiohttp, nest-asyncio). It then walks through the implementation of core components, including structured response objects, token-bucket rate limiting, in-memory caching with TTL, and a clean, dataclass-driven design. We will see how to wrap these parts in an AdvancedSDK class that supports async context management, automatic rate-limit waiting behavior, JSON/auth header injection, and convenient HTTP-verb methods. Along the way, a demo harness against JSONPlaceholder illustrates cache effectiveness, batch retrieval under rate limits, and error handling, and even shows how to extend the SDK via a fluent "builder" pattern for custom configuration.
!pip install aiohttp nest-asyncio

import asyncio
import aiohttp
import time
import json
from typing import Dict, List, Optional, Any, Union
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta
import hashlib
import logging
We install the dependencies and set up asynchronous execution by importing asyncio and aiohttp, alongside utilities for timing, JSON handling, dataclass-based modeling, caching (via hashlib and datetime), and structured logging. The !pip install aiohttp nest-asyncio line ensures the notebook can run a nested event loop transparently in Colab, enabling asynchronous HTTP requests and rate-limited request flows.
@dataclass
class APIResponse:
    """Structured response object"""
    data: Any
    status_code: int
    headers: Dict[str, str]
    timestamp: datetime

    def to_dict(self) -> Dict:
        return asdict(self)
The APIResponse dataclass encapsulates the details of an HTTP response (the payload in data, the status code, the headers, and the retrieval timestamp) in a single typed object. The to_dict() helper converts the instance into a plain dictionary for easy logging, serialization, or downstream processing.
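To make this concrete, here is a small standalone sketch (the dataclass is re-declared so the snippet runs on its own, and the sample values are illustrative):

```python
from dataclasses import dataclass, asdict
from datetime import datetime
from typing import Any, Dict

@dataclass
class APIResponse:
    """Structured response object (re-declared so the snippet runs standalone)"""
    data: Any
    status_code: int
    headers: Dict[str, str]
    timestamp: datetime

    def to_dict(self) -> Dict:
        return asdict(self)

# Build a response by hand and flatten it for logging/serialization.
resp = APIResponse(
    data={"id": 1, "title": "hello"},
    status_code=200,
    headers={"Content-Type": "application/json"},
    timestamp=datetime(2024, 1, 1, 12, 0, 0),
)
flat = resp.to_dict()  # a plain dict, ready for json/logging pipelines
```

Because asdict() recurses into nested structures, the resulting dictionary contains only plain Python types plus the datetime object, which downstream code can serialize as needed.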
class RateLimiter:
    """Token bucket rate limiter"""
    def __init__(self, max_calls: int = 100, time_window: int = 60):
        self.max_calls = max_calls
        self.time_window = time_window
        self.calls = []

    def can_proceed(self) -> bool:
        now = time.time()
        self.calls = [call_time for call_time in self.calls if now - call_time < self.time_window]
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

    def wait_time(self) -> float:
        if not self.calls:
            return 0
        return max(0, self.time_window - (time.time() - self.calls[0]))
The RateLimiter class enforces a simple token-bucket policy by tracking the timestamps of recent calls and allowing at most max_calls within each time_window. When the limit is reached, can_proceed() returns False and wait_time() computes how long to pause before making the next request.
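A quick standalone exercise of this policy (the class is re-declared so the snippet runs on its own; the limit of 3 calls is chosen only for demonstration):

```python
import time

class RateLimiter:
    """Token bucket rate limiter (re-declared so the snippet runs standalone)."""
    def __init__(self, max_calls: int = 100, time_window: int = 60):
        self.max_calls = max_calls
        self.time_window = time_window
        self.calls = []

    def can_proceed(self) -> bool:
        now = time.time()
        # Drop timestamps that have aged out of the window.
        self.calls = [t for t in self.calls if now - t < self.time_window]
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

    def wait_time(self) -> float:
        if not self.calls:
            return 0
        return max(0, self.time_window - (time.time() - self.calls[0]))

# Allow only 3 calls per 60-second window.
limiter = RateLimiter(max_calls=3, time_window=60)
allowed = [limiter.can_proceed() for _ in range(5)]
# allowed == [True, True, True, False, False]
```

After the third call the bucket is full, so subsequent calls are denied and wait_time() reports roughly how long remains until the oldest timestamp falls out of the window.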
class Cache:
    """Simple in-memory cache with TTL"""
    def __init__(self, default_ttl: int = 300):
        self.cache = {}
        self.default_ttl = default_ttl

    def _generate_key(self, method: str, url: str, params: Dict = None) -> str:
        key_data = f"{method}:{url}:{json.dumps(params or {}, sort_keys=True)}"
        return hashlib.md5(key_data.encode()).hexdigest()

    def get(self, method: str, url: str, params: Dict = None) -> Optional[APIResponse]:
        key = self._generate_key(method, url, params)
        if key in self.cache:
            response, expiry = self.cache[key]
            if datetime.now() < expiry:
                return response
            del self.cache[key]
        return None

    def set(self, method: str, url: str, response: APIResponse, params: Dict = None, ttl: int = None):
        key = self._generate_key(method, url, params)
        expiry = datetime.now() + timedelta(seconds=ttl or self.default_ttl)
        self.cache[key] = (response, expiry)
The Cache class provides a lightweight in-memory TTL cache for API responses by hashing the request signature (method, URL, params) into a unique key. It returns valid cached APIResponse objects before they expire and automatically evicts stale entries once their time has passed.
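A standalone sketch of the hit/miss/expiry behavior (the class is re-declared to run on its own, and for brevity we cache plain strings rather than APIResponse objects; the negative TTL is just a trick to force immediate expiry):

```python
import hashlib
import json
from datetime import datetime, timedelta
from typing import Dict, Optional

class Cache:
    """In-memory TTL cache (re-declared so the snippet runs standalone)."""
    def __init__(self, default_ttl: int = 300):
        self.cache = {}
        self.default_ttl = default_ttl

    def _generate_key(self, method: str, url: str, params: Dict = None) -> str:
        # Hash the request signature into a stable key.
        key_data = f"{method}:{url}:{json.dumps(params or {}, sort_keys=True)}"
        return hashlib.md5(key_data.encode()).hexdigest()

    def get(self, method: str, url: str, params: Dict = None):
        key = self._generate_key(method, url, params)
        if key in self.cache:
            response, expiry = self.cache[key]
            if datetime.now() < expiry:
                return response
            del self.cache[key]  # evict the stale entry
        return None

    def set(self, method: str, url: str, response, params: Dict = None, ttl: int = None):
        key = self._generate_key(method, url, params)
        expiry = datetime.now() + timedelta(seconds=ttl or self.default_ttl)
        self.cache[key] = (response, expiry)

cache = Cache()
cache.set("GET", "/posts/1", "payload")
hit = cache.get("GET", "/posts/1")            # fresh entry -> "payload"
miss = cache.get("GET", "/posts/2")           # never stored -> None
cache.set("GET", "/posts/3", "stale", ttl=-1) # already expired on arrival
expired = cache.get("GET", "/posts/3")        # evicted -> None
```

Note that expired entries are deleted lazily, only when a lookup touches them; a long-running process that wants bounded memory would add a periodic sweep.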
class AdvancedSDK:
    """Advanced SDK with modern Python patterns"""
    def __init__(self, base_url: str, api_key: str = None, rate_limit: int = 100):
        self.base_url = base_url.rstrip('/')
        self.api_key = api_key
        self.session = None
        self.rate_limiter = RateLimiter(max_calls=rate_limit)
        self.cache = Cache()
        self.logger = self._setup_logger()

    def _setup_logger(self) -> logging.Logger:
        logger = logging.getLogger(f"SDK-{id(self)}")
        if not logger.handlers:
            handler = logging.StreamHandler()
            formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
            handler.setFormatter(formatter)
            logger.addHandler(handler)
            logger.setLevel(logging.INFO)
        return logger

    async def __aenter__(self):
        """Async context manager entry"""
        self.session = aiohttp.ClientSession()
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        """Async context manager exit"""
        if self.session:
            await self.session.close()

    def _get_headers(self) -> Dict[str, str]:
        headers = {'Content-Type': 'application/json'}
        if self.api_key:
            headers['Authorization'] = f'Bearer {self.api_key}'
        return headers

    async def _make_request(self, method: str, endpoint: str, params: Dict = None,
                            data: Dict = None, use_cache: bool = True) -> APIResponse:
        """Core request method with rate limiting and caching"""
        if use_cache and method.upper() == 'GET':
            cached = self.cache.get(method, endpoint, params)
            if cached:
                self.logger.info(f"Cache hit for {method} {endpoint}")
                return cached

        if not self.rate_limiter.can_proceed():
            wait_time = self.rate_limiter.wait_time()
            self.logger.warning(f"Rate limit hit, waiting {wait_time:.2f}s")
            await asyncio.sleep(wait_time)

        url = f"{self.base_url}/{endpoint.lstrip('/')}"
        try:
            async with self.session.request(
                method=method.upper(),
                url=url,
                params=params,
                json=data,
                headers=self._get_headers()
            ) as resp:
                response_data = await resp.json() if resp.content_type == 'application/json' else await resp.text()
                api_response = APIResponse(
                    data=response_data,
                    status_code=resp.status,
                    headers=dict(resp.headers),
                    timestamp=datetime.now()
                )
                if use_cache and method.upper() == 'GET' and 200 <= resp.status < 300:
                    self.cache.set(method, endpoint, api_response, params)
                self.logger.info(f"{method.upper()} {endpoint} - Status: {resp.status}")
                return api_response
        except Exception as e:
            self.logger.error(f"Request failed: {str(e)}")
            raise

    async def get(self, endpoint: str, params: Dict = None, use_cache: bool = True) -> APIResponse:
        return await self._make_request('GET', endpoint, params=params, use_cache=use_cache)

    async def post(self, endpoint: str, data: Dict = None) -> APIResponse:
        return await self._make_request('POST', endpoint, data=data, use_cache=False)

    async def put(self, endpoint: str, data: Dict = None) -> APIResponse:
        return await self._make_request('PUT', endpoint, data=data, use_cache=False)

    async def delete(self, endpoint: str) -> APIResponse:
        return await self._make_request('DELETE', endpoint, use_cache=False)
The AdvancedSDK class wraps everything in a clean asynchronous client: it manages an aiohttp session via async context managers, injects JSON and auth headers, and coordinates our RateLimiter and Cache under the hood. Its _make_request method centralizes the GET/POST/PUT/DELETE logic, handling cache lookups, rate-limit waits, error logging, and wrapping responses in APIResponse objects, while the get/post/put/delete helpers give us ergonomic, high-level calls.
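The session lifecycle the SDK relies on can be isolated in a stdlib-only sketch; here a hypothetical FakeSession stands in for aiohttp.ClientSession so the snippet runs without network access:

```python
import asyncio

class FakeSession:
    """Hypothetical stand-in for aiohttp.ClientSession."""
    def __init__(self):
        self.closed = False

    async def close(self):
        self.closed = True

class MiniSDK:
    """Mirrors AdvancedSDK's lifecycle: open a session on entry, close it on exit."""
    def __init__(self):
        self.session = None

    async def __aenter__(self):
        self.session = FakeSession()
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        if self.session:
            await self.session.close()

async def main():
    async with MiniSDK() as sdk:
        inside_open = not sdk.session.closed  # session is live inside the block
    return inside_open, sdk.session.closed    # and closed once the block exits

inside_open, closed_after = asyncio.run(main())
```

This is why the tutorial's demo uses `async with AdvancedSDK(...) as sdk:`: the session is guaranteed to be closed even if a request inside the block raises.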
async def demo_sdk():
    """Demonstrate SDK capabilities"""
    print("🚀 Advanced SDK Demo")
    print("=" * 50)

    async with AdvancedSDK("https://jsonplaceholder.typicode.com") as sdk:
        print("\n📥 Testing GET request with caching...")
        response1 = await sdk.get("/posts/1")
        print(f"First request - Status: {response1.status_code}")
        print(f"Title: {response1.data.get('title', 'N/A')}")

        response2 = await sdk.get("/posts/1")
        print(f"Second request (cached) - Status: {response2.status_code}")

        print("\n📤 Testing POST request...")
        new_post = {
            "title": "Advanced SDK Tutorial",
            "body": "This SDK demonstrates modern Python patterns",
            "userId": 1
        }
        post_response = await sdk.post("/posts", data=new_post)
        print(f"POST Status: {post_response.status_code}")
        print(f"Created post ID: {post_response.data.get('id', 'N/A')}")

        print("\n⚡ Testing batch requests with rate limiting...")
        tasks = []
        for i in range(1, 6):
            tasks.append(sdk.get(f"/posts/{i}"))
        results = await asyncio.gather(*tasks)
        print(f"Batch completed: {len(results)} requests")
        for i, result in enumerate(results, 1):
            print(f"  Post {i}: {result.data.get('title', 'N/A')[:30]}...")

        print("\n❌ Testing error handling...")
        try:
            error_response = await sdk.get("/posts/999999")
            print(f"Error response status: {error_response.status_code}")
        except Exception as e:
            print(f"Handled error: {type(e).__name__}")

        print("\n✅ Demo completed successfully!")

async def run_demo():
    """Colab-friendly demo runner"""
    await demo_sdk()
The demo_sdk coroutine walks through the SDK's main features against the JSONPlaceholder API: issuing a cached GET request, making a POST, running a rate-limited batch of requests, and handling errors, printing status codes and data samples to illustrate each capability. The run_demo helper ensures this demo runs smoothly inside a Colab notebook's existing event loop.
import nest_asyncio
nest_asyncio.apply()

if __name__ == "__main__":
    try:
        asyncio.run(demo_sdk())
    except RuntimeError:
        loop = asyncio.get_event_loop()
        loop.run_until_complete(demo_sdk())

class SDKBuilder:
    """Builder pattern for SDK configuration"""
    def __init__(self, base_url: str):
        self.base_url = base_url
        self.config = {}

    def with_auth(self, api_key: str):
        self.config['api_key'] = api_key
        return self

    def with_rate_limit(self, calls_per_minute: int):
        self.config['rate_limit'] = calls_per_minute
        return self

    def build(self) -> AdvancedSDK:
        return AdvancedSDK(self.base_url, **self.config)
Finally, we apply nest_asyncio to enable nested event loops in Colab, then run the demo via asyncio.run (falling back to manual loop execution if needed). This section also introduces an SDKBuilder class implementing a fluent builder pattern for easily configuring and instantiating AdvancedSDK with custom authentication and rate-limit settings.
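The fluent chaining can be shown without aiohttp by pointing the same builder shape at a hypothetical ClientConfig target (the builder body matches SDKBuilder; ClientConfig and the example URL/token are illustrative):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClientConfig:
    """Hypothetical config target; the document's SDKBuilder builds AdvancedSDK instead."""
    base_url: str
    api_key: Optional[str] = None
    rate_limit: int = 100

class ConfigBuilder:
    """Fluent builder: each with_* method mutates config and returns self for chaining."""
    def __init__(self, base_url: str):
        self.base_url = base_url
        self.config = {}

    def with_auth(self, api_key: str):
        self.config['api_key'] = api_key
        return self

    def with_rate_limit(self, calls_per_minute: int):
        self.config['rate_limit'] = calls_per_minute
        return self

    def build(self) -> ClientConfig:
        # Unpack the accumulated options as keyword arguments.
        return ClientConfig(self.base_url, **self.config)

cfg = (ConfigBuilder("https://api.example.com")
       .with_auth("secret-token")
       .with_rate_limit(50)
       .build())
```

Because every with_* method returns self, callers can set only the options they care about, in any order, and defaults fill in the rest at build() time.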
In conclusion, this SDK tutorial provides a scalable foundation for any REST integration, combining modern Python idioms (dataclasses, async/await, context managers) with practical tooling (a rate limiter, a cache, and structured logging). By adapting the patterns shown here, in particular the separation of concerns between request orchestration, caching, and response modeling, teams can accelerate the development of new API clients while ensuring predictability, observability, and resilience.
Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.
