Building Your First Context-Aware AI App with Context Kitten: A Technical Tutorial

A practical step-by-step tutorial showing developers how to enhance their AI applications with private knowledge and real-time data using Context Kitten's OpenAI-compatible API, complete with Python and TypeScript code examples.

Last week, a developer reached out with a common challenge: their AI application was giving outdated responses about their product documentation. Despite having a great chatbot interface built with OpenAI's API, the responses weren't incorporating their latest product updates. Their frustration was palpable: they needed their AI to understand both their private documentation and current market data.

This is a challenge we've seen repeatedly. Organizations invest significant resources in building AI applications, only to find their AI responses lacking critical context. Today, we'll walk through how to solve this using Context Kitten's API, which seamlessly combines your private data with real-time internet information.

The Challenge of Context-Aware AI

Traditional AI implementations face several limitations:

  1. Limited access to private organizational knowledge
  2. Outdated information due to training cutoff dates
  3. Complex integration requirements across multiple data sources
  4. High costs of retraining models with new data

Solution: Building with Context Kitten

Let's transform a basic OpenAI-powered chatbot into a context-aware AI application. We'll use Python for the backend and TypeScript for the frontend.

Step 1: Setting Up Authentication

First, replace your OpenAI client configuration with Context Kitten:

Before

from openai import OpenAI
client = OpenAI(api_key="your-api-key")

After

from openai import OpenAI
client = OpenAI(
    api_key="your-context-kitten-key",
    base_url="https://api.contextkitten.com/v1"
)
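
To confirm the swap works before adding any documents, you can send a plain chat completion through the new base URL. This is a minimal sketch; "gpt-4o-mini" is a placeholder model name, not one Context Kitten guarantees, so substitute a model your account actually exposes (see the model-listing snippet later in this tutorial).

# Smoke test: a standard chat completion routed through Context Kitten.
# "gpt-4o-mini" is a placeholder; substitute a model your account exposes.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)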

Step 2: Document Upload Implementation

Now, let's add document management to incorporate your private knowledge:

import httpx

async def upload_document(file_path: str):
    # Document upload is a Context Kitten endpoint, so we call it directly over
    # HTTP, assuming the same Bearer-token auth as the chat API.
    async with httpx.AsyncClient() as http:
        with open(file_path, 'rb') as f:
            response = await http.post(
                'https://api.contextkitten.com/v1/documents/upload',
                headers={'Authorization': 'Bearer your-context-kitten-key'},
                files={'file': f}
            )
    response.raise_for_status()
    return response.json()
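
As a quick usage sketch, the snippet below uploads a small batch of files concurrently with asyncio.gather; the file paths are placeholders for your own documentation set.

import asyncio

async def upload_initial_docs():
    # Placeholder paths; replace with your own documentation files.
    paths = ["docs/pricing.md", "docs/changelog.md", "docs/faq.md"]
    results = await asyncio.gather(*(upload_document(path) for path in paths))
    for path, result in zip(paths, results):
        print(f"Uploaded {path}: {result}")

asyncio.run(upload_initial_docs())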

Step 3: Enhanced Chat Completions

Here's how to upgrade your chat interface to include context from both your documents and the internet:

from typing import Dict, List

from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from openai import AsyncOpenAI
from pydantic import BaseModel

class Message(BaseModel):
    role: str
    content: str

class ChatService:
    def __init__(self, api_key: str):
        # Async client, since the websocket handler below awaits completions.
        self.client = AsyncOpenAI(
            api_key=api_key,
            base_url="https://api.contextkitten.com/v1"
        )
        self.messages: List[Dict[str, str]] = []

    async def send_message(self, content: str) -> Dict[str, str]:
        self.messages.append({"role": "user", "content": content})

        response = await self.client.chat.completions.create(
            # Placeholder model name; substitute one your account exposes.
            model="gpt-4o-mini",
            messages=self.messages,
            # search_options is Context Kitten's extension to the chat API,
            # so it is passed through extra_body rather than a named argument.
            extra_body={
                "search_options": {
                    "enable_document_search": True,
                    "enable_web_search": True
                }
            }
        )

        assistant_message = {
            "role": "assistant",
            "content": response.choices[0].message.content
        }
        self.messages.append(assistant_message)
        return assistant_message

# FastAPI implementation
app = FastAPI()
chat_service = ChatService(api_key="your-context-kitten-key")

@app.websocket("/chat")
async def chat_endpoint(websocket: WebSocket):
    await websocket.accept()
    
    try:
        while True:
            content = await websocket.receive_text()
            response = await chat_service.send_message(content)
            await websocket.send_json(response)
    except WebSocketDisconnect:
        pass  # client closed the connection normally
    except Exception as e:
        print(f"Error in chat: {e}")
        await websocket.close()
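
Before wiring up the TypeScript frontend, you can exercise the endpoint with FastAPI's test client. This is a minimal sketch that assumes the code above is importable as a module named main; adjust the import to match your project layout.

from fastapi.testclient import TestClient

from main import app  # "main" is a placeholder module name for the code above

test_client = TestClient(app)

# Open the websocket, send one question, and print the assistant's reply.
with test_client.websocket_connect("/chat") as ws:
    ws.send_text("What are the current Context Kitten pricing tiers?")
    reply = ws.receive_json()
    print(reply["content"])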

The Results: Before and After

Let's compare responses to the question "What are the current Context Kitten pricing tiers?":

Before (OpenAI Only):

We offer Basic, Pro, and Enterprise tiers, but I don't have specific pricing information.

After (With Context Kitten):

We currently offer three tiers:

  1. Free Tier: 1GB storage and $1 in free API credits
  2. Scale ($30/month): 20GB storage with API credits at cost
  3. Enterprise: Custom solutions with flexible storage and advanced security features

Additional storage is available at $0.50/GB for the Scale tier.

Advanced Features

Semantic Search

import httpx

async def search_documents(query: str):
    # /v1/search is Context Kitten's own endpoint (not part of the OpenAI
    # surface), so we call it directly, assuming the same Bearer-token auth.
    async with httpx.AsyncClient() as http:
        response = await http.get(
            'https://api.contextkitten.com/v1/search',
            headers={'Authorization': 'Bearer your-context-kitten-key'},
            params={
                'query': query,
                'enable_document_search': True,
                'enable_web_search': True
            }
        )
    response.raise_for_status()
    return response.json()
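
A small usage example follows. The exact response shape isn't documented here, so this sketch simply pretty-prints whatever the endpoint returns for a sample query.

import asyncio
import json

async def preview_search():
    # Inspect the raw search payload for a sample query.
    results = await search_documents("current pricing tiers")
    print(json.dumps(results, indent=2))

asyncio.run(preview_search())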

Model Selection

from openai import AsyncOpenAI

async def list_available_models():
    # /v1/models follows the OpenAI convention, so the SDK's models.list()
    # works against the Context Kitten base URL.
    async_client = AsyncOpenAI(api_key="your-context-kitten-key",
                               base_url="https://api.contextkitten.com/v1")
    models = await async_client.models.list()
    return [model.id for model in models.data]
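
A quick usage example, printing the ids so you can pick one for the model parameter used earlier:

import asyncio

# Print every model id your key can use for chat completions.
print(asyncio.run(list_available_models()))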

Best Practices and Tips

  1. Document Processing: Upload documents in smaller batches to optimize processing time
  2. Context Management: Use specific search queries to get more relevant context
  3. Error Handling: Implement proper error handling for document processing status
  4. Rate Limiting: Stay within the limits (60 requests/minute on the free tier, 1,000 requests/minute on paid plans); a simple retry-with-backoff wrapper, sketched after this list, helps absorb occasional 429s
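
For the rate-limit point above, here is a minimal retry sketch, assuming the AsyncOpenAI-based ChatService from Step 3; openai.RateLimitError is the exception the SDK raises on HTTP 429 responses.

import asyncio
import openai

async def send_with_backoff(chat_service: ChatService, content: str, retries: int = 3):
    # Retry 429s with exponential backoff (1s, 2s, 4s, ...) before giving up.
    for attempt in range(retries):
        try:
            return await chat_service.send_message(content)
        except openai.RateLimitError:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(2 ** attempt)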

Getting Started

  1. Sign up at contextkitten.com/signup
  2. Get your API key from the dashboard (store it in an environment variable rather than hard-coding it; see the snippet after this list)
  3. Upload your initial document set
  4. Start making enhanced chat completion requests
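
For that second step, here is a minimal sketch of reading the key from the environment; CONTEXT_KITTEN_API_KEY is just a variable name chosen for this tutorial, not one the platform requires.

import os

from openai import OpenAI

# Read the key from an environment variable instead of committing it to source.
client = OpenAI(
    api_key=os.environ["CONTEXT_KITTEN_API_KEY"],
    base_url="https://api.contextkitten.com/v1"
)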

Conclusion

Building context-aware AI applications doesn't have to be complex. With Context Kitten, you can quickly upgrade your existing AI implementation to include both private knowledge and real-time information. The API's compatibility with OpenAI's interface makes migration straightforward while adding powerful new capabilities to your applications.

Want to see more? Check out our API documentation or join our growing developer community.