# Chatbot Example
This example demonstrates a complete chatbot implementation with Ladger integration, featuring:
- Express.js REST API
- OpenAI GPT integration
- Cost tracking per request
- Session management
- Graceful shutdown
## Project Setup
Create a new project:
```bash
mkdir ladger-chatbot
cd ladger-chatbot
npm init -y
npm install express openai @ladger/sdk dotenv
npm install -D typescript @types/express @types/node tsx
```

## Configuration
### `tsconfig.json`
{ "compilerOptions": { "target": "ES2022", "module": "NodeNext", "moduleResolution": "NodeNext", "esModuleInterop": true, "strict": true, "outDir": "dist" }, "include": ["src"]}.env
```env
OPENAI_API_KEY=sk-...
LADGER_API_KEY=ladger_sk_live_...
PORT=3000
```

`LADGER_PROJECT_URL` is optional; the code below falls back to `https://ladger.pages.dev/api`.

## Implementation
### `src/index.ts`
```ts
import 'dotenv/config';
import express from 'express';
import OpenAI from 'openai';
import { LadgerTracer } from '@ladger/sdk';

// Initialize OpenAI client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Initialize Ladger tracer
const tracer = new LadgerTracer({
  apiKey: process.env.LADGER_API_KEY!,
  flowName: 'openai-chatbot',
  projectUrl: process.env.LADGER_PROJECT_URL || 'https://ladger.pages.dev/api',
  debug: process.env.NODE_ENV === 'development',
});

const app = express();
app.use(express.json());

// Pricing lookup (simplified): USD per token
const PRICING: Record<string, { input: number; output: number }> = {
  'gpt-4o': { input: 0.005 / 1000, output: 0.015 / 1000 },
  'gpt-4o-mini': { input: 0.00015 / 1000, output: 0.0006 / 1000 },
  'gpt-3.5-turbo': { input: 0.0005 / 1000, output: 0.0015 / 1000 },
};

function calculateCost(model: string, inputTokens: number, outputTokens: number): number {
  const pricing = PRICING[model] || PRICING['gpt-4o'];
  return inputTokens * pricing.input + outputTokens * pricing.output;
}

/**
 * Simple chat endpoint
 */
app.post('/chat', async (req, res) => {
  const { message, model = 'gpt-4o-mini' } = req.body;

  if (!message) {
    return res.status(400).json({ error: 'Message is required' });
  }

  // Start a new session for each request
  tracer.newSession();

  try {
    const response = await tracer.trace('chat-completion', async (span) => {
      // Add request metadata
      span.setAttributes({
        'user.message_length': message.length,
        'request.model': model,
      });

      // Call OpenAI
      const completion = await openai.chat.completions.create({
        model,
        messages: [
          { role: 'system', content: 'You are a helpful assistant.' },
          { role: 'user', content: message },
        ],
      });

      const usage = completion.usage!;
      const cost = calculateCost(model, usage.prompt_tokens, usage.completion_tokens);

      // Record cost
      span.recordCost({
        provider: 'openai',
        model,
        inputTokens: usage.prompt_tokens,
        outputTokens: usage.completion_tokens,
        costUsd: cost,
      });

      return {
        content: completion.choices[0].message.content,
        usage: {
          inputTokens: usage.prompt_tokens,
          outputTokens: usage.completion_tokens,
          totalTokens: usage.total_tokens,
        },
        cost,
        model,
      };
    });

    res.json({
      response: response.content,
      metadata: {
        model: response.model,
        usage: response.usage,
        cost: `$${response.cost.toFixed(6)}`,
      },
    });
  } catch (error) {
    console.error('Chat error:', error);
    res.status(500).json({ error: 'Failed to generate response' });
  }
});

/**
 * Health check endpoint
 */
app.get('/health', (req, res) => {
  res.json({
    status: 'ok',
    tracer: {
      sessionId: tracer.getSessionId(),
      pendingSpans: tracer.getPendingSpanCount(),
    },
  });
});

const PORT = Number(process.env.PORT) || 3000;

// Graceful shutdown: flush pending spans before exiting
const shutdown = async () => {
  console.log('Shutting down...');
  await tracer.shutdown();
  process.exit(0);
};

process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);

app.listen(PORT, () => {
  console.log(`🤖 Chatbot running on http://localhost:${PORT}`);
  console.log(`📊 Traces: ${tracer.getProjectUrl()}`);
  console.log('\nEndpoints:');
  console.log('  POST /chat   - Chat with the bot');
  console.log('  GET  /health - Health check');
});
```

## Running the Example
```bash
# Development
npx tsx src/index.ts

# Production
npx tsc
node dist/index.js
```

## Testing
### Simple Chat
```bash
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the capital of France?"}'
```

Response:
{ "response": "The capital of France is Paris.", "metadata": { "model": "gpt-4o-mini", "usage": { "inputTokens": 25, "outputTokens": 8, "totalTokens": 33 }, "cost": "$0.000009" }}With Different Model
```bash
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Explain quantum computing", "model": "gpt-4o"}'
```
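### Health Check

The `/health` endpoint defined in `src/index.ts` is handy for smoke tests; it reports `status`, the active `sessionId`, and the number of spans still waiting to be flushed:

```bash
curl http://localhost:3000/health
```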
## Viewing in Dashboard

After making requests, visit ladger.pages.dev to see:
- Cost breakdown by request
- Token usage over time
- Model distribution
- Session timelines
## Extending the Example
### Add Conversation History
```ts
interface Message {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

// In-memory store; swap for a database or cache in production
const conversations = new Map<string, Message[]>();

app.post('/chat/:conversationId', async (req, res) => {
  const { conversationId } = req.params;
  const { message, model = 'gpt-4o-mini' } = req.body;

  // Get or create conversation
  if (!conversations.has(conversationId)) {
    conversations.set(conversationId, [
      { role: 'system', content: 'You are a helpful assistant.' },
    ]);
    tracer.newSession();
  }

  const messages = conversations.get(conversationId)!;
  messages.push({ role: 'user', content: message });

  const response = await tracer.trace('chat-with-history', async (span) => {
    span.setAttributes({
      'conversation.id': conversationId,
      'conversation.length': messages.length,
    });

    const completion = await openai.chat.completions.create({
      model,
      // The local Message shape matches the SDK's message param type
      messages: messages as OpenAI.Chat.Completions.ChatCompletionMessageParam[],
    });

    span.recordCost({
      provider: 'openai',
      model,
      inputTokens: completion.usage?.prompt_tokens ?? 0,
      outputTokens: completion.usage?.completion_tokens ?? 0,
    });

    return completion.choices[0].message.content;
  });

  messages.push({ role: 'assistant', content: response ?? '' });

  res.json({ response });
});
```
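To exercise the endpoint, reuse a conversation ID across requests (the ID `demo-1` here is arbitrary):

```bash
curl -X POST http://localhost:3000/chat/demo-1 \
  -H "Content-Type: application/json" \
  -d '{"message": "My name is Ada."}'

curl -X POST http://localhost:3000/chat/demo-1 \
  -H "Content-Type: application/json" \
  -d '{"message": "What is my name?"}'
```

The second reply should mention "Ada", confirming the history is being passed through.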
### Add Rate Limiting

Install the middleware first (`npm install express-rate-limit`):

```ts
import rateLimit from 'express-rate-limit';

const limiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 20, // 20 requests per minute
  handler: (req, res) => {
    res.status(429).json({ error: 'Too many requests' });
  },
});

app.use('/chat', limiter);
```
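To watch the limiter kick in, send more than 20 requests within a minute; requests beyond the limit return `429`. Note that the first 20 still reach OpenAI and incur cost:

```bash
for i in $(seq 1 25); do
  curl -s -o /dev/null -w "%{http_code}\n" \
    -X POST http://localhost:3000/chat \
    -H "Content-Type: application/json" \
    -d '{"message": "ping"}'
done
```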
### Add Request Validation

Install `zod` first (`npm install zod`):

```ts
import { z } from 'zod';

const chatSchema = z.object({
  message: z.string().min(1).max(4000),
  model: z.enum(['gpt-4o', 'gpt-4o-mini', 'gpt-3.5-turbo']).optional(),
});

app.post('/chat', async (req, res) => {
  const result = chatSchema.safeParse(req.body);
  if (!result.success) {
    return res.status(400).json({ error: result.error.issues });
  }
  // ... rest of handler
});
```
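Invalid input now fails fast; an empty message, for example, returns a 400 with Zod's issue list:

```bash
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": ""}'
```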
## Next Steps

- Explore the Multi-Agent Example
- Learn about Task Classification
- Set up Cost Alerts