# Cost Tracking

Cost tracking is the core feature of Ladger. Record token usage, model information, and costs to get full visibility into your AI spending.
## CostEvent Interface

```typescript
interface CostEvent {
  provider: string;       // Required: 'openai', 'anthropic', etc.
  model: string;          // Required: 'gpt-4o', 'claude-3-sonnet', etc.
  inputTokens?: number;   // Input/prompt tokens
  outputTokens?: number;  // Output/completion tokens
  costUsd?: number;       // Optional: explicit cost in USD
}
```

## Recording Costs
Use span.recordCost() to attach usage data:
```typescript
span.recordCost({
  provider: 'openai',
  model: 'gpt-4o',
  inputTokens: 150,
  outputTokens: 50,
  costUsd: 0.0025,
});
```

## Provider-Specific Examples
### OpenAI

```typescript
import OpenAI from 'openai';

const openai = new OpenAI();

const span = tracer.startSpan('chat');
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

span.recordCost({
  provider: 'openai',
  model: 'gpt-4o',
  inputTokens: completion.usage?.prompt_tokens,
  outputTokens: completion.usage?.completion_tokens,
});
span.end();
```

### Anthropic

```typescript
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

const span = tracer.startSpan('chat');
const message = await anthropic.messages.create({
  model: 'claude-3-sonnet-20240229',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }],
});

span.recordCost({
  provider: 'anthropic',
  model: 'claude-3-sonnet',
  inputTokens: message.usage.input_tokens,
  outputTokens: message.usage.output_tokens,
});
span.end();
```

### Google

```typescript
import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);

const span = tracer.startSpan('chat');
const model = genAI.getGenerativeModel({ model: 'gemini-pro' });
const result = await model.generateContent('Hello!');

span.recordCost({
  provider: 'google',
  model: 'gemini-pro',
  inputTokens: result.response.usageMetadata?.promptTokenCount,
  outputTokens: result.response.usageMetadata?.candidatesTokenCount,
});
span.end();
```

## Calculating Costs
You can either let Ladger estimate costs based on tokens, or provide explicit costs.
### Option 1: Token-Based (Recommended)

Provide tokens and let Ladger calculate costs using current pricing:

```typescript
span.recordCost({
  provider: 'openai',
  model: 'gpt-4o',
  inputTokens: 1000,
  outputTokens: 500,
  // costUsd is calculated by Ladger
});
```

### Option 2: Explicit Cost
Calculate and provide the cost yourself:
```typescript
const PRICING: Record<string, { input: number; output: number }> = {
  'gpt-4o': { input: 0.005 / 1000, output: 0.015 / 1000 },
  'gpt-4o-mini': { input: 0.00015 / 1000, output: 0.0006 / 1000 },
};

function calculateCost(model: string, inputTokens: number, outputTokens: number): number {
  const pricing = PRICING[model];
  return inputTokens * pricing.input + outputTokens * pricing.output;
}

span.recordCost({
  provider: 'openai',
  model: 'gpt-4o',
  inputTokens: 1000,
  outputTokens: 500,
  costUsd: calculateCost('gpt-4o', 1000, 500), // 0.0125
});
```

## Supported Providers
Ladger recognizes these provider names for automatic cost estimation:
| Provider | Value | Example Models |
|---|---|---|
| OpenAI | 'openai' | gpt-4o, gpt-4o-mini, gpt-3.5-turbo |
| Anthropic | 'anthropic' | claude-3-opus, claude-3-sonnet, claude-3-haiku |
| Google | 'google' | gemini-pro, gemini-flash |
| Cohere | 'cohere' | command, command-light |
| Mistral | 'mistral' | mistral-large, mistral-medium |
| Ollama | 'ollama' | llama2, codellama, mistral |
| Azure OpenAI | 'azure' | gpt-4, gpt-35-turbo |
| AWS Bedrock | 'bedrock' | anthropic.claude-v2, amazon.titan |
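If you want to validate provider strings before recording, the table above can be mirrored as a union type. This is a local sketch, not an export of Ladger's own types:

```typescript
// Provider identifiers recognized for automatic cost estimation,
// mirroring the table above (a local sketch; Ladger may export
// an equivalent type of its own).
const PROVIDERS = [
  'openai', 'anthropic', 'google', 'cohere',
  'mistral', 'ollama', 'azure', 'bedrock',
] as const;

type Provider = (typeof PROVIDERS)[number];

// Narrow an arbitrary string to the Provider union.
function isSupportedProvider(value: string): value is Provider {
  return (PROVIDERS as readonly string[]).includes(value);
}
```

Validating up front keeps typos like `'open-ai'` from silently disabling automatic cost estimation.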
## Model Naming
Use the model identifier as returned by the provider:
```typescript
// OpenAI
span.recordCost({ provider: 'openai', model: 'gpt-4o-2024-08-06', ... });

// Anthropic
span.recordCost({ provider: 'anthropic', model: 'claude-3-sonnet-20240229', ... });

// Short names also work
span.recordCost({ provider: 'openai', model: 'gpt-4o', ... });
```

## Pricing Reference
Common model pricing (as of January 2025):
### OpenAI
| Model | Input | Output |
|---|---|---|
| gpt-4o | $5.00 / 1M | $15.00 / 1M |
| gpt-4o-mini | $0.15 / 1M | $0.60 / 1M |
| gpt-3.5-turbo | $0.50 / 1M | $1.50 / 1M |
### Anthropic
| Model | Input | Output |
|---|---|---|
| claude-3-opus | $15.00 / 1M | $75.00 / 1M |
| claude-3-sonnet | $3.00 / 1M | $15.00 / 1M |
| claude-3-haiku | $0.25 / 1M | $1.25 / 1M |
### Google

| Model | Input | Output |
|---|---|---|
| gemini-1.5-pro | $3.50 / 1M | $10.50 / 1M |
| gemini-1.5-flash | $0.075 / 1M | $0.30 / 1M |
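The per-request cost follows directly from these tables: tokens divided by one million, times the per-million rate, summed over input and output. A worked example with the gpt-4o rates hard-coded from the table above:

```typescript
// Per-million-token rates for gpt-4o, taken from the
// January 2025 pricing table above.
const GPT_4O_INPUT_PER_M = 5.0;
const GPT_4O_OUTPUT_PER_M = 15.0;

function gpt4oCost(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * GPT_4O_INPUT_PER_M +
    (outputTokens / 1_000_000) * GPT_4O_OUTPUT_PER_M
  );
}

// 1,000 input + 500 output tokens:
// (1000 / 1e6) * $5 + (500 / 1e6) * $15 = $0.005 + $0.0075 = $0.0125
```

This matches the `costUsd` value computed in the explicit-cost example earlier.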
## Embeddings and Other Operations
For non-chat operations, record tokens appropriately:
### Embeddings
```typescript
const span = tracer.startSpan('create-embedding');
const embedding = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'Hello world',
});

span.recordCost({
  provider: 'openai',
  model: 'text-embedding-3-small',
  inputTokens: embedding.usage.total_tokens,
  // Embeddings have no output tokens
});
span.end();
```

### Image Generation
```typescript
const span = tracer.startSpan('generate-image');
const image = await openai.images.generate({
  model: 'dall-e-3',
  prompt: 'A sunset over mountains',
  size: '1024x1024',
});

span.recordCost({
  provider: 'openai',
  model: 'dall-e-3',
  // Image pricing is per-image, not per-token
  costUsd: 0.04, // $0.04 for 1024x1024
});
span.end();
```

## Multiple Cost Events
Each span can hold at most one cost event; a later `recordCost()` call overwrites the earlier one. For multi-step operations, record each call on its own span:
```typescript
// ❌ Don't do this - the second call overwrites the first
span.recordCost({ model: 'gpt-4o', ... });
span.recordCost({ model: 'gpt-3.5', ... }); // Overwrites!

// ✅ Do this - separate spans
const span1 = tracer.startSpan('call-1');
span1.recordCost({ model: 'gpt-4o', ... });
span1.end();

const span2 = tracer.startSpan('call-2');
span2.recordCost({ model: 'gpt-3.5', ... });
span2.end();
```

## Cost Aggregation
Ladger aggregates costs at multiple levels:
- Per Span: Individual operation cost
- Per Session: Sum of all spans in a session
- Per Flow: Sum of all sessions in a flow
- Per Project: Sum of all flows
View aggregations in the dashboard by time range, provider, model, and flow.
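Conceptually, each rollup level is just a sum of the level below it. A minimal sketch of the per-session step, assuming each span exposes its session id and resolved `costUsd` (this is an illustration, not Ladger's actual server-side aggregation):

```typescript
// A span's resolved cost, tagged with the session it belongs to
// (hypothetical shape for illustration).
interface SpanCost {
  sessionId: string;
  costUsd: number;
}

// Sum span costs into per-session totals. Flow and project totals
// roll up the same way, one level at a time.
function perSessionTotals(spans: SpanCost[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const { sessionId, costUsd } of spans) {
    totals.set(sessionId, (totals.get(sessionId) ?? 0) + costUsd);
  }
  return totals;
}
```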