Ladger Documentation

Track, analyze, and optimize your AI workload costs with end-to-end observability

Why Ladger?

Organizations scaling AI workloads face critical challenges: spending they cannot see, optimizations they cannot safely ship, and call graphs too tangled to reason about. Ladger addresses all three with intelligent observability.

Track AI Costs

Automatically capture token usage, latency, and costs across all your AI providers with a lightweight SDK.

Visualize Flows

Interactive flow maps show exactly how your AI calls connect, revealing cost hotspots and optimization opportunities.

Classify Tasks

ML-powered classification identifies task complexity and type, enabling smart model routing decisions.
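Once a task's complexity is classified, routing becomes a lookup from complexity tier to model. A minimal sketch of that idea; the `TaskClass` type, `routeModel` function, and the tier-to-model mapping are illustrative assumptions, not part of the Ladger API:

```typescript
// Hypothetical routing sketch: send simpler tasks to a cheaper model.
type Complexity = 'low' | 'medium' | 'high';
interface TaskClass {
  complexity: Complexity;
}

function routeModel(task: TaskClass): string {
  // Illustrative mapping; tune per your own quality/cost measurements.
  const routes: Record<Complexity, string> = {
    low: 'gpt-4o-mini',
    medium: 'gpt-4o',
    high: 'gpt-4o',
  };
  return routes[task.complexity];
}
```

In practice the classifier's output would drive this lookup before the provider call is made, so expensive models are reserved for the tasks that need them.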

Optimize Confidently

Test optimizations against historical data with quality validation before deploying changes.
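Validating an optimization against historical data amounts to scoring candidate outputs against baseline outputs and gating deployment on an aggregate threshold. A minimal sketch, assuming a caller-supplied `qualityScore` function (this helper is hypothetical, not a Ladger API):

```typescript
// Hypothetical validation sketch: gate a change on average quality
// across historical samples before deploying it.
interface Sample {
  input: string;
  baselineOutput: string;   // what the current setup produced
  candidateOutput: string;  // what the optimized setup produces
}

function passesValidation(
  samples: Sample[],
  qualityScore: (baseline: string, candidate: string) => number, // 0..1
  threshold = 0.9,
): boolean {
  const total = samples.reduce(
    (sum, s) => sum + qualityScore(s.baselineOutput, s.candidateOutput),
    0,
  );
  return total / samples.length >= threshold;
}
```

The scoring function could be anything from exact-match to an LLM-as-judge; the gate itself stays the same.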

Quick Example

import { LadgerTracer } from '@ladger/sdk';
import OpenAI from 'openai';

const openai = new OpenAI();
const tracer = new LadgerTracer({
  apiKey: process.env.LADGER_API_KEY!,
  flowName: 'my-chatbot',
});

// Track an AI call
const span = tracer.startSpan('chat-completion');
const response = await openai.chat.completions.create({ ... });
span.recordCost({
  provider: 'openai',
  model: 'gpt-4o',
  inputTokens: response.usage?.prompt_tokens,
  outputTokens: response.usage?.completion_tokens,
});
span.end();
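The token counts recorded above map to dollar cost through per-token pricing. A minimal sketch of that arithmetic; the `PRICES` table uses illustrative per-million-token figures, not official provider pricing, and `spanCost` is a hypothetical helper rather than part of the SDK:

```typescript
// Illustrative per-million-token prices in USD; check your provider's
// current pricing page for real figures.
const PRICES: Record<string, { input: number; output: number }> = {
  'gpt-4o': { input: 2.5, output: 10 },
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
};

function spanCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  // cost = tokens * price-per-token, with prices quoted per million tokens
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}
```

For example, a gpt-4o call with 1,000 input and 500 output tokens works out to (1,000 × 2.5 + 500 × 10) / 1,000,000 = $0.0075 under these illustrative prices.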

Supported Providers

OpenAI

GPT-4o, GPT-4o-mini, GPT-3.5 Turbo, and more

Anthropic

Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku

Google

Gemini Pro, Gemini Flash

Ollama

Self-hosted models: Llama, Mistral, and more

Next Steps