Quickstart

This guide will have you tracking AI costs in under 5 minutes.

Prerequisites

  • A Ladger API key (starts with 'ladger_')
  • Node.js and npm installed
  • The SDK added to your project: npm install @ladger/sdk

Steps

  1. Initialize the Tracer

    Create a tracer instance with your API key and flow name:

    import { LadgerTracer } from '@ladger/sdk';

    const tracer = new LadgerTracer({
      apiKey: process.env.LADGER_API_KEY!, // starts with 'ladger_'
      flowName: 'my-chatbot', // groups traces by workflow
    });
  2. Wrap Your AI Call with a Span

    Create a span before making an AI call and end it after:

    import OpenAI from 'openai';

    const openai = new OpenAI();

    async function chat(message: string) {
      // Start tracking
      const span = tracer.startSpan('chat-completion');
      try {
        const completion = await openai.chat.completions.create({
          model: 'gpt-4o-mini',
          messages: [{ role: 'user', content: message }],
        });

        // Record cost data
        span.recordCost({
          provider: 'openai',
          model: 'gpt-4o-mini',
          inputTokens: completion.usage?.prompt_tokens,
          outputTokens: completion.usage?.completion_tokens,
        });

        return completion.choices[0].message.content;
      } finally {
        // Always end the span
        span.end();
      }
    }
  3. Make a Request

    Call your function and watch the trace appear in the dashboard:

    const response = await chat('What is the capital of France?');
    console.log(response);
  4. View in Dashboard

    Open ladger.pages.dev to see your trace:

    • Cost breakdown by provider and model
    • Latency measurements
    • Token usage statistics
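
The cost breakdown shown in the dashboard is derived from the token counts you record. As a rough illustration of the arithmetic only (the per-million-token prices below are example values, not Ladger's actual rate table), a cost estimate can be computed like this:

```typescript
// Hypothetical per-million-token prices, for illustration only.
const PRICES_PER_MILLION_USD = {
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
} as const;

type PricedModel = keyof typeof PRICES_PER_MILLION_USD;

// Estimate the dollar cost of one completion from its token usage.
function estimateCostUSD(
  model: PricedModel,
  inputTokens: number,
  outputTokens: number,
): number {
  const price = PRICES_PER_MILLION_USD[model];
  return (inputTokens * price.input + outputTokens * price.output) / 1_000_000;
}
```

So a call that used 1,000 input tokens and 500 output tokens would come out to $0.00045 at these example rates.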

Using the trace() Helper

For cleaner code, use the trace() helper, which manages the span lifecycle automatically:

const response = await tracer.trace('chat-completion', async (span) => {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Hello!' }],
  });

  span.recordCost({
    provider: 'openai',
    model: 'gpt-4o-mini',
    inputTokens: completion.usage?.prompt_tokens,
    outputTokens: completion.usage?.completion_tokens,
  });

  return completion.choices[0].message.content;
});
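
Conceptually, a helper like trace() is a try/finally wrapper around starting and ending a span. A minimal sketch of that pattern, using a generic Span interface rather than the SDK's actual types:

```typescript
// Minimal model of a trace()-style helper (assumed semantics, not the SDK source).
interface Span {
  end(): void;
}

async function withSpan<T>(
  startSpan: (name: string) => Span,
  name: string,
  fn: (span: Span) => Promise<T>,
): Promise<T> {
  const span = startSpan(name);
  try {
    return await fn(span);
  } finally {
    // The span is always closed, whether fn resolves or throws.
    span.end();
  }
}
```

This is why the helper version needs no explicit try/finally in your own code: the span is guaranteed to end even when the callback throws.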

Session Management

Group related requests into sessions (e.g., a conversation):

// Start a new session for each conversation
tracer.newSession();
// All subsequent spans belong to this session
await chat('Hello!');
await chat('What was my first message?');
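
One way to picture the grouping (an illustrative model of the assumed semantics, not the SDK's internals): every span is stamped with whichever session was current when it started, and newSession() swaps in a fresh one.

```typescript
// Illustrative stand-ins for the tracer's session bookkeeping.
let currentSessionId = 0;

function newSessionStub(): number {
  currentSessionId += 1;
  return currentSessionId;
}

function startSpanStub(name: string): { name: string; sessionId: number } {
  return { name, sessionId: currentSessionId };
}

newSessionStub();
const first = startSpanStub('chat');
const second = startSpanStub('chat'); // same session as `first`

newSessionStub();
const third = startSpanStub('chat'); // a new conversation, so a new session
```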

Graceful Shutdown

Always flush pending spans before your application exits:

process.on('SIGTERM', async () => {
  await tracer.shutdown();
  process.exit(0);
});
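
If your process can also be stopped from a terminal, you may want the same flush on SIGINT. A small sketch of a reusable hook (the helper name and default signal list are illustrative; tracer.shutdown() is the only SDK call assumed):

```typescript
// Register one async cleanup handler for several exit signals.
function onShutdown(
  flush: () => Promise<void>,
  signals: NodeJS.Signals[] = ['SIGTERM', 'SIGINT'],
): void {
  for (const signal of signals) {
    process.once(signal, async () => {
      await flush(); // e.g. tracer.shutdown()
      process.exit(0);
    });
  }
}
```

Usage: onShutdown(() => tracer.shutdown());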

Complete Example

Here’s a complete working example:

import 'dotenv/config';
import { LadgerTracer } from '@ladger/sdk';
import OpenAI from 'openai';

const tracer = new LadgerTracer({
  apiKey: process.env.LADGER_API_KEY!,
  flowName: 'quickstart-demo',
  debug: true, // Enable debug logging
});

const openai = new OpenAI();

async function main() {
  const response = await tracer.trace('chat-completion', async (span) => {
    span.setAttributes({ demo: true });

    const completion = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: 'Say hello in 3 words!' }],
    });

    span.recordCost({
      provider: 'openai',
      model: 'gpt-4o-mini',
      inputTokens: completion.usage?.prompt_tokens,
      outputTokens: completion.usage?.completion_tokens,
    });

    return completion.choices[0].message.content;
  });

  console.log('Response:', response);

  // Ensure all spans are sent
  await tracer.shutdown();
}

main();

Next Steps