# withNeuraMeter()

Wraps an OpenAI client so that every `chat.completions.create()` call is automatically tracked in NeuraMeter. No proxy required.
## Usage

```typescript
import OpenAI from 'openai';
import { withNeuraMeter } from '@neurameter/core';

const openai = withNeuraMeter(new OpenAI(), {
  apiKey: 'nm_xxx',
  projectId: 'proj_xxx',
  agentName: 'MyAgent',
});
```

## Config
```typescript
interface WithNeuraMeterConfig {
  /** NeuraMeter API key (format: nm_{orgId}_{secret}) */
  apiKey: string;
  /** NeuraMeter project ID */
  projectId: string;
  /** Agent name for cost events (default: 'default') */
  agentName?: string;
  /** NeuraMeter ingestion endpoint (default: 'https://meter.neuria.tech') */
  endpoint?: string;
}
```

| Field | Required | Default | Description |
|---|---|---|---|
| `apiKey` | Yes | — | Your NeuraMeter API key |
| `projectId` | Yes | — | Your NeuraMeter project ID |
| `agentName` | No | `'default'` | Name to label this agent's events |
| `endpoint` | No | `'https://meter.neuria.tech'` | Ingestion API URL |
## How It Works

### Non-streaming

- Intercepts `chat.completions.create()` calls
- Starts a timer (`Date.now()`)
- Calls the original OpenAI method
- Extracts `response.usage` (`prompt_tokens`, `completion_tokens`, etc.)
- Calculates cost using built-in pricing tables
- Records the event via `NeuraMeter.record()` (batched, async)
- Returns the response unchanged
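The non-streaming steps above can be sketched as a thin wrapper around `create()`. The types and the `recordEvent()` callback below are simplified stand-ins for this example, not NeuraMeter's actual internals:

```typescript
// Simplified shapes; the real OpenAI response carries many more fields.
type Usage = { prompt_tokens: number; completion_tokens: number };
type ChatResponse = { usage?: Usage };
type CreateFn = (params: Record<string, unknown>) => Promise<ChatResponse>;

function wrapCreate(
  original: CreateFn,
  recordEvent: (e: object) => void, // stand-in for NeuraMeter.record()
): CreateFn {
  return async (params) => {
    const startTime = Date.now();            // start the latency timer
    const response = await original(params); // call the real OpenAI method
    if (response.usage) {
      recordEvent({
        model: params.model,
        inputTokens: response.usage.prompt_tokens,
        outputTokens: response.usage.completion_tokens,
        latencyMs: Date.now() - startTime,
      });
    }
    return response; // the response passes through unchanged
  };
}
```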
### Streaming

- Detects `stream: true` in params
- Auto-injects `stream_options: { include_usage: true }` (so OpenAI includes usage in the final chunk)
- Wraps the async iterable stream
- Yields every chunk unchanged to your code
- Captures usage from the final chunk
- Records the event after stream completion
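The streaming path can be sketched as an async generator that passes every chunk through and remembers the usage from the final one. The chunk shape here is simplified for illustration:

```typescript
// Simplified chunk shape; real OpenAI stream chunks carry more fields.
type Usage = { prompt_tokens: number; completion_tokens: number };
type StreamChunk = { usage?: Usage | null };

async function* wrapStream(
  stream: AsyncIterable<StreamChunk>,
  onUsage: (u: Usage) => void, // called once, after the stream completes
): AsyncIterable<StreamChunk> {
  let usage: Usage | null = null;
  for await (const chunk of stream) {
    if (chunk.usage) usage = chunk.usage; // with include_usage, only the final chunk has usage
    yield chunk;                          // every chunk reaches the caller unchanged
  }
  if (usage) onUsage(usage); // record only after stream completion
}
```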
## What Gets Tracked

Each call records:

| Field | Source |
|---|---|
| `model` | From request params |
| `inputTokens` | `usage.prompt_tokens` |
| `outputTokens` | `usage.completion_tokens` |
| `reasoningTokens` | `usage.completion_tokens_details.reasoning_tokens` |
| `cachedTokens` | `usage.prompt_tokens_details.cached_tokens` |
| `costMicrodollars` | Calculated from built-in pricing tables |
| `latencyMs` | `Date.now() - startTime` |
| `provider` | `'openai'` |
| `agentName` | From config |
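A minimal sketch of how `costMicrodollars` could be derived from a per-model pricing table. The rates below are purely illustrative placeholders, not NeuraMeter's real pricing data:

```typescript
// Illustrative rates only — NOT NeuraMeter's actual pricing table.
const PRICING_MICRODOLLARS_PER_TOKEN: Record<string, { input: number; output: number }> = {
  'gpt-4o': { input: 2.5, output: 10 },
};

function costMicrodollars(model: string, inputTokens: number, outputTokens: number): number {
  const rates = PRICING_MICRODOLLARS_PER_TOKEN[model];
  if (!rates) return 0; // unknown model: record zero cost rather than guess
  return Math.round(inputTokens * rates.input + outputTokens * rates.output);
}
```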
## Cleanup

The wrapper attaches a `__neurameter_destroy` method for cleanup:

```typescript
// When shutting down your application
await (openai as any).__neurameter_destroy();
```

This flushes any remaining buffered events before shutdown.
## Notes

- The wrapper has zero dependency on the `openai` npm package — it uses minimal interfaces internally, so it works with any version of the OpenAI SDK
- Events are batched (50 events or a 5-second interval) and sent asynchronously — no impact on your API call latency
- If the NeuraMeter ingestion endpoint is unreachable, events are silently dropped (fire-and-forget)
- The wrapper only intercepts `chat.completions.create()`. Other OpenAI methods (embeddings, images, etc.) pass through unchanged.
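The batching behaviour described above (flush at 50 events or every 5 seconds, whichever comes first) can be sketched as follows. `EventBuffer` and `sendBatch` are hypothetical stand-ins for `@neurameter/core`'s internals:

```typescript
// Sketch of a size-or-interval batched buffer, not the library's actual code.
class EventBuffer<T> {
  private buffer: T[] = [];
  private timer: ReturnType<typeof setInterval>;

  constructor(
    private sendBatch: (events: T[]) => void, // fire-and-forget transport
    private maxSize = 50,
    flushIntervalMs = 5000,
  ) {
    this.timer = setInterval(() => this.flush(), flushIntervalMs);
  }

  record(event: T): void {
    this.buffer.push(event);
    if (this.buffer.length >= this.maxSize) this.flush(); // size-triggered flush
  }

  flush(): void {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.sendBatch(batch); // errors would be swallowed (events silently dropped)
  }

  destroy(): void {
    clearInterval(this.timer);
    this.flush(); // drain remaining events on shutdown
  }
}
```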