# Glossary (/docs/glossary)

This file defines the canonical terminology used across the Tokenlens v2 codebase and docs.

## provider id

* The vendor namespace for a model (e.g., `openai`, `anthropic`, `xai`).
* Appears as the prefix in canonical ids such as `provider/model`.

## model id (canonical)

* Fully qualified identifier in the form `provider/model`.
* Example: `openai/gpt-4o-mini`, `anthropic/claude-3-5-sonnet-20241022`.
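A canonical id can be split into its two parts by breaking on the first `/`. This is a minimal sketch; `parseModelId` is an illustrative name, not a Tokenlens export.

```typescript
// Hypothetical helper: split a canonical `provider/model` id into its parts.
function parseModelId(canonical: string): { provider: string; model: string } {
  // Split on the FIRST "/" only: the model segment may itself contain slashes
  // (e.g., OpenRouter-style "meta-llama/llama-3-70b" under a gateway prefix).
  const slash = canonical.indexOf("/");
  if (slash < 0) throw new Error(`not a canonical id: ${canonical}`);
  return {
    provider: canonical.slice(0, slash),
    model: canonical.slice(slash + 1),
  };
}
```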

## catalog

* Identifier for the dataset Tokenlens loads (`auto`, `openrouter`, `models.dev`, or `vercel`).
* Configured via `TokenlensOptions.catalog` or per-call helper options such as `gateway`.

## SourceProvider

* Raw provider metadata emitted by a loader.
* Fields include `id`, `name?`, `api?`, `doc?`, `env?`, `models`.
* Represents source-specific data before Tokenlens composes higher-level DTOs.

## SourceModel

* Raw per-model metadata entry delivered by a loader.
* Fields include `id`, `name`, `limit`, `cost`, `modalities`, capability flags, and source-specific `extras`.

## ModelDetails

* Alias for the `SourceModel` returned by `getModelData` (or `undefined` when not found).

## Usage

* Union type capturing usage counters from common SDKs (e.g., `input_tokens`, `outputTokens`, `reasoning_tokens`).
* Passed to `computeCostUSD`, `estimateCostUSD`, or `getContextHealth` to compute derived metrics.

## TokenCosts

* Normalized USD cost breakdown calculated by `computeCostUSD` / `computeTokenCostsForModel`.
* Fields: `inputTokenCostUSD`, `outputTokenCostUSD`, optional `reasoningTokenCostUSD`, caching costs, `totalTokenCostUSD`, and `ratesUsed` (the rates applied).
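The arithmetic behind such a breakdown can be sketched as below. Catalog rates are USD per 1M tokens (see the pricing entry); the `Rates` and `Usage` shapes here are assumptions for the example, not the Tokenlens types.

```typescript
// Illustrative sketch of a TokenCosts-style breakdown. Rates are USD per 1M
// tokens, as supplied by the source catalogs.
interface Rates { input: number; output: number; reasoning?: number }
interface Usage { inputTokens: number; outputTokens: number; reasoningTokens?: number }

const PER_MILLION = 1_000_000;

function tokenCostsUSD(usage: Usage, rates: Rates) {
  const inputTokenCostUSD = (usage.inputTokens / PER_MILLION) * rates.input;
  const outputTokenCostUSD = (usage.outputTokens / PER_MILLION) * rates.output;
  // Reasoning cost only applies when both a counter and a rate are present.
  const reasoningTokenCostUSD =
    usage.reasoningTokens !== undefined && rates.reasoning !== undefined
      ? (usage.reasoningTokens / PER_MILLION) * rates.reasoning
      : undefined;
  return {
    inputTokenCostUSD,
    outputTokenCostUSD,
    reasoningTokenCostUSD,
    totalTokenCostUSD:
      inputTokenCostUSD + outputTokenCostUSD + (reasoningTokenCostUSD ?? 0),
  };
}
```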

## provider id normalization

* Tokenlens strips namespace suffixes such as `.responses` from the provider segment (and normalizes slash-prefixed ids), so `openai.responses/gpt-4o` resolves to the canonical `openai/gpt-4o` entry.
* Needed for Vercel AI SDK integration where provider ids include namespaces.
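The suffix-stripping case can be sketched as follows. The function name and exact rules are illustrative (this sketch covers only the `.`-suffixed namespace case, not every normalization Tokenlens performs).

```typescript
// Hedged sketch: drop a "."-suffixed namespace from the provider segment so
// AI SDK-style ids map onto canonical `provider/model` ids.
function normalizeProviderId(id: string): string {
  const slash = id.indexOf("/");
  if (slash < 0) return id; // not in provider/model form; leave untouched
  const provider = id.slice(0, slash).split(".")[0]; // "openai.responses" -> "openai"
  return `${provider}/${id.slice(slash + 1)}`;
}
```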

## LanguageModelV2

* AI SDK model abstraction (`@ai-sdk/*`) whose `modelId` and `provider`/`providerId` can be passed directly to Tokenlens helpers.
* Tests covering this live in `packages/provider-tests/tests/ai-sdk-usage.spec.ts`.

## models.dev

* External dataset consumed via `https://models.dev/api.json`.
* In v2 we ingest it into `packages/models/src/modelsdev/`.

## OpenRouter

* External dataset consumed via `https://openrouter.ai/api/v1/models`.
* In v2 we ingest it into `packages/models/src/openrouter/`.

## Vercel AI Gateway

* External dataset consumed via `https://ai-gateway.vercel.sh/v1/models`.
* Accessed live via `fetchVercel` or the `"vercel"` source.

## context limits

* Token budget derived from provider metadata.
* `limit.context` represents combined tokens; `limit.input` / `limit.output` provide per-direction caps when available.
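One way these fields are typically consumed is to compute a remaining budget, preferring the per-direction cap and falling back to the combined window. The function and field names below are for the sketch only, not the Tokenlens API.

```typescript
// Illustrative use of the limit fields: remaining input budget given tokens
// already used, falling back from `input` to the combined `context` window.
interface Limits { context?: number; input?: number; output?: number }

function remainingInputTokens(limits: Limits, usedTokens: number): number | undefined {
  const cap = limits.input ?? limits.context;
  if (cap === undefined) return undefined; // no limit known for this model
  return Math.max(0, cap - usedTokens);
}
```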

## pricing

* Approximate USD cost per 1M tokens as supplied by the source catalog (`cost.input`, `cost.output`, etc.).
* Tokenlens converts these to per-request USD using `computeCostUSD`.

## modalities

* Supported input/output modalities of a model (`modalities.input`, `modalities.output`).
* Used to derive hints returned from `getModelData`.

## computeCostUSD

* Stand-alone helper (and class method) that resolves a model, normalizes usage, and returns `TokenCosts`.

## getContextLimits

* Stand-alone helper returning `{ context?, input?, output? }` for a resolved model.

## Tokenlens (class)

* Configurable client responsible for loading catalogs, caching provider metadata, and exposing helpers (`getModelData`, `computeCostUSD`, `getContextLimits`, `getContextHealth`, `estimateCostUSD`, `countTokens`).
* Instances share a cache unless `cacheKey` is overridden.

## createTokenlens

* Convenience factory that instantiates `Tokenlens` with sensible defaults: the OpenRouter catalog is used when none is provided, and `{ catalog: "auto" }` resolves to the same OpenRouter source.

## MemoryCache

* Default cache adapter for Tokenlens.
* Stores provider catalogs in-memory with TTL jitter to prevent thundering herds.
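TTL jitter spreads entry expiry so many instances don't refetch the catalog at the same instant. A minimal sketch, assuming a ±10% jitter window and an injectable `rng` (both assumptions for the example, not the MemoryCache implementation):

```typescript
// Minimal sketch of TTL jitter: spread each entry's expiry across +/- 10% of
// the base TTL so cached catalogs don't all expire (and refetch) together.
function jitteredTtlMs(baseTtlMs: number, rng: () => number = Math.random): number {
  const jitter = (rng() * 2 - 1) * 0.1; // uniform in [-0.1, +0.1]
  return Math.round(baseTtlMs * (1 + jitter));
}
```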

***

## Conventions

* Prefer `provider/model` in code and docs.
* Reference terms by anchor, e.g., “See docs/glossary.md#model-id-canonical”.
* New terms must be added here in the same PR.
