See where your prompt spends its tokens.

Real tokenizers for GPT-4o, Claude, Grok, and Gemini. Structural breakdown by section. Savings envelope from a 200-prompt blind-judged benchmark.

No account. No install. No prompt logging. A free diagnostic from Ordica's Context Optimization Layer.

Every production prompt has a structure: system instructions, few-shot examples, retrieved documents, conversation history, the user's actual question. The structure predicts where your tokens leak.

You can run tiktoken.encode() in a notebook and get a single total. That's not enough. You need four tokenizers in one place, the structural breakdown of your prompt, and an honest savings envelope from a benchmark you can verify yourself.

Built for the engineer who already counts tokens in a notebook and wants the next layer of insight.


Your prompt is processed in memory only — never written to disk, never sent to a model for inference. We log content-free metadata for abuse prevention. View audit policy →

How this works

  1. Real tokenizers, not approximations. GPT-4o uses tiktoken o200k_base locally. Claude uses Anthropic's count_tokens API. Gemini uses Google's countTokens REST endpoint. Grok uses cl100k_base as an approximation (xAI does not publish a tokenizer).
  2. Section parsing is heuristic. We detect system instructions, few-shot examples, RAG context, tool definitions, and user queries from public structural markers (role-based JSON, "Document N:", "Example:", etc.). Confidence is reported per analysis.
  3. Savings estimates come from a benchmark, not your prompt. We match your prompt's structure to a cohort in our 200-prompt × 4-provider blind-judged benchmark and return that cohort's percentile range. We never run our compression engine on your input.
  4. Compression preserves quality better in some categories than others. We surface per-provider equivalence scores (1-5 scale) so you can see where compression is reliable and where it isn't. Don't trust this page — verify the cohort table yourself.
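The marker-based section detection in step 2 can be sketched as follows (hypothetical patterns for illustration; the actual heuristics and confidence scoring are not published):

```python
import re

# Hypothetical marker patterns, modeled on the public structural
# cues the page mentions ("Document N:", "Example:", role text).
SECTION_MARKERS = [
    ("few_shot", re.compile(r"^Example\s*\d*:", re.IGNORECASE)),
    ("rag_context", re.compile(r"^Document\s+\d+:", re.IGNORECASE)),
    ("system", re.compile(r"^(You are|System:)", re.IGNORECASE)),
]

def tag_sections(prompt: str) -> list[tuple[str, str]]:
    """Tag each line with a section label. Lines that match no
    marker inherit the previous label (default: user_query)."""
    tagged, current = [], "user_query"
    for line in prompt.splitlines():
        for label, pattern in SECTION_MARKERS:
            if pattern.match(line.strip()):
                current = label
                break
        tagged.append((current, line))
    return tagged
```

Because detection is heuristic, prompts with unusual layouts will mis-tag, which is why a per-analysis confidence score matters more than the labels themselves.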

Privacy contract