The system prompt tokenizer reveals the hidden cost of your LLM system prompts. Since system prompts are sent on every API call, even a modestly long prompt can cost hundreds of dollars per month at production scale. This tool calculates your exact token count, cost per call, projected daily and monthly spend, and context window consumption — then suggests how to optimize.
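The arithmetic behind the calculator can be sketched as follows. This is a minimal sketch: the ~4-characters-per-token ratio is a common heuristic (not an exact tokenizer), and the per-million-token price and context window size are illustrative assumptions, not the tool's actual figures.

```python
# Rough sketch of the calculator's arithmetic.
# ASSUMPTIONS: ~4 characters per token (a common heuristic), and an
# illustrative input price and context window -- not real quotes.
PRICE_PER_MTOK = 3.00     # assumed input price, $ per 1M tokens
CONTEXT_WINDOW = 200_000  # assumed context window, tokens

def estimate(prompt: str, calls_per_day: int) -> dict:
    tokens = max(1, len(prompt) // 4)          # chars/4 heuristic
    per_call = tokens * PRICE_PER_MTOK / 1e6   # $ per API call
    daily = per_call * calls_per_day
    return {
        "tokens": tokens,
        "cost_per_call": per_call,
        "daily": daily,
        "monthly": daily * 30,
        "context_pct": 100 * tokens / CONTEXT_WINDOW,
    }

print(estimate("You are a helpful assistant. " * 100, calls_per_day=10_000))
```

Because the system prompt is resent on every call, cost scales linearly with call volume, which is why a prompt that costs fractions of a cent per call can still dominate a monthly bill.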

Paste Your System Prompt

0 characters · 0 words

Model & Usage

Each call sends the full system prompt

With prompt caching — Claude: 90% discount on cached input tokens. GPT-4o: 50% discount.

Token count: 0
Cost per call: $0.0000
Daily cost: $0.00
Monthly cost: $0.00
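Applying the caching discounts noted above to a per-call figure works out as below. The discount rates come from the note above; the base per-call cost is an assumed example, not output from the tool.

```python
# Effect of prompt caching on the input cost of a cached system prompt.
# Discount rates are from the note above; the base per-call cost is an
# assumed example figure.
base_cost = 0.0030  # assumed uncached input cost per call, $

claude_cached = base_cost * (1 - 0.90)  # 90% discount on cached reads
gpt4o_cached = base_cost * (1 - 0.50)   # 50% discount on cached reads

print(f"uncached: ${base_cost:.4f}  "
      f"claude cached: ${claude_cached:.4f}  "
      f"gpt-4o cached: ${gpt4o_cached:.4f}")
```

Note that caching discounts apply only to cache reads; the first call that writes the prompt into the cache is billed at (or above) the base input rate.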

Cost Comparison Across Models

Same prompt, same call volume, different models

Model | Context Used | Per Call | Daily | Monthly | Annual

Optimization Suggestions

Paste a system prompt to get personalized optimization suggestions.