Bring Your Own Key AI

Bring your own AI key. 10 providers. No model lock-in.

Triple Whale, Northbeam, and Hyros fund AI inference at the platform level — and tie you to one model. Admaxxer lets you paste your own API key from any of 10 providers and pay provider pricing directly.

What is Bring Your Own Key (BYOK)?

Bring Your Own Key means you paste your own API key from an AI provider — OpenAI, Anthropic, Google, xAI, etc. — into Admaxxer, and the in-app Maxxer chat agent uses your key for inference. The provider bills you directly at their list price. Admaxxer does not see your usage bill, does not pool your tokens with anyone else's, and does not mark up inference.

This is the opposite of how Triple Whale, Northbeam, and Hyros ship AI. Those platforms fund inference at the platform level — meaning they pick the model, they pay the provider, and they bake the cost into your subscription. The trade-off is convenience in exchange for no model choice, no transparency on the unit economics, and a hard ceiling on how much AI you can use before the platform starts rate-limiting you.

Why BYOK is the right architecture for analytics

Analytics agents are token-heavy. A serious "explain this week's revenue dip" prompt easily pulls in 50–100k tokens of context: campaign timeseries, cohort tables, attribution diffs, prior conversation. A platform-funded model has to constrain that context aggressively to keep margins, which means the agent's reasoning runs on a slice of your data instead of the whole picture.

BYOK removes that constraint. You decide the budget. You pick the model. You see the bill. If you want to spend $40 on a single analysis because the question is worth $40 of inference, you can — and the agent will use the full context window of your chosen model to do it. If you want to cap monthly spend at $20 because you're running a side project, your provider's spend limits handle that natively.

Ten supported providers

Each provider's setup guide lives under /documentation/ai-providers/. We test the live handshake against the latest stable API of each provider on every Admaxxer release.

Live key validation on save

When you paste a key into Settings → AI Providers, Admaxxer fires a no-op handshake against the provider's models endpoint before persisting the key. If the handshake returns 401, we surface "this key is invalid" inline and refuse to save. If it returns 429, we surface "this key is rate-limited right now" and let you save anyway with a warning. This catches the single most common BYOK failure — a key copied with leading whitespace, or copied from the wrong provider's dashboard — at the moment the user can actually fix it, not three days later when the agent fails silently.
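The flow above can be sketched in a few lines. This is illustrative, not Admaxxer's actual implementation: the endpoint URL, auth header, and result shape are assumptions, and the status-to-decision mapping is pulled out as a pure function so the save logic is easy to follow.

```typescript
type KeyCheck =
  | { ok: true; warning?: string }
  | { ok: false; reason: string };

// Pure mapping from the handshake's HTTP status to the save decision:
// 401 blocks the save, 429 saves with a warning, other failures block.
function interpretHandshake(status: number): KeyCheck {
  if (status >= 200 && status < 300) return { ok: true };
  if (status === 401) return { ok: false, reason: "this key is invalid" };
  if (status === 429)
    return { ok: true, warning: "this key is rate-limited right now" };
  return { ok: false, reason: `provider returned ${status}` };
}

// No-op handshake: a GET against the provider's model-list endpoint never
// mutates state, so it is safe to fire before persisting the key.
// (Header shape varies by provider; Bearer auth shown as an example.)
async function validateKey(
  modelsUrl: string,
  apiKey: string,
): Promise<KeyCheck> {
  const res = await fetch(modelsUrl, {
    // trim() catches the leading-whitespace copy/paste failure
    headers: { Authorization: `Bearer ${apiKey.trim()}` },
  });
  return interpretHandshake(res.status);
}
```

Keeping the status mapping pure makes the 401-vs-429 behavior trivially unit-testable without hitting any provider.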

AES-256-GCM at rest, workspace-scoped

Keys are encrypted at rest using AES-256-GCM with a 96-bit IV per key. The encryption key (ENCRYPTION_KEY) is held in Replit's secret store, never committed to git, and rotated on a documented schedule. Decryption happens only at the moment of inference inside the server process; the plaintext key is never written to disk and never logged. See the BYOK encryption architecture doc for the full threat model and key-rotation flow.
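A minimal sketch of the scheme described above, using Node's built-in crypto module: AES-256-GCM with a fresh 96-bit IV generated per key. The field names and hex encoding are illustrative; Admaxxer's actual storage format may differ.

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

const IV_BYTES = 12; // 96-bit IV, the recommended size for GCM

interface SealedKey {
  iv: string;         // hex
  ciphertext: string; // hex
  tag: string;        // GCM auth tag (hex), detects tampering at decrypt time
}

function seal(plaintextKey: string, masterKey: Buffer): SealedKey {
  const iv = randomBytes(IV_BYTES); // never reuse an IV under the same master key
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const ciphertext = Buffer.concat([
    cipher.update(plaintextKey, "utf8"),
    cipher.final(),
  ]);
  return {
    iv: iv.toString("hex"),
    ciphertext: ciphertext.toString("hex"),
    tag: cipher.getAuthTag().toString("hex"),
  };
}

function unseal(sealed: SealedKey, masterKey: Buffer): string {
  const decipher = createDecipheriv(
    "aes-256-gcm",
    masterKey,
    Buffer.from(sealed.iv, "hex"),
  );
  decipher.setAuthTag(Buffer.from(sealed.tag, "hex"));
  return Buffer.concat([
    decipher.update(Buffer.from(sealed.ciphertext, "hex")),
    decipher.final(),
  ]).toString("utf8");
}
```

Because GCM is authenticated, a flipped bit in the ciphertext or tag makes `unseal` throw instead of returning garbage, which is exactly the failure mode you want for stored credentials.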

Keys are workspace-scoped. A workspace can hold one key per provider; a user with access to multiple workspaces can use a different key (or a different provider entirely) in each. This is intentional — a teammate's personal OpenAI key should not show up in your client's workspace by accident.
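The scoping rule is easy to model: at most one key per provider per workspace, with revocation deleting the stored ciphertext outright. This in-memory version is a sketch of the invariant, not Admaxxer's persistence layer; the class and method names are hypothetical.

```typescript
type Provider = "openai" | "anthropic" | "google" | "xai";

class WorkspaceKeyStore {
  // Composite key `${workspaceId}:${provider}` enforces "one key per
  // provider per workspace" structurally: a second save overwrites.
  private keys = new Map<string, string>();

  save(workspaceId: string, provider: Provider, sealedKey: string): void {
    this.keys.set(`${workspaceId}:${provider}`, sealedKey);
  }

  get(workspaceId: string, provider: Provider): string | undefined {
    return this.keys.get(`${workspaceId}:${provider}`);
  }

  // Revocation is a hard delete; nothing is retained after removal.
  remove(workspaceId: string, provider: Provider): void {
    this.keys.delete(`${workspaceId}:${provider}`);
  }
}
```

Because the workspace ID is part of the lookup key, a key saved in one workspace is structurally unreachable from another, which is the isolation guarantee agencies care about.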

Per-session model picker

The Maxxer chat agent shows a model picker at the top of every chat. You pick the provider+model on a per-session basis. Switching mid-conversation is supported — chat history persists across providers, so you can start a thread on a cheap fast model to scope the question, then switch to a frontier reasoning model for the deep analysis. The agent re-sends the conversation history to the new provider on the first turn after the switch.
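Mid-conversation switching works because the history can be kept in a provider-agnostic shape and replayed in the new provider's message format on the first turn after the switch. A sketch under that assumption (the `Turn` type and mapper names are illustrative):

```typescript
type Turn = { role: "user" | "assistant"; text: string };

// OpenAI-style chat payloads take a flat messages array with string content...
function toOpenAIMessages(history: Turn[]) {
  return history.map((t) => ({ role: t.role, content: t.text }));
}

// ...while Anthropic's Messages API expects content as a list of blocks.
function toAnthropicMessages(history: Turn[]) {
  return history.map((t) => ({
    role: t.role,
    content: [{ type: "text", text: t.text }],
  }));
}
```

Storing `Turn[]` as the source of truth means neither provider's wire format leaks into the persisted conversation, so switching is just a different mapper on the next request.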

Methodology

As of: April 30, 2026.

All product claims on this page reflect the live behavior of the Admaxxer platform on the date above. Where a metric is cited (e.g., the 92% sustained CAPI match rate), the measurement window, sample size, and source are stated inline next to the figure. Pricing tiers and plan inclusions are documented on the pricing page; if you find a discrepancy between this page and the pricing page, the pricing page is canonical and we'd like to know — please email hello@admaxxer.com.

Comparisons to third-party products (Triple Whale, Northbeam, Hyros) reflect publicly documented behavior of those products as of the as-of date. We do not maintain inside knowledge of competitor roadmaps; if a competitor has shipped a feature that changes the comparison, please let us know and we will update this page.

Frequently Asked Questions

What is Bring Your Own Key (BYOK)?

You paste your own API key from an AI provider into Admaxxer, and the in-app Maxxer chat agent uses your key for inference. The provider bills you directly at list price. Admaxxer does not pool tokens, does not see your invoice, and does not mark up inference.

How is this different from buying tokens through Admaxxer?

We don't sell tokens. Other platforms (Triple Whale, Northbeam, Hyros) fund inference at the platform level and bake the cost into your subscription, which means they pick the model and they cap your usage. BYOK gives you the model choice and the spend control directly.

How is the key stored?

Encrypted at rest with AES-256-GCM, 96-bit IV per key, encryption key held in the platform's secret store and never committed to source. Decryption happens only at inference time inside the server process. Plaintext keys are never logged or written to disk.

What happens if my key is rate-limited by the provider?

The agent surfaces the provider's rate-limit error inline in the chat ("Anthropic returned 429 — try again in 28 seconds") and lets you retry, switch models, or switch providers without losing the conversation. We do not silently swallow provider errors.

Can I use different keys for different workspaces?

Yes — keys are workspace-scoped. Each workspace can hold one key per provider; a user with access to multiple workspaces can use a different key (or a different provider entirely) in each. This is the right model for agencies and consultants juggling multiple clients.

Does the model pick affect chat quality?

Yes, materially. Frontier reasoning models (Claude Opus 4.6, GPT-5, Gemini 2.5 Pro) handle long-context analytics prompts dramatically better than smaller cheap models. We default new sessions to a strong-but-affordable middle option (Claude Sonnet 4.6 for Anthropic users, GPT-5 for OpenAI users) and let you upgrade per-session.

Can I revoke a key?

Yes — Settings → AI Providers → Remove key. Revocation is immediate; the next inference attempt with that provider will fail until a new key is saved. We do not retain the plaintext after deletion.

Are reasoning models like o3 and extended-thinking Claude Sonnet supported?

Yes. OpenAI o3/o4-mini and Anthropic's extended-thinking Sonnet 4.6 are first-class. The agent passes the appropriate reasoning_effort or thinking budget through to the provider, and surfaces the reasoning trace inline in the chat for prompts where the provider returns it.
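The passthrough can be sketched as one function mapping a single effort setting to each provider's reasoning parameters. The parameter shapes below track the providers' public APIs (OpenAI's `reasoning_effort` hint, Anthropic's `thinking` block with a token budget), but the budget numbers and the function itself are illustrative assumptions, not Admaxxer's actual mapping.

```typescript
type Effort = "low" | "medium" | "high";

function reasoningParams(provider: "openai" | "anthropic", effort: Effort) {
  if (provider === "openai") {
    // o-series models accept a reasoning_effort hint directly
    return { reasoning_effort: effort };
  }
  // Anthropic's extended thinking takes an explicit token budget instead;
  // these budget values are made up for the sketch.
  const budgets: Record<Effort, number> = { low: 2048, medium: 8192, high: 32768 };
  return { thinking: { type: "enabled", budget_tokens: budgets[effort] } };
}
```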

Try Admaxxer Free