What is OngoingAI Gateway
OngoingAI Gateway is a headless AI gateway that runs between your application and AI providers such as OpenAI and Anthropic.
You point existing SDK or CLI traffic at the gateway base URL. The gateway forwards requests, returns streaming responses, and records trace metadata for audit and cost visibility.
Quick shape of a deployment
Most teams deploy one gateway per environment and only change client base URLs. Application request payloads and SDK calls stay the same.
```sh
export OPENAI_BASE_URL=http://localhost:8080/openai/v1
export ANTHROPIC_BASE_URL=http://localhost:8080/anthropic
```

Typical outcomes
Use OngoingAI Gateway when you need to:
- Attribute AI traffic and cost by gateway key, org_id, or workspace_id.
- Apply one access-control layer across providers and models.
- Investigate production failures with request-level trace records.
- Keep provider API keys in client requests and out of gateway storage.
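As an illustration of the attribution use case, the sketch below sums estimated cost per org_id from exported trace records. The record shape and field names here are assumptions for illustration; check the gateway's actual trace schema before relying on them.

```python
# Sketch: aggregate estimated cost per org_id from trace records.
# The record fields (org_id, estimated_cost) are assumed, not a
# documented export format.
from collections import defaultdict

def cost_by_org(traces):
    totals = defaultdict(float)
    for t in traces:
        totals[t["org_id"]] += t.get("estimated_cost", 0.0)
    return dict(totals)

traces = [
    {"org_id": "acme", "estimated_cost": 0.012},
    {"org_id": "acme", "estimated_cost": 0.003},
    {"org_id": "globex", "estimated_cost": 0.020},
]
summary = cost_by_org(traces)
```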
How request handling works
- Your client sends a provider request to a gateway route such as /openai/v1/chat/completions or /anthropic/v1/messages.
- If gateway auth is enabled, your client also sends X-OngoingAI-Gateway-Key.
- The gateway validates access, tenant scope, and limits.
- The gateway strips its own auth header and forwards the provider credential upstream.
- The provider response streams back through the gateway to your client.
- The gateway writes trace metadata asynchronously so proxy latency stays predictable.
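The header handling in the steps above can be sketched as a simple transformation: the gateway's own auth header is removed, and everything else, including the provider credential, passes through. This function is illustrative, not the gateway's actual code; only the X-OngoingAI-Gateway-Key header name comes from the text.

```python
def build_upstream_headers(incoming: dict) -> dict:
    """Strip the gateway's own auth header before forwarding upstream.

    Illustrative sketch: the provider credential (e.g. Authorization)
    passes through untouched; only the gateway key is removed.
    """
    return {
        k: v for k, v in incoming.items()
        if k.lower() != "x-ongoingai-gateway-key"
    }

incoming = {
    "Authorization": "Bearer sk-provider-key",
    "X-OngoingAI-Gateway-Key": "gw_123",
    "Content-Type": "application/json",
}
upstream = build_upstream_headers(incoming)
```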
What data it records
For each proxied request, the gateway records metadata such as:
- Provider, route, HTTP method, status code, and latency.
- Model, token usage, and estimated cost when the provider returns usage fields.
- A hashed provider-key identifier for attribution.
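A trace record built from those fields might look like the sketch below. The field names and the SHA-256 hashing scheme are assumptions for illustration, not the gateway's documented schema; the point is that traces carry a key fingerprint, never the key itself.

```python
import hashlib

def provider_key_fingerprint(api_key: str) -> str:
    # Hash the provider key so traces carry an identifier, never the key.
    # SHA-256 truncated to 16 hex chars is an assumed scheme.
    return hashlib.sha256(api_key.encode()).hexdigest()[:16]

# Hypothetical trace record; field names are illustrative.
trace = {
    "provider": "openai",
    "route": "/openai/v1/chat/completions",
    "method": "POST",
    "status": 200,
    "latency_ms": 840,
    "model": "gpt-4o",
    "prompt_tokens": 512,
    "completion_tokens": 128,
    "provider_key_hash": provider_key_fingerprint("sk-example"),
}
```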
By default, the gateway does not capture request or response bodies. To capture
payloads for debugging, set tracing.capture_bodies: true.
If gateway auth is enabled, each trace is scoped to org_id and
workspace_id.
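A configuration fragment enabling body capture might look like the following; only the tracing.capture_bodies key comes from the text above, and the surrounding file layout is an assumption:

```yaml
tracing:
  capture_bodies: true   # opt in; bodies are NOT recorded by default
```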
What it does not do
OngoingAI Gateway intentionally has a narrow scope:
- It does not include a web UI.
- It does not manage browser sessions or end-user sign-in.
- It does not replace your application authorization model.
- It does not persist upstream provider API keys.
Deployment model
You can run OngoingAI Gateway as a single self-hosted binary.
- Default storage is SQLite for local or single-node deployments.
- Postgres is available for team and multi-service deployments.
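A storage configuration for the two backends might look like the sketch below. These key names are illustrative assumptions, not documented settings; consult the gateway's own configuration reference for the real ones.

```yaml
# Single-node default (assumed key names):
storage:
  driver: sqlite
  path: ./ongoingai.db

# Team / multi-service deployment (assumed key names):
# storage:
#   driver: postgres
#   dsn: postgres://gateway:secret@db:5432/ongoingai
```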