Integrations and exporters
Use this page to integrate OngoingAI Gateway data with external dashboards, SIEM pipelines, and reporting jobs. It covers current export surfaces and practical pull-based integration patterns.
Export surfaces
- Exposes JSON API endpoints for health, traces, and analytics data.
- Emits structured JSON logs to stdout for request lifecycle and audit events.
  Logs include `trace_id` and `span_id` when an active span is in context, enabling direct log-to-trace correlation in Grafana Loki, Elasticsearch, or any log backend.
- Persists trace records to SQLite or Postgres for downstream reporting jobs.
- Optionally exports gateway spans and metrics with native OpenTelemetry OTLP HTTP.
- Exposes a native Prometheus `/metrics` scrape endpoint when `prometheus_enabled: true`.
- OpenTelemetry spans and metrics use normalized request-scope attributes: `provider`, `model`, `org_id`, `workspace_id`, and `route`.
- When auth is enabled and identity is present, spans also include `key_id` and `role`.
- Proxy and provider latency histograms attach exemplars with `trace_id`, enabling click-through from dashboard latency spikes to the exact trace.
- Go runtime metrics (`go_memory_*`, `go_goroutine_*`, `go_gc_*`) are registered automatically for process health monitoring.
- Supports browser-based integration clients with CORS headers on API routes.
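The log-to-trace correlation described above can be exercised with `jq`. The sample line below is illustrative, not captured gateway output; only `trace_id` and `span_id` are the documented correlation fields, and the other fields are assumptions.

```shell
# Illustrative gateway log line; real lines come from gateway stdout.
sample_line='{"level":"info","msg":"request completed","trace_id":"4bf92f3577b34da6a3ce929d0e0e4736","span_id":"00f067aa0ba902b7"}'

# Pull out the correlation pair for a log-to-trace lookup in your backend.
echo "$sample_line" | jq -r '[.trace_id, .span_id] | @tsv'
```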
Operational fit
- You need external usage, cost, latency, and error dashboards from gateway trace data.
- You need centralized audit and auth-deny visibility in your log pipeline.
- You need export behavior that does not block proxy request forwarding.
Integration flow
- Proxy traffic is captured and written asynchronously to trace storage.
- API routes under `/api/...` read from storage and return JSON responses.
- `/api/traces` supports filtered and cursor-based pull exports.
- `/api/analytics/*` exposes usage, cost, model, key, latency, error-rate, and summary aggregates.
- Gateway logs are emitted as JSON lines to stdout and can be shipped by your log collector.
- If `auth.enabled=true`, integration readers must use a gateway key with `analytics:read` for trace and analytics APIs.
- If `observability.otel.enabled=true`, the gateway emits OTLP HTTP spans and metrics to your configured collector endpoint.
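As a sketch of the pull pattern, the snippet below parses an illustrative `/api/traces` page. The `items` and `next_cursor` field names are assumptions based on the cursor pagination described above, not a documented schema; substitute the shape your gateway actually returns.

```shell
# Illustrative /api/traces response page (field names are assumptions).
page='{"items":[{"id":"tr_1","provider":"openai"},{"id":"tr_2","provider":"anthropic"}],"next_cursor":"abc123"}'

# List trace ids from the page, then read the cursor for the next request.
echo "$page" | jq -r '.items[].id'
echo "$page" | jq -r '.next_cursor'
```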
Starter integration config
No separate integration feature flag is required. You can integrate by reading HTTP APIs and shipping stdout logs.
```yaml
auth:
  enabled: true
  header: X-OngoingAI-Gateway-Key
storage:
  driver: sqlite
  path: ./ongoingai.db
tracing:
  capture_bodies: false
  body_max_size: 1048576
observability:
  otel:
    enabled: true
    endpoint: localhost:4318
    insecure: true
    service_name: ongoingai-gateway
    traces_enabled: true
    metrics_enabled: true
    sampling_ratio: 1.0
    export_timeout_ms: 3000
    metric_export_interval_ms: 10000
    # prometheus_enabled: false
    # prometheus_path: /metrics
```

With `capture_bodies: false`, your integrations still get usage, latency, and metadata fields without storing request and response payload bodies.
Integration patterns
- Dashboard polling: poll `/api/analytics/summary` on a fixed interval.
- Incremental trace export: use `/api/traces?limit=200` with the returned `next_cursor`.
- SIEM integration: ship JSON stdout logs and index `audit_action`, `audit_outcome`, and `path`.
- Multi-reader analytics workloads: use Postgres storage for shared query access patterns.
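For the dashboard-polling pattern, a job typically derives panel values from the summary payload. The payload below and its field names (`total_requests`, `total_errors`) are illustrative assumptions; substitute the fields your `/api/analytics/summary` response actually returns.

```shell
# Illustrative summary payload (field names are assumptions, not a schema).
summary='{"total_requests":1200,"total_errors":36}'

# Derive an error-rate percentage for a dashboard panel.
echo "$summary" | jq -r '"error_rate_pct=\(.total_errors * 100 / .total_requests)"'
```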
Example integrations
Pull summary metrics for a dashboard job
```bash
curl "http://localhost:8080/api/analytics/summary" \
  -H "X-OngoingAI-Gateway-Key: GATEWAY_KEY"
```

Placeholder:
- `GATEWAY_KEY`: Gateway key token with `analytics:read` permission.
Export traces incrementally with cursor pagination
```bash
curl "http://localhost:8080/api/traces?limit=200" \
  -H "X-OngoingAI-Gateway-Key: GATEWAY_KEY"
curl "http://localhost:8080/api/traces?limit=200&cursor=NEXT_CURSOR" \
  -H "X-OngoingAI-Gateway-Key: GATEWAY_KEY"
```

Placeholder:
- `NEXT_CURSOR`: Cursor value from the previous `/api/traces` response.
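The two requests above generalize to a loop. This sketch stubs `fetch_page` with two canned pages so the cursor logic runs as-is; in production, replace the stub body with the `curl` call shown above. The `items`/`next_cursor` response shape is an assumption based on the pagination described earlier.

```shell
# Stub simulating two /api/traces pages; swap in the curl call for real use.
fetch_page() {
  case "$1" in
    "")   echo '{"items":[{"id":"tr_1"},{"id":"tr_2"}],"next_cursor":"c2"}' ;;
    "c2") echo '{"items":[{"id":"tr_3"}],"next_cursor":null}' ;;
  esac
}

cursor=""
while :; do
  page=$(fetch_page "$cursor")
  echo "$page" | jq -r '.items[].id'            # hand each trace id downstream
  cursor=$(echo "$page" | jq -r '.next_cursor')
  if [ "$cursor" = "null" ]; then break; fi     # last page reached
done
```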
Filter audit-focused log lines from stdout
```bash
ongoingai serve | jq -c 'select(.audit_action != null)'
```

This stream includes gateway auth deny and gateway key lifecycle audit events.
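For SIEM shaping, the same `jq` filter can project just the indexed fields. The sample lines below are illustrative stand-ins for gateway stdout; `audit_action`, `audit_outcome`, and `path` are the fields named above, while the remaining fields are assumptions.

```shell
# Keep only audit events and project the fields your SIEM indexes.
printf '%s\n' \
  '{"level":"warn","audit_action":"auth.deny","audit_outcome":"denied","path":"/api/traces"}' \
  '{"level":"info","msg":"request completed","path":"/openai/v1/chat/completions"}' |
jq -c 'select(.audit_action != null) | {audit_action, audit_outcome, path}'
```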
Validation checklist
- Start the gateway:

  ```bash
  ongoingai config validate
  ongoingai serve
  ```

- Send one proxied provider request through `/openai/...` or `/anthropic/...`.
- Query integration APIs:

  ```bash
  curl "http://localhost:8080/api/health"
  curl "http://localhost:8080/api/traces?limit=1" \
    -H "X-OngoingAI-Gateway-Key: GATEWAY_KEY"
  curl "http://localhost:8080/api/analytics/summary" \
    -H "X-OngoingAI-Gateway-Key: GATEWAY_KEY"
  ```
You should see:
- JSON responses from all three endpoints.
- Trace and summary data after proxied traffic.
- Structured JSON log lines in gateway stdout.
Troubleshooting
Integration APIs return 401 or 403
- Symptom: `/api/traces` or `/api/analytics/*` requests are rejected.
- Cause: Missing gateway key, or the key lacks `analytics:read`.
- Fix: Use a valid gateway key with `analytics:read`.
`/api/traces` returns no items
- Symptom: Trace export responses are empty.
- Cause: No proxied provider requests were captured, or writes are still pending in the async writer queue.
- Fix: Send provider traffic through gateway routes, then retry after a short delay.
Log pipeline does not parse gateway output as JSON
- Symptom: Log collector indexes gateway lines as unstructured text.
- Cause: Collector parser is not configured for newline-delimited JSON.
- Fix: Configure the collector source as JSON line input for gateway stdout.
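Before reconfiguring the collector, you can confirm the stream really is valid newline-delimited JSON; `jq` exits non-zero on an unparseable line. The sample file here stands in for captured gateway output.

```shell
# Write two sample lines standing in for captured gateway stdout.
printf '%s\n' '{"level":"info","msg":"request completed"}' \
              '{"level":"warn","msg":"auth denied"}' > /tmp/gateway-sample.log

# jq parses each line; a non-JSON line makes this pipeline exit non-zero.
if jq -e 'type == "object"' < /tmp/gateway-sample.log > /dev/null; then
  echo "valid NDJSON"
fi
```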
Prometheus `/metrics` returns 404
- Symptom: Requests to `/metrics` return `404`.
- Cause: `prometheus_enabled` is not set to `true`.
- Fix: Set `observability.otel.prometheus_enabled: true` in YAML or `ONGOINGAI_PROMETHEUS_ENABLED=true` as an env var. Verify that `prometheus_path` matches the requested path.
OTLP exporter targets fail
- Symptom: OTLP export errors in gateway logs.
- Cause: OpenTelemetry export is disabled or the collector endpoint is unreachable.
- Fix: Set `observability.otel.enabled=true` and verify that `observability.otel.endpoint` is reachable.