Integrations and exporters
Use this page to integrate OngoingAI Gateway data with external dashboards, SIEM pipelines, and reporting jobs. It covers current export surfaces and practical pull-based integration patterns.
Export surfaces
- Exposes JSON API endpoints for health, traces, and analytics data.
- Emits structured JSON logs to stdout for request lifecycle and audit events.
- Persists trace records to SQLite or Postgres for downstream reporting jobs.
- Optionally exports gateway spans and metrics via native OpenTelemetry OTLP over HTTP.
- OpenTelemetry spans include gateway tenant identity attributes (org_id, workspace_id, key_id) when auth is enabled.
- Supports browser-based integration clients with CORS headers on API routes.
Operational fit
- You need external usage and cost dashboards from gateway trace data.
- You need centralized audit and auth-deny visibility in your log pipeline.
- You need export behavior that does not block proxy request forwarding.
Integration flow
- Proxy traffic is captured and written asynchronously to trace storage.
- API routes under /api/... read from storage and return JSON responses.
- /api/traces supports filtered and cursor-based pull exports.
- /api/analytics/* exposes usage, cost, model, key, and summary aggregates.
- Gateway logs are emitted as JSON lines to stdout and can be shipped by your log collector.
- If auth.enabled=true, integration readers must use a gateway key with analytics:read for trace and analytics APIs.
- If observability.otel.enabled=true, the gateway emits OTLP HTTP spans and metrics to your configured collector endpoint.
Starter integration config
No separate integration feature flag is required. You can integrate by reading HTTP APIs and shipping stdout logs.
YAML
auth:
  enabled: true
  header: X-OngoingAI-Gateway-Key
storage:
  driver: sqlite
  path: ./ongoingai.db
tracing:
  capture_bodies: false
  body_max_size: 1048576
observability:
  otel:
    enabled: true
    endpoint: localhost:4318
    insecure: true
    service_name: ongoingai-gateway
    traces_enabled: true
    metrics_enabled: true
    sampling_ratio: 1.0
    export_timeout_ms: 3000
    metric_export_interval_ms: 10000

With capture_bodies=false, your integrations still get usage, latency, and metadata fields without storing request and response payload bodies.
Integration patterns
- Dashboard polling: poll /api/analytics/summary on a fixed interval (see the sketch after this list).
- Incremental trace export: use /api/traces?limit=200 with the returned next_cursor.
- SIEM integration: ship JSON stdout logs and index audit_action, audit_outcome, and path.
- Multi-reader analytics workloads: use Postgres storage for shared query access patterns.
Example integrations
Pull summary metrics for a dashboard job
Bash
curl "http://localhost:8080/api/analytics/summary" \
-H "X-OngoingAI-Gateway-Key: GATEWAY_KEY"Placeholder:
GATEWAY_KEY: Gateway key token withanalytics:readpermission.
Export traces incrementally with cursor pagination
Bash
curl "http://localhost:8080/api/traces?limit=200" \
-H "X-OngoingAI-Gateway-Key: GATEWAY_KEY"
curl "http://localhost:8080/api/traces?limit=200&cursor=NEXT_CURSOR" \
-H "X-OngoingAI-Gateway-Key: GATEWAY_KEY"Placeholder:
NEXT_CURSOR: Cursor value from the previous/api/tracesresponse.
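For an export job, the two calls above can be wrapped in a loop that follows the cursor until the API stops returning one. This is a sketch that assumes the /api/traces response exposes the cursor as a top-level next_cursor field, as described in the integration patterns above; adjust the jq path if your response nests it differently. The traces_export.jsonl path and GATEWAY_KEY variable are illustrative.

Bash
# Follow next_cursor until the export is caught up, appending each page of
# trace records to a local JSON Lines file for downstream reporting jobs.
BASE_URL="http://localhost:8080"
CURSOR=""
while true; do
  if [ -n "$CURSOR" ]; then
    PAGE=$(curl -s "$BASE_URL/api/traces?limit=200&cursor=$CURSOR" \
      -H "X-OngoingAI-Gateway-Key: $GATEWAY_KEY")
  else
    PAGE=$(curl -s "$BASE_URL/api/traces?limit=200" \
      -H "X-OngoingAI-Gateway-Key: $GATEWAY_KEY")
  fi
  printf '%s\n' "$PAGE" >> traces_export.jsonl
  CURSOR=$(printf '%s' "$PAGE" | jq -r '.next_cursor // empty')
  [ -z "$CURSOR" ] && break   # no cursor returned: nothing left to pull
done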
Filter audit-focused log lines from stdout
Bash
ongoingai serve | jq -c 'select(.audit_action != null)'

This stream includes gateway auth-deny and gateway key lifecycle audit events.
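If your collector cannot apply the filter itself, a variant of the same pipeline can pre-shape events around the fields the SIEM pattern indexes. The field list below only repeats audit_action, audit_outcome, and path from above; add any other fields your log schema carries in the same way.

Bash
# Keep only audit events and project the indexed fields before shipping.
ongoingai serve | jq -c 'select(.audit_action != null)
  | {audit_action, audit_outcome, path}'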
Validation checklist
- Start the gateway:

Bash
ongoingai config validate
ongoingai serve

- Send one proxied provider request through /openai/... or /anthropic/....

- Query integration APIs:

Bash
curl "http://localhost:8080/api/health"
curl "http://localhost:8080/api/traces?limit=1" \
  -H "X-OngoingAI-Gateway-Key: GATEWAY_KEY"
curl "http://localhost:8080/api/analytics/summary" \
  -H "X-OngoingAI-Gateway-Key: GATEWAY_KEY"
You should see:
- JSON responses from all three endpoints.
- Trace and summary data after proxied traffic.
- Structured JSON log lines in gateway stdout.
Troubleshooting
Integration APIs return 401 or 403
- Symptom: /api/traces or /api/analytics/* requests are rejected.
- Cause: Missing gateway key, or the key is missing analytics:read.
- Fix: Use a valid gateway key with analytics:read.
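To confirm the header is reaching the gateway at all, a quick status check against a single trace read is enough. Whether a missing key maps to 401 and a missing permission to 403 depends on the gateway's auth handling, so treat the exact code as informational.

Bash
# -i prints the status line and response headers so you can see the exact
# rejection code returned for this key.
curl -i "http://localhost:8080/api/traces?limit=1" \
  -H "X-OngoingAI-Gateway-Key: GATEWAY_KEY"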
/api/traces returns no items
- Symptom: Trace export responses are empty.
- Cause: No proxied provider requests were captured, or writes are still pending in the async writer queue.
- Fix: Send provider traffic through gateway routes, then retry after a short delay.
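As a quick way to generate a capturable trace, send one request through a provider route. The /openai/v1/chat/completions sub-path, the request body, and the gateway key header on the proxy route are a hypothetical OpenAI-style example; use whatever provider route and payload your deployment actually proxies.

Bash
# One proxied request is enough for /api/traces to return at least one item
# after the async writer flushes.
curl "http://localhost:8080/openai/v1/chat/completions" \
  -H "X-OngoingAI-Gateway-Key: GATEWAY_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'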
Log pipeline does not parse gateway output as JSON
- Symptom: Log collector indexes gateway lines as unstructured text.
- Cause: Collector parser is not configured for newline-delimited JSON.
- Fix: Configure the collector source as JSON line input for gateway stdout.
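Before changing the collector, it can help to confirm the gateway output really is one JSON object per line. A minimal check, assuming you have captured a sample of gateway stdout to a file such as gateway_sample.log:

Bash
# jq exits non-zero at the first line that is not standalone JSON, so a clean
# exit here means the collector only needs a JSON-lines (NDJSON) parser.
jq -e . gateway_sample.log > /dev/null \
  && echo "stdout sample is newline-delimited JSON"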
Expected Prometheus or OpenTelemetry exporter endpoint is missing
- Symptom: Requests to /metrics or OTLP exporter targets fail.
- Cause: /metrics is not implemented as a native Prometheus scrape endpoint, or OpenTelemetry export is disabled or misconfigured.
- Fix: For Prometheus, use a collector bridge (OTLP -> Prometheus). For OTel, set observability.otel.enabled=true and verify observability.otel.endpoint reachability.
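A reachability check against the collector from the gateway host can separate configuration issues from network issues. The /v1/traces path is the standard OTLP/HTTP traces endpoint; the empty JSON body is only meant to provoke a response, so any HTTP status code (rather than a connection error) indicates the endpoint is reachable.

Bash
# Prints the HTTP status code from the collector, or a curl error if the
# endpoint configured in observability.otel.endpoint is unreachable.
curl -sS -o /dev/null -w "%{http_code}\n" \
  -X POST -H "Content-Type: application/json" -d '{}' \
  "http://localhost:4318/v1/traces"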