The hosted OpAMP control plane for OpenTelemetry Collectors
Monitor collector health, inspect effective configs, sync from GitHub, and roll out changes safely — while your telemetry keeps flowing to the tools you already use.
| Collector | Version | Config | Heartbeat | Status |
|---|---|---|---|---|
| otel-edge-01.iad | 0.105.0 | prod-ingress@v14 | 2s ago | healthy |
| otel-edge-02.iad | 0.105.0 | prod-ingress@v14 | 2s ago | healthy |
| otel-edge-03.iad | 0.105.0 | prod-ingress@v14 | 3s ago | healthy |
| otel-gateway-01 | 0.104.2 | prod-gateway@v9 | 5s ago | queue 84% |
| otel-gateway-02 | 0.104.2 | prod-gateway@v9 | 14m ago | offline |
| k8s-daemon-iad-a | 0.105.0 | k8s-daemon@v22 | 1s ago | healthy |
Collectors are easy to deploy. Fleets are hard to operate.
Once collectors spread across clusters, regions, and teams, scattered YAML and ad hoc scripts stop scaling. You don't need a new place to send telemetry — you need a place to operate the layer that sends it.
Which collectors are healthy?
You added 30 nodes last sprint. Three of them haven't reported a heartbeat in an hour — and nobody noticed.
Which config is actually running?
The config in the repo is v14. Half the fleet is on v12. The intended state and effective state quietly drift apart.
What changed, and who changed it?
Three engineers pushed YAML last week. There is no timeline of who applied what, where, and when.
Which rollout failed?
You shipped a tail-sampling tweak. Some collectors picked it up, some rejected it, and the script exited 0 anyway.
Are collectors quietly dropping data?
Exporter queues are full. Spans are being dropped. Your backend dashboards look fine. The collector knows; nobody is listening.
Start with visibility. No remote changes required.
Connect collectors in monitor-only mode to see health, heartbeat, version, capabilities, and effective configuration. Upgrade to managed mode only when you and your security team are ready.
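Hooking a collector up for visibility is a small change on the collector side. A minimal sketch, assuming the contrib `opamp` extension; the endpoint and token below are placeholders, and exact field names can vary between collector versions:

```yaml
extensions:
  opamp:
    server:
      ws:
        # Placeholder endpoint and token -- substitute your fleet's values.
        endpoint: wss://opamp.o11yfleet.example/v1/opamp
        headers:
          Authorization: "Bearer ${env:O11YFLEET_TOKEN}"

service:
  extensions: [opamp]
  # Existing pipelines stay untouched; telemetry keeps flowing to your backend.
```

In monitor-only mode the collector only reports status and effective configuration over this OpAMP session; nothing is pushed back to it.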
Enable managed mode
Allow O11yFleet to push managed configurations to selected collectors. Existing monitor-only collectors are unaffected.
Version, roll out, and roll back without touching a node.
Store and version collector configs in one place. Compare intended state with the effective config each collector reports, and revert instantly when something goes wrong.
| Version | Status | Change | Commit | Author | When |
|---|---|---|---|---|---|
| v14 | applying | tighten batch processor timeout | a91f3b2 | maya | 6m ago |
| v13 | applied · 42 | add k8sattributes processor | 7c2ed11 | jordan | 2d ago |
| v12 | rolled back | try otlp/http exporter | 3a01f9e | sam | 4d ago |
| v11 | applied · 42 | raise memory_limiter to 1500MiB | b9d44a2 | maya | 1w ago |
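For concreteness, the v14 entry above is the shape of change this flow manages; a sketch of what tightening the batch processor timeout might look like in the managed config (values illustrative):

```yaml
processors:
  batch:
    timeout: 2s           # was 5s: flush partially filled batches sooner
    send_batch_size: 8192 # unchanged; cap on items per batch
```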
Pick the workflow that fits each configuration.
Production gateways want pull requests and CODEOWNERS. Sandboxes and on-call tweaks want a UI editor. Choose per configuration — edit in the app, or back it with a GitHub repo and a watched path. Both land in the same rollout pipeline.
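For the repo-backed path, plain GitHub mechanics apply. For example, a CODEOWNERS entry (paths and team names hypothetical) gates gateway changes behind platform review while leaving the sandbox open:

```
# Hypothetical paths and teams -- adjust to your repo layout.
/collectors/prod-gateway/  @acme/platform-observability
/collectors/sandbox/       @acme/dev-experience
```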
See collector problems before telemetry disappears.
Pipeline throughput, queue pressure, exporter errors, and drops — the things the collector knows but your backend dashboards don't. Your application telemetry still flows to the tools you already use; we only surface what the collector says about itself.
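These signals come from the collector's own internal telemetry. A sketch of turning the detail up, with the metric families meant here noted in comments (exact names vary by collector version):

```yaml
service:
  telemetry:
    metrics:
      level: detailed
      # Among others, this surfaces:
      #   otelcol_exporter_queue_size / otelcol_exporter_queue_capacity    (queue pressure)
      #   otelcol_exporter_send_failed_spans                               (exporter errors)
      #   otelcol_processor_refused_spans / otelcol_processor_dropped_spans (drops)
```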
Your telemetry stays yours.
We manage collectors. We don't replace your observability backend. Control the pipeline without moving the destination.
[Diagram: collector nodes and k8s pods managed by O11yFleet, beside the telemetry data path]
O11yFleet sits beside the data path — not in it. Your telemetry never traverses our infrastructure.
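In collector terms, the data path and control path separate cleanly. A minimal sketch with placeholder endpoints: your backend stays the exporter target, and O11yFleet sees only the OpAMP session.

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlp:
    endpoint: otlp.your-backend.example:4317  # placeholder: your existing backend

extensions:
  opamp:
    server:
      ws:
        endpoint: wss://opamp.o11yfleet.example/v1/opamp  # control path only

service:
  extensions: [opamp]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]   # data path: straight to your backend, never O11yFleet
```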
Free visibility. Paid control.
Start free with monitor-only mode. Upgrade when you need managed configurations, rollout workflows, and team governance.
| Plan | What's included | Get started |
|---|---|---|
| Free · monitor-only | See every collector's health, version, and effective config. Forever free. | Start free |
| Business · most teams | Managed configs, rollouts, GitHub sync, flow dashboards, RBAC, webhooks. | Start trial |
| Enterprise · custom | SSO, SCIM, advanced RBAC, audit export, extended retention, SLA. | Talk to sales |

Connect your first collector in under a minute.
Free, monitor-only, no credit card. Add managed configs and team workflows when you're ready.