Concepts

Events

How Yorker emits, stores, and queries the unified stream of check, alert, and derived telemetry events.

Events

Every check execution, alert state transition, and derived signal in Yorker becomes an event. Events are the common currency of the dashboard timeline, the OTel emission layer, and the /api/events query interface. This page explains where events come from, which event types exist, and how to query them.

Two sources

Yorker materializes events from two underlying tables:

| Source | Table | Event types |
| --- | --- | --- |
| Check executions | check_results | check.completed, check.failed, check.mcp_schema_drift |
| Alert state transitions | alert_events | alert.triggered, alert.acknowledged, alert.recovered, alert.resolved, alert.escalated, alert.notified, alert.muted |

The unified /api/events endpoint reads from both, merges by timestamp, and applies pagination so consumers see one stream.

A single MCP check run can emit two events from one check_results row: the success/failure event and a check.mcp_schema_drift event when the run detected a tool list or signature change. The two events share the same top-level traceId and the same synthetics.run.id inside resourceAttributes, so a consumer can correlate them.
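As a sketch of that correlation (the event shapes follow the /api/events envelope documented below; the IDs and values are made up for illustration):

```python
from collections import defaultdict

# Sample events as returned by /api/events; IDs and values are hypothetical.
# Note the drift event's ID is the result ID suffixed with ":drift".
events = [
    {"id": "res_1", "eventType": "check.completed", "traceId": "abc",
     "resourceAttributes": {"synthetics.run.id": "run_42"}},
    {"id": "res_1:drift", "eventType": "check.mcp_schema_drift", "traceId": "abc",
     "resourceAttributes": {"synthetics.run.id": "run_42"}},
    {"id": "res_2", "eventType": "check.completed", "traceId": "def",
     "resourceAttributes": {"synthetics.run.id": "run_43"}},
]

# Group by the shared run id so the outcome event and any drift event
# from the same MCP check run land in one bucket.
by_run = defaultdict(list)
for ev in events:
    by_run[ev["resourceAttributes"]["synthetics.run.id"]].append(ev["eventType"])

print(by_run["run_42"])  # ['check.completed', 'check.mcp_schema_drift']
```

Grouping by the top-level traceId works the same way, since both events carry it.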

Relation to OTel emission

The events stream you see in the dashboard and via /api/events is the same set of facts Yorker also emits to your configured OTel collector. The two surfaces serve different audiences:

| Surface | Source | When |
| --- | --- | --- |
| /api/events | check_results + alert_events (Postgres) | Always available, no OTel endpoint required |
| OTel collector | Outbox shipped as OTLP log records to your backend | Only when a team OTLP endpoint is configured under Settings > Telemetry (OTLP) |

Derived events (incident lifecycle, certificate observations, SLO burn warnings, deployment markers, maintenance windows, cross-monitor correlation) are produced by the control plane and shipped to your collector via the orchestrator outbox. They do not appear under /api/events today; that endpoint is scoped to check executions and alert state transitions. To see derived events, query your OTel backend (HyperDX, ClickStack, Grafana, etc.).

For the architecture of the two emission paths (runner-direct vs control-plane outbox), see Architecture > Telemetry flow.

The full OTel event catalog

When a team OTLP endpoint is configured, Yorker emits the events below as OTLP log records via the outbox. Resource attributes vary by event scope:

  • Check-scoped events (synthetics.check.completed, synthetics.check.failed, synthetics.step.completed, synthetics.mcp.schema_drift, synthetics.tls.*, synthetics.correlation.detected) carry synthetics.check.id, synthetics.check.name, synthetics.check.type (and url.full when known) as resource attributes. synthetics.run.id, synthetics.location.id, synthetics.location.name, and synthetics.check.status are emitted as LogRecord attributes, not resource attributes.
  • Alert events (synthetics.alert.state_changed) carry service.name = "synthetics" plus synthetics.check.name (always emitted; the upstream alert pipeline requires a non-null check name) and synthetics.check.id (when set on the alert) as resource attributes. They are linked to a check, but they don't carry the full per-check resource set (no synthetics.check.type, no location attrs at the resource level).
  • Team-scoped events (synthetics.deployment.created, synthetics.maintenance_window.*, synthetics.incident.*, synthetics.slo.budget.warning) carry only service.name = "synthetics" plus event-specific identifiers as LogRecord attributes; they are not bound to a single check.
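The scoping rules above can be sketched as a split between the two attribute bags. The dict shape below is illustrative only, not Yorker's actual emitter code, and the sample values are made up:

```python
# Illustrative split of a check-scoped event into resource attributes vs
# LogRecord attributes, following the scoping rules described above.
def build_check_log_record(check, run):
    return {
        "resource": {  # stable check identity
            "synthetics.check.id": check["id"],
            "synthetics.check.name": check["name"],
            "synthetics.check.type": check["type"],
            "url.full": check.get("url"),
        },
        "attributes": {  # per-run facts live on the LogRecord itself
            "synthetics.run.id": run["id"],
            "synthetics.location.id": run["location_id"],
            "synthetics.location.name": run["location_name"],
            "synthetics.check.status": run["status"],
        },
    }

record = build_check_log_record(
    {"id": "chk_abc123", "name": "Homepage", "type": "http",
     "url": "https://example.com"},
    {"id": "run_abc", "location_id": "loc_us_east",
     "location_name": "US East (Ashburn)", "status": "success"},
)
print(record["resource"]["synthetics.check.id"])   # chk_abc123
print(record["attributes"]["synthetics.run.id"])   # run_abc
```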

Event-specific attributes are inlined in the OpenTelemetry concepts page for the cross-monitor correlation event. For other events, the simplest path is to query a sample row in your OTel backend and inspect the attribute set, or grep the source: apps/web/src/lib/otel-events.ts for check, step, MCP drift, TLS, deployment, maintenance, SLO, incident, and correlation events; apps/web/src/lib/otel-emit.ts for the alert state-changed event.

| Event name | Fired when | Source |
| --- | --- | --- |
| synthetics.check.completed | A check run finished, regardless of outcome (LogRecord severity is INFO on success, ERROR on failure/error/timeout). The closest analog to "this check ran." | Per-result, outbox |
| synthetics.check.failed | A check run failed (failure, error, or timeout). Fires in addition to synthetics.check.completed, not instead of it. | Per-result, outbox |
| synthetics.step.completed | A browser-check step completed | Per-step, outbox |
| synthetics.alert.state_changed | An alert instance transitioned state | Per-transition, outbox |
| synthetics.slo.budget.warning | An SLO crossed a burn-rate threshold | Burn-rate evaluator, outbox |
| synthetics.maintenance_window.started | A maintenance window became active | Window scheduler, outbox |
| synthetics.maintenance_window.ended | A maintenance window ended | Window scheduler, outbox |
| synthetics.deployment.created | A deployment marker was recorded via POST /api/events/deployments | Deployment ingest, outbox |
| synthetics.tls.certificate_observed | The TLS certificate for a hostname was observed for the first time, or after a change | Cert tracker, outbox |
| synthetics.tls.certificate_changed | The leaf certificate fingerprint changed between runs | Cert tracker, outbox |
| synthetics.tls.expiring_soon | A tracked certificate is within its expiry-warning window | Cert tracker, outbox |
| synthetics.mcp.schema_drift | An MCP server's tool list or signatures changed | Per-result, outbox |
| synthetics.correlation.detected | Two or more failing browser checks share a third-party dependency in a 5-minute window | Correlation pipeline, outbox |
| synthetics.incident.opened | A new incident was created from correlated alerts | Incident pipeline, outbox |
| synthetics.incident.alert_attached | An additional alert joined an active incident | Incident pipeline, outbox |
| synthetics.incident.severity_changed | An incident's severity escalated or de-escalated | Incident pipeline, outbox |
| synthetics.incident.acknowledged | A user acknowledged an incident | Incident pipeline, outbox |
| synthetics.incident.auto_resolved | All member alerts recovered and the cool-down elapsed | Incident pipeline, outbox |
| synthetics.incident.closed | A user closed an incident | Incident pipeline, outbox |
| synthetics.incident.reopened | A user reopened a previously resolved incident | Incident pipeline, outbox |
| synthetics.incident.note_added | A user added a freeform note to an incident | Incident pipeline, outbox |

See OpenTelemetry concepts for the standard set of identifying attributes Yorker emits across signals, and for the dedicated attribute reference for the cross-monitor correlation event. Note that the OpenTelemetry page describes the metrics/traces emission shape (where check, location, run, and labels are all on resource attributes); the log-event outbox path documented above splits those across resource attributes (check identifiers) and LogRecord attributes (run, location, status). Team-scoped log events (deployment, maintenance, incident, SLO burn) carry a smaller resource set centred on service.name.

Querying via /api/events

The unified events endpoint returns check and alert events in a single time-ordered stream. See REST API > Events for the full schema.

curl 'https://yorkermonitoring.com/api/events?range=24h&limit=50' \
  -H "Authorization: Bearer sk_..."
{
  "events": [
    {
      "id": "res_xyz789",
      "eventType": "check.completed",
      "timestamp": "2026-05-14T10:05:00.142Z",
      "checkId": "chk_abc123",
      "checkName": "Homepage",
      "checkType": "http",
      "severity": null,
      "traceId": "4bf92f3577b34da6a3ce929d0e0e4736",
      "summary": "142 ms",
      "resourceAttributes": {
        "synthetics.check.id": "chk_abc123",
        "synthetics.check.name": "Homepage",
        "synthetics.check.type": "http",
        "synthetics.run.id": "run_abc",
        "synthetics.location.id": "loc_us_east",
        "synthetics.location.name": "US East (Ashburn)"
      },
      "attributes": {
        "synthetics.http.response_time": 142,
        "synthetics.check.success": 1,
        "synthetics.http.status_code": 200
      }
    }
  ],
  "limit": 50,
  "offset": 0,
  "hasMore": true
}

Filters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| type | comma-separated string | all types | Restrict to specific event types: check.completed, check.failed, check.mcp_schema_drift, or any alert.* event. Unknown types return 400. |
| checkId | string | (any check) | Restrict to a single check. |
| severity | critical \| warning \| info | (any) | Restrict alert events by severity. Has no effect on check.* events. |
| range | 1h, 6h, 24h, 7d, 14d, 30d | 24h | Time window measured back from now. |
| limit | integer | 50 | Page size, clamped to the range 1-200. |
| offset | integer | 0 | Pagination offset. The handler returns 400 when offset > 10000; narrow the time range instead of paging deeper. |
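A client-side helper can mirror these documented constraints before issuing the request. This is a sketch, not part of Yorker; it only encodes the rules in the table above:

```python
from urllib.parse import urlencode

VALID_RANGES = {"1h", "6h", "24h", "7d", "14d", "30d"}

def build_events_query(range_="24h", limit=50, offset=0, types=None, check_id=None):
    """Build a /api/events query string, enforcing the documented limits."""
    if range_ not in VALID_RANGES:
        raise ValueError(f"invalid range: {range_}")
    if offset > 10000:
        raise ValueError("offset > 10000 is rejected; narrow the time range")
    # The server clamps limit to 1-200; clamping client-side keeps requests honest.
    params = {"range": range_, "limit": max(1, min(limit, 200)), "offset": offset}
    if types:
        params["type"] = ",".join(types)  # comma-separated, e.g. check.failed
    if check_id:
        params["checkId"] = check_id
    return "/api/events?" + urlencode(params)

print(build_events_query(limit=500, types=["check.failed"], check_id="chk_abc123"))
# /api/events?range=24h&limit=200&offset=0&type=check.failed&checkId=chk_abc123
```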

Event shape

Every event returned by /api/events has the same envelope:

| Field | Type | Description |
| --- | --- | --- |
| id | string | Unique event ID. For drift events synthesized alongside a check.completed, the ID is suffixed with :drift to disambiguate. |
| eventType | string | One of the values from the type table above. |
| timestamp | ISO-8601 string | Event time, drawn from the source row's createdAt. |
| checkId | string \| null | The check that produced the event. null for alert events not bound to a check. |
| checkName | string \| null | Current check name at query time (joined from the checks table on each request, not snapshotted at event time). Renaming a check changes the name on all of its historical events. |
| checkType | http \| browser \| mcp \| null | Current check type at query time, joined the same way as checkName. |
| severity | critical \| warning \| info \| null | critical for failed checks; warning for drift events; the alert instance severity (critical, warning, or info) for alert events; null for successful checks. |
| traceId | string \| null | W3C trace ID for trace correlation. |
| summary | string \| null | Human-readable one-line summary. |
| resourceAttributes | object | Event-type-specific identifying bag. For check.* events: synthetics.check.id, synthetics.check.name, synthetics.check.type, synthetics.run.id, plus synthetics.location.id and synthetics.location.name. For alert.* events: synthetics.alert.instance_id, synthetics.alert.rule_id (nullable), synthetics.alert.rule_name (nullable), plus synthetics.check.id and synthetics.check.name when the alert is bound to a check. The /api/events route is built directly from Postgres (check_results and alert_events), so this bag is constructed by the route from row columns; it uses OTel-style attribute names (from OTEL_ATTRS) for consistency with the OTLP path but is API-native, not an ingested OTLP payload. |
| attributes | object | Event-payload attributes. For check.completed / check.failed: synthetics.http.response_time, synthetics.check.success, synthetics.http.status_code (when present), and on failures both synthetics.check.error_message and error.message. For check.mcp_schema_drift: synthetics.mcp.drift.added_count, synthetics.mcp.drift.removed_count, synthetics.mcp.drift.modified_count, synthetics.mcp.drift.total_count, and the matching tool-name arrays synthetics.mcp.drift.added_tools, synthetics.mcp.drift.removed_tools, synthetics.mcp.drift.modified_tools (each capped at 50 entries). For alert.* events: synthetics.alert.state, synthetics.alert.severity, synthetics.alert.event_type, plus the optional synthetics.alert.actor, synthetics.alert.channel_type, synthetics.alert.context, synthetics.alert.details (each present only when the underlying alert event row carries that field). |

The keys here are stable across /api/events calls but they don't all match the OTLP LogRecord attribute names emitted by the outbox path: for example /api/events carries synthetics.http.response_time while OTLP LogRecords use synthetics.response_time_ms, and /api/events carries synthetics.http.status_code while OTLP uses http.response.status_code. Plan to translate keys when reusing query logic across the two surfaces. The single source of truth is apps/web/src/app/api/events/route.ts for /api/events and packages/shared/src/constants/otel.ts (OTEL_ATTRS) for the OTLP path.
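That translation can be as simple as a lookup table. The two entries below are the divergences named above; anything further should be verified against the two source-of-truth files before relying on it:

```python
# Mapping from /api/events attribute keys to their OTLP LogRecord
# counterparts. Only the two documented divergences are included; verify
# any additional entries against apps/web/src/app/api/events/route.ts
# and OTEL_ATTRS before extending this.
API_TO_OTLP = {
    "synthetics.http.response_time": "synthetics.response_time_ms",
    "synthetics.http.status_code": "http.response.status_code",
}

def translate_keys(attrs):
    """Rename /api/events keys to OTLP names, passing unknown keys through."""
    return {API_TO_OTLP.get(k, k): v for k, v in attrs.items()}

print(translate_keys({"synthetics.http.response_time": 142,
                      "synthetics.check.success": 1}))
# {'synthetics.response_time_ms': 142, 'synthetics.check.success': 1}
```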

Pagination

Within each source (check executions, alert events) the endpoint orders by (timestamp DESC, id DESC) so a single source paginates stably even when two rows share a millisecond. The merged stream returned to the caller sorts by timestamp only; two events from different sources that share a millisecond can swap order across pages. hasMore: true means at least one more event exists past offset + limit. To page through, increment offset by the page size; once offset > 10000 the handler returns 400, so narrow the time range and start a fresh paging cycle instead.
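A paging loop that follows these rules looks like the sketch below. fetch_page is a stand-in for the real HTTP call, serving 230 fake events so the hasMore/offset handling can run offline:

```python
# Paging sketch over /api/events. fetch_page mocks the endpoint so the
# loop logic is runnable without a server; the real call would hit
# /api/events with the same offset/limit parameters.
TOTAL = 230
PAGE = 100

def fetch_page(offset, limit=PAGE):
    if offset > 10000:
        raise RuntimeError("400: offset too deep; narrow the time range")
    batch = [{"id": f"ev_{i}"} for i in range(offset, min(offset + limit, TOTAL))]
    return {"events": batch, "hasMore": offset + limit < TOTAL}

events, offset = [], 0
while True:
    page = fetch_page(offset)
    events.extend(page["events"])
    if not page["hasMore"]:
        break
    offset += PAGE  # increment by the page size, as described above

print(len(events))  # 230
```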

Deployment markers

Recording a deployment marker is a write-side concern that lives at the same prefix as the read-side feed:

curl -X POST https://yorkermonitoring.com/api/events/deployments \
  -H "Authorization: Bearer sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "service": "checkout-api",
    "version": "v2.18.0",
    "environment": "production",
    "commit_sha": "a1b2c3d4",
    "commit_message": "fix: retry idempotency",
    "deployed_by": "drewpost",
    "source": "github-actions"
  }'

The POST body uses snake_case field names (commit_sha, commit_message, deployed_by) for CI-friendliness. The GET response side uses camelCase to match the rest of the read API. Both are documented in REST API > Deployment Events.
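A client that round-trips markers (write snake_case, read camelCase) can convert field names mechanically. A minimal sketch, using only field names from the example above:

```python
import re

# The POST body uses snake_case; the read API returns camelCase.
# snake_to_camel is an illustrative helper, not part of Yorker.
def snake_to_camel(name):
    return re.sub(r"_([a-z])", lambda m: m.group(1).upper(), name)

body = {"commit_sha": "a1b2c3d4", "commit_message": "fix: retry idempotency",
        "deployed_by": "drewpost"}
read_side = {snake_to_camel(k): v for k, v in body.items()}
print(sorted(read_side))  # ['commitMessage', 'commitSha', 'deployedBy']
```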

A deployment marker becomes a synthetics.deployment.created OTLP event (when an OTel endpoint is configured) and a row in deployment_events that the dashboard correlates with check-result anomaly windows.

Where events surface

| Surface | What it shows |
| --- | --- |
| Dashboard timeline | All check.* and alert.* events for the selected time range, with severity and trace links |
| /api/events | Same data, programmatic access |
| OTel collector (HyperDX / ClickStack / etc.) | The full OTel event catalog above, including derived events not in /api/events |
| Per-check detail page | Filtered to a single checkId |

If you need to slice events by attribute (e.g., "all check.failed from loc_eu_west in the last hour"), the OTel backend is the right surface; /api/events filters are limited to the columns above. Trace correlation works in both directions: the traceId on every event links back to the originating check execution span and out to your backend's distributed trace view.