Events
How Yorker emits, stores, and queries the unified stream of check, alert, and derived telemetry events.
Every check execution, alert state transition, and derived signal in Yorker becomes an event. Events are the unit of currency for the dashboard timeline, the OTel emission layer, and the /api/events query interface. This page explains where events come from, which event types exist, and how to query them.
Two sources
Yorker materializes events from two underlying tables:
| Source | Table | Event types |
|---|---|---|
| Check executions | check_results | check.completed, check.failed, check.mcp_schema_drift |
| Alert state transitions | alert_events | alert.triggered, alert.acknowledged, alert.recovered, alert.resolved, alert.escalated, alert.notified, alert.muted |
The unified /api/events endpoint reads from both, merges by timestamp, and applies pagination so consumers see one stream.
A single MCP check run can emit two events from one check_results row: the success/failure event and a check.mcp_schema_drift event when the run detected a tool list or signature change. The two events share the same top-level traceId and the same synthetics.run.id inside resourceAttributes, so a consumer can correlate them.
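As a minimal sketch of that correlation (assuming a fetch-capable runtime; BASE_URL and API_TOKEN are placeholder names, and the response shape is the one documented under Querying via /api/events below), grouping a page of events by run ID puts the paired events side by side:

```ts
// Sketch: group /api/events results by synthetics.run.id so a
// check.completed event and its sibling check.mcp_schema_drift event
// (same run, same traceId) land in the same bucket.
const BASE_URL = "https://yorkermonitoring.com"; // same host as the curl examples below
const API_TOKEN = "sk_..."; // placeholder API key

type ApiEvent = {
  id: string;
  eventType: string;
  traceId: string | null;
  resourceAttributes: Record<string, unknown>;
};

async function eventsByRun(): Promise<Map<string, ApiEvent[]>> {
  const res = await fetch(`${BASE_URL}/api/events?range=1h&limit=200`, {
    headers: { Authorization: `Bearer ${API_TOKEN}` },
  });
  const { events } = (await res.json()) as { events: ApiEvent[] };

  const byRun = new Map<string, ApiEvent[]>();
  for (const ev of events) {
    const runId = ev.resourceAttributes["synthetics.run.id"];
    if (typeof runId !== "string") continue; // alert events carry no run id
    const bucket = byRun.get(runId) ?? [];
    bucket.push(ev);
    byRun.set(runId, bucket);
  }
  return byRun;
}
```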
Relation to OTel emission
The events stream you see in the dashboard and via /api/events is the same set of facts Yorker also emits to your configured OTel collector. The two surfaces serve different audiences:
| Surface | Source | When |
|---|---|---|
| /api/events | check_results + alert_events (Postgres) | Always available, no OTel endpoint required |
| OTel collector | Outbox shipped as OTLP log records to your backend | Only when a team OTLP endpoint is configured under Settings > Telemetry (OTLP) |
Derived events (incident lifecycle, certificate observations, SLO burn warnings, deployment markers, maintenance windows, cross-monitor correlation) are produced by the control plane and shipped to your collector via the orchestrator outbox. They do not appear under /api/events today; that endpoint is scoped to check executions and alert state transitions. To see derived events, query your OTel backend (HyperDX, ClickStack, Grafana, etc.).
For the architecture of the two emission paths (runner-direct vs control-plane outbox), see Architecture > Telemetry flow.
The full OTel event catalog
When a team OTLP endpoint is configured, Yorker emits the events below as OTLP log records via the outbox. Resource attributes vary by event scope:
- Check-scoped events (synthetics.check.completed, synthetics.check.failed, synthetics.step.completed, synthetics.mcp.schema_drift, synthetics.tls.*, synthetics.correlation.detected) carry synthetics.check.id, synthetics.check.name, synthetics.check.type (and url.full when known) as resource attributes. synthetics.run.id, synthetics.location.id, synthetics.location.name, and synthetics.check.status are emitted as LogRecord attributes, not resource attributes.
- Alert events (synthetics.alert.state_changed) carry service.name = "synthetics" plus synthetics.check.name (always emitted; the upstream alert pipeline requires a non-null check name) and synthetics.check.id (when set on the alert) as resource attributes. They are linked to a check, but they don't carry the full per-check resource set (no synthetics.check.type, no location attrs at the resource level).
- Team-scoped events (synthetics.deployment.created, synthetics.maintenance_window.*, synthetics.incident.*, synthetics.slo.budget.warning) carry only service.name = "synthetics" plus event-specific identifiers as LogRecord attributes; they are not bound to a single check.
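To make the placement concrete, here is an illustrative sketch of the attribute split for a check-scoped event. The values are borrowed from the example response later on this page; the status value and any fields not listed above are assumptions, so treat this as a shape hint rather than a verbatim record:

```ts
// Illustrative attribute placement for a check-scoped log event such as
// synthetics.check.completed: check identity at the resource level,
// run/location/status on the LogRecord itself.
const resourceAttributes = {
  "synthetics.check.id": "chk_abc123",
  "synthetics.check.name": "Homepage",
  "synthetics.check.type": "http",
  "url.full": "https://example.com/", // only when known
};

const logRecordAttributes = {
  "synthetics.run.id": "run_abc",
  "synthetics.location.id": "loc_us_east",
  "synthetics.location.name": "US East (Ashburn)",
  "synthetics.check.status": "success", // assumed value, not documented above
};
```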
Event-specific attributes are inlined in the OpenTelemetry concepts page for the cross-monitor correlation event. For other events, the simplest path is to query a sample row in your OTel backend and inspect the attribute set, or grep the source: apps/web/src/lib/otel-events.ts for check, step, MCP drift, TLS, deployment, maintenance, SLO, incident, and correlation events; apps/web/src/lib/otel-emit.ts for the alert state-changed event.
| Event name | Fired when | Source |
|---|---|---|
| synthetics.check.completed | A check run finished, regardless of outcome (LogRecord severity is INFO on success, ERROR on failure/error/timeout). The closest analog to "this check ran." | Per-result, outbox |
| synthetics.check.failed | A check run failed (failure, error, or timeout). Fires in addition to synthetics.check.completed, not instead of it. | Per-result, outbox |
| synthetics.step.completed | A browser-check step completed | Per-step, outbox |
| synthetics.alert.state_changed | An alert instance transitioned state | Per-transition, outbox |
| synthetics.slo.budget.warning | An SLO crossed a burn-rate threshold | Burn-rate evaluator, outbox |
| synthetics.maintenance_window.started | A maintenance window became active | Window scheduler, outbox |
| synthetics.maintenance_window.ended | A maintenance window ended | Window scheduler, outbox |
| synthetics.deployment.created | A deployment marker was recorded via POST /api/events/deployments | Deployment ingest, outbox |
| synthetics.tls.certificate_observed | The TLS certificate for a hostname was observed for the first time, or after a change | Cert tracker, outbox |
| synthetics.tls.certificate_changed | The leaf certificate fingerprint changed between runs | Cert tracker, outbox |
| synthetics.tls.expiring_soon | A tracked certificate is within its expiry-warning window | Cert tracker, outbox |
| synthetics.mcp.schema_drift | An MCP server's tool list or signatures changed | Per-result, outbox |
| synthetics.correlation.detected | Two or more failing browser checks share a third-party dependency in a 5-minute window | Correlation pipeline, outbox |
| synthetics.incident.opened | A new incident was created from correlated alerts | Incident pipeline, outbox |
| synthetics.incident.alert_attached | An additional alert joined an active incident | Incident pipeline, outbox |
| synthetics.incident.severity_changed | An incident's severity escalated or de-escalated | Incident pipeline, outbox |
| synthetics.incident.acknowledged | A user acknowledged an incident | Incident pipeline, outbox |
| synthetics.incident.auto_resolved | All member alerts recovered and the cool-down elapsed | Incident pipeline, outbox |
| synthetics.incident.closed | A user closed an incident | Incident pipeline, outbox |
| synthetics.incident.reopened | A user reopened a previously resolved incident | Incident pipeline, outbox |
| synthetics.incident.note_added | A user added a freeform note to an incident | Incident pipeline, outbox |
See OpenTelemetry concepts for the standard set of identifying attributes Yorker emits across signals, and for the dedicated attribute reference for the cross-monitor correlation event. Note that the OpenTelemetry page describes the metrics/traces emission shape (where check, location, run, and labels are all on resource attributes); the log-event outbox path documented above splits those across resource attributes (check identifiers) and LogRecord attributes (run, location, status). Team-scoped log events (deployment, maintenance, incident, SLO burn) carry a smaller resource set centred on service.name.
Querying via /api/events
The unified events endpoint returns check and alert events in a single time-ordered stream. See REST API > Events for the full schema.
```bash
curl 'https://yorkermonitoring.com/api/events?range=24h&limit=50' \
  -H "Authorization: Bearer sk_..."
```

```json
{
"events": [
{
"id": "res_xyz789",
"eventType": "check.completed",
"timestamp": "2026-05-14T10:05:00.142Z",
"checkId": "chk_abc123",
"checkName": "Homepage",
"checkType": "http",
"severity": null,
"traceId": "4bf92f3577b34da6a3ce929d0e0e4736",
"summary": "142 ms",
"resourceAttributes": {
"synthetics.check.id": "chk_abc123",
"synthetics.check.name": "Homepage",
"synthetics.check.type": "http",
"synthetics.run.id": "run_abc",
"synthetics.location.id": "loc_us_east",
"synthetics.location.name": "US East (Ashburn)"
},
"attributes": {
"synthetics.http.response_time": 142,
"synthetics.check.success": 1,
"synthetics.http.status_code": 200
}
}
],
"limit": 50,
"offset": 0,
"hasMore": true
}
```

Filters
| Parameter | Type | Default | Description |
|---|---|---|---|
| type | comma-separated string | all types | Restrict to specific event types: check.completed, check.failed, check.mcp_schema_drift, or any alert.* event. Unknown types return 400. |
| checkId | string | (any check) | Restrict to a single check. |
| severity | one of critical, warning, info | (any) | Restrict alert events by severity. Has no effect on check.* events. |
| range | one of 1h, 6h, 24h, 7d, 14d, 30d | 24h | Time window measured back from now. |
| limit | integer | 50 | Page size, clamped to the range 1-200. |
| offset | integer | 0 | Pagination offset. The handler returns 400 when offset > 10000; narrow the time range instead of paging deeper. |
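As a usage sketch, the filters compose as ordinary query parameters. The parameter names and values below come from the table above; the bearer token is a placeholder:

```ts
// Sketch: fetch only critical alert transitions from the last hour.
const params = new URLSearchParams({
  type: "alert.triggered,alert.escalated", // comma-separated event types
  severity: "critical", // applies to alert events; ignored for check.* events
  range: "1h",
  limit: "100",
});

const res = await fetch(`https://yorkermonitoring.com/api/events?${params}`, {
  headers: { Authorization: "Bearer sk_..." },
});
if (res.status === 400) throw new Error("unknown event type or offset too deep");
const { events } = await res.json();
```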
Event shape
Every event returned by /api/events has the same envelope:
| Field | Type | Description |
|---|---|---|
| id | string | Unique event ID. For drift events synthesized alongside a check.completed, the ID is suffixed with :drift to disambiguate. |
| eventType | string | One of the values from the type table above. |
| timestamp | ISO-8601 string | Event time, drawn from the source row's createdAt. |
| checkId | string or null | The check that produced the event. null for alert events not bound to a check. |
| checkName | string or null | Current check name at query time (joined from the checks table on each request, not snapshotted at event time). Renaming a check changes the name on all of its historical events. |
| checkType | http, browser, mcp, or null | Current check type at query time, joined the same way as checkName. |
| severity | critical, warning, info, or null | critical for failed checks; warning for drift events; the alert instance severity (critical, warning, or info) for alert events; null for successful checks. |
| traceId | string or null | W3C trace ID for trace correlation. |
| summary | string or null | Human-readable one-line summary. |
| resourceAttributes | object | Event-type-specific identifying bag. For check.* events: synthetics.check.id, synthetics.check.name, synthetics.check.type, synthetics.run.id, plus synthetics.location.id and synthetics.location.name. For alert.* events: synthetics.alert.instance_id, synthetics.alert.rule_id (nullable), synthetics.alert.rule_name (nullable), plus synthetics.check.id and synthetics.check.name when the alert is bound to a check. The /api/events route is built directly from Postgres (check_results and alert_events), so this bag is constructed by the route from row columns; it uses OTel-style attribute names (from OTEL_ATTRS) for consistency with the OTLP path but is API-native, not an ingested OTLP payload. |
| attributes | object | Event-payload attributes. For check.completed / check.failed: synthetics.http.response_time, synthetics.check.success, synthetics.http.status_code (when present), and on failures both synthetics.check.error_message and error.message. For check.mcp_schema_drift: synthetics.mcp.drift.added_count, synthetics.mcp.drift.removed_count, synthetics.mcp.drift.modified_count, synthetics.mcp.drift.total_count, and the matching tool-name arrays synthetics.mcp.drift.added_tools, synthetics.mcp.drift.removed_tools, synthetics.mcp.drift.modified_tools (each capped at 50 entries). For alert.* events: synthetics.alert.state, synthetics.alert.severity, synthetics.alert.event_type, plus the optional synthetics.alert.actor, synthetics.alert.channel_type, synthetics.alert.context, synthetics.alert.details (each present only when the underlying alert event row carries that field). |
The keys here are stable across /api/events calls but they don't all match the OTLP LogRecord attribute names emitted by the outbox path: for example /api/events carries synthetics.http.response_time while OTLP LogRecords use synthetics.response_time_ms, and /api/events carries synthetics.http.status_code while OTLP uses http.response.status_code. Plan to translate keys when reusing query logic across the two surfaces. The single source of truth is apps/web/src/app/api/events/route.ts for /api/events and packages/shared/src/constants/otel.ts (OTEL_ATTRS) for the OTLP path.
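A minimal translation map for the two divergences called out above. Only these two pairs are documented here; any other key is assumed to match until verified against the cited source files:

```ts
// Maps /api/events attribute keys to their OTLP LogRecord equivalents.
// Only the two documented divergences are listed; everything else is
// passed through unchanged on the assumption that the names agree.
const API_TO_OTLP: Record<string, string> = {
  "synthetics.http.response_time": "synthetics.response_time_ms",
  "synthetics.http.status_code": "http.response.status_code",
};

function toOtlpKey(apiKey: string): string {
  return API_TO_OTLP[apiKey] ?? apiKey;
}
```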
Pagination
Within each source (check executions, alert events) the endpoint orders by (timestamp DESC, id DESC) so a single source paginates stably even when two rows share a millisecond. The merged stream returned to the caller sorts by timestamp only; two events from different sources that share a millisecond can swap order across pages. hasMore: true means at least one more event exists past offset + limit. To page through, increment offset by the page size; once offset > 10000 the handler returns 400, so narrow the time range and start a fresh paging cycle instead.
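A paging sketch that respects hasMore and stops before tripping the documented offset cap (the host and token are placeholders):

```ts
// Sketch: walk the merged stream page by page. Stops when hasMore is
// false, or before issuing a request with offset > 10000, which the
// handler rejects with 400.
async function* allEvents(range = "24h", limit = 200) {
  for (let offset = 0; offset <= 10000; offset += limit) {
    const res = await fetch(
      `https://yorkermonitoring.com/api/events?range=${range}&limit=${limit}&offset=${offset}`,
      { headers: { Authorization: "Bearer sk_..." } },
    );
    const page = await res.json();
    yield* page.events;
    if (!page.hasMore) return;
  }
  // Past the cap: narrow `range` and start a fresh paging cycle.
}
```

Usage is a plain async iteration, e.g. `for await (const ev of allEvents("7d")) { ... }`.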
Deployment markers
Recording a deployment marker is a write-side concern that lives at the same prefix as the read-side feed:
```bash
curl -X POST https://yorkermonitoring.com/api/events/deployments \
-H "Authorization: Bearer sk_..." \
-H "Content-Type: application/json" \
-d '{
"service": "checkout-api",
"version": "v2.18.0",
"environment": "production",
"commit_sha": "a1b2c3d4",
"commit_message": "fix: retry idempotency",
"deployed_by": "drewpost",
"source": "github-actions"
  }'
```

The POST body uses snake_case field names (commit_sha, commit_message, deployed_by) for CI-friendliness. The GET response side uses camelCase to match the rest of the read API. Both are documented in REST API > Deployment Events.
A deployment marker becomes a synthetics.deployment.created OTLP event (when an OTel endpoint is configured) and a row in deployment_events that the dashboard correlates with check-result anomaly windows.
Where events surface
| Surface | What it shows |
|---|---|
| Dashboard timeline | All check.* and alert.* events for the selected time range, with severity and trace links |
| /api/events | Same data, programmatic access |
| OTel collector (HyperDX / ClickStack / etc.) | The full OTel event catalog above, including derived events not in /api/events |
| Per-check detail page | Filtered to a single checkId |
If you need to slice events by attribute (e.g., "all check.failed from loc_eu_west in the last hour"), the OTel backend is the right surface; /api/events filters are limited to the columns above. Trace correlation works in both directions: the traceId on every event links back to the originating check execution span and out to your backend's distributed trace view.