Alerts that correlate. Triage that explains.
Single-location failures are noise. Yorker correlates across locations, detects shared dependency failures across monitors, and attaches full OTel context to every alert.
```yaml
monitors:
  - name: Checkout Flow
    type: browser
    script: ./monitors/checkout.ts
    alerts:
      - conditions:
          - type: multi_location_failure
            minLocations: 2
            windowMinutes: 5
        channels:
          - "@ops-slack"
  - name: API Health
    type: http
    alerts:
      - conditions:
          - type: consecutive_failures
            count: 3
        channels:
          - "@pagerduty"
```

How it works
Failure detected at one location
A check fails at a single location. Not yet actionable — geographic noise is common.
Multi-location correlation
Yorker checks whether N of M configured locations confirm the failure within the time window.
Cross-monitor pattern detection
Yorker checks whether other monitors share the same failure signature, indicating a shared dependency rather than an isolated issue.
Alert fired with OTel context
One alert, with the OTel trace link, recent results context, and triage summary attached.
Multi-location correlation
Only alert when N of M configured locations confirm the failure within a configurable time window. A single-location failure in Singapore at 3am is noise. The same failure confirmed from Singapore, Tokyo, and Sydney is a real incident.
Configurable per monitor. A critical payment endpoint might require 2 of 3 locations. A less critical health check might require 3 of 5.
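The quorum logic can be sketched in a few lines. This is an illustrative sketch, not Yorker's implementation; the `FailureEvent` shape and function name are assumptions made for the example.

```typescript
// Multi-location correlation sketch: escalate only when at least
// `minLocations` distinct locations report a failure inside the window.
interface FailureEvent {
  location: string;
  timestamp: number; // epoch milliseconds
}

function quorumReached(
  events: FailureEvent[],
  minLocations: number,
  windowMinutes: number,
  now: number = Date.now()
): boolean {
  const windowMs = windowMinutes * 60_000;
  // Keep only failures inside the correlation window...
  const recent = events.filter((e) => now - e.timestamp <= windowMs);
  // ...then count distinct locations, not raw events.
  const locations = new Set(recent.map((e) => e.location));
  return locations.size >= minLocations;
}
```

Counting distinct locations (rather than raw events) is what makes a flapping check at one location stay quiet while a genuine regional incident crosses the threshold.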
Cross-monitor pattern detection
When multiple monitors degrade with a shared signature, Yorker surfaces the pattern. One shared dependency alert instead of six separate monitor alerts to correlate manually at 2am.
The alert tells you which monitors are affected, what the shared dependency is, and how many locations confirm the pattern — not just that something broke.
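The collapsing step amounts to grouping failing monitors by a shared failure signature. A minimal sketch, assuming a `signature` string (for example the failing dependency host plus error class) is already attached to each failure; the types and the two-monitor threshold are illustrative, not Yorker's API.

```typescript
// Cross-monitor pattern detection sketch: group current failures by
// signature and collapse any group of two or more monitors into a
// single shared-dependency alert.
interface MonitorFailure {
  monitor: string;
  signature: string; // e.g. "cdn.tagmanager.net:timeout"
}

interface SharedDependencyAlert {
  signature: string;
  monitors: string[];
}

function detectSharedDependencies(
  failures: MonitorFailure[],
  minMonitors = 2
): SharedDependencyAlert[] {
  const bySignature = new Map<string, string[]>();
  for (const f of failures) {
    const group = bySignature.get(f.signature) ?? [];
    group.push(f.monitor);
    bySignature.set(f.signature, group);
  }
  // Only groups spanning multiple monitors suggest a shared dependency.
  return [...bySignature.entries()]
    .filter(([, monitors]) => monitors.length >= minMonitors)
    .map(([signature, monitors]) => ({ signature, monitors }));
}
```

Six monitors failing on the same signature then produce one alert listing all six, instead of six alerts to correlate by hand.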
Alert triage and context
Every alert carries the OTel trace link, recent results context, and a triage summary with root cause hypothesis and recovery outlook. Not just a notification that something broke — context for what to do about it.
- OTel trace link in every alert — one click to the distributed trace
- Recent results context — how long has this been failing?
- Root cause hypothesis based on cross-monitor correlation
- Recovery outlook — is it getting better or worse?
Root cause hypothesis: cdn.tagmanager.net responding 3.2× slower than baseline. 4 monitors share this dependency. No CDN incident reported.
Recent results: Failing for 12 minutes across 3 Asia-Pacific locations. European and North American locations passing.
Recovery outlook: Degradation increasing. Last 3 runs show worsening LCP deviation.
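The recovery outlook reduces to a trend check over recent runs. A minimal sketch, assuming each run's deviation from baseline (e.g. an LCP ratio) is available as a number; the function name and classification rule are illustrative assumptions.

```typescript
// Recovery outlook sketch: classify the trend of the last few runs'
// deviation from baseline (higher deviation = worse).
function recoveryOutlook(
  deviations: number[]
): "improving" | "worsening" | "flat" {
  if (deviations.length < 2) return "flat";
  const recent = deviations.slice(-3); // look at the last 3 runs
  let up = 0;
  let down = 0;
  for (let i = 1; i < recent.length; i++) {
    if (recent[i] > recent[i - 1]) up++;
    else if (recent[i] < recent[i - 1]) down++;
  }
  if (up > down) return "worsening";
  if (down > up) return "improving";
  return "flat";
}
```

A run of [1.0, 1.4, 2.1] classifies as worsening, which is what drives the "last 3 runs show worsening LCP deviation" line in the example above.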
Unlike webhook-based alerting
Most tools fire an alert per monitor per location. Yorker correlates before alerting — you get one signal with context, not a flood of notifications requiring manual correlation.
OTel trace linking means every alert has a direct link to the backend trace showing what happened on your infrastructure at the same moment as the synthetic failure.
Alert triage provides a root cause hypothesis and recovery outlook based on recent results and check context — not just a raw notification that something failed.
Close your observability blind spot
Free tier includes 10,000 HTTP checks and 1,500 browser checks per month. No credit card required.
npx yorker init