OTel-native synthetic intelligence

The front-end intelligence layer your OTel stack is missing

And the fuel that makes your AI ops tools actually work. Structured synthetic intelligence — pre-correlated, anomaly-scored, OTel-native — flowing directly into your existing stack.

otlp span → your observability backend
# synthetics.check.run
synthetics.check.name    "Checkout Flow"
synthetics.check.type    browser
synthetics.location.name "London, UK"

# anomaly detected — zero config
synthetics.anomaly.detected  true
synthetics.anomaly.metric    browser_lcp
synthetics.anomaly.deviation +2.8σ from 14-day baseline

# shared dependency attribution
synthetics.third_party.domain cdn.tagmanager.net
synthetics.third_party.bytes  847293
synthetics.correlation.checks 4 monitors affected

# cross-link to backend traces
trace_id  4bf92f3577b34da6a3ce929d0e0e4736
screenshot https://r2.yorkermonitoring.com/...

Not metrics. Intelligence.

Basic synthetic monitoring emits a response time and a pass/fail. Yorker emits a rich OTLP insight pack — anomaly deviations, dependency attribution, cross-monitor correlation signals, and screenshot URLs — as standard OTel traces, metrics, and logs. Everything lands in your existing backend, pre-correlated and ready to query.

Anomaly scores in every span

Every metric carries its deviation from a 14-day rolling baseline — per location, per hour-of-day. Your backend receives +2.8σ, not just a raw number.
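
A minimal sketch of how sigma-deviation scoring against a rolling baseline can work. The function names and sample data here are illustrative, not Yorker's internals:

```typescript
// Sketch: sigma deviation against a rolling baseline window.
// Names and data are illustrative, not Yorker's actual implementation.

/** Mean and population standard deviation of a sample. */
function stats(values: number[]): { mean: number; sd: number } {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance =
    values.reduce((acc, v) => acc + (v - mean) ** 2, 0) / values.length;
  return { mean, sd: Math.sqrt(variance) };
}

/** Deviation of `current` from the baseline window, in sigmas. */
function sigmaDeviation(current: number, baseline: number[]): number {
  const { mean, sd } = stats(baseline);
  return sd === 0 ? 0 : (current - mean) / sd;
}

// One bucket: 14 daily LCP samples (ms) for a single location/hour-of-day.
const baseline = [
  2100, 2050, 2200, 1980, 2150, 2080, 2120,
  2040, 2160, 2090, 2110, 2030, 2170, 2060,
];
console.log(`${sigmaDeviation(2800, baseline).toFixed(1)}σ from baseline`);
```

The real system would maintain one such baseline per (location, hour-of-day) bucket, as described above.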

Dependency attribution

Third-party domains, payload sizes, and request latencies emitted as OTel attributes. See cdn.tagmanager.net in your ClickHouse table, not just a number.

Cross-monitor correlation

When multiple monitors degrade simultaneously, Yorker attaches correlation signals to each span. Your notebook query already knows which checks were co-affected.

Screenshot URLs in traces

Browser check screenshots are stored and linked directly in the OTel span. Pull up the filmstrip from inside Grafana, a ClickStack incident view, or your runbook.

W3C trace propagation

Synthetic browser checks inject traceparent headers into every request. Your backend picks them up. The synthetic check and the backend request share a single distributed trace.
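
The header format itself is standardized by W3C Trace Context. A minimal sketch of constructing one (the helper name is illustrative):

```typescript
// Sketch: building a W3C `traceparent` header as a synthetic browser check
// might inject it. Per the spec: version "00", 16-byte trace-id,
// 8-byte parent-id, "01" = sampled flag.
import { randomBytes } from "node:crypto";

function makeTraceparent(): { traceId: string; header: string } {
  const traceId = randomBytes(16).toString("hex");
  const parentId = randomBytes(8).toString("hex");
  return { traceId, header: `00-${traceId}-${parentId}-01` };
}

const { header } = makeTraceparent();
// A check would attach this to every outgoing request, e.g.
//   fetch(url, { headers: { traceparent: header } })
// so the backend's OTel instrumentation continues the same trace.
console.log(header);
```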

TLS context in every check

Certificate expiry, issuer chain, and fingerprint emitted as span attributes. Your on-call dashboard knows the cert expires in 4 days before a user gets a browser warning.
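
A sketch of how TLS facts could be gathered for a check using Node's `tls` module. The attribute keys and helper names are illustrative, not Yorker's schema:

```typescript
// Sketch: collecting certificate context as span-attribute-style key/values.
// Attribute names are illustrative, not Yorker's schema.
import * as tls from "node:tls";

/** Days from `now` until the certificate's notAfter date (negative if expired). */
function daysUntilExpiry(validTo: string, now: number = Date.now()): number {
  return Math.floor((Date.parse(validTo) - now) / 86_400_000);
}

/** Connect on 443 and resolve TLS facts for the peer certificate. */
function tlsAttributes(host: string): Promise<Record<string, string | number>> {
  return new Promise((resolve, reject) => {
    const socket = tls.connect({ host, port: 443, servername: host }, () => {
      const cert = socket.getPeerCertificate();
      socket.end();
      resolve({
        "tls.issuer": cert.issuer?.O ?? "",
        "tls.fingerprint_sha256": cert.fingerprint256,
        "tls.days_until_expiry": daysUntilExpiry(cert.valid_to),
      });
    });
    socket.once("error", reject);
  });
}

// tlsAttributes("example.com").then(console.log); // requires network access
```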

Flows into any OTel-compatible backend

ClickStack · Dash0 · Grafana Cloud · Datadog · Honeycomb · New Relic · Any OTLP endpoint

Monitors aren't silos. Shared failures are shared signals.

That ad network script loaded via tag manager — the one that never goes through your release process — just added 600ms to six different user journeys simultaneously. Yorker sees the pattern across monitors, not per-monitor in isolation.

  • Request-level attribution, across monitors

    Payload weight and latency tracked per third-party domain across every browser check. When cdn.adnetwork.com degrades, all six affected monitors surface it together.

  • Baseline anomaly alerts on dependencies you don't deploy

    Anomaly detection operates at the dependency level, not just the check level. Get alerted when a third-party script changes behaviour — before your users feel it.

  • Pattern detection across your monitor portfolio

    When multiple monitors fail with a shared signature, Yorker surfaces the correlation automatically. One root cause surfaced, not six separate alerts to correlate manually.

cross-monitor dependency view
Monitor          LCP      3P impact
Checkout Flow    4.2s ↑   +820ms
Product Page     3.8s ↑   +760ms
Search Results   3.1s ↑   +690ms
Homepage         2.9s ↑   +450ms
⚠ Shared dependency detected
cdn.tagmanager.net · 4 monitors · never deployed · alert sent
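
One way the shared-signature grouping above could work, sketched in TypeScript. The data shapes and function names are illustrative, not Yorker's internals:

```typescript
// Sketch: grouping degraded checks by a shared third-party domain to turn
// several simultaneous alerts into one shared signal.
interface CheckAnomaly {
  check: string;
  thirdPartyDomain: string;
  addedLatencyMs: number;
}

/** Domains implicated in two or more degraded checks, with the checks affected. */
function sharedSignals(anomalies: CheckAnomaly[]): Map<string, string[]> {
  const byDomain = new Map<string, string[]>();
  for (const a of anomalies) {
    byDomain.set(a.thirdPartyDomain, [
      ...(byDomain.get(a.thirdPartyDomain) ?? []),
      a.check,
    ]);
  }
  // Keep only domains seen across multiple monitors.
  return new Map([...byDomain].filter(([, checks]) => checks.length >= 2));
}

const signals = sharedSignals([
  { check: "Checkout Flow", thirdPartyDomain: "cdn.tagmanager.net", addedLatencyMs: 820 },
  { check: "Product Page", thirdPartyDomain: "cdn.tagmanager.net", addedLatencyMs: 760 },
  { check: "Search Results", thirdPartyDomain: "cdn.tagmanager.net", addedLatencyMs: 690 },
  { check: "Homepage", thirdPartyDomain: "cdn.tagmanager.net", addedLatencyMs: 450 },
  { check: "API Docs", thirdPartyDomain: "fonts.example.com", addedLatencyMs: 120 },
]);
// signals maps "cdn.tagmanager.net" to the four co-affected checks
```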

Your ops tools are only as good as what you feed them.

Automated on-call tools, causal analysis platforms, ClickStack AI Notebooks, PagerDuty runbooks — they reason over the data you give them. Without pre-correlated, trace-linked front-end intelligence, they're working with half the picture.

Yorker gives them what they don't have: the user-facing layer, already processed and attributed, with the right OTel attributes and trace headers to cross-link with everything else in your stack.

  • Anomaly deviations in σ, not just raw numbers
  • Third-party attribution already labelled
  • Cross-monitor correlation signals attached
  • Screenshot URLs embedded in spans
  • W3C traceparent for backend cross-linking
  • Structured logs with full assertion context

clickstack ai notebook
-- Front-end context, automatically available.
-- No joins across separate tools needed.

SELECT
  check_name,
  anomaly_metric,
  anomaly_deviation_sigma,
  third_party_domain,
  correlation_checks_affected,
  screenshot_url
FROM otel_traces
WHERE
  anomaly_detected = true
  AND timestamp > now() - INTERVAL 1 HOUR
ORDER BY anomaly_deviation_sigma DESC

-- Feed directly into incident context,
-- runbook, or causal analysis prompt.

Built for how engineering teams work in 2026

Agentic workflows in both directions — Yorker as a data source your agents consume, and Yorker as a tool that monitors your AI infrastructure.

Monitor your AI tools

coming soon

HTTP 200 doesn't mean your AI feature is working. Yorker's MCP check type is designed to validate what your Model Context Protocol servers actually return — tool availability, output correctness, and latency baselines.

  • Tool availability — is the MCP server responding?
  • Output validation — does the response contain expected values?
  • Latency baselines — is your model taking longer than usual?

Designed to be used by agents

Full CLI with deploy, validate, test, and status commands. API-first design. YAML config that lives in your repo alongside your code. AI coding assistants speak fluent Yorker — create and manage monitors the same way they write infrastructure.

terminal
$ yorker deploy
2 monitors created

$ yorker status
Checkout Flow    passing
API Health       passing

$ yorker test checkout
Test run complete · 1.2s

yorker.config.yaml
# Define monitors alongside your app code
project: my-app

monitors:
  - name: API Health
    type: http
    url: https://api.example.com/health
    frequency: 1m
    assertions:
      - type: status_code
        value: 200

  - name: Checkout Flow
    type: browser
    script: ./monitors/checkout.ts
    frequency: 5m
    locations: [loc_eu_west, loc_eu_central, loc_ap_northeast]

And yes, it's all code.

Plain YAML. Terraform-style plan/apply. Git-native. CI/CD-ready. If you're running a serious operation in 2026, this is table stakes — and Yorker ships it.

  • Preview changes with --dry-run before applying
  • Clean up orphaned monitors with --prune
  • Validate config in CI with yorker validate
  • Secrets stay in env vars, never in your repo
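
A minimal CI sketch wiring this together, assuming GitHub Actions. `yorker validate`, `--dry-run`, and the `@yorker/cli` package are from the docs above; the workflow layout and secret name are assumptions:

```yaml
# Hypothetical workflow — .github/workflows/monitors.yml
# Validates monitor config on every PR, then previews changes without applying.
name: validate-monitors
on: [pull_request]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Fail the PR if yorker.config.yaml is invalid
      - run: npx @yorker/cli validate
      # Preview what would change, without touching live monitors
      - run: npx @yorker/cli deploy --dry-run
        env:
          YORKER_API_KEY: ${{ secrets.YORKER_API_KEY }}
```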

14 Global Locations

US, Europe, Asia Pacific, South America, Africa, and Oceania. Private locations behind your firewall available on every paid plan.

  • Ashburn
  • Dallas
  • Los Angeles
  • Toronto
  • São Paulo
  • London
  • Paris
  • Frankfurt
  • Stockholm
  • Singapore
  • Tokyo
  • Mumbai
  • Sydney
  • Johannesburg

Simple pricing
$29.99/mo

One plan. Every feature included. Pay for what you run.

See full pricing →
  • Free tier: 10,000 HTTP checks + 1,500 browser checks/mo
  • No credit card required to start
  • All insight packs, all locations, HTTP and browser checks
  • Private locations at 50% off hosted run rates

Close your observability blind spot

Start with the dashboard or go straight to code.

Or start from the terminal

npx @yorker/cli init