Error tracking

Mendral + Sentry

Every new Sentry issue triggers an investigation, with the suspect commit identified before you read the alert.

Or book a 15-min demo and we'll wire Sentry into a live investigation.

Wire Sentry into Mendral and the agent stops being something you ask about errors after the fact — it's something that has already finished investigating by the time the issue lands in your inbox. Webhooks fire when a new issue appears or a regression spikes; Mendral fetches the issue, walks the stack trace, runs git blame on the relevant lines, correlates with recent releases, and posts the suspect commit and a draft fix directly on the issue.

What the agent can investigate

The Sentry skill adds these surfaces to every Mendral investigation that touches Sentry.

Issue ingestion

New issues, regressions, and resolved-then-reopened events trigger investigations. The agent handles the routing — you don't write any glue code.

Stack trace analysis

Maps frames to source files, runs git blame, and identifies the introducing commit. Source-map-resolved frames are used when Sentry has them for the release.
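As a rough sketch of the blame step, the agent can shell out to `git blame --line-porcelain` for the exact frame line and parse the introducing commit. The function names below are illustrative, not Mendral's real API:

```python
# Sketch: map a resolved stack frame (file + line) to its introducing commit.
# Assumes a local checkout of the repository at `repo`.
import subprocess

def blame_line(path: str, line: int, repo: str = ".") -> dict:
    """Blame a single line and return the commit that introduced it."""
    out = subprocess.run(
        ["git", "blame", "--line-porcelain", "-L", f"{line},{line}", path],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout
    return parse_porcelain(out)

def parse_porcelain(porcelain: str) -> dict:
    """Extract commit hash, author, and summary from porcelain output."""
    lines = porcelain.splitlines()
    info = {"commit": lines[0].split()[0]}  # first token of header line
    for ln in lines[1:]:
        if ln.startswith("author "):
            info["author"] = ln[len("author "):]
        elif ln.startswith("summary "):
            info["summary"] = ln[len("summary "):]
    return info
```

The porcelain format is stable across git versions, which makes it a safer parse target than the default human-readable output.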

Release correlation

Ties issues to Sentry releases and the deploys that produced them. The investigation knows whether the issue started with the most recent ship or has been silent for weeks.

Breadcrumb context

Pulls breadcrumb trails for client-side errors so the agent can reconstruct how the user reached the failing line.
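A minimal sketch of that reconstruction, using the `category`, `message`, and `timestamp` fields from Sentry's breadcrumb schema (the helper name is hypothetical):

```python
# Condense a breadcrumb trail into the ordered actions that preceded the error.
def summarize_breadcrumbs(breadcrumbs: list[dict]) -> list[str]:
    ordered = sorted(breadcrumbs, key=lambda b: b.get("timestamp", 0))
    return [f"{b.get('category', '?')}: {b.get('message', '')}" for b in ordered]
```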

Tag and environment filtering

Respects Sentry's environment, release, and tag filters. Most teams scope investigations to production-environment events to avoid staging noise.
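A sketch of the kind of filter most teams end up with: only production events at error severity or above trigger an investigation. The field names mirror Sentry's event payload; the function itself is illustrative:

```python
# Decide whether an incoming event should trigger an investigation.
LEVELS = {"debug": 0, "info": 1, "warning": 2, "error": 3, "fatal": 4}

def should_investigate(event: dict, env: str = "production",
                       min_level: str = "error") -> bool:
    if event.get("environment") != env:
        return False  # skip staging/dev noise
    return LEVELS.get(event.get("level", "error"), 0) >= LEVELS[min_level]
```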

Comment writeback

Posts findings as a comment on the Sentry issue with the suspect PR link, the offending diff, and the author tagged for follow-up.

How it works

Connect Sentry via OAuth and point a webhook at Mendral's ingestion endpoint. The agent receives a Sentry skill scoped to the projects you select. When a webhook fires, the investigation runs end-to-end: fetch issue → resolve stack frames → blame → correlate releases → post results. You configure which event types trigger investigations and which projects are in scope.
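The pipeline above can be sketched as a few composed steps. Every function here is a stand-in for a real integration call (Sentry's REST API, git blame, comment writeback); none of these names are Mendral's actual API:

```python
# Runnable sketch of the fetch → resolve → blame → report flow.
def fetch_issue(store: dict, issue_id: str) -> dict:
    return store[issue_id]          # real version: GET /api/0/issues/{id}/

def resolve_frames(issue: dict) -> list[dict]:
    return issue["stacktrace"]      # real version: source-map-aware resolution

def blame(frame: dict) -> str:
    return frame["commit"]          # real version: `git blame` on file:line

def run_investigation(store: dict, payload: dict) -> dict:
    issue = fetch_issue(store, payload["issue_id"])
    suspect = blame(resolve_frames(issue)[0])  # top frame is the failing line
    return {"issue": payload["issue_id"], "suspect_commit": suspect}
```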

Authentication

OAuth at install time, plus a webhook from the Sentry project to Mendral's ingestion endpoint. PII scrubbing rules from your Sentry org are respected — the agent never re-fetches data Sentry has already scrubbed.
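On the webhook side, Sentry signs the raw request body with your integration's client secret (HMAC SHA-256, hex-encoded, sent in the `sentry-hook-signature` header). A minimal verification sketch:

```python
# Verify a Sentry webhook signature before trusting the payload.
import hashlib
import hmac

def verify_signature(body: bytes, signature: str, client_secret: str) -> bool:
    expected = hmac.new(client_secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(expected, signature)
```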

Example investigation

A new KeyError: 'campaign_id' issue starts firing on the production project. Sentry sends a webhook.

1. Fetches the issue and the most recent event sample.

2. Reads the resolved stack trace — the error originates in analytics/event_handler.py:142.

3. Runs git blame on the file. The line was added in PR #5821, merged 4 hours ago.

4. Reads the PR diff: a new event['campaign_id'] lookup was added without a default.

5. Confirms environment=production, release matches the deploy that includes #5821.

6. Comments on the Sentry issue with the suspect PR link, the offending diff, and the author tagged. Drafts a fix PR using .get() with a default.

Outcome

From new issue to draft PR in under three minutes, with the on-call engineer reading evidence instead of digging for it.

Frequently asked

Does Mendral resolve issues on its own?

No. The agent comments on the issue and drafts a fix PR; humans still own the resolve action in Sentry.

What if our Sentry project doesn't have source maps?

The agent uses whatever Sentry stores for the release. With source maps it resolves frames to original files; without them it falls back to the stack frames Sentry has indexed.

How do we avoid noisy investigations?

You can filter by environment, level, tag, or release. Most teams scope to production-environment errors above WARNING and skip muted issues.

Is PII handled correctly?

Yes. Sentry's PII scrubbing applies before the data reaches Mendral. The agent never re-fetches the scrubbed fields and treats event payloads as untrusted user data when generating comments or PRs.

Can investigations also start from CI and pull Sentry context?

Yes. CI-triggered investigations can call into the Sentry skill to pull related issues for the affected service, not just the other way around.

Put Sentry context in every investigation.

Five-minute install. Sentry connects from the workspace settings. First enriched investigation runs on the next CI failure.

SOC 2 Type II

Install via GitHub App

Or book a 15-min demo with the founders.