
From audit reports to pull requests

The accessibility industry runs on reports. We think it should run on pull requests. Here's how we redesigned the feedback loop between finding issues and fixing them.

Arthur Gousset

If you've worked on accessibility at any mid-to-large company, you know the drill. A consultant runs an audit. A few weeks later, a report lands in someone's inbox. It's thorough, well-written, and almost entirely disconnected from your development workflow.

The findings reference page URLs that may have changed. The recommendations describe what to fix but not where in the code. The severity ratings don't map to your team's prioritization framework. And the whole thing needs to be manually translated into Jira tickets before a single developer sees it.

This is the industry standard. And it's broken.

The translation tax

Every handoff between "issue found" and "issue fixed" is a place where information degrades and momentum dies.

An auditor finds a focus visibility problem on your checkout page. They write it up in a report. A project manager reads the report and creates a ticket. A developer picks up the ticket, reads the description, opens the page, tries to reproduce the issue, figures out which component is responsible, traces it through the codebase, and writes the fix.

That's at least five context switches before anyone writes a line of code. In practice, tickets sit in the backlog for weeks. By the time someone gets to them, the page might have been redesigned, or the component might have moved, or the developer might not have enough accessibility context to write a correct fix.

We call this the translation tax: the cost of converting audit findings into developer action. It's the reason most accessibility programs move slowly even when teams are motivated.

What if the output was a pull request?

A pull request eliminates most of the translation tax in one step. Instead of describing the problem in prose and hoping someone eventually writes the fix, you deliver the fix directly.

A PR is specific. It points to exact files and lines. It includes the diff: here's what changed and why. It's reviewable by the team that owns the code. It runs through CI. It can be merged, revised, or rejected, all within the workflow developers already use every day.
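To make that concrete: delivering a fix as a PR is, mechanically, a single documented API call. The sketch below builds the request for GitHub's create-a-pull-request endpoint (`POST /repos/{owner}/{repo}/pulls`); the repository, branch, and finding details are hypothetical placeholders, not output from a real audit.

```python
# Sketch: packaging an accessibility fix as a pull request via the
# GitHub REST API. Only the request is constructed here (no network
# call); repo, branch, and finding values are illustrative.

def build_pr_request(owner: str, repo: str, finding: dict) -> dict:
    """Build the endpoint URL and JSON payload for a fix PR."""
    return {
        "url": f"https://api.github.com/repos/{owner}/{repo}/pulls",
        "json": {
            "title": f"fix(a11y): {finding['summary']}",
            # The head branch holds the commit containing the diff.
            "head": finding["fix_branch"],
            "base": "main",
            # The PR body carries the audit context a reviewer needs.
            "body": (
                f"WCAG criterion: {finding['criterion']}\n"
                f"Affected file: {finding['file']}\n\n"
                f"{finding['explanation']}"
            ),
        },
    }

request = build_pr_request(
    "acme", "storefront",
    {
        "summary": "restore visible focus indicator on checkout button",
        "criterion": "2.4.7 Focus Visible (AA)",
        "file": "src/components/CheckoutButton.tsx",
        "fix_branch": "a11y/focus-visible-checkout",
        "explanation": "The button suppressed the default focus outline "
                       "without providing a replacement.",
    },
)
print(request["url"])
```

Everything a reviewer needs (the failing criterion, the affected file, the reasoning) travels in the PR body, so the finding never has to be re-explained in a ticket.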

This is the core design decision behind Workback. Our AI agents don't produce reports. They produce pull requests.

We tested this idea early by building agents that found accessibility issues in open-source repositories and raised PRs with fixes. Maintainers reviewed and merged them. That validated the core hypothesis: developers engage with pull requests in a way they never engage with audit reports.

How it works

When Workback audits your application, our agent does what a skilled accessibility engineer would do, but faster and more frequently:

  1. Navigate: the agent walks through your application's user flows the way a real user would, navigating by keyboard and reading the accessibility tree
  2. Identify: when the agent encounters a WCAG violation, it captures evidence: screenshots, DOM state, the specific criterion that's failing, and why it matters for users with disabilities
  3. Locate: the agent traces the issue back to your source code, identifying the exact component, file, and line responsible
  4. Fix: it writes a targeted fix that addresses the WCAG criterion without breaking existing functionality
  5. Deliver: the fix arrives as a pull request on your platform (GitHub, GitLab, or Bitbucket), ready for code review
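The five steps above can be sketched as a pipeline. This is an illustrative toy, not Workback's actual implementation: the type names, the stubbed focus-visibility check, and the sample page are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Evidence captured when the agent identifies a WCAG violation."""
    criterion: str      # failing WCAG success criterion
    evidence: str       # screenshots / DOM state (simplified to a string)
    file: str = ""      # filled in by the locate step
    line: int = 0
    fix: str = ""       # filled in by the fix step

def navigate(flows):
    # 1. Navigate: walk user flows, yielding pages to inspect (stubbed).
    yield from flows

def identify(page):
    # 2. Identify: toy detector that flags suppressed focus outlines.
    if "outline: none" in page["css"]:
        return Finding("2.4.7 Focus Visible (AA)", f"checked {page['url']}")
    return None

def locate(finding, page):
    # 3. Locate: trace the violation back to the owning source file.
    finding.file, finding.line = page["source"], page["source_line"]
    return finding

def fix(finding):
    # 4. Fix: write a targeted change for the failing criterion.
    finding.fix = "replace `outline: none` with a :focus-visible style"
    return finding

def deliver(finding):
    # 5. Deliver: package the fix as a pull-request description.
    return f"[{finding.criterion}] {finding.file}:{finding.line}: {finding.fix}"

flows = [{"url": "/checkout", "css": "button { outline: none; }",
          "source": "src/components/CheckoutButton.tsx", "source_line": 42}]

prs = [deliver(fix(locate(f, page)))
       for page in navigate(flows)
       if (f := identify(page)) is not None]
print(prs[0])
```

The point of the shape: each stage enriches one `Finding` object, so by the delivery step the PR already knows the criterion, the file and line, and the fix, with no handoff in between.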

The developer's experience is: a PR shows up, they review the diff, they read the description explaining which WCAG criterion was failing and how the fix addresses it, and they merge or request changes. No meetings. No ticket translation. No context switching.

The feedback loop matters

The other thing that changes with PRs is the feedback loop. When a developer reviews a PR and leaves a comment ("this fix is correct but doesn't match our design system's focus ring style"), that feedback is specific, actionable, and attached to real code.

Compare that to the feedback loop on a traditional audit: the consultant delivers a report, the team works through it over months, and feedback, if any, arrives in the next audit cycle, six months later.

PRs create a tight, continuous feedback loop between finding issues and fixing them. That's not just more efficient. It's how you build an accessibility program that actually improves over time.

Reports still have their place

To be clear, compliance documentation matters. VPATs, conformance reports, and audit summaries are required for procurement and legal purposes. We generate those too.

But the report should be the artifact, not the workflow. The workflow should be: find issues, fix issues, verify fixes, document conformance. Reports are the output of a healthy accessibility program, not the input.

If you're interested in seeing what this looks like in practice, book a call. We'll show you a real audit-to-PR cycle on your own application.

Want results like these?

We're actively onboarding pilot customers. Let's talk about your accessibility program.

Book a Call