What Is the Difference Between Scanning and Remediation?
Scanning finds accessibility problems. Remediation fixes them. Most tools only do one of these — and the difference matters more than you think.
If you've spent any time evaluating document accessibility tools, you've probably seen both "scanning" and "remediation" used in marketing copy — sometimes interchangeably. They are not the same thing. Understanding the difference is essential to choosing a solution that actually makes your documents accessible, rather than one that simply tells you they aren't.
Scanning: Finding the Problems
Scanning is detection. An accessibility scanner examines a document — usually a PDF — and checks it against a set of rules derived from standards like WCAG 2.1, PDF/UA, or Section 508. It looks for missing alternative text on images, incorrect reading order, untagged tables, absent document structure, missing language declarations, and dozens of other potential issues.
The output is a report. That report might be a simple pass/fail checklist, a detailed issue list with page numbers and severity ratings, or a dashboard with charts showing compliance percentages across your document library. Some scanners are quite good at this. They catch issues reliably, categorise them clearly, and present the results in ways that help you understand the scope of the problem.
But here is the critical point: a scan report does not make a single document more accessible. It tells you what's wrong. It does not fix anything.
Remediation: Fixing the Problems
Remediation is the process of actually modifying a document to resolve the issues a scan identifies. This means editing the document's internal structure — adding PDF tags, generating alternative text for images, correcting reading order, marking up table headers and data cells, adding bookmarks, setting document language, and restructuring content so that assistive technologies like screen readers can interpret it correctly.
If scanning is the doctor's diagnosis, remediation is the surgery. Both matter, but only one of them makes the patient better.
The Scan-and-Report Model
Most accessibility workflows today follow a predictable pattern. You upload a document. The tool scans it. You receive a report listing, say, 34 issues: 12 images missing alt text, 6 tables without proper header markup, reading order problems on 8 pages, missing structure tags throughout, and a handful of metadata issues.
Then what? In the scan-and-report model, the answer is: you fix them yourself. Manually. One by one. Using Adobe Acrobat Pro or a similar tool, you open each document, navigate to the tags panel, and start the painstaking work of adding and correcting structure. For a single complex PDF, this can take hours. For a university department with thousands of documents, it can take months — or simply never get done.
This is the gap that most accessibility tools leave wide open. They are excellent at producing reports. They are not in the business of fixing documents.
Why Remediation Is Harder Than Scanning
There's a reason the market evolved this way. Scanning is fundamentally a pattern-matching problem. You define rules — "every image must have alt text," "tables must have header cells," "reading order must follow visual layout" — and you check whether a document satisfies them. This is well-understood computer science. It's deterministic, testable, and relatively straightforward to implement well.
Remediation is a different class of problem entirely. It's not pattern matching — it's reconstruction. Consider what's actually involved for each common issue type:
Reading order. A PDF's internal content stream often bears little resemblance to the visual layout of the page. Text blocks, images, headers, footers, sidebars, and captions may be stored in an arbitrary sequence determined by how the document was originally created. Correcting reading order means analysing the visual layout, inferring the logical flow a sighted reader would follow, and restructuring the tag tree to match. For multi-column layouts, pull quotes, and complex page designs, this requires genuine understanding of document structure — not just rule checking.
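Even the simplest case hints at the difficulty. The sketch below infers reading order for a two-column page by grouping blocks on a column boundary and reading each column top to bottom. The coordinate model and the fixed column split are assumptions for illustration; real layout analysis must first discover the columns, handle spanning elements, and cope with headers, footers, and captions.

```python
# Illustrative sketch of reading-order inference for a two-column page.
# Blocks are dicts with 'x', 'y' (y increases downward) and 'text'.
# A real remediator would detect the column boundary rather than take it
# as a parameter, and handle far messier layouts.
def infer_reading_order(blocks, column_split_x):
    """Return block texts in logical order: left column first, then right."""
    left = [b for b in blocks if b["x"] < column_split_x]
    right = [b for b in blocks if b["x"] >= column_split_x]
    ordered = sorted(left, key=lambda b: b["y"]) + sorted(right, key=lambda b: b["y"])
    return [b["text"] for b in ordered]
```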
Table markup. Scanning can detect that a table lacks header cells. Remediation means determining which cells are headers, whether the table uses row headers, column headers, or both, whether cells span multiple rows or columns, and then creating the correct markup to express those relationships. For tables that were created as visual layouts rather than semantic structures — common in older documents — this can mean rebuilding the table from scratch.
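A toy version of the header-detection step might look like the following. The cell model and the "all cells in the first row are bold" heuristic are invented for this sketch; production remediation weighs many more signals (position, borders, repetition across pages, cell content type) and must also express row headers and spanned cells.

```python
# Hedged sketch: decide whether the first row of a grid is a header row
# using a single invented heuristic (every cell bold), then emit TH/TD
# roles for each cell. Real table remediation uses many more signals.
def mark_up_table(grid):
    """grid: list of rows; each cell is (text, is_bold). Returns a grid of roles."""
    header_row = all(bold for _, bold in grid[0])
    roles = []
    for r, row in enumerate(grid):
        roles.append(["TH" if (r == 0 and header_row) else "TD" for _ in row])
    return roles
```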
Alternative text. A scanner flags an image with missing alt text. But generating appropriate alt text requires understanding what the image contains, why it's in the document, and what information it conveys to a sighted reader. A decorative border needs alt="" to be correctly ignored. A chart needs a description of its data and trends. A photograph of a campus building needs identification. This is a task that demands visual comprehension and contextual judgment — exactly the kind of problem that required recent advances in AI to automate effectively.
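The decision layer around alt text can be sketched as follows. The `describe_image` parameter stands in for whatever produces a description (a human reviewer or a vision model); the decorative-image heuristics here are arbitrary assumptions, shown only to illustrate the routing logic.

```python
# Sketch of the *decision* layer around alt text: decorative images get
# alt="" so assistive technology skips them; content images are routed to
# a description step. `describe_image` is a placeholder callable, not a
# real API, and the decorative heuristics are invented for this sketch.
def resolve_alt_text(image, describe_image):
    """image: dict with optional 'role' and 'width'. Returns the alt text to set."""
    if image.get("role") == "decorative" or image.get("width", 0) < 8:
        return ""  # empty alt text marks the image as correctly ignorable
    return describe_image(image)
```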
Structure tagging. An untagged PDF is essentially a flat collection of text and graphics with no semantic meaning. Remediating it means identifying every element — headings (and their levels), paragraphs, lists (and their items), block quotes, figures, captions, footnotes — and wrapping each one in the correct PDF tag. This is equivalent to reverse-engineering the document's original structure from its visual appearance.
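One small piece of that reverse-engineering is mapping visual cues back to semantic tags. The sketch below assigns heading levels from font size relative to body text; the thresholds are arbitrary assumptions, and real tagging combines font size with weight, spacing, numbering, and position.

```python
# Illustrative sketch: infer a structure tag from font size when rebuilding
# a tag tree. The multipliers are arbitrary assumptions for the sketch --
# real structure tagging uses many cues, not font size alone.
def tag_from_font_size(size, body_size=11.0):
    """Return a PDF structure tag name ("H1".."H3" or "P") for a text run."""
    if size >= body_size * 2:
        return "H1"
    if size >= body_size * 1.5:
        return "H2"
    if size > body_size * 1.15:
        return "H3"
    return "P"
```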
Each of these tasks is genuinely hard engineering. That's why most tools stop at scanning — detection is a solved problem, but automated remediation requires a fundamentally different technical approach.
Why This Matters Right Now
With the DOJ's ADA Title II deadline approaching in April 2026, higher education institutions are under real pressure to make their digital content accessible. The temptation is to adopt a scanning tool, generate reports showing awareness of the problem, and call it progress. But compliance requires accessible documents — not reports about inaccessible ones.
A scan that identifies 10,000 issues across your document library is useful only if those issues actually get fixed. If the remediation step depends entirely on manual effort, the math doesn't work. There aren't enough staff hours, and the backlog grows faster than humans can address it.
What to Ask Vendors
When evaluating accessibility tools, one question cuts through the marketing: does your tool fix the document, or does it report what's wrong with the document?
If the answer is "we provide detailed reports and recommendations," you're looking at a scanner. If the answer is "we modify the document's structure, tags, and content to resolve issues," you're looking at a remediator. Both have value, but they solve different problems — and only one of them produces accessible documents at the end of the process.
Ask to see a before-and-after. Upload a problematic PDF and ask the vendor to return an accessible version — not a report, but the actual fixed file. Check it with a screen reader. Run it through PAC 2024. That tells you more than any feature comparison chart.
Scanning and Remediation in One Pipeline
The most effective approach treats scanning and remediation as two stages of a single pipeline rather than separate products. Scan to identify issues, then immediately remediate them — programmatically, at scale, without manual intervention for the majority of common issues.
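The pipeline idea can be sketched in a few lines: route each detected issue to a matching fixer, and queue anything without an automated fix for human review. The issue and fixer model is invented for illustration, not any particular product's architecture.

```python
# Minimal sketch of a scan-then-remediate loop. Each scanner yields issue
# dicts with a "type"; each fixer mutates the document to resolve one issue
# type. Issues with no registered fixer fall through to a manual-review
# queue. All names and structures here are assumptions for the sketch.
def run_pipeline(doc, scanners, fixers):
    """Scan `doc`, fix what we can in place, and return (doc, manual_queue)."""
    manual_queue = []
    for scanner in scanners:
        for issue in scanner(doc):
            fixer = fixers.get(issue["type"])
            if fixer:
                fixer(doc, issue)          # remediation: the document changes
            else:
                manual_queue.append(issue) # detection only: a human must act
    return doc, manual_queue
```

The design point is the contrast drawn throughout this article: the scanner's output feeds directly into fixers, so the pipeline's product is a modified document plus a short exception list, not a report.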
Aelira scans and remediates in one pipeline — not just finding problems, but fixing them. See how it works.

Aelira Team
Accessibility Engineers

The Aelira team is building AI-powered accessibility tools for higher education. We're on a mission to help universities meet WCAG 2.1 compliance before the April 2026 deadline.