Methodology
How we calculate your score.
Passiro uses a transparent, per-page deduction model with diminishing returns. No black boxes. No inflated numbers. Here's exactly how it works.
The engine
Powered by axe-core.
We use axe-core — the industry-standard open-source accessibility testing engine by Deque Systems. The same engine used by Google Lighthouse, Microsoft Accessibility Insights, and hundreds of other tools.
Each page is loaded in a real Chromium browser, including JavaScript-rendered content, and tested against 80+ accessibility rules covering WCAG 2.1 Level A and AA — the standard referenced by the European Accessibility Act.
80+
accessibility rules tested per page
2.1
WCAG version — Level A & AA criteria
JS
rendered pages — no static-HTML shortcuts
0
proprietary black-box rules
The model
Start at 100. Deduct per rule.
Every page starts with a perfect 100. Points are deducted for each violated accessibility rule, weighted by severity — with diminishing returns for repeated occurrences of the same rule.
Critical
Users with disabilities are completely blocked from accessing content or functionality.
25 × √(occurrences)
Serious
Significant difficulty for users with disabilities. Workarounds may exist but are burdensome.
15 × √(occurrences)
Moderate
Noticeable but doesn't block access. Users may experience some difficulty navigating.
5 × √(occurrences)
Minor
Low impact on accessibility. Often a best-practice improvement rather than a strict violation.
1 × √(occurrences)
Diminishing returns
Why repeated issues don't obliterate your score.
The first occurrence of an issue is the most important — it tells you the problem exists. Each additional occurrence of the same rule adds less, because the message is already clear: you need to fix the underlying cause.
This means 100 images missing alt text doesn't cost 100× the penalty of 1 image — it costs 10×. The problem is recognized and penalized, but it doesn't mask everything else on the page.
How √ scales
Not 100× — just 10×. The problem is penalized proportionally without destroying all signal from other issues.
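The square-root scaling is easy to check in a few lines. A minimal sketch (the weight of 15 comes from the serious severity in the table above; the function name is illustrative, not Passiro's API):

```python
import math

def rule_impact(weight: float, occurrences: int) -> float:
    """Deduction for one violated rule: weight × √(occurrences)."""
    return weight * math.sqrt(occurrences)

one_occurrence = rule_impact(15, 1)    # 15.0 for a single serious violation
hundred = rule_impact(15, 100)         # 150.0, not 1500.0
print(hundred / one_occurrence)        # 10.0: 100 occurrences cost 10x, not 100x
```

Because √ grows slowly, each repeated occurrence adds less than the one before it, so one systemic mistake never drowns out the rest of the report.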
The formula
Simple enough to explain to anyone.
Each page is scored independently, then the site score is the average of all page scores. A single bad page won't destroy the score for 49 clean pages.
Each page starts at 100 points
For each violated rule: deduct severity × √(occurrences)
Floor at 0 — scores never go negative
Site score = average of all page scores
Formula
Per rule on each page:
rule_impact = severity × √(occurrences)
Per page:
page_score = max(0, 100 - Σ rule_impacts)
Site-wide:
site_score = avg(page_scores)
Severity weights: critical = 25, serious = 15, moderate = 5, minor = 1
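The whole model fits in a few lines of Python. A minimal sketch using the published weights (the data shapes and function names are my own, not Passiro's actual implementation):

```python
import math

SEVERITY_WEIGHTS = {"critical": 25, "serious": 15, "moderate": 5, "minor": 1}

def rule_impact(severity: str, occurrences: int) -> float:
    # Per rule on each page: severity weight × √(occurrences)
    return SEVERITY_WEIGHTS[severity] * math.sqrt(occurrences)

def page_score(violations: list[tuple[str, int]]) -> float:
    # Per page: start at 100, deduct every rule impact, floor at 0
    deductions = sum(rule_impact(sev, n) for sev, n in violations)
    return max(0.0, 100.0 - deductions)

def site_score(pages: list[list[tuple[str, int]]]) -> float:
    # Site-wide: unweighted average of per-page scores
    return sum(page_score(p) for p in pages) / len(pages)
```

For example, `page_score([("moderate", 1)])` returns 95.0, matching the steps listed above.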
In practice
A real-world example.
Consider a website with 3 pages. Each page is scored independently, then averaged.
Homepage
68
color-contrast (serious) ×3 → 15×√3 = 26.0
heading-order (moderate) ×1 → 5×√1 = 5.0
meta-viewport (minor) ×1 → 1×√1 = 1.0
100 - 32.0 = 68.0
About
95
region (moderate) ×1 → 5×√1 = 5.0
100 - 5.0 = 95.0
Contact
53.8
label (critical) ×1 → 25×√1 = 25.0
link-name (serious) ×2 → 15×√2 = 21.2
Total deductions: 46.2
100 - 46.2 = 53.8
Site Score
72.3
(68.0 + 95.0 + 53.8) ÷ 3 = 72.3
Score ranges
What your score means.
90–100
Excellent
Very few or no automated issues detected. Strong foundation for WCAG AA compliance.
70–89
Good
Minor issues present but accessibility is solid. Prioritize any critical or serious findings.
50–69
Needs Work
Multiple barriers exist. Users with disabilities will encounter problems on parts of your site.
25–49
Poor
Significant barriers across your site. Many users with disabilities cannot effectively use key functionality.
0–24
Critical
Severe, widespread accessibility failures. Immediate remediation required.
Why this approach
Built for honesty. Not vanity metrics.
Many accessibility tools use opaque or inflated scoring that makes sites look better than they are. We believe you deserve a number you can trust, explain to stakeholders, and act on.
Transparent
Formula and weights are published. Every violation can be verified with open-source axe-core.
Progress visible
Fixing issues always improves your score. Diminishing returns means every fix counts — even on pages with many issues.
Impact-driven
A critical issue costs 25× as much as a minor one. Severity matters — not just count.
Per-page normalized
500 issues on one page won't destroy the score for 49 clean pages. Each page contributes equally.
Important to know
What automated scanning can't catch.
Automated tools catch approximately 30–40% of WCAG issues. They excel at structural and code-level problems but cannot evaluate:
- Whether alt text is actually meaningful and accurate
- Cognitive accessibility and readability of content
- Complex keyboard navigation flows and focus management
- Screen reader experience and announcement quality
- Video and audio content accessibility
A score of 100 does not mean your site is fully WCAG compliant — it means no automated violations were detected. We recommend combining automated scanning with manual testing and user testing with people who use assistive technology.
See your score.
Enter your website URL and get a free accessibility scan with a detailed score breakdown in minutes.
No account required. Results in under 2 minutes.