
The Forgotten Users: Why Most AI Tools Fail on Accessibility

Mario Simic


AI tools are being built for the statistical average user: sighted, typing, English-speaking, using a standard laptop or phone. Users who do not fit this profile, including those with visual impairments, motor disabilities, dyslexia, or hearing differences, are frequently treated as edge cases, addressed in a footnote in the accessibility documentation (if there is one), and expected to "find a workaround."

This is not just a usability failing. It is a significant equity issue, given that AI tools are increasingly important for productivity, education, and professional work. Inaccessible AI tools create new barriers for people who already face more than their share of them.

The Specific Failures in Most AI Interfaces

Screen reader support is an afterthought. Chat interfaces that stream responses token by token create a chaotic experience for screen readers: an ARIA live region that announces new text with every token generates a flood of announcements that is functionally unusable. Most chat interfaces have not been tested with screen readers. "AI is thinking" states are often communicated only visually. Error states are conveyed through color alone, violating WCAG 1.4.1 (Use of Color).
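
One workable fix is to buffer streamed tokens and announce them a sentence at a time instead of per token. A minimal sketch, where the `TokenBatcher` name and the sentence-boundary heuristic are illustrative rather than taken from any particular framework:

```typescript
// Batch streamed tokens into sentence-sized announcements for an ARIA live
// region, rather than announcing every token as it arrives.
type Announce = (text: string) => void;

class TokenBatcher {
  private buffer = "";
  constructor(private announce: Announce) {}

  // Append a streamed token; flush once the buffer ends at a sentence boundary.
  push(token: string): void {
    this.buffer += token;
    if (/[.!?]\s*$/.test(this.buffer)) this.flush();
  }

  // Send whatever is buffered to the live region and reset.
  flush(): void {
    const text = this.buffer.trim();
    if (text) this.announce(text);
    this.buffer = "";
  }
}
```

Wiring `announce` to an element with `aria-live="polite"` lets the screen reader queue readable chunks instead of drowning the user in per-token updates.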

Keyboard navigation is frequently broken. Modal dialogs that trap keyboard focus with no way to escape, dropdowns that require mouse interaction to close, custom UI elements that do not implement keyboard event handlers. These are not obscure edge cases; they are the core interaction patterns of most AI tools, implemented without the basic accessibility behavior that browser-native elements provide automatically.
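
Much of what native elements "provide automatically" is exactly this kind of logic. A sketch of the keyboard behavior a custom listbox owes its users, kept as a pure function for clarity; the names are illustrative, and the wiring to `keydown` events and `aria-activedescendant` is omitted:

```typescript
// The keyboard contract of a listbox: arrows move the active option,
// Home/End jump to the extremes, Escape closes. A native <select> does
// all of this for free; a custom widget must reimplement it.
type Key = "ArrowDown" | "ArrowUp" | "Home" | "End" | "Escape";

function nextActiveIndex(key: Key, current: number, count: number): number | null {
  switch (key) {
    case "ArrowDown": return Math.min(current + 1, count - 1);
    case "ArrowUp":   return Math.max(current - 1, 0);
    case "Home":      return 0;
    case "End":       return count - 1;
    case "Escape":    return null; // close the listbox, return focus to the trigger
  }
}
```

Keeping the state transition separate from the DOM makes the behavior trivially testable, which is half the battle for widgets that rarely get tested at all.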

Voice-only use is largely unsupported. The default interaction model for virtually every AI tool is text input. Voice input, when supported, is typically limited to dictation into a text field, not a true voice-first interaction where you can navigate the interface, take actions, and receive responses by voice without ever touching the keyboard. For users with motor disabilities that make typing difficult or impossible, this represents a significant barrier to a category of tools that could be transformative for them.

Cognitive accessibility is ignored. Users with dyslexia benefit from specific font choices and spacing. Users with ADHD benefit from reduced visual complexity and clear information hierarchy. Users with cognitive disabilities benefit from plain language options. These needs are rarely considered in AI interface design, and the documentation never mentions them.
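
These adaptations are cheap once they exist as explicit settings. A hedged sketch, assuming preferences are surfaced as CSS custom properties; all names, values, and the font choice here are illustrative, not a prescription:

```typescript
// Translate cognitive-accessibility preferences into CSS custom properties
// the interface can consume (e.g. via style attributes or a root stylesheet).
interface ReadingPrefs {
  dyslexiaFont: boolean;   // a font designed for legibility
  extraSpacing: boolean;   // wider letter spacing and line height
  reducedMotion: boolean;  // suppress animation
}

function prefsToCssVars(p: ReadingPrefs): Record<string, string> {
  return {
    "--font-family": p.dyslexiaFont
      ? "'Atkinson Hyperlegible', sans-serif"
      : "system-ui, sans-serif",
    "--letter-spacing": p.extraSpacing ? "0.05em" : "normal",
    "--line-height": p.extraSpacing ? "1.8" : "1.5",
    "--animation-duration": p.reducedMotion ? "0s" : "200ms",
  };
}
```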

What Skales Does Differently

Skales is not a perfect accessibility story; we have work to do, and we know it. But some deliberate design choices move in the right direction. Voice Chat is a primary interaction mode, not an add-on: you can navigate, query, and receive responses entirely through voice. Keyboard navigation is complete for all core interactions. Text size is configurable in Settings. The Desktop Buddy provides an alternative interaction surface that works with keyboard input. We test with screen readers as part of release cycles.

What the Industry Needs to Do

WCAG 2.1 AA compliance is a minimum, not a stretch goal. Screen reader testing with real assistive technology users, not just automated lint checkers, needs to be part of every release. Voice-first interfaces need to be designed from the ground up, not retrofitted. Cognitive accessibility needs to be in the design brief from the start. And companies need to publish their accessibility conformance reports honestly rather than treating accessibility as a legal checkbox. Read about accessibility features in Skales and use cases for older users.

Try it yourself 🦎

Skales is free for personal use. No Docker. No account.

Download Free →