@neil I struggle to articulate my feelings on this topic, so hopefully this comes across as reasonably balanced.
An accessibility ecosystem has sprung up around LLMs. Much of it snake oil, some of it not.
In some cases the capabilities are new (or newly practical), such as visual descriptions of images and videos, or guided photo taking. LLMs have also improved or accelerated existing tasks, like extracting information from inaccessible documents or transcribing audio.
A lot of the discourse around LLM accuracy understandably takes an all-or-nothing tone, with any errors treated as grounds for dismissing the tools outright. But the situation ends up being more nuanced in the daily lives of disabled people. If someone's been dealing with inaccurate OCR engines and patchy access for decades already, any boost in access can feel like "better than nothing". And there are services that augment the "AI" with human verification and corrections.