
AI and Accessibility: Keeping it human

Last week I attended two AI-focused meetups: one at MadeCurious about the intersection of AI and user experience (UX), and one hosted by Inde for charities wanting to leverage AI. Both events highlighted that AI is a steaming hot topic, but caution is needed.

Nik, Berys and Stella on stage at the Inde hosted AI for charities event

Both sessions explored how AI streamlines ideation and enables innovation, especially for resource-strapped charities. AI can also help create more accessible content and interfaces, if used wisely.

At the UX Meetup, one of Steve Arnold’s quotes struck the first chord: "Your job won’t be replaced by AI, but it will be replaced by someone who knows how to use AI." It was a clear call to stay informed, keep learning, and sharpen skills.

Berys Amor from Ngai Tahu, Stella Ward from Streamliners and Rik Roberts from Inde at the AI for charities event reiterated how important it is to start now: make time to learn and understand how AI can help charitable organisations become more efficient. The first step is to get your data organised and protected, and to try things out on non-confidential data first.

Steve Arnold standing casually answering questions from the audience. He is standing in front of the MadeCurious sign.

Keeping it Human

A common theme at both events was that AI is just a tool and shouldn’t replace humans. Steve talked about how “everyone can cook, but it doesn’t make them a chef.” He was referring to the fact that AI can produce reasonable wireframes, but it can’t think through all the human nuance or replace UX professionals. Stella also spoke about the role of AI in healthcare, and the continued need for humans in that environment.

This also applies to accessibility. Just because AI can identify access issues or suggest fixes doesn’t mean accessibility professionals aren’t needed to keep it human. Steve used an example about how AI can analyse images, but it cannot understand the emotional context, lived experience, or importance behind images.

Applying that example to accessibility and crafting alternative text for images: AI can tell you what an image shows, but it can’t grasp how someone feels or why the image is being used. Human insight, empathy, and judgment are essential to creating meaningful, inclusive experiences like good alt text.

Steve also discussed how AI excels at creating personalised solutions. He emphasised the inherent risks of allowing people to decide what they want versus what they need. His point reminded me of an early career lesson that users can tell you what they want, but it’s our job as professionals with expertise in human behaviour, emotion, cognition, perception, and ergonomics, to translate those wants into solutions that genuinely serve them best.

Data sovereignty

Berys Amor presenting

The AI for charities event also highlighted the need to start using AI on data that isn’t confidential, highlighting the real concerns around data sovereignty and security. This is another area of concern for accessibility and disabled people.

There is still hesitance from disabled people to use AI or provide data when they don’t understand where or how it will be used or stored and the impact on them. While there is great work being done on data sovereignty for Māori like the work of Dr Karitiana, more work is needed to protect the needs of disabled people.

Biased data

My biggest concern about AI remains the biases in the data, especially relating to disabilities. AI models are trained on vast datasets, but these often reflect existing societal prejudices. Without actively challenging the data, AI can perpetuate exclusion and stereotypes instead of reducing them. It can reinforce harmful ideas about disabled people’s capability and capacity to work.

Both sessions reinforced the need for ongoing learning and experimenting. AI is changing rapidly, and we need to keep up with the advances, understand the limitations, and develop the skills to deploy it ethically.

In the space of accessibility, there is a particular responsibility to actively confront and address bias, whether in data, design, or implementation. We need to make sure that AI tools serve everyone fairly and inclusively, and that they don’t become a new barrier.

While I definitely use AI, and can see how much value it can bring, after both of these events I still have concerns.

If you are interested in the topic, get in touch, or have a read of our other AI-related blog posts.

Artificial Intelligence and Digital Accessibility

Designing Inclusive Artificial Intelligence Systems

Risks of Artificial Intelligence

Benefits of Artificial Intelligence