
Risks of Artificial Intelligence

AI brings incredible potential, but it also comes with risks such as biased data, accuracy, and security.

AI-generated picture of a robot hand reaching out to touch a stylized web screen

While AI has the potential to enhance digital accessibility in many ways, it is important to consider the risks as well. We won’t get into the existential discussion about whether AI will take over the world, but there are real concerns that do need to be considered, such as biased data sets, accuracy, copyright, and security.

Socially Biased Data 

Probably the biggest risk for people with disabilities is the potentially biased data sets used to train these AI systems. Unfortunately, disabled people are less likely to be featured in a positive way in mainstream media. This means the data used is highly likely to be inherently biased against the needs and behaviours of people with disabilities.

AI systems learn from the data they’re trained on, and if that data reflects societal biases, it can lead to unfair outcomes for people with disabilities. For example, an AI used for hiring might disadvantage candidates with disabilities if the data used hasn’t been properly curated to include diverse experiences. This can perpetuate stereotypes and hinder opportunities for individuals who are already facing challenges in navigating the job market.  

As a society we need to make sure that the training data used for AI algorithms is diverse, representative, and inclusive. The data needs to reflect the real experiences and characteristics of people with disabilities, so that AI systems learn to make accurate predictions that cater to their needs.

Whenever these biases manifest, we need to call them out. We can’t just sit back and say these data sets are biased; we need to bring them to the attention of the people who can make a change. Biases will continue if we don’t correct the data.
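One practical step is auditing the training data itself before a model ever sees it. The sketch below is a minimal illustration, assuming a simple tabular data set with a hypothetical "disability_status" field; the field name and the 5% threshold are assumptions for the example, not part of any real system.

```python
# Minimal sketch: checking a training set for under-representation.
# The field name "disability_status" and the min_share threshold are
# hypothetical assumptions chosen for illustration.

from collections import Counter

def representation_report(records, field, min_share=0.05):
    """Return {group: (share, under_represented)} for each value of
    `field`, flagging groups below `min_share` of the data set."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {
        group: (n / total, n / total < min_share)
        for group, n in counts.items()
    }

# Toy data set: 97 records with no disclosed disability, 3 with one.
records = (
    [{"disability_status": "none"} for _ in range(97)]
    + [{"disability_status": "disclosed"} for _ in range(3)]
)

for group, (share, flagged) in representation_report(
    records, "disability_status"
).items():
    note = " (under-represented)" if flagged else ""
    print(f"{group}: {share:.0%}{note}")
```

A report like this doesn’t fix the bias on its own, but it makes the gap visible so it can be raised with the people who curate the data.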

Accuracy 

Another big concern is about the accuracy of the output from some AI systems. When it comes to AI tools, accuracy is crucial, especially for users relying on these systems for assistance. For example, if a speech recognition tool misinterprets commands, individuals with mobility impairments might struggle to control devices, undermining their independence.  

Similarly, AI systems that assist with reading or navigation could lead to misunderstandings, causing frustration and potentially unsafe situations. It’s essential that these technologies are thoroughly tested and refined to ensure they work reliably for everyone. 

Like anything else, humans still need to be involved. We can’t trust 100% that an image or an answer supplied by a machine will be accurate or complete. We need to use our own judgement as well.
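One common way to keep humans in the loop is to act automatically only when the system is confident, and route everything else to a person. The sketch below assumes the AI tool reports a confidence score with each result; the 0.9 threshold and the result structure are illustrative assumptions, not a real API.

```python
# Minimal sketch: routing low-confidence AI output to a human reviewer.
# The result structure and the 0.9 threshold are hypothetical
# assumptions for illustration.

def needs_human_review(result, threshold=0.9):
    """Return True when the model's confidence is too low to act on
    automatically, so a person should check the output first."""
    return result["confidence"] < threshold

# Toy speech-recognition results: one clear, one uncertain.
results = [
    {"text": "turn on the lights", "confidence": 0.97},
    {"text": "turn on the lifts",  "confidence": 0.62},
]

for r in results:
    if needs_human_review(r):
        print(f"Review needed: {r['text']!r}")
    else:
        print(f"Accepted: {r['text']!r}")
```

The right threshold depends on the stakes: a misread smart-home command is an annoyance, but a misread navigation instruction can be unsafe, so the bar for acting without a human check should be higher.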

Copyright Issues 

There are also some concerns about copyright issues of the input and output from Generative AI solutions. As AI increasingly generates content—like text, music, or images—questions about ownership and rights come into play. For people with disabilities who create or rely on AI-generated content, unclear copyright regulations can create barriers.  

For example, if someone uses an AI tool to help them create art or write a story, they might face challenges in claiming their work as their own due to complex copyright laws. This can discourage creativity and limit the opportunities for self-expression among individuals with disabilities who depend on these tools. 

It is important to remember that any image you’ve ever put out on social media could be part of a data set being used to train AI. There have also been instances where a generative AI solution has produced a string of words that were written by someone else.

Security 

AI systems can sometimes store sensitive personal information, and if these systems aren’t properly secured, it can lead to breaches that disproportionately affect people with disabilities. Imagine if someone with a mobility impairment used an AI-driven smart home system; an insecure connection could expose their location and personal routines to unwanted intrusions. Ensuring that these systems protect user data becomes even more essential when vulnerable populations are involved, so stronger security measures should always be a priority. 

AI systems can be designed and developed to be inclusive and accessible, catering to the needs of people with disabilities. If you’d like to know more about making your AI systems inclusive, reach out. We are always happy to help.