AI: Disabled People's Friend or Foe? A Summary of the Key Conversations at M-enabling 2023

When I think of accessibility in the wider conversation around technology and innovation, I’m sure I’m not the only one who considers accessibility and disability conversations to be lagging behind. Accessibility is often an afterthought: a tack-on that doesn’t quite fit. A few months ago I used AI to generate some new headshots, to see how accurate they were. As someone with an energy disability, the process of getting headshots taken can be tiring, taking a whole day out of an already limited working week. I was impressed by the results, but not enough to start using them!

Suppose you’ve used generative AI such as ChatGPT, which pulls from a gigantic pool of data, and you’ve asked it to generate something involving disability, illness or neurodiversity. In that case, you will likely have received a response with ableist undertones, or one that is outright ableist.

Celia Hensmen also used AI to generate some headshot-style images, and though the images bore a likeness to her, they failed to include one crucial thing: her disability. Even after she fed in images of herself using her wheelchair, walking stick or tube, none of the generated images included them in the final output.

The complete erasure of disability is concerning, but it is not a surprise to learn that the responses are drawn from the data available on the internet, all of which was created by people who, as a collective, are more often than not ableist. This is the very nature of existing in an inaccessible world. But the companies at the forefront of developing this new generation of generative AI insist that they are including accessibility in the conversation from the get-go.

Speaking at the M-Enabling conference, Maria Town, President and CEO of the American Association of People with Disabilities (AAPD), noted that disability data just isn’t present in large enough, or distinct enough, quantities, even among people who share the same condition but have different needs and preferences. “With AI, we all become data outliers.”

So how does feeding generative AI models limited data on disability affect the way they use this information? “It’s not as easy as feeding it the right data; testing is needed to make sure that ableism isn’t in the model,” highlighted Mike Buckley, Chairman and CEO of Be My Eyes. The company is beta testing an AI feature that removes the need for a human volunteer to act as the intermediary providing context and information to blind and low-vision users. Their 19,000 beta users have provided incredible responses, both encouraging and critical, on how to make the user experience more accessible, and they show that AI is here to stay.

It’s easy to see how AI has become the buzzword: 12 months ago it was not on the radar of the everyday person, and now technology is being developed to address every single facet of someone’s life. For disabled people, this is an exciting conversation as technology advances to allow more independent living and working experiences, but it is also a potentially harmful one.

Christopher Patnoe, Head of Accessibility and Disability Inclusion, EMEA at Google, asked for grace when mistakes are made. “AI has been around since the 1950s. It’s been around for generations. But we’re at a point where the change is fast enough and the technology is strong enough.” The ‘fail fast’ mentality embraced by technology companies has allowed innovation to move on a much faster timeline, but at what cost?

The risks of AI are not unknown to us: consider the investigation underway after an AI tool may have misinterpreted a child’s disability as parental neglect, or deepfakes, where the provenance of an image becomes critical to the spread of accurate information. “Like we have nutrition labels on our food that tell us the ingredients and provenance of where our food comes from, we need to look at content authenticity, available for all users to be able to get access to this information,” suggests Andrew Kirkpatrick, Director of Accessibility at Adobe.

One way of mitigating the risk that comes with this technology is ensuring that employee resource groups are trialling the tech before it reaches the consumer, suggested Jenny Lay-Flurrie, Chief Accessibility Officer at Microsoft. This of course requires an organisation not only to have these resource groups set up but also to ensure that they are intersectional, representative and run with trust. Trust that the information provided by participants will be used ethically if they take part in user testing, but also trust in the communication loop that comes with disabled and neurodivergent people talking openly about their experiences of inaccessibility in their workplace.

Another way of mitigating risk, or at least preempting the potential risks that will develop as AI itself develops, is the AI Risk Management Framework (AI RMF), a voluntary framework covering seven desirable characteristics that AI systems should exhibit: safety, reliability, explainability, accountability, transparency, managing bias and enhancing privacy. “Governance is the most important aspect for the humans around AI systems,” explained Patrick Hall, a contributor to the AI RMF and Assistant Professor at The George Washington University.

“The lens of perfection vs the lens of progress,” offered Ed Summers, Head of Accessibility at GitHub, citing that 96% of homepages have critical accessibility errors. As a proponent of progress over perfection, he would like to see to what degree we can leverage this new technology to increase progress. “Can we get this number below 90%, and what role does AI have in this?”

One area of accessibility making progress is alt text. For those who rely on the text descriptions that should accompany images, alt text is a crucial part of accessing and navigating digital spaces. Yet missing alt text is a shared frustration for many people, even though it is the first success criterion of the Web Content Accessibility Guidelines (WCAG). Generative AI feels like a logical solution, with the ability to generate an accurate image description while also ‘scraping’ the surrounding text to describe the image in the context of the webpage it sits in. But we aren’t quite there yet.

Caroline, Founder and CEO of Scribly, an alt text and audio description service, talked to me about where they are with developing their own AI software. “We are running a prototype AI at the moment, tracking how accurate the AI descriptions being outputted are. But every single one still needs to be looked at by a human before we’re happy to let the alt text go live.”

It’s exciting to hear that missing alt text may soon be a thing of the past. In my work training those within organisations who are responsible for writing alt text, I’ve found that many people, even those within the disabled community, are hesitant to write alt text for fear of ‘getting it wrong’, so they don’t try at all. This is reflected in research by Susanna Laurin, CPACC, IAAP Honorary Chair & EU Representative. She spoke about an EU-funded project focused on alt text, where the results showed that when web authors were provided with bad auto-generated alt text, they were more inclined to go in and change it. But when no alt text was generated at all and it was left to them to add it, a large percentage of the web authors didn’t bother. “Using AI to do part of the job, with the manual aspect of it, means we aren’t scaling well, but if it gets us 80% or more then we’ll be getting a long way there,” Susanna said of the research findings.

The idea of AI being used not as a replacement but as a support tool for people to utilise alongside their work may come as a comfort to those who are worried about AI taking over jobs. The consensus seemed to be that though this technology is moving quickly, one of the best ways for people to use the tools is to automate their day-to-day tasks and boost their productivity. 

It’s worth noting, however, that this conversation is currently US-centric: “built in America, with American data,” highlighted Christopher Patnoe. “To take AI international in a way that it will have tangible impact we have to get different data… where we’re going with this AI work, it’s often not useful in the real world… (we) have to do it in a thoughtful, intentional way.”

The connection between data and disability law in Europe is potentially one of the things limiting the collection of this data, but for good reason. The Data Protection Act means that the ‘meaningful’ and ‘high quality’ data that companies like Google are asking for needs to be given with permission. This involves the “collaboration of disabled-led organisations who can connect with disabled people in a meaningful and non-paternal way,” Alejandro Moledo del Rio, Deputy Director of the European Disability Forum (EDF), raised during a panel on the European Accessibility Act (EAA).

One area of great potential, and potential issues, for AI is employment and recruitment. AI is already actively used in recruitment: personality tests, eye-contact tracking that monitors engagement in interviews, and timeliness monitoring that measures how fast a candidate completes a question. These tools tell the employer who to hire or not, but they are all based on neurotypical answers and behaviours. So what accommodations are required around the use of these tools? “It must become mandatory for employers to indicate that these tools are in use,” Maria offered. “But then what? Is the solution then to offer an in-person interview? If this is the case, we have to account for the unconscious, and conscious, bias of people.”

“Everything having to do with employment is high risk for AI,” said Ryan Carrier, Executive Director and Founder of ForHumanity. Through his work, he has been going through all of the potential use cases for HR-related AI software. “Highlighting the potential risks has been a crucial component to prove that the things the AI is testing for match up to the O*NET descriptions.” O*NET is a database of occupational descriptions managed by the US Department of Labor, used to make sure that hiring accounts for an individual's skills and ability to do the tasks required for the job they are applying for.

The automation of the hiring process may sound like a welcome relief to hiring managers and recruiters everywhere. But it throws up an obstacle: by its very nature, the disabled and neurodiverse community is not standardised.

“Another risk is AI HR platforms that use data to track models of ‘high-performance employees’ in the workplace. They are digitising these people and using them as a benchmark for new candidates through the screening process. Many people with disabilities don’t try or take the same path… of top education. Many of us may choose non-traditional education paths. So how are AI tools going to recognise our unique choices, due to barriers or other restrictions, during the hiring ecosystem?” Eyra Abraham, ForHumanity and AAPD Start Access, shared from her own lived experience.

This kind of HR gamification is already present in companies such as Uber, which incentivises drivers to work during peak times. But when brought into the everyday office experience, tools that decide whether or not you’re performing well are based on a standardised idea of what a working day should look like. Eyra continued, “AI has the potential to define what is normal, and this can be a big problem for disabled people.”

But it isn’t all scary! Jhilika Kumar, Founder and CEO of Mentra, is building a neurodiverse company, bringing hope that disabled and neurodiverse people are developing AI for the community. Mentra is a cognitively accessible job-finding platform. They have over 40,000 job seekers on the platform and 17 employers, including Fortune 500 organisations, who actively post job ads. Interestingly, they also welcome over 300 service providers (career coaches and others who empower job seekers). Their mission? To ensure no one is left behind in the job-finding process. A new area of exploration for them is leveraging AI to empower job seekers: still in development, their virtual platform Neuro helps with job interview support, professional emails and professional etiquette. They have received positive responses to their standardised, neurodiverse-friendly job descriptions, which use a consistent layout that contains only the information the candidate needs.

It is encouraging to see platforms like these being created, but I wonder whether people will even realise that what they are using is AI-generated. “People should be made aware of when they are consuming content produced by AI, so they can make an informed choice as to whether they want to consume the content or not,” suggested Michael Cooper, a Web Accessibility Technical Strategist, reinforcing the need for a flagging system like the one Andrew Kirkpatrick proposed.

So where are we now, and what does the future hold? It’s evident that generative AI is here to stay and will continue to evolve. The key now is to keep the pressure on the organisations leading the development of AI to actively increase the amount of ethically sourced data from disabled people in the datasets their AI systems pull from. One way to do that is to ensure that disabled and neurodiverse people are hired into the roles critical to the development and rollout of this technology, and that is conditional on inclusive and accessible workplaces.

Here is where my focus lies, and thanks to Churchill Fellowship funding I am looking forward to studying in more nuanced detail the link between disability employment, inclusive culture and innovation.

And on the future potential and risk of AI? I share the sentiment of Mike Buckley: “The possibilities are limitless… we don’t have a choice but to embrace this, cautiously with trepidation.”
