The opening plenary round table, featuring thought leaders from advocacy, industry, and startups, provides a comprehensive roadmap for participants to explore the many AI topics and sessions during the Summit: AI applied to accessibility coding and testing, speech and image recognition, AI enablers for assistive technologies, and the questions that organizations need to tackle to ensure that AI is used in ethical, non-discriminatory, and safe ways. Not to be missed in the context of the most impactful technological evolutions of the past decade!

Moderator: Mark Walker, Head of Marketing and Portfolio, AbilityNet

Speakers:

  • Jenny Lay-Flurrie, Chief Accessibility Officer, Microsoft
  • Christopher Patnoe, Head of Accessibility and Disability Inclusion, EMEA, Google
  • Mike Buckley, Chairman and CEO, Be My Eyes
  • Jhillika Kumar, Founder and CEO, Mentra
  • Maria Town, President and CEO, AAPD – American Association of People with Disabilities

This video currently lacks captions. We expect captions to be available by February 13, 2024.

Transcript

FRANCESCA CESA BIANCHI: Thank you so much, Jenny, for the terrific insights. I would now like to invite to the stage the guest speakers for the roundtable. Mark Walker will be moderating the session. Mark is Head of Marketing and Portfolio at AbilityNet, and he will be joined again by Jenny Lay-Flurrie; if you would like to join, please come to the stage at this time. The next session is Round Table: How AI is Transforming the Digital Accessibility Ecosystem. Joining on the stage: Jenny Lay-Flurrie; Christopher Patnoe, Head of Accessibility and Disability Inclusion at Google; Mike Buckley, Chairman and CEO of Be My Eyes; Jhillika Kumar, Founder and CEO of Mentra; and Maria Town, President and CEO of the American Association of People with Disabilities. Thank you, the floor is yours.

MARK WALKER: Hello, everyone. Jenny, that's fantastic, thank you for setting the scene so brilliantly. I'm Mark Walker from AbilityNet in the U.K., a disability charity that works in accessibility. I know a lot of you are here for lots of different reasons. This panel is perfect for setting the scene, I think, for where we're at. Jenny, you mentioned this is the moment; this is a different conversation than we would have had 12 months ago because of ChatGPT. Suddenly mere mortals like me understand what it is all about: you can play with it, you can see it, you can touch it, you can make it do things that you probably knew were possible but couldn't quite believe were happening. We're on the cusp of the change, and what we want to do today is just unpack the conversation. As Axel said, AI is everywhere now; you could have a conference just about this, just talking about disability, and there are, I think you said, 172 sessions which mention AI? So it is everywhere. We all know in our community that this is something that we need to take on as a challenge; it is both positive and potentially has downsides. So we just want to talk through some of those things. There is an opportunity for you in the audience to speak to us through Slack, and we'll have questions as we go, so start thinking about them; somewhere around 10 or 15 minutes before the end of the session, we'll start asking for questions. If there are things you want to know, we want to know where your questions and concerns are in terms of how AI is going to change our world.
So there are three parts to this. The first part is about the opportunities: what can it do? We have got some great examples, and Jenny perfectly teed those up; we have Jhillika from Mentra and we have Mike from Be My Eyes. We will then talk about the risks and concerns: Maria will talk about the concerns at policy level and the changes that we know will come, really at the broadest level, in terms of policy that may guide the good or bad outcomes around disability and the lives of disabled people. And then we have got Christopher from Google and Jenny from Microsoft. They have their hands on these tools; they know where the levers and the strings are that they can pull to make this stuff work in particular ways. We're looking at how that all joins up and connects.
The final part is: how should we respond? Us, the people in the community, whether you're in a charity, a consultant role, any number of different roles in the room, people who have a concern about AI. What should we be doing? Is there something to do to join in on the policy side? What do we do when building tools? What should we worry about? How do we correct the impact of ableism, for example, which we know is going to be embedded in some of the tools? We're trying to bring that together at the end, thinking about what you could do to be part of this shift as it happens in the next six to twelve months. Imagine coming back in twelve months and seeing another batch of tools that have revolutionized our lives; we're on the cusp of that. To kick off, I want to mention two things. Jenny, you mentioned old technologies from the '80s and '90s, or I say '80s, you say '90s. To me, there are two things that incoming technology can do, and we should be thinking about both: it can help you do things better, so you can be more productive, get more done, faster; or it can let you do better things, things that you couldn't do before. Those are the two options you have when technology is coming at you: doing one of those two things with it. We'll start with Mike. You're doing better things, I think; Be My Eyes is really a step change in what's possible for somebody to do. Can you introduce Be My Eyes and tell us about what it does and how AI is part of that solution?

MIKE BUCKLEY: Thank you for the question. Thank you for having me. I'm equally as intimidated as Jenny, sitting in a room full of people who probably know vastly more about most of these subjects than I do. Bear with me. Be My Eyes originally started as this beautiful merger of technology and human kindness where someone who's blind or has low vision can call a volunteer for sighted assistance. Fast forward to 2023: it is in 150 countries, 180 languages, over a million calls a year, over a 90% success rate. This works wonderfully. But you know what? Our consumer research told us a couple of things: one, I don't always want to talk to a person; two, what if my kitchen is messy and I don't want a volunteer in my house? So that caused us to say: what can we do, further to the theme of what Jenny was talking about, to give independent choice, to provide a tool that allows someone to access the world in the way that they want? Here comes OpenAI with a large language model that makes the entirety of information on the Internet and the planet, in theory, available for consumption, search, and navigation. We partnered with them to launch this tool called Be My AI, allowing a user to take a picture of anything and then, within 10 seconds, that's where the latency was yesterday, get a very full, thoughtful description of what's in that picture. This is everything from an expiration date on a carton of milk, to a green sweater, to airport navigation. Right. To a whole host of things. So I am really, really wary of the hype cycle around AI. I try as best as possible to avoid hyperbole; it doesn't serve our purposes, and there are too many companies that are chest bumping on too many things. I go instead to the comments of the 19,000 blind and low vision beta testers who tested this product, who have said: this is life changing. I have my independence back for the first time in ten years. I can enjoy Instagram. I'm looking at old family photos, photo albums, and it is giving me incredible descriptions.
You talk about possibility; the possibilities, in theory, are endless. The power and independence are in the hands of the blind and low vision community, which gets me incredibly, incredibly excited. So I think where this goes, by the way, in the not too distant future, is realtime video consumption. It is great to take a picture of something and have it tell you what your environment is. Imagine if it is live moving imagery and you can get realtime access to information in that environment? That's coming. It is really just a function of compute power. I think Microsoft, Azure, others will solve it, Google as well. Jenny will now tell me it is pronounced Azure. So look, I'm incredibly excited about this; there is so much more to do. We should be wary of the hype. The possibilities, I think, are limitless.

MARK WALKER: If you look back, you talk about this 12-month period, do you think you could have seen that coming?

MIKE BUCKLEY: No.

MARK WALKER: Being in it, doing it, is the change; it was not obvious how it would turn out. I'm thinking about the people in the room: how do I get my hands on this stuff and do it? You don't find out by planning it all in advance and working it out. You have been on that journey and explored it, and then you're like, look, it is magic.

MIKE BUCKLEY: For two reasons, right. One, even if you think you know it and you study it, it changes the next day anyway.

MARK WALKER: Yeah. Yeah.

MIKE BUCKLEY: The second, the genie has left the bottle, the train has left the station, it is not going back. I don't think we have a choice but to embrace it, harness it, figure it out. Safely, right, cautiously, with some trepidation, right, but we have to go and try to embrace this, it is potentially the most powerful thing for our communities that certainly I have ever seen in my life.

MARK WALKER: In terms of the people you worked with, the work with OpenAI, they're clearly embracing the inclusive design principle, design at the edges and you get the middle for free; it sounds like they're endorsing that, working with you, going along that line as well.

MIKE BUCKLEY: They and Microsoft and Google have been remarkable to work with. Specifically with OpenAI, and I'm sure it happened with other technologies, but I have never touched those technologies, this is the first time I have had my hands on a piece of technology where the community of blind and low vision users was in at absolutely day one, shaping it.
[Applause]
We went after them. Our beta testers were not shy. They gave a lot of feedback, not just to OpenAI but to me; I have been lit up more than a few times on Twitter and elsewhere, deservedly so. Right. But the first stage of design was through people who are blind or who have low vision. I think it is awesome. Our Vice-Chairman, Bryan, the head of the San Francisco LightHouse for the Blind, said it is imperative that our community bends this technology to its needs, and I think it is happening.

MARK WALKER: I think, you know, quite apart from being the perfect example, I also think that speaks to the response that we all make to this: get in there, do this, because that's how it is going to bend, that's the shape it needs. Looking back on other transformational technologies, maybe we didn't embrace them quickly enough. Maybe it happened and then we joined in, whereas here you're the vanguard, changing the way that OpenAI works, in particular for those particular users. Jhillika Kumar, can you tell us about Mentra, the story behind what you're doing, and how it works?

JHILLIKA KUMAR: Yeah. So Mentra is an employment network for the one in every seven humans who are neurodivergent, and I must say how excited I am to be seeing this change in technology in my lifetime. How exciting. How we started: my brother is a traditionally nonverbal autistic individual, and he first communicated at the age of 27. It was through an accessible letter board that he shared his first words. What soon came after was incredible poetry; he started expressing that he wanted to feel fulfilled in a job, through employment, and that sort of inspired the founding of Mentra. One in every seven humans worldwide is neurodivergent, with autism, ADHD, dyslexia, some cognitive variation of the human brain, and that potential was not being tapped into in the workforce. You know, there is a 40% unemployment rate in this community, and we wanted to build essentially this cognitively accessible job finding platform that could empower neurodivergent people to enter the workforce. We did just that. We have grown to over 40,000 neurodivergent job seekers across the country, as well as over 17 employers in the Fortune 500 world, and even more interesting, we have a three sided marketplace. We have the job seeker, the employer, and then the service provider: there are over 300 groups today on the platform that include university job coaches, voc rehab centres, career coaches, anyone that empowers the individual to get the job.
We're sort of building this ecosystem in the community with cognitive accessibility from the ground up, making sure that no one is left behind in the job finding process. And speaking of AI, one thing we're really excited about is leveraging AI to, for example, empower job seekers through a virtual job coach. So we have built Neuro, which is, you know, helping with writing professional emails, helping with interview prep. Of course we're not taking the human out of the process; there is that whole job coaching ecosystem. However, sometimes, you know, you don't like to talk to a person all the time, so maybe you can just ask AI: help me through the process and help me with professional etiquette, which for a population that is underemployed and unemployed is not, you know, always common. So how can we reskill the workforce of the future to adapt to this evolving new set of jobs that are emerging with AI? That is something we're thinking about all the time. So I'm really excited to be here, and about all of the learnings that we are going to take from the conference to improve and to collaborate.

MARK WALKER: You're sitting in the middle of the ecosystem, connecting up people. Because we're talking AI, how do people feel about AI in your experience? Are people comfortable, confident, positive, gobbling up the stuff you're throwing at them, or are they sitting back?

JHILLIKA KUMAR: We're in beta testing of the virtual job coach, and we have seen a lot of positive responses. One area is actually building job descriptions: every JD today has different formats, different information, and sometimes the information is not even relevant to what the person is going to do at the job, so we have leveraged AI to build a framework for JDs going forward; we hope that's the future. We have seen a lot of positive momentum from companies even embracing a neuro-inclusive process from the ground up. Now it is about really testing the virtual job coach with job seekers, getting input, building something that actually is valuable and meaningful for them to, you know, embark on a different career, or continue on their career and advance, have a profession, become a manager, have the tools needed to manage an employee. So there is really infinite potential and possibilities that we're hoping to tap into with this tech.

MARK WALKER: On a practical basis, how easy is it to find the tools, engage with them, work with them? And in terms of the service providers, how do you measure the efficiency, the time and effort you're spending to use the new tools and create the technologies? That has to be balanced with the way that you deliver a service and stay profitable. Are you exploring that as you go along? Is it easy to get into, with lots of help and support? I think people here are asking: how do I even start? Where do I go? Where do I get help to use AI in what I have been doing? You have been through that initial process of who do I phone up and ask, or...

JHILLIKA KUMAR: We're so incredibly blessed. When we were going about the fundraising process, we actually did have our deck run by Sam Altman's table, and he did, you know, a few weeks later decide to invest in Mentra; he led our pre-seed round and followed on in the seed round. When we met him for ten minutes, we asked him, hey, why is this even of interest to you? He shared that, you know, he grew up with neurodiversity in his family, his life. He shared that he believes one of OpenAI's biggest strengths is the cognitive diversity of its workforce. You know, he believes cognitive diversity is the next step for our workforce to have that competitive edge. We're very excited, and as for how we've grown: we have northern.com, we have just bought the domain name, and we're, you know, accessible to everyone who has access to the Internet. Hoping to give back.

MARK WALKER: So exciting. Fantastic. We've had a couple of examples of how amazing it can be. I guess in a policy sense, there is the caution that Mike mentioned. Maria, can you tell me about the work that your organization does, and then where AI is starting to slot in? Presumably, quite quickly, you have had to adopt policies and practices that relate to this subject?

MARIA TOWN: Yes. So the American Association of People with Disabilities is a national cross disability Civil Rights organization that seeks to advance the political and economic power of the more than 60 million disabled people across the United States, a very big mission. We actually began to work on issues related to AI a few years ago. I just hit my four-year anniversary at AAPD.
[Applause]
Thank you.
And Jenny mentioned Henry Claypool, who works with us. I remember in the first few months, I began talking to Henry about kind of my vision for what I thought the organization needed to do, and what I said was: we need to be really focusing on the emerging issues, we need to focus on AI and privacy.
In the past few years, we have really centred a lot of our technology work around the role of AI in the lives of disabled people, its potential benefits and its major risks and concerns. Because I'm here to talk about the concerns, one of the areas where we have spent the most time is addressing potential issues related to AI and employment, specifically in the recruitment and selection of job seekers.
Many major employers are now using AI-based tools to help them weed through the thousands of applicants they get for jobs. These tools use personality tests that centre on really neurotypical ideas of what it means to be a successful employee. These tools use facial recognition, AI-based software that monitors things like eye contact, and if you are someone who is blind or low vision, someone who is neurodivergent, someone who has to look at notes as you participate in an interview, you will be weeded out. These tools also monitor timeliness, how quickly you can complete questions. All of this data gets turned into a profile which tells the employer to hire you or not. We see people get weeded out. The other issue, and I think this is a huge question which we're still trying to answer, is: what is the accommodation process for these tools? So there have been discussions about things like providing notice and consent, so that a job seeker can know that their application, their interview, their interaction is being judged by an AI-based tool. The question is, once you know that's happening, what do you do? Right. We have spent, you know, more than 33 years at this point educating the public on their rights as disabled people, on what they can do if they need an accommodation at the job interview. Right. What does that look like for the AI tools? And if the accommodation is to do a standard face-to-face interview, people are still biased. So we have actually had some success, which is tough to say as a Civil Rights advocacy organization these days. About a year and a half ago, the Equal Employment Opportunity Commission, the EEOC, issued guidance around AI recruitment and hiring tools and the Americans with Disabilities Act and the Rehabilitation Act, which just turned 50.
Basically, what the guidance said is that if the AI tool's questions and applications are not tied to the essential functions of the job, you as the employer could be violating Civil Rights doctrines. This is incredible; this was one of the first pieces of Civil Rights and AI guidance out of the gate, and that never happens in disability land, where we're always asking to please include us. I think because of the great advocacy that's happening, in collaboration with industry partners actually, we are beginning to shape the conversation around it.
Outside of employment, there are other major concerns. Is anyone familiar with Electronic Visit Verification? A couple of hands. Okay. This is a practice being used in states for Medicaid beneficiaries, all of whom are People with Disabilities. It is meant to prevent fraud. Individuals have to take a picture of their direct service provider providing them that service. As we know, facial recognition technologies aren't great at picking up faces of people of color. What if the AI decides that that person didn't provide you the service, or that that is not the person who is documented as being your service provider? Do you then have to go fight your state Medicaid agency because you have lost your benefits, or because the direct service provider hasn't gotten paid for the hours they spent with you? These are major, major questions. Unfortunately, each state is answering them in its own way. It is up to organizations like ours to take all of this information and translate it into things that laypeople can understand. I actually think this is where AI can help us, in things like Plain Language translations: how do you communicate this highly technical bureaucratic language to someone who is just trying to figure out how to get their needs met? But then the question becomes who owns that: is it AAPD, ChatGPT, someone else? I could keep going on this. In terms of policy achievements, you know, the White House issued the AI Bill of Rights, including disability and accessibility, and it is now a year old. It is interesting to take a look back across the past year, see what the agencies have done, and see what more we can do to advance the AI Bill of Rights and use it to drive the now bipartisan conversation that's happening in Congress.
I will stop there.

MARK WALKER: That's the progress described there. The question, again from the audience: thinking about the separate advocacy groups and organizations you're connected to, how do you see their role in the next 12 to 24 months? You're giving an amazing overview there; as you say, the problems are on the ground really, the problems in the day-to-day lives of people will relate to the particular setting that they're working in. Are you seeing a groundswell of interest and activism in the advocacy communities?

MARIA TOWN: There is a role for everyone to play, whether you're an individual, a community-based service provider organization, or a national advocacy organization. I think as individuals we need to get educated; we need to understand how these tools are being used in our lives and what's happening with our data. We need to be able to determine for ourselves how we want our data to be used. The same goes if you are a direct service provider, especially if you're working in the employment space, the kind of Medicaid direct service space, even in community integration. I will tell you, I recently had a restaurant use an AI chatbot to try to tell me if the restaurant was accessible. It did not go well.
I asked the bot, is your restaurant wheelchair accessible, and it literally replied: yes, because of national law we're ADA compliant. No. (Laughter). Even coaching people like me on what prompts to put into the phone to get the information I wanted, that needs to be happening. In the states, again, each state is figuring out a series of policies around how they're going to regulate this, from places like California, which is trying to establish a statewide AI governance bill, to state Medicaid and Social Security offices that are figuring out their own platforms, to states like Pennsylvania, which use AI to make decisions around who can and cannot adopt children, whose families stay together, right? There is so much work to be done, and all of us have a role. And I think for groups that work with industry to promote accessibility, that is a continuing need. Right. Whether or not the AI is biased, the question of the actual accessibility of the platform remains consistent. We still need, we have to keep humans in the loop, we have to continue to push for basic accessibility, and no matter how well the AI is trained, we will need to continue to push against ableism and push for greater representation of disability data in these large language models.
One of the other persistent problems that I would love to talk to everyone here about is the fact that usually disability data is not present.
When it is, well, look around the room and see how beautifully diverse, how heterogeneous the disability community is. Even within disability types, and I will use myself as an example: I have cerebral palsy, and you can have ten people with cerebral palsy in the room and we all have different needs and, importantly, different preferences. We should be able to make different choices. What results with AI and disability data is that we all become a community of outliers. We can't have that happen, so how do we solve that problem?

MARK WALKER: Should we ask Google?

MARIA TOWN: Yes. (Laughter).

MARK WALKER: Christopher, you have five minutes! What does this look like? We talked about your particular involvement in AI. Looking back, you know, as Jenny said, this is not new; some of the problems that Maria is describing are everyday problems, they just have a different flavor, a different dynamic.
Taking the standpoint of your position within Google, and obviously please introduce yourself, where does this land in terms of your particular role around disability?

CHRISTOPHER PATNOE: For those that don't know, I'm Christopher. I am talking very loudly. I wanted to tell a story, sort of, about how we're going to screw this up, what we need to do, and where it is going to go. The technology is constantly evolving with time. A thing that I want you to understand is that this technology is really raw and new, and we are constantly pushing the boundaries, and we have to push the boundaries to create the future we want.
When you're pushing boundaries, you're going to blow it. Mistakes are going to happen. What I want to invite the community leaders here, community leaders in this space, to realize is that people are going to make mistakes; we're going to get it wrong, everyone is going to get it wrong. Don't cancel, don't yell and scream and sue; get on the screen and help. The only way we can actually advance the technology is by making mistakes, learning from them, and moving forward. Accessibility is a process, not a destination; we're never going to get there, as Jenny said. So, let us screw up, help us learn how to make it better, then move it forward. That is how we get something that's really powerful.
Two, one of the things that I have learned, sitting in this room and being back in the U.S., is that much of this conversation is very U.S.-centric. It is a very big world, with languages and cultures and norms that are not the same as here in America. The technology that we have was probably built in America with American data. So what I want us to realize is that as we take this technology internationally, as we need to, the data has got to change; we need different data, we need different forms, and we need to try to bring the world together in a way that both brings people together and also keeps the things that are unique and interesting about us separate, allowing us to be who we want to be. The needs and preferences matter.
The third thing is, this is the third time through AI; we have done VR five times now. What is interesting to me is where we're going with this AI work, especially now taking a look at form factors. Right now, everything is on the phone or laptop. That is not useful in the real world. As you take AI and put it in things like glasses, which we all know are going to be a part of the future, then we have this really interesting version of: I know who you are, I know where you are, I know what you see. If we can do this in a privacy-thoughtful way, we can really empower people to live in a world that was not designed for them, because we're supplementing with technology the things that someone may not be able to do as well as other people. Take me: walking down the street, I get lost all the time. I would love to have a pair of glasses that says, hey, dummy, it's five feet behind you, turn around. That's what the technology can do when thoughtfully done, done in a way that allows me to live in the world the way that I want to.
For me, AI is a building brick in how we move technology forward, and we have to do it in a thoughtful, international way to have a really heavy impact. We have to let ourselves blow it, because if we have to proceed so carefully that we can't push those boundaries, we are never going to get to the destination we all want. And we have to understand that we can't use it today in the same way as in the future; it has to evolve to meet those goals.
So as we look at AI, realize this is a wonderful moment, an exciting moment. We are going to look back and say, God, that was cool. Then, God, that was painful. Then we get to be on the other side of it, and hopefully, through this technology, all of our worlds will be better.

MARK WALKER: Cool. In the sense of your work within Google, we were talking about OpenAI working with Mike, that sort of edge. Have you got a couple of particular examples of the bleeding edge use of AI in relation to disability, things that really excite you from within your sort of universe at Google?

CHRISTOPHER PATNOE: I don't want it to be Annie Get Your Gun, "I can do better." There are two things that I'm excited about. One is the work that we have done on non-typical speech patterns. This is the Speech Accessibility Project, where Microsoft and Amazon and Google and Meta came together and created a dataset allowing anyone who has access to it, and it is at a university, it is not owned by any one company, to create a voice model with a much better base, because we have all contributed voice data that allows non-typical speech patterns to be represented. This is AI; it may not be generative AI, but this is AI 2.0. Then you have things like Lookout, where we're seeing AI, the technologies that really enhance people's lives by looking at the real world around them, listening to the real world around them, and providing context that you may not have otherwise. So this AI 2.0 stuff is as important as the generative AI stuff. They do different things. What will be fun is watching these things come together and influence each other.
We have had this application, Lookout, for a couple of years, and it is quite useful; if you look it up and use Android, it reads the text of the world around you, describing things for you. But recently we shipped our first GenAI feature: you send a photograph, we provide you a description, and you can ask questions about it, have an interaction with your stuff. It is different from what ChatGPT does, but still as useful. It is interesting to watch the technologies merge together to create experiences that have been designed with the community, to create solutions that hopefully meet the needs of most people.

MARK WALKER: Thank you. Jenny, you mentioned something I didn't know, and I don't quite understand how it works. You said, I think, that you put data into the model around disability; I hadn't heard that before. A question from the outside: if large language models, or whatever they may be called in the future, have biases in them because of where the data is coming from, the data naturally contains all of these things, in what way do you tune those out, in what way do you influence what comes out of the other side? It sounds like what you have done is put the data in deliberately to be found, so that better quality information is available. Have I understood that correctly from what you said earlier?

JENNY LAY-FLURRIE: Yeah, no, it is a great question. AI is based on data. So what we call foundational models are based on vast amounts of data, and I think the real difference this year is how much data they are based on. Who knows Harry Potter? Anyone? Okay, Peter has his hand permanently up over here! I like to think of data as a Harry Potter library. One of those infinite libraries, where it goes up and up and up and up, and on the top you have all of those books that you really shouldn't open, and they're locked away, they have multiple spells on them. If, for example, you open them, all tragedy happens, and you have got some that you can't even see because you're not privileged enough, you just don't have the right magical rights. That's how I imagine that library.
That's not a very technical description, I want to be clear.
That's how I really visualize the language models, and at the end of the day, the books you put on the shelf, they matter. Whatever books you have got on the shelf is what the language models are going to pull back and give in response the easiest. Maria said it best: if you don't put the right data on the shelf, it ain't going to work for the edges of humanity, and identity and preference, and so you have got to do this deliberately. A couple of years ago, it was making sure that if you're writing in Word and you're talking about disability and you want to use a terrible, miserable word to describe disability, which we all know some folks will try to use, for bias reasons, for lack of education, the language models behind Word were actually saying: are you sure you want to use that? That's not a good word. Here is a different word, so way better, not going to offend disabled people.
That's the easiest, quickest example of that. I think what we're getting to now, as Mike had said, as we evolve from text to images and now video: how do we make sure that that library includes images of canes, of every kind of scooter, of every kind of hearing aid; that it contains images and descriptions that are modern, that are relevant, whether you're in the U.S. or whether you're in Nigeria, so that you can pull them back off the shelf. That is very, very instrumental. And who, bluntly, will add the books to the library? Disabled people. So the biggest thing, listening to all of this, I'm sitting here, kind of, yeah, just asking: do you have your employee resource groups, your disability communities, trialing your tech? That's the number one thing that we do. We have to take that feedback in and we have to make sure that those books end up in the library.
It is hard though, right?

JENNY LAY-FLURRIE: Not easy.

MIKE BUCKLEY: If there is a model with 7 trillion pieces of information in it, even if you have a repository of good data from the disability space that's several million pieces, how does several million impact 7 trillion? There will be a lot of iteration and a lot of turning of the dials to make sure that these models work in a way that's truly inclusive, truly accessible. It is not just as easy as ingesting data from various use cases. This goes to everything you're talking about in terms of the interview process, the screening process: even if you get a little bit of data in there, it is not enough to stop bias, to stop ableism. This is your point about the human in the loop, making sure that we're testing these things.

JENNY LAY-FLURRIE: If I may, the one example that I think you beautifully called out is the Speech Accessibility Project. Who has heard of this? The University of Illinois, some of you? There are three hands.
Peter, where are you? He doesn't have a hand up on this one. My gosh!

JENNY LAY-FLURRIE: He's busy. This came from Steve Gleason, actually. Folks know Steve Gleason from New Orleans, which I took a long time to learn how to pronounce. He has eye gaze technology, he has his own personal voice that he uses, and he wanted to advance speech technologies. We all do, but every company was doing it in a very siloed way. And so at his ceremony, which happened at the Capitol just before the pandemic, there was a real call to action: how can we work together to build data samples that would elevate and accelerate speech technologies across, whether it is Down syndrome, cerebral palsy, ALS, whatever it may be. It took two years to get that thing moving.
It is the first time we have had a true partnership between five tech companies working together with an independent host to gather that data from the disabled community, with the right privacy and security protections, that then would help us accelerate the advancement of speech technology and speech recognition. It is a great model, I think. And we're still learning from that, right.

CHRISTOPHER PATNOE: A final thought today, we're lucky: what would be really interesting is if we had some kind of disability dataset that all of the different models could incorporate, to have a common view of what it means to be disabled. Right now there is ChatGPT and others, all of these private ones, and we all have different data, so we're getting it wrong in different ways. Maybe this is something that we can talk about in the future; maybe we can come up with a set of data relevant to AI around disability so that everyone speaks the same language.

MARK WALKER: The interesting thing on that project is that it stands out as being the only one I can think of that comes from the consumer point of view, the activist point of view: you work together, you have a cool project going on. If the models don't converge and start reaching a point of consensus about the sort of data they have in them, you continue to have the silos, where maybe one model is more ableist than another toward blind people, and another one is more ableist toward deaf people or something. That's a natural consequence of keeping them separate. That's tough.

CHRISTOPHER PATNOE: There are countries and universities all over the world building it. That's why having this data, that can be contributed to internationally and accessed internationally, gives the best chance of having a common experience.

MARK WALKER: While saying that, I'm looking at the Slack questions here. Most are essentially: what if the answer is wrong? I wondered if, in the testing, a few examples came up, where we did this and we saw the bot coming back with this. What sorts of things...

MIKE BUCKLEY: AI is not perfect. We talked about humans in the loop. So in the way we have designed Be My AI, the user always has an option to go to a sighted volunteer. We put AI at the front of Microsoft's Disability Answer Desk and, as Jenny said, it is answering about 90% of the questions: complex formulas, how to hard reboot the Surface tablet. It is answering about 90% of the questions, but what about the other 10%? We have built in systems where the user has the opportunity, at their own discretion and choice, to keep a human in the loop. Sometimes it is wrong. Imagine if you were looking at the ingredients on a package and it got the allergens wrong. What about the pill bottle? What about crossing the street? You should not do that with Be My AI or any other AI system yet. Right.
The power is magical and remarkable. However, it is not 100%, and there is not an AI researcher in the world who will tell you that it is 100%.

JENNY LAY-FLURRIE: I second, third, thrice that motion. AI is another tool in the toolbox. It is not going to solve everything, but it will help us elevate and advance if we get it right. Every time it gets it wrong, the feedback goes in. We have seen that with Be My Eyes, we have seen hallucinations, that's a word I cannot say! We have seen those things, and then over time there is the learning that happens on the back of them. But we have to be incredibly thoughtful about where and how it is used, making sure folks aren't using it for the wrong thing.

MIKE BUCKLEY: It is the confidence in the errors too. When it makes a mistake: yeah, that bottle of wine costs $22. Well, it says $45 on the menu.

CHRISTOPHER PATNOE: Have you heard it described as mansplaining?

MARK WALKER: Perfect. We have five minutes and we have not mentioned regulation. We talked briefly before about the E.U. Is there a role in some sense for government in trying to create solutions to the problems around the biases in the data we're using underneath, and equally, presumably, at the other end of it, the applications and the way it is being used in different service areas that impinge on people's lives? How do you see that changing? You have the building blocks in place, and the international angle, of course, is equally part of that picture from your point of view, in terms of the U.S. and equally in other parts of the world.

MARIA TOWN: There is a huge role for government, especially to create a standard across the board, for individuals, for organizations, for companies. At AAPD, when we push companies to do something, sometimes we get great responses and they want to do the right thing even if not required to, but a lot of the time what we hear from major corporations is: we're waiting to see what the government tells us to do. I'm not, this is not applying to anyone on the stage.
You know, I know you said don't sue, but I will say lawsuits can help. We saw that again in Pennsylvania, when an AI model decided that disabled parents could not keep their children; there was a lawsuit, and the individuals won. That's going to require that state to rethink its system. So there is definitely a role for government and regulation. I think even government is trying to figure out what that looks like. There are regulations, there is guidance, there is legislation, which probably won't happen for a long time. We need standards that are applicable to at least everyone in the nation, and then, of course, all of the work happening in places like Canada and the E.U. And my question is for other countries: I know where disability stands in the AI conversation here in the U.S., and I'm not sure where we're at in other countries.

MARK WALKER: Because we have one minute, I won't tell you!
(Laughter).
It takes too long. In the E.U., lots of stuff is happening; there is a session on the European Accessibility Act. Coming from outside of the E.U., that is where the action is when it comes to the regulatory framework, and there is an AI element of that. The impact, and how quickly it will impact, is a question.
So it is 29 and a half minutes past. So I'm going to stop. So thank you so much, all of you, for sharing your ideas, your information and your knowledge.

[Applause]
Thank you for the incredible questions. Thank you.

FRANCESCA CESA BIANCHI: Thank you very much. A round of applause and a photo opportunity. Thank you.
And now we can enjoy the networking break. The exhibitor showcase is opening, and it is a half hour; at 11:00 we'll break into parallel sessions, and please come back at 5:30 for the keynote from the White House. The exhibitor showcase and the networking break are presented by Tech for All. Please enjoy. Thank you.

This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.
