Accessibility Standards Canada (ASC) is developing the Accessible and Equitable Artificial Intelligence (AI) Systems Standard (CAN-ASC-6.2) as a regulatory standard under the Accessible Canada Act. The draft standard will be the first to address AI barriers, risks, and opportunities specific to persons with disabilities. It is also designed to account for the unpredictable pace of change of AI systems. A kick-off keynote will be delivered by Dr. Jutta Treviranus, the chair of the ASC standard committee and director of the Inclusive Design Research Centre, who drafted the seed standard. Dr. Treviranus is well known for her accessibility work over the past 40 years. In addition to her research and academic responsibilities, she has served as lead project editor of the ISO 24751 standard (ISO/IEC JTC 1/SC 36), which supports automatic matching of user accessibility needs with digital resources and user interface configurations, and as chair of the Authoring Tool Accessibility Guidelines Working Group (AUWG) of the World Wide Web Consortium (W3C) Web Accessibility Initiative. She is the recipient of the 2022 Women in AI Award for Diversity, Equity and Inclusion.
Introduction of Keynote Speaker by Pina D’Intino, CPACC, Senior Accessibility Adviser, Strategist and Business Implementation, Aequum Global Access Inc.
The keynote will be followed by a Q&A with the audience.
Transcript
Good morning, everyone. If you will kindly take your seats, our program will begin in three minutes.
Good morning, everyone, if you will kindly find your seats, our program will begin in three minutes. Thank you.
Good morning, everyone. At this time, please welcome to the stage, the senior accessibility advisor, strategist of business implementation for Aequum Global Access, Pina D'Intino.
(Applause.)
PINA D'INTINO: Thank you, everyone. It is my great, great pleasure to introduce our keynote speaker. Some of you may know her; actually, I would be surprised if there is somebody in this room that doesn't know her. But every time I spend time with you, Jutta, I learn something new about you. One of the things that I learned is how much, from the time that she was just a little child, she has been giving back to people with her kindness, her gentleness, and her willingness not only to advocate for Persons with Disabilities but to befriend them. Her efforts to help newcomers, to build capacity for Persons with Disabilities, and to build capacity for those who want to support Persons with Disabilities are immeasurable. She started in accessibility more than 40 years ago, and for the last 30 years she has been leading the Inclusive Design Research Centre in Toronto, where she has also incubated an inclusive design program that has been running since 2011. We are proud to see today a lot of graduates from her program here in this room, demonstrating their leadership. And all of this is because Jutta has invested, not just in Canada but globally, working with the G3ict and now looking at standardizing AI. It is my great pleasure and honor to introduce a colleague and my friend, Jutta Treviranus. Thank you. (Applause.)
JUTTA TREVIRANUS: And, of course, that starts me off blushing, which is a great beginning to a talk. Thank you so much, Pina.
And I'm hoping that we can get my slides up. So before my slides arrive I have called this talk Accessible and Equitable Artificial Intelligence. And that is the topic of a standard that we have been developing in Canada, specific to AI. There has been a ton of buzz about AI here at the conference. Ahh, good, we're there.
And one of the sayings about AI that I really like, which many friends within the field have said, is that AI should progress at the speed of trust. However, one of the other sayings that I heard here at the conference yesterday was to trust us to break things, to make mistakes, to push forward. And what the many people that are here with disabilities have the right to ask of you is this: you need to have a deeper understanding of the relationship between disability and technology.
There is another saying about disability and technology. For most people, technology makes things convenient. If you have a disability, technology makes things possible.
And therein lies an awesome responsibility. Technology is relied upon to speak, to read, to write, to learn, to affect the world, to navigate the world, to eat, to express love, to remember, to plan, even to breathe and to live.
And our relationship to technology by necessity is more intimate. It is essential because we have no choice. It is what makes things possible. This relationship also makes us more vulnerable. We should not have to give our trust to an abusive partner. We are disproportionately vulnerable to the mistakes, to the breaking.
And beyond guarding our homes it is even implanted in our brains and in our vital organs. If you have a disability, the opportunities and the risks are at the extremes. Sorry. I'm not catching up. Oh. Shoot. Pressed the wrong button. Okay. Technology.
And there are undeniably quite a few extreme opportunities. We can recognize speech, gestures, patterns. We can find target objects. We can match or label objects. We can remember forever and remind on time. We can sort paths, and we can detect common mistakes and correct them.
AI is wonderful at mechanizing the formulaic. But this is not what I'm talking about when I'm talking about vulnerability.
There is a less known and less heralded risk that is creating an infrastructure of disability discrimination. It is finding, matching, sorting, labeling, measuring, optimizing, calculating, analyzing people at scale.
And therein lies the problem. Because it is doing things that mean that we are the collateral damage. The way AI is currently designed is hostile to difference, and we are more than a little different. But this problem precedes AI. AI is mechanizing, accelerating, amplifying and automating an existing problem.
It's propagating discrimination faster, more efficiently and more accurately. And unfortunately, despite the emergence of an entire AI ethics industry, it is missed by these AI ethics efforts.
My first alarm happened about ten years ago. And many of you have probably heard this story told, and I have heard it retold by quite a number of people, sometimes alarmingly. But my first alarm happened when I was invited by our Ontario Ministry of Transportation to help them celebrate their 100th anniversary.
And they gave me an opportunity to test a number of automated vehicle learning models that would be used to guide vehicles through busy intersections. And knowing AI and knowing what AI is good at and what it isn't good at, I decided to test it with an unusual, unexpected scenario. And this was a friend of mine who pushed her wheelchair backwards through the intersection.
And when I tested these learning models, all of them chose to proceed through the intersection. If this had been an actual vehicle in an actual intersection, they would have run her over. The developers all said, don't worry, these are immature models; they haven't had enough data regarding people and wheelchairs and intersections; come back when we've trained them and given them lots of data. When I came back, they all chose to run my friend over with greater confidence.
(Laughter).
JUTTA TREVIRANUS: They were confident because the data had told them that people in wheelchairs travel forward. And this, of course, set me on a ten-year journey of alarm. But it made sense, because I had had a sense of this issue for quite some time. Over the 44 years that I have been working in the field, I have been collecting data points. I have been asking people: what do you need to thrive in your world, in your life, in your work, in your education? And it is a fairly multifaceted, highly diverse set of data. The only way that I can actually plot it (and I am describing most of the images that I will show on the screen) is as a very, very bad two-dimensional image of a multivariate scatter plot.
And it looks like a normal distribution. As in any normal distribution, 80% of the needs are clustered in the central 20% of the distribution, and the remaining 20% of the needs are spread across the 80% at the periphery. So it roughly follows Pareto's 80/20 rule as well. I call it the human starburst. One of the things that you'll note is that the dots at the periphery are far apart, while the dots in the middle are very close together.
Meaning that if your needs are in that center, then you are very much like other people within that center, but if your needs are out at the periphery, you are very different. People with disabilities are very different.
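A minimal sketch of this starburst pattern, using purely synthetic data (a multivariate normal distribution standing in for the needs data described above, with arbitrary dimensions and sample sizes): points near the centre sit close to their neighbours, while points at the periphery sit far from everyone.

```python
# Synthetic illustration only: needs modeled as a 10-dimensional
# multivariate normal, not the actual survey data described in the talk.
import numpy as np

rng = np.random.default_rng(0)
needs = rng.normal(size=(10_000, 10))   # one row per person
dist = np.linalg.norm(needs, axis=1)    # distance from the centre
r80 = np.quantile(dist, 0.80)           # radius enclosing 80% of people

def mean_nearest_neighbour_gap(points):
    # Brute-force nearest-neighbour distance; fine at this small scale.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

inner = needs[dist <= r80][:500]        # subsample for speed
outer = needs[dist > r80][:500]

print(f"80% of people fall within radius {r80:.2f}")
print(f"mean gap to nearest neighbour, centre:    {mean_nearest_neighbour_gap(inner):.2f}")
print(f"mean gap to nearest neighbour, periphery: {mean_nearest_neighbour_gap(outer):.2f}")
```

The exact numbers are arbitrary; the point is the widening gap between neighbours as you move outward.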
And because of the way we've designed our world, because of the economies of scale, because of advice like "pay attention to the 80% that only require 20% of the effort and ignore those difficult 20%, so that you can have quick wins," design works if your needs are clustered in the middle. It works less and less well, and things become more and more difficult, as you diverge from the middle. And if your needs exist out at the periphery, then most designs won't work.
And unfortunately the same pattern happens in Artificial Intelligence. Predictions, decisions and determinations made by AI are highly accurate in the middle, become inaccurate as you move away from the middle, and don't work as you get to the periphery. The predictions are wrong. And this ripples through every part of our lives, whether it is design that either fits or doesn't fit, or Moore's law being true or not true: availability, reliability, functionality and cost are getting better and better if your needs are in the center, and worse and worse if you are out at the periphery, because as the speed of technology progresses, it is harder and harder to catch up.
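A sketch of the same point about accuracy, again with invented data: a simple model fit to a distribution like the one above is accurate where the data is dense and increasingly wrong toward the periphery.

```python
# Synthetic illustration only: a simple model trained on data that is
# dense in the middle and sparse at the edges.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5_000, 5))
# The true relationship has curvature that a linear model misses,
# and the miss grows with distance from the centre.
y = X.sum(axis=1) + 0.5 * (X ** 2).sum(axis=1)

A = np.c_[np.ones(len(X)), X]                 # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # ordinary least squares
err = np.abs(A @ coef - y)
dist = np.linalg.norm(X, axis=1)

# Error by distance band: accurate in the middle, wrong at the periphery.
for lo, hi in [(0, 1), (1, 2), (2, 3), (3, np.inf)]:
    band = (dist >= lo) & (dist < hi)
    print(f"distance {lo}-{hi}: mean abs error {err[band].mean():.2f} ({band.sum()} people)")
```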
Knowledge, truth, evidence: we've reduced knowledge and evidence and truth to statistical evidence, to quantified systems. So if you cannot reach statistical significance, then your truth is not worth funding. The knowledge that we need to gain isn't going to earn a journal article; you won't be published. Academics who study it are not supported as much. And the disparity extends into our education, which is doubling down on creating standardized learners; into work, which is looking for replaceable workers; and into democracy, which is reduced to one person, one vote, so that the trivial needs of the many outweigh the critical needs of the few.
Of course, there is another side to this story. Because those individuals at the periphery, those needs at the periphery (sorry, I'm going backwards again) are the ones that we really should be attending to. And this pattern that we have within all of our systems not only damages the individuals whose needs are out at that periphery; it damages all of us. Because of the way that we're doing things, with mass production, mass communication, mass marketing and a popularity push, we're depressing innovation. It causes greater conformance and lock-in. It reduces reliability, flexibility, resilience and responsiveness. We are thereby reducing diversity. We are homogenizing towards a monoculture.
As was stated by Claus yesterday, at any time, when you least expect it, with the greatest probability, we are all going to end up at that periphery. If we design our AI systems, and our systems in general, right out to the periphery, then we have room for change and growth. And that's where we find innovation and weak signals.
What's wrong? What am I concerned about in terms of AI? The AI decision systems, the ones that are almost everywhere, making decisions in all of our lives (90% of employers are using them) assume that past success equals future success.
They assume that optimizing for the data characteristics associated with past successes increases future successes. And the data characteristics that determine success need not be specified or known to the operators of the AI.
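As a hedged illustration of that assumption (synthetic data and an invented scoring rule, not any specific vendor's tool): a system that scores candidates by similarity to past successful hires penalizes difference on every feature, relevant or not, without anyone specifying which characteristics matter.

```python
# Synthetic illustration with an invented scoring rule.
import numpy as np

rng = np.random.default_rng(2)

# Past successful hires: genuinely qualified (feature 0), and
# incidentally similar on features 1..3 (career path, speech pattern,
# CV format), without anyone having chosen those characteristics.
past_hires = np.hstack([
    rng.normal(loc=1.0, scale=0.3, size=(200, 1)),  # qualification
    rng.normal(loc=0.0, scale=0.3, size=(200, 3)),  # incidental traits
])

profile = past_hires.mean(axis=0)  # the learned "success" profile
spread = past_hires.std(axis=0)

def score(candidate):
    # Higher = closer to the historical profile on ALL features.
    return -np.linalg.norm((candidate - profile) / spread)

typical = np.array([1.0, 0.0, 0.0, 0.0])
# Equally qualified, but different on the incidental features
# (say, an employment gap and an unusual communication style).
outlier = np.array([1.0, 2.5, -2.0, 3.0])

print(f"typical candidate score: {score(typical):.1f}")
print(f"outlier candidate score: {score(outlier):.1f}")
```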
And unlike some of the discussions of AI ethics concerns yesterday, this is more than addressing data gaps. It is not just about a data desert. It is more than removing human bias from algorithms, the bias of AI developers who may be bigoted. It is more than removing the stereotypes from the labels and proxies.
Because bias towards optimal patterns equals bias against difference. And AI is getting better and more accurate, and therefore more discriminatory, because the more data we have, the more ways we can be different. And this is pervasive. It is in employment, academic admissions, medical calculators and triage tools during the pandemic, policing and patrol, tax auditing, loans, mortgages. We have evidence-based government investment and political platforms that are driven by AI. Public health decisions, urban planning, emergency preparedness, security measures, and even those very trivial cumulative impacts that are out there: the GPS routes you are advised to follow when you try to find somewhere, the supply chain priorities, the design features selected by companies. All of these are guided by these decision systems. So what we have is bad, unfair, inaccurate decisions.
We are also, like my friend who pushes her wheelchair backwards, flagged in suspicion systems because we're unrecognized. We might be fraudulent, a security risk.
And one of the things that I have been doing over the last ten years, since my alarm started, is looking at harm and incident databases. Many administrations have started to catalog AI harms in incident databases. Unfortunately, this has become a very uncomfortable and awkward "I told you so" moment. There are disproportionate reports related to disability, whether it is parents with disabilities falsely flagged as unfit, false positive tax audits, or false positive security flagging. I'm not going to read the whole thing; believe me, there are many.
I think what is of greater concern to individuals with disabilities is not privacy, but data abuse and misuse, because most people with disabilities have already bartered their privacy for essential services. Think of all the information that you need to give out to get the services that you need. Privacy protections like anonymization at source don't work, because you are unique, and anyone unique can be re-identified. And privacy protections like differential privacy, which remove the characteristics that might result in discrimination, eliminate all of the data that would help the AI understand you. One of the things that I have realized is that statistical reasoning as a means of making decisions does harm to anyone out at that edge. Assuming that what we know about the majority applies to the minority does harm.
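A tiny illustration of the re-identification point, with invented records: once a combination of characteristics occurs only once, removing the name no longer hides the person.

```python
# Invented records, for illustration: names are gone, but the
# combination of remaining characteristics can still be unique.
from collections import Counter

records = [
    ("30-40", "M5V", "no assistive tech"),
    ("30-40", "M5V", "no assistive tech"),
    ("30-40", "M5V", "no assistive tech"),
    ("20-30", "M4C", "no assistive tech"),
    ("20-30", "M4C", "no assistive tech"),
    ("30-40", "M4C", "screen reader"),  # occurs exactly once
]

for row, n in Counter(records).items():
    status = "unique, re-identifiable" if n == 1 else f"hidden among {n}"
    print(row, "->", status)
```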
And statistical discrimination even finds its way into those AI ethics measures. We are insignificant when determining the metrics and thresholds. We are invisible or insignificant in error testing; as I was saying in another talk that I gave earlier, you need a lot of deaths to flag the error in AI systems used in warfare guidance. And AI ethics auditing tools are using things like cluster analysis and comparison. There is really no disability data cluster, because we are too different.
You can't detect bias against outliers or small minorities. So we are falling through the cracks, straddling the edges of the clusters used in AI ethics tools. And we are invisible in a risk-benefit framework; we have in the U.S. a risk-benefit framework, and if you are weighing the benefits against the risks, the benefits will outweigh those very tiny, very diverse risks. We become an anecdote.
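A sketch of why such group-based audits miss small minorities, using made-up approval rates and a standard two-proportion z-test: a modest disparity against a large group is flagged as significant, while a much larger disparity against a handful of very different people is not.

```python
# Made-up approval rates; the z-test is standard, the numbers are not real.
import math

def two_proportion_z(p1, n1, p2, n2):
    # Two-proportion z statistic with a pooled estimate.
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Majority: 10,000 people, 70% approved.
# Large group: 2,000 people, 65% approved (a 5-point gap).
# Tiny, diverse group: 4 people, 50% approved (a 20-point gap).
print(f"large group z = {two_proportion_z(0.70, 10_000, 0.65, 2_000):.1f}")
print(f"tiny group  z = {two_proportion_z(0.70, 10_000, 0.50, 4):.1f}")
# Against the usual |z| > 1.96 threshold, the small 5-point gap is
# flagged; the much larger 20-point gap against four people is not.
```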
So what we have done in Canada, and what I'm working on, is a draft standard. And we're doing this very specifically for disability, because disability faces the most extreme risks with respect to AI. And rather than catching up, we've decided that we need to lead. We need to think about how we can protect the individuals that are most vulnerable to the risks of AI. Because AI is traveling so quickly, it is a phased approach: we are starting with those automated decision systems based on data optimization and exploitation, and then we are moving next to the large language models and general AI, which seem now to have gained public attention.
We're also trying to harmonize with, and layer this on top of, existing AI guidance, whether it's the guidance here in the U.S., the guidance in the EU AI Act, ADA, or the directive on automated decision systems. We have done a fairly thorough scan to ensure that we're not asking for something different, that we are harmonized, while making sure that the needs of people with disabilities, who have that extra vulnerability, are addressed everywhere.
And so the things that we've done within this that are specific to disability: first of all, in Canada we have a commitment to "nothing without us," because everything is about us. So we're looking at accessibility not just for the consumer, but also the participation of people with disabilities in the design, development, deployment, testing and evaluation of AI systems.
We're looking at statistical discrimination, which goes beyond the bias that is flagged in much of the other AI guidance. And we're looking at cumulative harm. Most of our systems are based upon impact assessment or risk assessment, and if you use impact assessment or risk assessment, there is nothing you need to do about those individually trivial issues that arise with AI. But imagine if almost everything in your life is deciding against you; there is a cumulative harm that we need to think about.
And we're worried about education. We want to look systematically at how we can advance this so that the problem doesn't progress. The standard is in four parts: accessible AI, equitable AI, organizational processes that support accessible and equitable AI, and lastly education.
In terms of accessible AI, it is about ensuring that people with disabilities can participate in the full AI ecosystem: designing and developing AI systems, implementing AI systems, and being consumers of AI systems. One of the things that we fail to hear in the lovely stories about how AI miraculously transforms lives is that if you need it the most, it actually works the worst. If your voice pattern is far different from the training pattern, it is not going to work for you. If you live in an area where the AI hasn't been trained to recognize the packages or the signs, or you speak a language that it hasn't been trained in, it is not going to work for you.
We also want to ensure that the evaluation and improvement of AI systems works for people with disabilities, and that they can participate in it.
And for equitable AI, we want to ensure the equitable treatment of people with disabilities as subjects of decisions and as people represented by AI.
And within that we are addressing, first and foremost, statistical discrimination, but also all of the other things that you find in much of the AI guidance, including the very good U.S. AI Bill of Rights: reliability, accuracy and trustworthiness; freedom from negative bias; protection from data abuse; freedom from surveillance; freedom from discriminatory profiling; freedom from discrimination and manipulation; transparency; reproducibility; individual agency; informed consent and choice; support of human control and oversight; and addressing cumulative harms.
In terms of the organizational processes, we talk about what processes need to be in place to achieve accessible and equitable AI. And there are a number of things that organizations need to do, not only to address the vulnerability of people with disabilities, but to create AI systems that will work for everyone and that will not result in the types of harms that we're currently seeing.
And lastly, we have education: education about accessible and equitable AI. If you are learning about AI, you should at the same time be learning about accessible and equitable AI. And on the flip side, AI education should be accessible to people with disabilities.
And there is also the education of AI. We should feed back into the AI systems that we're developing all of the feedback, the mistakes, the problems that we're encountering. And that should be a continuous pipeline so that we can continuously improve AI.
And we have within that every approach that we can possibly have. Like most other standards, there is a shall/should component to it. One of the strategies that we are looking at is exempting people with disabilities from AI decisions if they are in a data minority; and, of course, we want to protect against that becoming a second-class system, so it must have equivalence. Another is special recourse if you have a disability, but without the burden on the person with a disability to seek out, create, and argue for that special recourse.
And, of course, there are the usual things that we see in other AI guidance, including informing the subjects of decisions in which AI is used, and requiring transparency regarding training data and decision systems.
And we are looking at things like trust meters, where we have a declaration of what data was used to train the system, and an indicator showing the suitability of the decision system to the instance before you.
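A speculative sketch of what such a trust meter could compute; this is not the draft standard's normative text, and the score function here is an invented heuristic: it measures how far the instance before you sits from the declared training data.

```python
# Speculative sketch: synthetic training data, and an invented
# squashing heuristic for the final score.
import numpy as np

rng = np.random.default_rng(3)
train = rng.normal(size=(5_000, 6))  # stand-in for the declared training data
mu = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def trust_meter(instance):
    # Mahalanobis distance from the training data, squashed into a
    # 0..1 "how well does this system know cases like yours" score.
    d = instance - mu
    m = float(np.sqrt(d @ cov_inv @ d))
    return float(np.exp(-0.5 * (m / np.sqrt(len(mu))) ** 2))

typical = mu            # looks like the training data
atypical = mu + 6.0     # unlike anything the system has seen

print(f"trust meter, typical instance:  {trust_meter(typical):.2f}")
print(f"trust meter, atypical instance: {trust_meter(atypical):.2f}")
```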
And so what we want to do is progress with AI. We don't want to go against innovation, because if anyone needs innovation, if anyone needs something new to address all of the barriers we face, it is people with disabilities. But we want to progress at the speed of trust. Because we know that intelligence that works with the edge of our human starburst scatter plot is better able to adapt to change, detect risk, and transfer to new contexts; it results in greater dynamic resilience and longevity; it will reduce disparity; and it may hold the key to our survival. We have a number of people here from Canada who are working on the standards, and we will have a panel at 11 o'clock where we discuss some of the frameworks within Canada. I would love to hear any questions that you might have.
(Applause.)
JUTTA TREVIRANUS: I know it is not like Jenny's talk yesterday, which was uplifting. I worried about this: oh, my gosh, I'm going to do this Debbie Downer thing. Yeah, Tim?
Good morning, Jutta. Thank you so much for this excellent talk. I'm in the back here. It is Peter.
JUTTA TREVIRANUS: Oh, great.
I'm very curious, we heard yesterday about the large datasets and about the need to get disability data into those large datasets and the fact that we have such little disability data. And even if we had a lot we would be overwhelmed.
JUTTA TREVIRANUS: Yeah, statistical discrimination: even if we had fully proportional data, we would still be overwhelmed by the majority data.
There are mechanisms for filtering the results. I'm curious about any research or discussions you have around that approach to getting through this challenge.
JUTTA TREVIRANUS: Yeah. So for large language models and generative AI, we have tuning systems that allow us to try to push the decisions in a more personalized direction, or in a different direction, and we're certainly experimenting with some of that at our center. We are working on things like our Baby Bliss project, where we asked: if we were to experiment with doing things differently, who are the people most in need of this type of assistance? So we're working with children and adults who are non-speaking.
And there we have had some success in retuning the models. But one of the alarming things that has happened is that some of the kids and adults we've been working with are saying: you know what, I'm losing my identity here, because the system is fixing my grammar. It is offering me the average or typical things that someone would say. It is correcting everything, and I'm losing my quirky humor.
And I'm losing all of the things that make me me. While the large language models are lovely at transactional things, I think all of us are probably experiencing some of that reduction of difference. I gave a talk yesterday at a European conference, and I was accompanied by the next speaker, who was talking about disability and employment. What they touted as the way to address employment equity was to flag for everyone what, within their resume or their application or their interview, would be used to discriminate against them; to remove those things from their resume and their CV; and not to mention them in the interview. Which seems to be going in completely the wrong direction.
Because if we're denying diversity and we're denying complexity, that is a real existential threat to us. Diversity is what fuels our progress and our innovation.
Yes, Peter, to your question: I think there are ways in which we can tune the systems, single-shot tuning; I won't get too technical. And there are interesting experiments. We're hoping that by flagging this issue of statistical discrimination, we're going to be able to do that. There are also algorithms that actually invert the prioritization: rather than using data exploitation, they use data exploration. So within the hiring tool, you try to find people that are diverse. You keep the essential requirements of the job, but for the other things you try to diversify: you try to diversify the languages, and a variety of other things, to make a much more resilient team. But unfortunately, those are not deployed very much.
And of those 90% of organizations that are using AI hiring tools, only a minuscule number have been exploring that.
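A minimal sketch of the inverted, exploration-style approach just described, with synthetic candidates and an invented qualification threshold: filter on the essential requirements first, then greedily select the most mutually different team.

```python
# Synthetic candidates and an invented qualification threshold;
# illustrative only, not a deployed hiring tool.
import numpy as np

rng = np.random.default_rng(4)
n = 200
qualification = rng.uniform(0, 1, size=n)  # the essential requirement
other_traits = rng.normal(size=(n, 8))     # everything non-essential

# Step 1 (requirements, not similarity): keep everyone who meets
# the job's essentials.
eligible = np.where(qualification >= 0.7)[0]

# Step 2 (exploration, not exploitation): greedily build the team
# that is maximally spread out on the non-essential traits
# (farthest-point / max-min diversity selection).
team = [int(eligible[0])]
while len(team) < 5:
    gap_to_team = np.array([
        min(np.linalg.norm(other_traits[c] - other_traits[m]) for m in team)
        for c in eligible
    ])
    team.append(int(eligible[int(gap_to_team.argmax())]))

print("selected candidate indices:", team)
```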
I'm on my way, sir.
TIM CREAGAN: Thank you. Tim Creagan, U.S. Access Board. I would like to make two points. One, with regard to the comment you just made about people scrubbing their resumes of the stuff that makes them interesting and quirky, for fear that it will reveal them as having a disability: I have a hearing loss, so it is an invisible disability; nobody sees it. In my resume, I wrote down that I did moot court, all oral arguments, for writing and speaking. And no one raised the issue, because my qualifications went directly to the job; everything else was irrelevant. The second question I have is with regard to education. You talked about educating for AI. I can understand it in a context like this, where you have policymakers and industry and government all in the same room, and we're all being educated on the same page. How do you see education happening in a broader spectrum? How do you see that happening at the university level, or for the individual? What do you see: public service announcements? What points do you want to emphasize in your education? Thank you.
JUTTA TREVIRANUS: One of the things that I always say is that this is not a binary choice. We need to produce the content and make it available to universities and high schools. We need to provide education in a variety of forms, to people who understand the technology and to people who understand the policy. It can't just be a single sort of message. It has to be directed to people where they are at and where they think their priority interests are. So the education portion of our standard addresses a whole range of means of educating people in various roles and at various levels of understanding and interest. And I see Axel coming here, probably to get me off the stage.
Jutta, thank you so very much. Please join me in thanking her. (Applause.)
So this will complete our morning plenary session. We will have our breakout sessions starting in exactly ten minutes, and we will all reconvene for the keynote from Peter Korn at 5:30. He will be talking about harnessing the velocity of constant change to advance accessibility, a very timely topic. See you at 5:30 in this room.
This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.