AI-enabled neurotechnologies are emerging as an entirely new field of innovation in assistive solutions that can transform the lives of persons with disabilities. While neurotechnologies, including Brain-Computer Interfaces, have been explored for some time, competition and research and development investment are accelerating as major players enter the field, opening the door to marketable solutions for mobile, wearable, and gaming platforms.

Session Chair: Theresa Vaughan, Advisory Council Chair, G3ict/NeuroAbilities, and Research Scientist, National Center for Adaptive Neurotechnologies (NCAN)


  • Darryl Adams, Director of Accessibility, Intel
  • Cathy Bodine, PhD, CCC-SLP, Professor, Department of Bioengineering, College of Engineering, Design and Computing, and Director, Innovation Ecosystem, Colorado Clinical Translational Sciences Institute, University of Colorado
  • Alex Dunn, Founder, Enabled Play




OCTOBER 25, 2022
2:45 PM ET


Services provided by:
Caption First, Inc.
P.O. Box 3066
Monument, CO 80132
800 825 5234

This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.

THERESA VAUGHAN: Hello, everyone. Good afternoon. I am Theresa Vaughan. Before we begin, a little truth in advertising: you might have noticed that Adam Molnar, the Co-Founder of Neurable, is missing today. Unfortunately, he has contracted COVID and is doing the responsible thing by staying away. We won't have him here, but he assured me that if you contact him on LinkedIn, he would be happy to answer any questions you have.
And if you do have questions that are not answered today, I want to say, please make sure we receive them so we can give them their just deserts after the panel.
So, now to the business at hand. Thanks to G3ict and the organizers of this very interesting and important meeting, and an advance thank-you to my august panelists. I am a neuroscientist. I work at the National Center for Adaptive Neurotechnologies, which is at the Stratton VA in Albany, New York, and Washington University and USVC. I am the Chair of the Advisory Council of a project called NeuroAbilities, the latest project of G3ict, and my laboratory has partnered with G3ict to investigate how the burgeoning field, including Brain-Computer Interfaces, can learn from the many years of generating accessible devices, and to convene stakeholders in a dialogue about how these very interesting and sometimes misunderstood technologies can be made available to the people who actually need them.
So, that sets the stage for my three panelists. Before us, they are Titans of accessibility. They create, test, and market software and hardware that address, among other things, accessible and wearable tech for people with disabilities at work and at play. So, I am going to let them introduce themselves and talk about why they agreed to be on this panel with me.
Because they are interested in neurotech, and they are adjacent to neurotech, but not exactly involved directly in neurotech, so they are going to school me and you, and my colleagues, about what lessons we need to learn in order to develop accessible technologies.
So, I am going to just interview these three stakeholders here, just as I would if I were involved in getting them to participate in NeuroAbilities, which, by the way, at least one of them, Cathy, has already participated in quite a bit. First, Cathy Bodine, the Executive Director of Assistive Technology Partners at the Center for Inclusive Design and Engineering, and a Professor at the University of Colorado, Denver. She has expertise in evaluating technologies intended to support independence and quality of life.
Alex Dunn, next to Cathy, is CEO of a start-up called Enabled Play. There he helps individuals and groups, including educators and businesses, turn almost anything into a new input for their computers, game consoles, and more.
And, finally, not least, Darryl Adams is the Director of Accessibility at Intel. Among other things, he directs the Computer and Innovation Program in an effort to embed inclusive design across Intel's entire PC portfolio, and that, as you can imagine, has the power to affect not millions of people but billions of people.
So, a pretty powerful group.
I will start by asking them, why did you agree to be on this panel? And who are you to tell me what to do? (Chuckles) So, Cathy?

CATHY BODINE: Okay. So, I really am on this panel because I like Theresa. And she is a very good person, so I decided to say yes. On the other side, I am a professor of bioengineering, with appointments in the School of Medicine, and Director of the Coleman Institute for Cognitive Disabilities, as well. I work at the intersection of the end user, at the Center for Engineering and Medicine, and the ecosystem that surrounds these folks, whether it is school-based, vocationally based, play-based, whatever it is. I have a real passion for thinking through what it is we are actually doing.
And what is it that these folks actually want and need, versus what do I think? Because what I think doesn't matter. What matters is that we are serving the needs of the people who are requesting some form of assistance, in whatever way that goes.
So, I am here because I think we have this tremendous passion, and suddenly, particularly following COVID, this massive industry interest in our work. The silver lining of COVID for people with disabilities is that we all know now what it is like to be isolated, and it doesn't feel very good.
So, I am trying to help all of us come to some form of consensus around what it is we are actually trying to do in this space. Thank you.

THERESA VAUGHAN: How about you?

ALEX DUNN: I am Alex Dunn. I am on the panel not for any type of background in neurotechnology or neuroscience, but more so for my point of view with Enabled Play and, generally speaking, looking at new human-computer interaction paradigms, ones that are more ubiquitous and inclusive.
We take a hard look at how we as humans interact with technology. At its lowest level, while there is more accessibility, the usability is still not level, so we work on automating it at a higher level. Neurotechnology is an important part of that. When talking about ubiquity, it is not just mobility driven, but how do we get from the thought of what we want to do, what action we want to take, what information we are looking for, to the action, actually seeing it done?
The closest we can get is reading your brain. It is a very interesting aspect to be looking at. But we come from the side of really looking at artificial intelligence and how to apply that to the problem, which is a very core piece of making this type of neurotechnology work and this type of access work.

THERESA VAUGHAN: Thank you. Darryl?

DARRYL ADAMS: I am here today because ultimately I really believe in the creation of a future of technology that is inclusive and accessible to everyone. And I think the time we are in is pivotal. When we look back on this time in history, we will realize we were shifting from a 40-year-old paradigm, where the PC and the laptop have been the computing experience, and getting more and more deeply into the types of different interactions that Alex was just describing.
So, I think Alex and I come at this from very similar viewpoints, but different lenses in terms of coming from the start up landscape to the tech, global, corporate landscape. How do we come together and actually create that vision and execute on it so that the future we are talking about doesn't leave the disability community behind?
From a personal perspective, I am losing my eyesight from the outside in, and I am also deaf in my right ear. So, I have been at Intel for 25 years. The first half of that time I was a technical and project manager doing research and development projects.
The second half of my career, I was dealing with: how do I keep my job when I feel like I can't do it like I used to be able to because of my declining eyesight? So, I started having to understand, basically dig into, the landscape of assistive technology and accessible solutions for my own purposes.
Not long after I started doing that, I realized that, well, I was hooked, basically. I recognized the power of technology and what it can do to bring people together and make people productive. So, I made it my mission to figure out how to position Intel in a place where we, as a global tech company, can make a difference, as well.
So, that is my perspective on this. But certainly in terms of the topic today, this is where the future is going. There is a lot to be said, there is a lot to be learned still, and we will see a lot of failures, as well.
But I feel very excited about where we are headed in terms of interacting with computers through BCI and other non-traditional means.

THERESA VAUGHAN: Thank you. I guess I want to tell a little story before I ask the next question to the panel. Cathy whispered to me before we started, what exactly do you want us to talk about today? (Chuckles) Even though we met several times to discuss this, this is a huge area.
In fact, I had the same conversation with Axel when we started the project. I am a BCI researcher; BCI means Brain-Computer Interface. What Brain-Computer Interfaces do is record signals from the nervous system, interpret the signals, and deliver user intent, and that is a closed-loop system. That means that a BCI is a combination of two adaptive controllers: the device end and the human end. And therein is the challenge, the real challenge, for the science. An underlying understanding of how the brain and central nervous system respond, both motor and sensory, is essential to solving the problems of people who have those issues. Motor issues and sensory issues, right?
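[Editor's illustration] To make the closed loop concrete, here is a minimal, hypothetical sketch of the two co-adapting controllers Theresa describes; the signal values, threshold, and function names are invented for illustration and do not represent any real BCI system.

```python
# Sketch of a closed-loop BCI: the device end decodes intent from a
# signal feature, and its decoder adapts over time, just as the human
# end adapts its signals. All values and names are illustrative.

def decode_intent(signal, threshold):
    """Device end: interpret a scalar signal feature as a binary intent."""
    return "select" if signal > threshold else "rest"

def run_closed_loop(signals, threshold=0.5, rate=0.1):
    """Decode each signal, nudging the threshold toward the incoming
    data so the device-side controller co-adapts with the user."""
    intents = []
    for s in signals:
        intents.append(decode_intent(s, threshold))
        threshold += rate * (s - threshold)  # device-side adaptation
    return intents, threshold

intents, adapted = run_closed_loop([0.2, 0.9, 0.8, 0.1])
```

The point is not the arithmetic but the loop: both ends change, so the decoder can never be tuned once and left alone.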
So, this marriage of accessibility, of the people who need accessible devices, and the questions that you are asking every day for all of the technology, becomes essential and central to what I am doing as a scientist.
And I also wanted to say that Axel is not a small thinker. He was not interested exactly in Brain-Computer Interfaces, but in leveraging the excitement around neurotechnologies. Neurotechnology is a huge area of study. It includes so many things. Cochlear implants are neurotechnologies. Very successful neurotechnologies, and very useful. And perhaps we could see the exoskeletons that Sandy demonstrated yesterday as a neurotechnology, because it depends on the central nervous system reacting with some plasticity.
So, in some ways, I just want to say, this is a very difficult topic to cover, but what I am interested in here is this panel telling me exactly what I am missing. And for those individuals who are involved in start-ups and developing products, like our missing panelist, for instance, our message needs to get out that people with accessibility needs need to be involved in the research.
I have some questions. So, the big question is this. And I expect this to generate a conversation among the panelists.
What are the recent advances in gaming and wearable tech in your industries, in hardware and software, that can be leveraged by developing neurotechnologies, Brain-Computer Interfaces for me? And can gaming and wearable technologies combine with them to be more than the sum of their parts? I think I will direct this question first to you, Darryl, because you said something in our prep session that really struck a nerve for me.
And that was about keyboards. And, so, I wonder if you would like to just talk about losing our keyboards?

DARRYL ADAMS: Yes. This is actually a big topic on its own. If we think about what I just mentioned. We have had 40 plus years of computer interaction that has been designed as a keyboard and a mouse with a screen.
And when that design was first developed, it was not developed with people with disabilities in mind. You wouldn't develop a computer with a screen for someone who is blind. You wouldn't have a keyboard as a primary input for someone who is not physically able to use a keyboard.
But we have gone through 40 years of basically working around these design limitations, and we are seeing pretty fantastic results, with many people here at the conference responsible for that, as well.
But it still gets to me that when we talk about human computer interaction, or human technology interaction, there is an opportunity to change the relationship between people and technology. And to allow technology to facilitate more effective communication and connection between people.
So, I think as we move into more immersive computing paradigms, kind of all the buzzwords we have been talking about at this convention and everywhere else, the metaverse, virtual reality and augmented reality, all of it being fueled with various flavors of machine learning, natural language processing, computer vision: put all this together, and we now have computing capabilities and sensory capabilities that really give us far more latitude, more breadth, to design experiences that can basically emphasize how people want to interact with technology.
So, if you are not a keyboard user, let's not give you a keyboard. Let's have a more conversational computing approach that can accomplish the same end goals. What is nice about this, or at least the trajectory we are on, when we think of VR, for example, we are trying to solve the problem. You have this immersive experience and the keyboard seems kind of weird in that experience. We are pulling away from the traditional compute, and coming up with a new experience.
I just can't help but think how powerful it could be to bring a new channel of information into that mix with a brain interface, allowing the user to passively produce information for the system to consume and respond to. So, I think about things like: as you are interacting with an environment, the system can understand your mental state, in terms of, are you stressed, or are you focused, or are you fatigued? With that information, software developers could take that and go many different directions. You can help users in those situations. Maybe if you are in a gaming context, you could play on that.
I could imagine if you are playing a horror video game, and the system can understand what specifically stresses you in that environment, it could make it more so, something like that. (Chuckles) Kind of dark, but ultimately the opposite side of that could be some really good accessibility-related things, where it detects stress, changes the context, and eases the stress. These are not things that have to happen proactively, but things that just happen naturally based on this additional data stream. There is so much down this path, and it is really exciting.
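[Editor's illustration] As a hypothetical sketch of that idea, a passively sensed stress estimate could feed a simple rule that eases the context; the threshold and step values here are invented for the example and not from any shipping system.

```python
def adapt_difficulty(stress, difficulty, high=0.7, step=0.2):
    """If an inferred stress estimate (0.0-1.0) crosses a threshold,
    ease the context by lowering the difficulty; otherwise leave it."""
    if stress > high:
        return max(0.0, round(difficulty - step, 2))
    return difficulty
```

A horror game could invert the same signal (raise intensity when stress is low); the accessibility version shown here relaxes the context instead.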

THERESA VAUGHAN: So what about the gaming experience, Alex? What do you think?

ALEX DUNN: Yeah. I also wanted to I guess frame my answer. I wanted to go back to the first part of your question which is the advances in technology that have sort of brought us to this point.
Really, are we at a point where these things are ready for gaming? The reason why I think gaming is important is in terms of computer interaction, and compute in general, it is one of the most intensive things we can do from a processing point of view, which is why you need a graphics card to even be able to run it, but also from a human interaction point of view.
If I am, for example, going to just sit back and watch a movie on my laptop, the amount of control that I need is very minimal. I may have to go to the next episode, leave, or search. But when we talk about the spectrum of gaming, the difficulty of gaming, there is a really big jump to the most complex: multiple inputs at the same time.
So, looking at, are we there yet for the human-computer interaction model shift, my answer, I think, is yes. And gaming is a great stress test for it.
I think the reason we are getting there now is really advances in hardware and software. We have massive compute in our pockets, a lot of times even on our wrists, with pretty serious processors in there now and a significant amount of memory we are allowed to use. Most of our phones have 8-core processors now, so we can run much more. We can't create these augmented inputs, controls, and automation without running machine learning, and we can't do that fast enough for something like gaming unless you are doing it on the edge, directly in the same place you are playing the game, which is already doing that compute just to exist.
So, from the perspective of the experiences we are able to create, there is sort of back filling existing games in 2D and 3D, but then looking into these new modalities in VR where we have an opportunity to basically educate users on how to interact differently.
One of the biggest shifts in human-computer interaction is the learning curve. Even going to touchscreens, it took people time to adopt them as readily as they do other inputs.
We are seeing some of the same challenges in AR and VR, where we have to create those learning curves, but we are working through it, and people are learning the new ways to interact, and then we can backfill other platforms and tools, as well.

CATHY BODINE: I have like ten things my team is working on right now that I would love to talk about. I will talk about a health tech project we are working on, but it is a one-off. It ties into our subject matter. If you know somebody with type 1 diabetes, you may know their blood sugar can go up or down. The brain doesn't tell you what is going on, but you become cognitively impaired during those moments. There are 11,000 deaths recorded annually from people having car accidents.
So, one of the projects we are working on, we are working with Dexcom, a continuous glucose monitoring company, and we are in the early stages of developing a very simple app that alerts people when their blood sugar is changing while they are driving.
We are then planning to tie this in with an auto manufacturer, to be named, that will enable us to use autonomous driving features to help navigate folks off the road safely, to the nearest ER, or whatever needs to happen at that time.
This gets to the human-computer interaction in a very different, but I think compelling, way, because it is taking a body measurement, if you will, and saying: whoops, we have a problem which is influencing your neurology right now, and we need to do something about it.
So, that is another kind of very interesting way I think we can think about how these features can be utilized by a bunch of AI and machine learning to your point, yet at the same time potentially save lives. So, we go to that extent.
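[Editor's illustration] A sketch of the kind of alert logic such an app might use, assuming CGM samples five minutes apart. The thresholds are illustrative only; real clinical limits would come from the device maker and clinicians, not from this example.

```python
def glucose_alert(readings_mg_dl, low=70, high=180, fall_rate=2.0):
    """Flag the latest reading if it is out of range, or if glucose is
    falling quickly (mg/dL per minute, assuming samples 5 min apart)."""
    latest = readings_mg_dl[-1]
    if latest < low:
        return "low"
    if latest > high:
        return "high"
    if len(readings_mg_dl) >= 2:
        per_minute = (readings_mg_dl[-1] - readings_mg_dl[-2]) / 5.0
        if per_minute <= -fall_rate:
            return "falling fast"
    return "ok"
```

The rate-of-change branch is the one that matters while driving: glucose can still be in range while falling fast enough that the driver will be impaired before the next sample.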
I also see, for example, and I talked a little bit about this yesterday, we are building a socially assistive robot that is designed to help babies get all the practice they need. What we are really doing is the work of development. If someone is born with complex cerebral palsy, that is in essence a brain injury, right?
So, the idea is, if we can use machine learning, if we can use AI, if we can have a different type of affect recognition where you can understand how this child is performing and how they are engaging, then we can work with them to, hopefully, over time, develop a stronger brain, so 20 years down the road they can maybe go to college, or maybe they can live more independently.
With all these pieces and parts, everything I am talking about, we are not touching the patient, the client, or the user, and that is where this interests me a lot.

THERESA VAUGHAN: I see. So, I guess what you have all said, I mean, has made me think a lot. One thing that you said, Darryl, was about sensors.
The idea that information can come from the brain, and that we can utilize this information, a data stream, to inform our technologies. That is very interesting for me, because everything we do changes the brain. Plasticity is key, and plasticity exists across the life span. We are going to be changing brains, and the technology has to catch up with that; maybe, Alex, you addressed that the most. We have to somehow be able, in this technology, to anticipate that we are going to be changing our brains.
I just wonder whether you feel like the technologies you have worked with have done that. I mean, perhaps that is the diabetes example, where you are actually developing a feedback mechanism for the person that is adjacent. I know I use my phone to help me remember things, including my kids' phone numbers now, so I am sure this will be ubiquitous, but that is very interesting to me.
Anyway, as an additional signal, that is very interesting. We can give you an additional way to play the game. We can get additional information for technologies.
And then we can have medical responses. So, what about this idea of sensor and sensor technology? What is happening in the hardware world that might have an impact on this?

DARRYL ADAMS: Well, I think in general there are a number of maybe obvious progressions, where sensing capabilities are, like all technology, becoming more capable and less expensive, to begin with, so we are integrating sensors into devices at levels we have not seen before: a phone with the sensors it possesses, or a laptop with a camera and a growing number of sensors available in the lid to understand its context.
So, I think one driver is the cost. The cost and capabilities of these things are hitting more of a sweet spot where we can add more capability, so when you buy a device, it is not just available for a traditional command-and-response model. It is more about the device understanding its context and understanding the user.
And over a period of time, as well, so there is a temporal component to that. Think about what you can get if your device knows who you are and what your capabilities are. Let's say you are visually impaired, but you are still relying on eyesight: the device can understand, or you can teach the device, what you can and cannot see, and then it can produce that experience for you.
So, theoretically, you should be able to have a device that never shows you something you cannot see. If it knows what you can see, why would it give you something different? It is that kind of personalized capability I think we want to be moving toward.
The same thing for audio. If you are Hard of Hearing, but you still rely on hearing, why would your computer or your phone give you an audio signal that you can't hear? So, it is these kinds of things that we can do with sensing technologies and AI. Everything is coming back to machine learning in different ways, which will be super charging all the things we are talking about.
But I think the real path here is that we are going from broad, sort of utility technologies to much more personalized technologies that can be super beneficial to people, especially people with various disabilities, which today basically represent incompatibilities between the person and the technology. We are trying to reduce those incompatibilities and make everything feel like it was designed for you.
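[Editor's illustration] One way to picture that personalization is output routed through a user's sensory profile, so the device never emits on a channel the user cannot perceive. This is a hypothetical sketch, not any shipping API; the profile keys and channel names are invented.

```python
def render_for_user(message, profile):
    """Send output only over channels the user can perceive,
    based on a simple capability profile (a dict of booleans)."""
    channels = []
    if profile.get("can_see", True):
        channels.append(("screen", message))
    if profile.get("can_hear", True):
        channels.append(("audio", message))
    if not channels:
        channels.append(("haptic", message))  # tactile fallback
    return channels
```

A real system would model partial capability (contrast, font size, frequency range) rather than booleans, but the routing idea is the same: the profile, not the app, decides the channel.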

ALEX DUNN: To Darryl's comment: we have so many different sensors now on devices. There are cameras that literally don't have power sent to them yet, because we know we will want to use the sensor at some point for something, but we also want to ship the device.
The way we look at sensors in general, a sensor is a way to detect intent, whether that is coming from a BCI, or from microphones and spoken requests, or from a camera being able to detect what we do, like facial expressions and body gestures and micro-movements that are personal to you, or from our touchscreens and the sensors built into them. We are in a place where we are underutilizing those sensors for control. I know I use the word ubiquitous a lot, but it is where this needs to get to: the more ubiquitous the sensors are and the more implicitly they are used, the closer we can get to that level of experience, like you are saying, Darryl, where it just knows how you want to interact with it and does what you want.
Because we can use almost anything to derive an intent. The biggest challenge, beyond just taking a sensor, like taking an accelerometer and getting X, Y, Z, is how do you turn that into an intent, to get everywhere from saying, when I tilt my phone to the right, it should do this thing. That is where context and LLMs, or large language models, I think, can expand that space a little more, from intent to really complex action, which is an area of research we are focused on: taking any small sensor and knowing, hey, this person is trying to tilt their phone to the right, a very simple gesture, but they are doing that in this video game. We should probably be using that for a specific movement in the game, or for a specific command, or some type of action with their character, because we should know what they are trying to do implicitly, ahead of time, without them having to build everything from scratch.
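[Editor's illustration] A toy version of that pipeline, splitting the raw-sensor-to-intent step from the context-dependent intent-to-action step. The threshold, context names, and bindings are invented for the example.

```python
def tilt_to_intent(x, y, z, threshold=3.0):
    """Map raw accelerometer axes (m/s^2) to a coarse tilt intent."""
    if x > threshold:
        return "tilt_right"
    if x < -threshold:
        return "tilt_left"
    return "neutral"

# The same low-level intent binds to different actions per context.
BINDINGS = {
    "racing_game": {"tilt_right": "steer_right", "tilt_left": "steer_left"},
    "e_reader": {"tilt_right": "next_page", "tilt_left": "prev_page"},
}

def intent_to_action(intent, context):
    """Resolve an intent to an action for the active app context."""
    return BINDINGS.get(context, {}).get(intent, "none")
```

The context layer is where Alex's point lands: the user performs one simple gesture, and the system, knowing what app is in front of them, picks the action without the user building the mapping from scratch.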

THERESA VAUGHAN: So, Cathy, this wasn't on the list of questions, but I am wondering about evaluation. Your group has been so involved in evaluation. I have to say, I come from an industry that tends to over-promise a bit. Because, you know, this is not rocket science, but it is something like rocket science. Not that I am that kind of a scientist.
But what I find is that the impatience for Brain-Computer Interfaces or neurotechnologies to solve the problem is not fresh anymore; it is kind of tired, worn out, like: all right, already, you said you were going to do this and you still haven't done it, so what is the problem?
I mean, for me, as a BCI researcher, that has something to do with markets. It has something to do with embedding the thing, with making the device like a toaster, and with the support, and the interest from groups to actually have this happen, and some of the science. Anyway, what do you say about disappointment and about evaluation?

CATHY BODINE: I think if I were someone who worked every day to get through the day because of my disability, working as hard as people with disabilities work, I would be a little frustrated with people like us.
We talk a lot. This friend of mine was a President of IEEE a long time ago. He would do this thing where he would throw his hands up and say imagine if you will. I got so frustrated with him one time. I said will you stop imagining. I have real people drilling on me on a daily basis and they need me to stop imagining and actually do something.
So, I think it is the 60 Minutes thing. You see someone moving a robotic arm with a BCI, and we are all thrilled; then we figure out, oh, I can't have that. Or, oh, there is only so much of this happening.
So, I think we need to be thinking about our messaging. It would be very useful if we literally drew a picture of what is going on and where the holes are, and started feeding this to the Federal funding agencies, because disability is in the strategic tech plan for the first time ever, people. They did this for the first time in the last year. I am saying, let's think about this really hard.
But we need to have a coherent message and be brutally honest with ourselves and with the individuals we hope to serve about where we are and what we can do.
I agree with these guys. It is ubiquitous; it is moving so fast it is hard to keep up. I am right in the camp with you guys, but I think how we message this needs to be open, honest, and clear. And we need to be inviting people with disabilities in much more than we do. Everything we do is user-centered, everything. And I don't think a lot of people are doing that. Then you see devices that don't work, you see sensors that don't work. You guys know the story.
I think it is interesting that someone who is able-bodied can assume that someone with a disability needs X, without asking if they even want it. I think we do that a lot, and I think people are tired.
So, I think we have to be better at our messaging. We are great scientists; we are not so good at messaging, and I think we need to work on that. Is that fair?

THERESA VAUGHAN: It sure is. Darryl, I guess this follows something that Cathy said. Let's talk about embedding technology, embedding the expectation that everybody is going to be able to use it, in user-centered design, versus retrofitting things. How do you think your industry is responding to that issue of not having to retrofit, but, rather, anticipating and involving users in user-centered design?

DARRYL ADAMS: That is a really big question. This is where I want the industry to go: to recognize that when we think about computer architecture or technology architecture, that architecture informs a hardware design, and the hardware design is the complete constraint set that software developers have to work with.
When you are writing software for a computer, you are writing software that will expect a keyboard entry and expect the output to be displayed visually on the screen. That is how you write software for computers.
If we think about how we create architecture that allows for a broader set of hardware designs and a larger collection of sensors and all these different things, then we can actually start thinking about creating products that work for everybody, or at least are configurable to everybody's needs when you buy it, rather than needing to buy something additional or have a separate approach to getting something done.
We can we have the capability to do this now. We just need to be deliberate in our intent to do it.
So, we have all lived through 40 years of doing what we do. One thing I sometimes note to people: when I started at Intel, it was the late '90s. I was able to get a laptop. It was kind of like the first corporate laptop pilot. It was garbage. But it was a laptop. Basically what I did on that was email, spreadsheets, and presentations. Fast forward 25 years, and I have a laptop. And I do email, and presentations, and spreadsheets.
But we have made it far less annoying. Now it is light, it is fast, it is cool looking. But I am doing the same thing. And we now have the technology. It feels cliche, it feels like buzzwords, but machine learning, everything you can do with sensory inputs, and then this newer, newer in terms of commercial application, brain interface: these are new tools that we can bring to market at scale and use to make sure that everybody can participate. I just want to add that as we move from this model that we are all so used to into different immersive models, we must bring everyone with us.
If we make this transition and we forget people with disabilities in that mix, and then we have to retrofit VR, that will be devastating. We have everything we need to not do that. We obviously need the participation of the disability community, but the time is now to do that.
If we miss it, then we will be talking about this in 10 or 15 years, why VR doesn't work for people with disabilities.

ALEX DUNN: The first thing I want to add is right after this we will get games installed on your laptop so you can do (Chuckles) something a little more than email and Spreadsheets. (Chuckles)

CATHY BODINE: I want to add, we have a virtual reality game we are testing with a variety of people with disabilities right now. It is truly interesting what rookie mistakes are made by developers. So, I would encourage us to really be thinking about the education of software designers, and getting into this educational arena now, so that we are teaching these new programmers, all of these folks, how to do this from the beginning. That is something I think someone like M-Enabling can think about advocating for, for sure.

ALEX DUNN: I will be a little combative there. With my background being in AI and application development, leaning on developers to solve accessibility just hasn't worked. We saw the number that was shared yesterday: only 2% of websites are accessible. And it is way easier to make a website accessible than it is to make a VR game accessible. With the web, we are only playing in two dimensions.

CATHY BODINE: I don't disagree with you, but it is not even talked about in those programs; that is what I am talking about.

ALEX DUNN: I agree we need education, but I think it needs to come from the product down. Developers need to know about the tools that are there. But in the end, with most of the way software development works, someone is instructing what to build. If accessibility is not their focus, if it is not even top of mind, it won't happen. We are really saying the same thing: it needs to come from leadership down. It needs to be a core part of the culture, adopted all the way through these teams into applications.
I think, honestly, with VR we are at the right time to standardize around that and educate everyone around it, but we need to do it now. The second that VR becomes much more widely adopted and becomes another channel for every developer to build for, like web and mobile, if we are not prepared, then we will end up with 2% of VR games actually being accessible, and that is sort of my big concern with the space. And we need VR solutions at the lower level, as well, being able to automate a lot of these things.
Not just depending on the developers and engineers of each application to implement it. We have the technology now to solve these sorts of things ahead of time, in a more universal way, and there needs to be a shift in focus there.

CATHY BODINE: I agree. We also need the shift at the undergraduate level, in academia. That is where I am going with the training. These kids are graduating and they don't know anything about disability.

DARRYL ADAMS: Teach Access.

CATHY BODINE: Teach Access is a good start, but I think we need to do a lot more.

Teach Access everywhere.

CATHY BODINE: Yes, we do.

THERESA VAUGHAN: So, I think we might have some questions?

ROB FENTRESS: This is Rob Fentress at Virginia Tech. Like a lot of people who are aware of it, I am stunned by things like GPT-3. I saw an interview with Sam Harris where it was asked questions and would respond exactly with the philosophy of Sam Harris, in an incredible way. And thinking about this panopticon that we live in, we have all this data of my interactions with all of the people around me. And we will all have more and more trouble finding the right word.
Some of us will have aphasia, and it will isolate us and make us unable to be independent. It is almost inevitable for most people, I would think.
So, with these technologies, is anybody exploring taking that data and making it so that maybe there is a heads-up display? When you start losing your ability to generate words, somebody says something, and it says, okay, this is what I think you would have said in response to this. Is that what you want me to say? And then you could say it.

THERESA VAUGHAN: You are asking a question around the issue that Darryl raised about personal computers and hardware and software responding to you personally. Knowing you. Knowing who you are. Is that

ROB FENTRESS: Well, yeah. Being able to respond, or suggest responses as you would have responded, to what somebody says to you. I mean, we are there now, right?

ALEX DUNN: Absolutely. I will take it first, and I am sure Darryl has something to add. For those of you not in the machine learning world, GPT-3 is an LLM, a Large Language Model, that, in short, takes a request, makes predictions, and generates output. It does it in an amazing way. T5 came out of Google research, and now T5X. When it comes to the way those are being implemented, creating new ways of using them not just in assistive tech but in prediction is a huge industry focus in machine learning. We basically take an explanation of what you want your computer to do, and we use a custom T5 model, a Large Language Model, running offline, to generate the output and tell the computer what it should be doing at its lowest level. There is a lot of research there; that is where our heaviest research is. But that is really where the machine learning space is now: taking transformer models and applying them to very specific problems like this.
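The pipeline Alex describes (free-form language in, a low-level computer command out) can be sketched roughly as follows. In practice a fine-tuned seq2seq model such as T5, running offline, learns this mapping; here a toy keyword-based mapper and made-up command names stand in for the model, only to show the shape of the interface. None of the names below come from Enabled Play's actual system.

```python
# A minimal sketch of the text-to-command idea: free-form language in, a
# low-level computer command out. In a real system this mapping is learned by
# a fine-tuned seq2seq model (e.g. T5) running offline; the toy keyword mapper
# and command names here are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Command:
    action: str          # low-level action the OS input layer understands
    argument: str = ""   # optional payload (scroll direction, text to type, ...)

def interpret(utterance: str) -> Command:
    """Stand-in for model.generate(): map an utterance to a Command."""
    text = utterance.lower()
    if "scroll down" in text:
        return Command("scroll", "down")
    if text.startswith("type "):
        return Command("keyboard_text", utterance[5:])
    if "click" in text:
        return Command("mouse_click", "left")
    return Command("noop")

# A controller layer would then execute the Command against the OS input APIs.
```

The key design point is the narrow, structured output: whatever model sits behind `interpret`, the rest of the stack only ever sees a small command vocabulary it can execute safely.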

THERESA VAUGHAN: There is another question behind you. But before that, I think Cathy has a response, as well.

CATHY BODINE: I love this, where we are with the tech. One of the things we have been doing, though, is really, we are in the early stage of thinking through the ethics piece.
So, pre-pandemic I was on a committee at the American Academy of Medicine, or whatever, and we did a workshop on ethics in AI and machine learning. Because, you know, Congress is not going to legislate or regulate at the speed of our tech development, so we have to think about the ethics behind all these things.
We can do all these crazy things, and we all want to, but we have to also bring in the ethics side of this with it. Maybe that isn't the word you would want to say, but that is the only word that shows up, and your aphasia causes you to repeat it. So, there is so much going on there.
That space of ethics is something we really need to bring into play, right now, in conjunction with this wonderful development happening.

THERESA VAUGHAN: I think that goes double for Brain Computer Interface.

CATHY BODINE: Yes, it does.

THERESA VAUGHAN: I think Mohamed has a question, and there was a hand raised in the back and one over there.

MOHAMED: Thank you, everyone. I am the G3ict Director of Capacity Building, and also a member of the NeuroAbilities Initiative Advisory Council. I have a question, and I hope it won't sound complicated. Since I was invited to join the Advisory Council, my focus and my discussions and questions have always been about this concept of user-centered design.
My question is, how do you find your candidates to run your experiments? And, this is my question to Cathy, to what extent do you think that BCI is extending beyond the medical model of disability and enhancing the social inclusion of Persons with Disabilities? I know the answer, but I would like to hear your perspective. And, also, how do I make sure that this technology we are talking about is not just limited to those that can afford it? We want to make sure it is deployed to people who cannot afford it, who are poor, or who are, you know, unaware of what is happening in this world. So, thank you.

THERESA VAUGHAN: I will let Cathy take the complicated part of the question. I just want to say that one of the goals of NeuroAbilities has to be to be an inclusive space for people like me, who might cherry-pick a population, a group of people, in order to keep my cohort simple. Roger Smith, maybe some of you know him, asked this question some time ago in the context of neuroethics. He said, you know, one of the ways in which we can continue to not understand how people with disabilities may be the same or different is to keep them out of our experiments, because we want to ask a simple question and get a simple answer.
But that limits our understanding of exactly how the brain works, especially if someone doesn't think the same, pay attention the same, or follow instructions the same. So, I think it is a really good question.

CATHY BODINE: I think I heard about five questions, so I will try to remember everything I can. First of all, the subjects. That is a huge issue. I am blessed that I work with a group of folks who, as part of their practice, work on reducing disparities in health care, including people with disabilities in tech and all that.
So, we recruit widely, broadly, and deeply. And, you know, because I am old, I have lots of friends all over the State of Colorado and in other states, and I can pick up the phone or send an email. I am pretty fortunate in regard to being able to do what I need to do.
But we match demographics. So, for example, if the population is 8% Hispanic, 40% white, 23% black, or whatever, we match our demographics in every single study we do. Because, if you don't do that, you have made the biggest mistake of your life.
We do work in multiple languages. We bring in sign language, whatever we need to communicate effectively and clearly, so we are very passionate about ensuring that everyone is involved in the research.
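The demographic matching Cathy describes can be sketched as a small calculation: given target population proportions, compute per-group recruitment counts for a study of size n. The function name and the proportions in the example are illustrative, not actual census figures or her team's tooling.

```python
# A sketch of demographic matching for study recruitment: given target
# population proportions, compute how many participants to recruit per group
# for a study of size n. Proportions below are illustrative only.

def recruitment_targets(proportions: dict, n: int) -> dict:
    """Largest-remainder rounding, so per-group counts sum exactly to n."""
    raw = {group: share * n for group, share in proportions.items()}
    targets = {group: int(value) for group, value in raw.items()}
    leftover = n - sum(targets.values())
    # Hand remaining slots to the groups with the largest fractional parts.
    for group in sorted(raw, key=lambda g: raw[g] - targets[g], reverse=True)[:leftover]:
        targets[group] += 1
    return targets

# Example: a 50-person study matched to an illustrative population mix.
mix = {"Hispanic": 0.22, "White": 0.67, "Black": 0.04, "Other": 0.07}
print(recruitment_targets(mix, 50))  # per-group counts summing to 50
```

Largest-remainder rounding is just one simple way to make the quotas add up exactly; the point is that matching is a deliberate, checkable step in recruitment, not an afterthought.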
Now, we are doing a study right now which has nothing to do with neuro, but it is looking at dynamic seating systems in power wheelchairs, or wheelchairs in general. For people who are very spastic, having some give in your seat or somewhere in your leg can keep you from breaking bones and make you much more comfortable.
So, we are doing a health disparities study right now, figuring out, going through Medicaid to see how many people of color actually receive the dynamic seating system, versus people who are white. And we are going to be using that as an example.
I think we have to stop.

THERESA VAUGHAN: I think we have to stop very shortly. There were two more questions left. Can we take those questions? Yeah, okay.

My name is (?), from (?), a tech AI company based out of Edinburgh and London. I am sure you may have come across our research team (?)

CATHY BODINE: A quick statement I wanted to say is the biggest return on investment you will find is

THERESA VAUGHAN: She can't hear you.

Excellent. We work on building companions using natural language and vision, combining all that technology to deliver a new experience: grounded AI, which is safe and bias-free, which is in opposition to, I guess, what we have seen in models like GPT-3. Can you hear me now? My question is for Cathy and Darryl. What are the plans for the future, especially at Intel, for AI that is grounded and safe to use across the board?
There are massive gaps between user experience and using AI. It is an overused word, I think. And it seems to me that we have a lot of people working on amazing things that aren't necessarily being deployed in the research space. And I think there is a massive disconnect between the big players, such as Google, Apple, et cetera, in the tools allowing you to deploy with users. Apple has a completely different playbook and Google has a completely different playbook. If you try to deploy AI on Apple and Android devices, it is difficult.

THERESA VAUGHAN: The question?

The question is how do we make it easier to work across all the major big players, because that is going to make the difference?

THERESA VAUGHAN: I will let Darryl field this one, at least to begin with, and then we will hear from Cathy, just because you do represent a big player.

DARRYL ADAMS: There are a number of things there, but to begin with, I agree with the initial premise that we need to be doing things in as ethical a manner as possible. At Intel we have a Responsible AI Council that projects have to run through from the beginning. This is not a full solution, but it is a step in the right direction to ensure that, for the projects we are choosing, the way we are implementing them, and the training datasets we are developing, we are asking the right questions and trying our best to be bias-free, or to reduce bias, along with all the other elements that are necessary to be responsible in this space.
So, on the challenge you are talking about across the industry and how you make these things open: we definitely come from the perspective that we want an open ecosystem. That is what we are grounded in, and most of the work we do in that space is open source, as well.
The challenge becomes that there are business models around these things that are proprietary. Depending on the company, they have different models and different reasons. If you tie your revenue to these models, I don't know how you get around that.
Our approach is to not do that, and to make sure we are creating hardware to accelerate the workloads, whether at the edge or in the data center, wherever the work takes place, and to optimize it for the specific type of workload. That is what we are interested in. So, I don't know that that answers your question, but I am not sure there is an answer.


CATHY BODINE: Just quickly, I met with an amazing, brilliant computer scientist coming up with a different approach to bias-free machine learning, for example. I think that has to form the core of what everyone is developing. And for smaller companies that maybe don't have as much access to the talent that larger companies do, we need to figure out a way to get that out, because that will make a huge difference.
If we can use open source, we should do it, but we have to look at this very carefully.

THERESA VAUGHAN: I am going to

(?): I just wanted to say that the Cognitive Accessibility Task Force, with the W3C Accessibility Platform Architecture Working Group as well, is working on standards and specifications for what used to be called personalization, which relates to a lot of what I heard about customizing to predicted user need. This is the holy grail of cognitive accessibility, right? We want to have things customized for our needs, and to just show up and have the platform know what we need and transform itself in that way. It is now called WAI-Adapt; it used to be the Personalization Task Force. What I will ask all of you to do is please look up our public mailing list on the W3C website and please start to share your work, so we can comment and bring it into the fray with what we are doing.
And, also, I heard a lot of things that really ring home to what we are doing in our Innovation Sprint on cognitive accessibility testing. What we came up with is that it is hard to make the compelling business case. It is not just the end user; we all get cognitively fatigued. You mentioned car accidents. Accidents are a leading cause of death in the world, and it is because of cognitive fatigue, which causes stress, cortisol in the brain, and affects memory making. Coming up with the compelling business cases and getting them from the top down is super important. I just wanted to reiterate that.

THERESA VAUGHAN: Yes. I guess that this also would involve regulation, as well. And the understanding that maybe some of these things have to be supported by government. I guess Cathy's comment about the NIH and NSF including accessibility in their Mission Statements is quite significant when it comes to that.
So, there is

CATHY BODINE: Strategic plans.

THERESA VAUGHAN: Strategic plans. Sorry. Other questions? All right. Well, thank you, all of you. Thank you to the panelists and to G3ict. A tough panel, a tough bunch of questions, but thank you very much for attending. Thank you.
(Session was concluded at 3:52 PM ET)

