On July 25, 2023, Senator Ed Markey and Congresswoman Anna Eshoo reintroduced the Communications, Video and Technology Accessibility (CVTA) Act of 2023. In this session, leading consumer advocates will discuss the major ways that this legislation builds upon the CVAA (its predecessor) to achieve equitable technology access. Learn how this bill proposes to improve closed captioning and audio description access to online and television programming, ensure the effective display of ASL when televised, enable access to video conferencing services, close gaps in 911 access, enhance the efficiency and effectiveness of telecommunications relay services for ASL users and people who are DeafBlind, and make certain that our federal accessibility laws keep pace with emerging technologies. After providing an overview of these and other essential elements of the CVTA, our panelists will open up the session in a Town Hall format during which you will be able to ask questions and share your thoughts about this landmark legislation. We hope you can join us for this informative and interactive session on what is sure to be the next legislative milestone for people with disabilities!

Session Chair: Karen Strauss, National Disability Advocate and Historian

The Honorable U.S. Senator Ed Markey (D-Mass) (video statement)

Speakers:

  • AnnMarie Killian, CEO, TDIforAccess, Inc.
  • Howard A. Rosenblum, CEO, National Association of the Deaf (NAD)
  • Clark Rachfal, Director of Advocacy and Governmental Affairs, American Council of the Blind (ACB)
  • Lise Hamlin, Director of Public Policy, Hearing Loss Association of America (HLAA)
  • Larry Goldberg, Accessible Media and Technology Consultant
  • Daniella Decker, Independent Consultant (Town Hall Moderator)

This video is lacking captions. We expect captions by February 14, 2024.

Transcript

Good afternoon, welcome back. We have all of our Town Hall speakers lined up already. I'm pleased to introduce the CVTA Coalition Town Hall, moderated by Karen Strauss, National Disability Advocate and Historian. Thank you. The floor is yours.

KAREN STRAUSS: This is the CVTA session; if you want to hear about that, stay here and invite your friends. We're going to start with a video from Senator Ed Markey. Many of you know that Senator Markey has been a champion of every piece of federal communications accessibility legislation since around the '70s. If we can turn that on, that would be great.

ED MARKEY: It was a momentous achievement, but new technologies are being brought to the disability community and we have to move with the times. Thirteen years ago I fought with so many of you to pass the 21st Century Communications and Video Accessibility Act, the CVAA, to expand access to communications services and televised video programming for people with disabilities. So that deaf and blind individuals could have access to this technology, we passed it. While today we rightfully celebrate the success of the CVAA, I know we have more work to do, because over the past 13 years streaming platforms have exploded in popularity, videoconferencing tools have become integral to the virtual workplace, and new social media sites launch every single day. One thing has not changed. That's why I'm introducing the CVTA, to close the technology accessibility gaps in our online ecosystem. Specifically, my bill is going to strengthen standards for television programming and emergency communication. That's a must. I'm also going to work to expand accessibility requirements to online platforms. That's the present and the future. And we also have to equip the federal government with the ability to improve the accessibility of emerging technologies. The CVTA will ensure that accessibility is never an afterthought. Not now, not in the future. I look forward to fighting alongside all of you to pass the CVTA and to secure opportunity, independence, inclusion, and access for everyone in our society. We have to unlock the human potential of everyone, so that they're full participants in our democracy and in our economy. I'm looking forward to partnering with you to get those victories and to secure them for the future of our country.
[Applause]

KAREN STRAUSS: Okay. What better start can we have than that? Well, I could think of something. He talked about moving with the times, so would all of you stand up, in either body or spirit, and I'm going to pass it over to some very powerful women who are going to... wait a second, you're not standing up. Up. Up. Up. We need help on this one. All right. Go for it. Are you ready?

The YMCA song? Okay! We're going to do it! In real life. I unfortunately am going to need all of my panelists to sing here. Ready. We do the CVTA, okay. We know that you're tired after lunch, so this is your wakeup call for the CVTA! One, two, three! CVTA! CVTA!

One more.

CVTA! We don't know the words!

Thank you! Good! Wow! Okay!

KAREN STRAUSS: All right. Now that you're all awake. Thank you for being here. As Senator Markey mentioned, the bill was reintroduced; it was originally introduced in November. All of you know that we're in a very different time than we were before the CVAA, the CVTA's predecessor. We now have greater awareness, greater inclusion, greater participation, and greater accessibility. We still have gaps, we still have the future, and as you heard from Jenny Lay-Flurrie, everything is changing, minute by minute. This bill is designed to make sure that as we go forward, we'll keep up with the emerging technologies. We have a panel of people who are going to share with you an overview of what's in the bill for the first half hour. Then we'll spend the second half hour on a Town Hall, so please write down your questions. You can send them by Slack, you can raise your hand at the end, but we want to hear from you. We want to know your questions and comments.
Let me introduce the panelists. We have AnnMarie Killian from TDIforAccess. Raise your hand.
We have Howard Rosenblum from the National Association of the Deaf, Clark Rachfal from the American Council of the Blind, Lise Hamlin from the Hearing Loss Association of America, Larry Goldberg, an accessible media and technology consultant, and Daniella Decker, our Town Hall moderator and also a consultant.
For those of you that don't know me, I'm Karen Strauss. I have been around for a few years. Come and see me if you want to hear more.
So let me give you a quick overview and I'll pass it along; we have a lot to cover. AnnMarie Killian is going to go over the video programming sections, Clark is going to do the audio description sections, Larry is going to cover accessibility of video devices, video programming devices that is. I'll give you a quick overview of the telecommunications relay service section. I'll kick it back to Clark for the National Deaf-Blind Equipment Distribution Program. Lise Hamlin will do advisory committees and FCC reporting, and Howard is going to do American Sign Language matters and also videoconferencing. Lise Hamlin is also going to do real-time text. So I think that's it. Did I miss anything? Okay. No. All right. What?

After, the questions.

KAREN STRAUSS: After that, we'll do questions. So we have a lot to cover but we'll do it in abbreviated form. I'll kick it over now to AnnMarie Killian.

ANNMARIE KILLIAN: I'm going to sign. First of all, hello, I'm AnnMarie Killian. Wow! Beautiful. I'm going to do an image description of myself. I'm tall, I'm height challenged, I'm 5'9", and I have long brown hair, glasses, earrings, a blue jacket and black pants, and I'm thrilled to be here. The goal of this session is parity: equal access to closed captions and audio description. These are really important accessibility features today. For video programming, the CVAA only requires closed captions for online programming if that programming was also shown on television with captions.
The CVTA removes that limitation, which is really important. We all felt the impact of COVID: we relied on streaming for news, information, and communication, and accessibility was so important. Clark will expand on the audio description provisions and priorities as well. Closed captions under the CVAA and the CVTA follow the closed caption protocols, including English and Spanish captions. There are exemptions, such as economic burden, that entities can seek individually. User-generated content is also addressed, and most of it is not excused: businesses accruing at least $1 million in revenue from that content have to include captions. In addition, the bill requires user-generated video platforms to make it easy for people who upload and watch videos to add audio description and captioning, making accessibility friendlier for the user experience. It also requires the closed captioning quality rules and best practices to be updated within four years of the CVTA's passage and audited every four years after that, and that is so important.
In addition to that, the technology we're seeing keeps changing and elevating; what exists today won't be the same tomorrow. So it is so important for the FCC to keep up with technology, innovations and challenges. The bill requires the Commission to look back at exemptions adopted decades ago, when, for example, new networks were typically given exceptions and waived from captioning requirements for four years while they got set up. Now all new innovations would be required to make captions available. That's a brief synopsis of captions and audio description. I'll turn it over.

CLARK RACHFAL: There you go, I have to look down a little bit. I'm Clark Rachfal with the American Council of the Blind. Before we get started, I'm honor bound to tell you how excited I am to share the stage with my colleague and dearest friend Howard Rosenblum after a very unfortunate Thursday Night Football game. Howard is smarter than I am, funnier than I am, and he has better hair than I have. So are we good, Howard, are we square now? Fair enough. Fair enough. The audio description provisions of the CVTA would mean that viewers throughout the country, especially those in smaller broadcast designated market areas, would not have to wait more than ten years for their broadcasters to pass through audio described programming that is already available.
The CVTA also means that audio described television and IP programming would need to be labeled, searchable, and discoverable on video apparatuses, navigation devices, and applications that show programming. It would also require an audio tone played before or at the beginning of a program to let viewers know that the program has audio description. If all shows have audio description, this won't be necessary, but currently, especially on linear video programming, consumers may not know when a show starts whether or not audio description will be available and how to take advantage of it.
Also, there would be separate audio tracks for audio description, and this would allow Spanish language audiences as well as viewers seeking audio description to have access to their preferred content; this would, of course, apply where achievable. Then finally, the CVTA requires the FCC to create audio description quality standards, covering the proper voicing and delivery as well as the audio encoding of audio described content and programming, and this would need to be done within three years. At this point, I'll pass it over to Larry to discuss video apparatuses.

LARRY GOLDBERG: Thank you, Clark. Anyone here watch television anymore? How about video streaming once in a while? Do you ever struggle with the remote control to figure out how to turn it on, find a channel, turn up the audio? Well, the CVAA required an equivalent button, key, or icon so that you could figure out how to turn on captions or audio description, or change your settings. The law never assigned responsibility, so the CVTA will finally clarify the issue so that equivalent controls on these devices are available and consistent across platforms. I have six different devices I can watch video on, and every single one is different and difficult. So finally, the CVTA will require easy access to the caption settings and the audio description, and also compatibility with assistive technology devices like hearing aids, hands-free control, and screen reader compatibility and functionality, because we struggle with figuring out how to use the devices and they're different on every one of them.
Of course, we also need to make sure that all of these devices and platforms pass through the captions and descriptions. In the early days, the HDMI standard didn't pay attention to this issue, which continues to be a problem today, and the FCC would take on this issue under the CVTA.
Finally, we need to clarify the responsibility. A lot of companies recognize there is a problem here, but it was never really clear who is in charge of making sure that all of these things work. The bill will make it clear that whether you're an app, a provider, or a cable network, this responsibility applies across the board. That's the apparatus section, and now Howard will talk about ASL.

HOWARD ROSENBLUM: Good afternoon, I'm Howard from the National Association of the Deaf. There are several changes that we have not yet touched on. For example, during COVID there were many presentations, press conferences, meetings, and public service announcements that had interpreters and may or may not have had captions, but they were not always accessible to deaf and hard of hearing individuals. Not all of our deaf community is fluent in English; they don't rely on the captions, they rely on the interpreter, and they need to be able to see the interpreters that are being provided. Right now, as I'm presenting here, the camera may have me in frame, but my interpreter standing next to me, if I had one, may not be in frame; you may only see half of their hand. So this bill would give the FCC the power to ensure that interpreters are visible and in frame on the camera if they're being provided at that event.
The bill also addresses phone numbers. For example, very often a deaf individual has more than one phone number. When somebody says, may I have your phone number, I don't know which one to give them. Do I give them my text phone number, or my office phone number, which has my video phone connected? Each is a different phone number, and if that individual switches them around and tries to call my cell phone or text my video phone number, that call goes into a black hole. I never receive that communication. Again, we want to level the playing field for the deaf and hard of hearing community and provide access. We also want to strengthen 911 access and make sure that the 911 centers are receiving all of the information available. When an individual is using their mobile device, it is the responsibility of the device to provide that location through the app to the 911 operator at the call center. That's not always provided.
That is something that we're working on: ensuring that an individual's location is provided to the 911 call centers. We also want to address videoconferencing platforms. The FCC made great strides in this area, and we want to ensure that these are included, mandated, and codified. We want to ensure that interpreters are accessible if they're provided. Very often we're working remotely, having meetings through Zoom, Teams, or whatever conferencing platform is being used, and we want to ensure that these are all standardized and that there are no barriers to interpreting and captions being provided. We want to ensure compatibility with relay services and everything else, through Zoom, Microsoft Teams, whatever conferencing platform you're using. We want to make sure that the access that has been provided is codified and that standards are being met. Now, with that being said, I'll turn it over to Lise Hamlin.

LISE HAMLIN: I'm Lise Hamlin, Director of Public Policy at the Hearing Loss Association of America. You have already heard how much the CVTA would change. I'm here to tell you there is more: we're also looking at real-time text. In 2016, the FCC adopted rules to facilitate the transition from TTYs to real-time text over wireless Internet protocol technologies. In 2016 they also adopted a further notice of proposed rulemaking on integrating RTT into VoIP services, the timeline and sunset requirements for RTT to be backward compatible with TTYs, and the real-time text features that would be needed for people with cognitive disabilities and people who are deaf-blind. So what the bill would do is require the FCC to complete those rules on RTT within two years. And now I'm going to take it back to Karen to talk about even more!

KAREN STRAUSS: Thank you! So I'll talk about telecommunications relay services. Many of you are familiar with these services, which allow people who are deaf, hard of hearing, or deaf-blind, or who have a speech disability, to communicate with other people by telephone, and now also by video communication.
The bill makes several improvements to these services. The first is that it would authorize the FCC to improve funding for something called direct video calling, in which people who make calls to customer service contact centers and use American Sign Language would be able to converse with somebody else in American Sign Language. It is faster than video relay service, for example, and provides more effective, efficient calls. The second thing it would do is authorize funding for something called communication facilitators; these are people who would be able to communicate tactilely with someone who is deaf-blind. Up until today, most deaf-blind people who need tactile communication have not been able to benefit from relay services, so it would authorize funding for that as well. The next thing it would do is authorize funding for certified deaf interpreters; these are interpreters who specialize in communicating with deaf individuals who have linguistic challenges or may be language deprived and need extra facilitation to complete relay calls.
The FCC actually has an open rulemaking to authorize certified deaf interpreters, and in fact, as was mentioned before, a couple of the things that we put into this piece of legislation we may ultimately decide we don't need in legislation; when we put them in, they were not pending at the FCC. In that way, we're actually happy about the progress we have made, even if it is not directly with Congress.
The next improvement would make people who have an auditory processing disorder eligible for telecommunications relay services. It is not technically a hearing loss; many of you are familiar with Senator Fetterman, who has an auditory processing disorder and talks about his use of captioning, and people like him would soon be eligible. We have talked about videoconferencing systems; one of the other provisions would make sure that relay services can be integrated into videoconferencing systems, so that somebody using Zoom, WebEx, Microsoft Teams, or another type of videoconferencing system can integrate a relay assistant or a communication assistant right into the call, whether it be for captioning, sign language, or other purposes. The next change would be to require the videoconferencing providers, the same systems that I just mentioned, to contribute support for relay services.
Right now, support comes from telephone companies, VoIP companies, international companies, and local and state telephone companies as well. Videoconferencing companies are not contributing, and even now, after the pandemic, we're still using videoconferencing to a great extent. We think that they should contribute to the support for these services.
Last, as you heard a while ago, the FCC is charged with updating its captioning quality rules every four years; this bill would similarly require an update of the FCC's relay service rules and mandatory minimum standards to make sure they keep current with current technologies, because right now we're behind. That happens a lot in disability access, as you know: videoconferencing came about in a very big way during the pandemic, but there was nothing to cover it, no laws to cover it. What we want is to not keep going back; we probably will anyway, but we want this to be self-propelling. We want the FCC to have the authority not only to review the current state of technology in the future, but also to be able to write rules to cover the emerging technologies. A bit later, Daniella will lead a Town Hall, and we're hoping to hear from you on ways we can move towards a society that does not always have to play catch up, and maybe even incorporate other things in the bill. Before we get there, I will pass it back to Clark to talk about the deaf-blind equipment distribution program changes.

CLARK RACHFAL: This is Clark with ACB. The National Deaf-Blind Equipment Distribution Program, NDBEDP, or iCanConnect; I'll go with iCanConnect for the rest of the segment. The iCanConnect program, as formulated under the 21st Century Communications and Video Accessibility Act, allows the Federal Communications Commission to allocate up to $10 million for the necessary equipment and devices to make telecommunications and Internet services accessible to people who are deaf-blind. That has not changed since 2010, and in this legislation we would increase it to $20 million and tie it to inflation going forward. Additionally, we're adding cortical and cerebral vision impairment and auditory processing disorders to the list of eligible disabilities qualifying folks for services and devices, alongside people who are deaf-blind.

KAREN STRAUSS: Now to Lise Hamlin for the last presentation, on the advisory committees created by the CVTA as well as the FCC reporting on emerging technologies.

LISE HAMLIN: The last piece we want to speak about in this synopsis, and it is a quick synopsis for you all, is advisory committees. The CVTA would mirror the CVAA's multi-stakeholder approach, bringing people together to talk about the issues that impact the CVTA and help the FCC. We don't yet know who would be on the committee, but the people on it would provide recommendations on videoconferencing, closed captioning, audio description, and the Commission's implementing regulations. The bill would also require the FCC to work in consultation with the Access Board to deliver periodic reports to the Senate Commerce Committee that assess accessibility barriers, if I can get that out of my mouth, it could happen, and solutions for emerging technologies: video programming technologies such as AI, spatial computing, advanced machine learning, and virtual, augmented, and extended reality. Finally, it requires the FCC to issue regulations to address accessibility barriers in evolving technology.
So now I'll throw it over to Karen to wrap it up.

KAREN STRAUSS: Yeah. I just want to mention something about the advisory committees. As those of you who have been around know, there were advisory committees in the CVAA as well, and having been one of the people that helped draft that, those of us who were advocates were cautious about that; we thought it was a delay tactic. It turned out to be the best thing ever. When I went to the FCC, I learned that those advisory committees were packed with expertise and experience in captioning, audio description, and emergency access, and the reports that they produced were extraordinary; we literally took them and put them in the rules. So we added them to this bill because we think getting the information from people like you who have the expertise on these issues is extremely important in crafting policy. I'm very proud of us. We made it to our halfway point and we still have halfway to go. At this point, I'm going to hand it over to Daniella Decker, and she'll moderate the Town Hall portion.

DANIELLA DECKER: Thank you for listening in and doing our CVTA dance. We may have to close with that as we finish the session; I may encourage my panelists to get up one more time. As you heard today, we have talked a lot about AI, and we're talking a lot about emerging technology, but it is all of you in the room who have blue ocean and white sky to play with now. We would like to hear from you in the audience: what is going to make the CVTA critical for your communities, critical for the technology that you're accessing? Without further ado, Larry is monitoring the Slack channel if anyone wants to pop in there, or raise your hand, because I think we have some mics headed out your way.
There is a man in the front; if anybody has a mic, we can hand it to him.

I'm with the National Cancer Institute, U.S. Department of Health and Human Services. I'm excited by what I have heard this afternoon. My question is specifically for Clark and for Howard.
It's a question of quality: we talk about AI and technology, but when it comes down to the human factor, the quality of audio description, the quality of captioning, the quality of interpreters, what can we do, what's in the act, and what can we do to advocate for really addressing the quality of the writing of the script for audio description as well as the voicing and the technology of integration? Whether it is through certification standards for audio description, for captioning, or for interpreters, how do we ensure not just more captioning but better captioning, captioning that is accurate and inclusive, balancing that with speed and staying word for word with the speaker? What's built in around the quality of those things, audio description and captioning, and what role can people play in terms of pushing for better and higher standards?

HOWARD ROSENBLUM: This is Howard speaking. Clark, go ahead, I'll fix what you say.
CLARK RACHFAL: This is what I have to deal with. Okay, this is Clark with ACB. Great question there.
There are three key elements for quality audio description: the script, the narration, and the audio mixing. You can have James Earl Jones or Morgan Freeman reading audio description, but if it is coming out of only the left speaker of a stereo set in mono, or if the script is horrendous, it is not going to be a quality experience. I think there are certainly folks who would like the highest quality, and ACB is certainly among them, when it comes to the script, the quality of the voicing, as well as the audio mixing. This is true whether it is television, IP delivery, or, within Section 508, trainings or modules used by the federal government. Through the Audio Description Project and our sight and sound impaired committee of deaf-blind advocates, one of the reasons that human-voiced narration for audio description is so important for them is because of the prosody, I just learned this word, and because the voice quality and tonality are easier to hear and understand, to effectively communicate with audiences that are deaf-blind or have hearing loss. What can all of you do? Keep advocating, demand better. That's certainly what we're doing at ACB, and it is what we have tried to incorporate in the CVTA as well. It is why we have, within three years, the report from the FCC on audio description quality: the writing, voicing, audio mixing, and editing.

HOWARD ROSENBLUM: I don't want to grab the floor, and it is not limited to Clark and myself; if you want to jump in, please do. I second what you said, good job on that. In response to captioning: technology has a way of getting ahead of people and the standards we have established. So, you know, we're truly playing catch up, we are. Even before the CVTA, the FCC already has the authority to ensure the quality of captions within its jurisdiction, but it has not yet exercised that authority. So there are no metrics to look at the quality. We have looked to various entities, including Gallaudet and others, and worked on standards that include and address accuracy as well as appropriateness. We have appropriate measures for those words; it is not just based on word count and error rates, it is the weight, the quality of the words and the quality of the captions and the content itself. For example: commas save lives. "Let's eat, grandma" versus "let's eat grandma"; without the comma, it changes the whole meaning of the sentence. That's one example out of many, but it reflects the simplicity of current metrics; it is not just accuracy, not just content. I'll be honest, we're not there yet. We really aren't. Once we have metrics established and we can compare metrics and talk about the numbers, we can then establish mandates for these metrics, and that is currently under the FCC's jurisdiction. We want to meet standards, but the standards need to be determined as well.
So WCAG does have some language, but again, it is not enough, and the metrics are not sufficient. Does anyone else want to add to this?

Quickly, the FCC has standards in place for accuracy, synchronicity, completeness, and placement. They were put in place in 2014 after ten years of the deaf and hard of hearing communities pleading with the FCC to develop standards. But those standards require 100% accuracy and compliance for prerecorded programming. The even bigger problem is real-time programming. This law does address that; it says develop standards and metrics for that, and also for audio description. We're trying to get ahead of the game for audio description as well as expanding audio description. It is tough, because metrics are not easy. I think we have another question over there. Greg.

I'm Greg Pollock. I work for CSD, Communication Service for the Deaf. I wanted to ask a quick question.
To both of you, just to brainstorm and pick your brains a bit. First of all, I wholeheartedly agree with and support captions, but captions are not the only solution for deaf and hard of hearing users, and Howard, you hit the nail on the head when you said that sign language is an integral part of our communication. That's correct. The second consideration is that 911 and emergency access need to be accessible for deaf and hard of hearing individuals. That's a huge issue. I know it is a hot topic with the FCC, and I applaud all of you for addressing that. I do have a question in regards to emergency communication and crisis scenarios where information is being spread to the community and needs to be shared with timeliness. If there is funding for that, timely information very often still gets lost as it is disseminated through various networks. Is there an official hotline, or could we set up some kind of way to communicate that? It is not always built in to the various programs. Will you address that through the CVTA? Is that a solution? Is that being tabled for the next situation? I'm not only talking about crises, but other general communication as well.

KAREN STRAUSS: Talking about access by people who use ASL, sign language?

Sign language. Yes.

KAREN STRAUSS: The bill actually does address the need for improved emergency access for sign language users in a number of ways, partially through video relay service and partially through direct video communication. In addition, and in full disclosure, I'm a consultant to Communication Service for the Deaf, and CSD did file a petition with the FCC asking the FCC to develop a pilot program of direct video calling. Again, that's when somebody in a customer service contact center, crisis hotline, et cetera, who knows American Sign Language, preferably a deaf person who knows sign language, can communicate directly with somebody else who is deaf and uses sign language. Interpreters in the middle are wonderful, and video relay service is wonderful, but especially in a crisis situation, being able to communicate directly is very, very important. So there is a petition pending at the FCC asking the FCC to set up a pilot program of direct video calling for government agencies and crisis centers, and as some of you know, just very recently, 988, the suicide crisis hotline, added American Sign Language. It is really important; it is an accessibility need that really has not been addressed in the past. That's the need for direct sign language communication, and its time has come, with everyone now having access to video communication. So we're working with the FCC to get that done. If they don't, we have incorporated that as well into this CVTA.
You want to add?

HOWARD ROSENBLUM: I do. I do. I just wanted to add to what Karen mentioned. Yes, that's correct about 988; we had to push that hard, and it's new. For years it had been voice access only, so this is just starting. Part of the issue is how it's structured. For example, FEMA should be involved as well. They have a tendency to put out announcements and to have resources and then leave it to the states; the states are then responsible. When FEMA is involved, they do provide ASL services, but in truly local crises and disasters it is up to the states, and that's where that disconnect happens. Right there is where that lack of communication, or lack of transfer of information, occurs.

KAREN STRAUSS: And the good news is that it was recently announced that they'll start DVC within a year, so we're hoping that other federal agencies follow suit. We'll be meeting with the FCC soon, doing demos there, and we're going to push them to try to develop this pilot program.

And part of that, as we saw in Hawaii recently, and as we have seen with disasters around the world, is communicating specifically with all communities: deaf, hard of hearing, as well as the blind community. This piece of legislation is for everyone; we talked about the numerous types of disabilities that are also included here, so when you think about the CVTA, don't automatically single out the blind or the deaf and hard of hearing community. It is for every community that will benefit.
I know Christopher had a question in the back there if that was you raising your hand or if anybody else is in the crowd.

Yeah. I have an interesting question about things like captions and audio description, which is user generated content. Take a look at YouTube, which has hundreds of hours of content uploaded all the time. Who owns it, who is responsible for it? We all know that AI is not perfect; is there an expectation of prerecorded content having to be perfect? Who does that? Who pays for that? Same with audio description. AI is not perfect, and if the content owner doesn't create it, who is responsible, and how does one realistically, morally, and ethically do this, right?

KAREN STRAUSS: Everybody is looking at me. Okay. These are very tough questions. Right. In the CVAA we were told we couldn't cover any user generated content. We're trying to cover that in the CVTA if the content generates at least $1 million in annual revenue, or if it is on a user generated platform and the network, channel, system, or service that uses that same platform is otherwise covered by the CVTA; in other words, they have other captioning and audio description obligations. This is a tough one, obviously. We're going to get a lot of push back on this. We do think that if the revenue can support the provision of accessibility, it should be required. If the revenue isn't rising to that level, then we're still asking for those entities to be required to provide, what is it called, Larry, the ability to write your own script, in other words consumer friendly mechanisms for developing captions and audio descriptions so that entities can add them to the content. It is a tough issue.

LARRY GOLDBERG: To add to that point on authoring tools, yeah, authoring tools. Some of the platforms enable you to generate and edit captions; very few enable you to add audio descriptions. The idea would be that if you're either required to or want to add those, the platforms that support user generated content would need to enable that.

KAREN STRAUSS: Better said than me. Peter, back there?

(No microphone).

KAREN STRAUSS: While the mic is going to Peter, I want to piggyback on one thing that was said. One of the other areas is people with speech disabilities, and there are provisions requiring either the compatibility or the integration of newer technologies that use automatic speech recognition and specifically enable people with difficult-to-understand or nonstandard speech to interact with video apparatus and videoconferencing systems. You will hear more about that at a session later on today, after this one.
Peter.

I'm just curious, this is Peter, I'm with Amazon. The formulation that you made for user generated content feels like it is tailored to a video platform, YouTube or some other one, where there is clear monetization. There is content where it is not so direct: say I post an unboxing video of a product that I bought on the page of that product, which may or may not sell in that amount of money, and the user isn't getting that money, they just contributed something. I'm curious if the CVTA is covering or contemplating those sorts of situations as well.
KAREN STRAUSS: That's a good point; that's the type of feedback we need. In my experience, further defining this and further clarifying the intent is something that I think the FCC would do if we create a broad brush and a framework for them to work with. A nice thing is that the bill directs them to define who is responsible and when they're responsible, so I think this would fall under that as well.
Other questions.
Good. Paul.

DANIELLA DECKER: Keep them coming; I hope you poke holes in the boat. We're having important meetings this week, both on Capitol Hill and in downtown DC, to figure out how to get this passed. Keep the questions floating in your brain and ask them; this is a perfect forum to do so.

Paul: I'm with the American Printing House for the Blind.
The policy nerd in me is curious how you're dealing with jurisdiction for platforms that have traditionally been a problem because the FCC doesn't have jurisdiction over them. But the real question I have, as an entertainment watcher, is for Clark: you mentioned expansion of description to other markets. I missed whether you said anything about more programming, and particularly I'm interested in whether we can ever get to a place where we can start undoing the early FCC decisions about live programming, particularly the example of sports, which is all but unwatchable on TV if you're a blind person in terms of actually wanting to follow what's really happening. I know it would be very challenging to do. I wonder if this came up, and I would specifically like to know what the decision is on getting past the 75 hours, et cetera, and broadening to other kinds of programming and other carriers.

CLARK RACHFAL: So yes, in terms of audio description, the FCC is already moving forward under its existing authority to expand by ten designated market areas per year until all broadcast designated market areas are covered, and that's why it would take ten years to do that for the more than 200 DMAs in the United States. The CVTA calls for all video programming to be audio described and closed captioned. Karen, I'll turn to you; in the CVAA there were some exceptions to that, and I just don't have them offhand.

KAREN STRAUSS: Sure. The exceptions to the captioning requirements were put into place around 1997, when the FCC adopted captioning rules. As mentioned earlier, some are very outdated. I think it was mentioned by AnnMarie Killian that there is one that says new networks don't have to caption for the first four years of their existence. That's ridiculous. Fortunately, no one follows it, to be honest. Captioning is so mainstream right now that they don't even really follow that exception. It shouldn't be on the books. It was put there because at the time the law was passed, captioning was still, I don't want to say new, but wasn't required, so that broad exception was added. Another one is for advertisements that are shorter than five minutes in length. Another one is for music, and another one is for locally produced programming; there is a list of them that are very outdated, and frankly, as you know, when you turn on the television, just about everything is captioned now. What we don't want is for similar exemptions to exist for audio description, because they are outdated. Having said that, there may be other exemptions for audio description, I don't know. Actually, Paul, you taught me something, because I didn't realize that there was a problem with sports programming. I think that the FCC may not know that either. They may have just assumed that since there is so much audio it is not an issue.
They have to be educated once this bill gets passed.

Paul did ask about jurisdiction. It is up to Congress to expand that jurisdiction, and that's what this law does; where the FCC may not presently have jurisdiction, over originals for instance, this bill would expand it. Then in terms of whether audio description covers live sports, for example, or any live programming, that would be up to an FCC rulemaking to determine if there are exceptions, open for comment from providers and from users, and that is the forum where we would like to see that happen.

DANIELLA DECKER: And it is a different language issue, I know it is not a sport, but at the Emmys this year, I don't know if anyone likes Bad Bunny. The captions just said "Spanish language"; that's it, that's all you got. I really wanted to know what that bunny was saying in English, and all I got was "Spanish language". We have to think outside of our community and recognize that this reflects on other communities as well, and even further.

That will never happen again!

I don't think so!

(Off microphone).

Some of the challenges here are the limitations of legacy technology. Right. When you're dealing with broadcast infrastructure, you may have only one secondary audio programming channel. If you're choosing to broadcast Spanish language, which oftentimes they are for sports and live events, there is nowhere for the audio description to go. And that's why, where achievable, we are pushing for there to be audio description and captioning in both English and Spanish.

When DTV came about, we thought that there would be multiple audio streams; that was the expectation. We thought that the competition between Spanish and captioning, or rather, I'm sorry, Spanish and audio description, would go away, that there would be no more competition. It just didn't happen. It is a little frustrating.

A separate AD channel would also be just fantastic for the ability to do your own mix, basically: to set your own balance of how much audio programming you want to hear and how much description. Right now the mix is in the hands of the provider, the platform, and quite often it is not very good. The description gets drowned out by music and effects, but if you could actually control that separately, it would not be so difficult and would be a remarkable improvement in the user experience.

You bet. I wrote it down.

KAREN STRAUSS: Is there anybody here that has concerns about the bill? Here is another question right up front. No, right... yeah. Okay.
I think we need a mic over here. The interpreter is here. There we go. No, she's interpreting.

I'm from Gallaudet University. My question is about the progress toward functional equivalency of audio and captioning. In the past, captioning was all upper case. There were a lot of limitations in regards to grammar and punctuation, and very often there were misunderstandings and mistakes. Things have improved a bit: we now have lower case, we can have different grammatical markers, and there is more functional equivalency. For example, there are emojis that we use in texting, and we can have non-speech information included, for a phone ringing, environmental noise, or emojis to show emotion, sarcasm, things like that. Those are great because they provide that equivalency. Is there a way to revisit this in the future so that captions could include those types of non-speech-based information?

I think that comes into the question of what makes for quality captions: good non-speech information, speaker ID. It is still not consistent that you get good speaker identification, so defining quality captions, latency, accuracy, placement, still hasn't really been addressed well. There are some rules under the FCC regs, but for things like non-speech information, or NSI, a term which, by the way, was developed by Gallaudet, which I love, there are no rules. There should be some freedom, some creativity; if you watch the captioning on Stranger Things, you will be very impressed by the poetic nature of it. Does that define quality or not? It needs to be debated.

If I may? It is not only limited to TV. There are the videoconferencing platforms and all of those that we use. Very often they provide what's spoken, but they don't provide what's not spoken during the meeting. Some platforms have speaker ID, some do not, and they don't always reflect the person who is signing. So there are a lot of variables that we need to consider. That gives us additional data, additional information, and more ideas and creativity that we can implement within captions for the future.

KAREN STRAUSS: One more question over there and then I think we're going to have to wrap up. We'll be around if you have other questions; you're welcome to approach us.

Hey, everybody, Zack from Verizon, a quick question on the kind of political realities behind this. Thank you so much for a great discussion; I'm really excited to talk about the bill. I'm curious whether you have any kind of window on building a caucus around this bill and trying to move it forward, because obviously the support of Senator Markey is incredible.

That's a perfect segue. People have asked us, how can you get a bill passed when you don't really have a Congress? Fair question. That's all I'll say. Every single bill that this community has put forward, starting with the Telecommunications for the Disabled Act, the Hearing Aid Compatibility Act of 1988, the Decoder Circuitry Act of 1990, the captioning requirements of the Telecommunications Act of 1996, the CVAA, I think I'm missing some, Title IV of the ADA requiring telecommunications relay services: all enacted. Watch us. Now we have to stand you all up once again! Come on! We're going to use our spirit to get the bill passed! Go for it!

DANIELLA DECKER: One last remark, for the people who weren't in the room in mind and body: standing for those who want to stand, to do the CVTA like the YMCA. Find us all after; we're looking to expand our small group and get everybody else involved to really push this legislation forward. If you're interested, we're all here. All right. On the count of three, YMCA with the CVTA, all right. On three. One, two, three: CVTA! CVTA! One more! One more! Thank you! CVTA! Thank you!
That was quite the segue! [Break]

FRANCESCA CESA BIANCHI: Thank you for the terrific Town Hall, CVTA! We have just a few minutes' break and then we'll resume at 2:55. That's in 10 minutes, 15 minutes maximum. We'll have a panel discussion on new frontiers for inclusive speech recognition.
Thank you.

This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.
