[NFB-Science] Multi-Sensory Access to James Webb Telescope Images
Kendra Schaber
redwing731 at gmail.com
Wed Apr 3 20:26:17 UTC 2024
Hi all!
Last year, I took a geology class, and in it I ran into a lot of issues of the visual kind. I hated the first few labs because I felt more like a human version of dictation equipment than a full lab partner. Mind you, this happened with my team's best efforts on the table, with my own blindness skills, and with accessibility tacked on at a two-minute warning, because a sudden dumping of the part-time geology teachers also cost me the teacher I had spent months working to keep for that class. One example of the fallout was a rock identification lab. That lab couldn't be done without visual assistance: it had a time limit of one class period, about 90 minutes, and it required us to match written descriptions against the visual colors and designs of the rocks. Even worse, the paperwork was technically accessible because of its descriptions, but for much of that lab it still wasn't usable, because finding everything, even in electronic form, would have taken me maybe three or four times the class period.

In addition, there isn't enough accountability for products like Google Earth, which falsely claims to be accessible even though it really isn't. Given the nature of images, as we tackle this problem we must also research how long it takes blind people to analyze the picture descriptions we get in K-through-PhD education, because I think the physics of both our fingers with braille and our ears with audio actually work against us when compared with a sighted person's ability to scan an image with their eyes. If that is the case, we must update the laws to better accommodate this difference. Until we take it that far, we blind people, as a collective, will never get full participation, because without truly understanding this discrepancy we won't be able to work around it independently, even with the best technology in the universe invented and fully available and our best training under our belts. Here, I'm talking about the laws of physics, the unbreakable limits of the other senses we do have, like touch and hearing. In order to fully compete with images and other visual information that doesn't yet have a comparable workaround, we must do our own research on our weaknesses as much as our strengths, because, believe it or not, we need to fully understand both sides in order to improve our full participation and better our lives, so we even have a shot at truly living the lives we want in all of their aspects.

In general, we also don't study enough of what it truly takes for accessibility to be built into classes from the ground up. I haven't seen enough study of how blind people even process visual information with the technology we already have. From my experience, there isn't enough education on the existing laws, either. We should create a class on civil rights laws and a separate class on disability rights laws, and make both part of the required education, so that, as a civilization, teachers and bosses have the best possible support to build anything from the ground up with full accessibility already built in, instead of tacked on later, which is still the norm in K-through-PhD education. As for the technology itself, I think that's worth researching too, because I don't think we have all the answers to Tina's questions.
Some images are well done in braille, but even braille, and by extension technology, has its limits, just as audio does, because nothing currently exists that can do all images justice. Most sighted humans, even when they're trained, can't always bring image description to the technical level that a geology class requires for true usability. We have to explore the role of AI and how it impacts our ability to keep up in a science lab, and compare how we do with AI against how we do without it, even when there is a night-and-day difference. I don't think we study our weaknesses, or the gaps that do exist, enough. No, I'm not just talking about simple accessibility or civil rights; I'm also talking about the efficiency of our technology as we use it to compete. Can we do all of our homework, in all of its aspects, independently? Can we improve the poor quality of training on our access technology, and make sure it includes already-trained blind adults in college who may only need to learn a new full-page braille display, without having to get sighted help with inaccessible tech videos on YouTube and fruitless searches that turn up no answers? Better yet, can we do more study of the mental costs that limit us even further than the physical acts of discrimination that still take place today? Can we put together a portfolio of research on these questions and make that portfolio part of the required education? I think image poverty intersects all of these areas of study, so why not take our studies farther than before and explore these hidden roots? What are your thoughts?
Kendra
________________________________
From: NFB-Science <nfb-science-bounces at nfbnet.org> on behalf of Tina Hansen via NFB-Science <nfb-science at nfbnet.org>
Sent: Wednesday, April 3, 2024 10:37
To: nfb-science at nfbnet.org <nfb-science at nfbnet.org>
Cc: Tina Hansen <th404 at comcast.net>
Subject: [NFB-Science] Multi-Sensory Access to James Webb Telescope Images
I noticed the article in this month's Braille Monitor about image poverty.
Access to visual images has been an eternal challenge in the blindness
community.
This, and the interest in the fantastic descriptions of the images from the
James Webb Telescope, got me wondering about something. Can we find ways to
create multi-sensory approaches to access these images with touch and audio?
I know some attempts at sonifying the images have been talked about, but I
also like the idea of using the descriptions for context, using some kind of
sonification, and having the model be tactile. The descriptions could be
narrated by a skilled voice talent if they were standing alone, or by
whatever voice you have on your computer. I also like the idea of being
given a choice. I like the idea of using skilled talents, just because many
of us get enough of JAWS or VoiceOver and need a break.
But I also wonder if apps could help out, as we all witnessed at the last
total solar eclipse, and will likely notice at this one.
So can something like this be done? I'm curious what is already out there,
and what we can do to create something like this. Any thoughts? Thanks.