[NFB-Science] James Webb Revisited

Tina Hansen th404 at comcast.net
Sun Apr 21 20:44:55 UTC 2024


A couple of weeks ago, I commented on the excellent text descriptions from
the James Webb telescope, and thanks to the solar eclipse, I wanted to
explore going beyond those text descriptions and consider how to make the
images themselves more interactive.


I have a collaborator from the UK discussing this with me. I thought I'd get
the ball rolling with a couple of ideas.


What if there were a book of these first images, printed on paper sturdy
enough to hold Braille text? Alongside the text, tactile pages would show
each image and its data.


But let's imagine the book were outfitted with Bluetooth so it could
communicate with an app on your phone or tablet. As you explored the image
of, for instance, the Southern Ring Nebula, sonification would help you
visualize it. The real showstopper, though, would be the commentary,
narrated by a skilled voice talent such as Robert Picardo, Kate Mulgrew or
Michael Dorn. I throw out those names because there are likely a lot of
Star Trek fans on this list who would recognize them. And what if each
talent took a part in the commentary, so you'd have variety as you went
through the book?


I've seen similar books that used a pulse pen, with the pen handling the
audio. But the sound wasn't all that great; I think the producers of those
books simply chose the best TTS voice they could find. That was some years
ago, though.


What if some kind of small optical device, combined with an app, were used
for something like this? Or what if the book included a sturdy surface you
could pull out and place the image onto as a stage? The stage would hold
the Bluetooth radio, the touch sensors and the platform to mount the paper,
while the app handled the sonification and the recordings.
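Just to make the sonification piece concrete, here's a minimal sketch of
the kind of mapping such an app might use. Everything here is my own
illustration, not code from any existing product: the idea is simply that
the brightness under the reader's finger drives the pitch of a tone, so
brighter regions of the image sound higher.

```python
def brightness_to_pitch(brightness, f_low=220.0, f_high=880.0):
    """Map a 0-255 pixel brightness to a tone frequency in Hz.

    The mapping is exponential, so equal brightness steps sound like
    equal musical intervals. f_low/f_high (two octaves here) are
    arbitrary choices for this sketch.
    """
    t = max(0, min(255, brightness)) / 255.0  # normalize to 0..1
    return f_low * (f_high / f_low) ** t

# A row of pixel brightnesses sampled along a touch path,
# from the dark edge of an image toward its bright center:
row = [0, 64, 128, 255]
pitches = [round(brightness_to_pitch(b), 1) for b in row]
# The pitch rises from 220.0 Hz at black to 880.0 Hz at full brightness.
```

The actual audio playback, touch sensing and Bluetooth handling would of
course sit on top of a mapping like this.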


The idea is that having the descriptions in Braille could give students in
the BELL academies something interesting to read, especially if they're
older.


This could also be an opportunity to teach students how to understand
graphical data nonvisually, and it would be a good supplement to anything
else they get.


Having the data sonified and explained by a commentator would make it truly
interactive: students wouldn't just be listening to a description, they'd
be interacting with it. And if this could be mass produced, it could be
sold at the NFB Independence Market and through various blindness product
distributors. The app would live on your smartphone, so you could use your
own sound system.


I realize TTS voices are a possibility, but anyone in this community hears
those voices all the time, and sometimes you just need a break from them.
That's why I wanted to use recorded talent.


Or let's imagine a public setting, such as a science museum. A parent is
coming in with a blind child, and the parent has had to read every other
display to the child. But they soon come to your area, where you have an
interactive 3-d model of one of these images.


Until now, it's been like the typical accessibility challenge with the
parent doing the best she can, but the kid is bored. I'm sure we've heard
enough horror stories about this, and I know someone on this list who had an
experience like this 20 years ago.


So let's imagine this parent and kid have come up to this image, Webb's
First Deep Field. The kid's nervous because he hasn't really gotten
anything out of the other exhibits beyond what the parent has read to him.
But you encourage him to touch the image. The parent desperately needs a
break after all that reading.


The kid touches the center of the image, and both hear another voice, that
of Kate Mulgrew, identifying it. The parent is curious, and probably feeling
a bit of relief. She encourages the kid to really explore the image. As he
does, the parent just relaxes, listening along as Kate Mulgrew's commentary
fills both of them in. Both explore the interactive elements, thanks to data
sonification. Periodically, as the sonification lets the kid explore the
graphical data, Kate Mulgrew announces a marked point on the data. The
parent is relieved, since she didn't have to read out everything.
 

They play with this image for a few minutes, then go home.


Can something like this be created? I know View Plus has software that
could help, but I've not seen much that runs on a smartphone, and to my
knowledge there's no smartphone app that gives access to an image library
with descriptions. What if these images were used to test out the concept?
Is it possible to create interactive 3-d models large enough to deal with
these kinds of images but small enough for travel? Can a book be created
with the kind of tech I've envisioned here?


I also want to know what's already out there so we can find out where the
gaps still are. As of now, View Plus seems to be the only company doing
anything like this. There was another one some years back, but I'm not sure
if they're still around. Any comments? Thanks.


