[NJTechDiv] Microsoft wants AI to be helpful for disabled persons
Tracy Carcione
carcione at access.net
Wed Oct 14 13:56:23 UTC 2020
Thanks. This was interesting.
Someone on the call Monday was asking what to do when your screen reader isn't responsive. My first go-to is to bring up the SeeingAI app and point my phone at the computer screen. If it starts chattering about updates, I know what's happening. Narrator is good for that, too.
Tracy
-----Original Message-----
From: NJTechDiv [mailto:njtechdiv-bounces at nfbnet.org] On Behalf Of Mario Brusco via NJTechDiv
Sent: Tuesday, October 13, 2020 6:11 PM
To: njtechdiv at nfbnet.org
Cc: Mario Brusco
Subject: [NJTechDiv] Microsoft wants AI to be helpful for disabled persons
Microsoft wants AI to be more helpful for people who are blind or use
wheelchairs.
https://www.techrepublic.com/article/microsoft-wants-ai-to-be-more-helpful-for-people-who-are-blind-or-use-wheelchairs/
by Veronica Combs on October 12, 2020.
Researchers are building diverse training data sets that include
information from people with low vision and individuals living with
conditions like ALS. People who are blind or who use a wheelchair or
who have autism often are early adopters of technology to complete
everyday tasks like communicating, reading, and traveling. Artificial
intelligence powers many of these services such as voice and object
recognition. In many cases, these products are trained on data from
able-bodied or neurotypical people. This means that the algorithms may
have a limited understanding of body types, communication styles, and
facial expressions.
Microsoft is working with researchers and advocacy groups to solve this
data problem and build data sets that better reflect all types of users
and real-world scenarios. Microsoft put the challenges in context in a
post published on Oct. 12 on the company's AI Blog:
"If a self-driving car's pedestrian detection algorithms haven't been
shown examples of people who use wheelchairs or whose posture or gait is
different due to advanced age, for example, they may not correctly
identify those people as objects to avoid or estimate how much longer
they need to safely cross a street," researchers noted.
"AI models used in hiring processes that try to read personalities or
interpret sentiment from potential job candidates can misread cues and
screen out qualified candidates with autism or who emote differently.
Algorithms that read handwriting may not be able to cope with examples
from people who have Parkinson's disease or tremors. Gesture recognition
systems may be confused by people with amputated limbs or different body
shapes."
"This really points to the question of how 'normal' is defined by AI
systems and who gets to decide that," Kate Crawford, senior principal
researcher at Microsoft Research New York and co-founder of the
company's Fairness, Accountability, Transparency and Ethics (FATE) in AI
group, said in the blog post.
Topic areas range from personalized image recognition for blind or
low-vision people to improved facial recognition for people with
amyotrophic lateral sclerosis (ALS). Microsoft researchers are also
studying how often public datasets used to train AI systems include data
from people older than 80. Age correlates strongly with disability, so
having data from older adults could make algorithms smarter when it
comes to aging.
Here are some of the projects that Microsoft is supporting with funding
or technical expertise.
Object Recognition for Blind Image Training (ORBIT): This project is
building a public data set from images taken by people who are blind or
have low vision.
The goal is to personalize image recognition so that an algorithm could
identify a particular cane or set of keys. Generic object recognition
can't do that.
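Personalized recognition of this kind is usually framed as few-shot learning: the user supplies a handful of example images of their own item, and the system matches new images against those examples rather than against generic categories. Here is a minimal sketch of the idea using nearest-prototype matching; the embedding vectors and item names are hypothetical, not from ORBIT's actual models:

```python
import math

def prototype(examples):
    """Average a few example embedding vectors into one class prototype."""
    n = len(examples)
    return [sum(dims) / n for dims in zip(*examples)]

def classify(query, prototypes):
    """Return the item whose prototype is nearest (Euclidean) to the query."""
    return min(prototypes, key=lambda name: math.dist(query, prototypes[name]))

# Hypothetical 4-dimensional embeddings for two personal items,
# with a couple of example "shots" each.
prototypes = {
    "my cane": prototype([[1.0, 0.1, 0.0, 0.2],
                          [0.9, 0.2, 0.1, 0.1]]),
    "my keys": prototype([[0.0, 1.0, 0.9, 0.1],
                          [0.1, 0.8, 1.0, 0.2]]),
}
print(classify([0.95, 0.15, 0.05, 0.15], prototypes))  # prints "my cane"
```

In a real system, the embeddings would come from a trained vision model; the point of a dataset like ORBIT is to train that model on images blind and low-vision people actually take, which tend to be framed and lit very differently from stock photos.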
VizWiz data set: University of Texas at Austin researchers are building
on a data set that was started at Carnegie Mellon University. The goal
is to work with people who are blind or with low vision to better
understand their expectations of AI captioning tools and to improve how
computer vision algorithms interpret photos taken by people who are
blind. Danna Gurari, assistant professor at the University of Texas at
Austin, is building a new public dataset to train, validate, and test
image captioning algorithms. It includes more than 39,000 images taken
by blind and low-vision participants.
Project Insight:
This project in collaboration with Team Gleason will create an open
dataset of facial imagery of people living with ALS to improve computer
vision and train related AI models on a broader dataset. Team Gleason is
a nonprofit that helps people living with ALS by providing them with
innovative technology and equipment and other support.
Researchers and advocates can apply for grants from Microsoft's AI for
Accessibility fund to support their work.
_______________________________________________
NJTechDiv mailing list
NJTechDiv at nfbnet.org
http://nfbnet.org/mailman/listinfo/njtechdiv_nfbnet.org
To unsubscribe, change your list options or get your account info for NJTechDiv:
http://nfbnet.org/mailman/options/njtechdiv_nfbnet.org/carcione%40access.net