[NFBMI-Talk] FW: Followup from Ann Arbor Visions 2023 Research Study

rob.parso3389 at gmail.com
Sun Feb 18 16:15:27 UTC 2024


Happy Sunday, My Federation Friends,

 

The business of the Federation often includes networking with other organizations and professionals about our programs and work. Please take some time to consider your interest in participating in some of the paid research studies offered through our connections at the University of Michigan. Please remember this is NOT a Federation study, and it is your decision whether to participate in the screening and, if selected, the study. Also note that completing the screening does not guarantee your participation in the study. Please read all of the steps included in the process to be fully informed. If you have questions, use the contact information provided in the thread.

 

With Love, 

 

Robert

 

From: Anhong Guo <anhong at umich.edu> 
Sent: Thursday, February 15, 2024 9:39 PM
To: president at nfbmi.org
Subject: Re: Followup from Ann Arbor Visions 2023

 

Dear Robert,

 

It was nice meeting you again at the Ann Arbor Visions event. My lab is currently looking for blind participants to join our paid research studies on visual assistive technologies. Could you please forward this to your mailing list or to anyone you know who might be interested in participating?

 

We are a team of accessibility researchers from the University of Michigan working to develop new applications that help blind people access visual information. We are conducting two paid, in-person research studies and are looking for blind participants! Each study will take about two hours, take place in the Beyster Building on the UMich North Campus (2260 Hayward St, Ann Arbor, MI 48109), and be scheduled sometime in March.

 

Study 1: Customizing AI assistive technology. In this study, we are looking for participants to test an application that allows users to customize AI assistive technology to filter visual information.

Study 2: Sound-aware visual descriptions. In this study, we are looking for participants to test an application that takes the surrounding sound context into account when providing real-world descriptions, so that spoken descriptions are not drowned out by noisy environments or conversations.

 

*	If you are selected to take part in the studies, you will be compensated $25 per hour for your time (as an Amazon gift card or physical check). Transportation costs will also be reimbursed up to a limit.
*	Please fill out this quick (5-minute) screening survey to verify eligibility: https://forms.gle/YTmWRCLEsSQQPhiS7 and feel free to email anhong at umich.edu if you have any questions.

 

Thanks,

Anhong

 

--
Anhong Guo
Assistant Professor

Computer Science and Engineering

School of Information (by courtesy)
University of Michigan

Mobile: 678-899-3981
Email: anhong at umich.edu
Web: https://guoanhong.com





On Jun 12, 2023, at 15:56, Anhong Guo <anhong at umich.edu> wrote:

 

Dear Robert,

 

I am Anhong Guo, an Assistant Professor in Computer Science & Engineering at the University of Michigan. It was great meeting you at Ann Arbor Visions last week! 

 

As I mentioned, my lab (the University of Michigan Human-AI Lab <https://humanailab.com/>) works extensively in the areas of Accessibility, AI, and AR/VR, particularly on developing assistive technologies for blind people. I'd love to keep in touch, and for the upcoming studies we run (both in person and remote), I'd really appreciate it if you could help spread the word.

 

Most recently, we released two accessibility apps on the App Store: 1) VizLens <https://apps.apple.com/app/id1577855541>, which helps blind people access everyday appliances (vizlens.org), and 2) ImageExplorer <https://apps.apple.com/us/app/image-explorer/id6443923968>, which helps blind people understand images (imageexplorer.org). These apps are informed by our previous research projects, thanks to the contributions of our participants, students, and collaborators; we are deploying them to benefit end users and hope to continue maintaining them in the coming years. We are also collecting data from them to enable future research (with optional IRB consent in the app). It would be great if you could forward the app info to your lists, and we'd love to see how people use them!

 

More info about these apps is in this Google Doc <https://docs.google.com/document/d/1Lk0YLmV4l43ykGkmjUswYDQHmU88rnDVV8PfpTwyz68/edit>, and I'm also pasting it here:

 


 

(1) VizLens <https://apps.apple.com/app/id1577855541> is an iOS app that helps visually impaired people access everyday appliances, such as microwaves, washing machines, and more, that are not labeled with raised dots or Braille (vizlens.org).

 

VizLens enables two key features:

- Virtual Interaction: Capture an image of your appliance's interface, and VizLens will create a virtual layout of the buttons relative to each other. Simply touch a button to hear its corresponding text.

- Live Camera Interaction: The innovative feature of VizLens. Aim your camera at the appliance interface and use your finger to point at buttons; the app will speak aloud the text of the button you're pointing at.

 

Using VizLens is simple and intuitive:

- Take a photo of your appliance's interface

- Let the app identify and extract button text

- Choose between Virtual Interaction View and Live Camera View to interact with your appliance

 

Here is the App Store link to the VizLens app: https://apps.apple.com/us/app/vizlens/id1577855541

For questions and feedback, please contact: humanailab-vizlens at umich.edu

 

(2) ImageExplorer <https://apps.apple.com/us/app/image-explorer/id6443923968> is an iOS app that helps visually impaired people understand the content of images (imageexplorer.org).

 

Key features of ImageExplorer:

- Text summary: Get a brief overview of the image through a text summary that includes a caption, tags, and objects.

- Touch-based interface: Use touch gestures to explore the spatial layout of objects in the image.

- Customized configuration: Customize the settings to show only the information you are interested in.

- Flexible input: Pick an image from your photo library or the example images, take a photo, or share an image from another app!

 

Here is the App Store link to the ImageExplorer app: https://apps.apple.com/us/app/image-explorer/id6443923968

For questions and feedback, please contact: humanailab-imageexplorer at umich.edu

 

Thanks!

Anhong

 

--
Anhong Guo
Assistant Professor

Computer Science and Engineering
University of Michigan

Mobile: 678-899-3981
Email: anhong at umich.edu
Web: https://guoanhong.com

 

 


