
Inclusion with the Help of Machine Intelligence and MI4People's CoVision




Dear friends,


We are happy to announce that we recently kicked off our third project – CoVision! Its goal is a free-of-charge, easily accessible, open-source computer vision app that classifies COVID rapid test results using mobile devices. The app will enable blind and visually impaired people to evaluate rapid antigen COVID tests by themselves.
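The actual implementation belongs to the CoVision team and is still evolving; purely to illustrate the kind of image classification involved, a rough sketch in Python with PyTorch/torchvision might look like the following. The model choice, class labels, and file names are assumptions for this example, not the project's design.

```python
# Illustrative sketch only: a small transfer-learning classifier for photos
# of rapid antigen tests. Model, labels, and inputs are assumptions and do
# not describe the CoVision team's actual implementation.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

CLASSES = ["negative", "positive", "invalid"]  # hypothetical label set

# Lightweight backbone suited to mobile devices, with a new classification head.
# In practice the head would be fine-tuned on labeled photos of rapid tests.
model = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, len(CLASSES))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def classify_test_photo(path: str) -> str:
    """Return the predicted result class for a photo of a rapid test."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return CLASSES[int(logits.argmax(dim=1))]

print(classify_test_photo("rapid_test.jpg"))  # hypothetical input image
```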

While blind and visually impaired people are able to perform the tests on their own, they cannot read the results and therefore depend on help from others. This dependency means a loss of privacy and the considerable inconvenience of having to seek help, which makes rapid COVID tests less attractive and harder to use for visually impaired people. In the worst case, it could lead to lower adoption of this technology among the blind and visually impaired and thus put them at greater risk.

Our app will increase convenience and privacy for this group of people and make the tests more accessible. Overall, we expect the app to help protect the health of blind and visually impaired people and to better integrate them into the measures against the pandemic.


Motivated by the original idea from Dr. Stefanie Lämmle from the InnovationLab of the City of Munich and by discussions with Steffen Erzgraber from the Bavarian Federation of the Blind and Visually Impaired, a group of students recently created a first working prototype of this application. The team of Simon Farshid, Raphael Feigl, Brigitta Jesica Kartono, and Lennart Maack built the prototype as team CoVision within 48 hours during the TUM.ai Makeathon and won first prize at the event!

The current prototype shows that such an application is feasible, but it must be improved in both accuracy and usability to deliver the highest possible value to blind users. These improvements require additional resources, time for research, and access to further Machine Intelligence talent. The CoVision team therefore decided to join MI4People and continue its applied research on this topic as a project within MI4People's project portfolio!


We are very happy to welcome the highly motivated CoVision team to MI4People!


Motivated by this kind of application of Machine Intelligence (MI) for the Public Good, we have also decided to dedicate this issue of our newsletter to use cases in which MI supports impaired and disabled people.


Enjoy the newsletter, put your capabilities into action for the Public Good, and let us make the world a better place for all of us together!


Your MI4People Team



Blind and visually impaired people

Similar to our CoVision app, there are many existing and potential applications of MI that combine computer vision and speech generation techniques to enable blind and visually impaired people to better perceive the world around them and to become more independent.


A number of apps can recognize typed and handwritten text in photos of documents and letters, or in live video from a mobile phone camera, and read it aloud so that blind people can access written information. One notable example is Microsoft's free-of-charge Seeing AI. Because it is free, it is accessible to everyone who has a mobile phone or tablet, and it offers further helpful features such as identification of people and their emotions, color and light recognition, and a barcode scanner.
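Seeing AI's internals are proprietary, but conceptually such a "read aloud" feature chains optical character recognition with speech synthesis. A minimal sketch of that pipeline in Python, assuming the open-source pytesseract and pyttsx3 packages (purely illustrative, not the technology Seeing AI actually uses):

```python
# Conceptual sketch of an OCR-and-read-aloud pipeline.
# Assumes the Tesseract OCR engine plus the pytesseract and pyttsx3 packages;
# illustrative only, not the technology behind Seeing AI.
from PIL import Image
import pytesseract   # wrapper around the Tesseract OCR engine
import pyttsx3       # offline text-to-speech engine

def read_document_aloud(image_path: str) -> str:
    """Extract text from a photographed document and speak it."""
    text = pytesseract.image_to_string(Image.open(image_path))
    if text.strip():
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()   # blocks until the speech output is finished
    return text

if __name__ == "__main__":
    read_document_aloud("letter.jpg")  # hypothetical example image
```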


One step further is to use essentially the same MI technology but to integrate it into wearable glasses instead of running it on a mobile device. That is exactly what the Dutch startup Envision is doing. Built on the Google Glass platform, its device can not only read text and identify people, but also identify objects and even describe scenes. It also has a video call feature that enables blind and visually impaired users to call a person of trust when they are stuck in a difficult situation; that person then sees the video from the glasses and can provide help. Unfortunately, the technology is currently still very expensive (ca. 3,300€) and thus out of reach for many people, but it can be expected that this and similar technologies will become much more affordable over time.


Deaf people

In a similar way, deaf people can also benefit from advances in MI technologies. For example, speech-to-text artificial intelligence (AI) models can instantly transcribe spoken language into written text, and there are already apps on the market that do this.
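The apps mentioned below rely on their own proprietary models; as a rough illustration of the underlying idea, a minimal transcription sketch in Python using the open-source Whisper model could look like this (the model size and audio file are assumptions for illustration):

```python
# Minimal speech-to-text sketch using the open-source Whisper model.
# Illustrative only; commercial transcription apps run their own models.
import whisper

model = whisper.load_model("base")             # small general-purpose model
result = model.transcribe("conversation.wav")  # hypothetical audio recording
print(result["text"])                          # transcript a deaf user could read
```

Real products add much more on top of this, such as live streaming transcription, speaker separation, and accessible user interfaces.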


For instance, RogerVoice, a French startup, provides an app that transcribes phone calls, thus helping deaf people use this everyday technology. Another startup, Ava, has created an instant transcription app that transcribes group conversations in real time and even distinguishes between the speakers, so that a deaf person can follow a conversation with several people without lip-reading. Ava mainly provides its services to inclusive organizations and enables them to better include people with hearing impairments.


Impaired speech

Another group of disabled people who might benefit from MI is people with speech impediments. Initiatives like Voiceitt or Google's Project Euphonia are working on AI models that can understand people with brain injuries, Parkinson's, ALS, and other conditions whose speech may at first seem difficult to comprehend both for humans and for regular speech-to-text applications. These AI models normalize otherwise hard-to-understand speech into audio or text output, so that people with non-standard speech can still communicate with others and be understood.



Amputees

Another very interesting field of research that uses AI to help disabled people is the development of intelligent prostheses, e.g., prosthetic hands and arms.


This area has improved dramatically in recent years thanks to technological advances that allow independently moving fingers, control over multiple joints, personalized 3D printing, and so on. Nonetheless, most users find modern prosthetic arms quite difficult to control: the most common control mechanism, myoelectric sensing, relies on recording the electrical activity of the arm muscles and requires users to contract their muscles in specific combinations of patterns to generate hand or wrist motions. These patterns are often counterintuitive and time-consuming, which frequently results in a frustrating experience.


A research team including Diu Khue Luu and Anh Tuan Nguyen from the University of Minnesota has recently published a paper describing a way to solve this problem with the help of artificial intelligence, with the potential to revolutionize the development of prosthetic hands and arms. They created an AI agent, based on a recurrent neural network, that has learned to translate the amputee's movement intent, captured through a peripheral nerve interface, into actual movements. It can currently handle six degrees of freedom simultaneously from multichannel nerve data in real time and was tested with three human amputees.
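The paper's exact architecture and signal-processing pipeline are not reproduced here; purely as a conceptual sketch of decoding multichannel nerve recordings into continuous motion commands, a recurrent regressor could look like the following (layer sizes and the channel count are illustrative assumptions, not the study's design):

```python
# Conceptual sketch: decode multichannel peripheral-nerve signals into
# six continuous degree-of-freedom commands with a recurrent network.
# Layer sizes and the channel count are illustrative assumptions,
# not the architecture used in the University of Minnesota study.
import torch
import torch.nn as nn

class NerveDecoder(nn.Module):
    def __init__(self, n_channels: int = 16, hidden: int = 128, n_dof: int = 6):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_dof)  # one output per degree of freedom

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) windows of preprocessed nerve data
        out, _ = self.rnn(x)
        return self.head(out)  # (batch, time, n_dof) continuous motion commands

# Toy forward pass on random data standing in for nerve recordings.
decoder = NerveDecoder()
dummy = torch.randn(1, 200, 16)   # 1 window, 200 time steps, 16 channels
commands = decoder(dummy)
print(commands.shape)             # torch.Size([1, 200, 6])
```

The appeal of a recurrent model here is that it can exploit the temporal structure of the nerve signal and produce smooth, continuously updated commands in real time.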


Further research in this area is required, but it looks like people with amputated limbs could soon use a new generation of prostheses that allow very natural motion.



Smart homes

Last but not least, the MI technologies worth mentioning here are smart assistants such as Amazon's Alexa or Google Assistant, which can be connected to and control smart home devices like thermostats, light switches, curtains, robot vacuum cleaners, etc.


What many people use just for convenience or fun can be life-changing for people with disabilities. Through simple voice commands they can communicate what they need and control nearly every aspect of their home by themselves, which makes them more independent and allows them to act more autonomously!



Concluding remarks

There are many existing applications of Machine Intelligence that improve the lives of disabled people. However, many of them are still very expensive and not accessible to everyone, and others are still in their infancy and need more time and financial resources for further research and development. We encourage the reader to support initiatives like those described above and to think about new, innovative ways to better include disabled people in our society by means of technology!
