SensoSmart Story

Cloud learning and crowdsourcing. Open-source solutions. A community of developers and users sharing those solutions. Enabling devices to understand their context for greater autonomy. Enabling devices to learn from one another and from the humans who use them. It just makes sense.

If the world were your oyster...

If you could envision solutions -- the best solutions -- what would they be? You know. The really big picture.

The sky is the limit.

That's where the SensoSmart team began thinking about how to make assistive technologies of every kind (exoskeletons, instrumented white canes, humanoid robots, prostheses, hearing assistants, and hearing aids) more autonomous, personalized, and smart, so that more people can benefit. We realized that many of the things we need are already available. Through connectivity and network intelligence, SensoSmart provides the virtual sensors that make devices smart and places solutions within reach of many more people.

Many people with disabling conditions live in lower- and middle-income regions. SensoSmart puts accommodation on the network, lowering barriers, bridging gaps, and creating lower-cost models for accommodation.

Easy-to-use, affordable personal devices, including smartphones, notebooks, e-readers, and tablets, have exploded in popularity. These devices connect people around the world with family, friends, and resources for education, work, medical care, entertainment, and many other services; yet some people are still unable to fully access these resources.

Modern devices provide sophisticated sensors and applications with the potential to transform service and device experiences and extend their reach to even more people. Our cognitive sensor project, SensoSmart, leverages existing infrastructure and adds new features to personalize services, simplify control of assistive devices, and improve human-robot interaction. SensoSmart virtual sensors enable people to perform tasks beyond their own abilities.

We share some examples:

Many people are hard of hearing and would benefit from hearing aids; approximately 4 out of 5 people who would benefit from hearing aids do not have them. Hearing aids are quite expensive (between $2,000 and $6,000 US) and are not typically covered by insurance. People with hearing loss can miss out on important information and are disconnected from the popular experiences their peers enjoy. (Approximately 38 million Americans have significant hearing loss, and many would benefit from hearing aids. Thirty to forty percent of people over 65 and fourteen percent of those between 45 and 64 have some type of hearing loss.)

We propose an affordable software hearing aid: an app that can be downloaded to a smartphone or accessed in the cloud to help people who are hard of hearing enjoy audio applications. In the big picture, we envision a personalized code book in the cloud covering all media and communications experiences. In this way, many more people will be able to access hearing assistance in a variety of circumstances.
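
To make the idea concrete, here is a small sketch in Python of how a phone app might fetch a listener's personal code-book entry from a cloud service and apply it to audio on the device. The endpoint, field names, and helper functions are illustrative placeholders of our own, not an existing SensoSmart interface.

    # Hypothetical sketch: fetch a personal hearing profile (a "code book" entry)
    # from a cloud service and apply it to audio samples on the device.
    # The URL and JSON fields are illustrative assumptions, not a real API.
    import json
    import urllib.request

    CODEBOOK_URL = "https://example.org/sensosmart/codebook"  # placeholder endpoint

    def fetch_profile(user_id: str, context: str) -> dict:
        """Download the user's profile for a listening context such as 'tv' or 'phone call'."""
        with urllib.request.urlopen(f"{CODEBOOK_URL}/{user_id}/{context}") as resp:
            return json.load(resp)

    def apply_profile(samples: list, profile: dict) -> list:
        """Apply the profile's overall gain; a real app would equalize per frequency band."""
        gain = 10.0 ** (profile.get("overall_gain_db", 0.0) / 20.0)
        return [s * gain for s in samples]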

An example of an important advance in this arena comes from the world-famous Fraunhofer Institute.


Fraunhofer offers individual hearing support

This approach can increase audience reach and provide an enjoyable entertainment experience for many who would otherwise have no access to audio applications. Over time, we would like to extend the features to improve the listening experience of people who are hard of hearing in a variety of audio situations. We will enhance our services with user profiles, including audiograms, preferences, and genetics, to personalize hearing algorithms and applications, and we will apply network analytics to user behavior to measure app effectiveness and predict new needs.
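
As one illustration of how an audiogram could drive personalization, the sketch below (Python with NumPy) derives per-band gains with the classic half-gain fitting rule and applies them to an audio block. The profile format and function names are our own assumptions for illustration, not part of an existing SensoSmart service.

    # Illustrative sketch: derive per-band gains from an audiogram using the
    # simple half-gain rule (prescribed gain is roughly half the hearing loss in dB),
    # then boost the corresponding frequency bands of an audio block.
    import numpy as np

    # Hypothetical user profile: hearing loss in dB HL at standard audiogram frequencies.
    AUDIOGRAM = {250: 10, 500: 20, 1000: 30, 2000: 45, 4000: 60, 8000: 65}

    def band_gains(audiogram: dict) -> dict:
        """Half-gain rule: amplify each band by about half the measured loss."""
        return {freq: loss / 2.0 for freq, loss in audiogram.items()}

    def amplify(samples: np.ndarray, rate: int, gains_db: dict) -> np.ndarray:
        """Apply per-band gains in the frequency domain (coarse, for illustration only)."""
        spectrum = np.fft.rfft(samples)
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
        bands = sorted(gains_db)
        edges = [0.0] + [float(np.sqrt(a * b)) for a, b in zip(bands, bands[1:])] + [rate / 2.0]
        for (lo, hi), f in zip(zip(edges, edges[1:]), bands):
            mask = (freqs >= lo) & (freqs < hi)
            spectrum[mask] *= 10.0 ** (gains_db[f] / 20.0)
        return np.fft.irfft(spectrum, n=len(samples))

A production fitting would use a validated prescription formula and per-band compression rather than this static equalizer, but the shape of the pipeline is the same: profile in the cloud, gains computed per user, audio adjusted on the device.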

Another example:

Assistive robots and devices are expensive, limited in function, and can be complex to learn, use, personalize, or update. SensoSmart virtual sensors provide a means for devices to exchange data and connect to shared learning resources in the cloud. SensoSmart draws on steganography and watermarking and on machine and human hearing and vision, and it offers a convenient way to share useful control programs and features among users of assistive devices through the cloud and crowdsourcing. This enables lower-cost models that benefit people with disabling conditions. SensoSmart makes it easy to share robot functionality widely, so devices can learn from one another and take advantage of popular strategies available in the cloud.
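
A minimal sketch of the sharing idea follows, under our own assumptions: the registry, class, and method names below are hypothetical placeholders rather than an existing SensoSmart interface. Devices publish named control programs to a shared registry, and other devices fetch and reuse the most popular one.

    # Illustrative sketch: a tiny in-memory "cloud" registry where assistive devices
    # publish named control programs and fetch the most popular one for a task.
    # A real deployment would be a networked service with authentication,
    # watermarking of shared programs, and safety review before reuse.
    from collections import defaultdict
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class SkillRegistry:
        """Shared store of control programs, keyed by task name."""
        skills: dict = field(default_factory=lambda: defaultdict(dict))
        downloads: dict = field(default_factory=lambda: defaultdict(int))

        def publish(self, task: str, name: str, program: Callable) -> None:
            self.skills[task][name] = program

        def fetch_popular(self, task: str) -> Callable:
            """Return the most-downloaded program for a task (the crowd's choice)."""
            candidates = self.skills[task]
            name = max(candidates, key=lambda n: self.downloads[(task, n)])
            self.downloads[(task, name)] += 1
            return candidates[name]

    # Example: one user's wheelchair shares a doorway-approach strategy; another reuses it.
    registry = SkillRegistry()
    registry.publish("approach_doorway", "slow_center",
                     lambda sensors: {"speed": 0.2, "heading": sensors["door_angle"]})
    program = registry.fetch_popular("approach_doorway")
    print(program({"door_angle": 5.0}))  # {'speed': 0.2, 'heading': 5.0}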