According to MIT’s Technology Review, two researchers from the University of Nevada, Eelke Folmer and Vinitha Khambadkar, have created a wearable technology that enables blind people to perform spatial perception tasks using only their hands instead of canes.
The “hands-free” Gestural Interface for Remote Spatial Perception, more commonly known as GIST, lets users extract information from their environment through hand gestures, making everyday tasks much more accessible. Drawing on ideas from earlier “augmented reality” projects (e.g., MIT’s SixthSense project), GIST uses Microsoft’s Kinect sensor to detect and analyze objects within its field of view, then relays that information back to the user via speech technology.
GIST tracks the wearer’s arms and fingers to determine what they are pointing at. Different finger and arm gestures then request different types of spatial information, such as the presence of human beings, colors, and depth. For instance, if the wearer makes a fist and points it in the direction of a person, GIST measures the distance between the wearer and that individual and relays it to the user.
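The gesture-to-information mapping described above can be pictured as a simple dispatch: each recognized hand shape selects which fact about the scene gets spoken. The sketch below is purely illustrative; the gesture names, scene fields, and phrasing are assumptions, not the published GIST implementation.

```python
def describe(gesture, scene):
    """Map a recognized hand gesture to a spoken description of the scene.

    `gesture` is a hypothetical label from a gesture recognizer; `scene` is a
    dict of hypothetical measurements a depth sensor might provide.
    """
    if gesture == "fist" and scene.get("person_distance") is not None:
        # A fist pointed at a person requests the distance to that person.
        return f"person {scene['person_distance']:.1f} meters ahead"
    if gesture == "open_hand" and scene.get("color") is not None:
        # An open hand requests the color of the surface being touched.
        return f"surface color is {scene['color']}"
    if gesture == "point" and scene.get("object_distance") is not None:
        # A pointing finger requests the depth of the object pointed at.
        return f"object {scene['object_distance']:.1f} meters away"
    return "no information available"
```

For example, `describe("fist", {"person_distance": 2.0})` would produce the spoken string “person 2.0 meters ahead.”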
Gestures aren’t the only things GIST is meant to identify. The interface can also recognize speech, objects, and faces.
Admittedly, as with all technology in development, there are a few caveats. For instance, GIST cannot continue tracking objects once they move out of its field of view or behind the user. Also, the Kinect sensor is not exactly a light piece of jewelry to wear around the neck. Its bulk might make it difficult for individuals to give up their lighter, though less perceptive, canes for the more intuitive but bulky “eye” piece.
While it may not be all fun and games for the researchers at the University of Nevada, it is for another team of computer scientists at the University of Washington. Led by doctoral student Kyle Rector, they have written a software program that makes yoga accessible to the blind. Eyes-Free Yoga is designed around Microsoft’s Kinect: it moves the user through a series of yoga poses by tracking their body movements and providing auditory feedback.
The program serves as an “exergame,” or exercise game, that replaces the visual aspect of yoga and leaves the user free to listen to adjustments. Using the Kinect sensor to track an individual’s movement, the program applies simple geometry and the law of cosines to analyze the individual’s posture. It then guides users into the correct position with simple auditory feedback such as “rotate hip right” or “lean forward more.” Eyes-Free Yoga doesn’t just correct users; it also provides positive feedback once they are aligned properly.
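The geometry here is straightforward: given three tracked joints (say, shoulder, elbow, and wrist), the angle at the middle joint follows from the law of cosines, and that angle can be compared against a target to generate feedback. The sketch below illustrates the idea; the joint coordinates, target angles, tolerance, and feedback phrases are assumptions, not the program’s actual criteria.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by the segments b-a and b-c.

    Each point is an (x, y, z) tuple, e.g. from a skeleton tracker.
    Uses the law of cosines: ac^2 = ab^2 + bc^2 - 2*ab*bc*cos(angle).
    """
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

    ab, bc, ac = dist(a, b), dist(b, c), dist(a, c)
    cos_angle = (ab**2 + bc**2 - ac**2) / (2 * ab * bc)
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

def posture_feedback(angle, target, tolerance=10.0):
    """Turn a measured joint angle into a spoken correction (hypothetical)."""
    if abs(angle - target) <= tolerance:
        return "good alignment"
    return "straighten more" if angle < target else "bend more"
```

For instance, a fully extended arm, with shoulder at (0, 0, 0), elbow at (1, 0, 0), and wrist at (2, 0, 0), yields an elbow angle of 180 degrees, while a wrist at (1, 1, 0) yields 90 degrees.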
The researchers accomplished this by teaming up with multiple yoga instructors to come up with the criteria necessary to judge proper alignment. The program consists of about six poses, each with 30+ commands for improvement.
One collaborator, Julie Kientz, hopes “this acts as a gateway to encouraging people with visual impairments to try exercise on a broader scale.” Eyes-Free Yoga makes this possible by allowing individuals to learn and practice yoga from the comfort of their own homes.
Both Eyes-Free Yoga and GIST utilize Microsoft’s Kinect to improve the quality of life for those unable to see. Whether it’s easing the challenges of everyday life, or increasing the accessibility of fitness, Microsoft’s Kinect technology is enabling researchers to build a more accessible future with everyone in mind.