CAMBRIDGE, United Kingdom: Theo, a 12-year-old boy who is blind, is seated at a table in a crowded kitchen on a gray and drippy mid-December day. A headband that houses cameras, a depth sensor and speakers rings his sandy-brown hair. He swivels his head left and right until the camera on the front of the headband points at the nose of a person on the far side of a counter.
Theo hears a bump sound followed by the name “Martin” through the headband’s speakers, which are positioned above his ears.
“It took me like five seconds to get you, Martin,” Theo says, his head and body fixed in the direction of Martin Grayson, a senior research software development engineer with Microsoft’s research lab in Cambridge. Grayson stands next to a knee-high black chest that contains the computing hardware required to run the machine learning models powering the prototype system Theo is using to recognize him.
Elin, Theo’s mother, who is standing against a wall on the opposite side of the room, says, “I love the way you turned around to find him. It is so nice.”
As Theo begins to turn to face his mother, the speakers sound another bump and the name “Tim.”
“Tim, there you are,” says Theo with delight as his gaze lands on Tim Regan, another senior research software development engineer at the lab, who took Theo under his wing to teach him advanced computer coding skills. Theo and his mother are at Regan’s house for a bimonthly coding lesson. The two met while working on a research project that led to the development of Code Jumper, a physical programming language that’s inclusive of children with all ranges of vision.
Theo is now one of several members of the blind and low-vision community who are working with Regan, Grayson, researcher Cecily Morrison and her team on Project Tokyo, a multipronged research effort to create intelligent personal agent technology that uses artificial intelligence to extend people’s existing capabilities.
For Theo, that means tools to recognize who is around him.