The points over the keyboard show the user's taps from the first tap location (red) to the last (blue). The user was entering "this will be hard". A simple baseline that decodes each tap to its closest key, combined with a long-span character language model, resulted in the recognition "thud dunnve gatc".
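The closest-key baseline mentioned in the caption can be sketched as follows. This is a minimal illustration, not the study's actual code: the QWERTY key coordinates below assume a staggered grid one key-width tall per row, and each tap is snapped independently to the nearest key center.

```python
import math

# Illustrative QWERTY layout: each row is offset horizontally, and each key
# center sits on a unit grid (one key width apart). These coordinates are
# assumptions for the sketch, not the device's real geometry.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
ROW_OFFSETS = [0.0, 0.5, 1.5]  # horizontal stagger of each keyboard row

KEY_CENTERS = {
    ch: (ROW_OFFSETS[r] + i + 0.5, r + 0.5)
    for r, row in enumerate(ROWS)
    for i, ch in enumerate(row)
}

def closest_key(x, y):
    """Return the letter whose key center is nearest the tap (x, y)."""
    return min(KEY_CENTERS, key=lambda ch: math.dist((x, y), KEY_CENTERS[ch]))

def baseline_decode(taps):
    """Decode a tap sequence by independently snapping each tap to a key."""
    return "".join(closest_key(x, y) for x, y in taps)
```

Because every tap is decoded in isolation, noisy input from a blindfolded user easily drifts onto neighboring keys, which is how "this will be hard" can come out as "thud dunnve gatc".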
Touchscreen mobile devices such as the iPhone and iPad have changed the way people communicate and access information while on the go. A crucial task on such devices is the entry of text. But without visual or tactile feedback, text entry is problematic for the world's many blind and visually-impaired users. Existing solutions such as the iPhone's VoiceOver feature are slow, with entry rates below six words per minute. Other, faster input techniques require knowledge of Braille and chorded input using multiple fingers simultaneously.
Luckily, assistant professor Keith Vertanen and Montana Tech computer science student Haythem Memmi are on the case. They recently finished a pilot study in which they collected data from users entering text on an iPod touch, both while sighted and while blindfolded. Using this data, they are investigating several recognition-based approaches that attempt to decode the noisy input from the blindfolded users. Their work appeared on the front page of the Montana Standard.
So far, results are encouraging. By combining a variety of probabilistic techniques, they have achieved substantial error reductions compared to the baseline. But there is still plenty of work left to do: error rates need further reduction, and an error-correction interface needs to be added. Additionally, recognition is currently done offline on a desktop and takes several minutes per sentence. The latest findings from this project will be presented at Techxpo on May 2nd.
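One generic way such probabilistic techniques can be combined is a noisy-channel decoder: score each candidate word by a tap likelihood (how plausible the touch points are for that word's keys) plus a language-model prior. The sketch below is a hedged illustration under assumed parameters (the key grid, the noise width `SIGMA`, and the uniform prior are all invented for the example); it is not the authors' actual model.

```python
import math

# Same illustrative staggered QWERTY grid as before (assumed coordinates).
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
ROW_OFFSETS = [0.0, 0.5, 1.5]
KEY_CENTERS = {
    ch: (ROW_OFFSETS[r] + i + 0.5, r + 0.5)
    for r, row in enumerate(ROWS)
    for i, ch in enumerate(row)
}
SIGMA = 0.6  # assumed tap-noise standard deviation, in key widths

def log_tap_likelihood(word, taps):
    """Log-likelihood of the taps given a word, under independent
    2-D Gaussian noise centered on each intended key."""
    total = 0.0
    for ch, (x, y) in zip(word, taps):
        kx, ky = KEY_CENTERS[ch]
        total += -((x - kx) ** 2 + (y - ky) ** 2) / (2 * SIGMA ** 2)
    return total

def decode(taps, vocabulary, log_prior):
    """Pick the same-length vocabulary word maximizing
    tap likelihood + language-model log prior."""
    candidates = [w for w in vocabulary if len(w) == len(taps)]
    return max(candidates,
               key=lambda w: log_tap_likelihood(w, taps) + log_prior(w))
```

Unlike the closest-key baseline, a decoder of this shape can recover the intended word even when individual taps land on neighboring keys, because the language model favors likely words over key-by-key nonsense.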