We all know what it's like to send a text message or e-mail whose tone is completely misinterpreted. A series of follow-up messages to explain ourselves ensues, and the efficiency of the original message is long gone.
That's one reason engineers at the University of Washington are testing a tool called MobileASL that uses motion detection to identify American Sign Language and transmit images over U.S. cell networks. Sometimes, words alone just don't cut it.
"Sometimes with texting, people will be confused about what it really means," says Tong Song, a Chinese national who is studying at Gallaudet University, a school for the deaf in Washington, D.C., and participating in UW's summer pilot test. "With the MobileASL phone, people can see each other eye to eye, face to face, and really have better understanding."
Eve Riskin, a UW professor of electrical engineering, says the MobileASL team's study of 11 students is the first to examine how deaf and hearing-impaired people in the U.S. use mobile video phones. The researchers plan to launch a larger field study this winter.
The engineers are now working to optimize compressed video for sign language, increasing image quality around the signer's face and hands while reducing the data rate to 30 kilobits per second. To conserve battery power, the phones use motion sensors to determine whether sign language is being used.
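The battery-saving idea can be sketched in a few lines: if successive camera frames barely change, no one is signing, so the phone can drop to a low frame rate. This is a minimal illustration only, assuming simple frame differencing stands in for the team's motion sensing; the function name and thresholds are hypothetical, not MobileASL's actual code.

```python
import numpy as np

def is_signing(prev_frame, curr_frame, area_threshold=0.02):
    """Hypothetical activity detector: treat a large fraction of
    changed pixels between grayscale frames as signing activity."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    changed_fraction = np.mean(diff > 25)  # pixels that moved noticeably
    return changed_fraction > area_threshold

# Example: a static scene vs. one with a moving hand-sized region.
still = np.zeros((120, 160), dtype=np.uint8)
moving = still.copy()
moving[40:80, 60:100] = 200  # simulated hand motion

print(is_signing(still, still))   # no motion
print(is_signing(still, moving))  # motion detected
```

When `is_signing` stays false for a while, the encoder could throttle the frame rate; when it flips true, full-rate, higher-quality encoding around the face and hands resumes.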