Breaking Ground - Computer Science

Improving the lives of those coping with aphasia

Working on the Aphasia Project are Professor Perry Cook and graduate students Xiaojuan Ma, left, and Sonya Nikolova.

Frank Wojciechowski

By Brett Tomlinson

People who have suffered a stroke or brain injury often encounter difficulty speaking, writing, reading, and understanding speech — a disorder known as aphasia. The loss of spoken and written language can leave an otherwise cognizant person unable to do simple, everyday things like ordering a meal or interacting with a doctor. 

Speech specialists and caregivers have learned to help patients work around some of these communication problems by using pictures to represent words, presented on picture cards or laptop computers. But technology can do much more, says Perry Cook, a professor of computer science. Cook aims to harness the advantages of computers, such as adaptability and the capacity to store vast libraries of images and sounds, to help people with aphasia in their daily lives.

Cook and graduate students Xiaojuan Ma and Sonya Nikolova are working with colleagues at the University of British Columbia on the Aphasia Project, a collaboration first brought to campus by Maria Klawe, the former Princeton engineering dean who is now the president of Harvey Mudd College. Princeton’s contributions helped to build the project’s ESI Planner II (ESI stands for “enhanced with sound and images”), a PDA programmed to help people with aphasia manage appointments and communicate using a customized collection of frequently used phrases. But in its ongoing work, Cook’s group has taken a step back to look at the fundamentals of communicating with images or sounds.

Nikolova is exploring language and vocabulary, trying to determine the best ways to organize and link words in aphasia-friendly applications. Her work draws on data from WordNet, an extensive lexical database developed by emeritus professor George Miller. Andrew Gomory ’79, CEO of the Princeton-based firm Lingraphicare, the leading technology provider for aphasia applications, says that organizing vocabulary has been one of the field’s major challenges. “In different situations, you want very different sets of words,” Gomory says, “and it’s very difficult to have what you want when you want it.”

Ma is working to improve visual representations of sentences, using icons, photographs, and short video clips. In 2007, she began studying the usefulness of video clips for representing verbs. She developed a set of 48 videos for the most commonly used verbs, applying uniform parameters for length and a few self-imposed rules: each video used a single actor, and mouthing of words was forbidden. In the video for “hope,” for example, the actor uses two familiar gestures, pressing his hands together in prayer and then crossing his fingers. Ma presented the videos, along with other visual representations such as photographs and animated drawings, to subjects in two age groups: 21 to 39, and 65 and over.

Both groups found the videos more effective than static images in representing verbs, according to Ma’s study, a somewhat surprising result. One might expect young YouTube viewers to prefer videos, Cook explains, but it was encouraging that senior citizens also found them useful, since aphasia disproportionately affects older patients.

Scripting the verb videos presented some creative challenges, but Cook’s next step could be even more difficult: finding sound cues that evoke meaning for people with aphasia in ways that photos or words may not. The group has started going through lists of common words to determine which can be represented with sounds. “Baby”? Yes. “Cleaning”? Maybe. “Face”? No. The sounds they choose to test may include simple ones, like a dog barking, as well as more abstract choices from the soundtracks of Tom and Jerry cartoons. In any case, creating the sounds is not a concern. “We have huge collections of sound-effect libraries,” says Cook, an audio expert who also directs the Princeton Laptop Orchestra. “We just want to get the right ones before we do a study.”

Cook will test the sounds by themselves, just as Ma tested her videos, but applications eventually might pair sound and video together, or at least give caregivers and patients that option. Symptoms of aphasia are “as varied as head injuries are,” Cook says, so different patients may respond to different things. But each approach has a common goal: effective communication, to help people with aphasia live more independently.
