One of the most exciting talks for me was the joint ISWC/ISMAR keynote by Dr. Dylan Schmorrow, one of the program managers for DARPA. The program managers are the guys who decide which research projects DARPA funds — the best-known PM was probably J.C.R. Licklider, who funded the Intelligence Augmentation research that led to the Internet, the mouse, the first(?) hypertext system, and so on. The current program Dylan talked about was Augmented Cognition, which I'm now convinced could become the biggest breakthrough in wearable computing yet.
Intelligence Augmentation tried to support human mental tasks, especially engineering tasks, by letting you interact with a computer through models of the data you're working on — that was really the start of the shift from mainframe batch processing to the interactive computing model. AugCog is about supporting cognitive-level tasks — attention, memory, learning, comprehension, visualization and basic decision making — by directly measuring a person's mental state.

The latest technology to come out of this effort is a sensor about the size of your hand, with several near-infrared LEDs arranged like the petals of a daisy around a light sensor in the center. The human skull is largely transparent to near-IR, so when the sensor is placed on the scalp you can detect back-scatter from the surface of the brain. Signal processing on the returned light reveals blood flow, and thus brain activity, down to a depth of about 5 cm — basically the cortex.

They've already got some promising data on detecting comprehension. One thing DARPA is especially interested in is being able to tell a soldier "Do this, then that, then the other thing… got that?" — and even if he says "Yup," his helmet can say "No, he didn't really get it." Outside of military apps (and getting a little pie-in-the-sky), sometime down the road I can imagine using this kind of data to build interfaces that adapt to your cognitive load in near real time, adjusting the information displayed and the output modalities to suit. Nearer-term, these devices are starting to be sold commercially for on the order of thousands of dollars, not tens or hundreds of thousands. That means a lot more brain-imaging science can be performed by a lot more diverse groups.
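For the curious: the "signal processing on the returned light" step in this kind of near-IR brain sensing is usually some variant of the modified Beer-Lambert law, which converts a change in detected light intensity into an estimated change in hemoglobin concentration (and hence blood flow). Here's a minimal sketch of that conversion — the function name and every numeric value are illustrative assumptions of mine, not anything from the talk:

```python
import math

def concentration_change(i_baseline, i_measured, epsilon, distance_cm, dpf):
    """Estimate a chromophore (e.g. oxy-hemoglobin) concentration change
    from a change in detected near-IR light intensity, using the modified
    Beer-Lambert law:  delta_A = epsilon * delta_C * d * DPF.

    epsilon     - extinction coefficient of the chromophore at this wavelength
    distance_cm - source-detector separation on the scalp
    dpf         - differential pathlength factor (light scatters, so its true
                  path through tissue is several times the straight-line distance)
    """
    # Change in attenuation (optical density) relative to baseline intensity.
    delta_attenuation = math.log10(i_baseline / i_measured)
    return delta_attenuation / (epsilon * distance_cm * dpf)

# Made-up example numbers: a 5% drop in detected light, a 3 cm
# source-detector separation, and a DPF of 6.
dc = concentration_change(1.00, 0.95, epsilon=1.2, distance_cm=3.0, dpf=6.0)
```

Less light coming back means more light absorbed by hemoglobin along the path, so `dc` comes out positive — increased blood concentration, which is the proxy for increased activity in that patch of cortex. A real system does this per wavelength and per source-detector pair to separate oxy- from deoxy-hemoglobin, but the core arithmetic is this simple.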