Wearable Computing

ISWC 2005 Call For Participation

We’ve just posted the Call For Papers for the 9th Annual IEEE International Symposium on Wearable Computers (ISWC 2005), to be held October 18–21 in Osaka, Japan. This will be the first ISWC in Asia, and I’m proud to be co-program chair along with Professor Kenji Mase from Nagoya University.

Initial submissions for all categories to ISWC 2005 are due on May 8th (just four short months from now) at http://www.iswc.net/ — see the CFP for details on potential topics.

History repeats itself…

Ignoring things like the wrist watch, the earliest wearable computer was built back in 1961 by Ed Thorp (father of the theory of card-counting in Blackjack) and Claude Shannon (father of information theory) to answer a question that had plagued mankind for generations: is there any way I can cheat reliably at roulette?

Now, over 43 years later, history repeats itself yet again as a trio has walked away with more than $2.3 million, allegedly having used a cellphone rigged with a laser range-finder to improve their odds of winning from 1 in 37 to about 1 in 6. Police have dropped the investigation after deciding there was no interference with the ball in play. (That wouldn’t fly in Vegas, where laws were put in place after wearables users spooked casinos in the ’70s.)
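
For the curious, here’s a back-of-the-envelope check of what that shift in odds means. I’m assuming a single-zero European wheel and the standard 35:1 straight-up payout, neither of which is stated in the story:

```python
# Expected profit per unit staked on a straight-up roulette bet.
# Assumption (not from the story): 35:1 payout on a single-number bet.
def expected_value(p_win, payout=35):
    """Return expected profit per 1-unit bet: win pays `payout`, loss costs 1."""
    return p_win * payout - (1 - p_win)

fair = expected_value(1 / 37)    # normal single-zero odds
rigged = expected_value(1 / 6)   # the laser-assisted odds from the story

print(f"house edge per bet:  {fair:+.4f}")    # about -0.027
print(f"laser-assisted bet:  {rigged:+.4f}")  # about +5.0
```

At normal odds you lose about 2.7% of each bet on average; at 1 in 6 you expect to win roughly five times your stake per spin, which is how a bankroll grows to millions in a hurry.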

(Thanks to Steve Schwartz for the link!)

UbiComp going mainstream?

Man, I can think of all sorts of mischief I could get into with one of these things…

MyAy

From Personal Tech Pipeline (and thanks to Thad for the link):

Your favorite rodent has learned that Siemens is working on an all-purpose gadget that simply pays attention to what’s happening nearby, and notifies you by SMS when something is strange.

Called the MyAy, the experimental device has a keypad but no display. It monitors its environment with a microphone, an infrared sensor, a temperature sensor, and an acceleration sensor (to tell if the MyAy itself is being moved).
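
Siemens hasn’t published how the MyAy decides something is “strange,” but the basic pattern is easy to sketch: keep a rolling baseline for each sensor and fire an alert when a reading is a statistical outlier. The logic below is purely illustrative, not Siemens’ actual algorithm:

```python
# An illustrative sketch (not Siemens' actual logic) of the MyAy idea:
# track a rolling baseline per sensor and raise an SMS-style alert when
# a new reading departs sharply from that baseline.
from collections import deque

class AnomalyWatcher:
    def __init__(self, window=30, threshold=3.0):
        self.history = {}           # sensor name -> recent readings
        self.window = window
        self.threshold = threshold  # alert when |z-score| exceeds this

    def update(self, sensor, value):
        hist = self.history.setdefault(sensor, deque(maxlen=self.window))
        alert = None
        if len(hist) >= 5:  # need a few readings before judging "strange"
            mean = sum(hist) / len(hist)
            var = sum((v - mean) ** 2 for v in hist) / len(hist)
            std = var ** 0.5 or 1e-9  # avoid divide-by-zero on flat data
            if abs(value - mean) / std > self.threshold:
                alert = f"SMS: {sensor} reading {value} is unusual"
        hist.append(value)
        return alert

w = AnomalyWatcher()
for t in range(20):
    w.update("temperature_C", 21.0 + 0.1 * (t % 3))  # quiet baseline
print(w.update("temperature_C", 35.0))  # sudden jump triggers the alert
```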

Quick one-handed keyboards survey…

Yesterday I did a quick scan of the one-handed keyboards that are available, and figured I’d post a summary:

Twiddler

  • Type: 16-button chording, straps to hand
  • Price: $219
  • Interfaces: USB, PS/2
  • Words Per Minute (avg): 10 after an hour of practice, 30 after 10 hours; top speed in the high 60s
  • Studies: Three by Kent Lyons at Georgia Tech (novices, experts and learning aids)
  • Notes: I like the Twiddler, though I don’t have much experience with the other one-handed keyboards. Its biggest win is that I can touch-type on it (unlike any of the predictive-text systems like T-9 on a cellphone keypad), it has a good top speed, and it straps to my hand, which makes it especially convenient for mobile typing. The Twiddler-2 improved on the older model by replacing the nigh-unusable mouse with a TrackPoint and acting like a real keyboard instead of requiring a serial interface, but unfortunately it removed one of the thumb keys, it requires Win98 to remap keys in batch, and you can’t remap all the thumb keys anymore. Personally I like my Twiddler-1 better: I miss being able to do things like map “NUM + ALT + any key” to an arrow key in the appropriate direction.

Half-QWERTY

  • Type: literally half a QWERTY keyboard where you hold down a modifier key to type the “mirror-side” keys
  • Price: $295
  • Interfaces: USB, PS/2
  • Words Per Minute (avg): 24–43 wpm after 10 hours of practice, top speed around 60 wpm
  • Studies: Three by Edgar Matias (Transfer from QWERTY, CHI’94, CHI’96)
  • Notes: Never used it myself, though it looks like you can get good speed out of it and it’s quick to learn if you already know QWERTY. Edgar also sells a wearable version that straps to your arm, though unlike the Twiddler that means your other arm is also tied up when you type.
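
The mirroring scheme is easy to sketch in code: with the modifier held, each key produces the letter at the mirror-image position in its row. The pairings below just follow that description; I haven’t checked them against the actual product:

```python
# A sketch of the Half-QWERTY mirroring idea: with the modifier held, each
# key yields its mirror-image counterpart within the same keyboard row.
# The derived pairings are my reading of the scheme, not the product manual's.
ROWS = ["qwertyuiop", "asdfghjkl;", "zxcvbnm,./"]

MIRROR = {}
for row in ROWS:
    for i, ch in enumerate(row):
        MIRROR[ch] = row[-(i + 1)]  # mirror position within the row

def half_qwerty(keystrokes):
    """keystrokes: list of (key, modifier_held) pairs -> typed text."""
    return "".join(MIRROR[k] if mod else k for k, mod in keystrokes)

# Typing "jam" entirely with the left hand: 'f' mirrors to 'j', 'v' to 'm'.
print(half_qwerty([("f", True), ("a", False), ("v", True)]))  # -> "jam"
```

The appeal is obvious: your existing QWERTY muscle memory transfers almost directly, since each finger keeps its familiar motion and only the hand changes.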

FrogPad

  • Type: Similar to Half-QWERTY, but with common letters mapped to the home row.
  • Price: $100 to $196 depending on type
  • Interfaces: USB or Bluetooth
  • Words Per Minute (avg): Sales lit claims 40 wpm after 10 hours practice
  • Studies: Their webpage says studies were conducted at Rice University, but I haven’t found the links yet.

CyKey

  • Type: 9-button chording based on the Microwriter Agenda’s chord system
  • Price: £57 – £90 depending on interface
  • Interfaces: Palm IR (IrDA half-duplex) or USB
  • Words Per Minute: Sales lit claims 25-50 wpm
  • Notes: MegaSharp has a “wearability kit” that attaches your PDA and CyKey to your belt, but based on the picture I wouldn’t want to use it unless I was standing still. I also see that Computer Shopper in the UK dinged the CyKey, not for the typing method so much as the fact that the IR is incompatible with a lot of Palm devices. Caveat emptor.

Others

And of course there’s the plethora of cellphone / PDA keyboards, like the one-thumbed “chiclet keyboards” on the Treo-600/650 and Blackberry, or using Multitap or T-9 on a standard 12-button cellphone keypad. I’m not a big fan of Multitap or predictive systems like T-9, but I’ve liked the Treo keyboard even for one-handed typing. I expect I’d have more trouble using it eyes-free than I do with the Twiddler, but then again I don’t have years of experience using the Treo to type SMSs under the table when the teacher isn’t looking either…
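
For comparison with the chording and mirroring schemes above, Multitap is trivial to model: each press of a key cycles through its letters, and a pause commits the current letter. A minimal decoder, assuming the standard 12-button letter layout:

```python
# Multitap decoding on a standard 12-button phone keypad (ITU E.161 letter
# layout): pressing a key repeatedly cycles through its letters, and a pause
# (written here as a space) commits the letter and starts the next one.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def multitap(presses):
    """Decode a space-separated press sequence, e.g. '44 33 555 555 666'."""
    out = []
    for group in presses.split():
        letters = KEYPAD[group[0]]
        out.append(letters[(len(group) - 1) % len(letters)])
    return "".join(out)

print(multitap("44 33 555 555 666"))  # -> "hello"
```

Five letters, eleven presses: the overhead is exactly why Multitap loses to everything else in this survey once you’re past casual use.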

A couple non-commercial things of interest:

The Data Egg was an integrated PDA & five-button chording keyboard designed and prototyped back in the early ’90s, but it got black-holed after the inventor lost control of his IP. Never tried one myself, but I’ve always liked the idea as a sort of chording-keyboard sleeve over a PDA.

Something else I like the look of is Chordite, which interests me mostly because of its unique hand-fit. Prototype only, researcher claims about 33 wpm.

Remapping human sensation

There’s a good article in today’s NYTimes about Dr. Paul Bach-y-Rita’s work in remapping human sensation — allowing the blind to “see” via tactile feedback on the tongue for example. Sounds like there have been some breakthroughs recently in terms of miniaturization and wearability (no surprise there), plus some good results in allowing people with damaged vestibular systems to regain normal balance unaided.

Trends in Wearables

Thinking back on last week’s ISWC & ISMAR, I think there are three especially ripe areas of wearables research in the next few years:

  • Fusion of Wearables and Ubicomp: This is an area I’ve thought was ripe for a while, but apart from location-beacons and markers for AR (Augmented Reality) there’s surprisingly little research that combines Ubiquitous Computing and Wearables. There are exceptions, like Georgia Tech’s work with the Aware Home and some work in adaptive “universal remote controls” for the disabled, but it feels like there should be some good work to be done combining the localization of Ubicomp with the personalization of Wearables. It also nicely fits with Buxton’s argument that the key design work to be done is in the seamless and transparent transitions between different context-specific interfaces.

  • Social Network Computation, Visualization & Augmentation: This research has been going on for a while, especially at the University of Oregon and more recently at the MIT Media Lab, but it seems to be getting traction lately. This sort of research looks at what can be done with multiple networked wearables users in a community. Typical applications include automatic match-making (along the lines of the Love Getty that was the craze in Japan several years ago), keeping a log of chance business meetings at conferences and trade shows, understanding social dynamics of a group like whether one person dominates the conversations, and real-time visualization of those social dynamics.

  • AugCog / Wearable Brain-Scanning: As I mentioned in a previous post, this is potentially a big breakthrough. I don’t mean in the sense that it solves a problem the wearable field has been struggling with, but rather that this could open a whole new branch of research. Neuroscience has taken off in the past 10 years with advances in brain-imaging technology like functional MRI. The downside is that you can only see what the brain is doing when performing tasks inside a lab setting — it’s studying the brain in captivity. Wearable sensors give us the ability to study the brain in the wild, and to correlate that brain activity with other wearable sensors. That plus the lower price should enable all sorts of new research into understanding how we use our brains in our everyday lives. That, in turn, will hopefully lead to new ways to augment our thinking processes, whether by modifying our interfaces to match our cognitive load, providing bio-feedback to help treat conditions like ADHD or perhaps addiction, or even physically stimulating the brain to treat conditions like Parkinson’s.

    That’s not to say there aren’t broad and potentially frightening aspects to this technology, but the issue that concerns me most applies generally to our recent understanding of the brain: I don’t think our society is prepared yet to deal with the coming neuroscience revolution. Our justice system, religion and even our system of government are based on the worn-out Cartesian idea that our minds are somehow distinct from the wetware of our brains and bodies. It’s been clear for decades that that assumption is false, but so far we’ve tried to ignore that fact in spite of warnings from science fiction and emerging policy debates about mental illness, psychoactive medication, addiction as illness and the occasional the-twinkies-made-me-do-it defense. The applications envisioned by AugCog are going to force the issue further, and societies don’t make a shift like that without serious growing pains.

AugCog

One of the most exciting talks for me was the joint ISWC/ISMAR keynote by Dr. Dylan Schmorrow, one of the program managers for DARPA. The program managers are the guys who decide what research projects DARPA should fund — the best-known PM was probably JCR Licklider, who funded the Intelligence Augmentation research that led to the invention of the Internet, the mouse, the first(?) hypertext system, etc. The current program Dylan talked about was Augmented Cognition, which I’m now convinced could become the biggest breakthrough in wearable computing yet.

Intelligence Augmentation tried to support human mental tasks, especially engineering tasks, by interacting with a computer through models of the data you’re working with — that was really the start of the shift from the mainframe batch-processing model to the interactive computer model. AugCog is about supporting cognitive-level tasks like attention, memory, learning, comprehension, visualization abilities and basic decision making by directly measuring a person’s mental state.

The latest technology to come out of this effort is a sensor about the size of your hand with several near-infrared LEDs on it in the shape of a daisy, with a light sensor in the center. The human skull is transparent to near-IR (that’s how you get rid of all the heat your brain produces), so when it’s placed on the scalp you can detect back-scatter from the surface of the brain. By doing signal processing on the returned light you can detect blood-flow and thus brain activity, up to about 5cm deep (basically the cortex).

They’ve already got some promising data on detecting understanding — one of the things DARPA is especially interested in is being able to tell a soldier “Do this, then that, then the other thing… got that?” And even if he says “Yup” his helmet can say “no, he didn’t really get it….”

Outside of military apps (and getting a little pie-in-the-sky), sometime down the road I can imagine using this kind of data to build interfaces that adapt to your cognitive load in near real-time, adjusting information displayed and output modalities to suit. In the more near-term, these devices are starting to be sold commercially and cost on the order of thousands of dollars, not tens or hundreds of thousands. That means a lot more brain-imaging science can be performed by a lot more diverse groups.
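
The signal-processing step Dylan described is essentially the modified Beer-Lambert law used in functional near-infrared spectroscopy: changes in detected light intensity at two wavelengths are converted into changes in oxy- and deoxy-hemoglobin concentration. Here’s a sketch of the math; the wavelengths, extinction coefficients, and path length below are placeholder values for illustration only:

```python
# A sketch of the fNIRS signal-processing step: the modified Beer-Lambert
# law converts intensity changes at two near-IR wavelengths into changes in
# oxy- (HbO2) and deoxy-hemoglobin (HbR) concentration. All numeric values
# here are illustrative placeholders, not real calibration constants.
import math

def delta_od(i_baseline, i_now):
    """Change in optical density from a change in detected intensity."""
    return math.log10(i_baseline / i_now)

def hemoglobin_changes(dod_wl1, dod_wl2, eps, path_cm):
    """Solve the 2x2 system  dOD_w = (e_HbO2_w*dHbO2 + e_HbR_w*dHbR) * L
    for two wavelengths w. eps = ((e_HbO2_1, e_HbR_1), (e_HbO2_2, e_HbR_2))."""
    (a, b), (c, d) = eps
    det = (a * d - b * c) * path_cm
    d_hbo2 = (d * dod_wl1 - b * dod_wl2) / det
    d_hbr = (a * dod_wl2 - c * dod_wl1) / det
    return d_hbo2, d_hbr

# Placeholder extinction coefficients for ~690 nm and ~830 nm:
EPS = ((0.35, 2.10), (1.05, 0.78))
dod1 = delta_od(1.00, 0.97)  # slightly less light returned at 690 nm
dod2 = delta_od(1.00, 0.95)  # and at 830 nm
print(hemoglobin_changes(dod1, dod2, EPS, path_cm=3.0))
```

Two wavelengths are needed because HbO2 and HbR absorb differently on either side of the ~800 nm isosbestic point, which is what makes the two-equation system solvable.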

For more info check out www.augmentedcognition.org, or go to the Augmented Cognition conference being held as a part of HCI-International in Las Vegas July 22-27, 2005.

Buxton at ISWC: it’s the transitions, stupid!

[I’ve been trip-blogging this past week but haven’t had convenient net access, so I’m afraid the real-time aspects of blogging are lacking… now that I’m hooked into the wireless at DEAF04 here’s some of my backlog.]

Bill Buxton’s ISWC keynote made a lot of points, but the one that struck me most was derived from three basic laws:

  1. Moore’s Law: the number of transistors that can fit in a given area will double approximately every 18 months.
  2. God’s Law (aka the complexity barrier): the number of brain cells we have to work with remains constant.
  3. Buxton’s Law: technology designers will continue to promise functionality proportional to Moore’s Law.

The problem then is how to deliver more functionality without making the interface so unwieldy as to be completely unusable. Buxton went on to talk about the trade-off between generality and ease-of-use: the more specifically-designed an interface the easier it is to use but the more limited its scope.

The key, he argues, is to make lots of specific applications with interfaces well-suited for their particular niche. Then you don’t need a single general interface, but instead can concentrate on the seamlessness and transparency of transitions between interfaces.

It’s a nice way of thinking about things, especially when thinking about the combination of wearables and ubicomp (see next post).

Where are the new innovations in AR?

As in previous years, the big theme here at ISMAR (the International Symposium on Mixed and Augmented Reality) seems to be registration and tracking: how to detect where objects and people are in the physical world so you can overlay graphics as accurately as possible. AR isn’t my main field, but I’ve had a couple of conversations so far about how we’re really reaching a point of diminishing returns. It’s great that we’re seeing minor incremental improvements in this area, but what we’re really lacking are new, innovative uses of AR to push the field further. Unfortunately, it sounds like a lot of these new innovations didn’t make the cut for the conference, at least in part because they lacked strong evaluation or a quantifiable contribution to the field; it’s much easier to judge the quality of a new camera-based image-registration method than it is to judge the usefulness of a brand-new application.

The Software Agents field was a response to a similar stagnation in Artificial Intelligence. AI researchers had a lot of good but imperfect tools that had been developed over the years, but kept trying to solve the really hard general problems. Software Agents grew out of the idea that it was OK if your algorithm wasn’t perfect in every condition so long as you cleverly constrained your application domain and designed your user interface to cover for those imperfections. It was a struggle to get acceptance of the idea at first, and in the end a few of the big players in the new domain went and founded their own conference rather than try to fit their own work to the evaluation metrics used for more traditional AI papers. Hopefully it won’t take such a dramatic move on the part of AR researchers to breathe new life into this field.

Wearable on an iPaq

Every year I think it’ll finally be the year we wearables folk can swap out our custom hardware for an off-the-shelf palmtop with a head-mounted display and one-handed keyboard connected to it, and every year it’s just not quite there. Looks like we’re finally getting there: Kent Lyons from Georgia Tech has now swapped out his CharmIT-PRO for an iPaq.

It’s still not quite plug-and-play: he had to hack the original Twiddler-1 (the serial-port one, not the current PS/2 version) with a different power connector, and the CF-IO card he’s using to connect the iPaq to his Microoptical display has fairly limited bandwidth, so he had to hack his X server to blit out just the changes to the active window. Oh yeah, and he wrote a new Twiddler driver for the iPaq.

He’s promised to put up a how-to guide on the Web soon — I plan to keep bugging him till he does :).



