Wearable Computing

SkyScout

skyscout.gif

One of the gadgets announced at CES last week was Celestron’s SkyScout, a hand-held viewfinder that identifies the stars being viewed, using GPS plus a compass and accelerometer to determine your location and where in the sky you’re looking. Cute concept — assuming they did a good job on the implementation, it’s a nice example of hand-held augmented reality that avoids most of the usual difficulties: the environment being tagged (the night sky) is extremely well-modeled and predictable; the user tends to be looking in one place rather than walking around or moving his viewfinder; it’s always used outdoors with a good view of the sky, so GPS always works; and it’s night, so you don’t have to worry about the sun washing out the display. (It also uses both text and audio, so presumably you can avoid having the display wash out your night vision.)
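The underlying lookup is classic spherical astronomy. As a rough sketch (not Celestron’s actual implementation — the four-star catalog, coordinate conventions and sensor handling below are all simplifying assumptions of mine), the device’s job reduces to converting a pointing direction (altitude/azimuth from the accelerometer and compass) plus location and time into celestial coordinates, then finding the nearest catalog star:

```python
import math

# Tiny illustrative catalog: (name, RA in degrees, Dec in degrees), J2000.
# A real device would carry thousands of stars and deep-sky objects.
CATALOG = [
    ("Sirius", 101.29, -16.72),
    ("Vega", 279.23, 38.78),
    ("Betelgeuse", 88.79, 7.41),
    ("Polaris", 37.95, 89.26),
]

def altaz_to_radec(alt_deg, az_deg, lat_deg, lst_deg):
    """Convert a pointing direction to (RA, Dec), both in degrees.

    az is measured from north through east; lst is the local sidereal
    time expressed in degrees (15 degrees per sidereal hour).
    """
    alt, az, lat = map(math.radians, (alt_deg, az_deg, lat_deg))
    sin_dec = math.sin(alt) * math.sin(lat) + math.cos(alt) * math.cos(lat) * math.cos(az)
    dec = math.asin(max(-1.0, min(1.0, sin_dec)))
    cos_h = (math.sin(alt) - math.sin(lat) * math.sin(dec)) / (math.cos(lat) * math.cos(dec))
    h = math.degrees(math.acos(max(-1.0, min(1.0, cos_h))))  # hour angle, degrees
    if math.sin(az) > 0:  # pointing east of the meridian: object is rising
        h = -h
    ra = (lst_deg - h) % 360.0
    return ra, math.degrees(dec)

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation between two sky positions, in degrees."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = math.sin(d1) * math.sin(d2) + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def identify(alt_deg, az_deg, lat_deg, lst_deg):
    """Return the catalog star closest to the pointed direction."""
    ra, dec = altaz_to_radec(alt_deg, az_deg, lat_deg, lst_deg)
    return min(CATALOG, key=lambda s: angular_separation(ra, dec, s[1], s[2]))

# Pointing straight up from latitude 38.78 N when the local sidereal time
# equals Vega's right ascension should find Vega overhead.
print(identify(90.0, 0.0, 38.78, 279.23)[0])  # -> Vega
```

The hard parts the sketch glosses over are exactly where such a product succeeds or fails: compass calibration, magnetic declination, and sensor noise relative to the angular spacing between bright stars.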

(Link via B.K. DeLong.)

Using context to suggest recipients for a photo

Marc Davis and others working at UC Berkeley’s Garage Cinema Research group have some interesting work on using a person’s context when taking a photo with a cellphone (specifically time, location and people who are around) to predict who that photo is likely to be sent to [paper, video]. They’re using that prediction to offer a “one-click” list of people with whom to share a photo that’s just been taken, and report that 70% of the time the correct sharing recipients are within the top 7 people listed. In their study, they found that time was the best predictor of who a likely recipient would be, even beating out what other people were around (determined by detecting other cellphones in the area via Bluetooth).
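The mechanics of such a suggestion list are easy to sketch. The toy scorer below is my own illustrative simplification, not the Berkeley group’s actual model (their paper uses learned models over richer features); it ranks past recipients by how often they received photos in a similar context, with made-up weights that favor time, matching their finding that time was the strongest predictor:

```python
from collections import defaultdict

# Sharing history: each entry is (hour_of_day, location, nearby_people, recipient).
HISTORY = [
    (12, "cafe", {"alice"}, "alice"),
    (12, "cafe", {"alice"}, "mom"),
    (19, "home", set(), "mom"),
    (19, "home", set(), "mom"),
    (9,  "office", {"bob"}, "bob"),
]

# Feature weights -- time dominates, echoing the study's finding that time
# was the best single predictor. (The weights themselves are invented.)
W_TIME, W_PLACE, W_PEOPLE = 3.0, 1.0, 1.0

def suggest(hour, location, nearby, top_n=7):
    """Rank past recipients by contextual similarity to the current photo."""
    scores = defaultdict(float)
    for h, loc, people, recipient in HISTORY:
        score = 0.0
        if abs(h - hour) <= 1:   # same rough time of day
            score += W_TIME
        if loc == location:      # same place
            score += W_PLACE
        if people & nearby:      # shares at least one co-present person
            score += W_PEOPLE
        scores[recipient] += score
    ranked = [r for r in sorted(scores, key=scores.get, reverse=True) if scores[r] > 0]
    return ranked[:top_n]

print(suggest(19, "home", set()))  # -> ['mom']
```

The real system’s “70% within the top 7” number comes from evaluating a learned version of this kind of ranking against actual sharing behavior, not from hand-tuned weights like these.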

It’s interesting to compare this to my own work [paper] using the Remembrance Agent on a wearable computer, where I found relatively little benefit in using either location or people in the area to suggest notes I had taken in previous conversations that might be useful in the new situation. It’s clear that the application and the user’s lifestyle make a huge difference. All my notes were taken when I was a grad student, so over a third of my notes were taken in one of just three locations: my office, the room just outside my office and the main classroom at the Media Lab. That’s too clumped to help distinguish among the wide variety of topics I’d talk about in those locations. On the other hand, people in the area had the reverse problem: since I’d be giving demos and talks all the time, over a third of the people I was with when taking notes showed up only once. The “people who are around” feature was too sparse to be helpful. (I never did test time-of-day or day-of-week as feature vectors, because I dropped that feature from the RA when I wrote version 2, but I suspect it would have the same problem location does.)
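Both failure modes can be made concrete with a toy information-gain calculation (my own illustration, not from either paper). A clumped feature barely reduces uncertainty about the label; a sparse feature appears highly informative on the training data, but only because it memorizes individual notes and can’t generalize to a new value:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy in bits of a list of labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """How many bits knowing the feature saves us about the label."""
    n = len(labels)
    groups = {}
    for f, y in zip(feature_values, labels):
        groups.setdefault(f, []).append(y)
    conditional = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - conditional

# Eight notes on four topics (invented toy data).
topics   = ["ra", "agents", "demos", "thesis", "ra", "agents", "demos", "thesis"]

# "Clumped" location: almost everything happens in the office, so location
# says little about the topic.
location = ["office", "office", "office", "office", "office", "office", "lab", "lab"]

# "Sparse" people: nearly every value appears only once, so the feature
# memorizes the training notes but can't help with a new visitor.
people   = ["p1", "p2", "p3", "p4", "p5", "p6", "p7", "p7"]

print(round(information_gain(location, topics), 2))  # -> 0.31 (clumped: low gain)
print(round(information_gain(people, topics), 2))    # -> 1.75 (sparse: inflated gain)
```

The sparse feature’s high in-sample gain is exactly the trap: it looks predictive until the next note arrives with a person the system has never seen before.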

Demo of xMax 1000 times more efficient than WiMax

Wow. Techworld is reporting on a demonstration of wireless communications sent at 3.7Mbit/s over a radius of 18 miles using just 50mW and an omnidirectional antenna, via a technology called xMax, developed by xG Technology. If this is for real, that’s on the order of 1000 times more efficient than GSM, CDMA or WiMax. The company plans to target long-range wireless, but Princeton EE professor Stuart Schwartz says he has also seen it demonstrated as a personal-area network, delivering 2Mbit/s over 40 feet using just 3 nanowatts.
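A back-of-the-envelope link budget shows why the claim is so startling. The sketch below assumes ideal free-space propagation, a carrier around 900 MHz (my assumption — reports put xG’s demos in the unlicensed 900 MHz band, but the article doesn’t pin this down), no antenna gains, and a signal needing 3.7 MHz of bandwidth to carry 3.7Mbit/s at 1 bit/s/Hz:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (Friis formula)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / 3e8)

def dbm(milliwatts):
    """Convert a power in milliwatts to dBm."""
    return 10 * math.log10(milliwatts)

distance = 18 * 1609.34   # 18 miles in meters
freq = 900e6              # assumed ~900 MHz carrier
tx_dbm = dbm(50)          # 50 mW transmitter -> ~17 dBm

# Received power, ignoring antenna gains, terrain and fading.
rx_dbm = tx_dbm - fspl_db(distance, freq)

# Thermal noise floor: -174 dBm/Hz plus 10*log10(bandwidth in Hz).
noise_dbm = -174 + 10 * math.log10(3.7e6)

print(round(rx_dbm, 1), round(noise_dbm, 1), round(rx_dbm - noise_dbm, 1))
# -> -103.8 -108.3 4.5
```

Even under these ideal assumptions the link closes with only about 4.5 dB of margin — and real terrain over 18 miles is far worse than free space — so either the demo involved substantial antenna gain and line of sight, or the modulation is doing something genuinely unusual.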

If this is all true then it’s revolutionary. To his great credit, Techworld reporter Peter Judge has a full companion article laying out the several places where reporters have to take the company at its word about the technology and the honesty of the demo, as well as remaining potential hurdles such as preemptive regulation and the possibility of reflections or interference once other transmitters start using the same system. But we’ll know soon enough whether it’s more than just snake oil, and if so it’s going to be darned impressive.

(Thanks to Kurt for the link.)

ISWC 2005 Fashion Show Pictures

iswc05-led-umbrella.jpg

I just posted pictures from the wearable-technology fashion show that was part of the ISWC 2005 program, sponsored by the KANSAI IT Synergistic Society. This was the third ISWC to include such a show, the first being Beauty and the Bits hosted by the MIT Media Lab at the first ISWC, and the second hosted by Komposite at ISWC 2002 in Seattle.

A few practical-application garments were shown, but most entries leaned towards the fashion end, with dance, music and LEDs playing prominent roles. My apologies for the quality of some of the pictures — my little hand-held camera doesn’t work well in low light.

ISWC Best Paper winner

activity-recognition-rfid-glove-iswc05.jpg

The winner of this year’s best paper award at ISWC (the first ISWC to have such an award) was a paper by Don Patterson from the University of Washington called Fine-Grained Activity Recognition by Aggregating Abstract Object Usage. All the authors got certificates and Don took home a new video iPod as the prize.

This was one of several papers presented that used an RFID reader in a glove, in this case to classify what kind of activity a person is conducting based on the sequence of objects she has touched. This would be useful, for example, for alerting a care worker if a resident of an assisted-living home had stopped eating.

From the abstract:

In this paper we present results related to achieving fine-grained activity recognition for context-aware computing applications. We examine the advantages and challenges of reasoning with globally unique object instances detected by an RFID glove. We present a sequence of increasingly powerful probabilistic graphical models for activity recognition. We show the advantages of adding additional complexity and conclude with a model that can reason tractably about aggregated object instances and gracefully generalizes from object instances to their classes by using abstraction smoothing. We apply these models to data collected from a morning household routine.
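To give a feel for the basic idea (and only that — the paper’s actual models are far more powerful probabilistic graphical models with abstraction smoothing, not this), here’s a minimal naive-Bayes classifier over touched-object counts, with toy data I made up:

```python
from collections import Counter
import math

# Training data: activity -> example object-touch sequences, as an RFID
# glove might report them. (Invented toy data, not the paper's dataset.)
TRAINING = {
    "making_tea":     [["kettle", "cup", "teabag", "kettle", "spoon"],
                       ["cup", "kettle", "teabag", "spoon"]],
    "making_cereal":  [["bowl", "cereal_box", "milk", "spoon"],
                       ["cereal_box", "bowl", "spoon", "milk"]],
    "brushing_teeth": [["toothbrush", "toothpaste", "faucet"],
                       ["toothpaste", "toothbrush", "faucet", "cup"]],
}

def train(data, alpha=1.0):
    """Per-activity smoothed object-usage frequencies (naive Bayes)."""
    vocab = {obj for seqs in data.values() for seq in seqs for obj in seq}
    models = {}
    for activity, seqs in data.items():
        counts = Counter(obj for seq in seqs for obj in seq)
        total = sum(counts.values()) + alpha * len(vocab)
        models[activity] = {obj: (counts[obj] + alpha) / total for obj in vocab}
    return models

def classify(models, touched):
    """Most likely activity for a sequence of touched objects."""
    def log_lik(activity):
        probs = models[activity]
        # Objects outside the training vocabulary get a small floor probability.
        return sum(math.log(probs.get(obj, 1e-6)) for obj in touched)
    return max(models, key=log_lik)

models = train(TRAINING)
print(classify(models, ["kettle", "teabag", "cup"]))  # -> making_tea
```

Note what this simplification throws away: object *order*, object *instances* versus *classes* (whose cup?), and graceful generalization to unseen objects — which are precisely the contributions the paper makes on top of a bag-of-objects baseline like this one.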

Here are all six nominees for best paper from ISWC’05, which were the top 10% of full papers based on reviewer ratings:

ISWC 2006 in Montreux, Switzerland

It’s decided: next year’s International Symposium on Wearable Computing will be in Montreux, Switzerland on October 11th–13th, with workshops and tutorials after the main conference on October 14th. This’ll be co-located with UIST, which holds its doctoral symposium on the 15th and main conference October 16th–18th.

The conference, by the way, will be held in Casino Montreux. I wonder if we can get back to our roots and try out some roulette-wheel predicting wearables? 😉