Using context to suggest recipients for a photo

Marc Davis and others working at UC Berkeley’s Garage Cinema Research group have done some interesting work on using a person’s context when taking a photo with a cellphone (specifically time, location, and the people who are around) to predict who that photo is likely to be sent to [paper, video]. They’re using that prediction to offer a “one-click” list of people with whom to share a photo that’s just been taken, and report that 70% of the time the correct sharing recipients are within the top 7 people listed. In their study, they found that time was the best predictor of who a likely recipient would be, even beating out which other people were around (determined by detecting other cellphones in the area via Bluetooth).
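To make the general approach concrete, here’s a minimal sketch in Python of one way such a ranker could work; the feature names, weights, and similarity measures are my own hypothetical stand-ins, not Davis et al.’s actual model. The idea is to score each past recipient by how similar the contexts of the photos they previously received are to the current capture context, weighting time most heavily since that was the strongest predictor in their study.

```python
from collections import defaultdict

# Hypothetical feature weights; time gets the largest weight since
# Davis et al. found it the strongest predictor.
WEIGHTS = {"time": 0.5, "location": 0.3, "people": 0.2}

def context_similarity(ctx_a, ctx_b):
    """Crude similarity between two capture contexts, in [0, 1]."""
    # Time: closeness of hour-of-day on a 24-hour circle.
    dt = abs(ctx_a["hour"] - ctx_b["hour"])
    time_sim = 1.0 - min(dt, 24 - dt) / 12.0
    # Location: exact match on a coarse place identifier.
    loc_sim = 1.0 if ctx_a["location"] == ctx_b["location"] else 0.0
    # People: Jaccard overlap of nearby Bluetooth device IDs.
    a, b = ctx_a["nearby"], ctx_b["nearby"]
    people_sim = len(a & b) / len(a | b) if (a | b) else 0.0
    return (WEIGHTS["time"] * time_sim
            + WEIGHTS["location"] * loc_sim
            + WEIGHTS["people"] * people_sim)

def suggest_recipients(history, current_ctx, k=7):
    """Rank past recipients by the summed similarity of the contexts
    in which they previously received photos; return the top k."""
    scores = defaultdict(float)
    for past_ctx, recipients in history:
        sim = context_similarity(past_ctx, current_ctx)
        for person in recipients:
            scores[person] += sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy example: two past photos, then a new one at a similar time/place.
history = [
    ({"hour": 18, "location": "cafe", "nearby": {"bt:alice"}}, ["alice"]),
    ({"hour": 9,  "location": "lab",  "nearby": {"bt:bob"}},   ["bob"]),
]
now = {"hour": 19, "location": "cafe", "nearby": set()}
print(suggest_recipients(history, now))  # ['alice', 'bob']
```

A nearest-context scorer like this is only one of several ways to use those features; the point is just that the phone can turn passively collected context into a short, ranked “share with” list at capture time.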

It’s interesting to compare this to my own work [paper] using the Remembrance Agent on a wearable computer, where I found relatively little benefit in using either location or the people in the area to suggest notes I had taken in previous conversations that might be useful in the new situation. Clearly the application and the user’s lifestyle make a huge difference. All my notes were taken when I was a grad student, so over a third of them were taken in just three locations: my office, the room just outside my office, and the main classroom at the Media Lab. That’s too clumped to help distinguish among the wide variety of topics I’d talk about in those locations. People in the area had the reverse problem: since I was giving demos and talks all the time, over a third of the people I was with when taking notes showed up only once. The “people who are around” feature was too sparse to be helpful. (I never did test time-of-day or day-of-week as features, because I dropped that feature from the RA when I wrote version 2, but I suspect it would have the same problem location does.)
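As a rough way to see why those two features failed, here’s a small Python sketch (with toy numbers standing in for my actual note corpus) that summarizes how “clumped” or “sparse” a context feature’s values are. A feature where a few values dominate, or where most values occur exactly once, carries little information for telling situations apart.

```python
import math
from collections import Counter

def feature_report(values):
    """Summarize how discriminative a context feature is likely to be.
    High top-3 share = clumped; high singleton share = sparse; both
    make the feature a poor retrieval cue."""
    counts = Counter(values)
    n = len(values)
    top3 = sum(c for _, c in counts.most_common(3)) / n
    singletons = sum(1 for c in counts.values() if c == 1) / n
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"top-3 share": round(top3, 2),
            "singleton share": round(singletons, 2),
            "entropy (bits)": round(entropy, 2)}

# Toy data echoing the problem: locations clump into a few rooms,
# while most nearby people appear exactly once.
locations = (["office"] * 4 + ["hallway"] * 3 + ["classroom"] * 3
             + ["cafe", "gym", "library", "car", "home"])
people = (["ta"] * 3 + ["advisor"] * 2
          + [f"visitor{i}" for i in range(10)])

print("location:", feature_report(locations))
print("people:  ", feature_report(people))
```

On the toy data, the three most common locations cover about two-thirds of the notes (clumped), while two-thirds of the people values are one-offs (sparse), which is the same shape of problem I saw with the real corpus.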