You may have heard that about a month ago scientists at Berkeley and UW announced that they had “discovered” a new never-before-seen color, which they call olo. (Here’s a quick video overview from their paper.)
Before I get into what they did, here’s a quick refresher on how we see color. White light is made up of a broad spectrum of wavelengths, but our eyes have only three types of color detectors (called cones), each sensitive to its own overlapping range of wavelengths. What we see as color is the relative response of just those three types of cones at a given point: light around 575 nm triggers the L cones the most, the M cones a little less, and the S cones hardly at all, which we perceive as yellow. But you can get the exact same response from a mixture of light around 549 nm and 612 nm (green and red), which is how RGB monitors get away with displaying practically any color with just three color subpixels. It’s also why we can perceive magenta, a color that isn’t in the rainbow at all but results from a combination of blue and red light (that is, a high response from the S and L cones but a low response from the M cones).

Notice that it’s possible to trigger just the S cones with short-wavelength light and just the L cones with long-wavelength light, but because of the overlap there’s normally no way to trigger just the M cones without also triggering the L or S cones as well.
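To make the cone arithmetic concrete, here’s a toy sketch in Python. The Gaussian curves, peak wavelengths, and shared width are rough stand-ins I picked for illustration (real cone sensitivities are asymmetric, empirically tabulated curves), but they’re enough to show how a single wavelength and a two-wavelength mixture can produce broadly similar cone-response triplets:

```python
import math

# Toy Gaussian stand-ins for the three cone sensitivity curves.
# Peak wavelengths and the shared width are illustrative assumptions;
# real cone sensitivities are asymmetric, empirically measured curves.
PEAKS = {"S": 445.0, "M": 535.0, "L": 565.0}  # nm (assumed)
WIDTH = 50.0  # nm (assumed)

def cone_response(wavelength_nm):
    """Relative S/M/L response to a single wavelength."""
    return {name: math.exp(-((wavelength_nm - peak) ** 2) / (2 * WIDTH ** 2))
            for name, peak in PEAKS.items()}

def mixture_response(wavelengths):
    """Cone responses to a mixture of lights simply add, channel by channel."""
    total = {"S": 0.0, "M": 0.0, "L": 0.0}
    for wl in wavelengths:
        for name, r in cone_response(wl).items():
            total[name] += r
    return total

def normalized(resp):
    """Scale a response triplet so its components sum to 1."""
    s = sum(resp.values())
    return {name: r / s for name, r in resp.items()}

yellow = normalized(cone_response(575))         # one "yellow" wavelength
mix = normalized(mixture_response([549, 612]))  # green + red mixture

# Both give the same ordering (L most, M a little less, S hardly any),
# and even with these crude curves the relative responses come out close.
```

With real cone fundamentals the 549 + 612 nm mixture can be tuned to match 575 nm exactly; in this toy model the point is just that two very different spectra land on nearly the same three-number summary.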

The Berkeley and UW team has developed a prototype system, called Oz, that scans a subject’s retina to identify the exact locations of the S, M and L cones and then uses a laser to excite exactly the cones needed to produce a desired color at any given location. In theory such a system could render every possible color, including ones that are impossible to see in nature because they’re outside the range of S/M/L responses you can get with natural light. In practice they estimate that up to two thirds of the light “leaks” over to neighboring cones, but that’s still enough to produce a large range of colors, including ones outside the natural gamut. One such color, the one produced by stimulating only the M cones and no others, they’ve named olo (from 010 — get it?).
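You can see why no ordinary light can do what Oz does with a quick sketch. Using toy Gaussian stand-ins for the cone curves (the peaks and width are my own illustrative assumptions, not real colorimetric data), scan the visible range and ask how isolated the M response can ever get:

```python
import math

# Toy Gaussian stand-ins for cone sensitivities (assumed values,
# not real colorimetric data).
PEAKS = {"S": 445.0, "M": 535.0, "L": 565.0}  # nm
WIDTH = 50.0  # nm

def responses(wl):
    return {name: math.exp(-((wl - peak) ** 2) / (2 * WIDTH ** 2))
            for name, peak in PEAKS.items()}

# For every visible wavelength, compare the M response to S and L combined.
best_ratio, best_wl = max(
    (r["M"] / (r["S"] + r["L"]), wl)
    for wl in range(380, 701)
    for r in [responses(wl)]
)

# Even at the most favorable wavelength (near the M peak), the M cones
# never respond more than the S and L cones combined; in this toy model
# there is simply no light that stimulates M alone.
```

Oz sidesteps that constraint entirely by aiming at individual cones rather than shaping a spectrum.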
So far only five people in the world have seen olo, which they describe as a “blue-green of unprecedented saturation, when viewed relative to a neutral gray background.” But that’s not very satisfying. If you’ve just discovered a new color, naturally the first thing anyone is going to ask is “what does it look like?” — it’s much nicer if you can answer “here, take a look” instead of “come back to my lab and I’ll shoot you in the eye with a laser.”
Luckily, I think I’ve found a way to see olo without any of the complex set-up. The Oz team creates olo by selectively stimulating M cones in a region of the retina, but we should be able to get the same effect by first staring at a magenta color field for 20-30 seconds (which suppresses responses from the S and L cones) and then quickly shifting over to a pure green. The difference is analogous to additive vs subtractive color: Oz works by stimulating only the M cones, while color adaptation involves the suppression of the stimulus color (in this case magenta, the complement of green).
Stare at the center dot for 30 seconds. Then without moving your eyes, move your mouse pointer into the square (or tap on mobile). You should briefly see a super-saturated blue-green image.
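If you want to tinker with the demo yourself, the two frames are easy to generate. Here’s a minimal sketch using NumPy; the square size, fixation dot, and exact colors are my own choices for illustration, not anything prescribed by the Oz paper:

```python
import numpy as np

SIZE = 400  # pixels; arbitrary choice for the sketch

def color_field(rgb, size=SIZE):
    """A solid color square with a small black fixation dot at the center."""
    img = np.full((size, size, 3), rgb, dtype=np.uint8)
    c = size // 2
    img[c - 4:c + 4, c - 4:c + 4] = (0, 0, 0)
    return img

# Frame 1: magenta adapter. Staring at this for ~30 s fatigues the
# S and L cone responses (magenta = high S and L, low M).
adapter = color_field((255, 0, 255))

# Frame 2: pure green target. Viewed immediately after adaptation,
# it should briefly look like an over-saturated blue-green.
target = color_field((0, 255, 0))
```

Display `adapter` for 30 seconds and then swap in `target` without letting your gaze move off the center dot.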
Cute effect, but is it olo — are you and I actually seeing the same color the Oz team sees with their device? The short answer is… maybe? For an effect that’s so readily testable, there’s still surprisingly little consensus about exactly what causes negative after-images, or even exactly which colors one should expect to see. Explanations range from simple cone adaptation to the currently dominant theory that after-images arise from some kind of opponent process between different color responses higher up the processing chain (probably in the retinal ganglion cells).

If something like the cone adaptation model is correct, then I’d definitely expect the two methods to produce the same color, modulo how much leakage there is in Oz vs. how much the S and L cone responses are suppressed in the after-image. But even if the processing is higher up the chain, it wouldn’t surprise me if the effects are essentially the same, because regardless of the underlying mechanism it’s clear that when an after-image is mixed with a real image (e.g. when viewed against a colored background), the result is as if the original stimulus were partially subtracted from the background. That’s why the after-image from magenta appears green on a white background, but greenish-yellow on a yellow background and brownish-red on a red background.
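That subtraction account is easy to sketch numerically. The linear per-channel subtraction and the adaptation strength `k` below are simplifying assumptions of mine, not a claim about the actual physiology, but the predictions line up with the three background cases just described:

```python
def afterimage_on(background, stimulus, k=0.5):
    """Predicted color when the after-image of `stimulus` is viewed
    against `background`: the stimulus is partially subtracted,
    channel by channel. k (adaptation strength) is an arbitrary
    assumed constant; channels are clamped at zero."""
    return tuple(max(0, b - k * s) for b, s in zip(background, stimulus))

MAGENTA = (255, 0, 255)

on_white  = afterimage_on((255, 255, 255), MAGENTA)  # (127.5, 255.0, 127.5): green
on_yellow = afterimage_on((255, 255, 0),   MAGENTA)  # (127.5, 255.0, 0): greenish-yellow
on_red    = afterimage_on((255, 0, 0),     MAGENTA)  # (127.5, 0, 0): dark brownish-red
```

In each case the green channel is untouched (magenta has no green component to subtract) while red and blue are knocked down, which is exactly the white → green, yellow → greenish-yellow, red → brownish-red pattern.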
One way to test whether the two colors really are the same would be to do the same tests the Oz team did but with the after-image + green, using a tunable laser plus white light to match the perceived color. Alternatively one could turn the experiment on its head and use Oz itself to match the perceived color directly, and see how close it gets to their olo.
I plan on reaching out to the Oz team to see what they think, and I’ll update if they write back.