DocBug — Intelligence, media technologies, intellectual property, and the occasional politics

The Cartoonists Club
Fri, 23 May 2025 · https://www.docbug.com/blog/archives/1296

My seven-year-old daughter hasn’t gotten into chapter books yet, even for bedtime stories, but she absolutely loves graphic novels. One of her favorite authors/cartoonists is San Francisco native Raina Telgemeier, whose graphic novels have become my go-to for bedtime reading after she invariably loses interest, around the second chapter, in whatever book I wanted to read. Another big success for bedtime reading was Scott McCloud’s seminal work Understanding Comics, which is essentially a graduate-level class in the medium and art of comic books disguised as a comic itself. This surprised me since it’s not at all written for a young audience, but my daughter was fascinated even as I explained unfamiliar concepts like closure, iconography and the relationship between an author and reader.

So I was thrilled when I discovered that Raina and Scott have collaborated on The Cartoonists Club, which from the description sounds kind of like Understanding Comics for the younger crowd. And now that we’ve read it I can say it is all that and more, and it’s delightful. You can really see the mix of Raina and Scott’s styles shine through, with both the natural-feeling childhood relationships and the hilarious breaking of the fourth wall in the Magic of Comics chapter. In the end it manages to both tell a compelling story about kids coming together around a hobby and convey practical knowledge young aspiring comic book writers can really use to start their journey, all taught by two masters of the craft. [Scholastic]

Seeing olo
Wed, 21 May 2025 · https://www.docbug.com/blog/archives/1260

You may have heard that about a month ago scientists at Berkeley and UW announced they have “discovered” a new, never-before-seen color, which they call olo. (Here’s a quick video overview from their paper.)

Before I get into what they did, here’s a quick refresher on how we see color. White light is made up of a broad spectrum of wavelengths, but our eyes have only three types of color detectors (called cones), each sensitive to its own overlapping range of wavelengths. What we see as color is the relative response of just those three types of cones at a given point: light around 575 nm triggers the L cones the most, the M cones a little less and the S cones hardly at all, which we perceive as yellow. But you can get the exact same response from a mixture of light around 549 nm and 612 nm (green and red), which is how RGB monitors get away with displaying practically any color with just three color subpixels. It’s also why we can perceive magenta, a color that isn’t in the rainbow at all but results from a combination of blue and red light (that is, a high response from the S and L cones but a low response from the M cones). Notice that it’s possible to trigger just the S cones with short-wavelength light and just the L cones with long-wavelength light, but there’s normally no way to trigger just the M cones without also triggering the L or S cones, because of the overlap.

Source: Wikipedia (public domain)
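That yellow metamer (a single 575 nm light vs. a 549 + 612 nm mix) is easy to sketch numerically. Here’s a minimal Python sketch using made-up Gaussian curves as stand-ins for the real cone sensitivity functions — the peak wavelengths and widths below are illustrative assumptions, not measured cone fundamentals:

```python
import math

# Crude Gaussian stand-ins for the S, M and L cone sensitivity curves.
# (peak_nm, width) values are illustrative guesses, not real data.
CONES = {"S": (445, 30), "M": (540, 45), "L": (565, 50)}

def response(cone, wavelength_nm):
    peak, width = CONES[cone]
    return math.exp(-((wavelength_nm - peak) / width) ** 2)

def lms(light):
    """Total (S, M, L) response to a list of (wavelength_nm, intensity) pairs."""
    return tuple(sum(i * response(c, w) for w, i in light) for c in "SML")

# Single-wavelength yellow at 575 nm.
yellow_pure = lms([(575, 1.0)])

# Solve a 2x2 linear system for intensities (a, b) of 549 nm + 612 nm
# light whose M and L responses match the 575 nm light (S is ~0 for all).
m1, l1 = response("M", 549), response("L", 549)
m2, l2 = response("M", 612), response("L", 612)
tm, tl = response("M", 575), response("L", 575)
det = m1 * l2 - m2 * l1
a = (tm * l2 - m2 * tl) / det
b = (m1 * tl - tm * l1) / det
yellow_mix = lms([(549, a), (612, b)])
# Two physically different lights, (nearly) identical cone responses:
# a metamer pair -- they look like the same yellow.
```

With these toy curves the green/red mix reproduces the M and L responses of the 575 nm light while barely touching the S cones, which is the whole trick behind three-subpixel displays.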

The Berkeley and UW team has developed a prototype system, called Oz, that scans a subject’s retina to identify the exact locations of the S, M and L cones and then uses a laser to excite just the cones needed to produce a desired color at any given location. In theory such a system could render every possible color, including ones that are impossible to see in nature because they’re outside the range of S/M/L responses you can get with natural light. In practice they estimate up to two thirds of the light “leaks” over to neighboring cones, but that’s still enough to produce a large range of colors, including ones outside the natural gamut. One such color, the one produced by stimulating only the M cones and no others, they’ve named olo (from 010 — get it?).
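To see why olo can’t simply be shown on a screen, you can push LMS = (0, 1, 0) through standard colorimetry math. A sketch using the published Hunt–Pointer–Estévez XYZ→LMS matrix (equal-energy normalized) and the standard XYZ→linear-sRGB matrix — the Oz paper uses its own cone fundamentals, so treat this as illustrative rather than their exact numbers:

```python
# XYZ -> LMS (Hunt-Pointer-Estevez matrix, equal-energy normalized).
HPE = [
    [ 0.38971, 0.68898, -0.07868],
    [-0.22981, 1.18340,  0.04641],
    [ 0.00000, 0.00000,  1.00000],
]

# CIE XYZ -> linear sRGB.
XYZ_TO_RGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def invert3(m):
    # 3x3 matrix inverse via the adjugate / determinant.
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [
        [(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
        [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
        [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det],
    ]

def matvec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

olo_lms = [0.0, 1.0, 0.0]               # "010": M cones only
olo_xyz = matvec(invert3(HPE), olo_lms)  # back to CIE XYZ
olo_rgb = matvec(XYZ_TO_RGB, olo_xyz)    # then to linear sRGB
```

With these matrices the linear RGB comes out to roughly (−4.6, 2.3, −0.2): the red and blue channels would have to be negative and green more than twice a monitor’s maximum, so no sRGB pixel can show olo. The X coordinate is negative too, meaning no physical light matches it — consistent with olo lying outside the gamut of natural vision.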

In theory only five subjects in the world have seen olo, which they describe as a “blue-green of unprecedented saturation, when viewed relative to a neutral gray background.” But that’s not very satisfying. If you’ve just discovered a new color, naturally the first thing anyone is going to ask is “what does it look like?” — it’s much nicer if you can answer “here, take a look” instead of “come back to my lab and I’ll shoot you in the eye with a laser.”

Luckily, I think I’ve found a way to see olo without any of the complex set-up. The Oz team creates olo by selectively stimulating M cones in a region of the retina, but we should be able to get the same effect by first staring at a magenta color field for 20-30 seconds (which suppresses responses from the S and L cones) and then quickly shifting over to a pure green. The difference is analogous to additive vs subtractive color: Oz works by stimulating only the M cones, while color adaptation involves the suppression of the stimulus color (in this case magenta, the complement of green).

Stare at the center dot for 30 seconds. Then without moving your eyes, move your mouse pointer into the square (or tap on mobile). You should briefly see a super-saturated blue-green image.

Olo demo

Cute effect, but is it olo — are you and I actually seeing the same color the Oz team sees with their device? The short answer is… maybe? For an effect that’s so readily testable, there’s still surprisingly little consensus about exactly what causes negative after-images, or even exactly what colors one should expect to see. Explanations range from simple cone adaptation to the currently dominant theory that after-images are caused by some kind of opponent process between different color responses higher up the processing chain (probably in the retinal ganglion cells).

If something like the cone adaptation model is correct then I’d definitely expect the two methods to produce the same color, modulo how much leakage there is in Oz vs. how much the S and L cone responses are suppressed in the after-image. But even if the processing is higher up the chain it wouldn’t surprise me if the effects are essentially the same, because regardless of the underlying mechanism it’s clear that when an after-image is mixed with a real image (e.g. when viewed against a colored background) the result is as if the original stimulus were partially subtracted from the background. That’s why the after-image from magenta appears green on a white background, but greenish-yellow on a yellow background and brownish-red on a red background.
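That “partially subtracted from the background” behavior is easy to play with numerically. A toy sketch of the idea — the adaptation strength k is an entirely made-up illustrative number, and the clamp at zero stands in for the fact that responses can’t go negative:

```python
# Toy model of a negative after-image: the perceived color is roughly
# the background with some fraction k of the adapting stimulus
# subtracted out, clamped to valid values. k = 0.5 is a made-up number.

def afterimage(background, stimulus, k=0.5):
    """RGB triples in [0, 1]; returns background - k*stimulus, clamped at 0."""
    return tuple(max(0.0, b - k * s) for b, s in zip(background, stimulus))

MAGENTA = (1.0, 0.0, 1.0)
print(afterimage((1, 1, 1), MAGENTA))  # white  -> (0.5, 1.0, 0.5), greenish
print(afterimage((1, 1, 0), MAGENTA))  # yellow -> (0.5, 1.0, 0.0), greenish-yellow
print(afterimage((1, 0, 0), MAGENTA))  # red    -> (0.5, 0.0, 0.0), dark brownish-red
```

The same subtraction reproduces all three background cases from the paragraph above, whatever the underlying neural mechanism turns out to be.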

One way to test whether the two colors really are the same would be to do the same tests the Oz team did but with the after-image + green, using a tunable laser plus white light to match the perceived color. Alternatively one could turn the experiment on its head and use Oz itself to match the perceived color directly, and see how close it gets to their olo.

I plan on reaching out to the Oz team to see what they think, and I’ll update if they write back.
