Mind and Brain

Another reason to not let your toddler watch TV?

Economics professors at Cornell and Indiana U. have found a possible correlation between watching TV before the age of three and autism. The evidence looks even more circumstantial than the study linking early TV viewing to ADHD, but it’s still interesting: what they’ve actually found is a correlation between autism diagnoses and the number of rainy days in a given county over a given period, and rainy days are known to correlate with the hours kids spend watching TV. I wonder if they also looked at birth month and whether that has an effect; if it did, that might imply a critical period of only a few months. (Thanks to Andrea for the link.)

Update 3:30pm: Here’s the actual study. Plus, Steven Levitt offers some skepticism at the Freakonomics blog. (Thanks to Judith for the links.)

Games we materialists play when you aren’t looking

Living in California as I do, I have a lot of friends who have ideas about the physical world that on their face seem ludicrous to a scientifically-minded materialist like myself. For example, people I love and respect think that some people have the ability to heal by adjusting a patient’s “energies” without touching him, others think that spells and witchcraft have power beyond the psychological, and even more think there’s some “guy” up in heaven that controls what happens here on Earth and that 2000 years ago His son rose from the dead. Since I respect these friends a great deal I’ve been looking for common ground, and have started playing a game with myself where I try to translate these beliefs into a form that a philosophically-minded but skeptical materialist like myself can accept.

I mean translate literally — I look for meanings of the words my believer friends use that make the belief plausible in my own world-view while compromising their actual beliefs as little as possible. There are some limits to the game — no amount of translation is going to make the claim that one can change the weather just with one’s mind any more palatable to me. But there is a surprising amount of room to maneuver. For example, I’ve heard some describe the energy manipulated by reiki practitioners as “electricity,” but when pressed it’s clear that’s just a metaphor for something else — they don’t actually mean that this energy can be measured with a voltmeter any more than a physicist talking about an electrical “current” thinks you could steer a boat down a river of the stuff. The goal of my private game then is to answer the question, “a metaphor for what?”

The fun part of this game is that when I’m being honest with myself I rapidly wind up at logical impasses in my own philosophy as well. My latest conundrum has to do with belief in some sort of soul, a “thing” that is a fundamental part of and unique to every living being (or at least every person), and that persists after that person has died. So the game is to come up with something that is (a) fundamental to the identity of an individual person and yet (b) still exists after the body has turned to dust. As I cast about for things in my own world-view that might fit the bill (including things like “the patterns of memories left in surviving friends and family” and “the combination of genes and upbringing one leaves in one’s own children”) I started to recognize that the idea of a soul is an answer to a basic philosophical question left unanswered by materialism, namely “when we see an object at two points in time, what features are necessary for us to recognize the two viewings as being of the same object?” I’ve always heard this called the Granddad’s Axe problem:

I’ve got my Granddad’s old axe. I’ve replaced the handle twice, and the head three times, but it’s still my Granddad’s old axe…

We can certainly accept that Granddad’s axe is still the same axe even if we paint it or sharpen it, and we might even accept that it’s the “same” axe after we’ve replaced both the head and the handle, so long as we use it in the same way, it evokes the same memories of Granddad that it did before, and so on. What about people? It’s been said that every molecule in a person’s body is replaced after a decade or two, and certainly I’m very different, in both appearance and thinking, from what I was at 12. Am I still the same person I was then, even with all those changes? If so, why do we connect the atoms that made up that child then with the person sitting here typing this now? And if not, is there some 12-year-old boy living today who, based on similarity to that boy of 24 years ago, is more deserving of the title?

Materialism (or my understanding of it at least) doesn’t offer any answers to these questions, nor does it feel the need to do so. The philosophy simply suggests that there are patterns that exist in the world at different points in time, that they follow certain rules, and that any vocabulary that accurately describes those patterns is equally valid (though potentially more or less practical and comprehensible). Unfortunately, just calling such a pattern “soul” doesn’t get us any further — that just amounts to saying “yes, you are the same person as you were when you were 12, and we’ll call the thing that binds those two defined entities together your soul.”

Depression, stress, and growing new brain cells

There’s a fascinating article in this month’s Seed Magazine called The Reinvention of the Self, describing the latest studies showing that we aren’t actually born with all the brain cells we’ll ever have, that stress and depression seem to keep new neurons from growing, and that antidepressants seem to encourage the growth of new neurons.

While not the main thrust of the article, it highlights what I think is a pretty basic philosophical issue for our age:

Gould’s research inevitably conjures up comparisons to societal problems. And while Gould, like all rigorous bench scientists, prefers to focus on the strictly scientific aspects of her data—she is wary of having it twisted for political purposes—she is also acutely aware of the potential implications of her research.

“Poverty is stress,” she says, with more than a little passion in her voice. “One thing that always strikes me is that when you ask Americans why the poor are poor, they always say it’s because they don’t work hard enough, or don’t want to do better. They act like poverty is a character issue.”

Gould’s work implies that the symptoms of poverty are not simply states of mind; they actually warp the mind. Because neurons are designed to reflect their circumstances, not to rise above them, the monotonous stress of living in a slum literally limits the brain.

The more we peel back the curtains that hide how the mind works, the more we’re forced to face age-old questions about what free will and responsibility mean when we can see the clockwork ticking toward its inevitable action.

(Thanks to XThread for the link!)

Choice Blindness

One of my favorite psych studies involves a split-brain patient, someone whose corpus callosum (the bundle of nerves connecting the left and right hemispheres) had been surgically severed. The subject was asked to point, with either his left or his right hand, to whichever of four given pictures matched a test picture. Unbeknownst to him, his right visual field (processed by the left hemisphere) was shown one test picture (say, a chicken claw) while his left visual field (processed by the right hemisphere) was shown a different one (say, a snow scene). When asked to point with his right hand to the matching picture he picked a chicken; when asked to point with his left he picked a snow shovel. The fascinating part came when the subject was asked to verbally explain why he picked the snow shovel. Language is mostly generated in the left hemisphere, the half that controls the right hand and never saw the snow scene. Rather than look confused, he invariably came up with explanations for why he picked what he did — explanations that the experimenter knew were incorrect, like “oh, you need the shovel to clean up the chicken coop.”

Now BPS Research Digest points to a new study where they find the same sort of “choice blindness” in normal subjects:

One hundred and twenty participants were shown 15 pairs of female faces (taken from here). For each pair they had to say which of the two faces they found more attractive, and on a fraction of trials they had to say why they’d made that choice, in which case the photo of the face they’d selected was slid across the table to them so they could look at it while they explained their choice. Crucially, on a minority of these trials, the researchers used sleight of hand to surreptitiously pass the participant the photo of the face they had just rejected, rather than the one they’d chosen.

Bizarrely, only about a quarter of these trick trials were noticed by participants, despite the fact the two faces in a pair often bore little resemblance to one another. Even stranger was the way the participants then went on to justify choosing the face on the card they were holding, even though it was actually the face they’d rejected. It’s not that participants weren’t paying attention to the face they’d been passed – the justifications they gave often related to features specific to this face, not the one they’d actually chosen. Independent raters who compared participants’ verbal explanations for choices they had made (non-trick trials), with their explanations for the choices they hadn’t made (trick trials), found no differences in amount of emotional engagement, degree of detail given, or confidence.

As I’ve said before: Man is not a rational creature. Man is a rationalizing creature.

Thoughts on Kurzweil’s Law

I heard Ray Kurzweil speak last night at the Long Now seminar. A friend who also attended says it was essentially the exact same talk he’d heard him give five years ago (ironic considering how fast things are supposed to be changing nowadays), but this was my first time hearing him in person. I must say it’s rare that a talk makes me alternate between thinking “Well, that’s completely bogus!” and “OK, that makes sense…” so many times.

Where I think he’s got it right:

  • People are inherently bad at extrapolating exponential trends, and we are currently experiencing exponential technological growth. This is especially true in information and communication technologies: information processing, sensing and pattern recognition, and human-to-human communications.

  • Reading between the lines of his talk, information technologies are bootstrapping technologies: once you have them, they make inventing the next stage easier, faster and cheaper.

  • The combination of biotech, new biological sensors, and the ability to simulate complex processes is going to seriously challenge how we currently think of ourselves as individuals, and even what it means to be human.

Where I think he’s got it wrong:

  • As I mentioned a few days ago, I think some of his exponential curves are the result of our natural tendency to gloss over things that happened in the past and focus on recent developments. (A less generous assessment would say he just did it to make his curve work out, but this isn’t limited to Ray’s charts; in fact, he showed the same graph with points plotted from other lists of momentous inventions drawn from various encyclopedias.) This is not to say there aren’t several exponential growth curves in play at the moment, but I don’t think this is a trend that has been going on for hundreds of thousands of years.

  • It’s an old saw that people overestimate what will be possible in five years and underestimate what will be possible in 20. I think his predictions of ubiquitous augmented reality, computers distributed throughout one’s clothing, and head-up-display contact lenses (or direct-to-retina/optic-nerve displays) will all happen at some point, but not in the next five years.

  • Ray talks about the creation of artificial intelligences as if some day in the near future we’ll invent HAL and start talking to it. Ever since Alan Turing described the Turing Test, people have described artificial intelligences in terms of ability to generate and understand language, ability to make human-like decisions, ability to show and understand emotion — in other words, the ability to relate to humans. I see no reason to think the first AIs will think or communicate like us at all, nor do I think they will exist at human scale.

    In fact, I would say several species of human-made hyper-intelligences already walk among us: we call them corporations, nation-states, philosophical or political movements, and civilizations. Their neurons are the people, documents and cognitive artifacts that make up the whole. Their synapses are the communication and social networks that run between these individuals. The specific structure of the intelligence is set by its laws, traditions and culture.

    The dual of the idea that groups of people, documents and cognitive artifacts can be a single intelligence is the idea that my own human intelligence, as an individual, is actually made up of more than just what I can think when I’m lying naked and alone. As Edwin Hutchins points out in Cognition in the Wild, human intelligence is not just the product of what’s inside our skull but stems from the combination of our brains, our culture, and tools such as the paper we write on and the skill of writing itself. I expect by the time a machine with no human in the loop has passed the Turing Test, the continuing augmentation of humans will have long since forced us to recognize that the test wasn’t all that good a criterion for intelligence in the first place.

  • Even though our knowledge and our information technologies are improving exponentially in many fields, there are some parts of human knowledge that are not growing at this incredible rate. Notably, our understanding of existential questions about the purpose of life, what we as humans value, and the meaning of free will has not kept pace with technology, even though in many cases new technology and new understandings about the world have pulled the rug out from under our previous answers. These questions will become especially important as we start fundamentally modifying our biology and finally unravel the mysteries of the mind itself.
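The exponential-extrapolation point in the first bullet above can be made concrete with a toy calculation (my own illustration, not from Kurzweil’s talk): take a quantity that doubles every two years, draw a straight line through its first decade, and watch how far behind the line falls.

```python
# Toy illustration (not Kurzweil's data): how badly linear intuition
# undershoots an exponential trend.

def exponential(years, doubling_time=2.0):
    """Value of a quantity that doubles every `doubling_time` years."""
    return 2 ** (years / doubling_time)

def linear_guess(years, sample_at=10.0):
    """Naive straight-line extrapolation through the value observed at
    `sample_at` years (slope = average growth over that first stretch)."""
    slope = exponential(sample_at) / sample_at
    return slope * years

for t in (10, 20, 30):
    actual, guessed = exponential(t), linear_guess(t)
    print(f"year {t}: actual {actual:,.0f}x, linear guess {guessed:,.1f}x, "
          f"off by a factor of {actual / guessed:,.1f}")
```

At year 10 the line is exact by construction; by year 20 it is already 16 times too low, and by year 30 over 300 times too low. That widening gap is the shape of the mistake people make when they project today’s rate of change forward linearly.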

More on the Placebo Effect

Rawhide commented on my previous post that he was surprised there was much doubt about the placebo effect’s existence. It turns out there have been serious questions raised about whether the placebo effect is actually just a myth, following a 2001 New England Journal of Medicine article that analyzed 114 medical studies that included placebo groups and found that, on average, the placebo effect was minimal if it existed at all.

Dylan Evans (a frequent contributor to the MindHacks blog) has a 2003 book called Placebo: The Belief Effect that argues that the placebo effect only helps with some kinds of conditions (namely pain, swelling, stomach ulcers, depression, and anxiety), and that by lumping everything into one average the meta-analysis washes out the few places where placebos actually work. He also suggests that placebos probably work by triggering the release of endorphins, which affect the same kinds of symptoms. Given the recent study, it looks like he hit the nail on the head on that one.

You can find a nice summary of his idea in this short paper, which also includes a nice history of the discovery and our understanding of the placebo effect.

Update 8/30/05: typo fix

Placebo effect and views on the mind

The Economist has a short article on how researchers have observed that people’s brains emit more endorphins when given a placebo and told it will counteract pain. The article starts with this:

The placebo effect, long considered nothing more than psychological suggestibility, does now appear to be genuine.

It’s hard for me to imagine the worldview necessary for that sentence to make any sense. If you believe (as I do) that the mind is fully implemented by our biology then you wouldn’t at all be surprised that there’s a biological cause for the observed decrease in subjective pain. On the other hand, if you still put Descartes before the horse and believe in a kind of soul or other mind/body dualism then the idea that a non-physical “psychological suggestibility” isn’t genuine (even though it stops the equally non-physical pain) is ludicrous.

It seems to me that The Economist and probably a majority of Westerners want to walk a middle road, accepting only the physical, observable, and scientific world as “genuine” while at the same time refusing to accept that a direct corollary of that belief is that our own minds must be a part of that physical, observable world. It’s no wonder we have such difficulty dealing with issues like mental illness in this culture…