Thursday, February 04, 2010

Optimism, Brains, Thinking, and Artificial Intelligence

Mind and brain questions today. David Airth asks,

"Why are some people optimistic and others pessimistic? I am an optimist, whereas my father was a pessimist."
Certainly, at least part of the answer lies in the brain. Researchers Tali Sharot, Alison Riccardi, Candace Raio, and Elizabeth Phelps had a much-heralded article in Science recently that examined something related to this question (hat tip to Kevin). It is a well-documented phenomenon that most people tend to be irrationally optimistic about future events. We project our hopes onto the world, convince ourselves that desired outcomes are more likely than they really are, and tend to act accordingly. This irrational optimism can be healthy in driving us to improve ourselves and our lot in life, but it can also lead to things like the housing bubble.

This reflexive optimism, however, is not present in those who are depressed. Sharot, et al., looked to see whether there was activity in particular parts of the brain that could be correlated with this and found that there indeed was: areas in the amygdala and the rostral anterior cingulate cortex were much more active in those who were more optimistic.

So, we have a correlation, but what about causation? No doubt there are biological factors that cause different normal brain states in different individuals, and this can play out as a general tendency towards optimism or pessimism. Anyone with more than one child has surely been blown away by the subtle differences in personality that must, at some level, be biological.

Other biological factors no doubt play a part too. We know that exposure to sunlight, for example, has effects that change mood. But, of course, there are factors outside the brain, too. Experience changes the brain. What we see and what we think are shaped by the brain, but they also shape the brain. Those who are more pessimistic often have good reason for pessimism, having experienced insurmountable difficulties, terrible trauma, or shattered dreams. Parenting and childhood experiences surely have some effect.

Of course, it is an incredibly complex dance of biological and biographical factors intertwining that makes us who we are. But, personally, I'm with you. I'm an optimist. Maybe we can start a club.

Aside: a comic I've worked with several times wrote one of my current favorite jokes: "I went to a meeting of the Optimists' club. When I got there the room was half empty, but by the time I left it was half full."

Anonymous asks,
"How does Heidegger's notion of "thinking" differ from the normal way of thinking about "thinking"?"
Those who are better schooled in Heidegger, please correct me on this, but as I understand him, the key to being an authentic being (when you are truly authentic, you earn capitalization and are referred to as a Being) is acting, doing, creating yourself. We normally think of thinking as the opposite of doing, as passivity. There is the world we act in, and then there is the mental world where we escape from our situation, our troubles, ourselves. This sort of thinking is seen by Heidegger as self-deception that takes you away from your humanity.

But this is not true of real thinking. To live a truly human life, you must not only do, but understand why you do what you do. Thinking, he argues, must be done in language, and language is made up of concepts which endow the things they are attached to with meaning. By "thinking," what we are doing is connecting the world with language and thereby giving our lives meaning. "Language is the house of Being," he says. Mere contemplation keeps you from being fully human, and merely acting will not work either, but active thinking, which connects your actions and situation in the world with a deeper sense of meaning, will do the trick.

Crossing the analytic/continental divide, SteveD asks,
"If someone does not believe in the supernatural, on what basis can he reject the possibility of artificial intelligence? (Or, more pointedly, are folks like Searle who say strong AI is impossible smuggling in some supernatural notions of mind?)"
Let's start with the difference between weak and strong AI. Weak AI is a non-human system that sure as heck looks like it's thinking. If I'm chatting over IM with two others, one a computer and one a human, and I can't tell the difference between them, that's weak AI, since the simulation gives all the appearances of thought. Strong AI is when there is actual thinking, internal experiences like we have: the system not only reacts as if it is interested or in pain, but really is interested or in pain.

John Searle argues against the possibility of strong AI most famously with his Chinese Room argument, wherein we have a person who speaks no Chinese sitting in a small room filled with manuals of Chinese characters. When messages in Chinese are slipped under the door, he takes them, looks up the characters in the manuals, writes down the corresponding characters, and slips the new message back under the door. To a Chinese speaker outside, it seems like he or she is having a conversation with someone. But with whom? The person in the room does not speak Chinese and has no idea what he is writing, so there is no thought there. It can only be said that the Chinese speaker is conversing with the room, but surely we don't want to attribute a mind to the room, even if that person really, really thinks there is an intelligence conversing with him or her. It will be thus with any attempt to move from weak to strong AI: it will be an impostor mind, not a real mind.
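To make the lookup mechanics of the room concrete, here is a minimal sketch of my own (nothing from Searle himself): the rule book and the phrases in it are invented for illustration, and the point is only that the program produces passable replies by matching symbols it never interprets.

```python
# Toy illustration of the Chinese Room: the "room" answers by pure symbol
# lookup, with no understanding anywhere in the system.

# Hypothetical rule book: maps an incoming message to a canned reply.
# The strings stand in for uninterpreted symbols.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "The weather is nice."
}

def room(message: str) -> str:
    """Return whatever reply the manuals dictate for this message.

    The 'person in the room' never interprets the symbols; they only
    match shapes and copy out the listed response.
    """
    return RULE_BOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    # From outside, the exchange looks like a conversation...
    for note in ["你好吗？", "今天天气怎么样？"]:
        print(note, "->", room(note))
    # ...but nothing in RULE_BOOK or room() understands Chinese.
```

Whether such a system could ever scale to open-ended conversation is beside the point; on Searle's view, even one that did would still be shuffling syntax without semantics.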

Given that humans are intelligent, there must be something we have that the machine doesn't, indeed can't, have. If you want to be a dualist and say that we have both a material brain and an immaterial soul, problem solved. But if the extra metaphysical baggage of a soul bothers you (and there is good reason for this), can you maintain Searle's position and still be a materialist? Can we have something the computer can't without moving to anything non-material?

Searle himself is a materialist, arguing that strong AI itself presumes dualism, that strong AI only makes sense if we presume that there is this difference between us and computational machines. If we deny this, then we deny the possibility of strong AI and keep from importing a soul. It is akin to the move that the logical positivists make with God: it's not that I can prove that God doesn't exist, but rather that such metaphysical claims are not meaningful. Searle would say to you that no, you can't argue against strong AI without the supernatural if you accept the strong AI picture of what AI could be, but then it's a game of three-card monte that you lose by even playing. Don't try to win the game; reject the game.