Category Archives: Books and Ideas
If you’ve spent any time on YouTube over the last few years (and you know you have), you’ve likely seen the video of the invisible gorilla experiment (if you’ve somehow missed it, catch yourself up here). The researchers who conducted that study, Dan Simons and Chris Chabris, didn’t realize that they were about to create an instant classic—a psychology study mentioned alongside the greats, and known well beyond the narrow circle of psych wonks. Milgram taught us about our sheepish obedience to authority; Mischel used marshmallows to teach us about delayed gratification; and Simons and Chabris used a faux gorilla to teach us that we are not the masters of attention we think we are.
The duo’s new book, The Invisible Gorilla, and Other Ways Our Intuitions Deceive Us, is every bit as engaging as the original study was innovative. Using the invisible gorilla study as a jumping-off point, the authors go on to explain why so many of our intuitions are off the mark, though we’re typically convinced otherwise. I recently had a chance to chat with Dan Simons about the study, the book, and why we’re usually in the dark about how our minds really work.
DiSalvo: What gave you and Chris Chabris the idea for the invisible gorilla study?
Simons: Our study was actually based on some earlier research by Ulric Neisser conducted in the 1970s. His studies were designed to tease apart whether people focus attention on regions of space or on objects. He wanted to see whether, if people were focusing on one part of a scene, they would automatically notice if something unexpected passed through that “spotlight” of attention. To do that, he made all the objects partly transparent so that they all occupied the same space and could pass through each other. He found that people often missed an unexpected event. But, the strange, ghostly appearance of the displays gave people a ready excuse for why they missed the unexpected event. Oddly, no one followed up on those studies, so we thought we’d give them another look and see whether people would miss something that was fully visible and easy to see. We did our study as part of an undergraduate class project in a class that I was teaching.
Why the gorilla suit?
We were looking for something dramatic so that if people missed it, they would be surprised when we showed it to them again. We also wanted something that would have some humor value to it. Fortunately for us, Jerome Kagan, an eminent developmental psychologist at Harvard, happened to have one in his lab.
I remember the first time I watched the YouTube video of the study and was completely dumbfounded when the question, “Did you see the gorilla?” flashed on the screen. I imagine that for researchers, getting that reaction from people is like hitting a home run.
It surprised us the first time we ran the study – we didn’t expect it to work as well as it did. It’s still a thrill to present the video to an audience and have people miss it. Our intuition that we’ll notice something as visible as a gorilla is a hard one to overcome. It took me years before I could trust that some people in almost any audience would miss it.
What do people tell you about their reaction afterwards?
Normally people can’t believe that they missed it. On occasion, they’ve accused us of switching the video. The intuition that we would notice makes it jarring for people to realize that they didn’t.
And that’s really the point, right, that we can’t know what we are missing until our attention is refocused on it?
That’s a big part of it. We can easily miss what’s right in front of us, but we don’t realize that we can. Part of the problem is that we’re only aware of the things we notice and we’re not aware of the things we didn’t notice. Consequently, we often have no idea what we’re missing.
Hence the myth of multi-tasking.
It depends on what you mean by multi-tasking. If you mean simultaneous attention shared across multiple tasks, then yes, it’s a myth. We typically cannot do two things simultaneously. We can perform multiple tasks one after another—a sort of serial tasking.
In the case of the first meaning, simultaneous attention across multiple tasks, why do you think so many of us are convinced we can do it?
I think a lot of people confuse these two possible ways of doing multiple tasks. Because we can do one task and then another, switching back and forth among them, we falsely believe we can do two at once. That confusion happens in part because we don’t realize how impaired we are when doing two things at once. We’re too distracted to notice that we’re distracted. That has dramatic consequences. For example, we can’t safely talk on the phone while driving, because that requires doing two tasks at once rather than sequentially (and both require attention).
Where does the intuition originate?
Our intuitions are based on our experiences. The problem is that our daily experiences frequently support incorrect intuitions about how our minds work. We are aware only of the things we’ve noticed, not of the things we’ve missed, so we assume that we always notice things. We don’t notice when we’re distracted by multitasking, so we think we aren’t distracted. The same sort of principle explains many of our mistaken intuitions.
But why wouldn’t we develop an intuition from our experience that we can’t parse our attention?
Our experience is tied to our awareness. We are aware of what we notice, not of what we miss, so we develop an intuition based on noticing. The principle applies to multi-tasking: we are aware only that we are accomplishing multiple tasks, because our daily life demands it, but we aren’t aware that we’re not really doing them at the same time. As a result, we mistakenly assume that we can do two things at once. Given that we rarely encounter evidence to contradict our awareness — normally, there’s nobody around to point out the gorilla — we don’t learn when our intuitions are wrong.
We see people all the time who know very bad things can happen from, as one example, texting while driving, but they still do it.
That’s true, but most people could drive much of their lives without having an accident. And the longer they go without having an accident, the more they are deluded into thinking they can drive and text safely. Fortunately, accidents are rare, but when they happen, they are catastrophic. Knowing that we have these limits and taking them to heart can save our lives. We learn best from our own experiences, but in this case, you shouldn’t wait to experience the consequences of distracted driving for yourself.
I can’t help but notice how so much of what we’ve been discussing runs counter to the conclusions of one of the most popular non-fiction books out there: Malcolm Gladwell’s Blink. Many people I’ve talked to who have read that book are convinced that we should trust our instincts instead of thinking things through.
The idea that intuition, gut instincts, and rapid decisions are a panacea for all of our decision making problems is really dangerous. Unfortunately, that’s the message that some people have taken from Gladwell’s book. Intuitions can be quite useful for some types of decisions, especially those that involve an emotional preference—who do you find most attractive, what ice cream tastes best—but they can lead us dangerously astray when they are based on assumptions about how our minds work. Gladwell is an incredible storyteller, but some of the conclusions he reaches in Blink are problematic. Our work, and the work of other cognitive scientists, shows again and again that the intuitions people hold about how their minds work are often wrong. When you dig deeper into the material he covers in Blink, you see that many of the featured examples are of expert pattern recognition, and that’s a very different thing than simply trusting intuition or instinct.
Like the example of a quarterback acting decisively without having time to think?
Yes, that’s expert pattern recognition. Peyton Manning studies films for many hours in preparation for each game, and he has done that for years. Then, in a game situation, he recognizes the pattern really quickly, and that leads him to find the open receiver readily. That said, even expert pattern recognition is far from perfect. If you let Manning analyze the films at a leisurely pace, he’ll find things he missed during the game. The same principle applies to most experts. They can make reasonably good decisions quickly and seemingly based on intuition — they’ll outperform novices with only a glance. But given more time, even the experts often would make better decisions.
Yet the takeaway for many people is that “thinking” is a hindrance.
Thinking takes work, and the idea that we could go with our gut and do better is really appealing. Unfortunately, it’s often not true.
What can we expect as a follow up from you guys? Can you top the gorilla study?
It’s hard to top having people miss a gorilla. I do have a new paper that just came out in the new open-access journal i-Perception. It talks about a new demonstration that I’ve called “The Monkey Business Illusion.” It’s on YouTube now. Basically, I wanted to see if people who knew about the original gorilla video would be immune to this sort of failure of awareness. Try it for yourself!
Link to the authors’ website.
Simons, D. (2010). Monkeying around with the gorillas in our midst: Familiarity with an inattentional-blindness task does not improve the detection of unexpected events. i-Perception, 1(1), 3–6. DOI: 10.1068/i0386
For theoretical neurobiologist and author Mark Changizi, “why” has always been more interesting than “how.” While many scientists focus on the mechanics of how we do what we do, his research aims to grasp the ultimate foundations underlying why we think, feel and see as we do. Guided by this philosophy, he has made important discoveries on why we see in color, why we see illusions, why we have forward-facing eyes, why letters are shaped as they are, why the brain is organized as it is, why animals have as many limbs and fingers as they do, and why the dictionary is organized as it is.
His latest book, The Vision Revolution, is a trenchant and insightful investigation into why humans see and interact with the world as we do. His findings are challenging and often surprising, and his witty, engaging style is accessible to a broad range of readers. He was generous enough to spend a few minutes with me recently to discuss his book and other topics.
NN: What originally led you to write a book about human vision in particular, instead of any of the other human evolutionary adaptive traits?
MC: Indeed, I don’t consider myself solely a vision scientist. I call myself a theoretical neurobiologist, more generally, and I have had a number of non-vision research directions, including, for example, the shape and evolution of the brain, and why animals have as many limbs and digits as they do. Some of these research directions were central parts of my first book, The Brain from 25,000 Feet.
I was led to a book on vision because that’s where my research led me, and so the question is, Why did I end up with quite a few research directions in vision?
As a theoretical neurobiologist, I try to find interesting phenomena that I can wrap my head around, with the hope of putting forth and testing rigorous and general explanatory hypotheses. That’s not easy, but there are a number of reasons why it’s easier for vision.
First, relative to other senses and/or behaviors, the amount of data we possess for vision is huge. There’s a century-sized pile of data, much of it not well explained, much less in a unifying manner.
Second, vision is theoretically approachable. You have a visual field, you see objects, and so on. We know how to at least begin thinking about the phenomenology. It’s more difficult for audition, and practically impossible for olfaction, where we have little idea how to even describe our perceptions, let alone explain anything!
And, third, for vision we have the best understanding of the underlying mechanisms.
My point is that, as a theorist struggles for phenomena he or she can crack, vision appears as a large attractive target compared with many other aspects of brain and behavior. One may end up attacking vision problems even if one isn’t excited by vision, merely because it’s juicy. (I am excited by it, though, especially to the extent that I can find exciting hypotheses.)
I was intrigued by the “mind reading” aspects of vision. In a nutshell, how does this work, and how do humans benefit from this ability?
Our color vision fundamentally relies upon the cones in our retina, and I argue in my research that color vision evolved in us primates for the purpose of sensing the emotions and states of those around us. We primates have an unusual kind of color vision – our cones sample the visible spectrum in a peculiar fashion – and I have shown that one needs that kind of peculiar color sense in order to pick up the color modulations that occur on our skin when we blush, blanch, redden with anger, and so on. Our funny primate variety of color vision turns out to be optimized for seeing the physiological modulations in the blood in the skin that underlie our primate color signals.
So, we evolved special mechanisms designed for sensing the emotions and states of others around us. That sounds a lot like the evolution of a “mind-reading” mechanism, which is why I (only half in jest) describe it that way.
You mention in the book that reading and writing are relatively recent advances in human development, and yet we take for granted that we “see” and understand words, as if our brains were simply meant to see and understand them. What’s really going on that allows us to make sense of symbols on a page—and why can we do this at all?
In talks I often show a drawing of a child reading a book titled “How to Somersault.” The “joke” is that most kids are able to read very early, often even before they can do stereotypical ape behaviors like somersaults and monkey bars. Sure, they comprehend speech much earlier, but they’re getting orders of magnitude more speech thrown at them than writing. Kids learn to read very early, and very well; and as adults we are ridiculously capable readers, and spend nearly all our day reading.
Aliens might be excused for thinking we evolved to read.
But the invention of writing is only thousands of years old. In addition, for most of us, our grandparents, great-grandparents or great-great-grandparents didn’t read at all. Writing is much too recent for our brains to have evolved to have reading mechanisms.
How does our brain do it?
Is it because our visual system can become good at reading whatever we present to it? No. Kids would surely not be capable readers by around six if they were tasked to read bar codes or fractal patterns.
The solution is that culture made writing easy on the eye, by shaping letters to be what the eye likes. The idea that culture shapes our artifacts to be good for us is not new. What’s new here is a specific hypothesis for what writing should look like in order to be good for us.
To be easy on the eye, writing needs to “look like nature,” just what our illiterate visual systems are fantastically competent at processing. The trick of that research direction was making this “writing looks like nature” idea rigorous, and coming up with ways of testing it. I show that there are certain signature visual patterns found in nearly any natural environment with opaque objects strewn about, and that these signature patterns are found in human writing. In short, writing has evolved so that written words look like visual objects.
2. Don’t give people the power to direct your life (because if you do, they will)
3. Beware those with an inflated sense of self (they’re always trying to expand their pyramid scheme)
4. Become a negotiator (it’s not a business term, it’s a life term)
5. Be nice (but not only nice)
6. Be tenacious (if you think you’re persistent enough, you probably aren’t)
7. Learn how to resolve into strategy (problem-solving must eventually rise to the top of consciousness even during the hardest times)
8. Stay grounded while you’re spiritualizing (and beware spiritual etherealites who aren’t)
9. Don’t become a sycophant, and don’t abide those who are
10. Don’t let a pursuit of your “purpose” short-circuit your passion (purpose isn’t verifiable, passion is)
11. Use mediocrity to find your edge (even doing poorly at something can be useful)
12. Know what you want (or at least try hard to figure it out)
13. Beware the mystification of entitlement (it’s a delusion that distorts reason)
14. Learn to enjoy competition (the race and the win)
15. Develop a love of “play” (it’s not kid stuff, it’s human stuff)
16. Beware the myth of predestination (and those who believe it)
17. Learn to use escapism, but don’t get lost in it
18. Learn to love culture, but don’t get drunk on it
19. Learn to appreciate business, but don’t deify it
20. Learn to manage expectations (yours and others’)
A few years ago I read Louann Brizendine’s book, “The Female Brain”, and marveled at her ability to take weak correlations and turn them into impressively scientific-sounding “facts.”
This really is a talent, I think, though not one that has earned her many fans in the science community. That Brizendine is a trained psychiatrist, and a member of the American Board of Psychiatry and Neurology, has not helped her credibility in science circles—in fact, many feel it only makes her more culpable, since she really ought to know better.
What I think she knows, however, is that popular science books don’t have to be evidence-based to become best sellers, and she’s no doubt correct. Her just-released book, “The Male Brain”, will demonstrate that marketplace truism once again, and once again she is raising the ire of scientists.
A few examples will better illustrate why this tension exists. Brizendine likes to say that men and women are very much alike, but different in a few crucial ways. Fair enough. How are we different?
For one, she claims that the “I feel what you feel” part of the brain, the mirror-neuron system, “is larger and more active in the female brain. So women can naturally get in sync with others’ emotions by reading facial expressions, interpreting tone of voice and other nonverbal emotional cues.”
What’s interesting is that the “mirror-neuron system” at the core of her claim may or may not be a “system” at all; in fact, whether mirror neurons even exist is still a point of neuroscientific contention. At the very least, how these neurons work is debatable and there’s anything but widespread agreement about what they do. But Brizendine makes it sound as if the matter is settled and we can confidently draw sweeping conclusions.
But take another look at her statement. Is the conclusion she’s reaching a paradigm-breaking discovery? Not at all. It’s just a regurgitation of the same stereotype we’ve heard for years, that women are more empathetic, more in sync with emotions and better communicators. Only now, according to Brizendine, we have a grandiose scientific underpinning for believing it.
Here’s another claim: “Perhaps the biggest difference between the male and female brain is that men have a sexual pursuit area that is 2.5 times larger than the one in the female brain.” This statement raises the question, where exactly is this “pursuit area”? The reader shouldn’t expect an answer—at least not one with scientific validity—because in all likelihood no such “area” exists. At minimum we should be asking how this skirt-chasing control center was identified.
I have a feature article in the January/February issue of Scientific American Mind about the psychoemotional effects of social networking. A preview of the article is online here, and hard copy is available on newsstands.
Several months back I started following the debate about the role of social network sites like Facebook in fostering loneliness, affecting self-esteem and bolstering narcissism. As is often the case, the debate seemed more about presuppositions and agendas, and less about evidence. This article puts the emphasis solidly on evidence by reviewing a range of research findings from the last few years. If you have a chance to read it, I’d love to hear your thoughts.
I wish everyone a tremendous New Year. Thank you very much for reading Neuronarrative in 2009 – I’m looking forward to another year of exchanging ideas on engaging topics.
All the best to you and yours.
Isn’t trying to understand how the mind works difficult enough that we shouldn’t be trying to “extend” it outside of our heads? Of course not, because philosophy occasionally needs a novel twist to keep people from entirely blowing it off. Such is the role of David Chalmers’ and Andy Clark’s “Extended Mind Theory” (EMT), featured in Clark’s new book, Supersizing the Mind: Embodiment, Action and Cognitive Extension.
I haven’t finished the book, so I’ll hold back on anything resembling a full review. But I will direct your attention to an exceptionally entertaining review by philosopher Jerry Fodor in the London Review of Books.
If you’re unfamiliar with EMT, the basic premise is that things we conventionally think of as tools residing outside of our minds (notebooks, laptops, iPhones, etc) are actually parts of our minds. In other words, the mind/world dualism that most of us, without thinking about it, consider obviously correct is actually quite wrong — just as untenable as mind/body dualism. Chalmers’ now famous example used to make sense of this is the “Otto and Inga” scenario, which goes like this:
Consider the cases of Otto and Inga, both of whom want to go to the museum. Inga remembers where it is and goes there; Otto has a notebook in which he has recorded the museum’s address. He consults the notebook, finds the address, and then goes on his way. The suggestion is that there is no principled difference between the two cases: Otto’s notebook is (or may come with practice to serve as) an ‘external memory’, literally a ‘part of his mind’ that resides outside his body. Correspondingly, Otto’s consulting his notebook and Inga’s consulting her memory are, at least from the viewpoint of an enlightened cognitive scientist, both cognitive processes.
Fodor, with his trademark wit, takes on EMT at its foundation: it’s the content, stupid.
The mark of the mental is its intensionality (with an ‘s’); that’s to say that mental states have content; they are typically about things. And (with caveats presently to be considered) only what is mental has content. It’s thus unsurprising that considerations about content are most of what drives intuitions about what’s mental.
Even very clever tools, like iPhones, aren’t parts of minds. Nothing happens in your mind when your iPhone rings (unless, of course, you happen to hear it do so). That’s not, however, because iPhones are ‘external’, it’s because iPhones don’t, literally and unmetaphorically, have contents.
But what about an iPhone’s ringing? That means something; it means that someone is calling. And it happens on the outside by anybody’s standard. And also, come to think of it, what about iPhones that have had numbers programmed in? So, even if shovels and the like can’t be parts of minds, how does insisting on the intensionality of the mental rule out notebooks and iPhones?
That’s a fair question, and part of what I’ve been saying wasn’t quite true. What I should have said isn’t that only what’s literally and unmetaphorically mental has content, but that if something literally and unmetaphorically has content, then either it is mental (part of a mind) or the content is ‘derived’ from something that is mental. ‘Underived’ content (to borrow John Searle’s term) is the mark of the mental; underived content is what minds and only minds have. Since the content of Otto’s notebook is derived (i.e. it’s derived from Otto’s thoughts and his intentions with a ‘t’), the intensionality of its entries does not argue for its being part of Otto’s mind.
In other words, our minds, and only our minds, can possess “underived” content. iPhones can’t possess it — content must be put into the iPhone. Same applies to any external gadget or tool you can think of; content must be entered into the tool for it to be of any use to your mind. Therefore, iPhones and the like are repositories for content that must be derived from elsewhere. They are, to beat the dead horse yet again, external tools.
This is the mind gap, and as Fodor points out in his review, Clark stumbles around it with various metaphors but never soundly addresses it. He also attempts to boil the entire distinction down to what he apparently sees as an internalist red herring: internalists, he says, take “one step too many” when explaining how content is accessed by the mind. For example, Inga, when she remembers the directions, does not need an extra step to reach the information — it’s there in her mind. So, Clark argues, is the case with Otto — we needn’t believe that he is taking another step to reach the information in his notebook, because it’s right there ready for him to use.
Fodor dismantles that position handily:
There is, after all, a built-in asymmetry between Otto’s sort of case and Inga’s sort. Otto really does go through one more process than Inga: consulting his notebook really is a link in the causal chain that runs from his wanting to go to the museum to his getting there. By contrast, Inga’s ‘consulting her memories’ is a fake; and it’s a particularly naughty fake because 1. it makes Inga’s case look more like Otto’s than it can possibly be, and 2. it obscures the critically important fact that the (derived) intensionality of what happens on the outside depends ontologically on the (underived) intensionality of what happens on the inside. Externalism needs internalism; but not vice versa. External representation is a side-show; internal representation is ineliminably the main event.
It seems as though Chalmers and Clark are really positing a sort of neuro-philosophical mysticism, fueled by vivid metaphor and capable of conjuring interesting theoretical arguments, but short on evidence and failing, as Fodor shows, to address basic issues that have been part of philosophical discourse for years.
I think EMT has an appeal for the same reason that well written science fiction has an appeal: it engages the imagination with a challenging vision. The problem is that when our feet come back down on the ground, the vision can’t really hold up. Though, to Chalmers’ and Clark’s credit, it does influence us to think in new directions.