Category Archives: About Neuroscience

Finding the Money Illusion in the Brain

One of the daggers that have pierced the heart of the long-held economic rationality assumption (that we are all rational actors on the economic stage) is the “money illusion” proposition.  Rather than rationally considering only the real value of money (the value of goods it can buy), we actually weigh a combination of its real value and its nominal value (the amount of currency), and sometimes we ignore the real value altogether.

Using an example from the book Choices, Values, and Frames by psychologist Daniel Kahneman, let’s say that you receive a 2% salary increase. Unfortunately, the rate of inflation when you receive this increase is 4%.  In real terms, you are actually in the hole by 2%, and under the rationality assumption we’d expect this to elicit the same negative reaction as a straightforward 2% pay cut.  But this isn’t how most people react. Rather, the reaction to the real loss of 2% is tempered by the reaction to the nominal gain of 2%.  In effect, the nominal evaluation interferes with the real evaluation, hence the money illusion.
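
For the record, here’s the arithmetic as a tiny Python snippet (my illustration, nothing from the study): the “in the hole by 2%” figure is the usual approximation of nominal change minus inflation, while the exact real change is a shade smaller.

```python
# Toy arithmetic for the example above: a 2% nominal raise during 4% inflation.

def real_change(nominal, inflation):
    """Exact change in purchasing power, given nominal change and inflation."""
    return (1 + nominal) / (1 + inflation) - 1

print(f"Nominal change: {0.02:+.1%}")                  # +2.0%, what the paycheck says
print(f"Real change: {real_change(0.02, 0.04):+.2%}")  # -1.92%, what it actually buys
# The familiar "-2%" figure is the common approximation: nominal - inflation.
```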

Now a new fMRI study in the Proceedings of the National Academy of Sciences has tested whether the brain’s reward circuitry exhibits the money illusion, and it turns out that it does.  From the study abstract:

Subjects received prizes in 2 different experimental conditions that were identical in real economic terms, but differed in nominal terms. Thus, in the absence of money illusion there should be no differences in activation in reward-related brain areas. In contrast, we found that areas of the ventromedial prefrontal cortex (vmPFC), which have been previously associated with the processing of anticipatory and experienced rewards, and the valuation of goods, exhibited money illusion. We also found that the amount of money illusion exhibited by the vmPFC was correlated with the amount of money illusion exhibited in the evaluation of economic transactions.

Kahneman often uses a perceptual illustration to show how the money illusion works.  In the accompanying image, there are two ways to interpret what we see: as two-dimensional figures or as three-dimensional objects.  If asked to evaluate the relative size of the figures, we must rely on the two-dimensional interpretation to arrive at the correct answer. But the three-dimensional assessment of the objects’ size biases our perception because it is more accessible, making it difficult to see that the figures are all exactly the same size.

The same goes for how we perceive money: the real evaluation needed to arrive at the correct answer is crowded out by the more accessible nominal evaluation.  In a perfectly rational world that wouldn’t be the case, but by now we know this ain’t a perfectly rational world; as this study shows, we’re beginning to identify the brain dynamics underlying that fact.

Image via Very Evolved

Weber, B., Rangel, A., Wibral, M., & Falk, A. (2009). The medial prefrontal cortex exhibits money illusion. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.0901490106


Filed under About Neuroscience, About Perception, About Research

This is Your Brain on the Edge of Chaos

What do our brains have in common with piles of sand, earthquakes, forest fires and avalanches?  Each is a dynamic system in a self-organized critical state, and according to a new study in PLoS Computational Biology, so is the brain.

Systems in a critical state are on the cusp of a transition between ordered and random behavior.  Take a pile of sand, for example: as grains of sand are added to the pile, they eventually form a slope. At a certain point, the sloping sand reaches a “critical state,” and from then on adding even a single grain can cause an avalanche that may be small or large. We can’t predict the moment or the size of any given avalanche, but we know that once the critical state is reached, a whole range of responses becomes possible in the system (the pile of sand).

In effect, the system is globally stable at the same time as being locally unstable. Local instability (small avalanches in the sand pile) can create global instability (large avalanches leading to the collapse of the pile), bringing the system back to a new stable state. The pile of sand reorganizes itself.
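
For the curious, the sandpile dynamic is simple enough to simulate. Below is a toy Python sketch of the Bak-Tang-Wiesenfeld sandpile, the canonical model of self-organized criticality (my illustration; the study itself works with brain data, not sand). Any site that reaches a height of four grains topples, shedding one grain to each neighbor and possibly setting off a cascade.

```python
import random
import numpy as np

def drop_grain(grid):
    """Drop one grain at a random site, topple until stable, return avalanche size."""
    n = grid.shape[0]
    grid[random.randrange(n), random.randrange(n)] += 1
    toppled = 0
    while True:
        unstable = np.argwhere(grid >= 4)    # sites at the critical height
        if unstable.size == 0:
            return toppled
        for i, j in unstable:
            grid[i, j] -= 4                  # the site topples...
            toppled += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < n and 0 <= nj < n:
                    grid[ni, nj] += 1        # ...shedding one grain per neighbor
                # grains shed over an edge simply fall off the pile

grid = np.zeros((20, 20), dtype=int)
sizes = [drop_grain(grid) for _ in range(20000)]
# Once the pile self-organizes to criticality, avalanche sizes span many scales
# (roughly a power law): most drops topple nothing; a few trigger huge cascades.
print("largest avalanche:", max(sizes))
print("drops causing no avalanche:", sum(s == 0 for s in sizes))
```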

While self-organized critical state models have been used to model brain dynamics before (in simulated neural networks), this study took the additional step of linking modeling with neuroimaging to measure dynamic changes in the synchronization of activity between different regions of the brain’s network.  After developing a profile of brain dynamics with neuroimaging, the researchers compared that profile with the synchronization behavior of critical-state computational models. The model results closely matched the measured brain dynamics, which strongly suggests that the brain does operate in a critical state.
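
As a flavor of how synchronization between two regions’ time series can be quantified, here is a minimal Python sketch of the widely used phase-locking value. This is an illustration of the general idea only, not the study’s pipeline (the paper works with related but more involved measures, such as phase-lock intervals computed across wavelet scales).

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two (band-limited) signals: ~1 means a
    consistent phase relationship over time, ~0 means no consistent relation."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Toy example: two noisy 10 Hz oscillations with a fixed phase offset.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.3 * rng.standard_normal(t.size)

print(f"PLV, locked pair: {phase_locking_value(x, y):.2f}")  # near 1
print(f"PLV, vs. noise:   {phase_locking_value(x, rng.standard_normal(t.size)):.2f}")  # near 0
```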

Which is to say, another door has been opened to understanding how the brain functions on the precipice of utter chaos.  Next up will be studying how the brain’s criticality is (or is not) linked to its adaptability, and to cognitive performance overall.  There’s not much evidence yet pulling these threads together, but this study establishes the groundwork for much more research.

Another interesting question to consider: to what extent are critical state dynamics in the brain linked with psychiatric disorders?  Can better understanding how the brain teeters on the brink of randomness enable more effective treatments for certain disorders?  It’s difficult to even discuss this possibility without relying too heavily on metaphors (“neuronal avalanche” for example — and that’s a term actually used in the study), but until we have more evidential rudiments to work with, metaphor will have to fill the gaps.

Kitzbichler, M. G., Smith, M. L., Christensen, S. R., & Bullmore, E. (2009). Broadband criticality of human brain network synchronization. PLoS Computational Biology.


Filed under About Neuroscience, About Research

You Can Be Afraid To Lose, But Don’t Lose Perspective

Anyone who has ever stood to lose anything (all of us) knows that emotions play a big part in how we react to potential loss.  Sweaty palms and upper lips, fidgety fingers and bouncing knees, frantic racing thoughts: all are signs of emotional tumult when facing the risk of loss, and all seem involuntary.  But a recent study indicates that we can influence the degree of our emotional reaction, and our level of loss aversion. The solution, in short: think like a trader.

Seasoned traders are careful not to lose perspective when facing potential loss. They view loss as part of the game, but not the end of the game, and they rationally accept that taking a risk entails the possibility of losing.  Researchers wanted to investigate whether cognitive regulation strategies (like those embodied by traders) could be used to affect loss aversion and the physiological correlates of facing loss.

Subjects were given $30 and offered a choice to either gamble the money, and potentially lose it, or keep it.  They could theoretically win up to $572, or lose the $30 and be left with nothing.  The outcomes of their choices were revealed immediately after the choice was made (e.g. “you won”).  Subjects completed two full sets of choices (140 choices per set).  During the first set, subjects were told that the choice was isolated from any larger context (“as if it was the only one to consider”); during the second set, subjects were told that the choice was part of a greater context (“as if creating a portfolio”) — in other words, the introduction of “greater context” (taking a different perspective) functioned as a cognitive regulation strategy.

The researchers conducted this study twice: in the first, they observed behavior; in the second, they observed behavior and administered a skin conductance test (a measure of sympathetic nervous system activity) to measure level of emotional arousal.
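
For readers who want the mechanics: in this literature, a subject’s loss aversion is typically summarized by a parameter, lambda, fitted to his or her choices. The Python sketch below is a toy version of that standard modeling approach (my simplification, not the authors’ code): a prospect-theory-style value function in which losses weigh lambda times more heavily than gains, plus a logistic rule that turns subjective values into choice probabilities. Fitting lambda separately to each subject’s choices under the two instructions, for example by maximum likelihood, is what lets researchers say whose loss aversion actually decreased.

```python
import numpy as np

def gamble_value(gain, loss, lam, rho=1.0):
    """Prospect-theory-style value of a 50/50 gamble:
    losses are weighted lam (loss aversion) times more than gains."""
    return 0.5 * gain**rho - 0.5 * lam * abs(loss)**rho

def p_gamble(gain, loss, sure, lam, rho=1.0, mu=1.0):
    """Logistic (softmax) probability of choosing the gamble over the sure amount."""
    diff = gamble_value(gain, loss, lam, rho) - np.sign(sure) * abs(sure)**rho
    return 1.0 / (1.0 + np.exp(-mu * diff))

# A subject with lam = 1 is indifferent to a fair +$10/-$10 coin flip vs. $0;
# a loss-averse subject (lam = 2) almost always declines the same gamble.
for lam in (1.0, 2.0):
    print(f"lam = {lam}: P(gamble) = {p_gamble(10, -10, 0, lam):.2f}")
```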

The results: using the cognitive regulation strategy had the strong effect of decreasing loss aversion.  Most importantly, only individuals successful at decreasing their loss aversion by taking a different perspective had a corresponding reduction in physiological arousal response to potential loss.  So, cognitive regulation led to less loss aversion, which led to less sweat on the upper lip.

The question remains: is loss aversion a reasonable response to anticipated discomfort and pain (emotional or physical), or is it more of a judgment error caused by a tendency to exaggerate the consequences of loss?  The results of this study support both positions.  On one hand, losses feel worse than gains feel good because the physiological response is linked to feedback about loss or gain; in other words, it’s easier to feel really bad about a potential loss than to feel really good about a potential gain while loss is still a possibility. We tend to dwell on the loss side because we know it hurts.

On the other hand, the study also shows that fear of loss can be regulated, which means that it’s a changeable quantity.  Even though loss aversion serves a purpose, there’s a high likelihood we begin with too much of it for our own good.

So, it seems even if we are sensitive to the possibility of loss, we can make ourselves less so by changing our thinking. By taking a different, larger perspective, loss loses a few of its teeth and becomes a less scary beast.

(One concluding note: since this study addressed monetary loss, I’d keep the analysis in that category and in those with similar dynamics, such as asking someone out on a date or interviewing for a job, and not extend it to Loss with a capital “L”: loss of life, or of the lives of loved ones. It seems to me that gets into a different area altogether and can’t be addressed as practically.)
Sokol-Hessner, P., Hsu, M., Curley, N., Delgado, M., Camerer, C., & Phelps, E. (2009). Thinking like a trader selectively reduces individuals’ loss aversion. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.0806761106


Filed under About Neuroscience, About Research

Four Authors Respond to the Social Networking Controversy

When neuroscientist and Oxford professor Susan Greenfield warned the British House of Lords about the alleged dangers of social networking, she touched off a firestorm that is still smoldering. Greenfield made several points, some of which have been misrepresented in subsequent news coverage and others that are clearly debatable. But it’s beyond dispute that she hit a nerve, and her words likely foretell a larger debate still to come.

At issue is whether Facebook, Bebo, MySpace, Twitter and all other social networking sites, gadgets and tools are adversely affecting our brains–more specifically, children’s brains–and infantilizing our relationships by diminishing our ability to interact in meaningful ways. Additional arguments tagging along with these include whether social networking is promoting loneliness, which is in turn negatively affecting our health.

To further explore the arguments for and against Greenfield’s position, I asked four authors who have addressed this topic from different angles to respond to the controversy. 

Dr. Ben Goldacre, author of Bad Science, has been an outspoken critic of Greenfield, and recently debated Dr. Aric Sigman–whose research has been featured in the social networking controversy–on the BBC (his comments below also appeared on his blog and have been quoted here with his permission).   

Professor Susan Greenfield is the head of the Royal Institution and the person behind the Daily Mail headline “Social websites harm children’s brains: Chilling warning to parents from top neuroscientist”, which has spread around the world (like the last time she said it, and the time before that).

It is my view that Professor Greenfield has been abusing her position as a professor, and head of the Royal Institution, for many years now, using these roles to give weight to her speculations and prejudices in a way that is entirely inappropriate.

We are all free to have fanciful ideas. Professor Greenfield’s stated aim, however, is to improve the public’s understanding of science, and yet repeatedly she appears in the media making wild headline-grabbing claims, without evidence, all the while telling us repeatedly that she is a scientist. By doing this, the head of the RI grossly misrepresents what it is that scientists do, and indeed the whole notion of what it means to have empirical evidence for a claim. It makes me quite sad, when the public’s understanding of science is in such a terrible state, that this is one of our most prominent and well funded champions.

Then there was Dr Aric Sigman. He is the man behind the “Facebook causes cancer” story in the Daily Mail, and many other similar stories over the years (as part of the Daily Mail’s ongoing oncological ontology project). His article can be read in full online here as a PDF. [In a debate on the BBC] I explained that he had cherry picked the evidence in his rather fanciful essay, selectively only mentioning the evidence that supports his case, and ignoring the evidence that goes against it.

Cherry picking is a common crime in the world of pseudoscience – whether it is big pharma or everyday cranks – and to me it is a serious crime against science, because by selectively quoting evidence, you can make almost anything seem either dangerous, or beneficial.

Dr Sigman’s case is that social networking leads to loneliness, and loneliness leads to biological harm (he doesn’t mention cancer specifically, incidentally). I didn’t get near the second half of his argument, though, because he was so spectacularly misleading on the first that it became irrelevant.

I claim no expertise on the question of whether social networking and Internet use is linked to loneliness. I merely have a basic ability to use searchable databases of academic evidence, like anybody else. If you go to PubMed and type in:

loneliness [ti] AND internet

You will get 12 results.  Many of them do not support Dr Sigman’s theory. These are the ones he completely ignores.

1. Caplan SE published a paper in 2007 entitled: “Relations among loneliness, social anxiety, and problematic Internet use.” Dr Sigman did not quote this paper in his article. Why not? “The results support the hypothesis that the relationship between loneliness and preference for online social interaction is spurious.”

2. Sum et al published a paper in 2008 with the title: “Internet use and loneliness in older adults”. Dr Sigman chose not to quote this paper. Why not? I don’t know, although it does contain the line “greater use of the Internet as a communication tool was associated with a lower level of social loneliness.”

3. Subrahmanyam et al published a paper in 2007 called “Adolescents on the net: Internet use and well-being.” It features the line “loneliness was not related to the total time spent online, nor to the time spent on e-mail.” Dr Sigman ignored it.

And so on.

I am not claiming to give you a formal, balanced, systematic review in these examples, I am simply demonstrating to you the way that Dr Sigman has ignored inconvenient evidence in order to build his case.

Was this an oversight? Were these papers hard to find? I think not.

[Addressing the media’s role in this] I think journalists like sensational and improbable stories. The trouble is they know they’re making entertainment, but the public thinks they’re reading news.

Dr. Robert Burton is a neurologist and the author of On Being Certain: Believing You Are Right Even When You’re Not, and a frequent contributor to Salon on brain and mind topics.

I very much agree with Susan Greenfield’s comments.

I’m particularly struck by the decreasing degree of empathy that young folks have toward each other and toward society in general. Given the virtual nature of the relationships developed through electronic interactions, the no-holds-barred anonymity, the lack of personal accountability, and the loss of other modes of judgment such as nuances of speech, body language, and perhaps even the presence of pheromones in the air, it isn’t surprising that typed words represent a different form of language than the spoken word.

A text message between a group is not the same as a story told around a campfire. As a result of these new electronic devices, the social bond of communication is being drastically altered.

Moral decisions made at a distance are quite different than face-to-face judgments. As kids become more skillful at impersonal, at-a-distance judgments, something intrinsically human will be lost. Hand-to-hand combat is not the same as firing a missile from the security of a far-off bunker. As eye contact goes, so goes world order.

A couple years ago I wrote a piece on the changing nature of poker as a result of kids learning online rather than live. I got an enormous amount of negative feedback from young players, indicating that I was just a sore loser, or a wimpy crybaby. It was as though the article underscored the very point I was trying to make.

Dr. Gary Small is professor of psychiatry and behavioral sciences at UCLA and the author of iBrain: Surviving the Technological Alteration of the Modern Mind.

Oxford Professor Lady Greenfield warned the British House of Lords of the dangers of Internet social networking to young developing minds.  Laptops, PDAs, iPods, smart phones and other technological gadgets seem to be taking over our purses and pockets with no end in sight.  But could they be altering our families and affecting the way we interact with each other?

Investigators at the University of Minnesota found that traditional family meals have a positive impact on adolescent behavior.  In a 2006 survey of nearly 100,000 teenagers across 25 states, a higher frequency of family dinners was associated with more positive values and a greater commitment to learning.  Adolescents from homes having fewer family dinners were more likely to exhibit high-risk behaviors, including substance abuse, sexual activity, suicide attempts, violence, and academic problems.  In today’s fast-paced, technologically driven world, some people consider the traditional family dinner to be an insignificant, old-fashioned ritual.  Actually, it not only strengthens our neural circuitry for human contact (the brain’s insula and frontal lobe), but it also helps ease the stress we experience in our daily lives, protecting the medial temporal regions that control emotion and memory.

Many of us remember when dinnertime regularly brought the nuclear family together at the end of the day – everyone having finished work, homework, play, and sports.  Parents and children relaxed, shared their day’s experiences, kept up with each other’s lives, and actually made eye contact while they talked.  

Now, dinnertime tends to be a much more harried affair.  What with emailing, video chatting, and TVs blaring, there is little time set aside for family discussion and reflection on the day’s events.  Conversations at meals sometimes resemble instant messages where family members pop in with comments that have no linear theme.  In fact, if there is time to have a family dinner, many family members tend to eat quickly and run back to their own computer, video game, cell phone or other digital activity. 

Although the traditional dinner can be an important part of family life, whenever surly teenagers, sulking kids, and tired over-worked parents get together at the dining table, conflicts can emerge and tensions may arise.  However, family dinners still provide a good setting for children and adolescents to learn basic social skills in conversation, dining etiquette, and basic empathy. 

The other day I actually heard myself yelling to my teenage son, “Stop playing that darn video game and come down and watch TV with me.”  Our new technology allows us to do remarkable things – we can communicate through elaborate online social networks, get vast amounts of information in an instant, work and play more efficiently. 

The potential negative impact of new technology on the brain depends on its content, duration, and context.  To a certain extent, I think that the opportunities for developing the brain’s neural networks that control our face-to-face social skills – what many define as our humanity – are being lost or at least compromised, as families become more fractured. 

Maggie Jackson is author of  Distracted: The Erosion of Attention and the Coming Dark Age, and writes the “Balancing Acts” column for the Boston Globe.   

No one likes to be called a baby, whether they are age five or 35. That’s one reason why recent comments by British neuroscientist Susan Greenfield that today’s technologies may be “infantilizing the brain” are inspiring heated debate – and plentiful misunderstanding. I don’t agree with all that she said about virtual social relations, but she’s right to raise these fears. Only through well-reasoned public discussion and careful research can we begin to understand the impact of digital life on our social relations and on our cognition.

What did she say? In a statement to the House of Lords and in interviews, Lady Greenfield first pointed out that our environment shapes our highly plastic brains, and so it’s plausible that long hours online can affect us. She’s right. “Background” television is linked to attention-deficit symptoms in toddlers. High stress impedes medical students’ mental flexibility. I agree that “living in two dimensions,” as she puts it, will affect us.

As a result of video games and Facebooking, are we acting like babies, living for the moment, developing shorter attention spans? Again, she’s right to worry. Facebook and video games aren’t passive. Yet much of digital life is reactive. We settle for push-button Googled answers, immerse ourselves in “do-over” alternate realities, spend our days racing to keep up with Twitter, email and IM. This way of life doesn’t promote vision, planning, long-term strategizing, tenacity – skills sorely needed in this era.

Consider this issue as an imbalance of attention. Humans need to stay tuned to their environment in order to survive. We actually get a little adrenaline jolt from new stimuli. But humans need to pursue their goals, whether that means locating dinner or hunting for a new job. By this measure, our digital selves may be our lower-order selves. As ADHD researcher Russell Barkley points out, people with the condition pursue immediate gratification, have trouble controlling themselves and are “more under the control of external events than of mental representations about time and the future.” He writes that ADHD is a disorder of “attention to the future and what one needs to do to prepare for its arrival.” Today, as we skitter across our days, jumping to respond to every beep and ping and ever-craving the new, are we doing a good job preparing for the future?

Finally, Lady Greenfield spoke about two types of social diffusion prevalent in digital living. First, she correctly points out that today’s fertile virtual connectivity has a dark side: it’s difficult to go deeply when one is juggling ever-more relationships. This is both common sense, and backed up by research showing that as social networks expand, visits and telephone calls drop, while email rises. Second, Lady Greenfield observed how virtuality distances us from the “messiness” and “unpredictability” of face-to-face conversations. In other words, digital communications can weaken the very fabric of social ties. As I wrote in my book Distracted, an increasingly virtual world risks downgrading the rich, complex synchronicity of human relations to paper-thin shadow play.

If it weren’t for the Net, I likely wouldn’t have found out about Lady Greenfield’s comments, nor been able to respond to them in this way. Yet going forward, we need to rediscover the value of digital gadgets as tools, rather than elevating them to social and cognitive panacea. Lady Greenfield is right: we need to grow up and take a more mature approach to our tech tools.

You can find more input from Maggie Jackson via her website.


Filed under About Neuroscience, About Research

Discussing Delusion: The Phantom Doubles Edition

Delusions are a marker of scientific advancement.  In one era, a given delusion is considered a case of demonic possession. In another era, the same delusion is attributed generically to insanity or dementia.  And in another, more recent era, it is linked to a specific brain injury or genetic etiology.

This progressively more precise and thorough understanding of what a delusion is parallels many other neuroscientific advancements, but it is also distinct because of its visibility.  We can “see” delusions play out in others; they are described in literature, depicted in movies, and discussed as a standard part of our cultural vernacular.

One such delusion is Capgras’ Syndrome (aka ‘Phantom Doubles Syndrome’), in which the sufferer believes a family member or friend has been replaced by an impostor who has the exact characteristics of the original. Worse still, the sufferer may believe him/herself to be the impostor. But this goes even beyond believing someone to be an impostor — a person with Capgras’ Syndrome sees an impostor, which affirms and strengthens the belief.  When the impostor is oneself, a person with Capgras’ Syndrome may remove all mirrors in their house to avoid seeing a doppelganger looking back.

Courtesy of PsychNET, these are a few of the delusion’s characteristics:

The person is convinced that one or several persons known by the sufferer have been replaced by a double, an identical looking impostor.

The patient sees true and double persons.

It may extend to animals and objects.

The person is conscious of the abnormality of these perceptions. There is no hallucination.

The double is usually a key figure for the person at the time of onset of symptoms. If married, always the husband or wife accordingly.

The causes of the delusion are not entirely clear or agreed upon, but some linkages have been established.

It has been reported that 35% of cases of Capgras’ Syndrome and related substitution delusions have an organic etiology. Some researchers believe that Capgras’ Syndrome can be blamed on a relatively simple failure of normal recognition processes following brain damage from a stroke, drug overdose, or some other cause; the disorder can also follow accidents that damage the right side of the brain. Even so, controversies persist about its etiology: some researchers explain it with organic factors, others with psychodynamic factors, and still others with a combination of the two.

The video below is a clip from the BBC show “Phantoms in the Brain,” in which V.S. Ramachandran discusses Capgras’ Syndrome and two other types of delusional disorders, particularly in light of what is known about the brain’s visual processing system.  It’s roughly 10 minutes long and is part two of five; all parts are available on YouTube.


Filed under About Neuroscience

Get Your Free Brain Book Here!

The Society for Neuroscience offers a selection of free content on its website, and among the offerings, Brain Facts: A Primer on the Brain and Nervous System is mighty useful.  It’s a 74-page introduction to a variety of brain topics, designed for teachers to use in classrooms and for lay people who want to get a handle on their noggins. You can download it for free right here.

Here’s a rundown of the contents with links to each section:

The Neuron: Neurotransmitters and Neuromodulators | Second Messengers

Brain Development: Birth of Neurons and Brain Wiring | Paring Back | Critical Periods

Sensation and Perception: Vision | Hearing | Taste and Smell | Touch and Pain

Learning, Memory, and Language: Learning and Memory | Language

Movement

Sleep: Brain Activity During Sleep | Sleep Disorders | How is Sleep Regulated?

Stress: The Immediate Response | Chronic Stress

Aging: Aging Neurons | Intellectual Capacity

Neural Disorders: Advances and Challenges: Addiction | Alzheimer’s Disease | Amyotrophic Lateral Sclerosis | Anxiety Disorders | Attention Deficit Hyperactivity Disorder | Autism | Bipolar Disorder | Brain Tumors | Down Syndrome | Dyslexia | Huntington’s Disease | Major Depression | Multiple Sclerosis | Neurological AIDS | Neurological Trauma | Pain | Parkinson’s Disease | Schizophrenia | Seizures and Epilepsy | Stroke | Tourette Syndrome

New Diagnostic Methods: Imaging Techniques | Gene Diagnosis

Potential Therapies: New Drugs | Trophic Factors | Engineered Antibodies | Small Molecules and RNAs | Cell and Gene Therapy

Neuroethics

You can also download a free 10-page publication called Neuroscience Core Concepts right here.

By the way, if you didn’t already know, March 16-22 is Brain Awareness Week.  Check out the Society for Neuroscience website for more information.


Filed under About Neuroscience

Survival of the Kindest: An Interview with Dacher Keltner

I have an interview with Dacher Keltner, author of Born to Be Good, in Scientific American Mind Matters today.

Keltner is the director of the UC Berkeley Social Interaction Laboratory, where he leads research on the biological and evolutionary origins of human goodness, with a special concentration on compassion, awe, love, and beauty, as well as on power, status, social class, and the nature of moral intuitions. He’s also the founder of the Greater Good Science Center and co-editor of Greater Good Magazine.

Plus, he’s chummy with the Dalai Lama.

It was a pleasure interviewing him.

Link to interview


Filed under About Neuroscience, Interviews