Monthly Archives: March 2009

Why ‘Many’ Might be the Loneliest Number: An Interview with John Cacioppo

Right now we enjoy more ways to stay connected with people across the globe than at any time in history. What a remarkable irony, then, that “loneliness” is still a topic finding its way into headlines, perhaps now more than ever. How is it that oceans of distance are no longer an obstacle to communicating, and yet a third of us report being lonely in recent studies, a number that appears to be increasing?

University of Chicago neuroscientist John Cacioppo has dedicated his career to finding the answers. With co-author William Patrick, Cacioppo wrote the definitive book on the topic: Loneliness: Human Nature and the Need for Social Connection, which has set the stage for studying loneliness and its effects in a new light. Dr. Cacioppo recently spent some time discussing his research with David DiSalvo at Neuronarrative.

 
What originally interested you about loneliness as a field of study?

As a social species, we create emergent organizations beyond the individual – structures that range from dyads, families, and groups to cities, civilizations, and cultures. These emergent structures evolved hand in hand with neural and hormonal mechanisms to support them because the consequent social behaviors helped these organisms survive, reproduce, and care for offspring sufficiently long that they too survived to reproduce. To study the effects of social connection, we compared individuals who were socially connected with those who were socially isolated.

Humans are such meaning-making creatures that we quickly determined that perceived social isolation was more critical in most instances than objective social isolation, so we compared people who felt they were socially isolated (i.e., lonely) with those who did not feel isolated (i.e., nonlonely). To be sure the effects we were finding were attributable to loneliness, we also performed experiments in which we randomly assigned people to conditions that induce feelings of high or low loneliness, and we performed longitudinal studies to compare the effects on individuals when they felt lonely and when they did not. Bill Patrick and I wrote Loneliness: Human Nature and the Need for Social Connection because the results of this research suggested a very different view of human nature than the rugged, rational individualist we have seen championed for so long.

 

When most people think of loneliness, they think of someone sitting alone in a room with no one to talk to, leading to a stiff case of the blues. How different is your definition of loneliness from the popular conception?

Loneliness is the feeling of social isolation. Although being isolated from others may increase the likelihood of feeling lonely, being alone is not the same as feeling alone. Writers, for instance, spend a great deal of time alone, but they may not feel alone because they have their colleagues, characters, and readers in mind as they work on a story. College freshmen who leave family and friends for the first time to attend school, on the other hand, may be around more people at college than at home, but they often feel socially isolated because they do not feel well connected to others.

 

What do we know about the effects of loneliness on the brain?

We now know quite a lot, though the full story is still unfolding. For instance, research suggests that social rejection and social pain are associated with the activation of some of the same regions of the brain that are active in physical pain. Using functional MRI, we recently found that there are at least two neural mechanisms differentiating social perception in lonely and nonlonely young adults. For pleasant depictions, lonely individuals appear to be less rewarded by social stimuli, as evidenced by weaker activation of the ventral striatum to pictures of people than of objects, whereas nonlonely individuals showed stronger activation of the ventral striatum to pictures of people than of objects.

These findings fit nicely with behavioral research showing that lonely individuals find pleasant daily social interactions to be less rewarding than nonlonely individuals. For unpleasant depictions, lonely individuals were characterized by greater activation of the visual cortex to pictures of people than of objects, consistent with their attention being drawn more to the distress of others, but nonlonely individuals showed greater activation of the temporoparietal junction (TPJ), consistent with their reflecting less on their own perspective and more on the perspective of those in distress. These findings help explain why lonely individuals can act in a more egocentric fashion than nonlonely individuals even though lonely individuals want to connect with others.

 

You recently presented at the American Association for the Advancement of Science annual meeting on compelling findings about links between loneliness and physiological problems. What were some of the most surprising of these findings?

The health implications of loneliness are really at the core of our recent book. We have found loneliness to be associated with heightened resistance to blood flow throughout the body; elevated blood pressure as one ages; heightened hypothalamic-pituitary-adrenocortical activity, as indexed by higher morning levels of adrenocorticotropic hormone and larger morning rises in the stress hormone cortisol; less salubrious sleep; a diminished ability to exert self-control and avoid personal temptations; increased depressive symptomatology even when controlling for current depression; poorer health behaviors such as diet and exercise; and higher allostatic load (peripheral biological markers of wear and tear on the body).

But the most surprising finding may be that loneliness is associated with altered gene expression in the nucleus of immune cells: specifically, the under-expression of genes bearing anti-inflammatory glucocorticoid response elements (GREs) and the over-expression of genes bearing response elements for pro-inflammatory NF-κB/Rel transcription factors. These effects may be mediated by the effects of loneliness on neuroendocrine activity, which in turn operates on the immune cells.

 

One revelation about loneliness to me is that it’s subject to genetic variation. How big a role does this play?

Loneliness is about 50% heritable, but this does not mean loneliness is determined by genes; an equal amount is due to situational factors. What appears to be heritable is the intensity of pain felt when one feels socially isolated. Neither sensitivity nor insensitivity is a problem in itself; what is important is to create a social environment that matches one’s predisposition toward feeling social pain. People who are sensitive to possible social disconnection tend to feel lonely more frequently or more intensely than people who are relatively insensitive to it. Whether or not one is socially disconnected, however, depends on the social context and the social world people create for themselves. If one is especially sensitive, then it may benefit one’s health and well-being to prioritize the development and maintenance of a few high-quality relationships.
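As a rough illustration of where an estimate like “50% heritable” comes from (a textbook twin-study calculation, not a description of Cacioppo’s own analysis), Falconer’s formula infers heritability from how much more similar identical (MZ) twins are than fraternal (DZ) twins on a trait:

h² = 2(r_MZ − r_DZ)

With hypothetical loneliness correlations of r_MZ = 0.48 for identical twins and r_DZ = 0.23 for fraternal twins, the estimate would be h² = 2(0.48 − 0.23) = 0.50, or about 50%, with the remainder attributed to environmental and situational factors.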

 

In the next several years, do you see our society becoming more prone to loneliness or less? If more, is there anything we can do to straighten the course?

People interact with more people now than in the 20th century, and the distances at which people interact are greater than ever before. But loneliness is more strongly related to the quality than to the number of interactions, as anyone who has rushed by family members en route to a long, traffic-congested commute to work can attest. We regard loneliness as a biological construct, a state that evolved as a signal to change behavior – very much like hunger, thirst, or physical pain – that serves to help one avoid damage and promote the transmission of genes to the gene pool. In the case of loneliness, the signal is a prompt to renew the connections we need to survive and prosper. Viewed in this way, loneliness – whether our own or that of our friends and family – can signal us to re-prioritize how we are spending our time so that we can nurture our connections with those in our lives who are especially meaningful.

 

What are you working on now?

The dominant metaphor for the scientific study of the human mind during the latter half of the 20th century was the computer – a solitary device with massive information-processing capacities. Our studies of loneliness left us unsatisfied with this metaphor. Computers today are massively interconnected devices with capacities that extend far beyond the resident hardware and software of a solitary machine. It became apparent to us that the telereceptors of the human brain have provided wireless broadband interconnectivity to humans for millennia. Just as computers have capacities and processes that are transduced through but extend far beyond the hardware of a single computer, the human brain has evolved to promote social and cultural capacities and processes that are transduced through but extend far beyond a solitary brain.

To understand the full capacity of humans, one needs to appreciate not only the memory and computational power of the brain but also its capacity for representing, understanding, and connecting with other individuals. That is, one needs to recognize that we have evolved a powerful, meaning-making social brain. This social brain is not always a benevolent brain, however. Our research certainly says humans have the capacity to be driven by ruthless competition and narrow self-interest, but it also shows that we have an additional, wondrous capacity to cooperate, to care about others as well as ourselves, and to compete in fair and mutually beneficial ways. As a society, it may be important to find ways to promote the latter over the former in individuals. We are now seeking to gain a better understanding of the social brain and of what sociocultural norms, rules, or sanctions promote collective actions that are appropriate for the problems we are facing in the 21st century.

Cacioppo, J., & Hawkley, L. (2003). Social isolation and health, with an emphasis on underlying mechanisms. Perspectives in Biology and Medicine, 46(3). DOI: 10.1353/pbm.2003.0049

Hawkley, L., Masi, C., Berry, J., & Cacioppo, J. (2006). Loneliness is a unique predictor of age-related differences in systolic blood pressure. Psychology and Aging, 21(1), 152-164. DOI: 10.1037/0882-7974.21.1.152




Filed under Interviews

For the Brain, Keeping it Real Means Keeping it Relevant

How does the brain distinguish between reality and fiction — and more importantly, does the brain distinguish between reality and fiction?

These questions served as the jumping-off point for a new fMRI study that attempted to identify how the brain responds when exposed to contexts involving real people or fictional characters. It followed up on a similar study conducted in 2008 entitled “Meeting George Bush versus meeting Cinderella: the neural response when telling apart what is real from what is fictional in the context of our reality.”

In the present study, researchers measured subjects’ brain activity (specifically in the anterior medial prefrontal and posterior cingulate cortices, or amPFC and PCC) while they were exposed to contexts involving three groups: (1) family and friends (high relevance), (2) famous people (medium relevance), and (3) fictional characters (low relevance). The working hypothesis was that exposure to contexts with a higher degree of relevance would result in stronger activation of the amPFC and PCC.

In previous studies, the amPFC and PCC were implicated in self-referential thinking and autobiographical memory retrieval. The idea behind the present hypothesis is that information about real people, as opposed to fictional characters, is coded in the brain in such a way that it elicits a self-referential and autobiographical response. The more personally relevant the context is, the stronger the response.

The results were consistent with the hypothesis, showing a gradient pattern of activation in which higher relevance entities were associated with stronger amPFC and PCC responses (as shown in the graphic below).  This result also held true for several other brain regions to varying degrees.  

In other words, for our brains, reality equals relevance. 

 

[Figure: amPFC and PCC activation increasing with the personal relevance of the depicted person, from Abraham & von Cramon (2009), PLoS ONE]
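To make the gradient prediction concrete, here is a minimal sketch of the kind of linear-trend test the hypothesis implies; the beta values below are invented per-subject condition estimates, not the study’s data, and this is not the authors’ actual analysis pipeline.

# A minimal sketch (invented data, not the authors' pipeline) of testing
# the gradient hypothesis: responses should rise with personal relevance.
import numpy as np
from scipy import stats

# Rows: subjects; columns: fiction (low), famous (medium), friends (high)
betas = np.array([
    [0.10, 0.25, 0.60],
    [0.05, 0.30, 0.55],
    [0.12, 0.22, 0.48],
])

contrast = np.array([-1.0, 0.0, 1.0])  # linear trend across relevance levels
scores = betas @ contrast              # one trend score per subject

# One-sample t-test: is the mean trend score reliably above zero?
t, p = stats.ttest_1samp(scores, 0.0)
print(f"trend t = {t:.2f}, p = {p:.4f}")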

 

This study is interesting because it sparks a new round of questions about personal “relevance.” For example, in a social networking context, how is relevance defined? If you never meet someone face-to-face but talk to them often, can they still be as “relevant” to you as someone you see and talk to all the time? I foresee a future fMRI study that examines brain activity while subjects communicate online with people they talk to frequently but have never met in person.

It would also be interesting to know whether the brain’s reality-fiction differentiation system can be short-circuited.  If the brain suffers damage to the amPFC and/or PCC, would one’s ability to determine degrees of personal relevance be handicapped?  Is it possible that people who believe themselves to have closer relationships with others than they actually do suffer a deficit in this area? 

This might be useful for a bit of pop culture analysis as well, such as someone believing they “know” a person on a reality TV show. The very nature of reality TV is set up to elicit this sort of response by supplying personal information about people on the show, which creates a sense of “knowing” them. In light of this study, I’m seeing that tactic as a way of tricking the brain into encoding information as more relevant than it deserves to be.
Abraham, A., & von Cramon, D. Y. (2009). Reality = relevance? Insights from spontaneous modulations of the brain’s default network when telling apart reality from fiction. PLoS ONE. DOI: 10.1371/journal.pone.0004741

Link to the study on PLoS ONE



Filed under About Research

Should We Be Afraid of Nanotechnology?

Scientists have been raising red flags about nanotechnology threats for the last few years, with increasing urgency. Though the technology is full of promise (amazing advances in medical care, for example; demand for nanotechnology in healthcare is expected to increase nearly 50% in 2009), as with most technologies there’s also potential for abuse, with results we can’t even really imagine yet (or, as with most unknowns, we imagine the worst).

Here’s a piece on potential nanotechnology threats to food, one on nanoparticle threats to health, an interview discussing a variety of nano-related concerns, and a comprehensive report that tackles all of the above, including nano-environmental hazards. The literature out there ranges from measured to alarmist and all points in between.

And now, Mental Floss has brought comedy to bear on the topic, and brilliantly. 

hat tip: Wired Science


Filed under Uncategorized

Four Authors Respond to the Social Networking Controversy

When neuroscientist and Oxford professor Susan Greenfield warned the British House of Lords about the alleged dangers of social networking, she touched off a firestorm that is still smoldering. Greenfield made several points, some of which have been misrepresented in subsequent news coverage and others of which are clearly debatable – but it’s beyond dispute that she hit a nerve, and her words are likely a foretelling of a larger debate still to come.

At issue is whether Facebook, Bebo, MySpace, Twitter, and all the other social networking sites, gadgets, and tools are adversely affecting our brains (more specifically, children’s brains) and infantilizing our relationships by diminishing our ability to interact in meaningful ways. A related argument is that social networking promotes loneliness, which in turn negatively affects our health.

To further explore the arguments for and against Greenfield’s position, I asked four authors who have addressed this topic from different angles to respond to the controversy. 

Dr. Ben Goldacre, author of Bad Science, has been an outspoken critic of Greenfield, and recently debated Dr. Aric Sigman–whose research has been featured in the social networking controversy–on the BBC (his comments below also appeared on his blog and have been quoted here with his permission).   

Professor Susan Greenfield is the head of the Royal Institution and the person behind the Daily Mail headline “Social websites harm children’s brains: Chilling warning to parents from top neuroscientist”, which has spread around the world (like the last time she said it, and the time before that).

It is my view that Professor Greenfield has been abusing her position as a professor, and head of the Royal Institution, for many years now, using these roles to give weight to her speculations and prejudices in a way that is entirely inappropriate.

We are all free to have fanciful ideas. Professor Greenfield’s stated aim, however, is to improve the public’s understanding of science, and yet repeatedly she appears in the media making wild headline-grabbing claims, without evidence, all the while telling us repeatedly that she is a scientist. By doing this, the head of the RI grossly misrepresents what it is that scientists do, and indeed the whole notion of what it means to have empirical evidence for a claim. It makes me quite sad, when the public’s understanding of science is in such a terrible state, that this is one of our most prominent and well-funded champions.

Then there was Dr Aric Sigman. He is the man behind the “Facebook causes cancer” story in the Daily Mail, and many other similar stories over the years (as part of the Daily Mail’s ongoing oncological ontology project). His article can be read in full online here as a PDF. [In a debate on the BBC] I explained that he had cherry-picked the evidence in his rather fanciful essay, selectively only mentioning the evidence that supports his case, and ignoring the evidence that goes against it.

Cherry picking is a common crime in the world of pseudoscience – whether it is big pharma or everyday cranks – and to me it is a serious crime against science, because by selectively quoting evidence, you can make almost anything seem either dangerous, or beneficial.

Dr Sigman’s case is that social networking leads to loneliness, and loneliness leads to biological harm (he doesn’t mention cancer specifically, incidentally). I didn’t get near the second half of his argument, though, because he was so spectacularly misleading on the first that it became irrelevant.

I claim no expertise on the question of whether social networking and Internet use is linked to loneliness. I merely have a basic ability to use searchable databases of academic evidence, like anybody else. If you go to PubMed and type in:

loneliness [ti] AND internet

You will get 12 results.  Many of them do not support Dr Sigman’s theory. These are the ones he completely ignores.

1. Caplan SE published a paper in 2007 entitled: “Relations among loneliness, social anxiety, and problematic Internet use.” Dr Sigman did not quote this paper in his article. Why not? “The results support the hypothesis that the relationship between loneliness and preference for online social interaction is spurious.”

2. Sum et al published a paper in 2008 with the title: “Internet use and loneliness in older adults“. Dr Sigman chose not to quote this paper. Why not? I don’t know, although it does contain the line “greater use of the Internet as a communication tool was associated with a lower level of social loneliness.”

3. Subrahmanyam et al published a paper in 2007 called “Adolescents on the net: Internet use and well-being.” It features the line “loneliness was not related to the total time spent online, nor to the time spent on e-mail.” Dr Sigman ignored it.

And so on.
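[For readers who want to rerun this search today, here is a minimal sketch using NCBI’s public E-utilities interface; the endpoint and parameters below are standard, but the hit count will likely differ from the 12 results of 2009.]

import json
from urllib.request import urlopen
from urllib.parse import urlencode

# Query PubMed via NCBI E-utilities with the same search string
params = urlencode({
    "db": "pubmed",
    "term": "loneliness [ti] AND internet",
    "retmode": "json",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]

print("hits:", result["count"])
print("PMIDs:", result["idlist"])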

I am not claiming to give you a formal, balanced, systematic review in these examples, I am simply demonstrating to you the way that Dr Sigman has ignored inconvenient evidence in order to build his case.

Was this an oversight? Were these papers hard to find? I think not.

[Addressing the media’s role in this] I think journalists like sensational and improbable stories. The trouble is they know they’re making entertainment, but the public thinks they’re reading news.

Dr. Robert Burton is a neurologist and the author of On Being Certain: Believing You Are Right Even When You’re Not, and a frequent contributor to Salon on brain and mind topics.

I very much agree with Susan Greenfield’s comments.

I’m particularly struck by the decreasing degree of empathy that young folks have toward each other and toward society in general. Given the virtual nature of relationships developed through electronic interactions (the no-holds-barred anonymity, the lack of personal accountability, and the loss of other modes of judgment such as nuances of speech, body language, and perhaps even the presence of pheromones in the air), it isn’t surprising that typed words represent a different form of language than the spoken word.

A text message between a group is not the same as a story told around a campfire. As a result of these new electronic devices, the social bond of communication is being drastically altered.

Moral decisions made at a distance are quite different from face-to-face judgments. As kids become more skillful at impersonal, at-a-distance judgments, something intrinsically human will be lost. Hand-to-hand combat is not the same as firing a missile from the security of a far-off bunker. As eye contact goes, so goes world order.

A couple of years ago I wrote a piece on the changing nature of poker as a result of kids learning the game online rather than live. I got an enormous amount of negative feedback from young players, indicating that I was just a sore loser or a wimpy crybaby. It was as though the article underscored the very point I was trying to make.

Dr. Gary Small is professor of psychiatry and behavioral sciences at UCLA and the author of iBrain: Surviving the Technological Alteration of the Modern Mind.

Oxford Professor Lady Greenfield warned the British House of Lords of the dangers of Internet social networking to young developing minds.  Laptops, PDAs, iPods, smart phones and other technological gadgets seem to be taking over our purses and pockets with no end in sight.  But could they be altering our families and affecting the way we interact with each other?

Investigators at the University of Minnesota found that traditional family meals have a positive impact on adolescent behavior. In a 2006 survey of nearly 100,000 teenagers across 25 states, a higher frequency of family dinners was associated with more positive values and a greater commitment to learning. Adolescents from homes with fewer family dinners were more likely to exhibit high-risk behaviors, including substance abuse, sexual activity, suicide attempts, violence, and academic problems. In today’s fast-paced, technologically driven world, some people consider the traditional family dinner an insignificant, old-fashioned ritual. Actually, it not only strengthens our neural circuitry for human contact (the brain’s insula and frontal lobe), but it also helps ease the stress we experience in our daily lives, protecting the medial temporal regions that control emotion and memory.

Many of us remember when dinnertime regularly brought the nuclear family together at the end of the day – everyone having finished work, homework, play, and sports.  Parents and children relaxed, shared their day’s experiences, kept up with each other’s lives, and actually made eye contact while they talked.  

Now, dinnertime tends to be a much more harried affair.  What with emailing, video chatting, and TVs blaring, there is little time set aside for family discussion and reflection on the day’s events.  Conversations at meals sometimes resemble instant messages where family members pop in with comments that have no linear theme.  In fact, if there is time to have a family dinner, many family members tend to eat quickly and run back to their own computer, video game, cell phone or other digital activity. 

Although the traditional dinner can be an important part of family life, whenever surly teenagers, sulking kids, and tired, overworked parents get together at the dining table, conflicts can emerge and tensions may arise. However, family dinners still provide a good setting for children and adolescents to learn basic social skills in conversation, dining etiquette, and basic empathy.

The other day I actually heard myself yelling to my teenage son, “Stop playing that darn video game and come down and watch TV with me.” Our new technology allows us to do remarkable things – we can communicate through elaborate online social networks, get vast amounts of information in an instant, and work and play more efficiently.

The potential negative impact of new technology on the brain depends on its content, duration, and context.  To a certain extent, I think that the opportunities for developing the brain’s neural networks that control our face-to-face social skills – what many define as our humanity – are being lost or at least compromised, as families become more fractured. 

Maggie Jackson is the author of Distracted: The Erosion of Attention and the Coming Dark Age, and writes the “Balancing Acts” column for the Boston Globe.

No one likes to be called a baby, whether they are age five or 35. That’s one reason why recent comments by British neuroscientist Susan Greenfield that today’s technologies may be “infantilizing the brain” are inspiring heated debate – and plentiful misunderstanding. I don’t agree with all that she said about virtual social relations, but she’s right to raise these fears. Only through well-reasoned public discussion and careful research can we begin to understand the impact of digital life on our social relations and on our cognition.

What did she say? In a statement to the House of Lords and in interviews, Lady Greenfield first pointed out that our environment shapes our highly plastic brains, and so it’s plausible that long hours online can affect us. She’s right. “Background” television is linked to attention-deficit symptoms in toddlers. High stress impedes medical students’ mental flexibility. I agree that “living in two dimensions,” as she puts it, will affect us.

As a result of video games and Facebooking, are we acting like babies, living for the moment, developing shorter attention spans? Again, she’s right to worry. Facebook and video games aren’t passive. Yet much of digital life is reactive. We settle for push-button Googled answers, immerse ourselves in “do-over” alternate realities, spend our days racing to keep up with Twitter, email and IM. This way of life doesn’t promote vision, planning, long-term strategizing, tenacity – skills sorely needed in this era.

Consider this issue as an imbalance of attention. Humans need to stay tuned to their environment in order to survive. We actually get a little adrenaline jolt from new stimuli. But humans need to pursue their goals, whether that means locating dinner or hunting for a new job. By this measure, our digital selves may be our lower-order selves. As ADHD researcher Russell Barkley points out, people with the condition pursue immediate gratification, have trouble controlling themselves and are “more under the control of external events than of mental representations about time and the future.” He writes that ADHD is a disorder of “attention to the future and what one needs to do to prepare for its arrival.” Today, as we skitter across our days, jumping to respond to every beep and ping and ever-craving the new, are we doing a good job preparing for the future?

Finally, Lady Greenfield spoke about two types of social diffusion prevalent in digital living. First, she correctly points out that today’s fertile virtual connectivity has a dark side: it’s difficult to go deep when one is juggling ever more relationships. This is both common sense and backed up by research showing that as social networks expand, visits and telephone calls drop while email rises. Second, Lady Greenfield observed how virtuality distances us from the “messiness” and “unpredictability” of face-to-face conversations. In other words, digital communications can weaken the very fabric of social ties. As I wrote in my book Distracted, an increasingly virtual world risks downgrading the rich, complex synchronicity of human relations to paper-thin shadow play.

If it weren’t for the Net, I likely wouldn’t have found out about Lady Greenfield’s comments, nor been able to respond to them in this way. Yet going forward, we need to rediscover the value of digital gadgets as tools, rather than elevating them into social and cognitive panaceas. Lady Greenfield is right: we need to grow up and take a more mature approach to our tech tools.

You can find more input from Maggie Jackson via her website.



Filed under About Neuroscience, About Research

Discussing Delusion: The Phantom Doubles Edition

How we account for delusions is a marker of scientific advancement. In one era, a given delusion is considered a case of demonic possession. In another era, the same delusion is attributed generically to insanity or dementia. And in another, more recent era, the same delusion is linked to a specific brain injury or genetic etiology.

This progressively more precise and thorough understanding of what a delusion is parallels many other neuroscientific advancements, but it is also distinct because of its visibility. We can “see” delusions play out in others, and we find them described in literature, depicted in movies, and discussed as a standard part of our cultural vernacular.

One such delusion is Capgras’ Syndrome (aka ‘Phantom Doubles Syndrome’), in which the sufferer believes a family member or friend has been replaced by an impostor who has the exact characteristics of the original. Worse still, the sufferer may believe him/herself to be the impostor. But this goes even beyond believing someone to be an impostor — a person with Capgras’ Syndrome sees an impostor, which affirms and strengthens the belief.  When the impostor is oneself, a person with Capgras’ Syndrome may remove all mirrors in their house to avoid seeing a doppelganger looking back.

Courtesy of PsychNET, these are a few of the delusion’s characteristics:

The person is convinced that one or several persons known by the sufferer have been replaced by a double, an identical looking impostor.

The patient sees both the true person and the double.

It may extend to animals and objects.

The person is conscious of the abnormality of these perceptions. There is no hallucination.

The double is usually a key figure for the person at the time of onset of symptoms. If the person is married, it is typically the husband or wife.

The causes of the delusion are not entirely clear or agreed upon, but some linkages have been established.

It has been reported that 35% of cases of Capgras’ Syndrome and related substitution delusions have an organic etiology. Some researchers believe that Capgras’ Syndrome can be blamed on a relatively simple failure of normal recognition processes following brain damage from a stroke, drug overdose, or some other cause. The disorder can also follow accidents that cause damage to the right side of the brain. Controversy thus persists about the etiology of Capgras’ Syndrome: some researchers explain it with organic factors, others with psychodynamic factors, and still others with a combination of the two.

The video below is a clip from the BBC show “Phantoms in the Brain” in which V.S. Ramachandran discusses Capgras’ Syndrome and two other types of delusional disorders, particularly in light of what is known about the brain’s visual processing system. It’s roughly 10 minutes long and is part two of five; all parts are available on YouTube.



Filed under About Neuroscience

Hello, I’m Evaluating You

I’m intrigued by a study that came out recently in Nature Neuroscience on the neural circuitry of first impressions. Researchers from New York University and Harvard joined forces to identify what neural systems are in play when we first meet someone.

This has been the subject of quite a lot of observational research (some of which was brought to light in Malcolm Gladwell’s book, Blink), but this study was designed to delve deeper.  Here’s a summary of the methodology from EurekAlert:

To explore the process of first impression formation, the researchers designed an experiment in which they examined brain activity while participants made initial evaluations of fictional individuals. The participants were given written profiles of 20 individuals implying different personality traits. The profiles, presented along with pictures of these fictional individuals, included scenarios indicating both positive (e.g., intelligent) and negative (e.g., lazy) traits in their depictions.

After reading the profiles, the participants were asked to evaluate how much they liked or disliked each profiled individual. These impressions varied depending on how much each participant valued the different positive and negative traits conveyed. For instance, if a participant liked intelligence more than they disliked laziness, he or she might form a positive impression. During this impression formation period, participants’ brain activity was observed using functional magnetic resonance imaging (fMRI). Based on the participants’ ratings, the researchers were able to determine the difference in brain activity when they encountered information that was more, as opposed to less, important in forming the first impression.
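To see how such idiosyncratic weighting works, consider a toy calculation; the traits and weights below are invented for illustration, not taken from the study.

# Toy illustration (invented numbers) of how different trait weights can
# yield opposite first impressions from the same written profile.
profile = {"intelligent": +1, "lazy": -1}  # traits implied by the profile

# How much each participant values each trait (hypothetical weights)
participants = {
    "A": {"intelligent": 0.9, "lazy": 0.4},  # prizes intelligence
    "B": {"intelligent": 0.3, "lazy": 0.8},  # strongly dislikes laziness
}

for name, weights in participants.items():
    score = sum(sign * weights[trait] for trait, sign in profile.items())
    verdict = "positive" if score > 0 else "negative"
    print(f"Participant {name}: impression score {score:+.1f} ({verdict})")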

The results: two areas of the brain showed significant activity during the coding of impression-relevant information. One was the amygdala, which previous research has linked to emotional learning about inanimate objects and to social evaluations of trust; the other was the posterior cingulate cortex, which has been linked to economic decision-making and the valuation of rewards.

In other words, both of these areas of the brain have been linked to value processing. While the line from study results to behavioral inference is never straight (and fraught with voodoo perils), this study suggests we’re all hardcore value processors even before “Hello” comes out of our mouths. The subjective evaluation we make when meeting someone new includes, to put it bluntly, what’s in it for us.

This is, of course, just an interpretation, and not nearly as cynical as it may seem. We’re all wired to evaluate others in large part on a trust basis, and trust is about rewards.  It makes sense that our brains begin this evaluation from the first moments we make someone’s acquaintance.

Schiller, D., Freeman, J., Mitchell, J., Uleman, J., & Phelps, E. (2009). A neural mechanism of first impressions. Nature Neuroscience. DOI: 10.1038/nn.2278



Filed under About Research

Noggin Raisers Vol.10

Is religion the “Xanax of the people”?  Neurocritic evaluates a study that suggests it just might be, with a few important caveats.

Cognitive fallacies never stop coming, and thankfully Mind Hacks never stops taking them on – case in point here.

Very solid analysis of the “is Facebook rotting our kids’ brains” controversy here at Neuroanthropology.

Breaking up is hard to do…or is it?  Jena Pincott reports that it’s not as hard as we think.

BPS Research Digest discusses a study that suggests fathers invest more in children who resemble them and mothers invest more in kids with their personality.

Neuroskeptic takes on research on the placebo effect and antidepressants.

Cognitive Daily asks “how distractible are you?” and gives us the scoop on working memory capacity.

Is the invisibility cloak of Harry Potter fame close to becoming reality?  Machines Like Us reports that it just might be.

Channel N Videos has up an intriguing video on gender and the brain.

Carl Zimmer at The Loom ignited a multi-post discussion, starting here, on why fact-checking is so important but sadly lacking in major newspapers.

And don’t forget, Brain Awareness Week is coming soon!


Filed under Noggin Raisers