When neuroscientist and Oxford Professor Susan Greenfield warned the British House of Lords about the alleged dangers of social networking, she touched off a firestorm that is still smoldering. Greenfield made several points, some that have been misrepresented in subsequent news, and others that are clearly debatable – but it’s beyond dispute that she hit a nerve, and her words are likely a foretelling of a larger debate still to come.
At issue is whether Facebook, Bebo, MySpace, Twitter and all other social networking sites, gadgets and tools are adversely affecting our brains–more specifically, children’s brains–and infantilizing our relationships by diminishing our ability to interact in meaningful ways. Related arguments hold that social networking promotes loneliness, which in turn harms our health.
To further explore the arguments for and against Greenfield’s position, I asked four authors who have addressed this topic from different angles to respond to the controversy.
Dr. Ben Goldacre, author of Bad Science, has been an outspoken critic of Greenfield, and recently debated Dr. Aric Sigman–whose research has been featured in the social networking controversy–on the BBC (his comments below also appeared on his blog and have been quoted here with his permission).
Professor Susan Greenfield is the head of the Royal Institution and the person behind the Daily Mail headline “Social websites harm children’s brains: Chilling warning to parents from top neuroscientist”, which has spread around the world (like the last time she said it, and the time before that).
It is my view that Professor Greenfield has been abusing her position as a professor, and head of the Royal Institution, for many years now, using these roles to give weight to her speculations and prejudices in a way that is entirely inappropriate.
We are all free to have fanciful ideas. Professor Greenfield’s stated aim, however, is to improve the public’s understanding of science, and yet repeatedly she appears in the media making wild headline-grabbing claims, without evidence, all the while telling us repeatedly that she is a scientist. By doing this, the head of the RI grossly misrepresents what it is that scientists do, and indeed the whole notion of what it means to have empirical evidence for a claim. It makes me quite sad, when the public’s understanding of science is in such a terrible state, that this is one of our most prominent and well funded champions.
Then there was Dr Aric Sigman. He is the man behind the “Facebook causes cancer” story in the Daily Mail, and many other similar stories over the years (as part of the Daily Mail’s ongoing oncological ontology project). His article can be read in full online as a PDF. [In a debate on the BBC] I explained that he had cherry picked the evidence in his rather fanciful essay, mentioning only the evidence that supports his case and ignoring the evidence that goes against it.
Cherry picking is a common crime in the world of pseudoscience – whether it is big pharma or everyday cranks – and to me it is a serious crime against science, because by selectively quoting evidence, you can make almost anything seem either dangerous, or beneficial.
Dr Sigman’s case is that social networking leads to loneliness, and loneliness leads to biological harm (he doesn’t mention cancer specifically, incidentally). I didn’t get near the second half of his argument, though, because he was so spectacularly misleading on the first that it became irrelevant.
I claim no expertise on the question of whether social networking and Internet use is linked to loneliness. I merely have a basic ability to use searchable databases of academic evidence, like anybody else. A simple PubMed search on the topic returns 12 results. Many of them do not support Dr Sigman’s theory. These are the ones he completely ignores.
1. Caplan SE published a paper in 2007 entitled: “Relations among loneliness, social anxiety, and problematic Internet use.” Dr Sigman did not quote this paper in his article. Why not? “The results support the hypothesis that the relationship between loneliness and preference for online social interaction is spurious.”
2. Sum et al published a paper in 2008 with the title: “Internet use and loneliness in older adults”. Dr Sigman chose not to quote this paper. Why not? I don’t know, although it does contain the line “greater use of the Internet as a communication tool was associated with a lower level of social loneliness.”
3. Subrahmanyam et al published a paper in 2007 called “Adolescents on the net: Internet use and well-being.” It features the line “loneliness was not related to the total time spent online, nor to the time spent on e-mail.” Dr Sigman ignored it.
And so on.
I am not claiming to give you a formal, balanced, systematic review in these examples, I am simply demonstrating to you the way that Dr Sigman has ignored inconvenient evidence in order to build his case.
Was this an oversight? Were these papers hard to find? I think not.
[Addressing the media's role in this] I think journalists like sensational and improbable stories. The trouble is they know they’re making entertainment, but the public thinks they’re reading news.
Dr. Robert Burton is a neurologist and the author of On Being Certain: Believing You Are Right Even When You’re Not, and a frequent contributor to Salon on brain and mind topics.
I very much agree with Susan Greenfield’s comments.
I’m particularly struck by the decreasing degree of empathy that young folks have toward each other and toward society in general. Given the virtual nature of relationships developed through electronic interactions–the no-holds-barred anonymity, the lack of personal accountability, and the loss of other cues for judgment such as nuances of speech, body language, and perhaps even the presence of pheromones in the air–it isn’t surprising that typed words represent a different form of language than the spoken word.
A text message between a group is not the same as a story told around a campfire. As a result of these new electronic devices, the social bond of communication is being drastically altered.
Moral decisions made at a distance are quite different from face-to-face judgments. As kids become more skillful at impersonal, at-a-distance judgments, something intrinsically human will be lost. Hand-to-hand combat is not the same as firing a missile from the security of a far-off bunker. As eye contact goes, so goes world order.
A couple of years ago I wrote a piece on the changing nature of poker as a result of kids learning the game online rather than live. I got an enormous amount of negative feedback from young players telling me I was just a sore loser, or a wimpy crybaby. It was as though the responses underscored the very point I was trying to make.
Dr. Gary Small is professor of psychiatry and behavioral sciences at UCLA and the author of iBrain: Surviving the Technological Alteration of the Modern Mind.
Oxford Professor Lady Greenfield warned the British House of Lords of the dangers of Internet social networking to young developing minds. Laptops, PDAs, iPods, smart phones and other technological gadgets seem to be taking over our purses and pockets with no end in sight. But could they be altering our families and affecting the way we interact with each other?
Investigators at the University of Minnesota found that traditional family meals have a positive impact on adolescent behavior. In a 2006 survey of nearly 100,000 teenagers across 25 states, a higher frequency of family dinners was associated with more positive values and a greater commitment to learning. Adolescents from homes with fewer family dinners were more likely to exhibit high-risk behaviors, including substance abuse, sexual activity, suicide attempts, violence, and academic problems. In today’s fast-paced, technologically driven world, some people consider the traditional family dinner to be an insignificant, old-fashioned ritual. Actually, it not only strengthens our neural circuitry for human contact (the brain’s insula and frontal lobe), but it also helps ease the stress we experience in our daily lives, protecting the medial temporal regions that control emotion and memory.
Many of us remember when dinnertime regularly brought the nuclear family together at the end of the day – everyone having finished work, homework, play, and sports. Parents and children relaxed, shared their day’s experiences, kept up with each other’s lives, and actually made eye contact while they talked.
Now, dinnertime tends to be a much more harried affair. What with emailing, video chatting, and TVs blaring, there is little time set aside for family discussion and reflection on the day’s events. Conversations at meals sometimes resemble instant messages where family members pop in with comments that have no linear theme. In fact, if there is time to have a family dinner, many family members tend to eat quickly and run back to their own computer, video game, cell phone or other digital activity.
Although the traditional dinner can be an important part of family life, whenever surly teenagers, sulking kids, and tired over-worked parents get together at the dining table, conflicts can emerge and tensions may arise. However, family dinners still provide a good setting for children and adolescents to learn basic social skills in conversation, dining etiquette, and basic empathy.
The other day I actually heard myself yelling to my teenage son, “Stop playing that darn video game and come down and watch TV with me.” Our new technology allows us to do remarkable things – we can communicate through elaborate online social networks, get vast amounts of information in an instant, work and play more efficiently.
The potential negative impact of new technology on the brain depends on its content, duration, and context. To a certain extent, I think that the opportunities for developing the brain’s neural networks that control our face-to-face social skills – what many define as our humanity – are being lost or at least compromised, as families become more fractured.
Maggie Jackson is author of Distracted: The Erosion of Attention and the Coming Dark Age, and writes the “Balancing Acts” column for the Boston Globe.
No one likes to be called a baby, whether they are age five or 35. That’s one reason why recent comments by British neuroscientist Susan Greenfield that today’s technologies may be “infantilizing the brain” are inspiring heated debate – and plentiful misunderstanding. I don’t agree with all that she said about virtual social relations, but she’s right to raise these fears. Only through well-reasoned public discussion and careful research can we begin to understand the impact of digital life on our social relations and on our cognition.
What did she say? In a statement to the House of Lords and in interviews, Lady Greenfield first pointed out that our environment shapes our highly plastic brains, and so it’s plausible that long hours online can affect us. She’s right. “Background” television is linked to attention-deficit symptoms in toddlers. High stress impedes medical students’ mental flexibility. I agree that “living in two dimensions,” as she puts it, will affect us.
As a result of video games and Facebooking, are we acting like babies, living for the moment, developing shorter attention spans? Again, she’s right to worry. Facebook and video games aren’t passive. Yet much of digital life is reactive. We settle for push-button Googled answers, immerse ourselves in “do-over” alternate realities, spend our days racing to keep up with Twitter, email and IM. This way of life doesn’t promote vision, planning, long-term strategizing, tenacity – skills sorely needed in this era.
Consider this issue as an imbalance of attention. Humans need to stay tuned to their environment in order to survive. We actually get a little adrenaline jolt from new stimuli. But humans need to pursue their goals, whether that means locating dinner or hunting for a new job. By this measure, our digital selves may be our lower-order selves. As ADHD researcher Russell Barkley points out, people with the condition pursue immediate gratification, have trouble controlling themselves and are “more under the control of external events than of mental representations about time and the future.” He writes that ADHD is a disorder of “attention to the future and what one needs to do to prepare for its arrival.” Today, as we skitter across our days, jumping to respond to every beep and ping and ever-craving the new, are we doing a good job preparing for the future?
Finally, Lady Greenfield spoke about two types of social diffusion prevalent in digital living. First, she correctly points out that today’s fertile virtual connectivity has a dark side: it’s difficult to engage deeply when one is juggling ever more relationships. This is both common sense and backed up by research showing that as social networks expand, visits and telephone calls drop while email rises. Second, Lady Greenfield observed how virtuality distances us from the “messiness” and “unpredictability” of face-to-face conversations. In other words, digital communications can weaken the very fabric of social ties. As I wrote in my book Distracted, an increasingly virtual world risks downgrading the rich, complex synchronicity of human relations to paper-thin shadow play.
If it weren’t for the Net, I likely wouldn’t have found out about Lady Greenfield’s comments, nor been able to respond to them in this way. Yet going forward, we need to rediscover the value of digital gadgets as tools, rather than elevating them to the status of a social and cognitive panacea. Lady Greenfield is right: we need to grow up and take a more mature approach to our tech tools.
You can find more input from Maggie Jackson via her website.