Tag Archives: robots

Ask, Don’t Tell, and Get it Done

Are you the sort of person who routinely tells yourself that you probably can’t achieve whatever it is you’d like to achieve? Does the voice in your head say things like, “Be realistic, you can’t really do this”? And perhaps, fed up with the positive self-talk mumbo jumbo in the media, you think that the only self-talk worth listening to is the “realistic” kind—the kind that tells you how it is.

Well, whatever your feelings about positive psychology and its many spin-offs, there is some decent research with something to say about all of this—and your little voice should be listening. Research by University of Illinois Professor Dolores Albarracin and her team has shown that those who ask themselves whether they will perform a task generally do better than those who tell themselves that they will.

But first, a slight digression. If you have young kids or even early teens (or just have the misfortune of watching children’s TV shows), you may be familiar with the show “Bob the Builder.”  Bob is a positive little man with serious intentions about building and fixing things.  Prior to taking on any given task, he loudly asks himself and his team, “Can we fix it?”  To which his team responds, “Yes we can!”   Now, compare this approach with that of The Little Engine That Could, whose oft-repeated success phrase was, “I think I can, I think I can…”  In a nutshell, the research we’re about to discuss wanted to know which approach works best.

Researchers tested these two motivational approaches first by telling study participants to either spend a minute wondering whether they would complete a task or telling themselves that they would. The participants showed more success on an anagram task (rearranging the letters of words to create new words) when they asked themselves whether they would complete it than when they told themselves they would.

In another experiment, students were asked to write two seemingly unrelated sentences, starting with either “I will” or “Will I,” and then work on the same anagram task. Participants did better when they wrote “Will I,” even though they had no idea that the writing exercise related to the anagram task.  A final experiment added the dimension of having participants complete a test designed to gauge motivation levels.  Again, the participants who asked themselves whether they would complete the task did better on the task, and scored significantly higher on the motivation test.

In other words, by asking themselves a question, people were more likely to build their own motivation than if they simply told themselves they’d get it done.

The takeaway for us: that little voice has a point, sort of.  Telling ourselves that we can achieve a goal may not get us very far. Asking ourselves, on the other hand, can bear significant fruit. Retool your self-talk to focus on the questions instead of presupposing answers, and allow your mind to build motivation around those questions.

A short-cut:  just remember the battle cry of Bob the Builder.



Filed under About Perception, About Research

Wall-E or the Terminator?

Independently minded robots, Isaac Asimov told us, need rules.  With well-structured, law-abiding robots, we get terrific garbage service, expertly made French toast and great lawn care. With recklessly structured, disobedient robots, we get “He’s been sent from the future to kill you – that’s WHAT he does! That’s ALL he does!”  The choice is clear, and fortunately someone has started the discussion to get us moving in a kinder, gentler robotic direction.

Authors Wendell Wallach, an ethicist at Yale University, and Colin Allen, a historian and philosopher of cognitive science at Indiana University, have provided us with Moral Machines: Teaching Robots Right from Wrong to help guide the way.  The New Scientist discusses their six strategies for reducing robotic danger here.  Here’s one of them:

Program robots with principles

Building robots motivated to create the “greatest good for the greatest number”, or to “treat others as you would wish to be treated” would be safer than laying down simplistic rules.

Likelihood of success: Moderate. Recognising the limits of rules, some ethicists look for an over-riding principle that can be used to evaluate all courses of action.

But the history of ethics is a long debate over the value and limits of many proposed single principles. For example, it could seem logical to sacrifice the life of one person to save the lives of five people. But a human doctor would not sacrifice a healthy person simply to supply organs to five people needing transplants. Would a robot?

Sometimes identifying the best option under a given rule can be extremely difficult. For example, determining which course of action leads to the greatest good would require a tremendous amount of knowledge, and an understanding of the effects of actions in the world. Making such calculations would require time and a great deal of computing power.

If you think this is far-fetched, please check out the giggling robot, the violin-playing robot and, of course, Asimo.  We’re not as far from the future Wallach and Allen describe as it may first seem.  And it’s imperative we move fast before this happens…


Filed under Books and Ideas