Why clowns and dolls are so creepy: the uncanny valley

With their painted red cheeks, black-lined eyes, and big red noses, clowns are some of the creepiest things in existence.

And what of Chucky and his beloved bride, or Annabelle on her rocking chair?
Also creepy.

But what makes them so? What is it about clowns, or dolls, or even androids that makes people squirm?

It’s the uncanny valley.

Coined in the ’70s, the term describes the revulsion humans have for things that are human-like, but not quite human.

One theory states that we avoid anything that doesn’t look “right” or “healthy” because it could mean disease; in other words, “pathogen avoidance.” Just as we can be freaked out by people with major physical deformities, because those deformities could be the result of, say, fungi or flesh-eating parasites, we tend to be freaked out by human-like things that don’t seem right to us.

We’re just trying to protect ourselves; it’s all evolution, baby.

There’s a graph that describes where the uncanny valley lives. The creepiness doesn’t hit until you start getting into the realm of puppets and prostheses, and, of course, zombies. Because, zombies.

We especially don’t like weird human-like things moving, because movement is supposed to be linked to life, and these things shouldn’t be alive, at least to our pathogen-fearing brains. Stuffed animals, which tend to be unmoving, don’t unnerve us. Industrial robots, which aren’t really that human-like, don’t freak us out, either.

But give us a good brain-loving zombie, and that’s it. We’re done. Our brains don’t like it.

[Graph of the uncanny valley. Image: Wikipedia/Smurrayinchester via CC BY-SA 3.0]

People were even freaked out by the cute kids in The Polar Express:

[Image: one of the animated children from The Polar Express]

I personally think she’s kinda cute, except for the teeth.

Even with theories abounding, scientists don’t know exactly why the uncanny valley exists.

“We still don’t understand why it occurs or whether you can get used to it, and people don’t necessarily agree it exists,” said Ayse Saygin, a cognitive scientist at the University of California, San Diego. “This is one of those cases where we’re at the very beginning of understanding it.”

There’s definitely a general consensus, though, that human-like behavior, or an attempt to mimic human behavior, is what causes the repulsion in real humans. If, say, an android is jerky or doesn’t hold good eye contact, it causes a disconnect in people’s brains between what they think the motion or behavior should look like and how it actually looks.

What’s interesting is that anyone, even someone living in a remote tribe in Cambodia, can experience the uncanny valley. But it typically shows up only when researchers show people humanoid faces similar to the viewers’ own ethnic group.

A team of researchers, led by Ayse Pinar Saygin at the University of California, San Diego, ran a series of fMRI studies of people watching videos of androids, compared with videos of obviously mechanical robots or just plain old humans.

The fMRI study does indicate that part of the uncanny valley disconnect is that our brains are having trouble matching up appearance and motion.

Through what’s being called the “action perception system,” the human brain keeps track of human motions and appearances.

Subjects aged 20 to 36 who had no experience working with robots, and who hadn’t spent much time in Japan, where robots and androids are more accepted, were shown a series of videos of an actroid (yes, it’s a word: actress + android) doing normal, everyday, human things, like drinking water, picking something up from a table, or waving at people. The subjects were also shown videos of those same actions performed by the human the actroid was modeled on, and then a third set of videos of the android stripped of its skin, so that only the metal and wires were showing.

The actroid used in the study was Repliee Q2:

[Image: the actroid Repliee Q2]
(What even is this face?)

So we have three conditions:

  1. Human-like android
  2. Real human
  3. Mechanical robot (the android with its skin removed)

In the fMRI scans, the major difference in the brain’s responses to the three conditions occurred in the parietal cortex, especially in the areas that connect to the visual cortex. In particular, the connections were to visual cortex areas dedicated to processing body movement, along with the part of the motor cortex that contains mirror neurons (those monkey-see, monkey-do neurons that relate to empathy).

[Image: fMRI scan of brain differences when viewing robot, android, and human]

There’s evidence of some sort of mismatch happening. When the android was shown, its weird robot-movement wasn’t quite processed by the subjects’ brains.

This makes some sense: there’s no particular evolutionary need for a human brain to care about human-like appearance or biological motion on their own. What the brain is looking for, however, is a match between appearance and motion. Dogs should walk like dogs, jaguars should run like jaguars, humans should walk like humans, and androids... well, they just move weirdly. They look like people, but they don’t move like us. Our brains can’t process that, and we feel repulsed.

We wouldn’t be repulsed, however, if an android moved just like a human; our brains can process that, since what we’re seeing (a human-like body) is coupled with human-congruent motion.

Other experiments today are trying to figure out exactly where the disconnect between human perception and humanoid figures, like AI-driven robots, comes from. There’s also a lot of research happening on empathy and human emotional responses during interactions with, for example, androids.

What if our society grows to include androids in daily interactions? Would their not-quite-human behavior and look prevent humans from establishing an emotional connection with these figures? Would it matter, if androids don’t have minds or emotions anyway?

But what if they end up being a lot more emotionally and intellectually aware than we think they are; could this cause a societal issue if humans view androids as lesser beings unworthy of empathy or respect?


List o’ Books: Neuroscience and Neurological Illness

A Striped Armchair


On my post about preferring booklists to challenges last week, Laura answered my call for booklist requests. She said:

Your post asking about topics for booklists got me thinking…I work as an editor at a non-profit professional association that supports neurologists. We have a number of staff but no neurologists that actually work for the association. Much of the work that we do directly affects neurologists and the patients they care for, but many staff members don’t have direct experience with neurology or neurological illness. I have recently started a book club for staff members to become more familiar with these issues. […]We recently had our first meeting, where we discussed “The Man Who Mistook His Wife for a Hat” by Oliver Sacks. I am looking for other books (either fiction or nonfiction) that deal with neurological illness in some way. Some ideas that I’ve had so far:…


Self-study Neuroscience

I’m all for self-studying, including going through an entire college curriculum on your own, in less time than a traditional four-year program.

Based on the curricula of several schools (Yale, Harvard, MIT, Johns Hopkins, the University of Pennsylvania, and Oxford), I have created a 1.5-year study curriculum in neuroscience using open courseware.

A great online neuroscience teaser can be found here. A wonderful, complete online neuroscience textbook can be found here.

Year 1:

January 

February

March 

April

  • Bioethics in neuroscience
  • Experimental methods in neuroscience
  • Research stats

May

June

July

  • Circadian neurobiology
  • Perception and decision

August

September

  • Pain
  • Autonomic physiology

October

  • Biochemistry
  • Molecular genetics

November

December

Year 2:

January 

February

March

April

May

  • Brain injury and recovery
  • Neurodegenerative disorders

June

  • Neurobiology of neuropsychiatric disorders
  • Genes, circuits, and behavior


1.5-year Undergraduate Neuroscience Education

  1. Principles of neuroscience
  2. Neurobiology
  3. Neurobiology of behavior
  4. Animal behavior
  5. Structure and functional organization of the human nervous system
  6. Bioethics in neuroscience
  7. Brain development and plasticity
  8. Cell and molecular neuroscience
  9. Synaptic organization of the nervous system
  10. Molecular transport
  11. Neurobiology of learning and memory
  12. Hippocampus
  13. Circadian neurobiology
  14. Perception and decision
  15. Neuroeconomics
  16. Motor control
  17. Pain
  18. Research stats
  19. Experimental methods in neuroscience
  20. Biochemistry
  21. Molecular genetics
  22. Evolutionary biology
  23. Systems neuroscience
  24. Fundamentals of computational neuroscience
  25. Intro to computing
  26. Cognitive neuroscience
  27. Neuroscience of visual perception
  28. Smell and taste
  29. Auditory system
  30. Drugs and the brain
  31. Biological bases of addiction
  32. Autonomic physiology
  33. Brain injury and recovery
  34. Neurodegenerative disorders
  35. Neurobiology of neuropsychiatric disorders
  36. Neurobiology of emotion
  37. Behavioral pharmacology
  38. Genes, circuits, and behavior
  39. Functional brain imaging


Our brains are more than Turing Complete

I was listening to a lecture on computer functions and abstractions. A Turing complete computer is able to compute anything that is computable; that is, anything computable can be computed by a Turing complete computer.

However, what even a Turing complete computer lacks is abstraction. Namely, you have to rebuild a file every time you want to use it, and you can’t reuse the same variable names in other pieces of code. This, of course, can become quite annoying and very inefficient if you always have to go back and change pieces of code so that variable names don’t overlap.
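To make that concrete, here is a minimal Python sketch of what abstraction buys you (my own toy example, not something from the lecture): because each function gets its own scope, two pieces of code can both use a variable named "total" without stepping on each other, which is exactly the kind of name collision you would otherwise have to manage by hand.

# A minimal sketch of abstraction via function scope (hypothetical example, Python).
# Both functions use a local variable named "total"; because each call gets its own
# scope, the names never collide and nothing has to be renamed.

def sum_squares(numbers):
    total = 0                  # local to this function
    for n in numbers:
        total += n * n
    return total

def sum_cubes(numbers):
    total = 0                  # a completely separate "total"; no conflict
    for n in numbers:
        total += n ** 3
    return total

print(sum_squares([1, 2, 3]))  # 14
print(sum_cubes([1, 2, 3]))    # 36

Without that kind of scoping, I would have to rename one of the totals every time I combined the two pieces of code.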

So this got me thinking: our brains are like Turing complete computers, but with the ability to abstract. We can replace, modify, add, and delete variables in our minds relatively easily, without the information becoming jumbled. We can also compute nearly anything, if we sit down to it, assuming at least average intelligence.

Further, the brain can augment its own capabilities. As you learn, plasticity kicks in, making your brain more efficient and better able to connect concepts. I don’t know of any computer or AI that can do that.

So it seems that one of the extraordinary elements of the human brain is not so much the ability to compute (any computer does that quite well, and typically better than a person, or at least faster) but the ability to abstract and to augment its own abilities. Computers, I’m sure, will eventually get to that point, but for now, the human brain transcends AI’s abstraction abilities.

Interpreters, Compilers, and Learning

[Disclaimer: I know very little about computers and operating systems at this point, as I just started going back to college for my second BS, this time in CS. However, with my background in neuroscience, I can’t help but try to find parallels between what I already know about the brain and the things I’m learning about computers. I realize that the worn analogy of brains and computers doesn’t always hold weight, but as I try to understand the new things I’m learning, I’m going to refer back to what I already know: the brain.

As I learn more, I’ll probably update articles. If you have any insight into anything I’ve written, please share with me, as I and my readers love to learn!]

*********

Computers can only execute programs that have been written in low-level languages. However, low-level languages are more difficult and time-consuming to write in. Therefore, people tend to write programs in high-level languages, which must then be translated by the computer into low-level languages before the program can run.

Now, there are two kinds of programs that can process high-level languages into low-level languages: interpreters and compilers.

An interpreter reads the high-level program and executes it as it goes, reading the code one line at a time and executing each line before moving on to the next.

A compiler, on the other hand, reads the program and translates it completely before the program runs. That is, the compiler translates the whole program up front, and only then is the result run. This is unlike the interpreter’s one-line-at-a-time method.
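To see the difference in toy code, here is a little Python sketch I put together; it is purely illustrative (a hypothetical mini-language of my own invention), not how real interpreters or compilers work under the hood. Both functions handle the same tiny program, but interpret() executes each line as soon as it reads it, while compile_program() translates everything into callable steps first and only then runs the result.

# Toy illustration (hypothetical, Python): a tiny made-up language with two commands,
# "PRINT <text>" and "ADD <a> <b>", handled interpreter-style and compiler-style.

program = [
    "PRINT hello",
    "ADD 2 3",
    "PRINT done",
]

def run_line(line):
    op, _, rest = line.partition(" ")
    if op == "PRINT":
        print(rest)
    elif op == "ADD":
        a, b = rest.split()
        print(int(a) + int(b))

def interpret(lines):
    # Interpreter-style: read a line, execute it, then move on to the next.
    for line in lines:
        run_line(line)

def compile_program(lines):
    # Compiler-style: translate the whole program into callable steps first.
    return [lambda line=line: run_line(line) for line in lines]

interpret(program)                   # executes while reading

compiled = compile_program(program)  # translate everything up front
for step in compiled:                # ...then run the translated result
    step()

Both versions print the same output; the difference is purely in when the translation happens relative to the execution.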

These aspects of programming got me thinking a bit.

Compilers remind me of automatic processes, like when we’re operating on autopilot. Our brain is still taking in information, but it’s not processing it one bit at a time; it’s more big-picture, and less into the details at any given moment.

However, when we’re learning something new, our brains are more focused on the details and more interested in processing things in bits and then “running” them. That is, when we’re struggling with new information, or just ingesting it, our brains are more apt to take in one bit of information at a time, process it, and then move on to the next piece. That way, if something isn’t understood, it’s caught early in the process and can be remedied.

Findings Friday: The aging brain is a distracted brain

As the brain ages, it becomes more difficult for it to shut out irrelevant stimuli; that is, it becomes more easily distracted. Sitting in a restaurant and having a conversation with the person right across the table from you presents a whole new challenge when the room is buzzing with activity.

However, the aging brain does not have to be the distracted brain. Training the mind to shut out irrelevant stimuli is possible, even for the older brain.

Brown University scientists conducted a study involving seniors and college-age students. The experiment was a visual one.

Participants were presented with a letter and number sequence, and asked to report only the numbers, while simultaneously disregarding a series of dots. The dots sometimes moved randomly, and at other times, moved in a clear path. The latter scenario makes the dots more difficult to ignore, as the eye tends to want to watch the dots move.

The senior participants tended to unintentionally learn the dot motion patterns, which became apparent when they were asked to describe which way the dots were moving. The college-age participants were better able to ignore the dots and focus on the task at hand (the numbers).

Another study also examined aging and distractibility, or the inability to maintain focus on a goal because of attention to irrelevant stimuli. Here, aging brains were trained to be more focused. The researchers used older rats as well as older humans. Three different sounds were played during the experiment, one of which was designated the target tone. Rewards were given when the target tone was identified and the other tones were ignored. As subjects improved, the task became more challenging, with the target tone becoming less distinguishable from the other tones.

However, after training, both the rats and the humans made fewer errors. In fact, electrophysiological brain recordings indicated that neural responses to the non-target, or distracting, tones were decreased.

Interestingly, the researchers indicated that ignoring a distraction is not simply the flip side of focusing on a task. Older brains can be just as efficient at focusing as younger brains; the issue in aging brains lies in filtering out distractions. This is where training comes in: strengthening the brain’s ability to ignore distractors, not necessarily enhancing its ability to focus.

The major highlights of the study are that training enhanced aspects of cognitive control in older humans, and that the adaptive distractor training selectively suppressed responses to the distractors.