Technology and the Human Brain

My original article can be found on BrainWorld Blog.

Being wired to technology isn’t necessarily good for our neural wiring. Multitasking, which many people think they do well, is not healthy for neural efficiency. Each task switch carries a cost: it makes our brains less efficient and depletes nutrients more readily than concentrating on one task at a time would.

Further, what most people consider “multitasking” (responding to an email while speaking on the phone, while jotting down notes, while reading an article) is actually a rapid series of task switches. Human brains are not wired to multitask well; we are wired to perform one main task at a time. When we override this and try to do several things at once, or juggle between multiple tasks, we end up drained and less productive, and our cortisol levels rise. Elevated cortisol is known to damage neurons.

An MIT neuroscientist, Dr. Earl Miller, is a world expert on divided attention and multitasking. He says that our brains are just “not wired to multitask well…When people think they’re multitasking, they’re actually just switching from one task to another very rapidly. And every time they do, there’s a cognitive cost in doing so.”

This cognitive cost, called a switch cost in the cognitive literature, results in slower performance and decreased accuracy. The switch cost persists even when people know beforehand that a switch is coming. This indicates that several executive control processes are at play, including attention shifting, goal setting and retrieval, and inhibition of the prior task. Keeping all of these running at once is not something the human brain can sustain.
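To make the idea concrete, here is a minimal Python sketch of how a switch cost is usually measured: compare reaction times on trials where the task repeats against trials where it switches. Every number in it is invented for illustration (the base reaction time, the size of the cost, the noise); it is a toy model, not data from any real experiment.

```python
import random
from statistics import mean

# Toy simulation of a task-switching experiment, in the spirit of the
# switch-cost studies described above. All numbers here are invented
# for illustration; none of this comes from a real dataset.
BASE_RT_MS = 600      # assumed mean reaction time on task-repeat trials
SWITCH_COST_MS = 200  # assumed extra time paid whenever the task switches
NOISE_MS = 50         # trial-to-trial variability

def simulated_rt(is_switch: bool) -> float:
    """Draw one reaction time; switch trials pay an extra cost."""
    rt = random.gauss(BASE_RT_MS, NOISE_MS)
    if is_switch:
        rt += random.gauss(SWITCH_COST_MS, NOISE_MS / 2)
    return rt

# AABB design: the task changes every two trials.
repeat_rts, switch_rts = [], []
prev_task = None
for i in range(400):
    task = ("classify letter", "classify digit")[(i // 2) % 2]
    is_switch = prev_task is not None and task != prev_task
    (switch_rts if is_switch else repeat_rts).append(simulated_rt(is_switch))
    prev_task = task

# The measured switch cost is simply the gap between the two mean RTs.
print(f"mean repeat RT: {mean(repeat_rts):6.1f} ms")
print(f"mean switch RT: {mean(switch_rts):6.1f} ms")
print(f"switch cost:    {mean(switch_rts) - mean(repeat_rts):6.1f} ms")
```

Run it a few times and the estimated cost hovers around the assumed 200 ms, which is roughly the kind of repeat-versus-switch gap that task-switching studies report.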

Consider allowing yourself to complete just one task at a time. Respond only to an email, then make that phone call. Allot yourself time limits if that helps you with efficiency. Trying to do otherwise—juggling between multiple tasks simultaneously—will make you less productive and more depleted.

Ever have a few minutes of “down-time”? Maybe you’re sitting at the doctor’s office waiting to be called in, or standing in the Starbucks line waiting for your drink to be made. Notice how people quickly snap out their phones and scroll? Or maybe you don’t because you are doing the same thing.

This constant bombardment of stimulation is unhealthy for your brain. The human brain needs moments of silence, moments of downtime, to function well. Giving your brain a break—from information, articles, social media feeds, music, emails, and everything else—means giving your brain a chance to replenish itself.

These moments of silence can foster creativity, reduce stress levels, and enhance overall brain function. You may think you are doing yourself a favor reading the latest news articles, or scrolling through photos of how a rocket is put together, but in reality, you are overstimulating your brain.

Technology has a great capacity to help us. From providing information to solve problems, to acting as a creative outlet, to giving us the freedom to explore new concepts, technology has opened up worlds to us. However, these freedoms and abilities come at a cost. As with anything, moderation is key. Allow yourself some time to “unplug” every day. If not every day, then choose one day a week to unplug for a few hours, if not the whole day. Keep a notebook on how you feel. Are you less stressed? More thoughtful? Perhaps more creative?

I’d bet that after a few of these unplugged stretches, you’ll find yourself enjoying your own company, and the company of others, more, and you’ll be far more creative and content in your daily life.

You may even begin craving these moments of unwiring.

Computational Neuroscience/Neural Engineering

As I explore more of what I want to do (computational neuroscience, neural engineering, software engineering, etc.), I’m finding out more about what can be done with which degrees.

I really am finding myself interested in the idea of building things: not just software, but AI, neurorobotics, etc.

I assume these all need PhDs. But in what field?

Seems like a neuroscience plus computer science/engineering background is a must, or at least the computer background is. Biomedical or computer engineering is also a requisite, to some degree.

So the question is now: what do I get a graduate degree in?

I have a BS in neuroscience and am working toward a second BS in computer science. What do I get my master’s and potential PhD in? I was thinking about an MS in computer science with a specialization in machine learning. But now I’m thinking an MS in computer engineering would be good too.

So I did a little research on Indeed.com for computational neuroscience jobs in the DC area.

Here’s what I found:

Job Title: Neuroscientist / Biomedical Engineer

Job Highlights: Seeking a Ph.D. in Neuroscience, Biomedical Engineering, Biomechanics, or related field. Candidates should have an interest and ability to conduct research and advise federal partners on topics including blast injury, traumatic brain injury, rehabilitation engineering, human performance monitoring and enhancement, and noninvasive brain-computer interfaces. Secondary topics may include biologically-inspired approaches to artificial intelligence, computer vision, and robotics.

Required Skills: The successful candidate will have expertise in applied science and computer programming (Matlab, Java, C++, Python, or other languages relevant to scientific computing), specifically with experience that may include biomechanical modeling of injury and/or musculoskeletal movement, and computational neuroscience.

Other positions (the few there were) demanded similar credentials, along with PhDs. Looks like I should get that PhD after all…

MIT-inspired Neuroscience Curriculum

I’m thinking about going for my PhD in neuroscience, specifically computational neuroscience. With a BS in neuroscience, and working on my second BS in computer science, I need to do all I can to boost my grad application and make sure I have the coursework that will make me an attractive student.

So, I turned to MIT’s undergraduate and graduate neuroscience curricula and created this new self-study curriculum for the next year. (I plan on applying in December 2017, so I’ve got a year to refresh on undergrad neuro and teach myself a few graduate neuro topics/courses.)

Introduction to Computer Science Programming in Python
Introduction to Computational Thinking and Data Science
Introduction to Computer Science
Introduction to Neuroscience
Introduction to Neural Computation
Intro to Computer/Electrical Engineering
Sensory Systems
Fundamentals of Biology
Intro to Neurorobotics
Cognitive Neuroscience
Brain Structure and its Origins
Perceptual Systems
Neural Circuits
Neurobiology of Learning and Memory
Cellular Neurophysiology
Animal Behavior
Developmental Neurobiology
Neuroendocrine Systems
Intro to Comp and Elec Problem Solving
Genes, Circuits, and Behavior
The Brain and Its Interface with the Body
Cell NeuroBio
Systems Neuroscience
Cell and Molecular I
Cell and Molecular II

December 2016:

  1. Introduction to Neuroscience
  2. Fundamentals of Biology

January 2017:

  1. Introduction to Computer Science
  2. Introduction to Computational Thinking and Data Science

February 2017:

  1. Introduction to Computer Science Programming in Python
  2. Sensory Systems

March 2017:

  1. Intro to Computer/Electrical Engineering

April 2017:

  1. Intro to Comp and Elec Problem Solving

May 2017:

  1. Introduction to Neural Computation

June 2017:

  1. Cellular Neurophysiology

July 2017:

  1. Intro to Neurorobotics

August 2017:

  1. Cell NeuroBio

Why clowns and dolls are so creepy: the uncanny valley

With their painted red cheeks, black-lined eyes, and big red noses, clowns are some of the creepiest things in existence.

And what of Chucky and his beloved bride, or Annabelle on her rocking chair?
Also creepy.

But what makes them so? What is it about clowns, or dolls, or even androids that makes people squirm?

It’s the uncanny valley.

Coined by roboticist Masahiro Mori in 1970, the term describes the revulsion humans feel toward things that are human-like, but not quite human.

One theory states that we avoid anything that doesn’t look “right” or “healthy” because it could signal disease; in other words, “pathogen avoidance.” Just as we can be freaked out by people with major physical deformities, because those deformities could be the result of, say, fungi or flesh-eating parasites, we tend to be freaked out by human-like things that don’t seem quite right to us.

We’re just trying to protect ourselves; it’s all evolution, baby.

There’s a graph that describes where the uncanny valley lives. It doesn’t hit until you start getting into the realms of puppets and prostheses, and, of course, zombies. Because, zombies.

We especially don’t like weird human-like things moving, because movement is supposed to be linked to life, and these things shouldn’t be alive, at least to our pathogen-fearing brains. Stuffed animals, which tend to be unmoving, don’t unnerve us. Industrial robots, which aren’t really that human-like, don’t freak us out, either.

But give us a good brain-loving zombie, and that’s it. We’re done. Our brains don’t like it.

[Image: the uncanny valley graph. Credit: Wikipedia/Smurrayinchester via CC BY-SA 3.0]
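If you’re curious about the shape, here’s a quick matplotlib sketch of a curve like the one in that graph. Mori never specified an equation, so the function below is an invented toy curve chosen only to have roughly the right shape; it is not real data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Qualitative sketch of the uncanny valley curve. The function is an
# invented toy: affinity climbs with human likeness, then dips sharply
# just before full human likeness, then recovers for healthy humans.
likeness = np.linspace(0.0, 1.0, 500)
affinity = 1.2 * likeness - 2.0 * np.exp(-((likeness - 0.85) ** 2) / 0.003)

plt.plot(likeness, affinity)
plt.axhline(0, color="gray", linewidth=0.5)
plt.annotate("industrial robot", (0.25, np.interp(0.25, likeness, affinity)))
plt.annotate("zombie / android", (0.85, affinity.min()), ha="center", va="top")
plt.annotate("healthy human", (1.0, np.interp(1.0, likeness, affinity)), ha="right")
plt.xlabel("human likeness")
plt.ylabel("affinity")
plt.title("The uncanny valley (qualitative)")
plt.show()
```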

People were even freaked out by the cute kids in The Polar Express:

[Image: a child character from The Polar Express]

I personally think she’s kinda cute, except for the teeth.

Even with theories abounding, scientists don’t know exactly why the uncanny valley exists.

“We still don’t understand why it occurs or whether you can get used to it, and people don’t necessarily agree it exists,” said Ayse Saygin, a cognitive scientist at the University of California, San Diego. “This is one of those cases where we’re at the very beginning of understanding it.”

There’s definitely a general consensus, though, that human-like behavior, or an attempt to mimic human behavior, is what causes the repulsion in real humans. If, say, an android is jerky or doesn’t hold good eye contact, it causes a disconnect in people’s brains between what they think the motion or behavior should look like and how it actually looks.

What’s interesting is that anyone, even someone living in a remote tribe in Cambodia, can experience the uncanny valley. But it typically appears only when researchers show people humanoid faces similar to the viewers’ own ethnic group.

A team of researchers led by Ayse Pinar Saygin, the UC San Diego cognitive scientist quoted above, ran a series of fMRI studies of people watching videos of androids, as compared with mechanical-looking robots and plain old humans.

The fMRI study does indicate that part of the uncanny valley disconnect is that our brains are having trouble matching up perception and motion.

Through what’s being called the “action perception system,” the human brain keeps track of human motion and appearance.

Subjects aged 20-36 who had no experience working with robots, and who hadn’t spent much time in Japan (where robots and androids are more accepted), were shown a series of videos of an actroid (yes, it’s a word: actress + android) doing normal, everyday human things, like drinking water, picking something up from a table, or waving at people. The subjects were also shown videos of those same actions performed by the human the actroid was modeled on, and then yet another set of videos of the android stripped of its skin, so that only the metal and wires were showing.

The actroid used in the study: Repliee Q2:

[Image: the actroid Repliee Q2]
(What even is this face?)

So we have three conditions:

  1. Human-like android
  2. Real human
  3. Mechanics-robot

When subjects were scanned using fMRI, the major difference in the brain’s responses to the three conditions showed up in the parietal cortex, especially in the areas that connect to the visual cortex. In particular, the connections ran to the visual cortex areas dedicated to processing body movement, along with the part of the motor cortex that contains mirror neurons (those monkey-see, monkey-do neurons tied to empathy).

[Image: fMRI scan of brain differences when viewing robot, android, and human]

There’s evidence of some sort of mismatch happening: when the android was shown, its weird robot movement wasn’t quite processed by the subjects’ brains.

This makes some sense, since there’s no evolutionary need for a human brain to care about human-like appearance or biological motion on its own. What the brain is looking for, however, is a match between appearance and motion. Dogs should walk like dogs, jaguars should run like jaguars, humans should walk like humans, and androids... well, they just move weirdly. They look like people, but don’t move like us. So our brains can’t process that, and we feel repulsed.

We wouldn’t be repulsed, however, if an android moved just like a human; our brains can process that, since what we’re seeing (a human-like body) is coupled with human-congruent motion.

Current experiments are trying to figure out exactly where the disconnect between human perception and humanoid figures, like AI-driven robots, arises. There’s also a lot of research on empathy and human emotional responses during interactions with androids.

What if our society grows to include androids in daily interactions? Would their not-quite-human behavior and looks prevent humans from establishing an emotional connection with these figures? Would it matter, if androids don’t have minds or emotions anyway?

But what if they end up being a lot more emotionally and intellectually aware than we think they are; could this cause a societal issue if humans view androids as lesser beings unworthy of empathy or respect?


List o’ Books: Neuroscience and Neurological Illness

Reblogged from A Striped Armchair:


On my post about preferring booklists to challenges last week, Laura answered my call for booklist requests. She said:

Your post asking about topics for booklists got me thinking…I work as an editor at a non-profit professional association that supports neurologists. We have a number of staff but no neurologists that actually work for the association. Much of the work that we do directly affects neurologists and the patients they care for, but many staff members don’t have direct experience with neurology or neurological illness. I have recently started a book club for staff members to become more familiar with these issues. […]We recently had our first meeting, where we discussed “The Man Who Mistook His Wife for a Hat” by Oliver Sacks. I am looking for other books (either fiction or nonfiction) that deal with neurological illness in some way. Some ideas that I’ve had so far:…


Self-study Neuroscience

I’m all for self-studying, including going through an entire college curriculum on your own, in less time than a traditional four-year program.

Based on the curricula of several schools (Yale, Harvard, MIT, Johns Hopkins, University of Pennsylvania, and Oxford University), I have created a 1.5-year self-study curriculum in neuroscience using open courseware.

A great online neuroscience teaser can be found here. A wonderful, complete online neuroscience textbook can be found here.

Year 1:

January 

February

March 

April

  • Bioethics in neuroscience
  • Experimental methods in neuroscience
  • Research stats

May

June

July

  • Circadian neurobiology
  • Perception and decision

August

September

  • Pain
  • Autonomic physiology

October

  • Biochemistry
  • Molecular genetics

November

December

Year 2:

January 

February

March

April

May

  • Brain injury and recovery
  • Neurodegenerative disorders

June

  • Neurobiology of neuropsychiatric disorders
  • Genes, circuits, and behavior