The psychology of the self-righteous

We all know that person: the Mr. Always-Right co-worker, who always thinks he’s got it down and everyone else is wrong. The self-victimizing acquaintance who thinks she treats everyone generously and kindly but that everyone else treats her like dirt. The friend you grew up with who thinks he’s reflective and everyone else needs to learn that skill.

At some point in our lives, we will come across the self-righteous person. With their criticism, indignation, and conceit, they tend to grate on our nerves and throw us off our track—if we let them.

But what is the psychology behind the self-righteous personality, and how does it affect you when you have to deal with that BS?

A number of factors feed the self-righteous mind, but a few thinking patterns are shared by those who think they’re oh-so-good-and-right:

  1. Overgeneralizations: Take a negative incident, throw some magic growth-powder on it, and you have an exaggeration. Look out for “always”, “never”, and “all” from these people, and you could have some self-righteousness burbling in the cauldron.
  2. Discounting the positive: On the flip side of overgeneralization is taking the stance that positive things, like other people’s good qualities, aren’t that important. “Hey, you’re nice, but who cares. Being nice is overrated, anyway.”
  3. Jumping to conclusions: We all do it, but the self-righteous person is skilled at it. Conclusions get drawn with little to no unbiased evidence behind them. “So-and-so didn’t give me money as a Christmas present this year; they don’t care about me and are cheap. Ugh, I hate cheap people.”
  4. Black-and-white thinking: Either you’re perfect, or you’re not. If you fall short of expectations, it’s because you’re not [insert some quality here], and that’s a reflection of your entirety.

Of course, the self-righteous person doesn’t always have to share these qualities, and their behavior may not draw from these thinking patterns. I know some people who seem self-righteous, and perhaps they are, but it’s due more to social environment than anything else. That’s not to say they haven’t adopted the self-righteous attitude, but I don’t think they would’ve turned out that way had it not been for certain social attitudes around them.

But what makes the self-righteous attitude such a pervasive form of thinking for those who engage in it?

There are a number of reasons based on basic human psychology.

The backfire effect is a relatively common human tendency to protect whatever is added to your collection of beliefs. That is, whatever you decide to believe, you tend to dismiss what doesn’t match up to that perspective, and suck in the “evidence” that matches up to it. So you end up glued to your beliefs and never questioning them, even when new information comes your way that shakes up the foundations of those beliefs—or rather, could shake those foundations if only the backfire effect didn’t get in the way.

The backfire effect is mostly due to cognitive laziness—our brains don’t want to work, so we sink into those explanations that don’t take too much energy to process. The more strenuous it becomes to process things, the less credibility you think they have.

Think you don’t have this tendency yourself, though? Think again.

The next time someone praises you and then another person criticizes you, notice how you feel. Chances are, a thousand “You’re so smart”s and one “You’re not smart enough” will affect you very differently. You’ll let the praise slip right through your mind, but you’ll latch on to the negative comment.


The backfire effect works together with another tendency here: people spend more time considering information they disagree with than information they accept. Anything that lines up with your way of thinking passes through your processing easily, but anything that threatens your beliefs grabs hold of your awareness and won’t let go. With the backfire effect in play as well, you’ll end up not believing the harder pill to swallow, because it takes too much energy to process. Even if you dwell on the criticism, you’ll fight against believing it and try to find all the ways that criticism is wrong about you.

Why is this so? Evolution may be at play, here.

Our human ancestors paid more attention to negative stimuli than positive, because if the negative wasn’t addressed, that could mean death.

Biased assimilation has something to do with all this, as well. Kevin Dunbar ran an fMRI experiment where he showed subjects information that confirmed their beliefs about something. Brain regions relating to learning lit up.

But when given contradictory information, those learning brain regions didn’t light up. Rather, brain regions relating to suppression lit up.

In other words, presenting information doesn’t necessarily change the way people think or what they believe. We’re all susceptible to cognitive biases—some of us more so than others.





Why clowns and dolls are so creepy: the uncanny valley

With their painted red cheeks, black-lined eyes, and big red noses, clowns are some of the creepiest things in existence.

And what of Chucky and his beloved bride, or Annabelle on her rocking chair?
Also creepy.

But what makes them so? What is it about clowns, or dolls, or even androids, that makes people squirm?

It’s the uncanny valley.

Coined in the ’70s, the term describes the revulsion humans have for things that are human-like, but not quite human.

One theory states that we avoid anything that doesn’t look “right” or “healthy” because it could mean disease. In other words, “pathogen avoidance.” Just like we can be freaked out by people with major physical deformities, because those deformities could be the result of say, fungi or flesh-eating parasites, we tend to be freaked out by human-like things that don’t seem right to us.

We’re just trying to protect ourselves; it’s all evolution, baby.

There’s a graph that describes where the uncanny valley lives. It doesn’t hit until you start getting into the realms of puppets and prostheses, and, of course, zombies. Because, zombies.

We especially don’t like weird human-like things moving, because movement is supposed to be linked to life, and these things shouldn’t be alive, at least to our pathogen-fearing brains. Stuffed animals, which tend to be unmoving, don’t unnerve us. Industrial robots, which aren’t really that human-like, don’t freak us out, either.

But give us a good brain-loving zombie, and that’s it. We’re done. Our brains don’t like it.

[Graph of the uncanny valley. Image: Wikipedia/Smurrayinchester via CC BY-SA 3.0]

People were even freaked out by the cute kids in The Polar Express:

[Still of one of the animated kids from The Polar Express]

I personally think she’s kinda cute, except for the teeth.

Even with the theories abounding, scientists don’t exactly know why the uncanny valley exists.

“We still don’t understand why it occurs or whether you can get used to it, and people don’t necessarily agree it exists,” said Ayse Saygin, a cognitive scientist at the University of California, San Diego. “This is one of those cases where we’re at the very beginning of understanding it.”

There’s definitely a general consensus, though, that human-like behavior, or attempts to mimic human behavior, causes the repulsion in real humans. If, say, an android is jerky or doesn’t hold good eye contact, it creates a disconnect in people’s brains between what they think the motion or behavior should look like and how it actually looks.

What’s interesting is that anyone, even someone living in a remote tribe in Cambodia, can experience the uncanny valley. But it typically shows up only when researchers show people humanoid faces similar to the viewers’ own ethnic group.

A team of researchers led by Ayse Pinar Saygin at the University of California, San Diego, ran a series of fMRI studies of people watching videos of androids, compared with videos of obviously mechanical robots or plain old humans.

The fMRI work does indicate that part of the uncanny valley disconnect is our brains having trouble matching up appearance and motion.

Through what’s being called the “action perception system”, the human brain keeps track of human motion and appearance.

Subjects aged 20-36 who had no experience working with robots, and who hadn’t spent much time in Japan (where robots and androids are more accepted), were shown a bunch of videos of an actroid (yes, it’s a word: actress + android) doing normal, everyday, human things, like drinking water, picking something up from a table, or waving at people. The subjects were also shown videos of those actions performed by the human the actroid was based on, and then a third set of videos of the android stripped of its skin, so that only the metal and wires were showing.

The actroid used in the study was Repliee Q2.

(What even is this face?)

So we have three conditions:

  1. Human-like android
  2. Real human
  3. Mechanical-looking robot (the android with its skin removed)

When the subjects were scanned using fMRI, the major difference in the brain’s responses to the three conditions occurred in the parietal cortex, especially in the areas that connect to the visual cortex. In particular, the connections were to visual cortex areas dedicated to processing body movement, along with the part of the motor cortex that contains mirror neurons (those monkey-see, monkey-do neurons that relate to empathy).

[fMRI scans of brain differences when viewing the robot, android, and human]

There’s evidence of some sort of mismatch happening. When the android was shown, its weird robot-movement wasn’t quite processed by the subjects’ brains.

This makes some sense, since there’s no evolutionary need for a human brain to care about human-like appearance or biological motion in themselves. What the brain is looking for, however, is a match between appearance and motion. Dogs should walk like dogs, jaguars should run like jaguars, humans should walk like humans, and androids... well, they just move weirdly. They look like people, but they don’t move like us. Our brains can’t process that, and we feel repulsed.

We wouldn’t be repulsed, however, if an android moved just like a human–our brains can process that, since what we’re seeing (a human-like body) is coupled with human-congruent motion.

Other experiments today are trying to figure out where the disconnect lies between human perception and humanoid figures, like AI-driven robots. There’s also a lot of research happening on empathy and human emotional response during interactions with, for example, androids.

What if our society grows to include androids in daily interactions? Would their not-quite-human behavior and look prevent humans from establishing an emotional connection with these figures? Would it matter, if androids don’t have minds or emotions anyway?

But what if they end up being a lot more emotionally and intellectually aware than we think they are; could this cause a societal issue if humans view androids as lesser beings unworthy of empathy or respect?



List o’ Books: Neuroscience and Neurological Illness

A Striped Armchair


On my post about preferring booklists to challenges last week, Laura answered my call for booklist requests. She said:

Your post asking about topics for booklists got me thinking… I work as an editor at a non-profit professional association that supports neurologists. We have a number of staff but no neurologists that actually work for the association. Much of the work that we do directly affects neurologists and the patients they care for, but many staff members don’t have direct experience with neurology or neurological illness. I have recently started a book club for staff members to become more familiar with these issues. […] We recently had our first meeting, where we discussed “The Man Who Mistook His Wife for a Hat” by Oliver Sacks. I am looking for other books (either fiction or nonfiction) that deal with neurological illness in some way. Some ideas that I’ve had so far:…


Course Schedule: 1.5-year Undergraduate Neuroscience Education

I’m all for self-studying, including going through an entire college curriculum on your own, in less time than a traditional four-year program.

Based on the curricula of several schools (Yale, Harvard, MIT, Johns Hopkins, the University of Pennsylvania, and Oxford), I have created a 1.5-year study curriculum in neuroscience, using open courseware.

A great online neuroscience teaser can be found here. A wonderful, complete online neuroscience textbook can be found here.

Year 1:

  • Bioethics in neuroscience
  • Experimental methods in neuroscience
  • Research stats
  • Circadian neurobiology
  • Perception and decision
  • Pain
  • Autonomic physiology
  • Biochemistry
  • Molecular genetics

Year 2:

  • Brain injury and recovery
  • Neurodegenerative disorders
  • Neurobiology of neuropsychiatric disorders
  • Genes, circuits, and behavior

1.5-year Undergraduate Neuroscience Education

  1. Principles of neuroscience
  2. Neurobiology
  3. Neurobiology of behavior
  4. Animal behavior
  5. Structure and functional organization of the human nervous system
  6. Bioethics in neuroscience
  7. Brain development and plasticity
  8. Cell and molecular neuroscience
  9. Synaptic organization of the nervous system
  10. Molecular transport
  11. Neurobiology of learning and memory
  12. Hippocampus
  13. Circadian neurobiology
  14. Perception and decision
  15. Neuroeconomics
  16. Motor control
  17. Pain
  18. Research stats
  19. Experimental methods in neuroscience
  20. Biochemistry
  21. Molecular genetics
  22. Evolutionary biology
  23. Systems neuroscience
  24. Fundamentals of computational neuroscience
  25. Intro to computing
  26. Cognitive neuroscience
  27. Neuroscience of visual perception
  28. Smell and taste
  29. Auditory system
  30. Drugs and the brain
  31. Biological bases of addiction
  32. Autonomic physiology
  33. Brain injury and recovery
  34. Neurodegenerative disorders
  35. Neurobiology of neuropsychiatric disorders
  36. Neurobiology of emotion
  37. Behavioral pharmacology
  38. Genes, circuits, and behavior
  39. Functional brain imaging



Our brains are more than Turing Complete

I was listening to a lecture on computer functions and abstraction. A Turing complete computer is able to compute anything, in a specific sense: anything that is computable can be computed by a Turing complete computer.

However, what even a Turing complete computer lacks is abstraction. Namely, you have to rebuild a piece of code every time you want to use it, and you can’t reuse the same variable names in other pieces of code. This, of course, can become quite annoying and very inefficient if you always have to go back and change pieces of code so that variable names don’t overlap.
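
To make the contrast concrete, here’s a minimal Python sketch (my own illustration, not from the lecture; the function names and numbers are made up) of what abstraction buys you: wrapping code in a function gives it its own scope, so the same variable name can be reused without clashing, and the whole computation can be reused without being rebuilt.

```python
# Without functions, reusing the name `total` for two different computations
# would collide or force a rename. Wrapping each computation in a function
# gives it its own scope, and the code can be reused instead of rebuilt.

def sum_of_squares(numbers):
    total = 0              # local `total`, invisible outside this function
    for n in numbers:
        total += n * n
    return total

def sum_of_cubes(numbers):
    total = 0              # a different `total`; no clash with the one above
    for n in numbers:
        total += n ** 3
    return total

print(sum_of_squares([1, 2, 3]))  # 14
print(sum_of_cubes([1, 2, 3]))    # 36
```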

So this got me thinking: our brains are like Turing complete computers, but with the ability to abstract. We can replace, modify, add, and delete variables in our minds relatively easily, without the information becoming jumbled. We can also compute nearly anything, if we sit down to it, assuming at least average intelligence.

Further, the brain can augment its own capabilities. As you learn, plasticity kicks in, making your brain more efficient and better able to connect concepts. I don’t know of any computer or AI that can do that.

So it seems that one of the extraordinary elements of the human brain is not so much simply the ability to compute–any computer does that quite well, and typically, better than a person, or at least faster–but the ability to abstract and augment ability. Computers, I’m sure, will eventually get to that point, but for now, the human brain transcends AI abstraction abilities.

Interpreters, Compilers, and Learning

[Disclaimer: I know very little about computers and operating systems at this point, as I just started going back to college for my second BS, this time in CS. However, with my background in neuroscience, I can’t help but try to find parallels between what I already know about the brain and the things I’m learning about computers. I realize that the worn analogy of brains and computers doesn’t always hold weight, but as I try to understand the new things I’m learning, I’m going to refer back to what I already know, which is the brain.

As I learn more, I’ll probably update articles. If you have any insight into anything I’ve written, please share with me, as I and my readers love to learn!]


Computers can only execute programs that have been written in low-level languages. However, low-level languages are more difficult to write and take more time. Therefore, people tend to write computer programs in high-level languages, which then must be translated by the computer into low-level languages before the program can be run.

Now, there are two kinds of programs that can process high-level languages into low-level languages: interpreters and compilers.

Interpreters read the high-level program and execute it as they go, reading the code one line at a time and executing between lines. Hence the term INTER (between) in INTERpreter.

Compilers read the program and translate it completely before the program runs. That is, the compiler translates the program as a whole and then runs it. This is unlike the interpreter’s one-line-at-a-time method.
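
As a rough illustration, here’s a toy sketch I put together (not how real interpreters or compilers are built; the tiny “language” and function names are invented for the example). Each line of the toy program is just a number to add to a running total: the interpreter-style function translates and executes one line at a time, while the compiler-style function translates everything first and only then executes.

```python
# Toy language: each line of the "program" is an integer to add to a running
# total. This only illustrates *when* translation happens relative to execution.
program = """
5
10
-3
"""

def interpret(source):
    """Interpreter-style: translate and execute one line at a time."""
    total = 0
    for line in source.strip().splitlines():
        total += int(line)   # this line is translated and run immediately
    return total

def compile_then_run(source):
    """Compiler-style: translate the whole program first, then execute it."""
    ops = [int(line) for line in source.strip().splitlines()]  # full translation
    total = 0
    for op in ops:           # execution happens only after translation is done
        total += op
    return total

print(interpret(program))         # 12
print(compile_then_run(program))  # 12
```

Both arrive at the same answer; the difference is simply whether translation is interleaved with execution or finished before execution starts.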

These aspects of programming got me thinking a bit.

Compilers remind me of automatic processes, like when we’re operating on autopilot. Our brain is still taking in information, but it’s not processing it one bit at a time; it’s more big-picture, and less into the details at any given moment.

However, when we are learning something new, our brains are more focused on the details and more interested in processing things in bits and then “running” them. That is, when we’re struggling with new information, or just ingesting it, our brains are more apt to take in one bit of information at a time, process it, and then move on to the next piece. That way, if something isn’t understood, we notice it early in the process, and it can be remedied.