List o’ Books: Neuroscience and Neurological Illness

Reblogged from A Striped Armchair


On my post about preferring booklists to challenges last week, Laura answered my call for booklist requests. She said:

Your post asking about topics for booklists got me thinking… I work as an editor at a non-profit professional association that supports neurologists. We have a number of staff but no neurologists that actually work for the association. Much of the work that we do directly affects neurologists and the patients they care for, but many staff members don’t have direct experience with neurology or neurological illness. I have recently started a book club for staff members to become more familiar with these issues. […] We recently had our first meeting, where we discussed “The Man Who Mistook His Wife for a Hat” by Oliver Sacks. I am looking for other books (either fiction or nonfiction) that deal with neurological illness in some way. Some ideas that I’ve had so far:…


Course Schedule: 1.5-Year Undergraduate Neuroscience Education

I’m all for self-study, including working through an entire college curriculum on your own in less time than a traditional four-year program.

Based on the curricula of several schools (Yale, Harvard, MIT, Johns Hopkins, the University of Pennsylvania, and Oxford), I have created a 1.5-year study curriculum in neuroscience using open courseware.

A great online neuroscience teaser can be found here. A wonderful, complete online neuroscience textbook can be found here.

Year 1:

April

  • Bioethics in neuroscience
  • Experimental methods in neuroscience
  • Research stats

July

  • Circadian neurobiology
  • Perception and decision

September

  • Pain
  • Autonomic physiology

October

  • Biochemistry
  • Molecular genetics

Year 2:

May

  • Brain injury and recovery
  • Neurodegenerative disorders

June

  • Neurobiology of neuropsychiatric disorders
  • Genes, circuits, and behavior

1.5-Year Undergraduate Neuroscience Education: Full Course List

  1. Principles of neuroscience
  2. Neurobiology
  3. Neurobiology of behavior
  4. Animal behavior
  5. Structure and functional organization of the human nervous system
  6. Bioethics in neuroscience
  7. Brain development and plasticity
  8. Cell and molecular neuroscience
  9. Synaptic organization of the nervous system
  10. Molecular transport
  11. Neurobiology of learning and memory
  12. Hippocampus
  13. Circadian neurobiology
  14. Perception and decision
  15. Neuroeconomics
  16. Motor control
  17. Pain
  18. Research stats
  19. Experimental methods in neuroscience
  20. Biochemistry
  21. Molecular genetics
  22. Evolutionary biology
  23. Systems neuroscience
  24. Fundamentals of computational neuroscience
  25. Intro to computing
  26. Cognitive neuroscience
  27. Neuroscience of visual perception
  28. Smell and taste
  29. Auditory system
  30. Drugs and the brain
  31. Biological bases of addiction
  32. Autonomic physiology
  33. Brain injury and recovery
  34. Neurodegenerative disorders
  35. Neurobiology of neuropsychiatric disorders
  36. Neurobiology of emotion
  37. Behavioral pharmacology
  38. Genes, circuits, and behavior
  39. Functional brain imaging

Our brains are more than Turing Complete

I was listening to a lecture on computer functions and abstractions. A Turing complete computer can, in principle, compute anything that is computable; that is, any computable function can be computed by a Turing complete machine.

However, Turing completeness says nothing about abstraction. Without abstraction, you have to rebuild a piece of code every time you want to use it, and you can’t reuse the same variable names in other pieces of code. This, of course, becomes quite annoying and very inefficient if you always have to go back and change pieces of code so that variable names don’t overlap.
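To make that concrete, here is a minimal sketch in Python of what abstraction buys a programmer (the function names here are just illustrative): each function gets its own scope, so the same variable name can be reused without clashing, and the logic is written once and reused rather than rebuilt.

```python
# Illustrative sketch: functions give each piece of code its own scope,
# so the same variable name can be reused in different places without
# colliding, and the logic is written once and reused.

def area_of_square(side):
    result = side * side       # 'result' lives only inside this function
    return result

def perimeter_of_square(side):
    result = 4 * side          # same name, different scope: no collision
    return result

print(area_of_square(3))       # 9
print(perimeter_of_square(3))  # 12
```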

So this got me thinking: our brains are like Turing complete computers, but with the ability to abstract. We can replace, modify, add, and delete variables in our minds relatively easily, without the information becoming jumbled. We can also compute nearly anything if we sit down to it, assuming at least average intelligence.

Further, the brain can augment its own capabilities. As you learn, plasticity kicks in, making your brain more efficient and better able to connect concepts. I don’t know of any computer or AI that can do that.

So it seems that one of the extraordinary elements of the human brain is not so much the ability to compute (any computer does that quite well, and typically faster than a person) but the ability to abstract and to augment its own abilities. Computers, I’m sure, will eventually get to that point, but for now, the human brain transcends AI’s abstraction abilities.

Interpreters, Compilers, and Learning

[Disclaimer: I know very little about computers and operating systems at this point, as I just started going back to college for my second BS, this time in CS. However, with my background in neuroscience, I can’t help but try to find parallels between what I already know about the brain and the things I’m learning about computers. I realize that the worn analogy of brains and computers doesn’t always hold weight, but as I try to understand the new things I’m learning, I’m going to refer back to what I already know: the brain.

As I learn more, I’ll probably update articles. If you have any insight into anything I’ve written, please share with me, as I and my readers love to learn!]

*********

Computers can only execute programs that have been written in low-level languages. However, low-level languages are more difficult and time-consuming to write. Therefore, people tend to write programs in high-level languages, which must then be translated into low-level languages before the program can run.

Now, there are two kinds of programs that can process high-level languages into low-level languages: interpreters and compilers.

An interpreter reads the high-level program and executes it as it reads, processing the code one line at a time and executing each line before moving on to the next. Hence the term INTER (between) in INTERpreter.

A compiler reads the program and translates it completely before the program runs. That is, the compiler translates the program as a whole and then runs it, unlike the interpreter’s one-line-at-a-time method.
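To make the contrast concrete, here is a toy sketch in Python, using Python itself as the “high-level language.” This is purely illustrative (it only handles single-line statements); real interpreters and compilers are far more involved.

```python
# Toy illustration of the two strategies, not how real tools are built.

source = "x = 2\ny = x * 3\nprint(y)"

def interpret(src):
    """Interpreter-style: translate and execute one line at a time."""
    env = {}
    for line in src.splitlines():
        exec(line, env)        # run this line before reading the next

def compile_and_run(src):
    """Compiler-style: translate the whole program first, then run it."""
    code = compile(src, "<toy>", "exec")   # full translation up front
    exec(code, {})                         # execute the finished result

interpret(source)          # prints 6
compile_and_run(source)    # prints 6
```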

These aspects of programming got me thinking a bit.

Compilers remind me of automatic processes, like when we are operating on autopilot. Our brain is still taking in information, but it’s not processing it one bit at a time; it’s more big-picture, and less into the details at a given moment.

However, when we are learning something new, our brains are more focused on the details and more interested in processing things in bits and then “running” them. That is, when we are struggling with new information, or simply ingesting it, our brains are more apt to take in one bit of information at a time, process it, and then move on to the next. That way, if something is not understood, it’s caught early in the process and can be remedied.

Findings Friday: The aging brain is a distracted brain

As the brain ages, it becomes more difficult for it to shut out irrelevant stimuli; that is, it becomes more easily distracted. Sitting in a restaurant, having a conversation with the person right across the table, presents a new challenge when the restaurant is buzzing with activity.

However, the aging brain does not have to be the distracted brain. Training the mind to shut out irrelevant stimuli is possible, even for the older brain.

Brown University scientists conducted a study involving seniors and college-age students. The experiment was a visual one.

Participants were presented with a letter and number sequence and asked to report only the numbers, while simultaneously disregarding a series of dots. The dots sometimes moved randomly and at other times moved in a clear path. The latter scenario made the dots more difficult to ignore, as the eye tends to want to follow them.

The senior participants tended to unintentionally learn the dot motion patterns, which was determined when they were asked to describe which way the dots were moving. The college-age participants were better able to ignore the dots and focus on the task at hand (the numbers).

Another study also examined aging and distractibility, or the inability to maintain focus on a goal due to attention to irrelevant stimuli. Here, aging brains were trained to be more focused. The researchers used older rats as well as older humans. Three different sounds were played during the experiment, one of which was designated the target tone. Rewards were given when the target tone was identified and the other tones ignored. As subjects improved, the task became more challenging, with the target tone becoming less distinguishable from the other tones.

However, after training, both the rats and the humans made fewer errors. In fact, electrophysiological brain recordings indicated that neural responses to the non-target, or distracting, tones were decreased.

Interestingly, the researchers indicated that ignoring a distraction is not simply the flip side of focusing on a task. Older brains can be just as efficient at focusing as younger brains. The issue in aging brains lies in filtering out distractions. This is where training comes in: strengthening the brain’s ability to ignore distractors, not necessarily enhancing its ability to focus.

The major highlights of the study are that aspects of cognitive control can be enhanced in older humans through training, and that adaptive distractor training can selectively suppress responses to distractors.

Technique Thursday: Microiontophoresis

In the brain, communication is both electrical and chemical: an electric impulse (action potential) propagates down an axon, and chemicals (neurotransmitters) are released. In neuroscience research, it is sometimes necessary to apply a neurotransmitter to a specific region. One way to do this is via microelectrophoretic techniques such as microiontophoresis. With this method, a neurotransmitter can be administered to a living cell, and the consequent reactions recorded and studied.

Microiontophoresis takes advantage of the fact that ions move in an electric field. During the procedure, current is passed through a micropipette tip, and a solute is delivered to the desired location. A key advantage of this localized delivery is that very specific responses can be studied in the context of a precise location. A limitation, however, is that the exact concentration of solute delivered can be difficult to determine or control.

The main component of this technique is the use of electrical current to drive the release of solute; in other words, it is an “injection without a needle.” The current is typically galvanic (direct) current, applied continuously. The solute needs to be ionic, and it must be placed under an electrode of the same charge: positively charged ions under the positive electrode (anode), and negatively charged ions under the negative electrode (cathode). This way, the anode repels positively charged ions into the tissue, and the cathode repels negatively charged ions into the tissue.
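For a rough sense of dosing, the amount of solute ejected is commonly estimated with Faraday’s law, where the transport number (the fraction of current actually carried by the drug ion) must be measured empirically. Here is a minimal back-of-the-envelope sketch in Python; all of the example values are hypothetical.

```python
# Back-of-the-envelope estimate of iontophoretic delivery from
# Faraday's law: moles ejected ≈ I * t * n_t / (z * F), where n_t is
# the empirically measured transport number. All values hypothetical.

F = 96485.0   # Faraday constant, C/mol

def moles_ejected(current_amps, time_s, transport_number, valence):
    charge = current_amps * time_s                 # total charge passed, C
    return charge * transport_number / (valence * F)

# Hypothetical example: 50 nA ejection current for 10 s, transport
# number 0.3, monovalent cation (z = 1).
print(moles_ejected(50e-9, 10.0, 0.3, 1))          # ~1.6e-12 mol
```

Note how the estimate depends linearly on current and time, which is why ejection current and duration, rather than concentration in the pipette alone, are the practical knobs for controlling dose.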