Our brains are more than Turing Complete

I was listening to a lecture on computer functions and abstractions. A Turing complete computer can compute anything that is computable; that is, anything computable can be computed by a Turing complete computer.

However, what even a Turing complete computer lacks is abstraction. Namely, you have to rebuild a file every time you want to use it, and you can’t reuse the same variable names in other pieces of code. This, of course, can become quite annoying and very inefficient if you always have to go back and change pieces of code so that variable names don’t overlap.
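To see concretely what abstraction buys a programmer, here is a minimal Python sketch (my own illustration, not from the lecture): two functions each use a local variable named `total`, and because function scope keeps their namespaces separate, the names never collide.

```python
# Function scope as a form of abstraction: both functions use a local
# variable named "total" without interfering with each other.
def sum_squares(values):
    total = 0
    for v in values:
        total += v * v
    return total

def sum_cubes(values):
    total = 0  # same name, separate scope -- no collision
    for v in values:
        total += v ** 3
    return total

print(sum_squares([1, 2, 3]))  # 14
print(sum_cubes([1, 2, 3]))    # 36
```

Without this kind of scoping, reusing `total` in a second routine would require renaming it everywhere, which is exactly the inefficiency described above.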

So this got me thinking: our brains are like Turing complete computers, but with the ability to abstract. We can replace, modify, add, and delete variables in our minds relatively easily, without the information becoming jumbled. We can also compute near anything, if we sit down to it, assuming at least average intelligence.

Further, the brain can augment its own capabilities. As you learn, plasticity kicks in, making your brain more efficient and better able to connect concepts. I don’t know of any computer or AI that can do that.

So it seems that one of the extraordinary elements of the human brain is not so much the ability to compute (any computer does that quite well, and typically faster than a person) but the ability to abstract and to augment its own abilities. Computers, I’m sure, will eventually get to that point, but for now, the human brain transcends the abstraction abilities of AI.


Findings Friday: The aging brain is a distracted brain

As the brain ages, it becomes more difficult for it to shut out irrelevant stimuli—that is, it becomes more easily distracted. Sitting in a restaurant, holding a conversation with the person right across the table from you, becomes a new challenge when the room is buzzing with activity.

However, the aging brain does not have to be the distracted brain. Training the mind to shut out irrelevant stimuli is possible, even for the older brain.

Brown University scientists conducted a study involving seniors and college-age students. The experiment was a visual one.

Participants were presented with a letter and number sequence, and asked to report only the numbers, while simultaneously disregarding a series of dots. The dots sometimes moved randomly, and at other times, moved in a clear path. The latter scenario makes the dots more difficult to ignore, as the eye tends to want to watch the dots move.

The senior participants tended to unintentionally learn the dot motion patterns, which was determined when they were asked to describe which way the dots were moving. The college-age participants were better able to ignore the dots and focus on the task at hand (the numbers).

Another study also examined aging and distractibility, or an inability to maintain proper focus on a goal due to attention to irrelevant stimuli. Here, aging brains were trained to be more focused. The researchers used older rats, as well as older humans. Three different sounds were played during the experiment, one of which was designated the target tone. Rewards were given when the target tone was identified and the other tones ignored. As subjects improved, the task became more challenging, with the target tone becoming less distinguishable from the other tones.

However, after training, both the rats and the humans made fewer errors. In fact, electrophysiological brain recordings indicated that neural responses to the non-target, or distracting, tones were decreased.

Interestingly, the researchers indicated that ignoring a task is not the flip side of focusing on a task. Older brains can be just as efficient at focusing as younger brains. The issue in aging brains, however, lies in being able to filter out distractions. This is where training comes in: strengthening the brain’s ability to ignore distractors, not necessarily enhancing its ability to focus.

The major highlights of the study are that aspects of cognitive control could be enhanced in older humans through training, and that the adaptive distractor training selectively suppressed responses to distractors.

Technique Thursday: Microiontophoresis

In the brain, communication is both electrical and chemical. An electric impulse (action potential) propagates down an axon, and chemicals (neurotransmitters) are released. In neuroscientific research, the application of a neurotransmitter to a specific region may be necessary. One way to do this is via microelectrophoretic techniques like microiontophoresis. With this method, neurotransmitter can be administered to a living cell, and the consequent reactions recorded and studied.

Microiontophoresis takes advantage of the fact that ions flow in an electric field. Basically, during the method, current is passed through a micropipette tip and a solute is delivered to a desired location. A key advantage to this localized delivery and application is that very specific behaviors can be studied within context of location. However, a limitation is that the precise concentration of solute may sometimes be difficult to determine or control.

The core of this technique is the use of electrical current to drive the ejection of solute. In other words, it is an “injection without a needle.” The current is typically galvanic (direct) and applied continuously. The solute must be ionic, and must be placed under an electrode of the same charge. That is, positively charged ions are placed under the positive electrode (anode), and negatively charged ions under the negative electrode (cathode). This way, the anode repels the positively charged ions into the tissue, and the cathode repels the negatively charged ions into the tissue.
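As a rough illustration of the physics at work, the amount of solute ejected can be estimated from the current via Faraday’s law. The sketch below is a back-of-envelope calculation only; the 100 nA current, the 0.3 transport number, and the monovalent ion are assumed values for illustration, not figures from any particular experiment.

```python
# Back-of-envelope estimate of iontophoretic delivery using Faraday's law:
# moles = (I * t * n) / (z * F), where n is the transport number
# (the fraction of current actually carried by the drug ion) and z is
# the ion's valence. All parameter values below are illustrative assumptions.

FARADAY = 96485.0  # Faraday constant, C/mol

def moles_delivered(current_A, time_s, transport_number, valence):
    """Estimate moles of an ionic solute ejected by a given current."""
    return (current_A * time_s * transport_number) / (valence * FARADAY)

# e.g. a 100 nA ejection current for 60 s, transport number 0.3, monovalent ion
nmol = moles_delivered(100e-9, 60, 0.3, 1) * 1e9
print(f"{nmol:.3f} nmol")  # -> 0.019 nmol
```

The uncertain transport number in this formula is exactly why, as noted above, the precise concentration delivered can be difficult to determine or control.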

The Pain in Brain Stays Mainly in the…Brain?

Pain is a major force of survival. Without pain, we would, simply, not survive. Of course, pain can be cumbersome, and unnecessary, at times. For example, when you stub your toe on your desk, do you really need that much pain for that long?

More importantly to this discussion, what do you do when you stub your toe? You probably do a bit of hopping, and you certainly grab that toe and squeeze or rub it.

Why do you do that? The Gate Control Theory of Pain can answer that.

At its most basic, the Gate Control Theory of Pain holds that non-painful input (like that rubbing) closes the gates to painful input, preventing the sensation of pain from being fully perceived by the brain. Simply, when the painful stimulus is overridden by a non-painful stimulus, the pain signal does not travel to the central nervous system (CNS).

Even more simply, non-painful stimuli suppress pain.

Why is that?

Collaterals, or processes, of large sensory fibers that carry cutaneous (skin) sensory input activate inhibitory interneurons. Now, inhibitory interneurons do just what their name implies: they inhibit. And what do they inhibit in this case? Pain sensory pathways. This therefore modulates the transmission of pain information by pain fibers. Non-painful input suppresses pain by “closing the gate” to painful input.

This happens at the spinal cord level: non-painful stimulation will result in presynaptic inhibition on pain (nociceptive) fibers that synapse on nociceptive spinal neurons. This inhibition, which is presynaptic, will therefore block any incoming painful stimuli from reaching the CNS.
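The gating logic described above can be caricatured in a few lines of Python. This is a toy sketch of the idea only, not a physiological model; the numbers and the simple subtraction rule are assumptions made purely for illustration.

```python
# Toy sketch of the gate-control idea (not a physiological model):
# touch input drives an inhibitory interneuron that presynaptically
# suppresses the nociceptive signal before it is relayed to the CNS.

def relayed_pain(pain_input, touch_input, inhibition_gain=1.0):
    """Pain signal that makes it through the 'gate', floored at zero."""
    inhibition = inhibition_gain * touch_input
    return max(0.0, pain_input - inhibition)

print(relayed_pain(5.0, 0.0))  # no rubbing: full signal -> 5.0
print(relayed_pain(5.0, 3.0))  # rubbing the toe closes the gate -> 2.0
print(relayed_pain(5.0, 9.0))  # strong touch input: gate fully closed -> 0.0
```

The key behavior is that the non-painful input never adds to the pain signal; it can only subtract from it, which is the "closing the gate" intuition.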

More on this topic on this week’s Séance Sunday, coming up!

Monty Python’s influence in Neuroscience

Ever heard of the computer programming language called Python? Where do you think Python got its name from?

Monty Python!

Python programming has had an increasing effect on neuroscientific developments. In fact, with the growing field of computational neuroscience, Python programming has taken a role in how neuroscience research occurs.

Python itself is a high-level programming language. Its syntax is relatively straightforward, as one of its main philosophies is code readability. Therefore, coders can use fewer lines of code than they would in other programming languages.

In neuroscience, Python is used in data analysis and simulation. Python is ideal for neuroscience research because both neural analysis and simulation code should be readable and simple to generate and execute. Further, it should be understandable long after it was written, by people other than its original author. That is, it should be timeless and should make sense to anyone who reads the code and tries to execute or replicate it.

Of course, MATLAB is good for neuroscience research purposes, but MATLAB is a closed-source, expensive product, whereas Python is open-source and more readily available to the masses. In fact, you can download Python for free from python.org. Further, there are a number of courses that can help a dedicated learner teach themselves Python.

Python also has scientific packages for tasks like linear algebra, packages specifically for neural data analysis, and simulation packages that describe neuronal connectivity and function. Python can even be used for database management, which may be important when a laboratory holds large amounts of data. Because Python combines features from other languages, it can interoperate with foreign code and libraries beyond the ones it was developed in. This allows for effective sharing and collaboration between laboratories. SciPy is “a collection of open source software for scientific computing in Python.”
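As a small taste of what Python-based neural data analysis looks like, here is a sketch that bins spike times into a firing-rate histogram with NumPy. The spike times and bin width are made up for illustration, not real recordings.

```python
# Minimal neural data analysis sketch: bin (hypothetical) spike times
# into a firing-rate histogram using NumPy.
import numpy as np

spike_times = np.array([0.05, 0.12, 0.13, 0.48, 0.51, 0.52, 0.53, 0.88])  # seconds
bin_width = 0.1   # 100 ms bins
duration = 1.0    # total recording length, seconds

edges = np.arange(0.0, duration + bin_width, bin_width)
counts, _ = np.histogram(spike_times, bins=edges)
rate_hz = counts / bin_width  # spikes per second in each bin

print(rate_hz)
```

A few readable lines replace what would be a loop-and-bookkeeping exercise in a lower-level language, which is exactly the readability argument made above.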

In relation to neuronal simulations, Python makes sense because:

1. It is easy to learn. This is because it has a clear syntax, is an interpreted language (meaning the code is interactive and provides immediate feedback to the coder), and lists/dictionaries are built into the language itself.

2. It has a standard library, which provides built-in features important for data processing, database access, network programming and more.

3. It is easy to interface Python with other programming languages, which means that a coder can develop an interface in Python, and then implement the performance-critical parts in faster languages such as C and C++.

An example of Python being used as a neuronal simulator is NEURON. An example of code is the following:

>>> from neuron import h
>>> soma = h.Section()
>>> dend = h.Section()
>>> dend.connect(soma, 0, 0)
>>> soma.insert('hh')
>>> syn = h.ExpSyn(0.9, sec=dend)

(taken from Davison, et al., 2009).

Here, a neuron is “built”: a soma and a dendrite are “created” and connected, Hodgkin–Huxley channels are inserted into the soma, and a synapse is placed on the dendrite. It is clear, then, how simple and straightforward Python code is, and how important it can then be in neuroscience.

Not quite Ali Baba: The Robbers Cave Experiment

Muzafer Sherif, an American psychologist of Turkish heritage, made a contribution to psychology via his Realistic Conflict Theory. This theory states that group conflicts, stereotypes and prejudices are the result of competition for resources.

So it’s sort of caveman group 1 meets caveman group 2, all fighting for the same food and other resources, and deciding that the other group is the enemy and needs to be hated on.

Sherif performed the famous Robbers Cave experiment to support his theory.

Unfortunately, Robbers Cave was not a pirate cove or Ali Baba’s hangout, but rather a state park in Oklahoma. The experiment itself involved two groups of 12-year-old boys, totaling 22 boys.

The boys were all from white middle-class backgrounds, from two-parent Protestant homes, and had no relation or connection to each other. In other words, they were all strangers to each other. The boys were randomly assigned to one of two groups, and each group was unaware of the other group’s existence.

Then, as separate groups, a bus picked them up in the summer of ’54 and took them to a fake summer camp at a 200-acre Boy Scouts camp in Robbers Cave State Park. Even at this state park, the groups were kept separate from each other, but were encouraged to get to know each other as two individual groups via common goals that required discussion, planning and execution.

During the first phase, the two groups did not know of each other’s existence. The boys developed an attachment to the group they belonged to during the first week of camp. They established their own cultural norms via activities such as hiking and swimming. They even chose names for their groups (The Eagles and The Rattlers), and had t-shirts and flags with their group name.

Then came the Competition stage. Over the course of 4-6 days, friction between the two groups was engineered. Basically, there was a turf war.

In this Competition stage, the two groups were pitted against each other in contests such as baseball and tug-of-war, with prizes like trophies. Individual prizes were also given out to the winning group.

Now, the Rattlers, confident boys that they were, were absolutely certain that they would be the victors. They spent a day discussing the contests and improving their skills on the ball field, where they were bold enough to put up a “Keep Off” sign. In other words, they set up their own territory. The Rattlers even went so far as to make threatening remarks about what would happen if The Eagles bothered them.

Sherif built in situations that frustrated one group over the other, such as having one group get delayed going to a picnic so that by the time they arrived, the other group had eaten all the food.

Now of course, the prejudice began verbally, with name-calling and taunting. As the Competition phase continued, the verbal abuse became more physical, with The Eagles burning the flag of The Rattlers. The day after, The Rattlers retaliated by ransacking The Eagles’ cabin, stealing private property and overturning the beds. The researchers had to separate the boys because they became so violent with each other.

There was then a 2-day cooling off period, where the boys were instructed to characterize the two groups. Unsurprisingly, each boy described his own group in more favorable terms than the other group.

The results of this experiment supported Sherif’s Realistic Conflict Theory: competition between groups can produce prejudice and negative behavior.

Now, a major ethical concern with the experiment was deception: the boys were not told of the nature of the experiment, nor were they protected from harm, either psychological or physical, to the best of the researchers’ abilities. The sample was also biased: middle-class, white, and young, the sample is hardly powerful enough to generalize to larger groups, such as nations.

The Sentry (Bob Reynolds) and the Brain

Optogenetics. It sounds like genes being lit up in neon colors, like a flashy Las Vegas sign.

But what is it really?

Optogenetics is a technique that takes advantage of proteins found in certain algae species that respond to different wavelengths of light. This algal response to light includes opening a channel (called a channelrhodopsin) in the cell membrane, allowing ions like Na+ and Cl− to flow into and out of the cell.

Of course, this is also how neurons operate: they work via the controlled flow of certain ions, such as Na+ and Cl−, into and out of the cell.

So, if you take a gene that encodes the light-sensitive channel of the algae, and force neurons to express that gene, what do you have?

Neurons that have been forced to become responsive to light! Therefore, shining a light on those neurons will force them to fire an action potential (nerve impulse). If you turn off the light, they stop firing. Then, if you use a different channel protein, you can silence those neurons, and they will no longer fire action potentials.
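The on/off control described above can be caricatured with a toy leaky integrate-and-fire neuron that receives an extra inward current only while the “light” is on. All parameter values here are illustrative assumptions, not measurements from any real preparation.

```python
# Toy leaky integrate-and-fire neuron with a "channelrhodopsin-like"
# light-gated current: the cell spikes only while the light is on.
# Parameter values are illustrative, not from a real preparation.

def simulate(light_on, steps=100, dt=1.0):
    v, v_rest, v_thresh = -70.0, -70.0, -55.0  # membrane potentials, mV
    tau, light_current = 10.0, 2.0             # leak time constant, drive
    spikes = 0
    for step in range(steps):
        drive = light_current if light_on(step) else 0.0
        v += dt * ((v_rest - v) / tau + drive)  # leak toward rest + drive
        if v >= v_thresh:
            spikes += 1
            v = v_rest  # reset after the action potential
    return spikes

print(simulate(lambda t: True))   # light on: the neuron fires repeatedly
print(simulate(lambda t: False))  # light off: the neuron stays silent
```

Swapping the sign of the light-gated current would crudely mimic the silencing protein mentioned above: light would then push the cell away from threshold instead of toward it.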

This then gives pointed and reversible control over the neuronal action potential activity patterns.


Source: http://neurobyn.blogspot.se/2011/01/controlling-brain-with-lasers.html

The technique’s major asset is the specificity with which you can control gene expression and neuronal firing. This is possible because different types of cells express different sets of genes. Each gene has two major parts: one part encodes for a specific protein, and the other is a regulatory region which instructs the gene on when and where and how much of a certain protein is to be made. These two regions, the encoding one and the regulatory one, are separate from each other. In fact, the regulatory region is on a neighbouring segment of DNA.

Therefore, you can slice out the DNA that makes up the regulatory region and splice it to the protein-encoding segment of another gene, such as one encoding a channelrhodopsin. Then, you can take that hybrid, stick it into a vector, and deliver it into an organism. What do you think that organism will now be able to do?

Express that protein, like the channelrhodopsin. And that organism will only express the new protein in the cell types directed by the regulatory segment of the DNA you chose in lab.

Now, biological research has a history of using light to control or interact with living systems. For example, a light-based technique called CALI (chromophore-assisted light inactivation) is used to inhibit (by destruction) certain proteins. Lasers have also been used to destroy cells, and UV light has been used to activate a protein that regulates neurons.

In neuroscience, optogenetics can be used to silence or activate neurons in different parts of the brain. For example, the amygdala is involved in fear. Of course, we can be conditioned to be fearful of certain things. Like the man who fears driving over bridges because he was once on a bridge that was damaged, or the girl who fears dogs because she was nearly bitten by one, our fear responses can be strong and, without intervention, permanent.

Optogenetics can step in and help us understand the workings of fear, and how it occurs.

For example, researchers conducted a study regarding the development of fear associations. Activation of lateral amygdala pyramidal neurons by aversive stimuli can drive the formation of associative fear memories. This was demonstrated by expressing channelrhodopsin in lateral amygdala pyramidal neurons and pairing an auditory cue with light stimulation of those neurons, rather than with a direct aversive stimulus. After this training, presenting the tone alone produced a fear response.

It is clear, then, that optogenetics can provide real-time information about what neurons are doing, and when. Further, it can also allow us to control the workings of neurons.

It is possible that optogenetics can get us to the point of not only understanding the brain, including its dysfunctions, but also solving problems such as epilepsy and depression. Maybe even schizophrenia.