Interpreters, Compilers, and Learning

[Disclaimer: I know very little about computers and operating systems at this point, as I just started going back to college for my second BS, this time in CS. However, with my background in neuroscience, I can’t help but try to find parallels between what I already know about the brain and the things I’m learning about computers. I realize that the worn analogy of brains and computers doesn’t always hold weight, but as I try to understand the new things I’m learning, I’m going to refer back to things I already know, which is the brain.

As I learn more, I’ll probably update articles. If you have any insight into anything I’ve written, please share with me, as I and my readers love to learn!]

*********

Computers can only execute programs that have been written in low-level languages. However, low-level languages are more difficult to write and take more time. Therefore, people tend to write computer programs in high-level languages, which then must be translated by the computer into low-level languages before the program can be run.

Now, there are two kinds of programs that can process high-level languages into low-level languages: interpreters and compilers.

An interpreter reads the high-level program and executes it as it goes, reading the code one line at a time and executing between lines. Hence the term INTER (between) in INTERpreter.

A compiler reads the program and translates it completely before the program runs. That is, the compiler translates the program as a whole, and then runs it. This is unlike the interpreter’s one-line-at-a-time method.
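
The two approaches can even be sketched in Python itself. This is purely a toy illustration of the idea, not how real interpreters or compilers work; the one-command-per-line “language” and every name below are made up for the example:

```python
# A toy "program" in a made-up one-command-per-line language.
program = ["PRINT hello", "PRINT world"]

def interpret(lines):
    """Interpreter: read a line, execute it, then move to the next."""
    outputs = []
    for line in lines:
        command, _, argument = line.partition(" ")
        if command == "PRINT":
            outputs.append(argument)  # "execute" this line immediately
    return outputs

def compile_program(lines):
    """Compiler: translate the whole program first; run nothing yet."""
    translated = []
    for line in lines:
        command, _, argument = line.partition(" ")
        if command == "PRINT":
            translated.append(lambda arg=argument: arg)
    return translated  # a complete translation, ready to run as one unit

print(interpret(program))                             # executed line by line
print([step() for step in compile_program(program)])  # translated fully, then run
```

Both calls produce the same result; the difference is only *when* each line gets executed, which is the whole point of the distinction.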

These aspects of programming got me thinking a bit.

Compilers remind me of automatic processes, like when we are operating on auto-pilot. Our brain is still taking in information, but it’s not processing it one bit at a time; it’s more big-picture, and less into the details at a given moment.

However, when we are learning something new, our brains are more focused on the details and more interested in processing things in bits, and then “running” them. That is, when we are struggling with new information, or just ingesting it, our brains are more apt to take in one bit of information at a time, process it, and then move on to the next piece. That way, if something is not understood, it’s caught early in the process, and that can be remedied.

Findings Friday: The aging brain is a distracted brain

As the brain ages, it becomes more difficult for it to shut out irrelevant stimuli; that is, it becomes more easily distracted. Having a conversation with the person sitting right across the table from you becomes a new challenge when the restaurant is buzzing with activity.

However, the aging brain does not have to be the distracted brain. Training the mind to shut out irrelevant stimuli is possible, even for the older brain.

Brown University scientists conducted a study involving seniors and college-age students. The experiment was a visual one.

Participants were presented with a letter and number sequence, and asked to report only the numbers, while simultaneously disregarding a series of dots. The dots sometimes moved randomly, and at other times, moved in a clear path. The latter scenario makes the dots more difficult to ignore, as the eye tends to want to watch the dots move.

The senior participants tended to unintentionally learn the dot motion patterns, which was revealed when they were asked to describe which way the dots were moving. The college-age participants were better able to ignore the dots and focus on the task at hand (the numbers).

Another study also examined aging and distractibility, or an inability to maintain proper focus on a goal due to attention to irrelevant stimuli. Here, aging brains were trained to be more focused. The researchers used older rats, as well as older humans. Three different sounds were played during the experiment, one of which was designated the target tone. Rewards were given when the target tone was identified and the other tones ignored. As subjects improved, the task became more challenging, with the target tone becoming less distinguishable from the other tones.

However, after training, both the rats and the humans made fewer errors. In fact, electrophysiological brain recordings indicated that neural responses to the non-target, or distracting, tones were decreased.

Interestingly, the researchers indicated that ignoring a task is not the flip side of focusing on a task. Older brains can be just as efficient at focusing as younger brains. The issue in aging brains, however, lies in being able to filter out distractions. This is where training comes in: strengthening the brain’s ability to ignore distractors; not necessarily enhancing the brain’s ability to focus.

The major highlights of the study are that aspects of cognitive control can be enhanced in older humans through training, and that the adaptive distractor training selectively suppressed responses to distractors.

Technique Thursday: Microiontophoresis

In the brain, communication is both electrical and chemical. An electric impulse (action potential) propagates down an axon, and chemicals (neurotransmitters) are released. In neuroscientific research, the application of a neurotransmitter to a specific region may be necessary. One way to do this is via microelectrophoretic techniques like microiontophoresis. With this method, neurotransmitter can be administered to a living cell, and the consequent reactions recorded and studied.

Microiontophoresis takes advantage of the fact that ions flow in an electric field. Basically, during the method, current is passed through a micropipette tip and a solute is delivered to a desired location. A key advantage to this localized delivery and application is that very specific behaviors can be studied within context of location. However, a limitation is that the precise concentration of solute may sometimes be difficult to determine or control.

The main component of this technique is the use of electrical current to stimulate the release of solute. In other words, it is an “injection without a needle.” The main driver of this electric current is galvanic current, typically applied continuously. The solute needs to be ionic, and must be placed under an electrode of the same charge. That is, positively charged ions must be placed under the positive electrode (anode), and negatively charged ions must be placed under the negative electrode (cathode). This way, the anode repels the positively charged ions into the tissue, and the cathode repels the negatively charged ions into the tissue.

Manic Monday: Eye of the Storm

A classic experiment on discrimination was Jane Elliott’s Blue eyes/brown eyes experiment. Jane Elliott is a former third-grade teacher, with no research background to speak of. However, the day after Martin Luther King Jr. was shot, she decided to try a little experiment with her young, impressionable students.

What she did next was nothing short of fascinating.

On April 4, 1968, Jane Elliott was ironing a teepee for one of her classroom activities. On the television, she was watching news about the assassination of King. One white reporter said something that shocked Elliott:

“When our leader [John F. Kennedy] was killed several years ago, his widow held us together. Who’s going to control your people?”

Elliott could not believe that the white reporter felt that Kennedy was a white-person leader, and that black people would now get out of control.

So she decided to twist her little Native American classroom exercise and replace teepees and moccasins with blue-eyed and brown-eyed students.

So, on the first day of her experiment, Elliott decided that since she had blue eyes and was the teacher, blue-eyed students were superior. The blue-eyed and the brown-eyed children were consequently separated based on something as superficial as their eye color.

Blue-eyed children were given brown collars to wrap around the necks of their brown-eyed peers. All the better to notice them with.

The blue-eyed children were then given extra helpings of food at lunchtime, five extra minutes at recess, and a chance to play at the new jungle gym at school. The brown-eyed children were left out of these privileges. The blue-eyed children were also allowed to sit at the front of the class, while brown-eyed children were kept at the back.

Blue-eyed children were encouraged to play with other blue-eyeds, but told to ignore brown-eyed peers. Further, blue-eyed students were allowed to drink at the water fountain, while the brown-eyed ones were prohibited from doing so. If they forgot, they were chastised.

Now, of course the children resisted the idea that the blue-eyed students were superior somehow. Elliott countered eloquently, and with a lie: melanin is linked to blue eyes, as well as intelligence.

The students’ initial resistance soon wore off.

The blue-eyed “superior” students then became arrogant and bossy. They were mean, and excluded their brown-eyed peers. They thought themselves superior, simply on the basis of their eye color.

What’s even more interesting is that the blue-eyed students did better on some of their exams, and performed at higher ability on math and reading than they had previously. Just believing they were superior affected their grades positively.

Even more interesting, but perhaps not surprising, was what happened to the brown-eyed students:

They became shy, timid, and, frighteningly, subservient. They did worse on their tests, and during recess they kept away from the blue-eyed children. Each group effectively segregated itself by eye color.

The next week, Elliott added another twist to the experiment: she made the blue-eyed students inferior, and made the brown-eyed ones superior. Brown collars for the blue-eyeds now.

The brown-eyeds then began to act meanly towards the blue-eyed kids, though at a lesser intensity.

Several days later, the blue-eyed students were told they could remove their brown collars. She then had the students reflect on the experiment by writing down what they thought and had learned from the experiment.

Needless to say, the experiment had a major impact on her students. Elliott continued the experiment with her students for years after, and has appeared on Oprah and other venues, promoting anti-discrimination.

A documentary was filmed about her experiment, called Eye of the Storm.

A beautiful video about a modern re-enactment of the experiment can be found here.

The Pain in Brain Stays Mainly in the…Brain?

Pain is a major force of survival. Without pain, we would, simply, not survive. Of course, pain can be cumbersome, and unnecessary, at times. For example, when you stub your toe on your desk, do you really need that much pain for that long?

More importantly to this discussion, what do you do when you stub your toe? You probably do a bit of hopping, and you certainly grab that toe and squeeze or rub it.

Why do you do that? The Gate Control Theory of Pain can answer that.

At its most basic, the Gate Control Theory of Pain dictates that non-painful input (like that rubbing) closes the gates to painful input, preventing the sensation of pain from being fully perceived by the brain. Simply, when the painful stimulus is overridden by a non-painful stimulus, the pain signal does not travel to the central nervous system (CNS).

Even more simply, non-painful stimuli suppress pain.

Why is that?

Collaterals, or processes, of large sensory fibers that carry cutaneous (skin) sensory input activate inhibitory interneurons. Now, inhibitory interneurons do just what their name implies: they inhibit. And what do they inhibit in this case? Pain sensory pathways. This therefore modulates the transmission of pain information by pain fibers. Non-painful input suppresses pain by “closing the gate” to painful input.

This happens at the spinal cord level: non-painful stimulation will result in presynaptic inhibition on pain (nociceptive) fibers that synapse on nociceptive spinal neurons. This inhibition, which is presynaptic, will therefore block any incoming painful stimuli from reaching the CNS.
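
The gating idea can be caricatured with a toy calculation. To be clear, this is an illustrative sketch I made up, not a real model of nociception: the linear “gate” rule, the inhibition strength, and the numbers are all invented:

```python
def perceived_pain(nociceptive_input, touch_input, inhibition_strength=0.8):
    """Toy gate-control sketch: non-painful touch input drives inhibitory
    interneurons, which subtract from the pain signal before it ascends."""
    gated = nociceptive_input - inhibition_strength * touch_input
    return max(gated, 0.0)  # a signal cannot go below zero

print(perceived_pain(10, 0))  # stubbed toe, no rubbing: full pain gets through
print(perceived_pain(10, 8))  # rubbing the toe "closes the gate" partway
```

The qualitative behavior is the point: with zero touch input the pain signal passes unchanged, and as touch input grows, less and less pain reaches the brain.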

More on this topic on this week’s Séance Sunday, coming up!

Monty Python’s influence in Neuroscience

Ever heard of the computer programming language called Python? Where do you think Python got its name from?

Monty Python!

Python programming has had an increasing effect on neuroscientific developments. In fact, with the growing field of computational neuroscience, Python programming has taken a role in how neuroscience research occurs.

Python itself is a high-level programming language. Its syntax is relatively straightforward, as one of its main philosophies is code readability. Therefore, coders can use fewer lines of code than they would in other programming languages.
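
As a small sketch of that readability (the sentence being counted is just a made-up example), tallying word frequencies takes only a couple of lines:

```python
from collections import Counter

words = "the brain is like the brain".split()
counts = Counter(words)        # tally every word in one call
print(counts.most_common(2))   # -> [('the', 2), ('brain', 2)]
```

The same task in many lower-level languages would need an explicit hash map, a loop, and a sort.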

In neuroscience, Python is used in data analysis and simulation. Python is ideal for neuroscience research because both neural analysis and simulation code should be readable and simple to generate and execute. Further, it should be understandable long after it was written, by anyone reading the code. That is, it should be timeless and should make sense to anyone who reads the code and tries to execute or replicate it.

Of course, MATLAB is also good for neuroscience research purposes, but MATLAB is a closed-source, expensive product, whereas Python is open-source and more readily available to the masses. In fact, you can download Python here. Further, there are a number of courses that can help a dedicated learner teach themselves Python.

Python also has scientific packages for areas like linear algebra, packages specifically for neural data analysis, and simulation packages for describing neuronal connectivity and function. Python can even be used for database management, which may be important when a laboratory has large amounts of data. Because Python combines features from other languages, it can interface with foreign code and libraries beyond the ones it was developed with. This allows for effective sharing and collaboration between laboratories. SciPy is “a collection of open source software for scientific computing in Python.”
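
A tiny, made-up example of the kind of neural data analysis this enables, using nothing beyond Python’s standard library (the spike times are invented for illustration):

```python
from statistics import mean

# Hypothetical spike times (in seconds) recorded from one neuron.
spike_times = [0.05, 0.12, 0.31, 0.33, 0.60, 0.95]

# Firing rate: number of spikes divided by the recording duration.
duration = 1.0  # seconds
firing_rate = len(spike_times) / duration

# Inter-spike intervals (ISIs): gaps between consecutive spikes.
isis = [b - a for a, b in zip(spike_times, spike_times[1:])]

print(f"{firing_rate:.1f} Hz, mean ISI {mean(isis) * 1000:.0f} ms")
```

Real analyses would lean on packages like NumPy and SciPy for speed, but the logic stays this readable.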

In relation to neuronal simulations, Python makes sense because:

1. It is easy to learn. It has a clear syntax, it is an interpreted language (meaning coding is interactive and provides immediate feedback to the coder), and lists/dictionaries are built into the language itself.

2. It has a standard library, which provides built-in features important for data processing, database access, network programming and more.

3. It is easy to interface Python with other programming languages, which means that a coder can develop an interface in Python and then implement the performance-critical parts in faster languages, such as C and C++.
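
As a small sketch of point 1, lists and dictionaries work with no imports or declarations at all (the section names and parameters below are invented for illustration, not a real simulator’s API):

```python
# A dictionary mapping (hypothetical) cell sections to their parameters,
# and a list of sections to simulate -- both built into the language.
params = {
    "soma":     {"length_um": 20,  "diameter_um": 20},
    "dendrite": {"length_um": 200, "diameter_um": 2},
}
sections = list(params)  # -> ["soma", "dendrite"]

for name in sections:
    print(name, params[name]["length_um"], params[name]["diameter_um"])
```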

An example of Python being used as a neuronal simulator is NEURON. An example of code is the following:

>>> from neuron import h
>>> soma = h.Section()
>>> dend = h.Section()
>>> dend.connect(soma, 0, 0)
>>> soma.insert('hh')
>>> syn = h.ExpSyn(0.9, sec=dend)

(taken from Davison et al., 2009).

Here, a neuron is “built”: a soma and a dendrite are “created” and connected, channels are inserted into the soma, and a synapse is attached to the dendrite. It is clear, then, how simple and straightforward Python code is, and how important it can be in neuroscience.

Not quite Alibaba: Robber’s Cave Experiment

Muzafer Sherif, an American psychologist of Turkish heritage, made a contribution to psychology via his Realistic Conflict Theory. This theory states that group conflicts, stereotypes and prejudices are the result of competition for resources.

So it’s sort of caveman group 1 meets caveman group 2, all fighting for the same food and other resources, and deciding that the other group is the enemy and need to be hated on.

Sherif performed the famous Robber’s Cave experiment to support his theory.

Unfortunately, Robber’s Cave was not a pirate cove or Alibaba’s hangout, but a state park in Oklahoma. The experiment itself involved two groups of 12-year-old boys, 22 boys in total.

The boys were all from white middle-class backgrounds, from two-parent Protestant homes, and had no relation or connection to each other. In other words, they were all strangers to each other. The boys were randomly assigned to one of two groups, and each group was unaware of the other group’s existence.

Then, in the summer of ’54, a bus picked up each group separately and took them to a fake summer camp at a 200-acre Boy Scouts camp in Robbers Cave State Park. Even at the state park, the groups were kept apart, but within each group the boys were encouraged to get to know one another via common goals that required discussion, planning and execution.

During this first phase, the two groups still did not know of each other’s existence, and the boys developed an attachment to the group they belonged to during the first week of camp. They established their own cultural norms via activities such as hiking and swimming. They even chose names for their groups (The Eagles and The Rattlers), and had t-shirts and flags with their group names.

Then came the Competition stage. Over the course of four to six days, friction between the two groups was deliberately engineered. Basically, there was a turf war.

In this Competition stage, the two groups were pitted against each other in contests such as baseball and tug-of-war, with prizes like trophies for the winners. Individual prizes were also given out to members of the winning group.

Now, the Rattlers, confident boys that they were, were absolutely sure that they would be the victors. They spent a day discussing the contests and improving their skills on the ball field, where they were bold enough to put up a “Keep Off” sign. In other words, they marked out their own territory. The Rattlers even went so far as to make threatening remarks about what would happen if The Eagles bothered them.

Sherif built in situations that frustrated one group over the other, such as having one group get delayed going to a picnic so that by the time they arrived, the other group had eaten all the food.

Now of course, the prejudice began verbally, with name-calling and taunting. As the Competition phase continued, the verbal abuse became more physical, with The Eagles burning the flag of The Rattlers. The day after, The Rattlers retaliated by ransacking The Eagles’ cabin, stealing private property and overturning the beds. The researchers had to separate the boys because they became so violent with each other.

There was then a 2-day cooling off period, where the boys were instructed to characterize the two groups. Unsurprisingly, each boy described his own group in more favorable terms than the other group.

The results of this experiment supported Sherif’s Realistic Conflict Theory: inter-group conflict can produce prejudice and negative behavior.

Now, a major ethical concern with the experiment was deception: the boys were not told the nature of the experiment, nor were they protected, to the best of the researchers’ abilities, from psychological or physical harm. The sample was also biased: middle-class, white, and young, it is hardly representative enough to generalize to larger groups, such as nations.