Technique Thursday: Microiontophoresis

In the brain, communication is both electrical and chemical: an electrical impulse (action potential) propagates down an axon, and chemicals (neurotransmitters) are released at its terminal. In neuroscience research, it is often necessary to apply a neurotransmitter to a specific region. One way to do this is via microelectrophoretic techniques such as microiontophoresis. With this method, a neurotransmitter can be administered to a living cell, and the resulting responses recorded and studied.

Microiontophoresis takes advantage of the fact that ions flow in an electric field. In this method, current is passed through a micropipette tip, driving a charged solute out to a desired location. A key advantage of this localized delivery is that very specific effects can be studied in the context of location. A limitation, however, is that the precise concentration of delivered solute can be difficult to determine or control.

The core of the technique is the use of electrical current to drive the ejection of the solute. In other words, it is an “injection without a needle.” The driving current is direct (galvanic) current, typically applied continuously. The solute needs to be ionic, and must be placed under an electrode of the same charge. That is, positively charged ions must be placed under the positive electrode (anode), and negatively charged ions must be placed under the negative electrode (cathode). This way, the anode will repel positively charged ions into the tissue, and the cathode will repel negatively charged ions into the tissue.
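How much drug actually gets ejected? A back-of-envelope estimate uses Faraday's law, scaled by a transport number (the fraction of the current actually carried by the drug ion). Here is a minimal sketch; the transport number and current values are illustrative assumptions, not measured ones:

```python
# Rough estimate of the amount of drug ejected during microiontophoresis,
# using Faraday's law: n = (I * t * T) / (z * F), where T is the
# transport number (fraction of current carried by the drug ion).

FARADAY = 96485.0  # Faraday constant, C/mol

def moles_ejected(current_amps, duration_s, transport_number, valence):
    """Moles of ionic drug delivered by an ejection current."""
    charge = current_amps * duration_s  # total charge in coulombs
    return (charge * transport_number) / (valence * FARADAY)

# Example (assumed values): 50 nA ejection current for 30 s,
# transport number 0.3, monovalent cation (z = +1)
n = moles_ejected(50e-9, 30, 0.3, 1)
print(f"{n * 1e12:.2f} pmol delivered")
```

This also illustrates the limitation noted above: the transport number depends on the pipette and solution, so the delivered dose is only ever an estimate.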


Monty Python’s influence in Neuroscience

Ever heard of the computer programming language called Python? Where do you think Python got its name from?

Monty Python!

Python programming has had an increasing effect on neuroscientific developments. In fact, with the growing field of computational neuroscience, Python has come to shape how neuroscience research is done.

Python itself is a high-level programming language. Its syntax is relatively straightforward, as one of its main design philosophies is code readability. As a result, programmers can often accomplish a task in fewer lines of code than they would in other programming languages.

In neuroscience, Python is used in data analysis and simulation. Python is ideal for neuroscience research because both neural analysis and simulation code should be readable and simple to generate and execute. Further, the code should be understandable long after it was written, by people other than the original author. That is, it should be timeless and should make sense to anyone who reads the code and tries to execute or replicate it.

Of course, MATLAB is good for neuroscience research purposes, but MATLAB is a closed-source, expensive product, whereas Python is open-source and more readily available to the masses. In fact, you can download Python here. Further, there are a number of courses that can help a dedicated learner teach themselves Python.

Python also has scientific packages for tasks like linear algebra, packages specifically for neural data analysis, and simulation packages that describe neuronal connectivity and function. Python can even be used for database management, which matters when a laboratory holds large amounts of data. Because Python interfaces well with other languages, it can be used alongside foreign code and libraries beyond the ones it was developed with. This allows for effective sharing and collaboration between laboratories. SciPy, for example, is “a collection of open source software for scientific computing in Python.”
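To make the data-analysis point concrete, here is a small hypothetical example using SciPy to band-pass filter a noisy synthetic "neural" signal; the signal, frequencies, and filter settings are all invented for illustration:

```python
# Band-pass filtering a noisy synthetic signal with SciPy -- the kind of
# routine step that comes up constantly in neural data analysis.
import numpy as np
from scipy.signal import butter, filtfilt

np.random.seed(0)                   # reproducible noise
fs = 1000.0                         # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)         # 1 second of samples
clean = np.sin(2 * np.pi * 10 * t)  # 10 Hz "oscillation" of interest
noisy = clean + 0.5 * np.random.randn(t.size)

# 4th-order Butterworth band-pass around the band of interest (5-15 Hz)
b, a = butter(4, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, noisy)    # zero-phase filtering
```

A few readable lines do what would take considerably more boilerplate in a lower-level language, which is much of Python's appeal here.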

In relation to neuronal simulations, Python makes sense because:

1. It is easy to learn. This is because it has a clear syntax, is an interpreted language (meaning the code is interactive and provides immediate feedback to the coder), and lists/dictionaries are built into the language itself.

2. It has a standard library, which provides built-in features important for data processing, database access, network programming, and more.

3. It is easy to interface Python with other programming languages, which means that a coder can develop an interface in Python and then implement the performance-critical parts in faster languages, such as C and C++.

An example of Python being used as a neuronal simulator is NEURON. An example of code is the following:

>>> from neuron import h
>>> soma = h.Section()
>>> dend = h.Section()
>>> dend.connect(soma, 0, 0)
>>> soma.insert('hh')
>>> syn = h.ExpSyn(0.9, sec=dend)

(taken from Davison, et al., 2009).

Here, a neuron is “built”: a soma and a dendrite are “created” and connected, and Hodgkin–Huxley channels are inserted into the soma. It is clear, then, how simple and straightforward Python code is, and how important it can be in neuroscience.

The Sentry (Bob Reynolds) and the Brain

Optogenetics. It sounds like genes being lit up in neon colors, like a flashy Las Vegas sign.

But what is it really?

Optogenetics is a technique that takes advantage of proteins found in certain algae species that respond to different wavelengths of light. This algal response includes opening a channel (called a channelrhodopsin) in the cell membrane, allowing ions like Na+ and Cl− to flow in/out of the cell.

Of course, this is also how neurons operate: they work via the controlled flow of certain ions, such as Na+ and Cl−, in/out of the cell.

So, if you take a gene that encodes the light-sensitive channel of the algae, and force neurons to express that gene, what do you have?

Neurons that have been forced to become responsive to light! Therefore, shining a light on those neurons will force them to fire an action potential (nerve impulse). If you turn off the light, they stop firing. And if you instead use a different light-sensitive protein, one that hyperpolarizes the cell, you can silence those neurons so they no longer fire action potentials.

This then gives pointed and reversible control over the neuronal action potential activity patterns.
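The logic of "light on, spikes on; light off, spikes off" can be sketched with a toy leaky integrate-and-fire neuron carrying an extra light-gated current. Every parameter value below is an illustrative placeholder, not a measured channelrhodopsin property:

```python
# Toy leaky integrate-and-fire neuron with a "light-gated" drive:
# while the light is on, an extra depolarizing current pushes the
# membrane past threshold and the cell spikes; light off, silence.

def simulate(light_window, total_ms=200.0, dt=0.1):
    """Return spike times (ms); light_window = (on_ms, off_ms)."""
    v_rest, v_thresh, v_reset, tau = -70.0, -55.0, -70.0, 10.0
    drive_mV = 20.0  # effective depolarization while the light is on
    v = v_rest
    spikes = []
    for step in range(int(total_ms / dt)):
        t = step * dt
        lit = light_window[0] <= t < light_window[1]
        i = drive_mV if lit else 0.0
        # Leaky integration toward rest, plus the light-gated current
        v += dt * (-(v - v_rest) + i) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

spikes = simulate((50.0, 150.0))  # light on from 50 ms to 150 ms
```

Running this, every spike falls inside the light window, which is exactly the pointed, reversible control the technique offers.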



The technique’s major asset is the specificity with which you can control gene expression and neuronal firing. This is possible because different types of cells express different sets of genes. Each gene has two major parts: one part encodes a specific protein, and the other is a regulatory region which determines when, where, and how much of that protein is made. These two regions, the encoding one and the regulatory one, are separate from each other. In fact, the regulatory region sits on a neighbouring segment of DNA.

Therefore, you can slice out the DNA that makes up the regulatory region and splice it to the protein-encoding segment of another gene, such as one encoding a channelrhodopsin. Then, you can insert that hybrid into a vector, such as a virus, to deliver it into an organism. What do you think that organism will now be able to do?

Express that protein, like the channelrhodopsin. And that organism will express the new protein only in the cell types directed by the regulatory segment of the DNA you chose in the lab.

Now, biological research has a history of using light to control or interact with living systems. For example, a light-based technique called CALI (chromophore-assisted light inactivation) is used to inhibit certain proteins by destroying them. Lasers have also been used to destroy cells, and UV light has been used to activate a protein that regulates neurons.

In neuroscience, optogenetics can be used to silence or activate neurons in different parts of the brain. For example, the amygdala is involved in fear. Of course, we can be conditioned to be fearful of certain things. Like a man who fears driving over bridges because he was once on a bridge that was damaged, or a girl who fears dogs because she was nearly bitten by one, our fear responses can be strong, and without intervention, permanent.

Optogenetics can step in and help us understand the workings of fear, and how it occurs.

For example, researchers conducted a study on the development of fear associations. Activation of lateral amygdala pyramidal neurons by aversive stimuli can drive the formation of associative fear memories. This was demonstrated by expressing channelrhodopsin in lateral amygdala pyramidal neurons and pairing an auditory cue with light stimulation of those neurons, rather than with a direct aversive stimulus. After this training, presenting the tone alone produced a fear response.

It is clear, then, that optogenetics can provide real-time information on what neurons are doing, and when. Further, it can also allow us to control the workings of neurons.

It is possible that optogenetics can get us to the point of not only understanding the brain, including its dysfunctions, but also offer a way to address problems such as epilepsy and depression. Maybe even schizophrenia.

Technique Thursday: Computational Modeling

With neuroscience and computer science bleeding into one another, there are a number of ways that computer programming can help in understanding the brain. This can be achieved via computational modeling.

Computational modeling is the intersection of math, physics, and computer science that is used to study the behavior of complex systems via computer models. When applied to the brain, it becomes computational neuroscience, a growing field.

During computational modeling, simulations are done by adjusting the variables of a given system.

Modeling is a great way to enhance research, as it can:

  • conduct multiple studies simultaneously, using different variables
  • reduce the need for research animals
  • help identify the physical experiments that need to be done
  • save funding
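The first point above, running multiple "studies" at once by varying a parameter, can be sketched in a few lines. The model here (the logistic map) and the parameter values are generic placeholders standing in for whatever system a lab actually studies:

```python
# Minimal illustration of computational modeling: simulate the same
# simple dynamical system under several parameter values at once.

def logistic(r, x0=0.1, steps=50):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Several "experiments" run simultaneously, each with a different
# parameter setting -- no lab bench or research animals required.
results = {r: logistic(r) for r in (1.5, 2.5, 3.9)}
for r, xs in results.items():
    print(f"r={r}: final value {xs[-1]:.3f}")
```

Swapping in a realistic model of neural dynamics changes the equations, not the workflow: define the system, sweep the variables, compare the outcomes.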

Computational modeling can relate structural connectivity to functional connectivity in the brain. It can provide models of how the brain works during various activities, shed light on how the fetal brain develops, or show how traumatic brain injury affects cognition. Models can even be created of psychiatric disorders, such as dissociative identity disorder (DID) or schizophrenia.

A great short book on computational modeling in relation to the brain can be found here.

A GREAT video to watch can be found here:

Technique Thursday: Memory Sleuth: How to tell a memory is false

In light of an earlier article I wrote on the vulnerability of memories, and how false memories can be planted in our minds, I decided to write an article on how to tell a memory is false.

Consider Madrigal (yes, it’s the name of a book character, from Dreams of Gods and Monsters if you must know. I’m reading it now and rather enjoying it. No, I’m not linking it because I get a commission from the publisher if you buy the book. I wish, though. No pun intended. And yes, if you read the trilogy, you would get the pun.)

Moving on.

Consider Madrigal. She claims to have been raped by the White Wolf when she was younger. She repressed the memory, but after undergoing therapy with a psychologist, the memory was unearthed, like old pottery from a deserted cave. Her explanation of the situation and her memories are detailed, emotional and provocative. Can she be trusted? Or more aptly stated, can her memory be trusted?

An examiner can focus on groups of memories or on the individual remembering to determine whether a particular memory is a false one.

One way to potentially do this is via criteria-based content analysis. The idea behind this analysis is that false statements differ in systematic ways from truthful ones. Nineteen criteria are scored, such as logical structure, unusual details, spontaneous corrections, etc.
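The scoring logic amounts to a tally of which criteria a statement satisfies. Here is a sketch in that spirit; the criteria listed are a small invented subset, not the real nineteen-item instrument:

```python
# Toy tally in the spirit of criteria-based content analysis: each
# criterion judged present in a statement adds one to the score.
# The criteria and judgments below are illustrative placeholders.
criteria = {
    "logical_structure": True,
    "unusual_details": True,
    "spontaneous_corrections": False,
    "contextual_embedding": True,
}

score = sum(criteria.values())  # count of criteria judged present
print(f"{score} of {len(criteria)} criteria present")  # prints "3 of 4 criteria present"
```

In the actual procedure a trained rater makes each judgment and the total is interpreted against norms, so the hard part is the judging, not the arithmetic.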

True memory reports, though, tend to contain more detail than dishonest or false ones, and those details tend to be more sensory.

The best approach to determining false memories is to combine various approaches:

  • focus on groups of memories
  • focus on the individual reporting the memory
  • focus on the details of the memory

Working out how sensory-detailed a memory is, how well it hangs together, and how the memory is organized can help determine whether a memory is true or false. However, more research needs to be done in this arena, not only to develop personal strategies for determining the veracity of a memory, but also to develop laboratory-oriented techniques, such as better neuroimaging, to probe the structure of memories.

Technique Thursday: Deep Brain Stimulation: Electrocuting the brain

Deep brain stimulation (DBS) involves inserting and implanting electrodes within the brain. The electrodes produce electrical impulses that serve to regulate the brain’s abnormal impulses. These electrode impulses can also serve to modulate neurochemistry. A pacemaker-like device controls the electrode impulses, ensuring that the right frequency is delivered. The device is connected to the electrodes via a wire placed under the skin.

DBS can be used with patients who have motor issues, such as the tremors seen in Parkinson’s Disease (PD). Other patients that may be treated with DBS include those with dystonia, epilepsy, Tourette’s syndrome, and even depression.

It is suspected that DBS not only affects neural sites in the vicinity of the electrodes, but also may disrupt abnormal signals that reverberate through multiple brain regions, corrupting the communication between them. This is known as “circuit training.”

Now, some people have taken on a DIY approach to brain stimulation (strictly speaking, transcranial direct-current stimulation rather than true DBS): they purchase a variable resistor, a current regulator, a circuit board, and a 9-volt battery. Using some basic wires, a simple circuit is built. Alligator clips connect the circuit to two sponges soaked in saline, which are then strapped to the head. The battery is connected, a small dial is turned up, and voila! Electric current into the brain.
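A quick Ohm's-law sanity check shows the kind of current such a circuit produces. The resistance values below are rough illustrative assumptions (scalp resistance varies enormously), not measurements of any real setup:

```python
# Back-of-envelope Ohm's-law estimate for a 9 V DIY stimulation circuit:
# current = voltage / (series resistance + head resistance).
# Both resistance values are assumed for illustration only.

def stimulation_current_mA(voltage=9.0,
                           series_resistance_ohms=2000.0,
                           head_resistance_ohms=5000.0):
    """Current (mA) through a simple series circuit: battery -> resistor -> head."""
    total_ohms = series_resistance_ohms + head_resistance_ohms
    return 1000.0 * voltage / total_ohms

print(f"{stimulation_current_mA():.2f} mA")
```

With these assumed values the result lands around 1.3 mA, which is why the variable resistor and current regulator matter: without them, a drop in contact resistance would send the current climbing.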

[Disclaimer: I’m not endorsing this DIY project.]

One guy, a certain Brent, goes through this DIY ritual regularly: 2-3 times per week, he shocks his brain, sometimes for 25 (?!) minutes at a time.

The research behind this DIY approach is rather shaky: the results are modest, and some inconclusive. Of course, it’s best not to do such a thing, to be on the safe side. Wouldn’t want to play around with those neurochemicals and electricity, now, would you? It’s your brain, after all…

Technique Thursday: Cognitive Subtraction

In imaging, there are certain methodologies, like cognitive subtraction. In this method, activity during a control task is subtracted from activity during an experimental task. Take, for example, a word task using a simple model of written word recognition. In a famous experiment, Petersen et al. (1988) wanted to identify the brain regions involved in 1) recognizing written words, 2) saying the words, and 3) retrieving the meaning of the words. The researchers used cognitive subtraction to tease apart these components.

To work out which regions are involved in recognizing written words, the researchers compared brain activity while subjects passively viewed words versus passively viewing a cross (+). The idea was that the same visual processing is engaged in passively viewing either stimulus, but only the experimental task involves visual word recognition, so subtraction could isolate the brain regions involved.
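The arithmetic of the subtraction is easy to sketch with NumPy. The "activation maps" below are fabricated arrays, purely to show how shared activity cancels and task-specific activity remains:

```python
# Toy cognitive subtraction: subtract the control-task activation map
# (viewing a cross) from the experimental-task map (viewing words).
import numpy as np

rng = np.random.default_rng(0)

# Visual-processing activity assumed shared by both tasks
visual = rng.random((4, 4))

# Word recognition adds activity in one made-up "region" of the map
word_specific = np.zeros((4, 4))
word_specific[0, 0] = 1.0

control_task = visual                       # passively viewing a cross
experimental_task = visual + word_specific  # passively viewing words

difference = experimental_task - control_task
# The shared visual activity cancels exactly; only the
# word-specific "region" survives the subtraction.
```

Note that the cancellation is exact only because the toy data obey pure insertion by construction; the section below explains why real brains need not cooperate.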

To work out which regions are involved in speaking words, the researchers compared passively viewing written words with reading words aloud. Both the experimental and baseline tasks involve visual processing of the word and word recognition, so subtraction should cancel those components, leaving the activity specific to speech production for analysis.

To work out which regions are involved in retrieving the meaning of written words, the researchers compared a verb-generation task with reading aloud.

What the researchers found was that largely left-lateralized regions are involved in these processes. Recognizing written words activated bilateral sites in the visual cortex, producing speech activated the sensorimotor cortex bilaterally, and verb generation activated the left inferior frontal gyrus.

Of course, there are some issues with cognitive subtraction. Can you think of any?

For example, let’s consider the subtraction behind determining which brain regions are associated with written word recognition. The assumption was that both tasks involve visual processing, but the experimental task adds the component of word recognition. There is thus an implicit assumption that adding an extra component does not affect the operation of the earlier ones in the sequence. This is referred to as the assumption of pure insertion (or pure deletion). It could be that the amount or type of visual processing that deals with written words is NOT the same as the visual processing that deals with non-linguistic visuals. The added component has the potential to change the operation of other components in the task. That is, there could be interactions (the effect of one variable upon another) that make the imaging data ambiguous.