Self-learning computer science

I’m sure you’ve heard of Coursera.com, the site that hosts university courses on a wide range of topics.

Since I don’t have the time or the money to pursue a formal brick-and-mortar degree in computer science (CS), I have decided to create my own CS syllabus.

The following are courses that I think are important enough to take when learning CS and working towards a career in software engineering/development or tech writing. I based my decisions on conversations with various people I have asked personally, and on my own background research into the field via blogs, forums, Google searches, etc.

If you have opinions on my syllabus, feel free to share.

  1. Computer Science 101, Stanford University (6 weeks)
  2. Intro to Logic, Stanford University (8 weeks)
  3. Logic: Language and Information I, The University of Melbourne (Self-paced)
  4. Logic: Language and Information II, The University of Melbourne (Self-paced)
  5. Algorithms, Part I, Princeton University (6 weeks)
  6. Algorithms, Part II, Princeton University (6 weeks)
  7. Algorithmic Thinking, Part I, Rice University (4 weeks)
  8. Algorithmic Thinking, Part II, Rice University (4 weeks)
  9. Learn to Program: Crafting Quality Code, University of Toronto (10 weeks)
  10. Programming Languages, University of Washington (Self-paced)
  11. Learn to Program: The Fundamentals, University of Toronto (10 weeks)
  12. An Intro to Interactive Programming in Python, Part I, Rice University (5 weeks)
  13. An Intro to Interactive Programming in Python, Part II, Rice University (5 weeks)
  14. Cryptography I, Stanford University (6 weeks)
  15. Cryptography II, Stanford University (6 weeks)
  16. Game Theory, Stanford University (9 weeks)
  17. Game Theory II, Advanced Applications, Stanford University (6 weeks)
  18. Software Security, University of Maryland (6 weeks)
  19. Hardware Security, University of Maryland (6 weeks)
  20. The Hardware/Software Interface, University of Washington (8 weeks)
  21. Introduction to Databases, Stanford University (Self-paced)

I figure a typical college student will take 3-4 courses per semester. With 21 courses, at 3 courses per semester, that’s 7 semesters, or 3.5 years.

Alternatively, you could do one course per month if self-paced: 21 months, or just under two years.

Non-Coursera courses:

  1. Learn C the Hard Way
  2. Learn SQL the Hard Way
  3. Learn Ruby the Hard Way

Plus, Codecademy courses:

  1. HTML/CSS
  2. JavaScript
  3. jQuery

Interpreters, Compilers, and Learning

[Disclaimer: I know very little about computers and operating systems at this point, as I just started going back to college for my second BS, this time in CS. However, with my background in neuroscience, I can’t help but try to find parallels between what I already know about the brain and the things I’m learning about computers. I realize that the worn analogy between brains and computers doesn’t always hold up, but as I try to understand the new things I’m learning, I’m going to refer back to what I already know: the brain.

As I learn more, I’ll probably update articles. If you have any insight into anything I’ve written, please share with me, as I and my readers love to learn!]

*********

Computers can only execute programs written in low-level languages. However, programs in low-level languages are more difficult and time-consuming to write. Therefore, people tend to write programs in high-level languages, which must then be translated by the computer into low-level languages before they can run.

Now, there are two kinds of programs that can process high-level languages into low-level languages: interpreters and compilers.

An interpreter reads the high-level program and executes it as it goes: it reads the code one line at a time, executing each line before reading the next. You can think of the INTER (between) in INTERpreter as that execution happening in between reads of successive lines.

A compiler, by contrast, reads the program and translates it completely before the program runs. That is, the compiler translates the program as a whole, and only then is it run, unlike the interpreter’s one-line-at-a-time method.
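To make the distinction concrete, here is a minimal Python sketch (Python itself being an interpreted language) that mimics both styles using the built-in compile() and exec() functions. The three-line source string and its variable names are made up purely for illustration; real compilers and interpreters are of course far more involved.

# A made-up three-line "program" used only for illustration.
source = "x = 2\ny = x * 3\nprint(y)"

# Compiler style: translate the whole program first, then run it.
code_object = compile(source, "<example>", "exec")
exec(code_object)            # prints 6

# Interpreter style: read, translate, and execute one line at a time.
namespace = {}
for line in source.splitlines():
    exec(line, namespace)    # each line runs before the next is even read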

These aspects of programming got me thinking a bit.

Compilers remind me of automatic processes, like when we are operating on auto-pilot. Our brain is still taking in information, but it’s not processing it one bit at a time; it’s more big-picture, and less into the details at a given moment.

However, when we are learning something new, our brains are more focused on the details and more interested in processing things in bits, and then “running” them. That is, when we are struggling with new information, or just ingesting it, our brains are more apt to take in bits of information at a time, process them, and then move on to the next piece. That way, if something is not understood, it’s caught early on in the process and can be remedied.

Findings Friday: The aging brain is a distracted brain

As the brain ages, it becomes more difficult for it to shut out irrelevant stimuli—that is, it becomes more easily distracted. Sitting in a restaurant, having a conversation with the partner right across the table from you, presents a new challenge when the restaurant is buzzing with activity.

However, the aging brain does not have to be the distracted brain. Training the mind to shut out irrelevant stimuli is possible, even for the older brain.

Brown University scientists conducted a study involving seniors and college-age students. The experiment was a visual one.

Participants were presented with a letter and number sequence, and asked to report only the numbers, while simultaneously disregarding a series of dots. The dots sometimes moved randomly, and at other times, moved in a clear path. The latter scenario makes the dots more difficult to ignore, as the eye tends to want to watch the dots move.

The senior participants tended to unintentionally learn the dot motion patterns, which was determined when they were asked to describe which way the dots were moving. The college-age participants were better able to ignore the dots and focus on the task at hand (the numbers).

Another study also examined aging and distractibility, or an inability to maintain proper focus on a goal due to attention to irrelevant stimuli. Here, aging brains were trained to be more focused. The researchers used older rats, as well as older humans. Three different sounds were played during the experiment, one of them designated as the target tone. Rewards were given when the target tone was identified and the other tones ignored. As subjects improved, the task became more challenging, with the target tone becoming less distinguishable from the other tones.

However, after training, both the rats and the humans made fewer errors. In fact, electrophysiological brain recordings indicated that neural responses to the non-target, or distracting, tones were decreased.

Interestingly, the researchers indicated that ignoring a distraction is not simply the flip side of focusing on a task. Older brains can be just as efficient at focusing as younger brains. The issue in aging brains, however, lies in being able to filter out distractions. This is where training comes in: strengthening the brain’s ability to ignore distractors, not necessarily enhancing its ability to focus.

The major highlights of the study were that training enhanced aspects of cognitive control in older humans, and that the adaptive distractor training selectively suppressed responses to distractors.

Technique Thursday: Microiontophoresis

In the brain, communication is both electrical and chemical. An electric impulse (action potential) propagates down an axon, and chemicals (neurotransmitters) are released. In neuroscientific research, the application of a neurotransmitter to a specific region may be necessary. One way to do this is via microelectrophoretic techniques like microiontophoresis. With this method, neurotransmitter can be administered to a living cell, and the consequent reactions recorded and studied.

Microiontophoresis takes advantage of the fact that ions flow in an electric field. Basically, during the method, current is passed through a micropipette tip and a solute is delivered to a desired location. A key advantage to this localized delivery and application is that very specific behaviors can be studied within context of location. However, a limitation is that the precise concentration of solute may sometimes be difficult to determine or control.

The main component of this technique is the use of electrical current to drive out the solute. In other words, it is an “injection without a needle.” The current used is galvanic (direct) current, typically applied continuously. The solute must be ionic, and it must be placed under an electrode of the same charge: positively charged ions under the positive electrode (anode), and negatively charged ions under the negative electrode (cathode). This way, the anode repels the positively charged ions into the tissue, and the cathode repels the negatively charged ions into the tissue.

Manic Monday: Eye of the Storm

A classic experiment on discrimination was Jane Elliott’s blue eyes/brown eyes experiment. Jane Elliott is a former third-grade teacher with no research background to speak of. However, the day after Martin Luther King Jr. was shot, she decided to try a little experiment with her young, impressionable students.

What she did next was nothing short of fascinating.

On April 4, 1968, Jane Elliott was ironing a teepee for one of her classroom activities. On the television, she was watching news about the assassination of King. One white reporter said something that shocked Elliott:

“When our leader [John F. Kennedy] was killed several years ago, his widow held us together. Who’s going to control your people?”

Elliott could not believe that the white reporter saw Kennedy as the white people’s leader, and assumed that black people would now get out of control.

So she decided to twist her little Native American classroom exercise and replace teepees and moccasins with blue-eyed and brown-eyed students.

So, on the first day of her experiment, Elliott decided that since she had blue eyes and was the teacher, blue-eyed students were superior. The blue-eyed and the brown-eyed children were consequently separated based on something as superficial as their eye color.

Blue-eyed children were given brown collars to wrap around the necks of their brown-eyed peers. All the better to notice them with.

The blue-eyed children were then given extra helpings of food at lunchtime, five extra minutes at recess, and the chance to play on the new jungle gym at school. The brown-eyed children were left out of these privileges. The blue-eyed children were also allowed to sit at the front of the class, while brown-eyed children were kept at the back.

Blue-eyed children were encouraged to play with other blue-eyeds, but told to ignore brown-eyed peers. Further, blue-eyed students were allowed to drink at the water fountain, while the brown-eyed ones were prohibited from doing so. If they forgot, they were chastised.

Now, of course the children resisted the idea that the blue-eyed students were superior somehow. Elliott countered eloquently, and with a lie: melanin is linked to blue eyes, as well as intelligence.

The students’ initial resistance soon wore off.

The blue-eyed “superior” students then became arrogant and bossy. They were mean, and excluded their brown-eyed peers. They thought themselves superior, simply on the basis of their eye color.

What’s even more interesting is that the blue-eyed students did better on some of their exams, performing at a higher level in math and reading than they had previously. Just believing they were superior affected their grades positively.

Even more interesting, but perhaps not surprising, was what happened to the brown-eyed students:

They became shy, timid, and, frighteningly, subservient. They did worse on their tests, and during recess they kept away from the blue-eyed children. Each group effectively segregated itself by eye color.

The next week, Elliott added another twist to the experiment: she made the blue-eyed students inferior, and made the brown-eyed ones superior. Brown collars for the blue-eyeds now.

The brown-eyeds then began to act meanly towards the blue-eyed kids, though with less intensity.

Several days later, the blue-eyed students were told they could remove their brown collars. Elliott then had the students reflect on the exercise by writing down what they thought of it and what they had learned.

Needless to say, the experiment had a major impact on her students. Elliott repeated it with her classes for years afterward, and has appeared on Oprah and in other venues, promoting anti-discrimination.

A documentary was filmed about her experiment, called Eye of the Storm.

A beautiful video about a modern re-enactment of the experiment can be found here.

The Pain in Brain Stays Mainly in the…Brain?

Pain is a major force of survival. Without pain, we would, simply, not survive. Of course, pain can be cumbersome, and unnecessary, at times. For example, when you stub your toe on your desk, do you really need that much pain for that long?

More importantly for this discussion, what do you do when you stub your toe? You probably do a bit of hopping, and you certainly grab that toe and squeeze or rub it.

Why do you do that? The Gate Control Theory of Pain can answer that.

At its most basic, the Gate Control Theory of Pain holds that non-painful input (like that rubbing) closes the “gates” to painful input, preventing the sensation of pain from being fully perceived by the brain. Simply put, when the painful stimulus is overridden by a non-painful stimulus, the pain signal is not transmitted onward through the central nervous system (CNS) to the brain.

Even more simply, non-painful stimuli suppress pain.

Why is that?

Collaterals, or processes, of large sensory fibers that carry cutaneous (skin) sensory input activate inhibitory interneurons. Now, inhibitory interneurons do just what their name implies: they inhibit. And what do they inhibit in this case? Pain sensory pathways. This therefore modulates the transmission of pain information by pain fibers. Non-painful input suppresses pain by “closing the gate” to painful input.

This happens at the spinal cord level: non-painful stimulation results in presynaptic inhibition of the pain (nociceptive) fibers that synapse on nociceptive spinal neurons. This presynaptic inhibition blocks incoming painful signals from being relayed up the CNS to the brain.

More on this topic on this week’s Séance Sunday, coming up!

Monty Python’s influence in Neuroscience

Ever heard of the computer programming language called Python? Where do you think Python got its name from?

Monty Python!

Python has had a growing influence on neuroscience. In fact, with the rise of computational neuroscience, Python now plays a real role in how neuroscience research is done.

Python itself is a high-level programming language. Its syntax is relatively straightforward, as one of its core philosophies is code readability. As a result, coders can often accomplish a task in fewer lines of code than they would need in other programming languages.
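As a rough illustration of that readability, counting word frequencies takes only a couple of lines using the standard library; the sample text here is made up purely for illustration.

from collections import Counter   # part of Python's standard library

text = "to be or not to be"
counts = Counter(text.split())    # split into words and tally them
print(counts.most_common(2))      # [('to', 2), ('be', 2)]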

In neuroscience, Python is used in data analysis and simulation. Python is well suited to neuroscience research because both neural analysis and simulation code should be readable and simple to generate and execute. Further, the code should be understandable long after it was written, by anyone who reads it. That is, it should be timeless and should make sense to anyone who reads the code and tries to execute or replicate it.

Of course, MATLAB is also good for neuroscience research purposes, but MATLAB is a closed-source, expensive product, whereas Python is open-source and more readily available to the masses. In fact, you can download Python here. Further, there are a number of courses that can help a dedicated learner teach themselves Python.

Python also has scientific packages that support things like linear algebra, packages specifically for neural data analysis, and simulation packages that describe neuronal connectivity and function. Python can even be used for database management, which matters when a laboratory has large amounts of data. Because Python interfaces well with other languages, it can incorporate foreign code and libraries beyond the ones it ships with, which allows for effective sharing and collaboration between laboratories. SciPy is “a collection of open source software for scientific computing in Python.”
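As a small sketch of what that data analysis might look like, here is an example using NumPy (part of the SciPy ecosystem quoted above); the spike times and recording duration below are invented purely for illustration.

import numpy as np

# Hypothetical spike times (ms) from a 250 ms recording.
spike_times_ms = np.array([12.5, 48.0, 103.2, 155.7, 201.1])
duration_s = 0.25

firing_rate = spike_times_ms.size / duration_s   # spikes per second
isis = np.diff(spike_times_ms)                   # inter-spike intervals (ms)

print(f"Mean firing rate: {firing_rate:.1f} Hz")
print(f"Mean ISI: {isis.mean():.1f} ms")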

In relation to neuronal simulations, Python makes sense because:

1. It is easy to learn. It has a clear syntax, it is an interpreted language (meaning the code is interactive and provides immediate feedback to the coder), and lists/dictionaries are built into the language itself.

2. It has a standard library, which provides built-in features important for data processing, database access, network programming, and more.

3. It is easy to interface Python with other programming languages, which means that a coder can develop an interface in Python and then implement it in other, faster programming languages such as C and C++ (a small sketch follows below).
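As a tiny sketch of point 3, Python’s standard-library ctypes module can call directly into compiled C code. The example below loads the system C math library, which assumes a typical Linux or macOS setup.

import ctypes
import ctypes.util

# Locate and load the system C math library (libm); platform-dependent.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))   # 1.0, computed by compiled C code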

An example of a neuronal simulator with a Python interface is NEURON. An example of code is the following:

>>> from neuron import h
>>> soma = h.Section()
>>> dend = h.Section()
>>> dend.connect(soma, 0, 0)
>>> soma.insert('hh')
>>> syn = h.ExpSyn(0.9, sec=dend)

(taken from Davison et al., 2009).

Here, a neuron is “built”: a soma and a dendrite are created and connected, Hodgkin-Huxley (‘hh’) channels are inserted into the soma, and a synapse is placed on the dendrite. It is clear, then, how simple and straightforward Python code can be, and how useful it can then be in neuroscience.
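As a rough continuation of that snippet (not part of the Davison et al. example), here is a sketch, assuming the standard NEURON Python API, of how one might stimulate the model cell and record its membrane potential; all parameter values are arbitrary.

from neuron import h
h.load_file('stdrun.hoc')                     # load NEURON's standard run system

stim = h.IClamp(soma(0.5))                    # current clamp at the middle of the soma
stim.delay, stim.dur, stim.amp = 1, 1, 0.5    # onset (ms), duration (ms), amplitude (nA)

v = h.Vector().record(soma(0.5)._ref_v)       # record membrane potential
t = h.Vector().record(h._ref_t)               # record time

h.finitialize(-65)                            # initialize the membrane potential to -65 mV
h.continuerun(10)                             # simulate 10 ms

print(max(v))                                 # peak voltage reached during the run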