company of three, black peppermint tea

Tag: music


by cloudier

What Makes Online Content Viral?

Importantly, however, our findings also reveal that virality is driven by more than just valence. Sadness, anger, and anxiety are all negative emotions, but while sadder content is less viral, content that evokes more anxiety or anger is actually more viral. These findings are consistent with our hypothesis about how arousal shapes social transmission. Positive and negative emotions characterized by activation or arousal (i.e., awe, anxiety, and anger) are positively linked to virality, while emotions characterized by deactivation (i.e., sadness) are negatively linked to virality. More broadly, our results suggest that while external drivers of attention (e.g., being prominently featured) shape what becomes viral, content characteristics are of similar importance (see Figure 2). For example, a one-standard-deviation increase in the amount of anger an article evokes increases the odds that it will make the most e-mailed list by 34% (Table 4, Model 4). This increase is equivalent to spending an additional 2.9 hours as the lead story on the New York Times website.

Drowning doesn’t look like drowning

How did this captain know, from fifty feet away, what the father couldn’t recognize from just ten? Drowning is not the violent, splashing call for help that most people expect. The captain was trained to recognize drowning by experts and years of experience. The father, on the other hand, had learned what drowning looks like by watching television. If you spend time on or near the water (hint: that’s all of us) then you should make sure that you and your crew know what to look for whenever people enter the water. Until she cried a tearful, “Daddy,” she hadn’t made a sound. As a former Coast Guard rescue swimmer, I wasn’t surprised at all by this story. Drowning is almost always a deceptively quiet event. The waving, splashing, and yelling that dramatic conditioning (television) prepares us to look for is rarely seen in real life.



by cloudier

What is consciousness?

“Consciousness” refers to several related phenomena, which is why people have such a difficult time agreeing about what it is.

Here are some specific phenomena that fall under the larger umbrella of consciousness and also “the mind”:

awake state — What is different about someone who is awake vs. someone who is in dreamless sleep? In both cases, the brain is highly active and functioning, but in only one case is the individual able to interact with the world and report experiences. Dreams and other altered states of consciousness may lie somewhere between these two extremes. Other variants of non-awakeness include general anesthesia and “persistent vegetative state” (related to coma).

perceptual awareness — What is going on when you are aware of something vs. when you aren’t? In binocular rivalry, two conflicting images are shown to each eye. The information about both images enters the brain, but only one image is seen at a time. Which image is seen changes periodically and spontaneously. There are other examples of information being processed “subliminally” without being perceived “consciously”. In stage magic, what is perceived is different from what is actually happening. What is the difference between sensory signals entering the brain, and something being perceived “consciously”?

subjective (first-person) point of view — Consciousness is private, subjective and experienced from a particular point of view: yours. What accounts for this point of view, for the unique “interiority” that gives the feeling that you exist inside your head somewhere? Is your version of the color red unique to you or the same for everyone? If a machine was conscious, would it have a first-person “experience”? As philosophers would say, is there something that it’s like to be a computer?

unity of experience — Consciousness feels “whole”, indivisible, and irreducible. There is the sense that the world is experienced instantaneously in complete, integrated, and meaningful detail. Hundreds of scientific experiments show that this unity is an illusion (change blindness, attentional filtering, attentional blink, visual illusions, timing errors, split-brain patients, mental disorders, various neurological syndromes, …). But the illusion is so powerful it takes a real force of will to be skeptical of it. When consciousness becomes fragmented, as with dissociative drugs, brain damage, split-brain surgery, or divided attention, has consciousness been degraded?

personal identity (existence, self, ego) — One unique aspect of the human experience is the sense that we exist — that there is an “I” in there somewhere, looking out onto the world. Why do all our experiences come from our body and not someone else’s? Does our uniqueness as an individual come from a “soul” that is somehow attached to the brain, or is it a construct generated by the brain? If someone wakes up with amnesia, or has dementia or dissociative disorder (formerly multiple personality disorder), has their conscious self ceased to exist, even though they seem conscious?

self-awareness — Also uniquely human is our ability to “introspect” onto what is going on in our own mind. Descartes famously said “I think therefore I am.” One complaint about the idea of consciousness in a computer is that a computer seems incapable of answering the question “what is it like to be you?”. If you can’t reflect on your own inner life, are you still conscious?

personal agency — In modern society, an important distinction is made between voluntary action (doing something “intentionally”) and involuntary action (accidental behavior). To do something “consciously” is to do it with forethought and purpose. In Tourette’s Syndrome, people make intentional-seeming actions involuntarily. This ties into the tricky question of “free will” as well as the legal concept of mental competency and the insanity defense. “He was not in conscious control of his actions,” the defense might say.

Is the human brain analog or digital?

The brain is neither analog nor digital, but works using a signal processing paradigm that has some properties in common with both.

Unlike a digital computer, the brain does not use binary logic or binary addressable memory, and it does not perform binary arithmetic. Information in the brain is represented in terms of statistical approximations and estimations rather than exact values. The brain is also non-deterministic and cannot replay instruction sequences with error-free precision. So in all these ways, the brain is definitely not “digital”.

At the same time, all of the signals sent around the brain are “either-or” states that are similar to binary. A neuron fires or it does not. These all-or-nothing pulses are the basic language of the brain. So in this sense, the brain is computing using something like binary signals. Instead of 1s and 0s, or “on” and “off”, the brain uses “spike” or “no spike” (referring to the firing of a neuron).

Internal to the neuron, everything works via biochemical pathways, which behave in an essentially analog fashion. Neurons also integrate electrical signals internally in an analog manner. Similarly, the digital logic gates used by computers are implemented internally with transistors and resistors, which are themselves analog components.
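This hybrid picture can be sketched in a few lines of code. The following is a toy model only, with made-up parameter values rather than anything biophysically calibrated: a “membrane potential” accumulates input in a graded, analog fashion, but the output at each step is an all-or-nothing spike/no-spike decision.

```python
# Toy leaky integrate-and-fire neuron: analog integration inside,
# binary "spike"/"no spike" output. All parameters are illustrative.

def simulate(inputs, threshold=1.0, leak=0.9):
    """Integrate analog input current; emit 1 (spike) when the
    membrane potential crosses the threshold, then reset to 0."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current   # analog leaky integration
        if v >= threshold:
            spikes.append(1)     # all-or-nothing spike
            v = 0.0              # reset after firing
        else:
            spikes.append(0)     # sub-threshold: no output at all
    return spikes

# Small steady inputs sum up to a spike; a single weak input does not.
print(simulate([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))
# → [0, 0, 0, 1, 0, 0, 1]
```

Note how the graded internal variable `v` never appears in the output: downstream “neurons” only ever see the binary spike train, which is the sense in which the brain’s signalling resembles (but is not) digital.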

What is an understandable, systematic way to understand neuroscience? A source?

The problem you are most likely running up against is the way you’re approaching thinking about neuroscience and the brain.

In short, there is no systematic way to understand neuroscience because every level at which you want to examine the system is incomplete.

First, a caveat: thinking about certain fields of study as “harder” or “easier” than others will do you a disservice. Using those kinds of terms will immediately put other researchers on the defensive, so I’d recommend against thinking in that way so as to protect yourself from unnecessary emotional conflicts when discussing scientific topics.

Philosophy probably feels “easier” to you because philosophy is–by its very nature–a human endeavor built by humans using logical principles as understood by humans. So while there may be difficult concepts in philosophy, those concepts originate from human minds and thus are understandable by humans.

Neuroscience, in contrast, is the study of a natural, not man-made, system that doesn’t have to adhere to any logical principles that we can grasp (i.e., the brain is likely deterministic but also chaotic). The issue is that you want to ascribe an orderly, logical system to a messy, organic biological system that evolved in strange ways over billions of years.

So far you’ve learned about “parts of the brain” and “neurotransmitters”. By way of a physics analogy, so far you’ve learned classical mechanics.

In reality, there is no such thing as a discrete “part” of a brain, and the boundary between what is and what is not a neurotransmitter is fuzzy. For example, dopamine is a small molecule that crosses a synapse and binds to the post-synaptic neuron to modulate the probability of that neuron firing. In contrast, nitric oxide also plays a role in neurotransmission, but because it is a gas it diffuses long distances, can cross a cell’s membrane without any need to bind to a receptor, and yet plays an important role in long-term potentiation (the putative cellular mechanism for learning and memory). Not your traditional neurotransmitter and very hard to study!

Classical mechanics works wonderfully, to a first approximation. Of course, the real world is much more complex; classical mechanics starts to fail, and we have to resort to electromagnetism, quantum mechanics, and so on.

So yes, while in general there are “motor” and “vision” parts of the brain, what’s happening is that we’re using language to define an organic system that doesn’t care about your linguistic boundaries and linguistic inadequacies.

Cerebral achromatopsia

Cerebral achromatopsia is a type of color-blindness caused by damage to the cerebral cortex of the brain, rather than abnormalities in the cells of the eye’s retina.

The case of the colorblind painter

The most famous instance of cerebral achromatopsia is that of “Jonathan I.”, immortalized in a case study by Oliver Sacks and Robert Wasserman, published as The Case of the Colorblind Painter. The essay tracks Jonathan I.’s experience with cerebral achromatopsia from the point where an injury to his occipital lobe leaves him without the ability to perceive color, through his subsequent struggles to adapt to a black, white and gray world, and finally to his acceptance of, and even gratitude for, his condition. Especially pertinent is the analysis of how cerebral achromatopsia affects his practice as a painter and artist. Descriptions of cerebral achromatopsia’s effects on his psychological health and visual perception are especially striking. For instance, in recounting Mr. I.’s descriptions of flesh and foods, the authors write:

Mr. I. could hardly bear the changed appearances of people (“like animated gray statues”) any more than he could bear his own changed appearance in the mirror: he shunned social intercourse and found sexual intercourse impossible. He saw people’s flesh, his wife’s flesh, his own flesh, as an abhorrent gray; “flesh-colored” now appeared “rat-colored” to him. This was so even when he closed his eyes, for his preternaturally vivid (“eidetic”) visual imagery was preserved but now without color, and forced on him images, forced him to “see” but see internally with the wrongness of his achromatopsia. He found foods disgusting in their grayish, dead appearance and had to close his eyes to eat. But this did not help very much, for the mental image of a tomato was as black as its appearance.

How well does music predict your politics?

A few highlights:

knowledge is perspective

by cloudier

How to think about science and becoming a scientist

A lot of what is frustrating and off-putting about science at first, including working in the research lab, is the same thing that’s frustrating and off-putting about math: to really enter the conversation you have to have the vocabulary, so there’s a lot of memorizing when you start. Which is just obnoxious. But it doesn’t take too long, and if you start interning in a lab early, then the memorizing feels justifiable and pertinent, even if you feel initially more frustrated at a) not knowing the information and b) not knowing how to apply it. If you don’t get into a lab, however, it’s just hard and pointlessly so (even though it isn’t).

(Virtually all fields have this learning curve, whether you realize it or not; one of Jake’s pet books is Daniel T. Willingham’s Why Don’t Students Like School: A Cognitive Scientist Answers Questions About How the Mind Works and What It Means for the Classroom, which describes how people move from no knowledge to shallow knowledge to deep knowledge. It’s bewildering and disorienting to start with no knowledge on a subject, but you have to endure and transcend that state if you’re going to move to deep knowledge. He says that he’s climbed that mountain with regard to writing, which makes writing way more rewarding than it used to be.)

Once you have the language and are able to think about, say, protein folding, the way you would a paragraph of prose, or the rhythm in a popular song, science takes on a whole new life, like Frankenstein’s Monster but without the self-loathing or murder. You start to think about what questions you can ask, what you can build, and what you can do—as opposed to what you can regurgitate. The questions you pose to people in your lab will lead to larger conversations. Feeling like an insider is nice, not only because it’s nice to belong, but because you’ll realize that even being a small part of the conversation means you’re still part of the larger discussion.

This is really important. Knowledge about a particular subject is mostly a matter of learning the vocabulary, because this entails an understanding of how the major concepts in a subject link together.1 Jargon is unavoidable in most subjects because plain language is often too inefficient for communicating ideas. Unfortunately, it is a massive barrier that prevents laypeople from comprehending the ideas presented in new research – let alone understanding their implications – and it can alienate them in the same way that slang alienates outsiders.2 These two factors, in addition to the media,3 are probably what lead to the entitlement and anti-intellectualism4 that fuel climate skepticism and the idea that autism is linked to vaccines.

The willful ignorance that results when people don’t comprehend how much they don’t know, combined with the emotional investment they make in their ideas, prevents them from acquiring the skills to assess their own beliefs, simply because doing so is emotionally painful.5 This is why I believe that it’s important to increase both the breadth of one’s knowledge as well as the depth required for financial sustenance. It’s also why I don’t particularly like it when people say ‘jack of all trades, master of none’: this implies that when I’m learning about a subject that comes under ‘breadth’, it’s displacing the time I spend learning about my field of specialisation.6 This isn’t necessarily true, since I don’t spend the entirety of my waking hours learning.

An almost irrelevant comment on the aphorism ‘Knowledge is power’: No it’s not. Power usually means social or economic influence. Sure, you can acquire that influence with leveraged knowledge, but you can also acquire it by, say, being born in the right place at the right time. Let me propose an alternative: ‘Knowledge is perspective’. There are always things that people of a certain profession know that most people don’t, and it is attached to a certain way of looking at life – a perspective which involves focusing on certain aspects of the world we live in that all end up affecting the way we live. For example, immunology focuses on the microscopic immune system, which has effects that spill into macroscopic life, whereas macroeconomics focuses on the behaviour of national and global economies, with effects that again spill into everyday life. The idea that every field is reducible to maths might be true,7 but it’s a bit silly since there are important and relevant patterns that emerge with each level of magnification.8


  1. Becoming a professional, however, also involves acquiring relevant skills.
  2. As usual with slang, the special vocabulary of hackers helps hold places in the community and expresses shared values and experiences. Also as usual, not knowing the slang (or using it inappropriately) defines one as an outsider, a mundane, or (worst of all in hackish vocabulary) possibly even a suit. All human cultures use slang in this threefold way — as a tool of communication, and of inclusion, and of exclusion.
  3. Seriously, anyone who reports anything related to science in the media should be forced to get a degree before they publish one word. This kind of fuckery costs lives.
  4. Intellectual Humility: Having a consciousness of the limits of one’s knowledge, including a sensitivity to circumstances in which one’s native egocentrism is likely to function self-deceptively; sensitivity to bias, prejudice and limitations of one’s viewpoint. Intellectual humility depends on recognizing that one should not claim more than one actually knows. It does not imply spinelessness or submissiveness. It implies the lack of intellectual pretentiousness, boastfulness, or conceit, combined with insight into the logical foundations, or lack of such foundations, of one’s beliefs.
  5. I don’t mean to say that all professionals should be trusted, always. They’re human so they will make mistakes and sometimes people without any training will be able to find gaping holes in their ideas. However, these are people who spend a much larger proportion of their life thinking about the topic at hand – it is still most likely that they’ll know better than a layperson.
  6. Which, er, doesn’t exist yet. Biology might come close since it’s the only subject where I’ve really gotten a hold on that basic vocabulary. Speaking of which, the HSC does a shitty job of teaching that; NQE training is much better.
  7. xkcd is still awesome.
  8. I think this is the idea behind the name ‘Patterns in Nature’.

/end actual content



by cloudier

Doing it tough, far from a typical Australian income

In The Australian’s piece, a couple on $200,000 a year (who admit they pay only 18 per cent tax) complain that they may be forced to get a nanny if their childcare subsidy is reduced.

Now, The Australian itself has called for reductions in ‘middle-class welfare’, so either the editors have changed their mind, or they have a misguided sense of what constitutes a middle income in modern Australia.

I don’t doubt that the family featured in The Australian’s story genuinely thinks they’re more-or-less typical, but they’re wrong. We all tend to judge what’s normal, or typical, with reference to those we work and socialise with. This leads the poor to underestimate the wealth of the rich, and leads the rich to overestimate the wealth of the poor. It also means that a lot of us tend to think we’re ‘middle class’ when we’re not.

Andrew Leigh (before he was an MP) wrote a great little paper on the effect that this misperception has on our public debate, called ‘The Political Economy of Tax Reform in Australia’. In it, he argued that:

Opinion leaders [do] not properly appreciate the distribution of income in Australia. For the most part, the taxation rates applying to most politicians, journalists, business executives and think-tank staffers (and indeed, to academic economists) are not those that apply to the average voter. In all these professions, six-figure salaries are common. Yet only 4.5 per cent of Australian adults have an income that exceeds $100,000 per year, and only 1.5 per cent have an income that exceeds $150,000 per year.

(The paper is from 2006, so the figures are a little out of date, but the principle hasn’t changed.)

Leigh also, correctly, notes that “reporting of ‘average’ income in Australia focuses on a measure of earnings which is not that of the typical voter”. Journalists often use average weekly ordinary time earnings for full-time adults (AWOTE) as a measure of a typical income. This is misleading for several reasons.


by cloudier


clips: 1 2 3

Limitless: It’s fluff, but it’s extremely beautiful and enjoyable fluff. The moral issues associated with its premise are barely dealt with at all, which was probably a good idea: I doubt that they could have been dealt with gracefully in a movie like this.
