Things We Take For Granted

During your childhood, did your parents ever tell you that you needed to wait at least an hour before swimming after a meal, or you would get cramps? If you have children, have you ever told them the same thing? If so, then why? Have you ever stopped to consider that it may not be true?

You may have heard that red wine is suitable to drink with red meat while white wine is suitable for fish, and if you have, you probably assume that this is so and act accordingly. You may also have heard that bad posture correlates with pain in one’s shoulders and back, and accepted this as the truth without questioning it. If you are anything like me, you might now be Googling these things to see if they are “true” or not (or perhaps you have already done so, but if you have not, then check here, here and especially here for some interesting input). It is likely that you have never before questioned these “facts”, but just taken them for granted. Why is that?

We are all born into a world which has already been interpreted by others. During our childhood and onwards through the rest of our life, bit by bit, we are exposed to habits, assumptions &c. that those around us take for granted, and we bring these things into our own subjective inner world, making them a part of us and how we view reality. This process is called socialisation and can be said to consist of three sub-processes: internalisation, objectivation and externalisation. 

To explain these three terms, I have to begin by explaining another, namely the objective reality. The objective reality is the world that exists outside of us and independently of us, as habits, patterns, norms, institutions, &c. (i.e., socially objectivised knowledge). It’s not objective in the sense that it’s objectively true, but objective in the sense that it exists outside of our subjective selves.

The objective reality is carried by language and everyday interaction between humans. We internalise this reality, meaning that we bring it into our inner world, and it becomes our subjective reality (existing within us, as part of our consciousness). Next, we externalise the subjective reality by creating long-lasting creations – such as routines, categorisations and laws – in the outside world. These creations are objectivised through language and everyday interaction, and become the objective reality “outside” of us (now slightly adjusted after having passed through our inner selves). The creations are then once again internalised, and so on.

This may seem a little abstract, so I’ll demonstrate the concept with a figure and an example.

[Figure: the internalisation–objectivation–externalisation process]

When you were told not to swim after a meal as a child, your parents presented you with an objective reality. According to this reality, “swimming after a meal leads to cramps”. You took this as an absolute truth – without even noticing that you did. You didn’t question it. In fact, you internalised it, brought it into your inner world. And when, many years later, you found yourself with your own children at the beach or by a swimming pool, you externalised the truth that “swimming after a meal leads to cramps” and turned it once again into an objective reality, this time for your children (and yourself). And this is how the assumption that one should not swim within an hour after having eaten came to be perpetuated through two – and probably more – generations. The same can be said about the wine pairings and the correlation between posture and pain, although the things we learn during the first stage of socialisation (i.e., from our parents when we are very young) tend to be more authoritative (and hence feel more or less set in stone) than the things we learn later in life.[1]

The socialisation process is thought-provoking and raises many interesting questions, but in order to connect the beginning and the end of this post, I’ll settle for asking my readers only one of them: Which assumptions are you unknowingly carrying with you from your childhood to your own children? (Or, alternatively, if you do not have children: Which assumptions that you were told in your childhood are you still taking for granted?)

 


[1] Bo, Inger Glavind, Att tänka socialpsykologiskt, Studentlitteratur, 2014, p. 155.

There is Always a Social Dimension

“[T]he study of our behavior as social beings, covering everything from the analysis of short contacts between anonymous individuals on the street to the study of global social processes . . .” (American Sociological Association, ‘What is sociology?’)

I am a creature that thirsts for knowledge. In the past months, I have had something of a… learning drought – which is also why I have been so very inactive on this blog. But recently, I found myself craving new knowledge again, which is very fortunate, since I have just started two new classes.

One of these is a class in sociology (see the quote above for a definition). Now, I am not a very social person, and since I have lived in the same place for almost five years, I have a sufficient number of friends in the area. So when I started this new class last week, I went into it with the mindset that I didn’t need to meet any new people. It was an immense relief – I wouldn’t have to care at all about the social bit. I could go all in on the learning and skip the rest. Not that it would be a bad thing if I met some new people, as long as it was on my terms and without any pressure.

The first week went well. I spoke with some new people, but didn’t feel the need to get to know any of them on a deeper level. Frankly, I didn’t want to. I leaned back and watched as all the students around me started the desperate dance that somehow everyone seems to feel the need to dance in a new situation. It was doubly fun because we are all taking a sociology class, where we will be studying people’s (and our own) “behavior as social beings”. I already knew from the class I took in behavioural science last Autumn that human beings are incredibly dependent on social validation, but it was very interesting to watch the effect play out so prominently firsthand. In fact, it was so interesting that I didn’t realise until it was too late that I had gotten myself into a very strange situation.

Yesterday, at a lecture, I chose a seat next to a person I had spoken with a little at the first seminar. A stranger then sat down on my other side. During the lecture, we were asked to discuss something in groups of two or three people, and so the three of us naturally started talking to each other. Nothing strange so far. But then, in the 15-minute break between two sessions, the stranger confessed that this was their first class and that they had realised that they needed to meet some new people to study with, because apparently, they didn’t “understand a thing”. They then went on to ask if the three of us could perhaps meet sometime next week to study together. Since I wasn’t prepared for that kind of question, I didn’t know what to say – but the person from the seminar immediately agreed that we should study together.

Yes, you read that right. They agreed that we – that the three of us – should meet and study together. Without even having said a word, I was suddenly expected to participate in something I hadn’t yet agreed to. In the next breath, one of them asked for my last name so that they could add me on Facebook – which I gave them, stupefied – and then they asked what time would suit me best. Since I was still in shock, all I could say was, “I’ll have to check my schedule.” And in hindsight, I realise that this must have constituted an indirect approval. But I had gotten myself into a very difficult situation indeed.

While the common reaction to such a proposal one week into a new class would usually be glee or relief – “I have now established contact with other human beings and can relax knowing that I am a piece in a social puzzle” – I was not interested in being a puzzle piece – or at least, I thought it wasn’t important to me. My attitude conflicted with the established norm. So what was I supposed to do? Say “no thank you, I am not interested in socialising with you in my spare time”? Say “yes” and then drag myself to a meeting I didn’t want to go to? While I didn’t much care for bonding with new people, I didn’t want to be seen as rude, and besides, I enjoyed sitting next to the person I met at the seminar, and would like to sit next to them again in the future. So I said nothing. I sat there, dumbfounded, marvelling over the sheer stupidity and irony of the situation, while the other two were excitedly discussing our upcoming meeting.

After the lecture, I met up with two close friends and discussed my problem. They agreed that I had practically accepted the meeting by not outright refusing it, but that my reaction at the time had been understandable. One of them then suggested that I should contact the two via Facebook and retract my approval. But how? How would I do that without being seen as a jerk? From my perspective, I had two alternatives:

  1. Violating the social norm – effectively showing them my true self – by sincerely explaining that I was not interested in studying with them.
  2. Accepting the social norm – effectively showing them a false version of myself – by going to the meeting against my will.

It all came down to one question: What is most important to me? Being part of the social puzzle, or independently standing up for who I am?

Human beings have an innate need to belong. But it is more than just that. As phrased by one of my sociology lecturers, “It is in the encounters with other human beings that we develop our individuality and become members of society.” This statement suggests that without a social context, there are no individuals. Is it even possible, then, to be an independent individual? The two alternatives that I came up with were both based on social validation. If I accepted the social norm, then I would be accepted as a social creature and immediately welcomed into the group. If I violated the social norm, it would perhaps lead to immediate discomfort, but on the other hand, I would show them “my true self” and thus seek acceptance from the group by “opening up” to them about my personal preferences. And in the long run, violating the social norm could very well be the only way to be accepted deeply into the group.

Thinking about all of this made my brain hurt, so in the end, I chose to violate the norm and explain to my new acquaintances that I wasn’t interested in studying with them (not in a rude manner, of course – I simply said that I study best by myself). It seemed like the only way to go if I wanted to accept myself and my needs. Luckily, my explanation was received with understanding, and I was accepted into the group as myself. But the whole dilemma did give me something of an epiphany: We may consider ourselves to be independent individuals, but when it comes down to it, there is always a social dimension.


Sorry for the somewhat unfocused post – it’s been a while. Anyway, seeing that I am currently taking two classes, in sociology and IT law respectively, and studying criminology and rationality in my spare time, I expect to have plenty of inspiration in the near future, so stay tuned for more soon.

What is Your Glass Half-Full of?

A couple of weeks ago, I was standing in the kitchen when my partner came home in the evening. On the kitchen counter stood a glass half-full of a blackish-brown liquid, and this glass was the first thing his gaze fell upon when he entered the room.

“What is that?” he asked.

“What do you think it is?” I replied, at which he stared at the glass for a few more seconds. He was standing some distance away from the glass, and so couldn’t deduce the exact nature of its content merely from a glance.

“It’s either coffee or coca cola.”

I didn’t say anything, hoping that he would reason aloud about whether it was coffee or coca cola – which he, in fact, went on to do. And in the most adorable way, too. He gave me an excited smile and said,

“I will use the deduction and reasoning techniques that you – and Eliezer Yudkowsky – have taught me!”

Jackpot. This would be fun. I enjoy learning about rationality, and I love sharing it all with him – it would be interesting to see what he had learned. He started by saying,

“We didn’t have any coca cola when I left home this morning. We did have instant coffee, though.”

“I could have gone and bought some coca cola,” I responded matter-of-factly.

“True,” he allowed, “but considering that you really dislike leaving the apartment if you don’t have an important errand, it’s not very likely. And I doubt that you would go out just to buy some coca cola, especially seeing that we’re both trying to consume less sugar.”

“It could be sugar-free.”

“Yes… But still, I find it more likely that you would make some instant coffee.”

“Without milk? You know I hate coffee without milk. And why would I use a glass instead of a cup?”

This confused him for a moment. “You’re right. That seems really odd…” But then he seemed to come up with a new theory. “We’re probably out of milk. Maybe you just wanted a quick energy boost, but you realised that we don’t have any milk, and so you thought that you would take a small glass instead of a cup, so that you could drink the coffee like a kind of energy shot, without tasting the bitter coffee too much.”

I opened the fridge so that he could see that we were indeed out of milk. But just when he seemed to think that he had figured it all out, I countered with, “This isn’t the end just yet – why is the liquid not hot, if it’s coffee? As you can see, there is no vapour.”

“Ah, that’s easy. You didn’t succeed in making it drinkable – you found it disgusting without milk, which is why you only drank half of it. Then you left the glass here on the counter. It’s probably been standing there for a while now, cooling.”

I tried to keep my poker face on as I lifted the glass and presented it to him. “Pinch your nose and drink it,” I told him. And so he pinched his nose, drank some of the dark liquid, and then removed his fingers. As the taste spread to his taste buds, he started grinning.

“I was right.”

“You were,” I said, grinning with him.

Conclusion: Rationality isn’t just useful, it’s fun too.

(Bonus Conclusion: I should probably go out more.)

Why We Should All Be Very Afraid of Writers

What would you say, if I told you that our minds are naturally wired for stories?

Narratives are our preferred mental structure for storing and retrieving information.[1] Perhaps you have heard of a concept called “mind palace”? I suspect many of you first heard of it in the context of Sherlock Holmes – a genius who, at least according to the BBC show, has a mind palace of his own (in which case, thank you; you just proved ahead of schedule that narratives are great for taking in new knowledge).

A mind palace is an abstract place in your mind where you gather all sorts of information. It is a mnemonic – a memory aid. Basically, you imagine that the rooms in your palace are filled with items that help you recall certain memories, and then if you want to remember something, you just walk through the rooms in your mind. I’ll borrow an example from a friend of mine who has mastered the mind palace: he is able to remember small facts about me by having a corner of a room dedicated to me (a corner filled with various objects connected to me in some way), and in that corner stands a character from a certain video game, in order to remind him of the fact that I originally chose to study law because of that very game. If he needs to recall something that I have told him about myself, he (figuratively) walks into the room and looks around at the things gathered there in the corner.

It takes a lot of practice to conjure up a mind palace of your own, but if you manage to do it, you will then forever be able to gain quick access to all memories and facts tucked in there.
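For the programmatically inclined, you can loosely think of a mind palace as a nested lookup structure: rooms contain corners, corners contain cue objects, and each cue anchors a memory. Below is a tiny, purely illustrative Python sketch of that analogy – the rooms, cues and memories are invented by me, and the real technique of course relies on vivid mental imagery rather than literal data structures.

```python
# A toy model of a mind palace: rooms -> corners -> cue objects -> memories.
# Purely illustrative; the names and contents below are invented examples.

mind_palace = {
    "library": {
        "corner by the window": {
            "video game character": "chose to study law because of that game",
            "stack of blog posts": "writes about rationality and cognition",
        },
    },
}

def recall(palace: dict, room: str, corner: str) -> None:
    """Mentally 'walk' into a room, look at a corner, and read off what each cue anchors."""
    for cue, memory in palace[room][corner].items():
        print(f"{cue} -> {memory}")

recall(mind_palace, "library", "corner by the window")
```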

Narratives have wide uses apart from structuring our thoughts. For one, some of the most influential communications of all time have been stories (e.g., the Bible and Uncle Tom’s Cabin), and people all over the world devote much more waking attention to narrative (e.g., TV shows) than to rhetoric (e.g., television commercials interrupting the TV shows).[2]

Furthermore, according to transportation theory, when people are “transported” into a story, their real-world beliefs are changed to reflect that story. For example, in one study, readers of an absorbing kidnapping story became more accepting of false assertions in the story such as “mental illness is contagious” and “eating candy helps to reduce weight”. In other words, while reading, people recruit story-congruent memories that can then continue to exert influence even in other contexts. I have a silly example of my own: after reading the scene early in Harry Potter and the Prisoner of Azkaban in which Professor Lupin gives magical chocolate to Harry to help him regain his energy, a small thought was implanted into my mind: “eating chocolate can heal various ailments”. Even today, that thought lingers in my mind, because sometimes when I eat chocolate, I imagine that it makes me feel a bit better (beyond the general chocolate-y goodness, of course).

So you see, narratives possess strong persuasive powers; powers which can be used both for good and for bad. For example, in a content analysis of romance novels, only 10% of the studied novels portrayed discussions of condom use, and in those 10%, male characters were always the ones who initiated the discussions, whereas the female characters rejected condom use in half of those cases. It has, as a result, been found that high-frequency romance readers are less positive towards using condoms, and are significantly less likely to report having used condoms or to intend to do so in the future.[3]

Perhaps the danger lies in the fact that we are unaware of the persuasion in narratives – we tell ourselves that we know that it is fiction, we know that it is not true, and so we let our guard down. Or perhaps we want to live our lives like those glorified fictional characters do, and so we take after their beliefs and behaviors. Regardless of reason, in many ways, fiction can impact us just as much – or more than – nonfiction. “[I]ndividuals seem to approach fiction with a plausibility criterion in mind; if the information seems reasonable, it appears to have an impact equal to information labeled as fact.”[4]

Remember the post about the need for cognition, and how people high in the need for cognition are more likely to elaborate on various things in life? Well, in the case of narratives, the need for entertainment can affect the extent of transportation; people in boring or stressful situations might be more motivated to be transported, and as such, even inferior texts can be sufficient to induce transportation.[5] Then there is transportability, i.e., the dispositional tendency to become transported into narrative. People with high transportability find it very easy to become immersed in narrative worlds, whereas those with low transportability are not as easily swept away.[6] Conclusion: If you are a dreamer who lives a boring or stressful life, watch out for persuasive narratives.

But of course, narratives can be a great way to take in new information. In case you hadn’t yet reflected on it yourself, here comes a little nudge in the right direction: Have you thought about the structure of this post? Or the structure of any of my other posts on here? To make interesting and educational posts, I constantly weave facts together with stories and examples from my own life. We as humans often do this almost subconsciously when trying to convey information to other people; it is effective, and therefore we do it (this is true of many a persuasion technique out there). But after reading this post, maybe you will be more conscious of all the persuasive narratives around you, and will be able to make use of them more efficiently.

Just make sure not to use this new-found power for evil, all right?

 


[1] Green, Melanie & Brock, Timothy, Persuasiveness of Narratives, In Brock & Green, “Persuasion: Psychological Insights and Perspectives”, 2nd ed., SAGE Publications, 2005, p. 121.

[2] Ibid., p. 123.

[3] Ibid., pp. 131-132.

[4] Ibid., p. 133.

[5] Ibid., p. 137.

[6] Ibid., pp. 125-126.

The Ethics of Our Existence

Living on the surface of the planet Earth are carbon-based, oxygen-breathing creatures that stuff other carbon-based things into their mouths and swallow in order to survive. Inside these creatures, a red liquid is constantly swirling around, bringing precious oxygen to all the parts of their bodies. Many creatures on the planet Earth give birth to small new creatures and then raise them with love until they are old enough to take care of themselves. How long this takes varies between species, as do many other particular traits and biological functions. But all in all, these creatures all have much in common.

You, as a human, are one of them, and I bet that you – just like many others – take the particulars of your life for granted. By that, I mean that you don’t question why you breathe oxygen. You don’t question why there are ~5 litres of blood inside of you. And you probably don’t question why you raise your own children with love. But are these things really self-evident? Are they open and shut, with no alternative solutions possible? We might not know for certain how important our particular biological structure is for life to exist at all, but it has been speculated that silicon-based life forms could exist somewhere in the universe – and that if such life forms breathed oxygen like us, they would exhale silicon dioxide, i.e., quartz.

So our view of reality and how nature works is not self-evident. There may be entire galaxies out there that are filled with creatures the likes of which we cannot even imagine.

The author of Harry Potter and the Methods of Rationality, Eliezer Yudkowsky, has written shorter stories as well. One of them is called Three Worlds Collide and takes place several hundred years in the future, during a mission in space. The human space explorers suddenly encounter two other space-faring species in a short period of time: the Babyeaters and the Superhappies (the story behind these names will soon be obvious to you).

The Babyeaters, as their name makes quite clear, eat their own babies. They give birth to hundreds of babies and then eat all that are not strong/fast/good enough to escape their parents. The Babyeaters consider this winnowing to be morally sound; in fact, they think that it is immoral not to eat babies. As such, they immediately try to convince the humans to eat their own babies.

Then there are the Superhappies, who are, well… super happy. Their existence is characterised by pleasure; in fact, when first communicating with the humans, they propose having sex with them in order to please them. These creatures do not experience any negative emotions, and consider such emotions to be horrible, immoral and completely unacceptable. As such, they want all humans to give up their negative emotions and be happy with the Superhappies for the rest of humanity’s existence.

So now the humans on board their space vessel are faced with two species that both want to change humanity in some way. The humans struggle with the ethical dilemma presented to them. They can easily eradicate the Babyeaters, but is it the right thing to do? Even though they eat their own babies – and the babies go through weeks of pain before finally dying – the Babyeaters are peaceful, conscious beings that believe in their particular solution. Then there is the issue of the Superhappies, who are more technologically advanced than the humans, and who want to force humanity to leave their negative emotions behind in favor of an existence of pleasure. Just as the thought of eating babies is abhorrent to the humans, the thought of human babies crying and being in (mental and physical) pain is abhorrent to the Superhappies. So what right do the humans have to decide what is moral and what is not?

There are two endings to this story – one where humanity succumbs to the Superhappies and agrees to have its negative emotions erased, and one where humanity fights back and manages to stop the Superhappies from finding the human home planet.

I’m sure there is more than one moral to this story, but what I gained from it was the insight that what we take for granted is not necessarily self-evident, and not necessarily even ethical when seen from a wider perspective. Just as we can look back at the history of humanity and think, “how could [unethical event] ever be allowed to happen?”, other species might someday look at us and completely turn our world view upside down by applying standards to our values that we hadn’t even dreamed of.

When that happens, will we succumb, or will we flee?

How to Defeat Clowns with Rationality: Mass Media, Availability and Bayesian Thinking

“Nowadays, I am more afraid of clowns than I am of rapists.”

The sentence above is a paraphrase of something my sister said yesterday. For years, she has taken nightly walks around her neighbourhood, and I am fairly certain that she has never once seen a clown during one of them. On the other hand, I doubt she has ever seen a rapist in action either, so why is it that the possibility of evil clowns has lately come to scare her more than the possibility of rapists?

The culprits: mass media and the deficits of her own brain.[*]

We tend to assess the relative importance of issues by how easy they are to retrieve from memory, and this ease is largely determined by the extent of media coverage. When an issue receives considerable coverage in the media, we assume that it is important and relevant to us; what we see on the news becomes the basis for our beliefs about the state of the world (which is why being in control of what is covered by the mass media is so important to politicians). We call to mind frequently mentioned topics while others quietly slip away from our awareness.[1] The phenomenon of primarily recalling things that are more easily accessible to us is called the availability heuristic.[2]

Applied to the example with my sister, this explains her sudden fear of clowns. She has not seen a clown, nor has anyone else she knows. Viewing her life in isolation, there is no reason whatsoever for her to fear clowns. But, of course, in today’s society, viewing one person’s experiences in isolation is hardly possible. We are all connected; the Internet allows us to know what someone on the other side of the globe had for breakfast, or – as in this case – which crimes are being committed over 7,000 km away. And because my sister has Internet access (which has recently been declared a fundamental human right,[3] just as a side note), she is aware of the recent clown attacks in the United States, as well as of the few cases that have popped up in Sweden. The media reports on this issue, which makes it easily accessible in her mind, which results in her falling prey to the availability heuristic, which leads to her fear of encountering clowns during her nightly walks.

Of course, there is a flip side to this problem, namely that just as the media makes my sister’s clown fear more accessible in her mind, the media is also to blame for all the sudden “evil clown sightings” in new areas. Without media reporting on this issue, there would be no reason for people to suddenly start dressing up as clowns and scaring or hurting people all over the world. So, in a way, my sister’s fear is justified – but it is still blown way out of proportion.

You see, there is another aspect that also contributes to my sister’s fear, and that is the fact that people are generally lousy at estimating probabilities. For example, consider the following: Imagine that you meet a recent university graduate. It is quite obvious from your conversation with this person that they are shy. If you were to guess whether they are a librarian or a lawyer, which would you deem the more probable alternative?

Most people would say that the person in question is probably a librarian, because of the shyness; it is more common for librarians to be shy than for lawyers to be shy. This seems reasonable. However, it is easy to forget that there are far more lawyers in this world than there are librarians. So is it really more likely that someone is a librarian just because they are shy? Shy lawyers certainly exist – a smaller proportion of lawyers than of librarians, perhaps – but because lawyers so greatly outnumber librarians, it may well be more plausible that a shy person is a lawyer.

This phenomenon of neglecting to consider background data when estimating probabilities is called base rate neglect, and it can be applied to the clown example just as well. Since my sister’s fear of clowns is easily accessible in her mind due to recent mass media coverage, it is easy for her to neglect the background data in this case, i.e., that there are more rapists than there are evil clowns.
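To make the librarian/lawyer reasoning concrete, here is a small worked example in Python. The numbers are invented purely for illustration (they are not real employment statistics), but they show how a large base rate can outweigh a strong-seeming trait.

```python
# Toy numbers, invented for illustration only.
n_librarians = 20_000         # assume far fewer librarians than lawyers
n_lawyers = 1_000_000
p_shy_given_librarian = 0.60  # librarians are more often shy...
p_shy_given_lawyer = 0.15     # ...than lawyers, per the intuition in the post

# Expected number of shy people in each group.
shy_librarians = n_librarians * p_shy_given_librarian  # 12,000
shy_lawyers = n_lawyers * p_shy_given_lawyer           # 150,000

# P(librarian | shy), restricted to these two professions (Bayes' rule).
p_librarian_given_shy = shy_librarians / (shy_librarians + shy_lawyers)
print(f"P(librarian | shy) = {p_librarian_given_shy:.2f}")  # about 0.07

# Despite shyness being "typical" of librarians, the shy graduate is far
# more likely to be a lawyer, simply because there are so many more lawyers.
```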

In order to avoid faulty estimations, one can make use of a little something called Bayesian thinking.[4] It is not the simplest concept to explain, but simply put, Bayesian thinking involves three aspects. The first one is to remember your priors (i.e., your background data/prior knowledge). As this has already been explained in the example above, let us move on to the second one, namely to ask yourself, “Would the world look different if I were wrong?”

Would the world look any different from what it currently looks like if the probability of encountering an evil clown were much, much lower than my sister expects? The answer is a resounding “no”. In the world where clowns really are lurking in the darkness, she has still not seen a single one of them, and there are still only a handful of reported sightings. In the world where the clown probability is much lower, she would likewise see no clowns, and a handful of reported sightings would still make the news. Her evidence looks the same in both worlds, so it cannot support the scarier belief – which suggests that she is greatly overestimating the clown probability.

Thirdly, according to the Bayesian ways, you should update your beliefs incrementally. This aspect is a bit harder to apply to the clown example as it stands, so let me change it up a bit. Imagine that, apart from being more afraid of clowns than of rapists, my sister is also completely convinced that the probability of encountering a clown is greater than that of encountering a rapist. In her mind, this is a fact. But then imagine that someone she respects and looks up to – like me, for example – tells her about base rate neglect and the availability heuristic, and explains why she might be overestimating the clown probability. Even if she stays confident in her own beliefs, she may still think that what I say makes sense; she just needs more evidence before she is willing to change her mind completely. And until that evidence has been presented to her, she is only willing to lower her estimate a little, to adjust for what she learned from me. As more evidence piles up that makes the world in which evil clowns are very rare more likely than the world in which clowns are around every corner, she will update her beliefs more and more, until eventually she accepts a new, more accurate belief. (Snowflakes of evidence eventually become heavy enough to break a tree branch.) So, in short, the third aspect of Bayesian thinking involves adjusting one’s beliefs accordingly when exposed to new evidence in either direction.
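As a rough sketch of what updating incrementally could look like, the snippet below applies Bayes’ rule repeatedly as pieces of evidence come in. The prior and the likelihoods are numbers I have made up for illustration; the point is only the mechanics of shifting a belief step by step rather than flipping it all at once.

```python
# A minimal sketch of incremental Bayesian updating (all numbers invented).
# Hypothesis H: "evil clowns are common in my sister's neighbourhood".

belief = 0.30  # her (overestimated) prior belief in H

# Each piece of evidence: (description, P(evidence | H), P(evidence | not H)).
# Evidence that is likelier in the "clowns are rare" world pushes the belief down.
evidence = [
    ("no clown seen on tonight's walk", 0.70, 0.99),
    ("a trusted person explains base rate neglect", 0.40, 0.80),
    ("still no local police reports of clowns", 0.50, 0.95),
]

for description, p_if_true, p_if_false in evidence:
    numerator = p_if_true * belief
    belief = numerator / (numerator + p_if_false * (1 - belief))  # Bayes' rule
    print(f"{description}: belief in H is now {belief:.2f}")

# The belief drifts downward a little with each observation -- snowflakes of
# evidence rather than a single avalanche.
```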

To sum up, be wary of the availability heuristic and of how mass media affects your probabilistic perception of the world, and for the love of everything we hold dear, don’t overestimate your own probability estimations.[5]

 


[*] Disclaimer: My sister is incredibly intelligent, but intelligence has little to do with rationality.

[1] Iyengar, Shanto & McGrady, Jennifer, Mass Media and Political Persuasion, In Brock & Green, “Persuasion: Psychological Insights and Perspectives”, 2nd ed., SAGE Publications, 2005, pp. 230-231; see also Kahneman, Daniel, Thinking, Fast and Slow, Farrar, Straus and Giroux, 2011, pp. 39-40.

[2] As explained in a previous post, heuristics are rules of thumb/mental shortcuts that help us save valuable time and cognitive resources in daily life.

[3] UN Human Rights Council, The promotion, protection and enjoyment of human rights on the Internet, 27 June 2016, A/HRC/32/L.20, § 10; see also the court case Ahmet Yildirim v. Turkey, no. 3111/10, § 31, ECHR 2012.

[4] If you want to know more about Bayesian thinking, click here to watch an excellent video of rationalist Julia Galef explaining it in more detail.

[5] The pun (?) in this last part was unintended but still a very pleasant result of my phrasing.

Oh, and for those of you who are interested, Schroedinger the mouse is dead. After days of doubting its existence, I finally managed to lure it into my deadly trap.

Rationality – What, Why and How

A while ago – back before I knew anything about the concept of “rationality” – a good friend recommended a book to me. Well, not an actual published book, but rather a Harry Potter fan fiction. Fan fiction borrows characters and worlds from original works of fiction and switches things up by introducing new storylines, new relationships, or something as small as a single new concept. This particular fan-fic is called “Harry Potter and the Methods of Rationality” (by Eliezer Yudkowsky) and this is its plot according to GoodReads:

“Petunia Evans has married an Oxford biochemistry professor and young genius Harry grows up fascinated by science and science fiction. When he finds out that he is a wizard, he tries to apply scientific principles to his study of magic, with sometimes surprising results.”

In other words, eleven-year-old Harry Potter arrives at Hogwarts and cracks the secret to magic by applying some badass science. This is the book that first introduced me to rationality; having seen Harry use his rationality skills to turn the (wizarding) world upside down, I was inspired to do the same. (I strongly recommend reading it if you are interested in science and rationality – it is amazing all the way through.) I don’t remember exactly what happened next – all I know is that I’ve galloped through a plenitude of videos, articles, blog posts and wiki entries in the months since, and I now apparently own a rationality blog. Things escalated quickly, let’s just leave it at that.

But what exactly is rationality, and why should one study it?

Let me begin by clarifying something important: scientific rationality does not equal “Spock rationality”. Rationality is not about abandoning your emotions and spewing out random probability percentages. You need to leave your preconceived ideas about rationality behind before we go on.

Ready?

Good.

So, now that we have that out of the way – no source seems to be able to specify exactly what it means to be rational, but the best definition I have to give you is the following. Rationality consists of two subcategories, namely

  1. Epistemic rationality, and
  2. Instrumental rationality.

Very simply put, epistemic rationality is the art of having true beliefs about the world, while instrumental rationality is concerned with your own goals in life and how to reach them. Being rational, then, means working against your own brain in its endeavour to settle down in a comfortable, ignorant bliss; it means being sceptical; it means asking yourself time and time again if what you think you know really is true; it means updating your beliefs when you receive new evidence; it means looking past thought traps, biases and fallacies in order to be able to get what you really want out of life.

Doesn’t this sound like something worth striving for? Yes, I think so too. But the fact of the matter is that it is hard. Changing your mind about something when new evidence pops up is uncomfortable, as is admitting that you are wrong. Not to mention the fact that your own brain is wired against you – it frequently uses so-called heuristics (rules of thumb; shortcuts) to draw conclusions that may be not only wrong, but also detrimental to both us and our surroundings.

As I mentioned in my first blog post, we use two different cognitive systems when we interpret the world. Fittingly, these systems are called system 1 and system 2. System 1 is the one based on heuristics; it is intuitive and reacts instantaneously. When I say 2 + 2, you think “four”; when I say “lemon”, you grimace and start salivating. This is the system you use most often in your daily life. It’s the faster of the two system siblings, but a while after it has reacted, system 2 kicks in. This system is the “big brother”; the system that deliberates, analyses and reasons. System 2 activates whenever you have to do something you’re not used to doing. When buying a new smartphone, for example, you compare prices, product details and brands; this is part of your deliberate reasoning.

It makes sense to use system 1 when you don’t have time to deliberate; in fact, our lives would be much more difficult and time-consuming (not to mention energy-consuming) if we had to actively think about all our decisions. But sometimes, using system 1 hurts us and our surroundings. System 1 allows for biases and prejudices to slip through, and this has an adverse impact on our ability to make rational decisions. System 1 also complicates becoming more rational, since an important part of rationality is to avoid biases and fallacies; in other words, to be rational, you need to learn how to stop yourself from drawing biased conclusions. You need to learn how to notice your thought processes and ask yourself,

“Did what I just thought really make sense?”

“Are there other explanations for what I just experienced, and are those explanations just as – or even more – plausible?”

“Could I have fallen prey to any biases and fallacies just now?”

And so, the first step to becoming a rationalist is to understand and accept that you – just like every other human being (even full-fledged rationalists) – are susceptible to biases, fallacies and other misconceptions, and that the road to overcoming this is long and hard.[1] But if you truly feel inspired and motivated to go through with it, it will be a rewarding journey, and I will help you through it. (Even if you are not motivated to do all that, don’t worry – you can still read my rationality posts and apply their secrets and techniques to your life to the extent that you want to. Anything is better than nothing, eh?)

This has been a short introduction to rationality – if you have any questions, feel free to leave a comment.

 


[1] I should add that I have only just begun this journey myself (a few months ago), so I am by no means a master at rationality; however, I have a passion and inspiration to learn as much as I can on the subject, and you are more than welcome to learn with me.

Rat-ionality

“The brain is a flawed lens through which to see reality. This is true of both mouse brains and human brains. But a human brain is a flawed lens that can understand its own flaws—its systematic errors, its biases—and apply second-order corrections to them. (…) Mice can see, but they can’t understand seeing. (…) Their camera does not take pictures of its own lens.” (Eliezer Yudkowsky, ‘The Lens That Sees Its Flaws‘)

One of the names I considered for this blog was “metacognition” (thinking about thinking; being aware of one’s own thought processes). However, the name was already taken – as was “needforcognition”, which is why I replaced the word “for” with the number 4.

It’s been debated whether metacognition is unique to humans or not; studies have suggested that chimpanzees and dolphins may possess the same ability, and almost exactly a year ago, a scientist named Stephane Savanah published an article on whether rats are rational.[1] With the term “rational”, Savanah refers to the ability to reason independently (a definition which differs from the one used in the “rationality community” [see also this post]).

In one study, tones of various lengths were played to rats, who were then tasked with determining whether the tone was short (2 to 3½ seconds) or long (4½ to 8 seconds). If they gave the right answer (in the form of a pulled lever), they were rewarded with a number of food pellets. However, they were also presented with a third alternative – “uncertain”. If they chose this option, they received some food pellets, but not as many as were rewarded for a correct answer. If the rats chose the third alternative, it would suggest a degree of metacognition, since it could mean that they evaluated their own certainty in their answer (and hence, thought about their own thoughts).

And what did the study show? When tones closer to the middle ground (e.g. 3½ or 4½ seconds) were played, the rats were more likely to choose the “uncertain” alternative. However, these results could also be ascribed to mere associative behaviour (that is, the rats learning from past experience when to pull which lever), meaning that they did not suffice as evidence of rationality in rats. This is in line with Morgan’s Canon, a concept in comparative psychology according to which animal behaviour should be explained as simply as possible; it should not be “interpreted in terms of higher psychological processes” if it can just as well be explained by simpler processes.[2]

Other experiments that were conducted gave similar results; indications of metacognition, but only indications. As Savanah states in his article, the actions taken by the rats “do not necessarily require or implicate the capacity for rationality. As such there is as yet insufficient evidence that rats can reason.”

In other words, rat-ionality is still just a theory (pun very much intended), even though it may sometimes seem otherwise. For example, I am fairly sure that there is currently a mouse somewhere in my apartment, but the only reason I have for believing this is that every morning, a piece of paper or plastic is lying next to the garbage can instead of inside it. Usually, I don’t think this would be enough for me to assume that there was a mouse afoot, but it just so happens that I had another mouse in my apartment a few weeks ago, and that was a damn smart thing.

You see, the only reason I noticed that mouse was that the bottom of the paper compost bag had been gnawed on. And even after noticing that, it took several days before I finally caught the thing, because somehow it managed to steal the pieces of cheese right off the mouse trap like some kind of ninja – even when the cheese was glued on. Eventually I managed to outsmart it, but it took its sweet time.

So after encountering this super mouse, my senses have been fine-tuned to others like it. My theory is that the current mouse (let’s call it “Schroedinger”, since I don’t even know if it truly exists) climbs up onto the bundle of extra paper compost bags next to the garbage can and then pokes about in the trash for something to eat. I suspect that the reason why Schroedinger doesn’t gnaw on the bottom of the compost bag like its predecessor did is either because

  1. it’s not intelligent enough to realise that there’s lots of food inside the compost bag, or
  2. it’s so intelligent that it knows that if it gnaws on the compost bag, I will have real evidence of its existence.

So either I have on my hands a dumb but lucky mouse or a sneaky, rational super mouse.

Which one do you think is more likely?


[1] Savanah, Stephane, Can Rats Reason?, Psychology of Consciousness: Theory, Research, and Practice, 2015, Vol. 2, No. 4, pp. 404–429.

[2] Morgan, C. Lloyd, An introduction to comparative psychology, 2nd ed., London, 1903, p. 59.

Time Travel and the Advantages of Sci-Fi

“Every day, everything we do, is at a turning point in history, whether it’s obvious to us or not. And of course, some of these points really matter tremendously and others don’t, but the difference is not announced to us.” – James Gleick [x]

If you could go back in time and change something about your own history or that of the world, would you?

This question forces you to consider two very important aspects. First, it forces you to think about time travel technology – how would such technology work? Could such technology work? Second, there’s the ethical dilemma – if you could, would you? Do you have the “right” to do so?

Countless books and movies revolve around this dilemma: Some fool travels back in time to change a seemingly insignificant detail, only to end up worsening the future tenfold in doing so. Seeing how media keeps telling us time and time again (pun intended) that time travel is bad, would anyone who hasn’t lived under a rock their whole life really be dumb enough to travel through time if they had the chance?

This leads me to ask: Considering the fact that most people that are alive today have been repeatedly exposed to the time travel dilemma, does that change the whole game?

Imagine that you, as a consumer of time travel media (I assume here, but I think it’s a fairly logical assumption), one day stumble upon a time machine. Now, according to the many books and movies on the subject, we’re first supposed to doubt the possibility of time travel. We’re supposed to go, “A time machine? You’re crazy, that’s impossible.” And perhaps that part of the whole dilemma wouldn’t change; you’d probably still be pretty sceptical of time travel even after having been exposed to time travel media, and rightfully so. But what about the next part? The ethical dilemma?

How many times have we watched a horror movie and, in annoyance, called the characters stupid because they “should obviously have seen that one coming”? This applies to fantasy as well. If you got a letter from Hogwarts, you’d know what to do – you’d obviously hurry off to Diagon Alley to get your course literature, wand, magical pet, etc. If you saw a group of pale and handsome youths sitting in the school cafeteria without eating anything, you’d obviously stay as far away from those freaks as humanly possible.

… Or would you?

I propose that you wouldn’t.

The reason that those things are so obvious in the movies is that they are movies. We’ve been taught that movies are (usually) fictional, and so we watch them with the mindset that it’s not real. Even if magic or vampires or long-haired girls crawling out through a TV are all “real” in the movie universe, they are – and will stay – fictional to us, the consumers. If you received a Hogwarts letter, you would think that it was a prank. If you saw a group of pale teens not eating in the cafeteria, you’d think they were goths with eating disorders (or Twilight enthusiasts, whichever you prefer). And so, you would do what every exemplary fantasy character would in your position – you’d shrug and move on.

However, I’m not saying this would be the case with all fictional concepts. Especially with sci-fi, there is an element of “what if”. No, 2015 didn’t grant us hoverboards, but that doesn’t mean we’ll never ever get them. Sci-fi is short for science fiction for a reason. Science might eventually be able to take us where sci-fi media has already been; a fact which differentiates sci-fi from horror and fantasy. It’s not out of the question that time travel technology could eventually become a reality (in reality). And that, friends, is why the time travel dilemma might actually be affected by time travel media.

Because even though we don’t know how such technology would work per se, we are familiar with the overall concept (and the dilemma). After getting over the first hurdle – “Is this really a thing? Someone must be pulling a prank on me. No? It’s real? … Oh.” – we’re presented with the second aspect. And if we assume a general knowledge about time travel media, we can also assume a recollection of how the dilemma turned out for the fictional characters. We’d remember the butterfly effect, the causal loop (a future event is the cause of a past event) and other temporal paradoxes. That is, we’d have learned something from consuming sci-fi media. And that’s something fantasy (generally) can’t give you, because fantasy builds upon the imagination of its creators. Time travel technology does too, but not the dilemma that such technology presents.

And so, since our exposure to time travel media has made us more educated on the subject than we would have been if the concept had never been touched upon until the very moment such technology existed, it is more probable that we would abstain from using our shiny new time machine upon finding it than if we had been left to work out all the possible consequences of this new adventure on our own.[1] And this is also why it makes actual sense to be annoyed with the characters in time travel media, because if they’re portrayed as regular modern humans, they should honestly be familiar with all the shit that might go wrong when one is travelling through time. Just like we are.

Conclusion: Don’t go back in time. You’re expected to know better.

 


[1] This also raises some interesting thoughts about the importance of sci-fi as education, but I think that’s a topic for another day.

Introduction: The Need for Cognition

“[S]ome individuals like to engage in complex, inquisitive, and analytical thoughts. They feel intrinsically motivated to devote effort to cognitive endeavors, striving to understand objects, events, and individuals.” (Dr. Simon Moss, ‘Need for Cognition‘)

The concept of need for cognition originates from behavioural science and the so-called Elaboration Likelihood Model (ELM). ELM, which was developed in the 1980s, is a dual-process theory which proposes that there are two different ways in which humans process information: the central route and the peripheral route.[1]

The central route, also called systematic and elaborative processing, relies on elaboration. In this route, people “carefully attend to the arguments presented, examine the arguments in light of their relevant experiences and knowledge, and evaluate the arguments along the dimensions they perceive to be central to the merits of the objects.”[2]

The peripheral route (also called heuristic processing), on the other hand, relies on simple cues and shortcuts. Humans are “cognitive misers”; that is, we use simple and time-efficient strategies when making decisions. This is not out of laziness, but rather because we do not have the mental capacity to elaborate on everything we come across. Imagine taking the central route to decide which foot to place in front of the other while walking, for example – this obviously would not work in the long run. (Hehe, get it? Long run? … Okay, moving on here.)

So which route do we use in our daily lives? Well, it depends. Because the central route is effortful and demanding, we usually only use it when we are both motivated and able to process the information more thoroughly. There are several factors that determine whether an individual will put in the effort needed for this route, but not all of them will be discussed in this post. Instead, I will focus on only one of the motivational factors, namely the need for cognition.

Ah, see? It’s all coming back to the title of the blog (and this post). So let me tell you about this concept. You see, there are individual differences when it comes to motivation for elaboration, and one of them is the need for cognition. This particular need is a trait that makes the individuals who possess it more likely to use the central route when processing information. It can be described as enjoying thinking hard about things even when there is no need to do so. (See also the excellent quote at the beginning of this post.) Individuals high in the need for cognition will generally do more thinking about the information in a message than individuals who are low in the need for cognition.[3] This means that individuals with this trait are more motivated to think and elaborate in their daily lives.

And so, I thought this would be the perfect name for this blog, since:

  1. I, the author/owner, am high in the need for cognition and thus greatly enjoy thinking about stuff. Describing it as a “need” is actually very accurate on my part.
  2. This is a place where I will verbalise some of the plenitude of thoughts that I have (with a focus on cognitive science and rationality), as an outlet for my cognitive need.

Now, there are several reasons why I started this blog. One of them is that I was inspired by a certain other blogger who said that they started blogging every day in order to have a reason to write more often. Maybe I won’t blog every single day, but I hope this blog will give me some motivation to write at least a couple of times every week. This is both because the thought of blogging has always enticed me (but I haven’t really known what to write about, and eventually I’ve gotten bored) and because I feel like I haven’t been writing nearly as much as I want to. (Usually when I “write”, it’s unpublished fiction in my spare time, but writing about nonfiction will also help develop my writing skills.) Another reason is that rationality, psychology, cognitive science and to some extent philosophy are subjects that intrigue me and that I love reading and talking about, so why not take it to the next level and write about them too? Maybe eventually I can even get to know and talk to some like-minded people who find their way to my blog somehow. (Well, it’s certainly a possibility.)

But for now, as this blog stands, it will be a place for me to air my thoughts on certain subjects, first and foremost rationality and cognitive science.

So, if you have just found your way here, hello and welcome! You’re hereby invited to follow me on my journey to enlightenment. ;D


[1] Rationality speaks of system 1 and system 2 reasoning, which is basically the same thing. Read more about that in this post.

[2] Petty, Cacioppo, Strathman & Priester, To Think or Not to Think: Exploring Two Routes to Persuasion, In Brock & Green, “Persuasion: Psychological Insights and Perspectives”, 2nd ed., SAGE Publications, 2005, p. 85.

[3] Ibid., p. 94.