I normally do not inflict my reading habits
on others, but I feel compelled to recommend Daniel Kahneman’s Thinking, Fast and Slow. I knew it would
be a good read because I’m a great fan of Dan Ariely, and Ariely regards
Kahneman as one of the most important figures in the behavioral sciences. (Ian
Bremmer, who is not prone to plugging authors, also recommended the book to
Eurasia Group clients.) I’ve only finished three chapters—it has 38, plus an 11-page “Conclusions”—but I can already tell that it will help me think clearly.
Just one, brief example:
“…when people believe a conclusion is true, they are also very likely to believe arguments that appear to support it, even when these arguments are unsound.”
Maybe I like this line because I’ve always
believed it on some level to be true—and it appeals to me because I like to
think that I recognize it in others but that I am above it all and always
search thoroughly for counterarguments. In fact, I probably do, and I’ll more
likely than not read about the phenomenon later in the book.
Whatever the case may be, I expect it to be
a great thinking aid.
8 comments:
Matt:
Kahneman is a behavioral scientist, not a philosopher. So, if you want answers to epistemological questions like the following, you will be very disappointed.
“But how would anyone know the arguments were unsound to begin with ... that is the implication of the statement is that some arguments are sound and others aren't – but this presumes there's some way to judge between the two ... what would that way be?”
You work with words. So, if you want insights into how the mind works and perhaps to use those insights to detect phony arguments and rhetorical tricks that people use to get their way, this is a book for you. If you’re the kind of person who is more interested in asking what “phony” and “rhetoric” are, then it’s not.
If Karl Popper and his students are correct, then there are no sound arguments.
Yet, the statement made by Kahneman implies the following:
1. There are sound arguments.
2. Kahneman knows how to sort sound arguments from unsound ones.
Note the following syllogism:
All swans are white.
Alf is a swan.
Therefore, Alf is white.
This is logically valid. Is it sound?
Well, what's the criterion for soundness? There isn't one. That's the problem.
I mean, consider this:
All Zebras are pink and blue.
Ed is a zebra.
Therefore, Ed is pink and blue.
That's logically valid, but is it a sound argument?
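For whatever it's worth, here's a toy sketch in Python of the distinction I'm driving at (my own construction, not anything from Kahneman or Popper). Validity lives entirely in the form: in every possible world where the premises hold, the conclusion holds. Soundness would additionally require the premises to be true of the actual world, and that is precisely the part logic gives us no criterion for.

```python
# A toy model of validity: a "world" assigns facts, and an argument form is
# valid if the conclusion holds in every world where all the premises hold.
from itertools import product

def valid(premises, conclusion, worlds):
    """True iff the conclusion holds in every world where all premises hold."""
    return all(conclusion(w) for w in worlds if all(p(w) for p in premises))

# Every internally consistent way the facts about Ed and zebras could stand.
worlds = [
    {"all_zebras_pink_blue": a, "ed_is_zebra": z, "ed_is_pink_blue": c}
    for a, z, c in product([True, False], repeat=3)
    if not (a and z and not c)  # respect the meaning of "all zebras are pink and blue"
]

premises = [lambda w: w["all_zebras_pink_blue"],  # All zebras are pink and blue.
            lambda w: w["ed_is_zebra"]]           # Ed is a zebra.
conclusion = lambda w: w["ed_is_pink_blue"]       # Therefore, Ed is pink and blue.

print(valid(premises, conclusion, worlds))  # True: the form is valid, whatever the facts
```

Whether zebras are actually pink and blue never enters into the check, which is exactly why validity alone can't make an argument sound.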
Richard Nisbett has a book called _The Geography of Thought_. I won't go into the positives and negatives of this book, but one claim he makes toward the end of the book is as follows:
"East Asians, then, are more likely to set logic aside in favor of typicality and plausibility of conclusions. They are also more likely to set logic aside in favor of the desirability of conclusions."
I'm not at all sure if I accept this claim, but it fascinates me.
He's suggesting East Asians have more of a tendency to reject logically valid arguments *if* the conclusions are implausible. Why? And does it matter? (Rhetorical questions, you don't need to answer!)
Popper's point is that validity doesn't yield anything positive. You don't get any more out of a syllogism than you put in. It's tautological. Your conclusions can't tell you anything more than your assumptions *already* told you.
However, the importance of logic is in criticism.
"All swans are white."
How many white swans do we need to *validate* this claim? It can't be done. The claim cannot be validated. (Karl Popper and David Miller have a paper where they analytically demonstrate the impossibility of even *partial* support for a claim. A large part of Popper's work is concerned with probability in science, but I don't really want to go into that …)
Anyway, how many black swans do you need to invalidate the claim, "all swans are white"? One.
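Here's the same asymmetry as a tiny Python sketch (again my own illustration, not something out of Popper's texts): no finite pile of white swans establishes the universal claim, but a single black swan refutes it.

```python
# "All swans are white" is never established by confirming instances,
# but one counterexample is enough to refute it.
def refuted(observations):
    """True as soon as any observed swan is not white."""
    return any(colour != "white" for colour in observations)

print(refuted(["white"] * 1_000_000))              # False: a million confirmations, still only unrefuted
print(refuted(["white"] * 1_000_000 + ["black"]))  # True: one black swan settles it
```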
Any claim of support at all, any claim to have a sound argument is question begging. So anyone who tries to provide positive reasons for a claim is making an error. But it's not a problem for someone who is only interested in criticism.
Then what is science all about? The paper I linked to above by Jarvie is a good place to start in answering such a question.
As far as psychology goes, Popper's claim is that what is true in logic is true in psychology. So if you don't get the logic right in the first place … this might carry over into errors in the psychology as well.
If I'm not mistaken I think Kahneman has been influenced by the work of Nassim Taleb, author of _The Black Swan_. The title of Taleb's book, of course, is in deference to Karl Popper, as Taleb considers himself a student of Popper.
If that's the case, you might only be getting a watered down, muddied version of the real thing here. I suggest going to the source, Karl Popper. :)
1. To repeat, Kahneman is a psychologist, not a philosopher. Kahneman has little to add to what is essentially the back story to the scientific method; Popper does little to help us understand how human beings behave. If you want to read something useful, read Kahneman. If you want to read something that takes a look at the roots of the understanding process itself, read Popper.
2. Most of Kahneman's body of work precedes Taleb's. If anyone is influencing the other, it's the other way around.
3. If you're not interested in the behavioral sciences, you should stick to Popper, Saussure, and whoever else is to your liking. If you are, Kahneman is a good read. Nothing more, nothing less.
This has to be my last shot at this because my holiday is ending. I did spend some time both reading as much of Kahneman's book as I could and searching the Internet to see whether I was alone in my criticism.
I found Kahneman's book to be much worse than I anticipated. He continuously engages in the very types of fallacies he argues against. He sees his theories being confirmed everywhere. It's sad, really. I'm sure his actual papers are much better than his own popularization of his views. If you look at the 1-star reviews at Amazon, some of them really hit on this point and are quite useful:
http://www.amazon.com/product-reviews/0374275637/ref=cm_cr_pr_hist_1?filterBy=addOneStar
The thing is, as far as very specific claims about this or that bias, my own bias would lead me to think that Kahneman's claims are mostly correct. I just presume bias is rampant, but that institutional safeguards are primarily what prevent this from being a major problem. This would take a lot of explaining, but one example of such an institution would be a free press. On an even deeper level the institution of language itself helps prevent subjective error. If an idea is verbalized, it can be critiqued by another and found wanting.
That is, in some ways my own bias, as it were, is toward believing that cognitive illusions not only exist, but are quite rampant.
However, in looking at this issue more carefully, I realize that as far as probability goes, Kahneman's claims are further off than even I would have guessed.
The crux of the issue is this: what kind of biases are we talking about? We are talking about decisions regarding the truth. People don't correctly understand the truth because a bias or "cognitive illusion" gets in the way.
This suggests that to find this bias, we have to first *know* the truth, ourselves. And there's the rub. Who is to say we have that truth?
A lot of Kahneman's most famous work takes place in the area of probability, where he feels certain people fail to intuitively understand certain ideas about probability. But in order to test this, he has to know what ideas are *correct* in the first place.
When it comes to probability one can offer a subjective interpretation or an objective interpretation. In a subjective interpretation, probability is in the mind. It's related to a lack of knowledge. In an objective interpretation, it's related to some objective property existing independent of mind. This is an old debate not likely to be resolved soon. Even within each group, there are many interpretations.
Gerd Gigerenzer, who is a behavioral psychologist – same as Kahneman – was able to take a lot of Kahneman's demonstrations of bias and show that when people were made aware of the epistemological assumptions first, the bias disappeared. Or more correctly, people tend to apply a frequentist interpretation of probability (an objective approach), while Kahneman's questions always imply an almost Bayesian interpretation (a subjective approach). When this is accounted for, people at least appear to be far less prone toward error. This suggests that if an objective interpretation of probability is correct (I think it is), people are far less intuitively off than Kahneman's studies portray them as.
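To make the framing point concrete, here's a sketch of the sort of base-rate problem that's at issue. The numbers are hypothetical, chosen by me, not taken from Kahneman's or Gigerenzer's actual materials; the arithmetic comes out identical either way, and the argument is only about which presentation people find intuitive.

```python
# The same question posed two ways: as single-event probabilities (the format
# Kahneman's problems tend to use) and as natural frequencies (the format
# Gigerenzer argues people handle far better).
base_rate   = 0.01   # 1% of people have the condition
sensitivity = 0.90   # P(positive test | condition)
false_pos   = 0.05   # P(positive test | no condition)

# Probability format: Bayes' rule on single-event probabilities.
p_positive = base_rate * sensitivity + (1 - base_rate) * false_pos
p_condition_given_positive = base_rate * sensitivity / p_positive

# Frequency format: imagine 1,000 concrete people and just count.
people      = 1000
have_it     = people * base_rate              # 10 people have the condition
true_pos    = have_it * sensitivity           # 9 of them test positive
false_alarm = (people - have_it) * false_pos  # about 50 healthy people also test positive
freq_answer = true_pos / (true_pos + false_alarm)

print(round(p_condition_given_positive, 3), round(freq_answer, 3))  # both about 0.154
```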
So again, this is why I keep emphasizing to you, this is *not* just a matter of psychology, but of epistemology – and the epistemology has to come first.
In my own opinion Gigerenzer delivers a devastating critique of Kahneman's work. You can read the paper yourself here:
http://library.mpib-berlin.mpg.de/ft/gg/GG_How_1991.pdf
Finally, another issue that needs to be explored is the "Wason selection task"; you can read about it here:
http://en.wikipedia.org/wiki/Wason_selection_task
People given a difficult abstract problem will often fail at the task, but given the same problem in a recognizable context they easily get the right answer. I'm guessing this would also run up against some of Kahneman's conclusions ...
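For the curious, the logic of the abstract version fits in a few lines of Python. The four-card set below is the standard textbook one, not something I'm quoting from Kahneman; the rule under test is "if a card shows a vowel on one side, it has an even number on the other," and a card needs turning only if its hidden side could falsify that rule.

```python
# Which cards must be turned over to test the rule?  Only those whose hidden
# side could contradict it: a visible vowel might hide an odd number, and a
# visible odd number might hide a vowel.  Consonants and even numbers can
# hide nothing damaging, yet most people turn the even number and skip the odd one.
def is_vowel(face):
    return isinstance(face, str) and face.upper() in "AEIOU"

def is_odd(face):
    return isinstance(face, int) and face % 2 == 1

def must_turn(visible_face):
    return is_vowel(visible_face) or is_odd(visible_face)

cards = ["E", "K", 4, 7]
print([c for c in cards if must_turn(c)])   # ['E', 7]
```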
By the way, in your recent post where you give Kahneman's quote again, you fall into the very same error you claim others to be making. You don't seem to properly understand the role of evidence in regard to claims. This is unfortunate. I would explain this, but I've gone on too long already.
Matt:
I am nowhere near as well read as you are on Kahneman, Ariely, Popper, and the like, not to mention Gigerenzer, whom I hadn’t even heard of, so most of what you write passes over me like a Vince Carter dunk (or once would have). With that caveat…
Gigerenzer’s beef with Kahneman—I may be mistaken—seems to be that the latter’s work (and that of likeminded people) is nothing but a bundle of rules of thumb. If, as in Ariely’s words, we are “predictably” irrational, then shouldn’t there be a single meta-rule or an interlocking yet non-overlapping set of rules from which all else flows? Now, I may be casting my own bias on Gigerenzer’s thoughts, since that also happened to be my one reservation with Ariely.
However, my thoughts do not go much beyond that, since I like to think small and come up with small answers. Somewhere between a metaphor and a parable lies the explanation: if I were a civil engineer, I wouldn’t try to go back to the general theory of relativity and quantum theory every time I designed a bridge or a dam.
Most of the time, this approach has been good enough for me. So, if Gigerenzer claims that Kahneman’s assertion that I quoted is specifically false, then that would be interesting, counterintuitive as it would be. But if his complaint is that it is merely a heuristic, a bastard of an algorithm, and of limited use at that, so be it. Either way, we ignore it at our peril in our everyday lives.
The quote you give from Kahneman is clumsy. I'm sure when Kahneman writes research papers, he's more careful.
Look at the context of the quote. He's talking about logical syllogisms. Syllogisms are either valid or invalid. (The word truth is avoided because it's too confusing in this context.)
For example, this is a *valid* syllogism:
All swans are white.
Alf is a swan.
Therefore, Alf is white.
If my assumptions are correct, then my conclusion can logically be deduced. The reality is that not all swans are white, so my conclusion may not be true at all, but my logic here is *valid*.
But *support* usually looks more like this:
Alf is a swan and white.
George is a swan and white.
…
Greta is a swan and white.
Therefore, all swans are white.
Support is referred to when we talk about inductive reasoning. Of course, as a student of Popper, I totally reject inductive reasoning. That is *not* a popular view.
I can't imagine Kahneman doesn't know all this. I'm sure he knows this stuff like the back of his hand. He has to. However, this book is a popularization, so he's not being so careful. He didn't know you'd quote him on it, and then I, a die-hard Popperian who lives in Japan and is interested in Japanese politics, would catch you on it.
But the quote should more properly read like this:
** …when people believe a conclusion is true, they are very likely to accept the conclusions of an invalid syllogism.**
I don't see this as a big deal, because there's usually someone around to point out the invalidity of the logic. There's simply very little controversy over points like this, so while people do regularly make errors, they get corrected right away.
Someone unfamiliar with syllogisms might easily get lost anyway. What if my syllogism were like this:
All bears are brown.
Bruce is a bear.
Therefore, Bruce is brown.
That's perfectly valid logic, but some people will reject it because they'll think, "but not all bears are brown." Some people don't bother with the logic one way or another; they just home in on the conclusion.
Clearly context and even culture are relevant here. If Richard Nisbett is correct, East Asians are more averse to having points demonstrated via logic. More often than Westerners they reject *valid* logic when they *think* the conclusion is false. But again, there's this ambiguity issue. My guess is East Asians can do logic every bit as well as Westerners, but they more easily misinterpret what's expected of them by the questioner.
My *guess* is that if the questions were clarified in such a manner as to tell the students, hey, be careful to focus on the validity of the argument and *not* on its conclusion, that's what's at stake here … probably these issues would actually go away to some extent. Even logicians get confused, which is why they refer to validity and not truth.
This is what Gigerenzer showed as far as probability. When the ambiguous aspect of the question was removed, people performed much better on the question. Gigerenzer wants to say, hey Kahneman, you are making us look like intuitive fools, but we're actually not so bad.
Having said all this, I readily admit that confirmation bias is real; I do it all the time despite myself. But it's a separate issue …
There's heaps of other things to say here … but it's late. I reserve the right to be wrong about everything I say ... ;)
>If, as in Ariely’s words, we are “predictably” irrational, then shouldn’t there be a single meta-rule or an interlocking yet non-overlapping set of rules from which all else flows?
I hate to keep throwing books at you, but I nominate universal selection theory … Gary Cziko, yet another behavioral/educational psychologist, has what I take to be a really important book, which is _Without Miracles: Universal Selection Theory and the Second Darwinian Revolution_.
That's one of my all time favorite books. He puts so much information into that book … and explains everything so well.
It's on-line for free here:
http://faculty.education.illinois.edu/g-cziko/wm/
Here's a good paper on how to reason like a Popperian:
"Do We Reason When We Think We Reason, or Do We Think?" by David Miller
http://www2.warwick.ac.uk/fac/soc/philosophy/people/associates/miller/lfd-.pdf
Not that anyone should want to do that but me ...
BTW, if you get to the part where Kahneman attacks the Chicago school of economics for arguing human beings are rational ... I could pull my hair out over that. I admit it, I'm biased toward the free market. But the most powerful arguments don't come from the Chicago school; they come from the *old* classical liberal school. They are premised on the idea that human beings are irrational idiots.
Okay, I might be exaggerating a little bit, but the best arguments for the free market are traditionally that human beings are irrational and market institutions impose rationality on irrational man.
Instead of dealing with this very old and venerable tradition, Kahneman picks on a very recent tradition, the Chicago school. Why? Perhaps because they are easy pickings. Is it his own bias kicking in? Perhaps he just wants to be right?
Even if you 100% disagree, F. A. Hayek's essay at the beginning of _Individualism and Economic Order_ clearly shows that it is traditionally the free-market advocates who argue men are irrational scoundrels, not the converse. The converse is a relatively recent development, one Hayek roundly criticizes at a later point in that book. Just read the first few pages of the first essay …
http://mises.org/books/individualismandeconomicorder.pdf