Journalism or Left-Wing Activism? The Globe and the Peterson Affair

Back in November, Professor Jordan Peterson took part in a debate on free speech at the University of Toronto. The Globe and Mail’s Simona Chiose reported on the event. Here are the first words of her article: “The claims made by University of Toronto psychology professor Jordan Peterson on the impact of anti-discrimination legislation are scientifically and legally wrong.”

Here’s a question: whose side do you think Chiose (and the Globe) are on? That is, is the article for or against the position taken by Dr. Peterson? Do you think that that opening sentence represents a sincere attempt to avoid bias while writing a report on a controversial event, or does it rather push the reader in the direction of a particular point of view? (If you click on the link, you’ll see that the sentence actually ends with a subordinate clause: “… a debate at the Toronto university heard on Saturday.” This does mitigate the effect of the whole, but the effect is there nonetheless.)

I was annoyed at the time by the article, and I’ve just had reason to think of it again, for last week the same Ms. Chiose wrote another article on Dr. Peterson, this one much longer and more biased than that earlier one (quotations below are from this article unless otherwise noted). This longer piece omits relevant facts and emphasises marginal material so as to create a distorted picture, one that effectively maligns Dr. Peterson’s character. In what follows, I’m going to go through several examples of this tendency (though not all), to show just how far the article falls short of fair and unbiased reporting. Has Ms. Chiose – and with her, the Globe – abandoned reporting and instead plunged headlong into activism? After considering the points I make below, I hope readers will be able to decide for themselves.

The main force of Chiose’s article comes from the connection it attempts to make between Dr. Peterson and certain internet trolls (this includes the ‘KEK’ primer at the end), so let us start with that. These trolls, we are told, say vicious and (perhaps) threatening things about his critics. Chiose tells us that “the existence of this parallel, online space is hardly mentioned in free speech debates or arises only in lateral mentions of concerns about ‘safety on campus.’” What she never makes explicit is why this parallel space should be mentioned in free speech debates. No doubt it is true that there are too many trolls online posting childish, vicious and (possibly) even threatening things; no doubt there is often “misogynist and dehumanizing invective” online. This is awful, but why should it have any bearing on the question of free speech or safety on campus? No evidence has been offered that the trolls in question are on the UofT campus. Neither the UofT, nor the government of Ontario, nor the government of Canada – and certainly not Dr. Peterson – can control what people in “Shanghai and Berlin, St. Petersburg and Pune… and San Francisco” write online. Why on earth should the rantings of some loser in his parents’ basement in, say, Colorado, affect what people may and may not say on a university campus in Toronto? Yes, online trolls can be vile, and it would be nice to be able to treat the problem they present, but they seem to me a red herring when considering freedom of speech on campus. Indeed, they seem to me entirely irrelevant to the matter at the heart of the Peterson affair, namely, these new pronouns, and whether the government or any other institution has the right to force people to use them.

There is one respect, however, in which Ms. Chiose’s focus on internet trolls is not a red herring at all, but drives a point home with some force, and that is the realm of insinuation, of guilt by association. Obviously Dr. Peterson cannot control people typing away in distant cities around the globe, but look at how rotten (or crazy) those people are, those people who support Dr. Peterson – doesn’t that make him look like a rotten person as well? If this thought can, strictly speaking, have no bearing on the matter of pronouns and free speech, still it can have considerable rhetorical force. Empty rhetoric can be effective – in this case, it might encourage those who shout Dr. Peterson down (see below), or those who want a reason to avoid having to confront the force of his arguments, to think that they’re doing the right thing.

It is not only by means of the connection to online trolls, however, that Chiose’s piece paints its picture of Dr. Peterson. As an introduction to her other methods, let us consider some words from Dr. Peterson. At a recent talk at Harvard (video here, the relevant bit is from 6:04) he discusses the behaviour of the UofT’s administration: “the university said that they had received many letters accusing me of making the University of Toronto an unsafe space… they failed to note at the same time that they had received hundreds – perhaps thousands – of letters, as well as a 10,000 signature petition, supporting my stance.” He considers this a falsification by omission.

Chiose’s article suffers from a similar failing. In general, the claims of the UofT administration, as well as accusations against Dr. Peterson, are treated as reliable. Facts that might throw a different, and more positive light upon Dr. Peterson or his cause, repeatedly go unmentioned. For example, we do not hear of the letters of support that he claims to have received from trans-people, who, he says, express their frustration at the fact that a group of activists claim (wrongly) to represent them, and have in fact made their lives more difficult (see here from about 12:43). Is Dr. Peterson making all this up? I don’t know; Chiose might have made an attempt to verify his claims. Instead, the reader is simply left unaware of the possibility of another side to this story.

Consider this sentence as well: “after his application for a federal research grant was turned down this year, Rebel Media… started a crowdfunding effort on its own site on the professor’s behalf.” It would have been worth mentioning at this point that Dr. Peterson claims that his research has been continually highly ranked, and successfully funded, for more than two decades – until this particular grant proposal. If true, it is remarkable that he suddenly failed to get funding the moment he declared himself against the forced use of certain pronouns. This seems to me very relevant indeed to the question of free speech on campus, unlike the internet trolls on whom Chiose lavishes so much attention. Most importantly, we once again see that facts that might have put Dr. Peterson’s cause in a more sympathetic light have been left out.

Consider also the following: “‘no one wanted him to be fired or put in jail…,’ said Cassandra Williams, who was the equity officer for the University of Toronto Students Union throughout the crisis.” Of course, if you have heard Dr. Peterson himself talk about this (see here from about 6:45), you get a different impression: before you fire a professor, there have to be warning letters. He received one, then another. His response was to go for maximum publicity, and he has since been all over YouTube, talking with Gad Saad, with Dave Rubin, with Joe Rogan (among many others), getting millions of hits, and becoming something of an internet celebrity in the process. Having followed this strategy, Peterson was not fired. A teacher in BC, who wanted to remain anonymous, was not so lucky, and was fired. This anonymous teacher apparently contacted Dr. Peterson, who advised him, in vain, to follow the publicity route as a means of protection. Did publicity save Dr. Peterson’s job? I don’t know. Maybe Ms. Williams, quoted above, is being entirely accurate. I note only that there was another side to this, and that Ms. Chiose (again) hasn’t mentioned it.

I don’t intend to enumerate every failing in the article, but there is one final omission that is particularly serious. Have you seen the videos of Dr. Peterson getting shouted down at McMaster University? Watch this one (from about 3:15). I don’t doubt that there are horrible internet trolls who have taken Dr. Peterson’s side, and behaved reprehensibly in the process (though I also don’t see that there’s much he can do about that – indeed, any issue involving intense disagreement these days is going to be accompanied by people saying vile things online). It is, however, grossly misleading to suggest that this is a problem on only one side of this debate. Indeed, such vile behaviour from so many of Dr. Peterson’s opponents might go some way to explain the reprehensible behaviour of some of his supporters.

Considered in the light of what happened at McMaster, it is simply extraordinary that the article’s subtitle talks of “the fight for ‘free speech,’” putting “free speech” in quotation marks as if this were somehow a dishonest pretense and not the actual substance of Dr. Peterson’s concern. How does that McMaster video not prove that free speech is genuinely at issue here? And, of course, all this takes place in the context of similar episodes elsewhere – go look up the cases of Charles Murray, of Heather Macdonald, or of Bret Weinstein!

It is not only errors of omission that give us a one-sided picture. Both of Chiose’s articles mentioned above contain a rather marvelous example of how language can be chosen so as to suggest that a certain side of a disagreement is at fault, without actually saying as much. In that article from November, we hear of “weeks of controversy on campus over Dr. Peterson’s refusal to use non-gender-specific pronouns when addressing students.” We might instead have heard of “an attempt by activists and the university administration to force the active use of certain words on other people,” but that phrasing would fail to place the onus for the whole business on Dr. Peterson: it is he, not the activists, who is to be seen as the cause of controversy here, even though he was reacting to them (and see the longer article, which I have been focusing on, for a similar choice of phrase in the first sentence). There are plenty of other examples of this sort of thing; try going through the article yourself and looking for them.

Clearly, these are controversial matters, productive of vehement passions, and for that reason, potentially dangerous. Ms. Chiose tells us of people on her side of the debate – and I think she has taken a side – who were willing to speak only on condition of anonymity, and this is no less true of people on the other side (such as me – this may shock you, but ‘Babbington’ is not my real name). In such a situation, surely we ought to aim at ratcheting down the tension, and to proceed as dispassionately as possible, so as to consider the case made by either side on its own merits. Ms. Chiose’s piece does something very different: it creates the impression that Dr. Peterson is a bad person. We are not provided with evidence or arguments that correct intellectual mistakes he has made; rather, we are given the impression that he has failed morally, as a person. If someone has failed intellectually, we reason with him; if he has failed morally, we might be inclined to proceed differently.

I think Simona Chiose should have the right to write opinion pieces on this and other matters, but they should be labelled as such. It is one thing to take a side explicitly and argue its case openly, but another altogether to produce a one-sided account of a controversy while writing what appears on the surface to be an objective piece of reporting (this article is found online under “Home->News->National->Education”). I think the Globe and Mail dropped the ball here. As someone originally from the left (in Canada I have only ever voted for the NDP), I have to say this has made the notion of a “liberal media,” which reports selectively according to a preferred narrative, look rather more plausible than it previously had.

________________

And now for a few words about the larger controversy…

As I have watched this Peterson Affair unfold in recent months, I have repeatedly been impressed by his ability to make his case. For example, watch this video from about 7:30 to at least 19:00: Peterson is faced here with a questioner, who tries to push the other side of the argument, but the questioner is no match for him at all. On the matter of gender, see 24:46 to 28:55 of the same video. After watching him here, it should be obvious why he is not afraid to debate on this subject: he can present his position in a devastatingly effective, and utterly convincing, manner.

I have not been impressed by the arguments of the other side. I would like to see an answer to the points made in the video sections I have just referenced. Unfortunately, Dr. Peterson’s opponents seem to see any and all disagreement as a form of bigotry; overwhelmingly, his opponents focus on ad hominem attacks and scarcely at all – if at all – on meeting the substance of his arguments. (Which side of that divide do you think Simona Chiose’s article falls on?) Why don’t they just put the personal attacks to one side – heck, we can assume for the sake of argument that he’s an awful person – and refute the substantive points he has made? Why won’t they debate him? I believe this is because they know they cannot answer his arguments. (I am aware of only one debate that has taken place, and it was not impressive for the anti-Peterson side: I thought Christie Blatchford gave a good overview – and note how her piece is marked “COMMENT,” indicating it contains the personal views of its author.)

A final matter: why am I spending so much energy on this? Why, for that matter, are so many people giving him so much money (as of this writing, he is receiving over $42k per month)? Simona Chiose might like to think that we Peterson supporters are driven by dark motives, but I can offer another reason. Many of us are dismayed by the replacement of a culture of argument and open discussion with a culture of threats and intimidation, with the constant use of accusations of bigotry as a cudgel to silence dissent. Many of us think Dr. Peterson is correct in his warnings about the totalitarian – yes, totalitarian – nature of the way in which all of this is moving.

In a healthy society, universities are not merely repositories of knowledge and practical skills, but also have a civic function: they produce in their students an attitude of critical inquiry and the habit of frank and open debate on matters even of passionate controversy. As those disgraceful, idiot McMaster students so clearly showed, universities today are producing very different attitudes and habits. Newspapers, too, have a civic function, fostering informed debate on contentious issues as part of an open society – or at least they once had such a function.

As Dr. Peterson has said (I can’t find his exact words), there are, in the end, two ways of dealing with people: either you talk things out, or you use force. University administrations that fail to act against students who behave on campus in a thuggish manner, and the publications that cheer them on, are effectively deciding that we are to have a more violent political culture in the future.

Where we once might have debated contentious matters in a frank and open manner, today it seems that a group of activists decides what the new orthodoxy is to be, and anyone who tries to argue otherwise is denounced and publicly shamed as a bigot. This needs to change, and that, in the end, is why Dr. Peterson is so important. It’s also why I think he deserves every penny he’s receiving. Certainly he has endured appalling abuse and pressure with exceptional grace and goodwill (far more than I would have mustered), but much more importantly, he has stepped in to defend, as best he can, the civic good that universities are now failing to uphold as institutions, and that’s something worth far more than he has so far received. That, above all, is why he has received so much support.


A Quick Thought on Faith

The last 4-5 years have seen my views on many things change dramatically. I used to be well to the left on even the Canadian political spectrum, and now I find myself becoming more conservative, to the point that I could even contemplate voting Republican (once they’re done with Trump). This has come about in response both to things I’ve read and to events (more posts to come about that). I’ve found that as my views change, I occasionally find myself struck, when I hear a view I now strongly reject, by the fact that I used to hold it as an obvious truth.

So it was when I heard Jerry Coyne, the author of a popular (and excellent) blog (Why Evolution is True), talk about religion on the Rubin Report (watch here and here). At one point (I forget where exactly), Coyne suggests that religious faith is problematic beyond the specific content of any particular religious doctrine, because it gets people in the habit of simply believing things without evidence. That is, there is something inherently harmful about the fact of faith in itself: it is a bad intellectual habit. A good intellectual habit, on this view, would seem to be to get as far from faith as you can, basing your views, as much as possible, on evidence.

Wow – I used to believe that, with some vehemence. And wow: I sure don’t believe it any more. Not at all.

I can think of two kinds of faith that we need and use on a constant basis. Without both, our lives would simply not be possible. The first is central to the possibility of a scientific understanding of the world – that is, the sort of understanding that Coyne would hold up as an alternative to religion. The second deals with our relationship with other people; I wonder if this second kind of faith doesn’t lead on to religion.

The first kind of faith is suggested in an article by Theodore Dalrymple, in which he responds – convincingly, I thought – to the “new atheists.” He makes the point succinctly: “if I questioned whether George Washington died in 1799, I could spend a lifetime trying to prove it and find myself still, at the end of my efforts, having to make a leap, or perhaps several leaps, of faith in order to believe the rather banal fact that I had set out to prove.” Of course, the point can be generalised to include the whole of science. I believe global warming is a major problem, but if a well-educated climate science denier were to debate me, I would be reduced to appeals to authority very quickly indeed. The same would happen to almost all scientists aside from the climate specialists (although almost all real scientists would last longer than me in such arguments). The same will be true of any other field of knowledge: specialists might hold their own, but non-specialists must follow the lead of authorities.

Of course, it is probably true for most of what is understood about the world today that if we investigated any particular matter, we could eliminate most of our faith on the matter. However, for all actual people, the reality is that most of what we understand about the world is in fact a matter of faith. Modern science allows us to put our faith in sound authorities, but we do need that authority. The idea of eliminating faith and replacing it with reason is utopian, and like most utopian ideals, when it is forced upon the real world, it becomes a source of harm as well as benefit (in this case, the utopian ideal of eliminating faith obscures from us the reality of what is actually going on, for a start).

There is a second sort of faith, and this one has to do with people: we cannot live without trust. That is, faith in other people is necessary to a genuinely human life. If you couldn’t trust anyone else, you couldn’t even walk down the street – after all, you might get stabbed. More than that, a great deal of what gives depth and meaning to life comes from trusting other people. Imagine if your interactions with other people consisted of the absolute minimum necessary to sustain your life: you would trust other people enough to walk down the street, to work, and to conduct basic economic transactions. Nothing more. No pleasures of conversation, in which you might reveal something of yourself, no substantial or lasting relationships with others. Most people would find life intolerable without substantial (and thus trusting) relationships with other people, and I don’t think it is reasonable to see in such a life the pinnacle of human flourishing. A life worth living requires faith in other people – indeed, it is scarcely an exaggeration to say that life is more worth living to the extent that we can rightly make use of such faith.

Accordingly, as in understanding the world, so too in our dealings with other people: the trick is not to eliminate faith, but rather to learn where it is appropriate and where not. And it is not hard to see how the matter of faith in other people might lead on to religion: our ability to have faith in other people doesn’t merely increase our chances of survival, but makes life far richer, far worthier of living, than it otherwise might be. What if religious faith is like that? What if we who are not religious are in a situation similar to that of people who miss out on so much of what life has to offer because they never trust anyone else, and thus never have any substantial relationships with other people?

I’m not religious, but sometimes I wonder if I should be.

Why I Subscribed to the National Post

[Yesterday was the second anniversary of the Charlie Hebdo massacre. That means I’ve had a subscription to the National Post for almost two years. I wrote this a year ago, but now seems a good moment to put it up here.]

I grew up reading The Globe and Mail. Sometime in the mid- to late eighties, I began copying my parents and reading the paper before heading off to school. Often I would return to it in the evenings. In university, my grandmother gave me a subscription, and I kept on reading. I can still remember certain headlines, certain pictures, and above all, certain editorial cartoons (having now read papers in seven countries, I think the Globe’s Brian Gable is the best anywhere). Having grown up with it, I have long had a certain sentimental fondness for the paper. I was, and would remain, a Globe reader for life.

At some point in the nineties, I became aware that another national newspaper was going to start up soon. Bankrolled by the owners of a very large chunk of the Canadian press, it was to be called The National Post. It would provide a conservative alternative to the Globe, and would have all the advantages that its owners’ deep pockets could provide. From the start, I regarded it with a kind of horror, not only because I was appalled by its conservative standpoint, but also because I worried about the harm it might do to the paper I’d grown up with. I remember taking people to task for subscribing to the Post; on at least one occasion I went without the day’s news because only the Post was available (and the fact that only it was available on that occasion somehow made me feel that evil and dishonourable machinations were at work). I was, after all, a Globe reader, for life.

This past year something changed that. In the aftermath of the Charlie Hebdo killings, newspapers everywhere were confronted with a question: would they print the cartoons? In France and in Germany, the focus seemed to be on the matter of principle: freedom itself, the bedrock of our society – indeed, of our civilisation – was under attack, and now was the time to stand up for it. In Germany it seemed that every paper reprinted some of the cartoons (if not always the most offensive). One paper had an entire pull-out section with dozens of reprints of Charlie Hebdo covers, some featuring Mohammed, some not. Another reprinted a Hebdo cartoon of Mohammed, but pasted him into their own background, so that he was taking a bath in blood. In France, while so many papers reprinted the cartoons, it was noted in several that virtually the entire press in les pays anglo-saxons (i.e., the English-speaking world) was not printing anything from Charlie Hebdo.

With a few (generally right-wing) exceptions, in Canada, the US and Britain, people who wanted to know exactly what all the fuss was about had to turn to the internet. This situation was not necessarily indefensible. There are two main reasons why editors might have preferred not to publish the cartoons. The first was summed up memorably by Boris Johnson: “about 10 years ago, the whole Danish cartoon controversy blew up – and I remember distinctly concluding that I would never have published them in The Spectator, which I edited, not just because they were gratuitously inflammatory, but because I didn’t see how I could justify my decision to the widows and orphans of my staff, in the event of an attack on our offices.” In other words, we might be moved by fear, perhaps rightly. The second reason is the desire to respect others, and the further pragmatic reflection that if we want to live in peace with others, a posture of conciliation will help us do so. This standpoint sees publishing the cartoons not only as wrong in itself, by virtue of its failure to live up to the civilised ideal of polite regard for others, but also as undermining our hope of peace in the future by provoking further violent acts.

The Globe, along with much of Canada’s English-speaking press (and in contrast to much of the French-speaking press), did not print the cartoons. On January 8th, 2015, an editorial explained why. In addition to an argument to the effect that actually seeing the cartoons was not necessary to understand the story, the reason came down to a matter of principle: “we hadn’t published the cartoons before the slaughter and our editorial position remains the same today.” The decision attracted some criticism from readers, particularly online. Three days later came a second justification of the paper’s decision. It was the worst editorial I have ever read.

This second editorial seemed to me not to be altogether consistent. On the one hand, it continued the principled stand of its predecessor: “we made the same decision in 2006 after a Danish magazine was threatened by extremists for publishing cartoons depicting Mohammed as a terrorist. Both decisions were made in accordance with the newspaper’s beliefs and values.” That is, the decision was a matter of maintaining a consistent principle, of standing firm and refusing to be moved by the violence of current events. To say this is precisely to say that the Globe was not moved in this decision by fear.

The editorial, however, contained remarks that seemed to point in another direction: the Globe’s critics, we were told, “have nothing on the line as they gleefully accuse the media of heinous shortcomings in the emotional aftermath of the murders of most of Charlie Hebdo’s senior editorial masthead.” To say that your critics “have nothing on the line” is to draw attention, albeit indirectly, to the fact that you, by contrast, do have something on the line. That is, the appeal here is “you may be criticising us, but we’re the ones who face violent consequences if we publish the cartoons and things go badly. Put yourself in our shoes, and see how that makes you feel (and just now we’re going to mention that most of Charlie Hebdo’s senior editors were killed).” This line is continued a moment later, when it is said of critics, “how incredibly easy. How crass and pompous.” That is, incredibly easy for you… This line of thought invites us to consider: if you were an editor thinking about publishing those cartoons in that context, how would you feel? There’s only one answer: afraid. The appeal is unmistakably to fear. To act out of fear is precisely not to stand on principle.

So in the middle of a defense of the principled and consistent stand taken by the paper’s editors, we are pointed at something quite different from principle and consistency: fear. The argument is something like this: “we stand unflinchingly on principle, unmoved by fear, and anyway, we’re afraid – wouldn’t you be afraid?”

I don’t pass judgment on the editors at the Globe and Mail for being afraid. As they remind me, I don’t have the right to: I’m not running a newspaper; I’m not in their shoes. What I absolutely do judge them on is the terrible job they did of explaining themselves, on the back-handed way in which they introduced fear into the argument, stooping to name-calling at the very moment they were showing that their appeal to principle was nonsense. Because here’s the thing: you can’t stand unflinchingly on principle and act out of fear. It’s either one or the other.

What hit me like a ton of bricks, however, was the fact that the episode brought another historical era to mind. High-sounding declarations of principle on the surface, while an unspoken fear lies underneath, driving behaviour? I had seen this before: this was appeasement.

The word ‘appeasement,’ of course, refers to that period in the thirties, when the Western democracies wanted so desperately to avoid a second world war, and so failed to stand up to Hitler as he went from a cartoon character to a terrifying and almost unstoppable force. Like the editors of the Globe, of course, nobody wants to be seen to be moved by fear – or to see themselves this way – so when we read the words of the appeasers of the thirties, we often find appeals to high-sounding principles. The need for conciliation and trust, the importance of admitting our own mistakes and injustices, the great virtue of universal peace – these are the sorts of appeals we find again and again. You can also find a rather desperate willingness to trust this new Hitler fellow. After all, if you’re intent on building a world on trust rather than violence, you have to be able to trust the other guy. People like Churchill who warned about and criticised Hitler were put under tremendous pressure to stop doing so, and were denied platforms in many places (wouldn’t want to provoke another war).

The appeasement in the thirties was not only a moral failure. It also made the world far less safe – indeed, the appeasers can be said to have caused the second world war in a very real sense. There was never anyone easier to stop than Hitler.

That Globe editorial caused a paradigm shift for me. Suddenly appeasement was not a far-off phenomenon from the pages of history, like the Spanish armada or the Athenian empire. Suddenly it was a defining characteristic of our own times. In addition, I found myself conscious of something I had never felt in my life, almost a sensation in its intensity: I felt ashamed to be Canadian. I was in Germany, where the press had decided, virtually as one, to stand for something, if only for a moment. The French were doing the same thing. And what did the English-speaking world, Canada, and above all, the paper I had grown up with do? They chose to be moved by fear, all the while making high-sounding declarations of principle. They chose appeasement. The National Post chose another path, so suddenly subscribing to it seemed positively like a duty. It also began to occur to me that having this other national newspaper, with a substantially different point of view, might not be such a bad thing after all.

The path of appeasement was not the only one available. Consider what Dan Hodges, at the Daily Telegraph, wrote: “just before I started this piece I was about to tweet the picture of the cartoon of the prophet Muhammad published by the satirical magazine Charlie Hebdo. I was going to do it ‘in solidarity’. And then I stopped. I stopped because I was scared.” That is a contribution, because it gets the fact of fear out there. Hodges can’t feel good about himself in the way that the Globe editors can, because he admits that he’s not acting on the basis of some great principle, he admits he’s acting from fear. But the first step in dealing with fear is to admit it’s there, and so Dan Hodges performed a service by making it a matter of record that there is now a climate of fear, and that people are changing their behaviour because of it. The Globe didn’t even do that. I don’t think that the paper I grew up with would have made the same decision, but I’m quite certain they would have done a better job of explaining themselves.

As appeasement in the thirties made us all less safe, so too does it have the same effect today. We have given way: there now exists an unspoken ban on certain words and pictures where Islam is concerned. Charlie Hebdo is not going to draw the prophet in the future, and the same is true of the Jyllands-Posten. They can hardly be blamed; they’ve done their bit. But when even the freedom of speech fanatics aren’t going to do it, the rest of us certainly cannot.

And we need freedom of speech fanatics. It is the bedrock, the foundation, the sine qua non of everything else we value, of our way of life. For example, freedom of speech is prior to feminism, because feminism is only possible in a society in which basic cultural norms can be safely challenged. Notoriously, when Mary Wollstonecraft put her arguments to certain men in the 18th century, they just laughed at her. But consider what they did not do: they did not throw acid in her face, nor did they stone her or beat her to death. Since the turn of the century, we’ve become conscious of the fact that there do exist societies where such things happen. In the end, the difference comes down to freedom of speech. One could pile up similar examples at some length. (And see David Paxton’s piece on this whole business, which I thought made some good points. Also this: some good stuff on that blog.)

I suspect that the difference between the reaction of the English-speaking world and that of the Germans or the French is to be explained to some degree by a cultural difference: the French and the Germans tend to see the principle more readily, and to act on it; the English-speaking tendency is to focus more on the particular situation at hand. From the latter perspective, one can see why it might not seem so important to print the pictures. After all, by doing so, one doesn’t seem to accomplish all that much, while the danger might be quite real. All the same, a matter of principle is at stake, and principles really do matter. As Thucydides has Pericles say, “this trifle contains the whole seal and trial of your resolution. If you give way, you will instantly have to meet some greater demand, as having been frightened into obedience in the first instance.”

Some Thoughts on the Use of Statistics in Ethical Matters

Some months ago, I was discussing Donald Trump’s election chances with a couple of friends, and I happened to mention a voter I had read about somewhere, a young man in California, who described himself as a feminist. He was concerned about the power of false rape allegations to destroy the lives of men, and was on that account leaning towards a vote for Trump. I said that while I didn’t sympathise with the vote, I certainly understood and sympathised with the concern behind it. One friend responded that such false accusations were surely statistically far less significant than real cases of the crime itself that go unpunished. The other friend agreed, and that seemed to settle the matter, at least for the two of them.

For me, well, it got me thinking. I have since found myself noticing how arguments that proceed by means of statistics are often used as though they provide a sort of final knock-down in ethical matters, unanswerable and definitive – and yet this is something they can almost never provide.

To see why, let’s indulge for a moment in a bit of philosophy. Most undergraduates who have done a philosophy course should have some idea about what utilitarianism is. We can define it as the principle that the greatest good should be done for the greatest number of people possible. Our undergraduates should also have some idea of a potential problem with utilitarianism: it is compatible with an acceptance of radical evil, provided, of course, that the evil is done to a minority. You could literally exterminate people while achieving the greatest good for the greatest number. (The responses of major utilitarian philosophers to this problem are something we can leave aside here.)

Now here’s why I bring this up: it seems to me that the sort of claims you can make with statistics are precisely the sort made by utilitarianism. Statistics give us a coarse overview of a subject; they tell us that something is the case on the whole, and for the most part. And to the extent that we confine ourselves to this view, we open up the possibility of radical evil, so long as it is done to a relatively small number of people. So if we’re going to commit ourselves to a statistical state of affairs as a desirable aim of policy or activity, we should think quite clearly about what we’re actually advocating.

In the case I began with, we are indeed dealing with a readiness to do real evil to innocent people in the name of achieving what is taken to be a statistically desirable result. Feminist activists often speak as if the purpose of the justice system is to bring about a certain proportion of convictions relative to the number of complaints made. That is, they claim that only a very small fraction of rape accusations result in convictions, and that this small proportion indicates a failure on the part of the justice system. The implicit aim is a system that produces convictions in some much larger proportion, say, 50% or 98% (though I have never seen an actual number given). There is no room in such calculations for the possibility that the lives of innocents might be utterly destroyed by baseless accusations. Indeed, that a callous indifference to the possibility of crushing innocent people is in fact at work here is confirmed by such well-known cases as the Duke lacrosse rape case, or the travesty of reporting that Rolling Stone committed concerning another such supposed crime. If we were to follow the statistical point of view to its logical endpoint, the proper question to ask is not about the guilt or innocence of the accused at all, but simply whether or not the appropriate number of guilty verdicts has been delivered so far relative to the overall number of accusations.

The purpose of a justice system worthy of the name, however, is not to produce a statistical result, but rather to answer a specific question: what is the truth of the matter in this particular case? Such an approach is not likely to produce the most pleasing proportion of guilty and innocent verdicts, and is likely to fall short of its goal regularly, but it allows the rights of accuser and accused to be balanced against one another. Perhaps most importantly, it keeps the justice system, as much as possible, from the active commission of injustice. The words primum nil nocere – “first, do no harm” – are supposed to apply to doctors, but they should be a principle of the justice system no less.
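To make the contrast concrete, here is a minimal sketch in Python. The numbers are invented purely for illustration (the share of well-founded accusations, the evidence scores, and the target conviction rate are all assumptions of mine, not real data, and this is not a formalisation of anyone’s actual proposal): a rule that aims at a target conviction rate must convict innocent people whenever the target exceeds the actual share of well-founded accusations, whereas a rule that asks only about the evidence in each particular case never consults the resulting proportion at all.

import random

random.seed(0)

# Hypothetical numbers, chosen only for illustration.
CASES = 1000
TRULY_GUILTY_RATE = 0.7        # assumed share of accusations that are well founded
TARGET_CONVICTION_RATE = 0.9   # the sort of proportion a purely statistical aim might demand

cases = []
for _ in range(CASES):
    guilty = random.random() < TRULY_GUILTY_RATE
    # crude stand-in for strength of evidence: usually higher when actually guilty
    evidence = random.gauss(0.8 if guilty else 0.3, 0.15)
    cases.append((guilty, evidence))

# Rule 1: aim at the statistic. Convict the top N cases by evidence, where N is
# fixed by the target rate, whether or not any individual case is actually proven.
n_target = int(TARGET_CONVICTION_RATE * CASES)
convicted_by_target = sorted(cases, key=lambda c: c[1], reverse=True)[:n_target]
innocents_1 = sum(1 for guilty, _ in convicted_by_target if not guilty)

# Rule 2: ask about each case. Convict only where the evidence in that case is strong.
THRESHOLD = 0.65
convicted_case_by_case = [c for c in cases if c[1] > THRESHOLD]
innocents_2 = sum(1 for guilty, _ in convicted_case_by_case if not guilty)

print("Statistical-target rule:", len(convicted_by_target), "convictions,",
      innocents_1, "of them innocent")
print("Case-by-case rule:      ", len(convicted_case_by_case), "convictions,",
      innocents_2, "of them innocent")

With these made-up numbers, the first rule is arithmetically forced to convict roughly two hundred innocent people, because only about seventy per cent of the accusations are well founded while ninety per cent must end in conviction; the second rule convicts only a handful of innocents, and no target proportion figures in its reasoning at all.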

Certainly statistics might inform and guide us in subsidiary ways, but only to the extent that we observe the more important principle of a system that does not itself do active harm to the innocent. When discussing ethical matters, then, we should always be cautious when confronted by arguments that proceed only on the basis of statistics.

Statistics are dangerous in other ways, too. They can give a seductive, but misleading impression of being simply factual; they can seem to clothe a particular view in the robes of science, giving it an apparently insuperable status in comparison with non-statistical arguments. (I’m reading a book right now that seems to suffer from this very failing; a blog post reviewing it will probably follow in the weeks to come.)

Of course, statistics can be massaged, and where crime is concerned it is clear that they often are massaged. Consider Rainer Wendt, federal head of the German Police Union: “if I, as a senior officer, want narcotics-related crime to go down in my city, then I send the officers responsible for it out to police [vehicle] traffic infringements. Then I promise you, that narcotics crime will go down – at least statistically.”

In fact, things may be far worse than Wendt’s words suggest. There is reason to believe that the arrival of statistics-based crime fighting has corrupted our justice system in a most fundamental manner. Here I will allow Theodore Dalrymple to speak, as he relates his wife’s attempt to report a crime:

She noticed some youths setting fire to the contents of a dumpster just outside our house, a fire that could easily have spread to cars parked nearby. She called the police.

“What do you expect us to do about it?” they asked.

“I expect you to come and arrest them,” she said.

The police regarded this as a bizarre and unreasonable expectation. They refused point-blank to send anyone. Of course, if they had promised to make every effort to come quickly but had arrived too late, or even not at all, my wife would have understood and been satisfied. But she was not satisfied with the idea that youths could set dangerous fires without arousing even the minimal interest of the police. Surely, some or all of the youths would conclude that they could do anything they liked, and move on to more serious crimes.

My wife then insisted that the police should at least place the crime on their records. Again, they refused. She remonstrated with them at length, and at considerable cost to her equanimity. At last, and with the greatest reluctance, they recorded the crime and gave her a reference number for it.

This was not the end of the matter. About 15 minutes later, a more senior policeman telephoned to upbraid her and tell her she had been wasting police time with her insistence on satisfaction in so trivial a matter. The police, apparently, had more important things to do than suppress arson. …

It is not difficult to guess the reason for the senior policeman’s anger. My wife had forced his men to record a crime that they had no intention whatever of even trying to solve (though, with due expedition, it was eminently soluble), and this record in turn meant the introduction of an unwanted breath of reality into the bogus statistics, the manufacture of which is now every British senior policeman’s principal task.

Once the job of the police is to fight crime statistics rather than crime, the possibility of almost infinite corruption has been opened up – indeed, a motivation for corruption has been provided.

As one reads other Dalrymple essays, one encounters similar episodes: police refusing to respond to crime, or even to record it – or, alternatively, responding to minor infractions, committed by individuals likely to respond to the authority of the police, rather than to violent crimes committed by people likely to respond in a violent manner (e.g., here or here or here).

Statistics, then, will not only fail in most cases to provide a decisive argument when considering ethical matters, they can also be the cause of great harm. Any insight they provide has to be balanced against direct contact with the problem in question. Localism – taking power away from massive bureaucracies and putting it in the hands of smaller communities – would no doubt be helpful. The more I read Dalrymple, and the more I read of the views of police officers in England, America and in Germany, the more I think that policy on criminal matters should never be allowed to be made on the basis of statistics alone, and that there should be a requirement that people who deal directly with crime are given some substantive power in the production of that policy.

And while I’m dreaming, I’d also like a private jet.

Did the EU Cause the Last 70 Years of European Peace?

Somehow I’ve gotten in the habit of listening to Sam Harris’ podcasts. He seems to have a gift for finding just the right words to express difficult thoughts, and he never fails to cover interesting topics, often joined by interesting people. So if I have to do some boring manual task – chopping vegetables or cleaning the living room – then Sam Harris it is.

One recent podcast had journalist James Kirchick as a guest, and the matter of the EU came up. Kirchick makes the case for the importance of the EU to European peace about as powerfully as one could (from about 51:55): “I talk to a lot of Europeans, and a lot of them who are Eurosceptics, who want to dissolve the EU. And I just ask them simply, ‘What’s your alternative? What period in European history was more prosperous or peaceful than the, basically, seventy years of European integration that you’ve had? Why would you want to risk that? Knowing what the alternative has been, why would you want to risk that by getting rid of this institution?’ And I agree, the EU has lots of problems… it’s too bureaucratic… I could go on and on. But to scrap it entirely, and to go off in this other direction when we know what’s happened historically, we know what’s happened… when all the European countries have pursued their national agendas at the expense of cooperation.”

This is, I think, a bad argument, but the basic idea behind it is treated by Europhiles as if it were a self-evident truth. I can remember roaming the streets of Brussels in 2015; I happened upon a music festival at which some EU dignitary soon gave a speech. “The EU gave you peace,” she declared. She did not try to argue or show how the EU produced peace; rather she felt able to take it as obvious.

I think there’s a need to push back against this, not only because it’s wrong, but also because the mistake can be dangerous. Certainly I’m willing to admit that the EU has made certain limited contributions to peace, but it has also undermined the basis of peace in Europe, and I’m inclined to think the latter effect is more significant than the former.

Before I say why, let me begin with a concession: the EU has made some contribution to peace. This past summer I was talking with a fellow from Portugal who told me that back in the days of Franco, the Portuguese had real fears of a Spanish invasion. That’s now over. Assuming this is true (I admit I hadn’t known this before), I think it’s reasonable to attribute some credit for this change to the European Union: by creating a focus for political activity over and above that national border, a situation was created in which local tensions could tend naturally to relax and disappear. Northern Ireland provides another example; there may well be other regional tensions that have been allowed to dissolve in a similar manner: the EU has, in certain local situations, been helpful.

But let’s not kid ourselves: in neither of the examples I’ve just given can anyone pretend that the EU has provided 70 years of peace. And neither the founders of the EU nor most of its present-day advocates were thinking of Ireland or the Iberian peninsula when they talked about peace. No, they were (and are) focused on The Big One: the Franco-German fault and the prevention of another general war. Here it should be obvious that the EU has been largely irrelevant.

There are many reasons we haven’t had another Franco-German conflict. For example, Germany has become an essentially pacifist country (I think this is no longer a good thing, but that’s another post). Of course it’s a good thing that the French and the Germans have turned to the path of reconciliation, and the EU was downstream from those efforts. But even if that had not happened, even if Germany had become an angry, hate-filled, militaristic country after 1945, and even if Germany had managed to put a new Hitler in command, still there would not have been another war: France had acquired nuclear weapons. I’ve explained elsewhere why I think Hitler was very far indeed from a military genius, but even Hitler would not have attacked a nuclear-armed France.

And of course, even absent that consideration, there is another reason that there was no third major war in Europe: after 1945, the world was suddenly a very different place, with very different concerns. Western European countries, above all the Germans, were focused on the east; on either side of the iron curtain, countries shared a common danger, and this inclined them towards a common peace with others on their side. Lessons learned from WWII meant that countries came together (in NATO) for common security. The atom bomb and the Americans were sufficient to keep the Russians from rolling the dice. After 1989, peace had become a habit, and anyway, the presence of overwhelming American power presented a significant deterrent.

Where, in all of this, did the EU play any role at all? If there had never been any move at all towards a European Union, at what point exactly, or in what respect, would its absence have been likely to bring about a major war? Looking at the actual situation since 1945, I can see reasons to give credit for seven decades of peace to the United States and to nuclear weapons. Certainly there were plenty of other factors that helped, but none were nearly so important as these two.

Let’s return to Kirchick’s questions: “What period in European history was more prosperous or peaceful than the, basically, seventy years of European integration that you’ve had? Why would you want to risk that?” Yes, we’ve had decades of gradual European integration, and these have coincided with seven decades of common peace, but it does not follow that the integration caused the peace. On the contrary, I think I’ve just outlined the major causes of that peace, and the EU isn’t one of them.

But I don’t think that people who give the EU credit for European peace have a specific scenario in mind. Rather, there’s a more general thesis at work: for the duration of its history as a plurality of countries, Europe has known constant wars; a different result requires a fundamental change; and a single European state would provide that by eliminating the plurality of interests that go along with a plurality of states. I suspect Kirchick is getting at something like this when he says “we know what’s happened… when all the European countries have pursued their national agendas at the expense of cooperation.”

But it is not the case that all wars are the result of conflicting national interests, nor that having one country produces peace, while having many leads to war. Canada and the US are two separate countries, not one, and they have been at peace for two centuries. In that time, the US, which is one country, not many, has had a civil war. So too has Switzerland, which is also one country, not many. So replacing many countries with a single country is very far from a guarantee of an end to war.

All this should suffice to show why I don’t think the EU deserves credit for seventy years of European peace. But in fact, I think the situation is rather worse than that: there are reasons to fear that the EU has made conflict more, rather than less, likely as we look to the future.

Above all, the fact is that bad thinking about the causes of war and peace is a very dangerous thing indeed. The thirties, in which the western democracies indulged in pacifist fantasies when they ought to have been focused on military power, give an example of the danger here. I find myself increasingly inclined to make a connection between the claim that the EU has caused peace and the general European indifference to NATO (an indifference manifested in the tiny number of EU members that reach the 2% of GDP obligation for military spending). That is, by repeating the mantra that the EU has been the cause of peace, one makes it easy to discount the importance of NATO (or, alternately, hard to see its importance). There is a widespread tendency towards a one-sided belief in the value of soft power here; to the extent that the US moves towards isolationism (we will have to see what President Trump does), that could leave Europe, particularly its eastern fringe, dangerously exposed, the very sort of situation that made WWII possible.

But there is a second, perhaps more important regard in which the EU has promoted the possibility of conflict: by the lack of wisdom in its actions and in its founding idea. The euro crisis and austerity have not been a cause of great love for Germany, particularly in Greece, and we haven’t seen the end of it yet. The attempt at forced migration quotas was similarly unfortunate, and could hardly have been better calculated to give fresh impetus to nationalist movements. Above all, the notion of a top-down replacement of national identities with a common European one was not wisdom. Yes, the last few major European wars were caused by excessive or extreme forms of nationalism, but it does not follow that the answer is to rush to the opposing extreme. Pride in local culture and achievements is an altogether natural thing, and identities built around this can be altogether benign. Attempts to impose an identity that ignores the natural attachments of most people greatly increase the risks of encountering the malignant forms of those attachments, in a backlash. The answer that the EU provides to the question of peace in Europe is altogether too simple – indeed, it seems to me to refuse really to confront the problem. I do admit that simple solutions to complex problems may work in the short term.

This notion, that there is at the very heart of the European project a source of danger, is one that the great English essayist, Theodore Dalrymple, has addressed on a number of occasions, often in a highly diverting manner. Consider, for example, the following: “The journalist … asked whether I thought that nationalism was dangerous. The question implied that the choice before Europe was between the European Union and fascism: that all that stood between us and the ascension to power of new Mussolinis, Francos, and Hitlers were the free lunches of senior Eurocrats. I replied that dangerous forms of nationalism existed, of course, but that in the present circumstances, supranationalism represented by far the greater danger. Not only was such supranationalism undemocratic, for it reflected no widespread demand or sentiment among the population; it also risked provoking the very kind of nationalism against which it was to stand as the bulwark. Further, the breakup of supranational polities in Europe tends to be messy, as history demonstrates.” (see also here)

I don’t mean to say that the EU has done no good in any regard, but surely it’s time to put this one to rest: the EU has not been the cause of peace in Europe in the last seventy years, nor has it been anything like one of the most important causes.

What’s Left by Nick Cohen

This is an important book. It deserves a far wider readership than it will get. It is an introduction to the phenomenon of the regressive left, a recent arrival in left-wing politics that casts aside the proudest traditions of that part of the political spectrum, abandoning the universality that used to come with ideas like human rights, and making its peace with fascism and other illiberal ideas that used to be the exclusive property of the far-right. First printed in 2007, What’s Left has recently come back into print, and has been enjoying a flurry of sales because of Jeremy Corbyn’s election to the leadership of the Labour party in Britain. Certainly Cohen was prescient, and his book can be profitably read to increase one’s understanding of British politics, but I think its significance goes far beyond that. For me, the book is important for the insight it gives into the philosophical school known as postmodernism: the regressive left is what happens when postmodern ideas take root in political life. Cohen covers some difficult intellectual ground with magnificent clarity. The result is a book that reads like a novel and will greatly deepen your understanding of ideas that are coming to define the times we live in (and with the arrival of President Trump, postmodernism has come to have a terrible relevance to American political life as well; I’ll say a word about that at the end).

I’ve long had an interest in how certain seemingly esoteric ideas – that is, ideas that you would assume were of interest only to a few professors in ivory towers – are in fact critically important to everyday life. Cohen’s book shows how ideas that have been peddled by academics for the last half-century or so have trickled down into everyday life, with deeply troubling effects in political life across the West.

The book is not, however, a philosophical tome. What’s Left focuses on recent history, reaching its climax in the second Iraq war. Cohen’s experience of this time is what brought about the book: as a friend of certain Iraqi exiles, he found himself occupying different ground than many of his fellow leftists, and the perspective proved fruitful. He tells us he didn’t join protests against the first Iraq war because he saw Arabs carrying signs saying things like “Free Kuwait.” He supported the second Iraq war because of Iraqis who vehemently supported it, and because he was all too aware that the line about Saddam Hussein being a brutal dictator who ought to be removed was, well, true (you’ll learn a thing or two about Hussein’s Iraq along the way). But a key insight clearly came as Cohen reflected on the distance the rest of the left had travelled without him since the 80’s. Back then, on the basis of its support for universal human rights, the left was against Hussein and in favour of Iraqi democracy. By the time of the second Iraq war, it was the right that was against Hussein and in favour of Iraqi democracy, while the left was at best confused. Actually, Cohen shows that ‘confused’ is too generous: many left-wingers had simply abandoned their previous commitment to universal human rights.

Iraq, then, is one of the book’s central threads. If, like me, you were against the Iraq war from the start and have on that account felt rather pleased with yourself ever since, the book can be uncomfortable reading – and all the more valuable for that. It is by far the most thought-provoking thing I have read on Iraq.

But I think the book’s real value lies in the ideas that it treats, ideas relevant far beyond a particular conflict. One idea central to the book (and to postmodernism) is relativism. You have probably encountered the doctrine in the form of cultural relativism – i.e., the idea that no culture’s practices are better than any others. So, for example, in some cultures, it’s appropriate to eat with a knife and fork, in others, with chopsticks, in others still, with your hands. The cultural relativist reminds us that no one of these approaches is better than any other – indeed, that it would be deeply wrong and even evil to take any one of them as absolute and to try to impose it on others. In theory, this sounds great, and many of the doctrine’s advocates see it as necessary to sustain or to further develop the left-wing movements of the 20th century, in that they think it the only foundation on which real tolerance and coexistence are possible. Cohen, however, deals with the way the doctrine actually plays out in the real world, and there it proves to have disastrous potential.

One relatively recent application of relativism involves a sharper and more exclusive focus on minority groups. So, for example, Cohen tells us how “the idea that a homosexual black woman should have the same rights as a heterosexual white man was replaced by a relativism which took the original and hopeful challenge of the early feminist, gay and anti-racist movements and flipped it over. Homosexuality, blackness and womanhood became separate cultures that couldn’t be criticized or understood by outsiders applying universal criteria.” This refusal of criticism involving universal criteria was applied above all to the foreign cultures found in other countries.

Consider, for example, Michel Foucault, celebrated among many academics as one of the great minds of the 20th century, and often taken to be a particularly progressive thinker. Around the time of the 1979 revolution he became enamoured of the Ayatollah’s new regime in Iran, and warned against criticising it – Iranian culture has a different “regime of truth,” he said. That is, they have their own culture with its own practices, and we have no right to criticise it in terms proper to our own culture. But what was at issue in this case was not merely matters like how we eat our food, but questions such as whether or not people who protest against the government should be tortured, or questions concerning the rights of women. Cohen brings the heart of the matter out clearly: “if the bishops of the French Catholic Church had achieved the theocratic power of the ayatollahs and used it to prescribe what Foucault and his colleagues could teach at the College du France in Paris, I’m sure Foucault and all his admirers in Anglo-American academe would have gone ape and shouted ‘fascism.’ As it was, the victims of the Ayatollah Khomeini’s theocracy had brown skins and lived in a faraway country.” Far from being a progressive thinker, Foucault supported tyranny and the crushing of human rights – not in his own country and for himself, mind you, but for other people born elsewhere. More than that, he effectively brought the weight of his academic reputation to bear against those in the West who would criticise the Iranian regime and who would support Iranians who wanted something better.

Cohen shows how Foucault is far from an isolated example. There is no shortage of other instances of the same phenomenon, such as the academic we encounter who objects (in almost incomprehensible prose) to the fact that certain people in the West are criticising the practice of burning women to death. To be quite clear: the complaint is not that women are being burned to death, but that people are criticising the practice of burning them. After all, what right do westerners have to criticise another culture?

A major theme of the book, then, is the abandonment of universal values for a view of the world that understands people to be fundamentally different from one another on the basis of some favoured category (gender, race, culture, etc.). People are effectively put into silos, on this view, and told they cannot understand or criticise those who occupy other silos. Spend a little time with this idea and you’ll start to see it all over the place. Consider, for example, how England’s Guardian newspaper produced a fawning puff-piece about the local leader of the extremist Islamist organisation Hizb ut-Tahrir. I can’t remember ever having seen a similar treatment of, say, Tommy Robinson in those pages: he’s a western right-wing extremist, and as such is to be subject to severe censure. More telling still, a few days later the same Guardian published a considerably less flattering piece on Maajid Nawaz, a former member of Hizb ut-Tahrir who has turned into a crusader for a liberal order. The Guardian has internalised the idea that the appropriate relation to culturally foreign ideas (even if they’re now on the streets of England) is one of understanding, not criticism, and this is now so deeply rooted that even a reformer of Pakistani heritage gets treated with hostility. (The phenomenon was remarked on at the time; see also this post by David Paxton on a related issue.)

Angela Merkel gave another example when she responded to the US presidential election by saying that Germany was prepared to continue to work with the US, but only on the basis of the liberal values both countries had long held dear. Laudable, yes, though a colleague at work pointed out that she had made no such statement concerning President Erdogan in Turkey. For those who have not been paying attention, Erdogan has not merely run a campaign that was beyond the pale, but has recently used his office to do things like close down newspapers and lock up dissenters in their thousands.

But I digress. There is another idea that crops up repeatedly: relativists will tend not to think ideas through to their logical consequences – indeed, the doctrine makes this impossible in some circumstances – and so we encounter people who adopt doctrines in a merely partial fashion, unwilling really to commit to anything and failing to confront the full reality of what they are advocating. This brings about what I call mere criticism – that is, we end up with people who focus all their energy on being against something, often coming up with effective critiques of it, but never offering solutions, never being for anything. So, for example, Cohen points to the oft-repeated claim that the war against Saddam Hussein was ‘illegal:’ “logically, they should then have followed through and demanded that the Americans release Saddam Hussein from prison and restore him to the presidency that the invading forces had ‘illegally’ stolen from him. But, as the theorists of the Eighties and Nineties had anticipated, there wasn’t much call for logic in a post-modern world that welcomed self-righteous fury without positive commitments.”

Noam Chomsky (“the boy at the edge of the gang”), as he appears in the book, brings this phenomenon into sharp focus: he is seen to be very much against mass killing, so long as the blame can be pinned on America. If it’s America’s enemies who are to blame – well, let’s just say that Chapter 6, together with Chapter 5, contains an utterly devastating attack on Chomsky, in particular on his moral authority. I can remember people gushing about him in Vancouver in the 90’s, talking about his upcoming talk as though it were the chance of a lifetime to stand in the shadow of one of the greatest intellectual lights of our time. I couldn’t make it; I now feel rather glad not to have been involved. Cohen’s criticism is a knock-out blow: I can no longer take Chomsky seriously as a moral authority, and I will feel forced in the future to question the judgment of those who do.

Chomsky’s one-sided anti-Americanism points to a reality about the regressive left that Cohen has brought out more clearly elsewhere (for example here): they often tend simply to be against the West, or against America, rather than for the universal values that the West (or America) ought to stand for and sometimes fails to live up to. This notion of mere criticism is also worth keeping in mind as you watch current events, because it is the difference between the regressive left and the left that has integrity.

It is in relation to Iraq, however, that the bad ideas from academia really go mainstream, as the great mass of people opposed to the war fail to make important distinctions, and wind up effectively – and sometimes explicitly and actively – opposing democracy and supporting a brutal dictatorship for the people of Iraq. An ethical approach would have been to oppose the rash activity of Bush and his followers while supporting, by all possible means, Iraqis who wanted democracy. Instead, we read of occasions on which Westerners actually shouted down left-wing Iraqis who were trying to achieve for their own country the same rights we enjoy in our own. Being against Bush was more important than allowing any kind of help for democratic Iraqis. Thus the German government refused to allow its own officials – people with experience of dealing with the legacy of the GDR and the Third Reich – to go to Iraq to advise on how to approach the same sort of issues there.

But perhaps the best example in the book of how postmodern thought such as that of Foucault had filtered down to the mainstream comes in the following exchange:

Tony Blair: There is global struggle in which we need a policy based on democracy, on freedom and on justice –

John Humphrys (a BBC presenter): Our idea of democracy…

Blair: I didn’t know that there was another idea of democracy.

Humphrys: If I may say so, that’s naïve.

Blair: The one basic fact about democracy, surely, is that you can get rid of your government if you don’t like them.

Humphrys: The Iranians elected their own government, and we’re now telling them –

Blair: Hold on John, something like 60 per cent of the candidates were excluded.

(BBC Radio 4, Feb. 2007)

I’ve always hated Blair, but there’s no disagreeing with him in this exchange. I’d like to hear if Humphrys made an apology to all those young Iranians who in 2009 proved willing to be imprisoned and tortured because they wanted “our idea of democracy.”

Cohen lands some decisive blows on other leftists I had previously admired (or at least not despised). I can’t pass up the chance to quote Cohen on Michael Moore’s Fahrenheit 9/11: Moore “brushed aside the millions forced into exile and the mass graves and torture chambers and decided instead to present life in one of the worst tyrannies of the late twentieth century as sweet… Presented with propaganda which might have come from the studios of the dictators of the Thirties, the jury at the Cannes Film Festival and audiences in art house cinemas on every continent did not protest at the whitewash of totalitarianism, but rose to their feet and shouted themselves hoarse.”

(And I’d just like to say that I find it extremely pleasing to hear Jean Baudrillard described as “an overrated French theorist.”)

As I said at the start, What’s Left is currently enjoying good sales in Britain because of its relevance to current political events there. But the ideas it treats – bad ideas whose progress Cohen follows from relative obscurity among top academics and fringe movements to subsequent ubiquity in mass street protests and mainstream politics – have come in no small way to define the nature of our times. The book is thus relevant far beyond Britain. As a former humanities academic, I think it should be required reading for all students of postmodernism: it gives a look at what this intellectual movement actually means in practice, something we rarely hear about. There’s more insight into postmodernism in this easy-to-read book than in many pounds of incomprehensible academic prose.

Unfortunately, What’s Left is likely to be relevant for a long time to come, because all levels of our society have been marinated in these bad ideas to the extent that they’ve come to seem self-evident to many. They’re still being taught at every university; read the book and you’ll start to see them everywhere. And because they have the imprimatur of many of the supposed great minds of our times, those who speak against them can often seem like unintelligent cranks. Cohen’s book is a reminder of the importance of good theoretical thinking, and a reminder of the importance of speaking up against bad ideas like these.

A final thought, and one that goes beyond the book: perhaps the most fundamental idea in postmodernism is the denial that there is any reality beyond particular, limited perspectives. I can remember that when I was a postmodernist (I was about 18), I would deny that there was any such thing as truth at all. One particularly worrying aspect of both the Brexit campaign in Britain and the US presidential campaign is that each could legitimately be described as “post-truth:” claims that were simply false were able to take on a life of their own, and having spoken falsely does not seem to have carried any penalty. Indeed, in the case of Trump, his falsehoods and self-contradictions were so numerous that reporters simply couldn’t keep up. It has not always been like this, and with the arrival of this post-truth reality, our politics have become genuinely postmodern. We can only hope that we can find a way out of this postmodern condition before we’re subject to its most devastating consequences.

What’s Left is a first-rate, readable book that will not only make you think hard about Iraq but also introduce you to some of the defining ideas of our time. A must-read for anyone interested in politics or the deeper intellectual currents that drive events. Books this good are rare indeed.

(also, if you want an idea of what Cohen’s all about, try here or here)


How Not to Criticise Big Ideas

I want to try to get clear about a faulty line of reasoning that I seem to encounter regularly. When people are confronted with an idea that is supposed to explain some large-scale historical or social phenomenon, they often seem to think that by giving a single counter-example they have refuted the idea in question, or at least shown it to be too simple. The thinking behind this is wrong, but before I get into why, let me give a recent example of the sort of thing I’m talking about.

A few days ago I came across this interesting article on how Richard Rorty saw the shape of things to come way back in 1998. Here’re the paragraphs that brought it all home:

“… something will crack. The nonsuburban electorate will decide that the system has failed and start looking around for a strongman to vote for — someone willing to assure them that, once he is elected, the smug bureaucrats, tricky lawyers, overpaid bond salesmen, and postmodernist professors will no longer be calling the shots. …

One thing that is very likely to happen is that the gains made in the past 40 years by black and brown Americans, and by homosexuals, will be wiped out. Jocular contempt for women will come back into fashion. … All the resentment which badly educated Americans feel about having their manners dictated to them by college graduates will find an outlet.”

I think Rorty goes too far with the words “wiped out” – here’s hoping I’m right! – but there’s no doubt the passage is extraordinarily prescient. What struck me about the article, however, was the criticism the author made of Rorty:

Is his analysis a bit oversimple? Yes. Even within universities, there have always been optimistic champions of America, those who ever-passionately believe in the moral arc bending toward justice and work ever-diligently on formulating concrete, actionable policies that would make the country more just.

By focusing only on his own environment, academia, Mr. Rorty’s arguments also seem strangely parochial. During the 1960s, the academic left may have started to turn its back on poverty, but actual politicians on the left were still thinking a great deal about it: Robert F. Kennedy was visiting poor white families in Appalachia; Lyndon B. Johnson was building the Great Society.

Right through the ’90s and into the 2000s, we had left-of-center politicians singing the praises of hope, rather than the hopelessness that Mr. Rorty decries.

I should start with the qualifier that I haven’t read the book in question, so my response to this criticism of Rorty should be taken with a large grain of salt (though knowing what I do of Rorty, I doubt he would have been knocked over by what I’ve just quoted). But defending Rorty is not the real issue here. Rather, we have an example of a specific kind of criticism of explanations that seek to explain large-scale human phenomena.

To say that Rorty’s analysis is “a bit oversimple” is to import a heavily loaded assumption into the conversation about what we should expect from an adequate explanation. Clearly the idea is that a less ‘simple,’ and thus more adequate, explanation would not have failed to account for certain pertinent facts (e.g., optimistic champions of America in universities and beyond). The fact that these things are not covered by Rorty’s analysis is counted as a failure; the demand lying behind this line of thought, then, must be that a fully adequate analysis would include every single detail you could think up.

A more severe variety of this criticism would replace “a bit oversimple” with ‘wrong.’ Some months ago I was telling some friends about a man I’d read about who, having had two children by a woman, not only decided against marrying her, but also decided against playing any substantial role in the lives of his children beyond the financial. This was an expression, I claimed, of the lack of regard in our time for the family and its attendant duties. “No!” came the response, and an example was given of parents who dote excessively on their children, the very opposite of what I had been suggesting. The same assumption was at work in this criticism: a single counter-example is sufficient to refute the theory, or at least to show its overly simple nature.

If Rorty or I had been trying to set out a law of math or physics, which must apply to every conceivable case, the counter-example criticism would be justified. There are, however, other ways to think about these things.

The comparison of the body politic to a human body goes back a long way indeed. Plato’s analogy of city and soul is well known, and many have seen in Thucydides an analogy between the progress of an idea through a city or through Greece and the progress of a disease through a body. So let’s think this analogy through.

Imagine you’re told you have pancreatic cancer, and that you’ll likely be dead in six months. If I were to respond to this diagnosis by pointing out that your body has a vast number of entirely healthy cells – the great majority of them, even in the pancreas – that response wouldn’t show that the diagnosis is wrong. A cancer diagnosis does not assert that every single cell in the body has cancer – and this is true of pretty much all sickness. What matters is that the cancer is present, and that the body has an inherent logic which lets us predict, from something present in only a tiny fraction of its cells, that the whole body will be dead in a matter of months.

With this in mind, look back at Rorty and the criticism made of him. What if the phenomenon he’s pointing to is like a disease in a body? Just as there are healthy cells in a body that’s fatally ill, so too can there be people in academia and beyond who don’t fit Rorty’s case, but that need not affect his analysis. What’s important is that he has identified something that can determine the future of American politics, just as pancreatic cancer can determine the future of the body it’s found in (and before you go getting too upset about the analogy, remember that many diseases are not fatal).

That this manner of thinking can possess genuine foresight should be clear enough from the fact that Rorty saw something of the shape of our current politics back in 1998. Not only that, but the sort of foresight at issue here is, in certain contexts, far superior to anything empirical-scientific. No number-crunching, data-driven analysis could possibly have seen what Rorty did, and certainly not so far back.

This is not to say that the analogy of a disease to ideas that are said to be driving society is always appropriate. Many ideas are merely partial, though even here, other cases can present complementary rather than conflicting phenomena – e.g., I’m inclined to see both economic factors and a backlash against political correctness as important to the presidential election result, not as competing explanations but as two streams running into the same river. (And in the case of the example from my conversation with friends, I’m inclined to see two sides of the same phenomenon: the breakdown of the conventional idea of the family can find expression both in a lack of regard for children and in an excessively sentimental relation to them.)

There is a potential problem with this disease-analogy, of course: how do you criticise an analysis of this kind if counter-examples are not necessarily refutations? I’d be inclined to find the answer by pursuing the disease-analogy further, but I think I’ve written enough for now.