Is Good-Faith Debate a Thing of the Past?

We’re misinterpreting the other side on purpose because it makes arguing easier. Last August, the actress Julia Louis-Dreyfus, of Veep and Seinfeld fame, hosted the fourth night of the Democratic National Convention.

She performed her hosting duties perfectly competently but, I felt, tried to balance edginess with family-friendliness in a way that was at times cringe-inducingly awkward. As I watched online, I tweeted, “Let’s be honest, this is the low point of Julia Louis-Dreyfus’s career.”

A Twitter user who commented often on my posts responded, “You sound a lot like Christopher Hitchens.” They were alluding, of course, to the late essayist’s misogynistic Vanity Fair essay arguing that women aren’t funny.

I was startled by the comment because it wasn’t just a bizarre leap from what I had said; it was in fact an inversion of what I was getting at. I find Louis-Dreyfus extremely funny, and my point was that the strictures of a stuffy political convention were keeping her from being as funny as she usually is. (At least by my lights; surely other people thought she was amusing anyway.) And I don’t generally comment on comedians, so the criticism couldn’t have been built on a pattern in my other commentary.

It was a completely inconsequential exchange, but it typified a style of retort that I encounter more and more often.

Another standout exchange happened a few weeks ago. In January, Hillary Clinton suggested on her podcast that Donald Trump may have coordinated with Putin in inciting the Capitol Hill riot, and said his phone records should be checked for any such activity. I thought this sort of speculation was silly, so I tweeted that it was “sad to see someone not be able to move on from 2016.”

In response, someone asked if I had missed the U.S. government’s conclusion that Russia had interfered in the 2016 election (something I’ve reported on many times, I should note). I replied to my critic by saying I was “not disputing that Russia launched a (largely ineffective) campaign to interfere with the 2016 election” but was skeptical that it was reasonable or enlightening to imply that the January 6 insurrection was orchestrated by Russia. My critic then said I had “moved the goalposts” from “‘hoax’ to ‘largely ineffective.’”

It was an astonishing response. My interlocutor put the word “hoax” in quotes as if I’d literally said it when I hadn’t. Nor had I implied in any way that Russian interference was fabricated. In fact, my original comment was predicated on the idea that Clinton had become overly reliant on a phenomenon that was real in 2016, even if its electoral impact was probably negligible. So, again: someone had arrived at the opposite of my intended meaning through a series of jumps in which they projected ideas onto me.

I’m providing very small examples, but this style of discourse is endemic to Twitter and plays out on a larger scale all the time. Commentators are constantly characterized as believing things they don’t believe, and entire intellectual positions are stigmatized by association with ideas they have no substantive affiliation with, often just because they don’t appear to fit into classic left-right or liberal-left binaries. If I critique Robin DiAngelo’s White Fragility workshops, I’m told I oppose making the workplace diverse; if I say doxing civilians can be hyper-punitive, I’m told I’m a reactionary uninterested in the fight against mass incarceration (a system whose moral degeneracy I’ve written about for the better part of a decade). An increasing share of my responses simply involves telling people “I did not say that” or “please read what I wrote.”

So what’s happening?

One way to look at it is that nobody reads. When people come to conclusions that just aren’t supported by what they’re reading, or make questionable inferences when alternative conclusions are readily available, it’s a sign of a low-literacy environment.

But one must also consider the specific kind of illiteracy on display. It’s not just poor reading; it’s poor reading that exhibits discernible patterns of antagonism and effectively treats public debate as a battlefield. When people are constantly manufacturing positions you don’t hold and bombarding each other with false choices, it illustrates a climate in which nothing is untouched by polarization, in which everything is a proxy for some broader orientation that must be sorted into bins: good/bad, socially aware/problematic, savvy/out of touch, my team/the enemy.

This filtration process is so intense that people want to sort any given comment into one of these bins before they’ve even fully interpreted it. If you’re not inclined to “read the room” — one of the most common exhortations among the brain-dead Twitterati — in a specific corner of the internet and dutifully put forward instantly recognizable talking points, then your comments are flagged as at odds with what’s considered sensible. There’s an almost algorithmic quality to this scanning and sorting process, which would explain how people land on conclusions that make little sense except as the output of a scan for irregularities.

On a micro level, our debates are stained by straw-manning and non sequiturs and motte-and-bailey fallacies, but the aggregate effect is something more systematic and more insidious. Call it disinterpretation: incorrect interpretation in an adversarial, antisocial, and exploitative manner.

Or maybe don’t call it disinterpretation; it’s kind of an ugly word. But what I’m trying to draw a parallel with is the distinction between misinformation and disinformation. Per Wikipedia: “Misinformation is false or inaccurate information that is communicated regardless of an intention to deceive … Disinformation is a species of misinformation that is deliberately deceptive.”

Misinterpretation is when people incorrectly understand meaning. Disinterpretation is when they don’t have the intention of understanding it.

Needless to say, disinterpretation helps foster a climate that dampens intelligent debate and makes people reluctant to articulate themselves in ways that don’t signal conformity to recognizable factions.

One peculiar outcome of this kind of intellectual environment is that even descriptive accounts of the world are vulnerable to bad-faith attacks. One memorable example happened when I wrote a very straightforward piece of news analysis for Vox in 2017 about how North Korea’s economy was growing at an unexpectedly rapid clip because Kim Jong Un was allowing incremental liberalization of the country’s tightly controlled economy. Right-wingers online seized on a Vox tweet about the story and decided it meant that Vox and I thought North Korea’s growth was a good thing. It went semi-viral on the right, and eventually I was even invited onto Tucker Carlson’s show on Fox on the basis of a view I had never articulated. In a back-and-forth with a producer over email, I declined to come on and insisted that he first read the actual article in question and then specify the basis on which I should appear. The producer refused to engage over multiple emails — but he was sure that I should come on the show.

Slate’s Lili Loofbourow once wrote a sharp thread that touches on how people like me are probably clinging to a romantic conception of intellectual exchange on social media that should’ve died a long time ago. “Bad faith is the condition of the modern internet, and shitposting is the lingua franca of the online world. And not just online. A troll is president. Trolling won. Perhaps we can agree that these platforms aren’t suited to the earnest exchange of big ideas,” she wrote last summer. (The context for her thread was different from the one I’m discussing today, but a lot of the ideas are relevant.) She argues that social media is such a cesspool and so thoroughly dominated by people who choose to disagree through “trolling, sea-lioning, ratios, dunks” that “good faith engagement is actually maladaptive.”

Loofbourow also argues that in a zero-trust, bad-faith environment — she brings up the example of the pointlessness of debating an “all lives matter” proponent — it makes sense to leap over text to subtext. “If you can predict every step of a controversy (including the backlash), it makes perfect sense to meta-argue instead — over what X *really* means, or implies, or what, down a road we know well, it confirms,” she wrote.

The problem — and Loofbourow acknowledges this in her nuanced thread — is that you can’t always predict the flow of conversations, and you usually cannot predict all of someone’s views from one statement. In fact, even in our highly polarized society, quite often you can’t!

I think a majority of people who thrive on social media ultimately do subscribe to the idea that debate is not a place for learning. We’re tilting toward a universe in which all discourse is subordinate to activism: everything is a narrative, and if you don’t stay on message, then you’re helping the other team on any given issue. What this does is eliminate the possibility of public ambiguity, ambivalence, idiosyncrasy, and self-interrogation.

Ultimately I admit to being spooked by the more pessimistic aspects of Loofbourow’s analysis of social media. If enough of us decide that the text itself doesn’t matter, that real exchange is pointless, and that these spaces are irredeemable, then we move closer to a nihilistic collapse in meaning. It is, in short, another path to will-to-power politics.
