Yesterday the New Yorker published a detailed exploration of an 'expert on serial killers', Stéphane Bourgoin, who turned out to be comprehensively lying about his own past: the murder of his girlfriend (who appears not to exist), his credentials, how many serial killers he'd interviewed, and more. He was nevertheless taken seriously for many years, and his lying won him genuine privileged access to serial killers for interviews and to victims' families and support groups.

I find serial/compulsive/career liars fascinating. One of the best serial-liar stories, which I first encountered as a cautionary tale for journalists, is that of Stephen Glass, the 1990s New Republic writer who turned out to be comprehensively making up most of the juicy details of his articles, even forging handwritten transcripts of conversations that never happened to present to the magazine's fact-checkers.

I mostly just read about this because it's fun, but I do think it has crystallized some things for me which are useful to have in mind even if you don't have fun reading about serial liars. (Takeaways are at the bottom if you want to skip to that.)

The dynamics of how serial liars go unnoticed, and of how the socially awkward information "hey, we think that guy is a fraud" propagates (or fails to propagate), seem to me to also describe how other, less clear-cut kinds of errors and misconduct go unnoticed.

A recurring theme in the New Yorker article is that people knew this guy was full of crap, but weren't personally motivated to go try to correct all the ways he was full of crap.  

“Neither I nor any of our mutual friends at the time had heard the story of his murdered girlfriend, nor of his so-called F.B.I. training,” a colleague and friend of Bourgoin’s from the eighties told me. “It triggered rounds of knowing laughter among us, because we all knew it was absolutely bogus.”

Bourgoin was telling enough lies that eventually one of them would surely ring wrong to someone, though by then he'd often moved on to a different audience and different lies. I ended up visualizing this as a sort of expanding ring of people who'd encountered Bourgoin's stories. With enough exposure to the stories, most people suspected something was fishy and started to withdraw, but by then Bourgoin had reached a larger audience and greater fame, speaking to new audiences for whom the warning signs hadn't yet started to accumulate. 

Eventually, he got taken down by an irritated group of internet amateurs who'd noticed all the ways in which he was dishonest and had the free time and spite to actually go around comprehensively proving it. 

This is a dynamic I've witnessed from the inside a couple of times. There's a Twitter personality called 'Lindyman' who had his fifteen minutes of internet fame last year, including a glowing New York Times profile. Much of his paid Substack content was plagiarized. A lot of people knew this, and had strong evidence of it, for a while before someone demonstrated it publicly.

I personally know someone Lindyman plagiarized from, who seriously debated whether to write a blog post to the effect of 'Lindyman is a plagiarist', but ended up not doing so. It would've taken a lot of time and effort, probably attracted the wrath of Lindyman's followers, and possibly led to several frustrating weeks of back and forth; and is that really worth it? And that's for plagiarism of large blocks of text, which is probably the single most provable and clear-cut kind of misbehavior, much harder to argue about than the lies Glass or Bourgoin put forward. Eventually someone got fed up and made the plagiarism public, but it'd been a running joke in certain circles for a while before then.

There are more examples I'm aware of where a researcher is widely known by other researchers to engage in shady research practices, but where no one wants to be the person to say that publicly; when it does, eventually, come out, you often hear from colleagues and peers "I'm not surprised". 

Why not be the change I want to see in the world? Last year, I tried looking into what looked like a pretty clear-cut allegation of scientific misconduct. It ended up consuming tons of my time in a way that was not particularly clarifying. After getting statements from both sides, asking lots of followup questions, getting direct access to the email chain in which they initially disputed the allegations, etc., I still ended up incredibly confused about what had really happened, and unsure enough of any specific thing I could say that even though I suspected misconduct had been involved, I didn't have something I felt comfortable writing.  

...then, about two months after I gave up, the same scientist at the center of that frustrating, unrewarding investigation had another paper identified as fraudulent in a much more clear-cut way. That retroactively clarified a lot about the first debate. The person who pointed it out, though, was immediately the target of a very aggressive and personal online backlash. I don't envy him.

Calling out liars is clearly a public service, but it's not a very rewarding one, and it's a bit hard to say exactly how much of a public service it is. I think people are basically right to anticipate that it's likely to absorb a ton of their time and energy and lead to months of mudslinging and not necessarily get the liar to either shut up or stop being taken seriously. 

But of course, having a public square with even a few prolific liars in it is also quite bad. Glass was so wildly successful as a reporter because the anecdotes he manufactured hit just the right spot: they were funny and memorable, and gave vivid 'proof' of things people wanted to believe anyway. Fiction has more degrees of freedom than truth, and is likelier to hit on particularly divisive, memorable, and memetically compelling claims. Scientists who write fraudulent articles can write them faster, and as a result much of the early, formative Covid-19 treatment research was fraudulent. I think a public square substantially shaped by lies is much worse than one that isn't.

One very tentative takeaway:

It's easy to forget that people might just be uncomplicatedly and deliberately lying. Most of the time they're not. But occasionally they are, and if you fail to have it in your hypothesis space then you'll end up incredibly confused by trying to triangulate the truth among different stories, assuming that everyone's misremembering/narrativizing but not actively fabricating the information they're presenting you with. I think it's pretty important to have lying in your hypothesis space, and worth reading about liars until you have the intuition that sometimes people are just lying to you. 

Another very tentative takeaway:

If you are a person interested in doing informal research that's important and neglected, I think identifying scientific fraud, or identifying experts on the TED talk circuit who are doing substantially dishonest or misleading work, is valuable, and it's largely not being done by more experienced and credentialed people. That's not because they don't have lengthy rants they'll give you off the record; it's because they don't want to stake their personal credibility on it and don't want to deal with the frustrating ongoing arguments it'll cause.

My current sense is that this work is not super important, but is reasonably good practice for important work; making sense of a muddle of claims and figuring out whether there's clear-cut dishonesty, and if so how to make it apparent and communicate it, is a skill that transfers pretty well to making sense of other muddles of claims. I'd be pretty excited about hiring someone who'd successfully done and written up a couple of investigations like these.



 

Comments

Fraud prediction markets 

epistemic status - I think this idea feels bad, but I can't find a good argument against it. I'm moving towards thinking there's a way to do this well.


Is there a resolvable event you could run a prediction market on here? Let's imagine you decide not to publish. Soon, you or someone else could create a market for "will a major news org publish an article saying X scientist is a fraud before 2023", and then you could buy shares in that market. I know Vox employees aren't allowed to gamble, but you get the picture. The more people who agree, see the market, and buy shares in it, the more likely it is that the people with the different parts of the story come together, complete it, and publish an article, and then you all get money from those who bet no. (People will bet no because most markets will resolve no, so it's a way for speculators to make money.)
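To make the mechanics concrete, here's a minimal sketch of how such a binary market could be priced and settled. I'm assuming a logarithmic market scoring rule (LMSR) as the market maker; the mechanism, the `b` liquidity parameter, and all the numbers are illustrative assumptions, not part of any actual fraud-market design.

```python
import math

class BinaryLMSRMarket:
    """Toy binary prediction market using a logarithmic market scoring
    rule. Larger `b` means deeper liquidity: prices move less per share."""

    def __init__(self, b=100.0):
        self.b = b
        self.q_yes = 0.0  # outstanding YES shares
        self.q_no = 0.0   # outstanding NO shares

    def _cost(self, q_yes, q_no):
        # LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))
        return self.b * math.log(
            math.exp(q_yes / self.b) + math.exp(q_no / self.b)
        )

    def price_yes(self):
        # Instantaneous YES price, readable as the market's P(resolves yes)
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy(self, outcome, shares):
        """Buy `shares` of 'yes' or 'no'; returns the cost charged,
        i.e. the change in the cost function caused by the trade."""
        before = self._cost(self.q_yes, self.q_no)
        if outcome == "yes":
            self.q_yes += shares
        else:
            self.q_no += shares
        return self._cost(self.q_yes, self.q_no) - before

market = BinaryLMSRMarket(b=100.0)
print(f"opening P(yes): {market.price_yes():.2f}")      # 0.50
paid = market.buy("yes", 50)                            # an informed bettor buys YES
print(f"paid {paid:.2f} for 50 YES shares")
print(f"P(yes) after trade: {market.price_yes():.2f}")  # ~0.62
# If the market resolves YES (the article gets published before the
# deadline), each YES share pays out 1; NO shares pay nothing.
```

The point of the sketch is just that informed buying moves the public price, so "everyone quietly knows" gets aggregated into a number a journalist can see, without any single person having to make the accusation themselves.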

It seems to me that fraud/scandal prediction markets are underrated. 

Some responses to common criticisms:

  • Wouldn't fraud markets be used to attack people? Yes, but if you know you are innocent, you can bet on yourself and make money. People are incentivised towards truth, because that's how you maximise expected value.
  • Even if the market resolves no, there will be reputational damage from having a market sit at 10% for any length of time. This is the criticism I think is most serious. That said, at least in this system that damage would be accountable. Over time we'd realise how easy it is to create a spike, and that it's long-term price rises that are worth following.
  • What if the article falsely accuses someone? That's already a problem, and fraud markets don't change it.
  • How do we decide who fraud markets can be run on? Allow them to be run, by anyone paying a certain amount of liquidity, on anyone with a certain level of public presence (e.g. a Wikipedia page). Most people don't have a credible article published about them, so you need a good sense of the case before you spend the money. Most markets will resolve no.
  • This makes me uncomfortable. Yeah, I agree, but that's an argument for caution, not silence.

Anyway, let me know what you think.

I hadn't thought of this and I'm actually intrigued - it seems like prediction markets might specifically be good for situations where everyone 'knows' something is up but no one wants to be the person to call it out. The big problem to my mind is the resolution criterion: even if someone's a fraud, it can easily be ten years before there's a big article proving it.

Disclaimer that I've given this less than ten minutes of thought, but I'm now imagining a site pitched at journalists as an aggregated, anonymous 'tip jar' about fraud and misconduct. I think lots of people would at least look at that when deciding which stories to pursue. (Paying sources, or relying on sources who'd gain monetarily from an article about how someone is a fraud, is extremely not okay by journalistic ethics, which limits substantially what you can do here.)

I don't think resolution criteria are a problem: a published article in an agreed set of major newspapers. It's okay if it resolves no for a few years before it resolves yes. You need relatively short time horizons (1 year) for the markets to function.

I don't understand how your site would work. Want to describe it?

I think I would actually be for this, as long as the resolution criteria can be made clear, and as long as, at least in the beginning, it can only be run on people who already have a large online presence.

One potential issue is that if the resolution criteria are worded the wrong way, perhaps something like "there will be at least one news article which mentions negative allegations against person X," it may encourage unethical people to purposely spread false negative allegations in order to game the market. The resolution criteria would therefore have to be thought about very carefully so that sort of thing doesn't happen.

Sure, that's a failure mode. I would only support it if the resolution criteria were around verified accusations. Mere accusations cannot be enough.

So what happens in the face of ambiguity?

Alice and Bob are locked away in a room with no observers and flip a coin. Both emerge and Alice states it came up Heads, Bob that it came up Tails.

Of course both cannot be right, but there is no further information. The proper thing for a market to do would be to hover at 50/50. But would it actually do that? I suspect not; I suspect that biases in observers (perhaps the name "Alice" is culturally perceived as more trustworthy) would create a wedge, and as people see that disparity developing it becomes a race to the bottom, much to Bob's dismay.

But even that is the simple case, where there is no history and both Alice and Bob are just arbitrary individuals nobody knows anything about beyond their names. Life is not like that. Bob being determined a liar once becomes an input to any future fraud predictions. Are people really running through a proper Bayesian analysis? Again, my gut feel is that they almost never do. So now you have a self-reinforcing system, in a potentially really nasty way.
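For what it's worth, the proper update is easy to state even if people rarely run it: in odds form, posterior odds equal prior odds times the likelihood ratio of the evidence. A toy calculation (all numbers made up for illustration) shows how far a calibrated posterior sits from treating "determined a liar once" as near-certain guilt:

```python
def posterior_fraud(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Assume a 2% base rate of fraud, and assume a past determination of
# lying is 10x more common among frauds than non-frauds (both made up).
print(round(posterior_fraud(0.02, 10), 2))  # 0.17
```

A market that jumps straight to, say, 80% on Bob's history alone is baking in exactly the self-reinforcement described above.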

And even that is a relatively simple case, where all the inputs are already in the market. What if Bob has an arrest for shoplifting from when he was 18 published in a local paper? What if he's been the target of one of those shady companies that publish (real or fake) arrest records and extort you for takedown money?

It's worth considering the possibility that the sort of dynamics reinforced by a market would be far worse than the ones available without one. At the very least you would want to experimentally test exactly what sort of effects of this nature you might expect.

There will be false positives, and that will be bad, but I think most people believe on the current margin that there are more false negatives. And false negatives are quite bad collectively, even if the harms are diffuse.

Hey George, I disagree with all the arguments here except the last one. But I do share your concern with this idea; it feels icky. Still, as I say, I don't think your arguments hold water.

If the market was "would a newspaper publish verified accusations that Bob lied about what happened in the room", the market would stabilise much lower than 50%. Likely no newspaper would publish an article on this. It's not about what happened; it's about whether an article will be published stating that Bob has acted badly.

As for previous convictions, if Bob knows he's innocent and that no articles are likely to be published on new events, he and those who trust him can bet "no" on the market. And they will likely win. If anything, bettors should be wary of markets based on hearsay, or where the accused has unrelated convictions. I might bet no on those markets.  I think the standard for a credible news org publishing an article with verified claims of fraud is actually very high, even with previous claims. 

As for a general question about the damage these markets do, what about the damage fraud does? I think it's easy to compare to the status quo, but currently charlatans are able to operate under the radar for years. This suggestion would make it easier to combine data. If you see a market about someone you know is shifty that's already quite high, you might be tempted to bet, then contact a journalist.

As for testing it out, I strongly agree that it should be tested. And I think science fraud is a good place to start. It's comparatively less controversial than other scandals.

Where do you think I'm wrong here?
 


There’s a term I learned during the #MeToo revelations: broken step (or missing stair). Kelsey, I think you are right—people rationally look at the cost/benefit of coming forward and conclude that the cost to them personally is too great for them to bear relative to the likely societal benefit.

Anonymous review sites like Glassdoor and VC Guide could possibly be a model for exposing the truth in industries where speaking on the record could be a career-ending decision. And there are news sites like The Intercept that seem more willing to protect sources.

Also, a shout-out to those courageous people who do the calculus and decide public interest outweighs potential personal backlash: Edward Snowden, Thomas Drake, Christine Blasey Ford, and Frances Haugen all come to mind ❤️

Thanks for the post. I'm not sure this work is not super important... I mean, I think someone should be concerned with cases like those precisely before they become "super important" - which might entail a larger counterfactual impact.

Sometimes a liar becomes a biotech CEO, or a powerful financier, and then a journalist (or a short seller, or an investigator, or a whistleblower) will be interested in exposing the lie, because only then will it have enough repercussions to make it worth it (or to find people on the internet who have fun doing it; either way, repercussions and incentives are the key). So here is the mismatch of incentives: only when the lie has spread far will someone show up and expose the liar. Investigative journalists won't dig into a politician's life before the latter becomes powerful or famous; short sellers won't earn much by shorting a poor new startup; a researcher won't care about exposing the methodological problems of someone else's paper before it has started influencing other people...
... But by then a tipping point might have been passed, and either the liar or the lie will become resilient and remain. A politician becomes President, and then no matter what you dig up, their supporters will give them a free pass and they become almost impervious to the truth. Or an ineffective treatment will continue to be prescribed or advocated for, because more people have heard of it than have heard of the research showing it to be bogus.
I don't think we should count only on individuals to do this (or on organizations focused on making a profit from exposing the truth). As I said above, it'd be cool to have some sort of organization concerned with killing lies in the cradle, while it's easier, before it becomes "super important" to do so. But it is probably very hard to estimate the corresponding impact of preventing the harm that could have been caused by a rare event.

This post hits very differently post-SBF. Definitely something to think about.

(Note/HT: Linch is the one I saw notice this post's special post-SBF relevance, not me.)

Good observations. I wonder if it makes sense to have a role for this, a paid full-time position to seek out and expose liars. Think of a policeman, but for epistemics. Then it wouldn't be a distraction from, or a risk to, that person's main job—it would be their job. They could make the mental commitment up front to be ready for a fight from time to time, and the role would select for the kind of person who is ready and willing to do that.

This would be an interesting position for some EA org to fund. A contribution to clean up the epistemic commons.

I think Ozzie Gooen and QURI are pretty interested in stuff like this.

A central problem is that accusing something of being fraudulent carries an immense cost, as it's hard to perceive as anything but a direct attack. Whoever committed the fraud has every incentive to shut you down and very little to lose, which gets very nasty very quickly.

Ideally there would be a low-commitment way to accuse someone of fraud that avoids this. Normalising something akin to "this smells fishy to me", and encouraging a culture of not taking it too personally whenever the hunch turns out wrong, might be a first step towards a culture where fraud is caught more quickly.

as a side note, maaaan did this post trigger a strong feeling of imposter syndrome in me!

ooooops, I'm sorry re: the imposter syndrome - do you have any more detail? I don't want to write in a way that causes that!

I wouldn't worry about it, nothing about your writing in particular. It's not something that caused me any real distress! I think the topic of catching fraud is inherently prone to causing imposter-syndrome, if you often go around feeling like a fraud. You get that vague sense of 'oh no they finally caught me' when you see a post on the topic specifically on the EA Forum.


I think of this the way I think of the law. Our formal laws aim for social order and justice with the least cost to individuals and society. 

In physics terms, it's like reducing entropy: finding order in disorder, or free energy to do work. A wrong accusation in the right place or at the right time could take on a life of its own and ruin someone's life or career at little to no cost to the accuser. So the burden of proof should always be on the person or group making the accusation, and society is right to be wary until there is sufficient proof to turn the tide. It's what we do in science (consilience) to turn a hypothesis or a theory into an undisputed fact.

Justice is patient. Facts are patient. In the meantime, I feel sorry for the misled.

It takes a while for the truth to catch up with lies. The collateral damage is the price we pay for our occasional wilful ignorance.

I wonder if there are other situations where a person has a "main job" (being a scientist, for instance) and is then presented with a "morally urgent situation" that comes up (realizing your colleague is probably a fraud and you should do something about it). The traditional example is being on your way to your established job and seeing someone beaten up on the side of the road whom you could take care of. This "side problem" can be left to someone else (who might take responsibility, or not), and if taken on, it may well be an open-ended and energy-draining project that has unpredictable outcomes for the person deciding whether to take it on. Are there other kinds of "morally urgent side problems that come up", and are there any better or worse ways to deal with the decision whether to engage?

"identifying experts on the TED talk circuit who are doing substantially dishonest or misleading work"--easy: with TED talks, assume guilty until proven innocent. Knowing someone has given a TED talk substantially diminishes my estimate of their credibility unless they are the sort of high-profile person who would be invited to give one as a matter of course (e.g. Bill Gates). The genre positively screams out for the invention of feel-good BS, or counterintuitive "insights" that are just false. 

I think that is more or less what I'm trying to say!

Think of security at a company. Asking a colleague to show their badge before you let them into the building can be seen as rude. But enforcing this principle is also incredibly important for keeping your premises secure. So many companies have attempted to develop a culture where this is not seen as a rude thing to do, but rather a collective effort to keep the company secure.

Similarly I would think it's positive if we develop some sort of way to say "hey this smells fishy" without it being viewed as a direct attack, but rather someone participating in the collective effort to catch fraud.

I think identifying scientific fraud, or identifying experts on the TED talk circuit who are doing substantially dishonest or misleading work, is valuable

Is this specific to dishonesty/fraud? I've done some replications showing that findings are not reliable because of data errors or lack of robustness (here, here, here).

I think checking whether results replicate is also important and valuable work which is undervalued/underrewarded, and I'm glad you do it. 

One dynamic that seems unique to fraud investigations specifically is that while most scientists have some research that has data errors or isn't robust, most aren't outright fabricating. Clear evidence of fake data more or less indicts all of that scientist's other research (at least to my mind) and is a massive change to how much they'll tend to be respected and taken seriously. It can also get papers retracted, while (infuriatingly) papers are rarely retracted for errors or lack of robustness.

But in general I think of fraud as similar in some important ways to other bad research, like the lack of incentives for anyone to investigate it or call it out and the frequency with which 'everyone knows' that research is shady or doesn't hold up and yet no one wants to be the one to actually point it out. 


Great post! Thank you for writing this!

I'm thinking... would it be better to create a new skeptic organization, or to improve and apply EA thought to an existing one? Would a QuackWatch model be scalable with EA methods and funding? Would an organization like The Guardian Foundation be willing to support skepticism-focused investigative journalism? Imagine in the future a section like "Skepticism" on The Guardian, just like the "Global Development" section supported by the Bill and Melinda Gates Foundation. Skepticism could be a future focus area for EA funding. I wish it were already one; I would donate to that focus area.

Great article! I'm amazed there weren't any references to:

  • Tinder Swindler
  • Bad Vegan
  • Inventing Anna

Maybe that's a "know your audience" thing for the EA Forum, but I assume the same concepts apply...

The most prominent example I've seen recently is Frank Abagnale, the real-life protagonist of the supposedly nonfiction movie Catch Me If You Can, who basically fabricated his entire life story and (AFAICT) makes a living off appearances where he tells it. He still regularly gets paid to do this, even though it's pretty well-documented that he's lying about almost everything.

Not a psychologist or even an academic here, but I'll point out that the effects you describe on the whistleblower are exactly the same as when actively trying to break up echo chambers. There are some real parallels here. Echo chambers are like "environmental liars": not purposeful, but they put (presumably) incorrect notions in people's heads, which is something none of us deal well with.

Over the last eight years I made a project for myself of interjecting contrary opinions (which I could back up) into conversations among friends, family, and online communities that I felt were turning this way. I've felt very similar backlash to what you describe, and it's affected me in similar ways. I am now very loath to push back on all subjects except the ones where I feel I am a true expert. It simply takes far too much of an emotional toll.

If you're going to look for research, maybe that's a direction to look as well? Could well be two sides of the same coin.

I think this is a great post. You might already be aware of this, but the podcast Maintenance Phase does some interesting debunking work, although only in the sphere of wellness. And because of the limited sphere, they tend to be aiming at ideas that aren't super widespread yet. One of the hosts, Michael Hobbes, writes articles attempting to debunk claims as well, although he's not aiming to maximize impact.
