
DPiepgrass

964 karma · Joined Feb 2017

Bio

I'm a senior software developer in Canada (earning ~US$70K in a good year) who, being late to the EA party, earns to give. Historically I've had a chronic lack of interest in making money; instead I've developed an unhealthy interest in foundational software that free markets don't build, because its effects would consist almost entirely of positive externalities.

I dream of making the world better by improving programming languages and developer tools, but AFAIK no funding is available for this kind of work outside academia. My open-source projects can be seen at loyc.net, core.loyc.net, ungglish.loyc.net and ecsharp.net (among others).

Comments (145)

Sorry if I sounded redundant. I'd always thought of "evaporative cooling of group beliefs" as something like "we start with a group with similar values/goals/beliefs; the least extreme members gradually disengage and leave; this cascades into a more extreme average, which leads to others leaving"―very analogous to evaporation. I might've misunderstood, but SBF seemed to break the analogy by consistently being the most extreme member, and by actively and personally pushing others away (if, at times, accidentally). Edit: So... arguably one can still apply the evaporative cooling concept to FTX, but I don't see it as an explanation of SBF himself.

What if, instead of releasing very long reports about decisions that were already made, there were a steady stream of small analyses on specific proposals, or even parts of proposals, to enlist others to aid error detection before each decision?

You know what, I was reading Zvi's musings on Going Infinite...

> Q: But it’s still illegal to mislead a bank about the purpose of a bank account.
>
> Michael Lewis: But nobody would have cared about it.
>
> He seems to not understand that this does not make it not a federal crime? That ‘we probably would not have otherwise gotten caught on this one’ is not a valid answer?
>
> Similarly, Lewis clearly thinks ‘the money was still there and eventually people got paid back’ should be some sort of defense for fraud. It isn’t, and it shouldn’t be.
>
> [...]
>
> Nor was Sam a liar, in Lewis’s eyes. Michael Lewis continued to claim, on the Judging Sam podcast, that he could trust Sam completely. That Sam would never lie to him. True, Lewis said, Sam would not volunteer information and he would use exact words. But Sam’s exact words to Lewis, unlike the words he saw Sam constantly spewing to everyone else, could be trusted.
>
> It’s so weird. How can the same person write a book, and yet not have read it?

And it occurred to me that all SBF had to do was find a few people who thought like Michael Lewis, and people like that don't seem rare. I mean, don't like 30% of Americans think that the election was stolen from Trump, or that the cases against Trump are a witch hunt, because Trump says so and my friends all agree he's a good guy (and they seek out pep talks to support such thoughts)? Generally the EA community isn't tricked this easily, but SBF was smarter than Trump and he only needed to find a handful of people willing to look the other way while trusting in his Brilliance and Goodness. And since he was smart (and overconfident) and did want to do good things, he needed no grand scheme to deceive people about that. He just needed people like Lewis who lacked a gag reflex at all the bad things he was doing.

Before FTX I would've simply assumed other EAs had a "moral gag reflex" already. Afterward, I think we need more preaching about that (and more "punchy" ways to hammer home the importance of things like virtues, rules, reputation and conscientiousness, even or especially in utilitarianism/consequentialism). Such preaching might not have affected SBF himself (since he cut so many corners in his thinking and listening), but someone in his orbit might have needed to hear it.

> this almost confirms for me that FTX belongs on the list of ways EA and rationalist organizations can basically go insane in harmful ways,

I was confused by this until I read more carefully. This link's hypothesis is about people just trying to fit in―but SBF seemed not to try to fit into his peer group! He engaged in a series of reckless and fraudulent behaviors that none of his peers seemed to want. From Going Infinite:

> He had not been able to let Modelbot rip the way he’d liked—because just about every other human being inside Alameda Research was doing whatever they could to stop him. “It was entirely within the realm of possibility that we could lose all our money in an hour,” said one. One hundred seventy million dollars that might otherwise go to effective altruism could simply go poof. [...]
>
> Tara argued heatedly with Sam until he caved and agreed to what she thought was a reasonable compromise: he could turn on Modelbot so long as he and at least one other person were present to watch it, but should turn it off if it started losing money. “I said, ‘Okay, I’m going home to go to sleep,’ and as soon as I left, Sam turned it on and fell asleep,” recalled Tara. From that moment the entire management team gave up on ever trusting Sam.

Example from Matt Levine:

> There is an anecdote (which has been reported before) from the early days of Alameda Research, the crypto trading firm that Bankman-Fried started before his crypto exchange FTX, the firm whose trades with FTX customer money ultimately brought down the whole thing. At some point Alameda lost track of $4 million of investor money, and the rest of the management team was like “huh we should tell our investors that we lost their money,” and Bankman-Fried was like “nah it’s fine, we’ll probably find it again, let’s just tell them it’s still here.” The rest of the management team was horrified and quit in a huff, loudly telling the investors that Bankman-Fried was dishonest and reckless.

It sounds like SBF drove away everyone who couldn't stand his methods until only people who tolerated him were left. That's a pretty different way of making an organization go insane.

It seems like this shouldn't be an EA failure mode when the EA community is working well. Word should have gotten around about SBF's shadiness and recklessness, leading to some kind of investigation before FTX reached the point of collapse. The first person I heard making the case against SBF post-collapse was an EA (Rob Wiblin?), but we were way too slow. Of course, it has been pointed out that many people who worked with or invested in FTX were fooled as well, so what I wonder about is: why weren't there any EA whistleblowers on the inside? Edit: was it that only four people plus SBF knew about FTX's worst behaviors, and the chance of any given person whistle-blowing in a situation like that is under 25%ish? But certainly more people than that knew he was shady. Edit 2: I just saw important details on who knew what. P.S. I will never get used to the EA/Rat tendency to downvote earnest comments without leaving comments of their own...
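To spell out that back-of-the-envelope guess: a rough sketch, assuming (hypothetically) that the four insiders besides SBF decide independently and each have at most a 25% chance of blowing the whistle:

$$P(\text{no whistleblower}) = (1-p)^4 \ge (1-0.25)^4 \approx 0.32$$

So under those (debatable) assumptions, total silence still has roughly a one-in-three chance or better.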

> Superforecasters more than quadruple their extinction risk forecasts by 2100 if conditioned on AGI or TAI by 2070.

  • The data in this table is strange! Originally, superforecasters gave 0.38% for extinction by 2100 (though 0.088% for the RS top quintile), but in this survey it's 0.225%. Why? Also, somehow the first number has 3 digits of precision while the second number is "1%", which is maximally lacking in significant digits (like, if you were rounding off, 0.55% would end up as 1%).
  • The implied result is strange! How could participants' AGI timelines possibly be so long? I notice ACX comments may explain this as the result of a poor process for classifying people as "superforecasters" and/or "experts".

I'd strongly like to see three other kinds of outcome analyzed in future tournaments, especially in the context of AI:

  1. Authoritarian takeover: how likely is it that events in the next few decades weaken the US/EU and/or strengthen China (or another dictatorship), eventually leading to world takeover by dictatorship(s)? How likely is it that AGIs either (i) bestow dictatorial powers on a few people or a single person (*cough* Sam Altman) or (ii) strengthen the power of existing dictators, whether within their own country or by enabling territorial and/or soft-power expansion?
  2. Dystopia: what's the chance of some kind of AGI-induced hellscape in which life is worse for most people than today, with little chance of improvement? (This may overlap with other outcomes, of course)
  3. Permanent loss of control: fully autonomous ASIs (genius-level and smarter) would likely take control of the world, such that humans no longer have influence. If this happens and leads to catastrophe (or utopia, for that matter), then it's arguably more important to estimate when loss of control occurs than when the catastrophe itself occurs (and in general it seems like "date of the point of no return on the path to X" is more important than "date of X", though the concept is fuzzier). Besides, I am very skeptical of any human's ability to predict what will happen after a loss of control event. I'm inclined to think of such an event almost like an event horizon, which is a second reason that forecasting the event itself is more important than forecasting the eventual outcome.

Replies to those comments mostly concur:

(Replies to Jacob)

> (AdamB) I had almost exactly the same experience.

> (sclmlw) I'm sorry you didn't get into the weeds of the tournament. My experience was that most of the best discussions came at later stages of the tournament. [...]

(Replies to magic9mushroom)

> (Dogiv) I agree, unfortunately there was a lot of low effort participation, and a shocking number of really dumb answers, like putting the probability that something will happen by 2030 higher than the probability it will happen by 2050. In one memorable case a forecaster was answering the "number of future humans who will ever live" and put a number less than 100. I hope these people were filtered out and not included in the final results, but I don't know.
>
> I also recommend taking a look at Damien Laird's post-mortem.
>
> Damien and I were in the same group and he wrote it up much better than I could.
>
> FWIW I had AI extinction risk at 22% during the tournament and I would put it significantly higher now (probably in the 30s, though I haven't built an actual model lately). Seeing the tournament results hardly affects my prediction at all. I think a lot of people in the tournament may have anchored on Ord's estimate of 10% and Joe Carlsmith's similar prediction, which were both mentioned in the question documentation, as the "doomer" opinion and didn't want to go above it and be even crazier.

> (Sergio) I don’t think we were on the same team (based on your AI extinction forecast), but I also encountered several instances of low-effort participation and answers which were as baffling as those you mention at the beginning (or worse). One of my resulting impressions was that the selection process for superforecasters had not been very strict.

This post wasn't clear about how the college students were asked about extinction, but here's a hypothesis: public predictions of "the year of human extinction" at 2500 and "the number of future humans" at 9 billion are the result of normies hearing a question that mentions "extinction", imagining an extinction scenario or three, guessing a year, and simply giving that year as their answer (without having made any attempt to mentally construct a probability distribution).

I actually visited this page to learn about how the "persuasion" part of the tournament panned out, though, and I see nothing about that topic here. Guess I'll check the post on AI next...

> Apr 30 2022

Is there a newer one? Didn't find one with a quick search.

> This post focuses on higher level “cause areas”, not on lower-level “interventions”

Okay, but what if my proposed intervention is a mixture of things? I think of it as a combination of public education, doing Google's job for them (organizing the world's information), promoting rationality/epistemics/empiricism, and reducing catastrophic risk (because popular false beliefs have exacerbated global warming, may destabilize the United States in the future, etc.).

I would caution against thinking the Hard Problem of Consciousness is unsolvable "by definition" (if it is solved, qualia will likely become quantifiable). I think the reasonable thing is to presume it is solvable. But until it is solved we must not allow AGI takeover; and even if AGIs stay under human control, they could lead to a previously unimaginable power imbalance between a few humans and the rest of us.

To me, it's important whether the AGIs are benevolent and have qualia/consciousness. If AGIs are ordinary computers but smart, I may agree; if they are conscious and benevolent, I'm okay being a pet.
