Lukas_Gloor

I agree the Elon Musk thing was sketchy and arguably a bad decision, but that also wasn't public and not related to SBF's EA activities.

Whether it was "sketchy and arguably a bad decision" isn't the primary issue. Peter Wildeford pointed out that Will vouched for SBF. Vouching = staking your reputation to guarantee that someone has integrity/can be trusted.

(Many of the things you say in your comment seem reasonable to me as well, but I feel like we can't just skip over the vouching part even if it was non-public. If he vouched towards Musk he probably did the same in lots of other contexts, or conveyed trust in Sam in less explicit ways, at least.)

Isn't this a bit too much hindsight? Conditional on trusting that nothing fraudulent is going on, you might just think "what route the money takes is an ops question that I leave to FTX ops staff to figure out" and not worry about it further. The only reason to get personally involved in insisting that the money flow a particular way, it seems, is if you're already suspicious that something might be really wrong.

Okay, that seems right. In the article, it's worded like this: 

Give people a second or third chance; adjust when people have changed and improved

The second part of the sentence adds some nuance, as does the contrast table.

Still, I remember feeling a bit weird about the wording even when that article came out, but I didn't comment. (For me, the phrase "third chance" evokes the picture of the person giving the third chance being naive.) (Edit: esp. when it's presented as though this is a somewhat common thing, giving people third chances in "evidence this person is a bad actor" contexts.) 

I think the "or third chance" could be phrased differently. Sure, in specific circumstances, that might be appropriate, but it shouldn't sound like a general rule.  Second chances should suffice. People rarely change.

I think the discussion of hits-based giving is a bit beside the point. Many of the criticisms in the OP ("original post") would speak against the grant even under hits-based giving. The only part where I think "hits-based giving" could be a satisfactory response is on the issue of prior expertise: if everything else looks promising, then it shouldn't be a dealbreaker that someone lacks prior experience.

As I understand it, hits-based giving still means you have to be able to point to some specific reasons why the grant could turn out to be very impactful. And I understood the OP to be expressing something like, "I can't find any specific reasons to expect this grant to be very impactful, except for its focus area – but there are other projects in the same space, so I wonder why this one was chosen." 

There is scant public information that could justify it as the best-placed and most appropriate recipient, a clear risk of nepotism inherent in the recipient organization, and [...]

When I read this part of your bullet-point summary, I thought someone at Open Phil might be related to someone at Helena. But then it became clear that you mean that the Helena founder dropped out of college, supported by money from his rich investor dad, to start a project that you think "(subjectively) seems like" self-aggrandizement.

(The word "inherent" probably makes clear what you mean; I just had a prior that nepotism is a problem when someone receives funding, and I didn't know that you were talking about other  funding that Helena also received.) 


 

I watched most of a YouTube video on this topic to see what it's about.

I think I agree that "coordination problems are the biggest issue that's facing us" is an underrated perspective. I see it as a reason for less optimism about the future.

The term "crisis" (in "metacrisis") makes it sound like it's something new and acute, but it seems that we've had coordination problems for all of history. Though maybe their effects are getting worse because of accelerating technological progress?

In any case, in the video I watched, Schmachtenberger mentioned the saying, "If you understand a problem, you're halfway there toward solving it." (Not sure that was the exact wording, but something like that.) Unfortunately, I don't think the saying holds here. I feel quite pessimistic about changing the dynamics behind why Earth is so unlike Yudkowsky's "dath ilan." Maybe I stopped the Schmachtenberger video before he got to the solution proposals (but I feel like if he had great solution proposals, he should lead with those). In my view, the catch-22 is that you need well-functioning (and sane and compassionate) groups/companies/institutions/government branches to "reform" anything, which is challenging when your problem is that groups/companies/institutions/government branches don't work well (or aren't sane or compassionate).

I didn't watch the entire video by Schmachtenberger, but I got a sense that he thinks something like, "If we can change societal incentives, we can address the metacrisis." Unfortunately, I think this is extremely hard – it's swimming upstream, and even if we were able to change some societal incentives, they'd at best go from "vastly suboptimal" to "still pretty suboptimal." (I think it would require god-like technology to create anything close to optimal social incentives.) 

Of course, that doesn't mean making things better is not worth trying. If I had longer AI timelines, I would probably think of this as the top priority. (Accordingly, I think it's weird that this isn't on the radar of more EAs, since many EAs have longer timelines than me?) 

My approach is mostly taking for granted that large parts of the world are broken, so I recommend working with the groups/companies/institutions/government branches that still function, expanding existing pockets of sanity and creating new ones.

Of course, if someone had an idea for changing the way people consume news, or for making a better version of social media – one that creates more of a shared reality and shared priorities about what matters in the world and improves public discourse – I'd be like "this is very much worth trying!" But it seems challenging to compete for attention against clickbait and outrage-amplification machinery.

EA already has the cause area "improving institutional decision-making." I think things like approval voting are cool and I like forecasting just like many EAs, but I'd probably place more of a focus on "expanding pockets of sanity" or "building new pockets of sanity from scratch." "Improving" suggests that change is gradual. My cognitive style might be biased towards black-and-white thinking, but to me it really feels like a lot of institutions/groups/companies/government branches fall into one of two types: "dysfunctional" and "please give us more of that." It's pointless to try to improve the ones with dysfunctional leadership or culture (instead, those have to be reformed or you have to work without them). Focus on what works and create more of it.

That would be a valid reply if I had said it's all about priors. All I said was that I think priors make up a significant implicit source of the disagreement – as suggested by some people thinking 5% risk of doom seems "high" and me thinking/reacting with "you wouldn't be saying that if you had anything close to my priors."

Or maybe what I mean is stronger than "priors." "Differences in underlying worldviews" seems like the better description. Specifically, the worldview I identify more with, which I think many EAs don't share, is something like "The Yudkowskian worldview where the world is insane, most institutions are incompetent, Inadequate Equilibria is a big deal, etc." And that probably affects things like whether we anchor way below 50% or above 50% on what the risks should be that the culmination of accelerating technological progress will go well or not.

In general I’m skeptical of arguments of disagreement which reduce things to differing priors. It’s just not physically or predictively correct, and it feels nice because now you no longer have an epistemological duty to go and see why relevant people have differing opinions.

That's misdescribing the scope of my point and drawing inappropriate inferences. The last time I made an object-level argument about AI misalignment risk was just 3h before your comment. (Not sure it's particularly intelligible, but the point is, I'm trying! :) ) So, evidently, I agree that a lot of the discussion should be held at a deeper level than that of priors/general worldviews.

Quintin has lots of information, I have lots of information, so if we were both acting optimally according to differing priors, our opinions likely would have converged.

I'm a fan of Shard theory and some of the considerations behind it have already updated me towards a lower chance of doom than I had before starting to incorporate it more into my thinking. (Which I'm still in the process of doing.)

Yes to (paraphrased) "5% should plausibly still be civilization's top priority."

However, in another sense, 5% is indeed low!

I think that's a significant implicit source of disagreement over AI doom likelihoods – what sort of priors people start with.

The following will be a bit simplistic (in reality proponents of each side will probably state their position in more sophisticated ways).
 
On one side, optimists may use a prior of "It's rare that humans build important new technology and it doesn't function the way it's intended."

On the other side, pessimists can say that it has almost never happened that people who developed a revolutionary new technology displayed a lot of foresight about its long-term consequences when they started using it. For instance, there were comparatively few efforts at major social media companies to address ways in which social media might change society for the worse. The same reasoning applies to the food industry and the obesity epidemic, or to online dating and its effects on single-parenthood rates.

I'm not saying revolutions in these sectors were overall negative for human happiness – just that there seem to be costly negative side effects where no one competent has ever been "in charge" of proactively addressing them (nor do we have good plans to address them anytime soon). So, it's not easily apparent how we'll suddenly get rid of all these issues and fix the underlying dynamics, apart from "AI will give us god-like power to fix everything." The pessimists can argue that humans have never seemed particularly "in control" of technological progress. There's this accelerating force that improves things on some metrics but makes other things worse elsewhere. (Pinker-style arguments for the world getting better seem one-sided to me – he mostly looks at trends that were already relevant hundreds of years ago, but doesn't talk about "newer problems" that only arose as Molochian side effects of technological progress.)
AI will be the culmination of all that (of the accelerating forces that have positive effects on immediately legible metrics, but negative effects on some other variables due to Molochian dynamics). Unless we use it to attain a degree of control that we never had, it won't go well.
To conclude, there's a sense in which believing "AI doom risk is only 5%" is like believing there's a 95% chance that AI will solve all the world's major problems. Expressed that way, it seems like a pretty strong claim.

(The above holds especially for definitions of "AI doom" where humanity would lose most of its long-term "potential." That said, even if by "AI doom" one means something like "people all die," one can argue that a likely endpoint/attractor state of not being able to fix all the world's major problems is people's extinction, eventually.)

I've been meaning to write a longer post on these topics at some point, but may not get to it anytime soon.

That makes sense – I get why you feel like there are double standards. 

I don't agree that there necessarily are.

Regarding Bostrom's apology, I guess you could say that it's part of "truth-seeking" to dive into any mistakes you might have made and acknowledge everything there is to acknowledge. (Whether we call it "truth-seeking" or not, that's certainly how apologies should be, in an ideal world.) On this point, Bostrom's apology was clearly suboptimal. It didn't acknowledge that there was more bad stuff to the initial email than just the racial slur.

Namely, in my view, it's not really defensible to say "technically true" things without some qualifying context if those true things, on their own, are easily interpreted in a misleadingly negative or harmful-belief-promoting way – or even, as you say, as "racist dogwhistles." (I think that phrase is sometimes thrown around so lightly that it seems a bit hysterical, but it does seem appropriate for the specific example of the sentence Bostrom claimed he "likes.")

Take for example a newspaper reporting on a person with autism who committed a school shooting. Given the widespread stigma against autism, it would be inappropriate to imply that autism is linked to these types of crimes without some sort of very careful discussion that doesn't make readers prejudiced against people on the spectrum. (I don't actually know if there's any such link.)

What I considered bad about Bostrom's apology was that he didn't say more about why his entire stance on "controversial communication" was a bad take. 

Given all of the above, why did I say that I found Bostrom's apology "reasonable"?

  • "Reasonable" is a lower bar than "good."
  • Context matters: The initial email was never intended to be seen by anyone who wasn't in that early group of transhumanists. In a small, closed group, communication functions very differently. For instance, among EA friends, I've recently (after the FTX situation) made a joke about how we should run a scam to make money. The joke works because my friends have enough context to know I don't mean it. I wouldn't make the same joke in a group where it isn't common knowledge that I'm joking. Similarly, while I don't know much about the transhumanist mailing list, it's probably safe to say that "we're all high-decouplers and care about all of humanity" was common knowledge in that group. Given that context, it's sort of defensible to think that there's not that much wrong with the initial email (apart from cringiness) other than the use of the racial slur. Bostrom did apologize for the latter (even viscerally and unambiguously).
  • I thought there was some ambiguity in the apology about whether he was only apologizing for the racial slur or whether he also meant the email in general when he described how he hated re-reading it. When I said the apology was "reasonable," I interpreted him to mean the email in general. I agree he could have made this clearer.

In any case, that's one way to interpret "truth-seeking" – trying to get to the bottom of any mistakes that were made when apologizing. 

That said, I think almost all the mentions of "truth-seeking is important" in the Bostrom discussion were about something else.

There was a faction of people who thought that others should be socially shunned for holding specific views on the underlying causes of group differences. Another faction was like, "it should be okay to say 'I don't know' if you actually don't know."

While a few people criticized Bostrom's apology for reasons similar to the ones I mentioned above (which I obviously think is reasonable!), my impression is that the people who were most critical of it did so for the "social shunning for not completely renouncing a specific view" reason.

For what it's worth, I agree that emphasis on truth-seeking can go too far. While I appreciated this part of EA culture in the discussion around Bostrom, I've several times found myself accusing individual rationalists of fetishizing "truth-seeking." :)

So, I certainly don't disagree with your impression that there can be biases on both sides.
