
capybaralet

124 karma · Joined Aug 2015

Comments (61)

Great post!

This framing doesn't seem to capture the concern that even slight misspecification (e.g. a reward function that is a bit off) could lead to an x-catastrophe.

I think this is a big part of many people's concerns, including mine.

This seems somewhat orthogonal to the Saint/Sycophant/Schemer disjunction... or to put it another way, it seems like a Saint that is just not quite right about what your interests actually are (e.g. because they have alien biology and culture) could still be an x-risk.

Thoughts?

Reminds me of the House of Saud (although I'm not saying they have this goal, or any shared goal):
"The family in total is estimated to comprise some 15,000 members; however, the majority of power, influence and wealth is possessed by a group of about 2,000 of them. Some estimates of the royal family's wealth measure their net worth at $1.4 trillion"
https://en.wikipedia.org/wiki/House_of_Saud
 

IMO, the best argument against strong longtermism ATM is moral cluelessness.  

IMO, the main things holding back scaling are EA's (in)ability to identify good "shovel-ready" ideas and talent within the community, and to allocate funds appropriately. I think this is a very general problem that we should be devoting more resources to. Related problems are training and credentialing, and solving common-good problems within the EA community.

I'm probably not articulating all of this very well, but basically I think EA should focus a lot more on figuring out how to operate effectively, make collective decisions, and distribute resources internally.  

These are very general problems that haven't been solved very well outside of EA either.  But the EA community still probably has a lot to learn from orgs/people outside EA about this.  If we can make progress here, it can scale outside of the EA community as well.

I view economists as more like physicists working with spherical cows, and often happy to continue working that way. So we should expect lots of specific blind spots, for them to be easy to identify, and for them to be readily acknowledged by many economists. Under this model, economists are also not particularly concerned with the practical implications of the simplifications they make, and hence would readily acknowledge many specific limitations of their models. Another way of putting it: this is more of a blind spot for economics than for economists.

I'll also get back to this point about measurement... there's a huge space between "nature has intrinsic value" and "we can measure the extrinsic value of nature".  I think the most reasonable position is:
- Nature has some intrinsic value, because there are conscious beings in it (with a bonus because we don't understand consciousness well enough to be confident that we aren't under-counting).
- Nature has hard to quantify, long-term extrinsic value (in expectation), and we shouldn't imagine that we'll be able to quantify it appropriately any time soon.
- We should still try to quantify it sometimes, in order to use quantitative decision-making / decision-support tools.  But we should maintain awareness of the limitations of these efforts.

It hardly seems "inexplicable"... this stuff is harder to quantify, especially in terms of long-term value. I think there's an interesting contrast between your comment and jackmalde's below: "It's also hardly news that GDP isn't a perfect measure."

So I don't really see why there should be a high level of skepticism of the claim that "economists haven't done a good job of modelling X [= the value of nature]". I'd guess most economists would emphatically agree with this sort of critique.

Or perhaps there's an underlying disagreement about what to do when we have a hard time modelling something: do we mostly just ignore it, or do we try to reason about it less formally? I think the latter is clearly correct, but I get the sense a lot of people in EA would disagree (e.g. the "evidence-based charity" perspective seems to go against this).

I think this illustrates a harmful double standard.  Let me substitute a different cause area in your statement:
"Sounds like any future project meant to reduce x-risk will have to deal with the measurement problem".


 

Online meetings could be an alternative/supplement, especially in the post-COVID world.

Reiterating my other comments: I don't think it's appropriate to say that the evidence showed it made sense to give up.  As others have mentioned, there are measurement issues here.  So this is a case where absence of evidence is not strong evidence of absence.  
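To make the "absence of evidence" point concrete, here's a minimal Bayesian sketch (the numbers are made up purely for illustration, not drawn from any actual study): if a measurement had little chance of detecting impact even when the intervention works, a null result should barely move our credence.

```python
# Toy Bayesian update: why a null result from a low-powered measurement
# is only weak evidence that an intervention doesn't work.
# All probabilities below are illustrative assumptions, not real data.

prior_works = 0.5        # prior credence that the intervention works
p_detect_if_works = 0.3  # chance the (low-powered) measurement detects impact if it works
p_detect_if_not = 0.05   # chance of a spurious positive finding if it doesn't work

# We observe *no* detected impact.
p_null_if_works = 1 - p_detect_if_works  # 0.70
p_null_if_not = 1 - p_detect_if_not      # 0.95

posterior_works = (prior_works * p_null_if_works) / (
    prior_works * p_null_if_works + (1 - prior_works) * p_null_if_not
)
print(f"P(works | null result) = {posterior_works:.2f}")  # ~0.42, barely below the 0.5 prior
```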

Just because they didn't get the evidence of impact they were aiming for doesn't mean it "didn't work".  

I understand if EAs want to focus on interventions with strong evidence of impact, but I think it's terrible comms (both for PR and for our own epistemics) to go around saying that interventions lacking such evidence don't work.

It's also pretty inconsistent; we don't seem to have that attitude about spending $$ on speculative longtermist interventions! (Although I'm sure some EAs do have that attitude, I'm pretty sure it's a minority view.)
