anonymous_ea

Comments

Strong Longtermism, Irrefutability, and Moral Progress

It has, however, succumbed to a third — mathematical authority. Firmly grounded in Bayesian epistemology, the community is losing its ability to step away from the numbers when appropriate, and has forgotten that its favourite tools — expected value calculations, Bayes' theorem, and mathematical models — are precisely that: tools. They are not in and of themselves a window onto truth, and they are not always applicable. Rather than respect the limits of their scope, however, EA seems to be adopting the dogma captured by the charming epithet "shut up and multiply".

 

I wonder if this old post by GiveWell (and OpenPhil's ED) about expected value calculations assuages your fears a bit: Why we can’t take expected value estimates literally (even when they’re unbiased)

Personally I think equating strong longtermism with longtermism is not really correct. Longtermism is a much weaker claim. I highly doubt most longtermists are in danger of being convinced that strong longtermism is true, although I don't have any real data on it. 

Long-Term Future Fund: Ask Us Anything!

I think I would spend a substantial amount of money on prizes for people who seem to have done obviously really good things for the world. Giving $10M to scihub seems worth it. Maybe giving $5M to Daniel Ellsberg as a prize for his lifetime achievements. There are probably more people in this reference class of people who seem to me to have done heroic things, but haven't even been remotely well enough rewarded (like, it seems obvious that I would have wanted Einstein to die having at least a few millions in the bank, so righting wrongs of that reference class seems valuable, though Einstein did at least get a Nobel prize). My guess is one could spend another $100M this way.

 

I'm really surprised by this; I think things like the Future of Life Award are good, but if I got $1B I would definitely not think about spending potentially $100M on similar awards as an EA endeavor. Can you say more about this? Why do you think this is so valuable?

Long-Term Future Fund: Ask Us Anything!

Regardless of whatever happens, I've benefited greatly from all the effort you've put into your public writing on the fund, Oliver.

How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs

I learned from stories 1 and 2 - thanks for the information!

Story 3 feels like it suffers from a lack of familiarity with EA and argues against a straw version. E.g., you write (emphasis added):

As the community grew it spread into new areas – Animal Charity Evaluators was founded in 2012 looking at animal welfare – the community also connected to the rationalist community that was worried about AI and to academics at FHI thinking about the long term future. Throughout all of this expected value calculations remained the gold star for making decisions on how to do good. The idea was to shut up and multiply. Even as effective altruism decision makers spread into areas of greater and greater uncertainty they (as far as I can tell) have mostly continued to use the same decision making tools (expected value calculations), without questioning if these were the best tools.

By 2011 GiveWell had already published Why we can’t take expected value estimates literally (even when they’re unbiased), arguing against, well, taking expected value calculations literally, critiquing GWWC's work on that basis, and discussing how their solution avoided Pascal's Mugging. There was a healthy discussion in the comments and the cross-post on LessWrong got 100 upvotes and 250 comments.
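For readers who haven't seen the GiveWell post, its core move is roughly a normal-normal Bayesian update: rather than taking a noisy expected value estimate at face value, you shrink it toward your prior in proportion to how uncertain the estimate is. The sketch below is only an illustration of that idea under made-up numbers; the function name and figures are hypothetical and not taken from the post or this thread.

```python
# Minimal sketch of Bayesian adjustment of an expected value estimate,
# assuming a normal prior and normally distributed estimate noise.
# All numbers below are illustrative, not from GiveWell.

def bayesian_adjusted_estimate(prior_mean, prior_var, estimate, estimate_var):
    """Posterior mean when a normal prior is combined with a noisy estimate."""
    precision_prior = 1.0 / prior_var
    precision_estimate = 1.0 / estimate_var
    return (prior_mean * precision_prior + estimate * precision_estimate) / (
        precision_prior + precision_estimate
    )

# A back-of-the-envelope model claims 1000 units of good per $1M, but the
# model is very noisy; our prior for interventions like this centres on 50.
adjusted = bayesian_adjusted_estimate(
    prior_mean=50, prior_var=100**2, estimate=1000, estimate_var=2000**2
)
print(adjusted)  # ~52: the extreme but unreliable estimate is heavily discounted
```

The point of the toy example is just that the adjusted figure stays close to the prior when the estimate's variance is large, which is why literal expected value estimates and Pascal's-Mugging-style arguments get discounted under this approach.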

KevinO's Shortform

I just voted for the GFI, AMF, and GD videos because of your comment!

I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA

Even if that's not what edoard meant, I would be interested in hearing the answer to 'What are things you would say if you didn't need to be risk-averse?'!

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

Meta: A big thank you to Buck for doing this and putting so much effort into it! This was very interesting and will hopefully encourage more public dissemination of knowledge and opinions.

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

I agree with Issa about the costs of not giving reasons. My guess is that over the long run, giving reasons why you believe what you believe will be a better strategy to avoid convincing people of false things. Saying you believed X and now believe ~X seems like it's likely to convince people of ~X even more strongly.
