Buck

Researcher at MIRI. http://shlegeris.com/

Comments

My guess is that this feedback would be unhelpful and would probably push the grantmakers towards making worse grants that are less time-consuming to justify to uninformed donors.

Evidence on correlation between making less than parents and welfare/happiness?

Inasmuch as you expect people to keep getting richer, it seems reasonable to hope that no generation has to be more frugal than the previous one.

In defence of epistemic modesty

"when domain experts look at the 'answer according to the rationalist community re. X', they're usually very unimpressed, even if they're sympathetic to the view themselves. I'm pretty Atheist, but I find the 'answer' to the theism question per LW or similar woefully rudimentary compared to state of the art discussion in the field. I see similar experts on animal consciousness, quantum mechanics, free will, and so on similarly be deeply unimpressed with the sophistication of argument offered."

I would love to see better evidence about this. Eg it doesn't match my experience of talking to physicists.

What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent?

I think he wouldn't have thought of this as "throwing the community under the bus". I'm also pretty skeptical that this consideration is strong enough to be the main one here (as opposed to, eg, the fact that Wayne seems way more interested in making the world better from a cosmopolitan perspective than the other candidates for mayor).

What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent?

Wayne at least sort of identified as an EA in 2015, eg by hosting EA meetups at his house. And he's been claiming to be interested in evidence-based approaches to making the world better since at least then.

EA Uni Group Forecasting Tournament!

I think this is a great idea, and I'm excited that you're doing it.

Buck's Shortform

I’ve recently been thinking about medieval alchemy as a metaphor for longtermist EA.

I think there’s a sense in which it was an extremely reasonable choice to study alchemy. The basic hope of alchemy was that by fiddling around in various ways with the substances you had, you’d be able to turn them into other things with more helpful properties. It would be a really big deal if humans were able to do this.

And it seems a priori pretty reasonable to have expected that humanity could get way better at manipulating substances: there was an established history of people figuring out useful things you could do by fiddling with substances in weird ways (metallurgy and glassmaking, for example), and there were lots of examples of materials having different and useful properties. If you had been particularly forward-thinking, you might even have noted that it seemed plausible that humans would eventually be able to do the full range of manipulations of materials that life is able to do.

So I think that alchemists deserve a lot of points for spotting a really big and important consideration about the future. (I actually have no idea if any alchemists were thinking about it this way; that’s why I billed this as a metaphor rather than an analogy.) But they were wrong about how almost everything worked, and so most of their work before 1650 was pretty useless.

It’s interesting to think about whether EA is in a similar spot. I think EA has done a great job of identifying crucial and underrated considerations about how to do good and what the future will be like, eg x-risk and AI alignment. But I think our ideas for acting on these considerations seem much more tenuous. And it wouldn’t be super shocking to find out that later generations of longtermists think that our plans and ideas about the world are similarly inaccurate.

So what should you have done if you were an alchemist in the 1500s who agreed with this argument: you had spotted some really underrated considerations, but you didn’t have great ideas for what to do about them?

I think that you should probably have done some of the following things:

  • Try to establish the limits of your knowledge and be clear about the fact that you’re in possession of good questions rather than good answers.
  • Do lots of measurements, write down your experiments clearly, and disseminate the results widely, so that other alchemists could make faster progress.
  • Push for better scientific norms. (Scientific norms were in fact invented in large part by Robert Boyle for the sake of making chemistry a better field.)
  • Work on building devices which would enable people to do experiments better.

Overall I feel like the alchemists did pretty well at making the world better, and if they’d been more altruistically motivated they would have done even better.

There are some reasons to think that pushing early chemistry forward was easier than working on improving the long-term future. In particular, you might think that it’s only possible to work on x-risk stuff around the time of the hinge of history.

Some thoughts on EA outreach to high schoolers

Yeah, I thought about this; it’s standard marketing terminology, and concise, which is why I ended up using it. Thanks though.

Buck's Shortform

I thought this post was really bad, basically for the reasons described by Rohin in his comment. I think it's pretty sad that that post has positive karma.

Deliberate Consumption of Emotional Content to Increase Altruistic Motivation

When I was 18 I watched a lot of videos of animal suffering, eg linked from Brian Tomasik's list of distressing videos of suffering (extremely obvious content warning: extreme suffering).  I am not sure whether I'd recommend this to others.

As a result, I felt a lot of hatred for people who were knowingly complicit in causing extreme animal suffering, which was basically everyone I knew. At the time I lived in a catered college at university, where every day I'd see people around me eating animal products; I felt deeply alienated and angry and hateful.

This was good in some ways. I think it's plausibly healthy to feel a lot of hatred for society. I think that this caused me to care even less about what people thought of me, which made it easier for me to do various weird things like dropping out of university (temporarily) and moving to America.

I told a lot of people to their faces that I thought they were contemptible. I don't feel like I'm in the wrong for saying this, but it probably didn't lead to me making many more friends than I otherwise would have. And on one occasion I was very cruel to someone who didn't deserve it; I felt worse about that than about basically anything else I'd done in my life.

I don't know whether I'd recommend this to other people. Probably some people should feel more alienated and others should feel less alienated.
