

AI safety is such a new field that I don't expect you need to be a genius to do anything groundbreaking.

They claim to be working on areas like game theory, decision theory, and mathematical logic, which are all well-developed fields of study. I see no reason to think those fields have lots of low-hanging fruit that would allow average researchers to make huge breakthroughs. Sure, they have a new angle on those fields, but does a new angle really overcome their lack of an impressive research track record?

But I expect them to be better at AI safety research because MIRI has been working on it for longer and has a stronger grasp of the technical challenges.

Do they have a stronger grasp of the technical challenges? They're certainly opinionated about what it will take to make AI safe, but their (public) justifications for those opinions look pretty flimsy.

If I had to guess, I would guess FLI, given their ability to at least theoretically use the money for grant-making. Though after Elon Musk's $10 million donation, this cause area seems to be short on room for more funding.

Thanks for writing this, Michael. More people should write up documents like these. I've been thinking of doing something similar, but haven't found the time yet.

I realized reading this that I haven't thought much about REG. It sounds like they do good things, but I'm a bit skeptical re: their ability to make good use of the marginal donation they get. I don't think a small budget, by itself, is strong evidence that they could make good use of more money. Can you talk more about what convinced you that they're a good giving opportunity on the margin? (I'm thinking out loud here, don't mean this paragraph to be a criticism.)

Re: ACE's recommended charities. I know you know I think this, but I think it's better for the health of ACE if their supporters divide their money between ACE and its recommended charities, even if the evidence for its recommended charities isn't currently as strong as I'd like. But I admit this is based on a fuzzy heuristic, not a knock-down argument.

Re: MIRI. Setting aside what I think of Yudkowsky, I think you may be overlooking the fact that "competence" is relative to what you're trying to accomplish. Luke Muehlhauser accomplished a lot in terms of getting MIRI to follow nonprofit best practices, and from what I've read of his writing, I expect he'll do very well in his new role as an analyst for GiveWell. But there's a huge gulf between being competent in that sense, and being able to do (or supervise other people doing) groundbreaking math and CS research.

Nate Soares seems as smart as you'd expect a former Google engineer to be, but would I expect him to do anything really groundbreaking? No. Would I expect even the couple of actual PhDs MIRI hired recently to do anything really groundbreaking? They might, but I don't see why you'd think it likely.

In a way, it was easier to make a case for MIRI back when they did a lot of advocacy work. Now that they're billing themselves as a research institute, I think they've set a much higher bar for themselves, and when it comes to doing research (as opposed to advocacy) they've got much less of a track record to go on.

I was 12 when those demonstrations happened, and I'm a little fuzzy on the agenda of the protesters. I'm currently finishing up Stiglitz's Globalization and Its Discontents, which, while critical of the IMF, also complains about anti-globalization activists lobbying for more protectionist measures on the part of developed countries, against goods produced in developing countries. Do you have any idea if that applies to the Seattle protests?

Question about CGD: are they optimizing for making their proposals sound boring even though in fact they ideally want huge changes from the status quo? Or do they really just think we need tweaks to the status quo?

(This is based on a very superficial glance at their site; I was already planning on trying to read more of their materials.)

Hmmm... let me put it this way: I suspect the right approach to dealing with the current situation in Ukraine is to back off there, while taking a hard line re: willingness to defend Baltic NATO states like Estonia. Truly sharp red lines are established by things like the NATO treaty, not [hawkish politician X] shooting his mouth off.

I know GiveWell is aware of these articles, and has looked more into nukes. Probably more conversation notes will be coming out.

This is good to know.

Why not support the existing organizations, which have people with a lifetime of experience, scholarly background, and political connections?

Do you have any specific organizations in mind? Existing anti-nuclear-weapons orgs seem focused on disarmament, which seems extremely unlikely as long as Putin (or someone like him) is in power in Russia. And existing US anti-war orgs seem tragically ineffective. But maybe that's because it's just too hard to have an effective anti-war organization in the current US political context.

Partly, I was thinking of an org focused on achievable, narrowly defined actions: one that would fight, say, a bill in Congress to provide arms to Ukraine or authorize "limited" military intervention in eastern Europe, or that would raise a fuss when presidential candidates go a bit over the line in bellicose rhetoric (disincentivizing such rhetoric). Maybe there are already groups that do things like that; I admit I've only recently started trying to understand this area better.

Crap, thanks. Forgot the forum uses Markdown rather than HTML.

I've been using my nominally-an-atheism-blog on Patheos for a lot of EA-related blogging, but this is suboptimal given that lots of people find the ads and commenting system extremely annoying. My first post on the new blog is titled "The case for donating to animal rights orgs." I'm hoping that with a non-awful commenting system, we'll get lots of good discussions there.

Seconded. The post seems to imply he's setting up a non-profit for this purpose, but it would be nice to have details.
