Bio

Participation
3

About to complete my MSc in applied mathematics/theoretical ML. Currently a Prague Fall Season Resident working on independent AI alignment research (until Nov '22).

Interested in increasing diversity, transparency and democracy in the EA movement. Would like to know how algorithm developers can help "neartermist" causes.

Comments
576

I struggle to follow the logic that would permit this risk taking in the first place, even without all these caveats. As you said:

a foundation with $15 billion would end up being a majority of funding for those areas, and so effectively increase the resources going towards them by more than 2-fold, and perhaps as much as 5-fold... by contrast... $15 million from me, spread out over a period of years, would represent less than a 1% increase.

This is indeed a big difference. If you're looking at a small-ish donation, it makes sense to ask if it's uncorrelated with other similar donations, and if yes, to take the option with the higher expected value, because over a large number of such choices it's probable that the average donation would indeed have that value. In contrast, if you're looking at a donation in the billions of dollars, this EV logic is almost entirely irrelevant - even if it were uncorrelated with other donations, you don't have a hundred or a thousand donations of this size! The idea that we can actually expect to get the EV is just wrong. We in fact never get it.

So you can decide to be more or less risk averse, but you can't really pretend you're not risking a billion dollars here and hide behind EV maximisation.
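The law-of-large-numbers point above can be illustrated with a quick simulation (the numbers here are hypothetical, just to show the shape of the argument):

```python
import random

random.seed(0)

def risky_grant(ev=1.0, p=0.1):
    """A grant that pays off big with probability p, nothing otherwise.
    Expected value is ev regardless of p."""
    return ev / p if random.random() < p else 0.0

# 1000 small, uncorrelated grants: the average lands near the EV of 1.0.
small = [risky_grant() for _ in range(1000)]
avg = sum(small) / len(small)

# A single huge grant: one draw, so you get either the jackpot (10.0)
# or nothing (0.0) -- you never actually receive the EV itself.
big = risky_grant()

print(f"average of 1000 small grants: {avg:.2f}")
print(f"single large grant: {big}")
```

Averaged over many independent small bets the realized outcome concentrates around the EV; a one-shot billion-dollar bet has no such guarantee, which is why the variance can't be waved away.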

On a superficial level I agree. But then on second thought, it seems weird to me to group together donations backed by wide community discussion with those by e.g. OpenPhil, which is controlled by a board of 5-6 people.

I only read up to the end of the "Bayesian Updating" part, because the next section introduced a few terms I didn't know and seemed to assume familiarity with them.

So for the section I did read, I can say that I didn't manage to follow it through all the numbers. This might just be because I'm a mathematician and I hate numbers. But formulas would have greatly helped.

if this monkeypox image can be generated by a prompt of 12 redundant words, then that's another way of saying that the image is worth far less than a thousand words - it's worth less than 12...

It's only worth less than 12 if you have mental access to the state of the model after training. If not, the image also encodes a bunch of what the model learned.

Yesterday I listened to Kelsey Piper's review using the Google Assistant text-to-speech (which only works for web pages), and it worked pretty well. Only a couple of words were mispronounced ("aye" for AI).

Look here (Google doc) for more info about the GiveWell moral weights. I don't like their approach to this, but as always I can praise them highly for the transparency :)

Thanks for writing this!

Wouldn't have guessed from the username that you're a socialist :)

This contrasts with the internationalist view in EA, with distant actors solving local problems using distant resources.

You wrote this briefly, but I think it's worth expanding upon. Progressives often equate that very idea with neo-colonialism, and they're not wrong in principle. In EA we need to take care both to make sure it does involve locals in a meaningful enough way, and to present the ways it is very different in practice from neo-colonialism. Or, in other words, to show where that criticism is wrong, but to learn from the part that's right.

Concretely, I'm pretty uncomfortable with GiveWell's moral weights being set for a large part using a donor survey. Why should Western donors get to decide what's important for people in developing countries?

Mostly agree, although I don't necessarily think property rights as currently understood in Western culture are actually good.

I actually really like the design, and that's despite my being in the "dark theme everything" camp.

Thanks. It's not as awful as the partial quote made it sound, but in my eyes it's still bad, and it will make me think twice about associating with MacAskill.
