Some thoughts on EA outreach to high schoolers

Yeah, I totally agree there are useful things to say, though my impression is that these kinds of changes are smaller, and this kind of advice is already more widely available (except the last one).

I think the hope for more radical changes would come from giving people more time to mull over the worldview, and maybe from introducing people to a general 'prioritizy' mindset, which can sometimes pay off a lot (e.g. thinking about what you really want to get out of college and making sure you get it).

(On the specifics, I think maths & physics probably trump economics at A-level, if someone has the option to do both. At undergrad it's less clear, but you can go from maths and physics into an econ, compsci, or bio PhD, and not vice versa.)

Is there a positive impact company ranking for job searches?

Hi there,

If you're looking for a wider range of job listings, you might find this list of social impact job boards useful.

Plan for Impact Certificate MVP

I'm keen to see more experiments with impact certificates. Do you have funders interested in using it?

Against neglectedness

This will mainly need to wait for a separate article or podcast, since it's a pretty complicated topic.

However, my quick impression is that the issues Caspar raises are covered in the problem framework article.

I also agree that their effect is probably to narrow the difference between AI safety and climate change. However, I don't think they flip the ordering, and our 'all considered' view of the difference between the two was already narrower than a naive application of the INT framework implies – for the reasons mentioned here – so I don't think it really alters our bottom lines (in part because we were already aware of these issues). I'm sorry, though, that we're not clearer that our 'all considered' views are different from 'naive INT'.

What actually is the argument for effective altruism?

That's an interesting point. I was thinking that most people would say that if my goal is X, and I achieve far less of X than I easily could have, then that would qualify as a 'mistake' in normal language. I also wondered whether another premise should be something very roughly like 'maximising: it's better to achieve more rather than less of my goal (if the costs are the same)'. I could also see contrasting it with some kind of alternative approach being another good option.

What actually is the argument for effective altruism?

I like the idea of thinking about it quantitatively like this.

I also agree with the second paragraph. One way of thinking about this is that if identifiability is high enough, it can offset low spread.

The importance of EA is proportional to the product of the degrees to which the three premises hold.
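A toy sketch of this multiplicative claim (my own illustrative framing, with made-up premise names and 0–1 scores, not anything from the post):

```python
def ea_importance(spread: float, identifiability: float, neglectedness: float) -> float:
    """Toy model: the case for EA scales with the product of how strongly
    each premise holds, where each premise is scored on a 0-1 scale.
    The premise names here are illustrative placeholders."""
    return spread * identifiability * neglectedness

# Because the relationship is multiplicative, if any one premise barely
# holds, the overall case collapses even when the other two hold strongly.
all_strong = ea_importance(0.9, 0.9, 0.9)
one_weak = ea_importance(0.9, 0.9, 0.1)
assert one_weak < all_strong
```

The point of the product (rather than, say, a sum) is that a near-zero score on any single premise drags the whole case toward zero, no matter how strong the others are.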

[Linkpost] Some Thoughts on Effective Altruism

Hi Paolo, I apologise that this is just a hot take, but from quickly reading the article, my impression was that most of the objections apply more to what we could call the 'near termist' school of EA rather than the longtermist one (which is very happy to work on interventions that are difficult to predict or quantify). You seem to basically point this out at one point in the article. When it comes to the longtermist school, my impression is that the core disagreement is ultimately about how important/tractable/neglected it is to do grassroots work to change the political & economic system compared to something like AI alignment. I'm curious whether you agree.

What actually is the argument for effective altruism?

Hi Jamie,

I think it's best to think about the importance of EA as a matter of degree. I briefly mention this in the post:

Moreover, we can say that it’s more of a mistake not to pursue the project of effective altruism the greater the degree to which each of the premises hold. For instance, the greater the degree of spread, the more you’re giving up by not searching (and same for the other two premises).

I agree that if there were only, say, 2x differences in the impact of actions, EA could still be very worthwhile. But it wouldn't be as important as in a world where there are 100x differences. I talk about this a little more in the podcast.

I think ideally I'd reframe the whole argument to be about how important EA is rather than whether it's important or not, but the phrasing gets tricky.

What actually is the argument for effective altruism?

Hi David, just a very quick reply: I agree that if the first two premises were true, but the third were false, then EA would still be important in a sense, it's just that everyone would already be doing EA, so we wouldn't need a new movement to do it, and people wouldn't increase their impact by learning about EA. I'm unsure about how best to handle this in the argument.

What actually is the argument for effective altruism?

Hi Greg,

I agree that when introducing EA to someone for the first time, it's often better to lead with a "thick" version, and then bring in thin later.

(I should have maybe better clarified that my aim wasn't to provide a new popular introduction, but rather to better clarify what "thin" EA actually is. I hope this will inform future popular intros to EA, but that involves a lot of extra steps.)

I also agree that many objections are about EA in practice rather than the 'thin' core ideas, that it can be annoying to retreat back to thin EA, and that it's often better to start by responding to the objections to thick. Still, I think it would be ideal if more people understood the thin/thick distinction (I could imagine more objections starting with "I agree we should try to find the highest-impact actions, but I disagree with the current priorities of the community because..."), so I think it's worth making some efforts in that direction.

Thanks for the other thoughts!
