
Chris Leong

Organiser @ AI Safety Australia and NZ
6,183 karma · Joined Nov 2015 · Sydney NSW, Australia

Bio


Currently doing local AI safety Movement Building in Australia and NZ.

Comments (1,020)

Okay, I guess parts of that framework make a bit more sense now that you've explained it.

At the same time, it feels like people can always decide to earn to give if they fail to land an EA-relevant gig, so I'm not sure why you're modeling the cost as a $5k annual donation rather than a one-time $5k donation for someone who spends a year focusing on upskilling for EA roles. Maybe you could add an extra factor for the slowdown in their career advancement, but $50k extra per year is unrealistic.

I think it's also worth considering that there are selection effects here: insofar as EA promotes direct work, people with higher odds of successfully landing a direct-work position are more likely to pursue it, and people with better earning-to-give potential are less likely to take the advice.

Additionally, I wonder whether the orgs you surveyed understood "ten additional applications" as ten additional average applications, or ten additional applications from EAs (more educated and values-aligned than the general population) who were dedicated enough to actually follow through on earning to give.

My point was that, presumably, the org thinks the person they decide to hire is better than the next-best candidate.

Bob sees little reason to reconsider the trade-off, especially since ChatGPT seems to have vindicated 80,000 hours’ prior belief that AI was going to be a big deal


ChatGPT is just the tip of the iceberg here.

GPT-4 is significantly more powerful than GPT-3.5. Google now has a multimodal model that can take in sound, images, and video, with a context window of up to a million tokens. Sora can generate amazingly realistic videos. And everyone is waiting to see what GPT-5 can do.

Further, the Center for AI Safety open letter has demonstrated that it isn't just our little community that is worried about these things, but also a large number of AI experts.

Their 'AI is going to be a big thing' bet seems to have been a wise call, at least at the current point in time. Of course, I'm doing AI Safety movement building, so I'm a bit biased here, and maybe we'll think differently down the line, but right now they're clearly ahead.

For 1: A lot of global health and development work is much less talent-hungry than animal welfare or x-risk work. Take, for example, the Against Malaria Foundation: they receive hundreds of millions of dollars, but they only have a core team of 13. Sure, you need a bunch of people to hand out bed nets, but the requirements for that aren't very tight; and while you also need some managers, lots of people are capable of handling that kind of logistics, so you don't really have to headhunt them. I suppose this could change if there were more of a pivot into policy, where talent really matters. However, in that case, you would probably want people from the country whose policy you want to influence, more so than optimising for cost.

For 5: It's not clear to me that the way you're thinking about this makes sense. If you're asking about the trade-off between direct work and donations, it seems as though we should compare $5k from the job vs. a new candidate who is better than your current candidate, since in a lot of circumstances they will still have the option of earning to give so long as they don't take an EA job (I suppose there is the additional factor of how much chasing EA jobs detracts from chasing earning-to-give jobs).

Very interesting report. It provided a lot of visibility into how these funders think.

Geographically, India may be a standout opportunity for getting talent to do research/direct work in a counterfactually cheap way.

I would have embraced that more in the past, but I'm a lot more skeptical of it these days; I think that if it were possible, it would already have worked out. For many tasks, EA wants the best talent available, and top talent is best able to access overseas opportunities, so the price is largely independent of current location.

In terms of age prioritization, it is suboptimal that EA focuses more on outreach to university students or young professionals as opposed to mid-career people with greater expertise and experience.

I agree that there is a lot of alpha in reaching out to mid-career professionals - if you are able to successfully achieve this. This work is a lot more challenging - mid-career professionals are often harder to reach, less able to pivot, and have less time available to upskill. Fewer people are able to do this kind of outreach, because these professionals may take current students or recent grads less seriously. So for a lot of potential movement builders, focusing on students or young professionals is a solid play, because it offers the best combination of impact and personal fit.

The report writes: "80,000 Hours, in particular, has been singled out for its work not having a clear positive impact, despite the enormous sums they have spent."

As an on-the-ground community builder, I'm skeptical of your take. So many people I've talked to became interested in EA or AI safety through 80,000 Hours. Regarding "misleading the community that it is cause neutral while being almost exclusively focused on AI risk": I was more concerned about this in the past, but I feel they're handling it pretty well these days. I'd be quite interested to hear if you have specific ways you think they should change. Regarding causing early-career EAs to choose suboptimal early-career jobs that mess up their CVs: I'd love to hear more detail on this if you can share it. Has anyone written up a post on this?

On funding Rethink Priorities specifically – a view floated is that there is value in RP as a check on OP, and yet if OP doesn't think RP is worth funding beyond a certain point, it's hard to gainsay them

Rethink Priorities seems to be a fairly big org - especially taking into account that it operates on the meta-level - so I understand why Open Phil might be reluctant to spend even more money there. I suspect there's a feeling that, even if they are doing good work, meta work should only make up a certain portion of the budget. I wouldn't take that as a strong signal.

In practice, it's unclear if the community building actually translates to money/talent moved, as opposed to getting people into EA socially.

As someone on the ground, I can say that community building has translated into talent moved, at the very least. Money moved is less visible to me, because a lot of people aren't shouting about their pledges from the rooftops, and because donations are very heavy-tailed. I'd love to hear some thoughtful discussion of how this could be better measured.

What would interest OP might be a project about getting in more people who are doing the best things in GHW (e.g. Buddy Shah, Eirik Mofoss).

I'm very curious what impressed Open Philanthropy about these people, since I'm not familiar with their work. I'd be keen to learn more though!

At least from an AI risk perspective, it's not at all clear to me that this would improve things, as it would lead to further outward dispersion of this knowledge.

For anyone wondering about the definition of macrostrategy, the EA Forum defines it as follows:

Macrostrategy is the study of how present-day actions may influence the long-term future of humanity.[1]

Macrostrategy as a field of research was pioneered by Nick Bostrom, and it is a core focus area of the Future of Humanity Institute.[2] Some authors distinguish between "foundational" and "applied" global priorities research.[3] On this distinction, macrostrategy may be regarded as closely related to the former. It is concerned with the assessment of general hypotheses such as the hinge of history hypothesis, the vulnerable world hypothesis and the technological completion conjecture; the development of conceptual tools such as the concepts of existential risk, of a crucial consideration and of differential progress; and the analysis of the impacts and capabilities of future technologies such as artificial general intelligence, whole brain emulation and atomically precise manufacturing, but considered at a higher level of abstraction than is generally the case in cause prioritization research.

If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake. 

I disagree, because that post is sitting at -36.

Happy to consider your points on the merits if you have an example of an objectionable post with positive upvotes.

That said: part of me feels that Effective Altruism shouldn't be afraid of controversial discussion, whilst another part of me wants to shift such discussion to LessWrong. I suppose I'd need a concrete example in front of me to figure out how to balance these views.

I didn't vote, but maybe people are worried about the EA Forum being filled up with a bunch of logistics questions?
