
MvK

Project Manager @ FAR AI
541 karma · Joined Jul 2022 · Working (0-5 years)

Comments (68)

MvK · 1mo

"It's not common" wouldn't by itself suffice as a reason though - conducting CEAs "isn't common" in GHD, donating 10% "isn't common" in the general population, etc. (cf. Hume, is-and-ought something something).

Obviously, something may be "common" because it reliably protects you from legal exposure, is too much work for too little a benefit etc., but then I'm much more interested in those underlying reasons.

MvK · 1mo

Hey Charlotte! Welcome to the EA Forum. :) Your skillset and interest in consulting work in GHD seem a near-perfect fit for working with one of the charities incubated by Ambitious Impact. As I understand it, they are often looking for people like you! Some even focus on the same problems you mention (STDs, nutritional deficiencies, etc.).

You can find them here: https://www.charityentrepreneurship.com/our-charities

Thanks for the detailed reply. I completely understand the felt need to seize on windows of opportunity to contribute to AI Safety - I myself have changed my focus somewhat radically over the past 12 months.

I remain skeptical of a few of the points you mention, listed here in descending order of importance to your argument (correct me if I'm wrong):

"ERA's ex-AI-Fellows have a stronger track record" I believe we are dealing with confounding factors here. Most importantly, AI Fellows were (if I recall correctly) significantly more senior on average than other fellows. Some had multiple years of work experience. Naturally, I would expect them to score higher on your metric of "engaging in AI Safety projects" (which we could also debate how good of a metric it is). [The problem here I suspect is the uneven recruitment across cause areas, which limits comparability.] There were also simply a lot more of them (since you mention absolute numbers). I would also think that there have been a lot more AI opportunities opening up compared to e.g. nuclear or climate in the last year, so it shouldn't surprise us if more Fellows found work and/or funding more easily. (This is somewhat balanced out by the high influx of talent into the space.) Don't get me wrong: I am incredibly proud of what the Fellows I managed have gone on to do, and helping some of them find roles after the Fellowship may have easily been the most impactful thing I've done during my time at ERA. I just don't think it's a solid argument in the context in which you bring it up.

"The infrastructure is here" This strikes me as a weird argument at least. First of all, the infrastructure (Leverhulme etc.) has long been there (and AFAIK, the Meridian Office has always been the home of CERI/ERA), so is this a realisation you only came to now? Also: If "the infrastructure is here" is an argument, would the conclusion "you should focus on a broader set of risks because CSER is a good partner nearby" seem right to you?

"It doesn't diminish the importance of other x-risks or GCR research areas" It may not be what you intended, but there is something interesting about an organisation that used to be called the "Existential Risk Alliance" pivot like this. Would I be right in assuming we can expect a new ToC alongside the change in scope? (https://forum.effectivealtruism.org/posts/9tG7daTLzyxArfQev/era-s-theory-of-change)

AI was - in your words - already "an increasingly capable emerging technology" in 2023. Can you share more information on what made you prioritize it to the exclusion of all other existential risk cause areas (bio, nuclear, etc.) this year?

[Disclaimer: I previously worked for ERA as the Research Manager for AI Governance and - briefly - as Associate Director.]

Interesting idea. Say we DO find that - what implications would this have?

It seems to me that this data point alone wouldn't be sufficient to derive any actionable consequences, in the absence of the even more interesting but even harder-to-get data on WHY this is the case.

Or maybe you think that this is knowledge that is intrinsically rather than instrumentally valuable to have?

"An OpenAI program director, who has very little to actually do with this larger public debate, is suddenly subpoenaed to testify in a congressional hearing where they are forced to answer an ill-tempered congress member's questions. It might go something like this"

This should be OP, not OpenAI, right?

Hi Thomas! Have you looked at the charities incubated by Charity Entrepreneurship (CE)? Their work overlaps quite a bit with the things you care about, and the policy-focused ones especially might benefit hugely from someone with your expertise! Some are hiring and others work with volunteers/interns, so this might be a good place to start.
