I currently lead EA Funds.
Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.
Unless explicitly stated otherwise, opinions are my own, not my employer's.
You can give me positive and negative feedback here.
Idk, many of the people they are directing would otherwise just do something kinda random, which an 80k rec easily beats. I'd guess the number of people for whom 80k makes their plans worse in an absolute sense is pretty low, and those people are likely to course-correct.
Otoh, I do think people/orgs in general should consider doing more strategy/cause prio research, and if 80k were like "we want to triple the size of our research team to work out the ideal marginal talent allocation across longtermist interventions", that would seem extremely exciting to me. But I don't think 80k are currently being irresponsible (not that you explicitly said that; for some reason I got a bit of that vibe from your post).
"I think it's worth noting that the two papers linked (which I agree are flawed and not that useful from an x-risk viewpoint) ..."
I haven't read the papers, but I am surprised that you don't think they are useful from an x-risk perspective. The second paper, "A Model for Estimating the Economic Costs of Computer Vision Systems that use Deep Learning", seems highly relevant to forecasting AI progress, which imo is one of the most useful AIS interventions.
The OP's claim

"This paper has many limitations (as acknowledged by the author), and from an x-risks point of view, it seems irrelevant."

seems overstated, and I'd guess that many people working on AI safety would disagree with them.
Great post - I really enjoyed reading this.
I would have thought the standard way to resolve some of the questions above would be to use a large agent-based model, simulating disease transmission among millions of agents and then observing how successful some testing scheme is within the model (you might be able to backtest the model against well-documented outbreaks).
I'm not sure how much you'd trust these models over your intuitions, but I'd guess you'd get quite a lot of mileage out of them.
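To make that concrete, here's a minimal toy sketch of the kind of simulation I have in mind (my own illustration, not taken from the linked papers): a few thousand randomly mixing agents, a simple SIR-style infection process, and a crude testing scheme (test a random fraction of agents each day and isolate positives), with all parameters made up.

```python
# Toy agent-based outbreak model with a simple testing-and-isolation scheme.
# All parameters below are made-up assumptions, purely for illustration.
import random

def simulate(n_agents=5_000, n_days=100, beta=0.3, contacts_per_day=8,
             recovery_days=7, test_fraction=0.0, seed=0):
    rng = random.Random(seed)
    state = ["S"] * n_agents          # "S" susceptible, "I" infected, "R" recovered
    days_infected = [0] * n_agents
    isolated = [False] * n_agents
    for i in rng.sample(range(n_agents), 10):   # seed a handful of infections
        state[i] = "I"

    for _ in range(n_days):
        # Transmission: each non-isolated infected agent meets random contacts.
        newly_infected = []
        for i in range(n_agents):
            if state[i] == "I" and not isolated[i]:
                for j in rng.choices(range(n_agents), k=contacts_per_day):
                    if state[j] == "S" and rng.random() < beta / contacts_per_day:
                        newly_infected.append(j)
        for j in newly_infected:
            state[j] = "I"

        # Testing scheme: test a random fraction of agents, isolate positives.
        if test_fraction > 0:
            for i in rng.sample(range(n_agents), int(test_fraction * n_agents)):
                if state[i] == "I":
                    isolated[i] = True

        # Recovery after a fixed infectious period.
        for i in range(n_agents):
            if state[i] == "I":
                days_infected[i] += 1
                if days_infected[i] >= recovery_days:
                    state[i] = "R"
                    isolated[i] = False

    return sum(s != "S" for s in state)   # total agents ever infected

# Compare outbreak sizes with no testing vs. testing 5% of agents daily.
print("no testing:      ", simulate(test_fraction=0.0))
print("5% daily testing:", simulate(test_fraction=0.05))
```

A real version would add household/workplace contact structure, test sensitivity and turnaround time, and calibration against documented outbreaks, but even a toy like this lets you compare testing schemes head to head.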
I've only skimmed these papers, but these seem promising and illustrative of the direction to me:
Fwiw, I don't think that being on the 80k podcast is much of an endorsement of the work that people are doing. I think the signal is much more like "we think this person is impressive and interesting", which is consistent with other "interview podcasts" (and I suspect that it's especially true of podcasts that are popular amongst 80k listeners).
I also think having OpenAI employees discuss their views publicly with smart and altruistic people like Rob is generally pretty great, and I would personally be excited for 80k to have more OpenAI employees on the podcast (particularly if they are willing to talk about why they do/don't think AIS is important and about their AI worldview).
Having a line at the start of the podcast making it clear that they don't necessarily endorse the org the guest works for would mitigate most concerns - though I don't think it's particularly necessary.
I think this is a good policy and broadly agree with your position.
It's a bit awkward to mention, but given that you've said you've delisted other roles at OpenAI and that OpenAI has acted badly before, I think you should consider explicitly saying on the OpenAI jobs board cards that you don't necessarily endorse other roles at OpenAI, and that you suspect some of those roles may be harmful.
I'm a little worried about people seeing OpenAI listed on the board and inferring that the 80k recommendation somewhat transfers to other roles at OpenAI (which, imo, is a reasonable heuristic for most companies listed on the board, but fails in this specific case).
Companies often hesitate to grant individual raises because of potential ripple effects: the true cost of a raise can exceed the individual amount if it prompts similar raises across the company.

An alternative for altruistic employees is to negotiate for a charitable donation match instead. This approach allows altruists to increase their impact without triggering company-wide salary adjustments.
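As a rough back-of-the-envelope illustration of that ripple-effect point (my own made-up numbers, not from the original comment):

```python
# Hypothetical comparison of the company's true cost of one individual raise
# vs. a same-sized charitable donation match. All figures are illustrative.
raise_amount = 5_000             # assumed individual raise, per year
peers_given_similar_raise = 10   # assumed colleagues who then get a similar raise
match_amount = 5_000             # assumed one-off donation match

company_cost_of_raise = raise_amount * (1 + peers_given_similar_raise)  # recurring, per year
company_cost_of_match = match_amount                                    # one-off, no ripple

print(f"Company cost of the raise (per year): ${company_cost_of_raise:,}")  # $55,000
print(f"Company cost of the donation match:   ${company_cost_of_match:,}")  # $5,000
```

Under those assumptions, a one-off $5,000 match costs the company far less than a $5,000 raise that ripples across the team, which is the sense in which a match can be an easier ask.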
Fwiw I think that posts and comments on the EA Forum do a lot to create an association. If there hadn't been any coverage of Hanania attending Manifest on the forum, I think something like 10x+ fewer EAs would know about the Hanania stuff, and it would have been less likely to be picked up by journalists (a bit less relevant here, as it was already covered by the Guardian). It seems like there's a nearby world where less than 1% of weekly active forum users know that an EAish organisation invited Hanania to attend an event at a commercial venue run by EAish people, which I personally don't think creates much association between EA and Hanania (unlike the current coverage).
Of course, some people here might think that EA should be grappling with racism outside of this incident, in which case opportunities like this are helpful for creating discourse. But insofar as people think that Manifest's actions were ok-ish, and are mostly sad that Manifest is associated with EA and makes EA look bad (and so personally don't want to attend), I think debating the topic on the forum is pretty counterproductive. My impression is that the majority of people in the comments are in the latter camp.
If you think that it's important that Manifest knows why you personally aren't attending, emailing them seems like a very reasonable action to me (but of course, this doesn't achieve the goal of letting people who don't organise the event know why you aren't attending).
Hi Markus,
For context, I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel, not me). We are still paying out grants to our grantees — though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).
We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:
You can also apply to one of Open Phil's programs; Open Philanthropy's program for grantees affected by the collapse of the FTX Future Fund may be of particular note to people who are applying to EA Funds because of the FTX crash.