Empirically, in hiring rounds I've previously been involved in for my team at Open Phil, it has often seemed that if the top 1-3 candidates just vanished, we wouldn't have made a hire. I've also observed hiring rounds that concluded with zero hires. So, basically, I dispute the premise that the top applicants will be similar in terms of quality (as judged by OP).
I'm sympathetic to the take "that seems pretty weird." It might be that Open Phil is making a mistake here, e.g. by having too high a bar. My unconfident best-guess would be that our bar h...
Thanks for the reply.
I think "don't work on climate change[1] if it would trade off against helping one currently identifiable person with a strong need" is a really bizarre/undesirable conclusion for a moral theory to come to, since if widely adopted it seems like this would lead to no one being left to work on climate change. The prospective climate change scientists would instead earn-to-give for AMF.
Or bettering relations between countries to prevent war, or preventing the rise of a totalitarian regime, etc.
Moreover, it’s common to assume that efforts to reduce the risk of extinction might reduce it by one basis point—i.e., 1/10,000. So, multiplying through, we are talking about quite low probabilities. Of course, the probability that any particular poor child will die due to malaria may be very low as well, but the probability of making a difference is quite high. So, on a per-individual basis, which is what matters given contractualism, donating to AMF-like interventions looks good.
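To make the per-individual comparison concrete, here's a rough sketch. The one-basis-point figure is from the text; the AMF-side number is purely a made-up illustration, not an actual estimate:

```python
# Per-person expected-benefit sketch (illustrative numbers only).

# X-risk intervention: suppose it reduces extinction risk by one basis point.
risk_reduction = 1e-4  # 1/10,000, as in the text

# Under contractualism, any one individual's claim is discounted by the
# probability that the intervention is what saves *them* in particular,
# which is at most the size of the risk reduction itself.
per_person_xrisk_benefit = risk_reduction

# AMF-style intervention: hypothetically, suppose a bednet has a 1-in-500
# chance of averting a death for the particular child who receives it.
per_person_amf_benefit = 1 / 500

print(per_person_xrisk_benefit)  # 0.0001
print(per_person_amf_benefit)    # 0.002
print(round(per_person_amf_benefit / per_person_xrisk_benefit, 2))  # 20.0
```

Even with these toy numbers, the individually-identifiable beneficiary comes out an order of magnitude ahead, which is the shape of the argument above.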
It seems like a society where everyone took contractualism to heart mi...
...So, it may be true that some x-risk-oriented interventions can help us all avoid a premature death due to a global catastrophe; maybe they can help ensure that many future people come into existence. But how strong is any individual's claim to your help to avoid an x-risk or to come into existence? Even if future people matter as much as present people (i.e., even if we assume that totalism is true), the answer is: Not strong at all, as you should discount it by the expected size of the benefit and you don’t aggregate benefits across persons. Since any giv
(I'm a trustee on the EV US board.)
Thanks for checking in. As Linch pointed out, we added Lincoln Quirk to the EV UK board in July (though he didn’t come through the open call). We also have several other candidates at various points in the recruitment pipeline, but we’ve put this a bit on the backburner, both because we wanted to resolve some strategic questions before adding people to the board and because we've had less capacity than we expected.
Having said that, we were grateful for all the applications and nominations which we received in that in...
I think we should keep "neglectedness" referring to the amount of resources invested in the problem, not P(success). This seems a better fit for the "tractability" bucket.
(+1 to this approach for estimating neglectedness; I think dollars spent is a pretty reasonable place to start, even though quality adjustments might change the picture a lot. I also think it's reasonable to look at number of people.)
Looks like the estimate in the 80k article is from 2020, though the callout in the biorisk article doesn't mention it — and yeah, AIS spending has really taken off since then.
I think the OP amount should be higher because I think one should count X% of the spending on longtermist community-building as being AIS spending, for s...
Made the front page of Hacker News. Here are the comments.
The most common pushback (including the first two comments, as of now) comes from people who think this is an attempt at regulatory capture by the AI labs, though that view gets a good deal of pushback in turn, and (I thought) some of the discussion is surprisingly high-quality.
It seems relevant that most of the signatories are academics, where this criticism wouldn't make sense. @HaydnBelfield created a nice graphic here demonstrating this point.
Off topic: There's a line in the movie A Cinderella Story: Christmas Wish that might be applicable to you: "was also credited with helping shift the Animal Rights movement to a more utilitarian focus including a focus on chicken."
This is an amazing thing to learn.
FWIW, during the time subforums were being piloted, several people I spoke to just weren't aware they existed.
(I work at Open Phil assisting with this effort.)
Thanks for pointing this out; it looks like there was a technical error which excluded these from the email receipt, which we've now fixed. The information was still received on our end, so you don't need to take any extra actions.
(I work at Open Phil assisting with this effort.)
Any grantee who is affected by the collapse of FTXFF and whose work falls within our focus areas (biosecurity, AI risk, and community-building) should feel free to apply, even if they have significant runway.
For various reasons, we don’t anticipate offering any kind of program like this, and are taking the approach laid out in the post instead. Edit: We’re still working out a number of the details, and as the comment below states, people who are worried about this should still apply.
(I work at Open Phil assisting with this effort.)
We think that people in this situation should apply. The language was intended to include this case, but it may not have been clear.
If you haven't already, I'd recommend reading Richard Ngo's AGI Safety From First Principles, which I think is an unusually rigorous treatment of the issue.
We've been paying people based on time spent, rather than by word. The amounts are based on our assessment of online market rates for high-quality freelance translators for the language in question, though my guess is this will be more attractive than being a freelance translator because it's a source of steady work for a long period of time (e.g. 6 months).
Have you considered writing a letter to the editor? I think actual worked examples of naive consequentialism failing are kind of rare and cool for people to see.
We're interested in increasing the diversity of the longtermist community along many different axes. It's hard to give a unified 'strategy' at this abstract level, but one thing we've been particularly excited about recently is outreach in non-Western and non-English-speaking countries.
Yes, you can apply for a grant under these circumstances. It's possible that we'll ask you to come back once more aspects of the plan are figured out, but we have no hard rules about that. And yes, it's possible to apply for funding conditional on some event and later return the money/adjust the amount you want downwards if the event doesn't happen.
I'll stand by the title here. I think a bilingual person without specific training in translation can have good taste in determining whether or not a given translation is high-quality. These seem like distinct skills, e.g. in English I'm able to recognize a work badly translated from French even if I don't speak French and couldn't produce a better one. And having good taste seems like the most important skill for someone who is vetting and contracting with professional translators.
Separately, I also think that many (but not all) bilingual people without s...
Hi Zakariyau. This seems like it definitely meets the criteria of a language with >5m speakers — I don't have the context, but I don't think English being the official language would be a barrier of any kind.
Unfortunately I think this kind of experimental approach is a bad fit here; opportunity costs seem really high, there's a small number of data points, and there's a ton of noise from other factors that language communities vary along.
Fortunately I think we'll have additional context that will help us assess the impacts of these grants beyond a black-box "did this input lead to this output" analysis.
Hi Nathan — I think that probably wouldn't make sense in this case, as I think it's important for the person leading a given translation project to understand EA and related ideas well, even if translators they hire do not.
Yep, this list isn't intended to rule anything out. We'd certainly be interested in getting applications from people who want to get content translated into Hindi or other Indian languages.
Thanks, really appreciate the concrete suggestion! This seems like a good lead for anyone who wants to supervise Polish translation.
I think this Wikipedia claim is from Reagan's autobiography. But according to The Dead Hand, written by a third-party historian, Reagan was already very concerned about nuclear war by this time, and had been at least since his campaign in 1980. It's pretty interesting — apparently this concern led both to his interest in nuclear weapon abolition (which he mostly didn't talk about) and to his unrealistic and harmful missile defense plans.
So according to this book, The Day After wasn't actually any kind of turning point.
The answer is yes, I can think of some projects in this general area that sound good to me. I'd encourage you to email me or sign up to talk to me about your ideas and we can go from there. As is always the case, a lot rides on further specifics about the project — i.e. just the bare fact that something is focused on mid-career professionals in tech doesn't give me a lot of info about whether it's something we'd want to fund or not.
Thanks, I only meant to ask whether focusing on mid-career tech is already a deal breaker for things you'd be interested in, and I understand that it isn't.
(I work at Open Phil on community-building grantmaking.)
This role seems quite high-impact to me and I'd encourage anyone on the fence to apply. Our 2020 survey leads me to believe that 80k has been very impactful in terms of contributing to the trajectories of people who are now doing important longtermist work. I think good marketing work could significantly increase the number of people that 80k reaches, and the impact of doing this quickly and well seems competitive with a lot of other community-building work to me — one reason for this is that I think ...
Is there an equally high level of expert consensus on the existential risks posed by AI?
There isn't. I think a strange but true and important fact about the problem is that it just isn't a field of study in the same way e.g. climate science is — as argued in this Cold Takes post. So it's unclear who the relevant "experts" should be. Technical AI researchers are maybe the best choice, but they're still not a good one; they're in the business of making progress locally, not forecasting what progress will be globally and what effects that will have.
I think this is a good question and there are a few answers to it.
One is that many of these jobs only look like they check the "improving the world" box if you have fairly unusual views. There aren't many people in the world for whom e.g. "doing research to prevent future AI systems from killing us all" tracks as an altruistic activity. It's interesting to look at this (somewhat old) estimate of how many EAs even exist.
Another is that many of the roles discussed here aren't research-y roles (e.g. the biosecurity projects require entrepreneurship, not resea...
What I'm talking about tends to be more of an informal thing which I'm using "EMH" as a handle for. I'm talking about a mindset where, when you think of something that could be an impactful project, your next thought is "but why hasn't EA done this already?" I think this is pretty common and it's reasonably well-adapted to the larger world, but not very well-adapted to EA.
EMH says that we shouldn't expect great opportunities to make money to just be "lying around" ready for anyone to take. EMH says that, if you have an amazing startup idea, you have to answer "why didn't anyone do this before?" (ofc. this is a simplification, EMH isn't really one coherent view)
One might also think that there aren't great EA projects just "lying around" ready for anyone to do. This would be an "EMH for EA." But I think it's not true.
There is/was a debate on LessWrong about how valid the efficient market hypothesis is. I think this is super interesting stuff, but I want to claim (with only some brief sketches of arguments here) that, regarding EA projects, the efficient market hypothesis is not at all valid (that is, I think it's a poor way to model the situation that will lead you to make systematically wrong judgments). I think the main reasons for this are:
If you're an EA who's just about to graduate, you're very involved in the community, and most of the people you think are really cool are EAs, I think there's a decent chance you're overrating jobs at EA orgs in your job search. Per the common advice, I think most people in this position should be looking primarily at the "career capital" their first role can give them (skills, connections, resume-building, etc.) rather than the direct impact it will let them have.
At first blush it seems like this recommends you should almost never take an EA job early in ...
Thanks for posting this. I found a lot of it resonant — particularly the stuff about inventing reasons to discount positive feedback, and having to pile on more and more unlikely beliefs to avoid updating to "I'm good at this."
I remember, fairly recently, taking seriously some version of "I'm not actually good at this stuff, I'm just absurdly skilled at fooling others into thinking that I am." I don't know man, it seemed like a pretty good hypothesis at the time.
One can't stack the farmed animal welfare multiplier on top of the ones about giving malaria nets or the one about focusing on developing countries, right? E.g. can't give chickens malaria nets.
It seems like that one requires 'starting from scratch' in some sense. There might be analogies to the human case (e.g. don't focus on your pampered pets), but they still need to be argued.
So I think the final number should be lower. (It's still quite high, of course!)
The way I framed it was unclear, but the final number is correct because I was comparing the QALYs/$ of farmed animal interventions to that of malaria nets. See the footnote:
Assumes 40 chicken QALYs/$, 1 human QALY/$100, and that 400 chicken QALY = 1 human QALY due to neuron differences. Ana's moral circle includes all beings weighted by neuron count, but she hadn't thought about this enough.
I was directly comparing the following rough estimates
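Taking the footnote's assumptions at face value (40 chicken QALYs/$, 1 human QALY per $100, and a 400:1 neuron-count conversion), the arithmetic works out as follows — a sketch of the comparison as I understand it, not a restatement of the exact final number:

```python
# Back-of-the-envelope comparison using the footnote's stated assumptions.

chicken_qalys_per_dollar = 40        # farmed-animal intervention
human_qalys_per_dollar = 1 / 100     # malaria nets: 1 human QALY per $100
chickens_per_human_qaly = 400        # neuron-count discount: 400 chicken QALY = 1 human QALY

# Convert the chicken figure into human-equivalent QALYs per dollar.
human_equiv_per_dollar = chicken_qalys_per_dollar / chickens_per_human_qaly  # 0.1

# Multiplier of farmed-animal cost-effectiveness over malaria nets.
multiplier = human_equiv_per_dollar / human_qalys_per_dollar
print(round(multiplier, 2))  # 10.0
```

So even after the neuron-count discount, the farmed-animal intervention comes out roughly an order of magnitude ahead of malaria nets on these numbers, per dollar.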
Hmm. Thanks for the example of the "pure time" mapping of t --> mental states. It's an interesting one. It reminds me of Max Tegmark's mathematical universe hypothesis at "level 4," where, as far as I understand, all possible mathematical structures are taken to "exist" equally. This isn't my current view, in part because I'm not sure what it would mean to believe this.
I think the physical dust mapping is meaningfully different from the "pure time" mapping. The dust mapping could be defined by the relationships between dust specks. E.g. at each tim...
I downvoted the OP because it doesn't seem to be suited to this forum. The author's experiences are interesting, but I don't think the post contains an attempt to explore the potential cause area impartially.
Hmm, so from my point of view this post is written with the intention to bring a class of interventions to the attention of people who care about mental health, which seems good to me.
I agree this post is not an impartial exploration, but I tentatively would like this forum to also be welcoming to new altruistically minded people who have a new idea about helping people, even ideas that are in their infancy with respect to the question of whether they're among the most effective ways to do good.
So instead of feedback that their initial thoughts don’t belong here, I’...
(meta musing) The conjunction of the negations of a bunch of statements seems a bit doomed to get a lot of disagreement karma, sadly. Esp. if the statements being negated are "common beliefs" of people like the ones on this forum.
I agreed with some of these and disagreed with others, so I felt unable to agreevote. But I strongly appreciated the post overall so I strong-upvoted.