FWIW, I think I did not consider non-EA jobs nearly enough right after my master's in 2016. However, my situation was somewhat idiosyncratic, and I'm not sure it could happen today in this form.
I ended up choosing between one offer from an EA org and one offer from a mid-sized German consulting firm. I chose the EA org. I think it's kind of insane that I hadn't even applied to, or strongly considered, more alternatives, and I think it's highly unclear whether I made the right choice.
I don't remember a post, but Daniel Kokotajlo recently said the following in a conversation. Someone with a maths background should have an easy time checking this and making it precise.

> It is a theorem, I think, that if you are allocating resources between various projects that each have logarithmic returns to resources, and you are uncertain about how valuable the various projects are but expect that most of your total impact will come from whichever project turns out to be best (i.e., the distribution of impact is heavy-tailed), then you should, as a first approximation, allocate your resources in proportion to your credence that a project will be the best.
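Here's a minimal sketch of how this might be made precise (my own rough reconstruction, not carefully checked; the exact conditions on the distribution would need tightening):

```latex
% Project i receives resources x_i and yields v_i \log x_i, with the v_i uncertain.
% Maximize expected value subject to a budget constraint:
\max_{x}\; \mathbb{E}\Big[\sum_i v_i \log x_i\Big]
  \quad \text{s.t.} \quad \sum_i x_i = B
% The first-order conditions (Lagrange multiplier \lambda) give
% \mathbb{E}[v_i]/x_i = \lambda, hence
x_i^* \;=\; B \cdot \frac{\mathbb{E}[v_i]}{\sum_j \mathbb{E}[v_j]}
% If impact is heavy-tailed, so that \mathbb{E}[v_i] is dominated by the scenario
% in which project i turns out to be best, i.e.
% \mathbb{E}[v_i] \approx \Pr(i \text{ is best}) \cdot m  with m roughly constant across i,
% then x_i^* \propto \Pr(i \text{ is best}).
```

The heavy-tail assumption does the work in the last step; without it, you'd allocate in proportion to expected value rather than to the credence of being best.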
But the EA Infrastructure Fund currently only has ~$65k available
Hi, thanks for mentioning this - I am the chairperson of the EA Infrastructure Fund and wanted to quickly comment on this: We do have room for more funding, but the $65k number is too low. As of one week ago, the EAIF had at least $290k available. (The website now shows $270k for me, not $65k.)
It is currently hard to get accurate numbers, including for ourselves at EA Funds, due to an accounting change at CEA. Apologies for any confusion this might cause. We will fix the number on the web... (read more)
In a similar vein I enjoyed these two books with case studies of disasters:
(I'd be very interested in your answer if you have one btw.)
FWIW I agree that managing conflicts of interest is very important for some lines of work you might want to do, and I'm glad you're thinking about how to do this.
That seems fair. To be clear, I think "ground truth" isn't the exact framing I'd want to use, and overall I think the best version of such an exercise would encourage some degree of skepticism about the alleged 'better' answer as well.
Assuming it's framed well, I think there are both upsides and downsides to using examples that are closer to EA vs. clearer-cut. I'm uncertain about which would be better overall if I could only do one of them.
Another advantage of my suggestion in my view is that it relies less on mentors. I'm concerned that having mentors that are... (read more)
Upon (brief) reflection I agree that relying on the epistemic savviness of the mentors might be too much and the best version of the training program will train a sort of keen internal sense of scientific skepticism that's not particularly reliant on social approval. If we have enough time I would float a version of a course that slowly goes from very obvious crap (marketing tripe, bad graphs) into things that are subtler crap (Why We Sleep, Bem ESP stuff) into weasely/motivated stuff (Hickel? Pinker? Sunstein? popular nonfiction in general?) into th... (read more)
I haven't thought a ton about the implications of this, but my initial reaction also is to generally be open to this.
So if you're reading this and are wondering whether it could be worth it to submit an application for funding for past expenses, then I think the answer is that we'd at least consider it, so potentially yes.
If you're reading this and it really matters to you what the EAIF's policy on this is going forward (e.g., if it's decision-relevant for some project you might start soon), you might want to check with me before going ahead. I'm not sure I'll be... (read more)
I would be very excited about someone experimenting with this and writing up the results. (And would be happy to provide EAIF funding for this if I thought the details of the experiment were good and the person a good fit for doing this.)
If I had had more time, I would have done this for the EA In-Depth Fellowship seminars I designed and piloted recently.
I would be particularly interested in doing this for cases where there is some amount of easily transmissible 'ground truth' people can use as a feedback signal. E.g.
We even saw an NYT article about the CDC and whether reform is possible.
There were some other recent NYT articles which, based on my limited COVID knowledge, I thought were pretty good, e.g. on the origin of the virus or on airborne vs. droplet transmission.
The background of their author, however, seems fairly consistent with an "established experts and institutions largely failed" story:
Zeynep Tufekci, a contributing opinion writer for The New York Times, writes about the social impacts of technology. She is an assistant professor in the School of Informat…
This is great, thank you so much for sharing. I expect that many people will be in a similar situation, and so that I and others will link to this post many times in the future.
(For the same reason, I also think that pointers to potentially better resources by others in the comments would be very valuable.)
(The following is just my view, not necessarily the view of other EAIF managers. And I can't speak for the LTFF at all.)
FWIW I can think of a number of circumstances I'd consider a "convincing reason" in this context. In particular, cases where people know they won't be available for 6-12 months because they want to wrap up some ongoing unrelated commitment, or cases where large lead times are common (e.g., PhD programs and some other things in academia).
I think as with most other aspects of a grant, I'd make decisions on a case-by-case basis that would be... (read more)
Thanks a lot, this is useful context. I work in academia, so the large lead times are relevant, particularly because other 'traditional' funders would require applications well in advance. It would be useful to know whether it's necessary to pursue those other funding routes as a 'career hedge' or not, for example, via a commitment to funding.
I am interested to hear if anyone from LTFF agrees/disagrees with Max's assessment in these circumstances.
We've now turned most of these into Anki cards.
Amazing, thank you so much!
I'm afraid I don't know of great sources for the numbers you list; such sources may well only exist for the distribution of compute. Perhaps the numbers on the EA community are too uncertain and dynamic to be a good fit for Anki anyway. On the other hand, it may be mainly the order of magnitude that is interesting, and it should be possible to get this right using crude proxies.
One proxy for the size of the EA community could be the number of EA survey respondents (or perhaps one above a certain engagement level).
On the other points:
I agree with most of what you say here.
[ETA: I now realize that I think the following is basically just restating what Pablo already suggested in another comment.]
I think the following is a plausible concern, which could be read as a stronger version of your crisp concern #3.
"Humanity has not had meaningful control over its future, but AI will now take control one way or the other. Shaping the transition to a future controlled by AI is therefore our first and last opportunity to take control. If we mess up on AI, not only have we failed to s... (read more)
This NYTimes Magazine article might be interesting. Its framing is basically "why did the CDC fail, and how can it do better next time?".
It mentions some other groups that allegedly did better than the CDC. Though I don't know to what extent these groups were or were not EA-funded. E.g., it says:
The Covid Rapid Response Working Group, at the Edmond J. Safra Center for Ethics at Harvard, was one of several independent organizations that stepped in to help fill the gap. In the last year, these groups, run mostly out of academic centers and private foun…
I'm sure there are a number of interesting movies and documentaries on nuclear security.
Three movies that come to mind immediately:
Another relevant film is The Day After, which was seen by 100 million Americans ("the most-watched television film in the history of the medium"; Hänni 2016) and was instrumental in changing Reagan's nuclear policy.
I like this idea. Here is some brainstorming output. Apologies for it being unedited/not sorted by categories:
FINAL UPDATE: The deck is now published.
This is amazing. I'd be happy to create an Anki deck for these and any other numbers suggested in this thread.
EDIT: Judging from the upvotes, there seems to be considerable interest in this. I will wait a few days until people stop posting answers and will then begin creating the deck. I'll probably use the CrowdAnki extension to allow for collaboration and updating; see the ultimate-geography GitHub repository for an example.
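(In case it's useful to anyone who wants to replicate or extend this: below is a minimal sketch of how such a deck could be generated programmatically. It uses the genanki Python library rather than CrowdAnki, and the deck name, IDs, and card contents are all placeholders.)

```python
import genanki

# Simple front/back note model; genanki requires arbitrary fixed integer IDs
model = genanki.Model(
    1607392319,
    'EA Numbers (placeholder)',
    fields=[{'name': 'Question'}, {'name': 'Answer'}],
    templates=[{
        'name': 'Card 1',
        'qfmt': '{{Question}}',
        'afmt': '{{FrontSide}}<hr id="answer">{{Answer}}',
    }],
)

deck = genanki.Deck(2059400110, 'EA Numbers (placeholder)')

# Placeholder card; the real questions and answers would come from this thread
deck.add_note(genanki.Note(
    model=model,
    fields=['Example question about a number', 'Example answer'],
))

# Write an .apkg file that can be imported into Anki
genanki.Package(deck).write_to_file('ea_numbers.apkg')
```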
Yeah I agree that info on how much absolute impact each grant seems to have had would be more relevant for making such updates. (Though of course absolute impact is very hard to estimate.)
Strictly speaking, the info in the OP is consistent with "99% of all impact came from one grant", and that grant could even be in the "Not as successful as hoped for" bin. (Though taking into account all context/info, I would guess that the highest-impact grants would be in the "More successful than expected" bucket.) And if that were the case, one shouldn't make any updates that would be motivated by "this looks less heavy-tailed than I expected".
Thanks, that makes sense.
There is a part of me which finds the outcome (a 30 to 40% success rate) intuitively disappointing. However, it may suggest that the LTFF was taking the right amount of risk under a hits-based-giving approach.
FWIW, my immediate reaction had been exactly the opposite: "wow, the fact that this skews so positive means the LTFF isn't risk-seeking enough". But I don't know if I'd stand by that assessment after thinking about it for another hour.
To really make this update, I'd want some more bins than the ones Nuno provides. That is, there could be an "extremely more successful than expected" bin; and all that matters is whether you manage to get any grant in that bin.
(For example, I think Roam got a grant in 2018-2019, and they might fall in that bin, though I haven't thought a lot about it.)
Yes, for me, updating upwards on total success based on a lower percentage success rate seems intuitively fairly weird. I'm not saying it's wrong; it's just that I have to stop and think about it/use my System 2.
In particular, you have to have a prior distribution such that more valuable opportunities have a lower success rate. But then you have to have a bag of opportunities such that the worse they do, the more you get excited.
Now, I think this happens if you have a bag with "golden tickets", "sure things", and "duds". Then not doing well would make you ... (read more)
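To illustrate the "golden tickets" intuition with a toy model (made-up numbers, and collapsing the bags to two for brevity): suppose the portfolio is drawn either from a "sure things" bag (high success rate, modest value per hit) or a "golden tickets" bag (low success rate, huge value per hit). Then a lower observed success rate shifts your posterior toward the golden-tickets bag, which can raise your expected value per grant:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of k successes in n independent trials with success rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Two hypotheses about the portfolio (all numbers made up for illustration)
hypotheses = {
    'sure things':    {'p': 0.7, 'value_per_success': 1},
    'golden tickets': {'p': 0.2, 'value_per_success': 100},
}
prior = {'sure things': 0.5, 'golden tickets': 0.5}

n, k = 20, 6  # observe 6 successes out of 20 grants: a 30% hit rate

likelihood = {h: binom_pmf(k, n, v['p']) for h, v in hypotheses.items()}
norm = sum(prior[h] * likelihood[h] for h in hypotheses)
posterior = {h: prior[h] * likelihood[h] / norm for h in hypotheses}

expected_value = sum(
    posterior[h] * hypotheses[h]['p'] * hypotheses[h]['value_per_success']
    for h in hypotheses
)
print(posterior)       # weight shifts strongly toward 'golden tickets'
print(expected_value)  # much higher than 'sure things' alone, despite fewer hits
```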
Also, to be clear, are your original comment and this correction talking about the same survey population? I.e., EA survey takers in the same year(s)? Rather than comparing the results for different survey populations?
How do people who first got involved at 15-17 or 18 compare to people who first got involved age 20-25 (or something like that)? So "unusually young" vs. "median" rather than "unusually young vs. unusually old"?
Thanks! I think I basically agree with everything you say in this comment. I'll need to read your longer comment above to see if there is some place where we do disagree regarding the broadly 'metaethical' level (it does seem clear we land on different object-level views/preferences).
In particular, while I happen to like a particular way of cashing out the "impartial consequentialist" outlook, I (at least on my best-guess view on metaethics) don't claim that my way is the only coherent or consistent way, or that everyone would agree with me in the limit of ideal reasoning, or anything like that.
Sounds cool. 75% that I'll join on Friday from 10:30AM California time for a few hours. If it seemed like spending more time would be useful, I'd join again on Saturday from 10AM California time for a bit.
Lmk if a firmer RSVP would be helpful.
Great! I'm also intuitively optimistic about the effect of these new features on Wiki uptake, editor participation, etc.
Narrowly,"chance favors the prepared mind" and being in either quant trading or cryptography (both competitive fields!) before the crypto boom presumably helps you see the smoke ahead of time, and like you some of the people I know in the space were world-class at an adjacent field like finance trading or programming. Though I'm aware of other people who literally did stuff closer to fly a bunch to Korea and skirt the line on capital restrictions, which seems less reliant on raw or trained talent.
(I agree that having knowledge of or experience ... (read more)
I also now think that the lower end of the 80% interval should probably be more like $5-15B.
Shouldn't your lower bound for the 50% interval be higher than for the 80% interval?
If the intervals were centered - i.e., spanning the 10th to 90th and the 25th to 75th percentiles, respectively - then it should be, yes.
I could now claim that I wasn't giving centered intervals, but I think what is really going on is that my estimates are not diachronically consistent even if I make them within 1 minute of each other.
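(For concreteness, a quick sketch of the nesting property, using a made-up Normal(10, 5) belief:)

```python
from scipy.stats import norm

# Centered intervals: 80% spans the 10th-90th percentiles,
# 50% spans the 25th-75th percentiles (belief distribution is made up)
p10, p90 = norm.ppf([0.10, 0.90], loc=10, scale=5)
p25, p75 = norm.ppf([0.25, 0.75], loc=10, scale=5)
print(f"80% interval: [{p10:.1f}, {p90:.1f}]")  # [3.6, 16.4]
print(f"50% interval: [{p25:.1f}, {p75:.1f}]")  # [6.6, 13.4]: nested, so its lower bound is higher
```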
I think I often have an implicit intuition about something like "how heavy-tailed is this grant?". But I also think most grants I'm excited about are either at least somewhat heavy-tailed or aimed at generating information for a decision about a (potentially heavy-tailed) future grant, so this selection effect will reduce differences between grants along that dimension.
But I think for fewer than 1 in 10 of the grants I think about, I will have an explicit quantitative specification of the distribution in mind. (And if I do, it will be rougher than a full dist... (read more)
As an aside, I think that's an excellent heuristic, and I worry that many EAs (including myself) haven't internalized it enough.
(Though I also worry that pushing too much for it could lead to people failing to notice the exceptions where it doesn't apply.)
My knee-jerk reaction is: If "net negative" means "ex-post counterfactual impact anywhere below zero, but including close-to-zero cases" then it's close to 50% of grantees. Important here is that "impact" means "total impact on the universe as evaluated by some omniscient observer". I think it's much less likely that funded projects are net negative by the light of their own proxy goals or by any criterion we could evaluate in 20 years (assuming no AGI-powered omniscience or similar by then).
(I still think that the total value of the grantee portfolio woul... (read more)
I find your crypto trading examples fairly interesting, and I do feel like they only fit awkwardly with my intuitions - they certainly make me think it's more complicated.
However, one caveat is that "willing to see the opportunity" and "willing to make radical life changes" don't sound quite right to me as conditions, or at least like they omit important things. I think that actually both of these things are practice-able abilities rather than just a matter of "willingness" (or perhaps "willingness" improves with practice).
And in the few cases ... (read more)
As Max_Daniel noted, an underlying theme in this post is that "being successful at conventional metrics" is an important desideratum, but this doesn't reflect the experiences of longtermist EAs I personally know. For example, anecdotally, >60% of longtermists with top-N PhDs regret completing their program, and >80% of longtermists with MDs regret it.
Your examples actually made me realize that "successful at conventional metrics" maybe isn't a great way to describe my intuition (i.e., I misdescribed my view by saying that). Completing a top-N PhD or M... (read more)
[PAI vs. GPAI]
So there is now (well, since June 2020) both a Partnership on AI and a Global Partnership on AI.
Unfortunately, GPAI's and PAI's FAQ pages conspicuously omit "how are you different from (G)PAI?".
Can anyone help?
At first glance it seems that:
I actually think this is surprisingly non-straightforward. Any estimate of the net present value of total longtermist $$ will have considerable uncertainty because it's a combination of several things, many of which are highly uncertain:
FWIW, I actually (and probably somewhat iconoclastically) disagree with this. :P
In particular, I think Part I of Reasons and Persons is underrated, and contains many of the most useful ideas. E.g., it's basically the best reading I know of if you want to get a deep and principled understanding of why 'naive consequentialism' is a bad idea, but also of why worries about naive applications of consequentialism, the demandingness objection, and many other popular objections to consequentialism don't succeed at undermining it as an ultimate criterion of... (read more)
I think all funds are generally making good decisions.
I think a lot of the effect is just that making these decisions is hard, and so that variance between decision-makers is to some extent unavoidable. I think some of the reasons are quite similar to why, e.g., hiring decisions, predicting startup success, high-level business strategy, science funding decisions, or policy decisions are typically considered to be hard/unreliable. Especially for longtermist grants, on top of this we have issues around cluelessness, potentially missing crucial considerations... (read more)
My very off-the-cuff thoughts are:
From an outside view, the actual cost of making the grants from the pot of another fund seems incredibly small. At minimum, it could just be having someone look over the final decisions, see if any feel like they belong in a different fund, quickly double-check with the other fund's grantmakers that they have no strong objections, and then grant the money from the different pot. (You could even do that after the decision to grant has been communicated to applicants; there's no reason to hold up. If the second fund objects, it can still be given by th…
(FWIW, I personally love Reasons and Persons, but I think it's much more "not for everyone" than most of the other books Jiri mentioned. It's just too dry, detailed, and abstract, and has too low a density of immediately action-relevant content.
I do think it could make sense as a 'second book' for people who like that kind of philosophy content and know what they're getting into.)
If you showed me the list here and asked 'Which EA Fund should fund each of these?', I would have put the Lohmar and the CLTR grants (which both look like very good grants that I'm glad are getting funded) in the longtermist fund. Based on your comments above, you might have made the same call as well.
Thank you for sharing - as I mentioned, I find this kind of concrete feedback, spelled out in terms of particular grants, especially useful.
[ETA: btw I do think part of the issue here is an "object-level" disagreement about where the grants best fit - personally, I de... (read more)
OK, on a second thought I think this argument doesn't work because it's basically double-counting: the reason why returns might not diminish much faster than logarithmic may be precisely that new, 'crazy' opportunities become available.
I don't think it's crazy at all. I think this sounds pretty good.
Hmm. Then I'm not sure I agree. When I think of prototypical example scenarios of "business as usual but with more total capital", I kind of agree that they seem less valuable than +20%. But on the other hand, I feel like if I tried to come up with some first-principles-based 'utility function', I'd be surprised if it had returns that diminish much more strongly than logarithmic. (That's at least my initial intuition - not sure I could justify it.) And if it was logarithmic, going from $10B to $100B should add about as much value as going from $1B to $10B, ... (read more)
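(Spelling out the arithmetic of that last step:)

```latex
U(x) = \log x
\;\Rightarrow\;
U(\$100\mathrm{B}) - U(\$10\mathrm{B}) \;=\; \log 10 \;=\; U(\$10\mathrm{B}) - U(\$1\mathrm{B})
```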
I think we roughly agree on the direct effect of fundraising orgs, promoting effective giving, etc., from a longtermist perspective.
However, I suspect I'm (perhaps significantly) more optimistic than you about 'indirect' effects from promoting good content and advice on effective giving, promoting it as a 'social norm', etc. This is roughly because of the view I state under the first key uncertainty here, i.e., I suspect that encountering effective giving can for some people be a 'gateway' toward more impactful behaviors.
One issue is that I think the sign ... (read more)
I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
At first glance, the 20% figure sounded about right to me. However, when thinking a bit more about it, I'm worried that (at least in my case) this is too anchored on imagining "business as usual, but with more total capital". I'm wondering if most of the expected value of an additional $100B - especially when controlled by a single donor who can flexibly deploy it - comes from 'crazy... (read more)
I don't think this should be seen as evidence that these organisations did badly (maybe a bit that they were over-confident), but rather that this was a very difficult situation to do things well in.
I somewhat agree, but I think this point becomes much weaker if it was the case that at the same time when these organizations were giving poor advice some amateurs in the EA and rationality communities had already arrived at better conclusions, would have given better advice, etc.
I didn't follow the relevant conversations closely enough to have much of an inside view ... (read more)
Hi, yes, good point; maybe I am being too generous. FWIW, I don't remember anyone in the EA/rationalist community calling for the strategy that post hoc seems to have worked best: a long lockdown to get to zero cases, followed by border closures etc. to keep cases at zero. (I remember a lot of people, for example, sharing this note, which gets much right but some stuff wrong: e.g., a short lockdown, and that it's comparatively easy to keep R below 1 with social distancing.)
To be clear again, the specific question this analysis addresses is not "is it ethical to eat meat and then pay offsets". The question is "assuming you pay for offsets, is it better to eat chicken or beef?"
(FWIW, this might be worth emphasizing more prominently. When I first read this post and the landing page, it took me a while to understand what question you were addressing.)