All of Max_Daniel's Comments + Replies

More EAs should consider “non-EA” jobs

FWIW, I think I did not consider non-EA jobs nearly enough right after my master's in 2016. However, my situation was somewhat idiosyncratic, and I'm not sure it could happen today in this form.

I ended up choosing between one offer from an EA org and one offer from a mid-sized German consulting firm. I chose the EA org. I think it's kind of insane that I hadn't even applied to, or strongly considered, more alternatives, and I think it's highly unclear if I made the right choice.

8 · Linch · 1mo: I do think on average people don't apply to enough jobs relative to the clear benefits of having more options. I'm not sure why this is, and also don't have a sense of whether utility will increase if we tell people to apply to more EA or EA-adjacent jobs vs. more jobs outside the movement. Naively I'd have guessed the former, to be honest.
Post on maximizing EV by diversifying w/ declining marginal returns and uncertainty

I don't remember a post, but Daniel Kokotajlo recently said the following in a conversation. Someone with a maths background should find it easy to check & make this precise.

> It is a theorem, I think, that if you are allocating resources between various projects that each have logarithmic returns to resources, and you are uncertain about how valuable the various projects are but expect that most of your total impact will come from whichever project turns out to be best (i.e. the distribution of impact is heavy-tailed) then you should, as a first approximation, allocate your resources in proportion to your credence that a project will be the best.
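
Here is a rough numerical sketch of one way to probe this claim. Everything in the setup (the lognormal priors, the specific parameters, the form v_i * log(x_i) for returns) is my illustrative assumption, not something from the conversation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: three projects whose per-unit value coefficients v_i
# are uncertain and heavy-tailed (lognormal with a large sigma). Spending
# x_i > 0 on project i yields impact v_i * log(x_i).
mus = np.array([0.0, 0.5, 1.0])  # log-scale locations of the three priors
sigma = 2.0                      # large sigma => heavy-tailed uncertainty
v = rng.lognormal(mean=mus[:, None], sigma=sigma, size=(3, 200_000))

expected_v = v.mean(axis=1)  # E[v_i]
p_best = (v.argmax(axis=0)[None, :] == np.arange(3)[:, None]).mean(axis=1)

# With impact v_i * log(x_i) and a fixed budget B, a standard Lagrangian
# argument gives the EV-maximizing split x_i = B * E[v_i] / sum_j E[v_j].
budget = 100.0
print("optimal (prop. to E[v]):   ", (budget * expected_v / expected_v.sum()).round(1))
print("theorem (prop. to P(best)):", (budget * p_best / p_best.sum()).round(1))
```

How closely the two printed allocations agree, and under which tail assumptions, is exactly what a precise statement of the theorem would have to settle; Michael's reply below argues they can come apart.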

2 · MichaelStJules · 1mo: This looks interesting, but I'd want to see a formal statement. Is it the expected value that's logarithmic, or expected value conditional on nonzero (or sufficiently high) value?

tl;dr: I think under one reasonable interpretation, with logarithmic expected value and precise distributions, the theorem is false. It might be true if made precise in a different way.

If

1. you only care about expected value,
2. you had the expected value of each project as a function of resources spent (assuming logarithmic expected returns already assumes a lot, but does leave a lot of room), and
3. how much you fund one doesn't affect the distribution of any others (holding their funding constant),

then the uncertainty doesn't matter (with precise probabilities), only the expected values do. So allocating in proportion to your credence that each project will be best depends on something that doesn't actually matter that much, i.e. your credence that the project will be best, because you can hold the expected values for a project constant while adjusting the probability that it's best.

To be more concrete, we could make all of the projects statistically independent and either return 0 or some high value with some tiny probability, and the value or probability of positive return scales with the amount of resources spent on the project, so that the expected values scale logarithmically. Let's also assume only two projects (or a number that scales sufficiently slowly with the inverse probability of any one of them succeeding). Then, conditional on nonzero impact, your impact will with probability very close to 1 come from whichever project you fund succeeds, since it'll be very unlikely that multiple will. So, I think we've satisfied the stated conditions of the theorem, and it recommends allocating in proportion to our credences in each project being best, which, with very low independent probabilities of success across projects, is roughly the credence that the pro
4 · RyanCarey · 1mo: Mathematical theorems you had no idea existed, cause they're false... [https://www.facebook.com/BestTheorems/]
Most research/advocacy charities are not scalable

But the EA Infrastructure Fund currently only has ~$65k available

Hi, thanks for mentioning this - I am the chairperson of the EA Infrastructure Fund and wanted to quickly comment on this: We do have room for more funding, but the $65k number is too low. As of one week ago, the EAIF had at least $290k available. (The website now shows $270k for me, not $65k.)

It is currently hard to get accurate numbers, including for ourselves at EA Funds, due to an accounting change at CEA. Apologies for any confusion this might cause. We will fix the number on the web... (read more)

7 · Jonas Vollmer · 1mo: I have edited all our fund pages to include the following sentence:
3 · jared_m · 2mo: Thank you for sharing these — I may pick up the Clarke book as summer reading!
EA Infrastructure Fund: Ask us anything!

(I'd be very interested in your answer if you have one btw.)

The Centre for the Governance of AI is becoming a nonprofit

FWIW I agree that for some lines of work you might want to do, managing conflicts of interest is very important, and I'm glad you're thinking about how to do this.

Linch's Shortform

That seems fair. To be clear, I think "ground truth" isn't the exact framing I'd want to use, and overall I think the best version of such an exercise would encourage some degree of skepticism about the alleged 'better' answer as well.

Assuming it's framed well, I think there are both upsides and downsides to using examples that are closer to EA vs. clearer-cut. I'm uncertain on what seemed better overall if I could only do one of them.

Another advantage of my suggestion in my view is that it relies less on mentors. I'm concerned that having mentors that are... (read more)

Upon (brief) reflection I agree that relying on the epistemic savviness of the mentors might be too much and the best version of the training program will train a sort of keen internal sense of scientific skepticism that's not particularly reliant on social approval.  

If we have enough time I would float a version of a course that slowly goes from very obvious crap (marketing tripe, bad graphs), into things that are subtler crap (Why We Sleep, Bem ESP stuff), into weasely/motivated stuff (Hickel? Pinker? Sunstein? popular nonfiction in general?), into th... (read more)

EA Infrastructure Fund: Ask us anything!

I haven't thought a ton about the implications of this, but my initial reaction also is to generally be open to this.

So if you're reading this and are wondering if it could be worth it to submit an application for funding for past expenses, then I think the answer is that we'd at least consider it, so potentially yes.

If you're reading this and it really matters to you what the EAIF's policy on this is going forward (e.g., if it's decision-relevant for some project you might start soon), you might want to check with me before going ahead. I'm not sure I'll be... (read more)

Linch's Shortform

I would be very excited about someone experimenting with this and writing up the results. (And would be happy to provide EAIF funding for this if I thought the details of the experiment were good and the person a good fit for doing this.)

If I had had more time, I would have done this for the EA In-Depth Fellowship seminars I designed and piloted recently.

I would be particularly interested in doing this for cases where there is some amount of easily transmissible 'ground truth' people can use as feedback signal. E.g.

  • You first let people red-team deworming p
... (read more)
4 · Linch · 2mo: Hmm I feel more uneasy about the truthiness grounds of considering some of these examples as "ground truth" (except maybe the Clauset et al example, not sure). I'd rather either a) train people to Red Team existing EA orthodoxy stuff and let their own internal senses + mentor guidance decide whether the red teaming is credible, or b) for basic scientific literacy stuff where you do want clear ground truths, let them challenge stuff that's closer to obvious junk (Why We Sleep, some climate science stuff, maybe some covid papers, maybe pull up examples from Calling Bullshit [https://www.amazon.com/Calling-Bullshit-Skepticism-Data-Driven-World/dp/0525509186], which I have not read).
COVID: How did we do? How can we know?

We even saw an NYT article about the CDC and whether reform is possible.

There were some other recent NYT articles which, based on my limited COVID knowledge, I thought were pretty good, e.g. on the origin of the virus or airborne vs. droplet transmission [1].

The background of their author, however, seems fairly consistent with an "established experts and institutions largely failed" story:

Zeynep Tufekci, a contributing opinion writer for The New York Times, writes about the social impacts of technology. She is an assistant professor in the School of Informat

... (read more)
How to get technological knowledge on AI/ML (for non-tech people)

This is great, thank you so much for sharing. I expect that many people will be in a similar situation, and that I and others will therefore link to this post many times in the future.

(For the same reason, I also think that pointers to potentially better resources by others in the comments would be very valuable.)

You can now apply to EA Funds anytime! (LTFF & EAIF only)

(The following is just my view, not necessarily the view of other EAIF managers. And I can't speak for the LTFF at all.)

FWIW I can think of a number of circumstances I'd consider a "convincing reason" in this context. In particular, cases where people know they won't be available for 6-12 months because they want to wrap up some ongoing unrelated commitment, or cases where large lead times are common (e.g., PhD programs and some other things in academia).

I think as with most other aspects of a grant, I'd make decisions on a case-by-case basis that would be... (read more)

Thanks a lot, this is useful context. I work in academia, so the large lead times are relevant, particularly because other 'traditional' funders would require applications well in advance. It would be useful to know whether it was necessary to pursue those other funding routes as a 'career hedge' or not, for example, via a commitment to funding.

I am interested to hear if anyone from LTFF agrees/disagrees with Max's assessment in these circumstances. 

What are some key numbers that (almost) every EA should know?

We've now turned most of these into Anki cards

Amazing, thank you so much!

What are some key numbers that (almost) every EA should know?

I'm afraid I don't know of great sources for the numbers you list. They may also only exist for the distribution of compute. Perhaps the numbers on the EA community are too uncertain and dynamic to be a good fit for Anki anyway. On the other hand, it may be mainly the order of magnitude that is interesting, and it should be possible to get this right using crude proxies.

One proxy for the size of the EA community could be the number of EA survey respondents (or perhaps one above a certain engagement level). 

On the other points:

  • For the Great Decoupling
... (read more)
4 · Pablo · 3mo: Thanks! It hadn't occurred to me to use the graph as the figure, but that's a good idea. On reflection, we could perhaps use "image occlusion [https://ankiweb.net/shared/info/1374772155]" for this or other questions.
Ben Garfinkel's Shortform

I agree with most of what you say here.

[ETA: I now realize that I think the following is basically just restating what Pablo already suggested in another comment.]

I think the following is a plausible & stronger concern, which could be read as a stronger version of your crisp concern #3.

"Humanity has not had meaningful control over its future, but AI will now take control one way or the other. Shaping the transition to a future controlled by AI is therefore our first and last opportunity to take control. If we mess up on AI, not only have we failed to s... (read more)

Which non-EA-funded organisations did well on Covid?

This NYTimes Magazine article might be interesting. Its framing is basically "why did the CDC fail, and how can it do better next time?". 

It mentions some other groups that allegedly did better than the CDC. Though I don't know to what extent these groups were or were not EA-funded. E.g., it says:

The Covid Rapid Response Working Group, at the Edmond J. Safra Center for Ethics at Harvard, was one of several independent organizations that stepped in to help fill the gap. In the last year, these groups, run mostly out of academic centers and private foun

... (read more)
A ranked list of all EA-relevant documentaries, movies, and TV series I've watched

I'm sure there are a number of interesting movies and documentaries on nuclear security.

Three movies that come to mind immediately:

  1. WarGames - a 1983 film that I found simultaneously interesting and very silly. The plot features the US giving control of their nuclear arsenal to an AI system running on a supercomputer (you can guess where it goes from here), a teenage hacker excitedly exclaiming "let's play 'Global Thermonuclear War'", and Tic Tac Toe as the solution to this film's version of the AI alignment problem. Curiously enough, Wikipedia claims that:
... (read more)

Another relevant film is The Day After, which was seen by 100 million Americans—"the most-watched television film in the history of the medium" (Hänni 2016)—and was instrumental in changing Reagan's nuclear policy.

  • “President Ronald Reagan watched the film several days before its screening, on November 5, 1983. He wrote in his diary that the film was "very effective and left me greatly depressed," and that it changed his mind on the prevailing policy on a "nuclear war". The film was also screened for the Joint Chiefs of Staff. A government advisor who atte
... (read more)
What are some key numbers that (almost) every EA should know?

I like this idea. Here is some brainstorming output. Apologies for it being unedited/not sorted by categories:

  • Age of the universe
  • Age of the Earth
  • Age of homo sapiens
  • Timing of major transitions in evolution
  • Timing of invention of writing, agriculture, and the Industrial Revolution
  • Gross world product
  • Time for which Earth remains habitable absent big intervention
  • Number of working days in a year
  • Number of working hours in a year
  • Net present value of expected lifetime earnings of some reference class such as "graduate from roughly such-and-such uni and discipline"
  • Go
... (read more)
8 · Pablo · 3mo: We've now turned most of these into Anki cards, but I'd appreciate pointers to reliable sources or estimates for the following:

  • Net present value of expected total EA-aligned capital by cause area/worldview
  • Number of people working on certain cause areas such as AI safety, GCBR reduction, nuclear security, ...
  • How much total compute there is, and how it's distributed (e.g. supercomputers vs. gaming consoles vs. personal computers vs. ...)
  • How much EAs should discount future financial resources
  • Size of the EA community

For others, I have the relevant information (or know where to find it), but am not sure what numbers should be used to express it:

  • The 'Great Decoupling' of labor productivity from jobs + wages in the US
  • Some key stats about the distribution of world income and how it has changed, e.g., Milanovic's "elephant graph" and follow-ups
  • Some key stats about impact distributions where we have them, e.g., on how heavy-tailed the DCP2 global health cost-effectiveness numbers are

(This is addressed to anyone in a position to help, not just to Max. Thanks.)

FINAL UPDATE: The deck is now published.

This is amazing. I'd be happy to create an Anki deck for these and any other numbers suggested in this thread.

EDIT: Judging from the upvotes, there seems to be considerable interest in this. I will wait a few days until people stop posting answers and will then begin creating the deck. I'll probably use the CrowdAnki extension to allow for collaboration and updating; see the ultimate-geography GitHub repository for an example.

2018-2019 Long Term Future Fund Grantees: How did they do?

Yeah I agree that info on how much absolute impact each grant seems to have had would be more relevant for making such updates. (Though of course absolute impact is very hard to estimate.)

Strictly speaking the info in the OP is consistent with "99% of all impact came from one grant", and it could even be one of the "Not as successful as hoped for". (Though taking into account all context/info I would guess that the highest-impact grants would be in the bucket "More successful than expected".) And if that was the case one shouldn't make any updates that would be motivated by "this looks less heavy-tailed than I expected".

2018-2019 Long Term Future Fund Grantees: How did they do?

Thanks, that makes sense.

  • I agree with everything you say about the GovAI example (and more broadly your last paragraph).
  • I do think my system 1 seems to work a bit differently since I can imagine some situations in which I would find it intuitive to update upwards on total success based on a lower 'success rate' - though it would depend on the definition of the success rate. I can also tell some system-2 stories, but I don't think they are conclusive.
    • E.g., I worry that a large fraction of outcomes with "impact at least x" might reflect a selection process t
... (read more)
4 · NunoSempere · 3mo: Makes sense. In particular, noticing that grants are all particularly legible might lead you to update in the direction of a truncated distribution like you consider. So far, the LTFF seems like it has maybe moved a bit in the direction of more legibility, but not that much.
2018-2019 Long Term Future Fund Grantees: How did they do?
  • There is a part of me which finds the outcome (a 30 to 40% success rate) intuitively disappointing. However, it may suggest that the LTFF was taking the right amount of risk, as per a hits-based-giving approach.

FWIW, my immediate reaction had been exactly the opposite: "wow, the fact that this skews so positive means the LTFF isn't risk-seeking enough". But I don't know if I'd stand by that assessment after thinking about it for another hour.

To really make this update, I'd want some more bins than the ones Nuno provides. That is, there could be an "extremely more successful than expected" bin; and all that matters is whether you manage to get any grant in that bin.

(For example, I think Roam got a grant in 2018-2019, and they might fall in that bin, though I haven't thought a lot about it.) 

Yes, for me updating upwards on total success on a lower percentage success rate seems intuitively fairly weird. I'm not saying it's wrong, it's that I have to stop and think about it/use my system 2. 

In particular, you have to have a prior distribution such that more valuable opportunities have a lower success rate. But then you have to have a bag of opportunities such that the worse they do, the more you get excited.

Now, I think this happens if you have a bag with "golden tickets", "sure things", and "duds".  Then not doing well would make you ... (read more)
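
A minimal Bayesian toy model of that "bag" intuition (all numbers are mine and purely illustrative): if grants come either from a "sure things" bag or from a "golden tickets plus duds" bag, then a lower observed success rate shifts credence toward the golden-ticket bag and can raise expected total value.

```python
from math import comb

# Two hypothetical "bags" (my numbers, not Nuno's). Under H_sure every grant
# is a modest sure thing; under H_golden most grants are duds, but each
# success is a golden ticket worth far more.
p_success = {"H_sure": 0.7, "H_golden": 0.3}
value_per_success = {"H_sure": 1.0, "H_golden": 30.0}
prior = {"H_sure": 0.5, "H_golden": 0.5}

def posterior_and_ev(n, k):
    """Posterior over bags, and expected total value, after k of n grants succeed."""
    like = {h: comb(n, k) * p**k * (1 - p)**(n - k) for h, p in p_success.items()}
    z = sum(prior[h] * like[h] for h in prior)
    post = {h: prior[h] * like[h] / z for h in prior}
    ev = sum(post[h] * k * value_per_success[h] for h in prior)
    return post, ev

for k in (7, 3):  # a 70% vs. a 30% success rate out of 10 grants
    post, ev = posterior_and_ev(10, k)
    print(f"{k}/10 succeeded: P(H_golden) = {post['H_golden']:.2f}, EV = {ev:.1f}")
# The 30% success rate yields the *higher* expected value: doing "worse" is
# evidence you are holding golden tickets.
```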

Some thoughts on EA outreach to high schoolers

Also, to be clear, are your original comment and this correction talking about the same survey population? I.e., EA survey takers in the same year(s)? Rather than comparing the results for different survey populations?

4 · David_Moss · 3mo: Yes, these are all based on analyses which I did on EAS 2019 data.
Some thoughts on EA outreach to high schoolers

How do people who first got involved at 15-17 or 18 compare to people who first got involved age 20-25 (or something like that)? So "unusually young" vs. "median" rather than "unusually young vs. unusually old"?

9 · David_Moss · 3mo: People who first got involved at 18 (or 19) are about the same as people who got involved at 21 (i.e. a little bit lower than the peak at 20). People who first got involved at 17 are about the same as people who first got involved 22-23. For people who first got involved 15 or 16, the confidence intervals are getting pretty wide, because fewer respondents joined at these ages, but they're each a little less engaged, being most similar to those who first got involved in their mid-late 20s or 30s respectively.

In short, the trend is pretty smooth both before and after 20, but mid to late 30s it seems to level out a bit, temporarily. You might want to open these images in new windows to see them full size. And finally, this is visually messy, but split by cohort, which could confound things otherwise. We'll be presenting analyses of this using EAS2020 data in the Engagement post shortly.
4 · Benjamin_Todd · 3mo: I'm going to leave it to David Moss or Eli to answer questions about the data, since they've been doing the analysis.
Progress studies vs. longtermist EA: some differences

Thanks! I think I basically agree with everything you say in this comment. I'll need to read your longer comment above to see if there is some place where we do disagree regarding the broadly 'metaethical' level (it does seem clear we land on different object-level views/preferences).

In particular, while I happen to like a particular way of cashing out the "impartial consequentialist" outlook, I (at least on my best-guess view on metaethics) don't claim that my way is the only coherent or consistent way, or that everyone would agree with me in the limit of ideal reasoning, or anything like that.

Vignettes Workshop (AI Impacts)

Sounds cool. 75% that I'll join on Friday from 10:30AM California time for a few hours. If it seemed like spending more time would be useful, I'd join again on Saturday from 10AM California time for a bit.

Lmk if a firmer RSVP would be helpful.

EA Infrastructure Fund: May 2021 grant recommendations

Great! I'm also intuitively optimistic about the effect of these new features on Wiki uptake, editor participation, etc.

My current impressions on career choice for longtermists

Narrowly,"chance favors the prepared mind" and being in either quant trading or cryptography (both competitive fields!) before the crypto boom presumably helps you see the smoke ahead of time, and like you some of the people I know in the space were world-class at an adjacent field like finance trading or programming.  Though I'm aware of other people who literally did stuff closer to fly a bunch to Korea and skirt the line on capital restrictions, which seems less reliant on raw or trained talent. 

(I agree that having knowledge of or experience ... (read more)

4 · Linch · 3mo: Got it! Thanks for the clarification! Though I originally read your comment as an extension of my points rather than arguing against them, so no confusion on my end (though of course I'm not the only audience of your comments, so these clarifications may still be helpful).
EA Infrastructure Fund: Ask us anything!

I also now think that the lower end of the 80% interval should probably be more like $5-15B.

EA Infrastructure Fund: Ask us anything!

Shouldn't your lower bound for the 50% interval be higher than for the 80% interval?

If the intervals were centered - i.e., spanning the 10th to 90th and the 25th to 75th percentile, respectively - then it should be, yes.

I could now claim that I wasn't giving centered intervals, but I think what is really going on is that my estimates are not diachronically consistent even if I make them within 1 minute of each other.

2 · Max_Daniel · 3mo: I also now think that the lower end of the 80% interval should probably be more like $5-15B.
EA Infrastructure Fund: Ask us anything!

I think I often have an implicit intuition about something like "how heavy-tailed is this grant?". But I also think most grants I'm excited about are either at least somewhat heavy-tailed or aimed at generating information for a decision about a (potentially heavy-tailed) future grant, so this selection effect will reduce differences between grants along that dimension.

But I think for less than 1/10 of the grants I think about I will have any explicit quantitative specification of the distribution in mind. (And if I have one, it will be rougher than a full dist... (read more)

EA Infrastructure Fund: Ask us anything!

As an aside, I think that's an excellent heuristic, and I worry that many EAs (including myself) haven't internalized it enough.

(Though I also worry that pushing too much for it could lead to people failing to notice the exceptions where it doesn't apply.)

2 · MichaelA · 3mo: [thinking/rambling aloud] I feel like an "ideal reasoner" or something should indeed have that heuristic, but I feel unsure whether boundedly rational people internalising it more, or having it advocated for to them more, would be net positive or net negative. (I feel close to 50/50 on this and haven't thought about it much; "unsure" doesn't mean "I suspect it'd probably be bad.")

I think this intersects with concerns about naive consequentialism [https://forum.effectivealtruism.org/tag/naive-vs-sophisticated-consequentialism] and (less so) potential downsides of using explicit probabilities [https://forum.effectivealtruism.org/posts/KfqFLDkoccf8NQsQe/potential-downsides-of-using-explicit-probabilities].

If I had to choose whether to make most of the world closer to naive consequentialism than they are now, and I can't instead choose sophisticated consequentialism, I'd probably do that. But I'm not sure for EA grantmakers. And of course sophisticated consequentialism seems better.

Maybe there's a way we could pair this heuristic with some other heuristics or counter-examples such that the full package is quite useful. Or maybe adding more of this heuristic would already help "balance things out", since grantmakers may already be focusing somewhat too much on downside risk. I really don't know.
EA Infrastructure Fund: Ask us anything!

My knee-jerk reaction is: If "net negative" means "ex-post counterfactual impact anywhere below zero, but including close-to-zero cases", then it's close to 50% of grantees. Important here is that "impact" means "total impact on the universe as evaluated by some omniscient observer". I think it's much less likely that funded projects are net negative by the lights of their own proxy goals or by any criterion we could evaluate in 20 years (assuming no AGI-powered omniscience or similar by then).

(I still think that the total value of the grantee portfolio woul... (read more)

2 · Linch · 3mo: Thanks a lot for this answer! After asking this, I realize I'm also interested in asking the same question about what ratio of grants you almost [https://forum.effectivealtruism.org/posts/KesWktndWZfGcBbHZ/ea-infrastructure-fund-ask-us-anything?commentId=mJDaubdwsFfiFggzS] funded would be ex post net-negative.
My current impressions on career choice for longtermists

I find your crypto trading examples fairly interesting, and I do feel like they only fit awkwardly with my intuitions - they certainly make me think it's more complicated.

However, one caveat is that "willing to see the opportunity"  and "willing to make radical life changes" don't sound quite right to me as conditions, or at least like they omit important things. I think that actually both of these things are practice-able abilities rather than just a matter of "willingness" (or perhaps "willingness" improves with practice). 

And in the few cases ... (read more)

7 · Linch · 3mo: I agree with this! Narrowly, "chance favors the prepared mind" and being in either quant trading or cryptography (both competitive fields!) before the crypto boom presumably helps you see the smoke ahead of time, and like you, some of the people I know in the space were world-class at an adjacent field like finance trading or programming. Though I'm aware of other people who literally did stuff closer to flying a bunch to Korea [https://www.investopedia.com/terms/k/kimchi-premium.asp] and skirting the line on capital restrictions, which seems less reliant on raw or trained talent.

Broadly, I agree that both seeing the opportunity (serendipity?) and willingness to act on crazy opportunities are rare skillsets that are somewhat practicable rather than just a pure innate disposition. This is roughly what I mean by

But I also take your point that maybe this is its own skillset (somewhat akin to/a subset of "entrepreneurship") rather than a general notion of excellence.
My current impressions on career choice for longtermists

As Max_Daniel noted, an underlying theme in this post is that "being successful at conventional metrics" is an important desideratum, but this doesn't reflect the experiences of longtermist EAs I personally know. For example, anecdotally, >60% of longtermists with top-N PhDs regret completing their program, and >80% of longtermists with MDs regret it.

Your examples actually made me realize that "successful at conventional metrics" maybe isn't a great way to describe my intuition (i.e., I misdescribed my view by saying that). Completing a top-N PhD or M... (read more)

9 · Linch · 3mo: As an aside, if you're up for asking your friends/colleagues a potentially awkward question, I'd be interested in seeing how much of my own anecdata about EAs with PhDs/MDs replicates in your own (EA) circles (which is presumably more Oxford-based than mine). I think it's likely that EAs outside of the Bay Area weigh the value of a PhD/other terminal degrees more, but I don't have a strong sense of how big the differences are quantitatively.
Max_Daniel's Shortform

[PAI vs. GPAI]

So there is now (well, since June 2020) both a Partnership on AI and a Global Partnership on AI.

Unfortunately, GPAI's and PAI's FAQ pages conspicuously omit "how are you different from (G)PAI?".

Can anyone help?

At first glance it seems that:

  • PAI brings together a very large number of below-state actors of different types: e.g., nonprofits, academics, for-profit AI labs, ...
  • GPAI members are countries
  • PAI's work is based on 4 high-level goals that each are described in about two sentences [?]
  • GPAI's work is based on the OECD Recommendation on Artifi
... (read more)
4 · RyanCarey · 3mo: I think PAI exists primarily for companies to contribute to beneficial AI and harvest PR benefits from doing so. Whereas GPAI is a diplomatic apparatus, for Trudeau and Macron to influence the conversation [https://webcache.googleusercontent.com/search?q=cache:io4PSJs05rUJ:https://sciencebusiness.net/news/france-and-canada-move-forward-plans-global-ai-expert-council+&cd=1&hl=en&ct=clnk&gl=uk] surrounding AI.
EA Infrastructure Fund: Ask us anything!

I actually think this is surprisingly non-straightforward. Any estimate of the net present value of total longtermist $$ will have considerable uncertainty because it's a combination of several things, many of which are highly uncertain:

  • How much longtermist $$ is there now?
    • This is the least uncertain one. It's not super straightforward and requires nonpublic knowledge about the wealth and goals of some large individual donors, but I'd be surprised if my estimate on this was off by 10x.
  • What will the financial returns on current longtermist $$ be before they
... (read more)
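
To illustrate why these components compound into a wide overall interval, here is a minimal Monte Carlo sketch. Every distribution in it is a placeholder of mine, not an estimate from this comment:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Placeholder distributions for the uncertain components listed above:
# capital now, financial returns until deployment, and a multiplier for
# future donors. (Discounting back to a present value would add yet
# another uncertain input.)
capital_now = rng.lognormal(np.log(20), 0.5, n)   # $B of longtermist capital now
annual_return = rng.normal(0.05, 0.03, n)         # real financial return p.a.
years = rng.integers(5, 31, n)                    # years until deployment
future_donors = rng.lognormal(np.log(2), 0.7, n)  # growth from new donors

future_capital = capital_now * (1 + annual_return) ** years * future_donors
print("10th/50th/90th percentiles ($B):",
      np.percentile(future_capital, [10, 50, 90]).round())
# Individually tame-looking inputs still produce roughly an
# order-of-magnitude spread in the output.
```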
4 · MichaelA · 3mo: Interesting, thanks. Shouldn't your lower bound for the 50% interval be higher than for the 80% interval? Or is the second interval based on different assumptions, e.g. including/ruling out some AI stuff? (Not sure this is an important question, given how much uncertainty there is in these numbers anyway.)
EA Infrastructure Fund: May 2021 grant recommendations

FWIW, I actually (and probably somewhat iconoclastically) disagree with this. :P

In particular, I think Part I of Reasons and Persons is underrated, and contains many of the most useful ideas. E.g., it's basically the best reading I know of if you want to get a deep and principled understanding for why 'naive consequentialism' is a bad idea, but why at the same time worries about naive applications of consequentialism or the demandingness objection and many other popular objections to consequentialism don't succeed at undermining it as ultimate criterion of... (read more)

3 · Misha_Yagudin · 3mo: (Hey Max, consider reposting this to goodreads if you are on the platform.)
6 · Linch · 3mo: Thanks for the contrarian take, though I still tentatively stand by my original stances. I should maybe mention 2 caveats here:

  1. I also only read Reasons and Persons ~4 years ago, and my memory can be quite faulty. In particular, I don't remember many good arguments against naive consequentialism. To me, it really felt like parts 1 and 2 were mainly written as justification for axioms/"lemmas" invoked in parts 3 and 4, axioms that most EAs already buy.
  2. My own context for reading the book was trying to start a Reasons and Persons book club right after he passed away. Our book club dissolved in the middle of reading section 2. I kept reading on, and I distinctly remember wishing that we had continued onwards, because sections 3 and 4 would have kept the other book clubbers engaged etc. (obviously this is very idiosyncratic and particular to our own club).
EA Infrastructure Fund: May 2021 grant recommendations

I think all funds are generally making good decisions.

I think a lot of the effect is just that making these decisions is hard, and so that variance between decision-makers is to some extent unavoidable. I think some of the reasons are quite similar to why, e.g., hiring decisions, predicting startup success, high-level business strategy, science funding decisions, or policy decisions are typically considered to be hard/unreliable. Especially for longtermist grants, on top of this we have issues around cluelessness, potentially missing crucial considerations... (read more)

4 · weeatquince · 3mo: I am always amazed at how much you fund managers all do given this isn't your paid job!

Fair enough. FWIW my general approach to stuff like this is not to aim for perfection but to aim for each iteration/round to be a little bit better than the last.

That is possible. But also possible that you are particularly smart and have well thought-out views and people learn more from talking to you than you do from talking to them! (And/or just that everyone is different and different ways of learning work for different people)
EA Infrastructure Fund: Ask us anything!

My very off-the-cuff thoughts are:

  • If it seems like you are in an especially good position to assess that org, you should give to them directly. This could, e.g., be the case if you happened to know the org's founders especially well, or if you had rare subject-matter expertise relevant to assessing that org.
  • If not, you should give to a donor lottery.
  • If you win the donor lottery, you would probably benefit from coordinating with EA Funds. Literally giving the donor lottery winnings to EA Funds would be a solid baseline, but I would hope that many people can
... (read more)
EA Infrastructure Fund: May 2021 grant recommendations

From an outside view the actual cost of making the grants from the pot of another fund seems incredibly small. At minimum it could just be having someone look over the end decisions and see if any feel like they belong in a different fund, then quickly double-checking with the other fund's grantmakers that they have no strong objections, and then granting the money from a different pot. (You could even do that after the decision to grant has been communicated to applicants, no reason to hold up, if the second fund objects then can still be given by th

... (read more)
9 · weeatquince · 3mo: Thank you so much for your thoughtful and considered reply. Sorry to change topic but this is super fascinating and more interesting to me than questions of fund admin time (however much I like discussing organisational design I am happy to defer to you / Jonas / etc. on whether the admin cost is too high – ultimately only you know that). Why would there be so much disagreement (so much that you would routinely want to veto each other's decisions if you had the option)? It seems plausible that if there are such levels of disagreement, maybe:

  1. One fund is making quite poor decisions AND/OR
  2. There is significant potential to use consensus decision-making tools as a large group to improve decision quality AND/OR
  3. There are some particularly interesting lessons to be learned by identifying the cruxes of these disagreements.

Just curious and typing up my thoughts. Not expecting good answers to this.
6 · Larks · 3mo: Thanks for writing up this detailed response. I agree with your intuition here that 'review, refer, and review again' could be quite time consuming. However, I think it's worth considering why this is the case.

Do we think that the EAIF evaluators are similarly qualified to judge primarily-longtermist activities as the LTFF people, and the differences of views are basically noise? If so, it seems plausible to me that the EAIF evaluators should be able to unilaterally make disbursements from the LTFF money. In this setup, the specific fund you apply to is really about your choice of evaluator, not about your choice of donor, and the fund you donate to is about your choice of cause area, not your choice of evaluator-delegate.

In contrast, if the EAIF people are not as qualified to judge primarily-longtermist (or primarily animal rights, etc.) projects as the specialised funds' evaluators, then they should probably refer the application early on in the process, prior to doing detailed due diligence etc.
EA Infrastructure Fund: May 2021 grant recommendations

(FWIW, I personally love Reasons and Persons but I think it's much more "not for everyone" than most of the other books Jiri mentioned. It's just too dry, detailed, abstract, and has too small a density of immediately action-relevant content.

I do think it could make sense as a 'second book' for people who like that kind of philosophy content and know what they're getting into.)

2 · Linch · 3mo: I agree that it's less readable than all the books Jiri mentioned except maybe Superintelligence. Pro-tip for any aspiring Reasons-and-Persons-readers in the audience: skip (or skim) sections I and II. Sections III (personal identity) and IV (population ethics) are where the meat is, especially section III.
EA Infrastructure Fund: May 2021 grant recommendations

If you showed me the list here and said 'Which EA Fund should fund each of these?' I would have put the Lohmar and the CLTR grants (which both look like v good grants and glad they are getting funded) in the longtermist fund. Based on your comments above  you might have made the same call as well.

Thank you for sharing - as I mentioned I find this concrete feedback spelled out in terms of particular grants particularly useful.

[ETA: btw I do think part of the issue here is an "object-level" disagreement about where the grants best fit - personally, I de... (read more)

4 · weeatquince · 3mo: Thank you Max. I guess the interesting question then is why we think different things. Is it just a natural case of different people thinking differently, or have I made a mistake, or is there some way the funds could better communicate? One way to consider this might be to look at just the basic info / fund scope on both the EAIF and LTFF pages and ask: "if the man on the Clapham omnibus only read this information here and the description of these funds, where would they think these grants would sit?"
EA Infrastructure Fund: Ask us anything!

OK, on second thought I think this argument doesn't work because it's basically double-counting: the reason why returns might not diminish much faster than logarithmically may be precisely that new, 'crazy' opportunities become available.

7 · Jonas Vollmer · 3mo: Here's a toy model:

  • A production function [https://en.wikipedia.org/wiki/Cobb%E2%80%93Douglas_production_function] roughly along the lines of utility = funding ^ 0.2 * talent ^ 0.6 (this has diminishing returns to funding*talent, but the returns diminish slowly)
  • A default assumption that longtermism will eventually end up with $30-$300B in funding; let's assume $100B

Increasing the funding from $100B to $200B would then increase utility by 15%.
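
A quick check of the stated arithmetic, holding talent fixed so that utility scales as funding^0.2 (this just restates Jonas's numbers):

$$\frac{U(\$200\text{B})}{U(\$100\text{B})} = \left(\frac{200}{100}\right)^{0.2} = 2^{0.2} \approx 1.149, \quad \text{i.e. roughly a 15\% increase.}$$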
Buck's Shortform

I don't think it's crazy at all. I think this sounds pretty good.

EA Infrastructure Fund: Ask us anything!

Hmm. Then I'm not sure I agree. When I think of prototypical example scenarios of "business as usual but with more total capital", I kind of agree that they seem less valuable than +20%. But on the other hand, I feel like if I tried to come up with some first-principles-based 'utility function' I'd be surprised if it had returns that diminish much more strongly than logarithmic. (That's at least my initial intuition - not sure I could justify it.) And if it was logarithmic, going from $10B to $100B should add about as much value as going from $1B to $10B, ... (read more)

[This comment is no longer endorsed by its author]
2 · Max_Daniel · 3mo: OK, on second thought I think this argument doesn't work because it's basically double-counting: the reason why returns might not diminish much faster than logarithmically may be precisely that new, 'crazy' opportunities become available.
EA Infrastructure Fund: Ask us anything!

I think we roughly agree on the direct effect of fundraising orgs, promoting effective giving, etc., from a longtermist perspective.

However, I suspect I'm (perhaps significantly) more optimistic than you about 'indirect' effects from promoting good content and advice on effective giving, promoting it as a 'social norm', etc. This is roughly because of the view I state under the first key uncertainty here, i.e., I suspect that encountering effective giving can for some people be a 'gateway' toward more impactful behaviors.

One issue is that I think the sign ... (read more)

EA Infrastructure Fund: Ask us anything!

I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.

At first glance the 20% figure sounded about right to me. However, when thinking a bit more about it, I'm worried that (at least in my case) this is too anchored on imagining "business as usual, but with more total capital". I'm wondering if most of the expected value of an additional $100B - especially when controlled by a single donor who can flexibly deploy them - comes from 'crazy... (read more)

4 · Buck · 3mo: I think that "business as usual but with more total capital" leads to way less increased impact than 20%; I am taking into account the fact that we'd need to do crazy new types of spending. Incidentally, you can't buy the New York Times on public markets; you'd have to do a private deal with the family who runs it.
How well did EA-funded biorisk organisations do on Covid?

I don’t think this should be seen as evidence that these organisations did badly (maybe a bit that they were over-confident) but that this was a very difficult situation to do things well in.

I somewhat agree, but I think this point becomes much weaker if it was the case that at the same time when these organizations were giving poor advice some amateurs in the EA and rationality communities had already arrived at better conclusions, would have given better advice, etc.

I didn't follow the relevant conversations closely enough to have much of an inside view ... (read more)

Hi, yes, good point, maybe I am being too generous.

FWIW I don't remember anyone in the EA / rationalist community calling for the strategy that post hoc seems to have worked best: a long lockdown to get to zero cases, followed by border closures etc. to keep cases at zero. (I remember a lot of people, for example, sharing this note, which gets much right but some stuff wrong: e.g., a short lockdown and that it would be comparatively easy to keep R below 1 with social distancing.)

Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare

To be clear again, the specific question this analysis addresses is not "is it ethical to eat meat and then pay offsets". The question is "assuming you pay for offsets, is it better to eat chicken or beef?"

(FWIW, this might be worth emphasizing more prominently. When I first read this post and the landing page, it took me a while to understand what question you were addressing.)
