I work on various longtermist things, including movement building.
This is indeed my belief about ex ante impact. Thanks for the clarification.
That might achieve the "these might be directly useful" and "produce interesting content" goals, if the reviewers knew how to summarize the books from an EA perspective, how to do epistemic spot checks, and so on, which they probably don't. It wouldn't achieve any of the other goals, though.
Here's a crazy idea. I haven't run it by any EAIF people yet.
I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)
What books are on topic: Anything of interest to people who want to have a massive altruistic impact on the world. More specifically:
Suggested elements of a book review:
I think that "business as usual but with more total capital" leads to way less increased impact than 20%; I am taking into account the fact that we'd need to do crazy new types of spending.
Incidentally, you can't buy the New York Times on public markets; you'd have to do a private deal with the family who runs it.
Re 1: I think that the funds can maybe disburse more money (though I'm a little more bearish on this than Jonas and Max, I think). But I don't feel very excited about increasing the amount of stuff we fund by lowering our bar; as I've said elsewhere on the AMA the limiting factor on a grant to me usually feels more like "is this grant so bad that it would damage things (including perhaps EA culture) in some way for me to make it" than "is this grant good enough to be worth the money".
I think that the funds' RFMF is only slightly real--I think that giving to the EAIF has some counterfactual impact but not very much, and the impact comes from slightly weird places. For example, I personally have access to EA funders who are basically always happy to fund things that I want them to fund. So being an EAIF fund manager doesn't really increase my ability to direct money at promising projects that I run across. (It's helpful to have the grant logistics people from CEA, though, which makes the EAIF grantmaking experience a bit nicer.) The advantages I get from being an EAIF fund manager are that EAIF seeks applications and so I get to make grants I wouldn't have otherwise known about, and also that Michelle, Max, and Jonas sometimes provide useful second opinions on grants.
And so I think that if you give to the EAIF, I do slightly more good via grantmaking. But the mechanism is definitely not via me having access to more money.
Is it that they have room for more funding only for things other than supporting EA-aligned research(ers)?
I think that it will be easier to increase our grantmaking for things other than supporting EA-aligned researchers with salaries, because this is almost entirely limited by how many strong candidates there are, and it seems hard to increase this directly with active grantmaking. In contrast, I feel more optimistic about doing active grantmaking to encourage retreats for researchers etc.
Do you think increasing available funding wouldn't help with any EA stuff, or do you just mean for increasing the amount/quality/impact of EA-aligned research(ers)?
I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
I think that increasing available funding basically won't help at all for causing interventions of the types you listed in your post--all of those are limited by factors other than funding.
(Non-longtermist EA is more funding constrained of course--there's enormous amounts of RFMF in GiveWell charities, and my impression is that farm animal welfare also could absorb a bunch of money.)
Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too). I think that research on effective giving is particularly useless because projects differ widely in their value, and my impression is that effective giving is mostly going to get people to give to relatively bad giving opportunities.
High Impact Athletes is an EAIF grantee who I feel positive about; I am enthusiastic about them not because they might raise funds but because they might be able to get athletes to influence culture in various ways (eg influencing public feelings about animal agriculture etc). And so I think it makes sense for them to initially focus on fundraising, but that's not where I expect most of their value to come from.
I am willing to fund orgs that attempt to just do fundraising, if their multiplier on their expenses is pretty good, because marginal money has more than zero value and I'd rather we had twice as much money. But I think that working for such an org is unlikely to be very impactful.
I am planning on checking in with grantees to see how well they've done, mostly so that I can learn more about grantmaking and to know if we ought to renew funding.
I normally didn't make specific forecasts about the outcomes of grants, because operationalization is hard and scary.
I feel vaguely guilty about not trying harder to write down these proxies ahead of time. But empirically I don't, and my intuitions apparently aren't that optimistic about working on this. I am not sure why. I think it's maybe just that operationalization is super hard and I feel like I'm going to have to spend more effort figuring out reasonable proxies than actually thinking about the question of whether this grant will be good, and so I feel drawn to a more "I'll know it when I see it" approach to evaluating my past grants.
Like Max, I don't know about such a policy. I'd be very excited to fund promising projects to support the rationality community, eg funding local LessWrong/Astral Codex Ten groups.
Re 1: I don't think I would have granted more.
Re 2: Mostly "good applicants with good proposals for implementing good project ideas" and "grantmaker capacity to solicit or generate new project ideas", where the main bottleneck on the second of those isn't really generating the basic idea but coming up with a more detailed proposal and figuring out who to pitch on it etc.
Re 3: I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don't think that low quality applications make my life as a grantmaker much worse; if you're reading this, please submit your EAIF application rather than worry that it is not worth our time to evaluate.
Re 4: It varies. Mostly it isn't that the applicant lacks a specific skill.
Re 5: There are a bunch of things that have to align in order for someone to make a good proposal. There has to be a good project idea, and there has to be someone who would be able to make that work, and they have to know about the idea and apply for funding for it, and they need access to whatever other resources they need. Many of these steps can fail. Eg probably there are people who I'd love to fund to do a particular project, but no-one has had the idea for the project, or someone has had the idea but the person who'd be good at it hasn't heard about it, or hasn't decided that it's promising, or doesn't want to try it because they don't have access to some other resource. My current guess is that there are good project ideas out there, and people who'd be good at doing them, and if we can connect the people to the projects and the required resources we could make some great grants; I hope to spend more of my time doing this in future.
Re your 19 interventions, here are my quick takes on all of them
Creating, scaling, and/or improving EA-aligned research orgs
Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.
Creating, scaling, and/or improving EA-aligned research training programs
I am in favor of this. I think one of the biggest bottlenecks here is finding people who are willing to mentor people in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research, eg by mentoring people for an hour or two a week on projects that the mentor finds inside-view interesting (and therefore will actually be bought into helping with). I think that in situations like this, it's very helpful for the mentor to be judged as Andrew Grove suggests: by the output of their organization plus the output of neighboring organizations under their influence. That is, they should treat having the research interns do things that the mentor actually thinks are useful as one of their key goals. I think that not having this goal makes it much more tempting for the mentors to kind of snooze on the job and not really try to make the experience useful.
Increasing grantmaking capacity and/or improving grantmaking processes
Yeah this seems good if you can do it, but I don't think this is that much of the bottleneck on research. It doesn't take very much time to evaluate a grant for someone to do research compared to how much time it takes to mentor them.
My current unconfident position is that I am very enthusiastic about funding people to do research if they have someone who wants to mentor them and be held somewhat accountable for whether they do anything useful. And so I'd love to get more grant applications from people describing their research proposal and saying who their mentor is; I can make that grant in like two hours (30 mins to talk to the grantee, 30 mins to talk to the mentor, 60 mins overhead). If the grants are for 4 months, then I can spend five hours a week and do all the grantmaking for 40 people. This feels pretty leveraged to me and I am happy to spend that time, and therefore I don't feel much need to scale this up more.
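The time estimates above can be sanity-checked with some quick arithmetic (a rough sketch; the per-grant times are the ones quoted, and the 4.33-weeks-per-month conversion is my assumption):

```python
# Rough check of the "five hours a week for 40 people" estimate above.
hours_per_grant = 0.5 + 0.5 + 1.0    # talk to grantee + talk to mentor + overhead
num_grantees = 40
grant_length_weeks = 4 * 4.33        # 4-month grants, ~4.33 weeks/month (assumed)

total_hours = hours_per_grant * num_grantees       # 80 hours per cohort
hours_per_week = total_hours / grant_length_weeks  # ~4.6 hours/week

print(total_hours, round(hours_per_week, 1))       # 80.0 4.6
```

So 40 grants at two hours each works out to roughly 80 hours spread over a four-month cohort, i.e. a bit under five hours a week, consistent with the figure quoted.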
I think that grantmaking capacity is more of a bottleneck for things other than research output.
Scaling Effective Thesis, improving it, and/or creating new things sort-of like it
I don't immediately feel excited by this for longtermist research; I wouldn't be surprised if it's good for animal welfare stuff but I'm not qualified to judge. I think that most research areas relevant to longtermism require high context in order to contribute to, and I don't think that pushing people in the direction of good thesis topics is very likely to produce extremely useful research.
I'm not confident.
Increasing and/or improving EAs’ use of non-EA options for research-relevant training, credentials, testing fit, etc.
The post doesn't seem to exist yet, so idk.
Increasing and/or improving research by non-EAs on high-priority topics
I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can't think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I'm excited about work like The case for aligning narrowly superhuman models, because I think that this kind of work might make it easier to cause less aligned people to do useful stuff.
Creating a central, editable database to help people choose and do research projects
I feel pessimistic; I don't think that this is the bottleneck. I think that people doing research projects without mentors is much worse, and if we had solved that problem, then we wouldn't need this database as much. This database is mostly helpful in the very-little-supervision world, and so doesn't seem like the key thing to work on.
Using Elicit (an automated research assistant tool) or a similar tool
I feel pessimistic, but idk, maybe Elicit is really amazing. (It seems at least pretty cool to me, but idk how useful it is.) Seems like if it's amazing we should expect it to be extremely commercially successful; I think I'll wait to see if I'm hearing people rave about it, and then try it if so.
Forecasting the impact projects will have
I think this is worth doing to some extent, obviously; my guess is that EAs aren't as into forecasting as they should be (including me, unfortunately). I'd need to know your specific proposal in order to have more specific thoughts.
Adding to and/or improving options for collaborations, mentorship, feedback, etc. (including from peers)
I think that facilitating junior researchers to connect with each other is somewhat good but doesn't seem as good as having them connect more with senior researchers somehow.
Improving the vetting of (potential) researchers, and/or better “sharing” that vetting
I'm into this. I designed a noticeable fraction of the Triplebyte interview at one point (and delivered it hundreds of times); I wonder whether I should try making up an EA interview.
Increasing and/or improving career advice and/or support with network-building
Seems cool. I think a major bottleneck here is people who are extremely extroverted, have lots of background, and are willing to spend a huge amount of time talking to a huge number of people. I think that the job "spend many hours a day talking, for 30 minutes each, to EAs who aren't as well connected as would be ideal, in the hope of answering their questions and connecting them to people and encouraging them" is not as good as what I'm currently doing with my time, but it feels like a tempting alternative.
I am excited for people trying to organize retreats where they invite a mix of highly-connected senior researchers and junior researchers to one place to talk about things. I would be excited to receive grant applications for things like this.
Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers
I'm not sure that this is better than providing funding to people, though it's worth considering. I'm worried that it has some bad selection effects, where the most promising people are more likely to have money that they can spend living in closer proximity to EA hubs (and are more likely to have other sources of funding) and so the cheapo EA accommodations end up filtering for people who aren't as promising.
Another way of putting this is that I think it's kind of unhealthy to have a bunch of people floating around trying unsuccessfully to get into EA research; I'd rather they tried to get funding to try it really hard for a while, and if it doesn't go well, they have a clean break from the attempt and then try to do one of the many other useful things they could do with their lives, rather than slowly giving up over the course of years and infecting everyone else with despair.
Creating and/or improving relevant educational materials
I'm not sure; it seems worth people making some materials, but I'd think that we should mostly be relying on materials not produced by EAs.
Creating, improving, and/or scaling market-like mechanisms for altruism
I am a total sucker for this stuff, and would love to make it happen; I don't think it's a very leveraged way of working on increasing the EA-aligned research pipeline though.
Increasing and/or improving the use of relevant online forums
Yeah I'm into this; I think that strong web developers should consider reaching out to LessWrong and saying "hey do you want to hire me to make your site better".
Increasing the number of EA-aligned aspiring/junior researchers
I think Ben Todd is wrong here. I think that the number of extremely promising junior researchers is totally a bottleneck and we totally have mentorship capacity for them. For example, I have twice run across undergrads at EA Global who I was immediately extremely impressed by and wanted to hire (they both did MIRI internships and have IMO very impactful roles (not at MIRI) now). I think that I would happily spend ten hours a week managing three more of these people, and the bottleneck here is just that I don't know many new people who are that talented (and to a lesser extent, who want to grow in the ways that align with my interests).
I think that increasing the number of people who are eg top 25% of research ability among Stanford undergrads is less helpful, because more of the bottleneck for these people is mentorship capacity. Though I'd still love to have more of these people. I think that I want people who are between 25th and 90th percentile intellectual promisingness among top schools to try first to acquire some specific and useful skill (like programming really well, or doing machine learning, or doing biology literature reviews, or clearly synthesizing disparate and confusing arguments), because they can learn these skills without needing as much mentorship from senior researchers and then they have more of a value proposition to those senior researchers later.
Increasing the amount of funding available for EA-aligned research(ers)
This seems almost entirely useless; I don't think this would help at all.
Discovering, writing, and/or promoting positive case studies
Seems like a good use of someone's time.
This was a pretty good list of suggestions. I guess my takeaways from this are:
I feel very unsure about this. I don't think my position on this question is very well thought through.
Most of the time, the reason I don't want to make a grant doesn't feel like "this isn't worth the money", it feels like "making this grant would be costly for some other reason". For example, when someone applies for a salary to spend some time researching some question which I don't think they'd be very good at researching, I usually don't want to fund them, but this is mostly because I think it's unhealthy in various ways for EA to fund people to flail around unsuccessfully rather than because I think that if you multiply the probability of the research panning out by the value of the research, you get an expected amount of good that is worse than longtermism's last dollar.
I think this question feels less important to me because the grants it affects are marginal anyway. I think that more than half of the impact I have via my EAIF grantmaking is through the top 25% of the grants I make. And I am able to spend more time on making those best grants go better, by working on active grantmaking or by advising grantees in various ways. Coming up with a more consistent answer to "where should the bar be" seems like a worse use of my time than those other activities.
I think I would rather make 30% fewer grants and keep the saved money in a personal account where I could disburse it later.
(To be clear, I am grateful to the people who apply for EAIF funding to do things, including the ones who I don't think we should fund, or only marginally think we should fund; good on all of you for trying to think through how to do lots of good.)