[Speaking for myself, not Open Philanthropy]
Empirically, I've observed some but not huge amounts of overlap between higher-rated applicants to the LTFF and applicants to Open Philanthropy's programs; I'd estimate around 10%. And my guess is the "best historical grant opportunities" that Habryka is referring to[1] are largely in object-level AI safety work, which Open Philanthropy doesn’t have any open applications for right now (though it’s still funding individuals and research groups sourced through other means, and I think it may fund some of ...
I'm planning on notifying relevant applicants this week (if/assuming we don't get a sudden increase in donations).
Hey! I applied at the end of April and haven't received any notification like this nor a rejection, and I'm not sure what this means about the status of my application. I emailed twice over the past 4 months, but haven't received a reply :/
Re: deemphasizing expertise:
I feel kind of confused about this-- I agree in theory re: EV of marginal grants, but my own experience interacting with grant evaluations from people who I've felt were weaker has been that sometimes they’re in favor of rejecting a grant that I think would be really good, or missing a consideration that I think would make a grant pretty bad, and furthermore it's often hard to quickly tell if this is the case, e.g. they'll give a stylized summary of what's going on with the applicant, but I won't know how much to trust that summ...
I'm commenting here to say that while I don't plan to participate in public discussion of the FTX situation imminently (for similar reasons to the ones Holden gives above, though I don't totally agree with some of Holden's explanations here, and personally put more weight on some considerations here than others), I am planning to do so within the next several months. I'm sorry for how frustrating that is, though I endorse my choice.
We’re currently planning on keeping it open at least for the next month, and we’ll provide at least a month of warning if we close it down.
Sorry about the delay on this answer. I do think it’s important that organizers genuinely care about the objectives of their group (which I think can be different from being altruistic, especially for non-effective altruism groups). I think you’re right that that’s worth listing in the must-have criteria, and I’ve added it now.
I assume the main reason this criterion wouldn’t be true is if someone wanted to do organizing work just for the money, which I think we should be trying hard to select against.
“even if the upside of them working out could really be quite valuable” is the part I disagree with most in your comment. (Again, speaking just for myself), I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside; my overall calculus was something like “this doesn’t seem like it has big upside (because the policy asks don’t seem all that good), and also has some downside (because of person/project-specific factors)”. It would be nice if we did quantified risk analysis for all of our grant applications, but ult...
FWIW, I think this kind of questioning is fairly Habryka-specific and not really standard for our policy applicants; I think in many cases I wouldn’t expect that it would lead to productive discussions (and in fact could be counterproductive, in that it might put off potential allies who we might want to work with later).
I make the calls on who is the primary evaluator for which grants; as Habryka said, I think he is probably most skeptical of policy work among people on the LTFF, and hasn’t been the primary evaluator for almost any (maybe none?) of the po...
FWIW I think if this is just how Habryka works then that is totally fine from my point of view. If it helps him make good decisions then great.
(From the unusualness of the questioning approach and the focus on "why policy" I took it to be a sign that the LTFF was very sceptical of policy change as an approach compared to other approaches, but I may have been mistaken in making this assumption based on this evidence.)
Rebecca Kagan is currently working as a fund manager for us (sorry for the not-up-to-date webpage).
That's really cool! Seems like exactly the kind of person you'd want for policy grantmaking, with previous experience in federal agencies, think tanks, and campaigns. Thanks for sharing.
Hey, Sam – first, thanks for taking the time to write this post, and running it by us. I’m a big fan of public criticism, and I think people are often extra-wary of criticizing funders publicly, relative to other actors in the space.
Some clarifications on what we have and haven’t funded:
Thank you Abergal, I hope my critique is helpful. I mean it to be constructive.
I don’t think I disagree with anything at all that you wrote here!! So glad we are mostly on the same page.
(In fact you suggest "we also differ in our views of the upsides of some of this work" and I am not sure that is the case. I am fairly sceptical of much of it, especially more AI focused stuff.)
I still expect the main disagreements are on:
Here are answers to some other common questions about the University Organizer Fellowship that I received in office hours:
If I apply and get rejected, is there a “freezing period” where I can’t apply again?
We don’t have an official freezing period, but I think we generally won’t spend time reevaluating someone within 3 months of when they last applied, unless they give some indication on the application that something significant has changed in that time.
If you’re considering applying, I really encourage you not to wait– I think for the vast major...
I’m not sure that I agree with the premise of the question – I don’t think EA is trying all that hard to build a mainstream following (and I’m not sure that it should).
Interpreting this as “who is responsible for evaluating whether the Century Fellowship is a good use of time and money”, the answer is: someone on our team will probably try and do a review of how the program is going after it’s been running for a while longer; we will probably share that evaluation with Holden, co-CEO of Open Phil, as well as possibly other advisors and relevant stakeholders. Holden approves longtermist Open Phil grants and broadly thinks about which grants are/aren’t the best uses of money.
Each application has a primary evaluator who is on our team (current evaluators: me, Bastian Stern, Eli Rose, Kasey Shibayama, and Claire Zabel). We also generally consult / rely heavily on assessments from references or advisors, e.g. other staff at Open Phil or organizations who we work closely with, especially for applicants hoping to do work in domains we have less expertise in.
When we were originally thinking about the fellowship, one of the cases for impact was making community building a more viable career (hence the emphasis in this post), but it’s definitely intended more broadly for people working on the long-term future. I’m pretty unsure how the fellowship will shake out in terms of community organizers vs researchers vs entrepreneurs long-term – we’ve funded a mix so far (including several people who I’m not sure how to categorize / are still unsure about what they want to do).
(The cop-out answer is “I would like the truth-seeking organizers to be more ambitious, and the ambitious organizers to be more truth-seeking”.)
If I had to choose one, I think I’d go with truth-seeking. It doesn’t feel very close to me, especially among existing university group effective altruism-related organizers (maybe Claire disagrees), largely because I think there’s already been a big recent push towards ambition there, so I think people are generally already thinking pretty ambitiously.
I feel differently about e.g. rationality local group organizers, I wish they would be more ambitious.
Hi Minh– sorry for the confusion! That footer was actually from an older version of the page that referenced eligible locations for the Centre for Effective Altruism’s city and national community building grant program; I’ve now deleted it.
I encourage organizers from any university to apply, including those in Singapore.
I think the LTFF will publish a payout report for grants through ~December in the next few weeks. As you suggest, we've been delayed because the number of grants we're making has increased substantially so we're pretty limited on grantmaker capacity right now (and writing the reports takes a somewhat substantial amount of time).
I like IanDavidMoss's suggestion of having a simpler list rather than delaying (and maybe we could publish more detailed justifications later)-- I'll strongly consider doing that for the payout report after this one.
Confusingly, the report called "May 2021" was for grants we made through March and early April of 2021, so this report includes most of April, May, June, and July.
I think we're going to standardize now so that reports refer to the months they cover, rather than the month they're released.
I like this idea; I'll think about it and discuss with others. I think I want grantees to be able to preserve as much privacy as they want (including not being listed in even really broad pseudo-anonymous classifications), but I'm guessing most would be happy to opt-in to something like this.
(We've done anonymous grant reports before but I think they were still more detailed than people would like.)
We got feedback from several people that they weren't applying to the funds because they didn't want to have a public report. There are lots of reasons that I sympathize with for not wanting a public report, especially as an individual (e.g. you're worried about it affecting future job prospects, you're asking for money for mental health support and don't want that to be widely known, etc.). My vision (at least for the Long-Term Future Fund) is to become a good default funding source for individuals and new organizations, and I think that vision is compromised if some people don't want to apply for publicity reasons.
Broadly, I think the benefits to funding more people outweigh the costs to transparency.
Thanks for the response.
Is there a way to make things pseudo-anonymous, revealing the type of grants being made privately but preserving the anonymity of the grant recipient? It seems like that preserves a lot of the value of what you want to protect without much downside.
For example, I'd be personally very skeptical that giving grants for personal mental support would be the best way to improve the long-term future and would make me less likely to support the LTFF and if all such grants weren't public, I wouldn't know that. There might also be peopl...
Another potential reason for optimism is that we'll be able to use observations from early on in the training runs of systems (before models are very smart) to affect the pool of Saints / Sycophants / Schemers we end up with. I.e., we are effectively "raising" the adults we hire, so it could be that we're able to detect if 8-year-olds are likely to become Sycophants / Schemers as adults and discontinue or modify their training accordingly.
Sorry this was unclear! From the post:
There is no deadline to apply; rather, we will leave this form open indefinitely until we decide that this program isn’t worth running, or that we’ve funded enough work in this space. If that happens, we will update this post noting that we plan to close the form at least a month ahead of time.
I will bold this so it's more clear.
There's no set maximum; we expect to be limited by the number of applications that seem sufficiently promising, not the cost.
FWIW I had a similar initial reaction to Sophia, though reading more carefully I totally agree that it's more reasonable to interpret your comment as a reaction to the newsletter rather than to the proposal. I'd maybe add an edit to your high-level comment just to make sure people don't get confused?
Really appreciate the clarifications! I think I was interpreting "humanity loses control of the future" in a weirdly temporally narrow sense that makes it all about outcomes, i.e. where "humanity" refers to present-day humans, rather than humans at any given time period. I totally agree that future humans may have less freedom to choose the outcome in a way that's not a consequence of alignment issues.
I also agree value drift hasn't historically driven long-run social change, though I kind of do think it will going forward, as humanity has more power to shape its environment at will.
Wow, I just learned that Robin Hanson has written about this, because obviously, and he agrees with you.
Do you have the intuition that absent further technological development, human values would drift arbitrarily far? It's not clear to me that they would-- in that sense, I do feel like we're "losing control": even non-extinction AI is enabling a new set of possibilities that modern-day humans would endorse much less than the decisions future humans would otherwise make. (It does also feel like we're missing the opportunity to "take control" and enable a new set of possibilities that we would endorse much more.)
Relatedly, it doesn't feel to me like the values of humans 150,000 years ago and humans now and even ems in Age of Em are all that different on some more absolute scale.
I think we probably will seek out funding from larger institutional funders if our funding gap persists. We actually just applied for a ~$1M grant from the Survival and Flourishing Fund.
I agree with the thrust of the conclusion, though I worry that focusing on task decomposition this way elides the fact that the descriptions of the O*NET tasks already assume your unit of labor is fairly general. Reading many of these, I actually feel pretty unsure about the level of generality or common-sense reasoning required for an AI to straightforwardly replace that part of a human's job. Presumably there's some restructure that would still squeeze a lot of economic value out of narrow AIs that could basically do these things, but that restructure isn't captured looking at the list of present-day O*NET tasks.
I'm also a little skeptical of your "low-quality work dilutes the quality of those fields and attracts other low-quality work" fear--since high citation count is often thought of as an ipso facto measure of quality in academia, it would seem that if work attracts additional related work, it is probably not low quality.
The difference here is that most academic fields are pretty well-established, whereas AI safety, longtermism, and longtermist subparts of most academic fields are very new. The mechanism for attracting low-quality work I'm imagining is that s...
I was confused about the situation with debate, so I talked to Evan Hubinger about his experiences. That conversation was completely wild; I'm guessing people in this thread might be interested in hearing it. I still don't know exactly what to make of what happened there, but I think there are some genuine and non-obvious insights relevant to public discourse and optimization processes (maybe less to the specifics of debate outreach). The whole thing's also pretty funny.
I recorded the conversation; don't want to share publicly but feel free to DM me for access.
I imagine this could be one of the highest-leverage places to apply additional resources and direction though. People who are applying for funding for independent projects are people who desire to operate autonomously and execute on their own vision. So I imagine they'd require much less direction than marginal employees at an EA organization, for instance.
I don't have a strong take on whether people rejected from the LTFF are the best use of mentorship resources. I think many employees at EA organizations are also selected for being self-directed. I know ...
Sadly, I think those changes would in fact be fairly large and would take up a lot of fund manager time. I think small modifications to original proposals wouldn't be enough, and it would require suggesting new projects or assessing applicants holistically and seeing if a career change made sense.
In my mind, this relates to ways in which mentorship is a bottleneck in longtermist work right now-- there are probably lots of people who could be doing useful direct work, but they would require resources and direction that we as a community don't have the capacity for. I don't think the LTFF is well-placed to provide this kind of mentorship, though we do offer to give people one-off feedback on their applications.
I think many applicants who we reject could apply with different proposals that I'd be more excited to fund-- rejecting an application doesn't mean I think there's no good direct work the applicant could do.
I would guess some people would be better off earning to give, but I don't know that I could say which ones just from looking at one application they've sent us.
(To be clear, I think it's mostly just that we have more applications, and less that the mean application is significantly better than before.)
In several cases increased grant requests reflect larger projects or requests for funding for longer time periods. We've also definitely had a marked increase in the average individual salary request per year-- setting aside whether this is justified, this runs into a bunch of thorny issues around secondary effects that we've been discussing this round. I think we're likely to prioritize having a more standardized policy for individual salaries by next grant round.
This round, we switched from a system where we had all the grant discussion in a single spreadsheet to one where we discuss each grant in a separate Google doc, linked from a single spreadsheet. One fund manager has commented that they feel less on-top of this grant round than before as a result. (We're going to rethink this system again for next grant round.) We also changed the fund composition a bunch-- Helen and Matt left, I became chair, and three new guest managers joined. A priori, this could cause a shift in standards, though I have no particular r...
There's no strict 'minimum number'-- sometimes the grant is clearly above or below our bar and we don't consult anyone, and sometimes we're really uncertain or in disagreement, and we end up consulting lots of people (I think some grants have had 5+).
I will also say that each fund is somewhat intentionally composed of fund managers with somewhat varying viewpoints who trust different sets of experts, and the voting structure is such that if any individual fund manager is really excited about an application, it generally gets funded. As a result, I think in...
I can't respond for Adam, but just wanted to say that I personally agree with you, which is one of the reasons I'm currently excited about funding independent work.
Hey! I definitely don't expect people starting AI safety research to have a track record doing AI safety work-- in fact, I think some of our most valuable grants are paying for smart people to transition into AI safety from other fields. I don't know the details of your situation, but in general I don't think "former physics student starting AI safety work" fits into the category of "project would be good if executed exceptionally well". In that case, I think most of the value would come from supporting the transition of someone who could potentially be re...
Sherry et al. have a more exhaustive working paper about algorithmic progress in a wide variety of fields.
Also a big fan of your report. :)
Historically, what has caused the subjectively biggest-feeling updates to your timelines views? (e.g. arguments, things you learned while writing the report, events in the world).
Thanks! :)
The first time I really thought about TAI timelines was in 2016, when I read Holden's blog post. That got me to take the possibility of TAI soonish seriously for the first time (I hadn't been explicitly convinced of long timelines earlier or anything, I just hadn't thought about it).
Then I talked more with Holden and technical advisors over the next few years, and formed the impression that there was a relatively simple argument that many technical advisors believed that if a brain-sized model could be transformative, then there's a relativ...
Hey Ryan:
- Thanks for flagging that the EA Funds form still says that the funds will definitely get back to applicants within 8 weeks; I think that's real bad.
- I agree that it would be good to have a comprehensive plan-- personally, I think that if the LTFF fails to hire additional FT staff in the next few months (in particular, a FT chair), the fund should switch back to a round-based application system. But it's ultimately not my call.