All of abergal's Comments + Replies

Hey Ryan:

- Thanks for flagging that the EA Funds form still says that the funds will definitely get back to applicants within 8 weeks; I think that's real bad.

- I agree that it would be good to have a comprehensive plan-- personally, I think that if the LTFF fails to hire additional full-time staff in the next few months (in particular, a full-time chair), the fund should switch back to a round-based application system. But it's ultimately not my call.

[Speaking for myself, not Open Philanthropy]

Empirically, I've observed some but not huge amounts of overlap between higher-rated applicants to the LTFF and applicants to Open Philanthropy's programs; I'd estimate around 10%. And my guess is the "best historical grant opportunities" that Habryka is referring to[1] are largely in object-level AI safety work, which Open Philanthropy doesn’t have any open applications for right now (though it’s still funding individuals and research groups sourced through other means, and I think it may fund some of ... (read more)

1
AnonymousTurtle
7mo
Thank you for the detailed reply; that seems like surprisingly little overlap, and I hope more apply. Also really glad to hear that OP may fund some of the MATS scholars, as the original post mentioned that "some of [the unusual funding constraint] is caused by a large number of participants of the SERI MATS program applying for funding to continue the research they started during the program, and those applications are both highly time-sensitive and of higher-than-usual quality". Thank you again for taking the time to reply given the extreme capacity constraints.

I'm planning on notifying relevant applicants this week (if/assuming we don't get a sudden increase in donations).

Hey! I applied at the end of April and haven't received any notification like this, nor a rejection, and I'm not sure what this means about the status of my application. I emailed twice over the past 4 months but haven't received a reply :/

Re: deemphasizing expertise:

I feel kind of confused about this-- I agree in theory re: EV of marginal grants, but my own experience interacting with grant evaluations from people who I've felt were weaker has been that they're sometimes in favor of rejecting a grant that I think would be really good, or they miss a consideration that I think would make a grant pretty bad. Furthermore, it's often hard to quickly tell if this is the case: e.g., they'll give a stylized summary of what's going on with the applicant, but I won't know how much to trust that summ... (read more)

9
Joel Becker
8mo
Thank you for the helpful replies, Asya.

Re: deemphasizing expertise: I would imagine that some of the time saved in hiring expert grantmakers could be spent training junior grantmakers. (In my somewhat analogous experience running selection for a highly competitive program, I certainly notice that some considerations that I now think are very important were entirely missing from my early decision-making!) Should I think about your comment as coming from a hypothetical that is net or gross of that time investment?

As for improved set-ups, how about something like the following (see the sketch after this comment):

1. Junior grantmaker receives disproportionate training on downside considerations.
2. Junior grantmaker evaluates grants and rates downside risk.
3. Above some downside-risk cut-off, if the junior grantmaker wants to give funding, the senior grantmaker checks in.
4. Below the cut-off, if the junior grantmaker wants to give funding, the funding is approved without further checks.

(If you think missing great grants is a bigger deal than accepting bad ones, analogously change the above.) Intuitively, I would guess that this set-up could improve quite a bit on the waste-of-your-time and speed problems, without giving up too much on better-grants-by-your-lights. But I'm sure I'm missing helpful context.

Re: comparing to FTXFF and Manifund: Definitely makes sense that the pitches are different. I guess I would have thought of this as part of "other hiring criteria you might have" -- considerations that make it more challenging to select from the pool of people with some grantmaking experience, but for which some people tick the box.
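A minimal sketch of the triage rule in steps 2-4 above, assuming a hypothetical 0-1 downside-risk rating and an arbitrary cut-off value (both illustrative, not anything the LTFF actually uses):

```python
def triage_grant(junior_wants_to_fund: bool,
                 downside_risk: float,
                 risk_cutoff: float = 0.3) -> str:
    """Decide what happens to an application after the junior grantmaker's review.

    downside_risk: the junior grantmaker's 0-1 rating of downside risk (hypothetical scale).
    risk_cutoff: hypothetical threshold above which a senior check-in is required.
    """
    if not junior_wants_to_fund:
        return "reject"
    if downside_risk >= risk_cutoff:
        return "escalate to senior grantmaker"
    return "approve without further checks"


# A low-risk grant the junior grantmaker likes is approved directly;
# a higher-risk one gets a senior check-in.
print(triage_grant(junior_wants_to_fund=True, downside_risk=0.1))
print(triage_grant(junior_wants_to_fund=True, downside_risk=0.6))
```

(Swapping the comparison direction would give the "missing great grants is worse" variant mentioned in the parenthetical.)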

I'm commenting here to say that while I don't plan to participate in public discussion of the FTX situation imminently (for similar reasons to the ones Holden gives above, though I don't totally agree with some of Holden's explanations here, and personally put more weight on some considerations here than others), I am planning to do so within the next several months. I'm sorry for how frustrating that is, though I endorse my choice.

-2
Charles He
1y
And?     ..     ..     ..   Just joking! I'm joking, sorry!  *pulls on rainbow dash costume*

We’re currently planning on keeping it open at least for the next month, and we’ll provide at least a month of warning if we close it down.

Sorry about the delay on this answer. I do think it’s important that organizers genuinely care about the objectives of their group (which I think can be different from being altruistic, especially for non-effective altruism groups). I think you’re right that that’s worth listing in the must-have criteria, and I’ve added it now.

I assume the main reason this criterion wouldn’t be true is if someone wanted to do organizing work just for the money, which I think we should be trying hard to select against.

“even if the upside of them working out could really be quite valuable” is the part I disagree with most in your comment. (Again, speaking just for myself), I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside; my overall calculus was something like “this doesn’t seem like it has big upside (because the policy asks don’t seem all that good), and also has some downside (because of person/project-specific factors)”. It would be nice if we did quantified risk analysis for all of our grant applications, but ult... (read more)

2
IanDavidMoss
2y
Thanks for the response! That's fair, and I should also be clear that I'm less familiar with LTFF's grantmaking than some others in the EA universe. Oh, I totally agree that the kind of risk analysis I mentioned is not costless, and for EA Funds in particular it seems like too much to expect. My main point is that in the absence of it, it's not necessarily an optimal strategy to substitute an extreme version of the precautionary principle instead. Overall, I agree that judging policy/institution-focused projects primarily based on upside makes sense.

FWIW, I think this kind of questioning is fairly Habryka-specific and not really standard for our policy applicants; I think in many cases I wouldn’t expect that it would lead to productive discussions (and in fact could be counterproductive, in that it might put off potential allies who we might want to work with later).

I make the calls on who is the primary evaluator for which grants; as Habryka said, I think he is probably most skeptical of policy work among people on the LTFF, and hasn’t been the primary evaluator for almost any (maybe none?) of the po... (read more)

FWIW I think if this is just how Habryka works then that is totally fine from my point of view. If it helps him make good decisions then great. 

(From the unusualness of the questioning approach and the focus on "why policy" I took it to be a sign that the LTFF was very sceptical of policy change as an approach compared to other approaches, but I may have been mistaken in making this assumption based on this evidence.)

Rebecca Kagan is currently working as a fund manager for us (sorry for the not-up-to-date webpage).

3
Misha_Yagudin
1y
Is Rebecca still a fund manager, or is the LTFF page out of sync?

That's really cool! Seems like exactly the kind of person you'd want for policy grantmaking, with previous experience in federal agencies, think tanks, and campaigns. Thanks for sharing. 

Hey, Sam – first, thanks for taking the time to write this post, and for running it by us. I’m a big fan of public criticism, and I think people are often extra-wary of criticizing funders publicly, relative to other actors in the space.

Some clarifications on what we have and haven’t funded:

  • I want to make a distinction between “grants that work on policy research” and “grants that interact with policymakers”.
    • I think our bar for projects that involve the latter is much higher than for projects that are just doing the former.
  • I think we regularly fund “grants tha
... (read more)
1
Nathan Sherburn
1y
Thank you for sharing, Abergal! I was just wondering if you could share more concrete examples of "taking up the space" risks. We're facing some choices around this in Australia at the moment and I want to make sure we've considered all the downsides of uniting under a shared vision. Are the risks of "taking up the space" mainly:

1. Less agile - multiple small organizations may be able to work faster
2. Centralized risk - with multiple small organizations, if one faces an issue (e.g. brand damage), it's less likely to affect the others
3. Less diversity of thought - there's value in taking different approaches to problems, and having multiple small organizations means we're at less risk of groupthink or quashing diversity of thought

I'd be keen to know if there are others we may not have considered.
2
weeatquince
2y
Hi Abergal, I hope you don’t mind me returning to this post. I wanted to follow up on a few things but it has been a super busy month and I have not really had the time.

Essentially I think we did not manage to engage with each other's cruxes at all. I said I expect the crux is "connected to EA funders approach to minimising risks." You did not discuss this at all in your reply (except to agree that there are downsides to policy work). But then you suggest that maybe a crux is "our views of the upsides of some of this work". And then I replied to you but did not discuss that point at all!! So overall it felt a bit like we just talked past each other and didn’t really engage fully with the other's points of view. So I felt I should just come back and respond to you about the upsides rather than leave this as was.

– –

Some views I have (at least from a UK perspective, cannot comment on US):

* DISAGREEMENT: I think there is no shortage of policy proposals to push for. You link to Luke's post and I reply directly to that here. There is the x-risk database of 250+ policy proposals here!! There is work on policy ideas in Future Proof here. There are more clear-cut policies on general risk management and biosecurity, but there are good ideas on AI too, like: government to be cautious in AI use and development by the military (see p42 of Future Proof), or ensuring government has incentives to hire competent risk-aware staff. I never found that I was short of low-downside things to advocate for on AI at any point in my 2 years working on policy. So maybe we disagree there. Very happy to discuss further if you want.
* AGREEMENT: I think policy may still be low impact. Especially AI policy (in the UK) has a very minimal chance of reducing AI x-risks, a topic that is not currently in the policy Overton window. Bio policy is probably more directly useful. I do expect bio and general risk policy is more likely to be directly useful but still the effects on x-risks are likely to

Thank you Abergal, I hope my critique is helpful. I mean it to be constructive.

I don’t think I disagree with anything at all that you wrote here!! So glad we are mostly on the same page.

(In fact, you suggest "we also differ in our views of the upsides of some of this work" and I am not sure that is the case. I am fairly sceptical of much of it, especially the more AI-focused stuff.)

I still expect the main disagreements are on:

  1. Managing downside risks. I worry that if we as a community don’t put time and effort into understanding how to mitigate downsid
... (read more)

Here are answers to some other common questions about the University Organizer Fellowship that I received in office hours:
 
If I apply and get rejected, is there a “freezing period” where I can’t apply again?

We don’t have an official freezing period, but I think we generally won’t spend time reevaluating someone within 3 months of when they last applied, unless they give some indication on the application that something significant has changed in that time.

If you’re considering applying, I really encourage you not to wait– I think for the vast major... (read more)

I’m not sure that I agree with the premise of the question – I don’t think EA is trying all that hard to build a mainstream following (and I’m not sure that it should).

Interpreting this as “who is responsible for evaluating whether the Century Fellowship is a good use of time and money”, the answer is: someone on our team will probably try and do a review of how the program is going after it’s been running for a while longer; we will probably share that evaluation with Holden, co-CEO of Open Phil, as well as possibly other advisors and relevant stakeholders. Holden approves longtermist Open Phil grants and broadly thinks about which grants are/aren’t the best uses of money.

Each application has a primary evaluator who is on our team (current evaluators: me, Bastian Stern, Eli Rose, Kasey Shibayama, and Claire Zabel). We also generally consult / rely heavily on assessments from references or advisors, e.g. other staff at Open Phil or organizations who we work closely with, especially for applicants hoping to do work in domains we have less expertise in.

When we were originally thinking about the fellowship, one of the cases for impact was making community building a more viable career (hence the emphasis in this post), but it’s definitely intended more broadly for people working on the long-term future. I’m pretty unsure how the fellowship will shake out in terms of community organizers vs researchers vs entrepreneurs long-term – we’ve funded a mix so far (including several people who I’m not sure how to categorize / are still unsure about what they want to do).

(The cop-out answer is “I would like the truth-seeking organizers to be more ambitious, and the ambitious organizers to be more truth-seeking”.)

If I had to choose one, I think I’d go with truth-seeking. It doesn’t feel very close to me, especially among existing effective altruism-related university group organizers (maybe Claire disagrees), largely because I think there’s already been a big recent push towards ambition there, so people are generally already thinking pretty ambitiously.

I feel differently about e.g. rationality local group organizers; I wish they would be more ambitious.

1
Jack R
2y
Makes sense - thanks Asya!

i)

  1. “Full-time-equivalent” is intended to mean “if you were working full-time, this is how much funding you would receive”. The fellowship is intended for people working significantly less than full-time, and most of our grants have been for 15 hours per week of organizer time or less. I definitely don’t expect undergraduates to be organizing for 40 hours per week.

    I think our page doesn’t make this clear enough early on, thanks for flagging it– I’ll make some changes to try and make this clearer.
     
  2. I think anyone who’s doing student organizing for more th
... (read more)

Hi Minh– sorry for the confusion! That footer was actually from an older version of the page that referenced eligible locations for the Centre for Effective Altruism’s city and national community building grant program; I’ve now deleted it.

I encourage organizers from any university to apply, including those in Singapore.

Answer by abergal, Apr 23, 2022

I think the LTFF will publish a payout report for grants through ~December in the next few weeks. As you suggest, we've been delayed because the number of grants we're making has increased substantially so we're pretty limited on grantmaker capacity right now (and writing the reports takes a somewhat substantial amount of time).

I like IanDavidMoss's suggestion of having a simpler list rather than delaying (and maybe we could publish more detailed justifications later)-- I'll strongly consider doing that for the payout report after this one.

3
IanDavidMoss
2y
Maybe a good compromise would be to publish a complete list generated from relevant fields in the grants database every couple of months (redacting recipients who requested not to have their grant publicized), and then we'd all have an opportunity to ask for elaboration or other questions in the comments each time. This way the time invested in providing (the equivalent of) "reports" on each grant would be better targeted toward the cases that the community is most interested in/uncertain about, which should be a small minority of the grants made.
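A minimal sketch of what generating such a redacted listing could look like, assuming hypothetical in-memory grant records with a consent flag (the field names and example values are illustrative, not the actual EA Funds database schema):

```python
from dataclasses import dataclass

@dataclass
class Grant:
    recipient: str
    amount_usd: int
    cause_area: str     # e.g. "AI safety", "biosecurity", "community building"
    summary: str
    publicize: bool     # whether the recipient consented to being named publicly

def public_grant_list(grants: list[Grant]) -> list[dict]:
    """Build the minimal public listing, redacting recipients who opted out."""
    rows = []
    for g in grants:
        rows.append({
            "recipient": g.recipient if g.publicize else "[redacted at recipient's request]",
            "amount_usd": g.amount_usd,
            "cause_area": g.cause_area,
            "summary": g.summary if g.publicize else "[redacted]",
        })
    return rows

if __name__ == "__main__":
    grants = [
        Grant("Example University Group", 20_000, "community building",
              "One semester of organizer stipends", True),
        Grant("Independent researcher", 45_000, "AI safety",
              "Six months of independent research", False),
    ]
    for row in public_grant_list(grants):
        print(row)
```

Under this kind of set-up, the time cost would be concentrated in the follow-up elaboration the community actually asks for in the comments, rather than in writing a report on every grant.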

Confusingly, the report called "May 2021" was for grants we made through March and early April of 2021, so this report includes most of April, May, June, and July.

I think we're going to standardize now so that reports refer to the months they cover, rather than the month they're released.

2
Ozzie Gooen
2y
That makes more sense, thanks!

I like this idea; I'll think about it and discuss with others. I think I want grantees to be able to preserve as much privacy as they want (including not being listed in even really broad pseudo-anonymous classifications), but I'm guessing most would be happy to opt-in to something like this.

(We've done anonymous grant reports before but I think they were still more detailed than people would like.)

2
jayquigley
2y
Any updates here? I share Devon's concern: this news also makes me less likely to want to donate via EA Funds.  At worst, the fear would be this: so much transparency is lost that donations go into mysterious black holes rather than funding effective organizations. What steps can be taken to convince donors that that's not what's happening?

We got feedback from several people that they weren't applying to the funds because they didn't want to have a public report.  There are lots of reasons that I sympathize with for not wanting a public report, especially as an individual (e.g. you're worried about it affecting future job prospects, you're asking for money for mental health support and don't want that to be widely known, etc.). My vision (at least for the Long-Term Future Fund) is to become a good default funding source for individuals and new organizations, and I think that vision is compromised if some people don't want to apply for publicity reasons.

Broadly, I think the benefits to funding more people outweigh the costs to transparency.

2
some person
1y
Now that the whole FTX thing has happened, have you reconsidered your position about the trust the public should have in organizations that don't share the distribution of funds?

Thanks for the response.

 

Is there a way to make things pseudo-anonymous, revealing the type of grants being made privately but preserving the anonymity of the grant recipient? It seems like that preserves a lot of the value of what you want to protect without much downside.

For example, I'd be personally very skeptical that giving grants for personal mental support would be the best way to improve the long-term future and would make me less likely to support the LTFF and if all such grants weren't public, I wouldn't know that. There might also be peopl... (read more)

Another potential reason for optimism is that we'll be able to use observations from early on in the training runs of systems (before models are very smart) to affect the pool of Saints / Sycophants / Schemers we end up with. I.e., we are effectively "raising" the adults we hire, so it could be that we're able to detect if 8-year-olds are likely to become Sycophants / Schemers as adults and discontinue or modify their training accordingly.

Sorry this was unclear! From the post:

There is no deadline to apply; rather, we will leave this form open indefinitely until we decide that this program isn’t worth running, or that we’ve funded enough work in this space. If that happens, we will update this post noting that we plan to close the form at least a month ahead of time.

I will bold this so it's more clear.

There's no set maximum; we expect to be limited by the number of applications that seem sufficiently promising, not the cost.

Yeah, FWIW I haven't found any recent claims about insect comparisons particularly rigorous.

FWIW I had a similar initial reaction to Sophia, though reading more carefully I totally agree that it's more reasonable to interpret your comment as a reaction to the newsletter rather than to the proposal. I'd maybe add an edit to your high-level comment just to make sure people don't get confused?

Really appreciate the clarifications! I think I was interpreting "humanity loses control of the future" in a weirdly temporally narrow sense that makes it all about outcomes, i.e. where "humanity" refers to present-day humans, rather than humans at any given time period.  I totally agree that future humans may have less freedom to choose the outcome in a way that's not a consequence of alignment issues.

I also agree value drift hasn't historically driven long-run social change, though I kind of do think it will going forward, as humanity has more power to shape its environment at will.

3
Linch
3y
My impression is that the differences in historical vegetarianism rates between India and China, and especially India and southern China (where there is greater similarity of climate and crops used), are a moderate counterpoint. At the timescale of centuries, vegetarianism rates in India are much higher than rates in China. Since factory farming is plausibly one of the larger sources of human-caused suffering today, the differences aren't exactly a rounding error.

Wow, I just learned that Robin Hanson has written about this, because obviously, and he agrees with you.

2
bgarfinkel
3y
FWIW, I wouldn't say I agree with the main thesis of that post. I definitely think that human biology creates at least very strong biases toward certain values (if not hard constraints) and that AI systems would not need to have these same biases. If you're worried about future agents having super different and bad values, then AI is a natural focal point for your worry.

A couple other possible clarifications about my views here:

* I think that the outcome of the AI Revolution could be much worse, relative to our current values, than the Neolithic Revolution was relative to the values of our hunter-gatherer ancestors. But I think the question "Will the outcome be worse?" is distinct from the question "Will we have less freedom to choose the outcome?"
* I'm personally not so focused on value drift as a driver of long-run social change. For example, the changes associated with the Neolithic Revolution weren't really driven by people becoming less egalitarian, more pro-slavery, more inclined to hold certain religious beliefs, more ideologically attached to sedentism/farming, more happy to accept risks from disease, etc. There were value changes, but, to some significant degree, they seem to have been downstream of technological/economic change.
4
abergal
3y
And Paul Christiano agrees with me. Truly, time makes fools of us all.

Do you have the intuition that absent further technological development, human values would drift arbitrarily far? It's not clear to me that they would-- in that sense, I do feel like we're "losing control" in that even non-extinction AI is enabling a new set of possibilities that modern-day humans would endorse much less than the decisions of future humans otherwise. (It does also feel like we're missing the opportunity to "take control" and enable a new set of possibilities that we would endorse much more.)

Relatedly, it doesn't feel to me like the values of humans 150,000 years ago and humans now and even ems in Age of Em are all that different on some more absolute scale.

2
bgarfinkel
3y
Certainly not arbitrarily far. I also think that technological development (esp. the emergence of agriculture and modern industry) has played a much larger role in changing the world over time than random value drift has.

I definitely think that's true. But I also think that was true of agriculture, relative to the values of hunter-gatherer societies.

To be clear, I'm not downplaying the likelihood or potential importance of any of the three crisper concerns I listed. For example, I think that AI progress could conceivably lead to a future that is super alienating and bad. I'm just (a) somewhat pedantically arguing that we shouldn't frame the concerns as being about a "loss of control over the future" and (b) suggesting that you can rationally have all these same concerns even if you come to believe that technical alignment issues aren't actually a big deal.
1
abergal
3y
Wow, I just learned that Robin Hanson has written about this, because obviously, and he agrees with you.

I think we probably will seek out funding from larger institutional funders if our funding gap persists. We actually just applied for a ~$1M grant from the Survival and Flourishing Fund.

I agree with the thrust of the conclusion, though I worry that focusing on task decomposition this way elides the fact that the descriptions of the O*NET tasks already assume your unit of labor is fairly general. Reading many of these, I actually feel pretty unsure about the level of generality or common-sense reasoning required for an AI to straightforwardly replace that part of a human's job. Presumably there's some restructure that would still squeeze a lot of economic value out of narrow AIs that could basically do these things, but that restructure isn't captured looking at the list of present-day O*NET tasks.

I'm also a little skeptical of your "low-quality work dilutes the quality of those fields and attracts other low-quality work" fear--since high citation count is often thought of as an ipso facto measure of quality in academia, it would seem that if work attracts additional related work, it is probably not low quality.

The difference here is that most academic fields are pretty well-established, whereas AI safety, longtermism, and longtermist subparts of most academic fields are very new. The mechanism for attracting low-quality work I'm imagining is that s... (read more)

-3
xccf
3y
Sure. I guess I don't have a lot of faith in your team's ability to do this, since you/people you are funding are already saying things that seem amateurish to me. But I'm not sure that is a big deal.

I was confused about the situation with debate, so I talked to Evan Hubinger about his experiences. That conversation was completely wild; I'm guessing people in this thread might be interested in hearing it. I still don't know exactly what to make of what happened there, but I think there are some genuine and non-obvious insights relevant to public discourse and optimization processes (maybe less to the specifics of debate outreach). The whole thing's also pretty funny.

I recorded the conversation; don't want to share publicly but feel free to DM me for access.

I imagine this could be one of the highest-leverage places to apply additional resources and direction though. People who are applying for funding for independent projects are people who desire to operate autonomously and execute on their own vision. So I imagine they'd require much less direction than marginal employees at an EA organization, for instance.

I don't have a strong take on whether people rejected from the LTFF are the best use of mentorship resources. I think many employees at EA organizations are also selected for being self-directed. I know ... (read more)

5
xccf
3y
Let's compare the situation of the Long-Term Future Fund evaluating the quality of a grant proposal to that of the academic community evaluating the quality of a published paper. Compared to the LTFF evaluating a grant proposal, the academic community evaluating the quality of a published paper has big advantages:

* The work is being evaluated retrospectively instead of prospectively (i.e. it actually exists, it is not just a hypothetical project).
* The academic community has more time and more eyeballs.
* The academic community has people who are very senior in their field, and your team is relatively junior--plus, "longtermism" is a huge area that's really hard to be an expert in all of.

Even so, the academic community doesn't seem very good at their task. "Sleeping beauty" papers, whose quality is only recognized long after publication, seem common. Breakthroughs are denounced by scientists, or simply underappreciated, at first (often 'correctly' due to being less fleshed out than existing theories). This paper contains a list of 34 examples of Nobel Prize-winning work being rejected by peer review. "Science advances one funeral at a time", they say.

Problems compound when the question of first-order quality is replaced by the question of what others will consider to be high quality. You're funding researchers to do work that you consider to be work that others will consider to be good--based on relatively superficial assessments due to time limitations, it sounds like. Seems like a recipe for herd behavior. But breakthroughs come from mavericks. This funding strategy could have a negative effect by stifling innovation (filtering out contrarian thinking and contrarian researchers from the field). Keep longtermism weird?

(I'm also a little skeptical of your "low-quality work dilutes the quality of those fields and attracts other low-quality work" fear--since high citation count is often thought of as an ipso facto measure of quality in academia, it would seem that

Sadly, I think those changes would in fact be fairly large and would take up a lot of fund manager time. I think small modifications to original proposals wouldn't be enough, and it would require suggesting new projects or assessing applicants holistically and seeing if a career change made sense.

In my mind, this relates to ways in which mentorship is a bottleneck in longtermist work right now--  there are probably lots of people who could be doing useful direct work, but they would require resources and direction that we as a community don't have the capacity for. I don't think the LTFF is well-placed to provide this kind of mentorship, though we do offer to give people one-off feedback on their applications.

5
xccf
3y
I imagine this could be one of the highest-leverage places to apply additional resources and direction though. People who are applying for funding for independent projects are people who desire to operate autonomously and execute on their own vision. So I imagine they'd require much less direction than marginal employees at an EA organization, for instance.

I also think there's an epistemic humility angle here. It's very likely that the longtermist movement as it currently exists is missing important perspectives. To some degree, as a funder, you are diffing your perspective against that of applicants and rejecting applicants whose projects make sense according to their perspective and not yours. It seems easy for this to result in the longtermist movement developing more homogenous perspectives over time, as people Goodhart on whatever metrics are related to getting funding/career advancement.

I'm actually not convinced that direction is a good thing! I personally would be more inclined to fund anyone who meets a particular talent bar. That also makes your job easier because you can focus on just the person/people and worry less about their project.

Huh. I understood your rejection email to say the fund was unable to provide further feedback due to the high volume of applications.

I think many applicants who we reject could apply with different proposals that I'd be more excited to fund-- rejecting an application doesn't mean I think there's no good direct work the applicant could do.

I would guess some people would be better off earning to give, but I don't know that I could say which ones just from looking at one application they've sent us.

5
xccf
3y
I see. That suggests you think the LTFF would have much more room for funding with some not-super-large changes to your processes, such as encouraging applicants to submit multiple project proposals, or doing calls with applicants to talk about other projects they could do, or modifications to their original proposal which would make it more appealing to you.

(To be clear, I think it's mostly just that we have more applications, and less that the mean application is significantly better than before.)

In several cases increased grant requests reflect larger projects or requests for funding for longer time periods. We've also definitely had a marked increase in the average individual salary request per year-- setting aside whether this is justified, this runs into a bunch of thorny issues around secondary effects that we've been discussing this round. I think we're likely to prioritize having a more standardized policy for individual salaries by next grant round.

This round, we switched from a system where we had all the grant discussion in a single spreadsheet to one where we discuss each grant in a separate Google doc, linked from a single spreadsheet. One fund manager has commented that they feel less on-top of this grant round than before as a result. (We're going to rethink this system again for next grant round.) We also changed the fund composition a bunch-- Helen and Matt left, I became chair, and three new guest managers joined. A priori, this could cause a shift in standards, though I have no particular r... (read more)

3
Jack Malde
3y
Thanks! I'm actually not surprised that the quality of grant applications might be increasing e.g. due to people learning more about what makes for a good grant. I have a follow-on question. Do you think that the increase in the size of the grant requests is justified? Is this because people are being more ambitious in what they want to do?

Fund managers can now opt to be compensated as contractors, at a rate of $40 / hour.

There's no strict 'minimum number'-- sometimes the grant is clearly above or below our bar and we don't consult anyone, and sometimes we're really uncertain or in disagreement, and we end up consulting lots of people (I think some grants have had 5+).

I will also say that each fund is somewhat intentionally composed of fund managers with somewhat varying viewpoints who trust different sets of experts, and the voting structure is such that if any individual fund manager is really excited about an application, it generally gets funded. As a result, I think in... (read more)

1
Manuel Allgaier
3y
Thanks for elaborating! Your process seems robustly good, and I appreciate the extra emphasis on diverse viewpoints & experts. 
4
Peter Wildeford
3y
That's great to hear - I did not know that
6
Jonas V
3y
What Asya said. I'd add that fund managers seem aware of it being bad if everyone relies on the opinion of a single person/advisor, and generally seem to think carefully about it.

I can't respond for Adam, but just wanted to say that I personally agree with you, which is one of the reasons I'm currently excited about funding independent work.

6
AdamGleave
3y
Thanks for picking up the thread here Asya! I think I largely agree with this, especially about the competitiveness in this space. For example, with AI PhD applications, I often see extremely talented people get rejected who I'm sure would have got an offer a few years ago. I'm pretty happy to see the LTFF offering effectively "bridge" funding for people who don't quite meet the hiring bar yet, but I think are likely to in the next few years.

However, I'd be hesitant about heading towards a large fraction of people working independently long-term. I think there's huge advantages from the structure and mentorship an org can provide. If orgs aren't scaling up fast enough, then I'd prefer to focus on trying to help speed that up.

The main way I could see myself getting more excited about long-term independent research is if we saw flourishing communities forming amongst independent researchers. Efforts like LessWrong and the Alignment Forum help in terms of providing infrastructure. But right now it still seems much worse than working for an org, especially if you want to go down any of the more traditional career paths later. But I'd love to be proven wrong here.

Hey! I definitely don't expect people starting AI safety research to have a track record doing AI safety work-- in fact, I think some of our most valuable grants are paying for smart people to transition into AI safety from other fields. I don't know the details of your situation, but in general I don't think "former physics student starting AI safety work" fits into the category of "project would be good if executed exceptionally well". In that case, I think most of the value would come from supporting the transition of someone who could potentially be re... (read more)

Sherry et al. have a more exhaustive working paper about algorithmic progress in a wide variety of fields.

Also a big fan of your report. :)

Historically, what has caused the subjectively biggest-feeling updates to your timelines views? (e.g. arguments, things you learned while writing the report, events in the world).

Thanks! :)

The first time I really thought about TAI timelines was in 2016, when I read Holden's blog post. That got me to take the possibility of TAI soonish seriously for the first time (I hadn't been explicitly convinced of long timelines earlier or anything, I just hadn't thought about it).

Then I talked more with Holden and technical advisors over the next few years, and formed the impression that there was a relatively simple argument that many technical advisors believed that if a brain-sized model could be transformative, then there's a relativ... (read more)
