All of abergal's Comments + Replies

Public reports are now optional for EA Funds grantees

We got feedback from several people that they weren't applying to the funds because they didn't want to have a public report.  There are lots of reasons that I sympathize with for not wanting a public report, especially as an individual (e.g. you're worried about it affecting future job prospects, you're asking for money for mental health support and don't want that to be widely known, etc.). My vision (at least for the Long-Term Future Fund) is to become a good default funding source for individuals and new organizations, and I think that vision is compromised if some people don't want to apply for publicity reasons.

Broadly, I think the benefits to funding more people outweigh the costs to transparency.

Thanks for the response.


Is there a way to make things pseudo-anonymous, revealing the type of each privately-made grant but preserving the anonymity of the grant recipient? It seems like that preserves a lot of the value of what you want to protect without much downside.

For example, I'd be personally very skeptical that giving grants for personal mental health support would be the best way to improve the long-term future; such grants would make me less likely to support the LTFF, and if none of them were public, I wouldn't know about them. There might also be peopl…

Why AI alignment could be hard with modern deep learning

Another potential reason for optimism is that we'll be able to use observations from early on in the training runs of systems (before models are very smart) to affect the pool of Saints / Sycophants / Schemers we end up with. I.e., we are effectively "raising" the adults we hire, so it could be that we're able to detect if 8-year-olds are likely to become Sycophants / Schemers as adults and discontinue or modify their training accordingly.

Open Philanthropy is seeking proposals for outreach projects

Sorry this was unclear! From the post:

There is no deadline to apply; rather, we will leave this form open indefinitely until we decide that this program isn’t worth running, or that we’ve funded enough work in this space. If that happens, we will update this post noting that we plan to close the form at least a month ahead of time.

I will bold this so it's clearer.

Open Philanthropy is seeking proposals for outreach projects

There's no set maximum; we expect to be limited by the number of applications that seem sufficiently promising, not the cost.

Taboo "Outside View"

Yeah, FWIW I haven't found any recent claims about insect comparisons particularly rigorous.

HIPR: A new EA-aligned policy newsletter

FWIW I had a similar initial reaction to Sophia, though reading more carefully I totally agree that it's more reasonable to interpret your comment as a reaction to the newsletter rather than to the proposal. I'd maybe add an edit to your high-level comment just to make sure people don't get confused?

Ben Garfinkel's Shortform

Really appreciate the clarifications! I think I was interpreting "humanity loses control of the future" in a weirdly temporally narrow sense that makes it all about outcomes, i.e. where "humanity" refers to present-day humans, rather than humans at any given time period.  I totally agree that future humans may have less freedom to choose the outcome in a way that's not a consequence of alignment issues.

I also agree value drift hasn't historically driven long-run social change, though I kind of do think it will going forward, as humanity has more power to shape its environment at will.

Linch (7mo): My impression is that the differences in historical vegetarianism rates between India and China, and especially India and southern China (where there is greater similarity of climate and crops used), are a moderate counterpoint. At the timescale of centuries, vegetarianism rates in India are much higher than rates in China. Since factory farming is plausibly one of the larger sources of human-caused suffering today, the differences aren't exactly a rounding error.
Ben Garfinkel's Shortform

Wow, I just learned that Robin Hanson has written about this, because obviously, and he agrees with you.

Ben Garfinkel (7mo): FWIW, I wouldn't say I agree with the main thesis of that post. I definitely think that human biology creates at least very strong biases toward certain values (if not hard constraints) and that AI systems would not need to have these same biases. If you're worried about future agents having super different and bad values, then AI is a natural focal point for your worry.

A couple of other possible clarifications about my views here:

  • I think that the outcome of the AI Revolution could be much worse, relative to our current values, than the Neolithic Revolution was relative to the values of our hunter-gatherer ancestors. But I think the question "Will the outcome be worse?" is distinct from the question "Will we have less freedom to choose the outcome?"
  • I'm personally not so focused on value drift as a driver of long-run social change. For example, the changes associated with the Neolithic Revolution weren't really driven by people becoming less egalitarian, more pro-slavery, more inclined to hold certain religious beliefs, more ideologically attached to sedentism/farming, or more happy to accept risks from disease. There were value changes, but, to some significant degree, they seem to have been downstream of technological/economic change.
abergal (7mo): And Paul Christiano agrees with me. Truly, time makes fools of us all.
Ben Garfinkel's Shortform

Do you have the intuition that absent further technological development, human values would drift arbitrarily far? It's not clear to me that they would-- in that sense, I do feel like we're "losing control", in that even non-extinction AI enables a new set of possibilities that modern-day humans would endorse much less than the decisions future humans would otherwise make. (It also feels like we're missing the opportunity to "take control" and enable a new set of possibilities that we would endorse much more.)

Relatedly, it doesn't feel to me like the values of humans 150,000 years ago and humans now and even ems in Age of Em are all that different on some more absolute scale.

Ben Garfinkel (7mo): Certainly not arbitrarily far. I also think that technological development (esp. the emergence of agriculture and modern industry) has played a much larger role in changing the world over time than random value drift has. I definitely think that's true. But I also think that was true of agriculture, relative to the values of hunter-gatherer societies.

To be clear, I'm not downplaying the likelihood or potential importance of any of the three crisper concerns I listed. For example, I think that AI progress could conceivably lead to a future that is super alienating and bad. I'm just (a) somewhat pedantically arguing that we shouldn't frame the concerns as being about a "loss of control over the future" and (b) suggesting that you can rationally have all these same concerns even if you come to believe that technical alignment issues aren't actually a big deal.
abergal (7mo): Wow, I just learned that Robin Hanson has written about this, because obviously, and he agrees with you.
The Long-Term Future Fund has room for more funding, right now

I think we probably will seek out funding from larger institutional funders if our funding gap persists. We actually just applied for a ~$1M grant from the Survival and Flourishing Fund.

Ben Garfinkel's Shortform

I agree with the thrust of the conclusion, though I worry that focusing on task decomposition this way elides the fact that the descriptions of the O*NET tasks already assume your unit of labor is fairly general. Reading many of these, I actually feel pretty unsure about the level of generality or common-sense reasoning required for an AI to straightforwardly replace that part of a human's job. Presumably there's some restructure that would still squeeze a lot of economic value out of narrow AIs that could basically do these things, but that restructure isn't captured looking at the list of present-day O*NET tasks.

The Long-Term Future Fund has room for more funding, right now

I'm also a little skeptical of your "low-quality work dilutes the quality of those fields and attracts other low-quality work" fear--since high citation count is often thought of as an ipso facto measure of quality in academia, it would seem that if work attracts additional related work, it is probably not low quality.

The difference here is that most academic fields are pretty well-established, whereas AI safety, longtermism, and longtermist subparts of most academic fields are very new. The mechanism for attracting low-quality work I'm imagining is that s…

xccf (8mo): Sure. I guess I don't have a lot of faith in your team's ability to do this, since you/people you are funding are already saying things that seem amateurish to me. But I'm not sure that is a big deal.
EA Debate Championship & Lecture Series

I was confused about the situation with debate, so I talked to Evan Hubinger about his experiences. That conversation was completely wild; I'm guessing people in this thread might be interested in hearing it. I still don't know exactly what to make of what happened there, but I think there are some genuine and non-obvious insights relevant to public discourse and optimization processes (maybe less to the specifics of debate outreach). The whole thing's also pretty funny.

I recorded the conversation; don't want to share publicly but feel free to DM me for access.

The Long-Term Future Fund has room for more funding, right now

I imagine this could be one of the highest-leverage places to apply additional resources and direction though. People who are applying for funding for independent projects are people who desire to operate autonomously and execute on their own vision. So I imagine they'd require much less direction than marginal employees at an EA organization, for instance.

I don't have a strong take on whether people rejected from the LTFF are the best use of mentorship resources. I think many employees at EA organizations are also selected for being self-directed. I know…

xccf (8mo): Let's compare the situation of the Long-Term Future Fund evaluating the quality of a grant proposal to that of the academic community evaluating the quality of a published paper. Compared to the LTFF evaluating a grant proposal, the academic community evaluating a published paper has big advantages:

  • The work is being evaluated retrospectively instead of prospectively (i.e. it actually exists, it is not just a hypothetical project).
  • The academic community has more time and more eyeballs.
  • The academic community has people who are very senior in their field, and your team is relatively junior--plus, "longtermism" is a huge area that's really hard to be an expert in all of.

Even so, the academic community doesn't seem very good at their task. "Sleeping beauty" papers, whose quality is only recognized long after publication, seem common. Breakthroughs are denounced by scientists, or simply underappreciated, at first (often 'correctly' due to being less fleshed out than existing theories). This paper contains a list of 34 examples of Nobel Prize-winning work being rejected by peer review. "Science advances one funeral at a time", they say.

Problems compound when the question of first-order quality is replaced by the question of what others will consider to be high quality. You're funding researchers to do work that you consider to be work that others will consider to be good--based on relatively superficial assessments due to time limitations, it sounds like. Seems like a recipe for herd behavior. But breakthroughs come…
The Long-Term Future Fund has room for more funding, right now

Sadly, I think those changes would in fact be fairly large and would take up a lot of fund manager time. I think small modifications to original proposals wouldn't be enough, and it would require suggesting new projects or assessing applicants holistically and seeing if a career change made sense.

In my mind, this relates to ways in which mentorship is a bottleneck in longtermist work right now--  there are probably lots of people who could be doing useful direct work, but they would require resources and direction that we as a community don't have the capacity for. I don't think the LTFF is well-placed to provide this kind of mentorship, though we do offer to give people one-off feedback on their applications.

xccf (8mo): I imagine this could be one of the highest-leverage places to apply additional resources and direction, though. People who are applying for funding for independent projects are people who desire to operate autonomously and execute on their own vision. So I imagine they'd require much less direction than marginal employees at an EA organization, for instance.

I also think there's an epistemic humility angle here. It's very likely that the longtermist movement as it currently exists is missing important perspectives. To some degree, as a funder, you are diffing your perspective against that of applicants and rejecting applicants whose projects make sense according to their perspective and not yours. It seems easy for this to result in the longtermist movement developing more homogeneous perspectives over time, as people Goodhart on whatever metrics are related to getting funding/career advancement.

I'm actually not convinced that direction is a good thing! I personally would be more inclined to fund anyone who meets a particular talent bar. That also makes your job easier because you can focus on just the person/people and worry less about their project.

Huh. I understood your rejection email says the fund was unable to provide further feedback due to the high volume of applications.
The Long-Term Future Fund has room for more funding, right now

I think many applicants who we reject could apply with different proposals that I'd be more excited to fund-- rejecting an application doesn't mean I think there's no good direct work the applicant could do.

I would guess some people would be better off earning to give, but I don't know that I could say which ones just from looking at one application they've sent us.

xccf (8mo): I see. That suggests you think the LTFF would have much more room for funding with some not-super-large changes to your processes, such as encouraging applicants to submit multiple project proposals, doing calls with applicants to talk about other projects they could do, or suggesting modifications to their original proposals that would make them more appealing to you.
The Long-Term Future Fund has room for more funding, right now

(To be clear, I think it's mostly just that we have more applications, and less that the mean application is significantly better than before.)

In several cases increased grant requests reflect larger projects or requests for funding for longer time periods. We've also definitely had a marked increase in the average individual salary request per year-- setting aside whether this is justified, this runs into a bunch of thorny issues around secondary effects that we've been discussing this round. I think we're likely to prioritize having a more standardized policy for individual salaries by next grant round.

The Long-Term Future Fund has room for more funding, right now

This round, we switched from a system where we had all the grant discussion in a single spreadsheet to one where we discuss each grant in a separate Google doc, linked from a single spreadsheet. One fund manager has commented that they feel less on-top of this grant round than before as a result. (We're going to rethink this system again for next grant round.) We also changed the fund composition a bunch-- Helen and Matt left, I became chair, and three new guest managers joined. A priori, this could cause a shift in standards, though I have no particular r…

jackmalde (8mo): Thanks! I'm actually not surprised that the quality of grant applications might be increasing, e.g. due to people learning more about what makes for a good grant. I have a follow-on question: do you think the increase in the size of the grant requests is justified? Is it because people are being more ambitious in what they want to do?
EA Funds has appointed new fund managers

Fund managers can now opt to be compensated as contractors, at a rate of $40 / hour.

EA Funds is more flexible than you might think

There's no strict 'minimum number'-- sometimes the grant is clearly above or below our bar and we don't consult anyone, and sometimes we're really uncertain or in disagreement, and we end up consulting lots of people (I think some grants have had 5+).

I will also say that each fund is somewhat intentionally composed of fund managers with somewhat varying viewpoints who trust different sets of experts, and the voting structure is such that if any individual fund manager is really excited about an application, it generally gets funded. As a result, I think in…

Manuel_Allgaier (9mo): Thanks for elaborating! Your process seems robustly good, and I appreciate the extra emphasis on diverse viewpoints & experts.
Peter Wildeford (9mo): That's great to hear - I did not know that.
Jonas Vollmer (9mo): What Asya said. I'd add that fund managers seem aware of it being bad if everyone relies on the opinion of a single person/advisor, and generally seem to think carefully about it.
Long-Term Future Fund: Ask Us Anything!

I can't respond for Adam, but just wanted to say that I personally agree with you, which is one of the reasons I'm currently excited about funding independent work.

AdamGleave (9mo): Thanks for picking up the thread here, Asya! I think I largely agree with this, especially about the competitiveness in this space. For example, with AI PhD applications, I often see extremely talented people get rejected who I'm sure would have got an offer a few years ago. I'm pretty happy to see the LTFF offering effectively "bridge" funding for people who don't quite meet the hiring bar yet, but I think are likely to in the next few years.

However, I'd be hesitant about heading towards a large fraction of people working independently long-term. I think there are huge advantages from the structure and mentorship an org can provide. If orgs aren't scaling up fast enough, then I'd prefer to focus on trying to help speed that up.

The main way I could see myself getting more excited about long-term independent research is if we saw flourishing communities forming amongst independent researchers. Efforts like LessWrong and the Alignment Forum help in terms of providing infrastructure. But right now it still seems much worse than working for an org, especially if you want to go down any of the more traditional career paths later. But I'd love to be proven wrong here.
Long-Term Future Fund: Ask Us Anything!

Hey! I definitely don't expect people starting AI safety research to have a track record doing AI safety work-- in fact, I think some of our most valuable grants are paying for smart people to transition into AI safety from other fields. I don't know the details of your situation, but in general I don't think "former physics student starting AI safety work" fits into the category of "project would be good if executed exceptionally well". In that case, I think most of the value would come from supporting the transition of someone who could potentially be re…

AMA: Ajeya Cotra, researcher at Open Phil

Sherry et al. have a more exhaustive working paper about algorithmic progress in a wide variety of fields.

AMA: Ajeya Cotra, researcher at Open Phil

Also a big fan of your report. :)

Historically, what has caused the subjectively biggest-feeling updates to your timelines views? (e.g. arguments, things you learned while writing the report, events in the world).

Thanks! :)

The first time I really thought about TAI timelines was in 2016, when I read Holden's blog post. That got me to take the possibility of TAI soonish seriously for the first time (I hadn't been explicitly convinced of long timelines earlier or anything, I just hadn't thought about it).

Then I talked more with Holden and technical advisors over the next few years, and formed the impression that there was a relatively simple argument that many technical advisors believed that if a brain-sized model could be transformative, then there's a relativ…

What does it mean to become an expert in AI Hardware?

Do you think price performance for certain applications could be one of the better ones to use on its own? Or is it perhaps better practice to keep an index of some number of trends?


I think price performance, measured in something like "operations / $", is by far the most important metric, caveating that by itself it doesn't differentiate between one-time costs of design and purchase and ongoing costs to run hardware, and it doesn't account for limitations in memory, networking, and software for parallelization that constrain performance as the numbe…
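As an illustration of how a price-performance trend like this might be operationalized, here's a minimal sketch that fits an exponential trend to operations-per-dollar observations and reads off a doubling time. The numbers are made up for illustration, not real hardware data:

```python
import math

# Hypothetical (year, operations-per-dollar) observations -- illustrative
# numbers only, not real hardware measurements.
observations = [
    (2012, 1.0e9),
    (2014, 2.2e9),
    (2016, 4.8e9),
    (2018, 1.1e10),
    (2020, 2.3e10),
]

# Least-squares fit of log2(ops/$) against year gives the exponential
# growth rate in doublings per year; its reciprocal is the doubling time.
years = [y for y, _ in observations]
logs = [math.log2(p) for _, p in observations]
n = len(observations)
mean_y = sum(years) / n
mean_l = sum(logs) / n
slope = sum((y - mean_y) * (l - mean_l) for y, l in zip(years, logs)) / \
    sum((y - mean_y) ** 2 for y in years)
doubling_time = 1 / slope  # years per doubling
print(f"Doubling time: {doubling_time:.2f} years")
```

The same fit could be run on any of the candidate metrics (transistors per chip, ops/$ for a specific workload, etc.), which is one reason keeping an index of several trends is cheap to do.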

What does it mean to become an expert in AI Hardware?

Hi-- great post! I was pointed to this because I've been working on a variety of hardware-related projects at FHI and AI Impacts, including generating better hardware forecasts. (I wrote a lot here, but would also be excited to talk to you directly and have even more to say-- I contacted you through Facebook.)

At first glance, it seemed to me that the existence of Ajeya’s report demonstrates that the EA community already has enough people with sufficient knowledge and access to expert opinion that, on the margin, adding one expert in hardware to the E…

Thank you so much for the detailed comment! I would be very excited to chat offline, but I'll put a few questions here that are directly related to the comment:

For one, I think while the forecasts in that report are the best publicly available thing we have, there's significant room to do better

These are all super interesting points! One thing that strikes me as a similarity between many of them is that it's not straightforward what metric (# transistors per chip, price performance, etc.) is the most useful to forecast AI timelines. Do you think price perf…

abergal's Shortform

I get the sense that a lot of longtermists are libertarian or libertarian-leaning (I could be wrong) and I don't really understand why. Overall the question of where to apply market structures vs. centralized planning seems pretty unclear to me.

  • More centralization seems better from an x-risk perspective in that it avoids really perverse profit-seeking incentives that companies might have to do unsafe things. (Though several centralized nation-states likely have similar problems on the global level, and maybe companies will have more cosmopolitan values in…
2020 AI Alignment Literature Review and Charity Comparison

AI Impacts now has a 2020 review page so it's easier to tell what we've done this year-- this should be more complete / representative than the posts listed above. (I appreciate how annoying the continuously updating wiki model is.)

My upcoming CEEALAR stay

I was speaking about AI safety! To clarify, I meant that investments in formal verification work could in part be used to develop those less primitive proof assistants.

Ask Rethink Priorities Anything (AMA)

I found this response insightful and feel like it echoes mistakes I've made as well; really appreciate you writing it.

richard_ngo's Shortform

Thank you for writing this post-- I have the same intuition as you about this being very misleading and found this thread really helpful.

Some promising career ideas beyond 80,000 Hours' priority paths

Chiming in on this very late. (I worked on formal verification research using proof assistants for a sizable part of undergrad.)

- Given the stakes, it seems like it could be important to formally verify step 1 after the math proofs step. Math proofs are erroneous a non-trivial fraction of the time.

- While I agree that proof assistants right now are much slower than doing math proofs yourself, verification is a pretty immature field. I can imagine them becoming a lot better such that they do actually become better to use than doing math proofs yourself, and don…
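For readers unfamiliar with proof assistants, a machine-checked proof looks something like this tiny sketch (written in Lean 4 here purely as an illustration): the checker verifies every inference mechanically rather than trusting the author, which is why erroneous "proofs" can't slip through the way they can in informal math.

```lean
-- A minimal machine-checked proof in Lean 4: commutativity of addition
-- on natural numbers, discharged by a lemma from the core library.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Real verification work targets far larger statements (compiler correctness, protocol safety), but the workflow is the same: the statement must be formalized, and every step checked.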

Long-Term Future Fund: Ask Us Anything!

In comparison with nonprofits it’s much more difficult. My read is that we sort of expect the nonprofits to never die, which means we need to be *very very* sure about them before setting them up.  But if this is the case it would be obviously severely limiting.

To clarify, I don’t think that most projects will be actively harmful-- in particular, the “projects that result in a team covering a space that is worse than the next person who have come along” case seems fairly rare to me, and would mostly apply to people who’d want to do certain movement-fa…

Jonas Vollmer (1y): Again, I agree with Asya. A minor side remark: as someone who has experience with hiring all kinds of virtual and personal assistants for myself and others, I think the problem here is not the money, but finding assistants who will actually do a good job, and organizing the entire thing in a way that’s convenient for the researchers/professionals who need support. More than half of the assistants I’ve worked with cost me more time than they saved me. Others were really good and saved me a lot of time, but it’s not straightforward to find them. If someone came up with a good proposal for this, I’d want to fund them and help them.

Similar points apply to some of the other ideas. We can’t just spend money on these things; we need to receive corresponding applications (which generally hasn’t happened) or proactively work to bring such projects into existence (which is a lot of work).
"Patient vs urgent longtermism" has little direct bearing on giving now vs later

(These are more additional considerations, not intended to be counterarguments given that your post itself was mostly pointing at additional considerations.)


  • The longtermism community can enjoy above-average growth for only a finite window of time. (It can at most control all resources, after which its growth equals average growth.)
  • Thus, spending money on growing the longtermism community now rather than later merely moves a transient window of additional resource growth to an earlier point in time.
  • We should be indifferent about the timing o…
Denkenberger (1y): Yes, with how underinvested GCR mitigation is now, I think it is better to have many resources for longtermism sooner.
"Patient vs urgent longtermism" has little direct bearing on giving now vs later

Thanks for writing this all up, Owen, I bet I'll link to this post many times in the future. I'll also selfishly link my previous post on movement-building as a longtermist investment.

Owen_Cotton-Barratt (1y): Thanks for the link! (I think it's self-promotional but clearly not selfish; it's just helpful to connect this with previous discussion, and I hadn't seen it before.)
Long-Term Future Fund: Ask Us Anything!

I think definitely more or equally likely. :) Please apply!

Long-Term Future Fund: Ask Us Anything!

Some things I think could actively cause harm:

  • Projects that accelerate technological development of risky technologies without corresponding greater speedup of safety technologies
  • Projects that result in a team covering a space or taking on some coordination role that is worse than the next person who could have come along
  • Projects that engage with policymakers in an uncareful way, making them less willing to engage with longtermism in the future, or causing them to make bad decisions that are hard to reverse
  • Movement-building projects that give a bad first i…
Ozzie Gooen (1y): Thanks so much for this, that was informative. A few quick thoughts:

I’ve heard this one before and I could sympathize with it, but it strikes me as a red flag that something is going a bit wrong. (I’m not saying that this is your fault, but am flagging it as an issue for the community more broadly.) Big companies often don’t have the ideal teams for new initiatives. Often urgency is very important, so they put something together relatively quickly. If it doesn’t work well, it’s not that big of a deal: they disband the team and have them go to other projects, and perhaps find better people to take their place. In comparison with nonprofits it’s much more difficult. My read is that we sort of expect the nonprofits to never die, which means we need to be *very very* sure about them before setting them up. But if this is the case it would be obviously severely limiting. The obvious solution to this would be to have bigger orgs with more flexibility. Perhaps if specific initiatives were going well and demanded independence, it could happen later on, but hopefully not for the first few years.

Some ideas I’ve had:
  • Experiment with advertising campaigns that could be clearly scaled up. Some of them seem linearly useful up to millions of dollars.
  • Add additional resources to make existing researchers more effective.
  • Buy the rights to books and spend on marketing for the key ones.
  • Pay for virtual assistants and all other things that could speed researchers up.
  • Add additional resources to make nonprofits more effective, easily.
  • Better budgets for external contractors.
  • Focus heavily on funding non-EA projects that are still really beneficial. This could mean an emphasis on funding new nonprofits that do nothing but rank and do strategy for more funding.

While it might be a strange example, the wealthy, or in particular, the Saudi government, are examples of how to spend lots of money with relatively few trusted people, semi-successfully.

I agree with the above response, but I would like to add some caveats because I think potential grant applicants may draw the wrong conclusions otherwise:

If you are the kind of person who thinks carefully about these risks, are likely to change your course of action if you get critical feedback, and proactively sync up with the main people/orgs in your space to ensure you’re not making things worse, I want to encourage you to try risky projects nonetheless, including projects that have a risk of making things worse. Many EAs have made mistakes that caused…

Long-Term Future Fund: Ask Us Anything!

I think this century is likely to be extremely influential, but there's likely important direct work to do at many parts of this century.  Both patient philanthropy projects we funded have relevance to that timescale-- I'd like to know about how best to allocate longtermist resources between direct work, investment, and movement-building over the coming years, and I'm interested in how philanthropic institutions might change.

I also think it's worth spending some resources thinking about scenarios where this century isn't extremely influential.

Long-Term Future Fund: Ask Us Anything!

Edit: I really like Adam's answer

There are a lot of things I’m uncertain about, but I should say that I expect most research aimed at resolving these uncertainties not to provide strong enough evidence to change my funding decisions (though some research definitely could!). I do think weaker evidence could change my decisions if we had a larger number of high-quality applications to choose from. On the current margin, I’d be more excited about research aimed at identifying new interventions that could be promising.

Here's a small sample of the things that fe…

Long-Term Future Fund: Ask Us Anything!

I’d overall like to see more work that has a solid longtermist justification but isn't as close to existing longtermist work. It seems like the LTFF might be well-placed to encourage this, since we provide funding outside of established orgs. This round, we received many applications from people who weren’t very engaged with the existing longtermist community. While these didn’t end up meeting our bar, some of the projects were fairly novel and good enough to make me excited about funding people like this in general.

There are also lots of particular less-e... (read more)

Long-Term Future Fund: Ask Us Anything!

Like Adam, I’ll focus on things that someone reading this might be interested in supporting or applying for. I want to emphasize that this is my personal take, not representing the whole fund, and I would be sad if this response stopped anyone from applying -- there’s a lot of healthy disagreement within the fund, and we fund lots of things where at least one person thinks it’s below our bar. I also think a well-justified application could definitely change my mind.

  1. Improving science or technology, unless there’s a strong case that the improvement would dif
... (read more)
Long-Term Future Fund: Ask Us Anything!

Just want to say I agree with both Habryka's comments and Adam's take that part of what the LTFF is doing is bridging the gap while orgs scale up (and increase in number) and don't have the capacity to absorb talent.

Long-Term Future Fund: Ask Us Anything!

Filtering out obvious misfits, the most common reason is that I don't think the project proposal would be sufficiently valuable for the long-term future, even if executed well. Less often, the reason is that there isn't strong enough evidence that the project will be executed well.

Sorry if this is an unsatisfying answer-- I think our applications are different enough that it’s hard to think of common reasons for rejection that are more granular. Also, often the bottom line is "this seems like it could be good, but isn't as good as other things we want to ... (read more)

3PabloAMC10mo — Hey Asya! I've seen that you've received a comment prize on this. Congratulations! I found it interesting. I was wondering: you give these two reasons for rejecting a funding application:
  • Project would be good if executed exceptionally well, but the applicant doesn't have a track record in this area, and there are no references I trust to be calibrated who can vouch for their ability.
  • Applicant wants to do research on some topic, but their previous research on similar topics doesn't seem very good.
My question is: what method would you use to evaluate the track record of someone who has not done a Ph.D. in AI safety, but rather in something like physics (my case :) )? Do you expect the applicant to have some track record in AI safety research? I don't plan to apply for funding in the short term, but I think I would find some intuition on this valuable. I also ask because I find it hard to calibrate myself on the quality of my own research.
Long-Term Future Fund: Ask Us Anything!

Really good question! 

We currently have ~$315K in the fund balance.* My personal median guess is that we could use $2M over the next year while maintaining this year's bar for funding. This would be:

  • $1.7M more than our current balance
  • $500K more per year than we’ve spent in previous years
  • $800K more than the total amount of donations received in 2020 so far
  • $400K more than a naive guess for the total donations we'll receive in all of 2020. (That is, if we wanted a year of donations to pay for a year of funding, we would need $400K
... (read more)
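A quick arithmetic sketch of what the bullets above imply (the figures are the approximate ones from the comment, not official accounting; the derived numbers are my own back-calculations):

```python
# Back-of-the-envelope check of the implied figures above.
fund_balance = 315_000      # ~current LTFF balance, per the comment
target_spend = 2_000_000    # median guess for next year's spending

gap_vs_balance = target_spend - fund_balance
print(f"gap vs. balance: ${gap_vs_balance:,}")  # ~$1.7M, matching the first bullet

# The remaining bullets imply rough historical figures:
prior_annual_spend = target_spend - 500_000     # ~$1.5M spent per year previously
donations_2020_so_far = target_spend - 800_000  # ~$1.2M received so far in 2020
naive_2020_total = target_spend - 400_000       # ~$1.6M naive full-year 2020 guess
```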
EA Forum Prize: Winners for August 2020

Random thought: I think it would be kind of cool if there were EA forum prizes for people publicly changing their minds in response to comments/ feedback.

3Aaron Gertler1y — I think someone doing this in a post is likely to aid their chances of winning a prize, but that's not an official thing; it's just based on how I'd expect judges to react (and how I might react, depending on the context). The "changed my mind" post/comment is one of several really good post/comment genres.

This creates weird incentives, e.g. I could construct a plausible-but-false view, make a post about it, then make a big show of changing my mind. I don't think the amounts of money involved make it worth it, but I'm wary of incentivizing things that are so easily gamed. 

New report on how much computational power it takes to match the human brain (Open Philanthropy)

Planned summary for the Alignment Newsletter:

In this blog post, Joseph Carlsmith gives a summary of his longer report estimating the number of floating point operations per second (FLOP/s) which would be sufficient to perform any cognitive task that the human brain can perform. He considers four different methods of estimation.

Using the mechanistic method, he estimates the FLOP/s required to model the brain’s low-level mechanisms at a level of detail adequate to replicate human task-performance. He does this by estimating that ~1e13 - 1e17 FLOP/s is
... (read more)
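To give a feel for the mechanistic method's arithmetic, here is a minimal sketch. The inputs (synapse count, firing rate, FLOPs per spike) are assumed round numbers for illustration, not the report's own figures; the report's analysis is far more careful and arrives at the ~1e13-1e17 FLOP/s range quoted above.

```python
# Illustrative mechanistic-method arithmetic (all inputs are assumed
# round numbers): FLOP/s ~ synapses x firing rate x FLOPs per spike.
synapses = 1e14         # rough synapse count for a human brain
firing_rate_hz = 1.0    # assumed average spikes per second per synapse
flops_per_spike = 1.0   # assumed FLOPs to model one spike through one synapse

flop_per_s = synapses * firing_rate_hz * flops_per_spike
print(f"{flop_per_s:.0e} FLOP/s")  # prints "1e+14 FLOP/s", inside the quoted range
```

Varying each input by an order of magnitude or two is what produces a wide range like 1e13-1e17 rather than a point estimate.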
Does Economic History Point Toward a Singularity?
On the acceleration model, the periods from 1500-2000, 10kBC-1500, and "the beginning of history to 10kBC" are roughly equally important data (and if that hypothesis has higher prior I don't think you can reject that framing). Changes within 10kBC - 1500 are maybe 1/6th of the evidence, and 1/3 of the relevant evidence for comparing "continuous acceleration" to "3 exponentials." I still think it's great to dig into one of these periods, but I don't think it's misleading to present this period as only 1/3 of
... (read more)