Funding Strategy Week · Marginal Funding Week · Donation Election · Pledge Highlight · Donation Celebration
Nov 12 - 18
Marginal Funding Week
A week for organisations to explain what they would do with marginal funding.
Dec 3 - 16
Donation Election
A crowd-sourced pot of funds will be distributed amongst three charities based on your votes. Continue donation election conversations here.
$23,947 raised
Intermission
Dec 16 - 22
Pledge Highlight
A week to post about your experience with pledging, and to discuss the value of pledging.
Dec 23 - 31
Donation Celebration
When the donation celebration starts, you’ll be able to add a heart to the banner showing that you’ve done your annual donations.

Quick takes

Some of my thoughts on funding.

It's giving season and I want to finally get around to publishing some of my thoughts and experiences around funding. I haven't written anything yet because I feel like I am mostly just revisiting painful experiences and will end up writing an angry rant. I have ideas for how things could be better, so hopefully this can lead to positive change, not just more complaining. All my experiences are in AI Safety.

On timing: Certainty is more important than speed. The total decision time is less important than the overdue time. Expecting a decision in 30 days and getting it in 35 days is worse than expecting the decision in 90 days and getting it in 85 days. Grantmakers providing statistics about timing expectations makes things worse: if the mean or median response time is N days and it is now day N+5, is it appropriate for me to send a follow-up email to check on the status? Technically it's not late yet; it could come tomorrow, or in N more days. Imagine if the Uber app showed you the global mean wait time for the last 12 months and there was no map to track your driver's arrival. "It doesn't have to reduce the waiting time, it just has to reduce the uncertainty" - Rory Sutherland

My conversations about people's expectations and experiences with people in Berkeley are at times very different from those with people outside of Berkeley. After I posted my announcement about shutting down AISS and my comment on the LTFF update, several people reached out to me about their experiences. Some I already knew well, some I had met, and others I didn't know before. Some of them had received funding a couple of times, but their negative experiences led them not to reapply and to walk away from their work or from the ecosystem entirely. At least one mentioned having a draft post about their experience that they did not feel comfortable publishing. There was definitely a point for me where I had already given up but just not realised it. I had already run out of funding
lukeprog · 4d
Yudkowsky's message is "If anyone builds superintelligence, everyone dies." Zvi's version is "If anyone builds superintelligence under anything like current conditions, everyone probably dies." Yudkowsky contrasts those framings with common "EA framings" like "It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."

Obviously I can't speak for all of EA, or all of Open Phil, and this post is my personal view rather than an institutional one since no single institutional view exists, but for the record, my inside view since 2010 has been "If anyone builds superintelligence under anything close to current conditions, probably everyone dies (or is severely disempowered)," and I think the difference between me and Yudkowsky has less to do with social effects on our speech and more to do with differing epistemic practices, i.e. about how confident one can reasonably be about the effects of poorly understood future technologies emerging in future, poorly understood circumstances. (My all-things-considered view, which includes various reference classes and partial deference to many others who think about the topic, is more agnostic and hasn't consistently been above the "probably" line.)

Moreover, I think those who believe some version of "If anyone builds superintelligence, everyone dies" should be encouraged to make their arguments loudly and repeatedly; the greatest barrier to actually-risk-mitigating action right now is the lack of political will. That said, I think people should keep in mind that:

* Public argumentation can only get us so far when the evidence for the risks and their mitigations is this unclear, when AI has automated so little of the economy, when AI failures have led to so few deaths, etc.
* Most concrete progress on worst-case AI risk
I'm currently facing a career choice between a role working on AI safety directly and a role at 80,000 Hours. I don't want to go into the details too much publicly, but one really key component is how to think about the basic leverage argument in favour of 80k. This is the claim that goes: well, in fact I heard about the AIS job from 80k. If I ensure even two (additional) people hear about AIS jobs by working at 80k, isn't it possible that going to 80k could be even better for AIS than doing the job could be?

In that form, the argument is naive and implausible. But I don't think I know what the "sophisticated" argument that replaces it is. Here are some thoughts:

* Working in AIS also promotes growth of AIS. It would be a mistake to only consider the second-order effects of a job when you're forced to by the lack of first-order effects.
* OK, but focusing on org growth full-time seems surely better for org growth than having it be a side effect of the main thing you're doing.
* One way to think about this is to compare two strategies for improving talent at a target org: "try to find people to move into roles in the org, as part of cultivating a whole overall talent pipeline into the org and related orgs", versus "put all of your full-time effort into having a single person, i.e. you, do a job at the org". It seems pretty easy to imagine that the former would be a better strategy?
* I think this is the same intuition that makes pyramid schemes seem appealing (something like: "surely I can recruit at least 2 people into the scheme, and surely they can recruit more people, and surely the norm is actually that you recruit a tonne of people"), and it's really only by looking at the mathematics of the population as a whole that you can see that it can't possibly work, and that it's necessarily the case that most people in the scheme will recruit exactly zero people ever.
* Maybe a pyramid scheme is the extreme of "what if literally everyone in EA work
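The population-level arithmetic behind the pyramid-scheme point can be made concrete. This is a minimal sketch, not from the quick take itself; the full-binary-tree model and the function name are illustrative assumptions:

```python
# In any finite population, every member except the seed was recruited by
# exactly one person, so total recruitments = N - 1 and the *mean* number
# of recruits per member is (N - 1) / N < 1. If some members recruit
# several people, most members must therefore recruit zero.

def recruitment_stats(depth: int):
    """Model a scheme where every non-leaf member recruits exactly 2 people
    (a full binary tree of the given depth)."""
    total = 2 ** (depth + 1) - 1   # members in a full binary tree
    leaves = 2 ** depth            # members who recruited nobody
    mean_recruits = (total - 1) / total
    return total, leaves, mean_recruits

total, leaves, mean = recruitment_stats(10)
print(total, leaves, mean)
# More than half the members recruit exactly zero people, even though
# every other member recruits two, and the mean stays below 1.
assert leaves / total > 0.5
assert mean < 1
```

The same bound holds for any recruitment tree, not just binary ones: the mean can never reach 1, so "everyone recruits a couple of people" is arithmetically impossible.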
Linch · 2d
For fun I mapped different clusters of people's overall AI x-risk probabilities by ~2100 to other rates of dying in my lifetime, which is a probability that I and other quantitative people might have a better intuitive grasp of. It might not be helpful, or might be actively anti-helpful, to other people, but whatever.

* x-risk "doomer": >90% probability of dying. Analogy: the naive risk of death for an average human (around 93% of homo sapiens have died so far). Some doomers have far higher probabilities in log-space. You can map that to your realistic all-things-considered risk of death[1] or something. (This analogy might be the least useful.)
* Median x-risk-concerned EA: 15-35% risk of dying. I can't find a good answer for the median, but I think this is where I'm at, so it's a good baseline[2]; many people I talk to give similar numbers, and it's also where Michael Dickens put his numbers in a recent post. Analogy: lifelong risk of death from heart disease. Roughly 20% is the number people give for Americans' lifelong risk of dying from heart disease. This is not accounting for technological changes from improvements in GLP-1 agonists, statins, etc. My actual all-things-considered view of my own lifetime risk of dying from heart disease (even ignoring AI) is considerably lower than 20%, but it's probably not worth going into detail here[3].
* Median ML researcher in surveys: 5-10%. See here for what I think is the most recent survey; I think these numbers are relatively stable across surveyed years, though trending slightly upwards. Analogy: lifelong risk of dying from the 3-5 most common cancers. I couldn't easily find a source that lists the cancers most to least likely to kill you, but I think 3-5 sounds right after a few calculations; you can do the math yourself here. As others have noted, if true, this would put AI x-risk among your most likely causes of death.
* AI "optimist": ~1% risk of doom. See e.g. here. "a tail risk worth considering, but not the dominant so
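The "~93% of homo sapiens have died so far" figure is easy to sanity-check. A back-of-envelope sketch, assuming the commonly cited demographic estimates of roughly 117 billion humans ever born and roughly 8 billion alive today (these inputs are assumptions, not from the quick take):

```python
# Rough check of the "~93% of humans have died" baseline, under the
# assumed inputs of ~117 billion humans ever born and ~8 billion alive.
ever_born = 117e9   # assumed estimate of humans ever born
alive_now = 8e9     # assumed current world population
fraction_died = (ever_born - alive_now) / ever_born
print(f"{fraction_died:.1%}")  # roughly 93%
assert 0.92 < fraction_died < 0.94
```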
Mini EA Forum Update: We now have a unified @mention feature in our editor! You can use it to add links to posts, tags, and users. Thanks so much to @Vlad Sitalo — both for the GitHub PR introducing this feature, and for time and again making useful improvements to our open source codebase. 💜