All of Tom Barnes's Comments + Replies

I thought it might be helpful for me to add my own thoughts, as a fund manager at EAIF (note: I'm speaking in a personal capacity, not on behalf of EA Funds or EV).

  1. Firstly, I'd like to apologise for my role in these mistakes. I was the Primary Investigator (PI) for Igor's application, and thus I share some responsibility here. Specifically as the PI, I should have (a) evaluated the application sooner, (b) reached a final recommendation sooner, and (c) been more responsive to communications after making a decision.
    1. I did not make an initial decision until Novem
... (read more)
Vincent van der Holst (6h)
@Tom Barnes thank you for this insight. Your team and Caleb must work under a lot of pressure, and this post, even though important, must not be nice for you to read. It was clear to me from our EAIF applications and interactions that your team is overworked, understaffed, and/or burned out. I think it's so important that you are honest about that, and that you work on a process that keeps your quality high. It seems from this post that EAIF is not meeting timelines and not communicating clearly, and it was clear from the feedback on our application that it was not carefully reviewed (I can share the feedback and the errors and inconsistencies in it). Can you limit applications somehow and focus on making better decisions on fewer applications with clear communication? I'd rather wait for your team to carefully consider our application, so I don't have to waste time drafting it every 6 months only for it not to be carefully reviewed.

Specifically as the PI, I should have (a) evaluated the application sooner, (b) reached a final recommendation sooner, and (c) been more responsive to communications after making a decision

 

This comment feels to me like temporarily embarrassed deadline-meeting, and I don't think that's realistic. The backlog is very understandable given your task and your staffing; I assume you're doing what you can on the staffing front, but even if that's resolved it's just a big task, and 3 weeks is a very ambitious timeline even with full staffing. Given that, it's not... (read more)

My boring answer would be to see details on our website. In terms of submission style, we say:

  • We recommend that applicants take about 1–2 hours to write their applications. This does not include the time spent developing the plan and strategy for the project – we recommend thinking about those carefully prior to applying.
  • Please keep your answers brief and ensure the total length of your responses does not exceed 10,000 characters. We recommend a total length of 2,000–5,000 characters.
  • We recommend focusing on the substantive arguments in favour of your proj
... (read more)

Currently we don't have a process for retroactively evaluating EAIF grants. However, there are a couple of informal channels which can help to improve decision-making:

  • We request that grantees fill out a short form detailing the impact of their grant after six months. These reports are both directly helpful for evaluating a future application from the grantee, and indirectly helpful for calibrating the "bang-for-your-buck" we should expect from different grant sizes for different projects.
  • When evaluating the renewal of a grant, we can compare the initial appli
... (read more)

Hey - I think it's important to clarify that EAIF is optimising for something fairly different from GiveWell (although we share the same broad aim):

  • Specifically, GiveWell is optimising for lives saved in the next few years, under the constraint of health projects in LMICs, with a high probability of impact and fairly immediate / verifiable results.
  • Meanwhile, EAIF is focused on a hits-based, low-certainty area, where the evidence base is weaker, grants have longer paths to impact, and the overarching goal is often unclear.

As such, a direct/equivalent c... (read more)

I think the premise of your question is roughly correct: I do think it's pretty hard to "help EA notice what it is important to work on", for a bunch of reasons:

  • It could lead to new, unexpected directions which might be counterintuitive / controversial.
  • It requires the community to have the psychological, financial and intellectual safety to identify / work on causes which may not be promising
  • It needs a non-trivial number of people to engage with the result of exploration, and act upon it (including people who can direct substantial resources)
  • It has a very lo
... (read more)

Good question! We have discussed running RFP(s) to more directly support projects we'd like to see. First, though, I think we want to do some more strategic thinking about the direction we want EAIF to go in, and hence at this stage I think we are fairly unsure about which project types we'd like to see more of.

Caveats aside, I personally[1] would be pretty interested in:

  • Macrostrategy / cause prioritization research. I think a substantial amount of intellectual progress was made in the 2000s / early 2010s from a constellation of different places (e.
... (read more)

Hey, good question!

Here's a crude rationale: 

  • Suppose that by donating $1k to an EAIF project, they get 1 new person to consider donating more effectively. 
  • This 1 new person pledges to give 1% of their salary to GiveWell's top charities, and they do this for the next 10 years. 
  • If they make (say) $50k / year, then over 10 years they will donate $5k to GiveWell charities. 
  • The net result is that a $1k donation to EAIF led to $5k donated to top GiveWell charities - i.e. a dollar donated to EAIF goes 5x further than a dollar donated to a GiveWell top charity (see the sketch below)
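
To make the arithmetic explicit, here's a minimal sketch of the same back-of-the-envelope calculation; every number is just the illustrative figure from the bullets above, not an EAIF estimate:

```python
# Rough back-of-the-envelope sketch of the multiplier argument above.
# All inputs are the illustrative numbers from the comment, not EAIF estimates.

donation_to_eaif = 1_000    # $ given to an EAIF-funded project
new_donors_created = 1      # people who start donating effectively as a result
salary = 50_000             # $ / year earned by the new donor
pledge_fraction = 0.01      # 1% of salary pledged
years_giving = 10

money_moved = new_donors_created * salary * pledge_fraction * years_giving
multiplier = money_moved / donation_to_eaif

print(f"Money moved to GiveWell top charities: ${money_moved:,.0f}")
print(f"Implied multiplier vs. donating directly: {multiplier:.0f}x")
# -> $5,000 moved, i.e. a 5x multiplier under these (very rough) assumptions
```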

Of cou... (read more)

Tom Barnes (4mo)

I agree there's no single unified resource. Having said that, I found Richard Ngo's "five alignment clusters" pretty helpful for bucketing different groups & arguments together. Reposting below:

  1. MIRI cluster. Think that P(doom) is very high, based on intuitions about instrumental convergence, deceptive alignment, etc. Does work that's very different from mainstream ML. Central members: Eliezer Yudkowsky, Nate Soares.
  2. Structural risk cluster. Think that doom is more likely than not, but not for the same reasons as the MIRI cluster. Instead, this cluster f
... (read more)

A couple of weeks ago I blocked all mentions of "Effective Altruism", "AI Safety", "OpenAI", etc from my twitter feed. Since then I've noticed it become much less of a time sink, and much better for mental health. Would strongly recommend!

burner (4mo)
throw e/acc on there too

I wrote the following on a draft of this post. For context, I currently do (very) part-time work at EAIF:

Overall, I'm pretty excited to see EAIF orient to a principles-first EA. Despite recent challenges, I continue to believe that the EA community is doing something special and important, and is fundamentally worth fighting for. With this reorientation of EAIF, I hope we can get the EA community back to a strong position. I share many of the uncertainties listed - about whether this is a viable project, how EAIF will practically evaluate grants under this worldview, or if it's even philosophically coherent. Nonetheless, I'm excited to see what can be done.

Yeah, that's fair. I wrote this somewhat off the cuff, but since it got more engagement than I expected, I'd make it a full post if I were writing it again.

Is your claim "Impartial altruists with ~no credence on longtermism would have more impact donating to AI/GCRs over animals / global health"?

To my mind, this is the crux, because:

  1. If Yes, then I agree that it totally makes sense for non-longtermist EAs to donate to AI/GCRs
  2. If No, then I'm confused why one wouldn't donate to animals / global health instead?

[I use "donate" rather than "work on" because donations aren't sensitive to individual circumstances, e.g. personal fit. I'm also assuming impartiality because this seems core to EA to me, but of course one could donate / work on a topic for non-impartial/ non-EA reasons]

Vanessa (5mo)
Yes. Moreover, GCR mitigation can appeal even to partial altruists: something that would kill most of everyone, would in particular kill most of whatever group you're partial towards. (With the caveat that "no credence on longtermism" is underspecified, since we haven't said what we assume instead of longtermism; but the case for e.g. AI risk is robust enough to be strong under a variety of guiding principles.)

Mildly against the Longtermism --> GCR shift
Epistemic status: Pretty uncertain, somewhat rambly

TL;DR: replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics.

Over the last ~6 months I've noticed a general shift amongst EA orgs to focus less on reducing risks from AI, Bio, nukes, etc based on the logic of longtermism, and more based on Global Catastrophic Risks (GCRs) directly. Some data points on this:

  • Open Phil renaming its EA Community Gro
... (read more)

Thanks for sharing this, Tom! I think this is an important topic; I agree with some of the downsides you mention and think they're worth weighing highly. Many of them are the kinds of things I was thinking of in this post of mine when I listed these anti-claims:

Anti-claims

(I.e. claims I am not trying to make and actively disagree with) 

  • No one should be doing EA-qua-EA talent pipeline work
    • I think we should try to keep this onramp strong. Even if all the above is pretty correct, I think the EA-first onramp will continue to appeal to lots of gr
... (read more)
Arepo (5mo)
I've upvoted this comment, but weakly disagree that there's such a shift happening (EVF orgs still seem to be selecting pretty heavily for longtermist projects, the global health and development fund has been discontinued while the LTFF is still around, etc), and quite strongly disagree that it would be bad if it is: that 'if' clause is doing a huge amount of work here.

In practice I think the EA community is far too sanguine about our prospects, post-civilisational collapse, of becoming interstellar (which, from a longtermist perspective, is what matters - not 'recovery'). I've written a sequence on this here, and have a calculator which allows you to easily explore the simple model's implications on your beliefs described in post 3 here, with an implementation of the more complex model available on the repo. As Titotal wrote in another reply, it's easy to believe 'lesser' catastrophes are many times more likely, so could very well be where the main expected loss of value lies.

I think I agree with this, but draw a different conclusion. Longtermist work has focused heavily on existential risk, and in practice the risk of extinction, IMO seriously dropping the ball on trajectory changes with little more justification than that the latter are hard to think about. As a consequence they've ignored what seem to me the very real loss of expected unit-value from lesser catastrophes, and the to-me-plausible increase in it from interventions designed to make people's lives better (generally lumping those in as 'shorttermist'). If people are now starting to take other catastrophic risks more seriously, that might be remedied.

(Also relevant to your 3rd and 4th points:) This seems to be treating 'focus only on current generations' and 'focus on Pascalian arguments for astronomical value in the distant future' as the only two reasonable views. David Thorstad has written a lot, I think very reasonably, about reasons why expected value of longtermist scenarios might actually be quite

Great post, Tom, thanks for writing!

One thought is that a GCR framing isn't the only alternative to longtermism. We could also talk about caring for future generations. 

This has fewer of the problems you point out (e.g. differentiates between recoverable global catastrophes and existential catastrophes). To me, it has warm, positive associations. And it's pluralistic, connected to indigenous worldviews and environmentalist rhetoric.

Over the last ~6 months I've noticed a general shift amongst EA orgs to focus less on reducing risks from AI, Bio, nukes, etc based on the logic of longtermism, and more based on Global Catastrophic Risks (GCRs) directly... My guess is these changes are (almost entirely) driven by PR concerns about longtermism.

 

It seems worth flagging that whether these alternative approaches are better for PR (or outreach considered more broadly) seems very uncertain. I'm not aware of any empirical work directly assessing this even though it seems a clearly empirical... (read more)

One point that hasn't been mentioned: GCRs may be many, many orders of magnitude more likely than extinctions. For example, it's not hard to imagine a super deadly virus that kills 50% of the world's population, but a virus that manages to kill literally everyone, including people hiding out in bunkers, remote villages, and in Antarctica, doesn't make too much sense: if it was that lethal, it would probably burn out before reaching everyone.

The framing "PR concerns" makes it sound like all the people doing the actual work are (and will always be) longtermists, whereas the focus on GCR is just for the benefit of the broader public. This is not the case. For example, I work on technical AI safety, and I am not a longtermist. I expect there to be more people like me either already in the GCR community, or within the pool of potential contributors we want to attract. Hence, the reason to focus on GCR is building a broader coalition in a very tangible sense, not just some vague "PR".

Speaking personally, I have also perceived a move away from longtermism, and as someone who finds longtermism very compelling, this has been disappointing to see. I agree it has substantive implications on what we prioritise.

Speaking more on behalf of GWWC, where I am a researcher: our motivation for changing our cause area from “creating a better future” to “reducing global catastrophic risks” really was not based on PR. As shared here:

We think of a “high-impact cause area” as a collection of causes that, for donors with a variety of values and start

... (read more)
Ryan Greenblatt (5mo)
I think reducing GCRs seems pretty likely to wildly outcompete other traditional approaches[1] if we use a slightly broad notion of current generation (e.g. currently existing people), due to the potential for a techno-utopian world making the lives of currently existing people >1,000x better (which heavily depends on diminishing returns and other considerations). E.g., immortality, making them wildly smarter, able to run many copies in parallel, experience insanely good experiences, etc. I don't think BOTECs will be a crux for this unless we start discounting things rather sharply.

IMO, the main axis of variation for EA-related cause prio is "how far down the crazy train do we go", not "person-affecting (current generations) vs otherwise" (though views like person-affecting ethics might be downstream of crazy train stops).

Idk what I think about Longtermism --> GCR, but I do think that we shouldn't lose "the future might be totally insane" and "this might be the most important century in some longer view". And I could imagine focus on GCR killing a broader view of history.

1. ^ That said, if we literally just care about experiences which are somewhat continuous with current experiences, it's plausible that speeding up AI outcompetes reducing GCRs/AI risk. And it's plausible that there are more crazy-sounding interventions which look even better (e.g. extremely low-cost cryonics). Minimally, the overall situation gets dominated by "have people survive until techno utopia and ensure that techno utopia happens". And the relative tradeoffs between having people survive until techno utopia and ensuring that techno utopia happens seem unclear and will depend on some more complicated moral view. Minimally, animal suffering looks relatively worse to focus on.
harfe (5mo)
Meta: this should not have been a quick take, but a post (references, structure, tldr, epistemic status, ...)

I haven't yet decided, but it's likely that a majority of my donations will go to this year's donor lottery. I'm fairly convinced by the arguments in favour of donor lotteries [1, 2], and would encourage others to consider them if they're unsure where to give. 

Having said that, lotteries generate fewer fuzzies than donating directly, so I may separately give to some effective charities which I'm personally excited about.

Thanks - I just saw RP put out this post, which makes much the same point. Good to be cautious about interpreting these results!

YouGov Poll on SBF and EA

I recently came across this article from YouGov (published last week), summarizing a survey of US citizens for their opinions on Sam Bankman-Fried, Cryptocurrency and Effective Altruism.

I half-expected the survey responses to be pretty negative about EA, given press coverage and potential priming effects associating SBF to EA. So I was positively surprised that:

Survey Results

(it's worth noting that there were only ~1000 participants, and the survey was online only)

I am very sceptical about the numbers presented in this article. 22% of US citizens have heard of Effective Altruism? That seems very high. RP did a survey in May 2022 and found that somewhere between 2.6% and 6.7% of the US population had heard of EA. Even then, my intuition was that this seemed high. Even with the FTX stuff it seems extremely unlikely that 22% of Americans have actually heard of EA.

(FYI to others - I've just seen Ajeya's very helpful writeup, which has already partially answered this question!)

What's the reason for the change from Longtermism to GCRs? How has/will this change strategy going forward?

Ajeya (6mo)
We decided to change the name to reflect the fact that we don't think you need to take a long-termist philosophical stance to work on AI risks and biorisks; the specific new name was chosen from among a few contenders after a survey process. The name change doesn't reflect any sharp break from how we have operated in practice for the last while, so I don't think there are any specific strategy changes it implies.

It seems that OP's AI safety & gov teams have both been historically capacity-constrained. Why the decision to hire for these roles now (rather than earlier)?

lukeprog (6mo)
The technical folks leading our AI alignment grantmaking (Daniel Dewey and Catherine Olsson) left to do more "direct" work elsewhere a while back, and Ajeya only switched from a research focus (e.g. the Bio Anchors report) to an alignment grantmaking focus late last year. She did some private recruiting early this year, which resulted in Max Nadeau joining her team very recently, but she'd like to hire more. So the answer to "Why now?" on alignment grantmaking is "Ajeya started hiring soon after she switched into a grantmaking role. Before that, our initial alignment grantmakers left, and it's been hard to find technical folks who want to focus on grantmaking rather than on more thoroughly technical work."

Re: the governance team. I've led AI governance grantmaking at Open Phil since ~2019, but for a few years we felt very unclear about what our strategy should be, and our strategic priorities shifted rapidly, and it felt risky to hire new people into a role that might go away through no fault of their own as our strategy shifted. In retrospect, this was a mistake and I wish we'd started to grow the team at least as early as 2021. By 2022 I was finally forced into a situation of "Well, even if it's risky to take people on, there is just an insane amount of stuff to do and I don't have time for ~any of it, so I need to hire." Then I did a couple of non-public hiring rounds which resulted in recent new hires Alex Lawsen, Trevor Levin, and Julian Hazell. But we still need to hire more; all of us are already overbooked and constantly turning down opportunities for lack of bandwidth.

(FYI to others - I've just seen Ajeya's very helpful writeup, which has already partially answered this question!)

Lowe Lundin (6mo)
To add on to this, I'm confused by your choice to grow these teams quite abruptly as opposed to incrementally. What's your underlying reasoning?

Ten Project Ideas for AI X-Risk Prioritization

I made a list of 10 ideas I'd be excited for someone to tackle, within the broad problem of "how to prioritize resources within AI X-risk?" I won't claim these projects are more / less valuable than other things people could be doing. However, I'd be excited if someone took a stab at them.

10 Ideas:

  1. Threat Model Prioritization
  2. Country Prioritization
  3. An Inside-view timelines model
  4. Modelling AI Lab deployment decisions
  5. How are resources currently allocated between risks / interventions / countries
  6. How to allocate
... (read more)

Thanks Ajeya, this is very helpful and clarifying!

I am the only person who is primarily focused on funding technical research projects ... I began making grants in November 2022

Does this mean that prior to November 2022 there were ~no full-time technical AI safety grantmakers at Open Philanthropy? 

OP (prev. GiveWell Labs) has been evaluating grants in the AI safety space for over 10 years. In that time the AI safety field and Open Philanthropy have both grown, with OP granting over $300m on AI risk. Open Phil has also done a lot of research on the pro... (read more)

OllieBase (7mo)
Daniel Dewey was a Program Officer for potential risks from advanced AI at OP for several years. I don't know how long he was there for, but he was there in 2017 and left before May 2021.
Answer by Tom Barnes (Sep 13, 2023)

Following the episode with Mustafa, it would be great to interview the founders of leading AI labs - perhaps Dario (Anthropic) [again], Sam (OpenAI), or Demis (DeepMind). Or alternatively, the companies that invest / support them - Sundar (Google) or Satya (Microsoft).

It seems valuable to elicit their honest opinions[1] about "p(doom)", timelines, whether they believe they've been net-positive for the world, etc.

  1. ^

    I think one risk here is either:

    a) not challenging them firmly enough - lending them undue credibility / legitimacy in the minds of listener

... (read more)
JWS (7mo)
I don't know, I feel like there are serious questions for these people to answer that they're probably not going to be drawn on unless in a particular arena (such as a US Senate hearing),[1] and otherwise interviewing these people further might not be that high value? Especially since Rob/Luisa are very generous towards their guests![2]

There was a great interview with Dario recently by Dwarkesh, but even that to me was on the 'soft' side of things. Like I still don't have a clear, honest idea of why, if you do truly think AIXR is high and AI Safety is so important, you would want billions of dollars to create an AI 10x as capable as your current leading model.

1. ^ And even then it took Gary Marcus nudging the senators that Sam hadn't answered their question, and even then Sam white-lied it by saying he feared AI could cause "significant harm" whereas he should have said "human extinction"

2. ^ Btw which is not a bad thing if you're reading, Rob/Luisa. <3 you both, you have no idea how much value the podcast has given me over the last few years

This looks very exciting, thanks for posting!

I'll quickly mention a couple of things that stuck out to me that might make the CEA potentially overoptimistic:

  1. IQ points lost per μg/dl of lead - this is likely a logarithmic relationship (as suggested by Bernard and Schukraft). For a BLL of 2.4 - 10 μg/dl, IQ loss from a 1 μg/dl increase may be close to 0.5, but above 10, it's closer to 0.2 per 1 μg/dl increase, and above 20, closer to 0.1. Average BLL in Bangladesh seems to be around 6.8 μg/dl, though amongst residents living near turmeric sources of lead, it co
... (read more)
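
A minimal sketch of the piecewise relationship described in point 1, using the band values quoted above; extending the lowest band below 2.4 μg/dl is my own simplifying assumption, and a full CEA would integrate the loss across bands rather than apply a single marginal rate at the average BLL:

```python
# Sketch of the piecewise marginal IQ loss per ug/dl of blood lead level (BLL)
# described in point 1. Band values are the rough figures quoted in the comment
# (after Bernard and Schukraft).

def marginal_iq_loss_per_ug_dl(bll: float) -> float:
    """Approximate IQ points lost per additional 1 ug/dl at a given BLL."""
    if bll <= 10:      # quoted band is 2.4-10 ug/dl; extended downward here for simplicity
        return 0.5
    elif bll <= 20:
        return 0.2
    else:
        return 0.1

# At the ~6.8 ug/dl average BLL cited for Bangladesh, the marginal loss is ~0.5
# IQ points per ug/dl; at 25 ug/dl it would be ~0.1.
print(marginal_iq_loss_per_ug_dl(6.8))   # 0.5
print(marginal_iq_loss_per_ug_dl(25.0))  # 0.1
```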
Kate Porterfield (8mo)
Thanks so much for your response! This is excellent feedback and we're grateful for the interest.

  1. IQ loss: We did go to the experts to estimate based on actual numbers, so we're reasonably confident about this. And even if it were out by 50 percent, the results are still strong.
  2. Income loss by IQ has been a moving target in the literature. It could be at that lower end -- we used a mid-point of studies. Again, even if we are out by half, the results are positive.
  3. Replicability is certainly an issue. We do know that turmeric is used throughout the country anecdotally, and that other sources of lead are important in Bangladesh (aluminum pots, contaminated sites, perhaps fish). Additionally, we know that no other interventions have been undertaken - except for a small site clean-up that will not have a national impact. We're happy to report that the government is very engaged and committed, and has asked the WB for its support on these other sources. These conditions might be less available in other countries, including Central Asia, India (northern states), the Balkans (some), the Middle East (some), and North Africa. Not all countries have the issue, but many do. So, yes, viability of replication of this solution will be mixed. On the plus side, a similar intervention in Georgia had the same rapid result, and without a large expense. With that, we hope to continue to repeat the success.

All, please do send more thoughts and comments! We're new to this kind of assessment, and need your feedback!

Best, Pure Earth Team

It would be great to have some way to filter for multiple topics.

Example: Suppose I want to find posts related to the cost-effectiveness of AI safety. Instead of just filtering for "AI safety", or for just "Forecasting and estimation", I might want to find posts only at the intersection of those two. I attempted to do this by customizing my frontpage feed, but this doesn't really work (since it heavily biases to new/upvoted posts)

JP Addison (9mo)
You can do this! Filter by topics on the left hand side of the search page.

it relies primarily on heuristics like organiser track record and higher-level reasoning about plans. 

I think this is mostly correct, with the caveat that we don't exclusively rely on qualitative factors and subjective judgement alone. The way I'd frame it is more as a spectrum between

[Heuristics] <------> [GiveWell-style cost-effectiveness modelling]

I think I'd place FP's longtermist evaluation methodology somewhere between those two poles, with flexibility based on what's feasible in each cause area.

I'll +1 everything Johannes has already said, and add that several people (including myself) have been chewing over the "how to rate longtermist projects" question for quite some time. I'm unsure when we will post something publicly, but I hope it won't be too far in the future.

If anyone is curious for details feel free to reach out!

Quick take: renaming shortforms to Quick takes is a mistake

This looks super interesting, thanks for posting! I especially appreciate the "How to apply" section

One thing I'm interested in is seeing how this actually looks in practice - specifying real exogenous uncertainties (e.g. about timelines, takeoff speeds, etc), policy levers (e.g. these ideas, different AI safety research agendas, etc), relations (e.g. between AI labs, governments, etc) and performance metrics (e.g "p(doom)", plus many of the sub-goals you outline). What are the conclusions? What would this imply about prioritization decisions? etc

I appreci... (read more)

Max Reddel (11mo)
Thank you for your thoughtful comment! You've highlighted some of the key aspects that I believe make this kind of AI governance model potentially valuable. I too am eager to see how these concepts would play out in practice.

Firstly, on specifying real exogenous uncertainties, I believe this is indeed a crucial part of this approach. As you rightly pointed out, uncertainties around AI development timelines, takeoff speeds, and others are quite significant. A robust AI governance framework should indeed have the ability to effectively incorporate these uncertainties.

Regarding policy levers, I agree that an in-depth understanding of different AI safety research agendas is essential. In my preliminary work, I have started exploring a variety of such agendas. The goal is not only to understand these different research directions but also to identify which might offer the most promise given different (but also especially bleak) future scenarios.

In terms of relations between different entities like AI labs, governments, etc., this is another important area I'm planning to look into. The nature of these relations can significantly impact the deployment and governance of AI, and we would need to develop models to help us better understand these dynamics.

Regarding performance metrics like p(doom), I'm very much in the early stages of defining and quantifying these. It's quite challenging because it requires balancing a number of competing factors. Still, I'm confident that our approach will eventually enable us to develop robust metrics for assessing different policy options. An interesting notion here is that p(doom) is quite an aggregated variable. The DMDU approach would provide us with the opportunity to have a set of (independent) metrics that we can attempt to optimize all at the same time (think Pareto-optimality here).

As to the conclusions and the implications for prioritization decisions, it's hard to say before running optimizations or simulation

Should recent AI progress change the plans of people working on global health who are focused on economic outcomes?

I think so - see here or here for a bit more discussion on this.

If you think that AI will go pretty well by default (which I think many neartermists do)

My guess/impression is that this just hasn't been discussed by neartermists very much (which I think is one sad side-effect of bucketing all AI stuff in a "longtermist" worldview)

Great question!

One can claim Gift Aid on a donation to the Patient Philanthropy Fund (PPF), e.g. if donating through Giving What We Can. So a basic-rate taxpayer gets a 25% "return" on the initial donation (via Gift Aid). The fund can then be expected to make a financial return equivalent to an index fund (~10% p.a. for e.g. the S&P 500).

So, if you buy the claim that your expected impact will be 9x larger in 10 years than today, then a £1,000 donation today will have an expected (mean) impact of £11,250, for longtermist causes (£1,000 * 1.25 * 9)[1]
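
A minimal sketch of that calculation, taking the 25% basic-rate Gift Aid uplift and the claimed 9x ten-year impact multiplier at face value:

```python
# Quick sketch of the expected-impact arithmetic above. The 25% Gift Aid uplift
# is the standard UK basic-rate figure; the 9x "impact in 10 years" multiplier is
# the claim from the linked analysis, taken at face value here.

donation = 1_000            # GBP given today
gift_aid_uplift = 1.25      # basic-rate Gift Aid adds 25%
impact_multiplier_10y = 9   # claimed ratio of expected impact in 10 years vs today

expected_impact = donation * gift_aid_uplift * impact_multiplier_10y
print(f"Expected impact of a GBP {donation:,} donation: GBP {expected_impact:,.0f}")
# -> GBP 11,250, matching the figure in the comment
```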

Th... (read more)

I think this could be an interesting avenue to explore. One very basic way to (very roughly) do this is to model p(doom) effectively as a discount rate. This could be an additional user input on GiveWell's spreadsheets.

So for example, if your p(doom) is 20% in 20 years, then you could increase the discount rate by roughly 1% per year

[Technically this will be somewhat off, since (I'm guessing) most people's p(doom) doesn't increase at a constant rate, in the way a fixed discount rate does.]
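
As a minimal sketch, assuming a constant annual hazard (the simplification flagged above), the equivalent per-year discount-rate add-on works out slightly above 1%:

```python
# Sketch of the conversion described above: turning a cumulative p(doom) over a
# horizon into an (approximately) equivalent constant annual discount-rate add-on.
# Assumes a constant annual hazard, which the comment itself flags as a simplification.

p_doom = 0.20      # cumulative probability of doom over the horizon
years = 20

# Constant annual hazard h such that 1 - (1 - h)**years == p_doom
annual_hazard = 1 - (1 - p_doom) ** (1 / years)
print(f"Equivalent constant annual discount-rate add-on: {annual_hazard:.2%}")
# -> ~1.11% per year, close to the "roughly 1%" rule of thumb in the comment
```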

MikhailSamin (2y)
I think discounting QALYs/DALYs due to the probability of doom makes sense if you want a better estimate of QALYs/DALYs; but it doesn’t help with estimating the relative effectiveness of charities and doesn’t help to allocate the funding better. (It would be nice to input the distribution of the world ending in the next n years and get the discounted values. But it’s the relative cost of ways to save a life that matters; we can’t save everyone, so we want to save the most lives and reduce suffering the most, the question of how to do that means that we need to understand what our actions lead to so we can compare our options. Knowing how many people you’re saving is instrumental to saving the most people from the dragon. If it costs at least $15000 to save a life, you don’t stop saving lives because that’s too much; human life is much more valuable. If we succeed, you can imagine spending stars on saving a single life. And if we don’t, we’d still like to reduce the suffering the most and let as many people as we can live for as long as humanity lives; for that, we need estimates of the relative value of different interventions conditional on the world ending in n years with some probability.)

Rob Bensinger of MIRI tweets:

...I'm happy to say that MIRI leadership thinks "humanity never builds AGI" would be the worst catastrophe in history, would cost nearly all of the future's value, and is basically just unacceptably bad as an option.

Just to add that the Research Institute for Future Design (RIFD) is a Founders Pledge recommendation for longtermist institutional reform

(disclaimer: I am a researcher at Founders Pledge)

OpenPhil might be in a position to expand EA’s expected impact if it added a cause area that allowed for more speculative investments in Global Health & Development.

My impression is that Open Philanthropy's Global Health and Development team already does this? For example, OP has focus areas on Global aid policy, Scientific research and South Asian air quality, areas which are inherently risky/uncertain.

They have also taken a hits-based approach philosophically, and this is what distinguishes them from GiveWell - see e.g.

Hits. We are explicitly purs

... (read more)

LiaH (2y)
Their assessment seems to be three small policy spheres, rather than global health policy, which is larger in scale. 

Thanks for writing this Lizka! I agree with many of the points in this [I was also a visiting fellow on the longtermist team this summer]. I'll throw my two cents in about my own reflections (I broadly share Lizka's experience, so here I just highlight the upsides/downsides that especially resonated with me, or things unique to my own situation):

Vague background:

  • Finished BSc in PPE this June
  • No EA research experience and very little academic research experience
  • Introduced to EA in 2019

Upsides:

  • Work in areas that are intellectually stimulating
... (read more)
MaxRa (2y)
Thanks a lot for writing about your experiences, Lizka and Tom! The details about why you were happy with your managers were especially valuable info for me.

This is super helpful, thank you!

Which departments/roles do you think are most important to work in from an EA perspective? The Cabinet Office, HM Treasury and FCDO seem particularly impactful, but are also the most crowded and competitive. Are there lesser known departments doing neglected but important work? (e.g. my impression is DEFRA would be this for animal welfare policy - are there similar opportunities in other cause areas?). Thanks!

tobyj (3y)
These 2 apps should help give a sense of where you can work on different causes in the Civil Service: https://highimpact.shinyapps.io/impact_areas/ and https://highimpact.shinyapps.io/civil_service_jobs_explorer/ That said, for non-civil servants - I would strongly suggest focussing the first role you get on skill development and understanding how government works, rather than trying to go straight into the most impactful area. Civil servants are expected to move around a lot, so you can find the most impactful areas once you are in, and demonstrating good generalist skills is one of the key things that will make moving around easy.
Kirsten (3y)
Hey 22tom, this depends a lot on which cause area you're most interested in. For example, if you want to work on climate change, several departments could be good options (BEIS, DfT, DEFRA). If you're interested in emerging technologies, DCMS could be a strong contender. Is there a particular cause you're most interested in?