Recent Discussion

FEM’s co-founders look back on their proof of concept campaign and ahead to a pilot campaign

Family Empowerment Media (FEM) is an evidence-driven nonprofit working to improve maternal and child health. We do this by providing clear, compelling, and accurate radio-based communication that targets barriers to the uptake of family planning services. We launched in September 2020, after our co-founders participated in the Charity Entrepreneurship Incubation Program.

We explained in an earlier post (Introducing Family Empowerment Media) why we think family planning is a cause worth prioritizing. Enabling couples to access modern contraceptives may be one of the most cost-effective ways to avert maternal deaths, in addition to providing a host of other benefits. Often, beliefs and attitudes are the largest obstacle to couples’ accessing modern contraceptives. FEM...

As always, it's a real pleasure to read FEM's writeups. I hope to see another report in ~6 months, as I'm figuring out my giving for the year :-)

I'm trying to get to the crux of the differences between the progress studies (PS) and the EA / existential risk (XR) communities. I'd love input from you all on my questions below.

The road trip metaphor

Let me set up a metaphor to frame the issue:

Picture all of humanity in a car, traveling down the highway of progress. Both PS and EA/XR agree that the trip is good, and that as long as we don't crash, faster would be better. But:

  • XR thinks that the car is out of control and that we need a better grip on the steering wheel. We should not accelerate until we can steer better, and maybe we should even slow down in order to avoid crashing.
  • PS thinks we're already slowing down, and so
...

[Likely not a crux]

EA often uses an Importance - Neglectedness - Tractability framework for cause prioritization. I would expect things producing progress to be somewhat less neglected than working on XR, since it is still somewhat possible for those producing progress to capture some of the benefits.

We do indeed see vast amounts of time and money being spent on research and development, in comparison to the amount being spent on XR concerns. Possibly you'd prefer to compare with PS itself, rather than with all R&D? (a) I'm not sure how justified that is; (b) it still feels ... (read more)

AppliedDivinityStudies: Good to hear! In the abstract, yes, I would trade 10,000 years for a 0.001% reduction in XR. In practice, I think the problem with this kind of Pascal's-mugging argument is that it's really hard to know what a 0.001% reduction looks like, and really easy to do some fuzzy Fermi-estimate math. If someone were to say "please give me one billion dollars, I have this really good idea to prevent XR by pursuing Strategy X", they could probably convince me that they have at least a 0.001% chance of succeeding. So my objections to really small probabilities are mostly practical.
jackmalde: You mention that some EAs oppose progress / think that it is bad. I might be wrong, but I think these people only "oppose" progress insofar as they think x-risk reduction from safety-based investment is even better value on the margin. So it's not that they think progress is bad in itself; it's that they think speeding up progress incurs a very large opportunity cost. Bostrom's 2003 paper (https://www.nickbostrom.com/astronomical/waste.html) outlines the general reasoning why many EAs think x-risk reduction is more important than quick technological development.

Also, I think most EAs interested in x-risk reduction would say that they're not really in a Pascal's mugging, as the achievable reduction in the probability of an existential catastrophe isn't astronomically small. This is partly because x-risk reduction is so neglected that there's still a lot of low-hanging fruit. I'm not super certain on either of the points above, but it's the sense I've gotten from the community.

Meta:

  • I'm re-posting this from my Shortform (with minor edits) because someone indicated it might be useful to apply tags to this post.
  • This was originally written as a quick summary of my current (potentially flawed) understanding in an email conversation.
  • I'm not that familiar with the human progress/progress studies communities and would be grateful if people pointed out where my impression of them seems off, as well as for takes on whether I seem correct about what the key points of agreement and disagreement are.
  • I think some important omissions from my summary might include:
    • Potential differences in underlying ethical views
    • More detail on why at least some 'progress studies' proponents have significantly lower estimates for existential risk this century, and potential empirical differences regarding how to best mitigate existential risk.
  • Another caveat is
...

Hi Jason, thank you for sharing your thoughts! I also very much appreciated your saying that the OP sounds accurate to you, since I hadn't been sure how good a job I did of describing the Progress Studies perspective.

I hope to engage more with your other post when I find the time - for now just one point:

  • I don't think I'm assuming a short civilization. I very much want civilization to last millions or billions of years! (I differ from Tyler on this point, I guess)
  • You say “what does it matter if we accelerate progress by a few hundred or even a few thousand year
... (read more)
AppliedDivinityStudies: Thanks for clarifying; the delta thing is a good point. I'm not aware of anyone really trying to estimate "what are the odds that MIRI prevents XR", though there is one SSC post sort of on the topic: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/

I absolutely agree with all the other points. This isn't an exact quote, but from his talk with Tyler Cowen, Nick Beckstead notes: "People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn’t think it’s a serious effort, though it may be good publicity for something that will pay off later... the philosophical side of this seems like ineffective posturing. Tyler wouldn’t necessarily recommend that these people switch to other areas of focus because people's motivation and personal interests are major constraints on getting anywhere. For Tyler, his own interest in these issues is a form of consumption, though one he values highly." https://drive.google.com/file/d/1O--V1REGe1-PNTpJXl3GHsUu_eGvdAKn/view

That's a bit harsh, but this was in 2014. Hopefully Tyler would agree efforts have gotten somewhat more serious since then. I think the median EA/XR person would agree that there is probably a need for the movement to get more hands-on and practical.

Re: safety for something that hasn't been invented: I'm not an expert here, but my understanding is that some of it might be path-dependent. I.e., research agendas hope to result in particular kinds of AI, and safety is not necessarily a feature you can just add on later. But it doesn't sound like there's a deep disagreement here, and in any case I'm not the best person to try to argue this case. Intuitively, one analogy might be: we're building a rocket, humanity is already on it, and the AI Safety people are saying "let's add life support before the rocket takes off".
jasoncrawford: That's interesting, because I think it's much more obvious that we could successfully, say, accelerate GDP growth by 1-2 points per year than it is that we could successfully, say, stop an AI catastrophe. The former is something we have tons of experience with: there's history, data, economic theory… and we can experiment and iterate. The latter is something almost completely in the future, where we don't get any chances to get it wrong and course-correct.

(Again, this is not to say that I'm opposed to AI safety work: I basically think it's a good thing, or at least it can be if pursued intelligently. I just think there's a much greater chance that we look back on it and realize, too late, that we were focused on entirely the wrong things.)

This is a linkpost for a two-part update on the Founders Pledge Climate Fund (Part 1: Maximize impact, Part 2: Future plans) that I’ve recently published with my fellow fund manager Anu Khan. Note that this is written for the Founders Pledge blog audience so the style is quite different from the EA Forum. 

The blogs give some detailed insights into what we have been up to over the past seven months and what we are looking at going forward. We also announce a match fund of over USD 1 million which is live until mid-July (or until it is filled, if earlier).

For EAs, I believe there are three key take-aways:

  1. We believe that the Founders Pledge Climate Fund outperforms giving to the best individual climate charity, what most EAs
...

Hey Johannes, thanks for posting this! Always nice to see what FP are working on climate-wise. I've got a couple of questions:

 

  1. Out of interest, where do you get your 2x multiplier figure from?
  2. Where can we stay up to date on the work of TerraPraxis? Their website doesn't seem too up-to-date and I'm really curious to see how they scale up/what they deliver in the coming months and years.
  3. In terms of your future plans re. investing globally - have you got any promising orgs in developing countries that FP is considering funding?

I sent a two-question survey to ~117 people working on long-term AI risk, asking about the level of existential risk from "humanity not doing enough technical AI safety research" and from "AI systems not doing/optimizing what the people deploying them wanted/intended".

44 people responded (~38% response rate). In all cases, these represent the views of specific individuals, not an official view of any organization. Since some people's views may have made them more/less likely to respond, I suggest caution in drawing strong conclusions from the results below. Another reason for caution is that respondents added a lot of caveats to their responses (see the anonymized spreadsheet), which the aggregate numbers don't capture.

I don’t plan to do any analysis on this data, just share it; anyone who wants to analyze...

Linch: Sure, but how large is large? You said in a different comment that losing 10% of the future is too high / an existential catastrophe, which I think is already debatable (I can imagine some longtermists thinking that getting 90% of the possible value is basically an existential win, and some of the survey respondents thinking that "drastic reduction" actually means more like 30%+ or 50%+). I think you're implicitly agreeing with my comment that losing 0.1% of the future is acceptable, but I'm unsure if this is endorsed.

If you were to redo the survey for people like me, I'd have preferred a phrasing that says more like [...]. Or alternatively, instead of asking for probabilities:

> What's the expected fraction of the future's value that would be lost?

Though since a) nobody else raised the same issue I did, and b) I'm not a technical AI safety or strategy researcher, and thus outside of your target audience, this might all be a moot point.
RobBensinger: What's the definition of an "existential win"? I agree that this would be a win, and would involve us beating some existential risks that currently loom large. But I also think this would be an existential catastrophe. So if "win" means "zero x-catastrophes", I wouldn't call this a win.

Bostrom's original definition of existential risk talked about things that "drastically curtail [the] potential" of "Earth-originating intelligent life". Under that phrasing, I think losing 10% of our total potential qualifies.

?!? What does "acceptable" mean? Obviously losing 0.1% of the future's value is very bad, and should be avoided if possible!!! But I'd be fine with saying that this isn't quite an existential risk, by Bostrom's original phrasing.

Agreed, I'd probably have gone with a phrasing like that.

?!? What does "acceptable" mean? Obviously losing 0.1% of the future's value is very bad, and should be avoided if possible!!! But I'd be fine with saying that this isn't quite an existential risk, by Bostrom's original phrasing.


So I re-skimmed the paper, and FWIW, by my reading Bostrom's original phrasing doesn't seem obviously sensitive to two orders of magnitude. "Drastically curtail" feels more like poetic language than a clear boundary.

He does have some lower bounds: 

> However, the true lesson is a different one. If what we are c... (read more)

Cross-posted from the Effective ESG blog.

Thoughtful trade-offs: models for going beyond profit maximisation

There is lots of talk about whether incorporating ESG factors could help improve profitability or returns. 

This post is not about whether or not this is true.

Instead, I claim that this is missing the point.

  • I’m sure there will be times when ESG thinking can lead to improved returns/profitability.
  • And there will be times when ESG factors are profitability-neutral.
  • And I expect there are also times when there’s a tension or trade-off between ESG factors and profitability.

For examples of trade-offs, I’m sure we could come up with strategies for e.g. a fossil fuel company, a tobacco company or an arms manufacturer which will be better from an ESG perspective but suboptimal from a profitability perspective.

It’s all about the trade-offs.

To...

Thanks for this, good question!

I agree with your point that investors have some blind spots, in particular that some areas of finance are not good at incorporating long term considerations.

So I think you're right, the ESG concept probably could achieve some impact by helping address that sort of blind spot.

I probably should have said something more like: "To judge whether I, as someone working in ESG investing, am having a material impact, we need to see whether I'm actually having an influence in scenarios where there is a tension/trade-off." This is because ESG-related work is already working to address that blind spot.

Sanjay: Sorry I didn't spot your comment earlier. Yes, more than happy for this to be shared more widely. Feel free to use this link if you wish: https://effectiveesg.com/2021/05/24/esg-investing-needs-thoughtful-trade-offs/

Takeaway

I think using weighted pros / cons (or more generally, arguments for / against) would be a useful norm to promote. For a summary of the reasons why, see the Example section.

Motivation

Though maybe not an explicit norm, many people in EA endorse the idea of putting probabilities to statements in order to clarify one's credence in them. Doing so allows people to be much more precise and avoid the ambiguity of phrases like "almost certain" or "significant chance." It's also helpful for discussion as it can make it clearer how and to what degree people agree or disagree. It seems that many EA community members generally value "putting numbers to things." As an extension of this, I think it would be helpful for more people to weight their pros / cons or arguments for /...

Sorry for the slow reply. I don't have a link to any examples I'm afraid but I just mean something like this:

Prior that we should put weights on arguments and considerations: 60%

Pros:

  • Clarifies the writer's perspective on each of the considerations (65%)
  • Allows for better discussion for reasons x, y, z... (75%)

Cons:

  • Takes extra time (70%)

This is just an example I wrote down quickly, not actual views. But the idea is to state explicit probabilities so that we can see how they change with each consideration.

To see how you can find the Bayes' factors, note that if ... (read more)
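As a minimal sketch of the arithmetic involved (assuming the percentages above are read as credences in the overall claim before and after weighing a single consideration — one possible interpretation, not necessarily the commenter's exact scheme), the Bayes' factor implied by a consideration is just the ratio of posterior odds to prior odds:

```python
def bayes_factor(prior: float, posterior: float) -> float:
    """Implied Bayes' factor of one consideration:
    the ratio of posterior odds to prior odds."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

# Using the made-up numbers above: a 60% prior moving to 65%
# after the first pro implies a Bayes' factor of about 1.24.
print(round(bayes_factor(0.60, 0.65), 2))
```

Stating credences this way lets readers multiply the implied odds ratios of the individual pros and cons to see how each one shifts the overall conclusion.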

I write The Roots of Progress, a blog about the history of technology and the philosophy of progress. Some of my top posts:

I am also the creator of Progress Studies for Young Scholars, an online learning program for high schoolers; and a part-time adviser and technical consultant to Our World in Data, an Oxford-based non-profit for research and data on global development.

My work is funded by grants from Emergent Ventures, Open Philanthropy, the Long-Term Future Fund, and Jaan Tallinn (via the Survival and Flourishing Fund).

Previously, I spent 18 years as a software engineer, engineering manager, and startup founder.

Ask me anything!

UPDATE: I'm pausing for now, but I'll come back and try to get to everyone. Thanks for all the questions!

Followup: I did write that essay ~5 months ago, but I got some feedback on it that made me think I needed to rethink it more carefully, and then other deadlines took over and I lost momentum.

I was recently nudged on this again, and I've written up some questions here that would help me get to clarity on this issue: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies

See the post introducing this sequence for context, caveats, credits, and links to prior discussion relevant to this sequence as a whole. This post doesn’t necessarily represent the views of my employers.

Summary

In a previous post, I highlighted some observations that I think collectively demonstrate that the current processes by which new EA-aligned research and researchers are “produced” are at least somewhat insufficient, inefficient, and prone to error. In this post, I’ll briefly discuss 19 interventions that might improve that situation. I discuss them in very roughly descending order of how important, tractable, and neglected I think each intervention is, solely from the perspective of improving the EA-aligned research pipeline.[1] The interventions are:

  1. Creating, scaling, and/or improving[2] EA-aligned research orgs
  2. Creating, scaling, and/or improving EA-aligned research training programs (e.g. certain
...

Rough notes on another idea, following a call I just had:

  • Setting up something in between a research training program and a system for collaborations in high schools, universities, or local EA groups
    • Less vetting and probably lower average current knowledge, aptitude, etc. than research training program participants undergo/have
    • But this reduces the costs for vetting
    • And this opens this up to an additional pool of people (who may not yet be able to pass that vetting)
    • Plus, this could allow more people to test their fit for and get better at mentorship, by mento
... (read more)
Ben_Snodin: One (maybe?) low-effort thing that could be nice would be saying "these are my top 5" or "these are listed in order of how promising I think they are" or something (you may well have done that already and I missed it).
MichaelA: Ah, yes, this is probably useful and definitely low-effort (I've now done it in 1 minute, due to your comment). The list was actually already in order of how promising I think the interventions are, and I mentioned that in footnote 1. But I shouldn't expect people to read footnotes, and your feedback, plus other feedback I got on other posts, suggests that readers find that sort of thing useful enough that it should be said in the main text. So I've now moved that info to the main text (in the summary, before I list the 19 interventions).

I think the main reason I originally put it in a footnote is that it's hard to know what my ranking really means (since each intervention could be done in many different ways, which would vary in their value) or how much to trust it. But my ranking is still probably better than the ranking a reader would form, or than no ranking at all, given that I've spent more time thinking about this. Going forward, I'll be more inclined to just clearly tell readers things like my ranking, and less focused on avoiding "anchoring" them. (So thanks again for the feedback!)
David_Moss: There was some discussion of the original acquisition here: https://forum.effectivealtruism.org/posts/2yERrmo6bzC3CAy2v/charity-navigator-acquired-impactmatters-and-is-starting-to

Historically, Charity Navigator has been extremely hostile to effective altruism (https://ssir.org/articles/entry/the_elitist_philanthropy_of_so_called_effective_altruism#), as you probably know, so perhaps this isn't surprising.

Thank you, I had not seen Luke Freeman (@givingwhatwecan)'s earlier post.

That 2013 opinion piece/hit job is shocking. But that was 9 years ago or so.

I doubt Charity Navigator would have acquired ImpactMatters just to bury it; there might be some room for positive suasion here.