Family Empowerment Media (FEM) is an evidence-driven nonprofit working to improve maternal and child health. We do this by providing clear, compelling, and accurate radio-based communication that targets barriers to the uptake of family planning services. We launched in September 2020, after our co-founders participated in the Charity Entrepreneurship Incubation Program.
We explained in an earlier post (Introducing Family Empowerment Media) why we think family planning is a cause worth prioritizing. Enabling couples to access modern contraceptives may be one of the most cost-effective ways to avert maternal deaths, in addition to providing a host of other benefits. Often, beliefs and attitudes are the largest obstacle to couples’ accessing modern contraceptives. FEM...
I'm trying to get to the crux of the differences between the progress studies (PS) and the EA / existential risk (XR) communities. I'd love input from you all on my questions below.
Let me set up a metaphor to frame the issue:
Picture all of humanity in a car, traveling down the highway of progress. Both PS and EA/XR agree that the trip is good, and that as long as we don't crash, faster would be better. But:
[Likely not a crux]
EA often uses an Importance - Neglectedness - Tractability framework for cause prioritization. I would expect work that produces progress to be somewhat less neglected than work on XR, since it is still somewhat possible to capture some of the benefits of progress.
We do indeed see vast amounts of time and money being spent on research and development, compared to the amount being spent on XR concerns. Possibly you'd prefer to compare with PS itself, rather than with all of R&D? (a) I'm not sure how justified that is; (b) it still feels ...
Meta:
Hi Jason, thank you for sharing your thoughts! I also much appreciated your saying that the OP sounds accurate to you, since I hadn't been sure how good a job I did of describing the Progress Studies perspective.
I hope to engage more with your other post when I find the time - for now just one point:
...
- I don't think I'm assuming a short civilization. I very much want civilization to last millions or billions of years! (I differ from Tyler on this point, I guess)
- You say “what does it matter if we accelerate progress by a few hundred or even a few thousand years”...
This is a linkpost for a two-part update on the Founders Pledge Climate Fund (Part 1: Maximize impact, Part 2: Future plans) that I’ve recently published with my fellow fund manager Anu Khan. Note that this is written for the Founders Pledge blog audience so the style is quite different from the EA Forum.
The blogs give some detailed insights into what we have been up to over the past seven months and what we are looking at going forward. We also announce a match fund of over USD 1 million which is live until mid-July (or until it is filled, if earlier).
For EAs, I believe there are three key takeaways:
Hey Johannes, thanks for posting this! Always nice to see what FP are working on climate-wise. I've got a couple of questions:
I sent a two-question survey to ~117 people working on long-term AI risk, asking about the level of existential risk from "humanity not doing enough technical AI safety research" and from "AI systems not doing/optimizing what the people deploying them wanted/intended".
44 people responded (~38% response rate). In all cases, these represent the views of specific individuals, not an official view of any organization. Since some people's views may have made them more/less likely to respond, I suggest caution in drawing strong conclusions from the results below. Another reason for caution is that respondents added a lot of caveats to their responses (see the anonymized spreadsheet), which the aggregate numbers don't capture.
I don’t plan to do any analysis on this data, just share it; anyone who wants to analyze...
?!? What does "acceptable" mean? Obviously losing 0.1% of the future's value is very bad, and should be avoided if possible!!! But I'd be fine with saying that this isn't quite an existential risk, by Bostrom's original phrasing.
So I reskimmed the paper, and FWIW, Bostrom's original phrasing doesn't seem obviously sensitive to 2 orders of magnitude by my reading of it. "drastically curtail" feels more like poetic language than setting up clear boundaries.
He does have some lower bounds:
> However, the true lesson is a different one. If what we are c...
Cross-posted from the Effective ESG blog.
Thoughtful trade-offs: models for going beyond profit maximisation
There is lots of talk about whether incorporating ESG factors could help improve profitability or returns.
This post is not about whether or not this is true.
Instead, I claim that this is missing the point.
For examples of trade-offs, I’m sure we could come up with strategies for e.g. a fossil fuel company, a tobacco company or an arms manufacturer which will be better from an ESG perspective but suboptimal from a profitability perspective.
It’s all about the trade-offs.
To...
Thanks for this, good question!
I agree with your point that investors have some blind spots, in particular that some areas of finance are not good at incorporating long term considerations.
So I think you're right, the ESG concept probably could achieve some impact by helping address that sort of blind spot.
I probably should have said something more like "To judge whether I, as someone working in ESG investing, am having a material impact, we need to see whether I'm actually having an influence in scenarios where there is a tension/trade-off". This is because ESG-related work is already working to address that blind spot.
I think using weighted pros / cons (or more generally, arguments for / against) would be a useful norm to promote. For a summary of the reasons why, see the Example section.
Though it's maybe not an explicit norm, many people in EA endorse the idea of putting probabilities on statements in order to clarify one's credence in them. Doing so allows people to be much more precise and avoid the ambiguity of phrases like "almost certain" or "significant chance." It's also helpful for discussion, as it can make clearer how and to what degree people agree or disagree. It seems that many EA community members generally value "putting numbers to things." As an extension of this, I think it would be helpful for more people to weight their pros / cons or arguments for /...
Sorry for the slow reply. I don't have a link to any examples I'm afraid but I just mean something like this:
Prior that we should put weights on arguments and considerations: 60%
Pros:
- Clarifies the writer's perspective on each of the considerations (65%)
- Allows for better discussion for reasons x, y, z... (75%)
Cons:
- Takes extra time (70%)
This is just an example I wrote down quickly, not actual views. But the idea is to state explicit probabilities so that we can see how they change with each consideration.
To see how you can find the Bayes' factors, note that if ...
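To make the Bayes'-factor idea concrete, here is a minimal sketch (my own illustration, not from the original comment) of how explicit running credences imply a Bayes' factor for each consideration: convert each credence to odds, and the factor for a consideration is the ratio of the odds after it to the odds before it. The numbers reuse the hypothetical example above (prior 60%, then 65% and 75% after the two pros, 70% after the con).

```python
def odds(p):
    """Convert a probability to odds in favor."""
    return p / (1 - p)

def bayes_factor(p_before, p_after):
    """Bayes' factor implied by moving from one credence to the next:
    ratio of posterior odds to prior odds."""
    return odds(p_after) / odds(p_before)

# Running credences from the (hypothetical) example:
# prior 60%; 65% after the first pro; 75% after the second pro;
# 70% after the con.
credences = [0.60, 0.65, 0.75, 0.70]

# One implied Bayes' factor per consideration.
factors = [bayes_factor(a, b) for a, b in zip(credences, credences[1:])]
print([round(f, 2) for f in factors])
```

A sanity check on the construction: multiplying the prior odds by all the factors recovers the final odds, so the per-consideration factors decompose the overall update.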
I write The Roots of Progress, a blog about the history of technology and the philosophy of progress. Some of my top posts:
I am also the creator of Progress Studies for Young Scholars, an online learning program for high schoolers; and a part-time adviser and technical consultant to Our World in Data, an Oxford-based non-profit for research and data on global development.
My work is funded by grants from Emergent Ventures, Open Philanthropy, the Long-Term Future Fund, and Jaan Tallinn (via the Survival and Flourishing Fund).
Previously, I spent 18 years as a software engineer, engineering manager, and startup founder.
Ask me anything!
UPDATE: I'm pausing for now but will come back and try to get to everyone, thanks for all the questions!
Followup: I did write that essay ~5 months ago, but I got some feedback on it that made me think I needed to rethink it more carefully, and then other deadlines took over and I lost momentum.
I was recently nudged on this again, and I've written up some questions here that would help me get to clarity on this issue: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies
See the post introducing this sequence for context, caveats, credits, and links to prior discussion relevant to this sequence as a whole. This post doesn’t necessarily represent the views of my employers.
In a previous post, I highlighted some observations that I think collectively demonstrate that the current processes by which new EA-aligned research and researchers are “produced” are at least somewhat insufficient, inefficient, and prone to error. In this post, I’ll briefly discuss 19 interventions that might improve that situation. I discuss them in very roughly descending order of how important, tractable, and neglected I think each intervention is, solely from the perspective of improving the EA-aligned research pipeline.[1] The interventions are:
Rough notes on another idea, following a call I just had:
Thank you, I had not seen Luke Freeman @givingwhatwecan's earlier post
That 2013 opinion piece/hit job is shocking. But that was 9 years ago or so.
I doubt CN would have acquired IM just to bury it; there might be some room for positive suasion here.
As always, it's a real pleasure to read FEM's writeups. I hope to see another report in ~6 months as I'm figuring out my giving for the year :-)