Recent Discussion

The Progress Open Thread is a place to share good news, big or small.

See this post for an explanation of why we have these threads.

What goes in a progress thread comment? 

Think of this as an org update thread for individuals. You might talk about...

  • Securing a new job, internship, grant, or scholarship
  • Starting or making progress on a personal project
  • Helping someone else get involved in EA
  • Making a donation you feel really excited about
  • Taking the Giving What We Can pledge or signing up for Try Giving
  • Writing something you liked outside the Forum (whether it's a paper you've submitted to a journal or just an insightful Facebook comment)
  • Any of the above happening to someone else, if you think they'd be happy for you to share the news
  • Other EA-related progress in the world (disease eradication, cage-free laws, cool new research papers, etc.)

First of all, it's great that you are considering this (or are already most of the way there?)!

Here is a discussion on the same topic that might interest you. If you have Facebook and would like to join the Parents in Effective Altruism group, there was also a discussion on this topic here.

Good luck!

Edit: I also really enjoyed the story of this couple, who adopted over 20 children. They were featured in Strangers Drowning, a book about highly altruistically motivated people that also includes stories about EAs.

Dale: I realize this is a sensitive topic, but as it sounds like you have not yet firmly committed, I will go ahead and encourage you to strongly consider not adopting an older child, for several reasons.

Firstly, people significantly over-estimate their ability to change the outcomes for adopted children. This has been well studied with twin adoption studies, which generally find that adopted children's outcomes are closely linked to those of their biological parents - and not very linked to their adoptive parents. A good (if slightly dated now) introduction to this is Caplan's Selfish Reasons to Have More Kids [https://www.amazon.com/Selfish-Reasons-Have-More-Kids/dp/0465028616], reviewed and excerpted here [https://www.npr.org/2011/04/22/135612560/selfish-reasons-for-parents-to-enjoy-having-kids].

Secondly, by adopting an older child you put yourself in a much worse position. To the (limited) extent that good parenting can improve things, at 15 years a lot of that opportunity has been missed. Worse, you are suffering from severe adverse selection: by adopting a much older child, you are choosing a child that has repeatedly not been adopted by other people. This suggests that, even relative to the general class of kids up for adoption, this child is likely to have significant behavioural issues.

Finally, it is a well-established fact that one of the biggest threats to children comes from their mothers getting new boyfriends who are not genetically related to the child; this results in something like a 10x increase in child abuse risk vs traditional families. I have never seen similar statistics around older adopted children, but I would consider whether they might present a similar risk to your son given the 12-year age gap.

As a result, I would encourage you to consider having another biological child instead. This also has a potentially much bigger upside: instead of (hopefully) improving someone's life a bit, you give an entire life to someone who would not have other...

Summary

  • The Animal Welfare Fund, the Long-Term Future Fund, and the EA Infrastructure Fund (formerly the EA Meta Fund) are calling for applications.
  • Applying is fast and easy – it typically takes less than a few hours. If you are unsure whether to apply, simply give it a try.
  • The Long-Term Future Fund and EA Infrastructure Fund now support anonymized grants: if you prefer not having your name listed in the public payout report, we are still interested in funding you.
  • If you have a project you think will improve the world, and it seems like a good fit for one of our funds, we encourage you to apply by 7 March (11:59pm PST). Apply here. We’d be excited to hear from you!

Recent updates

  • The Long-Term Future Fund and EA Infrastructure Fund now officially support anonymized grants. To be transparent towards donors and the effective altruism community, we generally prefer to publish a report about
...

After looking more into this, we've decided not to evaluate applications for Community Building Grants during this grant application cycle. This is because we think CEA has a comparative advantage here due to their existing know-how, and they're still taking some exceptional grant applications, so some of the most valuable work will still be funded. It's currently unclear when CBG applications will reopen, but CEA is thinking carefully about this question and I'll be coordinating with them.

That said, we're interested in receiving applications from EA group... (read more)

Hilary Greaves laid out the problem of "moral cluelessness" in her paper "Cluelessness": http://users.ox.ac.uk/~mert2255/papers/cluelessness.pdf

Primer on cluelessness

There are some resources on this problem below, taken from the Oxford EA Fellowship materials:

(Edit: one text deprecated and redacted)

Hilary Greaves on Cluelessness, 80000 Hours podcast (25 min) https://80000hours.org/podcast/episodes/hilary-greaves-global-priorities-institute/

If you value future people, why do you consider short-term effects? (20 min) https://forum.effectivealtruism.org/posts/ajZ8AxhEtny7Hhbv7/if-you-value-future-people-why-do-you-consider-near-term

Simplifying cluelessness (30 min) https://philiptrammell.com/static/simplifying_cluelessness.pdf

Finally, there's this half-hour talk by Greaves presenting her ideas on cluelessness:

https://www.youtube.com/watch?v=fySZIYi2goY

The complex cluelessness problem

Greaves has the following worry about complex cluelessness:

The cases in question have the following structure:

For some pair of actions of interest A1, A2, 

 - (CC1) We have some reasons to think that the unforeseeable consequences of A1 would systematically tend to be substantially better than those of A2; 

 - (CC2) We have some reasons to think that the unforeseeable consequences of A2 would systematically tend to be substantially better than those of A1; 

 - (CC3) It is unclear

...

Her choice to use multiple, independent probability functions itself seems arbitrary to me, although I've done more reading since posting the above and have started to understand why there is a predicament.

Instead of multiple independent probability functions, you could start with a probability distribution for each of the quantities you are uncertain about, and then calculate the joint distribution by combining all of them. That gives you a single probability density function on which you can base your decision.

If you start... (read more)
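To make that concrete, here is a minimal Python sketch (with entirely made-up distributions and numbers, purely for illustration) of combining per-quantity distributions into a single distribution over the thing you care about, using Monte Carlo sampling rather than an analytic joint density:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo samples

# Hypothetical uncertain quantities, each with its own distribution
# (all numbers are made up for illustration only).
direct_benefit = rng.normal(loc=100, scale=20, size=N)       # short-run effect
long_run_effect = rng.normal(loc=0.0, scale=50, size=N)      # sign unclear
growth_multiplier = rng.lognormal(mean=0.0, sigma=0.5, size=N)

# Combine the per-quantity distributions into one distribution over
# the total value of the action.
total_value = (direct_benefit + long_run_effect) * growth_multiplier

# A single density to base the decision on:
print("Expected value:", total_value.mean())
print("P(net negative):", (total_value < 0).mean())
```

Of course, the sceptical reply is that the input distributions themselves were picked somewhat arbitrarily, which, as far as I can tell, is the predicament mentioned above.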

Why are reading groups and journal clubs bad so often?

I think there are two reasons: boring readings and low-energy discussions. This post is about how to avoid those pitfalls.

The problem

I have participated in (and organized) some really bad reading groups. This is a shame, because I love a good reading group. They cause me to read more things and read them more carefully. A great group discussion will give me way more than I’d get just by taking notes on a reading.

This is what a bad reading group looks like: six people gather around a table. Two kind of skimmed the reading, and two didn't read it at all. No one knows quite what to talk about. Someone ventures a "So, what surprised you about the paper?" Another person flips through their notes, scanning for a possible answer. Most people stay quiet. No one leaves the table feeling excited about...

I've never taken part in a reading group (outside of seminars and the like in undergrad), and have no plans to do so, and yet I really enjoyed reading this piece! Thoughtfully and clearly laid out, with novel ideas I hadn't come across before.  I'll be sure to pass it on to friends who take part.

I'm glad Aaron nudged you to write this and that he included it in his digest email!

Presumably this information is public but spread out.

If you know how many hits an EA website got last year, please post it here.

Even better, a link to a public analytics site.

You can start by looking at the Alexa engagement ranking - lower rank is (superlinearly) better:

nickbostrom.com: 367k

effectivealtruism.org: 187k

givewell.org: 140k

80000hours.org: 122k

slatestarcodex.com: 91k

lesswrong.com: 46k

etc.

Answer by Nathan Young: I guess that this site will be the biggest by at least 10x.
finm: By 'this site' do you mean the forum or all the other resources on effectivealtruism.org [https://www.effectivealtruism.org/]? In either case, if the 80,000 Hours [https://80000hours.org/] site counts as an EA site then I highly doubt that! My guess is that the answer is going to depend on how wide the catchment is for 'EA site', but most construals are going to put 80K [https://80000hours.org/] right out in front. Maybe GiveWell [https://www.givewell.org/] is up there, plus the GWWC site [givingwhatwecan.org/] and The Life You Can Save [https://www.thelifeyoucansave.org]. I also think that Nick Bostrom's personal site [https://nickbostrom.com/] gets a surprising number of hits. I would guess the forum is middling to top among these sites? Very interested in being proved wrong about that!

Obviously all these sites have their own numbers, but I haven't seen them pooled together in some publicly available resource (nor am I sure that would be useful). I do know of some exact numbers but don't think it would be sensible to share them without permission. Unfortunately, in my experience it's also not totally straightforward to glean those stats from the outside, although search engine rankings etc. are a good proxy.

Edited by Jacy Reese Anthis. Many thanks to Ali Ladak, Tobias Baumann, Jack Malde, James Faville, Sophie Barton, Matt Allcock, and the staff at PETRL for reviewing and providing feedback.

SUMMARY

Artificial sentient beings could be created in vast numbers in the future. While their future could be bright, there are reasons to be concerned about widespread suffering among such entities. There is increasing interest in the moral consideration of artificial entities among academics, policy-makers, and activists, which suggests that we could have substantial leverage on the trajectory of research, discussion, and regulation if we act now. Research may help us assess which actions will most cost-effectively make progress. Tentatively, we argue that outreach on this topic should first focus on researchers and other stakeholders who have adjacent interests.

INTRODUCTION

Imagine that you develop a brain disease like Alzheimer’s, but that a cutting-edge treatment has been developed. Doctors replace the damaged neurons in your brain with computer chips that are...

Cullen_OKeefe: This post is a very valuable resource—one of the best current compilations on the issue I've seen so far.

Thanks very much!

[We the authors decided to put this paper up on the EA Forum as we believe it is highly relevant to global catastrophic risk and global priorities research. This is a 'preprint' of the accepted-for-publication version.

The journal Futures has provided 50 days' free access to the final, published version of the article. Anyone clicking on this link before March 24, 2021 will be taken directly to the article on Futures, which they are welcome to read or download. No sign up, registration or fees are required.  After March 24th, this is a permalink to the paper.]

Assessing Climate Change’s Contribution to Global Catastrophic Risk

Simon Beard, Lauren Holt, Shahar Avin, Asaf Tzachor, Luke Kemp, Phil Torres, and Haydn Belfield

 

Many have claimed that climate change is an imminent threat to humanity, but there is no way to verify such claims. This is concerning, especially given the prominence of some of these claims and the fact that...

Sorry it's taking a while to get back to you!

In the meantime, you might be interested in this from our Catherine Richards: https://www.cser.ac.uk/resources/reframing-threat-global-warming/ 

In a recent answer to Issa’s Why "cause area" as the unit of analysis?, Michael Plant presents his take on cause prioritization and points to his thesis. As part of my cause prioritization analysis work with QURI I read the relevant parts of his thesis and found them interesting and novel, so I want to bring more attention to it.

In his Ph.D. thesis, Michael Plant (the founder of the Happier Lives Institute) reviews the foundations of EA, presents constructive criticism of the importance of saving lives, sheds more light on how we can effectively make more people happier, describes weaknesses in current approaches to cause prioritization, and suggests a practical refinement - "Cause Mapping". In this post, I summarize the key points on cause prioritization from chapters 5 and 6 of Michael's thesis.

Main points

  1. Cause areas can be thought of as "problems", while interventions can be thought of as corresponding "solutions".
  2. Cause Prioritization is an
...

Introduction

In sketch, the challenge of consequentialist cluelessness is that the consequences of our actions ramify far into the future (and thus - at least at first glance - far beyond our epistemic access). Although we know little about them, we know enough to believe it unlikely these unknown consequences will neatly 'cancel out' to neutrality - indeed, they are likely to prove more significant than those we can assess. How, then, can we judge some actions to be better than others?

For example (which we shall return to), even if we can confidently predict the short-run impact of donations to the Against Malaria Foundation are better than donations to Make-a-Wish, the final consequences depend on a host of recondite matters (e.g. Does reducing child mortality increase or decrease population size? What effect does a larger population have on (among others) economic growth, scientific output, social stability? What effect do these have on...

Belatedly:

I read the stakes here differently to you. I don't think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to 'everything which isn't longtermism'. At least, that isn't my interest, and I think the literature has focused on AMF etc. more as a salient example to explore the concepts, rather than an important subject to apply them to.

The AMF discussions around cluelessness in the OP are intended as a toy example - if you like, deliberating purely on "is it good or bad to give to AMF versus this particu... (read more)

MichaelStJules: My impression is that assigning precise credences may often just assume away the issue without addressing it, since the assignment can definitely seem more or less arbitrary. The larger your range would be if you entertained multiple distributions, the more arbitrary just picking one is (although using this to argue for multiple distributions seems circular). Or, just compare your choice of precise distribution with your peers', maybe those with similar information specifically; the more variance or the wider the range, the more arbitrary just picking one, and the more what you do depends on the particulars of your priors, which you never chose for rational reasons over others. Maybe this arbitrariness doesn't actually matter, but I think that deserves a separate argument before we settle forever on a decision procedure that is not at all sensitive to it. (Of course, we can settle tentatively on one, and be willing to adopt one that is sensitive to it later if it seems better.)
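For what it's worth, here is a minimal Python sketch (hypothetical numbers throughout) of the point about entertaining multiple distributions: under a set of defensible priors over the unforeseeable long-run effect, the expected value of an action can come out positive under some priors and negative under others, so just picking one prior settles the verdict arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def expected_value(long_run_mean):
    # Expected value of the action under one candidate prior for the
    # unforeseeable long-run effect (short-run benefit fixed at +10;
    # all numbers are hypothetical).
    short_run = 10.0
    long_run = rng.normal(loc=long_run_mean, scale=30, size=N)
    return (short_run + long_run).mean()

# A set of priors that all seem defensible given the same evidence:
candidate_means = [-20, -5, 0, 5, 20]
evs = [expected_value(m) for m in candidate_means]

print("Range of expected values:", round(min(evs), 1), "to", round(max(evs), 1))
# If the range straddles zero, the sign of the verdict depends entirely on
# which prior one (arbitrarily) picks - the CC1-CC3 structure quoted above.
```

The wider that range is, across one's peers or across the priors one finds defensible, the more arbitrary settling on a single precise credence looks.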

We’re excited to announce the launch of Probably Good, a new organization that provides career guidance intended to help people do as much good as possible.

Context

For a while, we have felt that there was a need for a more generalist careers organization than 80,000 Hours — one which is more agnostic regarding different cause areas and might provide a different entry point into the community to people who aren’t a good fit for 80K’s priority areas. Following 80,000 Hours’ post about what they view as gaps in the careers space, we contacted them about how a new organization could effectively fill some of those gaps.

After a few months of planning, asking questions, writing content, and interviewing experts, we’re almost ready to go live (we aim to start putting our content online in 1-2 months) and would love to hear more from the community at large.

How You Can Help

The most important...

Revisiting this just to say that, for what it's worth, the Danish beer company Carlsberg has been very successful with its slogan of being "Probably the Best Beer in the World."