
I invite you to ask anything you’re wondering about that’s remotely related to effective altruism. There’s no such thing as a question too basic.

Try to ask your first batch of questions by Monday, October 17 (so that people who want to answer questions can know to make some time around then).

Everyone is encouraged to answer (see more on this below). There’s a small prize for questions and answers. [Edit: prize-winning questions and answers are announced here.]

This is a test thread — we might try variations on it later.[1]

How to ask questions

Ask anything you’re wondering about that has anything to do with effective altruism.

More guidelines:

  1. Try to post each question as a separate "Answer"-style comment on the post.
  2. There’s no such thing as a question too basic (or too niche!).
  3. Follow the Forum norms.[2]

I encourage everyone to view asking questions that you think might be “too basic” as a public service; if you’re wondering about something, others might, too.

Example questions

  • I’m confused about Bayesianism; does anyone have a good explainer?
  • Is everyone in EA a utilitarian?
  • Why would we care about neglectedness?
  • Why do people work on farmed animal welfare specifically vs just working on animal welfare?
  • Is EA an organization?
  • How do people justify working on things that will happen in the future when there’s suffering happening today?
  • Why do people think that forecasting or prediction markets work? (Or, do they?)

How to answer questions

Anyone can answer questions, and there can (and should) be multiple answers to many of the questions. I encourage you to point people to relevant resources — you don’t have to write everything from scratch!

Norms and guides:

  • Be generous and welcoming (no patronizing).
  • Honestly share your uncertainty about your answer.
  • Feel free to give partial answers or point people to relevant resources if you can’t or don’t have time to give a full answer.
  • Don’t represent your answer as an official answer on behalf of effective altruism.
  • Keep to the Forum norms.

You should feel free and welcome to vote on the answers (upvote the ones you like!). You can also give answers to questions that already have an answer, or reply to existing answers, especially if you disagree.

The (small) prize

This isn’t a competition, but just to help kick-start this thing (and to celebrate excellent discussion at the end), the Forum team will award $100 each to my 5 favorite questions, and $100 each to my 5 favorite answers (questions posted before Monday, October 17, answers posted before October 24).

I’ll post a comment on this post with the results, and edit the post itself to list the winners. [Edit: prize-winning questions and answers are announced here.]


Maybe don’t ask all of these, as they’re not quite related to EA, but this is sort of what I want the comment section of this post to be like. Source.
  1. ^

     Your feedback is very welcome! We’re considering trying out themed versions in the future; e.g. “Ask anything about cause prioritization” or “Ask anything about AI safety.”

    We’re hoping this thread will help get clarity and good answers, counter some impostor syndrome that exists in the community (see 1 and 2), potentially rediscover some good resources, and generally make us collectively more willing to ask about things that confuse us.

  2. ^

     If I think something is rude or otherwise norm-breaking, I’ll delete it.


78 Answers

Does anyone know why the Gates Foundation doesn't fill the GiveWell top charities' funding gaps?

One recent paper suggests that an estimated additional $200–328 billion per year is required for various primary care and public health interventions from 2020 to 2030 in 67 low-income and middle-income countries, and that this would save 60 million lives. But if you look at just the amount needed in low-income countries for health care ($396B) and divide it by the 16.2 million deaths averted, it suggests an average cost-effectiveness of ~$25k per death averted.

Other global health interventions can be similarly or more effective: a 2014 Lancet article estimates that, in low-income countries, it costs $4,205 to avert a death through extra spending on health[22]. Another analysis suggests that this trend will continue and that from 2015–2030 additional spending in low-income countries will avert a death for $4,000–11,000[23].

For comparison, in high-income countries, governments spend $6.4 million to prevent a death (a measure called “value of a statistical life”)[24]. This is not surprising given that the poorest countries spend less than $100 per person per year on health on average, while high-income countries spend almost $10,000 per person per year[25].
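The back-of-the-envelope arithmetic behind these figures can be sketched in a few lines. This is just a check of the quoted numbers; all inputs are the cited estimates, not independently verified:

```python
# Sketch of the cost-effectiveness arithmetic quoted above.
# Inputs are the cited estimates, not independently verified figures.

def cost_per_death_averted(total_spending_usd: float, deaths_averted: float) -> float:
    """Average cost to avert one death given total spending."""
    return total_spending_usd / deaths_averted

# Low-income-country health care: $396B needed, 16.2 million deaths averted.
lic_cost = cost_per_death_averted(396e9, 16.2e6)  # ~$24,400 per death

# Ratio of the high-income "value of a statistical life" ($6.4M) to the
# 2014 Lancet estimate of $4,205 per death averted in low-income countries.
vsl_ratio = 6.4e6 / 4205  # ~1,500x
```

The ~$24.4k result matches the "~$25k per death averted" figure in the text, and the roughly 1,500x gap is what makes the comparison with high-income-country spending so stark.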

Giv... (read more)

Henry Howard, 1y

Could you post this as a new forum post rather than a link to a Google doc? I think it's a question that gets asked a lot, and it would be good to have an easy-to-read post to link to.

EdoArad, 2y
Agree! Hauke, let me know if you'd want me to do that on your behalf (say, using admin permissions to edit that previous post to add the doc content) if it'll help :)
Hauke Hillebrandt, 2y
Yes, that's fine. 
EdoArad, 2y
Edited to include the text. Did only a little bit of formatting, and added the appendix as is, so it's not perfect. Let me know if you have any issues, requests, or what not :) 

This is a great question, and the same should be asked of governments (as in: "why doesn't the UK aid budget simply all go to mosquito nets?")

A likely explanation for why the Gates Foundation doesn't give to GiveWell's top charities is that those charities don't currently have much room for more funding. (GiveWell had to roll over funding last year because they couldn't spend it all; a recent blog post suggests they may have more room for funding soon: https://blog.givewell.org/2022/07/05/update-on-givewells-funding-projections/)

A likely explanation for why ... (read more)

When I read Critiques of EA that I want to read, one very concerning section seemed to be "People are pretty justified in their fears of critiquing EA leadership/community norms."

1) How seriously is this concern taken by those that are considered EA leadership, major/public facing organizations, or those working on community health? (say, CEA, OpenPhil, GiveWell, 80000 hours, Forethought, GWWC, FHI, FTX) 

2a) What plans and actions have been taken or considered?
2b) Do any of these solutions interact with the current EA funding situation and distribution? Why/why not?

3) Are there publicly available compilations of times where EA leadership or major/public facing organizations have made meaningful changes as a result of public or private feedback?

(Additional note: there were a lot of publicly supportive comments [1] on the Democratising Risk - or how EA deals with critics post, yet it seems like the overall impression was that despite these public comments, she was disappointed by what came out of it. It's unclear whether the recent Criticism/Red-teaming contest was a result of these events, though it would be useful to know which organizations considered or adopted any of the suggestions listed[2] or alternate strategies to mitigate concerns raised, and the process behind this consideration. I use this as an example primarily because it was a higher-profile post that involved engagement from many who would be considered "EA Leaders".)

  1. ^

    1, 2, 3, 4

  2. ^

    "EA needs to diversify funding sources by breaking up big funding bodies and by reducing each orgs’ reliance on EA funding and tech billionaire funding, it needs to produce academically credible work, set up whistle-blower protection, actively fund critical work, allow for bottom-up control over how funding is distributed, diversify academic fields represented in EA, make the leaders' forum and funding decisions transparent, stop glorifying individual thought-leaders, stop classifying everything as info hazards…amongst other structural changes."

Thanks for asking this. I can chime in, although obviously I can't speak for all the organizations listed, or for "EA leadership." Also, I'm writing as myself — not a representative of my organization (although I mention the work that my team does). 

  1. I think the Forum team takes this worry seriously, and we hope that the Forum contributes to making the EA community more truth-seeking in a way that disregards status or similar phenomena (as much as possible). One of the goals for the Forum is to improve community norms and epistemics, and this (criticism of established ideas and entities) is a relevant dimension; we want to find out the truth, regardless of whether it's inconvenient to leadership. We also try to make it easy for people to share concerns anonymously, which I think makes it easier to overcome these barriers.
    1. I personally haven't encountered this problem (that there are reasons to be afraid of criticizing leadership or established norms) — no one ever hinted at this, and I've never encountered repercussions for encouraging criticism, writing some myself, etc. I think it's possible that this happens, though, and I also think it's a problem even if people in the commu
... (read more)

Are there publicly available compilations of times where EA leadership or major/public facing organizations have made meaningful changes as a result of public or private feedback?

Some examples here: Examples of someone admitting an error or changing a key conclusion.

pseudonym, 2y
Thanks for the link! I think most examples in the post do not include the part about "as a result of public or private feedback", though I think I communicated this poorly. My thought process behind going beyond a list of mistakes and changes to including a description of how they discovered this issue or the feedback that prompted it,[1] is that doing so may be more effective at allaying people's fears of critiquing EA leadership. For example, while mistakes and updates are documented, if you were concerned about, say, gender diversity (~75% men in senior roles) in the organization,[2] but you were an OpenPhil employee or someone receiving money from OpenPhil, would the contents of the post[3] you linked actually make you feel comfortable raising these concerns?[4] Or would you feel better if there was an explicit acknowledgement that someone in a similar situation had previously spoken up and contributed to positive change?

I also think curating something like this could be beneficial not just for the EA community, but also for leaders and organizations who have a large influence in this space. I'll leave the rest of my thoughts in a footnote to minimize derailing the thread, but would be happy to discuss further elsewhere with anyone who has thoughts or pushbacks about this.[5]

  1. ^

     Anonymized as necessary

  2. ^

     I am not saying that I think OpenPhil in fact has a gender diversity problem (is 3/4 men too much? what about 2/3? what about 3/5? Is this even the right way of thinking about this question?), nor am I saying that people working in OpenPhil or receiving their funding don't feel comfortable voicing concerns. I am not using OpenPhil as an example because I believe they are bad, but because they seem especially important as both a major funder of EA and as folks who are influential in object-level discussions on a range of EA cause areas.

  3. ^

     Specifically, this would be Holden's Three Key Issues I've Chan

Why is scope insensitivity considered a bias instead of just the way human values work?

Quoting Kelsey Piper:

If I tell you “I’m torturing an animal in my apartment,” do you go “well, if there are no other animals being tortured anywhere in the world, then that’s really terrible! But there are some, so it’s probably not as terrible. Let me go check how many animals are being tortured.”

(a minute later)

“Oh, like ten billion. In that case you’re not doing anything morally bad, carry on.”

I can’t see why a person’s suffering would be less morally significant depending on how many other people are suffering. And as a general principle, arbitrarily bounding variables because you’re distressed by their behavior at the limits seems risky.

Not a philosopher, but scope sensitivity follows from consistency (either in the sense of acting similarly in similar situations, or maximizing a utility function). Suppose you're willing to pay $1 to save 100 birds from oil; if you would do the same trade again at a roughly similar rate (assuming you don't run out of money) your willingness to pay is roughly linear in the number of birds you save.

Scope insensitivity in practice is relatively extreme; in the original study, people were willing to pay $80 for 2000 birds and $88 for 200,000 birds. So if you ... (read more)
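The per-bird arithmetic implied by those study figures can be sketched quickly. The $80/$88 numbers come from the study as cited in this comment; everything else is simple division:

```python
# Willingness to pay (WTP) from the oft-cited scope-insensitivity study:
# $80 to save 2,000 birds vs $88 to save 200,000 birds.
wtp_small, n_small = 80, 2_000
wtp_large, n_large = 88, 200_000

per_bird_small = wtp_small / n_small  # $0.04 per bird
per_bird_large = wtp_large / n_large  # $0.00044 per bird

# The scope grew 100x while WTP grew only 1.1x; roughly linear WTP
# would instead predict ~$8,000 for the 200,000-bird version.
scope_ratio = n_large / n_small
wtp_ratio = wtp_large / wtp_small
linear_prediction = per_bird_small * n_large  # ~$8,000
```

The gap between the $88 answer and the ~$8,000 linear prediction is what makes the observed insensitivity "relatively extreme".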

P, 2y
I think the money-pump argument is wrong. You are practically assuming the conclusion. A scope insensitive person would negatively value the total number of bird deaths, or maybe positively value the number of birds alive. So that each death is less bad if other birds also die. In this case it doesn't make sense to talk about $1 per 100 avoided deaths in isolation.
Thomas Kwa, 2y
This doesn't follow for me. I agree that you can construct some set of preferences or utility function such that being scope-insensitive is rational, but you can do that for any policy.

Two empirical reasons not to take the extreme scope neglect in studies like the 2,000 vs 200,000 birds one as directly reflecting people's values.

First, the results of studies like this depend on how you ask the question. A simple variation which generally leads to more scope sensitivity is to present the two options side by side, so that the same people would be asked both about 2,000 birds and about the 200,000 birds (some call this "joint evaluation" in contrast to "separate evaluation"). Other variations also generally produce more scope sensitive resu... (read more)

Dan_Keys, 2y
A passage from Superforecasting: Note: in the other examples studied by Mellers & colleagues (2015), regular forecasters were less sensitive to scope than they should've been, but they were not completely insensitive to scope, so the Assad example here (40% vs. 41%) is unusually extreme.

Hm, I think that most of the people who participated in this experiment: 

three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88. This is scope insensitivity or scope neglect: the number of birds saved—the scope of the altruistic action—had little effect on willingness to pay.

would agree after the results were shown to them that they were doing something irrational that they wouldn't endorse if aware of it. (Ex... (read more)

[anonymous], 2y

I think scope insensitivity could be a form of risk aversion over the difference you make in the world ("difference-making"), or is at least related to it. I explain here why I think that risk aversion over the difference you make is irrational, even though risk aversion over states of the world is not.

I think it is basically not a bias in the way confirmation bias is, and anyone claiming otherwise is presupposing linear aggregation of welfare. From a thing I wrote recently:

Scope neglect is not a cognitive bias like confirmation bias. I can want there to be ≥80 birds saved, but be indifferent about larger numbers: this does not violate the von Neumann-Morgenstern axioms (nor any other axiomatic systems that underlie alternatives to utility theory that I know of). Similarly, I can most highly value there being exactly 3 flowers in the vase o

... (read more)
Thomas Kwa, 2y
Anything is VNM-consistent if your utility function is allowed to take universe-histories or sequences of actions. So you will have to make some assumptions.

Various social aggregation theorems (e.g. Harsanyi's) show that "rational" people must aggregate welfare additively.

(I think this is a technical version of Thomas Kwa's comment.)

To answer this question in short: it is so because it's innate. Like any other bias, scope insensitivity comes from within, in the case of an individual as well as an organization run by individuals. We may generalize it as a product of human values because of the long-running history of constant 'self-value' teachings (not the spiritual ones). But there will always be a disparity when considering the ever-evolving nature of human values, especially in the current era.
 

--------


On the contrary, most of the time, I do consider scope insensitivity as the... (read more)

There's a lot of interesting writing about the evolutionary biology and evolutionary psychology of genetic selfishness, nepotism, and tribalism, and why human values descriptively focus on the sentient beings that are more directly relevant to our survival and  reproductive fitness -- but that doesn't mean our normative or prescriptive values should follow whatever natural selection and sexual selection programmed us to value.

P, 2y
Then what does scope sensitivity follow from?
Geoffrey Miller, 2y
Scope sensitivity, I guess, is the triumph of 'rational compassion' (as Paul Bloom talks about it in his book Against Empathy), quantitative thinking, and moral imagination, over human moral instincts that are much more focused on small-scope, tribal concerns.  But this is an empirical question in human psychology, and I don't think there's much research on it yet. (I hope to do some in the next couple of years though).
P, 2y
That explanation is a bit vague; I don't understand what you mean. By "quantitative thinking" do you mean something like having a textual-length simplicity prior over moralities? By "triumph of moral imagination" do you mean somehow changing the mental representation of the world you are evaluating so that it better represents the state of the world? Why do you call it a triumph (implying it's good) over small-scope concerns? Why do you say this is an empirical question? What do you plan on testing?

Why does most AI risk research and writing focus on artificial general intelligence? Are there AI risk scenarios which involve narrow AIs?

Looking at your profile I think you have a good idea of answers already, but for the benefit of everyone else who upvoted this question looking for an answer, here's my take:

Are there AI risk scenarios which involve narrow AIs?

Yes, a notable one being military AI, i.e. autonomous weapons (there are plenty of related posts on the EA Forum). There are also multipolar failure modes: risks from multiple AI-enabled superpowers instead of a single superintelligent AGI.

Why does most AI risk research and writing focus on artificial general intelligence?

A misalign... (read more)

What happens when we create AI companions for children that are more “engaging” than humans? Would children stop making friends and prefer AI companions?
What happens when we create AI avatars of mothers that are as or more “engaging” to babies than real mothers, and people start using them to babysit? How might that affect a baby’s development?
What happens when AI becomes as good as an average judge at examining evidence, arguments, and reaching a verdict?
 

  1. "AGI" is largely an imprecisely-used initialism: when people talk about AGI, we usually don't care about generality and instead just mean human-level AI. It's usually correct to implicitly substitute "human-level AI" for "AGI" outside of discussions of generality. (Caveat: "AGI" has some connotations of agency.)
  2. There are risk scenarios with narrow AI, including catastrophic misuse, conflict (caused or exacerbated by narrow AI), and alignment failure. On alignment failure, there are some good stories. Each of these possibilities is considered reasonab
... (read more)

What are the most ambitious EA projects that failed?

If we're encouraged to be more ambitious, it would be nice to have a very rough idea of how cost-effective ambition is itself. Essentially, I'd love to find or arrive at an intuitive/quantitative estimate of the following variables:

  • [total # of particularly 'ambitious' past EA projects[1]]
  • [total # (or value) of successful projects in the same reference class]

In other words, is the reason why we don't see more big wins in EA that people aren't ambitious enough, or are big wins just really unlikely? Are we bottlenecked by ambition?

For this reason, I think it could be personally[2] valuable to see a list,[3] one that tries hard to be comprehensive, of failed, successful, and abandoned projects. Failing that, I'd love to just hear anecdotes.

  1. ^

    Carrick Flynn's political campaign is a prototypical example. Others include CFAR, Arbital, RAISE. Other ideas include published EA-inspired books that went under the radar, papers that intended to persuade academics but failed, or even just earning-to-give-motivated failed entrepreneurs, etc.

  2. ^

    I currently seem to have a disproportionately high prior on the "hit rate" for really high ambition, just because I know some success stories (e.g. Sam Bankman-Fried), and this is despite the fact that I don't see much extreme ambition in the water generally.

  3. ^

    Such a list could also be useful for publicly celebrating failure and communicating that we're appreciative of people who risked trying. : )

Why hasn't there been a consensus/debate between people with contradicting views on the AGI timelines/safety topic?

I know almost nothing about ML/AI and I don't think I can form an opinion on my own, so I try to base my opinion on the opinions of more knowledgeable people that I trust and respect. However, what I find problematic is that those opinions vary dramatically, while it is not clear why those people hold their beliefs. I also don't think I have enough knowledge in the area to be able to extract that information from people myself; e.g., if I talk to a knowledgeable 'AGI soon and bad' person they would very likely convince me of their view, and the same would happen if I talk to a knowledgeable 'AGI not soon and good' person. Wouldn't it be a good idea to have debates between people with those contradicting views, figure out what the cruxes are, and write them down? I understand that some people have vested interests in one side of the question (for example, a CEO of an AI company may not gain much from such a debate and thus refuse to participate in it), but I think there are many reasonable people who would be willing to share their opinion and hear other people's arguments. Forgive me if this has already been done and I have missed it (but I would appreciate it if you can point me to it).

  1. OpenPhil has commissioned various reviews of its work, e.g. on power-seeking AI.
  2. Less formal, but there was this facebook debate between some big names in AI.

Overall, I think a) this would be cool to see more of and b) it would be a service to the community if someone collected all the existing examples together.

Not exactly what you're describing, but MIRI and other safety researchers did the MIRI conversations and also sort of debated at events. They were helpful and I would be excited about having more, but I think there are at least three obstacles to identifying cruxes:

  • Yudkowsky just has the pessimism dial set way higher than anyone else (it's not clear that this is wrong, but this makes it hard to debate whether a plan will work)
  • Often two research agendas are built in different ontologies, and this causes a lot of friction especially when researcher A's ontol
... (read more)

The debate on this subject has been ongoing between individuals who are within or adjacent to the EA/LessWrong communities (see posts that other comments have linked, and other links that are sure to follow). However, these debates are often highly insular and primarily take place between people who share core assumptions about:

  1. AGI being an existential risk with a high probability of occurring
  2. Extinction via AGI having a significant probability of occurring within our lifetimes (next 10-50 years)
  3. Other extinction risks (e.g pandemics or nuclear war) not likely m
... (read more)

There was a prominent debate between Eliezer Yudkowsky and Robin Hanson back in 2008 which is a part of the EA/rationalist communities' origin story, link here: https://wiki.lesswrong.com/index.php?title=The_Hanson-Yudkowsky_AI-Foom_Debate

Prediction is hard, and reading the debate from the vantage point of 14 years in the future it's clear that in many ways the science and the argument have moved on, but it's also clear that Eliezer made better predictions than Robin Hanson did, in a way that inclines me to try to learn as much of his worldview as possible so I can analyze other arguments through that frame.

leosn, 2y
This link could also be useful for learning how Yudkowsky & Hanson think about the issue: https://intelligence.org/ai-foom-debate Essentially, Yudkowsky is very worried about AGI ('we're dead in 20-30 years' worried) because he thinks that progress on AI overall will rapidly accelerate as AI helps us make further progress. Hanson was (is?) less worried.  

What level of existential risk would we need to achieve for existential risk reduction to no longer be seen as "important"?

What's directly relevant is not the level of existential risk, but how much we can affect it. (If existential risk was high but there was essentially nothing we could do about it, it would make sense to prioritize other issues.) Also relevant is how effectively we can do good in other ways. I'm pretty sure it costs less than 10 billion times as much (in expectation, on the margin) to save the world as to save a human life, which seems like a great deal. (I actually think it costs substantially less.) If it cost much more, x-risk reduction would be less appealing; the exact ratio depends on your moral beliefs about the future and your empirical beliefs about how big the future could be.

pseudonym, 2y
Thanks! Presumably both are relevant. Or are you suggesting that if we were at existential-risk levels 50 orders of magnitude below today's, and it was still as cost-effective as it is today to reduce existential risk by 0.1%, you'd still do it?
Zach Stein-Perlman, 2y
I meant risk reduction in the absolute sense, where reducing it from 50% to 49.9% or from 0.1% to 0% is a reduction of 0.1%. If x-risk was astronomically smaller, reducing it in absolute terms would presumably be much more expensive (and if not, it would only be able to absorb a tiny amount of money before risk hit zero).
pseudonym, 2y
I'm not sure I follow the rationale of using absolute risk reduction here. If you drop existential risk from 50% to 49.9% for 1 trillion dollars, that's less cost-effective than if you drop existential risk from 1% to 0.997% at 1 trillion dollars, even though one is a 0.1% absolute reduction and the other is only a 0.003% absolute reduction. So if you're happy to do a 50%-to-49.9% reduction at 1 trillion dollars, would you not be similarly happy to go from 1% to 0.997% for 1 trillion dollars? (If yes, what about 1e-50 to 9.97e-51?)
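To make the absolute-vs-relative distinction in this exchange concrete, here is a sketch of the two hypothetical interventions being compared (the percentages come from the comments above; the $1T price tag is the hypothetical used there):

```python
# Absolute vs relative risk reduction for the two hypothetical
# interventions discussed above (both priced at $1 trillion).

def abs_reduction(p_before: float, p_after: float) -> float:
    """Reduction in percentage points (as a fraction)."""
    return p_before - p_after

def rel_reduction(p_before: float, p_after: float) -> float:
    """Fraction of the remaining risk that is removed."""
    return (p_before - p_after) / p_before

# Intervention A: 50% -> 49.9%.  Intervention B: 1% -> 0.997%.
a_abs = abs_reduction(0.500, 0.499)   # 0.001   (0.1 percentage points)
b_abs = abs_reduction(0.01, 0.00997)  # 0.00003 (0.003 percentage points)
a_rel = rel_reduction(0.500, 0.499)   # 0.002   (0.2% of remaining risk)
b_rel = rel_reduction(0.01, 0.00997)  # 0.003   (0.3% of remaining risk)

# A removes more risk in absolute terms; B removes a larger *fraction*
# of the remaining risk. The two metrics rank the interventions oppositely.
```

The flip in ranking between the two metrics is exactly the crux of the disagreement: at the same price, the absolute measure favors A while the relative measure favors B.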

What is the strongest ethical argument you know for prioritizing AI over other cause areas? 

I'd also be very interested in the reverse of this. Is there anyone who has thought very hard about AI risk and decided to de-prioritise it?

I think Transformative AI is unusually powerful and dangerous relative to other things that can plausibly kill us or otherwise drastically affect human trajectories, and many of us believe AI doom is not inevitable. 

I think it's probably correct for EAs to focus on AI more than other things.

Other plausible contenders (some of which I've worked on) include global priorities research, biorisk mitigation, and moral circle expansion. But broadly a) I think they're less important or tractable than AI, b) many of them are entangled with AI (e.g. global priorities research that ignores AI is completely missing the most important thing).

Lizka, 2y
I largely agree with Linch's answer (primarily: that AI is really likely very dangerous), and want to point out a couple of relevant resources in case a reader is less familiar with some foundations for these claims:

  • The 80,000 Hours problem profile for AI is pretty good, and has lots of other useful links
  • This post is also really helpful, I think: Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
  • More broadly, you can explore a lot of discussion on the AI risk topic page in the EA Forum Wiki

Thank you for asking this! Some fascinating replies!

A related question:

Considering other existential risks like engineered pandemics, etc., is there an ethical case for continuing to escalate the advancement of AI development despite the possibly-pressing risk of unaligned AGI for addressing/mitigating other risks, such as developing better vaccines, increasing the rate of progress in climate technology research, etc.?

[I'll be assuming a consequentialist moral framework in this response, since most EAs are in fact consequentialists. I'm sure other moral systems have their own arguments for (de)prioritizing AI.]

Almost all the disputes on prioritizing AI safety are really epistemological, rather than ethical; the two big exceptions being a disagreement about how to value future persons, and one on ethics with very high numbers of people (Pascal's Mugging-adjacent situations).

I'll use the importance-tractability-neglectedness (ITN) framework to explain what I mean. The ITN... (read more)

Reasonable people think it has the greatest chance of killing all of us and ending future conscious life. Compared to other risks it is bigger; compared to other cause areas, it would extinguish more lives.

Ula Zarosa, 2y
"Reasonable people think" - this sounds like a very weak way to start an argument. Who are those people - would be the next question. So let's skip the deferring-to-authority argument. Then we have "the most chance" - what are the probabilities, and how soon in the future? Because when we talk about deprioritizing other cause areas for the next X years, we need to have pretty good probabilities and timelines, right? So yeah, I would not consider deferring to authorities a strong argument. But thanks for taking the time to reply.
Nathan Young, 2y
A survey of ML researchers (not necessarily AI safety researchers, or EAs) gave the following. That seems much higher than the corresponding sample in any other field I can think of. I think that an "extremely bad outcome" is probably equivalent to 1Bn or more people dying. Do a near-majority of those who work in green technology (which feels like the right comparison class) feel that climate change has a 10% chance of 1Bn deaths? Personally, I think there is like a 7% chance of extinction before 2050, which is waaay higher than anything else.
Howie_Lempel, 2y
FYI - subsamples of that survey were asked about this in other ways, which gave some evidence that "extremely bad outcome" was ~equivalent to extinction.

  1. ^

     Or, ‘human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species’

  2. ^

     That is, ‘future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species’
[anonymous], 2y
There is a big gap between killing all of us and ending future conscious life (on earth, in our galaxy, entire universe/multiverse?)
Nathan Young, 2y
Yes, but it's a much smaller gap than any other cause doing this.  You're right, conscious life will probably be fine. But it might not be.

What's the best way to talk about EA in brief, casual contexts? 

Recently I've been doing EA-related writing and copyediting, which means that I've had to talk about EA a lot more to strangers and acquaintances, because 'what do you do for work?' is a very common ice-breaker question. I always feel kind of awkward, like I'm not doing the worldview justice or explaining it well. I think the heart of the awkwardness is that 'it's a movement that wants to do the most good possible/do good effectively' seems tautologous (does anyone want to do less good than possible?); and because EA is a mixture of philosophy and career choice and charity evaluation and [misc], I basically find it hard to find legible concepts to hang it on.

For context, I used to be doing a PhD in Greek and Roman philosophy - not exactly the most "normal" job - and I found that way easier to explain XD

 

Related questions:
- What's the best way to talk about EA on your personal social media?
- What's the best way to talk about it if you go viral on Twitter? (this happened to me today)
- What's the best way to talk about it to your parents and older family members?
etc.

I think 'templates' for how to approach these situations risk seeming manipulative and being cringey, as 'scripts' always are if you don't make them your own, but I'd really enjoy reading a post collecting advice from EA community builders, communicators, marketers, content w... (read more)

2
SeanFre
2y
I think a collection like the one you're proposing would be an incredibly valuable resource for growing the EA community. 

Here's an excellent resource for different ways of pitching EA (and sub-parts of EA). Disclaimer - I do not know who the owner of this remarkable document is. I hope sharing it here is acceptable! As far as I know, this is a public document.

My contingently-favorite option:

Effective Altruism is a global movement that spreads practical tools and advice about prioritizing social action to help others the most. We aim that when individuals are about to invest time or money in helping others, they will examine their options and choose the one with the hig... (read more)

2
Amber Dawn
2y
Thanks! This looks extremely comprehensive.
2
Rona Tobolsky
2y
This resource is also robust (and beautifully outlined).

What would convince you to start a new effective animal charity?

Has anyone produced writing on being pro-choice and placing a high value on future lives at the same time? I’d love to read about how these perspectives interact!

FYI I'm also interested in this. 
I do think it's consistent to be pro-choice and place a high value on future lives (both because people might be able to create more future lives by (e.g.) working on longtermist causes than by having kids themselves, and because you can place a high value on lives but say that it is outweighed by the harm done by forcing someone to give birth). But I think that pro-natalist and non-person-affecting views do have implications for reproductive rights and the ethics of reproduction that are seldom noticed or made explicit.

Richard Chappell wrote this piece, though IMHO it doesn't really get to the heart of the tension.

I've only just stumbled upon this question and I'm not sure if you'll see this, but I wrote up some of my thoughts on the problems with the Total View of population ethics (see the "Abortion and Contraception" heading specifically).

Personally, I think there is a tension there which does not seem to have been discussed much in the EA forum. 

Here is a good post on the side of being pro-life: https://forum.effectivealtruism.org/posts/ADuroAEX5mJMxY5sG/blind-spots-compartmentalizing

I have thought about this a lot, and I think pro-life might actually win out in terms of utility maximization if it doesn't increase existential risk.

1) What level of funding or attention (or other metrics) would longtermism or AI safety need to receive for it to no longer be considered "neglected"?

2) Does OpenPhil or other EA funders still fund OpenAI? If so, how much of this goes towards capabilities research? How is this justified if we think AI safety is a major risk for humanity? How much EA money is going into capabilities research generally?

(This seems like something that would have been discussed a fair amount, but I would love a distillation of the major cruxes/considerations, as well as what would need to change for OpenAI to be no longer worth funding in future).

  1. See here. (Separating importance and neglectedness is often not useful; just thinking about cost-effectiveness is often better.)
  2. No.
1
pseudonym
2y
Thanks! This makes sense. In my head, AI safety feels like a cause area that can absorb a lot of funding, but unlike nuclear war or engineered pandemics, which seem to have clearer milestones for success, I don't know what success looks like in the AI safety space. I'm imagining a hypothetical scenario where AI safety is overprioritized by EAs, and wondering if or how we would discover this and respond appropriately.

I’ve asked this question on the forum before to no reply, but do the people doing grant evaluations consult experts in their choices? Like do global development grant-makers consult economists before giving grants? Or are these grant-makers just supposed to have up-to-date knowledge of research in the field?

I’m confused about the relationship between traditional topic expertise (usually attributed to academics) and EA cause evaluation.

[My impression. I haven't worked on grantmaking for a long time.] I think this depends on the topic, size of the grant, technicality of the grant, etc. Some grantmakers are themselves experts. Some grantmakers have experts in house. For technical/complicated grants, I think non-expert grantmakers will usually talk to at least some experts before pulling the trigger but it depends on how clearcut the case for the grant is, how big the grant is, etc.

'If I take EA thinking, ethics, and cause areas more seriously from now on, how can I cope with the guilt and shame of having been so ethically misguided in my previous life?'

or, another way to put this:

'I worry that if I learn more about animal welfare, global poverty, and existential risks, then all of my previous meat-eating, consumerist status-seeking, and political virtue-signaling will make me feel like a bad person'

(This is a common 'pain point' among students when I teach my 'Psychology of Effective Altruism' class)

I might be missing the part of my brain that makes these concerns make sense, but this would roughly be my answer: imagine that you and everyone in your household consume water with lead in it every day. You have the chance to learn whether there is lead in the water. If you learn that there is, you'll feel very bad, but you'll also be able to change your source of water going forward. If you learn that there is not, you'll no longer have this nagging doubt about the water quality. I think learning about EA is kind of like this. It will be right or wrong to eat animals regardless of whether you think about it, but only if you learn about it can you change for the better. The only truly shameful stance, at least to me, is to intentionally put your head in the sand.

My secondary approach would be to say that you can't change your past but you can change your future. There is no use feeling guilt and shame about past mistakes if you've already fixed them going forward. Focus your time and attention on what you can control.

My two cents: I view EA as supererogatory, so I don't feel bad about my previous lack of donations, but feel good about my current giving.

Changing the "moral baseline" does not really change decisions: seeing "not donating" as bad and "donating" as neutral leads to the same choices as seeing "not donating" as neutral and "donating" as good.

4
Geoffrey Miller
2y
In principle, changing the moral baseline shouldn't change decisions -- if we were fully rational utility maximizers. But for typical humans with human psychology, moral baselines matter greatly, in terms of social signaling, self-signaling, self-esteem, self-image, mental health, etc.
5
Lorenzo Buonanno
2y
I agree! That's why I'm happy that I can set it wherever it helps me the most in practice (e.g. makes me feel the "optimal" amount of guilt, potentially 0)

Meta:

  1. Seems like a more complicated question than [I could] solve with a comment
  2. Seems like something I'd try doing one-on-one, talking with (and/or about) a real person with a specific worry, before trying to solve it "at scale" for an entire class
  3. I assume my understanding of the problem from these few lines will be wrong and my advice (which I still will write) will be misguided
  4. Maybe record a lesson for us and we can watch it?

Tools I like, from the CFAR handbook, which I'd consider using for this situation:

  1. IDC (maybe listen to that part afraid you'll think
... (read more)
5
Geoffrey Miller
2y
Yanatan -- I like your homunculus-waking-up thought experiment. It might not resonate with all students, but everybody's seen The Matrix, so it'll probably resonate with many.

If you haven't come across it, a lot of EAs have found Nate Soares' Replacing Guilt series useful for this. (I personally didn't click with it but have lots of friends who did).

I like the way some of Joe Carlsmith's essays touch on this. 

4
Howie_Lempel
2y
A much narrower recommendation for nearby problems is Overcoming Perfectionism (~a CBT workbook). I'd recommend it to some EAs who are already struggling with these feelings (and know some who've really benefitted from it). (It's not precisely aimed at this, but I think it can be repurposed for a subset of people.) I wouldn't recommend it to students recently exposed to EA who are worried about these feelings in future.

What has helped me most is this quote from Seneca:

Even this, the fact that it [the mind] perceives the failings it was unaware of in itself before, is evidence for a change for the better in one's character.

That helped me feel a lot better about finding unnoticed flaws and problems in myself, which always felt like a step backwards before. 

I also sometimes tell myself a slightly shortened Litany of Gendlin:

What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
People can stand what is true,
for ... (read more)

My personal approach:

  • I no longer think of myself as "a good person" or "a bad person", which may have something to do with my leaning towards moral anti-realism. I recognize that I did bad things in the past and even now, but refuse to label myself "morally bad" because of them; similarly, I refuse to label myself "morally good" because of my good deeds. 
    • Despite this, sometimes I still feel like I'm a bad person. When this happens, I tell myself: "I may have been a bad person, so what? Nobody should stop me from doing good, even if I'm the worst perso
... (read more)

I think You Don't Need To Justify Everything is a somewhat less related post (than others that have been shared in this thread already) that is nevertheless on point (and great).

I think it's okay to feel guilt, shame, remorse, rage, or even hopelessness about our past "mistakes". These are normal emotions, and we can't, or rather shouldn't, purposely avoid or bury them. It's analogous to someone being dumped by a beloved partner and feeling like the whole world is crumbling. No matter how much we try to comfort such a person, he or she will feel heartbroken.

In fact, feeling bad about our past is a great sign of personal development because it means we realize our mistakes! We can't improve ourselves if we don't even know what we d... (read more)

4
Geoffrey Miller
2y
Kiu -- I agree. It reminds me of the old quote from Rabbi Nachman of Breslov (1772-1810):  “If you won’t be better tomorrow than you were today, then what do you need tomorrow for?” https://en.wikipedia.org/wiki/Nachman_of_Breslov

Does anyone have a good list of books related to existential and global catastrophic risk? This doesn't have to just include books on X-risk / GCRs in general, but can also include books on individual catastrophic events, such as nuclear war. 

Here is my current resource landscape (these are books that I have personally looked at and can vouch for; the entries came to my mind as I wrote them - I do not have a list of GCR / X-risk books at the moment; I have not read some of them in full): 

General:

AI Safety 

Nuclear risk

General / space

Biosecurity 

3
Howie_Lempel
2y
Others, most of which I haven't fully read and which are not always fully on topic:
* Richard Posner. Catastrophe: Risk and Response. (Precursor)
* Richard A. Clarke and R.P. Eddy. Warnings: Finding Cassandras to Stop Catastrophes
* General Leslie Groves. Now It Can Be Told: The Story of the Manhattan Project (nukes)
2
peterhartree
2y
The Bible (Noah's Ark). File under "Fiction" or "Precursors".
2
evakat
2y
Small remark: the Goodreads list on nuclear risk you linked to is private.

Of all the "decision theory"-style discussions in EA, I think anthropics (e.g. the fact that we exist tells us something about the nature of successful intelligence and x-risk) seems like one of the most useful that could arise from pure thought alone. This is sort of amazing.

The blog posts I've seen written in 2021 or 2020 seem sort of unclear and tangled (e.g. there are two competing theories and empirical arguments are unclear).

Is there a good summary of Anthropic ideas? Are there updates on this work? Is there someone working on this? Do they need help (e.g. from senior philosophers or cognitive scientists)? 

A set of related questions RE: longtermism/neartermism and community building.

1a) What is the ideal theory of change for Effective Altruism as a movement in the next 5-10 years? What exactly does EA look like, in the scenarios that community builders or groups doing EA outreach are aiming for? This may have implications for outreach strategies as well as cause prioritization.[1]

1b) What are the views of various community builders and community building funders in the space on the above? Do funders communicate and collaborate on a shared theory of change, or are there competing views? If so, which organizations best characterize these differences, what are the main cruxes/where are the main sources of tension?

2a) A commonly discussed tension on this forum is between neartermism and longtermism, or between AI safety and more publicly friendly cause areas in the global health space. For work in the latter areas: how much of its value comes from it being an inherently valuable cause area, and how much from it being intended as an onramp to longtermism/AI safety?

2b) What are the views of folks doing outreach and funders of community builders in EA on the above? If there are different approaches, which organizations best characterize these differences, what are the main cruxes/where are the main sources of tension? I would be particularly interested in responses from people who know what CEA's views are on this, given they explicitly state they are not doing cause-area specific work or research. [2]

3) Are there equivalents [3] of Longview Philanthropy who are EA aligned but do not focus on longtermism? For example, what EA-aligned organization do I contact if I'm a very wealthy individual donor who isn't interested in AI safety/longtermism but is interested in animal welfare and global health? Have there been donors (individual or organizational) who fit this category, and if so, who have they been referred to/how have they been managed?

  1. ^

    "Big tent" effective altruism is very important (particularly right now) is one example of a proposed model, but if folks think AI timelines are <10 years and p(doom) is very high, then they might argue EA should just aggressively recruit AI safety folks at elite unis.

  2. ^

    Under Where we are not focusing: "Cause-specific work (such as community building specifically for effective animal advocacy, AI safety, biosecurity, etc.)"

  3. ^

    "designs and executes bespoke giving strategies for major donors"

3. I'm not sure they do as much bespoke advising as Longview, but I'd say GiveWell and Farmed Animal Funders. I think you could contact either one with the amount you're thinking of giving and they could tell you what kind of advising they can provide. 

I really want to learn more about broad longtermism. In 2019, Ben Todd said that in a survey EAs said that it was the most underinvested cause area by something like a factor of 5. Where can I learn more about broad longtermism, what are the best resources, organizations, and advocates on ideas and projects related to broad longtermism?

I think parts of What We Owe the Future by Will MacAskill discuss this approach a bit.

1
Jordan Arel
2y
Mm, good point! I seem to remember something... Do you remember which chapter(s), by chance?
2
Howie_Lempel
2y
My guess is that Part II, trajectory changes will have a bunch of relevant stuff. Maybe also a bit of part 5. But unfortunately I don't remember too clearly.

Does Peter Singer still consider himself aligned to the Effective Altruism movement? And/or do you forecast he will do in five years time?

If "EA is a question," and that question is how to do the most good, I think Peter Singer will always consider himself an effective altruist.

However, he seems to disagree about whether the answer to that question entails a predominant focus on common longtermist topics. I suspect, while he will always see himself as an EA, it will be as an EA that has important differences in cause area prioritization. For more info, he discusses his views about longtermism here, perhaps captured best by the following quote:

When taking steps to reduce the risk that we will become extinct, we should focus on means that also further the interests of present and near-future people. If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do; and if we are not at that critical point, it will have been a good thing to do anyway.

How can one accept the Simulation Hypothesis and at the same time find Effective Altruism a valuable enterprise?

I don't see how the Simulation Hypothesis is a counterargument to EA, if you presume everyone else is still as "real" (i.e., simulated at the same level of detail) as you are. After all, you clearly have conscious experience, emotional valence, and so on, despite being a simulation - so does everyone else, so we should still help them live their best simulated lives. Whether one is a simulation or not, we can clearly feel the things we call pleasure and pain, happiness and sorrow, freedom and despair, so we clearly have moral worth in my worldview. Though we should probably also be working on some simulation-specific research, I don't see how something like malaria nets would cease to be worthwhile.

1
Emanuele DL
2y
Thanks for replying to my question. Your argument is certainly valid and an important one. But if we are to take the simulation hypothesis seriously, it is only one within a spectrum of possible arguments that depend on the very nature of the simulation. For instance, we might find out that our universe has been devised in such a twisted way that any improvement for its conscious beings corresponds to an unbearable proportional amount of pain in a parallel simulated universe. In such a case, would pursuing effective altruism or longtermism still be moral?
2
Jay Bailey
2y
Effective altruism is about doing the most good possible, so I'd say one can still pursue that under any circumstance. In the hypothetical you mentioned, the current form of EA would definitely be immoral in my opinion, because it is mostly about improving the lives of people in this universe, which would cause more suffering elsewhere and thus be wrong. So, in such a world, EA would have to look incredibly different - the optimal cause area would probably be to find a way to change the nature of our simulation, and we'd have to give up a lot of the things we do now because their net consequences would be bad. That's one of the best parts of EA, in my opinion - it's a question ("How do we do the most good?") rather than an ideology ("You must do these things"). Even if our current answers turned out to be wrong, we could still pursue the question anew.
1
Emanuele DL
2y
I agree with your approach to the question, but perhaps if we really take the simulation hypothesis seriously (or at least consider it probable enough to concern us), the first step should be finding a way to tell whether or not we actually live in a simulation. Research in physics/astronomy could explicitly devise experiments looking to demonstrate systematic inconsistencies in the fabric of our universe that could hint at the made-up nature of all its laws. This is, in a way, an indirect answer to your last question: if effective altruism is not an ideology to be followed but a rational enterprise grounded in the actual nature of our universe, then it should also be concerned with improving our understanding of it - even if this eventually leads to a radical rethink of what effective altruism should be.
2
Jay Bailey
2y
I agree. If the Simulation Hypothesis became decently likely, we would want to answer questions like: - Does our simulation have a goal? If so, what? - Was our simulation likely created by humans? Also, we'd probably want to be very careful with those experiments - observing existing inconsistencies makes sense, but deliberately trying to force the simulation into unlikely states seems like an existential risk to me - the last thing you want is to accidentally crash the simulation!

Have there been any actual "wins" for longtermism? 

Has anyone taken any concrete actions that clearly shift the needle towards a better future?

Why do EAs use "counterfactual" in statements like "it will have a high counterfactual impact"? Isn't "non-fungible" a more apt word for what EAs are trying to get at?

How tractable are animal welfare problems compared to global health and development problems?

I'm asking because I think animal welfare is a more neglected issue, but I still donate for global health and development because I think it's more tractable.

I think that it's very tractable. For example, I estimated that corporate campaigns improve 9 to 120 years of chicken life per dollar spent and this improvement seems to be very significant. It would likely cost hundreds or thousands of dollars to improve a life of one human to such a degree, even in developing countries. There are many caveats to this comparison that I can talk about upon request but I don't think that they change the conclusion.

Another way to see tractability is to look at the big wins for animal advocacy in 2021 or 2020. This progress i... (read more)

I believe they are largely tractable: there's a variety of intervention types (policy, direct work, meta, research), cause areas (alt proteins, farmed animals, wild animal suffering, insects), and organisations and geographies in which to pursue them. Of particular note may be potentially highly tractable and impactful work in LMICs (Africa, Asia, the Middle East, Eastern Europe).

I will say animal welfare is a newer and less explored area than global health, but that may mean your donation can be more impactful and make more of a difference, as there could be... (read more)

Within the field of AI safety, what does "alignment" mean?

The "alignment problem for advanced agents" or "AI alignment" is the overarching research topic of how to develop sufficiently advanced machine intelligences such that running them produces good outcomes in the real world.

Both 'advanced agent' and 'good' should be understood as metasyntactic placeholders for complicated ideas still under debate. The term 'alignment' is intended to convey the idea of pointing an AI in a direction--just like, once you build a rocket, it has to be pointed in a particular direction.

"AI alignment theory" is meant as an overarch

... (read more)

Has anyone associated with EA ever looked for leverage points for reducing the rate of abortion?

(I believe the answer is no, or at least it hasn't been published publicly.)

Hi Jason, I'm the author of the aforementioned research into IUDs, artificial wombs, and legislative solutions, which is indeed very cursory. The research is included at the bottom of a larger draft (https://docs.google.com/document/d/10VL9m-GW2f428WZSEs834kiDrHFxtfPNQzc6ljLwTyc/edit?usp=sharing) of an eventual EA forum post outlining reasons why EAs might oppose abortion and potential interventions in that regard.

The draft's philosophical arguments against abortion are much more mature than its section on potential interventions, partially because I've t... (read more)

I sense the answer is yes. I seem to recall that someone looked into this. 

Also I guess the answer is technically yes since I wouldn't be surprised if some interventions already lower the rate of unwanted pregnancy.

Hi - I'm just curious what the rationale for this would be? 

6
jasonk
2y
If there were cost-efficient leverage points, it might be worth investing some amount of money and effort in. A non-exhaustive list of semi-conjoint reasons:
* One believes abortion is a grave moral wrong and a lot occur each year.
* One doesn't believe abortion is a grave moral wrong, but assigns some weight to the view's correctness. Even assigning a 10% chance to the view's correctness still means a lot is potentially at stake.
* There might be relatively easy ways to make a difference and have other positive, follow-on effects. For example, male contraceptives might make a big difference in reducing unintended pregnancies and my understanding (a few years old) is that there aren't many funders of relevant research. (I recognize that some people argue that the follow-on effects of other contraceptives like the pill are not fully positive and some believe they may even be negative.)
* Abortion is ridiculously polarizing and seems to crowd out discussion of other important issues in politics. Maybe reducing its salience would help increase the ability to focus on other issues?
* Obtaining an abortion imposes greater and greater costs in the US (financially, in time required, psychologically, health risks) as restrictions are rolled out.
* The strategies engaged in by many pro-life advocates seem unlikely to significantly reduce abortion rates.

There has been some very cursory research into things like IUDs, artificial wombs, legislative action etc., but I don't think the author ever finished or published it.

If only a few people are responsible for most of EA's impact, and I'm not as ambitious as many others, should I even care about EA?

Why don't we discount future lives based on the probability of them not existing? These lives might end up not being born, right?

I understand the idea of not discounting lives due to distance (distance in time as well as distance in space). Knowing a drowning child is 30km away is different from hearing from a friend that there is an x% chance of a drowning child 30km away. In the former, you know something exists; in the latter, there is only a probability that it exists, and you apply a suitable level of confidence in your actions.

We should discount someone's life based on the probability of them not existing. This is not controversial. (But standard-economic constant-factor-per-year discounting is too crude to be useful.)
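To make the answer above concrete, here is a toy calculation (my sketch, with made-up numbers): probability-weighting a future life is just expected value, and it behaves very differently from the constant per-year discount rate economists sometimes apply.

```python
# Toy comparison (made-up numbers): discounting a future life by its
# probability of existing vs. by a constant annual discount rate.

def probability_weighted_value(value_if_exists, p_exists):
    """Expected value: weight by the chance the person ever exists."""
    return value_if_exists * p_exists

def time_discounted_value(value, years_in_future, annual_rate):
    """Standard economic discounting: shrinks with time regardless of
    the probability of existence."""
    return value / (1 + annual_rate) ** years_in_future

# A life 200 years out with a 50% chance of existing:
print(probability_weighted_value(100, 0.5))         # 50.0
print(time_discounted_value(100, 200, 0.03))        # well under 1
```

The contrast is the point: a constant annual rate makes a life two centuries out nearly worthless no matter how likely it is, while probability-weighting only shrinks its value in proportion to the chance it never happens.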

Does "calibrated probability assessment" training work?

In "How to Measure Anything" chapter 5, Douglas Hubbard describes the training he provides to individuals and organizations that want to improve their skills.  He provides a sample test which is based on general knowledge trivia, questions like

 "What is the air distance from LA to NY?" 

for which the student is supposed to provide a 90% confidence interval.  There are also some true/false questions where you provide your level of confidence in the answer e.g. 

"Napoleon was born on Corsica".  

In the following few pages he describes some of the data he's collected about his trainees, implying this sort of practice helps people become better estimators of various things, including forecasting the likelihood of future events. For example, he describes CTOs making more accurate predictions about new tech after completing training.

My question: Is there evidence that practice making probabilistic estimates about trivia improves people's ability to forecast non-trivial matters?  Have there been published studies?

I asked Dr. Hubbard these questions, and he graciously replied, saying to check out his book (which only cites a 1980 Kahneman and Tversky study), the Wikipedia page (which also only cites his book and the above study), or to read Superforecasting.

Thanks!

[note that this is a re-post of a question I asked before but didn't get an answer]

I'm not sure we need "published studies" but "proper studies" seem like a great idea. 
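For readers unfamiliar with how such calibration tests are typically scored, here is a minimal sketch (my illustration, not Hubbard's actual method): interval questions are scored by how often the stated 90% intervals contain the truth, and true/false questions by a Brier score.

```python
# Toy scoring of a calibration test (illustrative only).

def interval_hit_rate(answers):
    """answers: list of (low, high, truth) tuples for 90% CI questions.
    A well-calibrated respondent's intervals contain the truth ~90% of the time."""
    hits = sum(1 for low, high, truth in answers if low <= truth <= high)
    return hits / len(answers)

def brier_score(answers):
    """answers: list of (stated_confidence, was_correct) for true/false
    questions; lower is better, and 0.25 matches always guessing 50%."""
    return sum((p - int(correct)) ** 2 for p, correct in answers) / len(answers)

# Example: 10 interval answers, 7 of which contained the truth -> 0.7,
# i.e. overconfident relative to the 0.9 target.
intervals = [(0, 10, 5)] * 7 + [(0, 10, 20)] * 3
print(interval_hit_rate(intervals))  # 0.7

binary = [(0.9, True), (0.8, False), (0.6, True)]
print(round(brier_score(binary), 3))
```

The open question in the thread is whether improving these scores on trivia transfers to forecasting real-world events, which the toy scoring above obviously can't settle.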

Hi all. I'm 40 with a mortgage and two kids, living in the west of Ireland with a 20-year career in event management and lighting design behind me. No college education. I want to change career to a more longtermist, meaningful road, but I just feel trapped - geographically, financially, and socially - in my current life. Anyone have any advice or pointers?

Is there a good historical overview of EA? One that introduces key people, key events, key ideas?

Has anyone in EA looked into how compatible EA is with non-Western countries such as China or India, or how to grow EA movements there?

  1. Why is there no good EA YouTube channel, with short, scripted videos in the style of Crash Course or Kurzgesagt, offering shareable introductions to EA in general and longer playlists about each of the causes?
  2. There also doesn't seem to be a podcast or social media account trying to get big and reach people to get into EA. Why is that the case?
  3. I've seen ads for 80,000 Hours and GiveWell, but why aren't there any for the EA movement as a whole?

Sorry for asking multiple questions in one, but I feel the answers may be related. I separated them in case you want to answer them individually. Feel free to answer only one or two.

  1. There's Giving What We Can and A Happier World. Also, Kurzgesagt itself got two large grants for making EA videos; I think they're going to make many more in the future.
  2. If you Google "effective altruism podcasts" you can find some. There's also a recent small grants program to start new ones.
4
Pato
2y
Yeah, but they're not the EA channel or the EA podcast in the same way that the EA Forum is the EA forum. And the YouTube channels don't have introductions to EA, its causes, and how to contribute to them.

Open Philanthropy has given grants to Kurzgesagt, who have made some videos on EA-related subjects. There is also some other EA-aligned YouTube creator whose name escapes me now.

The 80k podcast has wide reach, including outside EA I believe.

So there is some of this. I think part of the answer is that it’s really hard to reach a massive audience while retaining a high-fidelity message.

I think that's because the EA movement is just that, a global movement, not an organization.

You probably also haven't seen Fridays For Future ads, or the FFF podcast.

1
Pato
2y
I understand that, and I guess 3 and even 2 might not be that effective, but it seems weird to me that there isn't an org making good YouTube videos that EAs could share and put on their profiles or something, with a message encouraging viewers to share them in turn and create a snowball effect.

What are the methods (meditation, self-analysis) and tools (podcasts, books, support groups)  you use to keep yourself motivated and inspired in Effective Altruism specifically, and in making a difference generally?

Are there examples of EA causes that had EA credence and financial support but then lost both, and how did discussion of them change before and after? Also vice-versa, are there examples of causes that had neither EA credence nor support but then gained both?

The EA Survey has info on cause prio changes over time. Summary is:

The clearest change in average cause ranking since 2015 is a steady decrease for global poverty and increases for AI risk and animal welfare.

Holden Karnofsky wrote Three Key Issues I’ve Changed My Mind About on the Open Philanthropy blog in 2016.

On AI safety, for example:

I initially guessed that relevant experts had strong reasons for being unconcerned, and were simply not bothering to engage with people who argued for the importance of the risks in question. I believed that the tool-agent distinction was a strong candidate for such a reason. But as I got to know the AI and machine learning communities better, saw how Superintelligence was received, heard reports from the Future of Life Insti

... (read more)

Can I write the questions and answers in to a FAQ on the wiki that anyone can add to?

Toby Ord has written about the affectable universe, the portion of the universe that “humanity might be able to travel to or affect in any other way.”

I’m curious whether anyone has written about the affectable universe in terms of time.

  1. We can only affect events in the present and the future
  2. Events are always moving from the present (affectable) to the past (unaffectable)
  3. We should intervene in present events (e.g. reduce suffering) before these events move to the unaffectable universe

Maybe check out the term "light cone".
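One way to make the time framing precise is the relativistic idea the "light cone" term points at: an event at distance d and time t from us is affectable only if a signal could reach it, i.e. d ≤ c·t. A toy check (my own illustration, in units where c = 1, so distances are in light-years and times in years):

```python
# Affectability check in units where c = 1 (distances in light-years, times in years).
def in_future_light_cone(distance_ly: float, years_from_now: float) -> bool:
    """An event is affectable only if a signal sent now can reach it in time."""
    return years_from_now >= 0 and distance_ly <= years_from_now

print(in_future_light_cone(4.2, 10))   # Proxima Centauri, 10 years from now: True
print(in_future_light_cone(4.2, 3))    # too soon for light to arrive: False
print(in_future_light_cone(1.0, -5))   # the past is outside the affectable universe: False
```

This captures points 1 and 2 above: the past is always outside the future light cone, and an event "leaves" the affectable region as it slides into the past.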

Why is longtermism a compelling moral worldview? 

A few sub-questions:

  • Why should we care about people that don't exist yet? And why should we dedicate our resources to making the world better for people that might exist (reducing x-risks) rather than using them for people that definitely exist and are currently suffering (global health, near-term global catastrophic risks, etc?) Longtermism seems to be somewhat of a privileged and exclusive worldview because it deprioritizes the very real lack of healthcare, food and potable water access, security, and education that plagues many communities.
  • Why are x-risks considered worse than global catastrophic risks? From a utilitarian standpoint, global catastrophic risks should be much worse than x-risks. All things considered, x-risks are a quite neutral outcome. They're worse than a generally happy future, but highly preferable to a generally unhappy future. Global catastrophic risks would cause a generally unhappy future.
  • Should the long-term preservation of humanity necessarily be a goal of effective altruism? I don't think the preservation of humanity is an inherently bad thing. (Although it would likely be at the expense of every other species.) But, I could imagine an extinction scenario that I wouldn't be upset about: As technology progresses, people generally get richer and happier. A combination of rising GDP, more urbanized styles of living, better access to birth control, and a mechanized workforce causes the birth rate to drop, and humanity comfortably, quietly declines. Natural habitats flourish, and we make room for other species to thrive and flourish as we have. Is this outcome acceptable from an altruistic perspective? If not, why?

How are DALYs/QALYs determined?

Life years are pretty objective but how are the disability/quality adjustments made?

One common method is the time trade-off; you can find other common methods at the bottom of that Wikipedia page.

For more details the answers in this thread might be interesting.

Also worth noting that (as far as I know) no EA group only uses QALYs.

For example, GiveWell uses its own researched "moral weights": https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/comparing-moral-weights
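As an illustration of the time trade-off method (all numbers invented): if respondents say they would accept 7 years in full health in place of 10 years living with a condition, the implied quality weight for that health state is 0.7, and curing it for those 10 years counts as 3 QALYs gained.

```python
def tto_quality_weight(years_in_state: float, equivalent_healthy_years: float) -> float:
    """Time trade-off: weight = healthy years judged equivalent / years in the health state."""
    return equivalent_healthy_years / years_in_state

w = tto_quality_weight(10, 7)    # respondents would trade 10 years ill for 7 healthy years
print(w)                         # 0.7
print(round((1 - w) * 10, 2))    # QALYs gained by curing the condition for 10 years: 3.0
```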

Is there an overview or estimation of how many EA-aligned people work in (inter)national, governmental bodies and/or (inter)national politics? 

'Why don't EA's main cause areas overlap at all with the issues that dominate current political debates and news media?'

(This could be an occasion to explain that politically controversial topics tend not to be (politically) tractable or neglected (in terms of media coverage), and are often limited in scope (i.e. focused on domestic political squabbles and symbolic virtue-signaling).)

  1. As you said, they're (almost by definition) not neglected
  2. The media picks topics based on an algorithm that is simply different from the EA algorithm. If that weren't true, I guess we wouldn't really need EA

I use a lot of ideas from Leviathan (Hobbes) all the time, but my knowledge comes just from reading the title and the first paragraph of the Wikipedia page[1]. I'm worried I'll look dumb in front of smart people.

Does anyone have a good approachable summary of Leviathan, or even better, a tight, well written overview of the underlying and related ideas from a modern viewpoint?

  1. ^

    ("Bellum omnium contra omnes" is just so cool to say)

Genuine question (rather than critique):

What is the EA Community doing to increase the diversity of its make-up? Are there any resources out there folks can link me to that are actively working on bringing in a plurality of perspectives/backgrounds/etc.?

Considering the scope of existential challenges we’re facing as a species, wouldn’t it stand to reason that looking for ideas for tackling them from a wider array of sources (especially areas outside of STEM, underrepresented populations, or folks outside of the English-speaking world) might offer solutions we wouldn’t otherwise come across?

Magnify Mentoring is also relevant here. 

I wrote a bit about non-STEM inclusivity (list of project ideas, a post on language), and I think there are some active efforts to expand outside of the English-speaking world (things like conferences, translation projects, local and online groups, fellowships, camps, etc.) — but more would be good! 

This doesn't completely answer your question, but you may be interested in this page on CEA's website, particularly the "Our work" section.

MariellaVee (2y):
Thank you so much for the link! Lots of great stuff here. Trying to help mitigate economic barriers to attending events and conferences is excellent, as are the acknowledgements of the risk of English-speaking dominance within the community's leadership; maintaining a genuine curiosity and collaborative mentality, and asking communities and underrepresented groups how best to support their participation, is also great!

I wonder how EA might avoid the trap I've witnessed a lot in tech and industry, where the intentions are there/they state they're committed to these principles, but the actual day-to-day reality doesn't match up with the well-intentioned guidelines (no matter how many "We're really dedicated to DEI!" Zoom meetings are held). Is the answer to apply similar criteria for objective measurement of success in these categories to organizations and bodies within the community as is done for charities and initiatives? Or to set transparent and time-specific goals for things like translating seminal resources into other languages, diversifying key leadership positions, etc.? (Ex: CEA states their current employee make-up is 46% female and 18% self-identified minorities, though it's not clear how this breaks down within leadership positions.) Is it as simple as discouraging the overuse of technical jargon and academic language within communications so as to widen the scope of understanding/broaden the audience? (Or something completely different/none of these things?)

Should EA give more importance to the possibility of globally working towards eliminating the negative tendencies of greed, aversion and delusion, inherent to the human mind?

Loosely related, but you might be interested in the topic "moral advocacy": https://forum.effectivealtruism.org/topics/moral-advocacy 

What's the basis for using expected utility/value calculations when allocating EA funding for "one off" bets? More details explaining what I don't understand are below for context.

My understanding is that expected value relies on the law of large numbers, so in situations where you have bets that are unlikely to be repeated (for example, extinction, where you could put a ton of resources in and go from a 5% extinction risk over the next century to a 4% risk), it doesn't seem like expected value should hold. The way I've seen this justified is via expected utility and the von Neumann–Morgenstern (VNM) theorem, which I believe says that a utility function exists that follows rational principles, and that it has been proved optimal to maximize expected utility in that situation.

However, it seems like that doesn't really tell us much, because maybe you could construct a number of utility functions that satisfy VNM, and some bankrupt you and some don't. It seems reasonable to me that a good utility function should discount bets that will rarely be repeated at that scale and would be unlikely to average out positively in the long run since they won't be repeated enough times. But as far as I'm aware EA expected utility/value calculations often don't account for that.

It seems like people refer to attempts to account for that as risk-aversion, and my understanding is EAs often argue that we should be risk-neutral. But the arguments I've seen typically seem to frame risk-aversion as putting an upper bound on valuing people's well-being and that we don't want to do that. But it seems to me like you could value well-being linearly, but also factor in that you should downweight bets that won't be repeated enough to average out in your favor.

Apologies for the lengthy context, I'm sure I'm confused on a lot of points so any clarity or explanations on what I'm missing would be appreciated!
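A toy simulation can make the worry concrete (all numbers invented): a positive-EV bet whose payoff arrives with probability 0.01 loses money in almost every single take, while the per-bet average over many independent takes does approach the EV.

```python
import random

random.seed(0)
p, payoff, cost = 0.01, 200.0, 1.0      # EV per bet: 0.01 * 200 - 1 = +1

def take_bet() -> float:
    return (payoff if random.random() < p else 0.0) - cost

# One-off: in almost every "world", you simply lose the stake.
one_off = [take_bet() for _ in range(10_000)]
losing = sum(1 for x in one_off if x < 0) / len(one_off)
print(losing)                           # ~0.99 of single bets lose money

# Repeated: the per-bet average over many independent bets approaches the EV of +1.
average = sum(take_bet() for _ in range(1_000_000)) / 1_000_000
print(average)
```

This doesn't settle the normative question, but it shows the asymmetry the question is pointing at: the law of large numbers speaks to the repeated case, not the one-off case.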

It's been a while since I read it but Joe Carlsmith's series on expected utility might help some. 

Ryan Beck (2y):
Thanks, I'll check that out!

What are some strong arguments against 'astronomical waste'?

In roughly ascending order of plausibility: 

  • Cluelessness. Maybe we can't knowably affect the future (I find this argument probably the weakest)
  • Anthropics/Doomsday argument. It's extremely unlikely that we should observe ourselves as among the 100 billion earliest people, so we should update heavily towards thinking the future is inevitably doomed
  • Simulation Hypothesis. (relatedly) If we're extremely certain we're in a simulation, then (under most theories of value and empirical beliefs) we should care more about the experiences of existing beings.
  • You
... (read more)
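The anthropic update in the second bullet can be sketched numerically (my own illustration, with invented numbers and the self-sampling assumption): start from a 50/50 prior between "doom soon" (200 billion humans ever) and "long future" (100 trillion), and update on being among the first 100 billion humans.

```python
# Doomsday-style Bayes update under self-sampling (all numbers illustrative).
first = 100e9                                   # observation: we're among the first 100 billion humans
totals = {"doom soon": 200e9, "long future": 100e12}
prior = {"doom soon": 0.5, "long future": 0.5}

# P(being among the first `first` humans | `total` humans ever) = first / total.
likelihood = {h: first / total for h, total in totals.items()}
evidence = sum(prior[h] * likelihood[h] for h in totals)
posterior = {h: prior[h] * likelihood[h] / evidence for h in totals}
print(posterior["doom soon"])   # ~0.998: the observation heavily favors "doom soon"
```

Whether this update is legitimate is exactly what's contested in the anthropics literature, but the mechanics are just Bayes' rule.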

What if your next car was 10 times more powerful? What new kinds of driver training, traffic rules, and safety features would you think are necessary? What kinds of public education, laws, and safety features are necessary when AI, genetic engineering, or robotics becomes 10 times more powerful? How do we determine the risks?
Bonus question: why is it important to have good analogies so the general public can understand the risks of technology?
 

I feel like a lot of EA charities are "reactive" in that they try to mitigate an issue without attempting to overcome it.

Take animal welfare for example: the main charities that are funded are mostly advocacy based and activism. While I am supportive of this approach, I don't think it will ultimately help animal welfare much by any order of magnitude in the long term. Instead, something will probably displace the need for animals IMO– like lab grown meat. Why don't EAs support basic research such as lab grown meat* as a means to displace the current state of factory farming? Sure, over a lifetime lab grown meat has a really low % chance of coming to fruition, but if it did (and with greater funding you can increase its chance of happening!), it would have orders of magnitude more impact for animal welfare than the current advocacy model.

*The same situation applies to climate change too. There's a trend in EA and now more general circles to "offset your carbon footprint" but again this feels like a mitigation/reactionary way of spending your money. I would much rather my money go to nuclear fusion research b/c if it worked out, it would have orders of magnitude more impact than simply mitigating my own carbon footprint

 

hope that makes some sense!

Why don't EAs support basic research such as lab grown meat* as a means to displace the current state of factory farming?

That's news to me. 😕 Animal Charity Evaluators recommends several charities that promote alternative proteins, such as the Material Innovation Initiative and the Good Food Fund. Although no longer an ACE recommended charity, the Good Food Institute is one of the leading orgs that funds and promotes alternative protein innovation, and is often recommended by EAs.

There's a trend in EA and now more general circles to "offset your carbon footprint" but again this feels like a mitigation/reactionary way of spending your money. I would much rather my money go to nuclear fusion research b/c if it worked out, it would have orders of magnitude more impact than simply mitigating my own carbon footprint

I'm not familiar with carbon offsetting as being a 'trend in EA' - as far as I'm aware the canonical EA treatment of this is Claire's piece arguing against it.

Similarly, if I look at the EA forum wiki page for climate change, every single bullet point is about research, and the first one is 'innovative approaches to clean energy' which includes nuclear.

I'm personally pretty skeptical of lab-grown meat after looking into it for a while (see here, here, and here). I do think some investment into the space makes sense for reasons similar to your argument, but I'm personally a bit skeptical of "I think the science of my current approach doesn't work and will never work but there's a small chance I'm wrong so it might make sense to work on it anyway" as a way to do science.*

My guess is that the future replacement for meat will not look like lab-grown mammalian cells, and if it does, how we get there will look... (read more)

Sorry, stupid question, but just to clarify, questions should be posted in this thread, or in the general “questions” section on the forum?

For the purpose of trying this thread, it would be nice to post questions as "Answers" to this post.[1] Although you're welcome to post a question on the Forum if you think that's better: you can see a selection of those here.

Not a stupid question! 

  1. ^

    The post is formatted as a "Question" post, which might have been a mistake on my part, as it means that I'm asking people to post questions in the form of "Answers" to the Question-post, and the terminology is super confusing as a result.

Is your altruism more effective now than it was four years ago? (Instead of the election question, "Are you better off now than you were four years ago?")

What would it look like for an organization or company to become more recognized as an 'EA' org/company? What might be good ways to become more integrated with the community (only if it is a good fit, that is, with high fidelity) and what does it mean to be more 'EA' in this manner?

I recognize that there is a lot of uncertainty/fuzziness with trying to definitively identify entities as 'EA'. It is hard for me to even know to whom to ask this question, so this comment is one of a few leads I have started.

I am generally curious about the organizational/leadership structure of "EA" as a movement. I am hesitant to detail/name the company as that feels like advertising (even though I do not actually represent the company), but some details without context:

  • Part of its efforts and investiture are aligned with reducing the risk of a potential x-risk (factor?) - aiming to develop brain-computer interfaces (BCI) that increase rather than hinder human agency.
  • Aims to use BCI to improve decision-making.
  • Donates 5% to effective charities (pulled from ~GiveWell) and engages employees in a 'giving game' to this end.
  • A for-profit company without external investors - a criterion they believe is necessary to be longterm focused on prioritizing human agency.

EAs experiencing ableism: How do you feel about the use of QALYs (Quality-Adjusted Life Year) and DALYs (Disability-Adjusted Life Year) to measure impact? Are there other concepts you would prefer?

There is a recent post and talk on Measuring Good Better. I found it really interesting to see how different organizations use WELLBYs, "moral weights", or something different.

Does being principled produce the same choice outcomes as being a long-term consequentialist?

Leadership circles[1] emphasize putting principles first. Utilitarianism rejects this approach: it focuses on maximizing outcomes, with little normative attention paid to the process (or, as the quip goes: the ends justify the means). This (apparent) distinction pits EA against conventional wisdom and, speaking from my experience as a group organizer,[2] is a turn-off.

However, this dichotomy seems false to me. I can easily imagine a conflict between a myopic utilitarian and a deontologist (e.g. the former might rig the lottery to send more money to charity).[3] I have more trouble imagining a conflict between a provident utilitarian and a principles-first person (e.g. cheating may help in the short term, but in the long term, I may be barred from playing the game).[4]

Even if principles sometimes butt heads (e.g. being kind vs. being honest), so can different choice outcomes (e.g. minimizing animal suffering vs. maximizing human flourishing). Both these differences are resolved by changing the question's parameters or definitions:[5] being dishonest is an unkindness; we need to take both sufferings into account.

All in all, it seems like both approaches face the same internal problems, the same resolutions, and could produce the same answer set. If this turns out to be true, there are a few possible consequences:

  • High confidence (>85%): With enough reflection, EA might develop 'EA principles' that are not focused on consequences but fundamentally aligned with EA.
  • Medium confidence (~55%): If EA develops these principles, EA can advertise them to current and prospective members, potentially attracting demographics that were fundamentally opposed to utilitarianism. 
  • Low confidence (~30%): If EAs adopt these principles, they may shift their primary focus to processes ('doing things right'), and move outcomes to secondary focuses. It adopts the motto: if you do things the right way, the right things will come.[6]
  1. ^

    I'm thinking of Stephen Covey's works "7 Habits of Highly Effective People" (1989) and "Principle-Centered Leadership" (1992). If these leadership models are outdated, please correct me. 

  2. ^

    When tabling for a new EA group, mentioning utilitarianism cast a shadow on a few (~40%) conversations. When I explained how we choose between lives we save every day, people seemed more empathetic, but it felt like a harder sell than it had to be.

  3. ^

    I would love for someone to do the proper math to see if this expected value works out. Quick maths are as follows (making assumptions along the way): assume the lottery prize is $100M with an 80% chance of getting caught; otherwise you make $200G per year, and you'd get 10 years in prison for rigging. EV of lottery rigging = winning profits + losing costs = 0.2*$100M + 0.8*(-$200G/yr * 10 yr) = $18.4M.

  4. ^

    I'm assuming that we live in a society that doesn't value cheating...

  5. ^

    This strategy is Captain Kirk's when solving the Kobayashi Maru.

  6. ^

    Its modus tollens comes to the same conclusion as utilitarianism: if you have the wrong consequences, you must have had the wrong processes. 
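For what it's worth, footnote 3's quick maths check out numerically, if "$200G" is read as $200,000 per year (an assumption on my part; the footnote's notation is ambiguous):

```python
# Checking footnote 3's expected value (assuming "200G" means $200,000 per year).
prize, p_keep = 100e6, 0.2              # $100M prize, 20% chance of getting away with it
salary, years = 200e3, 10               # 80% chance: lose 10 years of $200K/yr income in prison
ev = p_keep * prize + (1 - p_keep) * (-salary * years)
print(ev)                               # 18400000.0, i.e. the quoted $18.4M
```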

The most important principle is to maximize long-run utility. All else follows.

How could we change the fact that our lives and fate are controlled by a few tyrannical and greedy people?

What are the hidden costs and benefits to working full-time on x-risk reduction?  (Including research, policy, etc.)

I know EA isn't meant to be a political movement. But in a democratic society, everything is political.

How is EA affecting its corner of society? What power structures is EA reinforcing or weakening?

How many highly engaged EAs are there? In 2020, Ben Todd estimated there were about 2,666 (7:00 into the video). I can't find any info on how many there are now. Where would I find this, and/or how many highly engaged EAs are there now?

I don't think there's any single good definition of "highly engaged EA". Giving What We Can lists 8,771 people who have signed their pledge: https://www.givingwhatwecan.org/about-us/members

Are there any stats/tendencies on EAs on where they generally stand on the determinism vs free will debate?

Do you think this is relevant to EA?

Is there a good article and Youtube video that summarizes arguments for/against meat eating? Ie objective analysis that goes through the best arguments for all sides

Related to Status Quo Bias: If the status quo you're defending was considered an extreme opinion, would you still argue for it?

Trying to decide if this is a good example and how to phrase it: where it is the status quo to eat meat, few argue against it. Depending on culture, it is the status quo to:

  • Eat beef/pork/horse/dog/rodent/snail/insect
  • Not eat animals

The reasons people give for not eating some animals are inconsistent with what they do eat. If people made these decisions by reasoning, the cultural differences would be minor; the cultural status quo is what keeps them in place.

Is there a good article and Youtube video that summarizes arguments for/against meat eating? Ie objective analysis that goes through the best arguments for all sides

I really liked this one https://slatestarcodex.com/2019/12/11/acc-is-eating-meat-a-net-harm/ , it's not perfect, and I personally strongly disagree with their estimates of animal suffering, but I don't know of anything better.

If the status quo you're defending was considered an extreme opinion, would you still argue for it?

I don't think EA is arguing for the status quo, could you elaborate a bit?

It seems that AI safety discussions assume that once general intelligence is achieved, recursive self-improvement means that superintelligence is inevitable.

How confident are safety researchers about this point?

At some point, the difficulty of additional improvements exceeds the increase in intelligence, and the AI will eventually no longer be able to improve itself. Why do safety researchers expect this point to be a vast superintelligence rather than something only slightly smarter than a human?
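A toy recursion shows why both outcomes are on the table (purely illustrative, not a claim about real AI systems): whether self-improvement explodes or plateaus depends entirely on whether each gain makes the next improvement easier or harder.

```python
# Toy self-improvement model: capability x gains step(x) per round.
def run(step, x=1.0, rounds=60):
    for _ in range(rounds):
        x += step(x)
    return x

fast = run(lambda x: 0.1 * x)   # gains proportional to capability: compound growth, 1.1**60
slow = run(lambda x: 0.1 / x)   # gains shrink as capability grows: near-plateau
print(fast)                     # ~300
print(slow)                     # ~3.6
```

The disagreement among researchers is essentially over which regime real AI development resembles, which the slow/fast takeoff debate below gets at.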

Definitely not an expert, but I think there is still no consensus on "slow takeoff vs fast takeoff" (fast takeoff is sometimes referred to as FOOM).

It's a very important topic of disagreement, see e.g. https://www.lesswrong.com/posts/hRohhttbtpY3SHmmD/takeoff-speeds-have-a-huge-effect-on-what-it-means-to-work-1 

Why should EAs be risk-neutral? 

When people talk about being risk neutral are they only referring to DALYs/QALYs or are they also referring to monetary returns?

For example, if you plan to earn to give and can either take a high-paying salaried position or join a startup and both options have equal EV why shouldn't you have a preference for the less risky option? 

 

My understanding of why individuals should be risk-averse with respect to money is that money has diminishing marginal returns. Doesn't money also have diminishing marginal returns for helping other people so EAs should be somewhat risk averse when earning to give?
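The individual case can be illustrated with a log utility function, a standard stand-in for diminishing marginal returns (numbers invented, and note that here the gamble even has the higher expected money, $145K vs $100K, yet lower expected utility):

```python
import math

def log_utility(wealth: float) -> float:
    return math.log(wealth)

# A safe $100K salary vs. a startup gamble: 10% chance of $1M, 90% chance of $50K.
safe = log_utility(100_000)
gamble = 0.1 * log_utility(1_000_000) + 0.9 * log_utility(50_000)
print(safe > gamble)   # True: log utility prefers the safe option despite its lower EV in dollars
```

Your question is then whether the "wealth" that matters for an earning-to-give EA is their own bankroll (diminishing returns) or the pooled resources of everyone funding the same causes (roughly linear at individual scale), which is the point the reply below makes.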

Doesn't money also have diminishing marginal returns for helping other people so EAs should be somewhat risk averse when earning to give?

Personally, since it's very unlikely that I will become a mega-billionaire, I think that my contributions will be so marginal that returns can be considered linear. But you might be interested in https://forum.effectivealtruism.org/topics/risk-aversion 

Should I/we use the acronym HQALYs (Human QALYs)?

Assuming the language we use is not only descriptive but also performative, when I talk to others:

... I use “non-human animals” as a reminder that we are also animals and all of us deserve moral consideration

... I use “non-human primates” and “non-human simians” as a reminder that we belong to the same order/family

 

Following the same rationale, I also use "Human QALYs" to be explicit that I'm comparing the impact of different interventions in terms of human lives. When we talk about "lives with value to be measured and saved" while referring to human lives only (without being explicit about that, and assuming everyone will understand), we may unconsciously link the concepts "valuable lives" and "only human lives" in our minds.

 

On the other hand, I've never seen HQALY written anywhere else, and QALY/DALY are such widespread terms that I'm afraid we don't need an additional acronym.

 

What do you think? Shall I/we use QALY or HQALY?

What are the strongest arguments for believing that conscious experiences can have negative value?

Many people of sound mind choose assisted suicide in their old age and advanced illness.

What is the biggest challenge for humanity (in a general sense, because today's biggest threat may be AI, tomorrow's may be nuclear, etc.)?

The biggest challenge for humanity is to bring out the best in itself.
If we cannot inspire the public to be their best, we will not be able to get politicians, corporations, and governments to act in humanity's best interest.
This may not be as hard as some might think. "Just do it", "Keep calm and carry on", and the can-do spirit are examples of how we inspire people to do better. Safe driving is an example of how we can promote altruistic and cooperative behavior on roads around the world. See my post for more: https://forum.effectivealtruism.org/posts/7srarHqktkHTBDYLq/bringing-out-the-best-in-humanity

Surviving the next 100 years and building a wonderful future.

I’ve been hoping to join an EA group in Nassau, The Bahamas, but there doesn’t seem to be one in existence yet. Can someone please help me find the link to the requirements for setting up a new EA group? Thanks!

Is anyone in EA thinking about (or has anyone come across) the metacrisis?

For those not familiar with the term, this article can provide a rough idea of what it's about, while this article provides a more in-depth (though somewhat esoteric) exploration.

My questions to any EAs who are familiar with the concept (and the wider 'metamodern' perception of our current historical context) are:

  1. Do you think this is a good assessment of the fundamental drivers of some of the major challenges of our time?
  2. If you feel that the assessment is roughly correct, do you think it has any implications for cause prioritisation within EA?
  1. ^

    https://systems-souls-society.com/tasting-the-pickle-ten-flavours-of-meta-crisis-and-the-appetite-for-a-new-civilisation/

Has there been an effort to create an EA Day (or any cause-specific version) where there is a concerted push every year to get EA onto people's timelines and share the ideas in real life?

How do EA grantmakers take expert or peer opinions on decision-relevant claims into account? More precisely, if there's some claim X that's crucial to an EA grantmakers' decision and probabilistic judgements from others are available on X (e.g. from experts) -- how do EA grantmakers tend to update on those judgements?

Motivation: I suspect that in these situations it's common to just take a weighted average of the various credences and use that as one's new probability estimate. I have some strong reasons to think that this is incompatible with Bayesian updating (post coming soon).
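To make the suspected mismatch concrete (a standard observation about opinion pooling, with invented numbers, and assuming for the sake of illustration that the experts' evidence is independent): Bayesian updating on independent reports adds log-odds shifts, which generally disagrees with linearly averaging the stated probabilities.

```python
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

prior = 0.5
experts = [0.9, 0.9]            # two experts each report 90% credence

# Linear pooling: average the stated probabilities.
linear = sum(experts) / len(experts)

# Naive-Bayes pooling: treat each report as independent evidence,
# so each expert's log-odds shift from the prior adds to the prior's log-odds.
bayes = sigmoid(logit(prior) + sum(logit(p) - logit(prior) for p in experts))
print(linear, round(bayes, 3))  # 0.9 vs ~0.988: the two rules genuinely disagree
```

Two independent concurring experts should move a Bayesian further than either alone, whereas the linear average never leaves the interval spanned by the reports.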

Do you have a specific example in mind?

From what I see, there are many different kinds of EA grantmakers, and they seem to be using different processes, especially in the longtermist vs neartermist space.

I don't think there's a single general answer to "how do grantmakers update on expert judgment".

What are some things I should avoid doing if I want to be an effective altruist?

What are EA's norms regarding inaccurate quotations?

Do you have an example in mind? I don't think there are specific norms besides the standard forum norms.
I would say relevant quotes are: "assume good faith", "don't mislead", and "mistakes are expected and fine (please accept corrections, though)"

Elliot Temple (1y):
Wait, I was checking the quotes you gave and the third is taken out of context and misleading. It's the kind of thing that IMO would make a book fail a fact check. The original sentence is:

It did not say that mistakes are expected and fine in general; it said it specifically about misgendering and deadnaming, so it's not relevant to my question. Did you know the text you gave referred to misgendering when you told me it was relevant? Did you read the whole sentence? Did you consider, before using a quote, whether your use fit or conflicted with the original context (as should be a standard step every time)?

I don't understand what's going on (I've run into this kind of problem for years at my own forums – and made rules against it, and given warnings that people will be banned if they keep doing it, which has reduced it a lot – but I've never gotten good answers about why people do it). I understand that using a quote out of context involves more of a judgment call than changing the words does, so it's somewhat harder to avoid, but this still looks like an avoidable case to me.
Lizka (1y):
This isn't about mistakes in quotations specifically, but I agree with Lorenzo that "mistakes are expected and fine (please accept corrections, though)" — although taken out of context in this case — is a true norm on the Forum. You can see the same spirit here:

(I won't participate in this thread more, probably, but wanted to endorse this point, as I think it's important. I might tweak the Guide to norms post to reflect this better in the future.)
Lorenzo Buonanno (1y):
Thanks for checking the source! I think it gives a much better feel for the norms than my short comment.

I wouldn't hold forum comments to the same standard as a book. But mistakes also definitely happen in books, even the best ones. I quoted it to give an idea of (my interpretation of) the spirit of the norms, not of the letter. In my experience, that norm applies in general (but I am no expert on EA norms).

Basically, my interpretation is: "be kind to fellow humans", assume we're all doing our best to do the most good. From the other comment: I would say because we're all human, it might be a judgment call to tweak a quote for readability or to fit a character count. I think that EA cares a lot about correctness and encourages writing corrections and accepting them, but is also very pragmatic and collaborative. I really like that it encourages posting potentially wrong things and getting feedback; it seems to lead to great results.
Elliot Temple (1y):
So, in my mind, the number one purpose and requirement of quotes is accuracy. But in your mind, quotation marks can just be used for other things, like giving an idea about a spirit, without worrying much about accuracy? Like, a use of a quotation might not be literally true, but as long as the spirit of what you're doing seems good and accurate, that's good enough? I'm trying to understand the norms/values disagreement going on.

I don't understand. How does being human give one a reason to tweak a quote? Are you saying people tweak quotes on purpose because they like the tweaked version better and lack respect for quotation accuracy? And that seems OK to you? And you think that attitude is widespread? It is so foreign to me, and so clearly irrational to me, that I struggle to comprehend this.
Lorenzo Buonanno (1y):
I personally think that the purpose of text is to share information that's decision-relevant, and everything else is secondary. Being human gives a reason for making all sorts of mistakes/imprecise things; I think it's OK as long as the information is not misleading, and otherwise it's worth sending a (polite) correction.
Elliot Temple (1y):
Thank you for replying several times and sharing your perspective. I appreciate that. I think this kind of attitude to quotes, and some related widespread attitudes (where intellectual standards could be raised), is lowering the effectiveness of EA as a whole by over 20%. Would anyone like to have a serious discussion about this potential path to dramatically improving EA's effectiveness?
[anonymous] 1y
I would guess your expectations of how costly it is for people to be as precise as you wish are miscalibrated, i.e. it's significantly costlier for people to be that precise than you think (or than it is for you). What do you think?
Elliot Temple
1y
I think the cost/benefit ratio for this kind of accuracy is very good. The downsides are much, much larger than people realize/admit – it basically makes most of their conversations unproductive and prevents them from having very high quality or true knowledge that isn't already popular/standard (which leads to e.g. some of EA's funding priorities being incorrect). Put another way, it's a blocker for being persuaded of, and learning, many good ideas. The straightforward costs of accuracy go down a lot with practice and automatization – if people tried, they'd get better at it. Not misquoting isn't really that hard once you get used to it (e.g. copy/pasting quotes and then refraining from editing them is, in some senses, easy – people fail at that mainly because they are pursuing some other kind of benefit, not because the cost is too high, though there are some details to learn, like that Grammarly and spellcheck can be dangerous to accurate quotes). I agree it's hard initially to change mindsets to e.g. care about accuracy. Lots of ways of being a better thinker are hard initially, but I'd expect a rationality-oriented community like this to have some interest in putting effort into becoming better thinkers – at least e.g. comparing this with other options for improvement. Also, (unlike most imprecision) misrepresenting what people said is deeply violating. It's important that people get to choose their own words and speak for themselves. It's treating someone immorally to put words in their mouth, of your choice not theirs, without their consent. Thinking the words are close enough or similar enough doesn't make that OK – that's their judgment call to make, not yours. Assuming they won't disagree, and that you didn't make a mistake, shows a lack of humility, fallibilism, tolerance and respect for ideas different than your own, understanding that different cultures and mindsets exist, etc. (E.g., you could think to yourself, before misquoting, that the person you're going to misq
[anonymous] 1y
Hi Elliot, I find this topic interesting but I've already spent more time on this thread than I intended to, so unfortunately this will likely be my last comment here. Hope it's still useful data though.

It sounds like a summary of what you said (please feel free to correct me) is: the benefits of greater precision are much higher than most people think and are needed for learning certain important ideas; the cost of imprecision can be immoral and deeply violating to another person; the costs of learning are lower than they seem in the long term, particularly for technical people; social incentives can push away from precision in ways that make you a worse thinker; and there are other important issues as well.

In the abstract, I agree to a large extent with all of those, except for math or programming skills making textual precision or understanding easier in most relevant situations (I agree for literal copy-pasting, but I think that's a pretty small part of the issue).

But I don't think Lorenzo's quotation use was bad or inaccurate. It was a bit ambiguous whether the quotes were meant to be direct or not, and decreasing the ambiguity would very likely have helped you, but there are also costs to doing so; this seems like an edge case, and it's unclear to me how much, or whether, to update.

To respond to a different comment of yours: for direct quotes, I agree the number one purpose and requirement is accuracy. But I also think using quotes to convey the spirit of ideas is useful.
Elliot Temple
1y
Can't people either omit quotation marks around paraphrases or, failing that, at least clearly label them as paraphrases? Why does anyone need quotation marks around paraphrases to convey the spirit of ideas? How do quotation marks help convey spirit? And how is any reader supposed to know that the text in quotation marks is not a "direct" quote? There are standard practices for how to handle these things (bold added): https://www.thoughtco.com/indirect-quotation-writing-1691163

Back to sphor: Would you like to have a serious conversation or debate with me about another topic, or not at all?
[anonymous] 1y
Hi Elliot, this is just a quick reply.  I'm not currently interested in participating in the sort of debate you mean, sorry. For what it's worth though, I consider our exchanges to have been serious albeit brief and unstructured. 
Elliot Temple
1y
Relevant: https://forum.effectivealtruism.org/posts/gL7y22tFLKaTKaZt5/debate-about-biased-methodology-or-corentin-biteau-and?commentId=iFinowJ2XGWM6gidM
Elliot Temple
1y
Why are mistakes within quotations expected and fine? What processes cause good-faith mistakes within quotations, particularly when the quote can be copy/pasted rather than typed from paper? (I think typed-in quotes should be double-checked or else should carry a warning.) I was hoping that EA might have high standards about accuracy. I think that's important, rather than seeing an avoidable type of misinformation as merely expected and fine. Our culture in general holds quotations to a higher-than-normal standard, partly because misquotes put words in other people's mouths, so they're disrespectful and violating, and partly because they're false and you can just not do it. I was hoping EA, with its focus on rationality, might care more than average, but your response indicates caring less than average. When a quote can be copy/pasted, what good-faith processes result in wording changes? I think a norm should be established against editing quotes or typing paraphrases and then knowingly putting quote marks around them. I don't understand why so many people do it, including book authors, or what they're thinking or what exactly they're doing to generate the errors.

Which norms can/should wider culture adopt, from an EA perspective? Is it just donating at all and having a large moral circle of concern?

How do you apply isoelastic utility to real world consumption/income values? For, say, calculating "equivalent sacrifice" donation amounts for people with different incomes.

When I've tried to apply the formula, incomes as different as $10k and $80k both seem to equal "almost 2.0 utility", and I don't know what to do with this.
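Not sure which parameters the question used, but here's one sketch that reproduces the "almost 2.0" observation, assuming the CRRA/isoelastic form u(c) = (c^(1−η) − 1)/(1−η) with an illustrative η = 1.5 and incomes in raw dollars (both assumptions, not known from the question). With η > 1 the utility levels asymptote at 1/(η−1) = 2, which is why $10k and $80k both land near 2.0. For "equivalent sacrifice," though, what matters is utility differences from donating, not levels, and those stay well-behaved:

```python
import math

def u(c, eta=1.5):
    """Isoelastic (CRRA) utility; eta = 1 is the log-utility special case."""
    if eta == 1:
        return math.log(c)
    return (c ** (1 - eta) - 1) / (1 - eta)

def equivalent_donation(y_other, y_ref, d_ref, eta=1.5):
    """Donation out of income y_other that sacrifices as much utility as
    donating d_ref out of income y_ref (solved by bisection)."""
    target = u(y_ref, eta) - u(y_ref - d_ref, eta)
    lo, hi = 0.0, y_other * (1 - 1e-9)
    for _ in range(200):
        mid = (lo + hi) / 2
        if u(y_other, eta) - u(y_other - mid, eta) < target:
            lo = mid
        else:
            hi = mid
    return lo

# Levels compress toward the asymptote 1/(eta-1) = 2, as in the question:
print(round(u(10_000), 3), round(u(80_000), 3))  # 1.98 1.993
# But utility differences still pin down an equal-sacrifice donation:
# a $1,000 donation at $10k income matches roughly a $19.8k donation at $80k.
print(round(equivalent_donation(80_000, 10_000, 1_000)))
```

With η = 1 (log utility), equal sacrifice reduces to equal proportional donations (a 10% donation at $10k corresponds to exactly $8,000 at $80k); higher η makes the equal-sacrifice share rise with income.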

Why does the Forum have a ‘karma system’? Why was it called a ‘karma system’ rather than any other description? Is the karma system a truly accurate reflection of a person's input into discussions on the Forum?

I think the name "karma" comes from reddit.com; it caught on and was adopted by other internet forums as a name for the "fake internet points" that users get for posting.

It's definitely not an accurate reflection of a person's input into discussions on the forum, but having positive karma is a strong indicator that a user is not trolling/spamming.

Other than releasing anti-malaria (or similar diseases) gene drives, is there any other "physical" action that can be taken for less than a million dollars and has a chance greater than 5% of saving an enormous number of people?

Note that unilaterally using gene drives in this way is usually considered a really bad idea because of poisoning the well against further use. Not just by usual conservative bioethics types, but by the scientist who first proposed using CRISPR to affect wild populations like mosquitos: 
"Esvelt, whose work helped pave the way for Target Malaria’s efforts, is terrified, simply terrified, of a backlash between now and then that could derail it. This is hardly a theoretical concern. In 2002, anti-GMO hysteria led the government of Zambia to reject 35,000... (read more)

P
2y
I know that might be a problem, but I asked for other ideas that have at least a 5% chance of saving a lot of people, even if they are bad in expectation. The hope is that they can somehow be modified into good ones, and I still don't know whether that's the case for gene drives. When I get enough free time, I'll try to ask the researchers.

Should effective altruists be afraid of the popular dictum ‘the road to Hell is paved with the best of intentions’? In other words, how can we be sure that our interventions will be on the right side of history long after we're dead, and that they won't end up causing great misfortune despite our stated intention to do good?

If what you believe will benefit someone is different from what they believe they need, and you believe that what they think they need will not benefit them or will even harm them, is it altruistic to follow their beliefs to keep them happy, or to follow your own? (Assume that following your own beliefs wouldn't significantly compromise their autonomy, wellbeing, or happiness.)

How can we "make altruism great again"? (Not trying to be political. I just wish to ask how we can inspire people to be altruistic, including using people's longing for the good old days and other methods.)

Can individuals make a difference?

A favorite piece on this: Keeping Absolutes in Mind

I like thinking in terms of "there are some battles that almost nobody is fighting, so I can be one of the only people in the world advancing those areas", as opposed to, for example, trying to beat the stock market, which many, many smart people are already trying to do, all competing with each other (and with me).

Can individual drivers who practice safe driving make a difference? Of course they can! It's individual drivers who yield, who drive carefully and cautiously (as opposed to recklessly and aggressively). Collectively, enough safe drivers can create a cooperative atmosphere, and increase overall traffic efficiency. And driving attitudes and behaviors can be changed through education and incentives.

3 Related Questions

Answer by Larks · 2y
To give you an overly specific answer, presumably the AGI could realize the existence of the trigger and just keep the lights on while surrounding them with paperclips?
Answer by lincolnq · 2y
An entity powerful enough to threaten all humans on earth would most likely also be powerful enough to defeat the MAD mechanism.
Leopard
2y
Suppose we have 10 different MAD mechanisms. Wouldn't the defense become impossible at some point? Wouldn't the AI think at some point, "It is simpler to just complete the task without killing anybody"? EDIT: okay, if the task given to the AI is "calculate the 2^256th digit of Pi", then perhaps it'd rather risk the easier task of "before they certainly interrupt my Pi calculations, defeat 50 MAD threats and kill all people, then hopelessly proceed"
lincolnq
2y
Perhaps, but now you're substituting one risk for another, which is the risk that the MAD mechanisms trigger by accident, causing unintentional destruction of the earth.
Leopard
2y
Yeah, thanks for replying. I do now realize that it's more complex than that.
Comments (4)

Who is Phil, and why does everyone talk about how open he is?

I also think that the first person to post a question will be performing a public service by breaking the ice!

Winners of the small prize

We'll be privately messaging winners and explaining how to claim your prize. Expect a private message from us in the next few days.

Questions

Question | Why am I awarding it a prize? (Brief notes!)
Why is scope insensitivity considered a bias instead of just the way human values work? (Link)

This is very much in the spirit of the thread, asks a question others might be wondering about, and focuses on a topic that is pretty fundamental to effective altruism. 

🔎 Despite the fact that this question already has some answers, I think it could benefit from some more. 

'If I take EA thinking, ethics, and cause areas more seriously from now on, how can I cope with the guilt and shame of having been so ethically misguided in my previous life?' [...] (Link)

I think many people struggle with this.
Why does most AI risk research and writing focus on artificial general intelligence? Are there AI risk scenarios which involve narrow AIs? (Link)

Most AI risk conversations in EA do seem to be about general intelligence, and there’s not much discussion about why this is the case. I imagine lots of people are confused or deferring heavily to the crowd. Questions that get at what seems to be an unspoken assumption are often really useful. 

🔎 This is a question that might benefit from having more answers. 

Has anyone produced writing on being pro-choice and placing a high value on future lives at the same time? I’d love to read about how these perspectives interact! (Link)

Another question that others might share! I like that the question asks about a possible intersection of viewpoints rather than assuming either conclusion or that the intersection is incompatible.

Why don't we discount future lives based on the probability of them not existing? These lives might end up not being born, right?


 

I understand the idea of not discounting lives due to distance (distance in time as well as distance in space). Knowing a drowning child is 30km away is different from hearing from a friend that there is an x% chance of a drowning child 30km away. In the former, you know something exists; in the latter, there is a probability that it exists, and you apply a suitable level of confidence in your actions. (Link)

This question is genuine and important, and ends up highlighting a real confusion people have in conversations about “discounting” (“pure” discounting vs discounting for other reasons). 

Answers

Question (paraphrased) | Answer | Why am I awarding it a prize? (Brief notes!)
What level of existential risk would we need to achieve for existential risk reduction to no longer be seen as "important"?

What's directly relevant is not the level of existential risk, but how much we can affect it. (If existential risk were high but there was essentially nothing we could do about it, it would make sense to prioritize other issues.) Also relevant is how effectively we can do good in other ways. I'm pretty sure it costs less than 10 billion times as much (in expectation, on the margin) to save the world as to save a human life, which seems like a great deal. (I actually think it costs substantially less.) If it cost much more, x-risk reduction would be less appealing; the exact ratio depends on your moral beliefs about the future and your empirical beliefs about how big the future could be.

(Link)

The answer notices a confusion in the original question (that the level of existential risk determines whether we should prioritize it), and responds to the confusion (we should prioritize based on how much we can affect the level of risk, what we can do besides work on existential risk reduction, and our empirical and moral beliefs). Note that this is not to say that the original question is bad — the whole point of this thread is to clarify beliefs and be allowed to ask anything.

I also like that this answer is grounded in an example scenario (if the risk were really high).

Does anyone have a good list of books related to existential and global catastrophic risk? This doesn't have to just include books on X-risk / GCRs in general, but can also include books on individual catastrophic events, such as nuclear war.  [...]

In no particular order. I'll add to this if I think of extra books. [... LIST]

(Link)

It’s good to see someone putting together a collection, and the list has lots of relevant books, sorted by topic!
Why is scope insensitivity considered a bias instead of just the way human values work?

Not a philosopher, but scope sensitivity follows from consistency (either in the sense of acting similarly in similar situations, or maximizing a utility function). Suppose you're willing to pay $1 to save 100 birds from oil; if you would do the same trade again at a roughly similar rate (assuming you don't run out of money) your willingness to pay is roughly linear in the number of birds you save.

Scope insensitivity in practice is relatively extreme; in the original study, people were willing to pay $80 for 2000 birds and $88 for 200,000 birds. So if you think this represents their true values, people were willing to pay $0.04 per bird for the first 2000 birds but only $0.00004 per bird for the next 198,000 birds. This is a factor-of-1000 difference; most of the time when people have this much variance in price, they are either being irrational, or there are huge diminishing returns and they really value something else that we can identify. For example, if someone values the first 2 movie tickets at $1000 each but further movie tickets at only $1, maybe they really enjoy the experience of going with a companion, and the feeling of happiness is not increased by a third ticket. So in the birds example it seems plausible that most people value the feeling of having saved some birds.
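The per-bird figures above can be checked directly (a quick sketch of the arithmetic; the $80/$88 numbers are from the study as quoted):

```python
# Implied marginal willingness to pay from the original study's numbers
wtp_2k, wtp_200k = 80.0, 88.0                  # $ for 2,000 vs 200,000 birds
per_bird_first = wtp_2k / 2_000                # value of the first 2,000 birds
per_bird_next = (wtp_200k - wtp_2k) / 198_000  # value of the next 198,000 birds

print(per_bird_first)                          # 0.04
print(round(per_bird_first / per_bird_next))   # 990, i.e. roughly a 1000x drop
```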

Why should you be consistent? One reason is the triage framing, which is given in Replacing Guilt. Another reason is the money-pump; if you value birds at $1 per 100 and $2 per 1000, and are willing to make trades in either direction, there is a series of trades that causes you to lose both $ and birds.

All of this relies on you caring about consequences somewhat. If your morality is entirely duty-based or has some other foundation, there are other arguments but they probably aren't as strong and I don't know them.

(Link)

The answer is pretty thorough. The birds study is not my favorite example, but I think it serves as a good illustration here, and I appreciate the note about diminishing returns. The links in the answer are valuable, and allow people wondering about this to explore on their own. I also like that the answer is caveated (“not a philosopher”).
Does anyone know why the Gates Foundation doesn't fill the GiveWell top charities' funding gaps?

I wrote a post about this 7 years ago! Still roughly valid.

(Link)

[Joint prize for these two comments.] 


 

These comments link to a relevant post (written by the commenter) and update the old content with new information. Moreover, the answer is not what you might expect and shares lots of interesting context. 

One recent paper suggests that an estimated additional $200–328 billion per year is required for the various measures of primary care and public health interventions from 2020 to 2030 in 67 low-income and middle-income countries, and that this will save 60 million lives. But if you look at just the amount needed in low-income countries for health care ($396B) and divide by the total 16.2 million deaths averted by that, it suggests an average cost-effectiveness of ~$25k/death averted.
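A quick check of that division (figures as quoted from the paper; a back-of-the-envelope sketch, not a careful cost-effectiveness estimate):

```python
low_income_need = 396e9  # ~$396B needed for health care in low-income countries
deaths_averted = 16.2e6  # deaths averted by that spending, per the paper

cost_per_death = low_income_need / deaths_averted
print(round(cost_per_death))  # 24444, i.e. ~$25k per death averted
```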

Other global health interventions can be similarly or more effective: a 2014 Lancet article estimates that, in low-income countries, it costs $4,205 to avert a death through extra spending on health[22]. Another analysis suggests that this trend will continue and from 2015-2030 additional spending in low-income countries will avert a death for $4,000-11,000[23].

For comparison, in high-income countries, governments spend $6.4 million to prevent a death (a measure called “value of a statistical life”)[24]. This is not surprising given that the poorest countries spend less than $100 per person per year on health on average, while high-income countries spend almost $10,000 per person per year[25].

GiveDirectly is a charity that can productively absorb very large amounts of donations at scale, because they give unconditional cash transfers to extremely poor people in low-income countries. A Cochrane review suggests that such unconditional cash transfers “probably or may improve some health outcomes”.[21] One analysis suggests that cash transfers are roughly equivalent in effectiveness to averting a death for on the order of $10k.

So essentially, cost-effectiveness doesn't drop off sharply after GiveWell's top charities are 'fully funded'; one could spend billions and billions at similar cost-effectiveness, while Gates only has ~$100B and spends only ~$5B a year.

(Link)

It seems that AI safety discussions assume that once general intelligence is achieved, recursive improvement means that superintelligence is inevitable.
 

How confident are safety researchers about this point?
 

At some point, the difficulty of additional improvements exceeds the increase in intelligence and the AI will eventually no longer be able to improve itself. Why do safety researchers expect this point to be a vast superintelligence rather than something only slightly smarter than a human?

Definitely not an expert, but I think there is still no consensus on "slow takeoff vs fast takeoff" (fast takeoff is sometimes referred to as FOOM).

It's a very important topic of disagreement, see e.g. https://www.lesswrong.com/posts/hRohhttbtpY3SHmmD/takeoff-speeds-have-a-huge-effect-on-what-it-means-to-work-1 

(Link)

This is another case where the answer helpfully addresses a potential confusion in the original question, and links to useful resources on the topic. 

How could we change the fact that billions of lives and the fate of the earth are controlled by a small group of tyrannical and greedy people?
