I invite you to ask anything you’re wondering about that’s remotely related to effective altruism. There’s no such thing as a question too basic.

Try to ask your first batch of questions by Monday, October 17 (so that people who want to answer questions can know to make some time around then).

Everyone is encouraged to answer (see more on this below). There’s a small prize for questions and answers. [Edit: prize-winning questions and answers are announced here.]

This is a test thread — we might try variations on it later.[1]

How to ask questions

Ask anything you’re wondering about that has anything to do with effective altruism.

More guidelines:

  1. Try to post each question as a separate "Answer"-style comment on the post.
  2. There’s no such thing as a question too basic (or too niche!).
  3. Follow the Forum norms.[2]

I encourage everyone to view asking questions that you think might be “too basic” as a public service; if you’re wondering about something, others might, too.

Example questions

  • I’m confused about Bayesianism; does anyone have a good explainer?
  • Is everyone in EA a utilitarian?
  • Why would we care about neglectedness?
  • Why do people work on farmed animal welfare specifically vs just working on animal welfare?
  • Is EA an organization?
  • How do people justify working on things that will happen in the future when there’s suffering happening today?
  • Why do people think that forecasting or prediction markets work? (Or, do they?)

How to answer questions

Anyone can answer questions, and there can (and should) be multiple answers to many of the questions. I encourage you to point people to relevant resources — you don’t have to write everything from scratch!

Norms and guides:

  • Be generous and welcoming (no patronizing).
  • Honestly share your uncertainty about your answer.
  • Feel free to give partial answers or point people to relevant resources if you can’t or don’t have time to give a full answer.
  • Don’t represent your answer as an official answer on behalf of effective altruism.
  • Keep to the Forum norms.

You should feel free and welcome to vote on the answers (upvote the ones you like!). You can also give answers to questions that already have an answer, or reply to existing answers, especially if you disagree.

The (small) prize

This isn’t a competition, but just to help kick-start this thing (and to celebrate excellent discussion at the end), the Forum team will award $100 each to my 5 favorite questions, and $100 each to my 5 favorite answers (questions posted before Monday, October 17, answers posted before October 24).

I’ll post a comment on this post with the results, and edit the post itself to list the winners. [Edit: prize-winning questions and answers are announced here.]


Maybe don’t ask all of these, as they’re not quite related to EA, but this is sort of what I want the comment section of this post to be like. Source.
  1. ^

     Your feedback is very welcome! We’re considering trying out themed versions in the future; e.g. “Ask anything about cause prioritization” or “Ask anything about AI safety.”

    We’re hoping this thread will help get clarity and good answers, counter some impostor syndrome that exists in the community (see 1 and 2), potentially rediscover some good resources, and generally make us collectively more willing to ask about things that confuse us.

  2. ^

     If I think something is rude or otherwise norm-breaking, I’ll delete it.

79 Answers

Does anyone know why the Gates Foundation doesn't fill the GiveWell top charities' funding gaps?

Could you post this as a new forum post rather than a link to a Google doc? I think it's a question that gets asked a lot, and it would be good to have an easy-to-read post to link to.

EdoArad (2mo):
Agree! Hauke, let me know if you'd want me to do that on your behalf (say, using admin permissions to edit that previous post to add the doc content) if it'll help :)
Hauke Hillebrandt (2mo):
Yes, that's fine.
EdoArad (1mo):
Edited to include the text. Did only a little bit of formatting, and added the appendix as is, so it's not perfect. Let me know if you have any issues, requests, or what not :)

One recent paper suggests that an estimated additional $200–328 billion per year is required for various primary care and public health interventions from 2020 to 2030 in 67 low-income and middle-income countries, and that this would save 60 million lives. But if you look at just the amount needed in low-income countries for health care - $396B - and divide by the 16.2 million deaths that spending averts, it suggests an average cost-effectiveness of ~$25k/death averted.

Other global health interventions can be similarly or more effecti... (read more)
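The back-of-envelope division above can be sanity-checked in a couple of lines (a sketch using only the figures quoted in this answer; the underlying estimates come from the cited paper):

```python
# Figures quoted in the answer above (illustrative, not authoritative)
health_care_cost = 396e9   # $396B needed in low-income countries for health care
deaths_averted = 16.2e6    # 16.2 million deaths averted by that spending

cost_per_death_averted = health_care_cost / deaths_averted
print(f"~${cost_per_death_averted:,.0f} per death averted")  # ~$24,444, i.e. roughly $25k
```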


This is a great question, and the same should be asked of governments (as in: "why doesn't the UK aid budget simply all go to mosquito nets?")

A likely explanation for why the Gates Foundation doesn't give to GiveWell's top charities is that those charities don't currently have much room for more funding (GiveWell had to roll over funding last year because they couldn't spend it all. A recent blog post suggests they may have more room for funding soon: https://blog.givewell.org/2022/07/05/update-on-givewells-funding-projections/)

A likely explanation for why ... (read more)

When I read Critiques of EA that I want to read, one very concerning section seemed to be "People are pretty justified in their fears of critiquing EA leadership/community norms."

1) How seriously is this concern taken by those that are considered EA leadership, major/public facing organizations, or those working on community health? (say, CEA, OpenPhil, GiveWell, 80000 hours, Forethought, GWWC, FHI, FTX) 

2a) What plans and actions have been taken or considered?
2b) Do any of these solutions interact with the current EA funding situation and distribution? Why/why not?

3) Are there publicly available compilations of times where EA leadership or major/public facing organizations have made meaningful changes as a result of public or private feedback?

(Additional note: there were a lot of publicly supportive comments[1] on the "Democratising Risk - or how EA deals with critics" post, yet it seems like the overall impression was that despite these public comments, the author was disappointed by what came of it. It's unclear whether the recent Criticism/Red-teaming contest was a result of these events, though it would be useful to know which organizations considered or adopted any of the suggestions listed[2] or alternate strategies to mitigate concerns raised, and the process behind this consideration. I use this as an example primarily because it was a higher-profile post that involved engagement from many who would be considered "EA Leaders".)

  1. ^

    1, 2, 3, 4

  2. ^

    "EA needs to diversify funding sources by breaking up big funding bodies and by reducing each orgs’ reliance on EA funding and tech billionaire funding, it needs to produce academically credible work, set up whistle-blower protection, actively fund critical work, allow for bottom-up control over how funding is distributed, diversify academic fields represented in EA, make the leaders' forum and funding decisions transparent, stop glorifying individual thought-leaders, stop classifying everything as info hazards…amongst other structural changes."

Thanks for asking this. I can chime in, although obviously I can't speak for all the organizations listed, or for "EA leadership." Also, I'm writing as myself — not a representative of my organization (although I mention the work that my team does). 

  1. I think the Forum team takes this worry seriously, and we hope that the Forum contributes to making the EA community more truth-seeking in a way that disregards status or similar phenomena (as much as possible). One of the goals for the Forum is to improve community norms and epistemics, and this (criticism of established ideas and entities) is a relevant dimension; we want to find out the truth, regardless of whether it's inconvenient to leadership. We also try to make it easy for people to share concerns anonymously, which I think makes it easier to overcome these barriers.
    1. I personally haven't encountered this problem (that there are reasons to be afraid of criticizing leadership or established norms) — no one ever hinted at this, and I've never encountered repercussions for encouraging criticism, writing some myself, etc. I think it's possible that this happens, though, and I also think it's a problem even if people in the commu
... (read more)

Are there publicly available compilations of times where EA leadership or major/public facing organizations have made meaningful changes as a result of public or private feedback?

Some examples here: Examples of someone admitting an error or changing a key conclusion.

pseudonym (2mo):
Thanks for the link! I think most examples in the post do not include the part about "as a result of public or private feedback", though I think I communicated this poorly. My thought process behind going beyond a list of mistakes and changes to including a description of how they discovered the issue or the feedback that prompted it[1] is that doing so may be more effective at allaying people's fears of critiquing EA leadership. For example, while mistakes and updates are documented, if you were concerned about, say, gender diversity (~75% men in senior roles [https://www.openphilanthropy.org/team/]) in the organization,[2] but you were an OpenPhil employee or someone receiving money from OpenPhil, would the contents of the post you linked [https://forum.effectivealtruism.org/posts/shM9ceKYZBGpS5FJw/examples-of-someone-admitting-an-error-or-changing-a-key][3] actually make you feel comfortable raising these concerns?[4] Or would you feel better if there was an explicit acknowledgement that someone in a similar situation had previously spoken up and contributed to positive change? I also think curating something like this could be beneficial not just for the EA community, but also for leaders and organizations who have a large influence in this space. I'll leave the rest of my thoughts in a footnote to minimize derailing the thread, but would be happy to discuss further elsewhere with anyone who has thoughts or pushbacks about this.[5]

  1. ^

     Anonymized as necessary

  2. ^

     I am not saying that I think OpenPhil in fact has a gender diversity problem (is 3/4 men too much? what about 2/3? what about 3/5? Is this even the right way of thinking about this question?), nor am I saying that people working in OpenPhil or receiving their funding don't feel comfortable voicing concerns. I am not using OpenPhil as an e

Why is scope insensitivity considered a bias instead of just the way human values work?

Quoting Kelsey Piper:

If I tell you “I’m torturing an animal in my apartment,” do you go “well, if there are no other animals being tortured anywhere in the world, then that’s really terrible! But there are some, so it’s probably not as terrible. Let me go check how many animals are being tortured.”

(a minute later)

“Oh, like ten billion. In that case you’re not doing anything morally bad, carry on.”

I can’t see why a person’s suffering would be less morally significant depending on how many other people are suffering. And as a general principle, arbitrarily bounding variables because you’re distressed by their behavior at the limits seems risky.

Not a philosopher, but scope sensitivity follows from consistency (either in the sense of acting similarly in similar situations, or maximizing a utility function). Suppose you're willing to pay $1 to save 100 birds from oil; if you would do the same trade again at a roughly similar rate (assuming you don't run out of money), your willingness to pay is roughly linear in the number of birds you save.

Scope insensitivity in practice is relatively extreme; in the original study, people were willing to pay $80 for 2000 birds and $88 for 200,000 birds. So if you ... (read more)
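To see how extreme the study's numbers are, it helps to compute the implied willingness to pay per bird (a quick sketch; the dollar figures are the ones reported above):

```python
# Willingness-to-pay figures from the study quoted above
wtp = {2_000: 80.0, 200_000: 88.0}  # birds saved -> total $ offered

for birds, dollars in wtp.items():
    print(f"{birds:>7,} birds: ${dollars:.0f} total, ${dollars / birds:.5f} per bird")

# A scope-sensitive (roughly linear) valuation would keep the per-bird figure
# constant; here it falls by a factor of ~90 ($0.04 vs ~$0.00044) when the
# number of birds rises 100x.
```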

P (2mo):
I think the money-pump argument is wrong. You are practically assuming the conclusion. A scope insensitive person would negatively value the total number of bird deaths, or maybe positively value the number of birds alive. So that each death is less bad if other birds also die. In this case it doesn't make sense to talk about $1 per 100 avoided deaths in isolation.
Thomas Kwa (2mo):
This doesn't follow for me. I agree that you can construct some set of preferences or utility function such that being scope-insensitive is rational, but you can do that for any policy.

Two empirical reasons not to take the extreme scope neglect in studies like the 2,000 vs 200,000 birds one as directly reflecting people's values.

First, the results of studies like this depend on how you ask the question. A simple variation which generally leads to more scope sensitivity is to present the two options side by side, so that the same people would be asked both about 2,000 birds and about the 200,000 birds (some call this "joint evaluation" in contrast to "separate evaluation"). Other variations also generally produce more scope sensitive resu... (read more)

Dan_Keys (1mo):
A passage from Superforecasting: Note: in the other examples studied by Mellers & colleagues (2015) [https://stanford.edu/~knutson/nfc/mellers15.pdf], regular forecasters were less sensitive to scope than they should've been, but they were not completely insensitive to scope, so the Assad example here (40% vs. 41%) is unusually extreme.

Hm, I think that most of the people who participated in this experiment: 

three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88. This is scope insensitivity or scope neglect: the number of birds saved—the scope of the altruistic action—had little effect on willingness to pay.

would agree after the results were shown to them that they were doing something irrational that they wouldn't endorse if aware of it. (Ex... (read more)

I think scope insensitivity could be a form of risk aversion over the difference you make in the world ("difference-making"), or is at least related to it. I explain here why I think that risk aversion over the difference you make is irrational even though risk aversion over states of the world is not.

I think scope neglect is basically not a bias in the way confirmation bias is, and anyone claiming otherwise is presupposing linear aggregation of welfare. From a thing I wrote recently:

Scope neglect is not a cognitive bias like confirmation bias. I can want there to be ≥80 birds saved, but be indifferent about larger numbers: this does not violate the von Neumann-Morgenstern axioms (nor any other axiomatic systems that underlie alternatives to utility theory that I know of). Similarly, I can most highly value there being exactly 3 flowers in the vase o

... (read more)
Thomas Kwa (2mo):
Anything is VNM-consistent [https://www.lesswrong.com/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-entail-goal-directed-behavior] if your utility function is allowed to take universe-histories or sequences of actions. So you will have to make some assumptions.

Various social aggregation theorems (e.g. Harsanyi's) show that "rational" people must aggregate welfare additively.

(I think this is a technical version of Thomas Kwa's comment.)

To answer this question in short: it is so because it's innate. Like any other bias, scope insensitivity comes from within, in the case of an individual as well as an organisation run by individuals. We may generalise it as a product of human values because of the long-running history of constant 'Self-Value' teachings (not the spiritual ones). But there will always be a disparity, given the ever-evolving nature of human values, especially in the current era.
 

--------


On the contrary, most of the time, I do consider scope insensitivity as... (read more)

There's a lot of interesting writing about the evolutionary biology and evolutionary psychology of genetic selfishness, nepotism, and tribalism, and why human values descriptively focus on the sentient beings that are more directly relevant to our survival and  reproductive fitness -- but that doesn't mean our normative or prescriptive values should follow whatever natural selection and sexual selection programmed us to value.

P (2mo):
Then what does scope sensitivity follow from?
Geoffrey Miller (2mo):
Scope sensitivity, I guess, is the triumph of 'rational compassion' (as Paul Bloom talks about it in his book Against Empathy), quantitative thinking, and moral imagination, over human moral instincts that are much more focused on small-scope, tribal concerns. But this is an empirical question in human psychology, and I don't think there's much research on it yet. (I hope to do some in the next couple of years though).
P (2mo):
That explanation is a bit vague, I don't understand what you mean. By "quantitative thinking" do you mean something like having a textual length simplicity prior over moralities? By triumph of moral imagination do you mean somehow changing the mental representation of the world you are evaluating so that it represents better the state of the world? Why do you call it a triumph (implying it's good) over small-scope concerns? Why do you say this is an empirical question? What do you plan on testing?

Why does most AI risk research and writing focus on artificial general intelligence? Are there AI risk scenarios which involve narrow AIs?

Looking at your profile I think you have a good idea of answers already, but for the benefit of everyone else who upvoted this question looking for an answer, here's my take:

Are there AI risk scenarios which involve narrow AIs?

Yes, a notable one being military AI i.e. autonomous weapons (there are plenty of related posts on the EA forum). There are also multipolar failure modes on risks from multiple AI-enabled superpowers instead of a single superintelligent AGI.

Why does most AI risk research and writing focus on artificial general intelligence?

A misalign... (read more)

  1. "AGI" is largely an imprecisely-used initialism: when people talk about AGI, we usually don't care about generality and instead just mean human-level AI. It's usually correct to implicitly substitute "human-level AI" for "AGI" outside of discussions of generality. (Caveat: "AGI" has some connotations of agency.)
  2. There are risk scenarios with narrow AI, including catastrophic misuse, conflict (caused or exacerbated by narrow AI), and alignment failure. On alignment failure, there are some good stories. Each of these possibilities is considered reasonab
... (read more)

What happens when we create AI companions for children that are more “engaging” than humans? Would children stop making friends and prefer AI companions?
What happens when we create AI avatars of mothers that are as or more “engaging” to babies than real mothers, and people start using them to babysit? How might that affect a baby’s development?
What happens when AI becomes as good as an average judge at examining evidence, arguments, and reaching a verdict?
 

What are the most ambitious EA projects that failed?

If we're encouraged to be more ambitious, it would be nice to have a very rough idea of how cost-effective ambition is itself. Essentially, I'd love to find or arrive at an intuitive/quantitative estimate of the following variables:

  • [total # of particularly 'ambitious' past EA projects[1]]
  • [total # (or value) of successfwl projects in the same reference class]

In other words, is the reason why we don't see more big wins in EA that people aren't ambitious enough, or are big wins just really unlikely? Are we bottlenecked by ambition?

For this reason, I think it could be personally[2] valuable to see a list,[3] one that tries hard to be comprehensive, of failed, successfwl, and abandoned projects. Failing that, I'd love to just hear anecdotes.

  1. ^

    Carrick Flynn's political campaign is a prototypical example. Others include CFAR, Arbital, RAISE. Other ideas include published EA-inspired books that went under the radar, papers that intended to persuade academics but failed, or even just earning-to-give-motivated failed entrepreneurs, etc.

  2. ^

    I currently seem to have a disproportionately high prior on the "hit rate" for really high ambition, just because I know some success stories (e.g. Sam Bankman-Fried), and this is despite the fact that I don't see much extreme ambition in the water generally.

  3. ^

    Such a list could also be usefwl for publicly celebrating failure and communicating that we're appreciative of people who risked trying. : )

Why hasn't there been a consensus/debate between people with contradicting views on AGI timelines/safety?

I know almost nothing about ML/AI and I don't think I can form an opinion on my own, so I try to base my opinion on the opinions of more knowledgeable people that I trust and respect. However, what I find problematic is that those opinions vary dramatically, while it is not clear why those people hold their beliefs. I also don't think I have enough knowledge in the area to be able to extract that information from people myself: e.g. if I talk to a knowledgeable 'AGI soon and bad' person they would very likely convince me of their view, and the same would happen if I talk to a knowledgeable 'AGI not soon and good' person. Wouldn't it be a good idea to have debates between people with those contradicting views, figure out what the cruxes are, and write them down? I understand that some people have vested interests in one side of the question - for example, a CEO of an AI company may not gain much from such a debate and thus refuse to participate - but I think there are many reasonable people who would be willing to share their opinion and hear other people's arguments. Forgive me if this has already been done and I have missed it (but I would appreciate it if you can point me to it).

  1. OpenPhil has commissioned various reviews of its work, e.g. on power-seeking AI.
  2. Less formal, but there was this facebook debate between some big names in AI.

Overall, I think a) this would be cool to see more of and b) it would be a service to the community if someone collected all the existing examples together.

Not exactly what you're describing, but MIRI and other safety researchers did the MIRI conversations and also sort of debated at events. They were helpful and I would be excited about having more, but I think there are at least three obstacles to identifying cruxes:

  • Yudkowsky just has the pessimism dial set way higher than anyone else (it's not clear that this is wrong, but this makes it hard to debate whether a plan will work)
  • Often two research agendas are built in different ontologies, and this causes a lot of friction especially when researcher A's ontol
... (read more)

The debate on this subject has been ongoing between individuals who are within or adjacent to the EA/LessWrong communities (see posts that other comments have linked and other links that are sure to follow). However, these debates often are highly insular and primarily are between people who share core assumptions about:

  1. AGI being an existential risk with a high probability of occurring
  2. Extinction via AGI having a significant probability of occurring within our lifetimes (next 10-50 years)
  3. Other extinction risks (e.g pandemics or nuclear war) not likely m
... (read more)

There was a prominent debate between Eliezer Yudkowsky and Robin Hanson back in 2008 which is a part of the EA/rationalist communities' origin story, link here: https://wiki.lesswrong.com/index.php?title=The_Hanson-Yudkowsky_AI-Foom_Debate

Prediction is hard, and reading the debate from the vantage point of 14 years in the future, it's clear that in many ways the science and the argument have moved on. But it's also clear that Eliezer made better predictions than Robin Hanson did, in a way that inclines me to try to learn as much of his worldview as possible so I can analyze other arguments through that frame.

leosn (1mo):
This link could also be useful for learning how Yudkowsky & Hanson think about the issue: https://intelligence.org/ai-foom-debate [https://intelligence.org/ai-foom-debate] Essentially, Yudkowsky is very worried about AGI ('we're dead in 20-30 years' worried) because he thinks that progress on AI overall will rapidly accelerate as AI helps us make further progress. Hanson was (is?) less worried.

What level of existential risk would we need to achieve for existential risk reduction to no longer be seen as "important"?

What's directly relevant is not the level of existential risk, but how much we can affect it. (If existential risk was high but there was essentially nothing we could do about it, it would make sense to prioritize other issues.) Also relevant is how effectively we can do good in other ways. I'm pretty sure it costs less than 10 billion times as much (in expectation, on the margin) to save the world as to save a human life, which seems like a great deal. (I actually think it costs substantially less.) If it cost much more, x-risk reduction would be less appealing; the exact ratio depends on your moral beliefs about the future and your empirical beliefs about how big the future could be.

pseudonym (2mo):
Thanks! Presumably both are relevant, or are you suggesting if we were at existential risk levels 50 orders of magnitude below today and it was still as cost-effective as it is today to reduce existential risk by 0.1% you'd still do it?
Zach Stein-Perlman (2mo):
I meant risk reduction in the absolute sense, where reducing it from 50% to 49.9% or from 0.1% to 0% is a reduction of 0.1%. If x-risk was astronomically smaller, reducing it in absolute terms would presumably be much more expensive (and if not, it would only be able to absorb a tiny amount of money before risk hit zero).
pseudonym (2mo):
I'm not sure I follow the rationale of using absolute risk reduction here: if you drop existential risk from 50% to 49.9% for 1 trillion dollars, that's less cost-effective than if you drop existential risk from 1% to 0.997% at 1 trillion dollars, even though one is a 0.1% absolute reduction and the other is a 0.003% absolute reduction. So if you're happy to do a 50% to 49.9% reduction at 1 trillion dollars, would you not be similarly happy to go from 1% to 0.997% for 1 trillion dollars? (If yes, what about 1e-50 to 9.97e-51?)
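Concrete numbers may help separate the two framings in this exchange: proportional (relative) reduction versus absolute probability mass removed (a sketch using the hypothetical figures from these comments):

```python
cost = 1e12  # the hypothetical $1 trillion spend from the comments above

for before, after in [(0.50, 0.499), (0.01, 0.00997)]:
    absolute = before - after        # probability mass removed
    relative = absolute / before     # proportional change
    dollars_per_point = cost / (absolute * 100)  # $ per absolute percentage point
    print(f"{before:.1%} -> {after:.3%}: {relative:.1%} relative reduction, "
          f"${dollars_per_point:,.0f} per percentage point")
```

The two scenarios are similar in relative terms (0.2% vs. 0.3%), but differ by more than an order of magnitude in dollars per absolute percentage point, which is the quantity a straightforward expected-value calculation cares about.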

What is the strongest ethical argument you know for prioritizing AI over other cause areas? 

I'd also be very interested in the reverse of this. Is there anyone who has thought very hard about AI risk and decided to de-prioritise it?

I think Transformative AI is unusually powerful and dangerous relative to other things that can plausibly kill us or otherwise drastically affect human trajectories, and many of us believe AI doom is not inevitable. 

I think it's probably correct for EAs to focus on AI more than other things.

Other plausible contenders (some of which I've worked on) include global priorities research, biorisk mitigation, and moral circle expansion. But broadly a) I think they're less important or tractable than AI, b) many of them are entangled with AI (e.g. global priorities research that ignores AI is completely missing the most important thing).

Lizka (1mo):
I largely agree with Linch's answer (primarily: that AI is really likely very dangerous), and want to point out a couple of relevant resources in case a reader is less familiar with some foundations for these claims:

  • The 80,000 Hours problem profile for AI [https://80000hours.org/problem-profiles/artificial-intelligence/] is pretty good, and has lots of other useful links
  • This post is also really helpful, I think: Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover [https://forum.effectivealtruism.org/posts/Y3sWcbcF7np35nzgu/without-specific-countermeasures-the-easiest-path-to-1]
  • More broadly, you can explore a lot of discussion on the AI risk topic page in the EA Forum Wiki [https://forum.effectivealtruism.org/topics/ai-risk]

Thank you for asking this! Some fascinating replies!

A related question:

Considering other existential risks like engineered pandemics, etc., is there an ethical case for continuing to escalate the advancement of AI development, despite the possibly-pressing risk of unaligned AGI, in order to address or mitigate other risks - for example, developing better vaccines or increasing the rate of progress in climate technology research?

[I'll be assuming a consequentialist moral framework in this response, since most EAs are in fact consequentialists. I'm sure other moral systems have their own arguments for (de)prioritizing AI.]

Almost all the disputes on prioritizing AI safety are really epistemological, rather than ethical; the two big exceptions being a disagreement about how to value future persons, and one on ethics with very high numbers of people (Pascal's Mugging-adjacent situations).

I'll use the importance-tractability-neglectedness (ITN) framework to explain what I mean. The ITN... (read more)

Reasonable people think it has the most chance of killing all of us and ending future conscious life. Compared to other risks it is bigger, compared to other cause areas it will extinguish more lives.

Ula (2mo):
"Reasonable people think" - this sounds like a very weak way to start an argument. Who are those people - would be the next question. So let's skip the deferring-to-authority argument. Then we have "the most chance" - what are the probabilities and how soon in the future? Cause when we talk about deprioritizing other cause areas for the next X years, we need to have pretty good probabilities and timelines, right? So yeah, I would not consider deferring to authorities a strong argument. But thanks for taking the time to reply.
Nathan Young (2mo):
A survey [https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/#Existential_risk] of ML researchers (not necessarily AI safety, or EA) gave the following. That seems much higher than the corresponding sample in any other field I can think of. I think that an "extremely bad outcome" is probably equivalent to 1Bn or more people dying. Do a near majority of those who work in green technology (what feels like the right comparison class) feel that climate change has a 10% chance of 1Bn deaths? Personally, I think there is like a 7% chance of extinction before 2050, which is waaay higher than anything else.
HowieL (1mo):
FYI - subsamples of that survey were asked about this in other ways, which gave some evidence that "extremely bad outcome" was ~equivalent to extinction.

  1. ^

     Or, ‘human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species’

  2. ^

     That is, ‘future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species’
Jonathan Yan (2mo):
There is a big gap between killing all of us and ending future conscious life (on earth, in our galaxy, entire universe/multiverse?)
Nathan Young (2mo):
Yes, but it's a much smaller gap than any other cause doing this. You're right, conscious life will probably be fine. But it might not be.

What's the best way to talk about EA in brief, casual contexts? 

Recently I've been doing EA-related writing and copyediting, which means that I've had to talk about EA a lot more to strangers and acquaintances, because 'what do you do for work?' is a very common ice-breaker question. I always feel kind of awkward and like I'm not doing the worldview justice or explaining it well. I think the heart of the awkwardness is that 'it's a movement that wants to do the most good possible/do good effectively' seems tautologous (does anyone want to do less good than possible?); and because EA is kind of a mixture of philosophy and career choice and charity evaluating and [misc], I basically find it hard to find legible concepts to hang it on.

For context, I used to be doing a PhD in Greek and Roman philosophy - not exactly the most "normal" job - and I found that way easier to explain XD

 

Related questions:
- What's the best way to talk about EA on your personal social media?
- What's the best way to talk about it if you go viral on Twitter? (this happened to me today)
- What's the best way to talk about it to your parents and older family members?
etc.

I think, kind of, 'templates' about how to approach these situations risk seeming manipulative and being cringey, as 'scripts' always are if you don't make them your own, but I'd really enjoy reading a post collecting advice from EA community builders, communicators, marketers, content w... (read more)

SeanFre (2mo)
I think a collection like the one you're proposing would be an incredibly valuable resource for growing the EA community.

Here's an excellent resource for different ways of pitching EA (and sub-parts of EA). Disclaimer - I do not know who the owner of this remarkable document is. I hope sharing it here is acceptable! As far as I know, this is a public document.

My contingently-favorite option:

Effective Altruism is a global movement that spreads practical tools and advice about prioritizing social action to help others the most. We aim that when individuals are about to invest time or money in helping others, they will examine their options and choose the one with the hig... (read more)

Amber Dawn (1mo)
Thanks! This looks extremely comprehensive.
Rona Tobolsky (2mo)
This resource [https://resources.eagroups.org/running-a-group/communicating-about-ea/what-to-say-pitch-guide] is also robust (and beautifully outlined).

What would convince you to start a new effective animal charity?

Has anyone produced writing on being pro-choice and placing a high value on future lives at the same time? I’d love to read about how these perspectives interact!

Richard Chappell wrote this piece, though IMHO it doesn't really get to the heart of the tension.

FYI I'm also interested in this. 
I do think it's consistent to be pro-choice and place a high value on future lives (both because people might be able to create more future lives by, e.g., working on longtermist causes than by having kids themselves, and because you can place a high value on future lives but say that it is outweighed by the harm done by forcing someone to give birth). But I think that pro-natalist and non-person-affecting views do have implications for reproductive rights and the ethics of reproduction that are seldom noticed or made explicit.

I've only just stumbled upon this question and I'm not sure if you'll see this, but I wrote up some of my thoughts on the problems with the Total View of population ethics (see the "Abortion and Contraception" heading specifically).

Personally, I think there is a tension there which does not seem to have been discussed much in the EA forum. 

Here is a good post on the side of being pro-life: https://forum.effectivealtruism.org/posts/ADuroAEX5mJMxY5sG/blind-spots-compartmentalizing

I have thought about this a lot, and I think pro-life might actually win out in terms of utility maximization if it doesn't increase existential risk.

I’ve asked this question on the forum before to no reply, but do the people doing grant evaluations consult experts in their choices? Like do global development grant-makers consult economists before giving grants? Or are these grant-makers just supposed to have up-to-date knowledge of research in the field?

I’m confused about the relationship between traditional topic expertise (usually attributed to academics) and EA cause evaluation.

[My impression. I haven't worked on grantmaking for a long time.] I think this depends on the topic, size of the grant, technicality of the grant, etc. Some grantmakers are themselves experts. Some grantmakers have experts in house. For technical/complicated grants, I think non-expert grantmakers will usually talk to at least some experts before pulling the trigger but it depends on how clearcut the case for the grant is, how big the grant is, etc.

'If I take EA thinking, ethics, and cause areas more seriously from now on, how can I cope with the guilt and shame of having been so ethically misguided in my previous life?'

or, another way to put this:

'I worry that if I learn more about animal welfare, global poverty, and existential risks, then all of my previous meat-eating, consumerist status-seeking, and political virtue-signaling will make me feel like a bad person'

(This is a common 'pain point' among students when I teach my 'Psychology of Effective Altruism' class)

I might be missing the part of my brain that makes these concerns make sense, but this would roughly be my answer: imagine that you and everyone in your household consume water with lead in it every day. You have the chance to learn whether there is lead in the water. If you learn that there is, you'll feel very bad, but you'll also be able to change your source of water going forward. If you learn that there isn't, you'll no longer have this nagging doubt about the water quality. I think learning about EA is kind of like this. It will be right or wrong to eat animals regardless of whether you think about it, but only if you learn about it can you change for the better. The only truly shameful stance, at least to me, is to intentionally put your head in the sand.

My secondary approach would be to say that you can't change your past but you can change your future. There is no use feeling guilt and shame about past mistakes if you've already fixed them going forward. Focus your time and attention on what you can control.

Meta:

  1. Seems like a more complicated question than [I could] solve with a comment
  2. Seems like something I'd try doing one on one, talking with (and/or about) a real person with a specific worry, before trying to solve it "at scale" for an entire class
  3. I assume my understanding of the problem from these few lines will be wrong and my advice (which I still will write) will be misguided
  4. Maybe record a lesson for us and we can watch it?

Tools I like, from the CFAR handbook, which I'd consider using for this situation:

  1. IDC (maybe listen to that part afraid you'll think
... (read more)
Geoffrey Miller (2mo)
Yanatan -- I like your homunculus-waking-up thought experiment. It might not resonate with all students, but everybody's seen The Matrix, so it'll probably resonate with many.

My two cents: I view EA as supererogatory, so I don't feel bad about my previous lack of donations, but feel good about my current giving.

Changing the "moral baseline" does not really change decisions: seeing "not donating" as bad and "donating" as neutral leads to the same choices as seeing "not donating" as neutral and "donating" as good.

Geoffrey Miller (2mo)
In principle, changing the moral baseline shouldn't change decisions -- if we were fully rational utility maximizers. But for typical humans with human psychology, moral baselines matter greatly, in terms of social signaling, self-signaling, self-esteem, self-image, mental health, etc.
Lorenzo Buonanno (2mo)
I agree! That's why I'm happy that I can set it wherever it helps me the most in practice (e.g. makes me feel the "optimal" amount of guilt, potentially 0)

What has helped me most is this quote from Seneca:

Even this, the fact that it [the mind] perceives the failings it was unaware of in itself before, is evidence for a change for the better in one's character.

That helped me feel a lot better about finding unnoticed flaws and problems in myself, which always felt like a step backwards before. 

I also sometimes tell myself a slightly shortened Litany of Gendlin:

What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
People can stand what is true,
for ... (read more)

My personal approach:

  • I no longer think of myself as "a good person" or "a bad person", which may have something to do with my leaning towards moral anti-realism. I recognize that I did bad things in the past and even now, but refuse to label myself "morally bad" because of them; similarly, I refuse to label myself "morally good" because of my good deeds. 
    • Despite this, sometimes I still feel like I'm a bad person. When this happens, I tell myself: "I may have been a bad person, so what? Nobody should stop me from doing good, even if I'm the worst perso
... (read more)

If you haven't come across it, a lot of EAs have found Nate Soares' Replacing Guilt series useful for this. (I personally didn't click with it but have lots of friends who did).

I like the way some of Joe Carlsmith's essays touch on this. 

HowieL (1mo)
Much narrower recommendation for nearby problems is Overcoming Perfectionism [https://www.amazon.co.uk/Overcoming-Perfectionism-2nd-scientifically-behavioural/dp/1472140567] (~a CBT workbook). I'd recommend to some EAs who are already struggling with these feelings (and know some who've really benefitted from it). (It's not precisely aimed at this but I think it can be repurposed for a subset of people.) Wouldn't recommend to students recently exposed to EA who are worried about these feelings in future.

I think You Don't Need To Justify Everything is a somewhat less related post (than others that have been shared in this thread already) that is nevertheless on point (and great).

I think it's okay to feel guilt, shame, remorse, rage, or even hopelessness about our past "mistakes". These are normal emotions, and we can't, or rather shouldn't, purposely avoid or bury them. It's analogous to someone being dumped by a beloved partner and feeling like the whole world is crumbling. No matter how much we try to comfort such a person, he or she will feel heartbroken.

In fact, feeling bad about our past is a great sign of personal development because it means we realize our mistakes! We can't improve ourselves if we don't even know what we d... (read more)

Geoffrey Miller (2mo)
Kiu -- I agree. It reminds me of the old quote from Rabbi Nachman of Breslov (1772-1810): “If you won’t be better tomorrow than you were today, then what do you need tomorrow for?” https://en.wikipedia.org/wiki/Nachman_of_Breslov

Does anyone have a good list of books related to existential and global catastrophic risk? This doesn't have to just include books on X-risk / GCRs in general, but can also include books on individual catastrophic events, such as nuclear war. 

Here is my current resource landscape (these are books that I have personally looked at and can vouch for; the entries came to my mind as I wrote them - I do not have a list of GCR / X-risk books at the moment; I have not read some of them in full): 

General:

AI Safety 

Nuclear risk

General / space

Biosecurity 

HowieL (1mo)
Others, most of which I haven't fully read and not always fully on topic:
* Richard Posner, Catastrophe: Risk and Response [https://www.amazon.co.uk/Catastrophe-Risk-Response-Richard-Posner/dp/0195306473] (Precursor)
* Richard A. Clarke and R.P. Eddy, Warnings: Finding Cassandras to Stop Catastrophes [https://www.amazon.co.uk/Warnings-Finding-Cassandras-Stop-Catastrophes/dp/0062488031/ref=sr_1_1?crid=2SIGW6HENV0F0&keywords=warnings+clarke&qid=1665639839&qu=eyJxc2MiOiIxLjI4IiwicXNhIjoiMC4wMCIsInFzcCI6IjAuMDAifQ%3D%3D&sprefix=warnings+clarke%2Caps%2C83&sr=8-1]
* General Leslie Groves, Now It Can Be Told: The Story of the Manhattan Project [https://www.amazon.co.uk/Warnings-Finding-Cassandras-Stop-Catastrophes/dp/0062488031/ref=sr_1_1?crid=2SIGW6HENV0F0&keywords=warnings+clarke&qid=1665639839&qu=eyJxc2MiOiIxLjI4IiwicXNhIjoiMC4wMCIsInFzcCI6IjAuMDAifQ%3D%3D&sprefix=warnings+clarke%2Caps%2C83&sr=8-1] (nukes)
peterhartree (2mo)
The Bible (Noah's Ark). File under "Fiction" or "Precursors".
evakat (2mo)
Small remark: the Goodreads list on nuclear risk you linked to is private.

Of all the "decision theory"-style discussions in EA, I think anthropics (e.g. the fact that we exist tells us something about the nature of successful intelligence and x-risk) seems like one of the areas where useful conclusions could arrive just from pure thought. This is sort of amazing.

The blog posts I've seen written in 2021 or 2020 seem sort of unclear and tangled (e.g. there are two competing theories and empirical arguments are unclear).

Is there a good summary of Anthropic ideas? Are there updates on this work? Is there someone working on this? Do they need help (e.g. from senior philosophers or cognitive scientists)? 

A set of related questions RE: longtermism/neartermism and community building.

1a) What is the ideal theory of change for Effective Altruism as a movement in the next 5-10 years? What exactly does EA look like, in the scenarios that community builders or groups doing EA outreach are aiming for? This may have implications for outreach strategies as well as cause prioritization.[1]

1b) What are the views of various community builders and community building funders in the space on the above? Do funders communicate and collaborate on a shared theory of change, or are there competing views? If so, which organizations best characterize these differences, what are the main cruxes/where are the main sources of tension?

2a) A commonly talked-about tension on this forum seems to relate to neartermism versus longtermism, or AI safety versus more publicly friendly cause areas in the global health space. For the neartermist work: how much of its value is because it's inherently a valuable cause area, and how much is because it's intended as an onramp to longtermism/AI safety?

2b) What are the views of folks doing outreach and funders of community builders in EA on the above? If there are different approaches, which organizations best characterize these differences, what are the main cruxes/where are the main sources of tension? I would be particularly interested in responses from people who know what CEA's views are on this, given they explicitly state they are not doing cause-area specific work or research. [2]

3) Are there equivalents [3] of Longview Philanthropy who are EA aligned but do not focus on longtermism? For example, what EA-aligned organization do I contact if I'm a very wealthy individual donor who isn't interested in AI safety/longtermism but is interested in animal welfare and global health? Have there been donors (individual or organizational) who fit this category, and if so, who have they been referred to/how have they been managed?

  1. ^

    "Big tent" effective altruism is very important (particularly right now) is one example of a proposed model, but if folks think AI timelines are <10 years and p(doom) is very high, then they might argue EA should just aggressively recruit AI safety folks at elite unis.

  2. ^

    Under Where we are not focusing: "Cause-specific work (such as community building specifically for effective animal advocacy, AI safety, biosecurity, etc.)"

  3. ^

    "designs and executes bespoke giving strategies for major donors"

3. I'm not sure they do as much bespoke advising as Longview, but I'd say GiveWell and Farmed Animal Funders. I think you could contact either one with the amount you're thinking of giving and they could tell you what kind of advising they can provide. 

I really want to learn more about broad longtermism. In 2019, Ben Todd said that in a survey EAs said that it was the most underinvested cause area by something like a factor of 5. Where can I learn more about broad longtermism, what are the best resources, organizations, and advocates on ideas and projects related to broad longtermism?

I think parts of What We Owe the Future by Will MacAskill discuss this approach a bit.

Jordan Arel (1mo)
Mm, good point! I seem to remember something... do you remember which chapter(s), by chance?
HowieL (1mo)
My guess is that Part II, on trajectory changes, will have a bunch of relevant stuff. Maybe also a bit of Part V. But unfortunately I don't remember too clearly.

Does Peter Singer still consider himself aligned with the Effective Altruism movement? And/or do you forecast he will in five years' time?

If "EA is a question," and that question is how to do the most good, I think Peter Singer will always consider himself an effective altruist.

However, he seems to disagree about whether the answer to that question entails a predominant focus on common longtermist topics. I suspect, while he will always see himself as an EA, it will be as an EA that has important differences in cause area prioritization. For more info, he discusses his views about longtermism here, perhaps captured best by the following quote:

When taking steps to reduce the risk that we will become extinct, we should focus on means that also further the interests of present and near-future people. If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do; and if we are not at that critical point, it will have been a good thing to do anyway.

1) What level of funding or attention (or other metrics) would longtermism or AI safety need to receive for it to no longer be considered "neglected"?

2) Does OpenPhil or other EA funders still fund OpenAI? If so, how much of this goes towards capabilities research? How is this justified if we think AI safety is a major risk for humanity? How much EA money is going into capabilities research generally?

(This seems like something that would have been discussed a fair amount, but I would love a distillation of the major cruxes/considerations, as well as what would need to change for OpenAI to be no longer worth funding in future).

  1. See here. (Separating importance and neglectedness is often not useful; just thinking about cost-effectiveness is often better.)
  2. No.
pseudonym (2mo)
Thanks! This makes sense. In my head AI safety feels like a cause area that can just have room for a lot of funding etc, but unlike nuclear war or engineered pandemics which seem to have clearer milestones for success, I don't know what this looks like in the AI safety space. I'm imagining a hypothetical scenario where AI safety is overprioritized by EAs, and wondering if or how we will discover this and respond appropriately.

How can one accept the Simulation Hypothesis and at the same time find Effective Altruism a valuable enterprise?

I don't see how the Simulation Hypothesis is a counterargument to EA, if you presume everyone else is still as "real" (i.e., simulated at the same level of detail) as you are. After all, you clearly have conscious experience, emotional valence, and so on, despite being a simulation - so does everyone else, so we should still help them live their best simulated lives. Whether one is a simulation or not, we can clearly feel the things we call pleasure and pain, happiness and sorrow, freedom and despair, so we clearly have moral worth in my worldview. Though we should probably be working on some simulation-specific research as well, I don't see how something like malaria nets would cease to be worthwhile.

Emanuele DL (1mo)
Thanks for replying to my question. Your argument is certainly valid and an important one. But if we are to take the simulation hypothesis seriously it is only one within a spectrum of possible arguments that depend on the very nature of the simulation. For instance we might find out that our universe has been devised in such a twisted way that any improvement for its conscious beings corresponds to an unbearable proportional amount of pain for another parallel simulated universe. In such a case would pursuing effective altruism or longtermism still be moral?
Jay Bailey (1mo)
Effective altruism is about doing the most good possible, so I'd say one can still pursue that under any circumstance. In the hypothetical you mentioned, the current form of EA would definitely be immoral in my opinion, because it is mostly about improving the lives of people in this universe, which would cause more suffering elsewhere and thus be wrong. So, in such a world, EA would have to look incredibly different - the optimal cause area would probably be to find a way to change the nature of our simulation, and we'd have to give up a lot of the things we do now because their net consequences would be bad. That's one of the best parts about EA in my opinion - it's a question (How do we do the most good?) rather than an ideology. (You must do these things) Even if our current things turned out to be wrong, we could still pursue the question anew.
Emanuele DL (1mo)
I agree with your approach to the question, but perhaps, if we really take the simulation hypothesis seriously (or at least consider it probable enough to concern us), the first step should be finding a way to tell whether or not we actually live in a simulation. Research in physics/astronomy could explicitly look for and devise experiments to demonstrate systematic inconsistencies in the fabric of our universe that could hint at the made-up nature of all its laws. This is, in a way, an indirect answer to your last question. If effective altruism is not an ideology to be followed but a rational enterprise grounded in the actual nature of our universe, then it should also be concerned with improving our understanding of it - even if this eventually leads to a radical rethink of what effective altruism should be.
Jay Bailey (1mo)
I agree. If the Simulation Hypothesis became decently likely, we would want to answer questions like:
- Does our simulation have a goal? If so, what?
- Was our simulation likely created by humans?
Also, we'd probably want to be very careful with those experiments - observing existing inconsistencies makes sense, but deliberately trying to force the simulation into unlikely states seems like an existential risk to me - the last thing you want is to accidentally crash the simulation!

Have there been any actual "wins" for longtermism? 

Has anyone taken any concrete actions that clearly shift the needle towards a better future?

Why do EAs use "counterfactual" in statements like "it will have a high counterfactual impact"? Isn't "non-fungible" a more apt word than "counterfactual" for what EAs are trying to get at?

What are the financial incentives of grantmaking? In particular, I would like to know if the compensation of a typical full-time grantmaker in the EA world is substantially tied to the amount of grants advised.

Note that this question is not about the altruistic incentives or whether they matter more than financial incentives. This question is also not suggesting the existence of any complex financial incentives. Not knowing much about EA grantmaking, my assumption is that a typical grantmaker is most likely paid a fixed salary plus benefits.

How tractable are animal welfare problems compared to global health and development problems?

I'm asking because I think animal welfare is a more neglected issue, but I still donate for global health and development because I think it's more tractable.

I think that it's very tractable. For example, I estimated that corporate campaigns improve 9 to 120 years of chicken life per dollar spent and this improvement seems to be very significant. It would likely cost hundreds or thousands of dollars to improve a life of one human to such a degree, even in developing countries. There are many caveats to this comparison that I can talk about upon request but I don't think that they change the conclusion.

Another way to see tractability is to look at the big wins for animal advocacy in 2021 or 2020. This progress i... (read more)

I believe they are largely tractable: there's a variety of intervention types (Policy, Direct work, Meta, Research), cause areas (Alt Proteins, Farmed Animals, Wild Animal Suffering, Insects), and organisations and geographies in which to pursue them. Of particular note may be potentially highly tractable and impactful work in LMICs (Africa, Asia, Middle East, Eastern Europe).

I will say animal welfare is a newer and less explored area than global health, but that may mean your donation can be more impactful and make more of a difference, as there could be... (read more)

Within the field of AI safety, what does "alignment" mean?

The "alignment problem for advanced agents" or "AI alignment" is the overarching research topic of how to develop sufficiently advanced machine intelligences such that running them produces good outcomes in the real world.

Both 'advanced agent' and 'good' should be understood as metasyntactic placeholders for complicated ideas still under debate. The term 'alignment' is intended to convey the idea of pointing an AI in a direction--just like, once you build a rocket, it has to be pointed in a particular direction.

"AI alignment theory" is meant as an overarch

... (read more)

Has anyone associated with EA ever looked for leverage points for reducing the rate of abortion?

(I believe the answer is no, or at least it hasn't been published publicly.)

Hi Jason, I'm the author of the aforementioned research into IUDs, artificial wombs, and legislative solutions, which is indeed very cursory. The research is included at the bottom of a larger draft [https://docs.google.com/document/d/10VL9m-GW2f428WZSEs834kiDrHFxtfPNQzc6ljLwTyc/edit?usp=sharing] of an eventual EA Forum post outlining reasons why EAs might oppose abortion and potential interventions in that regard.

The draft's philosophical arguments against abortion are much more mature than its section on potential interventions, partially because I've t... (read more)

I sense the answer is yes. I seem to recall that someone looked into this. 

Also I guess the answer is technically yes since I wouldn't be surprised if some interventions already lower the rate of unwanted pregnancy.

Hi - I'm just curious what the rationale for this would be? 

jasonk (2mo)
If there were cost-efficient leverage points, it might be worth investing some amount of money and effort in. A non-exhaustive list of semi-conjoint reasons:
* One believes abortion is a grave moral wrong and a lot occur each year.
* One doesn't believe abortion is a grave moral wrong, but assigns some weight to the view's correctness. Even assigning a 10% chance to the view's correctness still means a lot is potentially at stake.
* There might be relatively easy ways to make a difference and have other positive, follow-on effects. For example, male contraceptives might make a big difference in reducing unintended pregnancies and my understanding (a few years old) is that there aren't many funders of relevant research. (I recognize that some people argue that the follow-on effects of other contraceptives like the pill are not fully positive and some believe they may even be negative.)
* Abortion is ridiculously polarizing and seems to crowd out discussion of other important issues in politics. Maybe reducing its salience would help increase the ability to focus on other issues?
* Obtaining an abortion imposes greater and greater costs in the US (financially, in time required, psychologically, health risks) as restrictions are rolled out.
* The strategies engaged in by many pro-life advocates seem unlikely to significantly reduce abortion rates.

There has been some very cursory research into things like IUDs, artificial wombs, legislative action etc., but I don't think the author ever finished or published it.

If only a few people are responsible for most of EA's impact, and I'm not as ambitious as many others, should I even care about EA?

Why don't we discount future lives based on the probability of them not existing? These lives might end up not being born, right?

I understand the idea of not discounting lives due to distance (distance in time as well as distance in space). Knowing a drowning child is 30km away is different from hearing from a friend that there is an x% chance of a drowning child 30km away. In the former, you know something exists; in the latter, there is only a probability that it exists, and you apply a suitable level of confidence in your actions.

We should discount someone's life based on the probability of them not existing. This is not controversial. (But standard-economic constant-factor-per-year discounting is too crude to be useful.)
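To make the distinction concrete, here is a minimal sketch (my own illustration, with made-up numbers, not from anyone's comment above) contrasting discounting by probability of existence with standard constant-factor-per-year time discounting:

```python
# Weight a possible future person's welfare by the probability they exist
# at all, rather than by how far away in time they are.

def expected_moral_weight(p_exists: float, welfare_if_exists: float) -> float:
    """Discount solely by the probability that the life ever exists."""
    return p_exists * welfare_if_exists

def time_discounted_weight(welfare: float, years: float, rate: float) -> float:
    """Standard constant-factor-per-year economic discounting, for contrast."""
    return welfare / ((1 + rate) ** years)

# A life 200 years out with a 60% chance of existing keeps 60% of its weight...
print(expected_moral_weight(0.6, 1.0))  # 0.6
# ...whereas 3%/year pure time discounting leaves almost nothing of it.
print(round(time_discounted_weight(1.0, 200, 0.03), 4))  # 0.0027
```

This is the sense in which probability-of-existence discounting is uncontroversial while constant-per-year discounting is "too crude": the former tracks genuine uncertainty, the latter shrinks toward zero no matter how certain the future lives are.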

Does "calibrated probability assessment" training work?

In "How to Measure Anything" chapter 5, Douglas Hubbard describes the training he provides to individuals and organizations that want to improve their skills.  He provides a sample test which is based on general knowledge trivia, questions like

 "What is the air distance from LA to NY?" 

for which the student is supposed to provide a 90% confidence interval.  There are also some true/false questions where you provide your level of confidence in the answer e.g. 

"Napoleon was born on Corsica".  

In the following few pages he describes some of the data he's collected about his trainees, implying this sort of practice helps people become better estimators of various things, including forecasting the likelihood of future events. For example, he describes CTOs making more accurate predictions about new tech after completing training.

My question: Is there evidence that practice making probabilistic estimates about trivia improves people's ability to forecast non-trivial matters?  Have there been published studies?

I asked Dr. Hubbard these questions and he graciously replied, saying to check out his book (which only cites Kahneman and Tversky's 1980 work), or the Wikipedia page (which also only cites his book and the above study), or to read Superforecasting.

Thanks!

[note that this is a re-post of a question I asked before but didn't get an answer]

I'm not sure we need "published studies" but "proper studies" seem like a great idea. 
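For what it's worth, the scoring behind this kind of training is easy to state: if your 90% confidence intervals are well calibrated, roughly 90% of the true answers should fall inside them. A minimal sketch of that check (hypothetical numbers, not Hubbard's actual test data):

```python
def interval_hit_rate(estimates, truths):
    """Fraction of true values falling inside the stated (low, high) intervals."""
    hits = sum(low <= truth <= high for (low, high), truth in zip(estimates, truths))
    return hits / len(truths)

# Five 90%-confidence intervals and the true answers (made-up trivia).
intervals = [(2000, 3000), (1700, 1800), (50, 90), (10, 40), (300, 900)]
answers   = [2451,          1769,         71,       55,       415]

rate = interval_hit_rate(intervals, answers)
print(rate)  # 0.8 -- below 0.9, hinting at overconfidence (tiny sample, though)
```

The open empirical question in the post stands, of course: showing that people can be scored this way on trivia doesn't by itself show the training transfers to forecasting non-trivial matters.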

Hi all. I'm 40 with a mortgage and two kids. Living in west of Ireland with a 20 year career in event management and lighting design behind me. No college education. I want to change career to a more long-termist, meaningful road but I just feel trapped. Geographically, financially and socially in my current life. Anyone any advice or pointers maybe?

Is there a good historical overview of EA? One that introduces key people, key events, key ideas?

Has anyone in EA looked into how compatible EA is with non-Western countries such as China or India, or how to get EA movements started there?

  1. Why is there not a good EA YouTube channel, with short and scripted videos in the style of Crash Course or Kurzgesagt, with shareable introductions to EA in general and to all the causes, inside longer playlists about them?
  2. There also aren't podcasts or social media accounts that seem to be trying to get big and reach people to do EA. Why is that the case?
  3. I've seen ads for 80k Hours and GiveWell, but why aren't there any for the whole EA movement?

Sorry for making multiple questions in one, but I feel that the answers may be related. I separated them in the case you want to answer them individually. Feel free to answer only one or two.

  1. There's Giving What We Can and A Happier World. Also, Kurzgesagt itself got two large grants for making EA videos; I think they're going to make many more in the future.
  2. If you google "effective altruism podcasts" you can find some. There's also a recent small grants program to start new ones.
Patricio (2mo)
Yeah, but they're not the EA channel or the EA podcast in the same way that the EA Forum exists. And in the case of the YouTube channels, they don't have introductions to EA, its causes, and how to contribute to them.

I think that's because the EA movement is just that, a global movement, not an organization.

You probably also haven't seen Fridays For Future ads, or the FFF podcast.

Patricio (1mo)
I understand that, and I guess 3 and even 2 could be not that effective, but it is weird to me that there isn't an org making good YouTube videos that EAs could share and put on their profiles or something, with a message inside them encouraging further sharing to try to create a snowball effect.

Open Philanthropy has given grants to Kurzgesagt, who have made some videos on EA-related subjects. There is also some other EA-aligned YouTube creator whose name escapes me now.

The 80k podcast has wide reach, including outside EA I believe.

So there is some of this. I think part of the answer is that it’s really hard to reach a massive audience while retaining a high-fidelity message.

What are the methods (meditation, self-analysis) and tools (podcasts, books, support groups)  you use to keep yourself motivated and inspired in Effective Altruism specifically, and in making a difference generally?

Are there examples of EA causes that had EA credence and financial support but then lost both, and how did discussion of them change before and after? Also vice-versa, are there examples of causes that had neither EA credence nor support but then gained both?

The EA Survey has info on cause prioritization changes over time. The summary is:

The clearest change in average cause ranking since 2015 is a steady decrease for global poverty and increases for AI risk and animal welfare.

Holden Karnofsky wrote Three Key Issues I’ve Changed My Mind About on the Open Philanthropy blog in 2016.

On AI safety, for example:

I initially guessed that relevant experts had strong reasons for being unconcerned, and were simply not bothering to engage with people who argued for the importance of the risks in question. I believed that the tool-agent distinction was a strong candidate for such a reason. But as I got to know the AI and machine learning communities better, saw how Superintelligence was received, heard reports from the Future of Life Institute ...

Can I write the questions and answers in to a FAQ on the wiki that anyone can add to?

Toby Ord has written about the affectable universe, the portion of the universe that “humanity might be able to travel to or affect in any other way.”

I’m curious whether anyone has written about the affectable universe in terms of time.

  1. We can only affect events in the present and the future
  2. Events are always moving from the present (affectable) to the past (unaffectable)
  3. We should intervene in present events (e.g. reduce suffering) before these events move to the unaffectable universe

Maybe check out the term "light cone".

Why is longtermism a compelling moral worldview? 

A few sub-questions:

  • Why should we care about people who don't exist yet? And why should we dedicate our resources to making the world better for people who might exist (reducing x-risks) rather than using them for people who definitely exist and are currently suffering (global health, near-term global catastrophic risks, etc.)? Longtermism seems to be a somewhat privileged and exclusive worldview because it deprioritizes the very real lack of healthcare, food and potable water access, security, and education that plagues many communities.
  • Why are x-risks considered worse than global catastrophic risks? From a utilitarian standpoint, global catastrophic risks should be much worse than x-risks. All things considered, an x-risk is a fairly neutral outcome: worse than a generally happy future, but highly preferable to a generally unhappy one. A global catastrophic risk would cause a generally unhappy future.
  • Should the long-term preservation of humanity necessarily be a goal of effective altruism? I don't think the preservation of humanity is an inherently bad thing. (Although it would likely be at the expense of every other species.) But, I could imagine an extinction scenario that I wouldn't be upset about: As technology progresses, people generally get richer and happier. A combination of rising GDP, more urbanized styles of living, better access to birth control, and a mechanized workforce causes the birth rate to drop, and humanity comfortably, quietly declines. Natural habitats flourish, and we make room for other species to thrive and flourish as we have. Is this outcome acceptable from an altruistic perspective? If not, why?

How are DALYs/QALYs determined?

Life years are pretty objective, but how are the disability/quality adjustments made?

One common method is the time trade-off; you can find other common methods at the bottom of that Wikipedia page.

For more details, the answers in this thread might be interesting.
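As a rough numerical illustration of the time trade-off idea (my own sketch; real elicitation studies are considerably more involved, and the function names here are made up for illustration):

```python
def tto_weight(years_full_health: float, years_in_state: float) -> float:
    """Time trade-off: if a respondent is indifferent between living
    `years_in_state` years in a given health state and `years_full_health`
    years in full health, the state's quality weight is their ratio."""
    return years_full_health / years_in_state

def qalys(years_lived: float, weight: float) -> float:
    """QALYs are life years multiplied by the quality weight."""
    return years_lived * weight

# A respondent indifferent between 10 years with a condition and
# 8 healthy years implies a quality weight of 0.8 for that condition:
w = tto_weight(8, 10)
print(qalys(10, w))  # 10 years at weight 0.8 -> 8.0 QALYs
```

So "life years" stay objective, and the contested part is entirely in how the weight is elicited and averaged across respondents.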

Also worth noting that (as far as I know) no EA group uses QALYs alone.

For example, GiveWell uses their own researched "moral weights" https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/comparing-moral-weights 

Is there an overview or estimate of how many EA-aligned people work in national or international governmental bodies and/or politics?

'Why don't EA's main cause areas overlap at all with the issues that dominate current political debates and news media?'

(This could be an occasion to explain that politically controversial topics tend not to be (politically) tractable or neglected (in terms of media coverage), and are often limited in scope, i.e. focused on domestic political squabbles and symbolic virtue-signaling.)

  1. As you said, they're (almost by definition) not neglected
  2. The media picks topics based on some algorithm which is simply different from the EA algorithm. If that weren't true, I guess we wouldn't really need EA

I use a lot of ideas from Leviathan (Hobbes) all the time, but my knowledge comes just from reading the title and the first paragraph of the Wikipedia page[1]. I'm worried I'll look dumb in front of smart people.

Does anyone have a good approachable summary of Leviathan, or even better, a tight, well written overview of the underlying and related ideas from a modern viewpoint?

  1. ^

    ("Bellum omnium contra omnes" is just so cool to say)

Genuine question (rather than critique):

What is the EA Community doing to increase the diversity of its make-up? Are there any resources out there folks can link me to that are actively working on bringing in a plurality of perspectives/backgrounds/etc.?

Considering the scope of existential challenges we’re facing as a species, wouldn’t it stand to reason that looking for ideas for tackling them from a wider array of sources (especially areas outside of STEM, underrepresented populations, or folks outside of the English-speaking world) might offer solutions we wouldn’t otherwise come across?

Magnify Mentoring is also relevant here. 

I wrote a bit about non-STEM inclusivity (list of project ideas, a post on language), and I think there are some active efforts to expand outside of the English-speaking world (things like conferences, translation projects, local and online groups, fellowships, camps, etc.) — but more would be good! 

This doesn't completely answer your question, but you may be interested in this page on CEA's website, particularly the "Our work" section.

MariellaVee:
Thank you so much for the link! Lots of great stuff here. Trying to help mitigate economic barriers to attending events and conferences is excellent, as are the acknowledgements of the risk of English-speaking dominance within the community's leadership; maintaining genuine curiosity and a collaborative mentality, asking communities and underrepresented groups how best to support their participation, is also great!

I wonder how EA might avoid the trap I've witnessed a lot in tech and industry, where the intentions are there / they state they're committed to these principles, but the actual day-to-day reality doesn't match up with the well-intentioned guidelines (no matter how many "We're really dedicated to DEI!" Zoom meetings are held).

Is the answer to apply the same criteria for objective measurement of success in these categories to organizations and bodies within the community as is done for charities and initiatives? Or to set transparent, time-specific goals for things like translating seminal resources into other languages, diversifying key leadership positions, etc.? (Ex: CEA states their current employee make-up is 46% female and 18% self-identified minorities, though it's not clear how this breaks down within leadership positions, etc.) Is it as simple as discouraging the over-use of technical jargon and academic language in communications, so as to widen the scope of understanding and broaden the audience? (Or something completely different / none of these things?)

Should EA give more importance to the possibility of globally working towards eliminating the negative tendencies of greed, aversion and delusion, inherent to the human mind?

Loosely related, but you might be interested in the topic "moral advocacy": https://forum.effectivealtruism.org/topics/moral-advocacy 

What's the basis for using expected utility/value calculations when allocating EA funding for "one off" bets? More details explaining what I don't understand are below for context.

My understanding is that expected value relies on the law of large numbers, so in situations where a bet is unlikely to be repeated (for example, extinction, where you could put a ton of resources in and go from a 5% extinction risk over the next century to a 4% risk), it doesn't seem like expected value should hold. The way I've seen this justified is via expected utility and the Von Neumann–Morgenstern (VNM) theorem, which I believe says that for an agent satisfying certain rationality axioms there exists a utility function such that the agent acts optimally by maximizing its expected value.

However, it seems like that doesn't really tell us much, because you could presumably construct a number of utility functions that satisfy VNM, some of which bankrupt you and some of which don't. It seems reasonable to me that a good utility function should discount bets that won't be repeated at that scale often enough to average out positively in the long run. But as far as I'm aware, EA expected utility/value calculations often don't account for that.

It seems like people refer to attempts to account for this as risk-aversion, and my understanding is that EAs often argue we should be risk-neutral. But the arguments I've seen typically frame risk-aversion as putting an upper bound on the value of people's well-being, which we don't want to do. It seems to me, though, that you could value well-being linearly and still downweight bets that won't be repeated enough to average out in your favor.
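To make the law-of-large-numbers worry concrete, here's a toy simulation (my own illustration, not anyone's actual methodology): a bet with positive expected value that loses almost every time it's played once, but almost never loses on average when repeated many times.

```python
import random

random.seed(0)

def play(n_bets: int) -> float:
    """Average payoff per bet over n_bets plays of a long-shot bet:
    win 1000 with probability 1%, otherwise lose 1 (EV = +9.01 per bet)."""
    return sum(1000 if random.random() < 0.01 else -1
               for _ in range(n_bets)) / n_bets

trials = 1000
single = sum(play(1) < 0 for _ in range(trials)) / trials   # one-off bets
many = sum(play(1000) < 0 for _ in range(trials)) / trials  # repeated bets
print(f"P(net loss), betting once:      {single:.0%}")  # ~99%
print(f"P(net loss), averaging 1k bets: {many:.0%}")    # ~0%
```

The question, as I read it, is whether the one-shot row or the repeated row is the right frame for one-off funding decisions like x-risk reduction.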

Apologies for the lengthy context, I'm sure I'm confused on a lot of points so any clarity or explanations on what I'm missing would be appreciated!

It's been a while since I read it but Joe Carlsmith's series on expected utility might help some. 

Ryan Beck:
Thanks, I'll check that out!

What are some strong arguments against 'astronomical waste'?

In roughly ascending order of plausibility: 

  • Cluelessness. Maybe we can't knowably affect the future (I find this the weakest argument)
  • Anthropics/Doomsday argument. It's extremely unlikely that we should observe ourselves as among the 100 billion earliest people, so we should update heavily towards thinking the future is inevitably doomed
  • Simulation Hypothesis. (relatedly) If we're extremely certain we're in a simulation, then (under most theories of value and empirical beliefs) we should care more about the experiences of existing beings.
  • You...

What if your next car was 10 times more powerful? What new kinds of driver training, traffic rules, and safety features would you think are necessary? What kinds of public education, laws, and safety features are necessary when AI, genetic engineering, or robotics becomes 10 times more powerful? How do we determine the risks?
Bonus question: why is it important to have good analogies so the general public can understand the risks of technology?
 

I feel like a lot of EA charities are "reactionary" in that they try to mitigate an issue rather than overcome it.

Take animal welfare, for example: the main charities that are funded are mostly advocacy- and activism-based. While I'm supportive of this approach, I don't think it will ultimately help animal welfare by any order of magnitude in the long term. Instead, something will probably displace the need for animals, IMO — like lab-grown meat. Why don't EAs support basic research such as lab-grown meat* as a means to displace the current state of factory farming? Sure, over a lifetime lab-grown meat has a really low chance of coming to fruition, but if it did (and with greater funding you can increase its chance of happening!), it would have orders of magnitude more impact on animal welfare than the current advocacy model.

*The same situation applies to climate change too. There's a trend in EA, and now in more general circles, to "offset your carbon footprint", but again this feels like a mitigation/reactionary way of spending your money. I would much rather my money go to nuclear fusion research, because if it worked out it would have orders of magnitude more impact than simply mitigating my own carbon footprint.

 

hope that makes some sense!

Why don't EAs support basic research such as lab-grown meat* as a means to displace the current state of factory farming?

That's news to me. 😕 Animal Charity Evaluators recommends several charities that promote alternative proteins, such as the Material Innovation Initiative and the Good Food Fund. Although no longer an ACE recommended charity, the Good Food Institute is one of the leading orgs that funds and promotes alternative protein innovation, and is often recommended by EAs.

There's a trend in EA, and now in more general circles, to "offset your carbon footprint", but again this feels like a mitigation/reactionary way of spending your money. I would much rather my money go to nuclear fusion research, because if it worked out it would have orders of magnitude more impact than simply mitigating my own carbon footprint.

I'm not familiar with carbon offsetting as being a 'trend in EA' - as far as I'm aware the canonical EA treatment of this is Claire's piece arguing against it.

Similarly, if I look at the EA forum wiki page for climate change, every single bullet point is about research, and the first one is 'innovative approaches to clean energy' which includes nuclear.

I'm personally pretty skeptical of lab-grown meat after looking into it for a while (see here, here, and here). I do think some investment into the space makes sense for reasons similar to your argument, but I'm personally a bit skeptical of "I think the science of my current approach doesn't work and will never work but there's a small chance I'm wrong so it might make sense to work on it anyway" as a way to do science.*

My guess is that the future replacement for meat will not look like lab-grown mammalian cells, and if it does, how we get there will look...

Why does the Forum have a 'karma system'? Why was it called a 'karma system' rather than anything else? Is the karma system a truly accurate reflection of a person's input into discussions on the Forum?

I think the name "karma" comes from reddit.com; it caught on and was adopted by other internet forums as a name for the "fake internet points" that users get for posting.

It's definitely not an accurate reflection of a person's input into discussions on the forum, but having positive karma is a strong indicator that a user is not trolling/spamming.

Sorry, stupid question, but just to clarify, questions should be posted in this thread, or in the general “questions” section on the forum?

For the purpose of trying this thread, it would be nice to post questions as "Answers" to this post.[1] Although you're welcome to post a question on the Forum if you think that's better: you can see a selection of those here

Not a stupid question! 

  1. ^

    The post is formatted as a "Question" post, which might have been a mistake on my part, as it means that I'm asking people to post questions in the form of "Answers" to the Question-post, and the terminology is super confusing as a result.

Is your altruism more effective now than it was four years ago? (Instead of the election question, "Are you better off now than you were four years ago?")

What would it look like for an organization or company to become more recognized as an 'EA' org/company? What might be good ways to become more integrated with the community (only if it is a good fit, that is, with high fidelity) and what does it mean to be more 'EA' in this manner?

I recognize that there is a lot of uncertainty/fuzziness in trying to definitively identify entities as 'EA'. It is hard for me even to know whom to ask this question, so this comment is one of a few leads I have started.

I am generally curious about the organizational/leadership structure of "EA" as a movement. I am hesitant to detail/name the company as that feels like advertising (even though I do not actually represent the company), but some details without context:

  • Part of its efforts and investment are aligned with reducing the risk of a potential x-risk (factor?), aiming to develop brain-computer interfaces (BCI) that increase rather than hinder human agency.
  • Aims to use BCI to improve decision-making.
  • Donates 5% to effective charities (pulled from ~GiveWell) and engages employees in a 'giving game' to this end.
  • A for-profit company without external investors - a criterion they believe is necessary to stay focused long-term on prioritizing human agency.

EAs experiencing ableism: How do you feel about the use of QALYs (Quality-Adjusted Life Year) and DALYs (Disability-Adjusted Life Year) to measure impact? Are there other concepts you would prefer?

There is a recent post and talk on Measuring Good Better. I found it really interesting to see how different organizations use WELLBYs, "moral weights", or something different.

Does being principled produce the same choice outcomes as being a long-term consequentialist?

Leadership circles[1] emphasize putting principles first. Utilitarianism rejects this approach: it focuses on maximizing outcomes, with little normative attention paid to the process (or, as the quip goes: the ends justify the means). This (apparent) distinction pits EA against conventional wisdom and, speaking from my experience as a group organizer,[2] is a turn-off.

However, this dichotomy seems false to me. I can easily imagine a conflict between a myopic utilitarian and a deontologist (e.g. the first might rig the lottery to send more money to charity).[3] I have more trouble imagining a conflict between a provident utilitarian and a principles-first person (e.g. cheating may help in the short term, but in the long-term, I may be barred from playing the game).[4] 

Even if principles sometimes butt heads (e.g. being kind vs. being honest), so can different choice outcomes (e.g. minimizing animal suffering vs. maximizing human flourishing). Both these differences are resolved by changing the question's parameters or definitions:[5] being dishonest is an unkindness; we need to take both sufferings into account.

All in all, it seems like both approaches face the same internal problems, the same resolutions, and could produce the same answer set. If this turns out to be true, there are a few possible consequences:

  • High confidence (>85%): With enough reflection, EA might develop 'EA principles' that are not focused on consequences but fundamentally aligned with EA.
  • Medium confidence (~55%): If EA develops these principles, EA can advertise them to current and prospective members, potentially attracting demographics that were fundamentally opposed to utilitarianism. 
  • Low confidence (~30%): If EAs adopt these principles, they may shift their primary focus to processes ('doing things right') and demote outcomes to a secondary focus. EA adopts the motto: if you do things the right way, the right things will come.[6]
  1. ^

    I'm thinking of Stephen Covey's works "7 Habits of Highly Effective People" (1989) and "Principle-Centered Leadership" (1992). If these leadership models are outdated, please correct me. 

  2. ^

    When tabling for a new EA group, mentioning utilitarianism cast a shadow on a few (~40%) conversations. When I explained how we choose between lives we save every day, people seemed more empathetic, but it felt like a harder sell than it had to be.

  3. ^

    I would love for someone to do the math properly to see if this expected value works out. Quick maths (making assumptions along the way): assume the lottery prize is $100M, there's an 80% chance of getting caught, you otherwise make $200k per year, and you'd get 10 years in prison for rigging. EV of lottery rigging = winning profits + losing costs = 0.2 × $100M + 0.8 × (−$200k/yr × 10 yr) = $20M − $1.6M = $18.4M.
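    The footnote's quick arithmetic (a $100M prize, an 80% chance of getting caught, and $200k/yr of income foregone over 10 years in prison — the footnote's illustrative assumptions, not real figures) can be checked in a few lines:

    ```python
    # Assumptions from the footnote (illustrative only):
    p_caught = 0.8               # probability of getting caught
    prize = 100_000_000          # $100M lottery prize if the rigging succeeds
    salary = 200_000             # $200k/yr foregone during imprisonment
    years_in_prison = 10

    ev = (1 - p_caught) * prize + p_caught * (-salary * years_in_prison)
    print(f"EV of rigging: ${ev:,.0f}")  # EV of rigging: $18,400,000
    ```

    A fuller model would also count reputational and legal costs beyond lost salary, which is part of why the naive EV here is suspect.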

  4. ^

    I'm assuming that we live in a society that doesn't value cheating...

  5. ^

    This strategy is Captain Kirk's when solving the Kobayashi Maru.

  6. ^

    Its modus tollens comes to the same conclusion as utilitarianism: if you have the wrong consequences, you must have had the wrong processes. 

The most important principle is to maximize long-run utility. All else follows.

How could we change the fact that our lives and fate are controlled by a few tyrannical and greedy people?

What are the hidden costs and benefits to working full-time on x-risk reduction?  (Including research, policy, etc.)