All of Aaron_Scher's Comments + Replies

Global health is important for the epistemic foundations of EA, even for longtermists

This is great and I’m glad you wrote it. For what it’s worth, the evidence from global health does not appear to me strong enough to justify high credence (>90%) in the claim “some ways of doing good are much better than others” (maybe operationalized as "the top 1% of charities are >50x more cost-effective than the median", but I made up these numbers).

The DCP2 (2006) data (cited by Ord, 2013) gives the distribution of the cost-effectiveness of global health interventions. This is not the distribution of the cost-effectiveness of possible dona... (read more)
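To make the operationalization above concrete, here is a minimal sketch of how a "top 1% vs. median" ratio could be computed from a distribution. It assumes a lognormal distribution with an arbitrary spread parameter of my own choosing; these are not the DCP2 numbers.

```python
# Illustrative only: assumes cost-effectiveness is lognormally distributed with an
# arbitrary spread (sigma); these are NOT the DCP2 figures.
import numpy as np

sigma = 1.5                      # assumed spread of log cost-effectiveness
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=sigma, size=1_000_000)

median = np.median(samples)
top_1_percent = np.percentile(samples, 99)
print(f"99th percentile / median ≈ {top_1_percent / median:.1f}x")
# Analytically this ratio is exp(2.326 * sigma) ≈ 33x for sigma = 1.5,
# so a ">50x" claim needs sigma above roughly 1.7 under this assumption.
```

An analogous calculation run on the actual distribution of charity (rather than intervention) cost-effectiveness is what the comment above is asking for.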

5 · Owen Cotton-Barratt · 2mo
Yeah I think this is a really good question and would be excited to see that kind of analysis. Maybe I'd make the numerator be "# of charitable $ spent" rather than "# of charities" to avoid having the results be swamped by which areas have the most very small charities. It might also be pretty interesting to do some similar analysis of how good interventions in different broad areas look on longtermist grounds (although this would necessarily involve a lot more subjective judgements).
Is the time crunch for AI Safety Movement Building now?

The edit is key here. I would consider running an AI-safety arguments competition (in order to do better outreach to graduate-and-above-level researchers) to be a form of movement building, and one for which crunch time could be in the last 5 years before AGI (although earlier is probably better for norm changes).

One value add from compiling good arguments is that if there is a period of panic following advanced capabilities (some form of firealarm), then it will be really helpful to have existing and high quality arguments and resources on hand to help... (read more)

2 · ThomasWoodside · 2mo
Aaron didn't link it, so if people aren't aware, we are running that competition [https://forum.effectivealtruism.org/posts/p3eiBqnijXPv5pCMA/usd20k-in-prizes-ai-safety-arguments-competition] (judging in progress).
We should expect to worry more about speculative risks

I’m a bit confused by this post. I’m going to summarize the main idea back, and I would appreciate it if you could correct me where I’m misinterpreting.

Human psychology is flawed in such a way that we consistently estimate the probability of existential risk from each cause to be ~10% by default. In reality, the probability of existential risk from particular causes is generally less than 10% [this feels like an implicit assumption], so finding more information about the risks causes us to decrease our worry about those risks. We can get more information a... (read more)

5 · Ben Garfinkel · 2mo
This is a helpful comment - I'll see if I can reframe some points to make them clearer. I'm actually not assuming human psychology is flawed. The post is meant to be talking about how a rational person (or, at least, a boundedly rational person) should update their views.

On the probabilities: I suppose I'm implicitly evoking both a subjective notion of probability ("What's a reasonable credence to assign to X happening?" or "If you were betting on X, what betting odds should you be willing to accept?") and a more objective notion ("How strong is the propensity for X to happen?" or "How likely is X actually?" or "If you replayed the tape a billion times, with slight tweaks to the initial conditions, how often would X happen?").[1] What it means for something to pose a "major risk," in the language I'm using, is for the objective probability of doom to be high.

For example, let's take existential risks from overpopulation. In the 60s and 70s, a lot of serious people were worried about near-term existential risks from overpopulation and environmental depletion. In hindsight, we can see that overpopulation actually wasn't a major risk. However, this wouldn't have been clear to someone first encountering the idea and noticing how many experts took it seriously. I think it might have been reasonable for someone first hearing about The Population Bomb [https://en.wikipedia.org/wiki/The_Population_Bomb] to assign something on the order of a 10% credence to overpopulation being a major risk.

I think, for a small number of other proposed existential risks, we're in a similar epistemic position. We don't yet know enough to say whether it's actually a major risk, but we've heard enough to justify a significant credence in the hypothesis that it is one.[2] If you assign a 10% credence to something not being a major risk, then you should assign a roughly 90% credence to further evidence/arguments helping you see that it's
On funding, trust relationships, and scaling our community [PalmCone memo]

A solution that doesn’t actually work but might be slightly useful: Slow the lemons by making EA-related funding things less appealing than the alternative.

One specific way to do this is to pay less than industry pays for similar positions: an altruistic pay cut. Lightcone, the org Habryka runs, does this: “Our current salary policy is to pay rates competitive with industry salary minus 30%.” At a full-time employment level, this seems like one way to dissuade people who are interested in money, at least assuming they are qualified and hard-working enough to ... (read more)

We Ran an AI Timelines Retreat

Good question. Short answer: despite being an April Fools post, that post seems to encapsulate much of what Yudkowsky actually believes – so the social context is that the post is joking in its tone and content, but not so much in the attitude of the author; sorry I can't link to anything to further substantiate this. I believe Yudkowsky's general policy is to not put numbers on his estimates.

Better answer: Here is a somewhat up-to-date database of predictions about existential risk from some folks in the community. You'll notice these are far below... (read more)

2 · ada · 2mo
Thanks for the reply. I had no idea the spread was so wide (<2% to >98% in the last link you mentioned)! I guess the nice thing about most of these estimates is they are still well above the ridiculously low orders of magnitude that might prompt a sense of 'wait, I should actually upper-bound my estimate of humanity's future QALYs in order to avoid getting mugged by Pascal.' It's a pretty firm foundation for longtermism imo.
What would you like to see Giving What We Can write about?

#17 in the spreadsheet is "How much do charities differ in impact?"

I would love to see an actual distribution of charity cost-effectiveness. As far as I know, that doesn't exist. Most folks rely on Ord (2013), which gives the distribution of cost-effectiveness across health interventions, but it says nothing about where charities actually do work.

The AI Messiah

I really enjoyed this comment, thanks for writing it Thomas!

Is it still hard to get a job in EA? Insights from CEA’s recruitment data

Thanks for writing this up and making it public. Couple comments:

On average 45 applications were submitted to each position.

CEA Core roles received an average of 54 applications each; EOIs received an average of 53 applications each.

Is the first number a typo? Shouldn't it be ~54?

 

Ashby hires 4% of applicants, compared to 2% at CEA

...

Overall, CEA might be slightly more selective than Ashby’s customers, but it does not seem like the difference is large

Whether this is "large" is obviously subjective. When I read this, I see 'CEA is twice as selective as ... (read more)

3 · sapphire · 1mo
The bottom line is actually 'CEA is four times as selective'. This was pointed out elsewhere, but it's a big difference.
3 · Ben_West · 3mo
Fixed, thanks! I agree we hire a smaller percent of total applicants, but we hire a substantially greater percent of applicants who get to the people ops interview stage. I think the latter number is probably the more interesting one because the former is affected a bunch by e.g. if your job posting gets put onto a random job board which gives you a ton of low-quality applicants. But in any case: in some ways CEA is more selective, and in other ways we are less; I think the methodology we used isn't precise enough to make a stronger statement than "we are about the same".
EA needs money more than ever

Congrats on your first forum post!! Now in EA Forum style I’m going to disagree with you... but really, I enjoyed reading this and I’m glad you shared your perspective on this matter. I’m sharing my views not to tell you you’re wrong but to add to the conversation and maybe find a point of synthesis or agreement. I'm actually very glad you posted this.

I don’t think I have an obligation to help all people. I think I have an obligation to do as much good as possible with the resources available to me. This means I should specialize my altruistic work ... (read more)

3 · Alexandre Zajic · 3mo
I think both the total view (my argument) and the marginal view (your argument, as I understand it) converge when you think about the second-order effects of your donations on only the most effective causes. You're right that I argue in this post from the total view of the community, and am effectively saying that going from $50b to $100b is more valuable now than it would have been at any time in the past. But I think this logic also applies to individuals if you believe that your donations will displace other donations to the second-best option, as I think we must believe (from $50b to $50.00001b, for example). This is why I think it's important to step back and make these arguments in both total + absolute terms, rather than how they're typically made for simplicity, in marginal and relative terms (an individual picking earn-to-give vs direct work). It's ultimately the total + absolute view that matters, even though the marginal + relative view allows for the most simplified decision-making. Plus, responding to you in your framework it also just so happens that if you believe longtermism, the growth of longtermism has added not just more second-best options, but probably new first-best options, increasing the first-order efficiency like you say. So I think there are multiple ways to arrive at this conclusion :)
Longtermist EA needs more Phase 2 work

Thanks for the clarification! I would point to this recent post, which covers a similar topic to the last thing you said.

Longtermist EA needs more Phase 2 work

Sorry for the long and disorganized comment.

I agree with your central claim that we need more implementation, but I either disagree with or am confused by a number of other parts of this post. I think the heart of my confusion is that it focuses on only one piece of end-to-end impact stories: Is there a plausible story for how the proposed actions actually make the world better?

You frame this post as “A general strategy for doing good things”. This is not what I care about. I do not care about doing things, I care about things being done. This is semantic but i... (read more)

2 · Owen Cotton-Barratt · 4mo
Re. Gripe #3 (/#3.5): I also think AI stuff is super important and that we're mostly not ready for Phase 2 stuff. But I'm also very worried that a lot of work people do on it is kind of missing the point of what ends up mattering ... So I think that AI alignment etc. would be in a better place if we put more effort into Phase 1.5 stuff. I think that this is supported by having some EA attention on Phase 2 work for things which aren't directly about alignment, but affect the background situation of the world and so are relevant for how well AI goes. Having the concrete Phase 2 work there encourages serious Phase 1.5 work about such things — which probably helps to encourage serious Phase 1.5 work about other AI things (like how we should eventually handle deployment).
7 · Owen Cotton-Barratt · 4mo
Re. Gripe #2: I appreciate I haven't done a perfect job of pinning down the concepts. Rather than try to patch them over now (I think I'll continue to have things that are in some ways flawed even if I add some patches), I'll talk a little more about the motivation for the concepts, in the hope that this can help you to triangulate what I intended:
* I think that there's a (theoretically possible) version of EA which has become sort of corrupt, and continues to gather up resources while failing to deploy them for good ends
* I think keeping a certain amount of Phase 2 work keeps EA honest, and connecting to its roots of trying to do good in the world
* The ability to credibly point to achieved impact is asymmetrically deployable by memeplexes which are really gearing up to do good things and help people achieve more good things, over versions of the memeplex which tell powerful narratives about why they're really the most important thing but will ultimately fail to achieve anything
* In slogan form: "Phase 2 work guarantees EA isn't a Ponzi scheme"
* I think keeping more attention on "what are our current best guesses about concrete things that we can go do" prevents people's pictures of what's important from getting too unmoored from reality
“Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments

Non-original idea: What about a misaligned AI threatening to torture people? An aligned AGI could exist, and then a misaligned AGI could be created. The second AGI threatens to torture or kill lots of people if not given more power. Presumably, it could get in a position where it is able to do this without triggering the Deterrence mode of the aligned AGI, unless there is really good interpretability and surveillance. The first AGI, being a utility maximizer and suffering minimizer, cedes control of the future to the second AGI because it's better than the... (read more)

5 · Mauricio · 4mo
Thanks for this! I'm not sure I get why extortion could give misaligned agents a (big) asymmetric advantage over aligned agents. Here are some things that might each prevent extortion-based takeovers:
* Reasons why successful extortion might not happen:
  * Deterrence might prevent extortion attempts--blackmailing someone is less appealing if they've committed to severe retaliation (cf. Liam Neeson [https://www.youtube.com/watch?v=-LIIf7E-qFI]).
  * Plausibly there'll be good enough interpretability or surveillance (especially since we're conditioning on there being some safe AI--those are disproportionately worlds in which there's good interpretability).
  * Arguably, sufficiently smart and capable agents don't give in to blackmail, especially if they've had time to make commitments. If this applies, the safe AI would be less likely to be blackmailed in the first place, and it would not cede anything if it is blackmailed.
  * Plausibly, the aligned AI would be aligned to values that would not accept such a scope-insensitive trade, even if they were willing to give in to threats.
* Other reasons why extortion might not create asymmetric advantages:
  * Plausibly, the aligned AI will be aligned to values that would also be fine with doing extortion.

Maybe a limitation of this analogy is that it assumes away most of the above anti-extortion mechanisms. (Also, if the human blackmail scenario assumes that many humans can each unilaterally cede control, that also makes it easier for extortion to succeed than if power is more centralized.)

On the other point - seems right, I agree offense is often favored by default. Still:
* Deterrence and coordination can happen even (especially?) when offense is favored.
* Since the aligned AI may start off with and then grow a lead, some degree of offense being favored may not be enough for things to go wrong; the defe
How I failed to form views on AI safety

Thanks for writing this, it was fascinating to hear about your journey here. I also fell into the cognitive block of “I can’t possibly contribute to this problem, so I’m not going to learn or think more about it.” I think this block was quite bad in that it got in the way of me having true beliefs, or even trying to, for quite a few months. This wasn’t something I explicitly believed, but I think it implicitly affected how much energy I put into understanding or trying to be convinced by AI safety arguments. I wouldn’t have realized it without your post, b... (read more)

A visualization of some orgs in the AI Safety Pipeline

Thank you for your comment. Personally, I'm not too bullish on academia, but you make good points as to why it should be included. I've updated the graphic and it now says this: "*I don’t know very much about academic programs in this space. They seem to vary in their relevance, but it is definitely possible to gain the skills in academia to contribute to professional alignment research. This looks like a good place for further interest: https://futureoflife.org/team/ai-existential-safety-community/"

If you have other ideas you would like expressed in the graphic I am happy to include them!

A visualization of some orgs in the AI Safety Pipeline

Thanks! Nudged. I'm going to not include CERI and CHERI at the moment because I don't know much about them. I'll make a note of them.

A visualization of some orgs in the AI Safety Pipeline

Thanks for the reminder of this! Will update. Some don't have websites but I'll link what I can find.

A visualization of some orgs in the AI Safety Pipeline

Good question. I think "Learning the Basics" is specific to AI Safety basics and does not require a strong background in AI/ML. My sense is that the AI Safety basics and ML are slightly independent. The ML side of things simply isn't pictured here. For example, the MLAB (Machine Learning for Alignment Bootcamp) program which ran a few months ago focused on taking people with good software engineering skills and bringing them up to speed on ML. As far as I can tell, the focus was not on alignment specifically, but was intended for people likely to work in a... (read more)

Reframing AI risk

Hey! I love this video. It's been one of my favorite youtube videos in the last few years, but I don't think it highlights some of the major risks from advanced AI. The video definitely highlights bad actors and the need to regulate the use of powerful technologies. However, risks from advanced AI include both that and some other really scary stuff. I'm particularly worried about accidents arising from very powerful AI systems, and especially existential catastrophes. 

I think the key reason that this is my focus is because I look at AI risks through t... (read more)

Why should we care about existential risk?

Congrats on your first post! I appreciate reading your perspective on this – it's well articulated. 

I think I disagree about how likely existential risk from advanced AI is. You write:

Given that life is capable of thriving all on its own via evolution, AI would have to see the existence of any life as a threat for it to actively pursue extinction

In my view, an AGI (artificial general intelligence) is a self-aware agent with a set of goals and the capability to pursue those goals very well. Sure, if such an agent views humans as a threat to its own exi... (read more)

The Vultures Are Circling

Thanks for this comment, Mauricio. I always appreciate you trying to dive deeper – and I think it's quite important here. I largely agree with you. 

4 · Mauricio · 4mo
Thanks, Aaron!
New GPT3 Impressive Capabilities - InstructGPT3 [1/2]

Looking forward to the second post! I enjoy reading the fun/creative examples and hearing about how this differs from past models.

A Gentle Introduction to Long-Term Thinking

This is great, I enjoyed reading it. Regarding Footnote #8, I would consider mentioning the following example for why discounting makes no sense:

Robert Wiblin: I think we needn’t dwell on this too long, because as you say, it has basically 0% support among people who seriously thought about it, but just to give an idea how crazy it is, if you applied a time preference of just 1% per annum, pure rate of time preference of just 1% per annum, that would imply that the welfare of Tutankhamun was more important than that of all seven billion humans that are ali

... (read more)
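For readers who want to see the arithmetic behind the quoted claim, here is a minimal sketch. It assumes round numbers of my own choosing (Tutankhamun lived roughly 3,300 years ago; about seven billion people are alive today), not figures from the interview.

```python
# Rough illustration of the quoted discounting point, using assumed round numbers.
years_ago = 3300          # approximate time since Tutankhamun (assumed)
discount_rate = 0.01      # 1% per annum pure rate of time preference
population_today = 7e9    # roughly seven billion people alive today (assumed)

# Under a pure rate of time preference, welfare further back in time is weighted
# up by the same factor that future welfare is weighted down.
weight = (1 + discount_rate) ** years_ago
print(f"Weight on one Tutankhamun-era person vs. one person today: {weight:.1e}")
print("Outweighs everyone alive today?", weight > population_today)
```

The weight comes out around 10^14, several orders of magnitude more than today's population, which is why a pure rate of time preference over such horizons has essentially no support.
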
On presenting the case for AI risk

Thanks for writing this up, it’s fantastic to get a variety of perspectives on how different messaging strategies work.

  1. Do you have evidence or a sense of whether people you have talked to have changed their actions as a result? I worry that the approach you use is so similar to what people already think that it doesn’t lead to shifts in behavior. (But we need nudges where we can get them)
  2. I also worry about anchoring on small near term problems and this leading to a moral-licensing type effect for safety (and a false sense of security). It is unclear how like
... (read more)
5 · Aryeh Englander · 5mo
Yes, I have seen people become more actively interested in joining or promoting projects related to AI safety. More importantly, I think it creates an AI safety culture and mentality. I'll have a lot more to say about all of this in my (hopefully) forthcoming post on why I think promoting near-term research is valuable.
Comments for shorter Cold Takes pieces

For those particularly concerned with counterfactual impact, this is an argument to work on problems or in fields that are just beginning or don’t exist yet, where many of the wins haven’t been realized; this is not a novel argument. I think the bigger update is that “ideas get harder to find” indicates that you may not need Beethoven’s creativity or Newton’s math skills in order to make progress on hard problems which are relatively new or have received little attention. In particular, AI Safety seems like a key place where this rings true, in my opinion.

Some thoughts on vegetarianism and veganism

Thanks for writing this! Epistemic note: I am engaging in highly motivated reasoning and arguing for veg*n. 

  1. As BenStewart mentioned, virtue ethics seems relevant. I would similarly point to Kant’s moral imperative of universalizability: "act only in accordance with that maxim through which you can at the same time will that it become a universal law.” Not engaging in moral atrocities is a case where we should follow such an ideal in my opinion. We should at least consider the implications under moral uncertainty and worldview diversification. 
  2. My
... (read more)
9 · MichaelStJules · 6mo
On 5, diet change seems very very unlikely to make a difference on an individual level, because of how large the markets are. I think we're (possibly much) more likely to make a difference through careers and donations. Maybe we have more robust estimates of the expected effects of diet (on farmed animals, at least) than these other things, though. Diversification/hedging seems valuable to me with deep uncertainty or moral uncertainty.
Idea: Red-teaming fellowships

Thanks for writing this up. It seems like a good idea, and you address what I view as the main risks. I think that (contingent on a program like this going well) there is a pretty good chance that it would generate useful insights (Why #3). This seems particularly important to me for a couple reasons. 

  1. Having better ideas and quality scrutiny = good
  2. Relatively new EAs who do a project like this and have their work be received as meaningful/valuable would probably feel much more accepted/wanted in the community 

I would therefore add what I think is ... (read more)

We should be paying Intro Fellows

Thanks for your response, Akash! I know I'm late to reply, so forgive me. 

Especially thanks for bringing up 1.2 as a failure mode where people aren't engaged but continue coming. This seems worrisome, and I think I didn't consider it because it's not something I've noticed in my facilitating. But it's obviously very important. 

I agree that there would be lots of variability across groups, but I'm unsure what this implies. I am not totally against high-risk, high-reward strategies, and this probably depends on existential risk timelines as wel... (read more)

We should be paying Intro Fellows

Hey Michael! I read your comment when you wrote it, but am only replying now :/ 

Thank you for your thoughts; you raise important questions. One I want to home in on is:

if EA is so focused on effectiveness, why does it make sense to pay people to just learn about EA?

In a way, this seems like the classic question of "how can we convert money into X?", where X is sometimes organizer time. Here, X is "highly engaged EAs who use an EA mindset to determine their career". One proposed answer is to give out tons of books. I'm not sure if we have good cos... (read more)

Aaron_Scher's Shortform

Hey Ed, thanks for your response. I have no disagreement on 1 because I have no clue what the upper end of people applying is – simply that it's much higher than the number who will be accepted and the number of people who (I think) will do a good job.

2. I think we do disagree here. I think these qualities are relatively common in the CBers and group organizers I know (small sample). I agree that the short app timeline will decrease the number of great applicants applying; I'm also unsure about b; c seems like the biggest factor to me.

Probably the crux here is what proportion of applicants have the skills you mention. My guess is ⅓ to ⅔, but this is based on the people I know, which may be higher than in reality.

2 · Edward Tranter · 6mo
Awesome - thanks for the response. Yes, I agree with the crux (this also may come from different conceptions of the skills themselves). I'll message you!
Aaron_Scher's Shortform

Thanks for your response! I don't think I disagree with anything you're saying, but I definitely think it's hard. That is, the burden of proof for 1, 2, and 3 is really high in progressive circles, because the starting assumption is that charity does not do 1, 2, or 3. Because of this, simplified messages are easily misinterpreted.
I really like this: "The reason being that they redistribute power, not just resources."

2 · Peter · 6mo
Yeah when I was reading it I was thinking "these are high bars to reach" but I think they cover all the concerns I've heard. Oh glad you liked it! I probably could have said that from the start, now that I think about it.
Aaron_Scher's Shortform

Yes, I agree that this is unclear. Depending on AI timelines, the long-term might not matter too much. To add to your list:

- What do you or others view as talent/skill gaps in the EA community; how can you build those skills/talents in a job that you're more likely to get? (I'm thinking person/project management, good mentoring, marketing skills, as a couple examples)

Aaron_Scher's Shortform

Random journaling and my predictions: Pre-Retrospective on the Campus Specialist role.
Applications for the Campus Specialist role at CEA close in like 5 days. Joan Gass's talk at EAG about this was really good, and it has led to many awesome, talented people believing they should do Uni group community building full time. 20-50 people are going to apply for this role, of which at least 20 would do an awesome job.

Because the role is new, CEA is going to hire like 8-12 people for this role; these people are going to do great things for comm... (read more)

2 · Edward Tranter · 6mo
Thanks for posting this, Aaron! I'm also applying to the role, and your thoughts are extremely well-put and on the mark. I think we have two disagreements here.
1. My thought is that over 50 people are going to apply (my expectation is 65+); perhaps this doesn't matter too much (quite a few disappointed people regardless), and I don't think either of us has particularly good evidence for this.
2. I'm uncertain as to whether 40% (assuming your prediction of 50 applications) would do an "awesome" job. 'Awesome' needs to be defined further here, but, without going into the weeds, I think that a recently graduated person having a fleshed-out entrepreneurial aptitude + charisma + a deep understanding of EA is extremely rare (see Alex HT's post [https://forum.effectivealtruism.org/posts/FjDpyJNnzK8teSu4J/a-huge-opportunity-for-impact-movement-building-at-top-2]).

More on the 2nd thought: I'd reckon (high uncertainty) that CEA may struggle to find more than ~12 people like this. This does not imply that there are not far more than 12 qualified people for the job. Primary reasons I think this: a) the short application timeline; b) my uncertainty about the degree of headhunting that's gone on; and c) the fact that a lot of the best community builders I know (this is a limited dataset, however) already have jobs lined up. All of this depends on who is graduating this year and who is applying, of course.
9 · Kirsten · 6mo
The other people who were good fits but weren't hired might do something less impactful over the next two years, but I think it's still unclear whether their career will be less impactful in the longer term. There are lots of jobs with quality training and management that could teach you a lot in the two years you would've been a campus specialist. I would encourage everyone who's applying to be a campus specialist to also apply to some of those jobs, and think carefully about which to pick if offered both. Some things you could try:
- Testing your fit for a policy/politics career
- Learning the skills you'd need to help run a new EA megacharity
- Working or volunteering as a community organizer
2 · Peter · 6mo
Hey I applied too! Hopefully at least one of us gets it. I think they probably got more than 50 applications, so it almost starts to become a lottery at that point if they only have a few spots and everyone seems like they could do it well. Or maybe that's just easier for me to think haha.
Partnerships between the EA community and GLAMs (galleries, libraries, archives, and museums)

Love this idea, and your suggestion of talks with AMNH; it seems like there could be lots of interesting content around longtermism or existential risk with a collaboration there. A small idea would be asking libraries to buy EA and rationality-related books (if they don’t have them), and make sure that they’re included with other related books. Like the “business self-help” and “how to be a top CEO” sections should probably include the 80k book imo.

Pilot study results: Cost-effectiveness information did not increase interest in EA

Thanks for your thorough comment! Yeah, I was shooting for about 60 participants, but due to time constraints and this being a pilot study I only ended up with 44, so it's even more underpowered.

Intuitively I would expect a larger effect size, given that I don't consider the manipulation to be particularly subtle; but yes, it was much subtler than it could have been. This is something I will definitely explore more if I continue this project; for example, adding visuals and a manipulation check might do a better job of making the manipulation salient. I would li... (read more)
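As a rough illustration of the "underpowered" point above, here is a sketch of the smallest effect such a sample could detect. This is not the study's actual analysis: it assumes the participants were split evenly between two conditions, a standard alpha of .05, and a simple two-group comparison.

```python
# Hypothetical power calculation, assuming two equal groups and alpha = .05;
# the actual study design and analysis may differ.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for total_n in (44, 60):
    n_per_group = total_n / 2  # assumes an even split across two conditions
    d = power_analysis.solve_power(nobs1=n_per_group, alpha=0.05, power=0.8,
                                   ratio=1.0, alternative='two-sided')
    print(f"total n = {total_n}: minimum detectable effect size d ≈ {d:.2f}")
```

Under these assumptions, both minimum detectable effects come out fairly large, consistent with the point that a subtle manipulation would be hard to detect at this sample size.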

What are the best (brief) resources to introduce EA & longtermism?

I really like Ajeya Cotra’s Intro EA talk (https://youtu.be/48VAQtGmfWY) (35 mins 1x speed). I also like this article on longtermism (https://80000hours.org/articles/future-generations/) although it took me about 25 mins to read. This is a really important question, I’m glad you’re asking it, and I would really like to see more empirical work on it rather than simply “I like this article” or “a few people I talked to like this video” which seems to be the current state. I’m considering spending the second semester of my undergrad thesis on trying to figure... (read more)

1 · Jeremy · 8mo
MacAskill’s Ted talk is a good candidate at ~12 minutes long. Doesn’t get too in depth on longtermism. Not sure if that was a requirement.
Supporting Video, Audio, and other non-text media on the Forum

Having more types of content on the forum is appealing to me. There's probably discussion of this elsewhere, but would it be difficult to have audio versions of all posts? Like a built-in text-to-speech option.

5 · Ben_West · 8mo
Yep, this exists [https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library]!
3 · Jeremy · 8mo
People may know this, but I only recently figured it out, and caught up on my forum reading during a long drive. In iOS, under accessibility, you can set it up so that swiping down with 2 fingers from the top of any screen with text will read the screen to you. It’s not perfect but it got the job done. I would imagine that most other platforms have a similar feature if you dig around.
A case for the effectiveness of protest

Thank you for looking into this! This strikes me as really important!! Your post is long so I didn't read it – sorry – but this made me think of an article that I didn't see you cite which might be relevant: https://www.cambridge.org/core/services/aop-cambridge-core/content/view/136610C8C040C3D92F041BB2EFC3034C/S000305542000009Xa.pdf/agenda_seeding_how_1960s_black_protests_moved_elites_public_opinion_and_voting.pdf

6 · James Ozden · 8mo
Hi Aaron, thanks for your kind comment! Also thanks for linking that research, I remember seeing it a while ago but forgot about it so thanks for the reminder! Will look into that further for the next round of research.
Aaron_Scher's Shortform

Progressives might be turned off by the phrasing of EA as "helping others." Here's my understanding of why. Speaking anecdotally from my ongoing experience as a college student in the US, mutual aid is getting tons of support among progressives these days. Mutual aid involves members of a community asking for assistance (often monetary) from their community, and the community helping out. This is viewed as a reciprocal relationship in which different people will need help with different things and at different times from one another, so you help out when y... (read more)

2 · Peter · 6mo
I think many progressives and others on the left value mutual aid because they see it as more sustainable and genuine and with fewer negative strings attached. I think they are generally fine with aid and helping others as long as they can be shown good evidence that 1) the aid is not going to be used to prevent other positive changes (basically things like exchanging humanitarian aid for continued resource extraction from a region that's worth more than the total aid contributed, or pressuring/requiring a housing justice org to stop organizing tenants to stand up for their rights in exchange for more funding for their shelter initiatives), 2) the aid is done in a competent manner so that it doesn't get stolen by governments, wasted, or taken by other corrupt actors, and 3) it respects local wisdom and empowers people to have more of a say over decisions that most affect them. Another example would be conservation efforts that kick indigenous people off their land vs ones that center their practical experience and respect their rights.

There's a big difference between donating to a food bank and creating the infrastructure for people to organize their own food bank and/or grow their own food of their choosing. The first one is more narrowly focused on food security whereas the latter fits with a broader food justice or food sovereignty approach. I think both are important. Many people believe the latter kind of empowerment initiatives are more sustainable in the long run and less dependent on shifts in funding, even if they're harder to set up initially. The reason being that they redistribute power, not just resources.

To sum it up, something like "Give a man a fish and he will eat for a day; teach a community to fish, and give them a place to do so, and they will eat for generations."
The Explanatory Obstacle of EA

Great post, I totally agree that we need more work in this area. Also agree with other commenters that volunteering isn’t a main focus of EA advice, but it probably should be – given the points Mauricio made.

Nitpicky, but it would have been nice to have a summary at the start of the post.

I want to second Bonus #2, I think EA is significantly about a toolkit for helping others effectively, and using examples of tools seems helpful for an engaging pitch. Is anybody familiar with a post or article listing the main EA tools? One of my side-projects is developi... (read more)

3 · Adam Steinberg · 8mo
If you end up with a list of tools, you could add 'em to the chart I link to in the comment above. It's meant to collect just about everything important. If you'd like.

Can you (or someone) write a TLDR of why "helping others" would turn off "progressives"?

We need alternatives to Intro EA Fellowships

Again, thank you for some amazing thoughts. I'll only respond to one piece:

But, anecdotally, it seems like a big chunk (most?) of the value EA groups can provide comes from:

  • Taking people who are already into weird EA stuff and connecting them with one another
  • And taking people who are unusually open/receptive to weird EA stuff and connecting them with the more experienced EAs

I obviously can't disagree with your anecdotal experience, but I think what you're talking about here is closely related to what I see as one of EA... (read more)

2 · Mauricio · 9mo
Good points! Agree that reaching out beyond overrepresented EA demographics is important--I'm also optimistic that this can be done without turning off people who really jive with EA mindsets. (I wish I could offer more than anecdotes, but I think over half of the members of my local group who are just getting involved and seem most enthusiastic about EA stuff are women or POC.) I also wouldn't make that claim about "weird people" in general. Still, I think it's pretty straightforward that people who are unusual along certain traits know how to do good better than others, e.g. people who are unusually concerned with doing good well will probably do good better than people who don't care that much. Man, I don't know, I really buy that we're always in triage [https://mhollyelmoreblog.wordpress.com/2016/08/26/we-are-in-triage-every-second-of-every-day/] , and that unfortunately choosing a less altruistically efficient allocation of resources just amounts to letting more bad things happen. I agree it's a shame if some well-off people don't get the nice personal enrichment of an EA fellowship--but it seems so much worse if, like, more kids die because we couldn't face hard decisions and focus our resources on what would help the most. Edit: on rereading I realize I may have interpreted your comment too literally--sorry if I misunderstood. Maybe your point about efficient allocation was that some forms of meta-EA might naively look like efficient allocation of resources without being all that efficient (because of e.g. missing out on benefits of diversity), so less naive efficiency-seeking may be warranted? I'm sympathetic to that.
We need alternatives to Intro EA Fellowships

Good points. We should have explained what our approach is in a separate post that we could link to; because I didn't explain it too well in my comment. We are trying to frame the project like so: This is not the end goal. It is practice at what this process looks like, it is a way to improve our community in a small but meaningful way. Put another way, the primary goals are skill building and building our club's reputation on campus. Another goal is to just try more stuff to help meta-EA-community building; even though we have a ton of resources on commu... (read more)

6 · Mauricio · 9mo
Thanks for the thoughtful response! I think you're right that EA projects being legibly good to people unsympathetic with the community is tough.

I like the first part; I'm still a bit nervous about the second part? Like, isn't one of the core insights of EA that "we can and should do much better than 'small but meaningful'"? And I guess even with the first part (local projects as practice), advice I've heard about practice in many other contexts (e.g. practicing skills for school, or musical instruments, or sports, or teaching computers to solve problems by trial and error) is that practice is most useful when it's as close as possible to the real thing. So maybe we can give group members even better practice by encouraging them to practice unbounded prioritization/projects?

There's a tricky question here about who the target audience of our advertising is. I think you're right that working on mainstream/visible problems is good for appealing to the average college student. But, anecdotally, it seems like a big chunk (most?) of the value EA groups can provide comes from:
* Taking people who are already into weird EA stuff and connecting them with one another
* And taking people who are unusually open/receptive to weird EA stuff and connecting them with the more experienced EAs

And there seems to be a tradeoff where branding/style that strongly appeals to the average student might be a turnoff for the above audiences. The above audiences are of course much smaller in number, but I suspect they make up for it by being much more likely to--given the right environment--get very into this stuff and have tons of impact. Personally, I think there's a good chance I wouldn't have gotten very involved with my local group (which I'm guessing would have significantly decreased my future impact, although I wouldn't have known it) if it hadn't been clear to me that they were serious about this stuff.

That's fair. I guess we could say one could always spend that hou
We need alternatives to Intro EA Fellowships

Yes. Will do an end of the year assessment of what worked and what didn't. Focus will likely be on Winter Break Programming and Project Fellowships.

We need alternatives to Intro EA Fellowships

Thanks for posting this! One worry I have, particularly relevant to a Project Based Fellowship, is that it would not involve sufficiently learning key ideas. Mauricio discussed this, but I think there's even more to it than is obvious. In this critique of EA (https://www.lesswrong.com/posts/CZmkPvzkMdQJxXy54/another-critique-of-effective-altruism), it is brought up that we frequently "Over-focus on “tried and true” and “default” options, which may both reduce actual impact and decrease exploration of new potentially high-value opportunities." The less cont... (read more)

3 · ChanaMessinger · 9mo
I think I strongly agree with the value of learning about at least the core arguments for a bunch of different causes. Taking seriously that some people are devoting their whole lives to making the future go better, or worrying about lie detection, or pandemics that have never happened, or digital people, or animals that seem to most people nonsentient really pushes your mind in a particular way, and in some ways, the weirder the better, at least for the purpose of really expanding what people think of when they think of "doing good"
2 · ChanaMessinger · 9mo
Having a structured set of resources that people could engage with on breaks seems really valuable. It could let highly engaged participants who want to go faster do the "Thanksgiving Break" binge-read, or the "One/two week break" set of readings, and so on, with all of those having activities/interactive elements if that seems valuable. Is this something you're thinking of writing up?
4 · Mauricio · 9mo
Thanks for this! Tangent: Hm, I'm kind of nervous about the norms an EA group might set by limiting its projects' ambitions to its local community. Like, we know a dollar or an hour of work can do way more good if it's aimed at helping people in extreme poverty than US college students... what group norms might we be setting if our projects' scope overlooks this?

At the same time, I think you're spot on in seeing that many students want to do projects, and I really appreciate your work toward offering something to these students. As a tweak on the approach you discuss, what are your intuitions about having group members do projects with global scope? I know there's a bunch of EA undergrads who are working on projects like doing research on EA causes, or running classes on AI safety or alternative proteins, or compiling relevant internship opportunities, or running training programs that help prepare people to tackle global issues, or running global EA outreach programs. This makes me optimistic that global-scope projects:
* Are feasible (since they're being done)
* Are enough to excite the students who want to get to doing stuff
* And have a decent amount of direct impact, while reinforcing core EA mindsets
Aaron_Scher's Shortform

A Simpler Version of Pascal's Mugging. Background: I found Bostrom’s original piece (https://www.nickbostrom.com/papers/pascal.pdf) unnecessarily confusing, and numerous Fellows in the EA VP Intro Fellowship have also been confused by it. I think we can be more accessible in our ideas. I wrote this in about 30 minutes though, so it's probably not very good. I would greatly appreciate feedback on how to improve it. I also can't decide if it would be useful to have at the end a section of "possible solutions", because as far as I can tell, these solutions are ... (read more)
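Since the shortform is about making Pascal's mugging more accessible, here is a toy expected-value calculation showing the structure of the problem. The numbers are my own illustrative choices, not Bostrom's or the shortform's.

```python
# Toy numbers chosen only to show the structure of Pascal's mugging.
p_mugger_can_deliver = 1e-10   # assumed tiny credence that the mugger is telling the truth
promised_payoff = 1e15         # assumed astronomically large reward (arbitrary utility units)
cost_of_paying = 10            # assumed small, certain cost of handing over your wallet

ev_of_paying = p_mugger_can_deliver * promised_payoff - cost_of_paying
print(ev_of_paying)            # 99990.0: naive expected value says "pay up", which is the puzzle
```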

1 · acylhalide · 9mo
P.S. For what it's worth, I got an entirely different moral from this. Namely that 200 trillion days of happiness makes no sense to the human brain. I would not submit to a million days of torture followed by 200 trillion days of happiness; I'd rather stick to the status quo. No probabilities or x-risks involved.
Many Undergrads Should Take Light Courseloads

Great post Mauricio! I'm a senior undergrad this year and this is the first semester I have deliberately taken fewer classes and focussed on things I find more important/interesting (mostly EA organizing). Best decision I've made in a while, and I'm getting much more out of my college experience now than before. 

In regard to caveat 3 and people who benefit from structure/oversight, I would suggest the following:

Participate in or facilitate fellowships/reading groups for EA if EA is something you want to do. Having other people depend on you or expect things from you can be really motivating. 

4 · Mauricio · 10mo
Thanks, Aaron! I've felt similarly--crazy how much time (and effort/attention/stress) that frees up :) I'm into the general point here. I'd also encourage people to be much more ambitious in applying this advice--anecdotally, a significantly lighter courseload leaves enough time to e.g. organize whole fellowships (although facilitation/participation can definitely be a good starting point).