Recent Discussion

I'm interested in how many 2021 dollars you think it would be rational for EA to be willing to trade (or perhaps the equivalent in human capital) against 0.01% (or 1 basis point) of existential risk.

This question is potentially extremely decision-relevant for EA orgs doing prioritization, like Rethink Priorities. For example, suppose we assign $X to preventing 0.01% of existential risk, and we take Toby Ord's figures on existential risk (pg. 167, The Precipice) at face value. Then we should not prioritize asteroid risk (~1/1,000,000 risk this century, i.e. 0.01 basis points, so eliminating it entirely is worth ~1% of $X) if all realistic interventions we can think of cost >>1% of $X, nor prioritize climate change (~1/1,000 risk this century, i.e. 10 basis points) if realistic interventions cost >>$10X, at least on direct longtermist grounds (though there might still be neartermist or instrumental reasons for doing...

Answer by Michael_Wiebe (2 points, 1h): Suppose there are N people and a baseline existential risk r. There's an intervention that reduces risk by δ×100% (i.e., a fraction of the baseline risk, not percentage points). Outcome with no intervention: rN people die. Outcome with intervention: (1−δ)rN people die. Difference between outcomes: δrN. So we should be willing to pay up to δr·u(N) for the intervention, where u(N) is the dollar value of N lives. [Extension: account for time periods, with discounting for exogenous risks.] I think this approach makes more sense than starting by assigning $X to 0.01% risk reductions and then looking at the cost of available interventions.
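To make that concrete, here is a minimal Python sketch of the computation. The linear valuation u(N) = N × value-per-life and all numbers in the example are my own illustrative assumptions, not Michael's:

```python
def max_willingness_to_pay(n_people, baseline_risk, relative_reduction, value_per_life):
    """Upper bound on what to pay for an intervention that cuts existential
    risk by relative_reduction * 100% (a fraction of the baseline risk r,
    not percentage points), assuming a linear u(N) = N * value_per_life."""
    expected_lives_saved = relative_reduction * baseline_risk * n_people  # delta * r * N
    return expected_lives_saved * value_per_life                          # delta * r * u(N)

# Illustrative numbers only: 8 billion people, Ord's ~1/6 total risk for
# this century as the baseline, delta = 0.0001 (0.01%), and an assumed
# $5M dollar value per life.
print(f"${max_willingness_to_pay(8e9, 1/6, 1e-4, 5e6):,.0f}")  # $666,666,666,667
```

This is only an upper bound on spending, per Michael's framing; whether any available intervention actually delivers δ at that price is the separate prioritization question.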
Michael_Wiebe (1 point, 2h): If asteroid risk is 1/1,000,000, how are you thinking about a 0.01% reduction? Do you mean 0.01pp = 1/10,000, in which case you're reducing asteroid risk to 0? Or reducing it by 0.01% of the given risk, in which case the amount of reduction varies across risk categories? The definition of basis point [https://en.wikipedia.org/wiki/Basis_point] seems to indicate the former.

If asteroid risk is 1/1,000,000, how are you thinking about a 0.01% reduction?

A 0.01% (i.e., 1/10,000) reduction in total xrisk is (1/10,000)/(1/1,000,000) = 100x the reduction you'd get from eliminating asteroid risk entirely.

The classic definition comes from Bostrom:

Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

But this definition, while poetic and gesturing at something real, is more than a bit vague, and many people are unhappy with it, judging from the long chain of clarifying questions in my linked question. So I'm interested in proposed alternatives that the EA community and/or leading longtermist or xrisk researchers may wish to adopt instead.

Alternative definitions should ideally be precise, clear, unambiguous, and hopefully not too long.

It seems useful to get more familiar with the motivations behind not being convinced by effective altruism (or indeed, any kind of altruism). One way of doing this is by looking at the rational arguments against it. Another is to look at the emotional states or irrational biases that lead people to dismiss it without even having thought about it. One of the more frequent such states, in my opinion, is “resignation” (or “fatalism”). No one thinks of themselves as resigned or fatalist. And it would probably not go down very well wi... (read more)

Inspired by Yonatan's post here.

Why do I think I’ll be useful?

I'm very much early-career myself (finished undergrad in 2019). I've interned at Google and Uber (both South Bay) and worked at Citadel (Chicago); I'm currently at Scale AI (San Francisco). My EA experience includes facilitating UCLA EA's Arete Fellowship (Fall '20), Stanford's AI Safety reading group (Fall '20), and being a Tianxia Fellow (2021). My mentorship experience includes at least six 1:1 college mentees and five years of sporadic 1:1 tutoring.

I enjoy mentoring, and meeting more EAs in this way seems like fun!

I’m offering help with (drawing on my personal experience):

  • Motivation and accountability
  • Improving technical skills, e.g. Python, C++, and ML
  • Energy and sleep
  • Diet and exercise -- keto, intermittent fasting, cardio, lowkey strength
  • Mental health (not a therapist!) - e.g. imposter syndrome, depression,
...

If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)


If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)


Open threads are also a place to share good news, big or small. See this post for ideas.

Charles He (2 points, 3h): This is a really thoughtful and useful question. Most informed people agree that beef and dairy cows live the best lives of all factory-farmed animals: better than pigs, and much, much better than chickens. Further, as you point out, beef and dairy cows produce much more food per animal (or per suffering-weighted day alive). A calculator here can help make the above thoughts more concrete [https://reducing-suffering.org/how-much-direct-suffering-is-caused-by-various-animal-foods/]; maybe you have seen it. I think you meant prevents painful deaths? With this change, I don't know, but this seems plausible. (I think the amount of suffering depends on the land use and pesticides, but I don't know if the scientific understanding is settled, and this subtopic may be distracting.) I think you have a great question. Note that extreme suffering in factory farming probably comes from very specific issues, concentrated in a few types of animals (caged hens suffering to death [https://countinganimals.com/is-vegan-outreach-right-about-how-many-animals-suffer-to-death/#:~:text=Chickens%20arrive%20dead%20for%20a,days%20to%20reduce%20fecal%20contamination.] by the millions, and other graphic situations). This means that, if the assumptions in this discussion are true, and our concern is animal suffering, decisions like beef versus tofu, or even much larger dietary decisions, seem small in comparison.
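A minimal sketch of the kind of per-food comparison the linked reducing-suffering.org calculator makes; every number below is an illustrative placeholder, not the calculator's actual figures:

```python
# Sketch of a suffering-per-kg comparison across animal foods.
# All numbers are made-up placeholders for illustration only.
FOODS = {
    # food: (days the animal lives, kg of food produced per animal,
    #        subjective suffering weight per day of life)
    "beef":    (402, 212.0, 1.0),
    "chicken": (42,  1.8,   3.0),
    "eggs":    (365, 18.0,  3.5),
}

def suffering_per_kg(days_alive, kg_per_animal, weight_per_day):
    """Suffering-weighted days of life embodied in one kg of food."""
    return days_alive * weight_per_day / kg_per_animal

for food, params in sorted(FOODS.items()):
    print(f"{food}: {suffering_per_kg(*params):.1f} weighted days/kg")
```

Even with placeholder numbers, the structure shows why per-animal yield matters: one cow yields far more food than one chicken, so beef scores far fewer suffering-weighted days per kg.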
Lucas Lewit-Mendes (6 points, 2h): Thanks Charles for your thoughtful response. I just wanted to note that I'm referring to 100% pasture-fed lamb/beef. I think it's very unlikely that it's ethically permissible to eat factory-farmed lamb/beef, even if it's less bad than eating chickens, etc. I'd also caution against eating dairy, since calves and mothers show signs of sadness [https://kb.rspca.org.au/knowledge-base/what-happens-to-bobby-calves/] when separated, although each dairy cow produces a lot [https://reducing-suffering.org/how-much-direct-suffering-is-caused-by-various-animal-foods/] of dairy (as you noted). Sorry, I probably could've worded this better, but my original wording was what I meant. My understanding is that crop cultivation for grains and beans causes painful wild animal deaths, but grass-fed cows/lamb do not eat crops and therefore, as far as I'm aware, do not cause wild animal deaths. I certainly agree with your conclusion that not eating factory-farmed chicken, pork, and eggs (and probably also fish) is the most important step! But I'd still like to do the very best with my own consumption.

Everything you said is fair and valid and seems right to me. Thank you for your thoughtful choices and reasoning.

 

Edit: I forgot you said entirely pasture/grass-fed beef, so the thoughts below are moot.

A quibble:

Sorry, I probably could've worded this better, but my original wording was what I meant. My understanding is that crop cultivation for grains and beans causes painful wild animal deaths, but grass-fed cows/lamb do not eat crops and therefore, as far as I'm aware, do not cause wild animal deaths. 

  1. It seems that beef and dairy cows both u
... (read more)

This is a linkpost for https://sashachapin.substack.com/p/your-intelligent-conscientious-in

I really liked the piece. It resonated with my experiences in EA. I don't know that I agree with the mechanisms Sasha proposes, but I buy a lot of the observations they're meant to explain. 

I asked Sasha for his permission to post this (and heavily quote it). He said that he hopes it comes off as more than a criticism of EA/rationality specifically; it's more a "general nerd social patterns" thing. I only quoted parts very related to EA, which doesn't help assuage his worry :(

There's more behind the link :) 

 

So, I’ve noticed that a significant number of my friends in the Rationalist and Effective Altruist communities seem to stumble into pits of despair, generally when they structure their lives too rigidly around

...
Aaron Gertler (2 points, 8h): That seems like the opposite of my impression. My impression is that the majority of people in EA positions who are less active online are more likely to have normal work schedules, while the people who spend the most time online are those who also spend the most time doing what they think of as "EA work" (sometimes they're just really into their jobs; sometimes they don't have a formal job but spend a lot of time interacting in various often-effortful ways). Thanks for sharing your impression of people you know; if you live with a bunch of people who have these jobs, you're in a better position to estimate work time than I am (not counting CEA). When you say "working >45h", do you mean "the work they do actually requires >45 hours of focused time", or "they spend >45 hours/week in 'work mode', even if some of that time is spent on breaks, conversation, idle Forum browsing, etc."?
Linch (4 points, 6h): Sorry, to be clear, here's my perspective: If you only observe EA culture from online interactions, you get the impression that EAs think about effective altruism much more regularly than they actually do. This extends to activities like spending lots of time "doing EA-adjacent things", including the Forum, EA social media, casual reading, having EA conversations, etc. That reference class includes people volunteering their time, and people who find thinking about EA relaxing compared to their stressful day jobs. However, if we're referring to actual hours of work done on core EA topics in their day jobs, EA org employees who are less active online put in more hours than EA org employees who are more active online. They spend >>45h on their laptops or other computing devices, and (unlike me) if I glance over at their laptops during the day, it almost always appears to be something work-related: a work videocall, GitHub, the command line, Google Docs, etc. A lot of the work isn't as focused, e.g., lots of calls, management, screening applications, sysadmin stuff, taking classes, etc. My guess is that very few people I personally know spend >45h doing deep focused work in research or writing. I think this is a bit more common in programming, and a lot more common in trading. I think for the vast majority of people, including most people in EA, it's both psychologically and organizationally very hard to do deep focused work for anywhere near that long. Nor do I necessarily think people should even if they could: often a 15-60 minute chat with the relevant person can clarify thoughts that would otherwise take a day, or much longer, to crystallize. But they're still doing work for that long, and if you mandated that they could only be on their work computers for 45h, I'd expect noticeable dips in productivity. Re: Not sure what you mean by "requires." EA orgs by and large don't clock you, and there's pretty hi

If you only observe EA culture from online interactions, you get the impression that EAs think about effective altruism much more regularly than they actually do. This extends to activities like spending lots of time "doing EA-adjacent things", including the Forum, EA social media, casual reading, having EA conversations, etc. That reference class includes people volunteering their time, and people who find thinking about EA relaxing compared to their stressful day jobs.

I agree.

However, if we're referring to actual hours of work done

... (read more)

1. Key Takeaways

  • The case for Wikipedia editing in a nutshell: Wikipedia articles are widely read and trusted, there is much low-hanging fruit for improvement, and editing Wikipedia has low barriers to entry and is relatively low-effort. Consequently, improving a Wikipedia article may benefit the reasoning and actions of its thousands, and often millions, of readers. Moreover, since Wikipedia is a global public good, improvements to Wikipedia are likely undersupplied relative to the socially optimal level.
  • Careful prioritisation is crucial. Improving or creating some Wikipedia articles could easily be 100x to 1,000x as valuable as others. The key factors to consider for prioritisation are (i) pageviews, (ii) audience, (iii) topic, (iv) room for improvement, and (v) language.
  • Respecting Wikipedia community rules and norms is key. The Wikipedia community
...
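As a toy illustration of how those five prioritisation factors might combine (the multiplicative form, the function name, and all numbers below are my own assumptions, not the post's):

```python
def article_priority(pageviews_per_year, audience_weight, topic_weight,
                     room_for_improvement, language_weight=1.0):
    """Toy multiplicative score over the post's five factors: pageviews,
    audience, topic, room for improvement, and language. Each weight is
    a judgment call in [0, 1]; a higher score means a more promising target."""
    return (pageviews_per_year * audience_weight * topic_weight
            * room_for_improvement * language_weight)

# Hypothetical comparison: a widely read article that's already good
# vs. a moderately read article on a high-priority topic that's very poor.
print(article_priority(2_000_000, 0.3, 0.5, 0.05))  # 15000.0
print(article_priority(50_000, 0.9, 0.9, 0.8))      # 32400.0
```

On these made-up numbers, the less-read but far more improvable article wins, which illustrates why the post treats room for improvement as a key factor alongside pageviews.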

Vipul Naik used to experiment with paying people to edit Wikipedia pages. This has since run afoul of the Wikipedia community's arbitration process, for complicated online social reasons that I was unable to find a good history of.

If people want a community push to edit Wikipedia (and not just a few EAs individually choosing to do so), I think it'd be helpful to learn from past failures so we don't accidentally burn more goodwill. It might be as simple as "never pay people to edit Wikipedia," but I'm not sure (and lean against thinking) that's the only generalizable les... (read more)

Lizka (7 points, 4h): Thanks for this post! I appreciated it. It seems worth listing or crowd-sourcing articles that people should focus on, and attaching that to existing project compilations [https://forum.effectivealtruism.org/posts/MsNpJBzv5YhdfNHc9/a-central-directory-for-open-research-questions] and resources [https://forum.effectivealtruism.org/tag/get-involved]. (Michael seems to point in this direction in a comment [https://forum.effectivealtruism.org/posts/FebKgHaAymjiETvXd/wikipedia-editing-is-important-tractable-and-neglected?commentId=Y7ktqDtP7Ag4GSzL2], too.) Maybe you or someone else can write up a quick list of topics to explore, or a meta-strategy for identifying such topics? One thing that springs to mind as a possible starting point is simply checking which of the EA Forum Wiki tags/pages [https://forum.effectivealtruism.org/tags/all] don't have a corresponding Wikipedia page, or have a poor one (and deciding for which that should change). On that note, I quite like that you list: (Apologies if I am repeating something said in the post itself; I read some parts and skimmed others.)
Lakin (10 points, 5h): What edits have you (or anyone you know) made that seem to have been valuable?

This post discusses multiple issues relating to the way the EA movement is perceived (ranging from common misconceptions to unjustified strong opinions against EA) and suggests alternatives to the ways we describe EA. 

Since I don’t have the resources to quantify this problem, I rely on my personal experience as a community builder, and that of many other community builders, and explain the rationale behind my suggestions.


Around 2013, a couple of mass media articles about EA (1, 2, 3), specifically about Earning To Give, were published. These articles clearly missed most of the nuance behind the idea of Earning To Give and heavily misrepresented it.

In light of such events, the EA movement at that time faced a critical question: 
Should we stay away from mass media?
The answer the...

ElliotJDavies (1 point, 10h): Controversial opinion, but I think most volunteers are probably fairly ineffective, enough to round down to zero. However, it's super easy to be an effective volunteer. Simply: (a) be autonomous/self-motivated, (b) put in a significant amount of effort per week, and (c) be consistent over a long period of time (long enough to climb the skill curve for the tasks at hand).

Controversial opinion, but I think most volunteers are probably fairly ineffective, enough to round down to zero. 

I agree with you. See Volunteering Isn't Free for one explanation of why taking on volunteers is hard, and often net negative.

That said, I do not think this is a controversial opinion, whether within EA or overall. :)

Mauricio (1 point, 7h): Seems right; maybe this was implied, but I'd add (d) pick a cause and intervention that scores very well on scale/neglectedness/tractability/personal-fit considerations.

TL;DR: Please comment with pain points you have or know about that might be solved by software developers.

Have a low bar: If it's related to EA or LessWrong, and someone would probably pay $100 to solve it, please write it. For example, maybe there's an annoying task you'd like to automate? Or a Twitter bot you wish existed?

No need to repost job openings that already exist on the 80,000 Hours job board or on impact colabs, but if I forgot another board, I'd be happy if you added it too.

Thanks!

 

Why I'm asking: I wonder if there are existing needs in our community, but no easy way to surface them. I hope that commenting here will be easy and inviting enough to bridge some of that gap. On the other side, I think there are software developers who might help.

Inspiration: Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits by Ozzie Gooen, EA Communication Project Ideas by Ben West.

Lorenzo Buonanno (6 points, 9h): A friend uses https://tryshift.com/ [https://tryshift.com/]. Downsides: it's $99.99 per year, it uses a lot of RAM, and I don't think it supports SMS.

Eh, I did some work for the director who started Shift, and I know the owner of the parent company (Shift is one in their family of products).

If there is demand for Shift (at least 10 people who would use it), I can probably get a deal, and we could fund it through the infrastructure fund or something (modulo some sort of conflict-of-interest thing on my end).

Just so you know, you can think of Shift like tabs in Chrome, but the tabs are accounts from all of your various apps and Slacks, and you can run multiple accounts, and it should be set up pretty smoo... (read more)