All Posts

Sorted by Magic (New & Upvoted)

Friday, May 20th 2022

Shortform
14 · Aaron Gertler · 17h

MEMORIES FROM STARTING A COLLEGE GROUP IN 2014

In August 2014, I co-founded Yale EA (alongside Tammy Pham). Things have changed a lot in community-building since then, and I figured it would be good to record my memories of that time before they drift away completely. If you read this and have questions, please ask!

Timeline

* I was a senior in 2014, and I'd been talking to friends about EA for years by then. Enough of them were interested (or just nice) that I got a good group together for an initial meeting, and a few agreed to stick around and help me recruit at our activities fair. One or two of them read LessWrong, and aside from those, no one had heard of effective altruism.
* The group wound up composed largely of a few seniors and a bigger group of freshmen (who then had to take over the next year — not easy!). We had 8-10 people at an average meeting.
* Events we ran that first year included:
  * A dinner with Shelly Kagan, one of the best-known academics on campus (among the undergrad population). He's apparently gotten more interested in EA since then, but during the dinner, he seemed a bit bemused and was doing his best to poke holes in utilitarianism (and his best was very good, because he's Shelly Kagan).
  * A virtual talk from Rob Mather, head of AMF. Kelsey Piper was visiting from Stanford and came to the event; she was the first EA celebrity I'd met and I felt a bit star-struck.
  * A live talk from Julia Wise and Jeff Kaufman (my second and third EA celebrities). They brought Lily, who was a young toddler at the time. I think that saying "there will be a baby!" drew nearly as many people as trying to explain who Jeff and Julia were. This was our biggest event, maybe 40 people.
  * A lunch with Mercy for Animals — only three other people showed up.
  * A dinner with Leah Libresco, an atheist blogger and CFAR instructor who converted to Catholicism…
4 · NunoSempere · 5h

INFINITE ETHICS 101: STOCHASTIC AND STATEWISE DOMINANCE AS A BACKUP DECISION THEORY WHEN EXPECTED VALUES FAIL

First posted on nunosempere.com/blog/2022/05/20/infinite-ethics-101 [https://nunosempere.com/blog/2022/05/20/infinite-ethics-101/], and written after one too many times encountering someone who didn't know what to do when encountering infinite expected values.

In Exceeding expectations: stochastic dominance as a general decision theory [https://globalprioritiesinstitute.org/wp-content/uploads/Christian-Tarsney_Exceeding-expectations-stochastic-dominance-as-a-general-decision-theory.pdf], Christian Tarsney presents stochastic dominance (to be defined) as a total replacement for expected value as a decision theory. He wants to argue that one decision is only rationally better than another when it is stochastically dominant. For this, he needs to say that the choiceworthiness of a decision (how rational it is) is undefined in the case where one decision doesn't stochastically dominate another. I think this is absurd, and perhaps determined by academic incentives to produce more eye-popping claims rather than more restricted incremental improvements. Still, I thought that the paper made some good points about us still being able to make decisions even when expected values stop being informative. It was also my introduction to extending rational decision-making to infinite cases, and a great introduction at that. Below, I outline my rudimentary understanding of these topics.

WHERE EXPECTED VALUES FAIL

Consider a choice between:

* A: 1 utilon with probability ½, 2 utilons with probability ¼, 4 utilons with probability ⅛, etc. The expected value of this choice is 1 × ½ + 2 × ¼ + 4 × ⅛ + … = ½ + ½ + ½ + … = ∞
* B: 2 utilons with probability ½, 4 utilons with probability ¼, 8 utilons with probability ⅛, etc. The expected value of this choice is 2 × ½ + 4 × ¼ + 8 × ⅛ + … = 1 + 1 + 1 + … = ∞

So the expected value of choice A…
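A minimal Python sketch of the post's two gambles may make the dominance idea concrete (the truncation at 40 states and all names below are my own illustrative assumptions, not from the post): both partial expected values grow without bound, yet B dominates A both statewise and stochastically.

```python
from fractions import Fraction

def truncated_gamble(scale, n_states):
    """State k (k = 0..n_states-1) occurs with probability 2^-(k+1)
    and pays scale * 2^k utilons."""
    return [(Fraction(1, 2 ** (k + 1)), scale * 2 ** k) for k in range(n_states)]

A = truncated_gamble(1, 40)  # pays 1, 2, 4, ... -- the post's gamble A
B = truncated_gamble(2, 40)  # pays 2, 4, 8, ... on the very same states

# Each state contributes scale/2 to the expected value, so the partial sums
# grow linearly with n_states: both expected values diverge to infinity.
def expected_value(gamble):
    return sum(p * payoff for p, payoff in gamble)

print(expected_value(A), expected_value(B))  # 20 and 40 at 40 states

# Statewise dominance: B pays at least as much as A in every state,
# and strictly more in at least one.
statewise = (all(b >= a for (_, a), (_, b) in zip(A, B))
             and any(b > a for (_, a), (_, b) in zip(A, B)))

# First-order stochastic dominance: P(B >= t) >= P(A >= t) at every threshold.
def prob_at_least(gamble, t):
    return sum(p for p, payoff in gamble if payoff >= t)

thresholds = sorted({payoff for _, payoff in A + B})
stochastic = all(prob_at_least(B, t) >= prob_at_least(A, t) for t in thresholds)

print(statewise, stochastic)  # True True: B dominates A both ways
```

Statewise dominance (B pays at least as much in every state) implies first-order stochastic dominance, so the second check passes whenever the first does; the point is that dominance still ranks A below B even though expected value calls them tied at ∞.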
2 · james.lucassen · 20h
Question for anyone who has interest/means/time to look into it: which topics on the EA forum are overrepresented/underrepresented? I would be interested in comparisons of (posts/views/karma/comments) per (person/dollar/survey interest) in various cause areas. Mostly interested in the situation now, but viewing changes over time would be great! My hypothesis [DO NOT VIEW IF YOU INTEND TO INVESTIGATE]:
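Separately from the hidden hypothesis, a hypothetical sketch of the shape such a comparison could take (every cause name and number below is made up purely for illustration; real inputs would come from forum tag data and something like the EA Survey):

```python
import pandas as pd

# Hypothetical inputs -- "posts"/"karma" might come from forum tag counts,
# "survey_share" from the share of survey respondents naming the cause.
df = pd.DataFrame({
    "cause": ["global health", "animal welfare", "AI safety", "biosecurity"],
    "posts": [120, 90, 400, 60],
    "karma": [5400, 3100, 19000, 2500],
    "survey_share": [0.30, 0.20, 0.25, 0.10],
})

# Forum attention per unit of stated community interest; causes far above
# the median look overrepresented, far below look underrepresented.
df["posts_per_interest"] = df["posts"] / df["survey_share"]
df["karma_per_interest"] = df["karma"] / df["survey_share"]
print(df.sort_values("posts_per_interest", ascending=False))
```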
1 · Michael_Wiebe · 6h
How to make the long-term future go well: get every generation to follow the rule "leave the world better off than it was under the previous generation".

Wednesday, May 18th 2022

Shortform
17 · Puggy Knudson · 3d

Carrick Flynn lost the nomination, and over $10 million from EA-aligned individuals went to support his campaign. So these questions may sound pointed:

There was surely a lot of expected value in having an EA-aligned thinker in Congress supporting pandemic preparedness, but there were a lot of bottlenecks that he would have had to go through to make a change. He would have been one of hundreds of congresspeople. He would have had to get bills passed. He would have had to win enough votes to make it past the primary. He would have had to have his policies churned through the bureaucratic agencies, and it's not entirely clear that any bill he supported would have kept its form through that process.

* What can we learn from the political gambling that was done in this situation? Should we try this again?
* What are the long-term side effects of aligning EA with any political side, or of making EA a political topic?
* Could that $10+ million wasted on Flynn have been better used in just trying to get EA or longtermist bureaucrats into the CDC or other important decision-making institutions? We know the path that individuals take to get these positions, and we know which people usually get selected to run pandemic preparedness for the government, so why not spend $10 million on gaining the attention of bureaucrats, or on placing bureaucrats in federal agencies?
* Should we consider political gambling in the name of EA a type of intervention that is meant for us to get warm fuzzies rather than do the most good?
8 · Dave Cortright · 3d

The Atlantic has a column called "Progress" by contributor Derek Thompson, with the tag line: "A special series focused on two big questions: How do you solve the world's most important problems? And how do you inspire more people to believe that the most important problems can actually be solved?"

Sounds a lot like EA to me. Derek is holding virtual office hours on June 14. https://www.theatlantic.com/progress/
2 · david_reinstein · 2d

Are you engaging in motivated reasoning ... or committing other reasoning fallacies? I propose the following epistemic check using Elicit.org's "reason from one claim to another" tool.

Whenever you have a theory that A→B, feed this tool your theory, negating one side or the other[1]: A→¬B and/or ¬A→B. Then see if any of the arguments it presents seem equally plausible to your arguments for A→B. If so, believe your arguments and conclusion less.

Caveat: the tool is not working great yet, and often requires a few rounds of iteration, selecting the better arguments and telling it "show me more like this", or feeding it some arguments.

1. ^ Or the contrapositives of either

Sunday, May 15th 2022

Shortform
14 · Linch · 5d

I've been asked to share the following. I have not loved the EA communications from the campaign. However, I do think this is plausibly the most cost-effective use of time this year for a large fraction of American EAs, and many people should seriously consider it (or just act on it and reflect later), but I have not vetted these considerations in detail.

[[URGENT]] Seeking people to lead a phone-banking coworking event for Carrick Flynn's campaign today, tomorrow, or Tuesday in gather.town [https://gather.town/]! There is an EA coworking room in gather.town already. This is a strong counterfactual opportunity! This event can be promoted on a lot of EA fb pages as a casual and fun event (after all, it won't even be affiliated with "EA", but just some people who are into this getting together to do it), hopefully leading to many more phone-banker hours in the next couple of days.

Would you or anyone else be willing to lead this? You (and other hosts) will be trained in phone-banking and how to train your participants in phone-banking. Please share with people you think would like to help, and DM Ivy [https://forum.effectivealtruism.org/users/ivy_mazzola] and/or CarolineJ [https://forum.effectivealtruism.org/users/carolinej] (likely both, as Ivy is traveling).

You can read more about Carrick's campaign from an EA perspective here: The best $5,800 I've ever donated (to pandemic prevention) [https://forum.effectivealtruism.org/posts/Qi9nnrmjwNbBqWbNT/the-best-usd5-800-i-ve-ever-donated-to-pandemic-prevention] and Why Helping the Flynn Campaign is especially useful right now [https://forum.effectivealtruism.org/posts/bJc8qpPGkdmcEWAZo/why-helping-the-flynn-campaign-is-especially-useful-right]
11 · Charlotte · 6d

Here is a Collection of Resources/Reading about (Constructing) Theories of Change [https://docs.google.com/document/d/1I65H-A5B4J6hD8zRMbQeK9zP9COwiYSvrzvynM3S0cI/edit?usp=sharing] — I provide a summary of each resource (except one) in the Google doc. The overview of the collection/summary document is:

* Theory of Change (Aaron Swartz's Raw Thought)
* "Backchaining" in Strategy - LessWrong
* Michael Aird: "[[theory of change]] in Research" workshop
* What is a Theory of Change?
* Hivos ToC Guidelines: Theory of Change Thinking in Practice
* Key Tools, Resources and Materials
* Charlotte's Main Take-aways
* Other resources I did not read

Motivation and Takeaways:

* I looked into this today because I believe that potentially (1) the ability to construct theories of change is a key bottleneck of the EA community, e.g. if everyone were twice as good at it, the impact of the EA community would be much higher.
* Given this, I aim to become better at constructing theories of change myself. Moreover, I am interested in how to make this teachable (shout out to Michael Aird's work) or to set up better deliber…
9 · Ivy_Mazzola · 5d

[[URGENT]] Seeking people to lead phone-banking coworking for Carrick Flynn's campaign today, tomorrow, or Tuesday in gather.town! There is an EA coworking room [https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge] in gather.town already. This is a strong counterfactual opportunity! This event can be promoted on a lot of EA fb pages as a casual and fun event (after all, it won't even be affiliated with "EA", but just some people who are into this getting together to do it), hopefully leading to many more phone-banker hours in the next couple of days.

Would you or anyone else be willing to lead this? Please share! Hosts will be trained in phone-banking and how to train your participants in phone-banking. DM me and/or CarolineJ [https://forum.effectivealtruism.org/users/carolinej] if you are keen to help, and we will add you to the Slack with all the phone-banking instructions. It is easy! (I will be traveling a lot, so DMing both of us is a good bet.)

You can read more about Carrick's campaign from an EA perspective here: The best $5,800 I've ever donated (to pandemic prevention) [https://forum.effectivealtruism.org/posts/Qi9nnrmjwNbBqWbNT/the-best-usd5-800-i-ve-ever-donated-to-pandemic-prevention] and Why Helping the Flynn Campaign is especially useful right now [https://forum.effectivealtruism.org/posts/bJc8qpPGkdmcEWAZo/why-helping-the-flynn-campaign-is-especially-useful-right]

Read about the EA gather.town space here: EA coworking/lounge space on gather.town [https://forum.effectivealtruism.org/posts/nxfhxwQg4HJ7KQz4A/ea-coworking-lounge-space-on-gather-town]
8 · antimonyanthony · 5d

In Defense of Aiming for the Minimum [https://forum.effectivealtruism.org/posts/pseF3ZmY7uhLtdwss/aiming-for-the-minimum-of-self-care-is-dangerous]

I'm not really sympathetic to the following common sentiment: "EAs should not try to do as much good as feasible at the expense of their own well-being / the good of their close associates."

It's tautologically true that if trying to hyper-optimize comes at too much of a cost to the energy you can devote to your most important altruistic work, then trying to hyper-optimize is altruistically counterproductive. I acknowledge that this is the principle behind the sentiment above, and evidently some people's effectiveness has benefited from advice like this. But in practice, I see EAs apply this principle in ways that seem suspiciously favorable to their own well-being, or to the status quo. When you find yourself trying to justify, on the grounds of impact, the amounts of self-care people afford themselves when they don't care about being effectively altruistic, you should be extremely suspicious.

Some examples, which I cite not to pick on the authors in particular—since I think many others are making a similar mistake—but just because they actually wrote these claims down:

1. "Aiming for the minimum of self-care is dangerous" [https://forum.effectivealtruism.org/posts/pseF3ZmY7uhLtdwss/aiming-for-the-minimum-of-self-care-is-dangerous]
   I think this is just correct. If your argument is that EAs shouldn't be totally self-effacing because some frivolities are psychologically necessary to keep rescuing people from the bottomless pit of suffering [https://slatestarcodex.com/2014/09/27/bottomless-pits-of-suffering/], then sure, do the things that are psychologically necessary. I'm skeptical that "psychologically necessary" actually looks similar to the amount of frivolities indulged by the average person who is as well-off as EAs generally are. Do I live up to this standard? Hardly. That doesn't mean I should pretend I'm doing…
7 · MichaelDickens · 5d

Looking at the Decade in Review, I feel like voters systematically over-rate cool but ultimately unimportant posts, and systematically under-rate complicated technical posts that have a reasonable probability of changing people's actual prioritization decisions.

Example: "Effective Altruism is a Question (not an ideology)", the #2 voted post, is a very cool concept and I really like it, but ultimately I don't see how it would change anyone's important life decisions, so I think it's overrated in the Decade Review. "Differences in the Intensity of Valenced Experience across Species", the #35 voted post (with 1/3 as many votes as #2), has a significant probability of changing how people prioritize helping different species, which is very important, so I think it's underrated. (I do think the winning post, "Growth and the case against randomista development", is fairly rated, because if true, it suggests that all global-poverty-focused EAs should be behaving very differently.)

This pattern of voting probably happens because people tend to upvote things they like, and a post that's mildly helpful for lots of people is easier to like than a post that's very helpful for a smaller number of people. (For the record, I enjoy reading the cool conceptual posts much more than the complicated technical posts.)

Saturday, May 14th 2022

Shortform
12 · david_reinstein · 6d

MODEST PROPOSAL ON A DONATION MECHANISM FOR PEOPLE DOING DIRECT WORK?

PREAMBLE

Employees at EA orgs and people doing direct work are often also donors/pledgers to other causes. But charitable donations are not always exactly 1-1 deductible from income taxes. E.g., in the USA a donation is only deductible if you forgo the standard deduction and "itemize your deductions", and in many countries in the EU there is very limited tax deductibility. So, if you are paid $1 more by your employer/funder and donate it to the Humane League, Malaria Consortium, etc., the charity only ends up with maybe $0.65 on the margin in many cases. There are ways to do better at this (set up a DAF, bunch your donations…) but they are costly (a DAF takes fees) and imperfect (whenever you itemize, you lose the standard deduction, if I understand correctly).

PROPOSAL

Funders/orgs (e.g., Open Phil, RP) could agree that employees are allowed to relinquish some share of their paycheck into some sort of general fund. The employees who do so are allowed to determine the use of these funds (or "advise on" it, with the advice generally followed).

KEY ANTICIPATED CONCERNS, RESPONSES

Concern: This will lead to a "pressure to donate/relinquish" if the employers, managers, and funders are aware of it.
Response: This process could be managed by ops and by someone at arm's length who will not share the data with the employers/managers/funders. (Details need working out, obviously, unless something like this already exists.)

Concern (legal issues): Is this feasible? Would these relinquishments be seen by governments as actually income?
Response: ??

Concern (crowding out): If the funder knows that the people/orgs it funds give back to charities, it may shift its funding away from these charities, nullifying the employees' counterfactual impact.
Response: This is hardly a new issue, and hardly unique to this context; it's a major question for donors in general, through all modes, so maybe not so important to consider here. … To th…
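A toy Python sketch of the preamble's arithmetic (the 35% marginal rate and the function names are illustrative assumptions, not figures from the proposal):

```python
MARGINAL_TAX_RATE = 0.35  # assumed marginal income-tax rate, purely illustrative

def charity_gets_post_tax(extra_salary, deductible):
    """Employee receives the extra salary as taxable income, then donates
    what remains; a full deduction would offset the tax."""
    if deductible:
        return extra_salary
    return extra_salary * (1 - MARGINAL_TAX_RATE)

def charity_gets_relinquished(extra_salary):
    """Under the proposal: the pay share is relinquished into a general fund
    before it ever counts as the employee's income. Whether tax law actually
    treats it this way is exactly the open 'legal issues' concern above."""
    return extra_salary

print(round(charity_gets_post_tax(1.00, deductible=False), 2))  # 0.65, the post's figure
print(round(charity_gets_post_tax(1.00, deductible=True), 2))   # 1.0
print(round(charity_gets_relinquished(1.00), 2))                # 1.0
```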
6 · Emrik · 6d

A way of reframing the idea of "we are no longer funding-constrained" is "we are bottlenecked by people who can find new cost-effective opportunities to spend money". If this is true, we should plausibly stop donating to funds that can't give out money fast enough anyway, and rather spend money on the orgs/people/causes we personally estimate need more money now. Maybe we should up-adjust how relevant we think personal information is to our altruistic spending decisions.

Is this right? And are there any good public summaries of the collective wisdom fund managers have acquired over the years? If we're bottlenecked by people who can find new giving opportunities, it would be great to promote the related skills, and I want to read those summaries.
1 · Dave Cortright · 6d

Here's a framework I use for A-or-B decisions. There are three scenarios:

1. One is clearly better than the other.
2. They are both about the same.
3. I'm not sure; more data is needed.

1 and 2 are easy. In the first case, choose the better one. In the second, choose the one that in your gut you like better (or use the "flip a coin" trick, and notice if you have any resistance to the "winner"; that's a great reason to go with the "loser").

It's the third case that's hard. It requires more research or more analysis. But here's the thing: there are costs to doing this work. You have to decide if the opportunity cost to delve in is worth the investment to increase the odds of making the better choice. My experience shows that—especially for people who lean heavily on logic and rationality like myself 😁—we tend to overweight "getting it right" at the expense of making a decision and moving on. Switching costs are often lower than you think, and failing fast is actually a great outcome. Unless you are sending a rover to Mars where there is literally no opportunity to "fix it in post-", I suggest you do a nominal amount of research and analysis, then make a decision and move on to other things in your life. Revisit as needed.

[cross-posted from a comment I wrote in response to Why CEA Online doesn't outsource more work to non-EA freelancers [https://forum.effectivealtruism.org/posts/kz3Czn5ndFxaEofSx/why-cea-online-doesn-t-outsource-more-work-to-non-ea]]

Thursday, May 12th 2022

Shortform
18 · Lizka · 9d
I keep coming back to this map [https://ourworldindata.org/world-population-cartogram]/cartogram. It's just so great.
9 · Aryeh Englander · 9d

Thought: In what ways do EA orgs/funds go about things differently than the rest of the non-profit (or even for-profit) world? If they do things differently: Why? How much has that been analyzed? How much have they looked into the literature / existing alternative approaches / talked to domain experts? Naively, if the thing they do differently is not related to the core differences between EA (or that org) and the rest of the world, then I'd expect that this is kind of like trying to re-invent the wheel, and it won't be a good use of resources unless you have a good reason to think you can do better.
