I have been disappointed by the support some EAs have expressed for recent activist actions at Ridglan Farms. I share others' outrage at the outcome of the state animal cruelty investigation, which found serious animal cruelty law violations but led to a settlement that still permits Ridglan to sell beagles through July and to continue in-house experimentation. But I personally think the tactics used in the recent open rescues, including property damage and forced entry to remove animals, violate reasonable moral bounds on what actions are permissible in response to the belief that a serious harm is occurring.

My views here stem from contractualist views of democratic legitimacy and from concerns about the non-universalizability of principles that justify lawbreaking, though I think a purely act utilitarian calculus also supports them.

Regarding universalizability: in a society where many people believe that different forms of irreparable harm are occurring (e.g. viewing abortion as murder, climate change as destroying the sacredness of the natural world, immigration as ending western civilization), I worry that moral principles that allow for significant lawbreaking when one believes that irreparable harm is occurring could easily lead to great damage if broadly followed (consider, for example, what it would be like to live in a country where hundreds of activists were regularly smashing their way into abortion clinics, energy companies, and refugee assistance nonprofits with sledgehammers and crowbars).

Regarding the legitimacy of the law: I think reasonable contractualist views can give us obligations to follow the law when the processes by which the law is determined are legitimate, and that democracies with universal suffrage qualify as such (even granting that certain groups such as animals and future generations are impossible to enfranchise).[1] Therefore, I think that if we are trying to make decisions under moral uncertainty and give meaningful credences to
I just loved this from @Kelsey Piper on Twitter 🥺🥺 it's so true and I never appreciated it before EA. I really appreciate you all 🙏🏻 https://x.com/KelseyTuoc/status/2031989126522945761?s=20

> My ancestors buried half their children. All mine are alive. My ancestors' house had a dirt floor. Mine is wood. I have indoor plumbing, I have hot water, I have never in my life hauled a full bucket half a mile and I probably never will. Do you know how rare it is, in human history, for small children to wear shoes? Mine have multiple pairs. I can speak to my relatives who live thousands of miles away, for free, at any time. Video, if we want video. With machine translation, if we speak different languages. The original Library of Congress had 740 books in it. I have more than that. If I run out of books in my home my local public library has 350,000. If I want to take a hundred books with me on vacation, they all fit on a device that fits in my purse.
>
> I have heat in the winter and AC in the summer and a washing machine and I have never, ever, ever had to scrub a dress clean by hand in the stream. I can look up recipes from more than a hundred different countries and I've tried dozens of them. I ride a clean and modern train across my city for $4, or take a robot taxi if I'm out too late for the train. I donate $40,000 every year to the cause of getting healthcare to the world's poorest people and even after the donations I never have to think about whether I can afford a book, or a pair of shoes, or a cup of coffee.
>
> There is a great deal more to fight for, of course. I hope that our descendants will look back on our lives and list a thousand ways they're richer. Maybe we ourselves will do that, if some of the crazier stuff comes true.
>
> But the abundance is all around you and to a significant degree you aren't feeling it only because fish don't notice water.
The EA Forum seems super dead, which is pretty bad. I think part of the reason is that much of the Forum gets cross-posted to LessWrong, so you get EA Forum content there plus everything else (especially AI stuff) — which means there's little reason to actually come here. One take: don't cross-post to LessWrong. There are real benefits to having two intellectual communities that take the same kinds of ideas seriously but develop distinct cultures, emphases, and standards around them. Other ways to help:

* Do lots of cause prio out loud — especially the parts that haven't been written about much
* If your private doc doesn't need to be private, don't make it private
Guy Raveh · 4d
Haven't seen this anywhere on the forum: Effective Ventures sold Wytham Abbey in February for £5.95 million (source). When they bought it, there were many debates over the price they paid (just under £15 million). Some people said it's an investment, so it's not as if £15 million had been lost. Well, it seems we now have the verdict on those claims: the whole thing cost about £9 million (roughly £15 million paid minus £5.95 million recovered).
I'm not sure this matters at all, but I tentatively think that percent of value of the best possible future (call it 'PVBPF'), which I've heard brought up by Will MacAskill a couple of times now[1] (and maybe by others?), isn't an ideal metric by which to measure, or even a mental/conceptual model by which to understand, how good the world is. My main concern is illustrated by the following:

* Suppose we do a survey in 100 years[2] and conclude that the current world is, to our best guess, 50% as good as the best possible counterfactual[3]. So PVBPF = 50%.
* Then somehow we realize that there was actually ex ante a p = 10^(−1,000,000) chance of a counterfactual outcome 5 times better than what we previously thought was the maximum of the distribution.
* What was originally PVBPF = 50% now becomes 50% × (1/5) = 10%.

So the addition of an astronomically unlikely outcome - one that for all intents and purposes doesn't change the ex ante EV at all - radically affects our assessment of PVBPF. Maybe this is fine - I'm not really claiming there's a technical issue here (although maybe there is; more to think about). But maybe there's a vibes issue: 10% and 50% sound like really different numbers, and they are! But a 5x difference in PVBPF can correspond either to a 5x difference in actual moral value or to the addition of an arbitrarily unlikely counterfactual that is 5x as good as the previously believed maximum.[4]

1. ^ E.g. on the latest 80k podcast, Will says:
2. ^ To get around the fact that I think the current world is net-negative. That's a modeling inconvenience, but one that, I think, neither is nor reveals a fundamental weakness in the idea being gestured at.
3. ^ You might be able to make this more well-defined via the many-worlds interpretation of quantum mechanics.
4. ^ I.e. as we make p = 10^(−1,000,000) smaller and smaller, PVBPF doesn't change but the delta between ex ante EVs represented by the orig
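The arithmetic in the bullets above can be sketched with made-up numbers (the probability 10^(−1,000,000) underflows ordinary floats, so a stand-in like 1e-300 is used here; all values are hypothetical illustrations, not anything from the post):

```python
# Sketch: adding an astronomically unlikely outcome barely moves the
# ex ante expected value, but divides PVBPF by 5.

old_max = 100.0              # previously believed best-possible value (arbitrary units)
actual = 50.0                # assessed value of the actual world
pvbpf_before = actual / old_max          # 50%

p = 1e-300                   # stand-in for an astronomically small probability
new_max = 5 * old_max        # newly discovered outcome, 5x the old maximum
pvbpf_after = actual / new_max           # 10%

# The shift in ex ante EV from adding the new outcome is negligible:
ev_shift = p * (new_max - old_max)

print(pvbpf_before, pvbpf_after, ev_shift)
```

The point the numbers make: `pvbpf_before` is 0.5 and `pvbpf_after` is 0.1 (a 5x drop), while `ev_shift` is on the order of 10^(−298), i.e. effectively zero.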