Quick takes
I have been disappointed by the support some EAs have expressed for recent activist actions at Ridglan Farms. I share others’ outrage at the outcome of the state animal cruelty investigation, which found serious animal cruelty law violations but led to a settlement that still permits Ridglan to sell beagles through July and to continue in-house experimentation. But I personally think the tactics used in the recent open rescues, including property damage and forced entry to remove animals, violate reasonable moral bounds on what actions are permissible in response to the belief that a serious harm is occurring.

My views here stem from contractualist views of democratic legitimacy and from concerns about the non-universalizability of principles that justify lawbreaking, though I think a purely act utilitarian calculus also supports them.

Regarding universalizability: in a society where many people believe that different forms of irreparable harm are occurring (e.g. viewing abortion as murder, climate change as destroying the sacredness of the natural world, immigration as ending western civilization), I worry that moral principles that allow for significant lawbreaking when one believes that irreparable harm is occurring could easily lead to great damage if broadly followed. Consider, for example, what it would be like to live in a country where hundreds of activists were regularly smashing their way into abortion clinics, energy companies, and refugee assistance nonprofits with sledgehammers and crowbars.

Regarding the legitimacy of the law: I think reasonable contractualist views can give us obligations to follow the law when the processes by which the law is determined are legitimate, and that democracies with universal suffrage qualify as such (even granting that certain groups such as animals and future generations are impossible to enfranchise).[1] Therefore, I think that if we are trying to make decisions under moral uncertainty and give meaningful credences to
I just loved this from @Kelsey Piper on Twitter 🥺🥺 it's so true and I never appreciated it before EA. I really appreciate you all 🙏🏻 https://x.com/KelseyTuoc/status/2031989126522945761?s=20

--

My ancestors buried half their children. All mine are alive. My ancestors' house had a dirt floor. Mine is wood. I have indoor plumbing, I have hot water, I have never in my life hauled a full bucket half a mile and I probably never will. Do you know how rare it is, in human history, for small children to wear shoes? Mine have multiple pairs. I can speak to my relatives who live thousands of miles away, for free, at any time. Video, if we want video. With machine translation, if we speak different languages.

The original Library of Congress had 740 books in it. I have more than that. If I run out of books in my home my local public library has 350,000. If I want to take a hundred books with me on vacation, they all fit on a device that fits in my purse.

I have heat in the winter and AC in the summer and a washing machine and I have never, ever, ever had to scrub a dress clean by hand in the stream. I can look up recipes from more than a hundred different countries and I've tried dozens of them. I ride a clean and modern train across my city for $4, or take a robot taxi if I'm out too late for the train. I donate $40,000 every year to the cause of getting healthcare to the world's poorest people and even after the donations I never have to think about whether I can afford a book, or a pair of shoes, or a cup of coffee.

There is a great deal more to fight for, of course. I hope that our descendants will look back on our lives and list a thousand ways they're richer. Maybe we ourselves will do that, if some of the crazier stuff comes true.

But the abundance is all around you and to a significant degree you aren't feeling it only because fish don't notice water.
The EA Forum seems super dead, which is pretty bad. I think part of the reason is that much of the Forum gets cross-posted to LessWrong, so you get EA Forum content there plus everything else (especially AI stuff), which means there's little reason to actually come here. One take: don't cross-post to LessWrong. There are real benefits to having two intellectual communities that take the same kinds of ideas seriously but develop distinct cultures, emphases, and standards around them.

Other ways to help:

* Do lots of cause prio out loud, especially the parts that haven't been written about much
* If your private doc doesn't need to be private, don't make it private
Guy Raveh:
Haven't seen this anywhere on the forum: Effective Ventures sold Wytham Abbey in February for £5.95 million (source). When they bought it, there were many debates over the price they paid (just under £15 million). Some people said it was an investment, and so it's not as if £15 million had been lost. Well, we now seem to have the verdict on those claims: between purchase and sale, the whole thing cost about £9 million.
I was excited by ForecastBench and FutureEval both projecting that LLMs would reach superforecaster parity by June 2027. But I didn't realise that access to human crowd forecasts might be driving a lot of that performance. If it is, that is massively disappointing.

The top LLM performers in ForecastBench have access to the crowd forecast (and it's not clear to me whether FutureEval hides crowd forecasts: Metaculus did for the Quarterly Cup in 2025, but I couldn't find info about FutureEval). Skimming the literature with Claude, it seems like most studies either deliberately provide crowd forecasts or don't prevent models from searching for them, and those that do hide them tend to report significantly worse results (still interesting, but less exciting).

To me, the potential wonder of LLM superforecasting is being able to get excellent guesses at any question I might come up with. If I already need a human crowd or market forecast for the guess to be any good, then the kind of LLM superforecasting being projected is about 10% as useful to me. I still expect 'true' parity eventually, but it becomes a story of general timelines rather than empirical projection.

I don't know the field well, and I'm probably misunderstanding something. I'm posting this to find out I'm wrong. If I'm right, then it's worth dampening the expectations of anyone else who was imagining having an instant team of supers at their beck and call in ~14 months' time.