I have been disappointed by the support some EAs have expressed for recent activist actions at Ridglan Farms. I share others’ outrage at the outcome of the state animal cruelty investigation, which found serious violations of animal cruelty law but led to a settlement that still permits Ridglan to sell beagles through July and to continue in-house experimentation. But I personally think the tactics used in the recent open rescues, including property damage and forced entry to remove animals, overstep reasonable moral bounds on what is permissible in response to the belief that a serious harm is occurring. My position stems from contractualist views of democratic legitimacy and from concerns about the non-universalizability of principles that would justify lawbreaking, though I think a purely act-utilitarian calculus also supports it.
Regarding universalizability: in a society where many people believe that different forms of irreparable harm are occurring (e.g. viewing abortion as murder, climate change as destroying the sacredness of the natural world, or immigration as ending Western civilization), I worry that moral principles permitting significant lawbreaking whenever one believes an irreparable harm is occurring could cause great damage if broadly followed. Consider, for example, what it would be like to live in a country where hundreds of activists were regularly smashing their way into abortion clinics, energy companies, and refugee assistance nonprofits with sledgehammers and crowbars.

Regarding the legitimacy of the law: I think reasonable contractualist views can give us obligations to follow the law when the processes by which the law is determined are legitimate, and that democracies with universal suffrage qualify as such (even granting that certain groups, such as animals and future generations, are impossible to enfranchise).[1] Therefore, I think that if we are trying to make decisions under moral uncertainty and give meaningful credence to such contractualist views, we should refrain from tactics like these that involve breaking legitimately enacted laws.
The EA Forum seems super dead, which is pretty bad. I think part of the reason is that much of the Forum gets cross-posted to LessWrong, so you get EA Forum content there plus everything else (especially AI stuff) — which means there's little reason to actually come here.
One take: don't cross-post to LessWrong. There are real benefits to having two intellectual communities that take the same kinds of ideas seriously but develop distinct cultures, emphases, and standards around them.
Other ways to help:
* Do lots of cause prio out loud — especially the parts that haven't been written about much
* If your private doc doesn't need to be private, don't make it private
Haven't seen this anywhere on the forum: Effective Ventures sold Wytham Abbey in February for £5.95 million (source). When they bought it, there were many debates over the price they paid (just under £15 million). Some people said it was an investment, so it's not as if £15 million had been lost. Well, it seems we now have the verdict on those claims: the whole thing cost about £9 million.
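For reference, the rough arithmetic behind that figure, using the purchase and sale prices above (and ignoring upkeep and transaction costs, which I haven't seen figures for):

$$£15\text{M (purchase)} - £5.95\text{M (sale)} \approx £9\text{M net loss}$$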
Did any of the boosters of real-money prediction markets correctly predict that prediction market platforms would be quickly dominated by thinly disguised sports gambling?
(I mean this question literally and earnestly, not as a snide takedown of prediction markets or their proponents)
I was excited by ForecastBench and FutureEval both projecting that LLMs would reach superforecaster parity by June 2027. But I didn't realise access to human crowd forecasts might be driving a lot of performance. If it is, that is massively disappointing.
The top LLM performers on ForecastBench have access to the crowd forecast (and it's not clear to me whether FutureEval hides crowd forecasts; Metaculus did for the Quarterly Cup in 2025, but I couldn't find info about FutureEval). Skimming the literature with Claude, it seems like most studies either deliberately provide crowd forecasts or don't prevent models from searching for them, and those that do hide them tend to report significantly worse results (still interesting, but less exciting).
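To make the worry concrete, here's a toy simulation in Python (the noise levels are entirely made up, and this is not ForecastBench's actual setup) of why crowd access can inflate apparent forecasting skill: a model that mostly anchors on the crowd number scores close to the crowd's Brier score regardless of its independent ability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 500 binary questions with latent "true" probabilities.
n = 500
p_true = rng.beta(2, 2, size=n)        # underlying event probabilities
outcomes = rng.binomial(1, p_true)     # realized outcomes

# Crowd forecast: a noisy but well-calibrated estimate of p_true.
crowd = np.clip(p_true + rng.normal(0, 0.05, size=n), 0.01, 0.99)

# "LLM without crowd access": a much noisier independent estimate.
llm_solo = np.clip(p_true + rng.normal(0, 0.15, size=n), 0.01, 0.99)

# "LLM with crowd access": mostly anchors on the crowd number.
llm_anchored = np.clip(0.8 * crowd + 0.2 * llm_solo, 0.01, 0.99)

def brier(p, y):
    """Mean squared error between forecasts and binary outcomes."""
    return np.mean((p - y) ** 2)

for name, p in [("crowd", crowd),
                ("LLM solo", llm_solo),
                ("LLM + crowd", llm_anchored)]:
    print(f"{name:12s} Brier: {brier(p, outcomes):.4f}")
```

On this toy setup the anchored model lands near the crowd's Brier score while the standalone model trails well behind, which is exactly the pattern that would make benchmark parity uninformative about solo forecasting skill.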
To me, the potential wonder of LLM superforecasting is being able to get excellent guesses at any question I might come up with. If I need to already have a human crowd or market forecast for the guess to be any good, then the kind of LLM superforecasting being projected is about 10% as useful to me. I still expect 'true' parity eventually, but it becomes a story of general timelines rather than empirical projection.
I don't know the field well, and I'm probably misunderstanding something; I'm posting this partly in the hope of finding out I'm wrong. If I'm right, though, it's worth dampening the expectations of anyone else who was imagining having an instant team of supers at their beck and call in ~14 months' time.
Read the Better Futures series here, and discuss it here, all week.