Quick takes
I’m pretty confident the EA community is underdiscussing how to prevent a global AGI-powered autocracy, especially if US democracy implodes under AGI pressure. There are two key questions here: (i) how to make the US more resilient, and (ii) how to make the world less dependent on the resilience of US democracy.
11
NickLaing
2d
13
Is there any possibility of the forum having an AI-writing detector in the background, which perhaps only the admins can see but could be queried by suspicious users? I really don't like AI writing and have called it out a number of times, but have been wrong once. I imagine this has been thought about, and there might even be a form of this going on already. That said, my first post on LessWrong was rejected because they identified it as AI-written, even though I have NEVER used AI in online writing, not even for checking/polishing. So that system obviously isn't perfect.
11
Linch
3d
1
PSA: regression to the mean (mean reversion) is a statistical artifact, not a causal mechanism. Mean regression says that children of tall parents are likely to be shorter than their parents, but it also says that parents of tall children are likely to be shorter than their children. Put differently, mean regression goes in both directions. This is well understood here in principle, but in my opinion enough people get it wrong in practice that the PSA is worthwhile nonetheless.
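The two-directional point above is easy to check with a simulation. The sketch below (all numbers are illustrative assumptions, not real height data) draws parent/child height pairs with correlation below 1 and shows that the tall-parents group has shorter children on average, and the tall-children group has shorter parents on average:

```python
import random

random.seed(0)

# Illustrative assumptions: heights in cm, correlation R between parent and child.
MEAN, SD, R = 175.0, 7.0, 0.5
N = 100_000

pairs = []
for _ in range(N):
    parent = random.gauss(MEAN, SD)
    # Conditional mean of child regresses toward MEAN by factor R;
    # the noise term keeps the marginal SD of child equal to SD.
    child = MEAN + R * (parent - MEAN) + random.gauss(0, SD * (1 - R**2) ** 0.5)
    pairs.append((parent, child))

def avg(xs):
    return sum(xs) / len(xs)

# Select "tall" as more than 1 SD above the mean.
tall_parents = [(p, c) for p, c in pairs if p > MEAN + SD]
tall_children = [(p, c) for p, c in pairs if c > MEAN + SD]

# Children of tall parents average shorter than those parents...
print(avg([p for p, _ in tall_parents]), avg([c for _, c in tall_parents]))
# ...and parents of tall children average shorter than those children.
print(avg([p for p, _ in tall_children]), avg([c for _, c in tall_children]))
```

Because the joint distribution is symmetric, the "regression" appears whichever variable you condition on, which is exactly why it cannot be read causally.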
The Forum should normalize public red-teaming for people considering new jobs, roles, or project ideas. If someone is seriously thinking about a position, they should feel comfortable posting the key info (org, scope, uncertainties, concerns, arguments for) and explicitly inviting others to stress-test the decision. Some of the best red-teaming I’ve gotten hasn’t come from my closest collaborators (whose takes I can often predict), but from semi-random thoughtful EAs who notice failure modes I wouldn’t have caught alone, or who think differently enough to instantly spot things that would have taken me longer to figure out. Right now, a lot of this only happens at EAGs or in private docs, which feels like an information bottleneck. If many thoughtful EAs are already reading the Forum, why not use it as a default venue for structured red-teaming? Public red-teaming could:

* reduce unilateralist mistakes,
* prevent coordination failures (I’ve almost spent serious time on things multiple people were already doing; reinventing the wheel is common and costly).

Obviously there are tradeoffs (confidentiality, social risk, signaling concerns), but I’d be excited to see norms shift toward “post early, get red-teamed, iterate publicly” rather than waiting for a handful of coffee chats.
4
gewind
1d
0
I believe the EA Forum should (re-)consider adding auto-translated content by default. Has this been considered lately?

Observation: As a native German speaker, I've noticed that auto-translated Reddit content has become extremely prominent in my Google results lately whenever I research something using German search terms.

Expected effect: I believe this could have large outreach effects, inferring from how many German policy makers, altruistic donors, and otherwise interested people (in questions of ethics, economics, altruism, AI) I deem very open to content on which EA has already produced important and well-structured insights, but who are not really able, and therefore unwilling, to consume English content.

Discussion points: Auto-translations still aren't perfect, but they usually more than suffice to convey complex content, even on complex topics. More importantly, content is distorted far less by auto-translation nowadays (to the extent that distortion has truly become negligible, in my opinion). A quick search suggests Google supports auto-translated content if it's correctly marked as such, consistent with my search results.