(Not trying to represent an institutional take here; other mods may disagree.)
Would you mind spelling out the problem a bit? In my view, the current karma total is important info when I'm deciding whether to upvote or downvote something.
For example, I might have downvoted this quick take if it were over 60 or so (because a score above 60 signals that a quick take is worth reading for a wide group), and yet I wouldn't downvote it at the number I found it (1), because it doesn't deserve to have negative karma[1].
In other words, I think of karma almost as the questio...
Many people hold up 'AI As Normal Technology' as a reasonable "normal-people" case against the doomer position. I actually think it's wrong in a number of ways and falls flat on its own terms. I think I believe this for reasons mostly orthogonal to being a doomer (except inasmuch as being a doomer makes me more interested in thinking about AI). If anybody here is interested in fighting the good fight, it might be valuable to do an Andy Masley-style annihilation of the AI As Normal Technology position, trying to stick to minimally controversial arguments an...
I think you would benefit from re-reading the article in question. For example, they directly address your point 1 by pointing out that consumer diffusion figures are often misleading because they are expressed in terms of "percentage of people that use chatbots on occasion" rather than frequency of use.
Point 3 is not even an argument, just a restatement of what they believe: yes, they think AI domination will take decades. They state the reasons they believe this very clearly in the section "Diffusion is limited by the speed of human, organizational, ...
Here are some bullet points of reflection topics around lifestyle and priorities that I shared with some fellow EAs a few months ago. I am sharing this text here in case it interests anyone. I will elaborate and expand on them later if I have the opportunity.
""" Support Systems: Seriously. I didn't even know this term until after all this happened, and it would have changed everything. There's something about how people are instructed in STEM institutions (and as a consequence, many EA institutions) that makes it all about careers, h...
(My Facebook and Instagram accounts have been suspended without explanation. Hopefully they will be restored soon. If anyone reading this wants to reach me in the meantime, please use other means.)
Some women in the Facebook support group "Cluster Headache Patients" comparing labor pain to cluster headache pain:
I made a podcast feed for the posts highlighted in Best of: AGI & Animals Debate Week
RSS Feed to paste into your favorite podcast app: https://f004.backblazeb2.com/file/aaronbergman-public/podcast/agi_animals/feed.xml
I also like the cover art Gemini made so here it is:
Success is a mess.
Golf, if you allow it, teaches forbearance.
Doing hard things is hard. One of the hardest things to do is hit a tiny ball in a tiny hole hundreds of yards away. Tiny errors cause terrible outcomes. Control is a phantom. The promise and perils don’t bear thinking about.
When it all comes together, though, my goodness, it’s a hell of a party.
If it’s worth going where you’re aiming, there’ll be no straight line from here to there. Next time you’re stuck, remember Rory and what we went through with him.
Thanks for reading, and especially for commenting!
There are a few reasons for training on golf:
Help me find my replacement doing farmed animal advocacy grantmaking!
I wanted to share a job opening for, in my opinion, one of the coolest jobs to help animals: my job! I'm moving on from Mobius soon, so we're looking for the next person to lead our grantmaking and entrepreneurial projects.
The role: You'd manage the grantmaking portfolio for one of the top ten largest funders of farmed animal welfare work globally, plus lead entrepreneurial projects like incubating new organisations and identifying strategic gaps in the movement. You'd work with a small a...
Yesterday's Anthropic research ("Emotion Concepts and their Function in LLMs") provides a fascinating mechanistic analogue that strongly resonates with the field observations from my March audit of GPT-5.2 Thinking.
While Anthropic studied Claude Sonnet 4.5 and my audit focused on GPT-5.2, the structural alignment between their white-box findings and my black-box observations is striking:
Today we are detecting a shared collective delusion that leads victims to degrade their epistemic standards. This anomaly is aimed towards no particular end, except perhaps the amusement of its participants and the satisfaction of ingenious expression.
So far, it appears to be mostly harmless. Nonetheless, this phenomenon creates space for vulnerabilities. If some geopolitical actor were to take some implausible action on this day (for instance, the US invading Canada, Spain ...
Some possible containment procedures are as follows:
Altering the Gregorian Calendar to move Leap Day to April 1st (unknown effectiveness; could lead to transferal of the anomaly to another day)
The teaching of mind-resistance techniques in schools and workplaces, using standard cover stories (media literacy, appreciation of the arts, combating racial bias). However, this runs the risk of collapsing delusions that are important to the functioning of society.
Global dispersal of hypnotic drugs through the atmosphere, as well as using sleeper agents in the government to f...
Does anyone know why @William_MacAskill says he is "not convinced by the shrimp argument" on his recent appearance on Sam Harris's podcast?
...SAM HARRIS
So yeah, so this is one area where perhaps my own cynicism creeps in. I worry that any focus on suffering beyond human suffering, it risks confusing enough people so as to damage people's commitment to these principles. So I mean, I'm not, there's zero defense of factory farming coming from me here, but when I see a philosopher who's clearly EA or EA-adjacent arguing on behalf of the welfare of shr
Hi Aaron and Will. I estimated how much cage-free corporate campaigns for layers and the Shrimp Welfare Project’s (SWP’s) Humane Slaughter Initiative (HSI) increase the welfare of their target beneficiaries, modelling individual welfare per fully-healthy-animal-year as proportional to "individual number of neurons"^"exponent", with "exponent" ranging from 0 to 2, which covers the best guesses I consider reasonable. An exponent of 1 corresponds to the linear weighting preferred by Will. Below is a graph with the results. I calculate cage-free corporate campaig...
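To make the weighting concrete, here is a minimal Python sketch of the neurons^exponent scheme; the neuron counts are rough order-of-magnitude assumptions for illustration, not the figures used in the analysis above:

```python
# Minimal sketch of the neurons^exponent welfare weighting described above.
# Neuron counts are rough illustrative assumptions, NOT the values used in
# the original analysis.

NEURONS = {
    "human": 86e9,   # assumed, order-of-magnitude figure
    "hen": 2.2e8,    # assumed, order-of-magnitude figure
    "shrimp": 1e5,   # assumed, order-of-magnitude figure
}

def welfare_weight(species: str, exponent: float) -> float:
    """Welfare per fully-healthy-animal-year relative to a human,
    taken as proportional to (number of neurons)^exponent."""
    return (NEURONS[species] / NEURONS["human"]) ** exponent

# exponent = 0 weights all animals equally; exponent = 1 is the linear
# weighting preferred by Will; exponent = 2 weights neurons quadratically.
for exponent in (0.0, 0.5, 1.0, 2.0):
    hen = welfare_weight("hen", exponent)
    shrimp = welfare_weight("shrimp", exponent)
    print(f"exponent={exponent}: hen={hen:.3g}, shrimp={shrimp:.3g}")
```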
Seriously, I love this EA forum holiday ❤️ I genuinely feel like this helps the community do more good, get more silly-but-perhaps-with-a-grain-of-usefulness ideas across, and waste time in a way which feels a bit productive
If your team's work is worth doing, it's worth doing as an org
When a few people are doing good work together, the question of whether to formally incorporate into an organization can feel like a distraction from doing the actual work. Why take time away from your exciting research project to create an org? There are some real up-front costs to incorporating – dealing with bureaucracy, legal overhead, governance obligations – but I think the benefits of doing so are usually greater and underappreciated.
I strongly agree with most of this, but think it's worth considering "being an official, recognized, and funded part of an organization" rather than founding one's own from scratch. I know Rethink Priorities and Hive have sponsored projects before - that seems like a possibly-good intermediate step, with the option of spinning out independently later.
Look, I know I'm on the forum too much @Toby Tremlett🔹, but I don't think it's necessary to put "reading limit" controls on me...
How organisations with low AI usage can and should be using it more
There is a lot of discussion about how everyone should be using AI more, and efforts to increase use and literacy. In the animal advocacy spaces where I work, I've seen the following efforts to increase usage so far:
The above has made a real dent in AI usage, but much less than we should be aiming for given ...
Yeah I have, and my impression from those I've spoken with is that this has not been the case. You don't think most people whose job primarily involves sitting at a computer could have much of their job automated by a software engineer on call? For example:
[ETA: I posted a revised version of this essay here.]
AI pause advocates often say they are pro-technology and pro-economic growth, and that they simply make one exception for AI because of its unique risks. But this reasoning will grow less credible over time as AI comes to account for a larger and larger share of economic growth.
Simple growth models predict that AI capable of substituting for human labor will raise economic growth rates by an order of magnitude or more. If that's right, then AI will eventually be driving the vast majority of technological...
Many pessimistic predictions about AGI or ASI tend to paint the picture of a superhuman agent with an extreme maximisation mindset, powered by some unsophisticated version of rationalist principles, which would lead it to commit unspeakable acts of violence (e.g. the paperclip problem: the AI starts killing every form of life in order to free up energy that could otherwise be used to make more paperclips).
This, to me, seems somewhat antithetical to the very notion of intelligence.
Surely, a truly 'superior' agent would be able to question the goal of tu...