This is a special post for quick takes by eca. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I’m vulnerable to occasionally losing hours of my most productive time “spinning my wheels”: working on sub-projects I later realize don’t need to exist.

Elon Musk gives the most lucid naming of this problem in the clip below. He has a 5-step process which captures a lot of best practices I’ve heard from others, and more. It sounds kind of dull and obvious written down, but I think staring at the steps will actually help. It’s also phrased somewhat specifically for building physical stuff, but I think there is a generic version of each step. I’m going to try implementing it on my next engineering project.

The explanation is meandering (though with some great examples I recommend listening to!), so here is my best attempt at a quick paraphrase:

The Elon Process:

  1. “Make your requirements less dumb. Your requirements are definitely dumb.” Beware especially requirements from smart people because you will question them less.
  2. Delete a part, process step or feature. If you aren’t adding 10% of deleted things back in, you aren’t deleting enough.
  3. Optimize and simplify the remaining components.
  4. Accelerate your cycle time. You can definitely go faster.
  5. Automate.

https://youtu.be/t705r8ICkRw

(13:30-28)

Quest: see the inside of an active bunker

Why, if you don't mind me asking?

Empirical differential tech development?

Many longtermist questions related to dangers from emerging tech can be reduced to “what interventions would cause technology X to be deployed before, N years earlier than, or instead of technology Y?”

In biosecurity, my focus area, an example of this would be something like "how can we cause DNA synthesis screening to be deployed before desktop synthesizers are widespread?"

It seems a bit cheap to say that AI safety boils down to causing an aligned AGI to be developed before an unaligned one, but it basically does, and I suspect that as more of the open questions in AI strategy/policy/deployment get worked out, there will end up being at least some well-defined subproblems like the one above.

Bostrom calls this differential technology development. I personally prefer "deliberate technology development", but call it DTD or whatever. My point is that it seems really useful to have general principles for how to approach problems like this, and I've been unable to find much work, either theoretical or empirical, trying to establish such principles. I don't know exactly what these would look like; most realistically they would be a set of heuristics or strategies alongside a definition of when they are applicable.

For example, a shoddy principle I just made up, but could vaguely imagine playing out: "when a field is new and has few players (e.g. a small number of startups or labs), causing a player to pursue something else on the margin has a much larger influence on delaying the development of the technology than causing the same proportion of R&D capacity to leave the field at a later point".
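To make the flavor of that heuristic concrete, here is a minimal toy sketch (all numbers invented) of the kind of model such a principle would need. It leans on an extra assumption I'm adding: that field capacity compounds, i.e. today's players attract and train tomorrow's entrants.

```python
# Toy model of the "early exit vs. later proportional exit" heuristic.
# All numbers are invented; the key added assumption is that R&D capacity
# compounds (current players seed future ones), so an early exit shrinks
# the whole future trajectory rather than just one year's output.

HORIZON = 60           # years simulated
GROWTH = 1.1           # field capacity grows 10%/year off its active base
EFFORT_NEEDED = 500.0  # arbitrary units of cumulative R&D effort to deploy the tech

def trajectory(initial_capacity, cut_fraction=0.0, cut_year=None):
    """Yearly capacities; at cut_year, cut_fraction of then-current capacity exits for good."""
    caps, c = [], initial_capacity
    for year in range(HORIZON):
        if cut_year is not None and year == cut_year:
            c *= (1 - cut_fraction)  # one-off permanent exit of part of the field
        caps.append(c)
        c *= GROWTH                  # next year's capacity builds on what is actually active
    return caps

def completion_year(effort_needed, capacities):
    """First year at which cumulative effort reaches the target (None if never)."""
    total = 0.0
    for year, capacity in enumerate(capacities):
        total += capacity
        if total >= effort_needed:
            return year
    return None

scenarios = {
    "baseline (4 early players)": trajectory(4.0),
    "one early player exits (25% cut at year 0)": trajectory(4.0, cut_fraction=0.25, cut_year=0),
    "same 25% of capacity exits at year 10": trajectory(4.0, cut_fraction=0.25, cut_year=10),
}
for label, caps in scenarios.items():
    print(f"{label}: tech arrives around year {completion_year(EFFORT_NEEDED, caps)}")
```

In this toy setup the early exit delays arrival more than the proportionally identical later exit, but only because of the compounding assumption; drop that and the ranking can reverse, which is part of why empirical checks seem worth having at all.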

While I expect some theoretical econ-type work to be useful here, I started thinking about the empirical side. It seems like you could, in principle, run experiments where, for some niche area of commercial technology, you try interventions that are cost-effective according to your model to push the outcome toward a made-up goal.

Some more hallucinated examples:

  • make the majority of guitar picks purple
  • make the automatic sinks in all public restrooms in South Dakota stay on for twice as long as the current ones
  • stop CAPTCHAs from ever asking anyone to identify a boat
  • stop some specific niche supplement from being sold in gelatin capsules anywhere in California

The pattern: a specific change toward something which is either market-neutral or somewhat bad according to the market, in an area where few enough people care / the market is small and straightforward enough that we should expect it to be possible to occasionally succeed.

I'm not sure there is any market niche enough to be cheap to intervene on while still being at all representative of the real thing. But maybe there is? And I kind of weirdly expect trying random stuff like this to actually yield some lessons, at least in implicit know-how for the person who does it.

Anyway, I'm interested in thoughts on the feasibility and utility of something like this, as well as any pointers to previous attempts at this kind of thing (it sort of seems like a certain type of economist might be interested in experimenting this way, but it's probably way too weird).
