Quick takes

Linch:
I think something a lot of people miss about the “short-term chartist position” (these trends have continued until time t, so I should expect them to continue to time t+1) on an exponential that’s actually a sigmoid is that if you keep holding it, you’ll eventually be wrong exactly once. Whereas a “short-term chartist hater” (these trends always break, so I predict this one breaks at time t+1) who keeps holding that position on the same sigmoid will eventually be correct exactly once.

Now of course most chartists (myself included) want to be able to make stronger claims than just t+1, and people in general would love to know more about the world than just these trends. And if you're really good at analysis, and wise and careful and lucky, you might be able to time the kink in the sigmoid and be wrong 0 times, which is for sure a huge improvement over being wrong once! But this is very hard. People who ignore trends as a baseline are missing an important piece of information, and people who completely reject these trends are essentially insane.
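A minimal sketch of the asymmetry, assuming a logistic curve as the "exponential that's actually a sigmoid" and defining "the trend holds" as step-over-step growth above a threshold. The curve parameters and the 10% threshold are illustrative assumptions, not anything the post specifies:

```python
# Toy model of the chartist vs. chartist-hater asymmetry.
# Assumptions (for illustration only): logistic underlying curve,
# "the trend holds" = at least 10% step-over-step growth.
import math

def logistic(t, k=0.5, t0=20.0, cap=1000.0):
    """Near-exponential early on, saturating later."""
    return cap / (1.0 + math.exp(-k * (t - t0)))

THRESHOLD = 1.10  # "the trend" = >= 10% growth per step

# For a logistic, the growth ratio declines monotonically, so this is
# a run of Trues followed by a run of Falses.
holds = [logistic(t + 1) / logistic(t) >= THRESHOLD for t in range(60)]
break_step = holds.index(False)  # the one step where the trend breaks

actual = holds[: break_step + 1]
chartist = [True] * len(actual)   # "it holds" every step, until proven wrong once
hater = [False] * len(actual)     # "it breaks" every step, until vindicated once

print(f"trend breaks at t = {break_step}")
print(f"chartist wrong {sum(p != a for p, a in zip(chartist, actual))} time(s)")
print(f"hater wrong {sum(p != a for p, a in zip(hater, actual))} time(s), right once")
```

Under these assumptions the chartist eats exactly one miss, at the kink, while the hater racks up a miss at every step before it; timing the kink perfectly is the only way to get to zero.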
We seem to be seeing some kind of vibe shift when it comes to AI. What is less clear is whether it's a major vibe shift or a minor one. If it's a major one, we don't want to waste the opportunity: it wasn't clear immediately after the release of ChatGPT that we had a limited window, and if we'd known, maybe we could have leveraged it better. In any case, we should try not to waste this opportunity, if it does turn out to be a major vibe shift.
TARA Round 1, 2026: Last call, 9 spots remaining

We've accepted ~75 participants across 6 APAC cities for TARA's first round this year. Applications were meant to close in January, but we have room for 9 more people in select cities.

Open cities: Sydney, Melbourne, Brisbane, Manila, Tokyo & Singapore

If you're interested:
→ Apply by March 1 (AoE)
→ Attend the March 7 icebreaker
→ Week 1 begins March 14

TARA is a 14-week, part-time technical AI safety program delivering the ARENA curriculum through weekly Saturday sessions. It's designed for people who can't relocate or pause their careers for full-time programs.

Apply here: https://www.taraprogram.org/
This is more of a note for myself that I felt might resonate with or help some other folks here...

For Better Thinking, Consider Doing Less

I am, like I believe many EAs are, a kind of obsessive, type-A, "high-achieving" person with 27 projects and 18 lines of thought on the go. My default position is usually "work very, very hard to solve the problem." And yet some of my best, clearest thinking consistently comes when I back off and allow my brain far more space and downtime than feels comfortable, and I am yet again being reminded of that over the past couple of (deliberately quieter) weeks.

So, yeah, just a thought, but if you're feeling like you're banging your head against a problem, maybe (counter-intuitively) consider doing way less to solve it for a while.
alignment is a conversation between developers and the broader field. all domains are conversations between decision-makers and everyone else:

“here are important considerations you might not have been taking into account. here is a normative prescription for you.”

“thanks, i had been considering that to 𝜀 extent. i will {implement it because x / not implement it because y / implement z instead}.”

these are the two roles i perceive. how does one train oneself to be the best at either? sometimes, conversations at eag center around ‘how to get a job’, whereas i feel they ought to center around ‘how to make oneself significantly better than the second-best candidate’.