Quick takes
I'm surprised that there hasn't been an attempt (as far as I know) to fund or create a competitor to Epoch AI. It wouldn't have to compete on all benchmarks, but it would be good to have a forecasting organisation that could be trusted with potentially dual-use insights into capabilities trajectories. I don't believe this would require uniformity of views; it would just require people with a proper sense of responsibility.

I also think that the bad judgement displayed by some of their employees casts doubt on some of their research (emphasis on some, particularly the more subjective elements; Epoch is still my go-to source in many cases). Unfortunately, there's a difference between being intelligent and being wise, and one common way this distinction plays out is that some quite intelligent people follow the incentive gradient towards being excessively and reflexively contrarian.

Just to be clear, I'm not trying to attack their research, only noting that while a second opinion would always have been valuable, the fact that I trust them less on the margin makes the need for one feel more pressing to me. On producing high-quality research: Epoch has done many things well, but has also made a few mistakes that I would, perhaps controversially, call clear mistakes. I'm also fairly sure there's enough talent in the space now to create a second such effort. It could start small, and funders could help it scale if it proves itself.
This might feel obvious, but I think it's under-appreciated how much disagreement on AI progress comes down to priors (in a pretty specific way) rather than object-level reasoning.

I was recently arguing the case for shorter timelines to a friend who leans longer. We kept disagreeing on a surprising number of object-level claims, which was strange because we usually agree more on the kind of things we were arguing about. Then I realized what I think was going on: she had a pretty strong prior against what I was saying, and that prior is abstract enough that there's no clear mechanism by which I can push against it. So whenever I made a good object-level case, she'd just take the other side, not necessarily because her reasons were better all else equal, but because the prior was doing the work underneath without either of us really noticing.

There's something genuinely rational here that's unintuitive to get a grip on. If you have a strong prior and someone makes a persuasive argument against it, but you can't identify the specific mechanism by which their argument defeats it, you should probably update towards the counterarguments being better than they appear, even if you can't articulate them yet. From the outside, this looks just like motivated reasoning (and often is), but I think it can be importantly different.

The reason this is so hard to disentangle is that (unless your belief web is extremely clear to you, which seems practically impossible) it's just enormously complicated. Your prior on timelines isn't an isolated thing; it's load-bearing for a bunch of downstream beliefs all at once. So the resistance isn't obviously irrational; it's more like the system protecting its own coherence.

I think this means people should try their best to disentangle whether an object-level argument they're having stems from real object-level beliefs or from pretty abstract priors (in which case, it seems less worthwhile to keep arguing at the object level).
[Crossposted from social media, in the spirit of Draft Amnesty Week] After a lot of thinking, I am updating my Giving What We Can 🔸 10% donation allocation, shifting about a third of my donation portfolio to the Center for Land Economics 🔰. There are several reasons why I am excited about this donation opportunity.

I believe that Georgism has the potential to radically transform our economy and society. 'Land is a Big Deal', as they say. Raising public funds without deadweight costs is a big part of this, but more fundamentally, by reducing the cost of living and the role of rent-seeking, I hope it could shift our society from scarcity and zero-sum thinking to abundance and positive-sum collaboration.

Within this cause area, I believe that CLE is the most cost-effective donation opportunity. In their first year, they have achieved far more tangible benefits than I would have anticipated, and seeing this change has made me much more optimistic about the prospects for Georgist reform today than I was a year ago. They combine an incremental approach, giving legislators and tax assessors the tools necessary to improve the situation on the ground, with movement building and consistently high-quality public outreach through the Progress & Poverty Substack. And they have done this with a small but dedicated team, with only one funded FTE. This means that my donations, as a small private donor, will actually constitute a few percentage points of their annual budget. It is rare to have the opportunity to make such a counterfactual difference.

We can often find the most impactful donation opportunities in areas where we have access to idiosyncratic information that is not yet widely recognized by the wider 'donation market'. In my case, I think the world severely under-appreciates the potential of Georgist reform generally, and the work of CLE specifically. However, such idiosyncratic information can often be connected to unusual interests, which often comes w
Given the finitude of existence, I think I'd rather help an old lady cross the street & make someone smile than read forum posts. If I had a lot more time in a day, then forum posts might come after reading fiction, gardening, and baking dark chocolate brownies, but before reading Chomsky, learning Mandarin, and trying to find an answer to a non-pragmatic theoretical question like "If aliens exist, what could that mean about humanity?"