All Posts


Friday, February 14th 2020

Shortform [Beta]
22 · Max_Daniel · 3d · Is longtermism bottlenecked by "great people"?

Someone very influential in EA recently claimed in conversation with me that there are many tasks X such that (i) we currently don't have anyone in the EA community who can do X, (ii) the bottleneck for this isn't credentials or experience or knowledge but person-internal talent, and (iii) it would be very valuable (specifically from a longtermist point of view) if we could do X. And that therefore what we most need in EA are more "great people". I find this extremely dubious. (In fact, it seems so crazy to me that it seems more likely than not that I significantly misunderstood the person who I think made these claims.)

The first claim is of course vacuously true if, for X, we choose some ~impossible task such as "experience a utility-monster amount of pleasure" or "come up with a blueprint for how to build safe AGI that is convincing to benign actors able to execute it". But of course more great people don't help with solving impossible tasks.

Given the size and talent distribution of the EA community, my guess is that for most apparent X the issue is either that (a) X is ~impossible; (b) there are people in EA who could do X, but the relevant actors cannot identify them; or (c) acquiring the ability to do X is costly (e.g. perhaps you need time to acquire domain-specific expertise), even for maximally talented "great people", and the relevant actors either are unable to help pay that cost (e.g. by training people themselves, or by giving them the resources to get training elsewhere) or make a mistake by not doing so.

My best guess for the genesis of the "we need more great people" perspective: Suppose I talk a lot to people at an organization that thinks there's a decent chance we'll develop transformative AI soon but that it will go badly, and that as a consequence tries to grow as fast as possible to pursue various ambitious activities which they think reduce that risk. If these activities are scalable

Tuesday, February 11th 2020

Shortform [Beta]
1 · evelynciara · 6d · A social constructivist perspective on long-term AI policy

I think the case for addressing the long-term consequences of AI systems holds even if AGI is unlikely to arise. The future of AI development will be shaped by social, economic, and political factors, and I'm not convinced that AGI will be desirable in the future or that AI is necessarily progressing toward AGI. However, (1) AI already has large positive and negative effects on society, and (2) I think it's very likely that society's AI capabilities will improve over time, amplifying these effects and creating new benefits and risks in the future.
1 · Ramiro · 6d

Does anyone know of, or have a serious opinion on or analysis of, the European campaign to tax meat? I read some news in Le Monde, but nothing with EA-level seriousness. It seems like a pretty good idea, but I saw no data on possible impact, probability of adoption, possible ways to contribute, or even possible side effects. (Not the best comparison, but worth noting: in Brazil, a surge in meat prices caused an inflation peak in December and eroded the government's support - yeah, people can tolerate politicians meddling with criminals and fascism, as long as they can have barbecue.)

Sunday, February 9th 2020

Shortform [Beta]
4 · EdoArad · 8d

MIT has a new master's program in Development Economics: https://micromasters.mit.edu/dedp/. It is taught by Esther Duflo and Abhijit Banerjee, the recent Nobel laureates. Seems cool :)
