
NickGabs

190 karma · Joined June 2022

Bio

Harvard student and community builder interested in AI xrisk reduction

Comments
11

I think you should be substantially more optimistic about the effects of aligned AGI. Once we have aligned AGI, high-end cognitive labor becomes very cheap: once an AI system is trained, it is relatively cheap to deploy en masse. Some of these AI scientists would presumably work on making AIs cheaper, if not more capable, which in the limit yields a functionally infinite supply of high-end scientists. Given such a supply, we would quickly discover basically everything that can be discovered through parallelizable scientific labor, which, if not everything, is at least quite a lot (e.g. I have fairly high confidence that we could solve aging, develop extremely good vaccines to guard against biorisk, etc.). Moreover, this is only a lower bound; I think AGI will probably become significantly smarter than the smartest human relatively quickly, so we will probably do even better than the scenario above.

I think you can stress the "ideological" implications of externalities to lefty audiences while taking a more neutral tone with more centrist or conservative audiences. The idea that externalities exist and require intervention is not, IMO, especially ideologically charged.

I think the results being surprising is indicative of EAs underestimating how likely this is. AI has many bad effects: social media harms, bias and discrimination, unemployment, deepfakes, etc. Moreover, I think sufficiently competent AI will seem scary to people; many people aren't really aware of recent developments but would, I think, be freaked out if they were. We should position ourselves to make use of this backlash if it happens.

It is true that private developers internalize some of the costs of AI risk. However, this is also true of carbon emissions: if a company emits CO2, its shareholders do pay some costs in the form of a more polluted atmosphere. The problem is that the private developer pays only a very small fraction of the total costs, a fraction which, while still quite large in absolute terms, is plausibly worth paying for the upside. For example, if I were entirely selfish and thought AI risk was somewhat less likely than I actually do (say 10%), I would probably be willing to risk a 10% chance of death for a 90% chance of massive resource acquisition and control over the future. However, if I internalized the full costs of that 10% chance (everyone else dying and all future generations being wiped out), I would not be willing to take that gamble.
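To make the gamble concrete (using the 10%/90% figures above; the symbols and the framing as a private vs. social expected value comparison are my own illustrative sketch, not a precise model):

\[
\mathrm{EV}_{\text{selfish}} = 0.9 \cdot V_{\text{control}} - 0.1 \cdot C_{\text{own death}}
\]
\[
\mathrm{EV}_{\text{social}} = 0.9 \cdot V_{\text{control}} - 0.1 \cdot C_{\text{everyone dies, future lost}}
\]

Because the social cost term is astronomically larger than the private one while the upside term is the same, the first expression can easily be positive while the second is deeply negative. The gamble looks worthwhile to the developer only because nearly all of the downside falls on everyone else.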

The key here is transparency. Partly because people openly discuss their moral views, and partly because, even when people don't explicitly state their views, others are good enough at reading them to get at least weak evidence about whether they are trustworthy, consequentialists may be unable to seem like perfect deontologists without actually being deontologists.

I think the key thing here is that the criterion by which EA intellectuals decide whether something is interesting is significantly related to whether it is useful. First, because many EAs are intellectually interested in things that are at least somewhat relevant to EA, many of these fields seem useful at least at a high level: moral philosophy, rationality, and AI alignment are all clearly important for EA. Moreover, many people don't find these topics interesting at all, so they are genuinely highly neglected. This is compounded by the fact that they are very hard, so probably only quite smart people with good epistemics can make much progress on them. These two features in turn make the work look more suspiciously theoretical than it would be if the broad domains in question (formal ethics, applied rationality, alignment) were less neglected, since fields become increasingly applied as they become better theorized. In other words, it seems prima facie plausible that highly talented people should work in domains that are initially selected partly for relevance to EA, that are highly neglected because they are difficult and uninteresting to people not drawn to EA-adjacent topics, and that are therefore more theoretical than they would be if more people worked on them.
