Ian Turner

Joined Jan 2023

Comments: 38

I suppose it depends on the definition of poverty that you want to use, but if we are talking about the global poverty line, redistribution alone would easily be adequate. There are individual people alive today whose personal assets would be sufficient to close the gap.

https://www.brookings.edu/blog/up-front/2016/01/20/the-global-poverty-gap-is-falling-billionaires-could-help-close-it/

I think the linked article does consider it but rejects it as unrealistic. See the section, "In this scenario every country that is richer than Denmark reduces average incomes". A footnote explicitly describes this scenario as "a redistribution of incomes from richer countries to poorer countries."

If people do use chatbots to help with pro se litigation, then that opens a possible legal theory of liability against AI companies, namely that AI chatbots (or the companies that run them) are practicing law without a license.

Of course, this could extend to other related licensure violations, such as practicing medicine without a license.

It seems plausible to me that legal liability issues could be used to slow down AI development, at least in the West. But that doesn't mean that donating to legal assistance would be a good use of funds. My sense is that there are many plaintiffs armed with plenty of money to fund their own lawsuits, and some of those lawsuits have already happened.

What might be helpful, however, would be amicus briefs from AI alignment, development, or governance organizations, arguing that AI developers should face liability for errors in or misuse of their products. That seems like something that EA funders might want to consider?

A timely question. I have seen some recent media coverage about other possible legal theories of liability:

  • The Wall Street Journal ran an opinion piece this week about a theory of libel for falsely defamatory information produced by AI: ChatGPT Libeled Me. Can I Sue?
  • The Economist published an article this week about the interaction of AI and copyright law, drawing an analogy to the effects of Napster on the market for recorded music and highlighting the lawsuit between Getty Images and Stability AI over data collection: A battle royal is brewing over copyright and AI

That may be true; but for anyone tempted to try it, just a reminder that "the values here are for 'good/lucky' trips and there is no guarantee e.g. LSD will feel good on a given occasion."

How should we think about the 17% response rate to this survey? Is it possible that researchers who are more concerned about alignment are also more likely to complete the survey?
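To make the worry concrete, here is a minimal sketch of how differential nonresponse can skew a survey's headline figure. All numbers are invented for illustration; they are chosen only so that the overall response rate comes out near 17%, and do not describe the actual survey:

```python
# Hypothetical illustration of survey nonresponse bias (all numbers invented).
# Suppose 20% of the surveyed researchers are genuinely "concerned" about
# alignment, but concerned researchers respond at 30% while unconcerned
# researchers respond at only 14%.
population = 1000
concerned = int(0.20 * population)        # 200 concerned researchers
unconcerned = population - concerned      # 800 unconcerned researchers

resp_concerned = 0.30 * concerned         # 60 responses from the concerned
resp_unconcerned = 0.14 * unconcerned     # 112 responses from the unconcerned

total_responses = resp_concerned + resp_unconcerned   # 172 responses
measured_share = resp_concerned / total_responses     # share of respondents concerned

print(f"Overall response rate:    {total_responses / population:.0%}")  # 17%
print(f"True concerned share:     {concerned / population:.0%}")        # 20%
print(f"Measured concerned share: {measured_share:.0%}")                # 35%
```

Under these made-up response rates, a true 20% concerned share would show up as roughly 35% among respondents, which is why the 17% response rate matters when interpreting the results.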

In my experience, an extremely common lay objection to GiveDirectly is something along the lines of, "Won't recipients waste the money on alcohol/drugs/tobacco/luxuries/etc.?", with a second-tier objection of, "Won't cash transfers cause inflation/conflict/dependence/etc.?".

I think both these questions have been pretty well addressed by the research, but those who are not aware of (or do not trust) that research are, I think, pretty likely to believe that cash transfers are neutral or harmful.
