
harsimony

198 karma · Joined Jan 2021

Comments (16)

Thanks for posting this; it seems like valuable work.

I'm particularly interested in using MLOSS to intentionally shape AI development. For example, could we identify key areas where releasing particular MLOSS can increase safety or extend the time to AGI?

Finding ways to guide AI development towards narrow and simple AI models can extend AI timelines, which is complementary to safety work:

https://www.lesswrong.com/posts/BEWdwySAgKgsyBzbC/satisf-ai-a-route-to-reducing-risks-from-ai

In your opinion, what traits of a particular piece of MLOSS determine whether it increases or decreases risk?

Ok, and any advice for reaching out to trusted-but-less-prestigious experts? It seems unlikely that reaching out to e.g. Kevin Esvelt will generate a response!

Great post; I really appreciate this in-depth review of research on reducing sleep need.

I wrote some arguments for why reducing sleep is important here:

https://harsimony.wordpress.com/2021/02/05/why-sleep/

I also submitted a cause exploration application:

https://harsimony.wordpress.com/2022/07/14/cause-exploration-prize-application/

Your post includes substantially more research than mine, and I would encourage you to reformat it and submit it to OpenPhil's Cause Exploration Prize. I'm happy to help you with edits or to combine our efforts!

This kind of thing could be made more sophisticated by making fines proportional to the harm done, requiring more collateral for riskier projects, or setting up a system to short sell different projects. But simpler seems better, at least initially.

Have you thought about whether it could work with a more free market, and not necessarily knowing all of the funders in advance?

Yeah, that's a harder case. Some ideas:

  • People undertaking projects could still post collateral on their own (or pre-commit to accepting a fine under certain conditions). Retro funders could reward this behavior by giving these projects more consideration, and posting collateral is itself a costly signal of quality. But this still requires some pre-commitment from retro funders or a general consensus from the community.

  • If contributors undertake multiple projects, it should be possible to punish them after the fact by docking some of their rewards from other projects. For example, if someone participates in one beneficial project and one harmful project, their retro funding rewards from the beneficial project can be reduced because of their participation in the harmful one. Unfortunately, this still requires some sort of pre-commitment from funders.

I proposed a simple solution to the problem:

  1. For a project to be considered for retroactive funding, participants must post a specific amount of money as collateral.
  2. If a retroactive funder determines that the project was net-negative, they can burn the collateral to punish the people who participated in it. Otherwise, the project receives its collateral back.

This eliminates the "no downside" problem of retroactive funding and makes some net-negative projects unprofitable.

The amount of collateral can be chosen adaptively. Start with a small amount and increase it slowly until the number of net-negative projects is low enough. Note that setting the collateral too high can discourage net-positive but risky projects.
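To make the adaptive rule concrete, here is a minimal sketch in Python (the function name, numbers, and multiplicative update rule are all hypothetical illustrations, not part of any existing retro funding system):

```python
def update_collateral(current_collateral, net_negative_rate,
                      target_rate=0.05, step=1.10):
    """Adjust the collateral requirement after each funding round.

    current_collateral: amount required per project this round
    net_negative_rate: fraction of funded projects judged net-negative
    target_rate: acceptable fraction of net-negative projects
    step: multiplicative adjustment factor
    """
    if net_negative_rate > target_rate:
        # Too many harmful projects slipped through: raise the stake.
        return current_collateral * step
    # Otherwise relax slightly so risky-but-positive projects aren't deterred.
    return current_collateral / step


# Example: start small and let the requirement drift toward an equilibrium.
collateral = 100.0
for observed_rate in [0.20, 0.15, 0.10, 0.04, 0.03]:  # hypothetical round outcomes
    collateral = update_collateral(collateral, observed_rate)
    print(round(collateral, 2))
```

A multiplicative update is just one choice; the important property is that the requirement rises when too many harmful projects get funded and falls once it starts deterring good ones.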

I make a slightly different anti-immortality case here:

https://harsimony.wordpress.com/2020/11/27/is-immortality-ethical/

Summary: at a steady-state population, extending lifespans means taking resources away from other potential people, so life-extension technology may not be ethical in that case. Because we are not at a steady state, this does not argue against working on life-extension technology today.

One reason people make this claim is that many models of economic growth depend on population growth. As you noted, there are lots of other ways to grow the economy by making each individual more productive (lower poverty, more education, automating tasks, more focus on research, etc.).

But crucially, all of these measures have diminishing returns. Say that in the future everyone on earth has a PhD, is highly productive, and works in an important research field. At that point, the only way to keep growing the economy is through population growth, since everything else has already been maxed out. This is why Chad Jones argues that the long-run growth rate is ultimately limited by the population growth rate:

https://web.stanford.edu/~chadj/annualreview.pdf
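For concreteness, here is my hedged paraphrase of the semi-endogenous growth logic behind that claim (notation mine, not quoted from the paper). If new ideas are produced as

$$\dot{A} = \delta L^{\lambda} A^{\phi}, \qquad \phi < 1,$$

then along a balanced growth path the growth rate of ideas (and hence of income per person) is

$$g_A = \frac{\lambda n}{1 - \phi},$$

where $n$ is the population growth rate. If $n$ falls to zero, long-run growth falls to zero with it.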

At least, that's what the models say. Jones himself admits that AI might change these dynamics (I guess the population growth of AIs would become the thing that matters if they replace human labor?).

Thanks for writing this. Great to see people encouraging a sustainable approach to EA!

I want to tell you that taking care of yourself is what’s best for impact. But is it?

I claim that this is true:

  • Finding personal fulfillment is a positive result in and of itself.
  • It's important to prioritize personal needs; otherwise you will not be in a good position to help others (family, friends, charity, etc.).
  • Ensuring one's relationship with EA is sustainable can actually lead to more impact over the long run (though this shouldn't be people's primary goal; personal wellbeing comes first).
  • Encouraging a sustainable culture can make EA more welcoming to others.

I think another possible route around gambling restrictions on prediction markets is to ensure that all proceeds go to charity, with the winners choosing which charities the money goes to. I wrote about this more here:

https://forum.effectivealtruism.org/posts/d43f6HCWawNSazZqb/charity-prediction-markets
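As a rough illustration of the mechanism (the names, stakes, and settlement rule below are made up for this sketch, not taken from the linked post), here is how a binary market might route the whole pool to winner-chosen charities:

```python
from collections import defaultdict

def settle_charity_market(bets, outcome):
    """Settle a binary prediction market where all proceeds go to charity.

    bets: list of (bettor, side, stake, chosen_charity)
    outcome: the side that resolved true ("yes" or "no")
    Returns {charity: donation}; winners direct the pool in
    proportion to their stakes on the correct side.
    """
    pool = sum(stake for _, _, stake, _ in bets)
    winning_stake = sum(stake for _, side, stake, _ in bets if side == outcome)
    donations = defaultdict(float)
    for _, side, stake, charity in bets:
        if side == outcome and winning_stake > 0:
            donations[charity] += pool * stake / winning_stake
    return dict(donations)


# Example: the winner's chosen charity receives the whole pot.
bets = [("alice", "yes", 60, "AMF"), ("bob", "no", 40, "GiveDirectly")]
print(settle_charity_market(bets, "yes"))  # {'AMF': 100.0}
```

Since no bettor can ever take money home, the hope is that this looks more like directed donating than like gambling.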

I have noticed that few people hold the view that we can readily reduce AI risk. Either they are very pessimistic (they see no viable solutions, so reducing risk is hard) or they are optimistic (they assume AI will be aligned by default, so trying to improve the situation is superfluous).

Either way, this would argue against alignment research, since alignment work would not produce much change.

Strategically, it's best to assume that alignment work does reduce AI risk, since the cost of doing too much alignment work is small relative to the cost of doing too little and causing a catastrophe.
