
colin

30 karma · Joined Jun 2020

Comments (11)

"However, agricultural productivity growth also increases the income people can earn in agriculture relative to in other sectors, so it could also incentivize people to stay in agriculture. That is the opposite of what we want!"

Higher incomes are the goal, so why is it a problem if they come from staying in agriculture?  Is the idea that agricultural incomes top out at a lower level than in manufacturing- or service-focused economies?  Aren't there some developed countries, like New Zealand, where agriculture makes up more than half of exports?

"The way I hope for you to read this series is with an entrepreneurial eye"

 

I appreciate this specific call to action, so I'll kick us off.  How will AI advances affect the sectoral transformations that move people into the service sector?  It will depend on where AI is a complement to labor versus a replacement for it.  In the complement case, high-quality instantaneous translation could dramatically expand the export market for services by eliminating English fluency as a barrier.  In the replacement case, AI agents trained to write code could replace many routine contract software jobs.

Is there literature on skill gaps in LMI countries that is more granular than proxy metrics like years of schooling?  That could be a good place to start to look for where AI complements could open up opportunities. 

Are there plans to release the videos from EAGx Virtual?

I've been thinking about this for a while now and I agree with all of these points. Another route would be selling AI safety solutions to non-AI-safety firms.  This solves many of the issues raised in the post but introduces new ones.  As you mentioned in the Infertile Idea Space section, companies often start with a product idea, talk to potential customers, and then end up building a different product to solve a bigger problem that the customer segment has.  In this context, that could look like offering a product with an alignment tax, finding that customers aren't willing to pay it, and pivoting to something accelerationist instead.  You might think "I would never do that!", but it can be very easy to fool yourself into thinking you are still having a positive impact when the alternative is the painful process of firing your employees and telling your investors (who are often your friends and family) that you lost their money.

Outsourcing a function is usually not binary.  For example, Red Bull's brand was originally developed by an outside agency (https://kastner.agency/work/red-bull-brand), and the company still uses a mix of internal and external marketing teams today.  Often a company's internal team for a function serves as a bridge between the company and contractors.

With that said, I wonder if the people asking about outsourcing are thinking of it in the literal "employee vs. contractor" sense that you covered.  When I have heard these debates, I believe people meant "If we have money now, why not hire non-EAs for these positions?".

Ah, yeah, I misread your opinion of the likelihood that humans will ever create AGI.  I believe it will happen eventually unless AI research stops for some exogenous reason (civilizational collapse, a ban on development, etc.).  Important assumptions I am making:

  • General intelligence is just computation, so it isn't substrate-dependent.
  • The more powerful an AI is, the more economically valuable it is to its creators.
  • Moore's Law will continue, so more compute will be available.
  • If other approaches fail, we will be able to simulate brains with sufficient compute.
  • Fully simulated brains will be AGI.

I'm not saying that I think this would be the best, easiest, or only way to create AGI, just that if every other attempt fails, I don't see what would prevent this from happening, particularly since we are already able to simulate portions of a mouse brain.  I am also not claiming that this implies short timelines for AGI; I don't have a good estimate of how long this approach would take.

I'm going to attempt to summarize what I think part of your current beliefs are (please correct me if I am wrong!):

  • Current ML techniques are not sufficient to develop AGI
  • But someday humans will be able to create AGI
  • It is possible (likely?) that it will be difficult to ensure that the AGI is safe
  • It is possible that humans will give enough control to an unsafe AGI that it becomes an existential risk.

If I got that right, I would describe that as both having (appropriately loosely held) beliefs about AI safety and agreeing that AI poses a risk of some unspecified probability and magnitude.

What you don't have a view on, but you believe people in AI safety do have strong views on, is (again, not trying to put words in your mouth, just my best attempt at understanding):

  • Is AI safety actually possible?
  • What work would be useful to increase AI Safety if that is possible?
  • How important is AI safety compared to other cause areas?
     

My (fairly uninformed) view is that people working on AI safety don't know the answers to the first or second question.  Rather, they think that the probability and magnitude of the problem are high enough to swamp those questions when calculating the importance of the cause area.  Some of these people have tried to model out this reasoning, while others lean more on intuition.  I think reducing the uncertainty on any of these three questions is useful in itself, so I think it would be great if you wanted to work on that.
 

"it would be ideal for you to work on something other than AGI safety!"

I disagree. Here is my reasoning:

  • Many people who have extensive ML knowledge are not working on safety, either because they are not convinced of its importance or because they haven't fully wrestled with the issue.
  • In this post, Ada-Maaria articulated the path to her current beliefs and how current AI safety communication has affected her.
  • She has done a much more rigorous job of evaluating the persuasiveness of these arguments than anyone else I've read.
  • If she continues down this path, she could discover either the unstated assumptions the AI safety community has failed to communicate or the actual flaws in the AI safety argument.
  • This will either make it easier for AI Safety folks to express their opinions or uncover assumptions that need to be verified.
  • Either would be valuable!

"I feel like it would be useful to write down limitations/upper bounds on what AI systems are able to do if they are not superintelligent and don’t for example have the ability to simulate all of physics (maybe someone has done this already, I don’t know)" - I think it would be useful and interesting to explore this. Even if someone else has done this, I'd be interested in your perspective.

I want to strongly second this!  I think that a proof of the limitations of ML under certain constraints would be incredibly useful, either to narrow the area in which we need to worry about AI safety or at least to limit the types of safety questions that need to be addressed in that subset of ML.

I'll add that advance market commitments are also useful in situations where a jump-start isn't explicitly required.  In that case, they can act similarly to prize-based funding.
