I've been thinking about this for a while now and I agree with all of these points. Another route would be selling AI Safety solutions to non-AI-safety firms. This solves many of the issues raised in the post but introduces new ones. As you mentioned in the Infertile Idea Space section, companies often start with a product idea, talk to potential customers, and then end up building a different product to solve a bigger problem that the customer segment has. In this context, that could look like offering a product that carries an alignment tax, finding that customers aren't willing to pay it, and pivoting to something accelerationist instead. You might think "I would never do that!", but it can be very easy to fool yourself into thinking you are still having a positive impact when the alternative is the painful process of firing your employees and telling your investors (who are often your friends and family) that you lost their money.
Outsourcing a function is usually not binary. For example, Red Bull's brand was originally developed by an outside agency (https://kastner.agency/work/red-bull-brand), and the company still uses a mix of internal and external marketing teams today. Often the internal team for a given function serves as a bridge between the company and its contractors.
With that said, I wonder if the people asking about outsourcing are thinking of it in the literal "employee vs contractor" sense that you covered. When I have heard these debates, I believe people meant "If we have money now, why not hire non-EAs for these positions?"
Ah, yeah, I misread your opinion of the likelihood that humans will ever create AGI. I believe it will happen eventually unless AI research stops for some exogenous reason (civilizational collapse, a ban on development, etc.). Important assumptions I am making:
I'm not saying that I think this would be the best, easiest, or only way to create AGI, just that if every other attempt fails, I don't see what would prevent this from happening, particularly since we are already able to simulate portions of a mouse brain. I am also not claiming that this implies short timelines for AGI; I don't have a good estimate of how long this approach would take.
I'm going to attempt to summarize what I think some of your current beliefs are (please correct me if I am wrong!)
If I got that right, I would describe that as both having (appropriately loosely held) beliefs about AI safety and agreeing that unsafe AI poses a risk of some unspecified probability and magnitude.
What you don't have a view on, but believe people in AI safety do have strong views on, is (again, not trying to put words in your mouth, just my best attempt at understanding):
My (fairly uninformed) view is that people working on AI safety don't know the answer to the first or second question. Rather, they think that the probability and magnitude of the problem are high enough to swamp those questions in calculating the importance of the cause area. Some of these people have tried to model out this reasoning, while others lean more on intuition. I think reducing the uncertainty around any of these three questions would be useful in itself, so I think it would be great if you wanted to work on that.
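To make that "swamping" reasoning concrete, here is a minimal sketch in Python with entirely made-up numbers (not anyone's actual estimates): as long as the assumed magnitude is large, the expected loss stays enormous across a wide range of probabilities, which is why the unanswered questions get treated as second-order.

```python
# Toy expected-value comparison; every number here is invented for illustration.
# The point: with a large enough assumed magnitude, the product
# probability * magnitude dominates even under wide uncertainty about the probability.

MAGNITUDE = 10**10  # hypothetical badness of the outcome, in arbitrary welfare units

for probability in (0.001, 0.01, 0.1):
    expected_loss = probability * MAGNITUDE
    print(f"P = {probability:>5}: expected loss = {expected_loss:,.0f} units")

# A near-certain but much smaller problem, for comparison:
print(f"P = 1.0 on a 10**6-unit problem: expected loss = {1.0 * 10**6:,.0f} units")
```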
"it would be ideal for you to work on something other than AGI safety!"
I disagree. Here is my reasoning:
"I feel like it would be useful to write down limitations/upper bounds on what AI systems are able to do if they are not superintelligent and don’t for example have the ability to simulate all of physics (maybe someone has done this already, I don’t know)" - I think it would be useful and interesting to explore this. Even if someone else has done this, I'd be interested in your perspective.
I want to strongly second this! I think that a proof of the limitations of ML under certain constraints would be incredibly useful: it would narrow the area in which we need to worry about AI safety, or at least limit the types of safety questions that need to be addressed in that subset of ML.
I'll add that advance market commitments are also useful in situations where a jump-start isn't explicitly required. In that case, they can act similarly to prize-based funding.
This is really interesting. Setting up individual projects as DAOs could be an effective way to manage this. The DAO issues tokens to founders, advisors, and donors. If it retrospectively turns out that the project was particularly impactful, a funder can buy and burn the DAO's tokens, which drives up the price and thereby rewards all of the remaining holders.
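Here is a minimal sketch of that buy-and-burn dynamic in Python. The pricing rule (price = a fixed total demand divided by circulating supply) and all of the numbers are simplifying assumptions purely for illustration, not a claim about how a real token market would behave:

```python
# Toy model of rewarding a project DAO's token holders via buy-and-burn.
# Deliberately naive assumption: total demand (market cap) for the token stays
# fixed, so price = FIXED_MARKET_CAP / circulating_supply.

FIXED_MARKET_CAP = 100_000  # hypothetical total demand for the project's tokens, in dollars

def price(circulating_supply: float) -> float:
    return FIXED_MARKET_CAP / circulating_supply

# The DAO mints 1,000,000 tokens and distributes them to founders, advisors, and donors.
supply = 1_000_000
print(f"price before retro funding: ${price(supply):.2f}")  # $0.10

# Years later, a funder judges the project impactful, buys $50,000 of tokens
# at the prevailing price, and burns them, shrinking the circulating supply.
tokens_burned = 50_000 / price(supply)
supply -= tokens_burned
print(f"price after buy-and-burn:   ${price(supply):.2f}")  # $0.20 -- every remaining holder benefits
```

In a real market the funder's purchases would also move the price directly, but the basic effect is the same: shrinking the supply concentrates value in the remaining tokens.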
Are there plans to release the videos from EAGx Virtual?