SoerenMind

Comments

The Future Fund’s Project Ideas Competition

Acquire and repurpose new AI startups for AI safety

Artificial intelligence

As ML performance has improved recently, a new wave of startups is emerging. Some combine top talent, carefully engineered infrastructure, a promising product, well-coordinated teams, existing workflows, and management capacity. All of these are bottlenecks for AI safety R&D.

It should be possible to acquire some appropriate startups and mid-sized companies. Examples include HuggingFace, AI21, Cohere, and smaller, newer startups. The idea is to repurpose the missions of selected companies to align them more closely with socially beneficial and safety-oriented R&D. This is sometimes feasible since their missions are often broad and still in flux, and their products could benefit from improved safety and alignment.

Trying this could have very high information value. If it works, it has enormous potential upside, as many new AI startups are being created now that could be acquired in the future. It could potentially more than double the size of the AI alignment R&D field.

Paying existing employees to do safety R&D seems easier than paying academics. Academics often prefer to pursue their own ideas, whereas employees are already used to working on what their employer directs. In fact, they may find alignment and safety R&D more motivating than their company's existing mission. Additionally, some founders may be more willing to sell to a non-profit org with a social-good mission than to Big Tech.

Big tech companies acquire small companies all the time. The reasons for this vary (e.g. killing competition), but overall it suggests that it can be feasible and even profitable.

Caveats:

1) A highly qualified replacement may be needed for the top-level management.

2) Some employees may leave after an acquisition. This seems more likely if the pivot towards safety requires a big change in skills and workflows, or if employees dislike the new mission. Both risks can be partially avoided by acquiring the right companies and steering them towards a mission that is relevant to their existing work. For example, natural language generation startups would usually benefit from fine-tuning their models with alignment techniques.

We should consider funding well-known think tanks to do EA policy research

I'm no expert in this area, but I'm told that European think tanks are often strapped for cash, which may explain why funders get so much influence (promising for the funder, of course, but it may not generalize to the US).

Is there evidence that recommender systems are changing users' preferences?

IIRC Tristan Harris has also made this claim. Maybe his 80k podcast episode or The Social Dilemma has some clues.

Edit: maybe he just said something like 'YouTube's algorithm is trained to send users down a rabbit hole'.

AMA: Ajeya Cotra, researcher at Open Phil

Re why AI isn't generating much revenue: have you considered the productivity paradox? It's historically normal for productivity growth to slow down before steeply increasing when a new general-purpose technology arrives.

See "Why Future Technological Progress Is Consistent with Low Current Productivity Growth" in "Artificial Intelligence and the Modern Productivity Paradox"

My recommendations for RSI treatment

Instructions for that: http://www.eccentrictraining.com/6.html

Correlations Between Cause Prioritization and the Big Five Personality Traits

That's really interesting, thanks! Do you (or someone else) have a sense of how much variation in priorities can be explained by the Big Five?
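
For concreteness, here's a minimal sketch of what I mean by "variation explained", assuming access to the underlying survey data with one column per Big Five trait and a numeric prioritization rating (the file name and column names below are hypothetical placeholders, not from the post):

```python
# Rough sketch: estimate the share of variance in a cause-prioritization rating
# that a linear model on the Big Five traits explains (R^2).
# File name and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("survey_responses.csv")
traits = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]

X = df[traits]
y = df["longtermism_priority"]  # e.g. a 1-7 rating of how highly the respondent prioritizes longtermism

model = LinearRegression().fit(X, y)
print(f"Share of variance explained (R^2): {model.score(X, y):.2f}")
```

A cross-validated R² (e.g. via sklearn's cross_val_score) would give a less optimistic estimate than the in-sample number above.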

We're (surprisingly) more positive about tackling bio risks: outcomes of a survey

Makes sense. I guess the question then is whether the work of everyone except the x-risk-focused NGOs helps reduce x-risk much. I tend to think yes, since much of pandemic preparedness also addresses the worst-case scenarios. But that seems to be an open question.

We're (surprisingly) more positive about tackling bio risks: outcomes of a survey

Thanks, great analysis! Just registering that I still expect bio risk will be less neglected than in the past. The major consideration for me is institutional funding, due to its scale. Like you say:

We believe that an issue of the magnitude of COVID-19 will likely not be forgotten soon, and that funding for pandemic preparedness will likely be safe for much longer than in the aftermath of previous pandemics. In particular it may persist long enough to become institutionalised and therefore harder to cut.

Aside from future institutional funding, we also have to take into account current funding and new experience, because they contribute to our cumulative knowledge and preparedness.

The academic contribution to AI safety seems large

Important question, and nicely researched!

A caveat is that some essential subareas of safety may be neglected. This is not a problem when subareas substitute for each other: e.g. debate substitutes for amplification, so it's okay if one of them is neglected. But it is a problem when subareas complement each other: e.g. alignment complements robustness, so we probably need to solve both. See also When causes multiply.

It's okay for a subarea to be neglected as long as there's a substitute for it. But so far it seems that some subareas are necessary components of AI safety (perhaps both inner and outer alignment are).
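
To make the substitutes-vs-complements point concrete, here's a toy illustration (my own, with made-up effort numbers) of why neglect is tolerable for substitutes but not for complements:

```python
# Toy model: total value of AI safety work when subareas substitute vs. complement.
# Effort levels are illustrative numbers in [0, 1], not real estimates.

def value_if_substitutes(efforts):
    # Substitutable subareas: the best-developed one can stand in for the rest.
    return max(efforts.values())

def value_if_complements(efforts):
    # Complementary subareas: all are needed; modelled as a product, so a
    # single neglected area drags the total to zero.
    total = 1.0
    for effort in efforts.values():
        total *= effort
    return total

print(value_if_substitutes({"debate": 0.8, "amplification": 0.0}))  # 0.8 -> still fine
print(value_if_complements({"alignment": 0.8, "robustness": 0.0}))  # 0.0 -> not fine
```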
