Last nontrivial update: 2022-12-20.
Send me anonymous feedback: https://docs.google.com/forms/d/1qDWHI0ARJAJMGqhxc9FHgzHyEFp-1xneyl9hxSMzJP0/viewform
Any type of feedback is welcome, including arguments that a post/comment I wrote is net negative.
I'm interested in ways to increase the EV of the EA community by mitigating downside risks from EA-related activities. Without claiming originality, I think that:
Feel free to reach out by sending me a PM. I've turned off email notifications for private messages, so if you send me a time-sensitive PM, consider also pinging me about it via the anonymous feedback link above.
Temporary Mastodon handle (as of 2022-12-20): @ofer@mas.to.
Also: When CEA (now Effective Ventures) appointed an Executive Director in 2019, they wrote on their blog that the CEO of OpenPhil at the time "provided extensive advice" to the search committee and "contributed to the final recommendation to the board of trustees".
These risks mostly seem like “black swan” risks to us – deleterious but highly unlikely risks.
This effort could end up popularizing a mechanism that incentivizes and funds risky, net-negative projects in anthropogenic x-risk domains, using EA funding.
Conditional on this effort ending up being extremely impactful, do you really believe the downside risks are "highly unlikely"? (And do you think most of the EA community would agree?)
Solving (scalable) alignment might be worth lots of $$$ and key to beating China.

I really don't want Xi Jinping Thought to rule the world

If you want to win the AGI race, if you want to beat China, [...]

Let's not lose to China [...]
The China-is-an-opponent-that-we-must-beat-in-the-AI-race framing is a classic talking point of AI companies in the US, one that is used as an argument against regulation. Are you by any chance affiliated with an AI company, or with an organization that is funded by one?
Relatedly: deciding to vote can also be important because one's decision is correlated with the decisions of other potential voters. A more general version of this consideration is discussed in Multiverse-wide Cooperation via Correlated Decision Making by Caspar Oesterheld.
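As a toy illustration of this consideration, here is a minimal Python sketch comparing the expected value of voting when one's decision is treated as causally isolated versus correlated with like-minded potential voters. All of the numbers and the model itself are hypothetical simplifications, not anything from Oesterheld's paper:

```python
def ev_of_voting(p_pivotal: float, value_if_pivotal: float,
                 n_correlated: int, p_follow: float) -> float:
    """Expected value of deciding to vote, under a simple hypothetical model:
    my decision is 'matched' by n_correlated like-minded potential voters,
    each with probability p_follow, and each extra vote is pivotal with
    probability p_pivotal.

    Summing pivotal probabilities linearly is an approximation, reasonable
    while expected_extra_votes * p_pivotal is much smaller than 1.
    """
    expected_extra_votes = 1 + n_correlated * p_follow
    return expected_extra_votes * p_pivotal * value_if_pivotal

# Ignoring the correlation vs. accounting for it (all numbers hypothetical):
print(ev_of_voting(1e-7, 1e9, n_correlated=0, p_follow=0.0))       # 100.0
print(ev_of_voting(1e-7, 1e9, n_correlated=10_000, p_follow=0.2))  # 200100.0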
How did you come to the conclusion that funding ML research is "pretty messy and unpredictable"? I've seen many ML companies funded over the years as straightforwardly as other tech startups, […]
I think it's important to distinguish here between companies that intend to use existing state-of-the-art ML approaches (where the innovation is on the product side of things) and companies that intend to advance the state of the art in ML. I'm only claiming that research that aims to advance the state of the art in ML is messy and unpredictable.
To illustrate my point: If we use an extreme version of the messy-and-unpredictable view, we can imagine that OpenAI's research was like repeatedly drawing balls from an urn, where drawing each ball costs $1M and there is a 1% chance (or whatever) of drawing a Winning Ball (which is analogous to getting a super impressive ML model). The more funding OpenAI has, the more balls they can draw, and thus the more likely they are to draw a Winning Ball. Giving OpenAI $30M increases their chance of drawing a Winning Ball; though that increase must be small if they already have access to much more funding than $30M (without a super impressive ML model).
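To make this urn model concrete, here is a minimal Python sketch. The $1M cost per draw and the 1% success probability are the illustrative numbers from the paragraph above; the baseline funding levels are hypothetical:

```python
def p_winning_ball(budget_millions: float, p_per_draw: float = 0.01,
                   cost_per_draw_millions: float = 1.0) -> float:
    """Probability of drawing at least one Winning Ball given a budget,
    where each draw costs $1M and independently succeeds with
    probability 1% (the illustrative numbers from the text above)."""
    draws = int(budget_millions / cost_per_draw_millions)
    return 1 - (1 - p_per_draw) ** draws

# Marginal effect of an extra $30M at different (hypothetical) baseline
# funding levels.
for baseline in [30, 300, 3000]:
    delta = p_winning_ball(baseline + 30) - p_winning_ball(baseline)
    print(f"baseline ${baseline}M: extra $30M adds {delta:.2e} "
          f"to P(at least one Winning Ball)")
```

Under this toy model, the extra $30M adds roughly 0.19 to the success probability at a $30M baseline, about 0.013 at $300M, and a negligible amount at $3,000M, matching the claim that the increase must be small when the baseline funding is much larger than $30M.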
Research that involves game theory simulations can be net-positive, but it also seems very dangerous and should not be done unilaterally, especially when it involves publishing papers and source code.