
Ofer

1910 karma · Joined Jun 2017

Bio

Last nontrivial update: 2022-12-20.

Send me anonymous feedback: https://docs.google.com/forms/d/1qDWHI0ARJAJMGqhxc9FHgzHyEFp-1xneyl9hxSMzJP0/viewform

Any type of feedback is welcome, including arguments that a post/comment I wrote is net negative.

I'm interested in ways to increase the EV of the EA community by mitigating downside risks from EA-related activities. Without claiming originality, I think that:

  • Complex cluelessness is a common phenomenon in the domains of anthropogenic x-risks and meta-EA (due to an abundance of crucial considerations). It is often very hard to judge whether a given intervention is net-positive or net-negative.
  • The EA community is made up of humans, and human judgement tends to be influenced by biases and self-deception. Considering the previous point, that is a serious source of risk.
    • Some potential mitigations involve improving some aspects of how EA funding works, e.g. with respect to conflicts of interest. Please don't interpret my interest in such mitigations as accusations of corruption etc.

Feel free to reach out by sending me a PM. I've turned off email notifications for private messages, so if you send me a time-sensitive PM, consider also pinging me about it via the anonymous feedback link above.

Temporary Mastodon handle (as of 2022-12-20): @ofer@mas.to.

Comments (242)

Ofer
1mo

but what work on s-risks really looks like in the end is writing open source game theory simulations and writing papers

Research that involves game theory simulations can be net-positive, but it also seems very dangerous, and should not be done unilaterally. Especially when it involves publishing papers and source code.

Ofer
1mo

I couldn't find on the website of the Center for AI Safety any information about who is running it, or who is on the board. Is this information publicly available anywhere?

Ofer
1mo

The local incentives people face often discourage publicly giving negative feedback that may cause an applicant to not get funding. ("I would gain nothing from giving negative feedback, and that person might hate me.")

Ofer
1mo

Notably, it seems that Yoshua Bengio is one of the signatories (he is an extremely prominent AI researcher; one of the three researchers who won a Turing Award for their work in deep learning).

Ofer
2mo

Also: When CEA (now Effective Ventures) appointed an Executive Director in 2019, they wrote in their blog that the CEO of OpenPhil (at the time) "provided extensive advice" to the search committee and "contributed to the final recommendation to the board of trustees".

Ofer
2mo

These risks mostly seem like “black swan” risks to us – deleterious but highly unlikely risks.

This effort could end up popularizing a mechanism that incentivizes and funds risky, net-negative projects in anthropogenic x-risk domains, using EA funding.

Conditional on this effort ending up being extremely impactful, do you really believe the downside risks are "highly unlikely"? (And do you think most of the EA community would agree?)

Ofer
2mo

Solving (scalable) alignment might be worth lots of $$$ and key to beating China.

.

I really don't want Xi Jinping Thought to rule the world

.

If you want to win the AGI race, if you want to beat China, [...]

.

Let’s not lose to China [...]

The China-is-an-opponent-that-we-must-beat-in-the-AI-race framing is a classic talking point of AI companies in the US, one that is used as an argument against regulation. Are you by any chance affiliated with an AI company, or with an organization that is funded by one?

Ofer
2mo

I'm not sure. Very few people would use the term "correlation" here; but perhaps quite a few people sometimes reason along the lines of: "Should I (not) do X? What happens if many people (not) do it?"

Ofer
2mo

Relatedly: deciding to vote can also be important due to one's decisions being correlated with the decisions of other potential voters. A more general version of this consideration is discussed in Multiverse-wide Cooperation via Correlated Decision Making by Caspar Oesterheld.
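
A minimal sketch of the kind of back-of-the-envelope calculation this points at (all numbers are made-up assumptions, not taken from the paper): if deciding to vote correlates with the decisions of k other like-minded potential voters, an evidential-style estimate of the decision's expected value scales roughly with k + 1, provided k is small relative to the electorate so that pivotality stays roughly linear.

```python
# Illustrative sketch only; every number here is an assumption, not from the cited paper.
p_pivotal = 1e-7        # assumed probability that one additional vote changes the outcome
value_of_outcome = 1e9  # assumed value (arbitrary units) of the preferred outcome winning
k_correlated = 1_000    # assumed number of potential voters whose decisions correlate with yours

# Counting only your own vote:
ev_own_vote = p_pivotal * value_of_outcome

# Counting the votes your decision correlates with (a rough linear approximation,
# reasonable only when k_correlated is small relative to the electorate):
ev_correlated = (k_correlated + 1) * p_pivotal * value_of_outcome

print(f"EV of deciding to vote, own vote only:        {ev_own_vote:.2f}")
print(f"EV of deciding to vote, correlated decisions: {ev_correlated:.2f}")
```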

Ofer
3mo

How did you come to the conclusion that funding ML research is "pretty messy and unpredictable"? I've seen many ML companies funded over the years as straightforwardly as other tech startups, […]

I think it's important to distinguish here between companies that intend to use existing state-of-the-art ML approaches (where the innovation is in the product side of things) and companies that intend to advance the state-of-the-art in ML. I'm only claiming that research that aims to advance the state-of-the-art in ML is messy and unpredictable.

To illustrate my point: If we use an extreme version of the messy-and-unpredictable view, we can imagine that OpenAI's research was like repeatedly drawing balls from an urn, where drawing each ball costs $1M and there is a 1% chance (or whatever) of drawing a Winning Ball (which is analogous to getting a super impressive ML model). The more funding OpenAI has, the more balls they can draw, and thus the more likely they are to draw a Winning Ball. Giving OpenAI $30M increases their chance of drawing a Winning Ball, though that increase must be small if they have access to much more funding than $30M (without a super impressive ML model).
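
To make the arithmetic concrete, here is a minimal sketch of that urn model (the $1M cost per draw and 1% chance per draw come from the analogy above; the baseline funding figure is an arbitrary assumption for illustration):

```python
# Urn analogy from the comment above; the baseline funding level is an arbitrary assumption.
def p_winning_ball(num_draws: int, p_per_draw: float = 0.01) -> float:
    """Probability of drawing at least one Winning Ball in num_draws draws."""
    return 1 - (1 - p_per_draw) ** num_draws

cost_per_draw = 1_000_000          # $1M per draw, as in the analogy
baseline_funding = 1_000_000_000   # assumed: access to much more funding than $30M
extra_funding = 30_000_000         # the $30M grant under discussion

baseline_draws = baseline_funding // cost_per_draw
extra_draws = extra_funding // cost_per_draw

p_without = p_winning_ball(baseline_draws)
p_with = p_winning_ball(baseline_draws + extra_draws)

print(f"P(Winning Ball) without the extra $30M: {p_without:.6f}")
print(f"P(Winning Ball) with the extra $30M:    {p_with:.6f}")
print(f"Marginal increase from the $30M:        {p_with - p_without:.6f}")
```

Under these illustrative assumptions the marginal increase from the $30M is tiny, which is the sense in which the extra grant matters little when the lab already has access to far more funding.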
