ofer

Send me anonymous feedback: https://docs.google.com/forms/d/1qDWHI0ARJAJMGqhxc9FHgzHyEFp-1xneyl9hxSMzJP0/viewform

Any type of feedback is welcome, including arguments that a post/comment I wrote is net negative.


Some quick info about me:

I have a background in computer science (BSc+MSc; my MSc thesis was in NLP and ML, though not in deep learning).

You can also find me on the AI Alignment Forum and LessWrong.

Feel free to reach out by sending me a PM here or on my website.

Comments

EA Funds has appointed new fund managers

Committee members recused themselves from some discussions and decisions in accordance with our conflict of interest policy.

Is that policy public?

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

I'm not that aware of what the non-technical AI Safety interventions are, aside from semi-related things like working on AI strategy and policy (e.g. FHI's GovAI, The Partnership on AI) and advocating against shorter-term AI risks (e.g. the Future of Life Institute's work on Lethal Autonomous Weapons Systems).

Just wanted to quickly flag: I think the more popular interpretation of the term "AI safety" refers to a broad landscape that includes AI policy/strategy as well as technical AI safety (the latter is also often referred to as "AI alignment").

AMA: Ajeya Cotra, researcher at Open Phil

Apart from the biological anchors approach, what efforts in AI timelines or takeoff dynamics forecasting—both inside and outside Open Phil—are you most excited about?

AMA: Ajeya Cotra, researcher at Open Phil

Imagine you win $10B in a donor lottery. What sort of interventions—that are unlikely to be funded by Open Phil in the near future—might you fund with that money?

Promoting Effective Giving and Giving What We Can within EA Groups

Regarding the first potential change: it seems helpful to me (consider also changing "inclined" to "inclined/able"). Regarding the second one, I was not sure at first whether "resign" here means ceasing to follow through after having taken the pledge.

For both changes, consider wording them such that it's clear that the runway / financial-situation factors should be considered over a person's entire life (rather than just their current plans and financial situation), along with the substantial uncertainties involved.

Promoting Effective Giving and Giving What We Can within EA Groups

Hi Luke,

I recommend expanding the discussion in the "Things to be careful of" section. In particular, it seems worthwhile to estimate the impact of people in EA not having as much runway as they could have.

You mentioned that some people took The Pledge and did not follow through. It's important to also consider the downsides in situations where people do follow through despite regretting having taken The Pledge. People in EA are selected for scrupulousness, which probably correlates strongly with pledge-keeping. As an aside, maybe it's worth adding to The Pledge (or The Pledge 2.0?) some text making the obligation conditional on certain things (e.g. no unanticipated developments that would make the person regret taking the pledge).

How much does a vote matter?

I agree, provided one assumes that the number of people who are (roughly speaking) similar to oneself is sufficiently small.

How much does a vote matter?

The costs are higher for people who value the time of those whose decisions are correlated with theirs, while the benefits are not correspondingly higher.

[This comment is no longer endorsed by its author]
How much does a vote matter?

Wikipedia's entry on superrationality probably explains the main idea here better than I can.

Thoughts on whether we're living at the most influential time in history

I don’t make any claims about how likely it is that we are part of a very long future. Only that, a priori, the probability that we’re *both* in a very large future *and* one of the most influential people ever is very low. For that reason, there aren’t any implications from that argument to claims about the magnitude of extinction risk this century.

I don't understand why there are implications from that argument to claims about the magnitude of our influentialness either.

As an analogy, suppose Alice bought a lottery ticket that will win her $100,000,000 with an extremely small probability. The lottery is over, and she is now looking at the winning numbers on her phone, comparing them one by one to the numbers on her ticket. Her excitement grows as she finds more and more of the winning numbers on her ticket. She managed to verify that she got 7 numbers right (amazing!), but before she finished comparing the rest of the numbers, her battery died. She tries to find a charger, and in the meantime she's considering whether to donate the money to FHI if she wins. It occurs to her that the probability that *both* [a given person wins the lottery] *and* [donating $100,000,000 to FHI will reduce existential risk] is extremely small. She reasons that, sure, there are some plausible arguments that donating $100,000,000 to FHI will have a huge positive impact, but are those arguments strong enough considering her extremely small prior probability for the above conjunction?
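To make the arithmetic behind the analogy explicit, here is a minimal sketch in Python (all the numbers are made up for illustration and don't come from the analogy itself): even when the prior probability of the conjunction is tiny, once there is strong evidence for one conjunct (the matching numbers), the relevant question is just the conditional probability of the other conjunct, which the tiny prior on the conjunction tells us nothing about.

```python
# Illustrative, made-up numbers only.
p_win = 1e-8      # prior probability that a given ticket wins the lottery
p_impact = 0.1    # credence (given the object-level arguments) that the
                  # $100,000,000 donation would reduce existential risk

# Prior probability of the conjunction [win AND donation reduces x-risk],
# assuming for simplicity that the two events are independent.
p_conjunction = p_win * p_impact
print(f"Prior of the conjunction: {p_conjunction:.0e}")  # ~1e-09, extremely small

# But once Alice has strong evidence that she won (the matching numbers),
# the question she should ask is P(impact | win). Under independence this
# is just p_impact -- the tiny prior on the conjunction is beside the point.
p_impact_given_win = p_conjunction / p_win
print(f"P(impact | win) = {p_impact_given_win:.2f}")     # 0.10
```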
