ofer

Send me anonymous feedback: https://docs.google.com/forms/d/1qDWHI0ARJAJMGqhxc9FHgzHyEFp-1xneyl9hxSMzJP0/viewform

Any type of feedback is welcome, including arguments that a post/comment I wrote is net negative.


Some quick info about me:

I have a background in computer science (BSc+MSc; my MSc thesis was in NLP and ML, though not in deep learning).

You can also find me on the AI Alignment Forum and LessWrong.

Feel free to reach out by sending me a PM here or on my website.


Comments

Is effective altruism growing? An update on the stock of funding vs. people

If Bostrom, a very high-status figure within longtermist EA, has really good donation opportunities to the tune of 1 million, I doubt it'd be unfunded.

Even 'very high-status figures within longtermist EA' can direct only a limited amount of funding, especially for requests that are speculative/weird/non-legible from the perspective of the relevant donors. I don't know what the bar for "really good donation opportunities" is, but the relevant comparison here is between the EV of that $1M in the hands of Bostrom and the EV of that $1M in the hands of other longtermism-aligned people.

Less importantly, you rely here on the assumption that being "a very high-status figure within longtermist EA" means you can influence a lot of funding, but the causal relationship may mostly go in the other direction. Bostrom (for example) probably got his high status in longtermist EA mostly from his influential work, not from being able to influence a lot of funding.

I also feel like there are similar analogous experiments made in the past where relatively low oversight grantmaking power was given to certain high-prestige longtermist EA figures (e.g. here and here). You can judge for yourself whether impact "several orders of magnitude higher" sounds right, personally I very much doubt it.

To be clear, I don't think my reasoning here applies generally to "high-prestige longtermist EA figures". That said, this conversation with you made me think about this some more, and my above claim now seems too strong to me (I added an EDIT block).

Is effective altruism growing? An update on the stock of funding vs. people

This seems like a fairly surprising claim to me, do you have a real or hypothetical example in mind?

Imagine that all the (roughly) longtermism-aligned people in the world participate in a "longtermism donor lottery" in which one of them will win $1M. My estimate is that the EV of that $1M, conditional on person X winning, is several orders of magnitude larger for X=[Nick Bostrom] than for almost any other value of X.

[EDIT: following the conversation here with Linch I thought about this some more, and I think the above claim is too strong. My estimate of the EV for many values of X is very non-robust, and I haven't tried to estimate the EV for all the relevant values of X. Also, potential interventions that increase the amount of longtermism-aligned funding should perhaps change my reasoning here.]

EDIT: Also I feel like in many such situations, such people should almost certainly become grantmakers!

Why? Do you believe in something analogous to the efficient-market hypothesis for EA grantmaking? What mechanism would cause that? Do grantmakers who make grants with higher-than-average EV tend to gain more and more influence over future grant funds at the expense of other grantmakers? Do people who appoint such high-EV grantmakers tend to gain more and more influence over future grantmaker appointments at the expense of other people who appoint grantmakers?

Is effective altruism growing? An update on the stock of funding vs. people

Personally, if given the choice between finding an extra person for one of these roles who’s a good fit or someone donating $X million per year, to think the two options were similarly valuable, X would typically need to be over three, and often over 10 (where this hugely depends on fit and the circumstances).

(Maybe you already think so, but...) it probably also depends a lot on the identity of the "someone" who is donating the $X million (even if we restrict the discussion to, say, potential donors who are longtermism-aligned). Some people may have a comparative advantage in donating effectively, such that the EV of their donation would be several orders of magnitude larger than the "average EV" of a donation of that amount.

All Possible Views About Humanity's Future Are Wild

The three critical probabilities here are Pr(Someone makes an epistemic mistake when thinking about their place in history), Pr(Someone believes they live at the HoH|They haven’t made an epistemic mistake), and Pr(Someone believes they live at the HoH|They’ve made an epistemic mistake).

I think the more decision-relevant probabilities involve "Someone believes they should act as if they live at the HoH" rather than "Someone believes they live at the HoH". Our actions may be much less important if, for example, 'this is all a dream/simulation'. We should make our decisions in the way we wish everyone-similar-to-us-across-the-multiverse would make their decisions.

As an analogy, suppose Alice finds herself getting elected as the president of the US. There are hundreds of millions of citizens in the US, so Alice reasons that it's far more likely that she is delusional than that she is actually the president of the US. Should she act as if she is the president of the US anyway, or instead spend her time trying to regain her grip on reality? The citizens want everyone in her situation to choose the former. It is critical to have a functioning president, and it does not matter if there are many delusional citizens who act as if they are the president. Their "mistake" does not matter. What matters is how the real president acts.
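To make that last point a bit more concrete, here is a rough expected-value sketch (the notation is mine, not from the original thread): let $k$ be the number of delusional citizens who believe they are the president, $V$ the value to the citizens of the real president actually doing the job, and $c$ the small cost incurred each time a delusional citizen acts out the role. If everyone in Alice's epistemic situation follows the policy "act as if you are the president", the total value is roughly

$$ V - k\,c , $$

which is large and positive as long as $V \gg k\,c$: the one real president acting correctly dominates the combined cost of the delusional citizens' "mistakes", which is why the citizens want everyone in Alice's situation to act rather than to second-guess themselves.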

The Centre for the Governance of AI is becoming a nonprofit

Related to the concern that I raised here: I recommend that interested readers listen to (or read the transcript of) this FLI podcast episode with Mohamed Abdalla about their paper: "The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity".

The Centre for the Governance of AI is becoming a nonprofit

Will GovAI in its new form continue to deal with the topic of regulation (i.e. regulation of AI companies by states)?

DeepMind is owned by Alphabet (Google). Many interventions related to AI regulation can affect Alphabet's stock price, which Alphabet is legally obligated to try to maximize (regardless of the good intentions that many senior executives there may have). If GovAI is co-led by a DeepMind employee, there is seemingly a severe conflict-of-interest issue regarding anything that GovAI does (or avoids doing) with respect to the topic of regulating AI companies.

GovAI's research agenda (which is currently linked to from their 'placeholder website') includes the following:

[...] At what point would and should the state be involved? What are the legal and other tools that the state could employ (or are employing) to close and exert control over AI companies? With what probability, and under what circumstances, could AI research and development be securitized--i.e., treated as a matter of national security--at or before the point that transformative capabilities are developed? How might this happen and what would be the strategic implications? How are particular private companies likely to regard the involvement of their host government, and what policy options are available to them to navigate the process of state influence? [...]

How will this part of the research agenda be influenced by GovAI being co-led by a DeepMind employee?

[Meta] Is it legitimate to ask people to upvote posts on this forum?
Answer by ofer, Jun 29, 2021

I think this method of "promoting a post" should be discouraged in the EA community.

The community's attention is a limited resource. Gaining more upvotes in an "artificial" way is roughly a zero-sum game with other writers on this forum, and it adds noise to the useful signal that the karma score provides. It also seems counterproductive in terms of fostering good coordination norms within the community.

EA Funds has appointed new fund managers

Committee members recused themselves from some discussions and decisions in accordance with our conflict of interest policy.

Is that policy public?

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

I'm not that aware of what the non-technical AI Safety interventions are, aside from semi-related things like working on AI strategy and policy (i.e. FHI's GovAI, The Partnership on AI) and advocating against shorter-term AI risks (i.e. Future of Life Institute's work on Lethal Autonomous Weapons Systems).

Just wanted to quickly flag: I think the more popular interpretation of the term "AI safety" refers to a broad landscape that includes AI policy/strategy as well as technical AI safety (the latter is also often referred to as "AI alignment").

AMA: Ajeya Cotra, researcher at Open Phil

Apart from the biological anchors approach, what efforts in AI timelines or takeoff dynamics forecasting—both inside and outside Open Phil—are you most excited about?
