All of EliezerYudkowsky's Comments + Replies

FTX EA Fellowships

If I ended up spending some time in the Bahamas this year, do you have a guess as to when would be the optimal time for that?

List of EA funding opportunities

Can you put something on here to the effect of: "Eliezer Yudkowsky continues to claim that anybody who comes to him with a really good AGI alignment idea can and will be funded."

I'm finding this difficult to interpret - I can't find a way of phrasing my question without it seeming snarky, but no snark is intended.

One reading of this offer looks something like:

if you have an idea which may enable some progress, it's really important that you be able to try and I'll get you the funding to make sure you do

Another version of this offer looks more like:

I expect basically never to have to pay out because almost all ideas in the space are useless, but if you can convince me yours is the one thing that isn't useless I guess I'll get y

... (read more)
Towards a Weaker Longtermism

It strikes me as a fine internal bargain for some nonhuman but human-adjacent species; I would not expect the internal parts of a human to be able to abide well by that bargain.

Towards a Weaker Longtermism

There’s nothing convoluted about it! We just observe that historical experience shows that the supposed benefits never actually appear, leaving just the atrocity! That’s it! That’s the actual reason you know the real result would be net bad and therefore you need to find a reason to argue against it! If historically it worked great and exactly as promised every time, you would have different heuristics about it now!

Towards a Weaker Longtermism

The final conclusion here strikes me as just the sort of conclusion that you might arrive at as your real bottom line, if in fact you had arrived at an inner equilibrium between some inner parts of you that enjoy doing something other than longtermism, and your longtermist parts.  This inner equilibrium, in my opinion, is fine; and in fact, it is so fine that we ought not to need to search desperately for a utilitarian defense of it.  It is wildly unlikely that our utilitarian parts ought to arrive at the conclusion that the present weighs abo... (read more)

This is crazy, and I think it makes a lot more sense to just admit that part of you cares about galaxies and part of you cares about ice cream, and say that neither of these parts is going to be suppressed and beaten down inside you.

Have you read Is the potential astronomical waste in our universe too small to care about? which asks the question, should these two parts of you make a (mutually beneficial) deal/bet while being uncertain of the size of (the reachable part of) the universe, such that the part of you that cares about galaxies gets more votes... (read more)

5 Davidmanheim (4mo): Agreed - upon reflection, this was what wrote my bottom line, and yes, this seemed like essentially the only viable way of approaching longtermism, according to my intuitions. This also seems to match the moral intuitions of many people I have spoken with, given the various issues with the alternatives. And I didn't try to claim that 50% specifically was justified by anything - as you pointed out, almost any balance of short-termism and longtermism could be an outcome of what many humans actually embrace, but as I argued, if we are roughly utilitarian in each context with those weights, the different options lead to very similar conclusions in most contexts.

If we are willing to be utilitarian by weighting across these two preferences, I believe that any one such weighting will lead to a coherent preference ordering - which is valuable if we don't want to be Dutch booked, among other things. But I don't think it's in some way more correct to start with "time-impartial utilitarianism is the correct objective morality" and ignore actual human intuitions about what we care about - which you seem to imply is the single coherent longtermist position, while my approach is only justified by preventing analysis paralysis - but perhaps I misunderstood.

Are there two different proposals?

  1. Construct a value function = 0.5 * (near-term value) + 0.5 * (far-future value), and do what seems best according to that function.
  2. Spend 50% of your energy on the best longtermist thing and 50% on the best neartermist thing. (Or as a community, half of people do each.)
     

I think Eliezer is proposing (2), but David is proposing (1). Worldview diversification seems more like (2).

I have an intuition these lead different places – would be interested in thoughts.

Edit: Maybe if 'energy' is understood as 'votes from your parts' then (2) ends up the same as (1).
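To make the difference concrete, here is a toy sketch with made-up numbers, assuming "energy" just means a divisible budget; the option names and values are purely hypothetical, chosen only so the two procedures can be compared side by side:

```python
# Toy comparison of the two proposals. All numbers are hypothetical
# "value per unit of resources" figures, chosen only for illustration.
options = {
    "best_neartermist_thing": {"near": 10.0, "far": 0.1},
    "best_longtermist_thing": {"near": 0.1, "far": 50.0},
    "middling_thing":         {"near": 4.0,  "far": 20.0},
}
budget = 1.0  # total resources, normalized

# Proposal (1): score every option with V = 0.5*near + 0.5*far,
# then put the whole budget on the argmax.
blended = {name: 0.5 * v["near"] + 0.5 * v["far"] for name, v in options.items()}
allocation_1 = {max(blended, key=blended.get): budget}

# Proposal (2): spend half the budget on the best option by near-term
# value and half on the best option by far-future value.
best_near = max(options, key=lambda n: options[n]["near"])
best_far = max(options, key=lambda n: options[n]["far"])
allocation_2 = {}
allocation_2[best_near] = allocation_2.get(best_near, 0.0) + 0.5 * budget
allocation_2[best_far] = allocation_2.get(best_far, 0.0) + 0.5 * budget

print(allocation_1)  # {'best_longtermist_thing': 1.0}
print(allocation_2)  # {'best_neartermist_thing': 0.5, 'best_longtermist_thing': 0.5}
```

With these invented numbers the two procedures genuinely come apart: (1) concentrates everything on whichever single option wins under the blended weights, while (2) always splits across the near-term and far-future winners, which looks more like worldview diversification.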

Towards a Weaker Longtermism

The reason we have a deontological taboo against “let’s commit atrocities for a brighter tomorrow” is not that people have repeatedly done this, it worked exactly like they said it would, and millions of people received better lives in exchange for thousands of people dying unpleasant deaths exactly as promised.

The reason we have this deontological taboo is that the atrocities almost never work to produce the promised benefits. Period. That’s it. That’s why we normatively should have a taboo like that.

(And as always in a case like that, we have historic... (read more)

7 Davidmanheim (4mo): Agreed, and that's a very good response to a position that one of the sides I critiqued has presented. But despite this and other reasons to reject their positions, I don't think the reverse theoretical claim that we should focus resources exclusively on longtermism is a reasonable one to hold, even while accepting the deontological taboo and dismissing those overwrought supposed fears.
Taboo "Outside View"

I worriedly predict that anyone who followed your advice here would just switch to describing whatever they're doing as "reference class forecasting" since this captures the key dynamic that makes describing what they're doing as "outside viewing" appealing: namely, they get to pick a choice of "reference class" whose samples yield the answer they want, claim that their point is in the reference class, and then claim that what they're doing is what superforecasters do and what Philip Tetlock told them to do and super epistemically virtuous and anyone wh... (read more)

Good point, I'll add analogy to the list. Much that is called reference class forecasting is really just analogy, and often not even a good analogy.

I really think we should taboo "outside view." If people are forced to use the term "reference class" to describe what they are doing, it'll be more obvious when they are doing epistemically shitty things, because the term "reference class" invites the obvious next questions: 1. What reference class? 2. Why is that the best reference class to use?

All those experimental results on people doing well by using the outside view are results on people drawing a new sample from the same bag as previous samples.  Not "arguably the same bag" or "well it's the same bag if you look at it this way", really actually the same bag: how late you'll be getting Christmas presents this year, based on how late you were in previous years.

Hmm, I'm not convinced that this is meaningfully different in kind rather than degree. You aren't predicting a randomly chosen holdout year, so saying that 2021 is from the same distri... (read more)
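For what it's worth, a minimal sketch of the "same bag" case being pointed at above (the lateness numbers are invented): when the reference class is literally previous draws of the same variable, the forecast is just the empirical distribution of those draws, with no judgment call about which class to use.

```python
import statistics

# Hypothetical record of how many days late the Christmas presents were
# in each previous year -- repeated draws of the same variable, i.e. the
# same bag you are about to draw from again.
lateness_days = [2, 0, 5, 3, 4, 1, 6, 2]

# The reference-class forecast for this year is read straight off the
# empirical distribution of the previous draws.
deciles = statistics.quantiles(lateness_days, n=10)
print("point forecast (median):", statistics.median(lateness_days), "days late")
print("rough 80% interval:", deciles[0], "to", deciles[-1], "days late")
```

The contested cases are the ones where the contents of lateness_days are themselves up for debate: deciding which past events count as "previous years" is exactly where the choice of reference class can be bent toward the answer you want.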

Two Strange Things About AI Safety Policy

The idea of running an event in particular seems misguided. Conventions come after conversations. Real progress toward understanding, or conveying understanding, does not happen through speakers going On Stage at big events. If speakers On Stage ever say anything sensible, it's because an edifice of knowledge was built in the background out of people having real, engaged, and constructive arguments with each other, in private where constructive conversations can actually happen, and the speaker On Stage is quoting from that edifice.

(This is also true of... (read more)

0 ZachWeems (4y): "...having a Big Event with people On Stage is just a giant opportunity for a bunch of people new to the problem to spout out whatever errors they thought up in the first five seconds of thinking, neither aware of past work nor expecting to engage with detailed criticism..." I had to go back and double-check that this comment was written before Asilomar 2017. It describes some of the talks very well.
0 turchin (5y): One way to have interesting conversations is to have them over dinner between the public talks at a conference. The most interesting part of a conference is the informal connection between people during breaks and in the evenings; the conference itself is just an occasion to collect the right people together and frame the topic. So such a conference may help to connect national security people and AI safety people. But my feeling from previous conversations is that the current wisdom among AI people is that government people are unable to understand their complex problems and are not players in the game of AI creation - only hackers and corporations are. I don't think that is the right approach.

These seem like reasonable points in isolation, but I'm not sure they answer the first question as actually posed. In particular:

  1. Why would it necessarily be 'a bunch of people new to the problem [spouting] whatever errors they've thought up in the first five seconds of thinking'? Jay's spectrum of suggestions was wide and included a video or podcast. With that kind of thing there would appear to be ample scope to either have someone experienced with the problem doing the presenting or it could be reviewed by the people with relevant expertise before bein

... (read more)
4 John_Maxwell (5y): Are these recommendations based on sound empirical data (e.g. a survey of AI researchers who've come to realize AI risk is a thing, asking them what they were exposed to and what they found persuasive), or just guessing/personal observation?

If persuasive speaking is an ineffective way of spreading concern for AI risk, then we live in one of two worlds. In the first world, the one you seem to imply we live in, persuasive speaking is ineffective for most things, and in particular it's ineffective for AI risk. In this world, I'd expect training in persuasive speaking (whether at a 21st century law school or an academy in Ancient Greece) to be largely a waste of time. I would be surprised if this is true. The only data I could find offhand related to the question is from Robin Hanson [http://www.overcomingbias.com/2009/04/why-refuse-to-debate.html]: "The initially disfavored side [in a debate] almost always gains a lot... my guess is that hearing half of a long hi-profile argument time devoted to something makes it seem more equally plausible."

In the second world, public speaking is effective persuasion in at least some cases, but there's something about this particular case that makes public speaking a bad fit. This seems more plausible, but it could also be a case of ineffective speakers or an ineffective presentation. It's also important to have good measurement methods: for example, if most post-presentation questions offer various objections, it's still possible that your presentation was persuasive to the majority of the audience.

I'm not saying all this because I think events are a particularly promising way to persuade people here. Rather, I think this issue is important enough that our actions should be determined by data whenever it's possible. (Might be worthwhile to do that survey if it hasn't been done already.) I also think the burden of proof for a strategy focused primarily on personal conversations should be really high. Personal conversations ar

So do you think other kinds of non-event programming could be useful? Like an in-depth blog post, or a podcast episode?

The history of the term 'effective altruism'

There are only so many things you can call it, and accidental namespace collisions / phrase reinventions aren't surprising. I was surprised when I looked back myself and noticed the phrase was there, so it would be more surprising if Toby Ord remembered than if he didn't. I'm proud to have used the term "effective altruist" once in 2007, but to say that this means I coined the term, especially when it was re-output by the more careful process described above, might be giving me too much credit. Still, it's nice to have this not-quite-coincidental mention be remembered, so thank you for that!