Researcher at MIRI. http://shlegeris.com/
I mean on average; obviously you're right that our opinions are correlated. Do you think there's anything important about this correlation?
You say that the impact/scale of COVID is "huge". I think this might mislead people who are used to thinking about the problems EAs think about. Here's why.
I think COVID is probably going to cause on the order of 100 million DALYs this year, based on predictions like this; I think that 50-95% of the damage ever done by COVID will be done this year. On the scale that 80,000 Hours uses to assess the scale of problems, this would be ranked as importance level 11 or so.
I think this is lower than most things EAs consider working on or funding. For example, according to this scale (which is logarithmic), health in poor countries is 100 times more important than COVID.
So given that COVID seems likely to be between 100x and 10000x less important than the main other cause areas EAs think about, I think it's misleading to describe its scale as "huge".
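To make the log-scale comparison above concrete, here's a minimal sketch of how level gaps translate into importance ratios. The level numbers are assumptions for illustration: COVID at level 11 comes from the comment above, and level 13 for health in poor countries is just what the "100 times more important" claim would imply on a base-10 scale.

```python
def scale_ratio(level_a: int, level_b: int) -> int:
    """Importance ratio implied by a gap on a base-10 logarithmic scale:
    each level is 10x, so a gap of d levels is a factor of 10**d."""
    return 10 ** abs(level_a - level_b)

# Hypothetical levels for illustration: health in poor countries (13) vs COVID (11)
print(scale_ratio(13, 11))  # 2-level gap -> 100x
print(scale_ratio(15, 11))  # 4-level gap -> 10000x
```

This is why a seemingly small difference in scale scores (2 to 4 levels) corresponds to the "100x and 10000x" range above.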
I'm interested in betting about whether 20% of EAs think psychedelics are a plausible top EA cause area. E.g., we could sample 20 EAs from some group and ask them. Perhaps we could ask random attendees from last year's EAG, or do a poll in EA Hangout.
I think that it's important for EA to have a space where we can communicate efficiently, rather than phrase everything for the benefit of newcomers who might be reading, so I think that this is bad advice.
I'd prefer something like the weaker and less clear statement "we **can** think ahead, and it's potentially valuable to do so even given the fact that people might try to figure this all out later".
I think your summary of crux three is slightly wrong: I didn’t say that we need to think about it ahead of time, I just said that we can.
Yeah, for the record I also think those are pretty plausible and important sources of impact for AI safety research.
I think that either way, it’s useful for people to think about which of these paths to impact they’re going for with their research.
My guess is I consider the activities you mentioned less valuable than you do. Probably the difference is largest for programming at MIRI and smallest for Hubinger-style AI safety research. (This would probably be a bigger discussion.)
I don't think that peculiarities of what kinds of EA work we're most enthusiastic about lead to much of the disagreement. When I imagine myself taking on various different people's views about what work would be most helpful, most of the time I end up thinking that valuable contributions could be made to that work by sufficiently talented undergrads.
Independent of this, my guess would be that EA does have a decent number of unidentified people who would be about as good as people you've identified. E.g., I can think of ~5 people off the top of my head who I think might be great at one of the things you listed, and if I shared your view on their value I'd probably think they should stop doing what they're doing now and switch to trying one of these things. And I suspect that if I thought hard about it, I could come up with 5-10 more people - and then there is the large number of people neither of us has any information about.
I am pretty skeptical of this. E.g., I suspect that people like Evan (sorry, Evan, if you're reading this, for using you as a running example) are extremely unlikely to remain unidentified, because one of the things they do is think about things in their own time and put the results online. Could you name a profile of such a person, and say which of the types of work I named you think they'd maybe be as good at as the people I named?
It might be quite relevant whether "great people" refers only to talent, or also to beliefs and values/preferences.
I am not intending to include beliefs and preferences in my definition of "great person", except for preferences/beliefs like not being very altruistic, which I do count.
E.g. my guess is that there are several people who could be great at functional programming who either don't want to work for MIRI, or don't believe that this would be valuable. (This includes e.g. myself.)
I think my definition of great might be a higher bar than yours, based on the proportion of people who I think meet it? (To be clear I have no idea how good you'd be at programming for MIRI because I barely know you, and so I'm just talking about priors rather than specific guesses about you.)
For what it's worth, I think you're not giving enough credence to the possibility that the person you talked to actually disagreed with you--I think you might be doing that thing whose name I forget where you steelman someone into saying the thing you think instead of the thing they think.
For the problems-that-solve-themselves arguments, I feel like your examples have very "good" qualities for solving themselves: both personal and economic incentives work against them, they are obvious once one is confronted with the situation, and at the point where the problems become obvious, you can still solve them. I would argue that not all of these properties hold for AGI. What are your thoughts on that?
I agree that it's an important question whether AGI has the right qualities to "solve itself". To go through the ones you named:
I'm not quite sure how high your bar is for "experience", but many of the tasks that I'm most enthusiastic about in EA are ones which could plausibly be done by someone in their early 20s who, e.g., just graduated university. Various tasks of this type:
My guess is that EA does not have a lot of unidentified people who are as good at these things as the people I've identified.
I think that the "EA doesn't have enough great people" problem feels more important to me than the "EA has trouble using the people we have" problem.