Most of my stuff (even the stuff of interest to EAs) can be found on LessWrong: https://www.lesswrong.com/users/daniel-kokotajlo
Sorry for the delayed reply! I didn't notice this until now. Sure, I'd be happy to see your slides, thanks! Looking at your post on FAI and valence, reasons no. 3, 4, 5, and 9 seem somewhat plausible to me. I also agree that there might be philosophical path-dependencies in AI development, and that doing some of the initial work ourselves might help to discover them--but I feel like QRI isn't aimed at this directly and could achieve it much better if it were; if it happens, it'll be a side-effect of QRI's research.

For your flipped criticism:
--I think bolstering the EA community and AI risk communities is a good idea.
--I think "blue sky" research on global priorities, ethics, metaphilosophy, etc. is also a good idea if people seem likely to make progress on it.
--Obviously I think AI safety, AI governance, etc. are valuable.
--There are various other things that seem valuable because they support those things, e.g. trying to forecast the decline of collective epistemology and/or prevent it.
--There are various other things that don't impact AI safety but independently have a decently strong case that they are similarly important, e.g. ALLFED or pandemic preparedness.
--I'm probably missing a few things.

--My metaphysical uncertainty... If you mean how uncertain I am about various philosophical questions like what is happiness, what is consciousness, etc., then the answer is "very uncertain." But I think the best thing to do is not to try to think about it directly now, but rather to try to stabilize the world and get to the Long Reflection, so we can think about it longer and better later.
Thanks for the detailed engagement! Yep, that's roughly correct as a statement of my position. Thanks. I guess I'd put it slightly differently in some respects--I'd say something like: "A good test for whether to do some EA project is how likely it is to be within a few orders of magnitude of AI safety work in value. There will be several projects for which we can tell a not-too-implausible story for how they are close to as good as, or better than, AI safety work, and then we can let tractability/neglectedness/fit considerations convince us to do them. But if we can't even tell such a story in the first place, that's a pretty bad sign." The general thought is: AI safety is the "gold standard" to compare against, since it's currently the No. 1 priority in my book. (If something else were No. 1, it would be my gold standard.)

I think QRI actually can tell such a story; I just haven't heard it yet. In the comments it seems that a story like this was sketched; I'd be interested to hear it in more detail. I don't think the very abstract story of "we are trying to make good experiences but we don't know what experiences are" is plausible enough as a story for why this is close to as good as AI safety. (But I might be wrong about that too.)

re: A: Hmmm, fair enough that you disagree, but I have the opposite intuition.

re: B: Yeah, I think even the EA community underweights AI safety. I have loads of respect for people doing animal welfare and global poverty work, but it just doesn't seem nearly as important as preventing everyone from being killed (or worse) in the near future. It also seems much less neglected--most of the quality-adjusted AI safety work is being done by EA-adjacent people, whereas that's not true (I think?) for animal welfare or global poverty work.
As for tractability, I'm less sure how to make the comparison--it's obviously much more tractable to make SOME improvement to animal welfare or the lives of the global poor, but if we compare helping ALL the animals / ALL the global poor to AI safety, it actually seems less tractable (while still being less important and less neglected). There's a lot more to say about this topic, obviously; I worry I come across as callous or ignorant of various nuances... so let me just say I'd love to discuss it with you further and hear your thoughts.

re: D: I'm certainly pretty uncertain about the improving-collective-sanity thing. One reason I'm more optimistic about it than about QRI is that I see how it plugs into AI safety: if we improve collective sanity, that massively helps with AI safety, whereas if we succeed at understanding consciousness better, how does that help with AI safety? (QRI seems to think it does; I just don't see it yet.) Therefore sanity-improvement can be thought of as similarly important to AI safety (or alternatively as a kind of AI safety intervention), and the remaining question is how tractable and neglected it is. I'm unsure, but one thing that makes me optimistic about tractability is that we don't need to improve the sanity of the entire world, just a few small parts of it--most importantly, our community, but also certain AI companies and (maybe) governments. And even if all we do is improve the sanity of our own community, that already has a substantially positive effect on AI safety, since so much of AI safety work comes from our community. As for neglectedness, yeah, IDK. Within our community there is already a lot of focus on good epistemology and such, so maybe the low-hanging fruit has been picked. But subjectively I get the impression that there are still good things to be doing--e.g.
trying to forecast how collective epistemology in the relevant communities could change in the coming years, building up new tools (such as Guesstimate or Metaculus) ...
Good question. Here are my answers:
Yes, thanks! Some follow-ups:

1. To what extent do some journalists use the Chinese Robber Fallacy deliberately--they know that they have a wide range of even-worse, even-bigger tragedies and scandals to report on, but they choose to report on the ones that let them push their overall ideology or political agenda? (And they choose not to report on the ones that seem to undermine or distract from their ideology/agenda.)

2. Do you agree with the "the parity inverse of a meme is the same meme at a different point in its life cycle" idea? In other words, do you agree with the "Toxoplasma of Rage" thesis?
I currently think consciousness research is less important/tractable/neglected than AI safety, AI governance, and a few other things. The main reason is that it seems to be something we can "punt to the future" or "defer to more capable successors" to a large extent. However, I might be wrong about this. I haven't talked to QRI at sufficient length to truly evaluate their arguments. (See this exchange, which is about all I've got.)
Thanks for doing this -- I'm a big fan of your book! I'm interested to hear what you think this post about how media works gets right and gets wrong. In particular: (1)
A common misconception about propaganda is the idea that it comes from deliberate lies (on the part of media outlets) or from money changing hands. In my personal experience colluding with the media, no money changes hands and no (deliberate) lies are told by the media itself. ... Most media bias actually takes the form of selective reporting. ... Combine the Chinese Robbers Fallacy with a large pool of uncurated data and you can find facts to support any plausible thesis.
Even when a news outlet is broadcasting a lie, their government is unlikely to prosecute them for promoting official government policy. Newspapers abnegate responsibility for truth by quoting official sources. You can get away with (legally) straight-up lying about medical facts if you are quoting the CDC.
News outlets' unquestioning reliance on official sources comes from the economics of their situation. It is cheaper to republish official statements without questioning them. The news outlet which produces the cheapest news outcompetes outlets with higher expenditure.
The parity inverse of a meme is the same meme—at a different phase in its lifecycle. Two-sided conflicts are extremely virulent memes because they co-opt potential enemies.
Media bias is not a game of science. It is a game of memetics. Memetics isn't about truth. It is about attention. Ask yourself "What are you thinking about and why are you thinking about it?"
Whoa, Lesswrong beats SSC? That surprises me.
Update: The draft I mentioned is now a post!
I wonder if you think the EA community is too slow to update its strategies here. It feels like what is coming is easily among the most difficult things humanity has ever had to get right, and we could be doing much more if we all took current TAI forecasts more into account.
You guessed it -- I believe that most of EA's best and brightest will end up having approximately zero impact (compared to what they could have had) because they are planning for business-as-usual. The twenties are going to take a lot of people by surprise, I think. Hopefully EAs working their way up the academic hierarchy will at least be able to redirect prestige/status towards those who have been building up expertise in AI safety and AI governance, when the time comes.
I think that if I were going on outside-view economic arguments I'd probably be <50% singularity by 2100.
To what extent is this a repudiation of Roodman's outside-view projection? My guess is you'd say something like "This new paper is more detailed and trustworthy than Roodman's simple model, so I'm assigning it more weight, but still putting a decent amount of weight on Roodman's being roughly correct and that's why I said <50% instead of <10%."