kokotajlod

Most of my stuff (even the stuff of interest to EAs) can be found on LessWrong: https://www.lesswrong.com/users/daniel-kokotajlo

Sequences

Tiny Probabilities of Vast Utilities: A Problem for Longtermism?
What to do about short timelines?

Comments

Qualia Research Institute: History & 2021 Strategy

Sorry for the delayed reply! Didn't notice this until now.

Sure, I'd be happy to see your slides, thanks! Looking at your post on FAI and valence, reasons no. 3, 4, 5, and 9 seem somewhat plausible to me. I also agree that there might be philosophical path-dependencies in AI development and that doing some of the initial work ourselves might help to discover them--but I feel like QRI isn't aimed at this directly and could achieve this much better if it were; as it stands, any such discovery would be a side-effect of QRI's research.

For your flipped criticism: 

--I think bolstering the EA community and AI risk communities is a good idea
--I think "blue sky" research on global priorities, ethics, metaphilosophy, etc. is also a good idea if people seem likely to make progress on it
--Obviously I think AI safety, AI governance, etc. are valuable
--There are various other things that seem valuable because they support those things, e.g. trying to forecast the decline of collective epistemology and/or prevent it.
--There are various other things that don't impact AI safety but independently have a decently strong case that they are similarly important, e.g. ALLFED or pandemic preparedness.
--I'm probably missing a few things.
--My metaphysical uncertainty... If you mean how uncertain am I about various philosophical questions like what is happiness, what is consciousness, etc., then the answer is "very uncertain." But I think the best thing to do is not to try to think about it directly now, but rather to try to stabilize the world and get to the Long Reflection so we can think about it longer and better later.

Qualia Research Institute: History & 2021 Strategy

Thanks for the detailed engagement!

Yep, that's roughly correct as a statement of my position. Thanks. I guess I'd put it slightly differently in some respects -- I'd say something like "A good test for whether to do some EA project is how likely it is to be within a few orders of magnitude of AI safety work in terms of goodness. There will be several projects for which we can tell a not-too-implausible story for how they are close to as good as or better than AI safety work, and then we can let tractability/neglectedness/fit considerations convince us to do them. But if we can't even tell such a story in the first place, that's a pretty bad sign." The general thought is: AI safety is the "gold standard" to compare against, since it's currently the No. 1 priority in my book. (If something else were No. 1, it would be my gold standard.)

I think QRI actually can tell such a story; I just haven't heard it yet. It seems a story like this was sketched in the comments, and I would be interested to hear it in more detail. I don't think the very abstract story of "we are trying to make good experiences but we don't know what experiences are" is plausible enough as an account of why this is close to as good as AI safety. (But I might be wrong about that too.)

re: A: Hmmm, fair enough that you disagree, but I have the opposite intuition.

re: B: Yeah, I think even the EA community underweights AI safety. I have loads of respect for people doing animal welfare stuff and global poverty stuff, but it just doesn't seem nearly as important as preventing everyone from being killed or worse in the near future. It also seems much less neglected--most of the quality-adjusted AI safety work is being done by EA-adjacent people, whereas that's not true (I think?) for animal welfare or global poverty stuff. As for tractability, I'm less sure how to make the comparison--it's obviously much more tractable to make SOME improvement to animal welfare or the lives of the global poor, but if we compare helping ALL the animals / ALL the global poor to AI safety, it actually seems less tractable (while still being less important and less neglected). There's a lot more to say about this topic obviously, and I worry I come across as callous or ignorant of various nuances... so let me just say I'd love to discuss this with you further and hear your thoughts.

re: D: I'm certainly pretty uncertain about the improving-collective-sanity thing. One reason I'm more optimistic about it than about QRI is that I can see how it plugs in to AI safety: if we improve collective sanity, that massively helps with AI safety, whereas if we succeed at understanding consciousness better, how does that help with AI safety? (QRI seems to think it does; I just don't see it yet.) Therefore sanity-improvement can be thought of as similarly important to AI safety (or alternatively as a kind of AI safety intervention), and the remaining question is how tractable and neglected it is. I'm unsure, but one thing that makes me optimistic about tractability is that we don't need to improve the sanity of the entire world, just a few small parts of it--most importantly, our community, but also certain AI companies and (maybe) governments. And even if all we do is improve the sanity of our own community, that has a substantially positive effect on AI safety already, since so much of AI safety work comes from our community. As for neglectedness, yeah, IDK. Within our community there is a lot of focus on good epistemology and stuff already, so maybe the low-hanging fruit has been picked. But subjectively I get the impression that there are still good things to be doing--e.g. trying to forecast how collective epistemology in the relevant communities could change in the coming years, or building up new tools (such as Guesstimate or Metaculus)...


Qualia Research Institute: History & 2021 Strategy

Good question. Here are my answers:

  1. I don't think I would say the same thing to every project discussed on the EA forum. I think for every non-AI-focused project I'd say something similar (why not focus instead on AI?), but the bit about how I didn't find QRI's positive pitch compelling was specific to QRI. I'm a philosopher, I love thinking about what things mean, but I think we've got to have a better story than "We are trying to make more good and less bad experiences, therefore we should try to objectively quantify and measure experience." Compare: Suppose it were WW2, 1939, and we were thinking of various ways to help the Allied war effort. An institute designed to study "what does war even mean anyway? What does it mean to win a war? Let's try to objectively quantify this so we can measure how much we are winning and optimize that metric" is not obviously a good idea. Like, it's definitely not harmful, but it wouldn't be top priority, especially if there are various other projects that seem super important, tractable, and neglected, such as preventing the Axis from getting atom bombs. (I think of the EA community's position with respect to AI as analogous to the position re atom bombs held by the small cohort of people in 1939 "in the know" about the possibility. It would be silly for someone who knew about atom bombs in 1939 to instead focus on objectively defining war and winning.)
  2. But yeah, I would say to every non-AI-related project something like "Will your project be useful for making AI go well? How?" And I think that insofar as one could do good work on both AI safety stuff and something else, one should probably choose AI safety stuff. This isn't because I think AI safety stuff is DEFINITELY the most important, merely that I think it probably is. (Also, I think it's more neglected AND tractable than many, though not all, of the alternatives people typically consider.)
  3. Some projects I think are still worth pursuing even if they don't help make AI go well. For example, bio risk, preventing nuclear war, improving collective sanity/rationality/decision-making, ... (lots of other things could be added; it all depends on tractability + neglectedness + personal fit). After all, maybe AI won't happen for many decades or even centuries. Or maybe one of those other risks is more likely to happen soon than it appears.
  4. Anyhow, to sum it all up: I agree that we shouldn't be super confident that AI is the most important thing. Depending on how broadly you define AI, I'm probably about 80-90% confident. And I agree that this means our community should explore a portfolio of ideas rather than just one. Nevertheless, I think even our community is currently less focused on AI than it should be, I think AI is the "gold standard," so to speak, that projects should compare themselves to, and moreover I think QRI in particular has not done much to argue for its case. (Compare with, say, ALLFED, which has a pretty good case IMO: there's at least a 1% chance of some sort of global agricultural shortfall prior to AI getting crazy, and by default this would mean terrible collapse and famine, but if we prepare for this possibility it could instead mean much better things (people and institutions surviving, maybe learning).)
  5. My criticism is not directly of QRI but of their argument as presented here. I expect that if I talked with them and heard more of their views, I'd hear a better, more expanded version of the argument that would be much more convincing. In fact I'd say 40% chance QRI ends up seeming better than ALLFED to me after such a conversation. For example, I myself used to think that consciousness research was really important for making AI go well. It might not be so hard to convince me to switch back to that old position.

AMA: Tom Chivers, science writer, science editor at UnHerd

Yes, thanks! Some follow-ups:

1. To what extent do some journalists use the Chinese Robber Fallacy deliberately -- they know that they have a wide range of even-worse, even-bigger tragedies and scandals to report on, but they choose to report on the ones that let them push their overall ideology or political agenda? (And they choose not to report on the ones that seem to undermine or distract from their ideology/agenda.)
2. Do you agree with the "The parity inverse of a meme is the same meme in a different point of its life cycle" idea? In other words, do you agree with the "Toxoplasma of Rage" thesis?

Consciousness research as a cause? [asking for advice]

I currently think consciousness research is less important/tractable/neglected than AI safety, AI governance, and a few other things. The main reason is that it totally seems to me to be something we can "punt to the future" or "defer to more capable successors" to a large extent. However, I might be wrong about this. I haven't talked to QRI at sufficient length to truly evaluate their arguments. (See this exchange, which is about all I've got.)

AMA: Tom Chivers, science writer, science editor at UnHerd

Thanks for doing this -- I'm a big fan of your book!

I'm interested to hear what you think this post about how media works gets right and gets wrong. In particular: (1)

A common misconception about propaganda is the idea it comes from deliberate lies (on the part of media outlets) or from money changing hands. In my personal experience colluding with the media no money changes hands and no (deliberate) lies are told by the media itself. ... Most media bias actually takes the form of selective reporting. ... Combine the Chinese Robbers Fallacy with a large pool of uncurated data and you can find facts to support any plausible thesis.

and (2)

Even when a news outlet is broadcasting a lie, their government is unlikely to prosecute them for promoting official government policy. Newspapers abnegate responsibility for truth by quoting official sources. You get away (legally) with straight-up lying about medical facts if you are quoting the CDC.

News outlets' unquestioning reliance on official sources comes from the economics of their situation. It is cheaper to republish official statements without questioning them. The news outlet which produces the cheapest news outcompetes outlets with higher expenditure.

and (3)

The parity inverse of a meme is the same meme—at a different phase in its lifecycle. Two-sided conflicts are extremely virulent memes because they co-opt potential enemies.

and (4)

Media bias is not a game of science. It is a game of memetics. Memetics isn't about truth. It is about attention. Ask yourself "What are you thinking about and why are you thinking about it?"

Fun with +12 OOMs of Compute

I wonder if you think the EA community is too slow to update their strategies here. It feels like what is coming is easily among the most difficult things humanity will ever have to get right, and we could be doing much more if we all took current TAI forecasts more into account.

You guessed it -- I believe that most of EA's best and brightest will end up having approximately zero impact (compared to what they could have had) because they are planning for business-as-usual. The twenties are going to take a lot of people by surprise, I think. Hopefully EAs working their way up the academic hierarchy will at least be able to redirect prestige/status towards those who have been building up expertise in AI safety and AI governance, when the time comes.

[Link post] Are we approaching the singularity?

I think that if I were going on outside-view economic arguments I'd probably be <50% singularity by 2100.

To what extent is this a repudiation of Roodman's outside-view projection? My guess is you'd say something like "This new paper is more detailed and trustworthy than Roodman's simple model, so I'm assigning it more weight, but still putting a decent amount of weight on Roodman's being roughly correct and that's why I said <50% instead of <10%."
