David Johnston

Comments

What sort of substantial value would you expect to be added? It sounds like we either have a different belief about the value-add, or a different belief about the costs.

I'd be very surprised if the actual amount of big-picture strategic thinking at either organisation was "very little". I'd be less surprised if they didn't have a consensus view about big-picture strategy, or a clearly written document spelling it out. If I'm right, I think the current content is misleading-ish. If I'm wrong and actually little thinking has been done - there's some chance they say "we're focused on identifying and tackling near-term problems", which would be interesting to me given what I currently believe. If I'm wrong and something clear has been written, then making this visible (or pointing out its existence) would also be a useful update for me.

Polished vs sloppy

Here are some dimensions I think of as distinguishing sloppy from polished:

  • Vague hunches <-> precise theories
  • First impressions <-> thorough search for evidence/prior work
  • Hard <-> easy to understand
  • Vulgar <-> polite
  • Unclear <-> clear account of robustness, pitfalls and so forth

All else equal, I don't think the left side is epistemically superior. It can be faster, and that might be worth it, but there are obvious epistemic costs to relying on vague hunches, first impressions, failures of communication and overlooked pitfalls (politeness is perhaps neutral here). I think these costs are particularly high in, as you say, domains that are uncertain and disagreement-heavy.

I think it is sloppy to stay too close to the left if you think the issue is important and you have time to address it properly. You have to manage your time, but I don't think there are additional reasons to promote sloppy work.

You say that there are epistemic advantages to exposing thought processes, and you give the example of dialogues. I agree there are pedagogical advantages to exposing thought processes, but exposing thoughts clearly also requires polish, and I don't think pedagogy is a high priority most of the time. I'd be way more excited to see more theory from MIRI than more dialogues.

If my reasoning process is actually flawed, then I want other EAs to be aware of that, so they can have an accurate model of how much weight to put on my views.

I don't think it's realistic to expect Lightcone forums to do serious reviews of difficult work. That takes a lot of individual time and dedication; maybe you occasionally get lucky, but you should mostly expect not to.

I agree that I'm not a paradigmatic example of the EAs who most need to hear this lesson [of exposing the thought process]; but I think non-established EAs heavily follow the example set by established EAs, so I want to set an example that's closer to what I actually want to see more of.

Maybe I'll get into this more deeply one day, but I just don't think sharing your thoughts freely is a particularly effective way to encourage other people to share theirs. I think you've been pretty successful at getting the "don't worry about being polite to OpenAI" message across, less so the higher level stuff.

I don’t think this makes sense. Your group, in the EA community, regarding AI safety, gets taken seriously whatever you write. This is not the paradigmatic example of someone who feels worried about making public mistakes. A community that gives you even more leeway to do sloppy work is not one that encourages more people to share their independent thoughts about the problem. In fact, I think the reverse is true: when your criticisms carry a lot of weight even when they’re flawed, this has a stifling effect on people in more marginal positions who disagree with you.

If you want to promote more open discussion, your time would be far better spent seeking out flawed but promising work by lesser known individuals and pointing out what you think is valuable in it.

Am I correct in my belief that you are paid to do this work? If this is so, then I think the fact that you are both highly regarded and compensated for your time means your output should meet higher standards than a typical community post. Contacting the relevant labs is a step that wouldn’t take you much time, can’t be done by the vast majority of readers, and has a decent chance of adding substantial value. I think you should have done it.

We might just be talking past each other - I’m not saying this is a reason to be confident explosive growth won’t happen, and I agree it looks like growth could go much faster before hitting any limits like this. I just meant to say “here’s a speculative mechanism that could break some of the explosive growth models”.

I don’t think your summary is wrong as such, but it’s not how I think about it.

Suppose we’ve got great AI that, in practice, we still use with a wide variety of control inputs (“make better batteries”, “create software that does X”). Then it could be the case - if AI enables explosive growth in other domains - that “production of control inputs” becomes the main production bottleneck.

Alternatively, suppose there’s a “make me a lot of money” AI and money-making is basically about making stuff that people want to buy. You can sell more of the stuff people are already known to want - but that runs into the limit that people only want a finite amount of stuff. You could alternatively sell new stuff that people want but don’t know it yet. This is still limited by the number of people in the world, how often each of them is willing to consider adopting a new technology, which things someone with life history X is actually likely to adopt, and how long it takes them to make that decision. These things seem unlikely to scale indefinitely with AI capability.

This could be defeated by either money not being about making stuff people want - which seems fairly likely, but in this case I don’t really know what to think - or AI capability leading to (explosive?) human population expansion.
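To make the bottleneck intuition concrete, here is a toy sketch (entirely my own illustration, with made-up growth rates and a deliberately extreme min() production function, not any published growth model): if output needs both an AI-scalable input and a slowly growing human adoption/decision input, overall growth eventually falls to the rate of the slow input.

```python
# Toy sketch, not a real growth model: output requires both an
# AI-scalable input A and a slowly growing "human adoption/decision"
# input H. With very low substitutability (an extreme min() here),
# growth eventually tracks whichever input grows slowest.

A, H = 1.0, 100.0          # assumed starting levels (arbitrary)
g_A, g_H = 0.50, 0.02      # assumed growth rates: 50%/period vs 2%/period

prev_output = min(A, H)
for t in range(1, 31):
    A *= 1 + g_A
    H *= 1 + g_H
    output = min(A, H)     # Leontief-style bottleneck
    growth = output / prev_output - 1
    prev_output = output
    if t % 10 == 0:
        print(f"t={t:2d}: growth this period = {growth:.1%}")

# Prints ~50% growth at t=10 (A is still the binding input),
# then ~2% at t=20 and t=30 once the human-side input H binds.
```

The numbers are arbitrary; the point is just that once the slowly growing input binds, the fast-growing one stops driving aggregate growth.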

In defence of this not being completely wild speculation: advertising already comprises a nontrivial fraction of economic activity and seems to be growing faster than other sectors https://www.statista.com/statistics/272443/growth-of-advertising-spending-worldwide/

(Although only a small fraction of advertising is promoting the adoption of new tech)

One objection to the “more AI -> more growth” story is that it’s quite plausible that people will still participate in an AI-driven economy to the extent that they decide what they want, and this could be a substantial bottleneck on growth rates. Speeds of technological adoption do seem to have increased (https://www.visualcapitalist.com/rising-speed-technological-adoption/), but that doesn’t necessarily mean they can indefinitely keep pace with AI-driven innovation.

I haven’t looked in detail at how GiveWell evaluates evidence, so maybe you’re no worse here, but I don’t think a “weighted average of published evidence” is appropriate when one has concerns about the quality of the published evidence. Furthermore, I think some level of concern about the quality of published evidence should be one’s baseline position - i.e. a weighted average is only appropriate when there are unusually strong reasons to think the published evidence is good.
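As a minimal illustration of the concern (a toy example of my own with made-up numbers, not a description of how GiveWell actually aggregates evidence): if the published studies share a common bias, an inverse-variance weighted average gets more confident as studies accumulate without getting any closer to the truth.

```python
# Toy example with made-up numbers, not GiveWell's actual method:
# if every published study shares a common bias, pooling them shrinks
# the sampling noise but leaves the bias untouched.
import random

random.seed(0)
true_effect = 0.0
shared_bias = 0.30         # assumed common bias (e.g. publication/selection effects)
n_studies = 25
study_se = 0.10            # assumed standard error of each study

estimates = [true_effect + shared_bias + random.gauss(0, study_se)
             for _ in range(n_studies)]

# With equal standard errors, inverse-variance weighting is a plain mean.
pooled = sum(estimates) / n_studies
pooled_se = study_se / n_studies ** 0.5

print(f"true effect:     {true_effect:.2f}")
print(f"pooled estimate: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")

# The interval is tight (about +/- 0.04) but centred near 0.30, not 0.00:
# adding studies makes the estimate more precise, not more correct.
```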

I’m broadly supportive of the project of evaluating impacts on happiness.

Eliezer’s threat model is “a single superintelligent algorithm with at least a little bit of ability to influence the world”. In this sentence, the word “superintelligent” cannot mean intelligence in the sense of definition 2, or else it is nonsense - definition 2 precludes “small or no ability to influence the world”.

Furthermore, in recent writing Eliezer has emphasised threat models that mostly leverage cognitive abilities (“intelligence 1”), such as a superintelligence that manipulates someone into building a nanofactory using existing technology. Such scenarios illustrate that intelligence 2 is not necessary for AI to be risky, and I think Eliezer deliberately chose these scenarios to make just that point.

One slightly awkward way to square this with the second definition you link is to say that Yudkowsky uses definition 2 to measure intelligence, but is also very confident that high cognitive abilities are sufficient for high intelligence and therefore doesn’t always see a need to draw a clear distinction between the two.

I want to add: I've had a few similar experiences of being rudely dismissed where the person doing the rude dismissing was just wrong about the issue at hand. I mean, you, dear reader, obviously don't know whether they were wrong or I was wrong, but that's the conclusion I drew.

Furthermore, I think Gell-Mann amnesia is relevant here: the reason I'm so confident that my counterpart was wrong in these instances is because I happened to have a better understanding of the particular issues - but for most issues I don't have a better understanding than most other people. So this might be more common than my couple of experiences suggest.

I've had a roughly equal number of good experiences working with EAs, and overwhelmingly good experiences at conferences (EAGx Australia only).

As a brief addendum, I imagine that in the non-fraudulent world, Sam’s net worth is substantially smaller. So maybe the extremely fast growth of his wealth should itself be regarded with suspicion?

One counterfactual I think is worth considering: had Sam never loaned customer deposits to Alameda, how do you think everyone should have acted?

Had the loans never happened, FTX would still have been engaged in some fairly disreputable business, Sam would still have had a wildly high appetite for risk, and just about all of the "red flags" people bring up would still have been there. However, even if this was all common knowledge, my best guess is that most people would've readily endorsed continuing to work with FTX and would not have endorsed making bureaucratic requirements too onerous for FTX-funded projects. I think, even in this counterfactual, it might still have made sense to insist on FTX improving their governance before further scaling up their engagement with EA (and perhaps a few other things too).

I suspect that factually, whatever people reasonably could have known was most likely limited to "disreputable business and red flags", not that the loans to Alameda had happened. Furthermore, I doubt anyone even had particularly good reason to think FTX might be engaged in outright fraud on this scale - I think crypto exchanges go bust for non-fraudulent reasons much more often than for fraudulent ones. For these reasons, I suspect that while there are improvements to be made, they probably won't amount to drastic changes. I also suspect that, despite numerous negative signs about FTX, even insiders would have been justified in placing relatively little credence in things playing out the way they have.
