Davidmanheim

Head of Research and Policy @ ALTER - Association for Long Term Existence and Resilience
6769 karma · Joined Oct 2018 · Working (6-15 years)

Participation (4)

  • Received career coaching from 80,000 Hours
  • Attended more than three meetings with a local EA group
  • Completed the AGI Safety Fundamentals Virtual Program
  • Completed the In-Depth EA Virtual Program

Sequences (2)

Deconfusion and Disentangling EA
Policy and International Relations Primer

Comments (785)

Perhaps worth noting that very-long-term exponential discounting is even more obviously wrong because of light-speed limits and the finite mass available to us, which together bound long-term achievable wealth - at that point, discounting should be based on polynomial (cubic) growth rather than exponential growth. And around 100,000-200,000 years out, it gets far worse, once we've saturated the Milky Way.
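To make the light-cone point concrete, here is a rough sketch in a Ramsey-style framing (my own illustrative assumptions, not part of the original comment: zero pure time preference, elasticity of marginal utility \(\eta\), usable mass-energy density \(\rho\)):

```latex
% Sketch under the stated assumptions: a light-speed bound implies a
% polynomially (not exponentially) shrinking growth-based discount factor.
\[
  W(t) \;\le\; \tfrac{4}{3}\pi \rho\,(ct)^3
  \quad\Rightarrow\quad
  g(t) \;=\; \frac{\dot W(t)}{W(t)} \;\le\; \frac{3}{t},
\]
\[
  D(t) \;=\; \exp\!\Big(-\int_{t_0}^{t} \eta\, g(s)\,ds\Big)
  \;\ge\; \Big(\frac{t_0}{t}\Big)^{3\eta}.
\]
```

That is, once achievable growth is capped by the volume of the reachable light cone, a constant exponential discount factor \(e^{-\delta t}\) loses its growth-based justification; the implied discount factor falls off only polynomially.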

The reason it seems reasonable to view the future 1,000,010 years from now as almost exactly as uncertain as 1,000,000 years from now is mostly myopia. To analogize: is the ground 1,000 miles west of me more or less uneven than the ground 10 miles west of me? Maybe, maybe not - but I have a better idea of my near surroundings, so they seem more known. For the long-term future, we don't have much confidence in our projections of either a million or a million and ten years, but it seems hard to understand why all the relevant uncertainties would simply go away, other than our simply not being able to achieve any degree of resolution at that distance. (Unless we're extinct, in which case, yeah.)

To embrace this as a conclusion, you also need to fairly strongly buy total utilitarianism across the future light cone, as opposed to any understanding of the future, and the present, on which humanity as a species doesn't change much in value just because there are more people. (Not that I think either view is obviously wrong - but total utilitarianism is so generally assumed in EA that it often goes unnoticed, and it's very much not a widely shared view among philosophers or the public.)

I misunderstood, perhaps. Audit rates are primarily a function of funding - so marginal funding goes directly to those audits, because they are the most important. But if the US government weren't being insane, it would fund audits until the marginal societal cost of an audit was roughly equal to the marginal revenue to the state.
The reason I thought this disagreed with your point is that I thought you were disagreeing with the earlier claim that "This is going to lead to billionaires' actions being surveilled more and thus gone after for crimes more often than the average person. The reward makes it worth it. Billionaires will have far more legal/monetary resources and thus you should naively expect more settlements, particularly without an admission of wrongdoing." That claim seems to be borne out by the model we both appear to agree with: higher-complexity audits are more worthwhile in terms of return.

And yes, I think that per person, the ultra-wealthy, rather than just people making 200k+, are far more likely to be investigated and prosecuted for white-collar crime, because they get more scrutiny, not because they commit more crime, even though they are also more likely to get off without a large punishment via lawyers. But the analysis looked at rates of guilt, not punishment sizes.

Data seems to indicate otherwise.
https://www.gao.gov/products/gao-22-104960
https://www.usatoday.com/story/money/2023/11/09/irs-uses-funding-to-audit-wealthy/71486513007/

First, to your second point: I agree that they aren't comparable, so I won't respond to that discussion. I was not, in this specific post, arguing that anything about safety in the two domains is comparable. The claim, which you agree to in the final paragraph, is that there is an underlying fallacy present in both places.

However, returning to your first, tangential point: the claim that the acceleration-versus-deceleration debate is theoretical and academic seems hard to support. Domains where everyone is dedicated to minimizing regulation and going full speed ahead are vastly different from those where people agree that significant care is needed, and where there is significant regulation and public debate. You seem to explicitly admit exactly this when you say that nuclear power is very different from AI because of the "very high levels of anti-nuclear campaigning and risk aversion" - that is, public pressure against nuclear seems to have stopped the metaphorical tide. So I'm confused about your beliefs here.

"The analogies establish almost nothing of importance about the behavior and workings of real AIs"

You seem to be saying that there is some alternative that establishes something about "real AIs," but then you admit these real AIs don't exist yet, and you're discussing "expectations of the future" by proxy. I'd like to push back, and say that I think you're not really proposing an alternative, or that to the extent you are, you're not actually defending that alternative clearly.


I agree that arguing by analogy to discuss current LLM behavior is less useful than having a working theory of interpretability and LLM cognition - though we don't have any such theory, as far as I can tell. But I have an even harder time understanding what you're proposing as a superior way of discussing a future situation that isn't amenable to that kind of theoretical analysis, given that we are trying to figure out where we do and do not share intuitions, and which models are or are not appropriate for describing the future technology. I'm not seeing a gears-level model proposed, and I'm not seeing concrete predictions.

Yes, arguing by analogy can certainly be slippery and confusing, and I think it would benefit from grounding in concrete predictions. And the use of any specific base rates is deeply contentious, since reference classes are always debatable. But at least it's clear what the argument is, since it's an analogy. In contrast, arguing by direct appeal to your intuitions, where you claim your views are a "straightforward extrapolation of current trends," is done without reference to your reasoning process. And that reasoning process, because it lacks an explicit gears-level model, rests on informal human reasoning - which, as Lakens argues, is deeply rooted in metaphor anyway - and so seems worse: it's reasoning by analogy with extra steps.

For example, what does "straightforward" convey, when you say "straightforward extrapolation"? Well, the intuition the words build on is that moving straight, as opposed to extrapolating exponentially or discontinuously, is better or simpler. Is that mode of prediction easier to justify than reasoning via analogies to other types of minds? I don't know, but it's not obvious, and dismissing one as analogy but seeing the other as "straightforward" seems confused.

I agree that the properties are somewhat simplified, but a key problem here is that the intuition and knowledge we have about how to make software better fails for deep learning. Current procedures for developing and debugging software work less well for neural networks doing text prediction than psychology does. And at that point, from the point of view of actually interacting with the systems, it seems worse to group software and AI than to group AI and humans. Obviously, calling current AI humanlike is also mostly wrong - but that just shows that we don't want to use either of these categories!

On the first point, I think most technical people would agree with the claim: "AI is a very different type of thing that qualifies as software under a broad definition, but that's not how to think about it."

And given that, I'm saying that we don't say "a videoconference meeting is different from other kinds of software in important ways" or "photography is different from other kinds of software in important ways," because we think of those as different things, where the fact that they run on software is incidental. My claim is that we should be doing the same with AI.
