
David Mathers🔸

5834 karma · Joined

Bio

Superforecaster, former philosophy PhD, Giving What We Can member since 2012. Currently trying to get into AI governance. 

Posts (11)


Comments (708)

Oh, ok, I agree: if the number of deer afterwards is the same as it would have been counterfactually, then it does seem plausibly net positive, yes.

Also, it's certainly not common sense that it is always better to have fewer beings with higher welfare. It's not common sense that a world with 10 incredibly happy people is better than one with a billion very slightly less happy people.

And not every theory that avoids the repugnant conclusion delivers this result, either. 

I agree, it is unclear whether welfare is actually positive. 

Those are fair points in themselves, but I don't think "fewer deer is fine, so long as they have a higher standard of living" has anything like the same commonsense standing as "we should protect people from malaria with insecticide even if the insecticide hurts insects". 

And it's not clear to me that assuming fewer deer is fine in itself, even if their lives are good, is avoiding taking a stance on the intractable philosophical debate, rather than just implicitly taking one side of it. 

"A potentially lower-risk example might be the warble fly (Hypoderma), which burrows under the skin of cattle and deer, causing great discomfort, yet rarely kills its host. The warble fly is small in biomass, host-specific (so doesn't greatly affect other species), and has more limited interactions beyond its host-parasite relationship. Although it does reduce the grazing and reproductive activity of hosts, these effects are comparatively minor and could be offset with non-invasive fertility control"

Remember that it's not uncontroversial that it is preferable to have fewer animals at a higher welfare level, rather than more animals at a lower welfare level. Where welfare is net positive either way, some population ethicists are going to say that having more animals at a lower level of welfare can be better than having fewer at a higher level of welfare. See for example: https://www.cambridge.org/core/journals/utilitas/article/what-should-we-agree-on-about-the-repugnant-conclusion/EB52C686BAFEF490CE37043A0A3DD075  But also, even on critical-level views designed to BLOCK the repugnant conclusion, it can sometimes be better to have more welfare subjects at a lower but still positive level of welfare than fewer subjects at a higher level of welfare. So maybe it's better to have more deer, even when some of them have warble fly, than to have fewer deer, none of whom have warble fly. 


And it's not so much that I think I have zero evidence: I keep up with progress in AI to some degree, I have some idea of what the remaining gaps are to general intelligence, I've seen the speed at which capabilities have improved in recent years, etc. It's that how to evaluate that evidence is not obvious, and so simply presenting a skeptic with it probably won't move them, especially as the skeptic (in this case, you) probably already has most of the evidence I have anyway. If it were just some random person who had never heard of AI asking why I thought the chance of mildly-over-human-level AI in 10 years was not far under 1%, there are things I could say. It's just that you probably already know those things, so there's not much point in my saying them to you. 

Yeah, I agree that in some sense saying "we should instantly reject a theory that recommends WD" doesn't combine super well with belief in classical U, for the reasons you give. That's compatible with classical U's problems with WD being less bad than NU's problems with it, is all I'm saying. 

"I'm generally against this sort of appeal to authority. While I'm open to hear the arguments of smart people, we should evaluate those arguments themselves and not the people giving them. So far, I've heard no argument that would change my opinion on this matter."

I think this attitude is just a mistake if your goal is to form the most accurate credences you can. Obviously, it is always good practice to ask people for their arguments rather than only taking what they say on trust. But your evaluation of other people's arguments is fallible, and you know it is fallible. So you should distribute some of your confidence to cases where your personal evaluations of credible people's arguments are just wrong. This isn't the same as failing to question purported experts. I can question an expert, and even disagree with them overall, and still move my credences somewhat towards theirs. (I'm much more confident about this general claim than I am about what credences in ASI in the next decade are or aren't reasonable, or how much credibility anyone saying ASI is coming in the next decade should get.) 

"It all comes down to the question of whether the current tech is relevant for ASI or not. In my estimation, it is not – something else entirely is required. The probability for us discovering that something else just now is low." 

I think Richard's idea is that you shouldn't have *super-high* confidence in your estimation here, but should put some non-negligible credence on the idea that it is wrong and that current progress is relevant. The reasoning being: why be close to certain about a question that you probably think is hard and that other smart people disagree about? And once you open yourself up to a small chance that current progress is in fact relevant, it then becomes at least somewhat unclear that you should be way below 1% on the chance of AGI in the relatively near term, or on current safety work being relevant. (Not necessarily endorsing the line of thought in this paragraph myself.) 

It seems like if you find it incredible to deny and he doesn't, it's very hard to make further progress :(  I'm on your side about the chance being over 1% in the next decade, I think, but I don't know how I'd prove it to a skeptic, except to gesture and say that capabilities have improved loads in a short time, and it doesn't seem like there are >20 similar-sized jumps before AGI. But when I ask myself what evidence I have for "there are not >20 similar-sized jumps before AGI", I come up short. I don't necessarily think the burden of proof here is actually on people arguing that the chance of AGI in the next decade is non-negligible, though: it's a goal of some serious people within the relevant science, they are not making zero progress, and some identifiable, quantifiable individual capabilities have improved very fast. Plus the extreme difficulty of forecasting technological breakthroughs over more than a couple of years cuts both ways. 
