All of Bentham's Bulldog's Comments + Replies

Note the U.S. hasn't had 10+% GDP growth since the Great Depression. But yeah, I'd be happy to take some bets about this--north of 5%, for instance.

Weirdly aggressive reply.  

First of all, the AI 2027 people disagree about the numbers.  Lifland's median is nearer to 2031.  I have a good amount of uncertainty, so I wouldn't be shocked if, say, we don't get the intelligence explosion for a decadeish.  

"you've predicted a 95-trillion-fold increase in AI research capacity under a 'conservative scenario.'" is false. I was just giving that as an example of the rapid exponential growth.  

So the answer, in short, is that I'm not very confident in extremely rapid growth within the next few years.  I'd probably put +10% GDP growth by 2029 below 50%.  

2
John Salter
To respond briefly:

1. "First of all, the AI 2027 people disagree about the numbers." That's irrelevant to your claim that you'd put "60% odds on the kind of growth depicted in AI 2027".

2. ""you've predicted a 95-trillion-fold increase in AI research capacity under a 'conservative scenario.'" is false. I was just giving that as an example of the rapid exponential growth." Here's what you wrote: "This might sound outrageous, but remember: the number of AI models we can run is going up 25x per year! Once we reach human level, if those trends continue (and they show no signs of stopping) it will be as if the number of human researchers is going up 25x per year. 25x yearly increases is a 95-trillion-fold increase in a decade." You then go on to outline reasons why it would actually be faster than that. If you aren't predicting this 95-trillion-fold increase, then either:
   1. the trends do indeed show signs of stopping, or
   2. the number of AI models you can run isn't really going up 25x YOY.

We can talk all day, but words are cheap. I'd much rather bet. Bets force you to get specific about what you actually believe. They make false predictions costly, true ones profitable. They signal what you actually believe, not whatever you think will get you the most status / clicks / views / shares, etc.
2
John Salter
What's the minimum percentage chance of greater than 10% GDP growth in 2029 that you think is plausible given the trends you're writing about and how much are you willing to bet at those odds? I'd rather bet on an earlier year, but I'd accept 2029 if that's all you've got in you. To be explicit, I'm trying to work out what you actually believe and what is just sensationalised.

Sure.  I'd bet that in the next 15 years the U.S. will have 10+% GDP growth in at least one year.   

Less sure about pre-2028 bets. 

1
Parker_Whitfill
I don't like the formulation of 1 year because you can have recessions and then catch-up, or weird anomalous years. Very open to betting on alternative formulations, e.g. average GDP growth over the next 10 years is <5%.
0
John Salter
This is a response more befitting Jim Cramer's Chihuahua than Jeremy Bentham's Bulldog.

According to AI 2027, before the end of 2027, OpenAI has:
* a "country of geniuses in a datacenter," each:
  * 75x more capable than the best human at AI research
  * "wildly superhuman" at coding, hacking and politics
* 330K superhuman AI researcher copies thinking at 57x human speed

In their slowest projection, by April 2028, OpenAI has achieved generalised superintelligence. But you're only willing to bet US GDP grows just 10%, in just one year, across the next 15? The US did 7.4% in 1984. Within 10 years - five years before your proposed bet resolves - you've predicted a 95-trillion-fold increase in AI research capacity under a 'conservative scenario.' According to your eighth section, this won't cause major bottlenecks elsewhere that would seriously stifle growth.

If this is really the best bet you're willing to offer, one of three things is true:
* You're wildly risk averse
* You don't believe what you're writing
* You're misleadingly leaving out the fine print (e.g. "I'd put about 60% odds on the kind of growth depicted variously in AI 2027", but not any time close to when they actually predict it will happen)

Which is it?

I meant why the low probability of bee sentience.

Obviously I'm the opposite of an expert here, but here are my reasons, roughly from most important to least important:

1. I think the best assessment we have of animal sentience seems biased towards animals for at least 4 reasons, as I outlined here. So I take RP's numbers and adjust them downward by something like 10x - 1,000x depending on the animal. IMO the most important bias here was selecting a pro-animal-welfare research team with zero animal welfare skeptics.

https://forum.effectivealtruism.org/posts/E9NnR9cJMM7m5G2r4/is-rp-s-moral-weights-project-too-a... (read more)

Out of curiosity, why so low? 

2
NickLaing
Straight after a short spray, the bees vacated the roof. There might have been a lot more due later though, but they looked not bad.

Want to come on the podcast and argue about the person-affecting view? 

Probably our disagreements are too vast to settle much in a comment. 

I mean, that might help with a few problems, but it doesn't help with a lot of the problems. Also, it just seems so crazy. Giving up axiology to hold on to an intuition that isn't even very widely shared? Giving up the idea that the world would be better if it had lots of extra happy people and every existing person was a million times better off? 

7
Michael St Jules 🔸
I think we have very different intuitions. I don't think giving up axiology is much or any bullet to bite, and I find the frameworks I linked:

1. better motivated than axiology, and, in particular, by empathy, and to better respect what individuals (would) actually care about,[1] which I take to be pretty fundamental and pretty much the point of "ethics", and
2. better fits with subjectivism/moral antirealism.[2]

The problems with axiology also seem worse to me, often as a consequence of failing to respect what individuals (would) actually care about and so failing at empathy, one way or another, as I illustrate in my sequence.

What do you mean to imply here? Why would I force myself to accept axiology, which I don't find compelling, at the cost of giving up my own stronger intuitions? And is axiology (or the disjunction of conjunctions of intuitions from which it would follow) much more popular than person-affecting intuitions like the Procreation Asymmetry?

I think whether or not a given person-affecting view has to give that up can depend on the view and/or the details of the hypothetical.

1. ^ At a basic level better, not necessarily the things they care about by derivation from other things they care about, because they can be mistaken in their derivations.
2. ^ Moral realism, that there's good or bad independently of individuals' stances (or evaluative attitudes, as in my first post), seems to me to be a non-starter. I've never seen anything close to a good argument for moral realism, maybe other than epistemic humility and wagers.

The apples being unbounded thing was just a brief intuition pump.  It wasn't really connected to the other stuff.  

I don't think the argument actually requires that different value systems can be compared in fungible units.  You can just compare stuff that is, in one value system, clearly better than something in another value system.  So, assume you have a credence of .5 in fanaticism and of .5 in bounded views.  Well, creating 10,000 happy people given bounded views is less good than creating 10 trillion suffering people given un... (read more)

Would be curious why people are downvoting. 

Thanks!  

I don't think the analogy with subsistence humans is a good one because the basic argument for net negative animal welfare doesn't apply to them.  The basic argument is: most animals have very short lives that culminate in a painful death, and a few days of life isn't enough to recoup the harms of a painful death.  This doesn't apply to long-lived hunter-gatherers.  Fwiw, I don't think it applies to animals either--it seems plausible that elephants mostly live good lives, for example.  But the most numerous animals are wor... (read more)

I'm pretty worried about this because I think most wild animals have bad lives, and so increasing their numbers is very bad: https://benthams.substack.com/p/against-biodiversity?utm_source=publication-search

5
Tandena Wagner
Hi, thank you for voicing this concern. I read your recent post, “Rewilding Is Extremely Bad.” Personally, I doubt that most wild animals have negative lives (informed by analogy to most of our own history of subsistence-level survival, and my doubt that they would consider their lives to have not been worth living). I also don’t believe that total hedonic utilitarianism is a complete frame for thinking about this. I think it is important to factor in people's and animals' preferences for continued existence. Mostly I think we just don't know much about this question overall. I do think we should care about this fundamental question and certainly do what is in our power to improve the lives of other beings.

I think you may have gotten the wrong impression from my use of "biodiversity." It would be understandable to assume that I want to maximize Earth's total biomass / total natural land area / number of wild animals, or something like that. I'm actually mostly interested in preserving the diversity of life that has evolved on Earth, such as by avoiding species extinctions. I think there are several good reasons to do this, such as providing the far future with valuable information that would otherwise be lost, potentially fulfilling uplift-style moral obligations we may have towards nonhuman animals, and generally keeping our options open. Preserving natural land tends to be a tractable, robust, large-scale way to prevent species extinctions. But there are other biodiversity interventions that work with very small numbers of individuals, like seed banks or an analogous "insect zoo", or even zero individuals, like biobanking tissue samples with the aim of de-extinction in a utopian future world.

Perhaps we could both celebrate something like a well-designed insect zoo - where we care for many small populations of insects, work toward better understanding their many different desires, elevate the value of their lives for more to see, and preserve a wide variety of life.

I don't think the case for Vasco's argument depends really on sentience in non-arthropods. There are like a billion soil arthropods for every person, so funding research on soil animals looks similarly important.  And a lot of these are ants who are more likely to be sentient than black soldier flies. 

I do find the comment "I also want robustness in the case for sentience," a bit puzzling in context.  As I understood it, Vasco's argument was that it's not very unlikely that animals even simpler than arthropods are sentient (mites, springtail... (read more)

Yes it would imply that a bit of extra energy can vastly increase consciousness.  But so what?  Why be 99.9999% confident that it can't? 

3
Vasco Grilo🔸
Here is an illustration of how one can easily be much more confident than that.

If welfare per animal-year was proportional to f(x) = 2^x, where x is the number of neurons, its elasticity would be x*f'(x)/f(x) = x*ln(2)*2^x/2^x = ln(2)*x. Even for the 302 neurons of adult nematodes, which are the animals with the fewest neurons, the elasticity would be 209 (= ln(2)*302). For my assumption that welfare per animal-year is proportional to g(x) = x^a, its elasticity is x*a*x^(a - 1)/x^a = a. So, for a number of neurons close to that of adult nematodes, I think welfare per animal-year being proportional to 2^x is roughly as plausible as it being proportional to x^302. For a number of neurons close to that of humans, I believe welfare per animal-year being proportional to 2^x is roughly as plausible as it being proportional to x^(86*10^9).

If the elasticity y follows a normal distribution with mean m and standard deviation s, the probability p(y) of a given elasticity is proportional to e^(-(y - m)^2/(2*s^2))/s. Given 2 values for the elasticity, y1 and y2, the ratio between their probabilities is p(y2)/p(y1) = e^((-(y2 - m)^2 + (y1 - m)^2)/(2*s^2)). For m = 0.5, s = 0.25, y1 = 0.5 (equal to the expected elasticity m), and y2 = 302 (the value I am arguing is very unlikely), p(y2)/p(y1) = e^((-(302 - 0.5)^2 + (0.5 - 0.5)^2)/(2*0.25^2)) = e^(-727*10^3) = 10^(-log10(e)*727*10^3) = 10^(-316*10^3).

The above does not show that the welfare of animals with the fewest neurons dominates. However, it illustrates one can not only get astronomical stakes, but also astronomically low probabilities of such stakes holding. For any probability distribution describing real world phenomena, the probability and stakes are not independent. So one cannot just come up with a function implying astronomical stakes, and then independently guess a probability of such stakes holding. Production functions usually have elasticities from 0 to 1, which is part of why my speculative best guess is that wel
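
For readers who want to check the arithmetic above, here is a minimal Python sketch using the same assumed parameters from the comment (m = 0.5, s = 0.25, 302 neurons, y2 = 302); the probability ratio is computed in log space, since e^(-727,000) underflows a float:

```python
import math

# Elasticity of welfare per animal-year under the two functional forms discussed above.
neurons = 302                            # adult nematode neuron count used in the comment
elasticity_exp = math.log(2) * neurons   # elasticity of f(x) = 2^x at x = 302, ~209
# elasticity of g(x) = x^a is just a, taken to be ~0.5 below

# Normal distribution over the elasticity y, with mean m and standard deviation s.
m, s = 0.5, 0.25
y1, y2 = 0.5, 302

# p(y2)/p(y1) = exp((-(y2 - m)^2 + (y1 - m)^2) / (2 s^2)); work in log10 to avoid underflow.
log_ratio = (-(y2 - m) ** 2 + (y1 - m) ** 2) / (2 * s ** 2)
log10_ratio = log_ratio * math.log10(math.e)

print(f"elasticity of 2^x at x = {neurons}: {elasticity_exp:.0f}")  # ~209
print(f"p(y2)/p(y1) ~ 10^{log10_ratio:.0f}")                        # ~10^-316,000
```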

I think it's a bad result of a view if it implies that no actions we perform are good or bad.  Intuitively it doesn't seem like all chaotic actions are neutral. 

It’s a somewhat long post.  Want to come on the podcast to discuss?

8
Anthony DiGiovanni
Sounds great, please DM me! Thanks for the invite. :) In the meantime, if it helps, for the purposes of this discussion I think the essential sections of the posts I linked are: * "The structure of indeterminacy" * "Aggregating our representor with higher-order credences uses more information" (and "Response") (The section I linked to from this other post is more of a quick overview of stuff mostly discussed in the sections above. But it might be harder to follow because it's in the context of a post about unawareness specifically, hence the "UEV" term etc. — sorry about that! You could skip the first paragraph and replace "UEV" with "imprecise EV".)

I don't agree with that.  Cluelessness seems to only arise if you have reason to think that on average your actions won't make things better.  And yet even a very flawed procedure will, on average across worlds, do better than chance.  This seems to deal with epistemic cluelessness fine. 

9
Anthony DiGiovanni
I respond to the "better than chance" claim in the post I linked to (in my reply to Richard). What do you think I'm missing there? (See also here.)

Why can't you take seriously every plausible argument with huge implications? 

2
idea21
If by "taking seriously" we mean acting effectively, the problem, as I already wrote, is that we have to choose options. The most plausible option must be the one that increases the possibilities for all kinds of altruistic action. Schubert and Caviola, in their book *Effective Altruism and the Human Mind*, consider it acceptable to offer altruistic options that, while perhaps not the most effective from a logical standpoint, may be more appealing to the general public (thus increasing the number of altruistic agents and the resulting altruistic action in general). It is necessary to find a middle ground based on trial and error, always bearing in mind that increasing the number of people motivated to act altruistically should be the primary objective. Logically, I am referring to a motivation based on rational and enlightened principles, and one that takes into account the psychological, cultural, and social factors inherent in human altruistic behavior. The main factor in "Effective Altruism" is altruistic motivation. Long-term options are not very motivating due to the cluelessness factor. Nor are options for animal welfare as motivating as those that involve reducing human suffering in the present moment. When we have as many agents of "Effective Altruism" as, for example, followers of Jehovah's Witnesses or communist militants (outside of communist states), then we will be able to make many more altruistic choices of all kinds. Isn't this plausible?

Thanks, yes I think I fired this post off too quickly without taking time to read deeper analysis of it.  I'll try to give your post a read when I get the chance. 

Interesting point, though I disagree--I think there are strong arguments for thinking that you should just maximize utility: https://joecarlsmith.com/2022/03/16/on-expected-utility-part-1-skyscrapers-and-madmen/

It's made me a bit more Longtermist.  I think that one of the more plausible scenarios for infinite value is that God exists and actions that help each other out infinitely strengthen our eternal relationship, and such a judgment will generally result in doing conventionally good things.  I also think that you should have some uncertainty about ethics, so you should want the AI to do reflection.

Majorly disagree!  I think that while probably you'd expect an animal to behave aversively in response to stimuli, it's surprising that: 

  1. This distracts them from other aversive stimuli (nociception doesn't typically work that way--it's not like elbow twitches distract you and make you less likely to have other twitches).
  2. They'd react to anaesthetic (they could just have some aversive behavior without anaesthetic).
  3. They'd rub their wounds.  

etc

No!  It implies only that if you inflict some comparable injury on a human and a bee (adjusting for e.g. bees' diminished size), the human will feel, on average (though with lots of uncertainty), around 10X as much pain. Moral evaluation of this is something different! 

If you want to read the longer defense of the RP numbers, you can read the RP report or my followup article on the subject https://benthams.substack.com/p/you-cant-tell-how-conscious-animals. Suffice it to say, it strikes me as deeply unwise to base your assessments of bee consciousness on how they look, rather than on behavior.  I think the strong confidence that small and simple animals aren't intensely conscious rests on little more than unquestioned dogma, with nothing very persuasive having ever been said in its favor https://benthams.substack.co... (read more)

You can read a brief summary of his findings here--he also read my article and didn't point out anything major, so it's unlikely that I majorly distorted what he said. 

https://forum.effectivealtruism.org/posts/BvNxD66sLeAT8u9Lv/climate-change-and-longtermism-new-book-length-report

Oh and one point about the update: all of these errors came from me being a dumbass and misreading Halstead or posting the wrong link, so this shouldn't affect your update from Halstead. 

2
MichaelDickens
Would be true if I had read Halstead's 437-page report, but I didn't, I only read the intro + your summary. So if I don't put high credence in your summary then I don't know what Halstead's findings were.

Okay yes you are totally right, these are embarrassing errors that I will now fix!

Sorry, just saw this, will double-check and then fix the various claims if you are right.

I think there is probably a pretty strong moral reason to abstain from those but honey provides much stronger reasons.  Disagree on strategy--people really like bees! 

How is this different from, say, the external world?  Like, in both cases you'll ultimately ground out at intuitions, but nonetheless, the beliefs seem justified. 

1
ThomasEliot
No? We can test for things like object permanence by having person A secretly put an object in a box without telling person B what it is, then having person B check the box later on, while person A is not there, to see what's inside, and then comparing their reports.

Moral realism is just the idea that some moral propositions are objectively true, not that all of them are true. 

2
Pablo
Sure, who could possibly believe that all moral propositions are objectively true? My point was that moral realists typically believe that some axiological and some deontic claims are objectively true, and that if you are an anti-realist about the former and a realist about the latter, calling yourself a “moral realist” may fail to communicate your views accurately.

They're doing nothing subjectively wrong if they really don't know.  But if they knowingly don't look into it then they're a bit blameworthy. 

There's a distinction between subjective rightness and objective rightness (these are poor terms given that they're both compatible with moral realism). I'd say that if you torture someone thinking it will be bad but it turns out good, that was subjectively bad but objectively good. Given what you knew at the time you shouldn't have done it, but it was ultimately for the best.

1
Osty
Ok, but this still leaves unanswered the question of whether and to what degree you have a moral obligation to become better informed about the consequences of your actions. Many people are blissfully unaware of what happens in factory farms. Are they doing nothing (subjectively) wrong, or is there a sense in which we can say they "should have known better"? Can I absolve myself of subjective wrongness just by being an ignoramus?
5
Noah Birnbaum
Just gonna have to write a reply post, probably 

Objective just means that its truth doesn't depend on what people think about it.  The Earth being round is objective--even if everyone thought it was flat, it wouldn't be. 

//I think that these things really are wrong and don't depend on what people think about it. But I also think that that statement is part of a language game dictated by complex norms and expectations.// 

To me this sounds a bit like moral naturalism.  You don't think morality is something non-physical and spooky but you think there are real moral facts and these don't depend on our attitudes.  

I guess I don't quite see what your puzzlement is with morality.  There are moral norms which govern what people should do.  Now, you might d... (read more)

I think of moral naturalism as a position where moral language is supposed to represent things, and it represents certain natural things. The view I favor is a lot closer to inferentialism: the meaning of moral language is constituted by the way it is used, not what it is about. (But I also don't think inferentialism is quite right, since I'm not into realism about meaning either.)

I guess I don't quite see what your puzzlement is with morality. There are moral norms which govern what people should do. Now, you might deny there in fact are such things,

... (read more)
Bentham's Bulldog
100% ➔ 80% agree

Morality is Objective

Um, see above :)

Biodiversity isn't ultimately what matters, but unfortunately it's the best proxy we have for learning about the distant past. There aren't really studies of past NPP (net primary productivity) after mass extinctions. More diverse ecosystems tend to be richer and more productive. 

Also, humans have, in fact, been drastically reducing insect populations--https://reducing-suffering.org/humanitys-net-impact-on-wild-animal-suffering/

2
Vasco Grilo🔸
I guess because cage-free chickens can move around, and therefore spend more energy. Importantly, I expect the effects on soil nematodes, mites, and springtails to be larger than those on laying hens regardless of whether cage-free chickens increase or decrease cropland. I estimated cage-free reforms benefit laying hens 0.718 % as much as they benefit soil nematodes, mites, and springtails for an increase in feed of 3.63 % (= 0.0726/2.00). So, for the effects on laying hens to be larger than those on soil nematodes, mites, and springtails, the change in feed would have to be smaller than 0.0261 % (= 0.00718*0.0363) holding other factors constant, which is very small. 
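
A quick arithmetic check of the figures quoted above, as a sketch in Python (the 0.0726/2.00 and 0.718 % values are taken from the comment; the threshold calculation assumes, as the comment does, that effects on soil animals scale with the change in feed):

```python
# Quick check of the ratios quoted above (values taken from the comment).
feed_increase = 0.0726 / 2.00    # 3.63 % increase in feed from cage-free reforms
hen_to_soil_benefit = 0.00718    # hens benefit 0.718 % as much as soil animals

# Feed change below which the effects on laying hens would exceed those on soil animals.
threshold = hen_to_soil_benefit * feed_increase
print(f"feed increase: {feed_increase:.2%}")  # 3.63%
print(f"threshold:     {threshold:.4%}")      # 0.0261%
```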
2
Vasco Grilo🔸
Hi Matthew,

Because they require more feed, although there is large uncertainty about whether this is the case for cage-free reforms. Below are the relevant paragraphs of the post where "I estimated broiler welfare and cage-free reforms increase cropland by 1.98 m2-year/meat-kg and 0.113 m2-year/egg-kg".
  1. I'm doubtful that any of those are conscious, but I agree that given that it's possible they are, their interests matter a decent amount in expectation--though probably less than insects.  
  2. If the world is very weird then the right ethical view should get weird results.  For more on this see https://wonderandaporia.substack.com/p/surely-were-not-moral-monsters and https://benthams.substack.com/p/lyman-stone-continues-being-dumb?utm_source=publication-search starting at "Lyman's a pro-natalist".  A view shouldn't be judged by matching intuitio
... (read more)
3
Henry Howard🔸
Why? The average person says the same thing about insects.

Well, all Christians will need to explain why evangelism isn't the only thing of any importance.  In my view universalists have the best answer, but whatever one's answer is, it can explain why to give to effective anti-poverty charities. 

5
mlsbt
But this is what the first commenter's argument is, that's why Christianity would be incompatible with EA. A truly EA, non-universalist Christianity does not explain why evangelism isn't the only thing of any importance because by their lights it clearly is. And yet the Bible does say to do all these other good but non-maximally-effective things! Unless, as mentioned, they're all weirdly instrumental.