Great summaries/comments!
I think this is an original argument of Tomasik’s.
The specific calculations are probably original, but the basic idea that being in a simulation would probably reduce the importance of long-term outcomes was discussed by others, such as the people mentioned in this section.
It's great to have these quotes all in one place. :)
In addition to the main point you made -- that the futures containing the most suffering are often the ones that it's too late to stop -- I would also argue that even reflective, human-controlled futures could be pretty terrible because a lot of humans have (by my lights) some horrifying values. For example, human-controlled futures might accept enormous s-risks for the sake of enormous positive value, might endorse strong norms of retribution, might severely punish outgroups or heterodoxy, might value gi...
Thanks for the kind words. :)
things like the dark tetrad traits (narcissism, machiavellianism, psychopathy, sadism) are adaptive even on a group level
Yup. And how adaptive they are depends on the distribution of other agent types. For example, against a population of pure pacifists, Dark Tetrad traits may be pretty effective. In a population of agents who cooperate with one another but punish rule-breakers, Dark Tetrad traits are probably less adaptive. Hopefully present-day society is somewhat close to the latter case, although human reproduction isn'...
Thanks for the post! There's a lot of deep food for thought in it. I agree it's nice to know that you're not alone in having these kinds of feelings.
reading an article by Brian Tomasik one night[...]. It was the most painful experience of my life.
Sorry about that! Several people have had strong adverse reactions to my discussions of suffering. On the whole I think it's better for people to be exposed to such ideas, although some particular people may be more debilitated than motivated by thinking about extreme suffering.
I notice a trend for news and Go...
I haven't looked into sheep and goats specifically, but I imagine their wild-animal impacts would be fairly similar to those of cattle. Unfortunately they're smaller, so there's more suffering and death per kg than for cattle, but they're still much better than chicken/fish/etc.
Dairy is another lower-impact option, and I guess a lot of Hindus are ok with dairy.
there's no sense in which asking dumb questions can plausibly have very significant downsides for the world (other than opportunity costs)
I think the opportunity costs are the key issue. :) There's a reason that companies use FAQs and automated phone systems to reduce the number of customer-support calls they have. There have been several times in my life when I've asked questions to someone who was sort of busy, and it was clear the person was annoyed.
At one of my previous employers (not an EA organization), I asked a lot of questions during meetings, ...
Thanks! It's worth noting that the rainforest and Cerrado numbers in that piece are very rough guesses based on limited and noisy data. As one friend of mine would say, I basically pulled those numbers out of my posterior (...distribution). :) Also, even if that comparison is accurate, it's just for one region of the world; it may not apply to the difference between, e.g., temperate forests and grasslands. All of that said, my impression is that crop fields do tend to have fewer mammals and birds than wild grassland or forest. For birds, see the screenshot of a table in this section.
Great post! In addition to biases that increase antagonism, there are also biases that reduce antagonism. For example, the fact that most EAs see each other as friends can blind us to the fact that we may in fact be quite opposed on some important questions. Plausibly this is a good thing, because friendship is a form of cooperation that tends to work in the real world. But I think friendship does make us less likely to notice or worry about large value differences.
As an example, it's plausible to me that the EA movement overall somewhat increases expected...
Thanks! Good to know. If you're just buying eyeballs, then there's roughly unlimited room for more funding (unless you were to get a lot bigger), so presumably there'd be less reason for funging dynamics. (And I assume you don't receive much or any money from big EA animal donors anyway.)
I'm honored that you're honored. :) Thanks for the work you do and for your answer here!
there are certain large grantors that I have been told prefer to fund nonprofits that already have raised at least a certain amount from other sources
Are those EA grantors? Or maybe you prefer not to say.
That makes sense about how more donors helps with fundraising. I wonder if that's more true for a startup charity that has to demonstrate its legitimacy, while for a larger and more established charity, maybe it could go the other way?
Makes sense about ex ante vs ex post. :)
Are you more optimistic that various different kinds of reflection would tend to yield a fair amount of convergence? Or that our descendants will in fact undertake reflection on human values to a significant degree?
Makes sense. :) There are at least two different reasons why one might discourage taking more than one's fair share:
Point #1 may be a reason to not try to outcompete others purely for its own sake. However, reason #2 depends on whether other donors are in fact playing chicken and whether it's feasibl...
Your question is fairly relevant to the discussion because if I thought there was net positive value in the lives of wild animals, then I would have a lot fewer concerns about non-welfare-reform animal charities.
I've had it on my todo list to check out that video and paper, but I probably won't get to it any time soon, so for now I'll just reply to the slides you asked about. :)
Personally I would not want to live even as the two surviving adult fish, because they probably experience a number of moments of extreme suffering, at least during death if not ear...
Thanks! I'm confused about the acausal issue as well :) , and it's not my specialty. I agree that acausal trade (if it's possible in practice, which I'm uncertain about) could add a lot of weird dynamics to the mix. If someone was currently almost certain that Earth-originating space colonization was net bad, then this extra variance should make such a person less certain. (But it should also make people less certain who think space colonization is definitely good.) My own probabilities for Earth-originating space colonization being net bad vs good from a ...
In reading more about this topic, I discovered that there has already been a lot of discussion about donor coordination on the EA Forum that I missed. (I don't read the Forum very actively.) EAs generally think it's bad to engage in a game of chicken where you try to let other people fund something first, at least within the EA community -- e.g., Cotton-Barratt (2021).
My original thought behind making this post was that the extent of funging for animal donations seemed like a useful thing for various animal donors to be aware of, to be more informed about ...
it's total on-farm deaths that matter more to me than the rates, so just increasing the prices enough could reduce demand enough to reduce those deaths.
If cage-free hens are less productive, then there might still be more total deaths in cage-free despite higher prices?
I don't have a copy of the book to check, but I think Compassion, by the Pound says that cage-free hens lay fewer eggs.
A 2006 study gives some specific numbers, although this is for free-range rather than cage-free:
...Layers from the free range system, compared to those kept in cages, laid
Good to know! Are there any other slaughter-focused groups besides HSA? Maybe you mean groups for which one of their major priorities is slaughter, like Shrimp Welfare Project and various other charities working on chickens and fish?
I saw a 2021 Open Phil grant "to Animal Protection Denmark to support research on ways to improve the welfare of wild-caught fish." But that organization itself does lots of stuff (including non-farm-animal work).
Off topic: There's a line in the movie A Cinderella Story: Christmas Wish that might be applicable to you: "was also...
they tend not to lose status because of reduced RFMF
Great point! That makes them different from GiveWell charities, where, e.g., AMF was dropped at least once due to RFMF concerns.
I suppose donating could even increase their RFMF in the longer run
Yeah, it's not obvious to me that it's right to think about RFMF decreasing as a charity gets more money. It may well be the opposite: more money means faster growth, which means more ability to use money.
OTOH, if other donors believe that RFMF is limited, then there's a possibility of them funging away any...
That's right. :) There are various additional details to consider, but that's the main idea.
Catastrophic risks have other side effects in scenarios where humanity does survive, and in most cases, humanity would survive. My impression is that apart from AI risk, biorisk is the most likely form of x-risk to cause actual extinction rather than just disruption. Nuclear winter and especially climate change seem to have a higher ratio of (probability of disruption but still survival)/(probability of complete extinction). AI extinction risk would presumably still...
Good points! I'd be curious to hear what Lewis thought of those two HSA grants and why Open Phil hasn't done more since then.
Hey Brian, I think it's too early to judge both of the HSA grants we funded because they're for long research projects, which have also gotten delayed. We'd like to fund more similar work for HSA but there have been capacity constraints on both sides. We also tend to weigh prolonged chronic suffering more highly than shorter acute suffering, so slaughter isn't as obvious a focus for us. So I think funding HSA or similar slaughter-focused groups is a good idea for EAs like you who prioritize acute suffering. On slaughter, you might like to also look into the Shrimp Welfare Project (OP-funded, but with RFMF).
Thanks! That's encouraging to hear (although it would be better for animals if the charities did fill their funding gaps).
There could still be some funging if a smaller remaining funding gap discourages other donors, such as the Animal Welfare Fund, from giving more, but at least the effect is probably less drastic than if the org hits its target RFMF fully.
Great point! Michael said something similar:
the funders may have specific total funding targets below filling their near term RFMF, and the closer to those targets, the less they give.
For example, the funders might aim for a marginal utility of 6 utilons per dollar, so using your example numbers, they would only want to fund the org up to $800K. And if someone else is already giving $100K, they would only want to give $700K.
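To make that arithmetic concrete, here's a toy sketch (the marginal-utility curve and the $1K funding increments are hypothetical; the 6-utilons-per-dollar threshold and $800K/$100K figures are from the example above):

```python
# Toy sketch of the funding-target logic above (curve and increments are made up).
# Suppose marginal utility (utilons per dollar) declines linearly with total
# funding, and the funder stops once it falls to their threshold.

def marginal_utility(total_dollars):
    """Hypothetical declining curve: 10 utilons/$ at $0 of funding,
    falling by 0.5 utilons/$ per $100K of total funding."""
    return 10 - 0.5 * (total_dollars / 100_000)

def funder_gift(threshold, outside_funding):
    """Fund up to the total where marginal utility hits the threshold,
    minus whatever others have already given."""
    total = 0
    while marginal_utility(total) > threshold:
        total += 1_000  # fund in $1K increments
    return max(0, total - outside_funding)

print(funder_gift(6, 0))        # → 800000: funder fills the org up to the target
print(funder_gift(6, 100_000))  # → 700000: outside $100K displaces the funder 1:1
```

The point of the sketch is just that under a fixed marginal-utility target, each outside dollar crowds out a funder dollar one-for-one.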
My guess would be that in practice, funders probably aren't thinking too much about a curve of marginal utility per dollar but are...
Thanks! I may have several questions later. :) For now, I was curious about your thoughts on the funging framework in general. Do you think it is the case that if one EA gives more to you, then other EAs like the Animal Welfare Fund will tend to give at least somewhat less? And how much less?
I sort of wonder if that funging model is wrong, especially in the case of rapidly growing charities. For example, suppose the Animal Welfare Fund in year 1 thinks you have enough money, so they don't grant any more. But another donor wants you to spend $25K (or whatev...
Thanks! That's good to know. When I looked through the Animal Welfare Fund grantees recently, Healthier Hens was one that I picked out as a possible candidate for donating to. I'm more concerned about extreme than chronic pain, but I guess HH says that bone fractures cause some intense pain as well as chronic (and of course I care about chronic pain somewhat too).
Is there info about why grantors didn't give more funding to HH? I wonder if there's something they know that I don't. (In general, that's a main downside of trying to donate off the beaten path.)
Yeah, more research on questions like whether beef reduces net suffering would be extremely useful, both for my personal donation decisions and more importantly for potentially shifting the priorities of the animal movement overall. My worries about funging here ultimately derive from my thinking that the movement is missing some crucial considerations (or else just has different values from me), and the best way to fix that would be for more people to highlight those considerations.
I'm unsure how more research on the welfare of populous wild animals would...
That's a useful post! It's an interesting idea. There could be some funging between Open Phil and other EA animal donors -- like, if Open Phil is handling the welfare reforms, then other donors don't have to and can donate more to non-welfare stuff. OTOH, the fact that a high-status funder like Open Phil does welfare reforms makes it more likely that other EAs follow suit.
Another thing I'd worry about is that if Open Phil's preferred animal charities have less RFMF, then maybe Open Phil would allocate less of its funds to animal welfare in general, leaving...
You'd have to donate enough to reduce the recommendation status of an org, which seems unlikely for their Top Charities, at least
It's unlikely, but if it did happen, it would be a huge negative impact, so in expectation it could still be nontrivial funging? For example, if I think one of ACE's four top charities is way better than the others, then if I donate a small amount to it, there's a tiny chance this leads to it becoming unrecommended, but if so, that would result in a ton less future funding to the org.
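As a back-of-the-envelope illustration of why a tiny probability can still matter here (both numbers are made up for the sake of the example):

```python
# Rough expected-value arithmetic for the derecommendation scenario above.
# Both inputs are hypothetical placeholders, not estimates.
p_derecommend = 0.001            # tiny chance a small donation tips the org
                                 # below the recommendation threshold
future_funding_lost = 5_000_000  # funding lost if the org is derecommended

expected_funging = p_derecommend * future_funding_lost
print(expected_funging)  # → 5000.0, small but nontrivial in expectation
```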
But I suppose there could still be funging; the funders may have specific total funding targets below filling their near term RFMF, and the closer to those targets, the less they give.
Yeah. Or it could work in reverse: if they commit to giving only, say, 50% of an org's budget, then if individual donors give more, this "unlocks" the ability for the big donors to give more also. However, Karnofsky says it's a myth that Open Phil has a hard rule like this. Also, as I noted in the post, I wouldn't want them to have a hard rule like this, because it could l...
Thanks!
so the effect may be thought of as additional money going to the worst/borderline EA Animal welfare grantee
Yeah, that's the funging scenario that I had in mind. :) It's fine if everyone agrees about the ranking of the different charities. It's not great if the donor to the funged charity thinks the funged charity is significantly better than the average Animal Welfare Fund grant.
EA Animal Welfare fund does ask on their application form about counterfactual funding
Interesting! That does support the idea there is some funging that happens inte...
If the AI didn't face any competition and was a rational agent, it might indeed want to be extremely cautious about making changes to itself or building successors, for the reason you mention. However, if there's competition among AIs, then just like in the case of a human AI arms race, there might be pressure to self-improve even at the risk of goal drift.
If an AI is built to value helping humans, and if that value can remain intact, then it wouldn't need to be "enslaved"; it would want to be nice of its own accord. However, I agree with what I take to be the thrust of your question, which is that the chances seem slim that an AI would continue to care about human concerns after many rounds of self-improvement. It seems too easy for things to slide askew from what humans wanted one way or another, especially if there's a competitive environment with complex interactions among agents.
Thanks. :) I'm personally not one of those transhumanists who welcome the transition to weird posthuman values. I would prefer for space not to be colonized at all in order to avoid astronomically increasing the amount of sentience (and therefore the amount of expected suffering) in our region of the cosmos. I think there could be some common ground, at least in the short run, between suffering-focused people who don't want space colonized in general and existential-risk people who want to radically slow down the pace of AI progress. If it were possible, t...
Work related to AI trajectories can still be important even if you think the expected value of the far future is net negative (as I do, relative to my roughly negative-utilitarian values). In addition to alignment, we can also work on reducing s-risks that would result from superintelligence. This work tends to be somewhat different from ordinary AI alignment, although some types of alignment work may reduce s-risks also. (Some alignment work might increase s-risks.)
If you're not a longtermist or think we're too clueless about the long-run future, then thi...
I think GPT-4 is an early AGI. I don't think it makes sense to use a binary threshold, because various intelligences (from bacteria to ants to humans to superintelligences) have varying degrees of generality.
The goalpost shifting seems like the AI effect to me: "AI is anything that has not been done yet."
I don't think it's obvious that GPT-4 isn't conscious (even for non-panpsychists), nor is it obvious that its style of intelligence is that different from what happens in our brains.
Suppose that near-term AGI progress mostly looks like making GPT smarter and smarter. Do people think this, in itself, would likely cause human extinction? How? Due to mesa-optimizers that would emerge during training of GPT? Due to people hooking GPT up to control of actions in the real world, and those autonomous systems would themselves go off the rails? Just due to accelerating disruptive social change that makes all sorts of other risks (nuclear war, bioterrorism, economic or government collapse, etc) more likely? Or do people think the AI extinction ...
I think humans may indeed find ways to scale up their control over successive generations of AIs for a while, and successive generations of AIs may be able to exert some control over their successors, and so on. However, I don't see how at the end of a long chain of successive generations we could be left with anything that cares much about our little primate goals. Even if individual agents within that system still cared somewhat about humans, I doubt the collective behavior of the society of AIs overall would still care, rather than being driven by its o...
I think a simple reward/punishment signal can be an extremely basic neural representation that "this is good/bad", and activation of escape muscles can be an extremely basic representation of an imperative to avoid something. I agree that these things seem almost completely unimportant in the simplest systems (I think nematodes aren't the simplest systems), but I also don't see any sharp dividing lines between the simplest systems and ourselves, just degrees of complexity and extra machinery. It's like the difference between a :-| emoticon and the Mona Lis...
Thanks. :)
I plausibly agree with your last paragraph, but I think illusionism as a way to (dis)solve the hard problem can be consistent with lots of different moral views about which brain processes we consider sentient. Some people take the approach I think you're proposing, in which we have stricter criteria regarding what it takes for a mind to be sentient than we might have had before learning about illusionism. Others might feel that illusionism shows that the distinction between "conscious" and "unconscious" is less fundamental than we assumed and th...
I'm not sure where to draw lines, but illusions of "this is bad!" (evaluative) or "get this to stop!" (imperative) could be enough, rather than something like "I care about avoiding pain", and I doubt nematodes have those illusions, either. It's not clear that responses to noxious stimuli, including learning or being put into a pessimistic or fearful-like state, actually indicate illusions of evaluations or imperatives. But it's also not clear what would.
You could imagine a switch between hardcoded exploratory and defensive modes of NPCs or simple non-flexible rob...
I'm not particularly well informed about current EA discourse on AI alignment, but I imagine that two possible strategies are
Yudkowsky's article helps push on the latter approach. Making the public and governments more worried about AI risk does seem to me the most plausible way of slowing it down. If more people in the national-security community worry about...
I agree that animal-welfare charities are a good choice. For s-risks, there are the Center on Long-Term Risk and Center for Reducing Suffering.
Personally I'm most enthusiastic about humane slaughter because
these random particle movements could sometimes temporarily simulate valence-generating systems by chance, even if only for a fraction of a second
I see. :) I think counterfactual robustness is important, so maybe I'm less worried about that than you? Apart from gerrymandered interpretations, I assume that even 50 nematode neurons are vanishingly rare in particle movements?
In your post on counterfactual robustness, you mention as an example that if we eliminated the unused neural pathways during torture of you, you would still scream out in pain, so it s...
Thanks for the detailed explanation! I haven't read any of the papers you linked to (just most of the summaries right now), so my comments may be misguided.
My general feeling is that simplified models of other things, including sometimes models that are resistant to change, are fairly ubiquitous in the world. For example, imagine an alert on your computer that says "Warning: RAM usage is above 90%" (so that you can avoid going up to 100% of RAM, which would slow your computer to a crawl). This alert would be an extremely simple "model" of the total amount ...
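The RAM alert described above can be sketched in a few lines (the exact threshold and message are just the ones from the example):

```python
# Minimal sketch of the "model" from the example above: a single number
# summarizing a complex underlying system, plus a threshold-triggered alert.
def ram_alert(used_bytes, total_bytes, threshold=0.9):
    """Return a warning string when RAM usage exceeds the threshold."""
    usage = used_bytes / total_bytes
    if usage > threshold:
        return f"Warning: RAM usage is above {threshold:.0%}"
    return None

print(ram_alert(15e9, 16e9))  # usage ~94%, so the warning fires
```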
I'm not committed to only illusions related to attention mattering or indicating consciousness. I suspect the illusion of body ownership is an illusion that indicates consciousness of some kind, like with the rubber hand illusion, or, in rodents, the rubber tail illusion. I can imagine illusions related to various components of experiences (e.g. redness, sound, each sense), and the ones that should matter terminally to us would be the ones related to valence and desires/preferences, basically illusions that things actually matter to the system with those i...
Thanks. :) I'm uncertain how accurate or robust the 2.3/1.5 comparison was, but you're right to cite that. And you're right that human land-use changes (including changes to forest area) likely have big effects of some kind on total arthropod welfare.
also about the sign of the welfare of arthropods
Makes sense. I have almost no uncertainty about that because I measure welfare in a suffering-focused way, according to which extreme pain is vastly more important than positive experiences. I suspect that a lot of variation in opinions on this question comes ...
By "illusionism" do you have in mind something like a higher-order view according to which noticing one's own awareness (or having a sufficiently complex model of one's attention, as in attention schema theory) is the crucial part of consciousness? I think that doesn't necessarily follow from pure illusionism itself.
As I mention here, we could take illusionism to show that the distinction between "conscious" and "unconscious" processing is more shallow and trivial than we might have thought. For example, adding a model of one's attention to a brain seems l...
I think noticing your own awareness, a self-model and a model of your own attention are each logically independent of (neither necessary nor sufficient for) consciousness. I interpret AST as claiming that illusions of conscious experience, specific ways information is processed that would lead to inferences like the kind we make about consciousness (possibly when connected to appropriate inference-making systems, even if not normally connected), are what make something conscious, and, in practice in animals, these illusions happen with the attention model ...
I see a huge gap between the optimized and organized rhythm of 302 neurons acting in concert with the rest of the body, on the one hand, and roughly random particle movements on the other hand. I think there's even a big gap between the optimized behavior of a bacterium versus the unoptimized behavior of individual particles (except insofar as we see particles themselves as optimizing for a lowest-energy configuration, etc).
If it's true that individual biological neurons are like two-layer neural networks, then 302 biological neurons would be like thousand...
What I have in mind is specifically that these random particle movements could sometimes temporarily simulate valence-generating systems by chance, even if only for a fraction of a second. I discussed this more here, and in the comments.
My impression across various animal species (mostly mammals, birds and a few insect species) is that 10-30% of neurons are in the sensory-associative structures (based on data here), and even fewer could be used to generate conscious valence (on the right inputs, say), maybe even a fraction of the neurons that ever generate...
I think net change in forest area is a major driver for the impact of humans on terrestrial arthropods.
Is that just a guess, or has someone said that explicitly? I also get the vague impression that forests have higher productivity than grasslands/etc, but that's not obvious, and I'd be curious to see more investigation of whether/when forests do have higher productivity. (This includes both primary productivity and productivity in terms of invertebrate life.)
Given the examples of cognitive abilities of nematodes mentioned here, I don't see them as a mugging. For example, here's a quote from that link:
The deterministic development of the worm's nervous system would seem to limit its usefulness as a model to study behavioral plasticity, but time and again the worm has demonstrated its extreme sensitivity to experience
It's not obvious to me why one would draw a line between mites/springtails and nematodes, rather than between ants and mites/springtails, between small fish and ants, etc.
With only 302 neurons, probably only a minority of which actually generate valenced experiences, if they're sentient at all, I might have to worry about random particle interactions in the walls generating suffering.
Nematodes also seem like very minimal RL agents that would be pretty easy to program. The fear-like behaviour seems interesting, but still plausibly easy to program.
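A "very minimal RL agent" in this spirit might be only a few lines: a single-state value learner that comes to avoid an action paired with punishment (a loose, hypothetical analogy to a nematode learning to avoid a noxious stimulus, not a claim about how nematodes actually work):

```python
import random

# Toy single-state agent: two actions, epsilon-greedy choice, incremental
# value updates. "approach" is punished, so the agent learns to withdraw.
random.seed(0)
values = {"approach": 0.0, "withdraw": 0.0}
alpha = 0.1  # learning rate

def reward(action):
    # Hypothetical environment: approaching the stimulus is punished.
    return -1.0 if action == "approach" else 0.0

for _ in range(200):
    if random.random() < 0.1:  # occasional random exploration
        action = random.choice(list(values))
    else:                      # otherwise pick the higher-valued action
        action = max(values, key=values.get)
    # standard incremental value update toward the observed reward
    values[action] += alpha * (reward(action) - values[action])

print(values["withdraw"] > values["approach"])  # → True
```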
I don't actually know much about mites or springtails, but my ignorance counts in their favour, as does their being more closely related to, and sharing more brain structures (e.g. mushroom bodies) with, arthropods with more complex behaviours that seem like better evidence for sentience (spiders for mites, and insects for springtails).
Thanks for the question. :)
That sounds like a definition of physicalism in general rather than eliminativism specifically?
I agree with the analogies in Tim's comment. As he says, the idea is that eliminativism says all physical processes are kind of on the same footing as far as not containing (the philosophically laden version of) consciousness. So it's more plausible we'd treat all physical processes as in the same boat rather than drawing sharp dividing lines.