All of Brian_Tomasik's Comments + Replies

Thanks for the question. :)

eliminativism (qualia are just a kind of physical process, not a different essence or property existing separate from physical reality)

That sounds like a definition of physicalism in general rather than eliminativism specifically?

I agree with the analogies in Tim's comment. As he says, the idea is that eliminativism says all physical processes are kind of on the same footing as far as not containing (the philosophically laden version of) consciousness. So it's more plausible we'd treat all physical processes as in the same boat rather than drawing sharp dividing lines.

Great summaries/comments!

I think this is an original argument of Tomasik’s.

The specific calculations are probably original, but the basic idea that being in a simulation would probably reduce the importance of long-term outcomes was discussed by others, such as the people mentioned in this section.

It's great to have these quotes all in one place. :)

In addition to the main point you made -- that the futures containing the most suffering are often the ones that it's too late to stop -- I would also argue that even reflective, human-controlled futures could be pretty terrible because a lot of humans have (by my lights) some horrifying values. For example, human-controlled futures might accept enormous s-risks for the sake of enormous positive value, might endorse strong norms of retribution, might severely punish outgroups or heterodoxy, might value gi... (read more)

Thanks for the kind words. :)

things like the dark tetrad traits (narcissism, machiavellianism, psychopathy, sadism) are adaptive even on a group level

Yup. And how adaptive they are depends on the distribution of other agent types. For example, against a population of pure pacifists, Dark Tetrad traits may be pretty effective. In a population of agents who cooperate with one another but punish rule-breakers, Dark Tetrad traits are probably less adaptive. Hopefully present-day society is somewhat close to the latter case, although human reproduction isn'... (read more)

Thanks for the post! There's a lot of deep food for thought in it. I agree it's nice to know that you're not alone in having these kinds of feelings.

reading an article by Brian Tomasik one night[...]. It was the most painful experience of my life.

Sorry about that! Several people have had strong adverse reactions to my discussions of suffering. On the whole I think it's better for people to be exposed to such ideas, although some particular people may be more debilitated than motivated by thinking about extreme suffering.

I notice a trend for news and Go... (read more)

Kenneth_Diao (5mo):
Hi Brian, I'm honored that you read my article and thought it was valuable! For the record, I also think that it's good to know the truth. Maybe I wish it wasn't necessary for us to know about these things, but I think it is necessary, and I very much prefer knowing about something and thus being able to act in accordance with that knowledge than not knowing about it. So yeah, don't let my adverse reaction fool you; I love your work and admire you as a person.

Regarding love and hatred, the points you brought up do make me think. I try to always keep an evolutionary perspective in mind; that is, I tend to assume something is adaptive, especially if it's survived across big time. So I think that, at least in certain environments, things like the dark tetrad traits (narcissism, machiavellianism, psychopathy, sadism) are adaptive even on a group level; maybe they reach some kind of local maximum of adaptiveness. My hope is that there is a better way to retain the adaptive behavioral manifestations of these traits while avoiding the volatile and maladaptive aspects of these traits, and my belief is that we can approach this by having more correct motivations.

Like I really idealise the approaches of people like Gandhi and MLK who recognised the wrongness of the status quo while also trying to create positive change with love and peace; I believe we need more of that. That being said, I take your point that darkness and hate can lead to love/reduction in hatred, and that this may always be true, especially in our non-ideal world.

I haven't looked into sheep and goats specifically, but I imagine their wild-animal impacts would be fairly similar to those of cattle. Unfortunately they're smaller, so there's more suffering and death per kg than for cattle, but they're still much better than chicken/fish/etc.
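To give a rough sense of the magnitudes, here's a tiny back-of-envelope sketch; the per-animal edible-weight figures are ballpark placeholders of mine, not careful estimates.

```python
# Very rough illustration of "more deaths per kg for smaller animals".
# The per-animal edible weights are ballpark placeholders, not careful estimates.

edible_kg = {"cattle": 250.0, "sheep/goat": 20.0, "chicken": 1.5}

for animal, kg in edible_kg.items():
    print(f"{animal}: ~{1 / kg:.3f} animals slaughtered per kg of meat")

# cattle ~0.004, sheep/goat ~0.05, chicken ~0.67 -- sheep/goats mean roughly
# an order of magnitude more slaughter per kg than cattle, but chickens are
# another order of magnitude beyond that.
```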

Dairy is another lower-impact option, and I guess a lot of Hindus are ok with dairy.

there's no sense in which asking dumb questions can plausibly have very significant downsides for the world (other than opportunity costs)

I think the opportunity costs are the key issue. :) There's a reason that companies use FAQs and automated phone systems to reduce the number of customer-support calls they have. There have been several times in my life when I've asked questions to someone who was sort of busy, and it was clear the person was annoyed.

At one of my previous employers (not an EA organization), I asked a lot of questions during meetings, ... (read more)

Thanks! It's worth noting that the rainforest and Cerrado numbers in that piece are very rough guesses based on limited and noisy data. As one friend of mine would say, I basically pulled those numbers out of my posterior (...distribution). :) Also, even if that comparison is accurate, it's just for one region of the world; it may not apply to the difference between, e.g., temperate forests and grasslands. All of that said, my impression is that crop fields do tend to have fewer mammals and birds than wild grassland or forest. For birds, see the screenshot of a table in this section.

Great post! In addition to biases that increase antagonism, there are also biases that reduce antagonism. For example, the fact that most EAs see each other as friends can blind us to the fact that we may in fact be quite opposed on some important questions. Plausibly this is a good thing, because friendship is a form of cooperation that tends to work in the real world. But I think friendship does make us less likely to notice or worry about large value differences.

As an example, it's plausible to me that the EA movement overall somewhat increases expected... (read more)

Thanks! Good to know. If you're just buying eyeballs, then there's roughly unlimited room for more funding (unless you were to get a lot bigger), so presumably there'd be less reason for funging dynamics. (And I assume you don't receive much or any money from big EA animal donors anyway.)

I'm honored that you're honored. :) Thanks for the work you do and for your answer here!

there are certain large grantors that I have been told prefer to fund nonprofits that already have raised at least a certain amount from other sources

Are those EA grantors? Or maybe you prefer not to say.

That makes sense about how more donors helps with fundraising. I wonder if that's more true for a startup charity that has to demonstrate its legitimacy, while for a larger and more established charity, maybe it could go the other way?

alene (1y):
One of them is and one of them isn’t! Yeah it could totally be a startup thing. :-)

Makes sense about ex ante vs ex post. :)

Are you more optimistic that various different kinds of reflection would tend to yield a fair amount of convergence? Or that our descendants will in fact undertake reflection on human values to a significant degree?

Ryan Greenblatt (1y):
More optimistic on both.

Makes sense. :) There are at least two different reasons why one might discourage taking more than one's fair share:

  1. Epistemic: As you said, there may be "collective wisdom" that an individual donor is missing.
  2. Game theoretic: If multiple donors who have different values compete in a game of chicken, this could be worse for all of them than if they can agree to cooperate.

Point #1 may be a reason to not try to outcompete others purely for its own sake. However, reason #2 depends on whether other donors are in fact playing chicken and whether it's feasibl... (read more)
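As a toy illustration of the game-theoretic point (#2 above), here's a hedged sketch with invented payoffs; it's just the standard chicken payoff structure relabeled for two donors deciding whether to fill a shared funding gap.

```python
# Toy payoff table for the donor "game of chicken" (point #2 above).
# Payoffs are invented for illustration: each donor would rather the other
# one fill a shared charity's funding gap so they can fund their own
# preferred project, but if both hold out, the gap goes unfilled.

payoffs = {
    # (donor_A_action, donor_B_action): (A's payoff, B's payoff)
    ("fund", "fund"): (2, 2),  # both chip in; gap filled, some overlap
    ("fund", "wait"): (1, 3),  # A pays, B free-rides
    ("wait", "fund"): (3, 1),  # B pays, A free-rides
    ("wait", "wait"): (0, 0),  # nobody funds; worst outcome for both
}

for actions, (a, b) in payoffs.items():
    print(actions, "->", f"A={a}, B={b}")

# A pre-agreed cooperative split avoids the (wait, wait) outcome that
# leaves both donors (and the charity) worse off.
```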

Your question is fairly relevant to the discussion because if I thought there was net positive value in the lives of wild animals, then I would have a lot fewer concerns about non-welfare-reform animal charities.

I've had it on my todo list to check out that video and paper, but I probably won't get to it any time soon, so for now I'll just reply to the slides you asked about. :)

Personally I would not want to live even as the two surviving adult fish, because they probably experience a number of moments of extreme suffering, at least during death if not ear... (read more)

Thanks! I'm confused about the acausal issue as well :) , and it's not my specialty. I agree that acausal trade (if it's possible in practice, which I'm uncertain about) could add a lot of weird dynamics to the mix. If someone was currently almost certain that Earth-originating space colonization was net bad, then this extra variance should make such a person less certain. (But it should also make people less certain who think space colonization is definitely good.) My own probabilities for Earth-originating space colonization being net bad vs good from a ... (read more)

Ryan Greenblatt (1y):
It seems as though I'm more optimistic about a 'simple' picture of reflection and enlightenment. When providing the 60/40 numbers, I was imagining something like 'probability that it's ex-ante good, as opposed to ex-post good'. This distinction is pretty unclear and I certainly didn't make this clear in my comment.

In reading more about this topic, I discovered that there has already been a lot of discussion about donor coordination on the EA Forum that I missed. (I don't read the Forum very actively.) EAs generally think it's bad to engage in a game of chicken where you try to let other people fund something first, at least within the EA community -- e.g., Cotton-Barratt (2021).

My original thought behind making this post was that the extent of funging for animal donations seemed like a useful thing for various animal donors to be aware of, to be more informed about ... (read more)

Jason (1y):
I think it's a fair topic to bring up in general, as long as the questioner isn't seeking more than their "fair share" (as it were) of control over global allocation. I think it's important that overall funding levels reflect the collective wisdom of all donors, rather than larger donors "funding last" and setting global funding levels to their own individual judgment.

Stated differently, suppose Big Fund thinks that funding should be allocated 50:50 between strategies A and B. But 80% of small/medium independent donors in the community think strategy A is better and donate to it exclusively. To me, that's evidence that 50:50 isn't the correct overall allocation, and it would be suboptimal for Big Fund to use its economic firepower to totally "correct" what the smaller donors have done. (That is not to say I think Big Fund needs to totally disregard the effects of other funders and allocate 50:50 in this circumstance. Nor am I confident in any specific mathematical construct, such as quadratic funding, to set the global funding level in this hypothetical.)

So in my example, Big Fund needs to take steps to ensure that its views are not overweighted in the global allocation of funds. It should then assure independent donors (those not giving through Big Fund) that they are exerting an appropriate amount of influence on global allocation (i.e., that they are not being practically forced to delegate their decisionmaking to Big Fund). Not doing that may suppress independent giving, as independent donors who feel they are being 100% funged by Big Fund will give based on their perception of the cost-effectiveness of Big Fund's entire portfolio without weighing the cost-effectiveness of their preferred organization.

All that is to say that your post makes me think that communication on this topic to independent donors could be improved.

it's total on-farm deaths that matter more to me than the rates, so just increasing the prices enough could reduce demand enough to reduce those deaths.

If cage-free hens are less productive, then there might still be more total deaths in cage-free despite higher prices?
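Here's a hedged back-of-envelope sketch of that possibility; the productivity and price-elasticity numbers are placeholders I made up, not real figures.

```python
# Back-of-envelope for whether cage-free could mean more total on-farm
# deaths despite higher prices. All numbers are hypothetical placeholders.

def hens_needed(egg_demand: float, eggs_per_hen: float) -> float:
    """Hens that must be raised (and eventually die) to meet demand."""
    return egg_demand / eggs_per_hen

baseline_demand = 100_000_000   # eggs per year in some market (placeholder)
caged_eggs_per_hen = 300        # eggs per hen per year (placeholder)
cagefree_eggs_per_hen = 280     # slightly less productive (placeholder)
demand_drop_from_price = 0.03   # 3% less demand at cage-free prices (placeholder)

caged_hens = hens_needed(baseline_demand, caged_eggs_per_hen)
cagefree_hens = hens_needed(baseline_demand * (1 - demand_drop_from_price),
                            cagefree_eggs_per_hen)

print(round(caged_hens), round(cagefree_hens))
# If the productivity drop (~7% here) exceeds the demand drop (~3% here),
# total hen-deaths go up; if demand falls more than productivity, they go down.
```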

I don't have a copy of the book to check, but I think Compassion, by the Pound says that cage-free hens lay fewer eggs.

A 2006 study gives some specific numbers, although this is for free-range rather than cage-free:

Layers from the free range system, compared to those kept in cages, laid

... (read more)

Good to know! Are there any other slaughter-focused groups besides HSA? Maybe you mean groups for which one of their major priorities is slaughter, like Shrimp Welfare Project and various other charities working on chickens and fish?

I saw a 2021 Open Phil grant "to Animal Protection Denmark to support research on ways to improve the welfare of wild-caught fish." But that organization itself does lots of stuff (including non-farm-animal work).

Off topic: There's a line in the movie A Cinderella Story: Christmas Wish that might be applicable to you: "was also... (read more)

Eli Rose (1y):
This is an amazing thing to learn.

they tend not to lose status because of reduced RFMF

Great point! That makes them different from GiveWell charities, where, e.g., AMF was dropped at least once due to RFMF concerns.

I suppose donating could even increase their RFMF in the longer run

Yeah, it's not obvious to me that it's right to think about RFMF decreasing as a charity gets more money. It may well be the opposite: more money means faster growth, which means more ability to use money.

OTOH, if other donors believe that RFMF is limited, then there's a possibility of them funging away any... (read more)

That's right. :) There are various additional details to consider, but that's the main idea.

Catastrophic risks have other side effects in scenarios where humanity does survive, and in most cases, humanity would survive. My impression is that apart from AI risk, biorisk is the most likely form of x-risk to cause actual extinction rather than just disruption. Nuclear winter and especially climate change seem to have a higher ratio of (probability of disruption but still survival)/(probability of complete extinction). AI extinction risk would presumably still... (read more)

Ryan Greenblatt (1y):
[Epistemic status: confused stuff that I haven't thought about that much. That said, I do think this consideration is quite real, and I've talked to suffering-focused people about this sort of thing (I'm not currently suffering-focused).]

Beyond ECL-style cooperation with values which want to reach the stars and causally reaching aliens, I think the strongest remaining case is post-singularity acausal trade. I think this consideration is actually quite strong in expectation if you think that suffering-focused ethics is common on reflection among humans (or human-originating AIs which took over) and less common among other powerful civilizations. Though this depends heavily on the relative probabilities of s-risk from different sources. My guess would be that this consideration outweighs cooperation and encountering technologically immature aliens. I normally think causal trade with technologically mature aliens/AIs from aliens and acausal trade are basically the same.

I'd guess that this consideration is probably not sufficient to think that reaching the stars is good from a negative utilitarian perspective, but I'm only like 60/40 on this (and very confused overall).

By 'on reflection' I mean something like 'after the great reflection' or what you get from indirect normativity: https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/

My guess would be that negative utilitarians should think that at least they would likely remain negative utilitarian on reflection (or that the residual is unpredictable). So probably negative utilitarians should also think negative utilitarianism is common on reflection?

Good points! I'd be curious to hear what Lewis thought of those two HSA grants and why Open Phil hasn't done more since then.

Hey Brian, I think it's too early to judge both of the HSA grants we funded because they're for long research projects, which have also gotten delayed. We'd like to fund more similar work for HSA but there have been capacity constraints on both sides. We also tend to weigh prolonged chronic suffering more highly than shorter acute suffering, so slaughter isn't as obvious a focus for us. So I think funding HSA or similar slaughter-focused groups is a good idea for EAs like you who prioritize acute suffering. On slaughter, you might like to also look into the Shrimp Welfare Project (OP-funded, but with RFMF).

Thanks! That's encouraging to hear (although it would be better for animals if the charities did fill their funding gaps).

There could still be some funging if a smaller remaining funding gap discourages other donors, such as the Animal Welfare Fund, from giving more, but at least the effect is probably less drastic than if the org hits its target RFMF fully.

Great point! Michael said something similar:

the funders may have specific total funding targets below filling their near term RFMF, and the closer to those targets, the less they give.

For example, the funders might aim for a marginal utility of 6 utilons per dollar, so using your example numbers, they would only want to fund the org up to $800K. And if someone else is already giving $100K, they would only want to give $700K.
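To make that mechanism concrete, here's a minimal sketch of a funder who gives until marginal utility falls to a cutoff; the utility curve and cutoff are hypothetical (so the dollar amounts won't match the example numbers above), but the roughly one-for-one crowding out of individual donations is the same idea.

```python
# Minimal sketch of "fund up to a marginal-utility threshold".
# The utility curve and the cutoff are hypothetical illustrations,
# not anyone's actual grantmaking model.

def marginal_utility(total_funding: float) -> float:
    """Hypothetical declining marginal utility (utilons per dollar)."""
    return 10.0 / (1.0 + total_funding / 1_000_000)  # arbitrary shape

def funder_grant(threshold: float, external_donations: float,
                 step: float = 1_000.0) -> float:
    """Grant until marginal utility drops to the threshold, counting
    money already given by others toward the total."""
    total = external_donations
    grant = 0.0
    while marginal_utility(total) > threshold:
        total += step
        grant += step
    return grant

print(funder_grant(threshold=6.0, external_donations=0))        # ~667,000
print(funder_grant(threshold=6.0, external_donations=100_000))  # ~567,000
```

With these made-up numbers, every extra dollar from an individual donor reduces the big funder's grant by about a dollar, which is the full-funging case.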

My guess would be that in practice, funders probably aren't thinking too much about a curve of marginal utility per dollar but are... (read more)

Thanks! I may have several questions later. :) For now, I was curious about your thoughts on the funging framework in general. Do you think it is the case that if one EA gives more to you, then other EAs like the Animal Welfare Fund will tend to give at least somewhat less? And how much less?

I sort of wonder if that funging model is wrong, especially in the case of rapidly growing charities. For example, suppose the Animal Welfare Fund in year 1 thinks you have enough money, so they don't grant any more. But another donor wants you to spend $25K (or whatev... (read more)

emre kaplan (1y):
I don't really have a good response to your main question as I can't speak on behalf of the grantmakers. But I might at least contribute in the following way: In our first and second years, EA Animal Welfare Fund and other funders were willing to fund us more than we requested. So if some individual donor gave money to us in our first and second years, we would basically ask for less money from EA Animal Welfare Fund and other sources. This is no longer the case as EA Animal Welfare Fund doesn't make grants larger than $100k very often. For that reason, additional individual donations have a counterfactual positive impact on our growth. But I don't know if additional individual contributions lead the grantmakers to grant less money to us. That is something grantmakers can speak about.

Thanks! That's good to know. When I looked through the Animal Welfare Fund grantees recently, Healthier Hens was one that I picked out as a possible candidate for donating to. I'm more concerned about extreme than chronic pain, but I guess HH says that bone fractures cause some intense pain as well as chronic (and of course I care about chronic pain somewhat too).

Is there info about why grantors didn't give more funding to HH? I wonder if there's something they know that I don't. (In general, that's a main downside of trying to donate off the beaten path.)

weeatquince (1y):
I don't have this info. I think it is possible that funders are not interested in Africa (HH was working in Kenya), or that funders don't value this kind of work because they see it as incremental welfare improvements that don't lead to long-run change, but I'm mostly honestly speculating ...

Yeah, more research on questions like whether beef reduces net suffering would be extremely useful, both for my personal donation decisions and more importantly for potentially shifting the priorities of the animal movement overall. My worries about funging here ultimately derive from my thinking that the movement is missing some crucial considerations (or else just has different values from me), and the best way to fix that would be for more people to highlight those considerations.

I'm unsure how more research on the welfare of populous wild animals would... (read more)

That's a useful post! It's an interesting idea. There could be some funging between Open Phil and other EA animal donors -- like, if Open Phil is handling the welfare reforms, then other donors don't have to and can donate more to non-welfare stuff. OTOH, the fact that a high-status funder like Open Phil does welfare reforms makes it more likely that other EAs follow suit.

Another thing I'd worry about is that if Open Phil's preferred animal charities have less RFMF, then maybe Open Phil would allocate less of its funds to animal welfare in general, leaving... (read more)

You'd have to donate enough to reduce the recommendation status of an org, which seems unlikely for their Top Charities, at least

It's unlikely, but if it did happen, it would be a huge negative impact, so in expectation it could still be nontrivial funging? For example, if I think one of ACE's four top charities is way better than the others, then if I donate a small amount to it, there's a tiny chance this leads to it becoming unrecommended, but if so, that would result in a ton less future funding to the org.
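As a hedged illustration of why a tiny probability can still matter, here are some made-up numbers; the point is only how the expected value scales, not what the actual probability is.

```python
# Expected funging from a small chance of tipping a charity out of
# recommended status. All numbers are invented for illustration.

my_donation = 10_000             # small individual donation ($)
p_status_loss = 0.001            # tiny chance the marginal donation tips
                                 # a recommendation decision
future_funding_lost = 3_000_000  # funding the org might forgo if it loses
                                 # recommended status ($)

expected_loss = p_status_loss * future_funding_lost
print(expected_loss)                # 3000.0
print(expected_loss / my_donation)  # 0.3 -> ~30% of the donation funged
                                    # away in expectation, on these numbers
```

Whether that's nontrivial hinges entirely on the probability estimate, which is exactly where judgments differ.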

MichaelStJules (1y):
I'd guess the funging or reduced funding this way would be small in expectation, like less than 5%? If you split your donations across multiple of these charities, you can reduce the total risk. But again, I think these orgs systematically have extra RFMF (and you could check ACE's reports to see how much), and they tend not to lose status because of reduced RFMF. Like THL and GFI have been Top Charities continuously (except GFI missing one year for culture issues). I think other orgs dropped in status usually because of culture/harassment issues or revisions to expectations of their cost-effectiveness or promisingness of their work. Also, I suppose donating could even increase their RFMF in the longer run instead of dropping the recommendation status, by addressing bottlenecks for growth.

But I suppose there could still be funging; the funders may have specific total funding targets below filling their near term RFMF, and the closer to those targets, the less they give.

Yeah. Or it could work in reverse: if they commit to giving only, say, 50% of an org's budget, then if individual donors give more, this "unlocks" the ability for the big donors to give more also. However, Karnofsky says it's a myth that Open Phil has a hard rule like this. Also, as I noted in the post, I wouldn't want them to have a hard rule like this, because it could l... (read more)

Thanks!

so the effect may be thought of as additional money going to the worst/borderline EA Animal welfare grantee

Yeah, that's the funging scenario that I had in mind. :) It's fine if everyone agrees about the ranking of the different charities. It's not great if the donor to the funged charity thinks the funged charity is significantly better than the average Animal Welfare Fund grant.

EA Animal Welfare fund does ask on their application form about counterfactual funding

Interesting! That does support the idea there is some funging that happens inte... (read more)

NunoSempere (1y):
The section "B. A value certificate equilibrium" in this post[1] might be of interest, because it kind of provides one solution to that coordination problem. In theory you could try to get the Animal Welfare fund to agree on that coordination solution, and then estimate parameters for your case, and then send a bill/donation to the Animal Welfare fund to reach that solution. That said, for relatively small amounts, my guess is that this would be too much work.

[1] Sadly amateurishly/immaturely written, though I think that the core point gets across.

If the AI didn't face any competition and was a rational agent, it might indeed want to be extremely cautious about making changes to itself or building successors, for the reason you mention. However, if there's competition among AIs, then just like in the case of a human AI arms race, there might be pressure to self-improve even at the risk of goal drift.

If an AI is built to value helping humans, and if that value can remain intact, then it wouldn't need to be "enslaved"; it would want to be nice of its own accord. However, I agree with what I take to be the thrust of your question, which is that the chances seem slim that an AI would continue to care about human concerns after many rounds of self-improvement. It seems too easy for things to slide askew from what humans wanted one way or another, especially if there's a competitive environment with complex interactions among agents.

Thanks. :) I'm personally not one of those transhumanists who welcome the transition to weird posthuman values. I would prefer for space not to be colonized at all in order to avoid astronomically increasing the amount of sentience (and therefore the amount of expected suffering) in our region of the cosmos. I think there could be some common ground, at least in the short run, between suffering-focused people who don't want space colonized in general and existential-risk people who want to radically slow down the pace of AI progress. If it were possible, t... (read more)

Geoffrey Miller (1y):
Brian - that all seems reasonable. Much to think about!

Work related to AI trajectories can still be important even if you think the expected value of the far future is net negative (as I do, relative to my roughly negative-utilitarian values). In addition to alignment, we can also work on reducing s-risks that would result from superintelligence. This work tends to be somewhat different from ordinary AI alignment, although some types of alignment work may reduce s-risks also. (Some alignment work might increase s-risks.)

If you're not a longtermist or think we're too clueless about the long-run future, then thi... (read more)

I think GPT-4 is an early AGI. I don't think it makes sense to use a binary threshold, because various intelligences (from bacteria to ants to humans to superintelligences) have varying degrees of generality.

The goalpost shifting seems like the AI effect to me: "AI is anything that has not been done yet."

I don't think it's obvious that GPT-4 isn't conscious (even for non-panpsychists), nor is it obvious that its style of intelligence is that different from what happens in our brains.

kpurens (1y):
It seems to me that consciousness is a different concept than intelligence, and one that isn't well understood and communicated because it's tough for us to differentiate them from inside our little meat-boxes! We need better definitions of intelligence and consciousness; I'm sure someone is working on it, and so perhaps just finding those people and communicating their findings is an easy way to help?  I 100% agree that these things aren't obvious--which is a great indicator that we should talk about them more!

Suppose that near-term AGI progress mostly looks like making GPT smarter and smarter. Do people think this, in itself, would likely cause human extinction? How? Due to mesa-optimizers that would emerge during training of GPT? Due to people hooking GPT up to control of actions in the real world, and those autonomous systems would themselves go off the rails? Just due to accelerating disruptive social change that makes all sorts of other risks (nuclear war, bioterrorism, economic or government collapse, etc) more likely? Or do people think the AI extinction ... (read more)

aogara (1y):
Those all seem like important risks to me, but I’d estimate the highest x-risk from agentic systems that learn to seek power or wirehead, especially after a transition to very rapid economic or scientific progress. If AI progresses slowly or is only a tool used by human operators, x-risk seems much lower to me. Good recent post on various failure modes: https://www.lesswrong.com/posts/mSF4KTxAGRG3EHmhb/ai-x-risk-approximately-ordered-by-embarrassment
Ward A (1y):
Personally, my worry stems primarily from how difficult it seems to prevent utter fools from mixing up something like ChaosGPT with GPT-5 or 6. That was a doozy for me. You don't need fancy causal explanations of misalignment if the doom-mechanism is just... somebody telling the GPT to kill us all. And somebody will definitely try.

Secondarily, I also think a gradually increasing share of GPT's activation network gets funneled through heuristics that are generally useful for all the tasks involved in minimising its loss function at INT<20, and those heuristics may not stay inner- or outer-aligned at INT>20. Such heuristics include:

  1. You get better results if you search a higher-dimensional action-space.
  2. You get better results on novel tasks if you model the cognitive processes producing those results, followed by using that model to produce results.

There's a monotonic path all the way up to consequentialism that goes something like the following.

  1. ...index and reuse algorithms that have been reliable for similar tasks, since searching a space of general algorithms is much faster than the alternative.
  2. ...extend its ability to recognise which tasks count as 'similar'.[1]
  3. ...develop meta-algorithms for more reliably putting algorithms together in increasingly complex sequences.
  4. This progression could result in something that has an explicit model of its own proxy-values, and explicitly searches a high-dimensional space of action-sequences for plans according to meta-heuristics that have historically maximised those proxy-values. Aka a consequentialist. At which point you should hope those proxy-values capture something you care about.

This is just one hypothetical zoomed-out story that makes sense in my own head, but you definitely shouldn't defer to my understanding of this. I can explain jargon upon request.

[1] Aka proxy-values. Note that just by extending the domain of inputs for which a particular algorithm is used

I think humans may indeed find ways to scale up their control over successive generations of AIs for a while, and successive generations of AIs may be able to exert some control over their successors, and so on. However, I don't see how at the end of a long chain of successive generations we could be left with anything that cares much about our little primate goals. Even if individual agents within that system still cared somewhat about humans, I doubt the collective behavior of the society of AIs overall would still care, rather than being driven by its o... (read more)

Geoffrey Miller (1y):
Hi Brian, thanks for this reminder about the longtermist perspective on humanity's future. I agree that in a million years, whatever sentient beings are around may have little interest or respect for the values that humans happen to have now.

However, one lesson from evolution is that most mutations are harmful, most populations trying to spread into new habitats fail, and most new species go extinct within about a million years. There's huge survivorship bias in our understanding of natural history. I worry that this survivorship bias leads us to radically over-estimate the likely adaptiveness and longevity of any new digital sentiences and any new transhumanist innovations. New autonomous advanced AIs are likely to be extremely fragile, just because most new complex systems that haven't been battle-tested by evolution are extremely fragile.

For this reason, I think we would be foolish to rush into any radical transhumanism, or any more advanced AI systems, until we have explored human potential further, and until we have been successfully, resiliently multi-planetary, if not multi-stellar. Once we have a foothold in the stars, and humanity has reached some kind of asymptote in what un-augmented humanity can accomplish, then it might make sense to think about the 'next phase of evolution'. Until then, any attempt to push sentient evolution faster will probably result in calamity.

I think a simple reward/punishment signal can be an extremely basic neural representation that "this is good/bad", and activation of escape muscles can be an extremely basic representation of an imperative to avoid something. I agree that these things seem almost completely unimportant in the simplest systems (I think nematodes aren't the simplest systems), but I also don't see any sharp dividing lines between the simplest systems and ourselves, just degrees of complexity and extra machinery. It's like the difference between a :-| emoticon and the Mona Lis... (read more)

MichaelStJules (1y):
I'll lay out how I'm thinking about it now after looking more into this and illusionism over the past few days. I would consider three groups of moral interpretations of illusionism, which can be further divided:

  1. A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of introspective (monitoring and/or modelling) and belief-forming process in the right way to generate a belief that something matters[1].
  2. A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of belief-forming process (with no further introspective processes) in the right way to generate a belief that something matters[1].
  3. A system/process is conscious in a morally relevant way if and only if it generates a belief that something matters[1].

I'm now tentatively most sympathetic to something like 3, although I was previously endorsing something like 2 in this thread. 1 and 2 seem plausibly trivial, so that anything matters in any way if you put all the work into the introspective and/or belief-forming processes, although maybe the actual responses of the original system/process can help break symmetries, or you can have enough restrictions on the connected introspective and/or belief-forming processes.

Frankish explicitly endorses something like 1. I think Graziano endorses something like 2 or 3, and I think Humphrey endorses something like 3. Their views of course differ further in their details besides just 1, 2 and 3, especially on what counts as the right kind of introspection or belief.

There may be accounts of beliefs according to which "a reward/punishment signal" (and/or its effects), "activation of escape muscles" or even the responses of electrons to electric fields count as beliefs that something matters. However, I suspect those and what nematodes do aren't beliefs (of mattering) under some accounts of beliefs I'm pretty sympathetic to. For example, maybe responses need

Thanks. :)

I plausibly agree with your last paragraph, but I think illusionism as a way to (dis)solve the hard problem can be consistent with lots of different moral views about which brain processes we consider sentient. Some people take the approach I think you're proposing, in which we have stricter criteria regarding what it takes for a mind to be sentient than we might have had before learning about illusionism. Others might feel that illusionism shows that the distinction between "conscious" and "unconscious" is less fundamental than we assumed and th... (read more)

I'm not sure where to draw lines, but illusions of "this is bad!" (evaluative) or "get this to stop!" (imperative) could be enough, rather than something like "I care about avoiding pain", and I doubt nematodes have those illusions, either. It's not clear that responses to noxious stimuli, including learning or being put into a pessimistic or fearful-like state, actually indicate illusions of evaluations or imperatives. But it's also not clear what would.

You could imagine a switch between hardcoded exploratory and defensive modes of NPCs or simple non-flexible rob... (read more)

I'm not particularly well informed about current EA discourse on AI alignment, but I imagine that two possible strategies are

  1. accelerating alignment research and staying friendly with the big AI companies
  2. getting governments to slow AI development in a worldwide-coordinated way, even if this angers people at AI companies.

Yudkowsky's article helps push on the latter approach. Making the public and governments more worried about AI risk does seem to me the most plausible way of slowing it down. If more people in the national-security community worry about... (read more)

I agree that animal-welfare charities are a good choice. For s-risks, there are the Center on Long-Term Risk and Center for Reducing Suffering.

Personally I'm most enthusiastic about humane slaughter because

  1. as you note in the post, excruciating pain seems vastly more important than lesser pains, and I imagine that slaughter and other physical traumas like castration, branding, dehorning, and tail docking are generally the most excruciating experiences for most food animals
  2. compared with other welfare reforms, and especially compared with meat
... (read more)

these random particle movements could sometimes temporarily simulate valence-generating systems by chance, even if only for a fraction of a second

I see. :) I think counterfactual robustness is important, so maybe I'm less worried about that than you? Apart from gerrymandered interpretations, I assume that even 50 nematode neurons are vanishingly rare in particle movements?

In your post on counterfactual robustness, you mention as an example that if we eliminated the unused neural pathways while you were being tortured, you would still scream out in pain, so it s... (read more)

Thanks for the detailed explanation! I haven't read any of the papers you linked to (just most of the summaries right now), so my comments may be misguided.

My general feeling is that simplified models of other things, including sometimes models that are resistant to change, are fairly ubiquitous in the world. For example, imagine an alert on your computer that says "Warning: RAM usage is above 90%" (so that you can avoid going up to 100% of RAM, which would slow your computer to a crawl). This alert would be an extremely simple "model" of the total amount ... (read more)
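For what it's worth, that alert really can be just a few lines; here's a minimal sketch (using the psutil library, which I'm assuming is available) of the kind of trivially simple self-monitoring "model" I have in mind.

```python
# A literal version of the RAM-usage alert: an extremely simple "model"
# of one aspect of the system's own state. Assumes psutil is installed.

import psutil

THRESHOLD = 90.0  # percent of RAM considered "too high"

usage = psutil.virtual_memory().percent
if usage > THRESHOLD:
    print(f"Warning: RAM usage is above {THRESHOLD:.0f}% (currently {usage:.1f}%)")
```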

I'm not committed to only illusions related to attention mattering or indicating consciousness. I suspect the illusion of body ownership is an illusion that indicates consciousness of some kind, like with the rubber hand illusion, or, in rodents, the rubber tail illusion. I can imagine illusions related to various components of experiences (e.g. redness, sound, each sense), and the ones that should matter terminally to us would be the ones related to valence and desires/preferences, basically illusions that things actually matter to the system with those i... (read more)

Thanks. :) I'm uncertain how accurate or robust the 2.3/1.5 comparison was, but you're right to cite that. And you're right that human land-use changes (including changes to forest area) likely have big effects of some kind on total arthropod welfare.

also about the sign of the welfare of arthropods

Makes sense. I have almost no uncertainty about that because I measure welfare in a suffering-focused way, according to which extreme pain is vastly more important than positive experiences. I suspect that a lot of variation in opinions on this question comes ... (read more)

By "illusionism" do you have in mind something like a higher-order view according to which noticing one's own awareness (or having a sufficiently complex model of one's attention, as in attention schema theory) is the crucial part of consciousness? I think that doesn't necessarily follow from pure illusionism itself.

As I mention here, we could take illusionism to show that the distinction between "conscious" and "unconscious" processing is more shallow and trivial than we might have thought. For example, adding a model of one's attention to a brain seems l... (read more)

I think noticing your own awareness, a self-model and a model of your own attention are each logically independent of (neither necessary nor sufficient for) consciousness. I interpret AST as claiming that illusions of conscious experience, specific ways information is processed that would lead to inferences like the kind we make about consciousness (possibly when connected to appropriate inference-making systems, even if not normally connected), are what make something conscious, and, in practice in animals, these illusions happen with the attention model ... (read more)

I see a huge gap between the optimized and organized rhythm of 302 neurons acting in concert with the rest of the body, on the one hand, and roughly random particle movements on the other hand. I think there's even a big gap between the optimized behavior of a bacterium versus the unoptimized behavior of individual particles (except insofar as we see particles themselves as optimizing for a lowest-energy configuration, etc).

If it's true that individual biological neurons are like two-layer neural networks, then 302 biological neurons would be like thousand... (read more)

What I have in mind is specifically that these random particle movements could sometimes temporarily simulate valence-generating systems by chance, even if only for a fraction of a second. I discussed this more here, and in the comments.

My impression across various animal species (mostly mammals, birds and a few insect species) is that 10-30% of neurons are in the sensory-associative structures (based on data here), and even fewer could be used to generate conscious valence (on the right inputs, say), maybe even a fraction of the neurons that ever generate... (read more)

I think net change in forest area is a major driver for the impact of humans on terrestrial arthropods.

Is that just a guess, or has someone said that explicitly? I also get the vague impression that forests have higher productivity than grasslands/etc, but that's not obvious, and I'd be curious to see more investigation of whether/when forests do have higher productivity. (This includes both primary productivity and productivity in terms of invertebrate life.)

Vasco Grilo (1y):
Thanks for commenting, Brian! It is a guess informed by your (great!) analysis here, where you assumed the median density of arthropods in rainforests to be 1.53 (= 2.3/1.5) times that in Cerrado, although with high uncertainty, as you noted. However, I did not mean that increasing forest area would necessarily lead to more arthropods. I just meant that the change in forest area due to human activities could be the main factor for the net change in the total welfare of arthropods. I am uncertain about the sign of the correlation because I am not only uncertain about which biomes have greater density of arthropods, but also about the sign of the welfare of arthropods. I have also illustrated here that the change in forest area might be the driver for the nearterm cost-effectiveness of GiveWell's top charities.

Given the examples of cognitive abilities of nematodes mentioned here, I don't see them as a mugging. For example, here's a quote from that link:

The deterministic development of the worm's nervous system would seem to limit its usefulness as a model to study behavioral plasticity, but time and again the worm has demonstrated its extreme sensitivity to experience

It's not obvious to me why one would draw a line between mites/springtails and nematodes, rather than between ants and mites/springtails, between small fish and ants, etc.

With only 302 neurons, probably only a minority of which actually generate valenced experiences, if they're sentient at all, I might have to worry about random particle interactions in the walls generating suffering.

Nematodes also seem like very minimal RL agents that would be pretty easy to program. The fear-like behaviour seems interesting, but still plausibly easy to program.

I don't actually know much about mites or springtails, but my ignorance counts in their favour, as does their being more closely related to and sharing more brain structures (e.g. mushroom bodies) with arthropods with more complex behaviours that seem like better evidence for sentience (spiders for mites, and insects for springtails).
