All of zdgroff's Comments + Replies

I'm a grantmaker at Longview and manage the Digital Sentience Fund—thought I'd share my thinking here: “backchaining from… making the long-term future go well conditional on no AI takeover” is my goal with the fund (with the restriction of being related to the wellbeing of AIs in a somewhat direct way), though we might disagree on how that’s best achieved through funding. Specifically, the things you’re excited about would probably be toward the top of the list of things I’m excited about, but I also think broader empirical and philosophical work and field... (read more)

6
Zach Stein-Perlman
Thanks. I'm somewhat glad to hear this. One crux is that I'm worried that broad field-building mostly recruits people to work on stuff like "are AIs conscious" and "how can we improve short-term AI welfare" rather than "how can we do digital-minds stuff to improve what the von Neumann probes tile the universe with." So the field-building feels approximately zero-value to me — I doubt you'll be able to steer people toward the important stuff in the future. A smaller crux is that I'm worried about lab-facing work similarly being poorly aimed.

Thanks for writing this, Will. I feel a bit torn on this, so I'll lay out some places where I agree and some where I disagree:

  • I agree that some of these AI-related cause areas beyond takeover risk deserve to be seen as their own cause areas as such and that lumping them all under "AI" risks being a bit inaccurate.
    • That said, I think the same could be said of some areas of animal work—wild animal welfare, invertebrate welfare, and farmed vertebrate welfare should perhaps get their own billing. And then this can keep expanding—see, e.g., OP's focus areas, which
... (read more)
1
Alix Pham
Thank you for acting on this! It's a team effort :)

I'd add to this that there's also the possibility that 1-3 happen, but much later than many people currently think. My personal take is that the probability that 'either AGI's impact comes in more than ten years or it's not that radical' is >50%, certainly far more than 0%.

I've been interested to see this book since I came across the idea. I think the argument for this being a problem from a variety of perspectives is pretty compelling.

  • For me, probably the key chapter is "Dodging the asteroid. And other benefits of other people." I'm also interested in how population issues could interact with AI-driven changes.
  • The Peter Singer quote is interesting—I'm a bit surprised given his past views on population ethics. I'm wondering if he's updated his views.
6
deanspears
There’s a nice post on the front page right now about the lifesaving power of immediate skin-to-skin contact (meaning right after birth; take a look here: https://forum.effectivealtruism.org/posts/bLZj9puhixeYajTJ2/promoting-immediate-skin-to-skin-contact-and-early-1 ). It reminded me that one thing to say about our book is that it also tells some stories from the KMC program that I’ve been involved with in Uttar Pradesh (it's a longer-term intervention, meaning days-to-weeks; I talked and wrote about it here: https://forum.effectivealtruism.org/posts/rwq8WqcQ9hxjxPmud/ask-me-questions-here-about-my-80-000-hours-podcast-on ). That might not be what you’re expecting to find in a book that’s also about progress towards an abundant future and making parenting better—but we think it all hangs together, and the connections are part of where readers here might find something interesting. Preventing neonatal deaths is an important way that people have learned from other people. And! We indeed wrote in chapter 7 about why we might need many of us to work together to solve a big problem like a pandemic, decarbonization, or, yes, maybe an asteroid.

Agree that it's not certain or obvious that AI risk is the most pressing issue (though it is 80k's best guess & my personal best guess

 

Yeah, FWIW, it's mine too. Time will tell how I feel about the change in the end. That EA Forum post on the 80K-EA community relationship feels very appropriate to me, so I think my disagreement is about the application.

I think that existential risks from various issues with AGI (especially if one includes trajectory changes) are high enough that one needn't accept fanatical views to prioritise them

 

I think the argument you linked to is reasonable. I disagree, but not strongly. Still, I think it's plausible enough that AGI concerns (from an impartial cause prioritization perspective) do require fanaticism that this should remain a significant worry. My take would be that this worry means an initially general EA org should not overwhelmingly prioritize AGI.

By my read, that post and the excerpt from it are about the rhetorical motivation for existential risk rather than the impartial ethical motivation. I basically agree that longtermism is not the right framing in most conversations, and it's also not necessary for thinking existential risk work would be more valuable than the marginal public dollar.

I included the qualifier "From an altruistic cause prioritization perspective" because I think that from an impartial cause prioritization perspective, the case is different. If you're comparing existential risk to animal welfare and global health, I think the links in my comment make the case pretty persuasively that you need longtermism.

-12
Greg_Colbourn ⏸️
zdgroff
173
63
26
1

I'm not sure exactly what this change will look like, but my current impression from this post leaves me disappointed. I say this as someone who now works on AI full-time and is mostly persuaded of strong longtermism. I think there's enough reason for uncertainty about the top cause and value in a broad community that central EA organizations should not go all-in on a single cause. This seems especially the case for 80,000 Hours, which brings people in by appealing to a general interest in doing good.

Some reasons for thinking cause diversification by the c... (read more)

1
David_Kristoffersson
This seems clearly incorrect to me. I'm surprised to see this claim fronted prominently inside a highly upvoted comment. It also strikes me as uncharitable by invoking the "fanatical" frame. Prioritizing x-risk merely requires thinking the risk of existential catastrophe is close enough in time. https://forum.effectivealtruism.org/posts/X5aJKx3f6z5sX2Ji4/the-far-future-is-not-just-the-far-future Note that I wrote this short piece in 2020, before ChatGPT. I used "50 years" to work with a conservative time frame even then. Back in 2020, I might have used 20-30 years personally. Now, in 2025, I might use 10 years personally.

Adding a bit more to my other comment:

For what it’s worth, I think it makes sense to see this as something of a continuation of a previous trend – 80k has for a long time prioritised existential risks more than the EA community as a whole. This has influenced EA (in my view, in a good way), and at the same time EA as a whole has continued to support work on other issues. My best guess is that that is good (though I'm not totally sure - EA as a whole mobilising to help things go better with AI also sounds like it could be really positively impactful).

From a

... (read more)

From an altruistic cause prioritization perspective, existential risk seems to require longtermism

No it doesn't! Scott Alexander has a great post about how existential risk issues are actually perfectly well motivated without appealing to longtermism at all.

When I'm talking to non-philosophers, I prefer an "existential risk" framework to a "long-termism" framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. I

... (read more)

Hey Zach,

(Responding as an 80k team member, though I’m quite new)

I appreciate this take; I was until recently working at CEA, and was in a lot of ways very very glad that Zach Robinson was all in on general EA. It remains the case (as I see it) that, from a strategic and moral point of view, there’s a ton of value in EA in general. It says what’s true in a clear and inspiring way, a lot of people are looking for a worldview that makes sense, and there’s still a lot we don’t know about the future. (And, as you say, non-fanaticism and pluralistic elements ha... (read more)

8
Arden Koehler
Hey Zach. I'm about to get on a plane so won't have time to write a full response, sorry! But wanted to say a few quick things before I do. Agree that it's not certain or obvious that AI risk is the most pressing issue (though it is 80k's best guess & my personal best guess, and I don't personally have the view that it requires fanaticism.) And I also hope the EA community continues to be a place where people work on a variety of issues -- wherever they think they can have the biggest positive impact. However, our top commitment at 80k is to do our best to help people find careers that will allow them to have as much positive impact as they can. & We think that to do that, more people should strongly consider and/or try out working on reducing the variety of risks that we think transformative AI poses. So we want to do much more to tell them that! In particular, from a web-specific perspective, I feel that the website isn't currently consistent with the possibility of short AI timelines & the possibility that AI might not only pose risks from catastrophic misalignment, but also other risks, plus that it will probably affect many other cause areas. Given the size of our team, I think we need to focus our new content capacity on changing that. I think this post I wrote a while ago might also be relevant here! https://forum.effectivealtruism.org/posts/iCDcJdqqmBa9QrEHv/faq-on-the-relationship-between-80-000-hours-and-the Will circle back more tomorrow / when I'm off the flight!

I think I agree with the Moral Power Laws hypothesis, but it might be irrelevant to the question of whether to try to improve the value of the future or work on extinction risk.

My thought is this: the best future is probably a convergence of many things going well, such as people being happy on average, there being many people, the future lasting a long time, and maybe some empirical/moral uncertainty stuff. Each of these things plausibly has a variety of components, creating a long tail. Yet you'd need expansive, simultaneous efforts on many fronts to get... (read more)

3
tylermjohn
Thanks, @zdgroff! I think MPL is most important if you think that there are going to be some agents shaping things, these agents' motivations are decisive for what outcomes are achieved, and you might (today) be able to align these agents with tail-valuable outcomes. Then aligning these agents with your moral values is wildly important. And by contrast marginal improvements to the agents' motivations are relatively unimportant. You're right that if you don't have any chance of optimizing any part of the universe, then MPL doesn't matter as much. Do you think that there won't be agents (even groups of them) with decisive control over what outcomes are achieved in (even parts of) the world? It seems to me in the worst case we could at least ask Dustin to try to buy one star and then eventually turn it into computronium. 
zdgroff
10
2
1
36% disagree

The value of the future conditional on civilization surviving seems positive to me, but not robustly so. I think the main argument for its being positive is theoretical (e.g., Spreading happiness to the stars seems little harder than just spreading), but the historical/contemporary record is ambiguous.

The value of improving the future seems more robustly positive if it is tractable. I suspect it is not that much less tractable than extinction risk work. I think a lot of AI risk satisfies this goal as well as the x-risk goal for reasons Will MacAskill gives... (read more)

This is a stimulating and impressively broad post.

I want to press a bit on whether these trends are necessarily bad—I think they are, but there are a few reasons why I wonder about it.

1) Secrecy: While secrecy makes it difficult or impossible to know if a system is a moral patient, it also prevents rogue actors from quickly making copies of a sentient system or obtaining a blueprint for suffering. (It also prevents rogue actors from obtaining a blueprint for flourishing, which supports your point.) How do you think about this?

2 and 3) If I understand corre... (read more)

6
Derek Shiller
There is definitely a scenario in which secrecy works out for the best. Suppose AI companies develop recognizably conscious systems in secret that they don't deploy, or deploy only with proper safeguards. If they had publicized how to build them, then it is possible that others would go ahead and be less responsible. The open source community raises some concerns. I wouldn't want conscious AI systems to be open-sourced if it was feasible to run them on hardware anyone could afford. Still, I think the dangers here are relatively modest: it seems unlikely that rogue actors will run suffering AI on a large scale in the near future. The scenario I'm most worried about is one in which the public favors policies about digital minds that are divorced from reality. Perhaps they grant rights and protections to all and only AIs that behave in sufficiently overt human-like ways. This would be a problem if human-likeness is not a good guide to moral status, either because many inhuman systems have moral status or many human-like systems lack it. Hiding the details from experts would make it more likely that we attribute moral status to the wrong AIs: AIs that trigger mind-recognizing heuristics from our evolutionary past, or AIs that the creators want us to believe are moral subjects. My primary worry is getting ahead of ourselves and not knowing what to say about the first systems that come off as convincingly conscious. This is mostly a worry in conjunction with secrecy, but the wider we explore and the quicker we do it, the less time there will be for experts to process the details, even if they have access in principle. There are other worries about exploration even if we do have proper time to assess the systems we build, but exploration may also make it more likely that we will create digital minds, and I'm an optimist that any digital minds we create will be more likely to have good lives than bad. If experts don't know what to say about new systems, the public may make up its own mind.

I think I'd be more worried about pulling out entirely than a delayed release, but either one seems possible (but IMO unlikely).

What seems less likely to work?

  • Work with the EU and the UK
    • Trump is far less likely to take regulatory inspiration from European countries and generally less likely to regulate. On the other hand, perhaps under a 2028 Dem administration we would see significant attention on EU/UK regulations.
    • The EU/UK are already scaling back the ambitions of their AI regulations out of fear that Trump would retaliate if they put limits on US companies.

 

Interesting—I've had the opposite take for the EU. The low likelihood of regulation in the US seems like it would... (read more)

5
Manuel Allgaier
Anthropic released Claude everywhere but the EU first, and their EU release happened only months later, so to some extent labs are already deprioritizing the EU market. I guess this trend would continue? Not sure. 

I think I might add this to my DIY, atheist, animal-rights Haggadah.

Answer by zdgroff
30
2
0

TLDR: Graduating Stanford economics Ph.D. primarily interested in research or grantmaking work to improve the long-term future or animal welfare.

Skills & background: My job-market details, primarily aimed at economics academia, are on my website. I am an applied microeconomist (meaning empirical work and applied theory), with my research largely falling in political economy (econ + poli sci), public economics (econ of policy impacts), and behavioral/experimental economics.

I have been involved in effective altruism for 10+ years, including having been a... (read more)

[Edited to add the second sentence of the paragraph beginning, "Putting these together."]

The primary result doesn't speak to this, but secondary results can shed some light on it. Overall, I'd guess persistence is a touch less for policies with much more support, but note that the effect of proposing a policy on later policy is likely much larger than the effect of passing a policy conditional on its having been proposed.

The first thing to note is that there are really two questions here we might want to ask:

  1. What is the effect of passing a policy change, c
... (read more)

Yep, at the risk of omitting others, Lukas Freund as well.

Yes, it's a good point that benefits and length of the period are not independent, and I agree with the footnote too.

I would note that the factors I mentioned there don't seem like they should change things that much for most issues. I could see using 50-100 years rather than, e.g., 150 years as my results would seem to suggest, but I do think 5-10 years is an order of magnitude off.

2
Vasco Grilo🔸
Could you elaborate on why you think multiplying your results by a factor of 0.5 would be enough? Do you think it would be possible to study the question with empirical data, by looking not only into how much time the policy changes persisted counterfactually, but also into the target outcomes (e.g. number of caged hens for policy around animal welfare standards)? I am guessing this would be much harder, but that there are some questions in this vicinity one could try to answer more empirically to get a sense of how much the persistence estimates you got have to be adjusted downwards.

Easy Q to answer so doesn't take much time! In economics, the norm is not to publish your job market paper until after the market for various reasons. (That way, you don't take time away from improving the paper, and the department that hires you gets credit.)

We will see before long how it publishes!

  1. I look at some things you might find relevant here. I try to measure the scale of the impact of a referendum in two ways: a subjective judgment on a five-point scale, and the secretary of state's predictions of the referendum's fiscal impact. Neither one is predictive. I also look at how many people would be directly affected by a referendum and how much news coverage there was before the election cycle. These predict less persistence.
  2. This is something I plan to do more, but they can't vary that much because when
... (read more)
1
Peter
1. Interesting. Are there any examples of what we might consider relatively small policy changes that received huge amounts of coverage? Like for something people normally wouldn't care about. Maybe these would be informative to look at compared to more hot-button issues like abortion that tend to get a lot of coverage. I'm also curious if any big issues somehow got less attention than expected and how this looks for pass/fail margins compared to other states where they got more attention. There are probably some ways to estimate this that are better than others.  2. I see.  3. I was interpreting it as "a referendum increases the likelihood of the policy existing later." My question is about the assumptions that lead to this view and the idea that it might be more effective to run a campaign for a policy ballot initiative once and never again. Is this estimate of the referendum effect only for the exact same policy (maybe an education tax but the percent is slightly higher or lower) or similar policies (a fee or a subsidy or voucher or something even more different)? How similar do they have to be? What is the most different policy that existed later that you think would still count?

I do look at predictors a bit—though note that it's not about what makes it harder to repeal but rather about what makes a policy change/choice influential decades later.

The main takeaway is there aren't many predictors—the effect is remarkably uniform. I can't look at things around the structure of the law (e.g., integration in a larger bill), but I'd be surprised if something like complexity of language or cross-party support made a difference in what I'm looking at.

Yeah, Jack, I think you're capturing my thinking here (which is an informal point for this audience rather than something formal in the paper). I look at measures of how much people were interested in a policy well before the referendum or how much we should expect them to be interested after the referendum. It looks like both of these predict less persistence. So the thought is that things that generally are less salient when not on the ballot are more persistent.

See my reply to Neil Dullaghan—I think that gives somewhat of a sense here. Some other things:

  • I don't have a ton of observations on any one specific policy, so I can't say much about whether some special policy area (e.g., pollution regulation) exhibits a different pattern.
  • I look at whether this policy, or a version of it, is in place. This should capture anything that would be a direct and obvious substitute, but there might be looser substitutes that end up passing if you fail to pass an initial policy. The evidence I do have on this suggests it's small,
... (read more)

I didn't write down a prior. I think if I had, it would have been less persistence. I think I would have guessed five years was an underestimate. (I think probably many people making that assumption would also have guessed it was an underestimate but were erring on the side of conservatism.)

Yes, basically (if I understand correctly). If you think a policy has impact X for each year it's in place, and you don't discount, then the impact of causing it to pass rather than fail is something on the order of 100 * X. The impact of funding a campaign to pass it is bigger, though, because you presumably don't want to count the possibility that you fund it later as part of the counterfactual (see my note above about Appendix Figure D20).
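To make that order-of-magnitude claim concrete, here is a minimal sketch of the arithmetic (illustrative only, using the assumptions stated above: a constant per-year impact X, no discounting, and a counterfactual persistence horizon T on the order of 100 years):

```latex
% Minimal sketch of the undiscounted impact of causing a policy to pass rather than fail.
% Assumptions (illustrative, from the comment above): constant per-year impact X,
% counterfactual persistence of roughly T years with T ~ 100, no discounting.
\[
  \text{Impact of passage} \;\approx\; \sum_{t=1}^{T} X \;=\; T \cdot X \;\approx\; 100\,X .
\]
% With a positive discount rate r, this would instead be X * (1 - (1+r)^{-T}) / r,
% which is smaller than T * X.
```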

 Some things to keep in mind:

  1. Impacts might change over time (e.g., a policy stops mattering in 50 years even if
... (read more)

A few things:

  • I do find these patterns when I look at a few different types of policies (referendums, legislation, state vs. Congress, U.S. vs. international), so there's some reason to think it's not just state referendums. 
  • There's a paper on the repeals of executive orders that finds an even lower rate of repeals there, but that doesn't tell us the counterfactual (i.e., would someone else have done this if the president in question did not).
  • There's suggestive evidence that when policies are more negotiable, there's less persistence. In my narrative/c
... (read more)
1
Madhav Malhotra
Sorry if I missed this in your post, but how many policies did you analyse that were passed via referendum vs. by legislation? How many at the state level vs. federal US vs. international?

I think there are probably ways to tackle that but don't have anything shovel-ready. I'd want to look at the general evidence on campaign spending and what methods have been used there, then see if any of those would apply (with some adaptations) to this case.

Thanks a lot! And good luck on the job market to you—let's connect when we're through with this (or if we have time before then).

1
Andrew Gimber
Good luck, both! Are there any other economist EAs on the job market this year?
2
Seth Ariel Green 🔸
It's fun to see job market candidates posting summaries here! (@basil.halperin I just saw your paper on MR.) It's a great venue for a high-level summary. Good luck to you both!

I'm very glad to see you working and thinking about this—it seems pretty neglected within the EA community. (I'm aware of and agree with the thought that speeding up space settlement is not a priority, but making sure it goes well if it happens does seem important.)

Oh, that's a good idea. I had thought of something quite different and broader, but this also seems like a promising approach.

Yeah, I think that would reduce the longevity in expectation, maybe by something like 2x. My research includes things that could hypothetically fall under congressional authority and occasionally do. (Anything could fall under congressional authority, though some might require a constitutional amendment.) So I don't think this is dramatically out of sample, but I do think it's worth keeping in mind.

The former, though I don't have estimates of the counterfactual timeline of corporate campaigns. (I'd like to find a way to do that and have toyed with it a bit but currently don't have one.)

2
Michael St Jules 🔸
Maybe you can get estimates for corporate campaign counterfactuals from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4219976 or based on a similar methodology?

I believe 4 years is very conservative. I'm working on a paper due November that should basically answer the question in part 1, but suffice it to say I think the ballot measures should look many times more cost-effective than corporate campaigns.

4
Laura Duffy
One consideration that Peter Wildeford made me think of is that, with the initiatives that do fall under Congress’ Interstate Commerce Clause authority, we might expect the longevity to be reduced. For example, if every five years a Congressperson puts into the Farm Bill a proposal to ban states from having Prop 12-style regulations, there’s some chance this passes eventually. Does your research include any initiatives that do fall under Congressional authority?
8
Michael St Jules 🔸
Awesome, I'm looking forward to it! Given similar costs per hen-year per year of impact according to Laura's report, are you expecting ballot initiatives to have longer counterfactuals than corporate campaigns? Or, do you think ballot initiatives are more cost-effective per hen-year per year of impact? (Or both?)

From what I can tell, the climate change one seems like the one with the most support in the literature. I'm not sure how much the consensus in favor of the human cause of megafauna extinctions (which I buy) generalizes to the extinction of other species in the Homo genus. Most of the Homo extinctions happened much earlier than the megafauna ones. But it could be—I have not given much thought to whether this consensus generalizes.

The other thing is that "extinction" sometimes happened in the sense that the species interbred with the larger population of Homo sapiens, and I would not count that as the relevant sort of extinction here.

Yeah, this is an interesting one. I'd basically agree with what you say here. I looked into it and came away thinking (a) it's very unclear what the actual base rate is, but (b) it seems like it probably roughly resembles the general species one I have here. Given (b), I bumped up how much weight I put on the species reference class, but I did not include the human subspecies as a reference class here given (a).

From my exploration, it looked like there had been loose claims about many of them going extinct because of Homo sapiens, but it seemed like this w... (read more)

2
Vasco Grilo🔸
Interesting discussion, Linch and Zach. Relatedly, people may want to check the episode of Dwarkesh Podcast with David Reich.
4
Linch
Thanks for the reply! I appreciate it and will think further. To confirm, you find the climate change extinction hypotheses very credible here? I know very little about the topic except I vaguely recall that some scholars also advanced climate change as the hypothesis for the megafauna extinctions but these days it's generally considered substantially less credible than human origin.

Very strong +1 to all this. I honestly think it's the most neglected area relative to its importance right now. It seems plausible that the vast majority of future beings will be digital, so it would be surprising if longtermism does not imply much more attention to the issue.

I take 5%-60% as an estimate of how much of human civilization's future value will depend on what AI systems do, but it does not necessarily exclude human autonomy. If humans determine what AI systems do with the resources they acquire and the actions they take, then AI could be extremely important, and humans would still retain autonomy.

I don't think this really left me more or less concerned about losing autonomy over resources. It does feel like this exercise made it starker that there's a large chance of AI reshaping the world beyond human extinction. ... (read more)

I guess I would think that if one wants to argue for democracy as an intrinsic good, that would get you global democracy (and global control of EA funds), and it's practical and instrumental considerations (which, anyway, are all the considerations in my view) that cut against it.

It seems like the critics would claim that EA is, if not coercing or subjugating, at least substantially influencing something like the world population in a way that meets the criteria for democratisation. This seems to be the claim in arguments about billionaire philanthropy, for example. I'm not defending or vouching for that claim, but I think whether we are in a sufficiently different situation may be contentious.

That argument would be seen as too weak in the political theory context. Then powerful states would have to enfranchise everyone in the world and form a global democracy. It also is too strong in this context, since it implies global democratic control of EA funds, not community control.

This argument seems fair to apply to CEA's funding decisions insofar as they influence the community, but I do not think that I, as a self-described EA, have more justification to decide over bed net distribution than the people of Kenya who are directly affected.

Well I think MIC relies on some sort of discontinuity this century, and when we start getting into the range of precedented growth rates, the discontinuity looks less likely.

But we might not be disagreeing much here. It seems like a plausibly important update, but I'm not sure how large.

This is a valuable point, but I do think that giving real weight to a world where we have neither extinction nor 30% growth would still be an update to important views about superhuman AI. It seems like evidence against the Most Important Century thesis, for example.

2
Joel Becker
An update, yeh, but how important?  I think Most Important Century still goes through if you replace extinction/TAI with "bigdealness". In fact, bigdealness takes up considerably more space for me.  To the degree that non-extinction/TAI-bigdealness decreases the magnitude of implications for financial markets in particular, it is more consistent with the current state of financial markets.

It might be challenging to borrow (though I'm not sure), but there seem to be plenty of sophisticated entities that should be selling off their bonds and aren't. The top-level comment does cut into the gains from shorting (as the OP concedes), but I think it's right that there are borrowing-esque things to do.

7
lexande
If you're in charge of investing decisions for a pension fund or sovereign wealth fund or similar, you likely can't personally derive any benefit from having the fund sell off its bonds and other long-term assets now. You might do this in your personal account but the impact will be small. For government bonds in particular it also seems relevant that I think most are held by entities that are effectively required to hold them for some reason (e.g. bank capital requirements, pension fund regulations) or otherwise oddly insensitive to their low ROI compared to alternatives. See also the "equity premium puzzle".

The reason sophisticated entities like e.g. hedge funds hold bonds isn't so they can collect a cash flow 10 years from now. It's because they think bond prices will go up tomorrow, or next year. 

The big entities that hold bonds for the future cash flows are e.g. pension funds. It would be very surprising and (I think) borderline illegal if the pension funds ever started reasoning, "I guess I don't need to worry about cash flows after 2045, since the world will probably end before then. So I'll just hold shorter-term assets."

I think this adds up to, no... (read more)

I'm trying to make sure I understand: Is this (a more colorful version) of the same point as the OP makes at the end of "Bet on real rates rising"?

The other risk that could motivate not making this bet is the risk that the market – for some unspecified reason – never has a chance to correct, because (1) transformative AI ends up unaligned and (2) humanity’s conversion into paperclips occurs overnight. This would prevent the market from ever “waking up”.

However, to be clear, expecting this specific scenario requires both: 

  1. Buying into spe
... (read more)
3
EliezerYudkowsky
I wouldn't say that I have "a lot of" skepticism about the applicability of the EMH in this case; you only need realism to believe that the bar is above USDT and Covid, for a case where nobody ever says 'oops' and the market never pays out.

It doesn't seem all that relevant to me whether traders have a probability like that in their heads. Whether they have a low probability or are not thinking about it, they're approximately leaving money on the table in a short-timelines world, which should be surprising. People have a large incentive to hunt for important probabilities they're ignoring.

Of course, there are examples (cf. behavioral economics) of systemic biases in markets. But even within behavioral economics, it's fairly commonly known that it's hard to find ongoing, large-scale biases in financial markets.

Do you have a sense of whether the case is any stronger for specifically using cortical and pallial neurons? That's the approach Romain Espinosa takes in this paper, which is among the best work in economics on animal welfare.

2
Adam Shriver
It's an interesting thought, although I'd note that quite a few prominent authors would disagree that the cortex is ultimately what matters for valence even in mammals (Jaak Panksepp being a prominent example). I think it'd also raise interesting questions about how to generalize this idea to organisms that don't have cortices. Michael used mushroom bodies in insects as an example, but is there reason to think that mushroom bodies in insects are "like the cortex and pallium" but unlike various subcortical structures in the brain that also play a role in integrating information from different sensory sources?  I think there needs to be a more principled specification of which types of neurons are ultimately counted.
9[anonymous]
I'm curious about this as well. I'm also really confused about the extent to which this measure is just highly correlated with overall neuron count. The wikipedia page on neuron and pallial/cortical counts in animals lists humans as having lower pallial/cortical neuron counts than orcas and elephants while "Animals and Social Welfare" lists the reverse. Based on the Wikipedia page, it seems that there is a strong correlation (and while I know basically nothing about neuroscience, I would maybe think the same arguments apply?). I looked at some of the papers that the wikipedia page cited and couldn't consistently locate the cited number but they might have just had to multiply e.g. pallial neuron density by brain mass and I wouldn't know which numbers to multiply.
Answer by zdgroff
16
❤️1

My husband and I are planning to donate to Wild Animal Initiative and Animal Charity Evaluators; we've also supported a number of political candidates this year (not tax deductible) who share our values. 

We've been donating to WAI for a while, as we think they have a thoughtful, skilled team tackling a problem with a sweeping scale and scant attention. 

We also support ACE's work to evaluate and support effective ways to help animals. I'm on the board there, and we're excited about ACE's new approach to evaluations and trajectory for the coming years.

Yes, and thank you for the detailed private proposal you sent the research team. I didn't see it but heard about it, and it seems like it was a huge help and just a massive amount of volunteer labor. I know they really appreciated it.

I'm an ACE board member, so full disclosure on that, though what I say here is in my personal capacity.

I'm very glad about a number of improvements to the eval process that are not obvious from this post. In particular, there are now numeric cost-effectiveness ratings that I found clarifying, overall explanations for each recommendation, and clearer delineation of the roles the "programs" and "cost-effectiveness" sections play in the reviews. I expect these changes to make recommendations more scope sensitive. This leaves me grateful for and confident in the new review framework.

6
NunoSempere
Nice.

As I noted on the nuclear post, I believe this is based on a (loosely speaking) person-affecting view (mentioned in Joel and Ben's back-and-forth below). That seems likely to me to bias the cost-effectiveness downward.

Like Fin, I'm very surprised by how well this performs given takes in other places (e.g. The Precipice) on how asteroid prevention compares to other x-risk work.
