All of zdgroff's Comments + Replies

I think I might add this to my DIY, atheist, animal-rights Haggadah.

Answer by zdgroff, Feb 05, 2024

TLDR: Graduating Stanford economics Ph.D. primarily interested in research or grantmaking work to improve the long-term future or animal welfare.

Skills & background: My job-market details, primarily aimed at economics academia, are on my website. I am an applied microeconomist (meaning empirical work and applied theory), with my research largely falling in political economy (econ + poli sci), public economics (econ of policy impacts), and behavioral/experimental economics.

I have been involved in effective altruism for 10+ years, including having been a... (read more)

[Edited to add the second sentence of the paragraph beginning, "Putting these together."]

The primary result doesn't speak to this, but secondary results can shed some light on it. Overall, I'd guess persistence is a touch less for policies with much more support, but note that the effect of proposing a policy on later policy is likely much larger than the effect of passing a policy conditional on its having been proposed.

The first thing to note is that there are really two questions here we might want to ask:

  1. What is the effect of passing a policy change, c
... (read more)

Yep, at the risk of omitting others, Lukas Freund as well.

Yes, it's a good point that benefits and length of the period are not independent, and I agree with the footnote too.

I would note that the factors I mentioned there don't seem like they should change things that much for most issues. I could see using 50-100 years rather than, e.g., 150 years as my results would seem to suggest, but I do think 5-10 years is an order of magnitude off.

Vasco Grilo
5mo
Could you elaborate on why you think multiplying your results by a factor of 0.5 would be enough? Do you think it would be possible to study the question with empirical data, by looking not only into how much time the policy changes persisted counterfactually, but also into the target outcomes (e.g. number of caged hens for policy around animal welfare standards)? I am guessing this would be much harder, but that there are some questions in this vicinity one could try to answer more empirically to get a sense of how much the persistence estimates you got have to be adjusted downwards.

Easy Q to answer so doesn't take much time! In economics, the norm is not to publish your job market paper until after the market for various reasons. (That way, you don't take time away from improving the paper, and the department that hires you gets credit.)

We will see before long how it publishes!

  1. I look at some things you might find relevant here. I try to measure the scale of a referendum's impact in two ways: a subjective judgment on a five-point scale, and the secretary of state's predictions of the referendum's fiscal impact. Neither one is predictive. I also look at how many people would be directly affected by a referendum and how much news coverage there was before the election cycle. These do predict less persistence.
  2. This is something I plan to do more, but they can't vary that much because when
... (read more)
Peter
5mo
  1. Interesting. Are there any examples of what we might consider a relatively small policy change that received huge amounts of coverage, for something people normally wouldn't care about? These might be informative to compare against hot-button issues like abortion that tend to get a lot of coverage. I'm also curious whether any big issues somehow got less attention than expected, and how their pass/fail margins compare to other states where they got more attention. There are probably some ways to estimate this that are better than others.
  2. I see.
  3. I was interpreting it as "a referendum increases the likelihood of the policy existing later." My question is about the assumptions behind this view and the idea that it might be more effective to run a campaign for a policy ballot initiative once and never again. Is this estimate of the referendum effect only for the exact same policy (maybe an education tax where the percentage is slightly higher or lower), or for similar policies (a fee or a subsidy or voucher or something even more different)? How similar do they have to be? What is the most different policy that existed later that you think would still count?

I do look at predictors a bit—though note that it's not about what makes it harder to repeal but rather about what makes a policy change/choice influential decades later.

The main takeaway is there aren't many predictors—the effect is remarkably uniform. I can't look at things around the structure of the law (e.g., integration in a larger bill), but I'd be surprised if something like complexity of language or cross-party support made a difference in what I'm looking at.

Yeah, Jack, I think you're capturing my thinking here (which is an informal point for this audience rather than something formal in the paper). I look at measures of how much people were interested in a policy well before the referendum or how much we should expect them to be interested after the referendum. It looks like both of these predict less persistence. So the thought is that things that generally are less salient when not on the ballot are more persistent.

See my reply to Neil Dullaghan—I think that gives somewhat of a sense here. Some other things:

  • I don't have a ton of observations on any one specific policy, so I can't say much about whether some special policy area (e.g., pollution regulation) exhibits a different pattern.
  • I look at whether this policy, or a version of it, is in place. This should capture anything that would be a direct and obvious substitute, but there might be looser substitutes that end up passing if you fail to pass an initial policy. The evidence I do have on this suggests it's small,
... (read more)

I didn't write down a prior. I think if I had, it would have been less persistence. I think I would have guessed five years was an underestimate. (I think probably many people making that assumption would also have guessed it was an underestimate but were erring on the side of conservatism.)

Yes, basically (if I understand correctly). If you think a policy has impact X for each year it's in place, and you don't discount, then the impact of causing it to pass rather than fail is something on the order of 100 * X. The impact of funding a campaign to pass it is bigger, though, because you presumably don't want to count the possibility that you fund it later as part of the counterfactual (see my note above about Appendix Figure D20).

Some things to keep in mind:

  1. Impacts might change over time (e.g., a policy stops mattering in 50 years even if
... (read more)
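A minimal sketch of that arithmetic (the 100-year horizon and the 3% discount rate below are illustrative assumptions for this comment thread, not estimates from the paper):

```python
def passage_impact(annual_impact, horizon_years=100, discount_rate=0.0):
    """Total impact of causing a policy to pass rather than fail.

    Sums the per-year impact X over an assumed persistence horizon,
    optionally discounted. With no discounting and a ~100-year horizon,
    this reproduces the 'on the order of 100 * X' figure above.
    """
    return sum(
        annual_impact / (1 + discount_rate) ** t
        for t in range(horizon_years)
    )

# No discounting: impact of passage is ~100 * X.
print(passage_impact(1.0))  # 100.0

# Discounting, or impacts fading over time, shrinks the total.
print(passage_impact(1.0, discount_rate=0.03))
```

As the comment notes, the impact of funding a campaign is a separate, larger quantity, since the counterfactual there should not include funding it later yourself.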

A few things:

  • I do find these patterns when I look at a few different types of policies (referendums, legislation, state vs. Congress, U.S. vs. international), so there's some reason to think it's not just state referendums. 
  • There's a paper on the repeals of executive orders that finds an even lower rate of repeals there, but that doesn't tell us the counterfactual (i.e., would someone else have done this if the president in question did not).
  • There's suggestive evidence that when policies are more negotiable, there's less persistence. In my narrative/c
... (read more)
Madhav Malhotra
6mo
Sorry if I missed this in your post, but how many policies did you analyse that were passed via referendum vs. by legislation? How many at the state level vs. federal US vs. international?

I think there are probably ways to tackle that but don't have anything shovel-ready. I'd want to look at the general evidence on campaign spending and what methods have been used there, then see if any of those would apply (with some adaptations) to this case.

Thanks a lot! And good luck on the job market to you—let's connect when we're through with this (or if we have time before then).

Andrew Gimber
5mo
Good luck, both! Are there any other economist EAs on the job market this year?
Seth Ariel Green
6mo
It's fun to see job market candidates posting summaries here! (@basil.halperin I just saw your paper on MR.) It's a great venue for a high-level summary. Good luck to you both!

I'm very glad to see you working and thinking about this—it seems pretty neglected within the EA community. (I'm aware of and agree with the thought that speeding up space settlement is not a priority, but making sure it goes well if it happens does seem important.)

Oh, that's a good idea. I had thought of something quite different and broader, but this also seems like a promising approach.

Yeah, I think that would reduce the longevity in expectation, maybe by something like 2x. My research includes things that could hypothetically fall under congressional authority and occasionally do. (Anything could fall under congressional authority, though some might require a constitutional amendment.) So I don't think this is dramatically out of sample, but I do think it's worth keeping in mind.

The former, though I don't have estimates of the counterfactual timeline of corporate campaigns. (I'd like to find a way to do that and have toyed with it a bit but currently don't have one.)

MichaelStJules
10mo
Maybe you can get estimates for corporate campaign counterfactuals from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4219976 or based on a similar methodology?

I believe 4 years is very conservative. I'm working on a paper due in November that should basically answer the question in part 1, but suffice it to say I think the ballot measures should look many times more cost-effective than corporate campaigns.

Laura Duffy
10mo
One consideration that Peter Wildeford made me think of is that, with the initiatives that do fall under Congress’ Interstate Commerce Clause authority, we might expect the longevity to be reduced. For example, if every five years a Congressperson puts into the Farm Bill a proposal to ban states from having Prop 12-style regulations, there’s some chance this passes eventually. Does your research include any initiatives that do fall under Congressional authority?
MichaelStJules
10mo
Awesome, I'm looking forward to it! Given similar costs per hen-year per year of impact according to Laura's report, are you expecting ballot initiatives to have longer counterfactuals than corporate campaigns? Or do you think ballot initiatives are more cost-effective per hen-year per year of impact? (Or both?)

From what I can tell, the climate change one seems like the one with the most support in the literature. I'm not sure how much the consensus in favor of the human cause of megafauna extinctions (which I buy) generalizes to the extinction of other species in the Homo genus. Most of the Homo extinctions happened much earlier than the megafauna ones. But it could be—I have not given much thought to whether this consensus generalizes.

The other thing is that "extinction" sometimes happened in the sense that the species interbred with the larger population of Homo sapiens, and I would not count that as the relevant sort of extinction here.

Yeah, this is an interesting one. I'd basically agree with what you say here. I looked into it and came away thinking (a) it's very unclear what the actual base rate is, but (b) it seems like it probably roughly resembles the general species one I have here. Given (b), I bumped up how much weight I put on the species reference class, but I did not include the human subspecies as a reference class here given (a).

From my exploration, it looked like there had been loose claims about many of them going extinct because of Homo sapiens, but it seemed like this w... (read more)

Linch
11mo
Thanks for the reply! I appreciate it and will think further. To confirm, you find the climate change extinction hypotheses very credible here? I know very little about the topic except I vaguely recall that some scholars also advanced climate change as the hypothesis for the megafauna extinctions but these days it's generally considered substantially less credible than human origin.

Very strong +1 to all this. I honestly think it's the most neglected area relative to its importance right now. It seems plausible that the vast majority of future beings will be digital, so it would be surprising if longtermism does not imply much more attention to the issue.

I take 5%-60% as an estimate of how much of human civilization's future value will depend on what AI systems do, but it does not necessarily exclude human autonomy. If humans determine what AI systems do with the resources they acquire and the actions they take, then AI could be extremely important, and humans would still retain autonomy.

I don't think this really left me more or less concerned about losing autonomy over resources. It does feel like this exercise made it starker that there's a large chance of AI reshaping the world beyond human extinction. ... (read more)

I guess I would think that if one wants to argue for democracy as an intrinsic good, that would get you global democracy (and global control of EA funds), and it's practical and instrumental considerations (which, anyway, are all the considerations in my view) that bite against it.

It seems like the critics would claim that EA is, if not coercing or subjugating, at least substantially influencing something like the world population in a way that meets the criteria for democratisation. This seems to be the claim in arguments about billionaire philanthropy, for example. I'm not defending or vouching for that claim, but I think whether we are in a sufficiently different situation may be contentious.

That argument would be seen as too weak in the political theory context. Then powerful states would have to enfranchise everyone in the world and form a global democracy. It also is too strong in this context, since it implies global democratic control of EA funds, not community control.

This argument seems fair to apply to CEA's funding decisions, as they influence the community, but I do not think I, as a self-described EA, have more justification to decide over bed net distribution than the people of Kenya who are directly affected.

Well, I think the Most Important Century thesis relies on some sort of discontinuity this century, and when we start getting into the range of precedented growth rates, the discontinuity looks less likely.

But we might not be disagreeing much here. It seems like a plausibly important update, but I'm not sure how large.

This is a valuable point, but I do think that giving real weight to a world where we have neither extinction nor 30% growth would still be an update to important views about superhuman AI. It seems like evidence against the Most Important Century thesis, for example.

Joel Becker
1y
An update, yeh, but how important?  I think Most Important Century still goes through if you replace extinction/TAI with "bigdealness". In fact, bigdealness takes up considerably more space for me.  To the degree that non-extinction/TAI-bigdealness decreases the magnitude of implications for financial markets in particular, it is more consistent with the current state of financial markets.

It might be challenging to borrow (though I'm not sure), but there seem to be plenty of sophisticated entities that should be selling off their bonds and aren't. The top-level comment does cut into the gains from shorting (as the OP concedes), but I think it's right that there are borrowing-esque things to do.

lexande
1y
If you're in charge of investing decisions for a pension fund or sovereign wealth fund or similar, you likely can't personally derive any benefit from having the fund sell off its bonds and other long-term assets now. You might do this in your personal account but the impact will be small. For government bonds in particular it also seems relevant that I think most are held by entities that are effectively required to hold them for some reason (e.g. bank capital requirements, pension fund regulations) or otherwise oddly insensitive to their low ROI compared to alternatives. See also the "equity premium puzzle".

The reason sophisticated entities like e.g. hedge funds hold bonds isn't so they can collect a cash flow 10 years from now. It's because they think bond prices will go up tomorrow, or next year. 

The big entities that hold bonds for the future cash flows are e.g. pension funds. It would be very surprising and (I think) borderline illegal if the pension funds ever started reasoning, "I guess I don't need to worry about cash flows after 2045, since the world will probably end before then. So I'll just hold shorter-term assets."

I think this adds up to, no... (read more)

I'm trying to make sure I understand: Is this (a more colorful version) of the same point as the OP makes at the end of "Bet on real rates rising"?

The other risk that could motivate not making this bet is the risk that the market – for some unspecified reason – never has a chance to correct, because (1) transformative AI ends up unaligned and (2) humanity’s conversion into paperclips occurs overnight. This would prevent the market from ever “waking up”.

However, to be clear, expecting this specific scenario requires both: 

  1. Buying into spe
... (read more)
EliezerYudkowsky
1y
I wouldn't say that I have "a lot of" skepticism about the applicability of the EMH in this case; you only need realism to believe that the bar is above USDT and Covid, for a case where nobody ever says 'oops' and the market never pays out.

It doesn't seem all that relevant to me whether traders have a probability like that in their heads. Whether they have a low probability or are not thinking about it, they're approximately leaving money on the table in a short-timelines world, which should be surprising. People have a large incentive to hunt for important probabilities they're ignoring.

Of course, there are examples (cf. behavioral economics) of systemic biases in markets. But even within behavioral economics, it's fairly commonly known that it's hard to find ongoing, large-scale biases in financial markets.

Do you have a sense of whether the case is any stronger for specifically using cortical and pallial neurons? That's the approach Romain Espinosa takes in this paper, which is among the best work in economics on animal welfare.

Adam Shriver
1y
It's an interesting thought, although I'd note that quite a few prominent authors would disagree that the cortex is ultimately what matters for valence even in mammals (Jaak Panksepp being a prominent example). I think it'd also raise interesting questions about how to generalize this idea to organisms that don't have cortices. Michael used mushroom bodies in insects as an example, but is there reason to think that mushroom bodies in insects are "like the cortex and pallium" but unlike various subcortical structures in the brain that also play a role in integrating information from different sensory sources? I think there needs to be more specification, in a principled way, of which types of neurons are ultimately counted.
[anonymous]
1y
I'm curious about this as well. I'm also really confused about the extent to which this measure is just highly correlated with overall neuron count. The Wikipedia page on neuron and pallial/cortical counts in animals lists humans as having lower pallial/cortical neuron counts than orcas and elephants, while "Animals and Social Welfare" lists the reverse. Based on the Wikipedia page, it seems that there is a strong correlation (and while I know basically nothing about neuroscience, I would think the same arguments might apply). I looked at some of the papers the Wikipedia page cited and couldn't consistently locate the cited numbers, but they might have just multiplied, e.g., pallial neuron density by brain mass, and I wouldn't know which numbers to multiply.
Answer by zdgroff, Nov 28, 2022

My husband and I are planning to donate to Wild Animal Initiative and Animal Charity Evaluators; we've also supported a number of political candidates this year (not tax deductible) who share our values. 

We've been donating to WAI for a while, as we think they have a thoughtful, skilled team tackling a problem with a sweeping scale and scant attention. 

We also support ACE's work to evaluate and support effective ways to help animals. I'm on the board there, and we're excited about ACE's new approach to evaluations and trajectory for the coming years.

Yes, and thank you for the detailed private proposal you sent the research team. I didn't see it but heard about it, and it seems like it was a huge help and just a massive amount of volunteer labor. I know they really appreciated it.

I'm an ACE board member, so full disclosure on that, though what I say here is in my personal capacity.

I'm very glad about a number of improvements to the eval process that are not obvious from this post. In particular, there are now numeric cost-effectiveness ratings that I found clarifying, overall explanations for each recommendation, and clearer delineation of the roles the "programs" and "cost-effectiveness" sections play in the reviews. I expect these changes to make recommendations more scope sensitive. This leaves me grateful for and confident in the new review framework.

NunoSempere
1y
Nice.

As I noted on the nuclear post, I believe this is based on a (loosely speaking) person-affecting view (mentioned in Joel and Ben's back-and-forth below). That seems likely to me to bias the cost-effectiveness downward.

Like Fin, I'm very surprised by how well this performs given takes in other places (e.g. The Precipice) on how asteroid prevention compares to other x-risk work.

Worth flagging that I believe this is based on a (loosely speaking) person-affecting view (mentioned in Joel and Ben's back-and-forth below). That seems to me to bias the cost-effectiveness of anything that poses a sizable extinction risk dramatically downward.

At the same time, I find both the empirical work and the inside-view thinking here very impressive for a week's work, and it seems like even those without a person-affecting view can learn a lot from this.

Thanks for writing this. I think about these sorts of things a lot. Given the title, do you know of examples of movements that did not start academic disciplines and appear to have suffered as a result?

The Global Priorities Institute and clusters of work around that do work in economics, including welfare economics. I'd also be curious to hear what you think they should do differently.

Not quite a discipline, but I think American Christianity lost cultural influence by denominations ceding control of their colleges (based on this book).

Had the men's rights movement established men's studies as more distinct from women's studies, maybe it would have benefited (though it's hard to believe it ever had the political power to achieve this).

I can imagine a world where sociobiology became its own discipline. It did not.

I think the establishment of chiropractic schools legitimized the practice in the United States compared to other alternative medicines.... (read more)

I'm toying with a project to gather reference classes for AGI-induced extinction and AGI takeover. If someone would like to collaborate, please get in touch.

(I'm aware of and giving thought to reference class tennis concerns but still think something like this is neglected.)

I don't think it's right that the broad project of alignment would look the same with and without considering religion. I'm curious what your reasoning is here and if I'm mistaken.

One way of reading this comment is that it's a semantic disagreement about what alignment means. The OP seems to be talking about the problem of getting an AI to do the right thing, writ large, which may encompass a broader set of topics than alignment research as you define it.

Two other ways of reading it are that (a) solving the problem the OP is addressing (getting an AI... (read more)

Geoffrey Miller
2y
zdgroff -- that link re. specific preferences to the 80k Hours interview with Stuart Russell is a fascinating example of what I'm concerned about. Russell seems to be arguing that either we align an AI system with one person's individual stated preferences at a time, or we'd have to discover the ultimate moral truth of the universe, and get the AI aligned to that.  But where's the middle ground of trying to align with multiple people who have diverse values? That's where most of the near-term X risk lurks, IMHO -- i.e. in runaway geopolitical or religious wars, or other human conflicts, amplified by AI capabilities. Even if we're talking fairly narrow AI rather than AGI. 
Zach Stein-Perlman
2y
Kind of. Alignment researchers want AI to do the right thing. How they try to do that is mostly not sensitive to what humans want; different researchers do different stuff but it's generally more like interpretability or robustness than teaching specific values to AI systems. So even if religion was more popular/appreciated/whatever, they'd still be doing stuff like interpretability, and still be doing it in the same way. (a) and (b) are clearly false, but many believe that most of the making-AI-go-well problem is getting from AI killing everyone to AI not killing everyone and that going from AI not killing everyone to AI doing stuff everyone thinks is great is relatively easy. And value-loading approaches like CEV should be literally optimal regardless of religiosity. Few alignment researchers are excited about Stuart Russell's research, I think (at least in the bay area, where the alignment researchers I know are). I agree that if his style of research was more popular, thinking about values and metavalues and such would be more relevant.

One thing that's sad and perhaps not obvious to people is that, as I understand it, Nathan Robinson was initially sympathetic to EA (and this played a role in his at-times vocal advocacy for animals). I don't know that there's much to be done about this. I think the course of events was perhaps inevitable, but that's relevant context for other Forum readers who see this.

And worth noting that Ben Franklin was involved in the Constitution, so at least some of his longtermist time seems to have been well spent.

I don't have a strong view on the original setup, but I can clarify what the argument is. For the first point, about what we maximize: the idea is that we want to maximize the likelihood that the organism chooses the action that leads to enjoyment (the one being selected for). That probability is a function of how much better it is to choose that action than the alternative. So if you get E from choosing that action and lose S from choosing the alternative, the benefit from choosing that action is E - (-S) = E + S. However, you only pay to produ... (read more)
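A hedged reconstruction of the objective being described, pieced together from the prose above rather than taken from the original post (the notation p, c_E, c_S is mine): the design problem chooses enjoyment E and suffering S to maximize the probability of taking the selected-for action, which increases in the payoff gap E - (-S) = E + S, net of the costs of producing each signal:

```latex
\max_{E,\, S \ge 0} \quad p(E + S) - c_E(E) - c_S(S)
```

where p is increasing in the payoff gap. The truncated sentence about only paying to produce a signal suggests the cost terms enter separately per signal, which is why the benefit scales with the sum E + S while the costs do not.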

Like others, I really appreciate these thoughts, and it resonates with me quite a lot. At this point, I think the biggest potential failure mode for EA is too much drift in this direction. I think the "EA needs megaprojects" thing has generated a view that the more we spend, the better, which we need to temper. Given all the resources, there's a good chance EA is around for a while and quite large and powerful. We need to make sure we put these tools to good use and retain the right values.

EA spending is often perceived as wasteful and self-serving

It's int... (read more)

Yes, that's an accurate characterization of my suggestion. Re: digital sentience, intuitively something in the 80-90% range?

Yes, all those first points make sense. I did want to just point to where I see the most likely cruxes.

Re: neuron count, the idea would be to use various transformations of neuron counts, or of a particular type of neuron.  I think it's a judgment call whether to leave it to the readers to judge; I would prefer giving what one thinks is the most plausible benchmark way of counting and then giving the tools to adjust from there, but your approach is sensible too.

Fai
2y
Sorry that I missed your comment, hence the late reply! Thank you for sharing. Let me clarify your suggestion here: do you mean you suggest I give my model of how to account for moral significance, rather than just writing about the number of beings involved? Also, do you mind sharing your credence in the possibility of digital sentience?

Thanks for writing this post. I have similar concerns and am glad to see this composed. I particularly like the note about the initial design of space colonies. A couple things:

  • My sense is that the dominance of digital minds (which you mention as a possible issue) is actually the main reason many longtermists think factory farming is likely to be small relative to the size of the future. You're right to note that this means future human welfare is also relatively unimportant, and my sense is that most would admit that. Humanity is instrumentally important,
... (read more)
Fai
2y
Thank you for your comment! Yes, I recognize that some longtermists bite the bullet and admit that humanity has virtually only instrumental value, but I am not sure they are the majority; it seems like they are not. In any case, it seems to me that the vast majority of longtermists think the focus should be either humanity or digital beings. Animals are almost always left out of the picture. I think you are right that "part of this" is a strategy to avoid weird messaging, but most longtermists I have discussed with do not think that humanity does not matter, probably especially newer longtermists. Also, the naming of initiatives such as human-compatible AI, value alignment, learning from humans, etc., makes me feel that these people genuinely care about the future of humanity. And I am not even sure digital sentience is possible; we haven't proven that it is, right? I don't even know how to think about its feasibility. Maybe you can introduce me to some readings?

I find the neuron count model implausible:
  1. Human infants have more neurons than adult humans.
  2. Some nonhuman animals have more neurons than humans. (I have some credence, albeit low, that some nonhuman animals have higher moral weights than humans, one to one.)
  3. Using the neuron count model would also create seemingly absurd prescriptions. The total number of nematode neurons exceeds that of humans, which would prescribe a focus on nematodes over humans, which sounds no less absurd than focusing entirely on insect larvae. (Nonetheless, I don't put zero credence on these possibilities.)
  4. There is evidence that, within humans, the capacity to suffer varies greatly, down to the extreme of some humans barely ever feeling pain or suffering, and there is no evidence these vast differences are due to neuron counts.

In any case, my aim for this post is literally to present the number of animals, while also making the case that I expect most o

Research institute focused on civilizational lock-in

Values and Reflective Processes, Economic Growth, Space Governance, Effective Altruism

One source of long-term risks and potential levers to positively shape the future is the possibility that certain values or social structures get locked in, such as via global totalitarianism, self-replicating colonies, or widespread dominance of a single set of values. Though organizations exist dedicated to work on risks of human extinction, we would like to see an academic or independent institute focused on other eve... (read more)

Consulting on best practices around info hazards

Epistemic Institutions, Effective Altruism, Research That Can Help Us Improve

Information about ways to influence the long-term future can in some cases give rise to information hazards, where true information can cause harm. Typical examples concern research into existential risks, such as around potential powerful weapons or algorithms prone to misuse. Other risks exist, however, and may also be especially important for longtermists. For example, better understanding of ways social structures and values can ... (read more)
