zdgroff


Comments

Free-spending EA might be a big problem for optics and epistemics

Like others, I really appreciate these thoughts, and they resonate with me quite a lot. At this point, I think the biggest potential failure mode for EA is too much drift in this direction. I think the "EA needs megaprojects" thing has generated a view that the more we spend, the better, which we need to temper. Given all the resources, there's a good chance EA will be around for a while and will be quite large and powerful. We need to make sure we put these tools to good use and retain the right values.

EA spending is often perceived as wasteful and self-serving

It's interesting how far this is from the original version of EA and the criticisms it attracted, e.g. that EA demanded an unrealistic standard that involved sacrificing one's identity and sense of companionship for an ascetic universalism.

I think the old perception is likely still more common, but it's probably only a matter of time before the new one takes over (which means there's likely still time to change it). And I think you described the tensions brilliantly.

Why the expected numbers of farmed animals in the far future might be huge

Yes, that's an accurate characterization of my suggestion. Re: digital sentience, intuitively something in the 80-90% range?

Why the expected numbers of farmed animals in the far future might be huge

Yes, all those first points make sense. I just wanted to point to where I see the most likely cruxes.

Re: neuron count, the idea would be to use various transformations of neuron counts, or of counts of a particular type of neuron. Whether to leave the weighting to readers is a judgment call; I would prefer giving what one thinks is the most plausible benchmark way of counting and then providing the tools to adjust from there, but your approach is sensible too.

Why the expected numbers of farmed animals in the far future might be huge

Thanks for writing this post. I have similar concerns and am glad to see them laid out here. I particularly like the note about the initial design of space colonies. A couple of things:

  • My sense is that the dominance of digital minds (which you mention as a possible issue) is actually the main reason many longtermists think factory farming is likely to be small relative to the size of the future. You're right to note that this means future human welfare is also relatively unimportant, and my sense is that most would admit that. Humanity is instrumentally important, however, since it will create those digital minds. I do think it's an issue that a lot of discussion of the future treats it as the future "of humanity" when that's not really what it's about. I suspect that part of this is just a matter of avoiding overly weird messaging.
  • It would be good to explore how your argument changes when you weight animals in different ways, e.g. by neuron count, since that [does appear to change things](https://forum.effectivealtruism.org/posts/NfkEqssr7qDazTquW/the-expected-value-of-extinction-risk-reduction-is-positive). I think we should probably take a variety of approaches and place some weight on each. There is a sort of Pascalian problem, though, with the possibility that each animal mind has equal weight: it feels somewhat plausible, but it also leads to wild and seemingly wrong conclusions (e.g. that it's all about insect larvae). In general, this seems like a central issue worth adjusting for; a rough sketch of how different weightings play out is below.
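As a minimal illustration (not from the post, and with every neuron count and population ratio below a made-up placeholder), here is how the nonhuman share of total moral weight can swing under a few candidate transformations of neuron counts:

```python
# Illustrative sketch only: all neuron counts and population ratios are
# placeholder assumptions, chosen to show how the choice of weighting
# scheme, not the data, drives the conclusion.
import math

HUMAN_NEURONS = 8.6e10

# Rough, assumed neuron counts (order-of-magnitude placeholders).
NEURONS = {
    "human": HUMAN_NEURONS,
    "chicken": 2.2e8,
    "farmed_fish": 1e8,
    "insect_larva": 1e6,
}

# Assumed relative population sizes in some future scenario (purely illustrative).
POPULATION = {"human": 1.0, "chicken": 10.0, "farmed_fish": 100.0, "insect_larva": 1e5}

# Candidate weighting schemes: each maps a neuron count to a moral weight
# (normalized so a human gets weight 1 under every scheme).
SCHEMES = {
    "equal": lambda n: 1.0,
    "linear": lambda n: n / HUMAN_NEURONS,
    "sqrt": lambda n: math.sqrt(n / HUMAN_NEURONS),
    "log": lambda n: math.log10(n) / math.log10(HUMAN_NEURONS),
}

for name, weight in SCHEMES.items():
    totals = {sp: POPULATION[sp] * weight(NEURONS[sp]) for sp in NEURONS}
    nonhuman_share = 1 - totals["human"] / sum(totals.values())
    print(f"{name:>6}: nonhuman share of total moral weight = {nonhuman_share:.1%}")
```

With these placeholder numbers, the equal, square-root, and logarithmic weightings are dominated by the insect larvae, while linear neuron-count weighting brings humans and the rest much closer to parity; the point is only that the choice of transformation does most of the work.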
The Future Fund’s Project Ideas Competition

Research institute focused on civilizational lock-in

Values and Reflective Processes, Economic Growth, Space Governance, Effective Altruism

One source of long-term risks and potential levers to positively shape the future is the possibility that certain values or social structures get locked in, such as via global totalitarianism, self-replicating colonies, or widespread dominance of a single set of values. Though there are organizations dedicated to working on risks of human extinction, we would like to see an academic or independent institute focused on other events that could have an impact on the order of millions of years or more. Are such events plausible, and which ones should be of most interest and concern? Such an institute might be similar in structure to FHI, GPI, or CSER, drawing on the social sciences, history, philosophy, and mathematics.

The Future Fund’s Project Ideas Competition

Consulting on best practices around info hazards

Epistemic Institutions, Effective Altruism, Research That Can Help Us Improve

Information about ways to influence the long-term future can in some cases give rise to information hazards, where true information can cause harm. Typical examples concern research into existential risks, such as research on potentially powerful weapons or algorithms prone to misuse. Other risks exist, however, and may also be especially important for longtermists. For example, a better understanding of the ways social structures and values can get locked in may help powerful actors achieve deeply misguided objectives.

We would like to support an organization that can develop a set of best practices and consult with important institutions, companies, and longtermist organizations on how best to manage information hazards. We would like to see work that helps organizations think about the tradeoffs in sharing information. How common are info hazards? Are there ways to eliminate or minimize the downsides? Is it typically the case that the downsides of information sharing are much smaller than the upsides, or vice versa?

The Future Fund’s Project Ideas Competition

Advocacy for digital minds

Artificial Intelligence, Values and Reflective Processes, Effective Altruism

Digital sentience is likely to be widespread in the most important future scenarios. It may be possible to shape the development and deployment of artificially sentient beings in various ways, e.g. through corporate outreach and lobbying. For example, constitutions can be drafted or revised to grant personhood on the basis of sentience; corporate charters can include responsibilities to sentient subroutines; and laws regarding safe artificial intelligence can be tailored to consider the interests of a sentient system. We would like to see one or more organizations dedicated to identifying and pursuing opportunities to protect the interests of digital minds. We expect foundational research to be crucial here; a successful effort would hinge on thorough research into potential policies and the best ways of identifying digital suffering.

The Future Fund’s Project Ideas Competition

Lobbying architects of the future

Values and Reflective Processes, Effective Altruism

Advocacy often focuses on changing politics, but the most important decisions about the future of civilization may be made in domains that receive relatively little attention. Examples include the reward functions of generally intelligent algorithms that eventually get scaled up, the design of the first space colonies, and the structure of virtual reality. We would like to see one or more organizations focused on getting the right values considered by influential decision-makers at institutions like NASA and Google. We would be excited about targeted outreach to promote consideration of aligned artificial intelligence, existential risks, the interests of future generations, and nonhuman (both animal and digital) minds. This work could take various forms; some potential strategies are prestigious conferences in important industries, retreats for a small number of highly influential professionals, and shareholder activism.

Potentially high-impact job: Colorado Department of Agriculture, Bureau of Animal Protection Manager

Yeah, I think this would be good context. The Colorado governor's husband is a die-hard animal rights activist and seems to have influence: https://en.wikipedia.org/wiki/Marlon_Reis

The governor recently declared a "MeatOut" day to support plant-based eating and has signed various animal welfare initiatives into law, such as a cage-free egg law.

So it seems that someone very EA-minded could get this position if they applied.

Persistence - A critical review [ABRIDGED]

I'm really excited to see this and to look into it. I'm working on some long-term persistence issues, and this is largely in line with my intuitive feel for the literature. I haven't looked at the Church-WEIRDness paper, though, and now I'm eager to read it.
