Bio

Disentangling "nature."
It is my favorite thing, but I want to know its actual value.
Is it replaceable? Is it useful? Is it morally repugnant? Is it our responsibility? Is it valuable?
"I asked my questions. And then I discovered a whole world I never knew. That's my trouble with questions. I still don't know how to take them back."

Comments
107

I'm very interested in this. I would very much like to know which pesticides kill fewer insects (while still achieving their purpose of protecting crops), which pesticides kill more humanely, and what the best blanket replacements are for the biggest pesticides in use today.

I think this is very understudied for off-target biodiversity deaths as well: unintended deaths and unnecessary suffering. I think it's possible to do a truly huge amount of good here.

 

Does anyone have some initial leads on what these might be? 

Just want to flag that this post doesn't specify what kind of development. I was confused about whether this magazine was about NIMBYs, green infrastructure, poverty, civil engineering, or something else.

Oh, I missed this post when it was first published. I think this might be the first piece on the EA Forum about incorporating earth systems into AI frameworks!

Some critique: I'm not sure if there are values beyond human/sentient preferences. I am not sure what the earth wants or needs. It makes it very difficult to incorporate those hypothetical values without someone defining them and giving a way to measure success at fulfilling them. Does the earth want to stay the same? Or change? In what way? (I am not aware of what the accepted thinking is on this, or if there is any commonly accepted thinking.)

I am somewhat hopeful that we can make more sense of the "right way to respect the earth" by feeding a ton of data into AIs. My hope would be that they could work on massive scales and disentangle some patterns, then condense it down to a message that makes sense to us. Or it might not help at all because the philosophy simply has to be worked out first.

In your piece I found it valuable that you summarized the other attempts at representing non-humans in governance. I learned about a few I had not heard of and learned what happened after they were implemented. 

I also liked your suggestions for what to emphasize in training: the planetary boundaries framework, Earth System Models, Traditional Ecological Knowledge, Life Cycle Assessment databases, and real-time environmental sensors. I think this is a really good starting set. I do have a minor quibble with the planetary boundaries framework; I recently became aware that it is not quite as good as it first appears.

If the values and methods of maintaining ecosystem integrity and earth stability are not worked out yet, then it could be very high impact to work on developing them. I suggest that this is something you would be well equipped to do, as it appears the other attempts have been weak on this front. "Legible ecological signals. Non-human interests must be translated into monitored, updated, and decision-relevant indicators that powerful actors cannot simply ignore." This roughly translates to an actionable, concrete objective. Judging by what went wrong with some of the non-human governance examples, extinction risk and biodiversity are too hard to measure directly. Perhaps some simpler indicator of those could be standardized? Perhaps eDNA could make this easier and more objective?

Personally I am excited about using EDGE (Evolutionarily Distinct and Globally Endangered) as a better method of evaluation and prioritization.
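For anyone curious how EDGE actually combines the two criteria: as I understand the published method (Isaac et al. 2007), a species' EDGE score adds log evolutionary distinctness to an IUCN threat rank, with each rank doubling the weighted odds of extinction. A rough sketch (the species and numbers below are made up for illustration):

```python
import math

def edge_score(ed, ge):
    """EDGE score per Isaac et al. (2007): ln(1 + ED) + GE * ln(2).
    ED = evolutionary distinctness (millions of years of unique history);
    GE = IUCN rank (Least Concern = 0 ... Critically Endangered = 4).
    Each GE step doubles the weighted extinction risk."""
    return math.log(1 + ed) + ge * math.log(2)

# Hypothetical candidates: (name, ED, IUCN rank) -- illustrative only.
candidates = [
    ("species_A", 5.2, 4),   # young lineage, Critically Endangered
    ("species_B", 60.0, 2),  # ancient lineage, Vulnerable
    ("species_C", 12.0, 0),  # Least Concern
]
ranked = sorted(candidates, key=lambda c: edge_score(c[1], c[2]), reverse=True)
```

Note how the ancient Vulnerable lineage can outrank the Critically Endangered but evolutionarily ordinary one: that is exactly the "irreplaceability" emphasis I find appealing.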



 

Hi!

Like you, I care about the environment, and like you, I want to know the most effective way to make the global environment better.

My focus is on biodiversity, and 80% of the time that means habitat loss. Alternative proteins can halve deforestation permanently, and that's a bigger impact than anything else I know. A full explanation is written up on EcoResilience Initiative's website here.

We think some of the best alternative protein companies for combating future habitat loss are Terra Bioindustries, Hyfe, and Pow.bio, because they are working on moving the feedstock for precision fermentation away from sugarcane (a tropical crop grown in biodiverse areas) to recycled agricultural waste products like corn husks, spent barley, and waste water. Here is EcoResilience Initiative's full write-up on the individual places working directly on the problem. To be clear, the statistic about halving deforestation still assumes sugarcane; it could get even better than that if we recycle agricultural waste on top of using alternative proteins. Pow.bio is also doubling efficiency by changing the fermentation process. Synthesis Capital recommends Hyfe, Pow.bio, and some other specific companies too.

Incidentally, I got up to speed and found a lot of detailed information from GFI, which made me feel like they are pretty good at pushing alternative protein innovation, funding, and development forward. GFI also keeps a database of all the alternative protein companies.

If you place more emphasis on long-term technological approaches and solving extinction altogether, biobanking could allow genetic rescue of species suffering extinction debt, and eventually de-extinction. It's also ridiculously cheap: for $3,000,000 you could preserve a species for 100 years. (For comparison, one study estimates it costs about $1,300,000 per year to keep a critically endangered species surviving in the wild with insurance populations in zoos. The authors consider this a low cost.) The Frozen Zoo, Frozen Ark, Svalbard Global Seed Vault, and Ocean Genome Legacy are doing this, and Revive & Restore is working on de-extinction tech.
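To make the cost comparison above explicit, here is the arithmetic on a per-year basis, using only the two figures quoted (the ~43x ratio is just those numbers divided out, not an independent estimate):

```python
# Per-year cost comparison from the figures quoted above.
biobank_total = 3_000_000            # $ to biobank one species for 100 years
biobank_per_year = biobank_total / 100          # $30,000 per year
wild_plus_zoo_per_year = 1_300_000              # $/year, per the cited study

ratio = wild_plus_zoo_per_year / biobank_per_year
print(f"Biobanking: ${biobank_per_year:,.0f}/yr vs in-situ + zoo "
      f"insurance: ${wild_plus_zoo_per_year:,.0f}/yr (~{ratio:.0f}x cheaper)")
```

Of course biobanking and in-situ conservation buy different things (frozen genetic material vs a living population), so this is a floor on the cost gap, not an apples-to-apples substitution.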

For a more immediate, less techno-utopian approach, keystone species introductions seem really effective at improving landscapes. I haven't done research into which organizations are doing the best work on this, but the Big Scrub Conservancy is doing some amazing things. They are reintroducing some of the most evolutionarily distinct species and rebuilding an almost-lost habitat. It's really exciting. I would particularly search for any freshwater mollusc introduction programs, because those are highly effective keystone species in some of the most important and most depleted habitats. (And often they are neglected, evolutionarily distinct species themselves!) Sorry I don't have specific orgs to recommend for this category. My lame excuse is that keystone species introductions don't globalize in quite the same way as the above two biodiversity interventions.

I might also recommend EDGE for their direct action on neglected species around the globe. They prioritize by evolutionary distinctiveness (unique and irreplaceable species that represent entire branches of the tree of life) which I think is the right approach for a biodiversity crisis.

Giving Green just released their Biodiversity Conservation recommendations, and they settled on GFI and Wetlands International. For marine biodiversity they specify "Supporting Implementation and Innovation of Improved Fishing Gear" as one of the most effective ways to reduce overfishing. If you dig into the footnotes you can find examples of people working on this, for example these conservation engineers and this team. I don't think Giving Green wants to claim these are the MOST impactful direct action; that is what their biodiversity philanthropy page is for. There is a lot of uncertainty about management and viability when you drill down this far. But it's probably near the top, since it's within the most effective intervention bracket.

For climate, I think Giving Green (and other EA climate orgs) have that covered. I dug around Giving Green's climate recommendations, trying to find non-policy nonprofits, and I see what you mean. You will probably have to go to the policy orgs Giving Green recommends, and then see if you can dig around and find a specific project/company they endorse. It won't be easy, because I'm guessing these climate policy nonprofits don't want to single out favorites since they will be working with lots of orgs over their lifetime. 

That being said, you might be able to find some specific direct action they are excited about if you search their reports and check their news page. For example, here is a list of 18 geothermal companies put out by Future CleanTech Architects (page 9). First I would pick a sector you expect to be most impactful, and then search within their coverage of that sector. They'll probably highlight a few places acting directly.

I'm not sure I understood what you are saying here. Do you add more votes in the direction a post is already tilting, or are you just more likely to vote if it's a high-vote-volume post?

I am aware that I vote based on the current karma count. If someone has a bunch of karma, then I don't mind downvoting. If the post or user has very little karma, I upvote much more readily. Something has to be truly egregious for me to push it further into negative karma.

In the midrange I am less likely to vote at all, and I vote more accurately: if it was personally valuable to me, if I feel it's underrepresented, or if I feel it would be better for more eyes to see it, then I upvote. My favorite thing is to disagree-vote and then give karma for a valuable contribution. Then I feel like I'm (a True Rationalist =P) counteracting the natural "like + agree + karma" impulse. I try to vote like this as often as possible.

I appreciate being counter poked! That was my hope. 

The concepts of metarationality, complexity science, and the like really appeal to me. When I have tried to enter into their domain and learn what they advise, I've been disappointed mainly for the reasons in my above critique. It means a lot to get an inside answer, thank you. 

I'm going to switch gears and now give my own best version of what integral altruism and associated nodes have to offer:

Pre-mortem - Also known as prospective hindsight: you start with the premise that everything went horribly wrong, and then identify what led to that outcome so you can avoid it. It comes from a psychologist studying field intuition around 2007, and has since been adopted enthusiastically by EA. (See also backcasting, whose lineage traces back to sustainability and environmentalism.)

Red teaming - Yes, this is super EA; EA took it from military wargames around 2004. But this is exactly the kind of "holding two views" and explicitly searching for alternative frames that integral altruism has pointed towards. Integral altruism would probably call it polarity management, a set of techniques that emerged from systems thinking research. Polarity management means mapping the upsides and downsides of two different goal frameworks and oscillating between them: you make a 2x2 matrix, then make decisions that keep you in the upper half of the matrix for both goal frameworks. Polarity management dates back to 1992.
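To make the 2x2 concrete, here is a minimal sketch of a polarity map. The two poles and all of the entries are my own made-up illustration (loosely echoing the EA vs int/a tension in this thread), not anything from the 1992 source:

```python
# Hypothetical polarity map: two poles, each with upsides (what you
# gain leaning that way) and downsides (what creeps in when you
# over-rely on it). All entries are illustrative, not canonical.
polarity_map = {
    "optimize/measure": {
        "upside":   ["comparability", "accountability"],
        "downside": ["metric gaming", "blind spots"],
    },
    "sense/hold-complexity": {
        "upside":   ["context awareness", "new frames"],
        "downside": ["paralysis", "no commitments"],
    },
}

def warning_signs(current_pole):
    """The practical rule: watch for the downsides of the pole you
    currently lean on; when they show up, oscillate to the other."""
    return polarity_map[current_pole]["downside"]
```

The "stay in the upper half" decision rule then amounts to choosing actions that capture an upside of your current pole without triggering its listed downsides.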

Adaptive Management - The gist is: when you're managing a system you don't fully understand, every management action should be treated as an experiment to generate information, not just to achieve outcomes. Passive adaptive management is the standard good practice of enacting what seems best, monitoring results, and adjusting. Active adaptive management is deliberately designing multiple competing interventions to discriminate between models of how the system works, even if that means some of the interventions are suboptimal by your current theory. Developed for ecology by C.S. Holling in 1978 from a systems thinking background. I think this is a pretty important tool and I vaguely feel like it should be discussed more. The int/a terms would be probe-sense-respond, moving at the speed of wisdom, or double-loop learning. The EA version would be explore/exploit tradeoffs or maybe value-of-information calculations.
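The explore/exploit framing can be shown with a toy epsilon-greedy loop; this is my own sketch of the analogy, not anything from Holling. Two hypothetical interventions have unknown success rates, and the manager keeps probing instead of locking in early:

```python
import random

# Toy explore/exploit sketch of adaptive management. The true rates
# are hidden from the "manager"; mostly we exploit the current best
# estimate, but with probability epsilon we probe at random so every
# action also generates information (the "experiment" framing).
true_rates = {"intervention_A": 0.3, "intervention_B": 0.6}
counts = {k: 0 for k in true_rates}
successes = {k: 0 for k in true_rates}

def estimate(arm):
    # Observed success rate so far (0 if untried).
    return successes[arm] / counts[arm] if counts[arm] else 0.0

def choose(epsilon=0.2):
    if random.random() < epsilon:
        return random.choice(list(true_rates))   # explore: deliberate probe
    return max(true_rates, key=estimate)         # exploit: act on best guess

random.seed(0)
for _ in range(2000):
    arm = choose()
    counts[arm] += 1
    successes[arm] += random.random() < true_rates[arm]
```

After enough trials the manager converges on the genuinely better intervention, having "paid" for that knowledge with some deliberately suboptimal probes: the same tradeoff active adaptive management accepts.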

Integrative Complexity - A simplistic version of this is making pro/con lists and then holding them in active memory when making decisions. Pro/con lists are not as rigorous an analysis as most EA-endorsed methods, but they are extremely practical, and in some sense they enable more thorough judgements than rigorous calculations. Apparently the formal version was developed by Philip Tetlock in the 1980s from psychometrics research. The int/a terms might be decoupling and recoupling. EA might call it scout mindset.

Collective intelligence - Superforecaster research has shown when and how crowd intelligence outperforms experts (and vice versa). Prediction markets are an example of trying to implement this at scale to inform decision-making. I think prediction markets are a really exciting new technology that will improve decision-making across humanity. I suspect this can be traced back to a variety of sources, but in particular it came from systems thinking research. Int/a might say it reflects how full-spectrum knowing trumps expertise/formal analysis (in certain conditions).
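The statistical core of crowd intelligence, that independent errors partially cancel when averaged, can be shown in a few lines. This is a generic wisdom-of-crowds illustration with made-up numbers, not a claim about any particular forecasting study:

```python
import random
import statistics

# Each "forecaster" gives a noisy, unbiased estimate of a true
# quantity. The crowd mean lands much closer to the truth than the
# typical individual, because independent errors partially cancel.
random.seed(1)
truth = 100.0
estimates = [truth + random.gauss(0, 20) for _ in range(500)]

crowd_error = abs(statistics.mean(estimates) - truth)
typical_individual_error = statistics.mean(abs(e - truth) for e in estimates)
```

The caveat the superforecasting literature adds is that this only works when errors are independent and unbiased; a crowd sharing the same blind spot averages to the blind spot.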

Focusing - I personally have found this to be incredibly useful across many scales of decisionmaking. I think it makes me wiser and more able to expose and tinker with my underlying reasons. By Gendlin in 1978, adopted by CFAR, generally "EA" accepted now in my experience. Completely in the spirit of integral altruism.

Elizabeth Anderson is a philosopher who worked on the idea of a world composed of incomparable values. There is no reason we would necessarily live in a world whose values are comparable; this might be a more accurate (though inconvenient) reflection of reality. I'm going to butcher this, but here is my summary: Anderson describes shifting between frameworks and appropriate actions according to the values being optimized for. For example, we don't optimize for "grieving," but it is meaningful to us. Examining the shift between optimization and how we actually practice meaningful activities could help us better pinpoint why we should shift out of EA optimization.

I hope these descriptions are a useful translation bridging these two approaches.

I think integral altruism and friends are important because I want to make better, more holistic, more informed choices. I want to take everything into account. I want the ability to be context-dependent and switch to the exact best approach according to changing circumstances. It might be harder to do and harder to describe, but it is what we should strive for. I think we all want this.

edit: I think this part captures it best: "we want to empower individuals & projects with x and y so they can discern for themselves whether x or y is right for their context."
Yes! Exactly! It's really great when advice declares "who this is for." I think int/a and adjacent groups could work towards bringing more clarity when holding such expanded levels of context. Make the context recognizable: what probably matters, what has been missed, what might not matter, how we can identify the options. When does "the individual altruist, the problem they are working on, and a plethora of other factors" matter, and when doesn't it make a difference? Clarify, reveal, discern. Everything depends. We are all trying to know what we can best do under our individual circumstances.

I'm really pleased to see so many people coalescing around this post. I'm enormously blessed to be amongst people thinking about the big problems with such openness, passion, and energy.

Int/a correctly identifies that EA has imperfections. But the proposals (replacing specificity with multidimensionality, putting process over goals, substituting metrics with sensing) don't fix those imperfections. They mostly obfuscate them by disallowing comparison, and they avoid failure by never choosing between options. I think the main problem int/a has with EA is not an EA problem but an imperfect-world problem.

EA's single-minded focus on specificity, measurability, and goal-orientation is the painful, imperfect method that turns values, caring, and messy big problems into singular choices and actions. Yes, the metrics are always flawed. Yes, you cut off possibilities when you commit to a direction. That's the cost of actually acting in the world, and I don't think int/a has provided a better path forward.

I may be being ungenerous, but my aim is to cut through to my biggest concern and look for correction. What int/a offers is staying in the ideation phase: more intuition, more holism, more systems thinking, more openness, more frames. Every single recommendation is widening, sourcing, and uncontroversial. These are a vital part of the opening process. But as far as I can tell, int/a does not move past enriching understanding, and it does not seem concerned with what that gives up. At some point the unpleasant part has to come: splitting apart, letting go of options, committing to something that might be wrong. EA isn't limiting itself to specificity and comparison out of compulsion; it sees these as necessary stages. Pleading for more modalities does not get you to a tradeoff-free world! At some point you have to demonstrate a better outcome.

The complexity science and metacrisis communities have said "see the whole system, keep entanglements, don't reduce" and then hit the entirely predictable problem of being unable to make much headway. They have produced real analytical tools, but the endpoint actions remain sparse. Is EA's predisposition towards action more harmful than int/a's moving at the speed of wisdom? I genuinely think EA's greater bias toward action has produced more good than harm. But I can see arguing for change.

What int/a does do well, and what EA should listen to, is unearthing root problems, catching incomplete definitions, calling for opening up, and providing more frames. Int/a can teach us greater things to narrow toward. I don't think it's best seen as a competing method; its output needs to be handed off to EA-style problem-solving, and it should be resurfaced periodically too.

I don't know anything about music and am not a math virtuoso, but I love this. Wonderful content, wonderful writing.

Exciting resource, and well presented! I'm digging into the insecticide section now. Some of the research into numbers of individuals, prevalence of insecticides, biggest actors, and off target effects is also useful for grounding biodiversity impact estimations. Thanks to all the researchers for their hard work on this project.

Hi, I'm trying to understand your call to action.

I'm confused why donors "should not give to Founder's Pledge or Giving Green's climate fund until charities that engage in nuclear advocacy are no longer part of their recommended charities lists." It sounds like you are mainly saying that nuclear is ineffective. You also believe funding nuclear efforts might worsen outcomes by displacing renewables. Are you saying it is a significant enough backfire to negate the effectiveness of the rest of the fund? Or is this just a way of saying "it would be more effective to customize your donations to avoid nuclear advocacy"?

If 5% of Giving Green's climate fund is being mis-allocated, why should one still not donate to their overall portfolio?
