capybaralet

Comments

Trying to help coral reefs survive climate change seems incredibly neglected.

(Sorry, this is a bit stream-of-consciousness):

I assume it's because humans rely on natural ecosystems in a variety of ways in order to have the conditions necessary for agriculture, life, etc.  So, as with climate change, the long-term cost of failing to mitigate is simply massive... really, I think these numbers should not be thought of as very meaningful, since the kinds of disruption and destruction we are talking about are not easily measured in dollars.

TBH, I find it not at all surprising that saving coral reefs would have a huge impact, since they are basically part of the backbone of the entire global ocean ecosystem, and this stuff is all connected, etc.

I think environmentalism is often portrayed as some sort of hippy-dippy sentimentalism and contrasted with humanist values and economic good sense, and I've been a bit surprised by how prevalent that sort of attitude seems to be in EA.  I'm not trying to say that either of you in the thread has this attitude; it's more that I was reminded of it by these comments.  It seems like I have a much stronger prior that protecting the environment is good for people's long-term future (e.g. most people here have probably heard the idea that the biodiversity we're destroying could have massive scientific implications, such as the development of new materials and drugs).

I think the reality is that we're completely squandering the natural resources of the earth, and all of this only looks good for people in the short term, or if we expect to achieve technological independence from nature.  I think it's very foolhardy to assume that we will achieve technological independence from nature, and doing so is a source of x-risk.  (TBC, I'm not an expert on any of this; just sharing my perspective.)

To be clear, I also think that AI timelines are likely to be short, and AI x-risk mostly dominates my thinking about the future.  If we can build aligned, transformative AI, there is a good chance that we will be able to leverage it to develop technological independence from nature.  At the same time, I think our current irresponsible attitude towards managing natural resources doesn't bode well, even if we grant ourselves huge technological advances (it seems to me that many problems facing humanity now require social, not technological, solutions; the technology is often already there...).

Big List of Cause Candidates

Yeah... it's not at all my main focus, so I'm hoping to inspire someone else to do that! :)

Big List of Cause Candidates

I recommend changing the "climate change" header to something a bit broader (e.g. "environmentalism" or "protecting the natural environment", etc.).  It is a shame that (it seems) climate change has come to eclipse/subsume all other environmental concerns in the public imagination.  While most environmental issues are exacerbated by climate change, solving climate change will not necessarily solve them.

A specific cause worth mentioning is preventing the collapse of key ecosystems, e.g. coral reefs: https://forum.effectivealtruism.org/posts/YEkyuTvachFyE2mqh/trying-to-help-coral-reefs-survive-climate-change-seems

 

Idea: "SpikeTrain" for lifelogging

Thanks for the pointer!  I think many EAs are interested in QS, but I agree it's a bit tangential.

Improving Institutional Decision-Making: a new working group

IIRC the Ethereum Foundation is using QF somehow.
But it's probably best just to get in touch with someone who knows more of what's going on at RXC.
Not sure who that would be OTTMH, unfortunately.
 

Improving Institutional Decision-Making: a new working group

I think you guys are already aware of RadicalXChange.  It's a bit different in focus, but I know they are excited about trying out mechanisms like quadratic voting and quadratic funding (QV/QF) in institutional settings.
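For anyone unfamiliar with the mechanism: under quadratic funding, as proposed in Buterin, Hitzig, and Weyl's "liberal radicalism" paper, a project's total funding is the square of the sum of the square roots of its individual contributions, with a matching pool covering the gap above the raw donations.  A minimal sketch in Python (hypothetical numbers, just to show the shape of the rule):

    import math

    def qf_match(contributions):
        # Total funding under QF = (sum of sqrt(c_i))^2;
        # the match is whatever tops up the raw contributions to that total.
        raw = sum(contributions)
        total = sum(math.sqrt(c) for c in contributions) ** 2
        return total - raw

    # Broad support attracts a much larger match than concentrated support:
    print(qf_match([1] * 100))  # 100 donors giving $1 each -> match of 9900.0
    print(qf_match([100]))      # 1 donor giving $100       -> match of 0.0

The point is that the match grows with the number of distinct contributors rather than the amount contributed, which is why it's interesting for institutional/public-goods settings.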

What's a good reference for finding (more) ethical animal products?

It was a few years back that I looked into it, and I didn't try too hard.  Sad to see the PETA link.
I'm basically looking for a reference that summarizes someone else's research (so I don't have to do my own).

Idea: the "woketionary"

This doesn't seem like a great use of time. For one thing, I think it gets the psychology of political disagreements backwards. People don't simply disagree with each other because they don't understand each other's words. Rather, they'll often misinterpret words to meet political ends.

It's not one or the other.  Anyways, having shared definitions also prevents deliberate/strategic misinterpretation.

I also question anyone's ability to create such an "objective/apolitical" dictionary. As you note, even the term "woke" can have a negative connotation. (And in some circles it still has a positive connotation.) Some words are essentially political footballs in today's climate. For example, in this dictionary what would be the definition of the word "woman"?

Sure, nothing is ever apolitical.  But you can try to make it less so.

I'm also unconvinced that this is an EA type of activity. For the standard reasons, I think EA should be very cautious when approaching politics. It seems like creating a central hub for people to look up politically loaded terms is the opposite of this.

What do you mean by "the standard reasons"?  I don't think it should be EA "branded".  I don't believe EAs should reason from cause areas to interventions; rather, I think we should evaluate each intervention independently.

What are the most common objections to “multiplier” organizations that raise funds for other effective charities?

Do you disagree that the EA community at large seems less excited about multiplier orgs than about more direct orgs?

What are the most common objections to “multiplier” organizations that raise funds for other effective charities?

I'm skeptical of multiplier organizations' relative effectiveness because the EA community doesn't seem that excited about them.

(P.S.: This is actually probably my #1 reason, as someone who hasn't spent much time thinking about where people should donate.  I suspect a lot of people are wary of seeming too enthusiastic because they don't want EA to look like a pyramid scheme.)
