UriKatz

UriKatz's Comments

Climate Change Is Neglected By EA

Yes, you are correct, and thank you for forcing me to clarify my position further (in what follows I leave out WAW, since I know absolutely nothing about it):

  1. EA Funds, which I will assume is representative of EA priorities, offers these funds: (a) “Global Health and Development”; (b) “Animal Welfare”; (c) “Long-Term Future”; (d) “EA Meta”. Let’s leave (d) aside for the purposes of this discussion.

  2. There is good reason to believe the importance and tractability of specific climate change interventions can equal or even exceed those of (a) and (b). We have not done enough research to determine whether this is the case.

  3. The arguments in favor of (c) being the only area we should be concerned with, or the area we should be most concerned with, are:

I) reminiscent of other arguments in the history of thought that compel us (humans) because we do not account for the limits of our own rationality. I could say a lot more about this another time; suffice it to say here that in the end I cautiously accept these arguments and believe x-risk deserves a lot of our attention.

II) popular within this community for psychological as well as purely rational reasons. There is nothing wrong with that, and it might even be needed to build a dedicated community.

III) For these reasons I think we are biased towards (c), and should employ measures to correct for this bias.

  4. None of these priorities is neglected by the world, but certain interventions or research opportunities within them are. EA has spent an enormous amount of effort finding opportunities for marginal value-add in (a), (b) & (c).

  5. Climate change should be researched just as much as (a) & (b). One way of accounting for the bias I see towards (c) is to divert a certain portion of resources to climate change research despite our strongly held beliefs. I simply cannot accept the conclusion that unless climate change renders our planet uninhabitable before we colonize Mars, we have better things to worry about. That sounds absurd in light of the fact that certain detrimental effects of climate change are already happening, and even the best-case future scenarios include a lot of suffering. It might still be right, but its absurdity means we need to give it more attention.

What surprises me most about the discussion of this post (and I realize its readers are a tiny sample of the larger community) is that no one has come back with: “we did the research years ago, we could find no marginal value-add. Please read this article for all the details”.

Climate Change Is Neglected By EA

The assumption is not that people outside EA cannot do good, it is merely that we should not take it for granted that they are doing good, and doing it effectively, no matter their number. Otherwise, looking at malaria interventions, to take just one example, makes no sense: billions have gone, and will continue to go, in that direction even without GiveWell. So the claim that climate change work is or is not the most good has no merit without a deeper dive into the field and a search for incredible giving/working opportunities. Any shallow dive into this cause reveals that further attention and concern are warranted. I do not know what the results of a deeper dive might show, but I am fairly confident we can be at least as effective working on climate change as on some of the other present-day welfare causes.

I do believe that there is a strong bias towards the far future in many EA discussions. I am not unsympathetic to the rationale behind this, but since it seems to override everything else, and present-day welfare (as your reply implies) is merely tolerated, I am cautious about it.

Developing my inner self vs. doing external actions

This is a great question and one everyone struggles with.

TL;DR: work on self-improvement daily, but be open to opportunities to act now. My advice would indeed be to balance the two, but balance is not a 50-50 split. To be a top performer in anything you do: practice, practice, practice. The impact of a top performer can easily be 100x that of the rest of us, so the effort put into self-improvement pays off. Professional sports is a prime example, but research, engineering, academia, management, and parenting all benefit from working on yourself.

The trap to avoid is waiting to act until you are perfect. Do not let opportunities for doing good slip by. Your first job, relationship, and child will all suffer from your inexperience, but how else do you gain experience? In truth, the more experience you gain, the greater the challenges you will allow yourself to tackle, so being comfortable acting with some doubt about your ability is critical to great achievements.

Climate Change Is Neglected By EA

I feel sometimes that the EA movement is starting to sound like metalheads (“climate change is too mainstream”), or evangelists (“in the days after the great climate change (Armageddon), mankind will colonize the galaxy (the Second Coming), so the important work is the work that prevents x-risk (saves people’s souls)”). I say “amen” to that, and have supported AI safety financially in the past, but I remain skeptical that climate change can be ignored. What would you recommend as next steps for an EA member who wants to learn more and eventually act? What are the AMF or GiveDirectly of climate change?

Climate Change Is Neglected By EA

I wonder how much of the assessment that climate change work is far less impactful than other work relies on the logic of “low probability, high impact”, which seems to be the most compelling argument for x-risk. Personally, I generally agree with this line of reasoning, but it leads to conclusions so far away from common sense and intuition that I am a bit worried something is wrong with it. It wouldn’t be the first time people failed to recognize the limits of human rationality and were led astray. That error is no big deal as long as it does not have a high cost, but climate change, even if temperatures rise by only 1.5°C, is going to create a lot of suffering in this world.
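To make that logic concrete, here is a toy expected-value comparison; every number below is invented purely for illustration, not an estimate anyone has defended. Suppose a speculative x-risk intervention has a 1-in-10,000 chance of safeguarding $10^{16}$ future lives, against a near-certain intervention saving a million lives today:

$$
\underbrace{10^{-4} \times 10^{16}}_{\text{x-risk bet}} = 10^{12} \;\gg\; \underbrace{0.9 \times 10^{6}}_{\text{near-certain intervention}} \approx 10^{6}
$$

On these made-up numbers the speculative bet dominates by six orders of magnitude, which is exactly the kind of conclusion that sits so far from common sense.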

In an 80,000 Hours podcast with Peter Singer, the question was raised whether EA should split into two movements: present-day welfare and longtermism. If we assume that concern with climate issues can grow the movement, that might be a good way to account for our long-term bias, while continuing the work on x-risk at current and even higher levels.

Choosing the Zero Point

In my own mind I would file this post under “psychological hacks”, a set of tools that can be extremely useful when used correctly. I am already considering how to apply this hack to some moral dilemmas I am grappling with. I share this because I think it highlights two important points.

First off, the post is endorsing the common marketing technique of framing. I am not an expert in the field, but am fairly confident this technique can influence people’s thoughts, feelings & behavior. Importantly, the framing exercise is not merely confined to the conclusion of the post: “choosing a new zero point”. A big part of the framing is the language the post employs. I am referring to the use of terms like “utility functions” and “positive affine transformations”, and, more broadly, explaining Rob Bensinger’s quote using a popular framework in economics & philosophy. I suspect this is just as significant to the behavioral effect the framing hack produces as the final recommendation the post makes.

Secondly, I wonder whether you believe “choosing a new zero point” is something we should do as often as possible, or whether there is a more limited scope of problems it applies to. Might we be normalizing the current state of the world, and suggesting a brighter future that we can strive for, but do not have to? What if small incremental changes are not enough? One example of this would be climate change. Another would be problems like genocide or slavery. Is it enough to be slightly better than the average citizen in a society that permits slavery?

If you value future people, why do you consider near term effects?

Great post, thank you.

If one accepts your conclusion, how does one go about implementing it? There is the work on existential risk reduction, which you mention. Beyond that, however, predicting any long-term effect seems to be a work of fiction. If you think you might have a vague idea of how things will turn out in 1,000 years, you must realize that even longer-term effects (1 million years? 1 billion?) dominate these. An omniscient being might be able to see the causal chain from our present actions to the far future, but we certainly cannot.

A question this raises for me is whether we should adjust our moral theories in any way. Given your conclusions, classic utilitarianism becomes a great idea that can never be implemented by us mere mortals. A bounded implementation, as MichaelStJules mentions, is probably preferable to ignoring utilitarianism completely, but that only answers this question by side-stepping it. I have come across philosophical work on “The Nonidentity Problem”, which suggests that our moral obligations more or less extend to our grandchildren, but personally I remain unconvinced by it.

I think there might be one area of human activity that, even given your conclusion, it is moral and rational to pursue: education. Not the contemporary kind, which amounts to exercising our memories to pass standardized tests; more along the lines of what the ancient Greeks had in mind when they thought about education. The aim would be somewhere in the ballpark of producing critical-thinking, compassionate, and physically fit people. These people will then be able to face the challenges they encounter, and which we cannot predict, in the best possible way. There is a real risk that humanity takes an unrecoverable turn for the worse, and while good education does not promise to prevent that, it increases the odds that we achieve the highest levels of human happiness and fulfillment as we set out to discover the farthest reaches of our galaxy.

I would love to hear your thoughts.

[Linkpost] - Mitigation versus Suppression for COVID-19

I know there is a death toll associated with economic recessions: basically, people get poorer, and that results in worse mental and physical healthcare. Are there any studies weighing those numbers against these interventions? Seems like a classic QALY problem to me, but I am an amateur in all of the relevant fields.
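To illustrate what I mean by a QALY problem, here is a minimal sketch of the comparison. Every figure below is a made-up placeholder, not an estimate from any study; the point is only the shape of the calculation:

```python
# Toy QALY comparison of suppression's benefit (COVID deaths averted)
# against its cost (recession-attributable deaths). All inputs are
# hypothetical placeholders, not estimates from any actual study.

def qalys(deaths: int, avg_life_years_lost: float, quality_weight: float = 1.0) -> float:
    """QALYs lost: deaths times quality-adjusted life-years foregone per death."""
    return deaths * avg_life_years_lost * quality_weight

# COVID deaths skew older, so assume fewer life-years lost per death (placeholder).
covid_qalys_saved = qalys(deaths=200_000, avg_life_years_lost=8)

# Recession deaths are spread across ages, so assume more life-years lost per death (placeholder).
recession_qalys_lost = qalys(deaths=30_000, avg_life_years_lost=25)

print(f"Net QALYs from suppression (toy numbers): {covid_qalys_saved - recession_qalys_lost:,.0f}")
```

The real question, of course, is what the right inputs are, which is exactly what I am asking about.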

Also, people keep suggesting we quarantine everyone above 50 or 60 and let everyone else catch the virus to create herd immunity. Is there any scientific validity behind such a course of action? Is it off the table simply because the “ageism” of the virus is only assumed at this point?
