UriKatz

Comments

EA for Jews: Launch and Call for Volunteers

What would you say are the biggest benefits of being part of an EA faith group?

The Elevator Pitch for why Mental Health should get more attention in EA

From a broad enough perspective, no cause area EA deals with is neglected. Poverty? Billions are donated annually. AI? Every other startup uses it. So we narrow it down: poverty -> malaria -> bednets.

There is every reason to believe mental health contains neglected yet tractable and highly impactful areas, both because of the size of the problem as you outline it, and because mental health touches all of us, all the time, in everything we do (when by health we mean not just the absence of disease but the maximization of wellbeing).

I think EA concepts are here to challenge us. Being a clinical psychiatrist is amazing: you can probably help hundreds of people. Could you do more? What is going on in other parts of the globe? Where is humanity headed in the future? This challenge does not have to be burdensome; it can be inspiring. It should certainly not paralyze you and prevent you from doing any good at all. Like a mathematician obsessed with proving a theorem, or a physicist relentlessly searching for the theory of everything, we can do other work along the way without ever giving up on the challenge.

The Elevator Pitch for why Mental Health should get more attention in EA

Hey @Dvir, mental health is a (non-professional) passion of mine, so I am grateful for any attention given to it in EA. I wonder if you think a version 2.0 of your pitch could be written, one that takes into account the three criteria below. Right now you seem to have nailed the first, but I don't see the case for 2 & 3:

  1. Great in scale (it affects many lives, by a great amount)
  2. Highly neglected (few other people are working on addressing the problem)
  3. Highly solvable or tractable (additional resources will do a great deal to address it)

(Criteria from https://80000hours.org/articles/problem-framework/)

I think that is what HLI is trying to do:

https://forum.effectivealtruism.org/posts/uzLRw7cjpKnsuM7c3/hli-s-mental-health-programme-evaluation-project-update-on
https://forum.effectivealtruism.org/posts/v5n6eP4ZNr7ZSgEbT/jasper-synowski-and-clare-donaldson-identifying-the-most

AMA: Jeremiah Johnson, Director/Founder of the Neoliberal Project

I am not sure about the etiquette of follow-up questions in AMAs, but I’ll give it a go:

Why does being mainstream matter? If, for example, s-risk is the highest-priority cause to work on, and the work of a few mad scientists is what is needed to solve the problem, why worry about the general public’s perception of EA as a movement, or of EA ideas? We can look at growing the movement as growing the number of top performers and game-changers, in their respective industries, who share EA values. Let the rest of us enjoy the benefits of their labor.

Why I am probably not a longtermist

Well, it wouldn’t work if you said “I want a future with less suffering, so I am going to evaluate my impact based on how many paper clips exist in the world at a given time”. Bostrom selects collaboration, technology and wisdom because he thinks they are the most important indicators of a better future and reduced x-risk. You are welcome to suggest other parameters for the evaluation function, of course, but not every parameter works. If you read the analogy to chess in the link I posted, it will become much clearer how Bostrom is thinking about this.

(If anyone reading this comment knows of developments in Bostrom’s thinking since this lecture, I would very much appreciate a reference.)

Why I am probably not a longtermist

Hi Khorton,

If by “decide” you mean control the outcome in any meaningful way, I agree: we cannot. However, I think it is possible to make a best-effort attempt to steer things towards a better future (in small and big ways). Mistakes will be made, progress is never linear, and we may even fail altogether, but the attempt is really all we have, and there is reason to believe there is a non-trivial probability that our efforts will bear fruit, especially compared to not trying, or to aiming at something else (like maximum power in the hands of a few).

For a great exploration of this topic I refer you to this talk by Nick Bostrom: http://www.stafforini.com/blog/bostrom. The tl;dr is that we can come up with evaluation functions for states of the world that, while not yet our desired outcome, indicate that we are probably moving in the right direction. We can then figure out how to get to the very next state, in the near future. Once there, we will chart a course for the next state, and so on. Bostrom singles out technology, collaboration and wisdom as traits humanity will need a lot of in the better future we are envisioning, so he suggests we can measure them with our evaluation function.

Why I am probably not a longtermist

I am largely sympathetic to the main thrust of your argument (borrowing from your own title: I am probably a negative utilitarian), but I have two disagreements that ultimately lead me to a very different conclusion on longtermism and global priorities:

  1. Why do you assume we cannot affect the future beyond 100 years? There are numerous examples of humans doing just that: in science and technology (the wheel, electricity, gunpowder), government (the US constitution), religion (the Buddhist Pali canon, the Bible, the Quran), philosophy (utilitarianism), and so on. One can even argue that the works of Shakespeare have affected people for hundreds of years.
  2. Though humanity is not inherently awesome, it does not inherently suck either. Humans have the potential to do amazing things, for good or evil. If we can build a world with a lot less war and crime and a lot more collaboration and generosity, isn't it worth a try? In Parfit's beautiful words: "Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea ... Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists."

Against opposing SJ activism/cancellations

I thought it worth pointing out that I mostly agree with this statement from one of your comments, even though I strongly disagree with your main post. If this was the essence of your message, it may need clarification:

"Politics is the mind killer." Better to treat it like the weather and focus on the things that actually matter and we have a chance of affecting, and that our movement has a comparative advantage in.

To be clear, I think justice does actually matter, and any movement that would look past it to “more important” considerations scares me a little, but I strongly agree with the “weather” and “comparative advantage” parts of your statement. We should practice patience and humility. By patience I mean not jumping into the hot-topic conversation of the day, no matter how heated the debate. By humility I mean recognizing how much effort we spend learning about animal advocacy, malaria, x-risk factors, etc.; that is why we can feel confident speaking and acting on them. This confidence does not automatically transfer to other issues. Merely recognizing how difficult it is to get altruism right, compared to how much ineffective altruism there is, should be a warning signal when we wade out of our domains of expertise.

I think the middle ground here is not to allow people to bully you out of speaking, but to speak only when you have something worth saying that you have considered carefully (preferably with some input from peers). So basically, as others have already mentioned: “What would Peter Singer do?”

Cause Prioritization in Light of Inspirational Disasters

I have similar objections to this post as Khorton & cwbakerlee. I think it shows how the limits of human reason make utilitarianism a very dangerous idea (one which may nevertheless be correct), but I don’t want to discuss that further here. Rather, let’s assume for the sake of argument that you are factually and morally correct. What can we learn from disasters, and the world’s reaction to them, that we can reproduce without the negative effects of the disaster? I am thinking of anything from faking a disaster (wouldn’t conspiracy theorists love that) to increasing international cooperation. What are the key characteristics of a pandemic or a war that make the world change for the better? Is the suffering an absolute necessity?

Climate Change Is Neglected By EA

Yes, you are correct, and thank you for forcing me to further clarify my position (in what follows I leave out WAW, since I know absolutely nothing about it):

  1. EA Funds, which I will assume is representative of EA priorities, has these funds: a) “Global Health and Development”; b) “Animal Welfare”; c) “Long-Term Future”; d) “EA Meta”. Let’s leave D aside for the purposes of this discussion.

  2. There is good reason to believe the importance and tractability of specific climate change interventions can equal or even exceed those of A & B. We have not done enough research to determine if this is the case.

  3. The arguments in favor of C being the only area we should be concerned with, or the area we should be most concerned with, are:

I) are reminiscent of other arguments in the history of thought that compel us (humans) because we do not account for the limits of our own rationality. I could say a lot more about this another time; suffice it to say here that in the end I cautiously accept these arguments and believe x-risk deserves a lot of our attention.

II) are popular within this community for psychological as well as purely rational reasons. There is nothing wrong with that, and it might even be needed to build a dedicated community.

For these reasons I think we are biased towards C, and should employ measures to correct for this bias.

  4. None of these priorities is neglected by the world, but certain interventions or research opportunities within them are. EA has spent an enormous amount of effort finding opportunities for marginal value-add in A, B & C.

  5. Climate change should be researched just as much as A & B. One way of accounting for the bias I see towards C is to divert a certain portion of resources to climate change research despite our strongly held beliefs. I simply cannot accept the conclusion that unless climate change renders our planet uninhabitable before we colonize Mars, we have better things to worry about. That sounds absurd in light of the fact that certain detrimental effects of climate change are already happening, and that even the best-case future scenarios include a lot of suffering. It might still be right, but its absurdity means we need to give it more attention.

What surprises me the most from the discussion of this post (and I realize its readers are a tiny sample of the larger community) is that no one has come back with: “we did the research years ago, we could find no marginal value-add. Please read this article for all the details”.
