All of UriKatz's Comments + Replies

Against opposing SJ activism/cancellations

I thought it worth pointing out that I mostly agree with this statement from one of your comments, while I strongly disagree with your main post. If this is the essence of your message, maybe the post requires clarification:

"Politics is the mind killer." Better to treat it like the weather and focus on the things that actually matter and we have a chance of affecting, and that our movement has a comparative advantage in.

To be clear, I think justice does actually matter, and any movement that would look past it to “more important” considerations scares me a litt

... (read more)
Cause Prioritization in Light of Inspirational Disasters

I have similar objections to this post as Khorton & cwbakerlee. I think it shows how the limits of human reason make utilitarianism a very dangerous idea (which may nevertheless be correct), but I don’t want to discuss that further here. Rather, let’s assume for the sake of argument that you are factually & morally correct. What can we learn from disasters, and the world’s reaction to them, that we can reproduce without the negative effects of the disaster? I am thinking of anything from faking a disaster (wouldn’t the conspiracy theorist love that

... (read more)
Climate Change Is Neglected By EA

Yes, you are correct and thank you for forcing me to further clarify my position (in what follows I leave out WAW since I know absolutely nothing about it):

  1. EA Funds, which I will assume is representative of EA priorities, has these funds: a) “Global Health and Development”; b) “Animal Welfare”; c) “Long-Term Future”; d) “EA Meta”. Let’s leave (d) aside for the purposes of this discussion.

  2. There is good reason to believe the importance and tractability of specific climate change interventions can equal or even exceed those of A & B. We have not done en

... (read more)
Climate Change Is Neglected By EA

The assumption is not that people outside EA cannot do good, it is merely that we should not take it for granted that they are doing good, and doing it effectively, no matter their number. Otherwise, looking at malaria interventions, to take just one example, makes no sense. Billions have gone, and will continue to go, in that direction even without GiveWell. So the claim that climate change work is or is not the most good has no merit without a deeper dive into the field and a search for incredible giving / working opportunities. Any shallow dive into this cause

... (read more)
[5] Ben_West (1y): I noticed Will listed AI safety and wild animal welfare (WAW), and you mentioned malaria. I'm curious if this is the crux – I would guess that Will agrees (certain types of) climate change work is plausibly as good as anti-malaria, and I wonder if you agree that the sort of person who (perhaps incorrectly) cares about WAW should consider that to be more impactful than climate change.
Developing my inner self vs. doing external actions

This is a great question and one everyone struggles with.

TL;DR: work on self-improvement daily, but be open to opportunities for acting now. My advice would indeed be to balance the two, but balance is not a 50-50 split. To be a top performer in anything you do, practice, practice, practice. The impact of a top performer can easily be 100x that of the rest of us, so the effort put into self-improvement pays off. Professional sports is a prime example, but research, engineering, academia, management, and parenting all benefit from working on yourself.

The trap

... (read more)

Wildlife conservation and wild animal welfare are emphatically not the same thing. "Tech safety" (which isn't a term I've heard before, and which on googling seems to mostly refer to tech in the context of domestic abuse) and AI safety are just as emphatically not the same thing.

Anyway, yes, in most areas EAs care about they are a minority of the people who care about that thing. Those areas still differ hugely in terms of neglectedness, both in terms of total attention and in terms of expertise. Assuming one doesn't believe that EAs are the only people wh

... (read more)
Climate Change Is Neglected By EA

I feel sometimes that the EA movement is starting to sound like metalheads (“climate change is too mainstream”), or evangelists (“in the days after the great climate change (Armageddon), mankind will colonize the galaxy (the 2nd coming), so the important work is the one that prevents x-risk (saves people’s souls)”). I say “amen” to that, and have supported AI safety financially in the past, but I remain skeptical that climate change can be ignored. What would you recommend as next steps for an EA member who wants to learn more and eventually act? What are the AMF or GD of climate change?

Nothing you've written here sounds like anything I've heard anyone say in the context of a serious EA discussion. Are there any examples you could link to of people complaining about causes being "too mainstream" or using religious language to discuss X-risk prevention?

The arguments you seem to be referring to with these points (that it's hard to make marginal impact in crowded areas, and that it's good to work toward futures where more people are alive and flourishing) rely on a lot of careful economic and moral reasoning about the real world, and I think

... (read more)
[2] mchr3k (1y): The Effective Environmentalism group [https://www.facebook.com/groups/effectiveenvironmentalism/] maintains a document of recommended resources [https://docs.google.com/document/d/1QoAdW2la3fKM9WbzST4Vrm5WgV9P9dhe6bXKw-J1Vqw/edit#heading=h.hh7l0dmi107w].
Climate Change Is Neglected By EA

I wonder how much of the assessment that climate change work is far less impactful than other work relies on the logic of “low probability, high impact”, which seems to be the most compelling argument for x-risk. Personally, I generally agree with this line of reasoning, but it leads to conclusions so far away from common sense and intuition, that I am a bit worried something is wrong with it. It wouldn’t be the first time people failed to recognize the limits of human rationality and were led astray. That error is no big deal as long as it does not have a

... (read more)
[1] SebK (1y): That sounds right to me. (And Will, your drawbridge metaphor is wonderful.) My impression is that there already is some grumbling about EA being too elitist/out-of-touch/non-diverse/arrogant/navel-gazing/etc., and discussions in the community about what can be done to fix that perception. Add to that Toby Ord's realization (in his well-marketed book) that hey, perhaps climate change is a bigger x-risk (if indirectly) than he had previously thought, and I think we have fertile ground for posts like this one. EA's attitude has already shifted once (away from earning-to-give); perhaps the next shift is an embrace of issues that are already in the public consciousness, if only to attract more diversity into the broader community. I've had smart and very morally-conscious friends laugh off the entirety of EA as "the paperclip people", and others refer to Peter Singer as "that animal guy". And I think that's really sad, because they could be very valuable members of the community if we had been more conscious to avoid such alienation. Many STEM-type EAs think of PR considerations as distractions from the real issues, but that might mean leaving huge amounts of low-hanging utility fruit unpicked. Explicitly putting present-welfare and longtermism on equal footing seems like a good first step to me.
Choosing the Zero Point

In my own mind I would file this post under “psychological hacks”, a set of tools that can be extremely useful when used correctly. I am already considering how to apply this hack to some moral dilemmas I am grappling with. I share this because I think it highlights two important points.

First off, the post is endorsing the common marketing technique of framing. I am not an expert in the field, but am fairly confident this technique can influence people’s thoughts, feelings & behavior. Importantly, the framing exercise is not merely confined to the con

... (read more)
If you value future people, why do you consider near term effects?

Great post, thank you.

If one accepts your conclusion, how does one go about implementing it? There is the work on existential risk reduction, which you mention. Beyond that, however, predicting any long-term effect seems to be a work of fiction. If you think you might have a vague idea of how things will turn out in 1,000 years, you must realize that even longer-term effects (a million years? a billion?) dominate these. An omniscient being might be able to see the causal chain from our present actions to the far future, but we certainly cannot.

A question this raises for me is ... (read more)

[1] Alex HT (1y): Pleased you liked it and thanks for the question. Here are my quick thoughts: That kind of flourishing-education sounds a bit like Bostrom's evaluation function described here: http://www.stafforini.com/blog/bostrom/ [http://www.stafforini.com/blog/bostrom/] Or steering capacity described here: https://forum.effectivealtruism.org/posts/X2n6pt3uzZtxGT9Lm/doing-good-while-clueless [https://forum.effectivealtruism.org/posts/X2n6pt3uzZtxGT9Lm/doing-good-while-clueless] Unfortunately he doesn't talk about how to construct the evaluation function, and steering capacity is only motivated by an analogy. I agree with you/Bostrom/Milan that there are probably some things that look more robustly good than others. It's a bit unclear how to get these but something like: 'Build models of how the world works by looking to the past and then updating based on inside view arguments of the present/future. Then take actions that look good on most of your models' seems vaguely right to me. Some things that look good to me are: investing, building the EA community, reducing the chance of catastrophic risks, spreading good values, getting better at forecasting, building models of how the world works. Adjusting our values based on them being difficult to achieve seems a bit backward to me, but I'm motivated by subjective preferences, and maybe it would make more sense if you were taking a more ethical/realist approach (e.g. because you expect the correct moral theory to actually be feasible to implement).
[Linkpost] - Mitigation versus Supression for COVID-19

I know there is a death toll associated with economic recessions. Basically, people get poorer and that results in worse mental and physical healthcare. Are there any studies weighing those numbers against these interventions? Seems like a classic QALY problem to me, but I am an amateur in any of the relevant fields.

Also, people keep suggesting to quarantine everyone above 50 or 60 and let everyone else catch the virus to create herd immunity. Is there any scientific validity behind such a course of action? Is it off the table simply because the “ageism” of the virus is only assumed at this point?

[4] Tsunayoshi (2y): Not an expert myself, but the naive calculations that I have seen with regards to herd immunity are incorrect. The precise numbers are just to illustrate the thought process. "We need 60-70% of people to be immune, people 65 and younger make up 65% of the population, so if they catch it we have achieved herd immunity to protect the elderly." The flaw with that reasoning is that the immune people need to be essentially randomly distributed in the population. However, the elderly make up a sub-population with their own distinct networks, in which the virus can spread after the quarantines are lifted. It also would probably not work in much (probably the larger part) of the world, where the elderly live together with their families, unless one would relocate them to specially made quarantines.
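The clustering point can be made concrete with a toy calculation (a rough sketch with invented numbers, not an epidemiological model): the classic herd-immunity threshold 1 − 1/R0 assumes immunity is spread randomly, but if all the immunity sits in the under-65 subpopulation, the effective reproduction number inside the elderly contact network barely drops.

```python
# Toy illustration of why clustered immunity fails to protect a subgroup.
# All numbers here are assumptions chosen for illustration only.

R0 = 2.5                       # assumed basic reproduction number
herd_threshold = 1 - 1 / R0    # classic threshold, assumes random mixing

# Scenario: everyone under 65 (65% of the population) becomes immune.
immune_fraction_overall = 0.65

# Population-level effective R if that immunity were randomly distributed:
R_random = R0 * (1 - immune_fraction_overall)

# But within the elderly subpopulation's own contact network, almost no
# one is immune, so transmission there looks like the start of an epidemic:
immune_fraction_elderly_network = 0.05  # assumed small overlap
R_elderly = R0 * (1 - immune_fraction_elderly_network)

print(f"Herd-immunity threshold (random mixing): {herd_threshold:.0%}")
print(f"Effective R, random mixing:              {R_random:.2f}")   # below 1
print(f"Effective R inside elderly network:      {R_elderly:.2f}")  # well above 1
```

With random mixing, 65% immunity pushes the effective R below 1; concentrated in one subgroup, it leaves the other subgroup's network almost fully susceptible, which is exactly the flaw the comment describes.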

Brendon,

First of all great article.

I just wanted to point out that I am looking for a robo-advisor, and when I talked with WealthSimple, they wrote back the following:

"we do support the option to gift securities without selling the asset. There is a short form via docusign we'll send you anytime you'd like to take advantage of this option."

[1] Brendon_Wong (3y): Thanks! WealthSimple's support for the donation of appreciated securities is not listed online, so this is very useful information for EAs to have as they evaluate investment options. Do they explicitly support this in the United States, and do they impose any restrictions on asset donations?
Working at EA organizations series: Effective Altruism Foundation

Hi,

Could you by any chance use a few hours of software development each week from volunteers?

Effective Altruism and Religious Faiths: Mutually Exclusive Entities, or an Important Nexus to Explore?

I love the depth you went to with this post, and just wanted to share a bit of personal experience. In the past few years my religious practice has flourished, as has my involvement with EA. I doubt this is an accidental coincidence, especially since my highest aspirations in life are a combination I took from EA and religion (sometimes I refer to them as the guiding or organizing principles of my life). Religion gives me the emotional and spiritual support I need, EA fills in the intellectual side and provides practical advice I can implement here and now... (read more)

Maximizing long-term impact

For the sake of argument I will start with your definition of good and add that what I want to happen is for all sentient beings to be free from suffering, or for all sentient beings to be happy (personally I don't see a distinction between these two propositions, but that is a topic for another discussion).

Being general in this way allows me to let go of my attachment to specific human qualities I think are valuable. Considering how different most people's values are from my own, and how different my needs are from Julie's (my canine companion), I think... (read more)

[0] Squark (7y): If your only requirement is for all sentient beings to be happy, you should be satisfied with a universe completely devoid of sentient beings. However, I suspect you wouldn't be (?) Regarding definition of good, it's pointless to argue about definitions. We should only make sure both of us know what each word we use means. So, let's define "koodness(X)" to mean "the extent to which things X wants to happen actually happen" and "gudness" to mean "the extent to which what is happening to all beings is what they want to happen" (although the latter notion requires clarifications: how do we average between the beings? do we take non-existing beings into account? how do we define "happening to X"?) So, by definition of kood, I want the future world to be kood(Squark). I also want the future world to be gud, among other things (that is, gudness is a component of koodness(Squark)). I disagree with Mill. It is probably better for a human being not to become a pig, in the sense that a human being prefers not becoming a pig. However, I'm not at all convinced a pig prefers to become a human being. Certainly, I wouldn't want to become a "Super-Droid" if it comes at a cost of losing my essential human qualities.
Maximizing long-term impact

Great thought provoking post, which raises many questions.

My main concern is perhaps due to the limitations of my personal psychology: I cannot help but heavily prioritize present suffering over future suffering. I heard many arguments why this is wrong, and use very similar arguments when faced with those who claim that "charity begins at home". Nevertheless, the compassion I have for people and animals in great suffering overrides my fear of a dystopian future. Rational risk / reward assessments leave me unconvinced (oh, why am I not a superint... (read more)

[1] Squark (7y): Hi Uri, thanks for the thoughtful reply! It is not necessarily bad for future sentients to be different. However, it is bad for them to be devoid of properties that make humans morally valuable (love, friendship, compassion, humor, curiosity, appreciation of beauty...). The only definition of "good" that makes sense to me is "things I want to happen" and I definitely don't want a universe empty of love. A random UFAI is likely to have none of the above properties.
Outreaching Effective Altruism Locally – Resources and Guides

For anyone who might read this thread in the future I felt an update is in order. I revisited my numbers, and concluded that opening a local outreach EA chapter is very cost-effective. The reward/risk ratio is high, even when the alternative is entrepreneurship, assuming the time you invest in outreach does not severely hurt your chances of success and high profits.

Previously I wrote that: "Assuming after 1 year I get 10 people to take GWWC's pledge, which I consider phenomenal success, my guesstimates show the expected dollars given to charity will b... (read more)

Open Thread 6

I will start my reply from the end. Your intuition is right. My investment will simply go into another shareholder's pocket, and the company, socially responsible or otherwise, will see none of it. However, this will also decrease the company's cost of capital: when they go to the markets for additional funds, investors will know there is a market for these stocks and will be willing to pay more for them. I have no data on the extent of this impact.

As for your AMF example, I have no way of quantifying the good my SRI (socially responsible investing) may d... (read more)

[0] RyanCarey (7y): Cool. Yeah I don't know how much harm normal stocks do compared to socially responsible ones, although I imagine they both do a lot of good on average. Philosophically, I'm not sold that "do no harm" is decisive here, because you're doing harm by earning less money and withholding donations to AMF in a sense. Good luck!
Open Thread 6

I have a small amount of money I want to invest. If all goes well, I will eventually donate the appreciated stock, but there is a small chance I might need the money so I don't want to donate it now. I was wondering what would be more effective altruism: to focus on socially responsible investing at the possible cost of lower returns, or to maximize returns so I can donate a larger sum to the most effective charities in the end? I stumbled upon this article on the subject, which I find interesting, but wanted to hear more opinions: https://blog.wealthfront... (read more)

[2] RyanCarey (7y): Interesting. I haven't read much analysis of this. One question you can ask is: assuming ordinary shares have a greater return, if you donate that dividend to AMF, will the world be better off? We think AMF can save a life for $3k (or $10-15k if it's affected by inflation). And our guess is that after investing $100k in normal shares for 30 years, you're $50k ahead. That's 17 lives (or 3-5). On the other hand, you're giving up the opportunity to give more responsible companies $100k of investment for 30 years. So the question would be - how good are these companies, and is funding them better than saving a few lives? Also, intuitively, it seems like there should be some price elasticity situation here, where whichever shares you buy, some other people will sell them off, partially offsetting your direct impact. I'm not sure how that works with shares though. Anyway, if you see any more useful info about it all, do report back!
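Ryan's back-of-the-envelope can be written out explicitly. All figures below are the assumptions from his comment (an extra $50k return from ordinary shares over 30 years; $3k per life saved, or $10-15k after inflation), not independent estimates:

```python
# Back-of-envelope from the comment above; every figure is an assumption
# taken from that comment, not an independent estimate.
cost_per_life_low = 3_000               # optimistic cost-per-life figure
cost_per_life_range = (10_000, 15_000)  # inflation-adjusted range
extra_return = 50_000                   # assumed extra gain of ordinary vs SRI shares

lives_optimistic = extra_return / cost_per_life_low
lives_pessimistic = extra_return / cost_per_life_range[1]
lives_moderate = extra_return / cost_per_life_range[0]

print(f"Lives saved at $3k/life:     ~{lives_optimistic:.0f}")   # ~17
print(f"Lives saved at $10-15k/life: {lives_pessimistic:.1f} to {lives_moderate:.1f}")
```

This reproduces the "17 lives (or 3-5)" in the comment; the open question Ryan raises is whether 30 years of $100k invested in more responsible companies beats that counterfactual.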
Open Thread 6

If you have a chance within the next 22 hours, you should go to the Project for Awesome website (http://www.projectforawesome.com/) and vote for effective charities. Search for GD, DtW & AMF.

Project for Awesome is an annual YouTube project run by the Vlogbrothers, that raises awareness and money for charity. The participants (video creators, viewers, donors, etc.) are probably relatively young and this is a great way of introducing EA to them.

Open thread 5

Should we try to make a mark on the Vlogbrothers' "Project 4 Awesome"? It can expose effective altruism to a wide and, on average, young audience.

I would love to help in any way possible, but video editing is not my thing...

https://www.youtube.com/watch?v=kD8l3aI0Srk

[2] Roxanne_Heston (7y): Hi UriKatz, there's a group of us trying to do just that, and we'd love to have your help. Join the EA Nerdfighters Facebook group and I'll brief you on what we've been up to. :) https://www.facebook.com/groups/254657514743021/ [https://www.facebook.com/groups/254657514743021/]
Certificates of impact

Full disclosure: I fear I do not completely understand your idea. Having said that, I hope my comment is at least a little useful to you.

Think about the following cases: (1) I donate to an organization that distributes bednets in Africa and receive a certificate. I then trade that certificate for a new pair of shoes. My money, which normally can only be used for one of these purposes, is now used for both. (2) I work for a non-profit and receive a salary. I also receive certificates. So I am being paid double?

The second case is easily solved, just give th... (read more)

[0] Paul_Christiano (7y): If you buy and then sell a certificate, you aren't funding the charity, the ultimate holder of the certificate is. They will only buy the certificate if they are interested in funding the charity. You could pretend you are funding the charity, but that wouldn't be true---the person you sold the certificate to would otherwise have bought it from someone else, perhaps directly from the charity. So your net effect on the charity's funding is 0. I could just as well give some money to my friend and pretend I was funding an effective charity. (I'm setting aside the tax treatment for now.) You would pay an employee with certificates for the same reason a company might pay an employee in equity. If there is no secondary market, this can be better for the company for liquidity reasons, and can introduce a component of performance pay. But even if there is a secondary market (e.g. for Google stock), it can still be a financially attractive way for a company to pay a large part of its salary, because it passes some of the risk on to the employee without having to constantly adjust dollar-denominated salaries. (There are also default effects, where paying employees in certificates would likely lead to them holding some certificates.)
Outreaching Effective Altruism Locally – Resources and Guides

Thank you for your offer to help me further, but having reviewed the link posted by Vincent, I am certain I do not have the time to start a local chapter right now.

[0] Ilya (7y): I did not offer you to start a local chapter.
Outreaching Effective Altruism Locally – Resources and Guides

Hi Ilya, thanks for your reply. I may have misunderstood you, but your example seems not to take into account the overhead of managing a larger team, or the diminishing returns of each additional staff member. This goes to the heart of my question: what would be the most effective way for each individual to further EA causes? Should they work full time and donate more, or work part time and do other things (this question may only apply to those who are earning to give). This question can best be determined on a case by case basis of course. It relates to ... (read more)

[1] Ilya (7y): Hi Uri, I guess there is no modern (21st century) data yet on the potential return of localized outreach because the Oxford-style EA mentality is very young, though the great minds of a thousand years ago found the best answers to the challenge of the best purpose/meaning of human life. My intuition tells me that the most effective way to actualize EA causes would be through the creation of small teams/groups/collectives capable of sustainable economic and ideological exchange with the environment. My assumption is that members of such a group recognize their need for harmonious interconnection. Think of a living organism in which all organs work in unity and harmony. A more or less normal organism does not have what you call "management overhead". The complexity of the physical human body, with all its "major and minor" organs, comprised of hundreds of millions of cells, is much greater than the complexity of a small (3 to 10 person) team. If you are interested and free in principle of forming a small project team for sustainable propagation, I'd like to chat with you on Skype.
Outreaching Effective Altruism Locally – Resources and Guides

Thank you for this very important post, this is something I have been wanting to do for a very long time.

Do you know of any work that has been done comparing the effectiveness of outreach to other activities effective altruism supporters can take? I refer specifically to the limited kind of outreach suggested here, such as opening a local chapter, and not the kind of outreach Peter Singer is capable of.

I will give you an example of what I am thinking about.

A year ago I changed my career plan and started a technology startup. If my startup succeeds, it wil... (read more)

[2] weeatquince (7y): Hi Uri, Unfortunately I don't know the answer to your question. As you suggest, the answer might vary from person to person or situation to situation and be skewed by effects with small probabilities but huge impacts. The only thing that I would say however is to disagree with your comment that "work on an outreach program ..[will] require significant time and effort". It requires as much or as little time and effort as you are willing to put in. For example just creating a Facebook group in London and creating a social event every month has helped grow the movement in London. Organising the occasional evening with EAs has been shown to be not too much more effort than organising an evening with friends. Happy to give you some tips on growing EA in minimal time - feel free to message me!
[2] Vincent_deB (7y): Perhaps weeatquince could ask someone from The High Impact Network [http://www.thehighimpactnetwork.org/] to comment?
[-1] Ilya (7y): Uri, it is my understanding that a better EA model of a startup would be a collective one, rather than an individualistic one. Your business will grow faster by creating and growing a larger team. For example, if your current team needs 12 months of full-time work to complete the startup phase, then by increasing the team 3 times, the work may be accomplished in 3 months. I am interested to learn the nature of your startup for more qualified communications with you.