All of UriKatz's Comments + Replies

I used a meal logging app once and the database it had was incredible, though not perfect. If the item had a barcode, the app had its nutritional data. So an extension, an agent, or even an app with a camera could all work. Of course, I live in the US.

An added benefit of these projects could be to demonstrate that EA is not anti-AI capabilities, just pro safe and ethical development and deployment of AI.

1
Itamar Menuhin-Gruman
I absolutely agree! I also appreciate this addition; I didn't think this would be of value to the community.

Reading the discussions here I cannot shake the intuition that utilitarianism with very big numbers is once again resulting in weird conclusions. AW advocates are basically describing Earth as hell with a tiny sanctuary reserved for humans that are better off than average. I need more convincing. While I cannot disagree with the math or data, I think better theories of animal suffering are needed. At what point, for example, is a brain sufficiently developed to experience suffering in a morally relevant way, such that we should care about it? Are there qua... (read more)

5
Ariel Simnegar 🔸
Hey Uri, thanks for your transparent comment! The cost-effectiveness estimates of cage-free campaigns being orders of magnitude more cost-effective than GiveWell Top Charities have several bases:

  1. The Welfare Footprint Project's incredibly exhaustive deep dive into every aspect of an egg-laying hen's life: "Overall, an average of at least 275 hours of disabling pain, 2,313 hours of hurtful pain and 4,645 hours of annoying pain are prevented for each hen kept in an aviary instead of CC during her laying life, and 1,410 hours of hurtful pain and 4,065 hours of annoying pain prevented for each hen kept in an aviary instead of a FC during her laying life."
  2. Welfare range comparisons between humans and chickens. Rethink Priorities' Welfare Range Project focused on finding proxies for consciousness and welfare, and enumerating which proxies various animals share with humans. Their methodology found that chickens feel pain approximately 1/3 as intensely as humans do. (Of course, different methodologies may give quite different answers.)
  3. Doing the math with the suffering prevented by cage-free campaigns and Rethink's welfare ranges will give a cost-effectiveness multiplier on the order of 1000x. But even if you assign chickens a welfare range like 0.001x that of humans, you're still going to get a cost-effectiveness multiplier on the order of 10x.
  4. Similarly, if you ignore Rethink's research and instead derive a welfare range from neuron counts (to penalize chickens for their small brains), you still get cage-free campaigns outperforming GiveWell Top Charities by an order of magnitude.

All of this is why I am quite confident that cage-free campaigns are indeed far more cost-effective than GiveWell-recommended charities.
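To make the structure of points 2-4 concrete, here is a minimal sketch of that calculation. Every input below is an illustrative placeholder, not a figure from Welfare Footprint or Rethink Priorities; only the shape of the comparison is the point.

```python
# Illustrative sketch only: placeholder inputs chosen so the headline
# multiplier lands near the ~1000x order of magnitude discussed above.

def cage_free_multiplier(hen_pain_hours_per_dollar: float,
                         pain_intensity: float,
                         welfare_range: float,
                         human_benefit_per_dollar: float) -> float:
    """Ratio of hen suffering averted per dollar (weighted by pain
    intensity and the hen/human welfare range) to the benefit per
    dollar of a GiveWell top charity, in the same units."""
    hen_benefit = hen_pain_hours_per_dollar * pain_intensity * welfare_range
    return hen_benefit / human_benefit_per_dollar

# Placeholder assumptions: 60 hen-hours of pain averted per dollar,
# weighted at 0.5 intensity; 0.01 human quality-adjusted hours per
# dollar for a GiveWell top charity.
print(cage_free_multiplier(60, 0.5, 1 / 3, 0.01))   # ~1000x with Rethink's 1/3 range
print(cage_free_multiplier(60, 0.5, 0.001, 0.01))   # ~3x even at a 0.001 range
```

With these placeholders a 0.001 welfare range gives ~3x rather than the comment's ~10x (the published estimate adjusts other inputs as well), but the qualitative point survives: because the multiplier scales linearly with the welfare range, it stays above 1x even under severe discounts.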

You are right that a lot of people believing something doesn’t make it true, but I don’t think that’s what the OP is suggesting. Rather, if a lot of EAs believe enlightenment is possible and reduces suffering, it is strange that they don’t explore it further. I would suggest that your attitude is the reason why. To label it religious, and religion as the antithesis of empirical evidence, is problematic in its own right, but in any case there is plenty of secular interest in this topic, and plenty of empirical research on it. It is also worth considering tha... (read more)

With regards to the 3rd point above, most of these studies compare meditation, not enlightenment, to other mental health interventions. Their finding that meditation is no better than CBT is not a negative. Since there is no “one-size-fits-all” psychotherapy, having more options should be a net positive for mental health. Also, if meditation practice can lead to something more, even if that thing is not the end of all suffering, and even if it is rare, that increases the value of meditation practice.

1
huw
I agree that this finding is not a negative, and that including mindfulness should be a net positive for mental health interventions (especially since it'll adapt well to a lot of cultural contexts). The reason I included this null-ish result was to indicate that Vipassana-style mindfulness is unlikely to produce measurable 'enlightenment' when scaled up as an intervention (otherwise, where is it hiding in these studies?). The burden of proof is with mindfulness proponents to find evidence that their method produces the superior effects they claim it does (a) when scaled up and (b) within a time-frame that would make it cost-effective. (FWIW I think that it probably produces non-inferior effects at scale on comparable timeframes, and for a small fraction of the population might achieve superiority after some time with the method, but this wouldn't make it a superior candidate for a global health intervention)

I applaud you for writing this post.

There is a huge difference between statement (a): "AI is more dangerous than nuclear war", and statement (b): "we should, as a last resort, use nuclear weapons to stop AI". It is irresponsible to downplay the danger and horror of (b) by claiming Yudkowsky is merely displaying intellectual honesty by making explicit what treaty enforcement entails (not least because everyone studying or working on international treaties is already aware of this, and is willing to discuss it openly). Yudkowsky is making a clear and p... (read more)

It seems to me that no amount of argument in support of individual assumptions, or a set of assumptions taken together, can make their repugnant conclusions more correct or palatable. It is as if Frege’s response to Russell’s paradox were to write a book exalting the virtues of set theory. Utility monsters and utility legions show us that there is a problem either with human rationality or human moral intuitions. If not them, then the repugnant conclusion does for sure, and it is an outcome of the same assumptions and the same reasoning. Personally, I refuse t... (read more)

the quest for an other-centered ethics leads naturally to utilitarian-flavored systems with a number of controversial implications.

This seems incorrect. Rather, it is your 4 assumptions that “lead naturally” to utilitarianism. It would not be hard for a deontologist to be other-focused simply by emphasizing the a priori normative duties that are directed towards others (I am thinking here of Kant’s duties matrix: perfect / imperfect & towards self / towards others). The argument can even be made, and often is, that the duties that one has towards on... (read more)

Only read the TL;DR and the conclusion, but I was wondering why the link between jhana meditation and brain activity matters? Even if we assume materialism, the Path in its various forms (I am intimately familiar with the Buddhist one) always includes other steps, and only taken together do they lead to increased happiness and mental health. My thinking is that we should go in one of two directions: direct manipulation of the brain, or a holistic spiritual approach. This middle way, ironically, seems to leave out the best of both worlds.

Some ideas for why this research might matter:

  1. Research in this area could shed light on (some of) the mechanisms of mental wellbeing: given that practitioners report that these states are extremely high valence, and generally useful to psychological wellbeing, perhaps they can advance research more generally into depression and other disorders/conditions. More specific to your points, if you want to do direct manipulation of the brain, how do you know which areas to manipulate? Studying the neural activity associated with Jhana could provide a target state for
... (read more)

I am responding to the newer version of this critique found [here](https://www.radicalphilosophy.com/article/against-effective-altruism).

Someone needs to steel man Crary's critique for me, because as it stands I find it very weak. The way I understand this article:

  1. The institutional critique - Basically claims 2 things: a) EAs are searching for their keys only under the lamppost. This is a great warning for anyone doing quantitative research and evaluation. EAs are well aware of it and try to overcome the problem as much as possible; b) EA is addressing

... (read more)

What would you say are the biggest benefits of being part of an EA faith group?

7
Gordon Seidoh Worley
I think of it as coming from two angles. One is that it's a form of community building: it exposes folks to EA ideas who might otherwise not engage with them, by presenting those ideas in a language they are familiar with. Two, it's a way for EAs who are religious to explore how EA impacts other spheres of their life. I think it's also nice to have community by creating a sense of belonging. With EA being such a secular space normally, having a way to learn you're not the only one trying to combine EA and the practice of a religion is nice. Good to have folks to talk to, etc.

From a broad enough perspective no cause area EA deals with is neglected. Poverty? Billions donated annually. AI? Every other startup uses it. So we start narrowing it down: poverty -> malaria -> bednets.

There is every reason to believe mental health has neglected yet tractable and highly impactful areas, because of the size of the problem as you outline it, and because mental health touches all of us all the time in everything we do (when by health we don’t just mean the absence of disease but the maximization of wellbeing).

I think EA concepts are h... (read more)

7
Dvir Caspi
That's a great perspective, appreciate it!! Inspires me. Tiny side note - clinical psychologist not psychiatrist (psychiatrists are also in mental health, but are medical doctors, and can prescribe medications). 

Hey @Dvir, mental health is a (non-professional) passion of mine, so I am grateful for any attention given to it in EA. I wonder if you think a version 2.0 of your pitch can be written, which takes into account the 3 criteria below. Right now you seem to have nailed down the 1st, but I don't see the case for 2 & 3:

  1. Great in scale (it affects many lives, by a great amount)
  2. Highly neglected (few other people are working on addressing the problem)
  3. Highly solvable or tractable (additional resources will do a great deal to address it) (https://80000hours.
... (read more)
3
Dvir Caspi
Hi Uri, thanks for your reply. :) While mental health is neglected in terms of government funds, it is not neglected at all in terms of the number of people who are interested in this field. Many are. So by this criterion it doesn't line up with the EA mindset. Regarding "highly solvable or tractable", I think this is very challenging to evaluate... But this could and should be a further discussion. Regarding the Happier Lives Institute, I have read some of their posts and reports, but admit that I am not familiar enough. Mental Health Innovation Network is also a great organization in this space.

I am not sure about the etiquette of follow up questions in AMAs, but I’ll give it a go:

Why does being mainstream matter? If, for example, s-risk is the highest priority cause to work on, and the work of a few mad scientists is what is needed to solve the problem, why worry about the general public’s perception of EA as a movement, or EA ideas? We can look at growing the movement as growing the number of top performers and game-changers, in their respective industries, who share EA values. Let the rest of us enjoy the benefit of their labor.

5
JeremiahJohnson
My point is that I think you can often do a ton of good by NOT focusing on the highest priority cause. If you constantly talk about killer AI for a year, you might get 2 people to contribute to it. If you constantly talk about improving regular people's regular charitable giving for a year, you might influence dozens or hundreds of people to give more efficiently, even if they're still giving to something that isn't the highest priority cause. Basically - If your goal is to improve restaurant quality, improving every McDonald's in the US by 10% does more to improve restaurant quality than opening a handful of Michelin star joints.

Well, it wouldn’t work if you said “I want a future with less suffering, so I am going to evaluate my impact based on how many paper clips exist in the world at a given time”. Bostrom selects collaboration, technology and wisdom because he thinks they are the most important indicators of a better future and reduced x-risk. You are welcome to suggest other parameters for the evaluation function, of course, but not every parameter works. If you read the analogy to chess in the link I posted, it will become much clearer how Bostrom is thinking about this.

(if anyone reading this comment knows of evolutions in Bostrom’s thought since this lecture I would very much appreciate a reference)

Hi Khorton,

If by “decide” you mean control the outcome in any meaningful way, I agree, we cannot. However, I think it is possible to make a best-effort attempt to steer things towards a better future (in small and big ways). Mistakes will be made, progress is never linear and we may even fail altogether, but the attempt is really all we have, and there is reason to believe in a non-trivial probability that our efforts will bear fruit, especially compared to not trying or to aiming towards something else (like maximum power in the hands of a few).

For a great ... (read more)

3
Kirsten
To me that doesn't sound very different from "I want a future with less suffering, so I'm going to evaluate my impact based on how far humanity gets towards eradicating malaria and other painful diseases". Which I guess is consistent with my views but doesn't sound like most long-termists I've met.

I am largely sympathetic to the main thrust of your argument (borrowing from your own title: I am probably a negative utilitarian), but I have 2 disagreements that ultimately lead me to a very different conclusion on longtermism and global priorities:

  1. Why do you assume we cannot affect the future further than 100 years out? There are numerous examples of humans doing just that: in science (inventing the wheel, electricity or gunpowder), government (the US Constitution), religion (the Buddhist Pali canon, the Bible, the Quran), philosophy (utilitarianism), a
... (read more)

I'm not Denise, but I agree that we can and will all affect the long-term future. The children we have or don't have, the work we do, the lives we save, will all affect future generations.

What I'm more skeptical about is the claim that we can decide /how/ we want to affect future generations. The Bible has certainly had a massive influence on world history, but it hasn't been exclusively good, and the apostle Paul would have never guessed how his writing would influence people even a couple hundred years after his death.

I thought it worth pointing out that I mostly agree with this statement from one of your comments, while I strongly disagree with your main post. If this was the essence of your message, maybe it requires clarification:

"Politics is the mind killer." Better to treat it like the weather and focus on the things that actually matter and we have a chance of affecting, and that our movement has a comparative advantage in.

To be clear, I think justice does actually matter, and any movement that would look past it to “more important” considerations scares me a litt

... (read more)

I have similar objections to this post as Khorton & cwbakerlee. I think it shows how the limits of human reason make utilitarianism a very dangerous idea (which may nevertheless be correct), but I don’t want to discuss that further here. Rather, let’s assume for the sake of argument that you are factually & morally correct. What can we learn from disasters, and the world’s reaction to them, that we can reproduce without the negative effects of the disaster? I am thinking of anything from faking a disaster (wouldn’t the conspiracy theorist love that

... (read more)

Yes, you are correct and thank you for forcing me to further clarify my position (in what follows I leave out WAW since I know absolutely nothing about it):

  1. EA Funds, which I will assume is representative of EA priorities, has these funds: a) “Global Health and Development”; b) “Animal Welfare”; c) “Long-Term Future”; d) “EA Meta”. Let’s leave D aside for the purposes of this discussion.

  2. There is good reason to believe the importance and tractability of specific climate change interventions can equal or even exceed those of A & B. We have not done en

... (read more)

The assumption is not that people outside EA cannot do good, it is merely that we should not take it for granted that they are doing good, and doing it effectively, no matter their number. Otherwise, looking at malaria interventions, to take just one example, makes no sense. Billions have and will continue to go in that direction even without GiveWell. So the claim that climate change work is or is not the most good has no merit without a deeper dive into the field and a search for incredible giving / working opportunities. Any shallow dive into this cause

... (read more)
5
Ben_West🔸
I noticed Will listed AI safety and wild animal welfare (WAW), and you mentioned malaria. I'm curious if this is the crux – I would guess that Will agrees (certain types of) climate change work is plausibly as good as anti-malaria, and I wonder if you agree that the sort of person who (perhaps incorrectly) cares about WAW should consider that to be more impactful than climate change.
Answer by UriKatz
3

This is a great question and one everyone struggles with.

TL;DR: work on self-improvement daily, but be open to opportunities for acting now. My advice would indeed be to balance the two, but balance is not a 50-50 split. To be a top performer in anything you do, practice, practice, practice. The impact of a top performer can easily be 100x that of the rest of us, so the effort put into self-improvement pays off. Professional sports is a prime example, but research, engineering, academia, management, and parenting all benefit from working on yourself too.

The trap

... (read more)

Wildlife conservation and wild animal welfare are emphatically not the same thing. "Tech safety" (which isn't a term I've heard before, and which on googling seems to mostly refer to tech in the context of domestic abuse) and AI safety are just as emphatically not the same thing.

Anyway, yes, in most areas EAs care about they are a minority of the people who care about that thing. Those areas still differ hugely in terms of neglectedness, both in terms of total attention and in terms of expertise. Assuming one doesn't believe that EAs are the only people wh

... (read more)

I feel sometimes that the EA movement is starting to sound like metalheads (“climate change is too mainstream”), or evangelists (“in the days after the great climate change (Armageddon), mankind will colonize the galaxy (the 2nd coming), so the important work is the one that prevents x-risk (saves people’s souls)”). I say “amen” to that, and have supported AI safety financially in the past, but I remain skeptical that climate change can be ignored. What would you recommend as next steps for an EA member who wants to learn more and eventually act? What are the AMF or GD of climate change?

Nothing you've written here sounds like anything I've heard anyone say in the context of a serious EA discussion. Are there any examples you could link to of people complaining about causes being "too mainstream" or using religious language to discuss X-risk prevention?

The arguments you seem to be referring to with these points (that it's hard to make marginal impact in crowded areas, and that it's good to work toward futures where more people are alive and flourishing) rely on a lot of careful economic and moral reasoning about the real world, and I think

... (read more)
3
mchr3k
The Effective Environmentalism group maintains a document of recommended resources.

I wonder how much of the assessment that climate change work is far less impactful than other work relies on the logic of “low probability, high impact”, which seems to be the most compelling argument for x-risk. Personally, I generally agree with this line of reasoning, but it leads to conclusions so far away from common sense and intuition, that I am a bit worried something is wrong with it. It wouldn’t be the first time people failed to recognize the limits of human rationality and were led astray. That error is no big deal as long as it does not have a

... (read more)
3
SebK
That sounds right to me. (And Will, your drawbridge metaphor is wonderful.) My impression is that there already is some grumbling about EA being too elitist/out-of-touch/non-diverse/arrogant/navel-gazing/etc., and discussions in the community about what can be done to fix that perception. Add to that Toby Ord's realization (in his well-marketed book) that hey, perhaps climate change is a bigger x-risk (if indirectly) than he had previously thought, and I think we have fertile ground for posts like this one. EA's attitude has already shifted once (away from earning-to-give); perhaps the next shift is an embrace of issues that are already in the public consciousness, if only to attract more diversity into the broader community. I've had smart and very morally-conscious friends laugh off the entirety of EA as "the paperclip people", and others refer to Peter Singer as "that animal guy". And I think that's really sad, because they could be very valuable members of the community if we had been more careful to avoid such alienation. Many STEM-type EAs think of PR considerations as distractions from the real issues, but that might mean leaving huge amounts of low-hanging utility fruit unpicked. Explicitly putting present-welfare and longtermism on equal footing seems like a good first step to me.

In my own mind I would file this post under “psychological hacks”, a set of tools that can be extremely useful when used correctly. I am already considering how to apply this hack to some moral dilemmas I am grappling with. I share this because I think it highlights two important points.

First off, the post is endorsing the common marketing technique of framing. I am not an expert in the field, but am fairly confident this technique can influence people’s thoughts, feelings & behavior. Importantly, the framing exercise is not merely confined to the con

... (read more)

Great post, thank you.

If one accepts your conclusion, how does one go about implementing it? There is the work on existential risk reduction, which you mention. Beyond that, however, predicting any long-term effect seems to be a work of fiction. If you think you might have a vague idea of how things will turn out in 1k years, you must realize that even longer-term effects (1m? 1b?) dominate these. An omniscient being might be able to see the causal chain from our present actions to the far future, but we certainly cannot.

A question this raises for me is ... (read more)

1
AHT
Pleased you liked it and thanks for the question. Here are my quick thoughts: That kind of flourishing-education sounds a bit like Bostrom's evaluation function described here: http://www.stafforini.com/blog/bostrom/ Or steering capacity described here: https://forum.effectivealtruism.org/posts/X2n6pt3uzZtxGT9Lm/doing-good-while-clueless Unfortunately he doesn't talk about how to construct the evaluation function, and steering capacity is only motivated by an analogy. I agree with you/Bostrom/Milan that there are probably some things that look more robustly good than others. It's a bit unclear how to get these, but something like 'Build models of how the world works by looking to the past and then updating based on inside view arguments of the present/future. Then take actions that look good on most of your models' seems vaguely right to me. Some things that look good to me are: investing, building the EA community, reducing the chance of catastrophic risks, spreading good values, getting better at forecasting, and building models of how the world works. Adjusting our values based on them being difficult to achieve seems a bit backward to me, but I'm motivated by subjective preferences, and maybe it would make more sense if you were taking a more ethical/realist approach (e.g. because you expect the correct moral theory to actually be feasible to implement).

I know there is a death toll associated with economic recessions. Basically, people get poorer and that results in worse mental and physical healthcare. Are there any studies weighing those numbers against these interventions? Seems like a classic QALY problem to me, but I am an amateur in any of the relevant fields.

Also, people keep suggesting to quarantine everyone above 50 or 60 and let everyone else catch the virus to create herd immunity. Is there any scientific validity behind such a course of action? Is it off the table simply because the ”ageism” of the virus is only assumed at this point?

4
Tsunayoshi
Not an expert myself, but the naive calculations that I have seen with regards to herd immunity are incorrect. The precise numbers are just to illustrate the thought process: "We need 60-70% of people to be immune; people 65 and younger make up 65 percent of the population, so if they catch it we have achieved herd immunity to protect the elderly." The flaw with that reasoning is that the immune people need to be essentially randomly distributed in the population. However, the elderly make up a sub-population with their own distinct networks, in which the virus can spread after the quarantines are lifted. It also would probably not work in much (probably the larger part) of the world, where the elderly live together with their families, unless one would relocate them to specially made quarantines.
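Tsunayoshi's point can be made precise with a toy next-generation-matrix model. Everything below is invented for illustration (none of these numbers come from a study): two age groups with assortative mixing, and the same overall level of immunity distributed either randomly or concentrated entirely in the young.

```python
import numpy as np

# Toy model: "young" (65% of people) and "old" (35%), basic reproduction
# number 2.5, and assumed assortative mixing (old cases cause 60% of
# their infections among other old people).
R0 = 2.5

# K[i, j] = expected infections caused in group i by one case in group j
# in a fully susceptible population (columns sum to R0). Order: [young, old].
K = R0 * np.array([[0.75, 0.40],
                   [0.25, 0.60]])

def r_eff(susceptible):
    """Effective reproduction number: spectral radius of K after scaling
    each destination group by its remaining susceptible fraction."""
    return max(abs(np.linalg.eigvals(np.diag(susceptible) @ K)))

# Scenario A: 65% of the population immune, spread randomly across groups.
print(r_eff(np.array([0.35, 0.35])))  # ~0.88 -> below 1: herd immunity holds

# Scenario B: the same 65% immune, but it is exactly the young group,
# while the old remain fully susceptible.
print(r_eff(np.array([0.0, 1.0])))    # 1.5 -> old-to-old spread alone sustains an epidemic
```

The total amount of immunity is identical in both scenarios; only its distribution differs, which is exactly why the naive "65% have had it, so we are done" calculation fails once the elderly form their own contact network.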

Brendon,

First of all great article.

I just wanted to point out that I am looking for a robo-advisor and having talked with WealthSimple, they wrote back the following:

"we do support the option to gift securities without selling the asset. There is a short form via docusign we'll send you anytime you'd like to take advantage of this option."

1
Brendon_Wong
Thanks! WealthSimple's support for the donation of appreciated securities is not listed online, so this is very useful information for EAs to have as they evaluate investment options. Do they explicitly support this in the United States, and do they impose any restrictions on asset donations?

Hi,

Could you by any chance use a few hours of software development each week from volunteers?

I love the depth you went to with this post, and just wanted to share a bit of personal experience. In the past few years my religious practice has flourished, as has my involvement with EA. I doubt this is a coincidence, especially since my highest aspirations in life are a combination I took from EA and religion (sometimes I refer to them as the guiding or organizing principles of my life). Religion gives me the emotional and spiritual support I need, EA fills in the intellectual side and provides practical advice I can implement here and now... (read more)

For the sake of argument I will start with your definition of good and add that what I want to happen is for all sentient beings to be free from suffering, or for all sentient beings to be happy (personally I don't see a distinction between these two propositions, but that is a topic for another discussion).

Being general in this way allows me to let go of my attachment to specific human qualities I think are valuable. Considering how different most people's values are from my own, and how different my needs are from Julie's (my canine companion), I think... (read more)

0
Squark
If your only requirement is for all sentient beings to be happy, you should be satisfied with a universe completely devoid of sentient beings. However, I suspect you wouldn't be (?) Regarding definition of good, it's pointless to argue about definitions. We should only make sure both of us know what each word we use means. So, let's define "koodness(X)" to mean "the extent to which things X wants to happen actually happen" and "gudness" to mean "the extent to which what is happening to all beings is what they want to happen" (although the latter notion requires clarifications: how do we average between the beings? do we take non-existing beings into account? how do we define "happening to X"?) So, by definition of kood, I want the future world to be kood(Squark). I also want the future world to be gud among other things (that is, gudness is a component of koodness(Squark)). I disagree with Mill. It is probably better for a human being not become a pig, in the sense that a human being prefers not becoming a pig. However, I'm not at all convinced a pig prefers to become a human being. Certainly, I wouldn't want to become a "Super-Droid" if it comes at a cost of losing my essential human qualities.

Great, thought-provoking post, which raises many questions.

My main concern is perhaps due to the limitations of my personal psychology: I cannot help but heavily prioritize present suffering over future suffering. I have heard many arguments for why this is wrong, and I use very similar arguments when faced with those who claim that "charity begins at home". Nevertheless, the compassion I have for people and animals in great suffering overrides my fear of a dystopian future. Rational risk/reward assessments leave me unconvinced (oh, why am I not a superint... (read more)

1
Squark
Hi Uri, thanks for the thoughtful reply! It is not necessarily bad for future sentients to be different. However, it is bad for them to be devoid of properties that make humans morally valuable (love, friendship, compassion, humor, curiosity, appreciation of beauty...). The only definition of "good" that makes sense to me is "things I want to happen" and I definitely don't want a universe empty of love. A random UFAI is likely to have none of the above properties.

For anyone who might read this thread in the future I felt an update is in order. I revisited my numbers, and concluded that opening a local outreach EA chapter is very cost-effective. The reward/risk ratio is high, even when the alternative is entrepreneurship, assuming the time you invest in outreach does not severely hurt your chances of success and high profits.

Previously I wrote that: "Assuming after 1 year I get 10 people to take GWWC's pledge, which I consider phenomenal success, my guesstimates show the expected dollars given to charity will b... (read more)

I will start my reply from the end. Your intuition is right. My investment will simply go into another shareholder's pocket, and the company, socially responsible or otherwise, will see none of it. However, this will also decrease the company's cost of capital: when they go to the markets for additional funds, investors will know there is a market for these stocks and will be willing to pay more for them. I have no data on the extent of this impact.

As for your AMF example, I have no way of quantifying the good my SRI (socially responsible investing) may d... (read more)

0
RyanCarey
Cool. Yeah, I don't know how much harm normal stocks do compared to socially responsible ones, although I imagine they both do a lot of good on average. Philosophically, I'm not sold that "do no harm" is decisive here, because in a sense you're doing harm by earning less money and withholding donations to AMF. Good luck!

I have a small amount of money I want to invest. If all goes well, I will eventually donate the appreciated stock, but there is a small chance I might need the money so I don't want to donate it now. I was wondering what would be more effective altruism: to focus on socially responsible investing at the possible cost of lower returns, or to maximize returns so I can donate a larger sum to the most effective charities in the end? I stumbled upon this article on the subject, which I find interesting, but wanted to hear more opinions: https://blog.wealthfront... (read more)

2
RyanCarey
Interesting. I haven't read much analysis of this. One question you can ask is: assuming ordinary shares have a greater return, if you donate that dividend to AMF, will the world be better off? We think AMF can save a life for $3k (or $10-15k if it's adjusted for inflation). And our guess is that after investing $100k in normal shares for 30 years, you're $50k ahead. That's 17 lives (or 3-5). On the other hand, you're giving up the opportunity to give more responsible companies $100k of investment for 30 years. So the question would be: how good are these companies, and is funding them better than saving a few lives? Also, intuitively, it seems like there should be some price elasticity situation here, where whichever shares you buy, some other people will sell them off, partially offsetting your direct impact. I'm not sure how that works with shares though. Anyway, if you see any more useful info about it all, do report back!
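For what it's worth, that arithmetic checks out. Here is the back-of-envelope version, using only the figures stated in the reply above (the $50k return gap is RyanCarey's guess, not data):

```python
# Figures from the comment above: a $50k advantage for ordinary shares
# over 30 years on a $100k investment, converted to lives via AMF's
# estimated cost per life saved.
extra_return = 50_000

cost_per_life_optimistic = 3_000     # the "$3k per life" estimate
cost_per_life_pessimistic = 15_000   # upper end of the "$10-15k" range

print(extra_return / cost_per_life_optimistic)   # ~16.7 -> the "17 lives" figure
print(extra_return / cost_per_life_pessimistic)  # ~3.3  -> lower end of "3-5 lives"
```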

If you have a chance within the next 22 hours, you should go to the Project for Awesome website (http://www.projectforawesome.com/) and vote for effective charities. Search for GD, DtW & AMF.

Project for Awesome is an annual YouTube project run by the Vlogbrothers, that raises awareness and money for charity. The participants (video creators, viewers, donors, etc.) are probably relatively young and this is a great way of introducing EA to them.

Should we try to make a mark on the Vlogbrothers' "Project 4 Awesome"? It can expose effective altruism to a wide and, on average, young audience.

I would love to help in any way possible, but video editing is not my thing...

https://www.youtube.com/watch?v=kD8l3aI0Srk

2
ricoh_aficio
Hi UriKatz, there's a group of us trying to do just that, and we'd love to have your help. Join the EA Nerdfighters Facebook group and I'll brief you on what we've been up to. :) https://www.facebook.com/groups/254657514743021/

Full disclosure: I fear I do not completely understand your idea. Having said that, I hope my comment is at least a little useful to you.

Think about the following cases: (1) I donate to an organization that distributes bednets in Africa and receive a certificate. I then trade that certificate for a new pair of shoes. My money, which normally can only be used for one of these purposes, is now used for both. (2) I work for a non-profit and receive a salary. I also receive certificates. So I am being paid double?

The second case is easily solved, just give th... (read more)

2
Paul_Christiano
If you buy and then sell a certificate, you aren't funding the charity, the ultimate holder of the certificate is. They will only buy the certificate if they are interested in funding the charity. You could pretend you are funding the charity, but that wouldn't be true: the person you sold the certificate to would otherwise have bought it from someone else, perhaps directly from the charity. So your net effect on the charity's funding is 0. I could just as well give some money to my friend and pretend I was funding an effective charity. (I'm setting aside the tax treatment for now.) You would pay an employee with certificates for the same reason a company might pay an employee in equity. If there is no secondary market, this can be better for the company for liquidity reasons, and can introduce a component of performance pay. But even if there is a secondary market (e.g. for Google stock), it can still be a financially attractive way for a company to pay a large part of its salary, because it passes some of the risk on to the employee without having to constantly adjust dollar-denominated salaries. (There are also default effects, where paying employees in certificates would likely lead to them holding some certificates.)

Thank you for your offer to help me further, but having reviewed the link posted by Vincent, I am certain I do not have the time to start a local chapter right now.

0
Ilya
I did not suggest that you start a local chapter.

Hi Ilya, thanks for your reply. I may have misunderstood you, but your example seems not to take into account the overhead of managing a larger team, or the diminishing returns of each additional staff member. This goes to the heart of my question: what would be the most effective way for each individual to further EA causes? Should they work full time and donate more, or work part time and do other things (this question may only apply to those who are earning to give)? This question can best be determined on a case-by-case basis of course. It relates to ... (read more)

1
Ilya
Hi Uri, I guess there is no modern (21st-century) data yet on the potential return of localized outreach, because the Oxford-style EA mentality is very young, though the great minds of a thousand years ago found the best answers to the challenge of the best purpose/meaning of human life. My intuition tells me that the most effective way to actualize EA causes would be through the creation of small teams/groups/collectives capable of sustainable economic and ideological exchange with the environment. My assumption is that members of such a group recognize their need for harmonious interconnection. Think of a living organism in which all organs work in unity and harmony. A more or less normal organism does not have what you call “management overhead”. The complexity of the physical human body, with all its “major and minor” organs, comprised of hundreds of millions of cells, is much greater than the complexity of a small (3-to-10-person) team. If you are interested and free, in principle, to form a small project team for sustainable propagation, I’d like to chat with you on Skype.

Thank you for this very important post, this is something I have been wanting to do for a very long time.

Do you know of any work that has been done comparing the effectiveness of outreach to other activities effective altruism supporters can take? I refer specifically to the limited kind of outreach suggested here, such as opening a local chapter, and not the kind of outreach Peter Singer is capable of.

I will give you an example of what I am thinking about.

A year ago I changed my career plan and started a technology startup. If my startup succeeds, it wil... (read more)

2
weeatquince🔸
Hi Uri, Unfortunately I don't know the answer to your question. As you suggest the answer might vary from person to person or situation to situation and be skewed by effects with small probabilities but huge impacts. The only thing that I would say however is to disagree with your comment that "work on an outreach program ..[will] require significant time and effort". It requires as much or as little time and effort as you are willing to put in. For example just creating a Facebook group in London and creating a social event every month has helped grow the movement in London. Organising the occasional evening with EAs has shown to be not too much more effort than organising a evening with friends. Happy to give you some tips on growing EA in minimal time - feel free to message me!
2
Vincent_deB
Perhaps weeatquince could ask someone from The High Impact Network to comment?
-1
Ilya
Uri, it is my understanding that a better EA model of a startup would be a collective one rather than an individualistic one. Your business will grow faster by creating and growing a larger team. For example, if your current team needs 12 months of full-time work to complete the startup phase, then by tripling the team, the work may be accomplished in 3 months. I am interested to learn the nature of your startup for more qualified communication with you.