All of Cornelius's Comments + Replies

Are there any sites set up to gamify your donations? I rather liked how the old GWWC site had little token pictures next to the organizations you donated to (it vaguely felt like a "collect them all" game), along with the pie-chart breakdown and other nifty visualizations. The new pledge dashboard over at effectivealtruism.org lacks all that and, for me, has reduced the pleasure I was taking in organizing, tracking, and thinking about my donation strategies. I can understand that some people prefer the simplification, but I don't, so are there any alternatives that people like me, who prefer a more gamified, visualization-rich approach, can use?

Worth pointing out that some academics think the parameters used in the Imperial model were too negative based on the real-world data we have. See Bill Gates's take on it:

Fortunately it appears the parameters used in that model were too negative. The experience in China is the most critical data we have. They did their "shut down" and were able to reduce the number of cases. They are testing widely so they see rebounds immediately and so far there have not been a lot. They avoided widespread infection. The Imperial model does not match this experie
... (read more)

that's a tribal war between economists and epidemiologists?

What?

I guess you aren't up to speed with the worm wars. Things have gotten pretty tribal here, with Twitter wars between respected academics (made worse by a viral BuzzFeed article that arguably politicized the issue...), but nobody (to date) would argue EAs should stay out of deworming altogether because of that.

On the contrary, precisely because of all this shit, I'd think we need more EAs working on deworming.

Of course in the case of deworming it seems more clear that throwing in EAs will lead to... (read more)

0 the_jaded_one 6y
Thanks for the info on the worm wars, will look into it.

I see every day the devastating economic harm that organizations like the Against Malaria Foundation wreak on communities.

Then make a series of videos about that instead, if it's so prevalent. It would serve to undermine GiveWell far more and strengthen your credibility.

Your video against GiveWell does not address or debunk any of GiveWell's evidence. It's a philosophical treatise on GiveWell's methods, not an evidence-based one. Arguing by analogy from your own experience is not evidence. I've been robbed 3 times living in Vancouver and yet zero... (read more)

For a long time I've seen things this way:

  • GiveWell: emphasizes effectiveness: the logic pull
  • TLYCS: emphasizes altruism: the emotion pull
  • GWWC: emphasizes the pledge: the act that unifies us as a common movement (or I think+feel it does)

One cute EA family.

We have found this exceptionally difficult due to the diversity of GFI’s activities and the particularly unclear counterfactuals.

Perhaps I am not understanding, but isn't it possible to simplify your model by homing in on one particular thing GFI is doing and pretending that a donation goes towards only that? Oxfam's impact is notoriously difficult to model (too big, too many counterfactuals), but as soon as you look only at their disaster management programs (where they've done RCTs to showcase effectiveness), suddenly we have far better cost-effecti... (read more)

4 Ward 7y
There are two ways donations to GFI could be beneficial: speeding up a paradigm change that would have happened anyway, and increasing the odds that the change happens at all. I think it's not unreasonable to focus on the former, since there aren't fundamental barriers to developing vat meat and there are some long-term drivers for it (energy/land efficiency, demand). However, in that case, it helps to have some kind of model for the dynamics of the process. Say you think it'll take $100 million and 10 years to develop affordable vat burgers; $1 million now probably represents more than 0.1 year of speedup, since investors will pile on as the technology gets closer to being viable. But how much does it represent? (And, also, how much is that worth?) Plus, in practice we might want to decide between different methods and target meats, but then we need to have a decent sense of the responses for each of those.

I agree that this is possible. I'd say the way to go is generating a few possible development paths (paired $/time and progress/$ curves) based on historical tech development and domain experts' prognostications, and then looking at marginal effects for each path. Not having looked into this more, it seems doable but not straightforward.

Note that the Impossible Burger isn't a great model for full-on synthetic meat. Their burgers are mostly plant-based, and they use yeast to synthesize hemoglobin, a single protein, something that's very much within the purview of existing biotech. This contrasts with New Harvest and Memphis Meats' efforts synthesizing muscle fibers to make ground beef, to say nothing of the eventual goal of synthesizing large-scale muscle structure to replicate steak, etc. And we have a lot less to go on there. Mark Post at Maastricht University made a $325,000 burger in 2013. Memphis Meats claimed to be making meat at $40,000/kg in 2016.* Mark Post also claims scaling up his current methods could get to ~$80/kg (~$10/burger) in a few years.
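To make that back-of-the-envelope reasoning concrete, here is a minimal sketch of the kind of speedup model described above. Every number in it (total R&D cost, years to viability, the early-money multiplier, the assumed annual benefit) is a placeholder assumption for illustration, not an estimate from GFI, Ward, or anyone else:

```python
# Minimal back-of-the-envelope sketch of the speedup model described above.
# Every number here is an illustrative assumption, not a figure from GFI or this thread.

def speedup_years(donation, total_cost=100e6, total_years=10, early_multiplier=2.0):
    """Estimated years of acceleration bought by `donation`.

    The naive linear rate is total_years / total_cost (so $1M buys 0.1 years
    under the $100M / 10-year assumption). `early_multiplier` > 1 encodes the
    point that early money buys more than its linear share, because later
    investors pile on once viability is near; its true value is unknown.
    """
    linear_rate = total_years / total_cost  # years of progress per dollar
    return donation * linear_rate * early_multiplier


def value_of_speedup(years_earlier, annual_benefit=5e8):
    """Value of the technology arriving `years_earlier` sooner, given an assumed
    annual benefit (welfare gained, animals spared, etc.) once it exists."""
    return years_earlier * annual_benefit


donation = 1e6
years = speedup_years(donation)
print(f"${donation:,.0f} buys roughly {years:.2f} years of speedup")
print(f"worth roughly ${value_of_speedup(years):,.0f} under the assumed annual benefit")
```

The point of writing it out is only to show where the real uncertainty sits: in the early-money multiplier and in the value placed on a year of speedup, which is exactly where the development-path modelling described above would have to do the work.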

It's pretty much like you said in this comment. I completely agree with you, and I'm putting it here because of how well I think you've driven home the point:

...I myself once mocked a co-worker for taking an effort to recycle when the same effort could do so much more impact for people in Africa. That's wrong in any case, but I was probably wrong in my reasoning too because of numbers.

Also, I'm afraid that some doctor will stand up during an EA presentation and say

You kids pretend to be visionaries, but in reality you don't have the slightest ide

... (read more)
1 Fluttershy 7y
I strongly agree with both of the comments you've written in this thread so far, but the last paragraph here seems especially important. Regarding this bit, though: This factor may push in the opposite direction from the one you'd expect, given the context. Specifically, if people who might have gotten into EA in the past ended up avoiding it because they were exposed to this example, then you'd expect the example to be more popular than it would be if everyone who once stood a reasonable chance of becoming an EA (or even a hardcore EA) had stuck around to give you their opinion on whether you should use that example. So, keep doing what you're doing! I like your approach.

This is a great post and I thank you for taking the time to write it up.

I ran an EA club at my university and held a workshop where we covered the standard philosophical objections to Effective Altruism. All of the objections were fairly straightforward to address except for one, which, in addressing it, seemed to upend how many participants viewed EA, given the image of EA they had held thus far. That objection is: Effective Altruism is not that effective.

There is a lot to be said for this objection and I highly highly recommend anyone who calls themselves an EA to read ... (read more)

1 DC 7y
Overstatement seems to be selected for when 1) evaluators like GiveWell are deferred to rather than questioned, and 2) you want to market that Faithful Deference to others.

Everyone is warm (±37°C, ideally), open-minded, reasonable and curious.

You, sir, will be thoroughly quoted and requoted on this gem, lol. I commend this heartfelt post.

One thing I'm unclear on is:

Is s/he leaving the EA community and retaining the EA philosophy, rejecting the EA philosophy and staying in the EA community, or leaving both?

What EAs do and what EA is are two different things, after all. I'm going to guess leaving the EA community, given that, yes, most EAs are utilitarians and this seems to be foundational to Lila's reasons for leaving. However, the EA philosophy is not utilitarian per se, so you'd expect there to be many non-utilitarian EAs. I've commented on this before here. Many of us are not utilitarian. 44%... (read more)

The movement started around 1870 and still appears to have been active around 1894 (the latest handbook in the OP). WW1 was 1914-1918 and WW2 1939-1945. I'd like to know if it survived to 1945. If it did, that is its cutoff, since my guess is that it died very quickly after WW2, when eugenics very rapidly spread through the world's collective consciousness as an unspeakable evil. I imagine the movement couldn't adapt quickly enough to the bad PR and silently faded or rebranded itself. For instance, the Charity Organization Society of Denver, Colorado, is the fore... (read more)

Update: Nir Eyal very much appears to self-identify as an effective altruist despite being a non-utilitarian. See this interview with Harvard EA, specifically about non-utilitarian effective altruism, and this article on Effective Altruism from back in 2015. Wikipedia even mentions him as a "leader in Effective Altruism".

No one has made any concerted effort to map the values of people who are not utilitarians, to come up with metrics that may represent what such people care about and evaluate charities on these metrics.

This appears to be demonstrably false, and in very strong terms, given how strong a claim you've made and how I only need to find one person to prove it wrong. We have many non-utilitarian egalitarian luminaries making a concerted effort to come up with exactly the metrics that would tell us, based on egalitarian/prioritarian principles, what charities/interv... (read more)


I think that joint donations not only with kin or via couples, but with friends in an extended community, may become more common if EA becomes more prevalent in collectivist cultures. Right now EA is focused primarily in the UK, Netherlands, Germany, Switzerland, Australia and America, which are all pretty much your archetypal individualist cultures.

I mention this because I consistently notice the trend of the EA community focusing on advertising what the individual can accomplish with their donation. This may not be best if EA is to achieve broad appeal ... (read more)

I'm confused and your 4 points only make me feel I'm missing something embarrassingly obvious.

Where did I suggest that valuing saving overall good lives means we are failing to achieve a shared goal of negative utilitarianism? In the first paragraph of my post, the part you seem to think is misleading, I thought I specifically suggested exactly the opposite.

And yes, negative utilitarianism is a useful ethical theory that nonetheless many EAs and philosophers will indeed reject given particular real-world circumstances. And I wholeheartedly agree. This is a whole different topic though, so I feel like you're getting at something others think is obvious that I'm clearly missing.

Put this way, I change my mind and agree it is unclear. However, to make your paper stronger, I would have included something akin to what you just wrote, to make it clear why you think Gabriel's use of "iteration effects" is unclear and not the same as his usage in the 'priority' section.

I'm not sure how important clarifying something like this is for philosophical argumentation, but for me, this was the one nagging kink in what is otherwise fast becoming one of my favourite "EA-defense" papers.

0 [anonymous] 7y
Thanks for the feedback. From memory, I think at the time we decided that since it didn't do any work in his argument, that couldn't be what he meant by it.

I notice this in your paper:

He also mentions that cost-effectiveness analysis ignores the significance of ‘iteration effects’ (page 12)

Gabriel uses "iterate" in his Ultra-poverty example, so I'm fairly certain that how he uses it there is what he was trying to refer to here:

Therefore, they would choose the program that supports literate men. When this pattern of reasoning is iterated many times, it leads to the systematic neglect of those at the very bottom, a trend exemplified by how EAs systematically neglect focusing on the very bottom in the first world. T

... (read more)
2 [anonymous] 7y
Thanks for this. I have two comments. Firstly, I'm not sure he's making a point about justice and equality in the 'quantification bias' section. If his criticism of DALYs works, then it works on straightforward consequentialist grounds - DALYs are the wrong metric of welfare. (On this, see our footnote 41.)

Secondly, the claim about iteration effects is neither necessary nor sufficient to get to his conclusion. If the DALY metric inappropriately ignores hope, then it doesn't really matter whether a decision about healthcare resource distribution on the basis of DALYs is made once or is iterated. Either way, DALYs would ignore an important component of welfare.

Perhaps "systemic change bias" needs to be coined, or something to that effect, to be used in further debates.

Might be useful in elucidating why people criticizing EAs always mischaracterize us as not caring about systemic change or harder-to-quantify causes.

2 CarlShulman 7y
Those causes get criticized because of how hard to quantify they are. The relatively neglected thing is recognizing both strands, and arguing for Goldilocks positions between 'linear clear evidence-backed non-systemic charity' and 'far too radical for most interested in systemic change.'

Couldn't you just counter and say that if EA had been around back then, having just started out trying to figure out what does the most good, it would not have supported the abolitionist movement, because of difficult EV calculations and because it was spending resources elsewhere? However, if the EA community had existed back then and had matured a bit, to the stage where something like OpenPhil existed as well (OpenPhil of course being an EA org, for those reading who don't know), then it would very likely have supported attempts at cost-effectiveness c... (read more)

Yes, precisely. Although - there are so many variants of negative utilitarianism that "precisely" is probably a misnomer.

4 CarlShulman 7y
OK, then: since most EAs (and philosophers, and the world) think that other things like overall well-being matter, it's misleading to suggest that by valuing saving overall good lives they are failing to achieve a shared goal of negative utilitarianism (which they reject).

Yeah, as a two-level consequentialist moral anti-realist, I actually am pretty tired of EA's insistence on "how many lives we can save" instead of emphasizing how much "life fulfillment and happiness" you can spread. I always thought this was not only a PR mistake but also a utilitarian mistake. We're trying to prevent suffering, so preventing instances where a single person goes through more suffering on the road to death is obviously more morally relevant, utils-wise, than preventing a death with less suffering.

Nonetheless, this is the fir... (read more)

4 CarlShulman 7y
What do you mean by 'we'? Negative utilitarians?

I can also vouch for the success of "What's one good thing and one bad thing that has happened to you this week/month/since last time?" Each person picks one of each and talks about it. Naturally, some people may bring up things related to EA very easily with this question if they are involved with it.