Hey Jeffrey,
Great to hear you are interested in starting an EA group! I hope your event today goes well, and apologies for the delayed response. I work on the CEA group team to provide support to EA groups. Here are some of my thoughts for new groups starting out:
It is key that anyone leading a local group has a solid understanding of effective altruism, so that they can answer questions from community members, and avoid potentially giving anyone a misleading impression of EA. This means having a level of knowledge at least equivalent to the EA handbook, o...
I'm not quite sure what argument you are trying to make with this comment.
I interpreted your original comment as arguing for something like: "Although most of the relevant employees at central coordinator organisations are not sure about the sign of outreach, most EAs think it is likely to be positive, thus it is likely to in fact be positive".
Where I agree with the first two points but not the conclusion, as I think we should consider the staff at the 'coordinator organizations' to be the relevant expert class and mostly defer to their judgement.
It...
But should we not expect coordinator organizations to be the ones best placed to have considered the issue?
My impression is that they have developed their view over a fairly long time period after a lot of thought and experience.
Just to clarify, when I say that my sense is that there are two types of EA, I mean that I sense that there are two types of effective altruism, not that I sense that there are two types of effective altruists.
Ah I see. For some reason I got the other sense from reading your comment, but looking back at it I think that was just a failing of reading comprehension on my part.
I agree that the differences between global poverty and animal welfare are more matters of degree, but I also think they are larger than people seem to expect.
I am somewhat confused by the framing of this comment: you start by saying "there are two types of EA", but the points all seem to be about the properties of different causes.
I don't think there are 'two kinds' of EAs in the sense that you could easily tell in advance which group people were going to fall into; rather, all of your characteristics just follow as practical considerations from how important people find the longtermist view. (But I do think "A longtermist viewpoint leads to very different approach" is correct.)
I'm als...
As far as I can tell none of the links that look like this instead of http://effective-altruism.com work in the pdf version.
as people who aren't actually interested drop out.
This depends on what you mean by 'drop out'. Only around 10% (~5) of our committee dropped out during the last year, although maybe 1/3rd chose not to rejoin the committee this year (and about another 1/3rd are graduating).
2) From my understanding, Cambridge viewed the 1 year roles as a way of being able to 'lock in' people to engage with EA for 1 year and create a norm of committee attending events.
This does not ring especially true to me, see my reply to Josh.
To jump in as the ex-co-president of EA: Cambridge from last year:
I think the differences mostly come in things which were omitted from this post, as opposed to the explicit points made, which I mostly agree with.
There is a fairly wide distinction between the EA community in Cambridge and the EA: Cam committee, and we don't try to force people from the former into the latter (although we hope for the reverse!).
I largely view a big formal committee (ours was over 40 people last year) as an addition to the attempts to build a community as outlined in this po...
I'm surprised by your last point, since the article says:
Although it seems unlikely x-risk reduction is the best buy from the lights of the total view (we should be suspicious if it were), given $13000 per life year compares unfavourably to best global health interventions, it is still a good buy: it compares favourably to marginal cost effectiveness for rich country healthcare spending, for example.
This seems a far cry from the impression you seem to have gotten from the article. In fact your quote of "highly effective" is only used once, ...
How could it explain that diabetics lived longer than healthy people?
If all of the sickest diabetics are switched to other drugs, then the only people taking metformin are the 'healthy diabetics', and it is possible that the average healthy diabetic lives longer than the average person (who may be healthy or unhealthy).
This would give the observed effect without metformin having any effect on longevity.
I'm not quite sure what this equation is meant to be calculating. If it is meant to be $ per life saved it should be something like:
Direct effects: (price of the experiment)/((probability of success)*(lives saved assuming e.g. 10% adoption))
(Note the division is very important here! You missed it in your comment, but it is not clear at all what you would be estimating without it.)
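As a minimal sketch, the corrected formula could be written out like this (all inputs here are made-up illustrative numbers, not the post's actual figures):

```python
# Hedged sketch of the direct cost-effectiveness formula above, with
# purely illustrative inputs (not the figures from the post's model).
def cost_per_life_saved(experiment_cost, p_success, lives_saved_if_adopted):
    # Direct effects: cost divided by *expected* lives saved -- without
    # the division the result is not $ per life saved at all.
    return experiment_cost / (p_success * lives_saved_if_adopted)

# e.g. a $100m experiment with a 10% chance of success that would save
# 100m lives at the assumed (e.g. 10%) adoption level:
print(cost_per_life_saved(100e6, 0.10, 100e6))  # → 10.0, i.e. $10 per life
```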
Your estimate of the indirect costs seems right to me, although in the case of:
growth of food consumption because of higher population
I would probably not include this level of secondary effect, since these people are also economically productive etc., making it very hard to estimate.
I'm not saying you need to solve the problem, I'm saying you should take the problem into account in your cost calculations, instead of assuming it will be solved.
It probably should be analysed how the bulk price of metformin could be lowered. For example, global supply of vitamin C costs around 1 billion USD a year with 150 kt of bulk powder.
Yes but as I discuss above it needs to be turned into pills and distributed to people, for which a 2 cents per pill cost seems pretty low. If you are arguing for fortification of foods with metformin then presumably we would need to show extraordinary levels of safety, since we would be dosing the entire population at very variable levels.
In general I would find it helpful i...
Yes, but 10kg of pure Metformin powder is not much good, since it needs to be packaged into pills for easy consumption (it needs to be taken in sub-gram doses). Since you are not able to find pills for less than 2 cents (and even those only in India), I think you should not assume a lower price than that without good reason.
Presumably we run into some fundamental price to form, package and ship all the pills? I would be surprised if that could be gotten much below 1p per pill in developed countries (although around 1p per pill is clearly possible, since some painkillers are sold at around that level).
I more meant it should be mentioned by the $0.24 figure e.g. something like:
"Under our model the direct cost effectiveness is $0.24 per life saved, but there is also an indirect cost of ~$12,000 per life saved from the cost of the metformin (as we will need to supply everyone with it for $3 trillion, but it will only save 250 million lives)."
Notably, the indirect figure is actually more expensive than current global poverty charities, so under your model buying people metformin would not be an attractive intervention for EAs. This does not mean...
Even if the cost of Metformin is only 2 cents a day, giving it to 5 billion people every day for 80 years would cost about $3 trillion (0.02*365*80*5*10^9). Whilst the cost would (at least potentially) be distributed across the population, it also seems like something that should be mentioned as a cost of the policy.
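A quick sketch of that back-of-the-envelope calculation, using only the numbers already given in these comments ($0.02/pill daily, 5 billion people, 80 years, ~250 million lives saved under the model):

```python
# $0.02 per pill, one pill a day, 5 billion people, 80 years:
total_cost = 0.02 * 365 * 80 * 5e9
print(f"{total_cost:.2e}")  # → 2.92e+12, i.e. ~$3 trillion

# Spread over the ~250 million lives saved in the model, this gives the
# ~$12,000 per life indirect cost:
print(total_cost / 250e6)  # → 11680.0, i.e. ~$12,000 per life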
I was trying to keep the discussions of 'which kind of pain is morally relevant' and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as to make this unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further.
...You're saying that, if we determined "total pain" by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly hi
Sure, although I'm not sure how much time I will have to look it over. My email is alexbarry40@gmail.com.
Thanks for the reply. Despite my very negative tone I do think this is an important work, and doing good cost benefit analysis like these is very difficult.
Taking median date of the AI arrival like 2062 is not informative as in half cases it will not be here at 2062. The date of 2100 is taken as the date when it (or other powerful life-extending technology) almost sure will appear as a very conservative estimate.
I don't share the intuition that human level AI will rapidly cause the creation of powerful life-extending technology. This seems to be relyi...
Reading through this I have some pretty significant concerns.
First the model behind the "$0.24 for each life saved" figure seems very suspect:
Also, not sure why my comment was downvoted. I wasn't being rude (or, I think, stupid) and I think it's unhelpful to downvote without explanation as it just looks petty and feels unfriendly.
I didn't downvote, but:
In which case I'm not understanding your model. The 'Cost per life year' box is $1bn/EV. How is that not a one off of $1bn? What have I missed?
The last two sentences of this come across as pretty curt to me. I think there is a wide range in how people interpret things like these, so it is probably just a bit of a communication style mismatc...
Yes, "switched" was a bit strong, I meant that by default people will assume a standard usage, so if you only reveal later that actually you are using a non-standard definition people will be surprised. I guess despite your response to Objection 2 I was unsure in this case whether you were arguing in terms of (what are at least to me) conventional definitions or not, and I had assumed you were.
To italicize words put *s on either side, like *this* (when you are replying to a comment there is a 'show help' button that explains some of these things).
If this isn't true, or consensus view amongst PAAs is "TRIA, and we're mistaken to our degree of psychological continuity", then this plausibly shaves off an order of magnitude-ish and plonks it more in the 'probably not a good buy' category.
It would also have the same (or worse) effect on other things that save lives (e.g. AMF), so it is not totally clear how much worse x-risk would look compared to everything else. (Although perhaps e.g. deworming would come out very well, if it just reduces suffering on a short-ish timescale. (The fact that it mostly affects children might sway things the other way though!))
Some of your quotes are broken in your comment, you need a > for each paragraph (and two >s for double quotes etc.)
I know for most of your post you were arguing with standard definitions, but that made it all the more jarring when you switched!
I actually think most (maybe all?) moral theories can be baked into goodness/badness of states of affairs. If you want to incorporate a side-constraint, you can just define any state of affairs in which you violate that constraint as being worse than all other states of affairs. I do agree this can be less natural,...
On 'people should have a chance to be helped in proportion to how much we can help them' (versus just always helping whoever we can help the most).
(Again, my preferred usage of 'morally worse/better' is basically defined so that one should always pick the 'morally best' action. You could do that in this case, by saying cases are morally worse than one another if people do not have chances of being helped in proportion to how badly off they are. This however leads directly into my next point...)
How much would you be willing to trade off help...
So you're suggesting that most people aggregate different people's experiences as follows:
Well most EAs, probably not most people :P
But yes, I think most EAs apply this 'merchandise' approach weighted by conscious experience.
In regards to your discussion of moral theories, side constraints: I know there are a range of moral theories that can have rules etc. My objection was that if you were not in fact arguing that total pain (or whatever) is the sole determiner of what action is right then you should make this clear from the start (and ideally baked int...
Ah sorry yes you are right - I had misread the cost as £1 Billion total, not £1 Billion per year!
Edit: My comment is wrong - i had misread the price as £1 billion as a one-off, but it is £1 billion per year
I'm not quite able to follow what role annualising the risk plays in your model, since as far as I can tell you seem to calculate your final cost effectiveness in terms purely of the risk reduction in 1 year. This seems like it should undercount the impact 100-fold.
e.g. if I skip annualising entirely, and just work in century blocks I get:
Thanks for writing this up! This does seem to be an important argument not made often enough.
To my knowledge this has been covered a couple of times before, although not as thoroughly.
Once by the Oxford Prioritization Project, though they approached it from the other end, instead asking "what absolute percentage x-risk reduction would you need to get for £10,000 for it to be as cost effective as AMF" and finding the answer of 4 x 10^-8%. I think your model gives £10,000 as reducing x-risk by 10^-9%, which fits with your conclusion of close but not qu...
A couple of brief points in favour of the classical approach: it in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)
...I'm not sure I s
are you using "bad" to mean "morally bad?"
Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)
I think thinking in terms of 'total pain' is not normally how this is approached, instead one thinks about conv...
Thanks for getting back to me, I've read your reply to kblog, but I don't find your argument especially different to those you laid out previously (which given that I always thought you were trying to make the moral case should maybe not be surprising). Again I see why there is a distinction one could care about, but I don't find it personally compelling.
(Indeed I think many people here would explicitly embrace the assumption than your P3 in your second reply to kblog, typically framed as 'two people experiencing the same pain is twice as bad as one person...
Huh, weirdly they all seem to work again now; they used to take me to the same page as any non-valid URL, e.g. https://80000hours.org/not-a-real-URL/
The links to 2, 4, 6 and 15 seem broken on the 80K end, I just get 'page not found' for each.
Link 30 also does not work, but that is just because it starts with an unnecessary "effective-altruism.com/" before the youtube link.
I checked and everything else seems to work.
Thanks for writing this! The interaction between donations and the reductions in personal allowance are interesting, and I would not have thought of them otherwise.
Some reservations I would have about the usefulness of a database vs lots of write-ups 'in context' like these is that I think how well activities work can depend heavily on the wider structure and atmosphere of the retreat, as well as the events that have come before. I would probably be happier with a classification of 2 or 3 different types of retreat, and the activities that seem to work best in each. (However, we should not let perfect be the enemy of good here, and there are probably a number of things that work well across different retreat styles.)
Yo...
Thanks for writing this up,
For your impact review, this seems likely to have some impact on the program of future years' EA: Cambridge retreats. (In particular it seems likely we will include a version of the 'Explaining Concepts' activity, which we would not have done otherwise, as well as being an additional point in favour of CFAR stuff, and another call to think carefully about the space/mood we create.)
I am also interested in the breakdown of how you spent the 200h planning time, since I would estimate the EA: Cam retreat (which had around 45 attendees, ...
I think I agree with the comments on this post that job postings on the EA forum are not ideal, since if all the different orgs did it they would significantly clutter the forum.
The existing "Effective Altruism Job Postings" Facebook group and possibly the 80k job board should fulfill this purpose.
How about a shameless plug for EA Work Club? 😇
This role is also listed there – http://www.eawork.club/jobs/87
Thanks for your reply - I'm extremely confused if you think there is no 'intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person', since (as has been discussed in these comments), if you view/define total pain as being measured by the intensity-weighted number of experiences, this gives a clear metric that matches consequentialist usage.
I had assumed you were arguing at the 'which is morally important' level, which I think might well come down to intuitions.
I hope you manage to work it out with kblog!
(Posted as a top-level comment as I had some general things to say; this was originally a response here.)
I just wanted to say I thought this comment did a good job explaining the basis behind your moral intuitions, which I had not really felt a strong motivation for before now. I still don't find it particularly compelling myself, but I can understand why others could find it important.
Overall I find this post confusing though, since the framing seems to be "Effective Altruism is making an intellectual mistake" whereas you just actually seem to have a different set of moral intuitions from those involved in EA, which are largely incompatible with ef...
Thanks for the writeup, I was not aware of the Effect Foundation before now.
After reading the above I am still not sure exactly what kind of outreach you perform. Could you give me a quick rundown of how you think you influenced the donations, and what you plan to continue doing going forwards?
Thanks for writing this - it fits well with my experience of how a lot of people get increasingly involved with EA, bouncing between disparate programs by different orgs. This does unfortunately make evaluating impact much harder, but I think it is important to bear in mind when designing resources for EA outreach or similar projects.
Thanks for the post, as a minor nitpick, shouldn't the maximal DALY cost of doing something for an hour a day be 1/16, since there are only 16 waking hours in a day and presumably the period whilst asleep does not contribute?
Ah good point on the researcher salary, it was definitely just eyeballed and should be higher.
I think a reason I was happy to leave it low was as a fudge to take into account that the marginal impact of a researcher now is likely to be far greater than the average impact if there were 10,000 working on x-risk, but I should have clarified that as a separate factor.
In any case, even adjusting the cost of a researcher up to $500,000 a year and leaving the rest unchanged does not significantly change the conclusion, with the very rough calculation still giving ~$10 per QALY (but obviously leaves less wiggle room for skepticism about the efficacy of research etc.)
Thanks for taking the time to write this up and share it, Jessica! I just also want to highlight a couple of other resources available for those planning retreats:
- The CEA retreat planning guide (which is more focused on the logistics side of things)
- The EA Cambridge retreat handover doc (which you reference a couple of times, and is a modified version of a document originally written by me a couple of years ago. NB. this was meant as an internal document, so has lots of content written specifically from the EA Cambridge perspective)
- CZEA's review of their
... (read more)