All of Alex_Barry's Comments + Replies

Thanks for taking the time to write this up and share it, Jessica! I just also want to highlight a couple of other resources available for those planning retreats:

  • The CEA retreat planning guide (which is more focused on the logistics side of things)
  • The EA Cambridge retreat handover doc (which you reference a couple of times, and which is a modified version of a document originally written by me a couple of years ago. N.B. this was meant as an internal document, so it has lots of content written specifically from the EA Cambridge perspective)
  • CZEA's review of their
... (read more)

Hey Jeffrey,

Great to hear you are interested in starting an EA group! I hope your event today goes well, and apologies for the delayed response. I work on the CEA group team to provide support to EA groups. Here are some of my thoughts for new groups starting out:

It is key that anyone leading a local group has a solid understanding of effective altruism, so that they can answer questions from community members, and avoid potentially giving anyone a misleading impression of EA. This means having a level of knowledge at least equivalent to the EA handbook, o... (read more)

1
Jeffrey
5y
Hi Barry, thanks for your response. Yes, the event last Dec 5 went very well, and we will continue holding events with invites next year. For now, we are going to have a few discussions once a week to learn more about EA and to be better equipped before we hold another event to introduce EA. It would really be a great help for us if there were someone like you to guide us in building the community. For now, you can contact us through our fb page: www.facebook.com/EffectiveAltruismPhillipines

I'm not quite sure what argument you are trying to make with this comment.

I interpreted your original comment as arguing for something like: "Although most of the relevant employees at central coordinator organisations are not sure about the sign of outreach, most EAs think it is likely to be positive, thus it is likely to in fact be positive".

I agree with the first two points but not the conclusion, as I think we should consider the staff at the 'coordinator organizations' to be the relevant expert class and mostly defer to their judgement.

It... (read more)

But should we not expect coordinator organizations to be the ones best placed to have considered the issue?

My impression is that they have developed their view over a fairly long time period after a lot of thought and experience.

2
Evan_Gaensbauer
6y
Yes, but I think the current process isn't inclusive of input from as many EA organizations as it could or should be. It may be as simple as the CEA having offices in Berkeley and Oxford, meaning they receive a disproportionate amount of input on EA from those organizations, as opposed to from EA organizations whose staff are geographically distributed and/or don't have an office presence near the CEA. I think the CEA should still be at the centre of making these decisions, and after recent feedback from Max Dalton of the CEA on the EA Handbook 2.0, I expect they will make the process for feedback on outreach materials more inclusive.

Just to clarify, when I say that my sense is that there are two types of EA, I mean that I sense that there are two types of effective altruism, not that I sense that there are two types of effective altruists.

Ah I see. For some reason I got the other sense from reading your comment, but looking back at it I think that was just a failure of reading comprehension on my part.

I agree that the differences between global poverty and animal welfare are more matters of degree, but I also think they are larger than people seem to expect.

I am somewhat confused by the framing of this comment: you start by saying "there are two types of EA", but the points all seem to be about the properties of different causes.

I don't think there are 'two kinds' of EAs in the sense that you could easily tell in advance which group people will fall into; rather, all of your characteristics just follow as practical considerations from how important people find the longtermist view. (But I do think "a longtermist viewpoint leads to a very different approach" is correct.)

I'm als... (read more)

2
RandomEA
6y
Just to clarify, when I say that my sense is that there are two types of EA, I mean that I sense that there are two types of effective altruism, not that I sense that there are two types of effective altruists. I agree that there are substantial differences between global poverty and farm animal welfare (with global poverty being more clearly Type 1). But it seems to me that those differences are more differences of degree, while the differences between global poverty/farm animal welfare and biosecurity/AI alignment are more differences of kind.

As far as I can tell, none of the links that look like this (rather than like http://effective-altruism.com) work in the pdf version.

1
JoshP
6y
I think there's a mix of working and non-working links, having just checked myself. Some don't go through to anything when you click on them; some go through to a 404 error; and some go through to the correct website. Bizarrely, this depends on which copy I have downloaded. I downloaded it more than once (in different tabs), and it works differently each time. The first copy I downloaded works for every link I check; the second doesn't, and this remains true when comparing links like for like. I'm not really sure why. Bit bizarre.

as people who aren't actually interested drop out.

This depends on what you mean by 'drop out'. Only around 10% (~5 people) of our committee dropped out during last year, although maybe a third chose not to rejoin the committee this year (and about another third are graduating).

2) From my understanding, Cambridge viewed the 1 year roles as a way of being able to 'lock in' people to engage with EA for 1 year and create a norm of committee attending events.

This does not ring especially true to me, see my reply to Josh.

To jump in as the ex-co-president of EA: Cambridge from last year:

I think the differences mostly come in things which were omitted from this post, as opposed to the explicit points made, which I mostly agree with.

There is a fairly wide distinction between the EA community in Cambridge and the EA: Cam committee, and we don't try to force people from the former into the latter (although we hope for the reverse!).

I largely view a big formal committee (ours was over 40 people last year) as an addition to the attempts to build a community as outlined in this po... (read more)

I'm surprised by your last point, since the article says:

Although it seems unlikely x-risk reduction is the best buy from the lights of the total view (we should be suspicious if it were), given $13000 per life year compares unfavourably to best global health interventions, it is still a good buy: it compares favourably to marginal cost effectiveness for rich country healthcare spending, for example.

This seems a far cry from the impression you seem to have gotten from the article. In fact your quote of "highly effective" is only used once, ... (read more)

How could it explain that diabetics lived longer than healthy people?

If all of the sickest diabetics are switched to other drugs, then the only people taking metformin are the 'healthy diabetics', and it is possible that the average healthy diabetic lives longer than the average person (who may be healthy or unhealthy).

This would give the observed effect without metformin having any effect on longevity.

I'm not quite sure what this equation is meant to be calculating. If it is meant to be $ per life saved, it should be something like:

Direct effects: (price of the experiment)/((probability of success)*(lives saved assuming e.g. 10% adoption))

(Note the division is very important here! You missed it in your comment, but it is not clear at all what you would be estimating without it.)
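To make the shape of the calculation concrete, here is a minimal sketch; all three input numbers are placeholders I've picked for illustration, not figures from the paper:

```python
# Minimal sketch of the direct-effects formula above.
# All inputs are illustrative placeholders, not the paper's figures.
experiment_cost = 65e6      # assumed price of the trial, USD (hypothetical)
p_success = 0.5             # assumed probability the trial succeeds (hypothetical)
lives_saved = 250e6 * 0.10  # lives saved assuming e.g. 10% adoption (hypothetical)

cost_per_life = experiment_cost / (p_success * lives_saved)
print(f"${cost_per_life:.2f} per life saved")  # $5.20 with these placeholders
```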

Your estimate of the indirect costs seems right to me, although in the case of:

growth of food consumption because of higher population

I would probably not include this level of secondary effect, since these people are also economically productive etc., which makes it very hard to estimate.

I'm not saying you need to solve the problem, I'm saying you should take the problem into account in your cost calculations, instead of assuming it will be solved.

0
turchin
6y
In the next version of the article, I will present a general equation which will try to answer all these concerns. It will be: (price of the experiment) × (probability of success) + (indirect benefits of the experiment) - (fixed price of metformin pills for life) × (number of people) × (share of adopters) × (probability of success of the experiment) - (unexpected side effects) - (growth of food consumption because of higher population). Is anything missing?

It should probably be analysed how the bulk price of metformin could be lowered. For example, the global supply of vitamin C costs around 1 billion USD a year for 150 kt of bulk powder.

Yes, but as I discuss above it needs to be turned into pills and distributed to people, for which a 2-cents-per-pill cost seems pretty low. If you are arguing for fortification of foods with metformin, then presumably we would need to show extraordinary levels of safety, since we would be dosing the entire population at very variable levels.

In general I would find it helpful i... (read more)

-1
turchin
6y
Ok. I just had two ideas at different moments in time; that is why there are two comments. I think that, again, the problem of expensive pills is not a problem of antiaging therapies but a more general problem of expensive medicine and poverty. I should not try to solve all possible problems in one article, as it would immediately grow to the size of a book. Most drugs we now consume are overpriced compared with bulk prices; food is also much more expensive at retail. I think it is an important problem, but it is another problem.

Yes, but 10 kg of pure metformin powder is not much good, since it needs to be packaged into pills for easy consumption (it needs to be taken in sub-gram doses). Since you are not able to find pills for less than 2 cents (and even those only in India), I think you should not assume a lower price than that without good reason.

Presumably we run into some fundamental price to form, package and ship all the pills? I would be surprised if that could be gotten much below 1p per pill in developed countries (although around 1p per pill is clearly possible, since some painkillers are sold at around that level).

I more meant that it should be mentioned alongside the $0.24 figure, e.g. something like:

"Under our model the direct cost effectiveness is $0.24 per life saved, but there is also an indirect cost of ~$12,000 per life saved from the cost of the metformin (as we will need to supply everyone with it for $3 trillion, but it will only save 250 million lives)."

Notably, the indirect figure is actually more expensive than current global poverty charities, so under your model buying people metformin would not be an attractive intervention for EAs. This does not mean... (read more)

0
turchin
6y
Also, Alibaba offers metformin at 5 USD per kg, which implies a lifelong supply could be bought for something like 50 USD. https://www.alibaba.com/product-detail/HOT-SALE--99-High-Purity_50033115776.html?spm=a2700.7724857.main07.53.2c7f20b6ktwrdq
0
turchin
6y
Also, the global market for snake-oil life extension is 300 bn a year, so spending 10 times less would provide everybody with an actually working drug.
0
turchin
6y
It should probably be analysed how the bulk price of metformin could be lowered. For example, the global supply of vitamin C costs around 1 billion USD a year for 150 kt of bulk powder. I am also not suggesting buying metformin for people. In the case of food fortification, the price is probably included in the total price of food, and the manufacturers pay the lowest bulk price.

Even if the cost of metformin is only 2 cents a day, giving it to 5 billion people every day for 80 years would cost about $3 trillion (0.02*365*80*5*10^9). Whilst the cost would (at least potentially) be distributed across the population, it still seems like something that should be mentioned as a cost of the policy.
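A quick check of that figure, using only the numbers already stated above:

```python
# Quick check of the ~$3 trillion figure from the numbers above.
cost_per_day = 0.02          # 2 cents of metformin per person per day
days = 365 * 80              # 80 years of daily doses
population = 5e9             # 5 billion people

total_cost = cost_per_day * days * population
print(f"${total_cost:.2e}")  # ~$2.92e+12, i.e. about $3 trillion
```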

0
turchin
6y
It was in fact discussed in section 7.1, where we wrote: The price of a lifetime supply of metformin, 500 USD, will pay for an additional 1-3 years of life expectancy and a proportional delay of age-related diseases. However, the actual price of the therapy for a person could be negative, because medical insurance companies will have an interest in people starting to take age-slowing drugs, as this will delay payments on medical bills. Insurance companies could earn interest on this money. For example, if 100K of medical bills is delayed by three years, and the interest rate is two percent, the insurance company will earn 6,000 USD on the later billing. Thus, insurance companies could provide incentives such as discounts or free aging treatments to those who use antiaging therapies.
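As an aside, the quoted 6,000 USD corresponds to simple (non-compounding) interest; a minimal check of the arithmetic, with the compounding variant for comparison:

```python
# Check of the insurance example: 100K of bills delayed 3 years at 2%.
delayed_bills = 100_000   # USD of medical bills delayed
rate = 0.02               # 2% annual interest
years = 3

simple = delayed_bills * rate * years                 # 6,000 USD, as quoted
compound = delayed_bills * ((1 + rate) ** years - 1)  # ~6,121 USD
print(simple, round(compound))
```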

I was trying to keep the discussions of 'which kind of pain is morally relevant' and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as to make this unproductive, but I would like you to respond to my comment about the latter before we discuss it further.

You're saying that, if we determined "total pain" by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly hi

... (read more)
0
Jeffhe
6y
I think I see the original argument you were going for. The argument against my approach-minus-the-who-suffers-matters-bit is that it renders all resulting states of affairs equally bad, morally speaking, because all resulting states of affairs would involve the same total pain. Given that we should prevent the morally worst case, this means that my approach would have it that we shouldn't take any action, and that's just absurd. Therefore, my way of determining total pain is problematic.

Here "a resulting state of affairs" is broadly understood as the indefinite span of time following a possible action, as opposed to any particular point in time following a possible action. On this broad understanding, it seems undeniable that each possible action will result in a state of affairs with the same total maximal pain, since there will surely be someone who suffers maximally at some point in time in each indefinite span of time.

Well, if who suffered didn't matter, then I think leximin should be used to determine which resulting state of affairs is morally worse. According to leximin, we determine which state of affairs is morally better as follows:

Step 1: From each state of affairs, select a person among the worst off in that state of affairs. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move onto Step 2.

Step 2: From each state of affairs, select a person among the worst off in that state of affairs, except for the person who has already been selected. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move onto Step 3.

And so forth...

According to this method, even though all resulting states of affairs will involve the same total pain, certain resulting states of affairs will be morall
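For readers unfamiliar with leximin, the procedure described above is mechanical enough to sketch in code. This is my own illustration of the standard rule, not something from the comment:

```python
# Illustrative sketch of the leximin rule described above (my own
# illustration of the standard rule). A state of affairs is a list of
# welfare levels, one per person; higher means better off. Assumes both
# states cover the same number of people.
def leximin_prefers(state_a, state_b):
    """True if state_a is morally better than state_b under leximin."""
    for a, b in zip(sorted(state_a), sorted(state_b)):
        if a != b:           # first rank at which the worst-off differ
            return a > b
    return False             # equally good at every rank

# Both states contain someone at maximal suffering (welfare 0), but
# leximin still prefers the state whose second-worst-off person is better.
print(leximin_prefers([0, 5, 9], [0, 3, 9]))  # True
```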

Sure, although I'm not sure how much time I will have to look it over. My email is alexbarry40@gmail.com.

Thanks for the reply. Despite my very negative tone, I do think this is important work, and doing good cost-benefit analyses like these is very difficult.

Taking the median date of AI arrival, like 2062, is not informative, as in half of the cases it will not have arrived by 2062. The date of 2100 is taken as a very conservative estimate of the date by which it (or another powerful life-extending technology) will almost surely appear.

I don't share the intuition that human level AI will rapidly cause the creation of powerful life-extending technology. This seems to be relyi... (read more)

1
turchin
6y
May I share the next version with you when all these changes are done? I expect the next revision to appear in 2 months.

Reading through this I have some pretty significant concerns.

First, the model behind the "$0.24 for each life saved" figure seems very suspect:

  • The assumption of radical life extension technology being developed by 2100 is totally unsupported, with the one citation being to a survey of machine learning researchers which gave a 50% chance of AI reaching human level in all activities by 2062. It is unclear, however, how this relates to the development of radical life extension technology, something significantly out of reach of (current) human level
... (read more)
1
turchin
6y
Thank you for the review. Taking the median date of AI arrival, like 2062, is not informative, as in half of the cases it will not have arrived by 2062. The date of 2100 is taken as a very conservative estimate of the date by which it (or another powerful life-extending technology) will almost surely appear. Maybe this should be justified more in the text. Yes, it is assumed by Barzilai and gwern that metformin will extend human life by 1 year, based on many human cohort studies, but to actually prove it we need the TAME study, and until this study is finished, metformin can't be used as a life-extending drug. So any year of delay of the experiment means a year of delay in global implementation. For now, it has already been delayed for 2 years by lack of funds. Given all the uncertainty, the simplified model provides only an order of magnitude of the effect, but a more detailed model which takes into account the actual age distribution is coming. As the paper is already too long, we tried to outline the main arguments or provide links to the articles where a detailed refutation is presented, as in the case of Gavrilov, 2010, where the problem of overpopulation is analysed in detail. But it is obvious now that these points should be clarified. The next round of professional grammar editing is scheduled.

Also, not sure why my comment was downvoted. I wasn't being rude (or, I think, stupid) and I think it's unhelpful to downvote without explanation as it just looks petty and feels unfriendly.

I didn't downvote, but:

In which case I'm not understanding your model. The 'Cost per life year' box is $1bn/EV. How is that not a one off of $1bn? What have I missed?

The last two sentences of this come across as pretty curt to me. I think there is a wide range in how people interpret things like these, so it is probably just a bit of a communication style mismatc... (read more)

2
MichaelPlant
6y
Yeah, on re-reading, the "How is that not a one off of $1bn?" does seem snippy. Okay. Fair cop.

Yes, "switched" was a bit strong, I meant that by default people will assume a standard usage, so if you only reveal later that actually you are using a non-standard definition people will be surprised. I guess despite your response to Objection 2 I was unsure in this case whether you were arguing in terms of (what are at least to me) conventional definitions or not, and I had assumed you were.

To italicize words, put *s on either side, like *this* (when you are replying to a comment there is a 'show help' button that explains some of these things).

1
Jeffhe
6y
I see the problem. I will fix this. Thanks.

If this isn't true, or consensus view amongst PAAs is "TRIA, and we're mistaken to our degree of psychological continuity", then this plausibly shaves off an order of magnitude-ish and plonks it more in the 'probably not a good buy' category.

It would also have the same (or worse) effect on other things that save lives (e.g. AMF), so it is not totally clear how much worse x-risk would look compared to everything else. (Although perhaps e.g. deworming would come out very well, if it just reduces suffering on a short-ish timescale. The fact that it mostly affects children might sway things the other way, though!)

2
MichaelPlant
6y
I agree. As I said here, TRIA implies you should care much less about saving young lives. The upshot of TRIA vs PAA combined with the life-comparative account is that you should focus more on improving lives than saving lives if you like TRIA. Just on this note, GiveWell claim only 2% of the value of deworming comes from short-term health benefits and 98% from economic gains (see their latest cost-effectiveness spreadsheet), so they don't think the value is on the suffering-reducing end.

Some of your quotes are broken in your comment, you need a > for each paragraph (and two >s for double quotes etc.)
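For reference, assuming the forum uses standard markdown quoting, the fix looks something like this:

```
> First paragraph of the quote.
>
> Second paragraph of the quote.
>> A nested (double) quote needs two >s.
```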

I know for most of your post you were arguing with standard definitions, but that made it all the more jarring when you switched!

I actually think most (maybe all?) moral theories can be baked into the goodness/badness of states of affairs. If you want to incorporate a side-constraint, you can just define any state of affairs in which you violate that constraint as being worse than all other states of affairs. I do agree this can be less natural,... (read more)

1
Jeffhe
6y
I certainly did not mean to cause confusion, and I apologize for any of your time wasted trying to make sense of things. By "you switched", do you mean that in my response to Objection 1, I gave the impression that only experience matters to me, such that when I mentioned in my response to Objection 2 that who suffers matters to me too, it seems like I've switched? And thanks, I have fixed the broken quote. Btw, do you know how to italicize words?

On 'people should have a chance to be helped in proportion to how much we can help them' (versus just always helping whoever we can help the most).

(Again, my preferred usage of 'morally worse/better' is basically defined so as to mean one should always pick the 'morally best' action. You could do that in this case, by saying cases are morally worse than one another if people do not have chances of being helped in proportion to how badly off they are. This however leads directly into my next point...)

How much would you be willing to trade off help... (read more)

1
Jeffhe
6y
Hey Alex! Sorry for the super late response! I have a self-control problem and my life got derailed a bit in the past week >< Anyways, I'm back :P

This is an interesting question, adding another layer of chance to the original scenario. As you know, if (there was a 100% chance) I could give each person a chance of being saved in proportion to his/her suffering, I would do that instead of outright saving the person who has the worst to suffer. After all, this is what I think we should do, given that suffering matters, but who suffers also matters. Here, there seems to me a nice harmony between these two morally relevant factors - the suffering and the identity of who suffers - where both have a sufficient impact on what we ought to do: we ought to give each person a chance of being saved because who suffers matters, but each person's chance ought to be in proportion to what he/she has to suffer because suffering also matters.

Now you're asking me what I would do if there was only a 95% chance that I could give each person a chance of being saved in proportion to his/her suffering, with a 5% chance of not helping anyone at all: would I accept the 95% chance or outright save the person who has the worst to suffer? Well, what should I do? I must admit it's not clear. I think it comes down to how much weight we should place on the morally relevant factor of identity. The more weight it has, the more likely the answer is that we should accept the 95% chance. I think it's plausible that it has enough weight such that we should accept a 95% chance, but not a 40% chance. If one is a moral realist, one can accept that there is a correct objective answer yet not know what it is.

One complication is that you mention the notion of fairness. On my account of what matters, the fair thing to do - as you suggest - seems to be to give each person a chance in proportion to his/her suffering. Fairness is often thought of as a morally relevant factor in of itself, but if what the fa

So you're suggesting that most people aggregate different people's experiences as follows:

Well most EAs, probably not most people :P

But yes, I think most EAs apply this 'merchandise' approach, weighted by conscious experience.

In regards to your discussion of moral theories and side constraints: I know there is a range of moral theories that can have rules etc. My objection was that if you were not in fact arguing that total pain (or whatever) is the sole determiner of what action is right, then you should make this clear from the start (and ideally baked int... (read more)

0
Jeffhe
6y
FYI, I have since reworded this as "So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:" I think it is a more precise formulation. In any case, we're on the same page.

The way I phrased Objection 1 was as follows: "One might reply that two instances of suffering is morally worse than one instance of the same kind of suffering and that we should prevent the morally worse case (e.g., the two instances of suffering), so we should help Amy and Susie." Notice that this objection in argument form is as follows:

P1) Two people suffering a given pain is morally worse than one other person suffering the given pain.
P2) We ought to prevent the morally worst case.
C) Therefore, we should help Amy and Susie over Bob.

My argument with kbog concerns P1). As I mentioned, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse (i.e. twice as morally bad) as one person suffering. Given this premise, I've been arguing that two people suffering a given pain does not involve more pain than one person suffering the given pain, and thus P1) is false. And kbog has been arguing that two people suffering a given pain does involve more pain than one person suffering the given pain, and thus P1) is true. Of course, both of us are right on our respective preferred senses of "involves more pain than". So I recently started arguing that my sense is the sense that really matters.

Anyways, notice that P2) has not been debated. I understand that consequentialists would accept P2). But other moral theorists would not, because not all the things that they take to matter (i.e. to be morally relevant, to have moral value, etc.) can be baked into/captured by the moral worseness/goodness of a state of affairs. Thus, it seems natural for them to talk of side constraints, etc. For me, two things matt

Ah sorry, yes, you are right - I had misread the cost as £1 billion total, not £1 billion per year!

Edit: My comment is wrong - I had misread the price as a one-off £1 billion, but it is £1 billion per year.

I'm not quite able to follow what role annualising the risk plays in your model, since as far as I can tell you seem to calculate your final cost effectiveness in terms purely of the risk reduction in 1 year. This seems like it should undercount the impact 100-fold.

e.g. if I skip annualising entirely and just work in century blocks, I get (see the sketch after this list):

  • still 247 billion life years at stake
  • 1% chance of x-risk, reduced to 0.99% by £1 billion project X.
  • This expe
... (read more)
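Spelling out that century-block arithmetic (my own completion of the truncated calculation, so treat the final figure as illustrative):

```python
# Century-block version of the calculation, using the figures above.
life_years_at_stake = 247e9        # life years at stake this century
risk_reduction = 0.01 - 0.0099     # 1% x-risk cut to 0.99% by project X
cost = 1e9                         # £1 billion, spent once

expected_life_years = life_years_at_stake * risk_reduction   # ~24.7 million
print(f"£{cost / expected_life_years:.0f} per life year")    # ~£40
```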
2
Gregory Lewis
6y
The mistake might be on my part, but I think where this may be going wrong is that I assume the cost needs to be repeated each year (i.e. you spend 1B to reduce risk by 1% in 2018, then have to spend another 1B to reduce risk by 1% in 2019). So if you assume a single 1B pulse reduces x-risk across the century by 1%, then you do get 100-fold better results. I mainly chose the device of some costly 'project X' as it is hard to get a handle on (e.g.) whether a 10^-10 reduction in x-risk/$ is a plausible figure or not. Given this, I might see if I can tweak the wording to make it clearer - or at least make any mistake I am making easier to diagnose.

Thanks for writing this up! This does seem to be an important argument not made often enough.

To my knowledge this has been covered a couple of times before, although not as thoroughly.

Once by the Oxford Prioritisation Project, although they approached it from the other end, instead asking "what absolute percentage x-risk reduction would you need to get for £10,000 for it to be as cost-effective as AMF?" and finding the answer of 4 x 10^-8%. I think your model gives £10,000 as reducing x-risk by 10^-9%, which fits with your conclusion of close but not qu... (read more)

6
Benjamin_Todd
6y
I also made a very rough estimate in this article: https://80000hours.org/articles/extinction-risk/#in-total-how-effective-is-it-to-reduce-these-risks Though this estimate is much better and I've added a link to it. I also think x-risk over the century is over 1%, and we can reduce it much more cheaply than your guess, though it's nice to show it's plausible even with conservative figures.

A couple of brief points in favour of the classical approach: it in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)

I'm not sure I s

... (read more)
0
Jeffhe
6y
Thanks for the exposition. I see the argument now. You're saying that, if we determined "total pain" by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly high, with the only difference between the states of affairs being the identity of those who suffer it.

I've since made clear to you that who suffers matters to me too, so if the above is right, then according to my moral theory, what we ought to do is assign an equal chance to any possible action we could take, since each possible action gives rise to the same total pain, just suffered by different individuals. Your argument would continue: any moral theory that gave this absurd recommendation cannot be correct. Since the root of the absurdity is my preferred approach to determining total pain, that approach must be problematic too.

My response: JanBrauner, if I remember correctly, was talking about extreme unpredictability, but your argument doesn't seem to be based on unpredictability. If A1 and A2 are true, then each possible action more-or-less seems to inevitably result in a different person suffering maximal pain.

Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot). I think there is no more absurdity in assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive. Since I don't find the latter absurd at all (keeping in mind that none of the million will suffer anything worse than the one, i.e. that the one would suffer no less than any one of the million), I would not find the former absurd either.

are you using "bad" to mean "morally bad?"

Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)

I think thinking in terms of 'total pain' is not normally how this is approached, instead one thinks about conv... (read more)

0
Jeffhe
6y
So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:

1. Assign a moral value to each person's experiences based on its overall what-it's-like. For example, if someone is to experience 5 headaches, we are to assign a single moral value to his 5 headaches based on how experientially bad the what-it's-like-of-going-through-5-headaches is. If going through 5 such headaches is about as experientially bad as going through 1 major headache, then we would assign the same moral value to someone's 5 minor headaches as we would to someone else's 1 major headache.

2. We then add up the moral value assigned to each person's experiences to get a global moral value, and compare this moral value to the other global values corresponding to the other states of affairs we could bring about.

This approach reminds me of trade-off situations that involve saving lives instead of saving people from suffering. For example, suppose we can either save Amy's and Susie's lives or Bob's life, but we cannot save all. Who do we save? Most people would reason that we should save Amy's and Susie's lives because each life is assigned a certain positive moral value, so 2 lives have twice the moral value of 1 life. I purposely avoided talking about trade-off situations involving saving lives because I don't think a life has moral value in itself, yet I anticipated that people would appeal to life having some sort of positive moral value in itself and I didn't want to spend time arguing about that. In any case, if life does have positive moral value in itself, then I think it makes sense to add those values just as it makes sense to add the dollar values of different merchandise. This would result in Amy's and Susie's deaths being a morally worse thing than Bob's death, and so I would at least agree that what we ought to do in this case wouldn't be to give everyone a 50% chance. In any case, if we assign a moral value to each per
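To make that two-step procedure concrete, here is a minimal sketch with made-up numbers (my own illustration, not anything from the thread):

```python
# Sketch of the two-step aggregation described above, with made-up values.
# Step 1: map each person's experiences to one moral value (more negative
# means experientially worse). Here, the two states involve the same pain.
state_a = {"Amy": -10, "Susie": -10}   # two people each suffering
state_b = {"Bob": -10}                 # one person suffering the same pain

# Step 2: sum across people and compare states of affairs.
total_a = sum(state_a.values())        # -20
total_b = sum(state_b.values())        # -10
print(total_a < total_b)               # True: state_a is judged morally worse
```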

Thanks for getting back to me. I've read your reply to kbog, but I don't find your argument especially different from those you laid out previously (which, given that I always thought you were trying to make the moral case, should maybe not be surprising). Again I see why there is a distinction one could care about, but I don't find it personally compelling.

(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as 'two people experiencing the same pain is twice as bad as one person... (read more)

0
Jeffhe
6y
Hey Alex,

Thanks again for taking the time to read my conversation with kbog and replying. I have a few thoughts in response:

When you say that many people here would embrace the assumption that "two people experiencing the same pain is twice as bad as one person experiencing that pain", are you using "bad" to mean "morally bad"? I ask because I would agree if you meant morally bad IF the single person was a subset of the two people. For example, I would agree that Amy and Susie each suffering is twice as morally bad as just Amy suffering. However, I would not agree IF the single person was not a subset of the two (e.g., if the single person was Bob). If the single person was Bob, I would think the two cases are morally just as bad.

Now, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse (i.e. twice as morally bad) as one person suffering. However, based on my preferred sense of "more pain", two people suffering involves the same amount of pain as one person suffering, irrespective of whether the single person is a subset or not. Therefore, you might wonder how I am able to arrive at the different opinions above. More specifically, if I think Amy and Susie each suffering involves the same amount of pain as just Amy suffering, shouldn't I be committed to saying that the former is morally just as bad as the latter, rather than twice as morally bad (which is what I want to say)?

I don't think so. I think the Pareto principle provides an adequate reason for taking Amy and Susie each suffering to be morally worse than just Amy's suffering. As Otsuka (a philosopher at Harvard) puts it, the Pareto principle states that "One distribution of benefits over a population is strictly Pareto superior to another distribution of benefits over that same population just in case (i) at least one person is better off under the former distribution than she would b

Huh, weirdly they all seem to work again now; they used to take me to the same page as any non-valid URL, e.g. https://80000hours.org/not-a-real-URL/

The links to 2, 4, 6 and 15 seem broken on the 80K end; I just get 'page not found' for each.

Link 30 also does not work, but that is just because it starts with an unnecessary "effective-altruism.com/" before the YouTube link.

I checked and everything else seems to work.

1
80000_Hours
6y
Hi Alex, thanks - I fixed 30. 2, 4, 6 and 15 are working for me; can you email over a screenshot of the error you're getting?

Thanks for writing this! The interaction between donations and the reduction in the personal allowance is interesting, and I would not have thought of it otherwise.

One reservation I would have about the usefulness of a database vs lots of write-ups 'in context' like these is that I think how well activities work can depend heavily on the wider structure and atmosphere of the retreat, as well as the events that have come before. I would probably be happier with a classification of 2 or 3 different types of retreat, and the activities that seem to work best in each. (However, we should not let perfect be the enemy of good here, and there are probably a number of things that work well across different retreat styles.)

Yo... (read more)

Thanks for writing this up!

For your impact review: this seems likely to have some impact on the program of future years' EA: Cambridge retreats. (In particular, it seems likely we will include a version of the 'Explaining Concepts' activity, which we would not have done otherwise; it is also an additional point in favour of CFAR stuff, and another call to think carefully about the space/mood we create.)

I am also interested in the breakdown of how you spent the 200h planning time, since I would estimate the EA: Cam retreat (which had around 45 attendees, ... (read more)

3
Jan_Kulveit
6y
Thanks for the feedback. Judging from this and some private feedback, I think it would actually make sense to create some kind of database of activities, containing not only descriptions but also info like how intellectually/emotionally/knowledge demanding an activity is, what materials you need, what the prerequisites are, best practices... and ideally also data about past presentations and feedback. My rough estimate of the time costs is 20h of general team meetups, 10h syncing between the team and the CZEA board, 70h of individual time spent on planning and preparations, 50h of activity development, and 50h of survey design, playing with data, writing this, etc. I guess in your case you are not counting the time cost of the people giving the talks preparing them?

I think I agree with the comments on this post that job postings on the EA forum are not ideal, since if all the different orgs did it they would significantly clutter the forum.

The existing "Effective Altruism Job Postings" Facebook group and possibly the 80k job board should fulfill this purpose.

2
RobBensinger
6y
If clutter is the main concern, might it be useful for 80K to post a regular (say, monthly) EA Forum post noting updates to their job board, and to have other job ad posts get removed and centralized to that post? I personally would have an easier time keeping track of what's new vs. old if there were a canonical location that mentioned key job listing updates.
1
HaydenW
6y
Sorry about that, I hadn't seen that thread. Consider me well and truly chastened!
2
Peter Wildeford
6y
As last time, I upvoted this comment and downvoted the post to show I agree with a "no job postings on the EA Forum unless they have other content of general interest" norm.
1
impala
6y
Seconded

How about a shameless plug for EA Work Club? 😇

This role is also listed there – http://www.eawork.club/jobs/87

Thanks for your reply - I'm extremely confused if you think there is no "intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person", since (as has been discussed in these comments) if you view/define total pain as being measured by the intensity-weighted number of experiences, this gives a clear metric that matches consequentialist usage.

I had assumed you were arguing at the 'which is morally important' level, which I think might well come down to intuitions.

I hope you manage to work it out with kbog!

1
Jeffhe
6y
Hey Alex, thanks for your reply. I can understand why you'd be extremely confused, because I think I was in error to deny the intelligibility of the utilitarian sense of "more pain". I have recently replied to kbog acknowledging this mistake, outlining how I understand the utilitarian sense of "more pain", and then presenting an argument for why my sense of "more pain" is the one that really matters. I'd be interested to know what you think.

(Posted as a top-level comment as I had some general things to say; this was originally a response here.)

I just wanted to say I thought this comment did a good job explaining the basis behind your moral intuitions, which I had not really felt a strong motivation for before now. I still don't find it particularly compelling myself, but I can understand why others could find it important.

Overall I find this post confusing though, since the framing seems to be 'Effective Altruism is making an intellectual mistake' whereas you just actually seem to have a different se... (read more)

0
kbog
6y
Little disagreement in philosophy comes down to a matter of bare differences in moral intuition. Sometimes people are just confused.

I just wanted to say I thought this comment did a good job explaining the basis behind your moral intuitions, which I had not really felt a strong motivation for before now. I still don't find it particularly compelling myself, but I can understand why others could find it important.

Overall I find this post confusing though, since the framing seems to be "Effective Altruism is making an intellectual mistake" whereas you just actually seem to have a different set of moral intuitions from those involved in EA, which are largely incompatible with ef... (read more)

We may just be seeing upvote inflation if the EA forum now has more readers than before

Thanks for the writeup, I was not aware of the Effect Foundation before now.

After reading the above I am still not sure exactly what kind of outreach you perform. Could you give me a quick rundown of how you think you influenced the donations, and what you plan to continue doing going forwards?

2
Jorgen_Ljones
6y
As of now it is quite low-effort. We have a website that works as a donation portal, providing information about GiveWell orgs in Norwegian, general arguments for why one should give effectively, and transparent information about the Effect Foundation. The main value here is that the information is provided in Norwegian and that we support Norwegian payment methods. These payment methods have no or low fees, so there are some savings in transaction costs from donating through us rather than directly. In addition to the website, we use Facebook to promote the organizations and effective giving, and we use the new Facebook fundraising feature. We also have a promotional video shown on national television 1-2 days a year (http://effective-altruism.com/ea/l0/eacommersials_on_national_tv_in_norway_for_free/). We have experimented with donor events (AMF visited Oslo last year for a talk and a get-together at a pub afterwards) and with reaching out to companies and their CSR projects (http://effective-altruism.com/ea/1js/project_report_on_the_potential_of_norwegian/).

Thanks for writing this - it fits well with my experience of how a lot of people get increasingly involved with EA, bouncing between disparate programs by different orgs. This does unfortunately make evaluating impact much harder, but I think it is important to bear in mind when designing resources for EA outreach or similar projects.

Thanks for the post. As a minor nitpick, shouldn't the maximal DALY cost of doing something for an hour a day be 1/16, since there are only 16 waking hours in a day and presumably the period spent asleep does not contribute?

1
Elizabeth
6y
You're the second person to argue for this (the other was on my personal blog), and I hear the argument. I think there's a slippery slope of what to control for here - if I include sleep, I'd also want to look at how happy people were when meditating relative to the activity it displaced.

Ah, good point on the researcher salary; it was definitely just eyeballed and should be higher.

I think a reason I was happy to leave it low was as a fudge to take into account that the marginal impact of a researcher now is likely to be far greater than the average impact if there were 10,000 working on x-risk, but I should have clarified that as a separate factor.

In any case, even adjusting the cost of a researcher up to $500,000 a year and leaving the rest unchanged does not significantly change the conclusion, with the very rough calculation still giving ~$10 per QALY (but it obviously leaves less wiggle room for skepticism about the efficacy of research etc.).

2
Denkenberger
6y
Indeed, the Oxford Prioritisation Project found cost-effectiveness about an order of magnitude lower than yours for AI. But still it was more cost-effective than global poverty interventions even in the present generation. And alternate foods for agricultural catastrophes are even more cost effective for the present generation.