I know this wasn't directed at me but I have a few thoughts.
First question: in broad terms, what do you think moral philosophers should infer from psychological studies of this type in general, and from this one in particular? One perspective would be for moral philosophers to update their views towards those of the population - the "500 million Elvis fans can't be wrong" approach.
I think there are various useful things one can take from this study. A few main ones off the top of my head:
You might find it interesting to read through this post and the comments. It covers how one might use a large sum of money to most improve the far future.
Many EAs believe that we should be focused on the far future when doing good, a school of thought called "longtermism". Let me know if you would like to read more about that and I can try to point you in a useful direction.
OK thanks, I have now read through it and seen that you discuss randomness in section 4.
Overall a very interesting read! Out of interest, is this idea of "acausal control" entirely novel or has it/something similar been discussed by others?
I haven't read the whole post, but:
But defecting in this case, I claim, is totally crazy. Why? Because absent some kind of computer malfunction, both of you will make the same choice, as a matter of logical necessity.
Is this definitely true when you take into account quantum randomness? Maybe it is, but, if so, I think it might be worth explaining why.
I see where you're coming from, but if this is true:
For all these reasons, utilitarians are largely united in rejecting person-affecting views, even as they continue to debate which impersonal theory provides the best way forward.
then part of me thinks that the population ethics section did in fact need to pay adequate attention to the potential drawbacks of person-affecting views and make it clear why utilitarians tend to think impersonal theories are preferable, which was always going to come across as somewhat biased.
Ultimately whilst the section is an int... (read more)
My gut reaction is to think we should just make use of the abundant mental health resources that already exist. I'm not sure why it would help for such resources to be EA-specific.
It would certainly be useful for someone to make a summary of available resources and/or to do a meta-review of what works for mental health, but I can't see that this would require a whole organisation to be set up. CEA could, for example, hire one person to work on this, and that would seem to me to be sufficient.
For the record I'm not really sure about 10^30 times, but I'm open to 1,000s of times.
And differences of 10^30 are almost impossible, because everything we do now may affect the whole far future and therefore has nontrivial expected impact on vast numbers of lives.
Pretty much every action has an expected impact on the future, in that we know it will radically alter the future, e.g. by altering the times of conceptions and therefore who lives in the future. But that doesn't necessarily mean we have any idea of the magnitude or sign of this expected impact. Wh... (read more)
It seems to me that many longtermists believe (i) but that almost no-one believes (ii).
Really? This surprises me. Combine (i) with the belief that we can tractably influence the far future and don't we pretty much get to (ii)?
I think Ord's favoured approach to moral uncertainty is maximising expected choice-worthiness (MEC), which he argues for with Will MacAskill.
Reading the abstract of the moral parliamentarianism paper, it isn't clear to me that he is actually a proponent of that approach, just that he has a view on the best specific approach within moral parliamentarianism.
As I say in my comment to Ben, I think an MEC approach to moral uncertainty can lead to being quite fanatical in favour of longtermism.
I don't think it's necessarily clear that incorporating moral uncertainty means you have to support hedging across different plausible views. If one maximises expected choiceworthiness (MEC), for example, one can be fanatically driven by a single view that posits an extreme payoff (e.g. strong longtermism!).
Indeed MacAskill and Greaves have argued that strong longtermism seems robust to variations in population axiology and decision theory whilst Ord has argued reducing x-risk is robust to normative variations (deontology, virtue ethics, consequentialism). I... (read more)
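To make the fanaticism worry above concrete, here is a rough sketch of how MEC works (the numbers are purely illustrative, not taken from any of the papers mentioned). MEC ranks an action a by its credence-weighted choiceworthiness across the moral theories T_1, ..., T_n one has credence in:

EC(a) = C(T_1)·CW_1(a) + C(T_2)·CW_2(a) + ... + C(T_n)·CW_n(a)

If, say, one gives strong longtermism only 1% credence but it assigns a choiceworthiness of 10^6 to some far-future intervention, while every other theory assigns at most 100 to any option, then 0.01 × 10^6 = 10,000 swamps 0.99 × 100 = 99, and MEC recommends the longtermist option despite the low credence.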
Thanks, I understand all that. I was confused when Khorton said:
I meant increasing the number of grantmakers who have spent significant time thinking about where to donate significant capital
I wouldn't say the lottery increases the number of grantmakers who have spent significant time thinking; I think it in fact reduces it.
I agree with you, however, when you say:
The overall amount of time spent is actually less than before, but the depth is far greater, and with dramatically less redundancy.
I think perhaps we agree then - if after significant research, you realize you can't beat an EA Fund, that seems like a reasonable fallback, but that should not be plan A.
Yeah that sounds about right to me.
I still don't understand this. The lottery means one / a small number of grantmakers get all the money to allocate. People who don't win don't need to think about where to donate. So really it seems to me that the lottery reduces the number of grantmakers, and indeed the number of people who spend time thinking about where to donate.
The model is this:
I'm not sure I understand how the lottery increases the diversity of funding sources / increases the number of grantmakers if one or a small number of people end up winning the lottery. Wouldn't it actually reduce diversity / number of grantmakers? I might be missing something quite obvious here...
Reading this it seems the justification for lotteries is that it not only saves research time for the EA community as a whole, but also improves the allocation of the money in expectation. Basically if you don't win you don't have to bother doing any research (so... (read more)
Although whether increasing the population is a good thing depends on whether you are an average utilitarian or a total utilitarian.
Yes that is true. For what it's worth, most people who have looked into population ethics at all reject average utilitarianism, as it has some extremely unintuitive implications like the "sadistic conclusion", whereby one can make things better by bringing into existence people with terrible lives, as long as they're still bringing up the average wellbeing level by doing so, i.e. if existing people have even worse lives.
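A toy illustration of how this can happen under average utilitarianism (made-up numbers, purely for illustration): suppose 10 existing people each have wellbeing -10, so the average is -10. Now bring into existence 5 new people with terrible lives at wellbeing -2. The new average is

(10 × (-10) + 5 × (-2)) / 15 = -110 / 15 ≈ -7.3

which is higher than -10, so average utilitarianism counts the change as an improvement, even though the only thing that happened is that people with bad lives were brought into existence.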
I got the impression that their new, general-purpose pool would still be fairly longtermist, but it's possible they will have to make sacrifices.
To clarify, it's not that I don't think they would be "longtermist"; it's more that I think they may have to give to longtermist options that "seem intuitively good to a non-EA", e.g. giving to an established organisation like MIRI or CHAI, rather than giving to longtermist options that may be better on the margin but seem a bit weirder at first glance, like "buying out some clever person so they have more time to do s... (read more)
Yeah you probably should - unless perhaps you think there are scale effects to giving which make you want to punt on being able to give far more.
Worth noting that Patrick didn't know he was going to give to a capital allocator when he entered the lottery, and of course still doesn't. Ideally all donor lottery winners would examine the LTFF very carefully and honestly consider whether they think they can do better than LTFF. People may be able to beat LTFF, but if someone isn't giving to LTFF I would expect clear justification as to why they think they can beat it.
I disagree. One of the original rationales for the lottery if I recall correctly was to increase the diversity* of funding sources and increase the number of grantmakers. I think if the LTFF is particularly funding constrained, there's a good chance the Open Philanthropy Project or a similar organisation will donate to them. I value increased diversity and number of grantmakers enough that I think it's worth trying to beat LTFF's grantmaking even if you might fail.
*By diversity, I don't mean gender or ethnicity; I just mean having more than one grantmaker doing the same thing, ideally with different knowledge, experience and connections.
Would you mind linking some posts or articles assessing the expected value of the long-term future?
You're right to question this as it is an important consideration. The Global Priorities Institute has highlighted "The value of the future of humanity" in their research agenda (pages 10-13). Have a look at the "existing informal discussion" on pages 12 and 13, some of which argues that the expected value of the future is positive.
Sure, it's possible that some form of eugenics or genetic engineering could be implemented to raise the average hedonic set-point
I'm not really sure what to think about digital sentience. We could in theory create astronomical levels of happiness, astronomical levels of suffering, or both. Digital sentience could easily dominate all other forms of sentience so it's certainly an important consideration.
It seems unlikely to me that we would go extinct, even conditional on "us" deciding it would be best.
This is a fair point to be honest!
In general, it kind of seems like the "point" of the lottery is to do something other than allocate to a capital allocator.
If you enter a donor lottery your expected donation amount is the same as if you didn't enter the lottery. If you win, it will be worth spending more time thinking carefully about where to allocate the money than if you had never entered, as you're giving away a much larger amount. Because extra time thinking is more likely to lead to better (rather than worse) decisions, this leads to more (expected) impact overall, even though... (read more)
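To make the arithmetic concrete (illustrative numbers only, not from the original post): suppose the lottery pot is $100,000 and you contribute $5,000. Then

P(win) = $5,000 / $100,000 = 5%
E[money you direct] = 0.05 × $100,000 = $5,000

so your expected donation is unchanged, but in the 5% of cases where you win you are allocating $100,000, which justifies far more research time than a $5,000 donation would; if that research improves the allocation, expected impact goes up even though expected dollars do not.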
There is still the possibility that the Pinkerites are wrong though, and quality of life is not improving.
Sure, and there could be more suffering than happiness in the future, but people go with their best guess about what is more likely and I think most in the EA community side with a future that has more happiness than suffering.
happiness levels in general should be roughly stable in the long run regardless of life circumstances.
Maybe, but if we can't make people happier we can always just make more happy people. This would be very highly desirable if yo... (read more)
B) the far future can be reasonably expected to have significantly more happiness than suffering
I think EAs who want to reduce x-risk generally do believe that the future should have more happiness than suffering, conditional on no existential catastrophe occurring. I think these people generally argue that quality of life has improved over time and believe that this trend should continue (e.g. Steven Pinker's The Better Angels of Our Nature). Of course life for farmed animals has got worse...but I think people believe we should successfully render factory... (read more)
Of course life for farmed animals has got worse...but I think people believe we should successfully render factory farming redundant on account of cultivated meat.
I think there's recently more skepticism about cultured meat (see here, although I still expect factory farming to be phased out eventually, regardless), but either way, it's not clear a similar argument would work for artificial sentience, used as tools, used in simulations or even intentionally tortured. There's also some risk that nonhuman animals themselves will be used in space colonization,... (read more)
I have to say I'm pretty glad you won the lottery as I like the way you’re thinking! I have a few thoughts which I put below. I’m posting here so others can respond, but I will also fill out your survey to provide my details as I would be happy to help further if you are interested in having my assistance!
TLDR: I think LTFF and PPF are the best options, but it’s very hard to say which is the better of the two.
I'm not saying that reducing S-risks isn't a great thing to do, nor that it would reduce happiness; I'm just saying that it isn't clear that a focus on reducing S-risks rather than on reducing existential risk is justified if one values reducing suffering and increasing happiness equally.
I think robustness (or ambiguity aversion) favours reducing extinction risks without increasing s-risks and reducing s-risks without increasing extinction risks, or overall reducing both, perhaps with a portfolio of interventions. I think this would favour AI safety, especially that focused on cooperation, possibly other work on governance and conflict, and most other work to reduce s-risks (since it does not increase extinction risks), at least if we believe CRS and/or CLR that these do in fact reduce s-risks. I think Brian Tomasik comes to an overall pos... (read more)
My understanding is that Brian Tomasik has a suffering-focused view of ethics in that he sees reducing suffering as inherently more important than increasing happiness - even if the 'magnitude' of the happiness and suffering is the same.
If one holds a more symmetric view where suffering and happiness are both equally important it isn't clear how useful his donation recommendations are.
Probably depends on how you're reducing poverty...and how long-term your "long-term" is. Something like removing trade restrictions is likely to have very different long-term effects than distributing bednets. Even then I really don't have good answers for you on the nature of these differences.
You might want to check out the persistence studies literature, for example work by Nathan Nunn, whom Will MacAskill references in this talk. This may not precisely align with what you're asking for, but Nunn has studies finding, for example, that:
Could also go for tractable and intractable cluelessness?
Also I wonder if we should be distinguishing between empirical and moral cluelessness - with the former being about claims about consequences and the latter about fundamental ethical claims.
Thanks Robert. I've never seen this breakdown of cluelessness before, and it could be a useful way for further research to frame the issue.
The Global Priorities Institute raised the modelling of cluelessness in their research agenda and I'm looking forward to further work on this. If interested, see below for the two research questions related to cluelessness in the GPI research agenda. I have a feeling that there is still quite a bit of research that could be conducted in this area.
Forecasting the long-term effects of our actions often requires... (read more)
Yeah I meant ruling out negative EV in a representor may be slightly extreme, but I’m not really sure - I need to read more.
Thanks, I really haven't given sufficient thought to the cluelessness section which seems the most novel and tricky. Fanaticism is probably just as important, if not more so, but is also easier to get one's head around.
I agree with you in your other comment though that the following seems to imply that the authors are not "complexly clueless" about AI safety:
For example, we don’t think any reasonable representor even contains a probability function according to which efforts to mitigate AI risk save only 0.001 lives per $100 in expectation.
I mean I ... (read more)
On their (new) view on what objections against strong longtermism are strongest - I think that this may be the most useful update in the paper. I think it is very important to pinpoint the strongest objections to a thesis, to focus further research.
It is interesting that the authors essentially appear to have dismissed the intractability objection. It isn’t clear if they no longer think this is a valid objection, or if they just don’t think it is as strong as the other objections they highlight this time around. I would like to ask them about this in... (read more)
On addressing cluelessness - for the most part I agree with the authors’ views, which includes the view that there needs to be further research in this area.
I do find it odd, however, that they attempt to counter the worry of ‘simple cluelessness’ but not that of ‘complex cluelessness’, i.e. to counter the possibility that there could be semi-foreseeable unintended consequences of longtermist interventions that make us ultimately uncertain about the sign of the expected-value assessment of these interventions. Maybe they see this as obviously not an issue...but I would have appreciated some thoughts on this.
On the new definition - as far as I can tell it does pretty much the same job as the old definition, but is clearer and more precise, bar a small nitpick I have...
One deviation is from “a wide class of decision situations” to “the most important decision situations facing agents today”. As far as I can tell, Greaves and MacAskill don’t actually narrow the set of decision situations they argue ASL applies to in the new paper. Instead, I suspect the motivation for this change in wording was because “wide” is quite imprecise and subjective (Greaves concedes t... (read more)
You mention that some EAs oppose progress / think that it is bad. I might be wrong, but I think these people only "oppose" progress insofar as they think x-risk reduction from safety-based investment is even better value on the margin. So it's not that they think progress is bad in itself; it's just that they think that speeding up progress incurs a very large opportunity cost. Bostrom's 2003 paper outlines the general reasoning why many EAs think x-risk reduction is more important than quick technological development.
Also, I think most EAs intereste... (read more)
I would absolutely expect EAs to differ in various ways to the general population. The fact that a greater proportion of EAs are vegan is totally expected, and I can understand the computer science stat as well given how important AI is in EA at the moment.
However when it comes to sexuality it isn't clear to me why the EA population should differ. It may not be very important to understand why, but then again the reason why could be quite interesting and help us understand what draws people to EA in the first place. For example perhaps LGBTQ+ people ... (read more)
Thanks. Any particular reason why you decided to do unguided self-description?
You could include the regular options and an "other (please specify)" option too. That might give people choice, reduce time required for analysis, and make comparisons to general population surveys easier.
We observed extremely strong divergence across gender categories. 76.9% of responses from male participants identified as straight/heterosexual, while only 48.6% of female responses identified as such.
The majority of females don't identify as heterosexual? Am I the only one who finds this super interesting? I mean in the UK around 2% of females in the wider population identify as LGB.
Even the male heterosexual figure is surprisingly low. Any sociologists or others want to chime in here?
I think that's fair but I also think that non-neglectedness is actually bad for two reasons:
I'm thinking number 2 could be quite relevant in this case. Admittedly it's quite relevant for any EA intervention that involves systemic change, but I get the impression that other systemic change interventions may be even higher in importance.
The only thing of interest here is what sort of compromise ACE wanted. What CARE said in response is not of immediate interest, and there's certainly no need to actually share the messages themselves.
Perhaps you can understand why one might come away from this conversation thinking that ACE tried to deplatform the speaker? To me at least it feels hard to interpret "find a compromise" any other way.
[Note that I have no idea whatsoever about what actually happened here. This is purely hypothetical.]
FWIW if I was in a position similar to ACE's here are a few potential "compromises" I would have explored. (Of course, which of these would be acceptable to me would depend on exactly why I'm concerned and how strongly, etc.) I think some of them wouldn't typically be considered deplatforming, though I would imagine that people who are against deplatforming would find many if not all of them at least somewhat objectionable (I would also guess that some who ... (read more)
Thanks for writing this comment as I think you make some good points and I would like people who disagree with Hypatia to speak up rather than stay silent.
Having said that, I do have a few critical thoughts on your comment.
Your main issue seems to be the claim that these harms are linked, but you just respond by only saying how you feel reading the quote, which isn't a particularly valuable approach.
I don’t think this was Hypatia’s main issue. Quoting Hypatia directly, they imply the following are the main issues:
I don't find your comment to have much in the way of argument as to why it might be bad if papers like this one become more widespread. What are you actually worried would happen? This isn't super clear to me at the moment.
I agree a paper that just says "we should ignore the repugnant conclusion" without saying anything else isn't very helpful, but this paper does at least gather reasons why the repugnant conclusion may be on shaky ground which seems somewhat useful to me.
My short answer is that 'neutrality against creating happy lives' is not a mainstream position in the EA community. Some do hold that view, but I think it's a minority. Most think that creating happy lives is good.
Thanks for writing this Michael, I would love to see more research in this area.
Thus, it seems plausible that expanding a person’s moral circle to include farm animals doesn’t bring the “boundary” of that person’s moral circles any “closer” to including whatever class of beings we’re ultimately concerned about (e.g., wild animals or artificial sentient beings). Furthermore, even if expanding a person’s moral circle to include farm animals does achieve that outcome, it seems plausible that the outcome would be better achieved by expanding moral c
To be honest I'm not really sure how important there being a distinction between simple and complex cluelessness actually is. The most useful thing I took from Greaves was to realise there seems to be an issue of complex cluelessness in the first place - where we can't really form precise credences in certain instances where people have traditionally felt like they can, and that these instances are often faced by EAs when they're trying to do the most good.
Maybe we're also complexly clueless about what day to conceive a child on, or which chair to sit on, b... (read more)
So far, I feel I've been able to counter any proposed example, and I predict I would be able to do so for any future example (unless it's the sort of thing that would never happen in real life, or the information given is less than one would have in real life).
I think simple cluelessness is a subjective state. In reality one chair might be slightly older, but one can be fairly confident that it isn't worth trying to find out (in expected value terms). So I think I can probably just modify my example to one where there doesn't seem to be any subjectiv... (read more)
Your critique of the conception example might be fair actually. I do think it's possible to think up circumstances of genuine 'simple cluelessness' though where, from a subjective standpoint, we really don't have any reasons to think one option may be better or worse than the alternative.
For example we can imagine there being two chairs in front of us and making a choice of which chair to sit on. There doesn't seem to be any point stressing about this decision (assuming there isn't some obvious consideration to take into account), although it is cert... (read more)
Thanks for all your comments Michael, and thanks for recommending this post to others!
I have read through your comments and there is certainly a lot of interesting stuff to think about there. I hope to respond but I might not be able to do that in the very near future.
I'd suggest editing the post to put the misconceptions in the headings in quote marks
Great suggestion thanks, I have done that.
OK thanks I think that is clearer now.
Thanks yeah, I saw this section of the paper after I posted my original comment. I might be wrong but I don't think he really engages in this sort of discussion in the video, and I had only watched the video and skimmed through the paper.
So overall I think you may be right in your critique. It might be interesting to ask Tarsney about this (although it might be a fairly specific question to ask).
OK that's clearer, although I'm not immediately sure why the paper would have achieved the following:
I somewhat updated my views regarding: how likely such a lock-in is, and in particular how likely it is that a state that looks like it might be a lock-in would actually be a lock-in...
I think Tarsney implies that institutional reform is less likely to be a true lock-in, but he doesn't really back this up with much argument. He just implies that this point is somewhat obvious. Under this assumption, I can understand why his model would lead to the follow... (read more)
In case anyone is interested, Rob Wiblin will be interviewing Tarsney on the 80,000 Hours podcast next week. Rob is accepting question suggestions on Facebook (I think you can submit questions to Rob on Twitter or by email too).