Great, thanks for writing this. I wish you had included a short summary of the article in your post rather than only your evaluation. This would have provided more information to people who don't read the article. I read parts of the original article.
Hi Frank, I am not sure I completely understand your questions.
Are you talking about interspecies comparisons of utility levels, i.e., how can we determine whether these 20 insects are happier than this one human?
Or about utility differences, i.e., whether giving food to 20 insects results in more additional utility than giving the food to one human?
Literature I can recommend is:
Dawkins, M.S. (1990). From an Animal's Point of View: Motivation, Fitness, and Animal Welfare. Behavioral and Brain Sciences, 13(1), pp.1–9.
Fleurbaey, M., and Hammo...
(The Commission just opened its public consultation, which I encourage European NGOs, scientists, and citizens to weigh in on.)
Perhaps just to clarify the procedure: this is the Inception Impact Assessment consultation, where feedback is gathered on priorities and legislative paths. As written in the Inception Impact Assessment, another consultation, lasting 12 (rather than 7) weeks, will be opened in the second half of 2021.
For this consultation, a good answer would take a precise stance on the different options outlined in the Inception Impact...
As some examples, Open Wing Alliance, Compassion in World Farming, Humane Society International/Europe (HSI), and
Animal Protection Denmark (Dyrenes Beskyttelse) have already submitted comments to this feedback period.
For the subsequent public consultation process, I would again highlight that Alice DiConcetto of Animal Law Europe recently published a short manual on how to submit feedback to an EU public consultation, which I think will be valuable for advocates. IMO, feedback will be more impactful if it sends a consistent message but avoids sending dupl...
I also want to note that the catastrophes I and many others have added are still ongoing. It would be naive to say that these are only moral catastrophes of the past.
A few more controversial moral catastrophes:
Yes, I agree with you that they should be different but are related, so thanks for your edits. As far as I remember, Beckstead uses at least the QWERTY keyboard as an example of a trajectory change in his PhD thesis.
As far as I understand, Beckstead and other EAs also refer to this as a "trajectory change". Hence, I would find it useful to mention this name in the tag page.
Also see the response from CSER here.
Hi Edo, instead of the leaked document, you might want to link to the official publication, which is here. The European Commission simultaneously published the Coordinated Plan on AI. Some readers unfamiliar with the EU legislative process might assume that the details of the regulation are almost fixed, which is not the case. Over the next months and years, the Council and the European Parliament will work on the proposal and hold trilogue meetings.
I am confused as to how this relates to trajectory changes (https://forum.effectivealtruism.org/tag/trajectory-changes). When Beckstead (2013) talks about ripple effects, I understand him to be talking about trajectory changes, i.e., a certain class of interventions which might be very effective for longtermists compared to x-risk mitigation. Independently of this, and of whether one agrees with longtermism, it might still be relevant to think about info hazards and replaceability (the bullet point). I would suggest that the first paragraph be moved to trajectory changes instead. Sorry if I have overlooked something.
I have read all except one of the posts you linked to. I don't understand how your post relates to the two posts about children and would appreciate a comment. I agree with your argument that "EA jobs provide scarce non-monetary goods" and that it is hard to get hired by EA organisations. However, it is unclear to me that any of these posts provide a damaging critique of EA. I would be surprised if anyone managed to create a movement without any of these dynamics. However, I would also be excited to see work tackling these putative problems, such as the non-monetary value of different jobs.
Clarification question: why do you understand longtermism to be outside of EA?
It seems to me that longtermism (I assume you mean the combination of believing in strong longtermism (Greaves and MacAskill, 2019) and believing in doing the most good) just makes someone one particular kind of effective altruist (an effective altruist with particular moral and empirical beliefs).
Thanks for this very interesting syllabus and thank you for mentioning the issue of diversity and for the first steps of tackling it. I don't see this issue discussed very often on the EA forum and in EA adjacent academia.
Great, thank you :)
Thanks for writing this. Here are two of my messy thoughts: if you believe that X is the biggest and most important problem (e.g. clean meat, poverty alleviation, or AI governance), I would think that Head of the relevant department is a really, really good job for working on the problem.
I was also wondering why you are not considering the career capital you gain, which would later let you work on projects such as Alpenglow, or in applied research/lobbying/policy jobs, etc.
Thanks for sharing. Would you be able to share more information on the top-ranked option "exploration"? My thinking on this is limited (as it is in general regarding a cause X). Would you be able to share concrete ideas people talked about, or concrete proposed plans for such an organisation (a cause X organisation, or an organisation focused on one particular underexplored cause area)?
And on a related note, will the report about meta charities you describe here be published before the incubation programme application deadline (as it might be decision-relevant for some people)?
I am German, lead an EA group in the UK, and do EA career coaching there. I am personally interested in the policy side, but I am happy to talk you through your cause prioritisation and think about good jobs in Germany. If you are interested, PM me :)
Sorry, I don't have the time to comment in depth. However, I think that if one agrees with cluelessness, then you don't offer an objection. You might even extend their worries by saying that almost everything has "asymmetric uncertainty". I would be interested in an elaboration of your last sentence: "They are extremely unlikely and thus not worth bearing in mind." Why is this true?
Re: your old lady example: as far as I know, the recent papers (e.g. here) give the following example: (1) either you help the old lady on a Monday or on a Tuesday (you must and can do exactly one of the two options). In this case, your examples for CC1 and CC2 don't hold. One might argue that the previous example was maybe just a mistake, and I find it very hard to come up with CC1 and CC2 for (1) if (supposedly) you don't know anything about Mondays or Tuesdays.
Sorry about the late answer. I just wanted to say that I also upvoted your comment because I would be very interested in a longer piece on being an RA.
What is the most likely reason that s-risks are not worth working on?
Apart from the normative discussions relating to the suffering focus (cf. other questions), I think the most likely reasons are that s-risks may simply turn out to be too unlikely, or too far in the future for us to do something about it at this point. I do not currently believe either of those (see here and here for more), and hence do work on s-risks, but it is possible that I will eventually conclude that s-risks should not be a top priority for one of those reasons.
How did you figure out that you prioritize the reduction of suffering?
I am interested in your personal life story and in the most convincing arguments or intuition pumps.
Thank you very much for writing this up. However, I am not sure I understand your point, i.e., the things you are referring to in:
3. Policy and beyond – not happening – 2/10. Are you referring to your explanation within the subsection on The Parliament? If so, this would make sense to me.
Another operationalisation would be to ask to what extent the 80k top career recommendations have changed, e.g. what percentage of the current top recommendations will still be in the top recommendations in 10 years.
Hi Maxime and Konrad, thank you for your work and the post.
I have a question with regard to the structure of the book. From your summary and the longer description, it seems that chapters 2 and 3 (and perhaps 4) are quite distinct from 1, 4, and 5. While the former chapters are focused on policymaking/lobbying etc. in general (taking short-termist situations and longtermist problems as examples), the other three are more specifically about longtermist policies. Please correct me if I am wrong. Why did you decide to include them in the same publication? It seems to me that a po...
Yeah I was really surprised by this as well. As someone who already works in policy, I would be disappointed to pick up a book about long-termist policy making and find out that it's just explaining how my job works!
Even chapter 5 doesn't seem very clearly focused on long-termist policy rather than policy generally from this table of contents, but I'm probably not understanding the nuances.
Copying Catherine's message from the Group Organizers Slack:
I don't know whether this is the right place to post this: but why are we caring about the risk of the coronavirus for us as EAs? Why are people thinking about cancelling EAG or other local meetings?
(Are we caring for selfish reasons, or because this indirectly reduces the extent to which the virus spreads?)
If we believe that a young healthy person has a 0.5 percent chance of dying from the virus, that 5 percent of the world will be infected in expectation, and that all these actions (cancellation of EA events) reduce my chance of being infected by 5 percent:
(This seems super optim...
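For what it's worth, the back-of-the-envelope arithmetic the comment above sets up can be sketched as follows. All three percentages are the commenter's illustrative guesses, not real epidemiological estimates:

```python
# Sketch of the personal-risk arithmetic in the comment above.
# All numbers are the commenter's hypothetical figures, not actual estimates.

p_death_if_infected = 0.005     # 0.5% chance a young healthy person dies if infected
p_infected = 0.05               # 5% of the world infected, in expectation
relative_risk_reduction = 0.05  # cancelling events cuts my chance of infection by 5%

# Baseline personal mortality risk from the virus
baseline_risk = p_infected * p_death_if_infected          # 0.00025, i.e. 1 in 4,000

# Absolute reduction in personal mortality risk from the cancellations
risk_reduction = baseline_risk * relative_risk_reduction  # 1.25e-05, i.e. 1 in 80,000

print(f"Baseline personal risk: {baseline_risk}")
print(f"Risk reduction from cancellations: {risk_reduction}")
```

Under these assumptions the selfish risk reduction is on the order of 1 in 80,000, which is consistent with the commenter's suggestion that the stronger argument for cancellations is reducing spread to others rather than personal protection.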
Datapoint (my general considerations/thought processes around this, feeding into case-by-case decisions about my own activities rather than a blanket decision): I am (young healthy male) pretty unconcerned personally about risk to myself individually, but quite concerned about becoming a vector for spread (especially to older or less robust people). While I have a higher-than-some-people personal risk tolerance, I don't like the idea of imposing my risk tolerance on others. Particularly when travelling/fatigued/jetlagged, I'm not 100% sure I trus...
I am planning on writing a post summarizing the existing discussion of information cascades in EA, the different forms they take, and the possibilities for doing something against them. Lastly, I discuss why the concept of the information cascade might be disadvantageous. I would be interested in comments on the draft.
I think I updated towards "maybe it would be useful if this cause area were analysed in great depth". Is this planned at the moment? Perhaps interviewing experts etc.
Do you think that it might be important to develop clear guidelines on what is meant by the first article of the Outer Space Treaty: "The exploration and use of outer space, including the moon and other celestial bodies, shall be carried out for the benefit and in the interests of all countries, irrespective of their degree of economic or scientific development, and shall be the province of all man...
Interesting idea about the "driver's license" for rationality.
You suggest that EA student groups should run tournaments. I would be interested in your reasoning. Why do you think this is better than encouraging people to join foretold.io as individuals? Do you think that we are lacking an institution or platform which helps individuals get up to speed and interested in forecasting (so that they are good enough that foretold.io provides a positive experience)? Or do you think that these tournaments would be good signalling for students applyin...
(Thank you for writing this; my comment is related to Denkenberger's.) A consideration against the creation of groups around cause areas, if they are open to younger people (not only senior professionals) who are also new to EA (the argument does not hold if those groups are only for people who are very familiar with EA thinking; of course, among other things, those groups could also make work and coordination more effective):
It might be that this leads to a misallocation of people and resources in EA, as the cost of switching focus or careers increases with this net...