All of Charlotte's Comments + Replies

The Phil Torres essay in Aeon attacking Longtermism might be good

Great, thanks for writing this. I wish you had included a concise summary of the article in your post rather than only your evaluation; that would have given more information to people who don't read the article itself. I read parts of the original article.

How to assign numerical values to individual welfare?

Hi Frank, I am not sure I completely understand your questions.

Are you talking about interspecies comparisons of utility (differences)? I.e., how can we determine whether these 20 insects are happier than this one human?

Or are you asking whether, in terms of utility differences, giving food to 20 insects results in more additional utility than giving the food to one human?

Literature I can recommend:

Dawkins, M.S. (1990). From an Animal's Point of View: Motivation, Fitness, and Animal Welfare. Behavioral and Brain Sciences, 13(1), pp.1–9.

Fleurbaey, M., and Hammo... (read more)

1Frank_R4moMy question was mainly the first one. (Are 20 insects happier than one human?) Of course similar problems arise if you compare the welfare of humans. (Are 20 people whose living standard is slightly above subsistence happier than one millionaire?) The reason why I have chosen interspecies comparison as an example is that it is much harder to compare the welfare of members of different species. At least you can ask humans to rate their happiness on a scale from 1 to 10. Moreover, the moral consequences of different choices for the function f are potentially greater. The forum post seems to be what I have asked for, but I need some time to read through the literature. Thank you very much!
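To make concrete why "the moral consequences of different choices for the function f are potentially greater", here is a purely illustrative sketch in Python. It is not taken from the literature above; the welfare numbers and the capacity weight are entirely made up, and the two candidate functions are hypothetical stand-ins for possible choices of f.

```python
# Toy model: how the choice of an interspecies weighting function f
# changes the "20 insects vs. 1 human" comparison.
# All numbers are made up purely for illustration.

insect_welfare = 0.3  # hypothetical per-insect welfare on some raw scale
human_welfare = 5.0   # hypothetical per-human welfare on the same raw scale

def f_identity(w: float) -> float:
    """Treat raw welfare as directly comparable across species."""
    return w

def f_capacity_weighted(w: float, capacity: float) -> float:
    """Discount raw welfare by an assumed capacity-for-welfare factor."""
    return w * capacity

# Identity f: 20 insects outweigh 1 human (6.0 > 5.0).
print(f"{20 * f_identity(insect_welfare):.1f} vs {f_identity(human_welfare):.1f}")

# Capacity-weighted f with a small insect weight: they no longer do (0.6 < 5.0).
print(f"{20 * f_capacity_weighted(insect_welfare, 0.1):.1f} "
      f"vs {f_capacity_weighted(human_welfare, 1.0):.1f}")
```

Same raw data, opposite conclusions: the comparison hinges entirely on the assumed weighting.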
An End to Cages in Europe?

(The Commission just opened its public consultation, which I encourage European NGOs, scientists, and citizens to weigh in on.)


Just to clarify the procedure: this is the Inception Impact Assessment consultation, where feedback is gathered on priorities and legislative paths. As stated in the Inception Impact Assessment, another consultation, running for 12 (rather than 7) weeks, will open in the second half of 2021.


For this consultation, a good answer would take a precise stance on the different options outlined in the Inception Impact... (read more)

For example, the Open Wing Alliance, Compassion in World Farming, Humane Society International/Europe (HSI), and Animal Protection Denmark (Dyrenes Beskyttelse) have already submitted comments in this feedback period.

For the subsequent public consultation process, I would again highlight that Alice DiConcetto of Animal Law Europe recently published a short manual on how to submit feedback to an EU Public Consultation, which I think will be valuable for advocates. IMO, feedback will be more impactful if it sends a consistent message but avoids sending dupl... (read more)

What are some moral catastrophe events in history?

adding: 


I also want to note that the things I and many others have added are still ongoing. It would be naive to say that these are moral catastrophes of the past only.


A few more controversial moral catastrophes: 

  • religion (arguably counterfactually responsible for at least a few wars and a few really unhealthy cultural
... (read more)
Cultural persistence

Yes, I agree with you that they are distinct but related, so thanks for your edits. As far as I remember, Beckstead uses at least the QWERTY keyboard as an example of a trajectory change in his PhD thesis.

Cultural persistence

As far as I understand, Beckstead and other EAs also refer to this as a "trajectory change". Hence, I would find it useful to mention this name on the tag page.

2MichaelA6moI think this is definitely related to / relevant to the idea of trajectory change, so, prompted by your comment, I've added each entry in the other entry's Related entries section. And I do think an expanded version of the text for each entry should mention the other concept. (Like, once an editor gets around to it, that'd be good.) So thanks for mentioning that! OTOH, I think the concepts are meaningfully distinct, rather than being synonymous. In particular, trajectory changes are persistent changes to total value at every point in the long-term future [https://forum.effectivealtruism.org/tag/long-term-future]. Whereas things like persistence studies tend to focus on far shorter time horizons, and their scope could in theory cover persistence in cases where the persistence doesn't affect how morally valuable things are. (Though I expect almost all instances of persistence would at least slightly affect the value of the future, even e.g. a very slight rearrangement of keyboards that lasts indefinitely for some reason.)
A new proposal for regulating AI in the EU

Hi Edo, instead of the leaked document, you might want to link to the official publication, which is here. The European Commission simultaneously published the Coordinated Plan on AI. Some readers unfamiliar with the EU legislative process might assume that the details of the regulation are almost fixed, which is not the case. Over the next months/years, the Council and the European Parliament will work on the proposal and will have trilogue meetings.

5EdoArad7moThanks! I'll update the post :) I was hoping for someone more knowledgeable than myself to chip in
Indirect long-term effects

I am confused as to how this relates to trajectory changes (https://forum.effectivealtruism.org/tag/trajectory-changes). When Beckstead (2013) talks about ripple effects, I understand him to be talking about trajectory changes, i.e., a certain class of interventions which might be very effective for longtermists, compared to x-risk mitigation. Independently of this, and of whether one agrees with longtermism, it might still be relevant to think about info hazards and replaceability (the bullet point). I would suggest that the first paragraph should be moved to trajectory changes instead. Sorry if I have overlooked something.

2MichaelA7mo* I think "effects on the long-run future from interventions targeted at the short-term" are distinct from trajectory changes. * These effects may be trajectory changes, or increase or decrease the chance of trajectory changes, but trajectory changes can also occur as a result of things other than interventions targeted at the short-term (e.g., they can occur due to interventions targeted at the long-term). * Less relevantly for this comment, trajectory changes can include existential catastrophes or the prevention of them; "trajectory change" is a broader term that includes both that and "smaller" changes to the long-term future. * But I now realise that "indirect long-term effects" probably shouldn't actually be defined as only effects on the long-term future from interventions targeted at the short term * It seems like the most natural interpretation of the term would also cover long-term effects of interventions that were targeted at the long-term future, as long as the effects weren't what was intended * E.g., I think this article itself, as currently written, wants to imply that information hazards from research intended to reduce x-risk would be an example of indirect long-term effects. And that seems natural to me. But there the intervention was aimed at the long-term, not the short-term. * Wiblin's talk doesn't offer any clear definition, actually. * In some places, he seems to imply that the term covers examples where the original actions were aimed at influencing the long-term. * In other places, he does focus on how things * So I think this article should probably either use a different key term for its name (though I'm not sure what the best name would be) or should broaden the definition (highlighting the current definition as just one type of indirect

I have read all except one of the posts you linked to. I don't understand how your post relates to the two posts about children and would appreciate a comment. I agree with your argument that "EA jobs provide scarce non-monetary goods" and that it is hard to get hired by EA organisations. However, it is unclear to me that any of these posts provide a damaging critique of EA. I would be surprised if anyone managed to create a movement without any of these dynamics. However, I would also be excited to see work tackling these putative problems, such as the non-monetary value of different jobs.

Name for the larger EA+adjacent ecosystem?

Clarification question: why do you understand longtermism to be outside of EA?

It seems to me that a longtermist (I assume you mean someone who combines believing in strong longtermism (Greaves and MacAskill, 2019) with believing in doing the most good) is just one particular kind of effective altruist: an effective altruist with particular moral and empirical beliefs.

4RyanCarey9moJust like environmentalism and animal rights intersect with EA, without being a subset of it, the same could be true for longtermism. (I expect longtermism to grow a lot outside of EA, while remaining closer to EA than those other groups.)
A full syllabus on longtermism

Thanks for this very interesting syllabus, and thank you for mentioning the issue of diversity and for taking first steps to tackle it. I don't see this issue discussed very often on the EA Forum or in EA-adjacent academia.

5jtm9moThanks for your comment. I wholeheartedly agree that this is generally a neglected issue in the community, which is partly why I included the brief note – although, as stated, I believe it deserves separate and longer discussions.
How high impact are UK policy career paths?

Thanks for writing this. Here are two of my messy thoughts: if you believe that X is the biggest and most important problem (e.g. clean meat, poverty alleviation, or AI governance), I would think that Head of the relevant department is a really, really good position from which to work on the problem.

I was also wondering why you are not considering the career capital these paths provide for later work on projects such as Alpenglow, or for jobs in applied research, lobbying, policy think tanks, etc.

What areas are the most promising to start new EA meta charities - A survey of 40 EAs

Thanks for sharing. Would you be able to share more information on the top-ranked option, "exploration"? My thinking on this is limited (as it is in general regarding a cause X). Would you be able to share concrete ideas people talked about, or concrete proposed plans for such an organisation (a cause X organisation, or an organisation focused on one particular underexplored cause area)?


And on a related note, will you publish the report about meta charities that you describe here before the incubation programme application deadline (as it might be decision-relevant for some people)?

5Joey1yOur current plan is to publish a short description but not a full report of the top ideas we plan to recommend in the first week of Jan so possible applicants can get a sense before the deadline (Jan 15th).
Careers Questions Open Thread

Heya, 


I am German, lead an EA group in the UK, and do EA career coaching there. I am personally interested in the policy side, but I am happy to talk through your cause prioritisation with you and think about good jobs in Germany. If you are interested, PM me :)


https://www.linkedin.com/in/charlotte-siegmann/

Andreas Mogensen's "Maximal Cluelessness"

Sorry, I don't have the time to comment in depth. However, I think that if one agrees with cluelessness, then this doesn't offer an objection; one might even extend their worries by saying that almost everything has "asymmetric uncertainty". I would be interested in an elaboration of your last sentence, "They are extremely unlikely and thus not worth bearing in mind". Why is this true?

2MichaelA1yWhen I said "I think both of these "stories" I've told are extremely unlikely, and for practical purposes aren't worth bearing in mind", the bolded bit meant that I think a person will tend to better achieve their goals (including altruistic ones) if they don't devote explicit attention to such (extremely unlikely) "stories" when making decisions. The reason is essentially that one could generate huge numbers of such stories for basically every decision. If one tried to explicitly think through and weigh up all such stories in all such decision situations, one would probably become paralysed. So I think the expected value of making decisions before and without thinking through such stories is higher than the expected value of trying to think through such stories before making decisions. In other words, the value of information [https://en.wikipedia.org/wiki/Value_of_information] one would be expected to get from spending extra time thinking through such stories is probably usually lower than the opportunity cost of gaining that information (e.g., what one could've done with that time otherwise).
2MichaelA1yDisclaimer: Written on low sleep, and again reporting only independent impressions [https://www.overcomingbias.com/2008/04/naming-beliefs.html] (i.e., what I'd believe before updating on the fact that various smart people don't share my views on this). I also shared related thoughts in this comment thread [https://forum.effectivealtruism.org/posts/uGt5HfRTYi9xwF6i8/3-suggestions-about-jargon-in-ea?commentId=mXm3KbwsBCszdZ9hT#comments] . I agree that one way someone could respond to my points is indeed by saying that everything/almost everything involves complex cluelessness, rather than that complex cluelessness isn't a useful concept. But if Greaves introduces complex cluelessness by juxtaposing it with simple cluelessness, yet the examples of simple cluelessness actually meet their definition of complex cluelessness (which I think I've shown), I think this provides reason to pause and re-evaluate the claims. And then I think we might notice that Greaves suggests a sharp distinction between simple and complex cluelessness. And also that she (if I recall correctly) arguably suggests homogeneity within each type of cluelessness - i.e., suggesting all cases of simple cluelessness can be dealt with by just ignoring the possible flow-through effects that seem symmetrical, while we should search for a type of approach to handle all cases of complex cluelessness. (But this latter point is probably debatable.) And we might also notice that the term "cluelessness" seems to suggest we know literally nothing about how to compare the outcomes. Whereas I've argued that in all cases we'll have some information relevant to that, and the various bits of information will vary in their importance and degree of uncertainty. So altogether, it would just seem more natural to me to say: * we're always at least a little uncertain, and often extremely uncertain, and often somewhere in between * in theory, the "correct" way to reason is basically expected value theory, using
Andreas Mogensen's "Maximal Cluelessness"

Re: your lady example: as far as I know, the recent papers (e.g. here) provide the following example: (1) either you help the old lady on a Monday or on a Tuesday (you must and can do exactly one of the two options). In this case, your examples for CC1 and CC2 don't hold. One might argue that the previous example was maybe just a mistake, and I find it very hard to come up with CC1 and CC2 for (1) if (supposedly) you don't know anything about Mondays or Tuesdays.

2MichaelA1yInteresting. I wonder if the switch to that example was because they had a similar thought to mine, or read that comment. But I think I can make a similar point with the Monday vs Tuesday example. I also predict I could make a similar point with respect to any example I'm given. This is because I do know things about Mondays and Tuesdays in general, as well as about other variables. If the papers argue we're meant to artificially suppose we know literally nothing at all about a given variable, that seems weird or question-begging, and irrelevant to actual decision-making. (Note that I haven't read the recent paper you link to.) I could probably come up with several stories for the Monday vs Tuesday example, but my first thought is to make it connect to my prior stories so it can reuse most of the reasoning from there, and to do that via social media. Above, I wrote: This [https://www.socialmediatoday.com/news/the-best-times-to-post-on-social-media-in-2020-report/574045/] says people tend to use social media more on Tuesday and Wednesday than on Monday. I think we therefore have some reason to believe that, if I help an old lady cross the road on Tuesday rather than Monday, it's slightly more likely that someone will post about that on social media, and/or use social media in a slightly more altruistic, kind, community-spirit-y way than they otherwise would've. (Because me doing this on Tuesday means they're slightly more likely to be on social media while this kind deed is relatively fresh in their minds.) This could then further spread those norms (compared to how much they'd be spread if we helped on Monday), and we could tell a story about how that ripples out further etc. Above, I also wrote: I would now say exactly the same thing is true for the Monday vs Tuesday example, given my above argument for why the norms might be spread more if we help on Tuesday rather than Monday. (We could also probably come up with stories related to amounts of traffic on M
Has anyone gone into the 'High-Impact PA' path?

Sorry about the late answer. I just wanted to say that I also upvoted your comment because I would be very interested in a longer piece on being an RA.

AMA: Tobias Baumann, Center for Reducing Suffering

What is the most likely reason that s-risks are not worth working on?

8Jonas Vollmer1yPaul Christiano talks about this question in his 80,000 Hours podcast [https://80000hours.org/podcast/episodes/paul-christiano-a-message-for-the-future/#s-risks] , mainly saying that s-risks seem less tractable than AI alignment (but also expressing some enthusiasm for working on them).

Apart from the normative discussions relating to the suffering focus (cf. other questions), I think the most likely reasons are that s-risks may simply turn out to be too unlikely, or too far in the future for us to do something about at this point. I do not currently believe either of those (see here and here for more), and hence do work on s-risks, but it is possible that I will eventually conclude that s-risks should not be a top priority for one of those reasons.

AMA: Tobias Baumann, Center for Reducing Suffering

How did you figure out that you prioritize the reduction of suffering?

I am interested in your personal life story and in the most convincing arguments or intuition pumps.

6Tobias_Baumann1yI was exposed to arguments for suffering-focused ethics from the start, since I was involved with German-speaking EAs (the Effective Altruism Foundation / Foundational Research Institute back then). I don’t really know why exactly (there isn’t much research on what makes people suffering-focused or non-suffering-focused), but this intuitively resonated with me. I can’t point to any specific arguments or intuition pumps, but my views are inspired by writing such as the Case for Suffering-Focused Ethics [https://longtermrisk.org/the-case-for-suffering-focused-ethics/], Brian Tomasik’s essays [https://reducing-suffering.org/], and writings by Simon Knutsson [https://www.simonknutsson.com/writing] and Magnus Vinding [https://magnusvinding.com/2020/05/31/suffering-focused-ethics-defense-and-implications/] .
The case of the missing cause prioritisation research

Thank you very much for writing this up. However, I am not sure I understand your point. What are you referring to in:

"3. Policy and beyond – not happening – 2/10"? Are you referring to your explanation within the subsection on The Parliament? Then this would make sense to me.

2weeatquince1yYes that is correct. I have made some edits to clarify.
What questions would you like to see forecasts on from the Metaculus community?

Another operationalisation would be to ask to what extent the 80k top career recommendations have changed, e.g. what percentage of the current top recommendations will still be in the top recommendations in 10 years.

5alexrjl1yThis question is now open. How many of the "priority paths" identified by 80,000hours will still be priority paths in 2030? [https://www.metaculus.com/questions/4912/how-many-of-the-priority-paths-identified-by-80000hours-will-still-be-priority-paths-in-2030/]
2alexrjl1yI really like this and will work something out to this effect
Call for feedback and input on longterm policy book proposal

Hi Maxime and Konrad, thank you for your work and the post.

I have a question with regard to the structure of the book. It seems, from your summary and the longer description, that chapters 2 and 3 (and perhaps 4) are quite distinct from chapters 1, 4, and 5. While the former are focused on policymaking/lobbying etc. in general (taking short-termist situations and longtermist problems as examples), the other three are more specifically about longtermist policies. Please correct me if I am wrong. Why did you decide to include them in the same publication? It seems to me that a po... (read more)

7Jamie_Harris1yAdmittedly I read Charlotte's comment before reading the full proposal but my main thoughts were: (1) Everything in the book looks really interesting and exciting and I'd be keen to read (or give more feedback on) the specifics in each chapter. (2) It didn't seem like the content of the different chapters was very clearly linked together. That's not necessarily a bad thing, since some books are structured like that (e.g. edited academic books, textbooks etc) but seems unusual for a short, self/co-authored book?

Yeah I was really surprised by this as well. As someone who already works in policy, I would be disappointed to pick up a book about long-termist policy making and find out that it's just explaining how my job works!

Even chapter 5 doesn't seem very clearly focused on long-termist policy rather than policy generally from this table of contents, but I'm probably not understanding the nuances.

[Open Thread] What virtual events are you hosting that you'd like to open to the EA Forum-reading public?

Copying Catherine's message from the Group Organizers Slack:

  • The international EA events calendar, curated by several group organisers, lists high-quality speaker events and large social events suitable for large groups of anyone interested in EA. To add an event, fill out this form.
  • For a wide range of online events, check out the EA online events Facebook group - feel free to share events you’d like to promote to a wider audience here.
  • Special interest events are being run by some interest area groups that can be found here: https://resources.
... (read more)
COVID-19 brief for friends and family

I don't know whether this is the right place to post this, but why do we care about the risk of the coronavirus to us as EAs? Why are people thinking about cancelling EAG or other local meetings?

(Are we concerned for selfish reasons, or because this indirectly reduces the extent to which the virus spreads?)

If we believe that a young healthy person has a 0.5 percent chance of dying from the virus, that 5 percent of the world will be infected in expectation, and that all these actions (cancellation of EA events) reduce my chance of being infected by 5 percent (a rough version of this arithmetic is sketched below):

(This seems super optim... (read more)

[This comment is no longer endorsed by its author]
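To spell out the back-of-the-envelope arithmetic the comment above gestures at, here is a minimal sketch using only the comment's own illustrative assumptions (these are the commenter's hypothetical figures, not real epidemiological estimates):

```python
# Micromort arithmetic using the comment's illustrative assumptions
# (not real epidemiological estimates).

MICROMORT = 1e-6  # one micromort = a one-in-a-million chance of death

ifr_young = 0.005          # assumed chance a young healthy person dies if infected (0.5%)
p_infected = 0.05          # assumed chance of being infected in expectation (5%)
relative_reduction = 0.05  # assumed infection-risk reduction from cancelling events (5%)

baseline_death_risk = p_infected * ifr_young             # 0.00025
risk_averted = baseline_death_risk * relative_reduction  # 0.0000125

print(f"Baseline risk: {baseline_death_risk / MICROMORT:.0f} micromorts")     # 250
print(f"Averted by cancellation: {risk_averted / MICROMORT:.1f} micromorts")  # 12.5
```

Under these assumptions, cancellation averts roughly 12.5 micromorts per person, in the same ballpark as the "more than 10 micromorts" figure in the reply below.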
6eca2yYeah, it's a good point. On personal risk: a calculation I am stealing from a friend (who I believe does not want credit) suggests a young person's risk after catching is around 1000 micromorts (based on ~.1% young healthy person's IFR). This is doubling or tripling your risk of dying in a given year. See also Beth's comment about chronic fatigue, and note the unknown immunity period etc. I'm not super psyched about those personal risks (if I were to catch it). This stands if you take best-guess (median) parameters for things. It seems like if we were to actually propagate uncertainty over the values of parameters like per-age IFR, long-term follow-on conditions like chronic fatigue, infection risk in location of origin, infection risk in San Francisco, infection risk from domestic and international air travel, the posterior distribution looks pretty different. In particular, I'd guess a mildly risk averse (say 75th percentile) decision point would say that cancelling EAG saves a fair bit more than 10 micromorts per person, given how bad current information is. Other random things: -SF seems a likely place for an early outbreak given community transmission was first documented in Nor Cal and East Asia travel links -There might be some signalling benefit -EAs probably have higher risk of infecting other EAs outside the conference -Conference attendees are generally young but some may be at much higher personal risk because of age or comorbidities. I don't know if these points are conclusive. On a meta-level, my doc is really intended for friends and family and is not trying to weigh in on this point.

Datapoint (my general considerations/thought processes around this, feeding into case-by case decisions about my own activities rather than a blanket decision): I am (young healthy male) pretty unconcerned personally about risk to myself individually; but quite concerned about becoming a vector for spread (especially to older or less robust people). While I have a higher-than-some-people personal risk tolerance, I don't like the idea of imposing my risk tolerance on others. Particularly when travelling/fatigued/jetlagged, I'm not 100% sure I trus... (read more)

What posts you are planning on writing?

I am planning on writing a post summarizing the existing discussion of information cascades in EA, the different forms they take, and possible ways to do something against them. Lastly, I discuss why the concept of the information cascade might be disadvantageous. I would be interested in comments on the draft.

Space governance is important, tractable and neglected

I think I updated towards "maybe it would be useful for this cause area to be analysed in great depth". Is this planned at the moment? Perhaps interviewing experts, etc.?

Do you think that it might be important to develop clear guidelines on what is meant by the first article of the Outer Space Treaty: "The exploration and use of outer space, including the moon and other celestial bodies, shall be carried out for the benefit and in the interests of all countries, irrespective of their degree of economic or scientific development, and shall be the province of all man... (read more)

3MichaelA2y(Just FYI, your comment doesn’t seem to have a link to the podcast mentioned.)
[Part 1] Amplifying generalist research via forecasting – models of impact and challenges

Interesting idea about the "driver's license" for rationality.

You suggest that EA student groups should run tournaments. I would be interested in your reasoning. Why do you think this is better than encouraging people to join foretold.io as individuals? Do you think that we are lacking an institution or platform which helps individuals get up to speed and interested in forecasting (so that they are good enough that foretold.io provides a positive experience)? Or do you think that these tournaments would be good signaling for students applyin... (read more)

5jacobjacob2yI'm not sure if the group should fully run the tournaments, as opposed to just training a local team, or having the group leader stay in some contact with tournament organisers. Though I have an intuition that some support from a local group might make things better. A similar case might be sports. Even though young children might start skiing with their parents, they often eventually join local clubs. There they practice with a trainer and older children, and occasionally travel together to tournaments. Eventually some of the best skiers move on to more intense clubs with more dedicated training regimes. Trying to cash out the intuition more concretely, some of the things the local group might provide are: * Teammates. For motivation, accountability and learning. * A lower threshold for entering. * Team leaders. Someone to organise the effort and who can take lots of schleps out of the process (e.g. when I did math competitions in high school I met some kids from the more successful schools, and they would have teachers who were more clued in to when the competitions happened and who would pitch it to new students, book rooms for participants and provide them with writing utensils, point them to resources to learn more, etc) I don't think this list is exhaustive. Yes, I think they would be.
Community vs Network

(Thank you for writing this; my comment is related to Denkenberger's.) A consideration against the creation of groups around cause areas if they are open to younger people (not only senior professionals) who are also new to EA (the argument does not hold if those groups are only for people who are very familiar with EA thinking; of course, among other things, such groups could also make work and coordination more effective):

It might be that this leads to a misallocation of people and resources in EA as the cost of switching focus or careers increases with this net... (read more)

4DavidNash2yI think when creating most groups/sub-communities it's important that there is a filter to make sure people have an understanding of EA, otherwise it can become an average group for that cause area rather than a space for people who have an interest in EA and that specific cause, and are looking for EA related conversations. I think most people who have an interest in EA also hold uncertainty about their moral values, the tractability of various interventions and which causes are most important. It can be easy sometimes to pigeonhole people with particular causes depending on where they work or donate but I don't meet many people who only care about one cause, and the EA survey had similar results [https://forum.effectivealtruism.org/posts/hP6oEXurLrDXyEzcT/ea-survey-2018-series-cause-selection] . If people are able to come across well reasoned arguments for interventions within a cause area they care about, I think it's more likely that they'll stick around. As most of the core EA material (newsletters, forum, FB) has reference to multiple causes, it will be hard to avoid these ideas. Especially if they are also in groups for their career/interests/location. I think the bigger risk is losing people who instantly bounce from EA when it doesn't even attempt to answer their questions rather than the risk of people not getting exposed to other ideas. If EA doesn't have cause groups then there's probably a higher chance of someone just going to another movement that does allow conversation in that area. This quote from an 80,000 Hours interview with Kelsey Piper [https://80000hours.org/podcast/episodes/kelsey-piper-important-advocacy-in-journalism/] phrases it much better. "Maybe pretty early on, it just became obvious that there wasn’t a lot of value in preaching to people on a topic that they weren’t necessarily there for, and that I had a lot of thoughts on the conversations people were already having. Then I think one thing you can do to share any reasoning