All of tylermjohn's Comments + Replies

I haven't tried this, but I'm excited about the idea! Effective Altruism as an idea seems unusually difficult to communicate faithfully, and creating a GPT that can be probed on various details and correct misconceptions seems like a great way to increase communication fidelity.

On your future directions / tentative reflections (with apologies that I haven't looked into your model, which is probably cool and valuable!):

To the extent that we think this is relevant for things like lock-in and x-risk prioritisation we need to also think that current trends are predictive of future trends. But it's not at all clear that they are once you take into account the possibility of explosive growth a la https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/. Moreover, worlds where there is explosive growth have way mor... (read more)

I've only skimmed the essay but it looks pretty good! Many of the ideas I had in mind are covered here, and I respond very differently to this than to your post here.

I don't know what most EAs believe about ethics and metaethics, but I took this post to be about the truth or desirability of these metaethical, ethical, and methodological positions, not whether they're better than what most EAs believe. And that's what I'm commenting on here.

1
spencerg
6mo
Cool, thanks for checking it out! I'll update the post slightly to make it clearer that I'm talking about beliefs rather than the truth.

Hi Spencer and Amber,

There's a pretty chunky literature on some of these issues in metaethics, e.g.:

  • Moral fictionalism, or why it could make sense to talk in terms of moral truths even if there aren't any
  • Moral antirealism/constructivism, or why there can be moral "shoulds" and "oughts" even if these are just mental attitudes
  • Why, even if you're a pluralist, utilitarian considerations dominate your reasoning on a range of psychologically typical value systems, given how much welfare matters to people compared to other things and how much we can affect it
... (read more)

Hi Tyler, thanks for your thoughts on this! Note that this post is not about the best philosophical objections, it's about what EAs actually believe. I have spoken to many EAs who say they are utilitarian but don't believe in objective moral truth (or think that objective moral truth is very unlikely), and what I'm responding to in this post is what those people say about what they believe and why. I have also spoken to Jeff Sebo about this!

 

In points 1 and 2 in this post, namely, "1. I think (in one sense) it’s empirically false to say th... (read more)

I'd be pretty excited to see a new platform for retail donors giving to x-risk charities. For this, you'd want to have some x-risk opportunities that are highly scalable (can do ≥ $10m p.a., will execute the project over years reliably without intervention or outside pressure), measurable (you can write out a legible, robust, well-quantified theory of change from marginal dollars to x-risk), have a pretty smooth returns curve (so people can have decent confidence that their donations have the returns that they expect, whether they are retail donors or a ... (read more)

Noting that I think that making substantive public comments on this draft (including positive comments about what it gets right) is one of the very best volunteer opportunities for EAs right now! I plan to send a comment on the draft before the deadline of 6 June.

Thanks Sam! I don't have much more to say about this right now since on a couple things we just have different impressions, but I did talk to someone at 80k last night about this. They basically said: some people need the advice Tyler gave, some people need the advice Sam gave. The best general advice is probably "apply broadly": apply to some EA jobs, to some high-impact jobs outside of EA, to some upskilling jobs, etc. And then pick the highest EV job you were accepted to (where EV is comprehensive and includes things like improvements to your future career from credentialing and upskilling).

Hi readers! I work as a Programme Officer at a longtermist organisation. (These views are my own and don't represent my employer!) I think there's some valuable advice in this post, especially about not being constrained too much by what you majored in. But after running several hiring rounds, I would frame my advice a bit differently. Working at a grantmaking organisation did change my views on the value of my time. But I also learned a bunch of other things, like:

  1. The majority of people who apply for EA jobs are not qualified for them.
  2. Junior EA talent
... (read more)
4
Sam Anschell
1y
Thank you for this thoughtful comment, Tyler - I appreciate your perspective and I think it will help readers improve their decision-making.

On point 1: I suspect this is less true for entry level roles, especially those that don’t specify an advanced degree or technical skill requirement. But it’s valuable to know that this was your experience when reviewing applications, and this updates my opinion.

On point 2: I agree there are more early career EAs looking for EA jobs than entry level EA job openings at any given time, but I disagree with the conclusion that early career EAs should apply to fewer EA jobs.

  • It seems hard to get the same level of mentorship and relevant skills at most “non-EA” orgs. If you’re working as e.g. an SWE for FAANG, your employer’s incentives are to invest in your professional development insofar as that increases your productivity and job satisfaction (for retention). If you’re working as e.g. a researcher for the Center for Global Development, your employer’s incentives are to invest in your professional development to maximize your lifetime impact on your & CGD’s shared mission (agnostic as to whether you achieve that impact while working at CGD or elsewhere).
  • You might be a great culture fit, or have a particularly relevant background that headhunters or EA orgs wouldn’t know about unless you actively apply.
  • Of the current entry level roles on 80k’s job board, there is a pretty diverse array of functions. I agree that applying to all 224 wouldn’t be a good use of time, but regularly checking back in and applying to promising leads seems like a good idea to me (especially keeping an eye out for jobs matching any specialized background/knowledge base you might have, like in policy, academia, development economics, ML, infectious disease, etc.).

I personally made the mistake of applying to a bunch of stuff all at once, feeling disappointed about not getting anything, and giving up until I felt motivated to a

I'm not sure if this fits your concept, but it might be helpful to have a guidebook that caters specifically to new EAs, to help give guidance to people excited about the ideas but unsure how to put them into practice in daily life, in order to convert top of funnel growth into healthy middle of funnel growth. This could perhaps be paired with a more general-audience book that appeals to people who are antecedently interested in the ideas.

A couple things I'd like to see in this are the reasoning transparency stuff, guidance on going out and getting skills outside of the EA community to bring into the community, anti-burnout stuff, and various cultural elements that will help community health and epistemics.

I've just shared the survey. I think it would be useful if the survey included more information on who will use it, to whom the data will be available, who is running the survey, and the like.

That argument would be seen as too weak in the political theory context. Then powerful states would have to enfranchise everyone in the world and form a global democracy. It also is too strong in this context, since it implies global democratic control of EA funds, not community control.

2
zdgroff
1y
I guess I would think that if one wants to argue for democracy as an intrinsic good, that would get you global democracy (and global control of EA funds), and it's practical and instrumental considerations (which, anyway, are all the considerations in my view) that bite against it.

I think it could make sense in various instances to form a trade agreement between people earning and people doing direct work, where the latter group has additional control over how resources are spent.

It could also make sense to act like that trade agreement which was not in fact made was in fact made, if that incentivises people to do useful direct work.

But if this trade has never in fact transpired, explicitly or tacitly, I see no sense in which these resources "are meaningfully owned by the people who have forsaken direct control over that money in order to pursue our object-level priorities."

Also, the (normative, rather than instrumental) arguments for democratisation in political theory are very often based on the idea that states coerce or subjugate their members, and so the only way to justify (or eliminate) this coercion is through something like consent or agreement. Here we find ourselves in quite a radically different situation.

It seems like the critics would claim that EA is, if not coercing or subjugating, at least substantially influencing something like the world population in a way that meets the criteria for democratisation. This seems to be the claim in arguments about billionaire philanthropy, for example. I'm not defending or vouching for that claim, but I think whether we are in a sufficiently different situation may be contentious.

Much as I am sympathetic to many of the points in this post, I don't understand the purpose of the section, "Can you demand ten billion dollars?". As I understand the proposal to democratise EA it's just that: a proposal about what, morally, EA ought to do. It certainly doesn't follow that any particular person or group should try to enforce that norm. So pointing out that it would be a bad idea to try to use force to establish this is not a meaningful criticism of the proposal.

7
Nick Whitaker
1y
My apologies if this proves uncharitable. I interpreted Carla Zoe's classification of this proposal as potentially endorsing grassroots attempts to democratize EA funding without funder buy-in. I do find the general ambiguity frustrating. But if no one interested in reform would endorse a strategy like this, it's simply my mistake.

I'd love to hear what you think we'd be doing differently. With JackM, I think if we thought that hinginess was pretty evenly distributed across centuries ex ante we'd be doing a lot of movement-building and saving, and then distributing some of our resources at the hingiest opportunities we come across at each time interval. And in fact that looks like what we're doing. Would you just expect a bigger focus on investment? I'm not sure I would, given how much EA is poised to grow and how comparably little we've spent so far. (Cf. Phil Trammell's disbursement tool https://www.philiptrammell.com/dpptool/)

Strong agree. All of the evidence cited in this post is about philosopher-bioethicists, and my experience working in bioethics (including at the NIH Department of Bioethics) says that philosopher-bioethicists are much more progressive than bioethicists with a health background. And unfortunately, bioethicists with a health background have much stronger ties to the medical community and health care policy. One major piece of evidence for this is that none of the "bioethicists" mentioned in this post (other than Art Caplan) are members of the American Society... (read more)

1
Devin Kalish
1y
As I'm revisiting this post, I'm going to break with my no-comment policy again. This time I don't have a very good excuse, this comment just sort of sits in my head rent-free, and I keep wanting to address it.

On the one hand, I think your broad point is right, my evidence is more weighted towards the philosopher bioethicists than the medical bioethicists, and I don't really distinguish the two in my post. This might full well make an important difference to several of the points in my piece, though I'm not sure what sort of difference in particular you think it makes (do you think the medical bioethicists are more bioconservative on average than the general public as well as the philosophers? Do you think this is the primary reason for current problems in the bioethics bureaucracies?).

On a somewhat more petty level, I'm bothered by how you say all of my evidence is specific to philosophers. The philpapers survey certainly is, and the figures I cite from the 1DaySooner letter, but the two pieces of evidence I bring up that I consider strongest don't seem to be. The program I surveyed (my MA) has a mix of students from both a medical and philosophical background, and is even in NYU's School of Global Public Health rather than its philosophy school. As for Bensinger's literature review, if I had time I would go through all of the authors to check how many are from more of a philosophy versus medical background (and I encourage anyone interested to report the results back to me), but I think they are a mix. I don't want to lean on this too much though.

Again, your basic point holds, that my evidence is philosophy leaning, and it is fully possible to me that the split is characterized by above average philosopher bioethicists canceling out below average medical bioethicists in the aggregates, and the medical ones having more influence. I just don't know personally.

This is great and under-emphasized. I think it was @weeatquince who told me that the primary determinant of what gets implemented by governments is what has successfully been tried before, and while I haven't seen much empirical data on this it strikes me as plausible.

One counter-point comes from Michael Rose's book Zukünftige Generationen in der heutigen Demokratie, which finds that low institutional path-dependence (approximated by the rate of recent constitutional changes) had no effect on the institutionalization of powerful proxies for ... (read more)

1
ac
3y
Thanks for this Tyler! The references are great, I wasn't aware of them. Re the first, how exactly do you think low institutional path-dependence and institutional innovativeness interact? They seem like related but distinct concepts to me. I agree that it would be great to see more research on those questions, though I wonder if a thorough review of the policy diffusion literature might be sufficient. I definitely would like a clearer characterization of governmental innovativeness; I felt kind of hand-wavey in this post.

Thanks for posting this here as well as Jess's excellent questions! This seems like a nice place to continue the conversation around the paper, so I'll respond to what I take to be the most pertinent issues in the blog post here. As Jess notes, this is a relatively early attempt to formulate these ideas and the literature on longtermist institutional reform is extremely young, so the more conversation the better.


How will (short-term) vested interests try to capture these in-government research groups, and how will that be prevented? Why is this b... (read more)

5
Larks
4y
Hey Tyler, thanks very much for engaging, and for working on this very important topic.

I was a little surprised you didn't spend more time arguing for Citizens' Assemblies and Sortition in general. While in your comment you mention they have been used a bit, it seems they have only been used for a tiny fraction of all decisions. If they were so advantageous, we might have expected private companies to take advantage of them in decision making, or governments to make widespread use, but as far as I'm aware their use by both is very small. I'm not aware of any major software or engineering projects being designed by sortition, or any military using it to decide strategy and tactics. Presumably this is because a randomly chosen decision-making body will be made up of less conscientious, less knowledgeable, and less intelligent people than a body specifically chosen for these traits. Given what we know about the importance of mental acuity in decision making, it seems that we should be wary of any scheme that deliberately neglects any selection on this basis.

I worry that citizens' assemblies will end up favouring the views whose partisans have the most rhetorical skill and the most fashionable beliefs. In a representative system, disengaged people can rely on highly skilled representatives to defend their position. In an assembly, those with complicated but sound arguments might be at a disadvantage compared to those with higher status or more memetically powerful slogans, even if the latter are false.

You highlight the long remaining life expectancy of the members as a motivation for them to be longtermist, but this seems quite imperfect. In particular, it causes them to be disproportionately motivated by the interests of older people the further out in time you go, with little direct reason they should be concerned about the welfare of future cohorts at all. In particular, the paper mentions the 2016 Irish assembly as a positive example, but it seems to actu

Ah, it looks like I read your post to be a bit more committal than you meant it to be! Thanks for your reply! And sorry for the misnomer, I'll correct that in the top-level comment.

Hi Tobias,

I'm glad to see CRS take something of an interest in this topic and I'm particularly happy to see some meta-level discussion of representing the interests of future generations which has been sorely missing from the longtermism space.

We are in full agreement that most extant proposals to represent future generations involve very weak institutions and often rely on tenuous political commitments. In fact, it's because political commitments are so tenuous that political institutions to represent future generations must at first be wea... (read more)

4
Tobias_Baumann
4y
Hi Tyler, thanks for the detailed and thoughtful comment!

Yeah, I agree that there are plenty of reasons why institutional reform could be valuable. I didn't mean to endorse that objection (at least not in a strong form). I like your point about how longtermist institutions may shift norms and attitudes. I mostly had the former in mind when writing the post, though other attempts to ameliorate short-termism are also plausibly very important.

Might just be a typo but this post is by CRS (Center for Reducing Suffering), not CLR (Center on Long-Term Risk). (It's easy to mix up because CRS is new, CLR recently re-branded, and both focus on s-risks.)

Looking forward to reading it!

Thanks for clarifying all of this! Given that most questions are optional I no longer have this concern, and I'm glad that you've clarified this on the application.

Much looking forward to seeing you there as well!

Thanks so much to those involved in organizing! I wanted to share that I found the registration process (with its 40 or so questions, many requiring detailed information) quite onerous and I can imagine that it might deter some people from submitting completed applications. While this might sometimes be useful for a physical conference, to ration spots in part on the basis of the amount of effort put in, I can't as easily see how it would be useful for a virtual conference. But I may simply be insufficiently creative!

9
Amy Labenz
4y
Thanks for the feedback, Tyler!

To clarify: most of these questions aren’t actually *application/registration* questions for the event. Rather, they are meant to help us gather information about the community, and most are optional. I notice that you applied on May 12th - we have since gotten feedback that we should clarify that point, so we tried to make a clear distinction between the small number of application questions and the larger number of information-gathering questions. Half of the attendees got an especially long application form and a shorter registration form, where the other half got a long registration form and a shorter application form. We hope to use this information to understand the different types of users that we attract and what kinds of content and interactions provide the most value, to help us get a better sense of the value EA Global and CEA as a whole provide the community.

I hope the length of the application doesn’t serve as too much of a deterrent. We don’t have many barriers to entry for this event (ticket prices are on a sliding scale starting at $5, and we expect to admit most applicants who aren’t completely new to EA), and we are hoping it can be the biggest event yet. So far, we have more than 1000 applications, so it looks like we could be on track! I look forward to seeing you there.

See also my 2018 EAG talk on shaping the long-term future through antispeciesist legislative initiatives. Most of the relevant discussion starts at 8:40.

https://youtu.be/0RznIFm_Ee4

While I at the time thought the dominant beneficial effect would be through AGI alignment, I now think that we should think of these interventions as improving the value alignment of humanity and our descendants in general.

And cf. my and Jeff Sebo's paper on the indirect effects of eating meat and farming animals on human moral psychology and its importance for consequentialists:


... (read more)

Thanks! I appreciate your wariness of overemphasizing precise numbers and I agree that it is important to hedge your estimates in this way.

However, none of the claims in the bullet you cite give us any indication of the expected value of each intervention. For two interventions A and B, all of the following is consistent with the expected value of A being astronomically higher than the expected value of B:

  • B is better than A in most of the most plausible scenarios
  • On most models the difference in cost-effectiveness is small (within 1 or 2 orders of magnitude)
... (read more)
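A toy numerical sketch of this point (the numbers are made up purely for illustration, not drawn from either intervention's actual models): B can beat A in 99% of scenarios, with only modest gaps in most of them, and A's expected value can still dwarf B's because of one low-probability, astronomically-high-value scenario.

```python
# Each scenario: (probability, value of A, value of B). Hypothetical numbers.
scenarios = [
    (0.70, 1.0, 10.0),    # most plausible scenario: B is 10x better
    (0.29, 5.0, 50.0),    # B is 10x better here too
    (0.01, 1e9, 100.0),   # rare scenario where A is astronomically valuable
]

# Expected values: probability-weighted sums over scenarios.
ev_a = sum(p * a for p, a, _b in scenarios)
ev_b = sum(p * b for p, _a, b in scenarios)

# B is better in 99% of scenarios by probability mass...
share_b_better = sum(p for p, a, b in scenarios if b > a)

# ...yet A's expected value is several orders of magnitude higher,
# driven entirely by the 1% tail scenario.
print(f"E[A] = {ev_a:.1f}, E[B] = {ev_b:.1f}, ratio = {ev_a / ev_b:.0f}")
```

This is why "B wins in most plausible scenarios" and "the gap is usually small" are compatible with A having vastly higher expected value: the tail does all the work.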
2
AidanGoth
4y
Thanks, this is a good criticism. I think I agree with the main thrust of your comment but in a bit of a roundabout way.

I agree that focusing on expected value is important and that ideally we should communicate how arguments and results affect expected values. I think it's helpful to distinguish between (1) expected value estimates that our models output and (2) the overall expected value of an action/intervention, which is informed by our models and arguments etc. The guesstimate model is so speculative that it doesn't actually do that much work in my overall expected value, so I don't want to overemphasise it. Perhaps we under-emphasised it though. The non-probabilistic model is also speculative of course, but I think this offers stronger evidence about the relative cost-effectiveness than the output of the guesstimate model. It doesn't offer a precise number in the same way that the guesstimate model does, but the guesstimate model only does that by making arbitrary distributional assumptions, so I don't think it adds much information.

I think that the non-probabilistic model offers evidence of greater cost-effectiveness of THL relative to AMF (given hedonism, anti-speciesism) because THL tends to come out better and sometimes comes out much, much better. I also think this isn't super strong evidence but that you're right that our summary is overly agnostic, in light of this.

In case it's helpful, here's a possible explanation for why we communicated the findings in this way. We actually came into this project expecting THL to be much more cost-effective, given a wide range of assumptions about the parameters of our model (and assuming hedonism, anti-speciesism), and we were surprised to see that AMF could plausibly be more cost-effective. So for me, this project gave an update slightly in favour of AMF in terms of expected cost-effectiveness (though I was probably previously overconfident in THL). For many priors, this project should update the other way and

Thanks for doing this! Though it seems like you kinda buried the lede. Why isn't this in the top level summary?

  • In expectation, THL is >100x better than AMF
  • In the median scenario, THL is about 2-4x more cost-effective than AMF
  • A 71% chance that THL is more cost-effective than AMF

Thanks for raising this. It's a fair question but I think I disagree that the numbers you quote should be in the top level summary.

I'm wary of overemphasising precise numbers. We're really uncertain about many parts of this question and we arrived at these numbers by making many strong assumptions, so these numbers don't represent our all-things-considered-view and it might be misleading to state them without a lot of context. In particular, the numbers you quote came from the Guesstimate model, which isn't where the bulk of the wo... (read more)

On the topic of the outlier age group:

"If it really is the case that the 55 to 64 year old age group is an outlier as the more present-day-centric group, it suggests that a simple “rational” explanation (“why care about the future when I’ll be dead soon anyway”) might not be the best explanation. Other socio-cultural factors may be at play."

I can see two decent explanations for why the 55 to 64 age group would have less longtermist values than either adjacent age cohort.

The first is cohort effects. As the Pew Re... (read more)

More on the question of what best explains these trends:

http://eprints.lse.ac.uk/88702/1/dp1552.pdf

Ahlfeldt et al. analyze 305 Swiss referenda and argue that aging effects swing free from cohort effects and status quo habituation effects. "The evidence, instead, suggests that voters make deliberate choices that maximize their expected utility conditional on their stage in the lifecycle."

I think these trends are not better-explained by the hypothesis that older people are more conservative.

1. In the study, older voters were more likely to support health spending on risks to elderly health and less likely to support health care cost cuts, and less likely to support education spending, public transportation and infrastructure spending, and job creation. They were also neutral on the creation of sports facilities.

While I unfortunately haven't been able to look at the 82 referenda to examine their specific content, on its face this looks ... (read more)

Hi Sam,

This is helpful indeed. Thanks for the reply!

1. Good point on clarifying the timescale for the sake of the report. I think the timescale you define for the UK is about right for narrowing the scope of the institutions considered by the report. Then the "effectiveness" evaluation criterion can do the work of identifying which institutions are best by longtermist lights, ranking institutions cardinally as a function of, among other things, their temporal reach.

2. You did previously share your list with me and I'm glad you've reshar... (read more)

Edit: Upon revisiting I realized that I had already read this paper. It's one of the more useful things I've read in this area, so good nod.

Thanks! I've spoken to the APPG and seen some of their policy statements but I had not seen this particular paper. Super helpful.

It's worth noting that one important assumption here is that experts are pretty good at determining the counterfactual value of past policy decisions. I think this is right, but if we gave it up then no system like this one would be effective, since the feedback from future generations would be near-random. On the other hand, if the assumption is correct then there should be some feasible system that provides useful intergenerational feedback of the kind described here, though it may need to include a mechanism for increasing the influence of experts in the decision process.

Thanks!

On (1), I'm not currently considering any existing institutions, other than existing variants of the proposals mentioned. You're right that it would be useful to know which institutions we should preserve, and there also might be other things to learn from analyzing these institutions, such as what has worked well about them and what has kept them from working better. I'll have to consider adding these sorts of institutions.

On (2), that's definitely of concern to me in light of the fact that so many recently-adopted future-focused institutions have

... (read more)

I agree it will probably not change voter epistemic behavior. The thought was that it would change the epistemic behavior of the parties catering to voters and the representatives acting on behalf of the voters, since the voting rule will select for parties and representatives which are less short-termist. This of course can't be guaranteed—if parties are not motivationally longtermist but are merely trying to appease voters to hold power, for example, it won't change their epistemic incentives very much unless competing actors (parties, media) can demonstrate to young people that their plans are bad. But even in this case this seems plausible.

Thanks, I've looked at some of the inclusive wealth and natural capital accounting stuff a little bit and will continue to do so. Do you currently have any sense how useful this sort of accounting will be for general future generations issues (incl. catastrophic risks, positive moral & economic trajectories) beyond concerns related to environmental degradation?

5
cole_haus
4y
I like inclusive wealth quite a bit more than some of the other attempts I've seen because it seems like there's an appealing, coherent theory behind it. Given that, I think it extends fairly straightforwardly to other kinds of issues (the paper itself talks about human capital, manufactured capital, natural capital, and social capital) on a conceptual level. The only real requirement is that you be able to phrase things in terms of stocks and flows and assign values to these. I think the key difficulty for most additional things we'd like to add to inclusive wealth is settling on workable definitions and getting reliable measurements/data.

I am extremely interested in the question of how religions transmit ideas and values across many generations, but at the current moment I have no idea how they do this so successfully. If anyone has ideas or empirical sources on this I'd be quite keen to hear them.

7
Larks
4y
You might be interested in this (courtesy of Gwern): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1137090
3
Milan_Griffes
4y
Yeah, it's a great question. For Catholic stuff, The Great Heresies looks interesting, though old. (I haven't read it.) I have thoughts about Mahayana Buddhist value transmission. Probably best to DM about that. I bet Leah Libresco would have good thoughts on Catholic value transmission. Message me if an intro would be helpful.

Surprising (and confusing!) as it may be, there is some evidence that voters would vote differently with their Demeny vote than with their first vote.

I've asked Ben Grodeck (who clued me into Demeny voting) to weigh in with more data, but for now see this study from Japanese economist Reiko Aoki, who found (Table 8 and Figure 7) that surveyed participants who are permitted to cast one vote on behalf of themselves and one vote for their child sometimes vote differently on their second vote. The effect isn't drastic, but i... (read more)

I had the same intuition as RhysSouthan that most people who acquire the second vote in a Demeny voting structure would use the two votes for the same party/candidate/policy. I think an important facet here is that the salience of the vote being for the 'future generation' may nudge people on the margin to use both votes for the policy/party that best benefits the future generation, whereas without receiving the second vote they may not have voted this way. The Kochi University of Technology Research Institute of Future Design have some papers t... (read more)

Excellent. This is a much better idea than the "allow the 2119 people to decide whether to sentence the grandchildren of the 2019 political leaders to the tribunal of death" feedback mechanism that, disturbingly, came to me more readily.

It would be interesting to think about whether there are other feasible ways to see to it that the decisions of future people provide an incentive for the actions of present people.

Two concerns I have with this general kind of scheme are that it requires citizens to have lots of faith that the relevant institutions and the

... (read more)

Thanks, I agree that pinpointing whether these institutions target the epistemic vs motivational (vs other) determinants of short-termism will be important. One more reason to do this is that the best solutions will combine a multiplicity of institutions and policies to address all of the different sources of short-termism without reduplicating effort.

Also note that most institutions will do at least a little bit of both. The government think tank will also address some motivational failings by providing more government officials focused on the long-term a

... (read more)
2
Stefan_Schubert
4y
I agree that some institutions will do both. I'm not sure, though, that age-weighted voting will do much to change voters' tendency, weighted by voting power, to seek good information about the future.

There is also some direct evidence on voting. I think the best evidence is the paper that Will cites in his age weighted voting post. Ahfeldt et al. found that across 82 studied referenda, the elderly voted largely in their generational self-interest.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2753511

There are some complications. For example, there is some evidence that referenda are easier to manipulate via advertising campaigns than other polls, which might lead people to vote more in self-interest here than elsewhere.

I think this remains an open

... (read more)
8
Ramiro
4y
Wouldn't this trend be better explained by the hypothesis that older people are usually more conservative? (e.g., I just confirmed that, in Brazil, opinions about the government among young and old people are symmetrically opposite)

That's true, thanks for your comment. I didn't say this exactly, but some of the policies proposed above are suggested in what I think is the same spirit. E.g., adding the submajority delay rule or age quotas to these upper houses would plausibly make them more longtermist. If you have other specific ideas about ways of reforming legislative houses that make them more longtermist I would be quite interested to hear them.

This is false. Jacy was accused of sexual harassment at Brown, never sexual assault. Some members of this community have conflated Jacy's case with the case of another student, which for some reason shows up in google searches for Jacy's name. This is an understandable confusion, but it is a very bad confusion to continue spreading.

1
AnonEA
5y
Noted, thanks Tyler - that's an important distinction. I had retracted my comment anyway as the main point of it has been made elsewhere. I think my original comment still stands.

thanks for the clarification on (3), gregory. i exaggerated the strength of the valence on your post.

on (1), i think we should be skeptical about self-reports of well-being given the pollyanna principle (we may be evolutionarily hard-wired to overestimate the value of our own lives).

on (2), my point was that extinction risks are rarely confined to only human beings, and events that cause human extinction will often also cause nonhuman extinction. but you're right that for risks of exclusively human extinction we must also consider the impact of human extinction on other animals, and that impact - whatever its valence - may also outweigh the impact of the event on human well-being.

thanks, gregory. it's valuable to have numbers on this but i have some concerns about this argument and the spirit in which it is made:

1) most arguments for x-risk reduction make the controversial assumption that the future is very positive in expectation. this argument makes the (to my mind even more) controversial assumption that an arbitrary life-year added to a presently-existing person is very positive, on average. while it might be that many relatively wealthy euro-american EAs have life-years that are very positive, on average, it's highly questiona... (read more)

3
Alex_Barry
6y
I'm surprised by your last point, since the article says: This seems a far cry from the impression you seem to have gotten from the article. In fact your quote of "highly effective" is only used once, in the introduction, as a hypothetical motivation for crunching the numbers. (Since, a priori, it could have turned out the cost effectiveness was 100 times higher, which would have been very cost effective.) On your first two points, my (admittedly not very justified) impression is that the 'default' opinions people typically have are that almost all human lives are positive, and that animal lives are extremely unimportant compared to humans. Whilst one can question the truth of these claims, writing an article aimed at the majority seems reasonable. It might be that actually within EA the average opinion is closer to yours, and in any case I agree the assumptions should have been clearly stated somewhere, along with the fact he is taking the symmetric as opposed to the asymmetric view, etc.
5
Gregory Lewis
6y
1) Happiness levels seem to trend strongly positive, given things like the world values survey (in the most recent wave - 2014 - only Egypt had <50% of people reporting being either 'happy' or 'very happy', although in fairness there were a lot of poorer countries with missing data). The association between wealth and happiness is there, but pretty weak (e.g. Zimbabwe gets 80+%, Bulgaria 55%). Given this (and when you throw in implied preferences, and commonsensical intuitions whereby we don't wonder about whether we should jump in the pond to save the child because we're genuinely uncertain it is good for them to extend their life), it seems the average human takes themselves to have a life worth living. (q.v.) 2) My understanding from essays by Shulman and Tomasik is that even intensive factory farming plausibly leads to a net reduction in animal populations, given a greater reduction in wild animals due to habitat reduction. So if human extinction leads to another ~100M years of wildlife, this looks pretty bad by asymmetric views. Of course, these estimates are highly non-resilient even with respect to sign. Yet the objective of the essay wasn't to show the result was robust to all reasonable moral considerations, but that the value of x-risk reduction isn't wholly ablated on a popular view of population ethics - somewhat akin to how GiveWell analyses of cash transfers don't try to factor in poor-meat-eater considerations. 3) I neither 'tout' - nor even state - that this is a finding that 'x-risk reduction is highly effective for person-affecting views'. Indeed, I say the opposite:

Hi KelseyPiper, thanks so much for a thoughtful reply. I really agree with most of this - I was talking in terms of these benefits as "pure" benefits because I assumed the many costs you rightly point out up front. That is, assuming that we read Kelly's piece and we come away with a sense of the costs and benefits that promoting diversity and inclusion in the Effective Altruism movement will have, these benefits I've pointed out above are "pure" because they come along for free with that labor involved in making the EA community more in... (read more)

Thanks so much for this thoughtful and well-researched write-up, Kelly. The changes you recommend seem extremely promising and it's very helpful to have all of these recommendations in one place.

I think that there are some additional reasons that go beyond those stated in this post that increase the value of making the EA movement a more diverse and inclusive community. First, if the EA movement genuinely aspires to cause-neutrality, then we should care about benefits that accrue to others regardless of who these other people are and independent of what the causal ... (read more)

5
kbog
6y
Aside from the direct question of cause prioritization which has already been mentioned, I think it's bad to be explicitly self-serving. Even if the concept would technically work out in the grand calculus, it's better for social-moral reasons to not treat ourselves as ultimate ends. It runs counter to the idea of an altruist movement. The people who get bothered along these lines to such a degree - as in, they think negatively of EA for being "exclusionary" just because we don't do enough catering and decide to condemn it - are not a substantial proportion of media, academia, or the broad liberal political sphere. They are a small group of people who care more about tribal politics than they do about ethical work, and they won't turn around and cooperate just because you want to get along with them. In the long run, it's bad to fall victim to these kinds of heckler's vetoes. (The phrase "negotiating with terrorists" comes to mind.)
1
Chris Leong
6y
"Even if one thinks that this effect size will be very small compared to the good that the EA movement is doing" - I would like to hear why you believe that the effects that you mention in your first paragraph might be comparable to the direct good that we do. I mean, I would be rather surprised if this was the case, but I haven't heard your argument.

I just want to quickly call attention to one point: "these are still pure benefits" seems like a mistaken way of thinking about this - or perhaps I'm just misinterpreting you. To me "pure benefits" suggests something costless, or where the costs are so trivial they should be discarded in analysis, and I think that really underestimates the labor that goes into building inclusive communities. Researching and compiling these recommendations took work, and implementing them will take a lot of work. Mentoring people can have wonderful retur... (read more)

Thanks for sharing! That's good to know.

I have a good friend who is a thorough-going hedonistic act utilitarian and a moral anti-realist (I might come to accept this conjunction myself). He's a Humean about the truth of utilitarianism. That is, he thinks that utilitarianism is what an infinite number of perfectly rational agents would converge upon given an infinite period of time. Basically, he thinks that it's the most rational way to act, because it's basically a universalization of what everyone wants.

Yeah, I think you're all-around right. I'm less sure that my life over the past two years has been very good (my memory doesn't go back much farther than that), and I'm very privileged and have a career that I enjoy. But that gives me little if any reason to doubt your own testimony.

I agree that the life of an EA isn't going to be more important, even if saving that EA has greater value than saving someone who isn't an EA.

And if we're giving animals any moral weight at all (as we obviously should), the same can be said about people who are vegan.

Edited (after Tom A's comment): Maybe part of the problem is we're not clear here about what we mean by "a life". In my mind, a life is more or less important depending on whether it contains more or less intrinsic goods. The fact that an EA might do more good than a non-EA doesn't m... (read more)

0
tomstocker
9y
1). The way it reads it sounds like you're talking about intrinsic value to someone not used to these discussion