All of Pablo's Comments + Replies

How many lives has the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) saved?

Thanks! Coincidentally, I also found Dylan's article (as well as another study from 2015) and added an answer based on it, before seeing yours.

EDIT: Oh, I now see that you were linking to an earlier piece by Dylan from mid-2015, also published in Vox. The article in my answer is from late 2018.

How many lives has the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) saved?

Since writing the question, I found this study estimating the impact of PEPFAR during its first decade. It concludes that the program resulted in 11,560,114 life-years gained (p. 3). Rough linear extrapolation from the chart on p. 5 (though note that growth was superlinear in the reference period) would suggest an additional 25 million or so life-years were gained between 2014 and 2021, vindicating the "tens of millions of life-years" Open Phil estimate.
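(To spell out the arithmetic behind that extrapolation, with the caveat that the ~3 million/year slope is my own illustrative reading of the chart's end-of-period trend, not a figure reported in the study: if PEPFAR was adding roughly 3 million life-years per year by 2013, then 8 years × ~3 million/year ≈ 24 million additional life-years over 2014–2021, consistent with the ~25 million figure above.)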

Dylan Matthews points to another study finding 1.2 million deaths averted by PEPFAR by 2007. Matthews po...

Survival and Flourishing Fund

Should we split this entry and have separate articles for each of the two organizations?

[linkpost] Peter Singer: The Hinge of History

I agree with your assessment. It is interesting to note that Singer's comments are in response to Holden, who used to hold a similar view but no longer does (I believe).

The other part I found surprising was Singer's comparison of longtermism with past harmful ideologies. At least in principle, I do think that, when evaluating moral views, we should take into consideration not only the contents of those views but also the consequences of publicizing them. But:

  1. These two types of evaluation should be clearly distinguished and done separately, both for concept...
What questions relevant to EA could be answered by surveying the public?

I think that moral uncertainty and non-moral epistemic uncertainty (if you'll allow the distinction) both suggest we should assign some weight to what people say is valuable.

Only ~7% of all people who ever lived are currently alive. What's the justification for focusing on humans living in 2022? Is it just that figuring out the values of past generations is less tractable?

David_Moss (5d): It seems plausible that we should assign weight to what past generations valued (though one would likely not use survey methodology to do this), as well as what future generations will value, insofar as that is knowable.
Why I'm concerned about Giving Green

Giving Green no longer recommends TSM, although the reasons prompting the withdrawal of the recommendation appear to be unrelated to the incidents described above:

we have concerns about Sunrise’s need for additional funding and its lack of clear strategy beyond 2021. Sunrise’s budget grew explosively from just $50,000 in 2017 to $15 million in 2020 and 2021. This kind of rapid growth can strain any organization, and it appears that Sunrise is no different, as 2021 was a year of internal friction in the Movement. Also aside from some advocacy work on climat...
alexrjl (8d): There was substantial evidence of TSM's rapid growth available at the time I originally wrote this piece, some of which I included in it. It therefore seems somewhat strange that the thing which prompted the de-recommendation is that TSM appeared to grow rapidly. Nonetheless, the de-recommendation itself seems good.
External praise for EA

I like having a single "external praise" tag rather than three "praise" tags corresponding to the three "criticism" tags, for the reasons you note.

jtm (10d): James, thanks for pointing this out, and thanks, Pablo, that was indeed the link I intended to use! Fixed it now.
EA Forum feature suggestion thread

and allows for completed changes to be hidden

Having an option to "resolve" a comment thread (analogous to "closing" a GitHub issue) would be very useful, especially for Wiki comments.

The Precipice

Just saw this—added.

I see.

My overall sense is that the scope of the entry is insufficiently crisp, and that it's probably better to discuss these topics under other entries. For instance, there has been considerable discussion about the degree to which charities or causes differ in cost-effectiveness, so I would say that the question of whether the EA community should try to persuade people to switch causes, rather than to improve their effectiveness within the cause they already support, should be addressed as part of that discussion, in the cost-effectiveness entry. Som...

Harrison D (11d): I hadn’t really put much effort into crisping up the entry when I first created it, so I’m not surprised there. I do still think that, in the context of a wiki filled with a variety of tags already (including “Fabianism” [https://forum.effectivealtruism.org/tag/fabianism], which has been used for zero posts), it would make sense to add a new tag for this subject—which I agree is related to messaging, cost-effectiveness calculations, local priorities research, cause neutrality, and a few other things but is not entirely subsumed by any one of those tags. However, it seems that I am alone (and outclassed) on this point, so feel free to do/delete as you see fit.

Cause-neutrality is another option. Both of the tagged articles discuss/criticise cause-neutrality.

EA Forum feature suggestion thread

Also seconded.

In the meantime, you can get a pseudo dark mode with the Dark Reader extension.

Civilization Re-Emerging After a Catastrophic Collapse

It would be great if this talk were transcribed.

Meat-eater problem

Re-reading this exchange, I'd like to add that it may be worth discussing those externalities in other Wiki articles, such as dietary change.

[Linkpost] Eric Schwitzgebel: Against Longtermism

Your reply to Eric's fourth objection makes an important point that I haven't seen mentioned before:

By contrast, I think there's a much more credible risk that defenders of conventional morality may use dismissive rhetoric about "grandiose fantasies" (etc.) to discourage other conventional thinkers from taking longtermism and existential risks as seriously as they ought, on the merits, to take them. (I don't accuse Schwitzgebel, in particular, of this. He grants that most people unduly neglect the importance of existential risk reduction. ...
Ben Garfinkel: How sure are we about this AI stuff?

I think this talk, as well as Ben's subsequent comments on the 80k podcast, serves as a good illustration of the importance of being clear, precise, and explicit when evaluating causes, especially those often supported by relatively vague analogies or arguments with unstated premises. I don't recall how my views about the seriousness of AI safety as a cause area changed in response to watching this, but I do remember feeling that I had a better understanding of the relevant considerations and that I was in a better position to make an informed assessment.

Reducing long-term risks from malevolent actors

I'm surprised to see that this post hasn't yet been reviewed. In my opinion, it embodies many of the attributes I like to see in EA reports, including reasoning transparency, intellectual rigor, good scholarship, and focus on an important and neglected topic.

Have you considered switching countries to save money?

I know Uruguay well (my father used to own a house near Colonia del Sacramento) and I would agree with your assessment. Montevideo is about two hours away from Buenos Aires by ferry, so an additional advantage is relative proximity to a major metropolis. 

Have you considered switching countries to save money?

Many years ago, when personal finance considerations weighed more heavily on EA decisions, there was an attempt to establish a "new EA hub" in a country with low cost of living and some other characteristics. Maybe the associated Facebook group still exists. I recall there were also spreadsheets, Trello boards, etc. with detailed comparisons of the different options. Posting this here in case others have links to some of that material.

acylhalide (9d): Interesting, I'd be keen to know more about that. I feel like this might have some impact on my career choices.

Perhaps we could have an entry on the demandingness of EA, though. That is, insofar as one thinks there is a requirement to engage in effective altruism, how stringent is this requirement?

What do you think of this proposal?

Thanks for creating this entry. I'm not sure having a dedicated article on 'more good vs. most good' is justified, however. EA places great emphasis on doing the most good with a given unit of resources, but is not typically understood to require people to allocate their resources so as to do the most good. This debate seems rather to be one in moral philosophy, covered under demandingness of morality. Insofar as it comes up in EA discussion, it seems to be discussed mostly in the context of excited vs. obligatory altruism.

Harrison D (12d): I may not be fully/properly understanding your objection(s), but it sounds like you are interpreting the tag as more about philosophical issues (e.g., the demandingness of morality), whereas my intent for the category (as well as that of the first post I tagged with this [https://forum.effectivealtruism.org/posts/udsATFrQtc34iKs2c/doing-more-good-vs-doing-the-most-good-possible]) was more about questions regarding the outreach/branding/community-building strategy of EA as a movement: for example, to what extent should EA try to persuade people to change their cause areas altogether (which tends to be a more difficult sell) vs. encouraging/helping people to find more-effective charities within their own preferred cause areas (even if such charities are less impactful than ones in other areas)? Is it better to support or provide research about something (hypothetically low/medium-impact) like "most effective (1st-world) disaster relief funds" as opposed to supporting or providing research about something typically higher-impact like "most effective 3rd-world health and development charities," once you take into consideration less-direct effects like movement image/growth, moral circle expansion, etc.?
[Feature Announcement] Rich Text Editor Footnotes

This is fantastic! I'm excited that the Wiki will gradually replace the static and distracting inline citations with dynamic and hoverable footnotes.

One thing I noticed is that, when there are multiple footnote references pointing to the same footnote, the link that takes you back from the footnote always points to the first of those references. Although this is not very important, I think that, ideally, there should be separate back-links for each footnote reference.

For illustration, compare the Wiki entry on Cari Tuna (which I just edited so that it uses footnotes...

Jonathan Mustin (14d): Thanks Pablo! Added it to the list. The return tooltip idea in particular is clever. This one might take a bit of work, enough that I don't expect it to go in the immediate quick-fix bucket, but I agree it's a good addition and I will look into it!
Do you use the EA Wiki?

Yes, agreed. One model would be for tags to have content that corresponds to a glossary rather than an encyclopedia. In this model, each tag would be associated with a concise definition or description of the tag, spanning 1–3 sentences, and perhaps also a short list of references for further reading. Then there would also be SEP-style encyclopedia articles corresponding to some of these tags, but it's unclear how the two should be integrated, or whether they should be integrated at all.

Do you use the EA Wiki?
Answer by Pablo · Jan 04, 2022

Not an answer to your question, but it may answer or address some of your underlying questions or concerns:

EA Funds recently extended funding for my work on the EA Wiki, but the current plan is to focus more on experimentation and less on content creation. Currently, I'm exploring the possibility of launching a more ambitious encyclopedia of effective altruism following roughly the model of the Stanford Encyclopedia of Philosophy, with authoritative, comprehensive, and up-to-date articles on core EA concepts and topics commissioned from experts in both academia...

Nathan Young (15d): Yeah, it's my strong view that if the wiki is set up right, the content should more or less create itself. That the wiki isn't useful suggests that people don't feel comfortable adding stuff to it. Personally, I'd like ways to integrate it more with posts and encourage people to correct errors - perhaps people can tag phrases in posts with links to the wiki, and then people would hover over those links to understand the concepts. When that's happening, people would find and correct errors as they saw them.
casebash (15d): Well, even if you implement that model, I still think it'd be important to keep tags and for those tags to be able to have descriptions.
Nathan Young's Shortform

I do think that many of the entries are rather superficial, because so far we've been prioritizing breadth over depth. You are welcome to try to make some of these entries more substantive. I can't tell, in the abstract, if I agree with your approach to resolving the tradeoff between having more content and having a greater fraction of content reflect just someone's opinion. Maybe you can try editing a few articles and see if it attracts any feedback, via comments or karma?

Democratising Risk - or how EA deals with critics

Thanks for the comments. They have helped me clarify my thoughts, though I feel I'm still somewhat confused.

However, I'll note that establishing a rule like "we won't look at claims seriously if the person making them has a personal vendetta against us" could lead to people trying to argue against examining someone's claims by arguing that they have a personal vendetta, which gets weird and messy. ("This person told me they were sad after org X rejected their job application, so I'm not going to take their argument against org X's work very seriously.")

Yes...

An Issue with the Repugnant Conclusion

The crux I think lies in, "is not meant to be sensitive to how resources are allocated or how resources convert to wellbeing." I guess the point established here is that it is, in fact, sensitive to these parameters.

In particular if one takes this 'total utility' approach of adding up everyone's individual utility we have to ask what each individual's utility is a function of.

Yes, that is a question that needs to be answered, but population ethics is not an attempt to answer it. This subdiscipline treats distributions of wellbeing across individuals in different hypothetical worlds as a given input, and seeks to find a function that outputs a plausible ranking of those worlds. ...

matthewp (18d): As someone with a mathematical background, I see a claim about a general implication (the RC) arising from Total Utilitarianism. I ask 'what is Total Utilitarianism?' I understand 'add up all the utilities'. I ask 'what would the utility functions have to look like for the claim to hold?' The answer is, 'quite special'.

I don't think any of us should be comfortable with not checking the claim works at a gears level. The claim here being, approximately, that the RC is implied under Total Utilitarianism regardless of the choice of utility function. Which is false, as demonstrated above.

> This subdiscipline treats distributions of wellbeing across individuals in different hypothetical worlds as a given input, and seeks to find a function that outputs a plausible ranking of those worlds.

If you'd be interested in formalising what this means, I could try and show that either the formalisation is uninteresting or that some form of my counterexamples to the RC still holds.
An Issue with the Repugnant Conclusion

Thanks for the clarification. My intention was not to dismiss your proposal, but to understand it better.

After reading your comment and re-reading your post, I understand you to be claiming that the Repugnant Conclusion follows only if the mapping of resources to wellbeing takes a particular form, which can't be taken for granted. I agree that this is substantively different from the proposals in the section of the SEP article, so the difference is not verbal, contrary to what it seemed to me initially.

However, I don't think this works as a reply to the Repugnant Conclusion...

matthewp (19d): Thanks for the considered reply :)

The crux I think lies in, "is not meant to be sensitive to how resources are allocated or how resources convert to wellbeing." I guess the point established here is that it is, in fact, sensitive to these parameters. In particular, if one takes this 'total utility' approach of adding up everyone's individual utility, we have to ask what each individual's utility is a function of. It seems easy to argue that the utility of existing individuals will be affected by expanding or contracting the total pool of individuals. There will be opposing forces of division of scarce resources vs network effects etc.

A way the argument above could be taken down would be writing down some example of a utility function, plugging it into the total utility calculation and showing the RC does hold. Then pointing out that the function comes from a broad class which covers most situations of practical interest. If the best defence is indeed just pointing out that it's true for a narrow range of assumptions, my reaction will be like, "OK, but that means I don't have to pay much attention whenever it crops up in arguments because it probably doesn't apply."
Where are you donating in 2021, and why?

An alternative is to donate your time rather than your money, and use it to do the kind of work you would have funded, had this been an option. With most interventions, this isn't possible or realistic, but Wikipedia is the Free Encyclopedia that Anyone Can Edit.

An Issue with the Repugnant Conclusion

This criticism is most similar to that of the ‘Variable value principles’ of the Plato article. The difference here is that we are not trying to find a ‘modification’ of total utilitarianism. Instead we argue that the Conclusion doesn’t follow from the premises in the general case, even if we are total utilitarians.

Superficially, the difference seems merely verbal: what they call a modification of total utilitarianism, you call a version of total utilitarianism. Is there anything substantive at stake?

matthewp (19d): Well, on the basis of the description in the SEP article: it's not the same thing, since above we're saying that each individual's utility is a function of the whole setup. So when you add new people you change the existing population's utilities. The SEP description instead sounds like changing only what happens at the margin.

The main argument above is more or less technical, rather than 'verbal'. And reliance on verbal argument is pretty much the root of the original issue. In the event someone else said something similar some other time, there's still value in a rederivation from a different starting position. I'm not so much concerned with credit for coming up with an idea as with less frequently encountering instances of this issue.
Democratising Risk - or how EA deals with critics

I agree that there is a relevant difference, and I appreciate your pointing it out. However, I also think that knowledge of the origins of a claim or an argument is sometimes relevant for deciding whether one should engage seriously with it, or engage with it at all, even if the person presenting it is not himself/herself acting in bad faith. For example, if I know that the oil or the tobacco industries funded studies seeking to show that global warming is not anthropogenic or that smoking doesn't cause cancer, I think it's reasonable to be skeptical...

One reason is that the studies may consist of filtered evidence—that is, evidence selected to demonstrate a particular conclusion, rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.

The "incentives" point is reasonable, and it's part of the reason I'd want to deprioritize checking into claims with dishonest origins. 

However, I'll note that establishing a rule like "we won't look at claims seriously if the person making them has a personal vendetta against us"...

Propose and vote on potential EA Wiki entries

Here's the entry. I was only able to read the transcript of Paul's talk and Rohin's summary of it, so feel free to add anything you think is missing.

Propose and vote on potential EA Wiki entries

Thanks, Michael. This is a good idea; I will create the entry.

(I just noticed you left other comments to which I didn't respond; I'll do so shortly.)

Democratising Risk - or how EA deals with critics

This seems like a fruitful area of research—I would like to see more exploration of this topic. I don't think I have anything interesting to say off the top of my head.

Democratising Risk - or how EA deals with critics

The longtermist could then argue that an analogous argument applies to "other-defence" of future generations. (In case there was any need to clarify: I am not making this argument, but I am also not making the argument that violence should be used to prevent nonhuman animals from being tortured.)

Separately, note that a similar objection also applies to many forms of non-totalist longtermism. On broad person-affecting views, for instance, the future likely contains an enormous number of future moral patients who will suffer greatly unless we do something about it...

MichaelStJules (21d): Also, I think we should be clear about what kinds of serious harms would in principle be justified on a rights-based (or contractualist) view. Harming people who are innocent or not threats seems likely to violate rights and be impermissible on rights-based (and contractualist) views. This seems likely to apply to massive global surveillance and bombing civilian-populated regions, unless you can argue on such views that each person being surveilled or bombed is sufficiently a threat and harming innocent threats is permissible, or that collateral damage to innocent non-threats is permissible. I would guess statistical arguments about the probability of a random person being a threat are based on interpretations of these views that the people holding them would reject, or that the probability for each person being a threat would be too low to justify the harm to that person.

So, what kinds of objectionable harms could be justified on such views? I don't think most people would qualify as serious enough threats to justify harm to them to protect others, especially people in the far future.
MichaelStJules (21d): I realize now I interpreted "rights" in moral terms (e.g. deontological terms), when Halstead may have intended it to be interpreted legally. On some rights-based (or contractualist) views, some acts that violate humans' legal rights to protect nonhuman animals or future people could be morally permissible.

I agree. I think rights-based (and contractualist) views are usually person-affecting, so while they could in principle endorse coercive action to prevent the violation of rights of future people, preventing someone's birth would not violate that then non-existent person's rights, and this is an important distinction to make. Involuntary extinction would plausibly violate many people's rights, but rights-based (and contractualist) views tend to be anti-aggregative (or at least limit aggregation), so while preventing extinction could be good on such views, it's not clear it would deserve the kind of priority it gets in EA. See this paper [https://www.cambridge.org/core/journals/canadian-journal-of-philosophy/article/abs/whats-wrong-with-human-extinction/D836D5BC13C24FE1DF2F144E40FAB728], for example, which I got from one of Torres' articles and takes a contractualist approach. I think a rights-based approach could treat it similarly.

It could also be the case that procreation violates the rights of future people pretty generally in practice, and then causing involuntary extinction might not violate rights at all in principle, but I don't get the impression that this view is common among deontologists and contractualists or people who adopt some deontological or contractualist elements in their views. I don't know how they would normally respond to this. Considering "innocent threats" complicates things further, too, and it looks like there's disagreement over the permissibility of harming innocent threats to prevent harm caused by them.

I agree. However, again, on some non-consequentialist views, some coercive acts could be prohibited in some contexts, and wh...
[Linkpost] - Sam Harris and Sam Bankman-Fried - Earning to Give

Tyler Cowen announced that he will soon be interviewing SBF. You can suggest questions here.

Democratising Risk - or how EA deals with critics

Just to clarify (since I now realize my comment was written in a way that may have suggested otherwise): I wasn't alluding to your attempt to steelman his criticism. I agree that at the time the evidence was much less clear, and that steelmanning probably made sense back then (though I don't recall the details well).

Democratising Risk - or how EA deals with critics

Hi Charles. Please consider revising or retracting this comment; unlike your other comments in this thread, it's unkind and doesn't add to the conversation.

Per your personal request, I have deleted my comment.

Democratising Risk - or how EA deals with critics

I agree with this, and would add that the appropriate response to arguments made in bad faith is not to "steelman" them (or to add them to a syllabus, or to keep disseminating a cherry-picked quote from a doctoral dissertation), but to expose them for what they are or ignore them altogether. Intellectual dishonesty is the epistemic equivalent of defection in the cooperative enterprise of truth-seeking; to cooperate with defectors is not a sign of virtue, but quite the opposite.

I've seen "in bad faith" used in two ways:

  1. This person's argument is based on a lie.
  2. This person doesn't believe their own argument, but they aren't lying within the argument itself.

While it's obvious that we should point out lies where we see them, I think we should distinguish between (1) and (2). An argument's original promoter not believing it isn't a reason for no one to believe it, and shouldn't stop us from engaging with arguments that aren't obviously false.

(See this comment for more.)

weeatquince (21d): Agree with this.
Democratising Risk - or how EA deals with critics

nor is it germane to this discussion

I do think it is germane to the discussion, because it helps to clarify what the authors are claiming and whether they are applying their claims consistently. 

Davidmanheim (21d): I was discussing this paper, which doesn't discuss climate philanthropy, not everything they have ever stated. I don't know what else they've claimed, and I'm not interested in a discussion of it.
Democratising Risk - or how EA deals with critics

Technology causes problems? Just add more technology!

"it's more nuanced than that".

Who are the most well known credible people who endorse EA?
Answer by Pablo · Dec 27, 2021

I second Harrison D's suggestion to create a spreadsheet of endorsements, since such a list might be useful to a number of EAs and EA orgs, beyond the specific task of updating effectivealtruism.org.

Sources that may point you in the right direction: [...]

Effective Altruism: The First Decade (Forum Review)

Okay, that explains why I can't vote on the post by Carl I crossposted. But why can I (and everyone else, presumably) neither review nor vote on the three posts above?

They were not nominated during the nominations phase. I'll treat Tessa's posting as a nomination though, and nominate them manually. You should now be able to vote on and review them.

Effective Altruism: The First Decade (Forum Review)

Thanks for crossposting these. It seems that it's not possible to review or vote on some of those posts (specifically, these three posts). Is there an explanation for this? I also noticed I can't vote on this post by Carl Shulman, which I crossposted, though in that case I can write a review.

RyanCarey (23d): In general you're allowed to review, but not vote on your own posts.
Evidence, cluelessness, and the long term - Hilary Greaves

Thanks for the reply. Although this doesn't resolve our disagreement, it helps to clarify it.

weeatquince (23d): Thank you Pablo. Have edited my review. Hopefully it is fairer and more clear now. Thank you for the helpful feedback!!
Response to Recent Criticisms of Longtermism

As I mentioned in a top-level comment on this post, I don't think this is actually true. He never claims so outright.

In one of the articles, he claims that longtermism can be "analys[ed]" as (i.e., logically entails) "a moral view closely associated with what philosophers call 'total utilitarianism'." And in his reply to Avital, he writes that "an integral component" of the type of longtermism that he criticized in that article is "total impersonalist utilitarianism". So it looks like the only role the "closely" qualifier plays is to note that the type of total...

MichaelStJules (1mo): Ok, I don't find this particularly useful to discuss further, but I think your interpretations of his words are pretty uncharitable here. He could have been clearer/more explicit, and this could prevent misinterpretation, including by the wider audience of people reading his essays.

EDIT: Having read more of his post on LW, it does often seem like either he thinks longtermists are committed to assigning positive value to the creation of new people, or that this is just the kind of longtermism he takes issue with, and it's not always clear which, although I would still lean towards the second interpretation, given everything he wrote.

This seems overly literal, and conflicts with other things he wrote (which I've quoted previously, and also in the new post on LW). He wrote: [...] That means he's criticizing a specific sort of longtermism, not the minimal abstract longtermist view, so this does not mean he's claiming longtermism is committed to total utilitarianism.

He also wrote: [...] Again, if he thought longtermism was literally committed to consequentialism or total utilitarianism, he would have said so here, rather than speaking about specific positions and merely pointing out similarities.

He also wrote: [...] Given that he seems to have person-affecting views, this means he does not think longtermism is committed to totalism/impersonalism or similar views. Total utilitarianism is already impersonalist, from my understanding, so to assume by "moral view closely associated with what philosophers call 'total utilitarianism'", he meant "total impersonalist utilitarianism", I think you have to assume he didn't realize (or didn't think) total utilitarianism and total impersonalist utilitarianism are the same view. My guess is that he only added the "impersonalist" to emphasize the fact that the theory is impersonalist.
Comments for shorter Cold Takes pieces

Given this simple consideration that cases would have to drop off exceptionally fast at just the right time for Zvi's outcome to happen, I assign a 5% chance to Zvi's outcome happening.

Your analysis roughly matches my independent impression, but I'm pretty sure this simple consideration didn't escape Zvi's attention. So, it seems that you can't so easily jump from that analysis to the conclusion that Holden will win the bet, unless you didn't think much of Zvi as a reasoner to begin with or had a plausible error theory to explain this particular instance.

WilliamKiely (1mo): Yes, you're quite right, thanks. I failed to differentiate between my independent impression and my all-things-considered view when thinking about and writing the above. Thinking about it now, I realize ~5% is basically my independent impression, not my all-things-considered view. My all-things-considered view is more like ~20% Zvi wins--and if you told me yours was 40% then I'd update to ~35%, though I'd guess yours is more like ~25%. I meta-updated upwards based on knowing Zvi's view and the fact that Holden updated upwards on Zvi to 50%. (And even if I didn't know their views, my initial naive all-things-considered forecasts would very rarely be as far from 50% as 5% is, unless there's a clear base rate that is that extreme.)

That said, I haven't read much of what Zvi has written in general, and the one thing I do remember reading of his on Covid (his 12/24/20 Covid post [https://www.lesswrong.com/posts/CHtwDXy63BsLkQx4n/covid-12-24-we-re-f-ed-it-s-over]) I strongly disagreed with at the time (and it turns out he was indeed overconfident). I recognize that this probably makes me biased against Zvi's judgment, leading me to want to meta-update on his view less than I probably should (since I hear a lot of people think he has good judgment and there probably are a lot of other predictions he's made which were good that I'm just not aware of), but at the same time I really don't personally have good evidence of his forecasting track record in the way that I do of e.g. your record, so I'm much less inclined to meta-update a lot on him than I would e.g. on you.

Additionally, I did think of a plausible error theory earlier, after writing the 5% forecast (specifically: a plausible story for how Zvi could have accepted such a bet at terrible odds). (I said this out loud to someone at the time rather than type it:) My thought was that Zvi's view in the conceptual disagreement they were betting on seems much more plausible to me than Zvi's position in the bet operationalizatio...
Response to Recent Criticisms of Longtermism

[I made some edits to make my comment clearer.]

I think this is not a very good way to dismiss the objection, given the views actual longtermists hold and how longtermism looks in practice today (a point Torres makes).

I wouldn't characterise my observation that longtermism isn't committed to total utilitarianism as dismissing the objection. I was simply pointing out something that I thought is both true and important, especially in the context of a thread prompted by a series of articles in which the author assumes such a commitment. The remainder of my comment...

MichaelStJules (1mo): As I mentioned in a top-level comment on this post, I don't think this is actually true. He never claims so outright. The Current Affairs piece doesn't use the word "utilitarian" at all, and just refers to totalist arguments made for longtermism, which are some of the most common ones. His wording from the Aeon piece, which I've bolded here to emphasize, also suggests otherwise: [...] I don't think he would have written "closely associated" if he thought longtermism and longtermists were necessarily committed to total utilitarianism.

The "utilitarianism repackaged" article explicitly distinguishes EA and utilitarianism, but points out what they share, and argues that criticisms of EA based on criticisms of utilitarianism are therefore fair because of what they share. Similarly, Dr. David Mathers never actually claimed longtermism is committed to total utilitarianism; he only extended a critique of total utilitarianism to longtermism, which responds to one of the main arguments made for longtermism.

Longtermism is also not just the ethical view that some of the primary determinants of what we should do are the consequences on the far future (or similar). It's defended in certain ways (often totalist arguments), it has an associated community and practice, and identifying as a longtermist means associating with those, too, and possibly promoting them. The community and practice are shaped largely by totalist (or similar) views. Extending critiques of total utilitarianism to longtermism seems fair to me, even if they don't generalize to all longtermist views.
Evidence, cluelessness, and the long term - Hilary Greaves

A few thoughts:

  • I'm open to the possibility that there are terms better than "cluelessness" to refer to the problem Hilary discusses in her talk. Perhaps we could continue this discussion elsewhere, such as on the 'talk' page of the cluelessness Wiki entry (note that the entry is currently just a stub)?
  • As noted, the term has been used in philosophy for quite some time. So if equivalent or related expressions exist in other disciplines, the question is, "Which of these terms should we settle for?" Whereas you make it seem like using "cluelessness" requires a...
weeatquince (1mo): The EA Forum wiki has talk pages!! Wow you learn something new every day :-)

Yes I think that is ultimately the thing we disagree on. And perhaps it is one of those subjective things that we will always disagree on (e.g. maybe different life experiences means you read some content as new and exciting and I read the same thing as old and repetitive).

If I had to condense why I didn’t think it is a valuable contribution, it is that it looks to me (given my background) like it is reinventing the wheel. The rough topic of how to make decisions under uncertainty about the impact of those decisions (uncertainty about what the options are, what the probabilities are, how to decide, what is even valuable etc.) in the face of unknown unknowns, etc. is a topic that military planners, risk managers, academics and others have been researching for decades. And they have a host of solutions: anti-fragility, robust decision-making, assumption-based planning, sequence thinking, adaptive planning. And they have views on when to make such decisions, when to do more research, how to respond, etc. I think any thorough analysis of the options for addressing uncertainty/cluelessness really should draw on some of that literature (before dismissing options like "make bolder estimates" / "make the analysis more sophisticated"). Otherwise it would be like trying to reinvent the wheel, suggesting it should be square and then concluding it cannot be done and wheels don’t work.

Hope that explains where I am coming from. (PS. To reiterate, in Hilary's defense, EAs reinvent wheels all the time. No. 1 top flaw and all that. I just think this specific case has led to lots of confusion. E.g. people thinking there is no good research into uncertainty management.)
Evidence, cluelessness, and the long term - Hilary Greaves

Unfortunately this author has had the bad luck that her new terminology stuck. And it stuck pretty hard.

The term "cluelessness" has been used in the philosophical literature for decades, to refer to the specific and well-defined problem faced by consequentialism and other moral theories which take future consequences into account. Greaves's talk is a contribution to that literature. She wasn't even the first to use the term in EA contexts; I believe Amanda Askell and probably other EAs were discussing cluelessness years before this talk.

weeatquince (1mo): Yes you are correct. I am not an expert here but my best guess is the story is something like:
  • "Moral cluelessness" was a philosophical term that has been around for a while.
  • Hilary borrowed the philosophy term and extended it to discuss "complex cluelessness" (which a quick Google makes me think is a term she invented).
  • "Complex cluelessness" is essentially identical to "deep uncertainty" and such concepts (at least as far as I can tell from reading her work; I think it was this paper [https://philarchive.org/archive/GREC-38v1] I read).
  • This and other articles then shorthanded "complex cluelessness" to just "cluelessness".

I am not sure exactly, happy to be corrected. So maybe not an invented term but maybe a borrowed, slightly changed and then rephrased term. Or something like that. It all gets a bit confusing. And sorry for picking on this talk if Hilary was just borrowing ideas from others, just saw it on the Decade Review list.

– –

Either way I don't think this changes the point of my review. It is of course totally fine to invent / reinvent / borrow terminology (in fact in academic philosophy it is almost a requirement as far as I can tell). And it is of course fine for philosophers to talk like philosophers. I just think sometimes adding new jargon to the EA space can cause more confusion than clarity, and this has been one of those times. I think in this case it would have been much better if EA had got into the habit of using the more common, widely used terminology that is more applicable to this topic (this specific topic is not, as far as I can tell, a problem where philosophy has done the bulk of the work to date). And insofar as the decade review is about reviewing what has been useful 1+ years later, I would say this is a nice post that has in actuality turned out unfortunately to be dis-useful / net harmful. Not trying to place blame. Maybe there is just a lesson for all of us on being cautious when introducing terminology.