«If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that would be possible. Some of our successors might live lives and create worlds that, though failing to justify past suffering, would have given us all, including those who suffered most, reasons to be glad that the Universe exists.» — Derek Parfit


Long-Term Future Fund: September 2020 grants

I agree that the sentence Linch quoted sounds like a "bravery debate" opening, but that's not how I perceive it in the broader context. I don't think the author is presenting themselves as an underdog, intentionally or otherwise. Rather, they are making that remark as part of an overall attempt to indicate that they are aware they are raising a sensitive issue, and that they are doing so in a collaborative spirit and with admittedly limited information. This strikes me as importantly different from the prototypical bravery debate, where the primary effect is not to foster an atmosphere of open dialogue but to gain sympathy for a position.

I am tentatively in agreement with you that "clarification of intent" can be done without "bravery talk", by which I understand any mention that the view one is advancing is unpopular. But I also think that such talk doesn't always communicate that one is the underdog, and is therefore not inherently problematic. So, yes, the OP could have avoided that kind of language altogether, but given the broader context, I don't think the use of that language did any harm.

(I'm maybe 80% confident in what I say above, so if you disagree, feel free to push me.)

Long-Term Future Fund: September 2020 grants

Thanks, you are right. I have amended the last sentence of my comment.

Long-Term Future Fund: September 2020 grants

FWIW, I think that the qualification was very appropriate and I didn't see the author as intending to start a "bravery debate". Instead, the purpose appears to have been to emphasize that the concerns were raised in good faith and with limited information. Clarifications of this sort seem very relevant and useful, and quite unlike the phenomenon described in Scott's post.

Pablo Stafforini’s Forecasting System

The link is broken; can you fix it?

In the meantime, a few random thoughts. First, the index fund analogy suggests a self-correcting mechanism. Players defer to the community only to the degree that they expect it to track the truth more reliably than their individual judgment, given their time and ability constraints. As the reliability of the community prediction changes, in response to changes in the degree to which individual players defer to it, so will those players' willingness to defer to the community.

Second, other things equal, I think it's a desirable property of a prediction platform that it makes it rational for players to sometimes defer to the community. This could be seen as embodying the important and neglected truth that in many areas of life one can generally do better by deferring to society's collective wisdom than by going with one's individual opinion.

Finally, insofar as there are reasons for wanting players not to defer to the community, I think the appropriate response is to change the scoring function rather than to ask players to exercise self-restraint. As fellow forecaster Tom Adamczewski reminded me, the Metaculus Scoring System page describes one such possible change:

 It's easy to account for the average community prediction by adding a constant to each of these, so that a player would get precisely zero points if they just go along with the community average.

Perhaps Metaculus could have two separate leaderboards: in addition to the current ranking, it could also display a ranking of players with the community component subtracted. These two rankings could be seen as measuring the quality of a player's "credences" and "impressions", respectively.
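The community-subtracted scoring idea can be sketched as follows. This is a hypothetical illustration, not Metaculus's actual scoring code: the function names and the use of a plain log score are my assumptions, introduced only to show how deferring exactly to the community would yield zero points.

```python
import math

def log_score(p, outcome):
    """Log score of probability p for a binary outcome (1 = resolved yes)."""
    return math.log(p if outcome else 1 - p)

def relative_score(p, community_p, outcome):
    """Score with the community component subtracted.

    A player who simply matches the community prediction scores
    exactly zero; beating the community yields a positive score.
    """
    return log_score(p, outcome) - log_score(community_p, outcome)

# Deferring exactly to the community prediction earns zero points.
print(relative_score(0.7, 0.7, 1))  # → 0.0

# Being more confident than the community in the true outcome scores positively.
print(relative_score(0.9, 0.7, 1) > 0)  # → True
```

A ranking built from `relative_score` would correspond to the second, "impressions" leaderboard, while the existing scoring would continue to measure all-things-considered "credences".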

Pablo Stafforini’s Forecasting System

My spacemacs config file is here. The main Keyboard Maestro macros I use are here. As noted, these macros were created back when I was beginning to use Emacs, so they don't make use of org capture or other native functionality (including Emacs's own internal macros, or the even more powerful elmacro package). I plan to review these files at some point, but not in the immediate future. Happy to answer questions if anything is unclear.

Pablo Stafforini’s Forecasting System

Yes, indeed. I was about to suggest an edit to the transcript to make that clear. When I created the Keyboard Maestro script, I was still relatively unfamiliar with Org mode so I didn't make use of org capture. But that's the proper way to do it.

Pablo Stafforini’s Forecasting System

It was a pleasure to discuss my approach to forecasting with Jungwon and Amanda. I'd be happy to clarify anything that I failed to explain properly during our conversation, or to answer any questions related to the implementation or reasoning behind my "system" (if one may call it that).

Are social media algorithms an existential risk?

I haven't watched the documentary, but I'm antecedently skeptical of claims that social media constitute an existential risk in the sense in which EAs use that term. The brief summary provided by the Wikipedia article doesn't seem to support that characterization:

the film explores the rise of social media and the damage it has caused to society, focusing on its exploitation of its users for financial gain through surveillance capitalism and data mining, how its design is meant to nurture an addiction, its use in politics, its impact on mental health (including the mental health of adolescents and rising teen suicide rates), and its role in spreading conspiracy theories and aiding groups such as flat-earthers and white supremacists.

While many of these effects are terrible (and concern about them partly explains why I myself basically don't use social media), they do not appear to amount to threats of existential catastrophe. Maybe the claim is that the kind of surveillance made possible by social media and big tech firms more generally ("surveillance capitalism") has the potential to establish an unrecoverable global dystopia?

Are there other concrete mechanisms discussed by the documentary?

AMA: Tobias Baumann, Center for Reducing Suffering

To what degree are the differences between longtermists who prioritize s-risks and longtermists who prioritize x-risks driven by moral disagreements about the relative importance of suffering versus happiness, rather than by factual disagreements about the relative magnitude of s-risks versus x-risks?

AMA: Tobias Baumann, Center for Reducing Suffering

The universe is vast, so it seems there is a lot of room for variation even within the subset of risks involving astronomical quantities of suffering. How much, in your opinion, do s-risks vary in severity? Relatedly, what are your grounds for singling out s-risks as the object of concern, rather than those risks involving the most suffering?
