Pablo_Stafforini

«If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that would be possible. Some of our successors might live lives and create worlds that, though failing to justify past suffering, would have given us all, including those who suffered most, reasons to be glad that the Universe exists.» — Derek Parfit

Comments

alexrjl's Shortform

Together with a few EA friends, I ended up betting a substantial amount of money on Biden. It went well for me and for some of my friends. I think presidential elections present unusually good opportunities for both betting and arbitrage, so it may be worth coordinating a joint effort next time.

(As a note of historical interest, during the 2012 US election a small group of early EAs made some money arbitraging Intrade.)
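For readers unfamiliar with how this kind of arbitrage works: when two markets price the complementary outcomes of the same event at a combined cost below 1, buying both sides locks in a profit regardless of the result. A minimal sketch (the prices below are hypothetical illustrations, not the actual 2012 Intrade trade):

```python
def arbitrage(p_yes: float, p_no: float, budget: float):
    """Split `budget` between YES shares (price p_yes on market A) and
    NO shares (price p_no on market B) so the payout is identical
    whichever outcome occurs. Each winning share pays out 1.

    Returns (stake_yes, stake_no, guaranteed_profit), or None if the
    combined price is >= 1 and no arbitrage exists."""
    total = p_yes + p_no
    if total >= 1.0:
        return None  # markets are consistent; no free lunch
    stake_yes = budget * p_yes / total   # buys stake_yes / p_yes shares
    stake_no = budget - stake_yes        # buys stake_no / p_no shares
    payout = budget / total              # same payout for either outcome
    return stake_yes, stake_no, payout - budget

# Hypothetical example: YES at 0.60 on one market, NO at 0.35 on another.
# Combined cost 0.95 < 1, so $100 locks in roughly $5.26 of profit.
print(arbitrage(0.60, 0.35, 100.0))
```

The stake split equalizes the two payouts, which is what makes the profit risk-free (ignoring fees, counterparty risk, and settlement disputes, all of which mattered in practice on Intrade).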

4 Years Later: President Trump and Global Catastrophic Risk

In some cases Trump has been bad, but for the opposite reason than you were worried about! For example you criticized him for supporting travel bans during Ebola

It's not the opposite reason. The underlying criticism is that Trump's measures were miscalibrated to the magnitude of the problem. If your decision-making process is deeply flawed, as Trump's is, you should expect miscalibration in both directions.

'Existential Risk and Growth' Deep Dive #1 - Summary of the Paper

Leopold has now published a popular article discussing this topic. Highly recommended.

An excerpt:

Philosophers like Nick Bostrom, Derek Parfit, and Toby Ord have become increasingly concerned about such so-called “existential risks.” An unrecoverable collapse of civilization wouldn’t just be tragic for the billions who would suffer and die. Perhaps the greatest tragedy would be the foreclosing of all of humanity’s potential. Humanity could flourish for billions of years and enable trillions of happy human lives—if only we do not destroy ourselves beforehand.

This line of thinking has led some to question whether “progress”—in particular, technological progress—is as straightforwardly beneficial as commonly assumed. Nick Bostrom imagines the process of technological development as “pulling balls out of a giant urn.” So far, we’ve been lucky, pulling out a great many “white” balls that are broadly beneficial. But someday, we might pull out a “black” ball: a new technology that destroys humanity. Before that first nuclear test, some of the physicists worried that the nuclear bomb would ignite the atmosphere and end the world. Their calculations ultimately deemed it “extremely unlikely,” and so they proceeded with the test—which, as it turns out, did not end the world. Perhaps the next time, we don’t get so lucky.

The same technological progress that creates these risks is also what drives economic growth. Does that mean economic growth is inherently risky? Economic growth has brought about extraordinary prosperity. But for the sake of posterity, must we choose safe stagnation instead? This view is arguably becoming ever-more popular, particularly amongst those concerned about climate change; Greta Thunberg recently denounced “fairy tales of eternal economic growth” at the United Nations.

I argue that the opposite is the case. It is not safe stagnation and risky growth that we must choose between; rather, it is stagnation that is risky and it is growth that leads to safety.

We might indeed be in a “time of perils”: we might be advanced enough to have developed the means for our destruction, but not advanced enough to care sufficiently about safety. But stagnation does not solve the problem: we would simply stagnate at this high level of risk. Eventually, a nuclear war or environmental catastrophe would doom humanity regardless.

Faster economic growth could initially increase risk, as feared. But it will also help us get past this time of perils more quickly. When people are poor, they can’t focus on much beyond ensuring their own livelihoods. But as people grow richer, they start caring more about things like the environment and protecting against risks to life. And so, as economic growth makes people richer, they will invest more in safety, protecting against existential catastrophes. As technological innovation and our growing wealth have allowed us to conquer past threats to human life like smallpox, so can faster economic growth, in the long run, increase the overall chances of humanity’s survival.
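The excerpt's claim that stagnating at a fixed level of risk eventually dooms humanity can be illustrated with a toy compounding calculation (the risk figures below are illustrative assumptions, not numbers from the article):

```python
def survival_probability(annual_risk: float, years: int) -> float:
    """Probability of surviving `years` consecutive years when each year
    carries an independent `annual_risk` chance of existential catastrophe."""
    return (1.0 - annual_risk) ** years

# Even a seemingly small constant annual risk compounds toward near-certain
# doom over long horizons (hypothetical risk levels):
for r in (0.0001, 0.001):
    p = survival_probability(r, 10_000)
    print(f"annual risk {r:.4f} -> P(survive 10,000 years) = {p:.6f}")
```

This is why, on the model the article summarizes, what matters is whether the annual risk eventually declines (e.g. through safety investment), not merely whether it is low today.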

Avoiding Munich's Mistakes: Advice for CEA and Local Groups

I sympathise with this, but I think if we don't have public posts like this one, the outcome is more-or-less decided in advance.

Yes, I agree. What I'm uncertain about is whether it's desirable to have more of these posts at the current margin. And to be clear: by saying I'm uncertain whether it's a good idea, I don't mean to suggest it's not a good idea; I'm simply agnostic.

Avoiding Munich's Mistakes: Advice for CEA and Local Groups

This comment expresses something I was considering saying, but more clearly than I could. I would add that thinking strategically about this cultural phenomenon involves not only trying to understand its mechanism of action, but also coming up with frameworks for deciding what tradeoffs to make in response to it. I am personally very disturbed by the potential of cancel culture to undermine or destroy EA, and my natural reaction is to believe that we should stand firm and make no concessions to it, as well as to upvote posts and comments that express this sentiment. This is not, however, a position I feel I can endorse on reflection: it seems instead that protecting our movement against this risk involves striking a difficult and delicate balance between excessive and insufficient relaxation of our epistemic standards. By giving in too much the EA movement risks relinquishing its core principles, but by giving in too little the movement risks ruining its reputation. Unfortunately, I suspect that an open discussion of this issue may itself pose a reputational risk, and in fact I'm not sure it's even a good idea to have public posts like the one this comment is responding to, however much I agree with it.

jackmalde's Shortform

On a charitable reading of Parfit, the 'muzak and potatoes' expression is meant to pick out the kind of phenomenal experience associated with the "drab existence" he wants to communicate to the reader. So he is not asking you to imagine a life in which you do nothing but listen to muzak and eat potatoes. Instead, he is asking you to consider what it typically feels like to listen to muzak and eat potatoes, and then to imagine a life that feels like that, all the time.

Some learnings I had from forecasting in 2020

On the one hand, my opinion of Metaculus predictions worsened as I saw how the 'recent predictions' showed people piling in on the median on some questions I watch.

Can you say more about this? I ask because this behavior seems consistent with an attitude of epistemic deference towards the community prediction when individual predictors perceive it to be superior to what they can themselves predict given their time and ability constraints.

Election scenarios

US democracy may be at risk. It is "our democracy" for only 4.25% of the world's population.

(Apologies for focusing on a single word of your post, but I think this seemingly trivial semantic difference reflects a more substantive and widespread issue. How many concerned posts about the political situation in, say, India have you seen on the Forum recently? How many "action items" for protecting democracy in, say, Brazil have you encountered? It is depressing and, yes, irritating to see a community that supposedly values all people equally concentrate its attention so overwhelmingly on a single country when it comes to politics and "current affairs".)

Long-Term Future Fund: September 2020 grants

I agree that the sentence Linch quoted sounds like a "bravery debate" opening, but that's not how I perceive it in the broader context. I don't think the author is presenting themselves as an underdog, intentionally or otherwise. Rather, they are making that remark as part of their overall attempt to indicate that they are aware that they are raising a sensitive issue, and that they are doing so in a collaborative spirit and with admittedly limited information. This strikes me as importantly different from the prototypical bravery debate, where the primary effect is not to foster an atmosphere of open dialogue but to gain sympathy for a position.

I am tentatively in agreement with you that "clarification of intent" can be done without "bravery talk", by which I understand any mention that the view one is advancing is unpopular. But I also think that such talk doesn't always communicate that one is the underdog, and is therefore not inherently problematic. So, yes, the OP could have avoided that kind of language altogether, but given the broader context, I don't think the use of that language did any harm.

(I'm maybe 80% confident in what I say above, so if you disagree, feel free to push me.)

Long-Term Future Fund: September 2020 grants

Thanks, you are right. I have amended the last sentence of my comment.
