michaelchen

Comments

Personal Perspective: Is EA particularly promising?

Thanks for sharing optimize.me! It's really cool how the app lets you read/listen to good summaries of books on positive psychology and other topics. I think EA has a lot of room for improvement in supporting members not only in working on pressing issues but also in personally thriving while doing so, and I like how you've highlighted that. Where can I find the Optimize community?

[linkpost] Peter Singer: The Hinge of History

I was surprised to read this from Peter Singer, a thoroughgoing utilitarian whom I often see as a little extreme in how EA-aligned his beliefs are.

I don't particularly agree with this conclusion:

When taking steps to reduce the risk that we will become extinct, we should focus on means that also further the interests of present and near-future people. If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do

It seems extremely unlikely to me that global poverty is just as good at reducing existential risk as things that are more targeted, such as AI safety research. At least, Singer's point requires significant elaboration on why he believes this to be the case. MichaelStJules writes more about this in his comment here.

Nevertheless, I found it valuable to see how Peter Singer views longtermism, which can provide a window into future public perceptions.

Yaroslav Elistratov writes more on Peter Singer's thoughts on existential risk here.

How Big a Problem is Status Quo Bias in the EA Community?

Gotcha, I think standard EA articles about systemic change, such as "Some personal thoughts on EA and systemic change", would be relevant.

Response to Phil Torres’ ‘The Case Against Longtermism’

Is your preprint available now? I'd be curious to read your thoughts about why climate change and nuclear war should be prioritized more.

How Big a Problem is Status Quo Bias in the EA Community?

I realize that you weren't the original author of this question, but I think it really needs much more context to be a high-quality question on the EA Forum. Why does the asker think that status quo bias might be a problem in the EA community? What kinds of examples do they have in mind? Are they interested in a bias towards the status quo of the world at large, or towards the EA community as it currently is?

An EA case for interest in UAPs/UFOs and an idea as to what they are

a 5% chance of these being extraterrestrial crafts seems to be a very conservative estimate with another 5% allocated to advanced earth technology also seeming a reasonable lower bound

Actually, I think these estimates are extremely high. I don't think this post engages enough with mundane explanations (as in this Salon post).

Convergence thesis between longtermism and neartermism

If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice)

To be precise, Toby Ord's figure of one in six in The Precipice refers to the chance of existential catastrophe, not human extinction. Existential catastrophe is a broader category that also includes events such as unrecoverable civilizational collapse.

Pedant, a type checker for Cost Effectiveness Analysis

In my communications with GiveWell and others about Pedant, the most prominent message I have is that the tool is too technical and complicated for someone interested in starting out.

This seems like a significant concern that might seriously impede adoption. I'd like to see more iteration to identify an MVP that is approachable for new users. You write that you hope the barrier to entry will be lowered in the third stage of the project, which involves making a web interface. I assume the web interface would let people skip installing Pedant and learning how to build files. But what would the web interface look like, and to what extent would potential users find the design intuitive?

The language seems quite straightforward to use, so I think it's feasible to have people write directly in Pedant, but many people may be intimidated by the idea of writing in a programming language. Would friendlier syntax documentation help make Pedant more approachable? (The current syntax documentation assumes familiarity with programming languages.) Maybe a tutorial video on how to write a Pedant file?

I think the current documentation is too technical, or at least, it would be good to write a non-technical guide to the key features of Pedant. I also don't understand power units.

I think some of the syntax could be more intuitive. For example,

present_value payment discount duration =
  payment * (1 - discount ^ (-duration)) / ln discount

might have a more obvious meaning if written as

present_value(payment, discount, duration) =
  payment * (1 - discount ^ (-duration)) / ln discount

though the syntax that's possible is constrained by Haskell's.
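
As a sanity check on the formula itself, here's a minimal Haskell sketch of the same present-value calculation (the name presentValue and the example numbers are my own; I'm assuming discount is a per-period discount factor greater than 1 and that discounting is continuous, which is what dividing by ln discount implies):

-- Present value of a constant payment stream under continuous discounting:
-- integral from 0 to duration of payment * discount^(-t) dt
presentValue :: Double -> Double -> Double -> Double
presentValue payment discount duration =
  payment * (1 - discount ** negate duration) / log discount

main :: IO ()
main = print (presentValue 100 1.04 10)
-- roughly 827.2: 100 per period for 10 periods at a 4% discount rate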
