

Why doesn't the EA forum have curated posts or sequences?

Wasn't it announced at launch that this would be implemented at some point?

Combination Existential Risks
I think people who think about existential risk should devote some of their energy to thinking about risks that are not themselves existential but might be existential if combined with other risks. For example, climate change is not an existential risk, but it plausibly plays a role in many combination existential risks, such as by increasing international tensions or by rendering much of the globe difficult to inhabit. Similarly, many global catastrophic risks may in fact be existential if combined with other global catastrophic risks, such as a nuclear war combined with a pandemic.

I think those would be called 'context risks'. I haven't encountered the term in many places, but I first came across it in Phil Torres' book about x-risks.

What’s the Use In Physics?

Very good post, thank you for collecting everything.

I'd be interested in a closer look into the field of energy (especially nuclear fusion and modern nuclear energy technology); I don't really know whether there are neglected areas or positions there.

What’s the Use In Physics?

Not an expert on the foundations of QM, but a few points on your question:

  • For some interpretations the mathematics does change somewhat (e.g. Bohmian Mechanics, Collapse Theories)
  • Some interpretations actually do make testable predictions (like the Many Worlds Interpretation), but they tend to be quite hard to test in practice
  • Some people have argued that some interpretations follow more naturally from the mathematics. It's pretty clear in my opinion that Bohmian Mechanics postulates additional structure on top of the mathematics we have now, while many-worlds does not really do that.
[Link] "Would Human Extinction Be a Tragedy?"
How many human lives would it be worth sacrificing to preserve the existence of Shakespeare’s works? If we were required to engage in human sacrifice in order to save his works from eradication, how many humans would be too many?

This strikes me as a good way of making people think about the distinction between instrumental and terminal values.

Critique of Superintelligence Part 1

I don't see how using Intelligence (1) as a definition undermines the orthogonality thesis.

Intelligence(1): Intelligence as being able to perform most or all of the cognitive tasks that humans can perform. (See page 22)

This only makes reference to abilities, not to the underlying motivation. Looking at high-functioning sociopaths, you might argue we have an example of agents that often perform very well at almost all human abilities but still have attitudes towards other people that may be quite different from most people's, and lack a lot of ordinary inhibitions.

This should become clear if one considers that ‘essentially all human cognitive abilities’ includes such activities as pondering moral dilemmas, reflecting on the meaning of life, analysing and producing sophisticated literature, formulating arguments about what constitutes a ‘good life’, interpreting and writing poetry, forming social connections with others, and critically introspecting upon one’s own goals and desires. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tiling the universe with paperclips.

I don't agree. I can easily imagine an agent that argues for moral positions convincingly by analysing huge amounts of data about human preferences, that uses statistical techniques to infer the behaviour and attitudes of humans, and that then uses this knowledge to maximize something like positive affection or trust, among many other things.

How Effective Altruists Can Be Welcoming To Conservatives

Good and important points. I feel the same care should perhaps also be extended to people who hold various kinds of anti-capitalist beliefs.

Response to a Dylan Matthews article on Vox about bipartisanship

I share your irritation with this article. It struck me as a normal Vox opinion piece, which never should have been posted on Future Perfect.

I think some of your points of criticism might be explained by the fact that the article had to, or was meant to, stay below a certain length. But I also believe that when Dylan writes for Future Perfect about such political topics, he should make sure to argue every point carefully and be especially rigorous.

Announcing PriorityWiki: A Cause Prioritization Wiki

was trying to figure out how opinionated the Wiki should be

Certainly an important question. 80k explains why they don't recommend certain careers, and it's important for them to continue to do so. In my opinion, we should make our reasons for considering a cause effective very clear, so they can be challenged. In practice, of course, how such an entry comes across depends strongly on the wording. I would prefer to word it like "Cause X has traditionally been considered not neglected enough/not tractable/too small by EA organisations. ... According to that reasoning you'd have to show Y to establish X as an effective cause. ..." instead of "X is not effective, because ...".
