
Halffull

566 karma · Joined Nov 2017


Comments
116

This just seems like you're taking on one specific worldview and holding every other worldview up to it to see how it compares.

Of course, this is an inherent problem with worldview diversification: how to define what counts as a worldview, and how to choose between them.

But still, intuitively, if your meta-worldview screens out the vast majority of real-life views, that seems undesirable. The meta-worldview that coherency matters is important, but it should be balanced against other meta-worldviews, such as that what matters is how many people hold a worldview, or how much harmony it creates.

Why do you think that the worldviews need strong philosophical justification? It seems like this may leave out the vast majority of worldviews.

I think "thought leader" sometimes means "has thoughts at the leading edge" and sometimes means "leads the thoughts of the herd on a subject," and that there is sometimes a deliberate ambiguity between the two.

one values humans 10-100x as much


This seems quite low, at least from the perspective of revealed preferences. If one indeed rejects unitarianism, I suspect that the actual willingness to pay to prevent the death of a human vs. an animal is something like 1,000x to 10,000x.

The executive summary is entirely hallucinated.

"To what extent is money important to you?" and found that was much more important than money itself: money has a much bigger effect on happiness if you *think* money is important (a


Or perhaps you think money is important if it has a bigger effect on your happiness (based on e.g. environmental factors and genetic predisposition)? In other words, maybe these people are making correct predictions about how they work, rather than creating self-fulfilling prophecies? It is at least worth considering that the causality goes this way.

AND it found people who equate success with money are less happy.  

This, of course, is slight evidence that the causality goes in the direction you said.

I think it's also easy to make a case that longtermist efforts have increased x-risk from artificial intelligence, with the money and talent that grew some of the biggest hype machines in AI (DeepMind, OpenAI) coming from longtermist places.

It's possible that EA has shaved a couple of counterfactual years off the time to catastrophic AGI, compared to a world where the community wasn't working on it.

I'd also add Vitalik Buterin to the list.

If you're going to have a meeting this short, isn't it better to e.g. send a message or email instead? Having very short conversations like this means you've wasted a large slot of time on your EAG calendar that you could have used for the kinds of conversations you can only have in person at EAG.

It seems pretty clear that being multiplanetary is more anti-fragile: it provides more optionality, allows for more differentiation and evolution, and presents stronger challenges.
