Daniel_Eth

Comments

Has anyone found an effective way to scrub indoor CO2?

Also the cost of noise, and possibly of outdoor pollution (though that can be addressed with HEPA filters and ozone filters)

2018-2019 Long-Term Future Fund Grantees: How did they do?

"There is a part of me which finds the outcome (a 30 to 40% success rate) intuitively disappointing"

Not only do I somewhat disagree with this conclusion, but I don't think this is the right way to frame it. If we discard the "Very little information" group, then there's basically a three-way tie between "surprisingly successful", "unsurprisingly successful", and "surprisingly unsuccessful". If a similar number of grants are surprisingly successful and surprisingly unsuccessful, the main takeaway to me is good calibration about how successful funded grants are likely to be (see the toy illustration below).
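
A toy illustration of the calibration point, with made-up counts (not taken from the post): if roughly as many grants land above ex-ante expectations as below them, the positive and negative surprises cancel out, which is what good calibration looks like regardless of the raw success rate.

```python
# Toy illustration of the calibration point; counts are made up,
# not taken from the grant-evaluation post.
from collections import Counter

outcomes = Counter({
    "surprisingly successful": 5,
    "unsurprisingly successful": 5,
    "surprisingly unsuccessful": 5,
    "very little information": 3,  # discarded, as in the comment above
})

# If ex-ante expectations were well calibrated, positive and negative
# surprises should roughly cancel out.
balance = outcomes["surprisingly successful"] - outcomes["surprisingly unsuccessful"]
print(f"Surprise balance: {balance:+d}")  # near 0 => well calibrated
```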

Kardashev for Kindness

"I definitely don't think that a world without suffering would necessarily be a state of hedonic neutral, or result in meaninglessness"

Right, it wouldn't necessarily be neutral – my point was that your definition of Type III allows for a neutral world, not that it requires one. I think it makes more sense for the highest classification to be reserved specifically for a very positive world, as opposed to something that could be anywhere from neutral to very positive.

Event-driven mission hedging and the 2020 US election

If you expect your donation to be ~10x more valuable when one particular party is in power, then it probably makes more sense to just hold* your money until they are in power (a rough sketch of the tradeoff follows the footnote). I suppose the exception here would be if you don't expect the opportunity to come up again (e.g., if it's about a specific politician being president, or one party holding a supermajority), but I don't see a Biden presidency as presenting such a unique opportunity.

*presumably held as an investment in the meantime
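
To make the hold-vs-donate-now comparison concrete, here's a minimal sketch. Only the ~10x multiplier comes from the comment above; the other parameters (investment return, expected wait, chance the opportunity ever arises) are hypothetical assumptions.

```python
# Minimal sketch comparing "donate now" vs. "hold (invested) and wait".
# Only the ~10x multiplier comes from the comment; everything else is
# a hypothetical assumption for illustration.

def donate_now(amount: float) -> float:
    """Impact of donating immediately, at the baseline 1x multiplier."""
    return amount

def hold_and_wait(
    amount: float,
    annual_return: float = 0.05,     # assumed return while held as an investment
    years_until_power: float = 4.0,  # assumed expected wait for the party to hold power
    multiplier: float = 10.0,        # "~10x more valuable" from the comment
    prob_opportunity: float = 0.9,   # assumed chance the opportunity ever arises
) -> float:
    """Expected impact of investing the money and donating once the
    favored party is in power (zero impact if that never happens)."""
    grown = amount * (1 + annual_return) ** years_until_power
    return prob_opportunity * grown * multiplier

if __name__ == "__main__":
    amt = 10_000.0
    print(f"Donate now:    {donate_now(amt):>10,.0f}")
    print(f"Hold and wait: {hold_and_wait(amt):>10,.0f}")
```

On these made-up numbers, waiting wins unless the chance of the opportunity ever arising drops below roughly 8% – which is why "the opportunity never comes up again" is the relevant exception.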

Kardashev for Kindness

So I like this idea, but I think the exclusively suffering-focused viewpoint is misguided. In particular:
"In a Type III Wisdom civilization, nothing and no one has to experience suffering at all, whether human, non-human animal, or sentient AI"

^this would be achieved by a "society" consisting entirely of sentient AIs that were always at hedonic neutral. Such lives would involve experiencing zero joy, wonder, meaning, friendship, love, etc – just apathetic perception of the outside world and the meaningless pursuit of activity. It's hard to imagine this would be the absolute pinnacle of civilizational existence.

Edit: to be clear, I'm not arguing "for" suffering (or that suffering is necessary for joy), just "for" pleasure in addition to the elimination of suffering.

A Viral License for AI Safety

I'm not sure how well the analogy holds. Under the GPL, for-profit companies would lose their profits (since they'd have to open-source their derivative works). Under the AI safety analog, they'd be able to keep 100% of their profits, so long as they followed XYZ safety protocols – protocols that push them toward goals they already want, since none of the major tech companies wants to cause human extinction.

Linch's Shortform

So, framing this in the inverse way: if you get a windfall of time from "life" getting in the way less, you'd spend that time mostly on the most important work instead of on things like extra meetings. That seems good. Perhaps it would be better to spend less of your time on meetings and more on research, but (I'd guess) that's true whether or not "life" is getting in the way.

Thoughts on being overqualified for EA positions

It seems like one solution would be to pay people more. I get the sense that some in EA are against this because they worry high pay will attract people who are just in it for the money – but that's an argument for paying people perhaps ~20% less than they'd get in the private sector, not ~80% less (which seems to be what some EA positions pay, relative to the skills they want in a hire).
