Halffull's Comments

Halffull's Shortform

Something else in the vein of "things EAs and rationalists should be paying attention to with regard to Corona."

There's a common failure mode in large human systems where one outlier causes us to create a rule that leaves everyone in a worse equilibrium. In The Personal MBA, Josh Kaufman talks about someone taking advantage of a company's "buy any book you want" perk - so the company responds by making it so no one can get free books at all.

This same pattern has happened before in the US, after 9/11 - we created a whole bunch of security theater that caused more suffering for everyone, and gave the government far more power and far less oversight than is safe, because we overreacted to prevent one bad event without considering the counterfactual, invisible things we would be losing.

This will happen again with Corona: things will be put in place that are maybe good at preventing pandemics (or worse, at making people think they're safe from pandemics), but that create a million trivial inconveniences every day, adding up to more strife than they're worth.

These types of rules are very hard to repeal after the fact because of absence blindness - someone needs to do the work of calculating the cost/benefit ratio BEFORE they get implemented, then build a narrative convincing enough to push back against what seem like obvious, common-sense measures given the current climate and devastation.

What posts do you want someone to write?
Curious what you think is weird about the framing?

The problem framing is basically spot on, talking about how our institutions drive our lives. Like I said, basically all the points get it right and apply to broader systemic change like RadX, DAOs, etc.

Then, even though the problem is framed perfectly, the solutions section almost universally talks about narrow interventions related to individual decision-making, like improving calibration.

Growth and the case against randomista development

No, I actually think the post is ignoring x-risk as a cause area to focus on now. That makes sense under certain assumptions and heuristics (e.g. if you think near-term x-risk is highly unlikely, or you're using absurdity heuristics). I was more giving my argument for how this post could be compatible with Bostrom.

Growth and the case against randomista development
the post focuses on human welfare,

It seems to me that there's a background assumption of many global poverty EAs that human welfare has positive flowthrough effects for basically everything else.

I'm also very interested in how increased economic growth impacts existential risk.

At one point I was focused on accelerating innovation, but have come to be more worried about increasing x-risk (I have a question somewhere else on the post that gets at this).

I've since added a constraint into my innovation acceleration efforts, and now am basically focused on "asymmetric, wisdom-constrained innovation."

Growth and the case against randomista development

Let's say you believe two things:

1. Growth will have flowthrough effects on existential risk.

2. You have a comparative advantage in working on growth rather than on x-risk directly.

You can agree with Bostrom that x-risk is important and still think that you should be working on growth. This is very close to my own view of what I'm working on.

What posts do you want someone to write?

I think the framing is weird because of EA's allergy to systemic change, but in practice I think all of the points in that cause profile apply to broader change.

Halffull's Shortform

It's been pointed out to me on LessWrong that depressions actually save lives, which makes the "two curves" narrative much harder to make.

Halffull's Shortform

This argument has the same problem as recommending that people not wear masks, though: if you go from "save lives, save lives, don't worry about economic impacts" to "worry about economic impacts, it's as important as quarantine," you lose credibility.

You have to find a way to make nuance emotional and sticky enough to hit, rather than forgoing nuance as an information hazard; otherwise you lose the ability to influence at all.

This was the source of my "two curves" narrative, and I assume it would be the approach others would take if that was the reason for their reticence to discuss the issue.
