
EA needs its own risk assessment team. I'm sure I'm not the only one who looked at FTX and quietly thought, "this is crypto money, and it could vanish at any time." We do a lot of unasked-for risk management on behalf of the planet. How can we fill that role if we can't even manage risk adequately for the movement itself?

EA risk management should focus on things like:

  • Protecting grantees from making major life changes on the strength of a grant before the funding is fully locked in.
  • Preventing organizations from disbursing grants until they're sure they won't have to claw them back.
  • Preventing the EA movement from spreading infohazards.
  • Considering the risks of aims like movement growth, media engagement, and branding organizations as "part of EA."

EA-style utilitarianism is to morality as BMI is to health. Both are useful for describing a rough "healthy range" and as population-level metrics.

At the individual level, we should be more careful. For some people, these metrics serve as a helpful guide in forming moral or health habits and decisions. For others, they feed disordered thought and action, and it may be best to ignore some of the metrics, culture, and activities surrounding weight loss or morality altogether.

When we manage weight, we have to take it seriously without getting obsessive. Counting calories and weighing yourself isn't the same thing as eating less or more healthily. Participating in EA culture, having EA status, and thinking about EA ideas isn't the same thing as being effectively altruistic.

Precisely prescribing how each individual should eat on a day-to-day basis is beyond the capacity of the medical system. Analogously, EA can't tell you exactly what you should do with your time and money to make the world a better place. It can offer general guidance and create programs for you to try. At some point, though, most people will have to branch off from EA, create a set of moral behaviors that make sense for their lifestyle, and focus on maintaining those habits. Some will remain heavily involved in EA throughout their lives; some will come back for regular "checkups"; and some will find it distasteful for whatever reason and stay away.

Think of the EA movement as a nascent, secular "moral system," analogous to the "medical system" (albeit with many important differences; I'd never want to see EA adopt all the properties of any currently existing medical system).

I'm going to be exiting the online EA community for an indefinite period of time. Anyone who'd like to keep in touch is welcome to PM me; I check my messages occasionally and will share my personal contact info when I do. Best wishes.

Request for beta readers for my new post "Unit Test Everything"

My post is LessWrong-centric, but relevant to EA since it is about how to accomplish practical tasks in the real world, especially ones involving a lot of uncertainty or novelty.

It's had a few editing passes, and I'd love to get some fresh eyes on it. If you're willing to give it a read, please let me know in the comments or via PM!

Utility monsters only undermine the argument for utilitarianism insofar as the threat of monopoly undermines the argument for capitalism.

Today, I discovered a way to improve how I communicate proposals to others.

Normally, I might say “we should build a lot more housing in this city.”

This meets lots of resistance, in ways that lead to circular and confusing arguments on both sides.

Now, I use something like “Would you support creating a commission that would plan and permit building a lot more housing, while also making sure there’s still lots of space for parks, small businesses, and good transit options? The aim would be to bring down rents and the cost of home ownership, reduce homelessness, and keep the city vibrant.”

I think the key here is highlighting that this would be a thoughtful and deliberate process, one that takes multiple goals into account and looks for smart development options. People want to feel like the projects affecting them come from a process they can see and trust. It’s also a softer way of putting it.

This might be effective for other issues as well.

“We should study how to cure aging” could be presented as “Would you support funding research into the fundamental causes of aging, and potentially how to slow that process down?”

“We should get rid of the Foreign Dredge Act and do a lot more dredging in American ports” could be “Would you support changing the law to make it easier to maintain our ports, as long as the EPA reviewed these projects so that environmental damage was kept to a minimum?”
