Bio

Disentangling "nature."
It is my favorite thing, but I want to know its actual value.
Is it replaceable? Is it useful? Is it morally repugnant? Is it our responsibility? Is it valuable?
"I asked my questions. And then I discovered a whole world I never knew. That's my trouble with questions. I still don't know how to take them back."

Comments (64)

Octopuses are some of the most intelligent creatures, with a fascinatingly alien path to getting there and an unrecognizable brain structure. I encourage anyone who doesn't know about octopus intelligence to look into it - they aren't social, don't teach each other skills, don't live long, and don't have centralized processing, yet they rank among the most intelligent animals we know of.

Something I felt was missing from the post was a mention of how intelligent the octopus and cephalopod species likely to be farmed actually are. I thought only a few octopus species were notably intelligent, and assume many sit at average or low levels of cognition for the animal world. I might prefer farming them to chickens and cows, depending on the species...

Your other points about why they would be a terrible subject for farming are compelling, and I appreciate you spelling them out so concisely. Even if the farmed species are only average in perceptiveness, they might still be far worse to farm than other animals.

In any case, I'm really glad you brought this to my attention and that you care about this subject!!

Very useful and illustrative. I especially like how you manage to tie the personal perspective and the group dynamics together. I was acquainted with this idea, but your write-up definitely illuminated aspects I had missed. I expect this to be useful to me and others!

I can't figure out why this didn't get more traction. This post seems extremely relevant and raises well-considered points that I'm surprised I've never encountered before. The subject seems fundamental to life-changing career decisions, and highly relevant to both EA earning-to-give and EA career impacts. I also can't spot any surface-level presentation reasons it might have been overlooked or prematurely dismissed.

Edit: Ah, I think what happened is that it was judged by the suggested actions, which readers saw when scrolling down to the outcomes/results. I am also much less positive that these are good approaches to addressing the problem. They are offered without much evidence, and transparently acknowledged as such, but that's potentially the post's biggest and most obvious fault.

Excellent post.

I'm not very involved with EA/politics, but I'd be interested in hearing discussion about how to improve decision making and institution design. For example, a fundamental problem with government bodies is that they seem to function well early on, when they are made up of people who believe in the goal and there is a strong, unified culture, but they suffer from malaise as the years pass and both people and systems become entrenched to the point that the goal is secondary. Incentive alignment decays until it is virtually nonexistent in many governmental bodies.

Of course, I also have a special interest in how government can address the misaligned incentives caused by externalities.

What about more political experiments - stronger states' rights, charter cities, special economic zones - as a way to move forward and demonstrate effectiveness or ineffectiveness without going through the dysfunction we currently see in the federal government?

And what about solving vetocracy at the local level through things like quadratic voting, systems that prevent gerrymandering, street votes, etc.?
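For anyone who hasn't run into quadratic voting, here is a minimal sketch of its core pricing rule (the function names are illustrative, not any real voting system's API): casting n votes on a single issue costs n² voice credits, so expressing intensity of preference is possible but increasingly expensive.

```python
# Quadratic voting pricing sketch: the total cost of n votes is n**2
# credits, so the marginal cost of each extra vote rises linearly (2n - 1).

def vote_cost(votes: int) -> int:
    """Voice credits needed to cast `votes` votes on a single issue."""
    return votes ** 2

def marginal_cost(votes: int) -> int:
    """Extra credits needed to go from votes - 1 to votes votes."""
    return vote_cost(votes) - vote_cost(votes - 1)

if __name__ == "__main__":
    for n in range(1, 5):
        print(f"{n} vote(s): {vote_cost(n)} credits total, "
              f"{marginal_cost(n)} marginal")
```

The quadratic cost curve is what targets vetocracy: a strongly motivated voter can still buy extra influence on the issue they care about, but only at a rapidly rising price.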

And anything else I haven't heard of that seems like a promising way to improve political outcomes!

This is great, thank you. Honestly, it feels a little telling that this has barely been explored? Despite being THE x-risk? I get that the intervention point comes before things reach this stage, but understanding the problem is pretty core to prevention.

A force smarter/more powerful than us is scary, no matter what form it takes. But we (EA) feel a little swept up in one particular vision of AI timelines that doesn't feel terribly grounded. I understand it's important to assume the worst, but it's also important to imagine what would be realistic and then intermingle the two. Maybe this is why the EA approach to AI risk feels blinkered to me: so much focus is on the worst possible outcome and far less on the most plausible outcome?

(Or maybe I'm just outside the circles, and all this is well-trodden ground I'm simply not privy to.)

I suggest adding your Anki deck to the EA Anki deck list!
(I took the liberty of adding your link, but didn't feel qualified to fully add an entry - please add it!)

What We Owe the Future: A Flashcard Summary
https://ankiweb.net/shared/info/1539708817

(Not my deck, but definitely an EA Anki deck!)
More information here.
 

Everyone wants to live in a better world, but it's very difficult to know how. Some people will tell you the problem is greed, that we don't help our neighbors, or that we are obsessed with materialism. But other people will tell you that spirituality is part of the problem, that local problems are a distraction from the big picture, and that desiring things is what drives us to improve the world.

With all these different ideas of the right way to a better world, getting everyone to believe one thing is impossible. It isn't even a good idea: if we all focus on one problem and one solution, we will suffer from all the other problems we set aside in order to focus on this one.

We have to try our best to navigate all these conflicting problems and solutions, and EA is a very good method for doing that. Trying harder to do right isn't enough (most people are already trying to do right!), though maybe we could convince them to try harder. I want people to care more and do more to make the world a better place, but I'm worried it's hard to convince people to change their lives. I think the bigger problem is that even when people try to do right, they often don't actually achieve the things they want to.
