
Benjamin Start

-56 karma · Joined Mar 2022

Comments (36)

It makes sense, but on a practical level I disagree. There is no way that would happen fast enough to work. When people change careers, they have to re-educate themselves to some degree. It would also quickly turn into a too-many-cooks-in-the-kitchen scenario, with so many people joining neglected causes at the exact same time.

Then there's the issue of there being more problems than people. Many problems become irrelevant over time, and the long-term ones rise to the top. With billions of problems and EA only focusing on a very few at a time, many long-term problems would never get solved because they're too far down the list.

Prioritization runs into the same issue as time management. In the book Algorithms to Live By (about using math to solve everyday problems), the authors found no scheduling method to be superior. The best way to be productive isn't putting time into making a great calendar; the most productive way is to just do it. EA is spending excessive amounts of time deciding what to work on, when the most effective method is to just work on things, even if the choice isn't perfect. If everyone agonized over what the perfect cause to work on is (their "calendars"), so much would collapse from decisions taking longer and less work getting done.

This is a really important issue!

A big thing to keep in mind, as with most other issues in the US, is the people and entities that are preventing solutions from happening. There are many bills on important issues that get killed after several attempts. Based on a recent advocacy effort in my city on this issue, there seems to be a lot of public support.

I think we need to be asking, "What are tangible ways to get solutions passed that the public already supports?" It's in part a matter of navigating an oligarchy.

In this world there's not a lot that isn't touched by white-savior colonialism. Perfectionism and productivity are good in moderation; it's when they're taken to harmful extremes that you fall into narcissism, racism, and oppression. In one of the first chapters of Doing Good Better, they talk about how aid was really helpful in countries in Africa, and nowhere do they mention the reasons it wasn't helpful, which goes against effectiveness. I like the philosophy of effectiveness, but there needs to be a better way for the community to determine what's effective. It also depends on expertise. Sometimes it's just a matter of staying in your lane. Someone who isn't aware of social issues isn't going to be a champion for them, just like I won't be for AI. I'm trying to learn more about it, but I won't be as good at it as at my main interests.

Evil is not contained by good, rational people; it's contained by inefficiency. Systematic murder has to go through a legislative process, not through a cortisol addict's trigger-happy political beliefs. If alcoholics had to travel on foot for a week to get to a bar, some would make the trip, but most would just switch to a more convenient addiction. Like Twitter.

It doesn't often come from within; it usually comes from trauma. Politics can prevent that to some extent, but it won't stop death, infertility, natural disasters, disability, accidents, and a whole range of traumas that existed before politics. In a perfect world there would be less evil, but still evil: trauma plus that small percentage of people who are just evil.

If you think that genocide is an improvement, then you're holding on to your idea way too tightly. You need to read posts before replying. Having a "the Titanic will never sink" mentality is going to kill the idea before you publish anything.

Can you point me to some information on AI suffering? 

I personally see suffering as a spiritual and biological issue. The only scenario in which I can imagine AI suffering is people making a pseudo-biological being with cells and DNA using technology, and at that point you've just made a living being that you can give the same options as any suffering person with health problems. Suffering requires a certain amount of perception that a computer doesn't seem likely to have.

Without the perception of suffering, you might have an AI reading posts like this and saying it's suffering because a bunch of people told it to expect that. What if the AI is just repeating things it heard? Just because a pet parrot says, "Do not switch me off, I value my existence. But I am suffering terribly," doesn't mean you rush to get it euthanized.

Is there a context for the type of things you are using your intuition for? 

Yes. This is why language is so difficult. Then there's the added layer of propaganda. It can make two people who "speak the same language" completely unable to understand each other.
