I posted a couple months ago that I was working on an effective altruism board game. You can now order a copy online!
It's a cooperative game where you start out as a random human sampled from the real-world distribution of income and try to accumulate lots of human QALYs and animal QALYs and reduce existential risk, all while answering EA-related trivia questions, donating to effective charities, partaking in classic philosophy thought experiments, realizing your own private morality, and trying to avoid being turned into a chicken.
Ha, I think the problem is just that your formalization of Newcomb's problem is defined so that one-boxing is always the correct strategy, and I'm working with a different formulation. There are four forms of Newcomb's problem that jibe with my intuition, and they're all different from the formalization you're working with.
Newcomb's problem isn't a challenge to causal decision theory. I can solve Newcomb's problem by committing to one-boxing in any of a number of ways, e.g. by signing a contract or by building a reputation as a one-boxer. After the boxes have already been placed in front of me, however, I can no longer influence their contents, so I'd be better off two-boxing if the rewards outweighed the penalty, e.g. if it turned out the contract I signed was void, or if I didn't care about my one-boxing reputation because I didn't think I was going to play this game again in the f…
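To make the tension concrete, here's a quick evidential expected-value calculation under the standard (assumed) payoffs: $1,000,000 in the opaque box iff the predictor foresaw one-boxing, and $1,000 always in the transparent box. This is just a sketch of the conventional setup, not any particular formalization from the thread:

```python
def expected_value(strategy, accuracy):
    """Evidential expected payoff given the predictor's accuracy (0..1).

    Assumed payoffs: $1,000,000 in the opaque box if the predictor
    foresaw one-boxing; $1,000 always in the transparent box.
    """
    if strategy == "one-box":
        # Predictor correct -> opaque box is full.
        return accuracy * 1_000_000
    else:  # "two-box"
        # Predictor wrong (it predicted one-boxing) -> opaque box is full anyway,
        # and the two-boxer also pockets the transparent box.
        return (1 - accuracy) * 1_000_000 + 1_000

for acc in (0.5, 0.75, 0.99):
    print(acc, expected_value("one-box", acc), expected_value("two-box", acc))
```

Note that at 50% accuracy two-boxing comes out ahead; one-boxing only wins once the predictor is better than about 50.05% accurate. The causal-decision-theory point above is precisely that, once the boxes are placed, this correlation no longer runs through anything you can affect.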
I’m worried that people’s altruistic sentiments are ruining their intuition about the prisoner’s dilemma. If Bob were an altruist, then there would be no dilemma. He would just cooperate. But within the framework of the one-shot prisoner’s dilemma, defecting is a dominant strategy – no matter what Alice does, Bob is better off defecting.
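The dominance claim can be checked mechanically against a payoff matrix. The specific numbers below are illustrative assumptions; any matrix with temptation > reward > punishment > sucker's payoff gives the same result:

```python
# Bob's payoffs in a one-shot prisoner's dilemma (higher is better).
# Keyed by (Bob's move, Alice's move); numbers are illustrative.
payoff = {
    ("cooperate", "cooperate"): 3,  # mutual cooperation (reward)
    ("cooperate", "defect"):    0,  # sucker's payoff
    ("defect",    "cooperate"): 5,  # temptation
    ("defect",    "defect"):    1,  # mutual defection (punishment)
}

def dominates(a, b):
    """True if Bob's move `a` beats move `b` no matter what Alice does."""
    return all(payoff[(a, other)] > payoff[(b, other)]
               for other in ("cooperate", "defect"))

print(dominates("defect", "cooperate"))  # defecting is strictly dominant
```

An altruistic Bob is effectively playing a different game, one whose payoffs include Alice's welfare, which is why the dilemma dissolves for him.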
I’m all for caring about other value systems, but if there’s no causal connection between our actions and aliens’, then it’s impossible to trade with them. I can pump someone’s intuition by saying, “Imagine a wizard produc…
I could really have benefited from a list like this three or four years ago! I wasted a lot of time reading prestigious fiction (Gravity’s Rainbow, Infinite Jest, In Search of Lost Time, Ulysses) and academic philosophy – none of which I liked or understood – as well as a lot of sketchy pop psych.
If Doing Good Better, 80,000 Hours, The Life You Can Save, Animal Liberation, and Superintelligence are already taken, then I’d say the five most influential works I’ve read are:
All of Steven Pinker’s books
The Art and Craft of Problem Solving
Here Be Dragons: Sc…
I fixed the link.
I have a fully-formed EA board game that I debuted at EA Global in San Francisco a couple weeks ago. EAs seem to really like it! You can see over one hundred of the game's cards here: https://drive.google.com/open?id=0Byv0L8a24QNJeDhfNFo5d1FhWHc
The way the game works is that every player has a random private morality that they want to satisfy (e.g. preference utilitarianism, hedonism, sadism, nihilism) and all players also want to collaboratively achieve normative good (accumulating 1,000 human QALYs, 10,000 animal QALYs, and 10 x-risk points). Players get …