I'm a theoretical CS grad student at Columbia specializing in mechanism design. I write a blog called Unexpected Values which you can find here: https://ericneyman.wordpress.com/. My academic website can be found here: https://sites.google.com/view/ericneyman/.
This is probably my favorite proposal I've seen so far, thanks!
I'm a little skeptical that warnings from the organization you propose would have been heeded (especially by people who don't have other sources of funding, for whom relying on FTX was the only option), but perhaps if the organization had sufficient clout, this would have put pressure on FTX to engage in less risky business practices.
I think this fails (1), but more confidently, I'm pretty sure it fails (2). How are you going to keep individuals from taking crypto money? See also: https://forum.effectivealtruism.org/posts/Pz7RdMRouZ5N5w5eE/ea-should-taboo-ea-should
I think my crux with this argument is "actions are taken by individuals". This is true, strictly speaking; but when e.g. a member of U.S. Congress votes on a bill, they're taking an action on behalf of their constituents, and affecting the whole U.S. (and often world) population. I like to ground morality in questions of a political philosophy flavor, such as: "What is the algorithm that we would like legislators to use to decide which legislation to support?". And as I see it, there's no way around answering questions like this one, when decisions have significant trade-offs in terms of which people benefit.
And often these trade-offs need to deal with population ethics. Imagine, as a simplified example, that China is about to deploy an AI that has a 50% chance of killing everyone and a 50% chance of creating a flourishing future full of the kinds of lives many longtermists like to imagine. The U.S. is considering deploying its own "conservative" AI, which we're pretty confident is safe, and which will prevent any other AGIs from being built but won't do much else (so humans might be destined for a future that looks like a moderately improved version of the present). Should the U.S. deploy this AI? It seems like we need to grapple with population ethics to answer this question.
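To make the trade-off concrete, here's a toy expected-value calculation. Every number in it is made up purely for illustration (the 50/50 probabilities from the scenario above, plus arbitrary value assignments under two stylized population-ethics views); the point is just that the verdict flips depending on which view you take.

```python
# Toy numbers only -- not a claim about actual values or probabilities.
p_flourish, p_extinction = 0.5, 0.5   # outcomes if China's AI is deployed

# Hypothetical value assignments (arbitrary units) under two stylized views.
# Total view: the flourishing future's astronomical population dominates.
total_view = {"flourish": 1e15, "extinction": 0.0, "safe_present_plus": 1e3}
# Person-affecting-ish view: only currently existing people count, so the
# flourishing future isn't worth vastly more than a good present, and
# extinction mostly registers as harm to people alive today.
person_affecting = {"flourish": 2e3, "extinction": -1e3, "safe_present_plus": 1e3}

for name, v in [("total view", total_view), ("person-affecting view", person_affecting)]:
    ev_gamble = p_flourish * v["flourish"] + p_extinction * v["extinction"]
    ev_safe = v["safe_present_plus"]
    choice = "let China's AI run" if ev_gamble > ev_safe else "deploy the conservative AI"
    print(f"{name}: EV(gamble) = {ev_gamble:.3g}, EV(safe) = {ev_safe:.3g} -> {choice}")
```

Of course, this bakes the population-ethics question into the value assignments; that's exactly the point.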
(And so I also disagree with "I can’t imagine a reasonable scenario in which I would ever have the power to choose between such worlds", insofar as you'll have an effect on what we choose, either by voting or more directly than that.)
Maybe you'd dispute that this is a plausible scenario? I think that's a reasonable position, though my example is meant to point at a cluster of scenarios involving AI development. (Abortion policy is a less fanciful example: I think any opinion on the question built on consequentialist grounds needs to either make an empirical claim about counterfactual worlds with different abortion laws, or else wrestle with difficult questions of population ethics.)
Does anyone have an estimate of how many dollars donated to the campaign are about equal in value to one hour spent phonebanking? Thanks!
I guess I have two reactions. First, which of the categories are you putting me in? My guess is you want to label me as a mop, but "contribute as little as they reasonably can in exchange" seems an inaccurate description of someone who's strongly considering devoting their career to an EA cause; also, I really enjoy talking about the weird "new things" that come up (like, idk, actual trade between universes during the long reflection).
My second thought is that while your story about social gradients is a plausible one, I have a more straightforward story about who EA should accept, which I like more. My story is: EA should accept/reward people in proportion to (or rather, as a monotonically increasing function of) how much good they do.* For a group that tries to do the most good, this pretty straightforwardly incentivizes doing good! Sure, there are secondary cultural effects to consider, but I do think they should be thought of as secondary to doing good.
*You can also reward people for trying to do good to the best of their ability. I think there's a lot of merit to this approach, but it might create some not-great incentives of the form "always look like you're trying" (regardless of whether you're really trying effectively).
I may have misinterpreted what exactly the concept-shaped hole was. I still think I'm right about them having been surprised, though.
If it helps clarify: the community builders I'm talking about are some of the Berkeley(-adjacent) longtermist ones. As some sort of signal that I'm not overstating my case here, one messaged me to say that my post helped them plug a "concept-shaped hole", a la https://slatestarcodex.com/2017/11/07/concept-shaped-holes-can-be-impossible-to-notice/
Great comment, I think that's right.
I know that "give your other values an extremely high weight compared with impact" is an accurate description of how I behave in practice. I'm kind of tempted to bite that same bullet when it comes to my extrapolated volition -- but again, this would definitely be biting a bullet that doesn't taste very good (do I really endorse caring about the log of my impact?). I should think more about this, thanks!
Yup -- that would be the limiting case of an ellipse tilted the other way!
The idea for the ellipse is that what EA values is correlated (but not perfectly) with my utility function, so (under certain modeling assumptions) the space of most likely career outcomes is an ellipse; see e.g. here.
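In case the geometry is unclear, here's a minimal sketch of the kind of model I have in mind. Everything in it is an assumption chosen for illustration: unit variances, a correlation of 0.6, and joint Gaussianity of "value to EA" and "value to me". The equal-probability-density contours of such a distribution are tilted ellipses, and the direction of the tilt comes from the sign of the correlation.

```python
import numpy as np

# Illustrative model: (value to EA, value to me) jointly Gaussian and correlated.
rho = 0.6                      # assumed correlation; the exact number is made up
cov = np.array([[1.0, rho],
                [rho, 1.0]])   # unit variances for simplicity

# An equal-density contour satisfies z^T cov^{-1} z = const: a tilted ellipse
# whose axes point along the eigenvectors of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(cov)
print("ellipse axis directions (columns):\n", eigvecs)   # ~ the [1,1] and [1,-1] diagonals
print("relative axis lengths:", np.sqrt(eigvals))

# Sampling the model shows the same tilted point cloud empirically.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)
print("empirical correlation:", round(float(np.corrcoef(samples.T)[0, 1]), 2))
```

With rho negative, the long axis flips to the other diagonal, which is the "tilted the other way" limiting case mentioned above.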
Great question -- you absolutely need to take that into account! You can only bargain with people who you expect to uphold the bargain. This probably means that when you're bargaining, you should weight "you in other worlds" in proportion to how likely they are to uphold the bargain. This seems really hard to think about and probably ties in with a bunch of complicated questions around decision theory.
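As a rough sketch of the weighting I have in mind (all probabilities and payoffs below are hypothetical): discount each counterpart-world by the chance that the "you" there actually follows through, and commit to the bargain only if it's still worth it under those weights.

```python
# Hypothetical numbers only; this is just the weighting scheme, not a claim
# about how to resolve the underlying decision-theory questions.
worlds = [
    # probability I assign to the world, chance the "me" there upholds the
    # bargain, and what I gain in my world if they do
    {"p_world": 0.4, "p_uphold": 0.9, "gain_if_upheld": 10.0},
    {"p_world": 0.3, "p_uphold": 0.5, "gain_if_upheld": 10.0},
    {"p_world": 0.3, "p_uphold": 0.1, "gain_if_upheld": 10.0},
]
my_cost_of_upholding = 4.0  # what keeping the bargain costs me in my own world

# Weight each world's payoff by how likely its counterpart is to follow through.
expected_gain = sum(w["p_world"] * w["p_uphold"] * w["gain_if_upheld"] for w in worlds)
print(f"expected gain from counterparts: {expected_gain:.2f}")
print("commit to the bargain" if expected_gain > my_cost_of_upholding else "don't commit")
```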