
RedStateBlueState

1070 karma · Joined Apr 2022

Comments (88)

All punishment is tragic, I suppose, in that it would be a better world if we didn't have to punish anyone. But I just don't think the fact that SBF on some level "believed" in EA (whatever that means, and if it is even true), despite not acting in accordance with EA's principles, is a reason that his punishment is more tragic than anyone else's.

This is just not true if you read about the case: he obviously knew he was improperly taking user funds and told all sorts of incoherent lies to explain it, and it's really disappointing to see so many EAs continue to believe he was well-intentioned. You can quibble about the length of the sentence, but he broke the law, and he was correctly punished for it.

Suppose someone were to convince you that the interventions GiveWell pursues are not the best way to improve "global capacity", and that a better way would be to pursue more controversial/speculative causes like population growth or long-run economic growth. I just don't see EA ever reorienting GHW-worldview spending toward controversial causes like these. The controversial stuff will always have to compete with animal welfare and AI x-risk. If your worldview categorization does not always make the GHW worldview center on non-controversial causes, it is practically meaningless. This is why I was so drawn to this post - I think you correctly point out that "improving the lives of current humans" is not really what GHW is about!

The non-controversial stuff doesn't have to be anti-malaria efforts or anything else GiveWell currently pursues; I agree with you that we shouldn't dogmatically accept the current causes. But you should really be defining your GHW worldview such that it always centers on non-controversial causes. Is this kind of arbitrary? You bet! As you state in this post, there are at least some reasons to stay away from weird causes, so it might not be totally arbitrary. But honestly it doesn't matter whether it's arbitrary or not; some donors are just really uncomfortable with pursuing philosophical weirdness, and GHW should be for them.

How are you defining global capacity, then? This is being argued in other replies better than I could, but I think there's a good chance that the most reasonable definition implies optimal actions very different from GiveWell's. Although I could be wrong.

I don't really think the important part is the metric - the important part is that we're aiming for interventions that agree with common sense and don't require accepting controversial philosophical positions (beyond rejecting pro-local bias, I guess).

Love the post, don't love the names given.

I think "capacity growth" is a bit too vague, something like "tractable, common-sense global interventions" seems better.

I also think "moonshots" is a bit derogatory; something like "speculative, high-uncertainty causes" seems better.

This post is a great example of why the term "AI alignment" has proven a drag on AI x-risk work. The concern is, and has always been, that AI would dominate humanity the way humans dominate animals. All of the talk about aligning AI to "human values" leads to pedantic posts like this one arguing about what "human values" are and how likely AIs are to pursue them.

Altman, like most people with power, doesn't have a totally coherent vision for why his gaining power is beneficial for humanity, but he can come up with some vague values or poorly-thought-out logic when pressed. He values safety, to some extent, but is skeptical of attempts to cut back on progress in the name of safety.

Hmm, I still don't think this response quite addresses the intuition. Various groups wield outsized political influence owing to their higher rates of voting - seniors, many religious groups, people with postgraduate degrees, etc. Nonetheless, they vote in a lot of uncompetitive races where their votes would seem not to matter. It seems wrong that an individual vote of theirs has much EV in an uncompetitive race. On the other hand, it seems basically impossible to coordinate on a strategy where there is still a really strong norm of voting in competitive races but not in uncompetitive ones (and besides, it's not clear that would even suffice, given that uncompetitive races would become competitive in the absence of a very large voting bloc). I think all the empirical evidence shows that groups that turn out more in competitive races also do so in uncompetitive races.
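To put rough numbers on that intuition, here is a minimal sketch of the standard pivotal-vote calculation under a toy binomial model (the electorate size and vote shares below are hypothetical, not drawn from any real race): a vote only changes the outcome if it breaks an exact tie, and the chance of a tie collapses exponentially as a race becomes less competitive.

```python
import math

def log10_prob_pivotal(n: int, p: float) -> float:
    """Log10 probability that n other voters split exactly evenly,
    where each votes for candidate A independently with probability p."""
    k = n // 2  # assumes n is even for simplicity
    # log of the binomial coefficient C(n, n/2), via lgamma to avoid overflow
    log_binom = math.lgamma(n + 1) - 2 * math.lgamma(k + 1)
    log_tie = log_binom + k * math.log(p) + k * math.log(1 - p)
    return log_tie / math.log(10)

n = 1_000_000  # hypothetical electorate size
for p in (0.50, 0.51, 0.60):
    print(f"expected vote share {p:.2f}: P(pivotal) ~ 1e{log10_prob_pivotal(n, p):.0f}")
```

Under this toy model, a vote's EV (pivotality times the value of the better outcome) in a 60/40 race is astronomically smaller than in a 50/50 race, which is why the coordination problem above - sustaining a voting norm only where votes are pivotal - is the crux.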

Sorry, I shouldn't have used the phrase "the fact that". Rephrased, the sentence should say "why would the universe taking place in an incomputable continuous setting mean it's not implemented?". I have no confident stance on whether the universe is continuous or not, just that I find the argument presented unconvincing.

That, and/or acausal decision theory is at play in the current election.
