I am a rising junior at the University of Chicago (co-president of UChicago EA and founder of Rationality Group). I am mostly interested in philosophy (particularly metaethics, formal epistemology, and decision theory), economics, and entrepreneurship.
I also have a Substack where I post about philosophy (ethics, epistemology, EA, and other stuff). Find it here: https://substack.com/@irrationalitycommunity?utm_source=user-menu.
Reach out to me via email at dnbirnbaum@uchicago.edu.
If anyone has opportunities to do effective research in the philosophy space (or to apply philosophy to real life or a related field), or any entrepreneurial opportunities, I would love to hear about them. Feel free to DM me!
I can help with philosophy stuff (maybe?) and organizing school clubs (maybe?)
Yea, it's unclear whether these self-reports will be reliable, but I agree that this could be true (and I briefly mention something like it: "Broadly, AW has high tractability, enormous current scale, and stronger evidence of sentience—at least for now, since future experiments or engineering relevant to digital minds could change this.").
This seems to assume that EA funds ought to be distributed “democratically” by people who identify as EAs or EA leaders. I don’t buy that.
If the goal is good resource allocation, we want funding decisions to track competence about cause prioritization, not representativeness. Randomly sampled EAs—many of whom haven’t spent much time thinking seriously about cause prioritization—don’t seem like the right decision-makers.
And it’s also not obvious that “EA leaders,” as such, are the right allocators either. The relevant property isn’t status or identity, but judgment, track record, and depth of thinking about cause prioritization.
I think one can reasonably ask this question about consciousness/welfare more broadly: how does one have access to one's own consciousness/welfare?
One idea is that many philosophers think one, by definition, has immediate epistemic access to one's conscious experiences (though whether those show up in reports is a different question, which I try to address in the piece). I also think there are some phenomenological reasons to believe this.
Another idea is that we have at least one instance where one supposedly has access to their conscious experiences (humans), and it seems like this shows up in behavior in various ways. While I agree with you that our uncertainty grows as we get farther from humans (e.g., to digital minds), I still think the human case gives you some evidential weight.
Finally, I think that if one takes your point too far (i.e., there is no reason to trust that one has epistemic access to one's conscious states), then we can't be sure that we are conscious, which I think can be seen as a reductio (at least of the boldest versions of the claim).
Let me know if something I said doesn't make sense or if I'm misinterpreting you.
I agree this is a super hard problem, but I do think there are somewhat clear steps to be made toward progress (e.g., making self-reports more reliable). I am biased, but I did write this piece, which touches on this problem a bit and which I think is worth checking out.
Yea, I agree with this a lot.
One thing to be wary of, though: one can be inclined to say something like "I'm an actor, so I don't have to do any of the accuracy-based forecasting, etc.," which I think is definitely wrong.
One should choose which direction to act in based on some mix of accuracy/EV reasoning plus vibes. Otherwise, there are infinitely many ways to act, and most of them, without any foresight, fail even with the type of agency you're describing.
In addition, once you are inside the project you are acting on, you (of course) shouldn't constantly be doing the accuracy/EV thing. Sometimes, though, it's probably worth taking a step back and doing it to consider opportunity cost (say, on some marked date every x amount of time, to avoid decision/acting paralysis).
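To make that concrete with made-up numbers: if sticking with your current project has, say, a 60% chance of an outcome you'd value at 100, its rough EV is 0.6 × 100 = 60; if an alternative has a 20% chance of an outcome you'd value at 400, its rough EV is 0.2 × 400 = 80. A periodic check like that is enough to notice when the opportunity cost of staying has gotten large, without redoing the calculation constantly.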
Thanks for the comment and good points.
What I meant is that they can be MORE politically charged/mainstream/subject to motivated reasoning. I definitely agree that current incentives around AI don't perfectly track good moral reasoning.