Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

This is helpful.

For what it's worth I find the upshot of (ii) hard to square with my (likely internally inconsistent) moral intuitions generally, but easy to square with the person-affecting corners of them, which is I guess to say that insofar as I'm a person-affector I'm a non-identity-embracer.

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

Well hello thanks for commenting, and for the paper!

Seems right that you'll get the same objection if you adopt cross-world identity. Is that a popular alternative for person-affecting views? I don't actually know a lot about the literature. I figured the most salient alternative was to not match the people up across worlds at all, which was why people say that e.g. it's not good for a(3) that W1 was brought about.

What does it mean to become an expert in AI Hardware?

So cool to see such a thoughtful and clear writeup of your investigation! Also nice for me, since I was involved in creating them, to see that 80k's post and podcast seemed to be helpful.

I think [advising on hardware] would involve working at one of the industries like those listed above and maintaining involvement in the EA community.

What I know about this topic is mostly exhausted by the resources you've seen, but for what it's worth I think this could also be directed at making sure that AI companies that are really heavily prioritising safety are able to meet their hardware needs. In other words, depending on the companies it could make sense to advise industry in addition to the EA community.

University professor doing research at the cutting edge of AI hardware. I think some possible research topics could be: anything in section 3, computer architecture focusing on AI hardware, or research in any of the alternative technologies listed in section 4. Industry: See section 4 for a list of possible companies to work at.

For these two career ideas I'd just add -- what is implicit here I think but maybe worth making explicit -- that it'd be important to be highly selective and pretty canny about what research topics/companies you work with in order to specifically help AI be safer and more beneficial.

These experiences will probably update my thoughts on my career significantly.

Seems right - and if you were to write an update at that point I'd be interested to read it!

Literature Review: Why Do People Give Money To Charity?

Hey Aaron, I know this is from a while ago and your head probably isn't in it, but I'm curious if you have any intuitions on whether analogues of the successful techniques you list do/don't apply to making career changes or other actions besides giving to charity.

Also really appreciating the forum tags lately -- really nice to be able to search by topic!

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

Yeah, I mean you're probably right, though I have a bit more hope in the 'does this thing spit out the conclusions I independently think are right' methodology than you do. Partly that's because I think some of the intuitions that are, jointly, impossible to satisfy a la impossibility theorems are more important than others -- so I'm ok trying to hang on to a few of them at the expense of others. Partly it's because I feel unsure of how else to proceed -- that's part of why I got out of the game!

I also think there's something attractive in the idea that what moral theories are are webs of implications, and the things to hold on to are the things you're most sure are right for whatever reason, and that might be the implications rather than the underlying rationales. I think whether that's right might depend on your metaethics -- if you think the moral truth is determined by your moral committments, then being very committed to a set of outputs could make it the case that the theories that imply them are true. I don't really think that's right as a matter of metaethics, though I'm not sure.

Some promising career ideas beyond 80,000 Hours' priority paths

Hey, thanks for this comment -- I think you're right there's a plausibly more high-impact thing that could be described as 'research management' which is more about setting strategic directions for research. I'll clarify that in the writeup!

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

You're right that radical implications are par for the course in population ethics, and that this isn't that surprising. However, I guess this is even more radical than was obvious to me from the spirit of the theory, since the premature deaths of the presently existing people can be so easily outweighed. I also agree, although a bit begrudgingly in this case, that "I strongly dislike the implications!" isn't a valid argument against something.

I did also think the counterpart relations were fishy, and I like your explanation as to why! The de dicto/de re distinction isn't something I'd thought about in this context.

Can I have impact if I’m average?

Thanks for posting this -- I think this might be a pretty big issue and I'm glad you've had success helping reduce this misconception by talking to people!

As for explanations as to why it is happening, I wonder if in addition to what you said, it could be that because EA emphasises comparing impact between different interventions/careers etc. so heavily, people just get in a really compare-y mindset, and end up accidentally thinking that comparing well to other interventions is itself what matters, instead of just having more impact. I think improved messaging could help.

Kelsey Piper on "The Life You Can Save"

Thanks Aaron, I wouldn't have read this if you hadn't posted it, and I think it contains good lessons on messaging.
