Hey, thanks for your comment! I hadn't really realised the extent to which someone can study full-time while also skilling up in research engineering - that definitely makes me feel more willing to go for PPE.
Re your third paragraph, I wouldn't have a year off - it'd just be like doing a year of PPE, followed by three years of CS & philosophy. I do have a scholarship, and would do the first year of PPE anyway in case I didn't get into CS & phil.
Either way, your first point nudges me more in the direction of just sticking with PPE :)
Hey - I’d be really keen to hear people's thoughts on the following career/education decision I'm considering (esp. from people who think about AI a lot):
What mistakes am I making here/am I being too self-limiting? I should add that (from talking to people at Oxford) I’ll have quite a lot of time to study other stuff on the side during my PPE degree. Thanks for reading this, if you’ve got this far! I’d greatly appreciate any comments.
Would an AI governance book that covered the present landscape of governance-related topics (maybe like a book version of the FHI's AI Governance Research Agenda?) be useful?
We're currently at a strange point where there's a lot of interest in AI - news coverage, investment, etc. It feels odd not to be trying to shape the conversation on AI risk more than we are now. I'm well aware that this sort of thing can backfire, and that many people are keen not to "politicise" issues like these, but it might still be a good idea.
If it were written by, say, Toby Ord - or anyone sufficiently detached from American left/right politics, with enough prestige, background, and experience writing books like these - I feel like it could be really valuable.
It might also be more approachable than other books covering AI risk, like Superintelligence, and it might seem a little more concrete, since it could cover scenarios that are more near-term, easier for most people to imagine, and less "sci-fi".
Thoughts on this?
This is an older post now, so I have no idea if anyone will see this, but it seems to me that you almost need "pockets" of cultishness in the broader EA movement. This follows on from the final sentence of Geoffrey Miller's comment, about how a lot of impactful movements do seem a bit cultish. Peter Thiel writes really well in Zero to One about why some start-ups seem cultish (and why they should be), and I think I agree with him: a sense of unity/mission-alignment and we're-better-than-everyone-else can produce extraordinary results. Sometimes this is extraordinarily bad (like Adam Neumann and WeWork) and sometimes extraordinarily good (like Steve Jobs and Apple, Jack Dorsey and Twitter, Bill Gates and Microsoft, etc.), where certain people motivate others to tirelessly perform extremely high-value work.
Obviously, cultishness has major downsides. One is external: proponents of the environmental movement, for example, were frequently dismissed as hippies before environmentalism went mainstream, and I wouldn't want the same to happen to EA. The second is internal: as you've noted, problems like sexual harassment and abuse come up, which is obviously extremely traumatising for the victims.
I'd say one thing I'm saddened by is the relative lack of public awareness of EA or the big EA causes (x-risk, global health, animal welfare, etc.), and in a way, solving that problem may require us to become more cultish. There's a kind of optimal stopping problem at play here: once EA becomes more cultish, it's hard to make it less cultish (at least in the eyes of the non-EA public), but if EA is too non-cultish, I fear we won't be able to spread the word effectively. I'm also afraid that a lot of our community-building efforts aren't very high-leverage and often seem to fizzle out, particularly at universities. It's great that we have such a huge collection of smart people working on important stuff, but we might need a few cult-leader personalities (or, to use Ayn Rand's words, "prime movers") to really move the needle.
One thing I've been thinking about - which perhaps flies in the face of what I've just said about spreading awareness - is the need to mitigate reputational tail risk for the EA movement altogether. For example, can we spread awareness of key issues without mentioning EA, and can we get more people to commit their careers to doing good without mentioning EA? In some sense, the blanket term "EA" is a blessing and a curse: a blessing in that it's a very versatile calling card to put in social media bios, introductory blurbs, etc. (e.g. "...I'm really into effective giving..."), but a curse in that it creates huge collateral-damage potential for hit-piece journalists if anything seriously bad does happen (much as environmentalists used to get called dope-smoking hippies).
As with everything, there's a crucial balancing act here: how can we be rational, but also highly motivated and aligned? Curious to hear people's thoughts on this one, because I still feel like EA (as a community) is in its early days and could become so much more.
After a quick google, I'm pleasantly surprised by how much this sort of thing seems to happen - thanks for the pointer!