I agree with this, but want to add on since the post mentioned 3-4 courses.
If you're picking 3: definitely econometrics and stats/probability to supplement analysis skills. For the third, I would say probably development economics, both to visibly show interest in the topic and to have a professor you can try to build a relationship with for resources/recommendations in that network. Two potential caveats: first, if you think the behavioral econ professor's network is more valuable to leverage, or that class builds substantially more research skills, that's also a pretty good option. Second, depending on the level of the course, econometrics could plausibly require, or at least benefit a lot from, stronger linear algebra skills; that would suggest econometrics/stats/linear algebra instead.
If you're taking 4 to stand out to employers: the same logic as above probably applies. I'd also add that if grad school is a possibility for you, many PhD programs require or strongly suggest linear algebra.
One final thought here: I'm treating this as if you need to stay within that list. If there's an option to go outside it (maybe to a CS or stats department?), learning programming/statistical computing skills might be among the highest-value options.
This comment feels important, like something I've been considering trying to spin into a full post. Finding a frame has been hard, because it feels like I'm trying to translate what's (unfortunately) a distinctly non-EA cultural norm into reasoning that EAs will take more seriously.
One thought I do want to share, though, is that I don't think it's quite right to see this as something that needs to be weighed against good epistemics. Prizing good epistemics should mean being able to reason clearly about, and adjust our reactions to, the tone and emotional tenor of people who (very understandably!) are speaking from a place of trauma and deep hurt.
The best frame I have so far for a post is reminding people of Julia Galef's straw-Vulcan argument and spelling out what it implies for conversations on (understandably) incredibly emotionally heavy topics, and in tough times more generally. Roughly rehashing the argument, since I can't find a good link for it: Spock frequently assumes that humans will be perfectly rational creatures under all circumstances, and when this leads him astray he essentially shrugs and responds, "it's not my fault that I didn't predict their actions correctly; they were being irrational!" Galef's point, of course, is that this is terrible rationality: failing to reason about how emotions might affect people, and to adjust accordingly, leaves your epistemics severely impoverished.
Setting aside the straw-Vulcan rationality argument, there also feels like there should be an argument along the lines of how obvious it should be (to me, incredibly so!) that tone like this demands sympathy, and a willingness to take on the burden of being accommodating, from people serious about thinking of themselves as invested in altruism as a value. I'm still figuring out how to express this crisply (and, to be clear, without bitterness) so that it will resonate.
If you have thoughts on what the best frame would be here, I'd love to hear them or discuss more.
Edited to take out something unkind. Trying to practice what I preach here.
Trying to write a response quickly before work starts at the end of a long week (working on Dem races, being EA-ish), so I'm open to being too hasty or needing to flesh out these ideas. Two immediate reactions:
One last quick clarifying thought: my claim isn't just "external people looking in might be concerned", it's "this is not the tone we should bring to doing politics as a community".
Interesting post. Since my academic training is heavily in political science (+ stats and CS), I've thought about this topic some as well. Disclaimer: I engage with poli sci research pretty heavily through working in electoral politics and follow the broader field through friends who do other work, but I don't have a poli sci PhD and don't have a particular identity as a political scientist.
A general thought here is that this post is a little hard to engage with because you’re making two related claims at the same-ish time, and not providing particularly concrete suggested actions specifically related to EA. As I read you, the claims are:
One thing I'm especially left wondering here is whether you have a specific claim about how relatively important engaging with these topics is, and for which parts of the EA community that's true. For example, how much of a priority should engaging with the gerrymandering literature be, and for which EAs? Where does this fall in the hierarchy of things EAs could spend time learning about versus, say, microeconomic quant tools? Hopefully that's a helpful point in trying to flesh out the case you're making here (I realize you posted this as "some thoughts", and not "here is a deeply researched, group-reviewed, long-form piece with deeply felt calls to action").
Moving on to discussing the specific points you make:
To be clear about my level of knowledge here: my undergraduate thesis was on why fixing gerrymandering is harder than proposing good algorithms, and I learned quite a bit after that from seeing researchers speak at the MaDS seminar series while I was in grad school at NYU. So I have a decent impression, but you may well know more and have a good basis to disagree.
I'm completely unequipped to respond on the other formal methods ideas you propose, but looping back to my broader response to this post, it would be beneficial to have more concrete applications of these ideas for EA, as well as discussion of how they rank among the priorities of things we could learn.
This is a pretty long response already, so I'll end by saying that this is definitely a topic I'd be interested in discussing more.
For example, I could envision trying to seek out specific EA problems that could benefit from recent hot topics in quant poli sci, like conjoint experiments (to name one). Separately, this is more an intersection of my background (political practitioner) and quant poli sci, but I've been pondering whether it's a good use of time to produce general educational materials on campaigning effectively and how elections are won; it seems many EAs fall prey to the common misconceptions that typical well-educated but not politically experienced people fall into. To the extent there are folks who might try something like another Flynn campaign or try to give effectively to influence the 2024 cycle, there seem to be some easy wins in providing better mental models.
Quick heads up, the email announcement and this post don’t have the same application deadline- email says the 19th, post says the 17th.
Looks great!
I believe the link for "existential catastrophe" (in the second table) is broken; I get a page not found error.
Substantively, I realize this is probably not something you originally asked for (nor am I asking for it now, since presumably this would take a bunch of time), but I'd be super curious to see what kind of uncertainty estimates folks put on this, and how aggregating using those uncertainties might look. If you have some intuition on what those intervals look like, that'd be interesting.
The reason I'm curious about this is probably fairly transparent: given the pretty extensive broader community uncertainty on the topic, aggregating using those estimates might yield a different point estimate, but more importantly it might help people understand the problem better by making visible the large degree of uncertainty involved. For example, it'd be interesting/useful to see how much probability people put outside a 10-90% range.
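To make the kind of thing I have in mind concrete, here's a minimal sketch with made-up numbers (nothing from your post): if each respondent's belief were summarized as, say, a Beta distribution, you could pool them as an equal-weight mixture and read off both a pooled estimate and the probability mass outside a 10-90% range. (This simple mixture pooling keeps the mean equal to the average of the individual means; other pooling rules, like geometric pooling of odds, can shift the point estimate.)

```python
import numpy as np
from scipy import stats

# Hypothetical respondent beliefs about the probability in question,
# each summarized as a Beta distribution (illustrative numbers only).
respondents = [stats.beta(2, 18), stats.beta(5, 5), stats.beta(1, 9)]

# Equal-weight mixture: its mean is the average of the component means,
# and its CDF is the average of the component CDFs.
pooled_mean = np.mean([r.mean() for r in respondents])
mixture_cdf = lambda x: np.mean([r.cdf(x) for r in respondents])

# How much probability the pooled belief puts outside the 10-90% range.
mass_outside = mixture_cdf(0.10) + (1.0 - mixture_cdf(0.90))

print(f"Pooled point estimate: {pooled_mean:.3f}")
print(f"Probability mass outside 10-90%: {mass_outside:.3f}")
```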
Trying this now, thank you for the timely heads up. One thing I wanted to elevate from the Giving Tuesday website, and one question.
First: it may be possible to set up recurring donations to multiple orgs and so get multiple matches. No guarantees, but that's a possible read of the Meta rules the Giving Tuesday website mentions. I'll be trying this, and I'd encourage others to as well.
Second, do folks have recommendations for longtermist charities set up to receive funds this way, especially ones that might've been hit hard by the FTX fallout? I didn't immediately recognize any here: https://www.eagivingtuesday.org/eagtnonprofits. I would think these are good opportunities for people to be especially efficient given the FTX news; also, some people leaning more longtermist may be more likely to use this platform if they have options made clear to them. I'd do some digging but have to go to work now.