Comments

Careers Questions Open Thread

(I'm a trader at a NY-based quant firm, and work on education and training for new traders, among other things.)

I'm nearly certain that your hiring manager (or anyone involved in hiring you) would be happy to receive literally this question from you, and would have advice specifically tailored to the firm you're joining.

The firm has a very strong interest in your success (likely more so than anyone you've interacted with in college), and they've already committed to spending substantial resources on helping you prepare for a successful career as a trader. Answering questions like this one (even before you've "officially" started) is literally (part of) someone's job.

(I'm declining to answer the actual question not to be unfriendly, but because I think the folks at your future employer will have more accurate answers than I can give.)

2018-19 Donor Lottery Report, pt. 1

Finally, I expect that my earmarking of grant funds will be partially funged within the GFI organization, and I think this is inevitable, basically fine, and in fact weakly good.

I received a private request (from an early reviewer of this post) to expand on my thoughts here, so a few more words:

When making decisions under collective uncertainty, aggregating information is a hard problem (citation not required). I think that my relative opinions here push the world towards a more efficient allocation, but I recognize that my opinions about GFI are inevitably incomplete. So if I overstate my certainty when translating my opinions into effects-on-the-world, I expect I'd make the overall allocation of resources less efficient. If I insisted on absolutely no counterfactual funging, I would be overstating my confidence.
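To make the funging mechanics concrete, here's a toy sketch in Python. Everything in it (the numbers, the `counterfactual_impact` function, the `funge_rate` knob) is a hypothetical illustration of mine, not anything from GFI's actual budget:

```python
# Toy illustration of funging (all numbers hypothetical, not actual GFI figures).
# An org planned to spend `planned` on a program out of general funds.
# A donor earmarks `earmark` for that program. If the org responds by
# redirecting some of its own planned spending elsewhere, only part of
# the earmarked grant is counterfactual for that program.

def counterfactual_impact(planned: float, earmark: float, funge_rate: float) -> float:
    """Net extra money the program receives, given partial funging.

    funge_rate: fraction of the earmark the org offsets by moving its own
    planned funding to other programs (0 = no funging, 1 = full funging).
    """
    redirected = min(funge_rate * earmark, planned)  # can't redirect more than was planned
    total_to_program = planned - redirected + earmark
    return total_to_program - planned  # change vs. the no-earmark world

# Example: org planned $50k for the program; I earmark $10k; org funges half.
print(counterfactual_impact(50_000, 10_000, 0.5))  # -> 5000.0 extra to the program
```

On this toy model, insisting on zero funging means insisting `funge_rate` stays at 0 no matter what the org knows, which is exactly the overconfidence I want to avoid.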

On the other hand, if I trust GFI to take my grants in the spirit in which they're intended, then I expect they'll take them as information given in good faith, trust that I was trying to communicate something I thought was not already known to them, consider what they know that (they think) was not known to me, and decide what the net effect of my additional opinion should be. (This should remind you of Aumann's agreement theorem, if you're familiar with that concept from the rationality literature.)
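For the curious, here's one toy formalization of that pooling story (my own illustration; nothing here is a claim about how GFI actually reasons): assume a shared prior on a binary claim and conditionally independent private evidence, and combine the two parties' posteriors by summing their log-odds updates.

```python
import math

# Minimal sketch of pooling two opinions formed from independent evidence
# over a shared prior. Under conditional independence, posterior odds are
# prior odds times the product of each party's likelihood ratio, which is
# a sum of log-odds updates.

def log_odds(p: float) -> float:
    return math.log(p / (1 - p))

def pool(prior: float, posteriors: list[float]) -> float:
    """Combine posteriors from independent evidence, given a shared prior."""
    pooled = log_odds(prior) + sum(log_odds(p) - log_odds(prior) for p in posteriors)
    return 1 / (1 + math.exp(-pooled))

# Donor at 0.7 after their research, org at 0.4 after theirs, shared prior 0.5:
# the donor's positive update outweighs the org's negative one, so the pooled
# estimate lands a bit above the prior.
print(round(pool(0.5, [0.7, 0.4]), 3))  # -> 0.609
```

The point of the sketch is just that a good-faith recipient doesn't either defer to the earmark or ignore it; they fold it into what they already know.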

(I think it's also plausible in general that earmarking $X as a vote of confidence in a particular program prompts the receiving organization to update their beliefs and direct more non-earmarked funding toward that program than they otherwise would, causing the opposite of funging.)

Do I actually believe that GFI's principals are as good at playing this Aumann-esque information-aggregation game as the professional colleagues I'm used to working with? Probably not, no. But this is the way I think cooperative allocation of resources should play out, and I think that the EA community only gets better at it if we start discussing ideas like this and playing "cooperate" in the epistemic prisoners' dilemma. And my instinct is actually that if some of my funding ends up being funged towards initiatives that GFI principals think are highest-value, it's probably net good for the overall work.