Working at a ritzy quant firm shouldn't hurt your competitiveness for PhD programs much (it could even help), and if you're getting $1M+ / 5y E2G-worthy offers halfway through undergrad (and have already published!), you'll probably still be able to get comparable offers if you decide to, e.g., master out. So in that regard it probably doesn't matter too much which path you take, since neither precludes reinvention.

If it were me, I'd take the bird in hand and work in the quant role... but if I felt I could make more meaningful "direct" contributions, I'd focus not just on E2G but also on reaching financial independence as soon as possible. PhD stipends are quite a bit lower than industry pay (at my current school, CS students only make around $45k/y), so being able to supplement that income with proceeds from investments would free you from monetary concerns and let you focus your attention on more valuable pursuits. You wouldn't have to waste time on unpleasant trivialities like household chores if you could instead hire a regular cleaning service + meal delivery; hell, spend another year or two at the firm and get yourself a part-time personal assistant for the duration of the grad program to manage your emails for you haha. Focus first on those claims on your time that can be most cheaply bought out, to give yourself greater opportunities to direct more valuable hours down the line.
Maybe so! Might just be that the career questions are a bit too targeted (my partner has also had trouble getting advice on how best to leverage her tissue engineering / veterinary background to serve animal welfare, e.g. working directly with researchers using animal models vs. developing in vitro meat in a more wet-bench role). Was just curious to get an outside view, especially from a more "value-aligned" group than might be found in your typical career center or through existing mentors etc. Thank you for your response!
I'd second the Ng Coursera course -- very straightforward and easy to follow for those lacking technical backgrounds! Which may be a plus or a minus, depending on your desired rigor.
(removed for privacy + inappropriateness)
Sure! Though unfortunately most of the stuff comes from scattered lectures, workshops, discussions, book chapters, seminars, papers, etc. But for intro multilevel Bayesian regression in R/Stan, I'd say John Kruschke's "Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan" and Richard McElreath's "Statistical Rethinking: A Bayesian Course with Examples in R and Stan" are really solid. (Richard also has his course lectures up on YouTube if you prefer that, though I found his book so readable that when I took the class with him a few years back I skipped most of his lectures, since the room was really hot. But don't let that dissuade you from watching them; he's a great guy/speaker and quite fun and funny!)
Purely in terms of building my own intuitions/understanding, though, I've found little more helpful than just looking up the relevant algorithms and implementing the damn things from scratch (to the talk of reinventing square wheels above, lol... though of course you'd use the far superior underlying code others have written for your actual analysis).
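To make that concrete: a bare-bones random-walk Metropolis sampler really is only a dozen lines, and writing one yourself demystifies a lot of the downstream diagnostics talk. A toy Python/numpy sketch (the standard-normal target and all the names here are mine, purely for illustration):

```python
import numpy as np

def log_post(theta):
    """Toy target: log-density of a standard normal, up to a constant."""
    return -0.5 * theta ** 2

def metropolis(log_post, n_iter=50_000, step=1.0, seed=0):
    """Random-walk Metropolis: propose, then accept with prob min(1, ratio)."""
    rng = np.random.default_rng(seed)
    theta, lp = 0.0, log_post(0.0)
    samples = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + rng.normal(0.0, step)      # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance rule
            theta, lp = prop, lp_prop
        samples[i] = theta                        # rejected moves repeat theta
    return samples

draws = metropolis(log_post)
print(draws.mean(), draws.std())  # should land near 0 and 1
```

From there it's a short hop to seeing why step size, burn-in, and autocorrelation matter, which is most of what the fancier tools are diagnosing.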
Ah, gotcha. But re: code review, even the most beautifully constructed chains can fail, and how you specify your model can easily cause things to go kabloom even if the machine's doing everything exactly as it's supposed to. And it only takes a few minutes to drag your log files into something like Tracer and do some basic peace-of-mind checks (and others, e.g. examining bivariate posterior distributions to assess nonidentifiability w.r.t. your demographic params). More sophisticated diagnostics are scattered across a few programs but don't take too long to run either (unless you have, e.g., hundreds or thousands of chains, as in marginal likelihood estimation with stepping stones... a friend's actually coming out with a program soon -- BONSAI -- that automates a lot of that grunt work, which might be worth looking out for!). :]
(on phone at gym with shit wifi so can't provide links/refs atm, sorry!)
Of course (though wheel reinvention can be super helpful educationally), but there are great free public R packages that interface with Stan (I use "rethinking" for my hierarchical Bayesian regression needs, but plain "rstan" would work, too), so going with someone's unnamed, private code isn't necessary imo. How much did the survey cost? (Was it a lot longer than the included Google Doc, then? E.g., did you have screening questions to make sure people read the paragraph?) And model + MCMC specification can have lots of fiddly bits that can easily lead us astray, I'd say.
Ah, I guess that's better than no control, and presumably paying attention to a paragraph of text doesn't make someone substantially more or less generous. Did you fit a bunch of models with different predictors and test for a sufficient improvement in fit with each? Might be worth being wary of overfitting there... though since those aren't focal parameters, Bayes tends to be pretty robust, imo, so long as you used sensible priors.
"I used a multilevel model to estimate the effects among those with and without a bachelor's degree. So, the bachelor's estimate borrow's power from those without a degree, reducing problems with over fitting."
If I'm understanding correctly, you had a hyperprior on the effect of education level? With just two options? I don't know that that would help you much (if you had more levels, e.g. HS, BA/BS, MS, PhD, etc., it might, but I'd try to preserve the ordering there, myself).
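For what it's worth, the mechanics of that "borrowing" are easy to see in the closed-form normal-normal case; here's a Python/numpy sketch (function and argument names are mine, and in practice you'd estimate the between-group variance from the data or a hyperprior rather than pass it in):

```python
import numpy as np

def pooled_estimates(group_means, group_ns, sigma2_within, tau2_between):
    """Partial-pooling (shrinkage) estimates for group effects.

    Each group mean gets pulled toward the grand mean; the pull is
    stronger for small/noisy groups and weaker when the between-group
    variance tau2_between is large. tau2_between = 0 recovers complete
    pooling; tau2_between -> infinity recovers no pooling.
    """
    group_means = np.asarray(group_means, float)
    group_ns = np.asarray(group_ns, float)
    grand = np.average(group_means, weights=group_ns)
    # shrinkage weight in [0, 1]: signal variance vs. noise in each group mean
    w = tau2_between / (tau2_between + sigma2_within / group_ns)
    return w * group_means + (1 - w) * grand
```

The catch with two groups is that tau2_between is itself estimated from just two points, so the amount of pooling is barely constrained by the data, which is basically my worry above.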
"These models used STAN, which handles these multilevel models well. Convergence was assessed with gelman-rubin statistics."
Stan's great, but certainly not magic or perfect, and though I don't know them personally, I'm sure its authors would strongly advocate paranoia about its output. So you got convergence with multiple (2?) chains from random (hopefully) starting values? R-hats were all 1? That's good! Did all the other cheap diagnostics turn up ok (e.g. trace plots, autocorrelation times/ESS, marginal histograms, quick within-chain metrics, etc.)?
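Those cheap checks are also easy to script yourself; here's a rough Python/numpy sketch of the two workhorses, split R-hat and a crude ESS. These are my own toy versions, not Stan's (Stan's use rank-normalization and more careful autocorrelation truncation):

```python
import numpy as np

def split_rhat(chains):
    """Split Gelman-Rubin R-hat from an (n_chains, n_draws) array.

    Each chain is split in half so within-chain drift also inflates
    the statistic; values near 1.0 are consistent with convergence.
    """
    chains = np.asarray(chains, float)
    half = chains.shape[1] // 2
    splits = np.vstack([chains[:, :half], chains[:, half:2 * half]])
    n = splits.shape[1]
    B = n * splits.mean(axis=1).var(ddof=1)   # between-chain variance
    W = splits.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_plus / W)

def crude_ess(chain):
    """Effective sample size of one chain via summed autocorrelations.

    Truncates the sum at the first negative autocorrelation; real
    packages use smarter truncation rules.
    """
    x = np.asarray(chain, float)
    x = x - x.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (n * x.var())
    tau = 1.0                     # integrated autocorrelation time
    for rho in acf[1:]:
        if rho < 0:
            break
        tau += 2.0 * rho
    return n / tau
```

E.g., four well-mixed chains should give R-hat near 1, while a chain stuck around a different value pushes it well above the usual alarm thresholds (around 1.01-1.1, depending on who you ask), and a sticky, highly autocorrelated chain will show an ESS far below its nominal length.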
Ah, interesting! What package? I've never heard of something like that before. Usually in the cold, mechanical heart of every R package is the deep desire to be used and shared as widely as possible. If it's just someone's personal interface code, why not use something more publicly available? Can you write out your basic script in pseudocode (or just math/words)? Especially the model and MCMC specification bits?
Yep, and alongside it, of course, the raw data!