
geoffrey

Research Assistant @ World Bank DIME
365 karma · Joined Jul 2020

Bio

In DC. Focused on development, economics, trade, and DEI.

How I can help others

Happy to chat about
- teaching yourself to code and getting a software engineer role
- junior roles at either World Bank or IMF
- picking a Master's program for transitioning into public policy
- selling yourself from a less privileged background
- learning math (I had a lot of mental blocks on this earlier)
- dealing with self-esteem and other mental health issues
- applications for Econ PhD programs (haven't done it yet, but people are surprised by how much I thought about the process)

Fastest way to reach me is geoffreyyip@fastmail.com, but I do check messages here occasionally.
 

Posts (2)


Comments (54)

Not knowing anything else about your friend, CEA intro resources + saying you’d be excited to discuss it sometime sounds like the best bet.

Cruxes here include:

  • How deeply does your friend want to learn about EA? They might only want to engage with it for a week or sporadically. Or they may want to know that longtermism is a thing but not go through any (or much) of the moral calculus.
  • How does their disability manifest? The little bit I know about intellectual disabilities suggests that it’s hard to know in advance how it affects your learning, even for the person who has the disability. Struggling with math and stats is very common so that doesn’t tell me much.

Not knowing either of these, I suspect you should do the usual thing but mention that the community’s not always the best at communicating and often makes stuff more complicated than it needs to be.

Project-based learning seems to be an underappreciated bottleneck for building career capital in public policy and non-profits. By projects, I mean subjective problems like writing policy briefs, delivering research insights, lobbying for political change, or running community events. These have subtle domain-specific tradeoffs without a clean answer. (See the Project Work section in On-Ramps Into Biosecurity.)

Thus the lessons can't be easily generalized or made legible the way a math problem can be. With projects, even the very first step of identifying a good problem is tough. Without access to a formal network, you can spend weeks on a dead end and only realize your mistakes months or years after the fact.

This constraint seems well known to professionals in the network: organizers of research fellowships like SERI MATS describe their programs as valuable and highly in demand, yet constrained in how many people they can train.

I think operations best shows the surprising importance of domain-specific knowledge. The skill set looks similar across fields, which would imply some exchangeability between the private and social sectors. But in practice, organizations want you to know their specific mission very well, and they're willing (correctly or incorrectly) to hire a young Research Assistant over, say, someone with 10 years of experience in a Fortune 500 company. That domain knowledge helps you internalize the organization's trade-offs and prioritize without using too much senior management time.

Emphasizing this mechanism of supervised, project-based learning for building domain-specific career capital would clarify a few points.

  • With school, it would
    • emphasize that textbook knowledge is necessary but insufficient for contributing to social sector work
    • show the benefits of STEM electives and liberal arts fields, where the material is easier from a technical standpoint but you work on open-ended problems
    • illustrate how research-based Master's degrees in Europe tend to provide better training than purely coursework-based ones in the US (IMHO, true in Economics)
  • With young professionals, it would
    • highlight the "Hollywood big break" element of getting a social sector job, where it's easier to develop your career capital after you get your target job and get feedback on what to work on (and probably not as important before that)
    • formalize the intuition some people have about "assistant roles in effective organizations" being very valuable even though you're not developing many hard skills
  • With discussions on elitism and privilege, it would

I’ve always read about therapeutic alliance as advice for the patient, where one should try many therapists before finding one that fits. I imagine therapists are already putting a lot of effort in on the alliance front.

Perhaps an intervention could be an information campaign telling patients more about this? I feel it’s not well known or obvious that you can (1) tell your therapist their approach isn’t working and (2) switch around a ton before potentially finding a fit.

I haven’t looked much into it though

Love this and excited to see more of it. (3) is the biggest surprise for me and I think I’m more positive on education now.

Interested to hear your thoughts on growth diagnostics if you ever get around to it

P.S. I imagine you’re too busy to respond, but I’d be curious to hear if these findings surprised you / what updates you made as a result

EA organizations often have to make assumptions about how long a policy intervention matters in calculating cost-effectiveness. Typically people assume that passing a policy is equivalent to having it in place for around five years more or moving the start date of the policy forward by around five years.

I am really, really surprised 5 years is the typical assumption. My conservative guess would have been ~30 years of persistence on average for a “referendum-sized” policy change. (If benefits scale roughly linearly with persistence, that gap alone is a ~6x difference in estimated cost-effectiveness.)

Relatedly, I’m surprised this paper is a big update for some people. I suppose that attests to the power of empirical work, however uncertain, for illuminating the discussion on big-picture questions.

How Much Does Performance Differ Between People by Max Daniel and Benjamin Todd goes into this.

Also, there’s a post on being “vetting-constrained” that I can’t recall off the top of my head. The gist is that funders are risk-averse (not in the moral sense, but in the relying-on-elite-signals sense) because Program Officers don’t have as much time or knowledge as they’d like for evaluating grant opportunities. So they rely more on credentials than is ideal.

I liked this a lot. For context, I work as an RA on an impact evaluation project. I have a light interest in / familiarity with meta-analysis + machine learning, but I did not know what surrogate indices were going into the paper. Some comments below, roughly in order of importance (for anyone else new to the method, I've put a toy sketch of how I understand it after the list):

  1. Unclear contribution. I feel there are three contributions here: (1) an application of the surrogate method to long-term development RCTs, (2) a graduate-level intro to the surrogate method, and (3) a new M-Lasso method, which I mostly ignored. I read the paper mostly for the first two, so I was surprised to find out that the novel contribution was actually M-Lasso.
  2. Missing relevance for "Very Long-Run" Outcomes. Given the mission of Global Priorities Institute, I was thinking throughout how the surrogate method would work when predicting outcomes on a 100-year horizon or 1000-year horizon. Long-run RCTs will get you around the 10-year mark. But presumably, one could apply this technique to some historical econ studies with (I would assume) shaky foundations.
  3. Intuition and layout are good. I followed a lot of this pretty well despite not knowing the fiddly mechanics of many methods, and I had a good idea of what insight I would gain if I dove into the details of each section. It's also great that the paper led with a graph diagram and progressed from a simple kitchen-sink regression before going into the black-box ML methods.
  4. Estimator properties could use more clarity. 
    1. Unsure what "negative bias" is. I don't know if the "negative bias" in the surrogate index is an empirical result arising from this application, or a theoretical result where the estimator is biased in a negative direction. I'm also unsure if this is attenuation (biasing towards 0) or an honest-to-god negative bias. The paper sometimes mentions attenuation and other times negative bias, but as far as I can tell, there's only one surrogacy technique used.
    2. Is the surrogate index biased and inconsistent? Maybe machine learning sees this differently, but I think of estimators as ideally being unbiased and consistent (i.e. with more probability mass concentrating around the true value as sample size tends to infinity). I get that the surrogate index has a bias of some kind, but I'm unclear on whether it also has the asymptotic property of consistency. And at some point a limit is mentioned, but not what it's a limit with respect to (larger sample size within each trial is my guess, but I'm not sure).
    3. How would null effects perform? I might be wrong about this but I think normalization of standard errors wouldn't work if treatment effects are 0...
    4. Got confused on the relation between the Prentice criterion and regular unconfoundedness. Maybe this is something I just have to sit down and learn one day, but I initially read the Prentice criterion as a standard econometric assumption of exogeneity. But then the theory section mentions the Prentice criterion (Assumption 3) as distinct from unconfoundedness (Assumption 1). It's good the assumptions are spelt out, since that pointed out a bad assumption I was working with, but perhaps this could be clarified.
    5. Analogy to Instrumental Variables / mediators could use a bit more emphasis. The econometrics section (lit review?) buries this analogy towards the end. I'm glad it's mentioned, since it clarifies the first-stage vibes I was getting through the theory section, but I feel it's (1) possibly a good hook to lead into the theory section and (2) something worth discussing a bit more.
  5. Could expand Table 1 with summary counts of outcomes per treatment. 9 RCTs sounds tiny, until I remember that these have giant sample sizes, multiple outcomes, and multiple possible surrogates. A summary table of sample sizes, outcomes, and surrogates used might give a bit more heft to what's forming the estimates.
  6. Other stuff I really liked
    1. The "selection bias" in long-term RCTs is cool. I like the paragraph discussing how these results are shaped by what gets a long-term RCT in the first place. Perhaps it's worth emphasizing this as a limitation in the intro, or perhaps it's a good follow-on paper. Another idea is how surrogates would perform with dynamic effects that grow over time. Urban investments, for example, might have no effect until agglomeration kicks in.
    2. The surprising result of surrogates being more precise than actual RCT outcomes. This was a pretty good hook for me, but I could have easily passed over it in the intro. I also think the result here captures the core intuition of the bias-variance tradeoff + the surrogacy assumption in the paper quite strongly.
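As promised above, here's a minimal sketch of the surrogate index approach as I understand it. This is toy data with a plain OLS first stage standing in for the paper's M-Lasso, and all variable names are my own, so treat it as an illustration of the general technique rather than the paper's actual procedure:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Observational sample: surrogates S and long-run outcome Y both observed.
n_obs = 5_000
S_obs = rng.normal(size=(n_obs, 3))  # short-run surrogates (e.g. test scores)
Y_obs = S_obs @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n_obs)  # long-run outcome

# Step 1: learn the surrogate index E[Y | S] in the observational sample.
# (A flexible ML learner could replace OLS here; OLS keeps the sketch simple.)
index_model = LinearRegression().fit(S_obs, Y_obs)

# Experimental sample: treatment D and surrogates S observed, Y is not.
n_exp = 2_000
D = rng.integers(0, 2, size=n_exp)
S_exp = rng.normal(size=(n_exp, 3)) + 0.2 * D[:, None]  # treatment shifts surrogates

# Step 2: impute the unobserved long-run outcome from the surrogates.
Y_hat = index_model.predict(S_exp)

# Step 3: the difference in mean imputed outcomes estimates the long-run
# treatment effect. Valid only under the surrogacy assumptions: treatment
# affects Y only through S, and E[Y | S] transfers across the two samples.
ate_hat = Y_hat[D == 1].mean() - Y_hat[D == 0].mean()
print(f"Imputed long-run treatment effect: {ate_hat:.3f}")
```

The whole thing lives or dies on Step 3's assumptions, which is why the Prentice criterion discussion above matters; swapping the OLS first stage for a flexible learner is where the ML (and my bias/consistency questions) comes in.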

I’ve read conflicting things about how individual contributor skills (writing the code) and people management skills relate to one another in programming.

Hacker News and the cscareerquestions subreddit give me the impression that they’re very separate, with many complaining about how advancement dries up on a non-management track.

But I’ve also read a few blog posts (which I can’t recall) arguing the most successful tech managers / coders switch between the two, so that they keep their technical skills fresh and know how their work fits in a greater whole.

What’s your take on this? Has it changed since starting your new job?

Flagging quickly that ProbablyGood seems to have moved into this niche. Unsure exactly how their strategy differs from 80,000 Hours, but their career profiles do seem more focused on animals and global health.

I think they’re funded by similar sources to 80k: https://probablygood.org/career-profiles/
