
Summary

  • I present a framework to estimate the expected value of rejecting a job offer when there are other ongoing applications.
  • I illustrate how to apply the framework with a real world example involving applications of mine.

Framework

Suppose one has to accept or reject now a job offer for opportunity 0, but has N other ongoing applications whose results will only be known in T_results years. Assuming one always works for at least 1 year on an opportunity starting T_start years after accepting an offer for it, and that the time until starting to work on an opportunity has no value, the expected value over the 1st 1 + T_results + T_start years after deciding on opportunity 0 is as follows. If one:

  • Rejects the offer, p_1*V_1 + p_2*V_2 + … + p_N*V_N + EV_other, where p_i is the probability of accepting an offer for opportunity i if one rejects that for opportunity 0, V_i is the value of working for 1 year on opportunity i, V_1 >= V_2 >= … >= V_N without loss of generality, and EV_other is the expected value from opportunities outside the ongoing applications.
  • Accepts the offer, V_0*(1 + T_results).

p_i is equal to the probability of receiving an offer for opportunity i, p_offer_i, but not for any better ones, which is (1 - p_offer_1)*(1 - p_offer_2)*...*(1 - p_offer_{i - 1})*p_offer_i.

V_i should account for the impact of direct work, donations, and career capital. The contribution from donations is:

  • For opportunities besides 0, I_i - S_i, where I_i is the net income from the 1st year working on opportunity i, and S_i is the spending excluding donations in the 1 + T_results + T_start years after deciding on opportunity 0 if one accepts opportunity i.
  • For opportunity 0, I_0*(1 + T_results) - S_0.
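The framework above can be sketched in a few lines of Python (a minimal illustration with my own naming, not code from the post):

```python
def p_accept(p_offer):
    """p_i: probability of accepting an offer for opportunity i, i.e.
    receiving it but no offer for any better one. Opportunities are
    ordered from best to worst (V_1 >= V_2 >= ... >= V_N)."""
    probs = []
    p_no_better = 1.0
    for p in p_offer:
        probs.append(p_no_better * p)
        p_no_better *= 1 - p
    return probs

def ev_reject(p_offer, values, ev_other=0.0):
    """Expected value of rejecting the offer for opportunity 0:
    p_1*V_1 + p_2*V_2 + ... + p_N*V_N + EV_other."""
    return sum(p * v for p, v in zip(p_accept(p_offer), values)) + ev_other

def ev_accept(v_0, t_results):
    """Expected value of accepting the offer: V_0*(1 + T_results)."""
    return v_0 * (1 + t_results)
```

For example, with 2 opportunities each offering with 50 % probability, `p_accept([0.5, 0.5])` returns `[0.5, 0.25]`, since the 2nd offer is only accepted if the 1st does not arrive.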

Real example

Context

I applied to join Anonymous Organisation as a founding researcher of a project aiming to steer the giving of philanthropists in India towards cost-effective interventions[1]. I completed the 1st 2 stages, was invited on January 20 to the 3rd conditional on accepting a seemingly likely future offer, and had to decide by January 21 whether I would accept such an offer. I concluded I would not accept an offer made on January 21 if I had to decide on it then. This was partly informed by my calculations below, which illustrate how to apply the framework I presented above. Some caveats:

  • I only considered the applications which intuitively accounted for the most expected value.
  • I did not account for the time it would take to know the results of other applications, which corresponds to assuming that T_results is 0.
  • I did not quantify the value of career capital, which is relevant if it is not proportional to the value from direct work and donations.

Values

I estimated the value in terms of additional donations to the Shrimp Welfare Project (SWP) from working 1 year as (ordered from the highest to the lowest impact):

  • A fund manager at the Animal Welfare Fund (AWF) was 211 k$ (= (186 + 25.2)*10^3). I got this adding:
    • 186 k$ (= 930*10^3*0.2) from direct work. I calculated this multiplying:
      • 930 k$ (= (555 + 375)*10^3) granted in 2024 to help wild animals and shrimp.
      • Annual impact relative to the 2nd best hire equivalent to moving 20 % of the above per year to SWP.
    • 25.2 k$ (= (35.2 - 10)*10^3) from donations (assuming the impact of my donations would be much larger than those of the 2nd best hire because they would not donate much to helping shrimp or wild animals). I calculated this from the difference between:
      • The net salary of 35.2 k$ (= 80*10^3*(1 - 0.56)). I assumed a gross salary of 80 k$ (= (60 + 100)*10^3/2), which was the mean between the lower and upper bound in the job ad, and a 56 % (= 0.45 + 0.11) reduction due to income tax and social security in Portugal.
      • 10 k$, which was my rough guess for my annual expenditure excluding donations.
  • A research analyst at Ark Philanthropy was 45.0 k$ (= (28.6 + 16.4)*10^3). I got this adding:
    • 28.6 k$ (= 1*10^6*0.143*0.2) from direct work. I calculated this multiplying:
      • 1 M$ granted per year, as guessed by me.
      • 14.3 % (= 1/7) of the above goes towards helping farmed and wild animals, since animal welfare is one of Ark’s 7 causes.
      • Annual impact relative to the 2nd best hire equivalent to moving 20 % of the above per year to SWP.
    • 16.4 k$ (= (26.4 - 10)*10^3) from donations. I calculated this from the difference between:
      • The net salary of 26.4 k$ (= 60*10^3*(1 - 0.56)). I assumed a gross salary of 60 k$, as guessed by me, and a 56 % (= 0.45 + 0.11) reduction due to income tax and social security in Portugal.
      • 10 k$, which was my rough guess for my annual expenditure excluding donations.
  • An operations associate at Epoch AI was 21.9 k$ (= (0 + 21.9)*10^3). I got this adding:
    • 0 from direct work, as I guessed the impact from donations to be way larger.
    • 21.9 k$ (= (31.9 - 10)*10^3) from donations. I calculated this from the difference between:
      • The net salary of 31.9 k$ (= 72.5*10^3*(1 - 0.56)). I assumed a gross salary of 72.5 k$ (= (65 + 80)*10^3/2), which was the mean between the lower and upper bound in the job ad, and a 56 % (= 0.45 + 0.11) reduction due to income tax and social security in Portugal.
      • 10 k$, which was my rough guess for my annual expenditure excluding donations.
  • A founding researcher of Anonymous Organisation’s project was 12.8 k$ (= (7.78 + 5.00)*10^3). I got this adding:
    • 7.78 k$ (= 3.1*10^6*0.00251) from direct work. I calculated this multiplying:
      • 3.33 M$ granted per year (= 10*10^6/3), given Anonymous Organisation’s goal of influencing 10 M$ over 3 years.
      • An annual impact relative to the 2nd best hire equivalent to moving 0.251 % (= 0.01*0.251) of the annual amount granted per year to SWP. I obtained this multiplying:
        • 1 % (= 10*0.001) of the amount granted per year going towards helping farmed and wild animals, which is 10 times ChatGPT’s and Claude’s guess that 0.1 % of the donations of Indian philanthropists help farmed and wild animals in India. For reference, 3 % of donations in the United States help farmed animals.
        • An annual impact relative to the 2nd best hire equivalent to moving 25.1 % of the above to SWP, as, among AWF’s grants in 2024, 25.1 % (= 930*10^3/(3.7*10^6)) went towards helping wild animals and shrimp.
    • 5.00 k$ (= (15.0 - 10)*10^3) from donations. I calculated this from the difference between:
      • The net salary of 15.0 k$ (= 28.0*10^3*(1 - 0.465)). I assumed a gross salary of 28.0 k$, as mentioned in the job ad, and a 46.5 % (= 0.355 + 0.11) reduction due to income tax and social security in Portugal.
      • 10 k$, which was my rough guess for my annual expenditure excluding donations.
  • An operations associate at Giving What We Can (GWWC) was 12.0 k$ (= (0 + 12.0)*10^3). I got this adding:
    • 0 from direct work, as I guessed the impact from donations to be way larger.
    • 12.0 k$ (= (22.0 - 10)*10^3) from donations. I calculated this from the difference between:
      • The net salary of 22.0 k$ (= 50.0*10^3*(1 - 0.56)). I assumed a gross salary of 50.0 k$ (= (35 + 65)*10^3/2), which was the mean between the lower and upper bound in the job ad, and a 56 % (= 0.45 + 0.11) reduction due to income tax and social security in Portugal.
      • 10 k$, which was my rough guess for my annual expenditure excluding donations.
  • A freelance math tutor of high school students was 10.0 k$ (= (0 + 10.0)*10^3). I got this adding:
    • 0 from direct work, as I guessed the impact from donations to be way larger.
    • 10.0 k$ from donations. I calculated this from the difference between:
      • The net salary of 10.0 k$ (= 15.0*10^3*(1 - 0.33)). I assumed a gross salary of 15.0 k$ (= 1*10^3*15), which is 1 k hours times 15 $/h, as guessed by me having a quick look into what people listed on Superprof charge in my area, and a 33 % (= 0.22 + 0.11) reduction due to income tax and social security in Portugal. In hindsight, I should probably have assumed less than 1 k hours per year of work.
      • 10 k$, which was my rough guess for my annual expenditure excluding donations.
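All the donation terms above follow the same pattern: net salary minus spending excluding donations. As a minimal sketch (figures from the post; the default `spending` of 10 k$ mirrors my rough guess in the text):

```python
def donations(gross_salary, tax_rate, spending=10e3):
    """Contribution from donations over 1 year:
    net salary minus spending excluding donations."""
    return gross_salary * (1 - tax_rate) - spending

# AWF fund manager: 80 k$ gross, 56 % income tax and social security -> 25.2 k$
awf = donations(80e3, 0.56)

# Epoch AI operations associate: 72.5 k$ gross, same 56 % reduction -> 21.9 k$
epoch = donations(72.5e3, 0.56)
```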

Expected values

I estimated the expected value of rejecting an offer from Anonymous Organisation over the 1st year of subsequent work at 21.4 k$ (= (11.7 + 2.12 + 0.983 + 0.511 + 6.07)*10^3), adding the following contributions:

  • 11.7 k$ (= 0.0556*211*10^3) from joining AWF as a fund manager. I computed this multiplying:
    • A probability of joining of 5.56 % (= 1/18), since 18 people were invited to stage 2, which was the last one I had completed.
    • An expected value conditional on joining of 211 k$, as estimated above.
  • 2.12 k$ (= 0.0472*45.0*10^3) from joining Ark as a research analyst. I computed this multiplying:
    • A probability of 4.72 % (= (1 - 0.0556)*0.05) for a 5 % (= 1/20) chance of joining if I did not join AWF, as I guessed 20 people, including me, would be invited to stage 2, although I had only completed stage 1.
    • An expected value conditional on joining of 45.0 k$, as estimated above.
  • 983 $ (= 0.0449*21.9*10^3) from joining Epoch AI as an operations associate. I computed this multiplying:
    • A probability of 4.49 % (= (1 - 0.0556)*(1 - 0.05)*0.05) for a 5 % (= 1/20) chance of joining if I did not join AWF or Ark, as I guessed 20 people had been invited to stage 2, which was the last one I had completed.
    • An expected value conditional on joining of 21.9 k$, as estimated above.
  • 511 $ (= 0.0426*12.0*10^3) from joining GWWC as an operations associate. I computed this multiplying:
    • A probability of 4.26 % (= (1 - 0.0556)*(1 - 0.05)^2*0.05) for a 5 % (= 1/20) chance of joining if I did not join AWF, Ark or Epoch AI, as I guessed 20 people, including me, would be invited to stage 2, although I had only completed stage 1.
    • An expected value conditional on joining of 12.0 k$, as estimated above.
  • 6.07 k$ (= 0.607*10.0*10^3) from working freelance. I computed this multiplying:
    • A probability of 60.7 % (= (1 - 0.0556)*(1 - 0.05)^3*0.75) for a 75 % chance of working freelance if I did not join any organisation.
    • An expected value conditional on working freelance of 10.0 k$, as estimated above for my backup option of becoming a math tutor.

Consequently, I arrived at a ratio between the expected value from turning down and accepting an offer from Anonymous Organisation of 1.67 (= 21.4*10^3/(12.8*10^3)).

Decision and outcome

I strongly endorse expectational total hedonistic utilitarianism (increasing happiness, and decreasing suffering), and I got a ratio higher than 1, so I decided I would not accept an offer from Anonymous Organisation. Just kidding! I did decide that, but I had other considerations in mind. I noted I had underestimated the value of turning down the offer because I could get other jobs I did not include, such as other ongoing applications I had. Furthermore, Anonymous Organisation’s project would overwhelmingly involve doing research on human welfare interventions which I do not know to be beneficial or harmful, which would be demotivating. I suggested joining as a volunteer who would work 8 h/week on animal welfare. They replied a few weeks later saying they had figured it was best for them to focus just on human welfare, such that it would not be fair for me to join as a volunteer. So I believe I decided well.

  1. ^

     I wanted to identify the organisation and project, but they asked me to remain anonymous.

Comments (7)



Hi Vasco,

I find it worthwhile to try to illustrate counterfactual reasoning and expected value calculations in the various decisions one may have to make. Thanks for this post! I have comments on the figures in two places:

A freelance math tutor of high school students was 10.0 k$ (= (0 + 10.0)*10^3). I got this adding:

  • 0 from direct work, as I guessed the impact from donations to be way larger.
  • 10.0 k$ from donations. I calculated this from the difference between:
    • The net salary of 10.0 k$ (= 15.0*10^3*(1 - 0.33)). I assumed a gross salary of 15.0 k$ (= 1*10^3*15), which is 1 k hours times 15 $/h, as guessed by me having a quick look into what people listed on Superprof charge in my area, and a 33 % (= 0.22 + 0.11) reduction due to income tax and social security in Portugal. In hindsight, I should probably have assumed less than 1 k hours per year of work.
    • 10 k$, which was my rough guess for my annual expenditure excluding donations.

How can you make $10k in donations as a math tutor if your net salary is $10k and your annual expenses excluding donations are also $10k?

 

A fund manager at the Animal Welfare Fund (AWF) was 211 k$ (= (186 + 25.2)*10^3). I got this adding:

  • 186 k$ (= 930*10^3*0.2) from direct work. I calculated this multiplying:
    • 930 k$ (= (555 + 375)*10^3) granted in 2024 to help wild animals and shrimp in 2024.
    • Annual impact relative to the 2nd best hire equivalent to moving 20 % of the above per year to SWP.
  • 25.2 k$ (= (35.2 - 10)*10^3) from donations (assuming the impact of my donations would be much larger than those of the 2nd best hire because they would not donate much to helping shrimp or wild animals).

Unless we accept double counting (donors who clicked on the GWWC site to donate $930k claim $930k of impact, then GWWC claims to have generated this $930k of value (thanks to $31,000 in donations for their operations, given their giving multiplier of 30x, so that the GWWC donors who provided the $31k also claim $930k of impact), then AWF claims $930k of impact, then the grantees claim $930k of impact), it seems to me that the counterfactual impact actually resulting from the direct work of a fund manager is much lower. I would tend to consider that the bulk of the impact is at the level of the organization that carries out the work useful to the animals and at the level of its funders. For example, for $930k spent by a selection of nonprofits to help shrimps ($930k total impact), I would imagine a distribution along the lines of:

  • 60% specific to grantee nonprofits (if they didn't exist, the initial donors as well as AWF would give much less money due to lack of cost-effective impact opportunities, and much fewer animals would ultimately be helped),
  • 30% specific to AWF donors (if they did not exist, AWF and the charities would do their best to find donors, but would still get less money because it is not easy to create new donors for wild animals),
  • 10% specific to the work of AWF (if they did not exist, the EA-oriented donors who fund the work on shrimp would be a little less successful in identifying good impact opportunities, although they could count on other funds or evaluators, and the charities would be a little less successful in their fundraising, because they would seek funds from funders who do not recognize so well the high value of their work in comparison with shelters for example).

So we would have $93k of impact actually attributed to AWF as a whole, and then we would have to look at how it is distributed among the work of the different people who make the existence of the fund possible (the original founders, even if they are no longer there, because the fund might not exist if they had not been there; the thinkers who influenced their ideas - because they probably would never have done that if they hadn't come across EA literature; the various people involved in managing the fund, etc.).

Then if we assume that the fund manager position is responsible for 30% of AWF's impact (which seems very optimistic to me), we arrive at an impact of $27,900.

Finally, we still need to apply your 20% ratio (which seems high to me given how this kind of position is likely to attract very similar qualified profiles) to get the counterfactual impact of the person who occupies the position compared to the next candidate, and we arrive at a specific individual impact of $5,580 (97% less than your estimate).

Obviously, my percentages depend on how much we think the movement is constrained relatively by the lack of effective interventions vs. the lack of donors vs. the lack of funds vs. the lack of talented fund managers within the funds, etc., but I think you see my point.

Thanks for the comment, Matta! I strongly upvoted it.

How can you make $10k in donations as a math tutor if your net salary is $10k and your annual expenses excluding donations are also $10k?

I cannot! Thanks for finding that error in my calculations. I have now updated the post. The ratio between the expected value from turning down and accepting the offer from Anonymous Organisation went from 1.67 to 1.20.

Unless we accept double counting (donors who clicked on the GWWC site to donate $930k claim $930k of impact, then GWWC claims to have generated this $930k of value (thanks to $31,000 in donations for their operations, given their giving multiplier of 30x, so that the GWWC donors who provided the $31k also claim $930k of impact), then AWF claims $930k of impact, then the grantees claim $930k of impact), it seems to me that the counterfactual impact actually resulting from the direct work of a fund manager is much lower.

I am not sure I followed. I agree the credits respecting the 930 k$ AWF granted in 2024 to help wild animals and shrimp should go to AWF and their donors. In my calculation, there is no double counting because I assumed my impact would come from increasing the cost-effectiveness of the donations of the donors, not from increasing the amount granted by AWF. In essence, I supposed the amount granted by AWF would have been the same with or without me, but that they would have granted 186 k$ (= 930*10^3*0.2) more to helping shrimp or wild animals with me.

Well, I structured my comment by discussing the problem of double counting on the $930k amount and then talking about the impact of the position (20% of the $930k), but in fact it might have been clearer if I had proceeded the other way around (commenting directly on your figure of $186k of “value”).

If I understand your post correctly, you are saying that by being recruited as fund manager for AWF, you will direct $186k to SWP, whereas if you are not recruited, the next candidate will allocate these funds to other interventions whose impact is comparatively negligible, so that the value of your work for 1 year in this position will be 186-0 = $186k.

The point on which I am unsure is that by attributing all the funds moved to the value of your work in this role, I suspect there is double counting, because I fear that:

  • you would end up saying, “By working at AWF for a year, I have moved $186k towards much more effective interventions than before, so the direct impact of my work for animals has been $186k (without me, this money would not have been moved),”
  • and meanwhile, SWP would say, “This year, we spent an additional $186k on animals, so the direct impact of our work on animals has increased by $186k (without us, this money would have helped far fewer shrimps, if any at all),”
  • and furthermore, GWWC would say, “This year again, we found donors who enabled $186k to go to AWF and then to SWP, which cost us $6,200 for a value of $186k (without us, these donations would not have been made),”
  • a GWWC donor would say, “This year again, I donated $6,200 to GWWC, which allowed $186k to go to SWP (without me, these donations would not have been made),”
  • some AWF donors would say “This year, we collectively donated $186k to AWF on the GWWC site, and that money ended up in SWP's account (without us, they wouldn't have had that money)”.

This does not necessarily invalidate the conclusions of this section of your post (the ranking of the various positions in terms of direct impact of the work would be unchanged).

In fact, I would have had no objection to this section if you had avoided talking about “value” and only talked about “amounts moved to SWP”, instead of presenting these terms as equivalent.

Where it may become more debatable is when you sum these redirected amounts with the amounts of donations you make thanks to salaries that are higher than your living expenses, as if in both cases they were “values” that could be added together in the same calculation. Following this logic, $100 redirected to SWP because of your direct work + $100 that you yourself donate to SWP = $200 of value.

However, at this stage of my reflection, it seems to me that to avoid any double counting, we would need to break down the responsibilities of the various actors in the final impact for each of the two terms of the sum:

  • Let's consider the $100 redirected to SWP. Using the assumptions from my previous comment regarding the distribution of responsibilities among the various actors in the production of the final value, we can attribute $60 of specific value (i.e. without double counting) to SWP, $30 of specific value to AWF donors and $10 of specific value to AWF (including $3 of value specifically produced by the fund manager and $7 of value specifically produced by other employees).
  • Now let's consider the $100 that you donate directly to SWP, without an intermediary (I then assume 60% responsibility of SWP in the final impact and 40% responsibility of the donor in the final impact). Your donation produces $40 of value.
  • The overall value produced by your work in this position, specifically attributable to you, is therefore 3+40 = $43, which is very different from the sum 100+100 = $200.

What do you think?

Thanks for clarifying, Matta!

If I understand your post correctly, you are saying that by being recruited as fund manager for AWF, you will direct $186k to SWP, whereas if you are not recruited, the next candidate will allocate these funds to other interventions whose impact is comparatively negligible, so that the value of your work for 1 year in this position will be 186-0 = $186k.

Yes, that is practically it. In rigour, the 2nd best candidate would also direct funds to interventions as cost-effective as SWP. I assumed I would direct 186 k$ more than whatever they would.

What do you think?

@Mata'i Souchon, I have updated this paragraph. I agree more actors would be responsible for the impact linked to AWF granting more to organisations as cost-effective as SWP (me, AWF, their donors, and the organisations) than to that linked to me donating more to such organisations (me, and the organisations). My counterfactual value, which is what I estimated in my post, is the same in both cases, but my Shapley value, which is what matters, is larger in the latter. In both cases, all the actors I listed are necessary to produce impact, so I think I would be responsible for 25 % (= 1/4) of the impact linked to AWF granting more to organisations as cost-effective as SWP, but 50 % (= 1/2) of the impact linked to me donating more to such organisations. So I believe I should have weighted the former 50 % (= 0.25/0.5) as heavily as I originally did in my post. I have now corrected for this by halving the impact of my direct work I originally estimated. The ratio between the expected value from turning down and accepting the offer from Anonymous Organisation went from 1.20 to 1.07.

Thanks to your comments, I went from a ratio of 1.67 to 1.07. My decision would have been the same based on this, but it is a significant update. Thanks for engaging!

@Mata'i Souchon, I have updated this paragraph. I agree more actors would be responsible for the impact linked to AWF granting more to organisations as cost-effective as SWP (me, AWF, their donors, and the organisations) than to that linked to me donating more to such organisations (me, and the organisations). My counterfactual value, which is what I estimated in my post, is the same in both cases, but my Shapley value, which is what matters, is larger in the latter.

I have reverted the changes regarding the Shapley value. Thinking more about it, I realised what matters is not the number of necessary actors, but whether their actions are sufficiently independent from my decision about the offer, which I think they are.

Interested in the human welfare intervention program! Could I reach out by DM to ask for the name? Also totally understand if you are hesitant to provide name as well. Thanks!

Thanks, yz! I have shared the job ad and contact of the hiring manager privately with you. I would be happy to share the same information privately with other readers. They asked to remain anonymous, so please do not share it publicly.

I do not know if they are still hiring. If not, you are welcome to reply to this comment clarifying that.
