All of jh's Comments + Replies

jh · 9mo

Yeah, it seems we do have a semantic difference here. But how you're using 'raw impact units' makes sense to me.

Nice, clear examples! I feel inspired by them to sketch out what I think the "correct" approach would look like. With plenty of room for anyone to choose their own parameters.

Let's simplify things a bit. Say the first round is as described above and its purpose is to fund the organization to test its intervention. Then let's lump all future rounds together and say they total $14m and fund the implementation of the intervention if the tests are s... (read more)

Jason · 9mo
Yes, I think the "generous-to-Givewell" model should be seen as the right bookend on defensible models on available data, just like I see GIF's current model as the left bookend on defensible models. I think it's plausible that $1 to GIF has either higher or lower impact than $1 to GiveWell.  As for the counterfactual impact that other funders would have, I would expect funders savvy enough and impact-motivated enough to give to GIF-supported projects to be a cut above the norm in effectiveness (although full-GiveWell effectiveness is a stretch as you note). Also, the later-round funders could plausibly make decisions while disregarding prior funding as sunk costs, if they concluded that the relevant project was going to go under otherwise. This could be because they are thinking in a one-off fashion or because they don't think their fund/no fund decision will affect the future decisions of early-stage funders like GIF. Although I like your model at a quick glance, I think it's going to be challenging to come up with input numbers we can have a lot of confidence in. If there's relatively low overlap between the GiveWell-style donor base and the GIF-style donor base, it may not be worthwhile to invest heavily enough in that analysis to provide a confidence interval that doesn't include equality.  Also, GiveWell's diminishing returns curve is fairly smooth, fairly stable over time, and fairly easy to calculate -- most of its portfolio is in a few interventions, and marginal funding mostly extends one of those interventions to a new region/country. GIF's impact model seems much more hits-based, so I'd expect diminishing returns to kick in more forcefully. Indeed, my very-low-confidence guess is that GIF is more effective at lower funding levels, but that the advantage switches to GiveWell at some inflection point. All that is to say that we'd probably need to invest resources into continuously updating the relevant inputs for the counterfactual impact forumula.
jh · 9mo

I wouldn't put the key point here down to 'units'. I would say the aggregate units GiveWell tends to use ('units of value' and lives saved) and GIF's (person-years of income-equivalent, "PYI") are very similar. I think any differences in terms of these units are going to be more about subjective differences in 'moral weights'. Other than moral weight differences, I'd expect the same analysis using GiveWell vs GIF units to deliver essentially the same results.

The point you're bringing up, and that Ken discusses as 'Apples vs oranges', is that the analysis... (read more)

Jason · 9mo
I think we mostly have a semantic difference here. At present, I think the method of analysis is so different that it's better not to speak of the units as being of the same type. That's in part based on clarity concerns -- speaking of GIF units and GiveWell units as the same risks people trying to compare them without applying an appropriate method for allocating impact in a comparative context. I think it's possible to agree on a range, but I think that is going to require a lot of data from GIF that it probably isn't in a position to disclose (and which may require several more years of operation to collect).

If I'm understanding Ken correctly, I do not think GIF's current calculation method is sufficient to allow for comparisons between GIF and GiveWell. Let's say GIF gave a $1MM grant to an organization in the first funding round, which is 50% of the total round. Another grantor gave a $2MM grant in the second round (50% of that round), and a third grantor gave $4MM as 50% of a final funding round. (I'm using grants to simplify the toy model.) The organization produces 14 million raw impact units, as projected. If I'm reading the above statement correctly, GIF allocates all 14 million raw units to the first funding round, and assigns itself half of them, for a final impact of 7 raw impact units per dollar.

For this to be comparable to a GiveWell unit (which represents unduplicated impact), you'd have to assign the other two funders zero impact, which isn't plausible. Stated differently, you'd have to assume the other grantors' counterfactual use of the money in GIF's absence would have been to light it on fire.

A generous-to-GiveWell option would be to assume that the other two grantors would have counterfactually given their $6MM to GiveWell. Under this assumption, GIF's impact is 7 million raw impact units for the $1MM minus however many raw impact units GiveWell would have generated with an additional $6MM. Under the assumption that GiveWell converts mon...
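To make the toy model's arithmetic concrete, here's a minimal sketch of the attribution assumptions just described (the GiveWell conversion rate is a placeholder, since the comment is cut off before giving one):

```python
# Toy model from above: three funding rounds, GIF provides 50% of round 1.
gif_grant = 1_000_000      # $1MM, 50% of the first round
other_grants = 6_000_000   # $2MM + $4MM from the later-round grantors
total_units = 14_000_000   # raw impact units produced, as projected

# (a) GIF's method as described: all units go to round 1; GIF takes its share.
gif_units = 0.5 * total_units
print(gif_units / gif_grant)      # 7.0 raw units per dollar

# (b) For (a) to count as unduplicated impact, the other $6MM must be assumed
# to have produced nothing counterfactually ("lit on fire").

# (c) Generous-to-GiveWell: the other grantors would have given to GiveWell.
# The conversion rate below is a placeholder, not a figure from the comment.
givewell_units_per_dollar = 1.0
gif_net = gif_units - other_grants * givewell_units_per_dollar
print(gif_net / gif_grant)        # 1.0 raw unit per dollar with this placeholder
```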

Thank you for engaging with this discussion, Ken!

It's great to have these clarifications in your own words. As you highlight, there are many important and tricky issues to grapple with here. I think we're all excited about the innovative work you're doing and excited to learn more as you're able to publish more information.

Actually, they are more of a grant fund than an impact investment fund. I've updated the post to clarify this. Thanks for bringing it up.

One might call them an 'investing for impact' fund - making whatever investments they think will generate the biggest long-term impact.

The reported projections aren't adjusted for counterfactuals (or additionality, contribution, funging, etc.). I wonder if the fact we're mostly talking about GIF grants vs GiveWell grants changes your worry at all?

For my part, I'd be excited to see more grant analyses (in addition to impac... (read more)

jh · 1y

I'm torn on this post: while I agree with its overall spirit (that EAs can do better at cooperation and counterfactuals, and be more prosocial), I think it makes some strong claims/assumptions that I disagree with. I find it problematic that these assumptions are stated as if they were facts.

First, EA may be better at "internal" cooperation than other groups, but cooperation is hard and internal EA cooperation is far from perfect.

Second, the idea that correctly assessed counterfactual impact is hyperopic. Nope, hyperopic assessments are just a sign of... (read more)

Davidmanheim · 1y
I'm frustrated by your claim that I make strong claims and assumptions, since it seems like what you disagree with me about are conclusions you'd have from skimming, rather than engaging, and being extremely uncharitable.

First, yes, cooperation is hard, and EAs do it "partially." I admit that fact, and it's certainly not the point of this post, so I don't think we disagree.

Second, you're smuggling the entire argument into "correctly assessed counterfactual impact" - and again, sure, I agree that if it's correct, it's not hyperopic. But "correct" requires a game-theoretic approach, which we don't generally use in practice.

Third, I don't think we should just use Shapley values, which you seem to claim I believe. I said in the conclusion, "I'm unsure if there is a simple solution to this," and I agreed that it's relevant only where we have goals that are amenable to cooperation. Unfortunately, as I pointed out, in exactly those potentially cooperative scenarios, it seems that EA organizations are the ones attempting to eke out marginal attributable impact instead of cooperating to maximize total good done.

I've responded to the comment about Toby's claims, and again note that those comments assume we're not in a potentially cooperative scenario, or that we are pretending we get to ignore the way others respond to our decisions over time.

And finally, I don't know where your attack on economists is coming from, but it seems completely unrelated to the post. Yes, we need more practical work on this, but more than that, we need to admit there is a problem, and stop using poorly reasoned counterfactuals about other groups' behavior - something you seem to agree with in your comment.
jh · 1y

Interesting thesis! Though it's his doctoral thesis, not from one of his bachelor's degrees, right?

Gavin · 1y
Yep ta, even says so on page 1. 
jh · 1y

Yes, and is there a proof of this that someone has put together? Or at least a more formal justification?

anonymous6 · 1y
Here's one set of lecture notes (I don't endorse them as necessarily the best; they're just the first I found quickly): https://lucatrevisan.github.io/40391/lecture12.pdf

Keywords to search for other sources would be "multiplicative weight updates", "follow the leader", "follow the regularized leader". Note that this is for what's sometimes called the "experts" setting, where you get full feedback on the counterfactual actions you didn't take. But the same approach basically works, with some slight modification, for the "bandit" setting, where you only get to see the result of what you actually did.
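To make those keywords concrete, here's a minimal sketch of the experts-setting algorithm (my own toy implementation, not code from the notes; the bandit variant, e.g. EXP3, replaces the full payoff vector with an estimate built from the single observed payoff):

```python
import numpy as np

def mwu(payoffs, eta=0.1):
    """Multiplicative weight updates ("Hedge") in the experts setting.

    payoffs: (T, K) array of per-round payoffs for all K experts
    (full feedback: every expert's payoff is observed each round).
    Returns the played distribution over experts at each round.
    """
    T, K = payoffs.shape
    log_w = np.zeros(K)            # log-domain weights for numerical stability
    dists = []
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        dists.append(p / p.sum())
        log_w += eta * payoffs[t]  # multiplicatively upweight good experts
    return np.array(dists)

rng = np.random.default_rng(0)
payoffs = rng.normal(size=(500, 3)) + np.array([0.0, 0.1, 0.0])
print(mwu(payoffs)[-1])  # weight typically concentrates on expert 1
```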
jh · 2y

A comment and then a question. One problem I've encountered in trying to explain ideas like this to a non-technical audience is that the standard rationales for 'why softmax' are either a) technical or b) not convincing, or even condescending about its value as a decision-making approach. Indeed, the 'Agents as probabilistic programs' page you linked to introduces softmax as "People do not always choose the normatively rational actions. The softmax agent provides a simple, analytically tractable model of sub-optimal choice." The 'Softmax demy... (read more)

One justification might be that in an online setting where you have to learn which options are best from past observations, the naive "follow the leader" approach -- always choosing whichever action looks best so far -- is easily exploited by an adversary.

This problem resolves itself if you make actions more likely if they've performed well, but regularize a little to smooth things out. The most common regularizer is entropy, and then as described on the "Softmax demystified" page, you basically end up recovering softmax (this is the well-known "multiplicative weight updates" algorithm).
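A quick sketch of that recovery (my own illustration of the standard closed form: maximizing expected payoff plus an entropy bonus over the probability simplex yields a softmax of cumulative payoffs, with the regularization strength acting as temperature):

```python
import numpy as np

def ftrl_entropy(cum_payoffs, temperature=1.0):
    # argmax_p <p, cum_payoffs> + temperature * entropy(p)
    # over the simplex has closed form softmax(cum_payoffs / temperature).
    z = cum_payoffs / temperature
    p = np.exp(z - z.max())        # subtract max for numerical stability
    return p / p.sum()

scores = np.array([3.0, 2.5, 1.0])
print(ftrl_entropy(scores, temperature=0.5))  # close to follow-the-leader
print(ftrl_entropy(scores, temperature=5.0))  # heavily smoothed
```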

jh · 2y

Good to see more and more examples of using Squiggle. Do you think you can use these or future examples to really show how this leads to "ultimately better decisions"?

NunoSempere · 2y
So two questions here:

  • Can people get value from simple estimates? Definitely yes, and e.g., as in the case of the expected value of a list of careers, the estimates don't even have to be that complicated.
  • Can people get value from Squiggle specifically? I'm betting on yes, but this is still to be seen, and we're still relatively in the early days.

Lmk if you want me to quantify this more precisely (e.g., improving decisions worth X million by Y percent in the next T years).
jh · 2y

Thanks for sharing this reference, Inga!

jh · 2y

Thanks for putting this idea out there, Michael!

I have several questions, all in the spirit of helping you sharpen up the idea:

  • Why a loan product? Is that to mimic cat bonds? Standard insurance (just pay the premiums) would be even easier for the client, wouldn't it?
  • It seems to me that existing players (banks, FTX) have a strong competitive advantage in formally creating new products. Perhaps this organization could have more value add as an advisory/intermediary. Helping clients implement and manage such strategies (part of which may include helping with d
... (read more)
[comment deleted] · 2y

Thanks for checking and sharing that update, Pablo! 

By the way, I expect to see 'mission hedging' continue to be the most 'commonly' used term in this area because this is arguably the right way to describe the AI portfolio Open Philanthropy has publicly mentioned considering. That is, if we label short AI timelines as a bad thing, then this is 'hedging'. Still, I do like to put it in the overall 'mission-correlated' bucket so we remember that the key bet with this portfolio is that short timelines lead to higher cost-effectiveness (i.e. we're betting timelines and cost-effectiveness are correlated).

So, obviously you and Pablo have a better sense of what is desired on the Forum/Wiki in general. I am just going on intuition.

If this is important, it would be helpful to know in more detail what place original research is supposed to have on the Forum/Wiki, and the same for summaries of existing research. Is a series of 'original research' EA Forum posts on mission-correlated investing acceptable? Then, as the 'mission-correlated investing' Wiki tag summarizes those posts, it becomes a summary of existing research.

Stefan_Schubert · 2y
Certainly - the function of forum posts is totally different from that of Wiki tags. In general, I'd say that the more posts there are using a particular concept, the stronger the case for a tag on that concept - yes. It's a bit hard for me to tell what the exact cut-off point is, though. Pablo would have a better sense of that, since he works on the Wiki.

That's an interesting point you make. I think you might have mistaken 'mission-correlated investing' as a replacement/equivalent for 'mission hedging'? Rather, the latter is a subset of the former.


For the record, some other relevant points:

i. The orders of magnitude of hits for 'mission hedging' need to be taken with a pinch of salt. It doesn't look to me like thousands of people are talking about mission hedging; rather, it's thousands of crossposts and similar listings, as well as false hits.

ii. When I created this tag (as 'mission hedging') there was n... (read more)

Stefan_Schubert · 2y
"I think you might have mistaken 'mission-correlated investing' as a replacement/equivalent for 'mission hedging'? Rather, the latter is a subset of the former." I don't think he has, but that he understands that it's a subset. I think it's fine to have an article on a subset of X and then discuss X as part of that article (if one wants to focus more on the subset. for whatever reason). In general, I share the intuition that the Wiki isn't the place for original research, but should summarise original research and usage. That means that I'd put a lot of weight on Pablo's point that "the expression "mission-correlated investing" is not established EA terminology".

Thanks Stefan! The definition before was hard to parse. I've updated it and hope it's better now. 

I'm not sure I agree about mission hedging being more intuitive. Perhaps, especially if 'investing in evil to do more good' is intuitive or memorable. But how many people who have read early articles about mission hedging would be able to point out that it both increases the expected value of good done and decreases the variance?

If what is intuitive is 'investing to have more money in worlds where money is more valuable' then that is mission-correlated investing. 

I agree examples are important. There are now more posts with examples so hopefully that helps.

Pablo · 2y
Hi,

As far as I can tell, the expression "mission-correlated investing" is not established EA terminology. A Google search for "mission-correlated investing" produces only eight hits. This is three orders of magnitude less than the hits produced by "mission hedging", which was the original name of this article. So I suggest sticking with that name, and revising the article's contents accordingly. We can always switch back to "mission-correlated investing" if and when the EA community decides to adopt it as the canonical designation for this idea.
jh · 2y

Thank you Wayne and Michael for the helpful nudges and encouragement.

I agree that the table at the bottom of the post was at best ambiguous. I have now deleted it from this post, revised it and turned it into this new post with several examples.

This post, without the table, remains to make the point that 'mission hedging' is just a subset of 'mission-correlated investing', and that mission-correlation research needs to focus on forecasting cost-effectiveness, not on whether the world is 'good' or 'bad'.

jh · 2y

Thanks for the kind words, Ramiro. Yes, it's on my to-do list both to write more short posts on the key ideas in that paper and to revise the paper itself to make it easier to follow (it's too ambitious as it stands).

jh · 2y

(I drafted this then realized that it is largely the same as Zac's comment above - so I've strong upvoted that comment and I'm posting here in case my take on it is useful.)

Crowding in other funding

We're excited to see ideas for structuring projects in our areas of interest that leverage our funds by aligning with the tastes of other funders and investors. While we are excited about spending billions of dollars on the best projects we can find, we're also excited to include other funders and investors in the journey of helping these projects scale in ... (read more)

jh · 2y

Investment strategies for longtermist funders

Research That Can Help Us Improve, Epistemic Institutions, Economic growth

Because of their non-standard goals, longtermist funders should arguably follow investment strategies that differ from standard best practices in investing. Longtermists place unusual value on certain scenarios and may have different views of how the future is likely to play out. 

We'd be excited to see projects that make a contribution towards producing a pipeline of actionable recommendations in this regard. We think this is mostly a... (read more)

I have had a similar idea, which I didn't submit, relating to trying to create investor access to tax-deductible longtermist/patient philanthropy funds across all major EA hubs. Ideally these would be scaled up from/modelled on the existing EA long-term future fund (which I recall reading about but can't find now, sorry).

 

Edit - found it and some ideas - see this and top level post.

brb243 · 2y
A systemic change investment strategy for your review.
Greg_Colbourn · 2y
Just going to note that SBF/FTX/Alameda are already setting a very high benchmark when it comes to investing!
JBPDavies · 2y
You may be interested in the following project I'm working for: https://deeptransitions.net/news/the-deep-transition-futures-project-investing-in-transformation/ . The project goal is developing a new investment philosophy & strategy (complete with new outcome metrics) aimed at achieving transformational systems change. The project leverages the Deep Transitions theoretical framework, as developed within the field of Sustainability Transitions and Science, Technology and Innovation Studies, to create a theory of change and subsequently enact it with a group of public and private investors.

Would recommend diving into this if you're interested in the nexus of investment and transformation of current systems/shaping future trajectories. I can't say too much about future plans at this stage, except that following the completion of the current phase (developing the philosophy, strategies and metrics), there will be an extended experimentation phase in which these are applied, tested and continuously redeveloped.
jh · 2y

Also, if you combine $1/ton with the estimated lives per ton from Bressler's paper, then you get $4,400 per life saved.
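For the record, the arithmetic behind that figure (assuming I'm recalling Bressler's central 2020 estimate correctly: roughly one temperature-related excess death per 4,434 tCO2 emitted): at $1 per ton of CO2 averted, saving one expected life costs about 4,434 × $1 ≈ $4,400.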

jh · 2y

Yes, what I was trying to say was that in my opinion the word 'Scalability' is a good match for 80,000 Hours' stated definition of Solvability. In practice, Solvability and Tractability are not used as if they represent Scalability. I think this is a shame as: a) I think Scalability makes sense given the mathematical intuition for ITN developed by Owen Cotton-Barratt, and b) I think there is a risk of circular logic in how people use Solvability/Tractability (e.g. they judge them based on a sense of the marginal cost-effectiveness of work on a problem).
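For reference, the factorization I have in mind (my paraphrase of Cotton-Barratt's write-up, as used in 80,000 Hours' framework) chains three ratios, so the middle 'Solvability' factor is exactly the 'if we doubled the resources' question:

good done per extra $ = (good done / % of problem solved) × (% of problem solved / % increase in resources) × (% increase in resources / extra $)

i.e. Importance × Solvability × Neglectedness. Read this way, Solvability is a claim about how output scales with resources, which is why I see 'Scalability' as a good match for it.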

I ag... (read more)

jh · 2y

Very well put!

I would add that Scalability is already implicitly there in the ITN/SSN framework. At least if you take 80,000 Hours' description of Solvability at face value (i.e. "if we doubled the resources dedicated to solving this problem, what fraction of the problem would we expect to solve?"). Though this is just my observation and not a common opinion.

With limited investment, more scalable projects will tend to have higher cost-effectiveness because they will still have plenty of room for more funding.

What is happening with the 'modern' view is tha... (read more)

MichaelA · 2y
I agree with your final three paragraphs, but:

1. You seem to be implying that Scalability was one of the terms in ITN/SSN, which I think it never was.
   1. The Ss have been Scale and Solvability, which aren't the same as Scalability.
   2. iirc, Charity Entrepreneurship does account for scalability in their own weighted factor models or frameworks, but that's separate from ITN.
2. I don't think the ITN/SSN frameworks made the points in my post or in your final three paragraphs clear.
   1. Those are primarily frameworks for prioritizing among problems, not projects.
   2. "if we doubled the resources dedicated to solving this problem, what fraction of the problem would we expect to solve?" doesn't tell me how scalable a given project is. "resources dedicated to solving this problem" would mean things like total resources dedicated to solving wild animal suffering or extreme climate change risks, not resources dedicated toward a given project.
      1. You could have cases where a given project could grow to 100 times its current size without losing much cost-effectiveness per dollar, and yet the cost-effectiveness was fairly low to begin with or the problem area it's related to isn't very tractable.
      2. You could also have cases where a project is very cost-effective and is in a very tractable area but isn't very scalable.
   3. Scale, Tractability, and Neglectedness are also often used to evaluate intervention or project ideas, but in that case Scale is used to mean things like "How big would the impacts be if the project were successful?" or "How big a problem is this aiming to tackle?", rather than things like "How large can this project grow to while remaining somewhat cost-effective?"
jh · 2y

This is a nice post that touches on many important topics. One little note for future reference: I think the logic in the section 'Extended Ramsey model with estimated discount rate' isn't quite right. To start, it looks like the inequality is missing a factor of 'b' on the left-hand side. More importantly, the result here depends crucially on the context. The one used is log utility with initial wealth equal to 1. This leads to the large, negative values for small delta. It also makes cost-effectiveness become infinitely good as delta becomes small. Th... (read more)
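As a minimal illustration of the context-dependence (my notation, since the post's inequality isn't reproduced here): under exponential discounting, the present value of a constant unit flow of utility is ∫₀^∞ e^(−δt) dt = 1/δ, which diverges as δ → 0. Any cost-effectiveness ratio with a term like this in the numerator will look arbitrarily good for small enough δ, so the headline behavior comes from the log-utility-with-initial-wealth-1 setup as much as from the intervention being evaluated.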

jh · 2y

I'm still not sure I understand your point(s). The payment of the customers was accounted for as a negligible (negative) contribution to the net impact per customer.

To put it another way: think of the highly anxious customers as each getting $100 in benefits from the App plus 0.02 DALYs averted (for themselves) on top of this, with the additional DALYs discounted for the possibility that they could have used another App.

Say the App fee is $100. This means that to unlock the additional DALYs, the users as a group will pay $400 million over 8 years.

The investor puts in ... (read more)
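Spelling out the arithmetic above in a minimal sketch (the customer count is inferred from the $100 fee and the $400m total, so treat it as illustrative):

```python
# Figures from the comment above; the customer count is inferred, not stated.
fee = 100                   # $ per highly anxious customer
total_paid = 400e6          # $ paid by users as a group over 8 years
dalys_per_customer = 0.02   # averted, already discounted for app-switching

customers = total_paid / fee                  # 4,000,000 customers
extra_dalys = customers * dalys_per_customer  # 80,000 DALYs
print(f"{customers:,.0f} customers, {extra_dalys:,.0f} DALYs averted")
```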

Paul_Lang · 2y
I agree with your statement that "The message of the post is that specific impact investments can pass a high effectiveness bar". But when you say "I think the message of this post isn't that compatible with general claims like 'investing is doing good, but donating is doing more good'", I think I must have been misled by the decision matrix.

To me it suggested this comparison between investment and donation, while not being able to resolve a difference between the columns "Pass & Invest to Give" and "Pass & Give now" (and a hypothetical column "Pass & Invest to keep the money for yourself", with presumably all-zero rows), which would all result in zero total portfolio return. (Differences between these three options would become visible if the consumer wallet were included and the "Pass & Invest to Give" column created impact through giving, as the "Pass & Give now" column does.)

Anyway, I now understand that this comparison between investing and donating was never the message of the post, so all good.
jh · 2y

Thanks for this comment and question, Paul.

It's absolutely true that the customers' wallets are worth potentially considering. An early reviewer of our analysis also made a similar point. In the end, we are fairly confident this turns out not to be a key consideration. The key reason is that mental health is generally found to be a service for which people's willingness to pay is far below the actual value (to them). Especially for the likely paying-customer markets, e.g. high-income-country iPhone users, the subscription costs were judged to be trivial compa... (read more)

Paul_Lang · 2y
Thanks for the response. My issue was just that the money flow from the customer to the investor was accounted as positive for the investor, but not negative for the customer. I see the argument that customers are reasonably well off non-EAs whereas the investor is an EA. I am not sure if it can be used to justify the asymmetry in the accounting. Perhaps it would make sense that an EA investor is only 10% altruistic and 90% selfish (somewhat in line with the 10% GW pledge)? The conclusion of that would be that investing is doing good, but donating is doing more good.
jh · 2y

Thanks Alex.

On Angel Investing, in case you haven't seen it, there is this case study. But much more to discuss.

On Technology Deployment, are there any links you can share as examples of what you have in mind?

jh · 2y

Hi Derek, hope you are doing well. Thank you for sharing your views on this analysis that you completed while you were at Rethink Priorities.

The difference between your estimates and Hauke's certainly made our work more interesting.

A few points that may be of general interest:

  • For both analysts we used 3 estimates: an 'optimistic guess', a 'best guess' and a 'pessimistic guess'.
  • For users from middle-income countries we doubled the impact estimates. Without reviewing our report/notes in detail, I don't recall the rationale for the specific value of this mult
... (read more)
jh · 2y

Just to add that in the analysis we only assumed Mind Ease has impact on 'subscribers'. This means paying users in high-income countries (and active/committed users in low/middle-income countries). We came across this pricing analysis while preparing our report. It has very little to do with impact, but it does a) highlight Brendon's point that Headspace/Calm are seen as meditation apps, and b) show that anxiety reduction looks to be among the highest willingness-to-pay / high-value-to-the-customer segments into which Headspace/Calm could expand (e.g. by rel... (read more)

jh · 2y

Just to add, for the record, that we released most of Hauke's work because it was a meta-analysis that we hope contributes to the public good. We haven't released either Hauke or Derek's analyses of Mind Ease's proprietary data. Though, of course, their estimates and conclusions based on their analyses are discussed at a high level in the case study.

jh · 2y

To add two additional points to Brendon's comment.

The 1,000,000 active users is cumulative over the 8 years. So, just for example, it would be sufficient for Mind Ease to attract 125,000 users each year. Still very non-trivial, but not quite as high a bar as 1,000,000 MAU.

We were happy with the 25% chance of success primarily because of the base rates Brendon mentioned. In addition, this can include the possibility that Mind Ease isn't commercially viable for reasons unconnected to its efficacy, so the IP could be spun out into a non-profit. We didn't ... (read more)

jh · 2y

Thought provoking post, thanks Jackson.

You humbly note that creating an 'EA investment synthesis' is above your pay grade. I would add that synthesizing EA investment ideas into a coherent framework is a collective effort that is above any single person's pay grade. Also, that I would love to see more people from higher pay grades, both in EA and outside the community, making serious contributions to this set of issues. For example, top finance or economics researchers or related professionals. Finally, I'd also say that any EA with an altruistic strategy ... (read more)

jh · 2y

Yes, Watson and Holmes definitely discuss other approaches which are more like explicitly considering alternative distributions. And I agree that the approach I've described does have the benefit that it can uncover potentially unknown biases and works for quite complicated models/simulations. Hence why I've found it useful to apply to my portfolio optimization with altruism paper (and to some practical work), along with common-sense exploration of alternative models/distributions.

jh · 2y

Great question and thanks for looking into this section. I've now added a bit on this to the next version of the paper I'll release.

Watson and Holmes investigate this issue :)

They propose several heuristic methods that use simple rules or visualization to rule out values where the robust distribution becomes 'degenerate' (that is, puts an unreasonable amount of weight on a small set of scenarios). How to improve on these heuristics seems to be an open problem.

It seems to me that what seem like different techniques, like cross validation, are ultimately t... (read more)
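To illustrate the degeneracy issue concretely, here's a toy sketch (my own construction: within a KL neighbourhood, the worst-case reweighting of simulated scenarios is an exponential tilt, and effective sample size is one simple diagnostic in the spirit of those heuristics):

```python
import numpy as np

rng = np.random.default_rng(1)
outcomes = rng.normal(loc=1.0, scale=1.0, size=10_000)  # simulated scenarios

for psi in [0.1, 1.0, 5.0]:
    w = np.exp(-psi * outcomes)   # tilt weight toward bad outcomes
    w /= w.sum()
    ess = 1.0 / np.sum(w**2)      # effective sample size
    print(f"psi={psi}: robust mean={np.sum(w * outcomes):+.2f}, ESS={ess:,.0f}")
# Large psi drives ESS toward a handful of scenarios -- the 'degenerate' case.
```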

MichaelStJules · 2y
I'm thinking in practice, it might just be better to explicitly consider different distributions, and do a sensitivity analysis for the expected value. You could maximize the minimum expected value over the alternative distributions (although maybe there are better alternatives?). This is especially helpful if there are specific parameters you are very concerned about and you can be honest with yourself about what you think a reasonable person could believe about their values, e.g. you can justify ranges for them.

Maybe it's good to do both, though, since considering other specific distributions could capture most of your known potential biases in cases where you suspect they could be large (and you don't think your risk of bias is as high in other ways than the ones covered), while the approach you describe can capture further unknown potential biases.

Cross-validation could help set ψ when your data follows relatively predictable trends (and is close to random otherwise), but it could be a problem for issues where there's little precedent, like transformative AI/AGI.
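A minimal sketch of that maximin idea (all payoffs and candidate distributions invented for illustration):

```python
import numpy as np

actions = {"fund_A": np.array([10.0, 2.0, -5.0]),   # payoff per scenario
           "fund_B": np.array([6.0, 5.0, 1.0])}

# Distributions over scenarios that a reasonable person could hold.
candidate_dists = [np.array([0.6, 0.3, 0.1]),
                   np.array([0.3, 0.4, 0.3]),
                   np.array([0.1, 0.3, 0.6])]

def worst_case_ev(payoffs):
    return min(float(p @ payoffs) for p in candidate_dists)

for name, payoffs in actions.items():
    print(name, worst_case_ev(payoffs))
print("maximin choice:", max(actions, key=lambda a: worst_case_ev(actions[a])))
```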
jh · 2y

Great points. You've inspired me to look at ways to put more emphasis on these ideas in the discussion section that I haven't yet added to the model paper.

Developing a stream of the finance literature that further develops and examines ideas from the EA community is one of the underlying goals of these papers. I believe these ideas are valid and interesting enough to attract top research talent, and that there is plenty of additional work to do to flesh them out, so having more researchers working on these topics would be valuable.

In this c... (read more)

jh · 2y

Thanks Madhav. I'm a big fan of using simple language most of the time. In this case all of those words are pretty normal for my target audience.

jh · 3y

Thanks Sjir. Interesting thought to muse on.

Just quickly riffing on the example in this post, if you have a great business idea that will only work under one politician you might bet on them. Or if you think one politician will be good for your current job, but the other could make it optimal for you to retrain and change jobs, then bet on the other. Or if one will make you want to leave the country, then bet on them to help with your moving costs.

jh · 3y

Great point and perhaps more interesting than you might have expected.

To repeat back what I think you meant: what I've called the mission hedging strategy for this case makes the two possible outcomes 15 vs 0, while for just donating the possible outcomes are 10 vs 1. So actually the variance of outcomes is higher. It's more like anti-hedging.

First, this depends on how happy you are about Biden v Trump for other reasons. If a Biden win is worth +100 in utility for you and Trump -100, then the mission hedging outcomes are 115 & -100, whereas for simply ... (read more)

jh · 3y
@Neel Nanda. Quick update: I've now discussed this offline with a bunch of people who are considering potential strategies of this nature. It seems to me that 'mission-correlated investing' is a better umbrella term for these strategies that work with financial-mission correlations to enhance expected value. 'Mission hedging' strategies would be the subset of mission-correlated strategies that both increase expected value and reduce the variance of outcomes.
jh · 3y

Thank you jackva. Great points on this specific example.

In general, suppose we didn't think this was a special moment. Then essentially this means we think 'investing to give' also presents a good opportunity. If 'investing to give' is also 10x CCF under Trump, then indeed you would want to just wait and either give under Biden or invest to give. But if 'investing to give' is only 5x CCF, then we're in the scenario I discussed under 'More general context'. So, fair point, I have added a sentence to the main post to explicitly rule out 'investing to give' b... (read more)