jh

Karma: 593 · Joined Jun 2021

Bio

Jonathan Harris, PhD | jonathan@total-portfolio.org

Total Portfolio Project's goal is to help altruistic investors prioritize the most impactful funding opportunities, whether that means grants, investing to give or impact investments. Projects we've completed range from theoretical research (like in this post), to advising on high impact investment deals, to strategic research on balancing give now versus later for new major donors.

Comments (37)

Topic Contributions (4)

Interesting thesis! Though, it's his doctoral thesis, not from one of his bachelor's degrees, right?

Yes, and is there a proof of this that someone has put together? Or at least a more formal justification?

A comment and then a question. One problem I've encountered in trying to explain ideas like this to a non-technical audience is that the standard rationales for 'why softmax' are either (a) technical or (b) unconvincing, or even condescending about its value as a decision-making approach. Indeed, the 'Agents as probabilistic programs' page you linked to introduces softmax with: "People do not always choose the normatively rational actions. The softmax agent provides a simple, analytically tractable model of sub-optimal choice." The 'Softmax demystified' page offers relatively technical reasons (smoothing is good, flickering is bad) and an unsupported claim (that it is good to pick lower-utility options some of the time). Implicitly, this gives presentations of ideas like this the flavor of "trust us, you should use this because it works in practice, even if it has origins in what we think is irrational or can't justify". And, to be clear, I say that as someone who's on your side, trying to think of how to share these ideas with others. I think there is probably a link between what I've described above and Michael Plant's point (3).

So, I wonder if 'we can do better' in justifying softmax (and similar approaches). What is the most convincing argument you've seen?

I feel like the holy grail would be an empirical demonstration that an RL agent develops softmax-like behavior across a range of realistic environments, and/or a theoretical argument for why this should happen.
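As a side note, for readers less familiar with the mechanics: the softmax decision rule itself is easy to sketch. Here is a minimal, illustrative Python version (my own sketch, not taken from either of the linked pages):

```python
import math
import random

def softmax_choice(utilities, temperature=1.0):
    """Pick an action with probability proportional to exp(utility / temperature).

    As temperature -> 0 this approaches pure utility maximization (argmax);
    higher temperatures spread probability over lower-utility options.
    """
    # Subtract the max utility before exponentiating, for numerical stability.
    m = max(utilities)
    weights = [math.exp((u - m) / temperature) for u in utilities]
    total = sum(weights)
    probs = [w / total for w in weights]
    chosen = random.choices(range(len(utilities)), weights=probs)[0]
    return chosen, probs
```

For utilities [0.0, 1.0] at temperature 1, the choice probabilities come out to roughly [0.27, 0.73]; at temperature 0.01 the agent picks the higher-utility option almost surely.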

Good to see more and more examples of using Squiggle. Do you think you can use these or future examples to really show how this leads to "ultimately better decisions"?

Thanks for sharing this reference, Inga!

Thanks for putting this idea out there, Michael!

I have several questions, all in the spirit of helping you sharpen up the idea:

  • Why a loan product? Is that to mimic cat bonds? Standard insurance (just pay the premiums) would be even easier for the client, wouldn't it?
  • It seems to me that existing players (banks, FTX) have a strong competitive advantage in formally creating new products. Perhaps this organization could add more value as an advisor/intermediary, helping clients implement and manage such strategies (part of which may include helping design and specify the products). Have you considered that?
  • On profit-seeking, if there was big enough demand out there, wouldn't prediction markets be bigger already? 

Happy to talk offline too. But I do like seeing open discussion of these ideas on the forum. Whatever input you get on this post will hopefully be useful for others considering similar ideas. For now mission-correlated investing and related ideas seem mostly academic (and perhaps private/secret). If you find otherwise it would be exciting to see that shared.

Thanks for checking and sharing that update, Pablo! 

By the way, I expect to see 'mission hedging' continue to be the most 'commonly' used term in this area because this is arguably the right way to describe the AI portfolio Open Philanthropy has publicly mentioned considering. That is, if we label short AI timelines as a bad thing, then this is 'hedging'. Still, I do like to put it in the overall 'mission-correlated' bucket so we remember that the key bet with this portfolio is that short timelines lead to higher cost-effectiveness (i.e. we're betting timelines and cost-effectiveness are correlated).

Obviously, you and Pablo have a better sense of what is desired on the Forum/Wiki in general; I'm just going on intuition here.

If this is important, it would be helpful to know in more detail what place original research is supposed to have on the Forum/Wiki, and the same for summaries of existing research. Is a series of 'original research' EA Forum posts on mission-correlated investing acceptable? Then, as the 'mission-correlated investing' Wiki tag summarizes those posts, it becomes a summary of existing research.

That's an interesting point you make. I think you might have mistaken 'mission-correlated investing' for a replacement/equivalent of 'mission hedging'? Rather, the latter is a subset of the former.


For the record, some other relevant points:

i. The orders of magnitude of hits for 'mission hedging' need to be taken with a pinch of salt. It doesn't look to me like thousands of people are talking about mission hedging; rather, it's thousands of crossposts and similar listings, as well as false hits.

ii. When I created this tag (as 'mission hedging') there was no tag, three years or so after Hauke's original article. This isn't a strong indication of EA attachment to the term.

iii. It was then correctly pointed out to me by an astute forum member that 'mission hedging' is only a good term for a subset of the strategies that match the underlying idea ('invest to have more money when it will be more valuable'). 'Mission-correlated investing' is a natural term for the whole idea (though suggestions for catchier terms are welcome). Hence I updated the tag to 'mission-correlated investing'.

iv. My categorization of the 9 posts currently linked to the term would be 5 'mission-correlated investing', 3 strictly 'mission hedging', 1 ambiguous. So, if we were to add a 'mission hedging' tag as well, it would have 3-4 posts. 

v. My intuitions when creating this tag, and when refining it to 'mission-correlated investing', were that it's helpful to have one tag collecting all posts related to this niche, and to bring together everyone thinking about these ideas, whether they've only heard of 'mission hedging' so far or are deep into considering all angles of 'mission-correlated investing'.

vi. I would say I'm in regular contact with the other main existing authors on 'mission hedging'/'mission-correlated investing'. I'd be really excited to learn there were secretly a ton of people actively mission hedging. It would be great for them to share what they're doing, and with enough posts that would justify using the term.

Thanks Stefan! The definition before was hard to parse. I've updated it and hope it's better now. 

I'm not sure I agree about mission hedging being more intuitive. Perhaps, especially if 'investing in evil to do more good' is intuitive or memorable. But how many people who have read the early articles about mission hedging would be able to point out that it both increases the expected value of good done and decreases its variance?

If what is intuitive is 'investing to have more money in worlds where money is more valuable' then that is mission-correlated investing. 
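To make the "higher expected value, lower variance" claim concrete, here is a toy two-state sketch in Python. All numbers are hypothetical, chosen only to illustrate the mechanism: a portfolio that pays more in the state where donations are more cost-effective (and where the world is otherwise worse off) can both raise the expected amount of good done and reduce its variance, relative to an uncorrelated portfolio with the same expected payout.

```python
# Toy two-state world: AI timelines are either 'short' or 'long', 50/50.
# All figures below are made up for illustration.
p_short = 0.5
baseline_good = {"short": 100, "long": 400}  # good done by others; worse if timelines are short
cost_eff = {"short": 3.0, "long": 1.0}       # impact per dollar donated in each state

def impact_stats(payout):
    """Expected value and variance of total good, given a portfolio payout per state."""
    total = {s: baseline_good[s] + payout[s] * cost_eff[s] for s in payout}
    ev = p_short * total["short"] + (1 - p_short) * total["long"]
    var = (p_short * (total["short"] - ev) ** 2
           + (1 - p_short) * (total["long"] - ev) ** 2)
    return ev, var

# Both portfolios have the same expected payout of 100.
index_fund = {"short": 100, "long": 100}          # uncorrelated with cost-effectiveness
mission_correlated = {"short": 140, "long": 60}   # pays more when money matters more
```

With these numbers, the mission-correlated portfolio yields expected good of 490 versus 450 for the index fund, with a smaller spread between the two states, so both halves of the claim show up in one example.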

I agree examples are important. There are now more posts with examples so hopefully that helps.
