jh

670 karma · Joined Jun 2021

Bio

Jonathan Harris, PhD | jonathan@total-portfolio.org

Total Portfolio Project's goal is to help altruistic investors prioritize the most impactful funding opportunities, whether that means grants, investing to give, or impact investments. Projects we've completed range from theoretical research (like in this post), to advising on high-impact investment deals, to strategic research on balancing giving now versus later for new major donors.

Comments: 38

Topic Contributions: 4

jh
2mo · 174

I'm torn about this post: while I agree with its overall spirit (that EAs can do better at cooperation and counterfactuals, and be more prosocial), it makes some strong claims/assumptions that I disagree with. I find it problematic that these assumptions are stated as if they were facts.

First, EA may be better at "internal" cooperation than other groups, but cooperation is hard and internal EA cooperation is far from perfect.

Second, the idea that correctly assessed counterfactual impact is hyperopic. Nope, hyperopic assessments are just a sign of not getting your counterfactual right.

Third, the idea that Shapley values are the solution. I like Shapley values, but only within the narrow constraints for which they are well specified: environments where cooperation should inherently be possible, i.e. where all agents agree on the value that is being created. In general you need an approach that can handle both cooperative and adversarial environments and everything in between. I'd call that general approach counterfactual impact. I see another commenter has noted Toby's old comments about this, and I'll second that.
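
To illustrate the narrow, fully cooperative setting I mean, here's a toy Shapley value calculation sketched in Python. The two-player game and its coalition values are made-up numbers, not from any real case:

```python
from itertools import permutations

# Toy cooperative game in which all players agree on the value each
# coalition creates (the setting where Shapley values are well specified).
# Coalition values are made-up numbers for illustration.
value = {
    frozenset(): 0,
    frozenset({"A"}): 4,
    frozenset({"B"}): 3,
    frozenset({"A", "B"}): 10,  # cooperating creates surplus
}

def shapley(player, players, value):
    """Average marginal contribution of `player` over all join orders."""
    orders = list(permutations(players))
    total = 0.0
    for order in orders:
        before = frozenset(order[: order.index(player)])
        total += value[before | {player}] - value[before]
    return total / len(orders)

players = ["A", "B"]
print({p: shapley(p, players, value) for p in players})
# {'A': 5.5, 'B': 4.5} -- the attributions sum to the grand coalition's
# value (10), which is exactly the property that breaks down once agents
# disagree about what 'value' is being created.
```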

Finally, economists may do more counterfactual reasoning than other groups, but that doesn't mean they have it all figured out. Ask your average economist to quickly model a counterfactual and it could easily end up being myopic or hyperopic too. The real solution is to get all analysts better trained in heuristics for reasoning about counterfactuals in a way that is prosocial. To me, that is where you end up if you try to implement philosophies like Toby's global consequentialism. But we need more practical work on things like this, not repetitive claims about Shapley values.

I'm writing quickly and hope this comes across in the right spirit. I do find the strong claims in this post frustrating to see, but I welcome that you raised the topic.

jh
7mo · 60

Interesting thesis! Though, it's his doctoral thesis, not from one of his bachelor's degrees, right?

jh
7mo · 60

Yes, and is there a proof of this that someone has put together? Or at least a more formal justification?

jh
8mo · 173

A comment and then a question. One problem I've encountered in trying to explain ideas like this to a non-technical audience is that the standard rationales for 'why softmax' are either a) technical or b) not convincing, or even condescending, about its value as a decision-making approach. Indeed, the 'Agents as probabilistic programs' page you linked to introduces softmax as "People do not always choose the normatively rational actions. The softmax agent provides a simple, analytically tractable model of sub-optimal choice." The 'Softmax demystified' page offers relatively technical reasons (smoothing is good, flickering bad) and an unsupported claim (that it is good to pick lower-utility options some of the time). Implicitly, this gives presentations of ideas like this the flavor of "trust us, you should use this because it works in practice, even if it has origins in what we think is irrational or can't justify". And, to be clear, I say that as someone who's on your side, trying to think of how to share these ideas with others. I think there is probably a link between what I've described above and Michael Plant's point (3).

So, I wonder if 'we can do better' in justifying softmax (and similar approaches). What is the most convincing argument you've seen?

I feel like the holy grail would be an empirical demonstration that an RL agent develops softmax-like properties across a range of realistic environments, and/or a theoretical argument for why this should happen.
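
To make concrete what I mean by softmax-like properties, here's a minimal sketch in Python; the utilities and temperatures are made-up numbers purely for illustration:

```python
import math

def softmax_choice_probs(utilities, temperature=1.0):
    """P(choose option i) is proportional to exp(u_i / temperature).
    As temperature -> 0 this approaches argmax (pure maximization);
    higher temperatures spread probability over lower-utility options."""
    scaled = [u / temperature for u in utilities]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

# Illustrative utilities for three options:
utilities = [3.0, 2.0, 1.0]
for t in (0.1, 1.0, 10.0):
    print(t, [round(p, 3) for p in softmax_choice_probs(utilities, t)])
# At t=0.1 the agent almost always picks the best option;
# at t=10 choices are nearly uniform. The contested question is why
# an agent *should* ever sit between those extremes.
```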

jh
9mo · 70

Good to see more and more examples of using Squiggle. Do you think you can use these or future examples to really show how this leads to "ultimately better decisions"?

jh
9mo · 70

Thanks for sharing this reference, Inga!

jh
1y · 60

Thanks for checking and sharing that update, Pablo! 

By the way, I expect to see 'mission hedging' continue to be the most 'commonly' used term in this area, because this is arguably the right way to describe the AI portfolio Open Philanthropy has publicly mentioned considering. That is, if we label short AI timelines as a bad thing, then this is 'hedging'. Still, I do like to put it in the overall 'mission-correlated' bucket so we remember that the key bet with this portfolio is that short timelines lead to higher cost-effectiveness (i.e. we're betting that timelines and cost-effectiveness are correlated).
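
A toy illustration of that bet, with entirely made-up numbers (world probabilities, payoffs, and cost-effectiveness figures are all hypothetical):

```python
# Two equally likely worlds; marginal dollars are assumed to be worth
# more (higher cost-effectiveness) in the short-timelines world.
p_short = 0.5
cost_effectiveness = {"short": 3.0, "long": 1.0}  # impact per dollar

# A neutral portfolio pays the same in both worlds; a mission-correlated
# portfolio pays more in the short-timelines world (same expected dollars).
portfolios = {
    "neutral":            {"short": 100, "long": 100},
    "mission-correlated": {"short": 120, "long": 80},
}

for name, payoffs in portfolios.items():
    expected_impact = (p_short * payoffs["short"] * cost_effectiveness["short"]
                       + (1 - p_short) * payoffs["long"] * cost_effectiveness["long"])
    print(name, expected_impact)
# neutral: 200.0; mission-correlated: 220.0 -- the tilt only pays off
# because payoffs and cost-effectiveness are positively correlated,
# which is the key bet.
```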

jh
1y · 80

Obviously, you and Pablo have a better sense of what is desired on the Forum/Wiki in general; I am just going on intuition.

If this is important, it would be helpful to know in more detail what place original research is supposed to have on the Forum/Wiki, and the same for summaries of existing research. Is a series of 'original research' EA Forum posts on mission-correlated investing acceptable? If so, then once the 'mission-correlated investing' Wiki tag summarizes those posts, it is a summary of existing research.

jh
1y · 60

That's an interesting point. I think you might have mistaken 'mission-correlated investing' for a replacement for, or equivalent of, 'mission hedging'? Rather, the latter is a subset of the former.

For the record, some other relevant points:

i. The orders of magnitude of hits for 'mission hedging' need to be taken with a pinch of salt. It doesn't look to me like thousands of people are talking about mission hedging; rather, it's thousands of crossposts and similar listings, as well as false hits.

ii. When I created this tag (as 'mission hedging'), there was no tag at all, three years or so after Hauke's original article. That isn't a strong indication of EA attachment to the term.

iii. It was then correctly pointed out to me by an astute forum member that 'mission hedging' is only a good term for a subset of the strategies that match the underlying idea ('invest to have more money when it will be more valuable'). 'Mission-correlated investing' is a natural term for the whole idea (though suggestions for catchier terms would be welcome), hence I updated the tag to 'mission-correlated investing'.

iv. My categorization of the 9 posts currently linked to the term would be 5 'mission-correlated investing', 3 strictly 'mission hedging', 1 ambiguous. So, if we were to add a 'mission hedging' tag as well, it would have 3-4 posts. 

v. My intuitions when creating this tag, and refining it to 'mission-correlated investing', were that it's helpful to have a tag that collects all posts related to this niche, and that it's helpful to bring together all the people thinking about these ideas, whether they're non-experts who have only heard of 'mission hedging' so far, or they're really into it and considering all angles of 'mission-correlated investing'.

vi. I would say I'm in regular contact with the other main existing authors on 'mission hedging'/'mission-correlated investing'. I'd be really excited to learn there were secretly a ton of people actively mission hedging. It would be great for them to share what they're doing, and with enough posts that would justify using the term.
