I'm currently working as a Research Scholar at the Future of Humanity Institute. I've previously co-created the application Guesstimate. Opinions are typically my own.
I enjoyed this and am considering doing similar posts myself.
One thing I've noticed is that the responses seem like they may fall into clusters. I get the impression there's one cluster of "doesn't feel elite and is frustrated with EA for not being accommodating" and a very different cluster of "very worried that EA is being too friendly, and not being properly disagreeable where it matters". I don't have a good sense of exactly what these clusters are. I could imagine it being the case that they are distinct, and if so, recognizing this would be very valuable. Perhaps they could be optimized for separately, for instance.
Yea, I'd love to see things like this, but it's all a lot of work. The existing tooling is quite bad, and it will probably be a while before we could rig it up with Foretold/Guesstimate/Squiggle.
One challenge with willingness to pay is that we need to be clear where the money would be coming from. For instance, I would pay less for things if the money were coming from the budget of EA Funds than if it came from Open Phil, and less from Open Phil than from the US Government. This seems doable to me, but it is tricky. Ideally we could find a measure that wouldn't vary dramatically over time. For instance, the EA Funds budget might be desperate for cash some years and have too much in others, changing the value of the marginal dollar dramatically.
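The point about funding sources can be sketched as a simple normalization: divide a reference willingness to pay by how valuable a marginal dollar from each source is. All numbers, source names, and the function below are illustrative assumptions for the sketch, not estimates from the post:

```python
# Hypothetical sketch: willingness to pay depends on whose budget the money
# comes from. Every number here is made up purely for illustration.

# Assumed value of a marginal dollar from each source, relative to a
# "reference" dollar (e.g. an unrestricted philanthropic dollar).
MARGINAL_VALUE = {
    "EA Funds": 3.0,       # tight budget: each dollar counts for more
    "Open Phil": 1.5,
    "US Government": 1.0,  # huge budget: close to the reference value
}

def willingness_to_pay(reference_wtp: float, source: str) -> float:
    """Deflate a reference willingness to pay by the estimated value of a
    marginal dollar from this funding source."""
    return reference_wtp / MARGINAL_VALUE[source]

wtp = {s: willingness_to_pay(9000, s) for s in MARGINAL_VALUE}
# With these made-up numbers:
# EA Funds -> 3000.0, Open Phil -> 6000.0, US Government -> 9000.0
```

The instability problem from the paragraph above shows up here directly: if the EA Funds multiplier swings between years, the same report's dollar value swings with it, which is why a more stable reference measure would help.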
I have a bunch of thoughts on this, and would like to spend time thinking of more. Here are a few:
I’ve been advising this effort and gave feedback on it (some of which he explicitly included in the post in the “Caveats and warnings” section). Correspondingly, I think it’s a good early attempt, but things still feel fairly early. Doing a deep evaluation of some of this without more empirical data (for instance, surveying people to see which ones might have taken this advice, or having probing conversations with Guesstimate users) seems necessary to get a decent picture. However, that is a lot of work. This effort was much more a matter of Nuño intuitively estimating all the parameters, which can get you reasonably far, but shouldn’t be understood to be substantially more than that. Rubrics like these can be seen as much more authoritative than they actually are.
Reasons to expect these estimates to be over-positive
I tried my best to encourage Nuño to be fair and unbiased, but I’m sure he felt incentivized to give positive grades. I don’t believe I gave feedback to encourage him to skew the scores favorably, but I did request that he make the uncertainty more clear in this post. This wasn’t because I thought I did poorly in the rankings; it was more because I thought this was just a rather small amount of work for the claims being made. I imagine this will be an issue going forward with evaluation, especially since the people being evaluated might be seen as possibly holding grudges or similar later on. It is not enough for them to not retaliate; the problem is that, from an evaluator’s perspective, there’s a chance that they might.
Also, I imagine there is some selection pressure toward a positive outcome. One of the reasons I have been advising his efforts is that they are very related to my interests, so it would make sense that he might be more positive towards my previous efforts than would others with different interests. This is one challenging thing about evaluation: typically, the people who best understand the work have the advantage of better understanding its quality, but the disadvantage of typically being biased towards how good this type of work is.
Note that none of the projects wound up with a negative score, for example. I’m sure that at least one really should have if we were clairvoyant, although it’s not obvious to me which one at this point.
Reasons to expect these estimates to be over-negative
I personally care a whole lot more about being able to be neutral, and about seeming neutral, than I do about my projects being evaluated favorably at this stage. I imagine this could have been the case for Nuño as well. So it’s possible there was some over-compensation here, but my guess is that you should expect things to be biased in the positive direction regardless.
I think this work brings to light how valuable improved tooling (better software solutions) could be. A huge spreadsheet can be kind of a mess, and things get more complicated if multiple users (like myself) would try to make rankings. I’ve been inspecting no-code options and would like to do some iteration here.
One change that seems obvious would be for reviews to be posted on the same page as the corresponding blog post. This could be done in the comments or in the post itself, like a GitHub status icon.
I’m hesitant to update much due to the rather low weight I place on this. I was very uncertain about the usefulness of my projects before this, and I’m still uncertain afterwards. I agree that most of it is probably not going to be valuable at all unless I specifically (or, much less likely, someone else) continue this work into a more accessible or valuable form.
If it’s true that Guesstimate is actually far more important than anything else I’ve worked on, it would probably update me to focus a lot more on software. Recently I’ve been more focused on writing and mentorship than on technical development, but I’m considering changing back.
I think I would have paid around $1,000 or so for a report like this for my own purposes. Looking back, perhaps the main value would come from talking through the thoughts with the people doing the rating. We haven’t done this yet, but might do so going forward. I’d probably pay at least $10,000 or so if I were sure that it was “fairly correct”.
The value of research in neglected areas
I think one big challenge with research is that you either focus on an active area or a neglected one. In active areas, marginal contributions may be less valuable because others are much more likely to come up with them. There’s one model where there is basically a bunch of free prestige lying around, and if you get there first you are taking zero-sum gains directly from someone else. In the EA community in particular, I don’t want to play zero-sum games with other people. However, for neglected work, it seems very possible that almost no one will continue with it. My read is that neglected work is generally a fair bit more risky. There are instances where it goes well, and it could actually encourage a whole field to emerge (though this takes a while). There are other instances where no one happens to be interested in continuing that kind of research, and it dies before being useful at all.
I think of my main research as being in areas I feel are very neglected. This can be exciting, but it has the obvious challenge that it is difficult for the work to be adopted by others, and so far this has been the case.
Thanks! That's useful to know. I intend to host more prizes in the future but can't promise anything yet. There's no harm in writing up a bunch of rough ideas instead of aiming for something that looks super impressive. We're optimizing more to encourage creativity and inspire good ideas, rather than to produce work that can be highly cited. You can look through my LessWrong posts for examples of the kinds of things I have in mind. A few were a lot of work, but many just took a few hours or so.
My read of this article was that this could have been interpreted as meaning "for a form of consequentialism that doesn't give extra favor to oneself, it's often optimal to maximize a decent amount for oneself."
I'm totally fine with optimizing for oneself when it's done with the understanding that one's philosophical framework favors favoring oneself; it just wasn't clear to me that that was what was happening in this article.
If the lesson there is, "I'm going to make myself happy because the utility function I'm optimizing for favors myself heavily", that's fine; it's just a very different argument than "actually, optimizing for my own happiness heavily is the optimal way of achieving a more universally good outcome." My original read was that the article was saying the latter, though I could have been mistaken. Even if I were mistaken, I'm happy to discuss the alternative view; not the one Nicole meant, but the one I thought she meant. I'm sure other readers may have had the same impression I did.
All that said, I would note that often being personally well off is a great way to be productive. I know a lot of altruistic people who would probably get more done if they could focus more on themselves.
I enjoyed reading this, thank you.
One small point:
"that I am a person whose life has value outside of my potential impact."
I'm happy to hear that this insight has worked for you, but I want to flag that I don't think it's essential. I personally have been trying to think of my life only as a means to an end. While my life technically might have value, I am fairly sure it is rather minuscule compared to the potential impact I can make. I think it's possible, though probably difficult, to intuit this and still feel fine, and not guilty, about things. It makes me fear death less, for one.
I'm a bit wary that, on this topic, people might be biased to select beliefs based on which ones are satisfying or feel good. This is the type of phrase that I would assume would be well accepted in common views of morality, but in utilitarianism it is suspect.
To be clear, within utilitarianism one's wellbeing does of course have "some" intrinsic/comparative value; I just suspect it's less than what many people would assume when reading that sentence.
Definitely agreed. That said, I think some of this should probably be looked at through the lens of "Should EA as a whole help people with personal/career development, rather than specific organizations, since the benefits will accrue to the larger community (especially if people only stay at orgs for a few years)?" I'm personally in favor of expensive resources being granted to help people early in their careers. You can also see some of this in what Open Phil/FHI fund; there's a big focus on helping people get useful PhDs (though this helps a small minority of the entire EA movement).
I think people have been taking up the model of open sourcing books (well, making them free). This has been done for [The Life You can Save](https://en.wikipedia.org/wiki/The_Life_You_Can_Save) and [Moral Uncertainty](https://www.williammacaskill.com/info-moral-uncertainty).
I think this could cost $50,000 to $300,000 or so depending on when this is done and how popular it is expected to be, but I expect it to be often worth it.
Many kudos for doing this, I've been impressed seeing this work progress.
I think it could well be the case that EAs have a decent comparative advantage in prioritization itself. I could imagine a world where the community helps prioritize a large range of globally important issues. This could work especially well if these people could influence the spending and talent of others. Areas that are neglected or poorly prioritized present opportunities for significant leverage through prioritization and leadership.
On politics, my impression is that the community is going to get more involved on many different fronts. It seems like the kind of thing that can go very poorly if done wrong, but the potential benefits are too big to ignore.
As Carl Shulman previously said, one interesting aspect of politics is its potential to absorb a large amount of money and talent. So I imagine one of the most valuable things about doing this work would be producing information value to inform us if and how to scale it later.