I'm an independent researcher, hobbyist forecaster, programmer, and aspiring effective altruist.
In the past, I've studied Maths and Philosophy, dropped out in exasperation at the inefficiency; picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018 and 2019, and SPARC during 2020; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and I remain keenly interested in Spanish poetry.
I like to spend my time acquiring deeper models of the world, and a good fraction of my research is available on nunosempere.github.io.
With regards to forecasting, I am LokiOdinevich on GoodJudgementOpen, and Loki on CSET-Foretell, and I have been running a Forecasting Newsletter since April 2020. I also enjoy winning bets against people too confident in their beliefs.
I was a Future of Humanity Institute 2020 Summer Research Fellow, and I'm working on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." You can share feedback anonymously with me here.
I have got the impression that there is going to be a single funnelling exercise that aims to directly compare shorttermist vs longtermist areas including on their 'scale'.
Yeah, so I (and others) have been exploring different things, but I don't know what I'll end up going with. That said, I think that there are gains to be had in optimizing the first two stages, not just the third (evaluation) stage.
Nitpick: A change of basis might also be combined with a projection into a subspace. In the example, if one doesn't care about animals, or about the long term future at all, then instead of the volume of the cuboid they'd just consider the area of one of its faces.
Another nitpick: The ratio of humans to animals would depend on the specific animals. However, I sort of feel that the high-level disagreements of the sort jackmalde is pointing to are probably about the ratio of the value of a happy human life to that of a happy cow life, not about the ratio of a happy cow's life to that of a happy pig, chicken, insect, etc.
So suppose you have a cause candidate, and some axis like the ones you mention:
But also some others, like
For simplicity, I'm going to just use three axes, but the below applies to more. Right now, the topmost vectors represent my own perspective on the promisingness of a cause candidate across three axes, but they could eventually represent some more robust measure (e.g., the aggregate of respected elders, or some other measure you like more). The vectors at the bottom are the perspectives of people who disagree with me along some axis.
For example, suppose that the red vector was "ratio of the value of a human to a standard animal", or "probability that a project in this cause area will successfully influence the long-term future".
Then person number 2 can say "well, no, humans are worth much more than animals", or "well, no, the probability of this project influencing the long-term future is much lower". And person number 3 can say something like "well, overall I agree with you, but I value animals a little bit more, so my red axis is somewhat higher", or "well, no, I think that the probability that this project has of influencing the long-term future is much higher".
Crucially, they wouldn't have to do this for every cause candidate. For example, if I value a given animal living a happier life the same as X humans, and someone else values that animal as 0.01X humans, or as 2X humans, they can just apply the transformation to my values.
Similarly, if someone is generally very pessimistic about the tractability of influencing the long-term future, they could transform my probabilities as to that happening. They could divide my probabilities by 10 (or, more sensibly, subtract some fixed amount in log-odds, i.e., in bits, so that the result stays a probability). Then the transformation might not be linear, but it would still be doable.
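As a sketch of what that log-odds discount could look like (the specific probabilities and bit amounts below are made up for illustration):

```python
import math

def logit(p):
    """Log-odds of a probability, measured in bits."""
    return math.log2(p / (1 - p))

def discount(p, bits):
    """Shift a probability down by `bits` in log-odds space.

    Unlike dividing p by a constant, this keeps the result strictly
    between 0 and 1, so it works even for probabilities near 1.
    """
    return 1 / (1 + 2 ** (bits - logit(p)))

# A pessimist who subtracts log2(10) bits turns 1:1 odds into 1:10 odds:
print(discount(0.5, math.log2(10)))  # ~0.0909
```

Note that this transformation is non-linear in probability space, which matches the point above: it's a perfectly well-defined transformation, just not a linear one.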
Then, knowing the various axes, one could combine them to find out the expected impact. For example, one could multiply three axes to get the volume of the box, or add them as vectors and consider the length of the purple vector, or use some other combination which isn't a toy example.
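A toy sketch of both combinations, with made-up axis names and scores (none of these numbers come from the discussion):

```python
import math

# Hypothetical scores of one cause candidate along three axes
# (both the names and the numbers are illustrative):
scores = {"scale": 3.0, "tractability": 2.0, "neglectedness": 1.5}

# Combination 1: multiply the axes -- the volume of the box.
volume = math.prod(scores.values())  # 9.0

# Combination 2: add them as orthogonal vectors and take the
# length of the result (the diagonal of the same box).
length = math.sqrt(sum(v ** 2 for v in scores.values()))  # ~3.91

print(volume, length)
```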
So, a difference in perspectives would be transformed into a change of basis.
So:
Even in that case there's a difficulty, in that anyone who disagrees with your stance on these foundational questions would then have the right to throw out all of your funneling work and just do their own.
doesn't strike me as true. Granted, I haven't done this yet, and I might never because other avenues might strike me as more interesting, but the possibility exists.
Makes sense, thanks, changed.
Acknowledged.
tl;dr/Notes:
I have some models of the world which lead me to think that the idea was unpromising. Some of them clearly have a subjective component. Still, I'm using the same "muscles" as when forecasting, and I trust that those muscles will usually produce sensible conclusions.
It is possible that in this case I had too negative a view, though not in a way which is clearly wrong (to me). If I were forecasting the question "will a charity be incubated to work on philosophy in schools?" (surprise reveal: this is similar to what I was doing all along), I imagine I'd give it a very low probability, but that my teammates would give it a slightly higher one. After discussion, we'd both probably move towards the center, and thus be more accurate.
Note that if we model my subjective promisingness = true promisingness + error term, then if we pick the candidate idea at the very bottom of my list (in this case, philosophy in schools, the idea under discussion and one of the four ideas to which I assigned a "very unpromising" rating), we'd expect it both to be unpromising (per your own view) and to have a large negative error term (I clearly don't view philosophy very favorably).
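A quick simulation of that selection effect, under the toy assumption that true promisingness and the error term are independent standard normals (numbers of candidates and trials are arbitrary):

```python
import random

random.seed(0)

def bottom_candidate(n=100):
    """Draw n candidates; return (true value, error) of the one
    ranked last by subjective promisingness = true + error."""
    true_vals = [random.gauss(0, 1) for _ in range(n)]
    errors = [random.gauss(0, 1) for _ in range(n)]
    i = min(range(n), key=lambda k: true_vals[k] + errors[k])
    return true_vals[i], errors[i]

trials = [bottom_candidate() for _ in range(1000)]
avg_true = sum(t for t, _ in trials) / len(trials)
avg_err = sum(e for _, e in trials) / len(trials)

# Both averages come out clearly negative: the bottom-ranked idea
# tends to be genuinely unpromising *and* to carry a large negative
# error term, with the shortfall split between the two.
print(avg_true, avg_err)
```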
Let me try to translate my thoughts to something which might be more legible / written in a more formal tone.
Thanks!
So suppose that the intervention was about cows, and I (the vectors in "1" in the image) gave them some moderate weight X, the length of the red arrow. Then if someone gives them a weight of 0.0001X, their red arrow becomes much smaller (as in 2.), and the total volume enclosed by their cube becomes smaller. I'm thinking that the volume represents promisingness. But they can just apply that division X -> 0.0001X to all my ratings, and calculate their new volumes and ratings (which will be different from mine, because cause areas which only affect, say, humans, won't be affected).
In this case, the red arrow would go completely to 0, and that person would just focus on the area of the square in which the blue and green arrows lie, across all cause candidates. Because I am looking at volume and they are looking at areas, our ratings will again differ.
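A minimal sketch of both moves, rescaling an axis versus projecting it out entirely, with hypothetical axis names and ratings (the 0.0001x weight is the one from the example above; everything else is made up):

```python
import math

# Hypothetical ratings of one cause candidate; "animals" plays the
# role of the red arrow in the discussion above.
my_ratings = {"animals": 2.0, "humans": 3.0, "longterm": 1.5}

def promisingness(ratings):
    """Volume of the box spanned by the non-zero axes.

    Dropping zeroed axes means that projecting an axis out turns a
    volume comparison into an area comparison, as in the example.
    """
    return math.prod(v for v in ratings.values() if v > 0)

# Rescaling: someone who weighs animals at 0.0001x my weight.
their_ratings = {**my_ratings, "animals": my_ratings["animals"] * 0.0001}

# Projecting: someone who doesn't care about animals at all.
no_animals = {**my_ratings, "animals": 0.0}

print(promisingness(my_ratings))     # 9.0 (volume)
print(promisingness(their_ratings))  # ~0.0009 (much smaller volume)
print(promisingness(no_animals))     # 4.5 (area of the remaining face)
```

As in the text, cause candidates with no animal component keep the same rating under the rescaling, so the two people's overall rankings differ only where animals matter.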