“Partial” might work instead of “non-absolute,” but I still favor the latter even though it’s bulkier. I like that “non-absolute” points to a challenge that arises when our predictive powers are nonzero, even if they are very slim indeed. By contrast, “partial” feels more aligned with the everyday problem of reasoning under uncertainty.
One of the challenges is that “absolute cluelessness” is a precise claim: beyond some threshold of impact scale or time, we can never have any ability to predict the overall moral consequences of any action.
By contrast, the practical problem is not a precise claim, except perhaps as a denial of “absolute cluelessness.”
After thinking about it for a while, I suggest “problem of non-absolute cluelessness.” After all, isn’t it the idea that we are not clueless about the long term future, and therefore that we have a responsibility to predict and shape it fo... (read more)
This reminds me of a conversation I had with John Wentworth on LessWrong, exploring the idea that establishing a scientific field is a capital investment for efficient knowledge extraction. Also of a piece of writing I just completed there on expected value calculations, outlining some of the challenges in acting strategically to diminish our uncertainty.
One interesting thing to consider is how to control such a capital investment, once it is made. Institutions have a way of defending themselves. Decades ago, people launched the field of AI research. Now, ... (read more)
All these projects seem beneficial. I hadn't heard of any of them, so thanks for pointing them out. It's useful to frame this as "research on research," in that it's subject to the same challenges with reproducibility, and with aligning empirical data with theoretical predictions to develop a paradigm, as in any other field of science. Hence, I support the work, while being skeptical of whether such interventions will be useful and potent enough to make a positive change.
The reason I brought this up is that the conversation on improving the productivity of... (read more)
Indoor CO2 concentrations and cognitive function: A critical review (2020)
"In a subset of studies that meet objective criteria for strength and consistency, pure CO2 at a concentration common in indoor environments was only found to affect high-level decision-making measured by the Strategic Management Simulation battery in non-specialized populations, while lower ventilation and accumulation of indoor pollutants, including CO2, could reduce the speed of various functions but leave accuracy unaffected."

I haven't been especially impressed by claims th... (read more)
it could be a lot more valuable if reporting were more rigorous and transparent
Rigor and transparency are good things. What would we have to do to get more of them, and what would the tradeoffs be?
Do I understand your comment correctly that you think that, in your field, the purpose of publishing is mainly to communicate to the public, and that publications are not very important for communicating within the field to other researchers or to end users in industry?
No, the purpose of publishing is not mainly to communicate to the public. After all... (read more)
My experience talking with scientists and reading science in the regenerative medicine field has shifted my opinion against this critique somewhat. Published papers are not the fundamental unit of science. Most labs are 2 years ahead of whatever they’ve published. There’s a lot of knowledge within the team that is not in the papers they put out.
Developing a field is a process of investment not in creating papers, but in creating skilled workers using a new array of developing technologies and techniques. The paper is a way of stimulating conversation and a... (read more)
Looking forward to hearing about those vetting constraints! Thanks for keeping the conversation going :)
Imagine we can divide up the global economy into natural clusters. We'll refer to each cluster as a "Global Project." Each Global Project consists of people and their ideas, material resources, institutional governance, money, incentive structures, and perhaps other factors.
Some Global Projects seem "bad" on the whole. They might have directly harmful goals, irresponsible risk management, poor governance, or many other failings. Others seem "good" on net. This is not in terms of expected value for the world, but in terms of the intrinsic properties of the ... (read more)
Yeah, I am worried we may be talking past each other somewhat. My takeaway from the grantmaker quotes from FHI/OpenPhil was that they don't feel they have room to grow in terms of determining the expected value of the projects they're looking at. Very prepared to change my mind on this; I'm literally just going from the quotes in the context of the post to which they were responding.
Given that assumption (that grantmakers are already doing the best they can at determining EV of projects), then I think my three categories do carve nature at the joints. But ... (read more)
Oh, I definitely don't think that grantmakers are already doing the best that could be done at determining the EV of projects. And I'd be surprised if any EA grantmaker thought that that was the case, and I don't think the above quotes say that. The three quotes you gave are essentially talking about what the biggest bottleneck is, and saying that maybe the biggest bottleneck isn't quite "vetting", which is not the same as the claim that there'd be zero value in increasing or improving vetting capacity.
Also note that one of the three quotes still foc... (read more)
Your previous comment seemed to me to focus on demand and supply, note that they'll almost never be in perfect equilibrium, and say "None of those problems indicate that something is wrong", without noting that the thing that's wrong is animals suffering, people dying of malaria, the long-term future being at risk, etc.
In the context of the EA forum, I don't think it's necessary to specify that these are problems. To state it another way, there are three conditions that could exist (let's say in a given year):
In particular, I think it implies the only relevant type of "demand" is that coming from funders etc., whereas I'd want to frame this in terms of ways the world could be improved.
My position is that "demand" is a word for "what people will pay you for." EA exists for a couple reasons:
I can see how you might interpret it that way. I'm rhetorically comfortable with the phrasing here in the informal context of this blog post. There's a "You can..." implied in the positive statements here (i.e. "You can take 15 years and become a domain expert"). Sticking that into each sentence would add flab.
There is a real question about whether or not the average person (and especially the average non-native English speaker) would understand this. I'm open to argument that one should always be precisely literal in their statements online, to prioritize avoiding confusion over smoothing the prosody.
Thanks for that context, John. Given that value prop, companies might use a TB-like service under two constraints:
Great thoughts, ishaan. Thanks for your contributions here. Some of these thoughts connect with MichaelA's comments above. In general, they touch on the question of whether or not there are things we can productively discover or say about the needs of EA orgs and the capabilities of applications that would reduce the size of the "zone of uncertainty."
This is why I tried to convey some of the recent statements by people working at major EA orgs on what they perceive as major bottlenecks in the project pipeline and hiring process.
One key challenge is triangu... (read more)
Good thoughts. I think this problem decomposes into three factors:
My post argues that we should have a bar, is agnostic about how high the bar should be, and treats the bar as fixed for the reader's purposes.
At some point, I may give conside... (read more)
I agree; I should have added "or a safe career/fallback option" to that.
My sense is that Triplebyte focuses on "can this person think like an engineer" and "which specific math/programming skills do they have, and how strong are they?" Then companies do a second round of interviews where they evaluate Triplebyte candidates for company culture. Triplebyte handles the general, companies handle the idiosyncratic.
It just seems to me that Triplebyte is powered by a mature industry that's had decades of time and massive amounts of money invested into articulating its own needs and interests. Whereas I don't think EA is old or big or... (read more)
Triplebyte's value proposition to its clients (the companies who pay for its services) is an improved technical interview process. They claim to offer tests that achieve three forms of value:
If there's room for an "EA Triplebyte," that would suggest that EA orgs have at least one of those three problems.
So it seems like your first step would be to look in-depth at the ways EA orgs assess technical research skills.
A... (read more)
Figuring out how to give the right advice to the right person is a hard challenge. That's why I framed skilling up outside EA as being a good alternative to "banging your head against the wall indefinitely." I think the link I added to the bottom of this post addresses the "many paths" component.
The main goal of my post, though, is to talk about why there's a bar (hurdle rate) in the first place. And, if readers are persuaded of its necessity, to suggest what to do if you've become convinced that you can't surpass it at this stage in your journey.
It would ... (read more)
Hi Michael, thanks for your responses! I'm mainly addressing the metaphorical runner on the right in the photograph at the start of the post.
I am also agnostic about where the bar should be. But having a bar means you have to hold it in place. You don't move the bar just because you couldn't find a place to spend all your money.
For me, EA has been an activating and liberating force. It gives me a sense of direction, motivation to continue, and practical advice. I've run EA research and community development projects with Vaidehi Agarwalla, an... (read more)
Just to address point (2), the comments in "EA is vetting-constrained" suggest that EA is not that vetting-constrained:
Here's a list of critiques of the ITN framework, many of which target the neglectedness criterion.
Ending the war on drugs has a few obvious goods:
This seems to be a cause where partial success is meaningful. Every reduction in unnecessary imprisonment, tax dollar saved, and terrorist cell put out of business is a win. We also have some roughly sliding scales - the level of en... (read more)
Those are the circles many of us exist in. So a more precise rephrasing might be “we want to stay in touch with the political culture of our peers beyond EA.”
This could be important for epistemic reasons. Antagonistic relationships make it hard to gather information when things are wrong internally.
Of course, PR-based deference is also a form of antagonistic relationship. What would a healthy yet independent relationship between EA and the social justice movement look like?
That makes sense. I like your approach of self-diagnosing what sort of resources you lack, then tailoring your PhD to optimize for them.
One challenge with the "work backwards" approach is that it takes quite a bit of time to figure out what problems to solve and how to solve them. As I attempted this planning for my own imminent journey into grad school, my views gained a lot of sophistication, and I expect they'll continue to shift as I learn more. So I view grad school partly as a way to pursue the ideas I think are important/good fits, but also as a w... (read more)
This is great, I’ll put a note in the main post highlighting this when I get home.
Just to clarify, it sounds like you are:
I also wanted to encourage you to add more specific observations and personal experiences that motivate this advice. What type of grad program are you in now (PhD or master's), and how long have you been in it? Were you as strategic in your approach t... (read more)
This prior should also work for other technologies sharing these reference classes. Examples might include a tech suite amounting to 'longevity escape velocity', mind reading, fully-immersive VR, or highly accurate 10+ year forecasting.
Hi Rob. I can only speak for myself. A lot of people, myself included, discover EA online, because the name or the ideas feel right.
Then we discover there’s a lot of people involved, huge amounts written, and many efforts going on. How do we meet people? How can we contribute? How can we find our place? How do we make sense of all the ideas?
I can only say that nobody is a nobody, and everybody struggles with these questions. It takes time to work it all out, so I advise patience. Write your thoughts out, and make sure to take care of yourself. It sounds like you are in the middle of building up a stable life for yourself, and I believe it’s extremely important for people in EA to focus on that first. Good luck!
Hi Jonas. On taking a second look, the sentence that led me to interpret your argument as a call to rename EA to GP (or something else) was:
“I personally would feel excited about rebranding "effective altruism" to a less ideological and more ideas-oriented brand (e.g., "global priorities community", or simply "priorities community")”
I will make a note that you aren’t advocating a name change. You may want to consider making this clearer in your post as well :)
I think it can be all of this, and much more. EA can have tremendous capacity for issuing broad recommendations and tailored advice to individual people. It can be about philosophy, governance, technology, and lifestyle.
How could we have a movement for effective altruism if we couldn’t encompass all that?
This is a community, not a think tank, and a movement rather than an institution. It goes beyond any one thing. So to join it or explain it - that’s a little like explaining what America is all about, or Catholicism is all about, or science is all about. You don’t just explain it, you live it, and the journey will look different to different people. That’s a feature, not a bug.
I didn’t say anything about what size/duration of returns would make you a top 1% trader.
That’s good feedback and a complementary point of view! I wanted to check on this part:
“I think that a thing that this post gets wrong is that EA seems to be particularly prone to generating bycatch, and although there are solutions at the individual level, I'd also appreciate having solutions at higher levels of organization.”
Are you saying that you think EA is not particularly prone to generating bycatch? Or that it is, but it’s a problem that needs higher-level solutions?
Yeah, that's not my proudest sentence. I meant the latter: it is particularly prone to generating bycatch, and hence it would benefit from higher-level solutions. In your post, you try to solve this at the level of the little fish, but addressing it at the fisherman level strikes me as a better (though complementary) idea.
Did I get them all? :D
So close, yet so far! By ending your comment with a question and a smiley face, you missed "disengaged" and "prickly"! But keep trying, I know you've got this in you :P
I think for me, it might be best to use a straightforward “join us!” pitch.
Most people I know have considered the idea that there are better and worse ways to help the world. But they don’t extend that thinking to realize the implication that there might be a set of best ways. Nor do they have the long-tail of value concept. They also don’t have any emotional impulse pushing them to explore “what’s the best way to help the world?” Nor do they have any links to the community besides me.
My experience is that most of my friends and family have very limited ba... (read more)
Update: We were unsuccessful in seeking funding to automate this project, and for the time being we do not have capacity to maintain it manually. The project is closed.
I think these issues are extremely complex, and I think you bring up a good point, one with underlying values that I agree with. Nevertheless, many of my research interests are in Alzheimer's, chronic severe pain, and life extension. I think that people in poor countries ultimately are going to improve their length and quality of life, and there's a strong trend in that direction already. I am long on malaria being eradicated within the next 30 years. We mostly know what to do; what's holding us back is a combination of environmental caution... (read more)
Thank you :)
Do the book and other resource recommendations especially apply to people interested in working on animal welfare?
Here is that review I mentioned. I'll try and add this post to that summary when I get a chance, though I can't do justice to all the mathematical details.
If you do give it a glance, I'd be curious to hear your thoughts on the critiques regarding the shape and size of the marginal returns graph. It's these concerns that I found most compelling as fundamental critiques of using ITN as more than a rough first-pass heuristic.
The end of this post will be beyond my math until next year, so I’m glad you wrote it :) Have you given thought to the pre-existing critiques of the ITN framework? I’ll link to my review of them later.
In general, ITN should be used as a rough, non-mathematical heuristic. I’m not sure the theory of cause prioritization is developed enough to permit so much mathematical refinement.
In fact, I fear that it gives a sheen of precision to what is truly a rough-hewn communication device. Can you give an example of how an EA organization presently using ITN could improve their analysis by implementing some of the changes and considerations you’re pointing out?
I also hoped to imply that ITN is more than a heuristic. It also serves a rhetorical purpose.
I worry that its seeming simplicity can belie the complexity of cause prioritization. Calculating an ITN rank or score can be treated as the end, rather than the beginning, of such an effort. The numbers can tug the mind in the direction of arguing with the scores, rather than evaluating the argument used to generate them.
My hope is to encourage people to treat ITN scores just as you say - taking them lightly and setting them aside once they've developed a deeper understanding of an issue.
Thanks for reading.
Agreed. However, one of the subcritiques in that point is the divide-by-zero issue that makes issues that have received zero investment "theoretically unsolvable." This is because a % increase in resources from a starting point of 0 will always yield zero. The critic seems to feel it's a result of dividing up the issue in this way.
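To make the divide-by-zero point concrete, here's a toy sketch (the `added_resources` helper and the dollar figures are hypothetical, not from any published ITN formalization): if neglectedness is operationalized as "what does a percentage increase in current resources buy," a cause with zero current funding gains nothing from any percentage increase.

```python
def added_resources(current_funding: float, pct_increase: float) -> float:
    """Absolute resources gained from a percentage increase in current funding."""
    return current_funding * (pct_increase / 100)

# A cause with $1M of existing funding gains real resources from a 10% increase:
print(added_resources(1_000_000, 10))  # 100000.0

# But a cause with $0 of existing funding gains nothing from ANY percentage
# increase, so a %-based neglectedness term models it as unimprovable:
print(added_resources(0, 10))  # 0.0
```

This is the sense in which a purely percentage-based framing renders a completely unfunded cause "theoretically unsolvable": the artifact comes from the choice of denominator, not from anything about the cause itself.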
I leave it to the forum to judge!
Can you give a few examples? Having options and avoiding risk are both good things, all else being equal.
There’s a range of posts critiquing ITN from different angles, including many of the ones you specify. I was working on a literature review of these critiques, but stopped in the middle. It seemed to me that organizations that use ITN do so in part because it’s an easy-to-read communication framework. It boils down an intuitive synthesis of a lot of personal research into something that feels like a metric.
When GiveWell analyzes a charity, they have a carefully specified framework they use to derive a precise cost effectiveness estimate. By contrast, I don
I want to give more context for the MacAskill quote.
The most obvious implication [of the Hinge of History hypothesis], however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building. If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some futu
Her first example of "complex cluelessness" is the same population size argument made by Mogensen, which I dealt with in section 2a. I think both simple and complex cluelessness are dealt with nicely by the debugging model I am proposing. But I'm not sure it's a valid distinction. I suspect all cluelessness is complex.
Debugging is a form of capacity building, but the distinction I drew is necessary. Sometimes we try to build advance capacity to solve an as-yet-intractable problem, as in AI safety research. This is vulnerable to the clu... (read more)
Same. Keep up the good work. I'm looking forward to hearing more.
In my OP, I just meant that if the applicant gets in, they can teach. An excess of applicants doesn't necessarily indicate that the field is oversubscribed; it may just mean there's a mentorship bottleneck. One possible reason is that senior people in the field simply enjoy direct work more than teaching and choose not to focus on it. Insofar as that's the case, candidates are especially suitable if they're willing to focus more on providing mentorship once they're in, should the bottleneck remain by the time they become senior.
Thanks for the feedback, it helps me understand that my original post may not have been as clear as I thought.
In the absence of other empirical information, I think it's a safe assumption that present bottlenecks correlate with future bottlenecks, though your first point is well taken.
I'm not quite following your second argument. It seems to say that the same level of applicant pool growth produces fewer mentors in mentorship-bottlenecked fields than in less mentorship-bottlenecked fields, but I don't understand why. Enlighten me?
Your third point is also correct. Stated generally, finding ways to increase the availability of the primary bottlenecked resource, or accomplish the same goal while using less of it, is how we can get the most leverage.