All of AllAmericanBreakfast's Comments + Replies

What should we call the other problem of cluelessness?

“Partial” might work instead of “non-absolute,” but I still favor the latter even though it’s bulkier. I like that “non-absolute” points to a challenge that arises when our predictive powers are nonzero, even if they are very slim indeed. By contrast, “partial” feels more aligned with the everyday problem of reasoning under uncertainty.

What should we call the other problem of cluelessness?

One of the challenges is that “absolute cluelessness” is a precise claim: beyond some threshold of impact scale or time, we can never have any ability to predict the overall moral consequences of any action.

By contrast, the practical problem is not a precise claim, except perhaps as a denial of “absolute cluelessness.”

After thinking about it for a while, I suggest “problem of non-absolute cluelessness.” After all, isn’t it the idea that we are not clueless about the long term future, and therefore that we have a responsibility to predict and shape it fo... (read more)

Why scientific research is less effective in producing value than it could be: a mapping

This reminds me of a conversation I had with John Wentworth on LessWrong, exploring the idea that establishing a scientific field is a capital investment for efficient knowledge extraction. Also of a piece of writing I just completed there on expected value calculations, outlining some of the challenges in acting strategically to diminish our uncertainty.

One interesting thing to consider is how to control such a capital investment, once it is made. Institutions have a way of defending themselves. Decades ago, people launched the field of AI research. Now, ... (read more)

Why scientific research is less effective in producing value than it could be: a mapping

All these projects seem beneficial. I hadn't heard of any of them, so thanks for pointing them out. It's useful to frame this as "research on research," in that it's subject to the same challenges with reproducibility, and with aligning empirical data with theoretical predictions to develop a paradigm, as in any other field of science. Hence, I support the work, while being skeptical of whether such interventions will be useful and potent enough to make a positive change.

The reason I brought this up is that the conversation on improving the productivity of... (read more)

Has anyone found an effective way to scrub indoor CO2?

Indoor CO2 concentrations and cognitive function: A critical review (2020)
 

"In a subset of studies that meet objective criteria for strength and consistency, pure CO2 at a concentration common in indoor environments was only found to affect high-level decision-making measured by the Strategic Management Simulation battery in non-specialized populations, while lower ventilation and accumulation of indoor pollutants, including CO2, could reduce the speed of various functions but leave accuracy unaffected."

I haven't been especially impressed by claims th... (read more)

Why scientific research is less effective in producing value than it could be: a mapping

it could be a lot more valuable if reporting were more rigorous and transparent

Rigor and transparency are good things. What would we have to do to get more of them, and what would the tradeoffs be?

Do I understand your comment correctly that you think that in your field the purpose of publishing is mainly to communicate to the public, and that publications are not very important for communicating within the field to other researchers or towards end users in the industry?

No, the purpose of publishing is not mainly to communicate to the public. After all... (read more)

2C Tilli1moI think that we have a rather similar view actually - maybe it's just the topic of the post that makes it seem like I am more pessimistic than I am? Even though this post focuses on mapping out problems in the research system, my point is not in any way that scientific research would be useless - rather the opposite, I think it is very valuable, and that is why I'm so interested in exploring if there are ways that it can be improved. It's not at all my intention to say that research, or researchers, or any other people working in the system for that matter, are "bad". My concern is not that published papers are not clear guides that a novice could follow or understand. Especially now that there is an active debate around reproducibility I would also not expect (good) researchers to be naive about it (and that has not at all been my personal experience from working with researchers). Still it seems to me that if reproducibility is lacking in fields that produce a lot of value, initiatives that would improve reproducibility would be very valuable? From what I have seen so far, I think that the work by OSF [https://osf.io/] (particularly on preregistration) and publications from METRICS [https://metrics.stanford.edu/] seems like it could be impactful - what do you think of these? The ARRIVE [https://arriveguidelines.org/] guidelines also seem like a very valuable initiative for reporting of research with animals.
Why scientific research is less effective in producing value than it could be: a mapping

My experience talking with scientists and reading science in the regenerative medicine field has shifted my opinion against this critique somewhat. Published papers are not the fundamental unit of science. Most labs are 2 years ahead of whatever they’ve published. There’s a lot of knowledge within the team that is not in the papers they put out.

Developing a field is a process of investment not in creating papers, but in creating skilled workers using a new array of developing technologies and techniques. The paper is a way of stimulating conversation and a... (read more)

4MichaelA1moWithout weighing in on your perspective/position here, I'd like to share a section of Allan Dafoe's post AI Governance: Opportunity and Theory of Impact [https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact] that you/some readers may find interesting:
2C Tilli1moThank you for this perspective, very interesting. I definitely agree with you that a field is not worthless just because the published figures are not reproducible. My assumption would be that even if it has value now, it could be a lot more valuable if reporting were more rigorous and transparent (and that potential increase in value would justify some serious effort to improve the rigorousness and transparency). Do I understand your comment correctly that you think that in your field the purpose of publishing is mainly to communicate to the public, and that publications are not very important for communicating within the field to other researchers or towards end users in the industry? That got me thinking - if that were the case, would we actually need peer-reviewed publications at all for such a field? I'm thinking that the public would anyway rather read popular science articles, and that this could be produced with much less effort by science journalists? (Maybe I'm totally misunderstanding your point here, but if not I would be very curious to hear your take on such a model).
What's wrong with the EA-aligned research pipeline?

Looking forward to hearing about those vetting constraints! Thanks for keeping the conversation going :)

Help me find the crux between EA/XR and Progress Studies

Imagine we can divide up the global economy into natural clusters. We'll refer to each cluster as a "Global Project." Each Global Project consists of people and their ideas, material resources, institutional governance, money, incentive structures, and perhaps other factors.

Some Global Projects seem "bad" on the whole. They might have directly harmful goals, irresponsible risk management, poor governance, or many other failings. Others seem "good" on net. This is not in terms of expected value for the world, but in terms of the intrinsic properties of the ... (read more)

What's wrong with the EA-aligned research pipeline?

Yeah, I am worried we may be talking past each other somewhat. My takeaway from the grantmaker quotes from FHI/OpenPhil was that they don't feel they have room to grow in terms of determining the expected value of the projects they're looking at. Very prepared to change my mind on this; I'm literally just going from the quotes in the context of the post to which they were responding.

Given that assumption (that grantmakers are already doing the best they can at determining EV of projects), I think my three categories do carve nature at the joints. But ... (read more)

Oh, I definitely don't think that grantmakers are already doing the best that could be done at determining the EV of projects. And I'd be surprised if any EA grantmaker thought that that was the case, and I don't think the above quotes say that. The three quotes you gave are essentially talking about what the biggest bottleneck is, and saying that maybe the biggest bottleneck isn't quite "vetting", which is not the same as the claim that there'd be zero value in increasing or improving vetting capacity. 

Also note that one of the three quotes still foc... (read more)

What's wrong with the EA-aligned research pipeline?

Your previous comment seemed to me to focus on demand and supply and note that they'll pretty much always not be in perfect equilibrium, and say "None of those problems indicate that something is wrong", without noting that the thing that's wrong is animals suffering, people dying of malaria, the long-term future being at risk, etc.

In the context of the EA forum, I don't think it's necessary to specify that these are problems. To state it another way, there are three conditions that could exist (let's say in a given year):

  1. Grantmakers run out of money and a
... (read more)
3MichaelA3moI think it's not especially useful to focus on the division into just those three conditions. In particular, we could also have a situation where vetting is one of the biggest constraints, and even if we're not in that situation vetting is still a constraint - it's not just about the number of high-EV projects (with a competent and willing team etc.) and the number of dollars, but also whether the grantmakers can find the high-EV projects and discriminate between them and lower-EV ones. Relatedly, there could be a problem of grantmakers giving to things that are "actually relatively low EV" (in a way that could've been identified by a grantmaker with more relevant knowledge and more time, or using a better selection process, or something like that). I think maybe there's been some confusion where you're thinking I'm saying grantmakers have "too high a bar"? I'm not saying that. (I'm agnostic on the question, and would expect it differs between grantmakers.)
What's wrong with the EA-aligned research pipeline?

In particular, I think it implies the only relevant type of "demand" is that coming from funders etc., whereas I'd want to frame this in terms of ways the world could be improved.

My position is that "demand" is a word for "what people will pay you for." EA exists for a couple reasons:

  1. Some object-level problems are global externalities, and even governments face a free rider problem. Others are temporal externalities, and the present time is "free riding" on the future. Still others are problems of oppression, where morally-relevant beings are exploited in
... (read more)
2MichaelA3moThis seems reasonable (at least in an econ/business context), but I guess really what I was saying in my comment is that your previous comment seemed to me to focus on demand and supply and note that they'll pretty much always not be in perfect equilibrium, and say "None of those problems indicate that something is wrong", without noting that the thing that's wrong is animals suffering, people dying of malaria, the long-term future being at risk, etc. I think I sort-of agree with your other two points, but I think they seem to constrain the focus to "demand" in the sense of "how much will people pay for people to work on this", and "supply" in the sense of "people who are willing and able to work on this if given money", whereas we could also think about things like what non-monetary factors drive various types of people to be willing to take the money to work on these things. (I'm not sure if I've expressed myself well here. I basically just have a sense that the framing you've used isn't clearly highlighting all the key things in a productive way. But I'm not sure there are actually any interesting, major disagreements here.)
EA is a Career Endpoint

I can see how you might interpret it that way. I'm rhetorically comfortable with the phrasing here in the informal context of this blog post. There's a "You can..." implied in the positive statements here (i.e. "You can take 15 years and become a domain expert"). Sticking that into each sentence would add flab.

There is a real question about whether or not the average person (and especially the average non-native English speaker) would understand this. I'm open to argument that one should always be precisely literal in their statements online, to prioritize avoiding confusion over smoothing the prosody.

EA is a Career Endpoint

Thanks for that context, John. Given that value prop, companies might use a TB-like service under two constraints:

  1. They are bottlenecked by having too few applicants. In this case, they have excess interviewing capacity, or more jobs than applicants. They hope that by investigating more applicants through TB, they can find someone outstanding.
  2. Their internal headhunting process has an inferior quality distribution relative to the candidates they get through TB. In this case, they believe that TB can provide them with a better class of applicants than their o
... (read more)
EA is a Career Endpoint

Great thoughts, ishaan. Thanks for your contributions here. Some of these thoughts connect with MichaelA's comments above. In general, they touch on the question of whether or not there are things we can productively discover or say about the needs of EA orgs and the capabilities of applicants that would reduce the size of the "zone of uncertainty."

This is why I tried to convey some of the recent statements by people working at major EA orgs on what they perceive as major bottlenecks in the project pipeline and hiring process.

One key challenge is triangu... (read more)

2ishaan3moI guess the "easy" answer is "do a poll with select interviews" but otherwise I'm not sure. I guess it would depend on which specific types of information you mean? To some degree organizations will state what they want and need in outreach. If you're referring to advice like what I said re: "indicate that you know what EA is in your application", a compilation of advice posts like this one about getting a job in EA [ https://forum.effectivealtruism.org/posts/btgGxzFaHaPZr7RQm/a-guide-to-improving-your-odds-at-getting-a-job-in-ea?fbclid=IwAR3pT1Z1rI7PhMZeBuhmDwxgmHND7tawg7Z2CyfjtSbSeQ1tAsvU_Vfgl6U] might help. Or you could try to research/interview to find more concrete aspects of what the "criteria + bar to clear on those criteria" is for different funders if you see a scenario where the answer isn't clearly legible. (If it's a bar at all. For some stuff it's probably a matter of networking and knowing the right person.) Another general point on collecting advice is that I think it's easy to accidentally conflate "in EA" (or even "in the world") with "in the speaker's particular organization, in that particular year, within that specific cause area" when listening to advice…The same goes for what both you and I have said above. For example, my perspective on early-career is informed by my particular colleagues, while your impression that "funders have more money than they can spend" or the work being all within "a small movement" etc is not so applicable for someone who wants to work in global health. Getting into specifics is super important.
4MichaelA3moI think that conflicts with some phrasings in this post, which are stated as recommendations/imperatives. So if in future you again have the goal of not telling people what they should do but rather providing something more like emotional support or a framework, I recommend trying to avoid that kind of phrasing. (Because as mentioned in another comment, I think this post in effect provides career advice and that that advice is overly specific and will only be right for some readers.) Example paragraph that's stated as about what people should do:
EA is a Career Endpoint

Good thoughts. I think this problem decomposes into three factors:

  1. Should there be a bar, or should all EA projects get funded in order of priority until the money runs out?
  2. If there's a bar, where should it be set, and why?
  3. After the bar is set, when should grantmakers re-examine its underlying reasoning to see if it still makes sense under present circumstances?

My post actively argues that we should have a bar, is agnostic about how high the bar should be, and assumes that the bar is immobile for the purposes of the reader.

At some point, I may give conside... (read more)

EA is a Career Endpoint

I agree, I should have added "or a safe career/fallback option" to that.

EA is a Career Endpoint

My sense is that Triplebyte focuses on "can this person think like an engineer" and "which specific math/programming skills do they have, and how strong are they?" Then companies do a second round of interviews where they evaluate Triplebyte candidates for company culture. Triplebyte handles the general, companies handle the idiosyncratic.

It just seems to me that Triplebyte is powered by a mature industry that's had decades of time and massive amounts of money invested into articulating its own needs and interests. Whereas I don't think EA is old or big or... (read more)

5John_Maxwell3moI used to work as an interviewer for TripleByte. Most companies using TripleByte put TripleByte-certified candidates through their standard technical onsite. From what I was able to gather, the value prop for companies working with TripleByte is mostly about 1. expanding their sourcing pipeline to include more quality candidates and 2. cutting down on the amount of time their engineers spend administering screens to candidates who aren't very good. Some of your comments make it sound like a TB like service for EA has to be a lot better than what EA orgs are currently doing to screen candidates. Personally, I suspect there's a lot of labor-saving value to capture if it is merely just as good (or even a bit worse) than current screens. It might also help organizations consider a broader range of people.
3MichaelA3moI agree that there are fewer low-hanging fruit than there used to be. On the other hand, there's more guidance on what to do and more support for how to do it (perhaps "better maps to the trees" and "better ladders" - I think I'm plagiarising someone else on the ladder bit). I'd guess that it is now overall somewhat or significantly harder for someone in the position Ben Todd was in to make something as useful as 80k, but it doesn't seem totally clear. And in any case, something as useful as 80k is a high bar! I think something could be much less useful and still be very useful. And someone perhaps could "skill up" more than Ben Todd had, but only for like a couple years. And I think there really are still a lot of fairly low-hanging fruit. I think some evidence for this is the continuing number of EA projects that seem to fill niches that seem like they obviously should be filled, seem to be providing value, and are created by pretty early-career people. (I can expand on this if you want, but I think e.g. looking at lists of EA orgs already gives a sense of what I mean.) I agree with many parts of your comment, but I continue to think only some sizeable fraction of people should be advised to "Go make yourself big and strong somewhere else, then come back here and show us what you can do", while also:
* many people should try both approaches at first
* many people should focus mostly on the explicitly EA paths (usually after trying both approaches and getting some evidence about comparative advantage)
* many people should go make themselves big and strong and impactful somewhere else, and then just stay there, doing great stuff
I think it's perhaps a little irresponsible to give public advice that's narrower than that - narrower advice makes sense if you're talking to a specific person and you have evidence about which of those categories of people they're part of, but not for a public audience. (I think it's also fine to give public advice like "
7tamgent3moI don't think this is true, at least not as a general rule. I think you can do both (have a safe career and pursue something entrepreneurial) if you make small, focused bets to begin with and build from there. Related discussion here [https://forum.effectivealtruism.org/posts/E4BB4GoXgm3bYPp9G/is-pursuing-ea-entrepreneurship-becoming-more-costly-to-the?commentId=XiyMZEpAv5kXqZYNn ].
EA is a Career Endpoint

Triplebyte's value proposition to its clients (the companies who pay for its services) is an improved technical interview process. They claim to offer tests that achieve three forms of value:

  1. Less biased
  2. More predictive of success-linked technical prowess
  3. Convenient (since companies don't have to run the technical interviews themselves)

If there's room for an "EA Triplebyte," that would suggest that EA orgs have at least one of those three problems.

So it seems like your first step would be to look in-depth at the ways EA orgs assess technical research skills.

A... (read more)

3MichaelA3moThanks for these thoughts - I think I'll add a link to your comment from that section of my post. I think your analysis basically sounds correct to me. I would also be quite surprised if this came into existence (and was actually used by multiple orgs) in the next 10 years, and I don't think it's likely to be the highest priority intervention for improving the EA-aligned research pipeline, though I'd be keen to at least see people flesh out and explore the idea a bit more. FWIW, I'm guessing that this commenter on my doc meant something a little bit more distant from Triplebyte specifically than what your comment suggests - in particular, I don't think the idea would be just to conduct technical interviews, but also other parts of the selection process. At least, that's how I interpreted the comment, and seems better to me, given that I think it's relatively rare for EA orgs to actually have technical interviews in their selection processes. (There may often be a few questions like that, but without it being the main focus for the interview. Though I also might be misinterpreting what a technical interview is - I haven't worked in areas like engineering or IT.)
EA is a Career Endpoint

Figuring out how to give the right advice to the right person is a hard challenge. That's why I framed skilling up outside EA as being a good alternative to "banging your head against the wall indefinitely." I think the link I added to the bottom of this post addresses the "many paths" component.

The main goal of my post, though, is to talk about why there's a bar (hurdle rate) in the first place. And, if readers are persuaded of its necessity, to suggest what to do if you've become convinced that you can't surpass it at this stage in your journey.

It would ... (read more)

3MichaelA3moAlso, here's a somewhat relevant intervention idea that seems interesting to me, copied from an upcoming post in my sequence on improving the EA-aligned research pipeline (so this passage focuses on research roles, but you can easily extrapolate the ideas):

IMPROVING THE VETTING OF (POTENTIAL) RESEARCHERS, AND/OR BETTER "SHARING" THAT VETTING

For example:
* Improving selection processes at EA-aligned research organisations
* Increasing the number and usefulness of referrals of candidates from one selection process (e.g., for a job or a grant) to another selection process.
  * This already happens, but could perhaps be improved by:
    * Increasing how often it happens
    * Increasing how well-targeted the referrals are
    * Increasing the amount of information provided to the second selection process?
    * Increasing how much of the second selection process the candidate can "skip"?
* Creating something like a "Triplebyte [https://triplebyte.com/] for EA researchers", which could scalably evaluate aspiring/junior researchers, identify talented/promising ones, and then recommend them to hirers/grantmakers^[This idea was suggested as a possibility by a commenter on a draft of this post.]
  * This could resolve most of the vetting constraints if it could operate efficiently and was trusted by the relevant hirers/grantmakers
2MichaelA3moI do think it would be good to get more clarity on what proportion of EAs are spending too much vs too little time pursuing explicitly EA-aligned roles (given their ultimate goals, fit, etc.), and more clarity on what proxies can be used to help people work out which group they're in. Though I think some decent insights can already be gleaned from, for example, posts tagged Working at EA vs Non-EA Orgs [https://forum.effectivealtruism.org/tag/working-at-ea-vs-non-ea-orgs] or things linked to from those posts, and one on one career advice conversations. (And I think we can also improve on the current situation - where some people are writing themselves off and others have too narrow a focus - by just making sure we always clearly acknowledge individual variation, there being many different good paths, it taking time to work out what one is a fit for, etc.) (I also like that Slate Star Codex post you link to, and agree that it's relevant here.)
EA is a Career Endpoint

Hi Michael, thanks for your responses! I'm mainly addressing the metaphorical runner on the right in the photograph at the start of the post.

I am also agnostic about where the bar should be. But having a bar means holding it in place. You don't move the bar just because you couldn't find a place to spend all your money.

For me, EA has been an activating and liberating force. It gives me a sense of direction, motivation to continue, and practical advice. I've run EA research and community development projects with Vaidehi Agarwalla, an... (read more)

8MichaelA3moI guess in terms of this metaphor, part of what I'm saying is that there are also some people who aren't "in the race" but really would do great if they joined it (maybe after a few false starts), and other people who are in the race but would be better off switching to tennis instead (and that's great too!). And I'm a little worried about saying something like "Hey, the race is super hard! But don't feel bad, you can go train up somewhere else for a while, and then come back!" Because some people could do great in the race already! (Even if several applications don't work out or whatever; there's a lot of variation between what different roles need, and a lot of random chance, etc.) And some of these people are erring in the direction of self-selecting out, feeling imposter syndrome, getting 4 job rejections and then thinking "well, that proves it, I'm not right for these roles", etc. Meanwhile, other people shouldn't "train up and come back", but rather go do great things elsewhere long-term! (Not necessarily leaving the community, but just not working at an explicitly EA org or with funding from EA funders.) And some of these people are erring in the direction of having their heart set on getting into an "EA job" eventually, even if they have to train up elsewhere first.

I'd also be worried about messaging like "Everyone needs to get in this particular race right now! We have lots of money and lots to do and y'all need to come over here!" And it definitely seems good to push against that. But I think we can try to find a middle ground that acknowledges there are many different paths, and different ones will be better for different people at different times, and that's ok. (E.g., I think this post by Rob Wiblin [https://forum.effectivealtruism.org/posts/vHPR95Gnsa3Gkgjof/consider-a-wider-range-of-jobs-paths-and-problems-if-you] does that nicely.)
What's wrong with the EA-aligned research pipeline?

Just to address point (2), the comments in "EA is vetting-constrained" suggest that EA is not that vetting-constrained:

  • Denise Melchin of Meta Fund: "My current impression for the Meta space is that we are not vetting constrained, but more mentoring/pro-active outreach constrained.... Yes, everything I said above is sadly still true. We still do not receive many applications per distribution cycle (~12)."
  • Claire Zabel of Open Philanthropy: "Based on my experience doing some EA grantmaking at Open Phil, my impression is that the bottleneck isn't in vetting pr
... (read more)
5MichaelA2moMultiple comments from multiple fund managers on the EA Infrastructure Fund's recent Ask Us Anything [https://forum.effectivealtruism.org/posts/KesWktndWZfGcBbHZ/] strongly suggest they also believe there are strong vetting constraints (even if other constraints also matter a lot). So I'm confident that the start of your comment is incorrect in an important way about an important topic. I think I was already confident of this due to the very wide array of other indications that there are strong vetting constraints, the fact that the quotes you mention don't really indicate that "EA is not that vetting-constrained" (with the exception of Denise's comment and the meta space specifically), and the fact that other comments on the same post you're quoting comments from suggest EA is quite vetting constrained. (See my other replies for details.) But this new batch of evidence reminded me of this and made the incorrectness more salient. I've therefore given your comment a weak downvote. I think it'd be better if it had lower karma because I think the comment would mislead readers about an important thing (and the high karma will lend it more credence). But you were writing in good faith, you were being polite, and other things you said in the comment were more reasonable, so I refrained from a strong downvote. (But I feel a little awkward/rude about this, hence the weird multi-paragraph explanation.)
2MichaelA3moTo be clear, I agree that "vetting" isn't the only key bottleneck or the only thing worth increasing or improving, and that things like having more good project ideas, better teams to implement them, more training and credentials, etc. can all be very useful too. And I think it's useful to point this out. In fact, my second section was itself only partly about vetting:
7MichaelA3moI think there's something valuable to this point, but I don't think it's quite right. In particular, I think it implies the only relevant type of "demand" is that coming from funders etc., whereas I'd want to frame this in terms of ways the world could be improved. I'm not sure we could ever reach a perfect world where it seems there's 0 room for additional impactful acts, but we could clearly be in a much better world where the room/need for additional impactful acts is smaller and less pressing. Relatedly, until we reach a far better world, it seems useful to have people regularly spotting what there's an undersupply of at the moment and thinking about how to address that. The point isn't to reach a perfect equilibrium between the resources and then stay there, but to notice which type of resource tends to be particularly useful at the moment and then focus a little more on providing/finding/using that type of resource. (Though some people should still do other things anyway, for reasons of comparative advantage, taking a portfolio approach, etc.) I like Ben Todd's comments [https://80000hours.org/podcast/episodes/ben-todd-on-what-effective-altruism-most-needs/] on this sort of thing.
9MichaelA3moI actually don't think that this is correct. Denise's comment does suggest that, for the meta space specifically. But Claire's comment seems broadly in agreement with the "vetting-constrained" view, or at least the view that that's one important constraint. Some excerpts: And Jan Kulveit's comment is likewise more mixed. And several other comments mostly just agree with the "vetting-constrained" view. (People can check it out themselves.) Of course, this doesn't prove that EA is vetting-constrained - I'm just contesting the specific claim that "the comments" on that post "suggest that EA is not that vetting-constrained". (Though I also do think that vetting is one key constraint in EA, and I have some additional evidence for that that's independent of what's already in that post and the comments there, which I could perhaps try to expand on if people want.)
Ending The War on Drugs - A New Cause For Effective Altruists?

Here's a list of critiques of the ITN framework, many of which involve critiques of the neglectedness criterion.

Ending the war on drugs has a few obvious goods:

  1. Making therapeutic or life-improving drugs more available
  2. Freeing up tax money for other purposes
  3. Decreasing punishment
  4. Decreasing revenue for terrorists and other bad actors

This seems to be a cause where partial success is meaningful. Every reduction in unnecessary imprisonment, tax dollar saved, and terrorist cell put out of business is a win. We also have some roughly sliding scales - the level of en... (read more)

Concerns with ACE's Recent Behavior

Those are the circles many of us exist in. So a more precise rephrasing might be “we want to stay in touch with the political culture of our peers beyond EA.”

This could be important for epistemic reasons. Antagonistic relationships make it hard to gather information when things are wrong internally.

Of course, PR-based deference is also a form of antagonistic relationship. What would a healthy yet independent relationship between EA and the social justice movement look like?

1tamgent3moCullen asked a similar question here [https://forum.effectivealtruism.org/posts/dqFjPFHmgFEZpg8ua/what-makes-outreach-to-progressives-hard] recently. Progressives and social justice movement are definitely not the same, but there's some overlap.
How to PhD

That makes sense. I like your approach of self-diagnosing what sort of resources you lack, then  tailoring your PhD to optimize for them.

One challenge with the "work backwards" approach is that it takes quite a bit of time to figure out what problems to solve and how to solve them. As I attempted this planning for my own imminent journey into grad school, my views gained a lot of sophistication, and I expect they'll continue to shift as I learn more. So I view grad school partly as a way to pursue the ideas I think are important/good fits, but also as a w... (read more)

6eca4mo"Working backwards" type thinking is indeed a skill! I find it plausible a PhD is a good place to do this. I also think there might be other good ways to practice it, like for example seeking out the people who seem to be best at this and trying to work with them. +1 on this same type of thinking being applicable to gathering resources. I don't see any structural differences between these domains.
Why I prefer "Effective Altruism" to "Global Priorities"

This is great, I’ll put a note in the main post highlighting this when I get home.

How to PhD

Just to clarify, it sounds like you are:

  1. Encouraging PhD students to be more strategic about how they pursue it
  2. Discouraging longtermist EA PhD-holders from going on to pursue a faculty position in a university, thus implying that they should pursue some other sector (perhaps industry, government, or nonprofits)

I also wanted to encourage you to add more specific observations and personal experiences that motivate this advice. What type of grad program are you in now (PhD or master's), and how long have you been in it? Were you as strategic in your approach t... (read more)

1eca4moI am doing 1. 2 is an incidental from the perspective of this post, but is indeed something I believe (see my response to bhalperin). I think my attempt to properly flag my background beliefs may have led to the wrong impression here. Or alternatively my post doesn't cover very much on pursuing academia, when the expected post would have been almost entirely focused on this, thereby seeming like it was conveying a strong message? In general I don't think about pursuing "sectors" but instead about trying to solve problems. Sometimes this involves trying to get a particular government gig to influence a policy, or needing to write a paper with a particular type of credibility that you might get from an academic affiliation or a research non-profit, or needing to build and deploy a technical system in the world, which maybe requires starting an organization. I'd encourage folks to work backwards from problems, to possible solutions, to what would need to happen on an object level to realize those solutions, to what you do with your PhD and other career moves. "Academia" isn't the most useful unit of analysis in this project, which is partly why I wasn't primarily trying to comment on it. Regarding specific observations and personal experiences: I agree this post could be better with more things like this. Unfortunately, I don't feel like including them. Open invite to DM me if you are thinking about a PhD or already in one and want to talk more, including about my strategy.
Report on Semi-informative Priors for AI timelines (Open Philanthropy)

This prior should also work for other technologies sharing these reference classes. Examples might include a tech suite amounting to 'longevity escape velocity', mind reading, fully-immersive VR, or highly accurate 10+ year forecasting.

1Tom_Davidson4moAgreed - the framework can be applied to things other than AGI.
Can you turn me into an effective altruist and do you want to?

Hi Rob. I can only speak for myself. A lot of people, myself included, discover EA online, because the name or the ideas feel right.

Then we discover there are a lot of people involved, huge amounts written, and many efforts going on. How do we meet people? How can we contribute? How can we find our place? How do we make sense of all the ideas?

I can only say that nobody is a nobody, and everybody struggles with these questions. It takes time to work it all out, so I advise patience. Write your thoughts out, and make sure to take care of yourself. It sounds like you are in the middle of building up a stable life for yourself, and I believe it’s extremely important for people in EA to focus on that first. Good luck!

3Rob4moThank you. I didn't want to talk about myself too much, but since you referenced stability in my life: I have one major project to complete to put me in a position where I will be a capable human being, ready to make an effective contribution to the lives of others, who no longer needs to prioritize themselves. Then I can seek out a calling, preferably one that can provide me a modest income too so that I can focus on it full time, though this isn't a necessity. So now is a good time to be thinking about what I want to do and how to go about it; then I will be in a position to take action straight away. The name is attractive in that I want to be of service in an effective way, and I'm not sure what the ideals of EAs as a community are yet, beyond using reason to make a difference in the most effective way. I have much to learn, and I like to listen to educational material, audiobooks in particular, so I will listen to one of the reading recommendations from this website, "Doing Good Better", on my long, slow run, either today or next week depending on whether I can get it ready in time (flat audiobook player battery, book not on Audible UK and other obstacles being present).
Why I prefer "Effective Altruism" to "Global Priorities"

Hi Jonas. On taking a second look, the sentence that clinched my interpretation of your argument as favoring a name change from EA to GP (or something else) was:

“ I personally would feel excited about rebranding "effective altruism" to a less ideological and more ideas-oriented brand (e.g., "global priorities community", or simply "priorities community")”

I will make a note that you aren’t advocating a name change. You may want to consider making this clearer in your post as well :)

2jackmalde4moIf you look back at Jonas' post a name change was just a "potential implication", alongside other steps to "de-emphasize the EA brand". I wouldn't say therefore that he is advocating a name change, just putting the idea out there. Also he certainly doesn't advocate changing it to "Global Priorities" specifically as you have claimed. It was just one very tentative idea he had (clue is in the use of "e.g."). EDIT: retracted as I thought AllAmericanBreakfast still thought Jonas was advocating for a name change but I misread
Why I prefer "Effective Altruism" to "Global Priorities"

I think it can be all of this, and much more. EA can have tremendous capacity for issuing broad recommendations and tailored advice to individual people. It can be about philosophy, governance, technology, and lifestyle.

How could we have a movement for effective altruism if we couldn’t encompass all that?

This is a community, not a think tank, and a movement rather than an institution. It goes beyond any one thing. So to join it or explain it - that’s a little like explaining what America is all about, or Catholicism is all about, or science is all about. You don’t just explain it, you live it, and the journey will look different to different people. That’s a feature, not a bug.

3Meadowlark4moThanks for this response! This is helpful, but I still have uncertainties. Take conferences as an example. Conferences can only be about so much, obviously given their limited time and bandwidth. Should we expect that EA conferences in the next ten years (let's say) will have all of these things? That Session A will be about how veganism is necessary (or unnecessary) and that Session B will be about how it only makes sense to focus on the longterm? I think it seems possible that you're right, but also EA is still very young and has already changed a lot in its short time on Earth. So, I think it's reasonable to assume that it will continue to change, and I think we can't easily say that it will or won't change in a way that becomes far less interested in lifestyle issues and far more interested in really big, cerebral questions about the future, cause x, and so on. Anecdotally, I think it's fair to notice that EA is moving in this direction a bit already. Why would we think that it won't continue to? The proportion of EA that is interested in lifestyle vs metaethics vs whatever else is not destined to be the same proportion forever, right? And therefore the content of the movement will change. Some of this disagreement might come down to the earlier forum debate of EA as a question vs an ideology. I view it as an ideology and very much not as something that you live in the way that you describe. But that strikes me as an agree to disagree-type situation.
Strong Evidence is Common

I didn’t say anything about what size/duration of returns would make you a top 1% trader.

Don't Be Bycatch

That’s good feedback and a complementary point of view! I wanted to check on this part:

“I think that a thing that this post gets wrong is that EA seems to be particularly prone to generating bycatch, and although there are solutions at the individual level, I'd also appreciate having solutions at higher levels of organization.”

Are you saying that you think EA is not particularly prone to generating bycatch? Or that it is, but it’s a problem that needs higher-level solutions?

Yeah, that's not my proudest sentence. I meant the latter, that it is particularly prone to generating bycatch, and hence it would benefit from higher level solutions. In your post, you try to solve this at the level of the little fish, but addressing that at the fisherman level strikes me as a better (though complementary) idea.

For Better Commenting, Avoid PONDS

Did I get them all? :D 

 

So close, yet so far! By ending your comment with a question and a smiley face, you missed "disengaged" and "prickly"! But keep trying, I know you've got this in you :P

A framework for discussing EA with people outside the community

I think for me, it might be best to use a straightforward “join us!” pitch.

Most people I know have considered the idea that there are better and worse ways to help the world. But they don’t extend that thinking to realize the implication that there might be a set of best ways. Nor do they have the long-tail of value concept. They also don’t have any emotional impulse pushing them to explore “what’s the best way to help the world?” Nor do they have any links to the community besides me.

My experience is that most of my friends and family have very limited ba... (read more)

3AaronBoddy6moI love this! I think for me a real barrier is the fact that I barrel ahead with the ideas too quickly... like I want to jump straight in at the deep-end with "we should think of all lives as equally important and we should be trying to consider the ways our donation can go farthest" - that idea on its own maybe isn't controversial, but probably hasn't engaged my conversational partner in the same way as in your example. One of the main motivations for me writing this post was to have a mental checklist when discussing EA so that I don't barrel ahead without bringing the other person along for the ride :) So for me, I think it's useful to have a framework in my head so I can ensure that these ideas build upon each other:
1. do they want to do some good in the world
2. do they agree that all lives are equally important
3. do they agree that there are some situations where your donation/time will make far more of a difference than others
4. do they agree that it is possible/worthwhile to figure out which interventions are the most effective
5. this stuff is really engaging and there is already a whole movement that you can join so you don't have to do all this on your own!
That's a simplified framework (I just tried to pick out the key beats in your conversation example) but it definitely helps for me to have a framework :)
Call for beta-testers for the EA Pen Pals Project!

Update: We were unsuccessful in seeking funding to automate this project, and for the time being we do not have capacity to maintain it manually. The project is closed.

EA lessons from my father

I think these issues are extremely complex, and I think you bring up a good point, one with underlying values that I agree with. Nevertheless, many of my research interests are in Alzheimer's, chronic severe pain, and life extension. I think that people in poor countries ultimately are going to improve their length and quality of life, and there's a strong trend in that direction already. I am long on Malaria being eradicated within the next 30 years. We mostly know what to do; what's holding us back is a combination of environmental caution... (read more)

2brb2431yVery clear argument, thank you. While I do not believe that I can change your mind, judging from your tone, I also think that I do not need to: happier and more relaxed people may truly be in a better position to share their privileges with others, who then will be also happier and more relaxed. Then, I hope you will succeed in your research, while reminding your peers about the cost-effective, EA ways to share happiness with persons in the world.
Research on developing management and leadership expertise

Do the book and other resource recommendations especially apply to people interested in working on animal welfare?

1Jamie_Harris1yNot really! It's just that some of the recommendations come from our conversations with managers and leaders who work in animal advocacy. The third tab on the spreadsheet has some animal-advocacy-specific resources, but most of them are generic.
Biases in our estimates of Scale, Neglectedness and Solvability?

Here is that review I mentioned. I'll try and add this post to that summary when I get a chance, though I can't do justice to all the mathematical details.

If you do give it a glance, I'd be curious to hear your thoughts on the critiques regarding the shape and size of the marginal returns graph. It's these concerns that I found most compelling as fundamental critiques of using ITN as more than a rough first-pass heuristic.

2MichaelStJules1yI've added a summary at the start of the Bonus section you could use: (And this is because we're taking the product of the expected values rather than the expected value of the product and not explicitly modelling correlations between terms.) Is this about Neglectedness assuming diminishing marginal returns? I think if you try to model the factors as they're defined formally and mathematically by 80,000 Hours, Tractability can capture effects in the opposite direction and, e.g. increasing marginal returns. At any rate, if the terms, as defined mathematically, are modelled properly (and assuming no division by zero issues), then when we take the product, we get Good done / extra resources, and there's nothing there that implies an assumption of diminishing or increasing marginal returns, so if Neglectedness assumes diminishing marginal returns, then the other factors assume increasing marginal returns to compensate. How many extra resources we consider in Neglectedness could be important, though, and it could be the case that Good done / extra resources is higher or lower depending on the size of "extra resources". I think this is where we would see diminishing or increasing marginal returns, but no assumption either way.
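To make that parenthetical concrete, here is a minimal Monte Carlo sketch of how the product of the factors' expected values can diverge from the expected value of their product once the factors are correlated. All distributions and numbers below are invented for illustration, not values from any real cost-effectiveness analysis; the shared shock z is just one simple way to induce positive correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical lognormal factors; the shared shock z makes them positively correlated.
z = rng.normal(size=n)
importance    = np.exp(1.0 + 0.5 * z + 0.5 * rng.normal(size=n))
tractability  = np.exp(0.0 + 0.5 * z + 0.5 * rng.normal(size=n))
neglectedness = np.exp(-1.0 + 0.5 * z + 0.5 * rng.normal(size=n))

# Scoring each factor separately and multiplying the means (the usual ITN shortcut)...
product_of_means = importance.mean() * tractability.mean() * neglectedness.mean()
# ...versus taking the expectation of the product directly.
mean_of_product = (importance * tractability * neglectedness).mean()

print(f"E[I] * E[T] * E[N] = {product_of_means:.2f}")  # ~2.1
print(f"E[I * T * N]       = {mean_of_product:.2f}")   # ~4.5
```

With these made-up parameters, the naive product of means comes out at roughly half the true expected value of the product; negative correlations would push the error in the opposite direction.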
Biases in our estimates of Scale, Neglectedness and Solvability?

The end of this post will be beyond my math until next year, so I’m glad you wrote it :) Have you given thought to the pre-existing critiques of the ITN framework? I’ll link to my review of them later.

In general, ITN should be used as a rough, non-mathematical heuristic. I’m not sure the theory of cause prioritization is developed enough to permit so much mathematical refinement.

In fact, I fear that it gives a sheen of precision to what is truly a rough-hewn communication device. Can you give an example of how an EA organization presently using ITN could improve their analysis by implementing some of the changes and considerations you’re pointing out?

3MichaelStJules1yThanks! I think I've looked at a few briefly. I think the framework is mostly fine theoretically, based on the formal definitions of the (linear scale) terms and their product as a cost-effectiveness estimate. I'd imagine the concerns are more with the actual interpretation and application. For example, Neglectedness is calculated based on current resources, not also projected resources, so an issue might not be really neglected, because you expect resources to increase in the future without your intervention. This accounts partially for "urgency". I think EA orgs are mostly not making the mistakes I describe in each of sections 1, 2 and 3, but the solution is pretty straightforward: consider the possibility of negative outcomes, and take expectations before taking logarithms. For the Bonus section, my suggestion would be to give (independent) distributions for each of the factors, and check the bounds I describe in "Bounding the error" to see how sensitive the analysis could be to dependencies, and if it's not sensitive enough to change your priorities, then proceed as usual. If you find that it could be sensitive enough and you think there may be dependencies, model dependencies in the distributions and actually calculate/estimate the expected value of the product (using Guesstimate [https://www.getguesstimate.com/], for example, but keeping in mind that extremely unlikely outcomes might not get sampled, so if most of the value is in those, your estimate will usually be way off). Or you can just rely on cost-effectiveness analyses for specific interventions when they're available, but they aren't always available.
[WIP] Summary Review of ITN Critiques

I also hoped to imply that ITN is more than a heuristic. It also serves a rhetorical purpose.

I worry that its seeming simplicity can belie the complexity of cause prioritization. Calculating an ITN rank or score can be treated as the end, rather than the beginning, of such an effort. The numbers can tug the mind in the direction of arguing with the scores, rather than evaluating the argument used to generate them.

My hope is to encourage people to treat ITN scores just as you say - taking them lightly and setting them aside once they've developed a deeper understanding of an issue.

Thanks for reading.

[WIP] Summary Review of ITN Critiques

Agreed. However, one of the subcritiques in that point is the divide-by-zero issue that renders causes that have received zero investment "theoretically unsolvable." This is because a % increase in resources from a starting point of 0 will always yield zero. The critic seems to feel it's a result of dividing up the issue in this way.

I leave it to the forum to judge!
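For reference, here is the decomposition this subcritique targets, paraphrasing how 80,000 Hours defines the three factors (a sketch of their definitions, not a quotation):

```latex
\frac{\text{good done}}{\text{extra resources}}
= \underbrace{\frac{\text{good done}}{\%\,\text{of problem solved}}}_{\text{Importance}}
\times \underbrace{\frac{\%\,\text{of problem solved}}{\%\,\text{increase in resources}}}_{\text{Tractability}}
\times \underbrace{\frac{\%\,\text{increase in resources}}{\text{extra resources}}}_{\text{Neglectedness}}
```

With current resources R and extra resources ΔR, the Neglectedness factor is (ΔR/R)/ΔR = 1/R, which is undefined when R = 0: the percentage scaffolding has no foothold for a cause with zero prior investment, which is where the divide-by-zero complaint comes from.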

Can you give a few examples? Having options and avoiding risk are both good things, all else being equal.

The ITN framework, cost-effectiveness, and cause prioritisation

There’s a range of posts critiquing ITN from different angles, including many of the ones you specify. I was working on a literature review of these critiques, but stopped in the middle. It seemed to me that organizations that use ITN do so in part because it’s an easy to read communication framework. It boils down an intuitive synthesis of a lot of personal research into something that feels like a metric.

When GiveWell analyzes a charity, they have a carefully specified framework they use to derive a precise cost effectiveness estimate. By contrast, I don

... (read more)

I want to give more context for the MacAskill quote.

The most obvious implication [of the Hinge of History hypothesis], however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building. If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some futu
... (read more)

Her first example of "complex cluelessness" is the same population size argument made by Mogensen, which I dealt with in section 2a. I think both simple and complex cluelessness are dealt with nicely by the debugging model I am proposing. But I'm not sure it's a valid distinction. I suspect all cluelessness is complex.

Debugging is a form of capacity building, but the distinction I drew is necessary. Sometimes we try to build advance capacity to solve an as-yet-intractable problem, as in AI safety research. This is vulnerable to the clu... (read more)

5Pablo2yI think your dismissal is premature. For one thing, the "debugging" approach you favor has been discussed by Will MacAskill [https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1] , a Senior Research Fellow at GPI: For another, the cluelessness literature isn't exhausted by GPI's contributions to it, and it includes other, more extensive discussions of your favorite approach, notably by Brian Tomasik [https://foundational-research.org/charity-cost-effectiveness-in-an-uncertain-world/#Punting_to_the_future] :
An update on Operations Camp 2019

Same. Keep up the good work. I'm looking forward to hearing more.

Competition is a sign of neglect in important causes with long time horizons for impact.

In my OP, I just meant that if the applicant gets in, they can teach. Too many applicants doesn't necessarily indicate that the field is oversubscribed; it may just mean that there's a mentorship bottleneck. One possible reason is that senior people in the field simply enjoy direct work more than teaching and choose not to focus on it. Insofar as that's the case, candidates are especially suitable if they're willing to focus more on providing mentorship if they get in and a bottleneck remains by the time they become senior.


Thanks for the feedback, it helps me understand that my original post may not have been as clear as I thought.

Competition is a sign of neglect in important causes with long time horizons for impact.

In the absence of other empirical information, I think it's a safe assumption that present bottlenecks correlate with future bottlenecks, though your first point is well taken.

I'm not quite following your second argument. It seems to say that the same level of applicant pool growth produces fewer mentors in mentorship-bottlenecked fields than in less mentorship-bottlenecked fields, but I don't understand why. Enlighten me?

Your third point is also correct. Stated generally, finding ways to increase the availability of the primary bottlenecked resource, or accomplish the same goal while using less of it, is how we can get the most leverage.

1Lukas_Finnveden2yIf a field is bottlenecked on mentors, it has too few mentors per applicant, or put differently, more applicants than the mentors can accept. Assuming that each applicant needs some fixed amount of time with a mentor before becoming senior themselves, increasing the size of the applicant pool doesn't increase the number of future senior people, because the present mentors won't be able to accept more people just because the applicant pool is bigger. Caveats:
* More people in the applicant pool may lead to future senior people being better (because the best people in a larger pool are probably better).
* It's not actually true that a fixed amount of mentor-input makes someone senior. With a larger applicant pool, you might be able to select for people who require less mentor-input, or who have a larger probability of staying in the field, which will translate to more future senior people (but still significantly fewer than in applicant-bottlenecked fields).
* My third point above: some people might be able to circumvent applying to the mentor-constrained positions altogether, and still become senior.
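A toy model may make the mentorship bottleneck concrete. The function and numbers below are made up for illustration; it assumes each mentor can train a fixed number of juniors per period and that every trainee mentors in the following period.

```python
# Toy pipeline: when mentor capacity binds, growing the applicant pool
# does not grow the future senior population, but adding mentors does.
def seniors_after(periods: int, mentors: int, applicants_per_period: int,
                  trainees_per_mentor: int = 1) -> int:
    for _ in range(periods):
        trained = min(applicants_per_period, mentors * trainees_per_mentor)
        mentors += trained  # trainees become mentors in the next period
    return mentors

print(seniors_after(3, mentors=2, applicants_per_period=100))  # 16
print(seniors_after(3, mentors=2, applicants_per_period=200))  # 16: doubling the pool changes nothing
print(seniors_after(3, mentors=4, applicants_per_period=100))  # 32: doubling mentors doubles output
```

The applicant pool only matters while it is smaller than mentor capacity; past that point, growth in future seniors is set entirely by the number of mentors, which is the point about mentorship-bottlenecked fields above.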