IanDavidMoss

FTX EA Fellowships

FWIW I'd also want to watch out for a "town/gown" dynamic developing over time where the newcomers and locals basically don't interact, which could cause various problems down the road. I'm sure FTX has thought about this, but I'd love to see the vision of "an EA community in the Bahamas" include people who are from the Bahamas as well.

Prioritization Research for Advancing Wisdom and Intelligence

Great to see more attention on this topic! I think there is an additional claim embedded in this proposal which you don't call out:

6. Categories of intervention in the wisdom/intelligence space are sufficiently differentiated in long-term impact potential for a prioritization exercise to yield useful insights.

I notice that I'm intuitively skeptical about this point, even though I basically buy your other premises. It strikes me that there is likely to be much more variation in impact potential between specific projects or campaigns, e.g. at the level of a specific grant proposal, than between whole categories, which are hard to evaluate in part because they are quite complementary to each other and the success of one will be correlated with the success of others. You write, "We wouldn’t want to invest a lot of resources into one field, to realize 10 years later that we could have spent them better in another." But why assume that's the only choice we face? Why not invest across all of these areas and chase optimality by judging opportunities on a project-by-project basis rather than making big bets on one category vs. another?

The Cost of Rejection

As another option to get feedback, many colleges and universities' career development offices offer counseling to their schools' alumni, and resume review (often in the context of specific applications to specific jobs) is one of the standard services they provide at no extra charge.

Noticing the skulls, longtermism edition

But essential to the criticism is that I shouldn't decide for them.

It seems like this is a central point in David's comment, but I don't see it addressed in any of what follows. What exactly makes it morally okay for us to be the deciders?

It's worth noting that in both US philanthropy and the international development field, there is currently a big push toward incorporating affected stakeholders and people with firsthand experience with the issue at hand directly into decision-making for exactly this reason. (See, e.g., participatory grantmaking, the Equitable Evaluation Initiative, and the process that fed into the Sustainable Development Goals.) I recognize that longtermism is premised in part on representing the interests of moral patients who can't represent themselves. But the question remains: what qualifies us to decide on their behalf? I think the resistance to longtermism in many quarters has much more to do with a suspicion that the answer to that question is "not much" than with any explicit valuation of present people over future people.

Improving Institutional Decision-Making: Which Institutions? (A Framework)

Thanks for the comment!

Do you have any further thoughts since posting this regarding how difficult vs valuable it is to attempt quantification of the values? Approximately how time-consuming is such work in your experience?

With the caveat that I'm someone who's pretty pro-quantification in general and also unusually comfortable with high-uncertainty estimates, I didn't find the quantification process to be all that burdensome. In constructing the FDA case study, far more of my time was spent on qualitative research to understand the potential role the FDA might play in various x-risk scenarios than on coming up with and running the numbers. Hope that helps!

Does the Forum Prize lead people to write more posts?

I agree with other commenters who have pointed out that using "more posts by previous prize-winning authors" as a proxy for the stated goal of "the creation of more content of the sort we want to see on the Forum"  seems like a strange way to evaluate the efficacy of the Forum Prize. In addition to the points already mentioned, I would add two more:

  • It doesn't consider potential variation in quality among posts by the same author. If prize-winning authors feel they have set a standard that's important to keep meeting, and as a result post less frequent but more thoughtful articles, that's generally a trade I'd be happy to accept as a reader.
  • It ignores the potential impact of the Forum Prize on other people's writing. How many people have been inspired to write something either because of the existence of the prize itself or because of some piece of writing that they learned about because of the prize? I would bet it's not zero.

Indeed, I would argue that the prize adjudication process itself offers a useful infrastructure for evaluating the Forum experience. Since you have a record of the scores that posts received each month as well as the qualitative opinions of longtime judges, you have the tools you need to assess in a semi-rigorous way whether the quality of the top posts has increased or decreased over time.

I also want to note that if CEA really is discontinuing the Forum Prize as such, that seems like a fairly major decision that should get its own top-level post, as the prize announcements themselves do. As it is, it's buried in an article whose title poses what I think most people would consider a pretty esoteric research question, so I expect that a lot of people will miss it.

Disentangling "Improving Institutional Decision-Making"

Once again, I think I agree, although I think there are some rationality/decision-making projects that are popular but not very targeted or value-oriented. Does that seem reasonable?

It does, and I admittedly wrote that part of the comment before fully understanding your argument about classifying the development of general-use decision-making tools as being value-neutral. I agree that there has been a nontrivial focus on developing the science of forecasting and other approaches to probability management within EA circles, for example, and that those would qualify as value-neutral using your definition, so my earlier statement that value-neutral is "not really a thing" in EA was unfair.

If I were to draw this out, I would add power/scope of institutions as a third axis or dimension (although I would worry about presenting a false picture of orthogonality between power and decision quality). The impact of an institution would then be related to the relevant volume of a rectangular prism, not the relevant area of a rectangle.

Yeah, I also thought of suggesting this, but think it's problematic as well. As you say, power/scope is correlated with decision quality, although more on a long-term time horizon than in the short term and more for some kinds of organizations (corporations, media, certain kinds of nonprofits) than others (foundations, local/regional governments). I think it would be more parsimonious to just replace decision quality with institutional capabilities on the graphs and to frame DQ in the text as a mechanism for increasing the latter, IMHO. (Edited to add: another complication is that the line between institutional capabilities that come from DQ and capabilities that come from value shift is often blurry. For example, a nonprofit could decide to change its mission in such a way that the scope of its impact potential becomes much larger, e.g., by shifting to a wider geographic focus. This would represent a value improvement by EA standards, but it would also open the organization up to greater possibilities for scale by giving it access to new funders, etc.)
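To put the contrast roughly in symbols (this is just shorthand on my part, not notation from either post): the three-axis framing above is something like

$$\text{Impact} \approx \text{Intentions} \times \text{Decision quality} \times \text{Power/scope},$$

whereas what I'm suggesting is closer to

$$\text{Impact} \approx \text{Intentions} \times \text{Institutional capabilities},$$

with decision quality treated as one mechanism among several for growing the capabilities term.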

Would you mind if I added an excerpt from this or a summary to the post?

No problem, go ahead!

Disentangling "Improving Institutional Decision-Making"

Wow! It's really great to see such an in-depth response to the definitional and foundational work that's been taking place around IIDM over the past year, plus I love your hand-drawn illustrations! As the author or co-author of several of the pieces you cited, I thought I'd share a few thoughts and reactions to different issues you brought up. First, on the distinctions and delineations between the value-neutral and value-oriented paradigms (I like those labels, by the way):

  • I don't quite agree that Jess Whittlestone's problem profile for 80K falls into what you're calling the "value-neutral" category, as she stresses at several points the potential of working with institutions focused on "important problems" or similar. For example, she writes: "Work on 'improving decision-making' very broadly isn’t all that neglected. There are a lot of people, in both industry and academia, trying out different techniques to improve decision-making....However, there seems to be very little work focused on...putting the best-proven techniques into practice in the most influential institutions." The definition of "important" or "influential" is left unstated in that piece, but from the context and examples provided, I read the intention as one of framing opportunities from the standpoint of broad societal wellbeing rather than organizations' parochial goals.
  • This segues nicely into my second response, which is that I don't think the value-neutral version of IIDM is really much of a thing in the EA community. CES is sort of an awkward example to use because a core tenet of democracy is the idea that one citizen's values and policy preferences shouldn't count more than another's; I'd argue that the impartial welfarist perspective that's core to EA philosophy is rooted in similar ideas. By contrast, I think people in our community are much more willing to say that some organizations' values are better than others, both because organizations don't have the same rights as human beings and also because organizations can agglomerate disproportionate power more easily and scalably than people.  I've definitely seen disagreement about how appropriate or effective it is to try to change organizations' values, but not so much about the idea that they're important to take into account in some way.
  • There is a third type of value-oriented approach that you don't really explore but I think is fairly common in the EA community as well as outside of it: looking for opportunities to make a positive impact from an impartial welfarist perspective on a smaller scale within a non-aligned organization (e.g., by working with a single subdivision or team, or on one specific policy decision) without trying to change the organization's values in a broader sense.

I appreciated your thought-provoking exploration of the two indirect pathways to impact you proposed. Regarding the second pathway (selecting which institutions will survive and flourish), I would propose that an additional complicating factor is that non-value-aligned institutions may be less constrained by ethical considerations in their option set, which could give them an advantage over value-aligned institutions from the standpoint of maximizing power and influence.

I did have a few critiques about the section on directly improving the outcomes of institutions' decisions:

  • I think the 2x2 grid you use throughout is a bit misleading. It looks like you're essentially using decision quality as a proxy for institutional power, and then concluding that intentions x capability = outcomes. But decision quality is only one input into institutional capabilities, and in the short term is dominated by institutional resources—e.g., the government of Denmark might have better average decision quality than the government of the United States, but it's hard to argue that Denmark's decisions matter more. For that reason, I think that selecting opportunities on the basis of institutional power/positioning is at least as important as value alignment. The visualization approach you took in the "A few overwhelmingly harmful institutions" graph seems to be on the right track in this respect.
  • One issue you don't really touch on except in a footnote is the distinction between stated values and de facto values for institutions, or internal alignment among institutional stakeholders. For example, consider a typical private health insurer in the US. In theory, its goal is to increase the health and wellbeing of millions of patients—a highly value-aligned goal! Yet in practice, the organization engages in many predatory practices to serve its own growth, enrichment of core stakeholders, etc. So is this an altruistic institution or not? And does bringing its (non-altruistic) actions into greater alignment with its (altruistic) goals count as improving decision quality or increasing value alignment under your paradigm?

While overall I tend to agree with you that a value-oriented approach is better, I don't think you give a fair shake to the argument that "value-aligned institutions will disproportionately benefit from the development of broad decision-making tools." It's important to remember that improving institutional decision-making in the social sector and especially from an EA perspective is a very recent concept. The professional world is incredibly siloed, and it's not hard at all for me to imagine that ostensibly publicly available resources and tools that anyone could use would, in practice, be distributed through networks that ensure disproportionate adoption by well-intentioned individuals and groups. I believe that something like this is happening with Metaculus, for example.

One final technical note: you used "generic-strategy" in a different way than we did in the "Which Institutions?" post—our definition imagines a specific organization that is targeted through a non-specific strategy, whereas yours imagines a specific strategy not targeted to any specific organization. I agree that the latter deserves its own label, but suggest a different one than "generic-strategy" to avoid confusion with the previous post.

I've focused mostly on criticisms here for the sake of efficiency, but I really was very impressed with this article and hope to see more writing from you in the future, on this topic and others!

Miranda_Zhang's Shortform

(I'm also wondering whether I am being overly concerned with theoretically justifying things!)

I think I would agree with this. It seems like you're trying to demonstrate your knowledge of a particular framework or set of frameworks through this exercise, and you're letting that constrain your choices a lot. That might be a good choice if you're definitely going into academia as a political scientist after this, but otherwise I would structure the approach around how research happens most naturally in the real world: you start with a research question that would have concrete practical value if it were answered, and then you set out to answer it using whatever combination of theories and methods makes sense for the question.

Miranda_Zhang's Shortform

Suggestion: use an expert lens, but make the division you're looking at [experts connected to/with influence in the Biden administration] vs. ["outside" experts].

Rationale: The Biden administration thinks of itself, and presents itself to the public, as technocratic and guided by science, but as with any administration, politics and access play a role as well. As you noted, the administration did a clear about-face on this despite the lack of consensus among experts in the public sphere. So why did that happen, and what role did expert influence play in driving it? Put another way, which experts was the administration listening to, and what does that suggest for how experts might be able to make change during the Biden administration's tenure?
