Could you say more about what you see as the practical distinction between a "slow down AI in general" proposal vs. a "pause" proposal?
Fun! I'm glad that you're working with experts on administering this and applaud the intention to post lessons learned. If you haven't already come across them, you might find these resources on participatory grantmaking helpful.
a system of governance that has been shown repeatedly to lead to better organizational performance.
This is a pretty strong empirical claim, and I don't see documentation for it either in your comment or the original post. Can you share what evidence you're basing this on?
Several years ago, 12 self-identified women and people of color in EA wrote a collaborative article that directly addresses what it's like to be part of groups and spaces where conversation topics like this come up. It's worth a read. Making discussions in EA groups inclusive
I'll bite on the invitation to nominate my own content. This short piece of mine spent little time on the front page and didn't seem to capture much attention, either positive or negative. I'm not sure why, but I'd love for the ideas in it to get a second look, especially by people who know more about the topic than I do.
Title: Leveraging labor shortages as a pathway to career impact? [note: question mark was added today to better reflect the intended vibe of the post]
Author: Ian David Moss
URL: https://forum.effectivealtruism.org/posts/xdMn6FeQGjrXDPnQj/le...
Hi David, thanks for your interest in our work! I need to preface this by emphasizing that the primary purpose of the quantitative model was to help us assess the relative importance of, and promise of engaging with, different institutions implicated in various existential risk scenarios. Less attention was given to the challenge of nailing the right absolute numbers, so those should be taken with a super-extra-giant grain of salt.
With that said, the right way to understand the numbers in the model is that the estimates were about the impact over 1...
Dustin & Cari were also among the largest donors in 2020: https://www.vox.com/recode/2020/10/20/21523492/future-forward-super-pac-dustin-moskovitz-silicon-valleys
Wow, I didn't see it at the time but this was really well written and documented. I'm sorry it got downvoted so much and think that reflects quite poorly on Forum voting norms and epistemics.
I like how Hacker News hides comment scores. Seems to me that seeing a comment's score before reading it makes it harder to form an independent impression.
I fairly frequently find myself thinking something like: "this comment seems fine/interesting and yet it's got a bunch of downvotes; the downvoters must know something I don't, so I shouldn't upvote." If others also reason this way, the net effect is herd behavior. What if I only saw a comment's score after voting or opting not to vote?
Maybe quadratic voting could help, by encouraging everyone to focus t...
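To make the quadratic-voting idea concrete, here is a minimal sketch of its core mechanic: casting n votes on a single item costs n² credits, which makes concentrating influence expensive relative to spreading it out. The function names and the budget size are hypothetical illustrations, not any forum's actual implementation.

```python
def vote_cost(n_votes: int) -> int:
    """Quadratic cost: n votes on one item cost n^2 credits."""
    return n_votes ** 2

def max_votes(budget: int) -> int:
    """Most votes a user can place on a single item within a credit budget."""
    n = 0
    while vote_cost(n + 1) <= budget:
        n += 1
    return n

# With a 16-credit budget, piling onto one comment buys only 4 votes,
# while sixteen 1-vote nudges on different comments cost the same total.
print(vote_cost(4))   # 16
print(max_votes(16))  # 4
```

The quadratic cost curve is what pushes voters to spread attention across many items rather than dogpiling a few.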
I think the post ended up around 0 or 1 karma, is that right? (I mean before people changed their voting based on hindsight!) I think it's important to distinguish between "got downvoted a lot but ended up at neutral karma" vs. "got downvoted double digits into no longer being visible." The former reflects somewhat poorly on EA, the latter very poorly.
Moreover, Sven Rone is a pseudonym. The author used a pen name because their views were unpopular and underappreciated at the time; they likely feared career repercussions if they went public with it. It's unfortunate that this was the environment they found themselves in.
I think it would have been very easy for Jonas to communicate the same thing in less confrontational language. E.g., "FWIW, a source of mine who seems to have some inside knowledge told me that the picture presented here is too pessimistic." This would have addressed JP's first point and been received very differently, I expect.
I understood the heart of the post to be in the first sentence: "what should be of greater importance to effective altruists anyway is how the impacts of all [Musk's] various decisions are, for lack of better terms, high-variance, bordering on volatile." While Evan doesn't provide examples of what decisions he's talking about, I think his point is a valid one: Musk is someone who is exceptionally powerful, increasingly interested in how he can use his power to shape the world, and seemingly operating without the kinds of epistemic guardrails that EA leaders try to operate with. This seems like an important development, if for no other reason than that Musk's and EA's paths seem more likely to collide than diverge as time goes on.
I agree this is an important point, but also think identifying top-ranked paths and problems is one of 80K's core added values, so don't want to throw out the baby with the bathwater here.
One less extreme intervention that could help would be to keep the list of top recommendations, but not rank them. Instead 80K could list them as "particularly promising pathways" or something like that, emphasizing in the first paragraphs of text that personal fit should be a large part of the decision of choosing a career and that the identification of a top tier of car...
I was also going to say that it's pretty confusing that this list is not the same as either the top problem areas listed elsewhere on the site or the top-priority career paths, although it seems derived from the latter. Maybe there are some version control issues here?
I feel like this proposal conflates two ideas that are not necessarily that related:
I agree with both of these premises, but focusing on their intersection feels pretty narrow and impact-limiting to me. As an example of an alternative way of looking at the first problem, you might consider instead or in addition having people on who work in high...
Hmm, I guess I'm more optimistic about 3 than you are. Billionaires are both very competitive and often care a lot about how they're perceived, and if a scaled-up and properly framed version of this evaluation were to gain sufficient currency (e.g. via the billionaires who score well on it), you might well see at least some incremental movement. I'd put the chances of that around 5%.
I thought this was great! With a good illustrator and some decent connections I think you could totally get it published as a picture book. A couple of feedback notes:
It's possible there's a more comprehensive writeup somewhere, but I can offer two data points regarding the removal of $30B in pandemic preparedness funding that was originally part of Biden's Build Back Better initiative (which ultimately evolved into the Inflation Reduction Act):
...I have some sympathy for the second view, although I'm skeptical that sane advisors have significant real impact. I'd love a way to test it as decisively as we've tested the "government (in its current form) responds appropriately to warning shots" hypothesis.
On my own models, the "don't worry, people will wake up as the cliff-edge comes more clearly into view" hypothesis has quite a lot of work to do. In particular, I don't think it's a very defensible position in isolation anymore....if you want to argue that we do need government support but (fortunatel
"I think the second view is basically correct for policy in general, although I don't have a strong view yet of how it applies to AI governance specifically. One thing that's become clear to me as I've gotten more involved in institution-focused work and research is that large governments and other similarly impactful organizations are huge, sprawling social organisms, such that I think EAs simultaneously underestimate and overestimate the amount of influence that's possible in those settings."
This is a problem I've spoken often about, and I'm curren...
Amazing resource, thanks so much! I'll add that the Effective Institutions Project is in the process of setting up an innovation fund to support initiatives like these, and we are planning to make our first recommendations and disbursements later this year. So if anyone's interested in supporting this work generally but doesn't have the time/interest to do their own vetting, let us know and we can get you set up as a participant in our pooled fund (you can reach me via PM on the Forum or write info@effectiveinstitutionsproject.org).
Also worth noting that you can be influential on Twitter without necessarily having a large audience (e.g., by interacting strategically with elites and frequently enough that they get to know you).
It seems worth noting that you can get famous on Twitter for tweeting, or you can happen to be famous on Twitter as a result of becoming famous some other way. The two pathways imply very different promotional strategies and theories of impact. But my sense is that it's pretty hard to grow an audience on Twitter through tweeting alone, no matter how good your content is.
He seems like a natural fit for the American economist-public intellectual cluster (Yglesias/Cowen/WaitButWhy/etc.) that's already pretty sympathetic to EA. The Twitter content is basically "EA in depth," but retaining the normie socially responsible brand they've come to expect and are comfortable with. Max Roser would be another obvious candidate to promote Peter. I'd start there and see where it goes.
I'm curious how this applies to infohazards specifically. Without actually spilling any infohazards, could you comment on how one could do a good job applying this model in such a situation?
I'm a little surprised that Rob Wiblin doesn't have more followers, but he's already high-profile enough that it wouldn't take that big of a push to get him into another tier. He's also the most logical person to leverage 80K's broader content on social media given his existing profile and activity. (ETA: although Habiba could do this too, per your suggestion.)
Peter Wildeford is an A+ follow on Twitter IMHO. I think it's realistic to get him a bunch more followers if that's something he wanted.
Do we know that he doesn't already have a social media manager? He's had a lot of help to promote the book.
In light of the two-factor voting, I'm unclear what you mean by "upvote." I would suggest using the "agree/disagree" box for the scoring, with "upvote/downvote" referring to your wisdom in suggesting the person and/or the analysis you provided. But I think you should clarify which one you intend to actually pay attention to.
I think raising one's own kids is often significantly more rewarding than raising adopted kids, just because one's own kids will share so much more of one's cognitive traits, personality traits, quirks, etc, that you can empathize better with them.
I'm extremely skeptical of this claim. Many parents I know with multiple biological children report that they have immensely different personalities, and it seems intuitively obvious that any statistical correlations of such traits between child and parent that are driven by genes will be overwhelmed by statistic...
Haha, well it would depend a lot on the specifics but we'd probably at least be up for having a conversation about it :)
Maybe indirectly? Addressing talent gaps within the EA community isn't a primary focus of ours, but it does seem that our outreach is helping to increase the pool of mid-career and senior people out in the world who take EA seriously.
Effective Institutions Project here. As of now I'd say our number is more like $150-200K, assuming we're talking about an annual commitment. The number is lower because our networks give us access to a large talent pool and I'm fairly optimistic that we can fill openings easily once we have the budget for them.
Thanks for the response!
I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside
That's fair, and I should also be clear that I'm less familiar with LTFF's grantmaking than some others in the EA universe.
It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.
Oh, I totally agree that the kind of risk analysis I mentioned is not costless, a...
I strongly agree with Sam on the first point regarding downside risks. My view, based on a range of separate but similar interactions with EA funders, is that they tend to overrate the risks of accidental harm [1] from policy projects, and especially so for more entrepreneurial, early-stage efforts.
To back this up a bit, let's take a closer look at the risk factors Asya cited in the comment above.
Re: "Why haven't I heard of OR?", I think your comments on the fragmentation and branding challenges are extremely on point. Last year Effective Institutions Project did a scoping exercise looking at different fields and academic disciplines that intersect with institutional decision-making, and it was amazing to see the variety of names and frames for what is ultimately a collection of pretty similar ideas. With that said, I think the directions that have been explored under the OR banner are particularly interesting and impressive, and am really glad to ...
One thing that occurs to me is that your post assumes that the only way to address the issues raised here is to hire different people and/or give them different responsibilities. But another possible route is for EA organizations to make more use of management consultancies. That could be a path worth considering for small nonprofits whose leaders mainly do just want to hire someone to take care of all the tasks they don't want to do themselves, and whose opportunity to make use of more strategic and advanced operations expertise is likely to be too sporad...
I think this post is excellent overall, but I do want to register a disagreement with your bid to separate operations work from the work that PAs do in most small nonprofit organizations. You have a keen observation about how the nature of operations work changes with scale: at top levels of a multinational corporation, the notion of a senior operations executive doing PA-style work is ludicrous. But for most EA organizations, that comparison is kind of nonsensical; we're talking about small outfits with 2-6 staff members and a mishmash of interns, contrac...
Do you think that some of the people who would have been attracted to effective philanthropy in the past now just join effective altruism?
Some, sure. EA seems to be a lot more mainstream now than it was even 3-4 years ago, so that's probably the main reason.
While I think EP has been influential, I just didn't find the work from CEP and similar places as intellectually engaging as what EA puts out (or as important overall).
I think the main thing EA has going for it over EP is that it has a much better track record of taking ideas seriously. EP explored a lo...
I wasn't there at the very beginning, but have followed the effective philanthropy "scene" since 2007 or so. My sense is that most EA community members aren't very knowledgeable about this whole side of institutional philanthropy, so I was pleasantly surprised to see the history recounted pretty accurately here! With that said, one quibble is that the book you cited entitled Effective Philanthropy by Mary Ellen Capek and Molly Mead is not one I'd ever heard of before reading this post; I think this is just a case of a low-profile resource happening to get ...
I don't have any inside info here, but based on my work with other organizations I think each of your first three hypotheses is plausible, either alone or in combination.
Another consideration I would mention is that it's just really hard to judge how to interpret advocacy failures over a short time horizon. Given that your first try failed, does that mean the situation is hopeless and you should stop throwing good money after bad? Or does it mean that you meaningfully moved the needle on people's opinions and the next campaign is now likelier to succeed? ...
One context note that doesn't seem to be reflected here is that in 2014, there was a lot of optimism for a bipartisan political compromise on criminal justice reform in the US. The Koch network of charities and advocacy groups had, to some people's surprise, begun advocating for it in their conservative-libertarian circles, which in turn motivated Republican participation in negotiations on the hill. My recollection is that Open Phil's bet on criminal justice reform funding was not just a "bet on Chloe," but also a bet on tractability: i.e., that a relativ...
I do not believe this explains the funding rationale. If you look at the groups funded (as per my comment), these are not groups interested in bipartisan political compromise. If OP were interested in bipartisan efforts there are surely better and more effective groups to fund in that direction rather than the groups funded here with very particular, and rather strong, political beliefs which cannot in many cases (even charitably) be described as likely to contribute to bipartisan efforts at reform.
Separating out how important networking is for different kinds of roles seems valuable, not only for the people trying to climb the ladder but also for the people already on the ladder. (e.g., maybe some of these folks desperate to find good people to own valuable projects that otherwise wouldn't get done should be putting more effort into recruiting outside of the Bay.)
I like this comment because it does a great job of illustrating how socioeconomic status influences the risks one can take. Consider the juxtaposition of these two statements:
(from the comment)
...Maybe this is mainly targeted at undergraduate students, who are more likely to have a few months of time over the summer with no commitments. But in that case how do they have the money to do what is basically an extended vacation? Most students aren't earning much/any money.
- Maybe this is only targeted at students who have wealthy families willing to fund expe
Really appreciate you writing this! Echoing others, I think many of these more self-serving motivations are pretty common in the community. With that said, I think some of these are much more potentially problematic than others, and the list is worth disaggregating on that dimension. For example, your comment about EA helping you not feel so fragile strikes me as prosocial, if anything, and I don't think anyone would have a problem with someone gaining hope that their own suffering could be reduced from engaging in EA.
The ones that I think are most worryin...
I think the issue is more that different users have very disparate norms about how often to vote, when to use a strong vote, and what to use it on. My sense (from a combination of noticing voting patterns and reading specific users' comments about how they vote) is that most are pretty low-key about voting, but a few high-karma users are much more intense about it and don't hesitate to throw their weight around. These users can then have a wildly disproportionate effect on discourse because if their vote is worth, say, 7 points, their opinion on one piece ...
I would be in favor of eliminating strong downvotes entirely. If a post or comment is going to be censored or given less visibility, it should be because a lot of people wanted that to happen rather than just two or three.
FWIW, in the (rough) BOTECs we use for opportunity prioritization at Effective Institutions Project, this has been our conclusion as well. GCR prevention is tough to beat for cost-effectiveness even only considering impacts on a 10-year time horizon, provided you are comfortable making judgments based on expected value with wide uncertainty bands.
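To illustrate the kind of rough BOTEC being described, here is a toy expected-value calculation with wide uncertainty bands over a 10-year horizon. All numbers and distribution parameters are hypothetical placeholders, not Effective Institutions Project's actual figures; lognormal draws stand in for order-of-magnitude uncertainty.

```python
# Toy GCR-prevention BOTEC: expected lives saved per dollar, with
# wide uncertainty bands. Every number here is hypothetical.
import random

random.seed(0)

def sample_cost_effectiveness() -> float:
    """One Monte Carlo draw of lives saved per $ over a 10-year horizon."""
    # Lognormal draws encode order-of-magnitude uncertainty.
    risk_reduced = random.lognormvariate(mu=-9, sigma=1.5)    # absolute risk reduction
    lives_at_stake = random.lognormvariate(mu=18, sigma=1.0)  # lives lost if the GCR occurs
    cost = 1e8                                                # program cost in $
    return risk_reduced * lives_at_stake / cost

draws = sorted(sample_cost_effectiveness() for _ in range(100_000))
mean = sum(draws) / len(draws)
p5, p95 = draws[5_000], draws[95_000]
print(f"mean {mean:.3g} lives/$, 90% interval [{p5:.3g}, {p95:.3g}]")
```

The point of the exercise is the one in the comment above: even with intervals spanning orders of magnitude, the expected value can remain high enough that GCR prevention is hard to beat, provided you're willing to act on the mean rather than the low end of the band.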
I think people have a cached intuition that "global health is most cost-effective on near-term timescales" but what's really happened is that "a well-respected charity evaluator that researches donation opportunit...