All of Peter Wildeford's Comments + Replies

Hi - thanks for this comment. As someone working on export control policy, let me give you my perspective.

Firstly, an important precondition for a cooperative pause is leverage. You don't get China to agree to a mutual pause by first giving away your main strategic advantage. You get them to agree by making the alternative, a race they're losing, worse than cooperation. Export controls are thus part of what creates the conditions for being able to pause. If you equalize compute access first, China has no reason to agree to a pause because t... (read more)

5
alesziegler
Hi, thanks for engaging, but I guess we'll have to agree to disagree. I do agree that chip controls would make a lot of sense as a source of leverage in negotiations over an AI pause in a world where China would be trying to build superintelligence and the US would be trying to force them to do a mutual pause. But we don't live in that world. The Biden administration had no intention to agree to a cooperative pause when it imposed chip controls, and the Trump administration has somehow even less than no intention to pause; Trump seems to be going all-in on AI development.

You might object that controls are good because they are sort of increasing the supply of leverage which in the future could be used to do a pause, and that is true, BUT to do a deal, both leverage and trust are needed. And imho even without controls, the US has a lot of leverage over China just by the sheer power of its global hegemony. So, leverage over China is already relatively abundant, so to speak, but Chinese trust is what is relatively scarce (as is, of course, American trust in Chinese intentions). So, I am skeptical that steps which build more leverage over China but decrease Chinese trust in American intentions get these countries closer to cooperation.

And I disagree that lifting controls doesn't recoup Chinese trust. It does not recoup it fully, as if controls never happened, but I do think that it builds trust relative to a situation of continuing controls. To use a somewhat strained example with which most are surely familiar, when Trump put up his Liberation Day tariffs, prices of various financial assets fell off a cliff, but when he partially reversed the tariffs, prices partially recouped. How come, if Trump has shown that he is untrustworthy and might change his mind any minute? Well, many investors are evidently betting he won't do that, and so far they've been correct. Imho it is a fairly general principle of human relationships that for maintaining trust: never doing a t

Hi! I'm a long-time effective altruist (14+ years) and utilitarian/utilitarian-adjacent. This is a sweet and earnest post - you're clearly bright and I admire your dedication at such a young age. You're right to recognize that burnout isn't utilitarian, but I worry that your "donate everything / camper van" framing is premature and probably wrong.

At age 14/15, the highest-EV move is almost always building optionality, not making binding commitments to extreme frugality. You're right to maximize income, but I think you're thinking too much about this in ter... (read more)

9
RyanCarey
Yeah, the cost of cheap shared housing is something like $20k/yr of 2026 dollars, whereas your impact would be worth a lot more than that, either because you are making hundreds of thousands of post-tax dollars per year, or because you're foregoing those potential earnings to do important research or activism. Van-living is usually penny-wise, but pound-foolish.

Hi. Thanks for writing this. I find electoral reform to be a genuinely interesting cause area, and I appreciate the effort to apply EA frameworks to it. I have a few concerns with the framing and some factual details:

On neglectedness: The claim that this is "the most neglected intervention in EA" doesn't match the track record. The Center for Election Science has received over $2.4M from Open Philanthropy, $100K from EA Funds, and $40K+ from SFF. 80,000 Hours has a problem profile on voting reform calling it a "potential highest priority area" and did a fu... (read more)

4
Matt Koesters
I agree with your critique. This is my first post on this forum, and while I had an idea of the rigor this audience would demand, I see how this falls short. Thank you for your feedback - I will be more diligent in future posts. 

Ronnie had a bit of a script but also improvised a lot. I had no script, no knowledge of Ronnie's script, and was very blind to the whole thing. But we did many takes, so I was able to adapt accordingly.

5
Nathan Young
Did the not knowing he was in the film come up organically then?
4
abrahamrowe
See the last sentence :) But I should have highlighted this more. It's a great piece.

This may be less fun but, for completeness, I want to present an alternative perspective -- I think I know exactly how I'd spend it and don't have any particular questions. Feel free to send the $20M over whenever works best.

2
NickLaing
Completely agree, just give @Peter Wildeford the money right now. Every minute we delay is lost expected value...
7
Ben Stevenson
Call off the Long Reflection
2
AppliedDivinityStudies
Ah wow, yeah super relevant. Thanks for sharing!

Hi David - I work a lot on semiconductor/chip export policy, so I think it's very important to think through the strategy here.

My biggest issue is that "short vs. long" timelines is not a binary. I agree that under longer timelines, say post-2035, China likely can significantly catch up on chip manufacturing. (Seems much less likely pre-2035.) But I think the logic for controls matters strongly for 2025-2035 timelines, and controls still might create a larger strategic advantage post-2035.

Who has the chips still matters, since it determines whether the country has enough comput... (read more)

8
Davidmanheim
First, I was convinced, separately, that chip production location matters more than I presumed here because chips are not commodities in an important way I neglected - the security of a chip isn't really verifiable post-hoc, and worse, the differential insecurity of chips to US versus Chinese backdoors means that companies based in different locations will have different preferences for which risks to tolerate. (On the other hand, I think you're wrong in saying that "the chip supply chain has unique characteristics [compared to oil,] with extreme manufacturing concentration, decades-long development cycles, and tacit knowledge that make it different" - because the same is true for crude oil extraction! What matters is who refines it, and who buys it, and what it's used for.)

Second, I agree that the dichotomy of short versus long timelines unfairly simplifies the question - I had intended to indicate that this was a spectrum in the diagram, but on rereading, didn't actually say this. So I'll clarify a few points. First, as others have noted, the relevant timeline is from now to takeoff, not from now to actual endgame. Second, if we're talking about takeoff after 2035, the investments in China are going to swamp western production. (This is the command economy advantage - though I could imagine it's vulnerable to the typical failure modes where they overinvest in the wrong thing, and can't change course quickly.)

On the other hand, for the highest acceleration short timelines, for fabrication, we're past the point of any decisive decisions on chip production, and arguably past the point of doing anything on the hardware usage to decide what occurs - the only route to control the tech is short term policy, where only the relative leads of the specific frontier companies matters, and controlling the chips is about maintaining a very short term lead that doesn't depend on technical expertise, just on hardware. (I'm skeptical of this - not because it's implausible, but

Congrats! I also thought it was great.

Sorry for the slightly off-topic question, but I noticed EAG London 2025 talks are uploaded to YouTube, while I didn't see any EAG Bay Area 2025 talks. Do you know when those will go up?

1
Jordan Pieters 🔸
Thanks Peter! We were delayed in processing the Bay Area talk videos, but they'll be up in the next couple of days.

I still stand by the book and I attribute a lot of my historical failures in management to not implementing this book well enough (especially the part about creating clarity around goals).

1
EffectiveAdvocate🔸
Thank you! 

If you're considering a career in AI policy, now is an especially good time to start applying widely as there's a lot of hiring going on right now. I documented in my Substack over a dozen different opportunities that I think are very promising.

Thank you for sharing your perspective and I'm sorry this has been frustrating for you and people you know. I deeply appreciate your commitment and perseverance.

I hope to share a bit of perspective from me as a hiring manager on the other side of things:

Why aren’t orgs leaning harder on shared talent pools (e.g. HIP’s database) to bypass public rounds? HIP is currently running an open search.

It's very difficult to run an open search for all conceivable jobs and have the best fit for all of them. And even if you do have a list of the top candidates ... (read more)

7
SiobhanBall
Hi Peter, I see that you're hiring right now (slicks hair back, clears throat). Thanks for engaging! Addressing your points in order:

  1. I agree that all conceivable jobs is indeed a broad category. By contrast, the vast majority of EA jobs are soft-skills based and, in my personal experience, rather straightforward. They follow standard business functions such as marketing, fundraising/growth, etc. I think most applicants can do the job well enough that it makes full hiring rounds hard to justify from an effectiveness standpoint. I don't think it takes a very special someone, a needle in the haystack, to do a mid-senior comms role with decent competence. If I were hiring for such a role, I might get 10-15 leads from the HIP directory who are actively searching, interview a handful of those and then extend an offer. I don't think the majority of roles require more than that or that the benefit of doing more than that can be balanced against the cost.
  2. EA isn't one big employer, true. However, it and the orgs under its banner are based on a set of principles of which cost effectiveness is foundational. Central EA orgs also play a part in influencing the internal policies of such orgs, including hiring. I suppose the hiring utopia/most cost-effective outcome would be to get good, committed people in high impact roles and have them stay at their orgs for decades so that you never need to hire again. In pursuit of that cost effective ideal, hirers should put more weight on proven commitment to the movement.
  3. Hopefully yes, but it seems like it's not benefitting her at all. You're right, it doesn't prove automatic fit - but again, I don't think many roles are in need of a special matrix of fit-forming factors. Why not go to the opposite end and spend even more on hiring rounds in pursuit of ever-better fit? Has anyone benchmarked output quality vs. search length?
  4. I said professional networks, not personal. I'm not advocating for pure nepotism. But if there's

I think you should get the LLM to give you the citation and then cite that (ideally after checking it yourself).

At least in my own normative thought, I don't just wonder about what meets my standards. [...] I think the most important disagreement of all is over which standards are really warranted.

Really warranted by what? I think I'm an illusionist about this in particular as I don't even know what we could be reasonably disagreeing over.

For a disagreement about facts (is this blue?), we can argue about actual blueness (measurable) or we can argue about epistemics (which strategies most reliably predict the world?) and meta-epistemics (which strategies most reli... (read more)

You're right that I need to bite the bullet on epistemic norms too and I do think that's a highly effective reply. But at the end of the day, yes, I think "reasonable" in epistemology is also implicitly goal-relative in a meta-ethical sense - it means "in order to have beliefs that accurately track reality." The difference is that this goal is so universally shared across so many different value systems, and so deeply embedded in the concept of belief itself, that it feels categorical.

You say I've "replaced all the important moral questions wi... (read more)

9
Richard Y Chappell🔸
I agree it's often helpful to make our implicit standards explicit. But I disagree that that's "what we're actually asking". At least in my own normative thought, I don't just wonder about what meets my standards. And I don't just disagree with others about what does or doesn't meet their standards or mine. I think the most important disagreement of all is over which standards are really warranted.

On your view, there may not be any normative disagreement, once we all agree about the logical and empirical facts. I think it's key to philosophy that there is more we can wonder about than just that. (There may not be any tractable disagreement once we get down to bedrock clashing standards, but I think there is still a further question over which we really disagree, even if we have no way to persuade the other of our position.)

It's interesting to consider the meta question of whether one of us is really right about our present metaethical dispute, or whether all you can say is that your position follows from your epistemic standards and mine follows from mine, and there is no further objective question about which we even disagree.

You raise a fair challenge about epistemic norms! Yes, I do think there are facts about which beliefs are most reasonable given evidence. But I'd argue this actually supports my view rather than undermining it.

The key difference: epistemic norms have a built-in goal - accurate representation of reality. When we ask "should I expect emeralds to be green or grue?" we're implicitly asking "in order to have beliefs that accurately track reality, what should I expect?" The standard is baked into the enterprise of belief formation itself.

But moral norms lack thi... (read more)

Why couldn't someone disagree with you about the purpose of belief-formation: "sure, truth-seeking feels obviously correct to you, but that's just because [some story]... not because we've discovered some goal-independent truth."

Further, part of my point with induction is that merely aiming at truth doesn't settle the hard questions of epistemology (any more than aiming at the good settles the hard questions of axiology).

To see this: suppose that, oddly enough, the grue-speakers turn out to be right that all new emeralds discovered after 2030 are observed ... (read more)

Thanks!

I think all reasons are hypothetical, but some hypotheticals (like "if you want to avoid unnecessary suffering...") are so deeply embedded in human psychology that they feel categorical. This explains our moral intuitions without mysterious metaphysical facts.

The concentration camp guard example actually supports my view - we think the guard shouldn't follow professional norms precisely because we're applying a different value system (human welfare over rule-following). There's no view from nowhere; there's just the fact that (luckily) most of us share similar core values.

Do you think there's an epistemic fact of the matter as to what beliefs about the future are most reasonable and likely to be true given the past? (E.g., whether we should expect future emeralds to be green or grue?) Is probability end-relational too? Objective norms for inductive reasoning don't seem any less metaphysically mysterious than objective norms for practical reasoning.

One could just debunk all philosophical beliefs as mere "deeply embedded... intuitions" so as to avoid "mysterious metaphysical facts". But that then leaves you committed to think... (read more)

You were negative toward the idea of hypothetical imperatives elsewhere but I don't see how you get around the need for them.

You say epistemic and moral obligations work "in the same way," but they don't. Yes, we have epistemic obligations to believe true things... in order to have accurate beliefs about reality. That's a specific goal. But you can't just assert "some things are good and worth desiring" without specifying... good according to what standard? The existence of epistemic standards doesn't prove there's One True Moral Standard any more than the... (read more)

7
Richard Y Chappell🔸
It's an interesting dialectic! I don't have heaps of time to go into depth on this, but you may get a better sense of my view from reading my response to Maguire & Woods, 'Why Belief is No Game':

"Nihilism" sounds bad but I think it's smuggling in connotations I don't endorse.

I'm far from a professional philosopher but I don't see how you could possibly make substantive claims about desirability from a pure meta-ethical perspective. But you definitely can make substantive claims about desirability from a social perspective and personal perspective. The reason we don't debate racist normative advice is because we're not racists. I don't see any other way to determine this.

Distinguish how we determine something from what we are determining.

There's a trivial sense in which all thought is "subjective". Even science ultimately comes down to personal perspectives on what you perceive as the result of an experiment, and how you think the data should be interpreted (as supporting some or another more general theory). But it would be odd to conclude from this that our scientific verdicts are just claims about how the world appears to us, or what's reasonable to conclude relative to certain stipulated ancillary assumptions. Commonse... (read more)

Peter Wildeford
5
0
0
80% ➔ 50% agree

Morality is Objective


People keep forgetting that meta-ethics was solved back in 2013.

fwiw, I think the view you discuss there is really just a terminological variant on nihilism:

The key thing to understand about hypothetical imperatives, thus understood, is that they describe relations of normative inheritance. “If you want X, you should do Y,” conveys that given that X is worth pursuing, Y will be too. But is X worth pursuing? That crucial question is left unanswered. A view on which there are only hypothetical imperatives is thus a form of normative nihilism—no more productive than an irrigation system without any liquid to flow through

... (read more)

I recently made a forecast based on the METR paper with median 2030 timelines and much less probability on 2027 (<10%). I think this forecast of mine is vulnerable to far fewer of titotal's critiques, but still vulnerable to some (especially not having sufficient uncertainty around the type of curve to fit).
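To make the "type of curve" worry concrete, here is a minimal, purely illustrative sketch (not my actual forecast; the data points and the threshold below are made up for demonstration) of how fitting the same trend with two different curve families shifts the extrapolated crossing date:

```python
# Hypothetical illustration only -- not the real forecast and not METR's data.
# Point: the same (made-up) task-horizon points, fit as an exponential
# (log-horizon linear in time) vs. a super-exponential (log-horizon quadratic
# in time), imply different dates for crossing an arbitrary threshold.
import numpy as np

# Made-up horizon data: years since 2019 vs. task horizon in minutes.
years = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
log_horizon = np.log(np.array([0.1, 0.3, 1.0, 4.0, 15.0, 60.0, 240.0]))

THRESHOLD = np.log(10_000)         # ~one work-month of minutes; a stand-in threshold
future = np.linspace(0, 25, 2501)  # years since 2019 to extrapolate over

for name, degree in [("exponential (linear in log space)", 1),
                     ("super-exponential (quadratic in log space)", 2)]:
    coeffs = np.polyfit(years, log_horizon, degree)  # least-squares fit
    projected = np.polyval(coeffs, future)           # extrapolated log-horizon
    if np.any(projected > THRESHOLD):
        crossing = 2019 + future[np.argmax(projected > THRESHOLD)]
        print(f"{name}: crosses the threshold around {crossing:.1f}")
    else:
        print(f"{name}: no crossing within 25 years")
```

The point isn't the specific numbers; it's that the extrapolated timeline is sensitive to which curve family you fit, which is exactly the kind of uncertainty I wish I had represented better.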

p(doom) is about doom. For AI, I think this can mean a few things:

  • Literal human extinction

  • Humans lose power over their future but are still alive (and potentially even have nice lives), either via stable totalitarianism or gradual disempowerment or other means

The second bucket is pretty big.

8
Jeroen Willems🔸
I checked parts of the study, and the 0.12% figure is for P(AI-caused existential catastrophe by 2100) according to the "AI skeptics". This is what is written about the definition of existential catastrophe just before it: That sounds similar to the classic existential risk definition? (Another thing that's important to note is that the study specifically sought forecasters skeptical of AI. So it doesn't tell us much if anything about what a group of random superforecasters would actually predict!)

I am very very surprised your 'second bucket' contains the possibility of humans potentially having nice lives! I suspect if you had asked me the definition of p(doom) before I read your initial comment, I would actually have mentioned the definition of existential risks that includes the permanent destruction of future potential. But I simply never took that second part seriously? Hence my initial confusion. I just assumed disempowerment or a loss of control would lead to literal extinction anyway, and that most people shared this assumption. In retrospect, that was probably naive of me. Now I'm genuinely curious how much of people's p(doom) estimates actually comes from actual extinction versus other scenarios...

What do the superforecasters say? Well, the most comprehensive effort to ascertain and influence superforecaster opinions on AI risk was the Forecasting Research Institute’s Roots of Disagreement Study.[2] In this study, they found that nearly all of the superforecasters fell into the “AI skeptic” category, with an average P(doom) of just 0.12%. If you’re tempted to say that their number is only so low because they’re ignorant or haven’t taken the time to fully understand the arguments for AI risk, then you’d be wrong; the 0.12% figure was obtained after

... (read more)
4
Jeroen Willems🔸
Interesting, I thought p(doom) was about literal extinction? If it also refers to unrecoverable collapse, then I'm really surprised that takes up 15-30% of your potential scenarios! I always saw that part of the existential risk definition as negligible.
5
Vasco Grilo🔸
Hi Peter, Relatedly, Table 1 of the report on the Existential Risk Persuasion Tournament (XPT) shows there was much more agreement between superforecasters and experts about catastrophic risk than extinction risk.

I just saw that Season 3 Episode 9 of Leverage: Redemption ("The Poltergeist Job") that came out on 2025 May 29 has an unfortunately very unflattering portrayal of "effective altruism".

Matt claims he's all about effective altruism. That it's actually helpful for Futurilogic to rake in billions so that there's more money to give back to the world. They're about to launch Galactica. That's free global Internet.

[...] But about 50% of the investments in Galactica are from anonymous crypto, so we all know what that means.

The main antagonist and CEO of Fu... (read more)

Good reminder that you should red team your Theory of Change!

Peter Wildeford
18
7
1
43% disagree

I don't think this is as clear a dichotomy as people think it is. A lot of global catastrophic risk doesn't come from literal extinction because human extinction is very hard. A lot of mundane work on GCR policy involves a wide variety of threat models that are not just extinction.

2
Davidmanheim
What about the threat of strongly superhuman artificial superintelligence?

Here's my summary of the recommendations:

  • National security testing
    • Develop robust government capabilities to evaluate AI models (foreign and domestic) for security risks
    • Once ASL-3 is reached, government should mandate pre-deployment testing
    • Preserve the AI Safety Institute in the Department of Commerce to advance third-party testing
    • Direct NIST to develop comprehensive national security evaluations in partnership with frontier AI developers
    • Build classified and unclassified computing infrastructure for testing powerful AI systems
    • Assemble interdisciplinary team
... (read more)

If you've liked my writing in the past, I wanted to share that I've started a Substack: https://peterwildeford.substack.com/

Ever wanted a top forecaster to help you navigate the news? Want to know the latest in AI? I'm doing all that in my Substack -- forecast-driven analysis about AI, national security, innovation, and emerging technology!

Something that I personally would find super valuable is to see you work through a forecasting problem "live" (in text). Take an AI question that you would like to forecast, and then describe how you actually go about making that forecast. The information you seek out, how you analyze it, and especially how you make it quantitative. That would

  1. make the forecast process more transparent for someone who wanted to apply skepticism to your bottom line
  2. help me "compare notes", ie work through the same forecasting question that you pose, come to a conclusion, a
... (read more)

Yeah I think so, though there still is a lot of disagreement about crucial considerations. I think the OP advice list is about as close as it's going to get.

2
Nathan Young
I think that feels like a failure of the community in some sense, or maybe a reduction in ambition.

I'm very uncertain about whether AI really is >10x neglected animals, and I cannot emphasize enough that reasonable and very well-informed people can disagree on this issue; I could definitely imagine changing my mind on this over the next year. This is why I framed my comment the way I did, hopefully making it clear that donating to neglected animal work is very much an answer I endorse.

I also agree it's very hard to know whether AI organizations will have an overall positive or negative (or neutral) impact. I think there's higher-level strategic issu... (read more)

1
CB🔸
Makes sense! I understand the position. Regarding AI x animals donation opportunities, all of this is pretty new but I know a few. Hive launched an AI for Animals website, with an upcoming conference: https://www.aiforanimals.org/ I also know about Electric Sheep, which has made a fellowship on the topic: https://electricsheep.teachable.com/

Since it looks like you're looking for an opinion, here's mine:

To start, while I deeply respect GiveWell's work, in my personal opinion I still find it hard to believe that any GiveWell top charity is worth donating to if you're planning to do the typical EA project of maximizing the value of your donations in a scope sensitive and impartial way. ...Additionally, I don't think other x-risks matter nearly as much as AI risk work (though admittedly a lot of biorisk stuff is now focused on AI-bio intersections).

Instead, I think the main difficult judgement ca... (read more)

4
Nathan Young
I think I am happy to take this as the point I am trying to make. I don't see a robust systematic take on where to donate in animals and AI.  Isn't it reasonable to expect the EA community to synthesise one of these, rather than each of us having to do our own?    
7
CB🔸
I agree with this comment. Thanks for this clear overview. The only element where I might differ is whether AI really is >10x neglected animals.

My main issue is that while AI is a very important topic, it's very hard to know whether AI organizations will have an overall positive or negative (or neutral) impact. First, it's hard to know what will work and what won't accidentally increase capabilities. More importantly, if we end up in a future aligned with human values but not animals or artificial sentience, this could still be a very bad world in which a large number of individuals are suffering (e.g., if factory farming continues indefinitely).

My tentative and not very solid view is that work at the intersection of AI x animals is promising (eg work that aims to get AI companies to commit towards not committing animal mistreatment), and attempts for a pause are interesting (since they give us more time to figure out stuff). If you think that an aligned AGI will truly maximise global utility, you will have a more positive outlook. But since I'm rather risk averse, I devote most of my resources to neglected animals.

I do agree the EA Funds made a mistake not returning to fixed grant rounds after the Mega Money era was over. It's so much easier to organize, coordinate, and compare.

3
JJ Hepburn
I think it's actually better for applicants to have a deadline too. Plenty of people procrastinate on applying, some to the point of eventually not bothering at all. Also, if you are rejected but encouraged to reapply, it's clearer when it's ok to apply. SFF has two rounds a year and I've applied, been rejected, and then applied again the next round. I've probably applied more times to SFF than LTFF and this mostly comes down to there being an application deadline.

It also probably helps when getting rejected and comparing to other things you see get funded. If something from a year ago that is just like what you were doing, or seems much less valuable, got funded, it's hard to understand that the funding bar might have been different then. It's a much cleaner comparison when you are all in one group. Even for myself, getting funded by SFF one round and not a later round, I think "oh, look at all these new projects that didn't exist before that are so clearly awesome. Fair enough they don't have funding for me any more."

Thanks for the comment, I think this is very astute.

~

Recently it seems like the community on the EA Forum has shifted a bit to favor animal welfare. Or maybe it's just that the AI safety people have migrated to other blogs and organizations.

I think there's a (mostly but not entirely accurate) vibe that all AI safety orgs that are worth funding will already be approximately fully funded by OpenPhil and others, but that animal orgs (especially in invertebrate/wild welfare) are very neglected.

I don't think that all AI safety orgs are actually fully funded... (read more)

8
Ozzie Gooen
That makes sense, but I'm feeling skeptical. There are just so many AI safety orgs now, and the technical ones generally aren't even funded by OP. For example: https://www.lesswrong.com/posts/9n87is5QsCozxr9fp/the-big-nonprofits-post
While a bunch of these salaries are on the high side, not all of them are.

I basically endorse this post, as well as the use of the tools created by Rethink Priorities that collectively point to quite strong but not overwhelming confidence in the marginal value of farmed animal welfare.

1
Lovkush 🔸
Tiny comment: you have ImportAI twice in the list.

I think more EAs should consider operations/management/doer careers over research careers, and that operations/management/doer careers should be higher status within the community.

I get a general vibe that in EA (and probably the world at large), being a "deep thinking researcher"-type is way higher status than being an "operations/management/doer"-type. Yet the latter is also very high impact work, often higher impact than research (especially on the margin).

I see many EAs erroneously try to go into research and stick to research despite having very ... (read more)

2
PeterSlattery
Just wanted to quickly say that I hold a similar opinion to the top paragraph and have had similar experiences in terms of where I felt I had most impact. I think that the choice of whether to be a researcher or do operations is very context dependent. If there are no other researchers doing something important, your competitive advantage may be to do some research, because that will probably outperform the counterfactual (no research) and may also catalyze interest and action within that research domain. However, if there are a lot of established organizations and experienced researchers, or just researchers who are more naturally skilled than you, already involved in the research domain, then you can often have a more significant impact by helping to support those researchers or attract new researchers. One way to navigate this is to have what I call a research hybrid role, where you work as a researcher but allocate some flexible amount of time to more operations / field building activities depending on what seems most valuable.
4
Joseph
Which of these two things do you mean?
  • operations/management/doer careers should be higher status than they currently are within EA
  • operations/management/doer careers should be higher status than research careers within EA
3
Chris Leong
I suspect it varies by cause area. In AI Safety, the pool of people who can do useful research is smaller than the pool of people who could do good ops work (which is more likely to involve EAs who prefer a different cause area, but are happy to just have an EA ops job).
7
SiebeRozendal
I guess this is the same dynamic as why movie and sports stars are high status in society: they are highly visible compared to more valuable members of society (and more entertaining to watch). We don't really see much of highly skilled operations people compared to researchers.

For operations roles, and focusing on impact (rather than status), I notice that your view contrasts markedly with @abrahamrowe’s in his recent ‘Reflections on a decade of trying to have an impact’ post:

Impact Through Operations

  • I don’t really think my ops work is particularly impactful, because I think ops staff are relatively easy to hire for compared to other roles. However I have spent a lot of my time in EA doing ops work.
    • I was RP’s COO for 4 years, overseeing its non-research work (fiscal sponsorship, finance, HR, communications, fundraising, etc
... (read more)
2
[anonymous]
Did the research experience help you be a better manager and operator from within research organizations? I feel like getting an understanding by doing some research could be helpful and probably you could gain generalizable/transferable skills but I’m just speculating here.
1
yz
I think it might be fine if people have genuine interest in research (though it has to be intrinsic motivation), which will make their learning faster with more devoted energy. But overall I see a lot of value in operations/management/application work, as it gives people opportunities to learn how to turn research into real impact, and how tricky the real world and applications can sometimes be.

Do you have any ideas or suggestions (even rough thoughts) regarding how to make this change, or for interventions that would nudge people's behavior?

Off the top of my head: A subsidized bootcamp on core operations skills? Getting more EAG speakers/sessions focused on operations-type topics? Various respected and well-known EAs publicly stating that Operations is important and valuable? A syllabus (readings, MOOCs, tutorials) that people can work their way through independently?

5
Dylan Richardson
I do think that the marginal good of additional researchers, journalists, content creators, etc. isn't exactly as high as it is thought to be. But there's an obvious rational-actor (collective action problem?) explanation: other people may not be needed, but me, with my idiosyncratic ideologies? Yep! This also entails that the less representative an individual is of the general movement, the higher the marginal value for him in particular to choose a research role.

One question I often grapple with is the true benefit of having EAs fill certain roles, particularly compared to non-EAs. It would be valuable to see an analysis—perhaps there’s something like this on 80,000 Hours—of the types of roles where having an EA as opposed to a non-EA would significantly increase counterfactual impact. If an EA doesn’t outperform the counterfactual non-EA hire, their impact is neutralized. This is why I believe that earning to give should be a strong default for many EAs. If they choose a different path, they should consider wheth... (read more)

I really appreciate these dates being announced in advance - it makes it much easier to plan!

I'm not sure I understand well enough what these questions are looking for to answer them.

Firstly, I don't think "the movement" is centralized enough to explicitly acknowledge things as a whole - that may be a bad expectation. I think some individual people and organizations have done some reflection (see here and here for prominent examples), though I would agree that there likely should be more.

Secondly, it definitely seems very wrong to me to say that EA has had no new ideas in the past two years. Back in 2022 the main answer to "how... (read more)

Answer by Peter Wildeford
174
33
1
1
4

It's very difficult to underrate how much EA has changed over the past two years.

For context, two years ago was 2022 July 30. It was 17 days prior to the "What We Owe the Future" book launch. It was also about three months before the FTX fraud was discovered (but at this time it was massively underway in secret) and the ensuing bankruptcy. We were still at the height of the Big Money Big Longtermism era.

It was also about eight months before the FLI Pause Letter, which I think coincided with roughly when the US and UK governments took very serious and inten... (read more)

8
Arepo
Nit - I'm pretty sure you mean 'overrate'.

This looks pretty much right, as a description of how EA has responded tactically to important events and vibe shifts. Nevertheless it doesn't answer OP's questions, which I'll repeat:

  • What ideas that were considered wrong/low status have been championed here?
  • What has the movement acknowledged it was wrong about previously?
  • What new, effective organisations have been started?

Your reply is not about new ideas, or the movement acknowledging it was wrong (except about Bankman-Fried personally, which doesn't seem like what OP is asking about), or new organizatio... (read more)

5
Nathan Young
So you'd say the major shift is:
  • Towards AI policy work
  • Towards AI x bio policy work
Also this seems notable:

I agree with all this advice. I also want to emphasize that I think researchers ought to spend more time talking to people relevant to their work.

Once you've identified your target audience, spend a bunch of time talking to them at the beginning, middle, and end of the project. At the beginning, learn and take into account their constraints; in the middle, refine your ideas; and at the end, actually try to get your research into action.

I think it’s not crazy to spend half of your time on the research project talking.

That's fair - you're right to make this distinction where I failed, and I'm sorry. I think I have a good point, but I got heated in describing it and strayed further from charitableness than I should have. I regret that.

Thanks Linch. I appreciate the chance to step back here. So I want to apologize to @Austin and @Rachel Weinberg and @Saul Munn if I stressed them out with my comments. (Tagging means they'll see it, right?)

I want to be very clear that while I disagree with some of the choices made, I have absolutely no ill will towards them or any other Manifest organizer, I very much want Manifold and Manifest to succeed, and I very much respect their rights to have their conference the way they want. If I see any of them I will be very warm and friendly and there's reall... (read more)

BTW I want to add -- to all those who champion Hanania because they think free speech should mean that anyone should be able to be platformed without criticism or condemnation, Hanania is no ally to those principles:

Here's Hanania:

I don’t feel particularly oppressed by leftists. They give me a lot more free speech than I would give them if the tables were turned. If I owned Twitter, I wouldn’t let feminists, trans activists, or socialists post. Why should I? They’re wrong about everything and bad for society. Twitter [pre-Musk] is a company that is overw

... (read more)

Has anyone said he should be platformed without criticism? The point of contention seems to be that many people think he shouldn't have been a speaker at all and that everyone who interacts with him is tainted. That is not a subtle difference.

As HL Mencken famously said, “The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all.”

If principles only apply to the people that uphold them, then they're not principles: they're just another word for tribalism. Lovely conflict theory you've got there.

-41
Guy Raveh

Yeah, because there's such a geographically clustered dichotomy in views between the London set and the SF set, it seems pretty important to me to give it 24 hours.

Also, just a general caution: we should know that this poll will mainly be seen by only the most active and most engaged people, which may not be representative enough to generalize.

3
Jason
This is also a reason to exercise great caution in interpreting early results; I would expect this statement to be even more true as applied to the first 6-48ish hours than it is at a week.

I think the diurnal effect is real and is based on there being a lot of people in both the UK and the SF Bay Area who have opposite and geographically correlated views on this topic.

It's pretty interesting that Hanania just happens to have these kinds of accidents so frequently, right?

4
TheAthenians
I'm surprised. You just found out that one of the worst things you thought he said was wrong. Are you not going to update and think that maybe he's not the villain you originally thought? I know you're usually quite good at updating based on new evidence. It's hard to convey over text, but I genuinely recommend taking a step back from this and reflecting on your views. I've seen you realize in one other thread as well that what you'd heard about Hanania was wrong, so that's twice in one day. Consider that maybe the other things were also not as bad as you originally thought.
1
TheAthenians
Neither were accidents? It was just people misinterpreting what he was saying or interpreting things uncharitably.  People interpret people uncharitably all the time on the internet, especially if you ever mention race. 

I reached out to Hanania and this is what he said:

"“These people” as in criminals and those who are apologists for crimes. A coalition of bad people who together destroy cities. Yes, I know how it looks. The Penny arrest made me emotional, and so it was an unthinking tweet in the moment."

He also says it's quoted in the Blocked and Reported podcast episode, but it's behind a paywall and I can't for the life of me get Substack to accept my card, so I can't double-check. Would appreciate it if anybody figured out how to do that and could verify.

I think gene... (read more)

-15
TheAthenians

To be clear, I haven't cut ties with anyone other than Manifold (and Hanania). Manifold is a very voluntary use of my non-professional time and I found the community to be exhausting. I have a right to decline to participate there, just as much as you have a right to participate there. There's nothing controlling about this.

0
TheAthenians
If you had simply stopped using Manifold privately and it had nothing to do with who they associate with, that's one thing. But if you 1) publicly stop 2) because of who they associate with and 3) imply that you'll do that to others who associate with the "wrong" people (see quote below), then that's boycotting and trying to encourage other people to boycott and telling everybody who's watching that you'll boycott them too if they associate with the wrong people. Ergo, trying to control who people can hang out with.

The precise quote for others to assess is "Daniel Penny getting charged. These people are animals, whether they’re harassing people in subways or walking around in suits."

-5
TheAthenians