All of Peter Wildeford's Comments + Replies

The danger of nuclear war is greater than it has ever been. Why donating to and supporting Back from the Brink is an effective response to this threat

In "The chance of accidental nuclear war has been going down" I argue in the section "Are we at the 'highest risk of nuclear war since the Cuban Missile Crisis'?" that it is almost certainly not the case that nuclear risk is higher now than it was before.

That said, believing the risk is at an all-time high is not necessary for deciding to support work on reducing the risk from nukes.

Ira Helfand (7d):
Peter, as noted above, a number of key international experts feel the danger is the greatest it has been. But your point is well taken: even if it is not worse than during the Cold War it is unacceptably high and the Russian invasion of Ukraine has certainly increased the risk further.
Weaver (10d):
I wrote a bit about this here: Public Relations: Message & Rapport [https://forum.effectivealtruism.org/posts/gvYSNx9QLQQwvfNau/public-relations-message-and-rapport]. I'll probably have a bit more to say about this as I think about what specific messaging "how-to" would be useful for EAs.
EA in the mainstream media: if you're not at the table, you're on the menu

My understanding is that this has indeed been an unfortunate vacuum but as of a few months ago plans are now underway to fix this. So I can say that at least some "people who might be able to fund this or otherwise make it happen" are working on it, though I'm not part of these plans, I don't have much detail, and I won't claim that the plans will actually work (or that they won't work - I don't know).

I do think if anyone else decides to work on this it would be great if they would coordinate. I think it would be bad for us to have multiple non-coordinating media strategies targeted at "effective altruism" specifically.

Weaver (10d):
I'm working on an EA consultancy startup (still in the planning phases), but with my experience running a large-scale operation, I could fold this kind of talent into the consultancy portion and then farm it out. Part of my job is media training, and perhaps that is something EA lacks? I need to research this more, but I tend to get the feeling that EAs disregard politics when it's not convenient, or don't engage as much as they could. Messaging is politics at its most basic level, and I'll probably write a Forum post on this later; it's about building rapport and speaking clearly about what you are doing. Comment on this with your thoughts as to what you would want from a PR team or media training, so I can put together some red-teaming material about it.
How EA is perceived is crucial to its future trajectory

Thanks! That's a nice compliment! Luckily a lot of our research directions are fully funded and we do have a lot of funding overall, so we will keep putting out more work!

On Elitism in EA

No specific analysis as we don't collect this data for our applicants and we don't judge applicants based on it.

Looking now at this list of the top 25 global universities and at the backgrounds of our research and executive teams, we are at 13/41 (32%) for "people who attended a top-25 university for any of their education or post-doc work" (5/12, or 42%, for people with people-management responsibilities).

Maybe that's actually undercutting my point. But I don't think, say, trying to target our recruiting to elite universities or trying to give bonus points to pe... (read more)

How EA is perceived is crucial to its future trajectory

I wish I knew! My leading hypothesis is that (a) we still need to prove ourselves more with the money we have received and (b) I’ve done a bad job explaining how we would go about generating value if funded.

niplav (16d):
That is insanely surprising to me. I'm continuously impressed by the volume, and also the quality, of the research RP puts out (examples for me are Schukraft 2020 [https://www.rethinkpriorities.org/blog/2020/5/16/comparisons-of-capacity-for-welfare-and-moral-status-across-species] (and in general the whole moral patienthood series), but also Zhang 2022 [https://rethinkpriorities.org/publications/potentially-great-ways-forecasting-can-improve-the-longterm-future], Dillon 2021 [https://rethinkpriorities.org/publications/an-analysis-of-metaculus-predictions-of-future-ea-resources], Dillon 2021a [https://rethinkpriorities.org/publications/an-examination-of-metaculus-resolved-ai-predictions], and Dillon 2021b [https://rethinkpriorities.org/publications/how-does-forecast-quantity-impact-forecast-quality-on-metaculus]). To me it looks like you're one of the only orgs that actually puts out research.
How EA is perceived is crucial to its future trajectory

I think this is very underrated and is an important tail risk for the movement.

We should do more work to message test and focus group EA content/approaches.

Rethink Priorities has a lot of capability to do this well, but unfortunately we are very funding-constrained from expanding this work.

I'm very surprised to hear that this work is funding-constrained. Why do you currently think this has received less interest from funders?

As noted in our recent post [insert link]

Nitpick: Worth fixing this

On Elitism in EA

I think elitism is overrated, even for senior-level positions and co-founders.

Not coincidentally, I didn't go to an elite university.

I think screening on elitism is a lazy heuristic that sometimes works, but oftentimes we can do much better.

And after screening potentially 10,000 applicants for over 50 positions at Rethink Priorities -- including multiple senior level roles -- I don't notice that traditional markers of elite background (e.g., going to Harvard/Oxford) have any correlation with whether people do well on our test tasks, get hired, and u... (read more)

Linch (16d):
Have you run data analysis on this? I'm a bit surprised given the hiring rounds I've actually seen, though we do have a lot of people with non-elite backgrounds in senior positions. For example, on our website, we have 12 people (including temporary fellows) in Longtermism, and 5 of them come from what I'd consider "elite" universities (2 from Yale, and 1 each from Oxford, Cambridge, and UChicago), plus two edge cases. So for traditional markers of elite background to have no correlation with how people do on our test tasks, we'd need our pool of job applicants to be ~50% people from elite colleges. Which isn't impossible, given the demographics of EA and relevant self-selection in who might want to apply to EA research jobs, but it would be surprising to me. I do think RP is better for people with backgrounds that aren't traditionally prestigious than many other institutions are, possibly due to founder effects.
Why EA needs Operations Research: the science of decision making

Rethink Priorities has recently hired a full-time operations researcher. I'm excited to see what she comes up with.

[This comment is no longer endorsed by its author]

I think what RP means by that term is "a researcher focused on figuring out & improving various operations-related things", which is closer to industrial/organizational psychology + business administration + some other stuff than to the field of "operations research" ("a discipline that deals with the development and application of advanced analytical methods to improve decision-making [...] considered to be a subfield of mathematical sciences" (Wikipedia)). 

So I think this is just an unfortunate overlap of terminology, rather than us actually doi... (read more)

How would a language model become goal-directed?

My understanding is that no one expects current GPT systems or immediate functional derivatives (e.g., a GPT-5 trained only to predict the next word, but doing it much better) to become power-seeking, but that in the future we will likely mix language models with other models (e.g., reinforcement learning) that could be power-seeking.

Note I am using "power seeking" instead of "goal seeking" because goal seeking isn't an actual thing - systems have goals, they don't seek goals out.

1David Mears22d
Changed post to use 'goal-directed' instead of 'goal-seeking'.
Announcing Epoch: A research organization investigating the road to Transformative AI

Thanks! We’re very excited to be both an accelerant and a partner for Epoch’s work.

Red-teaming Holden Karnofsky's AI timelines

Thanks for putting this together! I think more scrutiny of these ideas is incredibly important, so I'm delighted to see you take this on.

So meta to red team a red team, but some things I want to comment on:

  • Your median estimates for the conservative and aggressive bio anchors reports in your table are accidentally flipped (2090 is the conservative median, not the aggressive one, and vice versa for 2040).

  • Looking literally at Cotra's sheet, the median year is 2053. Though in Cotra's report, you're right that she rounds this to 2050 and reports this as

... (read more)
Vasco Grilo (1mo):
Thanks for commenting, Peter! Corrected, thanks! I agree. (Note the distribution we fitted to "Bio anchors" (row 4 of the 1st table of this [https://forum.effectivealtruism.org/posts/gCw84aBgTcJh8euFm/red-teaming-holden-karnofsky-s-ai-timelines#Results_and_discussion] section) only relies on Cotra's "best guesses" for the probability of TAI by 2036 (18%) and 2100 (80%).) Thanks for the sources! Regarding the aggregation of forecasts, I found this [https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0256919] article quite interesting.
Save the date: EAGxVirtual 2022

I’m super excited for this! I love the in person conferences but I think virtual conferences also fill an important void for people who for a variety of reasons can’t easily travel to the US or UK.

Manuel_Allgaier (1mo):
*US, UK, or any other country with an EAGx conference. Just in case anyone reading this missed the news, there are now EAG(x) conferences planned on five different continents! :) --> www.eaglobal.org [https://www.eaglobal.org/]
Preventing a US-China war as a policy priority

If you have time, could you (or someone else) explain strategic ambiguity with regard to the US against China? I never really understood it because my understanding is that deterrence relies on clear communication and a lot of wars arise from miscalculations around how likely an adversary is to engage.

I've found this short article useful in explaining the case for it. Basically, it argues that a guarantee of defense could embolden Taiwan to pursue independence more aggressively, which could provoke China, while committing to not interfere could embolden China to invade. The US benefits from better relations with both countries if it walks a line between them, and it may be better for peace if Taiwan has to tread carefully while China expects a high chance of the US fighting off an invasion of Taiwan.

Notes on "A World Without Email", plus my practical implementation

Sounds exciting!

I’m curious how that will work for people who aren’t self-employed teams of one?

Hauke Hillebrandt (2mo):
You could submit it as a question to his podcast!
Critiques of EA that I want to read

Right now the thing we are most interested in is finding a strong candidate to work on the Insect Welfare Project full-time: https://careers.rethinkpriorities.org/en/jobs/50511

Donations would also be helpful. This kind of stuff can be harder to find financial support for than other things in EA. https://rethinkpriorities.org/donate

Emphasize Vegetarian Retention
  • More time periods
  • Better question wording
  • 2.8x bigger sample size
Emphasize Vegetarian Retention

However, polls suggest that the percentage of the population that’s vegetarian has stayed basically flat since 1999

I think there are three nitpicks I'd make here:

1.) The sample size of the poll you cite (margin of error of +/- 4%) is typically not large enough to detect subtle shifts in the percentage of vegetarians, especially since the initial proportion is so small: the veg rate could approximately double and still have a ~50% chance of not being detected by the poll (see the sketch after these points for a rough illustration).

2.) As you may know, asking people whether they are vegetarian/vegan in a p... (read more)
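As a rough, hedged illustration of the power issue in point 1, here is a minimal Monte Carlo sketch of how one might check such a claim. The per-poll sample size (n = 600, roughly what a +/-4% margin of error implies), the 3% base rate, and the doubled 6% rate are illustrative assumptions, not figures from the poll itself:

```python
import numpy as np

# Minimal sketch: how often would two independent polls detect a doubling
# of a small vegetarian rate? All parameters are illustrative assumptions.
rng = np.random.default_rng(0)

n = 600                    # per-poll sample size (~ what a +/-4% margin of error implies)
p_old, p_new = 0.03, 0.06  # hypothetical base rate and doubled rate
trials = 100_000

old = rng.binomial(n, p_old, trials) / n  # simulated earlier poll
new = rng.binomial(n, p_new, trials) / n  # simulated later poll

# Two-proportion z-test at the 5% level for each simulated pair of polls
se = np.sqrt(old * (1 - old) / n + new * (1 - new) / n)
detected = np.abs(new - old) / se > 1.96

print(f"Share of simulations detecting the doubling: {detected.mean():.0%}")
```

How close the miss probability comes to ~50% depends heavily on these assumed parameters; a smaller base rate, or a smaller effective subsample (common when a poll reports subgroup breakdowns), pushes it higher.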

I think point 3 deserves its own post and to be shared more widely, since it would be a big update for a lot of people.

Jamie_Harris (2mo):
This is cool. What makes these polls "a better collection" in your view?
Emphasize Vegetarian Retention

the "*" is meant to be a glob/wildcard rather than a censor

The dangers of high salaries within EA organisations

Speaking for Rethink Priorities, I'd just like to add that benchmarking to market rates is just one part of how we set compensation, and benchmarking to academia is just one part of how we might benchmark to market rates.

In general, academic salaries are notoriously low, and I think this is harmful for building long-term relationships with talent: we want people to be able to afford the life we want them to be able to live. We also want to be able to attract the top tier of research assistants, and a higher salary helps with that.

James Ozden (2mo):
I totally agree - like I said above, I don't think paying above market rate is necessarily erroneous, but I was just responding to Khorton's question of how many EA orgs actually pay above market rate. And as you point out, attracting top talent to tackle important research questions is very important, and I definitely agree that this is the main perk of paying higher salaries. In the case of research, I also agree! Academic salaries are far too low, and benchmarking to academia isn't even necessarily the best reference class (as one could potentially do research in the private sector and get paid much more).
Announcing the launch of Open Phil's new website

I found a few bugs:

Aaron Gertler (2mo):
Thanks, all resolved!
What does ‘one dollar of value’ mean?

I’ve wondered this a lot myself and find this lack of clarity to always be an issue. I think something in the realm of 9 makes the most sense, and I personally define “$X of value” as “as good as shifting $X from a morally neutral use to being donated to GiveDirectly”. It helps that I roughly expect GiveDirectly to have linear returns, even in the range of billions spent. But I do try to make this explicit in a footnote or something when I discuss value.

Another good idea in the realm of 9 is how GiveDirectly defines their ROI:

We measure philan

... (read more)
The 2015 Survey of Effective Altruists: Results and Analysis

The original results are hosted on a site that no longer works, so the results have been moved here: https://rethinkpriorities.org/s/EASurvey2015.pdf

The 2014 Survey of Effective Altruists: Results and Analysis

The previous link to the survey results died, so I edited the post to update the link.

Introducing EAecon: Community-Building Project

I hope someday you organize a convention and call it EAEconCon

Brian Jabarian (2mo):
Thanks, good point. On a different note, I invited David from your team to come and present on the type of work economists could do at your EA org; let me know by DM or email if you or another member would like to join as well.
benleo (2mo):
Now I really want to call it EaE(con)^2
Should we be hiring more “unqualified” people?

I can't think of any problem area where I'd be excited to actively hire a ton of people without vetting or supervision, but I agree that just because I can't think of one doesn't mean that one doesn't exist.

Also, as you and others mention, giving out prizes or bounties could work well if you have an area where you could easily evaluate the quality of a piece of work.

Should we be hiring more “unqualified” people?

I think the core issue with your idea is that the problems we are interested in are all problems where progress is very difficult, where it’s furthermore very difficult to evaluate the quality of someone’s work, and where it’s very hard for people to make progress without lots of guidance and feedback - so you cannot just throw a ton of people at the problem and expect it to work well.

I like the idea of giving more people opportunities though, and I like that Rethink Priorities plays a role in this by trying to hire a lot of people to do research. But we find it requires a lot of mentorship and management for people to do well.

Yitz (3mo):
Are you sure that all the problems we’re facing are necessarily difficult in the sort of way a non-expert would be bad at? I don’t have the time right now to search through past bounties, but I remember a number of them involved fairly simple testable theories which would simply take a lot of time and effort, but not expertise.
Intro and practical ideas around Salesforce within EA

Can you elaborate more on what benefits an organization might get from Salesforce?

Eli Kaufman (3mo):
Here are a few typical use cases:
  • A process whereby a few team members have to collaborate (for example, reviewing forms submitted via a webpage, with one person doing the initial screening, selecting a subset of the forms for a second person to review, and sending a response to the form submitter). While this can be done using a spreadsheet and email, it does not scale well and has a lot of friction without a proper system. Building an end-to-end solution with notifications and automation allows scaling it up massively without increasing overheads.
  • Creating data-driven reports allows making better decisions on the basis of trends rather than anecdotal evidence. Example: a customer support team sees an increase in queries about a particular topic shortly after a software release. Flagging it early speeds up getting it resolved and drives customer satisfaction.
  • Storing better data about volunteers/donors helps tailor more relevant marketing messages for them, increasing engagement. Can I get a list of donors who recently attended an event and are interested in animal welfare? No problem!
In general, it's about storing the relevant organization data in a way that contributes to getting more done, more efficiently and more transparently.
What are some high-EV but failed EA projects?

I think three key differences:

  • By 2018, we had more of a track record before starting.

  • For the 2018 attempt, we self-funded for six months before seeking funding to build an even bigger track record, rather than trying to get funding right at the beginning.

  • EA funding was notably more plentiful in 2018 than in 2016. (Though still notably less plentiful than in 2022.)

What are some high-EV but failed EA projects?

Few people know that we tried to start something pretty similar to Rethink Priorities in 2016 (our actual founding was in 2018). We (Marcus and I, the RP co-founders, plus some others) did some initial work but failed to get sustained funding and traction, so we gave up for >1 year before trying again. Given that RP-2018 seems to have turned out to be quite successful, I think RP-2016 could be an example of a failed project?

IanDavidMoss (3mo):
That's fascinating! What do you think were the key differences between the 2016 approach and the 2018 approach, and how much was just luck of timing?
Bad Omens in Current Community Building

I think it will be really important for EAs to engage in more empirical work to understand how people think about EA. Of course you don't want people to feel like they're being fed the results of a script tested by a focus group (that's the whole point of this post), but you do want to actually know in reliable ways how bad some of these problems are, how things are resonating, and how to do better in a genuine and authentic way. Empirical results should be a big part of this (though not all of it), but right now they aren't, and this seems bad. Instead, w... (read more)

nananana.nananana.heyhey.anon (3mo):
I agree with you, and I think this somewhat supports the OP's concern. Are most uni groups capable of producing or critiquing empirical work about their group, or about EA, or about their cause areas of choice? Are they incentivized to do so at all? Sometimes yes, but mostly no.
Some clarifications on the Future Fund's approach to grantmaking

Do you think it was a mistake to put "FTX" so prominently in the "FTX Future Fund" name? My thinking is that you likely want the goodness of EA and philanthropy to make people feel more positively about FTX, which seems fine to me, but in doing so you also run the risk that if FTX has any big scandal or other issue, it could cause blowback on EA, whether merited or not.

I understand the Future Fund has tried to distance itself from effective altruism somewhat, though I'm skeptical this has worked in practice.

To be clear, I do like FTX personally, am very grateful for what the FTX Future Fund does, and could see reasons why putting FTX in the name is also a positive.

Potatoes: A Critical Review

Good example of red teaming a paper!

JasperGeh (3mo):
I agree! I added the Red teaming wiki tag but since that tag is a mix of meta-discussion and examples, it might also be nice to have a separate tag for red teaming examples.
EA Tours of Service

I'm interested in this idea. I also really like and endorse the idea of making very clear, actionable, mostly objective goals for employment even if that employment is open-ended and not tied to a specific length.

'Beneficentrism', by Richard Yetter Chappell

Thanks! Both of those approaches sound justifiable to me.

Some clarifications on the Future Fund's approach to grantmaking

Note that it may be hard to give criticism (even if anonymous) about FTX's grantmaking because a lot of FTX's grantmaking is (currently) not disclosed. This is definitely understandable and likely avoids certain important downsides, but it also does amplify other downsides (e.g., public misunderstanding of FTX's goals and outputs) - I'm not sure how to navigate that trade-off, but it is important to acknowledge that it exists!

Completely agree! Although I imagine that the situation will change soon due to 1) the last funding decisions being finalized, 2) funded projects coming out of stealth mode, 3) more rejected applicants posting their applications publicly (when there are few downsides to doing so), and 4) the Future Fund publishing a progress report in the coming months.

So I expect the non-disclosure issue to be significantly reduced in the next few months.

Totally agreed!

although, to be frank, it does make me a bit confused where some of the consternation about specific, unspecified grants has come from...

'Beneficentrism', by Richard Yetter Chappell

I'm a big fan of your philosophical writing and your attempts to philosophically defend and refine utilitarianism and effective altruism. I also really like your more general idea here of pushing people to think less about avoiding wrongdoing and towards thinking more about rightdoing.

I think one thing I'd wonder is what it means to make something a "central life project" and what kind of demandingness this implies. Is GWWC membership sufficient? Is 30min of volunteering a week sufficient? This is the hard part, I think, about satisficing views (even though ... (read more)

Richard Y Chappell (3mo):
Thanks Peter! Right, I agree that beneficence should be impartial. What I had in mind was that one can combine a moderate degree of impartial beneficence with significant partiality in other areas of one's life (e.g. parenting). Thanks for flagging that this didn't come through clearly enough. re: "central life project", this is deliberately vague, and probably best understood in scalar terms: the more, the better. My initial aim here is just to get more people on board with adopting it as a project that they take seriously. I don't think I can give a precise specification of where to draw the line. But also, I don't really want to be drawing attention to the baseline minimum, because that shouldn't be the goal.
The case for becoming a black-box investigator of language models

I’ve started doing a bunch of this and posting results to my Twitter.

EA is more than longtermism

I think it's a function of global health funding already being allocated to much more scalable opportunities than exist in longtermism, whereas longtermists have a much smaller pool of funding opportunities to compete for. Individual EAs are the main source of longtermist opportunities, and thus we get showered in longtermist money but not other kinds of money.

Animal welfare is a bit more of a mix of the two.

Why CEA Online doesn’t outsource more work to non-EA freelancers

What's an example of something that is a core competency yet operationally unimportant (the top-left quadrant)? I'm starting to think the entire operational-importance axis isn't needed.

Ben_West (3mo):
An example is something like "having a CEO who is considered a prestigious thought leader in their field." The day-to-day operations of the business aren't really impacted by this, but it's also not something you can really outsource. (That being said, maybe it would have been a lot simpler and almost as correct to just leave this axis off, like you suggest.)
EA is more than longtermism

On one hand, it's clear that global poverty does get the most overall EA funding right now, but it's also clear that it's easier for me to personally get my 20th-best longtermism idea funded than my 3rd-best animal idea or 3rd-best global poverty idea, and this asymmetry seems important.

frances_lorenz (3mo):
Do you think that's a factor of: how many places you could apply for longtermist vs. other cause area funding? How high the bar is for longtermist ideas vs. others? Something else?
Why CEA Online doesn’t outsource more work to non-EA freelancers

I get that outsourcing doesn't work for core competencies but why does outsourcing not work for operationally unimportant activities? Basically I'm confused by the bottom-left quadrant.

Lorenzo (3mo):
See [2] [https://forum.effectivealtruism.org/posts/kz3Czn5ndFxaEofSx/why-doesn-t-cea-outsource-more-work-to-non-ea-freelancers#fnlwnjga63h8n]:
> Also I guess technically you could outsource work which is neither a core competency nor operationally important, but you should probably just not be doing that work at all.