All of Peter Wildeford's Comments + Replies

Oh ok, thanks! Sorry for my confusion.

I’m confused - elsewhere you identify yourself as the author of this post but here you are commenting as if you have independently reviewed it?

[This comment is no longer endorsed by its author]
7
Owen Cotton-Barratt
10d
Habryka identifies himself as the author of a different post which is linked to and being discussed in a different comment thread.

To be clear, I don't think people have turned against earning to give as a concept, as in they think it's no longer good or something.

But I do think people have turned against "donating $5K a year to GiveWell[1] is sufficient to feel like I'm an EA in good standing, that I'm impactful, and that I can feel good about myself and what I'm doing for the world" as a concept. And this seems pretty sad to me.

Moreover, there's been a lot of pressure over the more recent ~five years of EA to push people onto concrete "direct good" career paths, especially at th... (read more)

The TV show Loot, in Season 2 Episode 1, introduces an SBF-type character named Noah Hope DeVore, a billionaire wunderkind who invents "analytic altruism", which uses an algorithm to determine "the most statistically optimal ways" of saving lives and naturally comes up with malaria nets. However, Noah is later arrested by the FBI for wire fraud and various other financial offenses.

3
Jason
12d
I wonder if anyone else will be getting a thinly veiled counterpart -- given that the lead character of the show seems somewhat based on MacKenzie Scott, this seems to maybe be a recurring device for the show.

It's been lost a bit in all the noise, but I think people should still be very excited and satisfied about "earning to give" and donating.

Anyone who can donate $5000 via GiveWell can save a life.

Possibly you can do even better than that per dollar if you're okay accepting some premises around nonhuman animal welfare / sentience or risks from emerging technologies.

I think this is all very cool.

Moreover, while a $5000 donation is a big commitment, it is also achievable by a rather wide array of people. Many people are wealthy enough to do this donation, but ... (read more)

6
Dylan Richardson
11d
This seems like a common group misperception to me, that (other) EAs have turned against earning to give. Take this comment for instance - zero disagrees.  But maybe there's a vague unease as opposed to explicit beliefs? Like student clubs just not broaching the subject as much as they had before? Self-censoring? If so, it's not obviously represented in any forum activity I've seen, neither is it obvious on the EA survey, which finds "further de-emphasize ETG" in only 5% of responses. Maybe that's enough to be worried anyways?

Bold move launching this apparently quite serious new process on April Fools Day.

4
Animal Charity Evaluators
23d
Rest assured, our new charity application process and commitment to serious, impact-driven advocacy are no joke. We're here to keep the conversation on animal advocacy going strong, even on April Fool's Day!
0
Milan_Griffes
24d
ACE isn't fucking around. 
9
Sean_o_h
24d
April Fools is a strange game. The only high-utility move is not to play.

Rather than give a price tag for each (as there's many), maybe you or other donors could flag the ones you're most interested in and I could let you know? (Also this may be helpful for our internal prioritization even if there weren't any money on the line.)

Hey, thanks for the feedback. I do think reasonable people can disagree about this policy and it entails an important trade-off.

To understand a bit about the other side of the trade-off, I would ask that you consider that we at RP are roughly an order of magnitude bigger than Lightcone Infrastructure and we need policies to be able to work well with ~70 people that I agree make no sense with ~7 (relatively independent, relatively senior) people.

7
Ben_West
4mo
Peter, I'm not sure if it is worth your time to share, but I wonder if there are some additional policies RP has which are obvious to you but are not obvious to outsiders, and this is what is causing outsiders to be surprised. E.g. perhaps you have a formal policy about how funders can get feedback from your employees without going through RP leadership which replaces the informal conversations that funders have with employees at other organizations. And this policy would alleviate some of the concerns people like Habryka have; we just aren't aware of it because we aren't familiar enough with RP.

Could you say more about the other side of the tradeoff? As in, what's the affirmative case for having this policy? So far in this thread the main reason has been "we don't want people to get the impression that X statement by a junior researcher represents RP's views". I see a very simple alternative as "if individuals make statements that don't represent RP's views they should always make that clear up front". So is there more reason to have this policy?

9
Habryka
4mo
Hmm, my sense is that the tradeoffs here mostly go in the opposite direction. It seems like the costs scale with the number of people, and it's clear that very large organizations (200+) don't really have a chance of maintaining such a policy anyway, as the cost of enforcement grows with the number of members (as do the costs to information flow), while the benefits don't obviously scale in the same way. It seems to me more reasonable for smaller organizations to have a policy like this, and worse the larger the organization is (in terms of how unhappy one should be with the organization's leadership for instituting such a policy).

Fair, but the flip side of that is that it's considerably less likely that a sophisticated donor would somehow misunderstand a junior researcher's clearly-expressed-as-personal views as expressing the institutional view of a 70-person org. 

Is it possible to elaborate at all on why they'd be particularly good fits for individual donors? I imagine in many cases the answer is a bit sensitive, e.g., OP may prefer to fund an org more, but the org itself may prefer to be funded by individual donors rather than by OP. And I certainly can use my own private information to make some of those guesses. But reading this list, it's actually pretty hard to tell what is going on.

Thank the heavens my prayers have been answered! My birthday wish came true! I’ve been waiting so long for this moment and it’s finally here! This is the true meaning of the holiday season!

  1. How will you prioritise amongst the projects listed here with unrestricted funds from small donors? Most of these projects I find very exciting, but some more than others. Do you have a rough priority ordering or a sense of what you would do in different scenarios, like if you ended up with unrestricted funding of $0.25m/$0.5m/$1m/$2m/$4m etc how you would split it between the projects you list?

I think views on this will vary somewhat within RP leadership. Here I am also reporting my own somewhat quick independent impressions which could update upon f... (read more)

  1. Can you assure me that Rethink's researchers are independent?

Yes. Rethink Priorities is devoted to editorial independence. None of our contracts with our clients include editorial veto power (except that we obviously cannot publish confidential information) and we wouldn't accept these sorts of contracts.

I think our reliance on individual large clients is important, but overstated. Our single largest client made up ~47% of our income in 2023 and we're on track for this to land somewhere between 40% and 60% in 2024. This means in the unlikely event tha... (read more)

saulius
4mo

When I was asked to resign from RP, one of the reasons given was that I wrote the sentence “I don't think that EAs should fund many WAW researchers since I don't think that WAW is a very promising cause area” in an email to OpenPhil, after OpenPhil asked for my opinion on a WAW (Wild Animal Welfare) grant. I was told that this is not okay because OpenPhil is one of the main funders of RP’s WAW work. That did not make me feel very independent. Though perhaps that was the only instance in the four years I worked at RP.

Because of this instance, I was also con... (read more)

Rethink's cost per published research report that is not majority funded by "institutional support"

Our AI work, XST work, and our GHD work were entirely funded by institutions. Our animal welfare work was mostly funded by institutions. However, our survey work and WIT work were >90% covered by individual donors, so let's zoom in on that.

The survey department and WIT department produced 31 outputs in 2023 against a spending of $2,055,048.78 across both departments including all relevant management and operations. This is $66,291.90 per report.

Notably ... (read more)

2
Jason
4mo
Are there slightly more precise estimated costs for some of the specific items listed in your post? As an example smaller individual donor, there is a real big difference between $1K and $10K -- the former would be pretty doable on my own if there were a report I had particular interest in, the latter is a nonstarter, and values in the middle could be achievable if 1-2 other people wanted to go in with me. [2024 is an off year for my donations due to tax strategy, so don't invest any time into this on my account specifically.]

Rethink's cost per published research report (again, total org cost, not the amount spent on a specific project, divided by the number of published reports, where a research-heavy EA Forum post of typical Rethink quality would count as a published report).

Collectively, RP's AI work, global health + development work, animal work, worldview investigations work, and survey work led to the generation of 90 reports in 2023.

The budget for these six departments was $7,838,001.20, including all relevant management and operations.

This results in $87,088.90 per report... (read more)
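
For readers following the arithmetic in this comment and the previous one, here is a minimal sketch that just reproduces the per-report division using the figures quoted above; the numbers come from this thread, not from any internal RP accounting.

```python
# Reproduces the per-report cost arithmetic quoted in these comments.
# Figures are the 2023 budgets and output counts stated above; treat this
# as an illustrative check only, not internal RP data.

def cost_per_report(total_spend_usd: float, num_reports: int) -> float:
    """Total departmental spend divided by number of published reports."""
    return total_spend_usd / num_reports

# Survey + WIT departments (the work >90% covered by individual donors)
print(round(cost_per_report(2_055_048.78, 31), 2))  # ~66291.9

# All research departments combined
print(round(cost_per_report(7_838_001.20, 90), 2))  # ~87088.9
```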

Hi Sam,

Thanks for the detailed engagement! I am going to respond to each with a separate reply.

Rethink's cost per researcher year on average, (i.e. the total org cost divided by total researcher years, not salary).

I think the best way to look at this is marginal cost. A new researcher hire costs us ~$87K USD in salary (this is a median, there is of course variation by title level here) and ~$28K in other costs (e.g., taxes, employment fees, benefits, equipment, employee travel). We then need to spend ~$31K in marginal spending on operations and ~$28K i... (read more)

I'm very excited to see this. To be honest when I first heard of the "evaluate the evaluators" project I was very skeptical and thought it would just be a rubber stamp on the EA ecosystem in a way that would play well for social media and attract donations.

I definitely was wrong!

It's good to see that there actually was substantive meta-evaluation here and that the GWWC meta-evaluators did not pull punches!

1
Sjir Hoeijmakers
5mo
Thank you, Peter, we're obviously very happy to hear this!

I agree with this and I'd also be curious to hear more details about where GWWC's current funding does come from, to help evaluate the extent to which GWWC is impartial (though to be clear I do think GWWC is impartial).

4
Luke Freeman
5mo
Our largest funder has been OP, and we received some (now returned) money from Future Fund. Other than that it has mostly been individuals and small foundations (e.g., family foundations).

I'm really happy to see the “Add 10% to support our work” button and I check this every time it comes up!

1
Luke Freeman
5mo
Thanks!! Great to hear!

FWIW I would've expected the Content Manager manages the Content Specialist, not the other way around.

What's the difference between a Content Specialist and a Content Manager?

3
tobytrem
5mo
The difference in role titles reflects the fact that Lizka is the team lead (of our team of two). From what I understand, the titles needn't make much difference in practice. PS- I'm presuming there is a disagree react on my above comment because Lizka can in fact do everything at once. Fair enough. 
1
Constance Li
5mo
Yes I am also curious about the difference. I’ve been using them interchangeably.

Just to be clear, Lizka isn't being replaced and you're a new, additional content manager? Or does Lizka have a new role now?

4
tobytrem
5mo
Yep, Lizka is still Content Specialist, and I'm additive. There were a lot of great content related ideas being left on the table because Lizka can't do everything at once. So once I'm up to speed we should be able to get even more projects done. 

Sorry I missed that. I think that's a sensible way to handle ballot exhaustion.

I assume in the renormalization step there will still be exhausted ballots (e.g., they voted for three orgs and none of the orgs made it). I assume then the plan would be that those ballots just won't continue to matter in the election? I know this sounds bad the way I'm writing it, but this is how ranked choice voting works and seems totally fine + normal to me, just wanted to make sure you've thought about it because I didn't see it mentioned.

I also assume that the way the renormalization step works is that if everyone gets 10pts and someone voted A - 6p... (read more)

3
harfe
5mo
No, it sounds like they will count as equally weighted votes. From the article: "If all of a voter’s points were assigned to candidates which are now eliminated, we’ll pretend that the voter spread their points out equally across the remaining candidates." I think both would be OK choices. Basically, yes, according to my understanding, except that there is no fixed point limit; every voter's points get normalized so that voters have equal weights. I would guess yes, based on the description.
5
Lizka
5mo
Most of the info on exhausted ballots was in footnotes, unfortunately:  Renormalization: you've got it right!  Under-voting: yeah, we're allowing it.  [Edit: looks like harfe and I answered at basically the same time. :) ]
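
For concreteness, here is a minimal sketch of the renormalization step as described in this thread. It assumes a fixed per-voter point total purely for illustration (harfe notes the real system simply normalizes so that every voter has equal weight), and the function name and structure are hypothetical rather than the Forum team's actual code.

```python
# Sketch of per-ballot renormalization after candidate elimination, following
# the description in this thread. Illustrative only; names are hypothetical.

def renormalize(ballot: dict[str, float], remaining: set[str],
                total_points: float = 10.0) -> dict[str, float]:
    """Redistribute a voter's points across the candidates still in the running."""
    live = {c: pts for c, pts in ballot.items() if c in remaining}
    if not live:
        # Fully exhausted ballot: per the quoted article, pretend the voter
        # spread their points equally across the remaining candidates.
        return {c: total_points / len(remaining) for c in remaining}
    scale = total_points / sum(live.values())
    return {c: pts * scale for c, pts in live.items()}

# Example: a voter gave A=6 and B=4; A is eliminated, so B now carries all 10 points.
print(renormalize({"A": 6.0, "B": 4.0}, remaining={"B", "C"}))  # {'B': 10.0}
```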

I've definitely had roles I hired for this year where the top candidate was better than the second-place candidate by a large margin.

2
Elizabeth
5mo
How senior was this position? or, can you say more about how this varies across different roles and experience levels? Based on some other responses to this question I think replaceability may be a major crux, so the more details the better.

I think it's worth considering. My guess is that doing so would not necessarily be very time consuming. It could also be interesting for them to pool donations to limit the number of people who need to do it, form a giving circle, or donate to a fund (e.g., EA Funds).

Why not both? I assume OP is fixing their capacity issues as fast as they can, but there still will be capacity issues remaining. IMO Neel still would add something here that is worth his marginal time, especially given Neel's significant involvement, expertise, and networks.

1
Ryan Greenblatt
5mo
The underlying claim is that many people with technical expertise should do part time grant making? This seems possible to me, but a bit unlikely.

OP doesn't have the capacity to evaluate everything, so there are things they don't fund that are still quite good.

Also OP seems to prefer to evaluate things that have a track record, so taking bets on people to be able to get more of a track record to then apply to OP would be pretty helpful.

I also think orgs generally should have donor diversity and more independence, so giving more funding to the orgs that OP funds is sometimes good.

2
Neel Nanda
5mo
I'd be curious to hear more about this - naively, if I'm funding an org, and then OpenPhil stops funding that org, that's a fairly strong signal to me that I should also stop funding it, knowing nothing more. (since it implies OpenPhil put in enough effort to evaluate the org, and decided to deviate from the path of least resistance) Agreed re funding things without a track record, that seems clearly good for small donors to do, eg funding people to do independent research or start a small new research group, if you believe they're promising

OP doesn't have the capacity to evaluate everything, so there are things they don't fund that are still quite good.

Maybe there should be some way for OP to publicize what they don't evaluate, so others can avoid the adverse selection.

4
Ryan Greenblatt
5mo
IMO, these both seem like reasons for more people to work at OP on technical grant making more than reasons for Neel to work part time on grant making with his money.
2
Lorenzo Buonanno
6mo
It works for me. I do have to be precise about tapping the heart in the top right though, despite the whole box looking like a target.

I’m guessing stopping scaling by US POTUS executive order is not even legally possible though? So I don’t think we’d have to worry about that.

9
dan.pandori
6mo
Legal or constitutional infeasibility does not always prevent executive orders from being applied (or followed). I feel like the US president declaring a state of emergency related to AI catastrophic risk (and then forcing large AI companies to stop training large models) sounds at least as constitutionally viable as the attempted executive order for student loan forgiveness. I agree that this seems fairly unlikely to happen in practice though.

Do you have examples of ideas that would fall into each category? I think that would help me better understand your idea.

1
Mo Putera
4mo
OP commented on this here.

Why is Boston favored over DC? I'd expect DC would have more EAs in general than Boston, plus would open up valuable policy-focused angles of engagement.

8
Eli_Nathan
6mo
The main issue is that some DC-based stakeholders have expressed concern that an EAG DC would draw unwanted attention to their work, partly because EA has negative connotations in certain policy/politics crowds. We're trying to evaluate how serious these concerns (still) are before making a decision for 2024.
4
Will Bradshaw
6mo
I'm also curious about this. Boston is convenient to me as a Cambridge resident, but I'd guess that holding an event in DC would be more valuable.

46% reported being vegan or vegetarian in the 2019 EA Survey.

Seems like a good fit for Rethink Priorities, but we're very funding-constrained.

Except that RSPs don't concern themselves with long-term economic, social, and political implications. The ethos of AGI labs is to assume, for the most part, that these things will sort themselves out, and that they only need to check technical and momentary implications, i.e., do "evals".

The public should push for "long-term evals", or even mandatory innovation in political and economic systems coupled with the progress in AI models.

The current form of capitalism is simply unprepared for autonomous agents, no amount of RLHF and "evals" will fix this.

We're working on animals vs xrisk next!

concludes that human extinction would be a big welfare improvement

I don't think he concludes that either, nor do I know if he agrees with that. Maybe he implies that? Maybe he concludes that if our current trajectory is maintained / locked-in then human extinction would be a big welfare improvement? Though Kyle is also clear to emphasize the uncertainty and tentativeness of his analysis.

8
Larks
7mo
I think if you want to emphasize uncertainty and tentativeness it is a good idea to include something like error bars, and to highlight that one of the key assumptions involves fixing a parameter (the weight on hedonism) at the maximally unfavourable value (100%).
-2
Jeff Kaufman
7mo
Edited again; see above.

Two nitpicks:

Here's a selection from their bottom-line point estimates for how many animals of a given species are morally equivalent to one human:

The chart is actually estimates for how many animal life years of a given species are morally equivalent to one human life year. Though you do get the comparison correct in the paragraph after the table.

~

The post weighs the increasing welfare of humanity over time against the increasing suffering of livestock, and concludes that human extinction would be a very good thing.

You'd have to ask Kyle Fish but ... (read more)

4
Jeff Kaufman
7mo
Thanks; edited to change those to "here's a selection from their bottom-line point estimates comparing animals to humans" and [EDIT: see above].

I want to add that personally, before this RP "capacity for welfare" project, I started with an intuition that a human year was worth about 100-1000 times more than a chicken year (mean ~300x), conditional on chickens being sentient. But after reading the RP "capacity for welfare" reports thoroughly, I have now switched roughly to the RP moral weights, valuing a human year at about 3x a chicken year conditional on chickens being sentient (which I think is highly likely but handled in a different calculation). This report conclusion did come as a large surprise ... (read more)

6
Linch
7mo
I think my median is mostly still at priors (which started off not very different from yours). Though I guess I have more uncertainty now, so if forced to pick, the average is closer to the project's results than I previously thought, simply because of how averages work.
9
Jeff Kaufman
7mo
Mmm, good point, I'm not trying to say that. I would predict that most people looking into the question deeply would shift in the direction of weighting animals more heavily than they did at the outset. But it also sounds like you did start with, for a human, unusually pro-animal views?

I am happy to see that Nick and Will have resigned from the EV Board. I still respect them as individuals but I think this was a really good call for the EV Board, given their conflicts of interests arising from the FTX situation. I am excited to see what happens next with the Board as well as governance for EV as a whole. Thanks to all those who have worked hard on this.

Caro
7mo

I agree that these decisions are going in the right direction. I think their resignations should have come earlier, given the severity of the conflicts of interest with FTX and the problems with their judgment over the situation.

(I still appreciate Nick and Will as individuals and immensely value their contributions to the field.)

Will - of course I have some lingering reservations but I do want to acknowledge how much you've changed and improved my life.

You definitely changed my life by co-creating Centre for Effective Altruism, which played a large role in organizations like Giving What We Can and 80,000 Hours, which is what drew me into EA. I was also very inspired by "Doing Good Better".

To get more personal -- you also changed my life when you told me in 2013 pretty frankly that my original plan to pursue a Political Science PhD wasn't very impactful and that I should consider 8... (read more)

I agree - I think the financial uncertainty created by having to renew funding each year is very significantly costly and stressful and makes it hard to commit to longer-term plans.

Hi Elizabeth,

I represent Rethink Priorities but the incubator Charlie is referencing was/is run by Charity Entrepreneurship, which is a different and fully separate org. So you would have to ask them.

If there are any of your questions you'd want me to answer with reference to Rethink Priorities, let me know!

2
Elizabeth
9mo
Oops, should have read more carefully, sorry about that.

Hi Charlie,

Peter Wildeford from Rethink Priorities here. I think about this sort of thing a lot. I'm disappointed in your cheating but appreciate your honesty and feedback.

We've considered using a time verification system many times and even tried it once. But it was a pretty stressful experience for applicants, since the timer then required the entire task to be done in one sitting. The system we used also introduced some logistical difficulty on our end.

We'd like to try to make things as easy for our applicants as possible since it's already such a ... (read more)

3
Elizabeth
9mo
I'd be very interested in information about the second claim: that the incubator round already had 2k applicants and thus the time from later applicants was a waste. Did you end up accepting late applicants? Did they replace earlier applicants who would otherwise have been accepted, or increase the total class size? Do you have a guess for the effects of the new participants? Or more generally: how do you think about the time unaccepted applicants spend on applications? My guess is that evaluating applications is expensive so you wouldn't invite more if it didn't lead to a much higher quality class, but I'm curious for specifics. CE has mentioned before that the gap between top and median participant is huge, which I imagine plays into the math.

Hi Peter thanks for the response - I am/was disappointed in myself also. 

I assumed RP had thought about this, and I hear what you are saying about the trade-off. I don't have kids or anything like that and I can't really relate to struggling to sit down for a few hours straight, but I totally believe this is an issue for some applicants and I respect that.

What I am more familiar with is doing school during COVID. My experience left me with a strong impression that even relatively high-integrity people will cheat in this version of the prisoner's ... (read more)

Yes. I think animal welfare remains incredibly understudied and thus it is easier to have a novel insight, but also there is less literature to draw from and you can end up more fundamentally clueless. Whereas in global health and development work there is much more research to draw from, which makes it nicer to be able to do literature reviews to turn existing studies and evidence into grant recommendations, but also means that a lot of the low-hanging fruit has been done already.

Similarly, there is a lot more money available to chase top global health in... (read more)

I think it varies a lot by cause area but I think you would be unsurprised to hear me recommend more marginal thinking/research. I think we’re still pretty far from understanding how to best allocate a doing/action portfolio and there’d still be sizable returns from thinking more.

  • I like pop music, like Ariana Grande and Olivia Rodrigo, though Taylor Swift is the Greatest of All Time. I went to the Eras Tour and loved it.

  • I have strong opinions about the multiple types of pizza.

  • I'm nowhere near as good at coming up with takes and opinions off-the-cuff in verbal conversations as I am in writing. I'm 10x smarter when I have access to the internet.
