All of Peter Wildeford's Comments + Replies

Introducing High Impact Professionals

I think it would be really great if we could make "earning to give" cool again.

New Data Visualisations of the EA Forum

This is really cool.

One self-aggrandizing nitpick: you have me at under 3000 karma, but I see on my profile that I have 7771 karma... did something go wrong in your calculations there?

NunoSempere (2 karma, 1d): Also in the self-aggrandizing arena, you can't see the 5th and 6th authors by word count.
Hamish Huggard (2 karma, 1d): I only included karma from posts you're the first author of. So the missing karma is probably from comments or second-author posts.
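The undercounting described above is a common aggregation pitfall: crediting each post's karma only to its first author drops everything a user earned as a co-author or commenter. A minimal sketch of the two crediting schemes, using an invented post structure (not the EA Forum's actual schema):

```python
from collections import defaultdict

# Hypothetical data: each post lists all authors and its karma.
posts = [
    {"authors": ["alice", "bob"], "karma": 50},
    {"authors": ["bob"], "karma": 30},
]

first_author_only = defaultdict(int)
all_authors = defaultdict(int)
for post in posts:
    # Scheme 1: credit only the first listed author (undercounts co-authors).
    first_author_only[post["authors"][0]] += post["karma"]
    # Scheme 2: credit every listed author.
    for author in post["authors"]:
        all_authors[author] += post["karma"]

print(first_author_only["bob"])  # 30: the co-authored post's karma is missed
print(all_authors["bob"])        # 80: credited for both posts
```

Under the first scheme, "bob" appears to have far less karma than his profile shows, which matches the discrepancy reported above.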
Notes on "Managing to Change the World"

Oh cool! That's great to hear about Green.

Notes on "Managing to Change the World"

Thanks for the compliment. You're welcome!

Lizka's Shortform

Protip: if you find yourself with a slow computer, fix that situation asap.

Note to onlookers that we at Rethink Priorities will pay up to $2000 for people to upgrade their computers and that we view this as very important! And if you work with us for more than a year, you can keep your new computer forever.

I realize that this policy may not be a great fit for interns / fellows though, so perhaps I will think about how we can approach that.

Notes on "Managing to Change the World"

I read the book on the Kindle app on my phone. This made it easy to read it in small bits during downtime. I highlighted everything I thought was worth following up on. I then spent ~8hrs going through all the highlights and turning them into the document you see here. The process of making this document greatly helps me remember (and actually act on) the key points of the book.

Obviously I won't do this for every book I read but I want to mainly read books that are so good I feel compelled to make notes out of them.

You can see all the books I've read here.

kirstenangeles (1 karma, 4d): Thanks so much!
Peter Wildeford's Shortform

If we are taking Transformative AI (TAI) to be creating a transformation at the scale of the industrial revolution ... has anyone thought about what "aligning" the actual 1760-1820 industrial revolution might've looked like or what it could've meant for someone living in 1720 to work to ensure that the 1760-1820 industrial revolution was beneficial instead of harmful to humanity?

I guess the analogy might break down though given that the industrial revolution was still well within human control but TAI might easily not be, or that TAI might involve more dis...

Samuel Shadrach (3 karma, 2d): To be honest, intuiting what a human being in the 1600s would have thought about anything seems like a non-trivial endeavour. I find it hard to imagine myself without the current math background I have. Probability was just invented; calculus was just invented. Newton had just given the world a realist, mechanical way of viewing the world, except idk how many people thought in those terms because the philosophical background was lacking too. Nietzsche, Hume, Wittgenstein: none of them existed. One trend that may nevertheless have been foreseeable was the sudden tremendous importance of scientists and science, in both understanding and reshaping how the world works, and the general importance of high-level abstractions rather than just the practical engineering knowledge that existed at the time. People knew architecture and geometry, but idk how many people realised the general-purpose theorems of geometry are actually useful, and not just what helps you build building #48. Today we take it as a matter of fact that theorems are done with symbols, not specifics; all useful reasoning is symbolic and often at a high level of abstraction. Idk if people (even scientists) had such clear intuition then.
Daniel_Eth (1 karma, 6d): Some people at FHI have had random conversations about this, but I don't think any serious work has been done to address the question.
Notes on "Managing to Change the World"

FWIW my personal opinion having both taken the course and read the book is that reading the book is much more valuable (and much less expensive) than taking the course. I think this is also the opinion of other people I know who have done both, but I'm not sure.

The Cost of Rejection

Yeah, we can look into changing our policy.

Really appreciate this!

Yeah, we can discuss this a bit more. In particular, if it looks like we studied it and it's actually too time-consuming/risky, or if it's too expensive or time-consuming to do the legal research to figure out whether it's too risky, I'm happy to continue to abide by the current policy! I just want to make sure the policy is evidence-based, or at least based on evidence that being evidence-based is too hard!

The Cost of Rejection

I think giving feedback to rejected applicants is very useful psychologically, but is very hard to do well. Not only would it be time consuming but the key issue for me is that, at least in the United States, organizations take on considerable legal risk when explaining to applicants why they were rejected. (For example, the statement "we were looking for someone with more energy" has initiated an age discrimination suit in the US.) Even short of potential lawsuits, you also open yourself to the applicant arguing with you and asking you to reconsider their...

We've discussed this internally, but I want to register that I continue to think that while there are considerable costs and risks to organizations for giving feedback, there are also considerable benefits to individuals from precise, actionable feedback, and the case has not been adequately made that the revealed preferences of orgs are anywhere close to altruistically optimal.

In particular, I also have not seen much evidence that the legal risks are actually considerable in EV terms compared to either the org time costs of giving feedback or the individual practical benefits of receiving feedback.

(all views my own, of course)

[Job ad] Research important longtermist topics at Rethink Priorities!

It’s why they pay them the big bucks... except for the founders of Rethink Priorities and their officers, with mean salaries being about $33K according to their 2020 Form 990.

I think the takeaway is that I think there is a problem here that can be resolved completely by at least tripling the current salaries of RP officers and founders.

 

It's worth noting that we have tripled pay since our 2020 Form 990 (covering 2019). CEO pay is currently $103,959/yr.

[Job ad] Research important longtermist topics at Rethink Priorities!

I'm not sure if you are giving us accolades for putting this information in the job ads or missed that specific salary information is in the job ads. But we definitely believe in salary transparency for all the reasons you mentioned and if there's anything we can do to be more transparent, please let us know!

catehall (5 karma, 13d): I just totally missed that the info was in the job ads -- so thank you very much for providing that information, it's really great to see. Sorry for missing it the first time around!
remmelt (2 karma, 14d): But wait, how do we know that was really written by an algorithm? ^^
We're Redwood Research, we do applied alignment research, AMA

How do we know the AMA answers are coming from real Redwood staff and not cleverly trained text models?

GPT-3 suggests: "We will post the AMA with a disclaimer that the answers are coming from Redwood staff. We will also be sure to include a link to our website in the body of the AMA, with contact information if someone wants to verify with us that an individual is staff."

AMA: Jeremiah Johnson, Director/Founder of the Neoliberal Project

What, concretely, do you think The Neoliberal Project has accomplished in its existence so far? The 1/3 million and 70+ chapters are cool, but have you seen any traction on policies you care about due to your influence?

This is a complex thing to measure, because the largest thing we're trying to do is to create an ideological movement that captures a lot of people in the long run. I admire the DSA a lot and think they're very much an example of the impact I'd like to have (but obviously with what I think are preferable political views).  I think they have had enormous impact on current US politics.

But if you had asked 10 years ago 'What has the DSA accomplished?', it'd be a tough question to answer.  They had a handful of local politicians, but nobody really no...

Snapshot of a career choice 10 years ago

I think this is a good example of how career advice is mainly about funneling people to things that currently exist and are sufficiently scaled to take new people, but often the best thing to do is to go to something that does not exist yet. Similarly, it's not a good idea to plan a career ten years into the future, especially when you are young, because things change too fast in hard-to-predict ways.

ruthgrace (6 karma, 21d): Yes, this!! I would be very interested in talking to more people who are preparing themselves (building career capital, for example) for a project that doesn't exist yet. If this is you (or has been you in the past), please send me a message! There's a lot more uncertainty in a path like this, but I think more people doing it really raises the bar for what can be possible for EA to accomplish.
Honoring Petrov Day on the EA Forum: 2021

Haha it's ok!

Hopefully we can actually play a game version sometime.

Honoring Petrov Day on the EA Forum: 2021

You could upvote something else I said ;)

SiebeRozendal (3 karma, 24d): I think this is an excellent contribution to the forum: strong upvote! ;)
Honoring Petrov Day on the EA Forum: 2021

Yeah I actually thought you were legit mad at me rather than just in-game strategizing, so that's +1 to this game being unnecessarily stressful.

Thanks for clarifying.

SiebeRozendal (2 karma, 24d): Oops! Sorry Peter, not my intention at all!
Clarifying the Petrov Day Exercise

I agree with the sentiment here. I am confused about how some people are taking this super seriously and some people are not, and I feel distracted by being worried about offending people over this ritual. I'd love to play a game, play in a social experiment, or observe a ritual, but I agree it would be more fun to know which. Right now, this is not as fun as it could be.

Context: I have both LW and EA Forum launch codes. I never opted-in.

Honoring Petrov Day on the EA Forum: 2021

People offering forecasting questions like this is really cool, but is there any way to resolve these questions later and give people track records? Or at that point are we just re-inventing Metaculus too much?

Probably a question for Aaron Gertler / the EA Dev team. Semi-relatedly, is there a way to tag Aaron? That might be another good feature.

Aaron Gertler (4 karma, 23d): You can tag me with a quick DM for now; totally fine if you just literally send the URL of a comment and nothing else, if you want to optimize for speed/ease. Tagging users to ping them is a much-discussed feature internally, with an uncertain future.
Chi (4 karma, 25d): edit: Feature already exists, thanks Ruby! Another feature request: Is it possible to make other people's predictions invisible by default and then reveal them if you'd like? (Similar to how blacked-out spoilers work, which you can hover over to see the text.) I wanted to add a prediction but then noticed that I heavily anchored on the previous responses and didn't end up doing it.
Honoring Petrov Day on the EA Forum: 2021

I hope we invested in secure second strike capabilities. I think LessWrong has a nuclear triad: we have guest posts on other websites that can launch nukes even after LessWrong itself has been destroyed.

Honoring Petrov Day on the EA Forum: 2021

Last year the site looked very obviously nuked. If I see that situation, I will retaliate. If I see some other situation, I will use my best judgement.

Larks (8 karma, 25d): Surely after the site has been nuked you will no longer be able to enter the codes, because your silos will have been destroyed? And prior to that you risk mis-classifying our civilian space exploration vehicles, whose optimal launch trajectory just happens to go over LessWrong airspace, as weapons?
Honoring Petrov Day on the EA Forum: 2021

Too bad - I am committing to retaliating to establish a deterrent.

Alex HT (7 karma, 25d): What if LessWrong is taken down for another reason? E.g. the organisers of this game/exercise want to imitate the situation Petrov was in, so they create some kind of false alarm.
Honoring Petrov Day on the EA Forum: 2021

Attention EA Forum - I am a chosen user of LessWrong and I have the codes needed to destroy the EA Forum. I hereby make a no first use pledge and I will not enter my codes for any reason, even if asked to do so. I also hereby pledge to second strike - if LessWrong is taken down, I will retaliate.

SiebeRozendal (4 karma, 24d): I motion to:
1. Remove Peter Wildeford's launch codes from the list of valid launch codes for both this forum and LessWrong. Reason: he clearly does not understand that this precommitment is unlikely to deter any of the 'trusted' LW users from pressing the button (see this David Mannheim's comment [https://forum.effectivealtruism.org/posts/hyWgdmHTNGSHM5ZaE/honoring-petrov-day-on-the-ea-forum-2021?commentId=jYJTqhXGysdLfajsA] and the discussion below).
2. Evaluate our method of choosing 'trusted users'. We may want to put specific users who take dangerous actions like these on a blacklist for future instances of Petrov Day.
I would ask how users are chosen, but I imagine that making that knowledge more available increases the risk it will be misused by nefarious actors.

I downvoted this. I'm not sure if that was an appropriate way to express my views about your comment, but I think you should lift your pledge to second strike, and I think it's bad that you pledged to do so in the first place.

I think one important disanalogy between real nuclear strategy and this game is that there's kind of no reason to press the button, which means that for someone pressing the button, we don't really understand their motives, which makes it less clear that this kind of comment addresses their motives.

Consider that last time LessWrong wa...

I will not enter my codes for any reason ... if LessWrong is taken down, I will retaliate.

Ahh, Nixon's madman strategy.

WilliamKiely (9 karma, 25d): Please don't retaliate; that just ~doubles the damage for no reason. Per David's comments, I don't think threatening retaliation helps the situation here.
EricHerboso (6 karma, 25d): Were you selected to have the codes for both LessWrong and the EA Forum? I see you made a similar post on LW [https://www.lesswrong.com/posts/EW8yZYcu3Kff2qShS?commentId=j2sRiyzuFnBf47Pid].
[Creative writing contest] Blue bird and black bird

I enjoyed this. I liked that it was short and sweet, and the art is excellent. I'd be curious what people who have children think about this.

Great Power Conflict

Even without new technological development, why couldn't there be a great power war over a classic flashpoint, like those that caused past wars? Seems like a war over disputed territories in the seas near China, or disputed territories between India and Pakistan, could plausibly cause a great power war.

Zach Stein-Perlman (1 karma, 1mo): It's certainly possible, and I think such analysis is valuable. It's just not my comparative advantage and not so neglected (I think). Also, I think we don't lose much analytically by separating foreseeable causes of great power conflict into two distinct categories:
1. Conflict due to specific factors that we recognize as important today (e.g., US-China tension and India-Pakistan tension and their underlying causes)
2. Conflict due to more general forces and phenomena (and due to my empirical beliefs, I think emerging-technology-related forces are relatively likely to cause conflict)
This post aims to start a conversation on 2, or to get people to direct me to previous work on 2. Also, to explain my focus: I would be surprised by major conflict for normal reasons by 2040, but not surprised by major conflict because the world is going crazy by 2040. But I didn't justify this. I should have mentioned my exclusion of major conflict for normal reasons in my post; thanks for your comment.
The motivated reasoning critique of effective altruism

My pithy critique of effective altruism is that we have turned the optimizer's curse into a community.

It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link]

This makes a lot of sense to me given our limited progress on simulating even very simple animal brains so far, given the huge amount of compute we have nowadays. The only other viable hypothesis I can think of is that people aren't trying that hard, which doesn't seem right to me.

RyanCarey (6 karma, 1mo): What about the hypothesis that simple animal brains haven't been simulated because they're hard to scan? We lack a functional map of the neurons: which ones promote or inhibit one another, and other such relations.
Extrapolated Age Distributions after We Solve Aging

I imagine suicide rates would not stay the same in a world like this.

MichaelStJules (4 karma, 2mo): You're thinking they'd be lower, right? Presumably people would have better quality of life and mental health, so they'd be less inclined to commit suicide each year.
Questions for Howie on mental health for the 80k podcast

I think there are a variety of EAs who would benefit from therapy (and medication) for mental health issues but lack a clear path to go from "no therapy" to "in therapy". This is especially troubling when the mental health issues themselves create barriers to seeking out resources and slogging through a tough system. Would there be a way to make a guide for this?

Obviously this will vary a ton between states and countries and insurances, so such a guide might be hard.

To be clear, I say this as an EA who has benefitted from therapy and...

MaxDalton (6 karma, 2mo): There is this website [https://eamentalhealth.wixsite.com/navigator], which might be the sort of thing you were thinking of?

There seems to be an opportunity for founding an org for “EA mental health”:
 

  • It’s plausible that, even ignoring the wellbeing of the recipients, the cost-effectiveness for impact could be enough. For example, even if you had to pay the costs of treating a thousand EAs, if doing so pulled a dozen of those people out of depression, the impact might be worth it. Depression and burnout are terrible. Also, the preventative value from understanding and preventing burnout and other issues seems high.
     
  • Conditional on mental health being a viable cause ar...
Writing about my job: Data Scientist

FWIW I made $187K/yr in total comp (£136K/yr) in Chicago as a data scientist after four years of experience. My starting salary was $83K/yr in total comp (£60K/yr) with no experience. In both jobs, I worked about 30hrs/wk. My day-to-day experience was rather identical to this post.

Writing about my job: Internet Blogger

This is cool, and I think it is underrated as a path. In any case, I wish more people tried out just writing, especially on the EA Forum.

What do you see as the difference, if any, between being an internet blogger and being an independent EA researcher (besides sounding less pretentious)? What would you see as the difference, if any, between being an internet blogger and a journalist?

AppliedDivinityStudies (7 karma, 3mo): Thanks! That's one perk I neglected to mention. You can try blogging in your spare time without much commitment. Though I do think it's a bit risky to do it half-heartedly, get disappointed in the response, and never find out what you would be capable of if you went full time. There are lots of bloggers who definitely don't do independent research, but within the broader EA space it's a really blurry line. One wacky example is Nadia Eghbal [https://nadiaeghbal.com/], whose writing products include tweets, notes, a newsletter, blog posts, a 100-page report, and a book. The journalism piece is interesting. Previously I would have said there are mainstream journalists, and then small-scale citizen journalists who focus on hyperlocal reporting or something. Now so many high-profile journalists have gone to Substack to do something that is often opinion-writing, but sometimes goes beyond that. In the past, I also would have said that journalists have more of a responsibility to be impartial, be the view from nowhere, etc. That seems less true today, but it's possible I'm conflating op-eds with "real reporting", and an actual journalist would tell you that there are still clear boundaries.
Notes on EA-related research, writing, testing fit, learning, and the Forum

I definitely agree that one of the best things applicants interested in roles at organizations like ours can do to improve their odds of being a successful researcher is to read and write independent research for this forum and get feedback from the community.

I think another underrated way to acquire a credible and relevant credential is to become a top forecaster on Metaculus, Good Judgement Open, or Facebook’s Forecast app.

Some 2021 CEA Retention Statistics

Peter Wildeford has done the largest non-manual retention analysis I know of, which looked at the percentage of people who answered the EA survey using the same email in multiple years. He found retention rates of around 27%, but cautioned that this was inaccurate due to people using different email addresses each year.


Thanks for citing me, and I'm excited for the new data sources you are looking at.

One thing you might want to add is that I looked at two different approaches. You quote the first approach, but the second approach - which I think is more accura...
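The email-matching approach quoted above amounts to a set intersection across survey years. A minimal sketch with invented data (not the actual EA survey), showing why it undercounts retention when people switch addresses:

```python
def retention_rate(year1_emails, year2_emails):
    """Fraction of year-1 respondents whose email reappears in year 2.

    Emails are normalized (trimmed, lowercased) first, since trivial
    variations would otherwise inflate apparent churn. A respondent who
    switches to a genuinely different address still counts as churned,
    which is the bias cautioned about above.
    """
    y1 = {e.strip().lower() for e in year1_emails}
    y2 = {e.strip().lower() for e in year2_emails}
    if not y1:
        return 0.0
    return len(y1 & y2) / len(y1)

# Invented example: 2 of 4 year-1 respondents reuse their email.
survey_2018 = ["a@x.com", "B@x.com", "c@x.com", "d@x.com"]
survey_2019 = ["b@x.com", "d@x.com", "e@x.com"]

print(retention_rate(survey_2018, survey_2019))  # 0.5
```

Any estimate from this method is a lower bound on true retention, since address changes are indistinguishable from genuine drop-outs.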
