All of PeterMcIntyre's Comments + Replies

AMA: Jason Brennan, author of "Against Democracy" and creator of a Georgetown course on EA

What do you think of the proposals in Longtermist Institutional Reform? If you're supportive, what should happen at the current margin to push them forward?

Outline of Galef's "Scout Mindset"

Thanks for the great summary! 

> For effective altruists, I think (based on the topic and execution) it's straightforwardly the #1 book you should use when you want to recruit new people to EA.

I really liked the book, and think it's an important read for folks early in their EA journey, but I want to quickly say that I disagree with this claim. The book "doesn't actually talk much about EA", so it'd be surprising if it were the best introduction to the field. Statistics is a useful field for understanding and contributing to social science, but it'd be surprising if a statistics textbook were straightforwardly the #1 book to recommend to someone wanting to learn social science.

If someone's specifically looking for a book about EA, I wouldn't give them Scout Mindset and say 'this is a great introduction to EA' -- it's not!  Riffing on your analogy, it's more like a world where:

  • There's a book about statistics (or whatever) that happens to be especially useful as a prereq for social science resources -- e.g., it provides the core tools for evaluating social-science claims, even if it doesn't discuss social science on the object level.
  • Social science departments end up healthier when they filter on the kind of person who's inter…
Anki deck for "Some key numbers that (almost) every EA should know"

I imported them into RemNote, where you can read all the cards. You can also quiz yourself on the questions using the queue functionality at the top. Or here's a Google Doc.

If someone was interested in adding more facts to the deck, there are a bunch in these notes from The Precipice. (It's fairly easy to export from RemNote to Anki and vice versa, though formatting is sometimes a little broken.)

Pablo (3mo): Thanks, I'll try to add these shortly.
The EA Forum Editing Festival has begun!

Is there a way to show my appreciation for an edit? 

Often I see excellent edits[1] to the Wiki show up on my Forum homepage, and I would like to be able to show my appreciation to someone[2], ideally with low effort and without otherwise needing to add value.

Is there a like/upvote button for Wiki edits I'm missing?

--

[1] For example, check out how much information this article on iterated embryo selection is collating and condensing. It was written a few months ago, and is now Google's featured snippet for iterated embryo selection (a sign that Googl…

JP Addison (4mo): Voting on edits recently entered the pipeline [https://github.com/LessWrong2/Lesswrong2/pull/4052]. In the meantime you can comment on the tag, which gives the author public recognition.
Bottlenecks and Solutions for the X-Risk Ecosystem

Thanks for writing this!

Just wanted to let everyone know that at 80,000 Hours we've started headhunting for EA orgs, and I'm working full-time leading that project. We're advised by a headhunter from another industry and, as suggested, are attempting to implement executive-search best practices.

I've reached out to the email addresses listed above; looking forward to speaking.

Peter

Personal thoughts on careers in AI policy and strategy

Great article, thanks Carrick!

If you're an EA who wants to work on AI policy/strategy (including in support roles), you should absolutely get in touch with 80,000 Hours about coaching. Often, we've been able to help people interested in the area clarify how they can contribute, make introductions, etc.

Apply for coaching here.

Cognitive Science/Psychology As a Neglected Approach to AI Safety

We agree these are technical problems, but for most people, all else being equal, it seems more useful to learn ML than cog sci/psych. Caveats:

  1. Personal fit could dominate this equation, though, so I'd be excited about people tackling AI safety from a variety of fields.
  2. It's an equilibrium: the more people are already attacking a problem with one toolkit, the more we should send people to learn other toolkits to attack it.
Kaj_Sotala (4y): Got it. To clarify: if the question is framed as "should AI safety researchers learn ML, or should they learn cogsci/psych", then I agree that it seems better to learn ML.
Cognitive Science/Psychology As a Neglected Approach to AI Safety

Hi Kaj,

Thanks for writing this. Since you mention some 80,000 Hours content, I thought I’d respond briefly with our perspective.

We had intended the career review and AI safety syllabus to be about what you’d need to do from a technical AI research perspective. I’ve added a note to clarify this.

We agree that there are a lot of approaches you could take to tackling AI risk, but we currently expect that technical AI research is where a large amount of the effort will be required. However, we've also advised many people on non-technical routes to impacting AI safety, s…

Kaj_Sotala (4y): Hi Peter, thanks for the response! Your comment seems to suggest that you don't think the arguments in my post are relevant for technical AI safety research. Do you feel that I didn't make a persuasive case for psych/cogsci being relevant for value learning/multi-level world-models research, or do you not count these as technical AI safety research? Or am I misunderstanding you somehow? I agree that the "understanding psychology may help persuade more people to work on/care about AI safety" and "analyzing human intelligences may suggest things about takeoff scenarios" points aren't related to technical safety research, but value learning and multi-level world-models are very much technical problems to me.
EA Facebook New Member Report

Thanks for writing this up! It's very useful to be able to compare this to census data. Did you use the same or a similar message for everyone? If so, I'd be interested to see what it was. This sort of thing would also be useful to A/B test to refine it. There is also the option to add people manually, bypassing the need for admin approval; did you contact those people too?

ClaireZabel (6y): We used: "Hey, welcome to the Effective Altruism facebook group! If you have a moment, would you mind telling us where you first heard about EA? Thanks! Claire (moderator)" We are considering A/B testing some new questions, and would love suggestions on different phrasing. And when someone in the group adds a new member, we still have to approve them. We messaged them as well.
You Could be the Warren Buffett of Social Investing

Hi Eric, thanks for writing these and pointing us to them. I think this is a great idea. I just posted these on our business society and law society Facebook pages to test the waters and see what response we'd get from a similar input. Out of interest, what response have you gotten so far?

[anonymous] (6y): I would guess that the first article would have had a quite positive response. It was well written, and a pleasure to read. But I fear the second article has not had as positive a response, for two reasons:

  1. It appears to be dismissive and cynical of its own target audience, from the very first sentence: "For people systematically chosen for being able to root out and analyze the rationality of arguments, lawyers are pitifully bad at being reasonable." It goes on to do things such as dismiss the positive impact of believed-to-be-ethical jobs as 'the warm fuzzies', without justification.
  2. It doesn't address what its target audience believes to be the biggest factor in determining an ethical job: the direct impact of the job; the millions of dollars which the big corporation sues for from the more deserving; the dozens of individuals the public lawyer works to help.

Writing these articles can do a great amount of good, and is to be commended. But to maximise this good, we should be meticulous about catering to the needs of our audience.
rohinmshah (6y): Adding on to this question, there are a lot of negative comments on the second article; do you think that represents a vocal minority or a majority, and why? It would be interesting to try this at Berkeley as well, although we'd probably have a different target audience depending on where the article gets published.
Request for Feedback: Researching global poverty interventions with the intention of founding a charity.

Thanks for posting this. I think explicitly asking for critical feedback is very useful.

> If the intervention is not currently supported by a large body of research then we want to fund/carry out a randomized controlled trial to test whether it's worth pursuing this intervention.

RCTs are seriously expensive, would take years to yield meaningful data, and would need to be replicated before you could put much faith in the results. Running one also wouldn't align with the core skillset I'd imagine you'd need to be starting an organisation (so you'd need to outsource it, wh…

Denise_Melchin (6y): Thank you for pointing this out. I had expected them to be a lot cheaper. If GiveWell, as Ben said, has decided against funding RCTs, I'm not very likely to be convinced of their usefulness either.
lincolnq (6y): But all those costs of RCTs are clearly worth it. Expensive? If your intervention is vaguely promising then EAs will throw enough money at you to get started. Time? Better get started now. Replication? More cost, EAs will fund. Outsource? Higher quality, EAs will fund.
Best way to invest with leverage?

If I remember correctly, CEA et al. decided against pursuing this strategy due to risk aversion. Given the large downsides, which may be unique to EA, it's not clear, to me at least, that our personal strategy should differ from theirs. I'd be interested in seeing some more thoughts on this.

Brian_Tomasik (6y): I agree the situation would be different for a single small organization or if the charity you're donating to depends sensitively on your donations. But if you're just an individual earning to give to relatively big charities (e.g., MIRI, which has a budget >$1m/year), then if you lose, say, ~$20K due to leverage, you can just make it up again with another ~2-3 months of work, and no major harm is done.
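To make the arithmetic in Brian's reply concrete, here's a minimal Python sketch (not from the original thread; the portfolio size, borrow rate, and donation capacity are hypothetical figures I've chosen for illustration) of how leverage scales a market drawdown into an equity loss, and how that loss translates into months of earning-to-give:

```python
# A minimal sketch of the leverage arithmetic discussed above.
# All figures below are hypothetical; borrow cost is simplified to a
# flat rate over the whole period, ignoring taxes, fees, and margin calls.

def leveraged_return(market_return: float, leverage: float,
                     borrow_rate: float = 0.0) -> float:
    """Return on equity for a portfolio levered `leverage`x.

    With 2x leverage, a -20% market move is roughly a -40% equity move,
    minus interest paid on the borrowed portion.
    """
    return leverage * market_return - (leverage - 1) * borrow_rate

def months_to_recover(loss: float, monthly_donation_capacity: float) -> float:
    """Months of earning-to-give needed to offset a given loss."""
    return loss / monthly_donation_capacity

if __name__ == "__main__":
    equity = 50_000        # hypothetical portfolio equity
    market_drop = -0.20    # a 20% market decline
    lev = 2.0              # 2x leverage

    loss = -leveraged_return(market_drop, lev, borrow_rate=0.02) * equity
    print(f"Loss on a {lev}x levered ${equity:,} portfolio: ${loss:,.0f}")
    print(f"Months to recover at $8k/month donated: "
          f"{months_to_recover(loss, 8_000):.1f}")
```

Under these assumptions, a 20% market decline on a 2x levered $50K portfolio costs roughly $21K of equity, or about 2-3 months at $8K/month donated, which matches the rough figures in the reply above.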
April Open Thread

You've probably considered it, but it's not on your list: to hedge against any change in our consumption of meat, you could invest in in vitro meat and other meat-alikes.

The Outside Critics of Effective Altruism

I think one of my concerns with this would be the consistency-and-commitment effect created by incentivising criticism, leading to someone seeing herself as an EA critic, or as opposed to these ideas. It's similar to companies offering rewards to customers for writing about why theirs is their favourite company or product in the world. See also the American prisoners of war held by China in the Korean War (I think), who were given small incentives to write criticisms of America or capitalism. If this were being seriously considered, it'd be good to see more work done to figure out whether this would be a real consequence.

Source: Influence, Cialdini.