It's great to see more funding for meta initiatives, so thank you for your work on the MCF!
and we will welcome applications similar to those in the last round, especially "giving multipliers" that help grow the pie of effective donations.
Could you say more about your circle's reasons for focusing on giving multipliers? I'd be especially curious about why you might focus on donations instead of multiplying other resources like human capital.
(Maybe answered in the first question) What is the object-level cause prioritisation of circle funders?
Thanks! I'm the author of most of the concepts on Conceptually, and also the founder of Non-trivial. I'll send you an email. :)
I think I'd find them helpful, though it's hard to say for sure. As one data point, I'm currently at an extremely basic level of learning JavaScript, and I find Codecademy's quizzes useful (as well as the project-based learning, which might be cool to replicate for EA but would take a lot of work).
FWIW, the quizzes are by far our most popular feature among the users I'm interviewing.
Re: making them optional. It's possible this would be better, but if a user wants to skip a quiz they can very quickly give a dummy answer, which is an ok user experience...
The reason we ask for sign-in is that it allows users to track their progress through the course, one of our users' favourite features. Learning about EA can be an overwhelming sea of links, and we wanted to give users a clearer way to chart their progress through it.
The other reason is that it's on our backlog to consider, but we didn't get to it in time for launch.
To be totally honest, I didn't actually check and just assumed. However, my read of the style guides is that you don't capitalise the second word in hyphenated titles if there's a prefix (source).
Yeah I agree that some talented teenagers don't want to engage with material targeted at their age group.
I try not to use the word "teenager" on the site (there may be some old references), and I write basically as if it's for me at my current age, just without assuming the knowledge I have.
But I'm not at all sure we've got the tone and design right – I'd appreciate hearing about any examples on the site of something that seems condescending, belittling, unempowering, etc.
Strongly upvoted because I think product is an important, underrated framework for movement building (though I only skimmed).
(I think it's particularly true for building web products, and if you're running other services like 1:1s you might find fields like service design or sales more useful.)
The product development literature has informed a lot of the processes and frameworks I'm using to build Non-trivial Pursuits.
My take on the best two books to get up to speed on modern product development:
What do you think of the proposals in Longtermist Institutional Reform? If you're supportive, what should happen at the current margin to push them forward?
Thanks for the great summary!
For effective altruists, I think (based on the topic and execution) it's straightforwardly the #1 book you should use when you want to recruit new people to EA.
I really liked the book, and I think it's an important read for folks early in their EA journey, but I want to quickly say that I disagree with this claim. The book "doesn't actually talk much about EA", so it'd be surprising if it were the best introduction to the field. Statistics is a useful field for understanding and contributing to social science, but it'd be surprising if a statistics textbook were straightforwardly the #1 book to recommend to someone wanting to learn social science.
If someone's specifically looking for a book about EA, I wouldn't give them Scout Mindset and say 'this is a great introduction to EA' – it's not! Riffing on your analogy, it's more like a world where:
I imported them into RemNote, where you can read all the cards. You can also quiz yourself on the questions using the queue functionality at the top. Or here's a Google Doc.
If anyone is interested in adding more facts to the deck, there are a bunch in these notes from The Precipice. (It's fairly easy to export from RemNote to Anki and vice versa, though formatting is sometimes a little broken.)
Is there a way to show my appreciation for an edit?
Often I see excellent edits[1] to the Wiki show up on my Forum homepage, and I would like to be able to show my appreciation to someone[2]. Ideally with low effort and without otherwise adding any value.
Is there a like/upvote button for Wiki edits I'm missing?
--
[1] For example, check out how much information this article on iterated embryo selection is collating and condensing. It was written a few months ago, and is now Google's featured snippet for iterated embryo selection (a sign that Googl...
Thanks for writing this!
Just wanted to let everyone know that at 80,000 Hours we've started headhunting for EA orgs, and I'm leading that project full-time. We're advised by a headhunter from another industry and, as suggested, are attempting to implement executive search best practices.
I've reached out to the email addresses listed above – looking forward to speaking.
Peter
Great article, thanks Carrick!
If you're an EA who wants to work on AI policy/strategy (including in support roles), you should absolutely get in touch with 80,000 Hours about coaching. We've often been able to help people interested in the area clarify how they can contribute, make introductions for them, and so on.
Apply for coaching here.
We agree these are technical problems, but for most people, all else being equal, it seems more useful to learn ML than cog sci/psych. Caveats:
Hi Kaj,
Thanks for writing this. Since you mention some 80,000 Hours content, I thought I’d respond briefly with our perspective.
We had intended the career review and AI safety syllabus to be about what you’d need to do from a technical AI research perspective. I’ve added a note to clarify this.
We agree that there are a lot of approaches you could take to tackle AI risk, but we currently expect that technical AI research will be where a large amount of the effort is required. However, we've also advised many people on non-technical routes to impacting AI safety, s...
Thanks for writing this up! It's very useful to be able to compare this to census data. Did you use the same or a similar message for everyone? If so, I'd be interested to see what it was. It would also be useful to A/B test this sort of thing to refine it. There is also the option to add people manually, bypassing the need for admin approval; did you contact those people too?
Hi Eric, thanks for writing these and pointing us to them. I think this is a great idea. I just posted these on our business society and law society Facebook page to test the waters and see what response we'd get from a similar input. Out of interest, what response have you gotten so far?
Thanks for posting this. I think explicitly asking for critical feedback is very useful.
If the intervention is not currently supported by a large body of research, then we want to fund/carry out a randomized controlled trial to test whether it's worth pursuing this intervention.
RCTs are seriously expensive, would take years to yield meaningful data, and would need to be replicated before you could put much faith in them. Running one also wouldn't align with the core skillset I'd imagine you'd need to start an organisation (so you'd need to outsource it, wh...
I think one of my concerns with this would be the consistency and commitment effect created by incentivising criticism, leading to someone seeing herself as an EA critic, or as opposed to these ideas. It's similar to companies offering rewards to customers for writing about why it's their favourite company or product in the world. See also the American prisoners of war held by China in the Korean War (I think), who were given small incentives to write criticisms of America or capitalism. If this were being seriously considered, it'd be good to see more work done to figure out whether this would be a real consequence.
Source: Influence, Cialdini.
Thanks for saying so! Let me know if you invent a time machine – I've got some ideas.