Dwarkesh (of the famed podcast) recently posted a call for new guest scouts. Given how influential his podcast is likely to be in shaping discourse around transformative AI (among other important things), this seems worth flagging and applying for (at least for students or early-career researchers in bio, AI, history, econ, math, or physics who have a few extra hours a week).
The role is remote, pays ~$100/hour, and expects ~5–10 hours/week. He’s looking for people who are deeply plugged into a field (e.g. grad students, postdocs, or practitioners) with high taste. Beyond scouting guests, the role also involves helping assemble curricula so he can rapidly get up to speed before interviews.
More details are in the blog post; link to apply (due Jan 23 at 11:59pm PST).
Oscar Wilde once wrote that "people nowadays know the price of everything and the value of nothing." I can see a particular type of uncharitable EA critic saying the same about our movement, grossed out by how we try to put a price tag on human (or animal) lives. This is wrong.
What should appall them is the idea that a life is truly worth only $3,500, but that's not what we are claiming. The claim is that a life is invaluable. The world just happens to be such that we can buy this incredibly precious thing for the meager cost of a few thousand dollars.
Nate Soares has written about this in greater detail here.
After hanging out with the local Moral Ambition group (sadly there's only one in Malmö), I've found a shorthand to express the difference in methodology compared to EA. Both movements aim to find people who already have the "A," and cultivate the other component in them.
Many effective altruism communities target people who already wish to help the world (Altruism), then guide and encourage them to reach further (be more Effective).
Moral Ambition, meanwhile, targets high-achieving professionals and Ivy Leaguers (Ambition), then reminds them that the world is burning and they should help put out the fire (be more Moral).
Applications for EA Global: San Francisco close February 1st. The event is February 13–15 at the Hilton Union Square.
We're expecting over 1,000 attendees this year. For more information, visit our website or contact hello@eaglobal.org with any questions.
Not sure who needs to hear this, but Hank Green has published two very good videos about AI safety this week: an interview with Nate Soares and a SciShow explainer on AI safety and superintelligence.
Incidentally, he appears to have also come up with the ITN framework from first principles (h/t @Mjreard).
Hopefully this is auspicious for things to come?
Hey y'all,
My TikTok algorithm recently presented me with this video about effective altruism, with over 100k likes and (TikTok claims) almost 1 million views. That isn't a ridiculous amount, but it's a pretty broad audience to reach with one video, and the framing isn't particularly kind to EA. As far as criticisms go, it's not the worst: it starts with Peter Singer's thought experiment and takes the moral imperative seriously as a concept, but it also frames several EA and EA-adjacent activities negatively, saying EA, quote, "has an enormously well funded branch ... that is spending millions on hosting AI safety conferences."
I think there's a lot to take from it. The first point relates to @Bella's recent argument that EA should be doing more to actively define itself. This is what happens when it doesn't. EA is legitimately an interesting topic to learn about because it asks an interesting question; that's what I assume drew many of us here to begin with. It's interesting enough that when outsiders make videos like this, even when they're not the picture we'd prefer,[1] they will capture the attention of many. This video made a significant impression, but it's not the be-all and end-all, and we should seek to define ourselves lest we be defined by videos like it.
The second is about zero-sum attitudes and leftism's relation to EA. In the comments, many views like this were presented:
@LennoxJohnson grappled with this really thoughtfully a few months ago, describing his journey from a zero-sum form of leftism focused on the need for structural change towards greater sympathy for the orthodox EA approach. But I don't think we can necessarily depend on similar reckonings happening to everyone, all at the same time. With this, I think there's a much less clear solution than the PR problem, as I think on the one hand that EA sometimes doesn't grapple enough with systemic change, but on the other hand that society would be
Question: Should I serve on the board of a non-EA charity?
I have an opportunity through work to help guide a charity doing work on children's education and entertainment in the UK and US. It has an endowment in the tens of millions of pounds.
Has anyone else had experience serving on the board or guiding committee of a non-EA charity? Did you feel like you were able to have a positive influence? Do you have any advice?
I am sure someone has mentioned this before, but…
For the longest time, and to a certain extent still, I have found myself deeply blocked from publicly sharing anything that wasn’t significantly original. Whenever I found an idea already existing anywhere, even as a footnote in an underrated 5-karma post, I would be hesitant to write about it, since I thought I wouldn’t add value to the “marketplace of ideas.” In this abstract framing, the “idea is already out there,” so the job is done and the impact is set in place. I have talked to several people who feel similarly; people with brilliant thoughts and ideas who proclaim to have “nothing original to write about” and therefore refrain from writing.
I have come to realize that some of the most worldview-shaping and actionable content I have read and seen was not the presentation of a uniquely original idea, but often a better-presented, better-connected, or simply better-timed presentation of existing ideas. I now think of idea-sharing as a much more concrete but messy contributor to impact: one that requires the right people to read the right content in the right way at the right time, often enough, and sometimes even from the right person on the right platform.
All of that is to say: the impact of your idea-sharing goes well beyond the originality of your idea. If you have talked to several cool people in your network about something and they found it interesting and valuable to hear, consider publishing it!
Relatedly, there are many more reasons to write other than sharing original ideas and saving the world :)