I proposed the Nonlinear Emergency Fund and Superlinear as a Nonlinear intern.[1]
I co-founded Singapore's Fridays For Future (featured on Al Jazeera and the BBC). After arrests and a year of campaigning, Singapore adopted all our demands (Net Zero 2050, an $80 carbon tax, and fossil fuel divestment).
I developed a student forum with >300k active users and a study site with >25k users. I founded an education reform campaign with the Singapore Ministry of Education.
I proposed both ideas at the same time as the Nonlinear team, so we worked on these together.
Plans I'm working on:
And probably more. See: linktr.ee/menhguin
Adding some things I realised when I first did EV estimates of successful protest/activist movements (a rough sketch of the kind of estimate I mean is below):
Activism is never really convenient or "high-EV". I think the public generally holds contradictory and unrealistic expectations of activism. For one, it's very easy to put off activism as "not a priority" because it doesn't lead to obvious career/monetary benefit, always costs time, and poses perceived reputation risk. Whenever I hear someone say they care about a cause but don't have time to advocate for it, I just tell them they'll never find a better time. A busy, career-focused 20-year-old becomes a busy, career-focused 30-year-old, becomes a busy, career-focused 40-year-old, and then they forget whatever they cared about. There's a reason EA skews so young: time works against wanting to do meaningful things.
Activism is almost always either controversial/intractable or unnecessary, for the simple reason that if everyone's already convinced of an idea, you don't really need activism. By definition, the issues where progress matters most are the ones that seem controversial, or so niche that it seems people will "never understand". So when someone tells me that issue [X] is unpopular/controversial/too obscure, I'm like ... yeah, that's the point. Of course the current discourse makes progress seem intractable; that's how all activism starts out, and shifting those beliefs is the whole aim. Perhaps more annoying is when activists spend years being harassed and dismissed, and then, when the Overton Window finally shifts, people just accept the ideas as obvious/default and go back to dismissing the value of activism for the next topical issue, without acknowledging the work done to raise the sanity waterline.
I think these paradoxes are hard to explain to people, because if one never engages in activism, it's very easy to be cynical and dismiss it as frivolous/misguided/performative. Which is about as fair as dismissing EA orgs with "I read somewhere that nonprofits are just a way for rich people to launder money while claiming admin costs".
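For concreteness, here's the rough shape of the back-of-the-envelope estimate I mentioned above. This is a minimal sketch, and every number in it is a hypothetical placeholder rather than a real figure; the point is just that a small probability of a large policy win can swamp the visible costs.

```python
# A minimal, purely illustrative EV sketch for an activist campaign.
# Every number below is a made-up placeholder, not a real estimate.

p_success = 0.05          # hypothetical chance the campaign shifts policy at all
value_if_success = 1e9    # hypothetical social value of the policy win, in $
organiser_hours = 5_000   # hypothetical total volunteer time across the campaign
cost_per_hour = 30        # hypothetical opportunity cost of an organiser-hour, in $

expected_value = p_success * value_if_success
total_cost = organiser_hours * cost_per_hour

print(f"Expected value: ${expected_value:,.0f}")              # $50,000,000
print(f"Total cost:     ${total_cost:,.0f}")                  # $150,000
print(f"EV per $ spent: {expected_value / total_cost:.0f}")   # 333
```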
Sigh, oh well.
As a former climate activist who organised a protest outside Exxon offices after my country failed to commit to climate agreements, I can personally confirm Scott's hypothetical.
I also share many of the same concerns about the AGI race dynamics. The current frontrunners in the feared "AGI race" are all AI Safety companies, and ChatGPT has attracted billions into capabilities research from people who otherwise would never have looked into AI.
Just a week ago, Peking University professor Zhu Song-Chun spoke at a CCP conference about how China needs to go all-in to beat the US to AGI. ChatGPT created a very compelling proof-of-concept for pouring money into AI.
Counterfactuals and uncertainties aside, the AI Safety community has created the AGI race. I wonder if it was a good idea.
I love it!
Speaking as someone who started my country’s local FridaysForFuture, this is basically the same plan I had. If you go to my profile or have seen me at EAG, you’ll know this is an idea I can’t shut up about, because I think it’s super tractable!
Some comments:
Overall, from my background in climate advocacy, I think people underrate how reasonable others are, especially other highly engaged activists. I expect EAs will be surprised at how receptive climate activists are. Climate activists care a lot about engaging with important ideas and mobilising to do good, and I find they respond (relatively) positively to EA/longtermist ideas. In fact, I know a lot of EAs who used to work in the climate space and entered through other cause areas like animal rights, alt proteins or global poverty alleviation. Like you mentioned, there are also concepts of regulation, social equity and skepticism of large corporations that could be leveraged to find common ground.
Anyway, would love to chat with you and anyone else who finds this idea compelling!
Wait, is this not the case? 0.0
I've worked at some startups and a business consultancy, and this is, like, the first thing I learned in hiring/headhunting. While writing up Superlinear prize ideas, I made a few variations of SEO prizes targeting mid- to senior-level experts through searches like field-specific jargon, upcoming conferences, common workflow queries and new regulations.
>"AI is getting more powerful. It also makes a lot of mistakes. And it's being used more often. How do we make sure (a) it's being used for good, and (b) it doesn't accidentally do terrible things that we didn't want."
Very similar to what I currently use!
I've been experimenting with AI Safety messaging for a bit, and I've stuck to these principles:
1. Use simple, agreeable language.
2. Refrain from immediately introducing concepts that people already hold misconceptions about.
So mine is something like:
1. AI is given a lot of power and influence.
2. Large tech companies are pouring billions into making AI much more capable.
3. We do not know how to ensure this complex machine respects our human values and doesn't cause great harm.
I do agree that this understates the risks associated with superintelligence, but in my experience speaking with laymen, if you lead with superintelligence as the central concept, the debate becomes "Will AI be smarter than me?", which provokes a weird kind of adversarial defensiveness. So I prioritise getting people to agree with me before engaging with "weirder" arguments.
I've sent about 5 people to EA VP and AGI SF, and yes, I have thought about how to "get credit".
I think the simplest option would be:
1. An option on applications to Intro Programs/roles that asks "Who referred you to this?"
2. A question on surveys like the annual EA Survey that asks "Which individuals/organisers have been particularly helpful in your EA journey?"
3. I've also thought of prizes or community days dedicated to recognising fellow EAs who have helped you a lot in your journey, but that's a bit more complex to organise well.
Hi!
Just saw this on my feed. I'm not sure if you've already read this, but the book Does Altruism Exist? by David Sloan Wilson is about this exact premise: altruistic/pro-social behaviours and the conditions under which they comprise a successful evolutionary strategy, both for individuals and groups. It's written by a biologist, so I think you might find some use out of it!
Personally, I like the book and I think EAs would find it interesting. Effective Altruism has a ton of research examining the Effective part, but far less on the Altruism part. The book rigorously defines terms such as altruism, and examines the contexts in which altruistic individuals and groups can thrive, as well as the risks that could undermine such behaviours.
> 2. Furthermore, one said “whenever major technological developments happen, everyone gets a promotion”—everyone’s job will be slightly more interesting and slightly less grunt-work-y.
Interesting framing. It's true in a way, but it also means that people need to learn more in order to contribute meaningfully. A reasonably productive worker used to just be a labourer; then they needed to be literate (which takes a very long time!); then they needed to absorb more and more institutional knowledge and leverage increasingly complex tools.
> 3. However, on longer timescales, one noted that economists’ predictions on employment rates contradicted their own predictions on how much work AI would do. He said he thought economists generally have a status quo bias.
I think people in general have a status quo bias. Even people working within AI spent most of their lives in a non-AI world.
> 4. On the other hand, he thought AI researchers who predicted massive societal revolution generally had an “excitement bias.”
This is pretty much what I believe about AI predictions as well. The closer someone works with AI, the more likely they are to overestimate it. For AI Safety, I think my peers have a bias towards overestimating progress and how short timelines are. However, I'd still prefer to err on the side of overestimation when playing Russian Roulette with humanity's future.
> 7. and, maybe most interestingly and comfortingly (?), as soon as it’s normalized for AI to, e.g., write, illustrate, decide, summarize, compare, research, etc, we’ll turn around and call it bizarre that we ever did those things, and no one will feel particularly bad about it.
I feel weird seeing these opinions. I write and make art occasionally. It's soooo tedious and time-intensive to actually produce things (editing sentences, writer's block, staring at a drawing for 2 hours to figure out you drew the jawline 20% too far to the left, etc.). And honestly, you have to practice a lot just to produce something of passable quality. Seeing people who don't do [X] gatekeep [X] feels weird, because people who actually do [X] are spending 80% of their time on fairly basic, low-level mental tasks.