I proposed the Nonlinear Emergency Fund and Superlinear as a Nonlinear Intern.[1]
I co-founded Singapore's Fridays For Future (featured on Al Jazeera and BBC). After arrests and a year of campaigning, Singapore adopted all of our demands (net zero by 2050, an $80 carbon tax, and fossil fuel divestment).
I developed a student forum with >300k active users and a study site with >25k users. I founded an education reform campaign with the Singapore Ministry of Education.
I proposed both ideas at the same time as the Nonlinear team, so we worked on these together.
What I'm currently planning:
And probably more. See: linktr.ee/menhguin
>"AI is getting more powerful. It also makes a lot of mistakes. And it's being used more often. How do we make sure (a) it's being used for good, and (b) it doesn't accidentally do terrible things that we didn't want."
Very similar to what I currently use!
I've been practising AI Safety messaging for a while, and I've stuck to these principles:
1. Use simple, agreeable language.
2. Refrain from immediately introducing concepts that people already have misconceptions about.
So mine is something like:
1. AI is given a lot of power and influence.
2. Large tech companies are pouring billions into making AI much more capable.
3. We do not know how to ensure this complex machine respects our human values and doesn't cause great harm.
I do agree that this understates the risks associated with superintelligence, but in my experience speaking with laymen, if you introduce superintelligence as the central concept at first, the debate becomes "Will AI be smarter than me?" which provokes a weird kind of adversarial defensiveness. So I prioritise getting people to agree with me before engaging with "weirder" arguments.
I've sent about 5 people to EA VP and AGI SF, and yes, I have thought about how to "get credit".
I think the simplest option would be:
1. An option on applications to Intro Programs/roles that asks "Who referred you to this?"
2. A question on surveys like the annual EA Survey that asks "Which individuals/organisers have been particularly helpful in your EA journey?"
3. I've also thought of prizes or community days dedicated to recognising fellow EAs who have helped you a lot in your journey, but that's a bit more complex to organise well.
Hi!
Just saw this on my feed. I'm not sure if you've already read it, but the book Does Altruism Exist? by David Sloan Wilson is about this exact premise: altruistic/pro-social behaviours and the conditions under which they form a successful evolutionary strategy, both for individuals and for groups. It's written by a biologist, so I think you might get some use out of it!
Personally, I like the book and I think EAs would find it interesting. Effective Altruism has a ton of research examining the Effective part, but far less on the Altruism part. The book rigorously defines terms such as altruism, and examines the contexts in which altruistic individuals and groups can thrive, as well as the risks that could undermine such behaviours.
I upvoted this because AI-related advocacy has become a recent focus of mine. My background is in organising climate protests, and I think EAs have a bit of a blind spot when it comes to valuing advocacy. So it's good to have this discussion. However, I do disagree on a few points.
1. Just Ask: In broad strokes, I think people tend to overestimate exactly how unreasonable and persistent initial objections will be. My simplest rebuttal would be: how do you know these advocates would even disagree with your approach? An approach I'm considering now is to find a decent AI Governance policy proposal, present it to the advocates explaining how it solves their problem, and see who says yes. If half of them say no, you work with the other half. Before assuming the "neo-Luddites" won't listen to reason, shouldn't you ... ask? Present them with options? I don't see why it's not at least worth reaching out to potential allies, and I don't see why it's an irredeemable sin to be angry at something when no one has presented a clear solution. The assumptions being made here are perhaps a little ironic.
2. Counterfactuals: I think by most estimates, anti-AI advocacy only grows from here. Having a lot of structurally unemployed, angry people is historically a recipe for trouble. You then have to consider that reactionary responses will happen regardless of whether "we align with them". If they are as persistently unreasonable as you say they are, they will force bad policy regardless. They will influence mainstream discourse towards their views, and be loud enough to crowd out our "more reasonable" views. I just think it makes a lot of sense to engage these groups early on and make an earnest effort to make our case, because the counterfactual is that they get bad policies passed without our input.
3. False dichotomy of advocates and researchers: I speak more generally here. In my time in climate risk, everyone had an odd fixation on separating climate advocates and researchers.[1] I don't think this split was helpful for epistemics or strategy overall. You ended up with scientists who had the solutions and the epistemics but were generally ignored by the public and policymakers for lack of engagement, and advocates who latched onto poorly-informed, counterproductive radical agendas and were constantly rebutted with "why are we listening to you clueless youngsters and not the scientists (who we ignore anyway)?". It was a constant headache to have two subgroups needlessly divide themselves while the clock ran down. Sure, the advocates were ... not the most epistemically rigorous, and the scientists generally struggled to put across their concerns. But I'd greatly prefer everyone valuing more communication/coordination, not less.
And for my sanity's sake, I'd like the AI risk community to not repeat this dynamic.
I suspect most of this dichotomy was not drawn in good faith, but simply by people uncomfortable with the premise of anthropogenic climate change, throwing out fallacies to discredit whatever arguments they're confronted with in their daily lives.
Strong upvoted because this is indeed an approach I'm investigating in my work and personal capacity.
For other software fields/subfields, upskilling can be done fairly rapidly by grinding through knowledge bases with tight feedback loops. It is possible to become as good as a professional software engineer independently and within a short timeframe.
If AI Safety wants its talent pool to keep up with the AI Capabilities talent pool (which is probably growing much faster than average), researchers, especially juniors, need an easy way to learn quickly and conveniently. I think existing researchers may underrate this, since they're busy putting out their own fires and finding their own resources.
Ironically, it has not been quick and convenient for me to develop this idea to a level where I'd work on it, so thanks for this.
Hi Vaidehi,
Some thoughts, as someone who founded a climate protest movement (a Singapore branch of Fridays For Future), has read a lot of social movement research to inform my decision-making, and is somewhat acquainted with community organising in EA:
Anyway, I just discovered your sequence and theories of change. I agree, and have had similar thoughts for quite a while. As someone who researched member-organised movements and tried to build one as a contingency for the co-founders' imprisonment, I'd say a member-organised structure is difficult for EA to adopt.
That said, I'm a very vocal supporter of EAs learning best practices from others. The climate movement turned climate risk from a niche x-risk into the largest mobilisation of people, capital and resources in human history, and I regularly apply its lessons to planning EA meta/longtermism projects. Would love to talk more on this![4]
This also applies to some branches of FridaysForFuture where organiser status carries significant legal risks (i.e. most places outside the US and EU).
As a side note, I find that in EA, virtue signalling in the technical sense is far less prominent (see: opinions towards protests, intersectionality and veganism), and others have suggested that EA has a Deference Culture. There’s also the elephant in the room where “Core EA” is 70% male while climate activists are 60-70% female, a comparison that is very noticeable and very baffling.
Now that I'm doing the "EA networking" thing, I should be more structured with introductions and with engaging people across the multiple topics/project ideas I have. If anyone has recommendations, please let me know.
Personally, I think the arguments put forth make sense. However, I'd simply caution whoever tries this that they might alienate far more potential allies purely by association with the meat industry. The benefit provided (an alliance of convenience with beef producers) will go away the moment the beef producers no longer consider it expedient, while a past association with them could be a major reputational risk for a very long time.
Not to say that such reputational risks are rational, but they exist.
As someone who had previously made news for a "radical climate protest" in my country back in 2020, I agree with this finding!
I’d like to share my own application of this phenomenon:
In 2019, the wave of global youth climate activism inspired by Greta Thunberg had spread to Singapore. Broadly speaking, Asian countries are underrepresented in climate activism, even among developed countries.[1] Consequently, the inaugural SG Climate Rally was relatively small at ~2,000 participants. I helped organise this rally.
There are a few things to note here:
From an x-risk prevention POV, the idea was to increase the probability of climate action by creating the threat of radical protests to supplement/increase support for moderate advocacy.
Basically, I did not think Radical > Moderate, but rather Radical+Moderate > Moderate Only.
I calculated the rough Expected Value (EV) of my climate protest as follows:
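(Rough structure only; the symbols below are illustrative placeholders rather than the exact figures I plugged in.)

$$\text{EV} \approx p_{\text{shift}} \times V_{\text{policy}} - C_{\text{organisers}} - p_{\text{backfire}} \times D_{\text{backfire}}$$

Here $p_{\text{shift}}$ is the marginal increase in the probability of stronger national climate policy attributable to the protest, $V_{\text{policy}}$ the value of that policy shift, $C_{\text{organisers}}$ the expected legal and personal costs to organisers, and the last term the expected damage if the protest backfires on public opinion.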
So, with about 2-3 orders of magnitude margin of error, I figured it was high-EV. After a big controversy and a year of organising, Singapore announced climate goals that included net zero by 2050 and an $80 carbon tax.
I think a lot of people misinterpret advocacy, or at least climate advocacy.
Anyway, just sharing my (hopefully relevant) experience. I did a lot of literature review on social movement research while organising climate protests, so even this is a very small fraction of my thoughts on the topic. People seem to assume that activists are impulsive and have poorly-crafted theories of change, so it's hard to elaborate on reasoning when a critic just asserts that you're dumb.
Happy to engage with other discussions on this topic! Nowadays I work at Nonlinear mainly on AI Safety/meta stuff, so climate activism doesn't come up super often other than cross-applying x-risk theories of change.
The reason why is worthy of its own research/thread.
Ride the current wave of AI skepticism from people worried about it being racist, or about being replaced and left unemployed, to lobby for significantly more government involvement and slow down progress (like the FDA in medicine).
I agree! In recent days, I've been soundboarding an idea of mine:
Current Gaps:
1. Policy around services provided by AI-generated content (AIGC) is probably not going to be good within the next decade, despite the speed with which AI will begin automating tasks and industries. See: social media and crypto policy.
2. The AI Safety community currently struggles with presenting strong, compelling value propositions or near-term inroads into policymaking circles. This is consistent with other x-risk topics. See: climate and pandemic risk.
Proposition: the EA community gathers law and tech people together to formulate an AIGC policy framework. This will require ~10 tech/law people, which is quite feasible as an EA project.
1. Formulating AIGC policy will establish credibility and political capital to tackle alignment problems
2. AIGC is the most publicly understandable framing of AI risk, allowing AIS to reach mainstream appeal
3. Playing into EA’s core competency of overanalysing problems
4. Likely high first-mover advantage: if EA can set the tone for AI policy discourse, it will mitigate the misconceptions people form about AI as a new technology, which of course benefits AIS in the long run
Coming from a climate advocate background, I think this is the least low-probability way for EA to engage the public and policymakers on AIS. It seeks to answer: “How do we get politicians to take EA’s AIS stances seriously?”
I find that some AIS people I've talked to don't immediately see the value of this idea. However, my context is that, having been a climate advocate, I learned of an incredibly long history of scientists' input being ignored simply because the public and policymakers did not prioritise climate risk work.
Ultimately, it was advocacy, predominantly by youth, that mobilised institutional resources and demand to the level required. I highly suspect this will hold true for AI Safety, and I hope that this time, the x-risk community doesn't make the same mistake of undervaluing external support. So this plan is meant to provide a value proposition for AI Safety that non-AIS people understand better.
So far, I haven't been able to make much progress on this idea. The problem is that I am in neither the law field nor the technical AIS field (something I hope to work on next year), so if it happens, I essentially need to find someone else to spearhead it.
Anyway, I posted this idea publicly because I've been procrastinating on developing it for ~1 week, so I figured it was better to send it out into the ether and see if anyone feels inspired, rather than just let it sit in my drafts. Do reach out if you or anyone you know might be interested!
Wait, is this not the case? 0.0
I worked in some startups and a business consultancy, and this is, like, the first thing I learned in hiring/headhunting. While writing up Superlinear prize ideas, I made a few variations of SEO prizes targeting mid- to senior-level experts through terms such as field-specific jargon, upcoming conferences, common workflow queries and new regulations.