Minh Nguyen

Platform Development Intern @ Nonlinear
Pursuing an undergraduate degree
314 · Singapore · Joined Jul 2022
linktr.ee/menhguin

Bio


I proposed the Nonlinear Emergency Fund and Superlinear as a Nonlinear intern.[1]

I co-founded Singapore's Fridays For Future (featured on Al Jazeera and the BBC). After arrests + 1 year of campaigning, Singapore adopted all our demands (Net Zero by 2050, an $80 carbon tax, and fossil fuel divestment).

I developed a student forum with >300k active users and a study site with >25k users. I founded an education reform campaign with the Singapore Ministry of Education.

  1. ^

    I proposed both ideas at the same time as the Nonlinear team, so we worked on these together.

How others can help me

Projects I'm planning:

  1. FridaysForFuture for AI Safety/ AIS advocacy (!!!)
  2. An AI Generated Content (AIGC) policy consultancy
  3. A scalable EA Model UN framework
  4. Creating video content on EA/longtermism/x-risk
  5. EA digital marketing/outreach/SEO funnels
  6. Tools for EA job searching and AI Safety research
  • Plus an EA Common Application, an AIS standardised test, etc.

And probably more. See: linktr.ee/menhguin

How I can help others

If it helps others, I will help you build it.[1]

  1. ^

    OK, assuming I'm not completely swamped with work. I'll definitely give input tho.

Comments (46)

Wait, is this not the case? 0.0

I worked in some startups and a business consultancy, and this is like, the first thing I learned in hiring/headhunting. While writing up Superlinear prize ideas, I made a few variations of SEO prizes targeting mid- to senior-level experts via search terms such as field-specific jargon, upcoming conferences, common workflow queries and new regulations.

>"AI is getting more powerful. It also makes a lot of mistakes. And it's being used more often. How do we make sure (a) it's being used for good, and (b) it doesn't accidentally do terrible things that we didn't want."

Very similar to what I currently use!

I've been working on AI Safety messaging for a bit, and I've stuck to these principles:

1. Use simple, agreeable language.
2. Refrain from immediately introducing concepts that people have preconceived misconceptions about.

So mine is something like:
1. AI is given a lot of power and influence.
2. Large tech companies are pouring billions into making AI much more capable.
3. We do not know how to ensure this complex machine respects our human values and doesn't cause great harm.

I do agree that this understates the risks associated with superintelligence, but in my experience speaking with laymen, if you introduce superintelligence as the central concept at first, the debate becomes "Will AI be smarter than me?" which provokes a weird kind of adversarial defensiveness. So I prioritise getting people to agree with me before engaging with "weirder" arguments.

I've sent about 5 people to EA VP and AGI SF, and yes, I have thought about how to "get credit".

I think the simplest option would be:

1. An option on applications to Intro Programs/roles that asks "Who referred you to this?"
2. A question on surveys like the annual EA Survey that asks "Which individuals/organisers have been particularly helpful in your EA journey?"
3. I've also thought of prizes or community days dedicated to recognising fellow EAs who have helped you a lot in your journey, but that's a bit more complex to organise well.

Hi!

Just saw this on my feed. I'm not sure if you've already read it, but the book Does Altruism Exist? by David Sloan Wilson is about this exact premise: altruistic/pro-social behaviours and the conditions under which they comprise a successful evolutionary strategy, both for individuals and groups. It's written by a biologist, so I think you might find some use out of it!

Personally, I like the book and I think EAs would find it interesting. Effective Altruism has a ton of research examining the Effective part, but far less on the Altruism part. The book rigorously defines terms such as altruism, and examines the contexts under which altruistic individuals and groups can thrive, as well as the risks that could undermine such behaviours.

I upvoted this because AI-related advocacy has become a recent focus of mine. My background is in organising climate protests, and I think EAs have a bit of a blindspot when it comes to valuing advocacy. So it's good to have this discussion. However, I do disagree on a few points.

1. Just Ask: In broad strokes, I think people tend to overestimate exactly how unreasonable and persistent initial objections will be. My simplest rebuttal would be: how do you know these advocates would even disagree with your approach? An approach I'm considering now is to find a decent AI Governance policy proposal, present it to the advocates explaining how it solves their problem, and see who says yes. If half of them say no, you work with the other half. Before assuming the "neo-Luddites" won't listen to reason, shouldn't you ... ask? Present them with options? I don't see why it's not at least worth reaching out to potential allies, and I don't see why it's an irredeemable sin to be angry at something when no one has presented a clear solution. The assumptions being made here are perhaps ironic.

2. Counterfactuals: I think, by most estimates, anti-AI advocacy only grows from here. Having a lot of structurally unemployed, angry people is historically a recipe for trouble. You then have to consider that reactionary responses will happen regardless of whether "we align with them". If they are as persistently unreasonable as you say they are, they will force bad policy regardless. They will influence mainstream discourse towards their views, and be loud enough to crowd out our "more reasonable" views. I just think it makes a lot of sense to engage these groups early on, and make an earnest effort to make our case. Because the counterfactual is that they get bad policies passed without our input.

3. False dichotomy of advocates and researchers: I speak more generally here. In my time in climate risk, everyone had an odd fixation on separating climate advocates and researchers.[1] I don't think this split was helpful for epistemics or strategy overall. You ended up with scientists who had all the solutions and epistemics, whom the public/policymakers generally ignored for lack of engagement, and advocates who started latching onto poorly-informed and counterproductive radical agendas, and were constantly rebutted with "why are we listening to you clueless youngsters and not the scientists (who we ignore anyway)?". It was just a constant headache to have two subgroups needlessly divide themselves while the clock ran down. Like sure, the advocates were ... not the most epistemically rigorous. And the scientists generally struggled to put across their concerns. But I'd greatly prefer if everyone valued more communication/coordination, not less.

And for my sanity's sake, I'd like the AI risk community to not repeat this dynamic.

  1. ^

    I suspect most of this dichotomy was not made in good faith, but simply by people uncomfortable with the premise of anthropogenic climate change, throwing out fallacies to discredit any arguments they're confronted with in their daily lives.

Strong upvoted because this is indeed an approach I'm investigating in my work and personal capacity.

For other software fields/subfields, upskilling can be done fairly rapidly, by grinding knowledge bases with fast feedback loops. It is possible to become as good as a professional software engineer independently and in a short timeframe.

If AI Safety wants to develop its talent pool to keep up with the AI Capabilities talent pool (which is probably growing much faster than average), researchers, especially juniors, need an easy way to learn quickly and conveniently. I think existing researchers may underrate this, since they're busy putting out their own fires and finding their own resources.

Ironically, it has not been quick and convenient for me to develop this idea to a level where I'd work on it, so thanks for this.

Hi Vaidehi,

Some thoughts, as someone who has founded a climate protest movement (a Singapore branch of Fridays For Future), has read a lot of research on social movements to inform my decision making, and is also somewhat acquainted with community organising in EA:

  1. The first difference I’ve seen between EA organising and climate organising is the initial barrier to new member participation. You’ve cited Extinction Rebellion as an example of member-led participation. From beginning to end, the main thing a Rebel needs to do as an active, contributing member is show up to civil disobedience actions. While this requires personal risk and planning, it’s significantly more straightforward and well-defined than what Core EA members would usually go through. Even “beginner” EAs have to read incredibly lengthy introductions to EA. Of course, one could correctly point out that EA has significantly higher standards for epistemic rigour and sustained long-term contribution. As you note in your earlier posts in the sequence, more steps requiring guidance/support/feedback/expertise drift towards more centralisation.
  2. The second difference is the different demands placed on core members. One thing that has fascinated me ever since I “transitioned” from the climate movement to the EA movement is how, justice sensitivity and expansive altruism aside, the two select for completely different traits. EA essentially wants highly-engaged members to do one of two things: contribute by working in EA long-term (research, operations etc.), or donate, which also skews long-term. These behaviours generally fit into a “normal” leadership hierarchy. Climate/social cause movements, meanwhile, are generally dealing with well-defined and contentious issues, where the theory of change relies on public shows of mass support. The most highly engaged members may take on high risk of social/legal/physical repercussions, and participation correlates with risk tolerance and disagreeableness. While participation does rely on strong ties,[1] the selection pressure is practically the polar opposite. Rebels are more inclined to follow those willing to face prison for the cause,[2] while EAs select for “potential for impact”, which correlates with proxies that are high-status and suitable for hierarchies (degree qualifications, technical skill, references from other EAs).[3]

Anyway, I just discovered your sequence and theories of change. I agree, and have had similar thoughts for quite a while. As someone who researched member-organised movements and tried to build one as a contingency for the co-founders’ imprisonment, I’d say a member-organised structure is difficult for EA to adopt.

That said, I’m a very vocal supporter of EAs learning best practices from others. The climate movement turned climate risk from a niche x-risk into the largest mobilisation of people, capital and resources in human history, and I regularly apply its lessons to planning EA meta/longtermism projects. Would love to talk more on this![4]

  1. ^
  2. ^

    This also applies to some branches of FridaysForFuture where organiser status carries significant legal risks (i.e. most places outside the US and EU).

  3. ^

    As a side note, I find that in EA, virtue signalling in the technical sense is far less prominent (see: opinions towards protests, intersectionality and veganism), and others have suggested that EA has a Deference Culture. There’s also the elephant in the room where “Core EA” is 70% male while climate activists are 60-70% female, a comparison that is very noticeable and very baffling.

  4. ^

    Now that I’m doing the “EA networking” thing I should be more structured with introductions+engaging people across multiple topics/project ideas I have. If anyone has recommendations please let me know.

Personally, I think the arguments put forth make sense. However, I'd simply like to caution whoever tries this that they might alienate far more potential allies purely by association with the meat industry. The benefit provided - an alliance of convenience with beef producers - will go away the moment the beef producers no longer consider it expedient, while a past association with the beef producers could be a major reputation risk for a very long time.

Not to say that such reputation risks are rational, but they exist.

As someone who had previously made news for a "radical climate protest" in my country back in 2020, I agree with this finding!

I’d like to share my own application of this phenomenon:

Case study: Climate protesting in Singapore

In 2019, the wave of global youth climate activism inspired by Greta Thunberg had spread to Singapore. Broadly speaking, Asian countries are generally underrepresented in climate activism, even in developed countries.[1] Consequently, the inaugural SG Climate Rally was relatively small at ~2,000 participants. I helped organise this rally.

There's a few things to note here:

  1. Singapore is known for very strict laws restricting protests. Under the Public Order Act introduced in 2008, any person assembling in a public place expressing support for or against a cause must register with the police for a permit. Long story short, even a solo protest must be pre-approved by the police. And the police don’t approve topics deemed controversial … yeah.
  2. Singaporeans have a very negative opinion of protesting. It’s a chicken-and-egg thing, but essentially in Singapore protesting carries a social taboo. Protests generally considered “moderate” in other countries would be considered “radical” in Singapore.
  3. Singaporean climate advocacy organisations were all “moderate”. There was no “radical wing” of climate activism in Singapore. In the Singaporean context, standing in the street alone for 15 minutes to protest for climate action would be “radical”. Our Overton Window is very different.
  4. Singapore had very insufficient climate commitments. In early 2020, after the SG Climate Rally, Singapore announced climate goals that included net zero timelines beyond 2050 (“some time in the latter half of this century”, i.e. no actual timeline), and a carbon tax of $5.

Decision Matrix

  1. If I continued with “Moderate Groups”, it seemed high-probability that commitments would remain insufficient.
  2. If I branched off into “Radical Tactics”, then policymakers would have to deal with both “Moderate Groups” and “Radical Tactics”. Following the same logic as outlined in this post, this suggested a higher probability of improved climate commitments.

From an x-risk prevention POV, the idea was to increase the probability of climate action by creating the threat of radical protests to supplement/increase support for moderate advocacy.
Basically, I did not think Radical > Moderate, but rather Radical+Moderate > Moderate Only.

EV calculations of "Radical" Climate Protests

I calculated the rough Expected Value (EV) of my climate protest as follows:

  • Assuming 1% chance of counterfactually affecting climate discourse (favourable due to lack of protests in SG increasing marginal benefit of 1 individual protest)
  • Excess deaths from 4.0C vs 1.5C: 4 million/year
  • Singapore's contribution to excess deaths: 1/1,000 of the total = 4,000/year
  • EV: 1% × 4,000 = 40 expected deaths averted/year ≈ 1,000 over 25 years
  • Risk: major tail risk of 1-2 years' jail, a criminal record, and a 5% chance of exile

So, with about 2-3 orders of magnitude margin of error, I figured it was high-EV. After a big controversy and a year of organising, Singapore released climate goals that included a net zero goal by 2050 and an $80 carbon tax.
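To make the back-of-the-envelope arithmetic above easy to check, here is a minimal sketch in Python with each stated assumption as an explicit variable. The figures are the same rough estimates from the bullet list above, not measured data.

```python
# Rough EV sketch for the protest decision above; all inputs are stated assumptions.
p_influence = 0.01                    # assumed 1% chance of counterfactually affecting climate discourse
excess_deaths_per_year = 4_000_000    # assumed excess deaths/year at 4.0C vs 1.5C warming
sg_share = 1 / 1000                   # assumed Singapore's share of those excess deaths
horizon_years = 25                    # assumed time horizon

sg_excess_deaths = excess_deaths_per_year * sg_share   # 4,000/year attributable to Singapore
ev_per_year = p_influence * sg_excess_deaths           # 40 expected deaths averted/year
ev_total = ev_per_year * horizon_years                 # ~1,000 over 25 years

print(f"~{ev_per_year:.0f} expected deaths averted/year, ~{ev_total:.0f} over {horizon_years} years")
```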

Further thoughts

I think a lot of people misinterpret advocacy, or at least climate advocacy.

  1. In general, people don't like activism/protesting. More broadly, people are extra skeptical of ideas requiring high commitment that imply moral judgement. You can see this with EA, veganism, protesting, donating etc. People just don't respond well to the implicit premise that "Because I haven't been doing this, therefore I am immoral", and instead it's more comfortable to go with "I really want to believe this person is wrong and misguided". This applies even between activities: for example, EAs feeling awkward discussing veganism/donations/protests with other EAs, regardless of the actual EV of the actions. There's a very valid discussion to be had regarding the efficacy of advocacy/protest campaigns, but I'm usually wary of the extra skepticism I get just by virtue of being a climate activist and the negative connotations people have surrounding that.
  2. Critics often ignore “moderate” groundwork, and then criticise a lack of moderate groundwork. A common criticism I hear to this day is “Why don’t you do [X] instead of [Y]”. X is usually implied as something vaguely less radical. However, in my experience, people who do “radical” advocacy often have years of experience in “moderate” advocacy, even simultaneously doing both. I’d say 99% of my work involved normal stuff like outreach to policymakers, organising petitions and lobbying. I think people just assume “radicals” dismiss “moderates”, when in fact radicals often respect and work closely alongside the moderates who have always comprised 99% of the climate movement, but aren’t reported on.
  3. People just assume activists are attention-seeking. This one, I don’t fully get. Nowadays, there are countless ways to optimise for attention that have no downside risk. In fact, I messaged the most attention-seeking people I knew and asked them to join, but none of them did. Instead, it was usually people who were extremely anxious about climate risk and had a very high justice sensitivity. Interestingly, all of them were either LGBTQ+ or neurodivergent.

Anyway, just sharing my (hopefully relevant) experience. I did do a lot of social movement research literature review while organising climate protests, so even this is a very small fraction of my thoughts on the topic. People seem to assume that activists are impulsive and have poorly-crafted theories of change, so it's hard to elaborate on reasoning when a critic just asserts that you're dumb.

Happy to engage with other discussions on this topic! Nowadays I work at Nonlinear mainly on AI Safety/meta stuff, so climate activism doesn't come up super often other than cross-applying x-risk theories of change.

  1. ^

    The reason why is worthy of its own research/thread.

Ride the current wave of AI skepticism by people worried about it being racist, or about being replaced and left unemployed, to lobby for significantly more government involvement and to slow down progress (like the FDA in medicine).

I agree! In recent days, I've been soundboarding an idea of mine: 


Idea: AI Generated Content (AIGC) Policy Consultancy

Current Gaps:
1. Policy around services provided by AIGC is probably not gonna be good within the next decade, despite the speed with which AI will begin automating tasks and industries. See: social media, crypto policy.

2. The AI Safety community currently struggles to present strong, compelling value propositions or make near-term inroads into policymaking circles. This is consistent with other x-risk topics. See: climate and pandemic risk.


Proposition: The EA community gathers law and tech people together to formulate an AIGC policy framework. This will require ~10 tech/law people, which is quite feasible as an EA project.


Benefits:

1. Formulating AIGC policy will establish credibility and political capital to tackle alignment problems

2. AIGC is the most publicly understandable way to present AI risk to the public, allowing AIS to reach mainstream appeal

3. Playing into EA’s core competencies of overanalysing problems

4. Likely a high first-mover advantage: if EA can set the tone for AI policy discourse, it will mitigate misconceptions about AI as a new technology, which of course benefits AIS in the long run

Further Thoughts

Coming from a climate advocate background, I think this is the most viable way for EA to engage the public and policymakers on AIS. It seeks to answer “How do we get politicians to take EA’s AIS stances seriously?”

I find that some AIS people I've talked to don't immediately see the value of this idea. However, my context is that, having been a climate advocate, I learned of an incredibly long history of scientists' input being ignored simply because the public and policymakers did not prioritise climate risk work.

It was ultimately advocacy, predominantly by youth, that mobilised institutional resources and demand to the level required. I highly suspect this will hold true for AI Safety, and I hope that this time the x-risk community doesn't make the same mistake of undervaluing external support. So this plan is meant to provide a value proposition for AI Safety that non-AIS people understand better.

So far, I haven't been able to make much progress on this idea. The problem is that I am neither in the law field nor the technical AIS field (something I hope to work on next year), so if it happens, I essentially need to find someone else to spearhead it.

Anyway, I posted this idea publicly because I've been procrastinating on developing it for ~1 week, so I figured it was better to send it out into the ether and see if anyone feels inspired, rather than just let it sit in my Drafts. Do reach out if you or anyone you know might be interested!
