
Minh Nguyen

Platform Development Intern @ Nonlinear
648 karma · Joined Jul 2022 · Pursuing an undergraduate degree · Singapore
linktr.ee/menhguin

Bio


I proposed the Nonlinear Emergency Fund and Superlinear as a Nonlinear intern.[1]

I co-founded Singapore's Fridays For Future (featured on Al Jazeera and BBC). After arrests and a year of campaigning, Singapore adopted all our demands (Net Zero 2050, an $80 carbon tax, and fossil fuel divestment).

I developed a student forum with >300k active users and a study site with >25k users. I founded an education reform campaign with the Singapore Ministry of Education.

  1. ^

    I proposed both ideas at the same time as the Nonlinear team, so we worked on these together.

How others can help me

Projects I'm planning:

  1. A FridaysForFuture for AI Safety / AIS advocacy (!!!)
  2. An AI Generated Content (AIGC) policy consultancy
  3. A scalable EA Model UN framework
  4. Creating video content on EA/longtermism/x-risk
  5. EA digital marketing/outreach/SEO funnels
  6. Tools for EA job searching and AI Safety research
  • Plus an EA Common Application, an AIS standardised test, etc.

And probably more. See: linktr.ee/menhguin

How I can help others

If it helps others, I will help you build it.[1]

  1. ^

    OK, assuming I'm not completely swamped with work. I'll definitely give input, though.

Posts (2)


Comments (73)

a very minor and inconsequential point:

I initially read this as "Philosophers Against Malaria Fundraisers" and thought it was gonna be an interesting essay about how AMF fundraisers are bad, actually.

I find that asking EAs (or anyone, really) for open-ended feedback tends not to yield novel insight by default. EAs tend to have high openness, and as long as something passes the bar of "this has a plausible theory of change, no obviously huge downside, and is positive-EV enough to be worth exploring", it isn't subject to particularly intense scrutiny. Consider also that you have thought about the problem for days or weeks, whereas they've only thought about it for maybe 10-20 minutes.

Haven't found a 100% perfect solution, but usually I express my 2-3 most pressing doubts and ask them whether there's a possible solution to those doubts. It scopes the question towards more precise, detailed and actionable feedback.

Alternatively, if the person has attempted a similar project, I would ask them what goals/principles they found most important, and 2-3 things they wish they knew before they'd started.

I will say I also never use the Drowning Child argument, for several reasons:

  • I generally don't think negative emotions like shame and guilt are a good first impression/initial reason to join EA. People tend to distance themselves from sources of guilt. It's fine to mention the drowning child argument maybe 10-20 minutes in, but I prefer to lead with positive associations.
  • I prefer to minimise thought experiments/hypotheticals in intros, and instead use examples relatable to the other person. IMO, thought experiments make the ethical stakes seem too trivial and distant.

What I often do is figure out which cause areas the other person might relate to based on what they already care about, then describe EA as fundamentally "doing good, better" in the sense of getting people to engage more thoughtfully with values they already hold.

On community building (CB), my views, half informed by EA community builders and half personal opinion:

  1. Very casual events - If you are holding no events for a long time and don't have much capacity, just hold low-stakes casual events and follow up with highly-engaged people afterwards. Highly-engaged people tend to show up/follow up several times after learning about EA anyway. 80-90% of the time, I think having some casual events every few weeks is better than no casual events.
  2. Bigger events - Try to direct highly-engaged people to bigger and/or more specialised events. The EA community is big and diverse, and letting people know other events exist lets them self-select better. When I first explored beyond EA Singapore, I spent 2 months straight learning about every EA org and resource in existence, individually reviewing all the Swapcard profiles at every EAG. That was absolutely worth the effort, IMO.[1]
  3. 1-on-1s are probably still important - 1-on-1s with someone of very similar interest areas or career trajectories are the most valuable experiences in EA, in my opinion. Only 10% of 1-on-1s are like this, but they more than make up for the 90% that don't really go anywhere. As much as I try to optimise, this seems to be a numbers game of just finding and meeting a lot of potentially interesting people.[2]
  4. Online resources - For highly-engaged EAs, important information should be online-first. I'm of the opinion that highly-engaged/agentic new EAs tend to read a lot online, and can gain >80% of the same field-specific knowledge reading on their own. This especially holds true in AI Safety, which is like ... code and research that's all publicly available short of frontier models. I think events should be for casual socials, intentional networking and accountability+complex coordination (basically, coworkers).
  1. ^

    If you want the 80/20 for AI Safety: check out aisafety.training and aisafety.world; check the EA Forum, LessWrong and Alignment Forum once a week (~1 hour/week); check the 80k job board and EA Opportunities Board once a week (~20 minutes/week); and review forum tags for things like prizes, job opportunities and research programs to see what was run last year and will be run again this year.

    It is possible to capture all open opportunities this way. The rest is just researching interesting orgs, seeing which ones you vibe with, and engaging with them. This is just for AI Safety; for other cause areas I'd expect a similar amount of time spent passively checking.

  2. ^

    My personal view is people should slightly prioritise "potentially interesting" over "potentially useful". The few times I've met EAs just because they're high-ranking, the conversation is usually generic and could have been had by Googling and emailing/texting.

I have heard many people argue against organising relatively simple events 

I'm actually very surprised to hear this. What does the "common view" presume then?

Personally, I see 3 tiers of events:

  1. Casual, low-commitment, low-stakes events
  2. Big EA conferences, which I find quite valuable for meeting lots of people intentionally and socially
  3. Professionally-focused events (research fellowships, incubators, etc.)

I think "simple" events like 1 are great for socialising and meeting new people. While 2 and 3 get more done, I don't think the community would feel as welcoming if the only events occurring were ones where you had to be fully professional.

Sometimes I still want to interact with EAs, but without the expectation of "meeting right" or "networking". I suspect this applies especially to introverts and beginners. Even just going to a conference with the expectation of booking lots of 1-on-1s vs just chilling feels very different.

Perhaps the thinking was that FTX is not directly relevant to this specific post?

But I agree, clearly FTX is on the minds of many people reading this, so avoiding the elephant in the room is not working.

Found this on Reddit: Anxious_Bandicoot126 comments on Sam Altman is leaving OpenAI (reddit.com)

I feel compelled as someone close to the situation to share additional context about Sam and company.

Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.

His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.

When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.

Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.

Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.

Obviously just speculation for now, but seems plausible. The moment the GPT store was released I thought:

"wow that's really good for business ... wow that's really bad for alignment"

Honestly, I like that this essay tries to engage with and understand EA more than other critiques I've seen. Usually when I see an article about EA's association with billionaires/elites, it tends to be a lot less substantive.

I do agree that EA decision making generally biases towards privilege. EA orgs pretty openly bias towards recruiting and hiring from prestigious universities, which are significantly overrepresented in EA demographics. I also recall data from Spencer Greenberg that essentially placed EAs as ideologically centrist, as opposed to the more left-leaning social movements popular with similar demographics.

I think her critique here is actually quite sound:

Effective altruism, when it began, had ascetic norms designed to constrain power, but slowly and surely they have been eroded by arguments about ‘effectiveness’, and as they receded the absence of strong institutional constraints became apparent. This leaves effective altruism and longtermism exposed to what we may call ‘the despotism trap’.

It rejects constraints against these visionaries as the pettiness of small minds. They drape themselves in the regalia of liberty to challenge government, but their interest is in being able to exercise uncontrolled power. The excuse is that the ends will justify the means. We can see how these attitudes have influenced effective altruism when it overlaps with ‘philanthrocapitalism’, whereby people like Gates bring the mechanisms of the market into philanthropy on the grounds that it is more effective.

I think this is a pretty accurate description. EA and EA-adjacent orgs are generally funded by rich philanthropists, and receive less support from, and collaborate less with, governmental bodies. A lot of EA-adjacent or recommended orgs exist in the private sector. I think at the very least, we have to acknowledge that funding sources do affect culture. EA happens to be disproportionately funded by tech billionaires, so EA is more likely to be sympathetic to the views and ideologies of tech billionaires.

I think where outside critics get it wrong is the interpretation that because EA is funded by social elites, it actively sides/colludes with social elites. I'm not sure how to convey it, but outside coverage seems to imply that EA is essentially a front for social elites to gain more power without actually doing any good, when my take is that EA began on elite college campuses, and so the most accessible sources of support and funding just happened to be social elites. Most notably, coverage of EA often focuses more on ties with social elites than on the actual work done by EA orgs in various cause areas. The average article I read contains maybe 1-2 brief sentences about the fairly complex work/research done by EA orgs, and the rest of the article just speculates about its ties with rich elites.

When founding something new, how do you balance money and impact?

I’m (attempting to be) an EA entrepreneur, but I find it difficult to balance finding product-market fit + scaling with actual impact. At different times, I’ve found myself focusing too much on building something “big” that doesn’t have any actual object-level impact, or being too perfectionist about optimising impact versus simply doubling down on something that works and is influential/makes lots of money.

I haven’t figured it out and don’t expect this to be an easy answer, so just curious what your thoughts are on the resource/impact tradeoffs in decision making.

Just gonna weigh in on some of these from my time researching this stuff at Nonlinear.

A common knowledge spreadsheet of directly responsible individuals for important projects.

Strongly agree. It's logistically easy to do; one person could cover 80% of EA projects within a week. I've been using the AI Existential Safety Map (aisafety.world) a lot in my list of follow-ups for 1-on-1s.

In the long run, a well-maintained wiki similar to/synced with the EA Opportunities Board (which I also heavily recommend) could make this really comprehensive.

More “public good”-type resources on the state of different talent pipelines and important metrics (e.g., interest in EA).  

I read every EA survey I see. They're often quite interesting and useful. I wouldn't say they're neglected since EAs do seem to love surveys, but usually a net positive.

More coherent and transparent communication about the funding situation/bar and priorities.

I am of the opinion that every EA funder should be as transparent and detailed about their funding bar/criteria as possible. Unlike for-profits/VCs, I don't see a strong reason for secrecy other than infohazards. Being explicit helps applicants understand what funders look for, which benefits both funders and applicants. I believe that applicant misconceptions about "what funders want" can hinder EA a lot in the long run due to mismatched incentives. I see a lot of compelling project directions censored/discarded in the early stages simply because applicants think they should be more generic (because being more generic works well in conventional success pathways).

More risk management capacity for EA broadly as a field and not just individual orgs. 

I really liked this post Cash and FX management for EA organizations — EA Forum (effectivealtruism.org) by @JueYan.

Advanced 80K: Career advice targeted at highly committed and talented individuals.

Agree, but I never figured out how to scalably execute this. Usually, if someone has the skillset and motivation to do really well in EA, my priority is to (1) point them to helpful resources to work through themselves, and (2) try to link them with someone doing what they're trying to do.

The problem is that it seems hard to predict in advance who they'd consider a valuable connection. I think none of my most valuable connections in EA so far would've been referred to me by someone else.

Tractable idea: A list of helpful links sent to EAGx and EAG attendees post-conference.

A survey to identify why high-value potential members "bounce off" EA.

I actually bounced off EA for 3 years (2019-2022). For me, the big reason was that I couldn't find any follow-up steps to pursue (especially coming from Singapore). My experience within EA has been very inspiring and exciting interactions followed by not much follow-up (guidance, next steps, pursuing opportunities, encouraging people to start projects, etc.).

[just gonna agree with all the AI Safety points, they've all come up before in my discussions]


Evaluation and Accountability

Shoutout to @Mo Putera who is working on this.

Media and Outreach

Casual observation that I can't recall a single EA social media account that I browse simply because it's fascinating and not because I wanna support EA on social media.

And I'm into weird stuff, too. I just binged hour-long videos on Soviet semiconductors and the history of Chang'an.

Incubators: One respondent stated that incubators are "super hard and over-done," mentioning that they are too meta and often started by people without entrepreneurial experience.

Agree; this point has been discussed in detail before: What we learned from a year incubating longtermist entrepreneurship — EA Forum (effectivealtruism.org)

I think it's just hard to do well because there are so many points of failure, it takes a long time for any results to show, and it requires both social skills and technical expertise. That said, I do think a longtermist version of Charity Entrepreneurship seems promising to pilot (actually, I'm gonna bring this up to Kat Woods right now).

Fastgrants and other quick funding mechanisms.

I really like Manifund as a platform!
