All of yiyang's Comments + Replies

EA views on the AUKUS security pact?

So perhaps the idea behind your first bullet point from the Economist is that a balanced power dynamic reduces either side's credence that conflict will help their position?

Yes, that's right! 

And the follow up points about China's economic influence tilt the balance in China's favour, thereby raising again the chances of conflict? 

Yes, but it's more like China's economic influence has tilted the balance in China's favour for some years now (e.g. the Belt & Road Initiative). It's only recently with AUKUS that there's more of a balance betwee...

EA views on the AUKUS security pact?

Key takeaways from The Economist's latest briefing:

  • ASEAN members probably benefit from a balance of power between the US and China, so AUKUS tips the scale slightly towards more balance. However, there is also a short history of flip-flopping support (e.g. Philippines preferring China at first then the US, Malaysia not liking the nine-dash line but still kowtowing to China). 
  • "But China’s gambit makes stark the fact that America is unable to match it. And its lack of economic leadership remains, in the words of Bilahari Kausikan, Singapore’s former top
...
DavidZhang (16d): This is a good analysis! Just to extend / build on your argument, the key thing I'm interested in is the probability and extent of any armed conflict. There is a lot of game theory involved with this, but crudely speaking conflict can arise when one side sees an advantage in attacking first. This could be because they hold a stronger-but-not-dominant position or a weaker-but-not-crushed position, as it is in these positions that the payoffs to conflict are highest. So perhaps the idea behind your first bullet point from the Economist is that a balanced power dynamic reduces either side's credence that conflict will help their position? And the follow up points about China's economic influence tilt the balance in China's favour, thereby raising again the chances of conflict?
Learning, Knowledge, Intelligence, Mastery, Anki - TYHTL post 2

Hi Alexander, thanks for writing this up!

Some context: I used Anki for 1-2 years, completed the "Learning How to Learn" MOOC and read the book it was based on, taught math and English to 13-16 year olds for 2 years, and have given EA presentations in Malaysia and previously in Singapore. I'm currently running EA Virtual Programs (I noticed that you're in the intro program!). FYI, my opinions are mine and not CEA's.

In conjunction with "learning how to learn better", "learning how to prioritise which learning strategy works for specific scenarios" see...

Can the EA community copy Teach for America? (Looking for Task Y)

I think this sounds right! This makes me feel like we should also pay particular attention to making the facilitator experience great too.

Organising local intro EA programs can also be a great Task Y candidate. 

There will now be EA Virtual Programs every month!

Hi Michael!

I'm interested in running a local in-person program at my university from September to October with the virtual program as overflow capacity, in case our capacity for in-person cohorts isn't enough to accept all quality applicants. Would that setup be possible?

Yes, just direct people who are not able to join your local program to EA VP's website! And tell them to state in the application form that they want to be in a cohort with other people from the same uni.  

Also, is there a reason that the program is no longer called a fellowship?

I spo...

michaelchen (3mo): It seems inconvenient if applicants potentially have to fill out the Virtual Programs application form too and receive a second acceptance/rejection decision—could we have just one application form for them to fill out and one acceptance/rejection decision notification? I was thinking that hopefully we could have something like the following process:

  • Have applicants apply through the EA Virtual Programs form, or have a form specific to our chapter which puts data into the EA Virtual Programs application database. (I don't know enough about Airtable to know whether this is possible or unrealistic.)
  • Include a multiple-choice application question about whether they prefer in-person or virtual. I think we can assume by default that Georgia Tech applicants prefer to be with other Georgia Tech students—or at least that should help with building the community at Effective Altruism at Georgia Tech.
  • Tell EA Virtual Programs how many in-person cohorts we could have and the availability of the in-person facilitators. Perhaps the facilitators could fill out the regular EA Virtual Programs facilitator form but with some info about whether or not they can facilitate on-campus.
  • EA Virtual Programs assigns people to in-person or virtual cohorts.
  • Something extra that might be nice: If they were rejected due to limited capacity and their application answers were not bad, automatically offer them an option to be considered for the next round (for EA Georgia Tech, I'm thinking we'd have rounds in September–October and February–March). If it is true that people who are rejected tend not to reapply or engage with the EA group because they might feel discouraged, then it seems important to try to minimize how many people get a rejection from the Intro EA Program.

Interesting, I didn't...
My current impressions on career choice for longtermists

This might just be an extension of the "community building" aptitudes, but here's another potential aptitude.

"Education and training" aptitudes

Basic profile: helping people absorb crucial ideas and the right skills efficiently, so that we can reduce talent/skills bottlenecks in key areas.


Examples: introductory EA program, in-depth EA fellowship, The Precipice reading group, AI safety programmes, alternative protein programmes, operations skills retreats, various workshops organised at EAGs/EAGxs, etc.

How to try developing this aptitude:

I'll split these into t...

MichaelA (4mo): Yeah, this seems worth highlighting in addition to the aptitudes Holden highlighted (though I'm not necessarily saying it's as important as those other aptitudes; I haven't thought carefully about that). And that seems like a good breakdown of relevant skills, how to tell you're on track, etc. Regarding examples and places to apply this, I think an additional important (and perhaps obvious) place is with actual school students; see posts tagged "Effective altruism outreach in schools". (There's also a Slack workspace for that topic, which I didn't create but could probably add people to if they send me a message.)
Holden Karnofsky (4mo): This general idea seems pretty promising to me.
AMA: Working at the Centre for Effective Altruism

Looking at the comments, it seems like CEA has changed a lot over the years! 

This may be too broad, but of the values in CEA's list of team values, which has CEA as a whole done well on? And which do you think the team wants to prioritise improving on?

Amy Labenz (5mo): I think this varies based on the team. My team (Events) is very strong on Alliance Mentality and Purpose First. I think we could improve on Perpetual Beta, which is why we are emphasizing skills related to impact analysis and program evaluation in our current hiring round.
MaxDalton (5mo): Good question! Unfortunately I don't have an amazing answer. I think the values are a bit of a mix between simply reflecting where we currently are and where we'd like to go. Overall, it feels like we're maybe 60-80% towards the ideal on these dimensions. So they are genuine strengths, but I think there's still room for us to grow on each dimension. There isn't one that stands out as more already-achieved, or as more in need of improvement: they're all in that ~60-80% range.
EA Malaysia Cause Prioritisation Report (2021)

You've made some good points that I didn't get to write in our forum post, and I've made an edit to direct readers to your comment. 

EA Malaysia Cause Prioritisation Report (2021)

Hi Jamie!

Looking at your methodology though, it seems as if you were attempting to essentially redo EA cause prioritisation research to date from scratch in a short timeframe?

My guess of the most useful process would have been to just take some of the most commonly / widely recommended EA cause areas (and maybe a couple of other contenders) and try to clarify how they seem more or less promising in the Malaysian context specifically.

If you agree with my characterisation of your process, with the benefit of hindsight, would you recommend that other national

...
EA Malaysia Cause Prioritisation Report (2021)

Hi Zeshen! I'll be answering in my personal capacity, so my views are not EA Malaysia's.

I'm wondering if we have good reliable statistics on causes of deaths in the country (death being a proxy for suffering), and we could look into the categories of avoidable deaths (e.g. curable illnesses)

For health-specific statistics, I've used information from IHME. For animal consumption, I've used data from the FAO.

and whether those areas are receiving enough support / funding.

It's a bit tough finding exact information about this. I did find one e...

EA Malaysia Cause Prioritisation Report (2021)

Hi Brian! Thanks for your response. I'll be using "we" (as a team) to address most of your comments, and "I" at the end to address one point.

I think it would be a lot better though if you had "problem profiles" like 80,000 Hours's for those causes you listed, especially the top 2-4 causes.  

Yes if there is a case for conducting further research, we are definitely considering deeper research in the top causes, and producing “problem profiles”. 

Or if not making full problem profiles, putting a few sentences or bullets about t

...
Singapore’s Technical AI Alignment Research Career Guide

Hi Misha, sorry for the late reply. Thanks for the heads up! I've added this feedback for a future draft.

Singapore’s Technical AI Alignment Research Career Guide

Regarding what I meant by "short term AI capabilities", I was referring to prosaic AGI: potentially powerful AI systems that use current techniques rather than hypothetical new ideas about how intelligence works. When you mentioned "I estimated a very rough 50% chance of AGI within 20 years, and 30-40% chance that it would be using 'essentially current techniques'", I took it as prosaic AGI too, but you might mean something else.

I've reread all the write-ups, and you're right that they don't imply that "research on short term AI capabilities i

...
rohinmshah (1y): Oh yeah, that sounds correct to me. I think the issue was that I thought you meant something different from "prosaic AGI" when you were talking about "short term AI capabilities". I do think it is very impactful to work on prosaic AGI alignment; that's what I work on. Your rephrasing sounds good to me -- I think you can make it stronger; it is true that many researchers including me endorse working on prosaic AI alignment.