most prominently transforming LessWrong into something that looks a lot more respectable in a way that I am worried might have shrunk the overton window of what can be discussed there by a lot, and having generally contributed to a bunch of these dynamics
Would you mind sharing a bit more of what you mean here?
I'm not sure I understand how an increase in respectability in LessWrong equates to a shrinking overton window. I would have guessed the opposite -- an increase in respectability would have shifted or expanded the overton window in ways th...
Another thing I've noticed -- folks from elite cultures seem less inclined to mix and hang out with non-elite cultures.
Somewhat adjacent to your "culture clash" segment. I've noticed folks from "perceived-to-be-higher-status-cultures" hijacking (probably unconsciously) norms or spaces where there are more folks from "perceived-to-be-lower-status-cultures".
Thanks for writing this up! Some rough thoughts about the LMIC category:
1. I think the LMIC is a pretty useful category insofar as it's used as "non-high-income-countries".
2. Otherwise, I worry that folks might conflate LMICs with just "low income countries", when most countries in the LMIC category are lower to upper middle income (or developing).
3. I have a light preference for separating LMICs into two categories: "least developed countries" and "middle income countries".
A few people have mentioned buckets (1, 2) as a way to segment different parts of your life. Each bucket has a corresponding goal or set of goals that you spend resources on. Since we all have many different goals, it's a useful exercise to distribute resources between them accordingly, so one bucket doesn't "eat" into another bucket's resources. For example, you might have a bucket for your close friends, in which you spend a few hours a week of your time to cultivate genuine and happy friendships but not more, since you have other important buckets...
Hi Benjamin, I run EA Virtual Programs. Thanks for sharing about your project! I don't have a lot of time to think too deeply about your project, but here are my quick impressions (caveat: this is my personal opinion and not of my employer):
1. I worry about fidelity. I know you're hoping to get certification from your university, but the four courses you listed don't seem relevant.
2. I worry that the "creating more EAs" goal you have might be goodharted.
3. I worry that you're not tracking risks to the wider movement well. You didn't mention how your project mig...
(Weakly held personal opinion) I would go further and say that you attract people like you.[1] If what you or your core group signals most to outsiders is your community building (or marketing) qualities, you're likely to attract folks who are also keen on community building (and put off folks who are keen on the object-level work you're recruiting for).
Here's an intuition pump I have. Imagine two EA uni group websites that are exactly the same except for one difference in their profile page:
Hi Pride, I'm Yi-Yang and I run EA VP. Unfortunately, we don't usually let folks apply late. EA VP does run programs every month so you could catch the upcoming one. The next deadline is on Sun, June 26th.
A service/consultancy that calculates the value of information of research projects
Epistemic Institutions, Research That Can Help Us Improve
When undertaking any research or investigation, we want to know whether it's worth spending money or time on it. There are a lot of research-type projects in EA, and the best way to evaluate and prioritise them is to calculate their value of information (VoI). However, VoI calculations can be complex, so we need to build a team of experts that can form a VoI consultancy or service provider.
Examples of use cases:
1...
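To give a concrete flavour of the simplest kind of VoI calculation such a service might do, here's a minimal sketch of expected value of perfect information (EVPI) in Python. The actions, states, probabilities, and payoffs are all hypothetical illustrations, not numbers from any real project:

```python
def evpi(probs, payoffs):
    """Expected value of perfect information.

    probs: probability of each possible state of the world.
    payoffs: payoffs[action][state], the payoff of each action in each state.
    EVPI = (expected payoff if we learn the state before choosing)
         - (expected payoff of the best single action chosen in advance).
    """
    # Best we can do committing to one action before any new information.
    best_prior = max(
        sum(p * u for p, u in zip(probs, row)) for row in payoffs
    )
    # Expected payoff if research reveals the true state, then we choose.
    with_info = sum(
        p * max(row[s] for row in payoffs) for s, p in enumerate(probs)
    )
    return with_info - best_prior


# Hypothetical: fund intervention A or B under two states of the world.
probs = [0.6, 0.4]
payoffs = [
    [100, 10],  # action A's payoff in each state
    [20, 90],   # action B's payoff in each state
]
print(evpi(probs, payoffs))
```

Real VoI estimates would of course involve messier distributions and partial (rather than perfect) information, which is part of why a dedicated team seems useful.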
Hi Naomi! Do the participants engage with any required learning materials outside of group discussions in this version of the fellowship? Something like in the usual 8-week virtual program version.
Agree with this! I can definitely see that there's some kind of fine-tuning you can do, like making it less challenging so your motivation and probability of success go up.
(1), (2) great points!
(3) Possibly, I definitely took some inspiration from 80K's career planning guide too.
A low-energy version of this could be a co-working retreat
Oh interesting! I see a few examples of this when Googling. If you have a go-to resource for organising this, I'd love to check it out.
Less tailor-made events and more consistent simple meetups (socials, YT watch parties, etc).
Less tailor-made targeted outreach and more advertising.
Small cohort size seems costly from a facilitator's point of view. And some participants found smaller group sizes more intimidating too.
EA VP has been increasing its cohort sizes recently. Attrition rates are around 30%, so a cohort size of at least 4 participants by the end of the program seems like a good number to aim for.
I'm curious what the attrition rates are for the Stanford EA format, and how they're able to get so many facilitators.
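A quick back-of-the-envelope for the attrition point above (the simple proportional-attrition model and the function name are my own illustration):

```python
import math


def starting_size(target_end, attrition_rate):
    """Smallest starting cohort size n with n * (1 - attrition_rate) >= target_end.

    Assumes attrition is a simple fixed proportion of the starting cohort.
    """
    return math.ceil(target_end / (1 - attrition_rate))


# With ~30% attrition, ending with at least 4 participants
# means starting with about 6.
print(starting_size(4, 0.30))  # -> 6
```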
So perhaps the idea behind your first bullet point from the Economist is that a balanced power dynamic reduces either side's credence that conflict will help their position?
Yes, that's right!
And the follow up points about China's economic influence tilt the balance in China's favour, thereby raising again the chances of conflict?
Yes, but it's more like China's economic influence has tilted the balance in China's favour for some years now (i.e. Belt & Road Initiative). It's only recently with AUKUS that there's more of a balance betwee...
Key takeaways from The Economist's latest briefing:
Hi Alexander, thanks for writing this up!
Some context: I used Anki for 1-2 years, completed the "Learning How to Learn" MOOC and read the book it was based on, taught 13-16 year olds math and English for 2 years, and have conducted EA presentations in Malaysia and previously in Singapore. I'm currently running EA Virtual Programs (I noticed that you're in the intro program!). FYI, my opinions are mine and not CEA's.
In conjunction with "learning how to learn better", "learning how to prioritise which learning strategy works for specific scenarios" see...
I think this sounds right! This makes me feel like we should also pay particular attention to making sure the facilitator experience is great too.
Organising local intro EA programs can also be a great Task Y candidate.
Hi Michael!
I'm interested in running a local in-person program at my university from September to October with the virtual program as overflow capacity, in case our capacity for in-person cohorts isn't enough to accept all quality applicants. Would that setup be possible?
Yes, just direct people who are not able to join your local program to EA VP's website! And tell them to state in the application form that they want to be in a cohort with other people from the same uni.
Also, is there a reason that the program is no longer called a fellowship?
I spo...
This might just be an extension of the "community building" aptitudes, but here's another potential aptitude.
"Education and training" aptitudes
Basic profile: helping people absorb crucial ideas and the right skills efficiently, so that we can reduce talent/skills bottlenecks in key areas.
Examples:
Introductory EA program, in-depth EA fellowship, The Precipice reading group, AI safety programs, alternative protein programs, operations skills retreats, various workshops organised at EAGs/EAGxs, etc
How to try developing this aptitude:
I'll split these into t...
Looking at the comments, it seems like CEA has changed a lot over the years!
This may be too broad, but of CEA's list of team values, which has CEA as a whole done well on? And which ones do you think the team wants to prioritise improving?
You've made some good points that I didn't get to write in our forum post, and I've made an edit to direct readers to your comment.
Hi Jamie!
...Looking at your methodology though, it seems as if you were attempting to essentially redo EA cause prioritisation research to date from scratch in a short timeframe?
My guess of the most useful process would have been to just take some of the most commonly / widely recommended EA cause areas (and maybe a couple of other contenders) and try to clarify how they seem more or less promising in the Malaysian context specifically.
If you agree with my characterisation of your process, with the benefit of hindsight, would you recommend that other national
Hi Zeshen! I'll be answering you from my own personal capacity, so my views are not EA Malaysia's.
I'm wondering if we have good reliable statistics on causes of deaths in the country (death being a proxy for suffering), and we could look into the categories of avoidable deaths (e.g. curable illnesses)
For health specific statistics, I've used information from IHME. For animal consumption, I've used data from FAO.
and whether those areas are receiving enough support / funding.
It's a bit tough finding exact information about this. I did find one e...
Hi Brian! Thanks for your response. I'll be using "we" (as a team) to address most of your comments, and "I" at the end to address one point.
I think it would be a lot better though if you had "problem profiles" like 80,000 Hours's for those causes you listed, especially the top 2-4 causes.
Yes if there is a case for conducting further research, we are definitely considering deeper research in the top causes, and producing “problem profiles”.
...Or if not making full problem profiles, putting a few sentences or bullets about t
Hi Misha, sorry for the late reply. Thanks for the heads up! I've added this feedback for a future draft.
Regarding what I meant by "short term AI capabilities", I was referring to prosaic AGI - potentially powerful AI systems that use current techniques instead of hypothetical new ideas surrounding how intelligence works. When you mentioned "I estimated a very rough 50% chance of AGI within 20 years, and 30-40% chance that it would be using 'essentially current techniques'", I took it as prosaic AGI too, but you might mean something else.
I've reread all the write-ups, and you're right that they don't imply that "research on short term AI capabilities i
...
Got it, this was helpful. Thanks!