
Linch's Comments

How much will local/university groups benefit from targeted EA content creation?

Thanks for the link and I agree that it's a valuable resource for a group starting out!

That said, I wonder if there is an illusion of transparency here and maybe we're talking past each other?

To be concrete, here are two needs I don't think the Hub's collection of resources currently meets.

1. My impression from looking through the content list on the EA Hub is that none of the sheets from other groups can be adapted (even with significant modifications) for South Bay EA's audience, since the questions are either a) too broad and intro-level (like the CEA sheets) or b) built around a lot of mandatory reading that's arguably not realistic for a heterogeneous group with many working professionals (eg, the Harvard Arete material). That said, I think SB EA is open to trying more mandatory-reading/high-engagement formats with a subset of members. But right now, if we want an intermediate-level discussion on a topic we haven't previously covered (eg, geoengineering, hinge of history), we basically have to make the sheets ourselves.

Historically we've found this to be true even for common topics that the online EA community has discussed for many years.

This isn't just a problem with the Hub, to be clear; my group has been looking for a way to steal sheets from other groups since at least mid-2018. (It's possible our needs are really idiosyncratic, but it'd be a bit of a surprise if that were true?)

2. I don't think of any of the existing sheets or guiding material as a curriculum, per se. At least when we were creating our sheets, my co-organizers and I mostly did things that "seemed reasonable," based on a combination of intuition and rough guesses/surveys about what our members liked. At no point did we have a strong educational theory or build things with an eye toward the latest advances in the education literature. I suspect other local groups are similar to us: when they created sheets and organized discussions, they did their best with the limited time and attention available, rather than working from a strong theory of education or change.

If I were to design things from scratch, I'd probably want to collaborate with, eg, education or edtech professionals who are also very familiar with EA (some of whom have expressed interest in this). It's possible that EA material is so out-of-distribution that familiarity with the pedagogical literature isn't helpful, but it seems at least worth trying?

Growth and the case against randomista development

(I talked more with brunoparga over PM).

For onlookers, I want to say I really appreciate bruno's top-level comment and that I have a lot of respect for bruno's contributions, both here and elsewhere. The comment I made two levels up was probably stronger than warranted, and I'm grateful bruno took it in stride, etc.

Growth and the case against randomista development

On a meta-level, I think your conversation with lucy has become overly acrimonious; it would be helpful to identify clear cruxes, adopt more of a scout mindset, etc.

My read of the situation is that you (and other EAs upvoting or downvoting content) have better global priors, but lucy has more domain knowledge in the specific areas they chose to talk about.

I do understand that it's very frustrating to live in a developing country and constantly watch people vote against their economic best interests, and that this creates a need to vent, especially in the "safe space" of a pro-growth forum like this one.

However, lucy likely also feels frustrated: from their perspective, they're saying true things (or at least well-established beliefs in the field) and getting what they may perceive as unjustified attacks from people with different politics or epistemic worldviews.

My personal suggestion is to adopt a stronger "collaborative truth-seeking attitude" and engage more respectfully, though I understand if either you or lucy isn't up for that and would rather tap out.

Growth and the case against randomista development

Apologies for the delayed response. I was surprised that (after several minutes of searching) I couldn't find a single source that plots Chinese literacy rates across the whole period. However:

Prior to 1949, China faced a stark literacy rate of only 15 to 25 percent, as well as lacking educational facilities with minimal national curricular goals. But as the Chinese moved into the 1950s under a new leadership and social vision, a national agenda to expand the rate of literacy and provide education for the majority of Chinese youth was underway.

http://schugurensky.faculty.asu.edu/moments/1949china.html

In China, the literacy rate has developed from 79 percent in 1982 to 97 percent in 2010

https://www.statista.com/statistics/271336/literacy-in-china/

At least naively, this suggests a ~60 percentage point absolute increase in literacy from 1949 to ~1980, which is necessarily larger than the increase over the next 40 years: starting from 79% in 1982, literacy could rise at most 21 points, since you cannot go above 100% (and the actual rise to 2010 was 18 points).
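
To make the arithmetic explicit, here's a minimal back-of-the-envelope sketch. My assumptions (not from the sources): the midpoint of the quoted 15-25% range stands in for 1949, and the 1982 figure stands in for ~1980.

```python
# Back-of-the-envelope check of the literacy comparison above.
literacy_1949 = 0.20   # assumption: midpoint of the quoted 15-25% range
literacy_1982 = 0.79   # Statista figure for 1982 (proxy for ~1980)
literacy_2010 = 0.97   # Statista figure for 2010

gain_first_period = literacy_1982 - literacy_1949    # ~0.59, i.e. ~60 points
gain_second_period = literacy_2010 - literacy_1982   # 0.18, i.e. 18 points
max_possible_later = 1.00 - literacy_1982            # 0.21: ceiling on any post-1982 gain

print(f"1949 -> 1982: +{gain_first_period:.0%}")
print(f"1982 -> 2010: +{gain_second_period:.0%}")
print(f"Max possible gain after 1982: +{max_possible_later:.0%}")
```

Even under the most generous reading of the later period (hitting the 100% ceiling), the earlier period's gain is roughly three times larger.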

I think the change here actually understates the impact of the first 30 years, since there's an obvious delay between implementing a schooling system and seeing its effect on the adult literacy rate (plus, at least naively, we'd expect the Cultural Revolution to have wiped out some of the progress).

One caveat when cobbling sources together is that they may use different (implicit or explicit) operationalizations of literacy, so the exact numbers shouldn't be relied on too heavily.

However, I think it's significantly more likely than not that, under most reasonable operationalizations of adult literacy, the first 30 years of China under CCP rule were more influential than the next 40.

How much will local/university groups benefit from targeted EA content creation?

Do you have a sense of whether (and how much) new material is needed, vs. whether we already have all the material we need and it's just a question of compiling everything together?

If the former, a follow-up question is which new material would be most helpful. I'd be excited if you (or anybody else) also answered this related question:

https://forum.effectivealtruism.org/posts/prrKzvCXuyRn4MHbu/what-types-of-content-creation-would-be-useful-for-local

How much will local/university groups benefit from targeted EA content creation?

Yeah, I guess that's the null hypothesis, though it's possible that people don't use the current resources because they aren't "good" enough (eg, insufficiently accessible, too much jargon, too much group-specific local context, etc).

Another thing to consider is "curriculum": right now, discussion sheets etc. are shared on the internet without tips on how to adapt them (since the local groups that wrote the sheets already have enough local context/institutional knowledge about how the sheets should be used).

An interesting analogy is the "instructor's edition" of textbooks, which, iirc, in the US K-12 system often has almost as much supplementary material as the textbook content itself!

What are the best arguments that AGI is on the horizon?
I realize that for the EA community to dedicate so many resources to this topic there must be good reasons to believe that AGI really is not too far away

First, a technicality: you don't have to believe that AGI/Transformative AI is more likely than not to happen soonish, just that the probability is high enough to be worth working on[1].

But in general, here are several points of evidence for relatively soon AGI:

1. The first is that we can look at estimates from AI experts (not necessarily AI Safety people). Their estimates for when Human-Level AI/AGI/TAI will arrive are all over the place, but roughly speaking, the median is <60 years, so expert surveys suggest it's more likely than not to happen within our lifetimes[2]. You can believe that AI researchers are overconfident about this, but the bias could be in either direction (eg, history offers plenty of examples of famous people in a field dramatically underestimating progress in that field).

2. People working specifically on AGI (eg, people at OpenAI and DeepMind) seem especially bullish on transformative AI, even relative to experts not working on AGI. Note that this is not uncontroversial; see, eg, criticisms from Jessica Taylor, among others. Note also that there's a strong selection effect: the people most bullish on AGI are the most likely to work on it.

3. Within EA, people working on AI Safety and AI forecasting have more specific inside-view arguments. For example, see this recent talk by Buck and a bunch of material from AI Impacts. I find myself confused about how much to update on believable arguments vs. just treating them as one data point among many about "what experts believe."

4. A lot of people working in AI Safety seem to have private information that updates them towards shorter timelines. My knowledge of a small(?) subset of them does lead me to believe in somewhat shorter timelines than the expert consensus, but I'm confused about whether this information (or the potential for it) already feeds into expert intuitions for forecasting, so it's hard to know whether it's in some sense already "priced in" (see also information cascades, and this comment on epistemic modesty). Another point of confusion is how much to trust people who claim to have private information; a potentially correct decision procedure is to dismiss all such claims of secrecy as BS.


[1] Eg, if you believe with probability 1 that AGI won't happen for 100 years, a few people might still be optimistic about working now to hammer out the details of AGI safety, but most people won't be that motivated. Likewise, if you believe (as I think Will MacAskill does) that the probability of AGI/TAI in the next century is 1%, many people may conclude there are marginally more important long-termist causes to work on. How high X has to be in "X% chance of AGI in the next Y years" before the work is worthwhile is a harder question.
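
To make that threshold logic concrete, here's a purely illustrative sketch. Every number in it is a hypothetical placeholder of my own, not anything from the surveys above; the point is just that the required X depends on how the conditional value of safety work compares to the best alternative cause.

```python
# Illustrative only: all numbers below are hypothetical placeholders,
# not estimates drawn from the expert surveys discussed above.

def ev_of_safety_work(p_agi: float, value_if_agi: float) -> float:
    """Expected value of AGI safety work, given P(AGI arrives in the window)."""
    return p_agi * value_if_agi

VALUE_IF_AGI = 100.0    # hypothetical payoff (arbitrary units) if AGI arrives
BEST_ALTERNATIVE = 5.0  # hypothetical value of the best alternative cause

for p in (0.01, 0.05, 0.20, 0.50):
    ev = ev_of_safety_work(p, VALUE_IF_AGI)
    verdict = "worth prioritizing" if ev > BEST_ALTERNATIVE else "loses to the alternative"
    print(f"P(AGI) = {p:.0%}: EV = {ev:5.1f} -> {verdict}")
```

Under these made-up numbers the crossover sits around P(AGI) = 5%, but it moves wherever the (hard-to-estimate) conditional values move, which is why the question stays hard.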

[2] "Within our lifetimes" is somewhat poetic but obviously the "our" is doing a lot of the work in that phrase. I'm saying that as an Asian-American male in my twenties, I expect that if the experts are right, transformative AI is more likely than not to happen before I die of natural causes.

How much will local/university groups benefit from targeted EA content creation?
Have you shared these with other local groups before now? Have they been adopted or adapted there?

I know Stanford EA sometimes uses some of our old sheets with their own modifications[1]. I believe they no longer focus as much on the kind of discussion-focused meetups we run, so it's unclear whether they have solid metrics on how helpful the sheets are for them (though at least we're saving them some time).

I've also shared our sheets online a few times (notably, none of them were designed with any group in mind other than SB). A lot of other local group organizers appeared excited about them, but nobody followed up, so my guess is that uptake elsewhere is nonexistent or pretty low[2].

[1] SB EA sort of grew out of Stanford EA, so it makes a lot of sense that our structure/content is similar enough to be usable for their purposes.

[2] Notably, I wasn't really tracking that Stanford EA used our sheets until I explicitly asked a few weeks ago, so I'd guess it's unlikely, though not impossible, that, eg, a few groups saw my posts on FB or our material on the EA Hub and adapted our sheets without ever contacting us.

How much will local/university groups benefit from targeted EA content creation?

Thanks! Though this seems more like a comment than an answer.
