All of Chi's Comments + Replies

Why I am probably not a longtermist

Again, I haven't actually read this, but this article discusses intransitivity in asymmetric person-affecting views, i.e., in the language you used, I think: the value of pleasure is contingent in the sense that creating new lives with pleasure has no value, but the disvalue of pain is not contingent in this way. I think you should be able to apply that directly to the other objective-list theories you discuss, not just the hedonistic (pleasure-pain) one.

An alternative way to deal with intransitivity is to say that not existing and any life are incomparable... (read more)
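(A minimal sketch of one standard way this kind of intransitivity arises, using the usual betterness notation; this is my illustration, not necessarily the one in the linked article:)

```latex
% Worlds (illustrative labels, not from the article):
%   A  = no new person is created
%   B  = a new person is created with a good life
%   B' = the same person is created with an even better life
% "Creating pleasure has no value" gives neutrality: A \sim B and A \sim B'.
% Comparing the person's welfare directly gives: B' \succ B.
% Transitivity applied to A \sim B and A \sim B' would force B \sim B',
% contradicting B' \succ B, so the betterness relation cannot be transitive.
A \sim B, \qquad A \sim B', \qquad B' \succ B
```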

Noticing the skulls, longtermism edition

person-affecting view of ethics, which longtermists reject

I'm a longtermist and I don't reject (asymmetric) person(-moment-)affecting views, at least not those that think necessary ≠ only present people. I would be very hard-pressed to give a clean formalization of necessary people though. I think it's bad if effective altruists think longtermism can only be justified with astronomical waste-style arguments and not at all if someone has person-affecting intuitions. (Staying in a broadly utilitarian framework. There are, of course, also obligation-to-ancest... (read more)

How would you run the Petrov Day game?

Big fan of what you describe at the end, or something similar.

It's still not great, and it would still be hard to distinguish the people who opted-in and received the codes but decided not to use them from the people who just decided to not receive their codes in the first place

Not sure whether you mean it's hard from the technical side to track who received their code and who didn't (which would be surprising) or whether you mean distinguishing between people who opted out and people who opted in but decided not to see the code. If the latter: Any downside... (read more)

Honoring Petrov Day on the EA Forum: 2021

edit: Feature already exists, thanks Ruby!

Another feature request: Is it possible to make other people's predictions invisible by default and then reveal them if you'd like? (Similar to how blacked-out spoilers work, which you can hover over to see the text.)

I wanted to add a prediction but then noticed that I was heavily anchoring on the previous responses, so I didn't end up doing it.

Peter Wildeford (4mo): I agree this would be good to see!
Ruby (4mo): There's a user setting that lets you do this.

Is effective altruism growing? An update on the stock of funding vs. people

edit: no longer relevant since OP has been edited since. (Thanks!)

Personally, if given the choice between finding an extra person for one of these roles who’s a good fit or someone donating $X million per year, to think the two options were similarly valuable, X would typically need to be over three, and often over 10.

(emphasis mine)

This would also mean that if you have a 10% chance of succeeding, then the expected value of the path is $300,000–$2 million (and the value of information will be very high if you can determine your fit within a couple of years

... (read more)

Yes, they're all per year. I'll add them.

Is effective altruism growing? An update on the stock of funding vs. people

And do you have any idea how the numbers for total funding break down into different cause areas? That seems important for reasoning about this.

+1

I think I often hear longtermists discuss funding in EA and use the $22 billion number from Open Philanthropy. And I think people often make the implicit mental move of thinking that's also the money dedicated to longtermism, even though my understanding is very much that that's not all available to longtermism.

In the recent podcast with Alexander Berger, he estimates it'll be split roughly 50:50 between longtermism and global health and wellbeing.

This means that the funding available to global health and wellbeing has also grown a lot, since Dustin Moskovitz's net worth has gone from $8bn to $25bn.

anoni's Shortform

1.

1.1.: You might want to have a look at a group of positions in population ethics called person-affecting views, some of which include future people and some of which don't. The ones that do often don't care about increasing/decreasing the number of people in the future, but about improving the lives of future people who will exist anyway. That's compatible with longtermism - not all longtermism is about extinction risk. (See trajectory change and s-risk.)

1.2.: No, we don't just care about humans. In fact, I think it's quite likely that most of the value or disva... (read more)

COVID: How did we do? How can we know?

On Human Challenge Trials (HCTs):

Disclaimer: I have been completely unplugged from Covid-19 stuff for over a year, am definitely not an expert on these things (anymore), and am definitely speaking for myself and not 1Day Sooner (which is more bullish on HCTs).

I worked for 1Day Sooner last year as one of the main people investigating the feasibility and usefulness of HCTs for the pandemic. At least back then (March 2020), we estimated that it would optimistically take 8 months to complete the preparations for an HCT (so not even the HCT itself). Most of this t... (read more)

Ghost_of_Li_Wenliang (7mo): Thanks! That is good knowledge.

An animated introduction to longtermism (feat. Robert Miles)

Note: this is mostly about your earlier videos. I think this one was better done, so maybe my points are redundant. I'm posting this here because the writer has expressed some unhappiness with the reception so far. I watched the other videos some weeks ago and didn't rewatch them for this comment. I also didn't watch the bitcoin one.

First off, I think trying out EA content on YouTube is really cool (in the sense of potentially high value), really scary, and because of this really cool (in the sense of "of you to do this"). Kudos for that. I think this could be r... (read more)

Writer (7mo): Update: as a result of feedback here and in other comments (and some independent thinking), we made a few updates to the channel.
* Made new thumbnails without the clickbaity feel that the previous ones had.
* Changed titles (I did that already weeks ago, but it's worth mentioning again).
* Removed the arm from the cover photo.
* Removed mentions of EA from the channel description. For now, I will associate the channel with EA and LW the least I can. I will mention names of specific EA topics (e.g. Longtermism) only when I think it's really necessary. And it will probably never be necessary to mention the EA movement itself. In this way, I can focus on improving with a lighter heart, since the probability of causing PR damage is now lower. Obviously, I'll have to relax these constraints in the future if I want to increase impact.
* Hid the weaker of the two "digital circuits in Minecraft" videos.

I have also read CEA's models of community building [https://www.centreforeffectivealtruism.org/models-of-community-building/], which were suggested in some comments. The future direction the channel will take is more important than the previous videos, but still, I wanted to let people know that I made these changes. I wanted to make a post to explain both these changes and future directions in detail, but I don't know if I'll manage to finish it, so in the meanwhile I figured it would probably be helpful to comment here.
Writer (7mo): I feel like this is the most central criticism I've had so far, which means it is also the most useful. I think it's very likely that what you said also reflects the sentiments of other people here. I think you're right about what you say and that I botched the presentation of the first videos. I'll defend them a little bit here on a couple of points, but not more. I will not say much in this comment other than this, but know that I'm listening and updating.
1. The halo effect video argues in part that the evolution of that meme has been caused by the halo effect. It is certainly not an endorsement.
2. The "truth is cringe" video is not rigorous and was not meant to be rigorous. It was mostly stuff from my intuitions and pre-existing knowledge. The example I used made total sense to me (and I considered it interesting because it was somewhat original), but heh, apparently only to me.

Note: I'm not going to do only core EA content (edit: not even close, actually). I'm trying to also do rationality and some rationality-adjacent and science stuff. Yes, currently the previous thumbnails are wrong. I fixed the titles more recently. I'm not fond of modifying previous content too hard, but I might make more edits. Edit to your edit: yes.

A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it

Hm, I'm a bit unhappy with the framing of symptoms vs. root causes, and am skeptical about whether it captures a real thing (when it comes to mental health and drugs vs. therapy). I'm worried that drawing the distinction between the two contributes to the problems alexrjl pointed out.

Note: I have no clinical expertise and am just spitballing. E.g., I understand the following trajectory as archetypical of what others might call "aha! First a patch and then root causes":

[Low energy --> takes antidepressants --> then has enough energy to do therapy ... (read more)

How much do you (actually) work?

Hah! Random, somewhat fun personal anecdote: I think tracking actually helped me a bit with that. When I first started tracking, I was pretty neurotic about doing it super exactly. Having to change my Toggl so frequently, plus seeing the '2 minutes of supposed work X' at the end of the day when looking at my Toggl, was so embarrassing that I improved a bit over time. Now I'm either better at switching less often and less neurotic about tracking, or only the latter. It also makes me feel worse to follow some distraction if I know my time is currently being tracked as something else.

Concerns with ACE's Recent Behavior

I might be a little bit less worried about the time delay of the response. I'd be surprised if fewer than, say, 80% of the people who would say they find this very concerning end up also reading the response from ACE.

FWIW, depending on the definition of 'very concerning', I wouldn't find this surprising. I think people often read things, vaguely update, know that there's another side of the story that they don't know, have the thing they read become a lot less salient, happen to not see the follow-up because they don't check the forum much, ... (read more)

Launching a new resource: 'Effective Altruism: An Introduction'

And if hours went into carefully picking the original ten episodes and deciding how to sequence them, I'd like to see modifications made via a process of re-listening to different podcasts for hours and experimenting with their effects in different orders, seeing what "arcs" they form, etc., rather than via quick EA Forum comments and happy recollections of isolated episodes.

 

I agree that that's how I want the eventual decision to be made. I'm not sure what exactly the intended message of this paragraph was, but at least one reading is that you want t... (read more)

Yeah, I endorse all of these things:

  • Criticizing 80K when you think they're wrong (especially about object-level factual questions like "is longtermism true?").
  • Criticizing EAs when you think they're wrong even if you think they've spent hundreds of hours reaching some conclusion, or producing some artifact.
    • (I.e.: try to model how much thought and effort people have put into things, and keep in mind that no amount of effort makes you infallible. Even if it turns out the person didn't make a mistake, raising the question of whether they messed up can help mak
... (read more)

What material should we cross-post for the Forum's archives?

Answer by Chi (Apr 15, 2021)
  • Some stuff from Paul Christiano's 'The sideways view'

In addition to everything that Pablo said (esp. the Tomasik stuff because AFAICT none of his stuff is on the forum?)

The EA Forum Editing Festival has begun!
  1. I found tagging buggy. I tried to tag something yesterday, and I believe it didn't get through, although it worked today. The "S-risks" tag doesn't show up in my list to tag posts at all, although it's an article. But that might also be something about the difference between tags and articles that I don't understand? I use Firefox and didn't check other browsers.

  2. Is there a consensus on how to use organisation tags? Specifically, is it desirable to have every output that's ever come out of an organisation tagged to it, or only e.g. orga

... (read more)
Aaron Gertler (10mo): On point #2: Here's what I had suggested on my post about organization tags: In general, I think it's better to have more posts tagged rather than fewer, and I'd consider "paid work by an employee of X" to be "work paid for by X" and thus, in some sense, "the work of X".
JP Addison (10mo):
1. Sorry about the issues. On S-risks, it is a wiki-only tag, though probably we should change that.
2. I really like the idea of tagging everything that's been officially produced by an organization with the organization's tag. So you might go to the Rethink Priorities [https://forum.effectivealtruism.org/tag/rethink-priorities] tag, sort by top [https://forum.effectivealtruism.org/tag/rethink-priorities?sortedBy=top], and see a "best of" list.
3. [Edit reply] Not to my knowledge, sorry.

How to work with self-consciousness?
Answer by Chi (Feb 04, 2021)

I'm not a very experienced researcher, but I think in my short research career, I've had my fair share of dealing with self-consciousness. Here are some things I find:

Note that I mostly refer to the "I'm not worth other people's time", "This/I am dumb", "This is bad, others will hate me for it" type of self-consciousness. There might be other types of self-consciousness, e.g. "I'm nervous I'm not doing the optimal thing and feel bad because then more morally horrible things will happen" in a way that's genuinely not related to self-confidence, self-esteem ... (read more)

MichaelA (10mo): This seems like a great bundle of ideas, advice, and perspectives! This reminds me of the Eliezer Yudkowsky post Working hurts less than procrastinating, we fear the twinge of starting [https://www.lesswrong.com/posts/9o3QBg2xJXcRCxGjS/working-hurts-less-than-procrastinating-we-fear-the-twinge].
Denis Drescher (1y): Woah! Thank you for all these very concrete tips! A lot of great ideas for me to pick from. :-D

How to discuss topics that are emotionally loaded?

Ironically, I felt somewhat upset reading the OP, I think for the reason you point out. (No criticism of the OP; I was actually amused at myself when I noticed.)

I think some reason-specific heterogeneity, i.e. in how easily something is expressible and in the norms of your society, also plays a role:

  1. I think some reasons are just inherently fuzzier (or harder to crisply grasp), e.g. why certain language makes you feel excluded. (It's really hard to point at a concrete damage (or, in some circles, something that can't be countered with "that's not how it's meant [, but if you w
... (read more)
vin (1y): That's a good point, that the upset person in the conversation might be prone to be taken less seriously, even by themselves, especially if their reasons are hard to describe, but not necessarily wrong. Looking back at these situations through this lens, I actually think at one point I didn't take myself seriously enough. If my reasons are fuzzy, and I'm upset, it is tempting to conclude that I'm just being silly. A better framing is to view negative emotions as a kind of pointer that says: "Hey, in this topic there is still some unresolved issue. There may actually be a good reason why I have this emotion. Let's investigate where it comes from." For the non-offended person, I think it already helps a lot to have the possibility in the back of your mind that a topic may be emotional. For example, many people aren't aware that privacy is a topic that can be emotional for people.

Chi's Shortform

Thanks for the reply!

Honestly, I'm confused by the relation to gender. I'm bracketing out genders that are both not-purely-female and not-purely-male because I don't know enough about the patterns of qualifiers there.

  • In general, I think anxious qualifying is more common for women. EA isn't known for having very many women, so I'm a bit confused why there's seemingly so much of it in EA.
  • (As an aside: This reminds me of a topic I didn't bring into the original post: How much is just a selection effect and how much is EA increasing anxious qualifying? Intuit
... (read more)

Chi's Shortform

Reply 3/3

"displaying uncertainty or lack of knowledge sometimes helps me be more relaxed"

I think there's a good version of that experience and I think that's what you're referring to, and I agree that's a good use of qualifiers. Just wanted to make a note to potential readers because I think the literal reading of that statement is a bit incomplete. So, this is not really addressed at you :)

I think displaying uncertainty or lack of knowledge always helps one be more relaxed, even when it comes from a place of anxious social signalling. (See my first reply f... (read more)

Misha_Yagudin (1y): Chi, I appreciate the depth of your engagement! I mostly agree with your comments.

Chi's Shortform

Reply 2/3

I like the suggestions, and they probably-not-so-incidentally are also things that I often tell myself I should do more, and that I hate. One drawback is that they are already quite difficult, so I'm worried that it's too ambitious an ask for many. At least for an individual, it might be more tractable to (encourage them to) change their excessive use of qualifiers as a first baby step than to jump right into quantification and betting. (Of course, what people find more or less difficult confidence-wise differs. But these things are ... (read more)

Misha_Yagudin (1y): I agree that the mechanisms proposed in my comment are quite costly sometimes. But I think higher-effort downstream activities only need to be invoked occasionally (e.g. not everyone who downvotes needs to explain why, but it's good that someone will occasionally) — if they are invoked consistently they will be picked up by people. Right, I think I see how this can backfire now. Maybe upvoting "ugh, I still think that this is likely but am uncomfortable about betting" might still encourage using qualifiers for reasons 1–3 while acknowledging vulnerability and reducing pressure on commenters?

Chi's Shortform

Reply 1/3

Got it now, thanks! I agree there's "confident and uncertain", and it's an important point. I'll spend this reply on the distinction between the two, another response on the interventions you propose, and another response on your statement that qualifiers often help you be more relaxed.

The more I think about it, the more I think that there's quite a bit for someone to unpack here conceptually. I haven't done so, but here a start:

  1. There's stating your degree of epistemic uncertainty to inform others how much they should update based on your be
... (read more)
Misha_Yagudin (1y): I like your 1–5 list. Tangentially, I just want to push back a bit on 1 and 2 being obviously good. While I think that quantification is in general good, my forecasting experience taught me that quantitative estimates without a robust track record and/or reasoning are quite unsatisfactory. I am a bit worried that misunderstanding of the Aumann agreement theorem [https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem] might lead to overpraising communication of pure probabilities (which are often unhelpful).

Chi's Shortform

I just wondered whether there is a systematic bias in how much advice there is in EA for people who tend to be underconfident vs. people who tend to be appropriately or overconfident. Anecdotally, when I think of memes/norms in effective altruism that I feel at least conflicted about, it's mostly because they seem to be harmful for underconfident people to hear.

Way in which this could be true and bad: people tend to post advice that would be helpful to themselves, and underconfident people tend to not post advice/things in general.

Way in which this could b... (read more)

Chi's Shortform

Hey Misha! Thanks for the reply and for linking the post, I enjoyed reading the conversation. I agree that there's an important difference. The point I was trying to make is that one can look like the other, and that I'm worried that a culture of epistemic uncertainty can accidentally foster a culture of anxious social signaling, esp. when people who are inclined to be underconfident can smuggle anxious social signaling in disguised (to the speaker/writer themselves) as epistemic uncertainty. And because anxious social signalling can superficially look sim... (read more)

Misha_Yagudin (1y): I mostly wanted to highlight that there is a confident but uncertain mode of communication, and that displaying uncertainty or lack of knowledge sometimes helps me be more relaxed. People surely pick up bits of style from others they respect; so aspiring EAs are likely to adopt the manners of respected members of our community. It seems plausible to me that this will lead to the negative consequences you mentioned in the fifth paragraph (e.g. there is too much deference to authority for the amounts of cluelessness and uncertainty we have). I think a solution might lie not in discouraging the display of uncertainty but in encouraging positive downstream activities like betting, quantification, acknowledging that arguments changed your mind &c — likely this will make cargo-culting less probable (a tangential example is encouraging people to make predictions when they say "my model is…"). I agree underconfidence and anxiety could be confused on the forum. But not in real life, as people leak clues about their inner state all the time.

Chi's Shortform

Should we interview people with high status in the effective altruism community (or make other content) featuring their (personal) story, how they have overcome challenges, and how they live their values?

Background: I think it's no secret that effective altruism has some problems with community health. (This is not to belittle the great work that is done in this space.) Posts that talk about personal struggles, for example related to self-esteem and impact, usually get highly upvoted. While many people agree that we should reward dedication and that the thing ... (read more)

Chi's Shortform

Observation about EA culture and my journey to develop self-confidence:

Today I noticed an eerie similarity between things I'm trying to work on to become more confident and effective altruism culture. For example, I am trying to reduce my excessive use of qualifiers. At the same time, qualifiers are very popular in effective altruism. It was very enlightening when a book asked me to guess whether the following piece of dialogue was from a man or woman:

'I just had a thought, I don't know if it's worth mentioning...I just had a thought about [X] on this one,... (read more)

EmeryCooper (1y): This is a really interesting point! I think I'm also sometimes guilty of using the norms of signalling epistemic uncertainty in order to mask what is actually anxious social signalling on my part, which I hadn't thought about so explicitly until now. One thing that occurred to me while reading this: I'd be curious as to whether you have any thoughts on how this might interact with gender diversity in EA, if at all?
Linch (1y): I think within EA, people should report their accurate levels of confidence, which in some cultures and situations will come across as underconfident and in other cultures and situations will come across as overconfident. I'm not sure what the practical solution is to this level of precision bleeding outside of EA; I definitely felt like there were times when I was socially penalized for trying to be accurate in situations where accuracy was implicitly not called for. If I were smarter/more socially savvy, the "obvious" right call would be to quickly code-switch [https://en.wikipedia.org/wiki/Code-switching] between different contexts, but in practice I've found it quite hard.

Separate from the semantics used, I agree there is a real issue where some people are systematically underconfident or overconfident relative to reality, and this hurts their ability to believe true things or achieve their goals in the long run. Unfortunately this plausibly correlates with demographic differences (e.g. women on average less confident than men, Asians on average less confident than Caucasians), which seems worth correcting for if possible.
Misha_Yagudin (1y): Hey Chi, let me report my personal experience: uncertainty and putting in qualifiers feel quite different to me than anxious social signaling. The conversation at the beginning of Confidence all the way up [https://mindingourway.com/confidence-all-the-way-up/] points to the difference. You can be uncertain or potentially wrong, and be chill about it. Acknowledging uncertainty helps with (fear of) saying "oops, was wrong" and hence makes one more at ease.

Training Bottlenecks in EA (professional skills)

Thanks for the reply! I was initially just self-interestedly wondering which training you got and whether you would recommend it. But I am also happy to hear about your plans in that direction.

Given the time constraints, do you think there are any other people for whom it would make sense to take the lead on this whom you are not yet in touch with (e.g. a specific type of person rather than specific individuals)? And if so, which traits would that person need? You already mentioned that you want to work on it with help anyway, and I can imag... (read more)

My mistakes on the path to impact

I think the comparison to "the current average experience a college graduate has" isn't quite fair, because the group of people who see 80k's advice and act on it is already quite selected for lots of traits (e.g. altruism). I would be surprised if the average person influenced by 80k's EtG advice had the average college graduate's experience in terms of which careers they consider and hence where they look for advice; e.g. they might already be more inclined to go into policy, the non-profit sector, or research to do good.

(I have no opinion on how your poin... (read more)

Effektiv Spenden - Fundraising and 2021 Plans

Hey, I wanted to probe a bit into why you don't write in gender-neutral language on your website.

  • (For those who are not German: in German, most nouns that refer to persons are not gender-neutral by default, but always refer to either male or female persons, with the male version having been the default for a long time. In the last decade, there has been a pushback against this, and people have started to adopt gender-neutral language, which often looks a bit clunky though.)

I saw that you justify this with better readability in your FAQ, but I didn't... (read more)

Sebastian Schwiecker (1y): Hi Chi! Thanks for your comments. We'll most likely start to "gender" once we relaunch the website sometime in the next couple of quarters. The reason I'm reluctant to do this is that I'm quite certain it will decrease the mass appeal of the website. So when we do it, we'll do it with the expectation of decreasing the amount of donations. Reasons are:
- Currently our site is kind of gender-neutral already, since we don't just use the male version; male and female versions alternate (see https://blog.zeit.de/glashaus/2018/02/07/gendern-schreibweise-geschlecht-maenner-frauen-ansprache/ for a longer explanation). There are at least some people who care about gender-neutral language who prefer this approach (it was also the new and progressive way to do it when I was at university).
- The vast majority of Germans don't use gender-neutral language themselves, and I would assume that most don't want it to be used in general. I don't have a data source for the latter, but the fact that pretty much all newspapers, unlike ours, don't use gender-neutral language seems to be a clear indicator of that. This obviously doesn't mean that one shouldn't do it, just that it's not mainstream yet.

Obviously it's very different with different demographics. E.g. when I think of people I'm close with, I know several who are kind of actively annoyed with gender-neutral language, but they are all 40+. It's not because they are opposed to the concept but because they are used to a different language, and it makes the language less appealing to them. I tend to agree (I'm also 40+). For me it's the same as with vegan food: it's the right thing to do, but it's just not as appealing as the stuff I'm used to. Talking to EAs who attended university during the last ten years, I'm sure it will be quite the opposite.

Training Bottlenecks in EA (professional skills)

Hey Kathryn, this is a bit off-topic, but I was wondering what that impostor syndrome training is that Michelle mentions in the post. Asking here because I imagine more people might be interested in this.

My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda

Hey Max, thanks for your comment :)

Yeah, that's a bit confusing. I think technically, yes, IDA is iterated distillation and amplification, and Iterated Amplification is just IA. However, IIRC many people referred to Paul Christiano's research agenda as IDA even though his sequence is called Iterated Amplification, so I stuck to the abbreviation that I saw more often while also sticking to the 'official' name. (I also buried a comment on this in footnote 6.)

I think lately, I've mostly seen people refer to the agenda and ideas as Iterated Amplification. (And IIRC I also think the amplification is the more relevant part.)

antimonyanthony (1y): I'm glad "distillation" is emphasized as well in the acronym, because I think it resolves an important question about competitiveness. My initial impression, from the pitch of IA as "solve arbitrarily hard problems with aligned AIs by using human-endorsed decompositions," was that this wouldn't work because explicitly decomposing tasks this way in deployment sounds too slow. But distillation in theory solves that problem, because the decomposition from the training phase becomes implicit. (Of course, it raises safety risks too, because we need to check that the compression of this process into a "fast" policy didn't compromise the safety properties that motivated decomposition in the training in the first place.)

The Case for Education

Hm, I'm not sure how easily it's reproducible/what exactly he did. I had to write essays on the topic every week, and he absolutely destroyed my first essays. I think reading someone's essay is an exceptionally good way to find out how much the person in question misunderstands, and I'm not sure how easily you can recreate this in conversation.

I guess the other thing was a combination of deep subject-matter expertise + [being very good at normal good things EAs would also do] + a willingness to assume that when I said something that didn't seem to make sense, it

... (read more)
Denis Drescher (1y): Interesting, thank you! Assuming there are enough people who can do the "normal good things EAs would also do," that leaves the problem that it'll be expensive for enough people with the necessary difference in subject-matter expertise to devote time to tutoring.

I'm imagining a hierarchical system where the absolute experts on some topic (such as agent foundations or s-risks) set some time aside to tutor carefully selected junior researchers at their institute; those junior researchers tutor somewhat carefully selected amateur enthusiasts; and the amateur enthusiasts tutor people who've signed up for (self-selected into) a local reading club on the topic. These tutors may need to be paid for this work to be able to invest the necessary time.

This is difficult if the field of research is new, because then (1) there may be only a small number of experts with very little time to spare and no one else who comes close in expertise, or (2) there may not yet be enough knowledge in the area to sustain three layers of tutors while still having a difference in expertise that allows for this mode of tutoring socially. But whenever problem 2 occurs, the hierarchical scheme is just unnecessary. So only problem 1 in isolation remains unsolved. Do you think that could work? Maybe this is something that'd be interesting for charity entrepreneurs to solve. :-)

What would also be interesting: (1) How much time do these tutors devote to each student per week? (2) Does one have to have exceptional didactic skills to become a tutor, or are these people only selected for their subject-matter expertise? (3) Was this particular tutor exceptional, or are they all so good? Maybe my whole idea is unrealistic because too few people could combine subject-matter expertise with didactic skill. Especially the skill of understanding a different, incomplete, or inconsistent world model and then providing just the information that the person needs to improve it seems unusual.

EA Meta Fund Grants – July 2020

Small point that's not central to your argument:

A similar thing might happen here: if there was a universal mentoring group that gave women access to both male and female mentors, why would they choose the segregated group that restricted them to a subset of mentors?

I had actually also asked WANBAM at some point whether they had considered adding male mentors as well, but for different reasons.

I think at least some women would still prefer female mentors. Anecdotally, I have often found that it's easier for other women to relate to some of my work-rela

... (read more)
Dale (1y): That makes perfect sense to me. But a co-ed mentoring group would presumably be able to offer female mentors to those who wanted them, leaving it equally good for those who preferred women and superior for those who were open-minded or preferred men. I guess some women might be too shy to specify "and I would like a woman" in a mixed group, so having WANBAM allows them to satisfy their preference more discreetly.

EA Forum update: New editor! (And more)

Is there a way to have footnotes and tables in the same post? I tried just now and can't see a way. (You have to switch to the EA Forum Docs [beta] editor for tables, which kills your footnotes; you have to switch to markdown for footnotes, which kills your tables.)


edit: I found some markdown code for tables which worked, but then had trouble with formatting within the table. I decided to just take screenshots of the tables and upload them as images instead, which also works. If anyone knows an easier/nicer way to do this, or if anything is planned, that would be great :)
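(For reference, the markdown table code was presumably GitHub-style pipe syntax along these lines; a minimal sketch with hypothetical cell contents, summarizing the editor trade-offs described above:)

```markdown
| Editor                | Footnotes | Tables            |
|-----------------------|-----------|-------------------|
| Markdown              | yes       | via pipe syntax   |
| Docs [beta]           | no        | yes               |
| Screenshot workaround | n/a       | uploaded as image |
```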

+1 that the footnotes issue is quite an inconvenience. 

The Case for Education

Thanks for writing this :) I certainly agree that the education system isn't optimal and maybe only useful to a handful of people. However, I'd like to offer myself as a data point of someone who actually thinks they benefit from their education. I'm worried that people might sometimes come away with the feeling that they're doing something wrong and pointless by going to uni / that they're only doing signalling, when that's not true in some cases.

I'm a bit of an outlier in that I'm actually in my second bachelor's degree and I ... (read more)

Denis Drescher (1y): Hi Chi! I keep thinking about this: if you have a moment, I'd be very interested to understand what exactly this tutor did right and how. Maybe others (like me) can emulate what they did! :-D

2019 Ethnic Diversity Community Survey

Thanks for doing this work!

I've thought of the "Improving awareness and training of social justice" point a bit in the past when thinking about gender diversity and find it difficult. I am a bit worried that it is extremely hard or impossible without everyone investing a substantial amount of time:

My impression is that a lot of (ethnic/gender/...) diversity questions have no easy fixes that some people can think about and implement, but would rather benefit a lot from every single person trying to educate themselves more to increase their own... (read more)

Concerning the Recent 2019-Novel Coronavirus Outbreak

I respect that you are putting money behind your estimates, and I get the idea behind it, but I would recommend you reconsider whether you want to do this (publicly) in this context, and maybe consider removing these comments. Not only because it looks quite bad from the outside, but also because I'm not sure it's appropriate on a forum about how to do good, especially if the virus should happen to kill a lot of people over the next year (also meaning that even more people would have lost someone to the virus). I personally found this quite morbid and I ... (read more)

I have downvoted this; here are my reasons:

Pretty straightforwardly, I think having correct beliefs about situations like this is exceptionally important, and maybe the central tenet this community is oriented around. Having a culture of betting on those beliefs is one of the primary ways in which we incentivize people to have accurate beliefs in situations like this.

I think doing so publicly is a major public good, and is helping many others think more sanely about this situation. I think the PR risk that comes with this is completely dwarfed by that con... (read more)

Sean_o_h (2y): I'm happy to remove my comments; I think Chi raises a valid point. The aim was basically calibration. I think this is quite common in EA and forecasting, but I agree it could look morbid from the outside, and these are publicly searchable. (I've also been upbeat in my tone for friendliness/politeness towards people with different views, but this could be misread as a lack of respect for the gravity of the situation.) Unless this post receives strong objections by this evening, I will delete my comments or ask moderators to delete them.
laurenwhetstone (3y): We've fixed the data link so it should be working now. Apologies for the inconvenience!