Isaac Dunn




“Should you use EA in your group name?” An update on PISE’s naming experiment

I want to add to the chorus of commenters praising this post - thank you for writing it! I think it's really cool that you tried something you thought could have a high upside, actually checked whether it worked, and shared your findings.

A model for engagement growth in universities

Thanks for writing this! I hadn't thought about high engagement levels being more stable than medium or low ones, and that seems right to me. I agree that having people spend time with highly-engaged people is likely to be a good way to make them more engaged. And I definitely agree with your points about fidelity and epistemics being particularly important.

I'm uncertain about some of your suggestions, though. You suggest inviting a few "promising" people to socials where most people are highly engaged. I worry that doing this could create a "cool kids club" in-group vibe, where people who haven't been invited might not feel welcome in, or good enough for, EA. There are benefits to this - it might make people more strongly want to join the highly-engaged group - but it's not obvious to me that they're worth the cost of exclusivity.

Besides the "why am I not invited" cost, there's another cost that you point out: only adding a few new people limits how quickly the group can grow. I agree that your approach would fairly reliably create new HEAs, but my guess is we're early enough in working out how to grow EA that it's worth looking for a more scalable approach that (a) isn't exclusive and (b) has a better HEA to new person ratio. For example, 1:1 mentorship is somewhere in between your suggestion (several HEAs to each new person) and so-called "fellowships" (several new people to each HEA).

APPLY NOW | EA Global: London (29-31 Oct) | EAGxPrague (3-5 Dec)

I think this is a great question to ask.

As it happens, I think it probably is an effective use of money, in short because it's an investment in the human capital of the community, which is probably one of the main bottlenecks to impact at the moment. That's because there's a large amount of money committed to EA compared to the number of people in the community working out the best ways to spend it. It's true that there are global health charities that could absorb a lot more money, but there's interest in finding even more impactful ways to spend money!

ETA: looks like Stefan got there slightly quicker with a very similar answer!

University EA Groups Should Form Regional Groups

Here's a similar but slightly different suggestion: rather than there being one definitive regional organisation for each area, we just encourage the creation of more organisations that are in between local groups and large funders.

Some examples of these organisations:

  • A team that runs and is responsible for a few local groups (e.g. a successful local group expands locally)
  • An organisation that centralises certain specific group functions (e.g. marketing, organising talks, introductory programs), so that local groups can outsource
  • A team that specialises in seeding new groups, and provides significant help and support to new organisers
  • A national organisation that tries to coordinate and encourage collaboration across local groups, including keeping an eye on groups that are at risk of disappearing

Some reasons this could be good:

  • Local groups wouldn't automatically be reliant on the one designated regional organisation
  • A designated regional organisation that is doing poorly is a bigger problem, because it's more difficult to replace
  • Teams can specialise in the thing that they are good at, rather than having responsibility for every aspect of all local groups in their area
  • Teams can work at whatever regional scope makes most sense (could be just a few groups, could be global) - overlap in geographical scope between different kinds of these organisations is often fine, whereas every geographical location would need exactly one designated regional organisation

Some reasons that designated regional organisations could be better:

  • Who has responsibility for what would be clearer, so fewer things would fall through the cracks
  • The role of a regional organisation would be clearly defined and the same across regions, making evaluation easier, making knowledge sharing between regions easier, and making it more straightforward to start or join an organisation (i.e. a clearer career progression pipeline)
  • A regional organisation may be well placed to decide what kinds of functions to prioritise to best support its local region 

These are quick thoughts, and I'm likely missing important considerations. I'm not sure which approach seems better, and they aren't mutually exclusive, but I thought I'd share the thought.

A related example is multi-academy trusts in the UK school system, which are essentially organisations that run multiple schools. Schools can choose to join an existing trust, and trusts can start new schools. Rather than the central government funding each school individually, it funds trusts, which have responsibility for the schools under their control.

Thanks for the brilliant post, by the way, I'm really glad you wrote it!

Frank Feedback Given To Very Junior Researchers

I agree that it's valuable to give honest feedback if you think that someone should consider trying something else, rather than just giving blithely positive feedback that might cause them to continue pursuing something that's a bad fit.

It's probably worth being especially thoughtful about how such feedback is framed. For example, if feedback of type a) can be made constructive, it might come across as more sincerely encouraging: rather than "it's probably bad for you to do this kind of work", saying "I actually think that you might not be as well suited to this kind of work as others in the EA community, because others are better at [specific thing], but from [strength X] and [strength Y] that I've noticed, I wonder if you've considered [type of work T] or [type of work S]?" (I know that you were paraphrasing and wouldn't say those exact phrases to people.)

For feedback of type b), my gut reaction is that basically no one should be given feedback of that type, partly because of the risk of being wrong, as you say, but also because of the risk of exacerbating feelings that only sufficiently impressive people are welcome in EA. I guess it depends whether you mean "you're a valued member of this community, but not competitive for a job in the community" or "you're not good enough to be a member of this community". I agree that some people should be given the first type of feedback if you're sure enough, but I don't think anyone should be told they're not good enough to join the community.

Frank Feedback Given To Very Junior Researchers

Thanks for sharing this! I enjoyed the comments about picking the right scope for a project. I also liked the general nudge towards being transparent about reasoning and uncertainty rather than overstating how much evidence supports particular conclusions.

I think that it probably is worth the trouble to be more encouraging. I'd consider being specific about some things that have been done well, beginning and ending the feedback with encouraging words, and taking a final pass to word things in a way that implies that you're glad they've done this work and you're rooting for them. That said, it definitely seems much better to give unpolished feedback rather than no feedback, so if it'd be too high a burden then I'd go ahead with potentially discouraging feedback.

I agree that the EA community does try to be welcoming to new members, but I suspect that doing it even more would probably be good to counteract the shame and guilt I think many people might have about not being good enough for a community that places high value on success.

EA Forum feature suggestion thread

I suspect that many people don't post on the forum because they're worried about their post being poorly received and damaging their reputation in the EA community.

I believe this because I feel this way myself, because I've heard other people around me worrying a lot about posting to the forum, because Will MacAskill spoke on the 80,000 Hours podcast of being anxious about his reputation being damaged after posting on the forum, and because of the existence of Aaron Gertler's talk "Why you (yes, you) should post on the EA Forum".

Perhaps, by default, new posts could be anonymous until a certain karma threshold (say 30 karma) is met. After the post meets the karma threshold, the true author of the post could become visible.

That way, authors could post knowing that their reputation wouldn't be damaged if their post wasn't well received, but that they would get the credit if the post was well received.

I'd expect this to increase the number of posts (both good and bad) from hesitant new users, and I think that the increase in the number of mediocre new posts would be a cost worth paying. It's good for people to contribute and feel valued for their contribution, especially if it encourages them to make more valuable contributions in the future.

I think it'd be important that the anonymous-until-threshold was the default (i.e. opt out), so that people didn't feel embarrassed about using it.
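To make the suggested mechanism concrete, here's a minimal sketch of the visibility rule. The threshold of 30 is the example figure from the suggestion above; the function and field names are hypothetical, not any actual forum API:

```python
KARMA_THRESHOLD = 30  # example figure from the suggestion; not a real forum setting

def displayed_author(author: str, karma: int, opted_out: bool = False) -> str:
    """Return the name to display for a post's author.

    By default (opt-out), the author stays anonymous until the post's
    karma reaches the threshold, after which the true author is shown.
    """
    if opted_out or karma >= KARMA_THRESHOLD:
        return author
    return "Anonymous"
```

So a post sitting at 12 karma would show "Anonymous", and the same post would show its true author once it reached 30, giving credit only once the post had been well received.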

New? Start here! (Useful links)

I agree. How about just a right arrow? (🡲)

In defense of a "statistical" life

Thanks for writing this! I especially enjoyed the part where you described how donating has given you a sense of purpose and self-worth when things have been difficult for you - I can relate.

I think I have to disagree with your last point, though, because it seems to me that whenever we decide to spend resources, we are making a trade-off. A donation to an effective global health charity could in fact have gone to a different cause.

I don't think that diminishes how worthwhile any donation is, but I think that the spirit of effective altruism is to keep asking ourselves whether there's something else we could do that would be even better. What do you think?

Complex cluelessness as credal fragility

I agree that there may be cases of "complex" (i.e. non-symmetric) cluelessness that are nevertheless resiliently uncertain, as you point out.

My interpretation of @Gregory_Lewis' view was that rather than looking mainly at whether the cluelessness is "simple" or "complex", we should look for the important cases of cluelessness where we can make some progress. These will all be "complex", but not all "complex" cases are tractable.

I really like this framing, because it feels more useful for making decisions. The thing that lets us safely ignore a case of "simple" cluelessness isn't the symmetry in itself, but the intractability of making progress. I think I agree with the conclusion that we ought to be prioritising the difficult task of better understanding the long-run consequences of our actions, in the ways that are tractable.
