rosehadshar

Comments

Pre-announcing a contest for critiques and red teaming

Minor point on how you communicate novelty: I'm slightly worried about people misreading this and thinking 'oh, I have to be super original', and then either neglecting important unoriginal things like reassessing existing work, or twisting themselves into knots to prove how original they are.

I agree with you that, all else equal, a new insight is more valuable than one others have already had, but since originality is often over-egged in academia, it might be worth paying attention to how you phrase the novelty criterion in particular.
 

Pre-announcing a contest for critiques and red teaming

I think a list like this might be useful for other purposes too:

  • raising the profile of critiques that matter amongst EAs, thus hopefully improving people's thinking
  • signalling that criticism genuinely is welcome and seen as useful

Having a standard word-of-mouth goal

Personally I'm a bit wary of things like this. A few reactions:

  • You mention that the appropriate number might vary depending on extroversion. I think it should also vary by situation. A student in an average year might meet 20 new people who might be interested in hearing about EA stuff. Someone who works a steady job at a small company and has young kids might not meet any. A nice thing about 'donate 10%' is that it applies across the board and scales with people's incomes. This sort of thing doesn't.
  • I don't think you're doing a strong version of this, but I'm still worried about frames which can slip into 'I should persuade people to think like me'. I think this is particularly likely when the frame is 'introduce people to EA', rather than 'if I meet someone who I think might be interested in AI safety, then chatting about that is a natural, normal, and fun thing to do. Maybe they find it interesting and helpful; if so, good'. The latter thing just seems unproblematic and good to me; if people in the EA community aren't doing enough of this at the margin for some reason then pushing for more of it seems correct. The former I'm a bit wary of, though clearly some versions of it are good.

What pieces of ~historical research have been action-guiding for you, or for EA?

I think this is a really good summary of what historians might do; thanks, Oscar.

One contextual point is that I think 1 and 2 are something like 'central examples of useful things historians might do', rather than something like 'the main things current historians actually do'.

In particular, my (outdated) impression from when I studied history is that a lot of historical work is very zoomed-in source work that may not involve much integration or summarisation. Some of this work is necessary groundwork for 1 and 2; some of it, I think, comes from specialisation pressures within the field and doesn't produce much value.

What pieces of ~historical research have been action-guiding for you, or for EA?

I especially like your points on 2, Ramiro, and the distinction between studying history and what historians do. I'm interested in both of these things, and I also agree that 'studying history' is vague and ambiguous.

I'm still confused about what contentful things I'm trying to think about, so I'm using a kind of empty label, 'history', to point at the cloud of stuff I think might be relevant. My hope was that people would interpret 'history' differently, and I'd get a range of answers that might help me think about what I do and don't mean, and that I might get useful ideas I wouldn't have received if I'd asked for something narrower. But it's possible that the question is just too broad for people to generate responses to.

How big are risks from non-state actors? Base rates for terrorist attacks

Thanks for this, Michael; I think that's a good point. I've changed those labels to 'US radical right (see definition)' and 'US radical left (see definition)'. Not perfect, but less misleading.

Trading time in an EA relationship

Yes, very happy to respond to messages on this :)

Trading time in an EA relationship

Yeah, I think that's a good point.

I expect there are things other than ability to take risks that it's worth tracking too - like skill acquisition, demonstrable achievements...

Some reflections on testing fit for research

Thanks, I enjoyed that post (and it's quite short, for anyone considering whether to read it).

Some reflections on testing fit for research

This seems like a useful point, thanks!

It makes me want to give a clarification: the reflections above are just the most important things I happened to learn, not a list of the generally most important points to consider when testing fit for research. I think I'd need more research experience to write a good version of the latter (though I think my list probably overlaps with it somewhat).

I also want to respond to "you should definitely try [...] before you write off research in general". I think I agree with this, conditional on testing your fit for research in general being a sensible idea for you in the first place. Some thoughts:

  • There are loads and loads of other important things to do that are not research. For lots of people, I imagine there's more information in switching tack completely and trying a few new things than in working their way through a long list of different kinds of research.
  • The space of research is too big for it to be sensible to test your fit for everything, so you need to narrow down to things that seem especially fun/especially likely to be a good fit for you.
  • I particularly care about this because I think research has inflated prestige in the EA community, so there's a danger of people spending too much time testing fit for different kinds of research when really what they want is approval. I think the ideal solution here isn't 'keep testing your fit till you find some kind of research you're good at' - it's 'the norms of the community change such that there's more social reward for things other than research'.