brook

31 · Joined Mar 2022

Bio

https://www.lesswrong.com/users/brook

Comments: 4

Topic Contributions: 69

I agree that publishing results of the form "it turns out that X can be done, though we won't say how we did it" is clearly better than publishing your full results, but I think doing so is still much more harmful than publishing nothing in a world where other people are still doing capabilities research.

This is because it seems to me that knowing something is possible is often the first step towards understanding how. This is especially true if you have any understanding of where the researcher or organisation was looking before publishing the result.


I also think there are worlds where openly critiquing capabilities research is importantly harmful, but I lean towards thinking we're not in such a world, and the tone of this post is a pretty good model for how this should look going forwards. +1!

This is a good point, and I think most of the expected value is indeed in X-risk. I'd still include COVID-level or HIV-level emerging pandemics as worth thinking about even if they don't represent X-risks, though.

It's not obvious to me that generalised solutions (covering both natural and man-made pandemics) are the most efficient answer. As a random, un-researched example: it could be really cheap to encourage farmers to wear gloves or surgical masks when handling certain animals (or in certain regions), but that's only worth doing if we're worried about pandemics emerging from farm animals.

Hi everybody! I'm Victoria, I'm currently based in Edinburgh and I heard about EA through LessWrong. I've been involved with the local EA group for almost a year now, and with rationalism for a few years longer than that. I'm only now getting around to being active on the forum here. 

I was a medical student, but I'm taking a year out and seriously considering moving into either direct existential risk research/policy or something like operations/'interpreting' research. When I've had opportunities to do that kind of work, I've really enjoyed it. I've also previously freelanced with Nonlinear and CEA on research and writing gigs.

Long-term I could see myself getting into AI, possibly something like helping build infrastructure for AI researchers to better communicate, or direct AI work (with my neuroscience degree).

See youse all around!

I think something like "only a minority of people [specific researchers, billionaires, etc.] are highly influential, so we should spend a lot of energy influencing them" is a reasonable claim that implies we maybe shouldn't spend as much energy empowering everyday people. But I haven't seen any strong evidence either way about how easy it is to (say) convert 1,000 non-billionaires to donate as much as one billionaire. 

I do think the above view has some optics problems, and that many people who 'aren't highly influential' obviously could become so if they e.g. changed careers. 

As somebody strongly convinced by longtermist arguments, I do find it hard to 'onboard' new EAs without somebody asking "do you really think most people will sit and have a protracted philosophical discussion about longtermism?" at some point. I think there are two reasonable approaches to this:

  1. If you start small (suggest donating to the AMF instead of some other charity, and maybe coming along to some EA meetings), some people who would otherwise have been put off will become more invested and research longtermism on their own.
  2. It's useful to have different pitches for EA for different audiences: discuss longtermism with people in philosophy or related fields, and lead with something easier to explain the rest of the time. My impression is that this is the pitch you're making in this post?

I'm not currently convinced of either view, but I'd be interested to hear about other people's experiences.