My first thought on reading this suggestion for working groups was "That's a great idea, I'd really support someone trying to set that up!"
My second thought was "I would absolutely not have wanted to do that as a student. Where would I even begin?"
My third thought was that even if you did organise a group of people to apply EA frameworks and build some recommendations from scratch, it would never compare to the research done by long-standing organisations that dedicate many experienced people's working lives to finding the answers. The co…
Agreed, hence "I don't even think the main aim should be to produce novel work". Imagine something between a Giving Game and producing GiveWell-standard work (much closer to the Giving Game end). Like the Model United Nations idea - it's just practice.
Really glad that you brought up this topic. Dedicating one's career (or an appreciable fraction of one's time or happiness) to a project that will likely fail is a huge deal for someone's personal narrative, and we're hoping that swathes of people will be committed enough to do this. I don't have any answers that aren't mere applause lights, but I hope this remains a prevalent discussion.
To clarify, my position could be condensed to "I'm not convinced small scale longtermist donations are presently more impactful than neartermist ones, nor am I convinced of the reverse. Given this uncertainty, I am tempted to opt for neartermist donations to achieve better optics."
The point you make seems very sensible. If I update strongly back towards longtermist giving I will likely do as you suggest.
That seems like a very robust approach if one had a clear threshold in mind for how many qualified AI alignment researchers would be enough. Sadly, I have neither any intuition or information on this, nor a finger on the pulse of that research community.
That's a really interesting point about Toby Ord!
Hi Olivia, really good of you to share these experiences. A few points I think might be helpful for your next conference:
- The social norms in EA are probably the most open and accepting of any group I've seen in my life. Provided two people aren't engaged in a focused one-on-one, walking up to a group and saying "Hi, can I join this conversation?" seems universally allowed, with no sense of alienation or awkwardness at all. People would always catch me up on the conversation topic and include me fully.
- It was my first conference too, and I also hadn't…
Will we need to email Clare whenever some new oxygen needs producing?
I was going to suggest the last point, but you're way ahead of me! In the next couple of years, the first batch of St Andrews EAs will have fully entered the world of work/advanced study, and keeping some record of what the alumni are doing would be meaningful.
[As highlighted in the thread post, we are two EAs who know each other outside the forum.]
I think some form of this could be valuable, noting Sebastian's point that decreasing risk should be the main priority. Reading the main article, it struck me that EAs' tendency to congregate geographically poses a challenge from a long-term perspective. Oxford, the community's beating heart, is uncomfortably close to London (the obvious civilian target) and Portsmouth (home of the Royal Navy, and probably the second-highest-priority military target), meaning a large fraction of the community could be wiped out in a nuclear war. It might be prudent for EAs who can work remotely to set up 'colonies' in places unlikely to be devastated by a nuclear exchange, to provide resilience.
I'm definitely going to change my attitude to community building, to the extent I'm involved with it, as a result of reading this. Making sure that criticisms are addressed to the satisfaction of the critic seems hugely important, and I don't think I had grasped that before.