Create prediction markets and forecasting questions on AI risk and biorisk. I have been awarded an FTX Future Fund grant and I work part-time at a prediction market.
Use my connections on Twitter to raise the profile of these predictions and increase the chance that decision-makers discuss these issues.
I am uncertain about how to run an org for my FTXFF grant and whether prediction markets will be a good indicator in practice.
Talking to people in forecasting to improve my forecasting question generation tool.
Writing forecasting questions on EA topics.
Meeting EAs I become lifelong friends with.
Connecting them to other EAs.
Writing forecasting questions on Metaculus.
Talking to them about forecasting.
I tentatively think that a "high feedback" culture needs to be:
In particular, I sense most EAs should work on these things rather than giving more feedback, unless the person has asked for it or is doing more than, say, $100k of harm.
Also, as a side note: sometimes a desire for feedback can be unhealthy. It can be a way of deferring responsibility to others, or of avoiding the work of figuring out for yourself what is right and wrong - "if everyone can give feedback and they aren't, my behaviour must be fine". Sometimes I ask for feedback out of a desire to hurt myself. In general I think feedback is good, but at times it can become pathological. I sense this isn't the case for most people.
I imagine it has cost, and still costs, 80k to push for AI safety work even when it was weird; now it seems mainstream.
Like, I think an interesting metric is when someone says something that shifts some kind of group vibe. And sure, catastrophic-risk folks are into it, but many EAs aren't and would have liked a more holistic approach (I guess).
So it seems a notable tradeoff.
I get why I and others give to GiveWell rather than catastrophic risk - sometimes it's good to know your "impact account" is positive even if all the catastrophic-risk work turned out to be useless.
But why do people not give to animal welfare in this case? Seems higher impact?
And if it's just that we prefer humans to animals that seems like something we should be clear to ourselves about.
Also I don't know if I like my mental model of an "impact account". Seems like my giving has maybe once again become about me rather than impact.
ht @Aaron Bergman for surfacing this
I think that's part of the problem.
Who is loyal to the Chinese people?
And I don't think I'm good here. I think I try to be loyal to them, but I don't know what the Chinese people want, and if I try to guess I'll get it wrong in some key areas.
I'm reminded of when GiveWell (I think?) asked recipients how they would trade money for children's lives, and they really fucking loved saving children's lives. If we are doing things for others' benefit, we should take their weightings into account.
There is more to get into here but two main things:
And since posting this I've said it to several people, and one was like "yeah, no, I would downrate religious people too".
I think a poll on this could make for pretty uncomfortable reading. If you don't think so, run it and see.
To put it another way: would EAs discriminate against people who believe in astrology? I imagine more than the base rate. Part of me agrees with that; part of me thinks it's norm-harming to do. But I don't think this one is "less than the population".
Also I guess that current proposals would benefit OpenAI, Google DeepMind and Anthropic. If large training runs have to be registered, those orgs already have the money and infrastructure, while smaller orgs would need to build it if they wanted to compete. It probably would benefit them.
As you say, I think it's wrong to say this is their primary aim (what other CEOs would say their products might kill us all in order to achieve regulatory capture?), but there is real benefit.
Thanks for your work Claire. I am really grateful.
I feel frustrated that many people learned a new concept, "longtermism", which many misunderstand and associate with EA, while now even many EAs don't think the concept is that high a priority. Feels like an error on our part that could have been predicted beforehand.
I am grateful for all the hard work that went into popularising the concept, and I think weak longtermism is correct. But I dunno, it seems like an "oops" moment that it would be helpful for someone to acknowledge.