Gideon Futerman

1455 karma · Joined Dec 2021

Bio

Visiting Researcher at CSER working on the interaction of SRM and existential risk, and on how we can study existential risk. I have an interest in AI governance as well.

How I can help others

Reach out to me if you have questions about SRM/solar geoengineering.

Comments
118

Yeah, I basically agree with this.

  1. On evals, I think it is good for us to be doing as many evals as possible, firstly because both sorts of evaluations are important, but also because the more (even self-imposed) regulatory hurdles there are to jump through, the better. Slow it down and bring the companies under control.
  2. Indeed, the call is for broader political coalition building. Not everyone, not all the time, not on everything. But substantially more than we currently do.
  3. Yes
  4. There are a number of counterarguments to this post, but I didn't include them because a) I probably can't give the strongest counterarguments to my own beliefs, b) this post was already very long, and I had already cut sections on Actor-Network Theory and agency and something else I can't remember, and c) I felt it might muddle the case I'm trying to make here if it were interspersed with counterarguments. One quick point on counterarguments: I think a counterargument would need to be strong enough not just to show that the extreme end result is bad (a lot more coalition building would be bad), but probably that the post is directionally bad (some more coalition building would be bad).

Thanks for this. What does this data look like further out from resolution for community predictions?

What would the Brier score be for forecasts made significantly far from the event (6 months, 1 year, 2 years, let's say)?
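
To be concrete about what I mean, here is a minimal sketch of the calculation I have in mind; the field names, data layout, and day buckets are just illustrative assumptions on my part, not the actual dataset:

```python
# Hypothetical sketch: Brier scores for community forecasts, bucketed by how far
# out from resolution they were made. The keys (probability, outcome,
# days_to_resolution) and the day buckets are illustrative assumptions.
from statistics import mean

def brier(forecasts):
    """Mean squared difference between forecast probabilities and binary outcomes."""
    return mean((f["probability"] - f["outcome"]) ** 2 for f in forecasts)

def brier_by_lead_time(forecasts, buckets=((0, 182), (182, 365), (365, 730))):
    """Brier score restricted to forecasts made within each lead-time window (in days)."""
    scores = {}
    for lo, hi in buckets:
        subset = [f for f in forecasts if lo <= f["days_to_resolution"] < hi]
        if subset:  # skip empty buckets rather than computing a mean of nothing
            scores[(lo, hi)] = brier(subset)
    return scores
```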

A number of things. Firstly, this criticism may be straightforwardly correct; it may be pursuing something that is a first in history (though I'm less convinced of this, given e.g. bioweapons regulation); nonetheless, other approaches to TAI governance seem similar (e.g. trusting one actor to develop a transformative and risky technology and not use it for ill). It may indeed require such change, or at least a change in perception of the potential and danger of AI (which is possible).

Secondly, this may not be the case. Foundation models (our present worry) may be no more (or even less) beneficial in military contexts than narrow systems. Moreover, foundation models developed by private actors seem to pose a challenge to their power that neither the Chinese government nor the US military is likely to accept. Thus, AI development may continue without dangerous model growth.

Finally, very little development of foundation models is driven by military actors, and the actors that do develop them may be construed as legitimately trying to challenge state power. If we are on a path to TAI (we may not be), then it seems that in the near term only a very small number of actors, all private, could develop it. Maybe the US military could gain the capacity to, but at the moment it seems hard for them to do so.

Just quickly on that last point: I recognise there is a lot of uncertainty (hence the disclaimer at the beginning). I didn't go through the possible counterarguments because the piece was already so long! Thanks for your comment though, and I will get to the rest of it later!

'Expected harm can be still much lower': this may be correct, but I'm not convinced it's orders of magnitude. It also hugely depends on one's ethical viewpoint. My argument here isn't that under all ethical theories this difference doesn't matter (it obviously does), but that it matters very little for the actions of my proposed combined AI Safety and Ethics knowledge network. I think this answers your second point as well; I am addressing this call to people who broadly think that on the current path, risks are too high. If you think we are nowhere near AGI and that near-term AI harms aren't that important, then this essay simply isn't addressed to you.

I think this is the core point I'm making. It is not that the stochastic parrots vs superintelligence distinction is necessarily irrelevant if one is deciding for oneself whether to care about AI. However, once one thinks that the dangers of the status quo are too high, for whatever reason, then the distinction stops mattering very much.

I think this is a) not necessarily true (as shown in the essay, it could still lead to existential catastrophe, e.g. by integration with nuclear command and control) and b) if we ought to all be pulling in the same direction against these companies, why is the magnitude difference relevant? Moreover, your claims would suggest that the AI Ethics crowd would be more pro-AI-development than the AI Safety crowd. In practice, the opposite is true, so I'm not really sure why your argument holds.

I'd be pretty interested in this as well, particularly age. I feel political orientation may be a little harder to collect, as what these terms mean differs between countries, although there could still be ways to deal with this.

This would surprise me, given the funding already available to Anthropic anyway.

I found it odd that Ozy's piece implied the homogeneity section was trying to say that any of those traits were bad, particularly because ConcernedEAs said the 'Sam' description fit them all very well. It seems really strange, therefore, to suggest they think neurodivergent people are somehow bad. Also, I found the implication that saying the average EA is culturally Protestant is antisemitic a little bit bizarre, and I would quite like Ozy to justify this a bit more.
