I recently founded Apollo Research: https://www.apolloresearch.ai/
I was previously doing a Ph.D. in ML at the International Max Planck Research School in Tübingen, working part-time with Epoch, and doing independent AI safety research.
For more see https://www.mariushobbhahn.com/aboutme/
I subscribe to Crocker's Rules
Just to clarify, we are not officially coordinating this or anything. We were just brainstorming ideas. And so far, none of this has been set up intentionally by us.
But some cities and regions have just grown organically very quickly over the last half year. Mexico, Prague, Berlin, and the Netherlands come to mind as obvious examples.
Usually just asking a bunch of simple questions like "What problem is your research addressing?", "Why is this a good approach to the problem?", "Why is this problem relevant to AI safety?", "How does your approach attack the problem?", etc.
Just in a normal conversation that doesn't feel like an interrogation.
No, I think there is a phase where everyone wishes they had renewables but can't yet get them, so they still use fossil fuels. I think energy production will stay roughly constant or increase, but the way we produce it will change more slowly than we would hope.
I don't think we will have a serious decline in energy production.
I think narrow AIs won't cause mass unemployment but more general AIs will. I also think that, objectively, that isn't a problem at that point anymore because AIs can do all the work, but I think it will take at least another decade for humans to accept that.
The narrative that work is good because you contribute something to society and so on is pretty deeply ingrained, so I guess lots of people won't be happy after being automated away.
I think narrow AIs won't cause massive unemployment but the more general they get, the harder it will be to justify using humans instead of ChatGPT++
I think education will have to change a lot because students could literally have their homework done entirely by ChatGPT and get straight A's all the time.
I guess it's something like "until grade X you're not allowed to use a calculator, and after that you can", but for AI. So it will be normal that you can produce an essay in 5 seconds, similar to how a calculator lets you do in 5 seconds complicated math that would usually take hours on paper.
What do you think a better way of writing would have been?
Just flagging uncertainties more clearly or clarifying when he is talking about his subjective impressions?
Also, while that doesn't invalidate your criticism, I always read The Most Important Century as something like "Holden describes in one piece why he thinks we are potentially in a very important time. It's hard to define what that means exactly, but we all kind of get the intention. The arguments about AI are hard to make precisely because we don't really understand AI and its implications yet, but the piece puts the current evidence together so that we get slightly less confused."
I explicitly read the piece NOT as something that would be written in academic analytic philosophy, but much more as something that points to this really big thing we currently can't articulate precisely but all agree is important.