Could you elaborate on what you mean by "as ad tech gets stronger"? Is that just because all tech gets stronger with time, or is it in response to current shifts, like the Privacy Sandbox?
Yeah, I also had a strong sense of this from reading this post. It reminded me of a short piece by C. S. Lewis called The Inner Ring, which I highly recommend. Here is a sentence from it that sums it up pretty well, I think: "In the whole of your life as you now remember it, has the desire to be on the right side of that invisible line ever prompted you to any act or word on which, in the cold small hours of a wakeful night, you can look back with satisfaction?"
I found this to be an interesting way to think about this that I hadn't considered before - thanks for taking the time to write it up.
On the "philosophical side" paragraph: totally agree; this is why worldview diversification makes so much sense (to me). The necessity of certain assumptions leads to divergence in the kinds of work people do, and that is a very good thing, because we are almost certainly wrong in various ways, and we want to stay alert and open to new things that might be important. Perhaps on the margin an individual's most rational action could sometimes be to defer more, but as a whole, a movement like EA would be more resilient with less deference. (Disclaimer: I personally find myself very turned off by the deference culture in EA. Maybe that's just the way it should be, though.)

I do think that higher-deference cultures are better at cooperating and getting things done, and these are no easy tasks for large movements. But movements with these properties have, in the past, accidentally done terrible things as well as wonderful ones.

I'd guess there may be a correlation between people who think there should be more deference being in the "row" camp, and people who think there should be less being in the "steer" camp (or another camp) described here.
This is not about the EA community, but something that comes to mind which I enjoyed is the essay "The Tyranny of Structurelessness", written in the 70s.
I think the issue is that some of these motivations might cause us to make less of a positive difference than we think we're making: Goodharting ourselves.
Have you spoken to the Czech group about their early days? I'd recommend it, and can put you in touch with some folks there if you like.
Agreed. One book that made it really clear for me was The Alignment Problem by Brian Christian. I think that book does a really good job of showing how it's all part of the same overarching problem area.
I'm not Hayden, but I think behavioural science is a useful area for thinking about AI governance, in particular the design of human-computer interfaces. One example with currently widely deployed AI systems is recommender engines (though this is not an HCI example). I'm trying to understand the tendencies of recommenders towards biases like concentration, or contamination problems, and how these impact user behaviour and choice. I'm also interested in how what recommenders optimise for does or does not capture users' values, whether that's because of a misalignment of values between the user and the company, or because human preferences are complex and therefore just really hard to learn. In doing this, it's really tricky to distinguish in the wild between the choice architecture (the behavioural part) and the algorithm itself when attributing users' actions to one or the other.
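The concentration tendency I mean can be illustrated with a toy simulation (this sketch and all its numbers are my own illustrative assumptions, not anything from a real recommender): if a recommender weights items by their current popularity, a rich-get-richer feedback loop emerges even when all items are identical in appeal.

```python
import random
from collections import Counter

def simulate(n_users=2000, n_items=50, boost=0.0, seed=0):
    """Each simulated user clicks one item; the 'recommender' weights
    items by base appeal plus `boost` times current click count,
    modelling a popularity-feedback loop."""
    rng = random.Random(seed)
    clicks = Counter()
    appeal = [1.0] * n_items  # identical items: any concentration is pure feedback
    for _ in range(n_users):
        weights = [appeal[i] + boost * clicks[i] for i in range(n_items)]
        item = rng.choices(range(n_items), weights=weights)[0]
        clicks[item] += 1
    # Share of all clicks captured by the single most-clicked item.
    return max(clicks.values()) / n_users

flat = simulate(boost=0.0)          # no feedback: clicks spread roughly evenly
concentrated = simulate(boost=5.0)  # feedback: a few items capture most attention
```

With no feedback the top item's share stays near the uniform baseline (about 1/50 here); with popularity feedback it grows well beyond it, which is the kind of concentration effect I'm trying to measure in the wild.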
So from the perspective of the recruiting party, these reasons make sense. From the perspective of a critical outsider, the very same reasons can look bad (and are genuine reasons to mistrust the group that is recruiting):
- easier to manipulate their trajectory
- easier to exploit their labour
- free selection: building on top of / continuing the rich-get-richer effects of 'talented' people
- let's apply a supervised learning approach to high-impact people acquisition; the biases in the training data won't affect it