Interesting that you don't think the post acknowledged your second collection of points. I thought it mostly did.
1. The post did say it was not suggesting we shut down existing initiatives. So where people disagree on (for example) which evals to do, they can each do the ones they think are important, and then both kinds get done. I think the post was identifying a third set of things we can do together: not specific evals, but a broader narrative alliance when influencing large/important audiences. The post also suggested some other...
Nice paper on the technical ways you could monitor compute usage, but governance-wise, I think we're extremely behind on anything making an approach like this remotely plausible (unless I'm missing something, which I may well be).
If we put aside question (b) from the abstract, getting international compliance, and just focus on (a), national governments regulating this for their own citizens: this likely requires some kind of regulatory authority with the remit and the powers to do it. That includes information-gathering powers, which require compan...
On the competition vs caution approach, I think people often assume government is a homogeneous entity, when in fact there are very different parts of government with very different remits, and some remits are naturally aligned with a caution approach while others align with a competition approach.
I don't think it's obvious that Google alone is the engine of competition here. It's hard to expect any company to simply do nothing if its core revenue generator is threatened (I'm not justifying them here); they're likely to try to compete rather than give up immediately and work on other ways to monetize. It's interesting to note that Google's core revenue generator (search) just happened to be a possible application area of LLMs, the fastest-progressing/most promising area of AI research right now. I don't think OpenAI p...
I am imagining a hoverable [i] info button, not putting it in the terms themselves, as people often don't bother to even open terms of use because they know they'll be long and legalistic.
There could be a little information summary next to the terms of use which is more accessible that explains the implications eg as you have here.
I would also be interested in knowing who/which org was "owning" the relationship with FTX...
Not to assign blame, but to figure out what the right institutional responsibility/oversight should have been, and what needs to be put in place should a similar situation emerge in future.
Surely it's the people working for the FTX Foundation who were the connection between FTX and EA.
Are people downvoting because they believe this is not relevant enough to the FTX scandal? I understand it is only tangentially relevant (i.e. FTX abused its customers' money; it did not start a Ponzi scheme). Or maybe because it is insensitive or wrong to share critical pieces about the wider area at a time like this, in case people's emotions about the event get overgeneralised to related wider debates? If people disagreed with my view that the video has good arguments or is educational, they would have disagree-voted instead. My intention in sharing it was tha...
Some people are saying this is no surprise, as all of crypto was a Ponzi scheme from the start.
Earlier this year when it went semi-viral I watched 'The Line Goes Up', which I found pretty educational (as an outsider). Despite the title, it's about more than NFTs, and covers crypto/blockchain/DLT/so-called 'web3' stuff. It is a critical/skeptical take on the whole space with lots of good arguments (in my view).
Was going to ask if you had integrity failure or failure by capture in mind, but I think what I had in mind for these already overlaps to a large extent with what you have under rigor failure.
It seems to me Jack believes they are impactful and is wondering why they are therefore absent from EA literature. I could be wrong here; he could instead be unsure how impactful it is, and assuming that if EA hasn't indexed it, it's not impactful (fwiw I think this general inference pattern is pretty wrong). He additionally seems to be wondering whether he should work there, taking into account the views people from this community might have when making his decision.
I also don't get this. I can't help thinking about the Inner Ring essay by C.S. Lewis. I hope that's not what's happening.
I am a software engineer who transitioned to tech/AI policy/governance. I strongly agree with the overall message (or at least the title) of this article: that AI governance needs technical people/work, especially for the ability to enforce regulation.
However, in the 'types of technical work' you lay out, I see some gaping governance questions/gaps. You outline various tools that could be built to improve the capability of actors in the governance space, but there are many such actors, and tools by their nature are dual-use - where is the piece on wh...
Every time I've used VR (including the latest headsets), I feel sick and dizzy afterwards. I don't think this issue is unique to me. It is difficult for me to imagine that most people would want to spend significant daily time in something that has such an effect, and nothing in this post addressed this issue. Your prediction feels wildly wrong to me.
Great development. Does this mean GovAI will start inputting to more government consultations on AI and algorithms? The UK gov recently published a call for input on its AI regulation strategy - is GovAI planning to respond to it? On the regulation area - there's a lot of different areas of regulation (financial, content, communication infra, data protection, competition and consumer law), and the UK gov is taking a decentralised approach, relying on individual regulators' areas of expertise rather than creating a central body. How will GovAI stay on top of these different subject matter areas?
Just to add to UK regulator stuff in the space: the DRCF has a stream on algorithm auditing. Here is a paper with a short section on standards. Obviously it's early days, and focused on current AI systems, but it's a start: https://www.gov.uk/government/publications/findings-from-the-drcf-algorithmic-processing-workstream-spring-2022/auditing-algorithms-the-existing-landscape-role-of-regulators-and-future-outlook
Well I disagree but there's no need to agree - diverse approaches to a hard problem sounds good to me.
AI doesn't exist in a vacuum, and TAI won't either. AI has messed up, is messing up, and will mess up bigger as it gets more advanced. Security will never be a 100% solved problem, and aiming for zero breaches of all AI systems is unrealistic. I think we're more likely to have better AI security with standards - do you disagree with that? I'm not a security expert, but here are some relevant considerations from one, applied to TAI. See in particular the section "Assurance Requires Formal Proofs, Which Are Provably Impossible". Given the probably imposs...
I can respond to your message right now via a myriad of potential software clients because of the establishment of a technical standard, HTTP. Additionally, all major web browsers run and interpret JavaScript, in large part due to SSOs like the IETF and W3C. By contrast, on mobile we have two languages for the duopoly, and a myriad of issues I won't go into; suffice to say there has been a failure of SSOs in that space to replicate what happened with web browsing and the early internet. It may be that TAI presents novel and harder challenges, but in some of the h...
Thank you kindly for the summary! I was just thinking today, as the paper was making the rounds, that I'd really like a summary of it while I'm waiting to make the time to read it in full. So this is really helpful for me.
I work in this area, and can attest to the difficulty of getting resources towards capability building for detecting trends towards future risks, as opposed to simply firefighting the ones we've been neglecting. However, I think the near vs long term distinction is often unhelpful and limited, and I prefer to try to think about things i...
Agreed that on aggregate it's good for a collection of people to pursue many different strategies, but would you personally/individually weight all of these equally? If so, maybe you're just uncertain? My guess is that you don't weight them all equally. Maybe another framing is to put probabilities on each and then dedicate a proportionate share of resources accordingly. This is a very top-down approach though, and in reality people will do what they will! As an individual, it seems hard to me to span more than two adjacent beliefs on any axis. And when I look at my own work and beliefs, that checks out.
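To make the "probabilities, then proportionate resources" framing concrete, here is a minimal sketch. The strategy names and credences are purely illustrative, not anything from the discussion above:

```python
# Assign a credence to each strategy, then split a resource budget in
# proportion to credence. Names and numbers below are made up for illustration.
def allocate(credences: dict, budget: float) -> dict:
    total = sum(credences.values())  # normalise in case credences don't sum to 1
    return {name: budget * c / total for name, c in credences.items()}

split = allocate({"strategy_a": 0.5, "strategy_b": 0.3, "strategy_c": 0.2},
                 budget=100)
# → {'strategy_a': 50.0, 'strategy_b': 30.0, 'strategy_c': 20.0}
```

Of course, as the comment notes, no individual or movement actually allocates this cleanly; it's a framing device, not a plan.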
Could you elaborate on what you mean by 'as ad tech gets stronger'? Is that just because all tech gets stronger with time, or is it a response to current shifts, like the Privacy Sandbox?
Yeah I also had a strong sense of this from reading this post. It reminded me of this short piece by C. S. Lewis called The Inner Ring, which I highly recommend. Here is a sentence from it that sums it up pretty well I think:
In the whole of your life as you now remember it, has the desire to be on the right side of that invisible line ever prompted you to any act or word on which, in the cold small hours of a wakeful night, you can look back with satisfaction?
I found this to be an interesting way to think about this that I hadn't considered before - thanks for taking the time to write it up.
On the philosophical side paragraph - totally agree; this is why worldview diversification makes so much sense (to me). The necessity of certain assumptions leads to divergence of kinds of work, and that is a very good thing, because maybe (almost certainly) we are wrong in various ways, and we want to be alive and open to new things that might be important. Perhaps on the margin an individual's most rational action could sometimes be to defer more, but as a whole, a movement like EA would be more resilient with less deference.
Disclaimer: I personall...
This is not about the EA community, but something that comes to mind which I enjoyed is the essay The Tyranny of Structurelessness, written in the 70s.
I think the issue is that some of these motivations might cause us to just not actually make as much positive difference as we might think we're making. Goodharting ourselves.
Have you spoken to the Czech group about their early days? I'd recommend it, and can put you in touch with some folks there if you like.
Agreed. One book that made it really clear for me was The Alignment Problem by Brian Christian. I think that book does a really good job of showing how it's all part of the same overarching problem area.
I'm not Hayden, but I think behavioural science is a useful area for thinking about AI governance, in particular the design of human-computer interfaces. One example with current widely deployed AI systems is recommender engines (this is not an HCI example). I'm trying to understand the tendencies of recommenders towards biases like concentration or contamination problems, and how they impact user behaviour and choice. Additionally, how what they optimise for does/does not capture their values, whether that's because of a misalignment of values between the u...
So from the perspective of the recruiting party these reasons make sense. From the perspective of a critical outsider, these very same reasons can look bad (and are genuine reasons to mistrust the group that is recruiting):
- easier to manipulate their trajectory
- easier to exploit their labour
- free selection: building on top of / continuing the rich-get-richer effects around 'talented' people
- let's apply a supervised learning approach to high-impact people acquisition, as if the training data biases won't affect it
I've wondered in the past whether it's like dropout in a neural network. (I've never looked into this and know nothing about it)
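For anyone else unfamiliar with the analogy, a minimal sketch of what dropout actually does (this is a generic illustration, not a claim about the community dynamics being discussed): during training, each unit is independently zeroed with probability p, and the survivors are rescaled by 1/(1-p) so the expected activation is unchanged.

```python
import numpy as np

def dropout(x, p=0.5, rng=None):
    # Zero each unit with probability p; rescale survivors ("inverted dropout")
    # so the expected value of the output equals the input.
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p  # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)

activations = np.ones(10_000)
dropped = dropout(activations, p=0.5)
# Roughly half the units are zeroed; survivors are scaled to 2.0,
# so the mean stays near 1.0.
```

The rough intuition behind the analogy would be that randomly "switching off" parts of a system forces the rest to avoid over-relying on any one part.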
Yeah, I just couldn't understand his comment until I realised that he'd misunderstood the OP as saying it should be a big movement, rather than a movement with diverse views that doesn't deter great people for holding different views. So I was looking for an explanation, and that's what my brain came up with.
Your comment now makes more sense given that you misunderstood the OP. Consider adding an edit at the top of your comment mentioning what your misunderstanding was; I think it'd help with interpreting it.
So you agree 3 is clearly false. I thought that you thought it was near enough true to not worry about the possibility of being very wrong on a number of things. Good to have cleared that up.
I imagine then our central disagreement lies more in what it looks like once you collapse all that uncertainty on your unidimensional EV scale. Maybe you think it looks le...
Yeah maybe. Sorry if you found it unhelpful, I could have been clearer. I find your decomposition interesting. I was most strongly gesturing at the third.
Correct me if I'm wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked 'is X or Y more impactful'?
I got this impression from what I understood your main point to be, something like:
There is a tail of talented people who will make the most impact, and any diversion of resource towards less talented people will be lower expected value.
I think there are several assumptions in both of these points that I want to unp...
Correct me if I'm wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked 'is X or Y more impactful'?
I think this is unhelpfully conflating at least three pretty different concepts.
(I also felt that the applause lights argument largely didn’t hold up and came across as unnecessarily dismissive, I think the comment would have held up better without it)
I guess some scientific topics have pretty good evidence behind them and are hard to believe are extremely wrong (e.g. physics), given how much that is based on them works so well today. Then there are other scientific/medical areas that look scientific/medical without having the same robust evidence base. I'd like to read a short overview meta-analysis, with some history of each field that claims (and is widely believed) to be scientific/medical, discussing some of its core ideas and evaluating how sure we are that it is good and real in the way that a lot of physics is. I don't want to name particular other scientific/medical areas to contrast, but I do have at least one prominently in mind.
Mmm, I sense a short life thus far. I posit that the shorter the life thus far, the more likely you are to feel this way. How high impact! Think of all the impact we can make on the impactable ones!
Some things I like about this post:
- I like the topic; I am interested in failure, and places where failure and mistake-making are discussed openly feel more growthy.
- I liked that you gave lots of examples.
Some things I didn't like about this post:
- I couldn't always see the full connections you were making, or I could but had to leap to them based on my own preconceptions; maybe they could be explained more? For example, one stated benefit was a stronger community, but you didn't explain the mechanism by which that leads to a stronger community. I ...
Thanks for sharing your motivations! Personally, I would have liked to read your original post, even if it was more one-sided, and to get the other side elsewhere. Being helped with heuristics for making decisions is not really what I was looking for in this post; it feels paternalistic and contrived to me, and I'd enjoy you advocating earnestly for more of something you think is good.
Why isn't anyone talking about the Israel-Gaza situation much on the EA Forum? I know it's a big time for AI, but I just read that the number of Palestinian deaths, the vast majority of whom are innocent people (65% women and children), is approaching, in just the last 3-4 weeks, the number of civilians killed in Ukraine since the Russian invasion 21 months ago.
The Israel-Gaza situation doesn't strike me as very neglected or tractable. The eyes of much of the world are on that situation, and it's not clear to me that EA actors have much to add to the broader conversation. It's also not clear to me why we would expect that actions that EA actors could take would be expected to have a significant impact on the situation.
- It's true that the Russian invasion also garnered heavy public attention. However, I'd suggest that it touched on existing EA knowledge bases (e.g., great power conflict and nuclear security) more t