
tamgent

597 karma · Joined Jun 2019

Posts (2)


Comments (124)

Why isn't anyone talking much about the Israel-Gaza situation on the EA Forum? I know it's a big time for AI, but I just read that the number of Palestinian deaths (the vast majority of whom are innocent people, and 65% of whom are women and children) is, in just the last 3-4 weeks, approaching the number of civilians killed in Ukraine since the Russian invasion 21 months ago.

Interesting that you don't think the post acknowledged your second collection of points. I thought it mostly did. 
1. The post did say it was not suggesting shutting down existing initiatives. So where people disagree on (for example) which evals to do, they can just do the ones they think are important, and then both kinds get done. I think the post was identifying a third set of things we can do together, and this was not specific evals but rather a broad narrative alliance when influencing large/important audiences. The post also suggested other areas of collaboration, on policy and regulation, and some of these may relate to evals, so there could be room for collaboration there; but I'd guess that more demand, funding, and infrastructure for evals helps both kinds of evals.
2. Again, I think the post addresses this issue: it talks about a specific set of things the two groups can work on together that it is in both their interests to do. It doesn't mean that everyone from each group will only work on this new third thing (coalition building), but if a substantial number do, it'll help. I don't think the OP was suggesting a full merger of the groups. They acknowledge the 'personal and ethical problems with one another; [and say] that needn’t translate to political issues'. The call is specifically for political coalition building.
3. Again I don't think the OP is calling for a merger of the groups. They are calling for collaborating on something.
4. OK, the post didn't do this much, but I don't think every post needs to, and I personally really liked that this one made its point so clearly. I would read a response post with some counterarguments with interest, so maybe that implies I think this one would have benefited from including some too. But I wouldn't want a rule or social expectation that every post lists counterarguments, as that can raise the barrier to entry for posting; people are free to comment with disagreements and to write counter-posts.

Nice paper on the technical ways you could monitor compute usage, but governance-wise I think we're extremely far behind on anything that would make an approach like this remotely plausible (unless I'm missing something, which I may well be).

Put aside question (b) from the abstract, getting international compliance, and just focus on (a), national governments regulating this for their own citizens. That likely requires some kind of regulatory authority with the remit and the powers to do it, including information-gathering powers that require companies by law to give specified information to the regulator. Such powers are common in regulation. However, we do not have AI regulators or even tech regulators (with the exception of data protection, whose remit is more specific). We have a bunch of sector regulators and some cross-sectoral ones (data protection, competition, etc.). The closest regulatory regime to being able to legally do something like this that I'm aware of is the EU's, via the EU AI Act, still in draft. This horizontal (not sector-specific) legislation will regulate all high-risk AI systems (the annexes give examples of what counts as high-risk). However, it does not define compute as a relevant risk parameter (to my knowledge; I think there is a new provision on General Purpose AI systems that could include this, so you might want to influence that, but I'm not sure what their capacity to enforce looks like).

No other western government has a comparable AI regulation plan. The US has a voluntary risk management framework. The UK has a largely voluntary policy framework it is developing (although it is starting to introduce more tech regulation, some of which will include AI regulation).

Of course, there are other parts of government than regulators, and I'd really like it if 'compute monitoring' work started to pay attention to how differently these different parts might use such a capability. One advantage of regulators is that they have clear, specified remits, and transparency requirements that they routinely balance with confidentiality obligations. Other government departments may have more latitude and less transparency.

On the competition vs caution framing, I think people often assume government is a homogeneous entity, when in fact there are very different parts of government with very different remits, and some remits are naturally aligned with a caution approach and others with a competition approach.

I don't think it's obvious that Google alone is the engine of competition here; it's hard to expect any company to simply do nothing when its core revenue generator is threatened (I'm not justifying them here). They're likely to try to compete rather than give up immediately and look for other ways to monetize. It's interesting that Google's core revenue generator (search) happens to be a possible application area of LLMs, the fastest-progressing and most promising area of AI research right now. I don't think OpenAI pursued LLMs for that reason (to compete with Google); they pursued them because they're promising. But it's interesting to note that search and LLMs are both bets on language being the thing to bet on.

Maybe posts themselves should have a separate agree/disagree vote.
