David T

I don't disagree that these are also factors, but if tech leaders are pretty openly stating that they want regulation to happen and that they want to guide the regulators, I think it's accurate to say they're currently more motivated to achieve regulatory capture (for whatever reason) than to ensure that x-risk concerns don't become a powerful political argument, as the OP suggests. That was the fairly modest claim I made.

(Obviously far more explicit and cynical claims about, say, Sam Altman's intentions in founding OpenAI exist, but the point I made doesn't rest on them)

Because their leaders are openly enthusiastic about AI regulation and are saying things like "better that the standard is set by American companies that can work with our government to shape these models on important issues" or "we need a referee", rather than arguing that their tech is too far away from AGI to need any regulation or that the risks of AI are greatly exaggerated, as you might expect if they saw AI safety lobbying as a threat rather than an opportunity.

I'm not sure that I buy that critics lack motivation. At least in the space of AI, there will be (and already are) people with immense financial incentive to ensure that x-risk concerns don't become very politically powerful.

The current situation still feels like one where the incentives are relatively small compared with the incentive to create the appearance that the existence of anthropogenic climate change is still uncertain. Over decades, climate advocates have succeeded in actually reducing fossil fuel consumption in many countries, as well as securing less-likely-to-be-honoured commitments to Net Zero, and direct and indirect energy costs are a significant part of everyone's household budget.

Not to mention that Big Tech companies whose business plans might be most threatened by "AI pause" advocacy are currently seeing more general "AI safety" arguments as an opportunity to achieve regulatory capture...

I don't believe that the people currently doing high-quality x-risk advocacy would counterfactually be writing nasty newspaper hit pieces (these just seem like totally different activities), or that Timnit would write more rigorously if people gave her more money.

I don't think that's what the OP argues, though.[1] The argument is that the people motivated to seek funding to assess x-risk as a full-time job tend disproportionately to be people who think both x-risk and the ability to mitigate it are significant. So of course advocates produce more serious research, and of course people who don't think it's that big a deal don't tend to choose it as a research topic (and on the rare occasions they do put actual effort in, it's relatively likely to be motivated by animus against x-risk advocates).

If those x-risk advocates had to do something other than x-risk research for their day job, they might not write hit pieces, but there would be blogs instead of a body of high quality research to point to, and some people would still tweet angrily and insubstantially about Sam Altman and FAANG. 

Gebru's an interesting example looked at the other way round, because she does write rigorous papers on her actual research interests as well as issuing shallow, hostile dismissals of groups in tech she doesn't like. But funnily enough, nobody's producing high-quality rebuttals of those papers[2] - they're happy to dismiss her entire body of work based on disagreeing with her shallower comments. Less outspoken figures than Gebru write papers along similar lines, but those don't get that engagement at all.

I do agree people love to criticize.

  1. ^

    the bar chart for x-risk believers without funding actually stops short of the "hit piece" FWIW

  2. ^

    EAs may not actually disagree with her when she's writing about implicit biases in LLMs or concentration of ownership in tech rather than tweeting angrily about TESCREALs, but obviously some people and organizations have reason to disagree with her papers as well.

I also think that it's far from a given that the option which would minimise consumer harm from monopoly would also minimise pressure to race.

An AI research institute spun off by the regulator and under pressure to generate business models to stay viable is plausibly a lot more inclined to 'race' than an AI research institute swimming in ad money, which can earn its keep by incrementally improving search, ads and phone UX while generating good PR with its more abstract research along the way. Monopolies are often complacent about exploiting their research findings, and Google's corporate culture has historically not been particularly compatible with launching the sort of military or enterprise tooling that represents the most obviously risky use of 'AI'.

There are of course arguments the other way (Google has a lot more money and data than putative spinouts), but people need to predict what a divested DeepMind would do before concluding that breaking up Google is a safety win.

I don't think the "3% credence in utilitarianism" is particularly meaningful; doubting the merits of a particular philosophical framework someone uses isn't an obvious reason to be suspicious of them. Particularly not when Sam ostensibly reached similar conclusions to Will about global priorities, and MacAskill himself has obviously been profoundly influenced by utilitarian philosophers in his goals too.

But I do think there's one specific area where SBF's public philosophical statements were extremely alarming even at the time, and he made them whilst in "explain EA" mode. That's when Sam made it quite clear that if he had a 51% chance of doubling world happiness vs a 49% chance of ending it, he'd accept the bet... a train to crazytown not many utilitarians would jump on, and one which sounds a lot like how he actually approached everything.

Then again, SBF isn't a professional philosopher and never claimed to be, other people have said equally dumb stuff and not gambled away billions of other people's money, and I'm not sure MacAskill himself would even have read or heard Sam utter those words.

I also didn't vote but would be very surprised if that particular paper - a policy proposal for a biosecurity institute in the context of a pandemic - was an example of the sort of thing Oxford would be concerned about affiliating with (I can imagine some academics being more sceptical of some of the FHI's other research topics). Social science faculty academics write papers making public policy recommendations on a routine basis, many of them far more controversial.

The postmortem doc says "several times we made serious missteps in our communications with other parts of the university because we misunderstood how the message would be received", which suggests it might be internal messaging that lost them friends and alienated people. It'd be interesting to know whether there are any specific lessons to be learned, but it might well boil down to academics being rude to each other, and the FHI seems to want to emphasize that it was more about academic politics than anything else.

I think a dedicated area would minimise the negative impact on people who aren't interested, whilst potentially adding value (to prospective applicants in understanding what did and didn't get accepted, and possibly also to grant assessors if commenters occasionally offered additional insight).

I'd expect there would be some details of some applications that wouldn't be appropriate to share on a public forum, though.

I think the combination of a bottom-up approach, with local communities proposing their own improvements, and EA-style rigorous quantitative evaluation (which, like you say, would be best undertaken by evaluators based in similar LMICs) is potentially really powerful, and I'm not sure to what extent it's already been tried in mainstream aid.

Or, possibly even better from a funding perspective, turn that round and have an organization that helps local social entrepreneurs secure institutional funding for their projects (a little bit like Charity Entrepreneurship). Existing aid spend is enormous, but I don't think people like Antony find it easy to access.

I also think there's the potential for interesting online interaction between the different local social entrepreneurs (especially those who have already part-completed projects with stories to share), putative future donors and other generally interested Westerners who might bring other perspectives to the table.  I'm not sure to what extent and where that happens at the moment.

I’d also extend this to people who have strong skills and expertise that’s not obviously convertible into ‘working in the main EA cause areas’.

I think this is a key part. "Main EA cause areas" does centre a lot on a small minority of people with very specific technical skills and the academic track record to participate in them (especially if you're taking 80k Hours for guidance on that front).

But people can have a lot of impact in areas like fundraising with a completely different skillset (one that is less likely to benefit from a quantitative degree from an elite university) or earn well enough to give a lot without having any skills in research report writing, epidemiology or computer science.

And if your background isn't one that the "do cutting-edge research or make lots of money to give away" advice is tailored to at all, there are a lot of organizations doing a lot of effective good that really, really, really need people with the right motivations allied to less niche skillsets. So I don't think people should feel they're not a 'success' if they end up doing GHD work rather than paying for it, and if their organization isn't particularly adjacent to EA they might have more scope to positively influence its impactfulness.

Also, people shouldn't label themselves mediocre :) 
