David T

463 karma · Joined

Comments (78)
I think it's inaccurate to claim that only people at top universities are likely to have outsized influence, or to dismiss everyone else as "poorer students" whom it "doesn't make as much sense" to encourage to engage in altruistic activity. The university Sorting Hat really isn't that good.

And more specifically, from a movement-building perspective it usually makes sense to prioritise reaching more people over ensuring that a small group of [already advantaged] people have access to particularly lavish allowances. Elite university students' ability to achieve outsized impact later in life probably isn't closely linked to the size of the stipend the current organizer of their well-established EA group can claim from central funding bodies, whereas actually having some outreach at other universities is going to have more impact, even if fewer of those students will have outsized impact and the median earning-to-give amounts might be a little lower.

Edit: not really sure what's so controversial here, though I've amended the quote just in case it's because my representation of DavidNash's original comment was considered uncharitable. 

Whilst I sympathise with the desire to see more of this kind of information, particularly given how notoriously competitive EA jobs are, I'd be concerned that the signals sent out by raw numbers might be misleading and deter suitable applicants.

The classic example is LinkedIn, which does display applicant numbers. Having seen the other side of LinkedIn job ads, I'm well aware that a job with 30+ applicants probably has about 25 who one-click apply to everything vaguely related to their field, even when lacking basic qualifying criteria such as visa status. If I hadn't seen that side of things, I'd probably be deterred from applying because I didn't meet a bullet point or two, when in fact I'd probably be in the top 10% of qualified candidates.

What I think would be valuable to some people is for organizations with relatively complex processes involving exercises and application forms to indicate roughly how many people completed exercises for similar jobs in the past (as a marginal candidate, I'm a lot more likely to fancy my chances of standing out against 10 than against 70). But that's information best offered before people devote considerable time to a process, rather than something to search for.

I think it's elitist (and inaccurate) to assume that only attendees of a small number of elite universities will have the future funds to give away. 

And ultimately it's not a straight decision between funding a student group at Oxford or one at Oxford Brookes. It's a decision between paying student society leaders at a small number of target universities so much that they feel uncomfortable about it and funding expensive retreats for them, or spreading the movement-building budget more widely to support outreach in more places (that's not to suggest there aren't other challenges to setting up student groups in places that don't have an existing community). I can see the argument that focusing resources on a handful of courses at a handful of elite universities makes sense for recruitment into a small number of highly specialised positions, but not for maximising future fundraising capacity.

I'd class those comments as mostly a disagreement around ends. The emphasis on not getting credit from his own support base, and on Republicans not wanting to talk about it, are the most revealing. A sizeable fraction of his most committed support base is radically antivax, to the point that there was audible booing at his own rally when he recommended they get the vaccine, even after he'd very carefully worded it in terms of their "freedoms". It's less a narrow disagreement about a specific layer of Biden bureaucracy and more a recognition that his base sees less government involvement in healthcare, less reaction to future pandemics, and in some cases even rejection of evidence-based medicine as valuable ends in themselves. And whilst he clearly doesn't reject evidence-based medicine himself, above all Trump loves adulation from that fanbase.

Either way, his position is quite different from that of the EAs who see pandemic preparedness as an extremely important permanent priority rather than something reactive.

And I can't believe it needs saying, but a "Torres exception" is not a good idea here. Even completely disregarding Torres' own feelings, there are a lot of people who are not Emile Torres whom those lines of attack stigmatise.

Also, when the fundamental complaint about someone is that they repeatedly make uncharitable and probably false claims about people's true motivations and engage in odd personal attacks on people they might legitimately be unimpressed by, adding a drive-by pop-diagnosis of a mental health condition and a nasty observation about their gender identity doesn't strengthen that complaint, it just sets off the irony meter...

I don't disagree that these are also factors, but if tech leaders are pretty openly stating that they want the regulation to happen and that they want to guide the regulators, I think it's accurate to say they're currently more motivated to achieve regulatory capture (for whatever reason) than to ensure that x-risk concerns don't become a powerful political argument, as the OP suggested. That was the fairly modest claim I made.

(Obviously far more explicit and cynical claims about, say, Sam Altman's intentions in founding OpenAI exist, but the point I made doesn't rest on them)

Because their leaders are openly enthusiastic about AI regulation and saying things like "better that the standard is set by American companies that can work with our government to shape these models on important issues" or "we need a referee", rather than arguing that their tech is too far away from AGI to need any regulation or arguing the risks of AI are greatly exaggerated, as you might expect if they saw AI safety lobbying as a threat rather than an opportunity. 

I'm not sure that I buy that critics lack motivation. At least in the space of AI, there will be (and already are) people with immense financial incentive to ensure that x-risk concerns don't become very politically powerful.

The current situation still feels like one where the incentives are relatively small compared with the incentive to create the appearance that anthropogenic climate change is still uncertain. Over decades, advocates have succeeded in actually reducing fossil fuel consumption in many countries, as well as securing less-likely-to-be-honoured commitments to Net Zero, and direct and indirect energy costs are a significant part of everyone's household budget.

Not to mention that Big Tech companies whose business plans might be most threatened by "AI pause" advocacy are currently seeing more general "AI safety" arguments as an opportunity to achieve regulatory capture...

I don't believe that the people who are currently doing high-quality x-risk advocacy would counterfactually be writing nasty newspaper hit pieces (these just seem like totally different activities), or that Timnit would write more rigorously if people gave her more money.

I don't think that's what the OP argues though.[1] The argument is that the people motivated to seek funding to assess x-risk as a full-time job tend to be disproportionately people who think x-risk, and the ability to mitigate it, are significant. So of course advocates produce more serious research, and of course people who don't think it's that big a deal don't tend to choose it as a research topic (and on the rare occasions they put actual effort in, it's relatively likely to be motivated by animus against x-risk advocates).

If those x-risk advocates had to do something other than x-risk research for their day job, they might not write hit pieces, but there would be blogs instead of a body of high quality research to point to, and some people would still tweet angrily and insubstantially about Sam Altman and FAANG. 

Gebru's an interesting example looked at the other way, because she does write rigorous papers on her actual research interests as well as issuing shallow, hostile dismissals of groups in tech she doesn't like. But funnily enough, nobody's producing high-quality rebuttals of those papers[2]; critics are happy to dismiss her entire body of work based on disagreeing with her shallower comments. Less outspoken figures than Gebru write papers along similar lines, but these don't get the engagement at all.

I do agree people love to criticize.

  1. ^ The bar chart for x-risk believers without funding actually stops short of the "hit piece", FWIW.

  2. ^ EAs may not necessarily actually disagree with her when she's writing about implicit biases in LLMs or concentration of ownership in tech rather than tweeting angrily about TESCREALs, but obviously some people and organizations have reason to disagree with her papers as well.

I also think that it's far from given that the option which would minimise consumer harm from monopoly would also minimise pressure to race.

An AI research institute spun off by the regulator, under pressure to generate business models to stay viable, is plausibly a lot more inclined to 'race' than an AI research institute swimming in ad money, which can earn its keep by incrementally improving search, ads and phone UX while generating good PR with its more abstract research along the way. Monopolies are often complacent about exploiting their research findings, and Google's corporate culture has historically not been particularly compatible with launching the sort of military or enterprise tooling that represents the most obviously risky use of 'AI'.

There are of course arguments the other way (Google has a lot more money and data than putative spinouts), but people need to predict what a divested DeepMind would do before concluding that breaking up Google is a safety win.
