David T

I think it's also disanalogous in the sense that the EA community's belief in imminent AGI isn't predicated on the commercial success of various VC-funded companies, whereas the EA community's belief in the inherent goodness and amazing epistemics of its community did kind of assume that half its money wasn't coming from an EA-leadership-endorsed criminal who rationalized his gambling of other people's money in EA terms...

The AI bubble popping (which many EAs actually want to happen) is somewhat orthogonal to the imminent AGI hypothesis;[1] the internet carried on growing after a bunch of overpromisers who misspent their capital fell by the wayside.[2] I expect that (whilst not converging on superintelligence) the same will happen with chatbots and diffusion models, and there will be plenty of scope for models to be better fit to benchmarks or for researchers to talk bots into creepier responses over the coming years.

The Singularity not happening by 2027 might be a bit of a blow for people who attached great weight to that timeline, but a lot are cautious about doing that or have already given themselves probabilistic get-outs. I don't think it's going to happen in 2027 or ever, but if I thought differently I'm not sure 2027 actually being the year some companies failed to convince sovereign wealth funds they were close enough to AGI to deserve a trillion would, or even should, have that much impact.

I do agree with the wider point that it would be nice if EAs realized that many of their own donation preferences might be shaped at least as much by personal interests, and be just as vulnerable to rhetorical tricks, as normies'; but I'm not sure that was the main takeaway from FTX.

  1. ^

    FWIW I hold similar views about it not being about to happen and about undue weight being placed on certain quasi-religious prophecies...

  2. ^

    there's perhaps also a lesson that the internet isn't that different from circa 2000, but certain aspects of it did keep getting better...

I would add that it's not just extreme proposals to make "AI go well", like Yudkowsky's airstrikes, that potentially have negative consequences beyond the counterfactual costs of not spending the money on other causes. Even 'pausing AI' through democratically enacted legislation passed as a result of smart and well-reasoned lobbying might be significantly negative in its direct impact, if the sort of 'AI' restricted would have failed to become a malign superintelligence but would have been very helpful to economic growth generally, and perhaps medical researchers specifically.

This applies if the imminent AGI hypothesis is false, and probably to an even greater extent if it is true.

(The simplest argument for why it's hard to justify all EA efforts to make AI go well purely on the basis of its neglectedness as a cause is that some EA theories about what is needed for AI to go well directly conflict with others; to justify a course of action one needs some confidence not only that AGI is possibly a threat but that the proposed approach at least doesn't increase the threat. It is possible that both donations to a "charity" that became a commercial AI accelerationist and donations to lobbyists attempting to pause AI altogether were mistakes, but it seems implausible that they were both good causes.)

I'm more confused by how this near-future, current-world-resource-base timeline interacts with the idea that this Dyson swarm is achieved clandestinely (I agree with your sentiment that the "disassemble Mercury within 31 years" scenario is even more unlikely, though close to Mercury is a much better location for a Dyson swarm). Most of the stuff in the tech tree doesn't exist yet and the entities working on it are separate and funding-starved: the relationship between entities writing papers about ISRU or designing rectennas for power transmission and an autonomous self-replicating deep space construction facility capable of acquiring unassailable dominance of the asteroid belt within a year is akin to the relationship between a medieval blacksmith and a gigafactory. You could close that gap more quickly with a larger-than-Apollo-scale joined-up research endeavour, but that's the opposite of discreet.

Stuff like the challenges of transmitting power/data over planetary distances and the constant battle against natural factors like ionizing radiation don't exactly point towards permanent dominance by a single actor either.

Also, you look at the current US administration and its priorities and... they're certainly not Singaporean or particularly interested in x-risk mitigation.

Feels like the most straightforwardly rational argument for portfolio diversification is the assumption that your EV and probability estimates almost certainly aren't the accurate, or at least unbiased, estimators they need to be for the optimal strategy to be sticking everything on the highest-EV outcome. Even more so when the probability that a given EV estimate is accurate is unlikely to be uncorrelated with whether it scores particularly highly (the good old optimiser's curse, with a dose of wishful thinking thrown in; the toy simulation below illustrates the point). Financiers don't trust themselves to be perfectly impartial about stuff like commodity prices in central Asia or binary bets on the value of the yen on Thursday, and it seems unlikely that people who are extremely passionate about the causes they and their friends participate in, ahead of a vast range of other causes that nominally claim to do good, achieve a greater level of impartiality. Pascalian odds seem particularly unlikely to be representative of the true best option (in plain English, a 0.0001% subjective probability assessment of a one-shot event roughly means "I don't really know what the outcome of this will be, and it seems like there could be many, many things more likely to achieve the same end"). You can make the assumption that if causes appear robustly positive and neglected they might deserve funding anyway, but that is a portfolio argument...
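To make the optimiser's curse point concrete, here's a minimal sketch (my own illustration, not anything from the original discussion; the option count, true EV and noise level are arbitrary assumptions): every option has the same true expected value, but if you always fund whichever option your noisy model scores highest, your chosen option will look systematically better than it really is.

```python
import numpy as np

# Toy illustration of the optimiser's curse (illustrative assumptions only).
rng = np.random.default_rng(0)
n_options, n_trials = 20, 10_000
true_ev = 1.0     # every option has the same true expected value
noise_sd = 0.5    # error in our subjective EV estimates

chosen_estimates = []
for _ in range(n_trials):
    estimates = true_ev + rng.normal(0, noise_sd, n_options)
    chosen_estimates.append(estimates.max())  # fund the best-looking option

print(f"True EV of any option:             {true_ev:.2f}")
print(f"Average estimated EV of our picks: {np.mean(chosen_estimates):.2f}")
# The picks look roughly 0.9 better than they really are: selecting on noisy
# estimates guarantees systematic disappointment, which is one reason to
# diversify rather than bet everything on the single highest estimated EV.
```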

Doesn't this depend on what you consider the "top tier areas for making AI go well" (which the post doesn't seem to define)? If that happens to be AI safety research institutes focused specifically on preventing "AI doom" via approaches you consider non-harmful, then naively I'd expect nearly all of them to be aligned with the movement focused on that priority: those are relatively small niches, the OP, their organisation and the wider EA movement are actively nudging people into them on the EA assumption that they're the top tier ones, and anyone looking more broadly at AI as a professional interest will find a whole host of lucrative alternatives where they won't be scrutinised on their alignment at interview and can go and make cool tools and/or lots of money on options.

If you define it as "areas which have the most influence on how AI is built" then those are more the people @titotal was talking about, and yeah, they don't seem particularly aligned with EA, not even the ones that say safety-ish things as a marketing strategy and took money from EA funds.

And if you define "safety" more broadly there are plenty of other AI research areas focusing on stuff like cultural bias or job market impact. But you and your organisation and 80,000 Hours probably don't consider them top tier for effectiveness, and (not coincidentally) I suspect these have very low proportions of EAs. Same goes for defence companies who've decided the "safest" approach to AI is to win the arms race. Similarly, it's no surprise that people who happen to be very concerned about morality and utilitarianism and doing the best they can with their 80k hours of working life, but who get their advice from Brutger, don't become AI researchers at all, despite the similarities of their moral views.

Got to agree with the AI "analysis" being pretty limited, even though it flatters me by describing my analysis as "rigorous".[1] It's not a positive sign that this news update and jobs listing is flagged as having particularly high "epistemic quality".

That said, I enjoyed the 'egregore' section bits about the "ritualistic displays of humility", "elevating developers to a priesthood" and "compulsive need to model, quantify, and systematize everything, even with acknowledged high uncertainty and speculative inputs => illusion of rigor".[2] Gemini seems to have absorbed the standard critiques of EA and rationalism better than many humans, including humans writing criticisms of and defences of those belief systems. It's also not wrong.

Its poetry is still Vogon-level though.

  1. ^

    For a start, I think most people reading our posts would conclude that Vasco and I disagree on far too much to be considered "intellectually aligned", even if we do it mostly politely by drilling down into the details of each other's arguments.

  2. ^

    OK, if my rigour is illusory maybe that compliment is more backhanded than I thought :)

Fair. I agree with this

Plenty of entities who aren't EAs are doing that sort of lobbying already anyway.

There are some good arguments that in some cases, developing countries can benefit from protecting some of their own nascent industries.

There are basically no arguments that the developed world putting tariffs (or anti-dumping duties) on imports helps the developing world, which is the harmful scenario Karthik discusses in his article as an example of Nunn's argument that rich countries should stop doing things that harm poorer countries. Developed countries know full well these limit poorer countries' ability to export to them... but that's also why they impose them.

At face value that might seem the case. In practice, Reform is a party dominated by a single individual who enjoys promoting hunting and deregulation and criticising the idea of vegan diets: he's not exactly the obvious target for animal welfare arguments, particularly not when it's equally likely a future coalition will include representatives of a Green Party.

The point in the original article about conservatives and country folk being potentially sympathetic to arguments for restrictions on importing meat from countries with lower animal welfare standards is a valid one, but it's the actual Conservative Party (who will be present in any coalition Reform needs to win, and who have a yawning policy void of their own) that fits that bracket, not the upstart "anti-woke", pro-deregulation party whose core message is a howl of rage about immigration. Farage's objections to the EU were about the rules, not protectionism, and he's actually highly vocal on the need to reduce restrictions on the import of meat from the US, which has much lower standards in many areas. Funnily enough, Farage's political parties have had positions on regulating the stunning of animals for slaughter, but the targeting of slaughtering practices associated with certain religions might have been for... other reasons, and Farage rowed back on it[1]

  1. ^

    halal meat served in the UK is often pre-stunned, whereas kosher meat isn't, so the culture war arguments for mandatory stunning hit the wrong target....
