titotal

Computational Physicist
7766 karma

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments
665

I work in computational materials science and have spent a lot of time digging into Drexlerian nanotech. The idea that Drexler-style nanomachines could be invented by 2026 is straight-up absurd. Progress towards nanomachines has been stalled for decades. This is not a "20 years from now" type of project: absent transformative AI speedups, the tech could be a century away, or even outright impossible. And the effect of AI on materials science is far from transformative at present; this is not going to change in one year.

You are not doing your cause a service by proposing scenarios that are essentially impossible. 

I think it is extremely easy to imagine the left/Democrat wing of AI safety becoming concerned with AI concentrating power, if it hasn't already.

To back this up: I mostly peruse non-rationalist, left-leaning communities, and this is a concern in almost every one of them. There is a huge amount of concern about and distrust of AI companies on the left.

Even AI-skeptical people are concerned about this: AI that is not "transformative" can still concentrate power. Most lefties think that AI art is shit, but they are still concerned that it will cost people jobs. This is not a contradiction: taking jobs does not require AI to be better than you, just cheaper. And if AI does massively improve, this is going to make them more likely to oppose it, not less.

The Gini coefficient "is more sensitive to changes around the middle of the distribution than to the top and the bottom". When you are talking about the top billionaires, as Ozzie is, it's not the correct metric to use:

In absolute terms, the income share of the top 1% in the US has been steadily rising since the 1980s (although this is not true for countries like Japan or Sweden).
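To make the contrast between the two metrics concrete, here is a minimal Python sketch (using a made-up lognormal income distribution, not real data) that computes both the Gini coefficient and the top-1% income share, so you can see how each metric responds when income is shifted towards the very top:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via the sorted-rank formula (0 = perfect equality, 1 = maximal inequality)."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    return 2 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1) / n

def top_share(incomes, frac=0.01):
    """Share of total income going to the top `frac` of earners."""
    x = np.sort(np.asarray(incomes, dtype=float))[::-1]
    k = max(1, int(round(frac * len(x))))
    return x[:k].sum() / x.sum()

# Hypothetical illustration: start from a lognormal income distribution,
# then multiply the incomes of the top 1% by 5 and compare how the two
# metrics register the change.
rng = np.random.default_rng(0)
base = rng.lognormal(mean=10, sigma=0.5, size=10_000)
concentrated = base.copy()
concentrated[np.argsort(concentrated)[-100:]] *= 5  # enrich the top 1%

print("base:         gini =", round(gini(base), 3), " top-1% share =", round(top_share(base), 3))
print("concentrated: gini =", round(gini(concentrated), 3), " top-1% share =", round(top_share(concentrated), 3))
```

(The distribution and the "times 5" shock are invented for illustration; the point is only that the two statistics are different lenses and need not move together.)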

I'm not sure the "passive" finding should be that reassuring. 

I'm imagining someone googling "ethical career" 2 years from now, finding 80k, noticing that almost every recent article, podcast, and promoted job is based around AI, and concluding that EA is just an AI thing now. If AI-based careers aren't a fit for them (whether by interest or skillset), they'll just move on to somewhere else. Maybe they would have been a really good fit for an animal advocacy org, but if their first impressions don't tell them that animal advocacy is still a large part of EA, they aren't gonna know.

It could also be bad even for AI safety: there are plenty of people here who were initially skeptical of AI x-risk, but joined the movement because they liked the malaria nets stuff. Then, over time and exposure, they decided that the AI risk arguments made more sense than they initially thought, and started switching over. In the hypothetical future 80k, where malaria nets are de-emphasised, that person may bounce off the movement instantly.

Remember that this is graphing the length of task that the AI can do with an over 50% success rate. The length of task that an AI can do reliably is much shorter than what is shown here (you can look at figure 4 in the paper): for an 80% success rate it's 30 seconds to a minute. 

Being able to do a month's worth of work at a 50% success rate would be very useful and productivity-boosting, of course, but would it really be close to recursive self-improvement? I don't think so. I feel that some part of complex projects needs reliable code, and that will always be a bottleneck.

Welcome to the forum. You are not missing anything: in fact, you have hit upon some of the most important and controversial questions about the EA movement, and there is wide disagreement on many of them, both within EA and among EA's various critics. I can try to give both internal and external sources raising or rebutting similar questions.

With regard to the issue of unintended consequences from global aid, and the global vs. local question: this was raised by Leif Wenar in a hostile critique of EA here. You can read some responses and rebuttals to this piece here and here.

With regards to the merits of longtermism, this will be a theme of the debate week this coming week, so you should be able to get a feel for the debate within EA there. Plenty of EAs are not longtermists for exactly the reasons you described. Longtermism is the focus of a lot of external critique of EA as well, with some seeing it as a dangerous ideology, although that author has themselves been exposed for dishonest behaviour.

AI safety is a highly speculative subject, and there is a wide variety of views on how powerful AI can become, how soon "AGI" could arrive, how dangerous it is likely to be, and what the best strategy is for dealing with it. To get a feel for the range of viewpoints, you could try searching for "P(doom)", a rough estimate of the chance of destruction. I might as well plug my own argument for why I don't think it's that likely. For external critics, Pivot to AI is a newsletter that compiles articles from the perspective that AI is overhyped and that AI safety isn't real.

The case for "earning to give" is laid out in detail here. The concern you raise about working for unethical companies is one of the most common objections to the practice, particularly in the wake of the SBF scandal; however, in general EA discourages ETG in jobs that are directly harmful.

Again, I'm not sure exactly how to respond to comments like this. Like, yeah, if AI could reliably do everything a top researcher does, it could enable a lot of breakthroughs. But I don't believe that an AI will be able to do that anytime soon. All I can say is that there is a massive gap between current AI capabilities and what they would need to fully automate a materials science job. 30 years sounds like a long time, but AI winters have lasted that long before: there's no guarantee that, just because AI has advanced rapidly recently, it will not stall out at some point.

I will say that I just disagree that AI could suddenly go from "no major effect on research productivity" to "automate everything" in the span of a few years. The difficulty of the latter compared to the former is just too massive, and with all new technologies it takes a lot of time to experiment and figure out how to use them effectively. AI researchers have done a lot of work to figure out how to optimise and get good at the current paradigm: but by definition, the next paradigm will be different, and will require different things to optimise.

Hey, thanks for weighing in, those seem like interesting papers and I'll give them a read through. 

To be clear, I have very little experience in quantum computing and haven't looked into it that much, so I don't feel qualified to comment on it myself (hence why this was just an aside there). All I am doing is relaying the views of prominent professors in my field, who feel very strongly that it is overhyped and were willing to say so in the panel, although I do not recall them giving much detail on why they felt that way. This matches the general impression I've gotten from casual conversations with other physicists. If I had to guess the source of these views, I'd say it's skepticism about the ability to actually build such large-scale fault-tolerant systems.

Obviously this is not strong evidence and should not be taken as such. 

From my (small) experience in climate activist groups, I think this is an excellent article. 

Some other points in favour:  

Organising for small, early wins allows your organisation to gain experience with how to win, and what to do with said wins. A localised climate campaign will help you understand which messages resonate with people and which are duds, and familiarise yourself with how to deal with media, government, etc. 

It also helps to scale with your numbers: a few hundred people aren't going to be enough to stop billion-dollar juggernauts, but they can make local councils feel the heat.

One counterpoint: you shouldn't be so unambitious that people feel like you're wasting their time. If Just Stop Oil had started with a campaign to put flower gardens outside public libraries, they wouldn't have attracted the committed activist base they needed.
