PeterMcCluskey

614 · Joined Oct 2014

Bio

I'm a stock market speculator who has been involved in transhumanist and related communities for a long time. See my website at http://bayesianinvestor.com.

Comments (90)

Convincing a venue to implement it well (or rewarding one that has already done that) will have benefits that last more than three days.

I agree about the difficulty of developing major new technologies in secret. But you seem to be mostly overstating the problems with accelerating science. E.g.:

These passages seem to imply that the rate of scientific progress is primarily limited by the number and intelligence level of those working on scientific research. Here it sounds like you're imagining that the AI would only speed up the job functions that get classified as "science", whereas people are suggesting the AI would speed up a wide variety of tasks including gathering evidence, building tools, etc.

My understanding of Henrich's model says that reducing cousin marriage is a necessary but hardly sufficient condition to replicate WEIRD affluence.

European culture likely had other features which enabled cooperation on larger-than-kin-network scales. Without those features, a society that stops cousin marriage could easily end up with only cooperation within smaller kin networks. We shouldn't be confident that we understand what the most important features are, much less that we can cause LMICs to have them.

Successful societies ought to be risk-averse about this kind of change. If this cause area is worth pursuing, it should focus on the least successful societies. But those are also the societies that are least willing to listen to WEIRD ideas.

Also, the idea that reduced cousin marriage was due to some random church edict seems to be the most suspicious part of Henrich's book. See The Explanation of Ideology for some claims that the nuclear family was normal in northwest Europe well before Christianity.

Resilience seems to matter for human safety mainly via food supply risks. I'm not too concerned about that, because the world is producing a good deal more food than is needed to support our current population. See my more detailed analysis here.
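A rough back-of-the-envelope check illustrates the size of that buffer. The figures below are approximate assumptions for the sake of the arithmetic, not numbers from the linked analysis:

```python
# Rough illustration of the global food buffer.
# Both figures are approximate assumptions, not precise data.
calories_supplied_per_person_per_day = 2900  # rough global average food supply
calories_needed_per_person_per_day = 2100    # rough average requirement

surplus_fraction = (calories_supplied_per_person_per_day
                    / calories_needed_per_person_per_day) - 1
print(f"Approximate surplus over basic needs: {surplus_fraction:.0%}")  # ~38%
```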

It's harder to evaluate the effects on other species. I expect a significant chance that technological changes will make current biodiversity efforts irrelevant. So to the limited extent I'm worried about wild animals, I'm focused more on ensuring that technological change develops so as to keep as many options open as possible.

Why has this depended on NIH? Why aren't some for-profit companies eager to pursue this?

This seems to nudge people in a generally good direction.

But the emphasis on slack seems somewhat overdone.

My impression is that people who accomplish the most typically have had small to moderate amounts of slack. They made good use of their time by prioritizing their exploration of neglected questions well. That might create the impression of much slack, but I don't see slack as a good description of the cause.

One of my earliest memories of Eliezer is him writing something to the effect that he didn't have time to be a teenager (probably on the Extropians list, but I haven't found it).

I don't like the way you classify your approach as an alternative to direct work. I prefer to think of it as a typical way to get into direct work.

I've heard a couple of people mention recently that AI safety is constrained by the shortage of mentors for PhD theses. That seems wrong. I hope people don't treat a PhD as a standard path to direct work.

I also endorse Anna's related comments here.

This seems mostly right, but it still doesn't seem like the main reason that we ought to talk about global health.

There are lots of investors visibly trying to do things that we ought to expect will make the stock market more efficient. There are still big differences between companies in returns on R&D or returns on capital expenditures. Those returns go mainly to people who can found a Moderna or Tesla, not to ordinary investors.

There are not (yet?) many philanthropists who try to make the altruistic market more efficient. But even if there were, there'd be big differences in who can accomplish what kinds of philanthropy.

Introductory EA materials ought to reflect that: instead of one strategy being optimal for everyone who wants to be an EA, the average person ought to focus on easy-to-evaluate philanthropy such as global health. A much smaller fraction of the population with unusual skills ought to focus on existential risks, much as a small fraction of the population ought to focus on founding companies like Moderna and Tesla.

Can you give any examples of AI safety organizations that became less able to get funding due to lack of results?

Worrying about the percent of spending misses the main problems, e.g. donors who notice the increasing grift become less willing to trust the claims of new organizations, thereby missing some of the best opportunities.

I have some relevant knowledge. I was involved in a relevant startup 20 years ago, but haven't paid much attention to this area recently.

My guess is that Drexlerian nanotech could probably be achieved in less than 10 years, but it would need on the order of a billion dollars spent on an organization that's at least as competent as the Apollo program. As long as research is being done by a few labs that have just a couple of researchers each, progress will likely continue to be too slow to need much attention.

It's unclear what would trigger that kind of spending and that kind of collection of experts.

Profit motives aren't doing much here, due to a combination of the long time to profitability and a low probability that whoever produces the first usable assembler will also produce one that's good enough for a large market share. I expect that the first usable assembler will be fairly hard to use, and that anyone who can get a copy will use it to produce better versions. That means any company that sells assemblers will have many customers who experiment with ways to compete. It seems hard for investors to expect much return under those conditions.

Maybe some of the new crypto or Tesla billionaires will be willing to put up with those risks, or maybe they'll be deterred by the risks of nanotech causing a catastrophe.

Could a new cold war cause militaries to accelerate development? This seems like a medium-sized reason for concern.

What kind of nanotech safety efforts are needed?

I'm guessing the main need is for better think-tanks to advise politicians on military and political issues. That requires rather different skills than I or most EAs have.

There may be some need for technical knowledge on how to enforce arms control treaties.

There's some need for more research into grey goo risks. I don't think much has happened there since the ecophagy paper. Here's some old discussion about that paper: Hal Finney, Eliezer, me, Hal Finney
