All of PeterMcCluskey's Comments + Replies

One good prediction he made, in his 1986 book Engines of Creation, was that a global hypertext system would be available within a decade. Hardly anyone in 1986 imagined that.

But he has almost entirely stopped trying to predict when technologies will be developed. You should read him to imagine what technologies are possible.

That's mostly bearish for bonds because it increases inflation.

I haven't given that a lot of thought. AI is likely to have the strongest effects further out. A year ago I was mainly betting on interest rates going up around 2030 via SOFR futures, because I expected interest rates to go down in 2025-6. But now I'm guessing there's little difference in which durations go up.

These ETFs seem better than leveraged ETFs, mainly because of the costs that leveraged ETFs incur through their excessive trading.
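As a rough illustration of why that trading matters (a toy sketch with made-up numbers, not a description of any particular fund): a leveraged ETF rebalances to its target multiple every day, so on a choppy price path it suffers volatility drag on top of any fees.

```python
# Toy illustration (made-up numbers, not from any fund's documents) of the
# volatility drag that daily rebalancing creates for leveraged ETFs.

daily_returns = [0.05, -0.05] * 126  # roughly a year of alternating +5% / -5% days

# Underlying index, buy and hold.
index = 1.0
for r in daily_returns:
    index *= 1 + r

# A 2x leveraged ETF rebalances daily, so it compounds twice each day's return.
lev_etf = 1.0
for r in daily_returns:
    lev_etf *= 1 + 2 * r

print(f"index ends at:        {index:.3f}")    # ~0.73
print(f"2x daily ETF ends at: {lev_etf:.3f}")  # ~0.28
```

On that path the index loses about 27%, while the daily-rebalanced 2x position loses about 72%, far more than twice the index's loss.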

I see multiple reasons why bonds are likely to be bad investments over the next few years:

  • AI is likely to drive up real interest rates, by making capital more productive.
  • AI-induced job loss might cause the Fed to be less concerned about inflation.
  • AI-induced job loss may reduce tax revenues, so the government will need to sell more bonds.
  • Trump is pressuring the Fed to adopt policies that would cause inflation.
  • If AI doesn't increase GDP growth, th
... (read more)
1
simon
“Trump is pressuring the Fed to adopt policies that would cause inflation.” That's more cleanly expressed as a curve steepener (front lower, back higher), so bullish on the short end vs. bearish on the back end.
1
simon
“AI-induced job loss might cause the Fed to be less concerned about inflation.” This sounds more bullish for bonds, because lower inflation concerns -> the Fed can cut. Also (more importantly) the Fed has a dual mandate, so low employment -> cuts.
2
MichaelDickens
Do you have an opinion about what maturity is best to short? (My first thought is that if you have a view about what interest rates will do over the next 5–10 years, then you should short 5–10 year bonds. But I'm not sure that's right.)

Oysters are significantly more nutrient-dense than beef, partly because we eat the whole oyster but ignore the most nutritious parts of the cow. So $1 of oyster is roughly as beneficial as $1 of pasture-raised beef. Liver from grass-fed cows is likely better than bivalves, and has almost no effect on how many cows are killed.

My experience suggests that there probably isn't much that you can do.

most of the billions of people who know nothing about AI risks have a p(doom) of zero.

This seems pretty false. E.g. see this survey.

Destroying Taiwan's fabs would make it harder for the West to maintain much of a lead in chips. China likely cares a fair amount about that.

The strongest concern I have heard to this approach is the fact that as model algorithms improve, at some point it is possible to train and build human-level intelligence on anyone’s home laptop, which makes hardware monitoring and restricting trickier. While this is cause for concern, I don’t think this should distract us from pursuing a pause.

There are many ways to slow AI development, but I'm concerned that it's misleading to label any of them as pauses. I doubt that the best policies will be able to delay superhuman AI by more than a couple of years... (read more)

5
Richard Annilo
Right. I was also concerned that some of the proposals here might be misleadingly labeled 'pauses'. Calling them proposals to 'significantly slow down development' might be more accurate in that case. Maybe that's a better way to approach talking about pausing: see it more as a spectrum of stronger and weaker slowdown mechanisms?

I've been buying Alexandre's eggs. Should I switch to the Berkeley Bowl brand pasture-raised eggs? Do you have any other recommendations for eggs?

2
Amy Labenz
Thank you! Added a footnote. 

I want to emphasize that this just sets a lower bound on the importance.

E.g. there's a theory that fungal infections are the primary cause of cancer.

How much of chronic fatigue is due to undiagnosed fungal infections? Nobody knows. I know someone with chronic fatigue who can't tell whether it's due in part to a fungal infection. He's got elevated mycotoxins in his urine, but that might be due to past exposure to a moldy environment. He's trying antifungals, but so far the side effects have prevented him from taking more than a few doses of the two that he ... (read more)

1
Mo Putera
Agree with the lower bound on fungal burden. For the post you linked I'd signal-boost J Bostock's 7 criticisms too.
1
Jenny Kudmowa
I agree this is most likely a lower bound - we tried to emphasize this in the report.  I was not aware of the theory that fungal infections are the primary cause of cancer - many thanks for sharing!

the typical time from vaccine development was decades and the fastest ever time was 10 years.

Huh? It was about 6 months for the 1957 pandemic.

I meant vaccines for diseases that didn't yet have a vaccine. The 1957 case was a vaccine for a new strain of influenza, when they already had influenza vaccines.

We shouldn't be focused too heavily on what is politically feasible this year. A fair amount of our attention should be on what to prepare in order to handle a scenario in which there's more of an expert consensus a couple of years from now.

3
Cecil Abungu
This is a fair point, but we're thinking about a scenario where such consensus takes a much, much longer time to emerge. There's no real reason to be sure that a super-advanced model a couple of years from now will do the kinds of things that would produce a consensus.

Nanotech progress has been a good deal slower than was expected by people who were scared of it.

4
Robi Rahman🔸
I agree; however, isn't there still the danger that, as scientific research is augmented by AI, nanotechnology will become more practical? The steelmanned case for nanotech x-risk would probably argue that various things that are intractable for us now have no theoretical reason they couldn't be done if we were slightly better at other adjacent techniques.

I have alexithymia.

Greater awareness seems desirable. But I doubt it "severely affects" 1 in 10 people. My impression is that when it's correlated with severe problems, the problems are mostly caused by something like trauma, and alexithymia is more a symptom than a cause of the severe problems.

3
Misha_Yagudin
Yes, the mechanism is likely not alexithymia directly causing undesirable states like trauma, but rather diminishing one's ability to get unstuck given that traumatic events happened.

Author of the manifesto and Animi here. I was also doubtful initially when I was researching alexithymia to improve my condition. But that gradually changed the more papers I read and the more people I talked with. There are 50+ years of research on the topic, and some papers show that more than 10% of the general population score in the "clinical" range of alexithymia, where it is correlated with all the associated problems. 1 in 10 actually makes a lot of sense given how prevalent and comorbid it is with mental disorders and, e.g., neurodiversity - ~50% of... (read more)

It's not obvious that unions or workers will care as much about safety as management. See this post for some historical evidence.

6 months sounds like a guess as to how long the leading companies might be willing to comply.

The timing of the letter could be a function of when they were able to get a few big names to sign.

I don't think they got enough big names to have much effect. I hope to see a better version of this letter before too long.

Something important seems missing from this approach.

I see many hints that much of this loneliness results from trade-offs made by modern Western culture, neglecting (or repressing) tightly-knit local community ties to achieve other valuable goals.

My sources for these hints are these books:

One point from WEIRDest People is summarized here:

Neolocal residence occurs when a newly married couple establishes their home independent of both sets of relatives. While only about 5%

... (read more)

I doubt most claims about sodium causing health problems. High sodium consumption seems quite correlated with dietary choices that have other problems, which makes studying this hard.

See Robin Hanson's comments.

2
Joel Tan🔸
It seems the scientific consensus, and Cochrane reviews/meta-analyses of RCTs (e.g. https://www.bmj.com/content/346/bmj.f1325), are supportive. I wouldn't rule out the possibility that sodium isn't as harmful as health authorities think it is (cf. the whole fracas over saturated fat vs sugar), but I guess I don't see this as a serious worry or something that demands more research given the current evidence/expert opinion and limited research time.

I expect most experts are scared of the political difficulties. Also, many people have been slow to update on the declining costs of solar. I think there's still significant aversion to big energy-intensive projects. Still, it does seem quite possible that experts are rejecting it for good reasons, and it's just hard to find descriptions of their analysis.

I agree very much with your guess that SBF's main mistake was pride.

I still have some unpleasant memories from the 1984 tech stock bubble, of being reluctant to admit that my successes during the bull market didn't mean that I knew how to handle all market conditions.

I still feel some urges to tell the market that it's wrong, and to correct the market by pushing up prices of fallen stocks to where I think they ought to be. Those urges lead to destructive delusions. If my successes had gotten the kind of publicity that SBF got, I expect that I would have made mistakes that left me broke.

I haven't expected EAs to have any unusual skill at spotting risks.

EAs have been unusual at distinguishing risks based on their magnitude. The risks from FTX didn't look much like the risk of human extinction.

8
Nathan Young
But half our resources to combat human extinction were at risk due to risks to FTX. Why didn't we take that more seriously?

I agree that there's a lot of hindsight bias here, but I don't think that tweet tells us much.

My question for Dony is: what questions could we have asked FTX that would have helped? I'm pretty sure I wouldn't have detected any problems by grilling FTX. Maybe I'd have gotten some suspicions by grilling people who'd previously worked with SBF, but I can't think of what would have prompted me to do that.

Nitpick: I suspect EAs lean more toward Objective Bayesianism than Subjective Bayesianism. I'm unclear whether it's valuable to distinguish between them.

1
Noah Scales
I read Violet's post, am reviewing some of the background material, and just browsed some online stuff about Bayesianism. I would learn something from your elaborating on the difference you think applies to EAs.

It's risky to connect AI safety to one side of an ideological conflict.

2
NickGabs
I think you can stress the "ideological" implications of externalities to lefty audiences while having a more neutral tone with more centrist or conservative audiences.  The idea that externalities exist and require intervention is not IMO super ideologically charged.
3
JakubK
There are ways to frame AI safety as (partly) an externality problem without getting mired in a broader ideological conflict.

Convincing a venue to implement it well (or rewarding one that has already done that) will have benefits that last more than three days.

I agree about the difficulty of developing major new technologies in secret. But you seem to be mostly overstating the problems with accelerating science. E.g.:

These passages seem to imply that the rate of scientific progress is primarily limited by the number and intelligence level of those working on scientific research. Here it sounds like you're imagining that the AI would only speed up the job functions that get classified as "science", whereas people are suggesting the AI would speed up a wide variety of tasks including gathering evidence, building tools, etc.

My understanding of Henrich's model says that reducing cousin marriage is a necessary but hardly sufficient condition to replicate WEIRD affluence.

European culture likely had other features which enabled cooperation on larger-than-kin-network scales. Without those features, a society that stops cousin marriage could easily end up with only cooperation within smaller kin networks. We shouldn't be confident that we understand what the most important features are, much less that we can cause LMICs to have them.

Successful societies ought to be risk-averse abou... (read more)

Resilience seems to matter for human safety mainly via food supply risks. I'm not too concerned about that, because the world is producing a good deal more food than is needed to support our current population. See my more detailed analysis here.

It's harder to evaluate the effects on other species. I expect a significant chance that technological changes will make current biodiversity efforts irrelevant. So to the limited extent I'm worried about wild animals, I'm focused more on ensuring that technological change develops so as to keep as many options open as possible.

2
Karla Still 🔸
How would technological change make current biodiversity efforts irrelevant? And by irrelevant, do you mean that the technologies reduce environmental burden and degradation, e.g. by being more resource-efficient, or that they would be actual new solutions aimed at reducing biodiversity loss?
1
RayTaylor
Is there another link? I couldn't open that one. Does your analysis consider GCRs and tail risks through this century?

Why has this depended on NIH? Why aren't some for-profit companies eager to pursue this?

2
gwern
What disease would you seek FDA approval for? "I sleep more than 4 hours a day" is not a recognized disease under the status quo. (There is the catch-all of 'hypersomnia', but things like sleep apnea or neurodegenerative disorders or damage to clock-keeping neurons would not plausibly be treated by some sort of knockout-mimicking drug.)
7
JohnBoyle
I think both Ying-Hui and I had the impression that the research had to be somewhat further along before any profit-minded people would fund it.  But someone recently explained to me that Silicon Valley companies are often funded with much less scientific backing than this, so this week I've written to one venture capitalist I know, and will probably contact others.  Regarding getting funding from an existing company, I don't know much about that option. Advice is appreciated.

This seems to nudge people in a generally good direction.

But the emphasis on slack seems somewhat overdone.

My impression is that people who accomplish the most typically have had small to moderate amounts of slack. They made good use of their time by prioritizing their exploration of neglected questions well. That might create the impression of much slack, but I don't see slack as a good description of the cause.

One of my earliest memories of Eliezer is him writing something to the effect that he didn't have time to be a teenager (probably on the Extropian... (read more)

This seems mostly right, but it still doesn't seem like the main reason that we ought to talk about global health.

There are lots of investors visibly trying to do things that we ought to expect will make the stock market more efficient. There are still big differences between companies in returns on R&D or returns on capital expenditures. Those returns go mainly to people who can found a Moderna or Tesla, not to ordinary investors.

There are not (yet?) many philanthropists who try to make the altruistic market more efficient. But even if there were, the... (read more)

Can you give any examples of AI safety organizations that became less able to get funding due to lack of results?

4
Lorenzo Buonanno🔸
Also RAISE https://www.lesswrong.com/posts/oW6mbA3XHzcfJTwNq/raise-post-mortem

CSER is the obvious example in my mind, and there are other non-public examples.

Worrying about the percent of spending misses the main problems, e.g. donors who notice the increasing grift become less willing to trust the claims of new organizations, thereby missing some of the best opportunities.

I have some relevant knowledge. I was involved in a relevant startup 20 years ago, but haven't paid much attention to this area recently.

My guess is that Drexlerian nanotech could probably be achieved in less than 10 years, but would need on the order of a billion dollars spent on an organization that's at least as competent as the Apollo program. As long as research is being done by a few labs that have just a couple of researchers each, progress will likely remain too slow to need much attention.

It's unclear what would trigger that kind of spending and th... (read more)

Acting without information on the relative effectiveness of the vaccine candidates was not a feasible strategy for mitigating the pandemic.

I'm pretty sure that with a sufficiently bad virus, it's safer to vaccinate before effectiveness is known. We ought to plan ahead for how to make such a decision.

This was the fastest vaccine rollout ever

Huh? 40 million doses of the 1957 flu vaccine were delivered within about 6 months of getting a virus sample to the US. Does that not count due to its similarity to existing vaccines?

Here are some of my reasons for disliking high inflation, which I think are similar to the reasons of most economists:

Inflation makes long-term agreements harder, since they become less useful unless indexed for inflation.

Inflation imposes costs on holding wealth in safe, liquid forms such as bank accounts, or dollar bills. That leads people to hold more wealth in inflation-proof forms such as real estate, and less in bank accounts, reducing their ability to handle emergencies.

Inflation creates a wide variety of transaction costs: stores need to change the... (read more)

I don't see high value ways to donate money for this. The history of cryonics suggests that it's pretty hard to get more people to sign up. Cryonics seems to grow mainly from peer pressure, not research or marketing.

1
AndyMcKenzie
Hi Peter, I agree with you that right now there are not any obvious high-value ways to donate money to this area. Although as I just wrote in a comment elsewhere in this thread, I am hoping to do more research on this question in the future, and hopefully others can contribute to that effort as well.  I also agree with you that the history of cryonics suggests it's hard to get people to sign up. But, I do think that the cost of signing up is an obvious area where interventions can be made. My understanding is that the general public's price sensitivity has not really been tested very thoroughly. 

I expect speed limits to hinder the adoption of robocars, without improving any robocar-related safety.

There's a simple way to make robocars err in the direction of excessive caution: hold the software company responsible for any crash it's involved in, unless it can prove someone else was unusually reckless. I expect some rule resembling that will be used.

Having speed limits on top of that will cause problems, due to robocars having to drive slower than humans drive in practice (annoying both the passengers and other drivers), when it's safe for them to ... (read more)

How much of this will become irrelevant when robocars replace human drivers? I suspect the most important impact of safety rules will be how they affect the timing of that transition. Additional rules might slow that down a bit.

7
vicky_cox
Hi Peter, thanks for your comment!  I must admit I have not really thought about this before, but intuitively it still seems important to have appropriate road safety legislation like speed limits in place even if it is robocars following them rather than human drivers. In fact, I could see it being important to have appropriate speed limits in place before the introduction of robocars, in case robocars are programmed to drive faster than is safe because the speed limit is set too high. I think the use of seat belts is still a good norm to have, even if robocars will drive more safely than human drivers.   I'm not sure whether this would affect the timing of the transition, but if the robocar was going to be programmed with a speed limit anyway, then lowering the speed limit doesn't seem like it would slow down the transition (not sure on this though).

CFTC regulations have been at least as much of an obstacle as gambling laws. It's not obvious whether the CFTC would allow this strategy.

You're mostly right. But I have some important caveats.

The Fed acted for several decades as if it was subject to political pressure to reduce inflation. Economists mostly agree that the optimal inflation rate is around 2%. Yet from 2008 to about 2019 the Fed acted as if that were an upper bound, not a target.

But that doesn't mean that we always need more political pressure for inflation. In the 1960s and 1970s, there was a fair amount of political pressure to increase monetary stimulus by whatever it took to reduce unemployment. That worked well when infla... (read more)

1
Remmelt
These caveats are helpful, thank you. I appreciate the elaboration on the Fed board's changing plans for interest rates and inflation, and on the changing influence of non-high-income employees and people with pension plans. I was wondering whether I had misinterpreted OpenPhil staff's opinion as being that rich people have been indirectly influencing the Fed towards a more hawkish stance (I recalled hearing something like this in another interview with Holden, but haven't been able to find that interview again). Either way, OpenPhil's analysis around this is probably much more 'clustery' and nuanced. I would agree with you, though, that high-net-worth individuals who have most of their capital in ownership stakes of companies that hold relatively little cash or bonds on their balance sheets, and that can flexibly raise the prices of their products/services, won't be impacted much by rising inflation. Edit: Good nuance re: not assuming a constant velocity of money (how fast money passes hands from transaction to transaction). What you wrote doesn't seem to refute the argument I made concerning model error in current macroeconomic theories. As, again, a complete amateur, I don't have any comment on what range of inflation to target or what the trade-offs are, except that, all else equal, a 2% inflation rate seems pretty benign. Overall, your points make me more uncertain about my understanding of which stakeholder groups can and tend to influence Fed monetary policy decisions, and how they are motivated to act. Will read your review.

Hanson reports estimates that under our current system, elites have about 16 times as much influence as the median person.

My guess is that under futarchy, the wealthy would have somewhere between 2 and 10 times as much influence on outcomes that are determined via trading.

You seem to disagree with at least one of those estimates. Can you clarify where you disagree?

The original approach was rather erratic about finding high value choices, and was weak at identifying the root causes of the biggest mistakes.

So participants would become more rational about flossing regularly, but rarely noticed that they weren't accomplishing much when they argued at length with people who were wrong on the internet. The latter often required asking embarrassing questions about their motives, and sometimes realizing that they were less virtuous than they had assumed. People will, by default, tend to keep their attention away from questions like that.

T... (read more)

To the best of my knowledge, internal CEAs rarely if ever turn up negative.

Here's one example of an EA org analyzing the effectiveness of their work, and concluding the impact sucked:

CFAR in 2012 focused on teaching EAs to be fluent in Bayesian reasoning, and more generally to follow the advice from the Sequences. CFAR observed that this had little impact, and after much trial and error abandoned large parts of that curriculum.

This wasn't a quantitative cost-effectiveness analysis. It was more a subjective impression of "we're not getting good enough re... (read more)

It might be orthogonal to the point you're making, but do we have much reason to think that the problem with old-CFAR was the content? Or that new-CFAR is effective?

Another two examples off the top of my head:

Thanks a lot! Is there a writeup of this somewhere? I tend to be a pretty large fan of explicit rationality (at least compared to EAs or rationalists I know), so evidence that reasoning in this general direction is empirically kind of useless would be really useful to me!

It seems strange to call populism anti-democratic.

My understanding is that populists usually want more direct voter control over policy. The populist positions on immigration and international trade seem like stereotypical examples of conflicts where populists side with the average voter more than do the technocrats who they oppose.

Please don't equate anti-democratic with bad. It seems mostly good to have democratic control over the goals of public policy, but let's aim for less democratic control over factual claims.

2
Hauke Hillebrandt
Sorry for being unclear, I didn't mean that populism must necessarily be anti-democratic - I've made a small edit to say that populism has any of the three features 'anti-democratic, illiberal, or anti-technocratic' to make this clearer - thanks for the feedback! I've used my own rough and fuzzy definition of populism as a bit of a catch-all term for some things that are not liberal democracy, where illiberalism violates minority rights. So, for example, the Swiss minaret controversy, where a majority banned the building of minarets through a popular referendum, is something I call populist here, despite being democratic. You could replace 'populism' with another term, but I think it's not worth getting hung up on definitions. Yes, agreed - I don't think direct democracy (a la Switzerland) is always better. But yes, in the long term policy goals should ideally not be 'anti-democratic', even if they're technocratic and not very illiberal (like the King of Jordan). If you have too much technocracy and too little democratic accountability, that might lead to populist backlash (see David Autor's studies on trade I cite here, or Peter Singer's case against migration). So let's aim for whatever creates the most utility on the margin, which can sometimes be more democratic control (Jordan, but not Switzerland), sometimes more technocracy (e.g. US left), and sometimes more liberalism (e.g. US right).

I doubt that that study was able to tell whether the dietary changes improved nutrition. They don't appear to have looked at many nutrients, or figured out which nutrients the subjects were most deficient in. Even if they had quantified all important nutrients in the diet, nutrients in seeds are less bioavailable than nutrients in animal products (and that varies depending on how the seeds are prepared).

There's lots of somewhat relevant research, but it's hard to tell which of it is important, and maybe hard for the poor to figure out whether they ought ... (read more)
