Wei_Dai

3264 · Joined Jun 2015

Comments (173)

Would be interested in your (eventual) take on the following parallels between FTX and OpenAI:

  1. Inspired/funded by EA
  2. Taking big risks with other people's lives/money
  3. Attempt at regulatory capture
  4. Large employee exodus due to safety/ethics/governance concerns
  5. Lack of public details of concerns due in part to non-disparagement agreements

just felt like SBF immediately became a highly visible EA figure for no good reason beyond $$$.

Not exactly. From Sam Bankman-Fried Has a Savior Complex—And Maybe You Should Too:

It was his fellow Thetans who introduced SBF to EA and then to MacAskill, who was, at that point, still virtually unknown. MacAskill was visiting MIT in search of volunteers willing to sign on to his earn-to-give program. At a café table in Cambridge, Massachusetts, MacAskill laid out his idea as if it were a business plan: a strategic investment with a return measured in human lives. The opportunity was big, MacAskill argued, because, in the developing world, life was still unconscionably cheap. Just do the math: At $2,000 per life, a million dollars could save 500 people, a billion could save half a million, and, by extension, a trillion could theoretically save half a billion humans from a miserable death.

MacAskill couldn’t have hoped for a better recruit. Not only was SBF raised in the Bay Area as a utilitarian, but he’d already been inspired by Peter Singer to take moral action. During his freshman year, SBF went vegan and organized a campaign against factory farming. As a junior, he was wondering what to do with his life. And MacAskill—Singer’s philosophical heir—had the answer: The best way for him to maximize good in the world would be to maximize his wealth.

SBF listened, nodding, as MacAskill made his pitch. The earn-to-give logic was airtight. It was, SBF realized, applied utilitarianism. Knowing what he had to do, SBF simply said, “Yep. That makes sense.” But, right there, between a bright yellow sunshade and the crumb-strewn red-brick floor, SBF’s purpose in life was set: He was going to get filthy rich, for charity’s sake. All the rest was merely execution risk.
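
(As an aside, the scaling in MacAskill's pitch is purely linear. A minimal sketch of the arithmetic, using the article's illustrative $2,000-per-life figure:)

```python
# The linear "earn to give" arithmetic from the quoted passage,
# using the article's illustrative $2,000 cost per life saved.
COST_PER_LIFE_USD = 2_000

def lives_saved(donation_usd: float) -> float:
    """Lives saved under the simple linear model."""
    return donation_usd / COST_PER_LIFE_USD

for amount in (1e6, 1e9, 1e12):
    print(f"${amount:,.0f} -> {lives_saved(amount):,.0f} lives")
# $1,000,000 -> 500 lives
# $1,000,000,000 -> 500,000 lives
# $1,000,000,000,000 -> 500,000,000 lives
```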

To give some additional context: China emitted 11680 MT of CO2 in 2020, out of 35962 MT globally. In 2022 it plans to mine 300 MT more coal than the previous year (which itself added 220 MT of coal production), causing an additional ~600 MT of CO2 from this alone (a bit higher or lower depending on what kind of coal is produced). Previously, China tried to reduce its coal consumption, but that caused energy shortages and rolling blackouts, forcing the government to reverse direction.
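
(Here's a rough sketch of the conversion behind those numbers. The ~2 t of CO2 per tonne of coal is an assumed average emission factor; the true value depends on the coal's carbon content, hence "a bit higher or lower".)

```python
# Rough check of the coal-to-CO2 figures above, under an assumed
# average emission factor of ~2 t CO2 per tonne of coal burned.
CO2_PER_TONNE_COAL = 2.0  # assumption; varies with carbon content

extra_coal_mt = 300          # planned extra 2022 coal output, MT
extra_co2_mt = extra_coal_mt * CO2_PER_TONNE_COAL
print(f"Extra CO2: {extra_co2_mt:.0f} MT")  # -> 600 MT

global_co2_2020_mt = 35_962  # global CO2 emissions in 2020, MT
share = extra_co2_mt / global_co2_2020_mt
print(f"Share of 2020 global emissions: {share:.1%}")  # -> ~1.7%
```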

Given this, it's really unclear how efforts like persuading Canadian voters to take climate change more seriously can make enough difference to be considered "effective" altruism. (Not sure if that line in your conclusions is targeted at EAs, or was originally written for a different audience.) Perhaps EAs should look into other approaches (such as geoengineering) that are potentially more neglected and/or tractable?

To take a step back, I'm not sure it makes sense to talk about the "technological feasibility" of lock-in, as opposed to, say, its expected cost: if the only feasible method of lock-in caused you to lose 99% of the potential value of the universe, that would seem like a more important piece of information than "it's technologically feasible".

(On second thought, maybe I'm being unfair in this criticism, because the feasibility of lock-in is already pretty clear to me, at least if one is willing to accept extreme costs, so I'm more interested in the question of "but can it be done at more acceptable costs?" Perhaps this isn't true of others.)

That aside, I guess I'm trying to understand what you're envisioning when you say "An extreme version of this would be to prevent all reasoning that could plausibly lead to value-drift, halting progress in philosophy." What kind of mechanism do you have in mind for doing this? Also, you distinguish between stopping philosophical progress vs stopping technological progress, but since technological progress often requires solving philosophical questions (e.g., related to how to safely use the new technology), do you really see much distinction between the two?

Consider a civilization that has "locked in" the value of hedonistic utilitarianism. Subsequently some AI in this civilization discovers what appears to be a convincing argument for a new, more optimal design of hedonium, which purports to be 2x more efficient at generating hedons per unit of resources consumed. Except that this argument actually exploits a flaw in the reasoning processes of the AI (which is widespread in this civilization) such that the new design is actually optimized for something different from what was intended when the "lock in" happened. The closest this post comes to addressing this scenario seems to be "An extreme version of this would be to prevent all reasoning that could plausibly lead to value-drift, halting progress in philosophy." But even if a civilization was willing to take this extreme step, I'm not sure how you'd design a filter that could reliably detect and block all "reasoning" that might exploit some flaw in your reasoning process.

Maybe in order to prevent this, the civilization tries to lock in "maximize the quantity of this specific design of hedonium" as its goal instead of hedonistic utilitarianism in the abstract. But 1) maybe the original design of hedonium is already flawed or highly suboptimal, and 2) what if (as an example) some AI discovers an argument that it should engage in acausal trade in order to maximize the quantity of hedonium in the multiverse, except that this argument is actually wrong?

This is related to the problem of metaphilosophy, and my hope that we can one day understand "correct reasoning" well enough to design AIs that we can be confident are free from flaws like these, but I don't know how to argue that this is actually feasible.

I don't have good answers to your questions, but I just want to say that I'm impressed and surprised by the decisive and comprehensive nature of the new policies. It seems that someone or some group actually thought through what would be effective policies for achieving maximum impact on the Chinese AI and semiconductor industries, while minimizing collateral damage to the wider Chinese and global economies. This contrasts strongly with other recent US federal policy-making that I've observed, such as COVID, energy, and monetary policies. Pockets of competence seem to still exist within the US government.

But two formidable new problems for humanity could also arise

I think there are other AI-related problems that are comparable in seriousness to these two, which you may be neglecting (since you don't mention them here). These posts describe a few of them, and this post tried to comprehensively list my worries about AI x-risk.

They are building their own alternatives, for example CodeGeeX is a GPT-sized language model trained entirely on Chinese GPUs.

It used Huawei Ascend 910 AI Processors, which were fabbed by TSMC; TSMC will no longer be allowed to make such chips for China.

absent a war, China can hope to achieve parity with the West (by which I mean the countries allied with the US including South Korea and Japan) on the hardware side by buying chips from Taiwan like everyone else

Apparently this is no longer true as of Oct 2022. From https://twitter.com/jordanschnyc/status/1580889364233539584:

Summary from Lam Research, which is involved with these new sanctions:

  1. All Chinese advanced computing chip design companies are covered by these sanctions, and TSMC will no longer do any tape-out for them from now on;

This was apparently based on this document, which purports to be a transcript of a Q&A session with a Lam Research official. Here's the relevant part, translated from the Chinese (and consistent with the above tweet):

Q: How is tape-out at TSMC/Global Foundry for Chinese customers affected?

A: It will be difficult to continue taping out high-compute chips for China. Advanced-process chips that are not high-compute can still be provided; the rules give specific threshold definitions for what counts as a high-compute chip. For the 28 companies on the entity list, no chips at all can be taped out.

What precautions did you take or would you recommend, as far as preventing the (related) problems of falling in with the wrong crowd and getting infected with the wrong memes?

What morality and metaethics did you try to teach your kids, and how did that work out?

(Some of my posts that may help explain my main worries about raising a kid in the current environment: 1 2 3. Would be interested in any comments you have on them, whether from a parent's perspective or not.)
