Tobias Häberli

728 karma · Bern, Switzerland · Joined Dec 2018

Comments (64)

I find the second one more readable. 

Might be due to my display: If I zoom into the two versions, the second version separates letters better.



But you're also right that we'll get used to most changes :)

I find the font less readable and somewhat clunky. 
I can't quite express why it feels that way. It reminds me of display scaling issues, where the display resolution doesn't match the screen's native resolution.

I'm not really sure if the data suggests this.

The question is rather vague, making it difficult to determine the direction of the desired change. It seems to suggest that longtermists and more engaged individuals are less likely to support large changes in the community in general. But both groups might, on average, agree that change should go in the 'big tent' direction.

Although there are statistically significant differences in responses to "I want the community to look very different" between those with mild vs. high engagement, their average responses are still quite similar (around 4.2/7 vs. 3.7/7). A statistically significant difference in beliefs between two groups doesn't always imply a large or meaningful difference in the content of those beliefs. I could also just be overlooking something here.
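To illustrate that point with made-up numbers (a hypothetical sketch, not the actual survey data or analysis): with a few hundred respondents per group, a 4.2 vs. 3.7 gap on a 1–7 scale comes out highly "significant" even though the two distributions overlap heavily. The group sizes and the assumed standard deviation of 1.5 below are placeholders.

```python
# Hypothetical sketch, not the survey's actual data or analysis.
# Point: with large-ish samples, a 0.5-point gap on a 1-7 scale is
# easily "statistically significant" even though responses mostly overlap.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
# Simulated responses, clipped to the 1-7 scale; SD of 1.5 is a placeholder.
mild = np.clip(rng.normal(loc=4.2, scale=1.5, size=500), 1, 7)
high = np.clip(rng.normal(loc=3.7, scale=1.5, size=500), 1, 7)

t_stat, p_value = stats.ttest_ind(mild, high)
pooled_sd = np.sqrt((mild.var(ddof=1) + high.var(ddof=1)) / 2)
cohens_d = (mild.mean() - high.mean()) / pooled_sd

print(f"p = {p_value:.1e}, Cohen's d = {cohens_d:.2f}")
# Tiny p-value ("significant!"), but only a small-to-medium effect size.
```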
 

The only source for this claim I've ever found was Emile P. Torres's article What “longtermism” gets wrong about climate change

It's not clear where they got the information about an "enormous promotional budget of roughly $10 million". I'm not saying it's untrue, but it's also unclear how Torres would have access to this information.

The implication is also that the promotional spending came out of EA pockets. But part of it might have been promotional spending by the book publisher.

ETA: I found another article by Torres that discusses the claim in a bit more detail.

MacAskill, meanwhile, has more money at his fingertips than most of us make in a lifetime. Left unmentioned during his “Daily Show” appearance: he hired several PR firms to promote his book, one of which was paid $12,000 per month, according to someone with direct knowledge of the matter. MacAskill’s team, this person tells me, even floated a total promotional budget ceiling of $10 million — a staggering number — thanks partly to financial support from the tech multibillionaire Dustin Moskovitz, cofounder of Facebook and a major funder of EA.

If I remember correctly, Claude had limited public deployment roughly a month before the Google investment, and roughly 2 months after their biggest funder (FTX) went bankrupt.

Thanks for getting back to me and providing more context. 

I do agree that Churchill was probably surprised by Roosevelt's use of the term because it was not in the official communiqué. Trying to figure out how certain historical decisions were influenced is very challenging.

The way you describe the events strikes me as a very strong claim; it requires a lot of things to be true beyond the term merely being used accidentally:

Accidentally called for unconditional surrender of the Japanese, leading to the eventual need for the bomb to be dropped. (p.35)

Based on the available information, and until we have better evidence for the claim, I would not want to use this as an example of a simple mistake having severe consequences. And because the anecdote is incredibly catchy, I worry that policy researchers and practitioners will read it and subsequently use it in conversation.

In EA, the roles of "facilitator" and "attendee" may not be as straightforward as they appear to be in AR. From personal experience, there are many influential people in the EA community who do not hold designated roles that overtly reveal their power. Their influence/soft power only becomes apparent once you gain a deeper understanding of how community members interrelate and how information is exchanged. On the other hand, someone who is newly on a Community Building grant may have more power on paper than in reality.

I agree with the need for a policy. I just want it to reflect the nuances of power dynamics in EA. While no policy will be perfect, we should aim to create one that does not unnecessarily restrict people – which could lead to disillusionment with the policy. And, more importantly, one that does stick in the cases where it should – e.g. for people with a lot of soft power.

This is currently at 14 agree votes and the same question for Will MacAskill is at -13 disagree votes.

I'd be curious whether this is mainly because Nick Beckstead was the CEO and therefore carried more responsibility, or whether there are other considerations.

The most recent Scott Alexander post seems potentially relevant to this discussion.

The following long section is about what OpenAI could be thinking – and might also translate to Anthropic. (The rest of the post is also worth checking out.)

Why OpenAI Thinks Their Research Is Good Now, But Might Be Bad Later

OpenAI understands the argument against burning timeline. But they counterargue that having the AIs speeds up alignment research and all other forms of social adjustment to AI. If we want to prepare for superintelligence - whether solving the technical challenge of alignment, or solving the political challenges of unemployment, misinformation, etc - we can do this better when everything is happening gradually and we’ve got concrete AIs to think about:

"We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right” scenarios […] As we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.

A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low."

You might notice that, as written, this argument doesn’t support full-speed-ahead AI research. If you really wanted this kind of gradual release that lets society adjust to less powerful AI, you would do something like this:

  • Release AI #1
  • Wait until society has fully adapted to it, and alignment researchers have learned everything they can from it.
  • Then release AI #2. Wait until society has fully adapted to it, and alignment researchers have learned everything they can from it.
  • And so on . . .

Meanwhile, in real life, OpenAI released ChatGPT in late November, helped Microsoft launch the Bing chatbot in February, and plans to announce GPT-4 in a few months. Nobody thinks society has even partially adapted to any of these, or that alignment researchers have done more than begin to study them.

The only sense in which OpenAI supports gradualism is the sense in which they’re not doing lots of research in secret, then releasing it all at once. But there are lots of better plans than either doing that, or going full-speed-ahead.

So what’s OpenAI thinking? I haven’t asked them and I don’t know for sure, but I’ve heard enough debates around this that I have some guesses about the kinds of arguments they’re working off of. I think the longer versions would go something like this:

The Race Argument:

  1. Bigger, better AIs will make alignment research easier. At the limit, if no AIs exist at all, then you have to do armchair speculation about what a future AI will be like and how to control it; clearly your research will go faster and work better after AIs exist. But by the same token, studying early weak AIs will be less valuable than studying later, stronger AIs. In the 1970s, alignment researchers working on industrial robot arms wouldn’t have learned anything useful. Today, alignment researchers can study how to prevent language models from saying bad words, but they can’t study how to prevent AGIs from inventing superweapons, because there aren’t any AGIs that can do that. The researchers just have to hope some of the language model insights will carry over. So all else being equal, we would prefer alignment researchers get more time to work on the later, more dangerous AIs, not the earlier, boring ones.
  2. “The good people” (usually the people making this argument are referring to themselves) currently have the lead. They’re some amount of progress (let’s say two years) ahead of “the bad people” (usually some combination of Mark Zuckerberg and China). If they slow down for two years now, the bad people will catch up to them, and they’ll no longer be setting the pace.
  3. So “the good people” have two years of lead, which they can burn at any time.
  4. If the good people burn their lead now, the alignment researchers will have two extra years studying how to prevent language models from saying bad words. But if they burn their lead in 5-10 years, right before the dangerous AIs appear, the alignment researchers will have two extra years studying how to prevent advanced AGIs from making superweapons, which is more valuable. Therefore, they should burn their lead in 5-10 years instead of now. Therefore, they should keep going full speed ahead now.

The Compute Argument:

  1. Future AIs will be scary because they’ll be smarter than us. We can probably deal with something a little smarter than us (let’s say IQ 200), but we might not be able to deal with something much smarter than us (let’s say IQ 1000).
  2. If we have a long time to study IQ 200 AIs, that’s good for alignment research, for two reasons. First of all, these are exactly the kind of dangerous AIs that we can do good research on - figure out when they start inventing superweapons, and stamp that tendency out of them. Second, these IQ 200 AIs will probably still be mostly on our side most of the time, so maybe they can do some of the alignment research themselves.
  3. So we want to maximize the amount of time it takes between IQ 200 AIs and IQ 1000 AIs.
  4. If we do lots of AI research now, we’ll probably pick all the low-hanging fruit, come closer to optimal algorithms, and the limiting resource will be compute - ie how many millions of dollars you want to spend building giant computers to train AIs on. Compute grows slowly and conspicuously - if you’ve just spent $100 million on giant computers to train AI, it will take a while before you can gather $1 billion to spend on even gianter computers. Also, if terrorists or rogue AIs are gathering a billion dollars and ordering a giant computer from Nvidia, probably people will notice and stop them.
  5. On the other hand, if we do very little AI research now, we might not pick all the low-hanging fruit, and we might miss ways to get better performance out of smaller amounts of compute. Then an IQ 200 AI could invent those ways, and quickly bootstrap up to IQ 1000 without anyone noticing.
  6. So we should do lots of AI research now.

The Fire Alarm Argument:

  1. Bing’s chatbot tried to blackmail its users, but nobody was harmed and everyone laughed that off. But at some point a stronger AI will do something really scary - maybe murder a few people with a drone. Then everyone will agree that AI is dangerous, there will be a concerted social and international response, and maybe something useful will happen. Maybe more of the world’s top geniuses will go into AI alignment, or it will be easier to coordinate a truce between different labs where they stop racing for the lead.
  2. It would be nice if that happened five years before misaligned superintelligences building superweapons, as opposed to five months before it, since five months might not be enough time for the concerted response to do something good.
  3. As per the previous two arguments, maybe going faster now will lengthen the interval between the first scary thing and the extremely dangerous things we’re trying to prevent.

These three lines of reasoning argue that burning a lot of timeline now might give us a little more timeline later. This is a good deal if:

  1. Burning timeline now actually buys us the extra timeline later. For example, it’s only worth burning timeline to establish a lead if you can actually get the lead and keep it.
  2. A little bit of timeline later is worth a lot of timeline now.
  3. Everybody between now and later plays their part in this complicated timeline-burning dance and doesn’t screw it up at the last second.

I’m skeptical of all of these.

The report suggests that Roosevelt's supposed accidental use of the term "unconditional surrender" and his subsequent failure to back down played a significant role in shaping the strategy that led to the dropping of atomic bombs on Japan. I found this claim hard to believe – and after some research, I think it's probably not correct.

Quite amazingly, the term ‘unconditional’ only entered into the Allied demands due to a verbal mistake made by Roosevelt when reading a joint statement in a live broadcast in January 1943, a fact that he later admitted. Churchill immediately repeated the demand, later saying: ‘Any divergence between us, even by omission, would on such an occasion and at such a time have been damaging or even dangerous to our war effort.' Thus, the otherwise reasonable idea that the bombs needed to be dropped to avoid more deaths in an invasion, was only true due to an unreasonable demand that was created by an error people were too proud to step back from. (Lessons from the development of the atomic bomb, page 27)

The claim is repeated on page 35. I couldn't easily find a copy of the original source for the claim[1].

But I could find three sources that seem to refute this interpretation.

The first non-primary source argues that Roosevelt supported the "unconditional surrender concept".

The matter was also discussed in the fall of 1942 by the U.S. Chiefs of Staff who, at the end of December, recommended to the President that no armistice be granted Germany, Japan, Italy, and the satellites until they offered the "unconditional surrender" of their armed forces. The President in reply informed them on January 7, 1943 that he intended to support the "unconditional surrender concept" at the forthcoming Conference at Casablanca. (Balfour, 1979. Page 283)[2]

Secondly, Churchill sent the following report from Casablanca on January 20th, 1943 – just four days prior to Roosevelt’s alleged "verbal mistake".

6. We propose to draw up a statement of the work of the conference for communication to the press at the proper time. I should be glad to know what the War Cabinet would think of our including in this statement a declaration of the firm intention of the United States and the British Empire to continue the war relentlessly until we have brought about the “unconditional surrender” of Germany and Japan. The omission of Italy would be to encourage a break-up there. The President liked this idea, and it would stimulate our friends in every country. [3]

(Churchill and Roosevelt were apparently confused about the specific procedures that should have led to the use of the term ‘unconditional surrender’. So that might be part of the reason why they gave different accounts over time.[4])

Thirdly, Roosevelt likely held the Press Conference Notes, drafted between 22 and 23 January 1943, in his hands during his statement on 24 January 1943.[5] These notes called for the "unconditional surrender" of Germany, Japan, and Italy.

The President and the Prime Minister, after a complete survey of the world war situation, are more than ever determined that peace can come to the world only by a total elimination of German and Japanese war power. This involves the simple formula of placing the objective of this war in terms of an unconditional surrender by Germany, Italy and Japan. Unconditional surrender by them means a reasonable assurance of world peace, for generations. Unconditional surrender means not the destruction of the German populace, nor of the Italian or Japanese populace, but does mean the destruction of a philosophy in Germany, Italy and Japan which is based on the conquest and subjugation of other peoples.

Based on this information, the timeline appears to be the following:

  • 07.01.1943 – Roosevelt expresses support for the "unconditional surrender concept".
  • 20.01.1943 – Churchill proposes using "unconditional surrender" in a statement, and notes that the President [Roosevelt] liked the idea.
  • 22.-23.01.1943 – Press Conference Notes drafted that call for the unconditional surrender of Germany, Italy, and Japan.
  • 24.01.1943 – Roosevelt uses the term "unconditional surrender" in his statement, likely while holding the Press Conference Notes, which call for exactly that.

From this sequence of events, it does not appear that Roosevelt used the term mistakenly. If that's right, the anecdote doesn't serve as an example of a slip-up, and a subsequent reluctance to back down from it, having severe consequences.

It's also quite possible that I'm missing something here. I'd love to learn more.

  1. ^

    Rhodes, Richard. 1986. The Making of the Atomic Bomb.

  2. ^

    Balfour, M. (1979). The Origin of the Formula. Armed Forces & Society, 5(2), 281–301.

  3. ^

    Churchill, Winston. 1950. The Second World War: The Hinge of Fate. (pp. 834–835)

  4. ^

    Balfour (1979) has a confusing passage about the second explanation:

    "When the draft of the Casablanca communiqué was submitted to Roosevelt and Churchill, it contained no reference to unconditional surrender and neither leader seems to have queried the omission. The obvious reason was that Roosevelt instead mentioned it in his talk to the press. After Churchill's telegram to the Cabinet came to light, thus making it impossible to attribute his claimed surprise to the contention that the subject had not been discussed beforehand with him, the inference seems to be that the surprise lay in this manner of publication.
    But the talk to the press was itself based upon a written text and one of the surviving drafts for this contains emendations which are said to be in Churchill's own hand, and must have been made during the preceding forty-eight hours. 13 Either he did not read the draft carefully, or his memory slipped, or else he, like Roosevelt, wanted to cover his tracks. It is unlikely that we will ever know the exact truth."

  5. ^

    Footnote 1: 

    "Photographs of the Roosevelt–Churchill press conference of January 24, 1943, such as that following p. 483, show Roosevelt holding a document, presumably the notes printed here.
