TeddyW

94 karma · Joined Nov 2022 · Working (15+ years)

Posts: 4 (sorted by new)

Comments: 32

Answer by TeddyW · Sep 01, 2023

Your Tech Transfer office is a member of AUTM. AUTM encourages licenses that include altruistic clauses. For example, the license could waive royalties for sales in third-world countries when the licensee is selling at cost.

Such a deal allows the licensee to make a profit on first-world sales to recover development costs while still realizing the altruistic value. Most inventions require development money; if open-sourced, they die from lack of funding.

"when I know a bunch of excellent forecasters..."

Perhaps your sampling techniques are better than Tetlock's, then.

Answer by TeddyW · Apr 19, 2023

Highly effective causes saturate, making it impossible to distribute large sums of money at especially high effectiveness.

Answer by TeddyW · Apr 19, 2023

AGI is more likely to save us from all-cause existential risk than to kill us all.

@Linch Have you ever met any of these engineers who work on advancing AI in spite of thinking that the "most likely result ... is that literally everyone on Earth will die"? I have never met anyone so thoroughly depraved. Mr. Yudkowsky and @RobBensinger think our field has many such people.

I wonder if there is a disconnect in the polls. I wonder if people at MIRI have actually talked to AI engineers who admit to this abomination. What do you even say to someone so contemptible? Perhaps there are no such people.

I think it is much more likely that these MIRI folks have worked themselves into a corner of an echo chamber than that our field has attracted so many low-lifes who would sooner kill every last human than walk away from a job.

I do not believe @RobBensinger's and Yudkowsky's claim that "there are also lots of people in ML who do think AGI is likely to kill us all, and choose to work on advancing capabilities anyway."

What experiences tell you that there are also lots of people in ML who do think AGI is likely to kill us all, and choose to work on advancing capabilities anyway?

Yudkowsky claims that AI developers are plunging headlong into our research in spite of believing we are about to kill all of humanity. He says each of us continues this work because we believe the herd will just outrun us if any one of us were to stop.

The truth is nothing like this. The truth is that we do not subscribe to Yudkowsky's doomsday predictions. We work on artificial intelligence because we believe it will have great benefits for humanity and we want to do good for humankind. We are not the monsters that Yudkowsky makes us out to be.
