Tobias Häberli

599 karma · Bern, Switzerland · Joined Dec 2018

Comments (49)

I recently listened to the Radio Bostrom audio version of the Unilateralist's Curse, having previously only read it. Something about the narration made me think it would lend itself very well to an explainer video.

One additional thing I’d be curious about:

You played the role of a messenger between SBF and Elon Musk in a bid for SBF to invest up to $15 billion of (presumably mostly his own) wealth in an acquisition of Twitter. The stated reason for the bid was to make Twitter better for the world. This has worried me a lot over the last few weeks. It could easily have been the most consequential thing EAs have ever done, and there has, to my knowledge, never been a thorough EA debate signalling that this would be a good idea.

What was the reasoning behind the decision to support SBF by connecting him to Musk? How many people from FTXFF, or EA at large, were consulted on whether that was a good idea? Do you think it still made sense, at the point when you helped with the potential acquisition, to regard most of SBF's wealth as EA resources? If not, why did you not inform the EA community?

Source for the claim about playing a messenger: https://twitter.com/tier10k/status/1575603591431102464?s=20&t=lYY65-TpZuifcbQ2j2EQ5w

I think it's supposed to be Peter Thiel (right) and Larry Page (top) in the cover photo. They are mentioned in the article, are very rich, and look to me more like the drawings.

Release shocking results of an undercover investigation ~2 weeks before the vote. Maybe this could have led to a 2-10% increase?


My understanding is that they did try this, with an undercover investigation report on poultry farming. But it was only in the news for a very short time and, I'm guessing, didn't have a large effect.

A further thing that might have helped:

  • Show clearly how the initiative would have improved animal welfare.
    The whole campaign was a bit of a mess in this regard. In the "voter information booklet", the only clearly understandable improvement concerned maximum livestock numbers – which only affected laying hens. This led to this underwhelming infographic in favour of the initiative [left column: current standards, right column: standards if the initiative passes].

    The initiative committee does claim on its website that the initiative would lead to more living space for farmed animals, but it never advertised how much. I struggled to find the space-requirement information with a quick Google search before a national newspaper reported on it.

I spent an hour looking into evidence for the quote you posted. While I think the phrasing is inaccurate, I'd say the gist of the quote is true. For example, it's pretty understandable that people jump from "Emile Torres says that Nick Beckstead supports white supremacy" to "Emile Torres says that Nick Beckstead is a white supremacist".

White Supremacy:
In a public Facebook post you link to this public Google Doc where you call a quote from Nick Beckstead “unambiguously white-supremacist”.

You reinforce that view in a later tweet:
https://twitter.com/xriskology/status/1509948468571381762

You claim that the writing of Bostrom, Beckstead, Ord, Greaves, etc. is “very much about the preservation of white Western civilization”:

https://twitter.com/xriskology/status/1527250704313856000

You also tweeted about a criticism of Hilary Greaves in which you “see white supremacy all over it”:

https://twitter.com/xriskology/status/1229107714015604736

Genocide:
In another Facebook post you agree with Olle Häggström [note: Häggström actually strongly disagrees with this characterization of his position] that Bostrom’s idea of transhumanism and utilitarianism in Letter from Utopia “is a recipe for moral disaster—for genocide, white supremacy, and so on.”

Eugenics:
In your Salon article you call some of Bostrom’s ideas “straight out of the handbook of eugenics”.
https://www.salon.com/2022/08/20/understanding-longtermism-why-this-suddenly-influential-philosophy-is-so/

You reinforce this view in the following tweet:

https://twitter.com/xriskology/status/1562003541539037186

You also say that “Longtermism is deeply rooted in the ideology of eugenics”.

https://twitter.com/xriskology/status/1557338332702572545

Racism:

You called Sam Harris “quite racist”:
https://twitter.com/xriskology/status/1384425549091774466

In this tweet you strongly imply that some of Bostrom’s views are indistinguishable from scientific racism:

https://twitter.com/xriskology/status/1569365203049140224

There’s also this tweet that describes the EA community as welcoming to misogynists, neoreactionaries, and racists:

https://twitter.com/xriskology/status/1510708370285776902
 

Toby Ord touches on that in The Precipice.
For example here (at 11:40).

But the same study also found that only 41% of respondents from the general population placed AI becoming more intelligent than humans among their top three risks of concern, out of a choice of five risks.
For only 12% of respondents was it the biggest concern. 'Opinion leaders' were again more optimistic – only 5% of them thought AI surpassing human intelligence was the biggest concern.

Question: "Which of the potential risks of the development of artificial intelligence concerns you the most? And the second most? And the third most?"
Option 1: The risks related to personal security and data protection.
Option 2: The risk of misinterpretation by machines.
Option 3: Loss of jobs.
Option 4: Artificial intelligence that surpasses human intelligence.
Option 5: Others

I recently found a Swiss AI survey that indicates that many people do care about AI.
[This is only very weak evidence against your thesis, but might still interest you 🙂.]

Sample sizes:
General population – 1,245 people
Opinion leaders – 327 people [from business, public administration, science, and education]

The question: 
"Do you fear the emergence of an "artificial super-intelligence", and that robots will take power over humans?"

From the general population, 11% responded "Yes, very" and 37% responded "Yes, a bit" – 48% combined.
So roughly half of the respondents (at least among those who expressed any sentiment) were at least somewhat worried.

The 'opinion leaders', however, were much less concerned: only 2% had a lot of fear, and 23% had a bit of fear.
