hmijail

I wrote much of this in January and February before it became apparent how much of an incredible own-goal we were going to make by undermining our alliances.

Personally, the failure to address this came across as USA-absorbed: largely lacking in self-awareness, and making me itch to find a Chinese counterpoint.

As a European living in Australia and interested in China, right now I watch worriedly to see what USA-backed Israel will preemptively bomb next, while everything in Western news seems to be about how Iran's leader is "vulnerable" but has an "iron grip"[1], and how exactly he will be killed.

So I find it hard to take seriously a post that is all worried about Xi and China vs Taiwan, while the USA figures only as some kind of abstract defender that "maybe yes, maybe not" will join wars, simply because it somehow happens to have bases next door to China. And Russia. And Iran. Etc., etc.

The reliance on CIA intelligence (just like with Iraq's WMDs[2]), and the fact that the Plan A video starts with a nuclear strike from Russia (never the USA, of course), don't help.

I guess my question is: how much risk reduction would there be if the USA stopped messing with the world?

  1. ^

    The enemy is weak and strong at the same time - just like Umberto Eco warned about fascist movements. Who are the baddies again?

  2. ^

    ADDED 2 days later: the USA's "spy chief" repeats that there's no evidence Iran is building nuclear weapons. Trump disavows her. 12 hours later, there's news of the USA bombing Iran. Why should I be worried about Xi again?

Thank you for this interesting post. 

You provide a lot of examples of companies and studies already using various flavors of AI and ML — and in many cases things get thin enough that it feels like they are using AI-washed databases and simulations. At the risk of sounding cynical, the take I end up with is "lots of companies and studies have been using software and statistics for years to develop alternative proteins, and many are happy to embrace AI-washing".

This feels ironic given that you yourself mention AI-washing as a risk. 

So I guess my question is: why the AI focus in the post, and what is the future implication that I'm missing?

"Places that have had nuclear power plants, but no longer do, seem to really want nuclear to come back — these are good jobs."


I was surprised by this statement. Couldn't one just as easily say the same about coal power plants and mines?

Reminds me of the part in Douglas Adams' "The Restaurant at the End of the Universe" where a cow-like being is eager to be eaten, describes how she has been overfeeding to fatten herself, and suggests to the Earthlings dishes made from parts of her body. They end up horrified and order a salad instead.

I don't expect that Adams wrote it to defend veganism, but he was good at laughing at this kind of absurdity/hypocrisy.

Just an anecdote, and bordering on off-topic I guess, but "vegetarian/vegan tastes better than meat" is a point that I (a non-vegan!) have found myself defending multiple times. In fact, my safest bet when trying a new cuisine is to go for the most vegan dishes, for taste alone.

When I express this socially, I typically find others agreeing.

So this sprinkled insistence that "veganism defended for taste is suspicious" is itself suspicious to me, and makes me go meta. It's not the point of the post, however, so I'll drop it here.

It's great to have the positive example, and it'd be great too to have some concrete negative examples of the ads that were unsuccessful.
Or maybe it's not really that they were unsuccessful, but rather just "ambient-level"?

I'd also be very interested in both Lanier's and Gupta's views. But could you elaborate on why you'd expect Lanier to be "probably controversial in EA circles"?