
trevor1 · 165 karma · Joined Sep 2019 · 247 comments

The Dark Forest Problem implies that people centralizing power might face strong incentives to hide, act through proxies, and/or disguise their centralized power as decentralized power. The question is to what extent high-power systems are dark forests vs. the usual quid-pro-quo networks and stable factions.

Changing technology and new applications for power, starting in the 1960s, imply that factions would be less stable and iterated trust less reliable, and therefore that a dark forest system was more likely to emerge.

Yep, that's the way it goes! 

Also, figuring out what's original and what's memetically downstream is an art. Even more so when it comes to dangerous technologies that haven't been invented yet.

Ah, I didn't know about the EA handbook and would not have found out if not for this post, thanks! It looks pretty good and along with the CFAR handbook, I wish I had known about it many years ago.

Yeah, a lot of them are not openly advertised, for good reasons. One example that's probably fine to talk about is NunoSempere's claim that EAforum is shifting towards catering to new or marginal users.

The direct consequence is a reduction in the net quality of content on EAforum, but it also lets the forum steer people towards events as they get more interested in various EA topics, where they can talk more freely without worrying about saying something controversial, or get involved directly with people working on those areas via face-to-face interaction. And it doesn't stop EAforum from remaining a great bulletin board for orgs to publish papers and updates and get feedback.

But at first glance, catering to marginal users looks like classic user retention. That's not what's happening; this is not a normal forum, and that's the wrong way to think about it.

My thinking about EAforum over the years has typically been "Jesus, why on earth would people deliberately set things up like that", and then maybe a couple of months later, maybe a couple of years later, I start to notice a possible explanation, and I'm like "oooooooooooohhhhhh, actually, that might make a lot of sense, I wish I had noticed that immediately".

Large multi-human systems tend to be pretty complicated and counterintuitive, but they become way, way more so when most of the people involved are extremely thoughtful. Plus, the system changes in complicated and unprecedented ways as the world changes around it, or as someone here or there discovers a game-changing detail about the world, meaning that EAforum is entering uncharted territory and tearing down Schelling fences rather frequently.

Sinocism is like Zvi's blog, except for China watchers instead of AI safety. It leans a little towards open source, but it's free and the guy knows the space (though he doesn't know everything).

I would like to add that certain types of people might be predisposed towards power seeking (and succeeding at power seeking), rather than just being corrupted by power, status, money, or fame. 

Social Dark Matter offers some interesting takes on this; it's more nuanced than it appears, e.g. neurotic people might be more reputation-obsessed, but also potentially more likely than the median human to internalize moral values (or, in the case of EA, to commit to internalizing moral values in a lasting way). This is purely speculative food for thought to illustrate the complexity of the situation (empirically researching the psychology of different kinds of powerful people is difficult, due to nonresponse bias stacking samples disproportionately towards people who aren't as powerful as they look).

Oh, sorry, by profiteers I was referring to people like forum lurkers and hostile open source researchers, not you at all. 

My thinking was that this plan works fine with or without funding so long as someone (e.g. you) coordinates it, but it can't be done open-source on EAforum or LessWrong, because the bad guys (not journalists, the other bad guys) would get too much information out of it.

My current thinking about this is that EAforum and LessWrong have confused, mentally ill, or profiteering people trying to do open source research and find ways to maximize damage to EA.

As a result, aggregating criticism in an open and decentralized way will boost the adversary's epistemics in parallel, and is thus better done in a closed, in-person-networked, and centralized way (I made the same mistake a couple of years ago).

Answer by trevor1 · Jan 14, 2024

Raemon, a moderator on LessWrong, recommends Scott Alexander's Superintelligence FAQ.
