
trevor1

175 karma · Joined Sep 2019

Comments (226)

Do you think you could linkpost your article to Lesswrong too? 

I know this article mainly focuses on EA values, but it also overlaps with a bunch of stuff that LW users like to research and think about (e.g. in order to better understand the current socio-political and geopolitical situation with AI safety). 

There are a lot of people on LW who mainly spend their days deep in quantitative technical alignment research, but who are surprisingly insightful and helpful when given a fair chance to weigh in on the sociological and geopolitical environment that EA and AI safety take place in, e.g. johnswentworth's participation in this dialogue.

Normally the barriers to entry are quite high, which discourages involvement from AI safety's most insightful and quantitative thinkers. Non-experts typically start out with really bad takes on US politics or China (e.g. believing that the US military just hands over the entire nuclear arsenal to a new president every 4-8 years), and people have to call them out on that in order to preserve community epistemics. 

But those barriers also keep alignment researchers and other quantitative people separated from the people thinking about the global and societal environment that EA and AI safety take place in, an area that currently needs as many people as possible understanding the problems and thinking through viable solutions.

Upvoted, I'm grateful for the sober analysis.

The previous point notwithstanding, people's attention spans are extremely short, and the median outcome of a news story is ~nothing. I've commented before that FTX's collapse had little effect on the average person’s perception of EA, and we might expect a similar thing to happen here.

I think this is an oversimplification. This effect is largely caused by competing messages; the modern internet optimizes information for memetic fitness, e.g. by maximizing emotional intensity or persuasive effect, and people have so much routine exposure to content that leads their minds around in various directions that they become wary (or see having strong reactions to anything at all as immature, since a disproportionate share of outcries on the internet come from teenagers). This is the main reason why people take things with a grain of salt.

However, Overton windows can still undergo big and lasting shifts (this process could also be engineered deliberately, even long before generative AI emerged, e.g. via clown attacks, which exploit social status instincts to consistently hijack any person's impressions of any targeted concept). The 80,000 Hours podcast with Cass Sunstein covered how Overton windows are dominated by vague impressions of which ideas are acceptable or unacceptable to talk about (note: this podcast was from 2019). This dynamic could plausibly strangle EA's access to fresh talent, and AI safety's access to mission-critical policy influence, for several years (which would be far too long).

It can be frustrating to feel that a group you are part of is being judged by the actions of a couple people you’ve never met nor have any strong feelings about.

On the flip side, johnswentworth actually had a pretty good take on this: that the human brain is instinctively predisposed to over-focus on the risk of its in-group becoming unpopular with everyone else:

First, [AI safety being condemned by the public] sure does sound like the sort of thing which the human brain presents to us as a far larger, more important fact than it actually is. Ingroup losing status? Few things are more prone to distorted perception than that.

Strong upvoted. I'm a huge fan of optimized combinations of words as a communication strategy, and have been for almost two years now. 

I think that converting key x-risk ideas into poetry has a ton of potential to produce communication value through galaxy-brained combinations of words (including solving many problems fundamental to the human mind and human groups, such as the one described in Raemon's You Get About 5 Words). 

I recommend pioneering this idea and seeing how far you can run with it; I think the expected value makes it worth trying, even if there's a risk that it won't work out, or that you won't be the one credited for getting it going.  

(As a side note, I also think it's valuable to say at the beginning whether LLM generation was used and to what extent. It might seem obvious to you, and to people with tons of experience with both poetry and LLMs it probably is obvious that this is human-written, but LLM capabilities are always changing, and modern readers might need the reassurance, especially readers new to poetry. Skill building for Cyborg poetry might be high EV too, and it might be important to be an early adopter so that EA will be the first to ride the wave when things get serious.)

If this is true, then I think the board has made a huge mess of things. They've taken a shot without any ammunition, and not realised that the other parties can shoot back. Now there are mass resignations, Microsoft is furious, seemingly all of silicon valley has turned against EA, and it's even looking likely that Altman comes back.

How much of this is "according to anonymous sources"?

The Board was deeply aware of intricate details of other parties' will and ability to shoot back. Probably nobody was aware of all of the details, since webs of allies are formed behind closed doors and rearrange during major conflicts, and since investors have a wide variety of retaliatory capabilities that they might not have been open about during the investment process.

Agreed; I only used the phrase "dominance games" because it seemed helpful for understandability and the word count. But it was inaccurate enough to be worth the effort of finding a better combination of words.

Because humans are primates, we have a strong drive to gain social status and play dominance games. The problem is that humans tend to take important concepts and turn them into dominance games.

As a result, people anticipate some sort of dominance or status game whenever they hear about an important concept. For many people, this anticipation has become so strong that they stopped believing that important concepts can exist.

Henrik Karlsson's post Childhoods of Exceptional People compiled research indicating that there are intensely positive effects from young children spending lots of time talking and interacting with smart, interested adults; so much so that we could even reconsider the paradigm of kids mostly spending time with other kids their age.

It's probably really important to go to lots of events, meet and talk to a bunch of different people, and get a wider variety of perspectives; there's only so much that a couple can do from inside their own two heads.

Western culture is highly individualistic, especially when it comes to major life decisions. However, aligning oneself with empirical reality is typically best done by gathering information from lots of people.

The difficulty of pitching AI safety to someone has been going down by ~50% every ~18 months. This Thanksgiving might be a great time to introduce it to family; run Murphyjitsu and be goal-oriented! 🦃
