
trevor1


Ah, I see; for years I've been pretty pessimistic about people's ability to fool such systems (namely voice-only lie detectors facilitated by large numbers of retroactively-labelled audio recordings of honest and dishonest statements in the natural environments of different kinds of people), but now that I've read more about human genetic diversity, that may have been typical mind fallacy on my part; people in the top 1% of charisma and body-language self-control tend to be the ones who originally ended up in high-performance, high-stakes environments as those environments formed (or had such environments form around them, just as innovative institutions form around high-intelligence, high-output people).

I can definitely see the best data coming from a small fraction of the human body's outputs, such as pupil dilation; most of the body's outputs should yield Bayesian updates, but some sources will be wildly more consistent and reliable than others.
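To make that last point concrete, here's a minimal Bayesian-update sketch; every probability below is a hypothetical placeholder rather than a measured value, and the "reliable vs. noisy source" labels are just illustrations:

```python
# Minimal sketch of a Bayesian update from a single observed signal.
# All probabilities here are hypothetical placeholders, not measured values.

def update(prior, p_signal_given_deceptive, p_signal_given_honest):
    """Return P(deceptive | signal observed) via Bayes' rule."""
    numerator = prior * p_signal_given_deceptive
    denominator = numerator + (1 - prior) * p_signal_given_honest
    return numerator / denominator

prior = 0.10  # hypothetical base rate of deception

# A consistent, reliable source (say pupil dilation): hypothetical likelihood ratio 0.8 / 0.1 = 8
print(round(update(prior, 0.8, 0.1), 2))   # 0.47 -- a large update

# A noisy source (say posture shifts): hypothetical likelihood ratio 0.5 / 0.4 = 1.25
print(round(update(prior, 0.5, 0.4), 2))   # 0.12 -- barely moves the prior
```

The point of the sketch is just that a single high-likelihood-ratio channel can move the posterior far more than several weak ones combined.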

Why are you pessimistic about eyetracking and body language? Although those might not be as helpful in experimental contexts, they're much less invasive per unit time, and people in high-risk environments can agree to have specific, delineated periods of eyetracking and body-language data collected while in the high-performance environments themselves, such as working with actual models and code (i.e. not OOD environments like a testing room).

AFAIK analysts might find uses for this data later on: e.g. observing how patterns change over time depending on whether high-risk traits ultimately emerge; comparing people to others who later developed high-risk traits (comparing people against large amounts of data from other people could also be used to detect positive traits from a distance); spotting the exact period in which high-risk traits developed and cross-referencing that data with the testimony of a high-risk person who voluntarily wants other high-risk people to be easier to detect; or, depending on advances in data analysis, using the data to help refine controlled-environment approaches like pupillometry, or even potentially extrapolating those approaches to high-performance environments. Conditional on this working and being helpful, high-impact people in high-stakes situations should have all the resources desired to create high-trust environments.

The crypto section here didn't seem to adequately cover a likely root cause of the problem. 

The "dark side" of crypto is a dynamic called information asymmetry; in the case of Crypto, it's that wealthier traders are vastly superior at buying low and selling high, and the vast majority of traders are left unaware of how profoundly disadvantaged they are in what is increasingly a zero-sum game. Michael Lewis covered this concept extensively in Going Infinite, the Sam Bankman-Fried book.

This dynamic is highly visible to those in the crypto space (and to quant/econ/logic people in general who catch so much as a glimpse), and many elites in the industry, like Vitalik and Altman, saw it coming from a mile away and tried to find or fund technical solutions to the zero-sum problem, e.g. Vitalik's d/acc concept.

SBF also appeared to be trying to find technical solutions rather than just short-term profiteering, but his decision to commit theft points towards the hypothesis that this was superficial.

I can't tell whether there's any hope for crypto (I only have verified information on the bad parts, not on the good parts, if any are left), but if there is, it would have to come from elite reformers, who are exactly these sorts of people (engaged in races to the bottom to build reputation and outcompete rivals) and who each come with the risk of being only superficially committed.

Hence the popular idea of "cultural reform" seems like a roundabout and weak plan. EA needs to get better at doing the impossible on a hostile planet, including successfully sorting and sifting through accusation-space, power plays, and deception, and evaluating the motives of powerful people in order to determine safe levels of involvement and reciprocity. Not massive, untested, one-shot social revolutions with unpredictable and irreversible results.

There are people who are good at EA-related thinking and people who are less good at that.

There are people who are good at accumulating resume padding, and people who are less good at that.

Although these are correlated, there will still be plenty of people who are good at EA thinking, but bad at accumulating resume padding. You can think of these people as having fallen through the cracks of the system.

Advances in LLMs give me the impression that we're roughly 2-5 years out from most EA orgs becoming much better at correctly identifying and drawing talent from this pool, e.g. via higher-quality summaries of posts and notes, or by tracing the upstream origins of original ideas.

I'm less optimistic about solutions to conflict-theory/value-alignment issues, but advances in talent sourcing and measurement might give orgs more room to focus hiring and evaluation energy on character traits. If talent is easy to measure, there's less incentive to shrug and select candidates based on metrics that have historically correlated with talent, e.g. credentials.

Understanding malevolent high-performance individuals is a highly data-constrained area, even for high-performance individuals in general; for example, any so-called "survey of CEOs" should be treated with suspicion due to a high risk of intense nonresponse bias (e.g. many of the respondents hold the CEO title but are only answering the survey because they aren't actually doing most of the tasks undertaken by real CEOs).
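As a toy illustration of how severe that kind of bias can be (with every number invented purely for the example), compare the true population mean to the respondents-only mean:

```python
# Toy simulation of nonresponse bias in a hypothetical "survey of CEOs".
# Every number here is invented purely for illustration.
import random

random.seed(0)

population = []
for _ in range(1000):
    # Hypothetical assumption: "hands-on" CEOs work longer hours and rarely answer surveys.
    hands_on = random.random() < 0.7
    weekly_hours = random.gauss(70, 5) if hands_on else random.gauss(45, 5)
    responds = random.random() < (0.05 if hands_on else 0.60)
    population.append((weekly_hours, responds))

true_mean = sum(h for h, _ in population) / len(population)
respondents = [h for h, r in population if r]
survey_mean = sum(respondents) / len(respondents)

print(f"true mean weekly hours:     {true_mean:.1f}")   # roughly 62-63
print(f"surveyed mean weekly hours: {survey_mean:.1f}") # roughly 47-50, biased low
```

Each individual answer can be perfectly honest while the aggregate is still badly unrepresentative of the group the survey claims to describe.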

It's harder still to get data on individuals who are malevolent on top of being high-performance. I'm pretty optimistic about technical solutions like fMRI and lie detectors (lie detection systems today are probably much more powerful than at any point in their ~100-year history), especially when combined with genomics.

But data on high-performance individuals must be labelled based on the history and cases of revealed malevolent traits, and while data on non-high-performance individuals might be plentiful, it's hard to tell whether it's useful, because high-performance individuals are in such an OOD environment.

Ah, my bad, I did a ctrl + f for "sam"! Glad that it was nothing.

That's interesting, it still doesn't show anywhere on my end. I took this screenshot around 7:14 pm, maybe it's a screen size or aspect ratio thing.

Important to note: I archived the Washington Post homepage here and it showed Robinson's op-ed, but when I went to https://www.washingtonpost.com itself immediately afterward, at ~5:38 pm San Francisco time, it was nowhere to be found! (I was not signed in either time.)

[This comment is no longer endorsed by its author]

This entire thing is just another manifestation of academic dysfunction 

(philosophy professors using their skills and experience to think up justifications for their pre-existing lifestyle, instead of the epistemic pursuit that justified the emergence of professors in the first place).

It started with academia's reaction to Peter Singer's essay "Famine, Affluence, and Morality" in 1972, and hasn't changed much since. The status quo had already hardened, and the culture had become so territorial that whenever someone has a big idea, everyone with power (people who had already optimized for social status) has an allergic reaction to the idea's memetic spread rather than engaging with the epistemics behind the idea itself.

The Dark Forest Problem implies that people centralizing power might face strong incentives to hide, act through proxies, and/or disguise their centralized power as decentralized power. The question is to what extent high-power systems are dark forests vs. the usual quid-pro-quo networks and stable factions.

Changes in technology and in the applications of power, starting in the 1960s, imply that factions would not be stable and that iterative trust is less reliable, and therefore that a dark forest system was more likely to emerge.
