trevor1

227 karma · 263 comments

One of the things I found extraordinary about MrBeast videos was how viewers seem to come for the extreme content in the thumbnail, and then stay to see the details of how exactly people succeed at doing extraordinary things.

On an economic basis, it looks like this scales really well: find ways to do really big things (unambiguously net positive) that you can also bundle into an entertainment product that inspires other people. I don't know what the median viewer would think of this, but I found it really vivid to see the videos "ramp up", with more and more people getting helped per minute of video.

Have you thought of other ways you could set up something where the complexity of the situation or the number of people helped gets "ramped up" over the course of the video, or where the subjects of the story find increasingly extraordinary ways to overcome increasingly extraordinary challenges? Showing people shine brightly, being their best selves and then winning for it, seems to be a common theme for the channel.

One thing I really liked about it was the title: "situational awareness". I think that phrase is very well-put given the situation, and I got pretty good results from it in conversations which were about AI but not Leopold's paper.

I also found "does this stem from the pursuit of situational awareness" or "how can I further improve situational awareness" to be helpful questions to ask myself every now and then, but I haven't been trying this for very long and might get tired of it eventually (or maybe they will become reflexive automatic instincts which stick and activate when I'd want them to activate; we'll see).

I might be mistaken about this, but I thought there was a possibility that Khrushchev and others anticipated that leaders and influential people in the US, the USSR, and elsewhere would interpret space race victories as a costly signal of strategic space superiority, while being less aggressive and less disruptive to diplomacy than developing and testing more directly military-related technology such as Starfish Prime. Separately, there was a possibility that this anticipation was a correct prediction about what stakeholders around the world would conclude about the relative power of the US and USSR, including the third world and "allied" countries, which often contained hawk and dove factions, regime changes, and so on.

Momentum behind the space race itself had died out by 1975, possibly as part of the trend described in the 2003 paper "The Nuclear Taboo", which argued that a strong norm against nuclear weapon use developed over time. During the Korean War in 1950, American generals were friendly towards the idea of using nuclear weapons to break the stalemate but ultimately decided not to; they were substantially less friendly towards nuclear weapon use by the time the Vietnam War started, and have since considered it progressively more unthinkable (the early phases of the Ukraine War in 2022, particularly the period leading up to the invasion, might have been an example of backsliding).

At some point in the 90s or the 00s, the "whole of person" concept became popular in the US Natsec community for security clearance matters.

It distinguishes between a surface-level vibe from a person and trying to understand the whole person. The surface-level vibe is literally taking the worst of a person out of context, whereas the whole-person concept is making any effort at all to evaluate the person and the odds that they're good to work with, and in what areas. Each subject has their own cost-benefit analysis in the context of the different work they might do, and more flexible people (e.g. younger people) and weirder people will probably have cost-benefit analyses that change somewhat over time.

In environments where evaluators are incompetent, lack the resources needed to evaluate each person, or believe that humans can't be evaluated, there's a reasonable justification to rule people out without making an effort to optimize.

Otherwise, evaluators should strive to make predictions and minimize the gap between their predictions of whether a subject will cause harm again and the reality that comes to pass. For example, that means putting in any effort at all to distinguish between:

- harm caused by mental health issues,
- mistakes caused by unpreventable ignorance (e.g. the PauseAI movement),
- mistakes caused by ignorance that should have been preventable,
- harm caused by malice correctly attributed to the subject,
- harm caused by someone spoofing the point of origin, and
- harm caused by a hostile individual, team, or force covertly using SOTA divide-and-conquer tactics to disrupt or sow discord in an entire org, movement, or vulnerable clique.

See conflict vs. mistake theory.
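One simple way to make "minimize the gap between predictions and reality" concrete is a proper scoring rule like the Brier score. This is my own illustrative sketch, not something from the whole-person guidance itself:

```python
def brier_score(predictions, outcomes):
    """Mean squared gap between forecast probabilities (0..1)
    and what actually happened (0 or 1). Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# An evaluator who hedges everything at 50% scores 0.25;
# confident, mostly-correct calls score much closer to 0.
print(brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0]))  # 0.25
print(brier_score([0.9, 0.1, 0.8, 0.2], [1, 0, 1, 0]))  # 0.025
```

The point of a proper scoring rule is that an evaluator can't game it by hedging: the score rewards actually trying to distinguish the cases above rather than ruling everyone out by default.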

Thanks for making a post for this! Coincidentally (probably both causally downstream of something) I had just watched part of the EAG talk and was like "wow, this is surprisingly helpful, I really wish I had access to something like this back when I was in uni, so I could have at least tried to think seriously about plotting a course around the invisible helicopter blades, instead of what I actually did, which was avoiding the whole area with a ten-foot pole".

I'm pretty glad that it's an 8-minute post now instead of just a ~1-hour video.

My bad- I should have looked into Nvidia more before commenting.

Your model looked like something that people were supposed to poke holes in, and I realized midway through my comment that what I had was actually a minor nitpick plus some interesting dynamics rather than a significant flaw (e.g. even if true, it only puts a small dent in the OOM focus).

Stock prices represent risk and information asymmetry, not just the P/E ratio.

The big 5 tech companies (Google, Amazon, Microsoft, Facebook, Apple) primarily do data analysis and software (with Apple as a partial exception, since its thread to hang on is iPhone marketing). That puts each of the five at the cutting edge of everything that high-level data analysis is needed for, which is a very diverse game where each of the diverse elements adds a ton of risk (e.g. major hacks, data poisoning, military/geopolitical applications, lightning-quick and historically unprecedented corporate espionage strategies, etc.).

The big 5 are more like the Wild West: everything that's happening is historically unprecedented, and they could easily become the big 4, since a major event (e.g. a big data leak) could cause a staff exodus or a software exodus that allows the others to subsume most of their market share. Imagine how LLMs affected Google's moat for search, except LLMs are just one example of historical unprecedentedness (one that EA happens to focus on far more closely, relative to other advancements, than Wall Street or DC), and most of the big 5 are vulnerable in ways as brutal and historically unprecedented as the emergence of LLMs.

Nvidia, on the other hand, is exclusively hardware and has a very strong moat (obviously semiconductor supply chains are a big deal here). This reduces risk premiums substantially, and I think it's reasonably likely that Nvidia is substantially lower risk per dollar than holding stock diversified across all 5 of the big 5 tech companies combined. I think the big 5 set a precedent that the companies making up the big leagues are each very high risk, including in aggregate, and Nvidia's unusual degree of stability, while also emerging onto the big-leagues stage without diversifying or getting great access to secure data, might potentially shatter the high-risk big-tech investment paradigm. This could cause people's P/E ratio for Nvidia to be maybe twice or even three times higher than it should be, if they depend heavily on comparing Nvidia specifically to Google, Amazon, Facebook, Microsoft, and Apple. This is also a qualitative risk that can spiral into other effects, e.g. a qualitatively different kind of bubble risk than what we've seen from the big 5 over the last ~15 years of the post-2008 paradigm where data analysis is important and respected.

tl;dr Nvidia's stable hardware base might make comparisons to the 5 similarly-sized tech companies unhelpful, as those companies probably have risk premiums that are much higher and more difficult for investors to calculate.
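To make the risk-premium point concrete, here's a toy sketch using the standard constant-growth valuation identity (justified P/E ≈ 1 / (discount rate − growth)). The numbers are purely illustrative, not estimates for Nvidia or any real company:

```python
def justified_pe(risk_free, risk_premium, growth):
    """Constant-growth (Gordon-style) justified P/E:
    price = earnings / (r - g), so P/E = 1 / (r - g),
    where r = risk-free rate + risk premium."""
    return 1.0 / (risk_free + risk_premium - growth)

# Illustrative only: same risk-free rate (4%) and growth (3%),
# but a big-tech-style 5% premium vs. a hypothetical 2% premium.
pe_risky = justified_pe(0.04, 0.05, 0.03)   # ~16.7
pe_stable = justified_pe(0.04, 0.02, 0.03)  # ~33.3
print(pe_stable / pe_risky)  # ~2.0: halving the spread doubles the justified P/E
```

This is why mispricing the risk premium (by anchoring on the big 5's risk profile) can plausibly move a "fair" P/E by a factor of two or three.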

Ah, I see; for years I've been pretty pessimistic about the ability of people to fool such systems (namely voice-only lie detectors trained on large numbers of retroactively-labelled audio recordings of honest and dishonest statements in the natural environments of different kinds of people), but now that I've read more about human genetic diversity, that might have been typical mind fallacy on my part; people in the top 1% of charisma and body-language self-control tend to be the ones who originally ended up in high-performance, high-stakes environments as those environments formed (just as innovative institutions form around high-intelligence, high-output folk).

I can definitely see the best data coming from a small fraction of the human body's outputs, such as pupil dilation; most of the body's outputs should yield Bayesian updates, but that doesn't change the fact that some sources will be wildly more consistent and reliable than others.
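As a toy illustration of why channel reliability matters so much, here's a single Bayes update on a binary signal. This is my own sketch with made-up numbers, not real pupillometry figures:

```python
def posterior(prior, true_pos_rate, false_pos_rate):
    """P(deceptive | signal fired), via Bayes' rule on a binary signal."""
    p_signal = true_pos_rate * prior + false_pos_rate * (1 - prior)
    return true_pos_rate * prior / p_signal

# Hypothetical numbers: a reliable channel (e.g. pupillometry-grade)
# vs. a noisy one (e.g. ambiguous body language), same 50% prior.
print(posterior(0.5, 0.90, 0.10))  # ~0.90
print(posterior(0.5, 0.55, 0.45))  # ~0.55
```

The reliable channel moves the posterior a long way from the prior; the noisy one barely moves it at all, which is the sense in which some bodily outputs are "wildly more consistent and reliable" as evidence.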

Why are you pessimistic about eyetracking and body language? Although those might not be as helpful in experimental contexts, they're much less invasive per unit time, and people in high-risk environments can agree to have specific delineated periods of eyetracking and body-language data collected while in the high-performance environments themselves, such as working with actual models and code (i.e. not OOD environments like a testing room).

AFAIK analysts might find uses for this data later on, e.g. observing differences in patterns of change over time based on the ultimate emergence of high-risk traits; comparing people to others who later developed high-risk traits (comparing people to large amounts of data from other people could also be used to detect positive traits from a distance); spotting the exact period where high-risk traits developed and cross-referencing that data with the testimony of a high-risk person who voluntarily wants other high-risk people to be easier to detect; or, depending on advances in data analysis, using that data to help refine controlled-environment approaches like pupillometry, or even potentially extrapolating those approaches to high-performance environments. Conditional on this working and being helpful, high-impact people in high-stakes situations should have all the resources desired to create high-trust environments.

The crypto section here didn't seem to adequately cover a likely root cause of the problem. 

The "dark side" of crypto is a dynamic called information asymmetry; in the case of crypto, it's that wealthier traders are vastly superior at buying low and selling high, and the vast majority of traders are left unaware of how profoundly disadvantaged they are in what is increasingly a zero-sum game. Michael Lewis covered this concept extensively in Going Infinite, the Sam Bankman-Fried book.

This dynamic is highly visible to those in the crypto space (and to quant/econ/logic people in general who catch so much as a glimpse), and many elites in the industry, like Vitalik and Altman, saw it coming from a mile away and tried to find/fund technical solutions to the zero-sum problem, e.g. Vitalik's d/acc concept.

SBF also appeared to be trying to find technical solutions rather than just short-term profiteering, but his decision to commit theft points towards the hypothesis that this was superficial.

I can't tell if there's any hope for crypto (I only have verified information on the bad parts, not the good parts, if there are any left), but if there is, it would have to come from elite reformers: the same types of people who race to the bottom for reputation and to outcompete rivals, each of whom comes with the risk of being only superficially committed.

Hence, the popular idea of "cultural reform" seems like a roundabout and weak plan. EA needs to get better at doing the impossible on a hostile planet, including successfully sorting/sifting through accusation-space, power plays, and deception, and evaluating the motives of powerful people in order to determine safe levels of involvement and reciprocity, not massive untested one-shot social revolutions with unpredictable and irreversible results.
