This is a special post for quick takes by DC. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
DC · 6mo

"Jon Wertheim: He made a mockery of crypto in the eyes of many. He's sort of taken away the credibility of effective altruism. How do you see him?

Michael Lewis: Everything you say is just true. And it–and it's more interesting than that. Every cause he sought to serve, he damaged. Every cause he sought to fight, he helped. He was a person who set out in life to maximize the consequences of his actions-- never mind the intent. And he had exactly the opposite effects of the ones he set out to have. So it looks to me like his life is a cruel joke."

😢

Pretty astonishing that Lewis answered "put that way, no" to "do you think he knowingly stole customer money?" Feels to me like evidence of the corrupting effect of getting special insider access to a super-rich and powerful person.

I don't understand your underlying model of human psychology. Sam Bankman-Fried was super-rich and powerful, but is now the kind of person no one would touch with the proverbial ten-foot pole. If the claim is that humans tend to like super-rich and powerful people even after they become disgraced, that seems false based on informal evidence.

In any case, from what I know about Bankman-Fried and his actions, the claim that he did not knowingly steal customer money doesn't strike me as obviously false, and it is in line with my sense that much of his behavior is explained by a combination of gross incompetence and pathological delusion.

"humans tend to like super-rich and powerful people even after they become disgraced, that seems false based on informal evidence"

I think you fail to empathize with certain aspects of the nature of power, particularly that a certain fraction of humans find cachet in the edgy and criminal. I am not that surprised Lewis may have been unduly affected by being in Sam's orbit and getting front-row seats to such a story. Though for all I know, maybe he has accurate insider info and Sam actually didn't knowingly steal money. ¯\_(ツ)_/¯

I was surprised too, and would be more so if not for an awareness of human fallibility and of what suckers we are for good stories. I don't doubt that some of what Lewis said in that interview might be true, but it is being massively distorted by his affinity and closeness to Sam.

I interpreted this as not such a negative for EA. Sad, for sure, but it puts the blame more squarely on SBF than on the movement, which isn't so terrible.

DC · 1y

I am a bit worried people are going to massively overcorrect on the FTX debacle in ways that don't really matter and that impose needless costs. We should make sure to get a clear picture of what happened, first and foremost.

DC · 1y

I disagree with you somewhat: now is the time for group annealing to take place, and I want to make a bunch of wild, reversible updates now, because otherwise I may lose the motivation, as will others. The 80/20 of the information is already here, and there are a bunch of decisions we can make to improve things within our circle of control. Something seriously wrong is going on, and it's better to take massive action in light of this and other patterns.

DC · 2mo

"X-Risk" Movement-Building Considered Probably Harmful

My instinct for a while now has been that it's probably really, really bad for the majority of the population to be aware of the meme of x-risk, or at least that such awareness does more harm than good. See climate doomerism. See (attempted) gain-of-function research at Wuhan. See asteroid deflection techniques that are dual-use with respect to asteroid weaponization, which is an orders-of-magnitude worse (though still far-off) risk than natural asteroid impact. See gain-of-function research at Anthropic, which, I don't know, maybe it's good, but it's kind of concerning, as are all the other resources provided to questionably benevolent AGI companies on the assumption that they will do good.

"X-risk" seems like a meme that will make people go crazy in ways that cause destruction; e.g., people still use the term "pivotal act" even when I'd claim it's been superseded by Critch's "pivotal process". I'm also worried about dark-triad elites or bureaucrats co-opting these memes for unnecessary power and control, a take from the e/acc vein of thought that I find to be their most sympathetic position, because it's probably correct when you think in the limit of social memetic momentum. Somewhat relatedly, I'm worried about EA becoming a collection of high-modernist midwittery as it mainstreams, watered down and unable to course-correct from co-options and simplifications.

Please message me if you want to riff on these topics.

DC · 3mo

One part of me is under the impression that more people should commit themselves to things that probably won't work out but would pay off massively if they do. The relevant conflict is that this means losing optionality and taking yourself out of the game for other purposes. We need more wild visions of the future that may work out if, e.g., AI doesn't. Playing to your outs is very related, but I'm thinking more generally: we do in fact need more visions based on different epistemics about how the world is going, and someone may necessarily have to adopt some provisional story of the world that will probably be wrong but is requisite for modeling any payoff their commitment may have. Real change requires real commitment. Also, most ways to help look like particular bets on building particular infrastructural upgrades, versus starting an AGI company that Solves Everything. On the flip side, we also need people holding onto their wealth and paying attention, ready to pounce on opportunities that may arise. And maybe you really should just get as close to the dynamo of technocapital acceleration as possible.

Thoughts on liability insurance for global catastrophic risks (either voluntary or mandatory) such as for biolabs or AGI companies? Do you find this to be a high-potential line of intervention?

DC · 3y

Would you be interested in a Cause Prioritization Newsletter? What would you want to read on it?

I'll sign up and read if it'd be good 😊

What I'd be most interested in is the curation of:

  1. New suggestions for possible top cause areas
  2. New (or less known) organizations or experts in the field
  3. Examples of new methodologies 
  4. and generally, interesting new research on prioritization between and within practically any EA-relevant causes.

Add to (3) new explanations of, or additions to, methodologies - e.g., I still haven't found anything substantial about the idea of adding something like 'urgency' to the ITN framework.

Definitely! And I'll raise with my general interest in thoughtful analyses of existing frameworks.

Is there some sort of follow-up?

DC · 1y

This seems like an important consideration with regard to the profusion of projects that people are starting in EA: https://twitter.com/robinhanson/status/1582476452141797378?s=20&t=pTbeJY5mXaf-54e0xxzz-A

People instinctively tend toward solutions that consist of adding something rather than subtracting something, even if the subtraction would be superior. https://psyarxiv.com/4jkvn/ - Rolf Degen

Could you elaborate?

Seems like it could be a case of trying to maintain some standard of high fidelity with EA ideas? That is, avoiding dilution of the community and of the term by not too eagerly labeling ideas as "EA".

DC · 3y

What does it mean for a human to properly orient their life around the Singularity, to update on upcoming accelerating technological change?

This is a hard problem I've grappled with for years.

It's similar to another question I think about, but with regard to downsides: if you knew Doom was in fact coming, in the form of World War 3 or whatever GCR is strong enough to upset civilization, then what should you do? Drastic action is required. For this, I think the solution is on the order of building an off-grid colony that can survive, assuming one can't prevent the Doom. It's still hard to act on that, though. What is it like to go against the grain in order to do that?

DC · 3y

Would you be interested in a video coworking group for EAs? Like a dedicated place where you can go to work for 4-8 hours a day and see familiar faces (vs. Focusmate, which is one hour, one-on-one, with different people each time). EAWork instead of WeWork.

DC · 4y

Someday, someone is going to eviscerate me on this forum, and I'm not sure how to feel about that. The prospect feels bad. I tentatively think I should just continue diving into not giving a fuck and inspiring others similarly, since one of my comparative advantages is that my social capital is not primarily tied up with fragile appearance-keeping for employment purposes. But it does mean I should not rely on my social capital with Ra-infested EA orgs.

I'm registering now that if you snipe me on here, I'm not gonna defensively respond. I'm not going to provide 20 citations on why I think I'm right. In fact, I'm going to double down on whatever it is I'm doing, because I anticipate in advance that the expected disvalue of discouraging myself due to really poor feedback on here is greater than the expected disvalue of unilaterally continuing something the people with Oxford PhDs think is bad.

This sounds very worrying; can you expand a bit more?

DC · 4y

I don't have much slack to respond, given I don't enjoy internet arguments, but if you think about the associated reference class of situations, you might note that a common problem is a lack of self-awareness that there is a problem. That is not the case with this dialogue, which should allay your worry somewhat.

The main point here, which this is vagueposting about, is that people on here will dismiss things rather quickly, especially if the dismissal comes from someone with a lot of status, in a pile-on way, without much overt reflection by the people who upvote such comments. I concluded from seeing this several times that at some point it will happen with a project of mine, and that I should be okay with that world, because this is not a place to get good project feedback, as far as I can tell. The real risk I am facing is that I would be dissuaded from the highest-impact projects by people who only believe in things vetted by a lot of academic-style reasoning and evidence that makes legible sense, at the cost of not being able to exploit secrets in the Thielian sense.

It's interesting that the Oxford PhDs are the ones you worry about! Me, I worry about the Bay Area Rat Pack.

DC · 4y

This is also valid! :)

Omg I can't believe that someone downvoted you for admitting your insecurities on your own shortform!! That's absolutely savage, I'm so sorry.

DC · 3y

I am seeking funding so I can work on my collective action project over the next year without worrying so much about money. If this interests you, you can book a call with me here. If you know nothing about me, one legible accomplishment of mine is creating the EA Focusmate group, which has 395 members as of this writing.

DC · 3y

What are ways we could get rid of the FDA?

(Flippant question inspired by the FDA waiting a month to discuss approval of coronavirus vaccines, and more generally by its dragging its feet during the pandemic, killing many people, in addition to its other prohibitions being net-negative for humanity. IMO.)

So, I take issue with the implication that the FDA's process for approving the covid vaccine actually delays rollout or causes a significant number of deaths. From my understanding, pharma companies have been ramping up production since they determined their vaccines probably work. They aren't sitting around waiting for FDA approval. Furthermore, I think the approval process is important for ensuring that the public has faith in the vaccine, and that it's actually safe and effective.

DC · 2y

Stop using "Longtermism" as a pointer to a particular political coalition with a particular cause.

Why do you think this?

DC · 2y

When I say this (in a highly compressed form, on the shortform, where that's okay), it gets somewhat downvoted; when Scott says it, or at least says something highly similar to my intent, it gets highly upvoted.

Having read both your sentence and Scott's article, I would not have connected the two as saying the same thing without this addition. Given that one sentence in isolation, I'm not able to tell what your intent was.

I think it would have been better if you'd expanded it into a few sentences of the form "Here's what we shouldn't do, this is why, and this is what we should do instead", rather than just the first part.

DC · 3y

This post claims the financial system could collapse due to Reasons. I am pretty skeptical but haven't looked at it closely; I'm signal-boosting it due to the low chance it's right. Can someone who knows more about finance analyze its claims?

https://www.reddit.com/r/GME/comments/mgucv2/the_everything_short/
