I'm surprised by all the disagree votes on a comment that is primarily a question.
Do all the people who disagreed think it's obvious whether Ben meant while he was working at AR or subsequently? If so, which one?
(I'm guessing the disagree votes were meant to register disagreement with my claim that it's relatively normal for interviewers / employers to tell candidates reasons a job might not be a good fit for them. Is that it, or something else?)
These people knew about one of the biggest financial frauds in U.S. history but didn't try to stop it
I think you're stretching here. Nowhere in the article does it suggest that the EA leaders actually knew about ongoing fraud.
It just says (as in the quotes you cited), that they'd been warned Sam was shady. That's very different from having actual knowledge of ongoing fraud. If the article wanted to make that claim, I think it would have been more direct about it.
Sam was fine with me telling prospective AR employees why I thought they shouldn’t join (and in fact I did do this)
Didn't quite follow this part. Is this referring to while you were still at AR or subsequently?
If it was while you were still working there, that seems pretty normal. Not every candidate should be sold on the job. Some should be encouraged not to join if it's not going to be a good fit for them. Why would this even be controversial with Sam? Or were you telling them not to join specifically because of criticisms you had of the CEO?
If it was subsequent, how do you know he was fine with it? What would he have done if he wasn't fine with it?
It was both.
And yeah, the article reports Sam telling someone that he would "destroy them", but I don't fully understand the threat model. I guess the idea is that Sam would tell a bunch of people that I was bad, and then I wouldn't be able to get a job or opportunities in EA?
I guess I don't know for sure that Sam never attempted this, but I can't recall evidence of it.
Your summary of the article's thesis doesn't seem right to me:
b. Even though those EAs (including myself) quit before FTX was founded and therefore could not have had any first-hand knowledge of this improper relationship between AR and FTX, they knew things (like information about Sam’s character) which would have enabled them to predict that something bad would happen
c. This information was passed on to “EA leaders”, who did not take enough preventative action and are therefore (partly) responsible for FTX’s collapse
I interpreted the article as argu...
The article reads to me like it's trying to get away with insinuating that EA leaders somehow knew about or at least suspected the fraud, based on what they were told by employees who had no such suspicions.
They take pains to emphasise the innocence of their sources, of course - I agree that they're painted as the heroes of the story (emphasis mine):
...None of the early Alameda employees who witnessed Bankman-Fried’s behavior years earlier say they anticipated this level of alleged criminal fraud. There was no “smoking gun,” as one put it, that revealed
FWIW, I think such a postmortem should start w/ the manner in which Sam left JS. As far as I'm aware, that was the first sign of any sketchiness, several months before the 2018 Alameda walkout.
Some characteristics apparent at the time:
I believe these were perfectly legal, but to me they look like the first signs that SBF was inclined to:
In the past two years, the technical alignment organisations which have received substantial funding include:
In context it sounds like you're saying that Open Phil funded Anthropic, but as far as I am aware that is simply not true.
I think maybe what you meant to say is that, "These orgs that have gotten substantial funding tend to have ties to Open Phil, whether OP was the funder or not." Might be worth editin...
I'll limit myself to one (multi-part) follow-up question for now —
Suppose someone in our community decides not to defer to the claimed "scientific consensus" on this issue (which I've seen claimed both ways), and looks into the matter themselves, and, for whatever reason, comes to the opposite conclusion that you do. What advice would you have for this person?
I think this is a relevant question because, based in part on comments and votes, I get the impression that a significant number of people in our community are in this position (maybe more so on the r...
I would have to think more on this to have a super confident reply. See also my point in response to Geoffrey Miller elsewhere here--there are lots of considerations at play.
One view I hold, though, is something like "the optimal amount of self-censorship, by which I mean not always saying things that you think are true/useful, in part because you're considering the [personal/community-level] social implications thereof, is non-zero." We can of course disagree on the precise amount/contexts for this, and sometimes it can go too far. And by definition...
Generalizing a lot, it seems that "normie EAs" (IMO correctly) see glaring problems with Bostrom's statement and want this incident to serve as a teachable moment
My view is that the rationalist community deeply values the virtues of epistemic integrity at all costs and of accurately expressing your opinion regardless of social acceptability.
The EA community is focused on approximately maximising consequentialist impact.
Rationalist EAs should recognise when these virtues of epistemic integrity and epistemic accuracy are in conflict with maximising consequentialist impact, whether via direct, unintended consequences of expressing your opinions, or via effects on EA's reputation.
Happy to comment on this, though I'll add a few caveats first:
- My views on priorities among the below are very unstable
- None of this is intended to imply/attribute malice or to demonize all rationalists ("many of my best friends/colleagues are rationalists"), or to imply that there aren't some upsides to the communities' overlap
- I am not sure what "institutional EA" should be doing about all this
- Since some of these are complex topics and ideally I'd want to cite lots of sources etc. in a detailed positive statement on them, I am using the "things to t...
I think there’s evidence that both apologies are insincere, albeit for different reasons (though that may not be clear).
You literally listed the timeframe as a reason (among others) to reject both apologies.
Here are your words again:
The fact that Bostrom's statement comes 26 years after the post in question does little to support the idea that the apology might be motivated by genuine remorse.
and:
...In my eyes, this timeframe really undermines the credibility of his previous apology, to the point of making it irrelevant. If you claim to reject views
How can both a 24 hour turnaround and a 26 year delay be evidence of an insincere apology? Where is the apology delay sweet spot in your eyes — one week later? A month later?
Maybe you think he should have apologized once a year every year on the anniversary of the email?
Sorry for snarky tone, but I feel that being in the business of nitpicking and rejecting apologies is quite a bad policy.
The fact that Bostrom's statement comes 26 years after the post in question does little to support the idea that the apology might be motivated by genuine remorse.
Did you miss the fact that he also apologized within 24 hours of the original email?
Nit: I was very explicitly asking why not sell, not suggesting a commitment to sell; I don't appreciate the rhetorical pivot to argue against a point I was not making.
I don't get this nit. Wasn't Oliver's comment straightforwardly answering your question, "Why not sell it now?" by giving an argument against selling it now?
How is that a pivot? He added the word "committing", but I don't see how that changes the substance. I think he was just emphasizing what would be lost if we sold now without waiting for more info. Which seems like a perfectly valid answer to the question you asked!
Copying over some comments I made on Twitter, in response to someone suggesting that Sam now appears to be "a sociopath who never gave a toss about EA or its ideals":
...He does seem pretty sociopathic, but it's still unclear to me whether he really cared about EA.
I think it's totally possible that he genuinely wanted to improve the world by funding EA causes, and is also a narcissistic liar who is unwilling to place limits on his own behavior. As Jess Riedel pointed out to me, it looks like Bill Gates ruthlessly exploited his monopoly in the 90s, and als...
Yeah, "is a sociopath" is such a deceptively binary way to state it. He seems to be on that spectrum to a certain degree - likely aggravated by stress and psychopharmacology. I'm skeptical of the easy-out narrative to dismissively pathologize here; I also think that in doing so we lose the chance to more critically examine that spectrum as it relates to EAs at large
there is a thing where if you say stuff that seems weird from an EA framework this can come across as cringe to some people, and I do hate a bunch of those cringe reactions, and I think it contributes a lot to conformity
Can you give an example (even a made up one) of the kind of thing you have in mind here? What kinds of things sound weird and cringy to someone operating within an EA framework, but are actually valuable from an EA perspective?
(Like, play-pumps-but-they-actually-work-this-time? Or some kind of crypto thing that looks like a scam but isn't? Or... what?)
My claims evoke cringe from some readers on this forum, I believe, so I can supply some examples:
The culture emphasizes analysis over practice, and it does not attract many of the leaders and builders that are critical for maximizing impact.
EA has a lot of rhetoric around openness to ideas and perspectives, but actual interaction with the EA universe can feel more like certain conclusions are encased in concrete.
It seems to me that there is some tension between these two criticisms — you want EA to focus less on analysis, but you also don't want us to be too wedded to our conclusions. So how are we supposed to change our minds about the conclusi...
Any tips on the 'how' of funding EA work at such think tanks?
Reach out to individual researchers and suggest they apply for grants (from SFF, LTFF, etc.)? Reach out as a funder with a specific proposal? Something else?
opinion which ... is mainly advocated by billionaires
Do you mean that most people advocating for techno-positive longtermist concern for x-risk are billionaires, or that most billionaires so advocate?
I don't think either claim is true (or even close to true).
It's also not the claim being made:
...minority opinion which happens to reinforce existing power relations and is mainly advocated by billionaires and those funded by [them]...
One reason to keep Tractability separate from Neglectedness is to distinguish between "% of problem solved / extra dollars from anyone" and "% of problem solved / extra dollars from you".
In theory, anybody's marginal dollar is just as good as anyone else's. But by making the distinction explicit, it forces you to consider where on the marginal utility curve we actually are. If you don't track how many other dollars have already been poured into solving a problem, you might be overly optimistic about how far the next dollar will go.
I think this may be close to the reason Holden(?) originally had in mind when he included neglectedness in the framework.
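To make that distinction concrete, here is a toy sketch in Python. The logarithmic-returns assumption is my own stylized illustration (not something claimed in the framework itself), but it shows why "% of problem solved per extra dollar from you" depends on how much has already been spent:

```python
import numpy as np

# Toy model (my own stylized assumption, not taken from the original
# framework write-ups): suppose the fraction of a problem solved grows
# roughly logarithmically in total dollars spent on it.
def fraction_solved(total_dollars, scale=1e6):
    return np.log1p(total_dollars / scale)

def marginal_fraction(existing_dollars, your_dollars, scale=1e6):
    # "% of problem solved / extra dollars from you" depends on how much
    # everyone else has already spent (neglectedness), not just on how
    # tractable the problem is in the abstract.
    return (fraction_solved(existing_dollars + your_dollars, scale)
            - fraction_solved(existing_dollars, scale))

# The same extra $10k buys far less progress once a problem is crowded:
print(marginal_fraction(1e5, 1e4))  # relatively neglected problem
print(marginal_fraction(1e8, 1e4))  # heavily funded problem
```

Under this toy model the two prints differ by orders of magnitude, which is the sense in which tracking neglectedness separately keeps you honest about where on the marginal utility curve you are.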
Note that Vitalik Buterin has also recently started promoting related ideas: Retroactive Public Goods Funding
Trendfollowing tends to perform worse in rapid drawdowns because it doesn't have time to rebalance
I wonder if it makes sense to rebalance more frequently when volatility (or trading volume) is high.
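A minimal sketch of what I have in mind, purely my own illustration (nothing from the post), assuming a date-indexed series of daily returns:

```python
import numpy as np
import pandas as pd

# Sketch (my own illustration): pick rebalance dates more densely when
# recent realized volatility is high, on the theory that rapid drawdowns
# are exactly when a trend strategy most needs to update its positions.
def rebalance_dates(returns: pd.Series,
                    base_days: int = 21,
                    fast_days: int = 5,
                    vol_window: int = 20,
                    vol_threshold: float = 0.25) -> list:
    realized_vol = returns.rolling(vol_window).std() * np.sqrt(252)  # annualized
    dates, last = [], None
    for date, vol in realized_vol.items():
        interval = fast_days if vol > vol_threshold else base_days
        if last is None or (date - last).days >= interval:
            dates.append(date)
            last = date
    return dates
```

The thresholds and windows here are arbitrary placeholders; the question is whether conditioning the rebalance schedule on volatility like this actually helps, or just adds trading costs.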
The AlphaArchitect funds are more expensive than Vanguard funds, but they're just as cheap after adjusting for factor exposure.
Do you happen to have the numbers available that you used for this calculation? Would be curious to see how you're doing the adjustment for factor exposure.
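In case it helps frame the question, here is one plausible way such an adjustment could be done. This is a guess at the method, not the post's actual calculation, and the example numbers at the end are hypothetical:

```python
import numpy as np

# One plausible way to adjust cost for factor exposure (the exact method
# behind the quoted claim isn't given here, so treat this as a sketch):
# estimate each fund's loading on the target factor by regressing its
# excess returns on the factor returns, then compare expense ratios per
# unit of factor exposure.
def cost_per_unit_exposure(expense_ratio, fund_excess_returns, factor_returns):
    fund = np.asarray(fund_excess_returns, dtype=float)
    factor = np.asarray(factor_returns, dtype=float)
    beta = np.polyfit(factor, fund, 1)[0]  # OLS slope = estimated factor loading
    return expense_ratio / beta

# Hypothetical example: a 0.49% fund with a 1.0 value loading costs 0.49%
# per unit of exposure, while a 0.05% fund with only a 0.10 loading costs
# 0.50% per unit, so the two can come out roughly even after adjustment.
```

If that is roughly the calculation, I'd still be curious which factor returns and sample period were used, since the estimated loadings can move the answer a lot.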
Looking at historical performance of those Alpha Architects funds (QVAL, etc), it looks like they all had big dips in March 2020 of around 25%, at the same time as the rest of the market.
And I've heard it claimed that assets in general tend to be more correlated during drawdowns.
If that's so, it seems to mitigate to some extent the value of holding uncorrelated assets, particularly in a portfolio with leverage, because it means your risk of margin call is not as low as you might otherwise think.
Have you looked into this issue of correlations during drawdowns, and do you think it changes the picture?
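For anyone who wants to check this on return data they have on hand, here is a rough sketch of the comparison I mean (column names and the drawdown threshold are placeholders, not anything from the post):

```python
import pandas as pd

# Sketch: compare pairwise correlations overall vs during market drawdowns.
# Assumes a DataFrame of periodic returns with one column for the market.
def correlations_in_drawdowns(returns: pd.DataFrame,
                              market_col: str = "market",
                              threshold: float = -0.10) -> dict:
    wealth = (1 + returns[market_col]).cumprod()
    drawdown = wealth / wealth.cummax() - 1
    in_drawdown = drawdown < threshold
    return {
        "overall": returns.corr(),
        "during_drawdowns": returns[in_drawdown].corr(),
    }
```

If the "during_drawdowns" matrix comes out much closer to one, the diversification benefit is smallest exactly when a leveraged portfolio is closest to a margin call, which is the worry above.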
Ah, good point! This was not already clear to me. (Though I do remember thinking about these things a bit back when Piketty's book came out.)
I just feel like I don't know how to think about this because I understand too little finance and economics
Okay, sounds like we're pretty much in the same boat here. If anyone else is able to chime in and enlighten us, please do so!
My superficial impression is that this phenomenon is somewhat surprising a priori, but that there isn't really a consensus for what explains it.
Hmm, my understanding is that the equity premium is the difference between equity returns and bond (treasury bill) returns. Does that tell us about the difference between equity returns and GDP growth?
A priori, would you expect both equities and treasuries to have returns that match GDP growth?
But if you delay the start of this whole process, you gain time in which you can earn above-average returns by e.g. investing into the stock market.
Shouldn't investing into the stock market be considered a source of average returns, by default? In the long run, the stock market grows at the same rate as GDP.
If you think you have some edge, that might be a reason to pick particular stocks (as I sometimes do) and expect returns above GDP growth.
But generically I don't think the stock market should be considered a source of above-average returns. Am I m...
You could make an argument that a certain kind of influence strictly decreases with time. So the hinge was at the Big Bang.
But, there (probably) weren't any agents around to control anything then, so maybe you say there was zero influence available at that time. Everything that happened was just being determined by low level forces and fields and particles (and no collections of those could be reasonably described as conscious agents).
Today, much of what happens (on Earth) is determined by conscious agents, so in some sense the total amount of extant influ...
Separately, I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early.
Do you have some other way of updating on the arrow of time? (It seems like the fact that we can influence future generations, but they can't influence us, is pretty significant, and should be factored into the argument somewhere.)
I wouldn't call that an update on finding ourselves early, but more like just an update on the structure of the population being sampled from.
And the current increase in hinginess seems unsustainable, in that the increase in hinginess we’ve seen so far leads to x-risk probabilities that lead to drastic reduction of the value of worlds that last for eg a millennium at current hinginess levels.
Didn't quite follow this part. Are you saying that if hinginess keeps going up (or stays at the current, high level), that implies a high level of x-risk as well, which means that, with enough time at that hinginess (and therefore x-risk) level, we'll wipe ourselves out; and therefore that we can't have sust...
Just a quick thought on this issue: Using Laplace's rule of succession (or any other similar prior) also requires picking a somewhat arbitrary start point.
Doesn't the uniform prior require picking an arbitrary start point and end point? If so, switching to a prior that only requires an arbitrary start point seems like an improvement, all else equal. (Though maybe still worth pointing out that all arbitrariness has not been eliminated, as you've done here.)
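For concreteness, the standard statement of the rule, with the arbitrary start point made explicit: if an event has occurred $s$ times in the $n$ periods observed since a chosen start point $t_0$, Laplace's rule of succession gives

$$P(\text{occurrence in the next period}) = \frac{s + 1}{n + 2},$$

so the estimate inherits the arbitrariness of $t_0$ through $n$, whereas the uniform prior needs both an arbitrary start point and an arbitrary end point.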
The Nobel Prize comes with a million dollars (9,000,000 SEK). 50k doesn't seem like that much, in comparison.
Another Karnofsky series that I thought was important (and perhaps doesn't fit anywhere else) is his posts on The Straw Ratio.
ballistic ones are faster, but reach Mach 20 and similar speeds outside of the atmosphere
This seems notable, since there is no sound w/o atmosphere. So perhaps ballistic missiles never actually engage in hypersonic flight, despite reaching speeds that would be hypersonic if in the atmosphere? Though I would be surprised if they're reaching Mach 20 at a high altitude and then not still going super fast (above Mach 5) on the way down.
according to Thomas P. Christie (DoD director of Operational Test and Evaluation from 2001–2005) current defense systems “haven’t worked with any degree of confidence”.[12] A major unsolved problem is that credible decoys are apparently “trivially easy” to build, so much so that during missile defense tests, balloon decoys are made larger than warheads--which is not something a real adversary would do. Even then, tests fail 50% of the time.
I didn't follow this. What are the decoys? Are they made by the attacki...
Thanks! Just read it.
I think there's a key piece of your thinking that I don't quite understand / disagree with, and it's the idea that normativity is irreducible.
I think I follow you that if normativity were irreducible, then it wouldn't be a good candidate for abandonment or revision. But that seems almost like begging the question. I don't understand why it's irreducible.
Suppose normativity is not actually one thing, but is a jumble of 15 overlapping things that sometimes come apart. This doesn't seem like it poses any...
Don't Make Things Worse: If a decision would definitely make things worse, then taking that decision is not rational.
Don't Commit to a Policy That In the Future Will Sometimes Make Things Worse: It is not rational to commit to a policy that, in the future, will sometimes output decisions that definitely make things worse.
...
One could argue that R_CDT sympathizers don't actually have much stronger intuitions regarding the first principle than the second -- i.e. that their intuitions aren't actually very "targeted" on the first o...
There may be a pretty different argument here, which you have in mind. I at least don't see it yet though.
Perhaps the argument is something like:
both R_UDT and R_CDT imply that the decision to commit yourself to a two-boxing policy at the start of the game would be rational
That should be "a one-boxing policy", right?
Thanks! This is helpful.
It seems like the following general situation is pretty common: Someone is initially inclined to think that anything with property P will also have properties Q1 and Q2. But then they realize that properties Q1 and Q2 are inconsistent with one another.
One possible reaction to this situation is to conclude that nothing actually has property P. Maybe the idea of property P isn't even conceptually coherent and we should stop talking about it (while continuing to independently discuss properties Q1 and Q2). Often the more natural reactio...
Sorry if I'm missing something (I've only skimmed the paper), but is the "mathematical framework" just the idea of integrating value over time?
I'm quite surprised to see this idea presented as new. Isn't this idea very obvious? Haven't we been thinking this way all along?
Like, how else could you possibly think of the value of the future of humanity? (The other mathematically s...
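For what it's worth, the "obvious" version I have in mind is just something like (my notation, not necessarily the paper's):

$$V = \int_{t_0}^{T} N(t)\,\bar{v}(t)\,dt,$$

where $N(t)$ is the number of moral patients alive at time $t$ and $\bar{v}(t)$ is their average well-being, with $T$ possibly astronomically far off. The question is whether the paper's framework adds something beyond this.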