All of Czynski's Comments + Replies

The 'stylistic choices' were themselves evidence of wrongdoing, and most of their evidence against the claims both misstated the claims it purported to refute and provided further (unwitting?) evidence of wrongdoing.

If you have time, can you provide some examples of what you saw as evidence of wrongdoing? 

I feel that much of what I saw from my limited engagement was a valid refutation of the claims made. For instance, see the examples given in the post above.

There were responses to new claims, and I saw those as making it clear that other claims, made separately from Ben's post, were also false.

I did see some cases where a refutation and claim didn't exactly match, but I didn't register that as wrongdoing (which might be due to bias or n... (read more)

Also, the only known raids on the corporate assets happened post-crash and therefore long post-audit. Under the espoused worldview of the management, everything before that was plausibly 'good for the company', in that it benefited the company in raw EV across all possible worlds, with no discount rate for higher gains or for massive losses.

That wasn't the question. The question was why any company would go to less-than-maximally-trustworthy auditors.

And it makes you wonder why companies would go to these known-worse auditors - especially if they can afford the best auditing, as FTX should have been able to - if they don't have something to hide.

Complying with an audit is expensive, and not just in money.

A thorough audit in progress is going to disrupt the workflow of all or most of your company in order to look at their daily operations more closely. This reduces productivity and slows the ability to change anything, even if nothing improper is happening.

A thorough ... (read more)

1
Jason
1y
Corporations have their own legal personhood; it's difficult to see how the corporation's interest could be served by such a shoddy audit that failed to detect apparently unsophisticated, and certainly massive, raids on the corporate fisc by insiders.

Simple: It's another meta thing. Those have a very poor track record and seem to require extraordinary competence to be net-positive.

That's literally just the same thing I said with more words. They don't have reasons to think finance is net negative, it just is polluted with money and therefore bad.

Those two are perfectly good examples. They did. Every successful startup does something approximately that bad, on the way to the top.

Because finance people are bad people and therefore anything associated with them is bad. Or for a slightly larger chain, because money is bad, people who spend their lives seeking money are therefore bad, and anything associated with those people is bad.

Don't overthink this. It doesn't have to make sense, there just have to be a lot of people who think it does.

3
lastmistborn
1y
This seems counterproductively uncharitable. Wall Street in particular, and finance in general, is perceived by many as an industry that is overall harmful and of negative value, and participating in it is seen as contributing to that harm while producing very little added value for those outside of high-earning elite groups. It makes a lot of sense to me that someone who thinks the finance industry is, on net, harmful will see ETG in finance as a form of ends-justify-the-means reasoning, without having to resort to reducing it to a caricature of "money bad = Wall Street bad = ETG bad, it doesn't have to make sense".

Why wouldn't it be controversial? It suggests something other than people acting according to their personal pet projects, ideologies, and social affiliations, and proposes a way by which those can be compared and found wanting. The fact that it also comes with significantly more demandingness than anything else just makes it a stronger implicit attack.

Most people will read EA as a claim to the moral high ground, regardless of how nicely it's presented to them. Largely because it basically is one. Implicit in all claims to the moral high ground - even if i... (read more)

1
Aaron_Scher
1y
I like this comment and think it answers the question at the right level of analysis. To try and summarize it back: EA’s big assumption is that you should purchase utilons, rather than fuzzies, with charity. This is very different from how many people think about the world and their relationship to charity. To claim that somebody’s way of “doing good” is not as good as they think is often interpreted by them as an attack on their character and identity, thus met with emotional defensiveness and counterattack. EA ideas aim to change how people act and think (and for some core parts of their identity); such pressure is by default met with resistance.

No, you're thinking about it entirely wrong. If everyone who did something analogous to Alameda 2018 was shunned, there probably wouldn't be any billionaire EA donors at all. It was probably worse than most startups, but not remarkably worse.  It was definitely not a reliable indicator that a fraud or scandal was coming down the road.

3
Greg_Colbourn
1y
Dustin Moskovitz and Jaan Tallinn were already EA ~billionaire donors well before 2018. They haven't done anything analogous to what SBF/FTX/Alameda did. What examples are you thinking of?

C, Neither. The obvious interpretation is exactly what he said - people ultimately don't care whether you maintained their standard of 'ethical' as long as you win. Which means that as far as talking about other people's ethics, it's all PR, regardless of how ethical you're being by your own standards.

 (I basically concur. Success earns massive amounts of social capital, and that social capital can buy a whole lot of forgiveness. Whether it also comes with literal capital which literally buys forgiveness is almost immaterial next to that.)

So he's said... (read more)

Again, that's orthogonal to the actual problems that surfaced.

5
Greg_Colbourn
1y
I wouldn't say orthogonal, more upstream. If SBF had been shunned from the community in 2018, would we be in this situation now? Sure, he might still have committed massive fraud with the ends of gaining wealth and influence, but the focus would be on the Democrats, or whatever other group became his main affiliation.

Yeah, still not seeing much good faith. You're still ahead of AutismCapital, though, which is 100% bad faith 100% of the time. If you believe a word it says I have a bridge to sell you.

5
Stuart Buck
1y
Is this Sam in disguise? You're literally the only person in existence who seems to think it was somehow unfair to be suspicious (and correctly so!) of SBF for having hired a chief compliance officer with a long history of fraud, and of his pattern of trying to buy up other people's frauds/scams.

Strongly disagree. That criticism is mostly orthogonal to the actual problems that surfaced. Conflicts of interest were not the problem here.

-4
Greg_Colbourn
1y
I'd regard incentive to discount highly immoral business practices (e.g. what happened with Alameda in 2018) as stemming from a conflict of interest (i.e. interest 1: promote integrity in EA; interest 2: get lots of money from SBF for EA. These were in conflict!)

Most of that isn't even clearly bad, and I find it hard to see good faith here. 

Your criticism of Binance amounts to "it's cryptocurrency". Everyone knows crypto can be used to facilitate money laundering; this was, for Bitcoin, basically the whole point. The same goes for the criticism of Ponzi schemes: there were literally dozens of ICOs for things that were overtly labeled as Ponzis - Ponzicoin was one of the more successful ones, because it had a good name. Many people walked into this with eyes open; many others didn't, but they were warned, they just di... (read more)

6
Stuart Buck
1y
The only flaw in my earlier comment is that I was too charitable towards SBF in suggesting that there might be some plausible excuse for the multiple red flags I noticed. 
9
Stuart Buck
1y
My criticism of Binance was not "it's cryptocurrency." My criticism of Binance was that at the very time that SBF allied with Binance, it was a "hub for hackers, fraudsters and drug traffickers." Apparently your defense of SBF is that "everyone knows" crypto is good for little else . . . but perhaps if someone enters a field that is mostly or entirely occupied by criminal activity, that isn't actually an excuse?  As for backstopping other scams and frauds, that isn't a way to make sure that the "crypto sector stays healthy" (barring very unusual definitions of the word "healthy"), and in actuality, we're now seeing evidence that FTX was just trying to extract assets from other companies in a desperate attempt to shore up their own malfeasance and fraud. https://twitter.com/AutismCapital/status/1591569275642589184 

The 'unambitious' thing you ask the AI to do would create worldwide political change. It is absurd to think that it wouldn't. Even ordinary technological change creates worldwide political change at that scale!

And an AGI having that little impact is also not plausible; if that's all you do, the second mover -- and possibly the third, fourth, fifth, if everyone moves slow -- spits out an AGI and flips the table, because you can't be that unambitious and still block other AGIs from performing pivotal acts, and even if you want to think small, the other actor... (read more)

2
Robert Kralisch
2y
I believe that you are too quick to label this story as absurd. Ordinary technology does not have the capacity to correct towards explicitly smaller changes that still satisfy the objective. If the AGI wants to prevent wars while minimally disturbing worldwide politics, I find it plausible that it would succeed. Similarly, just because an AGI has very little visible impact does not mean that it isn't effectively in control. For a true AGI, it should be trivial to interrupt the second mover without any great upheaval. It should be able to suppress other AGIs from coming into existence without causing too much of a stir. I do somewhat agree with your reservations, but I find that your way of addressing them seems uncharitable (i.e. "at best completely immoral").

Again, that would produce moderate-to-major disruptions in geopolitics. An eight-year first doubling with any recursive self-improvement at work is also pretty implausible, because RSI implies more discontinuity than that, but that doesn't matter here: even that scenario would cause massive disruption.

 If humans totally solve alignment, we'd probably ask our AGI to take us to Eutopia slowly, allowing us to savor the improvement and adjust to the changes along the way, rather than leaping all the way to the destination in one terrifying lurch.  

Directly conflicts with the geopolitical requirements. Also not compatible with the 'sector by sector' scope of economic impact - an AGI would be revolutionizing everything at once, and the only question would be whether it was merely flipping the figurative table or going directly to interpolating every... (read more)

7
Jackson Wagner
2y
See my response to kokotajlod to maybe get a better picture of where I am coming from and how I am thinking about the contest.

"Directly conflicts with the geopolitical requirements." -- How would asking the AGI to take it slow conflict with the geopolitical requirements? Imagine that I invent a perfectly aligned superintelligence tomorrow in my spare time, and I say to it, "Okay AGI, I don't want things to feel too crazy, so for starters, how about you give humanity 15% GDP growth for the next 30 years? (Perhaps by leaking designs for new technologies discreetly online.) And make sure to use your super-persuasion to manipulate public sentiment a bit so that nobody gets into any big wars." That would be 5x the current rate of worldwide economic growth, which would probably feel like "transforming the economy sector by sector" to most normal people. I think that world would perfectly satisfy the contest rules. The only problems I can see are:

* The key part of my story is not very realistic or detailed. (How do I end up with a world-dominating AGI perfectly under my control by tomorrow?)
* I asked my AGI to do something that you would consider unambitious, and maybe immoral. You'd rather I command my genie to make changes somewhere on the spectrum from "merely flipping the figurative table" to dissolving the entire physical world and reconfiguring it into computronium. But that's just a personal preference of yours -- just because I've invented an extremely powerful AGI doesn't mean I can't ask it to do boring ordinary things like merely curing cancer instead of solving immortality.

I agree with you that there's a spectrum of different things that can be meant by "honesty", sliding from "technically accurate statements which fail to convey the general impression" to "correctly conveying the general impression but giving vague or misleading statements", and that in some cases the thing we're trying to describe is so strange that no matter where we go al

"Necessarily entails singularity or catastrophe", while definitely correct, is a substantially stronger statement than I made. To violate the stated terms of the contest, an AGI must only violate "transforming the world sector by sector". An AGI would not transform things gradually and limited to specific portions of the economy. It would be broad-spectrum and immediate. There would be narrow sectors which were rendered immediately unrecognizable and virtually every sector would be transformed drastically by five years in, and almost certainly by two years... (read more)

Even a slow takeoff! If there is recursive self-improvement at work at all, on any scale, you wouldn't see anything like this. You'd see moderate-to-major disruptions in geopolitics, and many or all technology sectors being revolutionized simultaneously.

This scenario is "no takeoff at all" - advancement happening only at the speed of economic growth.

4
Rohin Shah
2y
Sorry for the late reply. You seem to have an unusual definition of slow takeoff. If I take on the definition in this post (probably the most influential post by a proponent of slow / continuous takeoff), there's supposed to be an 8-year doubling before a 2-year doubling. An 8-year doubling corresponds to an average of 9% growth each year (roughly double the current amount). Let's say that we actually reach the 9% growth halfway through that doubling; then there are 4 years before the first 2-year doubling even starts. If you define AGI to be the AI technology that's around at 9% growth (which, let's recall, is doubling the growth rate, so it's quite powerful), then there are > 6 years left until the singularity (4 years from the rest of the 8-year doubling, 2 years from the first 2-year doubling, which in turn happens before the start of the first 0.5 year doubling, which in turn is before the singularity). Presumably you just think slow takeoff of this form is completely implausible, but I'd summarize that as either "Czynski is very confident in fast / discontinuous takeoff" or "Czynski uses definitions that are different from the ones other people are using".
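To make the doubling-time arithmetic in the comment above easy to check, here is a minimal Python sketch. It is an illustration added for clarity, not part of the original thread; the function name and the constant-exponential-growth assumption are mine.

# A minimal sketch (added illustration, not from the thread): checking the
# doubling-time arithmetic under a constant exponential-growth assumption.
def annual_growth_rate(doubling_time_years: float) -> float:
    """Annual growth rate implied by a given doubling time."""
    return 2 ** (1 / doubling_time_years) - 1

print(f"8-year doubling -> {annual_growth_rate(8):.1%} per year")  # ~9.1%, i.e. the "roughly 9%" above

# If the 9% growth rate is reached halfway through the 8-year doubling, the
# remaining time before the first 0.5-year doubling begins is at least:
years_left = 8 / 2 + 2   # rest of the 8-year doubling + the full 2-year doubling
print(f"> {years_left:.0f} years")  # matches the "> 6 years until the singularity" in the comment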

A positive vision which is false is a lie. No vision meeting the contest constraints is achievable, or even desirable as a post-AGI target. There might be some good fiction that comes out of this, but it will be as unrealistic as Vinge's Zones of Thought setting. Using it in messaging would be at best dishonest, and, worse, probably self-deceptive.

These goals are not good goals.

  • Encourage people to start thinking about the future in more positive terms.

It is actively harmful for people to start thinking about the future in more positive terms, if those terms are misleading and unrealistic. The contest ground rules frame "positive terms" as being familiar, not just good in the abstract - they cannot be good but scary, as any true good outcome must be. See Eutopia is Scary:

We, in our time, think our life has improved in the last two or three hundred years.  Ben Franklin is probably smart and forwa

... (read more)
9
Jackson Wagner
2y
The contest is only about describing 2045, not necessarily a radically alien far-future "Eutopia" end state of human civilization. If humans totally solve alignment, we'd probably ask our AGI to take us to Eutopia slowly, allowing us to savor the improvement and adjust to the changes along the way, rather than leaping all the way to the destination in one terrifying lurch. So I'm thinking there are probably some good ways to answer this prompt.

But let's engage with the harder question of describing a full Eutopia. If Eutopia is truly good, then surely there must be honest ways of describing it that express why it is good and desirable, even if Eutopia is also scary. Otherwise you'd be left with three options that all seem immoral:

1. Silent elitism -- the rabble will never understand Eutopia, so we simply won't tell them where we're taking humanity. They'll thank us later, when we get there and they realize it's good.
2. Pure propaganda -- instead of trying to make a description that's an honest attempt at translating a strange future into something that ordinary people can understand, we give up all attempts at honesty and just make up a nice-sounding future with no resemblance to the Eutopia which is secretly our true destination.
3. Doomed self-defeating attempts at honesty -- if you tell such a scary story about "Eutopia" that nobody would want to live there, then people will react badly to it and they'll demand to be steered somewhere else. Because of your dedication to always emphasizing the full horror and incomprehensibility, your attempts to persuade people of Eutopia will only serve to move us farther away from it.

It's impossible to imagine infinity, but if you're trying to explain how big infinity is, surely it's better to say "it's like the number of stars in the night sky", or "it's like the number of drops of water in the ocean", than to say "it's like the number of apples you can fit in a bucket". Similarly, the closest possible descr

This project will give people an unrealistically familiar and tame picture of the future.  Eutopia is Scary, and the most unrealistic view of the future is not the dystopia, nor the utopia, but the one which looks normal.[1] The contest ground rules require, if not in so many words, that all submissions look normal. Anything which obeys these ground rules is wrong. Implausible, unattainable, dangerously misleading, bad overconfident reckless arrogant wrong bad

This is harmful, not helpful; it is damaging, not improving, the risk messaging;... (read more)

4
aaguirre
2y
There's obviously lots I disagree with here, but at bottom, I simply don't think it's the case that economically transformative AI necessarily entails singularity or catastrophe within 5 years in any plausible world: there are lots of imaginable scenarios compatible with the ground rules set for this exercise, and I think assigning accurate probabilities amongst them and relative to others is very, very difficult.
2
hrosspet
2y
That something is very unlikely doesn't mean it's unimaginable. The goal of imagining and exploring such unlikely scenarios is that with a positive vision we can at least attempt to make them more likely. That's, I think, the main motivation for FLI to organize this contest. I agree, though, that the base assumptions stated in the contest make it hard to come up with a realistic image.

Very few people actually want to wirehead. Pleasure center stimulation is not the primary thing we value. The broader point there is the complexity-of-value thesis.

For a realistic but largely utopic near-future setting, I recommend Rainbows End by Vernor Vinge. Much of the plot involves a weak and possibly immersion-breaking take on AGI, but in terms of forecasting a near-future world where most problems have become substantially more superficial and mild, the background events and supporting material are very good.

Dimensional travel, in my head, but this is allegory; the details are intentionally unspecified. I worked on making the literalness more plausible without outright lying to the reader, but it's a hard needle to thread.

 

The conclusion is not as strong as I'd like, but the illusion of transparency is real, so I'm leery of completely removing the didactic quality. It's already much subtler than the Fable of the Dragon Tyrant, and that one works well (though I think it would be better if it were less of an anvil-drop).

On which level? There's two intended morals here - one is the analogy to global poverty and open borders; the wonderful world is the West and Hell is the Third World. The other is the explicit one in the last sentence: what problems in the world are you missing, simply because they don't affect your life and are therefore easy to overlook? And particularly the point that it doesn't take anything special to notice - just someone without preconceptions who sees it and then refuses to look away.

The particular choice of analogy is inspired by Unsong.

The only concrete change specified here is something you've previously claimed to already do. This is yet one more instance of you not actually changing your behavior when sanctioned.

1
Gleb_T
7y
You are mistaken, we have never claimed that we will distance InIn publicly from the EA movement. We have previously talked about us not focusing on EA in our broad audience writings, and instead talking about effective giving - which is what we've been doing. At the same time, we were quite active on the EA Forum, and engaging in a lot of behind-the-scenes, and also public, collaborations to promote effective marketing within the EA sphere. Now, we are distancing from the EA movement as a whole.