No, you're thinking about it entirely wrong. If everyone who did something analogous to Alameda in 2018 were shunned, there probably wouldn't be any billionaire EA donors at all. It was probably worse than most startups, but not remarkably worse. It was definitely not a reliable indicator that a fraud or scandal was coming down the road.
C, Neither. The obvious interpretation is exactly what he said: people ultimately don't care whether you maintained their standard of 'ethical' as long as you win. Which means that when it comes to talking about other people's ethics, it's all PR, regardless of how ethical you're being by your own standards.
(I basically concur. Success earns massive amounts of social capital, and that social capital can buy a whole lot of forgiveness. Whether it also comes with literal capital which literally buys forgiveness is almost immaterial next to that.)
So he's said essentially nothing about his own ethics and whether he believes he stuck to them. Later elaboration strongly suggests he considered his actions 'sketchy' but doesn't even say that outright. This is entirely consistent with SBF believing that he never did anything wrong on purpose.
Whether you think that belief is true, false but reasonable, or totally delusional is a separate matter. Just based on this interview I'd say "false but reasonable", but there are a lot of unsubstantiated claims of a history of lying that I haven't evaluated.
Again, that's orthogonal to the actual problems that surfaced.
Yeah, still not seeing much good faith. You're still ahead of AutismCapital, though, which is 100% bad faith 100% of the time. If you believe a word it says I have a bridge to sell you.
Strongly disagree. That criticism is mostly orthogonal to the actual problems that surfaced. Conflicts of interest were not the problem here.
Most of that isn't even clearly bad, and I find it hard to see good faith here.
Your criticism of Binance amounts to "it's cryptocurrency". Everyone knows crypto can be used to facilitate money laundering; for Bitcoin, this was basically the whole point. Similarly with the criticism of Ponzi schemes: there were literally dozens of ICOs for things that were overtly labeled as Ponzis - Ponzicoin was one of the more successful ones, because it had a good name. Many people walked into this with eyes open; many others didn't, but they were warned, they just didn't heed the warnings. Should we also refuse to take money from anyone who bets against r/wallstreetbets and Robinhood? Casinos? Anyone who runs a platform for sports bets? Prediction markets? Your logic would condemn them all.
It's not clear why FTX would want to spend this amount of money on buying a fraudulent firm.
FTX would prefer that the crypto sector stay healthy, and backstopping companies whose schemes were failing serves that goal. That is an entirely sufficient explanation and one with no clear ethical issues or moral hazard.
Even in retrospect, I think this was bad criticism and it was correct to downvote it.
The 'unambitious' thing you ask the AI to do would create worldwide political change. It is absurd to think that it wouldn't. Even ordinary technological change creates worldwide political change at that scale!
And an AGI having that little impact is also not plausible. If that's all you do, the second mover -- and possibly the third, fourth, and fifth, if everyone moves slowly -- spits out an AGI and flips the table, because you can't be that unambitious and still block other AGIs from performing pivotal acts, and even if you want to think small, the other actors won't. Even if they are approximately as unambitious, they will have different goals, and the interaction will immediately amp up the chaos.
There is just no way for an actual AGI scenario to meet these guidelines. Any attempt to draw a world which meets them has written the bottom line first and is torturing its logic trying to construct a vaguely plausible story that might lead to it.
Again, that would produce moderate-to-major disruptions in geopolitics. An eight-year first doubling with any recursive self-improvement at work is also pretty implausible, because RSI implies more discontinuity than that; but that doesn't matter here, as even that scenario would cause massive disruption.
If humans totally solve alignment, we'd probably ask our AGI to take us to Eutopia slowly, allowing us to savor the improvement and adjust to the changes along the way, rather than leaping all the way to the destination in one terrifying lurch.
Directly conflicts with the geopolitical requirements. Also not compatible with the 'sector by sector' scope of economic impact - an AGI would be revolutionizing everything at once, and the only question would be whether it was merely flipping the figurative table or dissolving every figurative chemical bond in the table simultaneously and leaving it to crumble into figurative dust.
Otherwise you'd be left with three options that all seem immoral
The 'Silent elitism' view is approximately correct, except in its assumption that there is a current elite who endorse the eutopia, which there is not. Even the most forward-thinking people of today, the Ben Franklins of the 2020s, would balk. The only way humans know how to transition toward a eutopia is slowly over generations. Since this has a substantial cost, speedrunning that transition is desirable, but how exactly that speedrun can be accomplished without leaving a lot of wreckage in its wake is a topic best left for superintelligences, or at the very least intelligences augmented somewhat beyond the best capabilities we currently have available.
Pure propaganda -- instead of trying to make a description that's an honest attempt at translating a strange future into something that ordinary people can understand, we give up all attempts at honesty and just make up a nice-sounding future with no resemblance to the Eutopia which is secretly our true destination.
What a coincidence! You have precisely described this contest. This is, explicitly, a "make up a nice-sounding future with no resemblance to our true destination" contest. And yes, it's at best completely immoral. At worst they get high on their own supply and use it to set priorities, in which case it's dangerous and aims us toward UFAI and impossibilities.
At least it's not the kind of believing absurdities which produces people willing to commit atrocities in service of those beliefs. Unfortunately, poor understanding of alignment creates a lot of atrocities from minimal provocation anyway.
the closest possible description of the indescribable Eutopia must be something that sounds basically good (even if it is clearly also a little unfamiliar), because the fundamental idea of Eutopia is that it's desirable
This is not true. There is no law of the universe which states that there must be a way to translate the ways in which a state is good for its inhabitants (who are transhuman or posthuman, i.e. possessed of humanity and various other important mental qualities) into words, conveyable in present human language by text or speech, that sound appealing. That might be a nice property for a universe to have, but ours doesn't.
Some point along a continuum from here to there, a continuum we might slide up or down with effort, probably can be so described - a fixed-point theorem of some sort probably applies. However, that need not be an honest depiction of what life will be like if we slide in that direction, any more than showing a vision of the Paris Commune to a Parisian on the day Napoleon fell (stipulating that they approved of it) would be an honest view of Paris's future.
"Necessarily entails singularity or catastrophe", while definitely correct, is a substantially stronger statement than I made. To violate the stated terms of the contest, an AGI must only violate "transforming the world sector by sector". An AGI's transformation would be neither gradual nor limited to specific portions of the economy; it would be broad-spectrum and immediate. There would be narrow sectors rendered immediately unrecognizable, and virtually every sector would be transformed drastically by five years in, and almost certainly by two years in.
An AGI which has any ability to self-improve will not wait that long. It will be months, not years, and probably weeks, not months. A 'soft' takeoff would still be faster than five years. These rules mandate not a soft takeoff, but no takeoff at all.