Also, the only known raids on the corporate assets happened post-crash and therefore long post-audit. Under the espoused worldview of the management, everything before that was plausibly 'good for the company', in the sense that it benefited the company in raw EV across all possible worlds, with no risk discount applied to outsized gains or to massive losses.
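To make the 'raw EV, no risk discount' framing concrete, here is a toy simulation (my own illustration; the 51/49 double-or-bust bet and the round count are assumptions, nothing from the FTX record). A linear-EV maximizer endorses the bet on every round, since it is +EV across all possible worlds, yet nearly every world ends in ruin:

```python
import random

def simulate(rounds=10, trials=100_000, p_win=0.51, seed=0):
    """Hypothetical double-or-bust bet: win with probability p_win and
    double your stake, otherwise lose everything. Per-round EV is
    2 * 0.51 = 1.02x, so a raw-EV maximizer takes it every time."""
    rng = random.Random(seed)
    ruined = 0
    total = 0.0
    for _ in range(trials):
        wealth = 1.0
        for _ in range(rounds):
            if rng.random() < p_win:
                wealth *= 2.0   # win: stake doubles
            else:
                wealth = 0.0    # loss: total wipeout
                break
        ruined += (wealth == 0.0)
        total += wealth
    print(f"analytic EV after {rounds} rounds: {(2 * p_win) ** rounds:.2f}")
    print(f"simulated mean wealth:            {total / trials:.2f}")
    print(f"fraction of worlds ruined:        {ruined / trials:.4f}")

simulate()
```

The mean stays above the starting stake (about 1.22x after ten rounds), but essentially all of that value lives in the roughly 0.1% of worlds that survive; any discount rate for massive losses flips the verdict.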
That wasn't the question. The question was why any company would go to less-than-maximally-trustworthy auditors.
And it makes you wonder why companies would go to these known-worse auditors if they don't have something to hide, especially when they can afford the best auditing, as FTX should have been able to.
Complying with an audit is expensive, and not just in money.
A thorough audit in progress is going to disrupt the workflow of most or all of your company in order to examine their daily operations more closely. This reduces productivity and slows the ability to change anything, even if nothing improper is happening. It is expensive and disruptive.
Simple: It's another meta thing. Those have a very poor track record and seem to require extraordinary competence to be net-positive.
That's literally just the same thing I said with more words. They don't have reasons to think finance is net negative; to them it is simply polluted with money and therefore bad.
Those two are perfectly good examples. They did. Every successful startup does something approximately that bad, on the way to the top.
Because finance people are bad people and therefore anything associated with them is bad. Or, for a slightly longer chain: because money is bad, people who spend their lives seeking money are therefore bad, and anything associated with those people is bad.
Don't overthink this. It doesn't have to make sense, there just have to be a lot of people who think it does.
Why wouldn't it be controversial? It suggests something other than people acting according to their personal pet projects, ideologies, and social affiliations, and proposes a way by which those can be compared and found wanting. That it is also significantly more demanding than anything else just makes it a stronger implicit attack.
Most people will read EA as a claim to the moral high ground, regardless of how nicely it's presented to them. Largely because it basically is one. Implicit in all claims to the moral high ground - even if i...
No, you're thinking about it entirely wrong. If everyone who did something analogous to Alameda 2018 was shunned, there probably wouldn't be any billionaire EA donors at all. It was probably worse than most startups, but not remarkably worse. It was definitely not a reliable indicator that a fraud or scandal was coming down the road.
C, Neither. The obvious interpretation is exactly what he said - people ultimately don't care whether you maintained their standard of 'ethical' as long as you win. Which means that, as far as talk about other people's ethics goes, it's all PR, regardless of how ethical you're being by your own standards.
(I basically concur. Success earns massive amounts of social capital, and that social capital can buy a whole lot of forgiveness. Whether it also comes with literal capital which literally buys forgiveness is almost immaterial next to that.)
So he's said...
Yeah, still not seeing much good faith. You're still ahead of AutismCapital, though, which is 100% bad faith 100% of the time. If you believe a word it says, I have a bridge to sell you.
Strongly disagree. That criticism is mostly orthogonal to the actual problems that surfaced. Conflicts of interest were not the problem here.
Most of that isn't even clearly bad, and I find it hard to see good faith here.
Your criticism of Binance amounts to "it's cryptocurrency". Everyone knows crypto can be used to facilitate money laundering; for Bitcoin, this was basically the whole point. Similarly with the criticism of Ponzi schemes: there were literally dozens of ICOs for things that were overtly labeled as Ponzis - Ponzicoin was one of the more successful ones, because it had a good name. Many people walked into this with eyes open; many others didn't, but they were warned, they just di...
The 'unambitious' thing you ask the AI to do would create worldwide political change. It is absurd to think that it wouldn't. Even ordinary technological change creates worldwide political change at that scale!
And an AGI having that little impact is also not plausible; if that's all you do, the second mover - and possibly the third, fourth, and fifth, if everyone moves slowly - spits out an AGI and flips the table. You can't be that unambitious and still block other AGIs from performing pivotal acts, and even if you want to think small, the other actor...
Again, that would produce moderate-to-major disruptions in geopolitics. An eight-year first doubling with any recursive self-improvement at work is also pretty implausible, because RSI implies more discontinuity than that; but that doesn't matter here, since even that scenario would cause massive disruption.
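For concreteness on why RSI implies shrinking rather than fixed doubling times, here is one standard toy model (an illustration, not a forecast; the quadratic feedback term is an assumption): let capability $C$ accelerate its own growth superlinearly,

$$\frac{dC}{dt} = kC^2 \quad\Longrightarrow\quad C(t) = \frac{C_0}{1 - kC_0 t}, \qquad \Delta t_{C \to 2C} = \int_C^{2C} \frac{dC'}{kC'^2} = \frac{1}{2kC}.$$

Each doubling takes half as long as the one before it, and the trajectory diverges in finite time $1/(kC_0)$; by contrast, a doubling time that holds steady at eight years is ordinary exponential growth, i.e. no feedback from capability into the growth rate at all.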
If humans totally solve alignment, we'd probably ask our AGI to take us to Eutopia slowly, allowing us to savor the improvement and adjust to the changes along the way, rather than leaping all the way to the destination in one terrifying lurch.
Directly conflicts with the geopolitical requirements. Also not compatible with the 'sector by sector' scope of economic impact - an AGI would be revolutionizing everything at once, and the only question would be whether it was merely flipping the figurative table or going directly to interpolating every...
"Necessarily entails singularity or catastrophe", while definitely correct, is a substantially stronger statement than I made. To violate the stated terms of the contest, an AGI must only violate "transforming the world sector by sector". An AGI would not transform things gradually and limited to specific portions of the economy. It would be broad-spectrum and immediate. There would be narrow sectors which were rendered immediately unrecognizable and virtually every sector would be transformed drastically by five years in, and almost certainly by two years...
Even a slow takeoff! If there is recursive self-improvement at work at all, on any scale, you wouldn't see anything like this. You'd see moderate-to-major disruptions in geopolitics, and many or all technology sectors being revolutionized simultaneously.
This scenario is "no takeoff at all" - advancement happening only at the speed of economic growth.
A positive vision which is false is a lie. No vision meeting the contest constraints is achievable, or even desirable as a post-AGI target. There might be some good fiction that comes out of this, but it will be as unrealistic as Vinge's Zones of Thought setting. Using it in messaging would be at best dishonest, and, worse, probably self-deceptive.
These goals are not good goals.
It is actively harmful for people to start thinking about the future in more positive terms, if those terms are misleading and unrealistic. The contest ground rules frame "positive terms" as being familiar, not just good in the abstract - submissions cannot be good but scary, as any true good outcome must be. See Eutopia is Scary:
...We, in our time, think our life has improved in the last two or three hundred years. Ben Franklin is probably smart and forward-looking...
This project will give people an unrealistically familiar and tame picture of the future. Eutopia is Scary, and the most unrealistic view of the future is not the dystopia, nor the utopia, but the one which looks normal.[1] The contest ground rules require, if not in so many words, that all submissions look normal. Anything which obeys these ground rules is wrong. Implausible, unattainable, dangerously misleading, bad overconfident reckless arrogant wrong bad.
This is harmful, not helpful; it is damaging, not improving, the risk messaging;...
Very few people actually want to wirehead. Pleasure-center stimulation is not the primary thing we value. The broader point there is the complexity of value thesis.
For a realistic but largely utopian near-future setting, I recommend Rainbows End by Vernor Vinge. Much of the plot involves a weak and possibly immersion-breaking take on AGI, but in terms of forecasting a near-future world where most problems have become substantially more superficial and mild, the background events and supporting material are very good.
Dimensional travel, in my head, but this is allegory; the details are intentionally unspecified. I worked on making the literalness more plausible without outright lying to the reader, but it's a hard needle to thread.
The conclusion is not as strong as I'd like, but illusion of transparency is real, so I'm leery of completely removing the didactic quality. It's already much subtler than the Fable of the Dragon Tyrant, and that one works well (though I think it would be better if it were less of an anvil-drop).
On which level? There are two intended morals here - one is the analogy to global poverty and open borders; the wonderful world is the West and Hell is the Third World. The other is the explicit one in the last sentence: what problems in the world are you missing, simply because they don't affect your life and are therefore easy to overlook? And particularly the point that it doesn't take anything special to notice - just someone without preconceptions who sees it and then refuses to look away.
The particular choice of analogy is inspired by Unsong.
The only concrete change specified here is something you've previously claimed to already do. This is yet one more instance of you not actually changing your behavior when sanctioned.
The 'stylistic choices' were themselves evidence of wrongdoing, and most of their evidence against the claims both misstated the claims it purported to refute and provided further (unwitting?) evidence of wrongdoing.
If you have time, can you provide some examples of what you saw as evidence of wrongdoing?
I feel that much of what I saw from my limited engagement was a valid refutation of the claims made. For instance, see the examples given in the post above.
There were responses to new claims, and I saw those as making clear that other claims, made separately from Ben's post, were also false.
I did see some cases where a refutation and claim didn't exactly match, but I didn't register that as wrongdoing (which might be due to bias or n...