What if, instead of releasing very long reports about decisions that were already made, there was a steady stream of small analyses of specific proposals, or even parts of proposals, enlisting others to aid error detection before each decision?
You know what, I was reading Zvi's musings on Going Infinite...
...Q: But it’s still illegal to mislead a bank about the purpose of a bank account.
Michael Lewis: But nobody would have cared about it.
He seems not to understand that this does not make it any less a federal crime? That ‘we probably would not have otherwise gotten caught on this one’ is not a valid answer?
Similarly, Lewis clearly thinks ‘the money was still there and eventually people got paid back’ should be some sort of defense for fraud. It isn’t, and it shouldn’t be.
...
Nor was Sam a liar, in Lewis’
this almost confirms for me that FTX belongs on the list of ways EA and rationalist organizations can basically go insane in harmful ways,
I was confused by this until I read more carefully. This link's hypothesis is about people just trying to fit in―but SBF seemed not to try to fit into his peer group! He engaged in a series of reckless and fraudulent behaviors that none of his peers seemed to want. From Going Infinite:
...He had not been able to let Modelbot rip the way he’d liked—because just about every other human being inside Alameda Research was doing
Superforecasters more than quadruple their extinction risk forecasts by 2100 if conditioned on AGI or TAI by 2070.
Replies to those comments mostly concur:
...(Replies to Jacob)
(AdamB) I had almost exactly the same experience.
(sclmlw) I'm sorry you didn't get into the weeds of the tournament. My experience was that most of the best discussions came at later stages of the tournament. [...]
(Replies to magic9mushroom)
(Dogiv) I agree, unfortunately there was a lot of low effort participation, and a shocking number of really dumb answers, like putting the probability that something will happen by 2030 higher than the probability it will happen by 2050. In one memorable ca
This post wasn't clear about how the college students were asked about extinction, but here's a hypothesis: public predictions of "the year of human extinction at 2500" and "the number of future humans at 9 billion" are a result of normies hearing a question that mentions "extinction", imagining an extinction scenario or three, guessing a year, and simply giving that year as their answer (without ever trying to mentally construct a probability distribution).
I actually visited this page to learn about how the "persuasion" part of the tournament panned out, though, and I see nothing about that topic here. Guess I'll check the post on AI next...
Apr 30 2022
Is there a newer one? Didn't find one with a quick search.
This post focuses on higher level “cause areas”, not on lower-level “interventions”
Okay, but what if my proposed intervention is a mixture of things? I think of it as a combination of public education, doing Google's job for them (organizing the world's information), promoting rationality/epistemics/empiricism, and reducing catastrophic risk (because popular false beliefs have exacerbated global warming, may destabilize the United States in the future, etc.)
I would caution against thinking the Hard Problem of Consciousness is unsolvable "by definition" (if it is solved, qualia will likely become quantifiable). I think the reasonable thing is to presume it is solvable. But until it is solved, we must not allow AGI takeover; even if AGIs stay under human control, they could create a previously unimaginable power imbalance between a few humans and the rest of us.
To me, it's important whether the AGIs are benevolent and have qualia/consciousness. If AGIs are ordinary computers but smart, I may agree; if they are conscious and benevolent, I'm okay being a pet.
quickly you discover that [the specifics of the EA program] are a series of tendentious perspectives on old questions, frequently expressed in needlessly-abstruse vocabulary and often derived from questionable philosophical reasoning that seems to delight in obscurity and novelty
He doesn't discuss or quote specifics, as if to shield his claim from analysis. "Tendentious"? "Abstruse"? He's complaining that I, as an EA, am "abstruse", meaning obscure or difficult to understand, yet I'm the one who has to look up his words in the dictionary. As for how EAs "seem", ...
I was one of those who criticized Kat's response pretty heavily, but I really appreciated TracingWoodgrains' analysis and it did shift my perspective. I was operating from an assumption that Ben & Habryka were using an appropriate truthseeking process, because why wouldn't they? But now I have the sense that they didn't respond appropriately to counterevidence from Spencer G (and others), or to the promise of counterevidence from Nonlinear. So now I'm confused enough to agree with TW's conclusion: mistrial!
(edit: mind you, as my older comments suggest, in t...
TW, I want to thank you for putting this together, for looking at the evidence more closely than I did, for reminding me that Ben's article violated my own standards of truthseeking, and for highlighting some of Kat's evidence in more effective ways than Kat herself did. (of course, it also helps that you're an outside observer.)
I hadn't read some of those comments under Ben's article (e.g. by Spencer G) until now. I am certain that if I personally had received evidence that I was potentially quite wrong about details of an important article I'd just publi...
well, he was unaware of the existence of more than half of NL's staff.
If anyone has evidence that Ben was indeed this far off about the number of staff, please send it to me (or post it here). I am trying to look into this claim and am really struggling to find any trace of the 21 employees that Kat and Drew claim have worked at Nonlinear.
I don't see how you could get to 21 in addition to Kat, Drew and Emerson. There are maybe some contractors, short-term volunteers, and temporary unpaid interns, which I wouldn't usually classify as employees, where ...
This "unambiguous" contradiction seems overly pedantic to me. Surely Kat didn't expect Ben would receive her evidence and do nothing with it? So when Kat asked for time to "gather and share the evidence", she expected Ben, as a reasonable person, would change the article in response, so it wouldn't be "published as is".
Opinions on this are pretty diverse. I largely agree with the bulleted list of things-you-think, and this article paints a picture of my current thinking.
My threat model is something like: the very first AGIs will probably be near human-level and won't be too hard to limit/control. But in human society, tyrants are overrepresented among world leaders, relative to tyrants in the population of people smart enough to lead a country. We'll probably end up inventing multiple versions of AGI, some of which may be straightforwardly turned into superintelligences ...
I strongly agree with the end of your post:
Remember:
Almost nobody is evil.
Almost everything is broken.
Almost everything is fixable.
I want you to know that I don't think you're a villain, and that your pain makes me sad. I wrote some comments that were critical of your responses ... and still I stand by those comments. I dislike and disapprove of the approach you took. But I also know that you're hurting, and that makes me sad.
So... I'd like you to dwell on that for a minute.
I wrote something in an edited paragraph deep within a subthread, and thought I should...
Thank you for the empathy. Means a lot to me. This has been incredibly rough, and being expected to exhibit no strong negative emotions in the face of all of this has been very challenging.
And, yes, I do think an alternative timeline like that was possible. I really wish that had happened, and if the multiverse hypothesis is true, then it did happen somewhere, so that's nice to think about.
Exaggeration is fun, but not what this situation calls for. So for me, the only reason I didn't upvote you was the word "deranged". Naivety? Everybody's got some, but I think EAs tend to be below average in that respect.
I think you've both raised good points. Way upthread @Habryka said "I don't see a super principled argument for giving two weeks instead of one week", but if I were unfairly accused I'd certainly want a full two weeks! So Kat's request for a full week to gather evidence seems reasonable [ed: under the principle of due process], and I don't see what sort of opportunities would've existed for retribution from K&E in the two-week case that didn't exist in the one-week case.
However, when I read Ben's post (like TW, I did this "fresh" about two days ago; I ...
That may be, but they valued their community connections and the pay-related disputes suggest that their funding was limited.
What, exactly, do you expect NL to say to clarify that distinction
I expect them to say "advised". This isn't Twitter, and even on Twitter I myself use direct quotes as much as possible despite the increased length, for accuracy's sake. Much of this situation was "(s)he said / she said" where a lot of the claims were about events that were never recorded. So how do we make judgements, then? Partly we rely on the reputations of everyone involved―but in the beginning Kat and Ben had good reputations while (after Ben's post) Alice & Chloe were anonymous, w...
It sounds like what you would be more convinced by is a short, precise refutation of the exact things said by the original post.
But I feel the opposite. That to me would have felt corporate, and it is also likely impossible, given that the original allegations are such a blend: verified factual assertions, some things that are technically true but misleading, some that may be true but are hearsay, and some that do seem directly false.
Rather than "retaliatory and unkind," my main takeaway from the post was something like "passive-aggressive bene...
Hmm, well Ben said "(for me) a 100-200 hour investigation" in the first post, then said he spent "~320 hours" in the second. Maybe people thought you should've addressed that discrepancy? Edit: the alternative―some don't like your broader stance and are clicking disagree on everything. Speaking of which, I wonder if you updated based on Spencer's points?
because of how chilling it is for everyone else to know they could be on blast if they try to do anything.
In part based on Ben's followup (which indicated a high level of care) and based on concerning aspects of this post discussed in other comments here, I'm persuaded that Ben's original post was sufficiently fair (if one keeps in mind the disclaimer that the post was "not from a search to give a balanced picture"), and that most EA orgs don't need to be afraid of unusual social arrangements as long as they're written down and expectations are made clear....
...I still find Chloe's broad perspective credible and concerning [...] it's begging the question to self-describe your group with "Your group has a really optimistic and warm vibe. [...]" some of the short-summary replies to Chloe seemed uncharitable to the point of being mean. [...] I thought it's simply implausible that the most Nonlinear leadership could come up with in terms of "things we could've done differently" is stuff like "Emerson shouldn't have snapped at Chloe during that one stressful day" [...] Even though many of the things in my elaboration of
Yeah, at least several comments have much more severe issues than tone or stylistic choices, like rewording ~every claim by Ben, Chloe and Alice and then assuming that the transformed claims had the same truth value as the originals.
I'm in a position very similar to Yarrow here: while I think Kat Woods has mostly convinced me that the most incendiary claims are likely false, and I'm sympathetic to the case for suing Ben and Habryka, there were dangerous red flags in the responses, so much so that I'd stop funding Nonlinear entirely, and I think it's quite bad that Kat Woods responded the way they did.
Yes, when I saw that, I had to wonder whether the payment was offered afterward (as a gift) or in advance (possibly in exchange for information).
I disagree because (i) the forum is my main link to the EA community, and (ii) the SBF scandal suggests that it's better if negative info gets around more easily... though of course we should also be mindful of the harms of gossip.
I feel like this response ignores my central points ― my sense that Kat misrepresented/strawmanned the positions of Chloe/Alice/Ben and overall didn't respond appropriately. These points would still be relevant even in a hypothetical disagreement where there was no financial relationship between the parties.
I agree that Ben leaves an impression that abuse took place. I am unsure on that point; it could have been mainly a "clash of personalities" rather than "abuse". Regardless, I am persuaded (partly based on this post) that Kat & Emerson have personal...
I feel like this response ignores my central points ― my sense that Kat misrepresented/strawmanned the positions of Chloe/Alice/Ben and overall didn't respond appropriately.
And I disagree, and used one example to point out why the response is not (to me) a misrepresentation or strawman of their positions, but rather treating them as mostly a collection of vague insinuations peppered with specific accusations that NL can only really respond to by presenting all the ways they possibly can how the relationship they're asserting is not supported by whatever ev...
Ugh, yes of course if you got richer you got the money from somewhere. If you thought I thought otherwise, you were mistaken. (Of course it could've just been printed by the government, but that will cause inflation if not balanced by some kind of in-country value creation or spending reduction.) (Edit: also, Google tells me "Mercantilism was based on the principle that the world's wealth was static" and I do not have any such "mercantilist intuition".)
One way to think about services vs manufacturing: suppose you're very poor and you suddenly earn more money. How do you spend this limited new resource? Certainly you spend some of it on stuff: furniture, a better phone, electricity. If a country lacks manufacturing or exports, the money you spend on stuff leaves the country with no balancing inflow. And when you buy services, the person from whom you bought the services also buys stuff. So if people get richer at scale, the country as a whole tends to bleed that money back out. You can export services som...
An intervention that's on my mind is leveraging the sheer intelligence of some EAs to build a factory design company that is focused on building tools and processes for manufacturing. Might it be possible, for example, to build machines that can be used to help construct a wide variety of products? Could we invent the industrial equivalent of FoldScope (the paper microscope that makes medical diagnosis affordable) for small-scale manufacturing, build the machines at scale in an LMIC, and sell them around the world at cost? Or, could there be something like this for mining on small ore deposits that the big players don't touch?
I agree with this. I think overall I get a sense that Kat responded in just the sort of manner that Alice and Chloe feared*, and that the flavor of treatment that Alice and Chloe (as told by Ben) said they experienced from Kat/Emerson seems to be on display here. (* Edit: I mean, Kat could've done worse, but it wouldn't help her/Nonlinear.)
I also feel like Kat is misrepresenting Ben's article? For example, Kat says
Chloe claimed: they tricked me by refusing to write down my compensation agreement
I just read that article and don't remember any statement to t...
My read on this is that a lot of the things in Ben's post are very between-the-lines rather than outright stated. For example, the financial issues all basically only matter if we take for granted that the employees were tricked or manipulated into accepting lower compensation than they wanted, or were put in financial hardship.
Which is very different from the situation Kat's post seems to show. Like... I don't really think any of the financial points made in the first one hold up, and without those, what's left? A She-Said-She-Said about what they were as...
If you don't think you know what the moral reality is, why are you confident that there is one?
I am confident that if there is no territory relevant to morality, then illusionism is true and (paradoxically) it doesn't matter what our maps contain because the brains that contain the maps do not correlate with any experiences in base reality. I therefore ignore illusionism and proceed with the assumption that there is something real, that it is linked to brains and correlates positively with mental experience, that it is scientifically discoverable, and that...
my suspicion is that you'd run into difficulties defining what it means for morality to be real/part of the territory and also have that be defined independently of "whatever causes experts to converge their opinions under ideal reasoning conditions."
In the absence of new scientific discoveries about the territory, I'm not sure whether experts (even "ideal" ones) should converge, given that an absence of evidence tends to allow room for personal taste. For example, can we converge on the morality of abortion, or of factory farms, without understandin...
I was about to make a comment elsewhere about moral realism when it occurred to me that I didn't have a strong sense of what people mean by "moral realism", so I whipped out Google and immediately found myself here. Given all those references at the bottom, it seems like you are likely to have correctly described what the field of philosophy commonly thinks of as moral realism, yet I feel like I'm looking at nonsense.
Moral realism is based on the word "real", yet I don't see anything I would describe as "real" (in the territory-vs-map sense) in Philippa Fo...
Well, okay. I've argued that other decision procedures and moralities do have value, but are properly considered subordinate to CU. Not sure if these ideas swayed you at all, but if you're Christian you may be thinking "I have my Rock" so you feel no need for another.
If you want to criticize utilitarianism itself, you would have to say the goal of maximizing well-being should be constrained or subordinated by other principles/rules, such as requirements of honesty or glorifying God/etc.
You could do this, but you'd be arguing axiomatically. A claim like "my...
Thanks for taking my comment in the spirit intended. As a noncentral EA it's not obvious to me why EA has little art, but it could be something simple like artists not historically being attracted to EA. It occurs to me that membership drives have often been at elite universities that maybe don't have lots of art majors.
Speaking personally, I'm an engineer and an (unpaid) writer. As such I want to play to my strengths, and any time I spend on making art is time not spent using my valuable specialized skills... at least I started using AI art in my latest art...
Well... Communism is structurally disinclined to work in the envisioned way. It involves overthrowing the government, which involves "strong men" and bloodshed; the people who lead a communist regime tend to be strongmen who rule with an iron grip ("for the good of communism", they might say) and are willing to use murder to further their goals. Thanks to this, it tends to involve a police state and central planning (which are not the characteristics originally envisioned). More broadly, communism isn't based on consequentialist reasoning. It's an exaggerat...
I think surely EA is still pluralistic ("a question"), and I wouldn't be at all surprised if longtermism gets de-emphasized or modified. (I am uncertain, as I don't live in a hub city and can't attend EAG, but as EA expands, new people could have new influence even if EAs in today's hub cities are getting a little rigid.)
In my fantasy, EAs realize that they missed 50% of all longtermism by focusing entirely on catastrophic risk while ignoring the universe of Path Dependencies (e.g. consider the humble Qwerty keyboard―impossible to change, right? Well, I'm ...
I see this as a fundamentally different project than Wikipedia. Wikipedia deliberately excludes primary sources / original research and "non-notable" things, while I am proposing, just as deliberately, to include those things. Wikipedia requires a "neutral point of view" which, I think, is always in danger of describing a linguistic style rather than "real" neutrality (whatever that means). Wikipedia produces a final text that (when it works well) represents a mainstream consensus view of truth, but I am proposing to allow various proposals about what is t...
Oh, I've heard all this crap before
This is my first time.
to develop expert-systems to identify the sequences of coding and non-coding DNA that would need to be changed to morally enhance humans
Forgive my bluntness, but that doesn't sound practical. Since when can we identify "morality nucleotides"?
I suspect morality is more a matter of cultural learning than genetics. No genetic engineering was needed to change humans from slave-traders to people who find slavery abhorrent. Plus, whatever genetic bits are involved, changing them sounds like a huge political can of worms.
I'm sure working for Metaculus or Manifold or OWID would be great.
I was hoping to get some help thinking of something smaller in scope and/or profitable that could eventually grow into this bigger vision. A few years from now, I might be able to afford to self-fund it by working for free (worth >$100,000 annually) but it'll be tough with a family of four and I've lost the enthusiasm I once had for building things alone with no support (it hasn't worked out well before). Plus there's an opportunity cost in terms of my various other ideas. Somehow I have to figure out how to get someone else interested...
You were right, this is one of the least popular ideas around. Perhaps even EAs think the truth is easy to find, that falsehoods aren't very harmful, or that automation can't help? I'm confused too. LW liked it a bit more, but not much.
Sort of related to this, I started to design an easier dialect of English because I think English is too hard and that (1) it would be easier to learn it in stages and (2) two people who have learned the easier dialect could speak it among themselves. This would be nice in reverse; I married a Filipino but found it difficult to learn Tagalog because of the lack of available Tagalog courses and the fact that my wife doesn't understand and cannot explain the grammar of her language. I wish I could learn an intentionally-designed pidgin/simplified version of...
We are strongly against racism. It's just that Nick Bostrom is not racist (even though I find that his comment 26 years ago was extremely cringe and his apology wasn't done particularly well.)
Perhaps you have some insight about what was meant by "the views this particular academic expressed in his communications"? The criticisms of Bostrom I've seen have consistently declined to say what "views" they are referring to. One exception to this is that I heard one person say that almost everyone thinks it is racist to say that a racial IQ gap exists. To anyone ...
Bostrom did not say it was unknown how much the gap is genetic vs environmental. He said he didn't know. This apparently made some people mad, but I think what made people more mad was that they read things into the apology that Bostrom didn't say, then got mad about it. (That's why most people criticizing the apology avoid quoting the apology.)
There is a Wikipedia page that says
The scientific consensus is that there is no evidence for a genetic component behind IQ differences between racial groups.[9 citations]
I've also glanced at a couple of scientific p...
Give a man a fish, feed him for a day. Give him money for a fishing net or a nice plow, that'll help more.
They know what they need. They just need some money for it.
I find it extremely [...] threatening, and quite frightening that an exalted leader [...] holds these [...] beliefs
You haven't said what "these beliefs" refers to, but given the preceding context, you seem to be strongly objecting not to any belief Bostrom holds, but to his lack of belief. In other words, it is threatening and frightening (in context) that Bostrom said: "It is not my area of expertise, and I don’t have any particular interest in the question. I would leave to others, who have more relevant knowledge, to debate whether or not in addition to...
I feel like some people are reading "I completely repudiate this disgusting email from 26 years ago" and thinking that he has not repudiated the entire email, just because he also says "The invocation of a racial slur was repulsive". I wonder if you interpreted it that way.
One thing I think Bostrom should have specifically addressed was when he said "I like that sentence". It's not a likeable sentence! It's an ambiguous sentence (one interpretation of which is obviously false) that carries a bad connotation (in the same way that "you did worse than Joe on ...
I think that drawing attention to racial gaps in IQ test results without highlighting appropriate social context is in-and-of itself racist.
Why is it that this doesn't count as highlighting appropriate social context?
I also think that it is deeply unfair that unequal access to education, nutrients, and basic healthcare leads to inequality in social outcomes, including sometimes disparities in skills and cognitive capacity. This is a huge moral travesty that we should not paper over or downplay. [apology paragraph 2]
I guess you could say that the social cont...
That's a fair point. But Rohit's complaint goes way beyond the statement being harmful or badly constructed. Ze is beating around the bush of a much stronger and unsubstantiated claim that is left unstated for some reason: "Bostrom was and is a racist who thinks that race directly affects intelligence level (and also, his epistemics are shit)".
What ze does say: "his apology, was, to put it mildly, mealy mouthed and without much substance" "I'm not here to litigate race science." "someone who is so clearly in a position of authority...maintaining this kind...
Sorry if I sounded redundant. I'd always thought of "evaporative cooling of group beliefs" like "we start with a group with similar values/goals/beliefs; the least extreme members gradually get disengaged and leave; which cascades into a more extreme average that leads to others leaving"―very analogous to evaporation. I might've misunderstood, but SBF seemed to break the analogy by consistently being the most extreme, and actively and personally pushing others away (if, at times, accidentally). Edit: So... arguably one can still apply the evaporative cooling concept to FTX, but I don't see it as an explanation of SBF himself.