All of DPiepgrass's Comments + Replies

Sorry if I sounded redundant. I'd always thought of "evaporative cooling of group beliefs" as something like "we start with a group with similar values/goals/beliefs; the least extreme members gradually become disengaged and leave, which cascades into a more extreme average that leads still more members to leave"―very analogous to evaporation. I might've misunderstood, but SBF seemed to break the analogy by consistently being the most extreme member, and by actively and personally pushing others away (if, at times, accidentally). Edit: So... arguably one can still apply the evaporative cooling concept to FTX, but I don't see it as an explanation of SBF himself.

What if, instead of releasing very long reports about decisions that were already made, there were a steady stream of small analyses on specific proposals, or even parts of proposals, to enlist others to aid error detection before each decision?

You know what, I was reading Zvi's musings on Going Infinite...

Q: But it’s still illegal to mislead a bank about the purpose of a bank account.

Michael Lewis: But nobody would have cared about it.

He seems to not understand that this does not make it not a federal crime? That ‘we probably would not have otherwise gotten caught on this one’ is not a valid answer?

Similarly, Lewis clearly thinks ‘the money was still there and eventually people got paid back’ should be some sort of defense for fraud. It isn’t, and it shouldn’t be.

...

Nor was Sam a liar, in Lewis’

... (read more)

this almost confirms for me that FTX belongs on the list of ways EA and rationalist organizations can basically go insane in harmful ways,

I was confused by this until I read more carefully. This link's hypothesis is about people just trying to fit in―but SBF seemed not to try to fit into his peer group! He engaged in a series of reckless and fraudulent behaviors that none of his peers seemed to want. From Going Infinite:

He had not been able to let Modelbot rip the way he’d liked—because just about every other human being inside Alameda Research was doing

... (read more)
2 · Habryka · 10d
(Author of the post) My model is that Sam had some initial tendencies for reckless behavior and bullet-biting, and those were then greatly exacerbated via evaporative cooling dynamics at FTX.  Relatedly, this kind of evaporative cooling is exactly the dynamic I was trying to point to in my post. Quotes: 
6 · DPiepgrass · 10d
You know what, I was reading Zvi's musings on Going Infinite... And it occurred to me that all SBF had to do was find a few people who thought like Michael Lewis, and people like that don't seem rare. I mean, don't like 30% of Americans think that the election was stolen from Trump, or that the cases against Trump are a witch hunt, because Trump says so and my friends all agree he's a good guy (and they seek out pep talks to support such thoughts)? Generally the EA community isn't tricked this easily, but SBF was smarter than Trump and he only needed to find a handful of people willing to look the other way while trusting in his Brilliance and Goodness. And since he was smart (and overconfident) and did want to do good things, he needed no grand scheme to deceive people about that. He just needed people like Lewis who lacked a gag reflex at all the bad things he was doing.

Before FTX I would've simply assumed other EAs had a "moral gag reflex" already. Afterward, I think we need more preaching about that (and more "punchy" ways to hammer home the importance of things like virtues, rules, reputation and conscientiousness, even or especially in utilitarianism/consequentialism). Such preaching might not have affected SBF himself (since he cut so many corners in his thinking and listening), but someone in his orbit might have needed to hear it.

Superforecasters more than quadruple their extinction risk forecasts by 2100 if conditioned on AGI or TAI by 2070.

  • The data in this table is strange! Originally, Superforecasters gave 0.38% for extinction by 2100 (though 0.088% for the RS top quintile), but on this survey it's 0.225%. Why? Also, somehow the first number has 3 digits of precision while the second number is "1%", which is maximally lacking in significant digits (like, if you were rounding off, 0.55% ends up as 1%).
  • The implied result is strange! How could participants' AGI timelines possibly be so l
... (read more)

Replies to those comments mostly concur:

(Replies to Jacob)

(AdamB) I had almost exactly the same experience.

(sclmlw) I'm sorry you didn't get into the weeds of the tournament. My experience was that most of the best discussions came at later stages of the tournament. [...] 

(Replies to magic9mushroom)

(Dogiv) I agree, unfortunately there was a lot of low effort participation, and a shocking number of really dumb answers, like putting the probability that something will happen by 2030 higher than the probability it will happen by 2050. In one memorable ca

... (read more)

This post wasn't clear about how the college students were asked about extinction, but here's a hypothesis: public predictions for "the year of human extinction at 2500" and "the number of future humans at 9 billion" are a result of normies hearing a question that mentions "extinction", imagining an extinction scenario or three, guessing a year and simply giving that year as their answer (without having made any attempt to mentally create a probability distribution).

I actually visited this page to learn about how the "persuasion" part of the tournament panned out, though, and I see nothing about that topic here. Guess I'll check the post on AI next...

Apr 30 2022

Is there a newer one? Didn't find one with a quick search.

This post focuses on higher level “cause areas”, not on lower-level “interventions”

Okay, but what if my proposed intervention is a mixture of things? I think of it as a combination of public education, doing Google's job for them (organizing the world's information), promoting rationality/epistemics/empiricism, and reducing catastrophic risk (because popular false beliefs have exacerbated global warming, may destabilize the United States in the future, etc.)

2 · Leo · 20d
I'm not updating this anymore. But your post made me curious. I will try to read it shortly.

I would caution against thinking the Hard Problem of Consciousness is unsolvable "by definition" (if it is solved, qualia will likely become quantifiable). I think the reasonable thing is to presume it is solvable. But until it is solved we must not allow AGI takeover, and even if AGIs stay under human control, that could lead to a previously unimaginable power imbalance between a few humans and the rest of us.

To me, it's important whether the AGIs are benevolent and have qualia/consciousness. If AGIs are ordinary computers but smart, I may agree; if they are conscious and benevolent, I'm okay being a pet.

1 · Hayven Frienby · 4mo
I'm not sure whether we could ever truly know if an AGI was conscious or experienced qualia (which are by definition not quantifiable). And you're probably right that being a pet of a benevolent ASI wouldn't be a miserable thing (but it is still an x-risk ... because it permanently ends humanity's status as a dominant species). 

quickly you discover that [the specifics of the EA program] are a series of tendentious perspectives on old questions, frequently expressed in needlessly-abstruse vocabulary and often derived from questionable philosophical reasoning that seems to delight in obscurity and novelty

He doesn't discuss or quote specifics, as if to shield his claim from analysis. "Tendentious"? "Abstruse"? He's complaining that I, as an EA, am "abstruse", meaning obscure/difficult to understand, but I'm the one who has to look up his words in the dictionary. As for how EAs "seem", ... (read more)

I was one of those who criticized Kat's response pretty heavily, but I really appreciated TracingWoodgrains' analysis and it did shift my perspective. I was operating from an assumption that Ben & Hab were using an appropriate truthseeking process, because why wouldn't they? But now I have the sense that they didn't respond appropriately to counterevidence from Spencer G (and others), or to the promise of counterevidence from Nonlinear. So now I'm confused enough to agree with TW's conclusion: mistrial!

(edit: mind you, as my older comments suggest, in t... (read more)

TW, I want to thank you for putting this together, for looking at the evidence more closely than I did, for reminding me that Ben's article violated my own standards of truthseeking, and for highlighting some of Kat's evidence in more effective ways than Kat herself did. (of course, it also helps that you're an outside observer.)

I hadn't read some of those comments under Ben's article (e.g. by Spencer G) until now. I am certain that if I personally had received evidence that I was potentially quite wrong about details of an important article I'd just publi... (read more)

well, he was unaware of the existence of more than half of NL's staff.

If anyone has evidence that Ben was indeed this far off about the number of staff, please send it to me (or post it here). I am trying to look into this claim and am really struggling to find any trace of the 21 employees that Kat and Drew claim have worked at Nonlinear.

I don't see how you could get to 21 in addition to Kat, Drew and Emerson. There are maybe some contractors, short-term volunteers, and temporary unpaid interns, which I wouldn't usually classify as employees, where ... (read more)

This "unambiguous" contradiction seems overly pedantic to me. Surely Kat didn't expect Ben would receive her evidence and do nothing with it? So when Kat asked for time to "gather and share the evidence", she expected Ben, as a reasonable person, would change the article in response, so it wouldn't be "published as is".

1 · David Seiler · 4mo
Why not? According to Nonlinear, they had already told Ben they had evidence, and he'd decided to publish anyway: "He insists on going ahead and publishing this with false information intact, and is refusing to give us time to provide receipts/time stamps/text messages and other evidence". Ben already wasn't doing what Nonlinear wanted; the idea that he might continue shouldn't have been beyond their imagination. Since that's unlikely, it follows that Lightcone shouldn't have believed it, and should instead have expected that Nonlinear's threat was meant the way it was written.

More broadly, I think for any kind of claim of the form "your interpretation of what I said was clearly wrong and maybe bad faith, it should have been obvious what I really meant", any kind of thoughtful response is going to look pedantic, because it's going to involve parsing through what specifically was said, what they knew when they said it, and what their audience knew when they heard it. In this kind of discussion I think your pedantry threshold has to be set much higher than usual, or you won't be able to make progress.

Opinions on this are pretty diverse. I largely agree with the bulleted list of things-you-think, and this article paints a picture of my current thinking.

My threat model is something like: the very first AGIs will probably be near human-level and won't be too hard to limit/control. But in human society, tyrants are overrepresented among world leaders relative to their prevalence in the population of people smart enough to lead a country. We'll probably end up inventing multiple versions of AGI, some of which may be straightforwardly turned into superintelligences ... (read more)

3 · Hayven Frienby · 4mo
Well said. I also think it's important to define what is meant by "catastrophe." Just as an example, I personally would consider it catastrophic to see a future in which humanity is sidelined and subjugated by an AGI (even a "friendly," aligned one), but many here would likely disagree with me that this would be a catastrophe. I've even heard otherwise rational (non-EA) people claim a future in which humans are 'pampered pets' of an aligned ASI to be 'utopian,' which just goes to show the level of disagreement. 

I strongly agree with the end of your post:

Remember:

Almost nobody is evil.

Almost everything is broken.

Almost everything is fixable.

I want you to know that I don't think you're a villain, and that your pain makes me sad. I wrote some comments that were critical of your responses ... and still I stand by those comments. I dislike and disapprove of the approach you took. But I also know that you're hurting, and that makes me sad.

So... I'd like you to dwell on that for a minute.

I wrote something in an edited paragraph deep within a subthread, and thought I should... (read more)

Thank you for the empathy. Means a lot to me. This has been incredibly rough, and being expected to exhibit no strong negative emotions in the face of all of this has been very challenging.

And, yes, I do think an alternative timeline like that was possible. I really wish that had happened, and if the multiverse hypothesis is true, then it did happen somewhere, so that's nice to think about.

Exaggeration is fun, but not what this situation calls for. So for me, the only reason I didn't upvote you was the word "deranged". Naivety? Everybody's got some, but I think EAs tend to be below average in that respect.

I think you've both raised good points. Way upthread @Habryka said "I don't see a super principled argument for giving two weeks instead of one week", but if I were unfairly accused I'd certainly want a full two weeks! So Kat's request for a full week to gather evidence seems reasonable [ed: under the principle of due process], and I don't see what sort of opportunities would've existed for retribution from K&E in the two-week case that didn't exist in the one-week case.

However, when I read Ben's post (like TW, I did this "fresh" about two days ago; I ... (read more)

That may be, but they valued their community connections, and the pay-related disputes suggest that their funding was limited.

What, exactly, do you expect NL to say to clarify that distinction

I expect them to say "advised". This isn't Twitter, and even on Twitter I myself use direct quotes as much as possible despite the increased length, for accuracy's sake. Much of this situation was "(s)he said / she said" where a lot of the claims were about events that were never recorded. So how do we make judgements, then? Partly we rely on the reputations of everyone involved―but in the beginning Kat and Ben had good reputations while (after Ben's post) Alice & Chloe were anonymous, w... (read more)

It sounds like what you would be more convinced by is a short, precise refutation of the exact things said by the original post.

But I feel the opposite. That to me would have felt corporate, and also is likely impossible given the way the original allegations are such a blend of verified factual assertions, combined with some things that are technically true but misleading, may be true but are hearsay, and some things that do seem directly false.

Rather than "retaliatory and unkind," my main takeaway from the post was something like "passive-aggressive bene... (read more)

Hmm, well Ben said "(for me) a 100-200 hour investigation" in the first post, then said he spent "~320 hours" in the second. Maybe people thought you should've addressed that discrepancy? Edit: the alternative―some don't like your broader stance and are clicking disagree on everything. Speaking of which, I wonder if you updated based on Spencer's points?

because of how chilling it is for everyone else to know they could be on blast if they try to do anything.

In part based on Ben's followup (which indicated a high level of care) and based on concerning aspects of this post discussed in other comments here, I'm persuaded that Ben's original post was sufficiently fair (if one keeps in mind the disclaimer that the post was "not from a search to give a balanced picture"), and that most EA orgs don't need to be afraid of unusual social arrangements as long as they're written down and expectations are made clear.... (read more)

I still find Chloe's broad perspective credible and concerning [...] it's begging the question to self-describe your group with "Your group has a really optimistic and warm vibe. [...]" some of the short-summary replies to Chloe seemed uncharitable to the point of being mean. [...] I thought it's simply implausible that the most that Nonlinear leadership could come up with in terms of "things we could've done differently" is stuff like "Emerson shouldn't have snapped at Chloe during that one stressful day" [...] Even though many of the things in my elaboration of

... (read more)

Yeah, at least several comments have much more severe issues than tone or stylistic choices, like rewording ~every claim by Ben, Chloe and Alice, and then assuming that the transformed claims had the same truth value as the originals.

I'm in a position very similar to Yarrow here: While I think Kat Woods has mostly convinced me that the most incendiary claims are likely false, and I'm sympathetic to the case for suing Ben and Habryka, there were dangerous red flags in the responses, so much so that I'd stop funding Nonlinear entirely, and I think it's quite bad that Kat Woods responded the way they did.

Yes, when I saw that, I had to wonder whether the payment was offered afterward (as a gift) or in advance (possibly in exchange for information).

3 · Habryka · 4mo
(It was offered afterwards)

I disagree because (i) the forum is my main link to the EA community, and (ii) the SBF scandal suggests that it's better if negative info gets around more easily... though of course we should also be mindful of the harms of gossip.

I feel like this response ignores my central points ― my sense that Kat misrepresented/strawmanned the positions of Chloe/Alice/Ben and overall didn't respond appropriately. These points would still be relevant even in a hypothetical disagreement where there was no financial relationship between the parties.

I agree that Ben leaves an impression that abuse took place. I am unsure on that point; it could have been mainly a "clash of personalities" rather than "abuse". Regardless, I am persuaded (partly based on this post) that Kat & Emerson have personal... (read more)

I feel like this response ignores my central points ― my sense that Kat misrepresented/strawmanned the positions of Chloe/Alice/Ben and overall didn't respond appropriately.

And I disagree, and used one example to point out why the response is not (to me) a misrepresentation or strawman of their positions, but rather treating them as mostly a collection of vague insinuations peppered with specific accusations that NL can only really respond to by presenting all the ways they possibly can how the relationship they're asserting is not supported by whatever ev... (read more)

Ugh, yes of course if you got richer you got the money from somewhere. If you thought I thought otherwise, you were mistaken. (Of course it could've just been printed by the government, but that will cause inflation if not balanced by some kind of in-country value creation or spending reduction.) (Edit: also, Google tells me "Mercantilism was based on the principle that the world's wealth was static" and I do not have any such "mercantilist intuition".)

One way to think about services vs manufacturing: suppose you're very poor and you suddenly earn more money. How do you spend this limited new resource? Certainly you spend some of it on stuff: furniture, a better phone, electricity. If a country lacks manufacturing or exports, the money you spend on stuff leaves the country with no balancing inflow. And when you buy services, the person from whom you bought the services also buys stuff. So if people get richer at scale, the country as a whole tends to bleed that money back out. You can export services som... (read more)

2 · Larks · 4mo
There must always be a balancing flow. Your country has to be doing something to get the foreign currency required for that import. This could be exporting more of something else, or it could be attracting more foreign investment (or more aid), but there must be a balance. Your mercantilist intuition is a common one but it is mistaken.

An intervention that's on my mind is leveraging the sheer intelligence of some EAs to build a factory design company that is focused on building tools and processes for manufacturing. Might it be possible, for example, to build machines that can be used to help construct a wide variety of products? Could we invent the industrial equivalent of FoldScope (the paper microscope that makes medical diagnosis affordable) for small-scale manufacturing, build the machines at scale in an LMIC, and sell them around the world at cost? Or, could there be something like this for mining on small ore deposits that the big players don't touch?

I agree with this. I think overall I get a sense that Kat responded in just the sort of manner that Alice and Chloe feared*, and that the flavor of treatment that Alice and Chloe (as told by Ben) said they experienced from Kat/Emerson seems to be on display here. (* Edit: I mean, Kat could've done worse, but it wouldn't help her/Nonlinear.)

I also feel like Kat is misrepresenting Ben's article? For example, Kat says

Chloe claimed: they tricked me by refusing to write down my compensation agreement

I just read that article and don't remember any statement to t... (read more)

My read on this is that a lot of the things in Ben's post are very between-the-lines rather than outright stated. For example, the financial issues all basically only matter if we take for granted that the employees were tricked or manipulated into accepting lower compensation than they wanted, or were put in financial hardship.

Which is very different from the situation Kat's post seems to show. Like... I don't really think any of the financial points made in the first one hold up, and without those, what's left? A She-Said-She-Said about what they were as... (read more)

If you don't think you know what the moral reality is, why are you confident that there is one?

I am confident that if there is no territory relevant to morality, then illusionism is true and (paradoxically) it doesn't matter what our maps contain because the brains that contain the maps do not correlate with any experiences in base reality. I therefore ignore illusionism and proceed with the assumption that there is something real, that it is linked to brains and correlates positively with mental experience, that it is scientifically discoverable, and that... (read more)

3 · Lukas_Gloor · 7mo
I'm clearly talking about expert convergence under ideal reasoning conditions, as discussed earlier. Weird that this wasn't apparent. In physics or any other scientific domain, there's no question whether experts would eventually converge if they had ideal reasoning conditions. That's what makes these domains scientifically valid (i.e., they study "real things"). Why is morality different? (No need to reply; it feels like we're talking in circles.)

FWIW, I think it's probably consistent to have a position that includes (1) a wager for moral realism ("if it's not true, then nothing matters" – your wager is about the importance of qualia, but I've also seen similar reasoning around normativity as the bedrock, or free will), and (2) a simplicity/"lack of plausible alternatives" argument for hedonism. This sort of argument for hedonism only works if you take realism for granted, but that's where the wager comes in handy. (Still, one could argue that tranquilism is 'simpler' than hedonism and therefore more likely to be the one true morality, but okay.) Note that this combination of views isn't quite "being confident in moral realism," though. It's only "confidence in acting as though moral realism is true."

I talk about wagering on moral realism in this dialogue and the preceding post. In short, it seems fanatical to me if taken to its conclusions, and I don't believe that many people really believe this stuff deep down without any doubt whatsoever. Like, if push comes to shove, do you really have more confidence in your understanding of illusionism vs other views in philosophy of mind, or do you have more confidence in wanting to reduce the thing that Brian Tomasik calls suffering, when you see it in front of you (regardless of whether illusionism turns out to be true)? (Of course, far be it from me to discourage people from taking weird ideas seriously; I'm an EA, after all. I'm just saying that it's worth reflection if you really buy into that wager wholeheartedly

my suspicion is that you'd run into difficulties defining what it means for morality to be real/part of the territory and also have that be defined independently of "whatever causes experts to converge their opinions under ideal reasoning conditions." 

In the absence of new scientific discoveries about the territory, I'm not sure whether experts (even "ideal" ones) should converge, given that an absence of evidence tends to allow room for personal taste. For example, can we converge on the morality of abortion, or of factory farms, without understandin... (read more)

2 · Lukas_Gloor · 7mo
If you don't think you know what the moral reality is, why are you confident that there is one?

I discuss possible answers to this question here and explain why I find all of them unsatisfying. The only realism-compatible position I find somewhat defensible is something like "It may turn out that morality isn't a crisp concept in thingspace that gives us answers to all the contested questions (population ethics, comparing human lives to other sentient beings, preferences vs hedonism, etc), but we don't know yet. It may also turn out that as we learn more about the various options and as more facts about human minds and motivation and so on come to light, there will be a theory that 'stands out' as the obvious way of going about altruism/making the world better. Therefore, I'm not yet willing to call myself a confident moral anti-realist."

That said, I give some arguments in my sequence why we shouldn't expect any theory to 'stand out' like that. I believe these questions will remain difficult forever and competent reasoners will often disagree on their respective favorite answers. This goes back to the same disagreement we're discussing, the one about expert consensus or lack thereof.

The naturalist version of "value is a part of the territory" would be that when we introspect about our motivation and the nature of pleasure and so on, we'll agree that pleasure is what's valuable. However, empirically, many people don't conclude this; they aren't hedonists. (As I defend in the post, I think they aren't thereby making any sort of mistake. For instance, it's simply false that non-hedonist philosophers would categorically be worse at constructing thought experiments to isolate confounding variables for assessing whether we value things other than pleasure only instrumentally. I could totally pass the Ideological Turing test for why some people are hedonists. I just don't find the view compelling myself.)

At this point, hedonists could either concede that there's n

I was about to make a comment elsewhere about moral realism when it occurred to me that I didn't have a strong sense of what people mean by "moral realism", so I whipped out Google and immediately found myself here. Given all those references at the bottom, it seems like you are likely to have correctly described what the field of philosophy commonly thinks of as moral realism, yet I feel like I'm looking at nonsense.

Moral realism is based on the word "real", yet I don't see anything I would describe as "real" (in the territory-vs-map sense) in Philippa Fo... (read more)

3 · Lukas_Gloor · 7mo
Yeah, that's why I also point out that I don't consider Foot's or Railton's account worthy of the name "moral realism." Even though they've been introduced and discussed that way. I think it's surprisingly difficult to spell out what it would mean for morality to be grounded in the territory. My "One Compelling Axiology" version of moral realism constitutes my best effort at operationalizing what it would mean. Because if morality is grounded in the territory, that should be the cause for ideal reasoners to agree on the exact nature and shape of morality.

At this point of the argument, philosophers of a particular school tend to object and say something like the following: "It's not about what human reasoners think or whether there's convergence of their moral views as they become more sophisticated and better studied. Instead, it's about what's actually true! It could be that there's a true morality, but all human reasoners (even the best ones) are wrong about it."

But that sort of argument begs the question. What does it mean for something to be true if we could all be wrong about it even under ideal reasoning conditions? That's the part I don't understand. So, when I steelman moral realism, I assume that we're actually in a position to find out the moral truth. (At least that this is possible in theory, under the best imaginable circumstances.)

There's an endnote in a later post in my series that's quite relevant to this discussion. The post is Moral uncertainty and moral realism are in tension, and I'll quote the endnote here:

In the above endnote, I try to defend why I think my description of the One Compelling Axiology version of moral realism is a good steelman, despite some moral realists not liking it because I don't allow for the possibility that moral reality is forever unknowable to even the best human reasoners under ideal reasoning conditions.

Definitely! I'm assuming "ideal reasoning conditions" – a super high bar, totally unrealistic in real

Well, okay. I've argued that other decision procedures and moralities do have value, but are properly considered subordinate to CU. Not sure if these ideas swayed you at all, but if you're Christian you may be thinking "I have my Rock" so you feel no need for another.

If you want to criticize utilitarianism itself, you would have to say the goal of maximizing well-being should be constrained or subordinated by other principles/rules, such as requirements of honesty or glorifying God/etc.

You could do this, but you'd be arguing axiomatically. A claim like "my... (read more)

Thanks for taking my comment in the spirit intended. As a noncentral EA it's not obvious to me why EA has little art, but it could be something simple like artists not historically being attracted to EA. It occurs to me that membership drives have often been at elite universities that maybe don't have lots of art majors.

Speaking personally, I'm an engineer and an (unpaid) writer. As such I want to play to my strengths, and any time I spend on making art is time not spent using my valuable specialized skills... at least I started using AI art in my latest art... (read more)

2 · Jeffrey Kursonis · 10mo
Wow thanks for your long and thoughtful reply. I really do appreciate your thinking and I'm glad CU is working for you and you're happy with it...that is a good thing.

I do think you've given me a little boost in my argument against CU unfortunately, though, in the idea that our brain just doesn't have enough compute. There was a post a while back from a well-known EA about their long experience starting orgs and "doing EA stuff" and how the lesson they'd taken from it all is that there are just too many unknown variables in life for anything we try to build and plan outcomes for to really work out how we hoped...it's a lot of shots in the dark and sometimes you hit. That is similar to my experience as well...and the reason is we just don't have enough data nor enough compute to process it all...nor adequate points or spectrums of input. The thing that better fits in that kind of category is a robot who with an AI mind can do far more compute...but even they are challenged. So for me that's another good reason against CU optimizing well for humans.

And the other big thing I haven't mentioned is our mysterious inner life, the one that responds to spirituality and to emotions within human relationships, and to art...this part of us does not follow logic or compute...it is somehow organic and you could almost say quantum in how we are connected to other people...living with it is vital for happiness...I think the attraction of CU is that it adds to us the logic side that our inner life doesn't always have...and so the answer is to live with both together...to use CU thinking for the effective things it does, but also to realize where it is very ineffective toward human thriving...and so that may be similar to the differences you see between naive and mature CU. Maybe that's how we synthesize our two views.

How I would apply this to the Original Post here is that we should see "the gaping hole where the art should be" in EA as a form of evidence of a bug in EA that

Well... Communism is structurally disinclined to work in the envisioned way. It involves overthrowing the government, which involves "strong men" and bloodshed; the people who lead a communist regime tend to be strongmen who rule with an iron grip ("for the good of communism", they might say) and are willing to use murder to further their goals. Thanks to this it tends to involve a police state and central planning (which are not the characteristics originally envisioned). More broadly, communism isn't based on consequentialist reasoning. It's an exaggerat... (read more)

6 · Jeffrey Kursonis · 10mo
Yes I appreciate very much what you're saying, I'm learning much from this dialogue. I think what I said that didn't communicate well to you and Brad West isn't some kind of comparison of utilitarianism and communist thought...but rather how people defend their ideal when it's failing, whatever it is...religion, etc. that, "They're not doing it right"..."If you did it right (as I see it) then it would produce much better stuff".

EA is uniquely bereft of art in comparison to all other categories of human endeavor: education, business, big tech, military, healthcare, social society, etc. So for EA there's been ten years of incredible activity and massive funding, but no art in sight...so whatever is causing that is a bug and not a feature. Maybe my thesis that utilitarianism is the culprit is wrong. I'd be happy to abandon that thesis if I could find a better one. But given that EA "attracts, creates and retains consequentialists" as you say, and that they are hopefully not the bad kind that doesn't work (naive) but the good kind that works (mature), then why the gaping hole in the center where the art should be? I think it's not naive versus mature utilitarianism, it's that utilitarianism is a mathematical algorithm and simply doesn't work for optimizing human living...it's great for robots. And great for the first pioneering wave of EA blazing a new path...but ultimately unsustainable for the future.

Eric Hoel does a far better job outlining the poison in utilitarianism that remains no matter how you dilute it or claim it to be naive or mature (but unlike him I am an Effective Altruist). And of course I agree with you on the "it's hard to tell one religion to be another religion", which I myself said in my reply post. In fact, I have a college degree in exactly that - Christian Ministry with an emphasis in "missions" where you go tell people in foreign countries to abandon their culture and religion and adopt yours...and amazingly, you'd be surprised at how wel

I think surely EA is still pluralistic ("a question") and I wouldn't be at all surprised if longtermism gets de-emphasized or modified. (I am uncertain, as I don't live in a hub city and can't attend EAG, but as EA expands, new people could have new influence even if EAs in today's hub cities are getting a little rigid.)

In my fantasy, EAs realize that they missed 50% of all longtermism by focusing entirely on catastrophic risk while ignoring the universe of Path Dependencies (e.g. consider the humble Qwerty keyboard―impossible to change, right? Well, I'm ... (read more)

I see this as a fundamentally different project than Wikipedia. Wikipedia deliberately excludes primary sources / original research and "non-notable" things, while I am proposing, just as deliberately, to include those things. Wikipedia requires a "neutral point of view" which, I think, is always in danger of describing a linguistic style rather than "real" neutrality (whatever that means). Wikipedia produces a final text that (when it works well) represents a mainstream consensus view of truth, but I am proposing to allow various proposals about what is t... (read more)

Oh, I've heard all this crap before

This is my first time.

to develop expert-systems to identify the sequences of coding and non-coding DNA that would need to be changed to morally enhance humans

Forgive my bluntness, but that doesn't sound practical. Since when can we identify "morality nucleotides"?

I suspect morality is more a matter of cultural learning than genetics. No genetic engineering was needed to change humans from slave-traders to people who find slavery abhorrent. Plus, whatever genetic bits are involved, changing them sounds like a huge political can of worms.

1 · Paul J. Watson · 1y
I agree that morals are not genetically inherited, and I did not mean to imply that. Morals are learned, because given the vicissitudes of human life, and the dynamism of what it takes to successfully cooperate in large groups, there never would have been stable selection to favor even general morals like, say, The Golden Rule. In human life, everyone must learn their in-group's morals. They also have to learn the styles and parameters of moral deliberation processes that are acceptable within their group.

I do think that the cognitive capacity for moral deliberation, which will involve the collaboration of many parts of the brain, must have a heritable genetic foundation. How complex that foundation is remains an empirical question, but it is probably quite complex. In the coming years, however, I think it is reasonable to expect that domain-specific, expert-system AI will be able to help us identify key genes and their variants (alleles), as well as gene*gene interactions, that influence the development of a species-typical moral deliberation style, including the non-random ways our moral deliberation system responds to various socioecological circumstances. In the same way, such domain-specific AI will allow us to understand the complex genetic basis of many diseases and other traits we more predictably would want to modify, because they involve fitness enhancement, like health and longevity, beauty, intelligence, etc.

Moreover, such expert systems could help us devise moral enhancement strategies based on genetic engineering (and if we are lucky by less invasive epigenetic engineering) that are most likely to be effective and efficient, with minimal onerous side effects. We are not starting from Ground Zero! There is good evidence that certain psychedelic substances can result in chemically-induced moral enhancement. One good starting point would be to look at the mechanisms behind that. Who knows? In the end it may only require fairly minor genetic mod

I'm sure working for Metaculus or Manifold or OWID would be great.

I was hoping to get some help thinking of something smaller in scope and/or profitable that could eventually grow into this bigger vision. A few years from now, I might be able to afford to self-fund it by working for free (worth >$100,000 annually) but it'll be tough with a family of four and I've lost the enthusiasm I once had for building things alone with no support (it hasn't worked out well before). Plus there's an opportunity cost in terms of my various other ideas. Somehow I have to figure out how to get someone else interested...

2 · O Carciente · 1y
Maybe the right approach is not "developing" / "creating" this, but shifting the systems that are already partway there. You might have a bigger impact if you were working with Wikipedia to shift it more towards the kind of system you would like, for example.  I really doubt that something like this would be profitable quickly, on the grounds that its utility would be derived from its rigour and... Well, people take a while to notice the utility of rigour. 

You were right: this is one of the least popular ideas around. Perhaps even EAs think the truth is easy to find, that falsehoods aren't very harmful, or that automation can't help? I'm confused too. LW liked it a bit more, but not much.

4 · PeterSlattery · 1y
Thanks for writing this! I agree truth seeking is important.

My low confidence intuition from a quick scan of this is that something like your project probably provides a lot of value if completed. However, it seems that it would be very hard to do, probably not profitable for several years, and never particularly profitable. That's going to make it hard to fund/do. With that in mind, I'd probably look into taking the ideas and motivation and putting them into supporting something else.

With that in mind, have you considered joining the team at Metaculus or Manifold Markets?

I wouldn't read into the lack of response too much. In recent times, most people who read the EA forum seem to be looking for quick and digestible insights from sources they know and trust. This is true of me also. The consequence is that posts like yours get overlooked as they dip in and out.

Sort of related to this, I started to design an easier dialect of English because I think English is too hard and that (1) it would be easier to learn it in stages and (2) two people who have learned the easier dialect could speak it among themselves. This would be nice in reverse; I married a Filipino but found it difficult to learn Tagalog because of the lack of available Tagalog courses and the fact that my wife doesn't understand and cannot explain the grammar of her language. I wish I could learn an intentionally-designed pidgin/simplified version of... (read more)

Well, it's new. There are some comments on LW. Currently I'm not ready to put much time into this, but what are your areas of expertise?

2 · O Carciente · 1y
I am familiar with a few different areas, but I don't think I have a lot of expertise (hence why I said I'm not in a great position to help). 

We are strongly against racism. It's just that Nick Bostrom is not racist (even though I find that his comment 26 years ago was extremely cringe and his apology wasn't done particularly well).

Perhaps you have some insight about what was meant by "the views this particular academic expressed in his communications"? The criticisms of Bostrom I've seen have consistently declined to say what "views" they are referring to. One exception to this is that I heard one person say that almost everyone thinks it is racist to say that a racial IQ gap exists. To anyone ... (read more)

Bostrom did not say it was unknown how much the gap is genetic vs environmental. He said he didn't know. This apparently made some people mad, but I think what made people more mad was that they read things into the apology that Bostrom didn't say, then got mad about it. (That's why most people criticizing the apology avoid quoting the apology.)

There is a Wikipedia page that says

The scientific consensus is that there is no evidence for a genetic component behind IQ differences between racial groups.[9 citations]

I've also glanced at a couple of scientific p... (read more)

Give a man a fish, feed him for a day. Give him money for a fishing net or a nice plow, that'll help more.

They know what they need. They just need some money for it.
 

I find it extremely [...] threatening, and quite frightening that an exalted leader [...] holds these [...] beliefs

You haven't said what "these beliefs" refers to, but given the preceding context, you seem to be strongly objecting not to any belief Bostrom holds, but to his lack of belief. In other words, it is threatening and frightening (in context) that Bostrom said: "It is not my area of expertise, and I don’t have any particular interest in the question. I would leave to others, who have more relevant knowledge, to debate whether or not in addition to... (read more)

I feel like some people are reading "I completely repudiate this disgusting email from 26 years ago" and thinking that he has not repudiated the entire email, just because he also says "The invocation of a racial slur was repulsive". I wonder if you interpreted it that way.

One thing I think Bostrom should have specifically addressed was when he said "I like that sentence". It's not a likeable sentence! It's an ambiguous sentence (one interpretation of which is obviously false) that carries a bad connotation (in the same way that "you did worse than Joe on ... (read more)

I think that drawing attention to racial gaps in IQ test results without highlighting appropriate social context is in-and-of itself racist.

Why is it that this doesn't count as highlighting appropriate social context?

I also think that it is deeply unfair that unequal access to education, nutrients, and basic healthcare leads to inequality in social outcomes, including sometimes disparities in skills and cognitive capacity. This is a huge moral travesty that we should not paper over or downplay. [apology paragraph 2]

I guess you could say that the social cont... (read more)

That's a fair point. But Rohit's complaint goes way beyond the statement being harmful or badly constructed. Ze is beating around the bush of a much stronger and unsubstantiated claim that is left unstated for some reason: "Bostrom was and is a racist who thinks that race directly affects intelligence level (and also, his epistemics are shit)".

What ze does say: "his apology, was, to put it mildly, mealy mouthed and without much substance" "I'm not here to litigate race science." "someone who is so clearly in a position of authority...maintaining this kind... (read more)
