
davidhartsough

217 karma · Joined Jul 2019 · Working (6-15 years) · Seeking work
davidhartsough.com

Bio


Mostly regarded as a happy human, conversationalist, drummer, developer, wannabe psychologist, imminent philosopher, and stuntman — more or less... (definitely less towards the end there).

Catch me in the wild @ davidhartsough.com

I'm here to discuss "flourishing" and to just meet you! (So stop by, come thru, say hey, grab a slice of 'zza, kick off ya shoes, stay awhile. Stay stupendous.)

Comments (26)

100% agree. Each author gives each post its own intended audience (broad, narrow, niche, etc.). And sometimes it's important to deliberately choose to write for only a select audience.

Also, I've never heard of jargon as a tool for protection. Very interesting.

The curious thing to me is that the EA Forum is entirely public, so in theory anyone can read your post, even if you don't want them to. So it seems that if you need to protect either yourself or the post, you'll need other strategies. For example, you could write anonymously so that your identity can't be traced.

But I really hope the vast majority of posts don't require any kind of "protective" measures.

I should also probably clarify again that my motive is to see the EA Forum and its ideas spread to a broader audience. I want it to grow and become more welcoming and inclusive.

In fact, I just decided to write an entire post about exactly that: Toward a more approachable and accessible EA Forum

I've gotta hand it to ya @AllAmericanBreakfast: your comment inspired me to write more about how this is all connected to EA communication, connection, and community building. Cheers!

Oh, fascinating! I've never come across an idea like these "focus groups" on a forum. Have you tried this before?

(I suppose, upon reflection, I might regard the things I've created for a small audience of friends to account for maybe a third of my best work...? haha very rough estimates of course.)

I guess "Step 2" became my next post: Toward a more approachable and accessible EA Forum

(But it reads as a preface/prelude to this post, so it's a bit like how the Star Wars trilogies were released out of order... haha, admittedly a stretch of an example)

Fascinating! @Noah, have you seen this discussed in the EA community as well?

This post is "Step 1" towards a side mission: "to make the EA Forum more readable and approachable."

Improving written language for readers is a great way to practice making the EA Forum/community more:

  • friendly
  • welcoming
  • inclusive
  • congenial
  • compelling
  • considerate
  • thoughtful
  • caring
  • kind
  • understandable
  • understanding
  • (and also, yeah, altruistic)

From the outside looking in:

"A readable and approachable writing/forum" = "A reasonable and approachable people/community"

  • By making writings more readable, you demonstrate your understanding of others.
  • By making writings more approachable, you demonstrate your care for others.

I love the comparison to corporations! I've never heard that before and think it's terrific.

Overall well-written and clever. Good formatting. Readable and skimmable. (This is one of those posts that genuinely needs to be a "41 minute read".) Many reasons to give props to this.

My favorite quote:

"There are large and powerful systems doing things vastly beyond the ability of individual humans, and acting in a definitively goal-directed way. We have a vague understanding of their goals, and do not assume that they are coherent. Their goals are clearly not aligned with human goals, but they have enough overlap that many people are broadly in favor of their existence. They seek power. This all causes some problems, but problems within the power of humans and other organized human groups to keep under control, for some definition of ‘under control’."

Some people in the EA community who are particularly terrified of AI risks might respond, "well, this scares me almost as much!" In which case, maybe we can hope for a branch of EA cause areas focused on all kinds of risks from "large and powerful systems", including megacorporations.

GREAT post! Such a fantastic and thorough explanation of a truly troubling issue! Thank you for this.

We definitely need to distinguish what I call the various "flavors" of EA. And we have many options for how to organize this.

Personally, I'm torn because, on one hand, I still want to bring everyone together under an umbrella movement with several "branches" within it. However, I agree that, as you note, this situation feels much more like the difference between the rationality community and the EA community: "the two have a lot of overlap but they are not the same thing and there are plenty of people who are in one but not the other. There’s also no good umbrella term that encompasses both." The EA movement and the extinction risk prevention movement are absolutely different.

And anecdotally, I really want to note that the people who are emphatically in one camp but not the other are very different people. So while I often want to bring them together harmoniously in a centralized community, I've honestly noticed that the two groups don't relate as well and sometimes even argue more than they collaborate. (Again, just anecdotal evidence here — nothing concrete, like data.) It's kind of like the people who represent these movements don't exactly speak the same language and don't exactly always share the same perspectives, values, worldviews, or philosophies. And that's OK! Haha, it's actually really necessary I think (and quite beautiful, in its own way).

I love the parallel you've drawn between the rationality community and the EA community. It's the perfect example: people have found their homes among two different forums, different global events, and different organizations. People in each community share views and can reasonably expect others within the group to be familiar with its ideas, writings, and terms. (For example, someone in EA can reasonably expect someone else who claims any relation to the movement/community to know about 80000 Hours and to have at least skimmed the "key ideas" article, whereas people in the rationality community would reasonably expect everyone within it to be familiar with HPMOR. But these expectations don't necessarily carry across the two communities.)

You've also pointed out amazing examples of how the motivations behind people's involvement vary greatly. And that's one of the strongest arguments for distinguishing communities; there are distinct subsets of core values. So, again, people don't always relate to each other across the wide variety of diverse cause areas and philosophies. And that's OK :)

Let's give people a proper home — somewhere they feel like they truly belong and aren't constantly questioning whether they belong. Anecdotally, I've seen so many of my friends struggle to witness EA's shifts in focus towards X-risks like AI. My friends can't relate to it; they feel it's a depressing way to view this movement and to dedicate their careers. So much so that we often stop identifying with the community/movement depending on how it's framed and contextualized. (If I were in a social setting where one person [Anne] who knows about EA was explaining it to someone [Kyle] who had never heard of it, and Kyle then asked me if I am "an EA", I might vary my response depending on the presentation Anne provided. If her explanation was really heavy-handed with an X-risk/AI focus, I'd probably have a hard time explaining that I work for an EA org that is actually completely unrelated... Or I might just say something like "I love the people and the ideas" haha)

I'm extra passionate about this because I've been preparing a forum post called either "flavors of EA" or "branches of EA" that would propose this same set of ideas! But you've done such a great job painting a picture of the root issues. I really hope this post and its ideas gain traction. (I'm not gonna stop talking about it until they do haha) Thanks, ParthThaya!

Hey Stijn, loved your post! Would you be interested in writing a second version of this without the use of philosophical terms (jargon) so that a common layperson would be able to easily read and understand these ideas? (I'd like to be able to share these ideas, but I wouldn't be able to with the current terminology. I understand it is written for an audience who has a background in moral philosophy and ethics, but I want to share these ideas with people who don't have that prior knowledge.)

Thanks Teo!

Thank you for these thoughtful reflections! This is exactly the kind of discussion I was hoping this might generate.

  1. Is flourishing even possible "all else being equal", such as in an experience machine?

Hmmm, depends on how magical your machine is 😅 and not to be that guy again, but it depends on your definition of flourishing. (I'm choosing to not impose any of my own ideas in this post and even in the comments for now.)

Let's take the PERMA theory of well-being from Seligman as an example though. He'd probably say:

"If the machine completely stimulates a reality in which I can pursue some or all of these lifestyles of PERMA, then I could flourish in it. So if I could experience and cultivate positive emotions and engagement, and if I could have other simulated beings with me to relate to and build meaning with, then you've probably got an experience machine that allows for flourishing."

To be fair though, I'm not sure Seligman is clear on the intricate details here, such as the questions of "what about relationships in particular do humans truly value?" or "what might the machine need to offer to help people forge meaning?" or "what might one do in the machine to experience engagement?"

I feel bad leaving this question largely unanswered for you, but I'll let you and others discuss!

  2. Relatedly: To what degree does flourishing refer to positive intrinsic vs. extrinsic value?

It seems as though so many of these theories are hinting at intrinsic values, yet it's strange to not see the term widely used in the literature.

For example, the last two theories listed in this document claim that each element in the model is "universally desired", "an end in and of itself", and "pursued by many people for its own sake, not merely to get any of the other elements."

That kind of phrasing really insinuates "universal intrinsic values". So I think these psychologists would pretty much all say, "yeah, flourishing directly relates to intrinsic values."

Ok, I'll "give in" (AKA step outside my choice not to impose my own thoughts) just for a moment here to give you two hot takes:

#1.) Those two theorists, Seligman and VanderWeele, did not use any data when compiling their lists of domains. To be frank, it feels like armchair philosophy. They claim the elements of their models to be essentially exhaustive lists of the most universal intrinsic values, but they didn't test this at all. They didn't even run a worldwide survey. (To be fair, Harvard did run some surveys years later.) They didn't have a systematic method for arriving at their models of multidimensional well-being. I'll write more about this in another post sometime, but I wanted to leave a fair warning here: while these theories do refer to intrinsic values, their approach unfortunately lacks a scientific process.

#2.) I believe any theory of flourishing should begin from a theory of intrinsic values (both in a philosophical sense and even in a "data-driven" sense). By this I mean that any theory suggesting a mutually exclusive and collectively exhaustive list of the domains of well-being would need to first clarify and define the intrinsic values presumed in the theory. As a basic example, these theories all presume that humans, human life, "psychological functioning", and other concepts are intrinsically valuable. I would say this principle is doubly applicable to any theory of "needs" as well (and there's often overlap, where a theory of well-being models a theory of needs). To say that there is any "need" at all in this universe is to assume premises of intrinsic values. To say that a human "needs" to eat nutrients assumes that we care about that human's biological systems functioning well (and that we care about that human's health and life, and that human in general). (Haha, pardon the ramble, but my personal answer to your question is: "to what degree? In the first degree!" 😅)
