Hayven Frienby

Owner / Operator @ Altitude Information Services
176 karma · Joined Nov 2023 · Working (15+ years) · Pittsfield, MA 01201, USA
www.altitudeIS.com

Bio

A person committed to reason, equity, kindness, and simple living. I’m here for interesting discussions and broader perspectives than those I normally get locally. I live with my two cats (Oliver and Keziah [Kizzie]) and sometimes rescue animals, too. 

My career is in analysis and grant writing, and I also have a background in coding (Python, C++, R, etc.). I'm the owner/operator of Altitude Information Services (AIS), a small, human-centered firm offering fundraising and evaluation services to nonprofits and startups, with an eye toward increasing their program effectiveness. I'm also a grad school dropout, looking for opportunities to continue my research outside of academia.

Aside from EA in and of itself, my ethical orientations include moral realism, veganism, longtermism, bioconservatism, and existential / societal risk reduction, especially related to AI. My goal is the abolition of all current AI systems, the permanent end to all AI capabilities research, and a global, constitutional acceptance of the natural human condition and a human-driven, non-automated world. A third main focus of mine is finding ways to make EA philosophy and community more accessible to working-class people and to reduce elitism in the community.

My passions include chess, hiking, kayaking, rock climbing, and backcountry skiing, and I also really love learning languages (I speak English, Spanish, and Japanese, and am studying Chinese and Russian). 

Last and least, I’m non-binary and greatly prefer they/them pronouns.

How others can help me

I'm looking for opportunities to get more connected to the EA community, both IRL and online. I'm not only new to EA but also live in a rural, more conservative area where beliefs like these aren't widely accepted. Traveling to a larger city (Boston, San Francisco, New York, etc.) for events is okay with me.

How I can help others

Please reach out to me if you have questions about program evaluation, grant writing, qualitative methods, or fundraising, especially as it relates to startup nonprofits. This is my profession, and I am more than glad to give input. 

Comments (70)

Agreed completely. A genetic component influencing dietary decisions doesn't mean that veganism / vegetarianism is out of reach for most or that cultural factors play no role in the adoption of animal-friendly lifestyles. There's definitely still a role for advocacy regardless of the heritability of veg*nism.

As someone who has done vegan advocacy for a long time, this matches my experience, unfortunately. A meatless diet just "clicks" with some people, while with others it's nearly impossible to get them to stick with a diet free of meat (let alone other animal products). A genetic component would certainly explain my observations, because there definitely seems to be something deeper than underlying belief or commitment going on.

If anything, this further underscores the need for cellular agriculture (lab-grown meat / eggs / dairy, without harm to animals). We need to find a way to make these foods cheap and cruelty-free, since universal veganism / vegetarianism may not be possible (although there are certainly a lot of cultural barriers that can be addressed first). 

I'll admit this was a lot to take in, and intuitively I'm inclined to reject fanaticism simply because it seems more reasonable to believe that high-probability interventions are always better than low-probability ones. This position, for me at least, is rooted in normalcy bias, and if there's one thing Effective Altruism has taught me, it's that normalcy bias can be a formidable obstacle to doing good.

Thank you for sharing this, and the linked LW article as well. It's really helpful to have a guide like this, and even better that it is supported by evidence. Normalcy bias would ordinarily have prevented me from doing things like this, but if there's one thing that joining and engaging with Effective Altruism* has done, it's slowly breaking down my normalcy bias.

 

*and just seeing society since the pandemic, to be honest

Then that would make Scenario A much more attractive to me (not necessarily from a moral perspective), and I apologize for misunderstanding your hypothetical. To be honest, with the caveat of forever, I’m not sure which scenario I’d prefer. A is certainly much more interesting to me, but my moral calculus pushes me to conclude B is more rational.

I also get that it’s an analogy to get me thinking about the deeper issues here, and I understand. My perspective is just that, while I certainly find the philosophy behind this interesting, the issue of AI permanently limiting human potential isn’t hypothetical anymore.

It’s likely to happen in a very short period of time (relative to the total lifespan of the species), unless indefinite delay of AI development really is socio-politically possible (and from what evidence I’ve seen recently, it doesn’t seem to be). [epistemic certainty: relatively low, 60%]

In this (purely hypothetical, functionally impossible) scenario, I would choose option B -- not because of the mild, transient suffering in scenario A, but because of the possibility of serious suffering emerging in the future (which doesn't exist in B).

Happiness is also extremely subjective, and therefore can't be meaningfully quantified, while the things that cause suffering tend to be remarkably consistent across times, places, and even species. So basing a moral system on happiness (rather than suffering-reduction) seems to make no sense to me.  

I've had to sit with this comment for a bit, both to make sure I didn't misunderstand your perspective and to make sure I was conveying my views accurately.

I agree that population ethics can still be relevant to the conversation even if its full conclusion isn't accepted. Moral problems can arise from, for instance, a one-child policy, and this is in the purview of population ethics without requiring the acceptance of some kind of population-maximizing hedonic system (which some PE proponents seem to support). 

As for suffering--it is important to remember what it actually is. It is the pain of wanting to survive but being unable to escape disease, predators, war, poverty, violence, or myriad other horrors. It's the gazelle's agony at the lion's bite, the starving child's cry for sustenance, and the dispossessed worker's sigh of despair. It's easy (at least for me) to lose sight of this, of what "suffering" actually is, and so it's important for me to state this flat out. 

So, being reminded of what suffering is, let's think about the kind of world where it can flourish. More population = more beings capable of suffering = more suffering in existence, for all instantiations of reality that are not literally perfect (since any non-perfect reality would contain some suffering, and this would scale up linearly with population). So lower populations are better from a moral perspective, because they have lower potential for suffering. 

Most people I've seen espouse a pro-tech view seem to think (properly aligned) smarter-than-human AI will bring a utopia, similar to the paradises of many myths and faiths. Unless it can actually do that (and I have no reason to believe it will), suffering-absence (and therefore moral good, in my perspective) will always be associated with lower populations of sentient beings. 

True, but they are still vastly large numbers--and they are all biological, Earth-based beings, given that we continue to exist as we did in 2010. I think that is far more valuable than transforming the affectable universe for the benefit of "digital persons" (who aren't actual persons, since to be a person is to be both sentient and biological).

I also don't really buy population ethics. It is the quality of life, not the duration of an individual's life or the sheer number of lives, that determines value. My ethics are utilitarian but definitely lean more toward the suffering-avoidance end of things--and lower populations have lower potential for suffering (at least in aggregate).

I will admit that my comments on indefinite delay were intended to be the core of my question, with “forever” being a way to get people to think “if we never figure it out, is it so bad?”

As for the suffering costs of indefinite delay, I think most of those are pretty well-known (more deaths due to diseases, more animal suffering/death due to lack of cellular agriculture [but we don’t need AGI for this], higher x-risk from pandemics and climate effects), with the odd black swan possibility still out there. I think it’s important to consider the counterfactual conditions as well—that is, “other than extinction, what are the suffering costs of NOT indefinite delay?”

More esoteric risks aside (Basilisks, virtual hells, etc.), disinformation, loss of social connection, loss of trust in human institutions, economic crisis and mass unemployment, and a permanent curtailing of human potential by AI (making us permanently a “pet” species, totally dependent on the AGI) seem like the most pressing short-term (~0.01–100 years) s-risks of not-indefinite-delay. The amount of energy AI consumes can also exacerbate fossil fuel exhaustion and climate change, which carry strong s-risks (and distant x-risk) as well; this is at least a strong argument for delaying AI until we figure out fusion, high-yield solar, etc.

As for that third question, it was left out because I felt it would make the discussion too broad (the theory plus this practicality seemed like too much). “Can we actually enforce indefinite delay?” and “what if indefinite delay doesn’t reduce our x-risk?” are questions that keep me up at night, and I’ll admit that I don’t know much about the details of arguments centered on compute overhang (I need to do more reading on that specifically). I am convinced that the current path will likely lead to extinction, based on existing work on sudden capability increases with AGI, combined with its fundamental lack of connection to objective or human values.

I’ll end with this—if indefinite delay turns out to increase our x-risk (or if we just can’t do it for sociopolitical reasons), then I truly envy those who were born before 1920—they never had to see the storm that’s coming.

Such lives wouldn't be human or even "lives" in any real, biological sense, and so yes, I consider them to be of low value compared to biological sentient life (humans, other animals, even aliens should they exist). These "digital persons" would be AIs, machines, with some heritage from humanity, yes, but let's be clear: they aren't us. To be human is to be biological, mortal, and Earthbound -- those three things are essential traits of Homo sapiens. If those traits aren't there, one isn't human, but something else, even if one was once human. "Digitizing" humanity (or even the entire universe, as suggested in the Newberry paper) would be destroying it, even if it is an evolution of sorts.

If there's one issue with the EA movement that I see, it's that our dreams are far too big. We are rationalists, but our ultimate vision for the future of humanity is no less esoteric than the visions of Heavens and Buddha fields written by the mystics--it is no less a fundamental shift in consciousness, identity, and mode of existence. 

Am I wrong for being wary of this on a more than instrumental level (I would argue that even Yudkowsky's objections are merely instrumental, centered on x- and s-risk alone)? I mean, what would be suboptimal about a sustainable, Earthen existence for us and our descendants? Is it just the numbers (can the value of human lives necessarily be measured mathematically, much less in numbers)?

[This comment is no longer endorsed by its author]