533 karma · Joined Feb 2017


I'm a senior software developer in Canada (earning ~US$70K in a good year) who, being late to the EA party, earns to give. Historically I've had a chronic lack of interest in making money; instead I've developed an unhealthy interest in foundational software that free markets don't build because its effects would consist almost entirely of positive externalities.

I dream of making the world better by improving programming languages and developer tools, but AFAIK no funding is available for this kind of work outside academia. My open-source projects can be seen at loyc.net, core.loyc.net, ungglish.loyc.net and ecsharp.net (among others).


If you don't think you know what the moral reality is, why are you confident that there is one?

I am confident that if there is no territory relevant to morality, then illusionism is true and (paradoxically) it doesn't matter what our maps contain because the brains that contain the maps do not correlate with any experiences in base reality. I therefore ignore illusionism and proceed with the assumption that there is something real, that it is linked to brains and correlates positively with mental experience, that it is scientifically discoverable, and that prior to such a discovery we can derive reasonable models of morality grounded in our current body of scientific/empirical information.

The naturalist version of "value is a part of the territory" would be that when we introspect about our motivation and the nature of pleasure and so on, we'll agree that pleasure is what's valuable.

I don't see why "introspecting on our motivation and the nature of pleasure and so on" should be what "naturalism" means, or why a moral value discovered that way necessarily corresponds with the territory. I expect morally-relevant territory to have similarities to other things in physics: to be somehow simple, to have existed long before humans did, and to somehow interact with humans. By the way, I prefer to say "positive valence" over "pleasure" because laymen would misunderstand the latter.

At this point, hedonists could either concede that there's no sense in which hedonism is true for everyone – because not everyone agrees. 

I don't concede because people having incorrect maps is expected and tells me little about the territory.

Or they can say something like "Well, it may not seem to you that you're making a mistake of reasoning, but pleasure has this property that it is GOOD in a normative sense irreducible to any of your other dispositions

I'm not sure what these other dispositions are, but I'm thinking on a level below normativity. I say positive valence is good because, at a level of fundamental physics, it is the best candidate I am aware of for what could be (terminally) good. If you propose that "knowledge is terminally good", for example, I wouldn't dismiss it entirely, but I don't see how human-level knowledge would have a physics-level meaning. It does seem like something related to knowledge, namely comprehension, is part of consciousness, so maybe comprehension is terminally good, but if I could only pick one, it seems to me that valence is a better candidate because "obviously" pleasure+bafflement > torture+comprehension. (fwiw I am thinking that the human sense of comprehension differs from genuine comprehension, and both might even differ from physics-level comprehension if it exists. If a philosopher terminally values the second, I'd call that valuation nonrealist.)

claiming that "hedonism is correct in some direct, empirical sense" would predict expert convergence.

🤷‍♂️ Why? When you say "expert", do you mean "moral realist"? But then, which kind of moral realist? Obviously I'm not in the Foot or Railton camp ― in my camp, moral uncertainty follows readily from my axioms, since they tell me there is something morally real, but not what it is.

Edit: It would certainly be interesting if other people start from similar axioms to mine but diverge in their moral opinions. Please let me know if you know of philosopher(s) who start from similar axioms.

my suspicion is that you'd run into difficulties defining what it means for morality to be real/part of the territory and also have that be defined independently of "whatever causes experts to converge their opinions under ideal reasoning conditions." 

In the absence of new scientific discoveries about the territory, I'm not sure whether experts (even "ideal" ones) should converge, given that an absence of evidence tends to allow room for personal taste. For example, can we converge on the morality of abortion, or of factory farms, without understanding what, in the territory, leads to the moral value of persons and animals? I think we can agree that less factory farming, less meat consumption and fewer abortions are better all else being equal, but in reality we face tradeoffs ― potentially less enjoyable meals (luckily there's Beyond Meat); children raised by poor single moms who didn't want children.

I don't even see how we can conclude that higher populations are better, as EAs often do, for (i) how do we determine what standard of living is better than non-existence, or how much suffering is worse than non-existence? (ii) how do we rule out the possibility that the number of beings doesn't scale linearly with the number of monadal experiencers? (iii) the presumed goodness of higher population must be balanced against a higher catastrophic risk of exceeding Earth's carrying capacity; and (iv) I don't see how to rule out that things other than the valence of experiences are morally (terminally) important. Plus, how to value the future is puzzling to me, appealing as longtermism's linear valuation is.

So while I'm a moral realist, (i) I don't presume to know what the moral reality actually is, (ii) my moral judgements tend to be provisional, and (iii) I don't expect to agree on everything with a hypothetical clone of myself who starts from the same two axioms as me (though I expect we'd get along well and agree on many key points). But what everybody in my school of thought should agree on is that scientific approaches to the Hard Problem of Consciousness are important, because we can probably act morally better after it is solved. I think even some approaches that society today generally considers morally unacceptable are worth consideration, e.g. destructive experiments on the brains of terminally ill patients who (of course) gave their consent. (It doesn't make sense to do such experiments today, though: before experiments take place, plausible hypotheses must be developed that the experiments could falsify, and presumably any useful nondestructive experiments should be done first.)


I'm always a bit split whether people who place a lot of weight on qualia in their justification for moral realism are non-naturalists or naturalists.

Why? At the link you said "I'd think she's saying that pleasure has a property that we recognize as 'what we should value' in a way that somehow is still a naturalist concept. I don't understand that bit." But by the same token ― if I assume that Hewitt's "pleasure" is essentially the same thing as my "valence" ― I don't understand why you seem to think it's "illegitimate" to suppose valence exists in the territory, or what you think is there instead.

I was about to make a comment elsewhere about moral realism when it occurred to me that I didn't have a strong sense of what people mean by "moral realism", so I whipped out Google and immediately found myself here. Given all those references at the bottom, it seems like you are likely to have correctly described what the field of philosophy commonly thinks of as moral realism, yet I feel like I'm looking at nonsense.

Moral realism is based on the word "real", yet I don't see anything I would describe as "real" (in the territory-vs-map sense) in Philippa Foot or Peter Railton's forms of "realism". Indeed, I found the entire discussion of "moral realism" here to be bewilderingly abstract and platonic, sorely lacking in connection to the physical world. If I didn't know these were supposed to be "moral realist" views, I would've classified them as non-realist with high confidence. Perplexingly absent from the discussion above are the key ideas I would personally have used to ground a discussion on this topic, ideas like "qualia", "the hard problem of consciousness", "ought vs is", "axioms of belief" or, to coin a phrase, "monadal experiencers".

At the same time, you mention in a reply that "anti-realism says there is no such thing" as "one true morality" which is consistent with my intuition of what anti-realism seems like it should mean ― that morality is fundamentally grounded in personal taste. But then, Foot and Railton's accounts also seem grounded in their personal tastes.

I'm no philosopher, just a humble (j/k) rationalist. So I would like to ask how you would classify my own account of "moral realism worthy of the name": realism that must ultimately be grounded in territory rather than map.

I have three ways of describing my system of "moral realism worthy of the name". One is to say that there is some territory that would lead us to an account of morality. This territory is as-yet undiscovered by modern science, but by reductionist analysis we can still say a lot about what this morality looks like (although there will probably be quite a bit of irreducible uncertainty about morality, until science can reveal more about the underlying territory). Another is to say that I have an axiom about qualia ― that monadal experiencers exist, and experience qualia. Finally, I would say that a "moral realism worthy of the name" is also concerned with the problem of deriving ought-statements from is-statements (note: tentatively I think I can define "X should be" as equivalent to "X is good" where "good" is used in its ordinary everyday secular sense, not in an ideological or religious sense.)

Please have a look at this summary of my views on Twitter. My question is, how does this view fit into the philosophy landscape? I mean, what terms from the Encyclopedia of Philosophy would you use to describe it? Is it realist or (paradoxically) anti-realist?

By the way...

I would call myself a moral realist if I could be convinced that there is One Compelling Axiology [....] I would count something as the One Compelling Axiology if all philosophers or philosophically-inclined reasoners, after having engaged in philosophical reflection under ideal conditions,[19] would deem the search for the One Compelling Axiology to be a sufficiently precise, non-ambiguous undertaking for them to have made up their minds rather than “rejected the question,” and if these people would all come to largely the same conclusions. [....] ideal conditions for philosophical reflection means having access to everything [...] including [...] superintelligent oracle AI.

This part reads to me as if you'd been asked "what would change your mind" and you responded "realistically, nothing." But then, my background involves banging my head against the wall with climate dismissives, so I have a visceral understanding that "science advances one funeral at a time" as Max Planck said. So my next thought, more charitably, is "well, maybe Lukas will make his judgement from the perspective of an imagined future where all necessary funerals have already taken place." Separately, I note that my conception of "realism" requires nothing like this, it just requires a foundation that is real, even if we don't understand it well.

Well, okay. I've argued that other decision procedures and moralities do have value, but are properly considered subordinate to CU (consequentialist utilitarianism). Not sure if these ideas swayed you at all, but if you're Christian you may be thinking "I have my Rock", so you feel no need for another.

If you want to criticize utilitarianism itself, you would have to say the goal of maximizing well-being should be constrained or subordinated by other principles/rules, such as requirements of honesty or glorifying God/etc.

You could do this, but you'd be arguing axiomatically. A claim like "my axioms are above those of utilitarians!" would just be a bare assertion with nothing to back it up. As I mentioned, I have axioms too, but only the bare minimum necessary, because axioms are unprovable, and years of reflection led me to reject all unnecessary axioms.

You could say something like the production of art/beauty is intrinsically valuable apart from the well-being it produces and thus utilitarianism is flawed in that it fails to capture this intrinsic value (and only captures the instrumental value).

The most important thing to realize is that choosing which "things have intrinsic value" is a choice that lies outside consequentialism. A consequentialist could indeed adopt an axiom that "art is intrinsically valuable". Calling it "utilitarian" feels like nonstandard terminology, but such a value assignment seems utilitarian-adjacent, unless you treat it merely as a virtue or rule rather than as a goal to pursue.

Note, however, that beauty doesn't exist apart from an observer to view it, which is part of the reason I think this choice would be a mistake. Imagine a completely dead universe―no people, no life, no souls/God/heaven/hell, and no chance life will ever arise. Just an endless void pockmarked by black holes. Suppose there is art drifting through the void (perhaps Boltzmann art, by analogy to a Boltzmann brain). Does it have value? I say it does not. But if, a billion light years beyond the light cone of this art that can never be seen, one solitary civilization remains alive, I argue that this civilization's art is infinitely more valuable.

More pointedly, I would say that it is the experience of art that is valuable, not the art itself; art is instrumentally valuable, not intrinsically. Thus a great work of art viewed by a million people has delivered 100 times as much value as the same art seen by only 10,000 people―though one should take into account countervailing factors, such as the first 10,000 viewers being more likely to be connoisseurs who appreciate it a lot, and the fact that any art you experience takes away time you might have spent on other art. For me, I wish I could listen to more EA and geeky songs (as long as they have beautiful melodies), but lacking that, I still enjoy hearing nice music that isn't tailored as much to my tastes.

Thus EA art is less valuable in a world that is already full of art. But EA art could be instrumentally valuable both in the experiences it creates (experiences with intrinsic value) and in its tendency to make the EA movement healthy and growing.

So, to be clear, I don't see a bug in utilitarianism; I see the other views as the ones with bugs. This is simply because I see no flaws in my moral system, but I do see flaws in other systems. There are of course flaws in myself, as I illustrate below.

And the other big thing I haven't mentioned is our mysterious inner life, the one that responds to spirituality and to emotions within human relationships, and to art...this part of us does not follow logic or compute...it is somehow organic and you could almost say quantum in how we are connected to other people...living with it is vital for happiness

I think it's important to understand and accept that humans cannot be maximally moral; we are all flawed. And this is not a controversial statement to a Christian, right? We can be flawed even in our own efforts to act morally!

I'll give an example from last year, when Russia invaded Ukraine. I was suddenly and deeply interested in helping, seeing that a rapid response was needed. But other EAs weren't nearly as interested as I was. I would've argued that although Ukrainian lives are less cost-effective to save than African lives, there was a meaningful longtermist angle: Russia was tipping the global balance of power from democracy to dictatorship, and if we didn't respond strongly against Putin, Xi Jinping could be emboldened to invade Taiwan; in the long term, this could lead to tyranny taking over the world (EA relies on freedom to work, so this is a threat to us). Yet I didn't make this argument to EAs; I kept it to myself (perhaps for the same reason I waited ~6 months to publish this: I was afraid EAs wouldn't care). I ended up thinking deeply about the war―about what it might be like as a civilian on the front lines, about what kinds of things would help Ukraine most on a small-ish budget, about how the best Russians weren't getting the support they deserved, and about how events might play out (which turned into a hobby of learning and forecasting, and the horrors of war I've seen are like―holy shit, did I burn out my empathy unit?). But not a single EA organization offered any way to help Ukraine, and I was left looking for other ways to help. So I ended up giving $2000 CAD to Ukraine-related causes, half of which went to Ripley's Heroes, which turned out to be a (probably, mostly) fraudulent organization. Not my best EA moment!

From a CU perspective, I performed badly. I f**ked up. I should've been able to look at the situation and accept that there was no way I could give military aid effectively with the information I had. I certainly knew that military aid was far from an ideal intervention and that there were probably better interventions; I just didn't have access to them AFAIK. The correct course of action was not to donate to Ukraine (understanding that some people could help effectively, just not me). But emotionally I couldn't accept that. You know, I have no doubt that it was myself that was flawed and not my moral system; CU's name be praised! 😉 Also, I don't really feel guilty about it; I just think "well, I'm human, I'll make some mistakes and no one's judging me anyway; hopefully I'll do better next time."

In sum: humans can't meet the ideals of (M)CU (mature consequentialist utilitarianism), but that doesn't mean (M)CU isn't the correct standard by which to make and evaluate choices. There is no better standard. And again, the Christian view is similar, just with a different axiomatic foundation.

Edit: P.S. a relevant bit of the Consequentialism FAQ:

5.6: Isn't utilitarianism hostile to music and art and nature and maybe love?

No. Some people seem to think this, but it doesn't make a whole lot of sense. If a world with music and art and nature and love is better than a world without them (and everyone seems to agree that it is) and if they make people happy (and everyone seems to agree that they do) then of course utilitarians will support these things.

There's a more comprehensive treatment of this objection in 7.8 below.

Thanks for taking my comment in the spirit intended. As a noncentral EA it's not obvious to me why EA has little art, but it could be something simple like artists not historically being attracted to EA. It occurs to me that membership drives have often been at elite universities that maybe don't have lots of art majors.

Speaking personally, I'm an engineer and an (unpaid) writer. As such, I want to play to my strengths, and any time I spend on making art is time not spent using my valuable specialized skills... though at least I started using AI art in my latest article about AI (well, duh). I write almost exclusively about things I think are very important, because that feeling of importance is usually what drives me to write. But the result has been that my audience has normally been very close to zero (even when writing on the EA forum), which caused me to write much less and, when I do write, to write on Twitter instead or in the comment areas of ACX. Okay, I guess I'm not really going anywhere with this line of thought, but it's a painful fact that I sometimes feel like ranting about.

Here are a couple of vaguely related hypotheses: (i) maybe there is some EA art, but it's not promoted well so we don't see it; (ii) EAs can imagine art being potentially valuable, but are extremely uncertain about how and when it should be used, and so don't fund it or put time into it. EAs want to do "the most impactful thing they can" and it's hard to believe art is it. However, you can argue that EA art is neglected (even though art is commonplace) and that certain ways of using art would be impactful, much as I argued that some of the most important climate change interventions are neglected (even though climate change interventions are commonplace). I would further argue that artists are famously inexpensive to hire, which can boost the benefit/cost ratio. (Related: the most perplexing thing to me about EA is having hubs in places so expensive it would pain me to live there; I suggested Toledo, which is inexpensive and near two major cities, earning no votes or comments. Story of my life, I swear, and I've been thinking of starting a blog called "No one listens to me".)

Any religious group that does proselytizing usually gets decent results.

I noticed that too, but I assumed that (for unknown reasons) it worked better for big shifts (pagan to Christian) than more modest ones. But I mentioned "Protestant to Catholic" specifically because the former group was formed in opposition to the latter. I used to be Mormon; we had a whole doctrine about why our religion made more sense and was the True One, and it's hard to imagine any other sect could've come along and changed my mind unless they could counter the exact rationales I had learned from my church. As I see it, mature consequentialist utilitarianism is a lot like this. Unless you seem to understand it very well, I will perceive your pushback against it as being the result of misunderstanding it.

So, if you say utilitarianism is only fit for robots, I just say: nope. You say: utilitarianism is a mathematical algorithm. I say: although it can be put into mathematical models, it can also be imprinted deeply in your mind, and (if you're highly intelligent and rational) it may work better there than in a traditional computer program. This is because humans can more easily take many nuances into account in their minds than type those nuances into a program. Thus, while mental calculations are imprecise, they are richer in detail which can (with practice) lead to relatively good decisions (both relative to decisions suggested by a computer program that lacks important nuances, and relative to human decisions that are rooted in deontology, virtue ethics, conventional wisdom, popular ideology, or legal precedent).

I did add a caveat there about intelligence and rationality, because the strongest argument against utilitarianism that comes to mind is that it requires a lot of mental horsepower and discipline to be used well as a decision procedure. This is also why I value rules and virtues: a mathematically ideal consequentialist would have no need of them per se, but such a being cannot exist because it would require too much computational power. I think of rules and virtues as a way of computationally bounding otherwise intractable mental calculations, though they are also very useful for predicting public perception of one's actions (as most of the public primarily views morality through the lenses of rules and virtues). Related: superforecasters are human, and I don't think it's a coincidence that lots of EAs like forecasting as a test of intelligence and rationality.

However, I think that consequentialist utilitarianism (CU) has value for people of all intelligence levels for judging which rules and virtues are good and which are not. For example, we can explain in CU terms why common rules such as "don't steal" and "don't lie" are usually justified, and by the same means it is hard to justify rules like "don't masturbate" or the Third Reich's rule that only non-Jewish people of “German or kindred blood” could be citizens (except via strange axioms).

This makes it very valuable from a secular perspective: without CU, what other foundation is there to judge proposed rules or virtues? Most people, it seems to me, just go with the flow: whatever rules/virtues are promoted by trusted people are assumed to be good. This leads to people acting like lemmings, sometimes believing good things and other times bad things according to whatever is popular in their tribe/group, since they have no foundational principle on which to judge (they do have principles promoted by other people, which, again, could be good or bad). While Christians say "God is my rock", I say "these two axioms are my bedrock, which led me to a mountain I call mature consequentialist utilitarianism". I could say much more on this but alas, this is a mere comment in a thread and writing takes too much time. But here's a story I love about Heartstone, the magic gemstone of morality.

For predictive decision-making, choosing actions via CU works better the more processing power you use (whether mental or silicon). Nevertheless, after arriving at a decision, it should always be possible to explain the decision to people without access to the same horsepower. We shouldn't say "My giant brain determined this to be the right decision, via reasoning so advanced that your puny mind cannot comprehend it. Trust me." It seems to me that anyone using CU should be able to explain (and defend) their decision in CU terms that don't require high intelligence to understand. However, (i) the audience cannot verify that the decision is correct without using at least as much computing power, they can only verify that the decision sounds reasonable, (ii) different people have different values which can correctly lead to disagreement about the right course of action, and (iii) there are always numerous ways that an audience can misunderstand what was said, even if it was said in plain and unambiguous language (I suspect this is because many people prefer other modes of thought, not because they can't think in a consequentialist manner.)

Now, just in case I sound a bit "robotic" here, note that I like the way I am. Not because I like sounding like Spock or Data, but because there is a whole life journey spanning decades that led to where I am now, a journey where I compared different ways of being and found what seem to be the best, most useful and truth-centered principles from which to derive my beliefs and goals. (Plus I've always loved computers, so a computational framing comes naturally.)

a different reason for [...] the anxious mental health issues connected to not saving enough lives guilt[?]

I think a lot of EAs have an above-average level of empathy and sense of responsibility. My poth (hypothesis) is that these things are what caused them to join EA in the first place, and also what caused this anxiety about lives not saved and good not done. This poth leads me to predict that such a person will have had some anxiety from the first day they found out about the disease and starvation in Africa, even if joining EA increased that anxiety further. For me personally: global poverty has bothered me since I first learned about it; my deep yearning to improve the world appeared 15+ years before I learned about EA; I don't feel like my anxiety increased after joining EA; and the analysis we're talking about (in which there is a utilitarian justification not to feel bad about giving only 10% of our income) helps me not to feel too bad about the limits of my altruism. Still, I want to give much more to fund direct work, mainly because I have little confidence in my ability to persuade other EAs about what I think needs to be done (only 31 karma including my own strong upvote? Yikes! 😳😱)

Why do even military organizations have tons of art compared to EA? 

Is that true? I'm not surprised if military personnel make a lot of art, but I don't expect it from the formal structures or leadership. If a military does spend money on art, I expect it's because someone advocated for art to sympathetic ears that controlled the purse strings, and that this worked either because they were persuasive or because people simply liked art. The same should work in EA if you find a framing that appeals to EAs. (Which reminds me of the odd fact that although I identify strongly with common EA beliefs and principles, I have little confidence in my ability to persuade other EAs, as I am often downvoted or not upvoted. I cannot explain this.)

You seem to think if art was good for human optimization then consequentialists should have plenty, so why don't they around here? 

My guess is that it's a combination of

  • the difficulty EAs have had seeing art as an impactful intervention (although I feel like it could be, e.g. as a way of attracting new EAs and improving EA mental health). Note: although EAs like theoretical models and RCTs demonstrating good cost/benefit, my sense is that EA leaders also understand (in a CU manner) that some interventions are valuable enough to support even when there's no solid theoretical/scientific basis for them.
  • artists rarely becoming EAs (why? maybe selection bias in membership drives... maybe artists being turned off by EA vibes for some reason...)
  • EA being a young movement, so (i) lots of things still haven't been worked out and (ii) the smaller the movement is, the less likely it is that art is worth funding (my reasoning for this assertion is too complicated to explain briefly.)
  • something else I didn't think of (???)

Well... Communism is structurally disinclined to work in the envisioned way. It involves overthrowing the government, which involves "strong men" and bloodshed; the people who end up leading a communist regime tend to be strongmen who rule with an iron grip ("for the good of communism", they might say) and are willing to use murder to further their goals. As a result, it tends to produce a police state and central planning (which are not the characteristics originally envisioned). More broadly, communism isn't based on consequentialist reasoning. It's an exaggeration to say it's based on South Park reasoning: 1. overthrow the bourgeoisie and the government so communists can be in charge, 2. ???, 3. utopia! But I don't think it's a big exaggeration.

Individuals, on the other hand, can believe in whatever moral system they feel like and follow its logic wherever it leads. Taking care of yourself (and even your friends/family) not only fits perfectly within the logic of (consequentialist) utilitarianism, it is practical, because consequentialist logic is always practical when done correctly. Unlike communism, we can simply do it (and in fact it's kind of hard not to; it's the natural human thing to do).

What's weird about your argument is that you made no argument beyond "it's like the logic of communism". No, different things are different, you can't just make an analogy and stop there (especially when criticizing logic that you yourself described as "perfect" - well gee, what hope does an analogy have against perfect logic?)

when I discuss there being more art in EA is the Utilitarian response of "why waste money on aesthetics", or I hear about stressed anxious EA's and significant mental health needs...the only clear answer I see to these two problems is reform the Utilitarian part of EA

I think what's going on here is that you're not used to consequentialist reasoning, and since the founders of EA were consequentialists, and EA attracts, creates and retains consequentialists, you need to learn how consequentialists think if you want to be persuasive with them. I don't see aesthetics as wasteful; I routinely think about the aesthetics of everything I build as an engineer. But the reason is not something like "beauty is good"; it's a consequentialist reason (utilitarian or not) like "if this looks better, I'm happier" (my own happiness is one of my terminal goals) or "people are more likely to buy this product if it looks good" (fulfilling an instrumental goal) or "my boss will be pleased with me if he thinks customers will like how it looks" (instrumental goal). Also, for a consequentialist, aesthetics must be balanced against other things―we spend much more time on the aesthetics of some things than others because the cost-benefit analysis discounts aesthetics for lesser-used parts of the system.

You want to reform the utilitarian part, but it's like telling Protestants to convert to Catholicism. Not only is it an extremely hard goal, but you won't be successful unless you "get inside the mind" of the people whose beliefs you want to change. Like, if you just explain to Protestants (who believe X) why Catholics believe the opposite of X, you won't convince most of them that X is wrong. And the thing is, I think when you learn to think like a consequentialist―not a naive consequentialist* but a mature consequentialist who values deontological rules and virtues for consequentialist reasons―at that point you realize that this is the best way of thinking, whether one is EA or not.

(* We all still remember SBF around here, of course. He might've been a conman, but the scary part is that he may have thought of himself as a consequentialist utilitarian EA, in which case he was a naive consequentialist. For you, that might say something against utilitarianism, but for me it illustrates that nuance, care and maturity are required to do utilitarianism well.)

I think EA is surely still pluralistic ("a question"), and I wouldn't be at all surprised if longtermism gets de-emphasized or modified. (I am uncertain, as I don't live in a hub city and can't attend EAG, but as EA expands, new people could gain influence even if EAs in today's hub cities are getting a little rigid.)

In my fantasy, EAs realize that they missed 50% of all longtermism by focusing entirely on catastrophic risk while ignoring the universe of Path Dependencies. Consider the humble Qwerty keyboard―impossible to change, right? (Well, I'm not on a Qwerty keyboard, but I digress.) What if you had the chance to sell keyboards in 1910? There would still have been time to change which keyboard layout became dominant. Or what if you had the chance to prop up the Esperanto movement in its heyday around that time? This represents the universe of interventions EAs didn't notice. The world isn't calcified in every way yet―if we're quick, we can still make a difference in some areas. (Btw, before I discovered EA, that was my angle on the software industry, and I still think it's important and vastly underfunded, as capitalism is misaligned with longtermism.)

In my second fantasy, EAs realize that many of the evils in the world are a byproduct of poor epistemics, so they work on things that either improve society's epistemics or (more simply) work around the problem.

I see this as a fundamentally different project than Wikipedia. Wikipedia deliberately excludes primary sources / original research and "non-notable" things, while I am proposing, just as deliberately, to include those things. Wikipedia requires a "neutral point of view" which, I think, is always in danger of describing a linguistic style rather than "real" neutrality (whatever that means). Wikipedia produces a final text that (when it works well) represents a mainstream consensus view of truth, but I am proposing to allow various proposals about what is true (including all sorts of false claims) and to let them compete in a transparent way, so that people can see exactly why allegedly false views are being rejected. In short: Wikipedia avoids going into the weeds; I propose steering straight into them. In addition, I think that having a different social structure is valuable: Wikipedia has human editors trying to delete the crap; my site would have algorithms, fed with crowdsourced judgement, trying to move the best stuff to the top (crap left intact). Different people will be attracted to the two sites (which is good, because people like me who do not contribute to Wikipedia are an untapped resource).

I would also point out that rigor is not required or expected of users... instead I see the challenge as one of creating an incentive structure that (i) rewards a combination of rigor and clarity and (ii) when rigor is not possible due to data limitations, rewards reasonable and clear interpretations of whatever data is available.
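(To make that concrete, here's a minimal Python sketch of the kind of scoring rule I have in mind. All the names, weights, and rating scales are made up for illustration; a real incentive structure would also need things like rater reputation and defenses against brigading.)

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    rigor_ratings: list = field(default_factory=list)    # crowdsourced, each in [0, 1]
    clarity_ratings: list = field(default_factory=list)  # crowdsourced, each in [0, 1]

def mean(xs):
    return sum(xs) / len(xs) if xs else 0.0

def score(claim, rigor_weight=0.6):
    # When few rigor judgements exist (e.g. the underlying data just
    # isn't there), shift weight toward clarity, so clear interpretations
    # of limited data still rise above murky ones.
    if len(claim.rigor_ratings) < 3:
        rigor_weight = 0.3
    return (rigor_weight * mean(claim.rigor_ratings)
            + (1 - rigor_weight) * mean(claim.clarity_ratings))

def rank(claims):
    # Best stuff to the top; the crap stays visible at the bottom.
    return sorted(claims, key=score, reverse=True)
```

The point of ranking rather than deleting is that bad content loses attention without losing transparency: anyone can still scroll down and see why it sank.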

I agree that it won't be profitable quickly, if ever. I expect its value to be based on network effects, like Facebook or Twitter (sites that did succeed despite starting without a network, mind you). But I wonder if modern AI technology could enable a new way to start the site without the network, e.g. by starting it with numerous concurrent users who are all GPT4, plus other GPT4s rating the quality of earlier output... thus enabling a rapid iteration process where we observe the system's epistemic failure modes and fix them. It seems like cheating, but if that's the only way to succeed, so be it. (AIs might provide too much hallucinated evidence that ultimately needs to be deleted, though, which would mean AIs didn't really solve the "network effect" problem, but they could still be helpful for testing purposes and for other specific tasks like automatic summarization.)
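(Mechanically, each test iteration could be as simple as the Python sketch below, where `ask_llm` is a hypothetical stand-in for whatever GPT4 API call one uses, and `writers_per_topic`/`raters_per_claim` are made-up parameters. This is scaffolding for testing failure modes, not a product design.)

```python
def ask_llm(prompt: str) -> str:
    # Placeholder for a GPT-4 (or similar) API call; the real prompts,
    # response parsing, and error handling are elided here.
    raise NotImplementedError

def bootstrap_round(topics, writers_per_topic=10, raters_per_claim=3):
    """One iteration of the AI-only cold start: synthetic 'users' post
    claims, other model instances rate them, and we inspect the results
    to find the system's epistemic failure modes."""
    posts = []
    for topic in topics:
        for _ in range(writers_per_topic):
            claim = ask_llm(f"State and defend one claim about: {topic}")
            ratings = [float(ask_llm(f"Rate the rigor of this claim from 0 to 1: {claim}"))
                       for _ in range(raters_per_claim)]
            posts.append((claim, sum(ratings) / raters_per_claim))
    # Review the top-rated posts by hand: highly-rated hallucinations are
    # exactly the failure mode we'd want to catch and fix before launch.
    return sorted(posts, key=lambda p: p[1], reverse=True)
```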

Oh, I've heard all this crap before

This is my first time.

to develop expert-systems to identify the sequences of coding and non-coding DNA that would need to be changed to morally enhance humans

Forgive my bluntness, but that doesn't sound practical. Since when can we identify "morality nucleotides"?

I suspect morality is more a matter of cultural learning than genetics. No genetic engineering was needed to change humans from slave-traders to people who find slavery abhorrent. Plus, whatever genetic bits are involved, changing them sounds like a huge political can of worms.

I'm sure working for Metaculus or Manifold or OWID would be great.

I was hoping to get some help thinking of something smaller in scope and/or profitable that could eventually grow into this bigger vision. A few years from now, I might be able to afford to self-fund it by working for free (worth >$100,000 annually), but it'll be tough with a family of four, and I've lost the enthusiasm I once had for building things alone with no support (it hasn't worked out well before). Plus there's an opportunity cost in terms of my various other ideas. Somehow I have to figure out how to get someone else interested...
