Noah Scales

171 · Joined Jun 2022

Bio

All opinions are my own unless otherwise stated. I dislike use of bibliographic references in blog posts but if you want those references for specific facts that I mention, then you can request the references by private message.

How I can help others

I get very little interaction on this forum, relatively speaking, so if you have questions for me, particularly on research topics, let me know. 

Comments
135

and because of the lack of embodiment of AGI (so it won’t get beat up on by the world generally or have emotional/affective learning).

There are two ways to plausibly embody AGI.

  1. As supervisor of a dumb robot body: the AGI remotely controls the robot, processing a portion or all of its sensor data.
  2. As resident of the robot body: the AGI's hardware is housed in the robot itself.

OK, I will put my conservation thoughts in the comments of the summary post. 

The speed of posting and conversation changes on this forum is way faster than I can match; by the time I have something decent written up, the conversation will have moved on.

Keep an eye out for my reply, though; I'll come around when I can. Your work on this parallels a model I have of a pathway for our civilization coping with global warming. I call it "muddling through." I know, catchy, right? How things go this century is very sensitive to short time scales; I'm thinking 10-20 year differences make all kinds of difference, and in my view, most needed changes are in human behavior and politics, not technology developments. So, good and bad.

Which makes me wonder how anyone expects to identify whether software entities have affective experience.

Is there any work in this direction that you like and can recommend?

Below is the original post submitted for the competition. The post is being edited to reflect improvements I wished to make after the competition deadline but could not until the submissions were judged.

--------

TL;DR

  • Keep the word belief in your vocabulary, and use it as distinct from your use of hypothesis or conclusion.
  • Unweighted beliefs are useful distinctions to employ in your life.
  • Constrain beliefs rather than decrease your epistemic confidence in them.
  • Challenge beliefs to engage with underlying (true) knowledge.
  • Take additional actions to ensure preconditions for the consequences of your actions.
  • Match ontologies with real-world information to create knowledge used to make predictions.
  • Use an algebraic scoring of altruism and selfishness of actions to develop some simple thought experiments.
  • My red team critique is that EA folks can effectively rely on unweighted beliefs where they now prefer to use Bayesian probabilities.

Introduction: Effective Altruists want to be altruistic

EA community members want to improve the welfare of others through allocation of earnings or work toward charitable causes. Effective Altruists just want to do good better. 

There are a couple of ways to know that your actions are actually altruistic:

  • Believe that your actions are altruistic.
  • Confirm your actions are altruistic before you believe it.

I believe that EA folks confirm the consequences of their altruistic actions in the near term. For example, they might rely on:

  • expert opinion
  • careful research
  • financial audits
  • cost-effectiveness models
  • big-picture analyses

 and plausibly other processes to ensure that their charities do what they claim.

The essentials of my red team critique

Bayesian subjective probabilities don’t substitute for unweighted beliefs

The community relies on Bayesian subjective probabilities when I would simply rely on unweighted beliefs. Unweighted beliefs let you represent assertions that you consider true.

Why can’t Bayesian probabilities substitute for all statements of belief? Because:

  • Humans do not consciously access all their beliefs or reliably recall formative evidence for every belief that they can articulate. I consider this self-evident.
  • Beliefs are important to human evaluation of morality, especially felt beliefs. I will explore this in a couple of thought experiments about EA guilt.
  • Beliefs can represent acceptance of a conclusion or validation of a hypothesis, but sometimes they appear as seemingly irrational feelings or intuitions.
  • Any strong assertion that you make quickly is a belief. For example, as a medic, you could say “Look, this epipen will help!” to someone guarding their anaphylactic friend from you.

Distinguish beliefs from conclusions from hypotheses

Use the concept of an unweighted belief in discussions of a person's knowledge. Distinguish beliefs from conclusions from hypotheses, like:

  • Belief Qualifiers: "I believe that X.", "X, or so I believe.", "X."
  • Conclusion Qualifiers: "Therefore X.", "I conclude that X.", "X is my conclusion."
  • Hypothesis Qualifiers: "I hypothesize that X.", "X, with 99% confidence."

I don’t mean to limit your options. These are just examples.

Challenge the ontological knowledge implicit in a belief

If I say, "I believe that buying bednets is not effective in preventing malaria," you can respond:

  • "Why do you believe that?"
  • "What led you to that conclusion?"
  • "Is that your theory?"
  • "Based on what evidence?"

I believe that you should elicit ontology and knowledge from the person you challenge, rather than letting the discussion devolve into exchanges of probability estimates.

Partial list of relationship types in ontologies

You will want information about entities that participate in relationships like:

  • part-whole relationships: an entity is part of another entity
  • meaning relationships: a defining, entailment, or referent relationship
  • causal relationships: a causal relationship, necessary, sufficient, or both
  • set-subset relationships: a relationship between sets

I will focus on causal relationships in a few examples below. I gloss over the other types of relationships though they deserve a full treatment.

A quick introduction to ontologies and knowledge (graphs)

An ontology is a list of labels for things that could exist and some relationships between them. A knowledge graph instantiates those labels and relationships with concrete entities. The term knowledge graph is confusing, so I will use the word knowledge as a shorthand. For example (a short code sketch follows these examples):

  • Ontology: Tipping_Points cause Bad_Happenings, where Tipping_Points and Bad_Happenings are the labels, and the relationship is causes.
  • Knowledge: "Melting Arctic Ice causes Ocean Heating At the North Pole." where Melting Arctic Ice instantiates Tipping_Points and Ocean Heating At The North Pole instantiates Bad_Happenings.
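
To make the distinction concrete, here is a minimal Python sketch, assuming a hypothetical triple-based representation; none of these names come from a real library.

```python
# A minimal sketch, assuming a hypothetical triple-based representation
# of ontologies and knowledge graphs.

# Ontology: labels for things that could exist, plus a relationship.
ontology = {("Tipping_Points", "causes", "Bad_Happenings")}

# Knowledge: the same relationship instantiated with real-world entities.
knowledge = {("Melting Arctic Ice", "causes", "Ocean Heating At The North Pole")}

# An instantiation maps each ontology label to a concrete entity.
instantiation = {
    "Tipping_Points": "Melting Arctic Ice",
    "Bad_Happenings": "Ocean Heating At The North Pole",
}

def instantiate(triple, mapping):
    """Replace ontology labels in a triple with concrete entities."""
    subject, relation, obj = triple
    return (mapping.get(subject, subject), relation, mapping.get(obj, obj))

# The instantiated ontology triple is a piece of knowledge.
assert instantiate(("Tipping_Points", "causes", "Bad_Happenings"),
                   instantiation) in knowledge
```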

I am discussing beliefs about the possible existence of things or possible causes of events in this red team. So do EA folks. Longtermists in Effective Altruism believe in “making people happy and making happy people.” Separate from the possibilities or costs of longtermist futures is the concept of moral circles, meant to contain beings with moral status. As you widen your moral circle, you include more beings of more types in your moral calculations. These beings occupy a place in an ontology of beings said to exist at some point in present or future time[1]. Longtermists believe that future people have moral status. An open question is whether they actually believe that those future people will necessarily exist. In order for me to instantiate knowledge of that type of person, I have to know that they will exist.

A quick introduction to cause-effect pathways

A necessary cause is required for an effect to occur, while a sufficient cause guarantees the effect but is not required for it. There are also causes that are both necessary and sufficient, a special category.

When discussing altruism, I will consider actions taken, consequences achieved, and the altruistic value[2] of the consequences. Let's treat actions as causes, and consequences as effects.

In addition, we can add preconditions to our model, a self-explanatory concept. For example, if your action has additional preconditions that are necessary in order for the action to cause a specific consequence, then those preconditions are necessary causes contributing to the consequence.
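
Here is a minimal sketch of that cause-effect model, assuming a hypothetical rule of my own: an effect occurs only when every necessary cause (including any preconditions of the action) holds and at least one sufficient cause fires. All names are illustrative.

```python
# A minimal sketch, assuming effects occur only when all necessary causes
# hold and at least one sufficient cause fires. All names are illustrative.

def effect_occurs(necessary_causes, sufficient_causes, facts):
    """facts is the set of causes that actually hold in the situation."""
    all_necessary = all(c in facts for c in necessary_causes)
    any_sufficient = any(c in facts for c in sufficient_causes)
    return all_necessary and any_sufficient

necessary = {"precondition holds"}   # e.g., materials reach the work site
sufficient = {"action performed"}    # e.g., paying for the materials

print(effect_occurs(necessary, sufficient, {"action performed"}))  # False
print(effect_occurs(necessary, sufficient,
                    {"action performed", "precondition holds"}))   # True
```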

Using ontologies to make predictions

“Excuse me. Zero! Zero! There is a zero percent chance that your subprime losses will stop at five percent.” -The character Mark Baum in The Big Short

I will offer my belief that real-world prediction is not done with probabilities or likelihoods. Prediction outside of games of chance is done by matching an ontology to real-world knowledge of events. The events, whether actual or hypothetical, contain a past and a future, and the prediction is that portion of the cause-effect pathways and knowledge about entities that exist in the future.

Some doubts about subjective probability estimates

I doubt the legitimacy and meaningfulness of most subjective probability estimates. My doubts include:

  • betting is not a model of neutral interest in outcomes. Supposedly betting money on predictions motivates interest in accuracy, but I think it motivates interest in betting.
  • you can convey relative importance through probabilities just as easily as you can convey relative likelihoods. I think that's the consequence or even the intent behind subjective probability estimates in many cases.
  • subjective probabilities might be used with an expected value calculation. I believe that humans have no reliable sense of judgement for relative subjective probabilities, and that expected value calculations that rely on any level of precision will yield expected values without any real significance.

To reframe the general idea behind matching ontologies to events, I will use Julia Galef’s well-known model of scouts and soldiers[3]:

  • scouts: collect knowledge and refine their ontology as they explore their world.
  • soldiers: adopt an (arbitrary) ontology and assert its match to the real world.

An example of a climate prediction using belief filters and a small ontology

This example consists of pieces of internal dialog interspersed with pseudo-code. The dialog describes beliefs and questions. The example shows the build-up of knowledge by application of an ontology to research information. I gloss over all relationships except causal ones, relying on the meanings of ontology labels rather than making relationships like part-whole or meaning explicit.

The ontology content is mostly in the names of variables and what they can contain as values, while the causal pathways are created by causal links. A variable looks like x?, and -> means causes. So x? -> y? means some entity x causes some entity y.
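
Here is a minimal sketch of how that notation could work, assuming a hypothetical convention that names ending in ? are ontology variables and everything else is a concrete entity; the helpers are mine, not from any real library.

```python
# A minimal sketch, assuming names ending in "?" are ontology variables
# and anything else is a concrete entity from research information.

def is_variable(name):
    return name.endswith("?")

def instantiate(rule, fact):
    """Match an ontology rule (cause, effect) against a concrete causal
    fact, returning variable bindings, or None on a mismatch."""
    bindings = {}
    for slot, value in zip(rule, fact):
        if is_variable(slot):
            bindings[slot] = value
        elif slot != value:
            return None  # concrete labels must match exactly
    return bindings

rule = ("small_GAST_increase?", "Tipping_Points?")  # x? -> y?
fact = ("1.2C GAST Increase", "Ice-free Arctic")    # research information
print(instantiate(rule, fact))
# {'small_GAST_increase?': '1.2C GAST Increase', 'Tipping_Points?': 'Ice-free Arctic'}
```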

Internal dialog while researching: Climate change research keeps bringing tipping-point timing closer to the present, as the global average surface temperature (GAST) increase required to tip natural systems drops in value. Climate tipping points are big and cause big bad consequences. So are there climate system tipping points happening now? Yes, the Arctic ice melts a lot in the summer.

Knowledge instantiation:

  • small_GAST_increase? -> Tipping_Points?
  • 1.2C GAST Increase -> Ice-free Arctic
  • Tipping_Points? -> Big_Bad_Consequences?
  • Ice-free Arctic -> Big_Bad_Consequences?

Internal dialog while researching: Discussions about tipping points could be filtered by media or government. So is there a near-term catastrophic threat from tipping points? No, but the Arctic melt is a positive feedback for other tipping points.

Knowledge instantiation:

  • Tipping_Points? -> catastrophe?
  • Ice-free Arctic -> catastrophe?
  • Ice-free Arctic -> permafrost melt
  • permafrost melt -> catastrophe?
  • permafrost melt -> uninhabitable land and 1+ trillion tons carbon emitted
  • Ice-free Arctic -> methane hydrate/thermogenic methane emissions
  • methane hydrate/thermogenic methane emissions -> catastrophe?
  • methane hydrate/thermogenic methane emissions -> 1 degree Celsius GAST rise
  • Ice-free Arctic -> accelerating Greenland melt
  • accelerating Greenland melt -> catastrophe?
  • accelerating Greenland melt -> up to 7 m sea level rise possible
  • sea level rise -> catastrophe?
  • sea level rise -> estimate: 1+ meters sea level rise destabilizes coastal populations, 2+ meters wipes out some countries

Internal dialog while researching: Is the threat from tipping points immediate? Well, researchers are surprised by the acceleration of changes at tipping points. Right now fast and surprising threats depend on the movements of the meandering jet stream. Is the jet stream a tipping point? I'll call it a climate phenomenon.

Knowledge instantiation:

  • Tipping_Points? -> immediate_threat?
  • Climate_Phenomenon? -> immediate_threat?
  • Jet stream -> severe heat waves and cold snaps

Internal dialog while researching: Is stated policy matching the severity of the situation? No. Climate scientists discussing the issues are frustrated or bitter. It’s clear global commitments are inadequate and, in any case, ignored.

Knowledge instantiation:

  • Situation? -> Good_Policy?
  • Severe situation -> mediocre policy response
  • Policy_made? -> Policy_Implemented?
  • Paris Agreement -> Paris Agreement ignored

Internal dialog while researching: Policy change around the environment, population, and resource use has not progressed much. Are there other pressures on systems than direct climate change pressure? Yes. There is plastic pollution killing marine life; a 6th great extinction cascading through species; new pesticides killing insects; lack of clean water causing migration, disease, or death; shills and skeptics shaping climate policy.

Knowledge instantiation:

  • Other pressures? -> Synergism with Climate change pressures?
  • lots of plastic pollution in the ocean -> death of marine life
  • a 6th great extinction -> loss of habitats and loss of species
  • new pesticide stresses on insect species -> continuing insect species/biomass losses
  • lack of clean water availability -> migration, disease, death
  • climate skeptics and industry shills in government -> reactive, bogus, or null climate change mitigation or adaptation policies
  • [OK, this list could go on for a while, so I will stop now]

Internal dialog while researching: My goal was to understand the big picture of climate change in the future. What is the big picture of consequences? We are on a path toward extinguishing life on the planet.

Keeping a small ontology or prefiltering for relevance

A prediction is a best-fit match of an ontology to a set of (hypothetical) events. Your beliefs let you filter out what you consider relevant in events. Correspondingly, you match relatively small ontologies to real-world events. The more you filter for relevance, the smaller the matching requirements for your ontology. You need to filter incoming information about events for credibility or plausibility as well, but that is a separate issue involving research skills and critical thinking skills.

In my example above:

  • the internal dialog sections presumed my beliefs. For example, "Discussions about tipping points could be filtered by media or government." presupposes that organizations might filter discussions of tipping points.
  • questions presupposed beliefs. For example, "Is the threat from tipping points immediate?" presupposes that tipping points pose a threat.

These beliefs, the content of the dialog sections, define the ontology that I instantiate with the information that I gather during research. For example, I believe in climate tipping points that have big and bad consequences, and that is one of the first relationships I list in the example. The ontological relationships that I instantiate with information about tipping points are few and general, so my example ontology was small but let me reach a useful conclusion: "We are on a path toward extinguishing life on the planet."[4]

Superforecasters have smaller ontologies with better relevance and good research and critical thinking skills

“A hundred percent he’s there. Ok, fine, ninety-five percent because I know certainty freaks you guys out, but it’s a hundred.” 
-Intelligence Analyst Maya in the movie Zero Dark Thirty

Superforecasters might be good at:

  • revising their ontologies quickly by adding new entities or relationships and subtracting others.
  • making predictions based on matches to a relatively small but well-curated ontology[5].

I suspect that forecasters continue to use subjective probabilities because forecasters are not asked to justify their predictions by explaining their ontology.

A plausible problem for domain experts charged with forecasting is that they develop larger ontologies than they should. Then, when they try to match the ontology against real-world information, they fail to collect only relevant data. They end up knowing a lot but holding few beliefs about what qualifies as relevant.

I am by no means an expert in prediction or its skills. I am offering my beliefs here.

Acknowledge that your beliefs decide your morality

How you live your life can be consequential for other people, for the environment, for other species, and for future generations. The big question is “What consequences do you cause?”

I suggest that you discuss assertions about your consequences without the deniability that conclusions or hypotheses allow. Assert what you believe that you cause, at least to yourself. Don’t use the term consequence lightly. An outcome occurred in some correspondence to your action. A consequence occurred because your action caused it[6].

That I would advocate for this epistemic position might seem confusing. You’re probably wondering what benefit it has for deciding the altruism of your consequences or the validity of causal models that you employ to decide the altruism of your consequences.

Ordinary pressures to hide beliefs

I believe that there are a number of pressures on discussions of beliefs about moral actions. Those pressures include:

  • social expectations and norms
  • your values and feelings
  • your attention and actual commitment

To be a good scout involved in intersubjective validation of your beliefs, don't lie to others or yourself. If you start off by asserting what you believe, then you might be able to align your beliefs with the best evidence available. What you do by expressing your beliefs is:

  1. ignore expectations of rationality or goodness-of-fit to your beliefs.
  2. express your feelings and plausibly express your values as well.
  3. commit to the discussion of your beliefs

provided that describing your unweighted beliefs is tolerable to those involved.

One way to make discussing your actual beliefs tolerable to yourself is to provide a means to constrain them. You can limit their applicability or revise some of their implications. Here are two short examples:

  • I start out believing that God created the universe 6,000 years ago. I learn enough about evolution and planetary history and theories of the start of the universe that I revise my belief. Yes, God created everything, and God created us, but did it 4+ billion years ago in some kind of primordial chemical soup.
  • I start out believing that climate change is a liberal hoax. I learn about climate science and recognize that climate researchers are mostly sincere. Now I believe that only the most liberal climate researchers make up how bad climate change will be, and I can tell those liberals apart from the truthful majority at the IPCC.

You might think these are examples of poor epistemics or partially updated beliefs, not something to encourage, but I disagree. 

To simply make belief statements probabilistic implies that we are in doubt a lot, but with reference to unconstrained forms of our original beliefs. Constraining a belief, or chipping away at the details of the belief, is a better approach than chipping away at our confidence in the belief. We curate our ontologies and knowledge more carefully if we constrain beliefs rather than manipulate our confidence in beliefs.

Let's consider a less controversial example. Suppose some charity takes action to build a sewage plant in a small town somewhere. Cement availability becomes intermittent and prices rise, so the charity’s effectiveness drops. Subjective confidence levels assigned to the altruistic value of the charity’s actions correspondingly drop. However, constraining those actions to conditions in which cement is cheap and supply is assured can renew interest in the actions. Better to constrain belief in the action’s effectiveness than to reduce your confidence level without examining why. We'll return to this shortly.

A summary of advantages of unweighted beliefs

At the risk of being redundant, let me summarize the advantages of unweighted beliefs. Preferring unweighted beliefs to weighted beliefs with epistemic confidence levels does offer advantages, including:

  • an unambiguous statement of what you hold true. An unweighted belief is not hedged. “I believe in god” gives me information that “I ninety-five percent believe in god” does not[7].
  • an intuitive use of the concept of beliefs. Beliefs, as I think most people learned about them, are internal, sometimes rational, sometimes taken on faith, knowledge of what is true.
  • a distinction of believed truths from contingent conclusions or validated hypotheses, as noted earlier.
  • an opportunity to add constraints to your belief when evidence warrants.
  • a way to identify that you think true something that contradicts your epistemic best practices[8].
  • a way to distinguish hunches, intuitions, or other internal representations of beliefs from conclusions or hypotheses.
  • a way to name an assertion that you can revise or revoke rather than assign a probability to.
  • an opportunity to focus on the ontology or knowledge implicit in your belief.

The nature of the problem

Tuld:“So what you’re saying is that this has already happened.”

Peter:“Sort of.”

Tuld: “Sort of. And, Mr. Sullivan, what does your model say that that means for us here?”

- CEO John Tuld and Analyst Peter Sullivan in the movie Margin Call

A community of thinkers, intellectuals, and model citizens pursuing altruism is one that maintains the connection between:

  • thinking about beliefs about the consequences of one’s actions.
  • intersubjective validation of the consequences of one’s actions.
  • taking responsibility for changing the consequences of one’s actions for others.

The altruistic mission of the EA community is excellent and up to the task. I will propose a general principle for you, and that’s next.

Altruism valuations lack truth outside their limiting context

If you decide that your actions produce a consequence, then that is true in a context, and limited to that context. By implication, then, you have to assure that the context is present in order to satisfy your own belief that the action is producing the consequence that you believe it does.

An example of buying cement for a sewage-processing facility

For example, if you’re buying cement from a manufacturer to build a sewage-processing facility, and the payments are made in advance, and the supplier claims they shipped, but the building site didn’t receive the cement, then the charitable contributions toward the purchase of cement for the sewage treatment plant do not have the consequence you believe. Your belief in what you cause through your charity is made false. You don't want to lose your effectiveness, so what do you do? You:

  1. add the precondition that the cement reaches the building site to your list of what causes the sewage plant to be built. You bother because that precondition is no longer assured.
  2. add actions to your causal model that are sufficient to ensure that cement deliveries complete. For example, possible actions might include:
  • to send a representative to the supplier’s warehouses.
  • to send employees to ride the freight train and verify delivery.
  • to create a new contract with the cement manufacturer that payment is only made after delivery.

Let's say that cement deliveries are only reliable if you take all those actions. In that case your beliefs about paying for cement change:

  • original belief: If we pay for cement, we can build our sewage plant.
  • revised belief: If we arrange a cement purchase, and send a representative to the supplier, and that representative rides the freight train carrying the cement to the construction location before we send payment for the cement, then we can build our sewage plant.

You could do something more EAish involving subjective probabilities and cost-effectiveness analyses, such as:

  1. track the decline in subjective probabilities that your build site receives cement, and take action below a specific probability, just sufficient to raise subjective estimates of delivery efficiency above a certain percentage (see the sketch after this list).
  2. develop a cost-effectiveness model around cement deliveries. Ask several employees to supply subjective probabilities for future delivery frequencies based on proposed alternatives to correct delivery failures. Weight their responses to choose a cost-effective means to solve delivery problems.
  3. rather than send a representative to the supplier or have that person ride the freight train, just get a new contract with a supplier that lets you specify that payment is made after cement is freighted to the build site. If successful, it is the cheapest action to start.
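
As a minimal sketch of option 1, with the threshold, target, and probability estimates all assumed for illustration: track a subjective delivery probability and trigger the cheapest corrective action estimated to restore it above a target.

```python
# A minimal sketch of option 1; the threshold, target, and probability
# estimates are all assumed for illustration.

ACTION_THRESHOLD = 0.8  # act when P(delivery) drops below this
TARGET = 0.95           # want the estimate raised at least this high

def review(delivery_probability, corrective_actions):
    """corrective_actions: (name, estimated P(delivery) afterward) pairs,
    assumed ordered from cheapest to most expensive."""
    if delivery_probability >= ACTION_THRESHOLD:
        return "no action needed"
    for name, new_p in corrective_actions:
        if new_p >= TARGET:
            return name  # cheapest action that reaches the target
    return "escalate: no single action suffices"

actions = [("new contract: pay only after delivery", 0.90),
           ("send a representative to the supplier", 0.96),
           ("employee rides the freight train", 0.99)]
print(review(0.7, actions))  # -> "send a representative to the supplier"
```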

If that process also results in a revised true belief, then good enough. There might be other advantages to your seemingly more expensive options that actually reduce costs later[9].

You can take your causal modeling of your actions further[10]. You can:

  • look further upstream in causes. For example, explore what caused the need for a charity to pay for a sewage treatment plant.
  • assess causes of preconditions that you have already established, or their preconditions. Maybe your employees won't ride the freight train without hazard pay.
  • look at your other actions to see if they cause the undesirable preconditions that make your charitable effort necessary. Are you contributing to the reasons that the town cannot afford a sewage treatment plant?[11]

Your altruism is limited to the contexts where you have positive consequences

Yes, your actions only work to bring about consequences in specific contexts in which certain preconditions hold. You cannot say much about your altruism outside those contexts unless you consider your actions outside those contexts. Effective altruists leverage money to achieve altruism, but that’s not possible in all areas of life or meaningful in all opportunities for altruistic action. This has implications that we can consider through some thought experiments.

Scoring altruistic and selfish consequences of actions

“Is that figure right?” 
-Manager Sam Rogers in the movie Margin Call

Actions can be both good and evil in their consequences for yourself or others. To explore this, let's consider a few algebra tools that let you score, rank, scale, and compare the altruistic value of your actions in a few different ways. I think the following four-factor model will do.

Here’s a simple system of scoring actions with respect to their morality, made up of a:

  • benefit score
  • harm score
  • self-benefit score
  • self-harm score

Let's use a tuple of (benefit score, harm score, self-benefit score, self-harm score) to quantify an action in terms of its consequences. For my purposes, I will consider the action of saving a life to have the maximum benefit and self-benefit and give it a score of 10. Analogously, the maximum harm and self-harm score is also 10. All other scores relative to that maximum of 10 are made up.
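
Here is a minimal sketch of this scoring algebra, assuming (as the comparisons below do) that distance means Euclidean distance between score tuples; the function names are mine. Its outputs match the subtraction and distance figures used in the thought experiments that follow.

```python
import math

# A minimal sketch of the four-factor scoring algebra. Tuples are
# (benefit, harm, self-benefit, self-harm); distance is assumed Euclidean.

def subtract(a, b):
    """Component-wise difference of two score tuples."""
    return tuple(x - y for x, y in zip(a, b))

def distance(a, b=(0, 0, 0, 0)):
    """Euclidean distance between two score tuples; by default, the
    distance from the hypothetical null action (0,0,0,0)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

fire_rescue = (20, 2, 0, 10)     # scored in the thought experiment below
drowning_rescue = (10, 0, 2, 1)

print(subtract(fire_rescue, drowning_rescue))            # (10, 2, -2, 9)
print(round(distance(fire_rescue), 1))                   # 22.4
print(round(distance(drowning_rescue), 1))               # 10.2
print(round(distance(fire_rescue, drowning_rescue), 1))  # 13.7
```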

A Thought Experiment: Two rescues and a donation

The first rescue: A woman saves two children from a building fire

Consider a stranger unknown to anyone in a neighborhood. She walks into a burning building and saves two children inside but dies herself from smoke inhalation. She intended well but caused the two children she rescued to suffer smoke inhalation because of her poor rescue technique.

Let's suppose saving a life gets a score of 10. The tuple of scores for this person’s life-saving action is (20,2,0,10). To explain, here is a breakdown of the scores:

  • benefit score: 20. There were two rescues, so 10 points for each life saved.
  • harm score: 2. The rescue caused a bit of smoke inhalation to each person saved, worth 1 point of harm each, because the rescuer held the people over her shoulders after pulling them off the floor where they lay screaming below the smoke.
  • Self-benefit score: 0. The rescue did nothing for the rescuer, who died immediately afterward.
  • Self-harm score: 10. As a result of inhalation caused by the rescuer searching the building for the children before finding them, the rescuer died on the steps of the building.

The second rescue: a woman saves a child drowning in mud on a street

Now how about a person walking along who sees and rescues a child drowning in mud, like in Peter Singer’s original thought experiment to motivate students to give what they can? We might assign that action scores like (10,0,2,1). The act is scored like:

  • benefit score: 10. The rescuer saved a child’s life.
  • harm score: 0. The rescuer didn’t hurt anyone.
  • self-benefit score: 2. The rescuer raised her reputation among others in the town and with the child's family.
  • self-harm score: 1. The rescuer ruined her mother’s faux suede boots with mud.

Comparing the two rescues

A quick subtraction of the drowning rescue values from the fire rescue values gives:

(20,2,0,10) - (10,0,2,1) = (10,2,-2,9)

lets you compare the two actions. The fire rescue action has:

  • benefit score difference: +10. It was much more altruistic than the drowning rescue.
  • harm score difference: +2. It did a bit more harm to others than the drowning rescue.
  • self-benefit score difference: -2. It was less helpful to the rescuer than the drowning rescue.
  • self-harm score difference: +9. It was far more harmful to the rescuer than the drowning rescue.

There’s another way to compare the fire rescue and the drowning rescue actions, and it's through distance from a hypothetical null action (0,0,0,0) and from each other.

  • Distance of fire rescue from null: √(20² + 2² + 0² + 10²) = √504 ≈ 22.4
  • Distance of drowning rescue from null: √(10² + 0² + 2² + 1²) = √105 ≈ 10.2
  • Distance of fire rescue from drowning rescue: √(10² + 2² + (-2)² + 9²) = √189 ≈ 13.7

These distance scores could show that the actions had considerably different outcomes overall. The distances of each action from null show that the fire rescue action had more impacts overall. The distance of each action from each other shows that the impact of each rescue action is different.

The donation with wildly altruistic consequences

EA is cool because you can accomplish goals like reducing suffering from malaria by paying for relatively cheap bednets.

Paying for a couple hundred bednets might deserve a score like (1000,0,0,0.2). This score, with all the validity of a thought experiment, shows:

  • benefit score: 1000. the donation removed malaria suffering for 200 people, at 5 points per case prevented, so a score of 5 * 200 = 1000.
  • harm score: 0. the donation did not cause anyone any harm.
  • self-benefit score: 0. the donation did not help the giver in any way.
  • self-harm score: 0.2. the donation cost the giver a trivial fraction of his yearly discretionary spending budget.

Comparing the donation to the fire rescue

Let's compare the scores of a donation to an EA foundation (1000,0,0,0.2) with the scores from the previous fire rescue action of (20,2,0,10).

(1000,0,0,0.2) – (20,2,0,10) = (980,-2,0,-9.8)

Compared to the fire rescue, the donation is different by:

  • benefit score difference: +980. The donation had far more altruistic consequences overall.
  • harm score difference: -2. The donation caused less harm than the fire rescue.
  • self-benefit score difference: 0. The donation didn’t benefit the giver more or less than the fire rescuer’s action.
  • self-harm score difference: -9.8. The donation caused the giver far less harm than the fire rescue woman’s action caused the fire rescue woman.

Distance from null of donation action: √(1000² + 0² + 0² + 0.2²) ≈ 1000

The donation was clearly impactful. Compare that distance (1000) to the earlier distances of the fire rescue (22.4) and the drowning rescue (10.2).

Distance of donation from fire rescue: √(980² + (-2)² + 0² + (-9.8)²) ≈ 980

Clearly, the donation action is far different from the fire rescue.

This should be enough to see how this simple scoring system can develop your intuitions about actions. 

Some legitimate concerns about this thought experiment

Disagreements over these thought experiments might include:

  • the individual scores are subjective and relative. Is preventing a death (a benefit score of 10) really worth just two prevented cases of malaria (a benefit score of 5 * 2)?
  • the individual scores conflate incompatible types of altruism. Is death comparable to ruining a pair of boots?
  • The causal models are debatable. Was the large donation actually sufficient to generate new bednet deliveries to Uganda? In the fire rescue thought experiment, are we sure the children suffered from smoke inhalation because of how the rescuer carried them out?
  • Some scores are not possible in practice. For example, is it possible to have null benefit or harm ever?

If you have these concerns, then you can bring them up to improve your use of such a thought experiment. For example, you could disallow absolute 0 values of any individual score.

The influence of beliefs about causation on your sense of responsibility

EA folks complain that they suffer depression or frustration when they compare their plans for personal spending against the good they could do by donating their spending money. Let’s explore this issue.

Continuing with our earlier donation example, let's say it takes $x to cause (1000,0,0,0.2) through the action of donating toward bednet purchases to prevent malaria infection. That same $x affords you a week-long rejuvenating vacation at an expensive spa resort, an action with a score of, let's say, (0,0,3,0). The vacation action gets a self-benefit score of 3 with no other believed consequences. You could go on to calculate the difference of the two scores (about 1000) and compare their distances from null (1000 vs 3). What do you believe happens to your morality or the world, though, if you take that week at the spa?

To answer this question, let's consider how actions cause consequences, one more time. If you believe that your action to withhold money from your charity to pay for your spa vacation causes 200 people to suffer malaria, then you need to score your vacation action again to reflect that. Instead of the vacation deserving a score of (0,0,3,0), it should get a score of (0,1000,3,0), because along with rejuvenating at the spa for a week, you caused 200 people in Uganda to suffer malaria by withholding bednets from them. The question comes down to what you believe about what you cause. Does that vacation deserve a scoring of (0,0,3,0), or of (0,1000,3,0)? Did you cause 200 people to contract malaria or not?
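
Continuing the scoring sketch from earlier, here is how the believed causal story changes the vacation's distance from the null action; the numbers are the ones assumed above.

```python
import math

# Self-contained recap: distance of a score tuple from the null action.
def distance_from_null(score):
    return math.sqrt(sum(x ** 2 for x in score))

vacation_innocent = (0, 0, 3, 0)     # belief: "I just went to the spa"
vacation_culpable = (0, 1000, 3, 0)  # belief: "I withheld bednets from 200 people"

print(round(distance_from_null(vacation_innocent), 1))  # 3.0
print(round(distance_from_null(vacation_culpable), 1))  # 1000.0
```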

Let’s return to the example of the woman running into the burning building. This simpler example will ground your intuition about taking responsibility for outcomes. 

A woman is walking home from a day at the local animal shelter by a roundabout route, getting in some exercise and fresh air while she ponders increasing her monthly GiveWell payment. About halfway through her walk, she sees smoke and flame rising from a building at the end of a cul-de-sac and hears screams from inside the building. She doesn’t see anyone else around. There are no cars in driveways, no lights on in homes, no other sound, light, or motion. She assumes that there is no one else there aware of the fire, and that no one is coming to the rescue. She races toward the building, and you know the rest. She rescues two children but dies of smoke inhalation herself.

Suppose that woman had instead called 911, waited for a fire truck to arrive, and then continued on her walk. A few days later, she reads a news report that two children were found dead in a fire at the location she passed. In her belief, she could have rescued those people trapped inside. She recalls that no one else was around to rescue the two children. She concludes, and then believes, that she was necessary and sufficient to save those children. Furthermore, she believes that her calling 911 and waiting for a fire truck actually killed the two children inside. Therefore, the action of her walking home has a four-factor score like (0,20,1,0).

  • benefit score: 0. She helped no one else.
  • harm score: 20. She believes that she caused two deaths.
  • self-benefit score: 1. She got some mild exercise on her roundabout walk.
  • self-harm score: 0. She didn’t cause herself any harm by walking home.

All this example illustrates is our limitations in assigning ourselves a causal role in events. With a few small changes in the narrative, we can change the fire rescuer’s beliefs and our own assessment as well. For example, suppose the woman remembers that her lungs are scarred from an accident many years ago and reconsiders whether she could have navigated the building in the thick smoke to find the children.

I have emphasized beliefs throughout this red team exercise because:

  • beliefs assert what you consider to be consequences of your actions.
  • beliefs decide what goes onto your own ledger of your good, evil, or selfish actions.
  • beliefs guide your decisions and feelings in many situations of moral significance.

The flow of time carrying your actions defines a single path of what exists

Again, I offer my beliefs. Your actions precede their consequences. What you believe that you cause is found in the consequences of your actions. You can look forward in time and predict your future consequences. You can look at diverging tracks of hypothetical futures preceded by actions you could have taken but did not take. However, those hypothetical futures are nothing more than beliefs. On those hypothetical tracks, you can perceive the consequences of what you could have done, like the woman in the fire rescue who elected to call 911 and wait for the fire truck to arrive.  Later on, she's haunted by what would have happened if she had rushed into the building and carried out two living healthy children instead of letting them die in a fire. What would have happened is not an alternate reality though. It is just a belief. The poor woman, haunted by guilt, will never know if her belief is true. Anyone suffering regret or enjoying relief about what they didn't do or did do is responding to their beliefs, not some kind of escaped reality or alternate set of facts.

Conclusion: This was a red-team criticism, not just an exploration of ideas

If I did my job here, then the content and my intent were clear. I am critical of the overuse of subjective probabilities inside EA. A renewal of unqualified use of assertions that indicate unweighted belief will make it easier for you to apply useful critical thinking and prediction skills.

A few recommendations

I offered a few recommendations, including that you:

  • remember that beliefs are more or different than conclusions and hypotheses.
  • qualify your conclusions and hypotheses as such.
  • state your unweighted beliefs with confidence.
  • challenge beliefs to learn the particulars of their implicit ontology or knowledge.
  • match causal pathways and ontological lists to make predictions.
  • qualify a prediction with a discussion of the causal pathway it matches or the ontology it presumes, instead of with a prediction probability.
  • model preconditions, actions, and consequences to achieve your goals.
  • explore the context of any actions of yours that you want to perform.

I attribute your assessment of your action’s consequences to your beliefs, not your rationality, regardless of how you formed that assessment and any confidence that you claim in it. Or so I believe.

I believe that you can succeed to the benefit of others inside your moral circle and outside it as well. That said, these are difficult times, and we could all use some good luck. Take care!

About Me

I am a geophysics and mathematics graduate from the 1990s out of UCSC. I'm an older man now, who spent time working with software and has done various types of work in between. My career interests are open, but I expect to seek and find work in the software industry again at some point. I don't have credentials to bolster your belief in what I share here. My formal education in related academic fields consists of several years of software training, some industry work, and a few philosophy and linguistics courses. Plus a lot of learning on my own time, because I have hobbies.

About this critique

This critique represents my beliefs about the topics involved, including the Effective Altruist community. It is not a scholarly work. My conclusions are not drawn from a history of the same that I know about, although there are many related areas of study. Ethics, philosophy, pragmatics, knowledge representation, all have something to say about the issues that I address here. 

I should credit the Goldratt methods of analysis and discovery for inspiring my discussion of precondition identification and support here. If any reader wants to model causality to discover root causes, resolve conflicts, or solve problems, look into Goldratt Consulting's work.

Bibliography

Arciem LLC. Flying Logic User Guide v3.0.9. Arciem LLC, 2020.

Cawsey, Alison. The Essence of Artificial Intelligence. Pearson Education Limited, 1998.

Daub, Adrian. What Tech Calls Thinking. FSG Originals, 2020.

Fisher, Roger, and William Ury. Getting to Yes. Houghton Mifflin, 1981.

Galef, Julia. The Scout Mindset. Penguin, 2021.

Goldratt, Eliyahu M., et al. The Goal. North River Press, 2012.

Johnstone, Albert A. Rationalized Epistemology. SUNY Press, 1991.

Pearl, Judea, and Dana Mackenzie. The Book of Why. Hachette UK, 2018.

 

  1. ^

In fact, at any time that you can causally affect. At least so far, I have not heard any direct arguments for the immorality of not causing beings to exist.

  2. ^

Altruistic value is the value to others of some effect, outcome, or consequence. I call the value altruistic because I consider it with respect to what is good for others that experience an effect, outcome, or consequence. In different contexts, the specifics of altruistic value will be very different. For example, EA folks talk about measuring altruistic value in QALYs when evaluating some kinds of interventions. Another example is that I think of my altruistic value as present in how I cause others' work to be more efficient in time or energy required. There is the possibility of anti-altruistic value, or harm to others, or evil consequences. Some actions have no altruistic value. They neither contribute to nor subtract from the well-being of others. Those actions have null altruistic value.

  3. ^

    Sorry, Julia, if I butchered your concepts.

  4. ^

    I don't believe that the mind uses an inference system comparable to production rules or forward-chaining or other AI algorithms. However, I think that matching is part of what we do when we gather information or make predictions or understand something. I don't want to defend that position in this critique.

  5. ^

    There could be intuitive matching algorithms put to use that let a person quantify the match they determine between the various causal pathways in their ontology and real-world events. I am just speculating, but those algorithms could serve the same purpose as subjective probabilities in forecasting.

  6. ^

Of course, no matter what confidence value (2%, 95%, very, kinda, almost certainly) you give an assertion, if you don’t believe it, then it’s not your belief. Conversely, if you do believe an assertion, but assert a low confidence in it, then it is still your belief.

  7. ^

    And no, I am not interested in betting in general.

  8. ^

Of course, no matter what confidence value (2%, 95%, very, kinda, almost certainly) you give an assertion, if you don’t believe it, then it’s not your belief. Conversely, if you do believe an assertion, but assert a low confidence in it, then it is still your belief.

  9. ^

    The narrative details determine what's cost effective, but a plausible alternative is that the area is subject to theft from freight trains, and an employee riding the freight train could identify the problem early, or plausibly prevent the thefts, for example with bribes given to the thieves.

  10. ^

    If you’re interested, there are various methods of causal analysis without probabilities. You can find them discussed in a manual from the company behind Flying Logic software, or in books about the Goldratt methods of problem-solving, and perhaps in other places. 

  11. ^

It would be difficult to learn that your actions in your career increase poverty in, or force migration from, a country receiving financial support from an EA charity, particularly when you also contribute a good chunk of your income to that charity. If you work for some banks or finance institutions, then it is plausible that you are causing some of the problem that you intend to correct.

Really interesting!

I get the impression that you do organizational consulting. I have been in various business environments where I watched organizational consultants work from my perspective as an employee. 

I am curious how your approach and ethics let you handle:

  • emperor-wears-no-clothes organizational problems: everyone seems to think some X is really great, but X is a fiction and only you see that.
  • elephant-in-the-room communication situations: there's something everyone knows about, fears, and won't talk about, and it's the problem that needs handling.
  • covert consulting needs: you're consulting, but the problems are so obviously related to leadership or the organization that you either leave or create organizational change covertly, despite whatever management identified as problems to fix.

These situations were a test of consultant integrity, from what I saw, but they also show up in everyday life, where fictions, secrets, or politics conflict with desire for integrity.

Congratulations, Corentin!  Having just read part 1 in detail, I'm looking forward to more of your material. 

Time and scale, as you said, are the biggest concerns around adaptation. A virtue of not adapting with new infrastructure is that we save on the carbon and energy put toward creating that infrastructure. Conservation could help more than anyone believes, but it's the least sexy approach. I want to comment on these issues, but now I don't know where. Should I comment on the part 2 post, or in the google doc, or on the short contest version post?

As a general point, do you believe that, if women did realize the full benefits of receiving education in terms of learning outcomes, they would then be served by their literacy and knowledge of mathematics, physics, politics, government, and software, even if they could not find employment in a career field such as engineering?

Your answer might help me understand your question a little better.

I am also curious about what expert or organization discusses the relationship between poverty and population that you allude to when you write:

To be clear, I'm aware that most experts no longer believe global overpopulation will be an issue, and I think more population growth could be good for the progress of technology. 

I wonder if they are truly against AGI or ASI, or if they just want the safe versions? I am not sure if there are really two positions here (one for AI, one against), or really just one with caveats.

Wow, that is a strong claim!

Could these conscious AI also have affective experience?

EDIT: I gave this a mild rewrite about an hour after writing it to make a few points clearer. I notice I already got one strong disagreement. If anyone would like to offer a comment as well, I'm interested in disagreements, particularly around the value of statements of epistemic confidence. Perhaps they serve a purpose that I do not see? I'd like to know, if so.

Hmm. While I agree that it is helpful to include references for factual claims when it is in the author's interest[1], I do not agree that inclusion of those references is necessarily useful to the reader.  

For example, any topic about which the reader is unfamiliar or has a strongly held point of view is also one that likely has opposing points of view. While the reader might be interested in exploring the references for an author's point of view, I think it would make more sense to put the responsibility on the reader to ask for those references than to force the author to presume a reader without knowledge or agreement with the author's point of view. 

Should the author be responsible for offering complete coverage of all arguments and detail their own decisions about what sources to trust or lines of argument to pursue? I think not. It's not practical or needed.

However, what seems to be the narrative here is that, if an author does not supply references, the reader assumes nothing and moves on. After all, the author didn't use good practices and the reader is busy. On the other hand, so the narrative goes, by offering a self-written statement of my epistemic confidence, I am letting you know how much to trust my statements whether I offer extensive citations or not.

The EA approach of putting the need for references on authors up-front (rather than by request) is not a good practice, and neither is deferring to experts or discounting arguments simply because they are not from a recognized or claimed expert.

In the case of a scientific paper, if I am critiquing its claims, then yes, I will go through its citations. But I do so in order to gather information about the larger arguments or evidence for the paper's claims, regardless of the credentials or expertise that the author claims. Furthermore, my rebuttal might include counterarguments with the same or other citations.

I can see this as valuable where I know that I (could) disagree with the author. Obviously, the burden is on me to collect the best evidence for the claims an author makes as well as the best evidence against them. If I want to "steelman" the author, as you folks put it, or refute the author's claims definitively, then I will need to ask for citations from the author and collect additional information as well.

The whole point of providing citations up-front is to allow investigation of the author's claims, but not to provide comfort that the claims are true or that I should trust the author. Furthermore, I don't see any point to offering an epistemic confidence statement, because such a statement contributes nothing to my trust in the author. However, EA folks seem to think that proxies for rigor and the reader's epistemic confidence in the author are valid.[2]

With our easy access to:

  • scientific literature
  • think-tank distillations of research or informed opinion 
  • public statements offered by scientists and experts on topics both inside and outside their areas of expertise
  • books (with good bibliographies) written about most topics
  • freely accessible journalism
  • thousands of forum posts on EA topics
  • and other sources

I don't find it necessary to cite sources explicitly in forum posts, unless I particularly want to point readers to those sources. I know they can do the research themselves. If you want to steelman my arguments or make definitive arguments against them, then you should, is my view. It's not my job to help you nor should I pretend that I am by offering you whatever supports my claims.

In some cases, you can’t provide much of the reasoning for your view, and it’s most transparent to simply say so.

Well, whether a person chooses not to offer their reasoning or simply can't, if they don't offer it and you want to know what it is, you can conclude that you should ask for it. After all, if I assert something you find disputable, why not dispute it? And the first step in disputing it is to ask for my reasoning. This is a much better approach than deciding whether you trust my presentation based on my write-up of my epistemic status.

For example, here is a plausible epistemic status for this comment:

  • a confident epistemic status: I spent several hours thinking over this comment's content in particular. I thought it through after browsing more than 30 statements of epistemic status presented by EA folks in the last six months. I have more than a decade of experience judging internet posts and articles on their argumentation quality (not in any professional capacity) and have personally authored about 1500-2000 posts and comments (not tweets) on academic topics in the last 20 years. My background includes a year of training in formal and informal logic, more than a year of linguistics study, about a year of personal study of AI and semantic network topics, and an additional (approximate) year of self-study of research methods, pragmatics, and argumentation, all devoted to assertion deconstruction and development, and most applied in the context of responding to or writing forum/blog posts.

Actually, that epistemic status is accurate, but I don't see it as relevant to my claims here. Plus, how do you know it's true? But suppose you thought it was true and felt reassured that my claims might have merit based on that epistemic status. I think you would be mistaken, but I won't argue it now. Interestingly, it does not appear representative of the sorts of epistemic status statements that I have actually read on this forum.

Here is an alternative epistemic status statement for my comment that looks like others on this forum:

  • a less-confident epistemic status: I only spent a few minutes thinking over my position before I decided to write it down [implying that I'm brash]. Plus I wrote it when I was really tired. I'm not an acknowledged expert on this topic, so my unqualified statements are not reliable. Also, I don't know of anybody else saying this. Finally, I'm not sure if I can articulate why I think what I do, at least to meet your standards, which seem high on this forum [implying that I'm intimidated]. So I guess I'm not that confident about my position and don't want you to agree with it easily, particularly since you've been using your approach for so long. I'm just trying to stimulate conversation with this comment.

It seems intended to acknowledge and validate potential arguments against it that are:

  • ad hominem: "[I'm brash and] I wrote this when I was really tired"
  • appeals to authority: "I'm not an acknowledged expert on this topic"
  • ad populum: "I don't know of anybody else saying this"
  • sunk cost: "you've been using your approach for so long"

and the discussion of confidence is confusing. After acknowledging that I:

  • am brash and tired
  • didn't spend long formulating my position
  • am not an expert
  • feel intimidated by your high standards
  • wouldn't want you to reverse your position too quickly

I let you know that I'm not that confident in my assertions here. And then I go and make them anyway, with the stated goal that "I'm just trying to stimulate conversation." When this is turned into a best practice, though, as it is here, I see a different consequence for these statements of epistemic confidence.

What I have learned over my years on various forums (and blogs) includes:

  • an author will offer information that is appealing to readers or that helps the readers see the author in a better light. 

    Either of the epistemic status offerings I gave might convince different people to see me in a better light. The first might convince readers that I am knowledgeable and have strong and well-formed opinions. The second might convince readers that I am honest and humble and would like more information. The second also compliments the reader's high standards.
     
  • readers who are good critical thinkers are aware of fallacies like ad hominem, ad populum, appeals to authority, and sunk cost. They also distrust flattery.

    They know that to get at the truth, you need more information than is typically available in a post or comment. If they have a strong disagreement with an author, they have a forum to contest the author's claims. If the author is really brash, tired, ill-informed, inarticulate, and has no reason for epistemic confidence in his claims, then the way to find out is not to take the author's word for it, but to do your own research and then challenge the author's claims directly. You wouldn't want to encourage the author to validate irrelevant arguments against his claims by asking him for his epistemic confidence and his reasons for it. Even if the author chose on his own to hedge his claims with these irrelevant statements about his confidence, when you decide to steelman or refute the author's argument, you will need different arguments than the four offered by the author (ad hominem, ad populum, appeals to authority, and sunk cost).

 

  1. ^

    I agree that an author should collect their data and offer reliable access to their sources, including quotes and page numbers for citations and archived copies, when that is in the author's interest. In my experience, few people have the patience or time to go through your sources just to confirm your claims. However, if the information is important enough to warrant assessment for rigor, and you as the author care about that assessment, then yeah. Supply as much source material as possible, as effectively as possible. And do a good job in the first place; that is, reach the right conclusions with your best researcher cap on. Help yourself out.

  2. ^

    It should seem obvious (this must have come up at least a few times on the forum), but there's a dark side to using citations and sources, and you'll see it from think tanks, governments, NGOs, and here. The author will present a rigorous-looking but bogus argument. You have to actually examine the sources and go through them to see if they're characterized correctly in the argument, and whether they are unbiased or representative of expert research in the topic area. Use of citations and availability of sources is not a proxy for correct conclusions or argument rigor, but authors commonly use mere access to sources and citations to help persuade readers of their correctness and rigor, expecting that no one will follow up.
