NOTE: This post is being expanded. Meanwhile, you can read a summary of some thoughts here. The comment contains a concise version of my original contest submission.
EA epistemology criticisms and suggestions
-
Introspection is fallible. Prescriptive, descriptive, and normative rules get confused with one another. Appreciate basic understandings and categorical contexts. Otherwise, expect to be ignorant, not merely uncertain.
-
Evidence informs context specification. Evidence weight is arbitrary and subjective.
-
Certainty is felt. Feelings have intensity levels. Felt certainty, truth, and beliefs are distinct. Credences only measure feeling strength, not likelihood. Feelings are not evidence. Feelings can signal, inform, lead, or follow while hidden from introspection.
-
Credence probabilities (de-)emphasize their credence. Ask "What if the probability were much different?" Expect cynical use of credence probability claims. Question their evidential basis.
-
Metaphors encourage their representation. Betting with bookies or at casinos is a vice. Vices encourage self-serving argument. EA folks show risk-seeking tendencies. Vices and risk-seeking combine poorly. Choose your epistemic metaphors carefully.
-
Probabilism allows measures of subjective uncertainty. Uncertainty presumes ignorance. Credence probabilities falsely imply you have knowledge. Communicate ignorance and beliefs, not credences. Explore ignorance and its reasons.
-
Epistemic status declarations invite argument fallacies. Discourage fallacious reasoning. End the practice of epistemic status.
-
Constrain beliefs. Accommodate evidence. Add meaningful caveats. Change important details. Contextualize implications. Notice counterexamples. Acknowledge alternatives. Acknowledge contingency on unknowns. Keep your beliefs.
Example of failure of credence and expected value theory: typical longtermism
-
Longtermism assigns moral status to possible people. Possible people are not existent people. Existent people do or will exist. Existent people have moral status. Never-existent people don't. Future people exist in your beliefs, if at all. Assign their moral status accordingly.
-
Faith precludes decision theory. One timeline contains existent people. Alternate timelines are only beliefs. Possible futures are at best expected preferences. Faith in expected preferences is typical, and antithetical to decision theory application.
-
Typical longtermism rationalizes beliefs in pro-natalism. Consider your beliefs instead. Distinguish pro-natalist desires from expected future people.
-
Control of future people is flawed. Claims of it are false. Establishment of it fails. Or future people lose intrinsic value. Self-efface about controlling future people. Or about undue faith in them.
About ontology, a concept valuable to EA
-
Ontology is beliefs about existence. Ontology involves causes, consequences, parts, and sets. Match your ontology to the world. Make your ontology from the world.
-
Ontology plus information yields knowledge. Engage your beliefs to develop knowledge.
Discussion of Bayesian Mindset in forecasting and planning
-
Decision theory is not pragmatic in everyday life. Contextualize it better. Acknowledge the prevalence of intuition and heuristics. Constrain your belief in decision theory.
-
Gather others' perspectives. Include your foresight and insight. Decide for yourself with all information present.
-
Contingent on preconditions, options appear. Retroduct preconditions, your options, and consequences. Choose options to cause future consequences.
-
An option-less present allows an unknown future. Seek predictive indicators either way. Indicators help you plan.
-
Ontology identifies context. Context contains preconditions. Choice of action defines options. Preconditions and options cause consequences. Foresee future consequences in context.
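As a minimal illustration, here is a hypothetical sketch in Python (the names are my own, borrowing the cement example from later in this post): preconditions gate which options appear, and a chosen option carries a foreseen consequence in its context.

```python
# Hypothetical planning sketch: context supplies preconditions; an option is
# available only when its preconditions hold; a chosen option is foreseen to
# cause its consequence in that context.
context = {"cement_available", "site_access", "funding"}

options = {
    # option name: (required preconditions, foreseen consequence)
    "build_plant": ({"cement_available", "site_access", "funding"}, "sewage_treated"),
    "stockpile_cement": ({"cement_available", "funding"}, "supply_buffer"),
}

available = {name for name, (pre, _) in options.items() if pre <= context}
print(available)  # both options appear, because all their preconditions hold

# Remove a precondition and the dependent option disappears.
context.discard("site_access")
available = {name for name, (pre, _) in options.items() if pre <= context}
print(available)  # only 'stockpile_cement' remains
```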
-
Missing preconditions preclude options but don't falsify a contingent prediction. State preconditions proactively, not in hindsight. Contingent predictions are useful, not signs of failure. They're what we do. We generate and test contingent predictions.
-
Scenarios are not outcomes. Scenario planning involves contexts. Forecast odds reframe contexts. Decision theory organizes repeated tests of contingent predictions with late indicators. Don't test existential harm. Avoid contexts of existential danger.
Additional tools to aid EA inference systems
-
Simple decision tree induction: Gather examples with positive and negative results. Identify features. Tabulate examples and features. Identify the feature that best corresponds to positive results. Remove the examples it covers. Recurse on the remainder. Create a decision tree of features and their results.
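A minimal sketch of that recipe in Python, under simplifying assumptions of my own (binary features, a greedy score for the feature-to-positive-result correspondence); an illustration, not a tuned learner.

```python
def induce_tree(examples, features):
    """Greedy sketch: pick the feature that best corresponds to positive
    results, split off the examples it covers, recurse on the remainder.

    examples: list of (feature_dict, bool_result); features: list of keys.
    Returns a nested dict tree, or a bool leaf (majority result).
    """
    results = [r for _, r in examples]
    if not features or len(set(results)) <= 1:
        return results.count(True) >= results.count(False)  # majority leaf

    # Best feature: most often present exactly when the result is positive.
    best = max(features, key=lambda f: sum(fd[f] == r for fd, r in examples))

    covered = [(fd, r) for fd, r in examples if fd[best]]
    remainder = [(fd, r) for fd, r in examples if not fd[best]]
    rest = [f for f in features if f != best]
    return {best: {True: induce_tree(covered, rest) if covered else False,
                   False: induce_tree(remainder, rest) if remainder else False}}

# Hypothetical examples: does a patient improve, given two binary features?
examples = [({"fever": True, "rash": False}, True),
            ({"fever": False, "rash": True}, False),
            ({"fever": True, "rash": True}, True)]
print(induce_tree(examples, ["fever", "rash"]))
# -> {'fever': {True: True, False: False}}
```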
-
Alternatives to Bayesian inference include: analogical, inductive, or deductive inference; case-based reasoning; generated and tested inferences; autoepistemic and default reasoning; intuition and abduction. Research alternatives. Contextualize them.
-
Missing premises are fundamental. Contingent arguments are valuable. Make, assess, and record arguments. Study critical thinking and argumentation.
-
Beliefs filter, priorities sort, imagination weighs. Beliefs are forgotten. Priorities are ignored. Evidence is misrepresented. Cognitive aids correct memory, calculations, and representations. Use cognitive aids. At least write a list.
Suggestions to aid pragmatic EA communication about beliefs
-
Truth-seeking is not truth-sharing. When honesty is absent, intellectual honesty is not verifiable. Dishonesty encourages dishonesty in return. Honesty helps intellectual honesty.
-
Beliefs communicate danger. Beliefs discuss situations directly. Beliefs let you give directives easily. Distinguish beliefs from hypotheses from conclusions from motives from lies.
-
Communicate contingencies proactively. Of belief. Of predictions. Of plans.
-
Communication establishes feelings, trust, and kinship. Feelings exist with beliefs. Communication builds feelings. Distinguish communication from its reasons.
-
Establish communicative intent proactively. Establish conditions to meet the intent. Use grammatical, semantic, and contextual criteria.
-
Meanings matter. So words matter. Consider this table of alternative words.
| instead of saying | try saying | as in |
| --- | --- | --- |
| is possible | is plausible | It is plausible that... |
| is likely | will happen if | It will happen if... |
| is impossible | is implausible | That is implausible. |
| possibility | context | There is a context that allows... |
| forecast | foresee | I foresee neglect of... |
| forecast | foresight | Your foresight here is... |
| usually | typically | Typically, I do... |
| I bet | I suspect | I suspect you don't have... |
| is probable | is expected | That is expected. |
| risk | danger | The existential danger is... |
| uncertain | unknown | The unknown future of your... |
| am uncertain | feel uncertain | I feel uncertain about... |
| uncertainty | ignorance | I have ignorance about... |
| chance | speculation | There's some speculation that... |
Ethical implications of acknowledging beliefs in EA
-
Removal of credence use from EA philosophy leaves utility, ignorance, and error concepts. Explore alternative weighting schemes and their uses.
-
Actions have consequences. Consequences include harms and benefits. Separate consequences for yourself vs others. Score, scale, and compare consequences.
-
Regret and relief imply belief in consequences. Establish your beliefs. Consider plausible scenarios. Beware feelings about what didn't happen. Make feelings about what did. If you don't know what happened, find out.
-
Selfishness is serving yourself. Altruism is serving others. Ethical tension lies between the two, or between the benefits and harms of both, when expectations succeed. However, situations constrain expectations.
-
Error, failure, and falsehood identification threaten feelings. Credences were a lousy protection. Pursue the Scout Mindset. It is a genuine alternative.
-
Descriptive ethics supply information. Prescriptive ethics serve a goal. Normative ethics predict an outcome. Gather good information first.
-
Moral uncertainty is a small concern. Failures of intellectual honesty, planning, execution, or context are large concerns. But acknowledge selfishness to falsify moral uncertainty.
Below is a shortened version of the original post submitted for the competition. See the post content for newer material.
--------
TL;DR
Introduction: Effective Altruists want to be altruistic
EA community members want to improve the welfare of others through allocation of earnings or work toward charitable causes. Effective Altruists just want to do good better.
There are a couple of ways to know that your actions are actually altruistic:
I believe that EA folks confirm the consequences of their altruistic actions in the near term. For example, they might rely on:
and plausibly other processes to ensure that their charities do what they claim.
The essentials of my red team critique
Bayesian subjective probabilities don’t substitute for unweighted beliefs
The community relies on Bayesian subjective probabilities when I would simply rely on unweighted beliefs. Unweighted beliefs let you represent assertions that you consider true.
Why can’t Bayesian probabilities substitute for all statements of belief? Because:
Distinguish beliefs from conclusions from hypotheses
Use the concept of an unweighted belief in discussion of a person's knowledge. Distinguish beliefs from conclusions from hypotheses, like:
I don’t mean to limit your options. These are just examples.
Challenge the ontological knowledge implicit in a belief
If I say, "I believe that buying bednets is not effective in preventing malaria," you can respond:
I believe that you should elicit ontology and knowledge from those you challenge, rather than letting the discussion devolve into exchanges of probability estimates.
Partial list of relationship types in ontologies
You will want information about entities that participate in relationships like:
I will focus on causal relationships in a few examples below. I gloss over the other types of relationships though they deserve a full treatment.
A quick introduction to ontologies and knowledge (graphs)
An ontology is a list of labels for things that could exist and some relationships between them. A knowledge graph instantiates those labels and relationships. The term knowledge graph is confusing, so I will use the word knowledge as a shorthand. For example:
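Here is a minimal hypothetical sketch of the distinction, in Python, with labels of my own choosing (the bednet names echo examples later in this post):

```python
# Ontology: labels for things that could exist, plus relationship types.
ontology = {
    "entities": ["charity", "intervention", "disease"],
    "relations": [("charity", "funds", "intervention"),
                  ("intervention", "prevents", "disease")],
}

# Knowledge: instantiations of those labels for things believed to exist.
knowledge = [
    ("Against Malaria Foundation", "funds", "bednet distribution"),
    ("bednet distribution", "prevents", "malaria"),
]
```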
I am discussing beliefs about the possible existence of things or possible causes of events in this red team. So do EA folks. Longtermists in Effective Altruism believe in "making people happy and making happy people." Separate from possibilities or costs of longtermist futures is the concept of moral circles, meant to contain beings with moral status. As you widen your moral circle, you include more beings of more types in your moral calculations. These beings occupy a place in an ontology of beings said to exist at some point in present or future time[1]. Longtermists believe that future people have moral status. An open question is whether they actually believe that those future people will necessarily exist. In order for me to instantiate knowledge of that type of person, I have to know that they will exist.
A quick introduction to cause-effect pathways
A necessary cause is required for an effect to occur, while a sufficient cause guarantees the effect but is not required for it. There are also necessary and sufficient causes, a special category.
When discussing altruism, I will consider actions taken, consequences achieved, and the altruistic value[2] of the consequences. Let's treat actions as causes, and consequences as effects.
In addition, we can add to our model preconditions, a self-explanatory concept. For example, if your action has additional preconditions for its performance that are necessary in order for the action to cause a specific consequence, then those preconditions are necessary causes contributing to the consequence.
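A small sketch of that model in Python, with hypothetical predicates: the action is one cause among several, and each missing necessary precondition blocks the consequence.

```python
def consequence_occurs(action_taken: bool, preconditions: dict) -> bool:
    """The action causes the consequence only jointly with its necessary
    preconditions; any missing precondition blocks the effect."""
    return action_taken and all(preconditions.values())

# Hypothetical: paying for cement leads to a built plant only if these hold.
preconditions = {"cement_delivered": True, "site_prepared": False}
print(consequence_occurs(True, preconditions))  # False: a necessary cause is absent
```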
Using ontologies to make predictions
I will offer my belief that real-world prediction is not done with probabilities or likelihoods. Prediction outside of games of chance is done by matching an ontology to real-world knowledge of events. The events, whether actual or hypothetical, contain a past and a future, and the prediction is that portion of the cause-effect pathways and knowledge about entities that exist in the future.
Some doubts about subjective probability estimates
I doubt the legitimacy and meaningfulness of most subjective probability estimates. My doubts include:
To reframe the general idea behind matching ontologies to events, I will use Julia Galef’s well-known model of scouts and soldiers[3]:
An example of a climate prediction using belief filters and a small ontology
This example consists of pieces of internal dialog interspersed with pseudo-code. The dialog describes beliefs and questions. The example shows the build-up of knowledge by application of an ontology to research information. I gloss over all relationships except causal ones, relying on the meanings of ontology labels rather than making relationships like part-whole or meaning explicit.
The ontology content is mostly in the names of variables and what they can contain as values while the causal pathways are created by causal links. A variable looks like x? and → means causes. So x? → y? means some entity x causes some entity y.
Internal dialog while researching: Climate change research keeps bringing tipping point timing closer to the present, as the Global Average Surface Temperature (GAST) increase required to tip natural systems drops in value. Climate tipping points are big and cause big bad consequences. So are there climate system tipping points happening now? Yes, the ice of the arctic melts a lot in the summer.
Knowledge instantiation:
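Rendered here as a minimal Python sketch; the (cause, effect) pairs encode the x? → y? links, and the variable names are my paraphrase of the dialog above:

```python
# (cause, effect) pairs encode "cause? -> effect?".
knowledge = [
    ("GAST_rise?", "arctic_summer_ice_melt?"),  # a tipping process underway now
]
```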
Internal dialog while researching: Discussions about tipping points could be filtered by media or government. So is there near-term catastrophic threat from tipping points? No, but the arctic melt is a positive feedback for other tipping points. The ice-free Arctic: speeds permafrost melt that in turn leads to uninhabitable land and 1+ trillion tons of carbon potentially emitted; causes rapid release of methane hydrates or thermogenic methane from the East Siberian Arctic Shelf that in turn could cause a 1 degree Celsius GAST rise; accelerates Greenland melt that in turn could cause up to 7 meters of sea level rise globally. I estimate 1 meter of sea level rise destabilizes coastal populations and 2 meters of sea level rise will wipe some countries out.
Knowledge instantiation:
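Again as a sketch in the same encoding, paraphrasing the causal pathways from the dialog:

```python
# Further links, extending the knowledge list from the first instantiation.
knowledge += [
    ("ice_free_arctic?", "permafrost_melt?"),
    ("permafrost_melt?", "uninhabitable_land?"),
    ("permafrost_melt?", "carbon_emitted_1_trillion_tons?"),
    ("ice_free_arctic?", "ESAS_methane_release?"),
    ("ESAS_methane_release?", "GAST_rise_1C?"),
    ("ice_free_arctic?", "greenland_melt_acceleration?"),
    ("greenland_melt_acceleration?", "sea_level_rise_up_to_7m?"),
    ("sea_level_rise_1m?", "coastal_populations_destabilized?"),
    ("sea_level_rise_2m?", "some_countries_wiped_out?"),
]
```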
The questions I ask here are fairly simple. Examples include:
The skills that create the internal dialog include:
The expression of those skills is left implicit (or pretended) in my example. The example is not meant to train you or impress you. With practice, scout mindset, and training (e.g., in academic research), you can improve the skills that give you foresight.
Keeping a small ontology or filtering for relevance
A prediction, by my definition, is a best-fit match of an ontology to a set of (hypothetical) events. Your beliefs let you filter out what you consider relevant in events. Correspondingly, you match relatively small ontologies to real-world events. The more you filter for relevance, the smaller the matching requirements for your ontology.[4] You need to filter incoming information about events for credibility or plausibility as well, but that is a separate issue involving research skills and critical thinking skills.
In my example above:
These beliefs, the content of the dialog sections, define the ontology that I instantiate with the information that I gather during research.
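A toy sketch of that filter-then-match loop, with hypothetical terms; real research is messier, but the shape is this: beliefs decide relevance, and only relevant findings get matched into the small ontology.

```python
# Beliefs about relevance act as a filter on incoming information.
relevance_beliefs = {"tipping point", "sea level", "methane", "permafrost"}

def relevant(finding: str) -> bool:
    return any(term in finding for term in relevance_beliefs)

incoming = [
    "new tipping point estimates for permafrost",
    "celebrity statement on climate",
    "methane release from the East Siberian Arctic Shelf",
]

to_match = [f for f in incoming if relevant(f)]
print(to_match)  # 2 of 3 survive; only these get matched into the ontology
```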
Superforecasters curate relevant ontologies using research and critical thinking skills
Superforecasters might be good at:
I suspect that forecasters continue to use subjective probabilities because forecasters are not asked to justify their predictions by explaining their ontology. I understand another term for prediction is foresight. I expect superforecasters in their domains of expertise have good foresight.
I suspect that some domain experts asked to make predictions will develop ontologies larger than the relevant information requires. Then, when they try to match the ontology against real-world information, they fail to collect only relevant data. These experts don't believe much about what qualifies as relevant. The information is there and available, but they fail to make only the right connections.
I am by no means an expert in prediction or its skills. This is my idea of "common-sense" prediction analysis.
Acknowledge that your beliefs decide your morality
How you live your life can be consequential for other people, for the environment, for other species, and for future generations. The big question is “What consequences do you cause?”
I suggest that you discuss assertions about your consequences without the deniability that conclusions or hypotheses allow. Assert what you believe that you cause, at least to yourself. Don’t use the term consequence lightly. An outcome occurred in some correspondence to your action. A consequence occurred because your action caused it[5].
Ordinary pressures to hide beliefs
I believe that there are a number of pressures on discussions of beliefs about moral actions. Those pressures include:
To be a good scout involved in intersubjective validation of your beliefs, don't lie to others or yourself. If you start off by asserting what you believe, then you might be able to align your beliefs with the best evidence available. What you do by expressing your beliefs is:
provided that describing your unweighted beliefs is tolerable to those involved.
One way to make discussing your actual beliefs tolerable to yourself is to provide a means to constrain them. You can limit their applicability or revise some of their implications. Here are two short examples:
You might think these are examples of poor epistemics or partially updated beliefs, not something to encourage, but I disagree.
Simply making belief statements probabilistic implies that we are in doubt a lot, but with reference to unconstrained forms of our original beliefs. Constraining a belief, or chipping away at the details of the belief, is a better approach than chipping away at our confidence in the belief. We curate our ontologies and knowledge more carefully if we constrain beliefs rather than manipulate our confidence in beliefs.
Let's consider a less controversial example. Suppose some charity takes action to build a sewage plant in a small town somewhere. Cement availability becomes intermittent and prices rise, so the charity’s effectiveness drops. Subjective confidence levels assigned for the altruistic value of the charity’s actions correspondingly drop. However, constraining those actions to conditions in which cement is cheap and supply is assured can renew interest in the actions. Better to constrain belief in the action’s effectiveness than to reduce your confidence level without examining why. We'll return to this shortly.
A summary of advantages of unweighted beliefs
At the risk of being redundant, let me summarize the advantages of unweighted beliefs. Preferring unweighted beliefs to weighted beliefs with epistemic confidence levels does offer advantages, including:
The nature of the problem
A community of thinkers, intellectuals, and model citizens pursuing altruism is one that maintains the connection between:
The altruistic mission of the EA community is excellent and up to the task. I will propose a general principle for you, and that’s next.
Altruism valuations lack truth outside their limiting context
If you decide that your actions produce a consequence, then that is true in a context, and limited to that context. By implication, then, you have to ensure that the context is present in order to satisfy your own belief that the action is producing the consequence that you believe it does.
An example of buying cement for a sewage-processing facility
For example, if you’re buying cement from a manufacturer to build a sewage-processing facility, and the payments are made in advance, and the supplier claims they shipped, but the building site didn’t receive the cement, then the charitable contributions toward the purchase of cement for the sewage treatment plant do not have the consequence you believe. Your belief in what you cause through your charity is made false. You don't want to lose your effectiveness, so what do you do? You:
Let's say that cement deliveries are only reliable if you take all those actions. In that case your beliefs about paying for cement change:
You could do something more EAish involving subjective probabilities and cost effectiveness analyses, such as:
If that process also results in a revised true belief, then good enough. There might be other advantages to your seemingly more expensive options that actually reduce costs later[8].
You can take your causal modeling of your actions further[9]. You can:
Your altruism is limited to the contexts where you have positive consequences
Yes, your actions only work to bring about consequences in specific contexts in which certain preconditions hold. You cannot say much about your altruism outside those contexts unless you consider your actions outside those contexts. Effective altruists leverage money to achieve altruism, but that’s not possible in all areas of life or meaningful in all opportunities for altruistic action. This has implications that we can consider through some thought experiments.
Scoring altruistic and selfish consequences of actions
Actions can be both good and evil in their consequences for yourself or others. To explore this, let's consider a few algebra tools that let you score, rank, scale, and compare the altruistic value of your actions in a few different ways. I think the following four-factor model will do.
Here’s a simple system of scoring actions with respect to their morality, made up of a:
Let's use a tuple of (benefit score, harm score, self-benefit score, self-harm score) to quantify an action in terms of its consequences. For my purposes, I will consider the action of saving a life to have the maximum benefit and self-benefit and give it a score of 10. Analogously, the maximum harm and self-harm score is also 10. All other scores relative to that maximum of 10 are made up.
A Thought Experiment: Two rescues and a donation
Three altruistic scenarios
In tuple form:
The distances between actions measure how close together the rescues are in terms of their scores. For example, the fire rescue is 13.7 points away from the drowning rescue but 980 points away from the bednet donation.
The subtractions of actions show the differences between actions per factor. For example, the bednet donation is 980 points of benefit higher than the Fire Rescue.
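A sketch of both operations in Python. The tuples below are hypothetical stand-ins of my own, chosen only so that the benefit gap matches the 980-point difference mentioned above; they are not the full scores from the thought experiment.

```python
from math import dist  # Euclidean distance between score tuples (Python 3.8+)

# (benefit, harm, self-benefit, self-harm); hypothetical illustrative scores.
fire_rescue = (20, 0, 2, 8)       # two lives saved, at real personal risk
drowning_rescue = (10, 0, 2, 3)
bednet_donation = (1000, 0, 0, 0.2)

print(dist(fire_rescue, drowning_rescue))  # how close the two rescues are
print(dist(fire_rescue, bednet_donation))  # dominated by the benefit gap

# Per-factor subtraction: the difference between two actions on each factor.
diff = tuple(b - f for b, f in zip(bednet_donation, fire_rescue))
print(diff)  # (980, 0, -2, -7.8): the donation is 980 benefit points higher
```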
Some legitimate concerns about this thought experiment
Disagreements over these thought experiments could include:
If you have these concerns, then you can bring them up to improve your use of such a thought experiment. For example, you could disallow absolute 0 values of any individual score.
The influence of beliefs about causation on your sense of responsibility
EA folks complain that they suffer depression or frustration when they compare their plans for personal spending against the good they could do by donating their spending money. Let’s explore this issue briefly.
Consider the woman who donated to Against Malaria for 200 bednets, with 1000 points of altruistic benefit and 0.2 points of self-harm associated. Suppose she needed a vacation, and decided to withhold the donation money and spend it on a week-long spa vacation, gaining 3 points of self-benefit. If she felt guilty for doing so, is that because she believed that she caused 200 malaria infections or failed to prevent them from happening by her choice of action?
The question comes down to what she believes about what she causes. Does the spa trip deserve a scoring of (0,0,3,0), or of (0,1000,3,0)? Did she cause 200 people to contract malaria or not, when she took her spa trip?
All this example illustrates is our limitations in assigning ourselves a causal role in events. In the earlier example, if you caused 200 people to contract malaria, you have reason to feel guilty. On the other hand, spending some money on a vacation doesn't give anyone malaria, and certainly isn't cause for guilt, all other things equal.
I have emphasized beliefs throughout this red team exercise because:
The flow of time carrying your actions defines a single path of what exists
Your actions precede their consequences. What you believe that you cause is found in the consequences of your actions. You can look forward in time and predict your future consequences. You can look at diverging tracks of hypothetical futures preceded by actions you could have taken but did not take. However, those hypothetical futures are nothing more than beliefs.
On those hypothetical tracks, you can perceive the consequences of what you could have done, like the woman in the fire rescue who elected to call 911 and wait for the fire truck to arrive. Later on, she's haunted by what would have happened if she had rushed into the building and carried out two living healthy children instead of letting them die in a fire. What would have happened is not an alternate reality though. It is just a belief. The poor woman, haunted by guilt, will never know if her belief is true.
Anyone suffering regret or enjoying relief about what they didn't do or did do is responding to their beliefs, not some alternate set of facts or escaped reality. Treat ideas of alternative past, present or future events as beliefs.
Conclusion: This was a red-team criticism, not just an exploration of ideas
If I did my job here, then you have understood:
A renewal of unqualified use of assertions that indicate unweighted belief will make it easier for you to apply useful critical thinking and prediction skills.
I offered a few recommendations, including that you:
I attribute your assessment of your action’s consequences to your beliefs, not your rationality, regardless of how you formed that assessment and any confidence that you claim in it.
I believe that you can succeed to the benefit of others. Good luck!
Bibliography
Arciem LLC, Flying Logic User Guide v3.0.9. Arciem LLC, 2020.
Cawsey, Alison, The Essence of Artificial Intelligence. Pearson Education Limited, 1998.
Daub, Adrian, What Tech Calls Thinking. FSG Originals, 2020.
Fisher, Roger, and William Ury, Getting to Yes. Houghton Mifflin, 1981.
Galef, Julia, The Scout Mindset. Penguin, 2021.
Goldratt, Eliyahu M., et al., The Goal. North River Press, 2012.
Johnstone, Albert A., Rationalized Epistemology. SUNY Press, 1991.
Kowalski, Robert, Computational Logic and Human Thinking. Cambridge University Press, 2011.
Pearl, Judea, and Dana Mackenzie, The Book of Why. Hachette UK, 2018.
In fact, any time that you can affect causally. At least, so far, I have not heard any direct arguments for the immorality of not causing beings to exist.
Altruistic value is the value to others of some effect, outcome, or consequence. I call the value altruistic because I consider it with respect to what is good for others that experience an effect, outcome, or consequence. In different contexts, the specifics of altruistic value will be very different. For example, EA folks talk about measuring altruistic value in QALYs when evaluating some kinds of interventions. Another example is that I think of my altruistic value as present in how I cause others' work to be more efficient in time or energy required. There is the possibility of anti-altruistic value, or harm to others, or evil consequences. Some actions have no altruistic value. They neither contribute to nor subtract from the well-being of others. Those actions have null altruistic value.
Sorry, Julia, if I butchered your concepts.
I don't believe that the mind uses an inference system comparable to production rules or forward-chaining or other AI algorithms. However, I think that matching is part of what we do when we gather information or make predictions or understand something. I don't want to defend that position in this critique.
Of course, no matter what confidence value (2%, 95%, very, kinda, almost certainly) you give an assertion, if you don’t believe it, then it’s not your belief. Conversely, if you do believe an assertion, but assert a low confidence in it, then it is still your belief.
And no, I am not interested in betting in general.
Beliefs about cause-effects are not scientific assertions about the truth of those cause-effects. Instead, they have all the legitimacy of anything else you believe, including when you assign a subjective probability to that belief.
The narrative details determine what's cost effective, but a plausible alternative is that the area is subject to theft from freight trains, and an employee riding the freight train could identify the problem early, or plausibly prevent the thefts, for example with bribes given to the thieves.
If you’re interested, there are various methods of causal analysis without probabilities. You can find them discussed in a manual from the company behind Flying Logic software, or in books about the Goldratt methods of problem-solving, and perhaps in other places.
It would be difficult to learn that your actions in your career increase poverty in, or force migration from, a country receiving financial support from an EA charity, particularly when you also contribute a good chunk of your income to that charity. If you work for some banks or finance institutions, then it is plausible that you are causing some of the problem that you intend to correct.
There could be intuitive matching algorithms put to use that let a person quantify the match they determine between the various causal pathways in their ontology and real-world events. I am just speculating, but those algorithms could serve the same purpose as subjective probabilities in forecasting. The study of case-based reasoning could offer some insights.