Thank you for your comment. I think that in some cases, subconscious calculations of expected value motivate actions.
But I don't think that expected value calculations faithfully (reliably or consistently) represent a person's degree of conviction (or confidence) that an outcome occurs given the person's actions.
In particular, I suggest that something else is at work when people claim that their decision is to pursue an expected value built from very low probability and very high value:
What I don't think they have is a strong expectation that they will fail. We are not wired to meaningfully pursue outcomes that we believe, really believe, will not occur. What they should have, if their probability estimate of success gets really low, is a true expectation that their efforts will fail. In some cases they lack that expectation because, subconsciously, they hold a simple, clear, and vivid idea of that unlikely outcome. They pursue it even though the pursuit is high cost and the outcome is virtually impossible.
Hi Brian, short answer, yes. Of course.
Look into jellyfish as a food source.
The death of the oceans is progressive and predictable, if you assume its causes continue into the future.
Sure, I think you guys (and those folks at the UN) and the general topic of food security are incredibly important.
I am not working in this area professionally, nothing even close to it.
Bivalves are a big part of the US fishing industry. You can explore some of the risks to them by looking over the recent history of their cultivation in the US and globally.
Ocean acidification, waste water outlets, garbage dumping, and storm water runoff are threats to pop-up farms over the next few decades. After that, acidification combined with temperature and pollution could be too damaging, either to farming efforts or to the quality of the food.
Ocean currents near shore can produce lower pH in coastal waters (e.g., 7.65 on the west coast of the US, from upwelling of colder, more acidic water, as opposed to the global average of 8.1, the consensus figure). Bivalves are sensitive to increased acidity of ocean water: their fertilization rates decrease and their juvenile mortality increases. There might be an effect on their maturation size as well.
One estimate pins average ocean pH at 7.8 by 2094. It is currently 8.1 and dropping. That average masks wide variation in the availability of carbonate and calcium ions for shell formation in different waters across the globe. Heat maps show the largest declines in carbonate availability near the poles, with unequal distributions around the equator. Measurement data from 2006, I think, shows recent changes in carbonate chemistry occurring in the top 200 meters of the ocean, where marine ecosystems are most productive.
I believe that marine biologists would agree that the loss of shell-forming organisms would create a ripple effect throughout the world's oceans. The discussions I have reviewed so far suggest that sea butterflies, shell-forming marine animals that are food for larger fish we know, will die out under certain environmental stresses, emptying the ocean of their predators. That pathway to a die-off of marine life is identified repeatedly, maybe because it matters to the commercial fishing industry. Without sea butterflies, the major food source for fish that we like to eat will be gone. The question then is whether ocean chemistry will allow widespread bivalve production for a significant time period.
I can't find consensus estimates of the timing of marine life die-offs triggered by the loss of sea butterflies. NOAA models of average pH change suggest that pH reaches 7.8 before the end of the century. That is below the point where the shell dissolves off a sea butterfly's (sea snail's) body. Is that pH enough to kill all bivalves? I don't know, but you could probably answer that question easily.
Interestingly, the only public claim I could find of the likely death of marine life as a whole within this century credits multiple simultaneous stresses: a massive poisoning of plankton by pollutants riding on the micro-plastics that plankton consume, combined with a loss of shell-bearing organisms at a global average pH of just 7.95. The source of that claim pins the outcome as occurring by 2050. That is not a consensus opinion, but there's not much to contradict it, just lack of research and lack of attention. An implication of that claim is that the ocean would no longer supply oxygen to the atmosphere.
EDIT: there is one area of some consensus: that the coral reefs of the world will all be lost by 2050. That is a tipping point for ocean ecology.
Given the lead time of any plan to increase bivalve farming, jellyfish might do better as food from the ocean once the larger problem is recognized, after the first global famine of the century. This century will probably bring multiple food shortages, mishandling of those shortages, and lack of preparation for further shortages, approaching a global famine at least once and probably twice.
In the meantime, people everywhere will probably prefer fish like salmon and tuna and shrimp as seafood rather than exclusively bivalves.
Speaking for myself, I'm allergic to shellfish, and bivalves are a common allergen food. After trying some cricket flour in a protein bar, I developed a case of hives. Apparently a shellfish allergy can imply an insect allergy because the two groups share similar allergenic proteins.
"There is a substantial philosophical literature on such topics that I will not wade into, and I believe such non-value-based arguments can be mapped onto value-based arguments with minimal loss (e.g., not having a duty to make happy people can be mapped onto there being no value in making happy people)."
Duty to accomplish X implies much more than an assessment of the value of X. To lack the (moral, legal, or ethical) obligation to bring about a state of affairs does not imply that the state of affairs has no value to you or others.
Don't consider the act of choosing to be an action that is itself subject to an altruistic value score over its potential consequences. By potential consequence I mean a consequence that you actually believe will follow. For some, such consequences would include all the actions that you did not take.
Keep in mind that altruistic value consequences are based on self-reports. Altruistic value calculations are what you do for yourself with yourself.
The initial sentences play on the word "authority". Barracuda implies that authority is a name for those with resources used in EA causes, that EA folks have resources, and that their elevated authority is something they prefer to keep while sharing only their wealth. Barracuda states that EA efforts are not intended to further causes associated with social justice or democracy, but only socioeconomic equality or health.
Basically, I take the criticism to be that EA depends on, or does not address or correct, political inequality.
I bought a GPU some years ago. My belief is that its consequences were negligible or a small evil, so mildly anti-altruistic.
If I were a gamer, my gaming would not contribute to the welfare of others. Again, gaming would be a selfish act or a small evil.
To establish the altruism of the consequences of the GPU purchase (and use), I score its consequences as I see them. I'm not that sophisticated, so I rely on a two-axis analysis. The positive X-axis is positive altruism; the negative Y-axis is negative altruism, or anti-altruism. So X is how good, negative Y is how evil. X goes to 100, Y to -100. Off the top of my head, I'm going with (0,-2) for the GPU purchase: no altruistic consequences, but a few mildly evil ones.
To compare the altruistic value of the consequences of the GPU purchase with those if I do not purchase it, I calculate a distance between the two scores. I need some sense of what I would do without the GPU; I assume I simply go on with my lifestyle minus the GPU purchase. The desktop computer that I purchased anyway becomes e-waste, has a similar origin (and so contributes to similar exploitation), and again my use, it turns out, doesn't really benefit anyone else, so (0,-6): it's 3x the e-waste of the GPU, and again my purchase encouraged electronics manufacturing only negligibly, because I bought everything new, but so did millions of others.
To compare apples to apples, I need to compare the altruistic value of the computer purchase with the GPU to that of the computer purchase without the GPU. Relying on simple addition of component values, (0,-8) is the score of the computer purchase with the GPU, compared to (0,-6) without it. I can calculate a Euclidean distance between them: it's just 2. The GPU purchase alone didn't change the consequences much between the two actions.
I can also compare the two options in terms of scale, (0,-8) and (0,-6). Here I feel my math suffers for lack of options, so for now I'm going with a comparison of the distances of the two points from (0,0) to decide the scale of each action I want to compare. The two actions are: computer purchase with GPU, and computer purchase without GPU. I can say that the purchase of a computer with a GPU is 33% (8/6) more evil than the purchase of a computer with just the mobo, CPU, power supply, keyboard, and mouse.
Explaining this took a lot longer than writing down (0,-6), (0,-8), 2, and 33%. The numbers are relative, subjective, and controversial, and that's why I suggested this analysis for the EA community: the numbers might have more value to collective decision-making as intersubjective values. Remember, this is on a scale of magnitudes from 0-100 on each axis. For example, on the EA forum, someone might give me information that a chunk of e-waste independently raises the risk of cancer in 3 people to 1/12. Then I could factor that in: "Hmm, my new computer purchase with a GPU, at least 4 chunks of e-waste, causes cancer in some person later", so now the scores are (0,-60) and (0,-45). It wouldn't matter so much what specific numbers I choose, but more that the mathematics of my choice decide a very different altruistic value for GPU purchases (and electronics purchases in general) than before. Armed with my new information, I might decide to buy a used computer and start dividing the consequences of its eventual turn to e-waste with its previous owner.
Or if I decided that my computer use was altruistic, "Hmm, I did some research with it that saved some people from some unnecessary suffering in their lives", then the scores might be (15,-8) and (15,-6), for example, with a distance of 2 between the points but a smaller scale difference of about 5% (17/16.16). Now the GPU purchase has less influence on the overall impact of my computer purchase because of how I used the computer. If the GPU purchase enabled some specific altruistic use of my computer, then that percentage difference in altruistic value scale (size) would start going up and so would the distance of altruistic value between the two purchase options. Interestingly, if I knew that my purchasing a new computer effectively gave someone else cancer later, then my altruistic use of the computer is obviously inadequate to justify the purchase. Food for thought.
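For what it's worth, the two comparisons above can be sketched in a few lines of Python. The helper names are mine, and the score points are just the (good, evil) pairs from my examples, not part of any EA tooling:

```python
import math

def distance(a, b):
    """Euclidean distance between two (good, evil) score points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def scale(p):
    """Magnitude of a score point: its distance from the origin (0, 0)."""
    return math.hypot(p[0], p[1])

# Computer purchase with vs. without the GPU, no altruistic use:
with_gpu, without_gpu = (0, -8), (0, -6)
print(distance(with_gpu, without_gpu))        # 2.0
print(scale(with_gpu) / scale(without_gpu))   # ~1.33, i.e. 33% "more evil"

# Same comparison once the computer use earns an altruism score of 15:
with_gpu2, without_gpu2 = (15, -8), (15, -6)
print(distance(with_gpu2, without_gpu2))      # still 2.0
print(scale(with_gpu2) / scale(without_gpu2)) # ~1.05, a much smaller scale gap
```

Notice that the distance between the two options stays fixed at 2, while the scale ratio shrinks as soon as both options share a positive altruism component, which is exactly the effect I described.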
Here are a few final thoughts:
So Jackson, thanks for your interest and comments. I analyzed a GPU purchase, mine. I hope you found it interesting.
"Analyzing the ethical impact of everyday decisions (like about where to live, how to commute, what to eat, who to vote for, etc) is essentially a pitch for "microprojects", and would be more suited to a world where there were very many more people interested in EA but much less funding available."
Hmm, yes. Pragmatically, I wouldn't want to insult the ethics of wealthy charitable givers when their contributions can count for so much and they will earn their money however they do. I see that my suggestion is naive and possibly a poor fit to the EA community.
Thank you, Karthik
I don't have much time and don't expect much attention regardless of my time input to writing about this topic. It is boring, frankly. I am a boring writer. The best that I can do is keep it short.
Altruistic value is not objectively measurable. If a creature like God existed, then she could judge the altruistic value of actions in terms of their consequences. Everyone else makes do with unreliable mental models that are bound by uncertain future circumstances.
As a brief thought experiment, if you have a sense that an action (for example, a large donation to a reliable effective charity) is altruistic, then you have made a judgement of the altruistic value of that donation. Other actions, in fact, all actions, are vulnerable to the same thought experiment. The only result is to make explicit what you already think.
I could offer my sense of true failings of the EA community to make better judgements among specific available options of behavior in certain situations, but those would be context bound, controversial, and with results that I don't think would be worth my time. Besides, I don't care, per se, whether the EA community continues to have blind spots about certain common evil actions and continues to perform them. It's a big world.
I just heard about this contest and thought, hmmm, how to summarize a helpful suggestion for improvement to EA, a little thought experiment of my own.
Sorry I could not put in the effort that I see others do here, but I promise you that my efforts are well-intended and sincere.
Ideas to improve the Effective Altruism movement include:
* include scoring, ranking, and distance measures of the altruistic value of the outcomes of all personal behaviors, including all spending behaviors.
* research the causal relations between personal behaviors and the altruistic value of their consequences.
* treat altruistic value as a relative and subjective metric with positive, null, and negative possible values.
* provide public research and debate on the size and certainty of altruistic values assigned to all common human behaviors (by individual EA practitioners).

Successful implementation of these ideas yields:
* robust maps of the consequences of all personal behaviors and their relative altruistic value.
* an end to context-limited assessments of one's effective altruism over one's life so far.

-Noah