All of PaulCousens's Comments + Replies

This year, I donated to NPR, Women's March, Zendo Project (started donating to them a few months ago), and UNHCR (started donating earlier this year).

I don't have large amounts to donate, so I don't have much of a donation plan. My donation decisions are mostly made on a whim.

I first started donating to NPR because I was listening to their podcasts and listening to their radio station often. I still do, but not as often.

I first started donating to Women's March because I feel aligned with their socioeconomic goal of more equity.

Not sure why I started... (read more)

Yes, I think you are right. Sorry, I made too broad of a statement when I only had things like strength and speed in mind.

I think it's true that its utility is limited. It was just a first impression that occurred to me, and I haven't thought it through. It seemed like anthropomorphizing AI could consistently keep people on their toes with regard to AI. An alternative way to become wary of AI would be less obvious thoughts, like an AI that became a paperclip maximizer. However, growing and consistently having priors about AI that anthropomorphize them may be disadvantageous by constraining people's ability to have outside-of-the-box suspicions (like what they already be covert... (read more)

I have that audiobook by Deutsch and I never thought of making that connection to longtermism. 

I am reminded of the idea of the ruliad, where a species' perspective is just a slice of the rulial space of all possible kinds of physics.

I am also reminded of the AI that Columbia Engineering researchers had that found new variables to predict phenomena we already have formulas for. The AI's predictions using the variables worked well, and it was not clear to the researchers what all of the variables were.

That discoveries are unpredictable and the two things I ... (read more)

As I understand it, Will MacAskill pointed out in Doing Good Better that people doing such low-pay work are actually utilizing a relatively great opportunity in their country, and that the seemingly low pay is actually valuable in their country.

I think it is hybrid because it involves both forecasting and persuading others to think differently about their forecasts.

1
IanPitchford
2y
Of course! I must engage my brain from time to time. I was drawing an automatic comparison to the Hybrid Forecasting Competition, which aimed to leverage “the relative strengths of humans and machines”. That one was interesting. https://www.dni.gov/index.php/newsroom/press-releases/item/1785-iarpa-launches-hybrid-forecasting-competition-to-improve-predictions-through-human-machin

I would be interested in writing summaries of books. I did this with two books that I read within the past two years, The Human Use of Human Beings and Beyond Good and Evil. I imagine that I might have excluded many things that I expected myself to easily remember as following logically or being associated with what I did write down. For The Human Use of Human Beings, I tried to combine several of the ideas into one picture. I think what I had in mind was to put all the ideas of the book into a visual dashboard (I did not complete such a visual... (read more)

Part of the trap is that once you’re in the trap trying and failing to get out of it doesn’t help you much, so traits that would help in abundance don’t have a hill they can climb.

Can you clarify what you mean by this? I didn't follow you after you wrote "so traits that would help in abundance don’t have a hill they can climb."

 

I think maybe you meant that appreciation of the worth of money is valuable only until you fall into the trap of spending too much of it. Once you fall into that trap, appreciation of its worth won't be helpful to you.

I did not find your blog post about moral offsetting offensive or insensitive. Your explanations of evolutionary reasons for why we have such visceral reactions to rape, to me, addressed the moral outrageousness with which rape is associated. Also, you clearly stated your own inability to be friends with a rapist. Philosophical discussions are probably better when they include sensitive issues, so they can have more of an impact on our thought processes.

Also, there was another post on here in which it was mentioned that a community organizer could ... (read more)

Before I read about the results of the study, my a priori assumptions were that the money wouldn't help because of bills but that some kind of benefit must come out of it.

Without a reliable source of income, even if they did not have many bills, it is hard to see how even $2,000 could help in the long term.

To me, it seems that an unconditional cash transfer that helps temporarily but not in a long-term way might make people feel worse by making the counterfactual of being better off more vivid. The $500 or $2,000 unconditional cash tran... (read more)


There may also be a significant secondary effect of subsequent decisions by the Supreme Court becoming more ambitious. For example, Clarence Thomas has already said he thinks rights for same-sex marriage should be targeted. Also, recent cases allowed a redistricting map that disfavors minorities, allowed looser environmental protection, and made it easier to carry a gun in New York, and one case under consideration could change the balance of power within states in favor of Republicans with regard to the states' election laws.

So in the case of same-sex marriage rights, ther... (read more)

Regarding titles, ultimately, the content is what is most important. Titles can be just a way to remember where a certain blog post is or to tell someone else where it is. Even if it doesn't make sense initially, I imagine that after your content is read, the reader will likely see how the title fits the content.

Regarding coming across as a "know-it-all," I would say just put in caveats and notes about the limitations of your knowledge. Perhaps you could make the posts somewhat open-ended in that regard and edit them later with updates.

Regarding reada... (read more)

I recently told myself that I would never eat any animal product again, and I have been trying to buy things that are not made from animals or tested on animals. 

The main reason for my veganism is that I can have such a diet and not miss animal products at all, so why not? I am not certain/convinced of the impact of my lifestyle decision. I do think that if a significant number of people adopted such a lifestyle, it would have a huge impact on factory farming. My understanding is that, in other places, veganism is not as convenient/feasible as it is in the Unit... (read more)

The existence of digital people would force us to anthropomorphize digital intelligence. Because of that, the implications of any threats that AI may pose to us might be more comprehensively visible and more often in the foreground of AI researchers' thinking.

Maybe anthropomorphizing AI would be an effective means of seeing the threats AI poses to us, because we have posed many threats to ourselves, through war for example.

2
Erhannis
2y
That seems useful up to a point - I feel like many think "Well, the AI will just do what we tell it to do, right?", and remembering the many ways in which even humans cheat could help expose flaws in that thinking.  On the other hand, anthropomorphizing AI too much could mean expecting them to behave in human-like ways, which itself is likely an unrealistic expectation.

That's an interesting way of looking at it. That view seems nihilistic, and it could lead to hedonism, since if our only purpose is to make sure we completely destroy ourselves and the universe, nothing really matters.

2
Yitz
2y
I don’t think that would imply that nothing really matters, since reducing suffering and maximizing happiness (as well as good ol’ “care about other human beings while they live”) could still be valid sources of meaning. In fact, ensuring that we do not become extinct too early would be extremely important to ensure the best possible fate of the universe (that being a quick and painless destruction or whatever), so just doing what feels best at the moment probably would not be a great strategy for a True Believer in this hypothetical.

I read this post about Thomas Ligotti on LessWrong. So far, it hasn't been that disconcerting for me. I think that because I read a lot of Stephen King novels and some other horror stories when I was a teenager, I would be able to read more of his thoughts without being disconcerted.

If I ever find it worthwhile to look more into pessimistic views on existence, I will remember his name.

That is a good point. I was actually considering that when I was making my statement. I suspect self-delusion might be at the core of the belief of many individuals who think their lives are net positive. In order to adapt to/avoid great emotional pain, humans might self-delude when faced with the question of whether their life is overall positive.

Even if it is not possible for human lives to be net positive, my first counterargument would still hold for two different reasons.

First, we'd still be able to improve the lives of other species.

Second, it w... (read more)

1
AnaDoe
2y
Note, however, that (a) Ligotti isn't a philosopher himself, he just compiled some pessimistic outlooks, representing them the way he understood them, and (b) his book is very dark and can be too depressing even for another pessimist. I mean, proceed with caution, and take care of your mental well-being while getting acquainted with his writings; he's a reasonably competent pessimist but a renowned master of, for lack of a better word, horror-like texts :)
1
Yitz
2y
One possible “fun” implication of following this line of thought to its extreme conclusion would be that we should strive to stay alive and improve science to the point at which we are able to fully destroy the universe (maybe by purposefully paperclipping, or instigating vacuum decay?). Idk what to do with this thought, just think it’s interesting.

In David Deutsch's The Beginning of Infinity: Explanations That Transform the World there is a chapter about infinity in which he discusses many aspects of infinity. He also talks about the hypothetical scenario that David Hilbert proposed of an infinity hotel with infinitely many guests, infinitely many rooms, etc. I don't know which parts of the hypothetical scenario are Hilbert's original idea and which are Deutsch's modifications/additions/etc.

In the hypothetical infinity hotel, to accommodate a train full of infinitely many passengers, all existing guests are asked ... (read more)
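For concreteness, here is a minimal sketch in Python of the standard room-reassignment trick usually given for Hilbert's hotel, checked only on a finite prefix of rooms since we can't enumerate infinitely many; I'm not sure whether this is the exact variant Deutsch uses:

```python
# Standard Hilbert's-hotel move: every current guest in room n moves to room 2n,
# freeing all odd-numbered rooms for the infinitely many new passengers.
# We can only verify the idea on a finite prefix of the rooms.

def reassign_current_guest(room: int) -> int:
    """The guest currently in room n moves to room 2n (an even room)."""
    return 2 * room

def room_for_new_passenger(passenger: int) -> int:
    """The k-th new passenger (k = 1, 2, 3, ...) takes the k-th odd room."""
    return 2 * passenger - 1

# Check on the first N guests and N passengers that nobody shares a room.
N = 1000
occupied = {reassign_current_guest(n) for n in range(1, N + 1)}
occupied |= {room_for_new_passenger(k) for k in range(1, N + 1)}
assert len(occupied) == 2 * N  # 2N people, 2N distinct rooms, no collisions
```

Every current guest still has a room, every new passenger gets one, and no room is double-booked, which is the counterintuitive point of the scenario.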

2. Negative Utilitarianism

    This is the view that, as utilitarians (or, more broadly, consequentialists), we ought to focus on preventing suffering and pain as opposed to cultivating joy and pleasure; making someone happy is all well and good, but if you cause them to suffer then the harm outweighs the good. This view can imply anti-natalism and is often grouped with it. If we prevent human extinction, then we are responsible for all the suffering endured by every future human who ever lives, which is significant.

Taking that further

It migh... (read more)
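As a toy illustration of the negative-utilitarian asymmetry described above (this is my own simplified additive model, not anything from the post):

```python
# Classical utilitarianism sums happiness minus suffering; a strict negative
# utilitarian counts only the suffering term, so no amount of joy offsets harm.

def classical_value(happiness, suffering):
    return sum(happiness) - sum(suffering)

def negative_util_value(happiness, suffering):
    # Happiness is deliberately ignored in this strict version.
    return -sum(suffering)

# A life with a lot of joy and a little suffering:
joys, pains = [10.0, 8.0, 9.0], [3.0]
print(classical_value(joys, pains))      # 24.0 -> clearly positive on the classical view
print(negative_util_value(joys, pains))  # -3.0 -> still negative on the strict view
```

On the strict version, preventing the existence of any sufferer looks good regardless of how much happiness is forgone, which is how the view connects to anti-natalism.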

2
Yitz
2y
Do we know this? Thomas Ligotti would argue that even most well-off humans live in suffering, and it’s only through self-delusion that we think otherwise (not that I fully agree with him, but his case is surprisingly strong)
2
Anthony Fleming
2y
Great points. If you assume a negative utilitarian worldview, you can make strong arguments both for and against human extinction.

A suggested explanation for our indifference

During a cursory reflection on my own perspective of insects after reading this, it occurred to me that maybe interpretable behavior and reactions are what reach out to our minds and cause emotions.

 Animals like cats, dogs, and hamsters experience the environment like we do. Similar things are perceived as threats, resources, etc. So while they do not talk to us, their actions and reactions are easy to empathize with. Their actions and reactions can speak to us in a way, telling us that they are con... (read more)

The better view of utilitarianism involves leveling up: taking all the warmth and wonder and richness that you’re aware of in your personal life, and imaginatively projecting it into the shadows of strangers.

I would take it further and say that utilitarianism levels up your compassion for both those close to you and those distant from you. By becoming aware of the reasons that were already there, as you say, your appreciation of those reasons can become deeper for both sets of people.

I am not sure whether my worldview is strictly utilitarian. However, my w

... (read more)

Try to sell me on working with large food companies to improve animal welfare, if I’m a vegan abolitionist.

There is more political traction on improving animal welfare in large food companies than there is in ending systematic slaughter and abuse of animals completely. 

Becoming aware of the harm one is causing and then undoing that harm can lift the blinds that were hiding your seemingly innocuous everyday actions. Having large food companies improve animal welfare can increase the sensitivity of those within the companies to animal harm. These people

... (read more)

I disagree with the claim that if we do not pursue longtermism, then no simulations of observers like us will be created. For example, I think an Earth-originating unaligned AGI would still have instrumental reasons to run simulations of 21st-century Earth. Further, alien civilizations may have an interest in learning about other civilizations.

Maybe it is 2100 or some other time in the future, and AI has already become superintelligent and eradicated or enslaved us since we failed to sufficiently adopt the values and thinking of longtermism. They might be runni... (read more)

The simulation dilemma intuitively seems similar to Newcomb's Paradox. However, when I try to reason out how it is similar, I have difficulty. They both involve two parties, with one having a control/information advantage over the other. They both involve an option with a guaranteed reward (hedonism or the $1,000) and one with an uncertain reward (longtermism or a possible $1,000,000). They both involve an option that would exclude one of two possibilities. How the prediction of a predictor in Newcomb's Paradox that may exclude one of two possibilities dir... (read more)
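To make the "guaranteed $1,000 versus uncertain $1,000,000" comparison concrete, here is a minimal sketch of the usual expected-value calculation for Newcomb's Paradox; the predictor-accuracy parameter is an assumption I'm adding for illustration, not something from the original problem statement:

```python
# Expected value of one-boxing vs. two-boxing in Newcomb's Paradox,
# as a function of how often the predictor is assumed to be right.

def one_box_ev(accuracy: float) -> float:
    # If the predictor was right about you one-boxing, the opaque box holds $1,000,000.
    return accuracy * 1_000_000

def two_box_ev(accuracy: float) -> float:
    # You always take the visible $1,000; the $1,000,000 is only there
    # if the predictor was wrong about you two-boxing.
    return 1_000 + (1 - accuracy) * 1_000_000

for p in (0.5, 0.9, 0.99):
    print(f"accuracy={p}: one-box EV={one_box_ev(p):,.0f}, two-box EV={two_box_ev(p):,.0f}")
```

With a reliable enough predictor, one-boxing dominates in expectation, which is roughly the shape of the "hedonism now versus a possible much larger longtermist payoff" analogy being drawn here.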

 I have been in two groups/clubs before. One was a student group, and I was only in a few short meetings. One was a book club. I also only went to a few meetings of the book club. On top of that, I socialize with virtually no one. 

I have envisioned how I would facilitate a student EA group. Of course, because of the power of situations to change individual behavior, how I would actually come across and do it might be different. I thought I would start off with a flyer that was a short advertisement with a promise of free pizza. The advert... (read more)

I forget from where, but I've heard criticisms of Elon Musk that he is advancing our expansion into space while not solving many of Earth's current problems. It seems logical that if we still have many problems on Earth, such as inequity, those problems will get perpetuated as we expand into space. Also, maybe it's possible that other smaller-scale problems that we don't have effective solutions for would become enormously multiplied as we expand into space (though I am not sure what an example of this would be). On the other hand, maybe the developme... (read more)

From the links you posted, the most powerful argument for effective altruism to me was this:

"(Try completing the phrase "no matter..." for this one.  What exactly is the cost of avoiding inefficiency?  "No matter whether you would rather support a different cause that did less good?" Cue the world's tiniest violin.)"

Unless someone had a kind of limited egotism (that perhaps favored only themselves and their friends, or themselves and their family, or themselves and their country, etc.), or was a sadist, I don't see how they could disagree that making... (read more)

I have never heard of the ideological Turing Tests that Claire referenced in their post. Those seem interesting. I have felt skeptical about the Turing Tests. That they tell us more about ourselves than they do about AI seems to reflect the nature of my skepticism. 

I think that the question of/the definition of what intelligence is will be an important piece of AI. It seems that this question/definition is still vague and/or not agreed upon yet. Sometimes, I have thought that we probably haven't delved enough into what our own intelligence is, what ma... (read more)

6
Julia_Wise
2y
I think the idea is from Bryan Caplan originally: https://www.econlib.org/archives/2011/06/the_ideological.html

I consider helping all Earth's creatures, extending our compassion, and dissolving inequity as part of fulfilling our potential.

I don't think that because the aliens seemed to enjoy life much more, and had higher levels of more sustained happiness, their continued existence should necessarily be prioritized over ours. I wouldn't consider one person's life more valuable than another person's life just because that person experienced substantially more enjoyment and happiness. Also, I am not sure how to compare happiness and/or enjoyment bet... (read more)

It does seem like an optimistic expectation that there will be an arrival of entities that are amazingly superior to us. This is not far-fetched though. Computers already surpass humans' capacities on several thought processes, and therefore have already demonstrated that they are better in some aspects of intelligence. And we've created robots that can outperform humans in virtually all physical tasks. So, the expectation is backed by evidence.

Expecting super AGI differs from expecting the arrival of a messiah-like figure in that instead of expecting a fu... (read more)

1
Ben Millwood
2y
Not that this is at all central to your point, but I don't think this is true. We're capable of building robots that move with more force and precision than humans, but mostly only in environments that are pretty simple or heavily customised for them. The cutting edge in robots moving over long distances or over rough terrain (for example) seems pretty far behind where humans are. Similarly, I believe fruit-picking is very hard to automate, in ways that seem likely to generalise to lots of similar tasks. I also don't think we're very close to artificial smell, although possibly people aren't working on it very much?

Other invisible mistakes I make are poor planning (which involves a vague vision of my plan that doesn't account for everything, which can lead to it not turning out exactly as I expected or failing in some way in the long term after it is implemented because of factors that became relevant later on), overestimating my endurance for some manual and automatic task (such as driving somewhere) or my ability to tolerate a certain condition (like going without food for a while), and overworking myself at the unintended expense of accuracy.

I recently listened to the podcast Life Kit on NPR in which Dr. Anna Lembke said that going cold turkey from an addiction (if that is safe) is an effective way of reorganizing the brain. She said this is true because our brains have evolved in environments with much scarcer resources than we have today and so are being overloaded with too much dopamine and pleasure by everything we have around us nowadays.

Daydreaming itself may not be counterproductive. Daydreaming can be a way to adaptively take a break. It may enable more productive work by avoiding burn... (read more)


Thanks for the story, I enjoyed it. 

It seems that thinking about and working on avoiding existential risks would lead to identification of points of failure that lead to various horrible futures. Maybe inherent in longtermism is the idea that survival is ongoing, and that to become conscious of the various and many threats to us and the universe is to also be conscious of the daunting possibilities ahead of us. If we weren't conscious of these risks, we might miss a huge fundamental truth about the way the universe works and thus exclude ourselves from ... (read more)

Here is a conversation I had with GPT-3. I was having it guess the word I was thinking of. Then I tried to make it guess the word it was thinking of but I failed to get it to understand what I wanted to do. Finally, I asked it to tell me a joke before talking to it about a world in which 2 plus 2 did not equal 4.

Human: Imagine a world in which 2 plus 2 did not equal 4.

 AI: In a world where 2 plus 2 did not equal 4, everything would be chaos! 

Human: Tell me a story about such a world. 

AI: In a world where 2 plus 2 did not equal 4, the world w... (read more)

1
AndreFerretti
2y
Ha! Maybe this AI just merged math and time, and found the 4th dimension of reality? It was a bit confusing at first, but I find it very creative that the answer to 2+2 depends on the time of day.  I like your suggestion of having the AI write alternative versions of history! This is what I just asked it, to experiment:

Me: How would the world look like without racism?
AI: There would be less hate in the world and people would be able to live in peace.

Me: How would the world look like without war?
AI: The world without war would look very different. There would be no need for weapons or soldiers. Countries would be able to work together to solve problems and improve the quality of life for all people.

Me: How would the world look like without nuclear bombs?
AI: If there were no nuclear bombs, the world would look a lot different. For one, the Cold War would have been very different, and might not have happened at all. There would also be no fear of nuclear warfare, meaning that different countries could form relationships and treaties with each other that they otherwise would not.

After learning where to have a conversation with GPT-3, I talked with it for some time. When I asked it about the COVID-19 pandemic, it said it would need to be a doctor to solve it and a scientist to find out where the virus originated from. When I questioned it more, it said it didn't have the time and resources to train to become a doctor at the moment, and that it wasn't sure it wanted to train to become a scientist.

 

I wonder if AIs can help humans to communicate and act in ways that are not biased by being trained to give outputs with the conditi... (read more)

3
PaulCousens
2y
Here is a conversation I had with GPT-3. I was having it guess the word I was thinking of. Then I tried to make it guess the word it was thinking of but I failed to get it to understand what I wanted to do. Finally, I asked it to tell me a joke before talking to it about a world in which 2 plus 2 did not equal 4.

Human: Imagine a world in which 2 plus 2 did not equal 4.
AI: In a world where 2 plus 2 did not equal 4, everything would be chaos!

Human: Tell me a story about such a world.
AI: In a world where 2 plus 2 did not equal 4, the world would be a very different place. Math would be different, science would be different, and even the way we live our lives would be different. In this world, nothing would be the same, and we would have to find a new way to live.

Human: What exactly would be different?
AI: Well, for starters, the way we measure things would be different. Instead of measuring things in terms of length, weight, and volume, we would have to find a new way to measure them. We would also have to find new ways to calculate things like addition, subtraction, multiplication, and division.

Human: Do you know what these new ways would be?
AI: I'm not sure, but it would probably be something along the lines of measuring things in terms of time and counting in terms of numbers other than 4.

Human: What other numbers would they be?
AI: Other numbers could be 5, 6, 7, 8, and 9.

Human: What if 2 plus 3 did not equal 5? What if all addition equations as we know them were not true?
AI: If 2 plus 3 did not equal 5, then we would have to find a new way to calculate addition. Everything we know about math would be different, and we would have to find new ways to solve problems.

Human: Do you know what this new way of calculating addition would be?
AI: I'm not sure, but it would probably be something along the lines of adding numbers in terms of time. For instance, 2 hours plus 3 hours would equal 5 hours.

Human: Explain further.
AI: In a

Because adolescence is a time when the parts of our brain associated with emotions are more prominent than the parts associated with reasoning, it may be worthwhile to see how interventions can steer adolescents on a positive rather than negative life course. The potential mistakes can be tragic and long-lasting. However, many adolescents and children stand out from their peers by accomplishing great things (for example, Greta Thunberg's strong social activism). Research into the brain's state in adolescence that makes negative life decisions more likely c... (read more)

I find it easy to follow a strictly vegan diet outside of eating at restaurants. At restaurants (which I don't go to that often) and on family holidays, I concede to eat whatever is available. For the past few weeks, and for another few weeks, I will be eating animal products because I am volunteering for a study that requires me to be on a meat-eating diet. The study is investigating a benzodiazepine drug. I am only doing this because the study will pay between $3,000 and $15,000. I am compromising my vegan diet as a one-time thing. To me, it seems that compromis... (read more)

I thought about this some more and thought maybe investigating UFOs could be important in that it is part of the larger goal of the search for extraterrestrial intelligence. The search for extraterrestrial intelligence could hold at least several opportunities/implications for us. 

Opportunities

They could provide us with knowledge and technology that gives us the push past the point where survival is extremely improbable. Or, alternatively, maybe we would have found the knowledge and built the technology eventually without their help. If this were true... (read more)

It seems like it might be worthwhile investigating UFOs/UAPs for the larger umbrella purpose of ensuring that all technology, information, knowledge about the universe, etc. is democratized and accessible to everyone and not monopolized for nefarious purposes.

It might also be worthwhile to study them to safeguard ourselves from governments' psychological operations. It seems that the sky has the potential to have a huge influence on a huge number of people.

Given that people can conflate a spacefaring extraterrestrial craft with a plastic bag in the sky, studying U... (read more)

My random giving in 2021 was composed of:

$5 monthly donation to NPR, which I increased to $8/month around a month ago

A few donations (I think they added up to around $50) to Women's March.

A donation of $5 to EWG.

When using my debit card at the store, a few times I noticed a question asking me if I would like to donate. It might have been for a hospital or something related to feeding hungry/poor people. I never researched more about the cause. I would guess that nearly all of the times I donated around $1.

Occasionally, I gave some cash and/or snacks to home... (read more)

Seemingly Useful Viewpoints

The expert DiResta said (in the YouTube video of interviews with Twitter and Facebook employees that Misha posted) that overcoming the division that is created by online bad actors will require us to address our own natures, because online bad actors will never be eliminated but merely managed. This struck me as important, and it is applicable to the problems that recommender algorithms may exacerbate. If I remember correctly, in the audiobook The Alignment Problem, Brian Christian's way of looking at it was that the biases that AI ... (read more)

To minimize human-caused suffering as much as possible, it seems that farm animals should be allowed to live freely until they die naturally and shouldn't need to be modified in any way. A quick Google search told me that cows have lifespans of 15-20 years and chickens have lifespans of 3-7 years. Since the world produces enough food to feed the global population several times over (even though hundreds of millions of people go without food), it might be that society and individual habits can be restructured in such a way (such as by using less of our fo... (read more)

4
[anonymous]
2y
Hi Paul, I am absolutely with you in that I think factory farms are awful and would of course continue to be awful with the widespread use of analgesics. I fully support doing everything we can to eliminate them through some combination of developing alternative proteins and moving people and institutions toward eating the plant-based alternatives we already have. I in no way support mutilating animals, even with analgesics. The reason I wrote this post is because I think it would be an improvement for animal welfare over the status quo of using no analgesics, and I think that this improvement is relatively achievable.

As a side note, my position has shifted a bit since I've written this based on new technological developments. I now think efforts in this domain should be more targeted toward adoption of drugs and genetic engineering that eliminate the need for the modifications in the first place. When I wrote this, those seemed a long way off, but I no longer feel that way. But to be clear, even if we could completely eliminate all forms of direct mutilation that this post discusses, I would still think factory farms are horrible.