Kaj_Sotala

How to succeed as an early-stage researcher: the “lean startup” approach

Draft and re-draft (and re-draft). The writing should go through many iterations. You make drafts, you share them with a few people, you do something else for a week. Maybe nobody has read the draft, but you come back and you’ve rejuvenated your wonderful capacity to look at the work and know why it’s terrible.

Kind of related to this: giving a presentation about the ideas in your article is something you can use as a form of draft. If you can't get anyone to listen to a presentation, or don't want to give one quite yet, you can pick some people whose opinion you value and just make a presentation where you imagine that they're in the audience.

I find that if I'm thinking of how to present the ideas in a paper to an in-person audience, it makes me think about questions like "what would be a concrete example of this idea that I could start the presentation with, that would grab the audience's attention right away". And then if I come up with a good way of presenting the ideas in my article, I can rewrite the article to use that same presentation.

(Unfortunately, I myself have mostly taken this advice in reverse: I've first written a paper and then given a presentation of it afterwards, at which point I've realized what I actually should have said in the paper itself.)

"Disappointing Futures" Might Be As Important As Existential Risks

Depends on exactly which definition of s-risks you're using; one of the milder definitions is just "a future in which a lot of suffering exists", such as humanity settling most of the galaxy but each of those worlds having about as much suffering as the Earth has today. Which is arguably not a dystopian outcome or necessarily terrible in terms of how much suffering there is relative to happiness, but still an outcome in which there is an astronomically large absolute amount of suffering.

Fair point. Though apparently measures of 'life satisfaction' and 'meaning' produce different outcomes:

So, how did the World Happiness Report measure happiness? The study asked people in 156 countries to “value their lives today on a 0 to 10 scale, with the worst possible life as a 0 and the best possible life as a 10.” This is a widely used measure of general life satisfaction. And we know that societal factors such as gross domestic product per capita, extensiveness of social services, freedom from oppression, and trust in government and fellow citizens can explain a significant proportion of people’s average life satisfaction in a country.

In these measures the Nordic countries—Finland, Sweden, Norway, Denmark, Iceland—tend to score highest in the world. Accordingly, it is no surprise that every time we measure life satisfaction, these countries are consistently in the top 10. [...]

... some people might argue that neither life satisfaction, positive emotions nor absence of depression are enough for happiness. Instead, something more is required: One has to experience one’s life as meaningful. But when Shigehiro Oishi, of the University of Virginia, and Ed Diener, of the University of Illinois at Urbana-Champaign, compared 132 different countries based on whether people felt that their life has an important purpose or meaning, African countries including Togo and Senegal were at the top of the ranking, while the U.S. and Finland were far behind. Here, religiosity might play a role: The wealthier countries tend to be less religious on average, and this might be the reason why people in these countries report less meaningfulness.

It has been suggested that people are succumbing to a focusing illusion when they think that having children will make them happy, in that they focus on the good things without giving much thought to the bad.

Worth noting that you might get increased meaningfulness in exchange for the lost happiness, which isn't necessarily an irrational trade to make. E.g. Robin Hanson:

Stats suggest that while parenting doesn’t make people happier, it does give them more meaning. And most thoughtful traditions say to focus more on meaning than happiness. Meaning is how you evaluate your whole life, while happiness is how you feel about now. And I agree: happiness is overrated.

Parenting does take time. (Though, as Bryan Caplan emphasized in a book, less than most think.) And many people I know plan to have an enormous positive influence on the universe, far more than plausible via a few children. But I think they are mostly kidding themselves. They fear their future selves being less ambitious and altruistic, but it’s just as plausible that they will instead become more realistic.

Also, many people with grand plans struggle to motivate themselves to follow their plans. They neglect the motivational power of meaning. Dads are paid more, other things equal, and I doubt that’s a bias; dads are better motivated, and that matters. Your life is long, most big world problems will still be there in a decade or two, and following the usual human trajectory you should expect to have the most wisdom and influence around age 40 or 50. Having kids helps you gain both.

Some thoughts on the EA Munich // Robin Hanson incident

Thanks. It looks to me like much of what's being described at these links is about the atmosphere among the students at American universities, which then also starts affecting the professors there. That would explain my confusion: a large fraction of my academic friends are European, and thus largely unaffected by these developments.

there could be a number of explanations aside from cancel culture not being that bad in academia.

I do hear them complain about various other things though, and I also have friends privately complaining about cancel culture in non-academic contexts, so I'd generally expect this to come up if it were an issue. But I could still ask, of course.

"Disappointing Futures" Might Be As Important As Existential Risks

We also discussed some possible reasons why there might be a disappointing future in the sense of having a lot of suffering, in sections 4-5 of Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. A few excerpts:

4.1 Are suffering outcomes likely?

Bostrom (2003a) argues that given a technologically mature civilization capable of space colonization on a massive scale, this civilization "would likely also have the ability to establish at least the minimally favorable conditions required for future lives to be worth living", and that it could thus be assumed that all of these lives would be worth living. Moreover, we can reasonably assume that outcomes which are optimized for everything that is valuable are more likely than outcomes optimized for things that are disvaluable. While people want the future to be valuable both for altruistic and self-oriented reasons, no one intrinsically wants things to go badly.

However, Bostrom has himself later argued that technological advancement combined with evolutionary forces could "lead to the gradual elimination of all forms of being worth caring about" (Bostrom 2004), admitting the possibility that there could be technologically advanced civilizations with very little of anything that we would consider valuable. The technological potential to create a civilization that had positive value does not automatically translate to that potential being used, so a very advanced civilization could still be one of no value or even negative value.

Examples of technology’s potential being unevenly applied can be found throughout history. Wealth remains unevenly distributed today, with an estimated 795 million people suffering from hunger even as one third of all produced food goes to waste (World Food Programme, 2017). Technological advancement has helped prevent many sources of suffering, but it has also created new ones, such as factory-farming practices under which large numbers of animals are maltreated in ways which maximize their production: in 2012, the number of animals slaughtered for food was estimated at 68 billion worldwide (Food and Agriculture Organization of the United Nations 2012). Industrialization has also contributed to anthropogenic climate change, which may lead to considerable global destruction. Earlier in history, advances in seafaring enabled the transatlantic slave trade, with close to 12 million Africans being sent in ships to live in slavery (Manning 1992).

Technological advancement does not automatically lead to positive results (Häggström 2016). Persson & Savulescu (2012) argue that human tendencies such as “the bias towards the near future, our numbness to the suffering of great numbers, and our weak sense of responsibility for our omissions and collective contributions”, which are a result of the environment humanity evolved in, are no longer sufficient for dealing with novel technological problems such as climate change and it becoming easier for small groups to cause widespread destruction. Supporting this case, Greene (2013) draws on research from moral psychology to argue that morality has evolved to enable mutual cooperation and collaboration within a select group (“us”), and to enable groups to fight off everyone else (“them”). Such an evolved morality is badly equipped to deal with collective action problems requiring global compromises, and also increases the risk of conflict and generally negative-sum dynamics as more different groups get in contact with each other.

As an opposing perspective, West (2017) argues that while people are often willing to engage in cruelty if this is the easiest way of achieving their desires, they are generally “not evil, just lazy”. Practices such as factory farming are widespread not because of some deep-seated desire to cause suffering, but rather because they are the most efficient way of producing meat and other animal source foods. If technologies such as growing meat from cell cultures became more efficient than factory farming, then the desire for efficiency could lead to the elimination of suffering. Similarly, industrialization has reduced the demand for slaves and forced labor as machine labor has become more effective. At the same time, West acknowledges that this is not a knockdown argument against the possibility of massive future suffering, and that the desire for efficiency could still lead to suffering outcomes such as simulated game worlds filled with sentient non-player characters (see section on cruelty-enabling technologies below). [...]

4.2 Suffering outcome: dystopian scenarios created by non-value-aligned incentives.

Bostrom (2004, 2014) discusses the possibility of technological development and evolutionary and competitive pressures leading to various scenarios where everything of value has been lost, and where the overall value of the world may even be negative. Considering the possibility of a world where most minds are brain uploads doing constant work, Bostrom (2014) points out that we cannot know for sure that happy minds are the most productive under all conditions: it could turn out that anxious or unhappy minds would be more productive. [...]

More generally, Alexander (2014) discusses examples such as tragedies of the commons, Malthusian traps, arms races, and races to the bottom as cases where people are forced to choose between sacrificing some of their values and getting outcompeted. Alexander also notes the existence of changes to the world that nearly everyone would agree to be net improvements - such as every country reducing its military by 50%, with the savings going to infrastructure - which nonetheless do not happen because nobody has the incentive to carry them out. As such, even if the prevention of various kinds of suffering outcomes would be in everyone’s interest, the world might nonetheless end up in them if the incentives are sufficiently badly aligned and new technologies enable their creation.

An additional reason why such dynamics might lead to various suffering outcomes is the so-called Anna Karenina principle (Diamond 1997, Zaneveld et al. 2017), named after the opening line of Tolstoy’s novel Anna Karenina: "all happy families are alike; each unhappy family is unhappy in its own way". The general form of the principle is that for a range of endeavors or processes, from animal domestication (Diamond 1997) to the stability of animal microbiomes (Zaneveld et al. 2017), there are many different factors that all need to go right, with even a single mismatch being liable to cause failure.

Within the domain of psychology, Baumeister et al. (2001) review a range of research areas to argue that “bad is stronger than good”: while sufficiently many good events can overcome the effects of bad experiences, bad experiences have a bigger effect on the mind than good ones do. The effect of positive changes to well-being also tends to decline faster than the impact of negative changes: on average, people’s well-being suffers and never fully recovers from events such as disability, widowhood, and divorce, whereas the improved well-being that results from events such as marriage or a job change dissipates almost completely given enough time (Lyubomirsky 2010).

To recap, various evolutionary and game-theoretical forces may push civilization in directions that are effectively random, random changes are likely to be bad for the things that humans value, and the effects of bad events are likely to linger disproportionately in the human psyche. Putting these considerations together suggests (though does not guarantee) that freewheeling development could eventually come to produce massive amounts of suffering.
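
To put a toy number on the Anna Karenina point quoted above: if an outcome requires n independent factors to all go right, each with probability p, the overall success probability is p^n, which shrinks fast even when every individual factor is reliable. (The figures below are purely illustrative assumptions, not taken from any of the cited papers.)

```python
# Toy illustration of the Anna Karenina principle: success requires every
# factor to go right, so the overall success probability is p ** n.
# p = 0.95 is an illustrative assumption, not a figure from the literature.
p = 0.95
for n in (1, 5, 10, 20):
    print(f"{n:2d} factors -> P(all go right) = {p ** n:.2f}")
# 1 factor gives 0.95, 5 factors 0.77, 10 factors 0.60, 20 factors 0.36:
# even highly reliable individual factors compound into a likely failure
# somewhere in the chain.
```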

Some thoughts on the EA Munich // Robin Hanson incident

yet academia is now the top example of cancel culture

I'm a little surprised by this wording? Certainly cancel culture is starting to affect academia as well, but I don't think that e.g. most researchers think about the risk of getting cancelled when figuring out the wording for their papers, unless they are working on some exceptionally controversial topic?

I have lots of friends in academia and follow academic blogs etc., and basically don't hear any of them talking about cancel culture within that context. I did recently see a philosopher post a controversial paper and get backlash for it on Twitter, but then he seemed to basically shrug it off, since people complaining on Twitter didn't really affect him. This fits my general model that most of the cancel culture influence on academia comes from people outside academia trying to affect it, with varying success.

I don't doubt that there are individual pockets within academia that are more cancely, but the rest of academia seems to me mostly unaffected by them.

Some thoughts on the EA Munich // Robin Hanson incident

On the positive side, a recent attempt to bring cancel culture to EA was very resoundingly rejected, with 111 downvotes and strongly upvoted rebuttals.

Shifts in subjective well-being scales?

I don't know, but I get the impression that SWB questions are susceptible to framing effects in general. For example, Biswas-Diener & Diener (2001) found that when people in Calcutta were asked for their life satisfaction in general, and also for their satisfaction in 12 subdomains (material resources, friendship, morality, intelligence, food, romantic relationship, family, physical appearance, self, income, housing, and social life), they gave on average a slightly negative rating for the global satisfaction while giving positive ratings for all the subdomains. (This result was replicated at least by Cox 2011 in Nicaragua.)

Biswas-Diener & Diener 2001 (scale of 1-3):

The mean score for the three groups on global life satisfaction was 1.93 (on the negative side just under the neutral point of 2). [...] The mean ratings for all twelve ratings of domain satisfaction fell on the positive (satisfied) side, with morality being the highest (2.58) and the lowest being satisfaction with income (2.12).

Cox 2011 (scale of 1-7):

The sample level mean on global life satisfaction was 3.8 (SD = 1.7). Four is the mid-point of the scale and has been interpreted as a neutral score. Thus this sample had an overall mean just below neutral. [...] The specific domain satisfactions (housing, family, income, physical appearance, intelligence, friends, romantic relationships, morality, and food) have means ranging from 3.9 to 5.8, and a total mean of 4.9. Thus all nine specific domains are higher than global life satisfaction. For satisfaction with the broader domains (self, possessions, and social life) the means ranged from 4.4 to 5.2, with a mean of 4.8. Again, all broader domain satisfactions are higher than global life satisfaction. It is thought that global judgments of life satisfaction are more susceptible to positivity bias and that domain satisfaction might be more constrained by the concrete realities of an individual’s life (Diener et al. 2000)
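
As a quick check that the two studies show the same pattern despite their different rating scales, one can min-max rescale each global mean to a common 0-1 range. The rescaling is my own back-of-the-envelope normalization, not something either paper does; the means are the ones quoted above:

```python
def rescale(score, low, high):
    """Min-max rescale a rating so that 0 is the scale minimum,
    1 is the maximum, and 0.5 is the midpoint ('neutral')."""
    return (score - low) / (high - low)

# Global life satisfaction means from the two studies quoted above.
print(round(rescale(1.93, 1, 3), 3))  # Biswas-Diener & Diener 2001 (1-3 scale): 0.465
print(round(rescale(3.8, 1, 7), 3))   # Cox 2011 (1-7 scale): 0.467
```

Both global means land essentially at the same point, just under the rescaled neutral midpoint of 0.5, which makes the replication across the two scales easier to see.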

A New X-Risk Factor: Brain-Computer Interfaces

In particular, Elon Musk claims that BCIs may allow us to integrate with AI such that AI will not need to outcompete us (Young, 2019). It is unclear at present by what exact mechanism a BCI would assist here, how it would help, whether it would actually decrease risk from AI, or if it is a valid claim at all. Such a ‘solution’ to AGI may also be entirely compatible with global totalitarianism, and may not be desirable. The mechanism by which integrating with AI would lessen AI risk is currently undiscussed; and at present, no serious academic work has been done on the topic.

We have a bit of discussion about this (predating Musk's proposal) in section 3.4 of Responses to Catastrophic AGI Risk; we're also skeptical. E.g. this excerpt from our discussion:

De Garis [82] argues that a computer could have far more processing power than a human brain, making it pointless to merge computers and humans. The biological component of the resulting hybrid would be insignificant compared to the electronic component, creating a mind that was negligibly different from a 'pure' AGI. Kurzweil [168] makes the same argument, saying that although he supports intelligence enhancement by directly connecting brains and computers, this would only keep pace with AGIs for a couple of additional decades.

The truth of this claim seems to depend on exactly how human brains are augmented. In principle, it seems possible to create a prosthetic extension of a human brain that uses the same basic architecture as the original brain and gradually integrates with it [254]. A human extending their intelligence using such a method might remain roughly human-like and maintain their original values. However, it could also be possible to connect brains with computer programs that are very unlike human brains and which would substantially change the way the original brain worked. Even smaller differences could conceivably lead to the adoption of 'cyborg values' distinct from ordinary human values [290].

Bostrom [49] speculates that humans might outsource many of their skills to non-conscious external modules and would cease to experience anything as a result. The value-altering modules would provide substantial advantages to their users, to the point that they could outcompete uploaded minds who did not adopt the modules. [...]

Moravec [194] notes that the human mind has evolved to function in an environment which is drastically different from a purely digital environment and that the only way to remain competitive with AGIs would be to transform into something that was very different from a human.