All of Tobias_Baumann's Comments + Replies

We've now put together a new and improved audio version, which can be found here.

2
Teo Ajantaival
1y
Just listened to it! The pleasant and thoughtful narration by Adrian Nelson felt perfect for the book. I might even recommend the audiobook version over the text version to people who might otherwise find it distressing to think about s-risks. :)

Thanks! 

An audiobook is a good idea and I'll look into it, though I don't expect it to be done any time soon (i.e. it would at least take several months, I think).

5
Aaron Bergman
1y
I'll just throw out the possibility of copying and pasting the whole thing (with anywhere from zero to a lot of formatting/editing) into an EA Forum post, which (I assume?) would trigger the Nonlinear Library system to turn it into audio. This would also get it into the feeds of people who only consume the forum via podcast app.

Audiobook version: [new] Aaron made an awesome audiobook version here. 

[Original] It's easy to turn it into an audiobook version with Evie or Natural Reader for anybody who likes to read with their ears instead of their eyes. Full guide I wrote up on how to turn everything into audio here.

Also, Tobias, if you want to make a super simple audiobook version of the book, I recommend using Amazon Polly. It'll probably cost under $100 and take less than 10 hours and increase the number of people who read your book by a lot. I know a ton of people who only re... (read more)

8
dotsam
1y
You might consider creating a text-to-speech version by using e.g. Amazon Polly. Whilst imperfect, it is listenable and might be useful to people. Here is a sample generated with the British English Arthur Male voice.
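For anyone who wants to replicate this, here is a minimal sketch using the boto3 Polly client (my own illustration, not dotsam's actual pipeline; the per-request character cap and the chunking strategy are assumptions to check against the AWS docs):

```python
# Minimal sketch: turn a plain-text book into MP3 chunks with Amazon Polly.
# Assumes AWS credentials are configured and boto3 is installed (pip install boto3).
import boto3

polly = boto3.client("polly")

def synthesize(text: str, out_path: str, voice: str = "Arthur") -> None:
    """Synthesize one chunk of text into an MP3 file."""
    resp = polly.synthesize_speech(
        Text=text,
        OutputFormat="mp3",
        VoiceId=voice,   # "Arthur" is Polly's British English male voice
        Engine="neural",
    )
    with open(out_path, "wb") as f:
        f.write(resp["AudioStream"].read())

if __name__ == "__main__":
    # SynthesizeSpeech caps input at a few thousand characters,
    # so split the book into paragraph-sized chunks first.
    with open("book.txt") as f:
        chunks = [c.strip() for c in f.read().split("\n\n") if c.strip()]
    for i, chunk in enumerate(chunks):
        synthesize(chunk[:2900], f"part_{i:04d}.mp3")
```

The per-part MP3s can then be concatenated with any audio tool (e.g. ffmpeg) into chapter files.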

The easiest way to download it as an epub is here.

1
Sam Bogerd
1y
Thank you! :)

I agree with this answer.

I don't need to be persuaded to care about animal/insect/machine suffering in the first place.

That's great, because that is also the starting point of my book. From the introduction:

Before I dive deeper, I should clarify the values that underlie this book. A key principle is impartiality: suffering matters equally irrespective of who experiences it. In particular, I believe we should care about all sentient beings, including nonhuman animals. Similarly, I believe suffering matters equally regardless of when it is exp

... (read more)

Thanks for writing this up! It's great to see more people thinking about the relationship between animal advocacy and longtermism.

It seems important to distinguish between a) the abolition of factory farming and b) a long-term change in human attitudes towards animals (i.e. establishing antispeciesism). b) is arguably more important from a long-term perspective, and it is a legitimate concern that cultivated meat (and similar technologies) would only achieve a).

However, proponents of the "technology-based strategy" usually argue that a) also indirectly... (read more)

3
Fai
2y
Thank you, Tobias, for your comment! I agree. I think all too often when longtermists hear about animals and longtermism, they reject the idea by pointing to their speculation that factory farming will soon be eliminated, while forgetting other animals, or speciesism at large.

I agree, and if I understand you correctly that's part of my point. In the post, I wrote about other types of factory farming that are not for the purpose of food. So I think we might be making a similar point here. The lock-in I am pointing at is missing the opportunity to eliminate FFFF for moral reasons. In other words, we cancelled our option to do it for moral reasons. My arguments about why this lock-in is bad can easily be wrong, but I think this being a lock-in seems uncontroversial.

It can, my worries are that:

* It might not happen with the same probability (i.e. advocates might be relieved to have solved a problem and moved on to other problems)
* It might not happen with the same quality (very speculative here, just my intuitive worry that changes due to economic pressure just won't produce the same social changes)
2
Holly Morgan
2y
"Why can't attitude change / moral progress still happen later?" E.g. when we're advocating for concern for wild animal suffering?

Thanks! I've started an email thread with you, me, and David.

Thanks for the comment; it raises a very important point.

I am indeed fairly optimistic that thoughtful forms of MCE are positive regarding s-risks, although this qualifier of "in the right way" should be taken very seriously - I'm much less sure whether, say, funding PETA is positive. I also prefer to think in terms of how MCE could be made robustly positive, and distinguishing between different possible forms of it, rather than trying to make a generalised statement for or against MCE.

This is, however, not a very strongly held view (despite having thought a lot about it), in light of great uncertainty and also some degree of peer disagreement (other researchers being less sanguine about MCE). 

'Longtermism' just says that improving the long-term future matters most, but it does not specify a moral view beyond that. So you can be longtermist and focus on averting extinction, or you can be longtermist and focus on preventing suffering (cf. suffering-focused ethics); or you can have some other notion of "improving". Most people who are both longtermist and suffering-focused work on preventing s-risks.  

That said, despite endorsing suffering-focused ethics myself, I think it's not helpful to frame this as "not caring" about existential risks; t... (read more)


I'm somewhat less optimistic; even if most would say that they endorse this view, I think many "dedicated EAs" are in practice still biased against nonhumans, if only subconsciously. I think we should expect speciesist biases to be pervasive, and they won't go away entirely just by endorsing an abstract philosophical argument. (And I'm not sure if "most" endorse that argument to begin with.)

Fair point - the "we" was something like "people in general". 

This makes IRV a really bad choice. IRV results in a two-party system just like plurality voting does.

I agree that having a multi-party system might be most important, but I don't think IRV necessarily leads to a two-party system. For instance, French presidential elections feature far more than two parties (though they're using a two-round system rather than IRV).

Everything is subject to tactical voting (except maybe SODA? but I don't understand that argument). So I don't see this as a point against approval voting in particular.

I think that approval voti... (read more)

3
abramdemski
3y
Here's an argument that IRV has a pretty bad track record.
1
abramdemski
3y
Yeah, I know very little about multi-party systems in practice (i.e. why these specific countries have escaped the two-party dynamic). But it's plausible to me that there are a few exceptions but the overall gravity of a voting system still makes a big difference. Especially in places where a two-party system is already entrenched, it's plausible that IRV just wouldn't be enough to dislodge it. It's also plausible to me that if we could do controlled experiments, we would see two-party systems arise a much higher percentage of the time in plurality-voting systems than IRV, or that it would take much longer to settle into a two-party equilibrium in IRV systems.

Also, considering French politics (and the politics of other places with multiparty systems), maybe getting rid of two-party systems is not as important as I initially thought -- it doesn't seem like multi-party politics is so much better in terms of sanity and quality of policy.

I agree, and that's why I base my opinion mostly on the statistics, which seem to favor approval. Out of the different levels of strategic voting considered, IRV's worst-case scenario is worse than approval's worst case, and IRV's best case is worse than approval's best case. Granted, they have an overlapping range.

Perhaps more importantly, STAR voting and 3-2-1 voting beat both pretty decisively. Score voting (aka range voting) is best in completely honest cases, but subject to strategy, becomes as bad as approval. STAR reins that problem in (by introducing its additional runoff), compromising some value in the completely honest case for a better lower bound in the very strategic case. 3-2-1 does the same thing even more so, making all the scenarios roughly equally good. Granted, these are simulated statistics, not real-world elections.
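To make the flavour of those simulated statistics concrete, here is a toy regret simulation of my own (uniform random utilities, fully honest voters, approval threshold at each voter's mean utility; real analyses like the ones referenced above also model strategic voters and richer utility distributions):

```python
# Toy comparison of honest IRV vs honest approval voting by social-utility regret.
import random

def irv_winner(ballots, n_cands):
    """Instant-runoff: repeatedly eliminate the candidate with the fewest
    first-choice votes until one candidate has a majority."""
    alive = set(range(n_cands))
    while True:
        counts = {c: 0 for c in alive}
        for b in ballots:
            for c in b:          # voter's top still-standing candidate
                if c in alive:
                    counts[c] += 1
                    break
        top = max(counts, key=counts.get)
        if counts[top] * 2 > len(ballots) or len(alive) == 1:
            return top
        alive.remove(min(counts, key=counts.get))

def simulate(n_voters=1000, n_cands=5, trials=500):
    irv_regret = app_regret = 0.0
    for _ in range(trials):
        utils = [[random.random() for _ in range(n_cands)]
                 for _ in range(n_voters)]
        totals = [sum(u[c] for u in utils) for c in range(n_cands)]
        best = max(totals)                      # utilitarian optimum
        # Honest rankings for IRV.
        ballots = [sorted(range(n_cands), key=lambda c, u=u: -u[c])
                   for u in utils]
        irv_regret += best - totals[irv_winner(ballots, n_cands)]
        # Honest approval: approve candidates above your own mean utility.
        approvals = [0] * n_cands
        for u in utils:
            mean = sum(u) / n_cands
            for c in range(n_cands):
                if u[c] > mean:
                    approvals[c] += 1
        app_regret += best - totals[max(range(n_cands),
                                        key=approvals.__getitem__)]
    print(f"avg regret over {trials} trials: "
          f"IRV {irv_regret / trials:.1f}, approval {app_regret / trials:.1f}")

if __name__ == "__main__":
    simulate()
```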

Great post - thanks a lot for writing this up! 

It's quite remarkable how we hold ideas to different standards in different contexts. Imagine, for instance, a politician that openly endorses CU. Her opponents would immediately attack the worst implications: "So you would torture a child in order to create ten new brains that experience extremely intense orgasms?" The politician, being honest, says yes, and that's the end of her career. 

By contrast, EA discourse and philosophical discourse are strikingly lenient when it comes to counterintuitive implications of such theories. (I'm not saying anything about which standards are better, and of course this does not only apply to CU.)

Consider the example of someone making a symmetric argument against cosmopolitanism: 

It's quite remarkable how we hold ideas to different standards in different contexts. Imagine, for instance, a US politician that openly endorses caring about all humans equally regardless of where they are located. Her opponents would immediately attack the worst implications: "So you would prefer money that would go to local schools and homeless shelters be sent overseas to foreign countries?" The politician, being honest, says yes, and that's the end of her

... (read more)

Who is the "we" you are talking about? I imagine the people who end that politician's career would not be EAs. So it seems like your example is an example of different people having different standards, not the same people having different standards in different contexts.

The key thing is that the way I’m setting priors is as a function from populations to credences: for any property F, your prior should be such that if there are n people in a population, the probability that you are in the m most F people in that population is m/n
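(In symbols, my restatement of the quoted principle: for any property F,

$$P(\text{you are among the } m \text{ most-}F \text{ of } n \text{ people}) = \frac{m}{n},$$

so e.g. a uniform prior on being among the most influential 1% of a billion people is $10^7/10^9 = 1\%$.)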

The fact that I consider a certain property F should update me, though. This already demonstrates that F is something that I am particularly interested in, or that F is salient to me, which presumably makes it more likely that I am an outlier on F.

Also, this principle can have p... (read more)

I’m at a period of unusually high economic growth and technological progress

I think it's not clear whether higher economic growth or technological progress implies more influence. This claim seems plausible, but you could also argue that it might be easier to have an influence in a stable society (with little economic or technological change), e.g. simply because of higher predictability.

So, as I say in the original post and the comments, I update (dramatically) on my estimate of my influentialness, on the basis of these considerations. But by how much? Is

... (read more)

I was just talking about 30 years because those are the farthest-out US bonds. I agree that the horizon of patient philanthropists can be much longer.

Yeah, but even 30-year interest rates are low (1-2% at the moment). There is an Austrian 100-year bond paying 0.88%. I think that is significant evidence that something about the "patient vs impatient actors" story does not add up.
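(To see how little such rates compound, with my arithmetic rather than anything from the thread: $1.0088^{100} \approx 2.4$, versus $1.05^{100} \approx 131$, so at the Austrian bond's yield a patient fund barely more than doubles over a century, while the multi-percent returns that patient-philanthropy arguments often assume would multiply it more than a hundredfold.)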

2
MichaelDickens
4y
Patient philanthropists might want to wait for hundreds or even thousands of years before deploying their capital. 30 years is nothing compared to the possible future of civilization.

It is fair to say that some suffering-focused views have highly counterintuitive implications, such as the one you mention. The misconception is just that this holds for all suffering-focused views. For instance, there are plenty of possible suffering-focused views that do not imply that happy humans would be better off committing suicide. In addition to preference-based views, one could value happiness but endorse the procreative asymmetry (so that lives above a certain threshold of welfare are considered OK even if there is some severe suffering), or one... (read more)

Re: 1., there would be many more (thoughtful) people who share our concern about reducing suffering and s-risks (not necessarily with strongly suffering-focused values, but at least giving considerable weight to it). That results in an ongoing research project on s-risks that goes beyond a few EAs (e.g., it is also established in academia or other social movements).

Re: 2., a possible scenario is that suffering-focused ideas just never gain much traction, and consequently efforts to reduce s-risks will just fizzle out. However, I think there is significant... (read more)

2
MichaelA
4y
Thanks. Those answers make sense to me. But I notice that the answer to question 1 sounds like an outcome you want to bring about, but which I wouldn't be way more surprised to observe in a world where CRS doesn't exist/doesn't have impact than one in which it does. This is because it could be brought about by the actions of others (e.g., CLR). So I guess I'd be curious about things like:

* Whether and how you think that that desired world-state will look different if CRS succeeds than if CRS accomplishes very little but other groups with somewhat similar goals succeed
* How you might disentangle the contribution of CRS to this desired outcome from the contributions of others

I guess this connects to the question of quality/impact assessment as well. I also think this dilemma is far from unique to CRS. In fact, it's probably weaker for CRS than for non-suffering-focused longtermists (e.g. much of FHI), because there are currently more of the latter (or at least they control more resources), so there are more plausible alternative candidates for the causes of non-suffering-focused longtermist impacts.

Also, do you think it might make sense for CRS to run a (small) survey about the quality & impact of its outputs?

I would guess that actually experiencing certain possible conscious states, in particular severe suffering or very intense bliss, could significantly change my views, although I am not sure if I would endorse this as “reflection” or if it might lead to bias.

It seems plausible (but I am not aware of strong evidence) that experience of severe suffering generally causes people to focus more on it. However, I myself have fortunately never experienced severe suffering, so that would be a data point to the contrary.

I was exposed to arguments for suffering-focused ethics from the start, since I was involved with German-speaking EAs (the Effective Altruism Foundation / Foundational Research Institute back then). I don’t really know why exactly (there isn’t much research on what makes people suffering-focused or non-suffering-focused), but this intuitively resonated with me.

I can’t point to any specific arguments or intuition pumps, but my views are inspired by writing such as the Case for Suffering-Focused Ethics, Brian Tomasik’s essays, an... (read more)

I agree that s-risks can vary a lot (by many orders of magnitude) in terms of severity. I also think that this gradual nature of s-risks is often swept under the rug because the definition just uses a certain threshold (“astronomical scale”). There have, in fact, been some discussions about how the definition could be changed to ameliorate this, but I don’t think there is a clear solution. Perhaps talking about reducing future suffering, or preventing worst-case outcomes, can convey this variation in severity more than the term ‘... (read more)

One key difference is that there is less money in it, because OpenPhil as the biggest EA grantmaker is not focused on reducing s-risks. In a certain sense, that is good news because work on s-risks is plausibly more funding-constrained than non-suffering-focused longtermism.

In terms of where to donate, I would recommend the Center on Long-Term Risk and the Center for Reducing Suffering (which I co-founded myself). Both of those organisations are doing crucial research on s-risk reduction. If you are looking for something a bit less abstract, you could con... (read more)

I think a plausible win condition is that society has some level of moral concern for all sentient beings (it doesn't necessarily need to be entirely suffering-focused) as well as stable mechanisms to implement positive-sum cooperation or compromise. The latter guarantees that moral concerns are taken into account and possible gains from trade can be achieved. (An example of this could be cultivated meat, which allows us to reduce animal suffering while accommodating the interests of meat eaters.)

However, I think suffering reducers in particular shoul... (read more)

I don’t think this view is necessary to prioritise s-risk. A finite but relatively high “trade ratio” between happiness and suffering can be enough to focus on s-risks. In addition, I think it’s more complicated than putting some numbers on happiness vs. suffering. (See here for more details.) For instance, one should distinguish between the intrapersonal and the interpersonal setting - a common intuition is that one man’s pain can’t be outweighed by another’s pleasure.

Another possibility is lexicality: one... (read more)

1
Sebastian Schwiecker
4y
Thanks a lot for the reply and the links.

We have thought about this, and wrote up some internal documents, but have not yet published anything (though we might do that at some point, as part of a strategic plan). Magnus and I are quite aligned in our thinking about the theory of change. The key intended outcome is to catalyse a research project on how to best reduce suffering, both by creating relevant content ourselves and by convincing others to share our concerns regarding s-risks and reducing future suffering.

2
MichaelA
4y
That makes sense, thanks. Do you have a sense of who you want to take up that project, or who you want to catalyse it among? E.g., academics vs EA researchers, and what type/field? And does this influence what you work on and how you communicate/disseminate your work?

Apart from the normative discussions relating to the suffering focus (cf. other questions), I think the most likely reasons are that s-risks may simply turn out to be too unlikely, or too far in the future for us to do something about it at this point. I do not currently believe either of those (see here and here for more), and hence do work on s-risks, but it is possible that I will eventually conclude that s-risks should not be a top priority for one of those reasons.

I would refer to this elaborate comment by Magnus Vinding on a very similar question. Like Magnus, I think a common misconception is that suffering-focused views have certain counterintuitive or even dangerous implications (e.g. relating to world destruction), when in fact those problematic implications do not follow.

Suffering-focused ethics is also still sometimes associated with negative utilitarianism (NU). While NU counts as a suffering-focused view, this often fails to appreciate the breadth of possible suffering-focused views, including pluralist and... (read more)

While I agree that problematic implications do not follow in practice, I still think some views have highly counterintuitive implications. E.g., some suffering-focused views would imply that most happy present-day humans would be better off committing suicide if there's a small chance that they would experience severe suffering at some point in their lives. This seems a highly implausible and under-appreciated implication (and makes me assign more credence to views that don't have this implication, such as preference-based and upside-focused views).

Great question! I think both moral and factual disagreements play a significant role. David Althaus suggests a quantitative approach of distinguishing between the “N-ratio”, which measures how much weight one gives to suffering vs. happiness, and the “E-ratio”, which refers to one’s empirical beliefs regarding the ratio of future happiness and suffering. You could prioritise s-risk because of a high N-ratio (i.e. suffering-focused values) or because of a low E-ratio (i.e. pessimistic views of the future).

That suggests tha... (read more)
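(One crude way to formalise that interplay, my own sketch rather than anything from David's framework: if the future contains $H$ units of happiness and $S$ of suffering, with $E = H/S$ the E-ratio and $N$ the extra weight given to suffering, then the expected net value is $V = H - NS = S(E - N)$, which is negative, and suffering reduction dominates, exactly when $N > E$. A high N-ratio and a low E-ratio are thus interchangeable routes to prioritising s-risks.)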

Yeah, I would perhaps say that the community has historically been too narrowly focused on a small number of causes. But I think this has been improving for a while, and we're now close to the right balance. (There is also a risk of being too broad, by calling too many causes important and not prioritising enough.)

Thanks for writing this up! I think you're raising many interesting points, especially about a greater focus on policy and going "beyond speculation".

However, I'm more optimistic than you are about the degree of work invested in cause prioritisation, and the ensuing progress we've seen over the last years. See this recent comment of mine - I'd be curious if you find those examples convincing.

Also, speaking as someone who is working on this myself, there is quite a bit of research on s-risks and cause prioritisation from a suff... (read more)

5
weeatquince
4y
Hi Tobias,

Thank you for the comment. Yes, very glad for CLR etc. and all the s-risk research.

An interesting thing I noted when reading through your recent comment is that all 3 of the examples of progress involve a broadening of EA, expanding horizons, pushing back on the idea that we need to be focusing right now on AI risk. They suggest that to date the community has perhaps gone too quickly towards a specific cause area (AI / immediate x-risk mitigation) rather than continued to explore. I don't really know what to make of that. Do your examples weaken the point I am making or strengthen it? Is this evidence that useful research is happening or is this evidence that we as a community under-invest in exploration? Maybe there is no universal answer to this question and it depends on the individual reader and how your examples affect their current assumptions and priors about the world.

I think there haven’t been any novel major insights since 2015, for your threshold of “novel” and “major”.

Notwithstanding that, I believe that we’ve made significant progress and that work on macrostrategy was and continues to be valuable. Most of that value is in many smaller insights, or in the refinement and diffusion of ideas that aren’t strictly speaking novel. For instance:

  • The recent work on patient longtermism seems highly relevant and plausibly meets the bar for being "major". This isn't
... (read more)

The ideas behind patient altruism have received substantial discussion in academia:

But this literature doesn't s

... (read more)

I liked this answer.

One thing I'd add: My guess is that part of why Max asked about novel insights is that he's wondering what the marginal value of longtermist macrostrategy or global priorities research has been since 2015, as one input into predictions about the marginal value of more such research. Or at least, that's a big part of why I find this question interesting.

So another interesting question is what is required for us to have "many smaller insights" and "the refinement and diffusion of ideas that aren’t strictly speaking novel"? E.g., does that

... (read more)

Thanks for the comment! I fully agree with your points.

People with and without suffering-focused ethics will agree on what to do in the present even more than would be expected from the above point alone. In particular, this is because many actions aimed at changing the long-term future in ways primarily valued by one of those groups of people will also happen to (in expectation) change the long-term future in other ways, which the other group values.

That's a good point. A key question is how fine-grained our influence over the long-term future is - t... (read more)

Yeah, I meant it to be inclusive of this "portfolio approach". I agree that specialisation and comparative advantages (and perhaps also sheer motivation) can justify focusing on things that are primarily good based on one (set of) moral perspectives.

6
MichaelA
4y
In that case, take my comment above as just long-winded agreement! I think we could probably consider motivation (and thus "fit with one's values") as one component of/factor in comparative advantage, because it will tend to make a person better at something, likely to work harder at it, less likely to burn out, etc. Though motivation could sometimes be outweighed by other components of/factors in comparative advantage (e.g., a person's current skills, credentials, and networks).

That seems plausible and is also consistent with Amara's law (the idea that the impact of technology is often overestimated in the short run and underestimated in the long run).

I'm curious how likely you think it is that productivity growth will be significantly higher (i.e. levels at least comparable with electricity) for any reason, not just AI. I wouldn't give this much more than 50%, as there is also some evidence that stagnation is on the cards (see e.g. 1, 2). But that would mean that you're confident that the cause of higher pro... (read more)

I agree that it's tricky, and am quite worried about how the framings we use may bias our views on the future of AI. I like the GDP/productivity growth perspective but feel free to answer the same questions for your preferred operationalisation.

Another possible framing: given a crystal ball showing the future, how likely is it that people would generally say that AI is the most important thing that happens this century?

As one operationalization, then, suppose we were to ask an economist in 2100: "Do you think that the counterfactual contribution
... (read more)
4
bgarfinkel
4y
I mostly have in mind the idea that AI is "early-stage," as you say. The thought is that "general purpose technologies" (GPTs) like electricity, the steam engine, the computer, and (probably) AI tend to have very delayed effects. For example, there was really major progress in computing in the middle of the 20th century, and lots of really major inventions throughout the 70s and 80s, but computers didn't have a noticeable impact on productivity growth until the 90s. The first serious electric motors were developed in the mid-19th century, but electricity didn't have a big impact on productivity until the early 20th. There was also a big lag associated with steam power; it didn't really matter until the middle of the 19th century, even though the first steam engines were developed centuries earlier.

So if AI takes several decades to have a large economic impact, this would be consistent with analogous cases from history. It can take a long time for the technology to improve, for engineers to get trained up, for complementary inventions to be developed, for useful infrastructure to be built, for organizational structures to get redesigned around the technology, etc. I don't think it'd be very surprising if 80 years was enough for a lot of really major changes to happen, especially since the "time to impact" for GPTs seems to be shrinking over time. Then I'm also factoring in the additional possibility that there will be some unusually dramatic acceleration, which distinguishes AI from most earlier GPTs.

What is your overall probability that we will, in this century, see progress in artificial intelligence that is at least as transformative as the industrial revolution?

What is your probability for the more modest claim that AI will be at least as transformative as, say, electricity or railroads?

What is your overall probability that we will, in this century, see progress in artificial intelligence that is at least as transformative as the industrial revolution?

I think this is a little tricky. The main way in which the Industrial Revolution was unusually transformative is that, over the course of the IR, there were apparently unusually large pivots in several important trendlines. Most notably, GDP-per-capita began to increase at a consistently much higher rate. In more concrete terms, though, the late nineteenth and early twentieth centuries pr

... (read more)

I also recently wrote up some thoughts on this question, though I didn't reach a clear conclusion either.

This could be relevant. It's not about the exact same question (it looks at the distribution of future suffering, not of impact) but some parts might be transferable.

Hi Michael,

thanks for the comment!

Could you expand on what you mean by the first part of that sentence, and what makes you say that?

I just meant that proposals to represent future non-human animals will likely gain less traction than the idea of representing future humans. But I agree that it would be perfectly possible to do it (as you say). And of course I'd be strongly in favour of having a Parliamentary Committee for all Future Sentient Beings or something like that, but again, that's not politically feasible anytime soon. So we have to fin... (read more)

3
MichaelA
4y
Ah, that makes sense, then. This is an interesting point, and I think there's something to it. But I also tentatively think that the distinction might be less sharp than you suggest. (The following is again just quick thoughts.)

Firstly, it seems to me that we should currently have a lot of uncertainties about what would be better for animals. And it also seems that, in any case, much of the public probably is uncertain about a lot of relevant things (even if sufficient evidence to resolve those uncertainties does exist somewhere). There are indeed some relatively obvious low-hanging fruit, but my guess would be that, for all the really big changes (e.g., phasing out factory farming, improving conditions for wild animals), it would be hard to say for sure what would be net-positive. For example, perhaps factory farmed animals have net positive lives, or could have net positive lives given some changes in conditions, in which case developing clean meat, increasing rates of veganism, etc. could be net negative (from a non-suffering-focused perspective), as it removes wellbeing from the world.

Of course, even if facing such uncertainties, expected value reasoning might strongly support one course of action. Relatedly, in reality, I'm quite strongly in favour of phasing out factory farming, and I'm personally a vegetarian-going-on-vegan. But I do think there's room for some uncertainty there. And even if there are already arguments and evidence that should resolve that uncertainty for people, it's possible that those arguments and bits of evidence would be more complex or less convincing than something like "In 2045, people/experts/some metric will be really really sure that animals would've been better off if we'd done X than if we'd done Y." (But that's just a hypothesis; I don't know how convincing people would find such judgements-from-the-future.)

Secondly, it seems that there are several key things where it's quite clear what policies would be better for futu

Hi Tyler,

thanks for the detailed and thoughtful comment!

I find much less compelling the idea that "if there is the political will to seriously consider future generations, it’s unnecessary to set up additional institutions to do so," and "if people do not care about the long-term future," they would not agree to such measures. The main reason I find this uncompelling is just that it overgenerates in very implausible ways. Why should women have the vote? Why should discrimination be illegal?

Yeah, I agree that there are plenty of... (read more)

4
tylermjohn
4y
Ah, it looks like I read your post as a bit more committal than you meant it to be! Thanks for your reply! And sorry for the misnomer, I'll correct that in the top-level comment.

Hey Jamie, thanks for the pointer! I wasn't aware of this.

Another relevant critique of whether colonisation is a good idea is Daniel Deudney's new book Dark Skies.

I myself have also written up some more thoughts on space colonisation in the meantime and have become more sceptical about the possibility of large-scale space settlement happening anytime soon.

Great post - I think it's extremely important to explore many different problem areas!

Some further plausible (in my opinion) candidates are shaping genetic enhancement, reducing long-term risks from malevolent actors, invertebrate welfare and space governance.

9
Arden Koehler
4y
Hi Tobias, we've added "governance of outer space" on your recommendation. Thanks!

Hi Tobias — thanks for the ideas!

Invertebrate welfare is wrapped into 'Wild animal welfare', and reducing long-term risks from malevolent actors is partially captured under 'S-risks'. We'll discuss the other two.

Great work, thanks for writing this up! I agree that excessive polarisation is an important issue and warrants more EA attention. In particular, polarisation is an important risk factor for s-risks.

Political polarization, as measured by political scientists, has clearly gone up in the last 20 years.

It is worth noting that this is a US-centric perspective and the broader picture is more mixed, with polarisation increasing in some countries and decreasing in others.

If there’s more I’m missing, feel free to provide links in the comment section.
... (read more)
11
xccf
4y
increasing the presence of public service broadcasting

I don't know how well that would work in the US--it seems that existing public service broadcasters (PBS and NPR) are perceived as biased by American conservatives.

A related idea I've seen is media companies which sell cancellation insurance (archive). The idea being that this is a business model which incentivizes obtaining the trust and respect of as many people as possible, as opposed to inspiring a smaller number of true believers to share/subscribe. One advantage of this idea is it does... (read more)

Amazing work, thanks for writing this up!

The drawdowns of major ETFs on this (e.g. EMB / JNK) during the corona crash or 2008 are roughly 2/3 to 3/4 of how much stocks (the S&P 500) went down. So I agree the diversification benefit is limited. The question, bracketing the point on extra leverage cost, is whether the positive EV of emerging markets bonds / high yield bonds is more or less than 2/3 to 3/4 of the positive EV of stocks. That's pretty hard to say - there's a lot of uncertainty on both sides. But if that is the case and one can borrow at very good rates (e.g. through futures or box

... (read more)
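(A rough way to frame that EV-versus-drawdown comparison, with illustrative numbers of my own rather than from the thread: if stocks offer an expected excess return of about 5% alongside some crash drawdown $D$, then a bond class whose drawdowns run at $0.7D$ pulls its weight in a leveraged portfolio only if its expected excess return clears roughly $0.7 \times 5\% = 3.5\%$, before subtracting the higher ETF fees and the extra cost of leverage.)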

What are your thoughts on high-yield corporate bonds or emerging markets bonds? This kind of bond offers non-zero interest rates but of course also entails higher risk. Also, these markets aren't (to my knowledge) distorted by the Fed buying huge amounts of bonds.

Theoretically, there should be some diversification benefit from adding this kind of bond, though it's all positively correlated. But unfortunately, ETFs on these kinds of bonds have much higher fees.

3
MichaelDickens
4y
I don't know much about emerging market bonds so I can't make any confident claims, but I can say how I am thinking about it for my personal portfolio.

I considered holding emerging market bonds because the yield spread between them and developed-market bonds is unusually high. I decided not to hold them because I don't think they provide enough diversification benefit in the tails. Since I invest with leverage, it doesn't necessarily make sense for me to maximally diversify; I only hold assets if I think the benefit overcomes the extra cost of leverage. But I do believe it might make sense to hold emerging bonds for someone with a less leveraged, more diversified portfolio. That said, I would consider them a "risky" asset, not a "safe" asset, and plan accordingly.
6
matthew.vandermerwe
4y
[disclosure: not an economist or investment professional] This seems wrong — the spillover effects of 2008–13 QE on EM capital markets are fairly well-established (cf the 'Taper Tantrum' of 2013). see e.g. Effects of US Quantitative Easing on Emerging Market Economies

Peter's point is that it makes a lot of sense to have certain norms about not causing serious direct harm, and one should arguably follow such norms rather than expecting some complex longtermist cost-benefit analysis.

Put differently, I think it is very important, from a longtermist perspective, to advance the idea that animals matter and that we consequently should not harm them (particularly for reasons as frivolous as eating meat).

I don't think that calling meat-eating frivolous is very helpful.
Most vegans revert to consuming some degree of animal products (as far as I understand the research, they end up eating meat again, but in lower quantities), indicating that there are significant costs involved.

A side-constraint about harm is generally plausible to me.
I'm still somewhat sceptical about the argument:
- Either you extend this norm to not omitting actions that could prevent harm from happening, or you seem to be making a dubious distinction between acts and omissions. ... (read more)
