All of Tobias_Baumann's Comments + Replies

AMA: Tobias Baumann, Center for Reducing Suffering

Thanks! I've started an email thread with you, me, and David.

How can we reduce s-risks?

Thanks for the comment - this raises a very important point.

I am indeed fairly optimistic that thoughtful forms of MCE are positive regarding s-risks, although this qualifier of "in the right way" should be taken very seriously - I'm much less sure whether, say, funding PETA is positive. I also prefer to think in terms of how MCE could be made robustly positive, and distinguishing between different possible forms of it, rather than trying to make a generalised statement for or against MCE.

This is, however, not a very strongly held view (despite having thought a lot about it), in light of great uncertainty and also some degree of peer disagreement (other researchers being less sanguine about MCE). 

Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure

'Longtermism' just says that improving the long-term future matters most, but it does not specify a moral view beyond that. So you can be longtermist and focus on averting extinction, or you can be longtermist and focus on preventing suffering (cf. suffering-focused ethics); or you can have some other notion of "improving". Most people who are both longtermist and suffering-focused work on preventing s-risks.  

That said, despite endorsing suffering-focused ethics myself, I think it's not helpful to frame this as "not caring" about existential risks; t... (read more)

jushy: Thank you for your input! I agree with the point about co-operation with other value systems. EDIT: as MichaelStJules pointed out, I think I was also mixing up existential risks (a broader term) with extinction risks (a narrower term).
Longtermism and animal advocacy

I'm somewhat less optimistic; even if most would say that they endorse this view, I think many "dedicated EAs" are in practice still biased against nonhumans, if only subconsciously. I think we should expect speciesist biases to be pervasive, and they won't go away entirely just by endorsing an abstract philosophical argument. (And I'm not sure if "most" endorse that argument to begin with.)

some concerns with classical utilitarianism

Fair point - the "we" was something like "people in general". 

Thoughts on electoral reform

This makes IRV a really bad choice. IRV results in a two-party system just like plurality voting does.

I agree that having a multi-party system might be most important, but I don't think IRV necessarily leads to a two-party system. For instance, French presidential elections feature far more than two parties (though they're using a two-round system rather than IRV).

Everything is subject to tactical voting (except maybe SODA? but I don't understand that argument). So I don't see this as a point against approval voting in particular.

I think that approval voti... (read more)

abramdemski: Here's an argument that IRV has a pretty bad track record.
abramdemski: Yeah, I know very little about multi-party systems in practice (i.e. why these specific countries have escaped the two-party dynamic). But it's plausible to me that there are a few exceptions but the overall gravity of a voting system still makes a big difference. Especially in places where a two-party system is already entrenched, it's plausible that IRV just wouldn't be enough to dislodge it. It's also plausible to me that if we could do controlled experiments, we would see two-party systems arise a much higher percentage of the time in plurality-voting systems than IRV, or that it would take much longer to settle into a two-party equilibrium in IRV systems.

Also, considering French politics (and the politics of other places with multiparty systems), maybe getting rid of two-party systems is not so important as I initially thought - it doesn't seem like multi-party politics is so much better in terms of sanity and quality of policy.

I agree, and that's why I base my opinion mostly on the statistics, which seem to favor approval. Out of the different levels of strategic voting considered, IRV's worst-case scenario is worse than approval's worst-case, and IRV's best-case is worse than approval's best-case. Granted, they have an overlapping range.

Perhaps more importantly, STAR voting and 3-2-1 voting beat both pretty decisively. Score voting (aka range voting) is best in completely honest cases, but subject to strategy, becomes as bad as approval. STAR reins that problem in (by introducing its additional runoff), compromising some value in the completely honest case for a better lower bound in the very strategic case. 3-2-1 does the same thing even more so, making all the scenarios roughly equally good. Granted, these are simulated statistics, not real-world elections.
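The IRV elimination dynamic discussed above can be sketched in a few lines. This is a minimal illustration with hypothetical ballots (the candidate names and vote counts are invented for the example); it shows the "center squeeze" worry: C is every voter's acceptable compromise, and would beat A or B head-to-head, but C has the fewest first-choice votes and is eliminated first.

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: repeatedly eliminate the candidate with the
    fewest first-choice votes until someone holds a majority."""
    ballots = [list(b) for b in ballots]
    while True:
        firsts = Counter(b[0] for b in ballots if b)
        total = sum(firsts.values())
        top, votes = firsts.most_common(1)[0]
        if votes * 2 > total:
            return top
        loser = min(firsts, key=firsts.get)  # fewest first-choice votes
        ballots = [[c for c in b if c != loser] for b in ballots]

# Hypothetical electorate: C is the broadly acceptable compromise,
# yet IRV eliminates C in round one and elects A.
ballots = ([["A", "C", "B"]] * 8
           + [["B", "C", "A"]] * 7
           + [["C", "A", "B"]] * 5)
print(irv_winner(ballots))  # A
```

Under approval voting with the same preferences (each voter approving their top two), C would be approved by all 20 voters, which is the kind of divergence the simulated statistics above are measuring.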
some concerns with classical utilitarianism

Great post - thanks a lot for writing this up! 

It's quite remarkable how we hold ideas to different standards in different contexts. Imagine, for instance, a politician that openly endorses CU. Her opponents would immediately attack the worst implications: "So you would torture a child in order to create ten new brains that experience extremely intense orgasms?" The politician, being honest, says yes, and that's the end of her career. 

By contrast, EA discourse and philosophical discourse are strikingly lenient when it comes to counterintuitive implications of such theories. (I'm not saying anything about which standards are better, and of course this does not only apply to CU.)

Consider the example of someone making a symmetric argument against cosmopolitanism: 

It's quite remarkable how we hold ideas to different standards in different contexts. Imagine, for instance, a US politician that openly endorses caring about all humans equally regardless of where they are located. Her opponents would immediately attack the worst implications: "So you would prefer money that would go to local schools and homeless shelters be sent overseas to foreign countries?" The politician, being honest, says yes, and that's the end of her

... (read more)

Who is the "we" you are talking about? I imagine the people who end that politician's career would not be EAs. So it seems like your example is an example of different people having different standards, not the same people having different standards in different contexts.

Thoughts on whether we're living at the most influential time in history

The key thing is that the way I’m setting priors is as a function from populations to credences: for any property F, your prior should be such that if there are n people in a population, the probability that you are in the m most F people in that population is m/n

The fact that I am considering a certain property F should itself update me, though. This already demonstrates that F is something that I am particularly interested in, or that F is salient to me, which presumably makes it more likely that I am an outlier on F.
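Stated formally, the prior being proposed is a uniform distribution over ranks: for a population of size $n$ and any property $F$,

\[
\Pr\big(\text{you are among the } m \text{ most } F \text{ people}\big) = \frac{m}{n},
\]

i.e. your rank on $F$ is uniform over $\{1, \dots, n\}$ absent further evidence. The objection here is that the very act of selecting $F$ for consideration is already evidence, and so breaks this uniformity.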

Also, this principle can have p... (read more)

Thoughts on whether we're living at the most influential time in history

I’m at a period of unusually high economic growth and technological progress

I think it's not clear whether higher economic growth or technological progress implies more influence. This claim seems plausible, but you could also argue that it might be easier to have an influence in a stable society (with little economic or technological change), e.g. simply because of higher predictability.

So, as I say in the original post and the comments, I update (dramatically) on my estimate of my influentialness, on the basis of these considerations. But by how much? Is

... (read more)
Thoughts on patient philanthropy

I was just talking about 30 years because those are the farthest-out US bonds. I agree that the horizon of patient philanthropists can be much longer.

Thoughts on patient philanthropy

Yeah, but even 30 year interest rates are low (1-2% at the moment). There is an Austrian 100 year bond paying 0.88%. I think that is significant evidence that something about the "patient vs impatient actors" story does not add up.

MichaelDickens: Patient philanthropists might want to wait for hundreds or even thousands of years before deploying their capital. 30 years is nothing compared to the possible future of civilization.
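For scale, even the very low long-dated rates mentioned above compound substantially over patient-philanthropy horizons. A quick sketch using the Austrian bond's 0.88% yield from the comment above:

```python
def growth(rate, years):
    """Value of 1 unit compounded annually at `rate` for `years` years."""
    return (1 + rate) ** years

# 0.88% (the Austrian century bond's yield) over different horizons:
print(growth(0.0088, 30))   # ~1.3x over a 30-year horizon
print(growth(0.0088, 100))  # ~2.4x over the bond's full century
```

So at these rates a century of waiting only multiplies capital by roughly 2.4x, which is part of why such low yields sit oddly with the "patient actors should bid up long-dated assets" story.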
AMA: Tobias Baumann, Center for Reducing Suffering

It is fair to say that some suffering-focused views have highly counterintuitive implications, such as the one you mention. The misconception is just that this holds for all suffering-focused views. For instance, there are plenty of possible suffering-focused views that do not imply that happy humans would be better off committing suicide. In addition to preference-based views, one could value happiness but endorse the procreative asymmetry (so that lives above a certain threshold of welfare are considered OK even if there is some severe suffering), or one... (read more)

AMA: Tobias Baumann, Center for Reducing Suffering

Re: 1., there would be many more (thoughtful) people who share our concern about reducing suffering and s-risks (not necessarily with strongly suffering-focused values, but at least giving considerable weight to it). That results in an ongoing research project on s-risks that goes beyond a few EAs (e.g., it is also established in academia or other social movements).

Re: 2., a possible scenario is that suffering-focused ideas just never gain much traction, and consequently efforts to reduce s-risks will just fizzle out. However, I think there is significant... (read more)

MichaelA: Thanks. Those answers make sense to me. But I notice that the answer to question 1 sounds like an outcome you want to bring about, but which I wouldn't be way more surprised to observe in a world where CRS doesn't exist/doesn't have impact than one in which it does. This is because it could be brought about by the actions of others (e.g., CLR). So I guess I'd be curious about things like:

 * Whether and how you think that that desired world-state will look different if CRS succeeds than if CRS accomplishes very little but other groups with somewhat similar goals succeed
 * How you might disentangle the contribution of CRS to this desired outcome from the contributions of others

I guess this connects to the question of quality/impact assessment as well. I also think this dilemma is far from unique to CRS. In fact, it's probably weaker for CRS than for non-suffering-focused longtermists (e.g. much of FHI), because there are currently more of the latter (or at least they control more resources), so there are more plausible alternative candidates for the causes of non-suffering-focused longtermist impacts.

Also, do you think it might make sense for CRS to run a (small) survey about the quality & impact of its outputs?
AMA: Tobias Baumann, Center for Reducing Suffering

I would guess that actually experiencing certain possible conscious states, in particular severe suffering or very intense bliss, could significantly change my views, although I am not sure if I would endorse this as “reflection” or if it might lead to bias.

It seems plausible (but I am not aware of strong evidence) that experience of severe suffering generally causes people to focus more on it. However, I myself have fortunately never experienced severe suffering, so that would be a data point to the contrary.

AMA: Tobias Baumann, Center for Reducing Suffering

I was exposed to arguments for suffering-focused ethics from the start, since I was involved with German-speaking EAs (the Effective Altruism Foundation / Foundational Research Institute back then). I don’t really know why exactly (there isn’t much research on what makes people suffering-focused or non-suffering-focused), but this intuitively resonated with me.

I can’t point to any specific arguments or intuition pumps, but my views are inspired by writing such as the Case for Suffering-Focused Ethics, Brian Tomasik’s essays, an... (read more)

AMA: Tobias Baumann, Center for Reducing Suffering

I agree that s-risks can vary a lot (by many orders of magnitude) in terms of severity. I also think that this gradual nature of s-risks is often swept under the rug because the definition just uses a certain threshold (“astronomical scale”). There have, in fact, been some discussions about how the definition could be changed to ameliorate this, but I don’t think there is a clear solution. Perhaps talking about reducing future suffering, or preventing worst-case outcomes, can convey this variation in severity more than the term ‘... (read more)

AMA: Tobias Baumann, Center for Reducing Suffering

One key difference is that there is less money in it, because OpenPhil as the biggest EA grantmaker is not focused on reducing s-risks. In a certain sense, that is good news because work on s-risks is plausibly more funding-constrained than non-suffering-focused longtermism.

In terms of where to donate, I would recommend the Center on Long-Term Risk and the Center for Reducing Suffering (which I co-founded myself). Both of those organisations are doing crucial research on s-risk reduction. If you are looking for something a bit less abstract, you could con... (read more)

AMA: Tobias Baumann, Center for Reducing Suffering

I think a plausible win condition is that society has some level of moral concern for all sentient beings (it doesn't necessarily need to be entirely suffering-focused) as well as stable mechanisms to implement positive-sum cooperation or compromise. The latter guarantees that moral concerns are taken into account and possible gains from trade can be achieved. (An example of this could be cultivated meat, which allows us to reduce animal suffering while accommodating the interests of meat eaters.)

However, I think suffering reducers in particular shoul... (read more)

AMA: Tobias Baumann, Center for Reducing Suffering

I don’t think this view is necessary to prioritise s-risk. A finite but relatively high “trade ratio” between happiness and suffering can be enough to focus on s-risks. In addition, I think it’s more complicated than putting some numbers on happiness vs. suffering. (See here for more details.) For instance, one should distinguish between the intrapersonal and the interpersonal setting - a common intuition is that one man’s pain can’t be outweighed by another’s pleasure.

Another possibility is lexicality: one... (read more)

Sebastian Schwiecker: Thanks a lot for the reply and the links.
AMA: Tobias Baumann, Center for Reducing Suffering

We have thought about this, and wrote up some internal documents, but have not yet published anything (though we might do that at some point, as part of a strategic plan). Magnus and I are quite aligned in our thinking about the theory of change. The key intended outcome is to catalyse a research project on how to best reduce suffering, both by creating relevant content ourselves and by convincing others to share our concerns regarding s-risks and reducing future suffering.

MichaelA: That makes sense, thanks. Do you have a sense of who you want to take up that project, or who you want to catalyse it among? E.g., academics vs EA researchers, and what type/field? And does this influence what you work on and how you communicate/disseminate your work?
AMA: Tobias Baumann, Center for Reducing Suffering

Apart from the normative discussions relating to the suffering focus (cf. other questions), I think the most likely reasons are that s-risks may simply turn out to be too unlikely, or too far in the future for us to do something about it at this point. I do not currently believe either of those (see here and here for more), and hence do work on s-risks, but it is possible that I will eventually conclude that s-risks should not be a top priority for one of those reasons.

AMA: Tobias Baumann, Center for Reducing Suffering

I would refer to this elaborate comment by Magnus Vinding on a very similar question. Like Magnus, I think a common misconception is that suffering-focused views have certain counterintuitive or even dangerous implications (e.g. relating to world destruction), when in fact those problematic implications do not follow.

Suffering-focused ethics is also still sometimes associated with negative utilitarianism (NU). While NU counts as a suffering-focused view, this often fails to appreciate the breadth of possible suffering-focused views, including pluralist and... (read more)

While I agree that problematic implications do not follow in practice, I still think some views have highly counterintuitive implications. E.g., some suffering-focused views would imply that most happy present-day humans would be better off committing suicide if there's a small chance that they would experience severe suffering at some point in their lives. This seems a highly implausible and under-appreciated implication (and makes me assign more credence to views that don't have this implication, such as preference-based and upside-focused views).

AMA: Tobias Baumann, Center for Reducing Suffering

Great question! I think both moral and factual disagreements play a significant role. David Althaus suggests a quantitative approach of distinguishing between the “N-ratio”, which measures how much weight one gives to suffering vs. happiness, and the “E-ratio”, which refers to one’s empirical beliefs regarding the ratio of future happiness and suffering. You could prioritise s-risk because of a high N-ratio (i.e. suffering-focused values) or because of a low E-ratio (i.e. pessimistic views of the future).

That suggests tha... (read more)

The case of the missing cause prioritisation research

Yeah, I would perhaps say that the community has historically been too narrowly focused on a small number of causes. But I think this has been improving for a while, and we're now close to the right balance. (There is also a risk of being too broad, by calling too many causes important and not prioritising enough.)

The case of the missing cause prioritisation research

Thanks for writing this up! I think you're raising many interesting points, especially about a greater focus on policy and going "beyond speculation".

However, I'm more optimistic than you are about the degree of work invested in cause prioritisation, and the ensuing progress we've seen over the last years. See this recent comment of mine - I'd be curious if you find those examples convincing.

Also, speaking as someone who is working on this myself, there is quite a bit of research on s-risks and cause prioritisation from a suff... (read more)

weeatquince: Hi Tobias, thank you for the comment. Yes, very glad for CLR etc. and all the s-risk research. An interesting thing I noted when reading through your recent comment is that all 3 of the examples of progress involve a broadening of EA, expanding horizons, pushing back on the idea that we need to be focusing right now on AI risk. They suggest that to date the community has perhaps gone too quickly towards a specific cause area (AI / immediate x-risk mitigation) rather than continued to explore.

I don't really know what to make of that. Do your examples weaken the point I am making or strengthen it? Is this evidence that useful research is happening, or is this evidence that we as a community under-invest in exploration? Maybe there is no universal answer to this question and it depends on the individual reader and how your examples affect their current assumptions and priors about the world.
What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

I think there haven’t been any novel major insights since 2015, for your threshold of “novel” and “major”.

Notwithstanding that, I believe that we’ve made significant progress and that work on macrostrategy was and continues to be valuable. Most of that value is in many smaller insights, or in the refinement and diffusion of ideas that aren’t strictly speaking novel. For instance:

  • The recent work on patient longtermism seems highly relevant and plausibly meets the bar for being “major”. This isn't
... (read more)

The ideas behind patient altruism have received substantial discussion in academia:

But this literature doesn't s

... (read more)

I liked this answer.

One thing I'd add: My guess is that part of why Max asked about novel insights is that he's wondering what the marginal value of longtermist macrostrategy or global priorities research has been since 2015, as one input into predictions about the marginal value of more such research. Or at least, that's a big part of why I find this question interesting.

So another interesting question is what is required for us to have "many smaller insights" and "the refinement and diffusion of ideas that aren’t strictly speaking novel"? E.g., does that

... (read more)
Common ground for longtermists

Thanks for the comment! I fully agree with your points.

People with and without suffering-focused ethics will agree on what to do in the present even more than would be expected from the above point alone. In particular, this is because many actions aimed at changing the long-term future in ways primarily valued by one of those groups of people will also happen to (in expectation) change the long-term future in other ways, which the other group values.

That's a good point. A key question is how fine-grained our influence over the long-term future is - t... (read more)

Common ground for longtermists

Yeah, I meant it to be inclusive of this "portfolio approach". I agree that specialisation and comparative advantages (and perhaps also sheer motivation) can justify focusing on things that are primarily good based on one (set of) moral perspectives.

MichaelA: In that case, take my comment above as just long-winded agreement! I think we could probably consider motivation (and thus "fit with one's values") as one component of/factor in comparative advantage, because it will tend to make a person better at something, likely to work harder at it, less likely to burn out, etc. Though motivation could sometimes be outweighed by other components of/factors in comparative advantage (e.g., a person's current skills, credentials, and networks).
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

That seems plausible and is also consistent with Amara's law (the idea that the impact of technology is often overestimated in the short run and underestimated in the long run).

I'm curious how likely you think it is that productivity growth will be significantly higher (i.e. levels at least comparable with electricity) for any reason, not just AI. I wouldn't give this much more than 50%, as there is also some evidence that stagnation is on the cards (see e.g. 1, 2). But that would mean that you're confident that the cause of higher pro... (read more)

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

I agree that it's tricky, and am quite worried about how the framings we use may bias our views on the future of AI. I like the GDP/productivity growth perspective but feel free to answer the same questions for your preferred operationalisation.

Another possible framing: given a crystal ball showing the future, how likely is it that people would generally say that AI is the most important thing that happens this century?

As one operationalization, then, suppose we were to ask an economist in 2100: "Do you think that the counterfactual contribution
... (read more)
Ben Garfinkel: I mostly have in mind the idea that AI is "early-stage," as you say. The thought is that "general purpose technologies" (GPTs) like electricity, the steam engine, the computer, and (probably) AI tend to have very delayed effects. For example, there was really major progress in computing in the middle of the 20th century, and lots of really major inventions throughout the 70s and 80s, but computers didn't have a noticeable impact on productivity growth until the 90s. The first serious electric motors were developed in the mid-19th century, but electricity didn't have a big impact on productivity until the early 20th. There was also a big lag associated with steam power; it didn't really matter until the middle of the 19th century, even though the first steam engines were developed centuries earlier.

So if AI takes several decades to have a large economic impact, this would be consistent with analogous cases from history. It can take a long time for the technology to improve, for engineers to get trained up, for complementary inventions to be developed, for useful infrastructure to be built, for organizational structures to get redesigned around the technology, etc. I don't think it'd be very surprising if 80 years was enough for a lot of really major changes to happen, especially since the "time to impact" for GPTs seems to be shrinking over time. Then I'm also factoring in the additional possibility that there will be some unusually dramatic acceleration, which distinguishes AI from most earlier GPTs.
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

What is your overall probability that we will, in this century, see progress in artificial intelligence that is at least as transformative as the industrial revolution?

What is your probability for the more modest claim that AI will be at least as transformative as, say, electricity or railroads?

What is your overall probability that we will, in this century, see progress in artificial intelligence that is at least as transformative as the industrial revolution?

I think this is a little tricky. The main way in which the Industrial Revolution was unusually transformative is that, over the course of the IR, there were apparently unusually large pivots in several important trendlines. Most notably, GDP-per-capita began to increase at a consistently much higher rate. In more concrete terms, though, the late nineteenth and early twentieth centuries pr

... (read more)
Space governance is important, tractable and neglected

I also recently wrote up some thoughts on this question, though I didn't reach a clear conclusion either.

Max_Daniel's Shortform

This could be relevant. It's not about the exact same question (it looks at the distribution of future suffering, not of impact) but some parts might be transferable.

Representing future generations in the political process

Hi Michael,

thanks for the comment!

Could you expand on what you mean by the first part of that sentence, and what makes you say that?

I just meant that proposals to represent future non-human animals will likely gain less traction than the idea of representing future humans. But I agree that it would be perfectly possible to do it (as you say). And of course I'd be strongly in favour of having a Parliamentary Committee for all Future Sentient Beings or something like that, but again, that's not politically feasible anytime soon. So we have to fin... (read more)

MichaelA: Ah, that makes sense, then. This is an interesting point, and I think there's something to it. But I also tentatively think that the distinction might be less sharp than you suggest. (The following is again just quick thoughts.)

Firstly, it seems to me that we should currently have a lot of uncertainties about what would be better for animals. And it also seems that, in any case, much of the public probably is uncertain about a lot of relevant things (even if sufficient evidence to resolve those uncertainties does exist somewhere). There are indeed some relatively obvious low-hanging fruit, but my guess would be that, for all the really big changes (e.g., phasing out factory farming, improving conditions for wild animals), it would be hard to say for sure what would be net-positive. For example, perhaps factory farmed animals have net positive lives, or could have net positive lives given some changes in conditions, in which case developing clean meat, increasing rates of veganism, etc. could be net negative (from a non-suffering-focused perspective), as it removes wellbeing from the world.

Of course, even if facing such uncertainties, expected value reasoning might strongly support one course of action. Relatedly, in reality, I'm quite strongly in favour of phasing out factory farming, and I'm personally a vegetarian-going-on-vegan. But I do think there's room for some uncertainty there. And even if there are already arguments and evidence that should resolve that uncertainty for people, it's possible that those arguments and bits of evidence would be more complex or less convincing than something like "In 2045, people/experts/some metric will be really really sure that animals would've been better off if we'd done X than if we'd done Y." (But that's just a hypothesis; I don't know how convincing people would find such judgements-from-the-future.)

Secondly, it seems that there are several key things where it's quite clear what policies would be better for fu
Representing future generations in the political process

Hi Tyler,

thanks for the detailed and thoughtful comment!

I find much less compelling the idea that "if there is the political will to seriously consider future generations, it’s unnecessary to set up additional institutions to do so," and "if people do not care about the long-term future," they would not agree to such measures. The main reason I find this uncompelling is just that it overgenerates in very implausible ways. Why should women have the vote? Why should discrimination be illegal?

Yeah, I agree that there are plenty of... (read more)

tylermjohn: Ah, it looks like I read your post to be a bit more committal than you meant it to be! Thanks for your reply! And sorry for the misnomer, I'll correct that in the top-level comment.
Space governance is important, tractable and neglected

Hey Jamie, thanks for the pointer! I wasn't aware of this.

Another relevant critique of whether colonisation is a good idea is Daniel Deudney's new book Dark Skies.

I myself have also written up some more thoughts on space colonisation in the meantime and have become more sceptical about the possibility of large-scale space settlement happening anytime soon.

Problem areas beyond 80,000 Hours' current priorities

Great post - I think it's extremely important to explore many different problem areas!

Some further plausible (in my opinion) candidates are shaping genetic enhancement, reducing long-term risks from malevolent actors, invertebrate welfare and space governance.

Ardenlk: Hi Tobias, we've added "governance of outer space" on your recommendation. Thanks!

Hi Tobias — thanks for the ideas!

Invertebrate welfare is wrapped into 'Wild animal welfare', and reducing long-term risks from malevolent actors is partially captured under 'S-risks'. We'll discuss the other two.

EA considerations regarding increasing political polarization

Great work, thanks for writing this up! I agree that excessive polarisation is an important issue and warrants more EA attention. In particular, polarisation is an important risk factor for s-risks.

Political polarization, as measured by political scientists, has clearly gone up in the last 20 years.

It is worth noting that this is a US-centric perspective and the broader picture is more mixed, with polarisation increasing in some countries and decreasing in others.

If there’s more I’m missing, feel free to provide links in the comment section.
... (read more)
increasing the presence of public service broadcasting

I don't know how well that would work in the US - it seems that existing public service broadcasters (PBS and NPR) are perceived as biased by American conservatives.

A related idea I've seen is media companies which sell cancellation insurance (archive). The idea being that this is a business model which incentivizes obtaining the trust and respect of as many people as possible, as opposed to inspiring a smaller number of true believers to share/subscribe. One advantage of this idea is it does... (read more)

How Much Leverage Should Altruists Use?

The drawdowns of major ETFs in this space (e.g. EMB / JNK) during the corona crash or 2008 were roughly 2/3 to 3/4 as large as those of stocks (the S&P 500). So I agree the diversification benefit is limited. The question, bracketing the point about the extra cost of leverage, is whether the positive EV of emerging-market bonds / high-yield bonds is more or less than 2/3 to 3/4 of the positive EV of stocks. That's pretty hard to say; there's a lot of uncertainty on both sides. But if that is the case and one can borrow at very good rates (e.g. through futures or box

... (read more)
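The break-even comparison described in the comment above can be sketched in a few lines. This is only an illustration of the reasoning, not investment advice: all numbers (5% stock excess return, a 70% drawdown ratio) are hypothetical, and "return per unit of drawdown" is a deliberately crude risk metric.

```python
# A rough sketch of the comparison above: if high-yield bonds draw down
# ~70% as much as stocks in crashes, they roughly "pay their way" per
# unit of drawdown risk only if their expected excess return is at least
# ~70% of the stock excess return. All figures are made up.

def excess_return_threshold(stock_excess_return, drawdown_ratio):
    """Minimum excess return an asset needs so that its return per unit
    of drawdown risk matches that of stocks."""
    return stock_excess_return * drawdown_ratio

stock_excess = 0.05    # assumed 5% expected excess return for stocks
drawdown_ratio = 0.70  # bonds fell ~70% as much as stocks (2/3 to 3/4)

threshold = excess_return_threshold(stock_excess, drawdown_ratio)
print(f"Break-even excess return for the bonds: {threshold:.1%}")
# A bond excess return above this threshold would suggest the asset adds
# value even before any diversification benefit; below it, only the
# (limited) diversification benefit could justify holding it.
```

Under these assumed numbers the break-even point is 3.5%; whether real high-yield or EM bonds clear that bar is exactly the empirical uncertainty the comment points at.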
How Much Leverage Should Altruists Use?

What are your thoughts on high-yield corporate bonds or emerging-market bonds? This kind of bond offers non-zero interest rates but of course also entails higher risk. Also, these markets aren't (to my knowledge) distorted by the Fed buying huge amounts of bonds.

Theoretically, there should be some diversification benefit from adding this kind of bond, though it's all positively correlated. But unfortunately, ETFs on these kinds of bonds have much higher fees.
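The diversification trade-off mentioned above can be illustrated with the standard two-asset portfolio-volatility formula. All inputs here (16% stock volatility, 10% bond volatility, correlations of 0.8 and 0.2) are hypothetical placeholders chosen only to show how the benefit shrinks as correlation rises.

```python
import math

def portfolio_vol(w_a, vol_a, vol_b, corr):
    """Volatility of a two-asset portfolio with weights w_a and 1 - w_a."""
    w_b = 1.0 - w_a
    var = (w_a * vol_a) ** 2 + (w_b * vol_b) ** 2 \
          + 2 * w_a * w_b * vol_a * vol_b * corr
    return math.sqrt(var)

# Hypothetical figures: stocks at 16% vol, high-yield bonds at 10% vol.
stocks_only = portfolio_vol(1.0, 0.16, 0.10, 0.8)    # = 16% vol
mixed_high_corr = portfolio_vol(0.7, 0.16, 0.10, 0.8)
mixed_low_corr = portfolio_vol(0.7, 0.16, 0.10, 0.2)

print(f"stocks only: {stocks_only:.1%}")
print(f"70/30 mix, corr 0.8: {mixed_high_corr:.1%}")
print(f"70/30 mix, corr 0.2: {mixed_low_corr:.1%}")
```

With a correlation of 0.8 the volatility reduction from adding bonds is modest, and it is exactly in crashes, when correlations tend to rise, that the benefit matters most; this is the "it's all positively correlated" worry in numeric form.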

MichaelDickens (2y): I don't know much about emerging market bonds so I can't make any confident claims, but I can say how I am thinking about it for my personal portfolio. I considered holding emerging market bonds because the yield spread between them and developed-market bonds is unusually high. I decided not to hold them because I don't think they provide enough diversification benefit in the tails. Since I invest with leverage, it doesn't necessarily make sense for me to maximally diversify; I only hold assets if I think the benefit overcomes the extra cost of leverage. But I do believe it might make sense to hold emerging bonds for someone with a less leveraged, more diversified portfolio. That said, I would consider them a "risky" asset, not a "safe" asset, and plan accordingly.
matthew.vandermerwe (2y): [disclosure: not an economist or investment professional] This seems wrong — the spillover effects of 2008–13 QE on EM capital markets are fairly well established (cf. the 'Taper Tantrum' of 2013); see e.g. Effects of US Quantitative Easing on Emerging Market Economies.
How should longtermists think about eating meat?

Peter's point is that it makes a lot of sense to have certain norms about not causing serious direct harm, and that one should arguably follow such norms rather than relying on some complex longtermist cost-benefit analysis.

Put differently, I think it is very important, from a longtermist perspective, to advance the idea that animals matter and that we consequently should not harm them (particularly for reasons as frivolous as eating meat).

TobiasH (2y): I don't think that calling meat-eating frivolous is very helpful. Most vegans revert to consuming some degree of animal products (as far as I understand the research, they end up eating meat again, but in lower quantities), indicating that there are significant costs involved. A side-constraint about harm is generally plausible to me. I'm still somewhat sceptical about the argument:
- Either you extend this norm to not omitting actions that could prevent harm from happening, or you seem to be making a dubious distinction between acts and omissions. Extending the norm would possibly give reasons for longtermists to prioritise other ways to prevent harm over not eating meat (and then this should be part of the longtermist cost-benefit analysis the OP asks for).
- There should be some way to account for the fact that in some cases violating the side-constraint is costly, while in other cases complying with the side-constraint is costly.
I completely agree that longtermists should take animal welfare into account, and that this is not happening to an adequate degree at the moment. I'm far less sure whether comparing meat-eating to punching your neighbour is going to achieve this.
Reducing long-term risks from malevolent actors

Thanks for commenting!

I agree that early detection in children is an interesting idea. If certain childhood behaviours can be shown to reliably predict malevolence, then this could be part of a manipulation-proof test. However, as you say, there are many pitfalls to be avoided.

I am not well versed in the literature but my impression is that things like torturing animals, bullying, general violence, or callous-unemotional personality traits (as assessed by others) are somewhat predictive of malevolence. But the problem is that you'll probably also get many

... (read more)
Reducing long-term risks from malevolent actors

Thanks for the comment!

I would guess that having better tests of malevolence, or even just a better understanding of it, may help with this problem. Perhaps a takeaway is that we should not just raise awareness (which can backfire via “witch hunts”), but instead try to improve our scientific understanding and communicate that to the public, which hopefully makes it harder to falsely accuse people.

In general, I don’t know what can be done about people using any means necessary to smear political opponents. It seems that the way to address this is to have go

... (read more)
Adapting the ITN framework for political interventions & analysis of political polarisation

Great work, thanks for sharing! It's great to see this getting more attention in EA.

Just for those deciding whether to read the full thesis: it analyses four possible interventions to reduce polarisation: (1) switching from FPTP to proportional representation, (2) making voting compulsory, (3) increasing the presence of public service broadcasting, and (4) creating deliberative citizens' assemblies. Olaf's takeaway (as far as I understand it) is that those interventions seem compelling and fairly tractable, but the evidence of possible impacts is often not very strong.

Stefan_Schubert (2y): Thanks Tobias, that's helpful.
edcon (2y): Thanks for this Olaf, good work! I think improving institutions is a good intervention and is probably good to have in a portfolio of measures to improve the long term. As well as this, I think EA public discussion is overly focused on the question of what to do with an amount of money, not with a set amount of political influence or campaigning time. Though GPI and FHI seem to do some amount of govt advising. From a UK perspective:
1) Though changing voting systems seems good, it would change the likely outcome of elections (more PR systems tend to favour more left parties), so would likely only be supported by parties it would benefit. This has impact even if it went to a referendum, as the alternative vote in the UK was not a PR system, and the government was strongly against it, which contributed to its loss.
2) Increasing voter turnout also seems quite good. Compulsory voting seems not to be talked about much in the UK, though plausibly it could be supported by the public (55% in a 2015 YouGov poll), as could automatic registration (10.1016/j.electstud.2016.03.005), increasing opening hours of polling stations (doi:10.1007/s11558-018-9305-8), or decreasing the voting age to 16 (10.1016/j.electstud.2012.01.007).
3) The public broadcaster BBC has had its funding cut, and more funding cuts look likely. This, as well as decreasing quality and allowing less investigative journalism, will make it less independent. This is because as funding is cut, news organisations have to depend on outside sources of information, which is mostly legacy print media which is non-partisan in the UK. As well, 'revolving doors' exist in public service broadcasting, where many journa
Some thoughts on Toby Ord’s existential risk estimates

Well, historically, there have been quite a few pandemics that killed more than 10% of people, e.g. the Black Death or Plague of Justinian. There's been no pandemic that killed everyone.

Is your point that it's different for anthropogenic risks? Then I guess we could look at wars for historic examples. Indeed, there have been wars that killed something on the order of 10% of people, at least in the warring nations, and IMO that is a good argument to take the risk of a major war quite seriously.

But there have been far more wars that killed fewer ... (read more)

kokotajlod (2y): I feel the need to clarify, by the way, that I'm being a bit overly aggressive in my tone here and I apologize for that. I think I was writing quickly and didn't realize how I came across. I think you are making good points and have been upvoting them even as I disagree with them.
kokotajlod (2y): I think there are some risks which have "common mini-versions," to coin a phrase, and others which don't. Asteroids have mini-versions (10%-killer-versions), and depending on how common they are the 10%-killers might be more likely than the 100%-killers, or vice versa. I actually don't know which is more likely in that case. AI risk is the sort of thing that doesn't have common mini-versions, I think. An AI with the means and motive to kill 10% of humanity probably also has the means and motive to kill 100%.

Natural pandemics DO have common mini-versions, as you point out. It's less clear with engineered pandemics. That depends on how easy they are to engineer to kill everyone vs. how easy they are to engineer to kill not-everyone-but-at-least-10%, and it depends on how motivated various potential engineers are. Accidental physics risks (like igniting the atmosphere, creating a false vacuum collapse or black hole or something with a particle collider) are way more likely to kill 100% of humanity than 10%. They do not have common mini-versions.

So what about unknown risks? Well, we don't know. But from the track record of known risks, it seems that probably there are many diverse unknown risks, and so probably at least a few of them do not have common mini-versions. And by the argument you just gave, the "unknown" risks that have common mini-versions won't actually be unknown, since we'll see their mini-versions. So "unknown" risks are going to be disproportionately the kind of risk that doesn't have common mini-versions.

As for what I meant about making the exact same argument in the past: I was just saying that we've discovered various risks that don't have common mini-versions, which at one point were unknown and then became known. Your argument basically rules out discovering such things ever again.
Had we listened to your argument before learning about AI, for example, we would have concluded that AI was impossible, or that somehow AIs which have the
MichaelA (2y): I think, for basically Richard Ngo's reasons, I weakly disagree with a strong version of Tobias's original claim that: (Whether/how much I disagree depends in part on what "much more frequently" is meant to imply.) I also might agree with: (Again, depends in part on what "much more likely" would mean.) But I was very surprised to read: And I mostly agree with Tobias's response to that.

The point that there's a narrower band of asteroid sizes that would cause ~10% of the population's death than of asteroid sizes that would cause 100% makes sense. But I believe there are also many more asteroids in that narrow band. E.g., Ord writes: And for pandemics it seems clearly far more likely to have one that kills a massive number of people than one that kills everyone. Indeed, my impression is that most x-risk researchers think it'd be notably more likely for a pandemic to cause existential catastrophe through something like causing collapse or reduction in population to below minimum viable population levels, rather than by "directly" killing everyone. (I'm not certain of that, though.)

I'd guess, without much basis, that:
* the distribution of impacts from a source of risk would vary between the different sources of risks
* For pandemics, killing (let's say) 5-25% really is much more likely than killing 100%. (This isn't why I make this guess, but it seems worth noting: Metaculus currently predicts a 6% chance COVID-19 kills >100million people by the end of the year, which would put it close to that 5-25% range. I really don't know how much stock to put in that.)
* For asteroids, it appears based on Ord that the same is true ("smaller" catastrophes much likelier than complete extinction events).
* For AI, the Bostrom/Yudkowsky-style scenario might actually be more likely to kill 100% than 5-25%, or at
Stefan_Schubert (2y): Minor: some recent papers argue the death toll from the Plague of Justinian has been exaggerated. (Two authors appear on both papers.)