Recent Discussion

Tl;dr: Because quantum computers can, in some sense, compute over exponentially many superposed states in parallel, it seems possible that a quantum computer's capacity to generate utility grows exponentially with its size. If this exponential conjecture is true, then very-near-term quantum computers could suffer more than all life on Earth has in the past 4 billion years.

Introduction

Quantum computing (QC) is a categorically different form of computation, capable of tackling certain problems exponentially faster than its classical counterparts. The bounty QC may bring is large: speedups across a broad range of topics, from drug discovery to reinforcement learning. What's more, these speedups may be imminent. Quantum supremacy[1] has already been achieved, and the large quantum-chip manufacturers are expecting the construction of...

If the Everett interpretation is true, then all experiences are already amplified exponentially. Unless I'm missing something, a QC doesn't deserve any special consideration. It all adds up to normality.

1evn6h
1. I think it is much more likely that different states should be counted according to their measure, not their existence. Denying this runs into the preferred-basis issues you mentioned: since |+> = 1/sqrt(2) (|0> + |1>), it's unclear whether we should count one observer in state |+>, or two observers, one in state |0> and one in state |1>. Counting by measure is at least consistent about how much moral weight the system has: weight <+|+> = 1 in the first case, and weight (1/sqrt(2))(<0| + <1|) (1/sqrt(2))(|0> + |1>) = 1 in the second. (This issue is even worse in infinite-dimensional Hilbert spaces: a position eigenstate is a superposition of infinitely many momentum eigenstates. If we counted each state in a superposition separately, the most important question in the world for suffering would be whether space is discrete.)
2. This isn't an issue unique to quantum computers in all interpretations of quantum mechanics. In a theory where wavefunction collapse is a physical process, quantum computers would indeed be special; but in many-worlds, everything is in a superposition anyway, and quantum computers are only special because they avoid decoherence at larger scales. I personally think something like many-worlds is far more likely to be true than a theory involving true nondeterminism, although it's not a closed question.
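The basis-independence point in (1) can be checked numerically. A minimal sketch (illustrative only; the state and basis labels match the comment's example):

```python
import numpy as np

# Computational-basis states |0> and |1>
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# |+> = (|0> + |1>) / sqrt(2)
ket_plus = (ket0 + ket1) / np.sqrt(2)

# Counting by measure: total weight is <psi|psi>, the same in any basis.
weight_plus_basis = float(np.vdot(ket_plus, ket_plus))  # <+|+>

# Same state, expanded in the {|0>, |1>} basis: sum the squared amplitudes.
amp0 = np.vdot(ket0, ket_plus)  # <0|+>
amp1 = np.vdot(ket1, ket_plus)  # <1|+>
weight_z_basis = abs(amp0) ** 2 + abs(amp1) ** 2

print(weight_plus_basis)  # ≈ 1.0
print(weight_z_basis)     # ≈ 1.0, same total measure either way
```

Counting states by *existence* would give one observer in the first decomposition and two in the second; counting by measure gives weight 1 in both.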

Overview

Coordination problems in EA seem somewhat pressing, but we lack a structured way to think through different kinds of coordination challenges and bottlenecks.

In this post we

  • Introduce a way to categorize and understand coordination services within the EA community: ecosystem & community member services
  • Present different potential services in these categories
  • Present a way to categorize coordination bottlenecks into three functional areas: knowledge, infrastructure and support

Our intention with this post is mainly to open up a discussion about this topic so that everyone can contribute to it and understand it better.

Introduction

The importance of coordination & coordination challenges has been periodically discussed for several years in the EA movement. In recent years, it seems that these challenges are becoming more acute - in November 2020 Benjamin Todd mentioned that currently,...

I completely agree, I've thought this ever since I joined the EA movement in 2013, and think this space (among others) is exceptionally neglected.

Some public examples of work I've done in this space include thinking about how to improve funder, project, and volunteer/team coordination as well as providing specialized services to the community. Given that this is a neglected area with little time and money available for people to make progress on it, I think it's no surprise that the space has "a large 'graveyard' of untried proposals or projects." I think ... (read more)

1ElliotJDavies13h
Always nice to have a strong feeling about something and have your argument confirmed and strengthened by others. I would say there's a lot of work to do in this space. I'm happy to hear about the 2 recent CE charities pointing in this direction; many more are needed.

One thing I would note: I suspect that non-scalable solutions (e.g. headhunting) might not be able to keep up with demand in the foreseeable future. Because community building is particularly tractable, the current level of effort and money being put into it could mean that it grows faster than other parts of EA. The end result would be that the types of infrastructure discussed above remain out of reach for the majority of EAs at the local level. For this reason I suspect that local groups will have to build many of the services discussed here (skills training, headhunting, career planning) for themselves. I therefore think we are locked into a trajectory of EA being a movement where most activity (research, learning, networking, jobs) happens in local groups, something most EAs would find unimaginable at the moment.

Back in the old ages, shopping was a hobby, not a science.

People used to lazily stroll through supermarkets with their baskets, looking around to see if anything caught their eyes. Whenever something did, they'd plop it into their basket without thinking twice and continue loafing across the tile with a superficial smile.

Rather than analyze quality, people picked the products with shiny labels, bright colors, and sexy fonts, the ones that gave them a visceral dopamine burst, the ones that evoked a pure but fleeting sense of glee. They shopped for the packaging without giving any attention to what was inside. They would occasionally show off this packaging to their friends and family, as an expression of personal identity. Look at me, they said, I'm the kind of...

I think this is a very cute, clever story! I appreciate it and have upvoted it! I don't think I have any clever comments, though I'll let you know if I think of any.

2Madhav Malhotra9h
This is very well written! I really admire all the imagery in your writing, and how you leave it up to the reader to interpret the meaning here (give 2 + 2, not 4). May I ask what made you write this? :-) What else have you written? Where did you learn to write like this?

This term I took a reinforcement learning course at my university, hoping to learn something useful for the research directions I'm considering entering (one of which is AI safety; the others are speculative, so I'm not listing them).

I was about to start coding my first toy model when I suddenly recalled something I had previously read: Brian Tomasik's Which Computations Do I Care About? and Ethical Issues in Artificial Reinforcement Learning. So I re-read the two essays, and despite dissenting on many of the points Brian had made, I did become convinced that RL agents (and some other algorithms too) deserve, in expectation, a tiny yet non-zero moral weight, and that this weight can accumulate over the many episodes of the training process to become significant.

This...

Thanks for the answers, they all make sense and upvoted all of them :)

So for a brief summary:

  • The action that I described in the question is far from optimal under EV framework (CarlShulman & Brian_Tomasik), and
  • Even if it is optimal, a utilitarian may still have ethical reasons to reject it, if he or she:
    • endorses some kind of non-traditional utilitarianism, most notably SFE (TimothyChan); or
    • considers the uncertainty involved to be moral (rather than factual) uncertainty (Brian_Tomasik).

We are excited to announce the launch of The Nonlinear Library, which allows you to easily listen to top EA content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs.

In the rest of this post, we’ll explain our reasoning for the audio library, why it’s useful, why it’s potentially high impact, its limitations, and our plans. You can read it here or listen to the post in podcast form here.

Listen here: Spotify, Google Podcasts, Pocket Casts, Apple

Or, just search for it in your preferred podcasting app.

Goal: increase the number of people who read EA research

An EA koan: if your research is high quality, but nobody reads it, does...

Possibly, it is enough to just have a disclaimer like "by submitting, you agree to have this turned into an audio format" to satisfy copyright laws?

5Charles He13h
Gears-level info about TTS. I feel like I am writing an Elon Musk tweet, but in case anyone is interested: as the post mentions, the speech quality basically comes from using the newer TTS models. From the sound, I think this is using Amazon Polly, with the voice "Matthew, Male".

I am an imposter, but I know this because I "made an app" for myself that does general TTS for local docs and websites. The relevant code that produces the voices is two lines long. I did not check, but I would be surprised if there were not a browser extension for this already.

If EAs think a browser extension would be valuable, or want any of the permutations of forum/comment or other services, my guess is that a working-quality project and full deployment could be made for $30,000, or maybe as little as $3,000 (the crux is operational: handling payment and accounts, as well as interpretation of the value of project management). If there is interest, I think we can just put this in EAIF or the Future Fund or something.

Hear the audio version of this comment in the "Matthew" voice [https://ej21-tts-voice-samples.s3.us-west-2.amazonaws.com/matthew-sample.mp3]. Hear the audio version of this comment in the "Kevin" voice [https://ej21-tts-voice-samples.s3.us-west-2.amazonaws.com/kevin-sample.mp3].
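For anyone curious what "two lines long" looks like in practice, here is a minimal sketch of the kind of Amazon Polly call Charles describes. This is an illustration, not his actual code: it assumes the boto3 SDK and configured AWS credentials, and the helper names and output filename are made up.

```python
# Minimal Amazon Polly TTS sketch (hypothetical; assumes boto3 and AWS credentials).
def build_polly_request(text, voice="Matthew"):
    """Assemble the synthesize_speech parameters for a given text and voice."""
    return {"Text": text, "OutputFormat": "mp3", "VoiceId": voice}

def synthesize_to_file(text, path="comment.mp3", voice="Matthew"):
    """The core of it really is about two lines: create a client, call synthesize_speech."""
    import boto3  # third-party AWS SDK; not imported at module level so the sketch loads without it
    polly = boto3.client("polly")
    response = polly.synthesize_speech(**build_polly_request(text, voice))
    with open(path, "wb") as f:
        f.write(response["AudioStream"].read())

# Building the request needs no AWS account, so we can inspect it directly:
params = build_polly_request("Hear the audio version of this comment.")
print(params["VoiceId"])  # Matthew
```

Calling `synthesize_to_file("...")` would write an mp3 with the same "Matthew" voice as the samples linked above, assuming valid credentials.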
2RyanCarey15h
As per the thread with Pablo, I think the podcast sounds pretty good. Having said that, I do have one small suggested improvement. When I look at the logo (a Sierpinski triangle) and think about what it's supposed to represent, it makes me think of a pyramid, or of growing replicators: "one person recruits three, and so on". In particular, although this may seem kind of unfair, it reminds me of this scene [http://www.youtube.com/watch?v=lC5lsemxaJo&t=0m52s] from The Office. Given that movement building is a major project of the org, that's probably not the connotation you want. I realize that most people aren't going to think of this connotation, but I'm very curious about others' thoughts, because even a few seems too many...

While global demand for meat continues to grow, conventional production methods are associated with problems of enormous scale, driving climate change and environmental degradation, antibiotic-resistant disease, and animal suffering. However, the development of plant-based and cultivated cell-based meats offers promising solutions.

Join Effective Altruism UQ online to learn about alternative meats from a variety of speakers all working to revolutionize food!

Speakers:
Ruth Purcell – Precision Fermentation & State of the Industry in Australia
Prof. Jason Stokes – Taste vs Texture in Plant-based Meat Engineering
Dr. Felix Septianto – Consumer Acceptance of Cultured Meats
Dr Mark Allenby – Engineering Lab-Grown Tissues

Tuesday 26 October, 6PM (UTC+10)
Event link: https://uqz.zoom.us/j/83675715923

1Evan R. Murphy6h
People in bunkers, "sardines", and why biorisks may be overrated as a global priority

I'm going to make the case here that certain problem areas currently prioritized highly in the longtermist EA community are overweighted in their importance/scale. In particular I'll focus on biorisks, but this could also apply to other risks, such as non-nuclear global war, and perhaps other areas as well. I'll focus on biorisks because that area is currently highly prioritized by both Open Philanthropy and 80,000 Hours, and probably by other EA groups as well. If I'm right that biotechnology risks should be deprioritized, that would relatively increase the priority of other issues (AI, growing effective altruism, global priorities research, nanotechnology risks, and others) by a significant amount, which could help allocate more resources to areas that still pose existential threats to humanity.

I won't be taking issue with the longtermist worldview here; in fact, I'll assume it is correct. Rather, I'm questioning whether biorisks really pose a significant existential/extinction risk to humanity. I don't doubt that they could lead to major global catastrophes which it would be really good to avert. I just think it's extremely unlikely for them to lead to total human extinction or permanent civilizational collapse.

This started when I was reading about disaster shelters. Nick Beckstead has a paper considering whether they could be a useful avenue for mitigating existential risks [1]. He concludes there could be a couple of special scenarios, needing further research, where they are; but by and large new refuges don't seem like a great investment, because there are already so many existing shelters and other things that could serve to protect people from many global catastrophes.

Specifically, the world already has a lot of government bunkers, private shelters, people working on submarines, and 100-200 uncontacted peoples [https://en.wikipedia.org/wiki/Unc

It's hard to have a discussion about this in the open: many EAs (and presumably some non-EAs) with biosecurity expertise strongly believe this is too dangerous a topic to discuss in detail publicly, because of information hazards and related issues.

Speaking for myself, I briefly looked into the theory of information hazards, as well as thought through some of the empirical consequences, and my personal view is that while the costs of having public dialogue about various xrisk stuff (including biorisk) are likely underestimated, the ... (read more)

Imagine you are in charge of running the EA Forum/LessWrong Petrov Day game. What would you do?

Upvote answers you agree with and add your own.

I think it's cool that the forum runs big events like this and I've enjoyed this one. Thanks to the team at CEA. I think it's fun to imagine what each of us would do if we were in charge. 

This year's game is described here.

Here a user asks for clarification on the purpose of the game.

I previously pointed out my issues with the structural dissimilarities between Petrov Day celebrations and what Petrov actually faced here, here, and here (2 years ago). I personally still enjoyed the game as is. However, I'm open to the idea that future Petrov Days should look radically different, and perhaps have no gamifying element at all.

But I think if we want a game that honestly reflects the structure of the decision Petrov faced that day, we probably want something like:

1. Petrov clearly has strong incentives and social pressures to push the ... (read more)