All of Fods12's Comments + Replies

I think it is appropriate for the movement to reflect at this time on whether there are systematic problems or failings within the community that might have contributed to this problem. I have publicly argued that there are, and though I might be wrong about that, I do think it's entirely reasonable to explore these issues. I don't think it's reasonable to just continually assert that it was all down to a handful of bad actors and refuse to discuss the possibility of any deeper or broader problems. I like to think that the EA community can learn and grow from this experience.

I disagree that events can't be evidence for or against philosophical positions. If empirical claims about human behaviour or the real-world operation of ethical principles are relevant to the plausibility of competing ethical theories, then I think events can provide evidential value for philosophical positions. Of course that raises a much broader set of issues and doesn't really detract from the main point of this post, but I thought I would push back on that specific aspect.

I love the research-focus of this piece and the lack of waffle. Very impressed.

"Is it really "grossly immoral" to do the same thing in crypto without telling depositors?"
Yes

Great point about ventilation. I am not aware of any evidence that hand sanitisation in particular is merely 'safety theater'. Surface transmission may not be the major method of viral spread, but it still is a method, and hand sanitisation is a very simple intervention. Also, to emphasise something I mentioned in the post, masks are definitely not 'safety theater'. It is good to see that the revised COVID protocol now mentions that mask use will be encouraged and widely available.

I don't understand how Australia's travel policy is relevant. I'm not asking for anything particularly unusual or onerous, I just would expect that a community of effective altruists would follow WHO guidelines regarding methods to reduce the spread of COVID. I honestly don't understand the negative reaction.

Kirsten (3y):
A cynical person might see your post as asking CEA to do extra work for very little potential gain, because most people involved in EA are already pretty careful about Covid. So I guess that's where the negative reaction could be coming from - it sounds like you don't trust individual EAs or the event organizers to e.g. use hand sanitizer unless it's been written down somewhere that people will use hand sanitizer.

Thanks Amy, I think these clarifications significantly improve the policy. I disagree with the decision not to mandate masks, but I understand there will be differences in views there. However, mentioning that they are encouraged may be just as effective at ensuring widespread use. That was part of my original concern: I did not feel this aspect of norm-setting was as evident in the original version of the policy.

It doesn't seem to me this has much relevance to EA.

Buck (4y):

In addition to what Aaron said, I’d guess Scott is responsible for probably 10% of EA recruiting over the last few years.

I'll add some context to clarify to readers why this could be seen as relevant:

Scott Alexander has done a huge amount of writing about effective altruism, including the following posts that many would regard as "classic" (or at least I do):

His most recent reader survey found that 13% of his readers self-identified as being "effective altruists" (this is f

... (read more)

Hi David,

We deliberately only included information that is based on specific empirical evidence, not simply advice or recommendations. If readers of the review wish to incorporate additional information or assumptions in deciding how they will run their groups, then of course they are welcome to do so.

If you have any particular sources or documents outlining what has been effective in London I'd love to see them!

DavidNash (5y):
I guess I have concerns about overvaluing metrics that are easier to collect, which might lead to optimising for the wrong activities. There is the impact report from EA London for 2018.

Hi everyone, thanks for your comments. I'm not much for debating in comments, but if you would like to discuss anything further with me or have any questions, please feel free to send me a message.

I just wanted to make one clarification that I feel didn't come across strongly in the original post. Namely, I don't think it's a bad thing that EA is an ideology. I do personally disagree with some commonly believed assumptions or methodological preferences etc., but I think the fact that EA itself is an ideology is a good thing, because it gives ... (read more)

People who aren't "cool with utilitarianism / statistics / etc" already largely self-select out of EA. I think my post articulates some of the reasons why this is the case.

kbog (5y):
I've met a great number of people in EA who disagree with utilitarianism and many people who aren't particularly statistically minded. Of course it is not equal to the base rates of the population, but I don't really see philosophically dissecting moderate differences as productive for the goal of increasing movement growth. If you're interested in ethnologies, sociology, case studies, etc - then consider how other movements have effectively overcome similar issues. For instance, the contemporary American progressive political movement is heavily driven by middle and upper class whites, and faces dissent from substantial portions of the racial minority and female identities. Yet it has been very effective in seizing institutions and public discourse surrounding race and gender issues. Have they accomplished this by critically interrogating themselves about their social appeal? No, they hid such doubts as they focused on hammering home their core message as strongly as possible. If we want to assist movement growth, we need to take off our philosopher hats, and put on our marketer and politician hats. But you didn't write this essay with the framing of "how to increase the uptake of EA among non-mathematical (etc) people" (which would have been very helpful); eschewing that in favor of normative philosophy was your implicit, subjective judgment of which questions are most worth asking and answering.

Thanks for the comment!

I agree that the probabilities matter, but then it comes to a question of how these are assessed and weighed against each other. On this basis, I don't think it has been established that AGI safety research has strong claims to higher overall EV than other such potential mugging causes.

Regarding the Dutch book issue, I don't really agree with the argument that 'we may as well go with' EV because it avoids these cases. Many people would argue that the limitations of the EV approach, such as having to give a precis... (read more)

Hi Zeke,

I give some reasons here why I think that such work won't be very effective, namely that I don't see how one can achieve sufficient understanding to control a technology without also attaining sufficient understanding to build that technology. Of course that isn't a decisive argument so there's room for disagreement here.

Hi Zeke!

Thanks for the link about the Fermi paradox. Obviously I could not hope to address all arguments about this issue in my critique here. All I meant to establish is that Bostrom's argument does rely on particular views about the resolution of that paradox.

You say 'it is tautologically true that agents are motivated against changing their final goals, this is just not possible to dispute'. Respectfully I just don't agree. It all hinges on what is meant by 'motivation' and 'final goal'. You also say " it jus... (read more)

Hi rohinmshah, I agree that our current methods for building AI do involve maximising particular functions and have nothing to do with common sense. The problem with extrapolating this to AGI is that 1) these sorts of techniques have been applied for decades and have never achieved anything close to human-level AI (of course that's not proof they never can, but I am quite skeptical, and Bostrom doesn't really make the case that such techniques are likely to lead to human-level AI), and 2) as I argue in part 2 of my critique, other parts of Bostrom's argument rely upon much broader conceptions of intelligence that would entail the AI having common sense.

Rohin Shah (5y):
We also didn't have the vast amounts of compute that we have today. My claim is that you can write a program that "knows" about common sense, but still chooses actions by maximizing a function, in which case it's going to interpret that function literally and not through the lens of common sense. There is currently no way that the "choose actions" part gets routed through the "common sense" part the way it does in humans. I definitely agree that we should try to build an AI system which does interpret goals using common sense -- but we don't know how to do that yet, and that is one of the approaches that AI safety is considering. I agree with the prediction that AGI systems will interpret goals with common sense, but that's because I expect that we humans will put in the work to figure out how to build such systems, not because any AGI system that has the ability to use common sense will necessarily apply that ability to interpreting its goals. If we found out today that someone created our world + evolution in order to create organisms that maximize reproductive fitness, I don't think we'd start interpreting our sex drive using "common sense" and stop using birth control so that we more effectively achieved the original goal we were meant to perform.

Thanks for these links, this is very useful material!

Hi Denkenberger, thanks for engaging!

Bostrom mentions this scenario in his book, and although I didn't discuss it directly, I do believe I address the key issues in my piece above. In particular, the amount of protein one can receive in the mail in a few days is small, and to achieve its goal of world domination an AI would need large quantities of such materials to produce the weapons, technology, or other infrastructure needed to compete with world governments and militaries. If the AI chose to produce the protein itself, whi... (read more)

Denkenberger (5y):
Let's say they only mail you as much protein as one full human genome. Then the self-replicating nanotech it builds could consume biomass around it and concentrate uranium (there is a lot in the ocean, e.g.). Then, since I believe the ideal doubling time is around 100 seconds, it would take about 2 hours to get 1 million intercontinental ballistic missiles. That is probably optimistic, but I think days is reasonable - no lawyers required.
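A rough back-of-the-envelope check of the implied arithmetic (the seed mass and per-missile mass below are illustrative assumptions, not figures given in the comment):

```latex
% Assume a seed of a few picograms (roughly the mass of one human genome's
% worth of material) and ~35-tonne missiles, so 10^6 ICBMs is about 3.5e13 g.
\[
n \approx \log_2\!\frac{3.5\times10^{13}\,\mathrm{g}}{3.5\times10^{-12}\,\mathrm{g}} \approx 83 \text{ doublings},
\qquad
t \approx 83 \times 100\,\mathrm{s} \approx 2.3\ \text{hours}.
\]
```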

Thanks for your thoughts. Regarding spreading my argument across 5 posts, I did this in part because I thought connected sequences of posts were encouraged?

Regarding the single quantity issue, I don't think it is a red herring, because if there are multiple distinct quantities then the original argument for self-sustaining rapid growth becomes significantly weaker (see my responses to Flodorner and Lukas for more on this).

You say "Might the same thing be true of AI -- that a few factors really do allow for drastic improvements in problem-solving... (read more)

Aaron Gertler (5y):
Connected sequences of posts are definitely encouraged, as they are sometimes the best way to present an extensive argument. However, I'd generally recommend that someone make one post over two short posts if they could reasonably fit their content into one post, because that makes discussion easier. In this case, I think the content could have been fit into fewer posts (not just one, but fewer than five) had the organization system been a bit different, but this isn't meant to be a strong criticism -- you may well have chosen the best way to sort your arguments. The critique I'm most sure about is that your section on "the nature of intelligence" could have benefited from being broken down a bit more, with more subheadings and/or other language meant to guide readers through the argument (similarly to the way you presented Bostrom's argument in the form of a set of premises, which was helpful).

Thanks for your thoughts.

Regarding your first point, I agree that the situation you posit is a possibility, but it isn't something Bostrom talks about (and remember I only focused on what he argued, not other possible expansions of the argument). Also, when we consider the possibility of numerous distinct cognitive abilities it is just as possible that there could be complex interactions which inhibit the growth of particular abilities. There could easily be dozens of separate abilities and the full matrix of interactions becomes very complex. The or... (read more)

Thanks for your thoughts!

1) The idea I'm getting at is that an exponential-type argument of self-improvement ability being proportional to current intelligence doesn't really work if there are multiple distinct and separate cognitive abilities, because the ability to improve ability X might not be in any clear way related to the current level of X. For example, the ability to design a better chess-playing program might not be in any way related to chess-playing ability, or object recognition performance might not be related to ability to improve this per... (read more)
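To make the contrast concrete, here is a minimal sketch of the two growth regimes (the equations are an illustrative formalisation, not something taken from the original posts):

```latex
% Single-quantity case: if the rate of improvement is proportional to the
% current level of intelligence, growth is exponential.
\[
\frac{dI}{dt} = kI \;\Longrightarrow\; I(t) = I_0 e^{kt}.
\]
% Multi-ability case: if improving ability X depends on a different ability Y
% that is not itself growing, X only grows linearly and the feedback loop breaks.
\[
\frac{dX}{dt} = kY, \quad Y \text{ constant} \;\Longrightarrow\; X(t) = X_0 + kYt.
\]
```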