Milan Weibel

Organizer @ UC Chile EA Student Chapter
Pursuing an undergraduate degree

Participation
2

  • Attended an EAGx conference
  • Attended more than three meetings with a local EA group

Comments
4

Interesting. I agree that second- or third-order effects, such as the good done later by people you have helped, are an important consideration. Maximising such effects could be an underexplored effective giving strategy, and the organization you refer to looks like a group of people trying to do that. However, to really assess an organization's effectiveness, especially if it focuses on educational or social interventions, some empirical evidence is needed.

  • Does SENG follow up on the outcomes of aid recipients?
    • How do they compare with those of similar people in similar situations who didn't receive help?
  • What programs does SENG run?
    • How much does each cost per recipient helped?

Having thought more about this, I suppose you can divide opinions into two clusters and be pointing at something real. That's because people's views on different aspects of the issue correlate, often in ways that make sense. For instance, people who think AGI will be achieved by scaling up current (or very similar to current) neural net architectures are more excited about practical alignment research on existing models.

However, such clusters would be quite broad. My main worry is that identifying two particular points as prototypical of them would narrow their range. People would tend to let their opinions drift closer to the point closest to them. This need not be caused by tribal dynamics. It could be something as simple as availability bias. This narrowing of the clusters would likely be harmful, because the AI safety field is quite new and we've still got exploring to do. Another risk is that we may become too focused on the line between the two points, neglecting other potentially more worthwhile axes of variation.

If I were to divide current opinions into two clusters, I think Scott's two points would in fact fall in different clusters; they would probably even land not far from their centers of mass. However, I strongly object to pretending the clusters are points and then getting tribal about it. I think labeling clusters could be useful, if we made it clear that they are still clusters.
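To make the "clusters, not points" worry concrete, here is a minimal Python sketch with entirely made-up opinion vectors (the number of people, questions, and camps are all assumptions for illustration). Even when two clusters are clearly separable, the spread within each cluster can be comparable to the distance between their centroids, which is exactly the variation that gets thrown away if we treat each cluster as a single prototypical point:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: 200 people answering 8 agree/disagree-style questions,
# encoded as values in [-1, 1]. Two loose camps with plenty of internal variation.
rng = np.random.default_rng(0)
camp_a = rng.normal(loc=0.5, scale=0.4, size=(100, 8))
camp_b = rng.normal(loc=-0.5, scale=0.4, size=(100, 8))
opinions = np.clip(np.vstack([camp_a, camp_b]), -1, 1)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(opinions)

# Within-cluster spread: average distance from each person to their cluster's centroid.
centroids = kmeans.cluster_centers_[kmeans.labels_]
spread = np.linalg.norm(opinions - centroids, axis=1).mean()
between = np.linalg.norm(kmeans.cluster_centers_[0] - kmeans.cluster_centers_[1])

print(f"between-centroid distance: {between:.2f}")
print(f"mean within-cluster spread: {spread:.2f}")
# If the spread is comparable to the centroid separation, collapsing each
# cluster to a single labeled point discards most of the real variation.
```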

On paths to understanding AI risk without accepting weird arguments, getting people worried about ML unexplainability may be worth exploring, though I suspect most people would think you were pointing at algorithmic bias and the like.

As a factual question, I'm not sure whether people's opinions on the shape of AI risk can be divided into two distinct clusters, or even distributed along a spectrum (that is, whether factor analysis on the points of opinion-space would find a good general factor), though I suspect this may be weakly true. For instance, I found myself agreeing with six of the statements on one side of Scott's dichotomy and two on the other.
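As a rough illustration of what that test could look like, here is a small sketch with entirely made-up survey responses; it uses the variance explained by the first principal component as a quick proxy for how well a single general factor describes the opinion space. The data, scales, and the use of PCA rather than a full factor-analysis fit are all assumptions for illustration, not anything actually measured:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical responses: 200 people rating 10 statements about AI risk
# on a rough -2..2 agreement scale, generated from a single latent factor plus noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))               # the underlying "general factor"
loadings = rng.uniform(0.3, 1.0, size=(1, 10))   # how strongly each statement tracks it
noise = rng.normal(scale=0.8, size=(200, 10))
responses = latent @ loadings + noise

# Share of variance captured by the first component: a proxy for how one-dimensional
# the opinion space is.
pca = PCA().fit(responses)
general_factor_share = pca.explained_variance_ratio_[0]
print(f"variance explained by first factor: {general_factor_share:.0%}")
# A large share would suggest opinions roughly line up along one spectrum;
# a small share would mean the interesting variation is multi-dimensional.
```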

As a public epistemic health question, I think issuing binary labels is harmful for further progress in the field, especially if they borrow terminology from religious groups and the author identifies with one of the proposed camps in the same post in which he raises the distinction. See the comment by xarkn on LW.

Even if the range of current opinions could be well-described by a single general factor, we should certainly use less divisive terminology for such a spectrum and be mindful that truth may well lie orthogonal to it.

Un equilibrio inadecuado (Spotify - Apple Podcasts - Google Podcasts)

Interviews in Spanish on EA topics. I particularly enjoyed the episode with Andrés Gómez Emilsson from Qualia Research Institute. Sadly, no new content since October 2021.