Ozzie Gooen

8714 karma · Joined · Berkeley, CA, USA


I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.


Ambitious Altruistic Software Efforts


Topic contributions

Some quick points:
1. Thanks for doing this replication! I find the data pretty interesting.
2. I think my main finding here is that the "giving money to those who are the least happy, conditional on being poor" seems much more effective than giving to those who are more happy. Or, the 15 percentile slopes are far higher than the other slopes, below 50k, and this seems more likely to be statistically significant than other outcomes.

I'm really curious why this is. The effect here seems much larger than I would have imagined. Maybe something is going on like, "These very unhappy poor people had expectations of having more money, so they are both particularly miserable, and money is particularly useful to them."

In theory there could be policy proposals here, but they do seem tricky. A naive one would be, "give money first to the poorest and saddest," but I'm sure you can do better.  

3. From quickly looking at these graphs, I'm skeptical of what you can really take away past the £50k mark. There seems to be a lot of randomness here, and the £50k threshold seems arbitrary. I'd also flag that it seems weird to me to extend the red lines so far to the left, given how few data points there are below ~£3k. I'm very paranoid about outliers here.

4. Instead of a simple linear interpolation split into two sections, I'd be excited about other statistical approaches you could try. Maybe this could be modeled as a Gaussian process, or estimated using Bayesian techniques. (I realize this could be much more work, though.)
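To illustrate the kind of thing I mean: below is a minimal sketch of Gaussian-process regression in place of a two-piece linear fit. The data here is entirely made up (a stand-in for satisfaction vs. log-income), and the kernel choice and noise level are illustrative assumptions, not a claim about what would work best on the real dataset.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.5):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """Posterior mean and std of a zero-mean GP (after centering y)."""
    y_mean = y_train.mean()
    y = y_train - y_mean
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha + y_mean
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Hypothetical stand-in data: satisfaction vs. log10(income in pounds).
rng = np.random.default_rng(0)
log_income = np.sort(rng.uniform(3.0, 5.2, 80))    # ~£1k to ~£160k
satisfaction = 2.0 + 1.2 * log_income + rng.normal(0, 0.3, 80)

grid = np.linspace(3.0, 5.2, 100)
mean, std = gp_posterior(log_income, satisfaction, grid)
```

The nice property is that you get a smooth curve with uncertainty bands, so sparse regions (like below ~£3k) show up as wide intervals rather than an overconfident extrapolated line.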

Thanks for the context!

Obvious flag that this still seems very sketchy. "the easiest way to do that due to our structure was to put it in Sam's name"? Given all the red flags that this drew, both publicly and within the board, it seems hard for me to believe that this was done solely "to make things go quickly and smoothly."

I remember Sam Bankman-Fried used a similar argument around registering Alameda; in that case, I believe it later gave him a lot more power.

Thanks for that explanation.

>I've talked to a lot of suffering-focused EAs. Of the people who feel strongly about rejecting the repugnant conclusion in population ethics, at best only half feel that aggregation is altogether questionable.

I think this is basically agreeing with my point on "person-affecting views seem fairly orthogonal to the Repugnant Conclusion specifically", in that it's possible to have any combination.

That said, you do make it sound like suffering-focused people have a lot of thoughtful and specific views on this topic.

My naive guess would have been that many suffering-focused total utilitarians would simply have a far higher bar for what the utility baseline is than, say, classical total utilitarians. So in some cases, perhaps they would consider most groups of "a few people living 'positive' lives" to still be net-suffering, and would therefore just straightforwardly prefer many options with fewer people. But I'd also assume that in this theory, the repugnant conclusion would basically not be an issue anyway.

I realize that this wasn't clear in my post, but when I wrote it, it wasn't with suffering-focused people in mind. My impression is that the vast majority of people worried about the Repugnant Conclusion are not suffering focused, and would have different thoughts on this topic and counterarguments. I think I'm fine not arguing against the suffering-focused people on this topic, like the ones you've mentioned, because it seems like they're presenting different arguments than the main ones I disagree with. 

First, my current stance on moral realism is something like, "Upon a whole lot of reflection, I expect that I, and many other like-minded people, will reject a lot of hypotheses about morality as very unlikely to be meaningful, but we might still have some credence in some, at least in terms of how we should optimize our decisions going forward."

That said, I think this question can get into pretty abstract meta-ethics.

I think it's fine for us to make a lot of moral statements like, "I think racism is generally pretty bad" or "I don't see any reasonable evidence for rejecting the importance of animals vs. humans", and I think the statements in question are similar to those.

(Person-affecting views also typically give up transitivity, the independence of irrelevant alternatives or completeness/full comparability.)

These views seem quite strange to me. I'd be curious to understand who these people are that believe this. Are these views common among groups of Effective Altruists, or philosophers, or perhaps other groups? 

> I'm curious about what you mean by not addressing the actual question.

I just meant that my impression was that person-affecting views seem fairly orthogonal to the Repugnant Conclusion specifically. I imagine that many people with person-affecting views would agree with this. That is, I assume it's very possible to hold any combination of [strongly caring about the Repugnant Conclusion] or [not caring about it] with [having person-affecting views] or [not having them].

The (very briefly explained) example I mentioned is meant as something like:
Say there's a trolley problem. You could either accept scenario (A), where 100 people with happy lives are saved, or (B), where 10,000 people with sort-of-decent lives are saved.

My guess was that this would still be an issue in many person-affecting views (I might well be wrong here though, feel free to correct me!). To me, this question is functionally equivalent to the Repugnant Conclusion.  

Your examples with aggregation also seem very similar.

Thanks for investigating!

From my understanding of boards and governance structures, I think that few are actually very effective, and it's often very difficult to tell this from outside the organization. 

So, I think that the prior should be to expect these governance structures to be quite mediocre, especially in extreme cases, and wait for a significant amount of evidence otherwise. 

I think some people think, "Sure, but it's quite hard to provide a lot of public evidence, so instead we should give these groups the benefit of the doubt." I don't think this makes sense as an epistemic process.

If the prior is bad, then you should expect it to be bad. If it's really difficult to convince a good epistemic process otherwise, don't accept a worse epistemic process in order to make it seem "more fair for the evaluee". 

That's interesting to note, thanks!

At the same time, I'm not sure what to make of this, from Sam's perspective. "Playing along with a joke?" So a non-trivial amount of his public communication is really him just joking, without the audience knowing it? That makes his other communication even harder to trust. 

I'm not sure how much I buy the "joke" argument. It's something I'm used to trolls using as a defense.
"*obviously* I was just joking when I made controversial statement X publicly, and it got a lot of hype."
I'm having trouble imagining any high-up management or PR strategy signing off on this.
"So, you were just accused of a major scandal. We suggest that you pretend that it's true for a while. Then after a few weeks or so, tell everyone that you were joking about it."

I think that higher-order markets definitely make things more complicated, in part by creating feedback loops and couplings that are difficult to predict.

That said, there are definitely a few ways in which higher-order markets could potentially make markets more reliable.

My guess is that useful higher-order markets will take a lot of experimentation, but with time, we'll learn techniques that are more useful than harmful. 

You can imagine strategies like:
"There's just one question. However, people get paid out over time if the future aggregate agrees with their earlier forecasts. These payments can trigger at arbitrary times, and can be flexible about how far back the rewarded forecasts go."

The effect is very similar to doing it by formally having separate questions.

(I'm sure many would consider this a minor difference)
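A rough sketch of the payout rule described above: one question, with forecasts rewarded later based on agreement with the future aggregate, weighted toward forecasts made further back. All names, weights, and the closeness score here are illustrative assumptions, not a real market's mechanism.

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    user: str
    time: float   # when the forecast was made
    prob: float   # forecasted probability at that time

def payouts(forecasts, aggregate_prob, eval_time, horizon=10.0, budget=100.0):
    """Split `budget` among forecasts made before `eval_time`.

    Each forecast scores by closeness to the current aggregate, weighted
    so that older forecasts (up to `horizon` back) earn more for being right.
    """
    scores = {}
    for f in forecasts:
        if f.time >= eval_time:
            continue
        age = min(eval_time - f.time, horizon)
        closeness = 1.0 - abs(f.prob - aggregate_prob)  # in [0, 1]
        scores[f.user] = scores.get(f.user, 0.0) + closeness * (age / horizon)
    total = sum(scores.values())
    if total == 0:
        return {}
    return {user: budget * s / total for user, s in scores.items()}

# Hypothetical example: alice forecast 0.8 early, bob forecast 0.4 later.
fs = [Forecast("alice", 0.0, 0.8), Forecast("bob", 5.0, 0.4)]
result = payouts(fs, aggregate_prob=0.75, eval_time=10.0)
```

With the aggregate at 0.75, alice earns more than bob: she was both closer and earlier. Since payouts can be triggered at any `eval_time`, this behaves much like having separate dated questions, without formally creating them.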
