Physics graduate and currently a Data Analyst. Enthusiastic about Complexity Science.
Overall, this seems like a weak criticism worded strongly. The opposition here seems to be more to the moniker of Complexity Science and its false claims of novelty than to the actual study of the phenomena that fall under the Complexity Science umbrella. This is analogous to a critique of Machine Learning that reads "ML is just a rebranding of Statistics". Although I agree that it is not novel and there is quite a bit of vagueness in the field, I disagree with the point that Complexity Science has not made progress.
I think the biggest utility of Complexity Science comes from breaking disciplinary silos. Rebranding things as Complexity Science just brings all the ideas on systems from different disciplines together under one roof. If you are a student, you can learn all these phenomena in one course or degree. If you are a professor in a Complexity department, you can work on anything that relates to Complex Systems phenomena. The flip side is that you might end up living in a world of hammers without nails - a bunch of tools without strong domain knowledge in any of the systems you are studying.
My take on Complexity Science is that it is a set of tools to be used in the right context. For your specific context, some, or none, of the tools of Complexity Science may be useful. Where Complexity Science falls apart for me is when it tries to shed all context and generalize to all systems. I think the OP here is trying to stay within context: the post is just saying we can build ABMs to approach some specific EA cause areas. So I am more or less on board with this post.
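To make "building ABMs" concrete, here is a toy sketch of the simplest kind of agent-based model one might start from. Everything here (the contagion rule, the parameters, the well-mixed population) is my own illustrative assumption, not anything from the post:

```python
import random

def run_abm(n_agents=100, n_steps=50, p_spread=0.1, seed=0):
    """Minimal contagion ABM: each step, every current 'adopter' of some
    intervention picks one random agent and converts them with
    probability p_spread. Returns the adopter count after each step."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    adopted[0] = True  # a single initial adopter
    history = []
    for _ in range(n_steps):
        for i in range(n_agents):
            if adopted[i]:
                j = rng.randrange(n_agents)  # well-mixed: any agent reachable
                if rng.random() < p_spread:
                    adopted[j] = True
        history.append(sum(adopted))
    return history

counts = run_abm()
```

The point of even a toy like this is that the interesting behaviour (an S-shaped adoption curve) emerges from local interaction rules rather than being written down as an equation; a real model for an EA cause area would replace the contagion rule with domain-specific behaviour.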
On a final note, I agree with your critique of the abuse of Power Laws. There are too many people who just make a log-log plot, look at the line, and exclaim "Power law!". The Clauset-Shalizi-Newman paper you linked to is the citation classic here. For those who do network theory: instead of trying to prove your degree distribution is a power law, I would recommend doing Graphlet Analysis.
Thanks a lot for posting this! I have the same feeling as finm in that I wanted to write something like this. But even if I had written it, it wouldn't have been as extensive as this one. Wonderfully done!
To add to the pool of resources that the post has already linked to:
The very vague definition of "Cause Area" makes it hard for me to think about meta EA. It feels like GPR is a cause area, and so working on it would be direct impact work, but I am not sure. The same goes for EA Movement building. Also, it starts getting trippy if we claim meta-EA is itself a cause area!
Maybe we can clarify the definition for cause area within this meta EA framework?
Specifics matter. There can be no one discussion norm to get people to be nice to each other.
I think things like discussion norms are highly contextual. The platform on which the discussion is happening, the point being discussed, and the people involved are some of the many factors that could end up mattering. Given these factors, transporting discussion norms from one virtual place to another might not be the right way to think about it.
I think the "EA-like" discussion norm is a function of several things. In addition to the factors mentioned above, the concept of EA itself seems to ask for people to be uncertain and humble.
Consider the following thought experiment - say you took all the same people from the EA Forum and put them all in a Facebook group. Do you think the "EA-like" discussion norms currently here would be maintained? Or imagine putting them all in a forum, not about EA or Philosophy or Sciency stuff. What would happen?
Thanks for this wonderful article! I absolutely agree that it would be highly beneficial to have a community at the intersection of EA and Complexity. I recently participated in an event where I actually found several other EAs interested in Complexity, but unfortunately I couldn't spend enough time networking with them further (I got involved in another project there).
I have also been thinking about how we might use the tools of Complexity to make EA better, although I haven't been able to land on anything concrete. Here are some vague thoughts I have. I am not entirely sure if any of these thoughts are worth pursuing, so tug at these threads at your own peril!:
But these are all mostly at a 'wondering-if' stage and one would definitely need help from cleverer people to actually start some concrete work. So having a community around EA & Complexity would be highly beneficial.
Is a recording of this event available?
Thanks for linking to the podcast! I hadn't listened to this one before and ended up listening to the whole thing and learnt quite a bit.
I just wonder if Ben actually had some means in mind other than evidence and reasoning, though. Do we happen to know what he might be referencing here? I recognize it could just be him being humble and feeling that future generations could come up with something better (like awesome crystal balls :-p). But if something other than evidence and reason actually already exists, I'd find it really important to know.
I both agree and disagree with you.
I don't see how Will's definition allows for debating said ambiguity, though. As I mentioned in my earlier comment, I don't think the definition distinguishes between the two schools of thought enough. As a consequence, I also don't think it shows the ambiguity between them. A conflict (aka ambiguity) requires at least two things, but in my opinion the definition doesn't convincingly show there are two things in the first place.
Thanks for bringing up Will's post! I have now updated the question's description to link to that.
I actually like Will's definition more. The reason is two-fold:
One critique I have of Will's alternative is that the proposed definition doesn't quite distinguish the two schools of thought. To explain my thinking, here is a slightly more visual representation. Let () represent a bucket:
Apologies if that is too nitpicky, but I don't think it is. I think the distinctness of Evidence and Careful Reasoning needs to come out. I guess rephrasing it this way would be better: "Effective altruism attempts to improve the world by the use of experimental evidence and/or theoretical reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding 'the good' in impartial welfarist terms." This rephrasing is inspired by the fact that many of the natural sciences split in two - theory and experiment (like Theoretical Physics and Experimental Physics). We are saying EA is also that way, which I think it is. I think this also adds to the Science-aligned point that Will mentions. (I have edited this to say that I don't think this definition is a good one. See my next comment below.)
The point about "working through what it really means" is very interesting (more on this below). But when I read "high-quality evidence and careful reasoning", it doesn't really engage the curious part of my brain to work out what that means. Those are all words I have heard before, and it feels like standard phrasing. When one isn't encouraged to actually work through the definition, it does feel like it excludes high-variance strategies. I am not sure if you feel this way, but "high-quality evidence" to my brain just says empirical evidence. Maybe that is why I am sensing this exclusion of high-variance strategies.
You are probably right. But I worry whether that is really a good strategy. By not openly saying that we do things we are uncertain about, we could come off as a know-it-all who has it all figured out with evidence! There were some discussions along these lines in another recent post. Maybe having a definition that gives a subtle nod to hits-based giving could help with that?
Your point about 'working through the definition' actually gave me an idea: what if we rephrased it to "high-quality evidence and/or careful reasoning"? That non-standard 'and/or' phrasing sows some curiosity to actually work things out, doesn't it? I am assuming that "high-quality evidence" means empirical evidence (as I already said) and that "careful reasoning" includes Expected Value thinking, making Fermi estimates, and all the other reasoning tools that EAs use. Also, this small phrasing change is not radically different from what we already have, so the cost of changing shouldn't be high. Of course the question is: is it actually that much more effective than what we have? Would love to hear thoughts on that, and of course other suggestions for a better definition...