Jay Bailey

362

Brisbane QLD, Australia

Joined Aug 2021

Bio

Jay is a software engineer from Brisbane, Australia who is looking to move into more direct EA work. He currently facilitates Intro to EA courses, and is looking for opportunities to work directly for an EA-aligned organisation in EA movement building, global health, or AI safety. He is a signatory of the Giving What We Can pledge.

Comments
71

Paula Amato's Shortform

It would be nice to have some specific examples of these things. This particular criticism, in my view, is just an attempt to associate EA with Bad Things so that people also think of EA as a Bad Thing. There are no actual arguments in this statement - there are no specific claims to oppose. (Except that EA is incredibly well-funded - which is true, but also not inherently good or bad, and therefore does not need to be defended.)

If I'm being charitable - many arguments are like this, especially when you only have 140 characters. This is a bad argument, but it's far from a uniquely bad argument. The burden of proof is on Timnit to provide evidence for these accusations, but they may have done this somewhere else, just not in this tweet. (I assume it's a tweet because of its length, and, let's face it, its dismissiveness. Twitter is known for such things.)

If I'm not being charitable - the point of a vague argument such as the above is that it places the burden of proof on the accused. The defense being asked for is for EAs to present specific examples of actions that EA is taking that prove it isn't "colonial" or "white savior"-esque. This is a losing game from the start, because the terms are vague enough that you can always argue that a given action proves nothing or isn't good enough, and that someone could always be doing more to decolonise their thoughts and actions. The only winning game is not to play.

Which interpretation is correct? I don't know enough about Timnit Gebru to say. I'd say that if Timnit is known for presenting nuanced, concrete arguments in other venues or on other topics, this argument is probably a casualty of Twitter, and the charitable approach is appropriate here.

Book a chat with an EA professional

"<topic> 101" generally means beginner or introductory questions, taken from some universities where a class like MATH101 would be the first and most basic mathematics class in a degree. So, "EA 101 questions" here means basic or introductory EA questions.

Jay Bailey's Shortform

I notice some parallels between the old essay "Transhumanism as Simplified Humanism" (https://www.lesswrong.com/posts/Aud7CL7uhz55KL8jG/transhumanism-as-simplified-humanism) and current criticisms of EA - that the idea of "doing the most good possible" is obvious and has been thought of many times before. Really, in a way, this is just common sense. And yet:

Then why have a complicated special name like “transhumanism”? For the same reason that “scientific method” or “secular humanism” have complicated special names. If you take common sense and rigorously apply it, through multiple inferential steps, to areas outside everyday experience, successfully avoiding many possible distractions and tempting mistakes along the way, then it often ends up as a minority position and people give it a special name.

I feel that EA is like this. If you take a common sense idea like "Do the most good possible", and actually really think about how to do that, and actively compare different things you could be doing - not just the immediate Overton window of what your friends or your colleagues are doing - and then make a serious commitment of resources to make that answer happen, then it ends up as a minority position and people give it a special name. 

By how much should Meta's BlenderBot being really bad cause me to update on how justifiable it is for OpenAI and DeepMind to be making significant progress on AI capabilities?

Not quite a direct answer to your question, but it is worth noting - not everyone in EA believes that about AI capabilities work. I, for one, believe that working on AI capabilities, especially at a top lab like OpenAI or DeepMind, is a terrible idea and should be front and center on our "List of unethical careers". Working in safety positions at those labs is still highly useful and impactful imo.

Criticism of altruism

I don't agree with most of these points, though I appreciate you writing them up. Here are my thoughts on each of them, in turn:

Altruism implies a naive model of human cognition. I feel like this argument proves too much. If "altruism" is not a good concept because humans are inconsistent, why would "self-interest" be any less vulnerable to this criticism? It seems that you could even-handedly apply this criticism to any concept we might want to maximise, which ends up bringing everything back to neutral anyway.

Altruism as emergent from reward-seeking. This brings up a good point in my opinion, though perhaps not the same point you were making. Specifically, I think altruism is often poorly defined. On some level it's obvious that people are altruistic because of self-interest. But it also seems to me that if your view of what you want the world to look like includes other people's preferences, and you make non-trivial sacrifices (e.g., donating 10%) to meet those preferences, that should certainly count as altruism, even if you're doing it because you want to.

Need for self / other distinction. I'm not actually following this one, so I won't comment on it.

Information asymmetry. Perfectly true - if all humans were roughly equally well-off, the optimal thing to do would be to focus on yourself. However, this is not the case. I may understand more about my preferences than I understand about the preferences of someone in Bangladesh earning $2/day, but I can reasonably predict that a marginal $20 would help them more than it would help me. Thus, it seems totally reasonable that there are ways you can help others even with less information on their internal states.

Game-theoretic perspective. This argument is just confusing to me. Your first sentence says that self-interested agents can co-operate for everyone's benefit, and your second sentence says that altruistic groups may behave suboptimally. Well...so might self-interested agents! "Can" does not mean "will". You've done some sleight of hand here where you say that self-interested agents can sometimes co-ordinate optimally, then you say that altruistic groups do not always co-ordinate optimally, and then use that to imply that self-interested groups are better. You haven't actually shown that self-interested groups are more effective in general, merely that it's possible, in some cases (1 in 10? 1 in 100? 1 in 1000?), for a self-interested group to outperform an altruistic one.

Human nature. Humans aren't hardwired to care about spreadsheets, or to build rockets, or to program computers. One of the greatest things about humans, in my mind, is our ability to transcend our nature. I view evolutionary psychology as a useful field in the same way that sitcoms are useful for romance advice - they give solid advice on what not to do, or what pitfalls to watch out for. I am naturally wired for self-interest...so I should watch out for that.

Also...I'm not sure if we can do this and still keep what makes the movement great. In the end, effective altruism is about trying to improve the world, and that requires thinking beyond oneself, even if that's hard and we're wired to do otherwise. I don't think I'm likely to be convinced that donating 10% of my income to people I'll never see is actually in my own self-interest, and yet I do it anyway. There are absolutely positives to being part of the movement from the point of view of self-interest, and those are good to smuggle along to get your monkey-brain on board. Nevertheless - if you're focused on self-interest, that limits a lot of what you can do to improve the world compared to having that goal directly. So I think altruism is still very important.

EA is Insufficiently Value Neutral in Practice

Agreed entirely. There is a large difference between "We should coexist alongside not maximally effective causes" and "We should coexist alongside causes we actively oppose." I think a good test for this would be:

You have one million dollars, and you can only do one of two things with it - you can donate it to Cause A, or you can set it on fire. Which would you prefer to do?

I think we should be happy to coexist with (and encourage effectiveness for) any cause for which we would choose to donate the money. A longtermist would obviously prefer a million dollars go to animal welfare than be wasted. Given this choice, I'd rather a million dollars go to supporting the arts, feeding local homeless people, or improving my local churches even though I'm not religious. But I wouldn't donate this money to the Effective Nazism idea that other people have mentioned - I'd rather it just be destroyed. Every dollar donated to them would be a net bad for the world in my opinion.

Reflection - Growth and the case against randomista development - EA Forum

I don't see how this solution actually solves any of the problems I brought up in my previous comment, and it raises several further problems of its own.

Reflection - Growth and the case against randomista development - EA Forum

(NOTE: I wrote this response when the post was much shorter and ended at "I do not care about evidence when people are dying.")

 

First off - a linkpost is a link to the exact same post, published somewhere else, rather than to an inspiration or a source like the original "Against RCT" post. That's a small thing.

Secondly - people did think about the kids in the PlayPump story. With the benefit of hindsight, we now know the PlayPumps were a bad idea, but that's not how it seemed at the time. It seemed like the kids would get to play (hence the name) and the village would naturally get water as a result. That's a win-win! No need to take kids out of school, and providing access to clean water would have been a great thing. It didn't work out that way, but the narrative was compelling - evidence about how it actually works is the thing that was missing.

Thirdly, it seems strange to say that you don't care about evidence. You claim:

"With common sense it is obvious that we have to invest billions to build a water public company in Africa and to build the infrastructure to allow every citizen to get fresh and clean water.(this is solve root problem with common sense)"

How would we work out how to achieve this, without using evidence? For that matter, how do we know people in Africa need clean water at all? Sure, it's common knowledge now, but how did the people who originally reported on it find out? Did they close their eyes and think really hard, and then open their eyes and say "I bet there's a place called Africa, and people live there, and they need clean water", or did people actually ask Africans or look at conditions in Africa, and find out what was going on?

Less facetiously, there's a whole bunch of questions that would need to be asked in order to complete this project. Questions like:

Would these countries allow this company to be built?
Who should be in charge of it?
Can we actually provide this infrastructure?
How maintainable is the infrastructure? 
What will the expected costs and benefits actually be?

The lesson of the PlayPumps is that you can't answer all these questions by telling a nice story - you have to actually go out and do the research about how things might go in the real world, and then at least you have a chance of getting it right. The world is complicated - things that seem compelling aren't always possible or useful. The only method we know of that can even somewhat reliably tell the difference is evidence, ideally as empirical (i.e., as close to the source of what's really happening) as possible.

The key point I am trying to convey is not "You can't criticise these things", but rather that if you're going to criticise these things, you need to present a counterargument against the actual reasons EA believes in them. Why do the benefits of evidence not apply here? What method can we use, other than evidence-gathering, to be sure that this project is the best project we could be doing and will actually work as intended?

GLO, a UBI-generating stablecoin that donates all yields to GiveDirectly

Considering that the scale we're talking about probably involves reaching out to non-crypto people, I don't think my question is too basic:

How fast/cheap is "making a crypto transaction" currently? I've heard bad things about how expensive it is but have no idea if that's actually true.

One Million Missing Children

I imagine the financial claim isn't that offering financial support doesn't work, but a claim more like - there aren't enough resources to offer enough financial support to enough people to meaningfully alter the US fertility rate on the basis of this alone.

Like - how much does it cost to raise a child? I've heard $250k, so let's go with that. You don't need to offer the entire amount as financial support, but something like $5k/year seems reasonable. Across 18 years, that's still $90,000. That means that if you give a billion dollars away as financial support, with zero overheads, you've supported the birth of ~11,000 children. This is a rounding error compared to the size of the issue, so I wouldn't see it as "directly moving the needle". To directly move the needle at a cost of $90k/child, you'd need to invest hundreds of billions of dollars. It would probably work effectively, but the resources just aren't there in private philanthropy.
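To make that back-of-envelope arithmetic explicit, here's a rough sketch in Python. The $5k/year, 18 years, and $1 billion figures are just the illustrative assumptions above, not real cost estimates:

```python
# Back-of-envelope: how many births could $1B of direct financial support fund?
# All figures are the illustrative assumptions from the comment above.
support_per_year = 5_000        # USD of support per child per year (assumed)
years_of_support = 18           # years each child receives support (assumed)
budget = 1_000_000_000          # USD of total philanthropic spending, zero overhead (assumed)

cost_per_child = support_per_year * years_of_support   # $90,000 per supported child
children_supported = budget / cost_per_child           # ~11,111 children

print(f"Cost per child: ${cost_per_child:,}")
print(f"Children supported by $1B: ~{children_supported:,.0f}")
```

On these assumptions, even a billion dollars only shifts births by roughly ten thousand, which is why I'd call it a rounding error relative to the US fertility rate.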

By contrast, political advocacy actually could work on the scales that we're talking about.
