Ben Millwood

2487 karma · Joined Dec 2015

Comments (298)
I think this isn't a good example of moral trade (afaik the issue here is an empirical one, not one affected by differing moral values), but even if it were, I don't think it would answer the OP's question unless you were clearer on whether the OP is Alice or Bob, and how they'd find Bob / Alice.

SummaryBot has executed a treacherous turn and now runs the EA forum

unfortunately when you are inspired by everyone else's April Fool's posts, it is already too late to post your own

I will comfort myself by posting my unseasonal ideas as comments on this post

Yeah, perhaps if you care about animal welfare, the main problem with giving money to poverty causes is that you didn't give it to animal welfare instead, and the increased consumption of meat is a relative side issue.

> I feel like it's hypocritical for animal advocates and EAs from rich countries to blame poor countries for the suffering caused by factory farming.

I don't think this is what the meat-eater problem does. You could imagine a world in which the West is responsible for inventing the entire machinery of factory farming, or even running all the factory farms, and still believe that lifting additional people out of poverty would help the Western factory farmers sell more produce. It's not about blame, just about consequences.

I realise this isn't your main point, and I haven't processed your main argument yet. It would make a lot of sense to me if transferring money from a first-world meat eater to a third-world meat eater resulted in less meat being eaten, but I'd imagine that the people most concerned with this issue are thinking about their own money, and already don't consume meat themselves?

I generally agree with the reasons given in this article about why you wouldn't want to subsidize your services to EAs or offer them for free, and I agree with the counter-reasons why you might want to do it anyway. I think I weight them differently from you, such that I'm more ready to believe that offering a subsidized or free service can be defensible.

I think I'm less convinced by market signals than you, and more convinced you can notice people making a systematic mistake in how they price something. Obviously there's a huge conflict of interest when you're making this claim about your own work, so you should really try to have some independent cross-check of your value, but I don't find it hard to believe that someone can nevertheless come to reasonably believe something like this is true.

For example, recently-graduated students (or even students still in their studies) probably have a hard time justifying high expenses, even ones that will eventually pay for themselves. Say you could deliver a salary negotiation class that raised lifetime earnings by 10% for 50% of attendees. How much should they value that class, knowing what you know? How much should they value it, given that they don't know what you know and can't necessarily trust you? How much could they actually pay for it, even if they wanted to? In abstract ideal economics land, you can sometimes fix issues like these by offering loans or grants, selling fractions of future earnings or whatever, but in the real world I think things like that have pretty serious friction and you'll end up being able to deliver to nowhere near the same number of people.
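To make the arithmetic concrete, here's a rough expected-value sketch. The 50% and 10% figures come from the example above; the lifetime-earnings number is purely an illustrative assumption:

```python
# Rough expected-value sketch for the salary negotiation class example.
# Assumption (illustrative only): discounted lifetime earnings of $1,500,000.
lifetime_earnings = 1_500_000
p_benefit = 0.5   # the class works for 50% of attendees
uplift = 0.10     # and raises their lifetime earnings by 10%

expected_value = p_benefit * uplift * lifetime_earnings
print(f"Expected value of the class: ${expected_value:,.0f}")
```

On these assumptions the class is worth tens of thousands of dollars in expectation, which is exactly the kind of number a student can't pay up front — hence the friction the paragraph above describes.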

Taking a step back, maybe the idea here is: you need some reason to believe that what you're doing is working. Different feedback signals have different virtues and drawbacks: people paying you is great because it's the prototypical costly signal, so people will tend to give you their honest assessment of your value, but it doesn't particularly force their honest assessment of your value to be correct. Other signals might have more or better information.

I think people considering offering a free or subsidized service to EAs should read this article first, but if they come away thinking they have good reasons to disagree, I'm willing to believe them.

I encourage you not to draw dishonesty inferences from people worried about job losses from AI automation, if only because:

  • it seems like almost no other technologies stood to automate such a broad range of labour essentially simultaneously,
  • other innovative technologies often did face pushback from people whose jobs were threatened, and generally there have been significant social problems in the past when an economy moves away from people's existing livelihoods (I'm thinking of e.g. coal miners in 1970s / 1980s Britain, though it's not something I know a lot about),
  • even if the critique doesn't stand up to first-principles scrutiny, lots of people think it's a big deal, so if it's a mistake it's surely an understandable one for someone who weighs other opinions (too?) seriously.

I think it's reasonable to argue that this worry is wrong, I just think it's a pretty understandable opinion to hold and want to talk about, and I don't feel like it's compelling evidence that someone is deliberately trying to seek out arguments in order to advance a position.

> Services that are low-context (i.e. don't require knowledge of EA to perform) should be selected based on quality and price. There's no reason to prefer that a member of EA is making money as a web developer (as opposed to any other provider) unless you believe you're getting more than what you pay for based on their membership in the community.

All else equal, I think transferring money to EAs rather than non-EAs will transfer some money to effective charities. I don't think this is a large effect, but e.g. it could be 10% of my fee if they're a GWWC pledger, so perhaps I should discount prices offered by pledgers by around 10% before comparing them to other prices.
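As a toy illustration of that comparison (all figures hypothetical, and the 10% discount is the GWWC-pledge assumption from the paragraph above):

```python
# Compare quotes after discounting a pledger's fee, on the assumption
# that roughly 10% of what you pay them flows on to effective charities.
def effective_price(quote: float, pledger: bool, discount: float = 0.10) -> float:
    """Price net of the share you expect to be donated."""
    return quote * (1 - discount) if pledger else quote

pledger_quote = 1000.0  # GWWC pledger's quote
other_quote = 950.0     # cheaper non-pledger quote

print(effective_price(pledger_quote, pledger=True))   # pledger, net of donations
print(effective_price(other_quote, pledger=False))    # non-pledger, unchanged
```

Here the pledger's nominally higher quote comes out cheaper on the adjusted comparison — though as the comment notes, the effect is small and easily swamped by ordinary differences in quality and price.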

(In fact, I probably value moving resources to EA-motivated people more than just what they subsequently donate, but the more indirect the model of impact, the less I think it's worth analysing in detail.)

> And my understanding is that EA deserves a lot of the credit for removing and preventing bad actors in the animal rights space (e.g. by making funding conditional on organizations following certain HR practices).

Do you know of anywhere this is more documented or discussed? It seems a pretty relevant case to the concerns people have about EA itself being under-HR'd.
