Re exercise: I worry that by putting myself in a catabolic state (by exercising particularly hard), I temporarily increase my risk. Also by being at the gym around sweaty strangers. Is this worry justified?
I like this model but I think a more interesting example can be made with different variables.
Imagine x and y are actually both good things. You could then claim that a common pattern is for people to push back and forth between x and y. But meanwhile, once you add a third variable z, we may not be at the frontier at all. So let's work on z instead!
In that sense, maybe we are never truly at the frontier, all variables considered.
Related to this line of thinking: affordance widths
If you take this model a step further, it suggests working on whatever the most tractable problem is that others are spending resources on, regardless of its impact, because that will maximally free up energy for other causes.
Sounds like something someone should simulate to see if this effect is strong enough to take into account.
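As a rough sketch of what such a simulation might look like (everything here is invented for illustration: the problem names, impacts, costs, and committed-effort numbers are made up, and "solving a problem frees the effort others had committed to it" is the modeling assumption being tested):

```python
# Toy model: each problem has an impact (counted only if fully solved),
# a remaining cost to solve, and the effort other actors currently have
# committed to it. Solving a problem frees that committed effort, which
# then flows to the remaining problems in impact-per-cost order.
# All numbers are arbitrary illustrations, not estimates.
problems = {
    "A": {"impact": 15.0, "cost": 9.0, "committed": 1.0},  # high impact, hard
    "B": {"impact": 2.0,  "cost": 2.0, "committed": 6.0},  # tractable, crowded
    "C": {"impact": 10.0, "cost": 8.0, "committed": 0.0},
}

def outcome(my_budget, target):
    """Solve `target` if affordable, then spend the freed-up committed
    effort (plus leftover budget) greedily on the remaining problems."""
    total, freed = 0.0, my_budget
    remaining = dict(problems)
    t = remaining.pop(target)
    if my_budget >= t["cost"]:
        total += t["impact"]
        freed = my_budget - t["cost"] + t["committed"]
    # Spend freed effort on remaining problems, best impact/cost first.
    for name, p in sorted(remaining.items(),
                          key=lambda kv: kv[1]["impact"] / kv[1]["cost"],
                          reverse=True):
        spend = min(freed, p["cost"])
        if spend == p["cost"]:  # partial progress yields nothing here
            total += p["impact"]
        freed -= spend
    return total

print(outcome(2.0, "B"))  # solve the tractable, crowded problem first
print(outcome(2.0, "A"))  # chip away at the high-impact problem instead
```

Under these particular numbers the tractability-first strategy wins, but only because the freed-up effort is assumed to reallocate perfectly; the interesting question is how robust that is once you add friction, partial progress, and many actors.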
[Our] research group is investigating the most promising giving opportunities among mental health interventions in lower and middle-income countries.
Any reason why you're focusing on interventions that target mental health directly and explicitly, instead of any intervention that might increase happiness indirectly (like bednets)?
Can we come up with a list of existing pieces of art that come close to this? I don't expect good ideas to come from first principles, but there might be some type of art out there that is non-cringy and conveys elements of EA thinking properly.
I'll start with Schindler's List, and especially this scene, where the protagonist breaks down while calculating just how many more lives he could have saved if he had sold his car, his jewelry, etc.
Okay, you've convinced me that a US-based EA organisation should consider raising their wages to attract top talent.
This data does make me doubt the wisdom of basing non-local activities in the US, but that is another matter.
It does provide clarity, and I can imagine that there are unfortunate cases where those entry level salaries aren't enough.
As I said elsewhere in this thread, I think this problem would be best resolved simply by asking how much an applicant needs, instead of raising wages across the board. The latter would cause all kinds of problems: it would worsen the already latent center/periphery divide in EA by increasing inequality, it would make it harder for new organisations to compete, it would reduce the net number of people we can employ, etc.
But I could be wrong, and I sense that some of my thoughts might be ideologically tainted. If you feel the urge to point me at some econ 101, please do.
30 was just an arbitrary number. Is London still hard to live in on 60? Note that the suggestion is to raise salaries from 75k to 100k. I can't imagine many cases where 75k is prohibitive, except for those who feel a need to be competitive with their peers in industry (which, fwiw, is not something I outright disapprove of).
We should probably operationalize this argument with actual data instead of reasoning from availability.
Given the numbers that we have in mind, these examples are all very specific to the US.
Medical expenses don't get much past $2k per year in most European countries. The only place where the cost of living is prohibitively high past a ~$30k income is San Francisco.
I'm not arguing against the idea that there exist people who should be given the $150k needed to unlock their talents. I'm arguing that this group might be very small, and concentrated in your bubble.
I think that's the crux of the argument. If a majority of senior people needed $150k to get by, I'd agree that that should be the wage you offer. If these people make up just 1% of the population (which seems true to me), offering $150k to everyone else is just going to cause a lot of subtle cultural damage.
> a lot of resentment would emerge
To the extent that this would cause resentment, I'd interpret that as a perception of a higher counterfactual, which means that the execution wasn't done well.