RE #2, I helped develop CCM as a contract worker (I'm not contracted with RP currently) and I had the same thought while we were working on it. The reason we didn't do it is that implementing good numeric integration is non-trivial and we didn't have the capacity for it.
I ended up implementing analytic and numeric methods in my spare time after CCM launched. (Nobody can tell me I'm wasting my time if they're not paying me!) Doing analytic simplifications was pretty easy; numeric methods were much harder. I put the code in a fork of Squigglepy here: https:/...
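The core of the numeric approach can be sketched with plain numpy, independent of Squigglepy's actual API: sample each input distribution and push the samples through the model, then read off summary statistics. The specific distributions and parameters below are illustrative assumptions, not anything from CCM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo "integration": sample each input distribution and push
# the samples through the model. Here the model is a product of a
# lognormal and a beta, a combination with no simple closed form.
n = 1_000_000
x = rng.lognormal(mean=0.0, sigma=1.0, size=n)
y = rng.beta(2.0, 5.0, size=n)
z = x * y

# Summary statistics of the combined distribution.
mean_z = z.mean()
p95 = np.quantile(z, 0.95)
```

The analytic simplifications are the cases where this sampling step can be skipped entirely (e.g. sums of normals, products of lognormals), which is why they were the easy part.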
Agreed. I disagree with the general practice of capping the probability distribution over animals' sentience at 1x that of humans'. (I wouldn't put much mass above 1x, but it should definitely be more than zero mass.)
It seems to me that the naive way to handle the two envelopes problem (and I've never heard of a way better than the naive way) is to diversify your donations across two possible solutions to the two envelopes problem:
Which would suggest donating half to animal welfare and probably half to global poverty. (If you let moral weights be linear w...
If my goal is to help other people make their donations more effective, and I can either:
I would prefer to do #1 because AMF is >10x better (maybe even >1000x better) than the best art charity. So while in theory I would encourage an art-focused foundation to make more effective donations within their area, I don't think trying to do that would be a good use of my time.
Getting 1.5 points by 2.7x'ing GDP actually sounds like a lot to me? It predicts that the United States should be 1.9 points ahead of China and China should be 2.0 points ahead of Kenya. It's very hard to get a 1.9 point improvement in satisfaction by doing anything.
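The arithmetic behind those gaps is just a log relationship: 1.5 points per 2.7x of GDP implies a gap of 1.5 * ln(ratio) / ln(2.7). The PPP GDP-per-capita figures below are rough illustrative assumptions, but they land close to the gaps above.

```python
import math

# 1.5 satisfaction points per 2.7x of GDP per capita implies:
#   gap = 1.5 * ln(gdp_a / gdp_b) / ln(2.7)
def predicted_gap(gdp_a, gdp_b):
    return 1.5 * math.log(gdp_a / gdp_b) / math.log(2.7)

# Rough PPP GDP per capita figures (illustrative assumptions):
us, china, kenya = 65_000, 18_000, 5_000

us_vs_china = predicted_gap(us, china)      # ~1.9 points
china_vs_kenya = predicted_gap(china, kenya)  # ~1.9 points
```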
IMO if a forecasting org does manage to make money selling predictions to companies, that's a good positive update, but if they fail, that's only a weak negative update—my prior is that the vast majority of companies don't care about getting good predictions even if those predictions would be valuable. (Execs might be exposed as making bad predictions; good predictions should increase the stock price, but individual execs only capture a small % of the upside to the stock price vs. 100% of the downside of looking stupid.)
I was just thinking about this a few days ago when I was flying for the holidays. Outside the plane was a sign that said something like
Warning: Jet fuel emits chemicals that may increase the risk of cancer.
And I was thinking about whether this was a justified double-hedge. The author of that sign has a subjective belief that exposure to those chemicals increases the probability that you get cancer, so you could say "may give you cancer" or "increases the risk of cancer". On the other hand, perhaps the double-hedge is reasonable in cases like this becau...
I may be misinterpreting your argument, but it sounds like it boils down to:
The jump from step 1 to step 2 looks like a mistake to me.
You also seemed to suggest (although I'm not quite sure whether you were actually suggesting this) that if a being cannot in principle describe its qualia, then it does not have qualia. I don't see much reason to believ...
I have only limited resources with which to do good. If I'm not doing good directly through a full-time job, I budget 20% of my income toward doing as much good as possible, and then I don't worry about it after that. If I spend time and money on advocating for a ceasefire, that's time and money that I can't spend on something else.
If you ask me my opinion about whether Israel should attack Gaza, I'd say they shouldn't. But I don't know enough about the issue to say what should be done about it, and I doubt advocacy on this issue would be very effective—"I...
the results here don't depend on actual infinities (infinite universe, infinitely long lives, infinite value)
This seems pretty important to me. You can handwave away standard infinite ethics by positing that everything is finite with 100% certainty, but you can't handwave away the implications of a finite-everywhere distribution with infinite EV.
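A minimal example of "finite everywhere but infinite EV" is the St. Petersburg gamble: payoff 2^n with probability 2^-n. Every individual outcome is finite, yet truncating the gamble at level N gives an expected value of exactly N, which grows without bound.

```python
from fractions import Fraction

# St. Petersburg-style gamble: payoff 2**n with probability 2**-n.
# Each term contributes 2**n * 2**-n = 1 to the expectation, so the
# EV of the gamble truncated at level N is exactly N.
def truncated_ev(N):
    return sum(Fraction(2**n) * Fraction(1, 2**n) for n in range(1, N + 1))

evs = [truncated_ev(N) for N in (10, 20, 40)]  # 10, 20, 40
```

No positing of finiteness helps here: every possible payoff is a finite number, and the divergence lives entirely in the tail of the distribution.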
(Just an offhand thought, I wonder if there's a way to fix infinite-EV distributions by positing that utility is bounded, but that you don't know what the bound is? My subjective belief is something like, utility...
I think this subject is very important and underrated, so I'm glad you wrote the post; you raised some points I wasn't aware of, and I'd like to see people write more posts like this one. The post didn't do as much for me as it could have because I found two of its three main arguments hard to understand:
Some (small-sample) data on public opinion:
These surveys suggest that the average person gives considerably less moral weight to non-human animals than the RP moral weight estimates, although still enough weight t...
FWIW I haven't looked much into this but my surface impression is that climate change groups are eager to paint CCC as biased/bad science/climate deniers because (1) they don't like CCC's conclusion that many causes in global health and development are more cost-effective than climate change and (2) they tend to exaggerate the expected harms of climate change, and CCC doesn't.
My impression is that most of Lomborg's critics don't understand his claims—they don't understand the difference between "climate change isn't the top priority" and "climate change is...
Last time I talked to John Halstead about this, he was (as am I) pretty skeptical of Bjorn Lomborg on climate, so I think even if Lomborg on climate looks superficially similar to John's report that does not mean EAs generally agree with him on climate.
Speaking for myself (working on EA Climate full-time), reading Lomborg on climate is frustrating because he (a) gets some basic things right (energy innovation is good and under-done, climate is less dramatic than some doomers make it, human development is also shaped by lots of other things etc), but (b) co...
A small thought that occurred to me while reading this post:
In fields where most people do a lot of independent diligence, you should defer to other evaluators more. (Maybe EA grantmaking is an example of this.)
In fields where people mostly defer to each other, you're better off doing more diligence. (My impression is VC is like this—most VCs don't want to fund your startup unless you already got funding from someone else.)
And presumably there's some equilibrium where everyone defers N% of their decisionmaking and does (100-N)% independent diligence, and you should also defer N%.
How feasible do you think this is? From my outsider perspective, I see grantmakers and other types of application-reviewers taking 3-6 months across the board and it's pretty rare to see them be faster than that, which suggests it might not be realistic to consistently review grants in <3 months.
E.g., the only job application process I've ever done that took <3 months was an application to a two-person startup.
Thanks, I hadn't gotten to your comment yet when I wrote this. Having read it now, your argument sounds solid; my biggest question (which I wrote in a reply to your other comment) is where the eta=0.38 estimate came from.
I think the answer to that question is no, because I don’t trust models like these to advise us on how much risk to take.
How would you prefer to decide how much risk to take?
OP has tried to estimate empirically the spending/impact curvature of a big philanthropic opportunity set – the GiveWell top charities – and ended up with an eta parameter of roughly 0.38.
I would love to see more on this if there's any public writeups or data.
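For reference, here's a minimal sketch of what an isoelastic eta of 0.38 implies (the formula is the standard CRRA utility function; nothing here is from OP's actual estimation). Marginal utility is c^-eta, so with eta = 0.38, doubling consumption only cuts marginal utility by about 1.3x, versus 2x under log utility (eta = 1), i.e. a much flatter curve.

```python
# Isoelastic (CRRA) utility: u(c) = c**(1 - eta) / (1 - eta) for eta != 1.
# Marginal utility is u'(c) = c**-eta.
def marginal_utility(c, eta=0.38):
    return c ** -eta

# How much marginal utility falls when consumption doubles:
ratio_eta_038 = marginal_utility(1.0) / marginal_utility(2.0)  # 2**0.38, ~1.30
ratio_log = 1.0 / 0.5  # log utility (eta = 1): halves per doubling
```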
I think the article does support the title. By my read, the post is arguing:
(I read "much" in the title as a synonym for "high", I personally would have used the word "high" but I have no problem with the title.)
I agree that short posts are generally better, I did find this post long (I reviewed it before publication and it took me abo...
We have a much flatter utility curve for consumption (ignoring world-state) vs individual investors (using GiveWell's #s, or cause variety). [Strong]
Can you elaborate on why you believe this? Are you talking specifically about global poverty interventions, or (EA-relevant) philanthropy in general? (I can see the case for global poverty[1], I'm not so sure about other causes.)
...We have a much lower correlation between our utility and risk asset returns. (Typically equities are correlated with developed market economies and not natural disasters) [Strong]
Related to this, you can only multiply probabilities if they're independent, but I think a lot of the listed probabilities are positively correlated, which means the joint probability is higher than their product. For example, it seems to me that "AGI inference costs drop below $25/hr" and "We massively scale production of chips and power" are strongly correlated.
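A quick simulated illustration of the point above, using a simple Gaussian-copula sketch (the correlation value is an arbitrary assumption, just to show the direction of the effect): two events each with a 50% marginal probability, driven by correlated latent normals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two events, each with marginal probability 0.5, driven by
# positively correlated latent normals.
rho = 0.6
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
a = z1 > 0
b = z2 > 0

joint = (a & b).mean()           # ~0.35 with rho = 0.6
product = a.mean() * b.mean()    # ~0.25
```

The joint probability comes out well above the product of the marginals, so multiplying a chain of positively correlated probabilities understates the probability that everything happens together.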
This is good news! I'm not familiar enough with the law to know whether this ruling is constitutionally justified. I would have preferred to see the Supreme Court ban animal confinement entirely on 13th Amendment grounds, but society is not currently at a point where that has even a distant chance of happening, so I'm happy with this Prop 12 ruling.
I'm conflicted on this: on the one hand I agree that it's worth listening to people who aren't skilled at politeness or aren't putting enough effort into it. On the other hand, I think someone like Sapphire is capable of communicating the same information in a more polite way, and a ban incentivizes people to put more effort into politeness, which will make the community nicer.
I would object to a self-identified EA only giving money to help Muslims, but I don't object to self-identified EAs making it easy for Muslims to give money to help poor Muslims.
I would object to a self-identified EA only giving money to help Muslims and claiming it as an EA activity. How people choose to purchase their fuzzies (as opposed to utilons) isn't really my concern.
It's not clear to me from your explanation whether the health impacts refer to the effect on the mother or the effect on the stillborn child. If you are considering the effect on the stillborn child, it seems that you should consider increasing reproduction to be approximately as good as decreasing stillbirths.
...it seems crazy to imagine a baby dying during labour as anything other than a rich, full potential life lost, but if we extend that logic too far backwards then we might imagine any moment that we are not reproducing to be costing one “life’s worth
A couple of questions related to this, not directly relevant but I've been wondering about them and you might know something:
Your recent comment got me thinking more about this. Basically, I didn't think the last few years of underperformance was good evidence against factor investing, but I wasn't sure how to explain why. After thinking for a while, I believe I have a better handle on it. You're (probably) smarter than me and better at statistics than I am, so I was kind of bothered with myself for not being able to convince you earlier; I think your argument was better than mine. I want to try to come up with something more coherent.
Most of this comment is about estimat...
My ideal self spends most of my EA Forum time reading technical posts about various cause areas, both to stay up to date on the ones I know a lot about and to learn more about the ones I'm less familiar with.
My actual self disproportionately reads Community posts because they take a lot less energy to read.
But I reserve almost all my upvotes for more technical posts to help nudge myself and others toward reading those ones more.
That question's definition of AGI is probably too weak—it will probably resolve true a good deal before we have a dangerously powerful AI.
Is 5% low? 5% still strikes me as a "preventing this outcome should plausibly be civilization's #1 priority" level of risk.
For instance, given utilitarianism, the Equality Result probably implies that there should be a massive shift in neartermist resources toward animals, and someone might find this unbelievable.
I would make the same claim more strongly: "modus tollens" / "reductio ad absurdum" (as in, "this assumption gives a conclusion I don't like", rather than "this gives an internally inconsistent conclusion") style ethical reasoning is, broadly speaking, not good. Unless you believe standard 21st century morality is correct about everything, you should expect your e...
I don't think it makes sense from an EA worldview to seek the best charity within a specific cause unless you have reason to believe that cause is the most effective. It's fine to have whatever personal priorities you have, but I don't think it's an appropriate discussion topic for the EA Forum.
but I don't think it's an appropriate discussion topic for the EA Forum.
This sentence is what moved this comment from neutral-to-disagree to strong-disagree to me. I think it's reasonable for folks to disagree about whether "most effective intervention within pre-specified constraints" is an EA-apt question. For various reasons, I strongly feel that we shouldn't censure folks that try to do that thing (within reason).
If you are going to do the "well actually even the best interventions in this class aren't effective enough to be counterfactually worthwhile" thing, I think it's critical to do that in a tactful and constructive way, which this isn't.
I'm not entirely sure I understand what you're saying but this is how I think about it:
You have two options (really more, but just two that are relevant): you can start a startup or you can earn to give at a salaried job. If you start a startup, you expect to get paid $X in N years, and you get nothing (or not much) until then. If you work a salaried job, you get paid $Y per year. You can invest that money in public equities. To compare entrepreneurship vs. salaried job, you can look at the expected payoff from entrepreneurship vs. how much money you'd hav...
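The salaried-job side of that comparison is just the future value of an annuity: a salary of Y per year invested at annual return r for N years. The numbers below are arbitrary assumptions for illustration, not figures from either comment.

```python
# Future value after N years of investing Y per year at annual return r
# (standard annuity formula), to compare against a startup payoff of X
# arriving as a lump sum in year N.
def salary_future_value(Y, r, N):
    return Y * ((1 + r) ** N - 1) / r

# Illustrative assumptions: $100k/year saved, 7% return, 10 years.
fv = salary_future_value(Y=100_000, r=0.07, N=10)  # ~$1.38M
```

So under these made-up numbers, the startup is only competitive on pure expectation if X is well above ~$1.4M, before accounting for risk.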
It's possible. Companies all tend to correlate with each other somewhat so you can't get zero correlation, but if you can fund non-startup companies that other EAs don't invest in, then it could make sense to overweight those. One thing that comes to mind is EAs probably overweight certain countries (US, UK, Switzerland) and especially underweight emerging markets.
Can you say more about why comparisons to leveraged index funds are useful?
It's convenient because it lets you ignore your risk preferences. Making up some numbers, if entrepreneurship has a 20% return, and a leveraged index fund has a 25% return at the same level of risk*, then the leveraged index fund is better no matter how risk averse you are. It doesn't matter how much you care about risk because the two investments are equally risky.
(It's less helpful if the comparison comes out the other way. If a leveraged index fund has only a 15% return, then ...
When this amount is discounted by 12%/year (average S&P 500 returns)
I believe it would make more sense to calculate the certainty-equivalent return, since entrepreneurship is much riskier than an index fund. A worse but simpler method is to discount by the return of an index fund that's levered up to match the volatility of entrepreneurship, which I'd guess is somewhere around 4–5x leverage, implying a 40–50% annual discount. On the other hand, a startup will have lower correlation to other EAs' portfolios, which argues in favor of starting a startu...
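The leverage arithmetic behind that 40–50% figure can be sketched as follows, ignoring financing spreads and volatility drag (so this overstates what a real levered fund earns). The 12% market return is from the quoted text; the 2% risk-free rate is an assumption.

```python
# Levered index return: r_f + L * (r_market - r_f), a simplified model
# that ignores financing spreads and volatility drag.
# Assumptions: 12% market return (from the quoted text), 2% risk-free rate.
def levered_return(leverage, r_market=0.12, r_f=0.02):
    return r_f + leverage * (r_market - r_f)

four_x = levered_return(4)  # 0.42 -> ~40% discount rate
five_x = levered_return(5)  # 0.52 -> ~50% discount rate
```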
Yes, you should absolutely discount future people in proportion to their probability of not existing. This is still totally fair to those future people because, to the extent that they exist, you treat them as just as important as present people.
Debit card liability is capped at $50 and $500 if you report fraudulent transactions within 2 days and 60 days, respectively.
That's good, I didn't know that!
I use Interactive Brokers, but I don't use their debit card because I expect their fraud protections are not as good as a credit card, and I don't want to expose ~all my net worth to an easy fraud vector.
I use a checking account and keep enough money for ~2 months of expenses, and keep the rest in my IB account. I don't have a savings account.
I would prefer not to bring up gender at all. If someone commits sexual harassment, it doesn't particularly matter what their gender is. And it may be true that men do it more than women, but that's not really relevant, any more than it would be relevant if black people committed sexual harassment more than average.
For people (usually but not always men)
As an aside, I dislike calling out gender like this, even with the "not always" disclaimer. Compare: "For people (usually but not always black people)" would be considered inappropriate.
I prefer voting based on value. The other two voting strategies strike me as uncooperative. If you only downvote when you think a score is too high / upvote when you think it's too low, then you're canceling out someone else's vote. And if you don't vote when you think a score is good, then you're causing someone else's vote to have zero counterfactual value (because you will upvote if and only if someone else doesn't upvote).
Thanks for following up! Those sound like good changes.
Another thing you might do (if it's feasible) is list the studies you're using on something like Replication Markets.
I would go so far as to say your interpretation is correct and the original text is wrong, it should read "hen-years", not "hens a year".
I'm not saying Sam didn't know he was on the record. I'm saying I, personally, don't understand when I should expect to be on or off the record, and you saying it's obvious doesn't make me understand. Saying "newsworthy" doesn't help because I don't always know what's newsworthy, and it's basically tautological anyway.
And Kelsey's tweets show that journalists don't even agree on what the rules are, namely, some believe it's ok to quote something that the interviewee says is off the record, and others (like Kelsey) say it's not. If they disagree about this,...
FWIW some people are acting like the social rules around on vs. off the record are obvious and Sam should have known, but the rules are not obvious to me, and this sort of thing makes me reluctant to talk to any friends who are journalists.
I sort of agree with you, but I also think that Sam had much more experience talking to journalists than either of us do and so it's more reasonable to say that he should have known how this works.
It takes about a minute of googling to find an article that reasonably accurately clarifies what is meant by "on the record", "background", and "off the record". The social rule is that when speaking to a journalist about anything remotely newsworthy (if unsure, assume it is), you're on the record unless you say you'd rather not be and the journalist explicitly agrees.
The rules aren't self-evident, they're just well-known among people who need to know them. People are acting like Sam should have known because he has been actively engaging with the press fo...
I agree with all three people:
I'm confused about this. This seems to explain why customers with invested assets can't withdraw immediately. But if a customer has only cash in their account, why can't they withdraw it if it's not being invested? If customer A's cash is being used to secure margin loans for customer B, then how is it true that customer A's cash is "not invested"?
FWIW I would be hesitant to bet on this because I would lose the bet in worlds where the EA community has less money. Not to say it wouldn't be worth it at sufficiently good odds.
Could work. My intuition is that there would be no good way to integrate the list of summaries into the UI, and almost nobody would want to read the list of summaries. The work of writing summaries seems analogous to the work of editing Wikipedia, so perhaps the UI for viewing and writing summaries could be similar to Wikipedia talk pages.
I was originally going to write an essay based on this prompt, but I don't think I actually understand the Epicurean view well enough to do it justice. So instead, here's a quick list of what seem to me to be the implications. I don't exactly agree with the Epicurean view, but I do tend to believe that death in itself isn't bad; it's only bad in that it prevents you from having future good experiences.