All of MichaelDickens's Comments + Replies

I was originally going to write an essay based on this prompt but I don't think I actually understand the Epicurean view well enough to do it justice. So instead, here's a quick list of what seem to me to be the implications. I don't exactly agree with the Epicurean view, but I do tend to believe that death in itself isn't bad; it's only bad in that it prevents you from having future good experiences.

  1. Metrics like "$3000 per life saved" don't really make sense.
    • I avoid referencing dollars-per-life-saved when I'm being rigorous. I might use them when speak
... (read more)

RE #2, I helped develop CCM as a contract worker (I'm not contracted with RP currently) and I had the same thought while we were working on it. The reason we didn't do it is that implementing good numeric integration is non-trivial and we didn't have the capacity for it.

I ended up implementing analytic and numeric methods in my spare time after CCM launched. (Nobody can tell me I'm wasting my time if they're not paying me!) Doing analytic simplifications was pretty easy, numeric methods were much harder. I put the code in a fork of Squigglepy here: https:/... (read more)
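
For anyone curious what the numeric side involves, here's a minimal sketch of the general idea, not the actual code in the fork: the distribution, the transform f, and all parameters below are made up for illustration. The point is just that you can compute an expectation by quadrature instead of Monte Carlo sampling.

```python
# Minimal sketch (illustrative only): computing E[f(X)] for a lognormal X by
# numeric quadrature rather than Monte Carlo.
import numpy as np
from scipy import integrate, stats

dist = stats.lognorm(s=1.0, scale=np.exp(0.0))  # lognormal with mu=0, sigma=1

def f(x):
    # Stand-in for some downstream cost-effectiveness transform of the input.
    return x / (1.0 + x)

# Quadrature: integrate f(x) * pdf(x) over the support.
quad_ev, quad_err = integrate.quad(lambda x: f(x) * dist.pdf(x), 0, np.inf)

# Monte Carlo comparison.
samples = dist.rvs(size=200_000, random_state=0)
mc_ev = f(samples).mean()

print(f"quadrature:  E[f(X)] = {quad_ev:.5f} (error bound {quad_err:.1e})")
print(f"Monte Carlo: E[f(X)] = {mc_ev:.5f}")
```

The hard part alluded to above is doing this robustly for arbitrary combinations of distributions, not the one-dimensional integral itself.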

1
OscarD
8d
Cool, great you had a go at this! I have not had a look at your new code yet (and am not sure I will) but if I do and I have further comments I will let you know :)

Agreed. I disagree with the general practice of capping the probability distribution over animals' sentience at 1x that of humans'. (I wouldn't put much mass above 1x, but it should definitely be more than zero mass.)

It seems to me that the naive way to handle the two envelopes problem (and I've never heard of a way better than the naive way) is to diversify your donations across two possible solutions to the two envelopes problem:

  • donate half your (neartermist) money on the assumption that you should express moral weights as ratios to a fixed human value
  • donate half your money on the assumption that you should fix the unit the other way (eg fruit flies have fixed value)

Which would suggest donating half to animal welfare and probably half to global poverty. (If you let moral weights be linear w... (read more)
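
To make the flip concrete, here's a toy sketch; the 50/50 probabilities and the 1/100 moral-weight figure are invented purely for illustration, not anyone's actual estimates.

```python
# Toy illustration of the two-envelopes problem for moral weights.
import numpy as np

p = np.array([0.5, 0.5])                    # made-up credences over two theories
chicken_per_human = np.array([0.01, 1.0])   # chicken value if human value is fixed at 1

# Fix human value and take the expectation of the chicken:human ratio:
ev_fixed_human = p @ chicken_per_human            # 0.505 humans per chicken
# Fix chicken value and take the expectation of the human:chicken ratio:
ev_fixed_chicken = p @ (1 / chicken_per_human)    # 50.5 chickens per human

# The two answers disagree: 0.505 vs 1/50.5, about 0.0198.
print(ev_fixed_human, 1 / ev_fixed_chicken)

# The naive diversification: spend half the budget as if each way of fixing
# the unit were correct (here, one half favors animals, the other humans).
budget = 1.0
half_assuming_fixed_human = 0.5 * budget
half_assuming_fixed_animal = 0.5 * budget
```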

6
MichaelStJules
10d
There is no one opposite way; there are many other ways than to fix human value. You could fix the value in fruit flies, shrimps, chickens, elephants, C. elegans, some plant, some bacterium, rocks, your laptop, GPT-4 or an alien, etc. I think a more principled approach would be to consider precise theories of how welfare scales, not necessarily fixing the value in any one moral patient, and then use some other approach to moral uncertainty for uncertainty between the theories. However, there is another argument for fixing human value across many such theories: we directly value our own experiences, and theorize about consciousness in relation to our own experiences, so we can fix the value in our own experiences and evaluate relative to them.

If my goal is to help other people make their donations more effective, and I can either:

  1. move $1 million from a median charity to AMF
  2. move $10 million from a median art-focused charity to the most effective art-focused charity

I would prefer to do #1 because AMF is >10x better (maybe even >1000x better) than the best art charity. So while in theory I would encourage an art-focused foundation to make more effective donations within their area, I don't think trying to do that would be a good use of my time.

4
Jason
14d
Yes, although that's a relatively restricted (and often unusually low-impact) focus for a foundation. Even within often average-impact-per-$ remits like education, research, medical care, and geographic-area benefit, it is more plausible to envision grants with very good impact, even if not at the full AMF level.
1
Kyle Smith
14d
That's fair. Though I would counter that GiveWell says that they have directed $2 billion over 10+ years to effective charities. Private Foundations in the US give collectively around $100 billion a year. So there is a lot of money out there with potential to be influenced.

Getting 1.5 points by 2.7x'ing GDP actually sounds like a lot to me? It predicts that the United States should be 1.9 points ahead of China and China should be 2.0 points ahead of Kenya. It's very hard to get a 1.9 point improvement in satisfaction by doing anything.
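
For anyone who wants to check the arithmetic, here's a rough sketch; the GDP-per-capita figures are approximate PPP values I've plugged in myself, so the outputs only roughly match the 1.9 and 2.0 point gaps above.

```python
# Rough reconstruction of the arithmetic; GDP figures are approximate PPP
# per-capita values chosen for illustration, not from the original analysis.
import math

def satisfaction_gap(gdp_a, gdp_b, points_per_2_7x=1.5):
    # 1.5 points per 2.7x of GDP implies a log-linear slope.
    return points_per_2_7x * math.log(gdp_a / gdp_b) / math.log(2.7)

gdp = {"US": 65_000, "China": 18_000, "Kenya": 5_000}

print(f"US vs China:    {satisfaction_gap(gdp['US'], gdp['China']):.1f} points")
print(f"China vs Kenya: {satisfaction_gap(gdp['China'], gdp['Kenya']):.1f} points")
```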

-4
Alexander Loewi
1mo
The point is not that 1.5 is a large number in terms of single variables (it is); the point is that 2.7x is a ridiculous number. But 1.5 also isn't such a huge effect within the full scale of what's measured. The maximum value in the data is just over 8. Even something "huge" like 1.5, out of a total of 8, is less than twenty percent. If, as the more accurate model suggests, GDP is only making up a small piece of the total, then that suggests it's far more likely that if a country were to take the same effort that would be required to triple their GDP, and put those resources instead into the other variables, they'd get a far larger return.

IMO if a forecasting org does manage to make money selling predictions to companies, that's a good positive update, but if they fail, that's only a weak negative update—my prior is that the vast majority of companies don't care about getting good predictions even if those predictions would be valuable. (Execs might be exposed as making bad predictions; good predictions should increase the stock price, but individual execs only capture a small % of the upside to the stock price vs. 100% of the downside of looking stupid.)

5
MWStory
1mo
I think if you extend this belief outwards it starts to look unwieldy and “proves too much”. Even if you think that executives don’t care about having access to good predictions the way that business owners do, then why not ask why business owners aren’t paying?

I was just thinking about this a few days ago when I was flying for the holidays. Outside the plane was a sign that said something like

Warning: Jet fuel emits chemicals that may increase the risk of cancer.

And I was thinking about whether this was a justified double-hedge. The author of that sign has a subjective belief that exposure to those chemicals increases the probability that you get cancer, so you could say "may give you cancer" or "increases the risk of cancer". On the other hand, perhaps the double-hedge is reasonable in cases like this becau... (read more)

2
EdoArad
3mo
I like this as an example of a case where you wouldn't want to combine these two different forms of uncertainty

I may be misinterpreting your argument, but it sounds like it boils down to:

  1. Given that we don't know much about qualia, we can't be confident that shrimp have qualia.
  2. [implicit] Therefore, shrimp have an extremely low probability of having qualia.
  3. Therefore, it's ok to eat shrimp.

The jump from step 1 to step 2 looks like a mistake to me.

You also seemed to suggest (although I'm not quite sure whether you were actually suggesting this) that if a being cannot in principle describe its qualia, then it does not have qualia. I don't see much reason to believ... (read more)

-7
MikhailSamin
5mo

I have only limited resources with which to do good. If I'm not doing good directly through a full-time job, I budget 20% of my income toward doing as much good as possible, and then I don't worry about it after that. If I spend time and money on advocating for a ceasefire, that's time and money that I can't spend on something else.

If you ask me my opinion about whether Israel should attack Gaza, I'd say they shouldn't. But I don't know enough about the issue to say what should be done about it, and I doubt advocacy on this issue would be very effective—"I... (read more)

1
LiaH
5mo
Thanks for taking the time to reply! And thanks for acknowledging that it's a good thing to advocate for a ceasefire. Here is my rationale for it being the best thing:

  1. I know it is naive and simplistic to say, but war kills and peace saves lives, no matter the circumstance, parties, or reason for the conflict. If we believe that every human life is valued equally, saving the lives of even the most egregious combatants is worthwhile.
  2. A ceasefire would mitigate further deaths in Palestine, right now. True, protests to end the conflict haven't been effective for three quarters of a century, but I don't understand how that is an argument for not trying to end the acute crisis, while hospitals are being shut down.
  3. Escalation of this conflict is highly possible, in my opinion. I am sorry to repeat this part of the post, but two historically oppressed peoples are feeling attacked, worldwide. It is hard to play fair when you are feeling attacked, oppressed, downtrodden. It increases the risk of dehumanizing the other side, and illegal war strategies, like chemical weapons, attacks on hospitals and social infrastructure, cutting off supply lines, etc. Most worrisome is that the people who are feeling oppressed don't just live in Palestine and Israel; the conflict runs the risk of scaling up fast. Hate crimes are already happening in America. The Biden administration is already talking of war.
  4. Regarding time, I am only suggesting you share protests for ceasefire on social media, if you are not doing so already. This takes seconds, considerably less than the time it took you to earn the 20% of your income that you are donating (kudos to you, btw). It takes so little time, I am suggesting you can both advocate for a ceasefire and continue the good you do, without impact on your QoL.
  5. I don't think I am exaggerating when I suggest your efforts could help to save thousands, maybe millions. Not you alone, of course, but as I indicated in my other reply, as part of

the results here don't depend on actual infinities (infinite universe, infinitely long lives, infinite value)

This seems pretty important to me. You can handwave away standard infinite ethics by positing that everything is finite with 100% certainty, but you can't handwave away the implications of a finite-everywhere distribution with infinite EV.

(Just an offhand thought, I wonder if there's a way to fix infinite-EV distributions by positing that utility is bounded, but that you don't know what the bound is? My subjective belief is something like, utility... (read more)
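
Here's a quick numerical sketch of both halves of that thought; the saturating utility form and the candidate bounds are invented just to illustrate the idea, not a claim about the right functional form.

```python
# Sketch: a finite-everywhere prospect with infinite expected value, plus the
# "bounded utility with an unknown bound" idea (illustrative functional form).
import numpy as np

# St. Petersburg-style prospect: payoff 2^k with probability 2^-k, k = 1..60.
k = np.arange(1, 61)
payoff = 2.0 ** k
prob = 2.0 ** -k

# Every outcome is finite, but the partial expected values grow without bound:
print(np.cumsum(prob * payoff)[[9, 29, 59]])   # ~10, ~30, ~60

def bounded_u(x, bound):
    # A bounded utility function that saturates at `bound`.
    return bound * x / (x + bound)

# Uncertainty over where the bound lies: expected utility is finite under each
# candidate bound, and hence under any mixture of these candidates.
for bound in [1e3, 1e6, 1e9]:
    print(bound, np.sum(prob * bounded_u(payoff, bound)))
```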

2
MichaelStJules
6mo
I think someone could hand-wave away heavy tailed distributions, too, but rather than assigning some outcomes 0 probability or refusing to rank them, they're assuming some prospects of valid outcomes aren't valid or never occur, even though they're perfectly valid measure-theoretically. Or, they might actually just assign 0 probability to outcomes outside those with a bounded range of utility. In the latter case, you could represent them with both a bounded utility function and an unbounded utility function, agreeing on the bounded utility set of outcomes.

You could have moral/normative uncertainty across multiple bounded utility functions. Just make sure you don't weigh them together via maximizing expected choiceworthiness in such a way that the weighted sum of utility functions is unbounded, because the weighted sum is a utility function. If the weighted sum is unbounded, then the same arguments in the post will apply to it. You could normalize all the utility functions first. Or, use a completely different approach to normative uncertainty, e.g. a moral parliament. That being said, the other approaches to normative uncertainty also violate Independence and can be money pumped, AFAIK.

Fairly related to this is section 6 in Beckstead and Thomas, 2022. https://onlinelibrary.wiley.com/doi/full/10.1111/nous.12462

I think this subject is very important and underrated, so I'm glad you wrote the post, and you raised some points that I wasn't aware of, and I would like to see people write more posts like this one. The post didn't do as much for me as it could have because I found two of its three main arguments hard to understand:

  1. For your first argument ("Unbounded utility functions are irrational"), the post spends several paragraphs setting up a specific function that I could have easily constructed myself (for me it's pretty obvious that there exist finite utility
... (read more)
2
MichaelStJules
6mo
Thanks, this is helpful! To respond to the points:

  1. I can see how naming them without defining them would throw people off. In my view, it's acting seemingly irrationally, like getting money pumped, getting Dutch booked or paying to avoid information, that matters, not satisfying Independence or the STP. If you don't care about this apparently irrational behaviour, then you wouldn't really have any independent reason to accept Independence or the STP, except maybe that they seem directly intuitive. If I introduced them, that could throw other people off or otherwise take up much more space in an already long post to explain with concrete examples. But footnotes probably would have been good.
  2. Good to hear!
  3. Which argument do you mean? I defined and motivated the axioms for the two impossibility theorems with SD and Impartiality I cite, but I did that after stating the theorems, in the Anti-utilitarian theorems section. (Maybe I should have linked the section in the summary and outline?)

Some (small-sample) data on public opinion:

  1. Scott Alexander did a survey on moral weights: https://slatestarcodex.com/2019/03/26/cortical-neuron-number-matches-intuitive-perceptions-of-moral-value-across-animals/
  2. SlateStarCodex commenter Tibbar's Mechanical Turk survey on moral weights: https://slatestarcodex.com/2019/05/01/update-to-partial-retraction-of-animal-value-and-neuron-number/

These surveys suggest that the average person gives considerably less moral weight to non-human animals than the RP moral weight estimates, although still enough weight t... (read more)

4
MichaelStJules
6mo
The difference between the two surveys was inclusion/exclusion of people who refused to equate finitely many animals to a human.

Also, the wording of the questions in the survey isn't clear about what kinds of tradeoffs it's supposed to be about: The median responses for chickens were ~500 and ~1000 (when you included infinities and NA as infinities). But does this mean welfare range or capacity for welfare? Like if we had to choose between relieving an hour of moderate intensity pain in 500 chickens vs 1 human, we should be indifferent (welfare range)? Or, is that if we could save the lives of 500 chickens or 1 human, we should be indifferent (capacity for welfare)? Or something else?

If it's capacity for welfare, then this would be a pretty pro-animal view, because chickens live around 10-15 years on average under good conditions, and under conventional intensive farming conditions, 40-50 days if raised for meat and less than 2 years if raised for eggs. Well, the average person probably doesn't know how long chickens live, so maybe we shouldn't interpret it as capacity for welfare.

Also, there are 20-30 chickens killed in the US per American per year, so like 2000 per American over their life.

FWIW I haven't looked much into this but my surface impression is that climate change groups are eager to paint CCC as biased/bad science/climate deniers because (1) they don't like CCC's conclusion that many causes in global health and development are more cost-effective than climate change and (2) they tend to exaggerate the expected harms of climate change, and CCC doesn't.

My impression is that most of Lomborg's critics don't understand his claims—they don't understand the difference between "climate change isn't the top priority" and "climate change is... (read more)

Last time I talked to John Halstead about this, he was (as am I) pretty skeptical of Bjorn Lomborg on climate, so I think even if Lomborg on climate looks superficially similar to John's report that does not mean EAs generally agree with him on climate.

Speaking for myself (working on EA Climate full-time), reading Lomborg on climate is frustrating because he (a) gets some basic things right (energy innovation is good and under-done, climate is less dramatic than some doomers make it, human development is also shaped by lots of other things etc), but (b) co... (read more)

A small thought that occurred to me while reading this post:

In fields where most people do a lot of independent diligence, you should defer to other evaluators more. (Maybe EA grantmaking is an example of this.)

In fields where people mostly defer to each other, you're better off doing more diligence. (My impression is VC is like this—most VCs don't want to fund your startup unless you already got funding from someone else.)

And presumably there's some equilibrium where everyone defers N% of their decisionmaking and does (100-N)% independent diligence, and you should also defer N%.
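
That last point can be made concrete with a toy model in the Grossman-Stiglitz spirit; all the parameter values below are invented, and this is only meant to show the shape of the equilibrium logic, not to estimate N.

```python
# Toy equilibrium: do your own diligence (costly) or defer to the consensus (free).
import numpy as np

N = 1000          # number of evaluators
sigma2 = 1.0      # variance of one independent diligence signal
cost = 0.002      # cost of doing your own diligence
stakes = 1.0      # value of reducing your squared error by one unit

def benefit_of_diligence(p):
    # If a fraction p of evaluators do diligence, the shared consensus
    # (their average signal) has error variance sigma2 / (p * N).
    consensus_var = sigma2 / max(p * N, 1e-9)
    # A deferrer just uses the consensus; a diligent evaluator also adds
    # their own signal (inverse-variance weighting).
    combined_var = 1.0 / (1.0 / consensus_var + 1.0 / sigma2)
    return stakes * (consensus_var - combined_var) - cost

# The benefit of diligence falls as more people do it; the equilibrium
# fraction is roughly where the net benefit crosses zero.
ps = np.linspace(0.001, 1.0, 1000)
benefits = np.array([benefit_of_diligence(p) for p in ps])
p_star = ps[np.argmin(np.abs(benefits))]
print(f"equilibrium fraction doing diligence ~ {p_star:.2f}")
```
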

1
Niki Kotsenko
7mo
First two points sound like a valid application of Grossman-Stiglitz (1980)

How feasible do you think this is? From my outsider perspective, I see grantmakers and other types of application-reviewers taking 3-6 months across the board and it's pretty rare to see them be faster than that, which suggests it might not be realistic to consistently review grants in <3 months.

eg the only job application process I've ever done that took <3 months was an application to a two-person startup.

2
Linch
7mo
It's a good question. I think several grantmaking groups local to us (Lightspeed grants, Manifund, the ill-fated FF regranting program) have promised and afaict delivered on a fairly fast timeline. Though all of them are/were young and I don't have a sense of whether they can reliably be quick after being around for longer than say 6 months. LTFF itself has a median response time of about 4 weeks or so iirc. There might be some essential difficulties with significantly speeding up (say) the 99th percentile (eg needing stakeholder buy-in, complicated legal situations, trying to pitch a new donor for a specific project, trying to screen for infohazard concerns when none of the team are equipped to do so), but if we are able to get more capacity, I'd like us to at least get (say) the 85th or 95th tail down a lot lower. 

Thanks, I hadn't gotten to your comment yet when I wrote this. Having read your comment, your argument sounds solid, my biggest question (which I wrote in a reply to your other comment) is where the eta=0.38 estimate came from.

I think the answer to that question is no, because I don’t trust models like these to advise us on how much risk to take.

How would you prefer to decide how much risk to take?

OP has tried to estimate empirically the spending/impact curvature of a big philanthropic opportunity set – the GiveWell top charities – and ended up with an eta parameter of roughly 0.38.

I would love to see more on this if there's any public writeups or data.
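
For context on what an eta of roughly 0.38 would mean, assuming (my reading, not something stated in the quote) that it's the curvature parameter of an isoelastic utility-of-spending function, a quick sketch:

```python
# Assuming "eta" is the curvature of an isoelastic utility of spending,
# u(x) = x**(1 - eta) / (1 - eta), the marginal utility ratio after doubling
# spending is u'(2x) / u'(x) = 2**(-eta).
for eta in [0.38, 1.0, 1.5]:
    ratio = 2 ** (-eta)
    print(f"eta = {eta}: doubling spending leaves {ratio:.0%} of marginal impact")
```

With eta = 0.38, doubling spending only cuts marginal impact by about 23%, versus 50% under log utility, which is why such a low eta implies a relatively flat spending/impact curve.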

I think the article does support the title. By my read, the post is arguing:

  • many EAs claim that EAs should have high risk tolerance (as in, more risk tolerance than individual investors)
  • but these arguments are not as strong as people claim, so we shouldn't say EAs should have high risk tolerance

(I read "much" in the title as a synonym for "high", I personally would have used the word "high" but I have no problem with the title.)

I agree that short posts are generally better, I did find this post long (I reviewed it before publication and it took me abo... (read more)

2
Simon_M
8mo
I don't get the same impression from reading the post, especially in light of the conclusions, which even without my adjustments seem in favour of taking more risk.

We have a much flatter utility curve for consumption (ignoring world-state) vs individual investors (using GiveWell's #s, or cause variety). [Strong]

Can you elaborate on why you believe this? Are you talking specifically about global poverty interventions, or (EA-relevant) philanthropy in general? (I can see the case for global poverty[1], I'm not so sure about other causes.)

We have a much lower correlation between our utility and risk asset returns. (Typically equities are correlated with developed market economies and not natural disasters) [Strong]

... (read more)
2
Simon_M
8mo
I was mostly thinking global poverty and health, yes. I think it's still probably true for other EA-relevant philanthropy, but I don't think I can claim that here.

1/ I don't think that study is particularly relevant? That's making a statement about the correlation between countries' growth rates and the returns on their stock markets.

2/ I don't think there's really a study which is going to convince me either way on this. My reasoning is more about my model of the world:

a/ Economic actors will demand a higher risk premium (ie stocks down) when they are more uncertain about the future because the economy in the future is less bright (ie weak economy => lower earnings)

b/ Economic actors will demand a higher risk premium when their confidence is lower

I don't think there's likely to be a simple way to extract a historical correlation, because it's not clear how forward looking you want to be estimating, what time horizon is relevant etc. I think if you think that stocks are negatively correlated to NGDP AND you think they have a risk premium, you want to be loading up on them to absolutely unreasonable levels of leverage.
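
To see why those assumptions point toward a lot of leverage, here's a back-of-the-envelope sketch using the standard Merton rule of thumb; the risk premium, volatility, and gamma values are mine, not numbers either commenter has endorsed.

```python
# Merton rule of thumb: optimal risky-asset fraction f* = (mu - r) / (gamma * sigma^2).
mu_minus_r = 0.05   # assumed equity risk premium
sigma = 0.16        # assumed equity volatility

for gamma in [3.0, 1.0, 0.5]:   # lower gamma = flatter utility curve
    f_star = mu_minus_r / (gamma * sigma ** 2)
    print(f"gamma = {gamma}: optimal equity exposure ~ {f_star:.1f}x")

# A negative correlation between equity returns and the bad states a
# philanthropist cares about acts like an extra hedging demand on top of this,
# pushing the optimal exposure higher still.
```
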
7
Peter Favaloro
8mo
(In case it's useful to either Simon or Michael: I argue in favor of both these points in my comment on this post.)

Related to this, you can only multiply probabilities if they're independent, but I think a lot of the listed probabilities are positively correlated, which means the joint probability is higher than their product. For example, it seems to me that "AGI inference costs drop below $25/hr" and "We massively scale production of chips and power" are strongly correlated.
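
A quick simulation of that point; the thresholds, the correlation, and the mapping to the two events are arbitrary, just to show that positive correlation pushes the joint probability above the product of the marginals.

```python
# Positively correlated events: P(A and B) > P(A) * P(B).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
rho = 0.6   # assumed correlation between the latent drivers of the two events

# Two standard-normal latent variables with correlation rho; each event
# "happens" when its latent variable clears a threshold.
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
a = z[:, 0] > 1.0   # stand-in for "inference costs drop below $25/hr"
b = z[:, 1] > 1.0   # stand-in for "chip and power production massively scales"

print(f"P(A)P(B) = {(a.mean() * b.mean()):.3f},  P(A and B) = {(a & b).mean():.3f}")
```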

9
Ted Sanders
10mo
Agreed. Factors like AI progress, inference costs, and manufacturing scale needed are massively correlated. We discuss this in the paper. Our unconditional independent forecast of semiconductor production would be much, much lower than our conditional forecast of 46%, for example.

This is good news! I'm not familiar enough with the law to know whether this ruling is constitutionally justified. I would have preferred to see the Supreme Court ban animal confinement entirely on 13th Amendment grounds, but society is not currently at a point where that has even a distant chance of happening, so I'm happy with this Prop 12 ruling.

I'm conflicted on this: on the one hand I agree that it's worth listening to people who aren't skilled at politeness or aren't putting enough effort into it. On the other hand, I think someone like Sapphire is capable of communicating the same information in a more polite way, and a ban incentivizes people to put more effort into politeness, which will make the community nicer.

4
NunoSempere
1y
Yeah, you also see this with criticism, where for any given piece of criticism, you could put more effort into it and make it more effective. But having that as a standard (even as a personal one) means that it will happen less. So I don't think we disagree on the fact that there is a demand curve? Maybe we disagree that I want to have more sapphires and less politeness, on the margin?

I would object to a self-identified EA only giving money to help Muslims, but I don't object to self-identified EAs making it easy for Muslims to give money to help poor Muslims.

Jason
1y

I would object to a self-identified EA only giving money to help Muslims and claiming it as an EA activity. How people choose to purchase their fuzzies (as opposed to utilons) isn't really my concern.

I am not clear from your explanation on whether health impacts are talking about the effect on the mother or the effect on the stillborn child. If you are considering the effect on the stillborn child, it seems that you should consider increasing reproduction as approximately as good as decreasing stillbirths.

it seems crazy to imagine a baby dying during labour as anything other than a rich, full potential life lost, but if we extend that logic too far backwards then we might imagine any moment that we are not reproducing to be costing one “life’s worth

... (read more)
1
Robi Rahman
1y
You're right; surely abortion, miscarriage, and stillbirth are all equally bad for the embryo/fetus/child and should either be counted as 0 or -70 depending on whether you count these as people or not. (Unless there's some kind of Shapley value argument where an abortion of a 5-week embryo counts as only a fractional loss of life because it might have miscarried anyway even if there had not been an abortion, but I don't think anyone is proposing such an accounting here.) It's frustrating that people downvoted you to -6 agreement but no one bothered to explain what they disagreed with.
6
Joseph Pusey
1y
Hi Michael. Thanks for your thoughtful comment. You've highlighted an issue I agree with: that this is something of a grey area where one's personal position on complex moral issues can make a big difference to how effective you think this problem area might be.

In the article, I'm defining the health impacts of a stillbirth as the years of health, or healthy life, lost to the child who is stillborn; this, as you point out, is very hard to define. Any health impacts on the mother (not related to economic or wellbeing impacts) were not described particularly fully in the readings I found, although there may be more research that I haven't seen; I suspect they would be hard to disentangle from the health problems which may have contributed to, rather than caused, a stillbirth.

If I was smarter, I'd have a better impression of where I fall on this issue. What I hope to point out in the article is that taking either position to an extreme results in a position that clashes with my, and I suspect many people's, moral intuition. Probably further thought on this is required by people who have more experience with time discounting/health economics/actuarial sciences than me.

Presumably, some people do think this. I think for me to have a strong position on it I'd have to have strong positions on other, more fundamental moral questions that I haven't come to good answers for.

A couple of questions related to this, not directly relevant but I've been wondering about them and you might know something:

  1. How to square interest rate = return on capital with the fact that, for most of human history, the growth rate was close to zero and the interest rate was significantly higher?
  2. How does this account for risk? The (risk-free) interest rate is risk-free, and the return on capital is risky—it fluctuates over time, and sometimes it's negative. So shouldn't the growth rate be higher than the interest rate? (I think the long-run real growth rate is usually higher than the real interest rate—about 1–2% and 0–1% respectively IIRC, which might be the answer.)
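
For question 1, one standard relation worth having in view is the Ramsey equation (a textbook identity, not something from the comment above); a minimal sketch:

```python
# Ramsey relation: risk-free rate = pure time preference + eta * growth rate.
def ramsey_rate(rho, eta, g):
    return rho + eta * g

# With near-zero historical growth, the rate can still sit well above zero as
# long as rho > 0; the risky return on capital then exceeds this risk-free
# rate by a premium, which is what question 2 is pointing at.
print(ramsey_rate(rho=0.02, eta=1.5, g=0.00))   # ~2%
print(ramsey_rate(rho=0.02, eta=1.5, g=0.02))   # ~5%
```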

Your recent comment got me thinking more about this. Basically, I didn't think the last few years of underperformance was good evidence against factor investing, but I wasn't sure how to explain why. After thinking for a while, I think I have a better handle on it. You're (probably) smarter than me and you're better at statistics than I am, so I was kind of bothered at myself for not being able to convince you earlier, and I think your argument was better than mine. I want to try to come up with something more coherent.

Most of this comment is about estimat... (read more)

My ideal self spends most of my EA Forum time reading technical posts about various cause areas, both to stay up to date on the ones I know a lot about and to learn more about the ones I'm less familiar with.

My actual self disproportionately reads Community posts because they take a lot less energy to read.

But I reserve almost all my upvotes for more technical posts to help nudge myself and others toward reading those ones more.

That question's definition of AGI is probably too weak—it will probably resolve true a good deal before we have a dangerously powerful AI.

3
Bogdan Ionut Cirstea
1y
Maybe, though e.g. combined with it would still result in a high likelihood of very short timelines to superintelligence (there can be inconsistencies between Metaculus forecasts, e.g. with  as others have pointed out before). I'm not claiming we should only rely on these Metaculus forecasts or that we should only plan for [very] short timelines, but I'm getting the impression the community as a whole and OpenPhil in particular haven't really updated their spending plans with respect to these considerations (or at least this hasn't been made public, to the best of my awareness), even after updating to shorter timelines.

Is 5% low? 5% still strikes me as a "preventing this outcome should plausibly be civilization's #1 priority" level of risk.

4
Lukas_Gloor
1y
Yes to (paraphrased) "5% should plausibly still be civilization's top priority." However, in another sense, 5% is indeed low! I think that's a significant implicit source of disagreement over AI doom likelihoods – what sort of priors people start with. The following will be a bit simplistic (in reality proponents of each side will probably state their position in more sophisticated ways).

On one side, optimists may use a prior of "It's rare that humans build important new technology and it doesn't function the way it's intended." On the other side, pessimists can say that it has almost never happened that people who developed a revolutionary new technology displayed a lot of foresight about its long-term consequences when they started using it. For instance, there were comparatively few efforts at major social media companies to address ways in which social media might change society for the worse. Or, same reasoning for the food industry and the obesity epidemic or online dating and its effects on single parenthood rates.

I'm not saying revolutions in these sectors were overall negative for human happiness – just that there are what seem to be costly negative side-effects where no one competent has ever been "in charge" of proactively addressing them (nor do we have good plans to address them anytime soon). So, it's not easily apparent how we'll suddenly get rid of all these issues and fix the underlying dynamics, apart from "AI will give us god-like power to fix everything." The pessimists can argue that humans have never seemed particularly "in control" over technological progress. There's this accelerating force that improves things on some metrics but makes other things worse elsewhere. (Pinker-style arguments for the world getting better seem one-sided to me – he mostly looks at trends that were already relevant 100s of years ago, but doesn't talk about "newer problems" that only arose as Molochian side-effects of technological progress.)

AI will be
3
ben.smith
1y
Agreed, but 5% is much lower than "certain or close to certain", which is the starting point Nuno Sempere said he was sceptical of. I don't know that anyone thinks doom is "certain or close to certain", though the April 1 post could be read that way. 5% is also much lower than, say, 50%, which seems to be a somewhat more common belief.
4
D0TheMath
1y
Yeah, he’s working on it, but it’s not his no. 1 priority. He developed shard theory.

For instance, given utilitarianism, the Equality Result probably implies that there should be a massive shift in neartermist resources toward animals, and someone might find this unbelievable.

I would make the same claim more strongly: "modus tollens" / "reductio ad absurdum" (as in, "this assumption gives a conclusion I don't like", rather than "this gives an internally inconsistent conclusion") style ethical reasoning is, broadly speaking, not good. Unless you believe standard 21st century morality is correct about everything, you should expect your e... (read more)

2
Bob Fischer
1y
Thanks for this, Michael! I hadn't seen that line from Ozy. I really like it.

I don't think it makes sense from an EA worldview to seek the best charity within a specific cause unless you have reason to believe that cause is the most effective. It's fine to have whatever personal priorities you have, but I don't think it's an appropriate discussion topic for the EA Forum.

but I don't think it's an appropriate discussion topic for the EA Forum.

This sentence is what moved this comment from neutral-to-disagree to strong-disagree to me. I think it's reasonable for folks to disagree about whether "most effective intervention within pre-specified constraints" is an EA-apt question. For various reasons, I strongly feel that we shouldn't censure folks that try to do that thing (within reason).

If you are going to do the "well actually even the best interventions in this class aren't effective enough to be counterfactually worthwhile" thing, I think it's critical to do that in a tactful and constructive way, which this isn't.

6
sjsjsj
1y
Fair point. Is there a consensus within EA that EA should only be focused on what are the most effective causes in terms of increasing total utility, vs there being space to optimize effective impact within non-optimal causes? My personal interests aside, it seems like there would be a case to address this, as many people outside the current EA movement are not particularly interested in maxing utils in the abstract, but rather guided by personal priorities -- so improving the efficacy of their donations within those priorities would have value. And to my knowledge, there is a vacuum in analyzing the impact of charities outside of top EA cause areas. I would imagine that on net, it's a loss to allocate non-trivial resources to this away from higher impact cause areas... arguably asking people to share the information they currently have in low-effort ways would be positive on net, though I can see why one would want to promote conversational norms that discourage this. Maybe I'll take this to LessWrong, where I'll hit many folks with the same knowledge base, but without violating the norms you put forth?

I'm not entirely sure I understand what you're saying but this is how I think about it:

You have two options (really more, but just two that are relevant): you can start a startup or you can earn to give at a salaried job. If you start a startup, you expect to get paid $X in N years, and you get nothing (or not much) until then. If you work a salaried job, you get paid $Y per year. You can invest that money in public equities. To compare entrepreneurship vs. salaried job, you can look at the expected payoff from entrepreneurship vs. how much money you'd hav... (read more)
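
A bare-bones sketch of that comparison; the payout, salary, horizon, and return below are placeholders, and the risk adjustment discussed elsewhere in this thread is deliberately left out.

```python
# Startup expected to pay $X after N years vs. a salary of $Y per year
# invested in public equities along the way (placeholder numbers).
X, Y, N = 3_000_000, 150_000, 6
annual_return = 0.07

# Startup path: the (expected) payout arrives at year N; nothing before that.
startup_at_year_n = X

# Salaried path: save Y at the end of each year and compound it to year N.
salary_at_year_n = sum(Y * (1 + annual_return) ** (N - t) for t in range(1, N + 1))

print(f"startup:  ${startup_at_year_n:,.0f} expected at year {N}, before any risk adjustment")
print(f"salaried: ${salary_at_year_n:,.0f} in the brokerage account at year {N}")
```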

2
Ben_West
1y
Sure, but the salaried job has the added confusion that you get paid annually. It's not the same as investing $X for N years (or (1−d)^N · X dollars for N years).

It's possible. Companies all tend to correlate with each other somewhat so you can't get zero correlation, but if you can fund non-startup companies that other EAs don't invest in, then it could make sense to overweight those. One thing that comes to mind is EAs probably overweight certain countries (US, UK, Switzerland) and especially underweight emerging markets.

Can you say more about why comparisons to leveraged index funds are useful?

It's convenient because it lets you ignore your risk preferences. Making up some numbers, if entrepreneurship has a 20% return, and a leveraged index fund has a 25% return at the same level of risk*, then the leveraged index fund is better no matter how risk averse you are. It doesn't matter how much you care about risk because the two investments are equally risky.

(It's less helpful if the comparison comes out the other way. If a leveraged index fund has only a 15% return, then ... (read more)

7
Ben_West
1y
Sure, my question was more about using the returns to capital as a way to estimate the returns to labor. I see no particular reason why these should be the same (though I understand that if you do make this assumption, leveraged index funds are a reasonable thing to use.)

When this amount is discounted by 12%/year (average S&P 500 returns)

I believe it would make more sense to calculate the certainty-equivalent return, since entrepreneurship is much riskier than an index fund. A worse but simpler method is to discount by the return of an index fund that's levered up to match the volatility of entrepreneurship, which I'd guess is somewhere around 4–5x leverage, implying a 40–50% annual discount. On the other hand, a startup will have lower correlation to other EAs' portfolios, which argues in favor of starting a startu... (read more)
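
Spelling out that discount arithmetic; the volatility and return figures are rough guesses on my part, not from the original post.

```python
# Lever an index fund up until it matches startup volatility, then use the
# levered return as the discount rate (rough, illustrative numbers).
index_return = 0.12      # e.g. the long-run S&P 500 return used above
risk_free = 0.02
startup_vol = 0.70       # very rough single-startup volatility
index_vol = 0.16

leverage = startup_vol / index_vol   # ~4.4x to match volatility
levered_return = leverage * index_return - (leverage - 1) * risk_free

print(f"leverage ~ {leverage:.1f}x, levered index return ~ {levered_return:.0%}")
# ~46%/yr, i.e. roughly the 40-50% annual discount suggested above, before
# borrowing costs and volatility drag.
```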

2
MichaelStJules
1y
Could picking small cap stocks or investing in private companies achieve similarly lower correlation with other EAs' portfolios (assuming we don't all pile into the same companies)?
2
Ben_West
1y
Thanks! Can you say more about why comparisons to leveraged index funds are useful? It's not obvious to me that discount rates for capital and labor should be the same; I included the S&P 500 since it was simple, but I don't think there are many people who are going to decide whether or not to start a company based on S&P 500 returns.

Yes, you should absolutely discount future people in proportion to their probability of not existing. This is still totally fair to those future people because, to the extent that they exist, you treat them as just as important as present people.

Debit card liability is capped at $50 and $500 if you report fraudulent transactions within 2 days and 60 days, respectively.

That's good, I didn't know that!

I use Interactive Brokers, but I don't use their debit card because I expect their fraud protections are not as good as a credit card, and I don't want to expose ~all my net worth to an easy fraud vector.

I use a checking account and keep enough money for ~2 months of expenses, and keep the rest in my IB account. I don't have a savings account.

1
Jess_Riedel
1y
I agree it's important to keep the weaker fraud protection on debit cards in mind. However, for the use I mentioned above, you can just lock the debit card and only unlock it when you have a cash flow problem. (Btw, if you don't use your IB debit card, you should lock it even if you aren't using it.) Debit card liability is capped at $50 and $500 if you report fraudulent transactions within 2 days and 60 days, respectively.

That said, I have most of my net worth elsewhere, so I'm less worried about tail risks than you would reasonably be if you're mostly invested through IB.

I would prefer not to bring up gender at all. If someone commits sexual harassment, it doesn't particularly matter what their gender is. And it may be true that men do it more than women, but that's not really relevant, any more than it would be relevant if black people committed sexual harassment more than average.

9
James Özden
1y
It's not that it "may be" true - it is true. I think it's totally relevant: if some class of people are consistently the perpetrators of harm against another group, then surely we should be trying to figure out why that is the case so we can stop it? Not providing that information seems like it could seriously impede our efforts to understand and address the problem (in this case, sexism & patriarchy).

I'm also confused by your analogy to race - I think you're implying that it would be discriminatory to mention race if talking about other bad things being done, but I also feel like this is relevant. In this case I think it's a bit different, however, as there are other confounders present (e.g. black people are much more highly incarcerated, earn less on average, generally much less privileged) which all might increase rates of doing said bad thing. So in this case, it's not a result of their race, but rather a result of the unequal socioeconomic conditions faced when someone is a certain race.

For people (usually but not always men)

As an aside, I dislike calling out gender like this, even with the "not always" disclaimer. Compare: "For people (usually but not always black people)" would be considered inappropriate.

2
Linch
1y
Would you prefer "mostly but not always?" I think the archetypal examples of things I'm calling out is sexual harassment or abuse, so gender is unusually salient here.

I prefer voting based on value. The other two voting strategies strike me as uncooperative. If you only downvote when you think a score is too high / upvote when you think it's too low, then you're canceling out someone else's vote. And if you don't vote when you think a score is good, then you're causing someone else's vote to have zero counterfactual value (because you will upvote if and only if someone else doesn't upvote).

2
Vasco Grilo
1y
Thanks for commenting, Michael! I share your concerns, and historically I have been voting based solely on the 1st approach. However, I have recently started to think about the 2nd and 3rd, as I think neglectedness considerations should have some weight. If I see 2 posts which (to me) are roughly equally valuable, one has 20 karma, and the other has 200, it seems that upvoting the former is more pressing than upvoting the latter. It is true that votes are fungible in the 2nd and 3rd approaches. However, that also applies to donations to different charities.

Thanks for following up! Those sound like good changes.

Another thing you might do (if it's feasible) is list the studies you're using on something like Replication Markets.

1
Falk Lieder
1y
Thank you for your feedback, Michael, and thank you very much for making me aware of those specialized prediction platforms. I really like your suggestion. I think making predictions about the likely results of replication studies would be helpful for me. It would push me to critically examine and quantify how much confidence I should put in the studies my models rely on. Obtaining the predictions of other people would be a good way to make that assessment more objective. We could then incorporate the aggregate prediction into the model. Moreover, we could use prediction markets to obtain estimates or forecasts for quantities for which no published studies are available yet. I think it might be a good idea to incorporate those steps into our methodology. I will discuss that with our team today.

I would go so far as to say your interpretation is correct and the original text is wrong: it should read "hen-years", not "hens a year".

I'm not saying Sam didn't know he was on the record. I'm saying I, personally, don't understand when I should expect to be on or off the record, and you saying it's obvious doesn't make me understand. Saying "newsworthy" doesn't help because I don't always know what's newsworthy, and it's basically tautological anyway.

And Kelsey's tweets show that journalists don't even agree on what the rules are, namely, some believe it's ok to quote something that the interviewee says is off the record, and others (like Kelsey) say it's not. If they disagree about this,... (read more)

5
Greg_Colbourn
1y
If your journalist friends are good friends, maybe you could agree with them that all of your conversations are off the record by default, and they have to ask if they want to put anything on the record (and maybe even get that in writing just in case?). And then only remind them of this if you want to talk about something that readily comes to mind as being potentially sensitive/newsworthy.
3
nbouscal
1y
I don't know you personally so I can't say whether this applies to you specifically, but: the vast majority of people do not say newsworthy things to their friends basically ever. I really don't think it makes sense to feel anxious about this or change your behaviour based on a (former?) multi-billionaire's DMs getting published. Almost everyone who is in the reference class of "people who need to worry about this" is aware that they are in that reference class.

FWIW some people are acting like the social rules around on vs. off the record are obvious and Sam should have known, but the rules are not obvious to me, and this sort of thing makes me reluctant to talk to any friends who are journalists.

I sort of agree with you, but I also think that Sam had much more experience talking to journalists than either of us do and so it's more reasonable to say that he should have known how this works.

It takes about a minute of googling to find an article that reasonably accurately clarifies what is meant by "on the record", "background", and "off the record". The social rule is that when speaking to a journalist about anything remotely newsworthy (if unsure, assume it is), you're on the record unless you say you'd rather not be and the journalist explicitly agrees.

The rules aren't self-evident, they're just well-known among people who need to know them. People are acting like Sam should have known because he has been actively engaging with the press fo... (read more)

I agree with all three people:

  • Holden is right not to rush into funding FTX Foundation grantees
  • Jakob is right that there are now more shovel-ready projects for Open Phil to fund
  • James is right that Open Phil probably shouldn't fund all those projects, because the reduction in total funding demands a higher funding bar
8
Denkenberger
1y
I think this assumes that the funding rate was appropriate given the presence of FTX. However, if one believes that EA will continue to recruit/produce billionaires, then EA could justify continuing the 2022 spend rate (or even more) despite the implosion of FTX (and reduction in Open Phil resources).

I'm confused about this. This seems to explain why customers with invested assets can't withdraw immediately. But if a customer has only cash in their account, why can't they withdraw it if it's not being invested? If customer A's cash is being used to secure margin loans for customer B, then how is it true that customer A's cash is "not invested"?

4
Charles He
1y
Without trying to make an affirmative statement about what happened at FTX or saying there weren't any other factors, it seems likely that the idea "that customer funds solely belong to the customer and don't mix with other funds" is simplistic and effectively impossible in any leveraged trading system. In reality, what happens is governed by risk management/capital controls that would almost always blow up in a bank run scenario of the magnitude that happened to FTX.

For example, Robinhood, which no one believes was speculating on customer funds, had a huge crisis in Jan 2021 that needed billions of dollars. This was just due to customer leverage (and probably bad risk management; the magnitudes seem much smaller than what FTX faced this week). https://www.cnbc.com/2021/02/03/why-investors-were-willing-to-write-robinhood-a-3-billion-check-during-the-gamestop-chaos-.html

FWIW I would be hesitant to bet on this because I would lose the bet in worlds where the EA community has less money. Not to say it wouldn't be worth it at sufficiently good odds.

Could work. My intuition is that there would be no good way to integrate the list of summaries into the UI, and almost nobody would want to read the list of summaries. The work of writing summaries seems analogous to the work of editing Wikipedia, so perhaps the UI for viewing and writing summaries could be similar to Wikipedia talk pages.
