All of Jeff_Kaufman's Comments + Replies

Should Earners-to-Give Work at Startups Instead of Big Companies?

Now I'm just confused about what companies we're talking about! Stripe is 12 years old, has 4k people, and is worth ~$100B. Is it just that it is still private?

I would have counted it as a big company and not a startup in thinking about this post, but maybe that's not how the author intended it?

Natália Mendonça (19d): I thought Stripe would be in the reference class of startups (since it still raises money from VCs), until I read Michael Dickens's reply to this comment. I agree that it was a supremely bad example, though. The other companies I mentioned probably count? There are also a lot of smaller/newer companies that pay about as much as Google/Meta that I didn't mention in my first comment. They're mostly unicorns (though not all of them are), but I think they might be a substantial fraction of the set of companies people actively trying to work at startups might end up in -- they're large compared to the average startup, but the average startup is less likely to have the necessary infrastructure to absorb more people or recruit in a predictable way, and/or might hire exclusively from its personal networks.

I would not consider Stripe a startup for the purposes of this post.

Should Earners-to-Give Work at Startups Instead of Big Companies?

Say a startup offers you an equity package that's worth $X per year at the current valuation. At the same time, a big company offers you a salary that's $X higher than your salary would be at the startup. Both compensation packages have the same face value.

Your post is very detailed so I could have missed where you expand on this, but I think this initial assumption is off. Face value compensation at the top tech companies is generally much higher than what you would get at a startup. Have a look at

MichaelDickens (19d): At a glance, I don't see startup salaries on [the linked site]. In my experience, most startups offer worse face-value compensation than large tech companies, but a significant minority offer competitive compensation. I was able to get a (slightly) higher offer from a startup than from Google.
Natália Mendonça (19d): It's more complicated than that. Some top startups (e.g. Stripe, Airtable, Databricks, Scale AI, ByteDance, Benchling, and several others) pay at least as much as, or a lot more than, e.g. Google/Meta. Some of those (Stripe, Airtable, Scale AI) seem to offer new grads close to $100k more than Google does on average in the first year (counting signing bonuses, and assuming the valuation of all of those companies doesn't change). Also, [the same site]'s 2020 report showed that a lot of the top-paying companies were startups. But it is probably the case that startups with more room for growth pay much less.
Effective Altruism, Before the Memes Started

Aside: I'm confused by the "who does not have an account on the EA Forum. I'm posting this on his behalf, and will reply to comments with his replies if/when he responds to them." Anyone can make an account here, it's only a few steps, and it's very fast.

NicholasKross (2mo): Devin's response: “Yeah, I was wondering when that might come up. I have a general resistance to making extraneous accounts, especially if they are anything like social media accounts. I find it stressful and think I would over-obsessively check/use them in a way that would wind up being harmful. Even just having this post up and the ability to respond through Nick has occupied my attention and anxiety a good deal the last few days, or I might do more cross-posts/enable comments on our blog. That said, I did consider it. EA forum seems like it would not be so bad if I was going to have an account somewhere, and there’s still a decent chance that I will make one at some point. When I asked Nick about the issue, he said he already had an account and was very willing to post it for me (by the way, thanks again Nick!). I still considered making one because I thought it might seem weird if it was posted by him instead, but for better or worse I wound up taking him up on it.”
Issues with Giving Multiplier

Holden specifically put forward the claim that this kind of influence matching is a type of non-illusory matching. He even suggests the very concept that Giving Multiplier is doing

Holden wrote "the matcher makes a legitimate commitment to give only if others do, in an attempt to influence their giving".  That's not what's happening here: the matchers are donating regardless of whether others do.  Additionally, I'm quite pessimistic about people being able to make legitimate commitments in this regard, since predicting what you would otherwise do…

Issues with Giving Multiplier

Isn't (2) in conflict with (1)?  That the counterfactual donations are all coming from the donors and not the matchers is not what (I would predict) the donors believe.

Davidmanheim (2mo): I don't think that most donors who are looking at getting matching donations are particularly interested in thinking about / worried about counterfactual donations - but if they are, and bother to do minimal reading, the situation is very clear. (Note that they are doing counterfactual donation direction, since otherwise the money will not necessarily go to the organization they picked, which is what, in my experience, most non-EA people think they are doing when getting matched donations.)
Issues with Giving Multiplier

it should be pretty obvious that almost all donation matchings that are advertised as such are at least a bit fake

Obvious to who?  Both Giving Multiplier and (last week) GiveWell are targeting matches to people new to effective giving.  My guess is that if you interviewed people right after they donated in these matching campaigns and asked them whether they thought the match was real, they would say it was.

Also note that both GiveWell and Giving Multiplier are putting a lot of argument behind the position that their matches are real.  (If t…

Obvious to this forum, sorry, I should've been clearer. I think your post here laying out the issues is a good contribution. One way in which it's good is if donors to Giving Multiplier later come across this post and learn more about our culture.

What should "counterfactual donation" mean?

If you spend your personal luxuries budget in full every year, this sounds like #9, and I agree it's fine to call it counterfactual.

GiveWell Donation Matching

I’d love it if you crossposted that post


I think there’s another category before 9, which is “Donate to a charity not commonly supported by EAs, such as the World Wildlife Fund or Habitat for Humanity.”

Yes, I think that's fine as long as we all agree that the impact of donating to an AA charity is very much higher than donating to one of those charities.

GiveWell Donation Matching

In this case it's definitely counterfactual (it wouldn't have gone to a GiveWell charity)

I don't think that should count as counterfactual, actually. Even though the money would not have gone to a GiveWell charity, it would have done something similarly valuable, so the donor cannot reason that their impact is higher. Compare this to when an employer offers to match $X per person, and doesn't put any restrictions on what charity you donate to. In the latter case, this really is more impact, and should factor into decisions like "should I be earning to g…

lukefreeman (2mo): Ah, yes. In the case of something like "should I be earning to give" that is a very different situation. There are two uses of counterfactual here:

1. Is the total impact triggered by donor A, whose donation is being matched by donor B, counterfactual once you take into account what donor B would have done otherwise?
2. Were the actions of donor B counterfactually impacted by donor A (i.e. they would have given somewhere else but that might have been similarly impactful, or less impactful)?

In the case of #2 it is not misleading to donor A to say that their donation was matched IMHO. But it isn't the full story for impact.
JP Addison (2mo): (I’d love it if you crossposted that post, but commenting here until then.) I think there’s another category before 9, which is “Donate to a charity not commonly supported by EAs, such as the World Wildlife Fund or Habitat for Humanity.” So this allows for Giving Tuesday to count as counterfactual. I would hope GiveWell’s was of this type (though I sympathize with Luke’s points). Then we have another question, which is who are these people that are ~indifferent between any EA charity? They’re probably not the first-time donors that GiveWell’s targeting.
GiveWell Donation Matching

This is a coherent view, but I doubt it's how GiveWell is approaching it?  Specifically, I would be quite surprised if GiveWell chose to advertise a "true" match just with the goal of preventing criticism.  GiveWell has historically been comfortable with a pretty high level of transparency, and if they thought illusory matching was acceptable I would expect them to say so. Instead, they say the opposite: their post introducing their donation matching starts by describing their issues with conventional matching offers. 

Note that GiveWell is g…

lukefreeman (2mo): Yeah, very good point! My reading is that they are seeing the marketing value in matching but philosophically want to have a true match because of the reasons outlined in earlier posts (they don't consider those campaigns even as "matches"). The 'true match' attempt however might seem to be the worst of both worlds...

That being said, I imagine that the average donor towards their "true match" matching funds is actually quite like me: a donor that is seeking to spend specifically on outreach donations. In this case the decision might be between donating to the GiveWell matching pool or something else such as sponsoring a Giving Game, or covering credit card fees, or paying for pizza at an introduction event for students, or an advertising experiment, or a study on psychology of effective giving. In this case it's definitely counterfactual (it wouldn't have gone to a GiveWell charity) but it's not "worse" than they would have otherwise given (they could believe for good reason that incentivising the first donation is sufficiently leveraged that it is better than another outreach focused donation).

I can actually understand the psychology of the "true match" donor quite well: I would actually prefer that my donation be held for matching, used for marketing, or returned for me to use in a similar fashion than just go to one of their top charities. This isn't a typical donor, but it is one that I understand (intimately).
College and Earning to Give

Combine this with the destitute medicare strategy, and have them adopted by grandparents:

Concerns with ACE's Recent Behavior

making this claim


I'm confused: the bit you're quoting is asking a question, not making a claim.

willbradshaw (7mo): The embedded claim being objected to is that the group is "explicitly aligned with one side" (of this dispute).
College and Earning to Give

I haven't seen other resources that talk about the cost of college this way, but I also don't spend much time looking at financial planning advice?

The approach in this post is only relevant to a pretty small fraction of people:

  • Your children need to be likely enough to be admitted to the kind of institution that commits to meeting 100% of demonstrated financial need, or otherwise has a similar  "100% effective tax rate" that it's worth considering.
  • You need to not be very interested in saving money for your own future use.  The CSS Profile suggesti…
balex (7mo): Thanks for your response. I think it's slightly more general than you suggest, because the "tax" is so high. For example, if you're trying to decide whether to buy a larger house or invest the difference in a 529 plan, it could be a better idea to buy a larger house. I went to one of these schools, and my parents noted at the time that under the right circumstances, they might have saved money by buying a fancy car "instead of" saving for my college.

The Plough link is broken; it should be

Milan_Griffes (9mo): Thanks, should be fixed now.

I don't think this is actually a reasonable request to make here?

Milan_Griffes (9mo): Why do you think that it is an unreasonable request?
EA Relationship Status

What do you think the "for life" adds to the pledge if not "for the rest of your lives"?

MichaelA (1y): Backing up to clarify where I'm coming from: Again, a reasonable question. I don't think we disagree substantially. Also, again, I think my views are actually less driven by a perceived distinction between "for life" vs "till death do us part", and more driven by:

  • the idea that it seems ok to make promises even if there's some chance that unforeseen circumstances will make fulfilling them impossible/unwise - as long as the promise really was "taken seriously", and ideally the promise-receiver has the same understanding of how "binding" the promise is
  • having had many explicit conversations on these matters with my partner

Finally, I'd also guess that I'm far from alone in simultaneously (a) being aware that a large portion of marriages end in divorce, (b) being aware that many of those divorces probably began with the couple feeling very confident their marriage wouldn't end in divorce, and (c) having a wedding in which a phrase like "for life" or "till death do us part" was used. And I think it would be odd to see all such people as having behaved poorly by making a promise they may well not keep and know in advance they may not keep, at least if the partners had discussed their shared understanding of what they were promising. (I'm not necessarily saying you're saying we should see those people that way.)

One reason for this view is that people extremely often mean something other than the exact literal meaning of what they've said, and this seems ok in most contexts, as long as people mutually understand what's actually meant. (I think a reasonable argument can be made that marriages aren't among those "most contexts", given their unusually serious and legal nature. But it also seems worth noting that this is about what the celebrant said, not our vows or what we signed.)

Direct response, which is sort-of getting in the weeds on something I haven't really thought about in detail before, to be honest: One could likewise ask what "He spent hi…
EA Relationship Status

See the discussion here:

It doesn't account for very much of the data, unfortunately.

EA Relationship Status

"for life" sounds just as permanent to me as "till death do us part", if less morbid

MichaelA (1y): I think that's reasonable. Here's one example to illustrate what might be making my intuitions differ a bit; I feel like you could say "He has spent his life working to end malaria" when someone is alive and fairly young, and also that you could say "He spent his life working to end malaria" even if really he worked on that from 30-60 and then retired. (Whereas I don't think this is true if you explicitly say "He worked to end malaria till the day he died".) In a similar way, I have a weak sense we can "enter into a union for life" without this literally extending for 100% of the rest of our lives. But maybe my intuition is being driven more by it being a present-tense matter of us currently voluntarily entering into this union.

Analogously, I think people would usually feel it's reasonable for promises to not always be upheld if unusual and hard-to-foresee circumstances arose, the foreseeing of which would've made the promise-maker decide not to make the promise to begin with. (But this does get complicated if reference class forecasting suggests an e.g. 50% chance of some relevant circumstance arising, and it's just that any particular circumstance arising is hard to foresee, as it was in many of those 50% of cases.)

In any case, I guess I really think that whether and how partners explicitly discussed their respective understandings of their arrangement, in advance, probably matters more than the precise words the celebrant said.
Some thoughts on deference and inside-view models

Similarly to what you're saying about AI alignment being preparadigmatic, a major reason why trying to prove the Riemann hypothesis head-on would be a bad idea is that people have already been trying to do that for a long time without success. I expect the first people to consider the conjecture approached it directly, and were reasonable to do so.

Max_Daniel (2y): Yes, good points. I basically agree. I guess this could provide another argument in favor of Buck's original view, namely that the AI alignment problem is young and so worth attacking directly. (Though there are differences between attacking a problem directly and having an end-to-end story for how to solve it, which may be worth paying attention to.)

I think your view is also borne out by some examples from the history of maths. For example, the Weil conjectures were posed in 1949, and it took "only" a few decades to prove them. However, some of the key steps were known from the start, it just required a lot of work and innovation to complete them. And so I think it's fair to characterize the process as a relatively direct, and ultimately successful, attempt to solve a big problem. (Indeed, this is an example of the effect where the targeted pursuit of a specific problem led to a lot of foundational/theoretical innovation, which has much wider uses.)
Some thoughts on deference and inside-view models
I asked an AI safety researcher "Suppose your research project went as well as it could possibly go; how would it make it easier to align powerful AI systems?", and they said that they hadn't really thought about that. I think that this makes your work less useful.

This seems like a deeper disagreement than you're describing. A lot of research in academia (ex: much of math) involves playing with ideas that seem poorly understood, trying to figure out what's going on. It's not really goal directed, especially not the kind of goa…

This comment is a general reply to this whole thread.

Some clarifications:

  • I don't think that we should require that people working in AI safety have arguments for their research which are persuasive to anyone else. I'm saying I think they should have arguments which are persuasive to them.
  • I think that good plans involve doing things like playing around with ideas that excite you, and learning subjects which are only plausibly related if you have a hunch it could be helpful; I do these things a lot myself.
  • I think there's a distinction between…
Max_Daniel (2y): I agree, and also immediately thought of pure mathematics as a counterexample. E.g., if one's most important goal was to prove the Riemann hypothesis, then I claim (based on my personal experience of doing maths, though e.g. Terence Tao seems to agree) that it'd be a very bad strategy to only do things where one has an end-to-end story for how they might contribute to a proof of the Riemann hypothesis. This is true especially if one is junior, but I claim it would be true even for a hypothetical person eventually proving the Riemann hypothesis, except maybe in some of the very last stages of them actually figuring out the proof.

I think the history of maths also provides some suggestive examples of the dangers of requiring end-to-end stories. E.g., consider some famous open questions in Ancient mathematics that were phrased in the language of geometric constructions with ruler and compass, such as whether it's possible to 'square the circle'. It was solved 2,000 years after it was posed using modern number theory. But if you had insisted that everyone working on it had an end-to-end story for how what they're doing contributes to solving that problem, I think there would have been a real risk that people continue thinking purely in ruler-and-compass terms and we never develop modern number theory in the first place.

The Planners vs. Hayekians distinction seems related. The way I'm understanding Buck is that he thinks that, at least within AI alignment, a Planning strategy is superior to a Hayekian one (i.e. roughly one based on optimizing robust heuristics rather than an end-to-end story).

One of the strongest defenses of Buck's original claim I ca…
How should longtermists think about eating meat?

"With this framework, we can propose a clearer answer to the moral offsetting problem: you can offset axiology, but not morality."

How should longtermists think about eating meat?

I wonder how much we can trust people's given reasons for having been veg? For example, say people sometimes go veg both for health reasons and because they also care about animals. I could imagine something where if you asked them while they were still veg they would say "mostly because I care about animals" but then if you ask them after you get more "I was doing it for health reasons" because talking about how you used to do it for the animals makes you sound selfish?

How should longtermists think about eating meat?

[The linked study] has "84% of vegetarians/vegans abandon their diet", which matches my experience and I think is an indication that it's pretty far from costless?

MichaelStJules (2y): FWIW, the rate was ~50% for vegans who were motivated by animal protection, and ~70% for vegetarians (including vegans) who were motivated by animal protection, based on table 17 on p.18 here. For vegans who were motivated by animal protection, here's the recidivism rate calculation:

0.27×129 / (0.62×53 + 0.27×127) ≈ 0.52 ≈ 50%

The recidivism rate was about 84% for vegetarians motivated by health, who made up more than half, and 86.6% for vegetarians not motivated by animal protection. Actually, only 27% of former vegetarians and 27% of former vegans were motivated by animal protection, even though those motivated by animal protection make up 70% and 62% of current vegetarians and current vegans, respectively. Also see Tables 9 and 10.

I don't think it's surprising that people who go veg*n other than for animals go back to eating meat. It could be evidence of some cost, but it could also mainly be evidence that most people who go veg*n do so for reasons they eventually no longer found compelling, so even small costs would have been enough to bring them back to eating meat. They also go over difficulties people had with their diets in that study, too, though.
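The quoted recidivism calculation can be sanity-checked with a few lines of Python (a sketch; the 0.27/0.62 shares and the 129/53/127 counts are taken from the comment as written, and my reading of which figures are former-vegan vs current-vegan counts is an assumption):

```python
# Sanity check of the recidivism calculation quoted above.
# Shares of vegans motivated by animal protection (from the comment):
former_share = 0.27   # among former vegans
current_share = 0.62  # among current vegans

# Counts as they appear in the comment (my interpretation of the table):
former_count = 129        # the comment's numerator count
current_count = 53        # current vegans motivated by animal protection
former_count_denom = 127  # the comment's denominator uses 127

rate = (former_share * former_count) / (
    current_share * current_count + former_share * former_count_denom
)
print(round(rate, 2))  # → 0.52, i.e. roughly a 50% recidivism rate
```

Note the comment uses 129 in the numerator but 127 in the denominator; plugging in either count throughout gives roughly 0.51-0.52, consistent with the ~50% figure.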
Rupert (2y): Do you have thoughts on what would account for the variance in degree of dislike of the diet, then?
How should longtermists think about eating meat?

> a lot of the long-term vegans that I know

It sounds like you may have a sampling bias, where you're missing out on all the people who disliked being vegan enough to stop? [The linked study] has "84% of vegetarians/vegans abandon their diet", which matches my experience and I think is an indication that it's pretty far from costless?

Why I'm Not Vegan
However, even if I were to get more than $10 of enjoyment out of punching that person, I don't think it's right that I'm morally permitted to do so.

I don't think you would be morally permitted to either, because I think [the linked post] is right and you can offset axiology, but not morality.

I have an intuition that this is more of the disagreement between you and vegans (as opposed to having different moral weights). My guess is that one could literally prevent three chicken-years for less than $500/year?[1] And also that some vegans' personal happiness is more affected by not eating chickens than donating $500.

If that's true, then the reason vegans are vegan instead of donating is because they view it as "morality" as opposed to "axiology".

This accords with my intuition: having someone tell me they care about nonhuman animals while eating a…
Why I'm Not Vegan
I feel confused about why the surveys of how the general public view animals are being cited as evidence in favor of casual estimations of animals' moral worth in these discussions

Let's say I'm trying to convince someone that they shouldn't donate to animal charities or malaria net distribution, but instead they should be trying to prevent existential risk. I bring up how many people there could potentially be in the future ("astronomical stakes") as a reason for why they should care a lot about those people getting a chance…

Why I'm Not Vegan

While I think moral trades are interesting, I don't know why you would expect me to see $4.30 going to an existential risk charity to be enough for it to be worth me going vegetarian for a year over? I'd much rather donate $4.30 myself and not change my diet.

I think you're conflating "Jeff sees $0.43/y to a good charity as being clearly better than averting the animal suffering due to omnivorous eating" and "Jeff only selfishly values eating animal products at $0.43/y"?

Khorton (2y): If anyone's genuinely interested in this, I'll switch my diet from eating organic meat ~5x a week to completely vegan in exchange for a donation to the Against Malaria Foundation. £10 per week. (I think that's a bad deal for everyone except AMF - there are way better things you can invest in if you care about animal welfare - but I would genuinely do it!)
Leaving Things For Others
why can't I do both the individual action and the institutional part?

Both avoiding delivery and calling stores to encourage prioritization are ways of turning time into a better world. Yes, you can do your own shopping and call your own grocery store, but you have further options. Do you call other stores you go to less frequently and make similar encouragements? Do you call stores in other areas? Do you sign up as an Instacart shopper so there will be more delivery spots available? You write that you can act on both fronts, but if you start thin…

[anonymous] (2y): I think in many cases it makes sense to use the prioritization you describe, but I have two concerns about it:

1) I think it's possible that with collective action problems, it's really easy to miscalculate the potential effects your choice (and talking about your choice) has on the behavior of others, and therefore harder to estimate the true good the individual action produces (and the harm that explicitly discouraging mildly good but ineffective actions might cause).

2) I think it's likely that "how much of a sacrifice" something is varies a lot, and could depend on how many other people are doing the thing and how your community views the thing. So it might be worthwhile to have a community that encourages doing inconvenient things, because that makes it easier to do good things that are inconvenient (ultimately making them less inconvenient).

Finally, I'm also not sure I agree that all things can be directly converted into "time spent" and then directly compared. Yes, if I have a specific amount of time I spend on social media, where I can either advocate for policy change or individual action, I should use that time for policy change. But some kinds of time use are inelastic or not-exchangeable at a certain point, and one-off uses of mental time for deciding how to spend that inelastic time in a positive way doesn't seem wasteful to me. So I think it's better to be more nuanced than just saying "everything takes time and so everything is a trade-off" and instead evaluate which things genuinely trade off time with each other.
Why I'm Not Vegan

The post doesn't depend on it, because the post is all conditional on animals mattering a nonzero amount ("to be safe I'll assume they do [matter]").

Why I'm Not Vegan

My post describes a model for thinking about when it makes sense to be vegan, and how I apply it in my case. My specific numbers are much less useful to other people, and I'm not claiming that I've found the one true best estimate. Ways the post can be useful include (a) discussion over whether this is a good model to be using and (b) discussion over how people think about these sort of relative numbers.

I included the "I think there's a very large chance they don't matter at all, and that there's just no one inside to suffer…

HenryStanley (2y): I don't see how that can be true. Surely the weightings you give would be radically different if you thought there was "someone inside to suffer"?
abrahamrowe (2y): I appreciate your thoughtful response to my post, and think I unintentionally came across harshly. I think you and I likely disagree on how much to weight the moral worth of animals, and what that entails about what we ought to do. But my discomfort with this post (I hope, though of course I have subconscious biases) is specifically with the non-clarified statements about comparative moral worth between humans and other species.

I made my comment to clarify that the reason I voted this down is that I think it is a very bad community standard to blanket-accept statements of the sort "I think that these folk X are worth less than these other folk Y" (not a direct quote from you obviously) without stating precisely why one believes that or justifying that claim. That genuinely feels like a dangerous precedent to have, and without context, ought to be viewed with a lot of skepticism. Likewise, if I made an argument where I assumed but did not defend the claim that people different than me are worth 1/10th people like me, you likely ought to downvote it, regardless of the value of the model I might be presenting for thinking about an issue.

One small side note - I feel confused about why the surveys of how the general public view animals are being cited as evidence in favor of casual estimations of animals' moral worth in these discussions. Most members of the public, myself included, aren't experts in either moral philosophy or animal sentience. And, we also know that most members of the public don't view veganism as worthwhile to do. Using this data as evidence that animals have less moral worth strikes me as doing something analogous to saying "most people who care more about their families than others, when surveyed, seem to believe that people outside their families are worth less morally. On those grounds, I ought to think that people outside my family are worth less morally."

This kind of survey provides information on what people think about animals, but in…
Why I'm Not Vegan

Sorry, I forgot this would be crossposted here automatically and this version was (until just now) missing an edit I made just after publishing: "how many animals" should have been "how many continuously living animals". Since animal lives on factory farms are net negative, and their ongoing suffering is a far bigger factor than their deaths, I don't care about the number of individual animals but instead how many animal-days. So I wouldn't see breeding pigs that produced twice as much meat and lived twice as long as an impr…

MichaelStJules (2y): Ok, makes sense. Thanks for the clarification! No, that seems right.
New research on moral weights

Really excited to see this published. This is something I've heard people speculate about a lot over the years ("are people in places with higher child mortality more accepting of it, because it's more normal, and so are we overweighting deaths?") and it's helpful to see what the people we're trying to help actually value.

(And that's on top of us not being able to survey the children!)

Milan_Griffes (2y): +1, good to see empirical work on this
Updates from Leverage Research: history, mistakes and new focus

Thoughts, now that I've read it:

  • This sort of thing, where you try things until you figure out what's going on, starting from a place of pretty minimal knowledge, feels very familiar to me. I think a lot of my hobby projects have worked this way, partly because I often find it more fun to try things than to try to find out what people already know about them. This comment thread, trying to understand what frequencies forked brass instruments make, is an example that came to mind several times reading the post.
  • Not exactly the same, but this also
... (read more)
4mmcteigue2y“One question that comes to mind is whether there is still early stage science today. Maybe the patterns that you're seeing are all about what happens if you're very early in the development of science in general, but now you only get those patterns when people are playing around (like I am above)? So I'd be interested in the most recent cases you can find that you'd consider to be early-stage.” This is also a great question. It is totally possible that early stage science occurred only in the past, and science as a whole has developed past it. We talked to a number of people in our network to try to gather plausible alternatives to our hypothesis about early stage science, and this is one of the most common ones we found. I’m currently thinking of this as one of the possible views we’ll need to argue against or refute for our original hypothesis to hold, as opposed to a perspective we’ve already solidly eliminated. On recent past cases: If you go back a bit, there are lots of plausible early stage science success cases in the late 1800s and early 1900s. The study of radiation is a potential case in this period with some possible indicators of early stage methods. This period is arguably not recent enough to refute the “science as a whole has moved past early stage exploration” hypothesis, so I want to seek out more recent examples in addition to studying these. To get a better answer here, I’ll want us to look more specifically at the window between 1940 and 2000, which we haven’t looked at much so far - I expect it will be our best shot at finding early stage discoveries that have already been verified and accepted, while still being recent. On current cases: Finding current cases or cases in the more recent past is trickier. For refuting the hypothesis you laid out, we’d be most interested in finding recent applications of early stage methods that produced successful discoveries. 
Unfortunately, it can be hard to identify these cases, because when the early
5mmcteigue2yHi Jeff, Thanks for your comment :) I totally agree with Larissa's response, and also really liked your example about instrument building. I've been working with Kerry at Leverage to set-up the early stage science research program by doing background reading on the academic literature in history and philosophy of science, so I'm going to follow-up on Larissa's comment to respond to the question raised in your 3rd bullet point (and the 4th bullet in a later comment, likely tomorrow). Large hedge: this entire comment reflects my current views - I expect my thinking to change as we keep doing research and become more familiar with the literature. “How controversial is the idea that early stage science works pretty differently from more established explorations, and that you need pretty different approaches and skills? I don't know that much history/philosophy of science but I'm having trouble telling from the paper which of the hypotheses in section 4 are ones that you expect people to already agree with, vs ones that you think you're going to need to demonstrate?” This is a great question. The history and philosophy of science literature is fairly large and complicated, and we’ve just begun looking at it, but here is my take so far. I think it’s somewhat controversial that early stage science works pretty differently from more established explorations and that you need pretty different approaches and skills. It’s also a bit hard to measure, because our claims slightly cross-cut the academic debate. Summary of our position To make the discussion a little simpler, I’ve distilled down our hypothesis and starting assumptions to four claims to compare to positions in the surrounding literature[1]. Here are the claims: (1) You can learn[2] about scientific development and methodology via looking at history. (2) Scientific development in a domain tends to go through phases, with distinct methodologies and patterns of researcher behavior. (3) There is an early p
5LarissaHeskethRowe2yHi Jeff, Thanks for taking the time to check out the paper and for sending us your thoughts. I really like the examples of building new instruments and figuring out how that works versus creating something that’s a refinement of an existing instrument. I think these seem very illustrative of early stage science. My guess is that the process you were using to work out how your forked brass [] works, feels similar to how it might feel to be conducting early stage science. One thing that stood out to me was that someone else trying to replicate the instrument found, if I understood this correctly, they could only do so with much longer tubes. That person then theorised that perhaps the mouth and diaphragm of the person playing the instrument have an effect. This is reminiscent of the problems with Galileo’s telescope and the difference in people’s eyesight. Another thought this example gave me is how video can play a big part in today’s early stage science, in the same way, that demonstrations did in the past. It’s much easier to demonstrate to a wide audience that you really can make the sounds you claim with the instrument you’re describing if they can watch a video of it. If all people had was a description of what you had built, but they couldn’t create the same sound on a replica instrument, they might have been more sceptical. Being able to replicate the experiment will matter more in areas where the claims made are further outside of people’s current expectations. “I can play these notes with this instrument” is probably less unexpected than “Jupiter has satellites we hadn't seen before and I can see them with this new contraption”. This is outside of the scope of our research, it’s just a thought prompted by the example. I’ve asked my colleagues to provide an answer to your questions about how controversial the claim that early stage science works differently is and whether it seems likely that there would still be
Updates from Leverage Research: history, mistakes and new focus

Looking over the website I noticed Studying Early Stage Science under "Recent Research". I haven't read it yet, but will!

Updates from Leverage Research: history, mistakes and new focus

Thanks for writing this! I'm really glad Leverage has decided to start sharing more.

3LarissaHeskethRowe2yThanks Jeff :-) I hope it’s helpful.
Long-term Donation Bunching?

I wonder whether it would be worth building some standard terms for this and trying to make it a thing?

1Cullen_OKeefe2yYes, it might be. Feel free to sync offline if you want to investigate this.
Candy for Nets

Thanks! Though like all my blog posts it's already public on my website:

Long-term Donation Bunching?

If ~50% of people drift away over five years it's hard to say how many do over 2-3, but it should be at least 25%-35% [1]. You need pretty large tax savings to risk a chance that large of actually donating nothing.

[1] 13%/year for five years gives you 50%, and I think I'd expect the rate of attrition to slowly decrease over time? 25% for two years and 35% for three is assuming it's linear.
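The footnote's compounding can be sketched as a quick check (the 13%/year attrition rate is the comment's back-of-the-envelope assumption, not measured data):

```python
# Sketch of the footnote's arithmetic: a constant annual attrition
# rate compounds to a cumulative probability of having drifted away.

def cumulative_attrition(annual_rate: float, years: int) -> float:
    """Probability of having drifted away after `years` years."""
    return 1 - (1 - annual_rate) ** years

rate = 0.13  # annual rate consistent with ~50% attrition over five years

for y in [2, 3, 5]:
    print(f"{y} years: {cumulative_attrition(rate, y):.0%}")
# → 2 years: 24%, 3 years: 34%, 5 years: 50%
```

So the 25%-35% range for 2-3 years holds up under compounding as well, not just the linear approximation.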

7dmolling2yIt seems to depend a lot on what it means for someone to no longer be involved in the EA movement. The relevant alternative in my mind isn't donating nothing. Speaking for myself I can certainly imagine not being involved in the EA movement in 2-3 years. It's a lot harder to imagine myself raiding a dedicated bank account I had set aside for donations. That doesn't mean it's not possible, but if I use the population estimates of not being in the EA movement in 2-3 years of 25-30% (which seems reasonable) as my own risk, I'd estimate the risk of raiding a dedicated bank account for charitable donations as a fraction of that - maybe 5%. In that case it could be worth it. Maybe I'll think a little harder about that point if I decide to do it. I 100% agree with the principle behind your post. The future, including your future identity, is so uncertain that for almost everyone the best time to donate and form altruistic habits is right now. I would only "bunch" if I could do so in a way that allows keeping those good habits.
How to Make Billions of Dollars Reducing Loneliness
Facebook and Google have an incentive to track their users because they sell targeted advertising.

Even without ads they would have a very strong reason for tracking: trying to make the product better. Things you do when using Facebook are all fed into a model trying to predict what you like to interact with, so they can prioritize among the enormous number of things they could be showing you.

If physics is many-worlds, does ethics matter?
For every decision I've made, there's a version where the other choice was made.

Is that actually something the many-worlds view implies? It seems like you're conflating "made a choice" with "quantum split"?

(I don't know any of the relevant physics.)

2Milan_Griffes2yI think so? (I'm also lacking the relevant physics.) From the explainer [] I linked to:
EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge?

One group I'm especially interested in is people who were active in EA, took the GWWC pledge, and then drifted away (eg). This is a group that likely mostly didn't take the EA Survey. I would expect that after accounting for this the actual fraction of people current on their pledges would be *much* lower.

Since we don't know the fraction of people keeping their pledge to even the nearest 10%, the survey I would find most useful would be a smallish random sample. Pick 25 GWWC members at random, and follow up with them. Write personalized ... (read more)

There's Lots More To Do

Other people being misled is how I read "Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits. And try to fix the underlying systems problems that got you so confused in the first place."

There's Lots More To Do

I don't think the post is correct in concluding that the current marginal cost-per-life-saved estimates are wrong. Annual malaria deaths are around 450k, and if you gave the Against Malaria Foundation $5k * 450k ($2.3B) they would not be able to make sure no one died from malaria in 2020, but that still wouldn't be much evidence that $5k was too low an estimate for the marginal cost. It just means that AMF would have lots of difficulty scaling up so much, that some deaths can't be prevented by distributing nets, that some places are harder to... (read more)
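The arithmetic behind the $2.3B figure is easy to check (both inputs are the comment's rough figures, not precise data):

```python
# Rough figures from the comment: ~450k annual malaria deaths and a
# $5k marginal cost-per-life-saved estimate.
deaths_per_year = 450_000
cost_per_life = 5_000

total = deaths_per_year * cost_per_life
print(f"${total / 1e9:.2f}B")  # → $2.25B, i.e. roughly $2.3B
```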

There's Lots More To Do

I agree the distribution would be interesting! But it depends on how many such opportunities there might be, no? What about:

"Imagine that over time the low hanging fruit is picked and further opportunities for charitable giving get progressively more expensive in terms of cost per life saved equivalents (CPLSE). At what CPLSE, in dollars, would you no longer donate?"

Do you mean the number of opportunities in the future, or the ability to donate larger amounts of money right now? We could do:

What is the most dollars you would be willing to donate in order to save the life of a randomly chosen human? Assume this is the only opportunity you'll ever get to save a life by donating - all other money you have must be spent on yourself and your family.

and also the endowment effect reversal:

If offered a choice between saving a random stranger's life and an amount of money, what is the smallest number of dollars you w... (read more)

Why does EA use QALYs instead of experience sampling?

I tried experience sampling myself for about a year and a half (intro, conclusion) and it made me much more skeptical of the system. I'm just not that sure how happy I am at any given point:

When I first started rating my happiness on a 1-10 scale I didn't feel like I was very good at it. At the time I thought I might get better with practice, but I think I'm actually getting worse at it. Instead of really thinking "how do I feel right now?" it's really hard not to just think "in past situations like this I've put do
... (read more)
4Milan_Griffes3yInteresting. I've done experience-sampling to track my mood [] for about 3 years, and haven't noticed this dynamic. (It generally feels like I'm answering the question "how do I feel right now?") Just another data point. This is a great point. I don't have children & this hasn't been a problem for me. Totally makes sense that this comes up once you're a parent. My intuition is that aggregating the results of the two methods would outperform either method individually, because they skew in different directions.
Value of Working in Ads?
I think the internet shouldn't run on ads. Making people pay for content ensures that the internet is providing real value rather than just clickbaiting

Before the internet you still had tabloids with shocking claims on the cover that, after you bought the paper and read it you realized the claims were overblown. If we moved away from ads the specific case of "you pay, and afterwards you realize you were baited" would still exist.

the dependence on advertising creates controversies where corporations compel content hosts to engage in dubious
... (read more)
Salary Negotiation for Earning to Give

I've helped a few people negotiate salaries at tech companies, and my experience has been that people always bring me in too late. You want to have multiple active offers at the same time so you can get them to bid against each other. For example, when I came back to Google I did:

  • Google made me an offer
  • Facebook beat Google's offer
  • Amazon declined to match either offer
  • Google beats Facebook's offer
  • Facebook beats Google's offer
  • Google matches Facebook's offer

The ideal for you is lots of back and forth, which is the opposite of what they wa... (read more)

4Halffull3yYou can often get the timing to work late in the game by stalling the company that gave you the offer, and telling other companies that you already have an offer so you need an accelerated process.

These steps, to my knowledge, are completely unprecedented for CEA.

I think CEA may have done something similar with Gleb, though for very different reasons:

-2pton3yThere is now more information which suggests the allegations were not as severe as commenters here believed. Namely an investigative journalist [] says the allegations amounted to “insistent or clumsy flirting." The single example of which we know details involved Jacy calling someone "cutie," to which the discussant said they were not interested, and he responded that he was in a "polyamorous/open" relationship, which the discussant interpreted as continued flirting, but he says was only meant "to clarify that he was not cheating or intending to cheat on his partner."
These steps, to my knowledge, are completely unprecedented for CEA.

I do know that CEA does not talk publicly about cases like this, so I don't think we would know whether there are other cases like this. I know of at least one case that has not been at all publicly discussed in which someone was banned from all EA events, and practically banned from ever having any kind of leadership position again.

After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation

(Peter has been one of several people continuing to argue "earning to give is undervalued, most orgs could still do useful things with more funding".)
