All of Eric Neyman's Comments + Replies

(This comment is mostly cross-posted from Nuño's blog.)

In "Unflattering aspects of Effective Altruism", you write:

Third, I feel that EA leadership uses worries about the dangers of maximization to constrain the rank and file in a hypocritical way. If I want to do something cool and risky on my own, I have to beware of the “unilateralist curse” and “build consensus”. But if Open Philanthropy donates $30M to OpenAI, pulls a not-so-well-understood policy advocacy lever that contributed to the US overshooting inflation in 2021, funds Anthropic while Anthr

... (read more)
2
NunoSempere
1mo
[Answered over on my blog]

Thanks for asking! The first thing I want to say is that I got lucky in the following respect. The set of possible outcomes isn't the interior of the ellipse I drew; rather, it is a bunch of points that are drawn at random from a distribution, and when you plot that cloud of points, it looks like an ellipse. The way I got lucky is: one of the draws from this distribution happened to be in the top-right corner. That draw is working at ARC theory, which has just about the most intellectually interesting work in the world (for my interests) and is also just a... (read more)

Thanks -- I should have been a bit more careful with my words when I wrote that "measurement noise likely follows a distribution with fatter tails than a log-normal distribution". The distribution I'm describing is your subjective uncertainty over the standard error of your experimental results. That is, you're (perhaps reasonably) modeling your measurement as being the true quality plus some normally distributed noise. But -- normal with what standard deviation? There's an objectively right answer that you'd know if you were omniscient, but you don't, so ... (read more)
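
Here's a minimal sketch of the point (the log-normal prior over the noise SD is just an illustrative assumption, not from my comment): once you're uncertain about the standard deviation, the marginal noise distribution is a scale mixture of normals, which has fatter tails than any single fixed-SD normal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Fixed-sigma case: you know the noise SD exactly.
fixed = rng.normal(0.0, 1.0, n)

# Uncertain-sigma case: draw the SD from a prior (log-normal here, as an
# illustrative assumption), then draw noise ~ N(0, sigma).
sigma = rng.lognormal(mean=0.0, sigma=0.5, size=n)
mixed = rng.normal(0.0, 1.0, n) * sigma

def excess_kurtosis(x):
    # 0 for a normal distribution; positive means fatter tails.
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

print(excess_kurtosis(fixed))  # ~0
print(excess_kurtosis(mixed))  # clearly positive: fatter tails than any normal
```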

In general I think it's not crazy to guess that the standard error of your measurement is proportional to the size of the effect you're trying to measure

Take a hierarchical model for effects. Each intervention has a true effect $\tau_i$, and all the $\tau_i$ are drawn from a common distribution $T$. Now for each intervention, we run an RCT and estimate $\hat{\tau}_i = \tau_i + \epsilon_i$, where $\epsilon_i$ is experimental noise.

By the CLT, $\epsilon_i \sim \mathcal{N}(0, \sigma_i^2/n_i)$, where $\sigma_i^2$ is the inherent sampling variance in your environment and $n_i$ is the sample size of your RCT. What you're saying is that $\sigma_i$ has the same o... (read more)
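
Here's a quick simulation of this setup (all distributions and numbers are illustrative assumptions, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 1000, 100  # number of interventions and RCT sample size (assumed)

# True effects tau_i drawn from a common distribution T (log-normal, as an assumption).
tau = rng.lognormal(0.0, 1.0, k)

# Noise SD proportional to the effect size (the "same order" claim, with
# proportionality constant 1), shrunk by sqrt(n) per the CLT.
sigma = tau
tau_hat = tau + rng.normal(0.0, 1.0, k) * sigma / np.sqrt(n)

# The intervention that *measures* best tends to overstate its true effect.
best = np.argmax(tau_hat)
print(f"best measured: {tau_hat[best]:.2f}, its true effect: {tau[best]:.2f}")
```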

Let's take the very first scatter plot. Consider the following alternative way of labeling the x and y axes. The y-axis is now the quality of a health intervention, and it consists of two components: short-term effects and long-term effects. You do a really thorough study that perfectly measures the short-term effects, while the long-term effects remain unknown to you. The x-value is what you measured (the short-term effects); the actual quality of the intervention is the x-value plus some unknown, mean-zero, variance-1 number.

So whereas previously (i.e. in... (read more)

2
Davidmanheim
1y
Yes - though I think this is just an elaboration of what Abram wrote here.

Great question -- you absolutely need to take that into account! You can only bargain with people who you expect to uphold the bargain. This probably means that when you're bargaining, you should weight "you in other worlds" in proportion to how likely they are to uphold the bargain. This seems really hard to think about and probably ties in with a bunch of complicated questions around decision theory.

This is probably my favorite proposal I've seen so far, thanks!

I'm a little skeptical that warnings from the organization you propose would have been heeded (especially by people who don't have other sources of funding and so relying on FTX was their only option), but perhaps if the organization had sufficient clout, this would have put pressure on FTX to engage in less risky business practices.

8
Sam Elder
1y
I don't have much hope that the charity side of things could have influenced FTX to be less risky -- from what I can tell, a high tolerance for risk was core to their business practices. I just think it could have given EA folks who aren't crypto-savvy a lot more sobriety about FTX's relationship to EA and made them consider the potential downsides of taking FTX funding. It also would have helped in the media/reputation fallout if the donor evaluator I have in mind had clearly labeled FTX as risky or as having withheld information. Independent of this particular case, I also think such a donor catalog and evaluation system would benefit the community, as a sort of one-stop shop for potential grantees to learn about their options for seeking funding.

I think this fails (1), but more confidently, I'm pretty sure it fails (2). How are you going to keep individuals from taking crypto money? See also: https://forum.effectivealtruism.org/posts/Pz7RdMRouZ5N5w5eE/ea-should-taboo-ea-should

2
titotal
1y
If I said, "EA should have had a policy to not be involved with or associate with the weapons industry", would you have the same objection? (Not saying crypto is as bad, obviously, just that some form of divestment is obviously possible.) FTX was heavily involved in the core of EA, and nothing was done to discourage them from tying themselves to EA at every turn. Do you really think the reputational fallout would have been as great if SBF had been a mere anonymous donor?

I think my crux with this argument is "actions are taken by individuals". This is true, strictly speaking; but when e.g. a member of U.S. Congress votes on a bill, they're taking an action on behalf of their constituents, and affecting the whole U.S. (and often world) population. I like to ground morality in questions of a political philosophy flavor, such as: "What is the algorithm that we would like legislators to use to decide which legislation to support?". And as I see it, there's no way around answering questions like this one, when decisions have si... (read more)

2
jasoncrawford
2y
I would like them to use an algorithm that is not based on some sort of global calculation about future world-states. That leads to parentalism in government and social engineering. Instead, I would like the algorithm to be based on something like protecting rights and preventing people from directly harming each other. Then, within that framework, people have the freedom to improve their own lives and their own world.

Re the China/US scenario: this does seem implausible; why would the US AI prevent almost all future progress, forever? Setting that aside, though, if this scenario did happen, it would be a very tough call. However, I wouldn't make it on the basis of counting people and adding up happiness. I would make it on the basis of something like the value of progress vs. the value of survival.

Abortion policy is a good example. I don't see how you can decide this on the basis of counting people. What matters here is the wishes of the parents, the rights of the mother, and your view on whether the fetus has rights.

Does anyone have an estimate of how many dollars donated to the campaign are about equal in value to one hour spent phonebanking? Thanks!

1
Caro
2y
It's quite hard to know, and I don't know what the campaign team thinks about it. There is a good article on Vox about the evidence base for these things: "Gerber and Green’s rough estimate is that canvassing can garner campaigns a vote for about $33, while volunteer phone-banking can garner a vote for $36 — not too different, especially when you consider how imprecise these estimates necessarily are." Not exactly what you asked, but it can give you a sense of direction.

I guess I have two reactions. First, which of the categories are you putting me in? My guess is you want to label me as a mop, but "contribute as little as they reasonably can in exchange" seems an inaccurate description of someone who's strongly considering devoting their career to an EA cause; also, I really enjoy talking about the weird "new things" that come up (like, idk, actual trade between universes during the long reflection).

My second thought is that while your story about social gradients is a plausible one, I have a more straightforward story ab... (read more)

7
Linch
2y
I think an interesting related question is how much our social (and other incentive) gradients should prioritize people whose talents or dispositions naturally predispose them to relevant EA work, versus people who are not naturally inclined toward it but are morally compelled to "do what needs to be done." In one sense it feels more morally praiseworthy for people to be willing to do hard work. But in another sense, it's (probably?) easier to recruit people for whom the sacrifices associated with doing EA work are lower, and for a lot of current longtermist work (especially in research), having a natural inclination/aptitude/interest probably makes you a lot better at the work than grim determination. I'm curious how true this is.
1
NegativeNuno
2y
I don't think this is an important question; it's not like "tall people" and "short people" are distinct clusters. There is going to be a spectrum, and you would be somewhere in the middle. But using labels is still a convenient shorthand.

So the thing that worries me is that if someone is optimizing for something different, they might reward other people for doing the same thing. The case that has been on my mind recently is where someone is a respected member of the community, but what they are doing is not optimal, and it would be awkward to point that out -- but still necessary, even if it loses one brownie points socially. Overall, I don't really read minds, and I don't know what you would or wouldn't do.
4
NegativeNuno
2y
I think this would work if one actually did it, but not if impact is distributed with long tails (e.g., a power law) and people take offense at being accepted very little.

I may have misinterpreted what exactly the concept-shaped hole was. I still think I'm right about them having been surprised, though.

If it helps clarify, the community builders I'm talking about are some of the Berkeley(-adjacent) longtermist ones. As some sort of signal that I'm not overstating my case here, one messaged me to say that my post helped them plug a "concept-shaped hole", a la https://slatestarcodex.com/2017/11/07/concept-shaped-holes-can-be-impossible-to-notice/

[This comment is no longer endorsed by its author]

Great comment, I think that's right.

I know that "give your other values an extremely high weight compared with impact" is an accurate description of how I behave in practice. I'm kind of tempted to bite that same bullet when it comes to my extrapolated volition -- but again, this would definitely be biting a bullet that doesn't taste very good (do I really endorse caring about the log of my impact?). I should think more about this, thanks!

Yup -- that would be the limiting case of an ellipse tilted the other way!

The idea for the ellipse is that what EA values is correlated (but not perfectly) with my utility function, so (under certain modeling assumptions) the space of most likely career outcomes is an ellipse, see e.g. here.
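
To make the modeling assumption concrete, here's a minimal sketch (the correlation of 0.6 and the corner cutoff are illustrative numbers I'm making up, not anything from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated traits of a career outcome:
# x = how much EA values it, y = my own utility (rho = 0.6 is an assumption).
cov = [[1.0, 0.6], [0.6, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=5000).T

# Plotted, this cloud of draws fills out a tilted ellipse; "getting lucky"
# is one draw landing in the top-right corner (great on both axes).
lucky = np.mean((x > 2) & (y > 2))
print(f"fraction of draws in the top-right corner: {lucky:.3%}")
```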

Note that the y-axis is extrapolated volition, i.e. what I endorse/strive for. Extrapolated volition can definitely change -- but I think by definition we prefer ours not to?

1
RedStateBlueState
2y
In that case I'm going to blame Google for defining volition as "the faculty or power of using one's will." Or maybe that does mean "endorse"? Honestly, I'm very confused; feel free to ignore my original comment.

Note that covid travel restrictions may be a consideration. For example, New Zealand's borders are currently closed to essentially all non-New Zealanders and are scheduled to remain closed to much of the world until July.

Historically, Oregon has had ~24 Republican senators vs. ~19 Democratic ones (and 1 independent), so partisan affiliation doesn't seem that important.

A better way of looking at this is the partisan lean of his particular district. The answer is D+7, meaning that in a neutral environment (i.e. an equal number of Democratic and Republican votes nationally), a Democrat would be expected to win this district by 7 percentage points.

This year is likely to be a Republican "wave" year, i.e. Republicans are likely to outperform Democrats (the party ... (read more)

Hi! I'm an author of this paper and am happy to answer questions. Thanks to Jsevillamol for the summary!

A quick note regarding the context in which the extremization factor we suggest is "optimal": rather than taking a Bayesian view of forecast aggregation, we take a robust/"worst case" view. In brief, we consider the following setup:

(1) you choose an aggregation method.

(2) an adversary chooses an information structure (i.e. joint probability distribution over the true answer and what partial information each expert knows) to make your aggregation method d... (read more)
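
To make the kind of aggregator we study concrete, here's a minimal sketch: average the experts' log odds, then extremize by a factor d. (The specific optimal value of d is what the paper derives; here it's left as a free parameter, and the forecasts are made-up numbers.)

```python
import math

def extremized_logodds_average(probs, d):
    """Average the forecasts in log-odds space, then multiply by d.
    d > 1 pushes the aggregate away from 1/2, i.e., 'extremizes' it."""
    logodds = [math.log(p / (1 - p)) for p in probs]
    avg = sum(logodds) / len(logodds)
    return 1 / (1 + math.exp(-d * avg))

# Illustrative forecasts (not from the paper):
print(extremized_logodds_average([0.6, 0.7, 0.8], d=1.0))  # plain log-odds average, ~0.71
print(extremized_logodds_average([0.6, 0.7, 0.8], d=1.5))  # extremized, ~0.79
```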

Thanks for putting this together; I might be interested!

I just want to flag that if your goal is to avoid internships, then (at least for American students) I think the right time to do this would be late May-early June rather than late June-early July as you suggest on the Airtable form. I think the most common day for internships to start is the day after Memorial Day, which in 2022 will be May 31st. (Someone correct me if I'm wrong.)

3
trammell
2y
Glad to hear you might be interested! Thanks for pointing this out. It's tough, because (a) as GrueEmerald notes below, at least some European schools end later, and (b) it will be easier to provide accommodation in Oxford once the Oxford spring term is over (e.g. I was thinking of just renting space in one of the colleges). Once the application form is up*, I might include a When2Meet-type thing so people can put exactly what weeks they expect to be free through the summer. *If this goes ahead; but there have been a lot of expressions of interest so far, so it probably will!
2
[anonymous]
2y
I think late May is too early for most European students.

My understanding is that the Neoliberal Project is a part of the Progressive Policy Institute, a DC think tank (correct me if I'm wrong).

Are you guys trying to lobby for any causes, and if so, what has your experience been on the lobbying front? Are there any lessons you've learned that may be helpful to EAs lobbying for EA causes like pandemic preparedness funding?

Yes, lobbying officials is part of what we do.  We're trying to talk to officials about all the things we care about - taking action on climate change, increasing immigration, etc etc etc. Truthfully I don't have a ton of experience on this front yet - I've been part of the project since its inception in early 2017, but have only been formally employed by PPI for the last 8 months or so. So I'm not a fountain of wisdom on all the best lobbying techniques - this is somewhat beginner level analysis of the DC swamp.

One thing I've noticed is that an ounce... (read more)

There sort of is -- I've seen some EAs use the light bulb emoji 💡 on Twitter (I assume this comes from the EA logo) -- but it's not widely used, and it's unclear to me whether it means "identifies as an EA" or "is a practicing EA" (i.e. donates a substantial percentage of their income to EA causes and/or does direct work on those causes).

I'm unsure whether I want there to be an easy way to "identify as EA", since identities do seem to make people worse at thinking clearly. I've thought/written about this (in the context of a neoliberal identity too, as it... (read more)

4
JeremiahJohnson
3y
Loved the post you linked! I second your hesitation about the upside/downside to "identifying as an EA". But I honestly don't think you can help this sort of thing happening. The most you can do is actively guide the values that are defining your group.

In the early days of the neoliberal subreddit (the earliest large-scale group of modern self-identified neoliberals), one of the slogans we used was 'evidence based policy'. The leaders and prominent members of the subreddit tried to instill 'evidence based policy' as a core value to the members, to offset the dangers of groupthink and to make people willing to change their minds.

EBP is a complicated subject and it's not like most people are really out there reading research papers. But it's important to at least have people signaling that they are open to changing their minds. Signaling can become reality.

Thanks for writing this up; I agree with your conclusions.

There's a neat one-to-one correspondence between proper scoring rules and probabilistic opinion pooling methods satisfying certain axioms, and this correspondence maps Brier's quadratic scoring rule to arithmetic pooling (averaging probabilities) and the log scoring rule to logarithmic pooling (geometric mean of odds). I'll illustrate the correspondence with an example.

Let's say you have two experts: one says 10% and one says 50%. You see these predictions and need to come up with your own predictio... (read more)
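
Here's the computation for that example (the code is just my sketch of the two pooling rules, not anything from the original comment):

```python
import math

def arithmetic_pool(probs):
    # The pooling method corresponding to Brier's quadratic scoring rule:
    # average the probabilities.
    return sum(probs) / len(probs)

def logarithmic_pool(probs):
    # The pooling method corresponding to the log scoring rule:
    # geometric mean of odds, converted back to a probability.
    odds = [p / (1 - p) for p in probs]
    gm = math.prod(odds) ** (1 / len(odds))
    return gm / (1 + gm)

experts = [0.10, 0.50]
print(arithmetic_pool(experts))   # 0.30
print(logarithmic_pool(experts))  # 0.25 (odds 1/9 and 1 have geometric mean 1/3)
```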

Cool idea! Some thoughts I have:

  • A different thing you could do, instead of trading models, is compromise by assuming that there's a 50% chance that your model is right and a 50% chance that your peer's model is right. Then you can do utility calculations under this uncertainty. Note that this would have the same effect as the one you desire in your motivating example: Alice would scrub surfaces and Bob would wear a mask (see the sketch below).
    • This would however make utility calculations twice as difficult as compared to just using your own model, since you'd need to compute the exp
... (read more)
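
Here's a minimal sketch of that first bullet, with made-up benefit and cost numbers (none of them come from the post):

```python
# Illustrative benefits: benefit[action] = benefit of that precaution if the
# given model of disease transmission is correct. All numbers are assumptions.
benefit_alice = {"scrub": 10, "mask": 0}   # Alice's model: only surfaces matter
benefit_bob   = {"scrub": 0,  "mask": 10}  # Bob's model: only aerosols matter
COST = 3  # cost of taking either precaution (assumption)

def worth_doing_under_mixture(action, w=0.5):
    """Is the action worth its cost under 50% credence in each model?"""
    expected_benefit = w * benefit_alice[action] + (1 - w) * benefit_bob[action]
    return expected_benefit > COST

for action in ("scrub", "mask"):
    print(action, worth_doing_under_mixture(action))

# Under her own model Alice would skip the mask (0 < 3), but under the 50/50
# mixture both precautions clear the bar (5 > 3): Alice scrubs AND masks,
# and by symmetry so does Bob -- matching the motivating example.
```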

Yeah -- I think it's unlikely that Pact would become a really large player and have distortionary effects. If that happens, we'll solve that problem when we get there :)

The broader point that the marginal dollar might be more valuable to one campaign than to another is an important one. You could try to deal with this by making an actual market, where the ratio at which people trade campaign dollars isn't fixed at 1, but I think that will complicate the platform and end up doing more harm than good.

Yeah, there are various incentives issues like this one that are definitely worth thinking about! I wrote about some of them in this blog post: https://ericneyman.wordpress.com/2019/09/15/incentives-in-the-election-charity-platform/

The issue you point out can be mostly resolved by saying that half of a pledge's contributions will go to their chosen candidate no matter what -- but this has the unfortunate effect of decreasing the amount of money that gets sent to charity. My guess is that it's not worth it (though maybe doing some nominal amount like 5% is w... (read more)

We want a Republican on our team; unfortunately in our experience Democrats are pretty disproportionately interested in the idea -- and this is in addition to the fact that our circles already have very few Republicans. (This could be a byproduct of how we're framing things, which is part of why we're trying to experiment with framing and talking to Republican consultants.) So we've been unsuccessful so far, but I agree that this is important.

This is a cool idea that we hadn't considered. Thank you!

This definitely sounds like it's worth trying, and it turns out that there's at least one prominent politician who's a fan of this idea. I do have the intuition that almost none of them would actually do it, because having more money directly benefits their staff.

3
MaxRa
3y
Good point. I suppose I could end up being more optimistic because:
* some politicians might think supporting it will, all in all, still make it more likely for them to win office
* they might not believe that too many people would take part in this, so they could win relatively cheap virtue points
* they might just be convinced that this is a great idea and are open to testing it out with voters
* no idea if true, but I imagine many politicians also don't have too close relationships with a significant proportion of their (seasonal?) campaign staff and have enough slack cutting other things if necessary? Or to rely more on volunteers?

Probably it would help if you could find ways for the politicians to reap as much positive public recognition from this as possible, e.g. trying to place things like "Voters of both Richard Roe and Jane Doe donated $30,000 as part of the One America Charity Campaign" in the local news. Maybe also by letting them recommend a charity they'd like to be associated with.

Another thought: I guess you might face less opposition in areas where campaigning is less professionalized and less connected to the respective party's campaign apparatuses, which I guess will not like this idea (assuming they exist).

I believe the general name for this sort of thing is "moral trade"; see this paper by Toby Ord: http://www.amirrorclear.net/files/moral-trade.pdf. But yeah, this is something we've struggled with a bit, including trying not to use the word "matching" in our emails describing the concept. I think the best donor-oriented framing we have right now is "making a deal" with a donor for the other side. So maybe "political donation dealmaking"? But that sounds somewhat clunky to me.

Ryan, could you point me to "the funders behind Progress studies" you mentioned? I wasn't able to figure out what this refers to by googling. Thanks!

2
NunoSempere
3y
This probably refers to the Mercatus Center / Emergent Ventures / Marginal Revolution / Tyler Cowen. See https://marginalrevolution.com/marginalrevolution/2019/11/progress-studies-tranche-of-emergent-ventures.html, https://www.mercatus.org/commentary/we-need-new-science-progress 
2
RyanCarey
4y
Basically funding connected to this.

Thanks. Basically the way I'm thinking about this in my head is: we have some effective charities, and some charities that are meant to encourage people to participate. If we end up getting 10 million in donations, only a quarter of which goes to effective charities, I think that would be a bigger success than getting 1 million in donations, all of which goes to effective charities. I'm thinking about the most effective way to get the platform off the ground, because if it doesn't get off the ground then no money will be sent to charities anyway, and at le... (read more)

I would find it extremely surprising if compromising on charity choice led to you getting 10x more donations. Based on past experience, I'd be surprised if it got you 10% more donations.

Many people will express preferences about where to donate if asked. However, when they're going through a donation UX, every click removed is a win for them, and very few donors have preferences strong enough to overcome their desire for a clean UX. (I think this is intuitive for many non-EA people.)

Hence my recommendation to focus on just one charity (or basket of high impact charities), but allow users the option to donate to anything if they don't like the default choice.

Thanks! I agree we should talk to an expert on these sorts of things. Probably "sociologist or psychologist" isn't the right category though? I'd guess that talking to someone who specializes in political ads, voter turnout, etc. would be the right person to talk to. I'm curious what other people think.

Thanks for the thoughts. I agree that the first thing you point out is a problem, but let me just point out: in the event that it becomes a problem, that means that our platform is already a wild success. After all, I'd be very happy if our platform took single-digit millions of dollars out of politics (compared to the single-digit billions that are spent). If we become a large fraction of all money going into politics, then yeah, this will become a problem, perhaps solvable in the way you suggest.

Regarding your thoughts on ads, that seems like a pl... (read more)

7
xccf
3y
I think the Center for Election Science, an EA organization that advocates approval voting, could be an effective anti-polarization organization.  There seems to be widespread dissatisfaction with the 2-party system, and I believe it's contributing significantly to polarization. There's something rather delightful about money being matched from Republican and Democrat donors in order to fund an organization which aims to get rid of the 2-party system :)
2
MichaelStJules
4y
Alternatively, people will predict this and then refuse to use it in the first place in those cases.

If we find such wealthy donors, we could match them against each other instead! But I suppose it's possible that we'd find donors who'd be willing to match with each other up to however much is contributed to the platform, as a way of raising interest. Like how cool would it be if Sheldon Adelson and George Soros agreed to this sort of thing? (I'm not even remotely optimistic though :P)

Thanks, this is my biggest concern. I agree that this sort of platform is less likely to work now than a decade ago when the U.S. was less polarized. I don't really have strong counter-evidence to point to; but we are running some rudimentary informal trials to see if we can muster up any interest from donors on both sides. If those are successful, that will give me hope that this can work at scale.

Thanks! But yeah, I don't think we could get political parties to like us. Because ultimately, parties do prefer that they and the opposing party each have a billion dollars than that they both have no money, if only because the employment of party operatives depends on it.

Thanks -- that was really helpful! The 4x rule of thumb you mentioned makes sense and is good to know. We may contact you about collaborating; we're probably not yet at the stage where we'll be making this decision, but we'll keep you posted! And your "nudging" suggestion makes sense, especially in light of what Ryan Carey said about people hating choosing between charities.

I did find one thing you said a bit odd, which is that veterans' charities strike you as political. To me they seem fairly apolitical, as people all across... (read more)

4
Sanjay
4y
Re veterans' charities: I don't have a strong opinion on this, because my experiences are more based on the UK than the US, which may be different. However, if your intuition says that veterans' charities are more likely to appeal to Republicans than Democrats, Democrats might have the same intuition.

What I can say is that veterans' charities (certainly in the UK, and probably in the US too) are rich with organisations whose impact enormously underperforms AMF -- by several orders of magnitude. So if you did decide to include a veterans' charity, you would need a really good reason. And if you need someone to assess the charities you're considering, let me know -- I can get someone from the SoGive analysis team to take a look.

Yeah, I agree this would be bad. I talk a bit about this here: https://ericneyman.wordpress.com/2019/09/15/incentives-in-the-election-charity-platform/

A possible solution is to send only half of any matched money to charity. Then, from an apolitical altruist's perspective, donating $100 to the platform would cause at most $100 extra to go to charity, and less if their money doesn't end up matched. (On the other hand, this still leaves the problem of a slightly political altruist, who cares somewhat about politics but more about charity; I don't... (read more)
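
To spell out the arithmetic (my illustrative numbers, sketching the proposal above):

```python
donation = 100  # apolitical altruist's donation (illustrative)

# Suppose the full donation is matched by an opposing donor, so $200 is
# diverted in total.
matched_total = 2 * donation

# Current rule: all matched money goes to charity -> $1 in yields $2 to charity.
print(matched_total)      # 200

# Proposed rule: only half of matched money goes to charity (the rest
# presumably goes to the campaigns) -> at most $100, i.e., no leverage.
print(matched_total / 2)  # 100
```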

Thanks! Yup, if we were guaranteed success, I agree it would be worth it. On the other hand, I don't know how likely that is or how much money we'd attract if we did get off the ground. We're trying to get people to participate in a rudimentary version of our platform to see how much interest there is in this sort of thing.

Thanks for recommending the funds. I'm not heavily involved in this community (yet) so I wasn't aware of these; we will definitely look into them!

I've thought about allowing matches besides 1:1, but this se... (read more)