
Eric Neyman


Bio

I'm a theoretical CS grad student at Columbia specializing in mechanism design. I write a blog called Unexpected Values which you can find here: https://ericneyman.wordpress.com/. My academic website can be found here: https://sites.google.com/view/ericneyman/.

Comments

I wanted to highlight one particular U.S. House race that Matt Yglesias mentions:

Amish Shah (AZ-01): A former state legislator, Amish Shah won a crowded primary in July. He faces Rep. David Schweikert, a Republican who supported Trump's effort to overturn the 2020 presidential election. Primaries are costly, and in Shah’s pre-primary filing, he reported just $216,508.02 cash on hand compared to $1,548,760.87 for Schweikert.

In addition to running in a swing district, Amish Shah is an advocate for animal rights. See my quick take about him here.

Yeah, it was intended to be a crude order-of-magnitude estimate. See my response to essentially the same objection here.

Thanks for those thoughts! Upvoted and also disagree-voted. Here's a slightly more thorough sketch of my reasoning in the "How close should we expect 2024 to be" section (which is the one we're disagreeing on):

  • I suggest a normal distribution with mean 0 and standard deviation 4-5% as a model of election margins in the tipping-point state. If we take 4% as the standard deviation, then the probability of any given election being within 1% is 20%, and the probability of at least 3/6 elections being within 1% is about 10%, which is pretty high (in my mind, not nearly low enough to reject the hypothesis that this normal distribution model is basically right). If we take 5% as the standard deviation, then that probability drops from 10% to 5.6%. (I sketch this arithmetic in code just after this list.)
  • I think that any argument that elections are actually eerily close needs to do one of the following:
    • Say that there was something special about 2008 and 2012 that made them fall outside of the reference class of close elections. I.e. there's some special ingredient that can make elections eerily close and it wasn't present in 2008-2012.
      • I'm skeptical of this because it introduces too many epicycles.
    • Say that actually elections are eerily close (maybe standard deviation 2-3% rather than 4-5%) and 2008-2012 were big, unlikely outliers.
      • I'm skeptical of this because 2008 would be a quite unlikely outlier (and 2012 would also be reasonably unlikely).
    • Say that the nature of U.S. politics changed in 2016 and elections are now close, whereas before they weren't.
      • I think this is the most plausible of the three. However, note that the close margins in 2000 and 2004 are not evidence in favor of this hypothesis. I'm tempted to reject this hypothesis on the basis of only having two datapoints in its favor.
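
Here's a minimal sketch of that arithmetic (Python, standard library only), using the 4% and 5% standard deviations from the first bullet; it's just the back-of-the-envelope model above, not a careful analysis:

```python
from math import comb
from statistics import NormalDist

for sd in (0.04, 0.05):
    # Margin in the tipping-point state, modeled as N(0, sd).
    margin = NormalDist(mu=0.0, sigma=sd)
    # Probability that a single election lands within 1 percentage point of a tie.
    p_close = margin.cdf(0.01) - margin.cdf(-0.01)
    # Probability that at least 3 of 6 elections are that close (binomial tail).
    p_3_of_6 = sum(comb(6, k) * p_close**k * (1 - p_close)**(6 - k) for k in range(3, 7))
    print(f"sd = {sd:.0%}: P(one election within 1%) = {p_close:.1%}, "
          f"P(at least 3 of 6 within 1%) = {p_3_of_6:.1%}")
```

This reproduces the roughly 20%, 10%, and 5-6% figures above.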

(Also, just a side note, but the fact that 2000 was 99.99th percentile is definitely just a coincidence. There's no plausible mechanism pushing it to be that close as opposed to, say, 95th percentile. I actually think the most plausible mechanism is that we're living in a simulation!)

Yeah I agree; I think my analysis there is very crude. The purpose was to establish an order-of-magnitude estimate based on a really simple model.

I think readers should feel free to ignore that part of the post. As I say in the last paragraph:

So my advice: if you're deciding whether to donate to efforts to get Harris elected, plug in my "1 in 3 million" estimate into your own calculation -- the one where you also plug in your beliefs about what's good for the world -- and see where the math takes you.

The page you linked is about candidates for the Arizona State House. Amish Shah is running for the U.S. House of Representatives. There are still campaign finance limits, though ($3,300 per election per candidate, where the primary and the general election count separately; see here).

Amish Shah is a Democratic politician who's running for Congress in Arizona. He appears to be a strong supporter of animal rights (see here).

He just won his primary election, and Cook Political Report rates the seat he's running for (AZ-01) as a tossup. My subjective probability that he wins the seat is 50% (Edit: now 30%). I want him to win primarily because of his positions on animal rights, and secondarily because I want Democrats to control the House of Representatives.

You can donate to him here.

It looks like Amish Shah will probably (barely) win the primary!

(This comment is mostly cross-posted from Nuño's blog.)

In "Unflattering aspects of Effective Altruism", you write:

Third, I feel that EA leadership uses worries about the dangers of maximization to constrain the rank and file in a hypocritical way. If I want to do something cool and risky on my own, I have to beware of the “unilateralist curse” and “build consensus”. But if Open Philanthropy donates $30M to OpenAI, pulls a not-so-well-understood policy advocacy lever that contributed to the US overshooting inflation in 2021, funds Anthropic while Anthropic’s President and the CEO of Open Philanthropy were married, and romantic relationships are common between Open Philanthropy officers and grantees, that is ¿an exercise in good judgment? ¿a good ex-ante bet? ¿assortative mating? ¿presumably none of my business?

I think the claim that Open Philanthropy is hypocritical re: the unilateralist's curse doesn't quite make sense to me. To explain why, consider the following two scenarios.

Scenario 1: you and 999 other smart, thoughtful people have a button. You know there's 1000 people with such a button. If anyone presses the button, all mosquitoes will disappear.

Scenario 2: you and you alone have a button. You know that you're the only person with such a button. If you press the button, all mosquitoes will disappear.

The unilateralist's curse applies to Scenario 1 but *not* Scenario 2. That's because, in Scenario 1, your estimate of the counterfactual impact of pressing the button should be your estimate of the expected utility of all mosquitoes disappearing, *conditioned on no one else pressing the button*. In Scenario 2, where no one else has the button, your estimate of the counterfactual impact of pressing the button should be your estimate of the (unconditional) expected utility of all mosquitoes disappearing.

So, at least the way I understand the term, the unilateralist's curse refers to the fact that taking a unilateral action is worse than it naively appears, *if other people also have the option of taking the unilateral action*.
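
Here's a toy simulation of that asymmetry (Python; all of the numbers are made up for illustration and aren't meant to model any real decision):

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = -0.5   # suppose pressing the button is actually mildly net-negative
noise_sd = 1.0      # each person's estimate = true value + independent N(0, 1) error
n_people = 1000     # Scenario 1: 1000 people each hold a button
n_trials = 2000     # number of simulated worlds

# Everyone forms a noisy estimate; the button gets pressed if anyone's
# estimate comes out positive.
estimates = true_value + noise_sd * rng.normal(size=(n_trials, n_people))
pressed = (estimates > 0).any(axis=1)

print(f"True value of pressing: {true_value}")
print(f"Fraction of worlds where someone presses: {pressed.mean():.2f}")
print(f"Most optimistic estimate, averaged over those worlds: "
      f"{estimates.max(axis=1)[pressed].mean():.2f}")
```

With 1000 button-holders, the net-negative button gets pressed in essentially every simulated world, and the person who presses it is almost always someone whose naive estimate is far too optimistic. With n_people = 1 (Scenario 2), the naive estimate is unbiased and there's nothing to correct for.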

 

This relates to Open Philanthropy because, at the time of buying the OpenAI board seat, Dustin was one of the only billionaires approaching philanthropy with an EA mindset (maybe the only?). So he was sort of the only one with the "button" of having this option, in the sense of having considered the option and having the money to pay for it. So for him it just made sense to evaluate whether or not this action was net positive in expectation.

Now consider the case of an EA who is considering launching an organization with a potentially large negative downside, where the EA doesn't have some truly special resource or ability. (E.g., AI advocacy with inflammatory tactics -- think DxE for AI.) Many people could have started this organization, but no one did. And so, when deciding whether this org would be net positive, you have to condition on this observation.

Thanks for asking! The first thing I want to say is that I got lucky in the following respect. The set of possible outcomes isn't the interior of the ellipse I drew; rather, it is a bunch of points that are drawn at random from a distribution, and when you plot that cloud of points, it looks like an ellipse. The way I got lucky is: one of the draws from this distribution happened to be in the top-right corner. That draw is working at ARC theory, which has just about the most intellectually interesting work in the world (for my interests) and is also just about the most impactful place for me to work (given my skills and my models of what sort of work is impactful). I interned there for 4-5 months and I'll be starting there full-time soon!

Now for my report card on how well I checked in (in the ways listed in the post):

  • Writing the above post was useful in an interesting way: I formed some amount of identity around "I care about things besides impact" in a way that somewhat decreased value drift. (I endorse this, I think.) This manifested as me thinking a lot over the last year about whether I'm happy. Sometimes the answer was "not really"! But I noticed this and took steps toward fixing it. In particular, I noticed when I was in Berkeley last summer that I had a need for a social group that doesn't talk about maximizing impact all the time. This was super relevant to my criteria for choosing a living situation when I came back to Berkeley in October. I ended up choosing a "chill" group house, and I think that was the right choice.
  • I had the goal of keeping a monthly diary about my values. I updated it four times -- in June, July, October, and March -- and I think that captured most of the value. (I'm not sure that this was a particularly valuable intervention.)
  • Regarding the four specific non-EA things I cared about that I listed above:
    • Family and non-EA friends: I continue to be close with my family and remain similarly close with the non-EA friends I had at the time.
    • Puzzles and puzzle hunts: I continue caring about this. Empirically I haven't done many puzzle hunts over the last year, but that was more for a lack of good opportunities. But I recently joined a new puzzle hunt team, so I might have more opportunities ahead!
    • Spending time in nature: yup, I continue to care about this. I went to Alaska for a few weeks last month and it was great.
    • Random statistical analyses: honestly, much less? Which I'm a bit sad about.
      • One interest that I had not listed, because I had mixed feelings about how much I endorsed it, was politics. I indeed care less about politics now (though still a decent amount).
  • I also picked up an interest -- I'm part of the Bayesian Choir! I've also been playing some small amount of tennis, for the first time since high school.
  • I didn't do any of the CFAR techniques, like focusing or internal double crux.

I'd say that this looks pretty good.

 

I do think that there are a couple of yellow flags, though:

  • I currently believe that the Berkeley EA community is unhealthy (I'm not sure whether to add the caveat "for me" or whether I think it's unhealthy, period). The main reason for this, I think, is that there's a status hierarchy. The way I sometimes put this is: if you asked me which of my friends in college were highest status, I would've been like "...what does that even mean, that question doesn't make sense". But unfortunately I think if you asked about people's status in this community, I'd often have thoughts. I have a theory that this comes out of having a large group of people with really similar values and goals. To elaborate on this: in college, everyone was pursuing their own thing and had their own values, which meant that different people had very different standards for what it meant for someone to be cool. (There would have been way more of a status hierarchy if, say, everyone were trying to be a member of some society; my impression is that this caused status dynamics in parts of my college that I didn't interact with.) In the Berkeley EA community, most people have pretty similar goals (such as furthering AI safety or having interesting conversations). If people agree on what's important, then naturally they'll agree more on who's good at the important things (who's good at AI safety research, or who's good at having interesting conversations -- and by the way, there's way more agreement in the Berkeley EA community about what constitutes an interesting conversation than there is in college).
    • This theory would predict that political party organizations (the Democratic and Republican parties) have a strong social status hierarchy, since they mostly share the same goals (get the party into a position of power). If I learn that actually these organizations mostly don't have strong social status hierarchies, I'll retract my diagnosis.
  • I weakly think that something about the Berkeley EA community makes it harder for me to have original thoughts. Maybe it's that there's so much stuff going on that I don't spend very much time alone with my thoughts. Or maybe it's that there's more of a "party line" about the right takes, in a way that discourages free-thinking. Or maybe it's that people in this community really like talking about some things but not other things, and this implicitly discourages thinking about the "other things".

I haven't figured out how to navigate this. These may be genuine trade-offs -- a case where I can't both work at ARC and be immune from these downsides -- or maybe I'll learn to deal with the downsides over time. I do think that the benefits of my decision to work at ARC are worth the costs for me, though.

Thanks -- I should have been a bit more careful with my words when I wrote that "measurement noise likely follows a distribution with fatter tails than a log-normal distribution". The distribution I'm describing is your subjective uncertainty over the standard error of your experimental results. That is, you're (perhaps reasonably) modeling your measurement as being the true quality plus some normally distributed noise. But -- normal with what standard deviation? There's an objectively right answer that you'd know if you were omniscient, but you don't, so instead you have a subjective probability distribution over the standard deviation, and that's what I was modeling as log-normal.

I chose the log-normal distribution because it's a natural choice for the distribution of an always-positive quantity. But something more like a power law might've been reasonable too. (In general I think it's not crazy to guess that the standard error of your measurement is proportional to the size of the effect you're trying to measure -- in which case, if your uncertainty over the size of the effect follows a power law, then so would your uncertainty over the standard error.)

(I think that for something as clean as a well-set-up experiment with independent trials of a representative sample of the real world, you can estimate the standard error well, but I think the real world is sufficiently messy that this is rarely the case.)
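
To illustrate the kind of model I have in mind, here's a quick sketch (Python; the parameters are arbitrary and not meant to describe any particular experiment): draw a standard error from a log-normal, then draw normally distributed noise with that standard error, and compare the tails of the result against a single normal with the same overall spread.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Subjective uncertainty over the (unknown) standard error: log-normal.
sigma = rng.lognormal(mean=0.0, sigma=0.75, size=n)

# Measurement noise, conditional on that standard error: normal.
noise = rng.normal(loc=0.0, scale=sigma)

# A plain normal with the same overall standard deviation, for comparison.
plain = rng.normal(loc=0.0, scale=noise.std(), size=n)

for k in (3, 4, 5):
    print(f"P(|noise| > {k} sd): mixture {np.mean(np.abs(noise) > k * noise.std()):.1e}, "
          f"plain normal {np.mean(np.abs(plain) > k * plain.std()):.1e}")
```

The mixture (a normal whose standard deviation is itself log-normally distributed) ends up with much fatter tails than a single normal of the same overall spread, which is the qualitative point I was gesturing at.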
