All of Eric Neyman's Comments + Replies

Yeah -- I have instructions that I'm happy to send to people privately. (That said, I've stopped recommending that people do bank transfers, because my sense is that the process is annoying enough that way fewer than 96% of intentions-to-do-bank-transfers end in actual bank transfers.)

No, just that, contra your title, most EAs would have been abolitionists, the way that I understand the word "abolitionist" to be used.

-22
Holly Elmore ⏸️ 🔸

In my vocabulary, "abolitionist" means "person who is opposed to slavery" (e.g. "would vote to abolish slavery"). My sense is that this is the common meaning of the term, but let me know if you disagree.

It seems, then, that the analogy would be "person who is opposed to factory farming" (e.g. "would vote to outlaw factory farms"), instead of "vegetarian" and "animal donor". The latter two are much higher standards, as they require personal sacrifice (in the same way that "not consuming products made with slave labor" was a much higher standard -- one that ... (read more)

-13
Holly Elmore ⏸️ 🔸

In October, I wrote a post encouraging AI safety donors to donate to the Alex Bores campaign. Since then, I've spent a bunch of time thinking about the best donations for making the long-term future go well, and I still think that the Alex Bores campaign is the best donation opportunity for U.S. citizens/permanent residents. Under my views, donations to his campaign made this month are about 25x better than donations to standard AI safety 501(c)(3) organizations like LTFF.[1] I also think that donations made after December 31st are substantially almos... (read more)

What's the Great AI Timelines Scare of 2017?

In my memory, the main impetus was that a couple of leading AI safety ML researchers started making the case for 5-year timelines. They were broadly qualitatively correct and remarkably insightful (promoting the scaling-first worldview), but obviously quantitatively too aggressive. And AlphaGo and AlphaZero had freaked people out, too.

A lot of other people at the time (including close advisers to OP folks) had 10-20yr timelines. My subjective impression was that people in the OP orbit generally had more aggressive timelines than Ajeya's report did. 

[Not tax/financial advice]

I agree, especially for donors who want to give to 501(c)(3)'s, since a lot of Anthropic equity is pledged to c3's.

Another consideration for high-income donors that points in the same direction: if I'm not mistaken, 2025 is the last tax year where donors in the top tax bracket (AGI > $600k) can deduct up to 60% of their AGI; the One Big Beautiful Bill Act lowers this number to 35%. (Someone should check this, though, because it's possible that I'm misinterpreting the rule.)

[This comment is no longer endorsed by its author]
7
LeahC
IIRC, the 35% figure comes from the value of the deduction for people in the 37% bracket. Basically, you will only see 35% back from the deduction instead of the full 37% (saving $0.35 in taxes vs $0.37 for every dollar donated in that bracket). I am not aware of any changes to the limit on total qualified donations, but you should still be able to take advantage of carryover rules if you exceed the limit. I am not an accountant and this is not financial advice - I just don't want people to be discouraged from giving in the coming years, and I worked pretty closely on the OBBB.

As one of Zach's collaborators, I endorse these recommendations. If I had to choose among the 501c3s listed above, I'd choose Forethought first and the Midas Project second, but these are quite weakly held opinions.

I do recommend reaching out about nonpublic recommendations if you're likely to give over $20k!

Nancy Pelosi is retiring; consider donating to Scott Wiener.

[Link to donate; or consider a bank transfer option to avoid fees, see below.]

Nancy Pelosi has just announced that she is retiring. Previously I wrote up a case for donating to Scott Wiener, who is running for her seat, in which I estimated a 60% chance that she would retire. While I recommended donating on the day that he announced his campaign launch, I noted that donations would look much better ex post in worlds where Pelosi retires, and that my recommendation to donate on launch day was sensi... (read more)

Yup! Copying over from a LessWrong comment I made:

Roughly speaking, I'm interested in interventions that cause the people making the most important decisions about how advanced AI is used once it's built to be smart, sane, and selfless. (Huh, that was some convenient alliteration.)

  • Smart: you need to be able to make really important judgment calls quickly. There will be a bunch of actors lobbying for all sorts of things, and you need to be smart enough to figure out what's most important.
  • Sane: smart is not enough. For example, I wouldn't trust Elon Musk wit
... (read more)
2
MichaelDickens
Hmm, I think if we are in a world where the people in charge of the company that have already built ASI need to be smart/sane/selfless for things to go well, then we're already in a much worse situation than we should be, and things should have been done differently prior to this point. I realize this is not a super coherent statement but I thought about it for a bit and I'm not sure how to express my thoughts more coherently so I'm just posting this comment as-is.

People are underrating making the future go well conditioned on no AI takeover.

This deserves a full post, but for now a quick take: in my opinion, P(no AI takeover) = 75%, P(future goes extremely well | no AI takeover) = 20%, and most of the value of the future is in worlds where it goes extremely well (and comparatively little value comes from locking in a world that's good-but-not-great).

Under this view, an intervention is good insofar as it affects P(no AI takeover) * P(things go really well | no AI takeover). Suppose that a given intervention can chang... (read more)
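To make the size of that comparison concrete, here is a minimal arithmetic sketch using the 75% and 20% figures above. The one-percentage-point shifts are hypothetical illustration values, not claims about any particular intervention, and the truncated text may frame the trade-off differently.

```python
# Sketch: compare two hypothetical interventions under the decomposition
# value ~ P(no AI takeover) * P(things go extremely well | no AI takeover).
p_no_takeover = 0.75
p_well_given_no_takeover = 0.20

baseline = p_no_takeover * p_well_given_no_takeover  # 0.150

# Hypothetical: each intervention shifts one factor by one percentage point.
takeover_focused = (p_no_takeover + 0.01) * p_well_given_no_takeover      # ~0.152
flourishing_focused = p_no_takeover * (p_well_given_no_takeover + 0.01)   # ~0.1575

print(takeover_focused - baseline)     # ~ +0.0020
print(flourishing_focused - baseline)  # ~ +0.0075, i.e. ~3.75x larger per point
```

Under these numbers, a one-point shift in the conditional "goes extremely well" term is worth several times a one-point shift in the takeover term, which is one way of reading the claim that the latter kind of work is underrated.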

4
William_MacAskill
I, of course, agree! One additional point, as I'm sure you know, is that potentially you can also affect P(things go really well | AI takeover). And actions to increase ΔP(things go really well | AI takeover) might be quite similar to actions that increase ΔP(things go really well | no AI takeover). If so, that's an additional argument for those actions compared to affecting ΔP(no AI takeover).

Re the formal breakdown, people sometimes miss the BF supplement here, which goes into this in a bit more depth. And here's an excerpt from a forthcoming paper, "Beyond Existential Risk", in the context of more precisely defining the "Maxipok" principle. What it gives is very similar to your breakdown, and you might find some of the terms in here useful (apologies that some of the formatting is messed up):

"An action x's overall impact (ΔEV_x) is its increase in expected value relative to baseline. We'll let C refer to the state of existential catastrophe, and b refer to the baseline action. We'll define, for any action x: P_x = P[¬C | x] and K_x = E[V | ¬C, x]. We can then break overall impact down as follows:

ΔEV_x = (P_x – P_b)K_b + P_x(K_x – K_b)

We call (P_x – P_b)K_b the action's existential impact and P_x(K_x – K_b) the action's trajectory impact. An action's existential impact is the portion of its expected value (relative to baseline) that comes from changing the probability of existential catastrophe; an action's trajectory impact is the portion of its expected value that comes from changing the value of the world conditional on no existential catastrophe occurring. We can illustrate this graphically, where the areas in the graph represent overall expected value, relative to a scenario with a guarantee of catastrophe:

With these in hand, we can then define:

Maxipok (precisified): In the decision situations that are highest-stakes with respect to the longterm future, if an action is near-best on overall impact, then it is close-to-near-best on existential impact. [1
3
Saul Munn
What interventions are you most excited about? Why? What are they bottlenecked on?
4
Sharmake
One of the key issues with "making the future go well" interventions is that what counts as a desirable outcome for the future varies so much between different humans that the concept requires buying into ethical assumptions people won't share, which makes it much less valid as any sort of absolute metric to coordinate around (a quote from Steven Byrnes here). This variability is smaller for preventing bad outcomes, especially outcomes in which we don't die (though there is still variability here), because of instrumental convergence; and while there are moral views on which dying/suffering isn't so bad, those views aren't held by many human beings (in part due to selection effects), so there's less chance of conflict with other agents. The other reason is that humans mostly value the same scarce instrumental goods, but in a world where AI goes well, basically everything except status/identity becomes abundant, which surfaces the latent moral disagreements far more than our current world does.
4
MichaelDickens
Do you think this sort of work is related to AI safety? It seems to me that it's more about philosophy (etc.) so I'm wondering what you had in mind.

California state senator Scott Wiener, author of AI safety bills SB 1047 and SB 53, just announced that he is running for Congress! I'm very excited about this.

It’s an uncanny, weird coincidence that the two biggest legislative champions for AI safety in the entire country announced their bids for Congress just two days apart. But here we are.*

In my opinion, Scott Wiener has done really amazing work on AI safety. SB 1047 is my absolute favorite AI safety bill, and SB 53 is the best AI safety bill that has passed anywhere in the country. He's been a dedicat... (read more)

In the past, I've had:

  • One instance of the campaign emailing me to set up a bank transfer. This... seems to have happened 9 months after the candidate lost the primary, actually? Which is honestly absurdly long; I don't know if it's typical.
  • One time, I think the campaign just sent a check to the address I used when I donated? But I don't remember for sure. My guess is that they would have tried to reach me if I didn't cash the check, but I'm not sure. I vaguely recall that the check was sent within a few months of the candidate losing the primary, but I'm n
... (read more)
1
Jeremy
Thank you!

I just did a BOTEC, and if I'm not mistaken, 0.0000099999999999999999999999999999999999999999988% is incorrect, and instead should be 0.0000099999999999999999999999999999999999999999998%. This is a crux, as it would mean that the SWWM pledge is actually 2x less effective than the GWWC pledge.

 

I tried to write out the calculations in this comment; in the process of doing so, I discovered that there's a length limit to EA Forum comments, so unfortunately I'm not able to share my calculations. Maybe you could share yours and we could double-crux?

Did you assume the axiom of choice? That's a reasonable modeling decision -- our estimate used an uninformative prior over whether it's true, false, or meaningless.

Hi Karthik,

Your comment inspired me to write my own quick take, which is here. Quoting the first paragraph as a preview:

I feel pretty disappointed by some of the comments (e.g. this one) on Vasco Grilo's recent post arguing that some of GiveWell's grants are net harmful because of the meat eating problem. Reflecting on that disappointment, I want to articulate a moral principle I hold, which I'll call non-dogmatism. Non-dogmatism is essentially a weak form of scope sensitivity.

I decided to spin off a quick take rather than replying here, because I think it... (read more)

I feel pretty disappointed by some of the comments (e.g. this one) on Vasco Grilo's recent post arguing that some of GiveWell's grants are net harmful because of the meat eating problem. Reflecting on that disappointment, I want to articulate a moral principle I hold, which I'll call non-dogmatism. Non-dogmatism is essentially a weak form of scope sensitivity.[1]

Let's say that a moral decision process is dogmatic if it's completely insensitive to the numbers on either side of the trade-off. Non-dogmatism rejects dogmatic moral decision processes.

A central ... (read more)

6
Michael St Jules 🔸
EDIT: Rereading, I'm not really disagreeing with you. I definitely agree with the sentiment here:   (Edited) So, rather than just the possibility that all tradeoffs between humans and chickens should favour humans, I take issue with >99% confidence in that position or otherwise treating it like it's true. Whatever someone thinks makes humans infinitely more important than chickens[1] could actually be present in chickens in some similarly important form with non-tiny or even modest probability (examples here), or not actually be what makes humans important at all (more general related discussion, although that piece defends a disputed position). In my view, this should in principle warrant some tradeoffs favouring chickens. Or, if they don't think there's anything at all, say except the mere fact of species membership, then this is just pure speciesism and seems arbitrary. 1. ^ Or makes humans matter at all, but chickens lack, so chickens don't matter at all.
6
Guive
I also disagree with those comments, but can you provide more argument for your principle? If I understand correctly, you are suggesting the principle that X can be lexicographically[1] preferable to Y if and only if Y has zero value. But, conditional on saying X is lexicographically preferable to Y, isn't it better for the interests of Y to say that Y nevertheless has positive value? I mean, I don't like it when people say things like no amount of animal suffering, however enormous, outweighs any amount of human suffering, however tiny. But I think it is even worse to say that animal suffering doesn't matter at all, and there is no reason to alleviate it even if it could be alleviated at no cost to human welfare.    Maybe your reasoning is more like this: in practice, everything trades off against everything else. So, in practice, there is just no difference between saying "X is lexicographically preferable to Y but Y has positive value", and "Y has no value"? 1. ^ From SEP: "A lexicographic preference relation gives absolute priority to one good over another. In the case of two-goods bundles, A≻B if a1>b1, or a1=b1 and a2>b2. Good 1 then cannot be traded off by any amount of good 2."
2
NunoSempere
This page could be a useful pointer?
1
[comment deleted]

I haven't looked at your math, but I actually agree, in the sense that I also got about 1 in 1 million when doing the estimate again a week before the election!

I think my 1 in 3 million estimate was about right at the time that I made it. The information that we gained between then and 1 week before the election was that the election remained close, and that Pennsylvania remained the top candidate for the tipping point state.

2
WilliamKiely🔸
I'm curious if by "remained close" you meant "remained close to 50/50"? (The two are distinct, and I was guilty of pattern-matching "~50/50" to "close" even though ~50/50 could have meant that either Trump or Harris was likely to win by a lot (e.g. swing all 7 swing states) and we just had no idea which was more likely.)

Could you say more about "practically possible"? What steps do you think one could have taken to have reached, say, a 70% credence?

2
WilliamKiely🔸
Yeah. I said some about that in the ACX thread in an exchange with a Jeffrey Soreff here. Initially I was talking about a "maximally informed" forecaster/trader, but then when Jeffrey pointed out that that term was ill-defined, I realized that I had a lower-bar level of informed in mind that was more practically possible than some notions of "maximally informed." Basically just steps to become more informed and steps to have better judgment. (Saying specifically what knowledge would be sufficient to be able to form a forecast of 70% seems borderline impossible or at least extremely difficult.) Before the election I was skeptical that people like Nate Silver and his team and The Economist's election modeling team were actually doing as good a job as they could have been[1] forecasting who'd win the election and now post-election I still remain skeptical that their forecasts were close to being the best they could have been. [1] "doing as good a job as they could have been" meaning I think they would have made substantially better forecasts in expectation (lower Brier scores in expectation) if figuring out who was going to win was really important to them (significantly more important than it actually was), and if they didn't care about the blowback for being "wrong" if they made a confident wrong-side-of-maybe forecast, and if they were given a big budget to use to do research and acquire information (e.g. $10M), and if they were highly skilled forecasters with great judgment (like the best in the world but not superhuman (maybe Nate Silver is close to this--IDK; I read his book The Signal and the Noise, but it seems plausible that there could still be substantial room for him to improve his forecasting skill)).

Oh cool, Scott Alexander just said almost exactly what I wanted to say about your #2 in his latest blog post: https://www.astralcodexten.com/p/congrats-to-polymarket-but-i-still

I don't have time to write a detailed response now (might later), but wanted to flag that I either disagree or "agree denotatively but object connotatively" with most of these. I disagree most strongly with #3: the polls were quite good this year. National and swing state polling averages were only wrong by 1% in terms of Trump's vote share, or in other words 2% in terms of margin of victory. This means that polls provided a really large amount of information.

(I do think that Selzer's polls in particular are overrated, and I will try to articulate that case more carefully if I get around to a longer response.)

1
David T
Think I disagree especially strongly with #6. Of all the reasons to think Musk might be a genius, him going all in on 60/40 odds is definitely not one of them. Especially since he could probably have got an invite to Mar-a-Lago and President Trump's ear on business and space policy with a small donation and a generic "love Donald's plans to make American business great again" endorsement, and been able to walk it right back again whenever the political wind was blowing the other way. I don't think he's spent his time and much of his fortune to signal boost catturd tweets out of calm calculation of which way the political wind was blowing. The biggest and highest-profile donor to the winning side last time round didn't do too well out of it either, and he probably did think he was being clever and calculating. (Lifelong right-winger Thiel's "I think it's 50/50 who will win but my contrarian view is I also don't think it'll be close" was a great hedge of his bets, on the other hand!)
2
NunoSempere
My sense is that the polls were heavily reweighted by demographics, rather than directly sampling from the population. That said, I welcome your nitpicks, even if brief

Oh cool, Scott Alexander just said almost exactly what I wanted to say about your #2 in his latest blog post: https://www.astralcodexten.com/p/congrats-to-polymarket-but-i-still

I just want to register that, because the election continues to look extremely close, I now think the probability that the election is decided by fewer than 100,000 votes is more like 60%.

4
Eric Neyman
Looks like it'll be about 250,000 votes.

I wanted to highlight one particular U.S. House race that Matt Yglesias mentions:

Amish Shah (AZ-01): A former state legislator, Amish Shah won a crowded primary in July. He faces Rep. David Schweikert, a Republican who supported Trump's effort to overturn the 2020 presidential election. Primaries are costly, and in Shah’s pre-primary filing, he reported just $216,508.02 cash on hand compared to $1,548,760.87 for Schweikert.

In addition to running in a swing district, Amish Shah is an advocate for animal rights. See my quick take about him here.

Yeah, it was intended to be a crude order-of-magnitude estimate. See my response to essentially the same objection here.

Thanks for those thoughts! Upvoted and also disagree-voted. Here's a slightly more thorough sketch of my thought in the "How close should we expect 2024 to be" section (which is the one we're disagreeing on):

  • I suggest a normal distribution with mean 0 and standard deviation 4-5% as a model of election margins in the tipping-point state. If we take 4% as the standard deviation, then the probability of any given election being within 1% is 20%, and the probability of at least 3/6 elections being within 1% is about 10%, which is pretty high (in my mind, not n
... (read more)
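For what it's worth, the arithmetic in that bullet checks out under the stated model; here is a minimal sketch assuming a 4% standard deviation and treating the six elections as independent draws.

```python
from scipy.stats import norm, binom

sd = 0.04  # 4% standard deviation of the tipping-point-state margin
p_within_1pct = norm.cdf(0.01, loc=0, scale=sd) - norm.cdf(-0.01, loc=0, scale=sd)
p_at_least_3_of_6 = binom.sf(2, 6, p_within_1pct)  # P(3 or more of 6 elections that close)

print(round(p_within_1pct, 3))      # ~0.197, i.e. about 20%
print(round(p_at_least_3_of_6, 3))  # ~0.096, i.e. about 10%
```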
4
LintzA
I think it's very reasonable to say that 2008 and 2012 were unusual. Obama is widely recognized as a generational political talent among those in Dem politics. People seem to look back on 2008 especially as a game-changing election year with really impressive work by the Obama team. This could be rationalization of what were effectively normal margins of victory (assuming this model is correct), but I think it matches the comparative vibes pretty well at the time vs now.

As for changes over the past 20+ years, I think it's reasonable to say that there have been fundamental shifts since the 90s:
  • Polarization has increased a lot.
  • The analytical and moneyball nature of campaigns has increased by a ton. Campaigns now know far more about what's happening on the ground, how much adversaries spend, and what works.
  • Trump is a highly unusual figure, which seems likely to lead to some divergence.
  • The internet & good targeting have become major things.

Agree that 5-10% probability isn't cause for rejection of the hypothesis, but given we're working with 6 data points, I think it should be cause for suspicion. I wouldn't put a ton of weight on this, but 5% is at the level of statistical significance, so it seems reasonable to tentatively reject that formulation of the model.

Trump vs Biden favorability was +3 for Trump in 2020; Obama was +7 on McCain around election day (average likely >7 points in Sept/Oct 2008). Kamala is +3 vs Trump today. So that's some indication of when things are close. Couldn't quickly find this for the 2000 election.

Yeah I agree; I think my analysis there is very crude. The purpose was to establish an order-of-magnitude estimate based on a really simple model.

I think readers should feel free to ignore that part of the post. As I say in the last paragraph:

So my advice: if you're deciding whether to donate to efforts to get Harris elected, plug in my "1 in 3 million" estimate into your own calculation -- the one where you also plug in your beliefs about what's good for the world -- and see where the math takes you.

The page you linked is about candidates for the Arizona State House. Amish Shah is running for the U.S. House of Representatives. There are still campaign finance limits, though ($3,300 per election per candidate, where the primary and the general election count separately; see here).

Amish Shah is a Democratic politician who's running for congress in Arizona. He appears to be a strong supporter of animal rights (see here).

He just won his primary election, and Cook Political Report rates the seat he's running for (AZ-01) as a tossup. My subjective probability that he wins the seat is 50% (Edit: now 30%). I want him to win primarily because of his positions on animal rights, and secondarily because I want Democrats to control the House of Representatives.

You can donate to him here.

9
Eevee🔹
Applicable campaign finance limits: According to this page, individuals can donate up to $5,400 to legislative candidates per two-year election cycle.

It looks like Amish Shah will probably (barely) win the primary!

(Comment is mostly cross-posted comment from Nuño's blog.)

In "Unflattering aspects of Effective Altruism", you write:

Third, I feel that EA leadership uses worries about the dangers of maximization to constrain the rank and file in a hypocritical way. If I want to do something cool and risky on my own, I have to beware of the “unilateralist curse” and “build consensus”. But if Open Philanthropy donates $30M to OpenAI, pulls a not-so-well-understood policy advocacy lever that contributed to the US overshooting inflation in 2021, funds Anthropic13 while Anthr

... (read more)
2
NunoSempere
[Answered over on my blog]

Thanks for asking! The first thing I want to say is that I got lucky in the following respect. The set of possible outcomes isn't the interior of the ellipse I drew; rather, it is a bunch of points that are drawn at random from a distribution, and when you plot that cloud of points, it looks like an ellipse. The way I got lucky is: one of the draws from this distribution happened to be in the top-right corner. That draw is working at ARC theory, which has just about the most intellectually interesting work in the world (for my interests) and is also just a... (read more)

Thanks -- I should have been a bit more careful with my words when I wrote that "measurement noise likely follows a distribution with fatter tails than a log-normal distribution". The distribution I'm describing is your subjective uncertainty over the standard error of your experimental results. That is, you're (perhaps reasonably) modeling your measurement as being the true quality plus some normally distributed noise. But -- normal with what standard deviation? There's an objectively right answer that you'd know if you were omniscient, but you don't, so ... (read more)

In general I think it's not crazy to guess that the standard error of your measurement is proportional to the size of the effect you're trying to measure

Take a hierarchical model for effects. Each intervention has a true effect θ_i, and all the θ_i are drawn from a common distribution τ. Now for each intervention, we run an RCT and estimate θ̂_i = θ_i + ε_i, where ε_i is experimental noise.

By the CLT, ε_i ~ N(0, σ²/n_i), where σ² is the inherent sampling variance in your environment and n_i is the sample size of your RCT. What you're saying is that the standard error σ/√n_i has the same o... (read more)
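To make that setup concrete, here is a minimal simulation sketch of the hierarchical model described above, with the noise scale proportional to the true effect as the surrounding discussion considers. The log-normal prior and the 0.5 proportionality constant are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000
true_effects = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # theta_i drawn from a common distribution tau
noise = rng.normal(0.0, 0.5 * true_effects)                # noise scale proportional to effect size
estimates = true_effects + noise                           # theta_hat_i = theta_i + eps_i

top = np.argmax(estimates)
print("largest estimate:", round(float(estimates[top]), 2))
print("its true effect: ", round(float(true_effects[top]), 2))
# Selecting the intervention with the largest noisy estimate typically
# overstates its true effect: the winner's measurement error is biased upward.
```

This is the sense in which your subjective distribution over the noise, and how it scales with effect size, matters for how much you should discount the top measured interventions.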

Let's take the very first scatter plot. Consider the following alternative way of labeling the x and y axes. The y-axis is now the quality of a health intervention, and it consists of two components: short-term effects and long-term effects. You do a really thorough study that perfectly measures the short-term effects, while the long-term effects remain unknown to you. The x-value is what you measured (the short-term effects); the actual quality of the intervention is the x-value plus some unknown, mean-zero, variance-1 number.

So whereas previously (i.e. in... (read more)

2
Davidmanheim
Yes - though I think this is just an elaboration of what Abram wrote here.

Great question -- you absolutely need to take that into account! You can only bargain with people who you expect to uphold the bargain. This probably means that when you're bargaining, you should weight "you in other worlds" in proportion to how likely they are to uphold the bargain. This seems really hard to think about and probably ties in with a bunch of complicated questions around decision theory.

This is probably my favorite proposal I've seen so far, thanks!

I'm a little skeptical that warnings from the organization you propose would have been heeded (especially by people who don't have other sources of funding and so relying on FTX was their only option), but perhaps if the organization had sufficient clout, this would have put pressure on FTX to engage in less risky business practices.

8
Sam Elder
I don't have much hope that the charity side of things could have influenced FTX to be less risky -- from what I can tell, a high tolerance for risk was core to their business practices. I just think it could have given EA folks who aren't crypto-savvy a lot more sobriety around FTX's relationship to EA and made them consider the potential downsides of taking FTX funding. It also would have helped in the media/reputation fallout if the donor evaluator I have in mind had clearly labeled FTX as risky or as having withheld information. Independent of this particular case, I also think such a donor catalog and evaluation system would be a benefit to the community, as a sort of one-stop shop for potential grantees to learn about their options for seeking funding.

I think this fails (1), but more confidently, I'm pretty sure it fails (2). How are you going to keep individuals from taking crypto money? See also: https://forum.effectivealtruism.org/posts/Pz7RdMRouZ5N5w5eE/ea-should-taboo-ea-should

2
titotal
If I said, "EA should have had a policy to not be involved with or associate with the weapons industry", would you have the same objection? (not saying crypto is as bad obviously, just that some form of divestment is obviously possible). FTX was heavily involved in the core of EA, and nothing was done to discourage them tying themselves to EA at every turn. Do you really think the reputational fallout would have been as great if SBF was a mere anonymous donor?

I think my crux with this argument is "actions are taken by individuals". This is true, strictly speaking; but when e.g. a member of U.S. Congress votes on a bill, they're taking an action on behalf of their constituents, and affecting the whole U.S. (and often world) population. I like to ground morality in questions of a political philosophy flavor, such as: "What is the algorithm that we would like legislators to use to decide which legislation to support?". And as I see it, there's no way around answering questions like this one, when decisions have si... (read more)

2
jasoncrawford
I would like them to use an algorithm that is not based on some sort of global calculation about future world-states. That leads to parentalism in government and social engineering. Instead, I would like the algorithm to be based on something like protecting rights and preventing people from directly harming each other. Then, within that framework, people have the freedom to improve their own lives and their own world. Re the China/US scenario: this does seem implausible; why would the US AI prevent almost all future progress, forever? Setting that aside, though, if this scenario did happen, it would be a very tough call. However, I wouldn't make it on the basis of counting people and adding up happiness. I would make it on the basis of something like the value of progress vs. the value of survival. Abortion policy is a good example. I don't see how you can decide this on the basis of counting people. What matters here is the wishes of the parents, the rights of the mother, and your view on whether the fetus has rights.

Does anyone have an estimate of how many dollars donated to the campaign are about equal in value to one hour spent phonebanking? Thanks!

1
Caro
It's quite hard to know, and I don't know what the Team Campaign thinks about it. There is a good article on Vox about the evidence base for those things: "Gerber and Green's rough estimate is that canvassing can garner campaigns a vote for about $33, while volunteer phone-banking can garner a vote for $36 — not too different, especially when you consider how imprecise these estimates necessarily are." Not exactly what you asked, but it can give you a sense of direction.
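One rough way to turn those cost-per-vote figures into a dollars-per-hour equivalence is sketched below. Every input is a placeholder made up for illustration (calls completed per hour, calls needed per extra vote); the only number taken from the quote above is the ~$36-per-vote figure, so treat the output as a template rather than an estimate.

```python
# Hypothetical conversion: how many donated dollars is one hour of volunteer
# phone-banking worth? All inputs below are placeholders, not measured values.
completed_calls_per_hour = 12         # hypothetical volunteer throughput
calls_per_extra_vote = 40             # hypothetical persuasion/turnout effectiveness
dollars_per_vote_from_spending = 36   # the Gerber & Green-style figure quoted above

votes_per_hour = completed_calls_per_hour / calls_per_extra_vote
dollar_equivalent_per_hour = votes_per_hour * dollars_per_vote_from_spending
print(round(dollar_equivalent_per_hour, 2))  # ~10.8 under these made-up inputs
```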

I guess I have two reactions. First, which of the categories are you putting me in? My guess is you want to label me as a mop, but "contribute as little as they reasonably can in exchange" seems an inaccurate description of someone who's strongly considering devoting their career to an EA cause; also I really enjoy talking about the weird "new things" that come up (like idk actually trade between universes during the long reflection).

My second thought is that while your story about social gradients is a plausible one, I have a more straightforward story ab... (read more)

7
Linch
I think an interesting related question is how much our social (and other incentive) gradients should prioritize people whose talents or dispositions are naturally predisposed to doing relevant EA work, versus people who are not naturally inclined toward this but are morally compelled to "do what needs to be done." I think in one sense it feels more morally praiseworthy for people to be willing to do hard work. But in another sense, it's (probably?) easier to recruit people for whom the pitch and associated sacrifices to do EA work are lower, and for a lot of current longtermist work (especially in research), having a natural inclination/aptitude/interest probably makes you a lot better at the work than grim determination. I'm curious how true this is.
1
NegativeNuno
I don't think this is an important question; it's not like "tall people" and "short people" are distinct clusters. There is going to be a spectrum, and you would be somewhere in the middle. But still, using labels is a convenient shorthand. So the thing that worries me is that if someone is optimizing for something different, they might reward other people for doing the same thing. The case that has been on my mind recently is where someone is a respected member of the community, but what they are doing is not optimal, and it would be awkward to point that out. But still necessary, even if it loses one brownie points socially. Overall, I don't really read minds, and I don't know what you would or wouldn't do.
4
NegativeNuno
I think this would work if one actually did it, but not if impact is distributed with long tails (e.g., power law) and people take offense to being accepted very little.

I may have misinterpreted what exactly the concept-shaped hole was. I still think I'm right about them having been surprised, though.

If it helps clarify, the community builders I'm talking about are some of the Berkeley(-adjacent) longtermist ones. As some sort of signal that I'm not overstating my case here, one messaged me to say that my post helped them plug a "concept-shaped hole", a la https://slatestarcodex.com/2017/11/07/concept-shaped-holes-can-be-impossible-to-notice/

[This comment is no longer endorsed by its author]
1
Eric Neyman
I may have misinterpreted what exactly the concept-shaped hole was. I still think I'm right about them having been surprised, though.

Great comment, I think that's right.

I know that "give your other values an extremely high weight compared with impact" is an accurate description of how I behave in practice. I'm kind of tempted to bite that same bullet when it comes to my extrapolated volition -- but again, this would definitely be biting a bullet that doesn't taste very good (do I really endorse caring about the log of my impact?). I should think more about this, thanks!

Yup -- that would be the limiting case of an ellipse tilted the other way!

The idea for the ellipse is that what EA values is correlated (but not perfectly) with my utility function, so (under certain modeling assumptions) the space of most likely career outcomes is an ellipse, see e.g. here.
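For readers curious about the modeling assumption behind the ellipse picture, a minimal sketch: if the two axes are modeled as jointly normal and imperfectly correlated, the likely outcomes form a tilted elliptical cloud (the density contours of a correlated bivariate normal are ellipses). The 0.7 correlation below is an arbitrary illustration value.

```python
import numpy as np

rng = np.random.default_rng(0)

rho = 0.7  # hypothetical correlation between "what EA values" (x) and "my utility" (y)
cov = [[1.0, rho], [rho, 1.0]]
samples = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

# The sample cloud is elliptical and tilted; its empirical correlation recovers rho.
print(round(float(np.corrcoef(samples[:, 0], samples[:, 1])[0, 1]), 2))  # ~0.7
```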

Note that the y-axis is extrapolated volition, i.e. what I endorse/strive for. Extrapolated volition can definitely change -- but I think by definition we prefer ours not to?

1
Gil
In that case I'm going to blame Google for defining volition as "the faculty or power of using one's will."  Or maybe that does mean "endorse"? Honestly I'm very confused, feel free to ignore my original comment.

Note that covid travel restrictions may be a consideration. For example, New Zealand's borders are currently closed to essentially all non-New Zealanders and are scheduled to remain closed to much of the world until July:

Historically, there have been ~24 Republicans vs ~19 Democrats as senators (and  1 independent) from Oregon, so partisan affiliation doesn't seem that important.

A better way of looking at this is the partisan lean of his particular district. The answer is D+7, meaning that in a neutral environment (i.e. an equal number of Democratic and Republican votes nationally), a Democrat would be expected to win this district by 7 percentage points.

This year is likely to be a Republican "wave" year, i.e. Republicans are likely to outperform Democrats (the party ... (read more)

Hi! I'm an author of this paper and am happy to answer questions. Thanks to Jsevillamol for the summary!

A quick note regarding the context in which the extremization factor we suggest is "optimal": rather than taking a Bayesian view of forecast aggregation, we take a robust/"worst case" view. In brief, we consider the following setup:

(1) you choose an aggregation method.

(2) an adversary chooses an information structure (i.e. joint probability distribution over the true answer and what partial information each expert knows) to make your aggregation method d... (read more)
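For readers unfamiliar with extremization, here is a hedged sketch of the "average the log-odds, then push away from 1/2" family of aggregators that this line of work studies. The factor d below is an arbitrary placeholder; the paper's specific aggregator, its optimal factor, and its worst-case guarantees are not reproduced here.

```python
import math

def extremize_logodds_mean(probs, d=1.5):
    """Average the experts' log-odds, then multiply by d > 1 to push the
    aggregate away from 1/2. The value d=1.5 is an arbitrary placeholder."""
    logits = [math.log(p / (1.0 - p)) for p in probs]
    z = d * sum(logits) / len(logits)
    return 1.0 / (1.0 + math.exp(-z))

print(extremize_logodds_mean([0.7, 0.8]))  # ~0.84, more extreme than the ~0.75 average
```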

Thanks for putting this together; I might be interested!

I just want to flag that if your goal is to avoid internships, then (at least for American students) I think the right time to do this would be late May-early June rather than late June-early July as you suggest on the Airtable form. I think the most common day for internships to start is the day after Memorial Day, which in 2022 will be May 31st. (Someone correct me if I'm wrong.)

3
trammell
Glad to hear you might be interested! Thanks for pointing this out. It's tough, because (a) as GrueEmerald notes below, at least some European schools end later, and (b) it will be easier to provide accommodation in Oxford once the Oxford spring term is over (e.g. I was thinking of just renting space in one of the colleges). Once the application form is up*, I might include a When2Meet-type thing so people can put exactly what weeks they expect to be free through the summer. *If this goes ahead; but there have been a lot of expressions of interest so far, so it probably will!
2
[anonymous]
I think late May is too early for most European students.

My understanding is that the Neoliberal Project is a part of the Progressive Policy Institute, a DC think tank (correct me if I'm wrong).

Are you guys trying to lobby for any causes, and if so, what has your experience been on the lobbying front? Are there any lessons you've learned that may be helpful to EAs lobbying for EA causes like pandemic preparedness funding?

Yes, lobbying officials is part of what we do.  We're trying to talk to officials about all the things we care about - taking action on climate change, increasing immigration, etc etc etc. Truthfully I don't have a ton of experience on this front yet - I've been part of the project since its inception in early 2017, but have only been formally employed by PPI for the last 8 months or so. So I'm not a fountain of wisdom on all the best lobbying techniques - this is somewhat beginner level analysis of the DC swamp.

One thing I've noticed is that an ounce... (read more)
