My experience is that it's more that group leaders & other students in EA groups might reward poor epistemics in this way.
And that when people are being more casual, it 'fits in' to say AI risk, so people won't press for reasons in those contexts as much as they would if you said something unusual.
Agreed. My experience with senior EAs in the SF Bay Area was often the opposite: I was pressed to explain why I'm concerned about AI risk & to respond to various counterarguments.
No, though maybe you're using the word "intrinsically" differently? For the (majority) consequentialist part of my moral portfolio: the main intrinsic bad is suffering, and wellbeing (a somewhat broader notion) is intrinsically good.
I think any argument about creating people/etc. is instrumental: will they or won't they increase wellbeing? They can both potentially contain suffering/wellbeing themselves, and affect the world in ways that affect wellbeing/suffering now & in the future. This includes effects before they are born (e.g. on women's lives). TBH ...
I don't think near-term population is helpful for long-term population or wellbeing, e.g. >10,000 years from now. More likely a negative effect than a positive one imo, especially if the mechanism for trying to increase near-term population is to restrict abortion (this is not a random sample of lives!)
I also think it seems bad for the general civilization trajectory (partially norm-damaging, but mostly just direct effects on women & children), and probably bad for our ability to invest in resilience & be careful with powerful new technology. These seem like the most important effects from a longtermist perspective, so I think abortion-restriction is bad from a total-longtermist perspective.
I guess I did mean aggregate in the 'total' well-being sense. I just feel pretty far from neutral about creating people who will live wonderful lives, and also pretty strongly disagree with the belief that restricting abortion will create more total well-being in the long run (or the short run, tbh).
For total-view longtermism, I think the most important things are roughly: civilization is on a good trajectory, people are prudent/careful with powerful new technology, the world is lower-conflict, investments are made to improve resilience to large catastrophes, etc. Restr...
That abortion is morally wrong is a direct logical extension of a longtermist view that highly values maximizing the number of people, on the assumption that the average existing person's life will have positive value.
I'm a bit confused by this statement. Is a world where people don't have access to abortion likely to have more aggregate well-being in the very long run? Naively, it feels like the opposite to me
To be clear I don't think it's worth discussing abortion at length, especially considering bruce's comment. But I really don't think the number of people ...
Agree that was a weird example.
Other people around the group (e.g. many of the non-Stanford people who sometimes came by & worked at tech companies) are better examples. Several weren't obviously promising at the time, but are doing good work now.
I'm somewhat more pessimistic that disillusioned people have useful critiques, at least on average. EA asks people to swallow a hard pill: "set X is probably the most important stuff, by a lot", where X doesn't include that many things. I think this is correct (i.e. the set will be somewhat small), but it means that a lot of people's talents & interests probably aren't as [relatively] valuable as they previously assumed.
That sucks, and creates some obvious & strong motivated reasons to lean into not-great criticisms of set X. I don't even think th...
I'd add a much more boring cause of disillusionment: social stuff
It's not all that uncommon for someone to get involved with EA, make a bunch of friends, and then have the friend group gradually get filtered by who gets accepted to prestigious jobs or does 'more impactful' things in the community's estimation (often genuinely more impactful!)
Then sometimes they just start hanging out with cooler people they meet at their jobs, or just get genuinely busy with work, while their old EA friends are left on the periphery (+ gender imbalance piles on relationship stuff). This happens in normal society too, but there seem to be more norms/taboos there that blunt the impact.
Your second question "Will the potential negative press and association with Democrats be too harmful to the EA movement to be worth it?" seems to ignore that a major group EAs will be running against will be democrats in primaries.
So it's not only that you're creating large incentives for Republicans to attack EA; you're also creating them for e.g. progressive Democrats. See: Warren endorsing Flynn's opponent & somewhat attacking Flynn for crypto-billionaire-sellout stuff.
That seems potentially pretty harmful too. It'd be much harder to be an active gr...
Random aside, but does the St. Petersburg paradox not just make total sense if you believe Everett & do a quantum coin flip? i.e. in 1/2 universes you die, & in 1/2 you more than double. From the perspective of all things I might care about in the multiverse, this is just "make more stuff that I care about exist in the multiverse, with certainty"
Or more intuitively, "with certainty, move your civilization to a different universe alongside another prospering civilization you value, and make both more prosperous".
Or if you repeat it, you have "move all civilizations into a few giant universes, and make them dramatically more prosperous."
Which is clearly good under most views, right?
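A minimal numerical sketch of that intuition (my own toy numbers, e.g. a 2.2x payoff standing in for "more than double"; not from the original discussion): the per-branch survival probability collapses, but the branch-weighted total keeps growing whenever the payoff multiplier exceeds 2.

```python
# Toy model: repeat a "die with p=1/2, multiply value by 2.2 with p=1/2" gamble.
# From a single-branch view, survival probability shrinks toward 0;
# summed over all Everett branches, the total value keeps growing.

def repeated_gamble(start_value=1.0, multiplier=2.2, rounds=10):
    for n in range(1, rounds + 1):
        p_survive = 0.5 ** n                                  # only 1 in 2^n branches survives every flip
        value_if_alive = start_value * multiplier ** n
        branch_weighted_total = p_survive * value_if_alive    # = (multiplier / 2) ** n
        print(f"round {n:2d}: P(survive) = {p_survive:.4f}, "
              f"value if alive = {value_if_alive:12.2f}, "
              f"total across branches = {branch_weighted_total:.3f}")

repeated_gamble()
```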
Another complication: we want to select for people who are good fits for our problems, e.g. math kids, philosophy research kids, etc. To some degree, we're selecting for people with personal-fun functions that match the shape of the problems we're trying to solve (where what we'd want them to do is pretty aligned with their fun)
I think your point applies to cause selection, "intervention strategy", or decisions like "moving to Berkeley". I'm confused more generally.
I'm confused about how to square this with specific counterexamples. Take theoretical alignment work: P(important safety progress) probably scales with time invested, but doubling your work hours doesn't multiply it 100x. Any explanations here?
Idk if this is because uncertainty/probabilistic stuff muddles the log picture. E.g. we really don't know where the hits are, so many things are 'decent shots'. Maybe after we know the outcomes, the outlier good things would look quite bad on the personal-liking front. But that doesn't sound exactly correct either.
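One toy model of why doubling hours shouldn't 100x the probability (my own sketch with made-up numbers, not anything from the thread): if each focused hour has a small independent chance of producing an important result, P(at least one hit) is roughly linear in hours, so doubling hours roughly doubles it.

```python
# Toy model (hypothetical numbers): each focused hour has an independent small
# probability p of producing an important safety insight.

def p_hit(hours, p_per_hour=0.0001):
    # P(at least one hit) = 1 - P(no hit in any hour)
    return 1 - (1 - p_per_hour) ** hours

for h in (1000, 2000, 4000):
    print(f"{h} hours -> P(>=1 important result) = {p_hit(h):.3f}")
```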
Curious if you disagree with Jessica's key claim, which is "McKinsey << EA for impact"? I agree Jessica is overstating the case for "McKinsey <= 0", but it seems like the best case for McKinsey is still order(s) of magnitude less impact than EA.
Subpoints:
There were tons of cases from EAGx Boston (an area with lower COVID case counts). I'm one of them. I don't know the exact numbers, but >100 if I extrapolate from my EA friends.
Not sure whether this is good or bad though, as IFR is a lot lower now. Presumably lower long COVID risk too, but hard to say.
An argument against this that doesn't seem directly considered here: veganism might turn some high-potential people off without compensatory benefits, and the very high base rate of non-veganism (~99% of Western people are non-vegan, IIRC) means this may matter even if the effects are relatively marginal.
Obviously many things can be mitigated significantly by being kind/accommodating (though at some level there's a little remaining implied "you are doing bad"). But even accounting for that, a few things remain. E.g.
Still wondering why I never see moral circle expansion advocates make the argument I made here
That argument seems to avoid the suffering-focused problem whereby moral circle expansion doesn't address, or might even worsen, the worst suffering scenarios for the future (e.g. threats in multipolar futures). Namely, the argument I linked says that despite potentially increasing suffering risk, it also increases the value of good futures enough to be worth it.
TBC, I don't hold this view because I believe we need a solid "great reflection" to achieve the best futur...
Yeah I agree that's pretty plausible. That's what I was trying to make an allowance for with "I'd also distinguish vacations from...", but worth mentioning more explicitly.
For the sake of argument, I'm suspicious of some of the galaxy takes.
Excellent prioritization and execution on the most important parts. If you try to do either of those while tired, you can really fuck it up and lose most of the value
I think relatively few people advocate working to the point of sacrificing sleep; e.g. prominent hard-work advocate (& kinda jerk) Rabois strongly pushes for sleeping enough & getting enough exercise.
Beyond that, it's not obvious that working less hard results in better prioritization or execution. A naive look at th...
Minor suggestion: those forms should send a confirmation after you submit, or offer the option "Would you like to receive a copy of your responses?"
Otherwise it can be hard to confirm whether a submission went through, or to check the details of what you submitted.
I think that depends a lot on framing. E.g. if this is just a prediction of future events, it sounds less objectionable to other moral systems imo b/c it's not making any moral claims (perhaps some by implication, as this forum leans utilitarian)
In the case of making predictions, I'd strongly bias toward saying things I think are true even if they end up being inconvenient, provided they are action-relevant (most controversial topics are not action-relevant, so I think people should avoid them). But this might be important for how to weigh different risks a...
Agree, tried to add more clarification below. I'll try to avoid this going forward, maybe unsuccessfully.
Tbh, I mean a bit of both definitions (Will's views are quite surprising to me, which is why I want to know more), but mostly the former (i.e. stating it's close to 0% or 100%).
I sometimes find the terminology of "no x-risk", "going well", etc. under-defined.
Agree on "going well" being under-defined. I was mostly using that for brevity, but probably more confusion than it's worth. A definition I might use is "preserves the probability of getting to the best possible futures", or even better if it increases that probability. Mainly because from an EA perspective (even if people are around) if we've locked in a substantially suboptimal moral situation, we've effectively lost most possible va...
If you believe "<1% X", that implies ">99% ¬X", so you should believe that too. But if you think >99% ¬X seems too confident, then you should modus tollens and moderate your <1% X belief. When other people give e.g. 30% X, that only implies 70% ¬X, which seems more justifiable to me.
I use AGI as an example just because if it happens, it seems more obviously transformative & existential than biorisk, where it's harder to reason about whether people survive. And because Will's views seem to diverge quite stron...
I disagree with your implicit claim that Will's views (which I mostly agree with) constitute an extreme degree of confidence. I think it's a mistake to approach these questions with a 50-50 prior. Instead, we should consider the base rate for "events that are at least as transformative as the industrial revolution".
That base rate seems pretty low. And that's not actually what we're talking about: we're talking about AGI, a specific future technology. In the absence of further evidence, a prior of <10% on "AGI tak...
This is just a first impression, but I'm curious about what seems like a crucial point: your beliefs seem to imply extremely high confidence either that general AI won't happen this century, or that AGI will go 'well' by default. I'm very curious what guides your intuition there, or whether that first-pass impression is wrong in some other way.
I'm curious about similar arguments that apply to bio & other plausible x-risks too, given what's implied by a low x-risk credence.
The general background worldview that motivates this credence is that predicting the future is very hard, and we have almost no evidence that we can do it well. (Caveat I don’t think we have great evidence that we can’t do it either, though.) When it comes to short-term forecasting, the best strategy is to use reference-class forecasting (‘outside view’ reasoning; often continuing whatever trend has occurred in the past), and make relatively small adjustments based on inside-view reasoning. In the absence of anything better, I think we should do the same f...
I think there’s a significant[8] chance that the moral circle will fail to expand to reach all sentient beings, such as artificial/small/weird minds (e.g. a sophisticated computer program used to mine asteroids, but one that doesn’t have the normal features of sentient minds like facial expressions). In other words, I think there’s a significant chance that powerful beings in the far future will have low willingness to pay for the welfare of many of the small/weird minds in the future.[9]
...I think it’s likely that the powerful beings in the far future (a
Very cool!
Random thought: you could include some of Yoshua Bengio's or Geoffrey Hinton's writings/talks on AI risk concerns in week 10 (& could include LeCun as a counterpoint to get all 3), since they're very well-cited academics & Turing Award winners for deep learning.
I haven't looked through their writings/talks to find the most directly relevant ones, but some examples: https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/ https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/