All of nonn's Comments + Replies

Very cool!

random thought: could include some of Yoshua Bengio's or Geoffrey Hinton's writings/talks on AI risk concerns in week 10 (& could include LeCun for a counterpoint, to get all 3), since they're very well-cited academics & Turing Award winners for deep learning.

I haven't looked through their writings/talks to find the most directly relevant ones, but some examples: https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/ https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/

1
Calvin_Baker
7mo
Thanks for the recs! What's the LeCun you mention?

My experience is that it's more that group leaders & other students in EA groups might reward poor epistemics in this way.

And that when people are being more casual, it 'fits in' to say AI risk, & people won't press for reasons in those contexts as much as they would if you said something unusual.

Agreed, my experience with senior EAs in the SF Bay Area was often the opposite: I was pressed to explain why I'm concerned about AI risk & to respond to various counterarguments.

No, though maybe you're using the word "intrinsically" differently? For the (majority) consequentialist part of my moral portfolio: The main intrinsic bad is suffering, and wellbeing (somewhat broader) is intrinsically good.

I think any argument about creating people/etc. is instrumental: will they or won't they increase wellbeing? They can both potentially contain suffering/wellbeing themselves, and affect the world in ways that affect wellbeing/suffering now & in the future. This includes effects before they are born (e.g. on women's lives). TBH ... (read more)

1
Ariel Simnegar
1y
Thanks for this detail! Yeah, I agree that encouraging/supporting people having kids is a more effective approach, and that other things matter more from a total longtermist perspective. (In particular, if human extinction does occur in the near term, then factory farming plausibly outweighs everything good we've ever done. Either way, we have much to catch up on.) To be more precise on the question: do you think that, all else equal, choosing to have a child is better than choosing to abort, assuming that the child will live a net-good life (in expectation)? (This is what I was trying to capture with the word "intrinsic": without accounting for concerns about norms, opportunity costs, other interventions dominating, etc., i.e. as a unitary yes-or-no decision.) Your advice on optimization is definitely correct, and I have many regrets about the framing of this post, some of which I enumerate here.

I don't think near-term population is helpful for long-term population or wellbeing, e.g. >10,000 years from now. More likely a negative effect than a positive one imo, especially if the mechanism for trying to increase near-term population is to restrict abortion (this is not a random sample of lives!).

I also think it seems bad for the general civilization trajectory (partially norm-damaging, but mostly just direct effects on women & children), and probably bad for our ability to make investments in resilience & be careful with powerful new technology. These seem like the most important effects from a longtermist perspective, so I think abortion restriction is bad from a total-longtermist perspective.

0
Ariel Simnegar
1y
Understandable! Would you still say, though, that abortion is intrinsically morally bad? (As in the above, that doesn't at all mean you have to endorse involuntary methods of reducing it.)

I guess I did mean aggregate in the 'total' well-being sense. I just feel pretty far from neutral about creating people who will live wonderful lives, and also pretty strongly disagree with the belief that restricting abortion will create more total well-being in the long run (or the short run, tbh).

For total-view longtermism, I think the most important things are roughly: civilization is on a good trajectory, people are prudent/careful with powerful new technology, the world has less conflict, investments are made to improve resilience to large catastrophes, etc. Restr... (read more)

abortion is morally wrong is a direct logical extension of a longtermist view that highly values maximizing the number of people, on the assumption that the average existing person's life will have positive value

I'm a bit confused by this statement. Is a world where people don't have access to abortion likely to have more aggregate well-being in the very long run? Naively, it feels like the opposite to me.

To be clear, I don't think it's worth discussing abortion at length, especially considering bruce's comment. But I really don't think the number of people ... (read more)

2
lastmistborn
1y
I agree with everything you've said here. What I was saying is that, for the type of longtermism that assumes the average person's life will be of positive value, that it is morally good to maximize the total number of people in order to maximize total happiness, and that allowing a life to come into existence is as good as saving a life, abortion seems to be morally bad, unless you argue that banning abortion would have enough of a negative effect to outweigh the value of all the lives that would not otherwise have existed (which I think one could definitely argue). I say "type of longtermism" because there are definitely different approaches to longtermism and these assumptions are not representative of all of them, and I disagree with many of the assumptions here. I particularly disagree that total value or wellbeing, as opposed to aggregate as you mention in your comment, is a meaningful metric, but I realize there are different views on that.
0
Ariel Simnegar
1y
One could hypothetically believe that abortion is morally wrong, but that intervening to involuntarily reduce it is either:

  • Bad on net, because it damages the norm of personal autonomy, or
  • Insufficiently good on net, because there are better ways to increase the near-term population than by reducing abortion access.

So rejecting the implications you outlined doesn't necessarily mean rejecting the idea that abortion is intrinsically morally wrong.

Agree that was a weird example.

Other people around the group (e.g. many of the non-Stanford people who sometimes came by & worked at tech companies) are better examples. Several weren't obviously promising at the time, but are doing good work now.

Typo; I meant 'imo' (in my opinion).

nonn
2y · 24

I'm somewhat more pessimistic that disillusioned people have useful critiques, at least on average. EA asks people to swallow a hard pill: "set X is probably the most important stuff, by a lot", where X doesn't include that many things. I think this is correct (i.e. the set will be somewhat small), but it means that a lot of people's talents & interests probably aren't as [relatively] valuable as they previously assumed.

That sucks, and creates some obvious & strong motivated reasons to lean into not-great criticisms of set X. I don't even think th... (read more)

2
Arepo
2y
I would guess both that disillusioned people have low-value critiques on average, and that there are enough of them that if we could create an efficient filtering process, there would be gold in there. Though another part of the problem is that the most valuable people are generally the busiest, so when they decide they've had enough they just leave and don't put a lot of effort into giving feedback.
3
Callum Dyer
2y
What does 'iml' stand for?
nonn
2y · 21

I'd add a much more boring cause of disillusionment: social stuff.

It's not all that uncommon for someone to get involved with EA, make a bunch of friends, and then have those friends gradually get filtered by who gets accepted to prestigious jobs or does 'more impactful' things in the community's estimation (often genuinely more impactful!).

Then sometimes they just start hanging out with cooler people they meet at their jobs, or just get genuinely busy with work, while their old EA friends are left on the periphery (plus the gender imbalance piles relationship stuff on top). This happens in normal society too, but there seem to be more norms/taboos there that blunt the impact.

nonn
2y · 13

Your second question, "Will the potential negative press and association with Democrats be too harmful to the EA movement to be worth it?", seems to ignore that a major group EAs will be running against is Democrats in primaries.

So it's not only that you're creating large incentives for Republicans to attack EA; you're also creating them for e.g. progressive Democrats. See: Warren endorsing Flynn's opponent & somewhat attacking Flynn for crypto-billionaire sellout stuff.

That seems potentially pretty harmful too. It'd be much harder to be an active gr... (read more)

Random aside, but does the St. Petersburg paradox not just make total sense if you believe Everett & do a quantum coin flip? I.e. in half of the universes you die, & in half you more than double. From the perspective of all the things I might care about in the multiverse, this is just "make more stuff that I care about exist in the multiverse, with certainty".

Or more intuitively, "with certainty, move your civilization to a different universe alongside another prospering civilization you value, and make both more prosperous".

Or if you repeat it, you have "move all civilizations into a few giant universes, and make them dramatically more prosperous."

Which is clearly good under most views, right?
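
A minimal sketch of this branch accounting, in Python (illustrative only; the growth factor of 2.1 and the arbitrary 'value' units are my own assumptions, not from anything discussed here):

```python
# Toy accounting for repeated double-or-nothing quantum bets under Everett:
# each round, the civilization disappears from half of the branches (its measure halves)
# and its value grows by a factor g > 2 in the branches where it survives.

def branch_value(initial_value: float, growth: float, rounds: int):
    """Return (surviving measure, measure-weighted total value) after `rounds` bets."""
    surviving_measure = 0.5 ** rounds  # fraction of branches still containing the civilization
    value_in_survivors = initial_value * growth ** rounds
    return surviving_measure, surviving_measure * value_in_survivors

for n in (1, 5, 10):
    measure, total = branch_value(1.0, growth=2.1, rounds=n)
    print(f"after {n} rounds: surviving measure = {measure:.4f}, "
          f"total value across branches = {total:.2f}")
```

The measure-weighted total scales as (g/2)^n, so it grows with certainty even as the surviving measure shrinks toward zero, which is the "few giant universes" picture above.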

1
trait-feign
2y
I think this Everettian framing is useful and really probes at how we should think about probabilities outside of the quantum sense as well. So I would suggest your reasoning here holds for the standard coin-flip case too.

Another complication: we want to select for people who are good fits for our problems, e.g. math kids, philosophy research kids, etc. To some degree, we're selecting for people with personal-fun functions that match the shape of the problems we're trying to solve (where what we'd want them to do is pretty aligned with their fun).

I think your point applies to cause selection, "intervention strategy", or decisions like "moving to Berkeley". I'm confused more generally.

I'm confused about how to square this with specific counterexamples. Take theoretical alignment work: P(important safety progress) probably scales with time invested, but not by 100x if you double your work hours. Any explanations here?

Idk if this is because uncertainty/probabilistic stuff muddles the log picture. E.g. we really don't know where the hits are, so many things are 'decent shots'. Maybe after we know the outcomes, the outlier good things would be quite bad on the personal-liking front. But that doesn't sound exactly correct either.

nonn
2y · 60

Curious if you disagree with Jessica's key claim, which is "McKinsey << EA for impact"? I agree Jessica is overstating the case for "McKinsey <= 0", but it seems like the best case for McKinsey is still order(s) of magnitude less impact than EA.

Subpoints:

  • Current market incentives don't address large risk-externalities well, or appropriately weight the well-being of very poor people, animals, or the entire future.
  • McKinsey for earn-to-learn/give could theoretically be justified, but that doesn't contradict Jessica's point about spending money to get EAs
... (read more)
nonn
2y · 14

There were tons of cases from EAGx Boston (an area with lower covid case counts). I'm one of them. Idk the exact numbers, but >100 if I extrapolate from my EA friends.

Not sure whether this is good or bad tho, as the IFR is a lot lower now. Presumably lower long covid risk too, but hard to say.

An argument against this that doesn't seem directly considered here: veganism might turn some high-potential people off without compensatory benefits, and the very high base rate of non-veganism (~99% of Western people are non-vegan, IIRC) means this may matter even with relatively marginal effects.

Obviously many things can be mitigated significantly by being kind/accommodating (though at some level there's a little remaining implied "you are doing bad"). But even accounting for that, a few things remain despite accommodating. E.g.

  • People can feel vaguely outgroup
... (read more)
1
Hank_B
2y
This seems at least a bit different from going veg*n in "private", so to speak. If you stop eating meat and tell no one not immediately impacted by this choice, why would that scare people off from EA?

Granted, you seem to be talking about a large portion of EAs being veg*n, a large enough portion that meat is not served at events and a potential newcomer would feel like the only omnivore there. I think this cuts against EA organizations advocating for veg*nism and towards providing non-veg*n food at EA events, but not necessarily against one's own personal consumption choices.

Still wondering why I never see moral circle expansion advocates make the argument I made here.

That argument seems to avoid the suffering-focused problem where moral circle expansion doesn't address, or might even make worse, the worst suffering scenarios for the future (e.g. threats in multipolar futures). Namely, the argument I linked says that despite potentially increasing suffering risk, it also increases the value of good futures enough to be worth it.

TBC, I don't hold this view because I believe we need a solid "great reflection" to achieve the best futur... (read more)

Yeah, I agree that's pretty plausible. That's what I was trying to make an allowance for with "I'd also distinguish vacations from...", but it's worth mentioning more explicitly.

2
Kirsten
3y
Sorry I missed that! My bad

For the sake of argument, I'm suspicious of some of the galaxy takes.

Excellent prioritization and execution on the most important parts. If you try to do either of those while tired, you can really fuck it up and lose most of the value.

I think relatively few people advocate working to the point of sacrificing sleep; the prominent hard-work advocate (& kinda jerk) Rabois strongly pushes for sleeping enough & getting enough exercise.
Beyond that, it's not obvious that working less hard results in better prioritization or execution. A naive look at th... (read more)

4
JP Addison
3y
This is a good response.
2
Kirsten
3y
One thing that hasn't been mentioned here is vacation time and sabbaticals, which would presumably be very useful for a fresh perspective!
nonn
3y · 21

Minor suggestion: those forms should send you your responses after you submit, or give the option "Would you like to receive a copy of your responses?"

Otherwise, it may be hard to confirm whether a submission went through, or the details of what you submitted.

4
abergal
3y
Changed, thanks for the suggestion!

I think that depends a lot on framing. E.g. if this is just a prediction of future events, it sounds less objectionable to other moral systems imo, because it's not making any moral claims (perhaps some by implication, as this forum leans utilitarian).

In the case of making predictions, I'd bias strongly toward saying things I think are true even if they end up being inconvenient, provided they're action-relevant (most controversial topics are not action-relevant, so I think people should avoid them). But this might be important for how to weigh different risks a... (read more)

Agreed; I tried to add more clarification below. I'll try to avoid this going forward, maybe unsuccessfully.

Tbh, I mean a bit of both definitions (Will's views are quite surprising to me, which is why I want to know more), but mostly the former (i.e. stating it's close to 0% or 100%).

nonn
5y · 11
I sometimes find the terminology of "no x-risk", "going well" etc.

Agree on "going well" being under-defined. I was mostly using that for brevity, but probably more confusion than it's worth. A definition I might use is "preserves the probability of getting to the best possible futures", or even better if it increases that probability. Mainly because from an EA perspective (even if people are around) if we've locked in a substantially suboptimal moral situation, we've effectively lost most possible va... (read more)

nonn
5y · 21

If you believe "<1% X", that implies ">99% ¬X", so you should believe that too. But if you think ">99% ¬X" seems too confident, then you should apply modus tollens and moderate your "<1% X" belief. When other people give e.g. 30% X, that only implies 70% ¬X, which seems more justifiable to me.
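
Spelling out the complement arithmetic in symbols (just a restatement of the point above, nothing added):

```latex
% For any claim X, the probability axioms give P(X) + P(\neg X) = 1, so
\[
P(X) < 0.01 \iff P(\neg X) > 0.99,
\qquad\text{while}\qquad
P(X) = 0.3 \iff P(\neg X) = 0.7.
\]
% Asserting "<1% chance of X" is the same assertion as ">99% chance of not-X";
% if the latter looks overconfident, modus tollens says to revise the former upward.
```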

I use AGI as an example just because if it happens, it seems more obviously transformative & existential than biorisk, where it's harder to reason about whether people survive. And because Will's views seem to diverge quite stron... (read more)

I disagree with your implicit claim that Will's views (which I mostly agree with) constitute an extreme degree of confidence. I think it's a mistake to approach these questions with a 50-50 prior. Instead, we should consider the base rate for "events that are at least as transformative as the industrial revolution".

That base rate seems pretty low. And that's not actually what we're talking about; we're talking about AGI, a specific future technology. In the absence of further evidence, a prior of <10% on "AGI tak... (read more)

nonn
5y · 46

This is just a first impression, but I'm curious about what seems like a crucial point: your beliefs seem to imply extremely high confidence that either general AI won't happen this century, or that AGI will go 'well' by default. I'm very curious to see what guides your intuition there, or whether there's some other way that first-pass impression is wrong.

I'm curious about similar arguments that apply to bio & other plausible x-risks too, given what's implied by a low x-risk credence.

The general background worldview that motivates this credence is that predicting the future is very hard, and we have almost no evidence that we can do it well. (Caveat: I don’t think we have great evidence that we can’t do it either, though.) When it comes to short-term forecasting, the best strategy is to use reference-class forecasting (‘outside view’ reasoning; often continuing whatever trend has occurred in the past), and make relatively small adjustments based on inside-view reasoning. In the absence of anything better, I think we should do the same f... (read more)

3
SiebeRozendal
5y
Why do his beliefs imply extremely high confidence? Why do the higher estimates from other people not imply that? I'm curious what's going on here epistemologically.
nonn
6y · 13

I think there’s a significant[8] chance that the moral circle will fail to expand to reach all sentient beings, such as artificial/small/weird minds (e.g. a sophisticated computer program used to mine asteroids, but one that doesn’t have the normal features of sentient minds like facial expressions). In other words, I think there’s a significant chance that powerful beings in the far future will have low willingness to pay for the welfare of many of the small/weird minds in the future.[9]

I think it’s likely that the powerful beings in the far future (a

... (read more)