All of Nathan_Barnard's Comments + Replies

The value of x-risk reduction

Yeah, this is just about the constant-risk case; I probably should have said explicitly that it doesn't cover the time-of-perils case, although the same mechanism with neglectedness should still apply.

How close to nuclear war did we get over Cuba?

Wow, that's really interesting, I'll look more deeply into that. It's definitely not what I've read happened, but at this point I think it's probably worth me reading the primary sources rather than relying on books.

How close to nuclear war did we get over Cuba?

I have no specific source saying explicitly that there wasn't a plan to use nuclear weapons in response to a tactical nuclear weapon. However, I do know what the decision-making structure for the use of nuclear weapons was. In a case where there hadn't been a decapitating strike on civilian administrators, the President was presented with plans from the SIOP (the US nuclear plan), which were exclusively plans based around a strategy of destruction of the Communist bloc. The SIOP was the US nuclear plan, but triggers for nuclear war weren't in it anywhere. When individ... (read more)

1Rob Mitchell2mo
Thanks for the detailed response and for linking to that other post. I've been dealing with chickenpox in the house so this is probably later and briefer than the analysis deserves. +1 to 'Command and Control' and 'Nuclear Folly' as well worth reading - between them, enough to dispel any illusions that the destructive power of nuclear weapons was matched with processes to avoid going wrong, whether by accident or human folly. I'll check out 'The Bomb'. The worrying aspect for me is the combination of leeway for particular commanding officers and environmental factors that reduce the ability of those officers to know what's going on, and/or to exercise rational judgement. The sub is the most obvious example of this. That's a pretty strong argument in favour of escalation to nuclear exchange! I think it's also other situations taking up the bandwidth of intelligence services and politicians, introducing uncertainty and increasing the number of locations where normal accidents or individuals doing something stupid could increase tensions. For China, it came to nothing, but it was one more thing taking up attention - and if you're dealing with one nuclear-armed Communist country, it's not ideal to have another one, with an unpredictable leader, invading another country...
When is AI safety research harmful?

Thanks for your feedback! Unfortunately I am a smart junior person, so it looks like we know who'll be doing the copy editing.

1deep2mo
Ha! Personally, I've gotten a lot of value in having a buddy look over my work and chat with me about it -- a fresh perspective is really useful, not just for copyedits but also for building on my first thoughts. If you don't yet know people you could ask for this, you might find it valuable to reach out to SERI, CERI, or other community orgs that aim to help junior x-risk researchers. (presumably ZERI and JERI are next.) Happy to chat more via DM if that would be useful :)
The Mystery of the Cuban missile crisis

Yes! 

I think three really good books are One Minute to Midnight, Nuclear Folly, and Gambling with Armageddon. Lots of other books have shortish sections, but these three focus almost exclusively on the crisis.

This article also deals with the issue from the same perspective I've presented here: https://www.jstor.org/stable/2148197

3Arjun Panickssery2mo
What conclusions do they come to?
How much current animal suffering does longtermism let us ignore?

I think there is something to the claim being made in the post, which is that longtermism as it currently stands is mostly about increasing the number of people in the future living good lives. It seems genuinely true that most longtermists are prioritising creating happiness over reducing suffering. This is the key factor which pushes me towards longtermist s-risk.

FTX/CEA - show us your numbers!

I think the key point here is that it is unusually easy to recruit EAs at uni compared to when they're at McKinsey. I think it's unclear a) whether going to McKinsey is among the best things for a student to do and b) how much less likely it is that an EA student goes to McKinsey. I think it's pretty unlikely that going to McKinsey is the best thing to do, but I also think that EA student groups have a relatively small effect on how often students go into elite corporate jobs (a bad thing from my perspective), at least in software engineering.

I'm not sure how clear it is that it's much better for people to hear about EA at university, especially given there is a lot more outreach and onboarding at the university level than for professionals.

FTX/CEA - show us your numbers!

I'm obviously not speaking for Jessica here, but I think the reason the comparison is relevant is that the high spend by Goldman etc. suggests that spending a lot on recruitment at unis is effective.

If this is the case - which I think is also supported by the success of well-funded groups with full- or part-time organisers - and EA is in an adversarial relationship with these large firms, which I think is largely true, then it makes sense for EA to spend similar amounts of money trying to attract students.

The relevant comparison is then between the value of the marginal student recruited and malaria nets etc.

3Lucas Lewit-Mendes2mo
Thanks Nathan, that would make a lot of sense, and motivates the conversation about whether CEA can realistically attract as many people through advertising as Goldman etc. I guess the question is then whether: a) Goldman's activities are actually effective at attracting students; and b) this is a relevant baseline prior for the types of activities that local EA groups undertake with CEA's funding (e.g. dinners for EA scholars students)
Free-spending EA might be a big problem for optics and epistemics

I'm going through this right now. There have just clearly been times, both as a group organiser and in my personal life, when I should have just spent/taken money and would in hindsight clearly have had higher impact - e.g. buying uni textbooks so I could study with less friction and get better grades.

AMA: Joan Rohlfing, President and COO of the Nuclear Threat Initiative

I view India-Pakistan as the pair of nuclear-armed states most likely to have a nuclear exchange. Do you agree with this, and if so, what should this imply about our priorities in the nuclear space?

1Joan Rohlfing7mo
Unfortunately I think there are multiple pathways to nuclear use or an exchange involving several pairings or groupings of states with nuclear weapons including: US-Russia and scenarios that could also involve the UK and France along with the US; US-China; India- Pakistan; China – India; DPRK – US; and potentially Iran and other countries should Iran decide to build a nuclear weapon, not to mention the potential for terrorists to get hold of nuclear weapons or materials. So I believe our priorities in the nuclear space must be first to build awareness of the true risks and recognize that the risk is increasing that nuclear weapons will be used again; second to demand that governments pursue policies and concrete actions that will reduce the risks of nuclear use; and third build political will to ultimately end nuclear weapons as a threat by eliminating them and implementing safeguards for a world in which nuclear technology exists and will continue to be used for civilian purposes, but where possession of nuclear weapons is verifiably banned.
AMA: Joan Rohlfing, President and COO of the Nuclear Threat Initiative

As long as China and Russia have nuclear weapons, do you think it's valuable for the US to maintain a nuclear arsenal? What about the UK and France?

1Joan Rohlfing7mo
The reality is that as long as other states possess nuclear weapons, the United States will continue to maintain nuclear forces for deterrence. I believe their purpose should be only to deter the use of nuclear weapons by others, and that the US should maintain the minimum number of forces we believe we need to serve that purpose. At the same time, the US must vigorously pursue further nuclear reductions and limits with Russia, and eventually, multilateral arms control to reduce and ultimately eliminate the nuclear forces of all states that have them including China. We must also strengthen the nuclear nonproliferation regime to prevent the emergence of new nuclear weapons states. It is essential to return to an agreement with Iran to make sure it does not develop nuclear weapons, and we need to reinvigorate diplomacy with North Korea to stop, reverse and ultimately eliminate its nuclear weapons program while providing security and economic incentives in return.
Reducing long-term risks from malevolent actors

So the model is more like: during the Russian revolution, for instance, there was a 50/50 chance that whichever leader came out of it was very strongly selected to have dark triad traits, but this is not the case for the contemporary CCP.

Yeah, seems plausible. 99:1 seems very, very strong. If it were 9:1, that would mean we're in a roughly 1/1000 world; 1:2 would mean roughly a 1/10^5 world. Yeah, I don't have a good enough knowledge of rulers before they gained close to absolute power to be able to evaluate that claim. Off the top of my head, Lenin, Prince Lvov (the latter led th... (read more)
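The arithmetic here may be easier to follow spelled out. Below is a minimal sketch, assuming a 1% base rate for top-percentile dark triad traits and treating each ratio as a Bayes-factor-style selection effect (reading "1:2" as a 2:1 effect); the function name and framing are illustrative, not from the original thread.

```python
# Sketch of the selection-effect arithmetic (illustrative assumptions):
# a leader drawn from the general population has a 1% chance of being in
# the top percentile of dark triad traits; unstable power struggles then
# multiply the prior odds by a selection factor.

def p_top_percentile(selection_factor: float, base_rate: float = 0.01) -> float:
    """Posterior probability a leader is top-percentile, via the odds form of Bayes."""
    prior_odds = base_rate / (1 - base_rate)  # 1:99 for a 1% base rate
    posterior_odds = prior_odds * selection_factor
    return posterior_odds / (1 + posterior_odds)

for factor in (99, 9, 2):
    p = p_top_percentile(factor)
    # p**3 = chance that all three of Hitler, Stalin and Mao are top-percentile
    print(f"{factor}:1 selection -> P(one leader) = {p:.3f}, P(all three) = {p**3:.1e}")
```

This reproduces the figures in the thread: 99:1 selection gives P(all three) ≈ 1/8, 9:1 gives ≈ 6×10^-4 (the roughly-1/1000 world), and 2:1 gives ≈ 8×10^-6 (roughly 1/10^5).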

Open Thread: Spring 2022

I'm currently doing research on this! The big, big driver is age; income is pretty small comparatively, and the education effect goes away when you account for income and age. At least this is what I get from the raw Health Survey for England data lol.

Reducing long-term risks from malevolent actors

It seems like a strange claim both that the atrocities committed by Hitler, Stalin and Mao were substantially more likely because they had dark triad traits and that when doing genetic selection we're interested in removing the upper tail - in the article, the top 1%. To take this somewhat naively, if we think that the Holocaust and Mao's and Stalin's terror-famines wouldn't have happened unless all three leaders exhibited dark tetrad traits in the top 1%, this implies we're living in a world that comes about with probability 1/10^6, i.e. 1 in a m... (read more)

3Thomas Kwa7mo
I think an implicit assumption of the article is that people with dark triad traits are more likely to gain power. If unstable politics can create a 99:1 selection effect toward the Hitlers and Maos in the top %ile of dark triad, then they come to power in half of the nations with unstable politics. I can imagine this being true if people with dark triad traits are way more power-seeking and this is what matters in unstable political climates.
Sasha Chapin on bad social norms in EA

I definitely feel this as a student. I care a lot about my impact, and I know intellectually that being really good at being a student is the best thing I can do for long-term impact. Emotionally, though, I find it hard that the way I'm having my impact is so nebulous and also doesn't take very much work to do well.

We need alternatives to Intro EA Fellowships

I organise EA Warwick and we've had decent success so far with concepts workshops as an alternative to fellowships. They're much less of a time commitment, and after the concepts workshop people seem to be basically bought into EA and want to get involved more heavily. We've only done 3 this term so far, so we definitely don't know yet how this will turn out.

If You're So Smart, Why Aren't You Governor Of California? (Scott Alexander: Astral Codex Ten)

Yes, I kind of did see this coming (although not in the US), and I've been working on a forum post about it for like a year - now I will finish it.

2Nathan Young10mo
Happy to read over it.
A formalization of neglectedness

Yeah, I wrote it in Google Docs and then couldn't figure out how to transfer the del symbols and subscripts to the forum.

More EAs should consider “non-EA” jobs

I think this is correct, and that EA thinks about neglectedness wrongly. I've been meaning to formalise this for a while and will do that now.

1tamgent10mo
I wonder if others' understanding of neglectedness is different from my own. I think I've always implicitly thought of neglectedness as how many people are trying to do the exact thing you're trying to do to solve the exact problem you're working on, and therefore think there are loads of neglected opportunities everywhere, mostly at non-EA orgs. But now reading this thread I got confused and checked the community definition here [https://forum.effectivealtruism.org/tag/neglectedness], which says it's about dedicating resources to a problem - which is quite different and helps me better understand this thread. It's funny that after all these years I've had a different concept in my head to everyone else and didn't realise. Anyway, if neglectedness includes resources dedicated to the problem, then a predominantly non-EA org like a government body might be dedicating lots of resources to a problem but not making much progress on it. In my view, this is a neglected opportunity. Maybe we should distinguish between neglected in terms of crowdedness vs. opportunities available? Also, what are others' understandings of neglectedness?
6freedomandutility10mo
Same! I think neglectedness is more useful for identifying impactful “just add more funding” style interventions, but is less useful for identifying impactful careers and other types of interventions since focusing on neglectedness systematically misses high leverage careers and interventions.
Nathan_Barnard's Shortform

If preference utilitarianism is correct, there may be no utility function that accurately describes the true value of things. This will be the case if people's preferences aren't continuous or aren't complete - for instance, if they're expressed as a vector. This generalises to other forms of consequentialism that don't have a utility function baked in.
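A standard illustration of the continuity point - a sketch added here, not something from the original shortform: lexicographic preferences are complete and transitive but not continuous, and provably admit no utility representation.

```latex
% Lexicographic preferences on $\mathbb{R}^2$: the first coordinate dominates,
% with ties broken by the second.
(x_1, y_1) \succ (x_2, y_2) \iff x_1 > x_2 \ \text{or} \ (x_1 = x_2 \text{ and } y_1 > y_2)
% No $u \colon \mathbb{R}^2 \to \mathbb{R}$ can represent $\succ$: it would have
% to assign each $x$ the non-degenerate interval $(u(x,0), u(x,1))$, and distinct
% values of $x$ would get disjoint intervals, giving uncountably many disjoint
% non-degenerate intervals in $\mathbb{R}$, which cannot exist.
```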

1jkmh1y
What do you mean by correct? When you say "this generalizes to other forms of consequentialism that don't have a utility function baked in", what does "this" refer to? Is it the statement: "there may be no utility function that accurately describes the true value of things" ? Do the "forms of consequentialism that don't have a utility function baked in" ever intend to have a fully accurate utility function?
Nathan_Barnard's Shortform

A 6-line argument for AGI risk

(1) Sufficient intelligence has capabilities that are ultimately limited by physics and computability

(2) An AGI could be sufficiently intelligent that it's limited by physics and computability, but humans can't be

(3) An AGI will come into existence

(4) If the AGI's goals aren't the same as humans', human goals will only be met for instrumental reasons, and the AGI's goals will be met

(5) Meeting human goals won't be instrumentally useful in the long run for an unaligned AGI

(6) It is more morally valuable for human goals to be met than an AGI's goals

Non-consequentialist longtermism

Thank you, those both look like exactly what I'm looking for.

1Ramiro1y
You're welcome. Please write a post (even if a shortform) about it someday. Something that attracts me in this literature (particularly in Scheffler) is how they can pick different intuitions that often collide with premises / conclusions of reasoning based on something like the rational agent model (i.e., VnM decision theory). I think that, even for a philosophical theorist, it could be useful to know more about how prevalent these intuitions are, and what possible (social or psychological) explanations could be offered for them. (I admit that, just like the modus ponens of one philosopher might be the modus tollens of the other, someone's intuition might be someone else's cognitive bias) For instance, Scheffler mentions we (at least me and him) have a "primitive" preference for humanity's existence (I think by "humanity" he usually means rational agents similar to us - being made extinct by Trisolarans would be bad, but not as bad as the end of all conscious rational agents); we usually prefer that humanity exists for a long time, rather than a short period, even if both timelines have the same amount of utility - which seems to imply some sort of negative discount rate of the future, so violating usual "pure time preference" reasoning. Besides, we prefer world histories where there's a causal connection between generations / individuals, instead of possible worlds with the same amount of utility (and the same length in time) where communities spring and get extinct without any relation between them - I admit this sounds weird, but I think it might explain my malaise towards discussions on infinite ethics [https://link.springer.com/chapter/10.1007/978-94-017-3530-8_4].
Non-consequentialist longtermism

But thank you for replying; in hindsight my reply seems a bit dismissive :)

Non-consequentialist longtermism

Not really, because that paper is essentially just making the consequentialist claim that axiological longtermism implies that the actions we should take are those which help the long-run future the most. The Good is still prior to the Right.

1Alex HT1y
https://globalprioritiesinstitute.org/andreas-mogensen-staking-our-future-deontic-long-termism-and-the-non-identity-problem/
Introducing Rational Animations

I'm worried about associating effective altruism and rationality closely in public. I think rationality is reasonably likely to make enemies. The existence of r/sneerclub is maybe the strongest evidence of this, but so is the general dislike that lots of people have for Silicon Valley and for ideas that have a very Silicon Valley feel to them. I'm unsure to what degree people hate Dominic Cummings because he's a rationality guy, but I think it's some evidence that rationality is good at making enemies. Similarly, the whole NY Times-Scott Alexander craziness makes me think there's the potential for lots of people to be really anti-rationality.

4Writer1y
I don't think we should let the most unreasonable haters maneuver what we say, but if it is of any reassurance the plan is to have a channel with its own legs. It will not be a core brand thing to be associated with EA or LW*. That said, don't discount the value of the connection rationality-EA. It's probably true that EA is rationality applied to altruism, and many of the most valuable EAs are also LW people. *Upon reflection, this is probably too early to say and not true right now. What I can say is that at least the channel probably won't be linked to the forums for the reasons already stated in the post.
9Linch1y
I think this is a reasonable worry, but also something of a lost cause.
Nathan_Barnard's Shortform

I think empirical claims can be discriminatory. I was struggling with how to think about this for a while, but I think I've come to two conclusions. The first way empirical claims can be discriminatory is if they express discriminatory claims with no evidence, with people refusing to change their beliefs based on evidence. The other way they can be discriminatory is when talking about the definitions of socially constructed concepts, where we can, in some sense and in some contexts, decide what is true.

Concerns with ACE's Recent Behavior

I think the relevant split is between people who have different standards and different preferences for enforcing discourse norms. The ideal-type position on the SJ side is that a significant number of claims relating to certain protected characteristics are beyond the pale and should be subject to strict social sanctions. The Facebook group seems to be on the other side of this divide.

Voting reform seems overrated

I think using Bayesian regret misses a number of important things. 

It's somewhat unclear whether it means utility in the sense of a function that maps preference relations to real numbers, or utility in the axiological sense. If it's the former, then I think it misses a number of very important things. The first is that preferences are changed by the political process. The second is that people have stable preferences for terrible things like capital punishment.

If it means it in the axiological sense then I don't think we have strong reason to be... (read more)

Voting reform seems overrated

Yeah, I mean, this is a pretty testable hypothesis and I'm tempted to actually test it. My guess is that the level of vote splitting an electoral system produces won't have an effect, and that whether or not voting is compulsory, the number of young people, the level of education, and the level of trust will explain most of the variation in rich democracies.

Nathan_Barnard's Shortform

Two books I recommend on structural causes of, and solutions to, global poverty. The Bottom Billion by Paul Collier focuses on the question of how you can get failed and failing states in very poor countries to middle-income status, with a particular focus on civil war; it also looks at some solutions and thinks about the second-order effects of aid. How Asia Works by Joe Studwell focuses on the question of how you can get poor countries with high-quality (or potentially high-quality) governance and reasonably good political economy to become high-income countries. It focuses exclusively on the Asian developmental-state model and compares it with neoliberal-ish models in other parts of Asia that are now mostly middle-income countries.

Nathan_Barnard's Shortform

Maybe this isn't something people on the forum do, but it is something I've heard some EAs suggest. People often have a problem when they become EAs: they now believe this really strange thing that is potentially quite core to their identity, and that can feel quite isolating. A suggestion I've heard is that people should find new, EA friends to solve this problem. It is extremely important that this does not come off as saying that people should cut ties with friends and family who aren't EAs. It is extremely important that this is not what you mean. It would be deeply unhealthy for us as a community if this became common.

Nathan_Barnard's Shortform

Yeah, that sounds right. I don't even know how many people are working on strategy based around India becoming a superpower, which seems completely plausible.

Nathan_Barnard's Shortform

Is there anyone doing research from an EA perspective on the impact of Nigeria becoming a great power by the end of this century? Nigeria is projected to be the 3rd largest country in the world by 2100 and appears to be experiencing something like exponential growth in GDP per capita. I'm not claiming that Nigeria being a great power in 2100 is a likely outcome, but nor does it seem impossible. It isn't clear to me that Nigeria has dramatically worse institutions than India, but I expect India to be a great power by 2100. It seems like it'd be really valuable for someone to do some work on this, given it seems really neglected.

4Vaidehi Agarwalla1y
I agree! I think there's some issue here (don't know if there's a word for it) where maybe some critical mass of effort on foreign powers is focused on China, leaving other countries with a big deficit or something. I'm not sure what the solution is here, perhaps other than to make some kind of "the case for becoming a [country X] specialist" for a bunch of potentially influential countries.
6evelynciara1y
I don't know, but I think it would be great to look into. There was a proposal to make a "Rising Powers" or "BRICS" tag [https://forum.effectivealtruism.org/posts/rxbLqMDhd4832WYit/propose-and-vote-on-potential-tags?commentId=KGPuyPQcCrwGpWkC7] , but the community was most interested in making one for China. I'd like to see more discussion of other rising powers, including the other BRICS countries.
"Why Nations Fail" and the long-termist view of global poverty

I think China is basically in a similar situation to Prussia/Germany from 1848 to 1914. The revolutions of 1848 were unsuccessful in both Prussia and the independent South German states, but they gave the aristocratic elites one hell of a fright. The formal institutions of government didn't change very much, nor did who was running the show - in Prussia and then Germany, the aristocratic-military Junker class. They still put people they didn't like in prison sometimes and still had kings with a large amount of formal power. However, they liberalised pret... (read more)