Thanks! Fixed
Wow, that's really interesting, I'll look more deeply into that. It's definitely not what I've read happened, but at this point I think it's probably worth me reading the primary sources rather than relying on books.
I have no specific source saying explicitly that there wasn't a plan to use nuclear weapons in response to a tactical nuclear weapon. However, I do know what the decision-making structure for the use of nuclear weapons was. In a case where there hadn't been a decapitating strike on civilian administrators, the President was presented with plans from the SIOP (the US nuclear plan), which were exclusively plans based around a strategy of destruction of the Communist bloc. Triggers for nuclear war weren't anywhere in the SIOP. When induvid... (read more)
Thanks for your feedback! Unfortunately I am a smart junior person, so it looks like we know who'll be doing the copy editing.
Yeah I think that's very reasonable
Yes!
I think three really good books are One Minute to Midnight, Nuclear Folly, and Gambling with Armageddon. Lots of other books have shortish sections, but these three focus almost completely on the crisis.
They also deal with the issue from the same perspective I've presented here.
I think there is something to the claim being made in the post, which is that longtermism as it currently stands is mostly about increasing the number of people in the future living good lives. It seems genuinely true that most longtermists are prioritising creating happiness over reducing suffering. This is the key factor that pushes me towards longtermist s-risk.
I think the key point here is that it is unusually easy to recruit EAs at university compared to when they're at McKinsey. I think it's unclear a) whether going to McKinsey is among the best things for a student to do and b) how much less likely it is that an EA student goes to McKinsey. I think it's pretty unlikely that going to McKinsey is the best thing to do, but I also think that EA student groups have a relatively small effect on how often students go into elite corporate jobs (a bad thing from my perspective), at least in software engineering.
I'm not sure how clear it is that it's much better for people to hear about EA at university, especially given there is a lot more outreach and onboarding at the university level than for professionals.
I'm obviously not speaking for Jessica here, but I think the reason the comparison is relevant is that the high spend by Goldman etc. suggests that spending a lot on recruitment at universities is effective.
If this is the case, which I think is also supported by the success of well-funded groups with full- or part-time organisers, and EA is in an adversarial relationship with these large firms, which I think is largely true, then it makes sense for EA to spend similar amounts of money trying to attract students.
The relevant comparison is then between the value of the marginal student recruited and malaria nets etc.
I'm going through this right now. There have just clearly been times, both as a group organiser and in my personal life, when I should have just spent/taken money and in hindsight would clearly have had higher impact, e.g. buying uni textbooks so I can study with less friction and get better grades.
I view India and Pakistan as the pair of nuclear-armed states most likely to have a nuclear exchange. Do you agree with this, and if so, what should this imply about our priorities in the nuclear space?
As long as China and Russia have nuclear weapons, do you think it's valuable for the US to maintain a nuclear arsenal? What about the UK and France?
So the model is more like: during the Russian revolution, for instance, there's a 50/50 chance that whichever leader came out of it is very strongly selected to have dark triad traits, but this is not the case for the contemporary CCP.
Yeah, seems plausible. 99:1 seems very, very strong. If it were 9:1, that means we're in a 1/1000 world; 1:2 means approximately 1/10^5. Yeah, I don't have a good enough knowledge of rulers before they gained close to absolute power to be able to evaluate that claim. Off the top of my head, Lenin, Prince Lvov (the latter led th... (read more)
I'm currently doing research on this! The big driver is age; income is pretty small comparatively, and the education effect goes away when you account for income and age. At least this is what I get from the raw Health Survey for England data lol.
It seems like a strange claim both that the atrocities committed by Hitler, Stalin, and Mao were substantially more likely because they had dark triad traits, and that when doing genetic selection we're interested in removing the upper tail (in the article, the top 1%). To take this somewhat naively, if we think that the Holocaust and Mao's and Stalin's terror-famines wouldn't have happened unless all three leaders exhibited dark tetrad traits in the top 1%, this implies we're living in a world that comes about with probability 1/10^6, i.e 1 in a m... (read more)
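To make the naive arithmetic here explicit: under the (strong, and purely illustrative) assumption that each leader is an independent draw from the population, the chance that all three fall in the top 1% is just the product of the base rates. A minimal sketch:

```python
# Naive calculation: the probability that three independently drawn leaders
# all fall in the top 1% of dark-triad traits.
base_rate = 0.01   # assumed share of the population in the top 1%
n_leaders = 3      # Hitler, Stalin, Mao

p_all_top = base_rate ** n_leaders
print(p_all_top)   # ~1e-06, i.e. about 1 in a million
```

With a weaker cutoff, say the top 10% (9:1 against per leader), the same calculation gives (0.1)^3 = 1/1000, matching the figures discussed above.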
I definitely feel this as a student. I care a lot about my impact, and I know intellectually that being really good at being a student is the best thing I can do for long-term impact. Emotionally, though, I find it hard that the way I'm having my impact is so nebulous and also doesn't take very much work to do well.
I organise EA Warwick, and we've had decent success so far with concepts workshops as an alternative to fellowships. They're much less of a time commitment, and after the concepts workshop people seem to be basically bought into EA and want to get involved more heavily. We've only done three this term so far, so we definitely don't know how this will turn out.
Thanks :)
Yes, I kind of did see this coming (although not in the US), and I've been working on a forum post for about a year, and now I will finish it.
Yeah, I wrote it in Google Docs and then couldn't figure out how to transfer the del and suffixes to the forum.
I think this is correct and EA thinks about neglectedness wrong. I've been meaning to formalise this for a while and will do that now.
If preference utilitarianism is correct, there may be no utility function that accurately describes the true value of things. This will be the case if people's preferences aren't continuous or aren't complete, for instance if they're expressed as a vector. This generalises to other forms of consequentialism that don't have a utility function baked in.
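A toy illustration of the incompleteness point, under the hypothetical assumption that preferences are vector-valued and one option is preferred only when it is at least as good on every dimension: some pairs come out incomparable, and any single real-valued utility function would force a comparison the preferences themselves don't make.

```python
# Toy model: vector-valued preferences compared by component-wise dominance.
# This yields a partial order, which a single real-valued utility function
# cannot represent, since real numbers are always totally ordered.

def strictly_preferred(a, b):
    """a is preferred to b iff a is at least as good on every
    dimension and strictly better on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

# Hypothetical outcomes scored on two dimensions (e.g. health, wealth):
option_a = (3, 1)
option_b = (1, 3)

print(strictly_preferred(option_a, option_b))  # False
print(strictly_preferred(option_b, option_a))  # False
# Neither dominates: the options are incomparable. Any utility function u
# would impose u(option_a) >= u(option_b) or vice versa, adding information
# the underlying (incomplete) preferences don't contain.
```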
A 6 line argument for AGI risk
(1) Sufficient intelligence has capabilities that are ultimately limited by physics and computability
(2) An AGI could be sufficiently intelligent that it's limited by physics and computability, but humans can't be
(3) An AGI will come into existence
(4) If the AGI's goals aren't the same as humans', human goals will only be met for instrumental reasons and the AGI's goals will be met
(5) Meeting human goals won't be instrumentally useful in the long run for an unaligned AGI
(6) It is more morally valuable for human goals to be met than an AGI's goals
Thank you, those both look like exactly what I'm looking for
But thank you for replying; in hindsight my reply seems a bit dismissive :)
Not really, because that paper is essentially just making the consequentialist claim that axiological longtermism implies that the actions we should take are those which help the long-run future the most. The Good is still prior to the Right.
Hi Alex, the link isn't working
I'm worried about associating effective altruism and rationality closely in public. I think rationality is reasonably likely to make enemies. The existence of r/sneerclub is maybe the strongest evidence of this, but so is the general dislike that lots of people have for Silicon Valley and ideas that have a very Silicon Valley feel to them. I'm unsure to what degree people hate Dominic Cummings because he's a rationality guy, but I think it's some evidence that rationality is good at making enemies. Similarly, the whole NY Times/Scott Alexander craziness makes me think there's the potential for lots of people to be really anti-rationality.
I think empirical claims can be discriminatory. I was struggling with how to think about this for a while, but I think I've come to two conclusions. The first way I think empirical claims can be discriminatory is if they express discriminatory claims with no evidence, and people refuse to change their beliefs based on evidence. The other way they can be discriminatory is when talking about the definitions of socially constructed concepts, where we can, in some sense and in some contexts, decide what is true.
I think the relevant split is between people who have different standards and different preferences for enforcing discourse norms. The ideal-type position on the SJ side is that a significant number of claims relating to certain protected characteristics are beyond the pale and should be subject to strict social sanctions. The Facebook group seems to be on the other side of this divide.
I think using Bayesian regret misses a number of important things.
It's somewhat unclear whether it means utility in the sense of a function that represents preference relations with real numbers, or utility in the axiological sense. If it's the former, then I think it misses a number of very important things. The first is that preferences are changed by the political process. The second is that people have stable preferences for terrible things like capital punishment.
If it means it in the axiological sense, then I don't think we have strong reason to be... (read more)
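For readers unfamiliar with the metric: Bayesian regret is usually estimated by Monte Carlo simulation over randomly drawn voter utilities, where a voting rule's regret is the expected gap between the socially optimal candidate's total utility and the elected candidate's. A minimal sketch, under a deliberately toy assumption of i.i.d. uniform utilities and plurality voting (names and setup are mine, not from any particular study):

```python
import random

def bayesian_regret_plurality(n_voters=99, n_candidates=3, trials=2000, seed=0):
    """Estimate the Bayesian regret of plurality voting, assuming
    i.i.d. uniform voter utilities (a toy utility model)."""
    rng = random.Random(seed)
    total_regret = 0.0
    for _ in range(trials):
        # utils[v][c] = voter v's utility for candidate c
        utils = [[rng.random() for _ in range(n_candidates)]
                 for _ in range(n_voters)]
        # Plurality: each voter votes for their favourite candidate.
        votes = [0] * n_candidates
        for u in utils:
            votes[u.index(max(u))] += 1
        winner = votes.index(max(votes))
        # Social utility of each candidate = sum over all voters.
        social = [sum(u[c] for u in utils) for c in range(n_candidates)]
        total_regret += max(social) - social[winner]
    return total_regret / trials

print(bayesian_regret_plurality())  # typically a small positive number
```

Note how this framing bakes in exactly what the comment objects to: the simulation takes voters' utilities as fixed inputs and as normatively authoritative, so it cannot register preferences being changed by the political process, or stable preferences for bad outcomes.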
Yeah, I mean this is a pretty testable hypothesis, and I'm tempted to actually test it. My guess is that the level of vote splitting the electoral system has won't have an effect, and that whether or not voting is compulsory, the number of young people, the level of education, and the level of trust will explain most of the variation in rich democracies.
Two books I recommend on structural causes of and solutions to global poverty. The Bottom Billion by Paul Collier focuses on the question of how you can get failed and failing states in very poor countries to middle-income status, with a particular focus on civil war. It also looks at some solutions and thinks about the second-order effects of aid. How Asia Works by Joe Studwell focuses on the question of how you can get poor countries with high-quality (or potentially high-quality) governance and reasonably good political economy to become high-income countries. It focuses exclusively on the Asian developmental-state model and compares it with neoliberal-ish models in other parts of Asia that are now mostly middle-income countries.
Maybe this isn't something people on the forum do, but it is something I've heard some EAs suggest. People often have a problem when they become EAs: they now believe this really strange thing that is potentially quite core to their identity, and that can feel quite isolating. A suggestion I've heard is that people should find new, EA friends to solve this problem. It is extremely important that this does not come off as saying that people should cut ties with friends and family who aren't EAs. It is extremely important that this is not what you mean. It would be deeply unhealthy for us as a community if this became common.
Yeah, that sounds right. I don't even know how many people are working on strategy based around India becoming a superpower, which seems completely plausible.
Is there anyone doing research from an EA perspective on the impact of Nigeria becoming a great power by the end of this century? Nigeria is projected to be the third-largest country in the world by 2100 and appears to potentially be undergoing exponential growth in GDP per capita. I'm not claiming that it's a likely outcome that Nigeria is a great power in 2100, but nor does it seem impossible. It isn't clear to me that Nigeria has dramatically worse institutions than India, but I expect India to be a great power by 2100. It seems like it'd be really valuable for someone to do some work on this, given it seems really neglected.
I think China is basically in a similar situation to Prussia/Germany from 1848 to 1914. The revolutions of 1848 were unsuccessful in both Prussia and the independent South German states, but they gave the aristocratic elites one hell of a fright. The formal institutions of government didn't change very much, nor did who was running the show: in Prussia and then Germany, the aristocratic-military Junker class. They still put people they didn't like in prison sometimes and still had kings with a large amount of formal power. However, they liberalised pret... (read more)
Yeah, this is just about the constant-risk case. I probably should have said explicitly that it doesn't cover the time of perils, although the same mechanism with neglectedness should still apply.