I'm a researcher at the London School of Economics and Political Science, working at the intersection of moral psychology and philosophy.
I liked this comment.
Another way to see it is that there are two different sorts of arguments for prioritising existential risk reduction - an empirical argument (the risk is large) and a philosophical/ethical argument (even small risks are hugely harmful in expectation, because of the implications for future generations). (Of course this is a bit schematic, but I think the distinction may still be useful.)
I guess the fact that EA is quite a philosophical movement may be one reason why there's been a substantial (but by no means exclusive) focus on the philosophical argument. It's also easier to convey quickly, whereas the empirical argument requires much more time.
To actually achieve the goals of longtermism it seems like MUCH more work needs to be happening in translational research to communicate academic x-risk work into policymakers' language for instrumental ends, not necessarily in strictly 'correct' ways.
This sentence wasn't quite clear to me.
Yes, this is my impression as well, based on recently having booked a day 2 test.
Also, one may want to check the reliability of the providers via rating sites (one cheap one I looked at had a terrible rating).
However, one should also note that the rules are about to change:
From 24 October fully vaccinated passengers and most under 18s arriving in England from countries not on the red list can take a cheaper lateral flow test, on or before day 2 of their arrival into the UK. These can be booked from 22 October.
Yeah. Could be good to study EA success in different areas more systematically, as we get more empirical data.
It's also been used outside of genetics by others. I find the EA usage unproblematic.
Yes, I think it's stronger evidence of EAs being good at making a lot of money (or of it being easier than expected to make a lot of money) than of EAs being super talented in general (though it's some evidence of that as well).
I don't know much about Wave, but to me it seems like an additional data point, albeit a smaller one (meaning there isn't just one case).
Afaict there is a difference between the Long Reflection and Hanson's discussion of brain emulations, in that Hanson focuses more on prediction, whereas the debate on the Long Reflection is more normative (ought it to happen?).
Neoliberals tend to talk about issues that many people take an interest in to a greater extent than EAs do. I would guess that that's an important part of the explanation of the Neoliberals' greater success on Twitter.
There seems to be some premise missing in this argument.
To me, it seems that the question whether professional distance is good or not is mostly orthogonal to the question why EA isn't a solved problem already.
I agree that it would be good to describe this distinction in the Wiki. Possibly it could be part of the Epistemic deference entry, though I don't have a strong view on that.