Oh that is annoying, thanks for pointing it out. I've just tried to use the new column width feature to fix it, but no luck.
it is good to omit doing what might perhaps bring some profit to the living, when we have in view the accomplishment of other ends that will be of much greater advantage to posterity.
- Descartes (1637)
I really think egoism strains to fit the data. From a comment on a deleted post:
...[in response to someone saying that self-sacrifice is necessarily about showing off and is thus selfish]:
How does this reduction [to selfishness] account for the many historical examples of people who defied local social incentives, with little hope of gain and sometimes even destruction?
(Off the top of my head: Ignaz Semmelweis, Irena Sendler, Sophie Scholl.)
We can always invent sufficiently strange posthoc preferences to "explain" any behaviour: but what do you gain in
This is a great question and I'm sorry I don't have anything really probative for you. Puzzle pieces:
I'm mostly talking not about infighting but about self-flagellation - still, glad you haven't seen the suffering I have, and I envy your chill.
You're missing a key fact about SBF, which is that he didn't "show up" from crypto. He started in EA and went into crypto. This dynamic raises other questions, even as it makes the EA leadership failure less simple / silly.
Agree that we will be fine, which is another point of the list above.
Yeah it's not fully analysed. See these comments for the point.
The first list of examples is to show that universal shame is a common feature of ideologies (descriptive).
The second list of examples is to show that most very well-regarded things are nonetheless extremely compromised, in a bid to shift your reference class, in a bid to get you to not attack yourself excessively, in a bid to prevent unhelpful pain and overreaction.
Good analysis. This post is mostly about the reaction of others to your actions (or rather, the pain and demotivation you feel in response) rather than your action's impact. I add a limp note that the two are correlated.
The point is to reset people's reference class and so salve their excess pain. People start out assuming that innocence (not-being-compromised) is the average state, but this isn't true, and if you assume this, you suffer excessively when you eventually get shamed / cause harm, and you might even pack it in.
"Bite it" = "everyone eventually ...
There's some therapeutic intent. I'm walking the line, saying people should attack themselves only a proportionate amount, against this better reference class: "everyone screws up". I've seen a lot of over the top stuff lately from people (mostly young) who are used to feeling innocent and aren't handling their first shaming well.
Yes, that would make a good followup post.
Good point, thanks (though I am way less sure of the EU's sign). That list of examples is serving two purposes, which were blended in my head until your comment:
You seem to be using compromised to mean "good but flawed", where I'm using it to mean "looks bad" without necessarily evaluating the EV.
Yet another lesson about me needing to write out my arguments explicitly.
Title: The long reflection as the great stagnation
Author: Larks
URL: https://forum.effectivealtruism.org/posts/o5Q8dXfnHTozW9jkY/the-long-reflection-as-the-great-stagnation
Why it's good: Powerful attack on a cherished institution. I don't necessarily agree on the first order, but on the second order people will act up and ruin the Reflection.
Title: Forecasting Newsletter: April 2222
Author: Nuno
URL: https://forum.effectivealtruism.org/posts/xnPhkLrfjSjooxnmM/forecasting-newsletter-april-2222
Why it's good: Incredible density of gags. Some of the in-jokes are so clever that I had to think all day to get them; some are so niche that no one except Nuno and the target could possibly laugh.
Agree about the contest. Something was submitted but it wasn't about blowup risk and didn't rise to the top.
Your reasoning in footnote 4 is sound, but note that practitioners often complain that OPT is much worse than GPT-3 (or even GPT-NeoX) in qualitative / practical terms. Benchmark goodharting is real.
(Even so, this might be goalpost shifting, since GPT3!2022 is a very different thing from GPT3!2020.)
Looks like we have a cost-saving way to prevent 7 billion male chick cullings a year.
I snipe at accelerationist anti-welfarists in the thread, but it's an empirical question whether removing horrifying parts of the horrifying system ends up delaying abolition and being net-harmful. It seems extremely unlikely (and assumes that one-shot abolition is possible) but I haven't modelled it.
I like all of your suggested actions. Two thoughts:
1) EA is both a set of strong claims about causes + an intellectual framework which can be applied to any cause. One explanation for what's happening is that we grew a lot recently, and new people find the precooked causes easier to engage with (and the all-important status gradient of the community points firmly towards them). It takes a lot of experience and boldness to investigate and intervene on a new cause.
I suspect you won't agree with this framing but: one way of viewing the play between these tw...
On AI quietism. Distinguish four things:
(4) is not a rational lack of concern about an uncertain or far-off risk: it's lack of caring, conditional on the risk being real.
Can there really be anyone in category (...
Ord's undergrad thesis is a tight argument in favour of enlightened argmax: search over decision procedures and motivations and pick the best of those instead of acts or rules.
3. Tarsney suggests one other plausible reason moral uncertainty is relevant: nonunique solutions leaving some choices undetermined. But I'm not clear on this.
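The "enlightened argmax" idea above (search over decision procedures rather than over acts or rules) can be sketched as a toy search. This is my own illustration, not from Ord's thesis; the procedure names and scoring are hypothetical:

```python
def enlightened_argmax(procedures, situations, outcome):
    """Pick the decision PROCEDURE (not the individual act) whose
    total outcome across situations is best. Each procedure maps a
    situation to an act; `outcome` scores (situation, act) pairs."""
    def score(proc):
        return sum(outcome(sit, proc(sit)) for sit in situations)
    return max(procedures, key=score)

# Toy example: a 'bold' procedure that scales with the stakes beats
# a 'cautious' one that always abstains, when acting pays off.
cautious = lambda s: 0
bold = lambda s: s
best = enlightened_argmax([cautious, bold], [1, 2, 3],
                          lambda sit, act: act)
```

The point of the sketch is just the type signature: the argmax ranges over whole policies, so it can endorse a procedure whose individual acts are sometimes suboptimal.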
Excellent comment, thanks!
Yes, wasn't trying to endorse all of those (and should have put numbers on their dodginess).
1. Interesting. I disagree for now but would love to see what persuaded you of this. Fully agree that softmax implies long shots.
2. Yes, new causes and also new interventions within causes.
3. Yes, I really should have expanded this, but was lazy / didn't want to disturb the pleasant brevity. It's only "moral" uncertainty about how much risk aversion you should have that changes anything. (à la this.)
4. Agree.
5. Agree.
6. I'm usin...
Not in this post; we just link to this one. By "principled" I just mean "not arbitrary, has a nice short derivation starting with something fundamental (like the entropy)".
Yeah, the Gittins stuff would be pitched at a similar level of handwaving.
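A toy sketch of the softmax point above (my own illustration, at the same level of handwaving): softmax is the maximum-entropy choice distribution for a given expected value, which is the "short derivation from the entropy" sense of principled, and it always assigns nonzero probability to long shots:

```python
import math

def softmax_policy(values, temperature=1.0):
    """Boltzmann/softmax choice probabilities over option values.
    Lower temperature -> closer to argmax; higher -> closer to uniform."""
    exps = [math.exp(v / temperature) for v in values]
    z = sum(exps)
    return [e / z for e in exps]

# Even a clearly worse option (value -2.0) keeps positive probability,
# which is the "softmax implies long shots" point.
probs = softmax_policy([1.0, 0.0, -2.0], temperature=1.0)
```

Unlike argmax, which puts all its mass on the best option, every option here gets explored in proportion to its exponentiated value.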
Looking back two weeks later, this post really needs
Yeah could be terrible. As such risks go it's relatively* well-covered by the military-astronomical complex, though events continue to reveal the inadequacy of our monitoring. It's on our Other list.
* This is not saying much: on the absolute scale of "known about" + "theoretical and technological preparedness" + "predictability" + "degree of financial and political support" it's still firmly mediocre.
We will activate for things besides x-risks. Besides the direct help we render, this is to learn about parts of the world it's difficult to learn about any other time.
Yeah, we have a whole top-level stream on things besides AI, bio, nukes. I am a drama queen so I want to call it "Anomalies" but it will end up being called "Other".
We're not really adding to the existing group chat / Samotsvety / Swift Centre infra at present, because we're still spinning up.
My impression is that Great Power stuff is unusually hard to influence from the outside with mere research and data. We could maybe help with individual behaviour recommendations (turning the smooth forecast distributions of others into expected values and go / no-go advice).
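The "forecast distribution into expected values and go / no-go advice" step can be sketched roughly like this. A hypothetical illustration, not an actual pipeline; the function names, payoff, and threshold are mine:

```python
def go_no_go(samples, payoff, cost, threshold=0.0):
    """Turn Monte Carlo draws from someone else's forecast
    distribution into an expected value and a go/no-go call.
    samples: draws of the outcome variable; payoff: maps an
    outcome to value; cost: fixed cost of acting."""
    ev = sum(payoff(s) for s in samples) / len(samples) - cost
    return ev, ("go" if ev > threshold else "no-go")

# E.g. draws of some outcome, identity payoff, cost of acting = 1.0:
ev, decision = go_no_go([0.0, 2.0, 4.0], payoff=lambda s: s, cost=1.0)
```

The substance is all in the upstream distribution; this step just collapses it into something an individual can act on.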
Vouching for this, it's a wonderful place to work and also to hang out.