All of Lumpyproletariat's Comments + Replies

It's from the paper "Some Limits to Global Ecophagy" (which he's cited in this context before): https://lifeboat.com/ex/global.ecophagy

1
Muireall
7mo
I see, thanks! Section 8.2, "Gray Dust":

When I say that there's a seventy percent chance of something, that specific number carries a very specific meaning: there is a 67% chance that it is the case.

(I checked my calibration online just now.)

It's not some impossible skill to get decent enough calibration.
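For concreteness, here is a minimal sketch of what such an online calibration check computes (the example predictions below are invented for illustration): you record predictions as (stated probability, outcome) pairs, group them by stated probability, and compare the stated number with the observed frequency.

```python
# Minimal calibration-check sketch: compare stated confidence with
# observed frequency. The prediction data here is purely illustrative.
from collections import defaultdict

predictions = [
    (0.7, True), (0.7, False), (0.7, True),   # things I said were 70% likely
    (0.9, True), (0.9, True), (0.9, False),   # things I said were 90% likely
]

buckets = defaultdict(list)
for stated_p, happened in predictions:
    buckets[stated_p].append(happened)

for stated_p, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated_p:.0%}: happened {observed:.0%} of the time "
          f"({len(outcomes)} predictions)")
```

A well-calibrated forecaster's stated and observed numbers track each other across buckets; the gap between them is what these checks report back.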

Your post begins with,

I do not believe this interpretation is correct.

And ends with,

To be fair, upon reading it again

If in the writing of a comment you realize that you were wrong, you can just say that.

The EA Forum has recently had some very painful experiences where members of the community jumped to conclusions and tried to oust people on very flimsy evidence, and now we're seeing upvotes from people who are sick of that dynamic.

LessWrong commenters did a better job of navigating accusations, waiting for evidence, and downvoting low-quality combativeness. People running off half-cocked hasn't had such disastrous effects there, so there aren't as many people who are currently sick of it.

Upvoted.

I'm in strong agreement with point two and in agreement with point four. I think these are things that more people should keep in mind while putting together microcultures and they are things I worry about frequently.

I'm also in favor of point one for... basically all social groups and microcultures which aren't EA. But it wouldn't work for EA. EA is more public than a boardgame club, and many load-bearing people in EA are also public figures. Public figures are falsely accused of assault, constantly.

None of this was news to the people who use LessWrong. 

The time to have a conversation about what went wrong and what a community can do better is immediately after you learn that the thing happened. If you search for the names of the people involved, you'll see that LessWrong did that at length.

The worst possible time to bring the topic up again is when someone writes a misleading article for the express purpose of hurting you, an article which was not written to be helpful and purposefully lacks the context it would require in order to be helpful. Why... (read more)

I'm worried about this a non-zero amount.

But in the longer run I'm relatively optimistic about most futures where humans survive and continue making decisions. The future will last a very long time, and it's not uncommon for totalitarian governments to liberalize as decades or centuries wear on. Where there is life, there is hope.

The Bloomberg piece was not an update on how misconduct has happened in EA for anyone who had previously been paying attention.

I'm strongly downvoting the parent comment for now, since I don't think it should be particularly visible. I'll reverse the downvote if you release the rejection letter and it is as you've represented. 

One of the comments Ivy was responding to there began "I am encouraging you to try to exercise your empathetic muscles and understand..." 

And the comment thread we are in, started by someone who named their burner account "Eugenics-Adjacent", began "Sadly I fear stories like this are lost on the devoted EA crowd here..."
 

I agree that posts on the EA forum should be kind and assume good faith.

I agree that we should be aiming for excellence.

If having many examples of behavior X within a group doesn't show that the group is better or worse than average at X - if you'd expect to see the same list in either case - then being presented with such a list has given you zero evidence on which to update.

They would have written the same article whether behavior X was half as common or twice as common or vanishingly rare. They would have written the same article whether things were handled well or poorly, as shown by their framing things mi... (read more)
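In Bayesian terms: a list of examples only shifts your credence to the extent that it is more likely under one hypothesis than the other. If the list would look roughly the same whether X is common or rare in the group, the likelihood ratio is about 1 and the posterior odds stay at the prior odds:

$$
\frac{P(\text{X common} \mid \text{list})}{P(\text{X rare} \mid \text{list})}
= \underbrace{\frac{P(\text{list} \mid \text{X common})}{P(\text{list} \mid \text{X rare})}}_{\approx\, 1}
\times \frac{P(\text{X common})}{P(\text{X rare})}
$$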

In the absence of evidence that rationalism is uniquely good at dealing with sexual harassment (it isn't), the prior assumption about the level of misconduct should be "average", not "excellent". Which means that there is room for improvement.

Even if these stories do not update your beliefs about the level of misconduct in the communities, they do give you information about how misconduct is happening, and point to areas that can be improved. I must admit I am baffled as to why the immediate response seems to be mostly about attacking the media, instead of trying to use this new information to figure out how to protect your community. 

Mentioning that in the article would have defeated the purpose of writing it, for the person who wrote it. 

The "chinese robber fallacy" is being overstretched, in my opinion. All it says is that having many examples of X behaviour within a group doesn't necessarily prove that X is worse than average within that group.  But that doesn't mean it isn't worse than average. I could easily imagine the catholic church throwing this type of link out in response to the first bombshell articles about abuse. 

Most importantly, we shouldn't be aiming for average, we should be aiming for excellence. And I think the  poor response to a lot of the incidents described is pretty strong evidence that excellence is not being achieved on this matter. 

I strong upvoted your comment because I disagreed that it should be at negative forum karma.

1
CG
1y
I just realized that I forgot to respond earlier, but your consideration and transparent explanation are appreciated. 

Provocation can shock people out of their normal way of seeing the world into looking at some fact in a different light. This seems to be roughly what Bostrom was saying in the first paragraph of his 1996 email. However, in the case of that email, it's unclear what socially valuable fact he was trying to shock people into seeing in a new way.


Bostrom's email was in response to someone who made the point you do here about provocation sometimes making people view things in a new light. The person who Bostrom was responding to advocated saying things in a blun... (read more)

Interesting! I admit I didn't go and read the original discussion thread, so thanks for that context. To the extent that Bostrom was arguing against being needlessly shocking, he was kind of already making the same point that his critics have been making: don't say needlessly shocking things. He didn't show enough sensitivity/empathy in the process of presenting the example and explaining why it was bad, but he was writing a quick email to friends, not a carefully crafted political announcement intended to be read by thousands of people.

Here are the last four things I remember seeing linked as supporting evidence in casual conversation on the EA forum, in no particular order:

https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/?commentId=HebnLpj2pqyctd72F - link to Scott Alexander, "We have to stop it with the pointless infighting or it's all we will end up doing," is 'do x'-y if anything is. (It also sounds like a perfectly reasonable thing to say and a perfectly reasonable way to say it.)

https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/?commentId=SCfBodrdQYZBA6RBy - se... (read more)

Ah, I hadn't meant to use "vetting stage" as a term of art.

Gains from trade, and agglomeration effects, and economies of scale. Being effective is useful for doing good, having a lot of close friends and allies is useful for being effective.

I think it's pretty obvious at this point that Tegmark and FLI were seriously wronged, but I barely care about any wrong done to them and am largely uninterested in the question of whether it was wildly disproportionate or merely sickeningly disproportionate.

I care about the consequences of what we've done to them.

I care about how, in order to protect themselves from this community, the FLI is

working hard to continue improving the structure and process of our grantmaking processes, including more internal and (in appropriate cases) external review.

... (read more)
9
Un Wobbly Panda
1y
Getting to one object level issue: If what happened was that Max Tegmark or FLI gets many dubious grant applications, and this particular application made it a few steps through FLI's processes before it was caught, then expo.se's story and the negative response you object to on the EA forum would be bad, destructive and false. If this was what happened, it would absolutely deserve your disapproval and alarm. I don't think this is what happened. What we know is:

* An established (though hostile) newspaper gave an account with actual quotes from Tegmark that contradict his apparent actions
* The bespoke funding letter, signed by Tegmark, explicitly promised funding ("approved a grant") conditional on registration of the charity
* Tegmark hired a lawyer

When Tegmark edited his comment with more content, which simply disavowed funding extremist groups, I'm surprised by how positive the reception of this edit was. I'm further surprised by the reaction and changing sentiment on the forum in response to this post, which simply presents an exonerating story. That story is directly contradicted by the signed statement in the letter itself.

Contrary to the top level post, it is not standard practice to hand out signed declarations of financial support, with wording like "approved a grant", if substantial vetting remains. Also, it's extremely unusual for any non-profit to hire a lawyer to explain that a prospective grantee failed vetting in the application process. We also haven't seen any evidence that FLI actually communicated a rejection.

Expo.se seems to have a positive record - even accepting the aesthetic here that newspapers or journalists are untrustworthy, it's costly for an outlet to outright lie or misrepresent facts.

There are other issues with Tegmark's/FLI's statements (e.g. deflections about the lack of direct financial benefit to his brother, not addressing the material support the letter provided for registration/the reasonable su

I barely give a gosh-guldarn about FLI or Tegmark outside of their (now reduced) capacity to reduce existential risk.

 

Obviously I'd rather bad things not happen to people and not happen to good people in particular, but I don't specifically know anyone from FLI and they are a feather on the scales next to the full set of strangers who I care about.

2
Un Wobbly Panda
1y
If Tegmark or FLI was wronged in the way your comments and others imply, you are correct and justified in your beliefs. But if the apology or the current facts do not make that status clear, there's an object level problem and it's bad to be angry that they are wronged, or build further arguments on that belief.

Eliezer is an incredible case of hero-worship - it's become the norm to just link to jargon he created as though it's enough to settle an argument.

I think that you misunderstand why people link to things.

If someone didn't get why I feel morally obligated to help people who live in distant countries, I would likely link them to Singer's drowning child thought experiment. Either during my explanation of how I feel, or in lieu of one if I were busy. 

This is not because I hero-worship Singer. This is not because I think his posts are scripture. This is be... (read more)

2
Arepo
1y
There's a world of difference between the link-phrases 'here's an argument about why you should do x' and 'do x'. Only Eliezer seems to regularly merit the latter.

It did not make it past the vetting stage. 

They did not award the grant.

2
JWS
1y
FWIW, by FLI's own admission this is false - though perhaps you would call stage 5 (see below) the vetting stage. In section 4) "What was the meaning of FLI's letter of intent?", FLI lay out 7 general stages for grant decision-making. They say "This proposal made it through 4) in August, then was rejected in November during 5), never reaching 6) or 7)."

Where Stage 2 was:

2) Evaluation and vetting

And Stage 5 was:

5) Further due diligence on grantee

So it would be more accurate to say that it made it past initial vetting, but not further due diligence, and no grant was awarded.

There's an angry top-level post about evaporative cooling of group beliefs in EA that I haven't written yet, and won't until it would no longer be an angry one. That might mean that the best moment has passed, which will make me sad for not being strong enough to have competently written it earlier. You could describe this as my having been chilled out of the discourse, but I would instead describe it as my politely waiting until I am able and ready to explain my concerns in a collected and rational manner.

I am doing this because I care about carefully art... (read more)

4
Un Wobbly Panda
1y
I think you are upset because FLI or Tegmark was wronged. Would you consider hearing another perspective about this?

For better or worse, most of the world runs on persuasion, and PR matters. The nuanced truth doesn't matter that much for social reality, and EA should ideally be persuasive and control social reality.
 

I think the extent to which nuanced truth does not matter to "most of the world" is overstated. 

I additionally think that EA should not be optimizing for deceiving people who belong to the class "most of the world".

Both because it wouldn't be useful if it worked (realistically most of the world has very little they are offering) and because it woul... (read more)

I'd like to ask people not to downvote titotal's comment below zero, because that also hides RobBensinger's timeline. I had to strong upvote the parent comment to make the timeline visible again.

At the time of my writing this comment, the parent was at 25 karma and -31 agreement karma. 

Seeing as Jim was absolutely correct, I think that the people who dismissed them out of hand should reflect on what manner of reasoning led them to do so.

EDIT: posted this before I saw that Ic had already made the same point.

I had to draft and re-draft the parent comment to write it without cursing. I am crying angry tears right now. Both are deeply out of character for me. 

I have been worn down.

8) What have we learned from this and how can we improve our grantmaking process?

The way we see it, we rejected a grant proposal that deserved to be rejected, and challenging, reasonable questions have been asked as to why we initially considered it and didn’t reject it earlier. We deeply regret that we may have inadvertently compromised the confidence of our community and constituents. This causes us huge distress, as does the idea that FLI or its personnel would somehow align with ideologies to which we are fundamentally opposed. We are working hard

... (read more)

The FLI did nothing wrong.

I don't completely agree: grantmaking organizations shouldn't issue grant intent letters which imply this level of certainty before completing their evaluation. I expect one outcome here will be that FLI changes how they phrase letters they send at this stage to be clearer about what they actually represent, and this will be a good thing on its own where it helps grantees better understand where they are in the process and how confident to be about incoming funds.

I'm also not convinced that the stage at which this was caught i... (read more)

lc
1y
10
2
0

:(


Uncontroversial take: EA wouldn't exist without the blithely curious and alien-brained. 

More controversially: I've been increasingly feeling like I'm on a forum where people think the autistic/decoupler/rationalist cluster did their part and now should just... go away. Like, 'thanks for pointing us at the moral horrors and the world-ending catastrophe, I'll bear them in mind, now please stop annoying me.'

But it is not obvious to me that the alien-brained have noticed everything useful that they are going to notice, and done all the work that they will do, such that it is safe to discard them.

Let me say this: autism runs in my family, including two of my first cousins. I think that neurodivergence is not only nothing to be ashamed of, and not an "illness" to be "cured", but in fact a profound gift, and one which allows neurodivergent individuals to see what many of us do not. (Another example: listen to Vikingur Olafsson play the piano! Nobody else hears Mozart like that.)

Neurodivergent individuals and high decouplers should not be chased out of effective altruism or any other movement. Doing this would not only be intrinsically wrong, but wou... (read more)

Noting that I strongly disagreed with this, rather than it being the case that someone with weighty karma did a normal disagree. 

2
Guy Raveh
1y
Both weak and strong votes increase in power when you get more karma, although I think for every currently existing user the weak vote is at most 2 (and the strong vote up to 9).

Sometimes it's more important to convey something with high fidelity to few people than it'd be to convey an oversimplified version to many. 

That's the reason why we bother having a forum at all - despite the average American reading at an eighth grade level - rather than standing on street corners shouting at the passers-by. 

I think that having to actively filter out controversy is the sort of trivial inconvenience that would lead to many people just not using the forum while there's a controversy on (or, use the forum ever, if this is the new normal).

My initial reaction to the mod comment was confusion, as it is not threaded beneath wachichornia's comment for me:

5
Lizka
1y
Hi, just to clear some things up: the warning and ban are for this comment by 3f6f6014. [1]

We've just enabled a system that we're testing to avoid showing spam or severely norm-breaking comments by users who've just joined, by which comments posted by new users don't show up for other users until they've been checked by a Forum mod or facilitator.

The comment I issued a warning for is extremely downvoted and disagree-voted, so I assumed that people were seeing it, although it had the note from the new system: "[This comment will not be visible to other users until the moderation team checks it for spam or norm violations.]" I think that was wrong, and this has caused some confusion. We'll try to improve the system here.

1. ^ If you can't see the comment, its entire content is "Genetic determinism is true."

I'm going to push back against this a very slight amount. It is good to write a thing as simply as possible while saying exactly what it's meant to say in exactly the way it's meant to be said - but not to write a thing more simply than that. 

Noting for the record that I read this post after these comments were written, and other people will as well.

I've updated the title.

Many people stand by The Scout Mindset by Julia Galef (though I haven't myself read it) (here's a book review of it that you can read to decide whether you want to buy or borrow the book). I don't know how many pages long it is exactly but am 85% sure it falls in your range.

On the nightstand next to me is Replacing Guilt by Nate Soares - it's 202 pages long and they are all of them great. You can find much of the material online here, you could give the first few chapters a glance-through to see if you like them.

I'm interested to see which books other people recommend!

Hello! Welcome to the forum, I hope you make yourself at home. 


...you would be justified in requiring first some short and convincing expository work with the core arguments and ideas to see if they look sufficiently appealing and worth engaging in. Is there something of the kind for Rationalism?

In this comment Hauke Hillebrandt linked this essay of Holden Karnofsky's: The Bayesian Mindset. It's about a half-hour read and I think it's a really good explainer.

Putanumonit has their own introduction to rationality - it's less explicitly Bayesian, and som... (read more)

7
Manuel Del Río Rodríguez
1y
Thanks for the recommendations! I wouldn't have any issues either with a moderately-sized book (say, from 200-400 pages long). Cheers. M.

I got LG for my forum alignment - I'm guessing that that's the most common one? 

Comment if you got a different one (unless you'd rather not (I guess you could make a throwaway account so that no one judges you for being CE)).

4
Kat Woods
1y
Neutral good! Which is indeed how I identify.  I do predict  that most EAs are either lawful good or neutral good. 
8
Andrew Simpson
1y
I got neutral evil 😳
7
Writer
1y
I am CHAOTIC Good MUAAHAHA

I disagree pretty strongly with the headline claim about extreme overconfidence, having found rationalist stuff singularly useful for reducing overconfidence, with its major emphases on falsifiable predictions, calibration, bowing quickly to the weight of the evidence, thinking through failure-states in detail, and planning for being wrong.

I could defend this at length, but it's hard to find the heart to dig up a million links and write a long explanation when it seems unlikely that this is actually important to you or the people who strong-agreed with you.

7
titotal
1y
Perhaps it has worked for you in reducing overconfidence, but it certainly hasn't worked for Yudkowsky. I already linked you the list of failed prognostications, and he shows no sign of stopping, with the declaration that AI extinction has probability ~1.

I have my concerns about calibration exercises in general. I think they let you get good at estimating short-term, predictable events and toy examples, which then gives you overconfidence in your beliefs about long-term, unpredictable events.

I don't expect you to dig up a million links when I'm not doing the same. I think it's important to express these opinions out loud, lest we fall into a false impression of consensus on some of these matters. It is important to me... I simply don't agree with you.

A lot of the people who built effective altruism see it as an extension of the LessWrong worldview, and think that that's the reason why EA is useful to people where so many well-meaning projects are not.

Some random LessWrong things which I think are important (chosen because they come to mind, not because they're the most important things):

The many people in EA who have read and understand Death Spirals (especially Affective Death Spirals and Evaporative Cooling of Group Beliefs) make EA feel safe and like a community I can trust (instead of feeling like ... (read more)

I've read a decent chunk of the sequences, and there are plenty of things to like about them, like the norms of friendliness and openness to new ideas you mention.

But I cannot say that I subscribe to the lesswrong worldview, because there are too many things I dislike that come along for the ride. Chiefly, it seems to foster a sense of extreme overconfidence in beliefs about fields people lack domain-specific knowledge about. As a physicist, I find the writings about science to be shallow, overconfident, and often straight up wrong, and this has been the reactio... (read more)

1
Quadratic Reciprocity
1y
I don't think my original post was good at conveying the important bits - in particular, I think I published it too quickly and missed out on elaborating on some parts that were more time-consuming to explain. I like your comment and would enjoy reading more 

Eliezer isn't (to my knowledge) an expert on, say, evolutionary biology. Reading the sequences will not make you an expert on evolutionary biology either. 

They will, however, show you how to make a layman's understanding of evolutionary biology relevant to your life.

If I had to guess, I'd point at having a long bulleted list of different specific predictions about the future as a risk factor for someone registering disagreement. 

No reason to feel dumb - I didn't immediately get the reference either. I saw that it was a reference to a legend about a golden apple because it was the caption to a painting of a legendary-looking person holding a golden apple, so to answer your question I googled "golden apple legend", found the Wikipedia disambiguation page, and searched that for the legend that fit.

It's a joking reference to the Apple of Discord story, wherein the goddess of discord Eris crashed a party and started the Trojan War.

1
Cornelis Dirk Haupt
2y
Now I feel dumb, but at least I'm smarter. Thanx.

I think "does EA provide what is wanted or needed by women?" is a pretty serviceable title; two nations divided by a common language and such.

I'm very glad to have been of help. :D

Many people of sound mind choose assisted suicide in their old age and advanced illness.

There was a prominent debate between Eliezer Yudkowsky and Robin Hanson back in 2008 which is a part of the EA/rationalist communities' origin story, link here: https://wiki.lesswrong.com/index.php?title=The_Hanson-Yudkowsky_AI-Foom_Debate

Prediction is hard, and reading the debate from the vantage point of 14 years in the future it's clear that in many ways the science and the argument have moved on, but it's also clear that Eliezer made better predictions than Robin Hanson did, in a way that inclines me to try and learn as much of his worldview as possible so I can analyze other arguments through that frame.

2
leosn
2y
This link could also be useful for learning how Yudkowsky & Hanson think about the issue: https://intelligence.org/ai-foom-debate

Essentially, Yudkowsky is very worried about AGI ('we're dead in 20-30 years' worried) because he thinks that progress on AI overall will rapidly accelerate as AI helps us make further progress. Hanson was (is?) less worried.

The "alignment problem for advanced agents" or "AI alignment" is the overarching research topic of how to develop sufficiently advanced machine intelligences such that running them produces good outcomes in the real world.

Both 'advanced agent' and 'good' should be understood as metasyntactic placeholders for complicated ideas still under debate. The term 'alignment' is intended to convey the idea of pointing an AI in a direction--just like, once you build a rocket, it has to be pointed in a particular direction.

"AI alignment theory" is meant as an overarch

... (read more)
Load more