Something I have been wondering about is how social/'fluffy' the EA Forum should be. Most posts just make various claims and then the comments are mostly about disagreement with those claims. (There have been various threads about how to handle disagreements, but this is not what I am getting at here.)
Of course not all posts fall in this category: AMAs are a good example, and they encourage people to indulge in their curiosity about others and their views. This seems like a good idea to me.
For example, I wonder whether I should write more comments pointing out what I liked in a post, even if I don't have anything to criticise, instead of just silently upvoting. This would clutter the comment section more, but it might be worth it if specific positive feedback helps people feel more connected to the community.
I feel like Facebook groups used to do more online community fostering within EA than they do now, and the EA Forum hasn't quite assumed the role they used to play. I don't know whether it should - it is valuable to have a space dedicated to 'serious discussions'. That said, having an online community space might be more important than usual while we are all stuck at home.
One positive secondary effect of this is that great but uncontroversial posts would be seen by lots of people. Currently, posts which are good but don't generate any disagreement get a few upvotes and then fall off the front page pretty quickly, because nobody has much to say.
I think specific/precise positive feedback is almost as good as (and in some cases better than) specific criticism, especially if you (implicitly) point to features that other posts don't have. This allows onlookers to learn and improve, in addition to giving a positive signal to the author. For a close reference class, the LessWrong team often leaves comments explaining why they like a certain post.
The type of social/"fluffy" content that some readers may be worried about is if lots of our threads have non-substantive comments like this one, especially if they're bloated and/or repeated often. I don't have a strong sense of where our balance should be on this.
I don't see bloat as much of a concern, because our voting system, which works pretty well, can bring the best comments to the top. If they're not substantive, they should either be pretty short, or not be highly upvoted.
I will personally feel bad downvoting low-information comments of encouragement, even if they're currently higher up on the rankings than (what I perceive to be) more substantive neutral or negative comments.
Perhaps comments/posts should have more than just one "like or dislike" metric? For example, there could be upvoting or downvoting in categories like "significant/interesting," "accurate," and "novel." This need not eliminate the simple voting metric if you prefer that.
(People may have already discussed this somewhere else, but I figured why not comment--especially on a post that asks if we should engage more?)
IMO the best type of positive comment adds something new on top of the original post, by extending it or by providing new and relevant information. This is more difficult than generic praise, but I don't think it's particularly harder than criticism.
Fairly strongly agreed - I think it's much easier to express disagreement than agreement on the margin, that on the margin people find it too intimidating to post to the EA Forum, and that it would be good for the Forum to be perceived as friendlier. (I have a somewhat adjacent blog post about going out of your way to be a nicer person.)
I strongly feel this way for specific positive feedback, since I think that's often more neglected and can be as useful as negative feedback (at least, useful to the person making the post). I feel less strongly for "I really enjoyed this post"-esque comments, though I think more of those on the margin would be good.
An alternate approach would be to PM people the positive feedback - I think this adds a comparable amount of value for the author, but removes the "changing people's perceptions of how scary posting on the EA Forum is" part.
I wrote a quick post in response to this comment (though I've also been thinking about this issue for a while).
I think people should just share their reactions to things most of the time, unless there's a good reason not to, without worrying about how substantive their reactions are. If praise tends to be silent and criticism tends to be loud, I worry that authors will end up with a very skewed view of how people perceive their work. (And that's even before considering that criticism tends to occupy more space in our minds than praise.)
I agree, positive feedback can be a great motivator.
[status: mostly sharing long-held feelings&intuitions, but have not exposed them to scrutiny before]
I feel disappointed in the focus on longtermism in the EA Community. This is not because of empirical views about e.g. the value of x-risk reduction, but because we seem to be doing cause prioritisation based on a fairly rare set of moral beliefs (people in the far future matter as much as people today), at the expense of cause prioritisation models based on other moral beliefs.
The way I see the potential of the EA community is by helping people to understand their values and then actually try to optimize for them, whatever they are. What the EA community brings to the table is the idea that we should prioritise between causes, that triaging is worth it.
If we focus the community on longtermism, we lose out on lots of other people with different moral views who could really benefit from the 'Effectiveness' idea in EA.
This has some limits: there are some views I consider morally atrocious, and I prefer not giving the people who hold them the tools to pursue their goals more effectively.
But overall, I would much prefer that more people have access to cause prioritisation tools, not just people who find longtermism appealing.
What underlies this view is possibly that I think the world would be a better place if most people had better tools to do the most good, whatever they consider good to be (if you want to use SSC jargon, you could say I favour mistake theory over conflict theory).
I appreciate this might not necessarily be true from a longtermist perspective, especially if you take the arguments around cluelessness seriously. If you don't even know what is best to do from a longtermist perspective, you can hardly say the world would be better off if more people tried to pursue their moral views more effectively.
I have some sympathy with this view, and think you could say a similar thing with regard to non-utilitarian views. But I'm not sure how one would cash out the limits on 'atrocious' views in a principled manner. To a truly committed longtermist, it is plausible that any non-longtermist view is atrocious!
Yes, completely agree - I was also thinking of non-utilitarian views when I said non-longtermist views. Although 'doing the most good' is implicitly about consequences, and I expect someone who wants to be the best virtue ethicist they can be to find the EA community less valuable for helping them on that path than people who want to optimize for specific consequences (i.e. the most good) do. I would be very curious, however, what a good community for that kind of person would be, and what good tools for that path look like.
I agree that dividing moral views into desirable and undesirable ones is hardly doable in a principled manner - even just within longtermism we have disagreements about whether it should be suffering-focussed or not, so there is already no one simple truth.
I'd be really curious what others think about whether humanity collectively would be better off according to most if we all worked effectively towards our desired worlds, or not, since this feels like an important crux to me.
I mostly share this sentiment. One concern I have: I think one must be very careful in developing cause prioritization tools that work with almost any value system. Optimizing for naively held moral views can cause net harm; Scott Alexander has suggested that terrorists might just be people taking their beliefs too seriously, when those beliefs only work in an environment of epistemic learned helplessness.
One possible way to identify views reasonable enough to develop tools for is checking that they're consistent under some amount of reflection; another is checking that they're consistent with known facts, e.g. the lack of evidence for supernatural entities, or the best knowledge on the conscious experience of animals.
I think that thinking about longtermism lets people feel empowered to solve problems somewhat beyond reality, truly feeling the prestige/privilege/knowing-better of 'doing the most good'. This may be a viewpoint mostly available to those who do not really have to worry about finances, though that is relative. This links to my second point: some affluent people enjoy speaking about innovative solutions, reflecting current power structures defined by high technology, among others. It would otherwise be hard to build a community around the prestige of being paid a little to do good, or of donating to marginally improve some of the current global institutions that cause the present problems. Or would it?
[epistemic status: musing]
When I consider one part of AI risk as 'things go really badly if you optimise straightforwardly for one goal' I occasionally think about the similarity to criticisms of market economies (aka critiques of 'capitalism').
I am a bit confused why this does not come up explicitly, but possibly I have just missed it, or am conceptually confused.
Some critiques of market economies hold that this is exactly the problem with market economies: they are supposed to maximize for what people want, but instead they maximize for profit, and these two goals are not as aligned as one might hope. You could call it the market economy alignment problem.
A paperclip maximizer might create all the paperclips, no matter what it costs and no matter what the programmers' intentions were. The Netflix recommender system recommends movies to people which glue them to Netflix, whether they endorse this or not, to maximize profit for Netflix. Some random company invents a product and uses marketing that makes having the product socially desirable, even though people would not actually have wanted it on reflection.
These problems seem very alike to me. I am not sure where I am going with this, it does kind of feel to me like there is something interesting hiding here, but I don't know what.
EA feels culturally opposed to 'capitalism critiques' to me, but they at least share this one line of argument. Maybe we are even missing out on a group of recruits.
Some 'latestage capitalism' memes seem very similar to Paul Christiano's 'What Failure Looks Like' to me.
Edit: Actually, I might be using the terms market economy and capitalism wrongly here and drawing the differences in the wrong place, but it's probably not important.
A similar analogy with the fossil fuel industry is mentioned by Stuart Russell (crediting Danny Hillis) here:
"[Treat] the fossil fuel industry as if it were an AI system. I think this is an interesting line of thought, because what he's saying basically - and other people have said similar things - is that you should think of a corporation as if it's an algorithm and it's maximizing a poorly designed objective, which you might say is some discounted stream of quarterly profits or whatever. And it really is doing it in a way that's oblivious to lots of other concerns of the human race. And it has outwitted the rest of the human race."
It also seems that "things go really badly if you optimise straightforwardly for one goal" bears similarities to criticisms of central planning or utopianism in general though.
People do bring this up a fair bit - see for example some previous related discussion on Slatestarcodex here and the EA forum here.
I think most AI alignment people would be relatively satisfied with an outcome where our control over AI outcomes was as strong as our current control over corporations: optimisation for a criterion that requires continual human input from a broad range of people, while keeping humans in the loop of decision-making inside the optimisation process, and with the ability to impose additional external constraints at run-time (regulations).
Thank you so much for the links! Possibly I was just being a bit blind.
I was pretty excited about the Aligning Recommender systems article as I had also been thinking about that, but only now managed to read it in full. I somehow had missed Scott's post.
I'm not sure whether they quite get to the bottom of the issue (though I am not sure there is a bottom to the issue - we are back to 'I feel like there is something more important here, but I don't know what').
The Aligning Recommender Systems article discusses the direct relevance to more powerful AI alignment a fair bit, which I was very keen to see. I am slightly surprised that there is little discussion of the double layer of misaligned goals: first, Netflix does not recommend what users would truly want; second, it does that because it is trying to maximize profit. Although it is up for debate whether aligning recommender systems with people's reflected preferences would actually bring in more money than just getting them addicted to the systems, which I somewhat doubt.
Your second paragraph points at something interesting in the capitalism critiques: we already have plenty of experience with misalignment in market economies between profit maximization and what people truly want - are there important lessons we can learn from this?
I mused about something similar here - about corporations as dangerous optimization demons which will cause GCRs if left unchecked.
Not sure how fruitful it was.
For capitalism more generally, GPI also has "Alternatives to GDP" in its research agenda, presumably because GDP is pretty much the measure the whole world is optimizing for, and creating a new measure might be really high value.
There is now a Send to Kindle Chrome browser extension, powered by Amazon. I have been finding it very valuable for actually reading long EA Forum posts as well as 80,000hours podcast transcripts.