This is a special post for quick takes by Joseph_Chu. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

As a utilitarian, I think that surveys of happiness in different countries can serve as an indicator of how well the various societies and government systems of these countries serve the greatest good. I know this is a very rough proxy and potentially filled with confounding variables, but I noticed that the two main surveys, Gallup's World Happiness Report, and Ipsos' Global Happiness Survey seem to have very different results.

Notably, Gallup's report puts Nordic-model countries like Sweden (7.395), along with the Netherlands (7.403), near the top, with Canada (6.961) and the United States (6.894) scoring pretty well, and countries like China (5.818) scoring modestly and India (4.036) scoring poorly.

Conversely, the Ipsos Survey puts China (91%) at the top, with the Netherlands (85%) and India (84%) scoring quite well, while the United States (76%), Sweden (74%), and Canada (74%) are more modest.

I'm curious why these surveys seem to differ so much. Obviously, the questions are different, and the scoring method is also different, but you'd expect a stronger correlation. I'm especially surprised by the differences for China and India, which seem quite drastic.

As you've pointed out, the questions are very different. The Gallup poll asks people to rank their current position in life from "the best possible" to "the worst possible" on a ten-point scale, which implies that unequal opportunities and outcomes matter a lot.

The IPSOS poll avoids any implicit comparison with how much better things could have been, or actually are for others, and simply asks whether people would describe themselves as (very) happy or not (at all) on a simpler four-point scale, which is collapsed to a yes/no answer for the ranking.

So Chinese and Indian people aren't being asked whether they're conscious of the many things they lack that could make their lives better, as in the Gallup poll; they're being asked whether they feel so bad about their lives that they would describe themselves as unhappy (or, for various other questions, "unsatisfied"). People tend to be biased towards saying they're happy, and there's likely a cultural component to how willing people are to say they're not, too.
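To make that concrete, here's a minimal sketch in Python, using entirely made-up numbers (not either survey's actual data) and an assumed "happy" cutoff, of how averaging a 0-10 ladder versus collapsing responses to yes/no can flip a ranking between two populations:

```python
# Illustrative simulation with made-up numbers (not survey data): the same
# two hypothetical populations can rank differently depending on whether you
# average a 0-10 ladder (Gallup-style) or collapse responses into a yes/no
# "happy" share (Ipsos-style).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent happiness on a 0-10 scale for two imaginary countries.
country_a = np.clip(rng.normal(6.5, 2.5, 10_000), 0, 10)  # higher mean, wide spread
country_b = np.clip(rng.normal(6.0, 1.0, 10_000), 0, 10)  # lower mean, narrow spread

for name, scores in [("A", country_a), ("B", country_b)]:
    ladder_mean = scores.mean()               # Gallup-style: mean ladder position
    pct_happy = (scores > 4.0).mean() * 100   # Ipsos-style: % above an assumed cutoff
    print(f"Country {name}: ladder mean = {ladder_mean:.2f}, 'happy' = {pct_happy:.0f}%")

# Country A wins on the ladder mean (~6.5 vs ~6.0), while Country B wins on
# the collapsed yes/no share (~98% vs ~84%).
```

A country with a lower average but less spread can post a near-ceiling "happy" percentage while still trailing on the ladder mean, which is one mechanical reason the two rankings needn't agree.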

And to add to the complications, the samples are non-random and not necessarily equivalent. IPSOS acknowledges that its developing-country samples are significantly more affluent, urban, and educated than the general population, which might explain why, even when it comes to their personal finances, respondents are often more "satisfied" than inhabitants of countries with much higher median incomes. Gallup doesn't acknowledge the same sampling bias, but even if it's present to exactly the same extent (it's bound to be present to some extent; poor, rural, illiterate people are hard to randomly survey), it probably doesn't have the same effect. Indian professionals can simultaneously be "happy" with their secure-by-local-standards position in life and aware that their life outcomes could have been a whole lot better.

I think the stark differences are a good illustration of the limits of subjective wellbeing data, but arguably neither survey captures SWB particularly well anyway: the former because it asks people to make a comparison of [mainly objective] outcomes, and the latter because the scale is too simple to capture hedonic utility.

I have some ideas and drafts for posts to the EA Forums and Less Wrong that I've been sitting on, because I feel somewhat intimidated by the level of intellectual rigor I would need to put into the final drafts to avoid being downvoted into oblivion (particularly on Less Wrong, where a younger me experienced exactly that in the site's early days).

Should I try to overcome this fear, or is it justified?

For the EA Forums, I was thinking about explaining my personal practical take on moral philosophy (Eudaimonic Utilitarianism with Kantian Priors), but I don't know if that's actually worth explaining given that EA tries to be inclusive and not take particular stands on morality, and it might not be relevant enough to the forum.

For Less Wrong, I have a draft response to Eliezer's List of Lethalities post that I've been sitting on since 2022/04/11, because I doubted it would be well received: it tries to be hopeful, and, as a former machine learning scientist, I challenge a lot of LW orthodoxy about AGI in it. I have tremendous respect for Eliezer, though, so I'm also uncertain whether my ideas and arguments are just harebrained foolishness that will be shot down rapidly once exposed to the real world and the incisive criticism of Less Wrongers.

The posts in both places are also now of such high quality that I feel the bar is too high for my writing to meet, since my style tends to be more "interesting train of thought in unformatted paragraphs" than the articulate, point-by-point style with section titles and footnotes that people in both places tend to employ.

Anyone have any thoughts?

Short form/quick takes can be a good compromise, and a source of feedback for later versions.

As someone who spends most of my time here critiquing EA/rationalist orthodoxy, I don't think you have much to worry about besides annoying comments. A good-faith critique presented politely is rarely downvoted.

Also, I feel like there's selection bias going on around the quality of posts. The best, most highly upvoted posts may be extremely high quality, but there are still plenty of posts that aren't (and that's fine; this is an open forum, not an academic journal).

I'd be interested in reading your List of Lethalities response. I'm not sure it would be that badly received; for example, this response by Quintin Pope got 360 upvotes. List of Lethalities seems to represent a fringe view even among AI x-risk researchers, let alone the wider machine learning community.

This is one reason why it's very common for people to write a Google doc first, share it around, update it based on feedback and then post. But this only works if you know enough people who are willing to give you feedback.

An additional option: if you don't know people who are willing to review a document and give you feedback, you could ask people in the Effective Altruism Editing and Review Facebook group to review it.

On this Forum, it is rather rare for good-faith posts to end up with net negative karma. The "worst" reasonably likely outcome is to get very little engagement with your post, which is still more engagement than it will get in your drafts folder. I can't speak to LW, though.

I also think the appropriate reference point is not the median post here, but the range of first posts from people who have since developed into recognized, successful posters.

From your description, my only concern would be whether your post sufficiently relates to EA. If it's ~80-90 percent a philosophy piece, maybe there's a better outlet for it. If it's ~50-70 percent, maybe it would work here with a brief summary of the philosophical position up front and an internal link for readers who want to jump straight to the EA-relevant content?

I encourage you to share your ideas.

I've often felt a similar "my thoughts aren't valuable enough to share" feeling. I tend to write these thoughts as a quick take rather than as a normal forum post, and I also try to phrase my words to indicate that I am writing rough thoughts, or observations, or something similarly non-rigorous (as a signal to the reader that it shouldn't be evaluated by the same standard).

Either it'll be received well, or you get free criticism on your ideas, or a blend of the two. You win in all cases. If it gets downvoted into oblivion, you can always delete it; how many deleted posts can you tie to an author? I can't name one.

Ultimately, nobody cares about you (or me, or any other random forum user). They're too busy worrying about how they'll be perceived. This is a blessing. You can take risks and nobody will really care if you fail.

"Either it'll be received well, or you get free criticism on your ideas, or a blend of the two."

A tough pill for super-sensitives like me to swallow, but I can see it as an exceptionally powerful one. I surely sympathize with the OP's fear of being downvoted (it's what kept me away from this site for months, and from Reddit entirely), but valid criticism has influenced me for the better on many occasions, even if I'm scornful of those moments. Maybe my hurt at being wrong will lessen someday, or maybe not, but knowing why I was wrong can serve me well in the end; I can admit that.

My view is you should write/post something if you believe it's an idea that people haven't sufficiently engaged with in the past. Both of your post ideas sound like that to me.

If you have expertise on AI, don't be shy about showing it. If you aren't confident, you can frame your critiques as pointed questions, but personally I think it's better to just make your argument.

As for style, I think people will respond much better to your argument if it's clear. Clear is different from extensive; I think your many-sections-with-titles-and-footnotes example conflates the two. That format is valuable for giving structure to your argument, not for making a really extensive argument that covers every possible angle. I agree that "interesting train of thought in unformatted paragraphs" likely won't be received well in either venue. It's good communication courtesy to make your ideas clear to the people you're trying to convey them to. Clear structure is your friend, not a bouncer keeping you out of the club.

Post links to Google Docs as quick takes if publishing a proper post feels like too high a bar?

So, I read a while back that SBF apparently posted on Felicifia back in the day. Felicifia was an old utilitarianism-focused forum that I used to frequent before it was taken down. I checked an archive of it recently and was able to figure out that SBF actually posted there under the name Hutch. He also linked a blog that included a lot of posts about utilitarianism, and it looks like, at least around 2012, he was a devoted classical Benthamite utilitarian. Although we never interacted on the forum, it feels weird that we could have crossed paths back then.

His Felicifia: https://felicifia.github.io/user/1049.html
His blog: https://measuringshadowsblog.blogspot.com/

I recently interviewed with Epoch, and as part of a paid work trial they wanted me to write up a blog post about something interesting related to machine learning trends. This is what I came up with:

http://www.josephius.com/2022/09/05/energy-efficiency-trends-in-computation-and-long-term-implications/
