All of Joseph_Chu's Comments + Replies

As a utilitarian, I think that surveys of happiness in different countries can serve as an indicator of how well these countries' various societies and government systems serve the greatest good. I know this is a very rough proxy, potentially filled with confounding variables, but I noticed that the two main surveys, Gallup's World Happiness Report and Ipsos' Global Happiness Survey, seem to have very different results.

Notably, Gallup's report puts Nordic-model countries like the Netherlands (7.403) and Sweden (7.395) near the top, with Canada... (read more)

As you've pointed out, the questions are very different. The Gallup poll asks people to rank their current position in life from "the best possible" to "the worst possible" on a ten-point scale, which implies that unequal opportunities and outcomes matter a lot.

The Ipsos poll avoids any sort of implicit comparison with how much better things could otherwise have been, or actually are for others, and simply asks respondents whether they would describe themselves as (very) happy or not (at all) happy on a simpler four-point scale, which is collapsed to a yes/no answer for the rankin... (read more)
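To make the contrast concrete, here's a minimal sketch in Python of how the two aggregation styles can rank the same two populations differently. All the numbers, names, and the happy/not-happy cutoff below are my own illustrative assumptions, not anything from either survey's actual methodology.

```python
def mean_ladder(scores):
    # Gallup-style aggregation: average a "best possible life" ladder score.
    return sum(scores) / len(scores)

def share_happy(responses):
    # Ipsos-style aggregation: collapse a four-point scale
    # (1 = not at all happy ... 4 = very happy) into the share
    # answering 3 or 4, i.e. a yes/no "happy" cut.
    return sum(1 for r in responses if r >= 3) / len(responses)

# Hypothetical populations: A has a higher average but a few very
# unhappy people; B is uniformly "rather happy".
a_ladder, b_ladder = [9, 9, 9, 9, 2], [7, 7, 7, 7, 7]
a_four, b_four = [4, 4, 4, 4, 1], [3, 3, 3, 3, 3]

print(mean_ladder(a_ladder), mean_ladder(b_ladder))  # 7.6 vs 7.0: A ranks higher
print(share_happy(a_four), share_happy(b_four))      # 0.8 vs 1.0: B ranks higher
```

The point is just that a mean rewards high scores and punishes very low ones, while a collapsed yes/no share only cares whether people clear the threshold, so the two methods can legitimately disagree about which country is "happier".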

I would just like to point out that this idea of there being two different kinds of AI alignment, one more parochial and one more global, is not entirely new. The Brookings Institution put out a paper about this in 2022.

I have some ideas and drafts for posts to the EA Forum and Less Wrong that I've been sitting on, because I feel somewhat intimidated by the level of intellectual rigor I would need to put into the final drafts to ensure I'm not downvoted into oblivion (particularly on Less Wrong, where a younger me experienced exactly that in the early days).

Should I try to overcome this fear, or is it justified?

For the EA Forum, I was thinking about explaining my personal, practical take on moral philosophy (Eudaimonic Utilitarianism with Kantian Priors), but I don't know if that... (read more)

9 · titotal · 4mo
As someone who spends most of my time here critiquing EA/rationalist orthodoxy, I don't think you have much to worry about besides annoying comments. A good-faith critique presented politely is rarely downvoted.

Also, I feel like there's selection bias going on around the quality of posts. The best, super highly upvoted posts may be extremely high quality, but there are still plenty of posts that aren't (and that's fine; this is an open forum, not an academic journal).

I'd be interested in reading your List of Lethalities response. I'm not sure it would be that badly received; for example, this response by Quintin Pope got 360 upvotes. The List of Lethalities seems to be a fringe view even among AI x-risk researchers, let alone the wider machine learning community.
2 · CalebW · 4mo
Post links to Google Docs as quick takes if posting proper posts feels like a high bar?

Short form/quick takes can be a good compromise, and a source of feedback for later versions.

7 · Jason · 4mo
On this Forum, it is rather rare for good-faith posts to end up with net negative karma. The "worst" reasonably likely outcome is getting very little engagement with your post, which is still more engagement than it will get in your drafts folder. I can't speak to LW, though.

I also think that the appropriate reference point is not the median post here, but the range of first posts from people who have developed into recognized, successful posters.

From your description, my only concern would be whether your post sufficiently relates to EA. If it's ~80-90 percent a philosophy piece, maybe there's a better outlet for it. If it's ~50-70 percent, maybe it would work here with a brief summary of the philosophical position up front and an internal link for the reader who wants to jump directly to the more EA-relevant content.
9 · Chris Leong · 4mo
This is one reason why it's very common for people to write a Google doc first, share it around, update it based on feedback and then post. But this only works if you know enough people who are willing to give you feedback.
7 · Joseph Lemien · 4mo
I encourage you to share your ideas. I've often felt a similar "my thoughts aren't valuable enough to share" feeling. I tend to write these thoughts as a quick take rather than as a normal forum post, and I also try to phrase my words in a way that indicates I am writing rough thoughts, or observations, or something similarly non-rigorous (as a signal to the reader that it shouldn't be evaluated by the same standard).
6 · John Salter · 4mo
Either it'll be received well, or you'll get free criticism of your ideas, or a blend of the two. You win in all cases. If it gets downvoted into oblivion, you can always delete it; how many deleted posts can you tie to an author? I can't name one.

Ultimately, nobody cares about you (or me, or any other random forum user). They're too busy worrying about how they'll be perceived. This is a blessing. You can take risks, and nobody will really care if you fail.
5 · Karthik Tadepalli · 4mo
My view is that you should write/post something if you believe it's an idea that people haven't sufficiently engaged with in the past. Both of your post ideas sound like that to me. If you have expertise on AI, don't be shy about showing it. If you aren't confident, you can frame your critiques as pointed questions, but personally I think it's better to just make your argument.

As for style, I think people will respond much better to your argument if it's clear. Clear is different from extensive; I think your example of many-sections-with-titles-and-footnotes conflates the two. That format is valuable for giving structure to your argument, not for making a really extensive argument that covers every possible ground. I agree that an "interesting train of thought in unformatted paragraphs" won't likely be received well in either venue. I think it's good communication courtesy to make your ideas clear to the people you are trying to convey them to. Clear structure is your friend, not a bouncer keeping you out of the club.

This year I decided to focus my donations more; in the past I had a "charity portfolio" of about 20 charities and 3 political parties that I donated to monthly. This year I've had some cash-flow issues due to changes in my work situation, so I stopped the monthly donations and switched back to an annual set of donations once I worked out what I could afford. I normally try to donate 12.5% of my income annually, averaged over time.

This year's charitable donations went to: The Against Malaria Foundation, GiveDirectly, Rethink Priori... (read more)

So, I read a while back that SBF apparently posted on Felicifia back in the day. Felicifia was an old Utilitarianism-focused forum that I used to frequent before it was taken down. I checked an archive of it recently and was able to figure out that SBF actually posted there under the name Hutch. He also linked a blog that included a lot of posts about Utilitarianism, and it looks like, at least around 2012, he was a devoted Classical Benthamite Utilitarian. Although we never interacted on the forum, it feels weird that we could have crossed paths back the... (read more)

It's good to see this post. I was a member of my local Rotaract club for years until I eventually aged out of its 18-30 age limit. At one point, I actually got us to send some donations from one of our events to the Against Malaria Foundation. Overall, it was a great experience, although I ended up not joining Rotary Club proper later, mostly because I moved away from my hometown and didn't know anyone in the Rotary Club of my current city.

I do agree that EA can learn a lot from Rotary as a highly successful organization and community, and I'm glad to see someone else mention it here.

These are all great points!

I definitely agree, in particular, that the thinking on extraterrestrials and the simulation argument isn't well developed and deserves more serious attention. I'd add to that mix the possibility of future human or post-human time travellers, and parallel-world sliders that might be conceivable, assuming the technology for such things is possible. There are some physics arguments that time travel is impossible, but the uncertainty there is high enough that we should take the possibility seriously. Between time tra... (read more)

I recently interviewed with Epoch, and as part of a paid work trial they wanted me to write up a blog post about something interesting related to machine learning trends. This is what I came up with:

http://www.josephius.com/2022/09/05/energy-efficiency-trends-in-computation-and-long-term-implications/

A possible explanation is simply that the truth tends to be information that may or may not be useful. It might, with some small probability, be very useful, like, say, life-saving information. The ambiguity of the question means that while you may not be happy with the information yourself, it could conceivably benefit others greatly, or not at all. On the other hand, guaranteed happiness is much more certain and concrete. At least, that's the way I imagine it.
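If it helps, here's a toy expected-value framing of that intuition. Every number below is an assumption I've picked purely to illustrate the structure of the choice; nothing here comes from the question itself.

```python
# Toy expected-value comparison of the two options.
p_useful = 0.05      # small chance the truth turns out to matter a lot
u_useful = 100.0     # payoff if it does (e.g. life-saving information)
u_useless = 0.0      # payoff if it doesn't
u_happiness = 10.0   # guaranteed payoff of choosing happiness

ev_truth = p_useful * u_useful + (1 - p_useful) * u_useless

print("EV(truth):", ev_truth, "EV(happiness):", u_happiness)
# With these numbers a risk-neutral chooser picks happiness; raise
# p_useful or u_useful enough and the answer flips, which is why
# risk attitudes do so much work in people's answers.
```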

I've had at least one person explain their choice as a matter of truth being harder to get than happiness, since they could always figure out a way to be happy by themselves.

Well, the way the question is framed, there are a number of different tendencies it seems to help gauge. One is obviously whether an individual is aware of the difference between instrumental and terminal goals. Another would be what kinds of sacrifices they are willing to make, as well as their degree of risk aversion. In general, I find most people answer truth, but when faced with an actual situation of this sort, they tend to show a preference for happiness.

So far I'm less certain about whether particular groups actually answer it one way... (read more)

0 · Tom_Ash · 9y
Interesting. I'm struggling to imagine why that might be. Any theories?

So, I have a slate of questions that I often ask people in order to understand them better. Recently I realized that one of these questions may not be as open-ended as I'd thought, in the sense that it may actually have a proper answer according to Bayesian rationality, though I remain uncertain about this. I've also posted this question to the Less Wrong open thread, but I'm curious what Effective Altruists in particular would think about it. If you'd rather, you can private message me your answer. Keep in mind that the question is intentionally somewhat ambiguous.

The question is:

Truth or Happiness? If you had to choose between one or the other, which would you pick?

0 · Peter Wildeford · 9y
I think the hope is that there doesn't have to be a choice.
0 · Larks · 9y
Truth, no hesitation.
2 · Tom_Ash · 9y
All else being equal, I'd pick happiness. What understanding do you get from this question, out of interest? Do particular groups tend to answer it one way or another?

I had another thought as well. In your calculation, you only factor in the potential person's QALYs. But if we're really dealing with potential people here, what about the potential offspring or descendants of that potential person as well?

What I mean by this is, when you kill someone, generally speaking, aren't you also killing all of that person's possible future descendants? If we care about future people as much as present people, don't we have to account for the arbitrarily high number of possible descendants that anyone could theoretically hav... (read more)
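As a back-of-the-envelope sketch of how quickly this adds up: if each generation has g times as many descendants and each successive generation is discounted by a factor d, the QALYs at stake form a geometric series, which diverges whenever g*d >= 1. The parameter values below are illustrative assumptions of mine, not figures from the post being discussed.

```python
# Discounted geometric series of descendants' QALYs.
# All parameter values are illustrative assumptions.
def qalys_at_stake(qalys_per_person=70.0, g=1.05, d=0.97, generations=40):
    # n = 0 is the person themselves; each later generation is scaled
    # by growth g**n and discounted by d**n.
    return sum(qalys_per_person * (g ** n) * (d ** n)
               for n in range(generations + 1))

print(round(qalys_at_stake(), 1))
# Here g * d = 1.0185 > 1, so each generation contributes slightly more
# than the last, and the total grows without bound as generations increase.
```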

I also posted this comment at Less Wrong, but I guess I'll post it here as well...

As someone who has a very nuanced view of abortion, and a recent EA convert who was thinking about writing on this topic, I'm glad you wrote this. It's probably better and more well-constructed than anything I would have been able to put together.

The argument in your post, though, seems to assume that we have only two options, either to totally ban or not ban all abortion, when in fact we can take a much more nuanced approach.

My own pre-EA views are nuanced to t... (read more)

1 · Dale · 9y
Haha, I now feel bad leaving this comment unanswered because it was very thoughtful, so I guess I'll copy-paste my response too... Thanks! It took a long time - and was quite stressful. I'm glad you liked it. I actually deliberately avoided discussing legal issues (ban or not ban) because I felt the purely moral issues were complicated enough already. Yeah, if you want to do both you need a joint probability distribution, which seemed a little in-depth for this (already very long!) post.

I have a bunch of experiments I ran for a Master's thesis on the use of neural networks for object recognition that ended up getting published in a couple of conference papers. Given that any A.I. research has the potential to contribute to Friendly A.I., would those have counted, or are they too distant from E.A.?

I also have an experiment whose current status is "failed", a Neural Network Earthquake Predictor, which I'm considering resurrecting in the near future by applying different and newer methods. How would I go about incorporating such an experiment into this registry, given that it technically has a tentative result, but the result isn't final yet?

0 · Joseph_Chu · 9y
Just an update. I decided to make a go of adding the experiment to the Registry. Hopefully what I added is acceptable. If not, let me know what I should change.