NicholasKross

CS student, blogging and editing at https://www.thinkingmuchbetter.com/. PM me your fluid-g-increasing ideas

Comments

How should Effective Altruists think about Leftist Ethics?

Thank you for putting this (and solutions) in clear words

December 2021 monthly meme post

Imho some kind of /r/EffectiveMemes would be the best bet

A Red-Team Against the Impact of Small Donations

“I am naturally an angsty person, and I don't carry much reputational risk” — I relate! Although you're anonymous, whereas I just have ADD.

Point 1 is interesting to me:

  • longtermist/AI safety orgs could require a diverse ecosystem of groups pursuing different approaches. This would mean the "current state of under-funded-ness" is in flux, uncertain, and leaning towards "some lesser-known group(s) need money".
  • lots of smaller donations could signal interest from many people, which could help inform evaluators or larger donors.

Another point: since I think funding won't be the bottleneck in the near future, I've refocused my career somewhat to balance more towards direct research.

(Also, partly inspired by your "Irony of Longtermism" post, I'm interested in intelligence enhancement for existing human adults, since the shorter timelines don't leave room for embryo whatevers, and intelligence would help in any timeline.)

December 2021 monthly meme post

I post one article by a friend about memes, look away for 5 seconds, and now this!

I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related)

BOUNTY IDEA (also sent in the form): Exploring Human Value Codification.

Offered to a paper or study that demonstrates a mathematical (or otherwise engineering-ready) framework for measuring humans' real preference-orderings directly. Basically a neuroscience experiment, or a proposal thereof.

End goal: Using this framework / results from experiment(s) done based on it, you can generate novel stimuli that seem similar to each other, and reliably predict which ones human subjects will prefer more. (Gradients of pleasure, of course, no harm being done). And, of course, the neuroscientific understanding of how this preference ordering came about.

Prize amount: $5–10k for the proposal, and more to fund an actual experiment; that order of magnitude is probably in the right ballpark.

Effective Altruism, Before the Memes Started

Devin's reply:

“Thanks for the response, reading your posts was one of the biggest inspirations for me writing this, its overall demeanor reminded me of what I see as this older strain of EA public interface in a way I hadn’t thought of in a while. On the point of MacAskill responding, I think the information you’ve given is helpful, but I do think there would have been some value in public commentary even if Torres personally wasn’t going to change his mind because of it, for instance it would have addressed concerns the piece gave outsiders who read it, and it would have both legitimized and responded to the concerns of insiders who might have resonated with some of what Torres said. As it happens, I think the community did respond to it somewhat significantly, but in a pretty partial, snubbish way. Robert Wiblin for instance appeared to subtweet the piece like twice:

https://mobile.twitter.com/robertwiblin/status/1422213998527799307

https://mobile.twitter.com/robertwiblin/status/1438883980351361030

Culminating in his recent 80k interview which he strongly advertised as a response to these concerns (again, without naming the article):

https://mobile.twitter.com/robertwiblin/status/1445817240008355843

A similar story can be said of MacAskill himself, shortly after the piece came out he made some comments on EA Forum apparently correcting misconceptions about longtermism the piece brought up without engaging with the piece directly:

https://eaforum.issarice.com/posts/fStCX6RXmgxkTBe73/towards-a-weaker-longtermism#TmaKvfoLo5jtNAoWw

https://eaforum.issarice.com/posts/fStCX6RXmgxkTBe73/towards-a-weaker-longtermism#aYW8s8mY2brTvGNJX

Maybe Torres doesn’t deserve direct engagement even if some of his concerns do (or maybe he does), but it seems hard to deny that its publication had some non-trivial impact on the internal conversations of the movement, including in some ways there was already an appetite for. Though again I can’t expect more direct engagement (especially from those personally attacked), it does seem to me more thorough, direct engagement from prominent figures would have been better in many ways than most of the actual reaction.”

Effective Altruism, Before the Memes Started

Devin's response:

“Yeah, I was wondering when that might come up. I have a general resistance to making extraneous accounts, especially if they are anything like social media accounts. I find it stressful and think I would over-obsessively check/use them in a way that would wind up being harmful. Even just having this post up and the ability to respond through Nick has occupied my attention and anxiety a good deal the last few days, or I might do more cross-posts/enable comments on our blog. That said, I did consider it. EA forum seems like it would not be so bad if I was going to have an account somewhere, and there’s still a decent chance that I will make one at some point. When I asked Nick about the issue, he said he already had an account and was very willing to post it for me (by the way, thanks again Nick!). I still considered making one because I thought it might seem weird if it was posted by him instead, but for better or worse I wound up taking him up on it.”

An update in favor of trying to make tens of billions of dollars

I mostly agree with the AI risk worldview described in footnote 5, but this is certainly an interesting analysis! (Although not super-useful for someone in a non-MIT/non-Jane-Street/not-elite-skilled reference class, but I still wonder about the flexibility of that...)

Effective Altruism, Before the Memes Started

Devin's response:

“The white supremacy part doesn’t have this effect for me. Yes there is a use of this word to refer to overt, horrible bigotry, but there is also a use of this word meaning something closer to ‘structures that empower, or maintain the power, of white people disproportionately in prominent decision-making positions’. It is reasonable to say that this latter definition may be a bad way of wording things, you could even argue a terrible way, but since this use has both academic, and more recently some mainstream, usage, it hardly seems fair to assume bad faith because of it. Some of the other stuff in this thread is more troubling, it seems there is a deep rabbit hole here, and it’s possible that Torres is generally a bad actor. Again, I don’t want to be too confident in this particular case. Although it seems we have very different ways of viewing these criticisms even when we are looking at the same thing, I will allow that you seem to have more familiarity with them.”
