

Thanks for posting, not seeing a lot of people talking about this (plausibly quite important!) event. 

I think it makes sense to be worried that Anthropic devolves into a mere arms-racer over subsequent years, though the specific role of industry partnerships in increasing this worry is something I'm less confident about. (An Anthropic employee has told me his reasons for not being worried about this, but that's different from credible signals or commitments from leadership.)

I cosign this comment completely. 

I have a cheap thing polyam folks can start doing today that would make a decent amount of progress over time. 

More downvotes and social sanctions for the "monog is unenlightened" meme.

I know when people get excited about an awesome new social technology they want to scream it from the rooftops, and they think "why didn't I try this sooner? Was I some kind of primitive?" But when you say that out loud, others hear "so you're saying I'm a primitive".

I've seen numerous comments, and anecdotes of meatspace conversations, that go further than that: "letting jealousy run your life means you need therapy", or "you've been brainwashed by the conformist masses of romcoms". When these happen in our community, they're not downvoted into oblivion (yet; growth mindset!).

Listening to the complaints of people who are either obligate monog (and had an experiment in polyam go south) or simply monog and not interested in experimenting or questioning it isn't a referendum on community engagement in polyamory.

(Keep in mind, many queer people go through a stage of skepticism that any properly, truly straight people exist at all. I sure did. This is seen as something to grow out of in the queer community. Let's assert that assuming everyone would be polyam if they just tried harder to be civilized is something to grow out of, too.)

I also don't experience a moral conviction about my ownership of my bank account, but I understand there's a lot of variance about this across the population, and the differing intuitions here have caused society and history a great deal of stress. I guess I ascribe most of my success to birth lotteries, and people who feel more like they had to grind and face adversity would be likely to feel that their ownership over their resources is a morally valid matter.

I've been a part of one unfinished whitepaper and one unsubmitted grant application on mechanisms and platforms in this space. In particular, they were interventions aimed at creating distributed epistemics, which I think is a very slightly more technical way of describing the value prop of "democracy" without being as much of an applause light.

The unfinished whitepaper was a slightly convoluted grantmaking system driven by asset prices on a project market, based on the hypothesis that alleviating (epistemic) pressure from elite grantmakers would be good. You kind of need to believe as a premise that wisdom of crowds beats expertise, at least in some weakened form, which I don't think is justified, so I dropped it. 
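That wisdom-of-crowds premise can be made concrete with a toy simulation (entirely my own illustration, not from the whitepaper). Under the strong assumption that crowd members make independent, unbiased estimates, averaging washes out noise at a rate of roughly 1/√N, so even a much noisier crowd can beat a single expert:

```python
import random
import statistics

def simulate(true_value=100.0, crowd_size=200, crowd_noise=20.0,
             expert_noise=5.0, trials=2000, seed=0):
    """Compare mean absolute error of a crowd average vs. a lone expert.

    Each crowd member's estimate is true_value plus independent Gaussian
    noise (sd=crowd_noise); the expert gets much smaller noise (sd=expert_noise).
    """
    rng = random.Random(seed)
    crowd_err = expert_err = 0.0
    for _ in range(trials):
        crowd_mean = statistics.fmean(
            rng.gauss(true_value, crowd_noise) for _ in range(crowd_size)
        )
        expert_guess = rng.gauss(true_value, expert_noise)
        crowd_err += abs(crowd_mean - true_value)
        expert_err += abs(expert_guess - true_value)
    return crowd_err / trials, expert_err / trials
```

Note that the crowd only wins here because the errors are independent and unbiased, which is exactly the contested premise: correlated errors (everyone reading the same takes) erase the √N advantage.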

The unsubmitted grant application was a play at getting more cost-effectiveness analyses written, as well as streamlining the process by which CEAs are consumed and donation decisions are produced. This isn't as much of a direct democratization play as most of what's discussed in this post, but it's in the genre of alleviating the cognitive bottleneck on an elite few analysts and grantmakers. Nuño pointed out to me that even if this product would be a boon for donors of 2-5 zeroes, the large institutions probably wouldn't use it to drive decisionmaking (and without users inside large institutions, the overall cost in developer hours probably doesn't break even, since large institutions matter more, measured in dollars, than donors of 2-5 zeroes).

Again, I think the crux is the premise around the wisdom of crowds. And even if you don't buy that crowds beat siloed experts even in a weakened form, you can still think that the cognitive pressure on a small number of movement leaders is problematic.

I'd like to thank Nuño, Eli Lifland, Nathan Young, Ashley Lin, Hazelfire, and David Reinstein for advancing the discussion with me and giving me some of the ideas in this comment (half of them encouraged me and half of them discouraged me, but I won't tell you which is which). 

Interesting! When I read oldemail.pdf, I thought he was pretty loudly damning with faint praise toward arguments against the race/IQ correlation, which made me respect his epistemic integrity less. It just felt like a lotta kayfabe to me, even though I understand not wanting to write an in-depth viewpoint that's outside one's area of expertise. Hard problem: the epistemic-integrity response to race/IQ discourse is to believe true things, to ignore people who think empirical facts are identities (and face the political consequences head on!) or who haven't internalized the is/ought distinction, but that means some serious engagement with what is, in this case, a thankless literature.

I know you didn't wanna make this about the object level race/IQ thing, so sorry if that's what I'm doing, I meant to write about our differing assessments of his epistemic integrity. There's a lot of understandable brain poison around this topic. 

Answer by quinn, Nov 23, 2022

I just dumped $2k into AMF. 

  • I was behind on taking 5% out of a bunch of invoices, and decided to catch up all in one place
  • My direct work efforts have all been longtermismy the past year
  • Historically, I've donated toward animals or future people. 
  • Clear, legible wins for virtue signaling without mental gymnastics are desirable. I don't want to have to explain a 300 IQ plan that makes forecasting tech or game theory altruistic every time I want to make an attempt at transmitting the "things are broken, if I can try to help then you can too" core message to someone. 

Earlier this year I maxed out the Carrick Flynn donation, which, in the most ancient and wisest words of whoever, "seemed like a good idea at the time".

The above two targets were funded by money I made as a proof engineer at a DeFi startup. I took a pay cut from there for work in the EA ecosystem, and I haven't decided yet whether I'll try to skim 5-10% off the top of my post-pay-cut income for more donations.

I think the heuristic I mentioned is designed for sexual assault, and I wouldn't expect it to be the right move for less severe values of interpersonal harm. 

Realizing now that I did the very thing that annoys me about these discussions: making statements tuned for severe and obvious cases that have implications for less severe or obvious ones, without being clear about it, leaving the reader to wonder whether they ought to round the less obvious cases up into more obvious ones. Sorry about that.

Poor accounting, possibly just no really global accounting or sense of where the money was going;

I chatted with an Alameda python dev for about an hour, trying to get a sense of their testing culture, QA practices, etc. Lmao: there didn't seem to be any. Soups of scripts, no time for tests, no internal audits. Just my impression.

My type-driven and property-based testing zealot/pedant side has harvested some bayes points, unfortunately. 
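For readers unfamiliar with the technique: instead of asserting outputs for hand-picked inputs, property-based testing asserts invariants over many randomly generated inputs. Here's a minimal stdlib-only sketch in that spirit (real tooling like `hypothesis` adds shrinking and smarter generation); `apply_fee` is a hypothetical function of my own invention, not anything from Alameda's codebase:

```python
import random

def apply_fee(amount_cents: int, fee_bps: int) -> int:
    """Deduct a fee given in basis points (1 bps = 0.01%), rounding the fee down."""
    return amount_cents - (amount_cents * fee_bps) // 10_000

def check_fee_properties(trials: int = 1_000) -> bool:
    """Property: for any amount and any fee up to 100%, the result
    never goes negative and never exceeds the original amount."""
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(trials):
        amount = rng.randrange(0, 10**12)   # up to $10B in cents
        fee_bps = rng.randrange(0, 10_001)  # 0% to 100%
        result = apply_fee(amount, fee_bps)
        assert 0 <= result <= amount, (amount, fee_bps, result)
    return True
```

A thousand random trials of a simple invariant like this catches whole classes of sign, rounding, and overflow bugs that a couple of hand-written example tests would miss, which is roughly the content of the "bayes points" above.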

making sure that we can still get the value of the perpetrator's work.

The standard recommendation I've always heard is basically in the family of tradeoffs, but says that you never really land on the side of preserving the perpetrator's contributions when you factor in the victim's contributions and higher order effects from networks/feedback loops. 
