quinn

878 karma · Joined Dec 2020

Comments (115)

Answer by quinn · Nov 23, 2022

I just dumped $2k into AMF. 

  • I was behind on taking 5% out of a bunch of invoices, and decided to catch up all in one place
  • My direct work efforts have all been longtermismy the past year
  • Historically, I've donated toward animals or future people. 
  • Clear, legible wins for virtue signaling without mental gymnastics are desirable. I don't want to have to explain a 300 IQ plan that makes forecasting tech or game theory altruistic every time I want to make an attempt at transmitting the "things are broken, if I can try to help then you can too" core message to someone. 

Earlier this year I maxed out the Carrick Flynn donation, which, in the most ancient and wisest words of whomever, "seemed like a good idea at the time". 

Both of the above donations came out of money I made as a proof engineer at a DeFi startup. I took a pay cut from there for work in the EA ecosystem, and I haven't decided yet if I'm gonna try to skim 5-10% off the top of my post-pay-cut money for more donations. 

I think the heuristic I mentioned is designed for sexual assault cases, and I wouldn't expect it to be the right move for less severe forms of interpersonal harm. 

Realizing now that I did the very thing that annoys me about these discussions: making statements tuned for severe and obvious cases that have implications for less severe or obvious cases, without being clear about it, leaving the reader to wonder whether they ought to round the less obvious cases up into more obvious ones. Sorry about that. 

Poor accounting, possibly just no really global accounting or sense of where the money was going;

I chatted with an Alameda Python dev for about an hour. I tried to get a sense of their testing culture, QA practices, etc. Lmao: there didn't seem to be any. Soups of scripts, no time for tests, no internal audits. Just my impression. 

My type-driven and property-based testing zealot/pedant side has harvested some Bayes points, unfortunately. 

making sure that we can still get the value of the perpetrator's work.

The standard recommendation I've always heard is framed as a tradeoff, but says that you never really land on the side of preserving the perpetrator's contributions once you factor in the victim's contributions and the higher-order effects from networks/feedback loops. 

One brief point against Left EA: solidarity is not altruism.

low-effort shortform: do link back here if you steal these ideas for a more effortful post

It has been said in numerous places that leftism and effective altruism owe each other some relationship, stemming from common goals and so on. In this shortform, I will sketch one way in which this is misguided. 

I will be ignoring cultural/social effects, like bad epistemics, because I think bad epistemics are a contingent rather than necessary feature of the left. 

Solidarity appeals to skin in the game. Class awareness is good for teaming up with your colleagues to bargain for higher wages, but it's literally orthogonal to cosmopolitanism/impartiality. Two objections are mutual aid and some form of "no, actually, leftism is cosmopolitanism".

Under mutual aid, at least as it was taught at the Philly Food Not Bombs chapter back in my sordid past, we observe the hungry working alongside the fed to feed even more of the hungry: you can coalition across the hierarchical barrier between charitable action and skin in the game, or reject the barrier flatly. While this lesson works great for meals or needle exchanges, I'm skeptical about how well it generalizes even to global poverty, to say nothing of animals or the unborn.

The other objection, that leftism actually is cosmopolitan, only really makes sense to the thought-leaders of leftism and is dissonant with theories of change that involve changing ordinary people's minds (which is most theories of change). A common pattern for leftist intellectuals is "we have to free the whole world from the shackles of capitalism; working-class consciousness shows people that they can fight to improve their lot" (or some flavor of "think global, act local"). It is always the intellectual who's thinking about that highfalutin improving of the lot of others, while the pleb rank and file are only asked to advocate for themselves.

Instead, EAs should be honest: we do not fight via skin in the game, we fight via caring about others, and EA thought leaders and EA rank and file should be on the same page about this. This is elitist only to the staunchest horizontalist. (However, while I think it is sparingly, and for good reason, that we defer to standpoint epistemology, it's very plausible that it has its moments to shine, and plausible that we currently don't use standpoint epistemology enough; but that's getting a bit afield.) 

Element can get you both e2e encryption and message transfer between devices. 

It looks like they're ending SMS fallback for messaging people who don't have Signal installed. That's a dealbreaker for me. I agree with a lot of your post, but we should look into funding a sysadmin for an EA Synapse server, since Element can bridge to other services. I haven't looked into it that much. 

The uncertain bibliography

A LaTeX plugin for annotating a .bib file with credences, confidence intervals, squiggle strings, and replication probabilities. 
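To make the annotation format concrete: BibTeX tolerates fields it doesn't recognize, so the uncertainty metadata could piggyback on ordinary entries. Here's a minimal sketch in Python; the entry and the field names (credence, ci90, squiggle, p_replication) are hypothetical, not an existing standard:

```python
# Hypothetical .bib annotations read with a tiny illustrative parser.
import re

BIB = r"""
@article{doe2021effect,
  title         = {Some Effect Size Estimate},
  author        = {Doe, Jane},
  year          = {2021},
  credence      = {0.65},
  ci90          = {0.2, 1.1},
  squiggle      = {normal(0.6, 0.25)},
  p_replication = {0.4},
}
"""

UNCERTAINTY_FIELDS = ("credence", "ci90", "squiggle", "p_replication")

def annotations(bib_text):
    """Extract the uncertainty fields from each entry in a .bib string."""
    out = {}
    for key, body in re.findall(r"@\w+\{(\w+),(.*?)\n\}", bib_text, re.S):
        fields = dict(re.findall(r"(\w+)\s*=\s*\{(.*?)\},", body))
        out[key] = {k: v for k, v in fields.items() if k in UNCERTAINTY_FIELDS}
    return out

print(annotations(BIB))
# {'doe2021effect': {'credence': '0.65', 'ci90': '0.2, 1.1',
#                    'squiggle': 'normal(0.6, 0.25)', 'p_replication': '0.4'}}
```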

Interactive consensus aggregator for squiggle estimates

If analysts Alice and Bob each write a cost-effectiveness analysis of charity C, then donor Eve ought to be able to input relative trust quantities informing how to weight Alice's estimate against Bob's. In other words, if Eve thinks Alice is twice as trustworthy or competent as Bob, then the MVP would return the squiggle string mixture(alice, bob, [2, 1]).
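A minimal sketch of those semantics in Python, assuming Alice's and Bob's squiggle estimates have already been reduced to samplers; the two lognormals are hypothetical stand-ins for their actual analyses:

```python
# Trust-weighted aggregation: each draw comes from analyst i with
# probability proportional to Eve's trust weight for analyst i.
# Same spirit as the squiggle string mixture(alice, bob, [2, 1]).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the analysts' estimates of charity C.
alice = lambda n: rng.lognormal(mean=3.0, sigma=0.5, size=n)
bob   = lambda n: rng.lognormal(mean=3.4, sigma=0.8, size=n)

def mixture(samplers, weights, n=100_000):
    """Sample from the weight-proportional mixture of the samplers."""
    w = np.asarray(weights, dtype=float)
    counts = rng.multinomial(n, w / w.sum())
    return np.concatenate([s(k) for s, k in zip(samplers, counts)])

# Eve trusts Alice twice as much as Bob.
eve = mixture([alice, bob], [2, 1])
print(f"median: {np.median(eve):.1f}, "
      f"90% CI: ({np.quantile(eve, 0.05):.1f}, {np.quantile(eve, 0.95):.1f})")
```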

Interactive worldview substituter for squiggle estimates

Building on the above, it would be nice if Alice and Bob had agreed on a set of input variables: background quantities like the state of a Manifold market on some ML benchmark, or a global count of malaria cases. This set of input variables can be thought of as a worldview, and its quantities are, upon inspection, squiggle estimates themselves. Then it would be nice to be able to trivially substitute these input worldviews, for example if you want to see how your cost-effectiveness analysis should change over time, or if you think Bob has a more calibrated background worldview but prefer Alice's fermstimate of charity C's impact. 
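A minimal sketch of that substitution in Python; the variable names, distributions, and the toy impact model are all hypothetical:

```python
# Worldview substitution: the analysts agree on a set of named input
# variables (a "worldview"), and each estimate is a function of them.
import numpy as np

rng = np.random.default_rng(0)

# Two background worldviews over the shared inputs. In a real tool these
# would themselves be squiggle estimates or live data/market feeds.
alice_worldview = {"malaria_cases": lambda n: rng.normal(240e6, 15e6, n)}
bob_worldview   = {"malaria_cases": lambda n: rng.normal(210e6, 40e6, n)}

def alice_estimate(worldview, n=100_000):
    """Alice's fermstimate of charity C's impact per dollar, parameterized
    by the shared inputs instead of hard-coded background assumptions."""
    cases = worldview["malaria_cases"](n)
    cases_averted_per_dollar = rng.lognormal(mean=-8.0, sigma=0.3, size=n)
    return cases_averted_per_dollar * cases / 240e6  # scale with burden

# Keep Alice's model of the charity, but swap in Bob's background inputs.
for name, wv in [("alice", alice_worldview), ("bob", bob_worldview)]:
    print(f"{name}'s worldview -> median impact/$: "
          f"{np.median(alice_estimate(wv)):.2e}")
```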
