quinn

656 · Joined Dec 2020

Comments (90)

EA in the mainstream media: if you're not at the table, you're on the menu

(To save some folks time clicking or reading:) the article makes the case that EA goals are subject to the seductively evil drive to collect taxes for public-goods funding, and I personally feel bad that I didn't predict that argument would come from this crowd once I saw that Sam, Dustin, and Cari were making specific political plays.

(Excuse my US-centrism.) Usually when I think about the EA-Republicans question, I think about nationalism, religiosity, and some core "liberty is welfare and has good externalities" principles around homosexuality and freedom of movement, but this article updated me to also think about taxes (not that I think Republicans are actually against taxation in any meaningful sense, just that there's nonzero information in what they choose to write on the tin).

EA in the mainstream media: if you're not at the table, you're on the menu

Thanks. I was inspired yesterday to do a point-by-point response to the piece. It feels a little "when you wrestle with a pig, you get muddy and the pig likes it," but spoiler alert: I think there's nonzero worthy critique hiding in the bad writing.

Workers will rationalize high-paying jobs by giving most of their income away. Actually, when you work, you already give to society, but that is too complex for some to understand.

I think EAs live in the space between the extreme "capitalism is perfectly inefficient, such that a Wall Street compensation package is irrelevant to the (negligible) social value a Wall Street worker produces" and the equally extreme "capitalism is perfectly efficient, such that a Wall Street compensation package is in direct proportion to the (evidently high) social value a Wall Street worker produces". Also, insofar as capitalism is designed rather than emergent, is it really optimized for social value? It seems optimized for metrics that are proxies for social value, and very much subject to Goodhart's law, but I'll stop before I start riots in every last history and economics department. Moreover, why not want more number-go-up? If number-go-up is good, and working some arbitrary gig in fact makes number go up, then donating some of the proceeds makes number go up even more, so E2G people are correct to do both!

Animal rights and veganism are big in the movement as well.

Sorry, this reads to me like applause lights for the "I hate those smug virtue-signaling vegans because I love bacon" crowd. OP's thesis about EA doesn't really relate to our unusually high vegan population; they might as well have pointed out our unusually high queer or Jewish or computer-programmer population.

Yes, they direct money toward malaria nets and treatments for parasitic worms, but they also supply supplements for vitamin A deficiency, though genetically modified “golden” rice already provides vitamin A more effectively. Hmmm, seems like a move backward.

Sorry, one sec, I'm laughing a little at this "what have the Romans ever done for us?" moment: "yeah, besides the malaria nets and deworming, which I admit are a plus, what have the EAs ever done for the poor?" It's like Monty Python! Anyway, my friend, if you think golden rice is a neglected approach to vitamin A deficiency, are you rolling up your sleeves and advancing the argument? Do you even bother to cite evidence that it's more effective? "Hmmm, seems like a move backward" is a completely unjustified and frivolous sentence.

That’s a bit like closing the barn door after the horse has bolted.

EAs do not subscribe to the interpretation of random variables that you imply! We do not believe that random variables conserve a supply of events out in the universe of potentiality, such that an event of a particular class drains the supply of that class from the future. We believe that events of a class occurring do not imply there's less of that class available to occur in the future. If anything, we believe the opposite: observing an event of a class should update us to think such events are more likely than we thought before we observed it! Moreover, EAs are widely on record advocating for pandemic preparedness well before covid.
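To make the Bayesian point concrete, here's a minimal sketch of a Beta-Bernoulli update (uniform prior assumed purely for illustration): observing an event of a class raises, rather than lowers, the estimated probability of future events of that class.

```python
def posterior_mean(successes, trials, alpha=1, beta=1):
    # Posterior mean of a Beta(alpha, beta) prior after observing
    # `successes` hits in `trials` Bernoulli draws (Laplace's rule).
    return (alpha + successes) / (alpha + beta + trials)

prior = posterior_mean(0, 0)      # 0.5 -- uniform prior, no data yet
after_one = posterior_mean(1, 1)  # ~0.667 -- the event happened once, so it
                                  # now looks MORE likely, not "used up"
```

Nothing here depends on the event being a pandemic; it's just the generic direction of the update.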

Partly as a result of his and his brother’s efforts, $30 billion for pandemic preparation was written into the Biden administration’s thankfully stalled Build Back Better porkfest.

From a writing-style perspective, this is blatant applause lights for the tribe of those who think Build Back Better is bad.

Catch that? Someone else pays. Effective, but not exactly selfless. It’s the classic progressive playbook: Raise taxes to fund their pet projects but not yours or mine. I don’t care if altruists spend their own money trying to prevent future risks from robot invasions or green nanotech goo, but they should stop asking American taxpayers to waste money on their quirky concerns.

Not wrong. Policy efforts inevitably draw this line (from this crowd, at least), unless they're, like, tax-cutting. Policy EAs are advancing a public-goods argument, which opens us up to every lowering-my-taxes-is-ITN guy that every public-goods argument in the world is opened up to. I don't need to point out that OP surely has pet projects they think ought to be funded, by taxes even, and I can omit conjectures about what those are and how I personally feel about them. But this is a legitimate bit of information about EA policy efforts. (It's obviously subject to framing devices: tax increments are sufficiently complex that a hostile reader would call something an "increase by 0.75%" while another reader would say the numbers were pushed around the page so that the 0.75% came from somewhere else and it's not a real increment, and neither would be strictly lying.)

And “effective” is in the eye of the beholder. Effective altruism proponent Steven Pinker said last year, “I don’t particularly think that combating artificial intelligence risk is an effective form of altruism.”

I'll omit what I actually think about Pinker, but in no world is this settled. Pinker is one guy, whom lots of people disagree with!

There are other critics. Development economist Lant Pritchett finds it “puzzling that people’s [sic] whose private fortunes are generated by non-linearity”—Facebook, Google and FTX can write code that scales to billions of users—“waste their time debating the best (cost-effective) linear way to give away their private fortunes.” He notes that “national development” and “high economic productivity” drive human well-being.

Seems valid to me. Nonlinear returns on philanthropy would be awesome, wouldn't they? It's a bit like "if a non-engineer says 'wouldn't a heat-preserving engine be great?' we don't laud them as a visionary inventor": I don't expect OP to roll up their sleeves and start iterating on what that nonlinearly-returning mechanism would look like. But that doesn't mean we shouldn't take a look ourselves.

There are only four things you can do with your money: spend it, pay taxes, give it away or invest it. Only the last drives productivity and helps society in the long term.

This should clearly be in our Overton window about how to do the most good. It almost alludes to Hauke Hillebrandt's excellent essay, doesn't it?

Eric Hoffer wrote in 1967 of the U.S.: “What starts out here as a mass movement ends up as a racket, a cult, or a corporation.” That’s true even of allegedly altruistic ones.

This seems underjustified and without much substance. What OP has portrayed may qualify as a racket to people of a particular persuasion regarding government spending, or as a cult to the "I intuitively dislike virtue signaling and smugness, so I look for logical holes in anyone who tries to do good" crowd, but OP could have been more precise and explicit about which of those they think is important to end on. But alas, when you're in a memeplex you know you share with your audience, you only have to handwave! lol


As Scott Alexander recently observed, EAs are like a Borg: we assimilate critics of any quality bar whatsoever. As much as we respect Zvi "guys, I keep telling you I'm not an EA" Mowshowitz's wish not to carry a card with a lightbulb heart stamped on it, it's pretty hard not to think of him as an honorary member. My point is we really should consider Borg-ing up the "taxation is theft" sort of arguments about public goods and the "investment beats aid" sort of arguments about raising global welfare.

What ‘equilibrium shifts’ would you like to see in EA?

the philly value prop

  • 2 hours from New York, a little over 2 from DC, and something like 5-7 to Boston depending on whether you drive or take Amtrak.
  • EA Philly's discord has about a hundred people.
  • A WeWork cluster (spearheaded by Rethink Priorities) has a bunch of empty desks at the time of this writing!
  • Rent under a thousand is quite easy to find for a lot of different types of people and needs (I pay more than anyone in my house and I'm at like 537, lol).
  • Penn has a reasonable EA history, hard coursework, and some cool profs and students.
  • Adequate public transit.

Not philly: 

  • The volume of entrepreneurial vibes is small. At a meetup you're more likely (by an astounding factor) to run into a "since there's no free will we might as well all not try / rationality is a social club, it's not about kicking ass and winning" guy than a "well, I've been working on project xyz" or "my theory of change is abc" guy.
  • Summers are too hot and humid, winters are too cold. lol. The sweet spot of no complaints doesn't actually feel too short to count, tbh; it's really only the worst part of the summer and the worst part of the winter that I can be caught whining about.
  • You should talk to the people who bailed from Penn EA about what they don't like about Philly.

Reach out to me for a couch if you want to visit! 

GLO, a UBI-generating stablecoin that donates all yields to GiveDirectly

 to make this no longer a red flag?

Some note to the effect that you've red-teamed the strategy and planned for contingencies. I think if I had read a brief comment like

I think you're following the way I set up the point about EV theories; what I meant really just had to do with risk tolerance: I think the risk tolerance of the user base implies a more conservative approach.

GLO, a UBI-generating stablecoin that donates all yields to GiveDirectly

Cheers, chaps! Thanks for the update. I hope you're right and I hope you win.

If bonds stop generating yields, we’ll have to rethink our strategy.

Sorry about not sparing the effort to sound nicer; I want to write this comment quickly. I think of markets (in the sense of "which things are generating how much yield?") as rather fickle, and if I were involved I wouldn't build any strategy at all around these short-term signals; my assessment of your circumstances is not one that would lead me to get behind a myopic strategy. And I'll go one further: it's a bit of a red flag that y'all are willing to stake so much on fickle behaviors that you're observing in a notoriously fickle market.

The discussion here is a broad one. On the one hand, I don't have a good track record, in that I've never been super glad about worshiping at the altar of EV theory (the altar hasn't made me an OOM more money than I could make working hourly), which you can interpret as either lack of luck or lack of wisdom, and you don't want to listen to advice from people without a good track record. On the other hand, the nature of your product is specifically bound up in trust and stability, which comes with a kind of responsibility that makes the class of reasoning you want to do decidedly not the "EV theory YOLO" that characterizes, for example, SBF's journey from well-off Jane Street alum to $30b. The fact is that a person reasoning about the journey from $1 to $100/day has a qualitatively different EV theory than a person reasoning about the journey from $1,000 to $10,000/day, because logarithms. A claim I'm considering making is that the GLO team ought to use the former EV theory -- because it's the one used by the users! -- even though most DeFi projects prefer the latter.
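To gesture at the "because logarithms" point: under log utility, a gamble that looks good in expected dollars can look bad in expected welfare, which is roughly why a near-subsistence user base implies a conservative strategy. A toy sketch (the incomes and the bet are mine, purely hypothetical, not anything GLO actually faces):

```python
import math

def expected_linear(outcomes):
    # Expected dollar value over equally likely outcomes.
    return sum(outcomes) / len(outcomes)

def expected_log(outcomes):
    # Expected log utility over equally likely outcomes; ruinous
    # downsides dominate, so this penalizes variance near subsistence.
    return sum(math.log(x) for x in outcomes) / len(outcomes)

# A 50/50 bet on a $10/day income: triple it, or fall to $1/day.
bet = [30, 1]
stay = [10, 10]

expected_linear(bet)   # 15.5 -- linear EV says take the bet
expected_log(bet)      # ~1.70, versus ~2.30 for staying put -- log EV declines
```

Log utility is scale-invariant for multiplicative swings, but a fixed dollar loss is catastrophic near subsistence and a rounding error at $1,000/day, which is the asymmetry between the two EV theories.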

I could be wrong here: I may be misassessing how much the strategies you're building around short-term properties of market behavior really are short-term strategies; maybe they won't shoot you in the foot when those properties change and you have to reassess.

How do employers not affiliated with effective altruism regard experience at EA-affiliated organizations?

Epistemic status: pepperoni airplane; includes reasoning about my blasé and bohemian risk tolerance and career path, which probably doesn't apply to e.g. people with responsibilities. I think it'd be really hard to approach this question non-anecdotally: for example, employers are cagey about the reasons they decline to hire someone (legal risk), which is a barrier to creating a dataset.

I took a 6-month sabbatical at the EA Hotel to do some AI safety things smack in the middle of what was supposed to be a burgeoning IT career. I received zero career advice telling me to leave that startup after a mere 8 months, but I'm good at independent study, and I was finding that, in my case, the whole "real-world jobs teach you more than textbooks" thing was a lie. So off to Blackpool I went. Here's the one consideration: I didn't feel my AI Safety Camp Five project had the freedom to be too mathy; I felt I needed to make sure it had a GitHub presence with nice-looking pull requests, because while I was earnestly attempting an interesting research problem, I was also partially optimizing for legible portfolio artifacts for when I'd end up back on the job hunt.

When I got home, I had a couple of months left of the SERI internship, and toward the end of that I landed an interview at a consultancy for web3 projects (right group chat, right time) and crushed it using some of my EA Hotel activities (the leader of the consultancy ended up mentioning reinforcement learning on sales calls because of my AI Safety Camp Five project, though no customers took him up on it). I kinda borked my SERI project, so I took a confidence hit as far as alignment or any kind of direct work was concerned, and retreating into E2G was the move: it was also great brain food and exposed me to generically kickass people. The point is that EA was not a negative signal; even a totally weird-sounding sabbatical at a hotel in a beach town scored no negative points in the eyes of this particular employer. The takeaway about my AI Safety Camp Five project is that you can optimize for things legible to normies while doing direct work.

If you have way less bohemian risk tolerance than me, then your EA activities will be way more legible and respectable than mine were at that time. 

It's kind of like what they tell people trying to break into IT from "nontraditional paths": the interview is all about spin, narrative, and confidence. IT managers, in my experience (excuse another pepperoni airplane), can get a ton of useful information from stories about problem solving and conflict resolution that took place in restaurants or on film sets! Unless I'm deliberately drawing the least charitable caricature of HR, I assume that if you talked in an interview about some project you tried for a while with this social movement of philosophers trying to fix massive problems, you'd get a great response.

My Most Likely Reason to Die Young is AI X-Risk

I have suggested we stop conflating positive and negative longtermism. I found, for instance, The Precipice hard to read because of the way Ord flips back and forth between the two.

What is the top concept that all EAs should understand?

I've come, through the joking-to-serious pipeline, to telling people that EAs are just people who are really excited about multiplication, and who think multiplication is epistemically and morally sound.

New US Senate Bill on X-Risk Mitigation [Linkpost]

Seems like a win; curious to hear about the involvement of people in our networks in making this happen.

Future Fund June 2022 Update

and generally find it pretty frustrating. For example, would your next step be to send emails to each of those addresses? ;)

I guess it's not realistic to litmus-test individuals about their cold-emailing practices and their seriousness about the problem area they claim to be working in before giving them access to the list.

I would expect the cold emailing advice given by Y Combinator to result in emails that do not frustrate regrantors. 
