
JeremyR

111 karma · Joined Mar 2017

Posts (1)

Sorted by New
26 · New cause area: Traffic congestion · 10mo ago · 20m read

Comments (19)

JeremyR
5mo · 10

Just seeing this, but yes, it was a quote from the original piece! FWIW I appreciate your use of “weird” vs. the original author’s more colorful language (though no idea if that’s what your pre-edit comment was in reference to).

JeremyR
6mo · 2621

Sharing my reflections on the piece here (not directly addressing this particular post; these are thoughts I originally shared with a friend).

While I agree with many of the points the author makes and think he raises valuable critiques of EA, I don’t find his arguments related to SBF especially compelling. My run-through of the perceived problems within EA that the author describes, and my reactions:

  1. The dominance of philosophy. I personally find parts of longtermism kooky and I'm not strongly compelled by many of its claims, but the Vox author doesn’t explain how this relates to SBF (or his misdeeds)... it feels more like shoehorning a critique of EA into a piece on SBF? 
  2. Porous boundaries between billionaires and their giving. So yes, it sounds like SBF was very directly involved in the philanthropy his funds went toward, but I don’t think that caused (much? any?) incremental reputational harm to EA vs. a world where he created the “SBF family foundation” and had other people running the organization. 
  • If I wanted to rescue this argument, maybe I could say SBF’s behavior here is representative of a common trait of his (at FTX and in his charity) – SBF doesn’t even have the dignity to surround himself with yes-men; he insists on doing it all himself! And maybe that’s a red flag re: cult of personality/genius and/or fraud that EA should have caught on to. 
  • I will say, though, that the FTX Future Fund had a board/team that was fairly star-studded and ran a big re-granting program (i.e., it let others make grants with their money). Which is to say I’m not sure how directly involved SBF actually was in the giving. [As an aside, I think it’s fine for billionaires to direct their own giving, and I'm a lot more suspicious of non-profit bloat and organizational incentives than the Vox author is.] 
  3. Utilitarianism free of guardrails. I agree a lack of guardrails is a problem, but: 
  • a) On utilitarianism’s own account it seems to me you should recognize that if you commit massive fraud you’ll probably get caught and it will all be worthless (+ cause serious reputational harm to utilitarianism), so then committing the fraud is doing utilitarianism wrong. [I don’t think I’m no-true-Scotsman-ing here?] 
  • b) More importantly… the author doesn't explain how unabashed utilitarianism led to SBF's actions - it's vague hand-waving that makes a point by association rather than through actual causal reasoning / proof, in the same vein as the dominance-of-philosophy point above. I guess the steelman is: SBF wanted to do the most good at any cost, and genuinely thought the best way to do so was to commit fraud (?) A bit tough for me to swallow. 
  4. Utilitarianism full of hubris. A rare reference to evidence (well, an unconfirmed account, but at least it’s something!) Comparing the St. Petersburg paradox to SBF figuring let’s double-or-nothing our way out of letting Alameda default is an interesting point to make (stylized sketch below), but SBF's take on this was so wild as to surprise other EAs. So it strikes me as a point in favor of “SBF has absurd viewpoints and his actions reflect that” vs. “EA enabled SBF.” Meanwhile the author moves directly from this anecdote to “This is not, I should say, the first time a consequentialist *movement* has made this kind of error” (emphasis added). SBF != the movement, and I think the consensus EA view is the opposite of SBF’s, so this feels misleading at best.
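To make the double-or-nothing logic concrete, here's a stylized sketch (my own illustration of the St. Petersburg-style reasoning, not math from the piece): suppose each round you stake your whole bankroll x on a bet that doubles it with probability p > 1/2 and otherwise wipes it out. Then

$$\mathbb{E}[\text{wealth after } n \text{ rounds}] = (2p)^n x \to \infty, \qquad \Pr[\text{still solvent after } n \text{ rounds}] = p^n \to 0.$$

Naive expected-value maximization says to keep playing forever even though ruin is all but certain, which is exactly why most EAs treat this reasoning as a reductio rather than a strategy.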

One EA critique in the piece that resonated with me - and that I'm not sure I'd seen put so succinctly elsewhere - is: 

“The philosophy-based contrarian culture means participants are incentivized to produce ‘fucking insane and bad’ ideas, which in turn become what many commentators latch to when trying to grasp what’s distinctive about EA.” 

While not about SBF, it's a point I don't see us talking about often enough with regard to EA perceptions / reputation and I appreciated the author making it. 

TL;DR: I thought it was an interesting and thought-provoking piece with some good critiques of EA, but the author (or - perhaps more likely - the editor who wrote the title / sub-headers) bit off more than they could chew in actually connecting EA to SBF's actions.

JeremyR
9mo · 10

Thanks Adina! Agree it's an awesome tool; the link was in my draft but I really should have incorporated it!

Taking the tool "one step further" (e.g., trying to size the impact of each intervention in a more standardized manner) is probably one of the most clear-cut (and possibly high-return) next steps a funder could take if they were interested in further pursuing the topic. 

JeremyR
10mo · 20

I know the footnotes in this piece don't currently work :( I pasted my write-up from a Google doc based on this guidance, but it seems something broke in my attempt. If anyone here can help me figure out how to get those sorted, that'd be much appreciated!

Relatedly, two upfront notes I'd have liked to add toward the start but couldn't get to work as footnotes in the editor:

  1. Almost all of the data I used in this piece came from the Texas A&M Transportation Institute's (TTI) annual Urban Mobility Report, which is not peer-reviewed. It seems to be the only real game in town on the topic of traffic’s scale and effects, and is incredibly thorough. I spoke to David Schrank, one of its co-authors, in drafting this piece and made sure I had a (very) surface-level understanding of TTI’s methodology, but ultimately my findings do hinge largely on their work. This goes without saying, but further review is warranted before allocating meaningful resources accordingly.
  2. COVID dramatically altered the traffic landscape over the last several years, and is likely to leave a lasting mark. How lasting remains to be seen - TTI’s latest report is based on 2020 data - but in my analyses I generally rely on pre-COVID (2014-2019) data. It’s worth being explicit that - at its peak - COVID dramatically reduced traffic, and the work-from-home policies it begot will almost certainly lead to a step-change in traffic moving forward. While in some sense this means low-hanging fruit has already been plucked, COVID has also shifted the “Overton window,” allowing for discussion of opportunities that a few short years back seemed far-fetched.
Answer by JeremyR · Apr 07, 2022 · 10

I haven't read any of Blattman's writings, but in case I'm not too late and these aren't being covered, I'd be curious to hear his thoughts on:

  1. The impact of international institutions on war (e.g., do they help prevent and/or end wars? Are they merely an extension of power by different means? Do these examples represent institutionalism and realism respectively, which perhaps he thinks we should be "down with"?)
  2. The impact of nuclear weapons on willingness to fight (do they, in his view, help prevent war?)

For what it's worth, I took a course on Causes of War in college ~ten years back with Professor Gary Bass, and I still have the syllabus alongside summaries of a few of the assigned readings. They're raw, but if you're still looking for inspiration I'm happy to share them for you to skim. 

“On the other hand, taxes are not entirely ‘money lost’ - a good part of government spending goes into causes that you may not be entirely averse to - although it's hard to tell what a marginal dollar will do, e.g. whether it will be used to cut the taxes of millionaires, or to provide social benefits to the poor.”

To your point on marginal impact - governments certainly don't spend money they take in dollar for dollar, and in fact it seems the correlation between intake and expenditure is quite far from 1:1. US government debt is on the order of trillions of dollars, so while it's maybe slightly better than flushing your money down the toilet, I'm not sure I'd value it much higher.

“Personally, I would donate to the Long Term Future Fund over the global health fund, and would expect it to be perhaps 10-100x more cost-effective (and donating to global health is already very good). This is mainly because I think issues like AI safety and global catastrophic biorisks are bigger in scale and more neglected than global health. Coming up with an actual number is difficult – I certainly don’t think they’re overwhelmingly better.”

Not to pick nits, but what would you consider “overwhelmingly better”? 1000x? I'd have said 10x, so I'm curious to understand how differently we're calibrated / the scales we think on. 

Should "reduction" in the quote below (my emphasis) read "increase?" 

"This is  hard to justify intuitively - it implies that we should ignore the near-term costs, and (taken to the extreme) could justify almost any atrocity in the pursuit of a miniscule reduction of long-term value."

JeremyR
2y · 210

Posting as an individual who is a consultant, not on behalf of my employer

Let me start off by saying that's an interesting question, and one I can't give a highly confident answer to because I don't know that I've ever had a conversation with a colleague about truth qua truth. 

That said, my short answer would be: I think many of us care about truth, I think our work can be shaped by factors other than truth-seeking, and I think if the statement of work or client need is explicitly about truth / having the tough conversations, consultants wouldn't find it especially hard to deliver on that. The only factor particular to consulting that I could see weighing against truth-seeking would be the desire to sell future work to the client... but to me that's resolved by clients making clear that truth is what they value, which would keep incentives well-aligned. 

My longer answer...

  • I think most of my colleagues do care about truth, and are willing to take a firm stance on what they believe is right even if it's a tough message for the client to hear. [Indeed I've explicitly heard firm leadership share examples of such behavior... which I think is an indicator that a) it does happen but b) it's not a given, which ties to...]
  • ...I think there's a recognition that at the end of the day, we have formal signed statements of work regarding what our clients expect us to deliver, and our foremost obligation is to deliver according to that contract (and secondarily, to their satisfaction) rather than to "truth"
  • If our contracts were structured in a more open-ended manner or explicitly framed around us delivering the truth, I see no reason (other than the aforementioned) why we would do anything other than provide that honest perspective
  • I wonder to what extent employees of EA organizations feel competing forces against truth (e.g., I need to keep my job, not rock the boat or say controversial things that could upset donors) - I think you could make a case that consultants are actually better poised to do some of that truth-seeking, e.g., if it's a true one-off contract

To your 2nd question about >70%: 

  • I don't think this framing is really putting your original question another way (to sprinkle in some consulting-ese, I think "the question behind your question" is something else)
  • That said, my "safe," not-super-helpful, and please-don't-selectively-quote-this-out-of-context answer is less than half the time...
  • ...But that's because most of the work I (and I'd venture to say, most of us) do isn't about truth-seeking, so it's not the sort of thing about which reasonable people of good will will have meaningful disagreement. Rather, the work is about further developing a client's hypothesis, or helping them understand how best to pursue an objective, or helping them execute a process in which they lack expertise [all generally in the service of increasing client profitability]