
samuel

203 karma · Joined Mar 2022

Posts (1)

26 · samuel · 2y ago · 3m read

Comments (28)

Thanks for this summary. I listened to this yesterday and browsed through the SH subreddit discussion, and I'm surprised it hasn't received much discussion here. Perhaps the EA community is talked out on this subject, which, fair enough. But as far as I can tell, this is one of the first times Will has spoken publicly at length about SBF, so it feels discussion-worthy to me.

I agree that the discussion was oddly vague given all the actual evidence we have. I won't go into much detail, but here are a few things I noticed:

  • It seems that Will is still somewhat in denial that SBF was a fraud. I suppose that's a valid opinion (Will knows SBF personally), but I can't help feeling it's naive (or measured, if we're being charitable). We can quibble over his reasoning, but fraud is fraud, and SBF committed a lot of it, repeatedly and at scale. He doesn't have to be good at fraud to be a fraud.
  • They barely touch on the dangers of a "maximalist" EA. If Will doesn't believe SBF committed fraud for ordinary greedy reasons, then EA may have played a part, and that's worth considering. No need to be overly dramatic here: the average EA is well-intentioned and isn't going to do anything like this... but as long as EA is centralized and reliant on large donors, it's something we need to think through further.
  • The podcast is a reminder of how difficult it is to understand motivations, and how difficult it is for "good" people to understand what motivates "bad" actions (quotes added to acknowledge the vast gray areas here). Given how many neuro-atypical EAs there are, the community seems desensitized to potential red flags like claiming not to feel love. This is my hobbyhorse: like any community, EA has smart, capable people whom I would never want to hold power. It sounds like some people felt this way about SBF, including a cofounder. It's a bit shocking to me that EA leadership didn't see SBF as much of a liability.

There's a big effort to increase access to clean cooking, especially in Africa, so it's entirely possible that this kind of intervention could be bundled with projects that are already giving away stoves (now often in exchange for carbon credits). I actually know someone leading a project like this in West Africa. I have no idea how different bean-cooking practices are in West Africa versus Uganda, but I could ask!

Thanks for the feedback, Dan. Maybe I'm using the vocabulary incorrectly: does "collective" specifically mean one person, one vote? I deliberately avoid saying "democratic" and mention market-based decision-making in the first sentence.

It's not at all obvious to me that putting market-based feedback systems in place would look like the funding situation today. I think it's worth pushing back on the assumption that EA's current funding structure rewards the best performers in terms of asset allocation.

I want to push back a bit on my own intuition, which is that trying to build out collective (or market-based) decision-making for EA funding is impractical and/or ineffective. Most EAs respect the power of the "wisdom of crowds," and many advocate for prediction markets. Why exactly does this affinity for markets stop at funding? It sounds like most people think collective decision-making for funding is too infeasible to be worth considering, and that's fair, but if it were easy to implement, would it be ineffective?

Again, my intuition is to trust the subject-matter experts and rely on the institutions we've built for this specific task. But I invest in index funds, I believe that past performance is no guarantee of future results, and I trust that aggregate markets are typically more accurate than most experts. Have EA organizations proven that they are essentially superforecasters, that they consistently "beat the EA market" in terms of ROI? Perhaps this metaphor is doomed: these EA orgs are also market-makers. Who better to place bets than those with insider knowledge?

At the very least, this experiment seems ripe for running, if it hasn't been already. It's far beyond me to figure out how to structure it; I'll leave that to people like Nuno, who laid out a potential path. But we're making a rather large assumption that the collective is ineffective by default.
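To make the "wisdom of crowds" intuition concrete, here's a minimal toy simulation (every number is invented for illustration; nothing here is real funding or forecasting data): when forecasters' errors are independent, the average of many noisy estimates lands far closer to the truth than a typical individual estimate does. The caveat for EA funding is that this advantage erodes as errors become correlated, e.g., when everyone defers to the same few evaluators.

```python
# Toy "wisdom of crowds" sketch. Each forecaster estimates a true value
# with independent Gaussian noise; we compare the crowd's mean estimate
# against a single forecaster's estimate. All parameters are hypothetical.
import random
import statistics

random.seed(0)
TRUE_VALUE = 100.0    # hypothetical "true" value of a funding decision
NOISE_SD = 30.0       # assumed spread of individual estimation error
N_FORECASTERS = 50    # size of the crowd
N_TRIALS = 1000

crowd_errors = []
individual_errors = []
for _ in range(N_TRIALS):
    estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_FORECASTERS)]
    crowd_errors.append(abs(statistics.mean(estimates) - TRUE_VALUE))
    individual_errors.append(abs(estimates[0] - TRUE_VALUE))

# With independent errors, the crowd's error shrinks roughly like
# NOISE_SD / sqrt(N_FORECASTERS), far below a typical individual's error.
print(f"median crowd error:      {statistics.median(crowd_errors):.1f}")
print(f"median individual error: {statistics.median(individual_errors):.1f}")
```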

EDIT: someone pointed out that I'm conflating prediction markets with collective decision-making. To clarify, my comment refers to market-based decision-making (basically prediction markets), which I view as a subset of collective decision-making. Maybe my EA vocab is off, though.

I wouldn't call it predatory; in fact, every significant work test or trial I've done has been paid, which is remarkably progressive!

However, I empathize with your pain: interviewing for EA jobs is a rigorous and rather impersonal ordeal. As far as I know, this is a feature, not a bug. It's frustrating, but I try to cut them some slack. There are many applicants, EA orgs are almost always short-staffed, and they're trying to avoid bias. Most EAs want an EA job, but these hiring processes are optimized to test that desire.

Knowing this, I don't bother applying for an EA job unless I truly think that my application can be competitive and that I actually want the job (not a bad heuristic to follow in general).

I'm hopeful for lab-grown salmon (see: Wild Type Foods), but if all else fails and the taste for salmon proves too sticky, I could imagine a counterintuitive campaign that repositions salmon as "only for holidays." Of course, I'm sure this could easily backfire. This kind of work is hard!

Could an increase in salmon preference at Christmas also lead to higher preference for salmon year-round? More people are introduced to the fish, learn how to cook it, etc. It's perhaps another downstream effect to consider in your model, although it would be difficult to quantify, and hard to know whether your campaign has much of an impact here.

I'm very thankful for EVF and its associated orgs, and as others have noted, it's understandable how and why the community is currently organized this way. Eventually, depending on growth and other factors, it will probably make sense for the various sub-organizations to legally spin off, but I'm not sure this is a high priority; it depends on just how worried EAs are about governance in the wake of this past month.

I will say, conflict-of-interest disclosures are important, but it seems like they may be doing a lot of work here. As far as I can tell[1], leadership within these organizations also functions independently, and as EAs they're particularly aware of bias, so they've built processes to mitigate it. But being aware of bias and disclosing it doesn't necessarily stop [trustworthy] people from being biased (see: doctors prescribing drugs from companies that pay for their talks).

Even if these organizations separated tomorrow, I'd half expect them to stay in relative lockstep for years to come. Even if they never shared funding or leadership again, they're in the same community, they'll have funders in common, and they'll want to impress the same people, so they'll make decisions with this in mind. I've seen this first-hand in every [non-EA] org I've ever been part of, across sectors of all sizes, so moving forward we'll have to build with this bug in mind and decide just how much mitigation is worth doing.

I'm aware that none of this is original or groundbreaking, but it's perhaps worth reiterating.

  1. This is a little facetious, but does anyone else find themselves caveating more often these days, just in case...

"My point is just that this nightmare is probably not one of a True Sincere Committed EA Act Utilitarian doing these things" - I agree that this is most likely true, but my point is that it's difficult to suss out the "real" EAs using the criteria listed. Many billionaires believe that the best course of philanthropic action is to continue accruing/investing money before giving it away. 

Anyway, my point is more academic than practical; the FTX fraud seems pretty straightforward, and I appreciate your take. I wonder if this forum would be having the same sorts of convos after Thanos snaps his fingers.

I don't [currently] view EA as particularly integral to the FTX story either. Blaming ideology usually isn't fruitful, because people can contort just about anything to suit their own agendas. It's nearly impossible to prove causation; we can only gesture at it.

However, and I'm nitpicking here: is spending money on naming rights truly evidence that SBF wasn't operating under a nightmare-utilitarian EA playbook? It's probably evidence that he wasn't particularly good at EA, although one could argue it was a toll paid to further increase the earnings he'd eventually give away. It was clearly an ego play, but other real businesses buy naming rights too, for business(ish) reasons, and some of those aren't frauds... right?

I nitpick because I don't find it hard to believe that an EA could also 1) be selfish, 2) convince themselves that the ends justify the means, and 3) combine 1 and 2 into an incendiary cocktail of confused egotism and lumpy, uneven righteousness that ends up hurting people. I've met EAs exactly like this, but fortunately they usually lack the charm, know-how, and/or resources required to make much of a dent.

In general, I'm not surprised by the community's reaction. In the best-case scenario, it had no idea the fraud was happening (and looks a bit naïve in hindsight), and its dirty laundry is nonetheless exposed (it's not so squeaky clean after all). Even if EA was only a small piece of the machinery that produced such a [big, visible] fraud, the community strives to do *important* work, and it feels bad for potentially contributing to the opposite.
