keller_scholl

I think a practical intervention here would be outlining how much governance should be in place at a variety of different scales. "We employ 200 people directly and direct hundreds of millions of dollars annually" should obviously come with much more governance structure than two people self-funding a project. Something like: "by the time your group has ten members and expects to grow, one of them, who is not in a leadership role themselves, should be a designated contact person for concerns, and a second, replacement person, as socially and professionally distant from the first as practical, should be designated by the time your group hits 30 people." I expect explicit growth models of governance to be much more useful than broad prescriptions for decision-makers, and to make explicit the actual disagreements people have.

Thank you for responding. I read "Some of these men control funding for projects and enjoy high status in EA communities and that means there are real downsides to refusing their sexual advances and pressure to say yes, especially if your career is in an EA cause area or is funded by them. There are also upsides, as reported by CoinDesk on Caroline Ellison." I have seen a number of people pass around https://www.coindesk.com/business/2022/11/10/bankman-frieds-cabal-of-roommates-in-the-bahamas-ran-his-crypto-empire-and-dated-other-employees-have-lots-of-questions/. I have seen a number of assertions that Caroline received the job because of a sexual/romantic relationship with SBF. I haven't seen anyone assert any other "upsides" that make sense in specific relation to Caroline Ellison. Would you mind clarifying what upsides you were referring to if not the CEO position?

[2022-11-13: Edit to include more of the context of the quote]

I think it's bad to confidently assert, without real evidence, that a woman slept her way to the top of a company. Do you think it's fine?

The casual assumption people make that obviously the only reason Caroline could have become CEO was that she was sleeping with SBF is annoying when I see it on Twitter or some toxic subreddit. Here I expect better. Plenty of people at FTX and Alameda were equally young and equally inexperienced. Gary Wang, FTX's CTO (a similarly important role at a tech company), was 29. Sam Trabucco, the previous Alameda co-CEO, seems to be about the same age. I have seen no reason to think that Caroline was particularly unusual in her age or experience relative to others at FTX and Alameda.

Or it's funny to write like that if you feel like it. Charles raises a fair point that social reactions to a post come far in the future, but their value can be many times that of the time you invested. That probably makes more sense for posts than comments, though.

Agreed that who their potential donor pool is matters a great deal. If I found out that an org had run the event the author describes for highly committed EAs, I would be aghast. But by the standards of what is done to solicit ultra-high-net-worth donors who move millions annually and who are not currently interested in EA, it seems entirely reasonable.

I think that most of this is good analysis: I am not convinced by all of it, but it is universally well-grounded and useful. However, the point about Communicating Risk, in my view, misunderstands the point of the original post and the spirit in which the discussion was happening at the time. It was not framed with the goal of "what should we, a group that includes a handful of policymakers among a much larger audience, be aiming to convince people with?". Rather, I saw it as a personally relevant tool that I used to validate advice to friends and loved ones about when they should personally get out of town.

Evaluating the cost in effective hours of life made a comparison they and I could work with: how many hours of my life would I pay to avoid relocating for a month and paying for an AirBnB? I recognize that it's unusual to discuss GCRs this way, and I would never do it if I were writing in a RAND publication (I would use the preferred technostrategic language), but it was appropriate and useful in this context. 
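To make that comparison concrete, here is a toy sketch of the kind of calculation involved. All numbers below (risk level, remaining life expectancy, personal valuation of relocating) are hypothetical illustrations, not figures from the original discussion:

```python
# Toy illustration of the "cost in effective hours of life" comparison.
# Every number here is hypothetical, not taken from the original analysis.

remaining_life_hours = 40 * 365 * 24   # ~40 years of remaining life, in hours
p_death_if_stay = 1e-4                 # assumed marginal risk of dying if you stay

# Expected hours of life lost by staying in town
expected_hours_lost = p_death_if_stay * remaining_life_hours

# Hours of life you would personally trade to avoid a month of
# relocation plus AirBnB costs (a subjective valuation)
relocation_cost_in_hours = 20

print(f"Expected hours lost by staying: {expected_hours_lost:.1f}")
print("Leave town" if expected_hours_lost > relocation_cost_in_hours else "Stay")
```

The point of the framing is that converting both sides into the same unit, hours of life, turns an otherwise incommensurable choice into a direct comparison.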

Two points, but I want to start with praise. You noticed something important and provided a very useful writeup. I agree that this is an important issue to take seriously.

"While aiming to be the person available when policymakers want expert opinion does not favour more technocratic decision-making, actively seeking to influence policymakers does favour more technocratic decision-making"

I don't think that this is an accurate representation of how policymakers operate, either for elected officials or bureaucrats. My view comes from a gestalt of years of talking with congressional aides, bureaucrats in and around DC, and working at a think tank that does policy research. Simply put, there are so many people trying to make their point in any rich democracy that being "available" is largely equivalent to being ignored.

There are exceptions, particularly academics who publish extensively on a topic and gain publicity for it, but most experts who don't actively attempt to participate in governance simply won't participate. Nobody has enough spare time or spare energy to reliably seek out points of view and ideas.

More importantly, I think that marginal expert influence mostly crowds out other expert influence, and does not crowd out populist impulses. Here I am more speculative, but my sense is that elected officials get a sense of what the expert/academic view is, as one input in a decision-making process that also includes stakeholders, public opinion (based on polling, voting, and focus groups), and party attitudes (activists, other elected officials, aligned media, etc.). Hence an EA org that attempts to change views mostly displaces others occupying a similar social / epistemic / political role, not public opinion.


On the bureaucracy side, expert input, lawmaker input, and stakeholder input are typically the primary influences on policy change. Occasionally public pressure will latch onto something, but the Federal Register is very boring, and, as the punctuated equilibrium model of politics suggests, most of the time the public isn't paying attention. Bureaucrats usually don't have the extra time and energy to go out and find people whose work might be relevant if nobody is actively presenting it to them. Add that most exciting claims are false, so decision-makers would really have to read through entire literatures to be confident in a claim, and the influence that experts cede goes primarily to existing stakeholders, not to populist impulses.

Suggesting that a future without industrialization is morally tolerable does not imply opposition to "any and all" technological progress, but the amount of space left is very small. I don't think they're taking an opinion on the value of better fishhooks.

The paper doesn't explicitly mention economic growth, but it does discuss technological progress, and at points seems to argue or insinuate against it.

"For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential." Personally, I consider a long-term future with a 48.6% child and infant mortality rate abhorrent and opposed to human potential, but the authors don't seem bothered by this. They had little space to explain how their implied society would handle the issue, though, so I will not critique it excessively.

There is also a repeated implication that halting technological progress is, at a minimum, possible and possibly desirable.
"Since halting the technological juggernaut is considered impossible, an approach of differential technological development is advocated"
"The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible"
"regressing, relinquishing, or stopping the development of many technologies is often disregarded as a feasible option" implies to me that one of those three options is a feasible option, or is at least worth investigating.

While they don't explicitly advocate degrowth, I think it is reasonable to read them as doing so, as John does.
