All of Cullen's Comments + Replies

Cullen
24d

OP gave some reasoning for their views on their recent blog post:

Another place where I have changed my mind over time is the grant we gave for the purchase of Wytham Abbey, an event space in Oxford.

We initially agreed to help fund that purchase as part of our effort to support the growth of the community working to reduce global catastrophic risks (GCRs). The original idea presented to us was that the space could serve as a hub for workshops, retreats, and conferences, to cut down on the financial and logistical costs of hosting large events at private f

... (read more)

How does AMF collect feedback from the end-recipients of bednets? How does feedback from them inform AMF's programming?

Do you have any citations for this claim?

3
Elizabeth
5mo
Implicit and explicit, from https://askamanager.com/ and https://nonprofitaf.com/ (which was much epistemically stronger in its early years)

According to the book Bullies and Saints: An Honest Look at the Good and Evil of Christian History, some early Christians sold themselves into slavery so they could donate the proceeds to the poor. Super interesting example of extreme and early ETG.

(I'm listening on audiobook so I don't have the precise page for this claim.)

(To avoid bad-faith misinterpretation: I obviously think that nobody should do the same.)

Longtermist shower thought: what if we had a campaign to install Far-UVC in poultry farms? Seems like it could:

  1. Reduce a bunch of diseases in the birds, which is good for:
     a. the birds’ welfare;
     b. the workers’ welfare;
     c. therefore maybe the farmers’ bottom line?;
     d. preventing/suppressing human pandemics (e.g. avian flu)
  2. Hopefully drive down the cost curve of Far-UVC
  3. Maybe also generate safety data in chickens, which could help derisk it for humans

Insofar as one of the main obstacles is humans' concerns for health effects, this would at least only raise these for a small group of workers.

I had a similar thought a (few) year(s) ago and emailed a couple of people to sanity-check the idea. All the experts I asked seemed to think this wouldn't be an effective thing to do (which is why I didn't do any more work on it). I think Alex's points are true (mostly the cost part; I think you could get high enough intensity for it to be effective).

3
Alex D
6mo
Good shower thought! A few people have come to this idea independently for swine CAFOs. There are a fair number of important "production-limiting diseases" in swine that are primarily spread via respiratory transmission, so this seems to me like a plausible win-win-win (as you've described). This is all very "shower thought" level on my side as well, and I'd be keen for someone to think this through in more depth. Very happy to talk it through with anyone considering a more thorough investigation! (Note my understanding is influenza is primarily a gastrointestinal illness in poultry, so I don't think this intervention is as promising in that context.)

I think 1 unfortunately ends up not being true in the intensive farming case. Lots of things are spread by close enough contact that even intense UVC wouldn't do much (and it would be really expensive).

Narrow point: my understanding is that, per his own claims, the Manifund grant would only fund technical upkeep of the blog, and that none of it is net income to him.

1
zchuang
8mo
Sorry for the delayed response. I think I took the secondary claim he made, that extra money would go towards a podcast, as the warrant for my latter claim. Again, I don't feel strongly about this either way, other than that we should fund critics and not treat external factors (mild disdain from forum posters) as determinative of whether or not we fund him.
Answer by Cullen, Jul 28, 2023

How probable does he think it is that some UAP observed on Earth are aliens? :-)

Super excited about the artificial conscience paper. I'd note that a similar approach could be very useful for creating law-following AIs:

An LFAI system does not need to store all knowledge regarding the set of laws that it is trained to follow. More likely, the practical way to create such a system would be to make the system capable of recognizing when it faces sufficient legal uncertainty,[10] then seeking evaluation from a legal expert system ("Counselor").[11]

The Counselor could be a human lawyer, but in the long-run is probably most robust and efficient

... (read more)

Utilitarianism is much more explicit in its maximisation than most ideologies, plus it (at least superficially) actively undermines the normal safeguards against dangerous maximisation (virtues, the law, and moral rules) by pointing out these can be overridden for the greater good.

Like yes there are extreme environmentalists and that's bad, but normally when someone takes on an ideology like environmentalism, they don't also explicitly & automatically say that the environment is all that matters and that it's in principle permissible to cheat &

... (read more)

I would be very curious for Gregory's take on whether he thinks EAs are too epistemically immodest still!

On the Democratic side, challenging Biden is a way to make yourself Very Unpopular with party elites. Challenging Harris, if she is his chosen successor, would be That But Worse.

This seems very wrong to me. Harris is very unpopular.

3
keller_scholl
1y
From what I can tell, Harris has impressively low name recognition and is fairly unpopular with voters. That doesn't mean that party elites won't object to an outside group sponsoring a candidate who doesn't have their blessing.
2
david_reinstein
1y
I agree there is a pretty open lane after Biden.

Thanks, this is a meaningful update for me.

it doesn't seem like a big leap to think that confidence in an ideology that says you need to maximise a single value to the exclusion of all else could lead to dangerously optimizing behaviour.

I don't find this a persuasive reason to think that utilitarianism is more likely to lead to this sort of behavior than pretty much any other ideology. I think a huge number of (maybe all?) ideologies imply that maximizing the good as defined by that ideology is the best thing to do, and that considerations outside of that ideology have very little weight. You se... (read more)

8
[anonymous]
1y
I disagree with this. I think utilitarian communities are especially vulnerable to bad actors. As I discuss in my other comment, psychopaths disproportionately have utilitarian intuitions, so we should expect communities with a lot of utilitarians to have a disproportionate number of psychopaths relative to the rest of the population. 
9
Benjamin_Todd
1y
I'd agree that a high degree of confidence + strong willingness to act, combined with many other ideologies, leads to bad stuff. Though I still think some ideologies encourage maximisation more than others. Utilitarianism is much more explicit in its maximisation than most ideologies, plus it (at least superficially) actively undermines the normal safeguards against dangerous maximisation (virtues, the law, and moral rules) by pointing out these can be overridden for the greater good.

Like yes, there are extreme environmentalists and that's bad, but normally when someone takes on an ideology like environmentalism, they don't also explicitly & automatically say that the environment is all that matters and that it's in principle permissible to cheat & lie in order to benefit the environment.

Definitely not saying it has any bearing on the truth of utilitarianism (in general I don't think recent events have much bearing on the truth of anything). My original point was about who EA should try to attract, as a practical matter.
8
Linda Linsefors
1y
Ooo look!  Someone already said all the things I wanted to say, except even better. This is great. I feel instantly less annoyed. Thanks :)

I don’t fully understand why the netted enclosure helps. Is the idea just that it prevents humans from coming close to the barns?

4
DirectedEvolution
1y
Thanks for noting your confusion; I updated the language in the opening to be specific about this. The netting would block bird/mink or bird/pig contact, preventing the mammals from getting infected and thus becoming a system in which a mammalian-transmissible bird flu strain could evolve.
4
Erin
1y
Prevents contact between birds and minks, I believe, reducing the likelihood of another crossover of H5N1 between the species.
1
jasmine_wang
1y
Thank you Cullen!

How do I access this?

1
AndreFerretti
1y
Hey Cullen! Unfortunately, this is just an image that I designed; it's not a real feature.

I feel like I was only speaking out against the framing that critics of EA are entitled to a lengthy reply because of EA being ambitious in its scope of caring. (This framing was explicit at least in the quoted paragraph, not necessarily in her post as a whole or her previous work.)

Ah, okay. That seems more reasonable. Sorry for misunderstanding.

I would also point out that I think the proposition that "social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk" is both:

  1. Probably undesirable to implement in practice because any criticism will have some disincentivizing effect.
  2. Probably violated by your comment itself, since I'd guess that any normal person would be disincentivized to some extent from engaging in constructive criticism (above the baseline of apathy or jerkiness) that is likely
... (read more)
6
Lukas_Gloor
1y
"Never" is too strong, okay. But I disagree with your second point. I feel like I was only speaking out against the framing that critics of EA are entitled to a lengthy reply because of EA being ambitious in its scope of caring. (This framing was explicit at least in the quoted paragraph, not necessarily in her post as a whole or her previous work.) I don't feel like I was discouraging criticism.

Basically, my point wasn't about the act of criticizing at all; it was only about an added expectation that went with it, which I'd paraphrase as "EAs are doing something wrong unless they respond to my concerns point by point."

Yeah. I have strong feelings that social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk. For example, I'm annoyed when people punish others for honesty in cases where it would have been easy to tell a lie and look better. Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply "Oh, you care for the future of all humans, and even animals? That's suspicious – we're definitely goi

... (read more)
4
Cullen
1y
I would also point out that I think the proposition that "social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk" is both:

  1. Probably undesirable to implement in practice, because any criticism will have some disincentivizing effect.
  2. Probably violated by your comment itself, since I'd guess that any normal person would be disincentivized to some extent from engaging in constructive criticism (above the baseline of apathy or jerkiness) that is likely to be labeled as immoral.

This is just to say that I value the general maxim you're trying to advance here, but "never" is way too strong. Then it's just a boring balancing question.

The main problem with lavishness, IMHO, is not optics per se, but rather that it's extremely easy for people to trick themselves into believing that spending money on their own comfort/lifestyle/accommodations is net-good-despite-looking-bad (for productivity reasons or whatever). This generalizes to the community level.

(To be clear, this is not to say that we should never follow such reasoning. It's just a serious pitfall. This is also not original—others have certainly brought this up.)

Out of curiosity, where does the money come from?

3
Austin
1y
Thanks for asking! Manifold has received a grant to promote charitable prediction markets, which we can regrant from. But otherwise, we could also fund these donations via mana purchases (some of our users buy more mana if they run out, or want to support Manifold.markets).

How clear is it that stablecoins have value other than by enabling speculative transactions on blockchains? My main model of stablecoins, borrowing from Matt Levine, is that if you do a lot of stuff on-chain, it is also useful to have an on-chain way to transact in fiat.

I could definitely think of many situations in which stablecoins would be useful, but on priors I would guess they’re fairly small compared to uses facilitating speculation.

4
Linch
1y
I don't have a good sense of this, tbh; I mostly got this from pretty anecdotal evidence rather than from looking at data (and realistically I won't do a deep dive on this data unless some analysis were handed to me on a platter).[1] My current guess is that there are significant uses for stablecoins in practice right now. My guess is that their theoretical value is probably lower than if you were to use the same resources to build more centralized systems like MPesa or Wave, but of course there's less VC interest and also less of an easy path to profitability than from being subsidized by crypto speculation/fraud.

  1. ^ If I search around there's certainly evidence of significant crypto use in some places, but I'm not sure about the reliability of this evidence.

A bunch of things that all seem true to me:

  1. Some number of people in EA community could have done things that were positive in expectation that would have mitigated much of the downside to EA from FTX.
  2. A bunch of people are overreacting to this situation and making it seem much more damning to EA than I think it is. Some of those people are acting in bad faith.
  3. It is very possible that as a community we overreact to this situation and adopt bad norms, institutions, or practices that are negative EV going forward.

Absolutely. We obviously can weather losing funding. EA started small and it can grow back. And people always have enjoyed heaping one form of abuse on it or another. The more fundamental damage will be what we inflict on ourselves.

But I'm still optimistic this will mostly blow over with respect to the EA movement. Mostly, I think that people are being louder than usual across the board, but they seem to be expressing opinions they'd already held. When it stops being as salient, people will probably more or less quiet down and keep pursuing the same types of goals and having the same perspectives they had previously. Hopefully in the context of improved movement governance.

I think the point of most non-profit boards is to ensure that donor funds are used effectively to advance the organization's charitable mission. If that's the case, then having donor representation on the board seems appropriate.

I don't see how this follows.

It is indeed very normal to have one or more donors on the board of a nonprofit. But FTX the for-profit organization did in fact have different interests than the FTX Foundation. For example, it was in the FTX Foundation's interest to not make promises to grantees that it could not honor. It was also... (read more)

OpenPhil has a majority of board members (3/5) who aren't the source of funds (Moskovitz and Tuna, who are the other 2). As I understand it, they also have a few $B under their direct independent legal control[1]. The fact that FTX Foundation didn't secure any assets independently this way is a massive failure (for the world, EA, and FTX creditors[2]).

  1. ^
  2. ^

    Were there significant assets in an independently-controlled FTX Foundation we would be in a much better position now even from the point of view of wanting (or being comp

... (read more)

Another structural question that will need answering at some point: Did anybody outside of FTX consider it okay that all of the directors at the FTX Foundation were senior FTX employees? Why were there no independent (of FTX) directors there?

4
[anonymous]
1y
One response here.

Relatedly, I think a focus on ends-justify-the-means reasoning is potentially misguided because it seems super clear in this case that, even if we put zero intrinsic value on integrity, honesty, not doing fraud, etc., some of the decisions made here were pretty clearly very negative expected-value. We should expect the upsides from acquiring resources by fraud (again, if that is what happened) to be systematically worth much less than reputational and trustworthiness damage our community will receive by virtue of motivating, endorsing, or benefitting from that behavior.

Cullen
1y
175
72
1

My naive moral psychology guess—which may very well be falsified by subsequent revelations, as many of my views have been this week—is that we probably won’t ever find an “ends justify the means” smoking gun (e.g., an internal memo from SBF saying that we need to fraudulently move funds from account A to B so we can give more to EA). More likely, systemic weaknesses in FTX’s compliance and risk-management practices failed to prevent aggressive risk-taking and unethical profit-seeking and self-preserving business decisions that were motivated by some complicated b... (read more)


It’s definitely true that there are more philosophical questions that a lawyerly investigation wouldn’t be well-positioned to answer. But it seems likely that there were plenty of legal and financial risk-management mistakes that EA orgs made in the past year that an independent investigator or other outside risk-management consultant would be well-positioned to opine on.

I agree we have no idea what the terms of the deal are, which is why I don't think we can say what the total effects on SBF's assets are other than by informed guessing.

1
Noep
1y
Not only do we have no idea of the terms of the deal for FTX.com, but it seems hard to predict what it means for the value of FTX US (what does the probability of another bank run look like now?) and Alameda (did they actually use FTX.com's info/cash as a significant generator of alpha?).

Obviously I was too optimistic here :-(

Some of the thoughts in this post and thread seem pretty half-baked and very uncertain; I think the pace of writing should be slower.

  • For example, withdrawals might be at $6B this morning; that could break systems or make movement of money impractical for purely operational, very innocent, mundane reasons. This experience adds a lot of confusion and noise when reported and echoed.
  • The "Satan's Apple" framing seems excessively abstract; looking at this through a regular business-expansion/portfolio-theory lens seems more useful, and this would benefit from more time.

I'm confused why you say

This means SBF has lost control of around ~50% of his resources. It will have damaged the value of FTX US and Alameda as well.

Two things have happened:

  1. The value of FTX appears to have gone down (a lot).
  2. Some part of FTX is potentially being sold to Binance.

(1) causes Sam to lose control of a lot of his resources, because those resources have essentially evaporated with the value of FTX. But conditional on (1) happening, doesn't (2) just mean that whatever value SBF retains after (1) is converted from equity in (the relevant ... (read more)

3
Nathan Young
1y
It's easy for me to say this now, but the reason I said it was because I sensed that that chunk would be valueless, and maybe much of the rest too. ~50% was my median value, between 20% and like 90%. But it felt a little exhausting to say that given I couldn't really justify it. It's cheeky of me to say this without evidence (though I guess maybe we could find me rewording it in the original google doc).
6
Noep
1y
Nathan's post is not entirely wrong, though. If FTX.com sells at a discount, we have no idea who gets paid first. Maybe it takes FTX selling at a 50% loss for SBF to get zero; maybe it takes a 90% discount, maybe 99%? This happens a lot when there are acqui-hires after a start-up goes bankrupt, and employees with shares get 0 because investors have preferred-payback clauses.
-2
Nathan Young
1y
Touché

Yeah, people should probably do it right on Nov. 1 if they want to get the match.

I’m guessing because it hasn’t started yet.

2
ThomasW
1y
Ah makes sense, thanks!

Interesting excerpt from a book I’m reading:

Somewhat incongruously, at the same time that private standardization was both promoting economic globalization and becoming more global, social activists concerned with the impact of globalization on the environment, on workers, and on human rights began to look to ISO's experience with management system standards as a model of how to prevent a globalization-led race to the bottom. For the activists, ISO 9000 provided a governance model for organizations concerned with social and environmental sustainability,

... (read more)

I do think it'd be interesting to have an AGI-pilled economist talk to one of the economists that do GWP forecasting to see if they can find cruxes.

Hi John! You might be interested in my Law-Following AI Sequence, where I've explored very similar ideas: https://forum.effectivealtruism.org/s/3pyRzRQmcJNvHzf6J

I'm glad we've seemed to converge on similar ideas. I would love to chat sometime!

1
johnjnay
2y
That's awesome - thank you for sharing!   Would love to chat as well.
Answer by Cullen, Sep 18, 2022

Rise of the Conservative Legal Movement is very interesting and good.

There's also this interesting discussion from Bentham against the attorney–client privilege: https://h2o.law.harvard.edu/text_blocks/7432.

(I don't endorse Bentham's view).

Thanks! Was not aware of this; definitely relevant :-)

Agreed, and in part why I'm not very sold on this critique.

I was curious about who would be the firm's opponent in this scenario, i.e. the actor trying to legally implement the Windfall Clause.

This is underdetermined by the concept of the WC itself, but is a very important design consideration.

The worst-case scenario for this failure mode is that some very large number of people are plaintiffs in their individual capacities. Coordinating to enforce would be hard for them, but class-action mechanisms (on which I'm not an expert!) could probably help.

A bet... (read more)
