OP gave some reasoning for their views on their recent blog post:
...Another place where I have changed my mind over time is the grant we gave for the purchase of Wytham Abbey, an event space in Oxford.
We initially agreed to help fund that purchase as part of our effort to support the growth of the community working to reduce global catastrophic risks (GCRs). The original idea presented to us was that the space could serve as a hub for workshops, retreats, and conferences, to cut down on the financial and logistical costs of hosting large events at private f
How does AMF collect feedback from the end-recipients of bednets? How does feedback from them inform AMF's programming?
According to the book Bullies and Saints: An Honest Look at the Good and Evil of Christian History, some early Christians sold themselves into slavery so they could donate the proceeds to the poor. Super interesting example of extreme and early ETG.
(I'm listening on audiobook so I don't have the precise page for this claim.)
(To avoid bad-faith misinterpretation: I obviously think that nobody should do the same.)
Longtermist shower thought: what if we had a campaign to install Far-UVC in poultry farms? Seems like it could:
Insofar as one of the main obstacles is humans' concerns about health effects, this would at least only raise those concerns for a small group of workers.
I had a similar thought a (few) year(s) ago and emailed a couple of people to sanity-check the idea - all the experts I asked seemed to think this wouldn't be an effective thing to do (which is why I didn't do any more work on it). I think Alex's points are true (mostly the cost part - I think you could get high enough intensity for it to be effective).
I think 1 unfortunately ends up not being true in the intensive farming case. Lots of things are spread by close enough contact that even intense UVC wouldn't do much (and it would be really expensive).
Narrow point: my understanding is that, per his own claims, the Manifund grant would only fund technical upkeep of the blog, and that none of it is net income to him.
Super excited about the artificial conscience paper. I'd note that a similar approach could be very useful for creating law-following AIs:
...An LFAI system does not need to store all knowledge regarding the set of laws that it is trained to follow. More likely, the practical way to create such a system would be to make the system capable of recognizing when it faces sufficient legal uncertainty,[10] then seeking evaluation from a legal expert system ("Counselor").[11]
The Counselor could be a human lawyer, but in the long-run is probably most robust and efficient
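To make the escalation pattern from that excerpt concrete, here's a minimal sketch of what "recognize sufficient legal uncertainty, then defer to a Counselor" might look like in code. All the names and the threshold below are illustrative assumptions of mine, not anything specified in the sequence or paper:

```python
# Hypothetical sketch of the "escalate to a Counselor" pattern described above.
# LegalUncertaintyEstimator-style names, Counselor, and UNCERTAINTY_THRESHOLD are
# all illustrative assumptions, not APIs from the paper.

from dataclasses import dataclass

UNCERTAINTY_THRESHOLD = 0.3  # assumed tunable cutoff for "sufficient legal uncertainty"


@dataclass
class LegalAssessment:
    permissible: bool
    uncertainty: float  # 0.0 = fully certain, 1.0 = no idea


class Counselor:
    """Stand-in for a legal expert system (or a human lawyer behind an API)."""

    def evaluate(self, proposed_action: str) -> bool:
        # In practice this would query a dedicated legal-expert model or a person.
        raise NotImplementedError


class LFAIAgent:
    def __init__(self, counselor: Counselor):
        self.counselor = counselor

    def assess_locally(self, proposed_action: str) -> LegalAssessment:
        # Placeholder for the agent's own cheap, imperfect legality check.
        return LegalAssessment(permissible=True, uncertainty=0.5)

    def is_action_allowed(self, proposed_action: str) -> bool:
        assessment = self.assess_locally(proposed_action)
        if assessment.uncertainty > UNCERTAINTY_THRESHOLD:
            # Sufficiently uncertain: defer to the Counselor rather than guess.
            return self.counselor.evaluate(proposed_action)
        return assessment.permissible
```

The key design choice this is meant to illustrate is that the agent never resolves high-uncertainty cases on its own; anything above the threshold gets routed to the (human or automated) Counselor.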
...Utilitarianism is much more explicit in its maximisation than most ideologies, plus it (at least superficially) actively undermines the normal safeguards against dangerous maximisation (virtues, the law, and moral rules) by pointing out these can be overridden for the greater good.
Like yes there are extreme environmentalists and that's bad, but normally when someone takes on an ideology like environmentalism, they don't also explicitly & automatically say that the environment is all that matters and that it's in principle permissible to cheat &
I would be very curious for Gregory's take on whether he thinks EAs are too epistemically immodest still!
On the Democratic side, challenging Biden is a way to make yourself Very Unpopular with party elites. Challenging Harris, if she is his chosen successor, would be That But Worse.
This seems very wrong to me. Harris is very unpopular.
It doesn't seem like a big leap to think that confidence in an ideology that says you need to maximise a single value to the exclusion of all else could lead to dangerously optimizing behaviour.
I don't find this a persuasive reason to think that utilitarianism is more likely to lead to this sort of behavior than pretty much any other ideology. I think a huge number of (maybe all?) ideologies imply that maximizing the good as defined by that ideology is the best thing to do, and that considerations outside of that ideology have very little weight. You se...
I don’t fully understand why the netted enclosure helps. Is the idea just that it prevents humans from coming close to the barns?
I feel like I was only speaking out against the framing that critics of EA are entitled to a lengthy reply because of EA being ambitious in its scope of caring. (This framing was explicit at least in the quoted paragraph, not necessarily in her post as a whole or her previous work.)
Ah, okay. That seems more reasonable. Sorry for misunderstanding.
I would also point out that I think the proposition that "social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk" is both:
...Yeah. I have strong feelings that social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk. For example, I'm annoyed when people punish others for honesty in cases where it would have been easy to tell a lie and look better. Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply "Oh, you care for the future of all humans, and even animals? That's suspicious – we're definitely goi
The main problem with lavishness, IMHO, is not optics per se, but rather that it's extremely easy for people to trick themselves into believing that spending money on their own comfort/lifestyle/accommodations is net-good-despite-looking-bad (for productivity reasons or whatever). This generalizes to the community level.
(To be clear, this is not to say that we should never follow such reasoning. It's just a serious pitfall. This is also not original—others have certainly brought this up.)
How clear is it that stablecoins have value other than by enabling speculative transactions on blockchains? My main model of stablecoins, borrowing from Matt Levine, is that if you do a lot of stuff on-chain, it is also useful to have an on-chain way to transact in fiat.
I could definitely think of many situations in which stablecoins would be useful, but on priors I would guess they’re fairly small compared to uses facilitating speculation.
A bunch of things that all seem true to me:
Absolutely. We obviously can weather losing funding. EA started small and it can grow back. And people always have enjoyed heaping one form of abuse on it or another. The more fundamental damage will be what we inflict on ourselves.
But I'm still optimistic this will mostly blow over with respect to the EA movement. Mostly, I think that people are being louder than usual across the board, but they seem to be expressing opinions they'd already held. When it stops being as salient, people will probably more or less quiet down and keep pursuing the same types of goals and having the same perspectives they had previously. Hopefully in the context of improved movement governance.
I think the point of most non-profit boards is to ensure that donor funds are used effectively to advance the organization's charitable mission. If that's the case, then having donor representation on the board seems appropriate.
I don't see how this follows.
It is indeed very normal to have one or more donors on the board of a nonprofit. But FTX the for-profit organization did in fact have different interests than the FTX Foundation. For example, it was in the FTX Foundation's interest to not make promises to grantees that it could not honor. It was also...
OpenPhil has a majority of board members (3/5) who aren't the source of funds (Moskovitz and Tuna, who are the other 2). As I understand it, they also have a few $B under their direct independent legal control[1]. The fact that FTX Foundation didn't secure any assets independently this way is a massive failure (for the world, EA, and FTX creditors[2]).
...Another structural question that will need answering at some point: Did anybody outside of FTX consider it okay that all of the directors at the FTX Foundation were senior FTX employees? Why were there no independent (of FTX) directors there?
Relatedly, I think a focus on ends-justify-the-means reasoning is potentially misguided because it seems super clear in this case that, even if we put zero intrinsic value on integrity, honesty, not doing fraud, etc., some of the decisions made here were pretty clearly very negative expected-value. We should expect the upsides from acquiring resources by fraud (again, if that is what happened) to be systematically worth much less than the reputational and trustworthiness damage our community will receive by virtue of motivating, endorsing, or benefitting from that behavior.
My naive moral psychology guess—which may very well be falsified by subsequent revelations, as many of my views have this week—is that we probably won’t ever find an “ends justify the means” smoking gun (eg, an internal memo from SBF saying that we need to fraudulently move funds from account A to B so we can give more to EA). More likely, systemic weaknesses in FTX’s compliance and risk management practices failed to prevent aggressive risk-taking and unethical profit-seeking and self-preserving business decisions that were motivated by some complicated b...
It’s definitely true that there are more philosophical questions that a lawyerly investigation wouldn’t be well-positioned to answer. But it seems likely that there were plenty of legal and financial risk-management mistakes that EA orgs made in the past year that an independent investigator or other outside risk management consultant would be well-positioned to opine on.
I agree we have no idea what the terms of the deal are, which is why I don't think we can say what the total effects on SBF's assets are other than by informed guessing.
Some of the thoughts in this post and thread seem pretty half-baked and very uncertain; I think the pace of writing should be lower.
I'm confused why you say
This means SBF has lost control of around ~50% of his resources. It will have damaged the value of FTX US and Alameda as well.
Two things have happened:
(1) causes Sam to lose control of a lot of his resources, because those resources have essentially evaporated with the value of FTX. But conditional on (1) happening, doesn't (2) just mean that whatever value SBF retains after (1) is converted from equity in (the relevant ...
Interesting excerpt from a book I’m reading:
...Somewhat incongruously, at the same time that private standardization was both promoting economic globalization and becoming more global, social activists concerned with the impact of globalization on the environment, on workers, and on human rights began to look to ISO's experience with management system standards as a model of how to prevent a globalization-led race to the bottom. For the activists, ISO 9000 provided a governance model for organizations concerned with social and environmental sustainability,
I do think it'd be interesting to have an AGI-pilled economist talk to one of the economists that do GWP forecasting to see if they can find cruxes.
Hi John! You might be interested in my Law-Following AI Sequence, where I've explored very similar ideas: https://forum.effectivealtruism.org/s/3pyRzRQmcJNvHzf6J
I'm glad we've seemed to converge on similar ideas. I would love to chat sometime!
There's also this interesting discussion from Bentham against the attorney–client privilege: https://h2o.law.harvard.edu/text_blocks/7432.
(I don't endorse Bentham's view.)
Agreed, and that's in part why I'm not very sold on this critique.
I was curious about who would be the firm's opponent in this scenario, i.e. the actor trying to legally implement the Windfall Clause.
This is underdetermined by the concept of the WC itself, but is a very important design consideration.
The worst-case scenario for this failure mode is that some very large number of people are plaintiffs in their individual capacity. Coordinating to enforce would be hard for them, but class-action mechanisms (on which I'm not an expert!) could probably help.
A bet...
I made a perma.cc copy of the Final Report here: https://perma.cc/3KP9-ZSFB