All of Cullen 🔸's Comments + Replies

Hi Chelsea. You should probably hire a trusts & estates lawyer to help you better understand your rights with respect to the trust.

1
Chelsea
Thanks Cullen, I hadn't thought of that but it makes a lot of sense.

Definitely agreed on that point!

I think typical financial advice is that emergency funds should be kept in very low-risk assets, like cash, money market funds, or short-term bonds. This makes sense because the probability that you need to draw on emergency funds is negatively correlated with equity returns: market downturns make it more likely that you will lose your job, and some disasters can cause both a market downturn and personal losses. You really don't want your emergency fund to lose value at the same time that you're most likely to need it.

3
Guive
Yeah, my understanding is there is debate about whether the loss in EV from holding an emergency fund in low-yield, low-risk assets is offset by the benefits of reduced risk. The answer will depend on personal risk tolerance, current net worth, expected career volatility, etc. The main point of my comment was just that a lot of people use default low-yield savings accounts even though there's no reason to do that at all.

One dynamic worth considering here is that a person with near-typical longtermist views about the future also likely believes that the future holds a large number of salient risks, including sub-extinction AI catastrophes, pandemics, war with China, authoritarian takeover, a "white collar bloodbath," etc.

It can be very psychologically hard to spend all day thinking about these risks without also internalizing that these risks may very well affect oneself and one's family, which in turn implies that typical financial advice and financial lifecycle plann... (read more)

3
Guive
That's a fair point, but a lot of the scenarios you describe would mean rapid economic growth and equities going up like crazy. The expectation of my net worth in 40 years on my actual views is way, way higher than it would be if I thought AI was totally fake and the world would look basically the same in 2065. That doesn't mean you shouldn't save up though (higher yields are actually a reason to save, not a reason to refrain from saving).

Ah sorry, I read your post too quickly :-)

There used to be a website to try to coordinate this; not sure what ever happened to it.

[This comment is no longer endorsed by its author]
2
Alfredo Parra 🔸
I assume it's the one I linked in my original post? Catherine announced it was discontinued. :/

I also want to point out that having better outside income-maximizing options makes you more financially secure than other people in your income bracket, all else equal, which pro tanto would give you more reason to donate than them.

4
Neel Nanda
My point is that "other people in the income bracket AFTER taking a lower-paying job" is the wrong reference class. Let's say someone is earning $10mn/year in finance. I totally think they should donate some large fraction of their income. But I'm pretty reluctant to argue that they should donate more than 99% of it. So it seems completely fine to have a post-donation income above $100K, likely far above.

If this person quits to take a job in AI Safety that pays $100K/year, because they think this is more impactful than their donations, I think it would be unreasonable to argue that they need to donate some of their reduced salary, because then their "maximum acceptable post-donation salary" has gone down, even though they're (hopefully) having more impact than if they donated everything above $100K.

I'm picking fairly extreme numbers to illustrate the point, but the key point is that choosing to do direct work should not reduce your "maximum acceptable salary post donations", and that at least according to my values, that max salary post donation is often above what they get paid in their new direct role.

I'm not going to defend my whole view here, but I want to give a thought experiment as to why I don't think that "shadow donations"—the delta between what you could earn if you were income-maximizing and what you're actually earning in your direct work job—are a great measure for the purposes of practical philosophy (though I agree they're both a relevant consideration and a genuine sacrifice).

Imagine two twins, Anna and Belinda. Both have just graduated with identical grades, skills, degrees, etc. Anna goes directly from college to work on AI safety at Safet... (read more)

4
D0TheMath
I will note that my comment made no reference to who is “more altruistic”. I don’t know what that term means personally, and I’d rather not get into a semantics argument. If you give the definition you have in mind, then we can argue over whether it’s smart to advocate that someone ought to be more altruistic in various situations, and whether it gets at intuitive notions of credit assignment.

I will also note that, given the situation, it’s not clear to me Anna’s proper counterfactual here isn’t making $1M and getting nice marketable skills, since she and Belinda are twins, and so have the same work capacity & aptitudes. I think this is the crux personally.

This seems very healthy to me, in particular because it creates strong boundaries between the relevant person and EA. Note that burnout & overwork is not uncommon in EA circles! EAs are not healthy, and (imo) already give too much of themselves!

Why do you think it’s unhealthy? This seems to imply negative effects on the person reasoning in the relevant way, which seems pretty unlikely to me.
2
Neel Nanda
Suppose they're triplets, and Charlotte, also initially identical, earns $1M/year just like Belinda, but can't/doesn't want to switch to safety. How much of Charlotte's income should she donate in your worldview? What is the best attitude for the EA community?
2
Eevee🔹
I was thinking of a different organization, but thanks!

It's not clear to me whether you're talking about (a) people who do a voluntary salary sacrifice while working at an EA org, or (b) people who could have earned much more in industry but moved to a nonprofit and so now earn much less than their hypothetical maximum earning potential.

In case (a), yes, their salary sacrifice should count towards their real donations.

But I think a practical moral philosophy wherein donation expectations are based on your actual material resources (and constraints), not your theoretical maximum earning potential, seems more justif... (read more)

2
D0TheMath
I think the right stance here is a question of “should EA be praising such people, or getting annoyed they’re not giving up more, if it wants to keep a sufficient filter for who it calls true believers”, and the answer there is obviously that both groups are great & true believers, and it seems dumb to get annoyed at either. The 10% number was notably chosen for these practical reasons (there is nothing magic about that number), and to back-justify that decision with bad moral philosophy about “discharge of moral duty” is absurd.

I disagree and think that (b) is actually totally sufficient justification. I'm taking as an assumption that we're using an ethical theory that says people do not have an unbounded ethical obligation to give everything up to subsistence, and that it is fine to set some kind of boundary and fraction of your total budget of resources that you spend on altruistic purposes. Many people doing well-paying altruistic careers (e.g. technical AI safety careers) could earn dramatically more money, e.g. at least twice as much, if they were optimising for the highest paying... (read more)

2
Jason
It's complicated, I think. Based on your distinguishing (a) and (b), I am reading "salary sacrifice" as voluntarily taking less salary than was offered for the position you encumber (as discussed in, e.g., this post). While I agree that should count, I'm not sure (b) is not relevant.

The fundamental question to me is about the appropriate distribution of the fruits of one's labors ("fruits") between altruism and non-altruism. (Fruits is an imperfect metaphor, because I mean to include (e.g.) passive income from inherited wealth, but I'll stick with it.) We generally seem to accept that the more fruit one produces, the more (in absolute terms) it is okay to keep for oneself. Stated differently -- at least for those who are not super-wealthy -- we seem to accept that the marginal altruism expectation for additional fruits one produces is less than 100%. I'll call this the "non-100 principle." I'm not specifically defending that principle in this comment, but it seems to be assumed in EA discourse.

If we accept this principle, then consider someone who was working full-time in a "normal" job and earned a salary of 200 apples per year. They decide to go down to half-time (100-apple salary) and spend the other half of their working hours producing 100 charitable pears for which they receive no financial benefit.[1] The non-100 principle suggests that it's appropriate for this person to keep more of their apples than a person who works full-time to produce 100 apples (and zero pears). Their total production is twice as high, so they aren't similarly situated to the full-time worker who produces the same number of apples.

The decision to take a significantly less well-paid job seems analogous to splitting one's time between remunerative and non-remunerative work. One gives up the opportunity to earn more salary in exchange for greater benefits that flow to others by non-donation means. I am not putting too much weight on this thought experiment, but it does make me think that
3
calebp
I feel quite confused about the case where someone earns much less than their earning potential in another altruistically motivated but less impactful career doing work that uses a similar skillset (e.g. joining a think tank after working on policy at an AI company). This seems somewhere between (a) and (b).

Is your claim that somehow FTX investing in Anthropic has caused Anthropic to be FTX-like in the relevant ways? That seems implausible.

8
Greg_Colbourn ⏸️
No, just saying that without their massive injection of cash, Anthropic might not be where they are today. I think the counterfactual where there wasn't any "EA" investment into Anthropic would be significantly slower growth of the company (and, arguably, one fewer frontier AI company today).

Thanks for this very thoughtful reply!

I have a lot to say about this, much of which boils down to two points:

  1. I don't think Jeremy is a good example of unnecessary polarization.
  2. I think "avoid unnecessary polarization" is a bad heuristic for policy research (which, related to my first point, is what Jeremy was responding to in Dislightenment), at least if it means anything other than practicing the traditional academic virtues of acknowledging limitations, noting contrary opinion, being polite, being willing to update, inviting disagreement, etc.

The r... (read more)

Ah, interesting, not exactly the case that I thought you were making.

I more or less agree with the claim that "Elon changing the twitter censorship policies was a big driver of a chunk of Silicon Valley getting behind Trump," but probably assign it lower explanatory power than you do (especially compared to nearby explanatory factors like Elon crushing internal resistance and employee power at Twitter). But I disagree with the claim that anyone who bought Twitter could have done that, because I think that Elon's preexisting sources of power and influence ... (read more)

I will say that not appreciating arguments from open-source advocates, who are very concerned about the concentration of power from powerful AI, has led to a completely unnecessary polarisation against the AI Safety community from it.

I think if you read the FAIR paper to which Jeremy is responding (of which I am a lead author), it's very hard to defend the proposition that we did not acknowledge and appreciate his arguments. There is an acknowledgment of each of the major points he raises on page 31 of FAIR. If you then compare the tone of the FAIR pap... (read more)

4
JWS 🔸
Hey Cullen, thanks for responding! So I think there are object-level and meta-level thoughts here, and I was just using Jeremy as a stand-in for the polarisation of Open Source vs AI Safety more generally.

Object Level - I don't want to spend too long here as it's not the direct focus of Richard's OP. Some points:
* On 'elite panic' and 'counter-enlightenment', he's not directly comparing FAIR to it I think. He's saying that previous attempts to avoid democratisation of power in the Enlightenment tradition have had these flaws. I do agree that it is escalatory though.
* I think, from Jeremy's PoV, that centralization of power is the actual ballgame and what Frontier AI Regulation should be about. So one mention on page 31 probably isn't good enough for him. That's a fine reaction to me, just as it's fine for you and Marcus to disagree on the relative costs/benefits and write the FAIR paper the way you did.
* On the actual points though, I actually went back and skim-listened to the webinar on the paper in July 2023, which Jeremy (and you!) participated in, and man I am so much more receptive and sympathetic to his position now than I was back then, and I don't really find Marcus and you to be that convincing in rebuttal, but as I say I only did a quick skim-listen so I hold that opinion very lightly.

Meta Level -
* On the 'escalation' in the blog post, maybe his mind has hardened over the year? There's probably a difference between ~July23-Jeremy and ~Nov23-Jeremy, which he may view as an escalation from the AI Safety side to double down on these kinds of legislative proposals? While it's before SB1047, I see Wiener had introduced an earlier intent bill in September 2023.
* I agree that "people are mad at us, we're doing something wrong" isn't a guaranteed logic proof, but as you say it's a good prompt to think "should I have done something different?", and (not saying you're doing this) I think the absolute disaster zone that was the SB1047 debate and

(Elon's takeover of twitter was probably the second—it's crazy that you can get that much power for $44 billion.)

I think this is pretty significantly understating the true cost. Or put differently, I don't think it's good to model this as an easily replicable type of transaction.

I don't think that if, say, some more boring multibillionaire did the same thing, they could achieve anywhere close to the same effect. It seems like the Twitter deal mainly worked for him, as a political figure, because it leveraged existing idiosyncratic strengths that he had,... (read more)

6
richard_ngo
My story is: Elon changing the twitter censorship policies was a big driver of a chunk of Silicon Valley getting behind Trump—separate from Elon himself promoting Trump, and separate from Elon becoming a part of the Trump team. And I think anyone who bought Twitter could have done that. If anything being Elon probably made it harder, because he then had to face advertiser boycotts. Agree/disagree?

A warm welcome to the forum!

I don't claim to speak authoritatively, or to answer all of your questions, but perhaps this will help continue your exploration.

There's an "old" (by EA standards) saying in EA, that EA is a Question, Not an Ideology. Most of what connects the people on this forum is not necessarily that they all work in the same cause area, or share the same underlying philosophy, or have the same priorities. Rather, what connects us is rigorous inquiry into the question of how we can do the most good for others with our spare resources. Becaus... (read more)

5
Dr Kassim
Thanks, Cullen. I really appreciate this perspective. The idea that EA is a question rather than an ideology really resonates, especially when thinking about the diversity of approaches within the movement. It’s reassuring to know that many of these debates about longtermism, AI safety, and earning-to-give aren’t settled, but rather ongoing discussions that reflect different ways of reasoning about impact. Coming from a background in fish welfare and food systems in Uganda, I see similar tensions: how do we balance immediate suffering with long-term change? How do we integrate global priorities with local realities? And how do we ensure that interventions remain relevant in the face of political and economic uncertainty? It’s exciting to engage with a community that embraces these complexities, and I look forward to thinking through these questions alongside others who share the goal of doing the most good. THIS IS SURELY THE MOST GOOD.

I upvoted and didn't disagree-vote, because I generally agree that using AI to nudge online discourse in more productive directions seems good. But if I had to guess where disagree votes come from, it might be a combination of:

  1. It seems like we probably want politeness-satisficing rather than politeness-maximizing. (This could be consistent with some versions of the mechanism you describe, or a very slightly tweaked version).
  2. There's a fine line between politeness-moderating and moderating the substance of ideas that make people uncomfortable. Historicall
... (read more)

Both Sam and Dario saying that they now believe they know how to build AGI seems like an underrated development to me. To my knowledge, they only started saying this recently. I suspect they are overconfident, but still seems like a more significant indicator than many people seem to be tracking.

2
[anonymous]
In the last 48 hours Dario said 3-5 years until AGI!
3
Ebenezer Dukakis
Maybe once Sam says it, Dario kinda has to say it to stay competitive for funding?

I also have very wide error bars on my $1B estimate; I have no idea how much equity early employees would normally retain in a startup like Anthropic. That number is also probably dominated by the particular compensation arrangements and donation plans of ~5–10 key people and so very sensitive to assumptions about them individually.

4
JoshYou
Forbes estimates that the seven co-founders are now worth $1-2 billion each.
3
Marcus Abramovitch 🔸
This, along with OAI, is likely worth investigating.

Indeed, though EAs were less well-represented at the senior managerial and executive levels of OpenAI, especially after the Anthropic departures.

One factor this post might be failing to account for: the wealth of Anthropic founders and early-stage employees, many of whom are EAs, EA-adjacent, or at minimum very interested in supporting existential AI safety. I don't know how much equity they have collectively, how liquid it is, how much they plan to donate, etc. But if I had to guess, there's probably at least $1B earmarked for EA projects there, at least in NPV terms?

(In general, this topic seems under-discussed.)

5
Cullen 🔸
I also have very wide error bars on my $1B estimate; I have no idea how much equity early employees would normally retain in a startup like Anthropic. That number is also probably dominated by the particular compensation arrangements and donation plans of ~5–10 key people and so very sensitive to assumptions about them individually.
2
Marcus Abramovitch 🔸
Good point. OpenAI as well.

I’m obviously not a bankruptcy lawyer, but I was surprised to read this because I assumed that the statute of limitations depended on the jurisdiction of the bankruptcy estate, not the clawback claimee. Am I wrong? Or are there parallel proceedings in other jurisdictions?

Thank you for all your contributions, Luke! GWWC made tremendous progress under your leadership.

On the allocative efficiency front, the Harris campaign has pledged to impose nationwide rent controls, an idea first floated by President Biden. Under the proposal, “corporate landlords” with 50+ units would have to “either cap rent increases on existing units to no more than 5% or lose valuable federal tax breaks,” referring to depreciation write-offs. This would be a disastrously bad policy for the supply side of housing, and an example of the sort of destructive economic populism normally ascribed to Trump.

Harris’s terrible housing policy can be disco

... (read more)
1
Sarah Cheng 🔸
Thanks! I've updated the title.

I agree he shouldn’t have his past donations held against him, and that his past generosity should be praised.

At the same time, he’s not simply “stopping giving.” His prior plan was that his estate would go to BMGF. Let’s assume that that was reflected in his estate planning documents. He would have had to make an affirmative change to effect this new plan. So with this specific action he is not “stopping giving,” he is actively altering his plan to be much worse.

2
Larks
I don't buy this is a morally or socially significant distinction. Do we really believe that a parallel world Warren, who made a public pledge to give his money away, and fully intended to, but never got around to actually writing a will before he changed his mind, would be significantly less blameworthy, or would escape opprobrium? Part of my intuition is that the temporal ordering doesn't matter - if anything it's better to give sooner - so we should not treat more harshly someone who donated and then stopped than someone who consumed frivolously and then saw the light later in life.

I think many people are tricking themselves into being more intellectually charitable to Hanania than warranted.

I know relatively little about Hanania other than stuff that has been brought to my attention through EA drama and some basic “know thy enemy” reading I did on my own initiative. I feel pretty comfortable in my current judgment that his statements on race are not entitled to charitable readings in cases of ambiguity.

Hanania by his own admission was deeply involved in some of the most vilely racist corners of the internet. He knows what sorts of mess... (read more)

Yeah fair, should have considered that more duh

2
Linch
I'm glad I was helpful! :P  I'd find anecdotes about cutting corners in bioweapons or nuclear (both weapons development and power) more convincing, partially because it's more directly analogous and partially because I don't think Khrushchev is completely heartless.

Example: They crammed three cosmonauts into a capsule initially designed for one person. But due to the size constraints, the cosmonauts couldn't wear proper spacesuits; they had to wear leisure suits!

Pretty wild discussion in this podcast about how aggressively the USSR cut corners on safety in their space program in order to stay ahead of the US. In the author's telling of the history, this was in large part because Khrushchev wanted to rack up as many "firsts" (e.g., first satellite, first woman in space) as possible. This seems like it was most proximately for prestige and propaganda rather than any immediate strategic or technological benefit (though of course the space program did eventually produce bigger benefits).

Evidence of the following ... (read more)

8
Linch
Though the costs are also low, from the perspective of Khrushchev. (A few cosmonauts' lives is presumably not that important to him)
2
Cullen 🔸
Example: They crammed three cosmonauts into a capsule initially designed for one person. But due to the size constraints, the cosmonauts couldn't wear proper spacesuits; they had to wear leisure suits!

It could be the case that the board would reliably fail in all nearby fact patterns but that market participants simply did not know this, because there were important and durable but unknown facts about e.g. the strength of the MSFT relationship or players' BATNAs.

I agree this is an alternative explanation. But my personal view is also that the common wisdom that it was destined to fail ab initio is incorrect. I don't have much more knowledge than other people do on this point, though.

I think it would be fair to describe some Presidents as being effe

... (read more)

I agree this would be appealing to intellectually consistent conservatives, but this seems like a bad meme to be spreading/strengthening for animal welfare. Maybe local activists should feel free to deploy it if they think they can flip some conservative's position, but they will be setting themselves up for charges of hypocrisy if they later want to e.g. ban eggs from caged chickens.

How are you defining "powerless"? See my previous comment: I think the common meaning of "powerless" implies not just significant constraints on power but rather the complete absence thereof.

I would say that the LTBT is powerless iff it can be trivially prevented from accomplishing its primary function—overriding the financial interests of the for-profit Anthropic investors—by those investors, such as with a simple majority (which is the normal standard of corporate control). I think this is very unlikely to be true, p<5%.

I definitely would not say that the OpenAI Board was powerless to remove Sam in general, for the exact reason you say: they had the formal power to do so, but it was politically constrained. That formal power is real and, unless it can be trivially overruled in any instance in which it is exercised for the purpose for which it exists, sufficient to not be "powerless."

It turns out that they were maybe powerless to remove him in that instance and in that way, but I think there are many nearby fact patterns on which the Sam firing could have worked. This is e... (read more)

6
Larks
This seems confused to me, because the market is reflecting epistemic uncertainty, not counterfactual resilience. It could be the case that the board would reliably fail in all nearby fact patterns but that market participants simply did not know this, because there were important and durable but unknown facts about e.g. the strength of the MSFT relationship or players' BATNAs. I think it would be fair to describe some Presidents as being effectively powerless with regard to their veto, yes, if the other party controls a super-majority of the legislature and has good internal discipline. In any case I think the impact and action-relevance of this post would not be very much changed if the title was instead a more wordy "Maybe Anthropic's Long-Term Benefit Trust is as powerless as OpenAI's was".

I think "powerless" is a huge overstatement of the claims you make in this piece (many of which I agree with). Having powers that are legally and politically constrained is not the same thing as the nonexistence of those powers.

I agree though that additional information about the Trust and its relationship to Anthropic would be very valuable.

3
Larks
Would you say the OpenAI board was powerless to remove Altman? They had some legal powers that were legally and politically constrained, and in practice I think it's fair to describe them as effectively powerless.
9
Zach Stein-Perlman
I claim that public information is very consistent with the investors holding an axe over the Trust; maybe the Trust will cause the Board to be slightly better, or the investors will abrogate the Trust, or the Trustees will loudly resign at some point; regardless, the Trust is very subordinate to the investors and won't be able to do much. And if so, I think it's reasonable to describe the Trust as "maybe powerless."
7
Habryka [Deactivated]
I think people should definitely consider and assign non-trivial probability to the LTBT being powerless (probably >10%), which feels like the primary point of the post. Do you disagree with that assessment of probabilities? (If so, I would probably be open to bets.)

I am not under any non-disparagement obligations to OpenAI.

It is important to me that people know this, so that they can trust any future policy analysis or opinions I offer.

I have no further comments at this time.

I'm sorry for not getting around to responding to this, and may not be able to for some time. But I wanted to quickly let you know that I appreciated both this comment and your post, and both updated me significantly toward your position and away from my Reason 4.

4
Vasco Grilo🔸
Thanks for the update, Cullen! Relatedly, you may want to check my post "Nuclear war tail risk has been exaggerated?".

Do you have specific examples of proposals you think have been too far outside the window?

3
freedomandutility
I think Yudkowsky's public discussion of nuking data centres has "poisoned the well" and had backlash effects.

I realize that the idea of cloud labs is not new. I just think that this particular quote is so obviously scary that it could be rhetorically useful.

Quote from VC Josh Wolfe:

Biology. We will see an AWS moment where instead of you having to be a biotech firm that opens your own wet lab or moves into Alexandria Real Estate, which, you know, specializes in hosting biotech companies in all these different regions proximate to academic research centers. You will be able to just take your experiment and upload it to the cloud where there are cloud-based robotic labs. We funded some of these. There's one company called Stratios.

There's a ton that are gonna come on wave, and this is exciting because

... (read more)
3
Cullen 🔸
I realize that the idea of cloud labs is not new. I just think that this particular quote is so obviously scary that it could be rhetorically useful.

OP gave some reasoning for their views on their recent blog post:

Another place where I have changed my mind over time is the grant we gave for the purchase of Wytham Abbey, an event space in Oxford.

We initially agreed to help fund that purchase as part of our effort to support the growth of the community working to reduce global catastrophic risks (GCRs). The original idea presented to us was that the space could serve as a hub for workshops, retreats, and conferences, to cut down on the financial and logistical costs of hosting large events at private f

... (read more)

How does AMF collect feedback from the end-recipients of bednets? How does feedback from them inform AMF's programming?

Do you have any citations for this claim?

3
Elizabeth
Implicit and explicit, from https://askamanager.com/ and https://nonprofitaf.com/ (which was much epistemically stronger in its early years)

According to the book Bullies and Saints: An Honest Look at the Good and Evil of Christian History, some early Christians sold themselves into slavery so they could donate the proceeds to the poor. Super interesting example of extreme and early ETG.

(I'm listening on audiobook so I don't have the precise page for this claim.)

(To avoid bad-faith misinterpretation: I obviously think that nobody should do the same.)

Longtermist shower thought: what if we had a campaign to install Far-UVC in poultry farms? Seems like it could:

  1. Reduce a bunch of diseases in the birds, which is good for:
     a. the birds’ welfare;
     b. the workers’ welfare;
     c. therefore maybe the farmers’ bottom line?;
     d. preventing/suppressing human pandemics (eg avian flu)
  2. Would hopefully drive down the cost curve of Far-UVC
  3. May also generate safety data in chickens, which could be helpful for derisking it for humans

Insofar as one of the main obstacles is humans' concerns about health effects, this would at least raise those concerns only for a small group of workers.

I had a similar thought a (few) year(s) ago and emailed a couple of people to sanity check the idea - all the experts I asked seemed to think this wouldn't be an effective thing to do (which is why I didn't do any more work on it). I think Alex's points are true (mostly the cost part - I think you could get high enough intensity for it to be effective).

3
Alex D 🔸
Good shower thought! A few people have come to this idea independently for swine CAFOs. There are a fair number of important "production-limiting diseases" in swine that are primarily spread via respiratory transmission, so this seems to me like a plausible win-win-win (as you've described). This is all very "shower thought" level on my side as well, and I'd be keen for someone to think this through in more depth. Very happy to talk it through with anyone considering a more thorough investigation! (Note my understanding is influenza is primarily a gastrointestinal illness in poultry, so I don't think this intervention is as promising in that context.)

I think point 1 unfortunately ends up not being true in the intensive farming case. Lots of things are spread by close enough contact that even intense UVC wouldn't do much (and it would be really expensive).

Narrow point: my understanding is that, per his own claims, the Manifund grant would only fund technical upkeep of the blog, and that none of it is net income to him.

1
zchuang
Sorry for the dead response. I think I took the secondary claim he made, that extra money would go towards a podcast, as the warrant for my latter claim. Again, I don't feel any which way about this other than that we should fund critics and not let external factors that are just mild disdain from forum posters be determinative of whether or not we fund him.

How probable does he think it is that some UAP observed on Earth are aliens? :-)
