Looking to advance businesses in which charities hold the vast-majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.
I think the problem is fundamentally the lack of care and attention to the content being created, not whether or not AI is used. If people's incentives reward producing polished, thoughtless drivel on LinkedIn, and they can do it in 10 seconds, they will.
This is very different from an iterative process in which the human is carefully examining the output and refining it to optimize the exploration and explanation of an idea.
I have experience writing things with and without AI. At least for me, it can be a very difficult process to convey things as clearly and effectively as I can. Perhaps I am being unreasonable in putting that much time into the process, and perhaps other people are just much better at writing clearly and effectively without AI. But I can say that I would not produce a lot of the content I produce if AI did not shorten the process significantly.
I disagree pretty strongly with this.
Although there are tradeoffs to AI writing, chiefly that it can produce content that appears polished and well-considered when it is not, I think AI's enabling the proliferation of good thoughts and ideas that would otherwise never happen far outweighs this.
Going back and forth with AI, reviewing, and drafting can turn a writing process that might take several days to a week or more into an hour or two, or less. This enables me, and I'm sure others, to share content and ideas that we otherwise could not.
Removing the barriers to people sharing their thoughts quickly and effectively is probably how we get more new and impactful ideas out there. I've been pretty sad at the sort of witch-huntery I've been seeing about AI generated content.
This is a worthwhile idea and I appreciate you putting it out there. Team formation and skills matching are real bottlenecks. That said, for ideas that fall outside established EA cause areas or existing frameworks, the bigger bottleneck is often upstream of team formation: getting even modest funding to explore feasibility in a rigorous way. Volunteer energy and cross-functional collaboration are valuable, but they tend to dissipate without some resource runway. Your model might be even stronger if it included a pathway for connecting promising early-stage ideas to funders willing to back basic exploration, not just to collaborators.
I don't know that it is entirely manipulative or insincere, even if the founders of Farmkind are themselves vegan and support veganism. I think they are trying to put forward and highlight a perspective that is also consistent with funding effective animal charities:
"I love consuming animal products and I am not giving that up. But I also think it's fucked up and wrong how animals are treated in the factory farming system."
And then they would initially use the interesting contrast between that perspective and the vegan community to generate attention, while then emphasizing the commonality: that animals shouldn't be tortured, and that we can all do something to help make that stop.
I think that Farmkind is right that embracing people who have that perspective and validating that perspective may be part of growing the big tent, through not just funding but through engagement with the political process as well.
It seems like there were some execution issues here, but I hope that the appetite for creative and new ways to try to engage with the omnivorous supermajority continues growing.
Yeah, there's the possibility of a double standard: essentially, the PFG business is reputationally penalized for competitive choices in ways its normal competitors are not.
It seems the short-term solution to this is selecting contexts that aren't fraught with ethical issues.
And if you succeed in the short term, the long-term solution would be a messaging campaign that tries to get at this irrational double standard around unpopular competitive business choices.
Nick, I think you're imagining a different model than what I'm proposing. You're picturing a founder who needs to be driven by altruism instead of greed. That's not the idea.
The model is: a foundation buys an already-successful business from its existing owners and keeps the professional management in place. The managers keep getting paid salaries and bonuses. They keep running the business exactly as before. The only thing that changes is where the profits go after they're generated. This isn't about finding saintly founders. It's about acquisition. Private equity does this constantly. They buy businesses, keep management, extract profits. We're proposing the same thing, just with a charitable foundation as the equity holder instead of a PE fund.
You're right that greed drives startup founders. But startups are a tiny fraction of the economy. Most market share consists of mature companies run by professional managers who are already separated from ownership. They don't know or care whether their shares are held by Vanguard, Blackstone, or a foundation. They come to work, hit their targets, collect their bonus. That's the context where this operates.
This is precisely why this model is scalable. It doesn't require heroes. It just requires a foundation to buy out an existing business and keep the operations the same. In most businesses, management does not have much equity so the PFG business can offer the same compensation packages that a normal business would.
Kyle, appreciate the engagement. I think there's a core misread I should clear up: COA doesn't require anyone to pay more. That's the whole point. The thesis isn't "people will pay a premium for charity-owned." It's "at price parity, stakeholders prefer charity-owned, and that preference shows up in conversion, retention, and terms." You don't need customers to pay a charity tax. You need them to choose you over an equivalent competitor. The stated and revealed preference research suggests they will.

So your concern about commodity and B2B customers actually supports my thesis. They won't pay more, and they don't have to. In fact, commodities might be the best fit for PFG if the business has the capital required, because charity ownership creates a differentiator where there is otherwise none. At equal price and quality, preference tips the balance. Even small advantages in win rates compound on thin margins: a business operating at a 10% margin that improves by 5 percentage points doesn't improve profit by 5%; profit increases by 50%.
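To make the thin-margin arithmetic concrete, here is a quick sketch. The revenue figure is an arbitrary illustration; the 10% baseline margin and 5-point improvement are the numbers from the comment above:

```python
# Illustrative numbers only: a hypothetical business with $10M revenue.
revenue = 10_000_000
baseline_margin = 0.10   # 10% operating margin
improved_margin = 0.15   # +5 percentage points from the preference advantage

baseline_profit = revenue * baseline_margin   # $1.0M
improved_profit = revenue * improved_margin   # $1.5M

relative_gain = (improved_profit - baseline_profit) / baseline_profit
print(f"Profit rises from ${baseline_profit:,.0f} to ${improved_profit:,.0f}")
print(f"Relative profit increase: {relative_gain:.0%}")  # 50%, not 5%
```

The point generalizes: the thinner the baseline margin, the larger the relative profit impact of any fixed percentage-point improvement.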
On the acquisition mechanics: yes, you're buying profitable businesses at normal multiples. The thesis is that charitable ownership improves margins post-acquisition, not that you're getting a discount upfront. Debt service comes first; charitable distributions come from what remains. If COA improves margins even modestly, the spread over borrowing costs funds both repayment and distributions. Same as any leveraged acquisition, just with a different equity holder. And foundation-owned businesses actually show lower default rates in the data, so lending terms should be competitive or better.

The "entire economy" scope follows from the mechanism. The preference operates on profit destination, not product category. And because the preference advantage doesn't come with a clear operating disadvantage, we'd have to look for where a disadvantage might emerge. One candidate is startups, where equity incentives for the key early players might outweigh such an advantage. But in most of the economy, ownership and management are separate. In the lower-middle market, where experimental acquisitions might feasibly take place, the kinds of acquisitions that keep operations in place but change ownership (continuity acquisitions) happen all the time.
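A minimal first-year cash-flow sketch of the leveraged mechanics described above. Every input (purchase multiple, debt ratio, borrowing rate, margin uplift, amortization schedule) is an assumed illustration, not a projection from the research:

```python
# All inputs are hypothetical illustrations.
revenue = 20_000_000
baseline_ebitda = 2_000_000          # 10% margin at acquisition
multiple = 5                         # assumed purchase multiple
price = baseline_ebitda * multiple   # $10M purchase price

debt_ratio = 0.6                     # 60% debt-financed
rate = 0.08                          # assumed borrowing cost
debt = price * debt_ratio

margin_uplift = 0.02                 # assumed modest post-acquisition COA effect
new_ebitda = baseline_ebitda + revenue * margin_uplift  # $2.4M

# Debt service comes first; distributions come from what remains.
interest = debt * rate               # first-year interest (declines as debt amortizes)
principal = debt / 10                # assumed 10-year straight-line repayment
to_charity = new_ebitda - interest - principal

print(f"EBITDA after uplift: ${new_ebitda:,.0f}")
print(f"Debt service (interest + principal): ${interest + principal:,.0f}")
print(f"Available for charitable distribution: ${to_charity:,.0f}")
```

Under these assumed numbers, even a 2-point margin uplift covers the debt service spread and leaves a meaningful annual distribution, which is the "spread over borrowing costs funds both repayment and distributions" claim in miniature.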
On the beachhead: agreed, this is what's needed. I'm working toward a fund structure to do instrumented acquisitions. The goal is generating real data, not just arguing from theory. Section 1.1 of the research compilation has more on the preference research if you want to dig in.
EDIT: Re AI timelines, one of the risks (certainly not the only one) is that it will cause wealth to be concentrated among the owners of capital. Having charities be the holders of that capital is likely a better outcome than a very small group who are accountable to no one.
If you're interested in the plausible margin effects, sector selection criteria, and financial projections, you can check out the research compilation that I linked to (Section 1 for stakeholder preference research, Section 4 on the effect of parity (no consumer sacrifice) on adoption, and Sections 9A and 9B on sector selection criteria and financial modeling, respectively).
And Claude helped organize and review the draft, but I wrote it.
Yeah, the downside would be the cost of running the program, which would be very small in relation to the value of the capital (which would be going to charity, so just subject to normal business risks).
If you see differences in post-acquisition performance, the fund can be expanded, and other philanthropists will have an incentive to copy the model. If the thesis is generally proven, lenders will have an incentive to finance further acquisitions (leveraged buyouts); the sky, or most of the entire economy (other than perhaps startups, where equity incentives might outweigh COA advantages), is the limit.
Truly absurd that this is not being explored.
I would definitely want a human reviewing and possibly iterating, but if that is happening and the AI is drafting, that's fine.