Correct: I'm vaguely aware of Kat Woods posting on FB, but I haven't investigated Nonlinear in any depth before; having an explicit definition of "what information I'm working with" seemed useful.
Yes, Nonlinear is smaller than expected.
I outlined a bad org with problems, even after adjusting for a hostile reporter and a vengeful ex-employee. I think the evidence is somewhat weaker than what I expected (setting aside that I trust you personally), and the allegations are stronger/worse. Overall, it was a negative update about Nonlinear.
I think part of the disconnect, from my perspective, is that I have experience with small, scrappy conventions that deliver good talks, an enjoyable time, and a large central room where people can mingle. The scrappier science-fiction conventions seem to charge in the range of $60-$120, usually on the lower side, and about break even, while relying very heavily on volunteer labor and physical assets. The fancier ones might charge $250/person/weekend. That's not the true cost, since it excludes what dealers pay for access, advertising, etc. But my sense of con budgets is that registration covers at least half of the true cost.
Obviously, a large chunk of that difference is the $240 on food that you're spending and they're not. Another chunk is location: such cons tend to be out in the boonies of their respective cities, passing travel costs or higher hotel prices along to attendees.
The context that non-profit conventions tend to be $400+ is helpful: thank you. I really appreciate the transparency.
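To make the comparison concrete, here's the back-of-envelope arithmetic as I'm running it, using only the figures above; treat it as my reconciliation, not numbers from your budget:

```python
# Back-of-envelope reconciliation of the figures above (my arithmetic,
# not the original post's). Stripping the food budget out of a $400
# non-profit convention fee lands near what the fancier cons charge.
nonprofit_fee = 400   # $/person, low end of the "$400+" figure above
food_budget = 240     # $/person on food that the sci-fi cons don't provide

ex_food_fee = nonprofit_fee - food_budget
print(ex_food_fee)    # 160: between the $60-$120 scrappy range and $250 fancy

# Separately: if $250 registration covers at least half of a fancy con's
# true costs, the implied true cost is at most ~$500/person.
implied_true_cost = 250 / 0.5
print(implied_true_cost)  # 500.0
```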
I don't think that this is a good state of affairs. I think that the points I raise range from "this should be completely unacceptable" (4, 6) to "if this is the worst credible information that can be shared, the org is probably doing very well" (3, 5). This is not a description of an org that I would support! But if a friend told me they were doing good work there and they felt the problems were blown out of proportion or context by a hostile critic and a vengeful ex-employee with an axe to grind, I would take them seriously and not say "you have to leave immediately. I can cover two months' salary for you while you find another job, but I believe that strongly that you should not work here."
As always, context is important: "the head of the org is a serial harasser with no effective checks" and "we fired someone when their subordinate came forward with a sexual harassment allegation that, after a one-week investigation, we validated and found credible: the victim is happily employed by us today" are very different states of affairs. If someone is sharing the worst credible information, then "we were slow to update on X" and "they knew X was false from the report by A, but didn't change their marketing materials for another six months" can be hard to distinguish.
Running an org is complicated and hard, and I think many people underestimate how much negative spin a third party with access to full information can include. I am deliberately not modelling "Ben Pace, whom I have known for almost a decade" and am instead modelling "a hostile journalist looking for clicks", which I think is the appropriate frame of reference.
Worst credible information about a charity that I would expect, based on the following description (pulled from Google's generative AI summary: it may or may not be accurate, but it seemed to me like the best balance between speed and engaging with some information):
The Nonlinear Fund is an organization that aims to research, fund, and seed AI safety interventions. Their incubation program provides seed funding and mentorship. The seed funding is for a year's salary, but you can also use it for other things, such as hiring other people.
The Nonlinear Library is a podcast that uses text-to-speech software to convert the best writing from the rationalist and EA communities into audio. You can listen to the podcast on Apple Podcasts and Spotify.
I am not describing a charity with ideal management practices, but envisioning one with 25 employees, active for 5 years, with poor but not shockingly or offensively bad governance by the standards of EA orgs. Someplace where I wouldn't be worried if a friend worked there, but where I would sympathetically listen to their complaints and consider the org not the best use of my marginal dollar.
Maybe I am excessively cynical about what bad things happen at small charities, but this feels like a reasonable list to me. There may be other events of similar badness.
From what I can tell, Harris has impressively low name recognition and is fairly unpopular with voters. That doesn't mean that party elites won't object to an outside group sponsoring a candidate who doesn't have their blessing.
A few points.
With the same resources, it's probably easier and more effective to try to persuade candidates who are more successful.
You're massively underestimating your ROI, probably by an order of magnitude. Valued as a perpetuity, $10 billion in charitable contributions per year at a very steep discount rate of 20% has a present value of $50 billion, which makes the ROI not 18-fold but closer to 90-fold. With a more reasonable discount rate of 10% (I would have said 5%, but then the Fed happened), the present value doubles to $100 billion, and you're talking about 180-fold returns.
Of course, this falls apart under sufficiently short timelines.
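For anyone who wants to check the arithmetic: a minimal sketch, assuming the $10 billion/year is valued as a simple perpetuity (PV = annual flow / discount rate). The upfront investment isn't stated in this thread, so the fold figures are relative to whatever that amount is:

```python
# Minimal sketch: value $10B/year of contributions as a perpetuity,
# PV = C / r. The upfront investment isn't given here, so the "X-fold"
# ROI figures are PV divided by that unstated amount.
annual_flow = 10e9  # $10 billion per year

for rate in (0.20, 0.10):
    pv = annual_flow / rate
    print(f"discount rate {rate:.0%}: present value ${pv / 1e9:,.0f}B")

# discount rate 20%: present value $50B  (the $50 billion NPV above)
# discount rate 10%: present value $100B (hence the fold figure doubles)
```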
I don't think that any of those justify not sending either your questions or a writeup of the post to the org in advance. They have a public email address. It's at the bottom of their home page. I don't think it's a particularly excessive burden to send a copy once you're done and give them a week. Perhaps two if they apologize and ask for a bit more time. I understand why people might be suspicious at the moment, but forcing people to scramble while on vacation is not a good norm. As you say, this post clearly wasn't that time-sensitive. I don't think that the Forum should have taken your post down, but that's a much higher bar.
For comparison, when I posted a piece that was somewhat critical of CEA's admissions and transparency policies, it was after I had asked in a more private Slack channel and gotten an answer I was not satisfied with. You can see that they clarified that they did inform people, and that others chimed in to thank me for informing them via the post.
I am not speaking for the DoD, the US government, or any of my employers.
I think that your counterexamples to technological inevitability are premised on states wanting to regulate the key technologies in question, sometimes mediated by public pressure. All of the examples listed were blocked for decades by regulation, sometimes supplemented with public fear, soft regulation, etc. That works so long as governments don't consider advancement in the field a core national interest. The US and China do consider AI one, often in an explicitly securitized form.
Quoting CNAS:
China’s leadership – including President Xi Jinping – believes that being at the forefront in AI technology is critical to the future of global military and economic power competition.
English-language coverage of the US tends to avoid such sweeping statements, because readers have more local context, because political disagreement is more public, and because readers expect it.
But the DoD's most recent National Defense Strategy identified AI as a secondary priority. Both Trump and Biden identified it as an area in which to maintain and advance national leadership. And, of course, with the US in the lead, the government doesn't need to do as much in the way of directing people, since the existing system is delivering adequate results.
Convincing the two global superpowers not to develop a militarily useful technology while tensions are rising would, if it happened, be a first in history.
That's not to say that we can't slow it down. But AI very much is inevitable if it is useful, and it seems like it will be very useful.
I think that distinguishing between 1-8 hours (preferably paid), up to 40 hours, and 1-6 months is very important here. I am happiest about the shortest ones, particularly for people who have to leave a job (part of why I think that OP is talking about the latter sort).