New & upvoted


Posts tagged community

Quick takes

I think it is good to have some ratio of upvoted/agreed to downvoted/disagreed posts in your portfolio. If all of your posts are upvoted or highly agreed with, then you're either playing it too safe or you've eaten the culture without chewing first.
David Rubinstein recently interviewed Philippe Laffont, the founder of Coatue (probably worth $5-10b). When asked about his philanthropic activities, Laffont basically said he’s been too busy to think about it, but wanted to do something someday. I admit I was shocked. Laffont is a savant technology investor and entrepreneur (including in AI companies) and it sounded like he literally hadn’t put much thought into what to do with his fortune. Are there concerted efforts in the EA community to get these people on board? Like, is there a google doc with a six degrees of separation plan to get dinner with Laffont? The guy went to MIT and invests in AI companies. It just wouldn’t be hard to get in touch. It seems like increasing the probability he aims some of his fortune at effective charities would justify a significant effort here. And I imagine there are dozens or hundreds of people like this. Am I missing some obvious reason this isn’t worth pursuing or likely to fail? Have people tried? I’m a bit of an outsider here so I’d love to hear people’s thoughts on what I’m sure seems like a pretty naive take!
A couple takes from Twitter on the value of merch and signaling that I think are worth sharing here: 1)  2) 
What is "capabilities"? What is "safety"? People often talk about the alignment tax: the magnitude of capabilities/time/cost a developer loses by implementing an aligned/safe system. But why should we consider an unaligned/unsafe system "capable" at all? If someone developed a commercial airplane that went faster than anything else on the market, but it exploded on 1% of flights, no one would call that a capable airplane. This idea overlaps with safety culture and safety engineering and is not new. But alongside recent criticism of the terms "safety" and "alignment", I'm starting to think that the term "capabilities" is unhelpful, capturing different things for different people.
Something I persistently struggle with is that it's near-impossible to know everything that has been said about a topic, and that makes it really hard to know whether an additional contribution is adding something or just repeating what's already been said, or worse, repeating things that have already been refuted. To an extent this seems inevitable, and I just have to do my best and sometimes live with having contributed more noise than signal in a particular case. But I feel like I have an internal tuning knob for "say more" vs. "listen more", and I find it really hard to know which direction is overall best.

Popular comments

Recent discussion


  1. Destabilization could be the biggest setback for great power conflict, AI, bio-risk, and climate disruption.
  2. Polarization plays a role in nearly every causal pathway leading to destabilization of the United States, and there is no indication polarization
Continue reading

Just an FYI that most non-profits are legally constrained from doing many sorts of political advocacy work, I think. 

Fermi–Dirac Distribution
This post is timely, given the recent selection of J.D. Vance as Trump’s VP. J.D. Vance said that he would not have certified the 2020 election results if he were in Pence’s place. As Trump’s VP pick, he has about a two-thirds chance of being the President of the Senate when the 2028 election results are certified. If the Democratic candidate wins that presidential election, it doesn’t seem implausible that he’ll refuse to certify the election results.

The Electoral Count Act was overhauled after January 6 to give the VP a less ambiguous and discretionary role in the certification process. But there’s reason to think Vance could adversarially exploit any remaining discretion or ambiguity to the maximum. Or worse, he may not even respect the law. After all, Vance has previously said that there are cases in which the president should defy the Supreme Court.[1] Vance once “called on the Justice Department to open a criminal investigation into a Washington Post columnist who penned a critical piece about Trump.” It could be reasonable to conclude from this that the freedom of the press might be at risk.

To make matters worse, Vance is young: he’s not even 40 yet. He graduated from Yale Law School, so he’s extremely smart. He has a lot of time, and a lot of competence, to achieve his antidemocratic aims. Trump is approaching his 80s. Optimistically, Vance may have made his antidemocratic statements to (successfully) get Trump’s attention and advance his career, and those ideas will retreat after Trump’s death, taking the Republican Party back to the ideals of people like Mitt Romney and Nikki Haley. But it’s not obvious that this will happen. Trump’s example may instead empower more politicians domestically and abroad to challenge democratic institutions and accumulate power, as we already started seeing with Bolsonaro in Brazil.

Many countries have voted themselves out of a real democracy. Turkey, Hungary, Russia and Venezuela have all done so in recent decad
Thanks for the post. I suppose you'd agree that there's a good chance that, once Uncle Sam gets unstable, many other countries will follow suit.

We estimate that, as of June 12, 2024, OpenAI has an annualized revenue (ARR) of:

  • $1.9B from ChatGPT Plus (7.7M global subscribers),
  • $714M from ChatGPT Enterprise (1.2M seats),
  • $510M from the API, and
  • $290M from ChatGPT Team (980k seats).

(Full report...

Continue reading
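As a rough sanity check on these figures, the subscription lines can be cross-checked against seat counts at plausible per-seat prices. The prices below are assumptions (roughly the public list prices at the time; Enterprise pricing is negotiated and the $50/month figure is a guess), not numbers from the report:

```python
# Cross-check reported ARR against seats x assumed per-seat monthly price.
# Prices are assumptions for illustration, not figures from the report.
PLANS = {
    # plan: (seats, assumed USD/seat/month, reported ARR in $M)
    "ChatGPT Plus":       (7_700_000, 20, 1_900),
    "ChatGPT Enterprise": (1_200_000, 50, 714),  # negotiated pricing; $50/mo is a guess
    "ChatGPT Team":       (980_000,   25, 290),
}

for plan, (seats, price, reported) in PLANS.items():
    implied = seats * price * 12 / 1e6  # annualized revenue, in $M
    print(f"{plan}: implied ${implied:,.0f}M vs reported ${reported:,}M")
```

Under these assumed prices the implied figures land within a few percent of the reported ones (e.g. 7.7M Plus subscribers at $20/month implies about $1,848M annualized), which suggests the estimates are roughly seats times list price.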

Hi! We currently don't have a reliable estimate of the cost, but we might include it in the future.

Sign up for the Forum's email digest
You'll get a weekly email with the best posts from the past week. The Forum team selects the posts to feature based on personal preference and Forum popularity, and also adds some announcements and a classic post.

This post is scavenged and adapted from my report on resilience to global cooling catastrophes [summary here].


  • There is significant disagreement about the validity and severity of nuclear winter
  • I use results from two papers on either side of the debate to construct
Continue reading

Dear Stan.

I think there are issues with this analysis. As it stands, it presents a model of nuclear winter if firestorms are unlikely in a future large-scale nuclear conflict. That would be an optimistic take, and does not seem to be supported by the evidence:

  • In my post on the subject that you referenced, I discuss how there are serious issues with coming to a highly confident conclusion in relation to nuclear winter. There are only limited studies, which come at the issue from different angles, but to broadly summarize:
    • Rutgers are highly concerned about t
... (read more)

One way to approach the decision whether to build conscious AI is as a simple cost-benefit analysis: do the benefits outweigh the risks?

In previous posts, we've argued that building conscious AI courts multiple serious risks. In this...

Continue reading

Do you think that consciousness will come for free? I think that it seems like a very complex phenomenon that would be hard to accidentally engineer. On top of this, the more permissive your view of consciousness (veering towards panpsychism), the less ethically important consciousness becomes (since rocks & electrons would then have moral standing too). So if consciousness is to be a ground of moral status, it needs to be somewhat rare.


Aschenbrenner’s ‘Situational Awareness’ (Aschenbrenner, 2024) promotes a dangerous narrative of national securitisation. This narrative is not, despite what Aschenbrenner suggests, descriptive, but rather, it is performative, constructing a particular...

Continue reading

National securitisation privileges extraordinary measures to defend the nation, often centred around military force and logics of deterrence/balance of power and defence. Humanity macrosecuritisation suggests the object of security is to defend all of humanity, not just the nation, and often invokes logics of collaboration, mutual restraint and constraints on sovereignty.

I found this distinction really helpful. 

It reminds me of Holden Karnofsky's piece on How to make the best of the most important century (2021), in which he presents two contrasting f... (read more)

Joseph Miller
These are not just vibes - they are all empirical claims (except the last maybe). If you think they are wrong, you should say so and explain why. It's not epistemically poor to say these things if they're actually true.
Seth Herd
Excellent work. To summarize one central argument in briefest form:

Aschenbrenner's conclusion in Situational Awareness is wrong in overstating the claim. He claims that treating AGI as a national security issue is the obvious and inevitable conclusion for those who understand the enormous potential of AGI development in the next few years. But Aschenbrenner doesn't adequately consider the possibility of treating AGI primarily as a threat to humanity instead of a threat to the nation or to a political ideal (the free world). If we considered it primarily a threat to humanity, we might be able to cooperate with China and other actors to safeguard humanity.

I think this argument is straightforwardly true. Aschenbrenner does not adequately consider alternative strategies, and thus his claim that his conclusion is the inevitable consensus is false. But the opposite isn't an inevitable conclusion, either. I currently think Aschenbrenner is more likely correct about the best course of action. But I am highly uncertain. I have thought hard about this issue for many hours both before and after Aschenbrenner's piece sparked some public discussion. But my analysis, and the public debate thus far, are very far from conclusive on this complex issue.

This question deserves much more thought. It has a strong claim to being the second most pressing issue in the world at this moment, just behind technical AGI alignment.

This is a couple weeks old, but I don't think it's been shared here yet. For context: GiveWell recently took GiveDirectly (which runs cash-transfer programs) off its "top charities" list. GiveDirectly wrote a very gracious and interesting response. One section particularly...

Continue reading

I think the 'resonating with individual empowerment' point is important. While GiveDirectly may not be as effective as the top charities recommended by GiveWell, in my experience it has a 'low intellectual bar to entry' for getting non-EAs to donate. I've had trouble convincing certain people to donate to charities like GW's Top 4 (and existential risk initiatives are an even harder sell), but GiveDirectly seems to resonate quite easily — it's still more effective than most of the charities out there, especially in the poverty alleviation space. 


Audio is here if you prefer. Hope you like it.


I got something to offer, all I ask is your time.
And forgiveness for the form: a cheesy rhyme.
You’re skeptical? Makes sense. But I know your type's vice.
You strike me as a purveyor, of do-gooder advice.

And it’s pretty ...

Continue reading

This is lovely and heartfelt, Elliot! I loved listening to your rendition. Makes me think an EA-themed poetry slam would be a great idea.

Folks in philanthropy and development definitely know that the Gates Foundation is the largest private player in that realm by far. Until recently it was likely to get even larger, as Warren Buffett had stated that the Foundation would receive the bulk of his assets when

Continue reading

Any ideas for ways to make a positive outreach to the Buffett trust… praising their past giving and discussing their ideals and strategy?

Seems like a bad incentive if we harshly criticise people for stopping giving, at least if we didn't praise them way more for the donation in the first place. Warren has been one of the most generous men in history, but this historical generosity shouldn't be held as evidence against him.
I agree he shouldn’t have his past donations held against him, and that his past generosity should be praised. At the same time, he’s not simply “stopping giving.” His prior plan was that his estate would go to BMGF. Let’s assume that that was reflected in his estate planning documents. He would have had to make an affirmative change to effect this new plan. So with this specific action he is not “stopping giving,” he is actively altering his plan to be much worse.