There are other safety problems (often more speculative ones) that the market is not incentivizing companies to solve.
My personal response would be as follows:
Sorry to hear about your experience!
Which countries are at the top/bottom of the priority list to be funded? [And why?]
I think this is a great question, and I suspect it's somewhat under-considered. I looked into this a couple of years ago as a short research project, and I've heard there hasn't been a ton more work on it since then. So my guess is that the reasoning might be somewhat ad hoc or intuitive, but tries to take into account important factors like "size / important-seemingness of country for EA causes", talent pool for EA, and ease of movement...
Zero-bounded vs negative-tail risks
(adapted from a comment on LessWrong)
In light of the FTX thing, maybe a particularly important heuristic is to notice cases where the worst-case is not lower-bounded at zero. Examples:
Inside-view, some possible tangles this model could run into:
Speaking as a non-expert: This is an interesting idea, but I'm confused as to how seriously I should take it. I'd be curious to hear:
Ah, sorry, I was thinking of Tesla, where Musk was an early investor and gradually took a more active role in the company.
...
In February 2004, the company raised $7.5 million in series A funding, including $6.5 million from Elon Musk, who had received $100 million from the sale of his interest in PayPal two years earlier. Musk became the chairman of the board of directors and the largest shareholder of Tesla.[15][16][13] J. B. Straubel joined Tesla in May 2004 as chief technical officer.[17] A lawsuit settlement agreed to by Eberhard and Tesla i...
I think it's reasonable and often useful to write early-stage research in terms of one's current weak best guess, but this piece makes me worry that you're overconfident or not doing as good a job as you could of mapping out uncertainties. The most important missing point, I'd say, is effects on AI / biorisk (as Linch notes). There's also the lack of (or inconsistent treatment of) counterfactual impact of businesses, as I mention in my other comment.
Also, a small point, but given the info you linked, calling Oracle "universally reviled" seems too strong. This kind of rhetorical flourish makes me worry that you're generally overconfident or not tracking truth as well as you could be.
The market value of Amazon is roughly $1T, meaning that it has captured at least that much value, and it has likely produced much more consumer surplus on top of that.
I'm confused about your assessment of Bezos, and more generally about how you assess value creation via businesses.
My core concern here is counterfactual impact. If Bezos didn't exist, presumably another Amazon-equivalent would have come into existence, perhaps several years later. So he doesn't get full credit for Amazon existing, but rather for such an org existing for a few more years. And m...
Nice to see this! I remember being surprised a few years back that nobody in EA besides Drexler was talking about APM, so it's nice to see a formal public writeup clarifying what's going on with it. I'm leery of infohazards here, but conditional on it being reasonable to publish such an article at all, this seems like a solid version of that article.
Re: key organizations, a few thoughts:
I enjoyed this post -- I've wanted a scope-sensitive news source for ages.
A resource I really like for getting a sense of what the world looks like "on average" is Dollar Street, which puts together info and images about households around the world. They estimate household income, so you can see what life at different income levels is like.
Thanks for this post! I always appreciate a pretty metaphor, and I generally agree that junior EAs should be less deferential and more ambitious. Maybe most readers will in fact mostly take away the healthy lesson of "don't defer", which would be great! But I worry a bit about the urgent tone of "act now, it's all on you", which I think can lead in some unhealthy directions.
To me, it felt like a missing mood within the piece was concern for the reader's well-being. The concept of heroic responsibility is in some ways very beautiful and im...
Ha!
Personally, I've gotten a lot of value from having a buddy look over my work and chat with me about it -- a fresh perspective is really useful, not just for copyedits but also for building on my first thoughts. If you don't yet know people you could ask for this, you might find it valuable to reach out to SERI, CERI, or other community orgs that aim to help junior x-risk researchers. (Presumably ZERI and JERI are next.) Happy to chat more via DM if that would be useful :)
I think this is a pretty important topic, and one I haven't seen discussed as often as I'd like! Thanks for writing it up.
I think you could get more engagement with this topic if you spent some more time smoothing out the presentation of your writeup. For example, there are a few typos in the summary section that made me less excited to read the rest of the piece. Given that you now have a pretty interesting piece of thinking written, it might be pretty feasible to find a smart junior person who could give you copyedits and comments.
Thanks for your feedback! Unfortunately I am a smart junior person, so it looks like we know who'll be doing the copyediting
Self-signaling value ain't something to sneeze at. Personally, a lot of my desire-for-demandingness is about reinforcing my identity as someone who's willing to make sacrifices in order to do good. ("reinforcing" meaning both getting good at that skill, and assuring myself that that's what I'm like :)
epistemic status: "the best way to learn is by saying something wrong and being corrected." These statements are all intended as "my best guess" from someone who's not super technical and could easily be wrong about AI progress.
In general, I'm skeptical of surveys like this -- I participated in a similar one a few years ago that didn't have super useful results, though I think it was kind of useful for clarifying my own thinking. But that's pretty outside-viewy. Let me take a stab at making that general skepticism concrete -- trying to elucidate why people might struggle to answer, slash why the questions you're asking won't yield super useful answers.
I expect that the 'right' answer depends on carefully enumerating and considering a bunch of different plausible scena...
This piece is... pretty amazing. I could see this being really useful for me as an AI governance researcher, possibly the most useful thing I've read this year. Thanks!
Do you have any advice for eliciting feedback from people when you're doing rapid iteration? I generally find it valuable to share Google Docs with people as I'm working through ideas, but it can be hard to communicate the kind of feedback that's most useful for rough documents. Maybe it's good to flag "these are hot takes, I'm looking for strong arguments against them to refine my viewpoint, don't bother with small details for now"?
Like Akash, I agree with a lot of the object-level points here and disagree with some of the framing / vibes. I'm not sure I can articulate the framing concerns I have, but I do want to say I appreciate you articulating the following points:
- Society is waking up to AI risks, and will likely push for a bunch of restrictions on AI progress
- Sydney and the ARC CAPTCHA example have made AI safety stuff more salient.
- There's opportunity for substantially more worry about AI risk to emerge after even mild warning events (e.g. AI-powered cyber events, crazier b
...