I attended the Progress Summit in Hollywood yesterday, hosted by The Atlantic. Progress studies and EA have some overlap, so I thought it would be useful to share my thoughts on the event. The main difference I perceived was that people attended not because they wanted to maximize their positive impact, but because they were intellectually interested in socially responsible progress. Some other observations follow.

-- 

Right after I left, I told my friend it felt like a “grown-up version of an EA conference.” I’m 22 and was probably the youngest person there. Everything felt more professional (cocktails, food, outfits, etc.), and the operations seemed smoother than at any EA event I’ve been to.

--

The facilitators were all Atlantic writers, such as Ross Andersen and Derek Thompson, and they were significantly more eloquent and better at holding people’s attention than facilitators at any EA event I’ve been to. I could definitely tell the difference in their training. That said, the talks felt fluffy and often skirted intellectual issues for the sake of a smooth conversation.

--

Networking was less direct. More small talk, less intensity. 

--

The event was catered to investors and venture capitalists. Speakers were trying to make their products sound appealing so that investors would fund them, which I thought was slightly bad for epistemics; they often ignored the risks of their products (e.g. AI slaughterbots).

--

In general, the majority of the attendees seemed bullish on “the progress of technology” and didn’t touch much on the potential risks of things like AGI or biorisk. When they did address risks, it was invariably in relation to (1) the economy, (2) climate change, or (3) war. Of the people I spoke with, fewer than 20% had heard of misalignment or existential risk. I didn’t get the impression that anyone at the event dismissed existential risk; rather, it felt like they simply hadn’t encountered it in the progress studies ecosystem.

--

Overall, I think the progress studies community seems decently aligned with what EAs care about, and could become more so in the coming years. The event had decent epistemics and was less intimidating than an EA conference. I think many people who feel that EA is too intense, cares too much about longtermism, or uses too much jargon could find progress studies a suitable alternative. If the movement known as EA dissolved (God forbid), I think progress studies could absorb many of those folks.

--

Notable events: 

  1. “How mRNA Technology Can Save the World” 
    1. Most of the people I talked to here had never considered the risks posed by biotechnology (beyond class inequality concerns).
  2. “Drones and AI: The Future of Military Technology” 
    1. The concern most people raised was the risk of starting a war (which seems like a reasonable worry), but because much of what Brian Schimpf talked about was technology used for deterrence, most people then seemed bullish on the positive impacts of this technology (I was not).
  3. “How Artificial Intelligence Can Revolutionize Creativity” 
    1. I didn’t go to this one, but I heard from someone who did that they talked about GPT-3 and DALL-E positively and didn’t mention the potential risks posed by capabilities advancements.
  4. “The Long View” 
    1. Didn’t go, unfortunately. 

--

Comments

What is the Progress Studies movement's view on animal welfare?

A large community of "near-EA" animal welfare people exists, but they don't post on the forum as much as others. Note that this community has a mature, coordinated, cooperative outlook, different from some kinds of activism.

I didn't hear animal welfare mentioned once, and they had lots of meat options for lunch. That's all I've got, lol.

You can read what Jason Crawford had to say on the topic here, when Peter Wildeford asked:
https://twitter.com/peterwildeford/status/1520911804288966656

Peter: What do progress studies people think about nonhuman animals?
Jason: It's not discussed much. There are probably a range of views. Personally, my current position is that we shouldn't be inhumane or needlessly cruel, but that animals aren't on the same moral level as humans.

Peter: Do you think modern factory farming is inhumane?
Jason: I've only read a little bit about it, and what I read was pretty bad. But the topic is controversial enough that I'd want to hear multiple takes (ideally from different sides) before having a real opinion.

He also mentions that he doesn't see factory farming of animals as one of the biggest problems or negatives caused by progress.

Thanks for sharing these thoughts! I'm curious to know more about the smoother operations: could you elaborate?

It's hard to put into words, but there were cocktails and nice background music, and all the events transitioned super smoothly. It's like when you watch the Oscars or something and everything seems like it's been rehearsed; that's how this felt. EA conferences, on the other hand, usually seem more hectic and improvisational.

In this case, if I had to choose between

A) attending an event with nicer background music + cocktails and

B) one that doesn't seem "rehearsed"

I'd probably end up choosing the latter...


Overall, I think the progress studies community seems decently aligned with what EAs care about, and could become more so in the coming years. The event had decent epistemics and was less intimidating than an EA conference. I think many people who feel that EA is too intense, cares too much about longtermism, or uses too much jargon could find progress studies a suitable alternative. If the movement known as EA dissolved (God forbid), I think progress studies could absorb many of those folks.

I'm curious about how you think this will develop. It seems like Progress studies often takes the stance that for all technologies, progress in that technology is good. This seems relatively central to their shtick.

Maybe their views will start to shift toward thinking strongly in terms of what we would call differential technological development, where they can maintain their view that progress is good, but append the condition that progress is only good if certain technologies get developed sooner than others. Perhaps this is the perspective they already have on many technologies, and I don't know enough about the community to tell.

You know, that's what I thought as well, but I've found the community to be more open to caution than I initially expected. Derek Thompson in particular (the main organizer for the event) harped on safety quite a bit. And if more EAs got involved (assuming they don't get amnesia), I assume they could carry over some of these concerns and shift the culture.
