Hey all, I’ll be hosting a 30-minute fireside chat with Jaan Tallinn at the EAGxAsia-Pacific conference this weekend, and I’d like to get your thoughts and suggestions on what questions I should ask him.

For those who don't know Jaan, you can learn more about him on Wikipedia and in this Vox article. You can also watch or read this fireside chat with him from EAG SF 2018.

Here are the questions I’m thinking of asking him, roughly in this order, though I may add a few follow-up questions on the spot:

  1. What buckets of projects and activities do you currently spend your time on, and how do you divide your time across them?
  2. I’ve heard you use the metaphor of the drill and the city to talk about risks from AI. Could you explain that metaphor for the audience today?
  3. When it comes to AI risk, what have you changed your mind about or updated your beliefs on most in the last two years?
  4. AI researchers still disagree on when human-level AI will be created. Around when do you personally expect it, and how has that view changed over the last two years?
  5. Are you currently working on or supporting any projects, activities, or people in the Asia-Pacific region?
  6. What projects or organizations would you like to see started in the Asia-Pacific region? How willing or excited would you be to fund them?
  7. Some effective altruists subscribe to patient longtermism: the view that, instead of focusing on reducing specific existential risks this century, we should expect the crucial moment for longtermists to act to lie in the future, and that our main task today is to prepare for that time. What are your thoughts on this view?
  8. What would you like to see happen in the effective altruism community in the Asia-Pacific region within the next 3 years?

I’ve pasted these questions as comments below so you can upvote the ones you’d like me to ask. If there are other topics or questions you think I should consider asking him, please comment them below, ideally with a brief rationale for why you’d like him to answer. Thanks!


