
[Spanish below]

We want to remind the international EA community that EAGxCDMX will take place in Mexico City at the Universum Museum, from March 14 to 16, 2025, and that the deadline to apply is next Monday, February 24th.

APPLY NOW! 

Our main goal for this conference is to create meaningful, lasting connections between attendees that motivate them to do good in the world. We look forward to welcoming experienced members of the community who are excited to make connections in this region, as well as people newer to the movement who want to learn from seasoned EAs.

You can expect talks and workshops on AI, animal welfare, global health and development, biosecurity, career prioritisation, effective giving, and social entrepreneurship. This week we will spotlight confirmed speakers on our Instagram and LinkedIn channels.

We are aiming for about half of the content to be in Spanish and the remainder in English, and it is our priority that the conference provides valuable connections and talks for both groups. If you’re considering attending but are unsure, please err on the side of applying!

If you have any questions or comments, please reach out at cdmx@eaglobalx.org.

Hope to see you in Mexico City!

 

Spanish:

We want to remind the international effective altruism community that EAGxCDMX will take place in Mexico City at the Universum Museum, from March 14 to 16, 2025, and that the deadline to apply is next Monday, February 24.

APPLY NOW!

Our main goal for this conference is to create meaningful, lasting connections between attendees that motivate them to do good in the world. We look forward to welcoming people who have been part of the community for some time and want to make connections in this region, as well as people newer to the movement who want to learn from highly experienced members.

We will have talks and workshops on artificial intelligence, animal welfare, global health, biosecurity, career decision-making, effective giving, and social entrepreneurship. This week we will announce the confirmed speakers on our Instagram and LinkedIn channels.

We are aiming for about half of the content to be in Spanish and the rest in English, and it is our priority that the conference provides valuable connections and talks for both groups. If you are considering attending but are unsure, we'd encourage you to apply!

If you have any questions or comments, write to us at cdmx@eaglobalx.org.

We hope to see you in Mexico City!
