This year, I suffered from a serious condition called “Carnival abstinence” and spent a lot of time reflecting... which led me to realize that the lyrics of "O Milla" (1996), by the prophetic Netinho, describe an apocalyptic scenario in which we failed to mitigate catastrophic risks.

Netinho - Milla - Planeta Xuxa (1997) - YouTube

Lyrics in Portuguese and English.

Don’t be fooled by the display of energetic joy. That happiness is only achieved after going through despair and acceptance.

Firstly, the title is an obvious reference to Mila - Quebec AI Institute, one of the main research centers on the topic, where experts such as Yoshua Bengio work. Note that the singer uses the masculine article "o" before the name "Milla"; don't be fooled by an extra "L" - I myself never know how many "Ls" there are in my name.

Other "messages" to the attentive reader:

"Destino te mandou de volta para o meu cais" (Fate sent you back to my quay) - a clear reference to the CAIS model - Comprehensive AI Services, as explained by this FHI report. And may I vent that “fate” evokes a fatalistic note – a realization that we are puppets in the hands of evidential decision-theory algorithms?

"No coração ficou lembrança de nós dois / como ferida aberta, como tatuagem" (In heart remained a memory o fus / like an open wound, a tatoo). This is an Easter Egg for the initiated: it is a veiled reference to an excerpt from Eliezer Yudkowsky (I think?) in which he suggests that an AGI could easily solve the protein folding problem and spread prions that, like a time bomb, would damage the coronary arteries of all humans at a certain point. Netinho is a truly visionary artist.

(only now do I realize that the prophet is also citing these famous lines by Baki, one of Sultan Suleiman's favorites: “Fate saw the jewel in me, and pawed the heart apart to have it”)

“Na praia, no barco, no farol apagado, no moinho abandonado” (On the beach, on the boat, in the unlit lighthouse, in the abandoned mill) - here the artist ambiguously describes the post-catastrophe scenario, where survivors move through abandoned infrastructure that evokes climate change adaptation and the replacement of the energy mix. Is the artist discussing our concern with global warming or, on the contrary, the adaptation to a post-catastrophe scenario (such as a nuclear winter, or geoengineering gone wrong?) in which we would depend on other sources of food and energy? I find the second alternative more likely; note the next reference.

"Vendo estrelas caindo, vendo a noite passar..." (watching falling stars, seeing the night go by) - an obvious reference to missiles with nuclear warheads, whose explosions would have launched particles into the atmosphere, obscuring the sun and causing a nuclear winter.

"Eu e vocêêêê... na ilha do sol" (me and youuuuuu... on the island of sun). Moving on, the refugees moved to the tropics and sought protected positions, such as islands.

What happened to this world? One possibility is that an arms race involving artificial intelligence led to a nuclear war; another, even darker, is that an AI got out of control, leading humans to respond with widespread destruction of infrastructure.

(No joke: I remember there was a long bet along these lines - I just can't find the reference)
 

Portuguese version (original)

Na falta de pular carnaval neste ano, passei muito tempo refletindo, e concluí que a letra de "O Milla" (1996), do profético Netinho, é uma descrição de um cenário apocalíptico em que falhamos em mitigar riscos catastróficos.

Primeiro, o título é uma óbvia referência ao Mila - Quebec AI Institute, um dos centros de pesquisa do tema, onde trabalham autoridades do assunto como Yoshua Bengio. Notem que o cantor usa o artigo masculino "o" antes do nome "Milla"; não se deixem enganar por um “L” a mais – eu mesmo nunca sei quantos “Ls” há no meu nome.

Outras “mensagens” ao leitor atento:

“Destino te mandou de volta para o meu cais” – uma menção clara ao modelo CAIS – Comprehensive AI Services, cf. relatório do FHI

“No coração ficou lembrança de nós dois / como ferida aberta, como tatuagem”. Esse é um Easter Egg para os iniciados: trata-se de uma menção velada a um excerto de Eliezer Yudkowsky no qual ele supõe que uma AGI poderia facilmente resolver o protein folding problem e espalhar príons que, como uma bomba-relógio, danificariam as artérias coronárias de todos os humanos num determinado momento. Netinho é um artista verdadeiramente visionário.

“Na praia, no barco, no farol apagado, no moinho abandonado” – aqui o artista descreve de forma ambígua o cenário pós-catástrofe, onde os sobreviventes se deslocam por uma infraestrutura abandonada que evoca mudanças climáticas e substituição da matriz energética. Estaria o artista discutindo nossa preocupação com o aquecimento global, ou, pelo contrário, a adaptação a um cenário pós-catástrofe (como um inverno nuclear? Geoengenharia que deu errado?) onde dependeríamos de outras fontes de comida e energia? Acho a segunda alternativa mais provável; repare na próxima referência.

“Vendo estrelas caindo, vendo a noite passar...” – óbvia referência a mísseis com ogivas nucleares, cujas explosões teriam lançado partículas na atmosfera, obscurecendo o sol e causando um inverno nuclear... ou geoengenharia que saiu do controle, claro.

“Eu e vocêêêeeeê, na ilha do sol”. Seguindo essa linha, os refugiados se deslocaram para os trópicos, fugindo à nova era do gelo, e procuraram posições protegidas, como ilhas.

O que aconteceu? Uma possibilidade é que uma corrida armamentista envolvendo a inteligência artificial levou a uma guerra nuclear; outra, ainda mais sombria, é que uma IA saiu do controle, levando os humanos a responder com uma destruição generalizada da infraestrutura. 

Comments



Thinking about this one year later, I realize that global catastrophic events are much like Carnival in Brazil: unlivable climatic conditions, public services shut down, impossible traffic, crowds of crazy people roaming randomly through the streets... but without samba and beaches, of course (or, in the case of Curitiba, without zombies selling you beer).

“Meu amor / Olha só, hoje o sol não apareceu / É o fim da aventura humana na Terra / Meu planeta, adeus / Fugiremos nós dois na arca de Noé / Mas olha bem, meu amor / O final da odisseia terrestre / Sou Adão e você será... / Di-di-di-di-di-diz / Minha pequena Eva (Eva) / O nosso amor na última astronave (Eva) / Além do infinito eu vou voar / Sozinho com você / E voando bem alto (Eva) / Me abraça pelo espaço de um instante (Eva) / Me cobre com o teu corpo e me dá a força pra viver”

(My love / Look, today the sun didn't come out / It's the end of the human adventure on Earth / Farewell, my planet / The two of us will flee on Noah's ark / But look closely, my love / The end of the earthly odyssey / I am Adam and you will be... / Sa-sa-sa-sa-sa-say it / My little Eve (Eva) / Our love on the last spaceship (Eva) / Beyond infinity I will fly / Alone with you / And flying high (Eva) / Hold me for the space of an instant (Eva) / Cover me with your body and give me the strength to live)

For more, see this brilliant podcast: https://globoplay.globo.com/podcasts/episode/choque-de-cultura-ambiente-de-musica/7b0c4362-3a28-4ed4-be45-e6786aaba9f9/
