I am an astrobiologist researching the detection of life on Mars, with links to the Mars Perseverance rover. I am also interested in the role of the space community in tackling existential risks. Check out my introduction to the topic here.
If you'd like to chat about space and existential risk, please book a meeting! I'm particularly interested in the role of international collaborations in reducing the probability of a great power conflict, and in space activities that tackle existential risks, such as monitoring nuclear weapons testing and climate change impacts, and missions to test asteroid deflection and understand cosmic threats. I'm based in London and happy to meet in person. You can email me at j.stone22 at imperial dot ac dot uk
I am a freelance scientific illustrator. I create diagrams to visualise your research for presentations, publications, grant proposals, visual summaries, and more.
Check out this post on the forum for more info.
Thank you, and very good question! The short answer is: not really. I think that building momentum on existential risk reduction from the space sector could be tractable. One way to do this would be to found organisations that tackle some of the cosmic threats with unknown severity and probability. But to be honest I'm not sure that's necessary; maybe the LTFF, governments, or other organisations should just fund more research into these threats.
I think the main area in which EAs can have an impact is in developing existing organisations: increasing their power to enforce policy, their interconnectedness, and their prevalence. By doing this, we may be able to increase great power collaboration and build up institutions that will naturally evolve into long-term space governance structures, while helping to tackle natural existential risks directly.
I'm making a post about this strategy at the moment, so I'm happy to elaborate, but I don't want to write the whole post in one comment! Here's a diagram from the post draft showing how well covered most areas in space are:
Plugging this into the EAometer...
We can propose a project to "redirect charitable donations within popular but low-impact causes to the highest-impact charities in each cause".
We can score this project on importance, tractability, and neglectedness to help decide if it's worth working on.
Importance: Probably 3/10, as this project is directed at low-impact causes. But the causes may still be fairly important, since lots of people care about them, or are affected by them, enough to donate.
Tractability: I think 5/10. Charities like Cancer Research and WWF have monopolies over giving to these causes and dominate advertising, so I'm not sure how we could peel people away from them. On the other hand, the fact that lots of people donate to these causes would probably make it easier to get donations to grant funds in these cause areas, though maybe those funds won't attract the type of people who give through GWWC/EA.
Neglectedness: Not sure; I'd have to do some research. But I would guess it's low, because these are popular causes, so they are probably already busy with researchers trying to increase impact.
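For fun, here's a minimal "EAometer" sketch in Python, combining the three scores with the standard multiplicative ITN heuristic. This is entirely illustrative: the neglectedness value of 2 is an assumed placeholder, since I haven't actually done the research yet.

```python
# Toy ITN ("EAometer") sketch: combine importance, tractability, and
# neglectedness multiplicatively, as the ITN heuristic suggests.
# All numbers are rough guesses from the comment above; the
# neglectedness value is an assumed placeholder pending research.

def itn_score(importance: float, tractability: float, neglectedness: float) -> float:
    """Return a crude 0-10 composite: the geometric mean of the three scores."""
    return (importance * tractability * neglectedness) ** (1 / 3)

project = {"importance": 3, "tractability": 5, "neglectedness": 2}
print(f"Composite ITN score: {itn_score(**project):.1f}/10")  # ~3.1/10
```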
So to conclude, I would say it would be hard to implement this project and compete in such crowded, giant cause areas that invest a lot of money in advertising. The gain in impact is most likely not as great as just directing people to more effective cause areas, and popular cause areas are so overcrowded that probably everything gets funded anyway.
Woah, a really nice article that identified the most common criticisms of EA that I've come across, namely cause prioritisation, earning to give, billionaire philanthropy, and longtermism. Funnily enough, I've come across these criticisms on the EA Forum more than anywhere else!
But it's nice to see a well-researched, external, and in-depth review of EA's philosophy, and as a non-philosopher I found it really accessible too. I would like to see an article of a similar style arguing against EA principles, though. Does anyone know where I can find something like that? A web search for EA criticism mostly brings up angry journalists and media articles that often miss the point.
I'm thinking about organising a seminar series on space and existential risk, mostly because it's something I would really like to see. The series would cover a wide range of topics:
I think this would work best as an online webinar series. Would this be something people would be interested in?
Thank you for these updates! They are super useful for me as someone who is just starting to get more involved with EA, and they're really helping me get a good overview of EA's priorities and the measurable differences the movement is making. I come out of the post with a list of things to look further into :D
Greetings! I'm a doctoral candidate, and I have spent three years working as a freelance illustrator, specialising in scientific visual aids. I'm enthusiastic about contributing my time to create visuals that effectively support EA causes.
Typically, my work involves producing diagrams for academic grant applications, publications, and presentations, but I'm also open to helping with outreach illustrations or social media visuals. If you find yourself in need of such assistance, please don't hesitate to get in touch! I'm happy to hop on a Zoom chat.
I searched Google for "gain of function UK" and the first hit was a petition to ban gain-of-function research in the UK, which got only 106 signatures out of the 10,000 required.
How did this happen? Should we try again?
Just curious: why did you decide not to tackle AI risks? That seems like it would be a more natural fit, given your interest in existential risk and experience with programming.
Yeah, basically that was my reasoning. I'm super sceptical about this risk. The microbe might destroy one ecosystem in an extreme environment, or be a very effective pathogen in specific circumstances, but it would be unlikely to be a pervasive threat.
This theoretical microbe would have invested so many stat points in adaptations like extreme UV radiation resistance, resistance to toxins in Martian soil such as perchlorates and H2O2, and totally unseen levels of desiccation, salinity, and ionic-strength tolerance, all of which would be useless on Earth. And it would have to power all of these useless abilities on a food source it is likely not suited to metabolising, and definitely not under the conditions it is used to. I just can't imagine how it would become a huge threat around the world. But in a worst-case scenario it could kill a lot of people, or damage an ecosystem we rely on heavily, with massive global implications, so 7/10.