https://www.nytimes.com/2016/01/03/opinion/how-to-cultivate-the-art-of-serendipity.html

One survey of patent holders (the PatVal study of European inventors, published in 2005) found that an incredible 50 percent of patents resulted from what could be described as a serendipitous process. Thousands of survey respondents reported that their idea evolved when they were working on an unrelated project — and often when they weren’t even trying to invent anything.

https://www.youtube.com/watch?v=dXQPL9GooyI

Kenneth Stanley: Why Greatness Cannot Be Planned: The Myth of the Objective


I believe these links could be useful to people in EA who are focused on hard questions where the answers (and the process to get to the answers) aren't yet obvious. I think the NY Times article provides the "why" and the YouTube video provides the "how."

My sense is that this is a highly undervalued method for tackling hard, ambiguous questions where the path to an answer isn't yet obvious, and EA is full of questions and challenges like that!

So, I hope this is a useful model to add to your problem-solving toolkit as you work on doing good!

Comments

What does it actually mean to have "serendipity" as a model in one's toolkit? Would you be open to writing a brief summary of the "how", or do you strongly recommend just watching the video?

I think the "how" is roughly:

If you do not know the steps to your goal with high confidence, then do the following:

Imagine you're looking at a map. Your distant goal is somewhere on it, but the map is blurry, not yet revealed all the way out to that goal.

So identify the options you *do* know the steps to (the ones that *are* visible on the map), and from those, pick the most novel one.

This is because the more novel an option is, the more likely it is to reveal large and unexpected portions of the map, potentially including the part that shows you a visible path to your distant goal.

So when uncertain, identify the most novel thing you know how to do or achieve, do it, and repeat. That's likely the best (albeit very roundabout!) route to a distant goal whose path isn't yet visible.
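
If it helps to make the rule concrete, here's a minimal Python sketch of "pick the most novel reachable option, then repeat". Everything in it is an illustrative assumption on my part (representing options as tag sets, using Jaccard dissimilarity, the example data) rather than anything specified in the talk:

```python
def pick_next_step(options, history, dissimilarity):
    """Stepping-stone rule: among the options you know how to reach,
    take the one least like anything you've already done."""
    def novelty(option):
        # How far this option is from the *closest* thing in your history.
        return min(dissimilarity(option, past) for past in history)
    return max(options, key=novelty)


def jaccard_distance(a, b):
    """Toy dissimilarity between two tag sets: 1.0 means no overlap at all."""
    return 1 - len(a & b) / len(a | b)


# Hypothetical example: past projects and reachable next projects,
# each described by a set of tags.
history = [{"ml", "vision"}, {"ml", "nlp"}]
options = [
    {"ml", "vision", "robotics"},  # close to past work
    {"biology", "fieldwork"},      # very unlike past work
    {"nlp", "linguistics"},        # somewhat like past work
]

chosen = pick_next_step(options, history, jaccard_distance)
print(chosen)  # -> {'biology', 'fieldwork'}, the most novel reachable option

# "Repeat" just means: do the chosen thing, add it to your history,
# and re-run the rule on whatever new options it reveals.
history.append(chosen)
```

The key design choice is scoring each option against your whole history (distance to the nearest past step) rather than against the goal itself, which is exactly what makes the route roundabout but map-revealing.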


If the above is intriguing, I'd highly recommend watching the video: I think it's 15 minutes very well spent, especially watched at 2x speed.
