[$20K In Prizes] AI Safety Arguments Competition

Trying to align very advanced AIs with what we want is a bit like trying to design a law or measure to constrain massive companies, such as Google or Amazon, or powerful countries, such as the US or China. You know that once you put a rule in place, they will have enough resources to circumvent it. Similarly, no matter how hard you try afterwards, if you didn't design the AI properly in the first place, you won't be able to make it do what you want.

[$20K In Prizes] AI Safety Arguments Competition

When you look at society's problems, you can observe that many of our structural issues come from strong optimizers.

  • Once they're big enough, companies start adopting questionable practices to keep growing, such as tax evasion, blocking new companies from entering their markets, and capturing regulators to protect their position.
  • The policymakers who get elected are those who make false promises, who are ruthless with their adversaries, and who communicate without caring about truth.

Now, even these optimizers, hard as they are to fight, are very limited in their capabilities: by coordination costs, by their limited ability to forecast, and by their limited ability to process relevant information. AI poses the risk of breaking down these barriers and optimizing much more strongly. And so the feeling you have when facing these companies and policymakers, that you can't stop them even when you can see how they're cheating, will be multiplied tenfold when facing smarter AIs.

Open Philanthropy Shallow Investigation: Civil Conflict Reduction

Hi Lauren! Thanks for the post! Did you come across any literature on civil wars and life satisfaction? I expect the effect of civil wars on the latter to be significant, so I'd be curious to know if there are any estimates.

The Future Fund’s Project Ideas Competition

Monitoring Nanotechnologies and APM
Nanotechnology, and a catastrophic scenario linked to it called "grey goo", has received very little attention recently (more information here), even though the field keeps moving forward and some think it's one of the most plausible paths to human extinction.

We'd be excited for a person or an organization to closely monitor the evolution of the field and produce content on how dangerous it is. Knowing whether there are actionable steps that could be taken now would be very valuable for both funders and researchers in the longtermist community.

The Future Fund’s Project Ideas Competition

Making Impactful Science More Reputable

There are two things that matter in science: reputation and funding. While more and more funding is available for mission-driven science, we'd be excited to see projects that try to increase the reputation of impactful science. We think that raising the reputation of impactful work could, over time, substantially increase the amount of research done on most things society cares about.

Some ways we could give more reputation to impactful research:

  • Awarding prizes to past and present researchers who have done mostly impactful work.
  • Organizing "seminars of impact" where the emphasis is put on researchers who have managed to make their research impactful.
  • Communicating and sharing impactful research as it is being done. This could take several forms (e.g. simply using social media, or making short films about specific mission-driven research projects).

Nuclear attack risk? Implications for personal decision-making

Does anyone know where to buy potassium iodide tablets? I can't find any online seller that both works and isn't out of stock.

What are effective ways to help Ukrainians right now?

Epistemic Status: I just read a Twitter thread about this and found the idea interesting, so I'm sharing it here. It's still very plausible that I'm wrong.

I feel like this might be worth considering:
It's basically an ad campaign to raise awareness among Russians about what is happening in Ukraine. It looks like the project has committed to a high degree of transparency.
So I'd say that there are two main plausible paths to impact: 
- The direct impact of the campaign. In this context, the marginal value of information might be high: if many people update (which is plausible), it might put pressure on Putin and lead him to lower his ambitions in Ukraine.
- The informational impact: I'd be curious to have detailed data on this kind of ad campaign. It doesn't seem to have been tried much yet, and I wonder how effective it could be. Given the announced transparency of this project, it could give us a better idea of how effective this approach is (via the click-through rate, for instance). That said, it might be hard to track its actual impact on Russia's policy, so my guess is that it would only provide a lot of information if it turns out to be either very effective or very ineffective.


What are effective ways to help Ukrainians right now?


Epistemic Status: I'm not very confident about this idea because I only started thinking about it this morning, so I'd be happy to hear your thoughts.

I wonder whether a small group of smart hackers couldn't do a great deal to help Ukraine against Russia. Anonymous tends to carry out massive but not-very-smart attacks (basically huge DDoS attacks), and given their results, there might be room for much smarter interventions that truly make a difference to Russia's ability to win this war (for instance, helping Ukrainians with information or protection against cyberattacks, or attacking Russian logistics and information systems).

On the rationale for engaging in these cyber activities:
- I feel like anything that does not harm Russian civilians and makes the invasion of Ukraine harder is positive, because it might have very important long-run effects (if Russia comes out of this conflict much weaker than when it began, that's a major deterrent against future full-scale invasions, which is very positive).

The downsides: 
- If your location is identifiable and you're able to do significant things, it might further worsen diplomatic relations between your country and Russia.
- It might be risky if you want to travel to Russia at some point.

EA megaprojects continued

Some more ideas related to what you mentioned:

  • Exploring / exploiting interventions on growth in developing countries. For instance, what if we took an entire country and spent about $100 or more per household (for a small country, that could be feasible)? We could make direct transfers, as GiveDirectly does, but I'd expect some public goods funding to be worth trying as well.
  • Making AI safety prestigious by setting up an institute that would hire top researchers for safety-aligned research. I'm not 100% sure, but I feel like top AI people often go to Google in large part because of the great working conditions on offer. If an institute offered those working conditions and hired top junior researchers fairly massively to work on prosaic AGI alignment, that could help make AI safety more prestigious. Such an institute could also run seminars, give awards for safety work, or even, in the long run, host a conference.

EA megaprojects continued

My guess is that it can help convert non-EAs into people who have roughly EA-aligned objectives, which seems highly valuable! What I mean is that a simple econ degree is enough to produce people who think almost like EAs, so I expect an EA university to be able to do that even better.
