[TO POLICYMAKERS]Trying to align very advanced AIs with what we want is a bit like trying to design a law or regulation to constrain massive companies, such as Google or Amazon, or powerful countries, such as the US or China. You know that when you put a rule in place, they will have enough resources to circumvent it. And no matter how hard you try, if you didn't design the AI properly in the first place, you won't be able to get it to do what you want.
[TO ML RESEARCHERS AND MAYBE TECH EXECUTIVES]When you look at society, you can observe that many of our structural problems come from strong optimizers.
Now, even these optimizers that are hard to fight against are very limited in their capabilities: they're limited by coordination costs, by their limited ability to forecast, and by their limited ability to process relevant information. AI risks breaking down these barriers and optimizing much more strongly. Thus, the feeling you may have when facing these companies and policymakers, i.e. that you can't stop them even when you know how they're cheating, will be multiplied tenfold when facing smarter AIs.
Hi Lauren! Thanks for the post!
Did you come across any literature on civil wars and life satisfaction? I expect the effect of civil wars on the latter to be significant, so I'd be curious to know if there are any estimates.
Monitoring Nanotechnologies and APM

Nanotechnologies, and a catastrophic scenario linked to them called "grey goo", have received very little attention recently (more information here), even though the field keeps moving forward and some think it's one of the most plausible paths to human extinction. We'd be excited for a person or an organization to closely monitor the evolution of the field and produce content on how dangerous it is. Knowing whether or not there are actionable steps that could be taken now would be very valuable for both funders and researchers in the longtermist community.
Making Impactful Science More Reputable

Two things matter in science: reputation and funding. While there is more and more funding available for mission-driven science, we'd be excited to see projects that try to increase the reputation of impactful science. We think that raising the reputation of impactful work could, over time, substantially increase the amount of research done on most things that society cares about.
Some of the ways we could provide more reputation to impactful research:
Does anyone know where to buy potassium iodide tablets? I can't find any working online seller that isn't out of stock.
Epistemic status: I just read a Twitter thread on this and found the idea interesting, so I'm sharing it here. It's still very plausible that I'm wrong.
I feel like it might be worth considering this: https://www.crowdfunder.co.uk/p/give-russians-real-news-about-ukraine-using-ads

It's basically an ad campaign to raise awareness among Russians about what is happening in Ukraine. The project seems committed to a high degree of transparency. I'd say there are two main plausible paths to impact:
- The direct impact of the campaign. In this context, the marginal value of information might be high: if many people update (which is plausible), it might put pressure on Putin and lead him to lower his expectations in Ukraine.
- The informational impact. I'd be curious to have detailed data on this kind of ad campaign. It hasn't been tried much yet, and I wonder how effective it could be. Given the announced transparency of this project, it could give us a better idea of its effectiveness (via the click rate, for instance). That said, it might be hard to track its actual impact on Russia's policy, so my guess is that it would be very informative only if it turns out to be either very effective or very ineffective.
Epistemic status: I'm not very confident about this idea because I only started thinking about it this morning, so I'd be happy to hear your thoughts.

I wonder if a small group of smart hackers couldn't do huge things to help Ukraine against Russia. Anonymous tends to carry out massive, not-very-smart attacks (basically huge DDoS), and given their results, there might be room for very smart interventions that truly make a difference to Russia's ability to win this war (either helping Ukrainians, via information or protection against cyber attacks, or attacking Russian logistics or information systems, for instance).

The rationale for engaging in these cyber activities:
- I feel like anything that does not harm Russian civilians and makes the invasion of Ukraine harder is positive, because it might have very important long-run effects: if Russia comes out of this conflict much weaker than when it began, that's a major disincentive against future full-scale invasions, which is very positive.

The downsides:
- If your location is identifiable and you're able to do significant things, it might make diplomatic relations between your country and Russia even worse.
- It might be risky if you want to travel to Russia at some point.
Some more ideas related to what you mentioned:
My guess is that it can help convert non-EAs into people with roughly EA-aligned objectives, which seems highly valuable! What I mean is that a simple econ degree is enough to produce people who think almost like EAs, so I expect an EA university to be able to do that even better.