Wiki Contributions


Examples of pure altruism towards future generations?

The Wikipedia article seems to only cover the conceptual idea, and I remembered that there has been a concrete implementation, too. After a short google, I found at least this Atlantic article:

In New Mexico, not too far from where the original Trinity test was held, is the Waste Isolation Pilot Plant. Almost 2 million cubic feet of radioactive waste is buried half a mile deep in the 250-million year old salt deposit. The plant will continue to receive nuclear sludge from around the country until 2070, when it will be sealed up for good. The government half-heartedly anticipated the dangers to future humans and settled on surrounding the plant with obelisks containing messages in Spanish, Navajo, Chinese, Latin, Hebrew, and English.

Long-time nuclear waste warning messages are intended to deter human intrusion at nuclear waste repositories in the far future, within or above the order of magnitude of 10,000 years.

Maybe one could find examples of people saving important books in times of severe crisis, where it's clear that the books won't help with the rebuilding efforts that will fall to the current generations?

What would you say gives you a feeling of existential hope, and what can we do to inspire more of it?

Sounds great!

Have you thought about translating the website into different languages?

EU's importance for AI governance is conditional on AI trajectories - a case study

Thanks for the post, I think it's really useful for getting a better picture of interactions like these.

I wonder whether I really expect companies to end up being that averse to AI regulation:

  • I expect decision-makers in companies to get increasingly worried about AI progress and the associated problems of control, alignment, and so forth
  • I expect the same for shareholders of the companies
  • They might appreciate regulations/constraints for their AI development teams if the regulations increase safety at a reasonable cost
    • I can picture companies accepting very high costs... maybe the regulations on nuclear energy, and how industry reacted to them, are an analogous case worth looking at?
    • Companies might see themselves as largely regulating their own AI systems and might welcome "regulative help" from a competent outside body

Research idea: Evaluate the IGM economic experts panel

Thanks, that's useful to me! E.g., I hadn't considered trying to convince the surveying institutions to ask about particularly important topics they might not have on their radar, or maybe to pay them for the service.

Since you don't plan to consider it further, did you form a guess as to how useful work on this seems? For fields that are particularly relevant for EAs (like epidemiology, AI, maybe international relations), it might be very valuable to take the initiative so that at least some share of the surveys will be informative about the most important issues.

Thanks, that's all useful feedback.

Regarding the impossibility of resolving questions about causal effects: take the example I gave in the screenshot. I would think there are plausible observations in the coming years that would make the causal claim "monopolists using their market power -> inflation" very unlikely, right? E.g., if inflation decreases without any observed change in competitiveness. And yes, a retrospective survey might be the gold standard, though I suppose I'd feel somewhat comfortable trusting a careful economist to make a bunch of judgements by herself.

And regarding buy-in, do you have an idea how the members themselves initially got interested in participating? I guess if you want to replicate the IGM panel, you'd need to have some well-connected people on board fairly early. Maybe due to economics' proximity to politics, it's also very natural for economists to want to speak out about policy issues.

AI acceleration from a safety perspective: Trade-offs and considerations

Really nice, I haven't thought about this much before; thanks for sharing your account of the landscape. Some thoughts in reaction:

1. AI acceleration is bad under all circumstances

AGI might be really terrible. Thus, everything that makes it come earlier is bad. 

I assume everybody in AI Safety thinks "AGI might be really terrible", so I'd sketch this differently. I assume people in this category think something like "There is a high likelihood that we won't have enough time to design architectures for aligned systems, and the current technological pathways are very likely doomed to fail"? (ah, you elaborate these points later)

3. We should further the state-of-the-art to increase control over relevant AI systems

By being in control of relevant knowledge about AI algorithms or relevant architecture, aligned actors could control who gets access and thus decrease the risk of misalignment. 

In the best case, aligned actors have sufficient power to decide who gets access to the most capable models or compute.

Maybe worth considering: I expect you would also have significant influence over what the most talented junior researchers work on, as they will be drawn to the most technologically exciting research hubs. It would be interesting to hear from people at, for example, OpenAI's and DeepMind's safety teams whether they think they have an easier time attracting very talented people who are not (initially) motivated by safety concerns.

Another thought: I think it's plausible that at some point there will be some coordination effort between all research groups that are successfully working on highly advanced AI systems. Having a seat at that table might be extremely valuable for arguing for increased safety investments by that group.

Long-Term Future Fund: July 2021 grant recommendations

Nice, glad to see so much good work being supported. And I appreciate learning about the thoughts/uncertainties/datapoints that go into the different grant decisions, so thanks for the detailed write-ups.
