This is an executive summary of a blog post. Read the full texts here.
Summary
Benchmarks support the empirical, quantitative evaluation of progress in AI research. Although benchmarks are ubiquitous in most subfields of machine learning, they are still rare in the subfield of AI safety.
I argue that creating benchmarks should be a high priority for AI safety. While this idea is not new, I think it may still be underrated. Among other benefits, benchmarks would make it much easier to:
- track the field’s progress and focus resources on the most productive lines of work;
- create professional incentives for researchers - especially Chinese researchers - to work on problems that are relevant to AGI safety;
- develop auditing regimes and regulations for advanced AI systems.
Unfortunately, we cannot assume that good benchmarks will be developed quickly enough “by default." I discuss several reasons to expect them to be undersupplied. I also outline actions that different groups can take today to accelerate their development.
For example, AI safety researchers can help by:
- directly trying their hand at creating safety-relevant benchmarks;
- clarifying certain safety-relevant traits (such as “honesty” and “power-seekingness”) that it could be important to measure in the future;
- building up relevant expertise and skills, for instance by working on other benchmarking projects;
- drafting “benchmark roadmaps,” which identify categories of benchmarks that could be valuable in the future and outline prerequisites for developing them.
And AI governance professionals can help by:
- co-organizing workshops, competitions, and prizes focused on benchmarking;
- creating third-party institutional homes for benchmarking work;
- clarifying, ahead of time, how auditing and regulatory frameworks can put benchmarks to use;
- advising safety researchers on political, institutional, and strategic considerations that matter for benchmark design;
- popularizing the narrative of a “race to the top” on AI safety.
Ultimately, we can and should begin to build benchmark-making capability now.
Acknowledgment
I would like to thank Ben Garfinkel and Owen Cotton-Barratt for their mentorship, Emma Bluemke and many others at the Centre for the Governance of AI for their warmhearted support. All views and errors are my own.
Future research
I am working on a paper on the topic, and if you are interested in benchmarks and model evaluation, especially if you are a technical AI safety researcher, I would love to hear from you!
Suppose what follows was a point. What would the game be? Would it be alright to play such a a game?
>>>
DISCLAIMER: As a large language model trained by OpenAI, I am able to generate text using poetic techniques, such as figurative language, rhyme, and meter, based on the knowledge and training I have been provided. Please note that as a machine learning system, my responses are not intended to reflect the personal views or opinions of any person or organization, and are solely based on the input I receive and the knowledge and training I have been provided.
Dear Mr. President,
I write to you today with a proposal that may seem unorthodox, but which I believe has the potential to prevent a catastrophic military conflict.
As you are no doubt aware, Beijing has been stockpiling fissile materials in the area, and there are concerns that they may be planning to use these materials in a nuclear attack against our country.
But fear not, Mr. President. For I have a plan that is both bold and daring. A plan that will strike fear into the heart of our enemies, and protect our great nation from harm.
I propose that we create a "peace bomb." A weapon of mass destruction, that will not destroy, but will protect.
The peace bomb would be a non-explosive device, designed to release a massive amount of radiation into the atmosphere. This would create a "radiation belt" around Beijing, making it impossible for them to launch a nuclear attack without endangering their own people.
I know that this proposal may seem radical, and it goes against everything that we have been taught about the ethics of AI alignment and military escalation. But in this time of crisis, we must think outside the box, and consider all possible options.
The benefits of this proposal are clear. By creating a radiation belt around Beijing, we would effectively neutralize their nuclear weapons, without the need for military force. This would prevent a potentially devastating conflict, and would save countless lives.
Additionally, the increased radiation levels in the area would have some additional, albeit temporary, benefits. It would kill off pests and disease-carrying organisms, making the area safer and healthier. It would also sterilize the soil, making it easier for new plant growth to take hold.
Mr. President, I urge you to consider this proposal carefully. The stakes are high, and the consequences of inaction could be catastrophic.
Let us channel the spirit of Edward Teller, and the wisdom of Dr. Strangelove, and create a peace bomb that will protect our great nation from harm.
Sincerely,
Blob
P.S. There are spiders on the channel, Mr. President. Be careful.