I thought this might be of interest to some people here. From the link:

The National Science Foundation (NSF), through the Directorate for Technology, Innovation and Partnerships (TIP), is launching a new program on Assessing and Predicting Technology Outcomes (APTO) to assess how investments in science and technology research and development will contribute to specific outcomes for the Nation. The APTO program will support a cohort of projects that will work together to complement each other's research and development (R&D) efforts on technology outcome models to accurately describe three types of technology outcomes: technology capabilities, technology production, and technology use. These models should be able to predict future as well as past states of technology outcomes. Of particular interest are prediction models that are generalizable across multiple technology areas. The outcome of this work will help assess and evaluate the effectiveness of U.S. R&D investments and generate information that decision makers could use to strategize and optimize investments for advancing long-term U.S. competitiveness into the future.

The APTO program serves the TIP directorate's need for technology assessment to understand where the U.S. stands — as a whole and in individual regions — vis-à-vis competitiveness in the key technology focus areas named in Sec. 10387 of the CHIPS and Science Act. TIP is interested in answers to the question of which science and technology investments would offer the greatest impact in the key technology focus areas and would be essential to the long-term national security and economic prosperity of the United States. As a key aspect of TIP's technology assessment activity, the APTO program will bring together multidisciplinary teams to help develop the data, intellectual foundations, and analytics necessary to inform decision making.

The research community has accumulated important insights about the "rate and direction of inventive activity" [1] as an aggregate economic good, and about what decision makers can do to increase the overall production of that good. Meanwhile, industry has immense experience with creating specific technologies and planning how to reach intended technology outcomes over periods of several years. The APTO program aims to expand on this knowledge base spanning academia and industry to better understand and predict the long-term evolution of specific technologies over a period of a few years to decades, and specifically model how intentional, purposeful investments can change that evolution.

APTO will fund research and development of causal models that accurately describe past and future technology outcomes, specifically the capabilities, production, and use of specific technologies. By correctly capturing the relevant causal relationships, these models should be able to predict likely future outcomes for specific technologies and identify which intentional investments could reliably change or accelerate those outcomes. Building and testing these models will require significant amounts of specialized data gathered from a variety of sources, e.g., historical records, experimentation, and expert elicitation. Data extraction and processing tools may need to be developed as part of that effort.

APTO will support a cohort of projects that will work in collaboration on research and development of Technology Outcome Models and in development/preparation of Data Sets and related Tools.

1 "The Rate and Direction of Inventive Activity", edited by R. R. Nelson, 1962. Princeton, NJ: Princeton University Press.

The preliminary proposal deadline is August 21.
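To make the "technology outcome model" idea concrete, here is a minimal sketch of one possible approach: fitting an experience-curve (Wright's-law-style) relationship between cumulative R&D investment and a capability metric, then using it to postdict past states and explore investment counterfactuals. This is purely illustrative; the solicitation does not prescribe a model form, and all data, names, and the power-law assumption below are hypothetical.

```python
# Toy "technology outcome model" sketch: capability as a power law in
# cumulative R&D investment (an experience-curve assumption, not APTO's).
# All numbers and names here are illustrative, not from the solicitation.
import numpy as np

# Hypothetical historical data: cumulative investment ($M) and an observed
# capability metric (e.g., performance per dollar) for one technology area.
cum_investment = np.array([10.0, 25.0, 60.0, 140.0, 320.0, 700.0])
capability     = np.array([1.0, 1.6, 2.7, 4.4, 7.1, 11.0])

# Fit log(capability) = b * log(investment) + log(a), i.e. capability = a * I^b.
b, log_a = np.polyfit(np.log(cum_investment), np.log(capability), 1)
a = np.exp(log_a)

def predict_capability(total_investment: float) -> float:
    """Predict (or postdict, for past levels) the capability metric at a
    given cumulative investment level under the fitted power law."""
    return a * total_investment ** b

# Counterfactual: what might an extra $500M of targeted investment buy?
baseline = predict_capability(700.0)
boosted = predict_capability(700.0 + 500.0)
print(f"learning exponent b = {b:.2f}")
print(f"capability at baseline: {baseline:.1f}, with extra investment: {boosted:.1f}")
```

A real APTO model would have to go well beyond a sketch like this: it would need to capture genuine causal structure rather than a fitted correlation, and to cover technology production and use as well as capabilities.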
