
What is needed:

I am looking for links to resources to understand if/how Tech(nology) Policy influences the temporal dimension of Public Policy. I am trying to think about questions like:

1. Should the rapid advancement of Tech mean we deprioritize efforts to focus on long-term Public Policymaking?

2. If yes to 1, can having a proper Tech Policy in place help the situation?

3. Does Tech Policy itself become redundant too quickly?


What I have found so far:

1. A book titled "The Half-Life of Policy Rationales: How New Technology Affects Old Policy Issues" by Daniel B. Klein and Fred Foldvary. From a skim of the introductory chapter, it seems to argue that the rapid development of Tech means free-enterprise policy must be the way to go. The authors suggest that Tech makes the market too complex for policymakers to comprehend its dynamics (and hence to make policies), and also that Tech reduces transaction costs, which weakens market-failure arguments and in turn reduces the need for regulation. So it feels like their answer to question 1 is yes.

I am not able to find good critical assessments of this book (either supporting or rejecting its claims). Also, it was written in 2003, and I wonder whether its ideas are still relevant given the Tech improvements since then. (Have the book's ideas themselves reached their half-life?!)

2. A journal article titled "Policy Making for the Long Term in Advanced Democracies" by Alan M. Jacobs. It doesn't talk about Tech at all; it discusses the challenges that politics imposes on long-term policymaking. But it is still interesting to read.


Why it is needed:

I find the arguments for longtermism that I have heard in the EA community quite appealing, so it feels like a neat idea to concentrate on policies that have a long-term effect in order to maximize the good I can do. This would mean addressing things that stand in the way of long-term policymaking. Tech seems like one of those things, and learning Tech Policy a way to regulate Tech's effects. In addition, it feels like knowledge of Tech Policy would equip one to work on existential risk reduction as well. So learning Tech Policy seems like a 'one stone, two mangoes' proposition, and hence I feel motivated to do so.

The operative words in my last paragraph are 'seems' and 'feels'! So I am looking for resources to learn more and to find some evidence to substantiate my current motivation to learn Tech Policy. In fact, I am currently relying on this evidence to help me write an SOP for a short certification course on Tech Policy at a local think tank, which is for me a low-cost way of testing whether I have a good personal fit with this domain.

Thanks for all your responses in advance!
