The OECD is currently hiring for a few potentially high-impact roles in the tax
policy space:
The Centre for Tax Policy and Administration (CTPA)
* Executive Assistant to the Director and Office Manager (closes 6th October)
* Senior Programme Officer (closes 28th September)
* Head of Division - Tax Administration and VAT (closes 5th October)
* Head of Division - Tax Policy and Statistics (closes 5th October)
* Head of Division - Cross-Border and International Tax (closes 5th October)
* Team Leader - Tax Inspectors Without Borders (closes 28th September)
I know less about the impact of these other areas, but these look good:
Trade and Agriculture Directorate (TAD)
* Head of Section, Codes and Schemes - Trade and Agriculture Directorate
(closes 25th September)
* Programme Co-ordinator (closes 25th September)
International Energy Agency (IEA)
* Clean Energy Technology Analysts (closes 24th September)
* Modeller and Analyst – Clean Shipping & Aviation (closes 24th September)
* Analyst & Modeller – Clean Energy Technology Trade (closes 24th September)
* Data Analyst - Temporary (closes 28th September)
Financial Action Task Force
* Policy Analyst(s), Anti-Money Laundering & Combatting Terrorist Financing
Together with a few volunteers, I prepared a policy document for the Campaign
for AI Safety to serve as the campaign's list of demands.
It is called "Strong and appropriate regulation of advanced AI to protect
humanity". It is currently geared towards Australiand and US policy-makers, and
I think it's not its last version.
I would appreciate any comments!
Hey - I’d be really keen to hear people's thoughts on the following
career/education decision I'm considering (esp. people who think about AI a
lot):
* I’m about to start my undergrad studying PPE at Oxford.
* I’m wondering whether re-applying this year to study CS & philosophy at
Oxford (while doing my PPE degree) is a good idea.
* This doesn’t mean I have to quit PPE or anything.
* I’d also have to start CS & philosophy from scratch the following year.
* My current thinking is that I shouldn’t do this - I think it’s unlikely that
I’ll be sufficiently good to, say, get into a top 10 ML PhD or anything, so
the technical knowledge that I’d need for the AI-related paths I’m
considering (policy, research, journalism, maybe software engineering) is
either pretty limited (the first three options) or much easier to self-teach
and less reliant on credentials (software engineering).
* I should also add that I’m currently okay at programming anyway, and plan
to develop this alongside my degree regardless of what I do - it seems like
a broadly useful skill that’ll also give me more optionality.
* I do have a suspicion that I’m being self-limiting re the PhD thing - if
everyone else is starting from a (relatively) blank slate, maybe I’d be on
equal footing?
* That said, I also have my suspicions that the PhD route is actually my
highest-impact option: I’m stuck between 1) deferring to 80K here, and 2)
my other feeling that enacting policy/doing policy research might be
higher-impact/more tractable.
* They’re also obviously super competitive, and seem to only be getting
more so.
* One major uncertainty I have is whether, for things like policy, a PPE degree
(or anything politics-y/economics-y) really matters. I’m a UK citizen, and
given the record of UK politicians who did PPE at Oxford, it seems like it
might?
What mistakes am I making here? Am I being too self-limiting?
The following is an assignment I submitted for my Cyber Operations class at
Georgetown, regarding the risk of large AI model theft and what the US
Cybersecurity and Infrastructure Security Agency (CISA) should/could do about
it. Further caveats and clarifications in footnote.[1] (Apologies for formatting
issues)
-------------
Memorandum for the Cybersecurity and Infrastructure Security Agency (CISA)
SUBJECT: Supporting Security of Large AI Models Against Theft
Recent years have seen a rapid increase in the capabilities of artificial
intelligence (AI) models such as GPT-4. However, as these large models become
more capable and more expensive to train, they become increasingly attractive
targets for theft and could pose greater security risks to critical
infrastructure (CI), in part by enhancing malicious actors’ cyber capabilities.
Rather than strictly focusing on the downstream effects of powerful AI models,
CISA should also work to reduce the likelihood (or rapidity) of large AI model
theft. This memo will explain some of the threats to and from powerful AI
models, briefly describe relevant market failures, and conclude with
recommendations for CISA to mitigate the risk of AI model theft.
There are Strong Incentives and Historical Precedent for China and Other Actors
to Steal AI Models
There are multiple reasons to expect that hackers will attempt to exfiltrate
large AI model files:
1. Current large models have high up-front development (“training”)
costs/requirements but comparatively low operational costs/requirements
after training.[1] This makes theft of AI models attractive even for
non-state actors and distinct from many instances of source code theft.[2]
Additionally, recent export controls on semiconductors to China could
undermine China’s ability to develop future large models,[3] which would
further increase Beijing’s incentive to steal trained models.
2. China and other actors have repeatedly stolen sensitive data
Nutrition science is actually doing just fine (quality research has been
consistent and stood the test of time), and the pervasive distrust is due to a
massive disinformation campaign by the beef industry. Evidence? See
youtube.com/@PlantChompers
Don't be put off by the name - watch any video and see Chris Macaskill's love of
reference checking, breaking down complex concepts, taking a wider perspective
on nutrition history, and entertaining storytelling. I used to think the
research was all shoddy too - now I see exactly where that comes from, and the
harm that incautious skepticism can bring.
As evidence increases for the cognitive effects of poor air quality
(https://patrickcollison.com/pollution), there may be initial opportunities for
extra impact by prioritizing monitoring and improving air quality in important
decision-making buildings: government buildings, headquarters, etc.
Hey guys! I work in Politics and Economic Policy here in London. I’m going to
San Fran for the first time ever - where are the best places to go and who are
some great people to meet?
Thanks in advance!
Decentralizing speed cameras
What if anyone could buy a certified speed camera that could be installed on a
building or a moving car?
Wouldn't it instantly solve all traffic violations, and so massively reduce the
death toll, if the hardware is cheap enough - which is probably the case in 2023?