This is a special post for quick takes by Daniel Samuel Polak. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Tax incentives for AI safety - rough thoughts

A number of policy tools aimed at tackling AI risks - such as regulations, liability regimes, or export controls - have already been explored, and most appear promising and worth further iteration.

But AFAIK no one has so far come up with a concrete proposal to use tax policy tools to internalize AI risks. I wonder why, considering that policies such as tobacco taxes, R&D tax credits, and 401(k) plans have been largely effective. Tax policy also seems underutilized and neglected, given that we already possess sophisticated institutions like tax agencies and tax policy research networks.

AI companies' spending on safety measures seems relatively low, and we can expect that if competition intensifies, these expenses will fall even lower.

So I've started to consider more seriously the idea of tax incentives: we could provide a tax credit or deduction for expenditures on AI safety measures - alignment research, cybersecurity, oversight mechanisms, etc. - which would effectively lower their cost. To illustrate: an AI company incurs a safety researcher's salary as a cost, and then 50% of that cost can additionally be deducted from the tax base.
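To make the arithmetic concrete, here is a minimal sketch of how such a "super-deduction" would change a lab's tax bill. All the numbers (a 21% corporate rate, the salary and profit figures) are my own illustrative assumptions, not part of any proposal:

```python
# Illustrative only: a 50% "super-deduction" for qualified AI safety spending.
# The 21% corporate rate and all dollar figures are assumptions for illustration.

CORPORATE_TAX_RATE = 0.21
EXTRA_DEDUCTION_RATE = 0.50  # 50% of safety costs deducted on top of the normal deduction

def tax_due(profit_before_safety_costs: float, safety_spend: float,
            super_deduction: bool) -> float:
    """Tax owed after deducting safety costs (and, optionally, the extra 50%)."""
    tax_base = profit_before_safety_costs - safety_spend  # normal deduction
    if super_deduction:
        tax_base -= EXTRA_DEDUCTION_RATE * safety_spend   # extra deduction
    return max(tax_base, 0.0) * CORPORATE_TAX_RATE

# A lab with $10M profit before a $1M safety-researcher payroll:
baseline = tax_due(10_000_000, 1_000_000, super_deduction=False)
with_incentive = tax_due(10_000_000, 1_000_000, super_deduction=True)
print(baseline - with_incentive)  # tax saving = 50% * $1M * 21% = $105,000
```

So the incentive effectively refunds `EXTRA_DEDUCTION_RATE * CORPORATE_TAX_RATE` (here 10.5%) of every safety dollar - which is exactly why problem 1 below matters: that discount is small relative to the expected value of capability gains.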

My guess is that such a tool could influence the ratio of safety-to-capability spending. If implemented properly, it could help mitigate the competitive pressures affecting frontier AI labs by incentivising them to increase spending on AI safety measures.

Like any market intervention, we can justify such incentives if they correct market inefficiencies or generate positive externalities. In this case, lowering the cost of safety measures helps internalize risk.

However, there are many problems on the path to designing such a tool effectively:

  1. The crucial problem is that the financial benefit from a tax credit can't match the expected value of increased capabilities. The underlying incentives for capability breakthroughs are potentially orders of magnitude larger. So AI labs would simply keep safety spending at the same level while collecting extra money from the incentive - an obvious backfire.
    1. However, if an AI company already plans to increase safety expenses due to genuine concerns about risks or external pressures (boards, the public, etc.), the incentive might make it more willing to do so.
    2. The risk of labs keeping safety expenses at the same level could also be mitigated by requiring a minimum expenditure threshold to qualify for the incentive.
  2. The focus here is on inputs (spending) instead of outcomes (actual safety).
  3. Implementing it would be a pain, requiring specialised departments within the IRS or delegating most of the work to NIST.
  4. Defining the scope of qualified expenditures - it could be hard to separate safety research costs from capabilities research costs, and policing that boundary would be a considerable ongoing administrative cost.
  5. Deadweight loss: if we simply imposed a strict spending requirement instead, the expected expenses could be incurred anyway, without giving up public funds.
  6. There could be a problem of safety washing - AI labs creating the impression, and signalling, that appropriate safety measures are in place and benefiting from the incentives, while not actually reducing the risk.
  7. I don't know much about the US tax system, but I suspect this could overlap with existing R&D tax incentives. However, existing incentives are unlikely to reduce the risk: if they apply to both safety and capabilities research, they don't change the relative cost of safety.
  8. Currently most AI labs are in a loss position, so they can't effectively benefit from such incentives unless some special feature is put in place, like refundable tax credits or the option to carry the credit forward and claim it once they make a taxable profit.
  9. Perhaps direct government financing would be more effective. Or the existing ideas (such as those mentioned earlier) would be more effective, leaving no room for weaker solutions.
  10. Maybe money isn't the problem here, as AI labs are more talent-constrained than cash-constrained. If the main bottleneck for effective safety work is talented researchers, then making safety spending cheaper via tax credits might not significantly increase the amount of high-quality safety work done.
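Points 1.2 and 8 above could in principle be addressed by one mechanism: a credit that only vests above a minimum safety-spend threshold and carries forward while the lab is loss-making. A rough sketch, with every rate and threshold invented purely for illustration:

```python
# Hypothetical safety tax credit with a qualification threshold and carryforward.
# CREDIT_RATE and MIN_SAFETY_SHARE are invented numbers, not a concrete proposal.

CREDIT_RATE = 0.25        # credit = 25% of qualified safety spend
MIN_SAFETY_SHARE = 0.10   # must spend >= 10% of total R&D on safety to qualify

def safety_credit(safety_spend: float, total_rnd_spend: float) -> float:
    """Credit earned this year; zero below the qualification threshold."""
    if total_rnd_spend <= 0 or safety_spend / total_rnd_spend < MIN_SAFETY_SHARE:
        return 0.0
    return CREDIT_RATE * safety_spend

def apply_credit(tax_due: float, credit: float, carried_forward: float) -> tuple:
    """Offset tax with available credit; unused credit carries forward."""
    available = credit + carried_forward
    used = min(tax_due, available)
    return tax_due - used, available - used  # (tax payable, new carryforward)

# Loss-making year: $2M safety spend out of $15M total R&D, no tax due yet.
credit = safety_credit(2_000_000, 15_000_000)        # $500,000 earned
payable, carry = apply_credit(0.0, credit, 0.0)      # nothing usable; all carries forward
# Profitable year later: $400k of tax due, offset by the carried-forward credit.
payable2, carry2 = apply_credit(400_000, 0.0, carry) # payable2 = 0, carry2 = 100,000
```

The threshold answers the "pocket the money without changing behaviour" worry (problem 1.2), and the carryforward answers the loss-position worry (problem 8) - though it does nothing for problems 4 and 6, since everything still hinges on what counts as "safety spend".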

Is there something crucial that I'm missing? Is it worth investigating further? So far it has more problems than potential benefits, so I don't think it's promising, but I'd love to hear your thoughts on it.

What is your greatest achievement? 

Many job offers, competitions, and other application processes require you to state your greatest achievement.

I always have a problem with this one because I'm not goal-oriented. Besides, I don't see any of my results as achievements.

What are some examples of achievements (or even categories of achievements) for an undergraduate or a person starting a career?  

I struggled with a similar question back when I was a student. What I've found is that people asking this usually want to know how the applicant describes their work and approach, and how confident or passionate a person is about the things they do.

One option could be to talk about the most exciting university project/assignment that you've worked on. You could describe something that made it interesting, what you learnt from it, and explain how you handled teamwork or prioritization during it. Interesting results are a plus, but learning experiences also make for a good story.

Other options include some kind of competitive performance, or a hobby project you felt passionate about and dedicated time and energy to. Personally, I would even be happy to hear about something nice you did that helped somebody else. Feel free to be open and explain what made the experience special to you.

People asking this question usually understand that new graduates' achievements don't necessarily involve work projects. So my advice would be to not worry about the context too much.
