
I'm Garrison Lovely, a freelance journalist with work in NYT, Nature, BBC, TIME, and elsewhere. I report on the intersection of economics, geopolitics, and artificial intelligence. I'm also a Reporter in Residence at the Omidyar Network and publisher of Obsolete, a fast-growing Substack on AI.

Overview

What: Support the completion of a forthcoming general-audience nonfiction book on AI. You'll conduct research, draft chapters, edit existing content, and help transform an incomplete manuscript into a polished final product for publication by OR Books and The Nation Magazine.

Why: The project, Obsolete: Power, Profit, and the Race to Build Machine Superintelligence, has the potential to be the go-to AI risk book in the post-ChatGPT era. You’ll also get the opportunity to work closely with an experienced journalist and writer who has a track record of publishing work in leading outlets (NYT, Nature, BBC, TIME, Foreign Policy, etc.).

Start Date: As soon as possible

Employment Type: Fixed term (6 months, with possibility of extension); full time

Location: Remote, provided you have at least three hours of overlap with 10am-6pm ET.

Compensation: $37,485-$65,442 total compensation for the 6-month period, depending on experience and location

Applications will be considered on a rolling basis, but please apply as soon as you can. Priority may be given to applicants who can move through the process sooner.

Apply here

Full job description here

Comments

I can vouch that Garrison's work is solid, high-integrity, and mission-aligned, and I think this is a highly impactful opportunity. I'm also really excited about the book.

Greetings!

I can't apply for the position because I lack expertise in many of the topics you intend to cover (in particular, the links between AI safety and politics, and the notable figures involved in the AI race), but I'd like to offer unpaid help with some technical topics nonetheless:
- Timelines (how and when AGI might arrive)
- How AI could enable authoritarian regimes
- How killer robots could reshape war and the balance of power
More generally, I'd also like to offer the help of CeSIA (the French Center for AI Safety). We have taken part in multiple high-profile efforts, the most successful being a video addressed to the general French public. We are also writing a handbook on AI safety, and we have broad experience producing technical content aimed at a general audience.

- Amaury Lorin
