
The Tarbell Fellowship is accepting applications until November 5th. Apply here.

Key details

  • What: One-year programme for early-career journalists interested in covering artificial intelligence.
  • When: March 2024 → March 2025 (with some flexibility)
  • Benefits: Fellows receive a stipend of up to $50,000, secure a 9-month placement at a major newsroom, participate in a study group covering AI governance & technical fundamentals, and attend a two-week journalism summit in Oxford.
  • Who should apply: We’re interested in supporting people with a deep understanding of artificial intelligence who have the potential to become exceptional journalists. Previous newsroom experience is desirable but not essential.
  • Why journalism? Journalism as a career path is neglected by those interested in reducing risks from advanced AI. Journalists can lead the public debate on AI in important ways, drive engagement with specific policy proposals & safety standards, and hold major AI labs accountable in the public arena.
  • Learn more: Visit our website or sign up for one of our information sessions.
  • Deadline: Apply here by November 5th.

About the Tarbell Fellowship

What is it?

The Tarbell Fellowship is a one-year programme for early-career journalists interested in covering emerging technologies, especially artificial intelligence.

What we offer

  • Stipend: Fellows receive a stipend of up to $50,000 for the duration of their placement. We expect stipends to vary between $35,000 and $50,000 depending on location and personal circumstances.
  • Placement: Fellows secure a 9-month placement at a major newsroom covering artificial intelligence. The Tarbell Fellowship will match each fellow with an outlet; exact details will vary by outlet.
  • AI Fundamentals Programme: Prior to placements, fellows explore the intricacies of AI governance & technical fundamentals through an 8-week course. This programme requires ~10 hours per week and is conducted remotely.
  • Oxford Summit: Fellows attend a two-week journalism summit in Oxford at the beginning of the fellowship. This will be an intensive fortnight featuring guest speakers, workshops, and networking events in Oxford and London. Travel and accommodation costs will be fully covered.

Who should apply

We’re interested in supporting people with a deep understanding of artificial intelligence who have the potential to become exceptional journalists.

In particular, we’re looking for:

  • AI expertise: A deep interest in artificial intelligence and its effects on society. Many fellows will have experience working in tech journalism, machine learning research, or AI governance.
  • Passion for journalism: Previous newsroom experience (including at student newspapers) or comparable experience in a field such as law or research is desirable but not necessary. We seek to support early-career journalists and will prioritise potential over experience.
  • Excellent writing skills: A creative approach to storytelling combined with the ability to quickly get to the heart of a new topic. Fellows must be able to produce compelling copy under tight deadlines.
  • Relentlessness: Journalism is a highly competitive industry. Fellows must be willing to work hard and be capable of withstanding repeated rejection.
  • Open-mindedness: The desire to understand the truth of any given story, even when it conflicts with prior beliefs. Fellows must be open to criticism and willing to recognise ways they can improve their writing and thinking.

If you’re unsure whether you’re a good fit, we encourage you to apply anyway. Alternatively, you can attend one of our upcoming information sessions or email cillian [at] tarbellfellowship [dot] org.

Why consider a career in journalism?

Journalism, as a career path, is neglected by those interested in reducing risks from advanced AI. We believe many more people should consider a career in journalism (and expect to write a more detailed post on this in the near future).

Artificial intelligence is one of the key challenges facing humanity this century, and journalists can lead the public debate here in important ways. As AI systems progress, they could pose a catastrophic risk to humanity. Researchers could lose control over advanced AI systems, malicious actors could weaponize the technology, and a deluge of persuasive misinformation could undermine democracy. These issues receive far less attention than their importance warrants.

The Tarbell Fellowship is creating a community of expert journalists that can focus the collective conversation on the most important risks from AI, drive engagement with specific policy proposals & safety standards, and hold major AI labs accountable in the public arena.

Track record

The Tarbell Fellowship launched in 2023, selecting 7 fellows from a competitive pool of over 950 applicants. Three fellows are currently at TIME, one is at Coda, and another is freelancing with The New Yorker. Following the fellowship, we expect our fellows will go on to work at top news organisations, bringing their expertise to outlets around the world.

Attend an information session

Interested in learning more about the Tarbell Fellowship? Register for one of our information sessions and ask any questions you might have.

How to apply

We expect the application form to take ~1 hour to complete, provided you already have a current CV and writing samples. The application form includes:

  • About you, including your CV.
  • Short essay responses about your suitability for the Tarbell Fellowship.
  • 1–3 writing samples (500–2,000 words) to assess your writing skills.

Apply here by November 5th.

