
UPDATE 07/15/2023: Applications are now closed. If you still want to apply, I will consider you for future openings, which may come as soon as a few months from now. Alternatively, I could offer you a lower initial salary if you'd like to join sooner.

Rational Animations is hiring. If you would like to apply, reach out to rationalanimations@gmail.com.

The more material you include, the easier it is for me to evaluate you. The hiring process will probably consist of a short interview and a period you should consider a trial, even though it will be paid at full salary.

AI Safety scriptwriter

At the moment, Rational Animations has two consistently active writers, and the only one doing AI Safety scripts is me. I'd like to have more slack and be able to publish a lot of high-quality AI Safety explainers in the coming months and years. Our animation team is growing, and we will be able to publish videos much faster. 

Pay: 50-150 USD per hour. If I offer you 150 per hour, I'll limit your hours to 10 per week, at least for the next few months. That said, I'm looking for someone who can put in consistent effort. At any given time, most scriptwriters at Rational Animations are inactive, and I'm 100% OK with that, but I'm also looking for someone I can consistently rely on.

What may help you land the job:

  1. Having written good stuff about AI Safety helps enormously.
  2. Mathematical education helps.
  3. AI education helps.
  4. AI Safety education (e.g., having studied the AGISF curriculum) helps.
  5. Surely other things I have yet to think of.

I may hire more than one person if many good candidates apply. I may also decide to hire no one.

Bonus: I expect people in this role to be able to write scripts for shorts about AI Safety (more below).

Lead community manager

Soon, we will include many more calls to action in our videos to increase our impact. But that is only one approach. Another option I'm excited about is building our own spaces where the channel's most hardcore followers can go much deeper into our topics. These spaces include, but may not be limited to:

  • Discord (high priority)
  • Reddit (high priority)
  • Twitter
  • Instagram

Your job consists of developing strategies to pursue this objective and taking action on them. The outcome you should aim for is twofold: help people land in those spaces, and curate the spaces so that users learn a lot.

AI Safety is becoming our highest-priority topic, so the most important thing you'll do is help people skill up in that subject. Here's an example of how I picture the funnel:

1. We make videos about AI Safety.
2. People land on our Discord server.
3. You help the most interested people skill up by talking with them and linking resources.
4. You help them stay accountable if they decide to embark on a learning journey by, e.g., organizing book clubs, weekly meetings in the style of AGISF, etc.

You will also manage a small team comprising two artists and a moderator. They will help you achieve your objectives for our social media and online spaces.

Pay: 25-50 USD per hour. More for exceptional candidates. At 50 per hour, I may have to limit your hours for a while, but probably not more than five months, and possibly not at all.

Bonus: I expect that people in this role may also be able to write scripts for shorts about AI Safety (more below).

Scriptwriter for AI Safety shorts

We want to experiment with shorts. A lot has been happening in AI Alignment and AI Governance lately, and we can't keep our audience informed about these events through long-form videos alone. Shorts are the perfect medium for this: keeping up with current events in AI Alignment and AI Governance is easier than writing long-form explainers about them. If you can do this job well and feel excited about it, apply!

General Inquiry

Could you be of help in any other way? Let Rational Animations know!

Deadline

At the moment, there is no set deadline! If I still haven't updated this post to indicate that applications are closed, I will review your submission. I may select a few candidates within the next two to four weeks, but I may also consider additional applications later. I will make sure to keep this post current with any updates.

Comments (4)



An endorsement for Rational Animations:

It's a wonderful channel with great videos, and over 150,000 subscribers. I've assigned many of their videos in my 'Psychology of Effective Altruism' class. Some of their videos get over a million views.

In terms of outreach and impact, I think that working as a writer for Rational Animations could be a very good job for somebody with the right skills -- probably higher impact than many EA think tank research jobs.

Hey, I don't know if this is [totally not what you're looking for] or [an amazing fit]... anyway, consider reaching out to Dagan Shani, who made this

Are you looking for someone to do voiceovers for it? I have experience doing VO work for a few informational YouTube channels.

No, but we'll need more than one voice actor for some videos. We'll consider you for those occasions if you send us your portfolio.
