
Dear EA Forum Readers,

Legal Impact for Chickens is looking for a passionate and hard-working Operations Specialist to join us as we continue to grow our nonprofit and fight for animals. This is a new position, and you will have the ability to influence our operations and play an important role in our work.

The responsibilities of this position are varied, spanning operational, administrative, and paralegal work, and we will consider candidates with a range of backgrounds and experience. The final job title may therefore differ depending on the candidate we hire.

Want to join us?

 

About our work and why you should join us

Legal Impact for Chickens (LIC) is a 501(c)(3) litigation nonprofit. We work to protect farmed animals.

You may have seen our Costco shareholder derivative suit in The Washington Post, Fox Business, or CNN Business—or even on TikTok.

You also may have heard of LIC from our recent Animal Charity Evaluators (ACE) recommendation.

Now, we’re looking for our next hire: an entrepreneurial Operations Specialist to support us in our fight for animals!

Legal Impact for Chickens is currently a team of three litigators. You will join LIC as our first non-litigator employee and support the entire team.

 

About you

You might be a great fit for this position if you have many of the following qualities:

• Passion for helping farmed animals

• Extremely organized, thoughtful, and dependable

• Strong interpersonal skills

• Interest in the law and belief that litigation can help animals

• Zealous, creative, and enthusiastic

• Excited to build this startup nonprofit!

• Willingness to help with all types of nonprofit startup work, from litigation support to HR to finance

• Strong work ethic and initiative

• Love of learning

• Paralegal experience or certificate preferred, but not required

• Experience with various aspects of operations (such as bookkeeping and IT) preferred, but not required

• Experience growing a new team preferred, but not required

• Kind to our fellow humans, and excited about creating a welcoming, inclusive team

 

About the role

You will be an integral part of LIC. You’ll help shape our organization’s future.

Your role will be a combination of (1) assisting the lawyers with litigation tasks, and (2) helping with everything else we need to do to build and run a growing nonprofit organization!

Since this is such a small organization, you’ll wear many hats:

• Sometimes you’ll act as a paralegal, formatting a table of authorities, performing legal research, or preparing a binder for a hearing.

• Sometimes you’ll act as an HR manager, making sure we have the right trainings and benefits available.

• Sometimes you’ll act as an administrative assistant, tracking expenditures and donations, booking travel arrangements, or helping LIC’s president with email management.

• Sometimes you’ll act as LIC’s front line for communicating with the public, monitoring info@legalimpactforchickens.org emails, thanking donors, or making calls to customer service representatives on LIC’s behalf.

• Sometimes you’ll pitch in on LIC’s communications, social media, and public education efforts.

If you’re the kind of person who likes to handle many different types of work, this role is for you!

This job offers tremendous opportunity for professional growth and the ability to create valuable impact for animals. The hope is for you to become an indispensable, long-time member of our new team.

Commitment: Full time

Location and travel: This is a remote, U.S.-based position. You must be available to travel for work as needed, since we will litigate all over the country.

Reports to: Alene Anello, LIC’s president

Salary: $50,000–$70,000 depending on experience

Benefits: Health insurance (with ability to buy dental), 401(k), life insurance, flexible schedule, unlimited PTO (plus mandatory vacation!), room for advancement as the organization grows

 

One more thing!

LIC is an equal opportunity employer. Women, people of color, and those from other marginalized groups are strongly encouraged to apply. Applicants will receive consideration without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, ancestry, citizenship status, disability, age, medical condition, veteran status, marital status, political affiliation, or any other protected characteristic.

 

To Apply

To apply, please email your cover letter and resume, combined as one PDF, to info@legalimpactforchickens.org.

 

Thank you for your time and your compassion!

Sincerely,

Legal Impact for Chickens
