Wednesday, 24 April 2024


Quick takes

First in-ovo sexing in the US

Egg Innovations announced that they are "on track to adopt the technology in early 2025." Approximately 300 million male chicks are ground up alive in the US each year (since only female chicks are valuable), and in-ovo sexing would prevent this.

UEP originally promised to eliminate male chick culling by 2020; needless to say, they didn't keep that commitment. But better late than never!

Congrats to everyone working on this, including @Robert - Innovate Animal Ag, who founded an organization devoted to pushing this technology.[1]

1. ^ Egg Innovations says they can't disclose details about who they are working with for NDA reasons; if anyone has more information about who deserves credit for this, please comment!
With the US presidential election coming up this year, some of y’all will probably want to discuss it.[1] I think it’s a good time to restate our politics policy.

tl;dr: Partisan politics content is allowed, but will be restricted to the Personal Blog category. On-topic policy discussions are still eligible as frontpage material.

1. ^ Or the expected UK elections.
Ben West recently mentioned that he would be excited about a common application. It got me thinking a little about it. I don't have the technical/design skills to create such a system, but I want to let my mind wander a little bit on the topic. This is just musing and 'thinking out loud,' so don't take any of this too seriously.

What would the benefits be of some type of common application? For the applicant: send an application to a wider variety of organizations with less effort. For the organization: get a wider variety of applicants.

Why not just have the openings posted to LinkedIn and allow candidates to use the Easy Apply function? Well, that would probably result in lots of low-quality applications. Maybe include a few questions to serve as a simple filter? Perhaps a question to reveal how familiar the candidate is with the ideas and principles of EA? Lots of low-quality applications aren't really an issue if you have an easy way to filter them out. As a simplistic example, if I am hiring for a job that requires fluent Spanish, and a dropdown prompt in the job application asks candidates to evaluate their Spanish, it is pretty easy to filter out people who selected "I don't speak any Spanish" or "I speak a little Spanish, but not much."

But the benefit of Easy Apply (from the candidate's perspective) is the ease. John Doe candidate doesn't have to fill in a dozen different text boxes with information that is already on his resume. And that ease can be had in an organization's own application form. An application form can literally be as simple as prompts for name, email address, and resume; that might be the most minimalistic an application form could be while still being functional. And there are plenty of organizations that have these types of applications: companies that use Lever or Ashby often have very simple and easy job application forms (example 1, example 2).

Conversely, the more that organizations prompt candidates to explain "Why do you want to work for us?" or "Tell us about your most impressive accomplishment," the more burdensome it is for candidates. Of course, maybe making it burdensome for candidates is intentional, and the organization believes that this will lead to higher-quality candidates. There are some things that you can't really learn about a candidate by prompting them to select an item from a list.
Maybe EA philanthropists should invest more conservatively, actually

The pros and cons of unusually high risk tolerance in EA philanthropy have been discussed a lot, e.g. here. One factor that may weigh in favor of higher risk aversion is that nonprofits benefit from a stable stream of donations, rather than one that goes up and down a lot with the general economy. This is for a few reasons:

* Funding stability in a cause area makes it easier for employees to advance their careers, because they can count on stable employment. It also makes it easier for nonprofits to hire, retain, and develop talent. This allows both nonprofits and their employees to have greater impact in the long run, whereas a higher but more volatile stream of funding might not lead to as much impact.

* It becomes more politically difficult to make progress in some causes during a recession. For example, politicians may have less appetite for farm animal welfare regulations, and might even be more willing to repeal existing regulations if they believe the regulations stifle economic growth. This makes it especially important for animal welfare orgs to retain funding.
I don't think we have a good answer to what happens after we audit an AI model and find something wrong.

Given that our current understanding of AI's internal workings is at least a generation behind the models themselves, it's not as if we can isolate the mechanism causing certain behaviours. (I would really appreciate any input here; I see little to no discussion of this in governance papers. It's almost as if policy folks are oblivious to the technical hurdles that await working groups.)