
Decreasing focus over time may not mean decreasing productivity:

Suppose you want to double your productivity by doubling your work hours from 30 to 60 per week. Standard advice says this is silly, since focus decreases over time: you may still increase your productivity, but it will scale slower than your work hours.

But this assumes all assigned work is equally important. In reality, many jobs have peripheral tasks that must be done before your core tasks (or your "real work"). Civil servants have reporting requirements, academic researchers have teaching obligations, and individual contributors everywhere have to attend meetings so managers can coordinate direction. 

Suppose the non-core tasks take 20 hours per week. Then going from a 30-hour to a 60-hour workweek isn't just doubling your core task hours; it's quadrupling them, from 10 to 40! And that quadrupling of core task hours can outweigh the diminishing focus over time. It can even mean that the last 20 hours are more productive than the first 20 hours.
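To make the arithmetic concrete, here's a minimal sketch in Python. The linear focus decay and the exact numbers are my own illustrative assumptions, not a claim about any real job:

```python
# Toy model: focus declines linearly with cumulative weekly hours,
# and the first 20 hours of each week go to peripheral tasks.
# Both assumptions are illustrative, chosen only to show the
# shape of the argument.

PERIPHERAL_HOURS = 20  # reporting, meetings, teaching, etc.

def focus(hour):
    """Focus multiplier during a given hour of the week:
    1.0 at hour 0, declining linearly to 0.4 at hour 60."""
    return 1.0 - 0.01 * hour

def core_output(total_hours):
    """Core-task output for the week, assuming peripheral
    tasks are cleared before any core work starts."""
    return sum(focus(h) for h in range(PERIPHERAL_HOURS, total_hours))

print(core_output(30))  # 10 core hours -> ~7.6 units
print(core_output(60))  # 40 core hours -> ~24.2 units, over 3x

# The first 20 hours are all peripheral, so they produce zero core
# output, while hours 40-59 produce ~10.1 units despite lower focus.
```

Even with focus falling the whole time, doubling total hours more than triples core output in this toy model, because the fixed peripheral block stops diluting the marginal hour.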

Now, 20 hours of peripheral tasks is admittedly an extreme example. But it may not be far off as a model of career advancement. Promotions are based partly on stretch assignments (or "performing above your level"), and you won't get to work on stretch assignments all the time. Managers may split your time between your current job and the job you want to be promoted into.

Once you get to a certain level of seniority and organizational maturity, more of your hours become core task hours, so diminishing focus translates more directly into diminishing productivity. But I think the earlier you are in your career, the more exploration you're doing, and the further you are from your target job, the more likely you'll want those extra hours.

At face value, what you've written makes sense. It depends massively on the structure of the job, of course. But if you have a flat amount of non-core tasks and a much larger pool of core tasks, then... yeah, you are right.

I think that one of the challenges is that many of the non-core tasks require a lot of context, so you can't just outsource them or hire an administrative assistant to handle them. They tend to be tightly tied to other tasks, or at least to your thinking and knowledge (such as a civil servant who has to write a report on what work she has accomplished this week). And I imagine there are some scenarios where it would be feasible to outsource the task but organizational rules and culture don't allow it (such as an academic researcher delegating all teaching responsibilities to someone else). And if I need to communicate information verbally to several people, could they all send their assistants to attend the meeting instead of coming themselves? I suppose they could, but that means that when we make a decision the assistants need to be empowered with decision-making authority, and an assistant's judgement and context might be inferior to the manager's.

My reaction to this line of thinking is basically that there are lots of challenges, impracticalities, and situational details that make it not very feasible. But for situations that don't have many of those barriers: yeah, you are right. 👍

Strong agree. By no means am I suggesting organizations outsource or cancel more of their non-core work. It’s hard for organizations to define those tasks, non-core work needs a lot of context, and a lot of grunt work is genuinely “real work” that people don’t appreciate.

But from an individual POV, I wanted to make sense of the feeling that extra hours could sometimes be increasing in value even when I was very tired. And I think it’s this dynamic: with some tasks or career goals, the last N% is where most of the rewards are, so spending more time once you get there is a big deal.

I believe Claudia Goldin calls these “greedy jobs”.

Personal reasons why I wish I had delayed donations: I started donating 10% of my income about 6 years back, when I was making Software Engineer money. Then I delayed my donations when I moved into a direct work path, intending to make up the difference later in life. I don't have any regrets about 'donating right away' back then. But if I could do it all over again with the benefit of hindsight, I would have delayed most of my earlier donations too.

First, I've been surprised by 'necessary expenses'. Most of my health care needs have been in therapy and dental care, neither of which is covered much by insurance. On top of that, friend visits cost more over time as people scatter to different cities, meaning I'm paying a lot more for travel. And family obligations always manage to catch me off guard.

Second, career transitions are expensive. I was counting on my programming skills and volunteer organizing to mean a lot more in public policy and research. But there are few substitutes for working inside your target field. And while everyone complains about Master's degrees, they're still heavily rewarded on the job market, so I ultimately caved and paid for one.

Finally, I'm getting a lot more from 'money right away' these days. Thanks to some mental health improvements, fancy things are less stressful and more enjoyable than before. The extra vacation, concert, or restaurant is now worth it, so my optimal spending level has increased. That's not just for enjoyment: my productivity also improves after that extra splurging, whereas before there wasn't much difference between the relaxation I got from a concert and from a series of YouTube comedy skits.

If I had to find a lesson here, it's that I thought too much about my altruistic desires changing and not enough about everything else changing. I opted to 'donate right away' to protect against future me rebelling against effective charity, worrying about value drift and stories of lost motivation. In practice, my preference for giving 10% has been incredibly robust. My other preferences have been a lot more dynamic.

Project-based learning seems to be an underappreciated bottleneck for building career capital in public policy and non-profits. By projects, I mean subjective problems like writing policy briefs, delivering research insights, lobbying for political change, or running community events. These have subtle domain-specific tradeoffs without a clean answer. (See the Project Work section in On-Ramps Into Biosecurity.)

Thus the lessons can't be easily generalized or made legible the way a math problem can be. With projects, even the very first step of identifying a good problem is tough. Without access to a formal network, you can spend weeks on a dead end and only realize your mistake months or years after the fact.

This constraint seems well-known to professionals in the network: organizers of research fellowships like SERI MATS describe their programs as valuable and highly in-demand, yet constrained in how many people they can train.

I think operations best shows the surprising importance of domain-specific knowledge. The skill set looks similar across fields, which would imply some interchangeability between the private sector and the social sector. But in practice, organizations want you to know their specific mission very well, and they're willing (correctly or incorrectly) to hire a young Research Assistant over, say, someone with 10 years of experience at a Fortune 500 company. That domain knowledge helps you internalize the organization's trade-offs and prioritize without using too much senior management time.

Emphasizing this mechanism of supervised, project-based learning for building domain-specific career capital would clarify a few points.

  • With school, it would
    • emphasize that textbook knowledge is necessary but insufficient for contributing to social sector work
    • show the benefits of STEM electives and liberal arts fields, where the material is easier from a technical standpoint but you work on open-ended problems
    • illustrate how research-based Master's degrees in Europe tend to be better training than purely coursework-based ones in the US (IMHO, true in Economics)
  • With young professionals, it would
    • highlight the "Hollywood big break" element of getting a social sector job, where it's easier to develop your career capital after you get your target job and get feedback on what to work on (and probably not as important before that)
    • formalize the intuition some people have about "assistant roles in effective organizations" being very valuable even though you're not developing many hard skills
  • With discussions on elitism and privilege, it would