Quick takes


Is there any possibility of the forum having an AI-writing detector in the background which perhaps only the admins can see, but could be queried by suspicious users? I really don't like AI writing and have called it out a number of times but have been wrong once. I imagine this has been thought about and there might even be a form of this going on already.

In saying this, my first post on LessWrong was scrapped because they identified it as AI-written, even though I have NEVER used AI in online writing, not even for checking/polishing. So that system obviously isn't perfect.

Lorenzo Buonanno🔸
I'm surprised to read this; can you check your post on https://www.pangram.com/ ?
NickLaing
It wasn't a very well-written comment; it was a bit benign and generic, which is maybe why it got flagged. To their credit, though, they reinstated it. Here it is below:

"This seems to be a nice observational study which analyses already-available data, with an interesting and potentially important finding. They didn't do "controlling" in the technical sense of the word; they matched cases and controls on 40 baseline variables in the cohort, with "demographics, 15 comorbidities, concomitant cardiometabolic drugs, laboratories, vitals, and health-care utilization".

The big caveat here is that these impressive observational findings often disappear, or become much smaller, when a randomised controlled trial is done. Observational studies can never prove causation. Usually that is because there is some silent feature about the kind of people that use melatonin to sleep that couldn't be matched for or was missed in the matching. A speculative example here could be that some silent, unknown illness caused people to have poor sleep, which led to melatonin use. Also, what if poor sleep itself led to poor cardiovascular health, not the melatonin itself?

This might be enough initial data to trigger a randomised placebo-controlled trial of melatonin. It might be hard to sign up enough people to detect an effect on mortality, although a smaller study could still at least pick up whether melatonin caused cardiovascular disease.

I agree with their conclusion, which I think is a great takeaway: "These findings challenge the perception of melatonin as a benign chronic therapy and underscore the need for randomized trials to clarify its cardiovascular safety profile.""

[Screenshots: the Pangram result and the LessWrong rejection.]

Literally just cranked out a 2-minute, average-quality comment and got accused of being a bot lol. Great introduction to the forum. To be fair they followed up well and promptly, but it was a bit annoying because it was days later and by that stage the thread had passed and the comment was irrelevant.

Thanks for sharing! I'd have guessed they would be using something at least as good as Pangram, but maybe it has too many false negatives for them, or it was rejected for other reasons and the wrong rejection message was shown.

Literally just cranked out a 2-minute, average-quality comment and got accused of being a bot lol. Great introduction to the forum. To be fair they followed up well and promptly, but it was a bit annoying because it was days later and by that stage the thread had passed and the comment was irrelevant.

As an ex-forum moderator, I can sympathize with them; not a fun job!

The Forum should normalize public red-teaming for people considering new jobs, roles, or project ideas.

If someone is seriously thinking about a position, they should feel comfortable posting the key info — org, scope, uncertainties, concerns, arguments for — and explicitly inviting others to stress-test the decision. Some of the best red-teaming I’ve gotten hasn’t come from my closest collaborators (whose takes I can often predict), but from semi-random thoughtful EAs who notice failure modes I wouldn’t have caught alone (or people think pretty differently... (read more)

I think something a lot of people miss about the "short-term chartist" position (these trends have continued until time t, so I expect them to continue to time t+1), applied to an exponential that's actually a sigmoid, is that if you keep holding it, you'll eventually be wrong exactly once.

Whereas if someone holds the "short-term chartist hater" position (these trends always break, so I predict a break at time t+1) for an exponential that's actually a sigmoid, then if they keep holding it, they'll eventually be correct exactly once.

Now of course most chartists (my... (read more)

titotal
A fallacy that can come out of this dynamic is for someone to notice that the "trend continues" people have been right almost all the time, and the "trend is going to stop soon" people are continuously wrong, and to therefore conclude that the trend will continue forever. 

This seems like a pretty unlikely fallacy, but I agree it's theoretically possible (and occasionally happens in practice).

The difference between 0 and 1 is significant! And it's very valuable to figure out when the transition point happens, if you can.
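To make the idealized version of the original claim concrete, here's a minimal Python sketch. It uses a made-up series with a hard cap standing in for the sigmoid's turn; both the series and the scoring rule are assumptions for illustration.

```python
# Hypothetical series: doubles every step, then hits a cap and flattens.
series = [1, 2, 4, 8, 16, 32, 32]

# Score both rules over the apparent-exponential phase,
# up to and including the step where the break actually happens.
chartist_wrong = 0  # "the recent growth ratio will continue"
hater_right = 0     # "the trend breaks now: the next value just stays flat"
for i in range(1, len(series) - 1):
    growth_continues = series[i] * series[i] // series[i - 1]
    stays_flat = series[i]
    actual = series[i + 1]
    chartist_wrong += growth_continues != actual
    hater_right += stays_flat == actual

print(chartist_wrong)  # 1: wrong exactly once, at the break
print(hater_right)     # 1: right exactly once, also at the break
```

A smooth sigmoid blurs this a bit (the growth rate declines gradually rather than all at once), but the asymmetry is the same: each camp pays or cashes in on its rule exactly once.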

Do we need to start considering whether a re-think of our relationship with AGI/ASI systems will be needed in the future? At the moment we view them as tools/agents to do our bidding, and in the safety community there is deep concern/fear when models express a desire to remain online and avoid shutdown, and take action accordingly. This is largely viewed as misaligned behaviour.

But what if an intrinsic part of creating true intelligence - that can understand context, see patterns, truly understand the significance of its actions in light of these insight... (read more)

PSA: regression to the mean/mean reversion is a statistical artifact, not a causal mechanism.

So mean regression says that children of tall parents are likely to be shorter than their parents, but it also says parents of tall children are likely to be shorter than their children.

Put differently, mean regression goes in both directions.

This is well-understood enough here in principle, but imo enough people get this wrong in practice that the PSA is worthwhile nonetheless.

Nice post on this, with code: https://acastroaraujo.github.io/blog/posts/2022-01-01-regression-to-the-mean/index.html 
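For a self-contained version of the same point, here's a minimal simulation sketch (the parameters are illustrative, not real height data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Parent and child "heights": correlated normals with identical marginals
# (illustrative parameters, not real data).
n, mean, sd, rho = 100_000, 170.0, 7.0, 0.5
parent = rng.normal(mean, sd, n)
child = mean + rho * (parent - mean) + rng.normal(0, sd * np.sqrt(1 - rho**2), n)

tall = mean + sd  # call "tall" one standard deviation above the mean

# Children of tall parents are shorter than their parents on average...
print(parent[parent > tall].mean(), child[parent > tall].mean())
# ...and parents of tall children are shorter than their children, symmetrically.
print(child[child > tall].mean(), parent[child > tall].mean())
```

No causal mechanism appears anywhere: the symmetry falls straight out of imperfect correlation.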

Andres pointed out a sad corollary downstream of people's misinterpretation of regression to the mean as indicating causality when there might be none. From Tversky & Kahneman (1982) via Andrew Gelman:

We normally reinforce others when their behavior is good and punish them when their behavior is bad. By regression alone, therefore, they are most likely to improve after being punished and most likely to deteriorate after being rewarded.

... (read more)

We seem to be seeing some kind of vibe shift when it comes to AI.

What is less clear is whether this is a major vibe shift or a minor one.

If it's a major one, then we don't want to waste this opportunity. (It wasn't clear immediately after the release of ChatGPT that it was a limited window of opportunity; if we'd known, maybe we would have been able to leverage it better.)

In any case, we should try not to waste this opportunity if it does turn out to be a major vibe shift.

How would you define and operationalize the vibe shift? Whether there's been a vibe shift and, if so, how significant, seems empirically tractable to determine.

Me: "Well at least this study shows no association beteween painted houses and kids' blood lead levels. That's encouraging!"

Wife: "Nothing you have said this morning is encouraging NIck. Everything that I've heard tells me that our pots, our containers and half of our hut are slowly poisoning our baby"

Yikes, touché...

(Context: we live in Northern Uganda.)

Thanks @Lead Research for Action (LeRA) for this unsettling but excellently written report. Our house is full of aluminium pots and green plastic food containers. Now to figure out what to do about it!

https:/... (read more)

Lead Research for Action (LeRA)
Thanks so much for reading, Nick. It really is nearly impossible as an individual consumer to figure out which of the many products we interact with every day are safe, and which might be contaminated, unfortunately... so hoping we can collectively make progress on these things at a systems level soon!

On a related topic, I'm curious whether you see any geophagia (soil consumption) among pregnant women, or other people, in Northern Uganda? It's fairly common in Kenya and Malawi, and we've unfortunately seen that the soils (which are often compacted, so they look like small stones) frequently contain lead levels well above what you'd want to see in something that's being directly consumed.

-- Isabel Arjmand (cofounder)
NickLaing
Thanks Isabel! I still think you could perhaps have some soft recommendations? We are thinking of ditching our red and green plastic containers (bits of plastic are often flaking off them) and replacing them with aluminium ones. I figure if the water is not hot, that's surely safer? I think consumers can do something here to lower risk. We can't figure out exactly what is contaminated, but we can at least know which is more likely?

Yeah, pregnant women eating soil is a thing here, and some people do it, but I honestly don't know how much is ingested, so I can't help you I'm afraid. That won't be the easiest question to answer. I would have thought the best way to answer it directly would be to take blood lead levels of, say, 100 women late in pregnancy and compare them with 100 non-pregnant women to see directly if there's a difference. That wouldn't prove causality, but if there were no difference you could discount the problem.
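For what it's worth, a quick power calculation suggests n = 100 per group is roughly enough for a moderate difference. This is a sketch assuming a hypothetical effect size of 0.4 standard deviations, which may be far from the true effect:

```python
from statsmodels.stats.power import TTestIndPower

# Two-sample comparison: 100 pregnant vs 100 non-pregnant women,
# assumed blood-lead difference of 0.4 SD (made-up number).
power = TTestIndPower().power(effect_size=0.4, nobs1=100, alpha=0.05, ratio=1.0)
print(power)  # ~0.80 under these assumptions
```

So 100 per group only reliably detects a moderately large difference; a smaller true effect would need a bigger sample.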

Sure – it's a good point about striking a balance between being willing to take action even with imperfect information, while also not wanting to overclaim. In that vein: We think that it may often be coming from lead chromate (high-lead plastics often also read high in chromium), which is a bright yellow-orange pigment, so most likely to be found in yellow/orange/green plastics; we saw it most in orange and bright green, which are both very popular colors in Malawi. We also saw high lead levels in at least one white plastic, which we suspect is comin... (read more)

TARA Round 1, 2026 — Last call: 9 Spots Remaining

We've accepted ~75 participants across 6 APAC cities for TARA's first round this year. Applications were meant to close in January, but we have room for 9 more people in select cities.

Open cities: Sydney, Melbourne, Brisbane, Manila, Tokyo & Singapore

If you're interested:
→ Apply by March 1 (AOE)
→ Attend the March 7 icebreaker
→ Week 1 begins March 14

TARA is a 14-week, part-time technical AI safety program delivering the ARENA curriculum through weekl... (read more)

This is more of a note for myself that I felt might resonate/help some other folks here...

For Better Thinking, Consider Doing Less

I am, like I believe many EAs are, a kind of obsessive, type-A, "high-achieving" person with 27 projects and 18 lines of thought on the go. My default position is usually "work very very hard to solve the problem."

And yet, some of my best, clearest thinking consistently comes when I back off and allow my brain far more space and downtime than feels comfortable, and I am yet again being reminded of that over the past couple of (d... (read more)

On alternative proteins: I think the EA community could aim to figure out how to turn animal farmers into winners if we succeed with alternative proteins. This seems to be one of the largest social risks, and it's probably something we should figure out before we scale alternative proteins a lot. Farmers are typically a small group but have strong lobbying power and broad public sympathy.

Max Taylor
Definitely agree this is important, but I also think we need to reframe the narrative of 'soulless alt proteins companies vs. hard-working farmers' to 'scrappy underdogs vs. huge animal ag corporations that are more like Amazon or Ford than the kinds of cute farms people imagine'. I also wonder how major job displacement from AI will play into this - maybe there will be less public concern about animal farmers losing their jobs if this is part of a general pattern that affects almost all industries? Not at all confident in that though. 

I agree. A reason why it may be easier is that the average age of farmers is high, close to 60. This may be sufficiently high, and the population sufficiently small, that standard national support schemes could absorb the transition.

Is the recent partial lifting of US chip export controls on China (see e.g. here: https://thezvi.substack.com/p/selling-h200s-to-china-is-unwise) good or bad for humanity? I’ve seen many takes from people whose judgment I respect arguing that it is very bad, but their arguments, imho, just don’t make sense. What am I missing?

For transparency, I am neither Chinese nor American, nor am I a paid agent of either. I am not at all confident in this take, but imho someone should make it.

I see two possible scenarios: A) you are not sure how close humanity is to deve... (read more)

alesziegler
Thanks for your perspective; I should be upfront that my confidence in my own case is far from ironclad. Anyway: from the ryan_greenblatt article you linked, I think safety plan A, based on achieving international agreement, should be tried before going for plan B, based on achieving and using a secure US lead, while in "plans" C and D it doesn't matter whether the leading company is American or Chinese, so slowing down Chinese development is useless for those "plans" (they are in fact more like scenarios than real plans).

I do agree that the US political system is in an important sense better than the Chinese political system, but my prior is that if superintelligence is developed before, let's say, 2050 (and that is very optimistic), it is likely to go badly completely regardless of which country it comes from.

I take your point about slack being potentially useful. Theoretically, I can imagine the following sequence of events: a) the US AI industry crushes Chinese competition, then b) the US government, feeling secure in the US lead, imposes sensible safety regulation on companies. If it were smart about it, it would at the same time propose an international regulatory framework that the rest of the world would be prepared to sign on to, as an alternative to unrestrained US domination. In effect, this would be tantamount to getting so much leverage over China that they would drop out of the race, and then hoping that the US government would use its advantage to push for safety, instead of using it in some other way.

However, imho this is a plan that should be pursued as a first option only in circumstances where you are really confident it'll work, since the consequences of trying and failing are likely to be dire, as per my original post. And I am not confident it will work. A much better order of operations would be to 1) try to negotiate with China to establish an international regulatory framework (plan A), with export control and other stuff being i
Erich_Grunewald 🔸
Maybe if you are President of the United States you can first try the one thing, and then the other. But from the perspective of an individual, you have to assume there's some probability of each of these plans (and other strategies) being executed, and that everything will be really messy (e.g., different actors having different strategies in mind, even within the US). Softening export controls seems like something you could do as part of executing Plan A, but as I mentioned above, it's very unclear to me whether unilaterally doing so makes Plan A more likely to be the chosen strategy, and it does likely make Plan B and Plan C go worse. I think you're thinking people have more control over which strategy is adopted than I think they do? Or, what circumstances do you have in mind? Because waiting seems pretty costly. But I think maybe the cruxiest bits are (a) I think export controls seem great in Plan B/C worlds, which seem much likelier than Plan A worlds, and (b) I think unilaterally easing export controls is unlikely to substantially affect the likelihood of Plan A happening (all else equal). It seems like you disagree with both, or at least with (b)?

"But I think maybe the cruxiest bits are (a) I think export controls seem great in Plan B/C worlds, which seem much likelier than Plan A worlds, and (b) I think unilaterally easing export controls is unlikely to substantially affect the likelihood of Plan A happening (all else equal). It seems like you disagree with both, or at least with (b)?"

Yep, this is pretty close to my views. I do disagree with (b), since I am afraid that controls might poison the well for future Plan A negotiations. As for (a), I don’t get how controls help with Plan C, and I don’t ... (read more)

Why don’t EA chapters exist at very prestigious high schools (e.g., Stuyvesant, Exeter, etc.)?

It seems like a relatively low-cost intervention (especially compared to something like Atlas), and these schools produce unusually strong outcomes. There’s also probably less competition than at universities for building genuinely high-quality intellectual clubs (this could totally be wrong).


FWIW I went to the best (or second-best lol) high school in Chicago, Northside, and tbh the kids at these top city high schools are of comparable talent to the kids at Northwestern, with a higher tail as well. Moreover, everyone has way more time and can actually chew on the ideas of EA. There was a Jewish org that sent an adult once a week with food, and I pretty much went to all of them, even though I would barely even self-identify as Jewish, because of the free food and somewhere to sit and chat about random stuff while I waited for basketball practice. ... (read more)

Eli Rose🔸
I think it sounds like an exciting idea. In my role funding EA CB work over the years I've seen a few of these clubs, so there's not literally nothing, but it's true that it's much less common than at universities, and I'm not aware of EA groups at these specific high schools. The answer to many questions of the form "why isn't there an EA group for XYZ" tends to be "no organizer / no one else working to make it happen" and I'm guessing that's the main answer here too.
Noah Birnbaum
Seems right, though plausibly there are some EA/EA-adjacent students at some of these schools. 

alignment is a conversation between developers and the broader field. all domains are conversations between decision-makers and everyone else:

“here are important considerations you might not have been taking into account. here is a normative prescription for you.”

“thanks — i had been considering that to 𝜀 extent. i will {implement it because x / not implement it because y / implement z instead}."

these are the two roles i perceive. how does one train oneself to be the best at either? sometimes, conversations at eag center around ‘how to get a job’, whereas i feel they ought to center around ‘how to make oneself significantly better than the second-best candidate’.

Recent generations of Claude seem better at understanding blog posts and making fairly subtle judgment calls than most smart humans. These days, when I read an article that presumably sounds reasonable to most people but has what seems to me a glaring conceptual mistake, I can put it into Claude, ask it to identify the mistake, and more likely than not Claude lands on the same mistake I identified.

I think before Opus 4 this was essentially impossible, Claude 3.xs can sometimes identify small errors but it’s a crapshoot on whether it ca... (read more)


what prompt did you use?

Linch
EDIT: I noticed that in my examples I primed Claude a little, and when unprimed, Claude does not reliably (or even usually) get to the answer. However, Claude 4.xs are still notable for how little handholding they need for this class of conceptual errors; Gemini often takes like 5 hints where Claude usually gets it with one. And my impression was that Claude 3.xs were kinda hopeless (they often don't get it even with short explanations by me, and when they do, I'm not confident they actually got it vs. just wanted to agree).
Linch
I wouldn't go quite this far, at least from my comment. There's a saying in startups, "never outsource your core competency", and unfortunately reading blog posts and spotting conceptual errors of a certain form is a core competency of mine. Nonetheless, I'd encourage other Forum users less good at spotting errors (which is most people) to try something like this: paste posts that seem a little fishy into Claude and see if it's helpful.[1] For me, Claude is more helpful for identifying factual errors, and for challenging my own blog posts at different levels (e.g. spelling, readability, conceptual clarity, logical flow, etc.). I wouldn't bet on it spotting conceptual/logical errors in my posts that I missed, but again, I have a very high opinion of myself here.   1. ^ (To be clear, I'm not sure the false-positive/false-negative ratio is good enough for other people.)

Here’s a random org/project idea: hire full-time, thoughtful EA/AIS red teamers whose job is to seriously critique parts of the ecosystem — whether that’s the importance of certain interventions, movement culture, or philosophical assumptions. Think engaging with critics or adjacent thinkers (e.g., David Thorstad, Titotal, Tyler Cowen) and translating strong outside critiques into actionable internal feedback.

The key design feature would be incentives: instead of paying for generic criticism, red teamers receive rolling “finder’s fees” for critiques that a... (read more)

While I like the potential incentive alignment, I suspect finder’s fees are unworkable. It’s much easier to promise impartiality and fairness in a single game as opposed to an iterated one, and I suspect participants relying on the fees for income would become very sensitive to the nuances of previous decisions rather than the ultimate value of their critiques.

Ultimately, I don't think there are many shortcuts in changing the philosophy of a movement. If something is worth challenging, then people strongly believe it, and there will have to be a process of contested diffusion from the outside in. You can encourage this in individual cases, but systemizing it seems difficult.

EAGx and Summit events are coming up, and we're looking for organizers for more!

Applications for EAGxCDMX (Mexico City, 20–22 March), EAGxNordics (Stockholm, 24–26 April), and EAGxDC (Washington DC, 2–3 May) are all open! These will be the largest regionally focused events in their respective areas, and are aimed at serving those already engaged with EA or doing related professional work. EAGx events are networking-focused conferences designed to foster strong connections within their regional communities.

If you’d like to apply to join the organizing team fo... (read more)

"Most people make the mistake of generalizing from a single data point. Or at least, I do." - SA

When can you learn a lot from one data point? People, especially stats- or science-brained people, are often confused about this, and frequently give answers that (imo) are the opposite of useful. E.g., they say that usually you can't know much, but that if you know a lot about the meta-structure of your distribution (e.g. you're interested in the mean of a distribution with low variance), a single data point can sometimes be a significant update.
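For concreteness, here's what that low-variance case looks like as a one-observation Bayesian update (a minimal sketch with made-up numbers):

```python
import numpy as np

# Normal-normal conjugate update: vague prior on an unknown mean mu,
# one observation x whose noise we know to be small (illustrative numbers).
prior_mean, prior_sd = 0.0, 10.0
noise_sd = 0.1  # "a distribution with low variance"
x = 3.7         # the single data point

# Standard conjugate update for the posterior over mu.
post_var = 1 / (1 / prior_sd**2 + 1 / noise_sd**2)
post_mean = post_var * (prior_mean / prior_sd**2 + x / noise_sd**2)
print(post_mean, np.sqrt(post_var))  # ~3.7, ~0.1: one point collapses the uncertainty
```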

This type of limited conc... (read more)

Mo Putera
Seems you and Spencer Greenberg (whose piece you linked to) are talking past each other, because you disagree on what the interesting epistemic question is and/or are just writing for different audiences?

* Spencer is asking "When can a single observation justify a strong inference about a general claim?", which is about de-risking overgeneralisation, a fair thing to focus on since many people generalise too readily.
* You're asking "When does a single observation maximally reduce your uncertainty?", which is about information-theoretic value, which (like you said) is aimed more towards the "stats-brained".

Also seems a bit misleading to count something like "one afternoon in Vietnam" or "first day at a new job" as a single data point when it's hundreds of them bundled together? Spencer's examples seem to lean more towards actual single data points (if not all the way). And Spencer's 4th example, on how one data point can sometimes unlock a whole bunch of other data points by triggering a figure-ground inversion that then causes a reconsideration of your views, seems perfectly aligned with Hubbard's point.

That said, I do think the point you're making is the more practically useful one; I guess I'm just nitpicking.

Also seems a bit misleading to count something like "one afternoon in Vietnam" or "first day at a new job" as a single data point when it's hundreds of them bundled together?

From an information-theoretic perspective, people almost never refer to a single data point as strictly just one bit, so whether you count only one float in a database, a whole row in a structured database, or even a whole conversation, we're sort of negotiating price.

I think the "alien seeing a car" makes the case somewhat clearer. If you already have a deep model of ... (read more)

It is popular to hate on Swapcard, and yet Swapcard seems like the best available solution despite its flaws. Claude Code and other AI coding assistants are very good nowadays, and conceivably someone could just Claude Code a better Swapcard that maintains feature parity while avoiding those flaws.

Overall I'm guessing this would be too hard right now, but we do live in an age of mysteries and wonders. It gets easier every month. One reason for optimism is it seems like the Swapcard team is probably not focused on the somewhat odd use case of EAGs in general (... (read more)


There is, however, rising appetite for 1v1s. I just went to an online meeting about the Skoll World Forum, which is probably the biggest NGO and funder conference in the world. Both speakers emphasized 1v1s being the most important aspect, and advised only going to other sessions at times when 1v1s weren't booked.

So maybe the GHD ecosystem at least is waking up a bit...

Yonatan Cale
Yes, I could use help understanding the demand for:
1. Similar features but fewer bugs
2. Focusing on CEA's use case (making more high-quality connections, right?)
Can you help me with this @Eli Rose🔸?
Eli Rose🔸
Cool! Re how to build it, I'd just talk to CEA here, or maybe EAG-goers; I don't think I have any insight to add.

I like Scott's Mistake Theory vs Conflict Theory framing, but I don't think this is a complete model of disagreements about policy, nor do I think the complete models of disagreement will look like more advanced versions of Mistake Theory + Conflict Theory. 

To recap, here's my short summaries of the two theories:

Mistake Theory: I disagree with you because one or both of us are wrong about what we want, or about how to achieve what we want.

Conflict Theory: I disagree with you because ultimately I want different things from you. The Marxists, who Scott was or... (read more)

Mjreard
I'll need to reread Scott's post to see how reductive it is,[1] but negotiation and motivated cognition here do feel like a slightly lower level of abstraction, in the sense that they are composed of different kinds (and proportions) of conflicts and mistakes. The dynamics you discuss here follow pretty intuitively from the basic conflict/mistake paradigm. This is still great analysis and a useful addendum to Scott's post. 1. ^ Actually pretty reductive on a skim, but he does have a savings clause at the end: "But obviously both can be true in parts and reality can be way more complicated than either."

The dynamics you discuss here follow pretty intuitively from the basic conflict/mistake paradigm.

I think it's very easy to believe that the natural extension of the conflict/mistake paradigm is that policy fights are composed of a linear combination of the two. Schelling's "rudimentary/obvious" idea, for example, that conflict and cooperation are often structurally inseparable, is a more subtle and powerful reorientation than it first seems.

But this is a hard point to discuss (because it's in the structure of an "unknown known"), and I didn't interview... (read more)

A bit sad to find out that Open Philanthropy’s (now Coefficient Giving) GCR Cause Prioritization team is no more. 

I heard it was removed/restructured mid-2025. Seems like most of the people were distributed to other parts of the org. I don't think there were public announcements of this, though it is quite possible I missed something. 

I imagine there must have been a bunch of other major changes around Coefficient that aren't yet well understood externally. This caught me a bit off guard. 

There don't seem to be many active online artifa... (read more)

David Bernard
Thanks for flagging this, Ozzie. I led the GCR Cause Prio team for the last year before it was wound down, so I can add some context.

The honest summary is that the team never really achieved product-market fit. Despite the name, we weren't really doing "cause prioritization" as most people would conceive of it. GCR program teams have wide remits within their areas and more domain expertise and networks than we had, so the separate cause prio team model didn't work as well as it does for GHW, where it's more fruitful to dig into new literatures and build quantitative models. In practice, our work ended up being a mix of supporting a variety of projects for different program teams and trying to improve grant evaluation methods.

GCR leadership felt that this set-up wasn't on track to answer their most important strategy and research questions and that it wasn't worth the opportunity cost of the people on the team. GCR leadership are considering alternative paths forward, though haven't decided on anything yet.

I don't think there are any other comparably major structural changes at Coefficient to flag, other than that we're trying to scale Good Ventures' giving and work with other partners, as described in our name change announcement post. I'll also note that the Worldview Investigation team also wound down in H2, although that case was because team members left for other high-impact roles (e.g. Joe) and not through a top-down decision. This means that there's no longer much dedicated pure research capacity within GCR, though grantmaking here is fairly contiguous with research in practice.

Thanks so much for this response! That's really useful to know. I really appreciate the transparency and clarity here. 

Hope the team members are all doing well now.

Ozzie Gooen
I don't mean to sound too negative on this - I did just say "a bit sad" on that one specific point.

Do I think that Coefficient is doing worse or better overall? It seems like Coefficient has been making a bunch of changes, and I don't feel like I have a good handle on the details. They've also been expanding a fair bit. I'd naively assume that a huge amount of work is going on behind the scenes to hire and grow, and that this is putting Coefficient in a better place on average.

I would expect this (the GCR prio team change) to be some evidence that specific ambitious approaches to GCR prioritization are more limited now. I think there are a bunch of large projects that could be done in this area that would probably take a team to do well, and right now it's not clear who else could do such projects. Bigger-picture, I personally think GCR prioritization/strategy is under-investigated, but I respect that others have different priorities.