Tangentially, this made me wonder whether the ppl running EAF/LW/etc are thinking about and "ready" wrt the risk of mass-produced BS from LLMs flooding online spaces, including potentially forums like these.
We did not consider discussion of specific research projects to be within the scope of this post. As mentioned at the beginning, we tried to cover as much as we could that would be relevant to other field builders and related audiences. The post primarily focuses on information we think might be relevant for other people and initiatives in this space, so we also do not go into specific research outputs produced by fellows.
There are a few reasons why this made sense.
As discussed in other parts of this post, a lot of research output has not yet been published. Some teams did publicly share their work (for example, one of the two teams that worked on "dynamical systems perspective on goal-oriented behavior and relativistic agency" posted their updates on the Alignment Forum:  and , which we hugely appreciate), some have submitted their manuscripts to academic venues, and several others have not yet done so. This has been for various reasons: e.g., they are continuing the project and intend to publish only at some further level of maturity; (info) hazard considerations and sanity checks; preferences over the format of research output they want to pursue, and working towards that; or the project was primarily directed at informing the mentor's research, which may not involve an explicit public output.
From our end, while we might hold preferences for certain insights to flow outwards more efficiently, we also wanted to defer decisions about the form and content of research outputs to the shared judgement of fellows and their respective mentors.
Note that in some cases this absence of public communication so far is fairly justifiable, especially for promising projects that became long-term collaborations.
(Fwiw, as we mention in this post, we have also gained a better understanding of how to facilitate outward communication without constraining such research autonomy, which we will take into account in the future.)
There are also other reasons why a detailed evaluation of projects is difficult to do based on partial outputs and mentor-specific inside-view motivations. In light of all this, we decided for this reflections post to be a high-level overview, and not to include either a Research Showcase or a detailed Portfolio Evaluation. Based on what we understand right now, this seems like a reasonable decision.
At the same time, if a project evaluator or somebody in a related capacity wishes to take a look at a more detailed evaluation report, we'd be open to discussing that (under some info-sharing constraints) and would be happy to hear from you at firstname.lastname@example.org.
TL;DR: PIBBSS is hiring a full-time Project Manager who will be responsible for running the second iteration of the PIBBSS Summer Research Fellowship.
To apply, please complete this application form.
PIBBSS aims to facilitate knowledge transfer from fields studying intelligent behaviour in natural systems to AI safety and alignment.
The Project Manager will be supported by TJ and Nora (who ran the fellowship in 2022) to help transfer learnings from last year’s fellowship, and work alongside (and manage) 1-3 team members to help execute the program.
We’re happy to discuss this opportunity with any potential applicants. Feel free to contact me with any questions you might have at: email@example.com.
What is the timeline of the Century Fellowship/application? Is there a time when applications will be closed?
Another thought in the genre of "consequentialism+": capabilitarianism à la Sen and Nussbaum (see e.g. here (h/t TJ) for an intro, or the SEP) seems attractive to me (among other reasons) because I believe it makes a practically useful abstraction from "what we believe ultimately matters" to "what are the best levers to affect that which we believe ultimately matters". (In this case, the suggestion would be: while we might still think that some broad notion of utility is what we consider to ultimately matter morally, given the specific world we live in and its causal structure, focusing on improving people's central capabilities (as listed for example in the post linked earlier) is an effective and robust way of promoting that good.)
And importantly, consequentialism-viewed-through-the-lens-of-capabilitarianism will equip you with somewhat different intuitions in e.g. political philosophy than a more "straightforward" notion of consequentialism will (at least before you reach what I am suggesting here to be a new reflective equilibrium).
FWIW, I would be a regular reader of Nuno's monthly (or some other interval) forum digest. I also think that having a number of other digest writers (potentially with complementary profiles) could be valuable. Given the depth and breadth of EA/the EA Forum these days, trying to find the "common denominator of relevance" in the form of a single digest will result in a digest that is of limited usefulness for most readers.
Some of the section ideas are great, in particular "underupvoted underdogs".
PIBBSS Summer Research Fellowship -- Q&A event
Somewhat related: What to do with people? https://forum.effectivealtruism.org/posts/oNY76m8DDWFiLo7nH/what-to-do-with-people
I agree there are those things, but I am overall probably more pessimistic than you; I think there is a (significant) asymmetry towards pollution-y and not-truth-conducive content production here.
(That said, I am not too concerned overall either; I think the solution is making it harder to create an account, or requiring some form of verification to do so.)