Hello! 

I’m Toby, the new Content Manager @ CEA. 

Before working at CEA, I studied Philosophy at the University of Warwick and worked for a couple of years on a range of writing and editing projects in the EA space. Recently I helped run the Amplify Creative Grants program, which encourages more impactful podcasting and YouTube projects (such as the podcast in this Forum post). You can find a bit of my own creative output on my more-handwavey-than-the-ea-forum blog, and my (now inactive) podcast feed.

I’ll be doing some combination of: moderating, running events on the Forum, making changes to the Forum based on user feedback, writing announcements, writing the Forum Digest and/or the EA Newsletter, and participating in the Forum a lot. I’ll be doubling the capacity of the content team (the team formerly known as Lizka).

I’m here because the Forum is great in itself, and safeguards parts of EA culture I care about preserving. The Forum is the first place I found online where people would respond to what I wrote and actually understand it. Often they understood it better than I did. They wanted to help me (and each other) understand the content better. They actually cared about there being an answer. 

The EA community is uniquely committed to thinking seriously about how to do good. The Forum does a lot to maintain that commitment, by platforming critiques, encouraging careful, high-context conversations, and sharing relevant information. I’m excited that I get to be a part of sustaining and improving this space. 

I’d love to hear in the comments why you value the Forum (or, alternatively, anything we could work on to make it better!).

This is the image I'm using for my profile picture. It's a linoprint I made of one of my favourite statues, The Rites of Dionysus.


 


Just to be clear, Lizka isn't being replaced and you're a new, additional content manager? Or does Lizka have a new role now?

Yep, Lizka is still Content Specialist, and I'm additive. There were a lot of great content-related ideas being left on the table because Lizka can't do everything at once. So once I'm up to speed, we should be able to get even more projects done. 

What's the difference between a Content Specialist and a Content Manager?

The difference in role titles reflects the fact that Lizka is the team lead (of our team of two). From what I understand, the titles needn't make much difference in practice.

PS: I'm presuming there is a disagree react on my above comment because Lizka can in fact do everything at once. Fair enough. 

FWIW I would've expected the Content Manager manages the Content Specialist, not the other way around.

FWIW I would have guessed the reverse re role titles

Yes I am also curious about the difference. I’ve been using them interchangeably.

(I'd guess the different titles mostly just reflect the difference in seniority? cf. "program officer" vs "program associate")

Wow, HILTS is hands down my favorite podcast, so now I’m quite excited to see what new and exciting content will come from the Forum. Welcome to the EA Forum team!

Thank you Constance! I'm glad to hear you like the podcast. To be very clear: everything you like about the podcast is down to James and Amy; we just chose to fund them. 

The only thing that comes to mind for me regarding "make it better" would be to change the wording on the tooltips for voting to clarify (or to police?) what they are for. I somewhat regularly see people agree vote or disagree vote with comments that don't contain any claims or arguments.

Interesting! Let me know if any examples come up (feel free to post here or dm). Ideally we wouldn't have the disagree button playing the same role as the karma button. 

Sure. The silly and simplified cliché is something like this: a comment describes someone's feelings (or internal state) and then gets some agree votes and disagree votes, as if Person A says "this makes me happy" and Person B wants to argue that point.

(to be clear, this is a very small flaw/issue with the EA Forum, and I wouldn't really object if the people running the forum decide that this is too minor of an issue to spend time on)

A few little examples:

  • Peter Wildeford's comment on this post "What's the difference between a Content Specialist and a Content Manager?" currently has two agree votes. There isn't any argument or stance there; it is merely asking a question. So I assume people are using the agree vote to indicate something like "I also have this question" or "I am glad that you are asking this question."
  • I made a comment a few days ago about being glad that I am not the only one who wants to have financial runway before donating. It currently has a few agree votes and disagree votes, and I can't for the life of me figure out why. There aren't really any stances or claims being made in that comment.
  • Ben West made a comment about lab-grown meat that currently has 27 agree votes, even though the comment has nothing to agree with: "Congratulations to Upside Foods, Good Meat, and everyone who worked on this technology!" I guess that people are using the agree vote to indicate something like "I like this, and I want to express the same gratitude."

Is this a problem? It seems fine to me, because the meaning is often clear, as in two of your examples, and I think it adds value in those contexts. And where it's not clear, that doesn't seem like a big loss compared to the counterfactual of having none of these types of vote available.

Thanks for putting these together. This doesn't currently seem obviously bad to me, for (I think) the same reasons as Isaac Dunn (those examples don't show valueless reacts, and most cases are much clearer). However, your cases are interesting. 

I agree with your read of the reactions to Ben West's comment. 

In the question about my role, perhaps it is slightly less clear, because "I agree that this is a good question" or "I have this question as well" could probably be adequately expressed with karma. But I also doubt that this has led to significant confusion. 

In the reaction to your comment, I'd guess the agrees are saying that they echo the statement in your tl;dr. The disagree is weirder: perhaps they are signalling discouragement of your encouraging Lizka's sentiment? 


(Perhaps how perplexing people find agree/disagree reacts to comments which don't straightforwardly contain propositions maps to how habitually the reader decouples propositional content from context.) 


I'll keep an eye out for issues with this; my view is loosely held. Thanks again for raising the issue. 
Congratulations on the new role! :)

Welcome! Glad to have you here, Toby.

Thanks Joseph!

Welcome Toby :)

Thank you Max!

Congrats Toby, excited to see what you get up to in the new role! And thanks for all your work on Amplify.
