Hello! I'm Toby. I'm a Content Strategist at CEA. I work with the Online Team to make sure the Forum is a great place to discuss doing the most good we can. You'll see me posting a lot, authoring the EA Newsletter and curating Forum Digests, making moderator comments and decisions, and more.
Before working at CEA, I studied Philosophy at the University of Warwick and worked for a couple of years on a range of writing and editing projects within the EA space. Recently, I helped run the Amplify Creative Grants program, which encourages more impactful podcasting and YouTube projects. You can find a bit of my own creative output on my blog and my podcast feed.
Reach out to me if you're worried about your first post, want to double check Forum norms, or are confused or curious about anything relating to the EA Forum.
Hey Clara, congrats on taking the pledge!
My guess on this question is that you'd need a lot more people to give before you saw macro-economic effects. But people's giving does differ greatly from country to country, so I'm sure there is some economic work on the effects of that on savings, investment, wages, and so on!
Also, welcome to the Forum! Let me know if you have any questions about how it works.
This is just a quick note to let you know that polls-in-posts weren't just for DIY debate week. You can use them at any time.
More about the feature, and how to use it, in this post.
Should EA avoid using AI art for non-research purposes?
Treating "agree" as "yes"
I think the strongest reason against (using AI art for non-research purposes), for me, is all the uncompensated art that made up the training data. It's a bit of an original sin for AI in general (including text generation), and not one we've found a good response to.
Reasons for (using AI art for non-research purposes):
- It makes sense for EAs to occupy a place in the memetic space where we hold both that AI is and will be very powerful (and therefore it's important to learn how to use it) and that it's likely to be very dangerous. I don't think there is a contradiction there, and avoiding the use of AI would be increasingly hobbling. This is relevant because I don't think we can make a clean distinction between AI-generated art and AI-generated text: both are likely built on an amount of stolen/uncompensated data.
- Using AI images (like bulby above) is just a bit of fun, i.e. the scale of use is pretty small, which suggests this isn't a very big deal.
- A lot of the concerns are hypothetical comms concerns. I'd take these more seriously if things played out that way, but right now I'd guess that the anti-AI-use camp is fairly loud but not strategically important. And since I disagree with them for the other reasons above, I'd rather not pretend to agree for optics reasons.
Overall: I'm definitely open to changing my mind on this. I especially don't feel like I have a principled response to the uncompensated labour that went into creating AI, and it'd be great to have one.
I'm a pretty strong anti-realist, but this is one of the strongest types of 'should' for me.
I.e. 'If you want to achieve the best consequences, then you should expect the majority of affectable consequences to be in the far future' seems like the kind of thing that could be true or false on non-normative grounds, and it would normatively ground a 'should' if you are already committed to consequentialism. In the same sense, believing "I should get to Rome as fast as possible" and "The fastest way to get to Rome is to take a flight" together grounds the 'should' in "I should take a flight to Rome".
However people interpret the question is how we should discuss it. But when I was writing it, I was wondering whether bioweapons per se can cause extinction/existential risks. I.e. can bioweapons either:
a) kill everyone, or
b) kill enough of the population, permanently, that we can never achieve much as a species?
I'm not sure about the feasibility of either.
Thanks for this comment! I think you've pointed out a few places where this post clearly isn't comprehensive. I'm not sure how frequently asked these questions will be, but in case they are, some quickfire answers:
If someone makes a post that criticizes someone on the forum but does not reach out to the target of their criticism first, would you consider that to be violating a norm of the forum, even if that violation won't result in any enforcement?
No. I mistakenly used the word 'norms' in an ambiguous sentence in the second section; I've changed the word to 'practices'. Reaching out to a critiqued organisation or person, or giving right of reply, are 'practices we'd like to encourage' rather than new norms. In practice, this means that we (the mods) will advise people to follow these practices in many cases, and in many cases will help reduce friction (by doing the reaching out on the critic's behalf, for example).
What is in scope for "criticism" in this context?
This is a good question. I could cop out with an 'I know it when I see it', which is partially true. But broadly, I think the type of criticism we are more concerned about/would more strongly encourage to follow these practices is criticism which could damage the reputation of an organisation or individual if it was read without a response. General disagreement/critical engagement with the ideas of an organisation could technically fall into this category, but it is generally read as more collaborative than as an accusation of wrongdoing. Tone probably matters a bit here. Others on the mod team may have different views on this question.
I think it's not uncommon that critics and their targets have major disagreements about whether these types of beliefs are reasonable. When can one invoke this type of reasoning for not reaching out?
When it's reasonable to do so. I think the Forum is naturally quite sceptical and won't let bad-faith arguments stand for long, so in many cases I don't think it will matter if a bad-faith response is published alongside a critique. But it's a little hard to form a principle here (hence practices, not norms).
Yep, quick takes are the best spot for more speculative, early-stage, or short thoughts :)