First, I wanted to thank all of the Effective Altruism Global organizers and participants. I found the conference very valuable and overall well put together. A ton of work obviously went into it, much of it by organizers who I don't believe will get much credit for it, and I very much commend their work.
That said, there's always room for new ideas, and I often come away from these conferences with a bunch of them. Thanks to the EAGx events, ideas described now could be put into action fairly soon and experimented with.
As you might expect, I recommend that people post each of their ideas as an independent comment, then upvote the ideas they think would be most useful.
Hi Kerry! Congratulations again for the exceptional conference, and thanks for adding detail.
Updates I've made:
- while in my tiny sample of 13, the emails with 'from' names like 'Kit Surname via EAG' worked out badly, it looks like you produced the most reasonable emails of that form possible without the benefit of hindsight. In answer to your question, I call this dishonest primarily because it gives the appearance of endorsement of content which I do not endorse. I would still not do this.
- the deadlines at first appeared to exist mainly to generate haste, but some or all had an operational function. My blanket use of the term 'fake deadlines' was therefore wrong.
- aside from 'we trust Kit's judgement', I see that most or all of the other statements made in the campaign were true in a technical sense. However, I maintain that this is insufficient. 'I was looking through our attendee database' is a great example, precisely because the whole message implies specificity to the recipient, while it appears that the looking could have been replaced by a single filter for people who hadn't bought tickets. Likewise for 'ideal participant'. At the very least, I'd bin these along with the "you're a cool person, come to EA" emails Michael mentioned.
Additional arguments against my position:
- if CEA has standards substantially above average for its reference class, people might still not trust EAs to the extent I would like.
- maybe we don't particularly need highly involved EAs to trust each other more, and this kind of marketing won't materially affect what less involved people think.
I had also suspected that my concerns placed me in a niche group holding a small proportion of total relevance. I have updated away from this suspicion: the ratio of people who currently register a desire for greater honesty (17-27, probably nearer 17) to those who register no concern (3-5) is much higher than I had anticipated, and I suspect that forum participants are a highly relevant class for cooperation considerations.
To the other 16+ of those 17+ people: if my views are not representative of yours, it could be valuable for you to say so.