It's an MVP—we will upgrade to a better website in due time. Hopefully the release of more products will mean there will be more options to suit a greater variety of tastes. If you have any ideas for designs or aesthetic styles that would appeal to you, I encourage you to submit them.
Unfortunately, the products cannot get any cheaper than they are, as we cannot operate the store without using a service like Printful. Prices may in fact go up in the future if we change the funding model.
Thanks!
Yeah, we are aware of this bug and unfortunately don't know how to fix it yet, but hopefully we'll solve it in the next few days.
The reason we are not charging a markup is that it could lead to tax-related complications, but this may change in the future.
Hm, this may be right. We will change it if this comment gets enough upvotes. Also, if you had the same issue as Dan (shipping was too expensive), try again now!
The ability to include a poll when you make a question post, à la Twitter! I know this feature has been suggested before, in response to which Aaron Gertler made the Effective Altruism Polls Facebook group, but it seems to have plateaued at 578 members after 2.5 years. Response rates on the forum would probably be much higher.
I think a bottleneck here is often that having the explicit goal of making the members of your EA group become friends can feel inorganic and artificial. The activities you suggest seem like a good way of doing this without it feeling forced, and I'll probably use some of these ideas for EA Ireland. Thanks for writing up this wholesome post!
Yes, this is true and very important. We should by no means lose sight of existential risks as a discerning principle! I think the best framing to use will vary a lot case-by-case, and often the one you outline will be the better option. Thanks for the feedback!
This is a good point, and I thought about it when writing the post—trying to be persuasive does carry the risk of ending up flatteringly mischaracterizing things or worsening epistemics, and we must be careful not to do this. But I don't think it is doomed to happen with any attempts at being persuasive, such that we shouldn't even try! I'm sure someone smarter than me could come up with better examples than the ones I presented. (For instance, the example about using visualizations seems pretty harmless—maybe attempts to be persuasive should look more like this than the rest of the examples?)
Maybe we don't just want to optimize the messaging, but also the messengers: having charismatic and likeable people talk about this stuff might be good (to what extent is this already happening? Are MacAskill & Ord as good spokespeople as they are researchers?).
Furthermore, I agree that taking the WaitButWhy approach, with easily understandable visualizations, sounds promising.
No, that's not what I mean. I mean we should use other examples of the form "you ask an AI to do X, and the AI accomplishes X by doing Y, but Y is bad and not what you intended" where Y is not as bad as an extinction event.
Much of SoGive's methodology is outlined on this blog, which I think is pretty accessible for beginners (though I think some parts are out of date).
Do you work with Kat Woods? She mentioned some people on her team had already done some work on this, and she was meaning to put me in touch with them.
That's amazing! Yes, I definitely think we can work together. Do you have an email or similar where I can reach out to discuss further?
I think many of these benefits could be achieved by local EA groups working on a high-impact project together (perhaps like those in Impact CoLabs?). Some people in my local EA group have started doing AI research together, and that seems to be going pretty well. I worry that EA groups doing community service in an official EA capacity may muddy the waters about what effective altruism stands for.
- Team smarter than you - join a team where most people are smarter than you
Couldn't you argue that your marginal impact is less here than in a case where you're the smartest in the team?
Are they familiar with Charity Entrepreneurship? They research high-impact nonprofit ideas (which you can find on their website), and they have an incubation program.
I see this as one of those problems that could be addressed with a "trickle-down solution": Once the top universities and/or academic journals change their policies, it is likely that all the rest will copy them and follow suit. I don't know if there is any type of "lobbying" we can do to influence these institutions but it seems like a potentially straightforward and tractable path.
There is the EA Hub profiles directory, where you can search for people by location, cause area, expertise, and whether they're open to job offers.
I think this is often a real tradeoff, but there are other ways of framing it that might help:
A) You should work on something you at least somewhat enjoy and have a good personal fit for in order to avoid burnout (I think this is 80k's position as well). Within the range of things that meet this criterion, some will be more impactful than others, and you should choose the most impactful one. EA frameworks are very useful for discerning which one this might be.
B) The aptitude-building approach (from Holden Karnofsky's 80k podcast episode): You should become ...
I think this is a great idea! I worry that calling it The Altruist might be off-putting for some readers, as it could be read as self-congratulatory.
This may be useful for Future Perfect as a case study: The 12 most-read Future Perfect pieces of 2021
I agree with you. I generally come to the forum looking for more thoughtful content, and there are already several EA Facebook groups where at least the meme post would have been more appropriate. I think the writing contest is probably fine, though.
This seems very useful. Personally, I would also be interested in:
Will look into this.