Thanks for this post!
I think there's a typo here:
We also found sizable differences between the percentage of Republicans (4.3% permissive, 1.5% stringent) estimated to have heard of EA, compared to Democrats (7.2% permissive, 2.9% stringent) and Independents (4.3% permissive, 1.5% stringent). [emphasis added]
It looks like the numbers for Republicans were copy-pasted for Independents? Since the text implies that the numbers should be very different but they're identical, and since if those are the correct numbers it seems weird that the US adult population ... (read more)
For what it's worth, I get a sense of vagueness from this post, like I don't have a strong understanding of what specific claims are being made, and I predict that different readers will take away different claims from it.
I think attempting to provide a summary of the key points in the form of specific claims and arguments for/against them would be a useful exercise, to force clarity of thought/expression here. So what follows is one possible summary. Note that I think many of the arguments in this attempted summary are flawed, as I'll ... (read more)
I put a bunch of weight on decision theories which support 2.
A mundane example: I get value now from knowing that, even if I died, my partner would pursue certain Claire-specific projects I value being pursued because it makes me happy to know they will get pursued even if I die. I couldn't have that happiness now if I didn't believe he would actually do it, and it'd be hard for him (a person who lives with me and who I've dated for many years) to make me believe that he actually would pursue them even if it weren't true (as well as seeming ske... (read more)
Or perhaps you're thinking of utils in terms of whether preferences are actually satisfied, regardless of whether people know or experience that and whether they're alive at that time? If so, then I think that's a pretty unusual form of utilitarianism, it's a form I'd give very little weight to, and that's a point that it seems like you should've clarified in the main text.
Although I find this version of utilitarianism extremely implausible, it is actually a very common form of it. Discussions of preference-satisfaction theories of wellbeing presupposed by... (read more)
Thank you so, so much for writing up your review & criticism! I think your sense of vagueness is very justified, mostly because my own post is more "me trying to lay out my intuitions" and less "I know exactly how we should change EA on account of these intuitions". I had just not seen many statements from EAs, and even fewer among my non-EA acquaintances, defending the importance of (1), (2), or (3) - great breakdown, btw. I put this post up in the hopes of fostering discussion, so thank you (and all the other commenters) for contributing your th... (read more)
I just want to flag that, for reasons expressed in the post, it seems probably a bad idea to be trying to accelerate the implementation of APM at the moment, as opposed to doing more research and thinking on whether to do that, and then maybe doing it afterwards if it then appears useful.
And I also think it seems bad to "stand firmly behind" any "aggressive strategy" for accelerating powerful emerging technologies; I think there are many cases where accelerating such technologies is beneficial for the world, but one should prob... (read more)
I strong downvoted this comment. Given that, and that others have downvoted it too (which I endorse), I want to mention that I'm happy to write some thoughts on why I did so if you want, since I imagine people who are new-ish to the EA Forum may sometimes not understand why they're getting downvoted.
But in brief:
As a moderator, I agree with Michael. The comment Michael's replying to goes against Forum norms.
Oh, we do have https://forum.effectivealtruism.org/topics/marketing. So it's probably not worth adding a new tag for just Digital marketing.
The author or readers might also find the following interesting:
That said, fwiw, since I'm recommending Holden's doc, I should also flag that I think the breakdown of possible outcomes that Holden sketches there isn't a good one, because:
Thanks for this post. I upvoted this and think the point you make is important and under-discussed.
That said, I also disagree with this post in some ways. In particular, I think the ideal version of this post would pay more attention to:
Someone shared a project idea with me and, after I indicated I didn't feel very enthusiastic about it at first glance, asked me what reservations I had. Their project idea is focused on reducing political polarization and is framed as motivated by longtermism. I wrote the following and thought maybe it'd be useful for other people too, since I have similar thoughts in reaction to a large fraction of project ideas.
Thanks for this post!
Here's a related thing I wrote recently, with a slightly different framing and some additional details, in case this is useful to someone. Though Habryka's comment makes me think perhaps LessWrong already have this mostly covered, so I guess a first step would be to check that out.
"Mesa-project* idea: Centralised & scalable proofreading, copyediting, and formatting assistance for EA-aligned people
Maybe someone should find decent/good copyeditors/proofreaders/formatters and advertise their services to EA community members who are wi... (read more)
Regulation and/or Standards
https://forum.effectivealtruism.org/topics/corporate-governance | governance of artificial intelligence | law | policy
An FYI to potential text-writers for this entry I made: I've mostly thought about this in relation to AI governance, but I think it's also important for space governance and presumably various other EA issues.
On the bio point, see also You Should Write a Forum Bio
I think you mean "your first name" or something like that, rather than necessarily "your full name"?
My suggested default would be to write your full real name in your bio, fill in other info about you in your bio, and make your Forum name sufficiently related to your real name that people who at one point learned the connection will easily remember it. (As I've done.)
If one does that, then also making one's Forum name their full real name seems to add little value and presumably adds some risk to their 'real life' reputation if they want to lat... (read more)
My policy on this, to the extent I have one, is a sort of soft lockdown: I don't mind sharing enough personal info on here that an EA who knows me in real life could figure out my identity, but I need to always have at least plausible deniability in the face of any malicious actor.
As for the risks in policy careers, I think the risk is very high for appointed jobs and real but lower for elected ones. Politicians are more risk averse than voters, and when they can pick from a pool of 100, they'll look for any reason to turn you down. When the voter... (read more)
Nice post! I think that I agree with all of the specific points you made, that they seem in aggregate pretty useful+important to say, and that in future I'll probably send this post to at least 5 people when giving career advice.
But here are two criticisms:
For what it's worth, I'd say (partly based on my experience as a grantmaker and on talking to lots of other grantmakers about similar things):
Agreed - and to point to lots of sources, I'd highlight List of EA funding opportunities and my statement there that:
I strongly encourage people to consider applying for one or more of these things. Given how quick applying often is and how impactful funded projects often are, applying is often worthwhile in expectation even if your odds of getting funding aren’t very high. (I think the same basic logic applies to job applications.)
Also, less importantly, Things I often tell people about applying to EA Funds.
I think this would be a subset of Personal development, so in some sense is "covered" by that, but really that tag is probably too broad and not very intuitively named. So I think I'm in favour of adding subsidiary tags or dividing that tag up or refactoring it or something. Not sure precisely what the best move is, though.
Productivity would also overlap with Coaching and Time-money tradeoffs, but that seems ok.
I've just now learned of www.futurefundinglist.com, which also seems relevant (though I haven't looked at it closely or tried to assess how useful it'd be to people).
Thanks for sharing this!
Some other things people might find useful:
Something else I now often tell people:
I'd suggest:
- maybe getting feedback from various people in EA who know about the sort of things you're working on but aren't as busy as the grantmakers
- then just applying and seeing whether you (a) already get accepted, (b) get rejected but with useful feedback, or (c) get rejected with no feedback but can then use that as a signal to rethink, get feedback elsewhere, and apply again with a new version of the project and explanation.
Relatedly, a few points that I now feel this post should've had more in mind are:
My initial feeling is that research assistance is a pretty different kind of thing and is closer to "research" than to "PA & similar", but that PAs, virtual assistants, and executive assistants do form a natural cluster.
But I'm not sure if that's right. And even if it is, it seems fine to call it "assistants" anyway, and just have RA-related things often get other tags too, with this tag mostly covering things other than RA work.
Personal assistance or Personal assistant or PA or Personal/executive assistant or something like that
E.g. https://forum.effectivealtruism.org/posts/bzXBZyMrnMiWu2DeF/to-pa-or-not-to-pa
Overlaps with Operations and https://forum.effectivealtruism.org/tag/pineapple-operations but seems sufficiently distinct and important to warrant its own tag
Thanks so much!
Something about quantum computing or quantum mechanics?
I'm more interested in the former, but maybe we should have the latter tag and then in practice it'll also work as the former tag, since it won't be hugely populated anyway?
Relevant posts include:
I think that this isn't a useful way of looking at the situation and doesn't match the reality well. I don't have time to fully elaborate on why I think that, but here are brief points:
I basically just think it's a bad idea to say "we don't want to waste [evaluators'] time and flood their applications process" (even with your caveats). I think there's only a small kernel of truth to this in practice, and that the statement is far more likely to mislead than enlighten people.
To elaborate:
Yeah, that seems a fair point.
One thing I'd say in response is that, as a person who's been on multiple hiring committees and evaluated many grant applications, I'm pretty confident hirers and grantmakers would be excited for people to apply even if there's a decent chance they'll ultimately pull out or decline an offer!
E.g., even if someone has a 75% chance of pulling out or declining, that just reduces the EV for the hirer/grantmaker of the person applying by a factor of 4. And that probably isn't a very big deal, given that hirers and grantm... (read more)
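To spell out the arithmetic behind that factor of 4 (a minimal sketch; it treats the evaluation cost of an application that doesn't convert as roughly negligible):

$$\mathrm{EV}_{\text{with 75\% decline risk}} = (1 - 0.75) \times \mathrm{EV}_{\text{if they'd definitely accept}} = \tfrac{1}{4}\,\mathrm{EV}_{\text{if they'd definitely accept}}$$

I.e., a 75% chance of pulling out or declining shrinks the expected value of the application (from the hirer/grantmaker's perspective) to a quarter of what it would be for an otherwise identical candidate who's certain to accept.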
Some quickly written scattered remarks on how some of these points have played out for me personally:
Coworking spaces
Do we already have a similar tag? If not, I feel fairly confident we should have this; there are at least three people / groups I know of who might find it useful to have all relevant posts collected in one place.
There are a bunch of recent relevant posts I won't bother collecting, but one is https://forum.effectivealtruism.org/posts/MBDHjwDvhDnqisyW2/awards-for-the-future-fund-s-project-ideas-competition
Compute governance
Maybe this overlaps too much with https://forum.effectivealtruism.org/tag/semiconductors ?
Relevant posts:
I think I favour dropping the word "teams" to make this broader.
We could also consider replacing this name with "crisis response", but I don't have a view on which is better.
Some orgs that should maybe be added (I'd be keen for someone to fill in the form to add them, including relevant info on them):
I'm pretty sure in the last few months there was a post that was a retrospective on a fellowship/program in continental Europe (maybe Finland or Sweden or Poland?) that was framed as paying people to translate EA content into their local language but then also intended to have the benefit of getting those people themselves interested in EA. That should get this tag. But I can't remember what the post was called.
Market testing or message testing or polling or something like that
I'm pretty unsure if we should make this entry. Also maybe these topics are too different to all be lumped together? Maybe market testing should just be covered by a tag on Digital marketing or Marketing (proposed elsewhere) and then message testing and polling should be covered by a different tag?
By message testing I mean what this page talks about: https://publicinterest.org.uk/TestingGuide.pdf
Some relevant posts: ... (read more)
Digital marketing or maybe just Marketing
Do we already have a tag quite like this? If not, I think we should almost certainly have it.
I know at least a few posts would warrant this tag and that several funders and I think entrepreneur-types and incubators are interested in the topic, so having a tag to collect posts on the topic seems good. (E.g., then we can send that tag page to people who are at an early stage of considering doing work on this.)
I'd be interested in hearing whether people think it'd be worth posting each individual research doc - the ones linked to from the table - as its own top-level post, vs just relying on this one post linking to them and leaving the docs as docs.
So I'd be interested in people's views on that. (I guess you could upvote this comment to express that you're in favour of each research idea doc being made into a top-level post, or you could DM me.)
Tabletop exercises or wargaming or maybe some other related term (scenario planning? I think that's too distant a concept, but I guess maybe it independently deserves an entry?)
I think the former name is better because it seems best to not have highly militaristic/adversarial framings in some contexts, including with respect to some/many existential risks.
Some posts this could apply to:
Thanks!
And good idea - done
Thanks for this post! I follow similar principles myself and think they're helpful, and when people ask me for things, both they and I would often benefit from their following these principles too.
Some readers might also be interested in my rough collection of Readings and notes on how to get useful input from busy people. (I've also now added a link to this post from there.)
Retreat or Retreats
I think there are a fair few EA Forum posts about why and how to run retreats (e.g., for community building, for remote orgs, or for increasing coordination among various orgs working in a given area). And I think there are a fair few people who'd find it useful to have these posts collected in one place.
No, I didn't - I ended up getting hired by Rethink Priorities and doing work on nuclear risk instead, among other things.
Thanks for this post and for helping run this project! As we've discussed, I think this is a valuable effort.
I wanted to mention a few things:
If you found this post interesting, there's a good chance you should do one or more of the following things:
Some additional additional rough notes:
Some additional rough notes that didn’t make it into the post
Quick take: Seems to have clearly boosted the prominence of biorisk stuff, and in a way that longtermism-aligned folks were able to harness well to promote interventions, ideas, etc. that are especially relevant to existential biorisk. I think it probably also on net boosted longtermist-/x-risk-style priorities/thinking more broadly, but I haven't really thought about it much.