I worked with Sam for 4 years and would recommend the experience. He's an absolute blast to talk tech with, and a great human.
Maybe a report from someone with a strong network in the Silicon Valley scene about how AI safety's reputation is evolving post-OAI-board-stuff. (I'm sure lots of takes already exist; I'd be curious for either a data-driven approach or a post that takes a levelheaded survey of different archetypes.)
I'm not sure if this qualifies, but the Creative Writing Contest featured some really moving stories.
I have a spotify playlist of songs that seemed to rhyme with EA to me.
Some good kabbalistic significance to our issue tracker, but I'm not sure how.
First, a note: I've heard recommendations to try to lower the number of open issues, but I've never understood them except as a way to pretend you don't have bugs. For sure some of those issues are stale and out of date, but quite a few are probably live but ultimately very edge-case and unimportant bugs, or feature requests we probably won't get to but that could be good. I don't think it's a good use of time to prune the tracker, and most of the approaches I've seen companies take are to auto-clo...
Thanks for the report. We currently do the second, which isn't ideal to be sure. If someone redrafts and republishes after a post has been up for a while, an admin will have to adjust the published date manually. This happens surprisingly infrequently relative to what I might've expected, so we haven't prioritized improving that.
My guess is that cause-neutral activities are 30-90% as effective as cause-specific ones (in terms of generating labor for that specific cause), which is remarkably high, but still less than 100%.
This isn't obvious to me. If you want to generate generic workers for your animal welfare org, sure, you might prefer to fund a vegan group. But if you want people who are good at making explicit tradeoffs, focusing on scope sensitivity, and being exceptionally truth-seeking, I would bet that an EA group is more likely to get you those people. And so it seems plaus...
Relatedly: I expect that the margins change with differing levels of investment. Even if you only cared about AI safety, I suspect that the correct amount of investment in cause-general stuff is significantly non-zero, because you first get the low-hanging fruit of the people who were especially receptive to cause-general material, and so forth.
So it actually feels weird to talk about estimating these relative effectiveness numbers without talking about which margins we're considering them at. (However, I might be overestimating the extent to which these different buckets are best modelled as having distinct diminishing returns curves.)
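To make the margins point concrete, here's a toy sketch. Everything in it is made up for illustration (the function names, the numbers, and the exponential diminishing-returns shape are my assumptions, not anything from the post): even if cause-general outreach were only 50% as effective overall, its marginal value can exceed that of cause-specific outreach once the cause-specific curve's low-hanging fruit is gone.

```python
# Toy model with made-up numbers: diminishing returns to cause-specific
# vs. cause-general outreach, measured in "workers produced for the cause".
import math

def workers(spent, scale, ceiling):
    # Concave returns: early dollars reach the most receptive people.
    return ceiling * (1 - math.exp(-spent / scale))

def marginal(spent, scale, ceiling, step=1.0):
    # Value of the next dollar at a given level of prior investment.
    return workers(spent + step, scale, ceiling) - workers(spent, scale, ceiling)

# Suppose cause-general outreach maxes out at half as many workers
# for this cause, but draws on its own separate pool of receptive people.
general = dict(scale=100.0, ceiling=50.0)
specific = dict(scale=100.0, ceiling=100.0)

# With $300 already spent cause-specifically and $0 cause-generally,
# the next dollar does more on the untouched cause-general curve:
print(marginal(300, **specific))  # deep into diminishing returns
print(marginal(0, **general))     # fresh curve, higher marginal value
```

So "X% as effective" answers differ depending on where on each curve you're asking, which is the point about margins above.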
I'm curating this post. I appreciate the careful reasoning, and your taxonomies make sense. For readers who may not have time for the whole sequence, I recommend reading up to the start of the preliminaries section.
I really like the ambitious aims of this model, and I like the way you present it. I'm curating this post.
I would like to take the chance to remind readers about the walkthrough and Q&A on Giving Tuesday a ~week from now.
I agree with JWS. There isn't enough of this. If we're supposed to be a cause-neutral community, then sometimes we need to actually attempt to scale this mountain. Thanks for doing so!
You can think of the GWWC pledge as analogous to marriage, which would make the trial pledge something like moving in together. In the romance analogy, some friends of mine who are reasonably averse to lifelong commitments do "handfasting": intentionally not-lifelong partnerships. A thought I've had for a while is that the Trial Pledge, by virtue of its name if nothing else, is a poor fit for the handfasting role, where often the intention is never to get married (/ take the pledge).
(Anyway, all academic for me as I'm crazy enough to have done the lifelong pledge.)
I'm curating this post. I love the way you have done this link-post, pulling out sections of interest to the EA community. Always helpful to see order of magnitude updates to EA BOTECs.
I'm curating this post. It's very personal and well-written, and I'm excited to highlight, during the Effective Giving Spotlight week, a post from someone who's been earning to give for so long.
I've updated it. One oversight of the page was that it didn't mention that Intercom is desktop-only. If you're on desktop and you don't see it, can you try the new debugging steps? But also feel free to email us.
I'm curating this post. I really like the honesty in this post. Evidence I have that makes me think it's doing unusually well here:
One of Alvea’s biggest indirect achievements [...] is the growth and development that our projects catalyzed for the people who worked there.
I agree with this, from my outside vantage point. It seems pretty wild how some environments tend to level people up while others do so much less.
I also like the way you divide up the claims. I think this paper is a really neat demonstration of point 1, and I'm kinda disappointed with the discourse for getting distracted arguing about point 2.
I'm curating this post. There have been several recent posts on the theme of RSPs. I'm featuring this one, but I recommend the other two posts to readers.
I particularly like that these posts mention that they view these policies as good for eventual regulation, and are willing to be clear about this.
There’s some reason to believe shrimp paste could be easier to create plant-based substitutes for than other shrimp products, and that the alternative proteins market might not naturally have the right incentives to create excellent substitutes quickly.
This analysis seems right to me (intuitively).
Idea: an EAA funder pre-commits to purchase (subsidize?) the first $UNITs of plant-based shrimp paste.
I'm curating this post. I often find myself agreeing with the discomfort of applying EV maximization in the case of numerous plausibly sentient creatures. And I like the proposed ways to avoid this — it helps me think about the situations in which I think it's reasonable vs unreasonable to apply them.
I'm curating this post. This is a well-written summary of the AI Pause Debate, and I'm excited for our community to build on that conversation, through distillation and more back-and-forth.
I'm curating this post. I think for many EA donors, knowing about Open Philanthropy's plans will be an important part of their models. I appreciate the transparency in general, and the detailed writeup aimed at critical donors in particular.
I'm curating this post. I know you don't intend for it to be exhaustive, but it is nevertheless very thorough. I agree with @PeterSlattery that people considering founding/running orgs in these spaces would benefit from seeing this information, and I think you do a good job of presenting it.
I highly recommend Asana. I have used a few different options in my personal life, including Todoist, and like Asana better than them.
If FutureTech is going to be a Notion shop, then I would use Notion for task tracking as well. But if not, then I think your users will find Asana easier to grok.
I'm curating this post — I really like how it was short and focused on very concrete actions that could be done in one weekend.
I know a guy[1] who's done the same Manager ↔ IC transition, also at Google. I do really respect this part of Google's culture.
My dear old Dad
We investigated how much karma from Community posts was distorting users' karma totals, relative to what they'd be without the Community section, and relative to our personal "overrated-vs-underrated commenter" ratings. Somewhat surprisingly, changing the weighting didn't yield much improvement, so we decided to stop working on the project.
Ultimately, you shouldn't take a user's Forum karma as having much correlation with their impact. It's quite easy to have a lot of impact with low karma, or to be a mostly terminally online person who doesn't get much object-level work done.
On the first paragraph: this is definitely something that bothers me a bunch, and that I hear about often. Sadly, it's quite hard to fix. We'd need a bespoke Google Docs importer to do so, and that's probably too large a project.
On the second: Noted.
I'm excited to curate this Career Conversations Week post. It's an easy read and seems helpful in evaluating a very high upside career path.
Is there a way to snooze the community tab or snooze / hide certain posts? I would use this feature.
I'm curating this post. I also highly recommend reading the post that I interpret this post as being in conversation with, by @bean.
These posts, along with the conversation in the comments, are what it looks like (at least from my non-expert vantage point) to be actually trying on one of the key cruxes of nuclear security cause prioritization.
I'm curating this. Along with other commenters, I really like the focus on the marginal grant. If I were to write a post that would help donors understand the impact of their donations to the Long Term Future Fund, it would look a lot like this.
While I'm sympathetic to the reasoning, I was sad to hear that EA Funds would stop publicly sharing all of its grants. To my mind this post goes a long way towards remedying that, and makes me much more likely to recommend the Long Term Future Fund to others. (That strikes me as a surprisingly large update, but I stand by it.)
Thanks a bunch for writing this!
That's what I mean by something automatic. I'm not sure, without trying it, whether it'd be a terrible and disorienting experience that was wrong most of the time, or whether it'd actually be useful.
I expect this to be hard to get right, but I think it would in fact remove a major bottleneck to returning to a post. Claim: the hard part is getting people to set their bookmarks. Maybe we could do something automatic?
I've recorded the feedback, thank you! The anticipation that some of this might be distracting was the motivation for the feedback button, which makes me concerned to hear that it's not working for you. Could you check your cookies to see if you've enabled functional cookies? (See the link in the second paragraph.)
Thanks a bunch for this very helpful overview — I'm curating it.
It covers the things I've learned from casually observing the field, and introduced me to new considerations and more detail. I'm very glad to have read it, and I recommend it.
I assume this is for your own psychology? My recommendation here is to use your ad-blocker to block the specific element.
I've just submitted a change that will make this uBlock Origin rule work:
###karma-info
(Note the three #s)
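In case it's useful: uBlock Origin cosmetic filters can also be scoped to a single site, so the element is only hidden here rather than on every page with that id. (The domain below is my assumption about which site you're on; swap in the right one.)

```
forum.effectivealtruism.org###karma-info
```

The unscoped three-`#` form is the generic filter prefix `##` followed by the CSS id selector `#karma-info`, which is why it applies everywhere.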
I just added this to a recent related improvement. Should be fixed when that Pull Request gets merged.
I agree with you, and so does our issue tracker. Sadly, it does seem a bit hard. Tagging @peterhartree as a person who might be able to tell me that it's less hard than I think.