Sorry, I don't have any experience with that.
I recently made RatSearch for this purpose. You can also try the GPT bot version (more information here).
Recently, I made RatSearch for googling within EA-adjacent websites. Now you can try the GPT bot version! (GPT Plus required.) The bot is instructed to interpret what you want to know in relation to EA and try to search for it on the Forums. If that fails, it searches through a wider list of EA websites curated by EA News. If that fails again, it broadens the search to the whole web.
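The tiered fallback described above can be sketched roughly as follows. This is only an illustration of the Forums → curated list → whole web logic, not RatSearch's actual implementation: the domain lists and the in-memory mock search function are assumptions invented for the example.

```python
# Hypothetical sketch of the tiered fallback the bot is instructed to follow.
# Domain lists and the mock index are illustrative assumptions.

EA_FORUMS = ["forum.effectivealtruism.org", "lesswrong.com"]
EA_WEBS = EA_FORUMS + ["80000hours.org", "givewell.org"]  # stand-in for the EA News list

# Mock index of (domain, title) pairs standing in for a real search API.
INDEX = [
    ("80000hours.org", "Career advice on particle physics"),
    ("example.com", "Public transport statistics"),
]

def search(query, sites=None):
    """Return indexed pages matching the query, optionally restricted to `sites`."""
    q = query.lower()
    return [(d, t) for d, t in INDEX
            if q in t.lower() and (sites is None or d in sites)]

def tiered_search(query):
    # Tier 1: the Forums only.
    hits = search(query, sites=EA_FORUMS)
    if hits:
        return hits
    # Tier 2: the wider curated list of EA websites.
    hits = search(query, sites=EA_WEBS)
    if hits:
        return hits
    # Tier 3: the whole web (no site restriction).
    return search(query)
```

With this toy index, "particle physics" misses the Forums but is found on an EA website (tier 2), while "public transport" falls through to the whole-web search (tier 3).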
Cons: ChatGPT uses Bing, which isn't entirely reliable when it comes to indexing less-visited websites.
Pros: It's fun for brainstorming EA connections/perspectives, even when you just type a raw phrase like "public transport" or "particle physics".
Neutral: I have yet to test whether it works better when you explicitly limit the search using the site: operator - try AltruSearch 2, which seems better at digging deeper within the EA ecosystem; AltruSearch 1 seems better at digging wider.
My intention was to make any content published by OpenAI accessible.
Yes, OpenAI's domain name is in the list because they have a blog.
Thanks, I've changed it up.
I've just put together a collection of related resources. Fossil fuel depletion is the only mineral-resource depletion suggested to have longtermist significance in WWOTF. Metals can be recycled efficiently for long enough that I expect us to develop AGI/nanotechnology before their depletion could start to become problematic. Recycling uranium would be quite advantageous, but I'd be skeptical regarding its tractability, and it seems we'll get by with renewable energy.
I've just put together a post collecting related articles here.
Update: I'm pleased to learn that Yudkowsky seems to have suggested a similar agenda in a recent interview with Dwarkesh Patel (timestamp) as his greatest source of predictable hope about AI. It's a rather fragmented bit, but the gist is: perhaps people doing RLHF could get a better grasp on what to aim for by studying where "niceness" comes from in humans. He's inspired by the idea that "consciousness is when the mask eats the shoggoth" and suggests, "maybe with the right bootstrapping you can let that happen on purpose".
I see a very important point here: human intelligence isn't misaligned with evolution in a random direction; it is misaligned in the direction of maximizing positive qualia. Therefore, it seems very likely that consciousness played a causal role in the evolution of human moral alignment - and such a causal role should be possible to study.
Suggestion: Integrated search across LessWrong, EA Forum, Alignment Forum, and perhaps Progress Forum posts.