A lot has changed for the Mental Health Navigator over the past year! This post covers what's new from the last few months, as well as the volunteering opportunities we currently have available.

Data Bank of Free, Low-cost, or Sliding Scale Resources

The data bank on our Resources page has been reformatted to be more easily navigable and updated to contain only free, low-cost (<$100), and sliding-scale resources; it now lists 260 of them! We’re continuously looking to expand the data bank, with a goal of reaching 500 entries by December. If you’d like to help us reach this target, please send us any mental health resources you recommend by filling out this form or emailing us at info@mentalhealthnavigator.co.uk. Information about what we’ll accept and our quality control process is available on the data bank page of our website: https://www.mentalhealthnavigator.co.uk/resource-data-bank

Newsletter

We now have a monthly newsletter! It’s called MentNav, and it features the mental health resources we find and add to our growing data bank, articles and blog posts, and opportunities at the Mental Health Navigator. Feel free to subscribe here: https://mailchi.mp/mentalhealthnavigator.co.uk/ment-nav

If you’re involved in the mental health space and would like to have anything included in the newsletter, please send us an email at info@mentalhealthnavigator.co.uk.

Advisory Service Open to Everyone

Our Advisory Service is now open to everyone! You can book a consultation via the booking form on our Advisory Service webpage: https://www.mentalhealthnavigator.co.uk/advisory-service

If you don’t see a time that works for you, please check back later: new volunteers have recently joined us, and their availability will become visible in the coming weeks.

Providers Table

Our Providers Table has grown significantly over the last year to include 92 providers, and it now receives an average of 141 visitors every month! If there’s anyone you recommend adding to the Providers Table, please fill out this form.

If you’re someone who would like to be listed in the Providers Table, please send us an email at info@mentalhealthnavigator.co.uk.

Looking for Volunteers for Data Bank, Blog, and Advisory Service

Looking for a volunteering opportunity this autumn? We’re accepting applications for volunteers at the Mental Health Navigator! We’re currently looking for one more Advisory Service volunteer (based in the UK), as well as Data Bank volunteers and Content Writing volunteers. To find out more about volunteering with us and to apply, please visit our Get Involved page: https://www.mentalhealthnavigator.co.uk/get-involved
