Use this thread to post things that are awesome, but not awesome enough to be full posts. This is also a great place to post if you don't have enough karma to post on the main forum.

Consider giving your post a brief title to improve readability.


[Did Career Research to Reach a Sense of Conclusion]

About a year ago I left an organization I was closely involved in, and I spent the past year struggling quite a bit with what I wanted to do with my career. I applied to PhD programs in economics and got into a top one, Stanford. I spent several months thinking about what causes I want to focus on and whether this made sense for my career, and I made a pretty ambitious plan of reading up on debates that pertained to crucial considerations I'd long neglected, such as how RCTs compare to other forms of evidence, what the best theories of consciousness are and their implications, and what the odds of various risks are.

I'm pretty happy with where I am now after doing this. I updated my moral weight for animals downwards (though it was quite high before so it's still pretty high), my probability of machine sentience in the medium-term upwards, and my views on different forms of evidence and seriousness of risk stayed largely the same. Overall, I concluded that most of the things I'm most concerned about have research as their biggest gap, and the PhD is well-suited to that. I like to contemplate my career regularly, but I've reached a place of satisfaction that I had not been in for a while. Meanwhile, I started the PhD and am very happy with it so far.

Hi, I’m ACE’s new research director. Help give me karma to post on the forum!

Hi Toni, the moderators can also give you posting ability; you should be all set now!

Ah, good to know. Thanks!

Hi Tonia, Julia. I'm new here and apologise for having to use your thread to pass on my message, which is totally unrelated to your post. I wanted to ask if there's an EA group in Lagos, Nigeria that I could join.

I actually tried posting a comment directly on the open thread but saw that I needed 5 points to do that. Somehow, I still think I'm not doing this as I ought to, because I imagine I should be able to post on an open thread without points, but I admit I don't know the rules just yet.

Hi! Sorry, I don't think there is a group in Nigeria. Here's a guide to starting one if you're interested in a project: https://app.effectivealtruism.org/groups/resources/resources-and-support

Thanks, Julia.

jai

I've been asked to post "500 Million But Not a Single One More" on the EA Forum so that it's easier to include in a sequence of EA-related content. I need 5 karma to post - and the most straightforward way to get that seems like straight-up begging.

Just so this comment isn't entirely devoid of content: This Week In Polio is a great way to track humanity's (hopefully) final battles against Polio and one of my go-to pages when I'm looking to feel good about my species.

You made it!

[Intercultural online communication]

The EA Hotel recently hosted EA London's weeklong retreat, and I got a chance to meet lots of EAs in Europe, which was great! One of the many interesting discussions I had was about intercultural communication differences in online discussion. Apparently my habit of spending a few minutes thinking about someone's post and writing the first thing that comes into my head as a comment is "very American". It seems that some EAs in the UK like to be fairly certain about their ideas before sharing them online, and when they do share their ideas, they put more effort into hedging their statements to communicate the correct level of confidence. I thought this was important for forum readers to know; I would hate for people to think that the thoughts I have off the top of my head are carefully considered, and similarly, it seems worth knowing that some forum users comment infrequently because they want the thoughts they do share to carry more weight. This is plausibly more of a UK vs US cultural difference than a cultural difference between the UK & US EA communities specifically, but it still seems worth knowing.

RESEARCH PROPOSAL

Mindfulness And Effective Altruism: Can The Former Help The Latter?

In the UK, poor mental health is the main cause of disability amongst adults of working age. 1 in 4 people experience a mental health problem at some point in their lives and an estimated 15% of people at work suffer symptoms of a mental health condition. Depression, anxiety, stress, compassion fatigue and burnout are the most common sources of mental distress and disturbance in the workplace. These conditions are exacerbated amongst the helping and caring professions and are considered largely preventable...

Hello Everyone

I just wanted to write and introduce myself. I have been a member of the EA community for a few years and feel now is the time to make my first post. My background is in mental health, research and psychological wellbeing, and since studying Bioethics in 2013 I have become increasingly influenced by EA.

Last year I spent some time at CEA as a Mental Health Research Assistant Intern. I wrote several reports drawing on the existing evidence base in the psychological sciences, exploring ways to protect and promote optimum psychological wellbeing. A significant component of the workplace wellbeing initiative I proposed involved following the Mindfulness Initiative (2015) recommendations

(http://www.themindfulnessinitiative.org.uk/…/Mindfulness-AP…)

by offering an 8-week (two hours per week) mindfulness-based cognitive therapy (MBCT) course to employees as a health intervention aimed at addressing occupational mental health issues, including work-related stress.

The emerging evidence for secular MBCT interventions in the workplace is encouraging. The reason this is likely to be particularly important for EA speaks to the shared ethics, values, virtues and character strengths explicit in mindfulness-based interventions (MBIs); see Baer (2015)

(https://link.springer.com/…/pdf/10.1007%2Fs12671-015-0419-2…)

and the personal attributes / qualities we may wish to cultivate within the EA community. My particular interests include qualities like compassion (for self and others), friendliness, openness, kindness, gratitude and wisdom.

Last weekend I attended the EA Blackpool Hotel Life Review, where I discussed some of these ideas, and my presentation was received with interest. I have agreed to run some mindfulness-based meditation sessions at EAG London next month, where I look forward to meeting many more EAs.

Finally, I have recently received an offer to study for a DPhil at Oxford University (Department of Psychiatry supervised by Oxford Mindfulness Centre (OMC)) exploring mindfulness in the workplace with a specific interest in the EA community.

OMC is one of the leading institutions in the world for mindfulness training and research, and has recently been nominated to receive a Chair in Perpetuity in Mindfulness and the Psychological Sciences, so this is a very exciting, topical and world-leading research opportunity. Unfortunately, following EA Grants' decision not to fund tuition fees / stipends this year, I have not yet secured funding (despite being due to start next month).

If you have any ideas, thoughts or comments regarding funding streams or an interest in this area please feel free to get in touch. I would be delighted to hear from you.

Thank you for reading, Georgina

Lovely meeting you at the EA Hotel Gina :) the importance of this research has been growing on me in the past few weeks, and I’m in full support of your work. Even your short workshop on the topic significantly changed my life (perspectives, behaviour and priorities). I’ll keep an eye and ear out, and just let me know if there’s any specific way I can help. Also, I’m trying to access the link but it’s broken :(

Hi Lauren

Thank you for your kind words :)

As you may remember, I spent last week at the Oxford Mindfulness Centre Summer School being taught by Prof Mark Williams, Prof Willem Kuyken, Chris Cullen and Prof Ruth Baer. Their collective experience and teaching was profound. They spoke so eloquently about the confluence of ancient wisdom and modern psychological science and how compassion (for ourselves and others) can be cultivated. I really believe the EA community could be greatly enhanced if everyone made taking care of themselves (and each other) a priority, and there is an emerging evidence base for how best to do this...

I am, however, running out of time to secure funding, as I have to pay tuition and college fees on 1st October.

Sorry the links didn't work - here they are again:

http://www.themindfulnessinitiative.org.uk/publications/mindful-nation-uk-report

https://link.springer.com/content/pdf/10.1007%2Fs12671-015-0419-2.pdf

Title: Shamelessly asking for karma

Hello! My name is Benjamin Pence. I am a multi-year RSS lurker, first time poster. Can the lovely people of the community please give me enough karma to post? I swear I'm not a robot. Probably. I can do CAPTCHAs after all.

You made it to five karma.

I’d love to see an “EA Job board” organized not around open jobs, but around EAs who are looking for jobs. I’m hiring for an EA job and went to search for Boston-area EAs looking for work, and realized the tool I needed didn’t exist. This could probably be as simple as an “EA Job seekers” group on LinkedIn.

I'd be happy to post it for you on the Boston EA group if it's specifically in the Boston area.

[Intro to Cause Prioritisation Article]

I recently created a non-technical introduction to cause prioritisation, which some of you might find useful. It can be found here. When I accrue enough karma, I also plan to post it on the main forum. Let me know what you think!

EA Operations Job Opportunity

If anyone is interested in a job in Vancouver or remote, Rethink Charity is in need of an Operations and Administration Officer: https://rtcharity.org/operations-officer/

[Criminal Justice Reform Donation Recommendations]

I emailed Chloe Cockburn (the Criminal Justice Reform Program Officer for the Open Philanthropy Project) asking what she would recommend to small donors. She told me she recommends Real Justice PAC. Since contributions of $200 or more to PACs are disclosed to the FEC, I asked her what she would recommend to a donor who wants to stay anonymous (and whether her recommendation would be different for someone who could donate significantly more to a 501(c)(3) than a 501(c)(4) for tax reasons). She told me that she would recommend 501(c)(4)s for all donors because it's much harder for 501(c)(4)s to raise money and she specifically recommended the following 501(c)(4)s: Color of Change, Texas Organizing Project, New Virginia Majority, Faith in Action, and People's Action.

I asked for and received her permission to post the above.

(I edited this to add a subject in brackets at the top.)

[Startup to improve predictions]

I'm currently working on the startup https://www.primeprediction.com/. We aim to help organizations make better decisions by improving their prediction capabilities.

We're currently very early stage and are learning more about the problems people face when making predictions/forecasts.

I'll be happy to answer any questions you may have. I'd also love to hear your feedback, especially about concrete problems you have faced in your line of work for which our product could be relevant.

[European Union Election 2019]

In one year, the election to the European Parliament will be held. The differences between parties can be unclear even at the national level, and so far I haven't seen any attempts to systematically compare how the party groups have voted and what they themselves claim to stand for.

To make it easier to cast a well-informed vote, I would like to gather interested EAs and put together a handy guide to which group to vote for depending on one's values. To make it more useful, I'd suggest that it be written not only with EAs in mind, but also other voters.

Has anyone reframed prioritisation choices (such as x-risk vs. poverty) as losses, to check whether people are really biased?

I’ve read a little about the possibility that preferences for poverty reduction/global health/animal welfare causes over x-risk reduction may be due to some kind of ambiguity-aversion bias. Given the choice between donating US$3,000 to (A) save one life now with high certainty, or (B) potentially save 10^20 future lives (this may even be a conservative guess, but it's the reasoning that matters here, not the numbers) by making something like a marginal 10^-5 contribution to reducing some extinction risk by 10^-5, people tend to prefer the "safe" option A, despite the much larger expected payoff of B. However, such a bias is sensitive to framing effects: people usually prefer sure gains (like A) but uncertain losses (like B'). So I've been trying to find out, without success, whether anyone has reframed this decision as a matter of losses, to see whether one prefers, e.g., (A') reducing deaths from malaria from 478,001 to 478,000, or (B') reducing by 10^-10 the odds of an extinction that would cost 10^20 lives.
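For what it's worth, here's a minimal sketch of the expected-value arithmetic behind these options, using the illustrative numbers above purely as placeholders (not real estimates). Under those assumptions the gain framing (B) and the loss framing (B') have the same expected value, so a preference reversal between the two framings would point to a framing effect rather than a real difference:

```python
# Rough sketch of the expected-value arithmetic in the example above,
# using the comment's illustrative numbers (placeholders, not real estimates).

lives_at_stake = 1e20          # hypothetical future lives at risk
marginal_contribution = 1e-5   # assumed share of the risk-reduction effort a $3,000 donation buys
risk_reduction = 1e-5          # assumed reduction in extinction probability from that effort

ev_option_a = 1.0  # Option A: one life saved with (near) certainty
ev_option_b = lives_at_stake * marginal_contribution * risk_reduction  # Option B (gain framing)

# Option B' (loss framing): a 1e-10 reduction in the odds of losing 1e20 lives
ev_option_b_prime = lives_at_stake * 1e-10

print(f"Option A  expected lives saved: {ev_option_a:.0f}")        # 1
print(f"Option B  expected lives saved: {ev_option_b:.1e}")        # 1.0e+10
print(f"Option B' expected lives saved: {ev_option_b_prime:.1e}")  # 1.0e+10
```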

Perhaps there’s a better way to reframe this choice, but I'm not interested in discussing one particular example (though I'm concerned with the possibility that there's no bias-free way of framing it). My point is that, if one chooses A in the gains framing but B' in the losses framing, then we have a strong case for the existence of a bias.

(I'm well aware of other objections to x-risk causes, such as Pascal's mugging and discount-rate arguments – but I think they've received due attention, and should be discussed separately. Also, I'm mostly thinking about donation choices, not about policy or career decisions, which are a completely different matter; however, IF this experiment confirmed the existence of such a bias, it could influence the latter, too.)

I'm new here. Since I suspect someone has probably already asked a similar question somewhere else - but I couldn't find it, so sorry for bothering you - I'm mostly trying to satisfy my curiosity; however, there's a small probability that it touches an important unsolved dilemma about global priorities: x-risk vs. "safe" causes. I'm not looking for karma - though you can't have too much of it, right?

[This comment is no longer endorsed by its author]

I have some thoughts on insect and animal welfare from an effective altruism perspective. They are questions and ideas I have thought about for a while, and it would be a long post. May I post it here? Thank you!

Hello! I am MercifulVoice and I am a new member of E.Altruism Forums!

Possibly Highly Effective Ways to Address Climate Change

This thread by Josh Busby gives plenty of examples—

http://twitter.com/busbyj2/status/1038269431439388672

Skip the commentary I’ll add on after this sentence and just read that thread if you’re trying to save time.

As this article (https://amp.theguardian.com/environment/2018/aug/29/local-climate-efforts-wont-undo-trump-inaction?__twitter_impression=true) makes clear, local actions to address climate change that do not scale up to the national or higher level are just feel-good nothingburgers. It’s possible that every local action has a chance to scale to that level, but not all will scale at the speed that we need, and some actions taken to address climate change actually make things worse (see some advertising campaigns from the past.)

Cool!
