Which interesting EA-related bluesky accounts do you know of?
I'm not using Twitter anymore since it's being used to promote hateful views, but Bluesky is quite a cool online space in my opinion.
I'm making a list of Bluesky accounts of EA-related organisations and key people. If you're active on Bluesky or some of your favourite EA orgs or key people are, please leave a comment with a link to their profile!
I've also made an EA (GHD+AW+CC) Starter Pack in case you're interested. Let me know who I should add! Effective Environmentalism also has a pack with ef...
High-variance. Most people seem to have created an account and then gone back to being mostly on (e)X/Twitter. However, there are some quite active accounts. I'm not the best person to ask, since I'm not that active either. Still, having the Bluesky account post as a mirror of the Twitter account maybe isn't hard to set up?
Quick take on Burnout
Note: I am obviously not an expert here, nor do I have much first-hand experience, but I thought it could be useful for people I work with to know how I currently conceptualize burnout. I was then encouraged to post it on the forum. This is based on around four cases of burnout that I have seen (at varying levels of proximity) and conversations with people who have seen significantly more.
Relatedly, I think in many cases burnout is better conceptualised as depression (perhaps with a specific work-related etiology).
Whether burnout is distinct from depression at all is a controversy within the literature:
I think that this has the practical implications that people suffering from bu...
From some expressions of concern about extinction risks that I have observed, extinction risks might actually be suffering risks. It could be that the expectation of death is itself a form of torment. All risks might be suffering risks.
Offer subject to my arbitrarily stopping at some point (not sure exactly how many I'm willing to do)
Give me ChatGPT Deep Research queries and I'll run them. My asks are that:
It's the first official day of the AI Safety Action Summit, and thus it's also the day that the Seoul Commitments (made by sixteen companies last year to adopt an RSP/safety framework) have come due.
I've made a tracker/report card for each of these policies at www.seoul-tracker.org.
I'll plan to keep this updated for the foreseeable future as policies get released/modified. Don't take the grades too seriously — think of it as one opinionated take on the quality of the commitments as written, and in cases where there is evidence, implemented. Do feel free to...
Hi! I’m looking for help with a project. If you’re interested or know someone who might be, it would be really great if you let me know or share this. I'll check the forum for DMs.
One of the alleged Zizian murderers has released a statement from prison, and it's a direct plea for Eliezer Yudkowsky specifically to become a vegan.
This case is getting a lot of press attention and will likely spawn further attention in the form of true crime coverage, etc. The effect of this will likely be to cement Rationalism in the public imagination as a group of crazy people (regardless of whether the group in general opposes extremism), and groups and individuals connected to rationalism, including EA, will be reputationally damaged by association.
Disclaimer: I think the instant USAID cuts are very harmful; they directly affect our organisation's wonderful nurses and our patients. I'm not endorsing the cuts. I just think exaggerating numbers when communicating, for dramatic effect (or out of ignorance), is unhelpful and doesn't build trust in institutions like the WHO.
Sometimes the lack of understanding, or of care in calculations, from leading public health bodies befuddles me.
"The head of the United Nations' programme for tackling HIV/AIDS told the BBC the cuts would have dire impacts across the gl...
Thanks Jason - those are really good points. Maybe this wasn't such a useful thing to bring up at this point in time, and in general it's good that she is campaigning for funding to be restored. I do think the large exaggeration, though, makes this a bit more than a nitpick.
I've been looking for her saying the actual quote and have struggled to find it. A lot of news agencies have used the same quote I used above, with similar context. Mrs. Byanyima even reposted on her Twitter the exact quote above...
"AIDS-related deaths in the next 5 years ...
I think there's a nice hidden theme in the EAG Bay Area content, which is about how EA is still important in the age of AI (disclaimer: I lead the EAG team, so I'm biased). It's not just a technical AI safety conference, but it's also not ignoring the importance of AI. Instead, it's showing how the EA framework can help prioritise AI issues, and bring attention to neglected topics.
For example, our sessions on digital minds with Jeff Sebo and the Rethink team, and our fireside chat with Forethought on post-AGI futures, demonstrate how there's important AI r...
When I try to think about how much better the world could be, it helps me to sometimes pay attention to the less obvious ways that my life is (much) better than it would have been, had I been born in basically any other time (even if I was born among the elite!).
So I wanted to make a quick list of some “inconspicuous miracles” of my world. This isn’t meant to be remotely exhaustive, and is just what I thought of as I was writing this up. The order is arbitrary.
1. Washing machines
It’s amazing that I can just put dirty clothing (or dishes, etc.) into a ...
I love this write-up. Re point 2 — I sincerely think we are in the golden age of media, at least in ~developed nations. There has never before been a time when any random person could make music, write up their ideas, or shoot an independent film and make a living out of it! The barrier to entry is so much lower, and there are typically no unreasonable restrictions on the type of media we can create (I am sure medieval churches wouldn't have been fans of heavy metal). If we don't mess up our shared future, all this will only get better.
Also, I feel this should have been a full post and not a quick note.
TLDR: Notes on confusions about what we should do about digital minds, even if our assessments of their moral relevance are correct[1]
I often feel quite lost when I try to think about how we can “get digital minds right.” It feels like there’s a variety of major pitfalls involved, whether or not we’re right about the moral relevance of some digital minds.
Digital-minds-related pitfalls in different situations

| Reality ➡️ Our perception ⬇️ | These digital minds are (non-trivially) morally relevant[2] | These digital minds are not morally relevant |
|---|---|---|
| We see thes | | |
@Lucius Caviola has also written about this topic, e.g. Will disagreement about AI rights lead to societal conflict?
And my two cents on why I don't think we should worry about digital sentience (plus the slicing problem). :)
https://forum.effectivealtruism.org/posts/A6W5qm9gWyr3mikmS/the-ea-case-for-trump-2024
Seems to have not been the case
Part of me thinks we should spend years reflecting on lifelong decisions before making them; hence, we ought not encourage young people (e.g., university students) to sign the GWWC pledge.
However, a bigger part of me thinks locking in altruistic desires to mitigate future selfishness is *exactly* what we should be doing. Some argue that we shouldn't make life-long decisions as young people because our preferences and values may change. Yet, to me, this is all the more reason to take the GWWC pledge; it is precisely because our altruistic tendencies might w...
I guess my issue is that this all seems strictly worse than "pledge to give 10% for the first 1-2 years after graduation, and then decide whether to commit for life". Or even "you commit for life, but with the option to withdraw 1-2 years after graduation", i.e. with the default being to continue. Your arguments about not getting used to a full salary apply just as well to those, imo.
More broadly, I think it's bad to justify getting young people without much life experience to make a lifetime pledge, based on a controversial belief (that it should be normal to give 10%...
Notes on some of my AI-related confusions[1]
It’s hard for me to get a sense for stuff like “how quickly are we moving towards the kind of AI that I’m really worried about?” I think this stems partly from (1) a conflation of different types of “crazy powerful AI”, and (2) the way that benchmarks and other measures of “AI progress” de-couple from actual progress towards the relevant things. Trying to represent these things graphically helps me orient/think.
First, it seems useful to distinguish the breadth or generality of state-of-the-art AI ...
I found these visualizations very helpful! I think of AGI as the top of your HLAI section: human level in all tasks. Life 3.0 claimed that just being superhuman at AI coding would become super risky (recursive self improvement (RSI)). But it seems to me it would need to be ~human level at some other tasks as well like planning and deception to be super risky. Still, that could be relatively narrow overall.
David R: In general I don’t think all of these questions/question posts themselves contain pivotal questions. But some of them seem promising, and in others the responses seem to generate potential pivotal questions.
Should we push for a rapid malaria vaccine rollout? — EA Forum Bots
What EA projects could grow to become megaprojects, eventually spending $10...
AI Safety Monthly Meetup - Brief Impact Analysis
For the past 8 months, we've (AIS ANZ) been running consistent community meetups across 5 cities (Sydney, Melbourne, Brisbane, Wellington and Canberra). Each meetup averages about 10 attendees, with about a 50% new-participant rate, driven primarily through LinkedIn and email outreach. I estimate we're driving unique AI Safety-related connections at a cost of around $6 each.
Volunteer Meetup Coordinators organise the bookings, pay for the Food & Beverage (I reimburse them after the fact) and greet attendees. This ini...
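For readers curious how a "$6 per connection" figure could come together, here is a back-of-envelope sketch. The attendance numbers (10 attendees, ~50% new participants) are from the post; the per-meetup food & beverage budget, meetup cadence, and connections-per-new-attendee multiplier are purely hypothetical placeholders I've chosen for illustration, not figures from the analysis.

```python
# Back-of-envelope sketch of a cost-per-connection estimate.
# Only attendance and new-participant rate come from the post;
# everything marked "assumption" is a hypothetical placeholder.

months = 8
cities = 5
meetups_per_city_per_month = 1      # assumption: one meetup per city per month
attendees_per_meetup = 10           # from the post
new_participant_rate = 0.5          # from the post
fnb_cost_per_meetup = 120.0         # assumption: hypothetical F&B budget

total_meetups = months * cities * meetups_per_city_per_month
total_cost = total_meetups * fnb_cost_per_meetup

# One illustrative way to count "unique connections": each new attendee
# forms a handful of fresh connections with the existing group.
connections_per_new_attendee = 4    # assumption
new_attendees = total_meetups * attendees_per_meetup * new_participant_rate
unique_connections = new_attendees * connections_per_new_attendee

cost_per_connection = total_cost / unique_connections
print(f"~${cost_per_connection:.2f} per unique connection")
```

With these placeholder inputs the sketch happens to land near the quoted figure, but the real estimate presumably uses the program's actual budget and a more careful definition of a "connection".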