
Posts tagged community

Quick takes

I met Australia's Assistant Minister for Defence last Friday. I asked him to write an email to the Minister in charge of AI, asking him to establish an AI Safety Institute. He said he would. He also seemed on board with not having fully autonomous AI weaponry. All because I sent one email asking for a meeting + had said meeting. Advocacy might be the lowest-hanging fruit in AI Safety.
I took a break from engaging with EA topics for like a month or two, and I think it noticeably improved my mental health and productivity, as debating here frequently was actually stressing me out a lot. Which is weird, because the stakes for me posting here are incredibly low: I'm pseudonymous, have no career or personal attachments to EA groups, and I'm highly skeptical that EA efforts will have any noticeable effect on the future of humanity. I can't imagine how stressful these discussions are for people with the opposite positions! I still have plenty of ideas I want to write up, so I'm not going anywhere, but I'll try to be more considered in where I put my effort. 
Mini EA Forum Update: You can now subscribe to be notified every time a user comments (thanks to the LessWrong team for building the functionality!), and we’ve updated the design of the notification option menus. You can see more details on GitHub here.
My previous take on writing to politicians got numbers, so I figured I'd post the email I send below. I am going to make some updates, but this is the latest version:

---

Hi [Politician]

My name is Yanni Kyriacos. I live in Coogee, just down the road from your electorate. If you're up for it, I'd like to meet to discuss the risks posed by AI.

In addition to my day job building startups, I do community / movement building in the AI Safety / AI existential risk space. You can learn more about AI Safety ANZ by joining our Facebook group here, or the PauseAI movement here. I am also a signatory of Australians for AI Safety - a group that has called for the Australian government to set up an AI Commission (or similar body). Recently I worked with Australian AI experts (such as Good Ancestors Policy) on a submission to the recent safe and responsible AI consultation process. In the letter, we called on the government to acknowledge the potentially catastrophic and existential risks from artificial intelligence. More on that can be found here.

There are many immediate risks from already existing AI systems like ChatGPT or Midjourney, such as disinformation or improper implementation in various businesses. In the not-so-distant future, certain safety nets will need to be activated (such as a Universal Basic Income policy) in the event of mass unemployment due to the displacement of jobs by robots and AI systems. But of greatest concern is the speed at which we are marching towards AGI (artificial general intelligence) – systems that will have cognitive abilities at or above human level. Half of AI researchers believe that there is a 10% or greater chance that the invention of artificial superintelligence will mean the end of humanity. Among AI safety scientists, this chance is estimated to be an average of 30%.
And these levels of risk aren’t just a concern for people in the far-distant future, with prediction markets such as Metaculus showing these kinds of AI could be invented in the next term of government. Notable examples of individuals sounding the alarm are Prof. Geoffrey Hinton and Prof. Yoshua Bengio, both Turing Award winners and pioneers of the deep learning methods that are currently achieving the most success. The existential risk of AI has been acknowledged by hundreds of scientists, the UN, the US and, recently, the EU.

To make a long story short: we don't know how to align AI with the complex goals and values that humans have. When a superintelligent system is realised, there is a significant risk that it will pursue a misaligned goal without us being able to stop it. And even if such a superhuman AI remains under human control, the person (or government) wielding such power could use it to drastically and irreversibly change the world. Such an AI could be used to develop new technologies and weapons, manipulate masses of people or topple governments.

The advancements in the AI landscape have progressed much faster than anticipated. In 2020, it was estimated that an AI would pass university entrance exams by 2050. This goal was achieved in March 2023 by the system GPT-4 from OpenAI. These massive, unexpected leaps have prompted many experts to request a pause in AI development through an open letter to major AI companies. The letter has been signed over 33,000 times so far, including by many AI researchers and tech figures. Unfortunately, it seems that companies are not willing to jeopardise their competitive position by voluntarily halting development. A pause would need to be imposed by a government. Luckily, there seems to be broad support for slowing down AI development. A recent poll indicates that 63% of Americans support regulations to prevent AI companies from building superintelligent AI.
At the national level, a pause is also challenging because countries have incentives not to fall behind in AI capabilities. That's why we need an international solution. The UK organised an AI Safety Summit on November 1st and 2nd at Bletchley Park. We hoped that during this summit, leaders would work towards sensible solutions that prevent the very worst of the risks that AI poses. As such, I was excited to see that Australia signed the Bletchley Declaration, agreeing that this risk is real and warrants coordinated international action. However, the recent policy statements by Minister Husic don't seem to align with the urgency that experts are seeing. The last safe moment to act could be very soon.

The Summit has not yet produced an international agreement or policy. We have seen proposals being written by the US Senate, and even AI company CEOs have said there is “overwhelming consensus” that regulation is needed. But no proposal so far has seriously considered ways to slow down or prevent a superintelligent AI from being created. I am afraid that lobbying efforts by AI companies to keep regulation to a minimum are turning out to be highly effective.

It's essential that the government follows through on its commitment at Bletchley Park to create a national or regional AI safety body. We have such bodies for everything from the risk of plane crashes to the risk of tsunamis. We urgently need one for ensuring the safety of AI systems.

Anyway, I'd love to discuss this more in person or via Zoom if you're in town soon. Let me know what you think.

Cheers,
Yanni
What is your "Pens Down" moment? I use "Pens Down" to mean 'Artificial Super Intelligence is, in my opinion, close enough that it no longer makes sense to work on whatever else I'm currently working on, because we're about to undergo radical amounts of change very soon/quickly'. For me, it is probably when we have something as powerful as GPT-4, except it is agentic and costs less than $100 / month. So, that looks like a digital personal assistant that can execute an instruction like "have a TV delivered for me by X date, under Y price, and organise installation and wall mounting." This is obviously a question mainly for people who don't work full time on AI Safety.

Recent discussion


The Forum will hold a Draft Amnesty Week from March 11th-17th. 

Draft Amnesty Week is a chance to share unpolished drafts, posts you aren’t sure you agree with, and drafts which have become too ugh-y to finish.

We’ll host smaller events and threads


I'll try to post two! 


We recently ran a test to see if utilising creative marketing practices could increase the performance of a brand-led digital campaign, and it did. We are sharing the results to encourage other organisations to consider the quality of their output in areas where...


Really cool experiment!

Was it possible to track to what extent the more engaging ads drove conversions? (donations made, pledges taken, etc.)

My hypothesis would be that the more engaging ads get more people onto the website, but those people will be much less likely to follow through (especially with significant amounts) than people reached by, for example, a very targeted and nerdy ad aimed at wealthy tech workers.

James Odene [User-Friendly]
Yes, absolutely. You can see a version here: it is linked in the doc above the table. I've just made it bold to make it clearer.
Thanks! My apologies for missing that.

Overview of essay series

This is the first in a collection of three essays exploring and ultimately defending the idea of choosing what feels wholesome as a heuristic for picking actions which are good for the world. I'm including a summary of the series here, before...

Thanks for writing this, a pleasure to read as always. I must admit I come away rather confused by what you mean by 'wholesomeness'. Is wholesomeness basically consequentialism but with more vibes and less numbers? Your account makes it seem quite close to consequentialism. It also seems really close to virtue ethics - you try to differentiate it by saying it rejects "focus[ing] single-mindedly on excelling at one virtue", but my impression was that virtue ethics was all about balance and the golden mean anyway. And then it seems pretty close to sincerity/integrity also.

I was especially confused by this section: apparently the activities I think most people would be most likely to label wholesome are only "often... somewhat" wholesome. And I think most people would basically never describe experimenting with drugs as wholesome. Maybe it might be good, but if it is good, it's good for some other reason (like being educational), not because it's wholesome.

I think you actually have a really revisionist account of 'wholesomeness' - so revisionist I think you should probably just pick a new word. It seems like you are trying to rely on some of the vibes of the word while giving it a new meaning which fixates on the word 'whole' to the neglect of the actual historical denotation. Samwise is one of the most wholesome characters I know, but it's not because he was attending to the whole of Middle Earth - it's because of his courage and humility, and his loyalty to Frodo, Rosie and the Shire. A good officer - or Denethor - comes much closer to attending to the whole, but that doesn't mean his batman isn't more wholesome.

Is wholesomeness basically consequentialism but with more vibes and less numbers?

I think it's partially that (where the point of the vibes is often that they're helpful for tracking things which aren't directly good/bad, but have an increased chance of causing good/bad things down the line). It's also a bit like "consequentialism, but with some extra weight on avoiding negative consequences to things you're interacting with" (where the point of this is that it distributes responsibility in a sensible way across agents).

I think you actually have a really re

...
Owen Cotton-Barratt
I definitely think it's important to consider (and head off) ways that it could go wrong! Your first two bullets are discussed a bit further in the third essay, which I'll put up soon. In short, I completely agree that sometimes you need visionary thought or revolutionary action. At the same time, I think revolutionary action -- taken by people convinced that they are right -- can be terrifying and harmful (e.g. the Cultural Revolution). I'd really prefer it if people engaging in such actions felt some need to first feel into what is unwholesome about them, so that they're making the choices consciously and may be able to steer away from the most harmful versions.

On your third point, I kind of feel the other way? Like, I think it feels wholesome to have a certain level of support for staff, but lots of cushy benefits doesn't really feel wholesome, and I feel it is more likely to come from people in an optimizing "how do we make ourselves attractive to staff?" mindset. (Am I an outlier here? Does it feel wholesome to you to have cushy benefits for staff?)

Edit: On the third point, I do think that emphasising wholesomeness would lead to fewer people pushing themselves to the point of burnout. I have mixed feelings about this. The optimistic view is that it would help people to find healthy, sustainable balances, and also help reduce people being put off by seeing burnout. The pessimistic view is that it would lead to just less work, and also perhaps less of a culture of taking important things very seriously.

This is the second of a collection of three essays, ‘On Wholesomeness’. In the first essay I introduced the idea of wholesomeness as a criterion for choosing actions. This essay will explore the relationship between acting wholesomely and some different conceptions...

Owen Cotton-Barratt
I'm guessing that the word is just used differently in different contexts or circles? Your comment made me wonder how much I was just stuck in my own head about this. So I asked ChatGPT about the sentence you're labelling as nonsensical, and it said: Of course I guess that ChatGPT is pretty good at picking up on meanings which are known anywhere, so this is evidence more that I'm aligning with one existing usage of the word, rather than that all native English speakers will understand it that way (and you're providing helpful evidence against the latter claim).
The same could be said about e.g. many fake aphorisms people come up with. Something can function to make you pause for thought, but still be nonsensical. It’s also obvious that ChatGPT is bullshitting, because such a short sentence almost by definition cannot be “comprehensive”.

OK, fair complaint.

Another data point that this is how some other people understand the word is this comment by Gordon S Worley on LessWrong:

I don't think it has to be hard to say what wholesomeness is. I don't know what you mean by the word, but to me it's simply acting in a way that has compassion and respect to everything, leaving nothing out. Very hard to do, but easy enough to state.

I have a new paper coming out in the Australasian Journal of Philosophy: Critical-Set Views, Biographical Identity, and the Long Term.

The paper is about critical-level and critical-range views in population axiology. I argue that these views run into trouble once we start...



Join an online speaker event with ARMoR (Alliance for Reducing Microbial Resistance), a Charity Entrepreneurship incubated advocacy organization combating the growing threat of antimicrobial resistance (AMR). There will be a talk followed by a Q&A.

Antimicrobial resistance...


Will the recording from the meeting be uploaded somewhere?

Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

How factory farmers block progress — and what we can do about it

This seems like an isolated demand for rigour. When Hormel employees and other associated people gave $500k to an end-of-life care charity - a donation which is part of Lewis's data - I don't think this was a secret scheme to increase beef consumption. (I'm not really sure why it's captured in this data at all actually). People who work in agriculture aren't some sort of evil caricature who only donate money to oppose animal protection; a lot of their donations are probably motivated by the same concerns that motivate everyone else.

When Hormel employees and other associated people gave $500k to an end-of-life care charity - a donation which is part of Lewis's data - I don't think this was a secret scheme to increase beef consumption.

Ya, I wouldn't want to count that. I didn't check what the data included.

People who work in agriculture aren't some sort of evil caricature who only donate money to oppose animal protection; a lot of their donations are probably motivated by the same concerns that motivate everyone else.

I agree. I think if the money is coming through an interest/industry ...

Vegans could donate to an animal protection group, like HSUS, to lobby on their behalf. That should make it clear why they’re donating.

Written by Ayubu Nnko, @Daniel Abiliba, Alfred Sihwa with @Aurelia Adhiambo and @AnimalAdvocacyAfrica.

Disclaimer: This post was originally published on our website in May 2023. We decided to post this on the EA Forum now in order to make the post accessible to the wider EA audience. Circumstances and details may have evolved since our original publication, and any statements made herein are reflective of the context at that specific time. We encourage our audience to refer to the latest updates and developments from the organisations for the most accurate and current information.

The cage-free movement is increasingly gaining momentum all over the continent. More consumers, international organisations, and activists are calling for a ban on cruel battery cages, which are detrimental to animal welfare and pose serious threats to public and consumer health. At the core of this important work...


What is this post?

This post is a companion piece to recent posts on evidential cooperation in large worlds (ECL). We’ve noticed that in conversations about ECL, the same few initial confusions and objections tend to be brought up. We hope that this post will be useful...


Executive summary: This post addresses common objections and questions about evidential cooperation in large worlds (ECL), which argues we should cooperate with distant civilizations that use similar reasoning.

Key points:

  1. ECL combines reasonable ideas from decision theory and assumes a large universe. It is counterintuitive but worth considering.
  2. There are good arguments against causal decision theory and for noncausal theories that support ECL.
  3. ECL does not seem to be a Pascal's mugging. The ideas behind it are not that unlikely.
  4. ECL's implications may be dam
...