New & upvoted


Posts tagged community

Quick takes

I met Australia's Assistant Minister for Defence last Friday. I asked him to write an email to the Minister in charge of AI, asking him to establish an AI Safety Institute. He said he would. He also seemed on board with not having fully autonomous AI weaponry. All because I sent one email asking for a meeting + had said meeting.  Advocacy might be the lowest hanging fruit in AI Safety.
Mini EA Forum Update You can now subscribe to be notified every time a user comments (thanks to the LessWrong team for building the functionality!), and we’ve updated the design of the notification option menus. You can see more details in GitHub here.
My previous quick take on writing to politicians got some attention, so I figured I'd post the email I send below. I am going to make some updates, but this is the latest version:

---

Hi [Politician]

My name is Yanni Kyriacos. I live in Coogee, just down the road from your electorate. If you're up for it, I'd like to meet to discuss the risks posed by AI.

In addition to my day job building startups, I do community / movement building in the AI Safety / AI existential risk space. You can learn more about AI Safety ANZ by joining our Facebook group here, or the PauseAI movement here. I am also a signatory of Australians for AI Safety, a group that has called for the Australian government to set up an AI Commission (or similar body). Recently I worked with Australian AI experts and organisations (such as Good Ancestors Policy) on a submission to the recent safe and responsible AI consultation process. In the letter, we called on the government to acknowledge the potential catastrophic and existential risks from artificial intelligence. More on that can be found here.

There are many immediate risks from already existing AI systems like ChatGPT or Midjourney, such as disinformation or improper implementation in various businesses. In the not-so-distant future, certain safety nets will need to be activated (such as a Universal Basic Income policy) in the event of mass unemployment due to the displacement of jobs by robots and AI systems. But of greatest concern is the speed at which we are marching towards AGI (artificial general intelligence): systems that will have cognitive abilities at or above human level. Half of AI researchers believe there is a 10% or greater chance that the invention of artificial superintelligence will mean the end of humanity. Among AI safety scientists, this chance is estimated at an average of 30%.
And these levels of risk aren't just a concern for people in the far-distant future: prediction markets such as Metaculus suggest these kinds of AI could be invented in the next term of government. Notable examples of individuals sounding the alarm are Prof. Geoffrey Hinton and Prof. Yoshua Bengio, both Turing Award winners and pioneers of the deep learning methods that are currently achieving the most success. The existential risk of AI has been acknowledged by hundreds of scientists, the UN, the US and, recently, the EU.

To make a long story short: we don't know how to align AI with the complex goals and values that humans have. When a superintelligent system is realised, there is a significant risk that it will pursue a misaligned goal without us being able to stop it. And even if such a superhuman AI remains under human control, the person (or government) wielding that power could use it to drastically and irreversibly change the world. Such an AI could be used to develop new technologies and weapons, manipulate masses of people, or topple governments.

The advancements in the AI landscape have progressed much faster than anticipated. In 2020, it was estimated that an AI would pass university entrance exams by 2050. This goal was achieved in March 2023 by the system GPT-4 from OpenAI. These massive, unexpected leaps have prompted many experts to request a pause in AI development through an open letter to major AI companies. The letter has been signed over 33,000 times so far, including by many AI researchers and tech figures.

Unfortunately, it seems that companies are not willing to jeopardise their competitive position by voluntarily halting development. A pause would need to be imposed by a government. Luckily, there seems to be broad support for slowing down AI development: a recent poll indicates that 63% of Americans support regulations to prevent AI companies from building superintelligent AI.
At the national level, a pause is also challenging, because countries have incentives not to fall behind in AI capabilities. That's why we need an international solution. The UK organised an AI Safety Summit on November 1st and 2nd at Bletchley Park. We hoped that during this summit, leaders would work towards sensible solutions that prevent the very worst of the risks that AI poses. As such, I was excited to see that Australia signed the Bletchley Declaration, agreeing that this risk is real and warrants coordinated international action. However, the recent policy statements by Minister Husic don't seem to match the urgency that experts are conveying. The last safe moment to act could be very soon.

The Summit has not yet produced an international agreement or policy. We have seen proposals being written by the US Senate, and even AI company CEOs have said there is "overwhelming consensus" that regulation is needed. But no proposal so far has seriously considered ways to slow down or prevent a superintelligent AI from being created. I am afraid that lobbying efforts by AI companies to keep regulation to a minimum are turning out to be highly effective.

It's essential that the government follows through on its commitment at Bletchley Park to create a national or regional AI safety body. We have such bodies for everything from the risk of plane crashes to the risk of tsunamis. We urgently need one for ensuring the safety of AI systems.

Anyway, I'd love to discuss this more in person or via Zoom if you're in town soon. Let me know what you think.

Cheers,
Yanni
What is your "Pens Down" moment? By "Pens Down" I mean: 'Artificial superintelligence is, in my opinion, close enough that it no longer makes sense to work on whatever else I'm currently working on, because we're about to undergo radical amounts of change very soon/quickly.' For me, it is probably when we have something as powerful as GPT-4, except it is agentic and costs less than $100/month. That looks like a digital personal assistant that can execute an instruction like "have a TV delivered for me by X date, under Y price, and organise installation and wall mounting." This is obviously a question mainly for people who don't work full-time on AI Safety.
It seems to me like the ratio of preparedness : prevention for environmental change should be way higher

Popular comments

Recent discussion

This is a linkpost for the online courses and series of the Marginal Revolution University[1] (MRU):

  • Development Economics by Alex Tabarrok and Tyler Cowen (course). "Economic growth, geography, trade, property rights, foreign aid, politics, poverty, migration, education, and more".
  • Economic History of the Soviet Union by Guinevere Liberty Nell (course). "Marxist Utopianism, The New Economic Policy in crisis, Stalin's rise, and more".
  • Economics of the Media by Alex Tabarrok and Tyler Cowen (course). "Basic economics of the media, media bias, media and government, and more".
  • Economists in the Wild (series). "A video series that profiles economists and their adventures with real-world research".
  • Everyday Economics by Alex Tabarrok, Don Boudreaux, Ian Bremmer and Tyler Cowen (series). "How do the “big ideas” from economics relate to everyday topics?".
  • Great Economists: Classical Economics and


  • $8,003.98 Charity Entrepreneurship
  • $5,000 Insect Institute
  • $5,000 Shrimp Welfare Project
  • $5,000 Rethink Priorities
  • $5,000 Animal Ethics
  • $1,000 Wild Animal Initiative



I think it’s good to keep track of and explain donations. It creates a record to get better...


Thank you for this. It is indeed inspiring. (And wonderful that you focus on animals, imho)

Cynthia Schuck-Paim; Wladimir J. Alonso; Cian Hamilton (Welfare Footprint Project) 


In assessing animal welfare, it would be immensely beneficial to rely on a cardinal metric that captures the overall affective experience of sentient beings over a period of ...


I agree that I would rather go through my most painful-ever experiences again than go through a much longer period of chronic pain because chronic pain is debilitating. 

In general, I expect a lot of people to feel more averse to chronic than to acute pain -- with the assumption that the long-term effects of chronic pain are greater than those of acute -- once thinking beyond themselves. That is, considering not just what they themselves would prefer, all else held equal, but also damage to their productivity and ability to help others (e.g. duty to...


This is the second of a collection of three essays, ‘On Wholesomeness’. In the first essay I introduced the idea of wholesomeness as a criterion for choosing actions. This essay will explore the relationship between acting wholesomely and some different conceptions...

I think that's just a minority of people retroactively imagining an additional meaning to the word. The 'whole' in wholesome is in contrast to being injured, not in contrast to something being partial. So you get: uninjured -> healthy -> beneficial -> morally good. Nothing to do with examining parts vs wholes.
('Wholesome' was a word ('hailasam') before English was even its own language, when whole/hail primarily meant being healthy. So it pretty much bypasses the idea of 'leaving nothing out'. It's like saying that a brainstorming session has to be some sort of violent, disturbing process because it contains the word 'storm' in it. Indeed there's a completely separate meaning for 'brainstorm' which is more like this - a moment of mental confusion essentially, which is basically the opposite of a brainstorming session.)

I appreciate the etymological details, and feel a bit embarrassed that I hadn't looked into that already.

I guess I'd describe what's going on as:

  • The original word meant "healthy"
  • I'm largely using it to mean "healthy" in the sense of "healthy for the systems we're embedded in" (which I think is a pretty normal usage)
  • I'm adding a flavour of "attending to the wholeness" (inspired by Christopher Alexander), which includes both "attending to all the parts" (new) as well as "attending to making things fit with existing parts" (essentially an existing meaning, as

EA Global: Bay Area (Global Catastrophic Risks) took place February 2–4. We hosted 820 attendees, 47 of whom volunteered over the weekend to help run the event. Thank you to everyone who attended and a special thank you to our volunteers—we hope it was a valuable weekend! 

Photos and recorded talks

You can now check out photos from the event


Recorded talks, such as the media panel on impactful GCR communication, Tessa Alexanian’s talk on preventing engineered pandemics, Joe Carlsmith’s discussion of scheming AIs, and more, are now available on our YouTube channel.


A brief summary of attendee feedback

Our post-event feedback survey received 184 responses, a lower completion rate than our average. We’re still accepting feedback responses and would love to hear from all our attendees. Each response helps us get better summary metrics and we...


Quick Summary

  • Gives the President wide-ranging powers to strengthen the US industrial base 
  • Has been around without changing that much since 1953
  • Has provisions which allow firms to make voluntary agreements that would normally be illegal under antitrust law 
  • Provided the legal authority for many of the provisions in Biden’s recent Executive Order on AI 

The Defense Production Act

The Defense Production Act (DPA) has been reauthorised (and modified) by Congress since 1950, and in 1953 its powers were very significantly reduced. I’m confident that it will continue to be passed - in a roughly similar form - for the foreseeable future. The current version was passed in 2019 under a Republican Senate and is due for reauthorisation in 2025.

Since the Obama presidency, Republicans have begun to try to prevent bills proposed by Democrats from being passed by default. This ...


In a fiery, though somewhat stilted speech with long pauses for translation, Javier Milei delivered this final message to a cheering crowd at the Conservative Political Action Conference last week:

Don't let socialism advance. Don't endorse regulations. Don't endorse the idea of market failure. Don't allow the advance of the murderous agenda. And don't let the siren calls of social justice woo you.

The reactions on econ Twitter were, unsurprisingly, less positive than the CPAC crowd's about calls to reject the idea of market failure, one of the most well-established concepts in economics. James Medlock, for example, begs libertarians to get a step past Econ 101.

The people cheering in the crowd and self-righteously quote tweeting on X are cheering for the wrong reasons. Medlock is correct that these credulous fans need to get a grip on the basics before they start denying established economic theories.



Hello everyone,

My name is Noah, and this is my introduction to the Effective Altruism forum! I have so much to share, but I will begin with a bit about me.


My parents are both highly-educated and wonderful people who nurtured my curiosity and empathy from a very young age, and to them I owe everything! During middle school I created a non-profit to provide aid to the victims of the 2011 Tōhoku earthquake and tsunami, and I worked with the school to fundraise by selling candy during lunch. I went on to volunteer in many forms, from summer camps in Wyoming to STEM workshops in San Jose, but my most impactful work has always been cooking and serving dinner to the families of critically-ill children at the Ronald McDonald House.

In high school I started tinkering with computers and working part-time to hire mentors and freelancers to coach me through personal projects. I went on to attend...


Overview of essay series

This is the first in a collection of three essays exploring and ultimately defending the idea of choosing what feels wholesome as a heuristic for picking actions which are good for the world. I'm including a summary of the series here, before...

Thanks for writing this, a pleasure to read as always. I must admit I come away rather confused by what you mean by 'wholesomeness'. Is wholesomeness basically consequentialism but with more vibes and fewer numbers? Your account makes it seem quite close to consequentialism. It also seems really close to virtue ethics: you try to differentiate it by saying it rejects "focus[ing] single-mindedly on excelling at one virtue", but my impression was that virtue ethics was all about balance and the golden mean anyway. And then it seems pretty close to sincerity/integrity also.

I was especially confused by this section: apparently the activities I think most people would be most likely to label wholesome are only "often... somewhat" wholesome. And I think most people would basically never describe experimenting with drugs as wholesome. Maybe it might be good, but if it is good, it's good for some other reason (like it's educational), not because it's wholesome.

I think you actually have a really revisionist account of 'wholesomeness', so revisionist that I think you should probably just pick a new word. It seems like you are trying to rely on some of the vibes of the word while giving it a new meaning which fixates on the word 'whole' to the neglect of the actual historical denotation. Samwise is one of the most wholesome characters I know, but it's not because he was attending to the whole of Middle Earth; it's because of his courage and humility, and his loyalty to Frodo, Rosie and the Shire. A good officer, or Denethor, comes much closer to attending to the whole, but that doesn't mean his batman isn't more wholesome.

I'm not sure how central this is to your point, but for what it's worth I think you may be overestimating the degree to which I'd disagree with normal judgements about what's wholesome. I would also basically never describe experimenting with drugs as wholesome. (Maybe I'd make an exception if someone was travelling to a country where they were legal in order to experiment with things that might relieve pain from a chronic condition or something; of course it would depend on the details of their attitude.)

I think that it wouldn't be unusual, though, to des...

Owen Cotton-Barratt
I think it's partially that (where the point of the vibes is often that they're helpful for tracking things which aren't directly good/bad, but have an increased chance of causing good/bad things down the line). It's also a bit like "consequentialism, but with some extra weight on avoiding negative consequences to things you're interacting with" (where the point of this is that it distributes responsibility in a sensible way across agents).

I agree that my sense is somewhat revisionist, although I think it's more grounded in the usual usage than you're giving it credit for. I did consider choosing a different word, but after talking it through with some folks I felt better about "wholesome" than the alternatives.

The main way in which I care about the vibes of the existing word is for people putting it into practice. I think if people ask of an action they're considering "what are the ways in which this might not be wholesome?", it will be easy to feel answers to that question. If I instead try to define "holistically-good" and people ask "what are the ways in which this might not be holistically-good?", I think they'll get caught up in explicit verbal models and not notice things which they might have caught as answers to the first question.

Put another way: one of my guiding principles for this is to try to have an account of things such that I think it could lead to people doing the good, ambitious versions of EA, but such that I find it hard to imagine SBF, trying to follow it, making the same mistakes, even if there was motivated cognition towards doing so. Stuff about side constraints doesn't really feel robust enough to me. If there's a new term, it could be vulnerable to a lot of reinterpretation; anchoring in the existing term serves to resist that.

Maybe I'm supposed to more explicitly separate out the thing you do felt mental checks for from the thing that you overall choose to pursue? I'm worried that gets contrived.