Quick takes
Mini EA Forum Update: You can now subscribe to be notified every time a user comments (thanks to the LessWrong team for building the functionality!), and we’ve updated the design of the notification option menus. You can see more details on GitHub here.
My previous take on writing to politicians got some attention, so I figured I'd post the email I send below. I am going to make some updates, but this is the latest version:

---

Hi [Politician]

My name is Yanni Kyriacos. I live in Coogee, just down the road from your electorate. If you're up for it, I'd like to meet to discuss the risks posed by AI.

In addition to my day job building startups, I do community / movement building in the AI Safety / AI existential risk space. You can learn more about AI Safety ANZ by joining our Facebook group here, or the PauseAI movement here. I am also a signatory of Australians for AI Safety - a group that has called for the Australian government to set up an AI Commission (or similar body). Recently I worked with Australian AI experts (such as Good Ancestors Policy) on a submission to the recent safe and responsible AI consultation process. In that letter, we called on the government to acknowledge the potential catastrophic and existential risks from artificial intelligence. More on that can be found here.

There are many immediate risks from already existing AI systems like ChatGPT or Midjourney, such as disinformation or improper implementation in various businesses. In the not-so-distant future, certain safety nets (such as a Universal Basic Income policy) may need to be activated in the event of mass unemployment caused by the displacement of jobs by robots and AI systems. But of greatest concern is the speed at which we are marching towards AGI (artificial general intelligence) – systems that will have cognitive abilities at or above human level. Half of AI researchers believe that there is a 10% or greater chance that the invention of artificial superintelligence will mean the end of humanity. Among AI safety scientists, this chance is estimated to average 30%.

And these levels of risk aren't just a concern for people in the far-distant future: prediction markets such as Metaculus suggest these kinds of AI could be invented within the next term of government. Notable examples of individuals sounding the alarm are Prof. Geoffrey Hinton and Prof. Yoshua Bengio, both Turing Award winners and pioneers of the deep learning methods that are currently achieving the most success. The existential risk of AI has been acknowledged by hundreds of scientists, the UN, the US and, recently, the EU.

To make a long story short: we don't know how to align AI with the complex goals and values that humans have. When a superintelligent system is realised, there is a significant risk that it will pursue a misaligned goal without us being able to stop it. And even if such a superhuman AI remains under human control, the person (or government) wielding that power could use it to drastically and irreversibly change the world. Such an AI could be used to develop new technologies and weapons, manipulate masses of people, or topple governments.

Advancements in the AI landscape have come much faster than anticipated. In 2020, it was estimated that an AI would pass university entrance exams by 2050. That milestone was reached in March 2023 by OpenAI's GPT-4. These massive, unexpected leaps have prompted many experts to request a pause in AI development through an open letter to major AI companies. The letter has been signed over 33,000 times so far, including by many AI researchers and tech figures. Unfortunately, it seems that companies are not willing to jeopardise their competitive position by voluntarily halting development; a pause would need to be imposed by a government. Luckily, there seems to be broad support for slowing down AI development: a recent poll indicates that 63% of Americans support regulation to prevent AI companies from building superintelligent AI.

At the national level, a pause is also challenging because countries have incentives not to fall behind in AI capabilities. That's why we need an international solution. The UK organised an AI Safety Summit on November 1st and 2nd at Bletchley Park. We hoped that during this summit, leaders would work towards sensible solutions that prevent the very worst of the risks that AI poses. As such, I was excited to see that Australia signed the Bletchley Declaration, agreeing that this risk is real and warrants coordinated international action. However, the recent policy statements by Minister Husic don't seem to match the urgency that experts are expressing. The last safe moment to act could be very soon.

The Summit has not yet produced an international agreement or policy. We have seen proposals written by the US Senate, and even AI company CEOs have said there is “overwhelming consensus” that regulation is needed. But no proposal so far has seriously considered ways to slow down or prevent a superintelligent AI from being created. I am afraid that lobbying efforts by AI companies to keep regulation to a minimum are turning out to be highly effective.

It's essential that the government follows through on its commitment at Bletchley Park to create a national or regional AI safety body. We have such bodies for everything from the risk of plane crashes to the risk of tsunamis. We urgently need one for ensuring the safety of AI systems.

Anyway, I'd love to discuss this more in person or via Zoom if you're in town soon. Let me know what you think.

Cheers,
Yanni
The general public wants frontier AI models regulated, yet there don't seem to be any grassroots-focused orgs attempting to capture and funnel this energy into influencing politicians, e.g. via this kind of activity. This seems like massive low-hanging fruit. An example of an organisation that does this (but for GH&W) is Results Australia. Someone should set up such an org.
What is your "Pens Down" moment? I use "Pens Down" to mean: 'Artificial superintelligence is, in my opinion, close enough that it no longer makes sense to work on whatever else I'm currently working on, because we're about to undergo radical amounts of change very soon.' For me, it is probably when we have something as powerful as GPT-4, except agentic and costing less than $100/month. That looks like a digital personal assistant that can execute an instruction like "have a TV delivered for me by X date, under Y price, and organise installation and wall mounting." This is obviously a question mainly for people who don't work full time on AI Safety.
It seems to me that the preparedness : prevention ratio for environmental change should be way higher.


Recent discussion

Unprecedented dangers
inevitably follow
from exponentially scaling
a powerful technology
that we do not understand.

N.b. I'm a master's student in international policy (this program). In my experience, policy-oriented folks do not understand that lines four and five can be simultaneously true. I think there are some simple ways that ML researchers can help address this misconception, and I'll share those here once I've written them up.


Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

How factory farmers block progress — and what we can do about it

...
LewisBollard
2h
Thanks for flagging that. I agree that most of the funds donated by animal-ag employees were not given to oppose animal protection, or likely any specific policies; I should have clarified that. I also generally don't think of people working in agriculture as evil. I think they're mostly just doing the rational thing given the goal of profit maximization and the lack of constraints we've imposed on how to pursue it.
weeatquince
14h
Thank you so much for an excellent post. I just wanted to pick up on one of your suggested lessons learned that, at least in my mind, doesn't follow directly from the evidence you have provided. You say:

To me, there are two very opposing ways you could take this. The animal-ag industry is benefiting from cross-party support, so:

A] Animal rights activists need to work more with the political right so that we get cross-party support too, essentially depoliticising animal rights policy, with the aim of animal activists also getting the benefits of cross-party support.

B] Animal rights activists need to work more with the political left so that supporting animal farming becomes an unpalatable opinion or action for anyone on the left to hold, essentially politicising animal rights policy, with the aim of industry losing the benefits of cross-party support.

Why do you suggest strategy A], depoliticisation (working with conservative animal lovers)? Do you have any evidence that this is the correct lesson to draw? I have not yet done much analysis of this question, but my initial sense from the history of social change in the US is that the path to major change through an issue becoming highly politicised and championed by one half of the political spectrum is likely to be the quicker (albeit less stable) route to success, and in some cases where entrenched interests are very strong, it might be the only path to success (e.g. with slavery). I worry that a focus on depoliticisation could be a strategic blunder. I have been pondering this for a while and am keen to understand what research, evidence and reasoning there is for keeping animal rights depoliticised.

Thanks, this is a good point. I agree that it's not obvious we should choose A) over B).

My evidence for A) is that it seems to be the approach that worked in every case where farm animal welfare laws have passed so far, whereas I've seen a lot of attempts at B) but have never seen one succeed. I also think B) really limits your opportunities, since you can only pass reforms when liberals hold all key levers of power (e.g. in the US, you need Democrats to control the House, Senate, and Presidency) and agree to prioritize your issue.

My sense is that most his... (read more)


After a running career[1] across marathons, 50K, 50-mile, 100K, and 100-mile distance events over the past eleven years, I'm tackling the 200-mile distance at the Tahoe 200 from June 14-18 this year. 

It's a bit of a ridiculous, silly challenge. It's also completely wild that I have a privileged life living in a high-income country like the US that allows me to tackle such an adventure. 

Given all this, I have decided to fundraise for New Incentives through my training and build-up for the event. This is my PledgeIt donation page. I'm thankful to have the support of the folks at High Impact Athletes in putting my page together and thinking through my campaign. My goal is to raise $10,036 to support 650 children enrolling in New Incentives' vaccination program, at a cost of $15.44 per infant[2].
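As a quick sanity check on the figures above (the $10,036 goal, 650 children, and $15.44 per infant are all taken from the campaign itself):

```python
# Verify that the fundraising goal matches the stated per-infant cost.
children = 650
cost_per_infant_usd = 15.44

implied_total = round(children * cost_per_infant_usd, 2)
print(implied_total)  # → 10036.0, matching the stated $10,036 goal
```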

I hope you can help promote my fundraising efforts or consider donating...


Welcome! Use this thread to introduce yourself or ask questions about anything that confuses you. (For inspiration, you can see the last open thread here.)

Get started on the EA Forum

The "Guide to norms on the Forum" shares more about that discussions we'd like to see on...


Request for advice from animal suffering EAs (and, to a lesser extent, climate EAs?): is there an easy win over getting turkeys from Mary's Turkeys? (Also, how much should I care about getting the heritage variety?)

Background: I routinely cook for myself and my housemates (all of whom are omnivores), and am on a diet that requires animal products for health reasons. Nevertheless, I'd rather impose fewer costs than more costs on others; I stopped eating chicken and chicken eggs in response to this post and recently switched from consuming lots of grass-fini... (read more)

This is a linkpost for the online courses and series of the Marginal Revolution University[1] (MRU):

  • Development Economics by Alex Tabarrok and Tyler Cowen (course). "Economic growth, geography, trade, property rights, foreign aid, politics, poverty, migration, education, and more".
  • Economic History of the Soviet Union by Guinevere Liberty Nell (course). "Marxist Utopianism, The New Economic Policy in crisis, Stalin's rise, and more".
  • Economics of the Media by Alex Tabarrok and Tyler Cowen (course). "Basic economics of the media, media bias, media and government, and more".
  • Economists in the Wild (series). "A video series that profiles economists and their adventures with real-world research".
  • Everyday Economics by Alex Tabarrok, Don Boudreaux, Ian Bremmer and Tyler Cowen (series). "How do the “big ideas” from economics relate to everyday topics?".
  • Great Economists: Classical Economics and
...

Summary 

  • $8,003.98 to Charity Entrepreneurship
  • $5,000 to Insect Institute
  • $5,000 to Shrimp Welfare Project
  • $5,000 to Rethink Priorities
  • $5,000 to Animal Ethics
  • $1,000 to Wild Animal Initiative

 

Background

I think it’s good to keep track of and explain donations. It creates a record to get better...


Thank you for this. It is indeed inspiring. (And it's wonderful that you focus on animals, imho.)

Cynthia Schuck-Paim; Wladimir J. Alonso; Cian Hamilton (Welfare Footprint Project) 

Overview

In assessing animal welfare, it would be immensely beneficial to rely on a cardinal metric that captures the overall affective experience of sentient beings over a period of ...


I agree that I would rather go through my most painful-ever experiences again than endure a much longer period of chronic pain, because chronic pain is debilitating.

In general, I expect a lot of people to feel more averse to chronic than to acute pain (on the assumption that the long-term effects of chronic pain are greater than those of acute pain) once they think beyond themselves. That is, considering not just what they themselves would prefer, all else held equal, but also the damage to their productivity and ability to help others (e.g. duty to... (read more)