New & upvoted


Posts tagged community

Quick takes

Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he is particularly interested in. I know politics is discouraged on the EA Forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He's up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas; this may change under a Trump administration.
13
tlevin
14h
I think some of the AI safety policy community has over-indexed on the visual model of the "Overton Window" and under-indexed on alternatives like the "ratchet effect," "poisoning the well," "clown attacks," and other models where proposing radical changes can make you, your allies, and your ideas look unreasonable.

I'm not familiar with a lot of systematic empirical evidence on either side, but it seems to me like the more effective actors in the DC establishment overall are much more in the habit of looking for small wins that are both good in themselves and shrink the size of the ask for their ideal policy than of pushing for their ideal vision and then making concessions. Possibly an ideal ecosystem has both strategies, but it seems possible that at least some versions of "Overton Window-moving" strategies executed in practice have larger negative effects via associating their "side" with unreasonable-sounding ideas in the minds of very bandwidth-constrained policymakers, who strongly lean on signals of credibility and consensus when quickly evaluating policy options, than the positive effects of increasing the odds of ideal policy and improving the framing for non-ideal but pretty good policies.

In theory, the Overton Window model is just a description of what ideas are taken seriously, so it can indeed accommodate backfire effects where you argue for an idea "outside the window" and this actually makes the window narrower. But I think the visual imagery of "windows" actually struggles to accommodate this -- when was the last time you tried to open a window and accidentally closed it instead? -- and as a result, people who rely on this model are more likely to underrate these kinds of consequences. Would be interested in empirical evidence on this question (ideally actual studies from the psych, political science, sociology, econ, etc. literatures, rather than specific case studies, due to reference class tennis-type issues).
Excerpt from the most recent update from the ALERT team:

Highly pathogenic avian influenza (HPAI) H5N1: What a week! The news, data, and analyses are coming in fast and furious. Overall, ALERT team members feel that the risk of an H5N1 pandemic emerging over the coming decade is increasing. Team members estimate that the chance that the WHO will declare a Public Health Emergency of International Concern (PHEIC) within 1 year from now because of an H5N1 virus, in whole or in part, is 0.9% (range 0.5%-1.3%). The team sees the chance going up substantially over the next decade, with the 5-year chance at 13% (range 10%-15%) and the 10-year chance increasing to 25% (range 20%-30%).

Their estimated 10-year risk is a lot higher than I would have anticipated.
Quick poll [✅ / ❌]: Do you feel like you don't have a good grasp of Shapley values, despite wanting to? (Context for after voting: I'm trying to figure out if more explainers of this would be helpful. I still feel confused about some of its implications, despite having spent significant time trying to understand it.)
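(Not part of the original quick take: for anyone who voted ✅, here is a minimal, hypothetical Python sketch of the core idea, using a made-up three-player value function. A player's Shapley value is their marginal contribution averaged over every order in which the coalition could have assembled.)

```python
from itertools import permutations

players = ["A", "B", "C"]

# Hypothetical "value created" by each possible coalition (made-up numbers).
coalition_value = {
    frozenset(): 0,
    frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
    frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60,
    frozenset("ABC"): 90,
}

def v(coalition):
    return coalition_value[frozenset(coalition)]

orderings = list(permutations(players))
shapley = {p: 0.0 for p in players}
for order in orderings:
    already_in = set()
    for p in order:
        # Marginal contribution of p, given who joined before them.
        shapley[p] += v(already_in | {p}) - v(already_in)
        already_in.add(p)
shapley = {p: total / len(orderings) for p, total in shapley.items()}

print(shapley)  # {'A': 20.0, 'B': 30.0, 'C': 40.0} -- sums to v(ABC) = 90
```

One implication the toy example makes concrete: the values always sum to the value of the grand coalition, so Shapley attribution never double-counts impact across contributors.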
I can't find a better place to ask this, but I was wondering whether/where there is a good explanation of the scepticism of leading rationalists about animal consciousness/moral patienthood. I am thinking in particular of Zvi and Yudkowsky. In the recent podcast with Zvi Mowshowitz on 80K, the question came up a bit, and I know he is also very sceptical of interventions for non-human animals on his blog, but I had a hard time finding a clear explanation of where this belief comes from. I really like Zvi's work, and he has been right about a lot of things I was initially on the other side of, so I would be curious to read more of his or similar people's thoughts on this. This seems like potentially a place where there is a motivation gap: people who don't work on animal welfare have little incentive to explain why they think the things I work on are not that useful.


Recent discussion

Today, The Guardian published an article titled "‘Eugenics on steroids’: the toxic and contested legacy of Oxford’s Future of Humanity Institute". I thought I should flag this article here, since it's such a major news organization presenting a rather scathing picture...

Continue reading

I found this very concerning. I posted it myself, but then a helpful admin showed me where it had already been posted; I need to be better at searching :D

When we consider the impact of this, we need to forget for a moment everything we know about EA and imagine the impact this will have on someone who has never heard of EA, or who has just a vague idea about it. 

I do not agree at all with the content of the article, and especially not with its tone, which frankly surprised me coming from the Guardian. But even this shows how marginal EA is, even in t... (read more)

Sam Watts commented on EA Global: London 2024 25m ago

Applications are now open (here)! Deadline: 19th May 2024

EA Global brings together a wide network of people who have made helping others a core part of their lives. Speakers and attendees share new thinking and research in the field of effective altruism and coordinate ...

Continue reading

Is there a way for me to claim Gift Aid on my donation for the event? Or could I pay a lower amount for the ticket and donate separately, so that Gift Aid can be claimed by EA?

I’ve been working in animal advocacy for two years and have an amateur interest in AI. I’m writing this in a personal capacity, and am not representing the views of my employer. 

Many thanks to everyone who provided feedback and ideas. 

Introduction

In previous posts...

Continue reading
2
BruceF
10h
This is a very helpful post - thank you! I just wanted to make sure you've seen that the Bezos Earth Fund's $100 million AI grand challenge includes alternative proteins as one of three focus areas. See here for details: https://www.bezosearthfund.org/news-and-insights/bezos-earth-fund-announces-100-million-ai-solutions-climate-change-nature-loss

Thanks Bruce! Yes, I saw that - great to see this area getting some more funding and public attention!


GPT-5 training is probably starting around now. It seems very unlikely that GPT-5 will cause the end of the world. But it’s hard to be sure. I would guess that GPT-5 is more likely to kill me than an asteroid, a supervolcano, a plane crash or a brain tumor. We can predict...

Continue reading

I think there's a meta-level crux which is very important. If you only want to attend protests where the protesters are reasonable and well informed and agree with you, then you implicitly only want to attend small protests.

It seems pretty clear to me that most people are much less concerned about x-risk than about job loss and other concerns. So we have to make a decision - do we stick to our guns and have the most epistemically virtuous protest movement in history, making it 10x harder to recruit new people and grow the movement? Or do we compromise and welcome... (read more)

5
richard_ngo
3h
When you're weighing existential risks (or other things which steer human civilization on a large scale) against each other, effects are always going to be denominated in a very large number of lives. And this is what OP said they were doing: "a major consideration here is the use of AI to mitigate other x-risks". So I don't think the headline numbers are very useful here (especially because we could make them far far higher by counting future lives).
1
Benjamin27
4h
I think that AI safety is probably neglected in the public consciousness, simply because most people still don't understand what AI even "is". This lack of understanding obviously precludes people from caring about AI safety, because they don't appreciate that AI is a qualitatively different technology from any technology hitherto created. And if they're not interfacing with the current LLMs (I suspect most older people aren't), then they can't appreciate the exponential progress in sophistication. By now, people have some visceral understanding of the realities of progressive climate change. But AI is still an abstract concept, and an exponential technology in its infancy, so it's hard to viscerally grok the idea of AI x-risk. Let's say the proportion of adults in a developed country who know of, or have used, an LLM is 20%. Of that 20%, perhaps half (10% of the population) have a dim premonition of the profundity of AI. But, anecdotally, no one I know is really thinking about AI's trajectory, except perhaps with a sense of vague foreboding.

I am fairly new to the EA and rationality communities, but I sense that members of EA/rationality are on average cerebral, and perhaps introverted or of an unassuming demeanor. Moreover, the mindset is one of epistemic humility. EA rarely attracts the extroverted, disagreeable, outspoken "activist" types that other movements attract -- for example, Israel-Palestine causes or Extinction Rebellion. Due to this, I'm predicting that we have a scarcity of EAs with the comparative advantage of organising and attending protests, and making noise in public.

However, I think that protests are necessary to raise public awareness about AI safety and galvanise an educated mass response. The key considerations are: what demands do we set, based on what evidence/reasoning? And in broadcasting AI safety, how do we balance the trade-off between:
* Trying to be as comprehensive and rational as possible in explaining AI, AI x-risk and the need for safety r

Abstract: Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack the reflexivity and self-formative characteristics inherent in the notion of the subject. By drawing upon a recent dialogue between Foucault and phenomenology, I suggest four techno-philosophical desiderata that would address the gaps in this search for a technological subjectivity: embodied self-care, embodied intentionality, imagination ...

Continue reading

This is a linkpost for Imitation Learning is Probably Existentially Safe by Michael Cohen and Marcus Hutter.

Abstract

Concerns about extinction risk from AI vary among experts in the field. But AI encompasses a very broad category of algorithms. Perhaps some algorithms would

...
Continue reading
5
Matthew_Barnett
16h
I agree with the title and basic thesis of this article but I find its argumentation weak. The obvious reason why no human has ever gained total control over humanity is because no human has ever possessed the capability to do so, not because no human would make the choice to do so if given the opportunity. This distinction is absolutely critical, because if humans have historically lacked total control due to insufficient ability rather than unwillingness, then the quoted argument essentially collapses. That's because we have zero data on what a human would do if they suddenly acquired the power to exert total dominion over the rest of humanity. As a result, it is highly uncertain and speculative to claim that an AI imitating human behavior would refrain from seizing total control if it had that capability. The authors seem to have overlooked this key distinction in their argument.

It takes no great leap of imagination to envision scenarios where, if a human was granted near-omnipotent abilities, some individuals would absolutely choose to subjugate the rest of humanity and rule over them in an unconstrained fashion. The primary reason I believe imitation learning is likely safe is that I am skeptical it will imbue AIs with godlike powers in the first place, not because I naively assume humans would nobly refrain from tyranny and oppression if they suddenly acquired such immense capabilities.

Note: Had the authors considered this point and argued that an imitation learner emulating humans would be safe precisely because it would not be very powerful, their argument would have been stronger. However, even if they had made this point, it likely would have provided only relatively weak support for the (perhaps implicit) thesis that building imitation learners is a promising and safe approach to building AIs. There are essentially countless proposals one can make for ensuring AI safety simply by limiting its capabilities. Relying solely on the weakness of an AI sys

Thanks for the comment, Matthew!

My understanding is that the authors are making 2 points in the passage you quoted:

  • No human has gained total control over all humanity, so an AI system that did so would not be imitating humans well.
  • Very few humans would endorse human extinction even if they gained total control over all humanity. Note that a human endorsing human extinction would mean supporting their own death, and that of their family and friends.

The obvious reason why no human has ever gained total control over humanity is because no human has ever po

... (read more)

How do you think about the value of an hour of your work, e.g. for making decisions like whether a time-saving software tool is worth it, whether to outsource a function to an expensive contractor, or whether it makes sense to hire someone to delegate tasks to them?

 

I tried a few different methods:

  1. Derived from my current salary
  2. Derived from what I would likely be earning if I worked in the for-profit sector
  3. Derived from ‘market rate’ for roles similar to mine, if I paid myself at that
  4. Derived from funder willingness to pay for the outcomes from my work, based on my cost-effectiveness modelling.

From this I got £16.50, £20, £35... and £11,250.
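(My own illustration, not the author's model: a minimal Python sketch of how methods 1 and 4 might be computed, with made-up placeholder figures for salary, funder willingness to pay, and working hours.)

```python
# Made-up placeholder numbers purely for illustration; none of these figures
# come from the post above.
WORKING_HOURS_PER_YEAR = 1800  # assumption: ~37.5 h/week for ~48 weeks

# Method 1: derived from current salary.
current_salary_gbp = 30_000                      # hypothetical
hourly_from_salary = current_salary_gbp / WORKING_HOURS_PER_YEAR

# Method 4: derived from funder willingness to pay for the work's outcomes,
# e.g. the output of a cost-effectiveness model.
funder_wtp_gbp = 20_000_000                      # hypothetical
hourly_from_wtp = funder_wtp_gbp / WORKING_HOURS_PER_YEAR

print(f"Method 1: £{hourly_from_salary:,.2f} per hour")   # ~£16.67
print(f"Method 4: £{hourly_from_wtp:,.2f} per hour")      # ~£11,111
```

The gap between the two answers comes almost entirely from the numerator, which is why method 4 is so sensitive to the cost-effectiveness modelling behind it.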


Obviously the last of these is wildly uncertain. And there are various reasons why it might be too optimistic (as well as some why it might be too pessimistic), etc etc. 

But the difficulty remains that accounting for 'actual value' just seems to...

Continue reading

The UK government’s public consultation on their proposed animal welfare labelling scheme[1] closes on the 7th of May, i.e. a week away. If you’re in the UK and care about animal welfare, I think you should probably submit an answer to it. If you don't care about ...

Continue reading

Shout out to my Mum for filling in the form. 

About a week ago, Spencer Greenberg and I were debating what proportion of Effective Altruists believe enlightenment is real. Since he has a large audience on X, we thought a poll would be a good way to increase our confidence in our predictions.

Before I share my commentary...

Continue reading
1
Guy Raveh
5h
It's good that nobody's talking about this. It would be no more sane than e.g. trying to make everyone religious because then God would eliminate suffering.

Hi Guy! Thanks for commenting :) I am a bit confused by the analogy. Would you mind explaining it further?

7
huw
5h
My sense from a very quick skim of the literature is:

1. There are barely any studies or RCTs on non-dual mindfulness, and certainly not enough to make a conclusion about it having a larger-than-normal effect size[1][2]
2. The most highly-cited meta-analyses that do split out types of meditation either directly find no significant difference between kinds, or claim they don't have enough evidence for a difference in their discussions[1][2]
3. The effect size is no better or worse than other psychotherapies

It might be possible to do some special pleading around non-dual mindfulness in particular, but frankly, everyone who has their own flavour of mindfulness does a lot of special pleading around it, so I'm default skeptical despite non-dual being my personal preference.

My sense as an experienced non-dual meditator (~10 years, and having experienced 'ego death' before without psychedelics):

1. I am skeptical that at-will or permanent ego death is possible. By 'at-will', I mean with an ease similar to meditating, with effects lasting longer than an acid trip.
2. I am skeptical that this state would even be desirable; most people that have tried psychedelics aren't on a constant low dose (despite that having few downsides for people not prone to psychosis).
3. Even if it is possible and desirable, I am skeptical that there is a path to this kind of enlightenment for every person, and it might only be possible for a very small percentage of people even with the motivation and infinite free time to practice.

I think teaching people mindfulness would be good, but probably no better than teaching them any other kind of therapy. Maybe it's generally more acceptable because it's less stigmatised than self-learning CBT. But I'd be really curious to understand what the people who voted yes were thinking, and in particular what they think 'enlightenment' is.

1. https://doi.org/10.1037/a0028168
2. https://doi.org