New & upvoted


Posts tagged community

Quick takes

Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he is particularly interested in. I know politics is discouraged on the EA Forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He's up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas; this may change under a Trump administration.
tlevin · 1d
I think some of the AI safety policy community has over-indexed on the visual model of the "Overton Window" and under-indexed on alternatives like the "ratchet effect," "poisoning the well," "clown attacks," and other models where proposing radical changes can make you, your allies, and your ideas look unreasonable. I'm not familiar with a lot of systematic empirical evidence on either side, but it seems to me like the more effective actors in the DC establishment overall are much more in the habit of looking for small wins that are both good in themselves and shrink the size of the ask for their ideal policy than of pushing for their ideal vision and then making concessions.

Possibly an ideal ecosystem has both strategies, but it seems possible that at least some versions of "Overton Window-moving" strategies executed in practice have larger negative effects, via associating their "side" with unreasonable-sounding ideas in the minds of very bandwidth-constrained policymakers (who strongly lean on signals of credibility and consensus when quickly evaluating policy options), than the positive effects of increasing the odds of ideal policy and improving the framing for non-ideal but pretty good policies.

In theory, the Overton Window model is just a description of what ideas are taken seriously, so it can indeed accommodate backfire effects where you argue for an idea "outside the window" and this actually makes the window narrower. But I think the visual imagery of "windows" actually struggles to accommodate this -- when was the last time you tried to open a window and accidentally closed it instead? -- and as a result, people who rely on this model are more likely to underrate these kinds of consequences.

I'd be interested in empirical evidence on this question (ideally actual studies from the psych, political science, sociology, econ, etc. literatures, rather than specific case studies, due to reference-class-tennis-type issues).
Excerpt from the most recent update from the ALERT team:

  Highly pathogenic avian influenza (HPAI) H5N1: What a week! The news, data, and analyses are coming in fast and furious. Overall, ALERT team members feel that the risk of an H5N1 pandemic emerging over the coming decade is increasing. Team members estimate that the chance that the WHO will declare a Public Health Emergency of International Concern (PHEIC) within 1 year from now because of an H5N1 virus, in whole or in part, is 0.9% (range 0.5%-1.3%). The team sees the chance going up substantially over the next decade, with the 5-year chance at 13% (range 10%-15%) and the 10-year chance increasing to 25% (range 20%-30%).

Their estimated 10-year risk is a lot higher than I would have anticipated.
Is "EA is a bait and switch" a compelling argument for it being bad? I don't really think so.

  1. There are a wide variety of baits and switches, from what I'd call misleading to some pretty normal activities. Is it a bait and switch when churches don't discuss their most controversial beliefs at a "bring your friends" service? What about wearing nice clothes to a first date?[1]
  2. EA is a big movement composed of different groups.[2] Many describe it differently.
  3. EA has done so much global health work that I am not sure it can be described as a bait and switch, e.g. https://docs.google.com/spreadsheets/d/1ip7nXs7l-8sahT6ehvk2pBrlQ6Umy5IMPYStO3taaoc/edit#gid=9418963
  4. EA is way more transparent than any comparable movement. If it is a bait and switch, it does far more than most to make clear where the money goes, e.g. https://openbook.fyi/.

On the other hand:

  1. I do sometimes see people describing EA too favourably or pushing an inaccurate line.

I think that transparency comes with the feature of allowing anyone to come and ask "what's going on there?", which can be very beneficial at avoiding error, but it also makes bad criticism too cheap.

Overall I don't find this line that compelling, and the parts that are seem largely about the past, when EA was smaller (and when it perhaps mattered less). Now that EA is big, it's pretty clear that it cares about many different things. Seems fine.

  1. ^ @Richard Y Chappell created the analogy.
  2. ^ @Sean_o_h argues that here.


Recent discussion

There have been multiple occasions where I've copy and pasted email threads into an LLM and asked it things like:

  1. What is X person saying
  2. What are the cruxes in this conversation?
  3. Summarise this conversation
  4. What are the key takeaways
  5. What views are being missed from this conversation

I really want an email plugin that basically brute forces rationality INTO email conversations.

Tangentially - I wonder if LLMs can reliably convert people's claims into a % through sentiment analysis? This would be useful for forecasters, I believe (and for rationality in general).
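
A minimal sketch of what the core loop of such a plugin could look like, since the idea is concrete enough to prototype. Everything here is illustrative rather than from the quick take: it assumes the OpenAI Python SDK and a placeholder model name, and any chat-completion API could be substituted; the final prompt gestures at the sentiment-to-percentage idea.

```python
# Illustrative sketch only: run a fixed battery of analysis prompts over a
# pasted email thread with an LLM. Assumes the OpenAI Python SDK
# (`pip install openai`) and OPENAI_API_KEY in the environment.
from openai import OpenAI

ANALYSIS_QUESTIONS = [
    "What is each person in this thread saying?",
    "What are the cruxes in this conversation?",
    "Summarise this conversation.",
    "What are the key takeaways?",
    "What views are missing from this conversation?",
    # Rough stab at the sentiment-to-probability idea: ask for explicit numbers.
    "For each substantive claim, estimate the writer's implied confidence as a percentage.",
]


def analyse_thread(thread_text: str, model: str = "gpt-4o-mini") -> dict:
    """Run every analysis question over one pasted email thread."""
    client = OpenAI()
    answers = {}
    for question in ANALYSIS_QUESTIONS:
        response = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": f"Email thread:\n{thread_text}\n\nQuestion: {question}",
            }],
        )
        answers[question] = response.choices[0].message.content
    return answers
```

Calling `analyse_thread(pasted_thread)` returns a dict mapping each question to the model's answer; an actual email plugin would wrap this around whatever thread export its mail client provides.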

This is a linkpost for https://ailabwatch.org

I'm launching AI Lab Watch. I collected actions for frontier AI labs to improve AI safety, then evaluated some frontier labs accordingly.

It's a collection of information on what labs should do and what labs are doing. It also has some adjacent resources, including a list...

yanni kyriacos · 1h
Hi Zach! To clarify, are you basically saying you don't want to improve the project much more than where you've got it to? I think it is possible you've tripped over a highly impactful thing here!
Zach Stein-Perlman · 17m
Not necessarily. But:

  1. There are opportunity costs and other tradeoffs involved in making the project better along public-attention dimensions.
  2. The current version is bad at getting public attention; improving it and making it get 1000x public attention would still leave it with little; likely it's better to wait for a different project that's better positioned and more focused on getting public attention. And as I said, I expect such a project to appear soon.

"And as I said, I expect such a project to appear soon."

I don't know whether to read this as "Zach has some inside information that gives him high confidence it will exist" or "Zach is doing wishful thinking" or something else!

 NOTE: This post was updated to include two additional models which meet the criteria for being considered Open Source AI.

As advanced machine learning systems become increasingly widespread, the question of how to make them safe is also gaining attention. Within this...

Jacob-Haimes · 2h
Good call. I did some more investigating and I agree that EleutherAI's Pythia is Open Source; I'll update the post with a new image and wording shortly. As a side note, the extra research your comment prompted led me to another Open Source model from the Allen Institute for AI (OLMo), as well as to the Model Openness Framework, both of which I will also be adding. Thanks!

The post has now been updated; please let me know if you don't think the modifications are sufficient, so that I can fix them.

Now to make the changes on the other platforms!

SummaryBot · 9h
Executive summary: The term "open source AI" is frequently misused by companies to gain positive perception without meeting the actual criteria for open source, which hinders meaningful discussion about AI governance and regulation.

Key points:

  1. Open source software is clearly defined, but current AI models don't fit neatly into this definition due to their unique components (architecture, training process, weights).
  2. The Open Source AI Definition (OSAID) is still being developed, so there is no formal definition of "open source AI" yet.
  3. Many prominent AI models (GPT-4, Llama3, Gemma, Mistral, BLOOMZ) claim to be open source but do not meet the criteria, while only a few (Amber, Crystal, OpenELM) can be considered truly open source.
  4. Companies misuse the "open source" label for PR benefits and to lobby for reduced regulations without sacrificing their competitive advantage.
  5. To clarify the space, the author proposes categorizing models as Open Source (per OSAID), Shared Weights (released weights only), Open Release (encompasses both previous categories), and Closed Source.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
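
As a rough illustration of the categorization proposed in point 5, here is a minimal sketch. The field names and classification rules are simplifying assumptions made for the sake of example; they are not the OSAID's actual criteria or the post's exact definitions.

```python
# Illustrative sketch of the proposed release categories; the fields and rules
# below are assumptions for illustration, not the OSAID's real criteria.
from dataclasses import dataclass
from enum import Enum


class ReleaseCategory(Enum):
    OPEN_SOURCE = "Open Source (per OSAID)"
    SHARED_WEIGHTS = "Shared Weights (released weights only)"
    CLOSED_SOURCE = "Closed Source"


@dataclass
class ModelRelease:
    weights_released: bool
    training_code_released: bool
    training_data_documented: bool
    osaid_compliant_license: bool


def categorize(release: ModelRelease) -> ReleaseCategory:
    """Map what a lab has actually released onto the post's categories."""
    if (release.weights_released and release.training_code_released
            and release.training_data_documented and release.osaid_compliant_license):
        return ReleaseCategory.OPEN_SOURCE
    if release.weights_released:
        return ReleaseCategory.SHARED_WEIGHTS
    return ReleaseCategory.CLOSED_SOURCE


def is_open_release(category: ReleaseCategory) -> bool:
    """'Open Release' is the umbrella term covering the first two categories."""
    return category in (ReleaseCategory.OPEN_SOURCE, ReleaseCategory.SHARED_WEIGHTS)
```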

This announcement was written by Toby Tremlett, but don’t worry, I won’t answer the questions for Lewis.

Lewis Bollard, Program Director of Farm Animal Welfare at Open Philanthropy, will be holding an AMA on Wednesday 8th of May. Put all your questions for him on this thread...


Do you think there are promising ways to slow down growth in aquaculture?

emre kaplan · 1h
Many grantee organisations report the lessons they learnt to their donors, so Open Philanthropy must have accumulated a lot of information on best practices for animal welfare organisations. As far as I understand, grantmakers are wary of giving object-level advice and micromanaging grantees. On the other hand, many organisations already spend a lot of time trying to learn about the best (and worst) practices in other organisations. Could the Open Phil animal welfare team prepare an anonymised write-up about what their grantees report as the reasons for their successes and failures?
DanteTheAbstract · 5h
In your recent 80k podcast, almost all the work referenced seems to be targeted at the US and EU (except the farm animal welfare in Asia section).

  • What is the actual geographic target of the work that's being funded?
  • Is there work being done/planned to look at animal welfare funding opportunities more globally?
Luise commented on Killing the moths 1h ago

This post was partly inspired by, and shares some themes with, this Joe Carlsmith post. My post (unsurprisingly) expresses fewer concepts with less clarity and resonance, but is hopefully of some value regardless.

Content warning: description of animal death.

I live in a ...


I've got moths in my flat right now and this post made me take solving this more seriously. Thank you!

  • SoGive works with major donors.
  • As part of our work, we meet with several (10-30 per year) charities, generally ones recommended by evaluators we trust, or (occasionally) recommended by our own research.
  • We learn a lot through these conversations. This suggests that we might want to publish our call notes so that others can also learn about the charities we speak with.
  • Given that we take notes during the calls anyway, it might seem that it would be low cost for us to simply publish those. That impression would be misleading.
    • There is a non-trivial time cost for us, partly because documents which are published are held to a higher standard than those which are purely internal, but mostly because of our relationship with the charities. We want them to feel confident that they can speak openly with us. This means not only an extra step in the process (ie sharing a draft with the organisation
...

Summary

  1. Where there’s overfishing, reducing fishing pressure or harvest rates — roughly the share of the population or biomass caught in a fishery per fishing period — actually allows more animals to be caught in the long run.
  2. Sustainable fishery management policies
...
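
One way to see the first summary point is with a toy surplus-production (Schaefer) model: under logistic growth with a constant harvest rate h, long-run catch is h·K·(1 − h/r), which falls whenever h is pushed above r/2, so an overfished fishery catches more animals in the long run if the harvest rate is reduced. The sketch below uses arbitrary illustrative parameter values, not numbers from the post.

```python
# Toy Schaefer model: dB/dt = r*B*(1 - B/K) - h*B has equilibrium biomass
# B* = K*(1 - h/r), so long-run yield is Y(h) = h*B*, maximised at h = r/2.
# Parameter values are arbitrary and purely illustrative.

def long_run_yield(h: float, r: float = 0.4, K: float = 1_000_000) -> float:
    """Equilibrium annual catch at harvest rate h (zero if the stock collapses)."""
    if h >= r:
        return 0.0
    equilibrium_biomass = K * (1 - h / r)
    return h * equilibrium_biomass


if __name__ == "__main__":
    # An "overfished" fishery here is one with h above r/2 = 0.2.
    for h in (0.35, 0.30, 0.25, 0.20):
        print(f"h = {h:.2f}: long-run yield = {long_run_yield(h):,.0f}")
    # Yield rises from 43,750 to 100,000 as the harvest rate drops toward r/2.
```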
rime · 4h

Just the arguments in the summary are really solid.[1] And while I wasn't considering supporting sustainability in fishing anyway, I now believe it's more urgent to culturally/semiotically/associatively separate welfare from some strands of "environmentalism". Thanks!

Alas, I don't predict I will work anywhere where this update becomes pivotal to my actions, but my practically relevant takeaway is: I will reproduce the arguments from this post (and/or link it) in contexts where people are discussing conjunctions/disjunctions between environmenta... (read more)

Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he ...


The full quote suggests this is because he classifies Operation Warp Speed (reactive, targeted) as very different from the Office (wasteful, impossible to predict what you'll need, didn't work last time). I would classify this as a disagreement about means rather than ends.

 

One last question, Mr. President, because I know that your time is limited, and I appreciate your generosity. We have just reached the four-year anniversary of the COVID pandemic. One of your historic accomplishments was Operation Warp Speed. If we were to have another pandemic, would you take the same actions to manufacture and distribute a vaccine and get it in the arms of Americans as quickly as possible?

Trump: I did a phenomenal job. I appreciate the way you worded that question. So I have a very important Democrat friend, who probably votes for me, but I'm not 100% sure, because he's a serious Democrat, and he asked me about it. He said Operation Warp Speed was one of the greatest achievements in the history of government. What you did was incredible, the speed of it, and the, you know, it was supposed to take anywhere from five to 12 years, the whole thing. Not only that: the ventilators, the therapeutics, Regeneron and other things. I mean Regeneron was incredible. But therapeutics—everything. The overall—Operation Warp Speed, and you never talk about it. Democrats talk about it as if it’s the greatest achievement. So I don’t talk about it. I let others talk about it. 

You know, you have strong opinions both ways on the vaccines. It's interesting. The Democrats love the vaccine. The Democrats. Only reason I don’t take credit for it. The Republicans, in many cases, don’t, although many of them got it, I can tell you. It’s very interesting. Some of the ones who talk the most. I said, “Well, you didn’t have it did you?” Well, actually he did, but you know, et cetera. 

But Democrats think it’s an incredible, incredible achievement, and they wish they could take credit for it, and Republicans don’t. I don't bring it up. All I do is just, I do the right thing. And we've gotten actually a lot of credit for Operation Warp Speed. And the power and the speed was incredible. And don’t forget, when I said, nobody had any idea what this was. You know, we’re two and a half years, almost three years, nobody ever. Everybody thought of a pandemic as an ancient problem. No longer a modern problem, right? You know, you don't think of that? You hear about 1917 in Europe and all. You didn’t think that could happen. You learned if you could. But nobody saw that coming and we took over, and I’m not blaming the past administrations at all, because again, nobody saw it coming. But the cupboards were bare. 

We had no gowns, we had no masks. We had no goggles, we had no medicines. We had no ventilators. We had nothing. The cupboards were totally bare. And I energized the country like nobody’s ever energized our country. A lot of people give us credit for that. Unfortunately, they’re mostly Democrats that give me the credit.

Well, sir, would you do the same thing again to get vaccines in the arms of Americans as quickly as possible, if it happened again in the next four years?

Trump: Well, there are the variations of it. I mean, you know, we also learned when that first came out, nobody had any idea what this was, this was something that nobody heard of. At that time, they didn’t call it Covid. They called it various names. Somehow they settled on Covid. It was the China virus, various other names. 

But when this came along, nobody had any idea. All they knew was dust coming in from China. And there were bad things happening in China around Wuhan. You know, I predicted. I think you'd know this, but I was very strong on saying that this came from Wuhan. And it came from the Wuhan labs. And I said that from day one. Because I saw things that led me to believe that, very strongly led me to believe that. But I was right on that. A lot of people say that now that Trump really did get it right. A lot of people said, “Oh, it came from caves, or it came from other countries.” China was trying to convince people that it came from Italy and France, you know, first Italy, then France. I said, “No, it came from China, and it came from the Wuhan labs.” And that's where it ended up coming from. So you know, and I said that very early. I never said anything else actually. But I've been given a lot of credit for Operation Warp Speed. But most of that credit has come from Democrats. And I think a big portion of Republicans agree with it, too. But a lot of them don't want to say it. They don't want to talk about it.

So last follow-up: The Biden Administration created the Office of Pandemic Preparedness and Response Policy, a permanent office in the executive branch tasked with preparing for epidemics that have not yet emerged. You disbanded a similar office in 2018 that Obama had created. Would you disband Biden's office, too?

Trump: Well, he wants to spend a lot of money on something that you don't know if it's gonna be 100 years or 50 years or 25 years. And it's just a way of giving out pork. And, yeah, I probably would, because I think we've learned a lot and we can mobilize, you know, we can mobilize. A lot of the things that you do and a lot of the equipment that you buy is obsolete when you get hit with something. And as far as medicines, you know, these medicines are very different depending on what strains, depending on what type of flu or virus it may be. You know, things change so much. So, yeah, I think I would. It doesn't mean that we're not watching out for it all the time. But it's very hard to predict what's coming because there are a lot of variations of these pandemics. I mean, the variations are incredible, if you look at it. But we did a great job with the therapeutics. And, again, these therapeutics were specific to this, not for something else. So, no, I think it's just another—I think it sounds good politically, but I think it's a very expensive solution to something that won't work. You have to move quickly when you see it happening.

 

link

Trump is anti-tackling pandemics except insofar as it implies he did anything wrong

I'd say it's 50/50 but sure. And while politics is discouraged, I don't think that your thing is really what's being discouraged.

 

A crucial consideration in assessing the risks of advanced AI is the moral value we place on "unaligned" AIs—systems that do not share human preferences—which could emerge if we fail to make enough progress on technical alignment.

In this post I'll consider three potential...

Rohin Shah · 16h
I can believe that if the population you are trying to predict for is just humans, almost all of whom have at least some affective empathy. But I'd feel pretty surprised if this were true in whatever distribution over unaligned AIs we're imagining. In particular, I think if there's no particular reason to expect affective empathy in unaligned AIs, then your prior on it being present should be near-zero (simply because there are lots of specific claims about unaligned AIs of about that complexity, most of which will be false). And I'd be surprised if "zero vs non-zero affective empathy" was not predictive of utilitarian motivations.

I definitely agree that AIs might feel pleasure and pain, though I'm less confident in it than you seem to be. It just seems like AI cognition could be very different from human cognition. For example, I would guess that pain/pleasure are important for learning in humans, but it seems like this is probably not true for AI systems in the current paradigm. (For gradient descent, the learning and the cognition happen separately -- the AI cognition doesn't even get the loss/reward equivalent as an input, so it cannot "experience" it. For in-context learning, it seems very unclear what the pain/pleasure equivalent would be.)

I agree this is possible. But ultimately I'm not seeing any particularly strong reasons to expect this (and I feel like your arguments are mostly saying "nothing rules it out"). Whereas I do think there's a strong reason to expect weaker tendencies: AIs will be different, and on average different implies fewer of the properties that humans have. So aggregating these, I end up concluding that unaligned AIs will be less utilitarian in expectation. (You make a bunch of arguments for why AIs might not be as different as we expect. I agree that if you haven't thought about those arguments before, you should probably reduce your expectation of how different AIs will be. But I still think they will be quite different.)

I don't see why it mat

Here are a few (long, but high-level) comments I have before responding to a few specific points that I still disagree with:

  • I agree there are some weak reasons to think that humans are likely to be more utilitarian on average than unaligned AIs, for basically the reasons you talk about in your comment (I won't express individual agreement with all the points you gave that I agree with, but you should know that I agree with many of them). 

    However, I do not yet see any strong reasons supporting your view. (The main argument seems to be: AIs will be diff
... (read more)