Quick takes

Linch
5d

My default story is one where government actors eventually take an increasing (likely dominant) role in the development of AGI. Some assumptions behind this default story:

1. AGI progress continues to be fairly concentrated among a small number of actors, even as AI becomes percentage points of GDP.

2. Takeoff speeds (from the perspective of the State) are relatively slow.

3. Timelines are moderate to long (after 2030 say). 

If what I say is broadly correct, I think this may have some underrated downstream implications. For example, we may be currently... (read more)


I basically grant 2, sort of agree with 1, and drastically disagree with 3 (that timelines will be long).

Which makes me a bit of an outlier: while I do have real confidence in the basic story that governments are likely to influence AI a lot, I have my doubts that governments will try to regulate AI seriously, especially if timelines are short enough.

1
CAISID
1d
A useful thing to explore more here is the socio-legal interaction between private industry and the state, particularly when collaborating on high-tech products or services. There is a lot more interaction between tech-leading industry and the state than many people realise. It's also useful to think of states not as singular entities but as bundles of often fragmented entities organised under a singular authority/leadership. So some parts of 'the state' may have very good insight into AI development, and some may not have a very good idea at all.

The dynamic of state-to-corporate regulation is complex and messy, and could certainly do with more AI-context research, but I'd also highlight the importance of government contracts to this idea. When the government builds something, it is often via a number of 'trusted' private entities (the more sensitive the project, the more trusted the entity - there is a licensing system for this in most developed countries), so the whole state/corporate role is likely to be quite mixed anyway and balanced mostly on contractual obligations. It may also differ by industry.
2
Linch
2d
Makes sense! I agree that fast takeoff + short timelines make the position I outlined above much weaker. I want to flag that if an AI lab and the US gov't are equally responsible for something, then the comparison will still favor the AI lab CEO, as lab CEOs have much greater control of their companies than the president has over the USG.

This is a quick post to talk a little bit about what I’m planning to focus on in the near and medium-term future, and to highlight that I’m currently hiring for a joint executive and research assistant position. You can read more about the role and apply here! If you’re potentially interested, hopefully the comments below can help you figure out whether you’d enjoy the role. 

Recent advances in AI, combined with economic modelling (e.g. here), suggest that we might well face explosive AI-driven growth in technological capability in the next d... (read more)


Hi Will,

What is especially interesting here is your focus on an all-hazards approach to Grand Challenges. Improved governance has the potential to influence all cause areas, including long-term and short-term causes, x-risks, and s-risks.

Here at the Odyssean Institute, we’re developing a novel approach to these deep questions of governing Grand Challenges. We’re currently running our first horizon scan on tipping points in global catastrophic risk and will use this as the first step of a longer-term process which will include Decision Making under Deep U... (read more)

2
Lukas Finnveden
1mo
Christiano says ~22% ("but you should treat these numbers as having 0.5 significant figures") without a time-bound; and Carlsmith says ">10%" (see bottom of abstract) by 2070. So no big difference there.
2
David Mathers
1mo
Fair point. Carlsmith said less originally.

Pandemic Prevention: All Nations Should Build Emergency Medical Stockpiles

All nations should have stockpiles of medical resources, e.g. masks, PPE, multi-purpose medicines and therapeutics, and various vaccines (smallpox, H1N1, etc.). At the slightest hint of danger, these resources should be distributed to every part of the country. There should be enough stock to protect the people for as long as is required to get resupplied.

The Australians have a national medical stockpile and they started distributing masks from it in January 2020 in response to the Cov... (read more)

Pandemic Prevention: We Need Comprehensive Surveillance Testing At All Major Airports

If a Pandemic-Potential Pathogen is to spread internationally, it will have to pass through a national port of entry. If it does, then it will likely leave a trace. Since the air travel network is the major risk for spreading PPPs, we should test airports regularly: wastewater, sewerage, surfaces, and (voluntarily) passengers. Major nodes such as Heathrow, Schiphol, Changi, and Dubai should be especially vigilant. If all developed countries were performing comprehensive su... (read more)

2
emmannaemeka
7d
Another idea we are exploring is to sample wastewater in military barracks. Since our military goes to several countries for various interventions, they represent an important group in the process too. However, optimization of metagenomics processes and cost is critical.

Interesting. That makes sense. The military are often 'guinea pigs' for testing and experiments.

I expect the cost of this testing would fall over time as our technology improves and we become more precise and efficient in our use of it. But, that is an assumption.

I also think that if we can put these policies in place in the developed world then fewer of these pandemic-potential pathogens will reach the developing world. Had the EU and the USA reacted faster and more competently to Covid, the likes of India and South America could have been spared a lot of cost and suffering.

A potential failure mode of 80k recommending EAs work at AI labs:

  1. 80k promotes a safety-related job within a leading AI lab.
  2. 80k's audience (purposefully) skews towards high-prospect candidates (HPCs) - smarter, richer, better connected than average.
  3. An HPC applies for and gets the safety role within the AI lab.
  4. The HPC stays at the lab but moves roles.
  5. Now we have a smart, rich, well-connected person no longer in safety but in capabilities.

I think this is sufficiently important / likely that 80k should consider tracking these people over time to see if this is a real issue.

Thanks Yanni, I think a lot of people have been concerned about this kind of thing.

I would be surprised if 80,000 Hours isn't already tracking this or something like it - perhaps try reaching out to them directly; you might get a better response that way.

Don't forget to go to http://www.projectforawesome.com today and vote for videos promoting effective charities like Against Malaria Foundation, The Humane League, GiveDirectly, Good Food Institute, ProVeg, GiveWell and Fish Welfare Initiative!

3
ramekin
5d
How does one vote? (Sorry if this is super obvious and I'm just missing it!)

+1. I went to the Effective Altruism Barcelona GiveDirectly video, and the voting link just took me to the GiveWell homepage.

Ambition is like fire. Too little and you go cold. But unmanaged, it leaves you burnt.

I'm curious why there hasn't been more work exploring a pro-AI or pro-AI-acceleration position from an effective altruist perspective. Some points:

  1. Unlike existential risk from other sources (e.g. an asteroid) AI x-risk is unique because humans would be replaced by other beings, rather than completely dying out. This means you can't simply apply a naive argument that AI threatens total extinction of value to make the case that AI safety is astronomically important, in the sense that you can for other x-risks. You generally need additional assumptions.
  2. Total
... (read more)
1
Pivocajs
2d
I agree with this. I (strongly) disagree with this. Me being alive is a relatively small part of my values. And since I am not the director of the world, me personally being around to influence things is unlikely to have a decisive impact on things I value.

In more detail: Sure, all else being equal, me being there when AI happens is mildly helpful. But the outcome of building AI seems to be a function of, among other things, (i) the values of the people building it + (ii) how much reflection they can do on those values + (iii) the environment dynamics these people are subject to (e.g., the current race dynamics between AI companies). And over time, I expect the potential decrease in (i) to be far outweighed by gains in (ii) and (iii).

* The first issue is about (i): it is not actually me building the AGI, either now or in the future. But I am willing to grant that (all else being equal) the current generation is more likely to have values closer to my values.
* However, I expect that factors (ii) and (iii) are just as influential. Regarding (ii), it seems we keep making progress at philosophy, ethics, etc., and to me, this currently far outweighs the value drift in (i).
* Regarding (iii), my impression is that the current situation is so bad that it can't get much worse, and we might as well wait. This of course depends on how likely you think we are to get a bad outcome if we either (a) get superintelligence without additional progress on alignment or (b) get widespread human-level AI with no progress on alignment, institution design, etc.

Me being alive is a relatively small part of my values.

I agree some people (such as yourself) might be extremely altruistic, and therefore might not care much about their own life relative to other values they hold, but this position is fairly uncommon. Most people care a lot about their own lives (and especially the lives of their family and friends) relative to other things they care about. We can empirically test this hypothesis by looking at how people choose to spend their time and money, and the results are generally that people spend their money on ... (read more)

2
Vasco Grilo
12d
Great points, Matthew! I have wondered about this too. Relatedly, readers may want to check the sequence Otherness and Control in the Age of AGI from Joe Carlsmith, in particular the post “Does AI risk ‘other’ the AIs?”. One potential argument against accelerating AI is that it will increase the chance of catastrophes, which would then lead to overregulating AI (e.g. in the same way that nuclear power arguably was overregulated).

Heads up! I'm planning a Draft Amnesty event (like this one). I think the last one went really well, and I'm pretty excited to run this. 

The Draft Amnesty event will probably be a week long, around mid-March.

I'll likely post some question threads such as "What posts would you like to see someone write?" (like this one) and "What posts are you thinking of writing?" (like this one), and set up some gather.town co-working/social opportunities for polishing posts/writing up drafts in the build-up.

I'm also brainstorming ways to make draft amn... (read more)

Dear Toby, thank you for this idea! 

I have an idea that is burning and makes me lose sleep, but it's too big for one person, so more eyes on it is better.

The theme is mindful hacking.
A hack is a clever trick, the sort of thing a trickster archetype would do. One thing about tricksters is that their tricks often bite them back, so when hacking one must be mindful about the ethical considerations of one's hacks. In particular, the recent bestseller A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back suggests several hacks that se... (read more)

3
CAISID
7d
I'd be really interested to see what posts people want to see. I'm happy to devote some time and effort to creating posts if I thought it would be useful to people, especially if it's in my skill set. Sometimes it can be hard to tell what's useful beyond getting inbox messages after the fact.

Two sources of human misalignment that may resist a long reflection: malevolence and ideological fanaticism

(Alternative title: Some bad human values may resist idealization[1])

The values of some humans, even if idealized (e.g., during some form of long reflection), may be incompatible with an excellent future. Thus, solving AI alignment will not necessarily lead to utopia.

Others have raised similar concerns before.[2] Joe Carlsmith puts it especially well in the post “An even deeper atheism”:

“And now, of course, the question arises: how diff

... (read more)

Existential risks from within?

(Unimportant discussion of probably useless and confused terminology.)

I sometimes use terms like “inner existential risks” to refer to risk factors like malevolence and fanaticism. Inner existential risks primarily arise from “within the human heart”—that is, they are primarily related to the values, goals and/or beliefs of (some) humans. 

My sense is that most x-risk discourse focuses on outer existential risks, that is, x-risks which primarily arise from outside the human mind. These could be physical or n... (read more)

I have written 7 emails to 7 politicians aiming to meet them to discuss AI Safety, and already have 2 meetings.

Normally, I'd put this kind of post on twitter, but I'm not on twitter, so it is here instead.

I just want people to know that if they're worried about AI Safety, believe more government engagement is a good thing and can hold a decent conversation (i.e. you understand the issue and are a good verbal/written communicator), then this could be an underrated path to high impact.

Another thing that is great about it is you can choose how many emails to send and how many meetings to have. So it can be done on the side of a "day job".

I want to make some Anki cards to learn/reinforce some important concepts, research findings & facts related to animal advocacy. Any recommendations for key facts, research outputs or concepts to include? E.g. things like how many animals are killed in China, components of the BCC, etc.

Hi James, did you make this?

2
Pablo
2mo
This deck includes some EAA-related numbers, which may be of interest.
9
Joseph Lemien
2mo
I'd recommend something related to efficiency of creating food, such as how rice provides 11 million calories per acre, while pork produces only 3.5 million calories per acre. Of course other inputs than 'acre' could be used, such as how many pounds of plants are required to make one pound of chicken meat, or units of energy input, etc. Just something to emphasize/highlight the efficiency of growing plants for food compared with growing animals for food.
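
For a card like that, a quick back-of-the-envelope check of the implied ratio might help it stick. Here is a minimal sketch using only the two figures above (illustrative values from this comment, not independently verified):

```python
# Back-of-the-envelope comparison using the figures quoted in this comment
# (assumed illustrative values, not independently verified).
rice_cal_per_acre = 11_000_000  # ~11 million calories per acre of rice
pork_cal_per_acre = 3_500_000   # ~3.5 million calories per acre of pork

ratio = rice_cal_per_acre / pork_cal_per_acre
print(f"Rice yields ~{ratio:.1f}x more calories per acre than pork.")
# Expected output: Rice yields ~3.1x more calories per acre than pork.
```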

I'm a little confused as to why we consider the leaders of AI companies (Altman, Hassabis, Amodei, etc.) to be "thought leaders" in the field of AI safety in particular. Their job description is to grow the company and increase shareholder value, so their public personas and statements have to reflect that. Surely they are far too compromised for their opinions to be taken too seriously; they couldn't make strong statements against AI growth and development even if they wanted to, because of their job and position.

The recent post "Sam Altman's chip ambition... (read more)


I think "thought leader" sometimes means "has thoughts at the leading edge" and sometimes means "leads the thoughts of the herd on a subject", and that there is sometimes a deliberate ambiguity between the two.

9
NickLaing
7d
Thanks, that's a helpful perspective, and I would be happy if it were true that they weren't considered AI safety thought leaders. I do feel like they are often seen this way in the public sphere though, and sometimes here on the forum too.
3
Nick K.
5d
I realize that my question sounded rhetorical, but I'm actually interested in your sources or reasons for your impression. I certainly don't have a good idea of the general opinion, and the media I consume is biased towards what I consider reasonable takes. That being said, I haven't encountered the position you're concerned about very much and would be interested to hear where you did. Regarding this forum, I imagine one could read into some answers, but overall I don't get the impression that the AI CEOs are seen as big safety proponents.

Mini EA Forum Update

We’ve updated our new user onboarding flow! You can see more details in GitHub here.

In addition to making it way prettier, we’re trying out adding some optional steps, including:

  1. You can select topics you’re interested in, to make your frontpage more relevant to you.
    1. You can also click the “Customize feed” button on the frontpage - see details here.
  2. You can choose some authors to subscribe to. You will be notified when an author you are subscribed to publishes a post.
    1. You can also subscribe from any user’s profile page.
  3. Y
... (read more)

Stand-up comedian in San Francisco spars with ChatGPT AI developers in the audience
https://youtu.be/MJ3E-2tmC60

Last week, we helped facilitate a Digital Platform Coordination call to start conversations between members of the animal advocacy movement and see where work might intersect. If anyone is involved in digital platforms, either using existing solutions or building your own, feel free to join the conversation on Slack as we continue to coordinate & share info.

... (read more)