All of Remmelt's Comments + Replies

I spent time digging into Uganda Community Farm’s plans last year, and ended up becoming a regular donor. From reading the write-ups and later asking Anthony about the sorghum training and grain-processing plant projects, I understood Anthony to be thoughtful and strategic about actually helping relieve poverty in the Kamuli & Buyende region.

Here are short explainers worth reading:

UCF focusses on training farmers and giving them the materials and tools needed to build up t... (read more)

Strong upvote for a community member taking the time to evaluate an intervention presented by an "outsider," act on that evaluation, and share it with others. This adds a lot of value!

3
Anthony Kalulu, a rural farmer in eastern Uganda.
15d
Thanks so much, Remmelt, for sharing this, and for your kind support of UCF's work.

Is there an argument that it is impossible?

There is actually an impossibility argument. Even if you could robustly specify goals in AGI, there is another convergent phenomenon that would cause misaligned effects and eventually remove the goal structures.

You can find an intuitive summary here: https://www.lesswrong.com/posts/jFkEhqpsCRbKgLZrd/what-if-alignment-is-not-enough

Thanks! It's also a good example of the many complaints now being prepared by individuals.

Actually, it looks like there is a thirteenth lawsuit that was filed outside the US.

A class-action privacy lawsuit filed in Israel back in April 2023.

Wondering if this is still ongoing: https://www.einpresswire.com/article/630376275/first-class-action-lawsuit-against-openai-the-district-court-in-israel-approved-suing-openai-in-a-class-action-lawsuit

I agree that this implies those people are more inclined to spend the time to consider options. At least they like listening to other people give interesting opinions about the topic.

But we’re all just humans, interacting socially in a community. I think it’s good to stay humble about that.

If we’re not, then we make ourselves unable to identify and deal with any information cascades, peer proof, and/or peer group pressures that tend to form in communities.

Three reasons come to mind why OpenPhil has not funded us.

  1. Their grant programs don't match, and we have therefore not applied to them. They fund individuals making early career decisions, university-based courses, programs that selectively support "highly talented" young people, or "high quality nuanced" communication. We don't fit any of those categories.
    1. We did send in a brief application in early 2023 though, for a regrant covering our funds from FTX, which was not granted (the same happened to at least one other field-building org I'm aware of).
  2. AISC
... (read more)

They're not quite doing a brand partnership. 

But 80k has featured various safety researchers working at AGI labs over the years. Eg. see OpenAI.

So it's more like 80k has created free promotional content, and given their stamp of approval to working at AGI labs (of course, 'if you weigh up your options, and think it through rationally', like your friends).

1
Rebecca
2mo
I generally think people who listen to detail-focused 3 hour podcasts are the sorts of people who weigh up options 

Do you mean OP, as in Open Philanthropy?

2
Chris バルス
2mo
Apologies. Yes, I mean Open Philanthropy.  

Hi Conor,

Thank you.

I’m glad to see that you already linked to clarifications before. And that you gracefully took the feedback, and removed the prompt engineer role. I feel grateful for your openness here.

It makes me feel less like I’m hitting a brick wall. We can have more of a conversation.

~ ~ ~

The rest is addressed to people on the team, and not to you in particular:

There are grounded reasons why 80k’s approaches to recommending work at AGI labs – with the hope of steering their trajectory – have supported AI corporations to scale. While disabling effort... (read more)

If some employees actually have the guts to whistleblow on current engineering malpractices…

Plenty of concrete practices you can whistleblow on that will be effective in getting society to turn against these companies:

  1. The copying of copyrighted and person-identifying information without permission (pass on evidence to publishers and they will have a lawsuit feast).
  2. The exploitation and underpayment of data workers and coders from the Global South (inside information on how OpenAI staff hid that they instructed workers in Kenya to collect images of chi
... (read more)

If labs do engage in behavior that is flagrantly reckless, employees can act as whistleblowers.

This is the crux for me.

If some employees actually have the guts to whistleblow on current engineering malpractices, I have some hope left that having AI safety researchers at these labs still turns out “net good”.

If this doesn’t happen, then they can keep having conversations about x-risks with their colleagues, but I don’t quite see when they will put up a resistance to dangerous tech scaling. If not now, when?

Internal politics might change

We’ve seen in ... (read more)

2
Remmelt
2mo
Plenty of concrete practices you can whistleblow on that will be effective in getting society to turn against these companies:

  1. The copying of copyrighted and person-identifying information without permission (pass on evidence to publishers and they will have a lawsuit feast).
  2. The exploitation and underpayment of data workers and coders from the Global South (inside information on how OpenAI staff hid that they instructed workers in Kenya to collect images of child sexual abuse, anyone?).
  3. The unscoped misdesign and failure to test these systems for all the uses the AI company promotes.
  4. The extent of AI hardware’s environmental pollution.

Pick what you’re in a position to whistleblow on. Be very careful to prepare well. You’re exposing a multi-billion-dollar company. First meet in person with an attorney experienced in protecting whistleblowers. Once you start collecting information, make photographs with your personal phone, rather than screenshots or USB copies that might be tracked by software. Make sure you’re not in line of sight of an office camera or webcam. Etc. Etc.

Preferably, before you start, talk with an experienced whistleblower about how to maintain anonymity. The more at ease you are there, the more you can bide your time, carefully collecting and storing information. If you need information to get started, email me at remmelt.ellen[a/}protonmail<d0t>com.

~ ~ ~

But don’t wait it out until you can see some concrete dependable sign of “extinction risk”. By that time, it’s too late.

Another problem with the NIST approach is an overemphasis on solving for identified risks, rather than on the precautionary principle (just don’t use scaled tech that could destabilise society at scale), or on preventing, and ensuring legal liability for, designs that cause situationalised harms.

Safety-washing of AI is harmful as it gives people an out, a chance to repeat the line "well at least they are allegedly doing some safety stuff", which is a convenient distraction from the fact that AI labs are knowingly developing a technology that can cause human extinction. This distraction causes otherwise safety-conscious people to invest in or work in an industry that they would reconsider if they had access to all the information.

Very much agreed.

It is an extreme claim to make in that context, IMO.

I think Benjamin made it to be nuanced. But the nuance in that article is rather one-sided.

If anything, the nuance should be on the side of identifying any ways you might accidentally support the development of dangerous auto-scaling technologies.

First, do no harm.

Do you think it would be better if no one who worked at OpenAI / Anthropic / Deepmind worked on safety?

It depends on what you mean by 'work on safety'. 
Standard practice for designing machine products to be safe in other established industries is to first narrowly scope the machinery's uses, the context of use, and the user group.  

If employees worked at OpenAI / Anthropic / Deepmind on narrowing their operational scopes, all power to them!  That would certainly help. It seems that leadership, who aim to design unscoped automated machinery... (read more)

5
Derek Shiller
3mo
I think I agree that safety researchers should prefer not to take a purely ceremonial role at a big company if they have other good options, but I'm hesitant to conclude that no one should be willing to do it. I don't think it is remotely obvious that safety research at big companies is ceremonial. There are a few reasons why some people might opt for a ceremonial role:

  1. It is good for some AI safety researchers to have access to what is going on at top labs, even if they can't do anything about it. They can at least keep tabs on it and can use that experience later in their careers.
  2. It seems bad to isolate capabilities researchers from safety concerns. I bet capabilities researchers would take safety concerns more seriously if they eat lunch every day with someone who is worried than if they only talk to each other.
  3. If labs do engage in behavior that is flagrantly reckless, employees can act as whistleblowers. Non-employees can't. Even if they can't prevent a disaster, they can create a paper trail of internal concerns which could be valuable in the future.
  4. Internal politics might change and it seems better to have people in place already thinking about these things.

Consider Bridges v South Wales Police, where the court found in favour of Bridges on some elements not because the AI system was biased, but because a Data Protection Impact Assessment (DPIA) had not been carried out. Put simply, SWP hadn’t made sure it wasn’t biased. A DPIA is a foundation-level document in almost any compliance procedure.
 

 

This is an interesting anecdote. 
It reminds me of how US medical companies having to go through the FDA's premarket approval process for software designed for prespecified uses holds them back from launching... (read more)

2
CAISID
3mo
That's a good regulatory mechanism, and isn't unlike many that exist UK-side for uses intended for security or nuclear application. Surprisingly, there isn't a similar requirement for policing although the above mentioned case has drastically improved the willingness of forces to have such systems adequately (and sometimes publicly) vetted. It certainly increased the seriousness to which AI safety is considered in a few industries. I'd really like to see a similar system as to the one you just mentioned for AI systems over a certain threshold, or for sale to certain industries. A licensing process would be useful, though obviously faces challenges as AI can and does change over time. This is one of the big weaknesses of a NIST certification, and one I am careful to raise with those seeking regulatory input.

Note that we are focussing here on decisions at the individual level.
There are limitations to that. 

See my LessWrong comment.

I don't think control is likely to scale to arbitrarily powerful systems. But it may not need to... which sets us up well for the following phases.


Under the concept of 'control', I am including the capacity of the AI system to control its own components' effects.

I am talking about the fundamental workings of control, ie. control theory and cybernetics.
That is, results general enough to be applicable to any following phases as well.

Anders Sandberg has been digging lately into fundamental controllability limits.
Could be interesting to talk with Anders.

A range of opinions from anonymous experts about the upsides and downsides of working on AI capabilities

I did read that compilation of advice, and responded to that in an email (16 May 2023):

"Dear [a],

People will drop in and look at job profiles without reading your other materials on the website. I'd suggest just writing a do-your-research cautionary line about OpenAI and Anthropic in the job descriptions itself.

Also suggest reviewing whether to trust advice on whether to take jobs that contribute to capability research.

  • Particularly advice by nerdy r
... (read more)
5
William the Kiwi
3mo
"This distinction between ‘capabilities’ research and ‘safety’ research is extremely fuzzy, and we have a somewhat poor track record of predicting which areas of research will be beneficial for safety work in the future. This suggests that work that advances some (and perhaps many) kinds of capabilities faster may be useful for reducing risks." This seems like a absurd claim. Are 80k actually making it? EDIT: the claim is made by Benjamin Hilton, one of 80k's analysts and the person the OP is replying too.
1
Remmelt
3mo
Note that we are focussing here on decisions at the individual level. There are limitations to that.  See my LessWrong comment.

Ben, it is very questionable that 80k is promoting non-safety roles at AGI labs as 'career steps'. 

Consider that your model of this situation may be wrong (account for model error). 

  • The upside is that you enabled some people to skill up and gain connections. 
  • The downside is that you are literally helping AGI labs to scale commercially (as well as indirectly supporting capability research).
2
William the Kiwi
3mo
I would agree with Remmelt here. While upskilling people is helpful, if those people then go on to increase the rate of capabilities gain by AI companies, this is reducing the time the world has available to find solutions to alignment and AI regulation. While, as a rule, I don't disagree with industries increasing their capabilities, I do disagree with this when those capabilities knowingly lead to human extinction.
9
Remmelt
3mo
I did read that compilation of advice, and responded to that in an email (16 May 2023):

"Dear [a],

People will drop in and look at job profiles without reading your other materials on the website. I'd suggest just writing a do-your-research cautionary line about OpenAI and Anthropic in the job descriptions itself.

Also suggest reviewing whether to trust advice on whether to take jobs that contribute to capability research.

  • Particularly advice by nerdy researchers paid/funded by corporate tech. 
  • Particularly by computer-minded researchers who might not be aware of the limitations of developing complicated control mechanisms to contain complex machine-environment feedback loops. 

Totally up to you of course.

Warm regards,
Remmelt"

This is what the article says: 

"All that said, we think it’s crucial to take an enormous amount of care before working at an organisation that might be a huge force for harm. Overall, it’s complicated to assess whether it’s good to work at a leading AI lab — and it’ll vary from person to person, and role to role." 

So you are saying that people are making a decision about working for an AGI lab that might be (or actually is) a huge force for harm. And that whether it's good (or bad) to work at an AGI lab depends on the person – ie. people need to figure this out for themselves. Yet you are openly advertising various jobs at AGI labs on the job board. People are clicking through and applying. Do you know how many read your article beforehand?

~ ~ ~

Even if they did read through the article, both the content and framing of the advice seem misguided. Notice what is emphasised in your considerations. 

Here are the first sentences of each consideration section (ie. what readers are most likely to read, and what you might most want to convey):

  1. "We think that a leading — but careful — AI project could be a huge force for good, and crucial to preventing an AI-related catastrophe."
    • Is this your opinion abo

(Let me get back to you on this when I find time, hopefully tomorrow.)

3
Remmelt
3mo
It depends on what you mean by 'work on safety'. 

Standard practice for designing machine products to be safe in other established industries is to first narrowly scope the machinery's uses, the context of use, and the user group.  

If employees worked at OpenAI / Anthropic / Deepmind on narrowing their operational scopes, all power to them! That would certainly help. It seems that leadership, who aim to design unscoped automated machinery to be used everywhere for everyone, would not approve though.

If working on safety means in effect playing close to a ceremonial role, where even though you really want to help, you cannot hope to catch up with the scaling efforts, I would reconsider. In other industries, when conscientious employees notice engineering malpractices that are already causing harms across society, sometimes one of them has the guts to find an attorney and become a whistleblower. 

Also, in that case, I would prefer the AGI labs to not hire for those close-to-ceremonial roles. I'd prefer them to be bluntly transparent to people in society that they are recklessly scaling ahead, and that they are just adding local bandaids to the 'Shoggoth' machinery. Not that that is going to happen anyway.   

If AGI labs can devote their budget to constructing operational design domains, I'm all up. Again, that's counter to the leaders' intentions. Their intention is to scale everywhere and rely on the long-term safety researchers to tell them that there must be some yet-undiscovered general safe control patch.  

I think we should avoid promoting AGI labs as a place to work at, or a place that somehow will improve safety. One of the reasons is indeed that I want us to be clear to idealistic talented people that they should really reconsider investing their career into supporting such an organisation.

BTW, I'm not quite answering from your suggested perspective of what an AGI lab "should do". What feels relevant to me is what we can personally consider to d

Thanks, I appreciate the paraphrase. Yes, that is a great summary.

 

I'm more optimistic e.g. that control turns out to be useful, or that there are hacky alignment techniques which work long enough to get through to the automation of crucial safety research

I hear this all the time, but I also notice that people saying it have not investigated the fundamental limits to controllability that you would encounter with any control system.

As a philosopher, would you not want to have a more generalisable and robust argument that this is actually going to work ... (read more)

5
Owen Cotton-Barratt
3mo
I think it's basically things flowing in some form through "the people working on the powerful technology spend time with people seriously concerned with large-scale risks". From a very zoomed out perspective it just seems obvious that we should be more optimistic about worlds where that's happening compared to worlds where it's not (which doesn't mean that necessarily remains true when we zoom in, but it sure affects my priors). If I try to tell more concrete stories they include things of the form "the safety-concerned people have better situational awareness and may make better plans later", and also "when systems start showing troubling indicators, culturally that's taken much more seriously". (Ok, I'm not going super concrete in my stories here, but that's because I don't want to anchor things on a particular narrow pathway.)
4
Owen Cotton-Barratt
3mo
Of course I'd prefer to have something more robust. But I don't think the lack of that means it's necessarily useless. I don't think control is likely to scale to arbitrarily powerful systems. But it may not need to. I think the next phase of the problem is like "keep things safe for long enough that we can get important work out of AI systems", where the important work has to be enough that it can be leveraged to something which sets us up well for the following phases.

Further, I think that there are a bunch of arguments for the value of safety work within labs (e.g. access to sota models; building institutional capacity and learning; cultural outreach) which seem to me to be significant and you're not engaging with.


Let's dig into the arguments you mentioned then.
 

  • Access to SOTA models
    • Given that safety research is intractable where open-ended and increasingly automated systems are scaled anywhere near current rates, I don't really see the value proposition here. 
    • I guess if researchers noticed a bunch of bad des
... (read more)

I think 1 and 3 seem like arguments that reduce the desirability of these roles but it's hard to see how they can make them net-negative.

Yes, specifically by claim 1, positive value can only asymptotically approach 0 
(ignoring opportunity costs). 

  • For small specialised models (designed for specific uses in a specific context of use for a specific user group), we see in practice that safety R&D can make a big difference.
  • For 'AGI', I would argue that the system cannot be controlled sufficiently to stay safe.
  • Unscoped everything-for-everyone model
... (read more)
9
Owen Cotton-Barratt
3mo
Thanks. I'm now understanding your central argument to be:

Is that a fair summary?

If so, I think:

  • Conditional on the premise, the conclusion appears to make sense
  • It still feels kinda galaxy-brained, which may make me want to retain some scepticism
  • However I feel way less confident than you in the premise, for I believe a number of reasons:
    • I'm more optimistic e.g. that control turns out to be useful, or that there are hacky alignment techniques which work long enough to get through to the automation of crucial safety research
    • I think that there are various non-research pathways for such people to (in expectation) increase the safety of the lab they're working at
    • It's unclear to me what the sign is of quality-of-safety-team-work on perceived-safety to the relevant outsiders (investors/regulators?)
      • e.g. I think that one class of work people in labs could do is capabilities monitoring, and I think that if this were done to a good standard it could in fact help to reduce perceived-safety to outsiders in a timely fashion
    • I guess I'm quite sceptical that signals like "well the safety team at this org doesn't really have any top-tier researchers and is generally a bit badly thought of" will be meaningfully legible to the relevant outsiders, so I don't really think that reducing the quality of their work will have too much impact on perceived safety
1
Remmelt
3mo
Let's dig into the arguments you mentioned then.

  • Access to SOTA models
    • Given that safety research is intractable where open-ended and increasingly automated systems are scaled anywhere near current rates, I don't really see the value proposition here. 
    • I guess if researchers noticed a bunch of bad design practices and violations of the law in inspecting the SOTA models, they could leak information about that to the public?
  • Building institutional capacity and learning
    • Inside a corporation competing against other corporations, where the more power-hungry individuals tend to find ways to the top, the institutional capacity-building and learning you will see will be directed towards extracting more profit and power. 
    • I think this argument considered within its proper institutional context actually cuts against your current conclusion.
  • Cultural outreach
    • This reminds me of the cultural exchanges between US and Soviet scientists during the Cold War. Are you thinking of something like that?
    • Saying that, I notice that the current situation is different in the sense that AI Safety researchers are not one side racing to scale proliferation of dangerous machines in tandem with the other side (AGI labs). 
    • To the extent though that AI Safety researchers can come to share collectively important insights with colleagues at AGI labs – such as on why and how to stop scaling dangerous machine technology, this cuts against my conclusion.
    • Looking from the outside, I haven't seen that yet. Early AGI safety thinkers (eg. Yudkowsky, Tegmark) and later funders (eg. Tallinn, Karnofsky) instead supported AGI labs to start up, even if they did not mean to.
    • But I'm open (and hoping!) to change my mind. It would be great if safety researchers at AGI labs start connecting to collaborate effectively on restricting harmful scaling.

I'm going off the brief descriptions you gave. Does that cover the arguments as you mea

80,000 Hours handpicks jobs at AGI labs.

Some of those jobs don't even focus on safety – instead they look like policy lobbying roles or engineering support roles.
 
Nine months ago, I wrote my concerns to 80k staff:

Hi [x, y, z] 

I noticed the job board lists positions at OpenAI and AnthropicAI under the AI Safety category:

Not sure whom to contact, so I wanted to share these concerns with each of you:

  1. Capability races
    1. OpenAI's push for scaling the size and applications of transformer-network-based models has led Google and others to copy and compete wi
... (read more)

Hi Remmelt,

Thanks for sharing your concerns, both with us privately and here on the forum. These are tricky issues and we expect people to disagree about how to weigh all the considerations — so it’s really good to have open conversations about them.

Ultimately, we disagree with you that it's net harmful to do technical safety research at AGI labs. In fact, we think it can be the best career step for some of our readers to work in labs, even in non-safety roles. That’s the core reason why we list these roles on our job board.

We argue for this p... (read more)

The ways I tried to pre-empt it failed.


Ie.

  • posting a sequence with familiar concepts to make the outside researcher more known to the community
  • cautioning against jumping to judgements
  • clarifying why alternatives to alignment make sense



Looking back:  I should have just held off until I managed to write one explainer (this one) that folks in my circles did not find extremely unintuitive.

Yep, that is what I was referring to.

Good that you raised this concern. 

 

It does seem like you're likely to be more careful in the future

Yes, I am more selective now in what I put out on the forums.

In part, because I am having more one-on-one calls with (established) researchers.
I find there is much more space to clarify and paraphrase that way. 

On the forums, certain write-ups seem to draw dismissive comments. 
Some combination of:
 (a) is not written by a friend or big name researcher.
 (b) requires some new counterintuitive re... (read more)

1
Remmelt
3mo
Ie.

  • posting a sequence with familiar concepts to make the outside researcher more known to the community
  • cautioning against jumping to judgements
  • clarifying why alternatives to alignment make sense

Looking back: I should have just held off until I managed to write one explainer (this one) that folks in my circles did not find extremely unintuitive.

Also see further discussion on LessWrong here and here.

 I'm worried about these having negative effects, making the AI safety people seem crazy, uninformed, or careless.


If you look at the projects, notice that each is carefully scoped.

  1. ODD project is an engineering project for specifying the domain that a model should be designed for and used in.
  2. Luddite Pro project is about journalism on current misuses of generative AI.
  3. Lawyers project is about supporting creative professionals to litigate based on existing law (DMCA takedowns, item-level disclosures for EU AI Act, pre-litigation research for an EU lawsuit
... (read more)
2
peterbarnett
3mo
Yep, that is what I was referring to. It does seem like you're likely to be more careful in the future, but I'm still fairly worried about advocacy done poorly. (Although, like, I also think people should be able to do advocacy if they want)

To lay out a middle ground here:

Thomas' comment was not ad hominem. But I personally think it is somewhat problematic.

Arepo's counterresponse indicates why.

  • Collecting a pile of commenters' negative responses to someone's writings is not a reliable way to judge whether someone's writing makes sense or not.
     

The reason being that alternative hypotheses exist that you would need to test against:

  • Maybe the argument is hard to convey? Maybe the author did a bad job at conveying the argument?
  • Maybe the writing is unpopular, for reasons unrelated to whether
... (read more)

That's an interesting point.  I wonder if this would also be the case if EVF (hypothetically) immediately earmarked proceeds from selling Wytham as donations to other organisations.

All of this of course is ignoring how grantmaking works in practice. 

Maybe I'm being cynical, but I'd give >30% that funders have declined to fund AI Safety Camp in its current form for some good reason. Has anyone written the case against?

To keep communication open, here is Oliver Habryka’s LessWrong comment.

9
calebp
3mo
Oli’s comment so people don’t need to click through

I also believe that even if alignment is possible, we need more time to solve it.

The “Do Not Build Uncontrollable AI” area is meant for anyone who has this concern to join.

The purpose of this area is to contribute to restricting corporations from recklessly scaling the training and uses of ML models.

I want the area to be open for contributors who think that:

  1. we’re not on track to solving safe control of AGI; and/or
  2. there are fundamental limits to the controllability of AGI, and unfortunately AGI cannot be kept safe over the long term; and/or
  3. corporati
... (read more)

For transparency, we organisers paid $10K to Arb to do the impact evaluation, using separate funding we were able to source.

The impact assessment was commissioned by AISC, not independent.

This is a valid concern. I have worried about conflicts of interest.

I really wanted the evaluators at Arb to do neutral research, without us organisers getting in the way. Linda and I both emphasised this at an orienting call they invited us to.

From Arb’s side, Gavin deliberately stood back and appointed Sam Holton as the main evaluator, who has no connections with AI Safety Camp. Misha did participate in early editions of the camp though.

All in, this is enough to take the report with a grain of salt. Worth picking apart the analysis and looking for any unsound premises.

Glad you raised these concerns!

I suggest people actually dig for evidence themselves as to whether the program is working.

The first four points you raised seem to rely on prestige or social proof. While those can be good indicators of merit, they are also gameable.

Ie.

  • one program can focus on ensuring they are prestigious (to attract time-strapped alignment mentors and picky grantmakers)
  • another program can decide not to (because they’re not willing to sacrifice other aspects they care about).

If there is one thing you can take away from Linda and me, it is t... (read more)

1
Remmelt
3mo
This is a valid concern. I have worried about conflicts of interest.

I really wanted the evaluators at Arb to do neutral research, without us organisers getting in the way. Linda and I both emphasised this at an orienting call they invited us to.

From Arb’s side, Gavin deliberately stood back and appointed Sam Holton as the main evaluator, who has no connections with AI Safety Camp. Misha did participate in early editions of the camp though.

All in, this is enough to take the report with a grain of salt. Worth picking apart the analysis and looking for any unsound premises.

I did not know this. Thank you for sharing all the details!

It's interesting to read about the paths you went through:
 AISC --> EleutherAI --> AGISF
             --> MATS 2.0 and 2.1
              --> Independent research grant

I'll add it as an individual anecdote to our sheet.

I kept responding to Paul’s arguments in private conversations, to the point that I decided to share my comments here.

  1. The hardware overhang argument has poor grounding.

Labs scaling models results in more investment in producing more GPU chips with more flops (see Sam Altman’s play for the UAE chip factory) and less latency between them (see the EA start-up Fathom Radiant, which started out offering fibre-optic-connected supercomputers for OpenAI and has now probably shifted to Anthropic).

The increasing levels of model combinatorial complexity and outside signal co... (read more)

This is an incisive description, Geoff. I couldn't put it better.

I'm confused what the two crosses are doing on your comment. 
Maybe the people who disagreed can clarify.

Hey, the invitation link stopped working. Could you update?

3
Arepo
3mo
Ah, thanks Remmelt! I've fixed it now.

Respect for this comment.

In the original conception of the unilateralist’s curse, the problem arose from epistemically diverse actors/groups having different assessments of how risky an action was.

The mistake was in the people with the rosiest assessment of the risk of an action taking the action by themselves – in disregard of others’ assessments.

What I want more people in AI Safety to be aware of is that there are many other communities out there who think that what “AGI” labs are doing is super harmful and destabilising.

We’re not the one community conce... (read more)

7
Geoffrey Miller
4mo
Remmelt - I agree. I think EA funders have been way too naive in thinking that, if they just support the right sort of AI development, with due concern for 'alignment' issues, they could steer the AI industry away from catastrophe.  In hindsight, this seems to have been a huge strategic blunder -- and the big mistake was under-estimating the corporate incentives and individual hubris that drives unsafe AI development despite any good intentions of funders and founders.

Helpful comment from you Lucius in the sheet:

"I think our first follow-up grant was 125k USD. Should be on the LTFF website somewhere. There were subsequent grants also related to the AISC project though. And Apollo Research's interpretability agenda also has some relationship with ideas I developed at AISC."

--> I updated the sheet.

Thanks, we’ll give it a go. Linda is working on sending something in for the “Request for proposals for projects to grow our capacity for reducing global catastrophic risks”.

Note though that AISC does not really fit OpenPhil’s grant programs, because we are not affiliated with a university and because we don’t select heavily on our own conceptions of who the "highly promising young people" are.

It turns out there are six AI Safety Camp alumni working at Apollo, including the two co-founders. 

I've got to go through alumni's LinkedIn profiles to update our records of post-camp positions.
It's on my to-do list.

3
Remmelt
4mo
Helpful comment from you Lucius in the sheet: "I think our first follow-up grant was 125k USD. Should be on the LTFF website somewhere. There were subsequent grants also related to the AISC project though. And Apollo Research's interpretability agenda also has some relationship with ideas I developed at AISC." --> I updated the sheet.

This is good to know! I’m glad that the experience helped you get involved in AI Safety work.

Could you search for the LTFF grant here and provide me the link? I must have missed it in my searches.

(Also, it looks like I missed two of the four alumni working at Apollo. Will update!)

I appreciate you sharing this. I’ll add it to our list of anecdotes.

Also welcoming people sharing any setbacks or negative experiences they had. We want to know if people have sucky experiences so we can find ways to make it not sucky next time. Hoping to have a more comprehensive sense ... (read more)

3
Remmelt
4mo
It turns out there are six AI Safety Camp alumni working at Apollo, including the two co-founders. I've got to go through alumni's LinkedIn profiles to update our records of post-camp positions. It's on my to-do list.

I hadn’t made the connection between the GMO protests and AI protests.

This reads as a well-researched piece.

The analysis makes sense to me – except for treating efforts to restrict facial recognition, the Kill Cloud, etc., as orthogonal. I would also focus more on preventing increasing AI harms and Big Tech power consolidation, which most AI-concerned communities agree on.

1
charlieh943
6mo
Appreciate that @Remmelt Ellen! In theory, I think these messages could work together. Though, given animosity between these communities, I think alliances are more challenging. Also I'm curious - what sort of policies would be mutually beneficial for people concerned about facial recognition and x-risk? 