All of lennart's Comments + Replies

Thanks for sharing, Stephen. Useful resources! Still in the process of listening to Chris Blattman's book "Why We Fight". Enjoying it so far.

If anyone is interested in thinking through the implications of a Taiwan invasion for the compute supply chain (TSMC etc.), PM me.

If someone is considering this line of work, I'd be keen to chat and outline some considerations.

2
Nathan Young
2y
Lennart, I was going to put you and Tom in touch. Have you talked?

Maybe one could argue with the second species argument/gorilla problem (Russell in Human Compatible)? It seems plausible to me that we're currently enabling a totalitarian global state for many factory-farmed animals -- and we could probably do this permanently.

3
Geoffrey Miller
2y
Lennart -- thanks for the link. I understand the analogy. The question is, would our totalitarian global state of factory farming actually be stable and permanent (in the sense of lasting thousands of generations)? Seems like we raise animals for meat, and they suffer. If we enjoy faster technological progress than the farm animals, we'll eventually invent ways to grow their meat without having to raise them at all. Their suffering isn't causally relevant to meat production; it's a negative by-product. So any AI capable of imposing a global totalitarian state in order to exploit our labor (or whatever it's getting from us) should be able to find more efficient alternatives to raising humans at all. It's like if the Machines in the Matrix movies found a better way to produce energy (e.g. fusion?) than keeping humans around as 'batteries' locked in a totalitarian virtual reality. In that case we'd face a true extinction risk, not a non-extinction totalitarian lock-in risk.

Just came here to comment the same. :)

See this profile by 80k, this post, and my sequence for more.

Also, don't hesitate to reach out if you want to hear more about roles for AI hardware experts, or if you know someone who might be interested.

1
Jessica Wen
2y
Thank you! Would definitely like to chat :)

donations from Switzerland accounting for 48% of our total donation volume last year.

Wow! Again, thanks for enabling this. :) So glad to hear that it worked out.

We've got plenty of bunkers (oh, I mean refuges) in Switzerland, as Luisa Rodriguez pointed out. Happy to help if this moves forward.

We've also got a small one in my basement; happy to co-host a trial run. :)

That's a cool idea! Thanks for sharing and crunching those numbers.

I'd note that we also have the Alignment Forum and LessWrong, where the majority of AI-related posts end up. So AI causes might be underrepresented here, and we could be missing the change over time there (I'd guess they have seen an increase over time). Maybe it's worth quickly running an analysis on their dataset?
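To give an idea, here's a rough sketch of what such an analysis could look like, assuming one already has an export of LessWrong/Alignment Forum posts with a publication date and a tag column (the file and column names below are hypothetical, not an actual API):

```python
# Rough sketch: count AI-tagged posts per year from a (hypothetical)
# export of LessWrong / Alignment Forum posts.
import pandas as pd

posts = pd.read_csv("lw_af_posts.csv", parse_dates=["posted_at"])  # hypothetical export
posts["is_ai"] = posts["tags"].str.contains("AI", case=False, na=False)

# Absolute count and share of AI-related posts per year
per_year = posts.groupby(posts["posted_at"].dt.year)["is_ai"].agg(["sum", "mean"])
per_year.columns = ["ai_posts", "ai_share"]
print(per_year)
```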

Thanks for the update and the concise summary. I enjoyed the bullet-point format and appreciate you sharing the insights of this survey publicly. Great job!

For context, CEA used to pay $70,000 annually to community builders in San Francisco, with lower salaries in areas with lower costs of living.

During which period was this the practice?

Also, now the update is:

Now CEA have updated their payment policy, with salaries baselined to $90,000 in San Francisco, with a cost of living adjustment for other locations, ...

That's starting when?

1
Vilhelm Skoglund
2y
Thank you! I think this was from the last grant period (2021-2022) and that it was slightly less before that. This is starting this grant period (from 2022).

Indeed. Given that the payment is a grant and you're working for [name of your local org], what's stopping you from using an appropriate title? Be it CEO, director, strategy lead, program director, etc. Most nationwide EA groups do so, and so do a handful of local groups.

Great post, Trevor! I share your message. :)

Especially:

The retreat features lots of people who are already “on board” with EA. At least a few should be at least moderately charismatic people for whom EA is a major consideration in how they make decisions.

That's also my experience. While "some buy-in" already helps a lot, people with more experience provide even "more value": sharing their EA story, their related struggles, and maybe how they managed to work on EA-adjacent stuff.
You bring something along these lines up later by sayin... (read more)

2
tlevin
2y
Re: "I'd also encourage the more "senior people" to join retreats from time to time," absolutely; not just (or even primarily) because you can provide value, but because retreats continue to be very useful in sharpening your cause prioritization, increasing your EA context, and building high-trust relationships with other EAs well after you're "senior"!

You can implement them easily in Ghost by using the HTML embed. You find it by clicking the share button in the bottom right corner and then clicking "embed".

1
Michael Huang
2y
Thanks very much!

Glad to hear you're now more excited. :)

Regarding:

I know there were a few meet-ups for other groups such as religious EA's, it would be great if there had been something like 'lonely and new meetup'?

I remember that at EAG London 2021 there was an event for newcomers where you were even matched with a mentor. Maybe we should copy this or make it a group effort across all the conferences.

1
Olivia Addy
2y
That sounds like a great idea! I'd have loved something like that.

Thanks for writing this, Akash! :) I’ve been following a similar paradigm for quite a while, some things I did:

  • Meetings only in the afternoon - the less important/demanding, the further into the evening
  • Focused work time is scarce, and therefore my morning time is a resource to protect
  • Taking my location into consideration. As you outline, when I'm already somewhere, I should make use of it.
  • Stopping work when I feel like I'm not making progress, and trading this time against my future "leisure time" when I'll hopefully be more energetic.

I still hold the same view that (a) we will probably see a switch in funding distribution and (b) if this does not happen, those groups won't be able to compete with SOTA models.

we will and should see a switch in funding distribution at publicly funded AI research groups

I would change my mind if we found more evidence of algorithmic innovation being a stronger, or even the most significant, driver.

Some recent updates regarding more funding for infrastructure include the National AI Research Cloud, which is currently being investigated by the ... (read more)

2
MaxRa
2y
Just realized that I misunderstood the original quote, yes, thanks, this makes total sense. 

The described doubling time of 6.2 months is the result when the outliers are excluded. If one includes all our models, the doubling time is around 7 months. However, the number of efficient ML models was only one or two.
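For intuition, here is a minimal sketch of how such a doubling time can be estimated: fit a line to log2(training compute) against publication date and invert the slope. The file and column names are hypothetical; the actual numbers come from our full dataset and inclusion criteria.

```python
# Minimal sketch: estimate the compute doubling time by fitting
# log2(training compute) against publication date.
import numpy as np
import pandas as pd

df = pd.read_csv("ml_systems.csv", parse_dates=["publication_date"])  # hypothetical export
df = df[~df["is_efficient_outlier"]]  # optionally drop the efficient-ML outliers

years = (df["publication_date"] - df["publication_date"].min()).dt.days / 365.25
slope, intercept = np.polyfit(years, np.log2(df["training_compute_flops"]), 1)  # doublings/year

print(f"Doubling time: {12 / slope:.1f} months")
```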

For "Semiconductor industry amortize their R&D cost due to slower improvements" the decreased price comes from the longer innovation cycles, so the R&D investments spread out over a longer time period. Competition should then drive the price down.

While in contrast "Sale price amortization when improvements are slower" describes the idea that the sale price within the company will be amortized over a longer time period given that obsolescence will be achieved later.

Those ideas stem from Cotra's appendices: "Room for improvements to silicon chips in... (read more)

Thanks, Sammy. Indeed this is related and very interesting!

Thanks, Michael.

  1. n counts the number of ML systems in the analysis at the time of writing (we have added more systems in the meantime). Examples of such systems are GPT-3, AlphaFold, etc. - basically, each is a row in our dataset.
  2. Right, good point. I'll add the number of systems for the given time period.
  3. That's hard to answer. I don't think OpenAI misinterpreted anything. For the moment, I think it's probably a mixture of:
    • the inclusion criteria for the systems on which we base this trend
    • actual slower doubling times for reasons which we should figure
... (read more)

I have been wondering the same. However, given that OpenAI's "AI and Compute" inclusion criteria are also a bit vague, I'm having a hard time judging which of our data points would fulfill their criteria.

In general, I would describe our dataset as matching the same criteria because:

  1. "relatively well known" equals our "lots of citations".
  2. "used a lot of compute for their time" equals our dataset if we exclude outliers from efficient ML models.
    • There's a recent trend in efficient ML models that achieve similar performance by using less compute for inference and train
... (read more)
2
MarkusAnderljung
3y
Thanks! What happens to your doubling times if you exclude the outliers from efficient ML models?

Also happy to help on a more local level: eazurich.org/join

If you're not already in contact with EA Zürich, just send us an email and we will get back to you: info@eazurich.org

Is there any chance of getting hold of the material you used for this workshop?