We're excited to release INTELLECT-1, the first 10B parameter language model collaboratively trained across the globe. This represents a 10× scale-up from our previous research and demonstrates that large-scale model training is no longer confined to large corporations but can be achieved through distributed, community-driven approaches. The next step is scaling this even further to frontier model sizes and ultimately open source AGI.

Comments (1)
This seems bad, if true. Has anyone considered reaching out to these people?
