A language model bigger than GPT-3 has arrived with a bold ambition: freeing AI from Big Tech’s clutches.
Named BLOOM, the large language model (LLM) promises a similar performance to Silicon Valley’s leading systems — but with a radically different approach to access.
While tech giants tend to keep their vaunted LLMs hidden from the public, BLOOM is available to anyone for free.
It’s also multilingual — unlike Google’s LaMDA and OpenAI’s GPT-3 — an unusual feature in an English-dominated field.
These features could democratize access to technology that’s set to make a deep impact on society.
LLMs are proving proficient at a growing range of tasks, including writing essays, generating code, and translating languages.
They’re also adept at producing harmful content — and their future capabilities are difficult to predict.
BLOOM gives researchers a unique chance to explore their risks and benefits.
“BLOOM is a demonstration that the most powerful AI models can be trained and released by the broader research community with accountability and in an actual open way, in contrast to the typical secrecy of industrial AI research labs,” said Teven Le Scao, co-lead of BLOOM’s training, in a statement.
Opening AI
LLMs are prohibitively expensive to create and run. Training GPT-3, for instance, was estimated to cost up to $27.6 million.
Inevitably, tech companies want to protect such large investments — particularly when they provide competitive advantages.
It’s therefore unsurprising that LLMs are rarely open-sourced — with some notable exceptions.
Meta has produced the most prominent anomaly. In May, the company offered access to the 175-billion-parameter OPT system.
The full model, however, is only available upon request and for non-commercial use.
BLOOM ramps up the accessibility.
The 176-billion-parameter model is available for free to any individual or institution who agrees to the system’s Responsible AI License.
Anyone can publicly view the meeting notes, discussions, and code behind the model.
The seeds of BLOOM
BLOOM was created by BigScience, a research project that launched in early 2021. The initiative is bootstrapped and led by AI startup Hugging Face.
“Large ML models have changed the world of AI research over the last two years but the huge compute cost necessary to train them resulted in very few teams actually having the ability to train and research them,” said Thomas Wolf, the BigScience co-lead and Hugging Face co-founder.
The team of over 1,000 researchers from more than 60 countries and 250 institutions developed BLOOM to promote inclusion and responsibility in LLMs.
They trained the model on the Jean Zay supercomputer near Paris, France.
“We adopted a data-first approach to make sure the training corpus was aligned with our values,” said Christopher Akiki, a BigScience researcher based at Leipzig University.
“The multidisciplinary and international makeup of BigScience enabled us to critically reflect on every step of the process from multiple vantage points: ethical, legal, environmental, linguistic, and technical.
“That meant we were able to mitigate ethical concerns without compromising on performance or scale.”
The size is certainly imposing. At 176 billion parameters, BLOOM is larger than OpenAI’s GPT-3 and Meta AI’s OPT.
The model can generate text in 46 natural languages and dialects and 13 programming languages. For many of those languages, BLOOM is the first language model with over 100 billion parameters ever created.
It’s also uniquely affordable. BigScience says researchers can use BLOOM for less than $40/hr on a cloud provider.
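That accessibility is practical as well as legal: the released checkpoints can be loaded through the Hugging Face transformers library. The sketch below is an illustration rather than part of BigScience’s announcement; it assumes the smaller bigscience/bloom-560m checkpoint (one of the published BLOOM variants) and an arbitrary prompt, so it can run without the specialized hardware the full 176-billion-parameter model requires.

```python
# Minimal sketch: generating text with a BLOOM checkpoint via Hugging Face transformers.
# The small bigscience/bloom-560m variant is used so this runs on modest hardware;
# swapping in "bigscience/bloom" loads the full 176B model, which needs far more memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode a prompt and generate a short continuation.
inputs = tokenizer("BigScience released BLOOM to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```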
The model isn’t likely to compete with those built by Big Tech — but it at least provides a way to scrutinize them.