Meta, the company formerly known as Facebook, has introduced its own artificial intelligence powered by a large language model, called LLaMA (Large Language Model Meta AI).
The AI technology was unveiled by CEO Mark Zuckerberg in a Facebook post, in which he stated that LLaMA was designed to help researchers advance their work.
Zuckerberg added that large language models (LLMs) have shown a lot of promise in generating text, having conversations, summarizing written material, and more complicated tasks like solving math theorems or predicting protein structures.
While the announcement did not explain which of these tasks LLaMA could currently accomplish, a company blog post published later offered more information.
Meta explained that LLaMA works by taking a sequence of words as input and predicting the next word, recursively generating text. The model was trained on text in 20 languages, drawing on publicly available sources including CCNet, C4, Wikipedia, ArXiv, and Stack Exchange.
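The autoregressive loop Meta describes can be sketched in a few lines. The toy lookup table below is a hypothetical stand-in for the learned predictor (LLaMA itself is a transformer trained on tokens, not a word table); it only illustrates the predict-and-append cycle, assuming a simple word-level interface.

```python
# Toy illustration of autoregressive generation: the model repeatedly
# predicts the next word from the sequence so far and appends it.
# NEXT_WORD is a hypothetical stand-in for a trained language model.
NEXT_WORD = {
    "the": "model",
    "model": "predicts",
    "predicts": "the",
}

def generate(prompt, max_new_words):
    words = list(prompt)
    for _ in range(max_new_words):
        nxt = NEXT_WORD.get(words[-1])  # predict from the sequence so far
        if nxt is None:                 # no prediction available: stop early
            break
        words.append(nxt)               # append, then predict again
    return words

print(" ".join(generate(["the"], 4)))  # → the model predicts the model
```

A real model replaces the table lookup with a probability distribution over a large vocabulary and samples from it, but the outer loop, feeding the growing sequence back in to get each next word, is the same.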
Meta described LLaMA as a “smaller foundation model” that “requires far less computing power and resources” than other large language models. It will be available in multiple sizes, and the company says it is committed to transparency and responsible AI development. Only AI researchers will be given access to the model, which will be released under a non-commercial license focused on research use cases.
Meta has not incorporated LLaMA into any of its products or platforms, including Instagram and Facebook. According to a company spokesperson, there is no news at this time about a public preview or expanded public access.