As interest in large AI models – especially large language models (LLMs) like OpenAI’s GPT-3 – grows, Nvidia is launching new fully managed, cloud-based services for enterprise software developers. Today at the company’s Fall 2022 GTC conference, Nvidia announced the NeMo LLM Service and the BioNeMo LLM Service, which ostensibly make it easier to adapt LLMs and deploy AI-powered applications for a range of use cases, including text generation and summarization, protein structure prediction and more.
The new offerings are part of Nvidia’s NeMo, an open-source toolkit for conversational AI, and they’re designed to minimize, if not eliminate, the need for developers to build LLMs from scratch. LLMs are expensive to develop and train; one recent model, Google’s PaLM, is estimated to have cost between $9 million and $23 million to train on publicly available cloud computing resources.
Using the NeMo LLM Service, developers can create customized models ranging from 3 billion to 530 billion parameters with their own data in minutes to hours, Nvidia claims. (Parameters are the parts of the model learned from historical training data — in other words, the variables that inform the model’s predictions, like the text it generates.) Models can be customized at any time using a technique called prompt learning, which Nvidia says allows developers to tailor models trained on billions of data points to particular, industry-specific applications — for example, a customer service chatbot — using just a few hundred examples.
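The core idea behind prompt learning is that the base model’s weights stay frozen; only a small set of trainable “virtual token” embeddings, prepended to each input, is updated on the customer’s examples. The NumPy sketch below illustrates that mechanism in the abstract — every dimension and variable name here is illustrative, not part of any Nvidia API:

```python
import numpy as np

# Illustrative dimensions only -- not tied to any real NeMo model.
embed_dim = 16      # embedding size of the frozen base model
num_virtual = 8     # trainable "virtual tokens" prepended to every input
seq_len = 32        # tokens in the user's actual prompt

rng = np.random.default_rng(0)

# Frozen: the base model's embeddings for the real input tokens.
input_embeddings = rng.normal(size=(seq_len, embed_dim))

# Trainable: a small matrix of virtual-token embeddings. During prompt
# learning, gradient updates touch ONLY this matrix; the billions of
# base-model parameters are left untouched.
virtual_tokens = rng.normal(size=(num_virtual, embed_dim))

# The model consumes the virtual tokens prepended to the real input.
model_input = np.concatenate([virtual_tokens, input_embeddings], axis=0)

print(model_input.shape)  # (40, 16): 8 virtual tokens + 32 real tokens
```

Because only the small virtual-token matrix is trained, a few hundred labeled examples can be enough to steer a model with billions of parameters toward a specific task.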
Developers can customize models for multiple use cases in a no-code “playground” environment that also offers experimentation capabilities. Once ready to deploy, optimized models can run on cloud instances, on-premises systems, or via an API.
The BioNeMo LLM Service is similar to the NeMo LLM Service, but tailored for life sciences customers. Part of Nvidia’s Clara Discovery platform and coming soon to early access on Nvidia GPU Cloud, it includes two language models for chemistry and biology applications as well as support for protein, DNA and chemistry data, Nvidia explains.
BioNeMo LLM will launch with four pre-trained language models, including a model from Meta’s AI R&D division, Meta AI Labs, that processes amino acid sequences to generate representations that can be used to predict protein properties and functions. Nvidia says that in the future, researchers using the BioNeMo LLM Service will be able to customize the LLMs for greater accuracy.
Recent research has shown that LLMs are remarkably effective at predicting certain biological processes. This is because structures like proteins can be modeled as a kind of language – one with a dictionary (amino acids) strung together to form a sentence (protein). For example, the R&D division of Salesforce several years ago created an LLM called ProGen that can generate structurally and functionally viable protein sequences.
Both the NeMo LLM Service and the BioNeMo LLM Service include the option to use ready-made and custom models via a cloud API. Using the services also gives customers access to the NeMo Megatron framework, now in open beta, which allows developers to build a range of multilingual LLMs, including GPT-3-style language models.
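Nvidia had not published the cloud API itself at announcement time, so concrete details are unknown, but a managed LLM endpoint of this kind is typically driven by a JSON request naming a model and a prompt. The sketch below shows what assembling such a request could look like — the endpoint URL, field names and model name are all hypothetical:

```python
import json

# Hypothetical endpoint -- the real NeMo LLM Service URL was not public
# at announcement time.
ENDPOINT = "https://api.example.com/v1/completions"

def build_request(model: str, prompt: str, max_tokens: int = 64) -> str:
    """Serialize a completion request for a managed LLM cloud API.

    All field names here are illustrative, modeled on common LLM APIs,
    not on any documented Nvidia schema.
    """
    payload = {
        "model": model,        # e.g. a customized, prompt-learned model
        "prompt": prompt,
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

# Example: request a completion from a hypothetical customized model.
body = build_request("customer-service-chatbot", "Summarize this ticket: ...")
print(body)
```

The serialized body would then be POSTed to the service with an authentication token, with the generated text returned in the JSON response.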
Nvidia says automotive, IT, education, healthcare and telecommunications companies are already using NeMo Megatron to launch AI-powered services in Chinese, English, Korean and Swedish.
The NeMo LLM and BioNeMo services and cloud APIs are expected to be available in early access starting next month. As for the NeMo Megatron framework, developers can try it for free via Nvidia’s LaunchPad platform.