Welcome to Lamini!

Lamini is the leading enterprise LLM platform for finetuning. We make it easy for every enterprise and developer to build customized, private models that are faster and higher-performing than any general-purpose LLM.

Get started with your free API token, and take a quick tour of Lamini's features.

If you want to build larger models, deploy a model in production, host it on your own infrastructure (on premise or in your VPC), or use other enterprise features, please contact us at info@lamini.ai!

Why Lamini?

Leader in Generative AI

Lamini is built by a team that has been finetuning LLMs over the past two decades. We invented core LLM research such as LLM scaling laws, shipped LLMs in production to over 1 billion users, taught the online course Finetuning LLMs to nearly a quarter million students, and mentored the tech leads who went on to build the major foundation models: OpenAI’s GPT-3 and GPT-4, Anthropic’s Claude, Meta’s Llama 2, Google’s PaLM, and NVIDIA’s Megatron.

Optimized for enterprise LLMs

Lamini is optimized for finetuning enterprise LLMs, which involve large volumes of specialized data, tasks, and software interfaces. Lamini includes advanced optimizations for enterprise LLMs, built on and extending PEFT (LoRA), RLHF, and toolformer, to provide data isolation across 4,266x models on the same server, speed up model switching by 1.09 billion times, compress models by 32x, and easily integrate LLMs with enterprise APIs without hyperparameter search.

LLM Superstation

The LLM Superstation combines Lamini's easy-to-use enterprise LLM infrastructure with AMD Instinct™ MI210 and MI250 accelerators. It is optimized for private enterprise LLMs that are heavily differentiated with proprietary data. Lamini is the only LLM platform that runs exclusively on AMD Instinct GPUs, in production! Learn more about our collaboration with AMD.

Getting Started

pip install --upgrade lamini

Check out the Inference Quick Tour and Training Quick Tour to quickly learn about Lamini's features.
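
For example, running inference against a hosted model takes only a few lines with the Python library. The sketch below assumes the lamini package's Lamini class and its generate method, with a placeholder API key and an example model name; see the Inference Quick Tour for the exact, up-to-date interface.

import lamini

# Paste the free API key generated at https://app.lamini.ai/
lamini.api_key = "<YOUR-API-KEY>"

# Example model name; any hosted base model works here.
llm = lamini.Lamini(model_name="meta-llama/Llama-2-7b-chat-hf")

# Send a single prompt and print the completion.
print(llm.generate("How do I finetune an LLM on my own data?"))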

Once you're up and running, check out the detailed documentation for our REST API and Python Library.
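
Finetuning follows a similar pattern. The snippet below is only a hedged sketch: the train call and the "input"/"output" data schema shown here are assumptions, so treat the Training Quick Tour and the Python Library reference as the source of truth for the exact method name and dataset format.

import lamini

lamini.api_key = "<YOUR-API-KEY>"

# A tiny illustrative dataset; the "input"/"output" keys are an assumed schema.
data = [
    {"input": "What does Lamini optimize for?", "output": "Finetuning enterprise LLMs."},
    {"input": "Where do I get an API key?", "output": "From https://app.lamini.ai/."},
]

# Assumed training entry point; check the Training Quick Tour for the real signature.
llm = lamini.Lamini(model_name="meta-llama/Llama-2-7b-chat-hf")
llm.train(data)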

Web App

In addition to the REST API and Python Library, we also have a web application to help streamline model training and evaluation. Go to https://app.lamini.ai/ to manage your training jobs, view model evaluation results, test your finetuned models in a playground, generate API keys, and monitor usage.

Resources