
Getting Started

Looking to host Lamini on-prem? Check out the installer instructions 🔗.


Lamini can be installed using pip, the package manager for Python. To install Lamini, open a command prompt and type:

pip install lamini

This will download and install the latest version of Lamini and its dependencies.

Check that your installation succeeded by importing the LlamaV2Runner class in your Python interpreter. Fun fact: Lamini is the tribe to which llamas belong, which is why you import the module lamini to work with the LLM engine.

>>> from lamini import LlamaV2Runner
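If you would rather check the install from a script than a REPL, a minimal sketch is to probe for the package before importing it. The helper name `lamini_installed` is hypothetical, not part of the Lamini API:

```python
import importlib.util

def lamini_installed() -> bool:
    # find_spec returns None when the package is not importable,
    # so this reports availability without raising ImportError.
    return importlib.util.find_spec("lamini") is not None

print(lamini_installed())  # True once `pip install lamini` has run
```

This only confirms the package is on your path; the basic test below exercises the engine itself.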

Setup your keys

Log in to get your API key and purchase credits (under the Account tab).

Create ~/.powerml/configure_llama.yaml and put your key in it:

    key: "<YOUR-KEY-HERE>"
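If you prefer to create that file programmatically, a sketch follows. It writes to a temporary directory as a stand-in for your real home directory (swap in `Path.home()` to target `~/.powerml` for real); the file contents match the snippet above:

```python
import tempfile
from pathlib import Path

# Stand-in for Path.home(); use Path.home() to write the real config.
home = Path(tempfile.mkdtemp())

cfg_dir = home / ".powerml"
cfg_dir.mkdir(parents=True, exist_ok=True)

cfg = cfg_dir / "configure_llama.yaml"
cfg.write_text('key: "<YOUR-KEY-HERE>"\n')

print(cfg.read_text())
```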

Another option is to pass your production key to the config parameter of the LlamaV2Runner class:

model = LlamaV2Runner(
    config={"production.key": "<YOUR-KEY-HERE>", "production.url": "<YOUR-SERVER-URL-HERE>"}
)

See the Authentication page 🔗 for more advanced options.
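To avoid hard-coding the key in source, one common pattern is to read it from an environment variable and fall back to a placeholder. `LAMINI_API_KEY` is an assumed variable name for this sketch, not an official one:

```python
import os

def build_config() -> dict:
    # LAMINI_API_KEY is an assumed env var name, not part of the Lamini docs;
    # the placeholder fallback mirrors the config shown above.
    key = os.environ.get("LAMINI_API_KEY", "<YOUR-KEY-HERE>")
    return {"production.key": key}

print(build_config())
```

You would then pass `build_config()` as the `config` argument when constructing the runner.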

Basic test

Run the LLM engine with a basic test to see if installation and authentication were set up correctly.

from lamini import LlamaV2Runner

model = LlamaV2Runner()

answer = model("Tell me a story about llamas.")
print(answer)
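Network calls to a hosted engine can fail transiently, so it can help to wrap the call in a small retry loop. This is a generic sketch, not part of the Lamini API: `model` is any callable like the runner above, and a stub stands in for it here so the example is self-contained:

```python
import time

def call_with_retry(model, prompt, attempts=3, delay=1.0):
    # Retry the model call on any exception, re-raising after the last attempt.
    for i in range(attempts):
        try:
            return model(prompt)
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)

# Stub standing in for a LlamaV2Runner instance.
stub = lambda prompt: f"A story about {prompt}"
print(call_with_retry(stub, "llamas"))  # A story about llamas
```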

Now you're on your way to building your own LLM for your specific use case!

To play with different model types in an interface, you can log in and use the playground there.

Web App

In addition to a REST API and Python package, we also have a web application to help streamline model training and evaluation. Use it to view your training jobs, see model evaluations, play with finetuned models in a playground, generate API keys, and monitor usage.