Getting Started
Looking to host Lamini on-prem? Check out the installer instructions 🔗.
Installation
Lamini can be installed using pip, the package manager for Python. To install Lamini, open a command prompt and run:
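For example (assuming the package is published on PyPI as lamini, matching the module imported below):

```shell
pip install lamini
```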
This will download and install the latest version of Lamini and its dependencies.
Check that your installation succeeded by importing the LlamaV2Runner in your Python interpreter. Fun fact: Lamini is the tribe to which llamas belong, so you can import the lamini module to work with the LLM Engine.
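As a quick sanity check, one way to confirm the package is importable (a minimal sketch; it only verifies that pip placed lamini on your path):

```python
import importlib.util

# Look up the lamini package without importing it fully;
# find_spec returns None when the package is not installed.
spec = importlib.util.find_spec("lamini")

if spec is None:
    print("lamini is not installed - try `pip install lamini`")
else:
    # From here, `from lamini import LlamaV2Runner` should work.
    print("lamini found at:", spec.origin)
```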
Setup your keys
Go to https://lamini.ai. Log in to get your API key and purchase credits (under the Account tab).
Create ~/.powerml/configure_llama.yaml and put your key in it.
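For example, the file might look like this (the exact schema is an assumption on our part; see the Authentication page for the format your version expects):

```yaml
# ~/.powerml/configure_llama.yaml
production:
    key: "<YOUR-KEY-HERE>"
```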
Another option is to pass your production key to the config parameter of the LlamaV2Runner class:

```python
from lamini import LlamaV2Runner

model = LlamaV2Runner(
    config={"production.key": "<YOUR-KEY-HERE>", "production.url": "<YOUR-SERVER-URL-HERE>"}
)
```
See the Authentication page 🔗 for more advanced options.
Basic test
Run the LLM engine with a basic test to see if installation and authentication were set up correctly.
```python
from lamini import LlamaV2Runner

model = LlamaV2Runner()
answer = model("Tell me a story about llamas.")
print(answer)
```
Now you're on your way to building your own LLM for your specific use case!
To play with different prompts and models in an interface, you can log in at https://lamini.ai and use the playground there.
Web App
In addition to the REST API and Python package, we also have a web application to help streamline model training and evaluation. Go to https://app.lamini.ai/ to view your training jobs, see model evaluation results, play around with fine-tuned models in a playground, generate API keys, and monitor usage.