Ollama
Ollama allows you to run open-source large language models, such as LLaMA2, locally.
Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage. For a complete list of supported models and model variants, see the Ollama model library.
See this guide for more details on how to use Ollama with LangChain.
Installation and Setup
Follow these instructions to set up and run a local Ollama instance.
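Once Ollama is installed, you pull a model before using it from LangChain. A minimal sketch, assuming a macOS/Linux install and the `llama2` model (any model from the Ollama library works the same way):

```shell
# Pull the llama2 model weights (hypothetical model choice; substitute any
# model from the Ollama model library).
ollama pull llama2

# The Ollama server listens on http://localhost:11434 by default;
# LangChain's Ollama integrations connect to that address.
```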
Ollama runs entirely locally, so no API keys or environment variables are required; the integrations connect to the local Ollama server by default.
LLM
from langchain.llms import Ollama
See the notebook example here.
Chat Models
Chat Ollama
from langchain.chat_models import ChatOllama
See the notebook example here.
Ollama functions
from langchain_experimental.llms.ollama_functions import OllamaFunctions
See the notebook example here.
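A sketch of function calling with the experimental wrapper, assuming a local server with `llama2` pulled; the `get_current_weather` schema below is a hypothetical example function, not part of the library:

```python
from langchain_experimental.llms.ollama_functions import OllamaFunctions

model = OllamaFunctions(model="llama2")

# Bind a JSON-schema description of the (hypothetical) function and force
# the model to call it.
model = model.bind(
    functions=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city, e.g. San Francisco",
                    },
                },
                "required": ["location"],
            },
        }
    ],
    function_call={"name": "get_current_weather"},
)

# The returned message carries the function call in additional_kwargs
# rather than plain text content.
result = model.invoke("What is the weather in Boston?")
print(result.additional_kwargs)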
Embedding models
from langchain.embeddings import OllamaEmbeddings
See the notebook example here.