📦 Installation¶
Overview¶
This page explains how to install MMORE.
Four installation paths are currently documented:
- standard installation from PyPI with pip
- installation from the source repository with pip install -e .
- installation with uv
- Docker-based setup
Choose the one that best matches your workflow and environment.
API keys and environment variables¶
Some MMORE features require API keys or access tokens.
Hugging Face¶
If you use Hugging Face hosted services or models, set your token in the HF_TOKEN environment variable before running MMORE:
export HF_TOKEN="your_huggingface_token"
Hosted LLM providers¶
If you use a hosted LLM provider, make sure the corresponding API key is available in your environment before running MMORE.
For example, with OpenAI:
export OPENAI_API_KEY="your_openai_api_key"
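As a quick sanity check before launching MMORE, you can verify that the variables you rely on are actually set. A minimal sketch (the `check_env` helper is illustrative and not part of MMORE; the variable names follow the examples above, so adjust the list to the providers you actually use):

```shell
# Illustrative helper (not part of MMORE): warn about any unset variables
check_env() {
  for var in "$@"; do
    if [ -z "$(printenv "$var")" ]; then
      echo "warning: $var is not set"
    fi
  done
}

# Check only the variables your setup actually needs
check_env HF_TOKEN OPENAI_API_KEY
```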
Notes¶
- These environment variables must be set in your shell before launching MMORE commands.
- If you run MMORE in Docker, make sure the variables are also passed to the container.
- If you run MMORE on a cluster or in production, make sure these variables are available in the job environment.
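The Docker note can be made concrete by forwarding each variable into the container with -e flags. A hypothetical invocation, assuming the mmore image built in the Docker section below:

```shell
# Hypothetical example: forward tokens from the host shell into the container
sudo docker run -it \
  -e HF_TOKEN="$HF_TOKEN" \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  mmore
```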
Standard installation¶
Use this method if you want to install MMORE directly from PyPI.
Install MMORE¶
pip install mmore
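To confirm the installation succeeded, you can ask pip for the package metadata (the distribution name mmore matches the install command above; the command fails if the package is not installed):

```shell
# Show the installed version and location of the mmore distribution
pip show mmore
```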
Installation from the source repository¶
Use this method if you want to work from the GitHub repository, modify the code, or contribute to MMORE.
Step 1: Clone the repository¶
git clone https://github.com/swiss-ai/mmore
cd mmore
Step 2: Install the package¶
pip install -e .
Installation with uv¶
Use this method if you want to install MMORE from source while managing the environment and dependencies with uv.
Step 1: Install system dependencies¶
sudo apt update
sudo apt install -y ffmpeg libsm6 libxext6 chromium-browser libnss3 \
libgconf-2-4 libxi6 libxrandr2 libxcomposite1 libxcursor1 libxdamage1 \
libxext6 libxfixes3 libxrender1 libasound2 libatk1.0-0 libgtk-3-0 libreoffice \
libpango-1.0-0 libpangoft2-1.0-0 weasyprint
Step 2: Install uv¶
Refer to the uv installation guide for detailed instructions.
curl -LsSf https://astral.sh/uv/install.sh | sh
Step 3: Clone the repository¶
git clone https://github.com/swiss-ai/mmore
cd mmore
Step 4: Install the project and dependencies¶
uv sync
For a CPU-only installation, use:
uv sync --extra cpu
Step 5: Activate the virtual environment¶
Before running commands, activate the environment:
source .venv/bin/activate
Installation with Docker¶
Use this method if you want to run MMORE in a containerized environment.
Step 1: Install Docker¶
Follow the official Docker installation guide.
Step 2: Build the Docker image¶
sudo docker build -f Dockerfile --build-arg PLATFORM=gpu -t mmore .
For CPU-only platforms, use:
sudo docker build -f Dockerfile --build-arg PLATFORM=cpu -t mmore .
If you are running on a shared cluster environment, you can pass the USER_UID and USER_GID build arguments, setting them to your user and group IDs.
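For example, on a shared cluster the build might look like this (a sketch; it assumes the Dockerfile consumes USER_UID and USER_GID as build arguments, per the note above):

```shell
# Match the container user to your host IDs (build args assumed from above)
sudo docker build -f Dockerfile \
  --build-arg PLATFORM=gpu \
  --build-arg USER_UID="$(id -u)" \
  --build-arg USER_GID="$(id -g)" \
  -t mmore .
```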
Step 3: Start an interactive session¶
For GPU-enabled platforms:
sudo docker run --gpus all -it \
-v ./examples:/app/examples \
-v ./.cache:/home/mmoreuser/.cache \
-e XDG_CACHE_HOME=/home/mmoreuser/.cache \
-e HF_HOME=/home/mmoreuser/.cache/huggingface \
-e TORCH_HOME=/home/mmoreuser/.cache/torch \
mmore
For CPU-only platforms:
sudo docker run -it \
-v ./examples:/app/examples \
-v ./.cache:/home/mmoreuser/.cache \
-e XDG_CACHE_HOME=/home/mmoreuser/.cache \
-e HF_HOME=/home/mmoreuser/.cache/huggingface \
-e TORCH_HOME=/home/mmoreuser/.cache/torch \
mmore
Warning
You may need the NVIDIA Container Toolkit for Docker containers to access your GPU. If GPU execution does not work, refer to the official NVIDIA Container Toolkit installation guide: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
To configure the production repository:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update
sudo apt install -y nvidia-container-toolkit
Then configure Docker to use the NVIDIA runtime:
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
You can then run containers with GPU access by passing --gpus all to docker run, as in the GPU example above.
Notes¶
The examples folder is mapped to /app/examples inside the container. This corresponds to the default path used in examples/process/config.yaml.
For a manual non-Docker setup, use either the standard installation or the uv workflow described above.