Installing Podman on Proxmox is rather straightforward. The initial idea for how to do this came from a post on the Serve the Home (STH) blog: Create the Ultimate Virtualization and Container Setup (KVM, LXC, Docker) with Management GUIs. In that article STH uses Docker and provides a Portainer GUI. In this example the Portainer GUI is omitted and Podman is used instead of Docker.

The daemonless part of Podman is the real selling point. That, and Podman works great with systemd.

Installation

Unfortunately, at the time of this writing, Podman is not yet an official Debian package, though one is being worked on. Luckily, the Kubic project provides packages for Debian 10, and Proxmox VE 6.x is based on Debian 10.

The instructions are basically lifted straight from the Podman: Getting Started - Installation page for Debian.

## Add package repository
echo 'deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/Release.key | apt-key add -

## Update
apt update

## Install
apt install podman

Running podman info should show the current configuration.
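
As a quick sanity check (a sketch; the exact output and key casing vary between Podman versions), confirm the binary runs and note the default storage settings before changing them in the next step:

## Sanity check the install
podman --version
podman info | grep -i -e graphdriver -e graphroot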

Storage — ZFS

This step is not strictly necessary and admittedly I am not sure if it is really of any benefit. Regardless, to configure Podman to use ZFS to store container layers, the /etc/containers/storage.conf file needs to be edited. A more detailed discussion of the configuration options for the storage.conf file can be found in the documentation: containers-storage.conf.5.md.

For my configuration, I created a ZFS dataset called rpool/podman/store.
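
Creating the dataset is a one-liner (this assumes the pool is named rpool, as on a default Proxmox ZFS install; adjust for your own pool layout):

## Create the ZFS dataset for container storage (-p also creates the rpool/podman parent)
zfs create -p rpool/podman/store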

[storage]

# Storage Driver
driver = "zfs"

# Temporary storage location
runroot = "/var/run/containers/storage"

# Primary Read/Write location of container storage
graphroot = "/var/lib/containers/storage"

# Storage path for rootless users
#
# rootless_storage_path = "$HOME/.local/share/containers/storage"

[storage.options]
# Storage options to be passed to underlying storage drivers

# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
additionalimagestores = [
]

[storage.options.zfs]
# Storage Options for ZFS

fsname="rpool/podman/store"
mountopt="nodev"
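
With the file saved, it is worth confirming that the dataset exists and that Podman now reports the zfs driver (a quick check; podman info formatting differs slightly between versions). Note that if Podman has already been used with a different driver, the existing contents of /var/lib/containers/storage may need to be cleared before the change takes effect.

## Verify the dataset and the active storage driver
zfs list rpool/podman/store
podman info | grep -i graphdriver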

Networking — macvlan

In order for the containers to be able to join the network directly, a macvlan bridge to the built-in vmbr0 is added. WARNING: This has not been validated as secure. Evaluate this configuration for your environment.

Effectively, the example subnet runs from 192.168.137.0 to 192.168.137.255. However, DHCP is only provided on a subsection of the LAN, specifically 192.168.137.100 - 192.168.137.255, which means that 192.168.137.2 - 192.168.137.99 are perfect for static assignments. Proxmox is already configured to set aside 192.168.137.32/27 (i.e., 192.168.137.32 - 192.168.137.63) for LXC and KVM devices. Therefore, for this example, Podman devices will be limited to 192.168.137.64/27 (i.e., 192.168.137.64 - 192.168.137.95).
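
To double-check that the /27 carve-out covers exactly the intended addresses, the ipcalc utility is handy (not installed by default; available via apt install ipcalc):

## Confirm the boundaries of the Podman range
ipcalc 192.168.137.64/27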

To do this, a new macvlan network called unifinet will be created. This is done by creating a file /etc/cni/net.d/90-unifinet.conflist and putting the following JSON in it.

{
  "cniVersion": "0.4.0",
  "name": "unifinet",
  "plugins": [
    {
      "type": "macvlan",
      "mode": "bridge",
      "master": "vmbr0",
      "ipam": {
        "type": "host-local",
        "ranges": [
          [
            {
              "subnet": "192.168.137.0/24",
              "rangeStart": "192.168.137.64",
              "rangeEnd": "192.168.137.95",
              "gateway": "192.168.137.1"
            }
          ]
        ],
        "routes": [
          {
            "dst": "0.0.0.0/0"
          }
        ],
        "dns": {
          "nameservers": [
            "192.168.137.1",
            "1.1.1.1",
            "1.0.0.1"
          ]
        }
      }
    }
  ]
}

After saving that file, running podman network ls shows the new unifinet network.
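
To see what Podman parsed out of that conflist (for CNI-backed networks, podman network inspect largely echoes the JSON back):

## List networks and inspect the new one
podman network ls
podman network inspect unifinet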

Creating a container and starting it with systemd

Finally, the system is configured and networking is set up. Now create a simple container, wrap it in a systemd service, and configure it to start automatically.

The first step is to get a container running. Doing this is largely out of scope for this discussion. Crafting the appropriate arguments to pass to podman run is a combination of the appropriate flags for podman itself and the necessary configuration for the containerized application. Here a simple httpd container is used.

podman run \
  --name httpd \
  --network unifinet \
  --ip 192.168.137.65 \
  --cpus 1 \
  --memory 512m \
  registry.fedoraproject.org/f29/httpd

There are 2 things of note above:

  1. The example uses the unifinet network created previously and optionally gives the container a static IP in the range (a quick reachability check follows this list)
  2. The container is going to be called httpd
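
With the container up, a quick way to confirm it answers on its static IP is to hit it from another machine on the LAN (macvlan normally prevents the host itself from reaching its own macvlan containers). The port below assumes the Fedora httpd image's default of 8080; adjust it if your image differs.

## From another host on the 192.168.137.0/24 network
curl -I http://192.168.137.65:8080/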

The final step is to ask podman to generate a systemd unit. Again, all of the available commands and options are out of scope for this discussion. See the manual for more information.

Generate a systemd unit from the container above:

podman generate systemd --new --files --name httpd

Install the created unit:

cp container-httpd.service /etc/systemd/system

Start and enable the unit at boot:

systemctl enable --now container-httpd

Tada. Now that container will start automatically at boot.
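
To confirm everything is wired together, check the unit and the running container:

## The unit name comes from the generated container-httpd.service above
systemctl status container-httpd
podman ps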

Backup