How to Install Proxmox VE 9.1 Hypervisor on Physical Server Hardware


Install and configure Proxmox VE 9.1 virtualization platform on bare metal server hardware with ZFS storage, network configuration, and web management access.

Emanuel DE ALMEIDA
March 17, 2026 · 15 min read

Why Choose Proxmox VE for Server Virtualization?

Proxmox VE stands out as a comprehensive open-source virtualization platform that combines KVM hypervisor technology with LXC containers, offering enterprise-grade features without licensing costs. Unlike proprietary solutions like VMware vSphere, Proxmox VE provides full access to the underlying Debian-based system while delivering professional virtualization capabilities including live migration, high availability clustering, and integrated backup solutions.

What Makes Proxmox VE 9.1 Different from Previous Versions?

The latest Proxmox VE 9.1 release brings significant improvements over earlier versions, including enhanced ZFS performance optimizations, an updated Linux kernel (6.14 or newer) for better compatibility with modern AMD and Intel processors, and refined unattended installation capabilities. Built on Debian 13 "Trixie", this version offers improved security features and better integration with modern storage technologies, making it a fit for both small business environments and enterprise data centers.

How Does ZFS RAID1 Benefit Your Virtualization Environment?

ZFS RAID1 configuration provides data redundancy, automatic error detection and correction, and snapshot capabilities that are crucial for virtualization workloads. Unlike traditional RAID implementations, ZFS offers copy-on-write functionality, built-in compression, and the ability to create instant snapshots of virtual machines for backup and testing purposes. This makes it particularly valuable for production environments where data integrity and quick recovery are essential.

Related: Install Hyper-V on Windows Server Using 3 Different Methods

Implementation Guide

Full Procedure

01

Download Proxmox VE ISO and Create Bootable Media

Download the latest Proxmox VE 9.1 ISO from the official website. Navigate to the Proxmox downloads page and select the current stable release.
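Before writing the ISO to USB, it is worth verifying the download against the SHA256 sum published next to the download link. A minimal sketch of that check follows; the filename proxmox.iso is a stand-in created here purely for demonstration:

```shell
# Sketch: verify an ISO against its published SHA256 sum.
# "proxmox.iso" is a stand-in created here for demonstration; use the
# real ISO filename and the checksum shown on the download page.
ISO=proxmox.iso
printf 'stand-in for the real ISO\n' > "$ISO"   # demo file only
sha256sum "$ISO" > "$ISO.sha256"                # in practice: paste the published sum
sha256sum -c "$ISO.sha256"                      # prints "proxmox.iso: OK" on a match
```

A mismatch reports "FAILED" instead, in which case re-download the ISO before writing it to USB.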

Create bootable USB media using one of these methods:

On Linux:

# Identify your USB device
lsblk

# Create bootable USB (replace /dev/sdX with your USB device)
sudo dd if=proxmox-ve_9.1-*.iso of=/dev/sdX bs=4M status=progress && sync

On Windows: Use Rufus or balenaEtcher to write the ISO to your USB drive.

Warning: The dd command will completely wipe your USB drive. Double-check the device path with lsblk before running the command.

Verification: After creating the bootable media, safely eject and reinsert the USB drive. You should see it mounted as "PROXMOX" or similar.

02

Configure Server BIOS/UEFI Settings

Boot your server and enter BIOS/UEFI setup (typically F2, F12, or Delete during startup). Configure these critical settings:

Enable Hardware Virtualization:

  • Intel: Enable VT-x and VT-d
  • AMD: Enable AMD-V and IOMMU

Boot Configuration:

  • Set USB as first boot device
  • Disable Secure Boot if using UEFI
  • Enable UEFI boot mode (recommended over Legacy BIOS)

Memory Settings:

  • Enable XMP/DOCP for optimal RAM performance
  • Disable memory power saving features

Pro tip: Take photos of your current BIOS settings before making changes. This helps you revert if needed.

Verification: Save settings and confirm the server boots from USB when inserted.
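You can confirm the virtualization flags from any Linux environment (a live USB now, or the Proxmox shell after installation). This is only a sketch of that check:

```shell
# vmx = Intel VT-x, svm = AMD-V. If the flag is missing the CPU lacks
# the feature; if it is present but VMs still fail to start later,
# re-check that virtualization is enabled in BIOS/UEFI.
if grep -Eq 'vmx|svm' /proc/cpuinfo; then
    echo "hardware virtualization: available"
else
    echo "hardware virtualization: NOT available - check BIOS settings"
fi
```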

03

Boot from Installation Media and Start Installer

Insert your bootable USB drive and power on the server. You'll see the Proxmox VE boot menu with several options:

  • Install Proxmox VE (Graphical) - Standard GUI installer
  • Install Proxmox VE (Terminal UI) - Text-based for compatibility
  • Install Proxmox VE (Terminal UI, Serial Console) - For headless installations

Select the graphical installer for most scenarios. The system will load the installer environment, which takes 1-2 minutes.

Once loaded, you'll see the Proxmox VE installer welcome screen. Click "I agree" to accept the End User License Agreement.

Pro tip: If you encounter graphics issues, use the Terminal UI option instead. It provides the same functionality with better hardware compatibility.

Verification: You should see the disk selection screen after accepting the EULA.

04

Configure Storage with ZFS RAID1

The target disk selection screen shows available storage devices. For production environments, configure ZFS RAID1 for redundancy:

Select Storage Configuration:

  1. Click "Options" to access advanced storage settings
  2. Change the filesystem from "ext4" to "zfs (RAID1)"
  3. Select two identical SSDs/NVMe drives
  4. Set ashift=12 for modern drives with 4K sectors
  5. Leave compression at its default (lz4) and adjust hdsize if you want to reserve unallocated space

ZFS Configuration Example:

# The installer will create a ZFS layout along these lines:
# Pool: rpool (RAID1 mirror)
# Datasets (exact names can vary by release):
#   rpool/ROOT/pve-1 (root filesystem)
#   rpool/data (VM storage)
#   rpool/var-lib-vz (ISOs, templates, backups)

Warning: Mirrored drives should ideally be identical models and sizes. If they differ, ZFS limits the mirror to the smaller drive's capacity.

Alternative for Single Drive: If using one drive, select ext4 with LVM. Set hdsize to reserve space for future expansion:

# Reserve 100GB for system, leave rest unallocated
hdsize=100

Verification: Review the storage summary showing your selected configuration before proceeding.
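The ashift value is simply log2 of the drive's physical sector size, which the recommendation above assumes is 4096 bytes; you can query real drives with lsblk -o NAME,PHY-SEC. A small sketch of the mapping:

```shell
# Compute ashift = log2(physical sector size): 512 -> 9, 4096 -> 12.
sector=4096        # physical sector size in bytes (assumed here)
ashift=0; s=$sector
while [ "$s" -gt 1 ]; do
    s=$((s / 2))
    ashift=$((ashift + 1))
done
echo "sector=$sector -> ashift=$ashift"   # prints: sector=4096 -> ashift=12
```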

05

Set Location and Administrator Credentials

Configure regional settings and create the root administrator account:

Location Settings:

  • Country: Select your location for timezone
  • Timezone: Verify automatic selection or choose manually
  • Keyboard Layout: Select appropriate layout (US, UK, etc.)

Administrator Account:

  • Password: Create strong root password (minimum 8 characters)
  • Confirm Password: Re-enter to verify
  • Email: Enter valid email for system notifications

Pro tip: Use a password manager to generate and store a strong root password. You'll need this for both SSH and web interface access.

Security Best Practices:

# Example strong password criteria:
# - Minimum 12 characters
# - Mix of uppercase, lowercase, numbers, symbols
# - Avoid dictionary words
# Example: Px7$mVe2026!Srv
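Rather than inventing a password by hand, you can generate one on any machine with the openssl CLI available (assumed here; most Linux systems, including the Proxmox host, ship it):

```shell
# 16 random bytes, base64-encoded -> a 24-character password
openssl rand -base64 16
```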

Verification: Ensure the email address is correct as it's used for important system notifications and certificate management.

06

Configure Network Settings

Configure the management network interface for Proxmox VE access:

Network Configuration:

  • Hostname: Enter FQDN (e.g., pve01.yourdomain.local)
  • IP Address: Set static IP (e.g., 192.168.1.100/24)
  • Gateway: Your network gateway (e.g., 192.168.1.1)
  • DNS Server: Primary DNS (e.g., 192.168.1.1 or 8.8.8.8)

Example Configuration:

# Management Interface Configuration
Hostname: pve01.lab.local
IP Address: 192.168.1.100
Netmask: 255.255.255.0 (/24)
Gateway: 192.168.1.1
DNS: 192.168.1.1

Warning: Use a static IP address for the management interface. DHCP can cause connectivity issues if the IP changes after installation.
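The /24 prefix and the dotted netmask 255.255.255.0 are the same thing written two ways. As a sanity check, the mask can be derived from the prefix length; a bash sketch, with prefix 24 assumed:

```shell
#!/bin/bash
# Convert a CIDR prefix length to a dotted-quad netmask.
prefix=24
p=$prefix; mask=""
for octet in 1 2 3 4; do
    if [ "$p" -ge 8 ]; then
        o=255; p=$((p - 8))          # full octet of ones
    else
        o=$(( 256 - (1 << (8 - p)) )); p=0   # partial (or empty) octet
    fi
    mask="$mask$o."
done
echo "/$prefix -> ${mask%.}"   # prints: /24 -> 255.255.255.0
```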

Network Planning:

  • Reserve IP range for VM/container networks
  • Document VLAN assignments if using VLANs
  • Plan for additional interfaces (storage, backup networks)

Verification: Confirm the network settings match your environment before proceeding. Changing them after installation means editing /etc/network/interfaces and /etc/hosts by hand.

07

Complete Installation and Initial Boot

Review the installation summary showing all configured settings:

  • Target disks and filesystem type
  • Network configuration
  • Timezone and locale settings

Click "Install" to begin the installation process. The installer will:

  1. Partition and format selected disks
  2. Install Proxmox VE packages (5-15 minutes)
  3. Configure the bootloader (GRUB, or systemd-boot on UEFI systems with ZFS)
  4. Apply network and system settings

Installation Progress:

# Installation stages:
# 1. Disk preparation and partitioning
# 2. Base system installation
# 3. Proxmox VE package installation
# 4. Bootloader configuration
# 5. System configuration

When installation completes, remove the USB drive and click "Reboot". The system will restart and boot from the installed system.

Pro tip: During first boot, the system may take 2-3 minutes to initialize ZFS pools and generate SSH keys. Be patient.

Verification: After reboot, you should see the Proxmox VE console showing the management IP address and web interface URL.

08

Access Web Interface and Perform Initial Configuration

Open a web browser and navigate to the Proxmox VE web interface:

# Access the web interface
https://192.168.1.100:8006

You'll see a security warning about the self-signed certificate. Click "Advanced" and "Proceed" to continue.

Initial Login:

  • Username: root
  • Password: [password set during installation]
  • Realm: Linux PAM standard authentication

After login, you'll see the Proxmox VE dashboard. Perform these initial configurations:

Update Package Repositories:

Open the node shell (Node > Shell) and run:

# Comment out the enterprise repository (requires a subscription).
# Note: a fresh PVE 9 install may ship this as a deb822-style
# pve-enterprise.sources file instead; disable that file if so.
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add the no-subscription repository (PVE 9.x is based on Debian 13 "trixie")
echo "deb http://download.proxmox.com/debian/pve trixie pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list

# Update package lists and upgrade the system
apt update && apt full-upgrade -y

Warning: The no-subscription repository is for testing and non-production use. For production environments, consider purchasing a Proxmox VE subscription.
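If you want to see what the sed edit does before touching the real repository file, you can rehearse it on a scratch copy (the file and repository line below are stand-ins):

```shell
# Rehearse the repository edit on a throwaway file.
f=$(mktemp)
echo 'deb https://enterprise.proxmox.com/debian/pve trixie pve-enterprise' > "$f"
sed -i 's/^deb/#deb/' "$f"
cat "$f"    # the line now starts with "#deb", i.e. it is commented out
rm -f "$f"
```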

Verification: Run pveversion to confirm Proxmox VE 9.1 is installed and check for any available updates.

09

Configure Network Bridge for Virtual Machines

Virtual machines reach your network through a Linux bridge. The installer normally creates a default bridge (vmbr0) on the management interface; verify it under Node > Network, and create it manually only if it is missing.

Create Linux Bridge:

  1. Click "Create" > "Linux Bridge"
  2. Name: vmbr0 (default VM bridge)
  3. Bridge ports: Select your physical network interface (e.g., enp1s0)
  4. CIDR: Set the management IP here if this bridge carries it; leave empty for a VM-only bridge with no host IP
  5. Gateway: Leave empty
  6. Comment: "VM Network Bridge"

Alternative CLI Configuration:

# Edit network configuration
nano /etc/network/interfaces

# Add the bridge configuration (the physical port itself carries no IP):
auto enp1s0
iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.100/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0

# Apply network changes (ifupdown2)
ifreload -a

VLAN Configuration (Optional):

For VLAN-aware bridges:

# Create VLAN-aware bridge
auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

Pro tip: Always test network connectivity after creating bridges. Create a test VM to verify network access before deploying production workloads.

Verification: Check bridge status with ip -br link show type bridge (the older brctl show requires the bridge-utils package, which is not installed by default) and verify VMs can obtain network connectivity.

10

Configure Storage and Validate Installation

Configure additional storage pools and validate your Proxmox VE installation:

Storage Configuration:

Navigate to Datacenter > Storage to configure additional storage:

  • local: System storage (already configured)
  • local-lvm: VM disk storage (if using LVM)
  • Add NFS/iSCSI storage if available

ZFS Storage Verification:

# Check ZFS pool status
zpool status

# View ZFS datasets
zfs list

# Check available space
df -h

System Health Checks:

# Check system status
systemctl status pve-cluster
systemctl status pvedaemon
systemctl status pveproxy

# Verify virtualization support (a non-zero count means VT-x/AMD-V is exposed)
grep -Ec '(vmx|svm)' /proc/cpuinfo

# Check memory and CPU
free -h
lscpu
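The individual systemctl status calls above can be condensed into a one-shot summary. A sketch, covering the standard PVE daemons (pvestatd added; run off a PVE host, every state simply reads inactive or unknown):

```shell
# Print one line per core Proxmox VE service with its current state.
for s in pve-cluster pvedaemon pveproxy pvestatd; do
    state=$(systemctl is-active "$s" 2>/dev/null)
    printf '%-12s %s\n' "$s" "${state:-unknown}"
done
```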

Create Test VM:

  1. Click "Create VM" in the web interface
  2. Configure basic settings (VM ID, name, OS type)
  3. Allocate resources (CPU, memory, disk)
  4. Select network bridge (vmbr0)
  5. Start VM to test functionality

Pro tip: Enable hardware monitoring by installing sensors: apt install lm-sensors && sensors-detect. This gives you temperature and fan-speed readings via the sensors command on the host.

Final Verification: Access the Summary tab to view system resources, uptime, and version information. Your Proxmox VE 9.1 installation is now complete and ready for production use.

Frequently Asked Questions

What are the minimum hardware requirements for Proxmox VE 9.1 installation?
Proxmox VE 9.1 requires a 64-bit CPU with virtualization support (VT-x/AMD-V), minimum 2 GB RAM (16 GB+ recommended for ZFS), and 32 GB storage minimum. For production environments, use 32 GB+ RAM, SSD/NVMe storage, and multiple network interfaces for redundancy. The CPU must support hardware virtualization extensions which can be enabled in BIOS/UEFI settings.
Should I use ZFS or ext4 filesystem for Proxmox VE installation?
ZFS is recommended for production environments due to its advanced features like snapshots, compression, and data integrity checking. However, ZFS requires significant RAM (1 GB per TB + 4-8 GB base). For systems with limited RAM (less than 16 GB), ext4 with LVM is more suitable. ZFS RAID1 provides redundancy but requires two identical drives, while ext4 can work with single drives.
How do I fix repository errors after Proxmox VE installation?
Repository errors occur because the enterprise repository requires a subscription. Disable the enterprise repo by commenting out /etc/apt/sources.list.d/pve-enterprise.list (on newer PVE 9 installs this may be a deb822-style pve-enterprise.sources file), then add the no-subscription repository with: echo "deb http://download.proxmox.com/debian/pve trixie pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list. Then run apt update to refresh package lists.
Why can't my virtual machines access the network after Proxmox installation?
VMs require a network bridge to access external networks. Create a Linux bridge (vmbr0) in the web interface under Node > Network, connecting it to your physical network interface. The bridge acts as a virtual switch allowing VMs to communicate with the external network. Ensure the bridge is properly configured with the correct physical interface as bridge port.
What should I do if Proxmox VE won't boot after installation?
Boot issues often stem from BIOS/UEFI configuration problems. Ensure virtualization is enabled (VT-x/AMD-V), Secure Boot is disabled if using UEFI, and the boot order prioritizes the installation drive. For ZFS installations, verify sufficient RAM is available. If using legacy BIOS mode, ensure the bootloader was installed correctly on the target drive. Check hardware compatibility and consider using the Terminal UI installer for problematic systems.
Written by

Emanuel DE ALMEIDA

Microsoft MCSA-certified Cloud Architect | Fortinet-focused. I modernize cloud, hybrid & on-prem infrastructure for reliability, security, performance and cost control - sharing field-tested ops & troubleshooting.
