Proxmox VE 8.4 ISO Installer Download
Proxmox Virtual Environment is an open-source server virtualization management platform. Built on the KVM hypervisor and LXC containers, it can run guest operating systems such as Linux and Windows on x86-64 hardware.
SahliTech, Inc. provides download mirrors for Proxmox VE. Download Proxmox VE from our fast mirrors.
Proxmox VE is a hypervisor platform similar to VMware ESXi and Microsoft Hyper-V. It installs as its own operating system and provides a bare-metal virtual machine environment.
Overview:
Proxmox Virtual Environment
Compute, network, and storage in a single solution.
Proxmox VE is a complete, open-source server management platform for enterprise virtualization. It tightly integrates the KVM hypervisor and Linux Containers (LXC), software-defined storage and networking functionality, on a single platform. With the integrated web-based user interface you can manage VMs and containers, high availability for clusters, or the integrated disaster recovery tools with ease.
The enterprise-class features and a 100% software-based focus make Proxmox VE the perfect choice to virtualize your IT infrastructure, optimize existing resources, and increase efficiencies with minimal expense. You can easily virtualize even the most demanding of Linux and Windows application workloads, and dynamically scale computing and storage as your needs grow, ensuring that your data center adjusts for future growth.
About:
About Proxmox
We develop powerful and efficient open-source software to simplify your server management.
Our main products – Proxmox Virtual Environment, Proxmox Backup Server, and Proxmox Mail Gateway – enable you to implement an affordable, secure, and open-source IT infrastructure. With the Proxmox technology you’ll get those comprehensive yet easy-to-use software solutions you’ve always wanted.
We know that business continuity is at the heart of your organization. To prevent operational disruptions we also provide technical assistance and other support services to keep your business up and running. Together with our worldwide partner network and our huge, active open-source community, we are here to guarantee your efficient workflows. Businesses of any size, sector or industry as well as NPOs or the educational sector all use the open-source platforms from Proxmox.
The company Proxmox Server Solutions GmbH was founded by Martin and Dietmar Maurer in 2005. The Proxmox company history started just shortly before the Proxmox Mail Gateway was released. Three years later, in 2008, we released the first stable version 0.9 of Proxmox Virtual Environment, an open-source virtualization management platform. From the beginning, Proxmox VE had a backup tool built in, which is fine for smaller backups. To provide backup for large VMs, and as well to minimize the duration of backups and storage usage, we developed the Rust-based Proxmox Backup Server and released the first stable version in 2020.
Proxmox Server Solutions today is an independent and profitable company, with our customers and supporters as our only investors. Each day we come together to listen to our users, solve problems, and thus continue to improve our products and services. If you want to help, we’re also hiring. Proxmox is headquartered in Vienna, Austria.
Read more: Press Releases
Downloads:
File Name | Description | Date | Version | Size | Download | MD5/SHA1 |
---|---|---|---|---|---|---|
proxmox-ve_8.4-1_SahliTech.zip | Proxmox Virtual Environment is an open-source software server for virtualization management on x64-based hardware. | 6-24-2025 | 8.4-1 | 1.44GB | Download | proxmox-ve_7.3-1_SahliTech.zip MD5: 2F0A7FCE6BCDD23D57688E0957E410D7; proxmox-ve_7.3-1_SahliTech.iso MD5: D9CC22950D9F7EACACF779535A36365A |
Screenshots:
Notes:
Release notes
Released 09. April 2025
- Based on Debian Bookworm (12.10)
- Latest 6.8.12-9 Kernel as new stable default
- Newer 6.14 Kernel as opt-in
- QEMU 9.2.0
- LXC: 6.0.0
- ZFS: 2.2.7 (with compatibility patches for Kernel 6.14)
- Ceph Squid 19.2.1
- Ceph Reef 18.2.4
- Ceph Quincy 17.2.8 is end-of-life; users are advised to upgrade
Highlights
- Live migration with mediated devices such as NVIDIA vGPU. This allows the migration of running VM guests that use mediated devices. Hardware and driver support is required. Currently, only NVIDIA GPUs are known to support live migration.
- Support for external backup providers. Proxmox VE now provides an API that allows developers to write backup provider plugins. A backup provider plugin can implement backup and restore functionality for external backup solutions. This way, external backup solutions can be fully integrated with the Proxmox VE backup stack and GUI.
- Sharing host directories with VM guests using virtiofs. Virtiofs allows sharing directories between the Proxmox VE host and VMs without the overhead of a network filesystem. Modern Linux guests support virtiofs out of the box. Windows guests need to install additional software. A configuration sketch follows this list.
- Latest Linux 6.14 kernel available as opt-in kernel.
- Ceph Squid 19.2.1 is available as stable option.
- Seamless upgrade from Proxmox VE 7.4, see Upgrade from 7 to 8.
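As a rough illustration of the virtiofs highlight above, the sketch below attaches a shared directory to a VM. The mapping name `shared-dir` and VMID `100` are assumptions, the directory mapping is assumed to have been created beforehand under Datacenter → Resource Mappings, and the exact property syntax should be checked against the `qm` man page.
```
# Assumed: a directory mapping "shared-dir" already exists in the cluster.
qm set 100 --virtiofs0 shared-dir
# Inside a modern Linux guest, the share can then be mounted with:
#   mount -t virtiofs shared-dir /mnt/shared
```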
Changelog Overview
Before upgrading, please consider Known Issues & Breaking Changes.
Enhancements in the web interface (GUI)
- Allow setting a consent banner that users must acknowledge before logging in (issue 5463). This can be required for compliance reasons and is already supported in Proxmox Backup Server. The banner supports Markdown and can be set in Datacenter → Options → Consent Text.
- Sort storage content such as ISO files according to the browser locale and with numeric sorting (issue 6138). This results in a more natural sorting order with respect to capitalization and numbers in filenames.
- Downloading ISO, CT templates, or OVA appliances from URLs now uses the configured proxy for both HTTP and HTTPS connections (issue 3716).
- The migration network in the datacenter options can now be entered with a custom CIDR (issue 6142); a configuration sketch follows this list.
- Task lists now show an additional action column that opens the task details for better discoverability.
- Confirmation dialogs now mention the guest name to make it clearer for which guest the action will be applied (issue 3181).
- Better align privilege checks for adding PCI, USB, or VirtIO RNG devices in the web UI with the actual privilege checks in the backend.
- Allow uploading and downloading disk images to storages with content type “Import”, in preparation for importing disk images to VMs in the future (issue 2424).
- Fix an issue where clicking on an external link to the GUI would display a login screen, even if the current session was still valid.
- Fix an issue where the PCI mapping editor would preselect the wrong radio button.
- Fix a security issue that allowed XSS via certain QEMU guest agent responses (PSA-2024-00016-1).
- HTML-encode API results before rendering as additional hardening against XSS (PSA-2025-00002-1).
- Various smaller improvements to the GUI.
- Update xterm.js to version 5.5.0.
- Fix an issue where an xterm.js console would not have the correct size on high-latency connections (issue 6223).
- Update noVNC to version 1.6.0.
- Fix some occurrences where strings were not marked as translatable.
- Fix some occurrences where translatable strings were split, which made potentially useful context unavailable for translators.
- Improved translations, among others:
- Bulgarian
- French
- German
- Italian
- Japanese
- Simplified Chinese
- Spanish
- Traditional Chinese
- Ukrainian
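A minimal sketch of where the migration network CIDR ends up, assuming it was set via Datacenter → Options; the exact property-string format and the example network are assumptions and may differ on your setup.
```
# The datacenter options are stored in /etc/pve/datacenter.cfg.
grep ^migration /etc/pve/datacenter.cfg
# Expected output (example values):
#   migration: secure,network=10.10.10.0/24
```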
Virtual machines (KVM/QEMU)
- New QEMU version 9.2.0. See the upstream changelog for details.
- Live migration with mediated devices such as NVIDIA vGPU (issue 5175). This allows the migration of running VM guests that use mediated devices. It can be enabled on the PCI mapping level by marking the device as capable of live migration. Hardware and driver support is required. Currently, only NVIDIA GPUs are known to support live migration.
- Sharing host directories with VM guests using virtiofs (issue 1027). Virtiofs allows sharing files between host and guest without involving the network. Driver support in the guest is required and implemented out-of-the-box by modern Linux guests running kernel 5.4 and higher. Windows guests need to install additional software to use virtiofs. VMs using virtiofs cannot be live-migrated. Snapshots with RAM and hibernation are not possible.
- Initial support for AMD SEV-SNP (Secure Nested Paging). On supported setups, SEV-SNP can further increase isolation between host and guest. As EFI disks are not supported when using SEV-SNP, print a warning that EFI disks are ignored.
- Disable S3/S4 power states by default. These power states are known to cause problems in some setups, for example with vGPU passthrough to Windows guests. The new Proxmox VE machine version `9.2+pve1` disables S3/S4 power states. New Windows VMs are pinned to that machine version. Existing Windows VMs are already pinned to an earlier machine version that keeps S3/S4 power states enabled. VMs with machine version `latest` will disable S3/S4 power states the next time they are started (a pinning sketch follows this list).
- Clarify the handling of QEMU machine version deprecations. Starting with QEMU 10.1, machine versions will be removed from upstream QEMU after six years. Proxmox VE will support machine versions from approximately two previous major Proxmox VE releases. This is now documented, and QEMU will warn about future deprecation on VM start.
- The VM creation wizard now puts additional ISOs, such as the one for Windows VirtIO drivers, after the installation ISO in the boot order (issue 6116).
- Fix security issues concerning image formats that would allow sufficiently privileged attackers read or read-write access to arbitrary host files. See PSA-2024-00014-1, PSA-2025-00001-1, PSA-2025-00003-1, PSA-2025-00004-1.
- Allow the customization of the ballooning target for automatic memory allocation (issue 2413). Previously, the ballooning algorithm targeted 80% of available host memory, which may be too low for setups with large amounts of memory. The ballooning target can now be configured on a per-node basis.
- Shutting down a VM with the QEMU guest agent enabled will now fall back to ACPI shutdown if the QEMU guest agent is not active. Previously, if the QEMU guest agent was enabled but not active, the shutdown attempt had no effect.
- Require the `format` option of a VM disk, if set, to be consistent with the format reported by the storage layer if the disk is managed by Proxmox VE.
- Improvements to VirtIO RNG devices that provide entropy to VMs:
- Allow non-root users with the `VM.Config.HWType` privilege to configure `/dev/urandom` and `/dev/random` as an entropy source.
- Allow non-root users with the additional `Mapping.Use` privilege on the `/mapping/hwrng` ACL path to configure a hardware random number generator.
- Remove an outdated warning about entropy starvation when using `/dev/random` as an entropy source.
- Allow offline migration if mapped devices are present. Previously, mapped devices would be incorrectly marked as local resources.
- Snapshots with RAM now write the VM state in a dedicated IO thread and thus reduce load on the QEMU main thread. This can improve performance and avoid guest hangs in case the VM state is written to an unreliable network storage. It also avoids deadlocks in case the guest is rebooted while taking a snapshot with RAM.
- Increase the maximum timeout for the VM start with the number of attached virtual NICs (issue 3588).
- Cloning a VM with TPM state will now always fully clone the TPM state.
- Fix an issue where allocating a new EFI disk or TPM state volume for a template would not convert these volumes to base volumes.
- Fix an issue that broke the live import from OVA appliances with disks.
- Prevent resuming a template, as templates are not supposed to be fully started. Templates are partially started in `prelaunch` state during backups.
- Prevent moving or cloning VM disks to storages without the “Disk Image” content type (issue 5284).
- Removing the TPM state or EFI disk from a running VM will now be registered as a pending change, as detaching them from a running VM is not supported.
- Fix an issue where backing up a template would fail if a virtual NIC is disconnected (issue 6007).
- Fix some spurious warnings, for example a warning printed by `qm importdisk` (issue 5980).
- Work around an issue where QEMU processes for Linux guests would consume more CPU after an update to QEMU 9.2.
- Clarify error messages in case cloning fails.
- Revert a kernel patch that prevented passthrough of the iGPU on Intel Skylake platforms.
- Backport a kernel patch that fixes a KVM performance regression on Intel Emerald Rapids platforms.
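As a hedged sketch of the machine-version pinning mentioned above, the commands below pin an example VM (VMID `100`, an assumption) to a specific Proxmox VE machine version; the exact version strings available should be checked in the GUI or in the `qm config` output.
```
# Pin a q35-based VM to the 9.2+pve1 machine version:
qm set 100 --machine pc-q35-9.2+pve1
# For i440fx-based VMs the equivalent would be:
qm set 100 --machine pc-i440fx-9.2+pve1
```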
Containers (LXC)
- Allow setting the change detection mode for one-shot container backups to Proxmox Backup Server (issue 5936).
- The LXC creation wizard now makes it clearer that nesting is never enabled for privileged containers, by unchecking the corresponding checkbox.
- The API endpoint `/nodes/{node}/lxc/{vmid}/interfaces` now returns all configured IP addresses of the container (issue 5339). Previously, only the first address would be returned; a query sketch follows this list.
- Fix an issue where opening the console of a container on a different cluster node would ask whether the node’s SSH host key should be trusted.
- Ignore conflicting mount options for read-only mountpoints (issue 5907). This fixes an issue where conflicting options would cause the mount to fail.
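A small sketch of querying the endpoint mentioned above via `pvesh`; the node name `pve1` and container ID `101` are placeholders.
```
# Returns all configured IP addresses of container 101 on node pve1:
pvesh get /nodes/pve1/lxc/101/interfaces
```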
General improvements for virtual guests
- Improvements to remote migration:
- Allow remote migration of guests with disks on shared storage, such as RBD or iSCSI.
- Fix an issue where a remote migration of a container failed if `nesting` was explicitly set to 0.
- Fix an issue where an offline disk migration would fail in certain situations if the target has a bandwidth limit set (issue 6130).
Improved management for Proxmox VE clusters
- Improvements to `pveproxy` and `pvedaemon`:
- Increase the maximum allowed size for POST requests to 512 KiB to avoid issues with large configurations (issue 6230). Previously, the maximum allowed size was 64 KiB, which can be exceeded, for example, by creating large PCI mappings.
- API handlers now return errors in the JSON response. This way, client libraries can extract the error without having to parse the HTTP status reason phrase.
- Where appropriate, return HTTP status code 500 “Internal server error” instead of 501 “Not Implemented”.
- In case of errors, add the error message to the HTTP response body if no other explicit content is set.
- Avoid redundant disconnects in case the connecting client sends no data (issue 4816). This should reduce the number of `detected empty handle` warnings in the logs.
- Send a TLS close notify before closing a connection to improve compatibility with certain HTTPS clients.
- The `pveproxy` logs can now optionally mention a connecting IP address read from a header, which can be useful in environments where Proxmox VE is accessed via a proxy (issue 5699).
- Improvements to the notification system:
- Allow overriding templates used for notifications sent as plain text as well as HTML (issue 6143).
- Streamline notification templates in preparation for user-overridable templates.
- Clarify the descriptions for notification matcher modes (issue 6088).
- Fix an error that occurred when creating or updating a notification target.
- HTTP requests to webhook and gotify targets now set the `Content-Length` header.
- InfluxDB plugin: Fix an issue where the collected data would be incomplete if guests with a single tag with a numeric value are defined.
- Fix an issue where the `/cluster/metrics/export` endpoint returned wrong data for the `iowait` metric; a query sketch follows this list.
- Improve the specification of API response schemas to simplify interactions with the Proxmox VE API.
- Update `corosync` to version 3.1.9 with additional hardening patches.
- `pvereport` now also displays the WWIDs of attached disks for easier troubleshooting of multipath configurations.
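A minimal sketch of pulling the cluster metrics (including the corrected `iowait` values) from the endpoint mentioned above; it can be run on any cluster node.
```
# Export the current cluster metrics via the API:
pvesh get /cluster/metrics/export
```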
Backup/Restore
- Support for external backup providers. The Proxmox VE storage stack now provides an API for developing backup provider plugins for external backup solutions. Backup provider plugins can make use of advanced features, for example dirty bitmap tracking for fast incremental backups. Backup solution providers can now develop a custom backup plugin to integrate their backup solution with Proxmox VE.
- Fix a race condition that could prevent proper error propagation during a container backup to Proxmox Backup Server.
- Improvements to the change detection modes for container backups to Proxmox Backup Server introduced in Proxmox VE 8.3:
- Fix an issue where the file size was not considered for metadata comparison, which could cause subsequent restores to fail.
- Increase robustness of fleecing backups:
- Record the name of the fleecing image and clean up leftover fleecing images on suitable occasions, for example the next backup run or a migration (issue 5440).
- Improve the error handling for fleecing and avoid stuck guest IO.
- VM template backups to VMA archives now use the same backup approach as backups to the Proxmox Backup Server.
- File restore from Proxmox Backup Server: Switch to `blockdev` options when preparing drives for the file restore VM. In addition, fix a short-lived regression when using namespaces or encryption due to this change.
Storage
- iSCSI plugin: Decrease the volume of connection checks via TCP ping to avoid recurring spurious warnings on the target side (issue 957).
- Improvements to the ESXi plugin:
- Avoid DBUS errors about the maximum number of match rules per connection (issue 5876).
- Ensure that VMDK disk images are correctly detected.
- OVA/OVF import: Fix a security issue that would allow a sufficiently privileged attacker to obtain a copy of an arbitrary host file by importing a specifically crafted OVA appliance (PSA-2024-00013-1).
- Btrfs plugin: Fix an issue where the guest migration would fail if disks had multiple snapshots on Btrfs (issue 3873).
- Fix an intermittent issue where using ISOs on Btrfs would fail with an error.
- Fix caching issues in the RBD and user-mode iSCSI storage plugins (issue 6085).
- Fix an issue where a replication job would not be deleted if it was disabled.
Ceph
- Ceph Quincy 17.2 is end-of-life; users are advised to upgrade to at least Ceph Reef.
- Add an optional column for the pool application to the GUI.
- Fix an issue where editing a pool via the GUI could occasionally set different values for `size` and `min_size`; a CLI sketch for checking these values follows below.
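A hedged sketch for verifying, and if needed correcting, the `size` and `min_size` of a pool from the CLI; the pool name `vm-pool` and the values are examples.
```
# Check the current values:
ceph osd pool get vm-pool size
ceph osd pool get vm-pool min_size
# Set them explicitly if they diverge from the intended configuration:
pveceph pool set vm-pool --size 3 --min_size 2
```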
Access control
- Improvements to the OpenID Connect realm:
- Add support for groups (issue 4411). The new `groups-claim` setting specifies an OpenID claim that is used to infer the user’s groups on the Proxmox VE side. Optionally, groups that do not exist on the Proxmox VE side can be automatically created. A configuration sketch follows this list.
- Allow disabling queries against the UserInfo endpoint, as some identity providers do not support it (issue 4234).
- Fix a spurious warning when using case-insensitive realms such as Active Directory.
- Clarify that password changes for the PAM realm only affect the local node.
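A hedged sketch of enabling the `groups-claim` setting on an existing OpenID Connect realm from the CLI. The realm name `my-oidc` and the claim name `groups` are assumptions, and the exact option spelling should be verified with `pveum help realm modify`.
```
# Map the "groups" claim from the identity provider to Proxmox VE groups:
pveum realm modify my-oidc --groups-claim groups
```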
Firewall & Software Defined Networking
- Add TLS certificate validation for external IPAM/DNS plugins (PSA-2025-00006-1). The IPAM/DNS plugin integration (currently in tech preview) of the SDN stack was missing TLS certificate validation, potentially allowing MITM attacks.
- Forbid assigning overlapping DHCP ranges to the same zone, as multiple IPAM backends do not support them.
- Rework and improve the Netbox IPAM plugin to properly synchronize the deletion and updates of addresses and networks (issue 5496). Refactor the code to consistently handle errors reported by Netbox. Delete subnets in Netbox after they are removed in Proxmox VE. Properly return the address after storing and obtaining it from Netbox. This fixes an issue preventing guests from starting (issue 5496).
- Fix the handling of DNS name addition in the PowerDNS plugin for dual-stacked setups and zones. Not explicitly sending the type of record to add (`A`/`AAAA`) caused an error in the PowerDNS API, resulting in no records being added at all.
- Fix the detection of pending changes for Boolean configuration options. A setting evaluating to `false` was always shown as a pending change.
- Add the `log_level_format` option to the firewall API and parsing code (issue 5925).
- Skip generating rules in forward chains for security groups bound to an interface, instead of causing an error. Rules in the forward chain operate on multiple interfaces, and binding them to an interface does not make sense.
- Adapt the new `proxmox-firewall` implementation to better align its options and defaults with `pve-firewall`.
- Fix the behavior of the `nf_conntrack_allow_invalid` option in the new `proxmox-firewall` implementation.
- Treat an absent firewall setting for a guest network interface as `false` (issue 6176).
- Fix firewall rules for the ICMP protocol (issue 6108).
- Ensure that the `frr` service is enabled when (re)starting it.
- Update the `frr` routing protocol suite from version 8.5.2 to version 10.2.1.
- `frr`: Add the possibility to have `dummy` interfaces be treated as `loopback` interfaces in OpenFabric.
Improved management of Proxmox VE nodes
- Several vulnerabilities in GRUB that could be used to bypass Secure Boot were discovered and fixed (PSA-2025-00005-1). The documentation for Secure Boot now includes instructions on preventing vulnerable components from being used for booting via a revocation policy.
- Add the `pve-nvidia-vgpu-helper` tool to simplify the NVIDIA vGPU driver setup.
- Backport a kernel patch that avoids a performance penalty on Raptor Lake CPUs with recent microcode (issue 6065).
- Backport a kernel patch that fixes Open vSwitch network crashes that would occur with a low probability when exiting ovs-tcpdump.
- Update the DNS plugins provided by acme.sh to upstream version 3.1.0.
- Fix a spurious broken pipe warning when querying the DKMS status in `pve7to8`.
- Fix an issue where autoactivation settings for LVM volume groups and logical volumes were not taken into account on boot.
Installation ISO
- Raise the minimum root password length from 5 to 8 characters for all installers. This change is done in accordance with current NIST recommendations.
- Print more user-visible information about the reasons why the automated installation failed.
- Allow RAID levels to be set case-insensitively in the answer file for the automated installer.
- Prevent the automated installer from printing progress messages while there has been no progress.
- Correctly acknowledge the user’s preference whether to reboot on error during automated installation (issue 5984).
- Allow binary executables (in addition to shell scripts) to be used as the first-boot executable for the automated installer.
- Allow properties in the answer file of the automated installer to be either in `snake_case` or `kebab-case`. The `kebab-case` variant is preferred to be more consistent with other Proxmox configuration file formats. The `snake_case` variant will be gradually deprecated and removed in future major version releases. An answer-file sketch follows this list.
- Validate the locale and first-boot-hook settings while preparing the automated installer ISO, instead of failing the installation due to wrong settings.
- Prevent printing non-critical kernel logging messages, which drew over the TUI installer’s interface.
- Keep the network configuration detected via DHCP in the GUI installer, even when not clicking the `Next` button first (issue 2502).
- The ISO now ships and installs FRRouting, but keeps the service disabled.
- Improve the error handling if no DHCP server is configured on the network or no DHCP lease is received. The GUI installer will pre-select the first found interface if the network was not configured with DHCP. The installer will fall back to more sensible values for the interface address, gateway address, and DNS server if the network was not configured with DHCP.
- Add an option to power off the machine after the successful installation with the automated installer (issue 5880).
- Improve the ZFS ARC maximum size settings for systems with a limited amount of memory. On these systems, the ZFS ARC maximum size is clamped in such a way that there is always at least 1 GiB of memory left for the system.
- Ensure that the ZFS ARC maximum size is also set for additional storages, even if the root filesystem is not on ZFS (issue 6285).
- Make Btrfs installations use the `proxmox-boot-tool` to manage the EFI system partitions (issue 5433).
- Make GRUB install the bootloader to the disk directly to ensure that a system is still bootable even if the EFI variables are corrupted.
- Fix a bug in the GUI installer’s hard disk options, which caused ext4 and xfs to show the wrong options after switching back from Btrfs’s advanced options tab.
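A minimal sketch of an answer-file fragment for the automated installer, illustrating the preferred kebab-case keys. The file name, the values, and the exact set of supported keys are assumptions; consult the automated-installation documentation for the authoritative list.
```
# Hypothetical fragment of an answer file for the automated installer:
cat > answer.toml <<'EOF'
[global]
keyboard = "en-us"
country = "us"
fqdn = "pve.example.com"
mailto = "admin@example.com"
timezone = "UTC"
# kebab-case preferred; the snake_case variant (root_password) is still accepted:
root-password = "change-me"
EOF
```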
Notable changes
- Beginning with NVIDIA vGPU Software 18, Proxmox VE is an officially supported platform. See NVIDIA vGPU on Proxmox VE for more details.
Known Issues & Breaking Changes
PXE boot on VM with OVMF requires VirtIO RNG
For security reasons, the OVMF firmware now disables PXE boot for guests without a random number generator.
If you want to use PXE boot in OVMF VMs, make sure you add a VirtIO RNG device. This is allowed for root and for users with the `VM.Config.HWType` privilege.
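A short sketch of adding a VirtIO RNG device from the CLI; the VMID `100` and the entropy source are examples.
```
# Add a VirtIO RNG device so PXE boot keeps working with OVMF:
qm set 100 --rng0 source=/dev/urandom
```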
Broken pass-through of iGPU to VM in legacy mode
Pass-through of integrated graphics cards (iGPU) to a VM is broken when using legacy mode (`legacy-igd=1`) and machine type `i440fx`. Use machine type `q35` or non-legacy mode instead.
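A minimal sketch of switching an affected VM to the q35 machine type; the VMID `100` is an example, and existing guests may need driver or boot adjustments after a machine-type change.
```
# Switch the VM to the q35 machine type:
qm set 100 --machine q35
```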
OSDs deployed on Ceph Squid crash
Ceph Squid currently has an issue where newly created OSDs are crashing. This seems to affect EC pools in particular and only OSDs that were newly created using Ceph 19.2 Squid.
We already published a patched Ceph version (19.2.1-pve3) that works around this issue for newly created OSDs by changing the faulty default setting. Updating your Squid cluster is advised.
Alternatively, you can also manually change the problematic `bluestore_elastic_shared_blobs` setting to `0` using the following command: `ceph config set osd bluestore_elastic_shared_blobs 0`.
If you have deployed new OSDs using a Ceph Squid version prior to 19.2.1-pve3, i.e. any version including and between 19.2.0-pve1 and 19.2.1-pve2, you should destroy and recreate each OSD after either upgrading to 19.2.1-pve3 or later, or manually changing this setting as described above. You can do so one at a time, waiting for the cluster to recover to a healthy state in between.
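A hedged sketch of the recreate procedure for a single affected OSD, assuming the workaround setting has already been applied; the OSD ID `3` and the device path `/dev/sdX` are placeholders, and the cluster should return to a healthy state before the next OSD is recreated.
```
ceph config set osd bluestore_elastic_shared_blobs 0
# Take the affected OSD out and stop it:
ceph osd out osd.3
systemctl stop ceph-osd@3.service
# Destroy and recreate it on the same device:
pveceph osd destroy 3 --cleanup
pveceph osd create /dev/sdX
# Wait for HEALTH_OK before proceeding with the next OSD:
ceph -s
```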
“Download from URL” now uses the configured proxy for HTTPS connections
The “Download from URL” feature allows downloading ISOs, container templates, and OVA appliances directly to the Proxmox VE host.
If a proxy was configured in the datacenter options in Proxmox 8.3 and below, “Download from URL” only used the proxy for HTTP connections, but not for HTTPS connections. This broke “Download from URL” via HTTPS on setups where the host must access the files’ repository via the proxy.
Now, Proxmox VE uses the configured proxy for HTTPS connections too. This may break setups that relied on the previous behavior. For example, setups that have a proxy configured and download files via HTTPS directly from an internal repository, and where the proxy does not allow access to the internal repository. For such setups, it will be necessary to download files manually without a proxy instead.
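For reference, the proxy in question is the one configured in the datacenter options; a quick way to check it on a node is sketched below, with an example proxy URL.
```
# The datacenter-wide proxy is stored in /etc/pve/datacenter.cfg:
grep ^http_proxy /etc/pve/datacenter.cfg
# Example output:
#   http_proxy: http://proxy.example.com:3128
```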
SDN IPAM/DNS plugins (tech-preview): Configuration update may be necessary to avoid TLS certificate verification errors
Users of external IPAM or DNS plugins may see TLS certificate verification errors when trying to communicate with the external IPAM or DNS services after the update to Proxmox 8.4.
The reason is that HTTPS connections to external IPAM or DNS services did not verify TLS certificates in Proxmox VE 8.3 and below (PSA-2025-00006-1), but now verify TLS certificates, which may fail if the certificate is not signed by a CA that is trusted by the Proxmox VE node.
To fix this, either edit the plugin configuration and provide the SHA-256 fingerprint of the external service’s certificate, or ensure the corresponding CA is trusted by the Proxmox VE node.
SDN Netbox IPAM plugin (tech-preview): Recreating DHCP ranges may be necessary
This release fixes a bug where IP Ranges in Netbox were not properly created in a DHCP-enabled simple zone. To fix existing subnets with DHCP ranges configured and Netbox as an IPAM, you have two options:
- Manually create the IP Ranges in Netbox:
- Go to IPAM → IP Ranges and add the DHCP ranges you have configured for your subnet manually
- Let Proxmox VE recreate the IP Ranges:
- Go to SDN → VNets
- Edit the Subnet and delete all DHCP ranges, then click OK
- Edit the Subnet, re-create the DHCP ranges, then click OK
- Apply the SDN configuration
The IP ranges should now show up in Netbox, and starting VMs and containers should work.