---
title: Release Notes v8.5.0
description: Product release notes for Azure CycleCloud v8.5.0
author: adriankjohnson
ms.date: 06/21/2024
ms.author: adjohnso
---
# CycleCloud version 8.5.0
## New Features:
* CycleCloud supports [Confidential VMs and Trusted Launch](../how-to/vm-security.md) for cluster nodes
* CycleCloud supports running [Node Health Checks](../how-to/healthcheck.md) during boot on the latest AzureHPC images to validate node hardware health before the node joins the cluster. See [Azure HPC Health Checks](https://github.com/Azure/azurehpc-health-checks) for details
* Cluster creation forms now support:
* Configuring disk encryption for all cluster nodes
* Assigning a Managed Identity to cluster nodes
* Setting a custom Boot/OS Disk size
* Enabling Node Health Checks
* CycleCloud supports Windows Server 2019 and Windows Server 2022
* CycleCloud now supports HPC Pack 2019
* CycleCloud's Slurm cluster type now enables the Slurm REST API daemon (`slurmrestd`); see the example after this list
* CycleCloud's Slurm cluster type now supports Ubuntu 22.04
* New alternate CycleCloud CLI installer with bundled Python, for offline installation or installation on older systems
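
With the Slurm REST API daemon enabled, the scheduler can be queried over HTTP. The snippet below is a minimal sketch, not part of the product documentation: it assumes `slurmrestd` is listening on TCP port 6820 on a host named `scheduler`, that JWT authentication is configured (token available in the `SLURM_JWT` environment variable, for example from `scontrol token`), and that the API version is `v0.0.39`. None of these values come from the release notes, so adjust them to match your deployment.

```python
# Minimal sketch: query slurmrestd from a cluster node using JWT authentication.
# The scheduler hostname, port 6820, API version v0.0.39, and the SLURM_JWT
# environment variable are assumptions -- adjust them to your deployment.
import os

import requests

SLURMRESTD_URL = "http://scheduler:6820"   # hypothetical scheduler host and port
API_VERSION = "v0.0.39"                    # depends on the installed Slurm release

headers = {
    "X-SLURM-USER-NAME": os.environ.get("USER", "slurm"),
    "X-SLURM-USER-TOKEN": os.environ["SLURM_JWT"],  # e.g. output of `scontrol token`
}

# /ping is a lightweight health check; /jobs returns queued and running jobs.
for endpoint in ("ping", "jobs"):
    resp = requests.get(
        f"{SLURMRESTD_URL}/slurm/{API_VERSION}/{endpoint}",
        headers=headers,
        timeout=10,
    )
    resp.raise_for_status()
    print(endpoint, "->", resp.json())
```
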
## Resolved Issues:
* Slurm installation would fail with `KeyError: 'ClusterId'` if tags were changed to lowercase
* The Machine Type selection filter for *High performance compute* did not include HBv4
* The traditional CycleCloud CLI installer did not properly check for Python 3.8+
* CycleCloud service autostart had a race condition with mounting the attached data disks, which could cause the CycleCloud service to fail to start on first boot
* Copying a cluster could lose the associated cluster-init specification
* Slurm clusters with spaces in their names would not converge
* "No node found for instance" was logged when clusters start
* Login page did not show the currently logged-in user
* GridEngine nodes were killed as idle immediately after starting a job
* Editing a node and then re-importing the cluster would reset those edits
* Cluster start/terminate button label would wrap while the cluster was terminating