OpenStack releases typically arrive every six months, and each release brings new features, enhancements, and bug fixes to the platform. It's therefore always a good idea to keep up to date with the latest OpenStack releases to ensure you're getting the most out of the platform.
That said, many called it the beginning of a new era when the OpenStack community announced the release of OpenStack Antelope, the 27th version of OpenStack, last week. OpenStack is a central component of LOKI (Linux OpenStack Kubernetes Infrastructure), the open source standard for running open source cloud infrastructure. In Antelope, OpenStack contributors deliver a host of enhancements to the software's rapidly growing user base (a 166% increase in compute cores since 2020), including a new release cadence, stronger integration with Kubernetes and other open source technologies, and expanded support for advanced hardware.
Not ready to upgrade yet? In that case, we recommend OpenStack Zed, currently the second-most-recent release. You can read all about it through the following link: https://fairbanks.nl/new-openstack-release-update/
New Release Cadence
Antelope is the first release in a new cadence designed to ease the demands on operators to upgrade every six months. Deployments can now opt into a once-a-year upgrade cycle, upgrading with every Skip Level Upgrade Release Process or “SLURP” release. Non-SLURP releases will be available in each six-month interim for those who wish to upgrade more frequently. Antelope is a SLURP release; OpenStack Bobcat, the 28th release of OpenStack, is a non-SLURP release scheduled for October 2023.
OpenStack integration with Kubernetes
According to the 2022 OpenStack User Survey, Kubernetes now runs on over 85% of OpenStack deployments, 73% of them via vanilla Kubernetes itself. The rise in OpenStack and Kubernetes production integrations is further documented by an increase to 21% (up from just 16% last year) in users running production workloads with Magnum, the OpenStack service for container orchestration.
Magnum has been updated to support Kubernetes v1.24 running on Fedora CoreOS 36 and 37. Magnum has also been recertified as a Kubernetes orchestrator.
In Antelope, OpenStack contributors continue to expand support for new hardware. For example:
Cinder gained new backend drivers (HPE XP iSCSI and FC, Fungible NVMe-TCP, and NetApp NVMe-TCP storage drivers), along with new features for existing vendor drivers.
Ironic can now export application metrics from the ironic-conductor service into Prometheus, alongside hardware metrics. Previously these were only accessible via statsd.
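Once those metrics are exported, Prometheus can scrape them like any other target. A minimal scrape-job sketch; the target below is a placeholder for illustration, not a default shipped by Ironic, and the actual host and port depend on how your deployment exposes the exporter:

```yaml
# prometheus.yml (fragment); the target is an assumed placeholder
scrape_configs:
  - job_name: ironic
    static_configs:
      - targets: ["ironic-conductor.example.com:9608"]
```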
That is the official overview OpenStack provided. Furthermore, after the release on the 22nd of March, the OpenInfra Foundation hosted a live stream to cover OpenStack Antelope and answer questions. You can rewatch the live stream through the following link: https://www.youtube.com/live/YdLTUTyJ1eU?feature=share
What we think about this new OpenStack release
We are more excited than ever about this new OpenStack release, which starts the alphabet over. With every new OpenStack release we have been spoiled with general improvements to features and to integration with hardware and software. More importantly, we love the Skip Level Upgrade Release Process, and we recognize ourselves in the combined use of OpenStack and Kubernetes (LOKI) as well. We are glad to see OpenStack acknowledge both even more with this new OpenStack Antelope release.
In addition, OpenStack has listed significant changes reported directly by the project teams. Some highlights may be more significant than others. OpenStack notes that this list should not be taken as a definitive set of highlights for the release until the Open Infrastructure Foundation marketing staff have had a chance to compile a more accurate message from these changes.
If you have any questions or remarks, let us know.
- Added a new option to randomize host selection for physical host reservations. This prevents Blazar from always allocating the same hosts.
- Added Resource Allocations API support in the Blazar command-line client. This allows operators to query active and pending allocations for specific hosts, or find which hosts are allocated to a specific reservation.
- New backend drivers were added: HPE XP iSCSI and FC, Fungible NVMe-TCP, NetApp NVMe-TCP storage drivers.
- Many features were added to existing drivers. Notable examples are Trisync replication support for the Pure driver, volume group snapshot support for the IBM SVF driver, Unisphere 10 support for the Dell EMC PowerMax driver, and host-assisted migration and retype support for the Hitachi VSP driver. Apart from driver features, the NVMe target driver gained LVM+nvmet support to enable CI testing even in the absence of an NVMe backend.
- Many small bug fixes and operator-experience improvements landed, especially around the cinder-backup service; for example, an incremental-backup issue in the POSIX driver and a create-backup failure when the volume doesn't exist were fixed.
- Completed the vGPU management feature. The configuration is the same as on the Nova side; the difference is that the mdev is deleted when the VM is destroyed, and a vGPU trait is assigned when a VM is created.
- Added a new group of APIs to manage the lifecycle of accelerator attributes.
- Added support for filtering the attribute list by deployable_id and key.
- Moved device_profile_uuid from the request path to the request body in the GET API.
- Designate zones can now be shared across multiple projects. This not only allows two or more projects to manage recordsets in the zone, but enables “Classless IN-ADDR.ARPA delegation” (RFC 2317) in Designate. Classless IN-ADDR.ARPA delegation allows IP address DNS PTR record assignment in smaller blocks without creating a DNS zone per address.
- Glance now enforces SRBAC policies by default.
- Added support for extending attached volumes with the Cinder backend.
- Horizon and all Horizon plugins now support nodejs18.
- Horizon added port forwarding support for floating IPs. Users can now perform port-forwarding CRUD operations on network floating IPs using Horizon.
- Added support for a service role in default policies, intended for service-to-service communication.
- Ironic can now export application metrics from the ironic-conductor service into Prometheus, alongside hardware metrics. Previously these were only accessible via statsd.
- Added support for sharding Ironic nodes using a shard key, enabling external API clients to scale horizontally. See https://docs.openstack.org/api-ref/baremetal/#shards for more information.
- Performance on the /v1/ports and /v1/portgroups API list endpoints has been improved.
- Keystone now supports OAuth 2.0 Mutual-TLS authentication.
- keystoneauth has a new plugin for the OAuth 2.0 Device Authorization Grant.
- All services in containers are now running under systemd control.
- Support for deploying Skyline has been added.
- A validate-config CLI option has been added, which validates service configuration files using oslo-config-validator.
- Trove service deployment now supports backend TLS.
- Skyline container images have been added.
- RabbitMQ version has been updated to 3.11.
- trove-api has been switched to WSGI running under Apache.
- Magnum has been updated to support Kubernetes v1.24 running on Fedora CoreOS 36 and 37.
- APIs for transferring shares between projects are now available. When triggered, the transfer will create a key to be used by a user in the destination project.
- Users can now specify metadata while creating their share networks. The behavior should be similar to Manila shares, and users will be able to update and delete the metadata of the given resources.
- Implemented secure role-based access control (sRBAC).
- Enabled host multisegment support (only for the ML2/OVS mechanism driver).
- Support for Neutron dynamic routing with ML2/OVN.
- Placement enablement for tunnelled networks and shared resources (tunnelled and physical networks sharing the same interface).
- Support for SQLAlchemy 1.4 (ahead of the move to 2.0).
- Created a new OVN agent that will implement functionality not provided by ovn-controller; the first feature to be migrated will be the metadata service.
- PCI devices can now be scheduled by Nova using the Placement API on an opt-in basis. This helps the nova-scheduler service better schedule flavors that use (non-Neutron) PCI resources, generates fewer reschedules when an instance cannot be created on a candidate, and keeps the nova-scheduler from missing valid candidates when the list is too large.
- Operators can now ask Nova to manage the power consumption of dedicated CPUs so as to either offline them or change their governor if they’re currently not in use by any instance or if the instance is stopped.
- Nova will prevent unexpected compute service renames by persisting a unique compute UUID on local disk. This stored UUID is considered the source of truth for knowing whether the compute service hostname has been modified. As a reminder, changing a compute hostname is forbidden, particularly when the compute node is currently running instances.
- SPICE consoles can now be configured with compression settings which include choices of the compression algorithm and the compression mode.
- Fully-Qualified Domain Names are now considered valid for an instance hostname if you use the 2.94 API microversion.
- By opting into 2.95 API microversion, evacuated instances will remain stopped on the destination host until manually started.
- Keystone SSL verification is now supported; a cafile configuration item was added to the Skyline configuration file.
- Added a log_file option to skyline.yaml.sample. The Skyline log path is now log_dir combined with log_file, with no other hardcoded paths.
- nginx.conf.j2 now supports both HTTP and HTTPS; HTTP is the default.
- Support for new parameters when creating Zun containers.
- Optimized the ID/Name columns in the tables.
- Support for a custom button in the upload form item.
- Added CPU and memory search filters to the flavor list.
- The global navigation is now available at the top-left position of the page.
- Added progress display and an abort option when downloading files through a modal form.
- Support for creating cluster instances from the template list.
- Support for Magnum in the administrator platform.
- Support for upgrading clusters; quota information is now shown when resizing a cluster.
- Support for fallbackBestEffort in grant requests, and re-selection for placement-constraint and no-availability-zone use cases.
- AutoHeal and AutoScale can be triggered by an external monitoring tool such as Prometheus, without an NFVO.
- OAuth 2.0 Mutual-TLS support for authentication of external servers.
- Support for PostgreSQL as an additional database backend.
- CNF auto-scaling via the performance management threshold interface.
- Update network configuration via Change Current VNF Package API.
- Infrastructure improvement: use the MariaDB database to store key configuration data.
- Infrastructure improvement: use the Apscheduler task-scheduling framework to implement background periodic tasks, such as periodic cleaning of expired log data.
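The Nova PCI-in-Placement item above is an opt-in controlled through configuration. A sketch of what that opt-in might look like, assuming the 2023.1-era option names (`[pci] report_in_placement` and `device_spec` on compute nodes, `[filter_scheduler] pci_in_placement` on the scheduler); the vendor/product IDs and resource class are illustrative, and the exact option names should be verified against the Nova documentation before use:

```ini
# nova.conf on compute nodes (assumed option names)
[pci]
report_in_placement = True
device_spec = { "vendor_id": "1b36", "product_id": "0100", "resource_class": "CUSTOM_MY_PCI" }

# nova.conf on scheduler hosts (assumed option name)
[filter_scheduler]
pci_in_placement = True
```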
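To illustrate the classless IN-ADDR.ARPA delegation (RFC 2317) mentioned in the Designate item above: for IPv4 prefixes longer than /24, the reverse zone gets a compound label such as `0/26`. The `rfc2317_zone` helper below is a hypothetical sketch written only to show the naming convention from the RFC; it is not part of Designate.

```python
import ipaddress


def rfc2317_zone(cidr: str) -> str:
    """Return the RFC 2317 classless in-addr.arpa zone name for an
    IPv4 prefix longer than /24 (e.g. a /26 carved out of a /24)."""
    net = ipaddress.ip_network(cidr)
    if net.version != 4 or net.prefixlen <= 24:
        raise ValueError("classless delegation applies to IPv4 prefixes longer than /24")
    o1, o2, o3, o4 = str(net.network_address).split(".")
    # RFC 2317 suggests "<first-address>/<prefixlen>" as the zone label;
    # some deployments use "-" instead of "/" to avoid escaping.
    return f"{o4}/{net.prefixlen}.{o3}.{o2}.{o1}.in-addr.arpa."


print(rfc2317_zone("192.0.2.0/26"))  # 0/26.2.0.192.in-addr.arpa.
```

With zone sharing, the project that owns such a zone can then grant recordset management in it to other projects instead of every tenant needing its own per-address zone.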