IBM Cloud - Structured Ideas


Status: Future consideration
Categories: Containers
Created by: Guest
Created on: Jan 8, 2020

Kubernetes worker node upgrade should respect PodDisruptionBudget

We are using Kubernetes PodDisruptionBudgets (PDBs) to prevent voluntary disruptions of our services. This should include the drain of worker nodes during a worker node upgrade. While a manual kubectl drain fails when too few replicas are running, the IBM Cloud worker node upgrade ignores the PodDisruptionBudget and proceeds with shutting down the worker node.
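For reference, a PodDisruptionBudget like the one the report relies on might look as follows (a minimal illustrative manifest; the name and labels are placeholders, and at the time this idea was filed the API group was still policy/v1beta1 rather than policy/v1):

```yaml
# Illustrative PDB: keep at least two replicas of "my-service" running
# across voluntary disruptions such as node drains.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-service-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-service
```

A manual kubectl drain honors this budget via the eviction API and refuses to evict pods once further evictions would violate minAvailable.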

Knowing that, I have to check every service for a defined PDB and determine whether the upgrade will cause a disruption in the sense of that PDB. The only other option is to drain every node manually to see whether it complies with the PDBs.
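The manual check described above amounts to the following calculation per service (a simplified sketch; real PDBs also support maxUnavailable and percentage values, which are omitted here):

```python
# Will draining one node violate a PDB? Given the PDB's minAvailable and
# the number of currently healthy replicas, evicting the replicas that run
# on the draining node must not drop the service below the budget.

def drain_violates_pdb(min_available: int, healthy_replicas: int,
                       replicas_on_node: int) -> bool:
    """Return True if evicting `replicas_on_node` pods would leave fewer
    than `min_available` healthy replicas."""
    return healthy_replicas - replicas_on_node < min_available

# Example: 3 healthy replicas, PDB requires minAvailable=2.
print(drain_violates_pdb(2, 3, 1))  # one replica on the node -> False (OK)
print(drain_violates_pdb(2, 3, 2))  # two replicas on the node -> True (violation)
```

Repeating this for every service and every node is exactly the toil the idea asks IBM Cloud to automate.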

So please make the worker node upgrade automation aware of the actual workload: respect the PDBs, not only the unavailability rules at worker node level in the ConfigMap ibm-cluster-update-configuration.

Proposed solution:
Provide a second, workload-aware upgrade option so the user can choose (the existing forced upgrade option remains available):

  1. Start draining the worker nodes as long as the worker node unavailability rules allow.
  2. If a drain fails, cancel the upgrade of that worker node and uncordon it.
  3. Proceed with the remaining worker nodes as long as the unavailability rules allow.
  4. Inform the user about failed drains, the possible reasons (PDB, graceful termination period), and possible solutions (retry, manual drain, moving pods, the forced upgrade option).
Idea priority High
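The proposed upgrade flow could be sketched as follows (a simplified sequential model; drain, uncordon, and upgrade are hypothetical stand-ins for the cluster automation, and batching per the worker node unavailability rules is omitted for brevity):

```python
from typing import Callable, List

def workload_aware_upgrade(nodes: List[str],
                           drain: Callable[[str], bool],
                           uncordon: Callable[[str], None],
                           upgrade: Callable[[str], None]) -> List[str]:
    """Drain each node in turn; upgrade it on success, otherwise uncordon
    it and record the failure. Returns the nodes whose drain failed so the
    caller can inform the user (step 4)."""
    failed: List[str] = []
    for node in nodes:
        if drain(node):          # step 1: drain; a PDB may refuse evictions
            upgrade(node)        # drain succeeded -> safe to upgrade
        else:
            uncordon(node)       # step 2: cancel this node's upgrade
            failed.append(node)
        # step 3: continue with the next worker node
    return failed

# Usage with mocks: pretend a PDB blocks draining node "w2".
upgraded: List[str] = []
failed = workload_aware_upgrade(["w1", "w2", "w3"],
                                drain=lambda n: n != "w2",
                                uncordon=lambda n: None,
                                upgrade=upgraded.append)
print(failed)    # ['w2']
print(upgraded)  # ['w1', 'w3']
```

The key design point is that a failed drain skips only that node instead of aborting or forcing the eviction, leaving the cluster in a consistent, fully schedulable state.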