Remove descheduler/v1alpha1 type

.github/workflows/manifests.yaml
@@ -9,7 +9,7 @@ jobs:
       matrix:
         k8s-version: ["v1.30.0"]
         descheduler-version: ["v0.30.0"]
-        descheduler-api: ["v1alpha1", "v1alpha2"]
+        descheduler-api: ["v1alpha2"]
         manifest: ["deployment"]
     runs-on: ubuntu-latest
     steps:
@@ -1,786 +0,0 @@
[![Go Report Card](https://goreportcard.com/badge/sigs.k8s.io/descheduler)](https://goreportcard.com/report/sigs.k8s.io/descheduler)

<p align="center">
  <img src="assets/logo/descheduler-stacked-color.png" width="40%" align="center" alt="descheduler">
</p>

# Descheduler for Kubernetes

Scheduling in Kubernetes is the process of binding pending pods to nodes, and is performed by
a component of Kubernetes called kube-scheduler. The scheduler's decisions, whether or where a
pod can or can not be scheduled, are guided by its configurable policy, which comprises a set of
rules, called predicates and priorities. The scheduler's decisions are influenced by its view of
a Kubernetes cluster at the point in time when a new pod appears for scheduling.
As Kubernetes clusters are very dynamic and their state changes over time, there may be a desire
to move already running pods to some other nodes for various reasons:

* Some nodes are under or over utilized.
* The original scheduling decision does not hold true any more, as taints or labels are added to
or removed from nodes and pod/node affinity requirements are no longer satisfied.
* Some nodes failed and their pods moved to other nodes.
* New nodes are added to clusters.

Consequently, there might be several pods scheduled on less desired nodes in a cluster.
Descheduler, based on its policy, finds pods that can be moved and evicts them. Please
note that, in the current implementation, the descheduler does not schedule replacements for evicted pods
but relies on the default scheduler for that.

Table of Contents
=================

<!-- toc -->
- [Quick Start](#quick-start)
  - [Run As A Job](#run-as-a-job)
  - [Run As A CronJob](#run-as-a-cronjob)
  - [Run As A Deployment](#run-as-a-deployment)
  - [Install Using Helm](#install-using-helm)
  - [Install Using Kustomize](#install-using-kustomize)
- [User Guide](#user-guide)
- [Policy and Strategies](#policy-and-strategies)
  - [RemoveDuplicates](#removeduplicates)
  - [LowNodeUtilization](#lownodeutilization)
  - [HighNodeUtilization](#highnodeutilization)
  - [RemovePodsViolatingInterPodAntiAffinity](#removepodsviolatinginterpodantiaffinity)
  - [RemovePodsViolatingNodeAffinity](#removepodsviolatingnodeaffinity)
  - [RemovePodsViolatingNodeTaints](#removepodsviolatingnodetaints)
  - [RemovePodsViolatingTopologySpreadConstraint](#removepodsviolatingtopologyspreadconstraint)
  - [RemovePodsHavingTooManyRestarts](#removepodshavingtoomanyrestarts)
  - [PodLifeTime](#podlifetime)
  - [RemoveFailedPods](#removefailedpods)
- [Filter Pods](#filter-pods)
  - [Namespace filtering](#namespace-filtering)
  - [Priority filtering](#priority-filtering)
  - [Label filtering](#label-filtering)
  - [Node Fit filtering](#node-fit-filtering)
- [Pod Evictions](#pod-evictions)
  - [Pod Disruption Budget (PDB)](#pod-disruption-budget-pdb)
- [High Availability](#high-availability)
  - [Configure HA Mode](#configure-ha-mode)
- [Metrics](#metrics)
- [Compatibility Matrix](#compatibility-matrix)
- [Getting Involved and Contributing](#getting-involved-and-contributing)
  - [Communicating With Contributors](#communicating-with-contributors)
- [Roadmap](#roadmap)
- [Code of conduct](#code-of-conduct)
<!-- /toc -->

## Quick Start

The descheduler can be run as a `Job`, `CronJob`, or `Deployment` inside of a k8s cluster. It has the
advantage of being able to be run multiple times without needing user intervention.
The descheduler pod is run as a critical pod in the `kube-system` namespace to avoid
being evicted by itself or by the kubelet.

### Run As A Job

```
kubectl create -f kubernetes/base/rbac.yaml
kubectl create -f kubernetes/base/configmap.yaml
kubectl create -f kubernetes/job/job.yaml
```

### Run As A CronJob

```
kubectl create -f kubernetes/base/rbac.yaml
kubectl create -f kubernetes/base/configmap.yaml
kubectl create -f kubernetes/cronjob/cronjob.yaml
```

### Run As A Deployment

```
kubectl create -f kubernetes/base/rbac.yaml
kubectl create -f kubernetes/base/configmap.yaml
kubectl create -f kubernetes/deployment/deployment.yaml
```

### Install Using Helm

Starting with release v0.18.0 there is an official helm chart that can be used to install the
descheduler. See the [helm chart README](https://github.com/kubernetes-sigs/descheduler/blob/master/charts/descheduler/README.md) for detailed instructions.

The descheduler helm chart is also listed on the [artifact hub](https://artifacthub.io/packages/helm/descheduler/descheduler).

### Install Using Kustomize

You can use kustomize to install descheduler.
See [resources | Kustomize](https://kubectl.docs.kubernetes.io/references/kustomize/cmd/build/) for detailed instructions.

Run As A Job
```
kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/job?ref=v0.30.1' | kubectl apply -f -
```

Run As A CronJob
```
kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/cronjob?ref=v0.30.1' | kubectl apply -f -
```

Run As A Deployment
```
kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/deployment?ref=v0.30.1' | kubectl apply -f -
```

## User Guide

See the [user guide](docs/user-guide.md) in the `/docs` directory.

## Policy and Strategies

Descheduler's policy is configurable and includes strategies that can be enabled or disabled. By default, all strategies are enabled.

The policy includes a common configuration that applies to all the strategies:

| Name | Default Value | Description |
|------|---------------|-------------|
| `nodeSelector` | `nil` | limits the nodes which are processed |
| `evictLocalStoragePods` | `false` | allows eviction of pods with local storage |
| `evictDaemonSetPods` | `false` | allows eviction of pods associated with DaemonSet resources |
| `evictSystemCriticalPods` | `false` | [Warning: Will evict Kubernetes system pods] allows eviction of pods with any priority, including system pods like kube-dns |
| `ignorePvcPods` | `false` | sets whether PVC pods should be evicted or ignored |
| `maxNoOfPodsToEvictPerNode` | `nil` | maximum number of pods evicted from each node (summed through all strategies) |
| `maxNoOfPodsToEvictPerNamespace` | `nil` | maximum number of pods evicted from each namespace (summed through all strategies) |
| `evictFailedBarePods` | `false` | allows eviction of pods without owner references that are in the failed phase |

As part of the policy, the parameters associated with each strategy can be configured.
See each strategy for details on available parameters.

**Policy:**

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
nodeSelector: prod=dev
evictFailedBarePods: false
evictLocalStoragePods: true
evictDaemonSetPods: true
evictSystemCriticalPods: true
maxNoOfPodsToEvictPerNode: 40
ignorePvcPods: false
strategies:
  ...
```

The following diagram provides a visualization of most of the strategies to help
categorize how strategies fit together.

*(strategies diagram: image not preserved in this mirror)*

### RemoveDuplicates

This strategy makes sure that there is only one pod associated with a ReplicaSet (RS),
ReplicationController (RC), StatefulSet, or Job running on the same node. If there are more,
those duplicate pods are evicted for better spreading of pods in a cluster. This issue could happen
if some nodes went down for whatever reason, and pods on them were moved to other nodes, leading to
more than one pod associated with a RS or RC, for example, running on the same node. Once the failed nodes
are ready again, this strategy could be enabled to evict those duplicate pods.

It provides one optional parameter, `excludeOwnerKinds`, which is a list of OwnerRef `Kind`s. If a pod
has any of these `Kind`s listed as an `OwnerRef`, that pod will not be considered for eviction. Note that
pods created by Deployments are considered for eviction by this strategy. The `excludeOwnerKinds` parameter
should include `ReplicaSet` to have pods created by Deployments excluded.

**Parameters:**

|Name|Type|
|---|---|
|`excludeOwnerKinds`|list(string)|
|`namespaces`|(see [namespace filtering](#namespace-filtering))|
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))|
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))|
|`nodeFit`|bool (see [node fit filtering](#node-fit-filtering))|

**Example:**

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveDuplicates":
    enabled: true
    params:
      removeDuplicates:
        excludeOwnerKinds:
        - "ReplicaSet"
```

### LowNodeUtilization

This strategy finds nodes that are under utilized and evicts pods, if possible, from other nodes
in the hope that recreation of evicted pods will be scheduled on these underutilized nodes. The
parameters of this strategy are configured under `nodeResourceUtilizationThresholds`.

The under utilization of nodes is determined by a configurable threshold, `thresholds`. The
`thresholds` can be configured for cpu, memory, number of pods, and extended resources in terms of percentage. The percentage is
calculated as the current resources requested on the node vs [total allocatable](https://kubernetes.io/docs/concepts/architecture/nodes/#capacity);
for example, a node with 8 CPUs allocatable and 1.6 CPUs requested is at 20% cpu utilization.
For pods, this means the number of pods on the node as a fraction of the pod capacity set for that node.

If a node's usage is below the threshold for all resources (cpu, memory, number of pods, and extended resources), the node is considered underutilized.
Currently, pods' resource requests are considered when computing node resource utilization.

There is another configurable threshold, `targetThresholds`, that is used to compute those potential nodes
from where pods could be evicted. If a node's usage is above `targetThresholds` for any resource (cpu, memory, number of pods, or extended resources),
the node is considered over utilized. Any node between `thresholds` and `targetThresholds` is
considered appropriately utilized and is not considered for eviction. The `targetThresholds` threshold
can also be configured for cpu, memory, and number of pods in terms of percentage.

These thresholds, `thresholds` and `targetThresholds`, could be tuned as per your cluster requirements. Note that this
strategy evicts pods from overutilized nodes (those with usage above `targetThresholds`) to underutilized nodes
(those with usage below `thresholds`); it will abort if the number of underutilized nodes or overutilized nodes is zero.

Additionally, the strategy accepts a `useDeviationThresholds` parameter.
If that parameter is set to `true`, the thresholds are considered as percentage deviations from the mean resource usage:
`thresholds` will be deducted from the mean among all nodes and `targetThresholds` will be added to the mean.
A resource consumption above (resp. below) this window is considered as overutilization (resp. underutilization).

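To make the deviation mode concrete, here is a minimal sketch (the values are illustrative, not recommendations). Assuming a cluster-wide mean CPU request ratio of 50%, this configuration yields an acceptance window of [40%, 60%]: nodes below 40% are treated as underutilized, nodes above 60% as overutilized.

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        useDeviationThresholds: true
        # window = [mean - thresholds, mean + targetThresholds]
        # with a 50% mean: [50 - 10, 50 + 10] = [40%, 60%]
        thresholds:
          "cpu": 10
        targetThresholds:
          "cpu": 10
```
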
**NOTE:** Node resource consumption is determined by the requests and limits of pods, not actual usage.
This approach is chosen in order to maintain consistency with the kube-scheduler, which follows the same
design for scheduling pods onto nodes. This means that resource usage as reported by Kubelet (or commands
like `kubectl top`) may differ from the calculated consumption, due to these components reporting
actual usage metrics. Implementing metrics-based descheduling is currently a TODO for the project.

**Parameters:**

|Name|Type|
|---|---|
|`thresholds`|map(string:int)|
|`targetThresholds`|map(string:int)|
|`numberOfNodes`|int|
|`useDeviationThresholds`|bool|
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))|
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))|
|`nodeFit`|bool (see [node fit filtering](#node-fit-filtering))|
|`namespaces`|(see [namespace filtering](#namespace-filtering))|

**Example:**

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:
          "cpu": 20
          "memory": 20
          "pods": 20
        targetThresholds:
          "cpu": 50
          "memory": 50
          "pods": 50
```

Policy should pass the following validation checks:
* Three basic native types of resources are supported: `cpu`, `memory` and `pods`.
If any of these resource types is not specified, all its thresholds default to 100% to avoid nodes going from underutilized to overutilized.
* Extended resources are supported. For example, resource type `nvidia.com/gpu` is specified for GPU node utilization. Extended resources are optional,
and will not be used to compute a node's usage unless they are specified in `thresholds` and `targetThresholds` explicitly (see the sketch after this list).
* `thresholds` and `targetThresholds` can not be nil, and they must configure exactly the same types of resources.
* The valid range of the resource's percentage value is \[0, 100\].
* The percentage value of `thresholds` can not be greater than `targetThresholds` for the same resource.

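As an illustration of the extended-resources rule above, a minimal sketch (the numbers are arbitrary, and `nvidia.com/gpu` stands in for any extended resource); note that the same resource types appear in both `thresholds` and `targetThresholds`, as the validation requires:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:
          "cpu": 20
          "nvidia.com/gpu": 20
        targetThresholds:
          "cpu": 50
          "nvidia.com/gpu": 50
```
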
There is another parameter associated with the `LowNodeUtilization` strategy, called `numberOfNodes`.
This parameter can be configured to activate the strategy only when the number of under utilized nodes
is above the configured value. This could be helpful in large clusters where a few nodes could go
under utilized frequently or for a short period of time. By default, `numberOfNodes` is set to zero.

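For illustration, a minimal sketch (values are arbitrary) placing `numberOfNodes` alongside the thresholds; here the strategy only activates once more than 3 nodes are underutilized:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        # run only when more than 3 nodes are underutilized
        numberOfNodes: 3
        thresholds:
          "cpu": 20
        targetThresholds:
          "cpu": 50
```
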
### HighNodeUtilization

This strategy finds nodes that are under utilized and evicts pods from the nodes in the hope that these pods will be
scheduled compactly into fewer nodes. Used in conjunction with node auto-scaling, this strategy is intended to help
trigger down scaling of under utilized nodes.
This strategy **must** be used with the scheduler scoring strategy `MostAllocated`. The parameters of this strategy are
configured under `nodeResourceUtilizationThresholds`.

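For reference, a minimal sketch of a kube-scheduler configuration that enables the `MostAllocated` scoring strategy (this is the upstream Kubernetes scheduler API, not a descheduler setting; the exact `apiVersion`, e.g. `v1beta3` on older clusters, and the resource weights depend on your cluster version):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            type: MostAllocated
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
```
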
The under utilization of nodes is determined by a configurable threshold, `thresholds`. The
`thresholds` can be configured for cpu, memory, number of pods, and extended resources in terms of percentage. The percentage is
calculated as the current resources requested on the node vs [total allocatable](https://kubernetes.io/docs/concepts/architecture/nodes/#capacity).
For pods, this means the number of pods on the node as a fraction of the pod capacity set for that node.

If a node's usage is below the threshold for all resources (cpu, memory, number of pods, and extended resources), the node is considered underutilized.
Currently, pods' resource requests are considered when computing node resource utilization.
Any node above `thresholds` is considered appropriately utilized and is not considered for eviction.

The `thresholds` param could be tuned as per your cluster requirements. Note that this
strategy evicts pods from underutilized nodes (those with usage below `thresholds`)
so that they can be recreated on appropriately utilized nodes.
The strategy will abort if the number of underutilized nodes or appropriately utilized nodes is zero.

**NOTE:** Node resource consumption is determined by the requests and limits of pods, not actual usage.
This approach is chosen in order to maintain consistency with the kube-scheduler, which follows the same
design for scheduling pods onto nodes. This means that resource usage as reported by Kubelet (or commands
like `kubectl top`) may differ from the calculated consumption, due to these components reporting
actual usage metrics. Implementing metrics-based descheduling is currently a TODO for the project.

**Parameters:**

|Name|Type|
|---|---|
|`thresholds`|map(string:int)|
|`numberOfNodes`|int|
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))|
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))|
|`nodeFit`|bool (see [node fit filtering](#node-fit-filtering))|
|`namespaces`|(see [namespace filtering](#namespace-filtering))|

**Example:**

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "HighNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:
          "cpu": 20
          "memory": 20
          "pods": 20
```

Policy should pass the following validation checks:
* Three basic native types of resources are supported: `cpu`, `memory` and `pods`. If any of these resource types is not specified, all its thresholds default to 100%.
* Extended resources are supported. For example, resource type `nvidia.com/gpu` is specified for GPU node utilization. Extended resources are optional, and will not be used to compute a node's usage unless they are specified in `thresholds` explicitly.
* `thresholds` can not be nil.
* The valid range of the resource's percentage value is \[0, 100\].

There is another parameter associated with the `HighNodeUtilization` strategy, called `numberOfNodes`.
This parameter can be configured to activate the strategy only when the number of under utilized nodes
is above the configured value. This could be helpful in large clusters where a few nodes could go
under utilized frequently or for a short period of time. By default, `numberOfNodes` is set to zero.

### RemovePodsViolatingInterPodAntiAffinity

This strategy makes sure that pods violating interpod anti-affinity are removed from nodes. For example,
if there is podA on a node, and podB and podC (running on the same node) have anti-affinity rules which prohibit
them from running on the same node, then podA will be evicted from the node so that podB and podC can run. This
issue could happen when the anti-affinity rules for podB and podC are created while they are already running on
the node.

**Parameters:**

|Name|Type|
|---|---|
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))|
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))|
|`namespaces`|(see [namespace filtering](#namespace-filtering))|
|`labelSelector`|(see [label filtering](#label-filtering))|
|`nodeFit`|bool (see [node fit filtering](#node-fit-filtering))|

**Example:**

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingInterPodAntiAffinity":
    enabled: true
```

### RemovePodsViolatingNodeAffinity

This strategy makes sure all pods violating
[node affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity)
are eventually removed from nodes. Node affinity rules allow a pod to specify
the `requiredDuringSchedulingIgnoredDuringExecution` type, which tells the scheduler
to respect node affinity when scheduling the pod but tells the kubelet to ignore it
in case the node changes over time and no longer satisfies the affinity.
When enabled, the strategy serves as a temporary implementation
of `requiredDuringSchedulingRequiredDuringExecution`, evicting pods from nodes
that no longer satisfy their node affinity.

For example, there is podA scheduled on nodeA, which satisfied the node
affinity rule `requiredDuringSchedulingIgnoredDuringExecution` at the time
of scheduling. Over time, nodeA stops satisfying the rule. When the strategy gets
executed and there is another node available that satisfies the node affinity rule,
podA gets evicted from nodeA.

**Parameters:**

|Name|Type|
|---|---|
|`nodeAffinityType`|list(string)|
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))|
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))|
|`namespaces`|(see [namespace filtering](#namespace-filtering))|
|`labelSelector`|(see [label filtering](#label-filtering))|
|`nodeFit`|bool (see [node fit filtering](#node-fit-filtering))|

**Example:**

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingNodeAffinity":
    enabled: true
    params:
      nodeAffinityType:
      - "requiredDuringSchedulingIgnoredDuringExecution"
```

### RemovePodsViolatingNodeTaints

This strategy makes sure that pods violating NoSchedule taints on nodes are removed. For example, there is a
pod "podA" with a toleration for the taint ``key=value:NoSchedule``, scheduled and running on the tainted
node. If the node's taint is subsequently updated or removed, the taint is no longer tolerated by the pod,
and the pod will be evicted.

Node taints can be excluded from consideration by specifying a list of `excludedTaints`. If a node taint key **or**
key=value matches an `excludedTaints` entry, the taint will be ignored.

For example, the `excludedTaints` entry "dedicated" would match all taints with key "dedicated", regardless of value.
The `excludedTaints` entry "dedicated=special-user" would match taints with key "dedicated" and value "special-user".

**Parameters:**

|Name|Type|
|---|---|
|`excludedTaints`|list(string)|
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))|
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))|
|`namespaces`|(see [namespace filtering](#namespace-filtering))|
|`labelSelector`|(see [label filtering](#label-filtering))|
|`nodeFit`|bool (see [node fit filtering](#node-fit-filtering))|

**Example:**

````yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingNodeTaints":
    enabled: true
    params:
      excludedTaints:
      - dedicated=special-user # exclude taints with key "dedicated" and value "special-user"
      - reserved # exclude all taints with key "reserved"
````

### RemovePodsViolatingTopologySpreadConstraint

This strategy makes sure that pods violating [topology spread constraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
are evicted from nodes. Specifically, it tries to evict the minimum number of pods required to balance topology domains to within each constraint's `maxSkew`.
This strategy requires k8s version 1.18 at a minimum.

By default, this strategy only deals with hard constraints; setting the parameter `includeSoftConstraints` to `true` will
include soft constraints as well.

The strategy parameter `labelSelector` is not utilized when balancing topology domains and is only applied during eviction to determine if the pod can be evicted.

**Parameters:**

|Name|Type|
|---|---|
|`includeSoftConstraints`|bool|
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))|
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))|
|`namespaces`|(see [namespace filtering](#namespace-filtering))|
|`labelSelector`|(see [label filtering](#label-filtering))|
|`nodeFit`|bool (see [node fit filtering](#node-fit-filtering))|

**Example:**

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingTopologySpreadConstraint":
    enabled: true
    params:
      includeSoftConstraints: false
```

### RemovePodsHavingTooManyRestarts

This strategy makes sure that pods having too many restarts are removed from nodes. For example, a pod with an EBS/PD volume that
can't get the volume/disk attached to the instance should be re-scheduled to other nodes. Its parameters
include `podRestartThreshold`, which is the number of restarts (summed over all eligible containers) at which a pod
should be evicted, and `includingInitContainers`, which determines whether init container restarts should be factored
into that calculation.

**Parameters:**

|Name|Type|
|---|---|
|`podRestartThreshold`|int|
|`includingInitContainers`|bool|
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))|
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))|
|`namespaces`|(see [namespace filtering](#namespace-filtering))|
|`labelSelector`|(see [label filtering](#label-filtering))|
|`nodeFit`|bool (see [node fit filtering](#node-fit-filtering))|

**Example:**

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsHavingTooManyRestarts":
    enabled: true
    params:
      podsHavingTooManyRestarts:
        podRestartThreshold: 100
        includingInitContainers: true
```

### PodLifeTime

This strategy evicts pods that are older than `maxPodLifeTimeSeconds`.

You can also specify the `states` parameter to **only** evict pods matching the following conditions:
- [Pod Phase](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) status of: `Running`, `Pending`
- [Container State Waiting](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-state-waiting) condition of: `PodInitializing`, `ContainerCreating`

If a value for `states` or `podStatusPhases` is not specified,
pods in any state (even `Running`) are considered for eviction.

**Parameters:**

|Name|Type|Notes|
|---|---|---|
|`maxPodLifeTimeSeconds`|int||
|`podStatusPhases`|list(string)|Deprecated in v0.25+. Use `states` instead.|
|`states`|list(string)|Only supported in v0.25+|
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))||
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))||
|`namespaces`|(see [namespace filtering](#namespace-filtering))||
|`labelSelector`|(see [label filtering](#label-filtering))||

**Example:**

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 86400
        states:
        - "Pending"
        - "PodInitializing"
```

### RemoveFailedPods

This strategy evicts pods that are in the failed status phase.
You can provide an optional parameter to filter by failed `reasons`.
`reasons` can be expanded to include the reasons of InitContainers as well by setting the optional parameter `includingInitContainers` to `true`.
You can specify an optional parameter `minPodLifetimeSeconds` to evict only pods that are older than the specified number of seconds.
Lastly, you can specify the optional parameter `excludeOwnerKinds`; if a pod
has any of these `Kind`s listed as an `OwnerRef`, that pod will not be considered for eviction.

**Parameters:**

|Name|Type|
|---|---|
|`minPodLifetimeSeconds`|uint|
|`excludeOwnerKinds`|list(string)|
|`reasons`|list(string)|
|`includingInitContainers`|bool|
|`thresholdPriority`|int (see [priority filtering](#priority-filtering))|
|`thresholdPriorityClassName`|string (see [priority filtering](#priority-filtering))|
|`namespaces`|(see [namespace filtering](#namespace-filtering))|
|`labelSelector`|(see [label filtering](#label-filtering))|
|`nodeFit`|bool (see [node fit filtering](#node-fit-filtering))|

**Example:**

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveFailedPods":
    enabled: true
    params:
      failedPods:
        reasons:
        - "NodeAffinity"
        includingInitContainers: true
        excludeOwnerKinds:
        - "Job"
        minPodLifetimeSeconds: 3600
```

## Filter Pods

### Namespace filtering

The following strategies accept a `namespaces` parameter which allows you to specify a list of included or excluded namespaces:
* `PodLifeTime`
* `RemovePodsHavingTooManyRestarts`
* `RemovePodsViolatingNodeTaints`
* `RemovePodsViolatingNodeAffinity`
* `RemovePodsViolatingInterPodAntiAffinity`
* `RemoveDuplicates`
* `RemovePodsViolatingTopologySpreadConstraint`
* `RemoveFailedPods`
* `LowNodeUtilization` and `HighNodeUtilization` (only filtered right before eviction)

For example:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 86400
      namespaces:
        include:
        - "namespace1"
        - "namespace2"
```

In the example above, `PodLifeTime` gets executed only over `namespace1` and `namespace2`.
The same holds for the `exclude` field:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 86400
      namespaces:
        exclude:
        - "namespace1"
        - "namespace2"
```

The strategy gets executed over all namespaces but `namespace1` and `namespace2`.

It's not allowed to combine the `include` and `exclude` fields.

### Priority filtering

All strategies are able to configure a priority threshold; only pods under the threshold can be evicted. You can
specify this threshold by setting `thresholdPriorityClassName` (setting the threshold to the value of the given
priority class) or `thresholdPriority` (directly setting the threshold). By default, this threshold
is set to the value of the `system-cluster-critical` priority class.

Note: Setting `evictSystemCriticalPods` to true disables priority filtering entirely.

E.g.

Setting `thresholdPriority`
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 86400
      thresholdPriority: 10000
```

Setting `thresholdPriorityClassName`
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 86400
      thresholdPriorityClassName: "priorityclass1"
```

Note that you can't configure both `thresholdPriority` and `thresholdPriorityClassName`. If the given priority class
does not exist, the descheduler won't create it and will throw an error.

### Label filtering

The following strategies can configure a [standard kubernetes labelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#labelselector-v1-meta)
to filter pods by their labels:

* `PodLifeTime`
* `RemovePodsHavingTooManyRestarts`
* `RemovePodsViolatingNodeTaints`
* `RemovePodsViolatingNodeAffinity`
* `RemovePodsViolatingInterPodAntiAffinity`
* `RemovePodsViolatingTopologySpreadConstraint`
* `RemoveFailedPods`

This allows the strategies to run only among the pods the descheduler is interested in.

For example:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 86400
      labelSelector:
        matchLabels:
          component: redis
        matchExpressions:
          - {key: tier, operator: In, values: [cache]}
          - {key: environment, operator: NotIn, values: [dev]}
```

### Node Fit filtering

The following strategies accept a `nodeFit` boolean parameter which can optimize descheduling:
* `RemoveDuplicates`
* `LowNodeUtilization`
* `HighNodeUtilization`
* `RemovePodsViolatingInterPodAntiAffinity`
* `RemovePodsViolatingNodeAffinity`
* `RemovePodsViolatingNodeTaints`
* `RemovePodsViolatingTopologySpreadConstraint`
* `RemovePodsHavingTooManyRestarts`
* `RemoveFailedPods`

If set to `true`, the descheduler will consider whether or not the pods that meet eviction criteria will fit on other nodes before evicting them. If a pod cannot be rescheduled to another node, it will not be evicted. Currently, the following criteria are considered when setting `nodeFit` to `true`:
- A `nodeSelector` on the pod
- Any `tolerations` on the pod and any `taints` on the other nodes
- `nodeAffinity` on the pod
- Resource `requests` made by the pod and the resources available on other nodes
- Whether any of the other nodes are marked as `unschedulable`

E.g.

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeFit: true
      nodeResourceUtilizationThresholds:
        thresholds:
          "cpu": 20
          "memory": 20
          "pods": 20
        targetThresholds:
          "cpu": 50
          "memory": 50
          "pods": 50
```

Note that node fit filtering references the current pod spec, and not that of its owner.
Thus, if the pod is owned by a ReplicationController (and that ReplicationController was modified recently),
the pod may be running with an outdated spec, which the descheduler will reference when determining node fit.
This is expected behavior, as the descheduler is a "best-effort" mechanism.

Using Deployments instead of ReplicationControllers provides an automated rollout of pod spec changes, therefore ensuring that the descheduler has an up-to-date view of the cluster state.

@@ -6,6 +6,6 @@ go build -o "${OS_OUTPUT_BINPATH}/defaulter-gen" "k8s.io/code-generator/cmd/defa

 ${OS_OUTPUT_BINPATH}/defaulter-gen \
 	--go-header-file "hack/boilerplate/boilerplate.go.txt" \
-	--extra-peer-dirs "${PRJ_PREFIX}/pkg/apis/componentconfig/v1alpha1,${PRJ_PREFIX}/pkg/api/v1alpha1" \
+	--extra-peer-dirs "${PRJ_PREFIX}/pkg/apis/componentconfig/v1alpha1,${PRJ_PREFIX}/pkg/api/v1alpha2" \
 	--output-file zz_generated.defaults.go \
 	$(find_dirs_containing_comment_tags "+k8s:defaulter-gen=")
@@ -1,278 +0,0 @@
/*
Copyright 2023 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1alpha1

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/conversion"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer"
	"k8s.io/client-go/informers"
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/klog/v2"

	"sigs.k8s.io/descheduler/pkg/api"
	"sigs.k8s.io/descheduler/pkg/descheduler/evictions"
	podutil "sigs.k8s.io/descheduler/pkg/descheduler/pod"
	"sigs.k8s.io/descheduler/pkg/framework/pluginregistry"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/defaultevictor"
	frameworktypes "sigs.k8s.io/descheduler/pkg/framework/types"
)

var (
	// pluginArgConversionScheme is a scheme with internal and v1alpha2 registered,
	// used for defaulting/converting typed PluginConfig Args.
	// Access via getPluginArgConversionScheme()

	Scheme = runtime.NewScheme()
	Codecs = serializer.NewCodecFactory(Scheme, serializer.EnableStrict)
)

// evictorImpl implements the Evictor interface so plugins
// can evict a pod without importing a specific pod evictor
type evictorImpl struct {
	podEvictor    *evictions.PodEvictor
	evictorFilter frameworktypes.EvictorPlugin
}

var _ frameworktypes.Evictor = &evictorImpl{}

// Filter checks if a pod can be evicted
func (ei *evictorImpl) Filter(pod *v1.Pod) bool {
	return ei.evictorFilter.Filter(pod)
}

// PreEvictionFilter checks if pod can be evicted right before eviction
func (ei *evictorImpl) PreEvictionFilter(pod *v1.Pod) bool {
	return ei.evictorFilter.PreEvictionFilter(pod)
}

// Evict evicts a pod (no pre-check performed)
func (ei *evictorImpl) Evict(ctx context.Context, pod *v1.Pod, opts evictions.EvictOptions) error {
	return ei.podEvictor.EvictPod(ctx, pod, opts)
}

// handleImpl implements the framework handle which gets passed to plugins
type handleImpl struct {
	clientSet                 clientset.Interface
	getPodsAssignedToNodeFunc podutil.GetPodsAssignedToNodeFunc
	sharedInformerFactory     informers.SharedInformerFactory
	evictor                   *evictorImpl
}

var _ frameworktypes.Handle = &handleImpl{}

// ClientSet retrieves kube client set
func (hi *handleImpl) ClientSet() clientset.Interface {
	return hi.clientSet
}

// GetPodsAssignedToNodeFunc retrieves GetPodsAssignedToNodeFunc implementation
func (hi *handleImpl) GetPodsAssignedToNodeFunc() podutil.GetPodsAssignedToNodeFunc {
	return hi.getPodsAssignedToNodeFunc
}

// SharedInformerFactory retrieves shared informer factory
func (hi *handleImpl) SharedInformerFactory() informers.SharedInformerFactory {
	return hi.sharedInformerFactory
}

// Evictor retrieves evictor so plugins can filter and evict pods
func (hi *handleImpl) Evictor() frameworktypes.Evictor {
	return hi.evictor
}

func Convert_v1alpha1_DeschedulerPolicy_To_api_DeschedulerPolicy(in *DeschedulerPolicy, out *api.DeschedulerPolicy, s conversion.Scope) error {
	klog.V(1).Info("Warning: v1alpha1 API is deprecated and will be removed in a future release. Use v1alpha2 API instead.")

	err := V1alpha1ToInternal(in, pluginregistry.PluginRegistry, out, s)
	if err != nil {
		return err
	}
	return nil
}

func V1alpha1ToInternal(
	deschedulerPolicy *DeschedulerPolicy,
	registry pluginregistry.Registry,
	out *api.DeschedulerPolicy,
	s conversion.Scope,
) error {
	var evictLocalStoragePods bool
	if deschedulerPolicy.EvictLocalStoragePods != nil {
		evictLocalStoragePods = *deschedulerPolicy.EvictLocalStoragePods
	}

	evictBarePods := false
	if deschedulerPolicy.EvictFailedBarePods != nil {
		evictBarePods = *deschedulerPolicy.EvictFailedBarePods
		if evictBarePods {
			klog.V(1).Info("Warning: EvictFailedBarePods is set to True. This could cause eviction of pods without ownerReferences.")
		}
	}

	evictSystemCriticalPods := false
	if deschedulerPolicy.EvictSystemCriticalPods != nil {
		evictSystemCriticalPods = *deschedulerPolicy.EvictSystemCriticalPods
		if evictSystemCriticalPods {
			klog.V(1).Info("Warning: EvictSystemCriticalPods is set to True. This could cause eviction of Kubernetes system pods.")
		}
	}

	evictDaemonSetPods := false
	if deschedulerPolicy.EvictDaemonSetPods != nil {
		evictDaemonSetPods = *deschedulerPolicy.EvictDaemonSetPods
		if evictDaemonSetPods {
			klog.V(1).Info("Warning: EvictDaemonSetPods is set to True. This could cause eviction of Kubernetes DaemonSet pods.")
		}
	}

	ignorePvcPods := false
	if deschedulerPolicy.IgnorePVCPods != nil {
		ignorePvcPods = *deschedulerPolicy.IgnorePVCPods
	}

	var profiles []api.DeschedulerProfile

	// Build profiles
	for name, strategy := range deschedulerPolicy.Strategies {
		if _, ok := pluginregistry.PluginRegistry[string(name)]; ok {
			if strategy.Enabled {
				params := strategy.Params
				if params == nil {
					params = &StrategyParameters{}
				}

				nodeFit := false
				if name != "PodLifeTime" {
					nodeFit = params.NodeFit
				}

				if params.ThresholdPriority != nil && params.ThresholdPriorityClassName != "" {
					klog.ErrorS(fmt.Errorf("priority threshold misconfigured"), "only one of priorityThreshold fields can be set", "pluginName", name)
					return fmt.Errorf("priority threshold misconfigured for plugin %v", name)
				}

				var priorityThreshold *api.PriorityThreshold
				if strategy.Params != nil {
					priorityThreshold = &api.PriorityThreshold{
						Value: strategy.Params.ThresholdPriority,
						Name:  strategy.Params.ThresholdPriorityClassName,
					}
				}

				var pluginConfig *api.PluginConfig
				var err error
				if pcFnc, exists := StrategyParamsToPluginArgs[string(name)]; exists {
					pluginConfig, err = pcFnc(params)
					if err != nil {
						klog.ErrorS(err, "skipping strategy", "strategy", name)
						return fmt.Errorf("failed to get plugin config for strategy %v: %v", name, err)
					}
				} else {
					klog.ErrorS(fmt.Errorf("unknown strategy name"), "skipping strategy", "strategy", name)
					return fmt.Errorf("unknown strategy name: %v", name)
				}

				profile := api.DeschedulerProfile{
					Name: fmt.Sprintf("strategy-%v-profile", name),
					PluginConfigs: []api.PluginConfig{
						{
							Name: defaultevictor.PluginName,
							Args: &defaultevictor.DefaultEvictorArgs{
								EvictLocalStoragePods:   evictLocalStoragePods,
								EvictDaemonSetPods:      evictDaemonSetPods,
								EvictSystemCriticalPods: evictSystemCriticalPods,
								IgnorePvcPods:           ignorePvcPods,
								EvictFailedBarePods:     evictBarePods,
								NodeFit:                 nodeFit,
								PriorityThreshold:       priorityThreshold,
							},
						},
						*pluginConfig,
					},
					Plugins: api.Plugins{
						Filter: api.PluginSet{
							Enabled: []string{defaultevictor.PluginName},
						},
						PreEvictionFilter: api.PluginSet{
							Enabled: []string{defaultevictor.PluginName},
						},
					},
				}

				pluginArgs := registry[string(name)].PluginArgInstance
				pluginInstance, err := registry[string(name)].PluginBuilder(pluginArgs, &handleImpl{})
				if err != nil {
					klog.ErrorS(fmt.Errorf("could not build plugin"), "plugin build error", "plugin", name)
					return fmt.Errorf("could not build plugin: %v", name)
				}

				// pluginInstance can be of either plugin type, or both
				profilePlugins := profile.Plugins
				profile.Plugins = enableProfilePluginsByType(profilePlugins, pluginInstance, pluginConfig)
				profiles = append(profiles, profile)
			}
		} else {
			klog.ErrorS(fmt.Errorf("unknown strategy name"), "skipping strategy", "strategy", name)
			return fmt.Errorf("unknown strategy name: %v", name)
		}
	}

	out.Profiles = profiles
	out.NodeSelector = deschedulerPolicy.NodeSelector
	out.MaxNoOfPodsToEvictPerNamespace = deschedulerPolicy.MaxNoOfPodsToEvictPerNamespace
	out.MaxNoOfPodsToEvictPerNode = deschedulerPolicy.MaxNoOfPodsToEvictPerNode

	return nil
}

func enableProfilePluginsByType(profilePlugins api.Plugins, pluginInstance frameworktypes.Plugin, pluginConfig *api.PluginConfig) api.Plugins {
	profilePlugins = checkBalance(profilePlugins, pluginInstance, pluginConfig)
	profilePlugins = checkDeschedule(profilePlugins, pluginInstance, pluginConfig)
	return profilePlugins
}

func checkBalance(profilePlugins api.Plugins, pluginInstance frameworktypes.Plugin, pluginConfig *api.PluginConfig) api.Plugins {
	_, ok := pluginInstance.(frameworktypes.BalancePlugin)
	if ok {
		klog.V(3).Infof("converting Balance plugin: %s", pluginInstance.Name())
		profilePlugins.Balance.Enabled = []string{pluginConfig.Name}
	}
	return profilePlugins
}

func checkDeschedule(profilePlugins api.Plugins, pluginInstance frameworktypes.Plugin, pluginConfig *api.PluginConfig) api.Plugins {
	_, ok := pluginInstance.(frameworktypes.DeschedulePlugin)
	if ok {
		klog.V(3).Infof("converting Deschedule plugin: %s", pluginInstance.Name())
		profilePlugins.Deschedule.Enabled = []string{pluginConfig.Name}
	}
	return profilePlugins
}

// Register Conversions
func RegisterConversions(s *runtime.Scheme) error {
	if err := s.AddGeneratedConversionFunc((*DeschedulerPolicy)(nil), (*api.DeschedulerPolicy)(nil), func(a, b interface{}, scope conversion.Scope) error {
		return Convert_v1alpha1_DeschedulerPolicy_To_api_DeschedulerPolicy(a.(*DeschedulerPolicy), b.(*api.DeschedulerPolicy), scope)
	}); err != nil {
		return err
	}
	return nil
}
@@ -1,23 +0,0 @@
/*
Copyright 2017 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1alpha1

import "k8s.io/apimachinery/pkg/runtime"

func addDefaultingFuncs(scheme *runtime.Scheme) error {
	return RegisterDefaults(scheme)
}
@@ -1,23 +0,0 @@
/*
Copyright 2017 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// +k8s:deepcopy-gen=package,register
// +k8s:defaulter-gen=TypeMeta

// Package v1alpha1 is the v1alpha1 version of the descheduler API
// +groupName=descheduler

package v1alpha1 // import "sigs.k8s.io/descheduler/pkg/api/v1alpha1"
@@ -1,63 +0,0 @@
/*
Copyright 2017 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1alpha1

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

var (
	SchemeBuilder      = runtime.NewSchemeBuilder(addKnownTypes)
	localSchemeBuilder = &SchemeBuilder
	AddToScheme        = SchemeBuilder.AddToScheme
)

// GroupName is the group name used in this package
const (
	GroupName    = "descheduler"
	GroupVersion = "v1alpha1"
)

// SchemeGroupVersion is the group version used to register these objects
var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: GroupVersion}

// Kind takes an unqualified kind and returns a Group qualified GroupKind
func Kind(kind string) schema.GroupKind {
	return SchemeGroupVersion.WithKind(kind).GroupKind()
}

// Resource takes an unqualified resource and returns a Group qualified GroupResource
func Resource(resource string) schema.GroupResource {
	return SchemeGroupVersion.WithResource(resource).GroupResource()
}

func init() {
	// We only register manually written functions here. The registration of the
	// generated functions takes place in the generated files. The separation
	// makes the code compile even when the generated files are missing.
	localSchemeBuilder.Register(addKnownTypes, addDefaultingFuncs, RegisterConversions)
}

func addKnownTypes(scheme *runtime.Scheme) error {
	// TODO this will get cleaned up when the scheme types are fixed
	scheme.AddKnownTypes(SchemeGroupVersion,
		&DeschedulerPolicy{},
	)

	return nil
}
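A sketch of how this registration was typically consumed before the removal: build a scheme, add the group/version, and decode a policy file through a codec factory. The function name is illustrative, and the descheduler's actual policy loader also wires in conversions to the internal API:

```go
package main

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer"

	"sigs.k8s.io/descheduler/pkg/api/v1alpha1" // removed by this commit
)

// decodePolicy decodes raw policy bytes into the versioned type via the
// scheme populated by AddToScheme above.
func decodePolicy(data []byte) (runtime.Object, error) {
	scheme := runtime.NewScheme()
	if err := v1alpha1.AddToScheme(scheme); err != nil {
		return nil, err
	}
	codecs := serializer.NewCodecFactory(scheme)
	obj, _, err := codecs.UniversalDecoder(v1alpha1.SchemeGroupVersion).Decode(data, nil, nil)
	return obj, err
}
```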
@@ -1,256 +0,0 @@
/*
Copyright 2023 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1alpha1

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/klog/v2"
	utilptr "k8s.io/utils/ptr"
	"sigs.k8s.io/descheduler/pkg/api"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/nodeutilization"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/podlifetime"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removeduplicates"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removefailedpods"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodshavingtoomanyrestarts"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodsviolatinginterpodantiaffinity"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodsviolatingnodeaffinity"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodsviolatingnodetaints"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodsviolatingtopologyspreadconstraint"
)

// Once all strategies are migrated, the arguments are read from the configuration file
// without any wiring. The wiring is kept here so the descheduler can still use
// the v1alpha1 configuration during the strategy migration to plugins.

var StrategyParamsToPluginArgs = map[string]func(params *StrategyParameters) (*api.PluginConfig, error){
	"RemovePodsViolatingNodeTaints": func(params *StrategyParameters) (*api.PluginConfig, error) {
		args := &removepodsviolatingnodetaints.RemovePodsViolatingNodeTaintsArgs{
			Namespaces:              v1alpha1NamespacesToInternal(params.Namespaces),
			LabelSelector:           params.LabelSelector,
			IncludePreferNoSchedule: params.IncludePreferNoSchedule,
			ExcludedTaints:          params.ExcludedTaints,
		}
		if err := removepodsviolatingnodetaints.ValidateRemovePodsViolatingNodeTaintsArgs(args); err != nil {
			klog.ErrorS(err, "unable to validate plugin arguments", "pluginName", removepodsviolatingnodetaints.PluginName)
			return nil, fmt.Errorf("strategy %q param validation failed: %v", removepodsviolatingnodetaints.PluginName, err)
		}
		return &api.PluginConfig{
			Name: removepodsviolatingnodetaints.PluginName,
			Args: args,
		}, nil
	},
	"RemoveFailedPods": func(params *StrategyParameters) (*api.PluginConfig, error) {
		failedPodsParams := params.FailedPods
		if failedPodsParams == nil {
			failedPodsParams = &FailedPods{}
		}
		args := &removefailedpods.RemoveFailedPodsArgs{
			Namespaces:              v1alpha1NamespacesToInternal(params.Namespaces),
			LabelSelector:           params.LabelSelector,
			IncludingInitContainers: failedPodsParams.IncludingInitContainers,
			MinPodLifetimeSeconds:   failedPodsParams.MinPodLifetimeSeconds,
			ExcludeOwnerKinds:       failedPodsParams.ExcludeOwnerKinds,
			Reasons:                 failedPodsParams.Reasons,
		}
		if err := removefailedpods.ValidateRemoveFailedPodsArgs(args); err != nil {
			klog.ErrorS(err, "unable to validate plugin arguments", "pluginName", removefailedpods.PluginName)
			return nil, fmt.Errorf("strategy %q param validation failed: %v", removefailedpods.PluginName, err)
		}
		return &api.PluginConfig{
			Name: removefailedpods.PluginName,
			Args: args,
		}, nil
	},
	"RemovePodsViolatingNodeAffinity": func(params *StrategyParameters) (*api.PluginConfig, error) {
		args := &removepodsviolatingnodeaffinity.RemovePodsViolatingNodeAffinityArgs{
			Namespaces:       v1alpha1NamespacesToInternal(params.Namespaces),
			LabelSelector:    params.LabelSelector,
			NodeAffinityType: params.NodeAffinityType,
		}
		if err := removepodsviolatingnodeaffinity.ValidateRemovePodsViolatingNodeAffinityArgs(args); err != nil {
			klog.ErrorS(err, "unable to validate plugin arguments", "pluginName", removepodsviolatingnodeaffinity.PluginName)
			return nil, fmt.Errorf("strategy %q param validation failed: %v", removepodsviolatingnodeaffinity.PluginName, err)
		}
		return &api.PluginConfig{
			Name: removepodsviolatingnodeaffinity.PluginName,
			Args: args,
		}, nil
	},
	"RemovePodsViolatingInterPodAntiAffinity": func(params *StrategyParameters) (*api.PluginConfig, error) {
		args := &removepodsviolatinginterpodantiaffinity.RemovePodsViolatingInterPodAntiAffinityArgs{
			Namespaces:    v1alpha1NamespacesToInternal(params.Namespaces),
			LabelSelector: params.LabelSelector,
		}
		if err := removepodsviolatinginterpodantiaffinity.ValidateRemovePodsViolatingInterPodAntiAffinityArgs(args); err != nil {
			klog.ErrorS(err, "unable to validate plugin arguments", "pluginName", removepodsviolatinginterpodantiaffinity.PluginName)
			return nil, fmt.Errorf("strategy %q param validation failed: %v", removepodsviolatinginterpodantiaffinity.PluginName, err)
		}
		return &api.PluginConfig{
			Name: removepodsviolatinginterpodantiaffinity.PluginName,
			Args: args,
		}, nil
	},
	"RemovePodsHavingTooManyRestarts": func(params *StrategyParameters) (*api.PluginConfig, error) {
		tooManyRestartsParams := params.PodsHavingTooManyRestarts
		if tooManyRestartsParams == nil {
			tooManyRestartsParams = &PodsHavingTooManyRestarts{}
		}
		args := &removepodshavingtoomanyrestarts.RemovePodsHavingTooManyRestartsArgs{
			Namespaces:              v1alpha1NamespacesToInternal(params.Namespaces),
			LabelSelector:           params.LabelSelector,
			PodRestartThreshold:     tooManyRestartsParams.PodRestartThreshold,
			IncludingInitContainers: tooManyRestartsParams.IncludingInitContainers,
		}
		if err := removepodshavingtoomanyrestarts.ValidateRemovePodsHavingTooManyRestartsArgs(args); err != nil {
			klog.ErrorS(err, "unable to validate plugin arguments", "pluginName", removepodshavingtoomanyrestarts.PluginName)
			return nil, fmt.Errorf("strategy %q param validation failed: %v", removepodshavingtoomanyrestarts.PluginName, err)
		}
		return &api.PluginConfig{
			Name: removepodshavingtoomanyrestarts.PluginName,
			Args: args,
		}, nil
	},
	"PodLifeTime": func(params *StrategyParameters) (*api.PluginConfig, error) {
		podLifeTimeParams := params.PodLifeTime
		if podLifeTimeParams == nil {
			podLifeTimeParams = &PodLifeTime{}
		}

		var states []string
		if podLifeTimeParams.PodStatusPhases != nil {
			states = append(states, podLifeTimeParams.PodStatusPhases...)
		}
		if podLifeTimeParams.States != nil {
			states = append(states, podLifeTimeParams.States...)
		}

		args := &podlifetime.PodLifeTimeArgs{
			Namespaces:            v1alpha1NamespacesToInternal(params.Namespaces),
			LabelSelector:         params.LabelSelector,
			MaxPodLifeTimeSeconds: podLifeTimeParams.MaxPodLifeTimeSeconds,
			States:                states,
		}
		if err := podlifetime.ValidatePodLifeTimeArgs(args); err != nil {
			klog.ErrorS(err, "unable to validate plugin arguments", "pluginName", podlifetime.PluginName)
			return nil, fmt.Errorf("strategy %q param validation failed: %v", podlifetime.PluginName, err)
		}
		return &api.PluginConfig{
			Name: podlifetime.PluginName,
			Args: args,
		}, nil
	},
	"RemoveDuplicates": func(params *StrategyParameters) (*api.PluginConfig, error) {
		args := &removeduplicates.RemoveDuplicatesArgs{
			Namespaces: v1alpha1NamespacesToInternal(params.Namespaces),
		}
		if params.RemoveDuplicates != nil {
			args.ExcludeOwnerKinds = params.RemoveDuplicates.ExcludeOwnerKinds
		}
		if err := removeduplicates.ValidateRemoveDuplicatesArgs(args); err != nil {
			klog.ErrorS(err, "unable to validate plugin arguments", "pluginName", removeduplicates.PluginName)
			return nil, fmt.Errorf("strategy %q param validation failed: %v", removeduplicates.PluginName, err)
		}
		return &api.PluginConfig{
			Name: removeduplicates.PluginName,
			Args: args,
		}, nil
	},
	"RemovePodsViolatingTopologySpreadConstraint": func(params *StrategyParameters) (*api.PluginConfig, error) {
		constraints := []v1.UnsatisfiableConstraintAction{v1.DoNotSchedule}
		if params.IncludeSoftConstraints {
			constraints = append(constraints, v1.ScheduleAnyway)
		}
		args := &removepodsviolatingtopologyspreadconstraint.RemovePodsViolatingTopologySpreadConstraintArgs{
			Namespaces:             v1alpha1NamespacesToInternal(params.Namespaces),
			LabelSelector:          params.LabelSelector,
			Constraints:            constraints,
			TopologyBalanceNodeFit: utilptr.To(true),
		}
		if err := removepodsviolatingtopologyspreadconstraint.ValidateRemovePodsViolatingTopologySpreadConstraintArgs(args); err != nil {
			klog.ErrorS(err, "unable to validate plugin arguments", "pluginName", removepodsviolatingtopologyspreadconstraint.PluginName)
			return nil, fmt.Errorf("strategy %q param validation failed: %v", removepodsviolatingtopologyspreadconstraint.PluginName, err)
		}
		return &api.PluginConfig{
			Name: removepodsviolatingtopologyspreadconstraint.PluginName,
			Args: args,
		}, nil
	},
	"HighNodeUtilization": func(params *StrategyParameters) (*api.PluginConfig, error) {
		if params.NodeResourceUtilizationThresholds == nil {
			params.NodeResourceUtilizationThresholds = &NodeResourceUtilizationThresholds{}
		}
		args := &nodeutilization.HighNodeUtilizationArgs{
			EvictableNamespaces: v1alpha1NamespacesToInternal(params.Namespaces),
			Thresholds:          v1alpha1ThresholdToInternal(params.NodeResourceUtilizationThresholds.Thresholds),
			NumberOfNodes:       params.NodeResourceUtilizationThresholds.NumberOfNodes,
		}
		if err := nodeutilization.ValidateHighNodeUtilizationArgs(args); err != nil {
			klog.ErrorS(err, "unable to validate plugin arguments", "pluginName", nodeutilization.HighNodeUtilizationPluginName)
			return nil, fmt.Errorf("strategy %q param validation failed: %v", nodeutilization.HighNodeUtilizationPluginName, err)
		}
		return &api.PluginConfig{
			Name: nodeutilization.HighNodeUtilizationPluginName,
			Args: args,
		}, nil
	},
	"LowNodeUtilization": func(params *StrategyParameters) (*api.PluginConfig, error) {
		if params.NodeResourceUtilizationThresholds == nil {
			params.NodeResourceUtilizationThresholds = &NodeResourceUtilizationThresholds{}
		}
		args := &nodeutilization.LowNodeUtilizationArgs{
			EvictableNamespaces:    v1alpha1NamespacesToInternal(params.Namespaces),
			Thresholds:             v1alpha1ThresholdToInternal(params.NodeResourceUtilizationThresholds.Thresholds),
			TargetThresholds:       v1alpha1ThresholdToInternal(params.NodeResourceUtilizationThresholds.TargetThresholds),
			UseDeviationThresholds: params.NodeResourceUtilizationThresholds.UseDeviationThresholds,
			NumberOfNodes:          params.NodeResourceUtilizationThresholds.NumberOfNodes,
		}

		if err := nodeutilization.ValidateLowNodeUtilizationArgs(args); err != nil {
			klog.ErrorS(err, "unable to validate plugin arguments", "pluginName", nodeutilization.LowNodeUtilizationPluginName)
			return nil, fmt.Errorf("strategy %q param validation failed: %v", nodeutilization.LowNodeUtilizationPluginName, err)
		}
		return &api.PluginConfig{
			Name: nodeutilization.LowNodeUtilizationPluginName,
			Args: args,
		}, nil
	},
}

func v1alpha1NamespacesToInternal(namespaces *Namespaces) *api.Namespaces {
	internal := &api.Namespaces{}
	if namespaces != nil {
		if namespaces.Exclude != nil {
			internal.Exclude = namespaces.Exclude
		}
		if namespaces.Include != nil {
			internal.Include = namespaces.Include
		}
	} else {
		internal = nil
	}
	return internal
}

func v1alpha1ThresholdToInternal(thresholds ResourceThresholds) api.ResourceThresholds {
	internal := make(api.ResourceThresholds, len(thresholds))
	for k, v := range thresholds {
		internal[k] = api.Percentage(float64(v))
	}
	return internal
}
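A minimal sketch of how this translation table was consumed during policy conversion. It assumes the surrounding package's types (`StrategyList`, `DeschedulerStrategy`) and the `api` import; the function name and control flow are illustrative, not the descheduler's actual loader:

```go
// translateStrategies walks the enabled v1alpha1 strategies and builds the
// equivalent plugin configurations via StrategyParamsToPluginArgs.
func translateStrategies(strategies StrategyList) ([]api.PluginConfig, error) {
	var configs []api.PluginConfig
	for name, strategy := range strategies {
		if !strategy.Enabled {
			continue
		}
		translate, ok := StrategyParamsToPluginArgs[string(name)]
		if !ok {
			// no plugin equivalent for this strategy name
			continue
		}
		cfg, err := translate(strategy.Params)
		if err != nil {
			return nil, err
		}
		configs = append(configs, *cfg)
	}
	return configs, nil
}
```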
@@ -1,859 +0,0 @@
/*
Copyright 2022 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1alpha1

import (
	"fmt"
	"testing"

	"github.com/google/go-cmp/cmp"
	v1 "k8s.io/api/core/v1"
	utilptr "k8s.io/utils/ptr"
	"sigs.k8s.io/descheduler/pkg/api"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/nodeutilization"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/podlifetime"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removeduplicates"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removefailedpods"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodshavingtoomanyrestarts"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodsviolatinginterpodantiaffinity"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodsviolatingnodeaffinity"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodsviolatingnodetaints"
	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodsviolatingtopologyspreadconstraint"
)

func TestStrategyParamsToPluginArgsRemovePodsViolatingNodeTaints(t *testing.T) {
	strategyName := "RemovePodsViolatingNodeTaints"
	type testCase struct {
		description string
		params      *StrategyParameters
		err         error
		result      *api.PluginConfig
	}
	testCases := []testCase{
		{
			description: "wire in all valid parameters",
			params: &StrategyParameters{
				ExcludedTaints: []string{
					"dedicated=special-user",
					"reserved",
				},
				ThresholdPriority: utilptr.To[int32](100),
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
				},
			},
			err: nil,
			result: &api.PluginConfig{
				Name: removepodsviolatingnodetaints.PluginName,
				Args: &removepodsviolatingnodetaints.RemovePodsViolatingNodeTaintsArgs{
					Namespaces: &api.Namespaces{
						Exclude: []string{"test1"},
					},
					ExcludedTaints: []string{"dedicated=special-user", "reserved"},
				},
			},
		},
		{
			description: "invalid params namespaces",
			params: &StrategyParameters{
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
					Include: []string{"test2"},
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: only one of Include/Exclude namespaces can be set", strategyName),
			result: nil,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.description, func(t *testing.T) {
			var result *api.PluginConfig
			var err error
			if pcFnc, exists := StrategyParamsToPluginArgs[strategyName]; exists {
				result, err = pcFnc(tc.params)
			}
			if err != nil {
				if err.Error() != tc.err.Error() {
					t.Errorf("unexpected error: %s", err.Error())
				}
			}
			if err == nil {
				// compare results for deep equality via cmp.Diff
				diff := cmp.Diff(tc.result, result)
				if diff != "" {
					t.Errorf("test '%s' failed. Results are not deep equal. mismatch (-want +got):\n%s", tc.description, diff)
				}
			}
		})
	}
}

func TestStrategyParamsToPluginArgsRemoveFailedPods(t *testing.T) {
	strategyName := "RemoveFailedPods"
	type testCase struct {
		description string
		params      *StrategyParameters
		err         error
		result      *api.PluginConfig
	}
	testCases := []testCase{
		{
			description: "wire in all valid parameters",
			params: &StrategyParameters{
				FailedPods: &FailedPods{
					MinPodLifetimeSeconds:   utilptr.To[uint](3600),
					ExcludeOwnerKinds:       []string{"Job"},
					Reasons:                 []string{"NodeAffinity"},
					IncludingInitContainers: true,
				},
				ThresholdPriority: utilptr.To[int32](100),
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
				},
			},
			err: nil,
			result: &api.PluginConfig{
				Name: removefailedpods.PluginName,
				Args: &removefailedpods.RemoveFailedPodsArgs{
					ExcludeOwnerKinds:       []string{"Job"},
					MinPodLifetimeSeconds:   utilptr.To[uint](3600),
					Reasons:                 []string{"NodeAffinity"},
					IncludingInitContainers: true,
					Namespaces: &api.Namespaces{
						Exclude: []string{"test1"},
					},
				},
			},
		},
		{
			description: "invalid params namespaces",
			params: &StrategyParameters{
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
					Include: []string{"test2"},
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: only one of Include/Exclude namespaces can be set", strategyName),
			result: nil,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.description, func(t *testing.T) {
			var result *api.PluginConfig
			var err error
			if pcFnc, exists := StrategyParamsToPluginArgs[strategyName]; exists {
				result, err = pcFnc(tc.params)
			}
			if err != nil {
				if err.Error() != tc.err.Error() {
					t.Errorf("unexpected error: %s", err.Error())
				}
			}
			if err == nil {
				// compare results for deep equality via cmp.Diff
				diff := cmp.Diff(tc.result, result)
				if diff != "" {
					t.Errorf("test '%s' failed. Results are not deep equal. mismatch (-want +got):\n%s", tc.description, diff)
				}
			}
		})
	}
}

func TestStrategyParamsToPluginArgsRemovePodsViolatingNodeAffinity(t *testing.T) {
	strategyName := "RemovePodsViolatingNodeAffinity"
	type testCase struct {
		description string
		params      *StrategyParameters
		err         error
		result      *api.PluginConfig
	}
	testCases := []testCase{
		{
			description: "wire in all valid parameters",
			params: &StrategyParameters{
				NodeAffinityType:  []string{"requiredDuringSchedulingIgnoredDuringExecution"},
				ThresholdPriority: utilptr.To[int32](100),
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
				},
			},
			err: nil,
			result: &api.PluginConfig{
				Name: removepodsviolatingnodeaffinity.PluginName,
				Args: &removepodsviolatingnodeaffinity.RemovePodsViolatingNodeAffinityArgs{
					NodeAffinityType: []string{"requiredDuringSchedulingIgnoredDuringExecution"},
					Namespaces: &api.Namespaces{
						Exclude: []string{"test1"},
					},
				},
			},
		},
		{
			description: "invalid params, not setting nodeaffinity type",
			params:      &StrategyParameters{},
			err:         fmt.Errorf("strategy \"%s\" param validation failed: nodeAffinityType needs to be set", strategyName),
			result:      nil,
		},
		{
			description: "invalid params namespaces",
			params: &StrategyParameters{
				NodeAffinityType: []string{"requiredDuringSchedulingIgnoredDuringExecution"},
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
					Include: []string{"test2"},
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: only one of Include/Exclude namespaces can be set", strategyName),
			result: nil,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.description, func(t *testing.T) {
			var result *api.PluginConfig
			var err error
			if pcFnc, exists := StrategyParamsToPluginArgs[strategyName]; exists {
				result, err = pcFnc(tc.params)
			}
			if err != nil {
				if err.Error() != tc.err.Error() {
					t.Errorf("unexpected error: %s", err.Error())
				}
			}
			if err == nil {
				// compare results for deep equality via cmp.Diff
				diff := cmp.Diff(tc.result, result)
				if diff != "" {
					t.Errorf("test '%s' failed. Results are not deep equal. mismatch (-want +got):\n%s", tc.description, diff)
				}
			}
		})
	}
}

func TestStrategyParamsToPluginArgsRemovePodsViolatingInterPodAntiAffinity(t *testing.T) {
	strategyName := "RemovePodsViolatingInterPodAntiAffinity"
	type testCase struct {
		description string
		params      *StrategyParameters
		err         error
		result      *api.PluginConfig
	}
	testCases := []testCase{
		{
			description: "wire in all valid parameters",
			params: &StrategyParameters{
				ThresholdPriority: utilptr.To[int32](100),
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
				},
			},
			err: nil,
			result: &api.PluginConfig{
				Name: removepodsviolatinginterpodantiaffinity.PluginName,
				Args: &removepodsviolatinginterpodantiaffinity.RemovePodsViolatingInterPodAntiAffinityArgs{
					Namespaces: &api.Namespaces{
						Exclude: []string{"test1"},
					},
				},
			},
		},
		{
			description: "invalid params namespaces",
			params: &StrategyParameters{
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
					Include: []string{"test2"},
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: only one of Include/Exclude namespaces can be set", strategyName),
			result: nil,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.description, func(t *testing.T) {
			var result *api.PluginConfig
			var err error
			if pcFnc, exists := StrategyParamsToPluginArgs[strategyName]; exists {
				result, err = pcFnc(tc.params)
			}
			if err != nil {
				if err.Error() != tc.err.Error() {
					t.Errorf("unexpected error: %s", err.Error())
				}
			}
			if err == nil {
				// compare results for deep equality via cmp.Diff
				diff := cmp.Diff(tc.result, result)
				if diff != "" {
					t.Errorf("test '%s' failed. Results are not deep equal. mismatch (-want +got):\n%s", tc.description, diff)
				}
			}
		})
	}
}

func TestStrategyParamsToPluginArgsRemovePodsHavingTooManyRestarts(t *testing.T) {
	strategyName := "RemovePodsHavingTooManyRestarts"
	type testCase struct {
		description string
		params      *StrategyParameters
		err         error
		result      *api.PluginConfig
	}
	testCases := []testCase{
		{
			description: "wire in all valid parameters",
			params: &StrategyParameters{
				PodsHavingTooManyRestarts: &PodsHavingTooManyRestarts{
					PodRestartThreshold:     100,
					IncludingInitContainers: true,
				},
				ThresholdPriority: utilptr.To[int32](100),
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
				},
			},
			err: nil,
			result: &api.PluginConfig{
				Name: removepodshavingtoomanyrestarts.PluginName,
				Args: &removepodshavingtoomanyrestarts.RemovePodsHavingTooManyRestartsArgs{
					PodRestartThreshold:     100,
					IncludingInitContainers: true,
					Namespaces: &api.Namespaces{
						Exclude: []string{"test1"},
					},
				},
			},
		},
		{
			description: "invalid params namespaces",
			params: &StrategyParameters{
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
					Include: []string{"test2"},
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: only one of Include/Exclude namespaces can be set", strategyName),
			result: nil,
		},
		{
			description: "invalid params restart threshold",
			params: &StrategyParameters{
				PodsHavingTooManyRestarts: &PodsHavingTooManyRestarts{
					PodRestartThreshold: 0,
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: invalid PodsHavingTooManyRestarts threshold", strategyName),
			result: nil,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.description, func(t *testing.T) {
			var result *api.PluginConfig
			var err error
			if pcFnc, exists := StrategyParamsToPluginArgs[strategyName]; exists {
				result, err = pcFnc(tc.params)
			}
			if err != nil {
				if err.Error() != tc.err.Error() {
					t.Errorf("unexpected error: %s", err.Error())
				}
			}
			if err == nil {
				// compare results for deep equality via cmp.Diff
				diff := cmp.Diff(tc.result, result)
				if diff != "" {
					t.Errorf("test '%s' failed. Results are not deep equal. mismatch (-want +got):\n%s", tc.description, diff)
				}
			}
		})
	}
}

func TestStrategyParamsToPluginArgsPodLifeTime(t *testing.T) {
	strategyName := "PodLifeTime"
	type testCase struct {
		description string
		params      *StrategyParameters
		err         error
		result      *api.PluginConfig
	}
	testCases := []testCase{
		{
			description: "wire in all valid parameters",
			params: &StrategyParameters{
				PodLifeTime: &PodLifeTime{
					MaxPodLifeTimeSeconds: utilptr.To[uint](86400),
					States: []string{
						"Pending",
						"PodInitializing",
					},
				},
				ThresholdPriority: utilptr.To[int32](100),
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
				},
			},
			err: nil,
			result: &api.PluginConfig{
				Name: podlifetime.PluginName,
				Args: &podlifetime.PodLifeTimeArgs{
					MaxPodLifeTimeSeconds: utilptr.To[uint](86400),
					States: []string{
						"Pending",
						"PodInitializing",
					},
					Namespaces: &api.Namespaces{
						Exclude: []string{"test1"},
					},
				},
			},
		},
		{
			description: "invalid params namespaces",
			params: &StrategyParameters{
				PodLifeTime: &PodLifeTime{
					MaxPodLifeTimeSeconds: utilptr.To[uint](86400),
				},
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
					Include: []string{"test2"},
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: only one of Include/Exclude namespaces can be set", strategyName),
			result: nil,
		},
		{
			description: "invalid params MaxPodLifeTimeSeconds not set",
			params: &StrategyParameters{
				PodLifeTime: &PodLifeTime{},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: MaxPodLifeTimeSeconds not set", strategyName),
			result: nil,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.description, func(t *testing.T) {
			var result *api.PluginConfig
			var err error
			if pcFnc, exists := StrategyParamsToPluginArgs[strategyName]; exists {
				result, err = pcFnc(tc.params)
			}
			if err != nil {
				if err.Error() != tc.err.Error() {
					t.Errorf("unexpected error: %s", err.Error())
				}
			}
			if err == nil {
				// compare results for deep equality via cmp.Diff
				diff := cmp.Diff(tc.result, result)
				if diff != "" {
					t.Errorf("test '%s' failed. Results are not deep equal. mismatch (-want +got):\n%s", tc.description, diff)
				}
			}
		})
	}
}

func TestStrategyParamsToPluginArgsRemoveDuplicates(t *testing.T) {
	strategyName := "RemoveDuplicates"
	type testCase struct {
		description string
		params      *StrategyParameters
		err         error
		result      *api.PluginConfig
	}
	testCases := []testCase{
		{
			description: "wire in all valid parameters",
			params: &StrategyParameters{
				RemoveDuplicates: &RemoveDuplicates{
					ExcludeOwnerKinds: []string{"ReplicaSet"},
				},
				ThresholdPriority: utilptr.To[int32](100),
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
				},
			},
			err: nil,
			result: &api.PluginConfig{
				Name: removeduplicates.PluginName,
				Args: &removeduplicates.RemoveDuplicatesArgs{
					ExcludeOwnerKinds: []string{"ReplicaSet"},
					Namespaces: &api.Namespaces{
						Exclude: []string{"test1"},
					},
				},
			},
		},
		{
			description: "invalid params namespaces",
			params: &StrategyParameters{
				PodLifeTime: &PodLifeTime{
					MaxPodLifeTimeSeconds: utilptr.To[uint](86400),
				},
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
					Include: []string{"test2"},
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: only one of Include/Exclude namespaces can be set", strategyName),
			result: nil,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.description, func(t *testing.T) {
			var result *api.PluginConfig
			var err error
			if pcFnc, exists := StrategyParamsToPluginArgs[strategyName]; exists {
				result, err = pcFnc(tc.params)
			}
			if err != nil {
				if err.Error() != tc.err.Error() {
					t.Errorf("unexpected error: %s", err.Error())
				}
			}
			if err == nil {
				// compare results for deep equality via cmp.Diff
				diff := cmp.Diff(tc.result, result)
				if diff != "" {
					t.Errorf("test '%s' failed. Results are not deep equal. mismatch (-want +got):\n%s", tc.description, diff)
				}
			}
		})
	}
}

func TestStrategyParamsToPluginArgsRemovePodsViolatingTopologySpreadConstraint(t *testing.T) {
	strategyName := "RemovePodsViolatingTopologySpreadConstraint"
	type testCase struct {
		description string
		params      *StrategyParameters
		err         error
		result      *api.PluginConfig
	}
	testCases := []testCase{
		{
			description: "wire in all valid parameters",
			params: &StrategyParameters{
				IncludeSoftConstraints: true,
				ThresholdPriority:      utilptr.To[int32](100),
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
				},
			},
			err: nil,
			result: &api.PluginConfig{
				Name: removepodsviolatingtopologyspreadconstraint.PluginName,
				Args: &removepodsviolatingtopologyspreadconstraint.RemovePodsViolatingTopologySpreadConstraintArgs{
					Constraints:            []v1.UnsatisfiableConstraintAction{v1.DoNotSchedule, v1.ScheduleAnyway},
					TopologyBalanceNodeFit: utilptr.To(true),
					Namespaces: &api.Namespaces{
						Exclude: []string{"test1"},
					},
				},
			},
		},
		{
			description: "params without soft constraints",
			params: &StrategyParameters{
				IncludeSoftConstraints: false,
			},
			err: nil,
			result: &api.PluginConfig{
				Name: removepodsviolatingtopologyspreadconstraint.PluginName,
				Args: &removepodsviolatingtopologyspreadconstraint.RemovePodsViolatingTopologySpreadConstraintArgs{
					Constraints:            []v1.UnsatisfiableConstraintAction{v1.DoNotSchedule},
					TopologyBalanceNodeFit: utilptr.To(true),
				},
			},
		},
		{
			description: "invalid params namespaces",
			params: &StrategyParameters{
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
					Include: []string{"test2"},
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: only one of Include/Exclude namespaces can be set", strategyName),
			result: nil,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.description, func(t *testing.T) {
			var result *api.PluginConfig
			var err error
			if pcFnc, exists := StrategyParamsToPluginArgs[strategyName]; exists {
				result, err = pcFnc(tc.params)
			}
			if err != nil {
				if err.Error() != tc.err.Error() {
					t.Errorf("unexpected error: %s", err.Error())
				}
			}
			if err == nil {
				// compare results for deep equality via cmp.Diff
				diff := cmp.Diff(tc.result, result)
				if diff != "" {
					t.Errorf("test '%s' failed. Results are not deep equal. mismatch (-want +got):\n%s", tc.description, diff)
				}
			}
		})
	}
}

func TestStrategyParamsToPluginArgsHighNodeUtilization(t *testing.T) {
	strategyName := "HighNodeUtilization"
	type testCase struct {
		description string
		params      *StrategyParameters
		err         error
		result      *api.PluginConfig
	}
	testCases := []testCase{
		{
			description: "wire in all valid parameters",
			params: &StrategyParameters{
				NodeResourceUtilizationThresholds: &NodeResourceUtilizationThresholds{
					NumberOfNodes: 3,
					Thresholds: ResourceThresholds{
						"cpu":    Percentage(20),
						"memory": Percentage(20),
						"pods":   Percentage(20),
					},
				},
				ThresholdPriority: utilptr.To[int32](100),
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
				},
			},
			err: nil,
			result: &api.PluginConfig{
				Name: nodeutilization.HighNodeUtilizationPluginName,
				Args: &nodeutilization.HighNodeUtilizationArgs{
					Thresholds: api.ResourceThresholds{
						"cpu":    api.Percentage(20),
						"memory": api.Percentage(20),
						"pods":   api.Percentage(20),
					},
					NumberOfNodes: 3,
					EvictableNamespaces: &api.Namespaces{
						Exclude: []string{"test1"},
					},
				},
			},
		},
		{
			description: "invalid params namespaces",
			params: &StrategyParameters{
				NodeResourceUtilizationThresholds: &NodeResourceUtilizationThresholds{
					NumberOfNodes: 3,
					Thresholds: ResourceThresholds{
						"cpu":    Percentage(20),
						"memory": Percentage(20),
						"pods":   Percentage(20),
					},
				},
				Namespaces: &Namespaces{
					Include: []string{"test2"},
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: only Exclude namespaces can be set, inclusion is not supported", strategyName),
			result: nil,
		},
		{
			description: "invalid params nil ResourceThresholds",
			params: &StrategyParameters{
				NodeResourceUtilizationThresholds: &NodeResourceUtilizationThresholds{
					NumberOfNodes: 3,
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: no resource threshold is configured", strategyName),
			result: nil,
		},
		{
			description: "invalid params out of bounds threshold",
			params: &StrategyParameters{
				NodeResourceUtilizationThresholds: &NodeResourceUtilizationThresholds{
					NumberOfNodes: 3,
					Thresholds: ResourceThresholds{
						"cpu": Percentage(150),
					},
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: cpu threshold not in [0, 100] range", strategyName),
			result: nil,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.description, func(t *testing.T) {
			var result *api.PluginConfig
			var err error
			if pcFnc, exists := StrategyParamsToPluginArgs[strategyName]; exists {
				result, err = pcFnc(tc.params)
			}
			if err != nil {
				if err.Error() != tc.err.Error() {
					t.Errorf("unexpected error: %s", err.Error())
				}
			}
			if err == nil {
				// compare results for deep equality via cmp.Diff
				diff := cmp.Diff(tc.result, result)
				if diff != "" {
					t.Errorf("test '%s' failed. Results are not deep equal. mismatch (-want +got):\n%s", tc.description, diff)
				}
			}
		})
	}
}

func TestStrategyParamsToPluginArgsLowNodeUtilization(t *testing.T) {
	strategyName := "LowNodeUtilization"
	type testCase struct {
		description string
		params      *StrategyParameters
		err         error
		result      *api.PluginConfig
	}
	testCases := []testCase{
		{
			description: "wire in all valid parameters",
			params: &StrategyParameters{
				NodeResourceUtilizationThresholds: &NodeResourceUtilizationThresholds{
					NumberOfNodes: 3,
					Thresholds: ResourceThresholds{
						"cpu":    Percentage(20),
						"memory": Percentage(20),
						"pods":   Percentage(20),
					},
					TargetThresholds: ResourceThresholds{
						"cpu":    Percentage(50),
						"memory": Percentage(50),
						"pods":   Percentage(50),
					},
					UseDeviationThresholds: true,
				},
				ThresholdPriority: utilptr.To[int32](100),
				Namespaces: &Namespaces{
					Exclude: []string{"test1"},
				},
			},
			err: nil,
			result: &api.PluginConfig{
				Name: nodeutilization.LowNodeUtilizationPluginName,
				Args: &nodeutilization.LowNodeUtilizationArgs{
					Thresholds: api.ResourceThresholds{
						"cpu":    api.Percentage(20),
						"memory": api.Percentage(20),
						"pods":   api.Percentage(20),
					},
					TargetThresholds: api.ResourceThresholds{
						"cpu":    api.Percentage(50),
						"memory": api.Percentage(50),
						"pods":   api.Percentage(50),
					},
					UseDeviationThresholds: true,
					NumberOfNodes:          3,
					EvictableNamespaces: &api.Namespaces{
						Exclude: []string{"test1"},
					},
				},
			},
		},
		{
			description: "invalid params namespaces",
			params: &StrategyParameters{
				NodeResourceUtilizationThresholds: &NodeResourceUtilizationThresholds{
					NumberOfNodes: 3,
					Thresholds: ResourceThresholds{
						"cpu":    Percentage(20),
						"memory": Percentage(20),
						"pods":   Percentage(20),
					},
					TargetThresholds: ResourceThresholds{
						"cpu":    Percentage(50),
						"memory": Percentage(50),
						"pods":   Percentage(50),
					},
					UseDeviationThresholds: true,
				},
				Namespaces: &Namespaces{
					Include: []string{"test2"},
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: only Exclude namespaces can be set, inclusion is not supported", strategyName),
			result: nil,
		},
		{
			description: "invalid params nil ResourceThresholds",
			params: &StrategyParameters{
				NodeResourceUtilizationThresholds: &NodeResourceUtilizationThresholds{
					NumberOfNodes: 3,
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: thresholds config is not valid: no resource threshold is configured", strategyName),
			result: nil,
		},
		{
			description: "invalid params out of bounds threshold",
			params: &StrategyParameters{
				NodeResourceUtilizationThresholds: &NodeResourceUtilizationThresholds{
					NumberOfNodes: 3,
					Thresholds: ResourceThresholds{
						"cpu": Percentage(150),
					},
				},
			},
			err:    fmt.Errorf("strategy \"%s\" param validation failed: thresholds config is not valid: cpu threshold not in [0, 100] range", strategyName),
			result: nil,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.description, func(t *testing.T) {
			var result *api.PluginConfig
			var err error
			if pcFnc, exists := StrategyParamsToPluginArgs[strategyName]; exists {
				result, err = pcFnc(tc.params)
			}
			if err != nil {
				if err.Error() != tc.err.Error() {
					t.Errorf("unexpected error: %s", err.Error())
				}
			}
			if err == nil {
				// compare results for deep equality via cmp.Diff
				diff := cmp.Diff(tc.result, result)
				if diff != "" {
					t.Errorf("test '%s' failed. Results are not deep equal. mismatch (-want +got):\n%s", tc.description, diff)
				}
			}
		})
	}
}
@@ -1,136 +0,0 @@
/*
Copyright 2017 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1alpha1

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

type DeschedulerPolicy struct {
	metav1.TypeMeta `json:",inline"`

	// Strategies
	Strategies StrategyList `json:"strategies,omitempty"`

	// NodeSelector for a set of nodes to operate over
	NodeSelector *string `json:"nodeSelector,omitempty"`

	// EvictFailedBarePods allows pods without ownerReferences and in failed phase to be evicted.
	EvictFailedBarePods *bool `json:"evictFailedBarePods,omitempty"`

	// EvictLocalStoragePods allows pods using local storage to be evicted.
	EvictLocalStoragePods *bool `json:"evictLocalStoragePods,omitempty"`

	// EvictSystemCriticalPods allows eviction of pods of any priority (including Kubernetes system pods)
	EvictSystemCriticalPods *bool `json:"evictSystemCriticalPods,omitempty"`

	// EvictDaemonSetPods allows pods owned by a DaemonSet resource to be evicted.
	EvictDaemonSetPods *bool `json:"evictDaemonSetPods,omitempty"`

	// IgnorePVCPods prevents pods with PVCs from being evicted.
	IgnorePVCPods *bool `json:"ignorePvcPods,omitempty"`

	// MaxNoOfPodsToEvictPerNode restricts the maximum number of pods to be evicted per node.
	MaxNoOfPodsToEvictPerNode *uint `json:"maxNoOfPodsToEvictPerNode,omitempty"`

	// MaxNoOfPodsToEvictPerNamespace restricts the maximum number of pods to be evicted per namespace.
	MaxNoOfPodsToEvictPerNamespace *uint `json:"maxNoOfPodsToEvictPerNamespace,omitempty"`

	// MaxNoOfPodsToEvictTotal restricts the maximum number of pods to be evicted in total.
	MaxNoOfPodsToEvictTotal *uint `json:"maxNoOfPodsToEvictTotal,omitempty"`
}

type (
	StrategyName string
	StrategyList map[StrategyName]DeschedulerStrategy
)

type DeschedulerStrategy struct {
	// Enabled or disabled
	Enabled bool `json:"enabled,omitempty"`

	// Weight
	Weight int `json:"weight,omitempty"`

	// Strategy parameters
	Params *StrategyParameters `json:"params,omitempty"`
}

// Namespaces carries a list of included/excluded namespaces
// for which a given strategy is applicable.
type Namespaces struct {
	Include []string `json:"include"`
	Exclude []string `json:"exclude"`
}

// Besides Namespaces, ThresholdPriority, and ThresholdPriorityClassName, only one of its members may be specified
type StrategyParameters struct {
	NodeResourceUtilizationThresholds *NodeResourceUtilizationThresholds `json:"nodeResourceUtilizationThresholds,omitempty"`
	NodeAffinityType                  []string                           `json:"nodeAffinityType,omitempty"`
	PodsHavingTooManyRestarts         *PodsHavingTooManyRestarts         `json:"podsHavingTooManyRestarts,omitempty"`
	PodLifeTime                       *PodLifeTime                       `json:"podLifeTime,omitempty"`
	RemoveDuplicates                  *RemoveDuplicates                  `json:"removeDuplicates,omitempty"`
	FailedPods                        *FailedPods                        `json:"failedPods,omitempty"`
	IncludeSoftConstraints            bool                               `json:"includeSoftConstraints"`
	Namespaces                        *Namespaces                        `json:"namespaces"`
	ThresholdPriority                 *int32                             `json:"thresholdPriority"`
	ThresholdPriorityClassName        string                             `json:"thresholdPriorityClassName"`
	LabelSelector                     *metav1.LabelSelector              `json:"labelSelector"`
	NodeFit                           bool                               `json:"nodeFit"`
	IncludePreferNoSchedule           bool                               `json:"includePreferNoSchedule"`
	ExcludedTaints                    []string                           `json:"excludedTaints,omitempty"`
	IncludedTaints                    []string                           `json:"includedTaints,omitempty"`
}

type (
	Percentage         float64
	ResourceThresholds map[v1.ResourceName]Percentage
)

type NodeResourceUtilizationThresholds struct {
	UseDeviationThresholds bool               `json:"useDeviationThresholds,omitempty"`
	Thresholds             ResourceThresholds `json:"thresholds,omitempty"`
	TargetThresholds       ResourceThresholds `json:"targetThresholds,omitempty"`
	NumberOfNodes          int                `json:"numberOfNodes,omitempty"`
}

type PodsHavingTooManyRestarts struct {
	PodRestartThreshold     int32 `json:"podRestartThreshold,omitempty"`
	IncludingInitContainers bool  `json:"includingInitContainers,omitempty"`
}

type RemoveDuplicates struct {
	ExcludeOwnerKinds []string `json:"excludeOwnerKinds,omitempty"`
}

type PodLifeTime struct {
	MaxPodLifeTimeSeconds *uint    `json:"maxPodLifeTimeSeconds,omitempty"`
	States                []string `json:"states,omitempty"`

	// Deprecated: Use States instead.
	PodStatusPhases []string `json:"podStatusPhases,omitempty"`
}

type FailedPods struct {
	ExcludeOwnerKinds       []string `json:"excludeOwnerKinds,omitempty"`
	MinPodLifetimeSeconds   *uint    `json:"minPodLifetimeSeconds,omitempty"`
	Reasons                 []string `json:"reasons,omitempty"`
	IncludingInitContainers bool     `json:"includingInitContainers,omitempty"`
}
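For reference, a hedged sketch (field values illustrative, assuming the package's `utilptr` alias for `k8s.io/utils/ptr`) of a policy as these removed types represented it; the retained v1alpha2 API expresses the same intent through profiles, plugin lists, and per-plugin args instead of a strategy map:

```go
// exampleV1alpha1Policy builds a minimal policy the way the removed
// v1alpha1 types did: one enabled strategy plus a top-level eviction knob.
func exampleV1alpha1Policy() *DeschedulerPolicy {
	return &DeschedulerPolicy{
		Strategies: StrategyList{
			"PodLifeTime": DeschedulerStrategy{
				Enabled: true,
				Params: &StrategyParameters{
					PodLifeTime: &PodLifeTime{
						MaxPodLifeTimeSeconds: utilptr.To[uint](86400),
					},
				},
			},
		},
		EvictLocalStoragePods: utilptr.To(true),
	}
}
```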
395
pkg/api/v1alpha1/zz_generated.deepcopy.go
generated
@@ -1,395 +0,0 @@
-//go:build !ignore_autogenerated
-// +build !ignore_autogenerated
-
-/*
-Copyright 2024 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-// Code generated by deepcopy-gen. DO NOT EDIT.
-
-package v1alpha1
-
-import (
-	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-	runtime "k8s.io/apimachinery/pkg/runtime"
-)
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *DeschedulerPolicy) DeepCopyInto(out *DeschedulerPolicy) {
-	*out = *in
-	out.TypeMeta = in.TypeMeta
-	if in.Strategies != nil {
-		in, out := &in.Strategies, &out.Strategies
-		*out = make(StrategyList, len(*in))
-		for key, val := range *in {
-			(*out)[key] = *val.DeepCopy()
-		}
-	}
-	if in.NodeSelector != nil {
-		in, out := &in.NodeSelector, &out.NodeSelector
-		*out = new(string)
-		**out = **in
-	}
-	if in.EvictFailedBarePods != nil {
-		in, out := &in.EvictFailedBarePods, &out.EvictFailedBarePods
-		*out = new(bool)
-		**out = **in
-	}
-	if in.EvictLocalStoragePods != nil {
-		in, out := &in.EvictLocalStoragePods, &out.EvictLocalStoragePods
-		*out = new(bool)
-		**out = **in
-	}
-	if in.EvictSystemCriticalPods != nil {
-		in, out := &in.EvictSystemCriticalPods, &out.EvictSystemCriticalPods
-		*out = new(bool)
-		**out = **in
-	}
-	if in.EvictDaemonSetPods != nil {
-		in, out := &in.EvictDaemonSetPods, &out.EvictDaemonSetPods
-		*out = new(bool)
-		**out = **in
-	}
-	if in.IgnorePVCPods != nil {
-		in, out := &in.IgnorePVCPods, &out.IgnorePVCPods
-		*out = new(bool)
-		**out = **in
-	}
-	if in.MaxNoOfPodsToEvictPerNode != nil {
-		in, out := &in.MaxNoOfPodsToEvictPerNode, &out.MaxNoOfPodsToEvictPerNode
-		*out = new(uint)
-		**out = **in
-	}
-	if in.MaxNoOfPodsToEvictPerNamespace != nil {
-		in, out := &in.MaxNoOfPodsToEvictPerNamespace, &out.MaxNoOfPodsToEvictPerNamespace
-		*out = new(uint)
-		**out = **in
-	}
-	if in.MaxNoOfPodsToEvictTotal != nil {
-		in, out := &in.MaxNoOfPodsToEvictTotal, &out.MaxNoOfPodsToEvictTotal
-		*out = new(uint)
-		**out = **in
-	}
-	return
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DeschedulerPolicy.
-func (in *DeschedulerPolicy) DeepCopy() *DeschedulerPolicy {
-	if in == nil {
-		return nil
-	}
-	out := new(DeschedulerPolicy)
-	in.DeepCopyInto(out)
-	return out
-}
-
-// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
-func (in *DeschedulerPolicy) DeepCopyObject() runtime.Object {
-	if c := in.DeepCopy(); c != nil {
-		return c
-	}
-	return nil
-}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *DeschedulerStrategy) DeepCopyInto(out *DeschedulerStrategy) {
-	*out = *in
-	if in.Params != nil {
-		in, out := &in.Params, &out.Params
-		*out = new(StrategyParameters)
-		(*in).DeepCopyInto(*out)
-	}
-	return
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DeschedulerStrategy.
-func (in *DeschedulerStrategy) DeepCopy() *DeschedulerStrategy {
-	if in == nil {
-		return nil
-	}
-	out := new(DeschedulerStrategy)
-	in.DeepCopyInto(out)
-	return out
-}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *FailedPods) DeepCopyInto(out *FailedPods) {
-	*out = *in
-	if in.ExcludeOwnerKinds != nil {
-		in, out := &in.ExcludeOwnerKinds, &out.ExcludeOwnerKinds
-		*out = make([]string, len(*in))
-		copy(*out, *in)
-	}
-	if in.MinPodLifetimeSeconds != nil {
-		in, out := &in.MinPodLifetimeSeconds, &out.MinPodLifetimeSeconds
-		*out = new(uint)
-		**out = **in
-	}
-	if in.Reasons != nil {
-		in, out := &in.Reasons, &out.Reasons
-		*out = make([]string, len(*in))
-		copy(*out, *in)
-	}
-	return
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FailedPods.
-func (in *FailedPods) DeepCopy() *FailedPods {
-	if in == nil {
-		return nil
-	}
-	out := new(FailedPods)
-	in.DeepCopyInto(out)
-	return out
-}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *Namespaces) DeepCopyInto(out *Namespaces) {
-	*out = *in
-	if in.Include != nil {
-		in, out := &in.Include, &out.Include
-		*out = make([]string, len(*in))
-		copy(*out, *in)
-	}
-	if in.Exclude != nil {
-		in, out := &in.Exclude, &out.Exclude
-		*out = make([]string, len(*in))
-		copy(*out, *in)
-	}
-	return
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Namespaces.
-func (in *Namespaces) DeepCopy() *Namespaces {
-	if in == nil {
-		return nil
-	}
-	out := new(Namespaces)
-	in.DeepCopyInto(out)
-	return out
-}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *NodeResourceUtilizationThresholds) DeepCopyInto(out *NodeResourceUtilizationThresholds) {
-	*out = *in
-	if in.Thresholds != nil {
-		in, out := &in.Thresholds, &out.Thresholds
-		*out = make(ResourceThresholds, len(*in))
-		for key, val := range *in {
-			(*out)[key] = val
-		}
-	}
-	if in.TargetThresholds != nil {
-		in, out := &in.TargetThresholds, &out.TargetThresholds
-		*out = make(ResourceThresholds, len(*in))
-		for key, val := range *in {
-			(*out)[key] = val
-		}
-	}
-	return
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeResourceUtilizationThresholds.
-func (in *NodeResourceUtilizationThresholds) DeepCopy() *NodeResourceUtilizationThresholds {
-	if in == nil {
-		return nil
-	}
-	out := new(NodeResourceUtilizationThresholds)
-	in.DeepCopyInto(out)
-	return out
-}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *PodLifeTime) DeepCopyInto(out *PodLifeTime) {
-	*out = *in
-	if in.MaxPodLifeTimeSeconds != nil {
-		in, out := &in.MaxPodLifeTimeSeconds, &out.MaxPodLifeTimeSeconds
-		*out = new(uint)
-		**out = **in
-	}
-	if in.States != nil {
-		in, out := &in.States, &out.States
-		*out = make([]string, len(*in))
-		copy(*out, *in)
-	}
-	if in.PodStatusPhases != nil {
-		in, out := &in.PodStatusPhases, &out.PodStatusPhases
-		*out = make([]string, len(*in))
-		copy(*out, *in)
-	}
-	return
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodLifeTime.
-func (in *PodLifeTime) DeepCopy() *PodLifeTime {
-	if in == nil {
-		return nil
-	}
-	out := new(PodLifeTime)
-	in.DeepCopyInto(out)
-	return out
-}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *PodsHavingTooManyRestarts) DeepCopyInto(out *PodsHavingTooManyRestarts) {
-	*out = *in
-	return
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodsHavingTooManyRestarts.
-func (in *PodsHavingTooManyRestarts) DeepCopy() *PodsHavingTooManyRestarts {
-	if in == nil {
-		return nil
-	}
-	out := new(PodsHavingTooManyRestarts)
-	in.DeepCopyInto(out)
-	return out
-}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *RemoveDuplicates) DeepCopyInto(out *RemoveDuplicates) {
-	*out = *in
-	if in.ExcludeOwnerKinds != nil {
-		in, out := &in.ExcludeOwnerKinds, &out.ExcludeOwnerKinds
-		*out = make([]string, len(*in))
-		copy(*out, *in)
-	}
-	return
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RemoveDuplicates.
-func (in *RemoveDuplicates) DeepCopy() *RemoveDuplicates {
-	if in == nil {
-		return nil
-	}
-	out := new(RemoveDuplicates)
-	in.DeepCopyInto(out)
-	return out
-}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in ResourceThresholds) DeepCopyInto(out *ResourceThresholds) {
-	{
-		in := &in
-		*out = make(ResourceThresholds, len(*in))
-		for key, val := range *in {
-			(*out)[key] = val
-		}
-		return
-	}
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceThresholds.
-func (in ResourceThresholds) DeepCopy() ResourceThresholds {
-	if in == nil {
-		return nil
-	}
-	out := new(ResourceThresholds)
-	in.DeepCopyInto(out)
-	return *out
-}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in StrategyList) DeepCopyInto(out *StrategyList) {
-	{
-		in := &in
-		*out = make(StrategyList, len(*in))
-		for key, val := range *in {
-			(*out)[key] = *val.DeepCopy()
-		}
-		return
-	}
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StrategyList.
-func (in StrategyList) DeepCopy() StrategyList {
-	if in == nil {
-		return nil
-	}
-	out := new(StrategyList)
-	in.DeepCopyInto(out)
-	return *out
-}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *StrategyParameters) DeepCopyInto(out *StrategyParameters) {
-	*out = *in
-	if in.NodeResourceUtilizationThresholds != nil {
-		in, out := &in.NodeResourceUtilizationThresholds, &out.NodeResourceUtilizationThresholds
-		*out = new(NodeResourceUtilizationThresholds)
-		(*in).DeepCopyInto(*out)
-	}
-	if in.NodeAffinityType != nil {
-		in, out := &in.NodeAffinityType, &out.NodeAffinityType
-		*out = make([]string, len(*in))
-		copy(*out, *in)
-	}
-	if in.PodsHavingTooManyRestarts != nil {
-		in, out := &in.PodsHavingTooManyRestarts, &out.PodsHavingTooManyRestarts
-		*out = new(PodsHavingTooManyRestarts)
-		**out = **in
-	}
-	if in.PodLifeTime != nil {
-		in, out := &in.PodLifeTime, &out.PodLifeTime
-		*out = new(PodLifeTime)
-		(*in).DeepCopyInto(*out)
-	}
-	if in.RemoveDuplicates != nil {
-		in, out := &in.RemoveDuplicates, &out.RemoveDuplicates
-		*out = new(RemoveDuplicates)
-		(*in).DeepCopyInto(*out)
-	}
-	if in.FailedPods != nil {
-		in, out := &in.FailedPods, &out.FailedPods
-		*out = new(FailedPods)
-		(*in).DeepCopyInto(*out)
-	}
-	if in.Namespaces != nil {
-		in, out := &in.Namespaces, &out.Namespaces
-		*out = new(Namespaces)
-		(*in).DeepCopyInto(*out)
-	}
-	if in.ThresholdPriority != nil {
-		in, out := &in.ThresholdPriority, &out.ThresholdPriority
-		*out = new(int32)
-		**out = **in
-	}
-	if in.LabelSelector != nil {
-		in, out := &in.LabelSelector, &out.LabelSelector
-		*out = new(v1.LabelSelector)
-		(*in).DeepCopyInto(*out)
-	}
-	if in.ExcludedTaints != nil {
-		in, out := &in.ExcludedTaints, &out.ExcludedTaints
-		*out = make([]string, len(*in))
-		copy(*out, *in)
-	}
-	if in.IncludedTaints != nil {
-		in, out := &in.IncludedTaints, &out.IncludedTaints
-		*out = make([]string, len(*in))
-		copy(*out, *in)
-	}
-	return
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StrategyParameters.
-func (in *StrategyParameters) DeepCopy() *StrategyParameters {
-	if in == nil {
-		return nil
-	}
-	out := new(StrategyParameters)
-	in.DeepCopyInto(out)
-	return out
-}
pkg/api/v1alpha1/zz_generated.defaults.go (generated; 33 lines deleted)
@@ -1,33 +0,0 @@
-//go:build !ignore_autogenerated
-// +build !ignore_autogenerated
-
-/*
-Copyright 2024 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-// Code generated by defaulter-gen. DO NOT EDIT.
-
-package v1alpha1
-
-import (
-	runtime "k8s.io/apimachinery/pkg/runtime"
-)
-
-// RegisterDefaults adds defaulters functions to the given scheme.
-// Public to allow building arbitrary schemes.
-// All generated defaulters are covering - they call all nested defaulters.
-func RegisterDefaults(scheme *runtime.Scheme) error {
-	return nil
-}
@@ -28,7 +28,6 @@ import (
 	"k8s.io/klog/v2"
 
 	"sigs.k8s.io/descheduler/pkg/api"
-	"sigs.k8s.io/descheduler/pkg/api/v1alpha1"
 	"sigs.k8s.io/descheduler/pkg/api/v1alpha2"
 	"sigs.k8s.io/descheduler/pkg/descheduler/scheme"
 	"sigs.k8s.io/descheduler/pkg/framework/pluginregistry"
@@ -54,7 +53,7 @@ func decode(policyConfigFile string, policy []byte, client clientset.Interface,
 	internalPolicy := &api.DeschedulerPolicy{}
 	var err error
 
-	decoder := scheme.Codecs.UniversalDecoder(v1alpha1.SchemeGroupVersion, v1alpha2.SchemeGroupVersion, api.SchemeGroupVersion)
+	decoder := scheme.Codecs.UniversalDecoder(v1alpha2.SchemeGroupVersion, api.SchemeGroupVersion)
 	if err := runtime.DecodeInto(decoder, policy, internalPolicy); err != nil {
 		return nil, fmt.Errorf("failed decoding descheduler's policy config %q: %v", policyConfigFile, err)
 	}
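
Note: with v1alpha1 dropped from the decoder above, a policy file must now use the v1alpha2 layout to decode at all. Purely as an illustrative sketch (the profile name below is arbitrary, not part of this commit), a minimal policy the new decoder accepts looks like the following, mirroring the v1alpha2 e2e policies migrated later in this diff:

    apiVersion: "descheduler/v1alpha2"
    kind: "DeschedulerPolicy"
    profiles:
      - name: ProfileName        # arbitrary profile name (assumption, for illustration)
        pluginConfig:
          - name: "PodLifeTime"
            args:
              maxPodLifeTimeSeconds: 5
        plugins:
          deschedule:
            enabled:
              - "PodLifeTime"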
@@ -21,24 +21,15 @@ import (
 	"testing"
 
 	"github.com/google/go-cmp/cmp"
-	v1 "k8s.io/api/core/v1"
 	"k8s.io/apimachinery/pkg/conversion"
 	fakeclientset "k8s.io/client-go/kubernetes/fake"
 	utilptr "k8s.io/utils/ptr"
 	"sigs.k8s.io/descheduler/pkg/api"
-	"sigs.k8s.io/descheduler/pkg/api/v1alpha1"
 	"sigs.k8s.io/descheduler/pkg/framework/pluginregistry"
 	"sigs.k8s.io/descheduler/pkg/framework/plugins/defaultevictor"
-	"sigs.k8s.io/descheduler/pkg/framework/plugins/nodeutilization"
-	"sigs.k8s.io/descheduler/pkg/framework/plugins/podlifetime"
-	"sigs.k8s.io/descheduler/pkg/framework/plugins/removeduplicates"
 	"sigs.k8s.io/descheduler/pkg/framework/plugins/removefailedpods"
 	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodshavingtoomanyrestarts"
-	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodsviolatinginterpodantiaffinity"
-	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodsviolatingnodeaffinity"
-	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodsviolatingnodetaints"
 	"sigs.k8s.io/descheduler/pkg/framework/plugins/removepodsviolatingtopologyspreadconstraint"
-	"sigs.k8s.io/descheduler/pkg/utils"
 )
 
 // scope contains information about an ongoing conversion.
@@ -57,841 +48,10 @@ func (s scope) Meta() *conversion.Meta {
 	return s.meta
 }
 
-func TestV1alpha1ToV1alpha2(t *testing.T) {
-	SetupPlugins()
-	defaultEvictorPluginConfig := api.PluginConfig{
-		Name: defaultevictor.PluginName,
-		Args: &defaultevictor.DefaultEvictorArgs{
-			PriorityThreshold: &api.PriorityThreshold{
-				Value: nil,
-			},
-		},
-	}
-	type testCase struct {
-		description string
-		policy      *v1alpha1.DeschedulerPolicy
-		err         error
-		result      *api.DeschedulerPolicy
-	}
-	testCases := []testCase{
-		{
-			description: "RemoveFailedPods enabled, LowNodeUtilization disabled strategies to profile",
-			policy: &v1alpha1.DeschedulerPolicy{
-				Strategies: v1alpha1.StrategyList{
-					removeduplicates.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							Namespaces: &v1alpha1.Namespaces{
-								Exclude: []string{
-									"test2",
-								},
-							},
-						},
-					},
-					nodeutilization.LowNodeUtilizationPluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: false,
-						Params: &v1alpha1.StrategyParameters{
-							NodeResourceUtilizationThresholds: &v1alpha1.NodeResourceUtilizationThresholds{
-								Thresholds: v1alpha1.ResourceThresholds{
-									"cpu":    v1alpha1.Percentage(20),
-									"memory": v1alpha1.Percentage(20),
-									"pods":   v1alpha1.Percentage(20),
-								},
-								TargetThresholds: v1alpha1.ResourceThresholds{
-									"cpu":    v1alpha1.Percentage(50),
-									"memory": v1alpha1.Percentage(50),
-									"pods":   v1alpha1.Percentage(50),
-								},
-							},
-						},
-					},
-				},
-			},
-			result: &api.DeschedulerPolicy{
-				Profiles: []api.DeschedulerProfile{
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removeduplicates.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removeduplicates.PluginName,
-								Args: &removeduplicates.RemoveDuplicatesArgs{
-									Namespaces: &api.Namespaces{
-										Exclude: []string{
-											"test2",
-										},
-									},
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Balance: api.PluginSet{
-								Enabled: []string{removeduplicates.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					// Disabled strategy is not generating internal plugin since it is not being used internally currently
-					// {
-					//	Name: nodeutilization.LowNodeUtilizationPluginName,
-					//	PluginConfigs: []api.PluginConfig{
-					//		{
-					//			Name: nodeutilization.LowNodeUtilizationPluginName,
-					//			Args: &nodeutilization.LowNodeUtilizationArgs{
-					//				Thresholds: api.ResourceThresholds{
-					//					"cpu": api.Percentage(20),
-					// [...]
-					// [...]
-					// },
-				},
-			},
-		},
-		{
-			description: "convert global policy fields to defaultevictor",
-			policy: &v1alpha1.DeschedulerPolicy{
-				EvictFailedBarePods:     utilptr.To(true),
-				EvictLocalStoragePods:   utilptr.To(true),
-				EvictSystemCriticalPods: utilptr.To(true),
-				EvictDaemonSetPods:      utilptr.To(true),
-				IgnorePVCPods:           utilptr.To(true),
-				Strategies: v1alpha1.StrategyList{
-					removeduplicates.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							Namespaces: &v1alpha1.Namespaces{
-								Exclude: []string{
-									"test2",
-								},
-							},
-						},
-					},
-				},
-			},
-			result: &api.DeschedulerPolicy{
-				Profiles: []api.DeschedulerProfile{
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removeduplicates.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							{
-								Name: defaultevictor.PluginName,
-								Args: &defaultevictor.DefaultEvictorArgs{
-									EvictLocalStoragePods:   true,
-									EvictDaemonSetPods:      true,
-									EvictSystemCriticalPods: true,
-									IgnorePvcPods:           true,
-									EvictFailedBarePods:     true,
-									PriorityThreshold: &api.PriorityThreshold{
-										Value: nil,
-									},
-								},
-							},
-							{
-								Name: removeduplicates.PluginName,
-								Args: &removeduplicates.RemoveDuplicatesArgs{
-									Namespaces: &api.Namespaces{
-										Exclude: []string{
-											"test2",
-										},
-									},
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Balance: api.PluginSet{
-								Enabled: []string{removeduplicates.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-				},
-			},
-		},
-		{
-			description: "convert all strategies",
-			policy: &v1alpha1.DeschedulerPolicy{
-				Strategies: v1alpha1.StrategyList{
-					removeduplicates.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params:  &v1alpha1.StrategyParameters{},
-					},
-					nodeutilization.LowNodeUtilizationPluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							NodeResourceUtilizationThresholds: &v1alpha1.NodeResourceUtilizationThresholds{
-								Thresholds: v1alpha1.ResourceThresholds{
-									"cpu":    v1alpha1.Percentage(20),
-									"memory": v1alpha1.Percentage(20),
-									"pods":   v1alpha1.Percentage(20),
-								},
-								TargetThresholds: v1alpha1.ResourceThresholds{
-									"cpu":    v1alpha1.Percentage(50),
-									"memory": v1alpha1.Percentage(50),
-									"pods":   v1alpha1.Percentage(50),
-								},
-							},
-						},
-					},
-					nodeutilization.HighNodeUtilizationPluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							NodeResourceUtilizationThresholds: &v1alpha1.NodeResourceUtilizationThresholds{
-								Thresholds: v1alpha1.ResourceThresholds{
-									"cpu":    v1alpha1.Percentage(20),
-									"memory": v1alpha1.Percentage(20),
-									"pods":   v1alpha1.Percentage(20),
-								},
-							},
-						},
-					},
-					removefailedpods.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params:  &v1alpha1.StrategyParameters{},
-					},
-					removepodshavingtoomanyrestarts.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							PodsHavingTooManyRestarts: &v1alpha1.PodsHavingTooManyRestarts{
-								PodRestartThreshold: 100,
-							},
-						},
-					},
-					removepodsviolatinginterpodantiaffinity.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params:  &v1alpha1.StrategyParameters{},
-					},
-					removepodsviolatingnodeaffinity.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							NodeAffinityType: []string{"requiredDuringSchedulingIgnoredDuringExecution"},
-						},
-					},
-					removepodsviolatingnodetaints.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params:  &v1alpha1.StrategyParameters{},
-					},
-					removepodsviolatingtopologyspreadconstraint.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params:  &v1alpha1.StrategyParameters{},
-					},
-				},
-			},
-			result: &api.DeschedulerPolicy{
-				Profiles: []api.DeschedulerProfile{
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", nodeutilization.HighNodeUtilizationPluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: nodeutilization.HighNodeUtilizationPluginName,
-								Args: &nodeutilization.HighNodeUtilizationArgs{
-									Thresholds: api.ResourceThresholds{
-										"cpu":    api.Percentage(20),
-										"memory": api.Percentage(20),
-										"pods":   api.Percentage(20),
-									},
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Balance: api.PluginSet{
-								Enabled: []string{nodeutilization.HighNodeUtilizationPluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", nodeutilization.LowNodeUtilizationPluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: nodeutilization.LowNodeUtilizationPluginName,
-								Args: &nodeutilization.LowNodeUtilizationArgs{
-									Thresholds: api.ResourceThresholds{
-										"cpu":    api.Percentage(20),
-										"memory": api.Percentage(20),
-										"pods":   api.Percentage(20),
-									},
-									TargetThresholds: api.ResourceThresholds{
-										"cpu":    api.Percentage(50),
-										"memory": api.Percentage(50),
-										"pods":   api.Percentage(50),
-									},
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Balance: api.PluginSet{
-								Enabled: []string{nodeutilization.LowNodeUtilizationPluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removeduplicates.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removeduplicates.PluginName,
-								Args: &removeduplicates.RemoveDuplicatesArgs{},
-							},
-						},
-						Plugins: api.Plugins{
-							Balance: api.PluginSet{
-								Enabled: []string{removeduplicates.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removefailedpods.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removefailedpods.PluginName,
-								Args: &removefailedpods.RemoveFailedPodsArgs{},
-							},
-						},
-						Plugins: api.Plugins{
-							Deschedule: api.PluginSet{
-								Enabled: []string{removefailedpods.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removepodshavingtoomanyrestarts.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removepodshavingtoomanyrestarts.PluginName,
-								Args: &removepodshavingtoomanyrestarts.RemovePodsHavingTooManyRestartsArgs{
-									PodRestartThreshold: 100,
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Deschedule: api.PluginSet{
-								Enabled: []string{removepodshavingtoomanyrestarts.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removepodsviolatinginterpodantiaffinity.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removepodsviolatinginterpodantiaffinity.PluginName,
-								Args: &removepodsviolatinginterpodantiaffinity.RemovePodsViolatingInterPodAntiAffinityArgs{},
-							},
-						},
-						Plugins: api.Plugins{
-							Deschedule: api.PluginSet{
-								Enabled: []string{removepodsviolatinginterpodantiaffinity.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removepodsviolatingnodeaffinity.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removepodsviolatingnodeaffinity.PluginName,
-								Args: &removepodsviolatingnodeaffinity.RemovePodsViolatingNodeAffinityArgs{
-									NodeAffinityType: []string{"requiredDuringSchedulingIgnoredDuringExecution"},
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Deschedule: api.PluginSet{
-								Enabled: []string{removepodsviolatingnodeaffinity.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removepodsviolatingnodetaints.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removepodsviolatingnodetaints.PluginName,
-								Args: &removepodsviolatingnodetaints.RemovePodsViolatingNodeTaintsArgs{},
-							},
-						},
-						Plugins: api.Plugins{
-							Deschedule: api.PluginSet{
-								Enabled: []string{removepodsviolatingnodetaints.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removepodsviolatingtopologyspreadconstraint.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removepodsviolatingtopologyspreadconstraint.PluginName,
-								Args: &removepodsviolatingtopologyspreadconstraint.RemovePodsViolatingTopologySpreadConstraintArgs{
-									Constraints:            []v1.UnsatisfiableConstraintAction{v1.DoNotSchedule},
-									TopologyBalanceNodeFit: utilptr.To(true),
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Balance: api.PluginSet{
-								Enabled: []string{removepodsviolatingtopologyspreadconstraint.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-				},
-			},
-		},
-		{
-			description: "pass in all params to check args",
-			policy: &v1alpha1.DeschedulerPolicy{
-				Strategies: v1alpha1.StrategyList{
-					removeduplicates.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							RemoveDuplicates: &v1alpha1.RemoveDuplicates{
-								ExcludeOwnerKinds: []string{"ReplicaSet"},
-							},
-						},
-					},
-					nodeutilization.LowNodeUtilizationPluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							NodeResourceUtilizationThresholds: &v1alpha1.NodeResourceUtilizationThresholds{
-								Thresholds: v1alpha1.ResourceThresholds{
-									"cpu":    v1alpha1.Percentage(20),
-									"memory": v1alpha1.Percentage(20),
-									"pods":   v1alpha1.Percentage(20),
-								},
-								TargetThresholds: v1alpha1.ResourceThresholds{
-									"cpu":    v1alpha1.Percentage(50),
-									"memory": v1alpha1.Percentage(50),
-									"pods":   v1alpha1.Percentage(50),
-								},
-								UseDeviationThresholds: true,
-								NumberOfNodes:          3,
-							},
-						},
-					},
-					nodeutilization.HighNodeUtilizationPluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							NodeResourceUtilizationThresholds: &v1alpha1.NodeResourceUtilizationThresholds{
-								Thresholds: v1alpha1.ResourceThresholds{
-									"cpu":    v1alpha1.Percentage(20),
-									"memory": v1alpha1.Percentage(20),
-									"pods":   v1alpha1.Percentage(20),
-								},
-							},
-						},
-					},
-					removefailedpods.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							FailedPods: &v1alpha1.FailedPods{
-								MinPodLifetimeSeconds:   utilptr.To[uint](3600),
-								ExcludeOwnerKinds:       []string{"Job"},
-								Reasons:                 []string{"NodeAffinity"},
-								IncludingInitContainers: true,
-							},
-						},
-					},
-					removepodshavingtoomanyrestarts.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							PodsHavingTooManyRestarts: &v1alpha1.PodsHavingTooManyRestarts{
-								PodRestartThreshold:     100,
-								IncludingInitContainers: true,
-							},
-						},
-					},
-					removepodsviolatinginterpodantiaffinity.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params:  &v1alpha1.StrategyParameters{},
-					},
-					removepodsviolatingnodeaffinity.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							NodeAffinityType: []string{"requiredDuringSchedulingIgnoredDuringExecution"},
-						},
-					},
-					removepodsviolatingnodetaints.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							ExcludedTaints: []string{"dedicated=special-user", "reserved"},
-						},
-					},
-					removepodsviolatingtopologyspreadconstraint.PluginName: v1alpha1.DeschedulerStrategy{
-						Enabled: true,
-						Params: &v1alpha1.StrategyParameters{
-							IncludeSoftConstraints: true,
-						},
-					},
-				},
-			},
-			result: &api.DeschedulerPolicy{
-				Profiles: []api.DeschedulerProfile{
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", nodeutilization.HighNodeUtilizationPluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: nodeutilization.HighNodeUtilizationPluginName,
-								Args: &nodeutilization.HighNodeUtilizationArgs{
-									Thresholds: api.ResourceThresholds{
-										"cpu":    api.Percentage(20),
-										"memory": api.Percentage(20),
-										"pods":   api.Percentage(20),
-									},
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Balance: api.PluginSet{
-								Enabled: []string{nodeutilization.HighNodeUtilizationPluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", nodeutilization.LowNodeUtilizationPluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: nodeutilization.LowNodeUtilizationPluginName,
-								Args: &nodeutilization.LowNodeUtilizationArgs{
-									UseDeviationThresholds: true,
-									NumberOfNodes:          3,
-									Thresholds: api.ResourceThresholds{
-										"cpu":    api.Percentage(20),
-										"memory": api.Percentage(20),
-										"pods":   api.Percentage(20),
-									},
-									TargetThresholds: api.ResourceThresholds{
-										"cpu":    api.Percentage(50),
-										"memory": api.Percentage(50),
-										"pods":   api.Percentage(50),
-									},
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Balance: api.PluginSet{
-								Enabled: []string{nodeutilization.LowNodeUtilizationPluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removeduplicates.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removeduplicates.PluginName,
-								Args: &removeduplicates.RemoveDuplicatesArgs{
-									ExcludeOwnerKinds: []string{"ReplicaSet"},
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Balance: api.PluginSet{
-								Enabled: []string{removeduplicates.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removefailedpods.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removefailedpods.PluginName,
-								Args: &removefailedpods.RemoveFailedPodsArgs{
-									ExcludeOwnerKinds:       []string{"Job"},
-									MinPodLifetimeSeconds:   utilptr.To[uint](3600),
-									Reasons:                 []string{"NodeAffinity"},
-									IncludingInitContainers: true,
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Deschedule: api.PluginSet{
-								Enabled: []string{removefailedpods.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removepodshavingtoomanyrestarts.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removepodshavingtoomanyrestarts.PluginName,
-								Args: &removepodshavingtoomanyrestarts.RemovePodsHavingTooManyRestartsArgs{
-									PodRestartThreshold:     100,
-									IncludingInitContainers: true,
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Deschedule: api.PluginSet{
-								Enabled: []string{removepodshavingtoomanyrestarts.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removepodsviolatinginterpodantiaffinity.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removepodsviolatinginterpodantiaffinity.PluginName,
-								Args: &removepodsviolatinginterpodantiaffinity.RemovePodsViolatingInterPodAntiAffinityArgs{},
-							},
-						},
-						Plugins: api.Plugins{
-							Deschedule: api.PluginSet{
-								Enabled: []string{removepodsviolatinginterpodantiaffinity.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removepodsviolatingnodeaffinity.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removepodsviolatingnodeaffinity.PluginName,
-								Args: &removepodsviolatingnodeaffinity.RemovePodsViolatingNodeAffinityArgs{
-									NodeAffinityType: []string{"requiredDuringSchedulingIgnoredDuringExecution"},
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Deschedule: api.PluginSet{
-								Enabled: []string{removepodsviolatingnodeaffinity.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removepodsviolatingnodetaints.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removepodsviolatingnodetaints.PluginName,
-								Args: &removepodsviolatingnodetaints.RemovePodsViolatingNodeTaintsArgs{
-									ExcludedTaints: []string{"dedicated=special-user", "reserved"},
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Deschedule: api.PluginSet{
-								Enabled: []string{removepodsviolatingnodetaints.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", removepodsviolatingtopologyspreadconstraint.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: removepodsviolatingtopologyspreadconstraint.PluginName,
-								Args: &removepodsviolatingtopologyspreadconstraint.RemovePodsViolatingTopologySpreadConstraintArgs{
-									Constraints:            []v1.UnsatisfiableConstraintAction{v1.DoNotSchedule, v1.ScheduleAnyway},
-									TopologyBalanceNodeFit: utilptr.To(true),
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Balance: api.PluginSet{
-								Enabled: []string{removepodsviolatingtopologyspreadconstraint.PluginName},
-							},
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-						},
-					},
-				},
-			},
-		},
-		{
-			description: "invalid strategy name",
-			policy: &v1alpha1.DeschedulerPolicy{Strategies: v1alpha1.StrategyList{
-				"InvalidName": v1alpha1.DeschedulerStrategy{
-					Enabled: true,
-					Params:  &v1alpha1.StrategyParameters{},
-				},
-			}},
-			result: nil,
-			err:    fmt.Errorf("unknown strategy name: InvalidName"),
-		},
-		{
-			description: "invalid threshold priority",
-			policy: &v1alpha1.DeschedulerPolicy{Strategies: v1alpha1.StrategyList{
-				nodeutilization.LowNodeUtilizationPluginName: v1alpha1.DeschedulerStrategy{
-					Enabled: true,
-					Params: &v1alpha1.StrategyParameters{
-						ThresholdPriority:          utilptr.To[int32](100),
-						ThresholdPriorityClassName: "name",
-						NodeResourceUtilizationThresholds: &v1alpha1.NodeResourceUtilizationThresholds{
-							Thresholds: v1alpha1.ResourceThresholds{
-								"cpu":    v1alpha1.Percentage(20),
-								"memory": v1alpha1.Percentage(20),
-								"pods":   v1alpha1.Percentage(20),
-							},
-							TargetThresholds: v1alpha1.ResourceThresholds{
-								"cpu":    v1alpha1.Percentage(50),
-								"memory": v1alpha1.Percentage(50),
-								"pods":   v1alpha1.Percentage(50),
-							},
-						},
-					},
-				},
-			}},
-			result: nil,
-			err:    fmt.Errorf("priority threshold misconfigured for plugin LowNodeUtilization"),
-		},
-	}
-
-	for _, tc := range testCases {
-		t.Run(tc.description, func(t *testing.T) {
-			result := &api.DeschedulerPolicy{}
-			scope := scope{}
-			err := v1alpha1.V1alpha1ToInternal(tc.policy, pluginregistry.PluginRegistry, result, scope)
-			if err != nil {
-				if err.Error() != tc.err.Error() {
-					t.Errorf("unexpected error: %s", err.Error())
-				}
-			}
-			if err == nil {
-				// sort to easily compare deepequality
-				result.Profiles = api.SortDeschedulerProfileByName(result.Profiles)
-				diff := cmp.Diff(tc.result, result)
-				if diff != "" {
-					t.Errorf("test '%s' failed. Results are not deep equal. mismatch (-want +got):\n%s", tc.description, diff)
-				}
-			}
-		})
-	}
-}
 
 func TestDecodeVersionedPolicy(t *testing.T) {
 	client := fakeclientset.NewSimpleClientset()
 	SetupPlugins()
-	defaultEvictorPluginConfig := api.PluginConfig{
-		Name: defaultevictor.PluginName,
-		Args: &defaultevictor.DefaultEvictorArgs{
-			PriorityThreshold: &api.PriorityThreshold{
-				Value: utilptr.To[int32](utils.SystemCriticalPriority),
-			},
-		},
-	}
 	type testCase struct {
 		description string
 		policy      []byte
@@ -899,124 +59,6 @@ func TestDecodeVersionedPolicy(t *testing.T) {
 		result      *api.DeschedulerPolicy
 	}
 	testCases := []testCase{
-		{
-			description: "v1alpha1 to internal",
-			policy: []byte(`apiVersion: "descheduler/v1alpha1"
-kind: "DeschedulerPolicy"
-strategies:
-  "PodLifeTime":
-    enabled: true
-    params:
-      podLifeTime:
-        maxPodLifeTimeSeconds: 5
-      namespaces:
-        include:
-          - "testleaderelection-a"
-`),
-			result: &api.DeschedulerPolicy{
-				Profiles: []api.DeschedulerProfile{
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", podlifetime.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							defaultEvictorPluginConfig,
-							{
-								Name: podlifetime.PluginName,
-								Args: &podlifetime.PodLifeTimeArgs{
-									Namespaces: &api.Namespaces{
-										Include: []string{"testleaderelection-a"},
-									},
-									MaxPodLifeTimeSeconds: utilptr.To[uint](5),
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							Deschedule: api.PluginSet{
-								Enabled: []string{podlifetime.PluginName},
-							},
-						},
-					},
-				},
-			},
-		},
-		{
-			description: "v1alpha1 to internal with priorityThreshold",
-			policy: []byte(`apiVersion: "descheduler/v1alpha1"
-kind: "DeschedulerPolicy"
-strategies:
-  "PodLifeTime":
-    enabled: true
-    params:
-      podLifeTime:
-        maxPodLifeTimeSeconds: 5
-      namespaces:
-        include:
-          - "testleaderelection-a"
-      thresholdPriority: null
-      thresholdPriorityClassName: prioritym
-`),
-			result: &api.DeschedulerPolicy{
-				Profiles: []api.DeschedulerProfile{
-					{
-						Name: fmt.Sprintf("strategy-%s-profile", podlifetime.PluginName),
-						PluginConfigs: []api.PluginConfig{
-							{
-								Name: "DefaultEvictor",
-								Args: &defaultevictor.DefaultEvictorArgs{
-									PriorityThreshold: &api.PriorityThreshold{
-										Value: utilptr.To[int32](0),
-									},
-								},
-							},
-							{
-								Name: podlifetime.PluginName,
-								Args: &podlifetime.PodLifeTimeArgs{
-									Namespaces: &api.Namespaces{
-										Include: []string{"testleaderelection-a"},
-									},
-									MaxPodLifeTimeSeconds: utilptr.To[uint](5),
-								},
-							},
-						},
-						Plugins: api.Plugins{
-							Filter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							PreEvictionFilter: api.PluginSet{
-								Enabled: []string{defaultevictor.PluginName},
-							},
-							Deschedule: api.PluginSet{
-								Enabled: []string{podlifetime.PluginName},
-							},
-						},
-					},
-				},
-			},
-		},
-		{
-			description: "v1alpha1 to internal with priorityThreshold value and name should return error",
-			policy: []byte(`apiVersion: "descheduler/v1alpha1"
-kind: "DeschedulerPolicy"
-strategies:
-  "PodLifeTime":
-    enabled: true
-    params:
-      podLifeTime:
-        maxPodLifeTimeSeconds: 5
-      namespaces:
-        include:
-          - "testleaderelection-a"
-      thresholdPriority: 222
-      thresholdPriorityClassName: prioritym
-`),
-			result: nil,
-			err:    fmt.Errorf("failed decoding descheduler's policy config \"filename\": priority threshold misconfigured for plugin PodLifeTime"),
-		},
 		{
 			description: "v1alpha2 to internal",
 			policy: []byte(`apiVersion: "descheduler/v1alpha2"
@@ -21,7 +21,6 @@ import (
 	"k8s.io/apimachinery/pkg/runtime/serializer"
 	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
 	"sigs.k8s.io/descheduler/pkg/api"
-	"sigs.k8s.io/descheduler/pkg/api/v1alpha1"
 	"sigs.k8s.io/descheduler/pkg/api/v1alpha2"
 	"sigs.k8s.io/descheduler/pkg/apis/componentconfig"
 	componentconfigv1alpha1 "sigs.k8s.io/descheduler/pkg/apis/componentconfig/v1alpha1"
@@ -57,10 +56,8 @@ func init() {
 
 	utilruntime.Must(componentconfig.AddToScheme(Scheme))
 	utilruntime.Must(componentconfigv1alpha1.AddToScheme(Scheme))
-	utilruntime.Must(v1alpha1.AddToScheme(Scheme))
 	utilruntime.Must(v1alpha2.AddToScheme(Scheme))
 	utilruntime.Must(Scheme.SetVersionPriority(
 		v1alpha2.SchemeGroupVersion,
-		v1alpha1.SchemeGroupVersion,
 	))
 }
|||||||
@@ -1,11 +1,15 @@
-apiVersion: "descheduler/v1alpha1"
+apiVersion: "descheduler/v1alpha2"
 kind: "DeschedulerPolicy"
-strategies:
-  "PodLifeTime":
-    enabled: true
-    params:
-      podLifeTime:
-        maxPodLifeTimeSeconds: 5
-      namespaces:
-        include:
-          - "e2e-testleaderelection-a"
+profiles:
+  - name: ProfileName
+    pluginConfig:
+      - name: "PodLifeTime"
+        args:
+          maxPodLifeTimeSeconds: 5
+          namespaces:
+            include:
+              - "e2e-testleaderelection-a"
+    plugins:
+      deschedule:
+        enabled:
+          - "PodLifeTime"
@@ -1,11 +1,15 @@
-apiVersion: "descheduler/v1alpha1"
+apiVersion: "descheduler/v1alpha2"
 kind: "DeschedulerPolicy"
-strategies:
-  "PodLifeTime":
-    enabled: true
-    params:
-      podLifeTime:
-        maxPodLifeTimeSeconds: 5
-      namespaces:
-        include:
-          - "e2e-testleaderelection-b"
+profiles:
+  - name: ProfileName
+    pluginConfig:
+      - name: "PodLifeTime"
+        args:
+          maxPodLifeTimeSeconds: 5
+          namespaces:
+            include:
+              - "e2e-testleaderelection-b"
+    plugins:
+      deschedule:
+        enabled:
+          - "PodLifeTime"
@@ -1,27 +0,0 @@
----
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: descheduler-policy-configmap
-  namespace: kube-system
-data:
-  policy.yaml: |
-    apiVersion: "descheduler/v1alpha1"
-    kind: "DeschedulerPolicy"
-    strategies:
-      "RemoveDuplicates":
-        enabled: true
-      "RemovePodsViolatingInterPodAntiAffinity":
-        enabled: true
-      "LowNodeUtilization":
-        enabled: true
-        params:
-          nodeResourceUtilizationThresholds:
-            thresholds:
-              "cpu" : 20
-              "memory": 20
-              "pods": 20
-            targetThresholds:
-              "cpu" : 50
-              "memory": 50
-              "pods": 50
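
Note: the ConfigMap above is deleted without an in-tree replacement in this hunk. Purely as a hedged sketch (assuming the v1alpha2 profile layout used in the migrated e2e policies above; the profile name "default" is arbitrary and not from this commit), the same policy could be expressed as:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: descheduler-policy-configmap
      namespace: kube-system
    data:
      policy.yaml: |
        apiVersion: "descheduler/v1alpha2"
        kind: "DeschedulerPolicy"
        profiles:
          - name: default              # arbitrary profile name (assumption)
            pluginConfig:
              - name: "LowNodeUtilization"
                args:
                  thresholds:
                    "cpu": 20
                    "memory": 20
                    "pods": 20
                  targetThresholds:
                    "cpu": 50
                    "memory": 50
                    "pods": 50
            plugins:
              balance:
                enabled:
                  - "RemoveDuplicates"
                  - "LowNodeUtilization"
              deschedule:
                enabled:
                  - "RemovePodsViolatingInterPodAntiAffinity"

The balance/deschedule placement follows the plugin sets shown in the converted test expectations earlier in this diff.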