78 Commits

169eb34a59 Fix playbook names for galaxy (#10021)
Signed-off-by: Kasanic, Denis <denisx.kasanic@intel.com>
2023-04-24 07:09:02 -07:00
acbf44a1b4 Adds support for Ansible collections (#9582) 2023-03-27 02:25:55 -07:00
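
With collection support (#9582 above), the playbooks become consumable from Ansible Galaxy. A minimal sketch of pulling them in; the collection name is an assumption, not confirmed by these entries:

    # requirements.yml (sketch; collection name is an assumption)
    collections:
      - name: kubernetes_sigs.kubespray
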
dd4bc5fbfe [etcd] Sometimes, we do not need to run etcd role on all nodes. (#9173)
* WIP: sometimes we do not need to run the etcd role

* fix ansible lint

* like the calico (kdd) CNI, there is no need to run etcd
2022-09-09 01:29:22 -07:00
b2f9442aba Have ingress_controller and external_provisioner in upgrade-cluster.yml (#8640) 2022-03-22 05:43:43 -07:00
b554246502 Fix host DNS config 1) being edited too soon and 2) not working with NM (#8575)
Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-02-26 10:29:23 -08:00
e9c8913248 Add kubeadm option to etcd_deployment_type to replace the etcd_kubeadm_enabled variable (#8317)
* Add kubeadm option to etcd_deployment_type to replace the etcd_kubeadm_enabled variable

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* Add etcd kubeadm deployment documentation

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* Refactor warning for the deprecated 'etcd_kubeadm_enabled' variable

Signed-off-by: necatican <necaticanyildirim@gmail.com>
2022-02-22 08:53:16 -08:00
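
A sketch of the migration this commit describes, using the two variable names from its message; the file path is illustrative:

    # group_vars/all/etcd.yml (sketch)
    # deprecated:
    # etcd_kubeadm_enabled: true
    # replacement:
    etcd_deployment_type: kubeadm   # "host" is another value used below (#7532)
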
97c667f67c Fix etcd_events not getting upgraded in upgrade-cluster.yml (#8550)
Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-02-17 08:03:38 -08:00
e4c8c7188e etcd: deploy container engine if needed (#7532)
If the etcd cluster is separate and the etcd_deployment_type is "host",
there is no need for a container engine on the etcd nodes

Do not rely on a 'default(true)' filter, but define a proper default in
kubespray-defaults depending on the etcd deployment method and on whether
internal or external etcd is used
2021-10-12 00:31:47 -07:00
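
A minimal sketch of such a computed default in kubespray-defaults; the variable name is hypothetical, only the "host" deployment type is from the commit message:

    # roles/kubespray-defaults/defaults/main.yaml (sketch; variable name hypothetical)
    etcd_needs_container_engine: "{{ etcd_deployment_type != 'host' }}"
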
360aff4a57 Rename ansible groups to use _ instead of - (#7552)
* rename ansible groups to use _ instead of -

k8s-cluster -> k8s_cluster
k8s-node -> k8s_node
calico-rr -> calico_rr
no-floating -> no_floating

Note: kube-node, k8s-cluster groups in upgrade CI
      need clean-up after v2.16 is tagged

* ensure old groups are mapped to the new ones
2021-04-29 05:20:50 -07:00
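
A sketch of the "old groups mapped to the new ones" step, using Ansible's group_by module; only the group names come from the commit message:

    - hosts: k8s-cluster
      gather_facts: false
      tasks:
        - name: Map the legacy k8s-cluster group onto k8s_cluster (sketch)
          group_by:
            key: k8s_cluster
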
486b223e01 Replace kube-master with kube_control_plane (#7256)
This replaces kube-master with kube_control_plane because of [1]:

  The Kubernetes project is moving away from wording that is
  considered offensive. A new working group WG Naming was created
  to track this work, and the word "master" was declared as offensive.
  A proposal was formalized for replacing the word "master" with
  "control plane". This means it should be removed from source code,
  documentation, and user-facing configuration from Kubernetes and
  its sub-projects.

NOTE: This uses kube_control_plane rather than kube-control-plane
      because Ansible group names must be valid (hyphen-free) identifiers.

[1]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint/README.md#motivation
2021-03-23 17:26:05 -07:00
8800b5c01d Remove rotate_tokens logic
kubeadm never rotates sa.key/sa.pub, so there is no need to delete tokens/restart pods

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
2021-03-04 23:42:22 -08:00
067db686f6 Fix proxy usage when *_PROXY are present in environment (#7309)
Since a790935d02 all proxy users
should be properly configured

Now, when you have *_PROXY vars in your environment, they can lead to failures
if NO_PROXY is not correct, or to persistent configuration changes,
as seen with kubeadm in 1c5391dda7

Instead of playing constant whack-a-bug, inject empty *_PROXY vars everywhere
at the play level, and override at the task level when needed

Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
2021-02-23 09:44:02 -08:00
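
A sketch of the play-level neutralization this commit describes:

    - hosts: k8s_cluster
      environment:            # inject empty *_PROXY vars at the play level
        http_proxy: ""
        https_proxy: ""
        no_proxy: ""
      roles:
        - kubespray-defaults

Tasks that genuinely need the proxy then override locally with environment: "{{ proxy_env }}".
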
c5db012c9a Move kubernetes/master to kubernetes/control-plane (#7218)
This is a small step to replace "master" with "control-plane" in
Kubespray project.
2021-02-01 07:15:49 -08:00
a790935d02 Only setup *_PROXY env variables where needed (#7095)
no_proxy is a pain to get right, and having proxy variables present causes issues
(k8s components get proxy configuration after upgrade, see #7100)

It's better to configure only what requires a proxy:
- the runtime (containerd/docker/crio)
- the package manager + apt_key
- the download tasks

Tested with the following clusters
- 4 CentOS 8 nodes
- 1 Ubuntu 20.04 node

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2021-01-11 07:21:08 -08:00
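
A sketch of scoping the proxy to one of the three listed spots, the package-manager key import; the URL variable is hypothetical:

    - name: Add the container runtime apt key behind the proxy (sketch)
      apt_key:
        url: "{{ docker_apt_key_url }}"   # hypothetical variable
      environment: "{{ proxy_env }}"
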
50e8a52c74 Handle calico-rr nodes as workers so they get upgraded too (#6447)
* Handle calico-rr nodes as workers so they get upgraded too

* calico-rr nodes run 'calico and external cloud provider' too
2020-09-24 04:38:05 -07:00
93951f2ed5 fix use of ansible tags (#6316)
Tags are not inherited for include_role, hence the change
from include to import.

Co-authored-by: Hans Feldt <hafe@users.noreply.github.com>
2020-06-25 03:00:37 -07:00
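
The difference, sketched; the role name is borrowed from a later entry (#5560):

    # with import_role (static), the tag is inherited by every task in the role;
    # with include_role (dynamic), it would only gate the include task itself
    - import_role:
        name: kubernetes/node-label
      tags: node-label
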
69603aed34 add strategy mitogen_linear when mitogen is installed (#5985)
* add strategy mitogen_linear when mitogen is installed

* add small docs

Rename playbook file

The raw action executes as a regular Mitogen connection, which requires Python on the target, so add strategy: linear to the bootstrap-os role playbook.

* add mitogen to CI test
fix typo

* enable mitogen test on deploy-part1 tests
change version from master to release
download tar.gz archive

* run all CI tests with mitogen

* disable mitogen with upgrade CI tests

* enable mitogen on CI tests via env vars

* disable mitogen on CI test by default, enable on some different OS

* disable mitogen CI test on centos8
(got error: /usr/bin/python: No such file or directory)
2020-04-24 05:20:07 -07:00
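
A sketch of the resulting layout: Mitogen for the regular plays, plain linear where raw is used, since per the commit message even raw runs as a Mitogen connection and needs Python on the target:

    - hosts: all
      strategy: linear          # bootstrap-os uses raw; keep the stock strategy
      roles:
        - bootstrap-os

    - hosts: k8s_cluster
      strategy: mitogen_linear  # needs the Mitogen strategy plugin on the controller
      roles:
        - kubespray-defaults
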
aead0e3a69 bump minimal ansible version to 2.8.0 (#5984)
* bump minimal ansible version to 2.8.0

* check ansible version in separate playbook
2020-04-22 13:33:44 -07:00
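
A minimal sketch of such a separate version-check playbook:

    - hosts: localhost
      gather_facts: false
      tasks:
        - name: Require the minimal Ansible version (sketch)
          assert:
            that: ansible_version.full is version('2.8.0', '>=')
            msg: "Kubespray requires Ansible >= 2.8.0"
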
27a268df33 Gather just the necessary facts (#5955)
* Gather just the necessary facts

* Move fact gathering to separate playbook.
2020-04-17 16:23:36 -07:00
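
A sketch of a dedicated fact-gathering play collecting only a subset; the chosen subset is an assumption:

    - hosts: k8s_cluster
      gather_facts: false
      tasks:
        - name: Gather only the facts the roles need (sketch)
          setup:
            gather_subset: "network"   # min facts plus network; subset is an assumption
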
910a821d0b Fix chicken and egg problem with proxy_env not defined on the first … (#5896)
* Fix chicken and egg problem with proxy_env not defined on the first environment usage.

* Disable fact gathering for the first proxy_env evaluation.

* Move proxy_env var set up from the role defaults to the root playbooks as fact.
2020-04-08 00:53:43 -07:00
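
A sketch of setting proxy_env as a fact at the top of the root playbook, before any role defaults reference it; the individual keys are assumptions:

    - hosts: all
      gather_facts: false     # no facts needed for the first proxy_env evaluation
      tasks:
        - name: Set proxy_env early as a fact (sketch)
          set_fact:
            proxy_env:
              http_proxy: "{{ http_proxy | default('') }}"
              https_proxy: "{{ https_proxy | default('') }}"
              no_proxy: "{{ no_proxy | default('') }}"
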
d0af5979c8 install csi-driver not just cinder (#5766) 2020-03-13 05:34:39 -07:00
f697338eec [Openstack] Install Cinder-CSI before first node is schedulable again (#5735)
* install cinder-csi before upgrading nodes

* Only run the Cinder CSI when enabled
2020-03-11 06:31:36 -07:00
a3dedb68d1 [Openstack] Make it possible to apply the new cloud provider during upgrade (#5707)
* run external cloud provider setup during upgrade

* change name of task to better reflect what it does

* fix typo
2020-03-11 05:11:36 -07:00
66408a87ee Refactor download role (#5697)
* download file

* download containers

* fix push image to nodes

* pull if the image is not on the host

* fix

* improve docker image tag checks.
do not pull already cached images

* rebase fix merge conflict

* add support for download_run_once when upgrading and scaling the cluster
add some tests with download_run_once

* set default values for the temp flag on every download cycle

* add save/load ability for containerd and crio when download_run_once=true

* return redefined image save/load commands to set_docker_image_facts.yml

* move set command to set_container_facts

* ctr in containerd_bin_dir

* fix order of ctr image export arguments

* temporarily disable download_run_once for containerd and crio
due to https://github.com/containerd/containerd/issues/4075

* remove unused files

* fix strict yaml linter warning and errors

* refactor logical conditions to pull and cache container images

* remove comment due to lint check

* document role

* remove image_load_on_localhost, because cached images are always loaded to docker on remote sites

* remove XXX from debug output
2020-03-05 07:31:39 -08:00
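
A sketch of the containerd save step when download_run_once is enabled; note that ctr expects the output archive before the image reference (the argument-order fix above). Variable names other than containerd_bin_dir are assumptions:

    - name: Export a cached image with ctr (sketch)
      command: >-
        {{ containerd_bin_dir }}/ctr -n k8s.io image export
        {{ image_archive_path }} {{ image_repo }}:{{ image_tag }}
      when: download_run_once
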
7f87ce0362 Upgrade container-engine after draining (#5601)
* Run 'container-engine' after drain.

Move possibly disruptive role 'container-engine' to run after the node
is drained.

As that role has to run on non-cluster nodes as well (etcd and
calico-rr), and those nodes are not drained, add a play for that case.

* Check if api is up before upgrade.

If the container engine is restarted in the previous role, the API controller can
take some time to start. This check ensures the API is up before the upgrade.
2020-02-27 11:47:28 -08:00
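
A sketch of such an is-the-API-up check; the endpoint variable is an assumption:

    - name: Wait for the apiserver before upgrading (sketch)
      uri:
        url: "https://{{ kube_apiserver_endpoint | default('127.0.0.1:6443') }}/healthz"
        validate_certs: false
      register: apiserver_health
      until: apiserver_health.status == 200
      retries: 30
      delay: 5
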
538f4dad9d tag role kubernetes/node-label in playbooks (#5560) 2020-01-20 11:43:36 -08:00
9fda84b1c9 set node label via kubectl label command (#5257)
* set various node labels via the kubectl label command, not kubelet options

* remove node_labels from KUBELET_ARGS
2019-12-09 01:43:09 -08:00
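
A sketch of applying node_labels with kubectl rather than KUBELET_ARGS; the delegation target uses the group name of that era and is an assumption:

    - name: Apply node labels via kubectl (sketch)
      command: >-
        kubectl label node {{ inventory_hostname }}
        {{ item.key }}={{ item.value }} --overwrite
      loop: "{{ node_labels | default({}) | dict2items }}"
      delegate_to: "{{ groups['kube-master'] | first }}"
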
bc7d1f36ea Remove wasteful extra gather facts step (#4928)
Ansible will gather facts on the preinstall/download role
automatically at the start of that play.
2019-06-27 06:25:21 -07:00
4348e78b24 Enable kubeadm etcd mode (#4818)
* Enable kubeadm etcd mode

Uses cert commands from kubeadm experimental control plane to
enable non-master nodes to obtain etcd certs.

Related story: PROD-29434

Change-Id: Idafa1d223e5c6ceadf819b6f9c06adf4c4f74178

* Add validation checks and exclude calico kdd mode

Change-Id: Ic234f5e71261d33191376e70d438f9f6d35f358c

* Move etcd mode test to ubuntu flannel HA job

Change-Id: I9af6fd80a1bbb1692ab10d6da095eb368f6bc732

* rename etcd_mode to etcd_kubeadm_enabled

Change-Id: Ib196d6c8a52f48cae370b026f7687ff9ca69c172
2019-06-20 11:12:51 -07:00
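
A sketch of the kind of kubeadm cert phase this relies on, written in the modern phase syntax; the config-file variable is hypothetical:

    - name: Generate etcd server certs via kubeadm (sketch)
      command: >-
        kubeadm init phase certs etcd-server
        --config {{ kubeadm_config_file }}
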
e67f848abc ansible-lint: add spaces around variables [E206] (#4699) 2019-05-02 14:24:21 -07:00
a5b46bfc8c Run dns_late preinstall tasks on all k8s nodes (#4672)
* Run dns_late preinstall tasks on all k8s nodes

Related issue: #4656

Change-Id: I63f8559ef1a497b7580ab084561e6603fe647834

* Fix ansible-lint

Change-Id: Ia5b33fa63dbc36d8c3e9557ef3f2ea02af2325a5

* Fix recover_control_plane lint issues

Change-Id: I16643a3193c11b6ba704e9698812cac7e4fd19a8
2019-04-29 05:12:21 -07:00
f69b5f7f33 Upgrade to Ansible 2.7.8 (#4535) 2019-04-17 10:18:05 -07:00
a4e65c7ceb Upgrade to Ansible >2.7.0 (#4471) 2019-04-09 04:21:07 -07:00
5f12b7aedf Remove kubedns and dnsmasq. Move dns_late phase after apps (#4406)
Both kubedns and dnsmasq modes have long been unmaintained.
We should run dns_late steps at the end because sshd
makes DNS lookups during the Ansible run and has 2s timeouts
for each failed lookup while trying to reach coredns before
it is ready.
2019-04-01 12:32:34 -07:00
9ffc65f8f3 Yamllint fixes (#4410)
* Lint everything in the repository with yamllint

* yamllint fixes: syntax fixes only

* yamllint fixes: move comments to play names

* yamllint fixes: indent comments in .gitlab-ci.yml file
2019-04-01 02:38:33 -07:00
c66e9a6d62 Disable become for localhost (#4287) 2019-02-25 19:36:44 -08:00
80379f6cab Fix kube-proxy configuration for kubeadm (#3958)
- Creates and defaults an ansible variable for every configuration option in the `kubeproxy.config.k8s.io/v1alpha1` type spec
  - Fixes vars that were orphaned by the non-kubeadm removal
  - Fixes previously hardcoded kubeadm values
- Introduces a `main` directory for role default files per component (requires ansible 2.6.0+)
  - Split out just `kube-proxy.yml` in this first effort
- Removes the kube-proxy server field patch task

We should continue to pull out other components from `main.yml` into their own defaults files as I did here for `defaults/main/kube-proxy.yml`. I hope for and will need others to join me in this refactoring across the project until each component config template has a matching role defaults file, with shared defaults in `kubespray-defaults` or `downloads`.
2019-01-03 00:04:26 -08:00
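
A sketch of the per-component defaults directory this introduces (requires Ansible 2.6.0+, as noted above); the path and values are illustrative:

    # roles/kubespray-defaults/defaults/main/kube-proxy.yml
    # (sketch; path and values illustrative)
    kube_proxy_mode: iptables                           # v1alpha1 "mode"
    kube_proxy_metrics_bind_address: "127.0.0.1:10249"  # v1alpha1 "metricsBindAddress"
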
cd42e649a7 Fix reconfigure and upgrade cluster (#3938) 2018-12-24 23:06:27 -08:00
4a7f829ecf Reapply win_node patches (#3868) 2018-12-13 06:17:46 -08:00
ddffdb63bf Remove non-kubeadm deployment (#3811)
* Remove non-kubeadm deployment

* More cleanup

* More cleanup

* More cleanup

* More cleanup

* Fix gitlab

* Try stopping gce first before absent to make the delete process work

* More cleanup

* Fix bug with checking if kubeadm has already run

* Fix bug with checking if kubeadm has already run

* More fixes

* Fix test

* fix

* Fix gitlab checkout until kubespray 2.8 is on quay

* Fixed

* Add upgrade path from non-kubeadm to kubeadm. Revert ssl path

* Readd secret checking

* Do gitlab checks from v2.7.0 test upgrade path to 2.8.0

* fix typo

* Fix CI jobs to kubeadm again. Fix broken hyperkube path

* Fix gitlab

* Fix rotate tokens

* More fixes

* More fixes

* Fix tokens
2018-12-06 02:33:38 -08:00
967a042321 Add flag to deploy container engine manually. (#3753)
This feature was removed by PR #3061; this change restores it, renaming the flag manage_docker to deploy_container_engine.
2018-11-26 07:26:40 -08:00
07d2f1aa36 Add some warning information about deprecating non-kubeadm code (#3759) 2018-11-26 01:17:31 -08:00
3dcb914607 Remove Vault (#3684)
* Remove Vault

* Remove reference to 'kargo' in the doc

* change check order
2018-11-10 08:51:24 -08:00
152c15b19f Disable gather facts when checking ansible version (#3615) 2018-10-31 03:19:17 -07:00
ce5a34d86c ansible: Upgrade to 2.7.1 (#3618)
Only exclude buggy Ansible v2.7.0 (https://github.com/ansible/ansible/issues/46600#issuecomment-433863628)

Fixup #3589
2018-10-31 03:01:19 -07:00
90d8f7aa6a Assert if ansible 2.7 is used (#3589) 2018-10-26 00:29:21 -07:00
c27a91f7f0 Split deploy steps in separate playbooks: part1 (#3451)
* Fix bootstrap_os/ubuntu idempotency

* Update bastion role

* move container_engine in sub-roles

* requires ansible 2.5

* ubuntu18 as first CI job
2018-10-09 19:14:33 -07:00
1d4aa7abcc Fix upgrade k8s 2018-09-16 10:35:12 +08:00
04852ad753 Install Helm on all masters 2018-09-10 11:39:26 +02:00
d407a590a6 container_manager variable to specify runtime. 2018-08-28 06:23:38 +00:00