Compare commits

...

827 Commits

Author SHA1 Message Date
2187882ee0 fix contrib/<file>.md errors identified by markdownlint 2022-08-07 13:47:51 +00:00
4a994c82d1 fix docs/<file>.md errors identified by markdownlint
* docs/azure-csi.md
* docs/azure.md
* docs/bootstrap-os.md
* docs/calico.md
* docs/debian.md
* docs/fcos.md
* docs/vagrant.md
* docs/gcp-lb.md
* docs/kubernetes-apps/registry.md
* docs/setting-up-your-first-cluster.md
* docs/vagrant.md
* docs/vars.md
2022-08-07 12:41:09 +00:00
b074b91ee9 fix docs/integration.md errors identified by markdownlint 2022-08-07 12:13:18 +00:00
b3f7be7135 describe the use of pre-commit hook in CONTRIBUTING.md 2022-08-07 11:58:47 +00:00
d4082da97f add tmp.md to .gitignore 2022-08-07 11:23:43 +00:00
faecc7420d add pre-commit hook configuration 2022-08-07 11:21:50 +00:00
7e862939db Add kube-vip check to check_readme_versions.sh (#9155)
To check that the kube-vip version in README.md matches the default value
in the role, this updates check_readme_versions.sh
2022-08-06 08:26:20 -07:00
0d3bd69a17 add-kube-vip-in-readme (#9149) 2022-08-05 08:13:47 -07:00
2b97b661d8 Move old etcd backup removal after etcd restart (#9147) 2022-08-05 08:09:59 -07:00
24f12b024d Argument jsonpath must be single-quoted in "See if node is schedulable" task (#9146) 2022-08-05 08:09:47 -07:00
f7d363dc96 Fix crio version in README (#9148) 2022-08-04 08:53:46 -07:00
47050003a0 Add docker support for Kylin V10 (#9144)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-08-03 15:03:46 -07:00
4df6e35270 Move oracle7-canal to centos7-canal 2022-08-02 16:55:52 -07:00
307f598bc8 Move flannel to etcd datastore 2022-08-02 16:55:52 -07:00
eb10249a75 Align canal templates with calico official ones (k8s datastore) 2022-08-02 16:55:52 -07:00
b4318e9967 Update to latest local path provisioner version (#9132) 2022-08-01 14:56:28 -07:00
c53561c9a0 Update to latest registry version (#9133) 2022-08-01 14:52:28 -07:00
f2f9f1d377 Add kylin OS support (#9078)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-08-01 10:44:29 -07:00
4487a374b1 Update Kube-router version to 1.5.1 (#9136)
https://github.com/cloudnativelabs/kube-router/releases/tag/v1.5.1
2022-08-01 00:16:28 -07:00
06f8368ce6 Fix Hetzner CCM cluster-cidr (#9127) 2022-07-30 20:18:27 -07:00
5b976a8d80 [calico] add hashes for v3.22.4 & v3.21.6 (#9129) 2022-07-30 20:14:38 -07:00
e73803c72c pid reserved must be str (#9124) 2022-07-30 20:14:27 -07:00
b3876142d2 [cert-manager] Upgrade to v1.9.0 (#9117) 2022-07-29 00:11:11 -07:00
9f11946f8a [argocd] update argocd to v2.4.7 (#9105) 2022-07-27 09:32:29 -07:00
9c28f61dbd Enable shellcheck for contrib/ (#9122)
Today we have many contributions to contrib/offline/ and some PRs
contained invalid coding style for those scripts.
This enables shellcheck so that such invalid coding style is caught easily.
2022-07-26 23:32:32 -07:00
09291bbdd2 Use a variable for roles of remove-node/post-remove (#9096)
Signed-off-by: ydFu <ader.ydfu@gmail.com>
2022-07-26 10:51:09 -07:00
7fa6314791 Add ignore_assert_error to ubuntu20 etcd ha job (#9108) 2022-07-26 10:45:09 -07:00
65d95d767a [helm] upgrade to 3.9.2 (#9115) 2022-07-26 10:41:09 -07:00
8306adb102 update cilium to v1.11.7 (#9119) 2022-07-26 10:33:11 -07:00
4b3db07cdb Fix calicoctl version to v3.23.3 (#9121)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-07-26 10:29:10 -07:00
c24a3a3b15 Keep the style consistent (#9116) 2022-07-24 23:46:59 -07:00
aca6be3adf [calico] add v3.23.3 and make it default (#9112) 2022-07-22 00:01:39 -07:00
9617532561 git ignore .terraform.lock.hcl anywhere (#9109) 2022-07-21 23:07:38 -07:00
ff5e487e32 Add retries to api servers response 2022-07-21 23:03:38 -07:00
9c51ac5157 Switch fedora36se to 35 and 35docker to 36 2022-07-21 23:03:38 -07:00
07eab539a6 Add Fedora 36 support and CI, remove Fedora 34 (eol) 2022-07-21 23:03:38 -07:00
a608a048ad Update kube-ovn to v1.9.7 2022-07-21 23:03:38 -07:00
0cfa03fa8a [flannel] update to v1.18.1 & make it default (#9104) 2022-07-21 00:19:55 -07:00
6525461d97 Add reset tasks specific to calico network_plugin (#9103) 2022-07-19 13:15:27 -07:00
f592fa1235 add kube-vip sans (#9099) 2022-07-19 13:11:28 -07:00
2e1863af78 feat: change default blockSize for calico (#9055)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2022-07-19 13:05:27 -07:00
2a282711df update-loadbalancers-versions (#9100) 2022-07-19 13:01:28 -07:00
91073d7379 [kubernetes] make v1.24.3 default (#9101) 2022-07-19 02:58:06 -07:00
3ce5458f32 hardening: Add SeccompDefault admission plugin for kubelet (#9074)
* docs(hardening): add SeccompDefault admission plugin to kubelet feature gates

* fix(kubelet-config): enable config through kubelet_feature_gates

* feat(kubelet): add kubelet_seccomp_default variable
2022-07-19 00:50:07 -07:00
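
As an illustration of the variables named in this change, a minimal group_vars sketch might look like the following; the list format for the feature gates and the chosen values are assumptions, not taken from the commit.

    # Sketch only: variable names come from the commit, values are assumed.
    kubelet_seccomp_default: true
    kubelet_feature_gates:
      - "SeccompDefault=true"
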
98c194735c [kubernetes] add hashes for v1.22.12, v1.23.9 & v1.24.3 (#9092) 2022-07-19 00:30:19 -07:00
626ea64f66 9052 crio add dpkg hold (#9075)
* Update main.yaml

* remove version in dpkg_selection name

* make lint happy

* Fix typo

* add comment / remove useless condition

* remove dpkg hold in reset tasks
2022-07-19 00:30:07 -07:00
0d32c0d92b [upcloud] Add firewall default deny policy and port allowlisting (#9058) 2022-07-19 00:18:06 -07:00
ce04fdde72 [ingress-nginx] upgrade to 1.3.0 (#9088)
* This release removes support for Kubernetes v1.19.0
* This release adds support for Kubernetes v1.24.0
* Starting with this release, we will need permissions on the coordination.k8s.io/leases resource for leaderelection lock
2022-07-14 18:46:25 -07:00
4ed3c85a88 Fix calicoctl checksums for v3.23.2 (#9087)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-07-13 14:02:57 -07:00
14063b023c Extend DNS memory limit. 170Mi tends to OOM (#9084) 2022-07-13 00:03:37 -07:00
3d32f0e953 [#9067] archive offline-files and support env-var NO_HTTP_SERVER to skip nginx-running (#9068) 2022-07-12 00:24:52 -07:00
d821bed2ea Fix some typo (#9056)
* fix ingress controller task name

* fix calico word

* add check typo
2022-07-11 09:49:48 -07:00
058e05df41 Add cri-dockerd url for offline.yml (#9079)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-07-11 06:45:49 -07:00
a7ba7cdcd5 [calico] add v3.23.2 and make it default (#9041) 2022-07-08 10:41:48 -07:00
c01656b1e3 Allow "openSUSE Tumbleweed" to be run (#9072)
The commit 1ce2f04 tried to merge multiple SUSE OS checks, including
"openSUSE Leap" and "openSUSE Tumbleweed", into a single SUSE check, but
that was not a perfect change.
The commit c16efc9 then tried to fix it for "openSUSE Leap", but it
didn't take care of "openSUSE Tumbleweed".
This adds "openSUSE Tumbleweed" to the OS check.
2022-07-08 04:55:47 -07:00
5071529a74 feat: upgrade cilium and add default variables (#9065)
Signed-off-by: eminaktas <eminaktas34@gmail.com>
Signed-off-by: Emin Aktas <emin.aktas@trendyol.com>
2022-07-07 10:35:34 -07:00
6d543b830a Fix vcloud-csi bug related to #9046 (#9066)
* Fix vcloud-csi bug related to #9046

Signed-off-by: yasintahaerol <yasintahaerol@gmail.com>

* add supervisor-fss-namespace=kube-system flag to vsphere-csi-controller-deployment

Signed-off-by: yasintahaerol <yasintahaerol@gmail.com>
2022-07-07 10:31:35 -07:00
e6154998fd fix calico tunl0 routes test (#9061)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2022-07-06 04:52:49 -07:00
01c6239043 increase ansible fact_caching_timeout (#9059) 2022-07-06 01:04:51 -07:00
4607ac2e93 fix(vsphere-csi): remove namespace env variable and set namespace as kube-system (#9046)
Signed-off-by: eminaktas <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>

Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
2022-07-06 01:00:50 -07:00
9ca5632582 fix-docker-option-in-centos-arm64 (#9047) 2022-07-05 08:26:47 -07:00
51195212b4 [argocd] update argocd to v2.4.3 (#9050) 2022-07-05 08:22:47 -07:00
7414409aa0 Add target components on check_readme_versions.sh (#9045)
This adds target components to check_readme_versions.sh after
merging https://github.com/kubernetes-sigs/kubespray/pull/9044
In addition, this fixes a typo in check_readme_versions.sh

This adds `foo_version` variables for some components because
check_readme_versions.sh verifies the corresponding version for
`<component name>_version` from main.yml. This change also keeps
main.yml consistent. In the long term, we will be able to
remove the existing `foo_image_tag` variables, but not yet, for
backwards compatibility for users.
2022-07-05 08:02:47 -07:00
adfd77f11d add-test-for-kubeadm-etcd-deployment (#9007) 2022-07-05 07:58:47 -07:00
f3ea8cf45e Add Rocky Linux 8 support for vagrant (#8905)
To test Kubespray on Rocky Linux 8 with vagrant, this adds it to
the Vagrantfile.
2022-07-05 07:50:47 -07:00
3bb9542606 Adding support for node & pod pid limit (#9038) 2022-07-05 00:20:48 -07:00
1d0b3829ed remove-etcd-unsupported-arch (#9049) 2022-07-04 05:39:24 -07:00
a5d7178bf8 [docs] update supported components (#9044) 2022-06-29 23:50:07 -07:00
cbef8ea407 [etcd] drop hashes for 3.5.2 2022-06-29 09:44:06 -07:00
2ff4ae1f08 [etcd] drop hashes for 3.5.1 2022-06-29 09:44:06 -07:00
edf7f53f76 [etcd] add etcd 3.5.4 and make it the default for 1.24.x 2022-06-29 09:44:06 -07:00
f58816c33c [krew] update krew (#9043) 2022-06-29 09:02:06 -07:00
1562a9c2ec add missing verbs (#9032) 2022-06-29 00:18:05 -07:00
6cd243f14e Add component version check for README.md (#9042)
During code review, reviewers needed to make sure README.md was also
updated when a pull request changed component versions.
This adds the corresponding check to reduce the reviewers' burden.
2022-06-29 00:14:05 -07:00
4b03f6c20f add-managed-ntp-support (#9027) 2022-06-28 13:15:34 -07:00
d0a2ba37e8 update deprecated syntax (#9040)
* `ansible.builtin.include` removed in version 2.16

Read the `ansible.builtin.include DEPRECATED` doc:

 https://docs.ansible.com/ansible/latest/collections/ansible/builtin/include_module.html#deprecated

* Update integration.md
2022-06-28 13:11:34 -07:00
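
For context, the deprecated module is normally replaced with include_tasks (dynamic) or import_tasks (static); a minimal sketch, with the task file name purely illustrative:

    # Before (ansible.builtin.include is deprecated, removed in ansible-core 2.16):
    - ansible.builtin.include: tasks/setup.yml   # illustrative file name

    # After (dynamic include):
    - ansible.builtin.include_tasks: tasks/setup.yml
    # ...or static import:
    - ansible.builtin.import_tasks: tasks/setup.yml
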
e8ccbebd6f add ingress nginx webhook (#9033)
* add ingress nginx webhook

* fix ingress nginx template
2022-06-28 11:55:35 -07:00
d4de9d096f fix-the-issue-of-miss-the-etcd-user (#9016) 2022-06-28 09:13:58 -07:00
e1f06dd406 Add support for the updated (startup|liveness|readiness)Probe.Port numbers in Cilium (#9031) 2022-06-27 11:00:59 -07:00
6f82cf12f5 let containerd_default_runtime be undefined by default (#9026) 2022-06-27 10:56:59 -07:00
ca8080a695 [crun] drop old crun versions 1.2 and 1.3 2022-06-27 10:36:59 -07:00
55d14090d0 [crun] add 1.4.5 and make it the default 2022-06-27 10:36:59 -07:00
da8498bb6f [cert-manager] Upgrade to v1.8.2 (#9029) 2022-06-24 23:50:58 -07:00
b33896844e apply calico bgp peer definition task to all nodes, but delegate to (#8974)
first control plane node
2022-06-24 19:42:57 -07:00
ca212c08de [runc] drop hashes for 1.0.2 and 1.0.3 2022-06-23 09:23:43 -07:00
784439dccf [runc] make 1.1.3 the new default 2022-06-23 09:23:43 -07:00
d818c1c6d9 [runc] add hashes for 1.1.3 2022-06-23 09:23:43 -07:00
b9384ad913 [runc] add hashes for 1.1.2 2022-06-23 09:23:43 -07:00
76b0cbcb4e bump pause container to 3.6 (#9024)
* [pod-infra] bump pod infra container version to 3.6

* [cri-dockerd] align pod infra container image with other CRIs
2022-06-23 01:43:44 -07:00
6bf3306401 Fixed concatenate str & int in auto_renew_certificates_systemd_calendar var (#8979) 2022-06-22 11:55:43 -07:00
bf477c24d3 Change from deprecated variable 2022-06-22 00:37:44 -07:00
79f6cd774a create snapshot-controller only if needed 2022-06-22 00:37:44 -07:00
c3c9a42502 support multus multi-architecture installation (#9012)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2022-06-21 10:56:26 -07:00
4a92b7221a add manage offline files script (#8956)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-06-21 03:49:43 -07:00
9d5d945bdb [MASTER] Add missing configuration for extra tolerations (#8908)
* Added new configuration item for extra tolerations in policy controllers

Signed-off-by: Sébastien Masset <smt.masset@gmail.com>

* Added new configuration item for extra tolerations in DNS autoscaler

Signed-off-by: Sébastien Masset <smt.masset@gmail.com>

* Aligned existing handling of extra DNS tolerations

Signed-off-by: Sébastien Masset <smt.masset@gmail.com>
2022-06-20 01:36:06 -07:00
475ce05979 Fix kubectl download for v1.23.8 amd64 (#9002)
kubectl_checksums for amd64 v1.23.8 was missing the last digit
2022-06-20 01:28:06 -07:00
57d7029317 ansible_maxversion_exclusive (#8919) 2022-06-20 01:24:06 -07:00
e4fe679916 [kubernetes] make v1.24.2 default 2022-06-17 11:08:33 -07:00
123632f5ed [kubernetes] add hashes for v1.22.11, v1.23.8 & v1.24.2 2022-06-17 11:08:33 -07:00
56d83c931b [CI] use debian-11 image with more disk space to ensure successful upgrade tests 2022-06-17 08:00:32 -07:00
a22ae6143a [CI] ensure upgrade tests cover defaults (containerd currently) 2022-06-17 08:00:32 -07:00
a1ec0571b2 [nerdctl] upgrade to 0.20.0 2022-06-17 08:00:32 -07:00
2db39d4856 [containerd] add hashes for 1.5.12, 1.5.13, 1.6.5 and 1.6.6 and make 1.6.6 the new default 2022-06-17 08:00:32 -07:00
e7729daefc Add assertion for IPv6 in verify settings
Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>
2022-06-17 10:36:43 +02:00
97b4d79ed5 feat: make kubernetes owner parametrized (#8952)
* feat: make kubernetes owner parametrized

* docs: update hardening guide with configuration for CIS 1.1.19

* fix: set etcd data directory permissions to be compliant to CIS 1.1.12
2022-06-17 01:34:32 -07:00
890fad389d suggest-to-use-nft-in-centos8 (#8987) 2022-06-17 01:30:32 -07:00
0c203ece2d fix-broken-link-in-readme 2022-06-17 09:29:45 +02:00
9e7f89d2a2 Remove forgotten 1.21 references 2022-06-16 08:55:38 +02:00
24c8ba832a [kubernetes] drop support for configuring insecure apiserver 2022-06-15 00:57:20 -07:00
c2700266b0 [download] fix dependencies for downloads 2022-06-15 00:57:20 -07:00
2cd8c51a07 [kubeadm] use v1beta3 configuration version
* extra admission controls now don't have a version in their file names
  eventratelimit.v1beta2.yaml.j2 -> eventratelimit.yaml.j2
* cri_socket variable includes the unix:// prefix to be conformant with
  upstream
2022-06-15 00:57:20 -07:00
589823bdc1 [CI] remove docker stand-alone molecule test 2022-06-15 00:57:20 -07:00
5dc8be9aa2 [CI] kube 1.24 requires at least 1775Mi of memory, might as well leave the default of 2048 2022-06-15 00:57:20 -07:00
fad296616c [docker] use cri-dockerd instead of dockershim for any kubernetes version deployed with docker as the container_manager 2022-06-15 00:57:20 -07:00
ec01b40e85 [cri_dockerd] upgrade cri_dockerd to 0.2.2 for 1.24 compatibility
* use new artifact release name
* enable cri-dockerd dual stack support if enable_dual_stack_networks
2022-06-15 00:57:20 -07:00
2de5c4821c [calico] clean up workarounds for older versions 2022-06-15 00:57:20 -07:00
9efe145688 [calico] make 3.23.1 the default and drop 3.20.x and 3.19.x 2022-06-15 00:57:20 -07:00
51bc64fb35 [cri-o] support cri-o 1.24 with kube 1.24 2022-06-15 00:57:20 -07:00
6380483e8b [kubeconfig] generate admin kube config from /etc/kubernetes/admin.conf instead of the workaround of using kubeadm init phase kubeadm admin which fails with cri-dockerd 2022-06-15 00:57:20 -07:00
ae1dcb031f [kubernetes] drop pre 1.22.0 workarounds 2022-06-15 00:57:20 -07:00
9535a41187 [kubernetes] make 1.22.0 the minimum version 2022-06-15 00:57:20 -07:00
47495c336b [kubernetes] drop hashes for 1.21.x 2022-06-15 00:57:20 -07:00
d69d4a8303 [kubernetes] make 1.24.1 the new default 2022-06-15 00:57:20 -07:00
ab4d590547 add-ubuntu2204-in-readme 2022-06-15 09:51:59 +02:00
85271fc2e5 add-ci-for-ubuntu2204 (#8958) 2022-06-15 00:47:19 -07:00
f6159c5677 Update Dockerfile base image (#8975)
Signed-off-by: hang.jiang <hang.jiang@daocloud.io>
2022-06-14 15:15:36 -07:00
668b9b026c [cert-manager] Upgrade to v1.8.1 (#8976) 2022-06-14 15:11:34 -07:00
77de7cb785 Expose calico-typha metrics port (#8855) 2022-06-14 07:17:33 -07:00
e5d6c042a9 Fix regex for replacing http_proxy (#8957) 2022-06-14 07:07:34 -07:00
3ae397019c Add arm64 Flatcar OS's pypy bootstrapping (#8959)
- Upgrade pypy's python version to `3.9`
- Upgrade pypy's version to `7.3.9`
2022-06-14 07:03:35 -07:00
7d3e59cf2e Remove unneeded socat installation for Flatcar (#8970) 2022-06-14 02:23:34 -07:00
4eb83bb7f6 fixes for docker reset (#8966) 2022-06-14 02:15:34 -07:00
1429ba9a07 Update docker version to 20.10.17 (#8965) 2022-06-14 02:11:33 -07:00
889454f2bc Fix typo in calico check (#8969) 2022-06-13 14:10:12 -07:00
2fba94c5e5 fix a typo in the "matallb_auto_assign" variable name (#8949)
* fix a typo in the "matallb_auto_assign" variable name

* add metallb check to fail when deprecated "matallb_auto_assign" variable is defined
2022-06-13 09:40:12 -07:00
4726a110fc remove-support-for-ansible-2.9-2.10 (#8951) 2022-06-10 03:35:47 -07:00
6b43d6aff2 Proposed fix to Issue 8667 (#8944)
2022-06-09 23:37:46 -07:00
024a3ee551 Replace callback_whitelist with callbacks_enabled (#8759)
When running molecule jobs, we saw the following warning message:

 [DEPRECATION WARNING]: [defaults]callback_whitelist option, normalizing names
 to new standard, use callbacks_enabled instead. This feature will be removed
 from ansible-core in version 2.15. Deprecation warnings can be disabled by
 setting deprecation_warnings=False in ansible.cfg.

callbacks_enabled has been available since Ansible 2.11 and Kubespray is using
Ansible 2.12 on the master branch, so we can use callbacks_enabled safely to
avoid the warning message.
2022-06-09 13:15:45 -07:00
cd7381d8de Drop Ansible support for v2.9 and v2.10 (#8925)
Ansible v2.9 and v2.10 are EOL, as described in [1].
This drops support for those versions, following upstream Ansible.

This sets use_ssh_args true always because that is required to use
ssh_args in ansible.cfg on Ansible v2.11 or later[2].

ansible_ssh_host is replaced with ansible_host because ansible_ssh_host
has been deprecated already and centos7 jobs failed due to the
deprecated ansible_ssh_host.

[1]: https://docs.ansible.com/ansible/devel/reference_appendices/release_and_maintenance.html#ansible-core-changelogs
[2]: https://docs.ansible.com/ansible/latest/collections/ansible/posix/synchronize_module.html#parameter-use_ssh_args
2022-06-09 07:07:42 -07:00
f53764f949 calicoctl repo has been merged in calico (#8920) 2022-06-09 07:01:42 -07:00
57c3aa4560 Merge pull request #8943 from ErikJiang/update-etcd-download-url
update etcd download url in offline.yml
2022-06-08 08:09:48 -07:00
bb530da5c2 [registry] Switch registry to use registry.k8s.io
Please see the conversation here: https://groups.google.com/a/kubernetes.io/g/dev/c/DYZYNQ_A6_c
2022-06-08 14:12:22 +02:00
cc6cbfbe71 Allow disabling calico CNI logs with calico_cni_log_file_path (#8921)
* Allow disabling calico CNI logs with calico_cni_log_file_path

Calico CNI logs up to 1G if it logs a lot with the current default settings:
log_file_max_size	100	Max file size in MB log files can reach before they are rotated.
log_file_max_age	30	Max age in days that old log files will be kept on the host before they are removed.
log_file_max_count	10	Max number of rotated log files allowed on the host before they are cleaned up.

See https://projectcalico.docs.tigera.io/reference/cni-plugin/configuration#logging

To save disk space, make the path configurable and allow disabling this log by setting
`calico_cni_log_file_path: false`

* Fix markdown

* Update roles/network_plugin/canal/templates/cni-canal.conflist.j2

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>
2022-06-07 09:22:56 -07:00
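
A minimal inventory sketch of the option described above; the explicit path in the commented line is only illustrative:

    # Disable Calico CNI file logging entirely (variable named in the change):
    calico_cni_log_file_path: false
    # ...or keep file logging at a custom location (illustrative path):
    # calico_cni_log_file_path: "/var/log/calico/cni/cni.log"
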
6f556f5451 update etcd download url in offline.yml
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-06-07 22:45:28 +08:00
9074bd297b Update RELEASE.md (#8937)
If opening https://groups.google.com/g/kubernetes-dev we can see the
following message:

  As of January 2, 2022, this group will be sunset in favor of dev@kubernetes.io.

So this replaces kubernetes-dev@googlegroups.com with the new one.

In addition, this adds actual steps to know how to create container images easily.
2022-06-06 23:55:49 -07:00
8030e6f76c fix 8893#issuecomment-1147154353 (#8933)
Signed-off-by: mahjonp <junpeng.man@gmail.com>
2022-06-06 12:40:21 -07:00
27bd7fd737 update kubespray image tag in readme to v2.19.0 (#8934)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-06-06 10:24:21 -07:00
77f436fa39 Fix: set fallback value of kubelet ip6 (#8858) (#8926)
* Fix: set fallback value of kubelet ip6 (#8858)

* Prune the spurious comma in the end of kubelet_address

- Update `roles/kubernetes/node/defaults/main.yml`

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>

* Fix: set fallback value of kubelet ip6 (#8858)

- Apply the lint: 132606368e

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>
2022-06-06 10:08:21 -07:00
814760ba25 Use blocks for macvlan tasks for each distribution (#8918)
For the code readability, this adds blocks for each distribution.
2022-06-06 07:50:24 -07:00
14c0f368b6 the KUESPRAYDIR defined but never used (#8930)
* fix dir error

* the command line should align
2022-06-06 07:42:23 -07:00
0761659a43 Update Kube-router version to 1.5.0 (#8928)
https://github.com/cloudnativelabs/kube-router/releases/tag/v1.5.0
2022-06-06 07:38:34 -07:00
a4f752fb02 Add subjectAltName to calico-apiserver certificate (#8907)
* Add AltName to calico-apiserver certificate

* fix support for centos7 openssl
2022-06-06 07:38:23 -07:00
b2346cdaec [feat] Upgrade metrics server to v0.6.1 (#8909)
* Metrics Server now requires access to nodes/metrics RBAC resource instead of nodes/stats. See: https://github.com/kubernetes-sigs/metrics-server/releases/tag/v0.6.0
* Minimize rbac permissions.
2022-06-06 07:34:37 -07:00
01ca7293f5 support reserve ephemeral-storage (#8895) 2022-06-06 07:34:26 -07:00
4dfce51ded Update dashboard to 2.6.0 (k8s 1.24 support) (#8906) 2022-06-06 16:47:33 +03:00
f82ed24c03 Update KUBESPRAY_VERSION (#8922)
As a step of release process, this updates KUBESPRAY_VERSION.
Thank you so much for creating and pushing container images of
the new version, floryut!
2022-06-05 22:08:20 +03:00
1f65e6d3b5 [ingress-nginx] upgrade to 1.2.1 (#8904) 2022-06-01 00:23:10 -07:00
9bf7aaf6cd Update RELEASE.md (#8884)
This updates RELEASE.md file to understand the release process
easily based on hands-on experience.
2022-06-01 00:23:03 -07:00
5512465b34 Revert "Set exact user for Kubelet services" (#8872)
This reverts commit e375678674.

The workaround of explicitly specifying root for the kubelet unit was
for pulling images from private registry. Kubernetes now have a
dedicated mechanism with imagePullSecret.
2022-06-01 00:19:02 -07:00
2f30ab558a Add 1.24 mappings for etcd and snapshot_controller (#8903)
Map appropriate versions of etcd and snapshot_controller containers with
k8s 1.24
2022-06-01 00:09:02 -07:00
5c136ae3af [calico] add 3.22.3 and 3.23.1 (#8897)
* [calico]
* add 3.22.3 and 3.23.1
* set 3.22.3 default
* fix download crd for calico 3.22.3 and upper

* update calico README.md
2022-05-31 13:27:23 -07:00
c927da00e0 Support cilium ip-masq-agent configuration (#8893)
* fix deploy Cilium with eBPF-based Masquerading failed

Signed-off-by: mahjonp <junpeng.man@gmail.com>

* forget to add the enable-ip-masq-agent flag

Signed-off-by: mahjonp <junpeng.man@gmail.com>
2022-05-31 09:26:53 -07:00
1600fd9082 clean up tags (#8880) 2022-05-31 07:52:53 -07:00
14acd124bc fix containerd images download bugs (#8894) 2022-05-31 00:22:53 -07:00
e3cbbfb9ed [kubernetes] make 1.23.7 the new default (#8888) 2022-05-29 17:08:51 -07:00
5f21e0b58b Update components version in README.md (#8886) 2022-05-29 14:10:51 -07:00
d22204a59f docs: add hardening guide (#8868) 2022-05-29 12:36:50 -07:00
90289b8502 add arch var in dockerfile (#8875)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-05-29 12:32:51 -07:00
78aacee21b [kubernetes] add hashes for 1.24.1 and other versions. (#8876)
* [kubernetes] add hashes for 1.24.1 and other versions.
versions: v1.21.13, v1.22.10, v1.23.7 & v1.24.1

* [kubernetes] make v1.23.7 default
2022-05-27 12:00:42 -07:00
f47aca3558 Added |bool for rhel_enable_repos (#8871) 2022-05-26 18:51:55 -07:00
73fc70dbe8 Delete kube_version v1.20- related code (#8869)
Current Kubespray supports Kubernetes version 1.21 or later with
`kube_version_min_required: v1.21.0`

So the kube_version v1.20- related code is not used at all.
This deletes that code for cleanup.
2022-05-25 21:31:22 -07:00
dc2a18e436 Merge pull request #8815 from simplekube-ro/dont_clobber_calico
[calico] don't clobber calico options set by the user
2022-05-24 10:25:48 -07:00
82590eb087 fix remove docker-ce.repo failed (#8856) 2022-05-24 05:44:06 -07:00
4c97ce747c Adding support for the kube-router flag --cluster-asn flag (#8837) 2022-05-23 16:39:10 -07:00
ebbc5ed0ce add liupeng0518 to reviewers (#8853) 2022-05-23 21:42:14 +03:00
dc1af5a9c5 [etcd] Add support for setting the request size limit (#8849)
* [etcd] Add extra documentation for `etcd_memory_limit` and `etcd_quota_backend_bytes`

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [etcd] Add support for setting ETCD_MAX_REQUEST_BYTES

Signed-off-by: necatican <necaticanyildirim@gmail.com>
2022-05-23 09:36:03 -07:00
85bd1eea27 fix(calico): add missing "get" verb (#8847)
Signed-off-by: irizzant <i.rizzante@gmail.com>
2022-05-21 01:20:00 -07:00
2b151c6aa2 cni-plugins: upgrade to 1.1.1 (#8852)
Signed-off-by: necatican <necaticanyildirim@gmail.com>
2022-05-21 11:14:16 +03:00
93fe3e06ef Add support for including annotations on aws-ebs-csi-controller (#8779)
* Add support for including annotations on aws-ebs-csi-controller

* update comment to specify role arn
2022-05-20 15:00:00 -07:00
9d3a894991 Possible remove ippools from cni config (#8845)
* Possible remove ippools from cni config

* Typo

* Update roles/network_plugin/calico/templates/cni-calico.conflist.j2

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>

* Update cni-calico.conflist.j2

Incorrectly deleted calico forwarding content.

* Update roles/network_plugin/calico/templates/cni-calico.conflist.j2

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>
2022-05-19 23:45:13 -07:00
0e6b727e53 Update docs for using venv (#8842)
Due to the many different Linux distributions, it is difficult to install
ansible dependencies system-wide in a stable way.
A part of the Kubespray doc[1] recommends using venv to avoid such issues,
and this applies venv usage to the other parts of the doc.

[1]: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/setting-up-your-first-cluster.md#set-up-kubespray
2022-05-19 23:39:12 -07:00
e42a01f203 Fixed systemd-networkd restart for ubuntu 22.04, when using reset.yml (#8841)
* Fixed systemd-networkd restart  for ubuntu 22.04

* fixed systemd-networkd restart for all Ubuntu
2022-05-20 09:34:53 +03:00
a28b58dbd0 [calico]use ipamconfig instead of calico ipam command (#8839)
* use ipamconfig instead of calico ipam command

* fix ansible lint
2022-05-19 11:13:20 -07:00
a26a9ee14f set apparmor_enabled in netchecker task (#8844) 2022-05-19 10:49:21 -07:00
c09fcd4f92 Skip gathering facts when reset_nodes is false (#8843)
The doc[1] explains we need to specify

  "-e reset_nodes=false -e allow_ungraceful_removal=true"

to delete an offline node. However the task "Gather facts"
tried to gather facts from the offline node as well and
the task failed.
This adds a condition to skip gathering facts when reset_nodes
is false on remove-node.yml.

[1]: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/nodes.md#3-remove-an-old-node-with-remove-nodeyml
2022-05-19 01:04:07 -07:00
593359ec77 fix kube-ovn image (#8838) 2022-05-18 08:36:53 -07:00
34ec4d5d40 Move woopstar to emeritus approver (#8809) 2022-05-18 02:36:53 -07:00
3d8f3bc0b7 Fix the invalid kube vip manifest (#8831)
* add Feature synchronized time checking

* fix-invalid-kube-vip-manifest
2022-05-17 23:48:55 -07:00
eea7bb7692 only need run this once (#8833)
calicoctl ipam xx
calicoctl apply xx
2022-05-17 09:52:27 -07:00
3a89e31dee [ansible] update ansible and cryptography requirements to work on ubuntu 22.04 (#8826) 2022-05-16 11:14:17 -07:00
0c504e4984 [docs] document support for ansible versions (#8827)
drop note about not supporting ansible 2.9 since we still cover it in
nightly CI
2022-05-16 00:50:17 -07:00
0bf070c33b doc: write how to use kata-container for pods (#8817)
kata-containers is not used by default even if kata_containers_enabled is enabled.
This updates the doc to explain how to do that.
2022-05-13 23:15:18 -07:00
dc8ad78206 fix: incorrect condition type (#8822)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2022-05-13 14:09:56 -07:00
48e938660d Allow replacement of address prefixes for all images (#8764)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-05-13 09:23:14 +03:00
632d457f78 [ingress-nginx] upgrade to 1.2.0 (#8814) 2022-05-12 09:07:14 -07:00
569a319ff5 [calico] don't clobber user set bgp configuration options that are not managed by kubespray 2022-05-12 15:50:38 +00:00
47812ec002 [calico] don't clobber user set ippool options that are not managed by kubespray 2022-05-12 15:50:05 +00:00
c27dee57ea [calico] don't clobber user set felixconfig options that are not managed by kubespray 2022-05-12 15:49:24 +00:00
b289f533b3 get wrong server name of coredns (#8811)
Signed-off-by: weizhou.lan@daocloud.io <weizhou.lan@daocloud.io>
2022-05-12 08:33:14 -07:00
3eb0a4071a set default value of name to "k8s-pod-network" (#8813)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2022-05-12 08:29:14 -07:00
5684610a55 Support metallb peer password (#8792)
* support metallb peer password

* add MetalLB BGP password example
2022-05-11 21:39:15 -07:00
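
A rough sketch of a BGP peer entry carrying a password; the surrounding structure and field names of the MetalLB peer list are assumptions here, only the password support is what the change adds, and all values are placeholders:

    # Assumed structure and placeholder values:
    metallb_peers:
      - peer_address: 192.0.2.1
        peer_asn: 64512
        my_asn: 4200000000
        password: "changeme"   # BGP peer password introduced by this change
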
f26f544ff6 [kube-ovn]: update kube-ovn version and sync some feature (#8790)
* [kube-ovn]: some feature

kube-ovn vlan mode
ipv6/ipv4 dual stack
...

* remove unused env

* fix readinessprobe
2022-05-11 21:35:15 -07:00
b9e5b0cb53 UpCloud server plan, firewall, load balancer integration (#8758)
* [upcloud] add option to use preconfigured cpu/mem plan

* [upcloud] add option to use firewall rules for API server/SSH access

* [upcloud] add option to use managed load balancer
2022-05-11 10:15:03 -07:00
13443b05a6 Overhaul Cilium manifests to match the newer versions (#8717)
* [cilium] Separate templates for cilium, cilium-operator, and hubble installations

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Update cilium-operator templates

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Allow using custom args and mounting extra volumes for the Cilium Operator

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Update the cilium configmap to filter out the deprecated variables, and add the new variables

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Add an option to use Wireguard encryption on Cilium 1.10 and up

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Update cilium-agent templates

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Bump Cilium version to 1.11.3

Signed-off-by: necatican <necaticanyildirim@gmail.com>
2022-05-11 06:23:04 -07:00
e70c00a0fe fix: Waiting until Volumes will be detached from the node on graceful node removal (#8739) 2022-05-10 09:57:43 -07:00
bb67b654c5 local volume provisioner should not run on control plane nodes by default (#8805) 2022-05-10 19:04:24 +03:00
aef25819bc nit: Add offline note for kube-* images (#8718) 2022-05-10 06:41:44 -07:00
1d96f465f4 arm64 support of cilium (#8803)
since cilium v1.10, arm64 is supported
https://cilium.io/blog/2021/05/20/cilium-110

Signed-off-by: weizhou.lan@daocloud.io <weizhou.lan@daocloud.io>
2022-05-10 02:55:43 -07:00
8f618ab408 Fix condition on kata_containers_version/kube_version when kata_containers_enabled is false (#8804) 2022-05-09 14:56:32 -07:00
5296d7ef9c Added playbook to wait for cloud-init to finish (#8799) 2022-05-09 10:49:19 -07:00
b715500b48 csi: bump upcloud csi driver (#8784) 2022-05-09 10:43:19 -07:00
37a5271f5a feat: add variables to manage makeIPTablesUtilChains and streamingConnectionIdleTimeout kubelet parameters (#8796) 2022-05-09 09:25:19 -07:00
42fc71fafa [PodSecurityPolicy] Move the install of psp (#8744) 2022-05-09 09:21:19 -07:00
02b6e4833a Update Kata Containers runtime (#8797)
* Update Kata containers binary to 2.4.1 version

* Update overhead kata runtime values

* Fix kata-qemu default values in CRI-O
2022-05-08 17:01:18 -07:00
323a111362 [kubelet] set correct resolv.conf for Ubuntu 22.04 (#8795) 2022-05-06 16:31:04 -07:00
e7df4d3dd9 add support for service-account-lookup parameter (#8781)
* feat: add variable to manage service-account-lookup on kube-apiserver

* docs: add documentation about service-account-lookup variable
2022-05-06 00:39:07 -07:00
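
A one-line sketch of what enabling this could look like; the exact Kubespray variable name is an assumption based on the commit title:

    # Assumed variable name; maps to the kube-apiserver --service-account-lookup flag:
    kube_apiserver_service_account_lookup: true
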
3e52a0db95 Add optional setting for ca data in auth webhook (#8777)
* Add optional setting for ca data in auth webhook

* add webhook token auth variables to sample inventory
2022-05-05 14:52:43 -07:00
94484873d1 [containerd] add 1.6.4 which is needed for kubernetes 1.24.0 and make it the default (#8791) 2022-05-05 14:10:43 -07:00
0d6ea85167 Assert that IP range is enough for the nodes (#8720)
* Assert that IP range is enough for the nodes 

Co-authored-by: Necatican Yıldırım <necaticanyildirim@gmail.com>

* Fixed whitespace

* Fixed errors

* Fixed errors

Co-authored-by: Necatican Yıldırım <necaticanyildirim@gmail.com>
2022-05-05 08:48:20 -07:00
674ec92224 Add crictl 1.24 for new k8s version (#8787) 2022-05-05 08:40:22 -07:00
e7e5037a86 Add a container_manager validation (#8785) 2022-05-04 23:58:19 -07:00
fbcf426240 Drop containerd 1.4 support (#8780)
Version 1.4 of containerd has been End of Life since March 3, 2022,
as noted at https://containerd.io/releases/#support-horizon
It makes sense for Kubespray to also drop support for it, following containerd.
2022-05-04 23:02:20 -07:00
2301554e98 [kubernetes] add hashes for 1.24.0 (#8783) 2022-05-04 22:58:21 -07:00
5bc35002ba [remove-etcd-node] fix json path query 2022-05-04 06:35:51 -07:00
9143810a4d [CI] add remove node job 2022-05-04 06:35:51 -07:00
8f118fb619 [reset] fix task inclusion logic for network plugin 2022-05-04 06:35:51 -07:00
1113460b68 [cri-o] molecule switch from ubuntu 18 to ubuntu 20 2022-05-04 14:46:17 +02:00
74c7e009b7 Move flannel to kubespray/quay for CI (#8774) 2022-05-04 00:11:30 -07:00
c20ab7d987 add fix for GCP CSI driver (#8616)
Signed-off-by: Lubos Mercl <lubos.mercl@gmail.com>
2022-05-03 08:55:56 -07:00
fe66121287 [Openstack] master foreach and fixes (#8709)
* [openstack] fix for new network modules

* [openstack] for-each master nodes
2022-05-03 08:51:56 -07:00
9605bbaa67 [nerdctl] upgrade to 0.19.0 (#8772) 2022-05-03 05:39:56 -07:00
b7ce6a9f79 [ansible] upgrade to 5.7 (#8771) 2022-05-03 01:29:55 -07:00
c04a73c11a Update containerd version to 1.6.3 (#8770)
containerd version 1.6.3 has been released, as noted in [1].
This adds the checksums and makes Kubespray use it.

[1]: https://github.com/containerd/containerd/releases/tag/v1.6.3
2022-05-02 22:43:55 -07:00
f184725c5f Use ansible 2.12 for testcases_prepare (#8763)
tests/requirements.txt links to tests/requirements-2.12.txt, so
Kubespray uses ansible 2.12 by default for testing. However we
forgot to update testcases_prepare.sh to use ansible 2.12.
This updates testcases_prepare to use ansible 2.12.
2022-05-02 11:34:31 -07:00
26a0b0f1e8 chore(flannel): change flannel repository and upgrade image version (#8740)
* chore: change flannel repository and upgrade image version

* docs: upgrade flanneld version
2022-05-02 11:29:14 -07:00
fa1d222eee add support for EventRateLimit plugin configuration (#8711)
* feat: add support for EventRateLimit admission plugin

* docs: add documentation about admission_control_config_file and EventRateLimit configuration
2022-05-02 11:03:15 -07:00
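
A hedged sketch of the kind of configuration involved; the variable names and limit values below are assumptions chosen for illustration, not copied from the change:

    # Assumed variable names and placeholder limits:
    kube_apiserver_enable_admission_plugins:
      - EventRateLimit
    kube_apiserver_admission_control_config_file: true
    kube_apiserver_admission_event_rate_limits:
      namespace_limit:
        type: Namespace
        qps: 50
        burst: 100
        cache_size: 2000
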
56cf163a23 [kubernetes] actually make 1.23.6 the default (#8767) 2022-05-02 00:43:14 -07:00
afcedf6d77 Pull master, Rebase, add changes again (#8745) 2022-05-02 00:39:14 -07:00
21fc197ee0 Ensure containerd service unmasking (#8726)
* Force containerd service unmasking

Force systemd to unmask and start service when adding containerd service

* Eliminate restart and move unmasking step

Switch to start instead of restart
Move unmasking to restart handler

* Add unmasking to similar container runtimes

* Add missing service names
2022-04-29 08:39:14 -07:00
fcb4c8fb61 [kubernetes] make 1.23.6 the new default 2022-04-29 07:57:13 -07:00
b6e2c56ae6 [kubernetes] add hashes for 1.21.12 2022-04-29 07:57:13 -07:00
b005985d4e [kubernetes] add hashes for 1.23.6 2022-04-29 07:57:13 -07:00
1294fd5730 check calico ipv6 (#8738)
* check calico ipv6

* just check ipip mode for ipv6
2022-04-29 00:35:13 -07:00
835fd86a08 [CI] split molecule testes to run in parallel (#8756)
* add parametrization to molecule_run.sh

* [CI] split molecule tests to allow parallelization of work
2022-04-29 00:09:12 -07:00
b7004d72c5 [kubernetes] add hashes for 1.22.9 (#8746)
* [kubernetes] add hashes for 1.22.9
2022-04-28 16:10:50 +03:00
eb566ca626 Remove aufs-tools from Ubuntu requirement (#8754)
aufs-tools was required for the docker.io package originally,
but Kubespray installs the docker-ce package instead today.
In addition, Ubuntu 20.04 doesn't provide aufs-tools, as noted in [1].
This removes aufs-tools from the Ubuntu requirements.

[1]: https://bugs.launchpad.net/ubuntu/+source/aufs-tools/+bug/1947004
2022-04-27 23:04:55 -07:00
aa12f1c56b [CI] fix packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha job (#8752) 2022-04-27 12:39:36 -07:00
6cc5b38a2e [terraform] use modern day equinix metal provider (#8748)
* [terraform] use modern day equinix metal provider

* [CI] ensure packet job tests metal
2022-04-27 10:34:13 -07:00
e6c4330e4e calico: vxlan is the default for calico_network_backend (#8750)
Since https://github.com/kubernetes-sigs/kubespray/pull/8434
2022-04-27 02:24:11 -07:00
1e827f9807 Update kata-containers.md (#8747)
* kata container related options exist in k8s-cluster.yml,
  not k8s_cluster.yml

* https://github.com/kata-containers/runtime has been archived and
  https://github.com/kata-containers/kata-containers is used today.
2022-04-26 07:06:53 -07:00
a4f26dc8f3 [terraform/openstack] add safespring to provider list (#8735) 2022-04-25 04:43:39 -07:00
3f065918d9 Update verbs for volumeattachments resource (#8731)
* Update verbs for volumeattachments resource

Update verbs for volumeattachments resource so that the kubelet can create volumeattachments and mount volumes when deploying Kubernetes on VMware vSphere.

* Update verbs for volumeattachments resource

Update verbs for volumeattachments resource to match upstream

* Update vsphere-csi-controller-rbac.yml.j2
2022-04-22 00:04:13 -07:00
2c2d4513ac [helm] upgrade to 3.8.2 (#8723) 2022-04-18 12:51:50 -07:00
937e64d296 Update flannel use install-cni-plugin to fit upstream (#8714)
* Update flannel use install-cni-plugin to fit upstream

* Replace flannel cni repo

* Remove download flannel binary
2022-04-18 09:44:41 -07:00
3261d26181 [etcd] ensure etcd is properly upgraded when managed by kubeadm (#8722)
* [etcd] ensure etcd is properly upgraded when managed by kubeadm

* [CI] add periodic job to test upgrade of etcd managed by kubeadm
2022-04-17 10:32:41 -07:00
c98a0a448f metallb: Add images to downloads (#8715)
For offline mode
2022-04-14 10:06:46 -07:00
7e7218f5ce etcd: add etcd v3.5.3 for kubernetes 1.21+ (#8712)
* As per issue https://github.com/kubernetes-sigs/kubespray/pull/8664, I propose to make etcd v3.5.3 the default for any kubernetes version which uses 3.5.x, since 3.5.[0-2] is not recommended for production.
2022-04-14 05:48:46 -07:00
45262da726 [calico] call calico checks early on to prevent altering the cluster with bad configuration (#8707) 2022-04-14 01:08:46 -07:00
aef5f1e139 Add tz to kubespray image 2022-04-13 08:22:45 +02:00
3d4baea01c Add tag to AWS VPC subnets for automatic subnet discovery by load balancers or ingress controllers (#8705) 2022-04-12 10:05:23 -07:00
30306d6ec7 Enable external CA mode for control-plane deployment (#8620) 2022-04-12 05:47:23 -07:00
d7254eead6 UpCloud integration (#8653)
* [upcloud] add upcloud csi-driver

* Option to use ansible_host as api ip for kubueconfig
2022-04-11 15:13:23 -07:00
9dced7133c Fixes for Hetzner terraform and Hetzner Cloud (#8702)
* - add ability to specify the network_zone in hetzner terraform
- Export the network id from hetzner terraform to the generated inventory.ini

* - Add with_networks variable to allow different deployments of hcloud controller manager

- Add network id to hcloud controller secret (added via the inventory)

- Don't include extra_args if it's not set
2022-04-11 10:26:06 -07:00
c2fb1a0747 Add VAGRANT_ANSIBLE_TAGS for normal deployment (#8697)
The current ansible.tags 'facts' is for skipping the actual Kubespray deployment
in vagrant CI because the deployment takes much time. However the static
'facts' tag skips the deployment for normal usage of vagrant also.
That causes confusion.

This adds VAGRANT_ANSIBLE_TAGS to skip the deployment for vagrant CI.
2022-04-08 23:58:04 -07:00
00a4d2d3c4 Removed quotation of nerdctl_extra_flags. (#8695)
The quotations in the variable nerdctl_extra_flags are not required for the `nerdctl_image_pull_command` and throw the following error when executing the cluster playbook with `container_insecure_registries` set:
        unknown flag: --insecure-registry\\\"
This happens because the complete nerdctl_image_pull_command string variable gets split into an array of strings for the cmd task. The escaped quotation doesn't get escaped properly and is added to the cmd-string array as part of the command. This leads to a wrongly written insecure-registry flag, which throws this error.
2022-04-08 08:02:43 -07:00
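
For context, the malformed flag comes from pulling through insecure registries; a rough sketch of the kind of setting that triggers it, with the variable format assumed and the registry address a placeholder:

    # Assumed format; the registry address is a placeholder:
    container_insecure_registries:
      - registry.example.local:5000
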
424ef3b3f9 [calico] add calico apiserver (#8690)
* [calico] add calico apiserver

* fix yamllint

* remove addext argument

* Configure API server with the CA bundle

* add check kdd
2022-04-08 00:02:42 -07:00
996ef98b87 Add support for kube-vip (#8669)
Signed-off-by: Mathieu Parent <math.parent@gmail.com>
2022-04-07 10:37:57 -07:00
19d5a1c7c3 Ensure all Kubelet required kernel values are configured when enabling protectKernelDefaults (#8692) 2022-04-07 08:33:59 -07:00
0481dd946f [cert-manager] Upgrade to v1.8.0 (#8688) 2022-04-06 00:52:57 -07:00
29109575f5 fix: reset docker was not removing docker properly (#8680)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-04-05 21:36:55 -07:00
3782573ede Single quotes are missing in jsonpath argument of kubectl get node (#8683) 2022-04-05 09:45:38 -07:00
bba91a7524 split kube_feature_gates variable for different kubernetes components (#8677)
* feat: split kube_feature_gates variable for different kubernetes components

* docs: add kube_feature_gates component variables
2022-04-05 05:39:37 -07:00
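
A sketch of the per-component variables this split suggests; the exact names follow the pattern implied by the commit but are assumptions, and the gates shown are placeholders:

    # Assumed per-component variable names; gates are placeholders:
    kube_apiserver_feature_gates: ["TTLAfterFinished=true"]
    kube_controller_manager_feature_gates: []
    kube_scheduler_feature_gates: []
    kube_proxy_feature_gates: []
    kubelet_feature_gates: []
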
b67cadf743 [crun] upgrade to 1.4.4 (#8675) 2022-04-04 23:57:36 -07:00
56dda4392c [validate-container-engine] check if kubelet is present was not working (#8679)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-04-04 09:34:12 -07:00
34fec09ff1 [containerd] upgrade versions to address CVE-2022-24769 (#8671)
* [containerd] add hashes for 1.5.11

* [containerd] add hashes for 1.6.2

* [containerd] make 1.6.2 the new default
2022-04-04 05:30:11 -07:00
cefd1339fc [vsphere_csi] update to 2.5.1 and make external_vsphere_version 7.0u1 by default (#8676) 2022-04-04 01:08:11 -07:00
b915376194 [runc] upgrade to 1.1.1 (#8674) 2022-04-04 00:42:23 -07:00
455cc6ff75 [nerdctl] upgrade to 0.18.0 (#8672) 2022-04-04 00:42:11 -07:00
cc9c376d0f [validate-container-engine] add facts tag to tasks needed for vagrant jobs (#8678) 2022-04-04 00:32:11 -07:00
018611f829 Fix quotation of nerdctl_extra_flags (#8668)
Due to missing quotation of nerdctl_extra_flags, ansible-playbook failed:

  Using module file /usr/local/lib/python3.6/dist-packages/ansible/modules/command.py
  Pipelining is enabled.
    [..]
    File "/usr/lib/python3.8/shlex.py", line 191, in read_token
      raise ValueError("No closing quotation")

This fixes the issue.

T-Eberle investigated the issue and found the solution.
Thank you T-Eberle!
2022-04-02 10:56:09 -07:00
1781eab21f fix: uninstall container engine if service is running (#8662) 2022-04-01 09:20:46 -07:00
78b05d0ffc fix disk controller type in Vagrantfile (#8656) 2022-03-31 10:51:01 -07:00
1c0df78278 Add ETCD_EXPERIMENTAL_INITIAL_CORRUPT_CHECK flag to etcd config (#8664) 2022-03-31 08:17:01 -07:00
6cc9da6b0a Update vagrant.md (#8663)
To make it easier to read, this adds new lines.
2022-03-31 00:07:00 -07:00
6af9cae0a5 Add missing 2.10 ansible test (#8665) 2022-03-30 08:12:27 -07:00
ef29455652 [ansible] make ansible 5.x the new default version (#8660)
* [ansible] make ansible 5.x the new default version and move different versions tested to nightly jobs

* [CI] jobs were missing proper ansible cleanup
2022-03-29 15:36:11 -07:00
503ab0f722 Run 0100-dhclient-hooks if dhcpclient is enabled (#8658)
If running Kubespray on static IP environments, a task failed like:

  TASK [kubernetes/preinstall : Configure dhclient hooks for resolv.conf (RH-only)]
  fatal: [ak8s2]: FAILED! => {
    "changed": false, "checksum": "..",
    "msg": "Destination directory /etc/dhcp/dhclient.d does not exist"}

This adds a check on dhclientconffile so that the 0100-dhclient-hooks task
runs only if dhclient is enabled.
2022-03-29 00:11:11 -07:00
90883e76af terrform/openstack: Fix templating of ansible_ssh_common_args in no_floating.yml if used as TF module (#8646)
* terraform/openstack: Use path.module for ansible_bastion_template.txt

This extends on #7643 by not using path.root, but switching to path.module
to allow use of the terraform code as a module itself. This change then keeps
all calls to the template file stable even for that use-case.

* terraform/openstack: Make sed calls fail on errors

Using a single sed call with two replacements produces proper exit codes
and allows errors to be recognized by terraform.
2022-03-29 00:07:11 -07:00
113de8381c [ansible] add support for ansible 5 (ansible-core 2.12) (#8512) 2022-03-28 08:49:22 -07:00
652f2edbe1 [etcd] add 0 hash for arm v3.5.2 to prevent deployment failures 2022-03-28 08:40:30 +02:00
a67e36703f Update cert-manager to v1.7.2 (#8648) 2022-03-26 04:53:22 -07:00
73c6943402 fix vagrant parameter (#8650) 2022-03-25 18:57:58 -07:00
d46817d690 Remove centos7 molecule while opensuse mirror is flaky 2022-03-25 16:57:58 -07:00
97cb64c62d Remove k8s module for ns creation 2022-03-25 16:57:58 -07:00
3f70241fb7 Update kubernetes image to 2.18.1 2022-03-25 16:57:58 -07:00
21b71b38a3 Vagrantfile: add var to set ansible verbosity level (#8639)
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
2022-03-22 06:11:44 -07:00
b2f9442aba Have ingress_controller and external_provisioner in upgrade-cluster.yml (#8640) 2022-03-22 05:43:43 -07:00
fa9f85c7e9 [sysctl] set fs.may_detach_mounts=1 even when CRIs don't set it themselves (#8635) 2022-03-21 17:36:13 -07:00
ffa285c2e7 Fixed cluster roles for openstack cloud controller (#8638) 2022-03-21 06:19:21 -07:00
7b1dc600d5 Fix the condition of drain on pre-remove task (#8634)
When running cluster.yml for new machines where containerd is already
installed but a Kubernetes cluster was not installed before, the task
"remove-node | List nodes" failed like

  "changed": false,
  "cmd": [
    "/usr/local/bin/kubectl", "--kubeconfig",
    "/etc/kubernetes/admin.conf", "get", "nodes", "-o",
    "go-template={{ range .items }}{{ .metadata.name }}
    {{ "\n" }}{{ end }}"
   ],
   ..
   "stderr": "error: stat /etc/kubernetes/admin.conf: no such file or directory",

That was due to the lack of a check for whether an existing Kubernetes cluster
exists before running the "kubectl drain" command.
This adds the check to avoid the issue.
2022-03-21 01:39:10 -07:00
5e67ebeb9e [container image] use focal (ubuntu 20.04) base image for our docker builds (#8631) 2022-03-18 09:58:41 -07:00
af7066d33c Updated openstack cloud controller version to v1.22.0 (#8629)
* Updated openstack cloud controller version to match kubernetes version

* Rolled back file structure change
2022-03-18 01:47:16 -07:00
dd2d95ecdf [calico] don't enable ipip encapsulation by default and use vxlan in CI (#8434)
* [calico] make vxlan encapsulation the default

* don't enable ipip encapsulation by default
* set calico_network_backend by default to vxlan
* update sample inventory and documentation

* [CI] pin default calico parameters for upgrade tests to ensure proper upgrade

* [CI] improve netchecker connectivity testing

* [CI] show logs for tests

* [calico] tweak task name

* [CI] Don't run the provisioner from vagrant since we run it in testcases_run.sh

* [CI] move kube-router tests to vagrant to avoid network connectivity issues during netchecker check

* service proxy mode still fails connectivity tests so keeping it manual mode

* [kube-router] account for containerd use-case
2022-03-17 18:05:39 -07:00
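
In inventory terms, the new defaults described above amount to roughly the following; the commented lines show how one might restore the previous IP-in-IP behaviour (a sketch, not copied from the change):

    # Sketch of the new defaults:
    calico_network_backend: vxlan
    calico_ipip_mode: Never
    calico_vxlan_mode: Always
    # To keep the old IP-in-IP behaviour instead (sketch):
    # calico_network_backend: bird
    # calico_ipip_mode: Always
    # calico_vxlan_mode: Never
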
a86d9bd8e8 do not remove package in validate container engine role when Fedora CoreOS distr (#8626) 2022-03-17 06:49:20 -07:00
21b1516d80 [kubernetes] add hashes for 1.21.11 2022-03-17 05:03:20 -07:00
4c15038194 [kubernetes] add hashes for 1.22.8 2022-03-17 05:03:20 -07:00
538f9df5cc [kubernetes] make 1.23.5 the default 2022-03-17 05:03:20 -07:00
efb0412b63 [kubernetes] add hashes for 1.23.5 2022-03-17 05:03:20 -07:00
5a486a5cca Calico: Fix Wireguard support for CentOS Stream 9/RHEL 9 Beta (#8625) 2022-03-17 04:11:20 -07:00
394857b5ce [docker] add support for cri-dockerd as a replacement for dockershim (#8623) 2022-03-16 16:28:11 -07:00
5043517cfb [containerd] avoid cleanup of /usr/bin on ostree distributions (#8624) 2022-03-15 13:47:48 -07:00
307d122a84 Helm-apps role for installing helm charts (#8347)
* Sketch of helm-apps role interface

* helm-apps: Early implementation and settings

* helm-apps: Fix README.md example playbook

* fixup! Sketch of helm-apps role interface

* Make the argument specs more explicit

* Remove exposed options from hardcoded default

* Simplify example playbook in README.md

- Define directly the roles parameters
- Add an example of option override for one chart only

* Use release instead of charts

Make explicit that the role is managing releases, not charts.
Simplify parameters naming
2022-03-14 08:29:58 -07:00
d444a2fb83 [systemd-resolved] Fix DNS configuration according to docs/dns-stack.md and during reset of cluster (#8560) (#8561) 2022-03-14 02:08:22 -07:00
fb7c56e3d3 Add unit test for print_hostnames of inventory.py (#8558)
This adds a unit test for the function.
2022-03-12 23:40:23 -08:00
2b79be68e7 fix typo and duplicated declaration of ingressclasses (#8591) 2022-03-12 23:36:23 -08:00
512d5e3348 Restart etcd if the etcd version changes (#8556)
Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-03-11 18:08:23 -08:00
4b6892ece9 Add epoch to docker-ce and docker-ce-cli packages to ensure docker up… (#8618)
* Add epoch to docker-ce and docker-ce-cli packages to ensure docker upgrade

* Split container-engine redhat vars to support legacy RHEL 7 version management

* Support ansible_distribution_major_version when discovering vars with ansible_os_family
2022-03-11 02:45:07 -08:00
5a49ac52f9 feat(calico): add configurable ipam strictaffinity (#8581)
Signed-off-by: Toni Tauro <toni.tauro@adfinis.com>
2022-03-07 22:58:33 -08:00
db1e30e4fc [calico] add 3.22.1 (#8612) 2022-03-07 22:54:34 -08:00
b4a61370c8 [cri-o] add cri-0 1.23.x (#8599) 2022-03-07 05:39:07 -08:00
58b2f39ce5 add IPv6 listen directive to nginx if enable_dual_stack_networks (#8596) 2022-03-07 05:39:00 -08:00
56d882abed Clarify confirmation prompt (#8589)
Entering any value causes the play to proceed, e.g., entering "no<Enter>". (This is simply how Ansible's pause module behaves.)
2022-03-07 05:38:54 -08:00
39acb2b84d Update ansible-lint to 5.4.0 (#8607) (#8608)
* Update ansible-lint to 5.4.0 (#8607)

It seems that Rich version 11.0.0 has a breaking change,
so we need to update ansible-lint to 5.3.2 or later.

* Fix for ansible-lint no-changed-when rule (#8607)
2022-03-07 05:35:55 -08:00
3ccba08983 Fix crio_packages for Rocky8 (#8594) 2022-03-07 05:29:05 -08:00
632aa764e6 etcd: add etcd v3.5.1 for kubernetes 1.22+ (#8588)
* There is an issue with etcd v3.5.0 where it resurrects ancient members see: https://github.com/etcd-io/etcd/issues/13196
This issue is clearly fixed in etcd v3.5.2

* Just keep the checksums
2022-03-07 05:28:54 -08:00
f6342b6cf4 [crun] upgrade to 1.4.3 (#8598) 2022-03-04 08:22:52 -08:00
471585dcd5 [containerd]: upgrade versions to fix CVE-2022-23648 (#8597)
* [containerd] add hashes for 1.6.1

* [containerd] make 1.6.1 the default

* [containerd] add hashes for 1.5.10

* [containerd] add hashes for 1.4.13

* [nerdctl] bump to 0.17.1
2022-03-03 14:51:16 -08:00
51821a811f MetalLB: update to v0.12.1 (#8593)
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
2022-03-03 08:49:48 -08:00
299a9ae7ba terraform/gcp: Add ingress_whitelist (#8590)
Also, do not create unneeded resources (target pools are charged and should
only be created when needed).
2022-03-02 16:52:46 -08:00
bf7a506f79 [containerd] Upgrade containerd to 1.6.0 and re-enable arm64 architecture with default options (#8555)
* [containerd] add checksums for 1.6.0

* [containerd] promote 1.6.0 as the new default

* [runc] promote 1.1.0 as the new default to allow arm deployments out of the box

* [nerdctl] bump to 0.17.0 to align with containerd 1.6.0

* [reset] allow crictl stopp and rmp commands to fail
2022-03-02 15:27:13 -08:00
2e925f82ef Revert "Fix: typos in docs and comments (#7805)" (#8592)
This reverts commit 417180246c.
2022-03-02 11:57:13 -08:00
ddef7e1139 missing "check_mode: no"s for several read-only tasks (#8584)
this is not complete -- there are almost certainly more instances of
this issue
2022-03-02 09:29:14 -08:00
672e47a7eb feat: check & uninstall container engine (#8439)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-02-28 10:59:46 -08:00
3e8e64a3e5 fix typo / error regarding etcd and k8s_cluster groups (#8580)
As far as I can tell this is simply a typo that has existed from the beginning. Having it this way around (`etcd` group as a child and thus subset of `k8s_cluster`) mirrors what is written in the preceding sentence.
2022-02-28 02:54:58 -08:00
b554246502 Fix host DNS config 1) being edited too soon and 2) not working with NM (#8575)
Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-02-26 10:29:23 -08:00
6d683c98a3 [Terraform-AWS] Replace CLB with NLB (#8578) 2022-02-24 23:53:54 -08:00
ee079f4740 fix(coredns): make sure to keep coredns repository namespace (#8572)
fix: regex

fix: wrong regex_replace usage
2022-02-24 01:01:33 -08:00
a090038d02 [CI] add ara to collect CI job logs (#8545) 2022-02-23 07:36:19 -08:00
4f1499bd23 Fixup remaining etcd_kubeadm_enabled variables (#8576) 2022-02-23 06:46:18 -08:00
36393d77d3 Encrypting Secret Data at Rest (#8574)
* change default value for Encrypting Secret Data at Rest to secretbox, remove experimental flag and add documentation

* fix MD012/no-multiple-blanks
2022-02-23 03:04:18 -08:00
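
A minimal sketch of the related inventory settings; the variable names are assumptions based on Kubespray's existing encryption-at-rest support:

    # Assumed variable names:
    kube_encrypt_secret_data: true
    kube_encryption_algorithm: "secretbox"   # new default per this change
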
e053ee4272 Check all places with check_mode: no for side effects (#8573)
and fix the one with side effect.

Also removes `notify` from this task as the task has `changed_when: false`
and notify is not going to fire.
2022-02-23 01:20:18 -08:00
1d46c07307 Cleanup crictl configuration file (#8569) 2022-02-23 00:58:19 -08:00
f9b5e448c1 Prevent removing etcd member when running in check mode (#8570) 2022-02-22 23:34:18 -08:00
3effb008c9 improve validation conditions for MetalLB BGP Peers (#8568) 2022-02-22 23:12:18 -08:00
a088f492f4 chore: remove addon-resizer (#8566)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-02-22 09:51:16 -08:00
e9c8913248 Add kubeadm option to etcd_deployment_type to replace the etcd_kubeadm_enabled variable (#8317)
* Add kubeadm option to etcd_deployment_type to replace the etcd_kubeadm_enabled variable

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* Add etcd kubeadm deployment documentation

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* Refactor warning for the deprecated 'etcd_kubeadm_enabled' variable

Signed-off-by: necatican <necaticanyildirim@gmail.com>
2022-02-22 08:53:16 -08:00
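
In practice this turns the old boolean into one more value of an existing variable; a minimal sketch:

    # Old (deprecated by this change):
    # etcd_kubeadm_enabled: true
    # New:
    etcd_deployment_type: kubeadm   # existing values such as host/docker remain valid
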
b9a27c91da Update kubernetes dashboard to 2.5.0 2022-02-21 03:54:11 -08:00
d4f654275b Set default kubernetes version to 1.23.4 2022-02-21 03:54:11 -08:00
f6eb4c749d Add kubernetes hashes for 1.23.4/1.22.7/1.21.10 2022-02-21 03:54:11 -08:00
418fc00718 fix: kube-dns service deletion (#8565)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-02-21 02:48:11 -08:00
2537177929 Fix amazon docker version (#8564) 2022-02-18 23:50:11 -08:00
9af719bf99 This fixes the etcd node removal. (#8526)
Since we are already on an etcd node while executing the commands, there 
is no need to find out an etcd IP because it is on localhost.
2022-02-18 07:20:23 -08:00
9e020b252e Configure Etcd container_manager explicitly (#8521)
* Configure Etcd container_manager explicitly

* Add explanation for the Etcd container_manager variable

* Remove redundant space in etcd vars
2022-02-18 00:50:23 -08:00
cc45e365ae Fix print_hostnames of inventory.py (#8554)
When trying to run print_hostnames of inventory.py, it outputs the following
error:

 $ CONFIG_FILE=./test-hosts.yaml python3 ./inventory.py print_hostnames
 Traceback (most recent call last):
   File "./inventory.py", line 472, in <module>
     sys.exit(main())
   File "./inventory.py", line 467, in main
     KubesprayInventory(argv, CONFIG_FILE)
   File "./inventory.py", line 92, in __init__
     self.parse_command(changed_hosts[0], changed_hosts[1:])
   File "./inventory.py", line 415, in parse_command
     self.print_hostnames()
   File "./inventory.py", line 455, in print_hostnames
     print(' '.join(self.yaml_config['all']['hosts'].keys()))
 KeyError: 'all'

because the hosts config file was not loaded before printing the hostnames.
This fixes the issue.
2022-02-17 13:57:03 -08:00
97c667f67c Fix etcd_events not getting upgraded in upgrade-cluster.yml (#8550)
Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-02-17 08:03:38 -08:00
063fc525b1 nerdctl: upgrade to 0.16.1 (#8539) 2022-02-16 02:04:37 -08:00
0f73d87509 Allow pausing after upgrade but before uncordon (#8530)
* Allow pausing after upgrade but before uncordon

* Expand docs for upgrade pausing vars

Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-02-15 16:39:02 -08:00
402e85ad6e [calico] upgrade release checksums (#8544)
* [calico] upgrade 3.19.x to 3.19.4

* [calico] upgrade 3.20.x to 3.20.4

* [calico] upgrade 3.21.x to 3.21.4 and make it the default

* [calico] add 3.22.0 checksums

* [calico] account for path changes in calico 3.21.4 crd archive and above
2022-02-15 16:35:02 -08:00
1d635e04e4 Allow to specify a source address for metallb peerings, and target only some nodes using node selectors (#8534) 2022-02-15 13:57:19 -08:00
98d5d0cdd5 add support for Dual Stack node InternalIP (#8542) 2022-02-15 00:28:02 -08:00
31d4a38f09 terraform/gcp: Allow to change extra disk types (#8524) 2022-02-15 00:22:02 -08:00
1ebe456f2d add support for Calico IP6_AUTODETECTION_METHOD (#8541) 2022-02-14 17:26:14 -08:00
c6e5314fab implement download mirrors support (#8474)
* [download] add mechanism to support mirrors

* [calico] support alternate download url
2022-02-14 13:19:32 -08:00
a6a79883b7 Fix: Error when creating subnets more than AZ (#8516) 2022-02-14 13:12:30 -08:00
b02e68222f feat(offline): Improve generate_list.sh to generate offline file list using ansible (#8537) (#8538)
Use jinja2 template and ansible to expand variables.
2022-02-13 23:19:28 -08:00
da8522af64 docs: Update offline-environment.md for containerd (#8520) (#8523)
* Add containerd/runc/nerdctl download url
* Add insecure registries configuration for containerd
2022-02-09 08:08:18 -08:00
84b93090a8 Change Cilium setting identity_allocation_mode to cilium_identity_allocation_mode (#8519)
* Change Cilium identity_allocation_mode to cilium_identity_allocation_mode

* Change inventory sample
2022-02-08 14:04:35 -08:00
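For illustration, a minimal cluster-vars sketch of the renamed setting; kvstore and crd are the two allocation modes Cilium supports, and only the variable name is taken from the commit above:

```
# sketch: was 'identity_allocation_mode' before this change
cilium_identity_allocation_mode: kvstore   # or 'crd'
```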
5695c892d0 Fix wrong port name in metallb.yml.j2 (#8510) 2022-02-07 09:43:45 -08:00
696101a910 Fixed mitogen.yml (#8508)
Fixed the problem when calling ansible-playbook contrib/mitogen/mitogen.yml:
"The error was: 'dict object' has no attribute 'section'"
2022-02-07 01:39:43 -08:00
54dfe73d24 Add bastion support to remove-node.yml (#8504)
Somehow bastion support for remove-node.yml was missing.

This commit adds it.
2022-02-04 23:50:50 -08:00
87928baa31 CRI-O: fix unqualified-search registries (#8496) 2022-02-04 23:46:50 -08:00
6a4fd33a03 Added ppc64le support (#8505)
* Added ppc64le support

* Fixed linting errors
2022-02-04 00:14:00 -08:00
790448f48b feat: update cert-manager to 1.7.0 (#8491)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-02-03 17:24:00 -08:00
7759494c85 [terraform][openstack] allow disabling port_security at port level (#8455)
Use openstack_networking_port_v2 and openstack_networking_floatingip_associate_v2
to attach floating ips. This gives us more flexibility on disabling port security
when binding instances directly on provider networks in private cloud scenario.
2022-02-02 08:50:22 -08:00
aed187e56c Fix kubelet_kubelet_cgroups_cgroupfs (#8500)
If kubelet is run with systemd (as it always is when using kubespray),
it starts in systemd's /system.slice/kubelet.service cgroup.

This commit prevents a creation and usage of a second unrelated cgroup.
2022-02-02 00:50:22 -08:00
eac799f589 Amend documentation for docker to containerd migration (#8477)
* Amend PR https://github.com/kubernetes-sigs/kubespray/pull/8471 with missing inventory configuration.

Signed-off-by: Julio Morimoto <julio@morimoto.net.br>

* Amend PR https://github.com/kubernetes-sigs/kubespray/pull/8471 with missing inventory configuration.

Signed-off-by: Julio Morimoto <julio@morimoto.net.br>
2022-02-02 00:46:22 -08:00
5ecb07b59a [nerdctl] upgrade to 0.16.0 (#8484)
* [nerdctl] upgrade nerdctl to 0.16.0

* [nerdctl] add configuration file
2022-02-01 15:11:48 -08:00
ff621fb7f1 [ingress-nginx] upgrade to 1.1.1 (#8490) 2022-02-01 09:50:11 -08:00
958bca8800 terraform/gcp: Do not create unused subnetworks and Upgrade to latest google provider (#8497)
* terraform/gcp: Do not create unused subnetworks

By default terraform creates a subnetwork in each of the 39 regions

* terraform/gcp: Upgrade to latest google provider

... where "one of source_tags, source_ranges, or source_service_accounts must be defined"
2022-02-01 09:14:11 -08:00
eacd55fbca Use sysctl_file_path variable for all sysctl_file locations (#8395)
* Use sysctl_file_path variable for all sysctl_file locations

* Add sysctl_file_path variable to kubespay-defaults

* Remove previously used sysctl file locations if present

* Use explicit filename in roles/kubernetes/node/defaults/main.yml

* Defaults: use explicit value
2022-02-01 08:12:10 -08:00
0e2ab5c273 [misc] add cristicalin to approvers list (#8494) 2022-02-01 08:08:11 -08:00
c47634290e [helm] upgrade to 3.8.0 (#8489) 2022-02-01 06:34:12 -08:00
92d612c3e0 8487: Allow override of default CoreDNS zone cache (#8488)
Using the coredns_cluster_zone_cache_block variable
2022-02-01 00:48:18 -08:00
2bbe5732b7 Add node label to etcd metrics (#8475)
targetRef on endpoints surfaces as
__meta_kubernetes_endpoint_address_target_kind/__meta_kubernetes_endpoint_address_target_name
in prometheus and gets converted to the label `node` by
prometheus-operator
2022-01-31 06:08:23 -08:00
e6e7fbc25f fix reset containerd_storage_dir undefined (#8478)
* fix reset containerd_storage_dir

* add env to kubespray-defaults
2022-01-31 05:46:23 -08:00
7d4d554436 Document host_resolvconf as default value for resolvconf_mode (#8493)
refs #8247
2022-01-31 03:12:24 -08:00
d31db847b7 feat: update local path to v0.0.21 (#8492) 2022-01-31 01:08:24 -08:00
3562d3378b terraform/gcp: Allow to use preemptible VM instances (#8480) 2022-01-31 00:30:24 -08:00
ababcd5481 [kube] make 1.23.3 the new default 2022-01-31 00:22:24 -08:00
7caffde0b6 [kube] add 1.23.3 hashes 2022-01-31 00:22:24 -08:00
c40b43de01 [mitogent] update to 0.3.2 (#8470) 2022-01-27 08:36:59 -08:00
b0eb5650da Provide initial guidelines for a container engine migration (docker-2-containerd), with special emphasis on the fact that the procedure is still not officially supported. (#8471)
Follow up from https://github.com/kubernetes-sigs/kubespray/issues/8431.

Signed-off-by: Julio Morimoto <julio@morimoto.net.br>
2022-01-27 01:40:10 -08:00
52f221f976 Adaptive Kube-ovn (#8454) 2022-01-27 01:08:10 -08:00
26a5948d2a [reset] remove containerd storage during reset (#8469) 2022-01-26 05:10:01 -08:00
d86a3b962c Proposing fixes for contrib/terraform/vsphere/ #8436 (#8441)
* fixes issues in vSphere Terraform contrib. #8436

* fix formatting

* add variables to the main module and document changes

* add missing newline
2022-01-25 05:24:30 -08:00
d64b341b38 Update terraform GCP to Ubuntu 20.04 (latest LTS) (#8463)
* Fix terraform Warning

Version constraints inside provider configuration blocks are deprecated

Terraform 0.13 and earlier allowed provider version constraints inside the
provider configuration block, but that is now deprecated and will be removed
in a future version of Terraform. To silence this warning, move the provider
version constraint into the required_providers block.

* Fix terraform Warning: Quoted references are deprecated

* terraform: Update GCP Ubuntu to latest LTS
2022-01-25 01:22:30 -08:00
d580014c66 Fix CI for Fedora (followup) + OpenSUSE Leap (update to 15.3) (#8407)
* Fix fedora jobs - followup

* Update OpenSUSE Leap to 15.3

* Fix cilium version in README + update minor 1.11.1
2022-01-24 23:24:30 -08:00
be9a1f80c1 [kube] make 1.23.2 the default version 2022-01-24 11:59:33 -08:00
73ff3b0d3b [kubernetes] add hashes for 1.23.2, 1.22.6 and 1.21.9 2022-01-24 11:59:33 -08:00
9fce9ca42a feat: upgrade azuredisk csi to v1.10.0 (#8432)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-24 00:41:56 -08:00
f1adb734e3 [cri-tools] add hashes for 1.23.0 (#8442) 2022-01-24 00:21:56 -08:00
575e0ca457 feat: add eviction hard to kubelet config (#8421)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-24 00:13:57 -08:00
69f088bb82 add hash-values for runc v1.1.0 - first upstream runc version for multi-arch (#8447) 2022-01-23 23:51:57 -08:00
ef34f5fe7d [calico] switch default iptables backend detection to Auto (#8429) 2022-01-23 23:47:57 -08:00
e88aa7c96b Add youki runtime support (#8411) 2022-01-21 14:01:07 -08:00
38d129a0b6 add external hcloud cloud controller manager (#8440) 2022-01-20 12:31:09 -08:00
392815d97c [cert-manager] Fix missing RBAC rules for ClusterRole cert-manager-cainjector kubernetes-sigs#8104. (#8444) 2022-01-20 12:17:09 -08:00
6e2e61012a Docs - Removed incorrect info on calico_rr. (#8437) 2022-01-17 02:55:30 -08:00
e791089466 cert-manager: Fix incorrect leader election namespace lead to insufficient permission (#8433) 2022-01-17 02:37:29 -08:00
418f12f62a [calico] drop 3.18.x and make 3.21.x the new default (#8426) 2022-01-17 02:29:29 -08:00
caff539ccd Add identity_allocation_mode support for Cilium (#8430)
Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
Signed-off-by: necatican <necaticanyildirim@gmail.com>

Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
2022-01-16 09:29:28 -08:00
c0d1bb1a5c Remove subnet from router on tf-elastx_cleanup (#8425)
The tf-elastx_cleanup test job was failed with error message:

Port xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx cannot be deleted
directly via the port API: has device owner network:router_interface.

That means necessary to remove a subnet from the router before
deleting the port.
This adds a method to removes a subnet from the router automatically.
2022-01-15 00:50:15 -08:00
ea44d64511 [contrib] terraform openstack: allow disabling port security (#8410) 2022-01-14 12:58:32 -08:00
1a69f8c3ad parameterized snapshot controller namespaces (#8305)
* Parameterized snapshot controller namespaces

* add ns yml

* add docs

* namespace
2022-01-14 12:58:26 -08:00
ccd3180a69 cert-manager: Allow to change leader election namespace for GKE Autopilot support (#8424)
More information:

- kubernetes-sigs/kubespray#8393
- jetstack/cert-manager#4102
- jetstack/cert-manager#3717
2022-01-14 12:54:26 -08:00
01dcbc18ac feat: upgrade metallb to v0.11.0 (#8420)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-14 05:22:28 -08:00
7c67ec4976 Fix kubectl call before installing it (#8412) 2022-01-12 23:12:29 -08:00
43d128362f Document image_command_tool and image_command_tool_on_localhost (#8409)
Signed-off-by: Mathieu Parent <mathieu.parent@insee.fr>
2022-01-11 15:35:24 -08:00
1337c9c244 [csi-snapshotter] upgrade to 5.0 (#8403) 2022-01-11 09:14:33 -08:00
86953b2ac4 fix: add tolerations / affinity to cert-manager (#8389)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-11 09:14:26 -08:00
135c9b29a7 contrib: add cloud-init support for terraform vms (#8394)
* contrib: add cloud-init support for terraform vms

This change enables instance customization via cloud-init,
for example: additional CA certs, custom SSH access etc.

* contrib: update docs for terraform cloud-init

* contrib: disable yamllint in cloud-init

require-starting-space rule breaks cloud-init header

* contrib: documentation formatting

* yamllint: disable comments related checks

* docs: markdown formatting
2022-01-11 05:23:16 -08:00
e0d67367ed Update installation doc with vagrant (#8406) 2022-01-11 05:19:17 -08:00
d007132655 Fix Fedora CI following ipset version in kube-proxy for k8s 1.23 (#8397) 2022-01-11 05:01:17 -08:00
cfd9873bbc Allow to choose container manager commands (#8380)
This allows working around #8375 by using image_command_tool=crictl
when containerd_registries is used with containerd (see the sketch below).

Also changes image_info_command_on_localhost for docker to return digests.
2022-01-11 01:13:16 -08:00
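The sketch referenced above: a one-line override in group or cluster vars, using the variable name given in the commit message:

```
# sketch: force crictl for image manipulation when containerd_registries is in use
image_command_tool: crictl
```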
b2b95cc8f9 fix 0090-etchosts (#7634) 2022-01-11 01:03:16 -08:00
73c889eb10 Fix failures of ansible-lint (#8401)
This fixes the following types of failures:
- empty-string-compare
- literal-compare
- risky-file-permissions
- risky-shell-pipe
- var-spacing

In addition, this changes .gitlab-ci/lint.yml to block the same issue
by using the same method at Kubespray CI.
2022-01-11 00:45:16 -08:00
642725efe7 Bump containerd version to 1.5.9 (#8402) 2022-01-11 00:05:16 -08:00
29aafff2ce etcd: add 3.5.1 for kubernetes 1.23+ (#8320) 2022-01-10 22:45:15 -08:00
df425ac143 Fix etcd certificates reference to support etcd_kubeadm_enabled:true (#7766)
* Fix etcd certificates reference to support etcd_kubeadm_enabled:true

* Add retries to ETCD Join Member task

* Fix etcd certificates reference when etcd_kubeadm_enabled:true

* Fix conflicts
2022-01-10 15:24:25 -08:00
57a1d18db3 Improve first_kube_control_plane variable management to avoid installation failures due to variable overlapping (#8388) 2022-01-10 01:35:19 -08:00
aa4a3d7afd Fix container engine still installed on dedicated etcd node even if etcd_deployment_type: host (#8386) 2022-01-10 01:35:12 -08:00
06ad5525b8 replace runc 1.0.3 arm64 hash with 0 (#8391) 2022-01-10 01:31:13 -08:00
f80fd24a55 Fix risky-file-permissions (#8370)
When running ansible-lint directly, we can see a lot of warning
message like

  risky-file-permissions File permissions unset or incorrect

This fixes the warning messages.
2022-01-09 01:51:12 -08:00
51bd9bee0d Move containerd_version to defaults/main.yml (#8379)
All container image versions were defined in download/defaults/main.yml
except containerd.
The inconsistency meant the offline script (generate_list.sh) could not
output the URL of the containerd image.
This moves the definition into the correct file.
In addition, this adds host_os to generate_list.sh for downloading
krew from a valid URL.
2022-01-09 01:47:12 -08:00
52266406f8 Bump cert-manager version to v1.6.1 (#8377) 2022-01-07 16:45:34 -08:00
cd601c77c7 feat: upgrade metrics server to v0.5.2 (#8338)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-07 08:18:33 -08:00
6abae713f7 Update helm / kube-router and coredns (#8382)
* Update kube-router to 1.4.0

* Update Helm to 3.7.2

* Up coredns to 1.8.6 when k8s is 1.23.x
2022-01-06 12:14:27 -08:00
1312f92a8d adding 0 checksum for kata_containers_version on arm(64) (#8383) 2022-01-06 12:08:27 -08:00
92abf26d29 Ensure taint configuration for secondary control-plane nodes (#8363) 2022-01-05 23:56:28 -08:00
c11e4ba9a7 Add missing example offline nerdctl_download_url (#8373) 2022-01-05 10:23:48 -08:00
7ae00947f5 Avoid yanked ruamel.yaml.clib version (#8372)
See https://pypi.org/project/ruamel.yaml.clib/#history

Signed-off-by: Mathieu Parent <math.parent@gmail.com>
2022-01-05 08:06:41 -08:00
59f62473c9 Update configuration of registries in cri-o (#7852)
* Update configuration of registries in cri-o

* Update docs to match new registry configuration
2022-01-05 07:36:40 -08:00
8fbd08d027 Fix DNS configuration when using resolvconf_mode='host_resolvconf' during scale (#23) (#8361) 2022-01-05 03:06:33 -08:00
dda557ed23 Update config.toml.j2 (#8340)
* Update config.toml.j2

The previous template did not fully work.

Example registry address: a.com:5000

The insecure registry endpoint must be written as http://a.com:5000,
but the template added the insecure entry as a.com:5000 (without http://).

Without the http:// scheme, containerd connects over https even if
insecure_skip_verify = true.

The fix is to edit the template so the scheme is included (see the sketch below).

* Update config.toml.j2

* Update containerd.yml

* Update containerd.yml

* Update containerd.yml

* Update config.toml.j2
2022-01-05 02:56:33 -08:00
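The sketch referenced above: an inventory override matching the fix, where the mirror endpoint carries the scheme. The variable name and structure are assumptions based on the containerd role of that period, not a quote of the template:

```
# group_vars/all/containerd.yml (sketch) -- the endpoint must include http://,
# otherwise containerd still dials https even with insecure_skip_verify = true
containerd_insecure_registries:
  "a.com:5000": "http://a.com:5000"
```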
cb54eb40ce Use a variable for standardizing kubectl invocation (#8329)
* Add kubectl variable

* Replace kubectl usage by kubectl variable in roles

* Remove redundant --kubeconfig on kubectl usage

* Replace unnecessary shell usage with command
2022-01-05 02:26:32 -08:00
3eab1129b9 CI: Replace CentOS 8 with AlmaLinux 8 before CentOS 8 EOL end of 2021 (#8297) 2022-01-05 02:20:33 -08:00
24f1402a14 nerdctl insecure registry config (#8339)
* Update prep_download.yml

nerdctl insecure registry config

* Update prep_download.yml

* Update prep_download.yml

apply conversations advice

* Update prep_download.yml

* Update prep_download.yml

* Update prep_download.yml

* Update prep_download.yml

* Update prep_download.yml

* Update prep_download.yml

* Update main.yml

* Update main.yml

* Update prep_download.yml

* Update prep_download.yml
2022-01-05 01:14:33 -08:00
bf00550388 Upgrade Cilium to 1.11.0 (#8354)
* Remove kvstore args from Cilium DaemonSet

Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
Signed-off-by: necatican <necaticanyildirim@gmail.com>

* Bump Cilium to 1.11.0

Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
Signed-off-by: necatican <necaticanyildirim@gmail.com>

Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
2022-01-05 00:36:32 -08:00
78c83a8f26 Update containerd doc (#8369)
This is a follow-up change for https://github.com/kubernetes-sigs/kubespray/pull/7911
2022-01-05 00:32:33 -08:00
e72f8e0412 Update node about container_manager variable (#7911)
I deployed my cluster with a separate etcd cluster that does not intersect with kube_control_plane or kube_node, and I wanted to run the etcd cluster in docker while still using containerd as the container runtime for all other nodes. Therefore, this adds a note about that setup to the doc for everyone.

Thanks!
2022-01-04 14:29:20 -08:00
6136fa7c49 Update Kubernetes version to 1.23.1 2022-01-04 10:25:00 -08:00
8d2b4ed4a9 Move min k8s version to 1.21 2022-01-04 10:25:00 -08:00
9e9b177674 Update kubespray_version following release 2022-01-04 10:25:00 -08:00
4c4c83f0a1 crun update to 1.4 (#8330)
* [crun] update crun to 1.4

* [crun] drop pre-1.x versions
2022-01-04 08:30:53 -08:00
0e98814732 Configure PriorityClassName for MetalLB deployment (#8362) 2022-01-04 08:20:52 -08:00
92f25bf267 Simplify usage of pre-remove role (#8334)
- Use builtin task scheduling of ansible (same task on each host)
  instead of manual looping on master

Benefits:
- One less play in remove-node.yml playbook
- Parallel node drain
- Drain parameters (timeout, grace period, retries,
  allow_ungraceful_removal) can be adjusted separately for each node
  with ansible variables (see the sketch below)
2022-01-04 07:10:53 -08:00
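The sketch referenced above: per-node drain tuning via host_vars. The parameter names come from the commit message; the values are illustrative:

```
# host_vars/<node>.yml (sketch)
drain_timeout: 360s
drain_grace_period: 300
drain_retries: 3
allow_ungraceful_removal: false
```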
63a53c79d0 Fix - Search root filesystem device (#8366) 2022-01-04 06:48:52 -08:00
2f9a8c04dc Add nginx_image_repo to mirrored image on quay (#8364) 2022-01-03 10:03:00 -08:00
8c67f42689 Update offline.yml (#8358)
After the [cni-plugins] upgrade to stable 1.0.1 (#8331), flannel uses a dedicated CNI binary, so this adds flannel_cni_download_url

and updates the offline doc for flannel_cni_download_url
2022-01-03 09:58:59 -08:00
783a51e9ac Fix README version for cni/flannel (#8359) 2022-01-03 03:42:59 -08:00
841c61aaa1 Revert "Fix external lb error (#8299)" (#8360)
This reverts commit 4f2e4524b8.
2022-01-03 01:37:00 -08:00
157942a462 fix resolved config (#8351) 2022-01-03 00:06:59 -08:00
e88a27790c fix spelling error (#8342) 2022-01-02 23:55:00 -08:00
ed3932b7d5 [cni-plugins] upgrade to stable 1.0.1 (#8331)
* [cni-plugins] upgrade to stable 1.0.1

* [flannel] use binary from dedicated project
2021-12-23 23:16:15 -08:00
2b5c185826 calico_pool_blocksize must be cast as well in assertion when defined (#8321)
* calico_pool_blocksize must be cast as string in assertion when defined

* Cast as int rather than string
2021-12-23 00:58:37 -08:00
996ecca78b Glusterfs daemonset readiness and liveness params. #8307 (#8309) 2021-12-23 00:32:37 -08:00
c3c128352f Remove registry-proxy (#8327) 2021-12-21 23:55:35 -08:00
02a89543d6 registry: add ingress support (#8311) 2021-12-21 10:20:46 -08:00
c1954ff918 Support deploying kubernetes 1.23 (#8323)
* Ensure entries for 1.23 are added for supported_versions vars

* cri-o: add support for kubernetes 1.23 but still use cri-o 1.22

* kubescheduler-config: diferentiate config versions based on kube_version
2021-12-21 01:38:46 -08:00
b49ae8c21d Delete "kubeadm alpha certs" code (#8322)
"kubeadm alpha certs" command has been promoted to "kubeadm certs" command,
and "kubeadm alpha certs" has been deprecated since Kubernetes v1.20 as [1].
In addition, Kubespray supports Kubernetes v1.20+.
This delete the deprecated command for cleanup.

[1]: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation
2021-12-20 12:53:33 -08:00
1a7b4435f3 Bump default version of kubernetes to 1.22.5 2021-12-20 08:56:56 -08:00
ff5ca5f7f8 add temp location to .gitignore 2021-12-20 08:56:56 -08:00
db0e458217 Kubernetes: add hashes for v1.23.1, v1.23.0, v1.22.5, v1.21.8 and v1.20.14 2021-12-20 08:56:56 -08:00
f01f7c54aa Add support for CRI-O user namespaces (#8268)
* add support for cri-o user namespaces

* comply with yamllint rules
2021-12-20 06:37:25 -08:00
c59407f105 add support for Calico BGPPeer sourceAddress (#8306) 2021-12-20 01:51:25 -08:00
fdc5d7458f Upgrade to nerdctl 0.15.0 and some fixes (#8315)
* nerdctl: move to 0.15.0

* nerdctl: reduce verbosity when pulling images

* download: use proxy environment when using nerdctl to download containers
2021-12-20 00:33:26 -08:00
6aafb9b2d4 fix bad indentation (#8314) 2021-12-17 07:36:29 -08:00
aa9ad1ed60 clean files for kube-ovn (#8310) 2021-12-15 23:39:19 -08:00
aa9b8453a0 registry: service add clusterIP, nodePort, loadBalancer support (#8291)
* registry: service add clusterIP, nodePort, loadBalancer support

* modify camelcase name to underscore

* Add registry service type compatibility check
2021-12-15 00:18:19 -08:00
4daa824b3c CI: fix test name debian10-aio was a 2 instance default (#8286)
* CI: fix test name debian10-aio was a 2 instance default

* CI: Fix running ubuntu20-aio-docker

* CI: Fix running ubuntu18-aio-docker
2021-12-13 14:50:25 -08:00
4f2e4524b8 Fix external lb error (#8299) 2021-12-13 14:46:27 -08:00
8ac510e4d6 sample containerd: containerd_runtimes is removed (#8301)
(#8213) split containerd_runtimes to containerd_runc_runtime and
containerd_additional_runtimes
2021-12-13 14:42:25 -08:00
4f27c763af containerd insecure registry support (#8298) 2021-12-13 00:41:58 -08:00
0e969c0b72 vSphere-CSI: update to 2.4.0 (#8295) 2021-12-10 11:07:23 -08:00
b396801e28 Update Cinder CSI to v1.22 (#8296) 2021-12-10 10:49:11 -08:00
682c8a59c2 containerd: change default resolvconf_mode to host_resolvconf (#8247)
* containerd: change default resolvconf_mode to host_resolvconf

* Wait for kube-apiserver to come back after pod refresh

* Handle resolv.conf gracefully

* Retain currently configured DNS entries to ensure we don't break the resolvers

* Suse uses wickedd for network management so no dhcp hooks

* Molecule: increase ansible timeout

* CI: Increase ansible timeout to 120s for Packet jobs
2021-12-09 14:09:06 -08:00
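For clusters that do not want the new default, a one-line override is enough; docker_dns and none are the other modes documented for this variable (a sketch, not part of the commit):

```
# sketch: opt out of the new containerd default
resolvconf_mode: docker_dns   # or 'none'
```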
5a25de37ef Revert "remove no longer present etcd nodes from APIEndpoints list in kubeadm-config configmap (#8244)" (#8287)
This reverts commit dc767c14b9.
2021-12-09 08:24:16 -08:00
bdb923df4a Add oomichi to approvers (#8284)
For taking more responsibility on Kubespray project, this adds
oomichi to the list of approvers.
2021-12-09 00:40:10 -08:00
4ef2cf4c28 Registry add TLS and authentication support (#8229)
* Add registry TLS support

* Add registry configmap and htpasswd auth
2021-12-07 08:32:00 -08:00
990ca38d21 Kata-Containers: add 2.3.0 (#8276)
* Kata-Containers: add checksums for 2.3.0

* Kata-Containers: version 2.3.0 requires kubernetes 1.22.0+
2021-12-07 08:18:08 -08:00
c7e430573f Calico: upgrade 3.21.x to 3.21.2 (#8275) 2021-12-07 08:18:01 -08:00
a328b64464 runc: upgrade to v1.0.3 (#8274) 2021-12-07 06:10:02 -08:00
a16d427536 Set etcd-events listen port to 2383 (#8232) 2021-12-07 00:28:01 -08:00
c98a07825b Use cgroupsv2 where available (fedora) (#8237)
* Containerd: use cgroupsv2 where available (fedora)

* Docker: use cgroupsv2 where available (fedora)

* cri-o: use cgroupsv2 where available (fedora)
2021-12-06 11:19:33 -08:00
a98ca6fcf3 Update loadbalancers versions (#8272)
* Update loadbalancers versions

* fix haproxy_config_dir mode
2021-12-06 09:40:32 -08:00
4550f8c50f calico_flexvol (#8273) 2021-12-06 05:00:32 -08:00
9afca43807 change dns upstream condition for coredns (#8263)
upstream_dns_servers should change the Corefile config even when resolvconf_mode=docker_dns
2021-12-06 02:46:32 -08:00
27ab364df5 Improve control plane scale flow (#13) (#7989)
* Improve control plane scale flow (#13)

* Added version 1.20.10 of K8s

* Setting first_kube_control_plane to an existing one

* Setting first_kube_control_plane to an existing one

* change first_kube_master to first_kube_control_plane

* Ansible-lint changes
2021-12-06 00:16:32 -08:00
615216f397 Fix if bind-address is not set to 0.0.0.0 (#8262)
* if bind-address is not set to 0.0.0.0

* Update docs and left comments

* fix yamllint check: remove space
2021-12-05 23:58:32 -08:00
46b1b7ab34 Fix k8scsi/csi-resizer repo (#8270)
If trying to pull k8scsi/csi-resizer image from gcr.io, we face the error
like:

 $ docker pull gcr.io/k8scsi/csi-resizer:v1.0.0
 Error response from daemon: Head https://gcr.io/v2/k8scsi/csi-resizer/
 manifests/v1.0.0: unknown: Project 'project:k8scsi' not found or deleted.
 $

We can pull the image from quay.io instead.
This fixes the issue.
2021-12-05 23:42:32 -08:00
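A hypothetical inventory override expressing the same fix; the variable name is an assumption about the download role defaults, not something quoted in the commit:

```
# sketch: pull the resizer image from quay.io instead of the removed gcr.io project
csi_resizer_image_repo: "quay.io/k8scsi/csi-resizer"
```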
30d9882851 Add nodelocaldns only if it is enabled (#7731) 2021-12-03 20:36:31 -08:00
dfdebda0b6 Calico: remove duplicate values for CALICO_DISABLE_FILE_LOGGING and FELIX_DEFAULTENDPOINTTOHOSTACTION (#8269) 2021-12-03 20:32:31 -08:00
9d8a83314b containerd: add hashes for 1.5.8 and 1.4.12 and make 1.5.8 the new default (#8239)
* containerd: add hashes for 1.5.8 and 1.4.12 and make 1.5.8 the new default

* containerd: make nerdctl mandatory for container_manager = containerd

* nerdctl: bump to version 0.14.0

* containerd: use nerdctl for image manipulation

* OpenSuSE: install basic nerdctl dependencies
2021-12-03 12:20:35 -08:00
e19ce27352 Remove ovn4nfv support (#8265) 2021-12-03 11:56:35 -08:00
4d711691d0 Fix calico crd archive checksums (#8266)
v3.20.3 and v3.21.1 were re-released with new checksums
2021-12-03 04:56:27 -08:00
ee0f1e9d58 Update etcd-servers for apiserver (#8253) 2021-12-03 00:28:27 -08:00
a24162f596 CI: upgrade vagrant to 2.2.19 (#8264) 2021-12-02 13:23:44 -08:00
e82443241b Move opensuse CI to docker and fix ubuntu16 containerd version for docker (#8257) 2021-12-02 08:01:34 -08:00
9f052702e5 containerd: add support for suse distributions (#8261) 2021-12-02 07:51:33 -08:00
b38382a68f Move cri-o default package to 1.22 (#8258) 2021-12-02 06:21:34 -08:00
785324827c Set ingress-nginx default terminationGracePeriodSeconds to 5 min (#8252)
* set ingress-nginx default terminationGracePeriodSeconds to 5 min to allow connections to drain

* Add ingress_nginx_termination_grace_period_seconds to the sample inventory (see the sketch below)
2021-12-02 03:23:33 -08:00
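The sketch referenced above, overriding the new default in the ingress-nginx addon vars (the value simply spells out the 5-minute default):

```
# sketch: 300s is the new default; shorten it if long connection drains are not needed
ingress_nginx_termination_grace_period_seconds: 300
```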
31c7b6747b Calico: add dependencies for 3.21.x (#8250) 2021-12-02 01:17:33 -08:00
dc767c14b9 remove no longer present etcd nodes from APIEndpoints list in kubeadm-config configmap (#8244) 2021-12-01 07:17:15 -08:00
30ec03259d Remove fedora33 - eol (#8246) 2021-11-30 15:53:17 -08:00
38c12288f1 Add option for boot volume type for k8s node (#8256) 2021-11-30 12:59:01 -08:00
0e22a90579 Update docker to 20.10.11 with containerd 1.4.12 (#8255) 2021-11-30 11:49:01 -08:00
0cdf75d41a add macOS .DS_Store to ignore (#8251) 2021-11-30 01:10:56 -08:00
3c6fa6e583 offline install using containerd runtime (#8254)
Installing containerd on CentOS requires downloading its binary,

but offline.yml had no value for that.

The default binary download URLs are in:

roles/download/defaults/main.yml:runc_download_url: "https://github.com/opencontainers/runc/releases/download/{{ runc_version }}/runc.{{ image_arch }}"
roles/download/defaults/main.yml:containerd_download_url: "https://github.com/containerd/containerd/releases/download/v{{ containerd_version }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"

With the default offline.yml, the download-files task fails

because the runc and containerd download URLs are not redirected to the offline mirror.

This adds the two missing lines (see the sketch below).
2021-11-30 01:06:56 -08:00
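The sketch referenced above: the two lines added to offline.yml, assuming a files_repo mirror as used elsewhere in that file; the mirror path layout is illustrative:

```
# offline.yml (sketch) -- redirect the runc and containerd binaries to the internal mirror
runc_download_url: "{{ files_repo }}/runc/{{ runc_version }}/runc.{{ image_arch }}"
containerd_download_url: "{{ files_repo }}/containerd/v{{ containerd_version }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"
```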
ee882fa462 Add capability to use swap, requires Kube 1.22 (#8241)
* Alpha-NodeSwap: allow nodes to use swap

* CI: Add Fedora 35 with experimental swap job
2021-11-30 00:52:56 -08:00
3431ed9857 containerd: properly pull images with containerd specific tools (#8245) 2021-11-30 00:48:56 -08:00
279808b44e Update minor version for kata/cilium/kube-router/helm 2021-11-29 23:06:56 -08:00
2fd529a993 Update Kubernetes version to v1.22.4 2021-11-29 23:06:56 -08:00
1f6f79c91e Update kubernetes hashes with 1.22.4/1.21.7/1.20.13 2021-11-29 23:06:56 -08:00
52ee5d0fff Various documentation updates (#8243)
* Docs: update CONTRIBUTING.md

* Docs: clean up outdated roadmap and point to github issues instead

* Docs: update note on kubelet_cgroup_driver

* Docs: update kata containers docs with note about cgroup driver

* Docs: note about CI specific overrides
2021-11-29 15:05:21 -08:00
2f44b40d68 OEL7: Fix CentOS7 Extras for OEL7 (#8219)
* OEL7: Fix CentOS7 Extras for OEL7

* Molecule: add logs collection for jobs
2021-11-29 13:39:21 -08:00
20157254c3 Update calico versions (#8238)
* Calico: Bump 3.20.x to 3.20.3

* Calico: Bump 3.18.x to 3.18.6

* Calico: add calico 3.21.1 hashes
2021-11-29 01:15:22 -08:00
09c17ba581 add Gather facts to remove-node.yml (#8231) 2021-11-29 01:01:22 -08:00
a5f88e14d0 Cleanup tests (#8234)
* Add Fedora 35 image, support and CI

* Cleanup tests and allow_failure for vagrant
2021-11-26 09:00:51 -08:00
e78bda65fe Defaults: replace docker with containerd as our default container_manager (#8175)
* Defaults: replace docker with containerd as our default container_manager

* CI: Use docker for download_localhost test

* Defaults: with container_manager=containerd we need etcd_deployment_type=host

* CI: Run weave jobs with docker

* CI: Vagrant don't download_force_cache

* CI: Fix upgrade tests

* should run compatible with old settings, this means docker
* we need to run with a distro that has at least modern containerd,
  this means move from debian9 to debian10 to allow `containerd_version`
  to match between 2.17 and master
2021-11-25 06:54:33 -08:00
3ea496013f Create reset.yml (#8227) 2021-11-24 09:44:20 -08:00
7e1873d927 DeprecationWarning occurs when indentfirst=None is specified in coredns-config.yml.j2 (#8224) 2021-11-24 08:56:21 -08:00
fe0810aff9 Add option to set different server group policy for etcd, node, and master server (#8046) 2021-11-22 02:53:09 -08:00
e35a87e3eb Update registry template (#8198)
* Add registry replica setting

* Add registry liveness and readiness probe

* Set the security context for registry

* Add registry pvc access mode option

* registry add replica requirement check

* docs: add registry replicas setting note

* Update docs/kubernetes-apps/registry.md

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>
2021-11-22 02:45:09 -08:00
a6fcf2e066 Enable experimental modules when rpm-ostree version >= 2021.9 (#8202)
* Enable experimental modules when rpm-ostree version >= 2021.9

* cleanup code
2021-11-22 02:29:09 -08:00
25316825b1 docs: remove basic auth reference in getting-started (#7823) 2021-11-19 14:49:23 -08:00
c74e1c9db3 CI: use images from quay.io to prevent being throttled by docker hub (#8209)
* CI: use netchecker images from quay to prevent throttling

* Molecule: use hello-world image from quay.io
2021-11-19 13:23:40 -08:00
be9de6b9d9 Fix debian 9 check for apt cache update (#8215) 2021-11-19 09:02:51 -08:00
fe8c843cc8 Fix typo in Containerd configuration (#8206) 2021-11-19 08:40:53 -08:00
f48ae18630 Use Pre-existing Floating IP for Bastion (#8214)
* use pre-existing floating IP for bastion

* document bastion_fips in readme
2021-11-19 07:58:52 -08:00
83e0b786d4 Fix wrong baseurl for centos extra repo for Oracle Linux (#8208) 2021-11-18 23:44:51 -08:00
acd5185ad4 Fix fedora reset (#8205)
* Reset: Fedora uses NetworkManager

* CI: test reset on fedora
2021-11-18 16:46:51 -08:00
0263c649f4 Allow to scrape etcd metrics using a service (#8203)
Signed-off-by: Mathieu Parent <math.parent@gmail.com>
2021-11-17 23:53:01 -08:00
8176e9155b Add cristicalin as an official reviewer (#8201) 2021-11-16 14:02:45 -08:00
424163c7d3 add gce support (#8179)
Author:    lmercl <lubos.mercl@gmail.com>
Date:      Wed Nov 10 15:30:04 2021 +0000

fix markdown
2021-11-16 08:58:28 -08:00
2c87170ccf Allow setting 'auto-assign' property to 'false' for default IP pool (Metallb addon) (#8193)
* add metallb auto-assign property for the main IP range & update addons.yml for the sample inventory

* add a newline at the end of roles/kubernetes-apps/metallb/defaults/main.yml

* set the default value metallb_auto_assign = true (see the sketch below)
2021-11-16 05:06:27 -08:00
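The sketch referenced above, showing the new property in cluster vars; true remains the default per the commit:

```
# sketch: keep the default IP range but stop MetalLB from auto-assigning from it
metallb_auto_assign: false
```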
02322c46de Remove helm duplicate check (#8196) 2021-11-15 12:50:48 -08:00
28b5281c45 Python: bring back python 2.7 support for ansible 2.9 in supported EL distributions (#8192) 2021-11-15 08:06:48 -08:00
4d79a55904 Remove extra parameter kube_proxy_remove (#8158)
Signed-off-by: EDGsheryl <edgsheryl@gmail.com>
2021-11-15 00:02:48 -08:00
027cbefb87 change krew uri to krew_download_url (#8190) 2021-11-14 12:08:47 -08:00
a08d82d94e calico add support for container ip forwarding setting (#8184) 2021-11-12 19:06:46 -08:00
5f1456337b Fix krew auto completion command not found at lower version (#8185) 2021-11-12 17:04:46 -08:00
6eeb4883af Fixes various issues in vSphere Terraform code (#8178)
* Fixes various issues in vSphere Terraform code

Provided to address various shortcomings and to fix the following
issue in upstream Kubespray:

https://github.com/kubernetes-sigs/kubespray/issues/8176

* Resolves Terraform formatting issues

* Sets default prefix to human-readable name

* Documents new default prefix in README
2021-11-12 11:40:29 -08:00
b5a5478a8a Added tolerations for cinder-csi-nodeplugin DaemonSet (#8137) 2021-11-11 11:48:07 -08:00
0d0468e127 Exercise multiple ansible versions in CI (#8172)
* Ansible: separate requirements files for supported ansible versions

* Ansible: allow using ansible 2.11

* CI: Exercise Ansible 2.9 and Ansible 2.11 in a basic AIO CI job

* CI: Allow running a reset test outside of idempotency tests and running it in stage1

* CI: move ubuntu18-calico-aio job to stage2 and relay only on ubuntu20 with the variously supported ansible versions for stage1

* CI: add capability to install collections or roles from ansible-galaxy to mitigate missing behavior in older ansible versions
2021-11-10 16:11:50 -08:00
b7ae4a2cfd Kata-Containers: Fix kata-containers runtime (#8068)
* Kata-containers: Fix for ubuntu and centos where kata containers sometimes fail to start because of access errors on /dev/vhost-vsock and /dev/vhost-net

* Kata-containers: use similar testing strategy as gvisor

* Kata-Containers: adjust values for 2.2.0 defaults

Make CI tests actually pass

* Kata-Containers: bump to 2.2.2 to fix sandbox_cgroup_only issue
2021-11-09 10:01:48 -08:00
039205560a nodelocaldns: allow a secondary pod for nodelocaldns for local-HA (#8100)
* nodelocaldns: allow a secondary pod for nodelocaldns for local-HA

* CI: add job to test nodelocaldns secondary
2021-11-09 09:57:47 -08:00
801268d5c1 containerd: upgrade versions 1.4.11 and 1.5.7 and make 1.4.11 the default (#8129) 2021-11-09 06:59:47 -08:00
46c536d261 Add krew auto completion (#8171) 2021-11-09 02:43:39 -08:00
4a8757161e Docker: replace the use of containerd_version with docker_containerd_version to avoid causing conflicts when bumping containerd_version (#8130) 2021-11-08 15:56:49 -08:00
65540c5771 krew: update to v0.4.2 (#8168)
krew release URLs changed starting with v0.4.2; the OS type and arch are now clearly part of the filename.

from:
  https://github.com/kubernetes-sigs/krew/releases/download/v0.4.1/krew.tar.gz
to:
  https://github.com/kubernetes-sigs/krew/releases/download/v0.4.2/krew-linux_amd64.tar.gz

define `host_os`, like `host_architecture`, to determine which OS krew
is installed on (see the sketch below).
2021-11-08 02:54:59 -08:00
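A sketch of the resulting download URL template; the krew_download_url variable is assumed to live in the download role defaults, while host_os and host_architecture are taken from the commit message:

```
# roles/download defaults (sketch) -- OS and arch are now part of the archive name
krew_download_url: "https://github.com/kubernetes-sigs/krew/releases/download/{{ krew_version }}/krew-{{ host_os }}_{{ host_architecture }}.tar.gz"
```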
6c1ab24981 Limit kubectl delete node to k8s nodes (#8101)
* Limit kubectl delete node to k8s nodes

This avoids the use of `kubectl delete node` when removing etcd nodes
which are not part of the cluster (separate etcd)

* Take errors into account when deleting node

There should not be errors now that we're limiting the deletion to nodes
actually in the cluster

* Retrying on error
2021-11-08 02:22:58 -08:00
61c2ae5549 Add vxlanEnabled spec in FelixConfiguration (#8167) 2021-11-08 00:06:52 -08:00
04711d3b00 Replace path_join to support Ansible 2.9 (#8160) 2021-11-08 00:00:52 -08:00
cb7c30a4f1 Fix cloud_provider check (#8164)
This fixes the preinstall check for cloud_provider option based on
inventory/sample/group_vars/all/all.yml
2021-11-07 23:48:52 -08:00
8922c45556 Added ArgoCD kubernetes-app (#7895)
* Added ArgoCD kubernetes-app

* Update argocd_version to latest
2021-11-07 02:22:51 -08:00
58390c79d0 Bump crun version 1.2 to 1.3 (#8162)
Signed-off-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
Co-authored-by: Necatican Yıldırım <necaticanyildirim@gmail.com>

Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
Co-authored-by: Necatican Yıldırım <necaticanyildirim@gmail.com>
2021-11-06 02:26:50 -07:00
b7eb1cf936 cert-manager: add trusted internal ca when configured (#8135)
* cert-manager: add trusted internal ca when configured

* wrong check for inventory variable

* Update documentation
2021-11-05 09:43:52 -07:00
6e5b9e0ebf Fix Kubelet and Containerd when using cgroupfs as cgroup driver (#8123) 2021-11-05 07:59:54 -07:00
c94291558d Fix containerd install for fcos (#8107)
* Fix containerd install for fcos

* rm orphaned runc and containerd binaries
2021-11-05 07:53:53 -07:00
8d553f7e91 Mitogen: deprecate the use of mitogen and remove coverage from CI (#8147) 2021-11-05 00:57:52 -07:00
a0be7f0e26 heketi: fix deployment logic that was broken by the ansible 3.4 upgrade (#8118) 2021-11-04 13:10:23 -07:00
1c3d082b8d fix calico crds hashes for 3.20.2 (#8157) 2021-11-04 10:38:04 -07:00
2ed211ba15 Fix-CI: python was upgraded in CI to 3.10 and pathlib is now included in python base making this dependency break the CI (#8153) 2021-11-03 12:52:32 -07:00
1161326b54 Add unzip to dockerfile, used in CI 2021-11-02 11:53:41 -07:00
d473a6d442 Update kubespray version following 2.17.x release 2021-11-02 11:53:41 -07:00
8d82033bff fix(doc): update typo (#8148)
I guess `kubernetes-the-hard-way` should be `kubernetes-the-kubespray-way` because the recently created network name is `kubernetes-the-kubespray-way`.
2021-11-02 01:16:58 -07:00
9d4cdb7b02 Ensure addon-resizer 1.8.11 only effective at arch amd64. (#8144)
* Ensure addon-resizer 1.8.11 only effective at arch amd64.

k8s.gcr.io/addon-resizer:1.8.11 returns the amd64 image, which is not executable on arm64.

Disable addon-resizer when the platform is not amd64.

When metrics-server upgrades and uses addon-resizer:2.3, revert this
commit and `image_arch` will determine the `addon_resizer_image_tag`.

* Add metrics_server_resizer architectures check
2021-11-01 08:21:19 -07:00
b353e062c7 Update default k8s version to 1.22.3 2021-10-29 10:43:44 -07:00
d8f9b9b61f Update hashes for version v1.20.12/v1.21.6/v1.22.3 2021-10-29 10:43:44 -07:00
0b441ade2c nginx ingress controller should watch kind:ingress without class (#8128) 2021-10-28 11:48:59 -07:00
6f6fad5a16 Calico: add missing verbs in ClusterRole (#8136) 2021-10-28 11:11:01 -07:00
465ffa3c9f Weave: add extra_args for weave-npc (#8140)
* add weave_npc_extra_args in template

* add defaults weave_npc_extra_args

* add sample for weave_npc_extra_args
2021-10-28 08:58:27 -07:00
539c9e0d99 added hirsute in restart network (#8134)
restarting the network on ubuntu 21.04 failed; checking the restart handler showed that hirsute was missing from the argument list : )
2021-10-27 15:19:10 -07:00
649f962ac6 Metrics-server Deployment has incongruencies in resources requests/limits (#8088)
* fix(metrics-server): update defaults

* fix(metrics-server): typo error
2021-10-27 15:15:11 -07:00
16bdb3fe51 set check_mode to false (#8133) 2021-10-26 19:36:37 -07:00
7c3369e1b9 Fixed default DNS min replica for single node clusters (#8112) 2021-10-26 16:03:46 -07:00
9eacde212f Fix quorum check when recovering broken etcd cluster (#8126) 2021-10-26 15:23:09 -07:00
331647f4ab Remove deprecated Ambassador ingress code (#8086) 2021-10-26 15:19:09 -07:00
c2d4822c38 nginx-ingress: bump up version to 1.0.4 in the README (#8124)
* nginx-ingress: bump to 1.0.4

* Disable builtin ssl_session_cache solving the problem with OpenSSL consuming memory.
* Print warning only instead of error if no IngressClass permission is available.

* nginx-ingress: bump to 1.0.4 in the README
2021-10-25 03:38:23 -07:00
3c30be1320 cert-manager: update docs to reflect 1.5.x links (#8117) 2021-10-25 03:14:23 -07:00
d8d01bf5aa nginx-ingress: bump to 1.0.4 (#8114)
* Disable builtin ssl_session_cache solving the problem with OpenSSL consuming memory.
* Print warning only instead of error if no IngressClass permission is available.
2021-10-24 15:34:22 -07:00
d42b7228c2 Convert numbers to string for calico's inventory check. (#8120)
Fix https://github.com/kubernetes-sigs/kubespray/issues/8119

Signed-off-by: Julio Morimoto <julio@morimoto.net.br>
2021-10-24 11:42:21 -07:00
4db057e9c2 Allow changing metallb default pool name (#8111) 2021-10-22 09:38:39 -07:00
ea8e2fc651 containerd: download containerd from upstream instead of using distro specific packages (#7970)
* Containerd: download containerd from upstream instead of using distro specific packages

split runc download to separate role
make bootstrap-os role deploy container-selinux and seccomp libraries
clean up package manager provided containerd
move variables to docker role that are no longer common with containerd

* Containerd: make molecule testing more relevant

* replace ubuntu18 with ubuntu20
* add centos8 and debian11 to molecule tests
* run kubernetes/preinstall role to ensure relevancy
  of test including dependency packages

* CI: adjust test scenarios for downloaded containerd
2021-10-20 08:47:58 -07:00
10c30ea5b1 Add fallback to node drain using --disable-eviction flag (#8094)
* Add fallback to node drain using --disable-eviction flag

Signed-off-by: Utku Ozdemir <uoz@protonmail.com>

* Move drain fallback tasks to separate file

Signed-off-by: Utku Ozdemir <uoz@protonmail.com>

* Add delegate_facts to fix the drain fallback

Signed-off-by: Utku Ozdemir <uoz@protonmail.com>

* Fix ansible-lint error

Signed-off-by: Utku Ozdemir <uoz@protonmail.com>

* Move drain fallback into block

Signed-off-by: Utku Ozdemir <uoz@protonmail.com>
2021-10-20 00:51:58 -07:00
84b56d23a4 Add jayonlau to reviewers (#8083) 2021-10-19 17:49:57 -07:00
19d07a4f2e Fix ownership related to Calico (#8072)
kube-bench scan outputs warning related to Calico like:

* text: "Ensure that the Container Network Interface file
  permissions are set to 644 or more restrictive (Manual)"
* text: "Ensure that the Container Network Interface file
  ownership is set to root:root (Manual)"

This fixes these warnings.
2021-10-19 17:35:57 -07:00
6a5b87dda4 netchecker: update images to 1.2.2 from Mirantis (#8074)
* netchecker: update images to 1.2.2 from Mirantis, which is slightly less ancient than the l23networks images

* Netchecker: use local etcd instead of kubernetes v1beta1 crds which are no longer supported by kube 1.22+
2021-10-19 10:17:04 -07:00
6aac59394e Rocky Linux support (#8095)
* Add Rocky as a known OS

* Make sure Rocky includes bootstrap-centos.yml

* Update docs with Rocky Linux

* Rocky Linux wireguard and EPEL

* Rocky Linux in the list of supported distributions
2021-10-19 08:29:04 -07:00
f147163b24 Up dashboard version to 2.4.0 - fix forgotten kubeovn version (#8085) 2021-10-15 05:40:54 -07:00
16bf3549c1 Update kube-ovn to 1.8.1 2021-10-14 19:42:54 -07:00
b912dafd7a Update multus to 3.8.0 2021-10-14 19:42:54 -07:00
8b3481f511 Add molecule tests for roles (#8080)
* Add molecule tests for bastion-ssh-config

* Add molecule tests for adduser

* Update .gitignore
2021-10-14 18:46:54 -07:00
7019c2685d Increase cpu limit to prevent throttling (#8076) 2021-10-14 11:03:36 -07:00
d18cc38586 Replcae deprecated --delete-local-data in pre-remove/pre-upgrade tasks (#8081) 2021-10-14 02:25:19 -07:00
cee481f63d cert-manager: upgrade to 1.5.4 (#8069)
* cert-manager: update to 1.5.4

* cert-manager: remove outdated guidelines on creating an initial ClusterIssuer
2021-10-12 09:17:47 -07:00
e4c8c7188e etcd: deploy container engine if needed (#7532)
If the etcd cluster is separate and the etcd_deployment_type is "host",
there is no need for a container engine on the etcd nodes

Do not rely on a 'default(true)' filter, but define a proper default in
kubespray-defaults depending on etcd deployment method and if internal
or external etcd is used
2021-10-12 00:31:47 -07:00
6c004efd5f cert_manager: Remove deprecated ClusterIssuer and its Secret (#8064) 2021-10-11 09:40:40 -07:00
1a57780a75 Add kubeadm_join_phases_skip variable (#8067)
* Add kubeadm_join_phases_skip variable

* Update kubeadm_join_phases_skip comment

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>

* Add kubeadm_join_phases_skip_default variable to follow the same logic with kubeadm_init_phases_skip

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>
2021-10-11 09:36:41 -07:00
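A sketch of the new variable in cluster vars; the phase name is only an example of a valid kubeadm join phase, not something mandated by the commit:

```
# sketch: phases listed here are skipped during kubeadm join
kubeadm_join_phases_skip:
  - "control-plane-join/mark-control-plane"
```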
ce25e4aa21 MetalLB: update to v0.10.3 (#8071)
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
2021-10-11 08:54:40 -07:00
ef4044b62f csi_driver / cinder: implement rescan-on-resize variable via (#8057)
cinder_csi_rescan_on_resize
2021-10-11 02:14:40 -07:00
9ffe5940fe Remove TF 0.14/0.15 support - Add TF 1.x support only (#8062) 2021-10-08 09:01:06 -07:00
c8d9afce1a Update a bunch of tools (#8061) 2021-10-08 09:00:59 -07:00
285983a555 Update docker version to 20.10.9 - CVE fixes (#8060) 2021-10-08 08:56:58 -07:00
ab4356aa69 Calico: bump default version to 3.20.2 (#8058) 2021-10-07 12:59:33 -07:00
e87d4e9ce3 Added terraform script for Hetzner cloud (#8053) 2021-10-07 10:11:46 -07:00
5fcf047191 local-volume-provisioner quay.io -> k8s.gcr.io (#8054) 2021-10-06 17:08:41 -07:00
c68fb81aa7 Clarify documentation for integration.md (#8049) 2021-10-06 16:44:41 -07:00
e707f78899 After upgrade, allow cilium to be back before uncordoning (#7978)
* After upgrade, allow cilium to be back before uncordoning

* add eol

* use kube_config_dir variable
resolves https://github.com/kubernetes-sigs/kubespray/pull/7978#discussion_r721685549
2021-10-05 12:56:58 -07:00
41e0ca3f85 Move kube_feature_gates to kubelet config (#8048)
to remove deprecation warning:

> Flag --feature-gates has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
2021-10-05 06:07:10 -07:00
c5c10067ed Update kubespray version to 2.17.x in first cluster guide (#8043) 2021-10-04 00:09:07 -07:00
43958614e3 Fix kubespray flatcar ansible_os_family and ansible_distribution (#8029)
Closes https://github.com/kubernetes-sigs/kubespray/issues/8028

Signed-off-by: Iago Santos <iago.santos.pardo@adfinis.com>
2021-10-01 09:11:23 -07:00
af04906b51 Ensure apparmor is installed (#8036)
Kubespray deployment failed when using the containerd backend on nodes where apparmor was not installed or had previously been removed. This PR ensures apparmor is installed by adding it to the required_pkgs var.
2021-09-29 23:52:08 -07:00
c7e17688b9 gVisor: bump release to 20210921 version (#8015)
* gVisor: bump release to 20210921 version

* gVisor: drop support for 20210518.0 version
2021-09-29 11:35:20 -07:00
ac76840c5d Upgrade ruamel.yaml.clib to work with Python 3.10 (#8034)
ruamel.yaml.clib did not build with the upcoming Python 3.10.

Cf. https://sourceforge.net/p/ruamel-yaml-clib/tickets/5/

ruamel.yaml.clib==0.2.4 fixes the issue. It does not work
with Python 3.7 (cf https://sourceforge.net/p/ruamel-yaml-clib/tickets/6/)
but currently Kubespray requires Python >= 3.9.
2021-09-29 07:04:49 -07:00
f5885d05ea In CentOS 8.x Docker install Step: remove podman when existing (#8016) 2021-09-29 06:32:48 -07:00
af949cd967 Fix invalid documentation links (#7692)
* Fix invalid link to Ansible documentation

* Fix invalid link to mitogen doc page

* Fix invalid link to calico doc page

* Fix all invalid links to doc pages
2021-09-28 09:58:43 -07:00
eee2eb11d8 Update weave template to match source for 2.8.1 (#8013) 2021-09-28 09:16:43 -07:00
8d3961edbe Add metrics_server_resizer option (#8018)
The addon-resizer container can reduce the cpu and memory resource
limits of the metrics-server container in the pod, and that caused it
to be OOMKilled.
In addition, the original metrics-server manifest doesn't contain
the addon-resizer container as [1].
So this adds metrics_server_resizer option to control the addon-resizer
container deployment and the default value is false to make it stable
for most environments.

[1]: 527679e5e8/manifests/base/deployment.yaml
2021-09-28 00:02:42 -07:00
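A sketch of the new toggle in the sample addons inventory; false is the default per the commit message, and metrics_server_enabled is shown only for context:

```
# addons.yml (sketch)
metrics_server_enabled: true
metrics_server_resizer: false   # set true to deploy the addon-resizer sidecar
```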
4c5328fd1f Determine root filesistem device and partition before running growpart (#8024) 2021-09-27 23:58:42 -07:00
1472528f6d check if 'plugins' key exists in calico_cni_config object (#7717)
* check if 'plugins' key exists in calico_cni_config object

* fix whitespace linting error

* fixed when list indentation
2021-09-27 11:04:20 -07:00
9416c9aa86 Enable stable and edge Docker CLI versions (#8019) 2021-09-27 10:44:19 -07:00
da92c7e215 Add proxy for subscription-manager (#8012)
If using proxy, it is necessary to configure it before running
"subscription-manager status" command.
This adds the step.
2021-09-27 08:47:35 -07:00
d27cf375af Remove allowPrivilegeEscalation from metrics-server (#8014)
"allowPrivilegeEscalation: false" blocks deploying metrics-server
on CentOS7. In addition, the original metrics-server manifest doesn't
contain it as [1]. This removes it.

[1]: 527679e5e8/manifests/base/deployment.yaml
2021-09-27 08:43:36 -07:00
432a312a35 Enable stable and edge containerd versions (#8020) 2021-09-27 08:11:35 -07:00
3a6230af6b Kata-Containers: update versions 2.2.0 (default) and 2.1.1 (#8017)
* Kata-Containers: add 2.2.0 hashes and make default

* Kata-Containers: replace 2.1.0 with bugfix version 2.1.1

* Kata-Containers: move to q35 a more modern VM architecture as 'pc' is removed in 2.2.0
2021-09-27 08:07:35 -07:00
ecd267854b Move ovn4nvf crd from v1beta1 to v1 (#8006) 2021-09-27 01:18:22 -07:00
ac846667b7 Check if openstack application credentials are empty since they always exists (#8021) 2021-09-27 01:14:22 -07:00
33146b9481 CI: Add Calico eBPF in HA mode test (#7710)
* Sample-Inventory: add sample for calico_bpf_enabled

* Calico-Doc: note about CONFIG_NET_SCHED for eBPF support

* CI: Add Calico eBPF in HA mode test
2021-09-24 09:57:23 -07:00
4bace2491d Ensure apparmor is installed (#8011)
Kubespray deployment failed when using the containerd backend on nodes where apparmor was not installed or had previously been removed. This PR ensures apparmor is installed by adding it to the required_pkgs var.
2021-09-24 07:55:23 -07:00
469b3ec525 Add definition check of disable_service_firewall (#7995)
When disable_service_firewall is not specified, the task fails.
This adds a definition check.
2021-09-24 02:31:23 -07:00
22017b7ff0 kube-router 1.3.0 -> 1.3.1 (#8007) 2021-09-23 13:42:55 -07:00
88c11b5946 Revert "etcd: enable v2 api only if needed (#8001)" (#8008)
This reverts commit c0e1211abe.
2021-09-23 10:43:14 -07:00
843252c968 Use kube_config_dir for kubeconfig (#7996)
The path of kubeconfig should be configurable, and its default value
is /etc/kubernetes/admin.conf. Most references to the file were already
configurable, but some were not. This makes them configurable too.
2021-09-23 10:19:13 -07:00
ddea79f0f0 Issue 8004: Fix typha prometheus (#8005)
The typha prometheus settings were in the `volumeMounts` section of the
spec and not in the `envs` section. This was causing the deployment to
fail because it was looking for a volumeMount.

```
failed: [controller-001.a2.da.dev.logdna.net] (item=calico-typha.yml) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "checksum": "598ac79530749e8e2110793b53fc49ac208e7130", "dest": "/etc/kubernetes/calico-typha.yml", "diff": [], "failed": false, "gid": 0, "group": "root", "invocation": {"module_args": {"_original_basename": "calico-typha.yml.j2", "attributes": null, "backup": false, "checksum": "598ac79530749e8e2110793b53fc49ac208e7130", "content": null, "delimiter": null, "dest": "/etc/kubernetes/calico-typha.yml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": null, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/home/core/.ansible/tmp/ansible-tmp-1632349768.56-75434-32452975679246/source", "unsafe_writes": null, "validate": null}}, "item": {"file": "calico-typha.yml", "name": "calico", "type": "typha"}, "md5sum": "53c00ac7f562cf9ecbbfd27899ea066d", "mode": "0644", "owner": "root", "size": 5378, "src": "/home/core/.ansible/tmp/ansible-tmp-1632349768.56-75434-32452975679246/source", "state": "file", "uid": 0}, "msg": "error running kubectl (/opt/bin/kubectl --namespace=kube-system apply --force --filename=/etc/kubernetes/calico-typha.yml) command (rc=1), out='service/calico-typha unchanged\n', err='error: error validating \"/etc/kubernetes/calico-typha.yml\": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[2]): unknown field \"value\" in io.k8s.api.core.v1.VolumeMount, ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[2]): missing required field \"mountPath\" in io.k8s.api.core.v1.VolumeMount, ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[3]): unknown field \"value\" in io.k8s.api.core.v1.VolumeMount, ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[3]): missing required field \"mountPath\" in io.k8s.api.core.v1.VolumeMount]; if you choose to ignore these errors, turn validation off with --validate=false\n'"}
```
2021-09-23 08:37:22 -07:00
c0e1211abe etcd: enable v2 api only if needed (#8001)
* etcd: enable v2 api only if needed

Only enable v2 API if we have a consumer (flannel)
This reduce the exposed surface for etcd.

* Fix bad group name
2021-09-22 12:36:32 -07:00
c8d7f000c9 Remove k8s hooks for versions prior to 1.20 (#7998) 2021-09-22 10:32:01 -07:00
598f178054 Fix cilium operator metrics activation (#8000) 2021-09-22 10:00:02 -07:00
6f8b24f367 Allow failure in cert manager job 2021-09-22 09:50:01 -07:00
5d1b34bdcd Move min k8s version to 1.20 2021-09-22 09:50:01 -07:00
8efde799e1 Update kubernetes version to 1.22.2 2021-09-22 09:50:01 -07:00
96b61a5f53 Update KUBE_VERSION in gitlab-ci following release 2021-09-22 09:50:01 -07:00
a517a8db01 Drop check for kubelet_shutdown_grace_period (#7993)
and kubelet_shutdown_grace_period_critical_pods as ansible cannot do
sane time interval calculations
2021-09-21 18:34:00 -07:00
2211504790 Fix k8s-certs-renew cp path (#7992)
Signed-off-by: Wang Zhen <lazybetrayer@gmail.com>
2021-09-21 00:36:22 -07:00
fb8662ec19 Calico: update versions 3.20.1, 3.19.3 (#7984)
* make Calico 3.20.1 the default version
* drop Calico 3.17.x support
2021-09-20 17:40:23 -07:00
6f7911264f Calico: make calico_min_version check relevant (#7939)
* Calico: make calico_min_version check relevant

* Calico: only check currently installed version against the oldest supported version by the previous release
2021-09-20 07:58:09 -07:00
ae44aff330 Calico: increase calico node probe timeouts and allow tuning (#7981) 2021-09-17 16:08:07 -07:00
b83e8b020a Fix default version (#7977) 2021-09-17 07:31:00 -07:00
30cd91dc6b Add option to kubeadm upgrade command to control certificates renewal during control plane upgrade (#7976)
* Add option to kubeadm upgrade command to control certificates renewal during control plane upgrade

* Remove trailing whitespace
2021-09-17 04:31:00 -07:00
09af3ab074 Set Kubernetes default version to 1.21.5 2021-09-17 00:39:02 -07:00
f2fa9c3b31 Update hashes with new versions 2021-09-17 00:39:02 -07:00
30a7dfa4f8 Fix ubuntu16/centos8 CI jobs (#7972) 2021-09-16 23:39:01 -07:00
62ab477838 remove kube_proxy_conntrack_max var (#7971) 2021-09-15 08:22:31 -07:00
1edb7d771f Modify connection_strings_etcd to only return etcd nodes (#7966)
Modify connection_strings_etcd to only return etcd nodes - not master nodes - since including master nodes results in duplicate hosts in the generated Ansible inventory and is unnecessary.
2021-09-15 00:58:40 -07:00
f8a57f7598 Fix iptables missing on Debian 11 if APT::Install-Recommends=0 (#7964)
On Debian 11, `ipset` only recommends `iptables`, so on systems where apt is configured with `APT::Install-Recommends "0";` iptables is not installed automatically (see the sketch below).
2021-09-14 08:19:09 -07:00
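A sketch of the fix under that assumption: install `iptables` explicitly instead of relying on the Recommends of `ipset`:

```yaml
# sketch: don't depend on apt Recommends to pull in iptables on Debian 11
- name: Install ipset and iptables explicitly
  ansible.builtin.apt:
    name:
      - ipset
      - iptables
    state: present
```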
85d18fc107 add node-based upgrade (#7785) 2021-09-13 23:59:07 -07:00
aa00c1d91a Updated UpCloud terraform script to use private network and dynamic (#7779)
additional disks
2021-09-10 13:55:21 -07:00
a5a88e41af Fix: adding new ips with inventory builder (#7577) (#7583)
* Fix: adding new ips with inventory builder (#7577)

* moved config loading logic
to after checking whether the config
should be loaded, and added a check for
whether the config should be loaded

* added a check for removing nodes from the config:
if the user wants to remove a node, we
need to load the config

* Fix tox errors
2021-09-10 12:21:22 -07:00
35c928798d Fix missing file mode (risky-file-permissions) (#7959)
* Fix missing file mode (risky-file-permissions)

Found this using ansible-lint.

Signed-off-by: Bryan Hundven <bryanhundven@gmail.com>

* Fix another missing file mode (risky-file-permissions)

This one fixes `/etc/crio/config.json`

Signed-off-by: Bryan Hundven <bryanhundven@gmail.com>
2021-09-09 23:35:59 -07:00
83f64a7ff9 Bugfix/cinder csi cloud config template (#7955)
* Fix invalid condition for username and password inclusion

* Use length filter to test variable conditions
2021-09-09 10:04:11 -07:00
60853fa682 Update kube-ovn to 1.7.2 2021-09-09 08:14:10 -07:00
b66356be65 Update cilium to 1.9.10 2021-09-09 08:14:10 -07:00
efae2dbad6 Update snapshot-controller repository and image versions (#7957) 2021-09-09 08:10:11 -07:00
a7b56a616d Fix README for containerd/calico/certmanager/nginx (#7950) 2021-09-08 16:56:10 -07:00
bd8b8916a8 Remove invalid spec - deployment.spec.serviceName (#7949) 2021-09-08 13:05:56 -07:00
57063b6828 Replace incorrect {% end %} tags with {% endif %} in csi_crd templates (#7947) 2021-09-08 12:59:57 -07:00
69b67a293a Calico: Add kube_service_addresses_ipv6 to serviceClusterIPs (#7889) (#7944)
Add IPv6 Service Addresses to BGP advertisement when 
calico_advertise_cluster_ips is true.
2021-09-08 00:37:20 -07:00
d57ddf0be8 Feature DynamicKubeletConfig is deprecated in 1.22 and will not move to GA (#7938)
* Feature DynamicKubeletConfig is deprecated in 1.22 and will not move to GA

* Add check for dynamic_kubelet_configuration with kube >= 1.22
2021-09-07 10:47:16 -07:00
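An illustrative version of such a check; `dynamic_kubelet_configuration` and `kube_version` are existing Kubespray variables, while the assert wording itself is an assumption:

```yaml
# sketch: refuse to deploy the deprecated feature on Kubernetes >= 1.22
- name: Stop if dynamic kubelet configuration is requested with kube >= 1.22
  ansible.builtin.assert:
    that:
      - "not (dynamic_kubelet_configuration | default(false)) or kube_version is version('v1.22.0', '<')"
    msg: "DynamicKubeletConfig is deprecated in 1.22 and will not move to GA"
```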
43e7e2d663 nginx-ingress: bump to 1.0.0 to support kube 1.22 (#7942) 2021-09-06 04:50:36 -07:00
d355b43dce ContainerD: bump containerd version to 1.4.9 (#7940) 2021-09-06 04:50:29 -07:00
5d52025266 crictl: add hashes for 1.22 (#7936) 2021-09-06 04:46:29 -07:00
db470f8529 Update CSI snaphotter and make it independent (#7943)
* CSI: update CSI snapshot CRDs

* CSI: update snapshot controller tag version with kubernetes specific versions

* CSI: allow enabling csi_snapshot_controller independent of Cinder CSI

* CSI: Align csi-snapshot-controller with upstream and use a Deployment instead of a StatefulSet
2021-09-06 04:24:29 -07:00
c8f3d88288 Retry vagrant and periodic packet jobs too 2021-09-06 02:58:29 -07:00
b54cf5bd0a Add git to kubespray image 2021-09-06 02:58:29 -07:00
7e4b176323 Update Ansible tags in documentation (#7933) 2021-09-02 10:08:58 -07:00
81bf4f9304 cri-o registry auth support (#7837)
* cri-o registry auth support

* yaml lint for comments

* crio_registry_auth from registry_auth

* crio_registry_auth as defaults
2021-09-01 10:20:59 -07:00
e1967b0700 MetalLB: keep nodeSelector in one place (#7931)
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
2021-09-01 09:05:00 -07:00
507091ec8b Replace cluster_name by dns_domain (#7923)
`cluster_name` defaults to `dns_domain` value (see [here][1] and [here][2])
but they could have different values.

`dns_domain` should be used here instead of `cluster_name` because the DNS
resolution is configured to use `dns_domain`.

[1]: 0ef7af76bc/roles/kubespray-defaults/defaults/main.yaml (L104)
[2]: 1afdb05ea9/inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml (L196)
2021-09-01 08:18:59 -07:00
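A short sketch of why the substitution matters; the default values are taken from the linked files, the consuming example is illustrative:

```yaml
# defaults (sketch): cluster_name falls back to dns_domain, but either can be overridden
dns_domain: cluster.local
cluster_name: "{{ dns_domain }}"

# illustrative consumer: names that must actually resolve should be built from
# dns_domain, because that is the domain the cluster DNS is configured with, e.g.
# no_proxy: ".svc,.svc.{{ dns_domain }}"
```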
c7529270ff Fix CI script for Terraform >0.15 (#7928) 2021-09-01 04:30:59 -07:00
48ceca4919 MetalLB: update to v0.10.2 (#7925)
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
2021-09-01 03:00:59 -07:00
0171c71de0 Update Terraform 0.14 to .11, remove 0.13 jobs and add 0.15 2021-08-31 16:32:59 -07:00
46d0df394f Add one retry to packet_pr jobs 2021-08-31 16:32:59 -07:00
207d3e7b4e Add Debian-11 image and CI (#7919) 2021-08-31 14:02:22 -07:00
426ad81db0 Calico: replace hashes for latest 3.17 and 3.18 to the .5 minor versions (#7924) 2021-08-31 13:38:21 -07:00
497d2ca306 Fix Calico's FelixConfiguration when "IP in IP" is disabled (#7926)
When using Calico with:

- `calico_network_backend: vxlan`,
- `calico_ipip_mode: "Never"`,
- `calico_vxlan_mode: "Always"`,

the `FelixConfiguration` object has `ipipEnabled: true`, when it should be false:

This is caused by an error in the `| bool` conversion in the install task:
when `calico_ipip_mode` is `Never`,
`{{ calico_ipip_mode != 'Never' | bool }}` evaluates to `true`:
2021-08-31 13:14:21 -07:00
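In Jinja2 the `| bool` filter binds more tightly than `!=`, so the broken expression compares the string against `False` and is always true. A minimal before/after sketch of the template line:

```yaml
# broken: parsed as  calico_ipip_mode != ('Never' | bool)
ipipEnabled: "{{ calico_ipip_mode != 'Never' | bool }}"

# fixed: apply the filter to the whole comparison
ipipEnabled: "{{ (calico_ipip_mode != 'Never') | bool }}"
```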
9d3888a756 During pre-upgrade add a flag to always cordon (#7892)
* During pre-upgrade add a flag to always cordon

* empty

* empty

* empty

* Better default val
2021-08-30 10:56:09 -07:00
c8e090c17f Add preliminary Debian 11 (bullseye) support (#7853)
- Use python3-apt instead because python-apt was removed in Debian 11
- Add gnupg (fix "container-engine/containerd : ensure containerd repository public key is installed" task failed)
- Remove aufs-tools

Signed-off-by: rtsp <git@rtsp.us>
2021-08-30 09:53:06 -07:00
77a74adedd Bump centos8 CI job memory to 3go and remove mitogen for fedora CI (#7921) 2021-08-30 08:25:13 -07:00
1ccf32e08f Update docker to 20.10.8 (#7918) 2021-08-30 08:25:06 -07:00
b5aced20e1 Update Kubernetes version to 1.21.4 2021-08-30 08:17:05 -07:00
17af348be8 Add bunch of Kubernetes versions missing 2021-08-30 08:17:05 -07:00
1afdb05ea9 Fedora and RHEL use etc_t and the convention is <type_name>_t (#7891)
* Fedora and RHEL use etc_t and the convention is <type_name>_t

* Docs: specify all values for preinstall_selinux_state

* CI: Add Fedora 34 with SELinux in enforcing mode
2021-08-27 14:20:53 -07:00
425b6741c6 Fix failed image build on pip installing ansible (#7862)
Related pip bug: https://stackoverflow.com/questions/68687029/unable-to-build-kubespray-container-from-dockerfile
Proposed workaround in comment: https://github.com/pypa/pip/issues/10219#issuecomment-887337037
Setting LANG only prior to launching pip fixes the issue with a successful build
2021-08-26 07:47:23 -07:00
d635961120 Add Infomaniak to compatible public clouds list (#7910) 2021-08-26 06:47:24 -07:00
d5b865da4d Update etcd without rotating etcd certs (#7907) 2021-08-26 00:21:23 -07:00
89993e4833 fix error metrics server capabilities name (#7905) 2021-08-25 12:06:15 -07:00
6b5da84014 Clean up extra spaces last one (#7904)
Although these errors are not important, they affect code style consistency.
2021-08-25 12:06:09 -07:00
1c3d33e146 Calico: 3.20.0 policy update to allow access to endpointslices (#7899) 2021-08-25 12:06:01 -07:00
71af4b4a85 chore : use --no-cache-dir flag to pip in dockerfiles to save space (#7898)
Using the --no-cache-dir flag with pip install ensures that packages downloaded
by pip are not cached on the system. This is a best practice that makes sure
packages are fetched from the repository instead of a local cache. Furthermore,
in the case of Docker containers, restricting caching reduces the image size.
In terms of stats, the saving depends on the number of python packages
multiplied by their respective sizes; for heavy packages with a lot
of dependencies it saves a lot by not caching pip packages.

More detailed information can be found at

https://medium.com/sciforce/strategies-of-docker-images-optimization-2ca9cc5719b6

Signed-off-by: Pratik Raj <rajpratik71@gmail.com>
2021-08-25 12:05:55 -07:00
c49dd50ef3 add tags: always to all included service playbooks (#7906) 2021-08-25 12:01:54 -07:00
f66c49bf42 Calico: replace version 3.19.1 with 3.19.2 and set as default (#7867)
Bump calico version to 3.19.2 due to adding 3.20.0 earlier
2021-08-25 07:32:41 -07:00
4c9d7dedb3 addons/cert_manager: retries until webhook pods has been created (#7850)
Fix the task 'Cert Manager | Wait for Webhook pods become ready' failing because the webhook pods don't exist yet, by using the `retries..until` trick like kubernetes-sigs/kubespray#7842 (see the sketch below).

This fix should be removed in the future once kubernetes/kubernetes#83242 is resolved.

Signed-off-by: rtsp <git@rtsp.us>
2021-08-25 07:16:41 -07:00
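A hedged sketch of the `retries..until` pattern referenced above; the task name comes from the commit, while the kubectl invocation, selector, and retry counts are assumptions:

```yaml
- name: Cert Manager | Wait for Webhook pods become ready
  command: >-
    {{ bin_dir }}/kubectl wait pods --namespace cert-manager
    --selector app.kubernetes.io/component=webhook
    --for=condition=Ready --timeout=30s
  register: cm_webhook_ready
  retries: 30
  delay: 10
  until: cm_webhook_ready is succeeded
```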
5336943a8c add cilium_operator_api_serve_addr to cilium operator config (#7901) 2021-08-24 03:49:13 -07:00
9dfade5641 Update nodes.md (#7902) 2021-08-24 02:43:14 -07:00
a040e521b4 feat(containerd): auth support (#7868)
* feat(containerd): auth support

* fix(registry-auth): rename variable
2021-08-23 06:40:00 -07:00
dad4b26c6f Update Azure.md (#7880) 2021-08-20 20:23:58 -07:00
0ac364dfae Calico: use --allow-version-mismatch in calicoctl.sh to allow upgrades (#7873) 2021-08-20 14:30:48 -07:00
dfd35892f2 docs/cert_manager.md: Update docs for K8s v1.22 (#7877) 2021-08-19 18:31:24 -07:00
79166496f3 debian: Fix test failed after bullseye release (#7888) 2021-08-19 15:37:24 -07:00
c7d12cddec Ensure python main function return values (#7860)
The main functions are wrapped by a sys.exit call which expects an
argument. The current implementation isn't returning values in all cases.
This change ensures main functions return a value in all cases.
2021-08-19 06:51:24 -07:00
1f09229740 Update cilium to 1.9.9 (#7871)
Now that 1.10 is out, this makes 1.9.9 the default. I am running
this version successfully.
2021-08-16 13:34:22 -07:00
c2d4700571 Remove unused python imports (#7859) 2021-08-13 13:35:32 -07:00
c06896a352 Update metrics-server to 0.5.0 (#7864) 2021-08-12 08:19:48 -07:00
c119620f7c Calico: add v3.20.0 hashes (#7855) 2021-08-11 07:50:46 -07:00
7f309bb092 fix parameters for module replace in 0060-resolvconf (#7858) 2021-08-10 17:13:26 -07:00
e2b67b5700 Add suport of Vsphere CSI driver 2.2.X versions (#7848) 2021-08-09 08:19:38 -07:00
82a9064d8d addons/cert_manager: fix kubernetes-sigs#7085 by adding retries..until (#7842)
Fix the task 'Cert Manager | Apply ClusterIssuer manifest' failing because the service/endpoints update is delayed even though the webhook pod status is ready.

Signed-off-by: rtsp <git@rtsp.us>
2021-08-09 08:19:31 -07:00
a70fab2249 Bump crun to 0.21 version (#7854) 2021-08-09 08:11:31 -07:00
86b45fce6a Remove environment variable in remove-node play (#7729) 2021-08-02 04:29:21 -07:00
31a5a4e808 retry to fetch binary if it fails first time (#7839) 2021-07-30 00:17:38 -07:00
5db86f4c2b Update vSphere CPI (#7838)
Changes:
  * ClusterRole updated according to the latest manifests from
    https://github.com/kubernetes/cloud-provider-vsphere
  * vSphere CPI/CSI default versions bumped and
    tested successfully on K8S 1.21.1
  * vSphere documentation updated

Signed-off-by: Vitaliy D <vi7alya@gmail.com>
2021-07-29 18:17:37 -07:00
20c284c276 doc: Update 'Kubespray vs Kubeadm' (#7834)
Non-kubeadm mode was removed in ddffdb63bf,
2.5 years ago. Mentioning it only causes unnecessary confusion today, so
this updates the documentation.
2021-07-28 03:15:34 -07:00
befc6cd650 Update MetalLB documentation (#7833)
- Added a hint about the kube_proxy_strict_arp configuration, which is required for MetalLB to work
 - See also https://github.com/kubernetes-sigs/kubespray/pull/5180/files
2021-07-27 08:46:45 -07:00
97d95775a5 Disable OVH CI until voucher situation is cleared up (#7824) 2021-07-26 06:16:33 -07:00
8f44cd35d8 Fix how to get image ID on offline deployment (#7808)
Previously, the IDs of container images were taken from the image tar files,
but that approach was wrong: if a tar file contains multiple json files, the
script got multiple IDs and tried to pass all of them to the `docker tag`
command, which then failed.

This updates the script to get image IDs from the `docker image inspect` command
to fix this issue.
In addition, this adds a check for whether a registry container already exists
before deploying the registry container, to avoid a container conflict failure.
2021-07-26 00:56:33 -07:00
627a06e30d CRI-O: Install libseccomp2 from backports on Debian 10 (#7816)
* CRI-O: Install libseccomp2 from backports on Debian 10

libseccomp2 is a required dependency of cri-o-runc package

The one provided in Debian 10 repositories is outdated

* 7816: Remove useless when condition

As this condition is handled by block
2021-07-23 07:07:16 -07:00
bfebcfa2c5 fix(misc): contrib/terraform/aws (#7818)
* fix(misc): terraform/aws

- handles deployment with a single availability zone
- handles deployment with more than two availability zones
- handles etcd collocation with control-plane nodes (`aws_etcd_num=0`)
- allows to set a bastion instances count (`aws_bastion_num`)
- allows to set bastion/etcd/control-plane/workers rootfs volume size
- removes variables from terraform.tfvars that were not re-used
- adds .terraform.lock.hcl to .gitignore
- changes/updates base image from ubuntu-18.03 to debian-10

tested by a few coworkers of mine, and myself: thanks for the outstanding
work, on both those terraform samples and kubespray playbooks.
I did not test ubuntu deployments, I could still swap from buster to
focal. LMK.

* fix(gitlab-ci)

AFAIU, terraform.tfvars indentation should be fixed so that
`terraform fmt -check -diff` returns no diff

https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/-/jobs/1445622114
2021-07-23 02:43:16 -07:00
56e230863a Separate gvisor_download_url for runsc and shim (#7760)
To download the necessary files in advance for offline deployment,
we can list all file URLs with contrib/offline/generate_list.sh.
Most URLs are downloadable, but gVisor's is not, because the listed
URL is only a part of the full gVisor URLs.
So that gVisor's files can be downloaded directly from the listed URLs,
this separates them into two URLs, one for runsc and one for the shim.
2021-07-22 07:51:51 -07:00
e5ee47408e Allow failure on tf-elax_ubuntu18-calico (#7814)
tf-elax_ubuntu18-calico is very flaky today. The test job fails
due to an SSH connectivity check error after deploying the virtual machines
which are used as Kubernetes nodes.
This allows failure on the job so the test situation can be observed without
failing pull request merges.
2021-07-22 07:47:52 -07:00
f21a707e99 Add containerd on Flatcar Container Linux (#7681) 2021-07-21 06:28:07 -07:00
0ef7af76bc Fixup label for oracle linux bootstrap 2021-07-20 01:29:31 -07:00
18666b3e2d Update multus to 3.7.2 (and move to ghcr.io) 2021-07-20 01:29:31 -07:00
ed87386d7b Set default k8s version to 1.21.3 2021-07-20 01:29:31 -07:00
1ad9b33b08 Add hashes for k8s 1.20.8/.9 and 1.19.12/.13 and 1.21.3 2021-07-20 01:29:31 -07:00
000b4565c2 Fix erroneous ansible args 2021-07-20 01:29:31 -07:00
eda75fc706 Update kube-router to 1.3.0 2021-07-20 01:29:31 -07:00
6583add63a Update flannel to 0.14.0 (moved from coreos repo to flannel-io) 2021-07-20 01:29:31 -07:00
441ad841cc Use dashboard 2.3.1 image 2021-07-20 01:29:31 -07:00
6511c5dd7a Set Helm default version to 3.6.3 2021-07-20 01:29:31 -07:00
d5cbb19b39 Update kube-ovn to 1.7.1 2021-07-20 01:29:31 -07:00
b0fcc1ad1d Add error handling for registering images (#7787)
When running the script, I faced the following error, but it was
difficult to know the root cause due to the lack of error handling.

  docker tag" requires exactly 2 arguments.
  See 'docker tag --help'.

  Usage:  docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

  Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

To make such errors easier to investigate, this adds error handling.
2021-07-18 17:58:51 -07:00
417180246c Fix: typos in docs and comments (#7805) 2021-07-16 18:58:50 -07:00
1892562614 Updated README (#7800) 2021-07-16 06:38:08 -07:00
22b128dfd2 fix: update metallb docs url (#7802) 2021-07-16 03:38:08 -07:00
802fb8b591 Add application credentials support for cinder (#7799)
* csi-driver: Added possibility to use application credentials for cinder

* external-cloud-controller: Added env vars for openstack application credentials
2021-07-15 00:56:48 -07:00
c2cf0d9945 add containerd on fedora CoreOS (#7794)
* set selinux type t_etc if selinux state is enforcing

* workaround with update repo is no longer needed
remove comments about failing playbook

* grubby is not available in distros using ostree

* remove docker support because removed in fcos
update install script example with live rootfs

* do not call grubby on ostree based distro

* update docs enabling containerd on fedora coreos
2021-07-15 00:00:48 -07:00
3b3ccac212 Update README.md (#7784)
Update README for control_plane's external volume type variable
2021-07-13 22:52:26 -07:00
e61a9077f4 Clean up extra spaces about configuration-qemu.toml.j2 (#7795)
Clean up extra spaces; although these errors are not important, they affect code style consistency.
2021-07-13 06:38:34 -07:00
59ce9f9b87 Set image version to v2.16.0 (#7792) 2021-07-13 06:34:36 -07:00
bf54dc082b set selinux type t_etc if selinux state is enforcing (#7791) 2021-07-13 06:34:29 -07:00
3ff7bc1f64 Added k8s 1.21.2 (#7789) 2021-07-13 06:26:29 -07:00
7516fe142f Move to Ansible 3.4.0 (#7672)
* Ansible: move to Ansible 3.4.0 which uses ansible-base 2.10.10

* Docs: add a note about ansible upgrade post 2.9.x

* CI: ensure ansible is removed before ansible 3.x is installed to avoid pip failures

* Ansible: use newer ansible-lint

* Fix ansible-lint 5.0.11 found issues

* syntax issues
* risky-file-permissions
* var-naming
* role-name
* molecule tests

* Mitogen: use 0.3.0rc1 which adds support for ansible 2.10+

* Pin ansible-base to 2.10.11 to get package fix on RHEL8
2021-07-12 00:00:47 -07:00
b0e4c375a7 Allow cri-o offline install (#7777) 2021-07-09 20:52:45 -07:00
d1388d69d0 Fix tests following python change (#7775)
* Fix ansible detection for python3 and ubuntu

* Fix oracle missing centos-extras repo for containerd/docker dependencies
2021-07-08 18:52:53 -07:00
a3149a41f1 Clean up extra spaces (#7783)
Clean up extra spaces; although these errors are not important, they affect code style consistency.
2021-07-08 14:56:53 -07:00
823bd9118e Clean up extra spaces of kubespray-aws-inventory.py (#7774)
Clean up extra spaces; although these errors are not important, they affect code style consistency.
2021-07-08 01:32:53 -07:00
394afc957b Update vars.md to remove mention of string syntax of node_labels (#7776)
* Update vars.md to remove mention of string syntax of node_labels

Fixes https://github.com/kubernetes-sigs/kubespray/issues/6215

* Try to fix markdown linting

* Update docs/vars.md
2021-07-07 14:20:22 -07:00
63e92d719a Clarify first master replace (#7761)
* Update nodes.md

* fix syntax

* fix syntax - part 2

* replace master with kube_control_plane

* return etcd-master
2021-07-07 13:42:23 -07:00
9b87131b19 Fix Operating Systems menu for Amazon Linux 2 (#7772) 2021-07-05 01:30:55 -07:00
4a15994da0 Update link for kubepsray project (#7758)
https://github.com/kubernetes-incubator/kubespray is an old link;
this updates the link.
2021-07-05 01:12:55 -07:00
d0fb537448 Ubuntu changed package name python-apt to python3-apt (#7769)
* replaced deprecated python package with python3 package

* removed the version due to duplication
2021-07-02 06:56:13 -07:00
59cf1770bc Clean up residual files about /usr/libexec (#7756)
When resetting, the directory /usr/libexec needs to be cleaned up.
2021-07-01 02:13:54 -07:00
b77f207512 Docs: Replace master with control plane (#7767)
This replaces master with "control plane" in Kubespray docs
because of [1].

[1]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint/README.md#motivation
2021-07-01 00:55:55 -07:00
b46a69f5e1 add python requirement ruamel.yaml.clib==0.2.2 to keep python 2.7 compatible (#7754) 2021-06-30 08:19:04 -07:00
0aaba5ea30 added destination filename to cp command (#7764) 2021-06-30 08:13:03 -07:00
bd6d810d0a nodelocaldns: allow binding metrics address to host IP (#7748) 2021-06-29 05:28:41 -07:00
e3850fbbbc Extra spaces of macvlan (#7752)
Although these errors are not important, they affect code style consistency.
2021-06-28 02:13:25 -07:00
05d864c913 Calico Docs: clarify the algorithm to calculate calico_veth_mtu (#7749)
* Calico Docs: clarify the algorithm to calculate calico_veth_mtu

* Update sample calico_veth_mtu
2021-06-27 23:59:25 -07:00
a3e34f589a Enable Graceful Node Shutdown for Kubernetes >= 1.21.0 (#7746)
* Enable Graceful Node Shutdown for Kubernetes >= 1.21.0

* Add sample graceful shutdown parameters
2021-06-27 23:53:25 -07:00
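The two variable names below appear in the commits above; the sample values are purely illustrative:

```yaml
# sample graceful node shutdown parameters (Kubernetes >= 1.21)
kubelet_shutdown_grace_period: 60s
kubelet_shutdown_grace_period_critical_pods: 20s
```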
a2cf6816ce Calico wireguard (#7638)
* Calico: add Wireguard support

* CI: Add Calico Wireguard scenario
2021-06-25 03:22:45 -07:00
7b3bc54cc3 [KS-0] - added forgotten bracket in README.md (#7727) 2021-06-25 03:10:45 -07:00
cda88e6770 Clean up extra spaces (#7744)
I recently reviewed the code; although these errors are not important, they affect code style consistency.
2021-06-25 01:44:46 -07:00
70f1abbc18 fix broken link in doc (#7736)
* fix broken link in doc

* Revert "fix broken link in doc"

This reverts commit b427d1f57f.

* move metallb doc to right place, fixing broken link
2021-06-25 01:34:45 -07:00
bbcafb5d7b Clean up residual files about modules-load.d (#7737)
When resetting, the files kube_proxy-ipvs.conf and kubespray-br_netfilter.conf need to be cleaned up.
2021-06-25 00:32:45 -07:00
d7039ef707 Openstack cwd (#7643)
* terraform/openstack: Use path.root for ansible_bastion_template.txt

The path.root variable points to the root module path. Using this
instead of a relative path makes less assumptions about the current
working directory.

* terraform/openstack: Add group_vars_path variable

Previously, the group_vars path was assumed to be in CWD. The
default value for the group_vars_path variable is still relative
to CWD and thus should be backwards compatible if unset.
2021-06-25 00:26:45 -07:00
271be92b02 Update kubernetes-reliability.md (#7724)
It's a minor change, I just corrected `–` char to `-`.
2021-06-21 10:36:51 -07:00
a31baf3c16 Fix deployment without openstack cacert (#7723)
* fix group name

* fix external-openstack-cloud-config secret

* don't add ca.cert in the secret if not defined
2021-06-21 05:38:50 -07:00
e83728897b Clean up residual files (#7722)
* Clean up residual files

When reset, you need to clean up to the kerw directory.

* Update main.yml
2021-06-21 05:34:50 -07:00
282a27a07c gVisor: initial support for gVisor container runtime (#7661)
* Docker/Containerd: move downloads urls to containerd-common

* gVisor: initial support for gVisor container runtime
2021-06-21 05:18:51 -07:00
3fe6dbb65c fix image pull url for coredns v1.8.0 (#7702) 2021-06-16 17:00:19 -07:00
7547e6a272 Ubuntu 21.04 changed packagename python-apt in python3-apt (#7715) 2021-06-16 13:58:00 -07:00
1928dafc7e Revert to conmon location override for Redhat and Fedora (#7701) 2021-06-16 09:07:59 -07:00
0cbc0f4119 merge apps roles (#7688) 2021-06-16 08:10:07 -07:00
e77b9bf3ee Update kube-ovn to 1.7.0 (#7686) 2021-06-16 08:10:00 -07:00
7f7e83a4d9 fix local-path-provisioner helper image repo (#7703) 2021-06-16 08:06:00 -07:00
85fe716d46 Drop "Server" from crio repo URL (#7698) (#7699)
$releasever can be 7Server, but there is no such CentOS path on
download.opensuse.org.

Use ansible_distribution_major_version instead of $releasever.
2021-06-11 05:10:59 -07:00
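A sketch of the substitution; the variable name and elided repository path are illustrative, only the switch from `$releasever` to `ansible_distribution_major_version` comes from the commit:

```yaml
# before (sketch): $releasever can expand to "7Server", which has no matching path
#   baseurl=https://download.opensuse.org/.../CentOS_$releasever/
# after (sketch): build the path from the Ansible fact instead
crio_repo_base_url: "https://download.opensuse.org/.../CentOS_{{ ansible_distribution_major_version }}/"
```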
85ff3eb8be Update the version of local_volume_provisioner (#7684)
As shown in [1], v2.4.0 of local_volume_provisioner has already been released.
This updates the version.

[1]: https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/tags
2021-06-11 04:36:59 -07:00
e55c359cf9 Update docker packages to 20.10.7 (#7685) 2021-06-11 04:32:59 -07:00
8d7327c188 Remove old groups from test inventory (#7656)
We have released v2.16 of Kubespray already, so we can remove those
old groups from the test inventory as the TODO says.
2021-06-09 02:45:48 -07:00
ca731dca95 readme: invalid k8s_cluster.yml, the created file is k8s-cluster.yml (#7677) 2021-06-07 10:26:56 -07:00
d66da21726 make sure serviceaccounts/token is only in the metadata stage (#7679) 2021-06-07 08:38:40 -07:00
1069b05e68 Improve scale flow and documentation (#7610)
* Improve scale flow

* Add confirmation prompt again
2021-06-07 05:02:40 -07:00
ec0c0d4a28 Calico enable support for eBPF (#7618)
* Calico: align manifests with upstream

* allow enabling typha prometheus metrics

* Calico: enable eBPF support

* manage the kubernetes-services-endpoint configmap

* Calico: document the use of eBPF dataplane

* Calico: improve checks before deployment

* enforce disabling kube-proxy when using eBPF dataplane
* ensure calico_version is supported
2021-06-07 04:58:39 -07:00
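A hedged sketch of the resulting configuration; `calico_bpf_enabled` and `kube_proxy_remove` are existing Kubespray variables, while the check task is illustrative:

```yaml
# enable the eBPF dataplane and drop kube-proxy, which it replaces
calico_bpf_enabled: true
kube_proxy_remove: true

# illustrative pre-deployment check enforcing the combination
- name: Stop if kube-proxy is still enabled with the Calico eBPF dataplane
  ansible.builtin.assert:
    that: kube_proxy_remove
  when: calico_bpf_enabled | default(false)
```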
1739b27231 Replace yum module with package module (#7621) 2021-06-05 04:16:39 -07:00
d9d29af87f update containerd to version 1.4.6 (#7674) 2021-06-03 10:55:38 -07:00
7036b704b3 Replace Kata 1.x with Kata 2.x (#7670)
* Kata: add Kata 2.x checksums and adjust download urls for 2.x

* Kata: drop 1.x version which is no longer supported

* Kata: set default version 2.1.0
2021-06-02 00:50:41 -07:00
54cda80018 Fix debian docker available version (#7668) 2021-06-01 20:58:39 -07:00
b46e751573 protect against TypeError in case of NoneType (#7659) 2021-06-01 08:24:27 -07:00
6a2ea94b39 Docs improvements (#7660)
* Docs: update sidebar

* Docs: move registry documentation into docs/

* Docs: move rbd_provisioner documentation into docs/

* Docs: move cephfs_provisioner into docs/

* Docs: move local_volume_provisioner documentation into docs/

* Docs: move ambassador.md to docs/ingress_controller/

* Docs: move metallb.md to docs/ingress_controller/

* Docs: move ingress_nginx documentation into docs/

* Docs: move alb_ingress_controller documentation into docs/

* Docs: merge ambassador documentation into docs/ingress_controller/

* Docs: move cert_manager documentation into docs/

* Docs: move bootstrap-os documentation into docs/

* Docs: update file locations in sidebar
2021-06-01 07:30:27 -07:00
4674b03661 Add cinder_csi_ignore_volume_az (#7624)
Signed-off-by: Cedric Hnyda <cedric.hnyda@itera.io>
2021-06-01 07:10:27 -07:00
e2f1964389 Fix typo (#7665)
Signed-off-by: Guangwen Feng <fenggw-fnst@cn.fujitsu.com>
2021-06-01 00:34:27 -07:00
922de32290 spelling mistakes (#7664)
Signed-off-by: kjinan <2008kongxiangsheng@163.com>
2021-05-31 05:46:26 -07:00
7896bc7831 Add Fedora 33 image and CI, remove Fedora 31 (EOL) + update docker packages (#7657)
* Update docker package to 20.10.6

* Add Fedora 33 image and CI, remove Fedora 31 (EOL)
2021-05-28 08:04:25 -07:00
da07459bd6 Update crun 0.19 checksum (#7655)
The checksum of crun 0.19 is not correct; this commit fixes it
2021-05-27 15:20:23 -07:00
3ca205446e Added possibility to specify vSphere credentials via env variables (#7646)
* Added possibility to specify vSphere credentials via env variables

* Removed excessive spacing
2021-05-27 12:02:30 -07:00
eff1931283 Add retries to 'Set label for route reflector' task (#7645) 2021-05-27 12:02:23 -07:00
3a37a49690 Packet renamed (#7653)
* Packet->Equinix Metal rename #6901 

Updates throughout to reflect the #6901 renaming of Packet to Equinix Metal.

* Rename Packet to Equinix Metal throughout the project #6901

Packet is renamed to Equinix Metal in more contexts including
documentation links. The Terraform provider used is still the Packet
provider. The environment variables and configuration options still
refer to the Packet name.

Signed-off-by: Marques Johansson <mjohansson@equinix.com>

Co-authored-by: Edward Vielmetti <ed@packet.net>
2021-05-27 11:58:24 -07:00
fd8ae54fa7 Docker default version is now 20.10 2021-05-27 11:18:24 -07:00
79fdee3979 Bump crio to default 1.21 2021-05-27 11:18:24 -07:00
a754c0d476 Kubernetes now use CoreDNS 1.8.0 2021-05-27 11:18:24 -07:00
7208169db3 Update kubernetes version to 1.21.1 2021-05-27 11:18:24 -07:00
94dac10be7 Update KUBE_VERSION in gitlab-ci following release (#7647) 2021-05-26 09:11:29 -07:00
d5fcbcd89f Update nodes.md (#7649) 2021-05-26 09:07:21 -07:00
7b5d43cc00 Calico: upgrade 3.18 to 3.18.4 (#7648) 2021-05-26 05:51:21 -07:00
c5ccedb694 store openstack external cloud controller ca.cert in a k8s secret instead of the host filesystem (#7603) 2021-05-26 00:35:21 -07:00
858b29f425 Calico: add support for v3.19.1 (#7630)
* Calico: add v3.19.1 hashes

* enable liveness probe for calico-kube-controllers

3.19.1

* Calico: drop support for v3.16.x

* Calico: promote v3.18.3 as default
2021-05-25 13:40:50 -07:00
7db76f8809 Add nodeSelctor for other services and node labels before CNI setup (#7613) 2021-05-25 13:40:43 -07:00
918 changed files with 28860 additions and 34322 deletions

View File

@ -18,3 +18,13 @@ skip_list:
# While it can be useful to have these metadata available, they are also available in the existing documentation.
# (Disabled in May 2019)
- '701'
# [role-name] "meta/main.yml" Role name role-name does not match ``^+$`` pattern
# Meta roles in Kubespray don't need proper names
# (Disabled in June 2021)
- 'role-name'
# [var-naming] "defaults/main.yml" File defines variable 'apiVersion' that violates variable naming standards
# In Kubespray we use variables that use camelCase to match their k8s counterparts
# (Disabled in June 2021)
- 'var-naming'

14
.gitignore vendored
View File

@ -3,7 +3,10 @@
**/vagrant_ansible_inventory
*.iml
temp
contrib/offline/offline-files
contrib/offline/offline-files.tar.gz
.idea
.vscode
.tox
.cache
*.bak
@ -11,6 +14,7 @@ temp
*.tfstate.backup
.terraform/
contrib/terraform/aws/credentials.tfvars
.terraform.lock.hcl
/ssh-bastion.conf
**/*.sw[pon]
*~
@ -99,3 +103,13 @@ target/
# virtualenv
venv/
ENV/
# molecule
roles/**/molecule/**/__pycache__/
# macOS
.DS_Store
# Temp location used by our scripts
scripts/tmp/
tmp.md

View File

@ -8,7 +8,7 @@ stages:
- deploy-special
variables:
KUBESPRAY_VERSION: v2.15.1
KUBESPRAY_VERSION: v2.19.0
FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
ANSIBLE_FORCE_COLOR: "true"
@ -16,6 +16,7 @@ variables:
TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
CI_TEST_REGISTRY_MIRROR: "./tests/common/_docker_hub_registry_mirror.yml"
CI_TEST_SETTING: "./tests/common/_kubespray_test_settings.yml"
GS_ACCESS_KEY_ID: $GS_KEY
GS_SECRET_ACCESS_KEY: $GS_SECRET
CONTAINER_ENGINE: docker
@ -26,18 +27,20 @@ variables:
ANSIBLE_INVENTORY: ./inventory/sample/${CI_JOB_NAME}-${BUILD_NUMBER}.ini
IDEMPOT_CHECK: "false"
RESET_CHECK: "false"
REMOVE_NODE_CHECK: "false"
UPGRADE_TEST: "false"
MITOGEN_ENABLE: "false"
ANSIBLE_LOG_LEVEL: "-vv"
RECOVER_CONTROL_PLANE_TEST: "false"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube_control_plane[1:]"
TERRAFORM_14_VERSION: 0.14.10
TERRAFORM_13_VERSION: 0.13.6
TERRAFORM_VERSION: 1.0.8
ANSIBLE_MAJOR_VERSION: "2.11"
before_script:
- ./tests/scripts/rebase.sh
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- python -m pip install -r tests/requirements.txt
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements-${ANSIBLE_MAJOR_VERSION}.txt
- mkdir -p /.ssh
.job: &job
@ -51,6 +54,7 @@ before_script:
.testcases: &testcases
<<: *job
retry: 1
before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- ./tests/scripts/rebase.sh
@ -77,3 +81,4 @@ include:
- .gitlab-ci/terraform.yml
- .gitlab-ci/packet.yml
- .gitlab-ci/vagrant.yml
- .gitlab-ci/molecule.yml

View File

@ -14,7 +14,7 @@ vagrant-validate:
stage: unit-tests
tags: [light]
variables:
VAGRANT_VERSION: 2.2.15
VAGRANT_VERSION: 2.2.19
script:
- ./tests/scripts/vagrant-validate.sh
except: ['triggers', 'master']
@ -23,9 +23,8 @@ ansible-lint:
extends: .job
stage: unit-tests
tags: [light]
# lint every yml/yaml file that looks like it contains Ansible plays
script: |-
grep -Rl '^- hosts: \|^ hosts: ' --include \*.yml --include \*.yaml . | xargs -P 4 -n 25 ansible-lint -v
script:
- ansible-lint -v
except: ['triggers', 'master']
syntax-check:
@ -53,6 +52,7 @@ tox-inventory-builder:
- ./tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements.txt
script:
- pip3 install tox
@ -68,6 +68,13 @@ markdownlint:
script:
- markdownlint $(find . -name '*.md' | grep -vF './.git') --ignore docs/_sidebar.md --ignore contrib/dind/README.md
check-readme-versions:
stage: unit-tests
tags: [light]
image: python:3
script:
- tests/scripts/check_readme_versions.sh
ci-matrix:
stage: unit-tests
tags: [light]

86
.gitlab-ci/molecule.yml Normal file
View File

@ -0,0 +1,86 @@
---
.molecule:
tags: [c3.small.x86]
only: [/^pr-.*$/]
except: ['triggers']
image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
services: []
stage: deploy-part1
before_script:
- tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/molecule_run.sh
after_script:
- chronic ./tests/scripts/molecule_logs.sh
artifacts:
when: always
paths:
- molecule_logs/
# CI template for periodic CI jobs
# Enabled when PERIODIC_CI_ENABLED var is set
.molecule_periodic:
only:
variables:
- $PERIODIC_CI_ENABLED
allow_failure: true
extends: .molecule
molecule_full:
extends: .molecule_periodic
molecule_no_container_engines:
extends: .molecule
script:
- ./tests/scripts/molecule_run.sh -e container-engine
when: on_success
molecule_docker:
extends: .molecule
script:
- ./tests/scripts/molecule_run.sh -i container-engine/cri-dockerd
when: on_success
molecule_containerd:
extends: .molecule
script:
- ./tests/scripts/molecule_run.sh -i container-engine/containerd
when: on_success
molecule_cri-o:
extends: .molecule
stage: deploy-part2
script:
- ./tests/scripts/molecule_run.sh -i container-engine/cri-o
when: on_success
# Stage 3 container engines don't get as much attention so allow them to fail
molecule_kata:
extends: .molecule
stage: deploy-part3
allow_failure: true
script:
- ./tests/scripts/molecule_run.sh -i container-engine/kata-containers
when: on_success
molecule_gvisor:
extends: .molecule
stage: deploy-part3
allow_failure: true
script:
- ./tests/scripts/molecule_run.sh -i container-engine/gvisor
when: on_success
molecule_youki:
extends: .molecule
stage: deploy-part3
allow_failure: true
script:
- ./tests/scripts/molecule_run.sh -i container-engine/youki
when: on_success

View File

@ -2,6 +2,7 @@
.packet:
extends: .testcases
variables:
ANSIBLE_TIMEOUT: "120"
CI_PLATFORM: packet
SSH_USER: kubespray
tags:
@ -22,27 +23,55 @@
allow_failure: true
extends: .packet
packet_ubuntu18-calico-aio:
stage: deploy-part1
extends: .packet_pr
when: on_success
# Future AIO job
# The ubuntu20-calico-aio jobs are meant as early stages to prevent running the full CI if something is horribly broken
packet_ubuntu20-calico-aio:
stage: deploy-part1
extends: .packet_pr
when: on_success
variables:
RESET_CHECK: "true"
packet_ubuntu20-calico-aio-ansible-2_11:
stage: deploy-part1
extends: .packet_periodic
when: on_success
variables:
ANSIBLE_MAJOR_VERSION: "2.11"
RESET_CHECK: "true"
# ### PR JOBS PART2
packet_centos7-flannel-containerd-addons-ha:
packet_ubuntu18-aio-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu20-aio-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu18-calico-aio:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu22-aio-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu22-calico-aio:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_centos7-flannel-addons-ha:
extends: .packet_pr
stage: deploy-part2
when: on_success
variables:
MITOGEN_ENABLE: "true"
packet_centos8-crio:
packet_almalinux8-crio:
extends: .packet_pr
stage: deploy-part2
when: on_success
@ -51,10 +80,13 @@ packet_ubuntu18-crio:
extends: .packet_pr
stage: deploy-part2
when: manual
variables:
MITOGEN_ENABLE: "true"
packet_ubuntu16-canal-kubeadm-ha:
packet_fedora35-crio:
extends: .packet_pr
stage: deploy-part2
when: manual
packet_ubuntu16-canal-ha:
stage: deploy-part2
extends: .packet_periodic
when: on_success
@ -69,27 +101,30 @@ packet_ubuntu16-flannel-ha:
extends: .packet_pr
when: manual
packet_ubuntu16-kube-router-sep:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_ubuntu16-kube-router-svc-proxy:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_debian10-cilium-svc-proxy:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_debian10-containerd:
packet_debian10-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian10-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian11-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian11-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
variables:
MITOGEN_ENABLE: "true"
packet_centos7-calico-ha-once-localhost:
stage: deploy-part2
@ -101,17 +136,27 @@ packet_centos7-calico-ha-once-localhost:
services:
- docker:19.03.9-dind
packet_centos8-kube-ovn:
packet_almalinux8-kube-ovn:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_centos8-calico:
packet_almalinux8-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_fedora32-weave:
packet_rockylinux8-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_almalinux8-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_fedora36-docker-weave:
stage: deploy-part2
extends: .packet_pr
when: on_success
@ -121,14 +166,14 @@ packet_opensuse-canal:
extends: .packet_periodic
when: on_success
packet_ubuntu18-ovn4nfv:
packet_opensuse-docker-cilium:
stage: deploy-part2
extends: .packet_periodic
when: on_success
extends: .packet_pr
when: manual
# ### MANUAL JOBS
packet_ubuntu16-weave-sep:
packet_ubuntu16-docker-weave-sep:
stage: deploy-part2
extends: .packet_pr
when: manual
@ -138,12 +183,18 @@ packet_ubuntu18-cilium-sep:
extends: .packet_pr
when: manual
packet_ubuntu18-flannel-containerd-ha:
packet_ubuntu18-flannel-ha:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_ubuntu18-flannel-containerd-ha-once:
packet_ubuntu18-flannel-ha-once:
stage: deploy-part2
extends: .packet_pr
when: manual
# Calico HA eBPF
packet_almalinux8-calico-ha-ebpf:
stage: deploy-part2
extends: .packet_pr
when: manual
@ -158,34 +209,44 @@ packet_centos7-calico-ha:
extends: .packet_pr
when: manual
packet_centos7-kube-router:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_centos7-multus-calico:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_oracle7-canal-ha:
packet_centos7-canal-ha:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_fedora33-calico:
packet_fedora36-docker-calico:
stage: deploy-part2
extends: .packet_periodic
when: on_success
variables:
MITOGEN_ENABLE: "true"
RESET_CHECK: "true"
packet_fedora35-calico-selinux:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_fedora35-calico-swap-selinux:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_amazon-linux-2-aio:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_fedora32-kube-ovn-containerd:
packet_almalinux8-calico-nodelocaldns-secondary:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_fedora36-kube-ovn:
stage: deploy-part2
extends: .packet_periodic
when: on_success
@ -199,23 +260,46 @@ packet_centos7-weave-upgrade-ha:
when: on_success
variables:
UPGRADE_TEST: basic
MITOGEN_ENABLE: "false"
packet_debian9-calico-upgrade:
packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha:
stage: deploy-part3
extends: .packet_periodic
when: on_success
variables:
UPGRADE_TEST: basic
# Calico HA Wireguard
packet_ubuntu20-calico-ha-wireguard:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_debian11-calico-upgrade:
stage: deploy-part3
extends: .packet_pr
when: on_success
variables:
UPGRADE_TEST: graceful
MITOGEN_ENABLE: "false"
packet_debian9-calico-upgrade-once:
packet_almalinux8-calico-remove-node:
stage: deploy-part3
extends: .packet_pr
when: on_success
variables:
REMOVE_NODE_CHECK: "true"
REMOVE_NODE_NAME: "instance-3"
packet_ubuntu20-calico-etcd-kubeadm:
stage: deploy-part3
extends: .packet_pr
when: on_success
packet_debian11-calico-upgrade-once:
stage: deploy-part3
extends: .packet_periodic
when: on_success
variables:
UPGRADE_TEST: graceful
MITOGEN_ENABLE: "false"
packet_ubuntu18-calico-ha-recover:
stage: deploy-part3

View File

@ -11,6 +11,6 @@ shellcheck:
- cp shellcheck-"${SHELLCHECK_VERSION}"/shellcheck /usr/bin/
- shellcheck --version
script:
# Run shellcheck for all *.sh except contrib/
- find . -name '*.sh' -not -path './contrib/*' -not -path './.git/*' | xargs shellcheck --severity error
# Run shellcheck for all *.sh
- find . -name '*.sh' -not -path './.git/*' | xargs shellcheck --severity error
except: ['triggers', 'master']

View File

@ -12,13 +12,13 @@
# Prepare inventory
- cp contrib/terraform/$PROVIDER/sample-inventory/cluster.tfvars .
- ln -s contrib/terraform/$PROVIDER/hosts
- terraform init contrib/terraform/$PROVIDER
- terraform -chdir="contrib/terraform/$PROVIDER" init
# Copy SSH keypair
- mkdir -p ~/.ssh
- echo "$PACKET_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa
- chmod 400 ~/.ssh/id_rsa
- echo "$PACKET_PUBLIC_KEY" | base64 -d > ~/.ssh/id_rsa.pub
- mkdir -p group_vars
- mkdir -p contrib/terraform/$PROVIDER/group_vars
# Random subnet to avoid routing conflicts
- export TF_VAR_subnet_cidr="10.$(( $RANDOM % 256 )).$(( $RANDOM % 256 )).0/24"
@ -28,8 +28,8 @@
tags: [light]
only: ['master', /^pr-.*$/]
script:
- terraform validate -var-file=cluster.tfvars contrib/terraform/$PROVIDER
- terraform fmt -check -diff contrib/terraform/$PROVIDER
- terraform -chdir="contrib/terraform/$PROVIDER" validate
- terraform -chdir="contrib/terraform/$PROVIDER" fmt -check -diff
.terraform_apply:
extends: .terraform_install
@ -53,92 +53,51 @@
# Cleanup regardless of exit code
- chronic ./tests/scripts/testcases_cleanup.sh
tf-0.13.x-validate-openstack:
tf-validate-openstack:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_13_VERSION
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.13.x-validate-packet:
tf-validate-metal:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_13_VERSION
PROVIDER: packet
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: metal
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.13.x-validate-aws:
tf-validate-aws:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_13_VERSION
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: aws
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.13.x-validate-exoscale:
tf-validate-exoscale:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_13_VERSION
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: exoscale
tf-0.13.x-validate-vsphere:
tf-validate-vsphere:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_13_VERSION
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: vsphere
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.13.x-validate-upcloud:
tf-validate-upcloud:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_13_VERSION
PROVIDER: upcloud
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.14.x-validate-openstack:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.14.x-validate-packet:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.14.x-validate-aws:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: aws
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.14.x-validate-exoscale:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: exoscale
tf-0.14.x-validate-vsphere:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: vsphere
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.14.x-validate-upcloud:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: upcloud
CLUSTER: $CI_COMMIT_REF_NAME
# tf-packet-ubuntu16-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: $TERRAFORM_14_VERSION
# TF_VERSION: $TERRAFORM_VERSION
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
@ -152,7 +111,7 @@ tf-0.14.x-validate-upcloud:
# tf-packet-ubuntu18-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: $TERRAFORM_14_VERSION
# TF_VERSION: $TERRAFORM_VERSION
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
@ -187,10 +146,6 @@ tf-0.14.x-validate-upcloud:
OS_INTERFACE: public
OS_IDENTITY_API_VERSION: "3"
TF_VAR_router_id: "ab95917c-41fb-4881-b507-3a6dfe9403df"
# Since ELASTX is in Stockholm, Mitogen helps with latency
MITOGEN_ENABLE: "false"
# Mitogen doesn't support interpreter discovery yet
ANSIBLE_PYTHON_INTERPRETER: "/usr/bin/python3"
tf-elastx_cleanup:
stage: unit-tests
@ -207,9 +162,10 @@ tf-elastx_ubuntu18-calico:
extends: .terraform_apply
stage: deploy-part3
when: on_success
allow_failure: true
variables:
<<: *elastx_variables
TF_VERSION: $TERRAFORM_14_VERSION
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
@ -235,44 +191,45 @@ tf-elastx_ubuntu18-calico:
TF_VAR_image: ubuntu-18.04-server-latest
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
# OVH voucher expired, commenting job until things are sorted out
tf-ovh_cleanup:
stage: unit-tests
tags: [light]
image: python
environment: ovh
variables:
<<: *ovh_variables
before_script:
- pip install -r scripts/openstack-cleanup/requirements.txt
script:
- ./scripts/openstack-cleanup/main.py
# tf-ovh_cleanup:
# stage: unit-tests
# tags: [light]
# image: python
# environment: ovh
# variables:
# <<: *ovh_variables
# before_script:
# - pip install -r scripts/openstack-cleanup/requirements.txt
# script:
# - ./scripts/openstack-cleanup/main.py
tf-ovh_ubuntu18-calico:
extends: .terraform_apply
when: on_success
environment: ovh
variables:
<<: *ovh_variables
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
SSH_USER: ubuntu
TF_VAR_number_of_k8s_masters: "0"
TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
TF_VAR_number_of_etcd: "0"
TF_VAR_number_of_k8s_nodes: "0"
TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
TF_VAR_number_of_bastions: "0"
TF_VAR_number_of_k8s_masters_no_etcd: "0"
TF_VAR_use_neutron: "0"
TF_VAR_floatingip_pool: "Ext-Net"
TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
TF_VAR_network_name: "Ext-Net"
TF_VAR_flavor_k8s_master: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
TF_VAR_image: "Ubuntu 18.04"
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
# tf-ovh_ubuntu18-calico:
# extends: .terraform_apply
# when: on_success
# environment: ovh
# variables:
# <<: *ovh_variables
# TF_VERSION: $TERRAFORM_VERSION
# PROVIDER: openstack
# CLUSTER: $CI_COMMIT_REF_NAME
# ANSIBLE_TIMEOUT: "60"
# SSH_USER: ubuntu
# TF_VAR_number_of_k8s_masters: "0"
# TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
# TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
# TF_VAR_number_of_etcd: "0"
# TF_VAR_number_of_k8s_nodes: "0"
# TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
# TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
# TF_VAR_number_of_bastions: "0"
# TF_VAR_number_of_k8s_masters_no_etcd: "0"
# TF_VAR_use_neutron: "0"
# TF_VAR_floatingip_pool: "Ext-Net"
# TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
# TF_VAR_network_name: "Ext-Net"
# TF_VAR_flavor_k8s_master: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
# TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
# TF_VAR_image: "Ubuntu 18.04"
# TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'

View File

@ -1,21 +1,5 @@
---
molecule_tests:
tags: [c3.small.x86]
only: [/^pr-.*$/]
except: ['triggers']
image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
services: []
stage: deploy-part1
before_script:
- tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/molecule_run.sh
.vagrant:
extends: .testcases
variables:
@ -31,12 +15,14 @@ molecule_tests:
before_script:
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/testcases_run.sh
after_script:
- chronic ./tests/scripts/testcases_cleanup.sh
allow_failure: true
vagrant_ubuntu18-calico-dual-stack:
stage: deploy-part2
@ -57,3 +43,24 @@ vagrant_ubuntu20-flannel:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_ubuntu16-kube-router-sep:
stage: deploy-part2
extends: .vagrant
when: manual
# Service proxy test fails connectivity testing
vagrant_ubuntu16-kube-router-svc-proxy:
stage: deploy-part2
extends: .vagrant
when: manual
vagrant_fedora35-kube-router:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_centos7-kube-router:
stage: deploy-part2
extends: .vagrant
when: manual

View File

@ -1,2 +1,3 @@
---
MD013: false
MD029: false

48
.pre-commit-config.yaml Normal file
View File

@ -0,0 +1,48 @@
---
repos:
- repo: https://github.com/adrienverge/yamllint.git
rev: v1.27.1
hooks:
- id: yamllint
args: [--strict]
- repo: https://github.com/markdownlint/markdownlint
rev: v0.11.0
hooks:
- id: markdownlint
args: [ -r, "~MD013,~MD029" ]
exclude: "^.git"
- repo: local
hooks:
- id: ansible-lint
name: ansible-lint
entry: ansible-lint -v
language: python
pass_filenames: false
additional_dependencies:
- .[community]
- id: ansible-syntax-check
name: ansible-syntax-check
entry: env ANSIBLE_INVENTORY=inventory/local-tests.cfg ANSIBLE_REMOTE_USER=root ANSIBLE_BECOME="true" ANSIBLE_BECOME_USER=root ANSIBLE_VERBOSITY="3" ansible-playbook --syntax-check
language: python
files: "^cluster.yml|^upgrade-cluster.yml|^reset.yml|^extra_playbooks/upgrade-only-k8s.yml"
- id: tox-inventory-builder
name: tox-inventory-builder
entry: bash -c "cd contrib/inventory_builder && tox"
language: python
pass_filenames: false
- id: check-readme-versions
name: check-readme-versions
entry: tests/scripts/check_readme_versions.sh
language: script
pass_filenames: false
- id: ci-matrix
name: ci-matrix
entry: tests/scripts/md-table/test.sh
language: script
pass_filenames: false

View File

@ -6,11 +6,22 @@
It is recommended to use filter to manage the GitHub email notification, see [examples for setting filters to Kubernetes Github notifications](https://github.com/kubernetes/community/blob/master/communication/best-practices.md#examples-for-setting-filters-to-kubernetes-github-notifications)
To install development dependencies you can use `pip install -r tests/requirements.txt`
To install development dependencies you can set up a python virtual env with the necessary dependencies:
```ShellSession
virtualenv venv
source venv/bin/activate
pip install -r tests/requirements.txt
```
#### Linting
Kubespray uses `yamllint` and `ansible-lint`. To run them locally use `yamllint .` and `ansible-lint`
Kubespray uses [pre-commit](https://pre-commit.com) hook configuration to run several linters, please install this tool and use it to run validation tests before submitting a PR.
```ShellSession
pre-commit install
pre-commit run -a # To run pre-commit hook on all files in the repository, even if they were not modified
```
#### Molecule
@ -27,5 +38,9 @@ Vagrant with VirtualBox or libvirt driver helps you to quickly spin test cluster
1. Submit an issue describing your proposed change to the repo in question.
2. The [repo owners](OWNERS) will respond to your issue promptly.
3. Fork the desired repo, develop and test your code changes.
4. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
5. Submit a pull request.
4. Install [pre-commit](https://pre-commit.com) and set it up in your development repo.
5. Address any pre-commit validation failures.
6. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
7. Submit a pull request.
8. Work with the reviewers on their suggestions.
9. Ensure you rebase to the HEAD of your target branch and squash unnecessary commits (<https://blog.carbonfive.com/always-squash-and-rebase-your-git-commits/>) before the final merge of your contribution.

View File

@ -1,30 +1,37 @@
# Use immutable image tags rather than mutable tags (like ubuntu:18.04)
FROM ubuntu:bionic-20200807
# Use immutable image tags rather than mutable tags (like ubuntu:20.04)
FROM ubuntu:focal-20220531
ARG ARCH=amd64
ARG TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt update -y \
&& apt install -y \
libssl-dev python3-dev sshpass apt-transport-https jq moreutils \
ca-certificates curl gnupg2 software-properties-common python3-pip rsync \
ca-certificates curl gnupg2 software-properties-common python3-pip unzip rsync git \
&& rm -rf /var/lib/apt/lists/*
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
"deb [arch=$ARCH] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" \
&& apt update -y && apt-get install --no-install-recommends -y docker-ce \
&& rm -rf /var/lib/apt/lists/*
# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
# (and potentially other packages)
# See: https://github.com/pypa/pip/issues/10219
ENV LANG=C.UTF-8
WORKDIR /kubespray
COPY . .
RUN /usr/bin/python3 -m pip install pip -U \
&& /usr/bin/python3 -m pip install -r tests/requirements.txt \
&& python3 -m pip install -r requirements.txt \
RUN /usr/bin/python3 -m pip install --no-cache-dir pip -U \
&& /usr/bin/python3 -m pip install --no-cache-dir -r tests/requirements.txt \
&& python3 -m pip install --no-cache-dir -r requirements.txt \
&& update-alternatives --install /usr/bin/python python /usr/bin/python3 1
RUN KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
&& curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBE_VERSION/bin/linux/amd64/kubectl \
&& curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBE_VERSION/bin/linux/$ARCH/kubectl \
&& chmod a+x kubectl \
&& mv kubectl /usr/local/bin/kubectl
# Some tools like yamllint need this
ENV LANG=C.UTF-8

View File

@ -1,5 +1,7 @@
mitogen:
ansible-playbook -c local mitogen.yml -vv
@echo Mitogen support is deprecated.
@echo Please run the following command manually:
@echo ansible-playbook -c local mitogen.yml -vv
clean:
rm -rf dist/
rm *.retry

View File

@ -4,15 +4,20 @@ aliases:
- chadswen
- mirwan
- miouge1
- woopstar
- luckysb
- floryut
- oomichi
- cristicalin
kubespray-reviewers:
- holmsten
- bozzo
- eppo
- oomichi
- jayonlau
- cristicalin
- liupeng0518
kubespray-emeritus_approvers:
- riverzhang
- atoms
- ant31
- woopstar

View File

@ -5,7 +5,7 @@
If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Packet](docs/packet.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Equinix Metal](docs/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
@ -19,10 +19,10 @@ To deploy the cluster you can use :
#### Usage
```ShellSession
# Install dependencies from ``requirements.txt``
sudo pip3 install -r requirements.txt
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
then run the following steps:
```ShellSession
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
@ -32,7 +32,7 @@ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inv
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s_cluster.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
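# A typical deploy invocation might look like this (sketch; adjust the inventory path and options to your environment):
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml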
@ -57,10 +57,10 @@ A simple way to ensure you get all the correct version of Ansible is to use the
You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/) to get the inventory and ssh key into the container, like this:
```ShellSession
docker pull quay.io/kubespray/kubespray:v2.15.1
docker pull quay.io/kubespray/kubespray:v2.19.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.15.1 bash
quay.io/kubespray/kubespray:v2.19.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```
@ -75,10 +75,11 @@ python -V && pip -V
```
If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install the necessary requirements
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
then run the following step:
```ShellSession
sudo pip install -r requirements.txt
vagrant up
```
@ -105,64 +106,79 @@ vagrant up
- [AWS](docs/aws.md)
- [Azure](docs/azure.md)
- [vSphere](docs/vsphere.md)
- [Packet Host](docs/packet.md)
- [Equinix Metal](docs/equinix-metal.md)
- [Large deployments](docs/large-deployments.md)
- [Adding/replacing a node](docs/nodes.md)
- [Upgrades basics](docs/upgrades.md)
- [Air-Gap installation](docs/offline-environment.md)
- [NTP](docs/ntp.md)
- [Hardening](docs/hardening.md)
- [Roadmap](docs/roadmap.md)
## Supported Linux Distributions
- **Flatcar Container Linux by Kinvolk**
- **Debian** Buster, Jessie, Stretch, Wheezy
- **Ubuntu** 16.04, 18.04, 20.04
- **CentOS/RHEL** 7, [8](docs/centos8.md)
- **Fedora** 32, 33
- **Fedora CoreOS** (experimental: see [fcos Note](docs/fcos.md))
- **Debian** Bullseye, Buster, Jessie, Stretch
- **Ubuntu** 16.04, 18.04, 20.04, 22.04
- **CentOS/RHEL** 7, [8](docs/centos.md#centos-8)
- **Fedora** 35, 36
- **Fedora CoreOS** (see [fcos Note](docs/fcos.md))
- **openSUSE** Leap 15.x/Tumbleweed
- **Oracle Linux** 7, [8](docs/centos8.md)
- **Alma Linux** [8](docs/centos8.md)
- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/amazonlinux.md))
- **Oracle Linux** 7, [8](docs/centos.md#centos-8)
- **Alma Linux** [8](docs/centos.md#centos-8)
- **Rocky Linux** [8](docs/centos.md#centos-8)
- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/kylinlinux.md))
- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/amazonlinux.md))
Note: Upstart/SysV init based OS types are not supported.
## Supported Components
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.20.7
- [etcd](https://github.com/coreos/etcd) v3.4.13
- [docker](https://www.docker.com/) v19.03 (see note)
- [containerd](https://containerd.io/) v1.4.4
- [cri-o](http://cri-o.io/) v1.20 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.24.3
- [etcd](https://github.com/etcd-io/etcd) v3.5.4
- [docker](https://www.docker.com/) v20.10 (see note)
- [containerd](https://containerd.io/) v1.6.6
- [cri-o](http://cri-o.io/) v1.24 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v0.9.1
- [calico](https://github.com/projectcalico/calico) v3.17.4
- [cni-plugins](https://github.com/containernetworking/plugins) v1.1.1
- [calico](https://github.com/projectcalico/calico) v3.23.3
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.8.9
- [flanneld](https://github.com/coreos/flannel) v0.13.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.6.2
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.2.2
- [multus](https://github.com/intel/multus-cni) v3.7.0
- [ovn4nfv](https://github.com/opnfv/ovn4nfv-k8s-plugin) v1.1.0
- [cilium](https://github.com/cilium/cilium) v1.11.7
- [flannel](https://github.com/flannel-io/flannel) v0.18.1
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.9.7
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.5.1
- [multus](https://github.com/intel/multus-cni) v3.8
- [weave](https://github.com/weaveworks/weave) v2.8.1
- [kube-vip](https://github.com/kube-vip/kube-vip) v0.4.2
- Application
- [ambassador](https://github.com/datawire/ambassador): v1.5
- [cert-manager](https://github.com/jetstack/cert-manager) v1.9.0
- [coredns](https://github.com/coredns/coredns) v1.8.6
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.3.0
- [krew](https://github.com/kubernetes-sigs/krew) v0.4.3
- [argocd](https://argoproj.github.io/) v2.4.7
- [helm](https://helm.sh/) v3.9.2
- [metallb](https://metallb.universe.tf/) v0.12.1
- [registry](https://github.com/distribution/distribution) v2.8.1
- Storage Plugin
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v0.16.1
- [coredns](https://github.com/coredns/coredns) v1.7.0
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.43.0
- [aws-ebs-csi-plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) v0.5.0
- [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) v1.10.0
- [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) v1.22.0
- [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) v1.4.0
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner) v0.0.22
- [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) v2.4.0
## Container Runtime Notes
- The list of available docker versions is 18.09, 19.03 and 20.10. The recommended docker version is 19.03. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pinning.
- The list of available docker versions is 18.09, 19.03 and 20.10. The recommended docker version is 20.10. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pinning.
- The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)
## Requirements
- **Minimum required version of Kubernetes is v1.19**
- **Ansible v2.9.x, Jinja 2.11+ and python-netaddr are installed on the machine that will run Ansible commands, Ansible 2.10.x is experimentally supported for now**
- **Minimum required version of Kubernetes is v1.22**
- **Ansible v2.11+, Jinja 2.11+ and python-netaddr are installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
- If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
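Both forwarding requirements above can be verified on a target server with `sysctl`; a minimal sketch (the key names are standard Linux kernel sysctls, not taken from this changeset):
```ShellSession
# a value of 1 means forwarding is enabled
sysctl net.ipv4.ip_forward
sysctl net.ipv6.conf.all.forwarding
```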
@ -195,8 +211,6 @@ You can choose between 10 network plugins. (default: `calico`, except Vagrant us
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [ovn4nfv](docs/ovn4nfv.md): [ovn4nfv-k8s-plugins](https://github.com/opnfv/ovn4nfv-k8s-plugin) is the network controller, OVS agent and CNI server to offer basic SFC and OVN overlay networking.
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
(Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
@ -217,8 +231,6 @@ See also [Network checker](docs/netcheck.md).
## Ingress Plugins
- [ambassador](docs/ambassador.md): the Ambassador Ingress Controller and API gateway.
- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
- [metallb](docs/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
@ -239,6 +251,6 @@ See also [Network checker](docs/netcheck.md).
[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/pipeline.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Packet](https://www.packet.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).
CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Equinix Metal](https://metal.equinix.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).
See the [test matrix](docs/test_cases.md) for details.

View File

@ -2,17 +2,18 @@
The Kubespray Project is released on an as-needed basis. The process is as follows:
1. An issue is opened proposing a new release with a changelog since the last release
1. An issue is opened proposing a new release with a changelog since the last release. Please see [a good sample issue](https://github.com/kubernetes-sigs/kubespray/issues/8325)
2. At least one of the [approvers](OWNERS_ALIASES) must approve this release
3. The `kube_version_min_required` variable is set to `n-1`
4. Remove hashes for [EOL versions](https://github.com/kubernetes/sig-release/blob/master/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
5. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
6. An approver creates a release branch in the form `release-X.Y`
7. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) docker images are built and tagged
8. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
9. The release issue is closed
10. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
11. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
4. Remove hashes for [EOL versions](https://github.com/kubernetes/website/blob/main/content/en/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
5. Create the release note with [Kubernetes Release Notes Generator](https://github.com/kubernetes/release/blob/master/cmd/release-notes/README.md). See the following `Release note creation` section for the details.
6. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
7. An approver creates a release branch in the form `release-X.Y`
8. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) container images are built and tagged. See the following `Container image creation` section for the details.
9. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
10. The release issue is closed
11. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
12. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
## Major/minor releases and milestones
@ -46,3 +47,37 @@ The Kubespray Project is released on an as-needed basis. The process is as follo
then Kubespray v2.1.0 may be bound to only minor changes to `kube_version`, like v1.5.1
and *any* changes to other components, like etcd v4, or calico 1.2.3.
And Kubespray v3.x.x shall be bound to `kube_version: 2.x.x` respectively.
## Release note creation
You can create a release note with:
```shell
export GITHUB_TOKEN=<your-github-token>
export ORG=kubernetes-sigs
export REPO=kubespray
release-notes --start-sha <The start commit-id> --end-sha <The end commit-id> --dependencies=false --output=/tmp/kubespray-release-note --required-author=""
```
If the release note file (/tmp/kubespray-release-note) contains "### Uncategorized" pull requests, those pull requests don't have a valid kind label (`kind/feature`, etc.).
It is necessary to put a valid label on each such pull request and run the above release-notes command again to get a better release note.
## Container image creation
The container image `quay.io/kubespray/kubespray:vX.Y.Z` can be created from the Dockerfile in the kubespray root directory:
```shell
cd kubespray/
nerdctl build -t quay.io/kubespray/kubespray:vX.Y.Z .
nerdctl push quay.io/kubespray/kubespray:vX.Y.Z
```
The container image `quay.io/kubespray/vagrant:vX.Y.Z` can be created from build.sh in test-infra/vagrant-docker/:
```shell
cd kubespray/test-infra/vagrant-docker/
./build.sh vX.Y.Z
```
Please note that the above operations require permission to push container images into quay.io/kubespray/.
If you don't have the permission, please ask for it on the #kubespray-dev channel.

Vagrantfile vendored
View File

@ -26,9 +26,12 @@ SUPPORTED_OS = {
"centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
"centos8" => {box: "centos/8", user: "vagrant"},
"centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
"fedora32" => {box: "fedora/32-cloud-base", user: "vagrant"},
"fedora33" => {box: "fedora/33-cloud-base", user: "vagrant"},
"opensuse" => {box: "bento/opensuse-leap-15.2", user: "vagrant"},
"almalinux8" => {box: "almalinux/8", user: "vagrant"},
"almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"},
"rockylinux8" => {box: "generic/rocky8", user: "vagrant"},
"fedora35" => {box: "fedora/35-cloud-base", user: "vagrant"},
"fedora36" => {box: "fedora/36-cloud-base", user: "vagrant"},
"opensuse" => {box: "opensuse/Leap-15.3.x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
"oraclelinux8" => {box: "generic/oracle8", user: "vagrant"},
@ -53,9 +56,9 @@ $subnet_ipv6 ||= "fd3c:b398:0698:0756"
$os ||= "ubuntu1804"
$network_plugin ||= "flannel"
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
$multi_networking ||= false
$multi_networking ||= "False"
$download_run_once ||= "True"
$download_force_cache ||= "True"
$download_force_cache ||= "False"
# The first three nodes are etcd servers
$etcd_instances ||= $num_instances
# The first two nodes are kube masters
@ -68,9 +71,12 @@ $kube_node_instances_with_disks_size ||= "20G"
$kube_node_instances_with_disks_number ||= 2
$override_disk_size ||= false
$disk_size ||= "20GB"
$local_path_provisioner_enabled ||= false
$local_path_provisioner_enabled ||= "False"
$local_path_provisioner_claim_root ||= "/opt/local-path-provisioner/"
$libvirt_nested ||= false
# boolean or string (e.g. "-vvv")
$ansible_verbosity ||= false
$ansible_tags ||= ENV['VAGRANT_ANSIBLE_TAGS'] || ""
$playbook ||= "cluster.yml"
@ -167,7 +173,7 @@ Vagrant.configure("2") do |config|
# always make /dev/sd{a/b/c} so that CI can ensure that
# virtualbox and libvirt will have the same devices to use for OSDs
(1..$kube_node_instances_with_disks_number).each do |d|
lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "ide"
lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "scsi"
end
end
end
@ -238,9 +244,11 @@ Vagrant.configure("2") do |config|
}
# Only execute the Ansible provisioner once, when all the machines are up and ready.
# And limit the action to gathering facts, the full playbook is going to be run by testcases_run.sh
if i == $num_instances
node.vm.provision "ansible" do |ansible|
ansible.playbook = $playbook
ansible.verbose = $ansible_verbosity
$ansible_inventory_path = File.join( $inventory, "hosts.ini")
if File.exist?($ansible_inventory_path)
ansible.inventory_path = $ansible_inventory_path
@ -250,7 +258,9 @@ Vagrant.configure("2") do |config|
ansible.host_key_checking = false
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
ansible.host_vars = host_vars
#ansible.tags = ['download']
if $ansible_tags != ""
ansible.tags = [$ansible_tags]
end
ansible.groups = {
"etcd" => ["#{$instance_name_prefix}-[1:#{$etcd_instances}]"],
"kube_control_plane" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],

View File

@ -1,9 +1,8 @@
[ssh_connection]
pipelining=True
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
ansible_ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
[defaults]
strategy_plugins = plugins/mitogen/ansible_mitogen/plugins/strategy
# https://github.com/ansible/ansible/issues/56930 (to ignore group names with - and .)
force_valid_group_names = ignore
@ -11,11 +10,11 @@ host_key_checking=False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp
fact_caching_timeout = 7200
fact_caching_timeout = 86400
stdout_callback = default
display_skipped_hosts = no
library = ./library
callback_whitelist = profile_tasks
callbacks_enabled = profile_tasks,ara_default
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg

View File

@ -3,13 +3,14 @@
gather_facts: false
become: no
vars:
minimal_ansible_version: 2.9.0
maximal_ansible_version: 2.11.0
minimal_ansible_version: 2.11.0
maximal_ansible_version: 2.13.0
ansible_connection: local
tags: always
tasks:
- name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"
assert:
msg: "Ansible must be between {{ minimal_ansible_version }} and {{ maximal_ansible_version }}"
msg: "Ansible must be between {{ minimal_ansible_version }} and {{ maximal_ansible_version }} exclusive"
that:
- ansible_version.string is version(minimal_ansible_version, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
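A quick local check against the range asserted above (a sketch):
```ShellSession
# the playbook requires 2.11.0 <= Ansible version < 2.13.0
ansible --version | head -n 1
```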

View File

@ -32,7 +32,7 @@
roles:
- { role: kubespray-defaults }
- { role: kubernetes/preinstall, tags: preinstall }
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine }
- { role: download, tags: download, when: "not skip_downloads" }
- hosts: etcd
@ -46,7 +46,7 @@
vars:
etcd_cluster_setup: true
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: not etcd_kubeadm_enabled| default(false)
when: etcd_deployment_type != "kubeadm"
- hosts: k8s_cluster
gather_facts: False
@ -59,7 +59,7 @@
vars:
etcd_cluster_setup: false
etcd_events_cluster_setup: false
when: not etcd_kubeadm_enabled| default(false)
when: etcd_deployment_type != "kubeadm"
- hosts: k8s_cluster
gather_facts: False
@ -86,8 +86,8 @@
roles:
- { role: kubespray-defaults }
- { role: kubernetes/kubeadm, tags: kubeadm}
- { role: network_plugin, tags: network }
- { role: kubernetes/node-label, tags: node-label }
- { role: network_plugin, tags: network }
- hosts: calico_rr
gather_facts: False
@ -116,16 +116,10 @@
- { role: kubernetes-apps/policy_controller, tags: policy-controller }
- { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
- { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
- hosts: kube_control_plane
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
roles:
- { role: kubespray-defaults }
- { role: kubernetes-apps, tags: apps }
- hosts: k8s_cluster
- name: Apply resolv.conf changes now that cluster DNS is up
hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"

View File

@ -69,7 +69,7 @@ class SearchEC2Tags(object):
hosts[group].append(dns_name)
hosts['_meta']['hostvars'][dns_name] = ansible_host
hosts['k8s_cluster'] = {'children':['kube_control_plane', 'kube_node']}
print(json.dumps(hosts, sort_keys=True, indent=2))

View File

@ -47,6 +47,10 @@ If you need to delete all resources from a resource group, simply call:
**WARNING** this really deletes everything from your resource group, including everything that was later created by you!
## Installing Ansible and the dependencies
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
## Generating an inventory for kubespray
After you have applied the templates, you can generate an inventory with this call:
@ -59,6 +63,5 @@ It will create the file ./inventory which can then be used with kubespray, e.g.:
```shell
cd kubespray-root-dir
sudo pip3 install -r requirements.txt
ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all/all.yml" cluster.yml
```

View File

@ -12,3 +12,4 @@
template:
src: inventory.j2
dest: "{{ playbook_dir }}/inventory"
mode: 0644

View File

@ -22,8 +22,10 @@
template:
src: inventory.j2
dest: "{{ playbook_dir }}/inventory"
mode: 0644
- name: Generate Load Balancer variables
template:
src: loadbalancer_vars.j2
dest: "{{ playbook_dir }}/loadbalancer_vars.yml"
mode: 0644

View File

@ -8,11 +8,13 @@
path: "{{ base_dir }}"
state: directory
recurse: true
mode: 0755
- name: Store json files in base_dir
template:
src: "{{ item }}"
dest: "{{ base_dir }}/{{ item }}"
mode: 0644
with_items:
- network.json
- storage.json

View File

@ -35,6 +35,7 @@
path-exclude=/usr/share/doc/*
path-include=/usr/share/doc/*/copyright
dest: /etc/dpkg/dpkg.cfg.d/01_nodoc
mode: 0644
when:
- ansible_os_family == 'Debian'
@ -63,6 +64,7 @@
copy:
content: "{{ distro_user }} ALL=(ALL) NOPASSWD:ALL"
dest: "/etc/sudoers.d/{{ distro_user }}"
mode: 0640
- name: Add my pubkey to "{{ distro_user }}" user authorized keys
authorized_key:

View File

@ -17,7 +17,7 @@ pass_or_fail() {
test_distro() {
local distro=${1:?};shift
local extra="${*:-}"
local prefix="$distro[${extra}]}"
local prefix="${distro[${extra}]}"
ansible-playbook -i hosts dind-cluster.yaml -e node_distro=$distro
pass_or_fail "$prefix: dind-nodes" || return 1
(cd ../..
@ -71,15 +71,15 @@ for spec in ${SPECS}; do
echo "Loading file=${spec} ..."
. ${spec} || continue
: ${DISTROS:?} || continue
echo "DISTROS=${DISTROS[@]}"
echo "DISTROS:" "${DISTROS[@]}"
echo "EXTRAS->"
printf " %s\n" "${EXTRAS[@]}"
let n=1
for distro in ${DISTROS[@]}; do
for distro in "${DISTROS[@]}"; do
for extra in "${EXTRAS[@]:-NULL}"; do
# Magic value to let this for run once:
[[ ${extra} == NULL ]] && unset extra
docker rm -f ${NODES[@]}
docker rm -f "${NODES[@]}"
printf -v file_out "%s/%s-%02d.out" ${OUTPUT_DIR} ${spec} $((n++))
{
info "${distro}[${extra}] START: file_out=${file_out}"

View File

@ -48,7 +48,7 @@ ROLES = ['all', 'kube_control_plane', 'kube_node', 'etcd', 'k8s_cluster',
'calico_rr']
PROTECTED_NAMES = ROLES
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'print_hostnames',
'load']
'load', 'add']
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
'0': False, 'no': False, 'false': False, 'off': False}
yaml = YAML()
@ -82,22 +82,43 @@ class KubesprayInventory(object):
def __init__(self, changed_hosts=None, config_file=None):
self.config_file = config_file
self.yaml_config = {}
if self.config_file:
loadPreviousConfig = False
printHostnames = False
# See whether there are any commands to process
if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
if changed_hosts[0] == "add":
loadPreviousConfig = True
changed_hosts = changed_hosts[1:]
elif changed_hosts[0] == "print_hostnames":
loadPreviousConfig = True
printHostnames = True
else:
self.parse_command(changed_hosts[0], changed_hosts[1:])
sys.exit(0)
# If the user wants to remove a node, we need to load the config anyway
if changed_hosts and changed_hosts[0][0] == "-":
loadPreviousConfig = True
if self.config_file and loadPreviousConfig: # Load previous YAML file
try:
self.hosts_file = open(config_file, 'r')
self.yaml_config = yaml.load_all(self.hosts_file)
except OSError:
pass
self.yaml_config = yaml.load(self.hosts_file)
except OSError as e:
# I am assuming we are catching "cannot open file" exceptions
print(e)
sys.exit(1)
if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
self.parse_command(changed_hosts[0], changed_hosts[1:])
if printHostnames:
self.print_hostnames()
sys.exit(0)
self.ensure_required_groups(ROLES)
if changed_hosts:
changed_hosts = self.range2ips(changed_hosts)
self.hosts = self.build_hostnames(changed_hosts)
self.hosts = self.build_hostnames(changed_hosts,
loadPreviousConfig)
self.purge_invalid_hosts(self.hosts.keys(), PROTECTED_NAMES)
self.set_all(self.hosts)
self.set_k8s_cluster()
@ -158,17 +179,29 @@ class KubesprayInventory(object):
except IndexError:
raise ValueError("Host name must end in an integer")
def build_hostnames(self, changed_hosts):
# Keeps already specified hosts,
# and adds or removes the hosts provided as an argument
def build_hostnames(self, changed_hosts, loadPreviousConfig=False):
existing_hosts = OrderedDict()
highest_host_id = 0
try:
for host in self.yaml_config['all']['hosts']:
existing_hosts[host] = self.yaml_config['all']['hosts'][host]
host_id = self.get_host_id(host)
if host_id > highest_host_id:
highest_host_id = host_id
except Exception:
pass
# Load already existing hosts from the YAML
if loadPreviousConfig:
try:
for host in self.yaml_config['all']['hosts']:
# Read configuration of an existing host
hostConfig = self.yaml_config['all']['hosts'][host]
existing_hosts[host] = hostConfig
# If the existing host seems
# to have been created automatically, detect its ID
if host.startswith(HOST_PREFIX):
host_id = self.get_host_id(host)
if host_id > highest_host_id:
highest_host_id = host_id
except Exception as e:
# I am assuming we are catching automatically
# created hosts without IDs
print(e)
sys.exit(1)
# FIXME(mattymo): Fix condition where delete then add reuses highest id
next_host_id = highest_host_id + 1
@ -176,6 +209,7 @@ class KubesprayInventory(object):
all_hosts = existing_hosts.copy()
for host in changed_hosts:
# Delete the host from config if the hostname/IP has a "-" prefix
if host[0] == "-":
realhost = host[1:]
if self.exists_hostname(all_hosts, realhost):
@ -184,6 +218,8 @@ class KubesprayInventory(object):
elif self.exists_ip(all_hosts, realhost):
self.debug("Marked {0} for deletion.".format(realhost))
self.delete_host_by_ip(all_hosts, realhost)
# Host/Argument starts with a digit,
# then we assume it's an IP address
elif host[0].isdigit():
if ',' in host:
ip, access_ip = host.split(',')
@ -203,11 +239,15 @@ class KubesprayInventory(object):
next_host = subprocess.check_output(cmd, shell=True)
next_host = next_host.strip().decode('ascii')
else:
# Generates a hostname because we have only an IP address
next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
next_host_id += 1
# Uses the automatically generated node name
# in case we don't provide it.
all_hosts[next_host] = {'ansible_host': access_ip,
'ip': ip,
'access_ip': access_ip}
# Host/Argument starts with a letter, then we assume it's a hostname
elif host[0].isalpha():
if ',' in host:
try:
@ -226,6 +266,7 @@ class KubesprayInventory(object):
'access_ip': access_ip}
return all_hosts
# Expand IP ranges into individual addresses
def range2ips(self, hosts):
reworked_hosts = []
@ -394,9 +435,11 @@ help - Display this message
print_cfg - Write inventory file to stdout
print_ips - Write a space-delimited list of IPs from "all" group
print_hostnames - Write a space-delimited list of Hostnames from "all" group
add - Adds specified hosts into an already existing inventory
Advanced usage:
Add another host after initial creation: inventory.py 10.10.1.5
Create new or overwrite old inventory file: inventory.py 10.10.1.5
Add another host after initial creation: inventory.py add 10.10.1.6
Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
Add hosts with different ip and access ip: inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.10.3
Add hosts with a specific hostname, ip, and optional access ip: first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
@ -430,6 +473,7 @@ def main(argv=None):
if not argv:
argv = sys.argv[1:]
KubesprayInventory(argv, CONFIG_FILE)
return 0
if __name__ == "__main__":

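The new `add` command can be exercised the same way the inventory builder is invoked in the README; a sketch (the IP addresses and inventory path are illustrative):
```ShellSession
# create an inventory, then extend it with the new "add" command
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py 10.10.1.3 10.10.1.4 10.10.1.5
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py add 10.10.1.6
```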
View File

@ -13,6 +13,7 @@
# under the License.
import inventory
from test import support
import unittest
from unittest import mock
@ -26,6 +27,28 @@ if path not in sys.path:
import inventory # noqa
class TestInventoryPrintHostnames(unittest.TestCase):
@mock.patch('ruamel.yaml.YAML.load')
def test_print_hostnames(self, load_mock):
mock_io = mock.mock_open(read_data='')
load_mock.return_value = OrderedDict({'all': {'hosts': {
'node1': {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'},
'node2': {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'}}}})
with mock.patch('builtins.open', mock_io):
with self.assertRaises(SystemExit) as cm:
with support.captured_stdout() as stdout:
inventory.KubesprayInventory(
changed_hosts=["print_hostnames"],
config_file="file")
self.assertEqual("node1 node2\n", stdout.getvalue())
self.assertEqual(cm.exception.code, 0)
class TestInventory(unittest.TestCase):
@mock.patch('inventory.sys')
def setUp(self, sys_mock):
@ -67,23 +90,14 @@ class TestInventory(unittest.TestCase):
self.assertRaisesRegex(ValueError, "Host name must end in an",
self.inv.get_host_id, hostname)
def test_build_hostnames_add_one(self):
changed_hosts = ['10.90.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_add_duplicate(self):
changed_hosts = ['10.90.0.2']
expected = OrderedDict([('node1',
expected = OrderedDict([('node3',
{'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'})])
self.inv.yaml_config['all']['hosts'] = expected
result = self.inv.build_hostnames(changed_hosts)
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_build_hostnames_add_two(self):
@ -99,6 +113,30 @@ class TestInventory(unittest.TestCase):
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_add_three(self):
changed_hosts = ['10.90.0.2', '10.90.0.3', '10.90.0.4']
expected = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'}),
('node3', {'ansible_host': '10.90.0.4',
'ip': '10.90.0.4',
'access_ip': '10.90.0.4'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_add_one(self):
changed_hosts = ['10.90.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_delete_first(self):
changed_hosts = ['-10.90.0.2']
existing_hosts = OrderedDict([
@ -113,7 +151,24 @@ class TestInventory(unittest.TestCase):
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.build_hostnames(changed_hosts)
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_build_hostnames_delete_by_hostname(self):
changed_hosts = ['-node1']
existing_hosts = OrderedDict([
('node1', {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'}),
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.inv.yaml_config['all']['hosts'] = existing_hosts
expected = OrderedDict([
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_exists_hostname_positive(self):
@ -313,7 +368,7 @@ class TestInventory(unittest.TestCase):
self.assertRaisesRegex(Exception, "Range of ip_addresses isn't valid",
self.inv.range2ips, host_range)
def test_build_hostnames_different_ips_add_one(self):
def test_build_hostnames_create_with_one_different_ips(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
@ -322,17 +377,7 @@ class TestInventory(unittest.TestCase):
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_different_ips_add_duplicate(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
self.inv.yaml_config['all']['hosts'] = expected
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_different_ips_add_two(self):
def test_build_hostnames_create_with_two_different_ips(self):
changed_hosts = ['10.90.0.2,192.168.0.2', '10.90.0.3,192.168.0.3']
expected = OrderedDict([
('node1', {'ansible_host': '192.168.0.2',
@ -341,6 +386,210 @@ class TestInventory(unittest.TestCase):
('node2', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'})])
self.inv.yaml_config['all']['hosts'] = OrderedDict()
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_create_with_three_different_ips(self):
changed_hosts = ['10.90.0.2,192.168.0.2',
'10.90.0.3,192.168.0.3',
'10.90.0.4,192.168.0.4']
expected = OrderedDict([
('node1', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node2', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node3', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_overwrite_one_with_different_ips(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
existing = OrderedDict([('node5',
{'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_overwrite_three_with_different_ips(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node1',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
existing = OrderedDict([
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'}),
('node5', {'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts)
self.assertEqual(expected, result)
def test_build_hostnames_different_ips_add_duplicate(self):
changed_hosts = ['10.90.0.2,192.168.0.2']
expected = OrderedDict([('node3',
{'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
existing = expected
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_build_hostnames_add_two_different_ips_into_one_existing(self):
changed_hosts = ['10.90.0.3,192.168.0.3', '10.90.0.4,192.168.0.4']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_build_hostnames_add_two_different_ips_into_two_existing(self):
changed_hosts = ['10.90.0.4,192.168.0.4', '10.90.0.5,192.168.0.5']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'}),
('node5', {'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
def test_build_hostnames_add_two_different_ips_into_three_existing(self):
changed_hosts = ['10.90.0.5,192.168.0.5', '10.90.0.6,192.168.0.6']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'}),
('node5', {'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'}),
('node6', {'ansible_host': '192.168.0.6',
'ip': '10.90.0.6',
'access_ip': '192.168.0.6'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
# Add two IP addresses into a config that has
# three already defined IP addresses. One of the IP addresses
# is a duplicate.
def test_build_hostnames_add_two_duplicate_one_overlap(self):
changed_hosts = ['10.90.0.4,192.168.0.4', '10.90.0.5,192.168.0.5']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'}),
('node5', {'ansible_host': '192.168.0.5',
'ip': '10.90.0.5',
'access_ip': '192.168.0.5'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)
# Add two duplicate IP addresses into a config that has
# three already defined IP addresses
def test_build_hostnames_add_two_duplicate_two_overlap(self):
changed_hosts = ['10.90.0.3,192.168.0.3', '10.90.0.4,192.168.0.4']
expected = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
existing = OrderedDict([
('node2', {'ansible_host': '192.168.0.2',
'ip': '10.90.0.2',
'access_ip': '192.168.0.2'}),
('node3', {'ansible_host': '192.168.0.3',
'ip': '10.90.0.3',
'access_ip': '192.168.0.3'}),
('node4', {'ansible_host': '192.168.0.4',
'ip': '10.90.0.4',
'access_ip': '192.168.0.4'})])
self.inv.yaml_config['all']['hosts'] = existing
result = self.inv.build_hostnames(changed_hosts, True)
self.assertEqual(expected, result)

View File

@ -1,7 +1,7 @@
---
- name: Install required packages
yum:
package:
name: "{{ item }}"
state: present
with_items:

View File

@ -28,7 +28,7 @@
sysctl:
name: net.ipv4.ip_forward
value: 1
sysctl_file: /etc/sysctl.d/ipv4-ip_forward.conf
sysctl_file: "{{ sysctl_file_path }}"
state: present
reload: yes
@ -37,7 +37,7 @@
name: "{{ item }}"
state: present
value: 0
sysctl_file: /etc/sysctl.d/bridge-nf-call.conf
sysctl_file: "{{ sysctl_file_path }}"
reload: yes
with_items:
- net.bridge.bridge-nf-call-arptables

View File

@ -11,6 +11,7 @@
state: directory
owner: "{{ k8s_deployment_user }}"
group: "{{ k8s_deployment_user }}"
mode: 0700
- name: Configure sudo for deployment user
copy:

View File

@ -5,14 +5,15 @@
- hosts: localhost
strategy: linear
vars:
mitogen_version: 0.2.9
mitogen_url: https://github.com/dw/mitogen/archive/v{{ mitogen_version }}.tar.gz
mitogen_version: 0.3.2
mitogen_url: https://github.com/mitogen-hq/mitogen/archive/refs/tags/v{{ mitogen_version }}.tar.gz
ansible_connection: local
tasks:
- name: Create mitogen plugin dir
file:
path: "{{ item }}"
state: directory
mode: 0755
become: false
loop:
- "{{ playbook_dir }}/plugins/mitogen"
@ -37,6 +38,12 @@
- name: add strategy to ansible.cfg
ini_file:
path: ansible.cfg
section: defaults
option: strategy
value: mitogen_linear
mode: 0644
section: "{{ item.section | d('defaults') }}"
option: "{{ item.option }}"
value: "{{ item.value }}"
with_items:
- option: strategy
value: mitogen_linear
- option: strategy_plugins
value: plugins/mitogen/ansible_mitogen/plugins/strategy

View File

@ -11,8 +11,8 @@
# ## Set disk_volume_device_1 to desired device for gluster brick, if different to /dev/vdb (default).
# ## As in the previous case, you can set ip to give direct communication on internal IPs
# gfs_node1 ansible_ssh_host=95.54.0.18 # disk_volume_device_1=/dev/vdc ip=10.3.0.7
# gfs_node2 ansible_ssh_host=95.54.0.19 # disk_volume_device_1=/dev/vdc ip=10.3.0.8
# gfs_node3 ansible_ssh_host=95.54.0.20 # disk_volume_device_1=/dev/vdc ip=10.3.0.9
# gfs_node2 ansible_ssh_host=95.54.0.19 # disk_volume_device_1=/dev/vdc ip=10.3.0.8
# gfs_node3 ansible_ssh_host=95.54.0.20 # disk_volume_device_1=/dev/vdc ip=10.3.0.9
# [kube_control_plane]
# node1

View File

@ -14,12 +14,16 @@ This role performs basic installation and setup of Gluster, but it does not conf
Available variables are listed below, along with default values (see `defaults/main.yml`):
glusterfs_default_release: ""
```yaml
glusterfs_default_release: ""
```
You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.5"
```yaml
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.5"
```
For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/) for more info.
@ -29,9 +33,11 @@ None.
## Example Playbook
```yaml
- hosts: server
roles:
- geerlingguy.glusterfs
```
For a real-world use example, read through [Simple GlusterFS Setup with Ansible](http://www.jeffgeerling.com/blog/simple-glusterfs-setup-ansible), a blog post by this role's author, which is included in Chapter 8 of [Ansible for DevOps](https://www.ansiblefordevops.com/).

View File

@ -1,10 +1,10 @@
---
- name: Install Prerequisites
yum: name={{ item }} state=present
package: name={{ item }} state=present
with_items:
- "centos-release-gluster{{ glusterfs_default_release }}"
- name: Install Packages
yum: name={{ item }} state=present
package: name={{ item }} state=present
with_items:
- glusterfs-client

View File

@ -9,7 +9,7 @@
when: ansible_os_family == "Debian"
- name: install xfs RedHat
yum: name=xfsprogs state=present
package: name=xfsprogs state=present
when: ansible_os_family == "RedHat"
# Format external volumes in xfs
@ -82,6 +82,7 @@
template:
dest: "{{ gluster_mount_dir }}/.test-file.txt"
src: test-file.txt
mode: 0644
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Unmount glusterfs

View File

@ -1,11 +1,11 @@
---
- name: Install Prerequisites
yum: name={{ item }} state=present
package: name={{ item }} state=present
with_items:
- "centos-release-gluster{{ glusterfs_default_release }}"
- name: Install Packages
yum: name={{ item }} state=present
package: name={{ item }} state=present
with_items:
- glusterfs-server
- glusterfs-client

View File

@ -1,5 +0,0 @@
---
- hosts: all
roles:
- role_under_test

View File

@ -3,6 +3,7 @@
template:
src: "{{ item.file }}"
dest: "{{ kube_config_dir }}/{{ item.dest }}"
mode: 0644
with_items:
- { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
- { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}

View File

@ -2,6 +2,13 @@ all:
vars:
heketi_admin_key: "11elfeinhundertundelf"
heketi_user_key: "!!einseinseins"
glusterfs_daemonset:
readiness_probe:
timeout_seconds: 3
initial_delay_seconds: 3
liveness_probe:
timeout_seconds: 3
initial_delay_seconds: 10
children:
k8s_cluster:
vars:

View File

@ -11,7 +11,7 @@
- name: "Install glusterfs mount utils (RedHat)"
become: true
yum:
package:
name: "glusterfs-fuse"
state: "present"
when: "ansible_os_family == 'RedHat'"

View File

@ -1,7 +1,10 @@
---
- name: "Kubernetes Apps | Lay Down Heketi Bootstrap"
become: true
template: { src: "heketi-bootstrap.json.j2", dest: "{{ kube_config_dir }}/heketi-bootstrap.json" }
template:
src: "heketi-bootstrap.json.j2"
dest: "{{ kube_config_dir }}/heketi-bootstrap.json"
mode: 0640
register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi Bootstrap"
kube:

View File

@ -10,6 +10,7 @@
template:
src: "topology.json.j2"
dest: "{{ kube_config_dir }}/topology.json"
mode: 0644
- name: "Copy topology configuration into container."
changed_when: false
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"

View File

@ -1,6 +1,9 @@
---
- name: "Kubernetes Apps | Lay Down GlusterFS Daemonset"
template: { src: "glusterfs-daemonset.json.j2", dest: "{{ kube_config_dir }}/glusterfs-daemonset.json" }
template:
src: "glusterfs-daemonset.json.j2"
dest: "{{ kube_config_dir }}/glusterfs-daemonset.json"
mode: 0644
become: true
register: "rendering"
- name: "Kubernetes Apps | Install and configure GlusterFS daemonset"
@ -27,7 +30,10 @@
delay: 5
- name: "Kubernetes Apps | Lay Down Heketi Service Account"
template: { src: "heketi-service-account.json.j2", dest: "{{ kube_config_dir }}/heketi-service-account.json" }
template:
src: "heketi-service-account.json.j2"
dest: "{{ kube_config_dir }}/heketi-service-account.json"
mode: 0644
become: true
register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi Service Account"

View File

@ -4,6 +4,7 @@
template:
src: "heketi-deployment.json.j2"
dest: "{{ kube_config_dir }}/heketi-deployment.json"
mode: 0644
register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi"

View File

@ -5,7 +5,7 @@
changed_when: false
- name: "Kubernetes Apps | Deploy cluster role binding."
when: "clusterrolebinding_state.stdout == \"\""
when: "clusterrolebinding_state.stdout | length == 0"
command: "{{ bin_dir }}/kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account"
- name: Get clusterrolebindings again
@ -15,7 +15,7 @@
- name: Make sure that clusterrolebindings are present now
assert:
that: "clusterrolebinding_state.stdout != \"\""
that: "clusterrolebinding_state.stdout | length > 0"
msg: "Cluster role binding is not present."
- name: Get the heketi-config-secret secret
@ -28,9 +28,10 @@
template:
src: "heketi.json.j2"
dest: "{{ kube_config_dir }}/heketi.json"
mode: 0644
- name: "Deploy Heketi config secret"
when: "secret_state.stdout == \"\""
when: "secret_state.stdout | length == 0"
command: "{{ bin_dir }}/kubectl create secret generic heketi-config-secret --from-file={{ kube_config_dir }}/heketi.json"
- name: Get the heketi-config-secret secret again
@ -40,5 +41,5 @@
- name: Make sure the heketi-config-secret secret exists now
assert:
that: "secret_state.stdout != \"\""
that: "secret_state.stdout | length > 0"
msg: "Heketi config secret is not present."

View File

@ -2,7 +2,10 @@
- name: "Kubernetes Apps | Lay Down Heketi Storage"
become: true
vars: { nodes: "{{ groups['heketi-node'] }}" }
template: { src: "heketi-storage.json.j2", dest: "{{ kube_config_dir }}/heketi-storage.json" }
template:
src: "heketi-storage.json.j2"
dest: "{{ kube_config_dir }}/heketi-storage.json"
mode: 0644
register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi Storage"
kube:

View File

@ -16,6 +16,7 @@
template:
src: "storageclass.yml.j2"
dest: "{{ kube_config_dir }}/storageclass.yml"
mode: 0644
register: "rendering"
- name: "Kubernetes Apps | Install and configure Storage Class"
kube:

View File

@ -10,6 +10,7 @@
template:
src: "topology.json.j2"
dest: "{{ kube_config_dir }}/topology.json"
mode: 0644
- name: "Copy topology configuration into container." # noqa 503
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"

View File

@ -73,8 +73,8 @@
"privileged": true
},
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 3,
"timeoutSeconds": {{ glusterfs_daemonset.readiness_probe.timeout_seconds }},
"initialDelaySeconds": {{ glusterfs_daemonset.readiness_probe.initial_delay_seconds }},
"exec": {
"command": [
"/bin/bash",
@ -84,8 +84,8 @@
}
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 10,
"timeoutSeconds": {{ glusterfs_daemonset.liveness_probe.timeout_seconds }},
"initialDelaySeconds": {{ glusterfs_daemonset.liveness_probe.initial_delay_seconds }},
"exec": {
"command": [
"/bin/bash",

View File

@ -1,7 +1,7 @@
---
- name: "Install lvm utils (RedHat)"
become: true
yum:
package:
name: "lvm2"
state: "present"
when: "ansible_os_family == 'RedHat'"
@ -19,7 +19,7 @@
become: true
shell: "pvs {{ disk_volume_device_1 }} --option vg_name | tail -n+2"
register: "volume_groups"
ignore_errors: true
ignore_errors: true # noqa ignore-errors
changed_when: false
- name: "Remove volume groups." # noqa 301
@ -35,11 +35,11 @@
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true
command: "pvremove {{ disk_volume_device_1 }} --yes"
ignore_errors: true
ignore_errors: true # noqa ignore-errors
- name: "Remove lvm utils (RedHat)"
become: true
yum:
package:
name: "lvm2"
state: "absent"
when: "ansible_os_family == 'RedHat' and heketi_remove_lvm"

View File

@ -1,51 +1,51 @@
---
- name: "Remove storage class." # noqa 301
- name: Remove storage class. # noqa 301
command: "{{ bin_dir }}/kubectl delete storageclass gluster"
ignore_errors: true
- name: "Tear down heketi." # noqa 301
ignore_errors: true # noqa ignore-errors
- name: Tear down heketi. # noqa 301
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\""
ignore_errors: true
- name: "Tear down heketi." # noqa 301
ignore_errors: true # noqa ignore-errors
- name: Tear down heketi. # noqa 301
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\""
ignore_errors: true
- name: "Tear down bootstrap."
ignore_errors: true # noqa ignore-errors
- name: Tear down bootstrap.
include_tasks: "../../provision/tasks/bootstrap/tear-down.yml"
- name: "Ensure there is nothing left over." # noqa 301
- name: Ensure there is nothing left over. # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: "Ensure there is nothing left over." # noqa 301
- name: Ensure there is nothing left over. # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: "Tear down glusterfs." # noqa 301
- name: Tear down glusterfs. # noqa 301
command: "{{ bin_dir }}/kubectl delete daemonset.extensions/glusterfs"
ignore_errors: true
- name: "Remove heketi storage service." # noqa 301
ignore_errors: true # noqa ignore-errors
- name: Remove heketi storage service. # noqa 301
command: "{{ bin_dir }}/kubectl delete service heketi-storage-endpoints"
ignore_errors: true
- name: "Remove heketi gluster role binding" # noqa 301
ignore_errors: true # noqa ignore-errors
- name: Remove heketi gluster role binding # noqa 301
command: "{{ bin_dir }}/kubectl delete clusterrolebinding heketi-gluster-admin"
ignore_errors: true
- name: "Remove heketi config secret" # noqa 301
ignore_errors: true # noqa ignore-errors
- name: Remove heketi config secret # noqa 301
command: "{{ bin_dir }}/kubectl delete secret heketi-config-secret"
ignore_errors: true
- name: "Remove heketi db backup" # noqa 301
ignore_errors: true # noqa ignore-errors
- name: Remove heketi db backup # noqa 301
command: "{{ bin_dir }}/kubectl delete secret heketi-db-backup"
ignore_errors: true
- name: "Remove heketi service account" # noqa 301
ignore_errors: true # noqa ignore-errors
- name: Remove heketi service account # noqa 301
command: "{{ bin_dir }}/kubectl delete serviceaccount heketi-service-account"
ignore_errors: true
- name: "Get secrets"
ignore_errors: true # noqa ignore-errors
- name: Get secrets
command: "{{ bin_dir }}/kubectl get secrets --output=\"json\""
register: "secrets"
changed_when: false
- name: "Remove heketi storage secret"
- name: Remove heketi storage secret
vars: { storage_query: "items[?metadata.annotations.\"kubernetes.io/service-account.name\"=='heketi-service-account'].metadata.name|[0]" }
command: "{{ bin_dir }}/kubectl delete secret {{ secrets.stdout|from_json|json_query(storage_query) }}"
when: "storage_query is defined"
ignore_errors: true
ignore_errors: true # noqa ignore-errors

View File

@ -9,7 +9,8 @@ This script has two features:
(2) Deploy local container registry and register the container images to the registry.
Step(1) should be done online site as a preparation, then we bring the gotten images
to the target offline environment.
to the target offline environment. if images are from a private registry,
you need to set `PRIVATE_REGISTRY` environment variable.
Then we will run step(2) for registering the images to local registry.
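For illustration, a hedged sketch of preparing images that are pulled from a private registry (the registry address is a placeholder, and the `create` subcommand is assumed rather than confirmed by the excerpt shown here):
```shell
# strip the private registry prefix while building the image tarball
export PRIVATE_REGISTRY=registry.example.com:5000
./manage-offline-container-images.sh create
```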
Step(1) can be operated with:
@ -28,16 +29,37 @@ manage-offline-container-images.sh register
This script generates the list of downloaded files and the list of container images based on the `roles/download/defaults/main.yml` file.
Running this script will generate three files: all downloaded file urls in files.list, all container images in images.list, and all component versions in generate.sh.
Running this script will execute the `generate_list.yml` playbook in the kubespray root directory and generate four files:
all downloaded file urls in files.list, all container images in images.list, and jinja2 templates in *.template.
```shell
bash generate_list.sh
./generate_list.sh
tree temp
temp
├── files.list
├── generate.sh
└── images.list
0 directories, 3 files
├── files.list.template
├── images.list
└── images.list.template
0 directories, 5 files
```
In some cases you may want to update some component versions; you can edit the `generate.sh` file, then run `bash generate.sh | grep 'https' > files.list` to update files.list or run `bash generate.sh | grep -v 'https' > images.list` to update images.list.
In some cases you may want to update some component versions; you can declare version variables in the ansible inventory file or group_vars,
then run `./generate_list.sh -i [inventory_file]` to update files.list and images.list.
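Because the script forwards its arguments to `ansible-playbook`, a component version can also be overridden with an extra-var; a minimal sketch, assuming a hypothetical inventory path and using `helm_version` as the example variable:
```shell
# regenerate files.list and images.list with helm pinned to v3.9.2
./generate_list.sh -i inventory/mycluster/inventory.ini -e helm_version=v3.9.2
```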
## manage-offline-files.sh
This script will download all files according to `temp/files.list` and run an nginx container to provide offline file downloads.
Step(1) generate `files.list`
```shell
./generate_list.sh
```
Step(2) download files and run nginx container
```shell
./manage-offline-files.sh
```
When the nginx container is running, it can be accessed through <http://127.0.0.1:8080/>.
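As a quick sanity check (a sketch; adjust the port if you change `NGINX_PORT` in the script):
```shell
# the autoindex page should return HTTP 200 and list the mirrored files
curl -I http://127.0.0.1:8080/
```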

contrib/offline/generate_list.sh Normal file → Executable file
View File

@ -5,53 +5,29 @@ CURRENT_DIR=$(cd $(dirname $0); pwd)
TEMP_DIR="${CURRENT_DIR}/temp"
REPO_ROOT_DIR="${CURRENT_DIR%/contrib/offline}"
: ${IMAGE_ARCH:="amd64"}
: ${ANSIBLE_SYSTEM:="linux"}
: ${ANSIBLE_ARCHITECTURE:="x86_64"}
: ${DOWNLOAD_YML:="roles/download/defaults/main.yml"}
: ${KUBE_VERSION_YAML:="roles/kubespray-defaults/defaults/main.yaml"}
mkdir -p ${TEMP_DIR}
# ARCH used in convert {%- if image_arch != 'amd64' -%}-{{ image_arch }}{%- endif -%} to {{arch}}
if [ "${IMAGE_ARCH}" != "amd64" ]; then ARCH="${IMAGE_ARCH}"; fi
cat > ${TEMP_DIR}/generate.sh << EOF
arch=${ARCH}
image_arch=${IMAGE_ARCH}
ansible_system=${ANSIBLE_SYSTEM}
ansible_architecture=${ANSIBLE_ARCHITECTURE}
EOF
# generate all component version by $DOWNLOAD_YML
grep 'kube_version:' ${REPO_ROOT_DIR}/${KUBE_VERSION_YAML} \
| sed 's/: /=/g' >> ${TEMP_DIR}/generate.sh
grep '_version:' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 's/: /=/g;s/{{/${/g;s/}}/}/g' | tr -d ' ' >> ${TEMP_DIR}/generate.sh
sed -i 's/kube_major_version=.*/kube_major_version=${kube_version%.*}/g' ${TEMP_DIR}/generate.sh
sed -i 's/crictl_version=.*/crictl_version=${kube_version%.*}.0/g' ${TEMP_DIR}/generate.sh
# generate all download files url
# generate all download files url template
grep 'download_url:' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 's/: /=/g;s/ //g;s/{{/${/g;s/}}/}/g;s/|lower//g;s/^.*_url=/echo /g' >> ${TEMP_DIR}/generate.sh
| sed 's/^.*_url: //g;s/\"//g' > ${TEMP_DIR}/files.list.template
# generate all images list
grep -E '_repo:|_tag:' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed "s#{%- if image_arch != 'amd64' -%}-{{ image_arch }}{%- endif -%}#{{arch}}#g" \
| sed 's/: /=/g;s/{{/${/g;s/}}/}/g' | tr -d ' ' >> ${TEMP_DIR}/generate.sh
# generate all images list template
sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed -n "s/repo: //p;s/tag: //p" | tr -d ' ' | sed 's/{{/${/g;s/}}/}/g' \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/^/echo /g' >> ${TEMP_DIR}/generate.sh
| sed -n "s/repo: //p;s/tag: //p" | tr -d ' ' \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/\"//g' > ${TEMP_DIR}/images.list.template
# special handling for https://github.com/kubernetes-sigs/kubespray/pull/7570
sed -i 's#^coredns_image_repo=.*#coredns_image_repo=${kube_image_repo}$(if printf "%s\\n%s\\n" v1.21 ${kube_version%.*} | sort --check=quiet --version-sort; then echo -n /coredns/coredns;else echo -n /coredns; fi)#' ${TEMP_DIR}/generate.sh
sed -i 's#^coredns_image_tag=.*#coredns_image_tag=$(if printf "%s\\n%s\\n" v1.21 ${kube_version%.*} | sort --check=quiet --version-sort; then echo -n ${coredns_version};else echo -n ${coredns_version/v/}; fi)#' ${TEMP_DIR}/generate.sh
# add kube-* images to images list
# add kube-* images to images list template
# Those container images are downloaded by kubeadm, so roles/download/defaults/main.yml
# doesn't contain them. That is why those images need to be added to the
# list separately.
KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"
echo "${KUBE_IMAGES}" | tr ' ' '\n' | xargs -L1 -I {} \
echo 'echo ${kube_image_repo}/{}:${kube_version}' >> ${TEMP_DIR}/generate.sh
for i in $KUBE_IMAGES; do
echo "{{ kube_image_repo }}/$i:{{ kube_version }}" >> ${TEMP_DIR}/images.list.template
done
# print files.list and images.list
bash ${TEMP_DIR}/generate.sh | grep 'https' | sort > ${TEMP_DIR}/files.list
bash ${TEMP_DIR}/generate.sh | grep -v 'https' | sort > ${TEMP_DIR}/images.list
# run ansible to expand templates
/bin/cp ${CURRENT_DIR}/generate_list.yml ${REPO_ROOT_DIR}
(cd ${REPO_ROOT_DIR} && ansible-playbook $* generate_list.yml && /bin/rm generate_list.yml) || exit 1

View File

@ -0,0 +1,19 @@
---
- hosts: localhost
become: no
roles:
# Just load default variables from roles.
- role: kubespray-defaults
when: false
- role: download
when: false
tasks:
# Generate files.list and images.list files from templates.
- template:
src: ./contrib/offline/temp/{{ item }}.list.template
dest: ./contrib/offline/temp/{{ item }}.list
with_items:
- files
- images

View File

@ -15,7 +15,7 @@ function create_container_image_tar() {
IMAGES=$(kubectl describe pods --all-namespaces | grep " Image:" | awk '{print $2}' | sort | uniq)
# NOTE: etcd and pause cannot be seen as pods.
# The pause image is used for --pod-infra-container-image option of kubelet.
EXT_IMAGES=$(kubectl cluster-info dump | egrep "quay.io/coreos/etcd:|k8s.gcr.io/pause:" | sed s@\"@@g)
EXT_IMAGES=$(kubectl cluster-info dump | egrep "quay.io/coreos/etcd:|registry.k8s.io/pause:" | sed s@\"@@g)
IMAGES="${IMAGES} ${EXT_IMAGES}"
rm -f ${IMAGE_TAR_FILE}
@ -46,15 +46,16 @@ function create_container_image_tar() {
# NOTE: Here removes the following repo parts from each image
# so that these parts will be replaced with Kubespray.
# - kube_image_repo: "k8s.gcr.io"
# - kube_image_repo: "registry.k8s.io"
# - gcr_image_repo: "gcr.io"
# - docker_image_repo: "docker.io"
# - quay_image_repo: "quay.io"
FIRST_PART=$(echo ${image} | awk -F"/" '{print $1}')
if [ "${FIRST_PART}" = "k8s.gcr.io" ] ||
if [ "${FIRST_PART}" = "registry.k8s.io" ] ||
[ "${FIRST_PART}" = "gcr.io" ] ||
[ "${FIRST_PART}" = "docker.io" ] ||
[ "${FIRST_PART}" = "quay.io" ]; then
[ "${FIRST_PART}" = "quay.io" ] ||
[ "${FIRST_PART}" = "${PRIVATE_REGISTRY}" ]; then
image=$(echo ${image} | sed s@"${FIRST_PART}/"@@)
fi
echo "${FILE_NAME} ${image}" >> ${IMAGE_LIST}
@ -100,15 +101,35 @@ function register_container_images() {
tar -zxvf ${IMAGE_TAR_FILE}
sudo docker load -i ${IMAGE_DIR}/registry-latest.tar
sudo docker run --restart=always -d -p 5000:5000 --name registry registry:latest
set +e
sudo docker container inspect registry >/dev/null 2>&1
if [ $? -ne 0 ]; then
sudo docker run --restart=always -d -p 5000:5000 --name registry registry:latest
fi
set -e
while read -r line; do
file_name=$(echo ${line} | awk '{print $1}')
org_image=$(echo ${line} | awk '{print $2}')
new_image="${LOCALHOST_NAME}:5000/${org_image}"
image_id=$(tar -tf ${IMAGE_DIR}/${file_name} | grep "\.json" | grep -v manifest.json | sed s/"\.json"//)
raw_image=$(echo ${line} | awk '{print $2}')
new_image="${LOCALHOST_NAME}:5000/${raw_image}"
org_image=$(sudo docker load -i ${IMAGE_DIR}/${file_name} | head -n1 | awk '{print $3}')
image_id=$(sudo docker image inspect ${org_image} | grep "\"Id\":" | awk -F: '{print $3}'| sed s/'\",'//)
if [ -z "${file_name}" ]; then
echo "Failed to get file_name for line ${line}"
exit 1
fi
if [ -z "${raw_image}" ]; then
echo "Failed to get raw_image for line ${line}"
exit 1
fi
if [ -z "${org_image}" ]; then
echo "Failed to get org_image for line ${line}"
exit 1
fi
if [ -z "${image_id}" ]; then
echo "Failed to get image_id for file ${file_name}"
exit 1
fi
sudo docker load -i ${IMAGE_DIR}/${file_name}
sudo docker tag ${image_id} ${new_image}
sudo docker push ${new_image}
@ -132,7 +153,8 @@ else
echo "(2) Deploy local container registry and register the container images to the registry."
echo ""
echo "Step(1) should be done online site as a preparation, then we bring"
echo "the gotten images to the target offline environment."
echo "the gotten images to the target offline environment. if images are from"
echo "a private registry, you need to set PRIVATE_REGISTRY environment variable."
echo "Then we will run step(2) for registering the images to local registry."
echo ""
echo "${IMAGE_TAR_FILE} is created to contain your container images."

View File

@ -0,0 +1,44 @@
#!/bin/bash
CURRENT_DIR=$( dirname "$(readlink -f "$0")" )
OFFLINE_FILES_DIR_NAME="offline-files"
OFFLINE_FILES_DIR="${CURRENT_DIR}/${OFFLINE_FILES_DIR_NAME}"
OFFLINE_FILES_ARCHIVE="${CURRENT_DIR}/offline-files.tar.gz"
FILES_LIST=${FILES_LIST:-"${CURRENT_DIR}/temp/files.list"}
NGINX_PORT=8080
# download files
if [ ! -f "${FILES_LIST}" ]; then
echo "${FILES_LIST} should exist, run ./generate_list.sh first."
exit 1
fi
rm -rf "${OFFLINE_FILES_DIR}"
rm "${OFFLINE_FILES_ARCHIVE}"
mkdir "${OFFLINE_FILES_DIR}"
wget -x -P "${OFFLINE_FILES_DIR}" -i "${FILES_LIST}"
tar -czvf "${OFFLINE_FILES_ARCHIVE}" "${OFFLINE_FILES_DIR_NAME}"
[ -n "$NO_HTTP_SERVER" ] && echo "skip to run nginx" && exit 0
# run nginx container server
if command -v nerdctl 1>/dev/null 2>&1; then
runtime="nerdctl"
elif command -v podman 1>/dev/null 2>&1; then
runtime="podman"
elif command -v docker 1>/dev/null 2>&1; then
runtime="docker"
else
echo "No supported container runtime found"
exit 1
fi
sudo "${runtime}" container inspect nginx >/dev/null 2>&1
if [ $? -ne 0 ]; then
sudo "${runtime}" run \
--restart=always -d -p ${NGINX_PORT}:80 \
--volume "${OFFLINE_FILES_DIR}:/usr/share/nginx/html/download" \
--volume "$(pwd)"/nginx.conf:/etc/nginx/nginx.conf \
--name nginx nginx:alpine
fi

View File

@ -0,0 +1,39 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
include /etc/nginx/default.d/*.conf;
location / {
root /usr/share/nginx/html/download;
autoindex on;
autoindex_exact_size off;
autoindex_localtime on;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}

View File

@ -20,4 +20,4 @@
"'ufw.service' in services"
when:
- disable_service_firewall
- disable_service_firewall is defined and disable_service_firewall

View File

@ -9,8 +9,8 @@ Summary: Ansible modules for installing Kubernetes
Group: System Environment/Libraries
License: ASL 2.0
Url: https://github.com/kubernetes-incubator/kubespray
Source0: https://github.com/kubernetes-incubator/kubespray/archive/%{upstream_version}.tar.gz#/%{name}-%{release}.tar.gz
Url: https://github.com/kubernetes-sigs/kubespray
Source0: https://github.com/kubernetes-sigs/kubespray/archive/%{upstream_version}.tar.gz#/%{name}-%{release}.tar.gz
BuildArch: noarch
BuildRequires: git

View File

@ -1,2 +1,3 @@
*.tfstate*
.terraform.lock.hcl
.terraform

View File

@ -36,8 +36,7 @@ terraform apply -var-file=credentials.tfvars
```
- Terraform automatically creates an Ansible Inventory file called `hosts` with the created infrastructure in the directory `inventory`
- Ansible will automatically generate an ssh config file for your bastion hosts. To connect to hosts with ssh using bastion host use generated ssh-bastion.conf.
Ansible automatically detects bastion and changes ssh_args
- Ansible will automatically generate an ssh config file for your bastion hosts. To connect to hosts with ssh using the bastion host, use the generated `ssh-bastion.conf`. Ansible automatically detects the bastion and changes `ssh_args`
```commandline
ssh -F ./ssh-bastion.conf user@$ip

View File

@ -20,20 +20,20 @@ module "aws-vpc" {
aws_cluster_name = var.aws_cluster_name
aws_vpc_cidr_block = var.aws_vpc_cidr_block
aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, 2)
aws_avail_zones = data.aws_availability_zones.available.names
aws_cidr_subnets_private = var.aws_cidr_subnets_private
aws_cidr_subnets_public = var.aws_cidr_subnets_public
default_tags = var.default_tags
}
module "aws-elb" {
source = "./modules/elb"
module "aws-nlb" {
source = "./modules/nlb"
aws_cluster_name = var.aws_cluster_name
aws_vpc_id = module.aws-vpc.aws_vpc_id
aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, 2)
aws_avail_zones = data.aws_availability_zones.available.names
aws_subnet_ids_public = module.aws-vpc.aws_subnet_ids_public
aws_elb_api_port = var.aws_elb_api_port
aws_nlb_api_port = var.aws_nlb_api_port
k8s_secure_api_port = var.k8s_secure_api_port
default_tags = var.default_tags
}
@ -52,9 +52,8 @@ module "aws-iam" {
resource "aws_instance" "bastion-server" {
ami = data.aws_ami.distro.id
instance_type = var.aws_bastion_size
count = length(var.aws_cidr_subnets_public)
count = var.aws_bastion_num
associate_public_ip_address = true
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_public, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
@ -79,11 +78,14 @@ resource "aws_instance" "k8s-master" {
count = var.aws_kube_master_num
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
root_block_device {
volume_size = var.aws_kube_master_disk_size
}
iam_instance_profile = module.aws-iam.kube_control_plane-profile
key_name = var.AWS_SSH_KEY_NAME
@ -94,10 +96,10 @@ resource "aws_instance" "k8s-master" {
}))
}
resource "aws_elb_attachment" "attach_master_nodes" {
count = var.aws_kube_master_num
elb = module.aws-elb.aws_elb_api_id
instance = element(aws_instance.k8s-master.*.id, count.index)
resource "aws_lb_target_group_attachment" "tg-attach_master_nodes" {
count = var.aws_kube_master_num
target_group_arn = module.aws-nlb.aws_nlb_api_tg_arn
target_id = element(aws_instance.k8s-master.*.private_ip, count.index)
}
resource "aws_instance" "k8s-etcd" {
@ -106,11 +108,14 @@ resource "aws_instance" "k8s-etcd" {
count = var.aws_etcd_num
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
root_block_device {
volume_size = var.aws_etcd_disk_size
}
key_name = var.AWS_SSH_KEY_NAME
tags = merge(var.default_tags, tomap({
@ -126,11 +131,14 @@ resource "aws_instance" "k8s-worker" {
count = var.aws_kube_worker_num
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
root_block_device {
volume_size = var.aws_kube_worker_disk_size
}
iam_instance_profile = module.aws-iam.kube-worker-profile
key_name = var.AWS_SSH_KEY_NAME
@ -152,11 +160,11 @@ data "template_file" "inventory" {
public_ip_address_bastion = join("\n", formatlist("bastion ansible_host=%s", aws_instance.bastion-server.*.public_ip))
connection_strings_master = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-master.*.private_dns, aws_instance.k8s-master.*.private_ip))
connection_strings_node = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.private_dns, aws_instance.k8s-worker.*.private_ip))
connection_strings_etcd = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.private_dns, aws_instance.k8s-etcd.*.private_ip))
list_master = join("\n", aws_instance.k8s-master.*.private_dns)
list_node = join("\n", aws_instance.k8s-worker.*.private_dns)
list_etcd = join("\n", aws_instance.k8s-etcd.*.private_dns)
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
connection_strings_etcd = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.private_dns, aws_instance.k8s-etcd.*.private_ip))
list_etcd = join("\n", ((var.aws_etcd_num > 0) ? (aws_instance.k8s-etcd.*.private_dns) : (aws_instance.k8s-master.*.private_dns)))
nlb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-nlb.aws_nlb_api_fqdn}\""
}
}

View File

@ -1,57 +0,0 @@
resource "aws_security_group" "aws-elb" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
vpc_id = var.aws_vpc_id
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
}))
}
resource "aws_security_group_rule" "aws-allow-api-access" {
type = "ingress"
from_port = var.aws_elb_api_port
to_port = var.k8s_secure_api_port
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = aws_security_group.aws-elb.id
}
resource "aws_security_group_rule" "aws-allow-api-egress" {
type = "egress"
from_port = 0
to_port = 65535
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = aws_security_group.aws-elb.id
}
# Create a new AWS ELB for K8S API
resource "aws_elb" "aws-elb-api" {
name = "kubernetes-elb-${var.aws_cluster_name}"
subnets = var.aws_subnet_ids_public
security_groups = [aws_security_group.aws-elb.id]
listener {
instance_port = var.k8s_secure_api_port
instance_protocol = "tcp"
lb_port = var.aws_elb_api_port
lb_protocol = "tcp"
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "HTTPS:${var.k8s_secure_api_port}/healthz"
interval = 30
}
cross_zone_load_balancing = true
idle_timeout = 400
connection_draining = true
connection_draining_timeout = 400
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-elb-api"
}))
}

View File

@ -1,7 +0,0 @@
output "aws_elb_api_id" {
value = aws_elb.aws-elb-api.id
}
output "aws_elb_api_fqdn" {
value = aws_elb.aws-elb-api.dns_name
}

View File

@ -0,0 +1,41 @@
# Create a new AWS NLB for K8S API
resource "aws_lb" "aws-nlb-api" {
name = "kubernetes-nlb-${var.aws_cluster_name}"
load_balancer_type = "network"
subnets = length(var.aws_subnet_ids_public) <= length(var.aws_avail_zones) ? var.aws_subnet_ids_public : slice(var.aws_subnet_ids_public, 0, length(var.aws_avail_zones))
idle_timeout = 400
enable_cross_zone_load_balancing = true
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-nlb-api"
}))
}
# Create a new AWS NLB Instance Target Group
resource "aws_lb_target_group" "aws-nlb-api-tg" {
name = "kubernetes-nlb-tg-${var.aws_cluster_name}"
port = var.k8s_secure_api_port
protocol = "TCP"
target_type = "ip"
vpc_id = var.aws_vpc_id
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
interval = 30
protocol = "HTTPS"
path = "/healthz"
}
}
# Create a new AWS NLB Listener listen to target group
resource "aws_lb_listener" "aws-nlb-api-listener" {
load_balancer_arn = aws_lb.aws-nlb-api.arn
port = var.aws_nlb_api_port
protocol = "TCP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.aws-nlb-api-tg.arn
}
}

View File

@ -0,0 +1,11 @@
output "aws_nlb_api_id" {
value = aws_lb.aws-nlb-api.id
}
output "aws_nlb_api_fqdn" {
value = aws_lb.aws-nlb-api.dns_name
}
output "aws_nlb_api_tg_arn" {
value = aws_lb_target_group.aws-nlb-api-tg.arn
}

View File

@ -6,8 +6,8 @@ variable "aws_vpc_id" {
description = "AWS VPC ID"
}
variable "aws_elb_api_port" {
description = "Port for AWS ELB"
variable "aws_nlb_api_port" {
description = "Port for AWS NLB"
}
variable "k8s_secure_api_port" {

View File

@ -25,13 +25,14 @@ resource "aws_internet_gateway" "cluster-vpc-internetgw" {
resource "aws_subnet" "cluster-vpc-subnets-public" {
vpc_id = aws_vpc.cluster-vpc.id
count = length(var.aws_avail_zones)
availability_zone = element(var.aws_avail_zones, count.index)
count = length(var.aws_cidr_subnets_public)
availability_zone = element(var.aws_avail_zones, count.index % length(var.aws_avail_zones))
cidr_block = element(var.aws_cidr_subnets_public, count.index)
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public"
"kubernetes.io/cluster/${var.aws_cluster_name}" = "member"
"kubernetes.io/cluster/${var.aws_cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}))
}
@ -43,12 +44,14 @@ resource "aws_nat_gateway" "cluster-nat-gateway" {
resource "aws_subnet" "cluster-vpc-subnets-private" {
vpc_id = aws_vpc.cluster-vpc.id
count = length(var.aws_avail_zones)
availability_zone = element(var.aws_avail_zones, count.index)
count = length(var.aws_cidr_subnets_private)
availability_zone = element(var.aws_avail_zones, count.index % length(var.aws_avail_zones))
cidr_block = element(var.aws_cidr_subnets_private, count.index)
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
"kubernetes.io/cluster/${var.aws_cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}))
}

View File

@ -11,11 +11,11 @@ output "workers" {
}
output "etcd" {
value = join("\n", aws_instance.k8s-etcd.*.private_ip)
value = join("\n", ((var.aws_etcd_num > 0) ? (aws_instance.k8s-etcd.*.private_ip) : (aws_instance.k8s-master.*.private_ip)))
}
output "aws_elb_api_fqdn" {
value = "${module.aws-elb.aws_elb_api_fqdn}:${var.aws_elb_api_port}"
output "aws_nlb_api_fqdn" {
value = "${module.aws-nlb.aws_nlb_api_fqdn}:${var.aws_nlb_api_port}"
}
output "inventory" {

View File

@ -9,6 +9,8 @@ aws_cidr_subnets_private = ["10.250.192.0/20", "10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20", "10.250.240.0/20"]
#Bastion Host
aws_bastion_num = 1
aws_bastion_size = "t2.medium"
#Kubernetes Cluster
@ -17,22 +19,26 @@ aws_kube_master_num = 3
aws_kube_master_size = "t2.medium"
aws_kube_master_disk_size = 50
aws_etcd_num = 3
aws_etcd_size = "t2.medium"
aws_etcd_disk_size = 50
aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium"
#Settings AWS ELB
aws_kube_worker_disk_size = 50
aws_elb_api_port = 6443
#Settings AWS NLB
aws_nlb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = "0.0.0.0"
default_tags = {
# Env = "devtest"
# Product = "kubernetes"
}

View File

@ -10,19 +10,18 @@ ${public_ip_address_bastion}
[kube_control_plane]
${list_master}
[kube_node]
${list_node}
[etcd]
${list_etcd}
[calico_rr]
[k8s_cluster:children]
kube_node
kube_control_plane
calico_rr
[k8s_cluster:vars]
${elb_api_fqdn}
${nlb_api_fqdn}

View File

@ -6,26 +6,34 @@ aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20", "10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20", "10.250.240.0/20"]
#Bastion Host
aws_bastion_size = "t2.medium"
# single AZ deployment
#aws_cidr_subnets_private = ["10.250.192.0/20"]
#aws_cidr_subnets_public = ["10.250.224.0/20"]
# 3+ AZ deployment
#aws_cidr_subnets_private = ["10.250.192.0/24","10.250.193.0/24","10.250.194.0/24","10.250.195.0/24"]
#aws_cidr_subnets_public = ["10.250.224.0/24","10.250.225.0/24","10.250.226.0/24","10.250.227.0/24"]
#Bastion Host
aws_bastion_num = 1
aws_bastion_size = "t3.small"
#Kubernetes Cluster
aws_kube_master_num = 3
aws_kube_master_size = "t3.medium"
aws_kube_master_disk_size = 50
aws_kube_master_num = 3
aws_kube_master_size = "t2.medium"
aws_etcd_num = 0
aws_etcd_size = "t3.medium"
aws_etcd_disk_size = 50
aws_etcd_num = 3
aws_etcd_size = "t2.medium"
aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium"
aws_kube_worker_num = 4
aws_kube_worker_size = "t3.medium"
aws_kube_worker_disk_size = 50
#Settings AWS ELB
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = "0.0.0.0"
aws_nlb_api_port = 6443
k8s_secure_api_port = 6443
default_tags = {
# Env = "devtest"

View File

@ -8,25 +8,26 @@ aws_cidr_subnets_public = ["10.250.224.0/20","10.250.240.0/20"]
aws_avail_zones = ["eu-central-1a","eu-central-1b"]
#Bastion Host
aws_bastion_ami = "ami-5900cc36"
aws_bastion_size = "t2.small"
aws_bastion_num = 1
aws_bastion_size = "t3.small"
#Kubernetes Cluster
aws_kube_master_num = 3
aws_kube_master_size = "t2.medium"
aws_kube_master_size = "t3.medium"
aws_kube_master_disk_size = 50
aws_etcd_num = 3
aws_etcd_size = "t2.medium"
aws_etcd_size = "t3.medium"
aws_etcd_disk_size = 50
aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium"
aws_cluster_ami = "ami-903df7ff"
aws_kube_worker_size = "t3.medium"
aws_kube_worker_disk_size = 50
#Settings AWS ELB
aws_elb_api_port = 6443
aws_nlb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = 0.0.0.0
default_tags = { }
inventory_file = "../../../inventory/hosts"

View File

@ -25,7 +25,7 @@ data "aws_ami" "distro" {
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
values = ["debian-10-amd64-*"]
}
filter {
@ -33,7 +33,7 @@ data "aws_ami" "distro" {
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
owners = ["136693071363"] # Debian-10
}
//AWS VPC Variables
@ -63,10 +63,18 @@ variable "aws_bastion_size" {
* The number should be divisible by the number of used
* AWS Availability Zones without a remainder.
*/
variable "aws_bastion_num" {
description = "Number of Bastion Nodes"
}
variable "aws_kube_master_num" {
description = "Number of Kubernetes Master Nodes"
}
variable "aws_kube_master_disk_size" {
description = "Disk size for Kubernetes Master Nodes (in GiB)"
}
variable "aws_kube_master_size" {
description = "Instance size of Kube Master Nodes"
}
@ -75,6 +83,10 @@ variable "aws_etcd_num" {
description = "Number of etcd Nodes"
}
variable "aws_etcd_disk_size" {
description = "Disk size for etcd Nodes (in GiB)"
}
variable "aws_etcd_size" {
description = "Instance size of etcd Nodes"
}
@ -83,16 +95,20 @@ variable "aws_kube_worker_num" {
description = "Number of Kubernetes Worker Nodes"
}
variable "aws_kube_worker_disk_size" {
description = "Disk size for Kubernetes Worker Nodes (in GiB)"
}
variable "aws_kube_worker_size" {
description = "Instance size of Kubernetes Worker Nodes"
}
/*
* AWS ELB Settings
* AWS NLB Settings
*
*/
variable "aws_elb_api_port" {
description = "Port for AWS ELB"
variable "aws_nlb_api_port" {
description = "Port for AWS NLB"
}
variable "k8s_secure_api_port" {

View File

@ -31,9 +31,7 @@ The setup looks like following
## Requirements
* Terraform 0.13.0 or newer
*0.12 also works if you modify the provider block to include version and remove all `versions.tf` files*
* Terraform 0.13.0 or newer (0.12 also works if you modify the provider block to include version and remove all `versions.tf` files)
## Quickstart

View File

@ -74,14 +74,23 @@ ansible-playbook -i contrib/terraform/gcs/inventory.ini cluster.yml -b -v
* `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to ssh to the nodes
* `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
* `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the kubernetes nodes on port 30000-32767 (kubernetes nodeports)
* `ingress_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to ingress on ports 80 and 443
### Optional
* `prefix`: Prefix to use for all resources, required to be unique for all clusters in the same project *(Defaults to `default`)*
* `master_sa_email`: Service account email to use for the master nodes *(Defaults to `""`, auto generate one)*
* `master_sa_scopes`: Service account email to use for the master nodes *(Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)*
* `master_sa_email`: Service account email to use for the control plane nodes *(Defaults to `""`, auto generate one)*
* `master_sa_scopes`: Service account scopes to use for the control plane nodes *(Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)*
* `master_preemptible`: Enable [preemptible](https://cloud.google.com/compute/docs/instances/preemptible)
for the control plane nodes *(Defaults to `false`)*
* `master_additional_disk_type`: [Disk type](https://cloud.google.com/compute/docs/disks/#disk-types)
for extra disks added on the control plane nodes *(Defaults to `"pd-ssd"`)*
* `worker_sa_email`: Service account email to use for the worker nodes *(Defaults to `""`, auto generate one)*
* `worker_sa_scopes`: Service account scopes to use for the worker nodes *(Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)*
* `worker_preemptible`: Enable [preemptible](https://cloud.google.com/compute/docs/instances/preemptible)
for the worker nodes *(Defaults to `false`)*
* `worker_additional_disk_type`: [Disk type](https://cloud.google.com/compute/docs/disks/#disk-types)
for extra disks added on the worker nodes *(Defaults to `"pd-ssd"`)*
An example variables file can be found in `tfvars.json`.
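The optional settings above can also be overridden at apply time instead of editing the variables file; a hedged sketch, assuming Terraform is invoked against the GCS configuration as in the earlier steps and using placeholder values:
```bash
terraform apply -var-file=tfvars.json \
  -var 'master_preemptible=true' \
  -var 'worker_additional_disk_type=pd-standard'
```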

View File

@ -1,8 +1,16 @@
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "~> 4.0"
}
}
}
provider "google" {
credentials = file(var.keyfile_location)
region = var.region
project = var.gcp_project_id
version = "~> 3.48"
}
module "kubernetes" {
@ -13,12 +21,17 @@ module "kubernetes" {
machines = var.machines
ssh_pub_key = var.ssh_pub_key
master_sa_email = var.master_sa_email
master_sa_scopes = var.master_sa_scopes
worker_sa_email = var.worker_sa_email
worker_sa_scopes = var.worker_sa_scopes
master_sa_email = var.master_sa_email
master_sa_scopes = var.master_sa_scopes
master_preemptible = var.master_preemptible
master_additional_disk_type = var.master_additional_disk_type
worker_sa_email = var.worker_sa_email
worker_sa_scopes = var.worker_sa_scopes
worker_preemptible = var.worker_preemptible
worker_additional_disk_type = var.worker_additional_disk_type
ssh_whitelist = var.ssh_whitelist
api_server_whitelist = var.api_server_whitelist
nodeport_whitelist = var.nodeport_whitelist
ingress_whitelist = var.ingress_whitelist
}

View File

@ -5,6 +5,8 @@
resource "google_compute_network" "main" {
name = "${var.prefix}-network"
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "main" {
@ -20,6 +22,8 @@ resource "google_compute_firewall" "deny_all" {
priority = 1000
source_ranges = ["0.0.0.0/0"]
deny {
protocol = "all"
}
@ -39,6 +43,8 @@ resource "google_compute_firewall" "allow_internal" {
}
resource "google_compute_firewall" "ssh" {
count = length(var.ssh_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-ssh-firewall"
network = google_compute_network.main.name
@ -53,6 +59,8 @@ resource "google_compute_firewall" "ssh" {
}
resource "google_compute_firewall" "api_server" {
count = length(var.api_server_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-api-server-firewall"
network = google_compute_network.main.name
@ -67,6 +75,8 @@ resource "google_compute_firewall" "api_server" {
}
resource "google_compute_firewall" "nodeport" {
count = length(var.nodeport_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-nodeport-firewall"
network = google_compute_network.main.name
@ -81,11 +91,15 @@ resource "google_compute_firewall" "nodeport" {
}
resource "google_compute_firewall" "ingress_http" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-http-ingress-firewall"
network = google_compute_network.main.name
priority = 100
source_ranges = var.ingress_whitelist
allow {
protocol = "tcp"
ports = ["80"]
@ -93,11 +107,15 @@ resource "google_compute_firewall" "ingress_http" {
}
resource "google_compute_firewall" "ingress_https" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-https-ingress-firewall"
network = google_compute_network.main.name
priority = 100
source_ranges = var.ingress_whitelist
allow {
protocol = "tcp"
ports = ["443"]
@ -173,7 +191,7 @@ resource "google_compute_disk" "master" {
}
name = "${var.prefix}-${each.key}"
type = "pd-ssd"
type = var.master_additional_disk_type
zone = each.value.machine.zone
size = each.value.disk_size
@ -229,19 +247,28 @@ resource "google_compute_instance" "master" {
# Since we use google_compute_attached_disk we need to ignore this
lifecycle {
ignore_changes = ["attached_disk"]
ignore_changes = [attached_disk]
}
scheduling {
preemptible = var.master_preemptible
automatic_restart = !var.master_preemptible
}
}
resource "google_compute_forwarding_rule" "master_lb" {
count = length(var.api_server_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-master-lb-forward-rule"
port_range = "6443"
target = google_compute_target_pool.master_lb.id
target = google_compute_target_pool.master_lb[count.index].id
}
resource "google_compute_target_pool" "master_lb" {
count = length(var.api_server_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-master-lb-pool"
instances = local.master_target_list
}
@ -258,7 +285,7 @@ resource "google_compute_disk" "worker" {
}
name = "${var.prefix}-${each.key}"
type = "pd-ssd"
type = var.worker_additional_disk_type
zone = each.value.machine.zone
size = each.value.disk_size
@ -326,35 +353,48 @@ resource "google_compute_instance" "worker" {
# Since we use google_compute_attached_disk we need to ignore this
lifecycle {
ignore_changes = ["attached_disk"]
ignore_changes = [attached_disk]
}
scheduling {
preemptible = var.worker_preemptible
automatic_restart = !var.worker_preemptible
}
}
resource "google_compute_address" "worker_lb" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-worker-lb-address"
address_type = "EXTERNAL"
region = var.region
}
resource "google_compute_forwarding_rule" "worker_http_lb" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-worker-http-lb-forward-rule"
ip_address = google_compute_address.worker_lb.address
ip_address = google_compute_address.worker_lb[count.index].address
port_range = "80"
target = google_compute_target_pool.worker_lb.id
target = google_compute_target_pool.worker_lb[count.index].id
}
resource "google_compute_forwarding_rule" "worker_https_lb" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-worker-https-lb-forward-rule"
ip_address = google_compute_address.worker_lb.address
ip_address = google_compute_address.worker_lb[count.index].address
port_range = "443"
target = google_compute_target_pool.worker_lb.id
target = google_compute_target_pool.worker_lb[count.index].id
}
resource "google_compute_target_pool" "worker_lb" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-worker-lb-pool"
instances = local.worker_target_list
}

View File

@ -19,9 +19,9 @@ output "worker_ip_addresses" {
}
output "ingress_controller_lb_ip_address" {
value = google_compute_address.worker_lb.address
value = length(var.ingress_whitelist) > 0 ? google_compute_address.worker_lb.0.address : ""
}
output "control_plane_lb_ip_address" {
value = google_compute_forwarding_rule.master_lb.ip_address
value = length(var.api_server_whitelist) > 0 ? google_compute_forwarding_rule.master_lb.0.ip_address : ""
}

View File

@ -27,6 +27,14 @@ variable "master_sa_scopes" {
type = list(string)
}
variable "master_preemptible" {
type = bool
}
variable "master_additional_disk_type" {
type = string
}
variable "worker_sa_email" {
type = string
}
@ -35,6 +43,14 @@ variable "worker_sa_scopes" {
type = list(string)
}
variable "worker_preemptible" {
type = bool
}
variable "worker_additional_disk_type" {
type = string
}
variable "ssh_pub_key" {}
variable "ssh_whitelist" {
@ -49,6 +65,11 @@ variable "nodeport_whitelist" {
type = list(string)
}
variable "ingress_whitelist" {
type = list(string)
default = ["0.0.0.0/0"]
}
variable "private_network_cidr" {
default = "10.0.10.0/24"
}

View File

@ -16,6 +16,9 @@
"nodeport_whitelist": [
"1.2.3.4/32"
],
"ingress_whitelist": [
"0.0.0.0/0"
],
"machines": {
"master-0": {
@ -24,7 +27,7 @@
"zone": "us-central1-a",
"additional_disks": {},
"boot_disk": {
"image_name": "ubuntu-os-cloud/ubuntu-1804-bionic-v20201116",
"image_name": "ubuntu-os-cloud/ubuntu-2004-focal-v20220118",
"size": 50
}
},
@ -38,7 +41,7 @@
}
},
"boot_disk": {
"image_name": "ubuntu-os-cloud/ubuntu-1804-bionic-v20201116",
"image_name": "ubuntu-os-cloud/ubuntu-2004-focal-v20220118",
"size": 50
}
},
@ -52,7 +55,7 @@
}
},
"boot_disk": {
"image_name": "ubuntu-os-cloud/ubuntu-1804-bionic-v20201116",
"image_name": "ubuntu-os-cloud/ubuntu-2004-focal-v20220118",
"size": 50
}
}

View File

@ -44,6 +44,16 @@ variable "master_sa_scopes" {
default = ["https://www.googleapis.com/auth/cloud-platform"]
}
variable "master_preemptible" {
type = bool
default = false
}
variable "master_additional_disk_type" {
type = string
default = "pd-ssd"
}
variable "worker_sa_email" {
type = string
default = ""
@ -54,6 +64,16 @@ variable "worker_sa_scopes" {
default = ["https://www.googleapis.com/auth/cloud-platform"]
}
variable "worker_preemptible" {
type = bool
default = false
}
variable "worker_additional_disk_type" {
type = string
default = "pd-ssd"
}
variable ssh_pub_key {
description = "Path to public SSH key file which is injected into the VMs."
type = string
@ -70,3 +90,8 @@ variable api_server_whitelist {
variable nodeport_whitelist {
type = list(string)
}
variable "ingress_whitelist" {
type = list(string)
default = ["0.0.0.0/0"]
}

View File

@ -0,0 +1,108 @@
# Kubernetes on Hetzner with Terraform
Provision a Kubernetes cluster on [Hetzner](https://www.hetzner.com/cloud) using Terraform and Kubespray
## Overview
The setup looks like following
```text
Kubernetes cluster
+--------------------------+
| +--------------+ |
| | +--------------+ |
| --> | | | |
| | | Master/etcd | |
| | | node(s) | |
| +-+ | |
| +--------------+ |
| ^ |
| | |
| v |
| +--------------+ |
| | +--------------+ |
| --> | | | |
| | | Worker | |
| | | node(s) | |
| +-+ | |
| +--------------+ |
+--------------------------+
```
The nodes use a private network for node-to-node communication and a public interface for all external communication.
## Requirements
* Terraform 0.14.0 or newer
## Quickstart
NOTE: Assumes you are at the root of the kubespray repo.
For authentication when provisioning your cluster you can use the following environment variable.
```bash
export HCLOUD_TOKEN=api-token
```
Copy the cluster configuration file.
```bash
CLUSTER=my-hetzner-cluster
cp -r inventory/sample inventory/$CLUSTER
cp contrib/terraform/hetzner/default.tfvars inventory/$CLUSTER/
cd inventory/$CLUSTER
```
Edit `default.tfvars` to match your requirements.
Run Terraform to create the infrastructure.
```bash
terraform init ../../contrib/terraform/hetzner
terraform apply --var-file default.tfvars ../../contrib/terraform/hetzner/
```
You should now have an inventory file named `inventory.ini` that you can use with kubespray.
You can use the inventory file with kubespray to set up a cluster.
It is a good idea to check that you have basic SSH connectivity to the nodes. You can do that by:
```bash
ansible -i inventory.ini -m ping all
```
You can set up Kubernetes with kubespray using the generated inventory:
```bash
ansible-playbook -i inventory.ini ../../cluster.yml -b -v
```
## Cloud controller
For better support with the cloud you can install the [hcloud cloud controller](https://github.com/hetznercloud/hcloud-cloud-controller-manager) and [CSI driver](https://github.com/hetznercloud/csi-driver).
Please read the instructions in both repos on how to install it.
## Teardown
You can teardown your infrastructure using the following Terraform command:
```bash
terraform destroy --var-file default.tfvars ../../contrib/terraform/hetzner
```
## Variables
* `prefix`: Prefix to add to all resources; if set to `""`, no prefix is added
* `ssh_public_keys`: List of public SSH keys to install on all machines
* `zone`: The zone where to run the cluster
* `network_zone`: the network zone where the cluster is running
* `machines`: Machines to provision. Key of this object will be used as the name of the machine
* `node_type`: The role of this node *(master|worker)*
* `size`: Size of the VM
* `image`: The image to use for the VM
* `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to ssh to the nodes
* `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
* `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the kubernetes nodes on port 30000-32767 (kubernetes nodeports)
* `ingress_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to kubernetes workers on ports 80 and 443

View File

@ -0,0 +1,44 @@
prefix = "default"
zone = "hel1"
network_zone = "eu-central"
inventory_file = "inventory.ini"
ssh_public_keys = [
# Put your public SSH key here
"ssh-rsa I-did-not-read-the-docs",
"ssh-rsa I-did-not-read-the-docs 2",
]
machines = {
"master-0" : {
"node_type" : "master",
"size" : "cx21",
"image" : "ubuntu-20.04",
},
"worker-0" : {
"node_type" : "worker",
"size" : "cx21",
"image" : "ubuntu-20.04",
},
"worker-1" : {
"node_type" : "worker",
"size" : "cx21",
"image" : "ubuntu-20.04",
}
}
nodeport_whitelist = [
"0.0.0.0/0"
]
ingress_whitelist = [
"0.0.0.0/0"
]
ssh_whitelist = [
"0.0.0.0/0"
]
api_server_whitelist = [
"0.0.0.0/0"
]

View File

@ -0,0 +1,52 @@
provider "hcloud" {}
module "kubernetes" {
source = "./modules/kubernetes-cluster"
prefix = var.prefix
zone = var.zone
machines = var.machines
ssh_public_keys = var.ssh_public_keys
network_zone = var.network_zone
ssh_whitelist = var.ssh_whitelist
api_server_whitelist = var.api_server_whitelist
nodeport_whitelist = var.nodeport_whitelist
ingress_whitelist = var.ingress_whitelist
}
#
# Generate ansible inventory
#
data "template_file" "inventory" {
template = file("${path.module}/templates/inventory.tpl")
vars = {
connection_strings_master = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s etcd_member_name=etcd%d",
keys(module.kubernetes.master_ip_addresses),
values(module.kubernetes.master_ip_addresses).*.public_ip,
values(module.kubernetes.master_ip_addresses).*.private_ip,
range(1, length(module.kubernetes.master_ip_addresses) + 1)))
connection_strings_worker = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s",
keys(module.kubernetes.worker_ip_addresses),
values(module.kubernetes.worker_ip_addresses).*.public_ip,
values(module.kubernetes.worker_ip_addresses).*.private_ip))
list_master = join("\n", keys(module.kubernetes.master_ip_addresses))
list_worker = join("\n", keys(module.kubernetes.worker_ip_addresses))
network_id = module.kubernetes.network_id
}
}
resource "null_resource" "inventories" {
provisioner "local-exec" {
command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
}
triggers = {
template = data.template_file.inventory.rendered
}
}

View File

@ -0,0 +1,122 @@
resource "hcloud_network" "kubernetes" {
name = "${var.prefix}-network"
ip_range = var.private_network_cidr
}
resource "hcloud_network_subnet" "kubernetes" {
type = "cloud"
network_id = hcloud_network.kubernetes.id
network_zone = var.network_zone
ip_range = var.private_subnet_cidr
}
resource "hcloud_server" "master" {
for_each = {
for name, machine in var.machines :
name => machine
if machine.node_type == "master"
}
name = "${var.prefix}-${each.key}"
image = each.value.image
server_type = each.value.size
location = var.zone
user_data = templatefile(
"${path.module}/templates/cloud-init.tmpl",
{
ssh_public_keys = var.ssh_public_keys
}
)
firewall_ids = [hcloud_firewall.master.id]
}
resource "hcloud_server_network" "master" {
for_each = hcloud_server.master
server_id = each.value.id
subnet_id = hcloud_network_subnet.kubernetes.id
}
resource "hcloud_server" "worker" {
for_each = {
for name, machine in var.machines :
name => machine
if machine.node_type == "worker"
}
name = "${var.prefix}-${each.key}"
image = each.value.image
server_type = each.value.size
location = var.zone
user_data = templatefile(
"${path.module}/templates/cloud-init.tmpl",
{
ssh_public_keys = var.ssh_public_keys
}
)
firewall_ids = [hcloud_firewall.worker.id]
}
resource "hcloud_server_network" "worker" {
for_each = hcloud_server.worker
server_id = each.value.id
subnet_id = hcloud_network_subnet.kubernetes.id
}
resource "hcloud_firewall" "master" {
name = "${var.prefix}-master-firewall"
rule {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "6443"
source_ips = var.api_server_whitelist
}
}
resource "hcloud_firewall" "worker" {
name = "${var.prefix}-worker-firewall"
rule {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "80"
source_ips = var.ingress_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "443"
source_ips = var.ingress_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "30000-32767"
source_ips = var.nodeport_whitelist
}
}

View File

@ -0,0 +1,27 @@
output "master_ip_addresses" {
value = {
for key, instance in hcloud_server.master :
instance.name => {
"private_ip" = hcloud_server_network.master[key].ip
"public_ip" = hcloud_server.master[key].ipv4_address
}
}
}
output "worker_ip_addresses" {
value = {
for key, instance in hcloud_server.worker :
instance.name => {
"private_ip" = hcloud_server_network.worker[key].ip
"public_ip" = hcloud_server.worker[key].ipv4_address
}
}
}
output "cluster_private_network_cidr" {
value = var.private_subnet_cidr
}
output "network_id" {
value = hcloud_network.kubernetes.id
}

View File

@ -0,0 +1,17 @@
#cloud-config
users:
- default
- name: ubuntu
shell: /bin/bash
sudo: "ALL=(ALL) NOPASSWD:ALL"
ssh_authorized_keys:
%{ for ssh_public_key in ssh_public_keys ~}
- ${ssh_public_key}
%{ endfor ~}
ssh_authorized_keys:
%{ for ssh_public_key in ssh_public_keys ~}
- ${ssh_public_key}
%{ endfor ~}

View File

@ -0,0 +1,44 @@
variable "zone" {
type = string
}
variable "prefix" {}
variable "machines" {
type = map(object({
node_type = string
size = string
image = string
}))
}
variable "ssh_public_keys" {
type = list(string)
}
variable "ssh_whitelist" {
type = list(string)
}
variable "api_server_whitelist" {
type = list(string)
}
variable "nodeport_whitelist" {
type = list(string)
}
variable "ingress_whitelist" {
type = list(string)
}
variable "private_network_cidr" {
default = "10.0.0.0/16"
}
variable "private_subnet_cidr" {
default = "10.0.10.0/24"
}
variable "network_zone" {
default = "eu-central"
}

View File

@ -0,0 +1,9 @@
terraform {
required_providers {
hcloud = {
source = "hetznercloud/hcloud"
version = "1.31.1"
}
}
required_version = ">= 0.14"
}

View File

@ -0,0 +1,7 @@
output "master_ips" {
value = module.kubernetes.master_ip_addresses
}
output "worker_ips" {
value = module.kubernetes.worker_ip_addresses
}

View File

@ -0,0 +1,19 @@
[all]
${connection_strings_master}
${connection_strings_worker}
[kube-master]
${list_master}
[etcd]
${list_master}
[kube-node]
${list_worker}
[k8s-cluster:children]
kube-master
kube-node
[k8s-cluster:vars]
network_id=${network_id}

View File

@ -0,0 +1,50 @@
variable "zone" {
description = "The zone where to run the cluster"
}
variable "network_zone" {
description = "The network zone where the cluster is running"
default = "eu-central"
}
variable "prefix" {
description = "Prefix for resource names"
default = "default"
}
variable "machines" {
description = "Cluster machines"
type = map(object({
node_type = string
size = string
image = string
}))
}
variable "ssh_public_keys" {
description = "Public SSH key which are injected into the VMs."
type = list(string)
}
variable "ssh_whitelist" {
description = "List of IP ranges (CIDR) to whitelist for ssh"
type = list(string)
}
variable "api_server_whitelist" {
description = "List of IP ranges (CIDR) to whitelist for kubernetes api server"
type = list(string)
}
variable "nodeport_whitelist" {
description = "List of IP ranges (CIDR) to whitelist for kubernetes nodeports"
type = list(string)
}
variable "ingress_whitelist" {
description = "List of IP ranges (CIDR) to whitelist for HTTP"
type = list(string)
}
variable "inventory_file" {
description = "Where to store the generated inventory file"
}

View File

@ -0,0 +1,15 @@
terraform {
required_providers {
hcloud = {
source = "hetznercloud/hcloud"
version = "1.31.1"
}
null = {
source = "hashicorp/null"
}
template = {
source = "hashicorp/template"
}
}
required_version = ">= 0.14"
}

View File

@ -1,16 +1,16 @@
# Kubernetes on Packet with Terraform
# Kubernetes on Equinix Metal with Terraform
Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on
[Packet](https://www.packet.com).
[Equinix Metal](https://metal.equinix.com) ([formerly Packet](https://blog.equinix.com/blog/2020/10/06/equinix-metal-metal-and-more/)).
## Status
This will install a Kubernetes cluster on Packet bare metal. It should work in all locations and on most server types.
This will install a Kubernetes cluster on Equinix Metal. It should work in all locations and on most server types.
## Approach
The terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your Packet project.
[variables.tf](variables.tf) to create resources in your Equinix Metal project.
There is a [python script](../terraform.py) that reads the generated `.tfstate`
file to generate a dynamic inventory that is consumed by [cluster.yml](../../..//cluster.yml)
to actually install Kubernetes with Kubespray.
@ -35,13 +35,13 @@ now six total etcd replicas.
## Requirements
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- Install dependencies: `sudo pip install -r requirements.txt`
- Account with Packet Host
- [Install Ansible dependencies](/docs/ansible.md#installing-ansible)
- Account with Equinix Metal
- An SSH key pair
## SSH Key Setup
An SSH keypair is required so Ansible can access the newly provisioned nodes (bare metal Packet hosts). By default, the public SSH key defined in cluster.tfvars will be installed in authorized_key on the newly provisioned nodes (~/.ssh/id_rsa.pub). Terraform will upload this public key and then it will be distributed out to all the nodes. If you have already set this public key in Packet (i.e. via the portal), then set the public keyfile name in cluster.tfvars to blank to prevent the duplicate key from being uploaded which will cause an error.
An SSH keypair is required so Ansible can access the newly provisioned nodes (Equinix Metal hosts). By default, the public SSH key defined in cluster.tfvars will be installed in authorized_key on the newly provisioned nodes (~/.ssh/id_rsa.pub). Terraform will upload this public key and then it will be distributed out to all the nodes. If you have already set this public key in Equinix Metal (i.e. via the portal), then set the public keyfile name in cluster.tfvars to blank to prevent the duplicate key from being uploaded which will cause an error.
If you don't already have a keypair generated (~/.ssh/id_rsa and ~/.ssh/id_rsa.pub), then a new keypair can be generated with the command:
@ -51,7 +51,7 @@ ssh-keygen -f ~/.ssh/id_rsa
## Terraform
Terraform will be used to provision all of the Packet resources with base software as appropriate.
Terraform will be used to provision all of the Equinix Metal resources with base software as appropriate.
### Configuration
@ -60,25 +60,25 @@ Terraform will be used to provision all of the Packet resources with base softwa
Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):
```ShellSession
cp -LRp contrib/terraform/packet/sample-inventory inventory/$CLUSTER
cp -LRp contrib/terraform/metal/sample-inventory inventory/$CLUSTER
cd inventory/$CLUSTER
ln -s ../../contrib/terraform/packet/hosts
ln -s ../../contrib/terraform/metal/hosts
```
This will be the base for subsequent Terraform commands.
#### Packet API access
#### Equinix Metal API access
Your Packet API key must be available in the `PACKET_AUTH_TOKEN` environment variable.
Your Equinix Metal API key must be available in the `PACKET_AUTH_TOKEN` environment variable.
This key is typically stored outside of the code repo since it is considered secret.
If someone gets this key, they can startup/shutdown hosts in your project!
For more information on how to generate an API key or find your project ID, please see
[API Integrations](https://support.packet.com/kb/articles/api-integrations)
[Accounts Index](https://metal.equinix.com/developers/docs/accounts/).
The Packet Project ID associated with the key will be set later in cluster.tfvars.
The Equinix Metal Project ID associated with the key will be set later in `cluster.tfvars`.
For more information about the API, please see [Packet API](https://www.packet.com/developers/api/)
For more information about the API, please see [Equinix Metal API](https://metal.equinix.com/developers/api/).
Example:
@ -101,7 +101,7 @@ This helps when identifying which hosts are associated with each cluster.
While the defaults in variables.tf will successfully deploy a cluster, it is recommended to set the following values:
- cluster_name = the name of the inventory directory created above as $CLUSTER
- packet_project_id = the Packet Project ID associated with the Packet API token above
- metal_project_id = the Equinix Metal Project ID associated with the Equinix Metal API token above
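As a hedged illustration of the two recommended settings (the cluster name and project ID are placeholders):
```ShellSession
# append the recommended values to the cluster's tfvars file
cat >> cluster.tfvars <<'EOF'
cluster_name     = "my-metal-cluster"
metal_project_id = "00000000-0000-0000-0000-000000000000"
EOF
```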
#### Enable localhost access
@ -119,7 +119,7 @@ Once the Kubespray playbooks are run, a Kubernetes configuration file will be wr
In the cluster's inventory folder, the following files might be created (either by Terraform
or manually); to prevent you from pushing them accidentally, they are listed in a
`.gitignore` file in the `terraform/packet` directory :
`.gitignore` file in the `terraform/metal` directory:
- `.terraform`
- `.tfvars`
@ -135,7 +135,7 @@ plugins. This is accomplished as follows:
```ShellSession
cd inventory/$CLUSTER
terraform init ../../contrib/terraform/packet
terraform init ../../contrib/terraform/metal
```
This should finish fairly quickly telling you Terraform has successfully initialized and loaded necessary modules.
@ -146,7 +146,7 @@ You can apply the Terraform configuration to your cluster with the following com
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/packet
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/metal
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook -i hosts ../../cluster.yml
```
@ -156,7 +156,7 @@ ansible-playbook -i hosts ../../cluster.yml
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/packet
terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/metal
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:

View File

@ -1,16 +1,15 @@
# Configure the Packet Provider
provider "packet" {
version = "~> 2.0"
# Configure the Equinix Metal Provider
provider "metal" {
}
resource "packet_ssh_key" "k8s" {
resource "metal_ssh_key" "k8s" {
count = var.public_key_path != "" ? 1 : 0
name = "kubernetes-${var.cluster_name}"
public_key = chomp(file(var.public_key_path))
}
resource "packet_device" "k8s_master" {
depends_on = [packet_ssh_key.k8s]
resource "metal_device" "k8s_master" {
depends_on = [metal_ssh_key.k8s]
count = var.number_of_k8s_masters
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
@ -18,12 +17,12 @@ resource "packet_device" "k8s_master" {
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
project_id = var.metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane", "etcd", "kube_node"]
}
resource "packet_device" "k8s_master_no_etcd" {
depends_on = [packet_ssh_key.k8s]
resource "metal_device" "k8s_master_no_etcd" {
depends_on = [metal_ssh_key.k8s]
count = var.number_of_k8s_masters_no_etcd
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
@ -31,12 +30,12 @@ resource "packet_device" "k8s_master_no_etcd" {
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
project_id = var.metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane"]
}
resource "packet_device" "k8s_etcd" {
depends_on = [packet_ssh_key.k8s]
resource "metal_device" "k8s_etcd" {
depends_on = [metal_ssh_key.k8s]
count = var.number_of_etcd
hostname = "${var.cluster_name}-etcd-${count.index + 1}"
@ -44,12 +43,12 @@ resource "packet_device" "k8s_etcd" {
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
project_id = var.metal_project_id
tags = ["cluster-${var.cluster_name}", "etcd"]
}
resource "packet_device" "k8s_node" {
depends_on = [packet_ssh_key.k8s]
resource "metal_device" "k8s_node" {
depends_on = [metal_ssh_key.k8s]
count = var.number_of_k8s_nodes
hostname = "${var.cluster_name}-k8s-node-${count.index + 1}"
@ -57,7 +56,7 @@ resource "packet_device" "k8s_node" {
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
project_id = var.metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_node"]
}

Some files were not shown because too many files have changed in this diff.