Compare commits

...

478 Commits

Author SHA1 Message Date
51d9e2f9b1 Update to Ansible v2.7.16 (#5850) 2020-03-30 06:21:54 -07:00
941aaf93fd remove duplicate ppa step and replace with crictl package download (#5455)
fix an error where the crictl package was not downloaded before install.
```
TASK [container-engine/cri-o : Install crictl] *********************************
fatal: [more-crab]: FAILED! => {"changed": false, "msg": "Source '/tmp/releases/crictl-v1.16.1-linux-amd64.tar.gz' does not exist"}
```
2020-03-30 01:11:53 -07:00
68b3ee8ac1 Add v1.15.10 and v1.15.11 hashes (#5851)
Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-03-27 23:07:53 -07:00
55da185dfe Add proxy support to containerd, improves no_proxy (#5583) (#5830)
* containerd: add proxy support

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* kubespray-defaults: add kube_service_addresses / kube_pods_subnet to no_proxy

CIDR notation in no_proxy is supported by a lot of programs/languages,
including go: https://github.com/golang/go/issues/16704
Without that, containerd cannot talk to the API server (kube_apiserver_ip),
but it should not go through an external proxy for the nodes/pods/services

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit 9f2dd09628)
2020-03-27 08:10:23 -07:00
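As an illustration of the proxy behaviour described in the commit above, here is a minimal inventory sketch; the proxy endpoint, the file path, and the extra no_proxy entry are assumptions for illustration, not values taken from this changelog:
```yaml
# Hypothetical inventory/group_vars/all/all.yml excerpt (values assumed).
# With the change above, kube_service_addresses and kube_pods_subnet are added
# to no_proxy automatically, so containerd reaches the API server and the
# cluster CIDRs directly instead of going through the external proxy.
http_proxy: "http://proxy.example.com:3128"    # assumed corporate proxy
https_proxy: "http://proxy.example.com:3128"
additional_no_proxy: ".example.internal"       # assumed extra entries beyond the auto-added CIDRs
```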
f33aafefa2 added "Flatcar", "Flatcar Container Linux by Kinvolk" for all CoreOS roles (#5607) (#5818)
Co-authored-by: Sylvain Chateau <sylvain.chateau@epitech.eu>
2020-03-27 06:06:23 -07:00
8f2ad2e2f7 Add moreutils in Dockerfile (#5840) 2020-03-27 06:02:24 -07:00
980ac28d60 kube-proxy needs conntrack (#5478) (#5828)
(cherry picked from commit 48c41bcbe7)

Co-authored-by: Damon Wang <wangdekui@inspur.com>
2020-03-26 08:52:26 -07:00
fde234fda7 Fix certificates checking when adding etcd node to existing k8s node (#5807) (#5826)
Co-authored-by: alexkomrakov <alexkomrakov@gmail.com>
(cherry picked from commit 6ad6609872)
2020-03-26 08:50:25 -07:00
de26988e05 containerd: bump to 1.2.13 (#5727) (#5832)
https://github.com/containerd/containerd/releases/tag/v1.2.11
CVE-2019-16884 / CVE-2019-17596

https://github.com/containerd/containerd/releases/tag/v1.2.12
CVE-2019-19921 / CVE-2019-16884 / CVE-2019-11253

https://github.com/containerd/containerd/releases/tag/v1.2.13

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit e2ec7c76a4)
2020-03-26 08:48:26 -07:00
173314d9f1 [2.12 branch] Backport Kubernetes 1.16.8 (#5770) (#5774)
* Backport Kubernetes 1.16.8 (#5770)

* Kubernetes 1.16.8

* Upgrade etcd to 3.3.12 (#5718)

* Use kubespray 2.11.2 as start version for the upgrade test case
2020-03-22 23:58:44 -07:00
e181530333 Backport remove dockerproject (#5682)
* Remove dockerproject org (#5548)

* Change dockerproject.org to download.docker.com

dockerproject.org was deprecated in 2017 and has gone down.

* Restore yum repo for containerd

Change-Id: I883bb512a2164a85865b1bd4fb569af0358c8c2b

Co-authored-by: Craig Rodrigues <rodrigc@crodrigues.org>

* remove legacy docker repo in kubernetes/preinstall before any packages installed (#5640)

* Remove dockerproject_.+_repo_.+ variables (#5662)

Commit 38688a4486 replaced the values of the dockerproject_.+_repo_.+
docker variables, but their new values were already defined in other
variables. This change removes the dockerproject_.+_repo_.+ docker
variables in favor of the older ones.

* Remove stale legacy yum docker repo /etc/yum.repos.d/docker.repo (#5569)

* Remove stale legacy yum docker repo /etc/yum.repos.d/docker.repo

* move task 'Remove legacy docker repo file' to pre-upgrade.yml

* fix upgrade procedure when the playbook includes the role kubernetes/preinstall
but not the role container-engine (#5695)

 Without this fix the run fails with the error: 'yum_repo_dir' is undefined

Co-authored-by: Matthew Mosesohn <matthew.mosesohn@gmail.com>
Co-authored-by: Craig Rodrigues <rodrigc@crodrigues.org>
Co-authored-by: Victor Morales <chipahuac@hotmail.com>
2020-03-05 02:34:38 -08:00
366fb084ef Ensure we always fixup kube-proxy kubeconfig (#5524) (#5558)
When running with serial != 100%, like upgrade_cluster.yml, we need to apply this fixup each time
Problem was introduced in 05dc2b3a09

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit 5e9479cded)
2020-02-20 04:15:05 -08:00
34e883e6e2 Upgrade to Kubernetes 1.16.7 (#5627) 2020-02-13 00:36:35 -08:00
22236bfab7 Upgrade to Kubernetes 1.16.6 (#5579) 2020-02-12 02:18:51 -08:00
24d28de979 Fix invalid variable in host inventory script (#5482) 2020-01-27 01:59:02 -08:00
86365d61e3 Rebase on 2.12 (#5488) 2020-01-17 02:10:56 -08:00
370a0635fa Bump nodelocaldns version to 1.15.8 (#5447)
* Bump nodelocaldns version

* Add missing upstreamsvc
2019-12-13 02:22:55 -08:00
db2ca014cb Add Helm 3.x support (#5441)
* Add Helm 3.x support

* tiller enabled when helm < 3.0.0
2019-12-12 09:24:32 -08:00
f0f8379e1b Update aws tf (#5435)
* update aws tf to function as expected

* update tf version

* update syntax for tf v0.12

* update tf version in readme

* update per tf for v0.12
2019-12-12 03:42:33 -08:00
815eebf1d7 Add wait for kubectl get ds after upgrades (#5433) 2019-12-11 11:23:55 -08:00
95cf18ff00 Re introduce CI for upgrades (#5427) 2019-12-11 04:48:06 -08:00
696fcaf391 Ensure 0644 mode for ca.crt on nodes (#5428)
Change-Id: I5e018dfaeffe314300b373aeb7ed5f59929cf4f9
2019-12-11 00:54:04 -08:00
6ff5ccc938 Use kubespray/kubespray:v2.11.0 for CI (#5363) 2019-12-11 00:10:05 -08:00
f8a18fcaca Update the release process doc (#5419) 2019-12-10 04:41:29 -08:00
961c1be53e Remove Digital Ocean CI (#5418) 2019-12-10 04:39:29 -08:00
eda1dcb7f6 Fix TF inventory script (#5424) 2019-12-10 03:41:29 -08:00
5e0140d62c Add k8s 1.15.6 hashes (#5342)
Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2019-12-10 00:45:30 -08:00
717fe3cf3a Add checksums for v1.17.0 (#5423) 2019-12-09 21:15:28 -08:00
32d80ca438 Add default value for bin_dir in recover control plane (#5396) 2019-12-09 02:54:02 -08:00
2a9aead50e Set kube_image_repo to use {{ gcr_image_repo }} (#5314)
To avoid repeating "gcr.io" again.
2019-12-09 02:52:02 -08:00
9fda84b1c9 set node label via kubectl label command (#5257)
* set various node labels via kubectl label command, not kubelet options

* remove node_labels from KUBELET_ARGS
2019-12-09 01:43:09 -08:00
42702dc1a3 Fixes for CentOS 8 (#5213)
* Fix python3-libselinux installation for RHEL/CentOS 8

In bootstrap-centos.yml we haven't gathered the facts,
so #5127 couldn't work

The minimum Ansible version to run kubespray is 2.7.8,
so ansible_distribution_major_version is defined and there is no need to default it

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* Restart NetworkManager for RHEL/CentOS 8

network.service doesn't exist anymore
 # systemctl status network
 Unit network.service could not be found.

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* Add module_hotfixes=True to docker / containerd yum repo config

https://bugzilla.redhat.com/show_bug.cgi?id=1734081
https://bugzilla.redhat.com/show_bug.cgi?id=1756473
Without this setting you end up with the following error:
 # yum install docker-ce
 Failed to set locale, defaulting to C
 Last metadata expiration check: 0:03:21 ago on Thu Sep 26 22:00:05 2019.
 Error:
  Problem: package docker-ce-3:19.03.2-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
   - cannot install the best candidate for the job
   - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
   - package containerd.io-1.2.2-3.el7.x86_64 is excluded
   - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
   - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
   - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
 (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2019-12-09 01:37:10 -08:00
40e35b3fa6 Support Openstack servergroups (#5412)
* add support for nova servergroups

* Add documentation for openstack nova servergroups

* update to TF 0.12.12 format and fix etcd

* revert for_each change

* fix variables and formatting in main.tf

* try to avoid errors

* update variable

* Update main.tf

* Update main.tf

* update all other instance resources
2019-12-09 01:15:10 -08:00
b15d41a96a Add support for Ansible 2.9 (#5361) 2019-12-05 07:24:32 -08:00
7da2083986 Add toleration for calico-typha on master (#5405)
Change-Id: Iea9a366cf6ccc4d491bfc49c5d2dba6d98f81b69
2019-12-05 06:24:32 -08:00
37df9a10ff Add CI for Amazon Linux 2 (#5410) 2019-12-05 05:44:32 -08:00
0f845fb350 Add support for Debian 10 (#5408) 2019-12-05 05:42:32 -08:00
23b8998701 Add OIDC to CI (#5407) 2019-12-05 05:40:32 -08:00
401d441c10 Fix Python code style for inventory_builder (#5362) 2019-12-05 01:48:32 -08:00
f7aea8ed89 update oidc to contain quotes (#5406) 2019-12-05 00:24:32 -08:00
a9b67d586b Add markdown CI (#5380) 2019-12-04 07:22:57 -08:00
b1fbead531 Update to TF v0.12.12 (#5267) 2019-12-04 07:20:58 -08:00
b06826e88a Fix OpenSUSE support (#5370) 2019-12-04 05:16:57 -08:00
57fef8f75e Allow customizing kubelet healthz port and bind addr (#5403)
Change-Id: I1634ba2d2d3337243ffcdea86750003a559f2576
2019-12-03 11:56:58 -08:00
f599a4a859 force other resolvers to be secondary when using systemd-resolved (#5391)
Change-Id: I33d46c7e0c5374467e22c5a652b282d1703dea85
2019-12-02 08:41:04 -08:00
e44b0727d5 Allow inventory_builder to add nodes with hostname (#5398)
Change-Id: Ifd7dd7ce8778f4f1be2016cae8d74452173b5312
2019-12-02 08:13:04 -08:00
18cee65c4b Add support for k8s v1.17.0-rc.1, remove hyperkube (#5378)
Change-Id: I3fff04f0211cd9c2e8235acaf51c3aa98abc8bb7
2019-11-28 05:41:03 -08:00
f779cb93d6 update URL for Gluster Getting Started Guide (#5390)
update URL for Gluster Getting Started Guide
2019-11-28 00:45:03 -08:00
aec5080a47 kubernetes/masters: fix task name in kubeadm setup (#5377) 2019-11-27 06:05:20 -08:00
80418a44d5 CoreDNS deployment extra tolerations (#5364)
* Add extra tolerations for coredns

* dns_extra_tolerations option

* dns_extra_tolerations

* missing starting space in comment
2019-11-27 05:49:21 -08:00
257c20f39e add 1.16.3 checksums and set new version as default (#5384) 2019-11-27 01:29:20 -08:00
050de3ab7b update fedora atomic download url (#5357) 2019-11-26 07:53:10 -08:00
f1498d4b53 fix OWNERS file (#5359)
Initially this was to fix a mis-indented approvers key. However, it turns
out that 'oilbeater' is not a member of kubernetes-sigs nor
kubernetes-incubator (the org this repo was migrated from). Thus this
OWNERS file is failing prow's validation check.

As a workaround I've opted to move them to emeritus_approver, which
isn't validated and can be used as a hint for other approvers in this
repo.
2019-11-25 17:59:11 -08:00
18d19d9ed4 containerd: update to 1.2.10 (#5341)
Lots of bug and security fixes:
https://github.com/containerd/containerd/releases/tag/v1.2.10
CVE-2019-16884 / CVE-2019-16276
https://github.com/containerd/containerd/releases/tag/v1.2.9
CVE-2019-9512 / CVE-2019-9514 / CVE-2019-9515
https://github.com/containerd/containerd/releases/tag/v1.2.8
CVE-2019-9512 / CVE-2019-9514
https://github.com/containerd/containerd/releases/tag/v1.2.7

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2019-11-22 00:09:29 -08:00
61d7d1459a Purge inactive reviewers. Promote miouge1 to approver (#5365)
Change-Id: I92b94ed9be85d47717e24a6099b4b22066717d02
2019-11-21 11:03:29 -08:00
6924c6e5a3 [FIX] fix match because trim removes leading/trailing whitespace (#5356) 2019-11-19 22:35:18 -08:00
85c851f519 scale down coredns on each master during graceful upgrade (#5344)
This fixes the scenario where masters are upgraded one at a time
and coredns gets improperly scaled back up to 2 replicas.

Change-Id: I7cc9283f40efcfd61b5813c89a5805c95d901567
2019-11-18 00:13:41 -08:00
5cd7d1a3c9 modify host.yml in README.md (#5338) 2019-11-17 18:15:40 -08:00
8b67159239 Do not run kubeadm upgrade on first deploy (#5339)
Change-Id: I68a962a9dd28c83ef07eaeaf53eb98287f38bca9
2019-11-14 02:05:34 -08:00
4f70da2731 Added Amazon Linux 2 support for deploying with docker (#5301) 2019-11-11 07:05:41 -08:00
db5040e6ea Set certs and files with kubeadm token to mode 0640 (#5325)
Change-Id: I298496e55a6889c158b2085fcadeda5e679a873e
2019-11-11 05:41:41 -08:00
97764921ed Fix calico name resolution (#5291) 2019-11-11 04:01:41 -08:00
a6853cb79d library files added to setup.cfg (#5274)
This hopefully ensures that Kubespray remains usable as a pip package.
2019-11-11 03:59:41 -08:00
8c15db53b2 Fix helm for Kubernetes 1.16.2 (#5332)
Since upgrading k8s beyond version 1.16.0, helm init no
longer works with helm < 2.16.0 due to
https://github.com/helm/helm/issues/6374

This PR closes issue #5331
2019-11-11 03:53:41 -08:00
0200138a5d Pass ingress_nginx_extra_args when deploying the nginx-ingress addon (#5321) 2019-11-11 03:51:40 -08:00
14af98ebdc Respect cri-tool supported version matrix (#5241)
| Kubernetes Version | cri-tools Version |
|--------------------|-------------------|
| 1.16.x             | v1.16.0           |
| 1.15.X             | v1.15.0           |
| 1.14.X             | v1.14.0           |
| 1.13.X             | v1.13.0           |
| 1.12.X             | v1.12.0           |
| 1.11.X             | v1.11.1           |

- Upgrade to cri-tools 1.16.1
- Add checksums for cri-tools 1.16.1
2019-11-11 03:45:42 -08:00
8a5434419b fix useradd etcd (#5281) 2019-11-11 03:27:41 -08:00
8a406be48a Fix indentation in cilium-ds.yml template (#5305) 2019-11-11 03:25:41 -08:00
feac802456 Remove default docker_options from sample (#5287) 2019-11-11 03:23:40 -08:00
076f254a67 Add cilium_tunnel_mode variable to the cilium config (#5295) 2019-11-11 03:19:42 -08:00
bc3a8a0039 Fixes issue #5299 (#5300) 2019-11-11 03:13:41 -08:00
45d151a69d containerd installation on Debian (#5326) 2019-11-11 02:41:41 -08:00
bd014c409b Skip coredns image when evaluating kubeadm images (#5327)
It will be enabled correctly in downloads

Change-Id: Ief0b7aa2a8ee2ba6a6849820802f8542584b2c04
Related-story: PRODX-1171
2019-11-09 00:51:39 -08:00
08421aad3d [FIX] fix incorrect link to downloads documentation (#5319) 2019-11-07 03:50:42 -08:00
1c25ed669c Remove unnecessary and risky reload network for resolvconf propagation (#5322)
Change-Id: I54d706f7941b4b86c4c6cd45340295577155b884
2019-11-06 10:11:52 -08:00
a005d19f6f Enable systemd-resolved DNS resolution mode (#5318)
Change-Id: If3e253a40782e03cde7fc4a91493517ae31fda17
2019-11-06 03:33:52 -08:00
471589f1f4 Scale down coredns created by kubeadm upgrade to 0 replicas (#5308)
Change-Id: I128b0f9c1acbb956d9a6c4e5510b45a36e296af7
2019-11-05 03:34:38 -08:00
b0ee1f6cc6 Deploy Cinder CSI driver to provision volumes over OpenStack (#5184)
* Deploy Cinder CSI driver to provision volumes over OpenStack

* Deploy Cinder CSI StorageClass

* Cinder CSI doc
2019-11-01 00:59:24 -07:00
79128dcd6b Removes repetition. (#5310) 2019-10-30 06:12:53 -07:00
dd7e1469e9 Fix typo of docs/dns-stack (#5307) 2019-10-30 02:00:55 -07:00
186ec13579 Fix incorrect suggestion to enable old k8s apis (#5292)
Change-Id: If965cc6aa0daaca232dcf2ca0efd649aa097497f
2019-10-30 01:58:53 -07:00
2c4e6b65d7 Raise delay and retry for rotate tokens (#5304)
Change-Id: I87844b43b9a18064e7a99567ce57c1ca1ffcc4a8
2019-10-30 01:56:52 -07:00
108a6297e9 Terraform dynamic inventory 0.12.12 (#5298)
* Update parsing of terraform state file for 0.12.12

* Resource does not seem to have a module element but instead has
provider
* Return the boolean the right way if it is already a bool, since a bool does
not have a lower method

* Remove the setting of ansible_ssh_user to root for all Packet

Not all servers in packet are accessed as root by default. CoreOS
systems use the `core` user. Removing this allows the user to specify
the remote user with an extra_var or in an ansible.cfg file.

* Default to root user for packet devices except on CoreOS

* Update TF_VERSION for packet in tf-validate-packet

Update TF_VERSION to 0.12.12 for gitlab-ci tf-validate-packet tests

* convert packet terraform files to TV_VERSION 4

* initialize terraform before copying the variable file to the top level dir
2019-10-29 00:02:42 -07:00
94d4ce5a6f Retry cleaning up calico-node container (#5302)
Change-Id: Iad27b107860213759c7ae51f0891d7e5e7c6d96b
2019-10-28 05:11:25 -07:00
81da231b1e Set cluster DNS in kubeadm config for kubelet dynamic config (#5293)
Change-Id: I23116efefe8626d361d1904fc6fb8448f66cf3c5
2019-10-25 02:23:40 -07:00
1a87dcd9b9 readme: update url to the Kubernetes documentation page for Kubespray (#5294) 2019-10-24 22:39:39 -07:00
a1fff30bd9 Generate TLS certs for calico typha (#5258)
* Generate TLS certs for calico typha

Change-Id: I3883f49c124c52d0fc5b900ca2b44e4e2ed0d707

* Add group vars note

Change-Id: I63550dfef616e884efdbd42010a90b2c04c5eb69
2019-10-17 07:02:38 -07:00
81d57fe658 set calico_datastore default value in role kubespray-default (#5259) 2019-10-17 05:58:38 -07:00
3118437e10 check on all cluster node - kubelet_max_pods <= (2 ** (32 - kube_network_node_prefix | int)) - 2 (#5279) 2019-10-17 05:48:38 -07:00
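For illustration of the check in the commit above (the prefix value is an assumed example): with kube_network_node_prefix = 24 each node gets a /24 pod CIDR, so the assertion requires kubelet_max_pods <= 2^(32-24) - 2 = 254.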
65e461a7c0 download container always runs on the download_delegate host (#5177)
* download container always runs on the download_delegate host

* also fix the check for whether a pull is required
2019-10-17 05:38:38 -07:00
c672681ce5 Revert Pull Request #5084 (#5120)
Kubespray Pull Request #5084 (https://github.com/kubernetes-sigs/kubespray/pull/5084) caused more problems than it solved due to limitations with the synchronize module. See comments on Kubespray Issues #5059 (https://github.com/kubernetes-sigs/kubespray/issues/5059) and #5116 (https://github.com/kubernetes-sigs/kubespray/issues/5116). Details from Ansible documentation: "Currently, synchronize is limited to elevating permissions via passwordless sudo. This is because rsync itself is connecting to the remote machine and rsync doesn’t give us a way to pass sudo credentials in. ... Currently there are only a few connection types which support synchronize (ssh, paramiko, local, and docker) because a sync strategy has been determined for those connection types. Note that the connection for these must not need a password as rsync itself is making the connection and rsync does not provide us a way to pass a password to the connection. ..." Thus, reverting Pull Request #5084.
2019-10-17 05:26:37 -07:00
d332a254ee install python3 instead of python2 for fedora >= 30, fixes #5056, fixes #4802 (#5111) 2019-10-17 05:04:38 -07:00
f3c072f6b6 ignore gpg files in inventory (#5209) 2019-10-16 20:22:39 -07:00
3debb8aab5 add KUBELET_VOLUME_PLUGIN to kubelet.env (#5128) 2019-10-16 20:08:38 -07:00
aada6e7e40 Add etcd_data_dir variable to the kubeadm config (#5263) 2019-10-16 19:50:39 -07:00
ac60786c6f Add support for restart handlers for control plane on crio/containerd (#5250)
* Add support for restart handlers for control plane on crio/containerd

Change-Id: I8343cc4e9df7f55b732628ed01cc6e7ea5dcee85

* Update main.yml
2019-10-16 18:58:39 -07:00
db33dc6938 Add support for Kubernetes 1.16.2 (#5272)
* Add support for Kubernetes 1.16.1

* Defaults to 1.16.1

* add 1.16.2 checksums and set new version as default

* correct 1.16.2 checksums and add 1.15.5 checksums
2019-10-16 18:34:38 -07:00
9dfb25cafd fix typo (#5275) 2019-10-16 18:26:38 -07:00
df8d2285b6 Update ingress-nginx to v0.26.1 (#5268) 2019-10-16 18:22:39 -07:00
af6456d1ea Fix selector for calico-typha deployment (#5253)
Change-Id: I79f43379cbe1c495cb416f0572e65f695d5ec2b8
2019-10-16 07:53:42 -07:00
6f57f7dd2f Update nginx image to latest (#5270) 2019-10-16 04:37:42 -07:00
fd2ff675c3 Clarify process for upgrading more than one version (#5264)
Since skipping upgrades is unsupported, I've detailed the steps for upgrading one version at a time and removed some language that indicated skipping should work
2019-10-16 04:35:41 -07:00
bec23c8a41 Add k8s v1.15.4 hashes (#5235) 2019-10-16 04:33:41 -07:00
faaff8bd72 Add RotateCertificates to kubelet config if kubelet_rotate_certificates is set. (#5152)
Signed-off-by: Robin Elfrink <robin.elfrink@eu.equinix.com>
2019-10-16 04:31:41 -07:00
8031c6c1e7 Update template for dashboard to support v2.x (#5187)
Secrets and the ConfigMap should be created before the dashboard pod runs.
2019-10-16 04:29:41 -07:00
9d8fc8caad Fix getting nameserver and search for /etc/resolv.conf with comments (#5197) 2019-10-16 04:27:40 -07:00
d1b1add176 contrib/heketi: use inventory node ip in topology instead of guessing it (#5233) 2019-10-16 04:25:42 -07:00
a51b729817 add ignore_errors to the kube-proxy deletion task (#5236)
When using cluster.yml or scale.yml to add/scale nodes in the existing
k8s cluster, the `kubeadm init` wouldn't run. As a result, kube-proxy
wouldn't be created, and therefore the kube-proxy deletion task would
fail, e.g. in the case where kube-router is used and "kube_proxy_remove"
is set to true. As a workaround, add ignore_errors to the kube-proxy
deletion task.
2019-10-16 04:23:40 -07:00
19bc79b1a6 Update cert-manager to v0.11.0 (#5269) 2019-10-16 04:21:40 -07:00
d2fa3c7796 Minimum required version of Kubernetes is v1.15 (#5266) 2019-10-15 06:59:53 -07:00
03cac2109c No need to gather facts on localhost (#5251)
It's unnecessary and breaks when running from within a docker container:
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TimeoutError: Timer expired after 10 seconds
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "/usr/sbin/udevadm info --query property --name /dev/mapper/vg00-root", "msg": "Timer expired after 10 seconds", "rc": 257}
```
2019-10-15 06:57:53 -07:00
932935ecc7 fix wrong path in include install_host.yml in etcd role (#5256) 2019-10-13 18:16:34 -07:00
e01118d36d Fix issue in remove-node/post-remove task (#5185) (#5186) 2019-10-10 05:17:43 -07:00
dea9304968 Enable openstack_cacert to be either file or base64 string (#5243) 2019-10-09 02:19:49 -07:00
2864e13ff9 Reset between kubeadm secondary control plane join attempts (#5240)
Change-Id: Ic9425bf90552d7e3d42b02409af9773d99376384
2019-10-08 00:15:12 -07:00
a8c5a0afdc Make it possible to disable access_ip (openstack provider) (#5239)
* Add a variable to disable access_ip

* Document the use of use_access_ip
2019-10-07 04:09:09 -07:00
0ba336b04e install helm client separately (#5212) 2019-10-04 05:14:02 -07:00
89f1223f64 Fix selector workaround for helm install (#5237)
Change-Id: I826337b59814674c3feb4cd6a4904d9d53e01652
2019-10-03 23:41:56 -07:00
8bc0710073 clean up document (#5214) 2019-10-02 04:41:07 -07:00
fb591bf232 Apply workaround for NetworkManager and calico (#5230)
Change-Id: I5cb2bdf1a57707c1b8da3e5ac0c80e5c353480a4
2019-10-02 04:37:07 -07:00
a43e0d3f95 Switch to Kubernetes v1.16.0 (#5189)
* Switch to Kubernetes v1.16.0

Change-Id: I5d6a9528b2d443750fc5e031aff15ad3ffead158

* Fix download localhost cached file path

Change-Id: I65e79b70e3d1b37265ebc60f41b460cf4b0a0d47

* fix kubeadm etcd for v1.16

Change-Id: I6888a00fd48b530a38b0b31c4095492476af42d2

* disable tf packet jobs

Change-Id: I075c4666547fdea4c50ec04864f38e2cfaa79154

* Disable contiv packet jobs. Fix kube-router

Change-Id: I3170e8789e60711d4cee8faf65f2094480b79b8d

* bump sonobuoy version

Change-Id: Ib946905629c7c53ed88f08fb2f41c454457a0097
2019-10-02 02:21:07 -07:00
f16cc1f08d fix digital rebar url (#5223) 2019-09-30 01:05:38 -07:00
8712bddcbe Add docs for TF vars introduced PR 4239 (#5201) 2019-09-26 04:31:07 -07:00
99dbc6d780 clean-up doc,spelling mistakes (#5206) 2019-09-26 04:25:08 -07:00
75e4cc2fd9 Updated kubectl.sh (#5156)
The script is not usable unless you are in the '.vagrant/provisioners/ansible/inventory/artifacts' folder.
This update makes it usable from anywhere.
2019-09-26 04:23:07 -07:00
81cb302399 MetalLB: fail if kube_proxy_strict_arp is false (#5180)
When using IPVS, kube_proxy_strict_arp = true is required
https://github.com/danderson/metallb/issues/153#issuecomment-518651132

Add kube_proxy_strict_arp to inventory/sample
2019-09-26 04:21:06 -07:00
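A minimal sketch of the setting referenced in the commit above, assuming the usual Kubespray cluster group_vars file (the exact path is an assumption):
```yaml
# Hypothetical inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml excerpt.
kube_proxy_mode: ipvs
# Required by MetalLB when kube-proxy runs in IPVS mode; per the commit above,
# the playbook now fails early if this is left false while MetalLB is enabled.
kube_proxy_strict_arp: true
```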
3bcdf46937 fix-up some spelling mistakes (#5202) 2019-09-25 23:27:08 -07:00
1cf6a99df4 generate kubeadm download image list with options useHyperKubeImage (#5203) 2019-09-25 18:03:06 -07:00
a5d165dc85 Customize host root volume size by Terraform provisioning (#4239)
* print hostnames (#5110)

Terraform - customize hosts root volume size

disable block_device by default

Terraform formatting fix

Fixed typos

* fix resources after rebase

* Fix glusterfs image issue
2019-09-25 05:17:59 -07:00
f18e77f1db Blocksize for calico default pool should be configurable (#5198) 2019-09-25 04:44:00 -07:00
2fc02ed456 fix-typo (#5199) 2019-09-25 04:04:00 -07:00
9db61c45ed Upgrade nodelocaldns to 1.15.5 (#5191) 2019-09-22 20:13:22 -07:00
8cb54cd74d fix broken scale procedure: (#5193)
- do not run etcd role when etcd_kubeadm_enabled == true
- remove default value 'systemd' for cgroup driver in containerd role.
  this value overrides the autodetected value in kubelet_cgroup_driver_detected from docker info
2019-09-22 01:07:22 -07:00
a3f1ce25f8 Add support for k8s v1.14.6 (#5182) 2019-09-18 02:53:30 -07:00
3c7f682e90 Parameterize gcr, quay, and docker image repo defines (#5146)
This makes it easy to override the gcr, quay, and docker repos with
mirror repos in countries like China, where access to the default
registries is blocked or unstable.
2019-09-18 02:49:30 -07:00
8984096f35 use hyperkubeimage to run controlplane containers (#5178) 2019-09-17 18:33:28 -07:00
1ce7831f6d Update main.yml (#5166) 2019-09-17 05:36:24 -07:00
46f6c09d21 Fixes issue #5160 (#5171) 2019-09-16 23:42:22 -07:00
6fe2248314 Use more native way to update kubeconfigs using kubeadm (#5165)
Change-Id: I1076b418f85a26d9896be69910052128afc51cee
2019-09-13 03:40:29 -07:00
cb4f797d32 Fix macro on local_volume_provisioner (#5168)
mydict.keys() should be converted to list,
otherwise it causes errors in loop iteration.

Remove extra space after class name, which broke configmap.

Also allow setting the reclaimPolicy property.
2019-09-13 00:50:33 -07:00
eb40ac163f Move cri_socket var to kubespray-defaults (#5149) 2019-09-10 12:30:55 -07:00
27ec548b88 Add support for k8s v1.16.0-beta.2 (#5148)
Cleaned up deprecated APIs:
apps/v1beta1
apps/v1beta2
extensions/v1beta1 for ds,deploy,rs

Add workaround for deploying helm using incompatible
deployment manifest.
Change-Id: I78b36741348f47a999df3841ee63cf4e6f377830
2019-09-10 12:06:54 -07:00
637f09f140 Fix ansible task titles (#5154)
* Fix ansible task titles for CRI connection tasks

* Fix Azure subscription ID check task title
2019-09-10 01:34:54 -07:00
9b0f57a0a6 Adjust endpoints for kube-proxy,controller,scheduler to proper ip (#5150)
Change-Id: I5aa009358bee7035922b5a10327997e47c9ba434
2019-09-09 10:33:20 -07:00
5f02068f90 Documenting Terraform variable az_list explicitly (#5132)
* added az_list to README section

* added az_list to cluster.tfvars
2019-09-09 07:41:19 -07:00
7f74906d33 Make haproxy/nginx client timeout configurable (#5140)
Change-Id: I61319a06eb33d9fc868e19941924f387088b856b
2019-09-05 00:32:51 -07:00
56523812d3 print hostnames (#5110) 2019-08-29 05:07:57 -07:00
602b2d198a Cleanup: fix typo in doc (#5105)
Signed-off-by: Guangming Wang <guangming.wang@daocloud.io>
2019-08-28 02:35:15 -07:00
4d95bb1421 Use python3-libselinux on RHEL8/Centos8 (#5127)
* Use python3-libselinux on RHEL8/Centos8

* The fact ansible_facts.distribution_major_version is not present on older Ansible versions.
Default it to 0 when not present and use libselinux-python as the package to keep the current
default behaviour.
2019-08-28 02:33:15 -07:00
184ac6a4e6 Parse calico nodes as json (#5114) 2019-08-27 10:16:42 -07:00
10e0fe86fb remove unimplemented custom_flags vars, document the extra_args vars (issue 4352) (#5108) 2019-08-23 01:21:18 -07:00
7e1645845f Allow calico settings to be modified (#5101)
Previous logic used calicoctl.sh create --skip-exists, which
allowed setting initial values but did not permit changes.
2019-08-23 00:01:19 -07:00
f255ce3f02 Added CRI-O support for ubuntu (#4629)
* Added CRI-O support for ubuntu

* implemented feedback

* set crictl to fixed version

* Fix errors during rebasing

* Fix linting errors
2019-08-22 03:54:31 -07:00
07ecef86e3 Replace fetch with synchronize due to memory error (#5084)
Fix for Kubespray Issue #5059 (https://github.com/kubernetes-sigs/kubespray/issues/5059). There is a known issue with the 'fetch' module that will sometimes lead to it failing with a memory error. See ansible/ansible#11702 (https://github.com/ansible/ansible/issues/11702). I encountered this issue with the "Copy kubectl binary to ansible host" task in kubespray/roles/kubernetes/client/tasks/main.yml, and it caused my entire deployment to error out (see "Output of ansible run" above). Replacing 'fetch' with 'synchronize' fixes this issue.
2019-08-22 02:40:32 -07:00
3bc4b4c174 Use raw module for bootstrap-debian.yml (#5061)
Updated Openstack to terraform 0.12 (#5062)

* update openstack to terraform 0.12(.5)

* replace cluster.tf with cluster.tfvars

* update README.md to terraform 0.12

* update Openstack CI tests to use terraform 0.12

* specify terraform version in openstack README

* gitlab CI to copy cluster.tfvars in case of openstack provider

* The terraform/openstack dynamic inventory can read
tfstate v4 (generated by terraform 0.12) and convert it internally
to v3 (as generated by terraform 0.11.x).

Additionally the script has been updated to Python 3.
2019-08-22 01:46:31 -07:00
da089b5fca Update CRI-O in CentOS (#4582)
According to their compatibility matrix[1] the 1.11.5 version seems to
be deprecated. This change updates the CentOS repository reference.

[1] https://github.com/cri-o/cri-o#compatibility-matrix-cri-o---kubernetes-clusters
2019-08-22 01:16:32 -07:00
d4f094cc11 Add localhost to ansible.limit. (#5037)
Upgrade to Kubernetes 1.15.3 (#5091)
2019-08-22 01:14:32 -07:00
494a6512b8 fix bug: run Copy image to ansible host cache on download_delegate host (#5094)
* run 'task download_container | Copy image to ansible host cache' with synchronize on download_delegate host

* try to run the 'copy file to ansible host' task on all inventory hosts, not only on the first random host
2019-08-21 23:38:30 -07:00
3732c3a9b1 terraform/openstack: add network_dns_domain variable (#5093)
This allows the user to optionally specify the dns_domain attribute on the
generated internal kubernetes network.
2019-08-21 05:09:15 -07:00
f6a63d88a7 Allow to configure strict ARP on kube-proxy (#5092) 2019-08-20 18:21:17 -07:00
86cc703c75 Upgrade to Kubernetes 1.15.3 (#5091) 2019-08-20 02:05:32 -07:00
4dba34bd02 add cinder max attached volumes (#5089) 2019-08-19 23:45:32 -07:00
b0437516c1 Kube-router annotate.yml: Use group 'k8s-cluster' instead of 'all' (#5087) (#5088) 2019-08-19 04:53:29 -07:00
da015e0249 Updated Openstack to terraform 0.12 (#5062)
* update openstack to terraform 0.12(.5)

* replace cluster.tf with cluster.tfvars

* update README.md to terraform 0.12

* update Openstack CI tests to use terraform 0.12

* specify terraform version in openstack README

* gitlab CI to copy cluster.tfvars in case of openstack provider

* The terraform/openstack dynamic inventory can read
tfstate v4 (generated by terraform 0.12) and convert it internally
to v3 (as generated by terraform 0.11.x).

Additionally the script has been updated to Python 3.
2019-08-18 01:30:05 -07:00
554857da97 add cluster name into filter if specified in environment variable (#5085) 2019-08-16 19:28:08 -07:00
9bf23fa43b cleanup vars.md typos words (#5086)
Signed-off-by: Guangming Wang <guangming.wang@daocloud.io>
2019-08-16 19:12:09 -07:00
42287066d3 fix word errors in downloads.md (#5083)
Signed-off-by: Guangming Wang <guangming.wang@daocloud.io>
2019-08-15 20:10:33 -07:00
a1ff1de975 fix openstack_cacert conditional (#5078) 2019-08-15 05:50:34 -07:00
1bfbc5bbc4 remove resource-container default value for kube-proxy (#4994) 2019-08-15 05:30:33 -07:00
c5b4d3ceaa upgrade Helm to 2.14.3 (#5075)
Signed-off-by: Bart Verwilst <bart@verwilst.be>
2019-08-15 04:34:33 -07:00
8fc9c5d025 Upgrade ingress nginx to 0.25.1 (#5081) 2019-08-15 04:14:34 -07:00
42bba66c02 Disable moderator (#5069)
* Test the CI

* Disable CI moderator
2019-08-15 03:02:34 -07:00
53bc80bb59 Ingress nginx (#5066)
* remove svc-default-backend

* update ingress-nginx clusterrole
2019-08-15 02:34:33 -07:00
771ce96e6d Set initial kubeadm token if specified in kubeadm init (#5057)
Change-Id: I7fd94ec6d195af60d237b3cfe91668ca1f707d26
2019-08-15 02:26:33 -07:00
fc456ff0cd move kube-ovn images to dockerhub (#5063) 2019-08-14 04:02:24 -07:00
b4f70db878 Fix broken containerd pinning on Ubuntu (#5072) 2019-08-13 19:26:23 -07:00
5707f79b33 Allow to configure number of kube-masters (#5073)
Change-Id: Ia3f30a1216b3ea063cd72c839ef6dff753cf10c6
2019-08-13 18:52:24 -07:00
0a2f4edfc6 Always download coredns images with kubeadm (#5071)
Fixes a situation when using manual mode, because it
tries to download coredns v1.3.1 from the same
image repository that the kubernetes images are
downloaded from.

Change-Id: Ibbec8a72c8162ce8befa74e2013a268737ea5f8a
2019-08-13 08:53:43 -07:00
56fa46716e Add missing coredns tag. (#5054) 2019-08-09 02:29:27 -07:00
b74abe56fd Bump minimum K8S version to 1.14 (#5055)
Signed-off-by: Craig Rodrigues <craig@portworx.com>
2019-08-09 02:13:26 -07:00
62aecd1e4a multus | fix use last version (#5041) 2019-08-08 23:05:25 -07:00
973afef96e Fix variable for rbd_provisioner_user_secret (#5042)
* Update main.yml

* fix dead link 404
2019-08-08 20:03:25 -07:00
a235605d2c go to k8s 1.15.2, update nodelocaldns to latest bugfix release (#5048) 2019-08-08 19:49:25 -07:00
023108a733 Refactor calico route reflector to run in k8s cluster (#4975)
* Refactor calico-rr to run in k8s cluster with taint

Change-Id: I75a3169ff5b36ce8302fc7ef1c32d3eb697b5afa

* add preinstall checks

* rework calico/rr role

Change-Id: I2f0a7e6cb77cf91ad4a615923680760d2e5d9ca8

* add empty calico-rr group

Change-Id: I006c0a60db9b72d02245bf8fdfabcf982144a5ad
2019-08-08 07:37:22 -07:00
75d1be8272 Fix check for removing etcd member (#5051)
Change-Id: Ib27d051ff111f813097a9b33a86465a2a30a6db0
2019-08-07 08:26:51 -07:00
a44235d11b Refactor remove node to allow removing dead nodes and etcd members (#5009)
Change-Id: I1c59249f08f16d0f6fd60df6ab61f17a0a7df189
2019-08-07 04:46:50 -07:00
7abf6a6958 Allow etcd member join by checking cluster health only on first etcd (#5032)
Change-Id: I9cc01cef3a437893225e2d9f58495826bbce7be9
2019-08-07 04:44:50 -07:00
0d0b1fdf82 Ansible version bump for CVE-2019-10156 (#5050) 2019-08-06 23:56:50 -07:00
b710c72f04 Add ability to setup virtual ip for ingress-controller (#5044) 2019-08-06 19:24:50 -07:00
678c316d01 Optionally refresh kubeadm token every time (#5045)
Change-Id: I278cb14aa93abf20160cc001f69e2f472504e6d8
2019-08-06 07:35:55 -07:00
bc6de32faf Upgrade Cilium network plugin to v1.5.5. (#5014)
* Needs an additional cilium-operator deployment.
  * Added option to enable hostPort mappings.
2019-08-06 01:37:55 -07:00
7cf8ad4dc7 Optionally refresh kubeadm token every time (#5043)
Change-Id: I278cb14aa93abf20160cc001f69e2f472504e6d8
2019-08-06 00:59:53 -07:00
02ec72fa40 Fix commands for using experimental kubeadm control plane (#5006) 2019-08-05 07:31:50 -07:00
d22634a597 Refactor containerd ubuntu setup and remove redundant tasks (#5015) 2019-08-05 07:29:48 -07:00
4132cee687 Fix mistakes in downloads docs (#5038) 2019-08-04 18:25:47 -07:00
f3df0d5f4a Always create bash_completion.d folder (#5039) 2019-08-04 18:15:48 -07:00
1d285e654d Fix small typo (#5029) 2019-08-01 06:52:16 -07:00
dc6ad64ec7 [contrib/heketi]: tear down additions and fixes. Heketi updated to version 9 (#5027)
* lvm packages removal during tear down skipped by default
  * lvm utils execution PATH fixed for CentOS/RH
  * Heketi updated to the latest version 9

Signed-off-by: Vitaliy Dmitriev <vi7alya@gmail.com>
2019-08-01 04:00:16 -07:00
92bfcf0467 Add CoreDNS endpoint_pod_names option (#5012) 2019-07-31 11:26:15 -07:00
54b1fe83f3 Add an option to reserve resources for OS system daemons (#5007) 2019-07-31 11:24:15 -07:00
5337cff179 Add packet_ubuntu18-flannel-containerd (#5004) 2019-07-31 11:22:14 -07:00
1be788f785 add Kube-OVN cni to kubespray (#5020) 2019-07-30 20:10:20 -07:00
8afbf339f7 fix broken link (#5023) 2019-07-30 19:18:22 -07:00
8c935dfb50 Update CoreDNS to 1.6.0 (#5021) 2019-07-30 18:58:21 -07:00
66c5ed8406 Update critools to v1.15.0 (#5016) 2019-07-30 12:04:09 -07:00
4087e97505 Additional files and dirs to remove when running reset (#5000) 2019-07-30 12:02:08 -07:00
da50ed0936 move flexvolume plugin directory creation to preinstall (#4999)
* move flexvolume plugin directory creation to preinstall

* changes per pr feedback
2019-07-30 12:00:10 -07:00
fbbfff3795 fix broken ubuntu containerd engine (#5002) 2019-07-30 11:58:11 -07:00
fb9103acd3 Update calico-typha deployment to address v3.7.x changes (#5003)
* Update calico-typha deployment to address v3.7.x changes

So that calico-typha works for Calico v3.7.x.

* Apply changes for v3.7.x only.
2019-07-24 09:12:16 -07:00
49d921cf91 Restart canal after scale or upgrade. Just like PR#4531, but for canal (#4992) 2019-07-22 00:50:53 -07:00
fe29c97ae8 add ansible_hostname and ansible_fqdn to apiserver_sans (#4990) 2019-07-22 00:48:53 -07:00
2abb6c8689 update to kubernetes 1.15.1 (#4989)
* update to kubernetes 1.15.1

* Revert to sonobuoy 0.15.0

* update test timeout from 3 to 5 minutes
2019-07-21 12:24:51 -07:00
a3ca441998 Remove unused handlers from Flannel CNI (#4984)
* Only reload docker when is_atomic for Flannel

* Remove unused handlers from Flannel CNI
2019-07-21 00:16:54 -07:00
9cf503acb1 configure docker_options directly with template (#4912) 2019-07-21 00:12:53 -07:00
1cbdd7ed5c Fixup etcdctl download for etcd kubeadm mode (#4991)
Change-Id: I8d8e59a97823390f40e8810905ca52430329f040
2019-07-21 00:08:53 -07:00
428e52e0d1 Fix calico handler for containerd (#4985)
The crictl tool must be used to delete containers in the case of a containerd
deployment
2019-07-16 08:35:24 -07:00
70dc222719 Upgrade local volume provisioner to v2.3.2 (#4983) 2019-07-16 05:27:26 -07:00
69f796f0c7 use the locally deployed kubectl binary within the kubectl.sh helper script (#4311) 2019-07-16 02:23:25 -07:00
5826f0810c Check all apt config files for configured proxies (#4972) 2019-07-16 01:41:24 -07:00
de9443a694 remove unused code (#4981) 2019-07-16 01:39:24 -07:00
99c5f7e013 add k8s_external plugin to CoreDNS configuration (#4704) 2019-07-16 00:53:23 -07:00
d9dedc2cd5 Let Premoderator script add labels (#4978)
* Let Premoderator script add labels

* Fix JQ error

* Minor fixes

* Debug patch label output

* Try again

* Try again

* Try again

* Try again

* Try again

* Minor cleanup
2019-07-16 00:43:23 -07:00
23ae6027ab remove support for calico v2.x (#4974)
* Remove support for calico below version v3.0.0

Change-Id: If8fe3036b9e054901a8b2c48516eff1e1271970f

* Update main.yml

* fixup node peering

Change-Id: Ifac4d363deba826f0c80e390ce80a28df9827323

* fixups

Change-Id: Ic35417330af6741962003b3930604393c90804d1

* fixups

Change-Id: I0ea82d634bb0c81d9b7dc50569c70988bc8d3a3b
2019-07-15 07:47:09 -07:00
781b5691c9 prep_kubeadm_images: parse repo and tag (#4976) (#4977) 2019-07-15 03:15:09 -07:00
fd9bbcb157 Enable nodes to run calicoctl for calico kdd mode (#4956)
* Enable nodes to run calicoctl

per-node tasks require waiting for calico-node to be applied

Change-Id: Ibe1076b7334a2da0332f2dd766fde0c3f172d1f2

* cleanup tasks that should run on master

Change-Id: I43a837879ef41596f14657ecd7f813899b6865ae

* Switch run_once calico logic to just run on first master

Change-Id: I6893711e354f63c5e1eaf6ac2e23d9a6347a555d
2019-07-15 01:59:06 -07:00
e0410661fa azure loadbalancer vars generation (#4892) 2019-07-15 01:27:06 -07:00
8ef754678a Ansible 2.8.x note (#4908)
Update README.md to link to the open issue that shows Ansible 2.8.x doesn't work with Kubespray.  The requirements.txt file is already fixed to 2.7.8 so only the README needed updating, I think.
2019-07-15 01:05:07 -07:00
161a8f55fa Update CoreDNS to 1.5.2 (#4970) 2019-07-15 00:59:06 -07:00
7481cc31e1 Update Weave CNI to 2.5.2 (#4969) 2019-07-15 00:57:06 -07:00
b15b6e834f fix parsing refresh of kubeadm cert key (#4971)
* fix parsing refresh of kubeadm cert key

Change-Id: I4de2a1df6498790a80351b4bc7d88e6c9e470358

* Update kubeadm-secondary-experimental.yml
2019-07-15 00:45:06 -07:00
76640cf1a1 update docker-ce to 18.09.7 (#4973) 2019-07-14 22:59:04 -07:00
374ea7b81d update supported calico version to 3.7.3 in README (#4966) 2019-07-12 03:31:05 -07:00
46bef931e9 Fix images info logic for containerd (#4965)
As the crictl tool is used to download images, it must also be used to gather
image info
2019-07-12 03:29:07 -07:00
a36e9ae690 Add checksums for k8s 1.14.4 (#4959) 2019-07-11 23:19:05 -07:00
728155a2a1 Support for Oracle Linux (#3655)
Fixed Issue #1032

test case for OEL7 AIL with kubeadm

Add packet CI stuff for oracle 7
2019-07-11 23:17:05 -07:00
cdf9a9f4fc Generate certificate key before kubeadm control plane config (#4964) 2019-07-11 05:30:54 -07:00
29307740dd Enable containerd to deploy vanilla containerd package (#4951)
* Enable containerd to deploy vanilla containerd package

Fixes kubeadm references to CRI socket for containerd
Fixes download role cache feature to work with containerd

Change-Id: I2ab8f0031107e2f0d1a85c39b4beb66f08509a01

* use containerd for flannel-addons job

Change-Id: Ied375c7d65e64a625ffbd995ff16f2374067dee6

* add containerd vars

Change-Id: Ib9a8a04e501c481a86235413cbec63f3672baf91

* fixup vars

Change-Id: Ibea64e4b18405a578b52a13da100384582aa24c2

* more fixes

* fix rh repo

Change-Id: I00575a77cfb7b81d6095db5d918a52023c8f13ba

* Adjust helm host install for containerd
2019-07-10 23:46:54 -07:00
a038d62644 doc-fix: azure.md azure cli parameter update (#4936) 2019-07-10 05:46:25 -07:00
20c7e31ea3 Add calico 3.7.3 support (#4953)
* Add calico 3.7.3 support

* add calico_datastore variable to policy controller role

* add missing clusterrole rules for calico policy controller

* disable calico kube controller when kdd mode is used for versions < 3.6
2019-07-09 12:42:28 -07:00
65065e7fdf Default kubeadm_images to empty dict instead of set_fact (#4957)
Change-Id: I9ef52232176fe8e2a99b4f09ae05e1451f7ad1ff
2019-07-09 12:07:30 -07:00
352297cf8d Fixup deploy of kubeadm etcd for Kubernetes v1.15.0 (#4952)
* Fixup deploy of kubeadm etcd for Kubernetes v1.15.0

Change-Id: If42c2c75c4d278ba9475ebf76c243f3e6ee4d02e

* undo renaming cloud config file

Change-Id: Iafbd27c3887d6a2a6d0819c711f150ecf70c515d
2019-07-09 15:41:59 +03:00
a67a50f9c0 nodelocaldns: allow to set health port, switch to 9254 by default (#4902)
8080 is a pretty common port; using nodelocaldns_ip:8080 still
prevents node processes or hostNetwork=true processes from binding to *:8080,
so switch to 9254 by default (the prometheus port is 9253)

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2019-07-09 00:52:01 -07:00
324bc41097 Add support for Docker plugins (#4934)
* Add support for Docker plugins

* support multiple Docker plugins using looped include

* fix yamllint error
2019-07-08 06:44:35 -07:00
c81b443d93 Fix order of names in /etc/hosts (#4940)
Configure fqdn properly
2019-07-08 06:08:34 -07:00
dc16ab92f4 fix for calico with kdd datastore (#4922)
* fix for calico with kdd datastore

* remove AS number from daemonset

* revert changes to canal

* additional fixes for kdd datastore in calico
2019-07-08 12:20:03 +03:00
53032a6695 Use kubespray-defaults in remove-node role (#4946) 2019-07-05 03:28:34 -07:00
d90a5f291b Using also uppercase proxy env variables (#4910) 2019-07-02 18:13:12 -07:00
3b7791501e Adding "-F /dev/null" to load null SSH config file. (#4933) 2019-07-02 01:53:08 -07:00
f2b8a3614d Use K8s 1.15 (#4905)
* Use K8s 1.15

* Use Kubernetes 1.15 and use kubeadm.k8s.io/v1beta2 for
  InitConfiguration.
* bump to v1.15.0

* Remove k8s 1.13 checksums.

* Update README kubernetes version 1.15.0.

* Update metrics server 0.3.3 for k8s 1.15

* Remove less than k8s 1.14 related code

* Use kubeadm with --upload-certs instead of --experimental-upload-certs due to deprecation

* Update dnsautoscaler 1.6.0

* Skip certificateKey if it's not defined

* Add kubeadm-controlplane.v2beta2 for k8s 1.15 or later

* Support kubeadm control plane for k8s 1.15

* Update sonobuoy version 0.15.0 for k8s 1.15
2019-07-02 01:51:08 -07:00
e89b47c7ee Add nginx stub metrics if health check enabled (#4938)
Change-Id: Iac90beef20e63fb4a539f91836231469c573f402
2019-07-01 13:38:37 -07:00
2aa66eb12d Default to refreshing kubeadm etcd key (#4931)
Change-Id: Icc0176773b6d581c43647de433214079440d7321
2019-06-30 03:37:22 -07:00
4c8b93e5b9 containerd support (#4664)
* Add limited containerd support

Containerd support for Ubuntu + Calico

* Added CRI-O support for ubuntu

* containerd support.

* Reset  containerd support.

* fix lint.

* implemented feedback

* Change task name cri xx instead of cri-o in reset task and timeout condition.

* set crictl to fixed version

* Use docker-ce's container.io package for containerd.

* Add check containerd is installable or not.

* Avoid stop docker when use containerd and optimize retry for reset.

* Add config.toml.

* Fixed containerd for kubelet.env.

* Merge PR #4629

* Remove unused ubuntu variable for containerd

* Polish code for containerd and cri-o

* Refactoring cri socket configuration.

* Configurable conmon.

* Remove unused crictl/runc download

* Now crictl and runc is downloaded by common crictl.yml.

* fixed yamllint error

* Fixed broken files caused by a conflict.

* Remove commented line in config.toml

* Remove readded v1.12.x version

* Fixed broken set_docker_image_facts

* Fix yamllint errors.

* Remove unused apt source

* Fix crictl could not be installed

* Add containerd config from skolekonov's PR #4601
2019-06-29 14:09:20 -07:00
216631bf02 Repair kube_proxy_exclude_cidrs (#4909) 2019-06-28 00:39:37 -07:00
c7f3123e28 kubeadm_discovery_address should not contain proto (#4930) 2019-06-28 00:37:37 -07:00
f599c2a691 add macvlan cni to kubespray (#4901)
* add macvlan cni to kubespray

* macvlan: lint yaml files and fix sample config file

* macvlan: add OWNERS file

* add macvlan to README

* macvlan : CI first shoot

* macvlan : CI add full masquerade

* delegate retrieve pod cidr to master only

* macvlan: add config for CI

* macvlan: add netchecker deployment
2019-06-28 00:35:38 -07:00
bc7d1f36ea Remove wasteful extra gather facts step (#4928)
Ansible will gather facts on the preinstall/download role
automatically at the start of that play.
2019-06-27 06:25:21 -07:00
80fa294a31 Disable redundant CI test cases (#4918)
Change-Id: I1991bca8368adc20832d2bb15644411653446b51
2019-06-27 04:49:22 -07:00
465dfd68bc Fix empty kube_override_hostname in apiserver_sans (#4916)
kubernetes/master role defines this value as an empty string
when using a cloud provider, not undefined. The check was updated
accordingly.

Change-Id: I58dc31ef4fd568a717a6753eb89ca687933018ae
2019-06-25 08:00:37 -07:00
73f45fbe94 Revert "Filter undefined SANs for apiserver cert (#4913)" (#4914)
This reverts commit d270678bda.
2019-06-25 06:56:00 -07:00
d270678bda Filter undefined SANs for apiserver cert (#4913)
Change-Id: I37442fb095fb4217f67f74744ad07c1d5d8229ea
2019-06-25 05:54:36 -07:00
de028814e5 Upgrade to etcd version 3.3.10 per 1.14 release notes. (#4898)
* Upgrade to etcd version 3.3.10 per 1.14 release notes.

* Update etcd binary checksums
2019-06-24 01:27:55 -07:00
b5406b752d Add kube_override_hostname to kubeadm certs. (#4903) 2019-06-23 23:19:56 -07:00
6025981ceb Allow skip kubeadm image prep but install kubeadm (#4904)
Change-Id: I744e9a192cd863a1ce22fbd16d217c5dfb16750c
2019-06-23 23:17:56 -07:00
4348e78b24 Enable kubeadm etcd mode (#4818)
* Enable kubeadm etcd mode

Uses cert commands from kubeadm experimental control plane to
enable non-master nodes to obtain etcd certs.

Related story: PROD-29434

Change-Id: Idafa1d223e5c6ceadf819b6f9c06adf4c4f74178

* Add validation checks and exclude calico kdd mode

Change-Id: Ic234f5e71261d33191376e70d438f9f6d35f358c

* Move etcd mode test to ubuntu flannel HA job

Change-Id: I9af6fd80a1bbb1692ab10d6da095eb368f6bc732

* rename etcd_mode to etcd_kubeadm_enabled

Change-Id: Ib196d6c8a52f48cae370b026f7687ff9ca69c172
2019-06-20 11:12:51 -07:00
e2f9adc2ff Add holmsten to reviewers (slack - gix) (#4896) 2019-06-20 00:38:48 -07:00
f67a24499b Allow to specify feature_control in calico cni config (#4879)
* Allow to specify feature_control in calico cni config

* list length checking

* double check

* remove 2 conditions
2019-06-16 23:14:07 -07:00
5c704552d8 multus | use last version (#4880) 2019-06-16 23:12:07 -07:00
d83ea51101 fix markdown style (#4886) 2019-06-14 05:42:21 -07:00
fa6027e8f0 fix references to sample files in setup.cfg (#4882) 2019-06-14 03:52:21 -07:00
2849191e67 CNI plugins: use last version 0.8.1 (#4878)
* CNI plugins: bump version 0.8.1

* cni plugins : update checksums

* cni : update readme
2019-06-14 02:42:23 -07:00
0559eec681 Update Dockerfile to use python3 (#4885) 2019-06-14 01:54:24 -07:00
a3a7fe7c8e fix start CoreDNS when init secondary master (#4867) 2019-06-11 04:56:18 -07:00
9b2d176617 Enable packet_ubuntu-contiv-sep (#4595) 2019-06-11 03:28:16 -07:00
7a3547e4d1 Enable packet_*-kube-router jobs (#4594) 2019-06-11 02:58:18 -07:00
e6fb686156 added the ability to define and deploy multiple address pools to metallb (#4757) 2019-06-11 00:20:21 -07:00
5e80603bbb updated vagrant doc (#3719) 2019-06-10 23:58:14 -07:00
c8d95a1586 Remove dnsPolicy from PSP (#4864) 2019-06-10 23:34:16 -07:00
27a99e0a3f Added configurable min memory assertions (#4307) 2019-06-10 23:22:15 -07:00
3cc351dff9 Require min version of Kubernetes (#4860)
* Require minimum version of Kubernetes

* Remove checksums for kubernetes version 1.12

* Add kube_version to precheck output and add min required version to README

* Fix merge

* Fix defaults

* Fix typo in precheck
2019-06-10 23:18:15 -07:00
23c9071c30 Added file and container image caching (#4828)
* File and container image downloads are now cached locally, so that repeated vagrant up/down runs do not trigger downloading of those files. This is especially useful on laptops with kubernetes running locally on VMs. The total size of the cache, after an ansible run, is currently around 800MB, so bandwidth (=time) savings can be quite significant.

* When download_run_once is false, the default is still not to cache, but setting download_force_cache will still enable caching.

* The local cache location can be set with download_cache_dir and defaults to /tmp/kubernetes_cache

* A local docker instance is no longer required to cache docker images; Images are cached to file. A local docker instance is still required, though, if you wish to download images on localhost.

* Fixed a FIXME, where the argument was that delegate_to doesn't play nice with omit. That is a correct observation and the fix is to use default(inventory_host) instead of default(omit). See ansible/ansible#26009

* Removed "Register docker images info" task from download_container and set_docker_image_facts because it was faulty and unused.

* Removed redundant when:download.{container,enabled,run_once} conditions from {sync,download}_container.yml

* All features of commit d6fd0d2aca by Timoses <timosesu@gmail.com>, merged May 1st 2019, are included in this patch. Not all code was included verbatim, but each feature of that commit was checked to be working in this patch. One notable change: The actual downloading of the kubeadm images was moved to {download,sync}_container, to enable caching.

Note 1: I considered splitting this patch, but most changes that are not directly related to caching, are a pleasant by-product of implementing the caching code, so splitting would be impractical.

Note 2: I have my doubts about the usefulness of the upload, download and upgrade tags in the download role. Must they remain or can they be removed? If anybody knows, then please speak up.
2019-06-10 11:21:07 -07:00
14141ec137 Rebase only on PRs (#4861) 2019-06-10 11:17:05 -07:00
5bec2edaf7 remove namespace from ClusterRole (#4856) 2019-06-10 11:15:12 -07:00
f504d0ea99 Remove invalid field dnsPolicy from podSecurityPolicy (#4863)
Change-Id: I02864011bf5fda5dbd35c7513c73875769036f87
2019-06-10 07:11:10 -07:00
3b7797b1a1 Ensure haproxy and nginx reload when config changes (#4862)
Change-Id: Ia9a41e7b1cfcb1e6acb2dbae6eecc541dce25a74
2019-06-10 05:59:08 -07:00
aa63eb6196 disable ansible group name warning (#4852) 2019-06-10 03:29:09 -07:00
23aa3e4638 Remove GCE tests and CNCF funding ended (#4859) 2019-06-10 00:31:06 -07:00
56ae3bfec2 Add support for IPv6 for Openstack in terraform.py via metadata (#4716)
* Add support for IPv6 for Openstack in terraform.py via metadata

* document terraform.py metadata variables for openstack
2019-06-09 23:01:05 -07:00
4d5c4a13cb Add missing checksums, update default k8s version to 1.14.3 (#4850)
This PR adds missing checksums for kubeadm and hyperkube and changes
default version to 1.14.3

Signed-off-by: Sergey Nuzhdin <ipaq.lw@gmail.com>
2019-06-09 11:49:05 -07:00
69a8f91512 Update dns-autoscaler.yml.j2 (#4857)
Merge two tolerations, because the latest toleration will override the first one.
2019-06-09 11:39:04 -07:00
fa791cc344 update link to Weave Net Troubleshooting docs (#4853)
Signed-off-by: Daniel Holbach <daniel@weave.works>
2019-06-07 05:52:00 -07:00
456f743470 Fix etcd_events_cluster_enabled in CI due to wrong var used (#4849) 2019-06-06 07:10:17 -07:00
ab6f0012cc Make local volume provisioner dir mode a variable (#4821)
* Make local volume provisioner dir mode a variable

I need to change this for Nagios monitoring. Others may
need to as well. Had to close previous commits, sorry for
the spam.

* Make local volume provisioner dir mode a variable

I need to change this for Nagios monitoring. Others may
need to as well. Had to close previous commits, sorry for
the spam.
2019-06-06 04:36:14 -07:00
4afbf51d32 kube-router: Set ownership of /opt/cni/bin/* to kube (#4825)
Task "kube-roter | Set cni directory permissions"
sets ownership of /opt/cni/bin to "kube"

Task "kube-router | Copy cni plugins"
copies the binaries from the archive setting the ownership
back to "root"

Fix "kube-roter" typo

Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
2019-06-06 04:34:13 -07:00
d62684b617 Fixed missing meta for generic CNI network plugin (#4845) 2019-06-06 02:22:11 -07:00
a8dfcbbfc7 Switch /root references to ansible_env.HOME (#4842)
* kube config dir for current/ansible become user

* remove extra /

* fix default value
2019-06-06 02:06:11 -07:00
bbdc6210f5 use dpkg_selections module to hold docker-ce on Debian family hosts (#4820)
* use dpkg_selections module to hold docker-ce on Debian family hosts

* removed debian_docker.j2 template as it is no longer required
2019-06-06 01:16:13 -07:00
c7f6ed1495 Move moderator between part1 and part2 (#4844) 2019-06-06 01:00:17 -07:00
818aa7aeb1 Set dnsPolicy to ClusterFirstWithHostNet when hostNetwork is true (#4843) 2019-06-05 03:17:55 -07:00
045acc724b fix relative paths for bastion host template (#4126)
This is a fix for #4124
2019-06-05 01:51:55 -07:00
d540560619 Preinstall fails on checking etcd group length (#4839) 2019-06-05 01:37:53 -07:00
797bfd85b0 Only create kubeadm compat cert dir link if it does not exist (#4840) 2019-06-05 01:27:53 -07:00
07cb8ebef7 Add support for arm images for hyperkube, kubeadm and cni_binary (#4261)
* Add support for arm images for hyperkube, kubeadm and cni_binary

* Add dummy etcd checksum for arm

This commit adds dummy etcd checksum for arm to avoid "no attribute" error
during setup.

* Add etcd host assert check

* Add 1.13.4 checksums of kubeadm and hyperkube for arm

* Update checksums of kubeadm and hyperkube for arm

* Add dummy checksums for calicoctl_binary_checksums dict

* disable gather_facts because it causes tests to fail

* Remove architecture check for etcd, due to unable to run tests
2019-06-05 00:05:55 -07:00
54416cabfd prefer_udp for upstream dns servers (#4810) 2019-06-04 23:27:55 -07:00
3617ae31f6 Optionally skip predownload of kubeadm images (#4832) 2019-06-04 04:35:02 -07:00
4f05d801c3 Use short cluster_name for TF CI (#4835) 2019-06-04 04:25:00 -07:00
956afcb33f Move tf-ovh to part2 (#4834) 2019-06-04 01:39:07 -07:00
6347419233 Avoid duplicating nameservers (#4833) 2019-06-04 00:13:02 -07:00
0c7a50fe1e README: Make usage section clearer (#4034)
Long option --become was used in the example but in the comment describing it the short option -b was used.
Use same option in description and example to avoid confusion.
2019-05-31 12:48:28 -07:00
7423932510 Add ready plugin for CoreDNS (#4817) 2019-05-28 06:47:56 -07:00
b41530ba5d Add missing extraArgs to kubeadm-config (#4814) 2019-05-28 03:57:52 -07:00
29e916508c Update roadmap (#4811) 2019-05-28 02:05:54 -07:00
b45f3f0004 Add tf-ovh_coreos CI job (#4763) 2019-05-28 01:51:53 -07:00
2a5721b4d4 Change CentOS CRI-O repo from developer repo to public one (#4807) 2019-05-27 05:33:51 -07:00
e30a703c8e Add Kubernetes conformance tests (#4614) 2019-05-27 05:31:52 -07:00
333f1a4a40 kubeadm join path fixed for RH linux (#4798) 2019-05-27 01:49:51 -07:00
84b278021a Update openstack.yml (#4795)
Fix comment style
2019-05-25 05:19:27 -07:00
1e470b0473 Fix certificate-key param for kubeadm init (#4789)
* Fix certificate-key param for kubeadm init

* Fix yamllint error
2019-05-22 02:06:11 -07:00
0ef3a7914c Added pod psp in Rancher Local Path Provisioner (#4385)
* Added pod psp in Rancher Local Path Provisioner

Added pod security policy (psp) in Rancher Local Path Provisioner.

Signed-off-by: André R. de Miranda <andre@miranda.work>

* Apply psp for Rancher Local Path Provisioner only when local_path_provisioner_namespace is not kube-system and also reorganized the templates
2019-05-22 00:16:08 -07:00
a3fff1e438 cordon all deleted nodes before drain (#4756)
Kubespray waits for each drain to finish before starting the next one.
Draining nodes one after another seems better than draining them in parallel, because resource availability should be re-checked every time.
But this way we have an additional problem: pods may be restarted on the nodes that are killed a little bit later.
A fast cordon of all nodes before the heavy drain seems like an easy solution.
2019-05-21 23:36:05 -07:00
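The ordering described above could look roughly like the following sketch; the task names and the nodes_to_remove variable are assumptions, not the actual playbook:

```yaml
# Cordon everything first, then drain nodes one at a time (e.g. with serial: 1 at the play level)
- name: Cordon every node scheduled for removal
  command: "kubectl cordon {{ item }}"
  loop: "{{ nodes_to_remove }}"                  # assumed variable
  delegate_to: "{{ groups['kube-master'][0] }}"
  run_once: true

- name: Drain the current node
  command: "kubectl drain {{ inventory_hostname }} --ignore-daemonsets --delete-local-data"
  delegate_to: "{{ groups['kube-master'][0] }}"
```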
4bc204925a Error in nginx when starting registry-proxy (#4785)
nginx fails to start because requiredDropCapabilities drops all capabilities.

The nginx requires the following capabilities:
- CHOWN
- SETGID
- SETUID

Signed-off-by: André R. de Miranda <andre@miranda.work>
2019-05-20 11:27:15 -07:00
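A hedged sketch of the PodSecurityPolicy fields involved; the actual policy in the repo may differ:

```yaml
# Keep the capabilities nginx needs out of the required drop list and allow them explicitly
spec:
  requiredDropCapabilities:
    - NET_RAW            # example of a capability that can still be dropped
  allowedCapabilities:
    - CHOWN
    - SETGID
    - SETUID
```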
5d9946184a Add ignore_assert_errors to "kube-master, ... (#4779)
... kube-node or etcd is empty" task
The assert must be ignored if ignore_assert_errors is true
2019-05-20 11:25:14 -07:00
5ba169a612 Ignore 2 ansible-lint rules (E204, E701) on purpose. (#4744) 2019-05-20 11:23:14 -07:00
872b37f751 updated pinning to prevent breaking changes (#4783)
* updated ansible pinning to prevent more possibilities of breaking changes

* more exact pinning of ansible version

* more exact pinning of ansible version and also all the rest

* added testing requirements.txt pinning settings

* removed boto from testing requirements.txt
2019-05-20 11:21:14 -07:00
8485136f9a var node_labels as string (#4764) 2019-05-19 12:31:10 -07:00
ff1bc739f1 Change default for kubelet_flexvolumes_plugins_dir (#4752) 2019-05-19 12:29:10 -07:00
594a0e7f1b Fix invalid YAML formatting within addons.yml (#4753) 2019-05-16 02:05:49 -07:00
8e28ba38d2 Add Load Balancer IP to API servers SANs (#4775)
- Add loadbalancer_apiserver.address to apiserver_sans
2019-05-16 01:23:42 -07:00
73c2ff17dd Fix Ansible-lint error [E502] (#4743) 2019-05-16 00:27:43 -07:00
13f225e6ae Only pull images for destined host groups (#4735)
Without this, pulls are considered for all
host groups, even if not targeted by the downloads
`groups` list. Hence, a download/sync is triggered
even though the host does not require the image.
2019-05-16 00:25:48 -07:00
3f62492a15 Use standard testcases job for TF CI (#4732) 2019-05-14 02:01:14 -07:00
5e3bd2dff1 Use common playbook to wait for SSH (#4734) 2019-05-10 01:25:59 -07:00
787a9c74fa Terraform wait for floating IP instance has been associated (#4321)
* Add wait for floating ip associate with instance

* Terraform formatting fix

* Sort Open Telekom Cloud in compatible list
2019-05-09 02:16:50 -07:00
14749df6f3 Fix "netchecker-server" ClusterRole (#4730)
* Add sha256 hashes for calicoctl v3.6.1

Hashes are added to calicoctl_binary_checksums for both amd64 and arm platforms.

* Add rules for "network-checker.ext" resource to "netchecker-server" ClusterRole

So that it could access the resource after it is created.

Corresponding issues:
https://github.com/Mirantis/k8s-netchecker-server/issues/125
https://github.com/kubernetes-sigs/kubespray/issues/3281
2019-05-09 01:30:49 -07:00
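A hypothetical ClusterRole rule along these lines; the resource name is inferred from the commit text, not copied from the repo:

```yaml
rules:
  - apiGroups: ["network-checker.ext"]
    resources: ["agents"]          # assumed resource name
    verbs: ["get", "list", "create", "update"]
```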
2db2898112 Fixed runc path in runtime for RedHat os family (#4731) 2019-05-09 01:28:48 -07:00
3776000fc4 Run TF tests from repo root (#4723) 2019-05-08 23:40:49 -07:00
f0572e59e7 Always do OVH CI (#4722) 2019-05-08 23:38:53 -07:00
6217184c7f Merge pull request #4720 from MarkusTeufelberger/patch-1
Update default CentOS version on Azure
2019-05-09 08:38:44 +02:00
044dcbaed0 Add Kubelet config, remove deprecated flags and fix minor bugs (#4724)
* Add kubelet config

* Change kubelet_authorization_mode_webhook to true

* Fix lint

* Sync env file

* Refactor the kubernetes node folder

* Remove deprecated flag and fix lint
2019-05-08 13:38:36 -07:00
8a5eae94ea Minor cleanups of CoreDNS issues and CI job (#4719)
* Minor cleanups

* Add comment in docs that nodelocaldns cache is enabled by default
2019-05-07 13:20:36 -07:00
bf3c6aeed1 Add kube anon auth settings to kubeadm config templates (#4713)
* Disable kube_api_anonymous_auth by default to secure the setup

* Disable metrics-server in addons. Health endpoint is slow and unstable

* Fix anonymous-auth missing in configuration

* Cleanup a bit

* Fix kube anon auth
2019-05-07 12:52:34 -07:00
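A sketch of the relevant kubeadm ClusterConfiguration fragment, with assumed values:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    anonymous-auth: "false"   # kube_api_anonymous_auth rendered into the API server flags
```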
f3fbf995ca Update default CentOS version on Azure 2019-05-07 13:37:42 +02:00
03bded2b6b Fix adding output of kubeadm to the admin.conf downloaded to the artifacts directory (#4696)
Fixes issue https://github.com/kubernetes-sigs/kubespray/issues/4695
2019-05-06 03:29:36 -07:00
d5c0829d61 Removing unnecessary httplib2 install (#4708) 2019-05-03 17:55:38 -07:00
00369303de Fixing msg parameter for debug module (#4702)
According to [`debug` module documentation](https://docs.ansible.com/ansible/latest/modules/debug_module.html?highlight=msg), the correct parameter name is `msg`.

With the previous `message` parameter name I was getting FAILED messages while ansible was trying to debug previous FAILED tasks.
2019-05-03 12:21:42 -07:00
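For reference, correct usage of the module parameter:

```yaml
# The parameter is `msg`, not `message`; the registered variable name is illustrative
- name: Show why the previous task failed
  debug:
    msg: "{{ previous_result.stderr | default('no stderr captured') }}"
```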
1f1479c0a7 Update ingress nginx 0.24.1. (#4691) 2019-05-03 12:19:39 -07:00
e67f848abc ansible-lint: add spaces around variables [E206] (#4699) 2019-05-02 14:24:21 -07:00
560f50d3cd Add support for http(s)_proxy to CoreOS, Fedora and OpenSUSE (#4669)
* Add support for http(s)_proxy to CoreOS and Fedora

* fix opensuse proxy support

* Fix CoreOS proxy support

* update documentation
2019-05-02 12:28:22 -07:00
3f45122d0d Refactor Terraform CI (#4654) 2019-05-02 12:26:19 -07:00
50bdaa573c Apply etcd_extra_vars to etcd-events.env as well. (#4219)
This change ensures that etcd_extra_vars variable applies
to events etcd as well.
2019-05-02 12:24:27 -07:00
24b6698cc9 Disable CI deploys on master (#4690) 2019-05-02 12:20:20 -07:00
73885d3b9e Validate Vagrantfile in CI unit-tests (#4642)
* Validate vagrant file on CI

* Install vagrant

* Install vagrant

* Install vagrant

* Install vagrant

* Install vagrant

* Install vagrant

* Test vagrant validate
2019-05-02 11:24:21 -07:00
f29387316f Fix ansible-lint 602 (#4688) 2019-05-01 23:42:17 -07:00
d6fd0d2aca Enable delegating all downloads (binaries, images, kubeadm images) (#4420)
* Download to delegate and sync files when download_run_once

* Fail on error after saving container image

* Do not set changed status when downloaded container was up to date

* Only sync containers when they are actually required

Previously, non-required images (pull_required=false because the
image already existed on the target host) were synced to the target
hosts. This failed because the image was not downloaded to
the download_delegate and hence was not available for
syncing.

* Sync containers when only missing on some hosts

* Consider images with multiple repo tags

* Enable kubeadm images pull/syncing with download_delegate

* Use kubeadm images list to pull/sync

'kubeadm config images pull' is replaced by collecting the images
list with 'kubeadm config images list' and using the commonly
used method of pull/syncing the images.

* Ensure containers are downloaded and synced for all hosts

* Fix download/syncing when download_delegate is a kubernetes host
2019-05-01 01:10:56 -07:00
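A sketch of the image-list approach described above; the task name and variables are assumptions, though `kubeadm config images list` is the real command:

```yaml
- name: kubeadm | Collect the list of required images
  command: "kubeadm config images list --kubernetes-version {{ kube_version }}"
  register: kubeadm_image_list
  run_once: true
  delegate_to: "{{ download_delegate }}"   # images are then pulled/synced like any other download
```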
e814da1eec ansible-lint: Don't use the local_action module [E504] (#4666) 2019-05-01 00:38:55 -07:00
e029a09345 Update CI to use 2.10.0 release (#4682)
* Update CI to use 2.10.0 release

* Add rsync as it's required to use synchronize
2019-04-30 07:29:37 -07:00
dcd9c9509b Add etcd role dependency on kube user to avoid etcd role failure when running scale.yml with a fresh node. (#3240) (#4479) 2019-04-30 04:01:36 -07:00
15eb7db36d Fix k8s api endpoint for secondary nodes in control plane mode (#4675)
Change-Id: I1588458b54c52443ad8d0afbd266f77ac0afea67
2019-04-29 07:50:24 -07:00
a5b46bfc8c Run dns_late preinstall tasks on all k8s nodes (#4672)
* Run dns_late preinstall tasks on all k8s nodes

Related issue: #4656

Change-Id: I63f8559ef1a497b7580ab084561e6603fe647834

* Fix ansible-lint

Change-Id: Ia5b33fa63dbc36d8c3e9557ef3f2ea02af2325a5

* Fix recover_control_plane lint issues

Change-Id: I16643a3193c11b6ba704e9698812cac7e4fd19a8
2019-04-29 05:12:21 -07:00
fbba259933 ingress-nginx: enable --report-node-internal-ip-address flag (#4114)
Close #4113
2019-04-29 01:44:22 -07:00
7b77e2d232 Remove docker-storage-setup dependency if not needed (#4077)
When docker_container_storage_setup is false,
the docker service should not depend on the docker-storage-setup service,
because it is not installed.

For example, when using overlay2 on recent RHEL 7/Centos 7 kernels,
you most likely don't need it.
2019-04-29 01:42:22 -07:00
48a182844c Documentation and playbook for recovering control plane from node failure (#4146) 2019-04-29 01:40:20 -07:00
9335cdcebc ansible-lint: Add exception for invocation of "rm" (#4609) 2019-04-29 01:34:20 -07:00
38af93b60c Remove rkt support (#4671) 2019-04-29 01:14:20 -07:00
741de6051c Fix nodeselectors for contiv and nginx-ingress (#4662)
* Fix nodeselectors for contiv and nginx-ingress

Change-Id: Ib3eb6bd87193c69a90ee944c9164a0b6792c79ba

* Set kube proxy mode to iptables for addons task

Change-Id: Iff71a71f672405c74b4708c71db15ddc4391a53a
2019-04-28 23:36:19 -07:00
b8f0de3074 Fixed etcd-servers-overrides in kubeadm config (#4668)
* kube-apiserver will fail if a comma is used as the separator
2019-04-28 23:02:20 -07:00
88d919337e ansible-lint: don't compare to empty string [E602] (#4665) 2019-04-28 23:00:20 -07:00
f518b90c6b associate fips for masters with no etcd (#4657) 2019-04-28 22:58:20 -07:00
d5c33e6d6c Refactor test cases (#4655) 2019-04-28 22:56:19 -07:00
338eb4ce65 Fix kubeadm upload certs with when condition (#4659)
* Fix kubeadm upload certs with when condition

Change-Id: I916dd2375b71eea2386047c7f185a2f8361f7a61

* Update kubeadm-secondary-experimental.yml
2019-04-27 01:14:20 -07:00
009e208bcd Remove RHEL from packet deploy (#4661)
Change-Id: I131d77bb9d16cc0f252dd86166c29f72daa9a64a
2019-04-26 09:56:29 -07:00
81e6877b02 Make cilium tests pass (#4660)
Cilium requires a recent kernel. The rhel7 and centos7 images are too old, so they are removed.
Bumping ubuntu to ubuntu-1804

Change-Id: Ib1bffa45b8f9ed0ba500f751714372b3a3f7878b
2019-04-26 05:54:37 -07:00
3722acee85 Fix broken metrics-server deployment not starting (#4651)
* Fix metrics-server deployment

* Make metrics server work

* Fix sample inventory
2019-04-26 00:44:26 -07:00
a4a35f8a4f Git checkout a specific version for testing upgrades (#4653) 2019-04-25 05:24:46 -07:00
82119ca923 Add support calico kubernetes datastore and typha. (#4498)
* Add support calico kubernetes datastore and typha.

* Add typha_enabled to kubespray-defaults.
2019-04-25 05:00:48 -07:00
6ca2019002 Fix issue with etcd arm host installation case (#4589)
Use host_architecture variable.
2019-04-25 04:58:47 -07:00
53e3463b5a Fix GCE tests with undefined CI_PLATFORM (#4650) 2019-04-25 04:20:47 -07:00
c9ed5f69d7 Prepend docker.io for all docker hub images (#4648)
Change-Id: I71dc793641bc168e40419e38f33f68f5325e77a9
2019-04-25 01:34:46 -07:00
696d481e3b Fix dynamic inventory parsing in contrib/tf/packet (#4645) 2019-04-25 00:40:46 -07:00
f5a83ceded Fix typo in test-infra playbook (#4644) 2019-04-24 13:34:46 -07:00
3fe66a1298 Update downloads role to download to correct group (#4638) 2019-04-24 10:48:03 -07:00
6af1f65d3c Fix python syntax in Terraform dynamic inventory (#4643) 2019-04-24 10:34:04 -07:00
4a10dca7d4 Add an ability to provide oidc cert in base64 (#4618) 2019-04-24 09:40:01 -07:00
4d57ed314d Clean up check for setting kubeadm certificate key (#4634)
Change-Id: I2c97c4753089eb3ec2e6b01b2681a8be98ecbb57
2019-04-24 07:14:12 -07:00
86d0e12695 Add missing comma (#4636) 2019-04-24 07:10:02 -07:00
4e81bcc147 Fixing Vagrant cluster provisioning (#4218)
* Pass ansible_ssh_user as host_var

Co-authored-by: Damian Darczuk <damian.darczuk@intel.com>
Co-authored-by: Paweł Pałucki <pawel.palucki@intel.com>

* Create a directory before downloading container images to ansible host

Co-authored-by: Damian Darczuk <damian.darczuk@intel.com>
Co-authored-by: Paweł Pałucki <pawel.palucki@intel.com>

* Set private key usuing synchronize task options

Co-authored-by: Damian Darczuk <damian.darczuk@intel.com>
Co-authored-by: Paweł Pałucki <pawel.palucki@intel.com>
2019-04-24 05:42:05 -07:00
691baf5b14 Calico fix (#4540)
* Mark "Calico | Set global as_num" as "unchanged"

This command executes with the "--skip-exists" parameter, so it is idempotent
and should not be marked as "changed".

* trigger ci
2019-04-24 05:40:01 -07:00
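A sketch of how the idempotent call can be kept from reporting a change; the exact calicoctl arguments and file path are abbreviated assumptions:

```yaml
- name: Calico | Set global as_num
  command: "{{ bin_dir }}/calicoctl create --skip-exists -f /tmp/bgp-config.yml"  # assumed path
  changed_when: false
```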
6243467856 remove duble check for run this task just one time (#4613) 2019-04-24 05:38:01 -07:00
3c5a4474ac Increase ansible-lint speed (#4632) 2019-04-24 05:28:00 -07:00
01da65252b Reduce VM size for Packet CI (#4630) 2019-04-24 04:30:04 -07:00
f3e7615bef Switch deploy-part1 AIO job to Calico (#4628)
* Switch deploy-part1 AIO job to Calico

* Cleanup file

* Remove newline at end
2019-04-24 03:32:04 -07:00
f47a666227 support azure loadbalancer standard sku (#4150) (#4476)
add support for the following properties in azure-credential-check.yml (example values follow the list below):
  - azure_loadbalancer_sku: Sku of Load Balancer and Public IP. Candidate values are: basic and standard.
  - azure_exclude_master_from_standard_lb: excludes master nodes from standard load balancer.
  - azure_disable_outbound_snat: disables the outbound SNAT for public load balancer rules
  - useInstanceMetadata: Use instance metadata service where possible
  - azure_primary_availability_set: (Optional) The name of the availability set that should be used as the load balancer backend
2019-04-24 02:14:01 -07:00
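Example values for the new variables, shown only as placeholders:

```yaml
azure_loadbalancer_sku: standard
azure_exclude_master_from_standard_lb: true
azure_disable_outbound_snat: false
useInstanceMetadata: true
azure_primary_availability_set: "my-avset"   # optional
```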
b708db4cd5 Update to v1.14.1 (#4481) 2019-04-24 02:08:01 -07:00
a3144e7e21 Test with minimum requirements (#4615) 2019-04-24 02:02:03 -07:00
683efc5698 Move on_success test to deploy-part2 (#4627) 2019-04-24 01:42:04 -07:00
38a3075025 Always rebase on master before running a job (#4616) 2019-04-24 01:38:01 -07:00
fc072300ea Purge legacy cleanup tasks from older than 1 year (#4450)
We don't need to support upgrades from 2 year old installs,
just from the last major version.

Also changed most retried tasks to 1s delay instead of longer.
2019-04-24 00:08:05 -07:00
d25ecfe1c1 Update Docker defaults to 18.09.5 and drop deprecated (#4624)
As of kubernetes v1.14, docker 18.09 is [validated for use](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#external-dependencies). Docker 1.11 and 1.12 were dropped.

This patch:
- Updates the default docker version to 18.09
- Updates Docker packages to the latest 18.09 patch (18.09.5)
- Removes options for Docker 1.11 and 1.12
2019-04-23 22:24:01 -07:00
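An illustrative group_vars override after the bump; the variable name follows Kubespray conventions and may differ in detail:

```yaml
docker_version: "18.09"
```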
37d98e79ec Pin Terraform provider versions (#4620) 2019-04-23 22:22:01 -07:00
a65605b17a ansible-lint: Don't use bare variables (#4608)
Circumvented one false positive from ansible-lint
Moved a block of jinja magic into its own variable
2019-04-23 22:20:00 -07:00
424e59805f ansible-lint: Fix commands that are also available as module (#4619) 2019-04-23 22:18:00 -07:00
6df8111cd4 Merge 020_check and 030_check (#4623)
* Merge 020_check and 030_check

* Fix pods output and fail if test pods is not ready
2019-04-23 16:12:00 -07:00
76db060afb Define and implement specs for bootstrap-os (#4455)
* Add README to bootstrap-os role

* Rework bootstrap-os once more

* Document workarounds for bugs/deficiencies in Ansible modules
* Unify and document role variables
* Remove installation of additional packages and repositories
* Merge Ubuntu and Debian tasks
* Remove pipelining setting from default playbooks
* Fix OpenSUSE not running its required tasks
2019-04-23 15:46:02 -07:00
d588532c9b Update probe timeouts, delays etc. (#4612)
* Fix merge conflict

* Add check delay

* Add more liveness and readiness options to metrics-server
2019-04-23 14:46:02 -07:00
d6d7458d68 Fix control plane setup without a hardcoded key (#4610) 2019-04-23 14:37:59 -07:00
228b244c84 Move inline shell into script files (#4604) 2019-04-23 13:36:03 -07:00
d89ecb8308 disable metrics server and fix terraform (#4617)
* disable metrics server in centos7-flannel-addons job

Change-Id: I1d87923547584896f64dda9ea8feb5581ad48cbe

* Fix tf facility->facilities syntax

Change-Id: I434bfe53f47e8e4a546890e0b62d24bde6e6d6a7

* Update Terraform CI for facilities

* Fix undefined variable error
2019-04-23 12:06:03 -07:00
50751bb610 Revert "Optimize kube resources creation (#4572)" (#4621)
This reverts commit f8fdc0cd93.
2019-04-23 20:37:23 +03:00
64f48bf84c Update ansible.md (#4599)
Ansible 2.0 deprecated the "ssh" in ansible_ssh_host (ansible_host is the replacement).

Updating the docs to be more aligned with the Ansible version used in the sample/inventory.ini file as well.
Also adding `[bastion]` group in the docs to avoid confusion.
2019-04-22 23:36:09 -07:00
f8fdc0cd93 Optimize kube resources creation (#4572) 2019-04-22 23:34:10 -07:00
09fe95bc60 Avoid creating k8s cert dir on non-k8s nodes (#4602) 2019-04-21 15:27:43 -07:00
ada5941a70 Unmask Docker service in ClearLinux (#4583)
The docker service provided by the containers-basic bundle is masked
in the ClearLinux distribution. This causes errors in the following
steps. This commit ensures that the unit is not masked.
2019-04-21 07:31:43 -07:00
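One way to express "not masked" in Ansible, as a sketch with the standard systemd module:

```yaml
- name: Ensure the docker service is unmasked
  systemd:
    name: docker
    masked: no
```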
88fe3403ce Add overcommitment for CPU in Packet CI playbook (#4597) 2019-04-21 02:27:44 -07:00
04f2682ac6 Drop unused dynamic inventory functions (#4138) 2019-04-21 01:59:45 -07:00
873b5608cf add master_allowed_remote_ips (with terraform fmt) (#4022) 2019-04-21 01:57:44 -07:00
12086744e0 Update docs for inventory_builder (#4581) 2019-04-20 11:09:45 -07:00
33ab615072 Wait longer for node to join the cluster (#4549) 2019-04-20 07:05:40 -07:00
f696d7abee Simplify syntax-check CI job (#4585) 2019-04-20 06:37:40 -07:00
5a1cf19278 Install cri-tools on fedora (#4350) 2019-04-20 06:29:40 -07:00
416e65509b Add documentation about CPU arch compatibility (#4302) 2019-04-20 06:27:40 -07:00
4de6a78e26 Fix CI for packet_centos7-flannel-addons (#4586) 2019-04-20 06:21:40 -07:00
026088deea Re-Add docker:dind for Packet CI (#4567) 2019-04-20 06:19:40 -07:00
f142e671b3 Cleanup references to Travis CI (#4208)
Broken since 4efb0b7
2019-04-20 06:17:40 -07:00
2f49b6caa8 Use yamllint --strict (#4587) 2019-04-20 06:15:41 -07:00
50c86919dc Packet CI: Increasing the time waiting for IP to be assigned (#4584) 2019-04-20 06:13:40 -07:00
781cc00cc4 Add a testcase to check that pods are running (#4555) 2019-04-20 06:11:40 -07:00
05dc2b3a09 Use K8s 1.14 and add kubeadm experimental control plane mode (#4514)
* Use K8s 1.14 and add kubeadm experimental control plane mode

This reverts commit d39c273d96.

* Cleanup kubeadm setup run on first master

* pin kubeadm_certificate_key in test

* Remove kubelet autolabel of kube-node, add symlink for pki dir

Change-Id: Id5e74dd667c60675dbfe4193b0bc9fb44380e1ca
2019-04-19 06:01:54 -07:00
d0e628911c Add sha256 hashes for calicoctl v3.6.1 (#4580)
Hashes are added to calicoctl_binary_checksums for both amd64 and arm platforms.
2019-04-19 05:45:55 -07:00
656633f784 YAMLLint everything (#4576) 2019-04-18 23:59:54 -07:00
530e1c329d Add shellcheck CI (#4562) 2019-04-18 23:57:54 -07:00
f5aec8add4 Fix runc absolute path (#4542)
The BINDIR variable defined in runc's Makefile[1] sets the
installation path to $(PREFIX)/sbin, which is used by most
Linux distributions. This change fixes the absolute path used for
non-ClearLinux distributions (CentOS, Ubuntu).

[1] https://github.com/opencontainers/runc/blob/master/Makefile#L10
2019-04-18 15:41:58 -07:00
f92309bfd0 Fix ansible-lint for ceph package (#4568) 2019-04-18 13:45:25 -07:00
ef10feb26f Comment loadbalancer_* settings in sample inventory (#4566) 2019-04-18 04:20:10 -07:00
c6586829de Ensure /etc/bash_completion.d/ folder exists (#4543)
The Stateless ClearLinux feature[1] requires the creation of folders
under /etc. This change ensures the existence of the
/etc/bash_completion.d/ folder for the ClearLinux distribution.

[1] https://clearlinux.org/features/stateless
2019-04-18 02:24:10 -07:00
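A sketch of ensuring the directory exists with the standard file module; the mode is an assumption:

```yaml
- name: Ensure /etc/bash_completion.d/ exists
  file:
    path: /etc/bash_completion.d
    state: directory
    mode: "0755"
```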
b103385678 added missing sidebar link to Packet doc (#4513) 2019-04-18 02:22:10 -07:00
848191e97a Enable working Packet CI jobs and delay GCE CI (#4559) 2019-04-18 01:50:09 -07:00
04e3fb6a5a Fix ansible-lint error 103 (#4511) 2019-04-18 01:42:10 -07:00
b218e17f44 ansible-lint: E403 Package installs should not use latest (#4500) 2019-04-18 01:34:08 -07:00
bba6d0c613 Fix CI link (#4521) 2019-04-18 01:12:08 -07:00
49af1f9969 Fix ansible-lint e601 in create-vms (#4561) 2019-04-17 10:46:10 -07:00
a6dc50e7cb Add host information for canal readiness probe (#4548) 2019-04-17 10:22:02 -07:00
f69b5f7f33 Upgrade to Ansible 2.7.8 (#4535) 2019-04-17 10:18:05 -07:00
37eac010c8 ansible-lint: Don’t compare to literal True/False (#4499) 2019-04-17 08:42:03 -07:00
d4b9f15c0a PHASE 2 - Enable Packet-CI in gitlab and move unit-tests and deploy-part1 (#4538)
* PHASE 2 - Enable Packet-CI in gitlab

* Add gitlab files

* Reset files back and only keep Packet

* Include packet

* Add missing Upgrade Tests

* Update GCE jobs etc

* Fix bug

* Yaml lint all gitlab files

* Remove GCE

* Test

* Test again

* Enable GCE again

* Install requirements

* Cleanup the gitlab file

* Cleanup runner tags

* Install requirements

* Test

* Test variables for gce

* Test again

* Test again

* Fix

* Update
2019-04-17 08:32:03 -07:00
ec3daedf9e Revert "Fix for unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels (#4320)" (#4553)
This reverts commit 586ad89d50.
2019-04-17 07:58:06 -07:00
1cf76a10db Disable usage of default security group (#4533) 2019-04-17 02:10:03 -07:00
d83181a2be add RBD Provisioner Addon (#3667) (#3668)
Based on the CephFS Provisioner Addon, the following changes have been made:
 - Upstream v2.1.1-k8s1.11
 - Configurable Provisioner replicas
2019-04-16 23:14:02 -07:00
b834a28891 PHASE 1 - Add Packet-CI playbook and configuration (#4537) 2019-04-16 14:49:07 -07:00
78f6f6b889 Mark "Calico | Set global as_num" as "unchanged" (#4539)
This command executes with the "--skip-exists" parameter, so it is idempotent
and should not be marked as "changed".
2019-04-16 09:31:11 -07:00
0b02f6593b Split .gitlab-ci.yml into several files (#4519) 2019-04-16 05:35:05 -07:00
7f1d9ff543 [contrib/terraform/openstack] Add k8s_allowed_remote_ips variable (#4506)
* Add k8s_allowed_remote_ips variable

Useful for defining CIDRs allowed to initiate an SSH connection when
you don't want to use a bastion.

* Add TF_VAR_k8s_allowed_remote_ips variable to tf-apply-ovh
2019-04-15 07:22:08 -07:00
c5fb734098 Switch calicoctl from a container to a binary (#4524) 2019-04-15 04:24:04 -07:00
d5d3cfd3fa Sanitize the cluster_name variable (#4509) 2019-04-15 04:22:06 -07:00
cc77a8c395 Add logo folders (#4515) 2019-04-12 11:00:47 -07:00
d39c273d96 Revert "Use K8s 1.14 and add kubeadm experimental control plane mode (#4317)" (#4510)
This reverts commit 316508626d.
2019-04-11 12:52:43 -07:00
316508626d Use K8s 1.14 and add kubeadm experimental control plane mode (#4317)
* Use Kubernetes 1.14 and experimental control plane support

* bump to v1.14.0
2019-04-11 05:30:13 -07:00
46ba6a4154 ansible-lint: when lines should not include Jinja2 variables (#4496) 2019-04-11 03:06:10 -07:00
d8cbbc414e Add a PR template (#4491) 2019-04-11 03:04:14 -07:00
ebae491e3f Add several issue templates (#4493) 2019-04-11 03:02:13 -07:00
6f919e5020 Add CI for Ubuntu 18.04 on Packet (#4439) 2019-04-11 00:26:10 -07:00
4ff851b302 Enable nodelocaldns by default (#4461)
* Enable nodelocaldns by default

* nodelocaldns is now default

* Disable enable_nodelocaldns for the addons CI jobs

Disable enable_nodelocaldns for the addons CI jobs to make sure things still work without nodelocaldns
2019-04-11 00:24:08 -07:00
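The default remains overridable per cluster, e.g. in group_vars:

```yaml
enable_nodelocaldns: false   # opt out of the new default
```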
3af90f8772 disable cloud-routes for non-cloud plugin (#4443) 2019-04-10 23:50:09 -07:00
cb54d074b5 Fix syntax of yaml in .gitlab-ci.yml file (#4409) 2019-04-10 23:46:10 -07:00
9032e271f1 Upgrade CoreDNS to 1.5.0 (#4494) 2019-04-10 13:40:08 -07:00
15597aa493 Do not force TCP connections to upstreams. (#4492) 2019-04-10 12:40:09 -07:00
3b9d13fda9 Return back bind API server node loadbalancer to 127.0.0.1 for security purposes. (#4489) 2019-04-10 12:20:08 -07:00
5e0249ae7c Add HAProxy as internal loadbalancer (#4480) 2019-04-10 05:56:18 -07:00
27958e4247 Fix "Prevent inventory.py from configuring an even number of nodes in etcd" #4399 (#4465)
by making clusters with fewer than 3 nodes have only 1 etcd node
2019-04-10 05:52:14 -07:00
353afa7cb0 Fix ipip: false in calico v3 (#4473) 2019-04-10 05:50:15 -07:00
e865c50574 Fix terraform fmt on contrib/terraform/aws (#4484) 2019-04-10 04:32:14 -07:00
a30ad1e5a5 Added generic CNI network plugin (#4322)
* Added generic CNI network plugin

* Added CNI network plugin documentation

* added necessary fix
2019-04-10 04:16:15 -07:00
586ad89d50 Fix for unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels (#4320)
* Fix the file path for all.yml and k8s-cluster.yml

* Fix --node-labels namespace error "unknown labels specified"

* Update templates and configs kubelet node-labels
2019-04-10 04:14:12 -07:00
6caa639243 Update CoreDNS label as specified in the kubernetes coredns repository (#3920) 2019-04-10 04:12:13 -07:00
80f31818df Add terraform validate for contrib/terraform/aws (#4438) 2019-04-10 02:14:14 -07:00
854cc53fa5 Add CI for contrib/terraform/openstack (#4475) 2019-04-10 02:12:16 -07:00
d2a1ac3b0c Add Ansible-lint CI step (#4411)
* Add ansible-lint as gitlab-ci step

* Fix jinja2 syntax in include_tasks that breaks ansible-lint

* Use a block scalar to get around gitlab quoting/escaping rules

* Run ansible-lint in verbose mode in CI
2019-04-10 02:04:16 -07:00
a678d1be9d Update CI to use 2.9.0 release and update Dockerfile to now use 18.04 (#4472)
* Update CI to use 2.9.0 release and update Dockerfile to now use 18.04

* Update CI to use 2.9.0 release and update Dockerfile to now use 18.04

* Update the kubectl bin
2019-04-09 05:57:06 -07:00
097806dfe8 Added tag kube-proxy (#4272)
Signed-off-by: André R. de Miranda <andre@miranda.work>
2019-04-09 05:25:06 -07:00
7cdf1fd388 quote values for kube_oidc_groups_prefix and kube_oidc_username_prefix to accept a colon, e.g. oidc: (#4305)
This fixes the error: error converting YAML to JSON: yaml: line 36: mapping values are not allowed in this context

Signed-off-by: Abdulaziz AlMalki <almalki.a@gmail.com>
2019-04-09 05:23:06 -07:00
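For reference, the quoting that avoids the YAML mapping error:

```yaml
kube_oidc_groups_prefix: "oidc:"
kube_oidc_username_prefix: "oidc:"
```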
622 changed files with 16089 additions and 6876 deletions

27
.ansible-lint Normal file

@ -0,0 +1,27 @@
---
parseable: true
skip_list:
# see https://docs.ansible.com/ansible-lint/rules/default_rules.html for a list of all default rules
# The following rules throw errors.
# These either still need to be corrected in the repository and the rules re-enabled or documented why they are skipped on purpose.
- '301'
- '302'
- '303'
- '305'
- '306'
- '404'
- '503'
# These rules are intentionally skipped:
#
# [E204]: "Lines should be no longer than 160 chars"
# This could be re-enabled with a major rewrite in the future.
# For now, there's not enough value gain from strictly limiting line length.
# (Disabled in May 2019)
- '204'
# [E701]: "meta/main.yml should contain relevant info"
# Roles in Kubespray are not intended to be used/imported by Ansible Galaxy.
# While it can be useful to have these metadata available, they are also available in the existing documentation.
# (Disabled in May 2019)
- '701'


@ -1,16 +1,11 @@
<!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):
---
name: Bug Report
about: Report a bug encountered while operating Kubernetes
labels: kind/bug
---
<!--
If this is a BUG REPORT, please:
- Fill in as much of the template below as you can. If you leave out
information, we can't help you as well.
If this is a FEATURE REQUEST, please:
- Describe *in detail* the feature/behavior/change you'd like to see.
In both cases, be ready for followup questions, and please respond in a timely
Please, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.

11
.github/ISSUE_TEMPLATE/enhancement.md vendored Normal file

@ -0,0 +1,11 @@
---
name: Enhancement Request
about: Suggest an enhancement to the Kubespray project
labels: kind/feature
---
<!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
**Why is this needed**:

20
.github/ISSUE_TEMPLATE/failing-test.md vendored Normal file

@ -0,0 +1,20 @@
---
name: Failing Test
about: Report test failures in Kubespray CI jobs
labels: kind/failing-test
---
<!-- Please only use this template for submitting reports about failing tests in Kubespray CI jobs -->
**Which jobs are failing**:
**Which test(s) are failing**:
**Since when has it been failing**:
**Testgrid link**:
**Reason for failure**:
**Anything else we need to know**:

18
.github/ISSUE_TEMPLATE/support.md vendored Normal file

@ -0,0 +1,18 @@
---
name: Support Request
about: Support request or question relating to Kubespray
labels: triage/support
---
<!--
STOP -- PLEASE READ!
GitHub is not the right place for support requests.
If you're looking for help, check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubespray) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
You can also post your question on the [Kubernetes Slack](http://slack.k8s.io/) or the [Discuss Kubernetes](https://discuss.kubernetes.io/) forum.
If the matter is security related, please disclose it privately via https://kubernetes.io/security/.
-->

44
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file

@ -0,0 +1,44 @@
<!-- Thanks for sending a pull request! Here are some tips for you:
1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide
2. Please label this pull request according to what type of issue you are addressing, especially if this is a release targeted pull request. For reference on required PR/issue labels, read here:
https://git.k8s.io/community/contributors/devel/release.md#issue-kind-label
3. Ensure you have added or ran the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/testing.md
4. If you want *faster* PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews
5. Follow the instructions for writing a release note: https://git.k8s.io/community/contributors/guide/release-notes.md
6. If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests
-->
**What type of PR is this?**
> Uncomment only one ` /kind <>` line, hit enter to put that in a new line, and remove leading whitespaces from that line:
>
> /kind api-change
> /kind bug
> /kind cleanup
> /kind design
> /kind documentation
> /kind failing-test
> /kind feature
> /kind flake
**What this PR does / why we need it**:
**Which issue(s) this PR fixes**:
<!--
*Automatically closes linked issue when PR is merged.
Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.
_If PR is about `failing-tests or flakes`, please post the related issues/tests in a comment and do not use `Fixes`_*
-->
Fixes #
**Special notes for your reviewer**:
**Does this PR introduce a user-facing change?**:
<!--
If no, just write "NONE" in the release-note block below.
If yes, a release note is required:
Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string "action required".
-->
```release-note
```


@ -1,9 +1,10 @@
---
stages:
- unit-tests
- moderator
- deploy-part1
- moderator
- deploy-part2
- deploy-gce
- deploy-special
variables:
@ -27,785 +28,45 @@ variables:
UPGRADE_TEST: "false"
LOG_LEVEL: "-vv"
# asia-east1-a
# asia-northeast1-a
# europe-west1-b
# us-central1-a
# us-east1-b
# us-west1-a
before_script:
- /usr/bin/python -m pip install -r tests/requirements.txt
- ./tests/scripts/rebase.sh
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- python -m pip install -r tests/requirements.txt
- mkdir -p /.ssh
.job: &job
tags:
- kubernetes
- docker
image: quay.io/kubespray/kubespray:v2.8
.docker_service: &docker_service
services:
- docker:dind
.create_cluster: &create_cluster
<<: *job
<<: *docker_service
.gce_variables: &gce_variables
GCE_USER: travis
SSH_USER: $GCE_USER
CLOUD_MACHINE_TYPE: "g1-small"
CI_PLATFORM: "gce"
PRIVATE_KEY: $GCE_PRIVATE_KEY
.do_variables: &do_variables
PRIVATE_KEY: $DO_PRIVATE_KEY
CI_PLATFORM: "do"
SSH_USER: root
- packet
variables:
KUBESPRAY_VERSION: v2.11.2
image: quay.io/kubespray/kubespray:$KUBESPRAY_VERSION
.testcases: &testcases
<<: *job
<<: *docker_service
cache:
key: "$CI_BUILD_REF_NAME"
paths:
- downloads/
- $HOME/.cache
services:
- docker:dind
before_script:
- docker info
- /usr/bin/python -m pip install -r requirements.txt
- /usr/bin/python -m pip install -r tests/requirements.txt
- mkdir -p /.ssh
- mkdir -p $HOME/.ssh
- ansible-playbook --version
- export PYPATH=$([[ ! "$CI_JOB_NAME" =~ "coreos" ]] && echo /usr/bin/python || echo /opt/bin/python)
- echo "CI_JOB_NAME is $CI_JOB_NAME"
- echo "PYPATH is $PYPATH"
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- ./tests/scripts/rebase.sh
- ./tests/scripts/testcases_prepare.sh
script:
- pwd
- ls
- echo ${PWD}
- echo "${STARTUP_SCRIPT}"
- cd tests && make create-${CI_PLATFORM} -s ; cd -
# Check out latest tag if testing upgrade
- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
# Checkout the CI vars file so it is available
- test "${UPGRADE_TEST}" != "false" && git checkout "${CI_BUILD_REF}" tests/files/${CI_JOB_NAME}.yml
# Workaround https://github.com/kubernetes-sigs/kubespray/issues/2021
- 'sh -c "echo ignore_assert_errors: true | tee -a tests/files/${CI_JOB_NAME}.yml"'
# Create cluster
- >
ansible-playbook
-i ${ANSIBLE_INVENTORY}
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_ssh_user=${SSH_USER}
-e local_release_dir=${PWD}/downloads
--limit "all:!fake_hosts"
cluster.yml
# Repeat deployment if testing upgrade
- >
if [ "${UPGRADE_TEST}" != "false" ]; then
test "${UPGRADE_TEST}" == "basic" && PLAYBOOK="cluster.yml";
test "${UPGRADE_TEST}" == "graceful" && PLAYBOOK="upgrade-cluster.yml";
git checkout "${CI_BUILD_REF}";
ansible-playbook
-i ${ANSIBLE_INVENTORY}
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_ssh_user=${SSH_USER}
-e local_release_dir=${PWD}/downloads
--limit "all:!fake_hosts"
$PLAYBOOK;
fi
# Tests Cases
## Test Master API
- ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/010_check-apiserver.yml $LOG_LEVEL
## Ping between 2 pods
- ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/030_check-network.yml $LOG_LEVEL
## Advanced DNS checks
- ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root --limit "all:!fake_hosts" tests/testcases/040_check-network-adv.yml $LOG_LEVEL
## Idempotency checks 1/5 (repeat deployment)
- >
if [ "${IDEMPOT_CHECK}" = "true" ]; then
ansible-playbook
-i ${ANSIBLE_INVENTORY}
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_python_interpreter=${PYPATH}
-e local_release_dir=${PWD}/downloads
--limit "all:!fake_hosts"
cluster.yml;
fi
## Idempotency checks 2/5 (Advanced DNS checks)
- >
if [ "${IDEMPOT_CHECK}" = "true" ]; then
ansible-playbook
-i ${ANSIBLE_INVENTORY}
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
--limit "all:!fake_hosts"
tests/testcases/040_check-network-adv.yml $LOG_LEVEL;
fi
## Idempotency checks 3/5 (reset deployment)
- >
if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
ansible-playbook
-i ${ANSIBLE_INVENTORY}
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_python_interpreter=${PYPATH}
-e reset_confirmation=yes
--limit "all:!fake_hosts"
reset.yml;
fi
## Idempotency checks 4/5 (redeploy after reset)
- >
if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
ansible-playbook
-i ${ANSIBLE_INVENTORY}
-b --become-user=root
--private-key=${HOME}/.ssh/id_rsa
-u $SSH_USER
${SSH_ARGS}
${LOG_LEVEL}
-e @${CI_TEST_VARS}
-e ansible_python_interpreter=${PYPATH}
-e local_release_dir=${PWD}/downloads
--limit "all:!fake_hosts"
cluster.yml;
fi
## Idempotency checks 5/5 (Advanced DNS checks)
- >
if [ "${IDEMPOT_CHECK}" = "true" -a "${RESET_CHECK}" = "true" ]; then
ansible-playbook -i ${ANSIBLE_INVENTORY} -e ansible_python_interpreter=${PYPATH}
-u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root
--limit "all:!fake_hosts"
tests/testcases/040_check-network-adv.yml $LOG_LEVEL;
fi
- ./tests/scripts/testcases_run.sh
after_script:
- cd tests && make delete-${CI_PLATFORM} -s ; cd -
.gce: &gce
<<: *testcases
variables:
<<: *gce_variables
.do: &do
variables:
<<: *do_variables
<<: *testcases
# Test matrix. Leave the comments for markup scripts.
.coreos_calico_aio_variables: &coreos_calico_aio_variables
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.ubuntu18_flannel_aio_variables: &ubuntu18_flannel_aio_variables
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
# stage: deploy-part1
UPGRADE_TEST: "graceful"
.ubuntu_canal_kubeadm_variables: &ubuntu_canal_kubeadm_variables
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.ubuntu_canal_ha_variables: &ubuntu_canal_ha_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.ubuntu_contiv_sep_variables: &ubuntu_contiv_sep_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.coreos_cilium_variables: &coreos_cilium_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.ubuntu_cilium_sep_variables: &ubuntu_cilium_sep_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.rhel7_weave_variables: &rhel7_weave_variables
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.centos7_flannel_addons_variables: &centos7_flannel_addons_variables
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
.debian9_calico_variables: &debian9_calico_variables
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
.coreos_canal_variables: &coreos_canal_variables
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
.rhel7_canal_sep_variables: &rhel7_canal_sep_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.ubuntu_weave_sep_variables: &ubuntu_weave_sep_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.centos7_calico_ha_variables: &centos7_calico_ha_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.centos7_kube_router_variables: &centos7_kube_router_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.centos7_multus_calico_variables: &centos7_multus_calico_variables
# stage: deploy-part2
UPGRADE_TEST: "graceful"
.coreos_alpha_weave_ha_variables: &coreos_alpha_weave_ha_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.coreos_kube_router_variables: &coreos_kube_router_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.ubuntu_rkt_sep_variables: &ubuntu_rkt_sep_variables
# stage: deploy-part1
MOVED_TO_GROUP_VARS: "true"
.ubuntu_flannel_variables: &ubuntu_flannel_variables
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
.ubuntu_kube_router_variables: &ubuntu_kube_router_variables
# stage: deploy-special
MOVED_TO_GROUP_VARS: "true"
.opensuse_canal_variables: &opensuse_canal_variables
# stage: deploy-part2
MOVED_TO_GROUP_VARS: "true"
# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
### PR JOBS PART1
gce_ubuntu18-flannel-aio:
stage: deploy-part1
<<: *job
<<: *gce
variables:
<<: *ubuntu18_flannel_aio_variables
<<: *gce_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
### PR JOBS PART2
gce_coreos-calico-aio:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *coreos_calico_aio_variables
<<: *gce_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
gce_centos7-flannel-addons:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_flannel_addons_variables
when: on_success
except: ['triggers']
only: [/^pr-.*$/]
### MANUAL JOBS
gce_centos-weave-kubeadm-sep:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos_weave_kubeadm_variables
when: on_success
only: ['triggers']
gce_ubuntu-weave-sep:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_weave_sep_variables
when: manual
only: ['triggers']
gce_coreos-calico-sep-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_calico_aio_variables
when: on_success
only: ['triggers']
gce_ubuntu-canal-ha-triggers:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_ha_variables
when: on_success
only: ['triggers']
gce_centos7-flannel-addons-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_flannel_addons_variables
when: on_success
only: ['triggers']
gce_ubuntu-weave-sep-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_weave_sep_variables
when: on_success
only: ['triggers']
# More builds for PRs/merges (manual) and triggers (auto)
do_ubuntu-canal-ha:
stage: deploy-part2
<<: *job
<<: *do
variables:
<<: *do_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-canal-ha:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_ha_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-canal-kubeadm:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_kubeadm_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-canal-kubeadm-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_canal_kubeadm_variables
when: on_success
only: ['triggers']
gce_ubuntu-flannel-ha:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_flannel_variables
when: manual
except: ['triggers']
gce_centos-weave-kubeadm-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos_weave_kubeadm_variables
when: on_success
only: ['triggers']
gce_ubuntu-contiv-sep:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_contiv_sep_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-cilium:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_cilium_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-cilium-sep:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_cilium_sep_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_rhel7-weave:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *rhel7_weave_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_rhel7-weave-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *rhel7_weave_variables
when: on_success
only: ['triggers']
gce_debian9-calico-upgrade:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *debian9_calico_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_debian9-calico-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *debian9_calico_variables
when: on_success
only: ['triggers']
gce_coreos-canal:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_canal_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-canal-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_canal_variables
when: on_success
only: ['triggers']
gce_rhel7-canal-sep:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *rhel7_canal_sep_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_rhel7-canal-sep-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *rhel7_canal_sep_variables
when: on_success
only: ['triggers']
gce_centos7-calico-ha:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_calico_ha_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_centos7-calico-ha-triggers:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_calico_ha_variables
when: on_success
only: ['triggers']
gce_centos7-kube-router:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_kube_router_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_centos7-multus-calico:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *centos7_multus_calico_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_opensuse-canal:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *opensuse_canal_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
# no triggers yet https://github.com/kubernetes-incubator/kargo/issues/613
gce_coreos-alpha-weave-ha:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_alpha_weave_ha_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_coreos-kube-router:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *coreos_kube_router_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-rkt-sep:
stage: deploy-part2
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_rkt_sep_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_ubuntu-kube-router-sep:
stage: deploy-special
<<: *job
<<: *gce
variables:
<<: *gce_variables
<<: *ubuntu_kube_router_variables
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
- ./tests/scripts/testcases_cleanup.sh
# For failfast, at least 1 job must be defined in .gitlab-ci.yml
# Premoderated with manual actions
ci-authorized:
<<: *job
extends: .job
stage: moderator
before_script:
- apt-get -y install jq
script:
- /bin/sh scripts/premoderator.sh
except: ['triggers', 'master']
# Disable ci moderator
only: []
syntax-check:
<<: *job
stage: unit-tests
script:
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root cluster.yml -vvv --syntax-check
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root upgrade-cluster.yml -vvv --syntax-check
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root reset.yml -vvv --syntax-check
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root extra_playbooks/upgrade-only-k8s.yml -vvv --syntax-check
except: ['triggers', 'master']
yamllint:
<<: *job
stage: unit-tests
script:
- yamllint .
except: ['triggers', 'master']
tox-inventory-builder:
stage: unit-tests
<<: *job
script:
- pip install tox
- cd contrib/inventory_builder && tox
when: manual
except: ['triggers', 'master']
# Tests for contrib/terraform/
.terraform_install: &terraform_install
<<: *job
before_script:
# Set Ansible config
- cp ansible.cfg ~/.ansible.cfg
# Install Terraform
- apt-get install -y unzip
- curl https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip > /tmp/terraform.zip
- unzip /tmp/terraform.zip && mv ./terraform /usr/local/bin/ && terraform --version
# Prepare inventory
- cp -LRp contrib/terraform/$PROVIDER/sample-inventory inventory/$CLUSTER
- cd inventory/$CLUSTER
- ln -s ../../contrib/terraform/$PROVIDER/hosts
- terraform init ../../contrib/terraform/$PROVIDER
# Copy SSH keypair
- mkdir -p ~/.ssh
- echo "$PACKET_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa
- chmod 400 ~/.ssh/id_rsa
- echo "$PACKET_PUBLIC_KEY" | base64 -d > ~/.ssh/id_rsa.pub
- export TF_VAR_public_key_path=""
only: ['master', /^pr-.*$/]
.terraform_validate: &terraform_validate
<<: *terraform_install
stage: unit-tests
script:
- terraform validate -var-file=cluster.tf ../../contrib/terraform/$PROVIDER
- terraform fmt -check -diff ../../contrib/terraform/$PROVIDER
.terraform_apply: &terraform_apply
<<: *terraform_install
stage: deploy-part2
when: manual
script:
- terraform apply -auto-approve ../../contrib/terraform/$PROVIDER
- ansible-playbook -i hosts ../../cluster.yml
after_script:
# Cleanup regardless of exit code
- cd inventory/$CLUSTER
- terraform destroy -auto-approve ../../contrib/terraform/$PROVIDER
tf-validate-openstack:
<<: *terraform_validate
variables:
TF_VERSION: 0.11.11
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-packet:
<<: *terraform_validate
variables:
TF_VERSION: 0.11.11
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
tf-apply-packet:
<<: *terraform_apply
variables:
TF_VERSION: 0.11.11
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
TF_VAR_cluster_name: $CI_COMMIT_REF_NAME
TF_VAR_number_of_k8s_masters: "1"
TF_VAR_number_of_k8s_nodes: "1"
TF_VAR_plan_k8s_masters: t1.small.x86
TF_VAR_plan_k8s_nodes: t1.small.x86
TF_VAR_facility: "ewr1"
include:
- .gitlab-ci/lint.yml
- .gitlab-ci/shellcheck.yml
- .gitlab-ci/terraform.yml
- .gitlab-ci/packet.yml

247
.gitlab-ci/gce.yml Normal file

@ -0,0 +1,247 @@
---
.gce_variables: &gce_variables
GCE_USER: travis
SSH_USER: $GCE_USER
CLOUD_MACHINE_TYPE: "g1-small"
CI_PLATFORM: "gce"
PRIVATE_KEY: $GCE_PRIVATE_KEY
.cache: &cache
cache:
key: "$CI_BUILD_REF_NAME"
paths:
- downloads/
- $HOME/.cache
.gce: &gce
extends: .testcases
<<: *cache
variables:
<<: *gce_variables
tags:
- gce
except: ['triggers']
only: [/^pr-.*$/]
.centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
# stage: deploy-part1
UPGRADE_TEST: "graceful"
.centos7_multus_calico_variables: &centos7_multus_calico_variables
# stage: deploy-gce
UPGRADE_TEST: "graceful"
# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
### PR JOBS PART1
gce_ubuntu18-flannel-aio:
stage: deploy-part1
<<: *gce
when: manual
### PR JOBS PART2
gce_coreos-calico-aio:
stage: deploy-gce
<<: *gce
when: on_success
gce_centos7-flannel-addons:
stage: deploy-gce
<<: *gce
when: manual
### MANUAL JOBS
gce_centos-weave-kubeadm-sep:
stage: deploy-gce
extends: .gce
variables:
<<: *centos_weave_kubeadm_variables
when: on_success
only: ['triggers']
except: []
gce_ubuntu-weave-sep:
stage: deploy-gce
<<: *gce
when: manual
only: ['triggers']
except: []
gce_coreos-calico-sep-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_ubuntu-canal-ha-triggers:
stage: deploy-special
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_centos7-flannel-addons-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_ubuntu-weave-sep-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
# More builds for PRs/merges (manual) and triggers (auto)
gce_ubuntu-canal-ha:
stage: deploy-special
<<: *gce
when: manual
gce_ubuntu-canal-kubeadm:
stage: deploy-gce
<<: *gce
when: manual
gce_ubuntu-canal-kubeadm-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_ubuntu-flannel-ha:
stage: deploy-gce
<<: *gce
when: manual
gce_centos-weave-kubeadm-triggers:
stage: deploy-gce
extends: .gce
variables:
<<: *centos_weave_kubeadm_variables
when: on_success
only: ['triggers']
except: []
gce_ubuntu-contiv-sep:
stage: deploy-special
<<: *gce
when: manual
gce_coreos-cilium:
stage: deploy-special
<<: *gce
when: manual
gce_ubuntu18-cilium-sep:
stage: deploy-special
<<: *gce
when: manual
gce_rhel7-weave:
stage: deploy-gce
<<: *gce
when: manual
gce_rhel7-weave-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_debian9-calico-upgrade:
stage: deploy-gce
<<: *gce
when: manual
gce_debian9-calico-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_coreos-canal:
stage: deploy-gce
<<: *gce
when: manual
gce_coreos-canal-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_rhel7-canal-sep:
stage: deploy-special
<<: *gce
when: manual
gce_rhel7-canal-sep-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_centos7-calico-ha:
stage: deploy-special
<<: *gce
when: manual
gce_centos7-calico-ha-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_centos7-kube-router:
stage: deploy-special
<<: *gce
when: manual
gce_centos7-multus-calico:
stage: deploy-gce
extends: .gce
variables:
<<: *centos7_multus_calico_variables
when: manual
gce_oracle-canal:
stage: deploy-gce
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_opensuse-canal:
stage: deploy-gce
<<: *gce
when: manual
# no triggers yet https://github.com/kubernetes-incubator/kargo/issues/613
gce_coreos-alpha-weave-ha:
stage: deploy-special
<<: *gce
when: manual
gce_coreos-kube-router:
stage: deploy-special
<<: *gce
when: manual
gce_ubuntu-kube-router-sep:
stage: deploy-special
<<: *gce
when: manual

63
.gitlab-ci/lint.yml Normal file

@ -0,0 +1,63 @@
---
yamllint:
extends: .job
stage: unit-tests
variables:
LANG: C.UTF-8
script:
- yamllint --strict .
except: ['triggers', 'master']
vagrant-validate:
extends: .job
stage: unit-tests
script:
- curl -sL https://releases.hashicorp.com/vagrant/2.2.4/vagrant_2.2.4_x86_64.deb -o /tmp/vagrant_2.2.4_x86_64.deb
- dpkg -i /tmp/vagrant_2.2.4_x86_64.deb
- vagrant validate --ignore-provider
except: ['triggers', 'master']
ansible-lint:
extends: .job
stage: unit-tests
# lint every yml/yaml file that looks like it contains Ansible plays
script: |-
grep -Rl '^- hosts: \|^ hosts: ' --include \*.yml --include \*.yaml . | xargs -P 4 -n 25 ansible-lint -v
except: ['triggers', 'master']
syntax-check:
extends: .job
stage: unit-tests
variables:
ANSIBLE_INVENTORY: inventory/local-tests.cfg
ANSIBLE_REMOTE_USER: root
ANSIBLE_BECOME: "true"
ANSIBLE_BECOME_USER: root
ANSIBLE_VERBOSITY: "3"
script:
- ansible-playbook --syntax-check cluster.yml
- ansible-playbook --syntax-check upgrade-cluster.yml
- ansible-playbook --syntax-check reset.yml
- ansible-playbook --syntax-check extra_playbooks/upgrade-only-k8s.yml
except: ['triggers', 'master']
tox-inventory-builder:
stage: unit-tests
extends: .job
before_script:
- ./tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip install -r tests/requirements.txt
script:
- pip3 install tox
- cd contrib/inventory_builder && tox
except: ['triggers', 'master']
markdownlint:
stage: unit-tests
image: node
before_script:
- npm install -g markdownlint-cli
script:
- markdownlint README.md docs --ignore docs/_sidebar.md

126
.gitlab-ci/packet.yml Normal file

@ -0,0 +1,126 @@
---
.packet: &packet
extends: .testcases
variables:
CI_PLATFORM: "packet"
SSH_USER: "kubespray"
tags:
- packet
only: [/^pr-.*$/]
except: ['triggers']
packet_ubuntu18-calico-aio:
stage: deploy-part1
extends: .packet
when: on_success
# ### PR JOBS PART2
packet_centos7-flannel-addons:
extends: .packet
stage: deploy-part2
when: on_success
# ### MANUAL JOBS
packet_centos-weave-kubeadm-sep:
stage: deploy-part2
extends: .packet
when: on_success
variables:
UPGRADE_TEST: basic
packet_ubuntu-weave-sep:
stage: deploy-part2
extends: .packet
when: manual
# # More builds for PRs/merges (manual) and triggers (auto)
packet_ubuntu-canal-ha:
stage: deploy-special
extends: .packet
when: manual
packet_ubuntu-canal-kubeadm:
stage: deploy-part2
extends: .packet
when: on_success
packet_ubuntu-flannel-ha:
stage: deploy-part2
extends: .packet
when: manual
# Contiv does not work in k8s v1.16
# packet_ubuntu-contiv-sep:
# stage: deploy-part2
# extends: .packet
# when: on_success
packet_ubuntu18-cilium-sep:
stage: deploy-special
extends: .packet
when: manual
packet_ubuntu18-flannel-containerd:
stage: deploy-part2
extends: .packet
when: manual
packet_debian9-macvlan-sep:
stage: deploy-part2
extends: .packet
when: manual
packet_debian9-calico-upgrade:
stage: deploy-part2
extends: .packet
when: on_success
variables:
UPGRADE_TEST: graceful
packet_debian10-containerd:
stage: deploy-part2
extends: .packet
when: on_success
packet_centos7-calico-ha:
stage: deploy-part2
extends: .packet
when: manual
packet_centos7-kube-ovn:
stage: deploy-part2
extends: .packet
when: on_success
packet_centos7-kube-router:
stage: deploy-part2
extends: .packet
when: manual
packet_centos7-multus-calico:
stage: deploy-part2
extends: .packet
when: manual
packet_opensuse-canal:
stage: deploy-part2
extends: .packet
when: manual
packet_oracle-7-canal:
stage: deploy-part2
extends: .packet
when: manual
packet_ubuntu-kube-router-sep:
stage: deploy-part2
extends: .packet
when: manual
packet_amazon-linux-2-aio:
stage: deploy-part2
extends: .packet
when: manual

15
.gitlab-ci/shellcheck.yml Normal file

@ -0,0 +1,15 @@
---
shellcheck:
extends: .job
stage: unit-tests
variables:
SHELLCHECK_VERSION: v0.6.0
before_script:
- ./tests/scripts/rebase.sh
- curl --silent "https://storage.googleapis.com/shellcheck/shellcheck-"${SHELLCHECK_VERSION}".linux.x86_64.tar.xz" | tar -xJv
- cp shellcheck-"${SHELLCHECK_VERSION}"/shellcheck /usr/bin/
- shellcheck --version
script:
# Run shellcheck for all *.sh except contrib/
- find . -name '*.sh' -not -path './contrib/*' | xargs shellcheck --severity error
except: ['triggers', 'master']

161
.gitlab-ci/terraform.yml Normal file

@ -0,0 +1,161 @@
---
# Tests for contrib/terraform/
.terraform_install:
extends: .job
before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- ./tests/scripts/rebase.sh
- ./tests/scripts/testcases_prepare.sh
- ./tests/scripts/terraform_install.sh
# Set Ansible config
- cp ansible.cfg ~/.ansible.cfg
# Prepare inventory
- cp contrib/terraform/$PROVIDER/sample-inventory/cluster.tfvars .
- ln -s contrib/terraform/$PROVIDER/hosts
- terraform init contrib/terraform/$PROVIDER
# Copy SSH keypair
- mkdir -p ~/.ssh
- echo "$PACKET_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa
- chmod 400 ~/.ssh/id_rsa
- echo "$PACKET_PUBLIC_KEY" | base64 -d > ~/.ssh/id_rsa.pub
.terraform_validate:
extends: .terraform_install
stage: unit-tests
only: ['master', /^pr-.*$/]
script:
- terraform validate -var-file=cluster.tfvars contrib/terraform/$PROVIDER
- terraform fmt -check -diff contrib/terraform/$PROVIDER
.terraform_apply:
extends: .terraform_install
stage: deploy-part2
when: manual
only: [/^pr-.*$/]
variables:
ANSIBLE_INVENTORY_UNPARSED_FAILED: "true"
ANSIBLE_INVENTORY: hosts
CI_PLATFORM: tf
TF_VAR_ssh_user: $SSH_USER
TF_VAR_cluster_name: $CI_JOB_ID
script:
- tests/scripts/testcases_run.sh
after_script:
# Cleanup regardless of exit code
- ./tests/scripts/testcases_cleanup.sh
tf-validate-openstack:
extends: .terraform_validate
variables:
TF_VERSION: 0.12.12
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-packet:
extends: .terraform_validate
variables:
TF_VERSION: 0.12.12
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-aws:
extends: .terraform_validate
variables:
TF_VERSION: 0.12.12
PROVIDER: aws
CLUSTER: $CI_COMMIT_REF_NAME
# tf-packet-ubuntu16-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: 0.12.12
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
# TF_VAR_number_of_k8s_nodes: "1"
# TF_VAR_plan_k8s_masters: t1.small.x86
# TF_VAR_plan_k8s_nodes: t1.small.x86
# TF_VAR_facility: ewr1
# TF_VAR_public_key_path: ""
# TF_VAR_operating_system: ubuntu_16_04
#
# tf-packet-ubuntu18-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: 0.12.12
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
# TF_VAR_number_of_k8s_nodes: "1"
# TF_VAR_plan_k8s_masters: t1.small.x86
# TF_VAR_plan_k8s_nodes: t1.small.x86
# TF_VAR_facility: ams1
# TF_VAR_public_key_path: ""
# TF_VAR_operating_system: ubuntu_18_04
.ovh_variables: &ovh_variables
OS_AUTH_URL: https://auth.cloud.ovh.net/v3
OS_PROJECT_ID: 8d3cd5d737d74227ace462dee0b903fe
OS_PROJECT_NAME: "9361447987648822"
OS_USER_DOMAIN_NAME: Default
OS_PROJECT_DOMAIN_ID: default
OS_USERNAME: 8XuhBMfkKVrk
OS_REGION_NAME: UK1
OS_INTERFACE: public
OS_IDENTITY_API_VERSION: "3"
tf-ovh_ubuntu18-calico:
extends: .terraform_apply
when: on_success
variables:
<<: *ovh_variables
TF_VERSION: 0.12.12
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
SSH_USER: ubuntu
TF_VAR_number_of_k8s_masters: "0"
TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
TF_VAR_number_of_etcd: "0"
TF_VAR_number_of_k8s_nodes: "0"
TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
TF_VAR_number_of_bastions: "0"
TF_VAR_number_of_k8s_masters_no_etcd: "0"
TF_VAR_use_neutron: "0"
TF_VAR_floatingip_pool: "Ext-Net"
TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
TF_VAR_network_name: "Ext-Net"
TF_VAR_flavor_k8s_master: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
TF_VAR_image: "Ubuntu 18.04"
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
tf-ovh_coreos-calico:
extends: .terraform_apply
when: on_success
variables:
<<: *ovh_variables
TF_VERSION: 0.12.12
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
SSH_USER: core
TF_VAR_number_of_k8s_masters: "0"
TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
TF_VAR_number_of_etcd: "0"
TF_VAR_number_of_k8s_nodes: "0"
TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
TF_VAR_number_of_bastions: "0"
TF_VAR_number_of_k8s_masters_no_etcd: "0"
TF_VAR_use_neutron: "0"
TF_VAR_floatingip_pool: "Ext-Net"
TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
TF_VAR_network_name: "Ext-Net"
TF_VAR_flavor_k8s_master: "4d4fd037-9493-4f2b-9afe-b542b5248eac" # b2-7
TF_VAR_flavor_k8s_node: "4d4fd037-9493-4f2b-9afe-b542b5248eac" # b2-7
TF_VAR_image: "CoreOS Stable"
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'

2
.markdownlint.yaml Normal file

@ -0,0 +1,2 @@
---
MD013: false


@ -1,11 +1,11 @@
FROM ubuntu:16.04
FROM ubuntu:18.04
RUN mkdir /kubespray
WORKDIR /kubespray
RUN apt update -y && \
apt install -y \
libssl-dev python-dev sshpass apt-transport-https jq \
ca-certificates curl gnupg2 software-properties-common python-pip
libssl-dev python3-dev sshpass apt-transport-https jq moreutils \
ca-certificates curl gnupg2 software-properties-common python3-pip rsync
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
@ -13,7 +13,6 @@ RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - &&
stable" \
&& apt update -y && apt-get install docker-ce -y
COPY . .
RUN /usr/bin/python -m pip install pip -U && /usr/bin/python -m pip install -r tests/requirements.txt && python -m pip install -r requirements.txt
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.11.3/bin/linux/amd64/kubectl \
RUN /usr/bin/python3 -m pip install pip -U && /usr/bin/python3 -m pip install -r tests/requirements.txt && python3 -m pip install -r requirements.txt && update-alternatives --install /usr/bin/python python /usr/bin/python3 1
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kubectl \
&& chmod a+x kubectl && cp kubectl /usr/local/bin/kubectl


@ -4,17 +4,12 @@ aliases:
- mattymo
- atoms
- chadswen
- rsmitty
- bogdando
- bradbeam
- woopstar
- mirwan
- miouge1
- riverzhang
- holser
- smana
- verwilst
- woopstar
kubespray-reviewers:
- jjungnickel
- archifleks
- chapsuk
- mirwan
- miouge1
- holmsten

262
README.md

@ -1,19 +1,17 @@
![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/docs/img/kubernetes-logo.png)
# Deploy a Production Ready Kubernetes Cluster
Deploy a Production Ready Kubernetes Cluster
============================================
![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/docs/img/kubernetes-logo.png)
If you have questions, check the [documentation](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
- **Continuous integration tests**
- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
- **Continuous integration tests**
Quick Start
-----------
## Quick Start
To deploy the cluster you can use :
@ -21,31 +19,35 @@ To deploy the cluster you can use :
#### Usage
# Install dependencies from ``requirements.txt``
sudo pip install -r requirements.txt
```ShellSession
# Install dependencies from ``requirements.txt``
sudo pip install -r requirements.txt
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
# Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/inventory.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `-b` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without -b the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml
```
Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
As a consequence, `ansible-playbook` command will fail with:
```
```raw
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
```
probably pointing on a task depending on a module present in requirements.txt (i.e. "unseal vault").
One way of solving this would be to uninstall the Ansible package and then, to install it via pip but it is not always possible.
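As a rough sketch of that first option (assuming a Debian/Ubuntu control host where Ansible came from apt; the package manager and paths will differ elsewhere):

```ShellSession
# Remove the distro-packaged Ansible so it no longer shadows the pip-installed modules
sudo apt-get remove -y ansible
# Reinstall Ansible and the other dependencies into a single site-packages tree
sudo pip install -r requirements.txt
```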
@ -56,156 +58,154 @@ A workaround consists of setting `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` en
For Vagrant we need to install python dependencies for provisioning tasks.
Check if Python and pip are installed:
python -V && pip -V
```ShellSession
python -V && pip -V
```
If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install the necessary requirements
sudo pip install -r requirements.txt
vagrant up
```ShellSession
sudo pip install -r requirements.txt
vagrant up
```
Documents
---------
## Documents
- [Requirements](#requirements)
- [Kubespray vs ...](docs/comparisons.md)
- [Getting started](docs/getting-started.md)
- [Ansible inventory and tags](docs/ansible.md)
- [Integration with existing ansible repo](docs/integration.md)
- [Deployment data variables](docs/vars.md)
- [DNS stack](docs/dns-stack.md)
- [HA mode](docs/ha-mode.md)
- [Network plugins](#network-plugins)
- [Vagrant install](docs/vagrant.md)
- [CoreOS bootstrap](docs/coreos.md)
- [Debian Jessie setup](docs/debian.md)
- [openSUSE setup](docs/opensuse.md)
- [Downloaded artifacts](docs/downloads.md)
- [Cloud providers](docs/cloud.md)
- [OpenStack](docs/openstack.md)
- [AWS](docs/aws.md)
- [Azure](docs/azure.md)
- [vSphere](docs/vsphere.md)
- [Packet Host](docs/packet.md)
- [Large deployments](docs/large-deployments.md)
- [Upgrades basics](docs/upgrades.md)
- [Roadmap](docs/roadmap.md)
- [Requirements](#requirements)
- [Kubespray vs ...](docs/comparisons.md)
- [Getting started](docs/getting-started.md)
- [Ansible inventory and tags](docs/ansible.md)
- [Integration with existing ansible repo](docs/integration.md)
- [Deployment data variables](docs/vars.md)
- [DNS stack](docs/dns-stack.md)
- [HA mode](docs/ha-mode.md)
- [Network plugins](#network-plugins)
- [Vagrant install](docs/vagrant.md)
- [CoreOS bootstrap](docs/coreos.md)
- [Debian Jessie setup](docs/debian.md)
- [openSUSE setup](docs/opensuse.md)
- [Downloaded artifacts](docs/downloads.md)
- [Cloud providers](docs/cloud.md)
- [OpenStack](docs/openstack.md)
- [AWS](docs/aws.md)
- [Azure](docs/azure.md)
- [vSphere](docs/vsphere.md)
- [Packet Host](docs/packet.md)
- [Large deployments](docs/large-deployments.md)
- [Upgrades basics](docs/upgrades.md)
- [Roadmap](docs/roadmap.md)
Supported Linux Distributions
-----------------------------
## Supported Linux Distributions
- **Container Linux by CoreOS**
- **Debian** Buster, Jessie, Stretch, Wheezy
- **Ubuntu** 16.04, 18.04
- **CentOS/RHEL** 7
- **Fedora** 28
- **Fedora/CentOS** Atomic
- **openSUSE** Leap 42.3/Tumbleweed
- **Container Linux by CoreOS**
- **Debian** Buster, Jessie, Stretch, Wheezy
- **Ubuntu** 16.04, 18.04
- **CentOS/RHEL** 7
- **Fedora** 28
- **Fedora/CentOS** Atomic
- **openSUSE** Leap 42.3/Tumbleweed
- **Oracle Linux** 7
Note: Upstart/SysV init based OS types are not supported.
Supported Components
--------------------
## Supported Components
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.13.5
- [etcd](https://github.com/coreos/etcd) v3.2.26
- [docker](https://www.docker.com/) v18.06 (see note)
- [rkt](https://github.com/rkt/rkt) v1.21.0 (see Note 2)
- [cri-o](http://cri-o.io/) v1.11.5 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS)
- Network Plugin
- [calico](https://github.com/projectcalico/calico) v3.4.0
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.3.0
- [contiv](https://github.com/contiv/install) v1.2.1
- [flanneld](https://github.com/coreos/flannel) v0.11.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.5
- [multus](https://github.com/intel/multus-cni) v3.1.autoconf
- [weave](https://github.com/weaveworks/weave) v2.5.1
- Application
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v0.5.2
- [coredns](https://github.com/coredns/coredns) v1.4.0
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.21.0
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.16.8
- [etcd](https://github.com/coreos/etcd) v3.3.12
- [docker](https://www.docker.com/) v18.06 (see note)
- [containerd](https://containerd.io/) v1.2.13
- [cri-o](http://cri-o.io/) v1.14.0 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v0.8.1
- [calico](https://github.com/projectcalico/calico) v3.7.3
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.5.5
- [contiv](https://github.com/contiv/install) v1.2.1
- [flanneld](https://github.com/coreos/flannel) v0.11.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.5
- [multus](https://github.com/intel/multus-cni) v3.2.1
- [weave](https://github.com/weaveworks/weave) v2.5.2
- Application
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v0.11.0
- [coredns](https://github.com/coredns/coredns) v1.6.0
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.26.1
Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md) was updated to 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06. kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md) was updated to 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09. kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
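As an illustration only, on a CentOS/RHEL node the installed Docker package could be frozen with the versionlock plugin (exact package names depend on your repos):

```ShellSession
# Install the versionlock plugin and lock the currently installed docker-ce version
sudo yum -y install yum-plugin-versionlock
sudo yum versionlock add docker-ce
```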
Note 2: rkt support as docker alternative is limited to control plane (etcd and
kubelet). Docker is still used for Kubernetes cluster workloads and network
plugins' related OS services. Also note, only one of the supported network
plugins can be deployed for a given single cluster.
## Requirements
Requirements
------------
- **Ansible v2.7.6 (or newer) and python-netaddr is installed on the machine
that will run Ansible commands**
- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment))
- The target servers are configured to allow **IPv4 forwarding**.
- **Your ssh key must be copied** to all the servers part of your inventory.
- The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
- **Minimum required version of Kubernetes is v1.15**
- **Ansible v2.7.16 and python-netaddr is installed on the machine that will run Ansible commands**
- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment))
- The target servers are configured to allow **IPv4 forwarding**.
- **Your ssh key must be copied** to all the servers part of your inventory.
- The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
in order to avoid any issue during deployment you should disable your firewall.
- If kubespray is ran from non-root user account, correct privilege escalation method
- If kubespray is ran from non-root user account, correct privilege escalation method
should be configured in the target servers. Then the `ansible_become` flag
or command parameters `--become or -b` should be specified.
Hardware:
Hardware:
These limits are safe guarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.
- Master
- Memory: 1500 MB
- Node
- Memory: 1024 MB
- Master
- Memory: 1500 MB
- Node
- Memory: 1024 MB
Network Plugins
---------------
## Network Plugins
You can choose between 6 network plugins. (default: `calico`, except Vagrant uses `flannel`)
You can choose between 10 network plugins. (default: `calico`, except Vagrant uses `flannel`)
- [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.
- [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.
- [calico](docs/calico.md): bgp (layer 3) networking.
- [calico](docs/calico.md): bgp (layer 3) networking.
- [canal](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
- [canal](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [contiv](docs/contiv.md): supports vlan, vxlan, bgp and Cisco SDN networking. This plugin is able to
- [contiv](docs/contiv.md): supports vlan, vxlan, bgp and Cisco SDN networking. This plugin is able to
apply firewall policies, segregate containers in multiple network and bridging pods onto physical networks.
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
(Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html)).
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
(Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
- [kube-router](docs/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
- [kube-ovn](docs/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
- [kube-router](docs/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
simplicity and high performance: it uses IPVS to provide Kube Services Proxy (if setup to replace kube-proxy),
iptables for network policies, and BGP for pods L3 networking (with optionally BGP peering with out-of-cluster BGP peers).
It can also optionally advertise routes to Kubernetes cluster Pods CIDRs, ClusterIPs, ExternalIPs and LoadBalancerIPs.
- [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
- [macvlan](docs/macvlan.md): Macvlan is a Linux network driver. Pods have their own unique Mac and Ip address, connected directly the physical (layer 2) network.
- [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
The choice is defined with the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/netcheck.md).
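As a quick, illustrative override (the variable can equally be set in your inventory group_vars), the default plugin could be swapped at deploy time:

```ShellSession
# Hypothetical example: deploy with flannel instead of the default calico
ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root \
  -e kube_network_plugin=flannel cluster.yml
```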
Community docs and resources
----------------------------
## Community docs and resources
- [kubernetes.io/docs/getting-started-guides/kubespray/](https://kubernetes.io/docs/getting-started-guides/kubespray/)
- [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
- [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty
- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=N9q51JgbWu8)
- [kubernetes.io/docs/setup/production-environment/tools/kubespray/](https://kubernetes.io/docs/setup/production-environment/tools/kubespray/)
- [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
- [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty
- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=N9q51JgbWu8)
Tools and projects on top of Kubespray
--------------------------------------
## Tools and projects on top of Kubespray
- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/master/doc/integrations/ansible.rst)
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/v4/doc/integrations/ansible.rst)
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
CI Tests
--------
## CI Tests
[![Build graphs](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/badges/master/build.svg)](https://gitlab.com/kubespray-ci/kubernetes-incubator__kubespray/pipelines)
[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/build.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
CI/end-to-end tests sponsored by Google (GCE)
See the [test matrix](docs/test_cases.md) for details.


@ -3,16 +3,19 @@
The Kubespray Project is released on an as-needed basis. The process is as follows:
1. An issue is proposing a new release with a changelog since the last release
2. At least one of the [OWNERS](OWNERS) must LGTM this release
3. An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION`
4. The release issue is closed
5. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
2. At least one of the [approvers](OWNERS_ALIASES) must approve this release
3. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
4. An approver creates a release branch in the form `release-vX.Y`
5. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) docker image is built and tagged
6. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
7. The release issue is closed
8. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
## Major/minor releases, merge freezes and milestones
* Kubespray does not maintain stable branches for releases. Releases are tags, not
branches, and there are no backports. Therefore, there is no need for merge
freezes as well.
* Kubespray maintains one branch for major releases (vX.Y). Minor releases are available only as tags.
* Security patches and bugs might be backported.
* Fixes for major releases (vX.x.0) and minor releases (vX.Y.x) are delivered
via maintenance releases (vX.Y.Z) and assigned to the corresponding open

20
Vagrantfile vendored

@ -21,10 +21,11 @@ SUPPORTED_OS = {
"ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"},
"ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"},
"centos" => {box: "centos/7", user: "vagrant"},
"centos-bento" => {box: "bento/centos-7.5", user: "vagrant"},
"centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
"fedora" => {box: "fedora/28-cloud-base", user: "vagrant"},
"opensuse" => {box: "opensuse/openSUSE-15.0-x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
}
# Defaults for config options defined in CONFIG
@ -180,11 +181,20 @@ Vagrant.configure("2") do |config|
"flannel_interface": "eth1",
"kube_network_plugin": $network_plugin,
"kube_network_plugin_multus": $multi_networking,
"docker_keepcache": "1",
"download_run_once": "False",
"download_run_once": "True",
"download_localhost": "False",
"download_cache_dir": ENV['HOME'] + "/kubespray_cache",
# Make kubespray cache even when download_run_once is false
"download_force_cache": "True",
# Keeping the cache on the nodes can improve provisioning speed while debugging kubespray
"download_keep_remote_cache": "False",
"docker_keepcache": "1",
# These two settings will put kubectl and admin.config in $inventory/artifacts
"kubeconfig_localhost": "True",
"kubectl_localhost": "True",
"local_path_provisioner_enabled": "#{$local_path_provisioner_enabled}",
"local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}"
"local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}",
"ansible_ssh_user": SUPPORTED_OS[$os][:user]
}
# Only execute the Ansible provisioner once, when all the machines are up and ready.
@ -196,7 +206,7 @@ Vagrant.configure("2") do |config|
ansible.inventory_path = $ansible_inventory_path
end
ansible.become = true
ansible.limit = "all"
ansible.limit = "all,localhost"
ansible.host_key_checking = false
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
ansible.host_vars = host_vars


@ -4,6 +4,8 @@ ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
[defaults]
strategy_plugins = plugins/mitogen/ansible_mitogen/plugins/strategy
# https://github.com/ansible/ansible/issues/56930 (to ignore group names with - and .)
force_valid_group_names = ignore
host_key_checking=False
gathering = smart
@ -14,6 +16,6 @@ library = ./library
callback_whitelist = profile_tasks
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg
[inventory]
ignore_patterns = artifacts, credentials


@ -3,11 +3,11 @@
gather_facts: false
become: no
tasks:
- name: "Check ansible version >=2.7.6"
- name: "Check ansible version >=2.7.8"
assert:
msg: "Ansible must be v2.7.6 or higher"
msg: "Ansible must be v2.7.8 or higher"
that:
- ansible_version.string is version("2.7.6", ">=")
- ansible_version.string is version("2.7.8", ">=")
tags:
- check
vars:
@ -19,57 +19,50 @@
- { role: kubespray-defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
- hosts: k8s-cluster:etcd:calico-rr
- hosts: k8s-cluster:etcd
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
gather_facts: false
vars:
# Need to disable pipelining for bootstrap-os as some systems have requiretty in sudoers set, which makes pipelining
# fail. bootstrap-os fixes this on these systems, so in later plays it can be enabled.
ansible_ssh_pipelining: false
roles:
- { role: kubespray-defaults}
- { role: bootstrap-os, tags: bootstrap-os}
- hosts: k8s-cluster:etcd:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
vars:
ansible_ssh_pipelining: true
gather_facts: false
pre_tasks:
- name: gather facts from all instances
setup:
delegate_to: "{{item}}"
delegate_facts: true
with_items: "{{ groups['k8s-cluster'] + groups['etcd'] + groups['calico-rr']|default([]) }}"
run_once: true
- hosts: k8s-cluster:etcd:calico-rr
- hosts: k8s-cluster:etcd
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/preinstall, tags: preinstall }
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
- { role: download, tags: download, when: "not skip_downloads" }
environment: "{{proxy_env}}"
environment: "{{ proxy_env }}"
- hosts: etcd
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: etcd, tags: etcd, etcd_cluster_setup: true, etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}" }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: true
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: not etcd_kubeadm_enabled| default(false)
- hosts: k8s-cluster:calico-rr
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: etcd, tags: etcd, etcd_cluster_setup: false, etcd_events_cluster_setup: false }
- role: etcd
tags: etcd
vars:
etcd_cluster_setup: false
etcd_events_cluster_setup: false
when: not etcd_kubeadm_enabled| default(false)
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/node, tags: node }
environment: "{{proxy_env}}"
environment: "{{ proxy_env }}"
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@ -85,6 +78,13 @@
- { role: kubespray-defaults}
- { role: kubernetes/kubeadm, tags: kubeadm}
- { role: network_plugin, tags: network }
- { role: kubernetes/node-label }
- hosts: calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: network_plugin/calico/rr, tags: ['network', 'calico_rr']}
- hosts: kube-master[0]
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@ -102,16 +102,15 @@
- { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
- { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
- hosts: calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: network_plugin/calico/rr, tags: network }
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps, tags: apps }
environment: "{{ proxy_env }}"
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
environment: "{{proxy_env}}"


@ -42,8 +42,11 @@ class SearchEC2Tags(object):
region = os.environ['REGION']
ec2 = boto3.resource('ec2', region)
instances = ec2.instances.filter(Filters=[{'Name': 'tag:'+tag_key, 'Values': tag_value}, {'Name': 'instance-state-name', 'Values': ['running']}])
filters = [{'Name': 'tag:'+tag_key, 'Values': tag_value}, {'Name': 'instance-state-name', 'Values': ['running']}]
cluster_name = os.getenv('CLUSTER_NAME')
if cluster_name:
filters.append({'Name': 'tag-key', 'Values': ['kubernetes.io/cluster/'+cluster_name]})
instances = ec2.instances.filter(Filters=filters)
for instance in instances:
##Suppose default vpc_visibility is private


@ -7,7 +7,7 @@ cluster_name: example
# node that can be used to access the masters and minions
use_bastion: false
# Set this to a prefered name that will be used as the first part of the dns name for your bastotion host. For example: k8s-bastion.<azureregion>.cloudapp.azure.com.
# Set this to a preferred name that will be used as the first part of the dns name for your bastotion host. For example: k8s-bastion.<azureregion>.cloudapp.azure.com.
# This is convenient when exceptions have to be configured on a firewall to allow ssh to the given bastion host.
# bastion_domain_prefix: k8s-bastion


@ -4,8 +4,11 @@
command: azure vm list-ip-address --json {{ azure_resource_group }}
register: vm_list_cmd
- set_fact:
- name: Set vm_list
set_fact:
vm_list: "{{ vm_list_cmd.stdout }}"
- name: Generate inventory
template: src=inventory.j2 dest="{{playbook_dir}}/inventory"
template:
src: inventory.j2
dest: "{{ playbook_dir }}/inventory"


@ -8,9 +8,22 @@
command: az vm list -o json --resource-group {{ azure_resource_group }}
register: vm_list_cmd
- set_fact:
- name: Query Azure Load Balancer Public IP
command: az network public-ip show -o json -g {{ azure_resource_group }} -n kubernetes-api-pubip
register: lb_pubip_cmd
- name: Set VM IP, roles lists and load balancer public IP
set_fact:
vm_ip_list: "{{ vm_ip_list_cmd.stdout }}"
vm_roles_list: "{{ vm_list_cmd.stdout }}"
lb_pubip: "{{ lb_pubip_cmd.stdout }}"
- name: Generate inventory
template: src=inventory.j2 dest="{{playbook_dir}}/inventory"
template:
src: inventory.j2
dest: "{{ playbook_dir }}/inventory"
- name: Generate Load Balancer variables
template:
src: loadbalancer_vars.j2
dest: "{{ playbook_dir }}/loadbalancer_vars.yml"


@ -0,0 +1,8 @@
## External LB example config
apiserver_loadbalancer_domain_name: {{ lb_pubip.dnsSettings.fqdn }}
loadbalancer_apiserver:
address: {{ lb_pubip.ipAddress }}
port: 6443
## Internal loadbalancers for apiservers
loadbalancer_apiserver_localhost: false


@ -29,7 +29,7 @@ sshKeyPath: "/home/{{admin_username}}/.ssh/authorized_keys"
imageReference:
publisher: "OpenLogic"
offer: "CentOS"
sku: "7.2"
sku: "7.5"
version: "latest"
imageReferenceJson: "{{imageReference|to_json}}"


@ -1,10 +1,18 @@
---
- set_fact:
base_dir: "{{playbook_dir}}/.generated/"
- name: Set base_dir
set_fact:
base_dir: "{{ playbook_dir }}/.generated/"
- file: path={{base_dir}} state=directory recurse=true
- name: Create base_dir
file:
path: "{{ base_dir }}"
state: directory
recurse: true
- template: src={{item}} dest="{{base_dir}}/{{item}}"
- name: Store json files in base_dir
template:
src: "{{ item }}"
dest: "{{ base_dir }}/{{ item }}"
with_items:
- network.json
- storage.json


@ -12,7 +12,7 @@
- name: Null-ify some linux tools to ease DIND
file:
src: "/bin/true"
dest: "{{item}}"
dest: "{{ item }}"
state: link
force: yes
with_items:
@ -52,7 +52,7 @@
- rsyslog
- "{{ distro_ssh_service }}"
- name: Create distro user "{{distro_user}}"
- name: Create distro user "{{ distro_user }}"
user:
name: "{{ distro_user }}"
uid: 1000


@ -28,7 +28,7 @@
- /lib/modules:/lib/modules
- "{{ item }}:/dind/docker"
register: containers
with_items: "{{groups.containers}}"
with_items: "{{ groups.containers }}"
tags:
- addresses
@ -79,6 +79,7 @@
with_items: "{{ containers.results }}"
- name: Early hack image install to adapt for DIND
# noqa 302 - this task uses the raw module intentionally
raw: |
rm -fv /usr/bin/udevadm /usr/sbin/udevadm
delegate_to: "{{ item._ansible_item_label|default(item.item) }}"


@ -20,6 +20,8 @@
# Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
# Add hosts with different ip and access ip:
# inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.1.3
# Add hosts with a specific hostname, ip, and optional access ip:
# inventory.py first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
# Delete a host: inventory.py -10.10.1.3
# Delete a host by id: inventory.py -node1
#
@ -44,7 +46,8 @@ import sys
ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster',
'calico-rr']
PROTECTED_NAMES = ROLES
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'load']
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'print_hostnames',
'load']
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
'0': False, 'no': False, 'false': False, 'off': False}
yaml = YAML()
@ -59,6 +62,7 @@ def get_var_as_bool(name, default):
CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.yaml")
KUBE_MASTERS = int(os.environ.get("KUBE_MASTERS_MASTERS", 2))
# Reconfigures cluster distribution at scale
SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50))
MASSIVE_SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 200))
@ -78,7 +82,7 @@ class KubesprayInventory(object):
try:
self.hosts_file = open(config_file, 'r')
self.yaml_config = yaml.load(self.hosts_file)
except FileNotFoundError:
except OSError:
pass
if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
@ -93,14 +97,16 @@ class KubesprayInventory(object):
self.purge_invalid_hosts(self.hosts.keys(), PROTECTED_NAMES)
self.set_all(self.hosts)
self.set_k8s_cluster()
self.set_etcd(list(self.hosts.keys())[:3])
etcd_hosts_count = 3 if len(self.hosts.keys()) >= 3 else 1
self.set_etcd(list(self.hosts.keys())[:etcd_hosts_count])
if len(self.hosts) >= SCALE_THRESHOLD:
self.set_kube_master(list(self.hosts.keys())[3:5])
self.set_kube_master(list(self.hosts.keys())[
etcd_hosts_count:(etcd_hosts_count + KUBE_MASTERS)])
else:
self.set_kube_master(list(self.hosts.keys())[:2])
self.set_kube_master(list(self.hosts.keys())[:KUBE_MASTERS])
self.set_kube_node(self.hosts.keys())
if len(self.hosts) >= SCALE_THRESHOLD:
self.set_calico_rr(list(self.hosts.keys())[:3])
self.set_calico_rr(list(self.hosts.keys())[:etcd_hosts_count])
else: # Show help if no options
self.show_help()
sys.exit(0)
@ -191,8 +197,21 @@ class KubesprayInventory(object):
'ip': ip,
'access_ip': access_ip}
elif host[0].isalpha():
raise Exception("Adding hosts by hostname is not supported.")
if ',' in host:
try:
hostname, ip, access_ip = host.split(',')
except Exception:
hostname, ip = host.split(',')
access_ip = ip
if self.exists_hostname(all_hosts, host):
self.debug("Skipping existing host {0}.".format(host))
continue
elif self.exists_ip(all_hosts, ip):
self.debug("Skipping existing host {0}.".format(ip))
continue
all_hosts[hostname] = {'ansible_host': access_ip,
'ip': ip,
'access_ip': access_ip}
return all_hosts
def range2ips(self, hosts):
@ -202,11 +221,11 @@ class KubesprayInventory(object):
try:
# Python 3.x
start = int(ip_address(start_address))
end = int(ip_address(end_address))
except:
end = int(ip_address(end_address))
except Exception:
# Python 2.7
start = int(ip_address(unicode(start_address)))
end = int(ip_address(unicode(end_address)))
start = int(ip_address(str(start_address)))
end = int(ip_address(str(end_address)))
return [ip_address(ip).exploded for ip in range(start, end + 1)]
for host in hosts:
@ -345,6 +364,8 @@ class KubesprayInventory(object):
self.print_config()
elif command == 'print_ips':
self.print_ips()
elif command == 'print_hostnames':
self.print_hostnames()
elif command == 'load':
self.load_file(args)
else:
@ -358,11 +379,13 @@ Available commands:
help - Display this message
print_cfg - Write inventory file to stdout
print_ips - Write a space-delimited list of IPs from "all" group
print_hostnames - Write a space-delimited list of Hostnames from "all" group
Advanced usage:
Add another host after initial creation: inventory.py 10.10.1.5
Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
Add hosts with different ip and access ip: inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.10.3
Add hosts with a specific hostname, ip, and optional access ip: first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
Delete a host: inventory.py -10.10.1.3
Delete a host by id: inventory.py -node1
@ -378,6 +401,9 @@ MASSIVE_SCALE_THRESHOLD Separate K8s master and ETCD if # of nodes >= 200
def print_config(self):
yaml.dump(self.yaml_config, sys.stdout)
def print_hostnames(self):
print(' '.join(self.yaml_config['all']['hosts'].keys()))
def print_ips(self):
ips = []
for host, opts in self.yaml_config['all']['hosts'].items():


@ -12,6 +12,7 @@
# License for the specific language governing permissions and limitations
# under the License.
import inventory
import mock
import unittest
@ -22,7 +23,7 @@ path = "./contrib/inventory_builder/"
if path not in sys.path:
sys.path.append(path)
import inventory
import inventory # noqa
class TestInventory(unittest.TestCase):
@ -43,8 +44,8 @@ class TestInventory(unittest.TestCase):
def test_get_ip_from_opts_invalid(self):
optstring = "notanaddr=value something random!chars:D"
self.assertRaisesRegexp(ValueError, "IP parameter not found",
self.inv.get_ip_from_opts, optstring)
self.assertRaisesRegex(ValueError, "IP parameter not found",
self.inv.get_ip_from_opts, optstring)
def test_ensure_required_groups(self):
groups = ['group1', 'group2']
@ -63,8 +64,8 @@ class TestInventory(unittest.TestCase):
def test_get_host_id_invalid(self):
bad_hostnames = ['node', 'no99de', '01node', 'node.111111']
for hostname in bad_hostnames:
self.assertRaisesRegexp(ValueError, "Host name must end in an",
self.inv.get_host_id, hostname)
self.assertRaisesRegex(ValueError, "Host name must end in an",
self.inv.get_host_id, hostname)
def test_build_hostnames_add_one(self):
changed_hosts = ['10.90.0.2']
@ -192,8 +193,8 @@ class TestInventory(unittest.TestCase):
('node2', {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'})])
self.assertRaisesRegexp(ValueError, "Unable to find host",
self.inv.delete_host_by_ip, existing_hosts, ip)
self.assertRaisesRegex(ValueError, "Unable to find host",
self.inv.delete_host_by_ip, existing_hosts, ip)
def test_purge_invalid_hosts(self):
proper_hostnames = ['node1', 'node2']
@ -309,8 +310,8 @@ class TestInventory(unittest.TestCase):
def test_range2ips_incorrect_range(self):
host_range = ['10.90.0.4-a.9b.c.e']
self.assertRaisesRegexp(Exception, "Range of ip_addresses isn't valid",
self.inv.range2ips, host_range)
self.assertRaisesRegex(Exception, "Range of ip_addresses isn't valid",
self.inv.range2ips, host_range)
def test_build_hostnames_different_ips_add_one(self):
changed_hosts = ['10.90.0.2,192.168.0.2']


@ -1,7 +1,7 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = pep8, py27
envlist = pep8, py33
[testenv]
whitelist_externals = py.test


@ -1,15 +1,9 @@
---
- name: Upgrade all packages to the latest version (yum)
yum:
name: '*'
state: latest
when: ansible_os_family == "RedHat"
- name: Install required packages
yum:
name: "{{ item }}"
state: latest
state: present
with_items:
- bind-utils
- ntp
@ -21,23 +15,13 @@
update_cache: yes
cache_valid_time: 3600
name: "{{ item }}"
state: latest
state: present
install_recommends: no
with_items:
- dnsutils
- ntp
when: ansible_os_family == "Debian"
- name: Upgrade all packages to the latest version (apt)
shell: apt-get -o \
Dpkg::Options::=--force-confdef -o \
Dpkg::Options::=--force-confold -q -y \
dist-upgrade
environment:
DEBIAN_FRONTEND: noninteractive
when: ansible_os_family == "Debian"
# Create deployment user if required
- include: user.yml
when: k8s_deployment_user is defined


@ -2,9 +2,11 @@
```
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in clusters that dont run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
```
This playbook aims to automate [this](https://metallb.universe.tf/tutorial/layer2/tutorial). It deploys MetalLB into kubernetes and sets up a layer 2 loadbalancer.
This playbook aims to automate [this](https://metallb.universe.tf/concepts/layer2/). It deploys MetalLB into kubernetes and sets up a layer 2 loadbalancer.
## Install
```
Defaults can be found in contrib/metallb/roles/provision/defaults/main.yml. You can override the defaults by copying the contents of this file to somewhere in inventory/mycluster/group_vars such as inventory/mycluster/groups_vars/k8s-cluster/addons.yml and making any adjustments as required.
ansible-playbook --ask-become -i inventory/sample/hosts.ini contrib/metallb/metallb.yml
```
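A minimal sketch of the override step described above (the target path under inventory/mycluster is just an example; adjust it to your own inventory layout):

```ShellSession
# Copy the role defaults into your inventory and edit ip_range/protocol there
cp contrib/metallb/roles/provision/defaults/main.yml \
   inventory/mycluster/group_vars/k8s-cluster/addons.yml
ansible-playbook --ask-become -i inventory/mycluster/hosts.ini contrib/metallb/metallb.yml
```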


@ -1,6 +1,12 @@
---
metallb:
ip_range: "10.5.0.50-10.5.0.99"
protocol: "layer2"
# additional_address_pools:
# kube_service_pool:
# ip_range: "10.5.1.50-10.5.1.99"
# protocol: "layer2"
# auto_assign: false
limits:
cpu: "100m"
memory: "100Mi"


@ -1,4 +1,9 @@
---
- name: "Kubernetes Apps | Check cluster settings for MetalLB"
fail:
msg: "MetalLB require kube_proxy_strict_arp = true, see https://github.com/danderson/metallb/issues/153#issuecomment-518651132"
when:
- "kube_proxy_mode == 'ipvs' and not kube_proxy_strict_arp"
- name: "Kubernetes Apps | Lay Down MetalLB"
become: true
template: { src: "{{ item }}.j2", dest: "{{ kube_config_dir }}/{{ item }}" }
@ -9,7 +14,7 @@
- name: "Kubernetes Apps | Install and configure MetalLB"
kube:
name: "MetalLB"
kubectl: "{{bin_dir}}/kubectl"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/{{ item.item }}"
state: "{{ item.changed | ternary('latest','present') }}"
become: true
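The strict-ARP check introduced above means the play now stops when kube-proxy runs in IPVS mode without strict ARP. A minimal sketch of satisfying it from the inventory side (assuming the standard sample group_vars layout):

```ShellSession
# Enable strict ARP for kube-proxy before deploying MetalLB in IPVS mode
echo 'kube_proxy_strict_arp: true' >> inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
```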


@ -8,6 +8,14 @@ data:
config: |
address-pools:
- name: loadbalanced
protocol: layer2
protocol: {{ metallb.protocol }}
addresses:
- {{ metallb.ip_range }}
{% if metallb.additional_address_pools is defined %}{% for pool in metallb.additional_address_pools %}
- name: {{ pool }}
protocol: {{ metallb.additional_address_pools[pool].protocol }}
addresses:
- {{ metallb.additional_address_pools[pool].ip_range }}
auto-assign: {{ metallb.additional_address_pools[pool].auto_assign }}
{% endfor %}
{% endif %}


@ -115,7 +115,7 @@ roleRef:
kind: Role
name: config-watcher
---
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: DaemonSet
metadata:
namespace: metallb-system
@ -169,7 +169,7 @@ spec:
- net_raw
---
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: metallb-system


@ -0,0 +1,15 @@
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system


@ -21,7 +21,7 @@ You can specify a `default_release` for apt on Debian/Ubuntu by overriding this
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.5"
For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](http://www.gluster.org/community/documentation/index.php/Getting_started_install) for more info.
For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/) for more info.
## Dependencies


@ -3,7 +3,7 @@
- name: Include OS-specific variables.
include_vars: "{{ ansible_os_family }}.yml"
# Instal xfs package
# Install xfs package
- name: install xfs Debian
apt: name=xfsprogs state=present
when: ansible_os_family == "Debian"
@ -36,7 +36,7 @@
- "{{ gluster_brick_dir }}"
- "{{ gluster_mount_dir }}"
- name: Configure Gluster volume.
- name: Configure Gluster volume with replicas
gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
@ -46,6 +46,18 @@
host: "{{ inventory_hostname }}"
force: yes
run_once: true
when: groups['gfs-cluster']|length > 1
- name: Configure Gluster volume without replicas
gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
brick: "{{ gluster_brick_dir }}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
host: "{{ inventory_hostname }}"
force: yes
run_once: true
when: groups['gfs-cluster']|length <= 1
- name: Mount glusterfs to retrieve disk size
mount:


@ -1,6 +1,8 @@
---
- name: Kubernetes Apps | Lay Down k8s GlusterFS Endpoint and PV
template: src={{item.file}} dest={{kube_config_dir}}/{{item.dest}}
template:
src: "{{ item.file }}"
dest: "{{ kube_config_dir }}/{{ item.dest }}"
with_items:
- { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
- { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}
@ -12,9 +14,9 @@
kube:
name: glusterfs
namespace: default
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.dest}}"
state: "{{item.changed | ternary('latest','present') }}"
kubectl: "{{ bin_dir }}/kubectl"
resource: "{{ item.item.type }}"
filename: "{{ kube_config_dir }}/{{ item.item.dest }}"
state: "{{ item.changed | ternary('latest','present') }}"
with_items: "{{ gluster_pv.results }}"
when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined


@ -14,3 +14,5 @@ ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contr
```
ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/network-storage/heketi/heketi-tear-down.yml
```
Add `--extra-vars "heketi_remove_lvm=true"` to the command above to remove LVM packages from the system
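Put together, the tear-down including LVM removal would look roughly like this:

```ShellSession
# Tear down heketi and also remove the LVM packages it pulled in
ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml \
  contrib/network-storage/heketi/heketi-tear-down.yml --extra-vars "heketi_remove_lvm=true"
```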


@ -4,6 +4,7 @@
register: "initial_heketi_state"
changed_when: false
command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
- name: "Bootstrap heketi."
when:
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Service']\"))|length == 0"
@ -16,15 +17,20 @@
register: "initial_heketi_pod"
command: "{{ bin_dir }}/kubectl get pods --selector=deploy-heketi=pod,glusterfs=heketi-pod,name=deploy-heketi --output=json"
changed_when: false
- name: "Ensure heketi bootstrap pod is up."
assert:
that: "(initial_heketi_pod.stdout|from_json|json_query('items[*]'))|length == 1"
- set_fact:
- name: Store the initial heketi pod name
set_fact:
initial_heketi_pod_name: "{{ initial_heketi_pod.stdout|from_json|json_query(\"items[*].metadata.name|[0]\") }}"
- name: "Test heketi topology."
changed_when: false
register: "heketi_topology"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
- name: "Load heketi topology."
when: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*]\")|flatten|length == 0"
include_tasks: "bootstrap/topology.yml"
@ -42,6 +48,7 @@
command: "{{ bin_dir }}/kubectl get secrets,endpoints,services,jobs --output=json"
changed_when: false
register: "heketi_storage_state"
# ensure endpoints actually exist before trying to move database data to it
- name: "Create heketi storage."
include_tasks: "bootstrap/storage.yml"


@ -6,7 +6,7 @@
- name: "Kubernetes Apps | Install and configure Heketi Bootstrap"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/heketi-bootstrap.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"
- name: "Wait for heketi bootstrap to complete."


@ -6,7 +6,7 @@
- name: "Create heketi storage."
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/heketi-storage-bootstrap.json"
state: "present"
vars:


@ -6,7 +6,7 @@
- name: "Kubernetes Apps | Install and configure GlusterFS daemonset"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/glusterfs-daemonset.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"
- name: "Kubernetes Apps | Label GlusterFS nodes"
@ -33,6 +33,6 @@
- name: "Kubernetes Apps | Install and configure Heketi Service Account"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/heketi-service-account.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"

View File

@ -1,11 +1,19 @@
---
- register: "label_present"
- name: Get storage nodes
register: "label_present"
command: "{{ bin_dir }}/kubectl get node --selector=storagenode=glusterfs,kubernetes.io/hostname={{ node }} --ignore-not-found=true"
changed_when: false
- name: "Assign storage label"
when: "label_present.stdout_lines|length == 0"
command: "{{ bin_dir }}/kubectl label node {{ node }} storagenode=glusterfs"
- register: "label_present"
- name: Get storage nodes again
register: "label_present"
command: "{{ bin_dir }}/kubectl get node --selector=storagenode=glusterfs,kubernetes.io/hostname={{ node }} --ignore-not-found=true"
changed_when: false
- assert: { that: "label_present|length > 0", msg: "Node {{ node }} has not been assigned with label storagenode=glusterfs." }
- name: Ensure the label has been set
assert:
that: "label_present|length > 0"
msg: "Node {{ node }} has not been assigned with label storagenode=glusterfs."

View File

@ -1,19 +1,24 @@
---
- name: "Kubernetes Apps | Lay Down Heketi"
become: true
template: { src: "heketi-deployment.json.j2", dest: "{{ kube_config_dir }}/heketi-deployment.json" }
template:
src: "heketi-deployment.json.j2"
dest: "{{ kube_config_dir }}/heketi-deployment.json"
register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/heketi-deployment.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"
- name: "Ensure heketi is up and running."
changed_when: false
register: "heketi_state"
vars:
heketi_state: { stdout: "{}" }
heketi_state:
stdout: "{}"
pods_query: "items[?kind=='Pod'].status.conditions|[0][?type=='Ready'].status|[0]"
deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
command: "{{ bin_dir }}/kubectl get deployments,pods --selector=glusterfs --output=json"
@ -22,5 +27,7 @@
- "heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
retries: 60
delay: 5
- set_fact:
- name: Set the Heketi pod name
set_fact:
heketi_pod_name: "{{ heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod'].metadata.name|[0]\") }}"

View File

@ -7,7 +7,7 @@
- name: "Kubernetes Apps | Test Heketi"
register: "heketi_service_state"
command: "{{bin_dir}}/kubectl get service heketi-storage-endpoints -o=name --ignore-not-found=true"
command: "{{ bin_dir }}/kubectl get service heketi-storage-endpoints -o=name --ignore-not-found=true"
changed_when: false
- name: "Kubernetes Apps | Bootstrap Heketi"

View File

@ -1,31 +1,44 @@
---
- register: "clusterrolebinding_state"
command: "{{bin_dir}}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
- name: Get clusterrolebindings
register: "clusterrolebinding_state"
command: "{{ bin_dir }}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
changed_when: false
- name: "Kubernetes Apps | Deploy cluster role binding."
when: "clusterrolebinding_state.stdout == \"\""
command: "{{bin_dir}}/kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account"
- register: "clusterrolebinding_state"
command: "{{bin_dir}}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
command: "{{ bin_dir }}/kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account"
- name: Get clusterrolebindings again
register: "clusterrolebinding_state"
command: "{{ bin_dir }}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
changed_when: false
- assert:
- name: Make sure that clusterrolebindings are present now
assert:
that: "clusterrolebinding_state.stdout != \"\""
msg: "Cluster role binding is not present."
- register: "secret_state"
command: "{{bin_dir}}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
- name: Get the heketi-config-secret secret
register: "secret_state"
command: "{{ bin_dir }}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
changed_when: false
- name: "Render Heketi secret configuration."
become: true
template:
src: "heketi.json.j2"
dest: "{{ kube_config_dir }}/heketi.json"
- name: "Deploy Heketi config secret"
when: "secret_state.stdout == \"\""
command: "{{bin_dir}}/kubectl create secret generic heketi-config-secret --from-file={{ kube_config_dir }}/heketi.json"
- register: "secret_state"
command: "{{bin_dir}}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
command: "{{ bin_dir }}/kubectl create secret generic heketi-config-secret --from-file={{ kube_config_dir }}/heketi.json"
- name: Get the heketi-config-secret secret again
register: "secret_state"
command: "{{ bin_dir }}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
changed_when: false
- assert:
- name: Make sure the heketi-config-secret secret exists now
assert:
that: "secret_state.stdout != \"\""
msg: "Heketi config secret is not present."

View File

@ -7,6 +7,6 @@
- name: "Kubernetes Apps | Install and configure Heketi Storage"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/heketi-storage.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"

View File

@ -20,6 +20,6 @@
- name: "Kubernetes Apps | Install and configure Storace Class"
kube:
name: "GlusterFS"
kubectl: "{{bin_dir}}/kubectl"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/storageclass.yml"
state: "{{ rendering.changed | ternary('latest', 'present') }}"

View File

@ -1,6 +1,6 @@
{
"kind": "DaemonSet",
"apiVersion": "extensions/v1beta1",
"apiVersion": "apps/v1",
"metadata": {
"name": "glusterfs",
"labels": {
@ -69,7 +69,7 @@
},
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 60,
"initialDelaySeconds": 3,
"exec": {
"command": [
"/bin/bash",
@ -80,7 +80,7 @@
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 60,
"initialDelaySeconds": 10,
"exec": {
"command": [
"/bin/bash",

View File

@ -30,7 +30,7 @@
},
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"apiVersion": "apps/v1",
"metadata": {
"name": "deploy-heketi",
"labels": {
@ -56,7 +56,7 @@
"serviceAccountName": "heketi-service-account",
"containers": [
{
"image": "heketi/heketi:7",
"image": "heketi/heketi:9",
"imagePullPolicy": "Always",
"name": "deploy-heketi",
"env": [
@ -106,7 +106,7 @@
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 30,
"initialDelaySeconds": 10,
"httpGet": {
"path": "/hello",
"port": 8080

View File

@ -44,7 +44,7 @@
},
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"apiVersion": "apps/v1",
"metadata": {
"name": "heketi",
"labels": {
@ -68,7 +68,7 @@
"serviceAccountName": "heketi-service-account",
"containers": [
{
"image": "heketi/heketi:7",
"image": "heketi/heketi:9",
"imagePullPolicy": "Always",
"name": "heketi",
"env": [
@ -122,7 +122,7 @@
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 30,
"initialDelaySeconds": 10,
"httpGet": {
"path": "/hello",
"port": 8080

View File

@ -16,7 +16,7 @@
{
"addresses": [
{
"ip": "{{ hostvars[node]['ansible_facts']['default_ipv4']['address'] }}"
"ip": "{{ hostvars[node].ip }}"
}
],
"ports": [

View File

@ -12,7 +12,7 @@
"{{ node }}"
],
"storage": [
"{{ hostvars[node]['ansible_facts']['default_ipv4']['address'] }}"
"{{ hostvars[node].ip }}"
]
},
"zone": 1

View File

@ -0,0 +1,2 @@
---
heketi_remove_lvm: false

View File

@ -14,6 +14,8 @@
when: "ansible_os_family == 'Debian'"
- name: "Get volume group information."
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true
shell: "pvs {{ disk_volume_device_1 }} --option vg_name | tail -n+2"
register: "volume_groups"
@ -21,12 +23,16 @@
changed_when: false
- name: "Remove volume groups."
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true
command: "vgremove {{ volume_group }} --yes"
with_items: "{{ volume_groups.stdout_lines }}"
loop_control: { loop_var: "volume_group" }
- name: "Remove physical volume from cluster disks."
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true
command: "pvremove {{ disk_volume_device_1 }} --yes"
ignore_errors: true
@ -36,11 +42,11 @@
yum:
name: "lvm2"
state: "absent"
when: "ansible_os_family == 'RedHat'"
when: "ansible_os_family == 'RedHat' and heketi_remove_lvm"
- name: "Remove lvm utils (Debian)"
become: true
apt:
name: "lvm2"
state: "absent"
when: "ansible_os_family == 'Debian'"
when: "ansible_os_family == 'Debian' and heketi_remove_lvm"

View File

@ -10,7 +10,7 @@ This project will create:
* AWS ELB in the Public Subnet for accessing the Kubernetes API from the internet
**Requirements**
- Terraform 0.8.7 or newer
- Terraform 0.12.0 or newer
**How to Use:**

View File

@ -1,5 +1,5 @@
terraform {
required_version = ">= 0.8.7"
required_version = ">= 0.12.0"
}
provider "aws" {
@ -16,22 +16,22 @@ data "aws_availability_zones" "available" {}
*/
module "aws-vpc" {
source = "modules/vpc"
source = "./modules/vpc"
aws_cluster_name = "${var.aws_cluster_name}"
aws_vpc_cidr_block = "${var.aws_vpc_cidr_block}"
aws_avail_zones = "${slice(data.aws_availability_zones.available.names,0,2)}"
aws_avail_zones = "${slice(data.aws_availability_zones.available.names, 0, 2)}"
aws_cidr_subnets_private = "${var.aws_cidr_subnets_private}"
aws_cidr_subnets_public = "${var.aws_cidr_subnets_public}"
default_tags = "${var.default_tags}"
}
module "aws-elb" {
source = "modules/elb"
source = "./modules/elb"
aws_cluster_name = "${var.aws_cluster_name}"
aws_vpc_id = "${module.aws-vpc.aws_vpc_id}"
aws_avail_zones = "${slice(data.aws_availability_zones.available.names,0,2)}"
aws_avail_zones = "${slice(data.aws_availability_zones.available.names, 0, 2)}"
aws_subnet_ids_public = "${module.aws-vpc.aws_subnet_ids_public}"
aws_elb_api_port = "${var.aws_elb_api_port}"
k8s_secure_api_port = "${var.k8s_secure_api_port}"
@ -39,7 +39,7 @@ module "aws-elb" {
}
module "aws-iam" {
source = "modules/iam"
source = "./modules/iam"
aws_cluster_name = "${var.aws_cluster_name}"
}
@ -54,18 +54,18 @@ resource "aws_instance" "bastion-server" {
instance_type = "${var.aws_bastion_size}"
count = "${length(var.aws_cidr_subnets_public)}"
associate_public_ip_address = true
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_public,count.index)}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_public, count.index)}"
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
vpc_security_group_ids = "${module.aws-vpc.aws_security_group}"
key_name = "${var.AWS_SSH_KEY_NAME}"
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-bastion-${count.index}",
"Cluster", "${var.aws_cluster_name}",
"Role", "bastion-${var.aws_cluster_name}-${count.index}"
))}"
"Name", "kubernetes-${var.aws_cluster_name}-bastion-${count.index}",
"Cluster", "${var.aws_cluster_name}",
"Role", "bastion-${var.aws_cluster_name}-${count.index}"
))}"
}
/*
@ -79,25 +79,25 @@ resource "aws_instance" "k8s-master" {
count = "${var.aws_kube_master_num}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private, count.index)}"
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
vpc_security_group_ids = "${module.aws-vpc.aws_security_group}"
iam_instance_profile = "${module.aws-iam.kube-master-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-master${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "master"
))}"
"Name", "kubernetes-${var.aws_cluster_name}-master${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "master"
))}"
}
resource "aws_elb_attachment" "attach_master_nodes" {
count = "${var.aws_kube_master_num}"
elb = "${module.aws-elb.aws_elb_api_id}"
instance = "${element(aws_instance.k8s-master.*.id,count.index)}"
instance = "${element(aws_instance.k8s-master.*.id, count.index)}"
}
resource "aws_instance" "k8s-etcd" {
@ -106,18 +106,18 @@ resource "aws_instance" "k8s-etcd" {
count = "${var.aws_etcd_num}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private, count.index)}"
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
vpc_security_group_ids = "${module.aws-vpc.aws_security_group}"
key_name = "${var.AWS_SSH_KEY_NAME}"
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-etcd${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "etcd"
))}"
"Name", "kubernetes-${var.aws_cluster_name}-etcd${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "etcd"
))}"
}
resource "aws_instance" "k8s-worker" {
@ -126,19 +126,19 @@ resource "aws_instance" "k8s-worker" {
count = "${var.aws_kube_worker_num}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
availability_zone = "${element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private, count.index)}"
vpc_security_group_ids = ["${module.aws-vpc.aws_security_group}"]
vpc_security_group_ids = "${module.aws-vpc.aws_security_group}"
iam_instance_profile = "${module.aws-iam.kube-worker-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-worker${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "worker"
))}"
"Name", "kubernetes-${var.aws_cluster_name}-worker${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "worker"
))}"
}
/*
@ -148,14 +148,14 @@ resource "aws_instance" "k8s-worker" {
data "template_file" "inventory" {
template = "${file("${path.module}/templates/inventory.tpl")}"
vars {
public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_host=%s" , aws_instance.bastion-server.*.public_ip))}"
connection_strings_master = "${join("\n",formatlist("%s ansible_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
vars = {
public_ip_address_bastion = "${join("\n", formatlist("bastion ansible_host=%s", aws_instance.bastion-server.*.public_ip))}"
connection_strings_master = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
connection_strings_node = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.tags.Name, aws_instance.k8s-worker.*.private_ip))}"
connection_strings_etcd = "${join("\n",formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.tags.Name, aws_instance.k8s-etcd.*.private_ip))}"
list_master = "${join("\n",aws_instance.k8s-master.*.tags.Name)}"
list_node = "${join("\n",aws_instance.k8s-worker.*.tags.Name)}"
list_etcd = "${join("\n",aws_instance.k8s-etcd.*.tags.Name)}"
connection_strings_etcd = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.tags.Name, aws_instance.k8s-etcd.*.private_ip))}"
list_master = "${join("\n", aws_instance.k8s-master.*.tags.Name)}"
list_node = "${join("\n", aws_instance.k8s-worker.*.tags.Name)}"
list_etcd = "${join("\n", aws_instance.k8s-etcd.*.tags.Name)}"
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
}
}
@ -165,7 +165,7 @@ resource "null_resource" "inventories" {
command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
}
triggers {
triggers = {
template = "${data.template_file.inventory.rendered}"
}
}

View File

@ -28,7 +28,7 @@ resource "aws_security_group_rule" "aws-allow-api-egress" {
# Create a new AWS ELB for K8S API
resource "aws_elb" "aws-elb-api" {
name = "kubernetes-elb-${var.aws_cluster_name}"
subnets = ["${var.aws_subnet_ids_public}"]
subnets = var.aws_subnet_ids_public
security_groups = ["${aws_security_group.aws-elb.id}"]
listener {

View File

@ -3,15 +3,15 @@ output "aws_vpc_id" {
}
output "aws_subnet_ids_private" {
value = ["${aws_subnet.cluster-vpc-subnets-private.*.id}"]
value = aws_subnet.cluster-vpc-subnets-private.*.id
}
output "aws_subnet_ids_public" {
value = ["${aws_subnet.cluster-vpc-subnets-public.*.id}"]
value = aws_subnet.cluster-vpc-subnets-public.*.id
}
output "aws_security_group" {
value = ["${aws_security_group.kubernetes.*.id}"]
value = aws_security_group.kubernetes.*.id
}
output "default_tags" {

View File

@ -0,0 +1,53 @@
#Global Vars
aws_cluster_name = "devtest"
#VPC Vars
aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20", "10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20", "10.250.240.0/20"]
#Bastion Host
aws_bastion_size = "t2.medium"
#Kubernetes Cluster
aws_kube_master_num = 3
aws_kube_master_size = "t2.medium"
aws_etcd_num = 3
aws_etcd_size = "t2.medium"
aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium"
#Settings AWS ELB
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = "0.0.0.0"
default_tags = {
# Env = "devtest" # Product = "kubernetes"
}
inventory_file = "../../../inventory/hosts"
## Credentials
#AWS Access Key
AWS_ACCESS_KEY_ID = ""
#AWS Secret Key
AWS_SECRET_ACCESS_KEY = ""
#EC2 SSH Key Name
AWS_SSH_KEY_NAME = ""
#AWS Region
AWS_DEFAULT_REGION = "eu-central-1"

View File

@ -0,0 +1 @@
../../../../inventory/sample/group_vars

View File

@ -2,9 +2,9 @@
aws_cluster_name = "devtest"
#VPC Vars
aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20","10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20","10.250.240.0/20"]
aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20", "10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20", "10.250.240.0/20"]
#Bastion Host
aws_bastion_size = "t2.medium"
@ -12,24 +12,24 @@ aws_bastion_size = "t2.medium"
#Kubernetes Cluster
aws_kube_master_num = 3
aws_kube_master_num = 3
aws_kube_master_size = "t2.medium"
aws_etcd_num = 3
aws_etcd_num = 3
aws_etcd_size = "t2.medium"
aws_kube_worker_num = 4
aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium"
#Settings AWS ELB
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = "0.0.0.0"
default_tags = {
# Env = "devtest"
# Product = "kubernetes"
# Env = "devtest"
# Product = "kubernetes"
}
inventory_file = "../../../inventory/hosts"

View File

@ -1,4 +1,5 @@
.terraform
*.tfvars
!sample-inventory\/cluster.tfvars
*.tfstate
*.tfstate.backup

View File

@ -16,14 +16,13 @@ most modern installs of OpenStack that support the basic services.
- [ELASTX](https://elastx.se/)
- [EnterCloudSuite](https://www.entercloudsuite.com/)
- [FugaCloud](https://fuga.cloud/)
- [Open Telekom Cloud](https://cloud.telekom.de/): requires setting the variable `wait_for_floatingip = "true"` in your cluster.tfvars
- [OVH](https://www.ovh.com/)
- [Rackspace](https://www.rackspace.com/)
- [Ultimum](https://ultimum.io/)
- [VexxHost](https://vexxhost.com/)
- [Zetta](https://www.zetta.io/)
### Known incompatible public clouds
- T-Systems / Open Telekom Cloud: requires `wait_until_associated`
## Approach
The terraform configuration inspects variables found in
@ -70,7 +69,7 @@ binaries available on hyperkube v1.4.3_coreos.0 or higher.
## Requirements
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html) 0.12 or later
- [Install Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html)
- you already have a suitable OS image in Glance
- you already have a floating IP pool created
@ -220,12 +219,14 @@ set OS_PROJECT_DOMAIN_NAME=Default
The construction of the cluster is driven by values found in
[variables.tf](variables.tf).
For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|Variable | Description |
|---------|-------------|
|`cluster_name` | All OpenStack resources will use the Terraform variable `cluster_name` (default `example`) in their name to make it easier to track. For example, the first compute resource will be named `example-kubernetes-1`. |
|`az_list` | List of Availability Zones available in your OpenStack cluster. |
|`network_name` | The name to be given to the internal network that will be generated |
|`network_dns_domain` | (Optional) The dns_domain for the internal network that will be generated |
|`dns_nameservers`| An array of DNS name server names to be used by hosts in the internal subnet. |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
|`external_net` | UUID of the external network that will be routed to |
@ -243,7 +244,16 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
|`supplementary_master_groups` | To add ansible groups to the masters, such as `kube-node` for tainting them as nodes, empty by default. |
|`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube-ingress` for running ingress controller pods, empty by default. |
|`bastion_allowed_remote_ips` | List of CIDR blocks allowed to initiate an SSH connection, `["0.0.0.0/0"]` by default |
|`master_allowed_remote_ips` | List of CIDR blocks allowed to initiate an API connection, `["0.0.0.0/0"]` by default |
|`k8s_allowed_remote_ips` | List of CIDR blocks allowed to initiate an SSH connection, empty by default |
|`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
|`wait_for_floatingip` | Let Terraform poll the instance until the floating IP has been associated, `false` by default. |
|`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
|`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
|`gfs_root_volume_size_in_gb` | Size of the root volume for gluster, 0 to use ephemeral storage |
|`etcd_root_volume_size_in_gb` | Size of the root volume for etcd nodes, 0 to use ephemeral storage |
|`bastion_root_volume_size_in_gb` | Size of the root volume for bastions, 0 to use ephemeral storage |
|`use_server_group` | Create and use openstack nova servergroups, default: false |
#### Terraform state files
@ -274,7 +284,7 @@ This should finish fairly quickly telling you Terraform has successfully initial
You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
$ terraform apply -var-file=cluster.tf ../../contrib/terraform/openstack
$ terraform apply -var-file=cluster.tfvars ../../contrib/terraform/openstack
```
if you chose to create a bastion host, this script will create
@ -288,7 +298,7 @@ pick it up automatically.
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
$ terraform destroy -var-file=cluster.tf ../../contrib/terraform/openstack
$ terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/openstack
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:
@ -323,6 +333,30 @@ $ ssh-add ~/.ssh/id_rsa
If you have deployed and destroyed a previous iteration of your cluster, you will need to clear out any stale keys from your SSH "known hosts" file (`~/.ssh/known_hosts`).
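A minimal sketch of that cleanup, using `ssh-keygen -R` to drop a stale entry (the floating IP below is purely illustrative):
```ShellSession
$ # replace 192.168.100.10 with the floating IP of the node whose host key went stale
$ ssh-keygen -R 192.168.100.10
```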
#### Metadata variables
The [python script](../terraform.py) that reads the
generated `.tfstate` file to generate a dynamic inventory recognizes
some variables within a "metadata" block, defined in a "resource"
block (example):
```
resource "openstack_compute_instance_v2" "example" {
...
metadata {
ssh_user = "ubuntu"
prefer_ipv6 = true
python_bin = "/usr/bin/python3"
}
...
}
```
As the example shows, these let you define the SSH username for
Ansible, a Python binary which is needed by Ansible if
`/usr/bin/python` doesn't exist, and whether the IPv6 address of the
instance should be preferred over IPv4.
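Because this is a standard Ansible dynamic inventory script, it answers the usual `--list` query. A quick way to sanity-check the generated inventory — a sketch, assuming it is run from the cluster's inventory directory so the script can find the `.tfstate` it reads — is:
```ShellSession
$ cd inventory/$CLUSTER
$ # assumes terraform apply was run from this directory, so the .tfstate lives here
$ ../../contrib/terraform/terraform.py --list
```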
#### Bastion host
Bastion access will be determined by:
@ -389,6 +423,14 @@ kube_network_plugin: flannel
# For Container Linux by CoreOS:
resolvconf_mode: host_resolvconf
```
- Set the maximum number of attached Cinder volumes per host (default 256)
```
node_volume_attach_limit: 26
```
- Disable access_ip; this will make all internal cluster traffic go over the local network when a floating IP is attached (by default this value is set to 1)
```
use_access_ip: 0
```
### Deploy Kubernetes

View File

@ -1,16 +1,21 @@
module "network" {
source = "modules/network"
provider "openstack" {
version = "~> 1.17"
}
external_net = "${var.external_net}"
network_name = "${var.network_name}"
subnet_cidr = "${var.subnet_cidr}"
cluster_name = "${var.cluster_name}"
dns_nameservers = "${var.dns_nameservers}"
use_neutron = "${var.use_neutron}"
module "network" {
source = "./modules/network"
external_net = "${var.external_net}"
network_name = "${var.network_name}"
subnet_cidr = "${var.subnet_cidr}"
cluster_name = "${var.cluster_name}"
dns_nameservers = "${var.dns_nameservers}"
network_dns_domain = "${var.network_dns_domain}"
use_neutron = "${var.use_neutron}"
}
module "ips" {
source = "modules/ips"
source = "./modules/ips"
number_of_k8s_masters = "${var.number_of_k8s_masters}"
number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}"
@ -23,7 +28,7 @@ module "ips" {
}
module "compute" {
source = "modules/compute"
source = "./modules/compute"
cluster_name = "${var.cluster_name}"
az_list = "${var.az_list}"
@ -36,6 +41,11 @@ module "compute" {
number_of_bastions = "${var.number_of_bastions}"
number_of_k8s_nodes_no_floating_ip = "${var.number_of_k8s_nodes_no_floating_ip}"
number_of_gfs_nodes_no_floating_ip = "${var.number_of_gfs_nodes_no_floating_ip}"
bastion_root_volume_size_in_gb = "${var.bastion_root_volume_size_in_gb}"
etcd_root_volume_size_in_gb = "${var.etcd_root_volume_size_in_gb}"
master_root_volume_size_in_gb = "${var.master_root_volume_size_in_gb}"
node_root_volume_size_in_gb = "${var.node_root_volume_size_in_gb}"
gfs_root_volume_size_in_gb = "${var.gfs_root_volume_size_in_gb}"
gfs_volume_size_in_gb = "${var.gfs_volume_size_in_gb}"
public_key_path = "${var.public_key_path}"
image = "${var.image}"
@ -49,12 +59,19 @@ module "compute" {
network_name = "${var.network_name}"
flavor_bastion = "${var.flavor_bastion}"
k8s_master_fips = "${module.ips.k8s_master_fips}"
k8s_master_no_etcd_fips = "${module.ips.k8s_master_no_etcd_fips}"
k8s_node_fips = "${module.ips.k8s_node_fips}"
bastion_fips = "${module.ips.bastion_fips}"
bastion_allowed_remote_ips = "${var.bastion_allowed_remote_ips}"
master_allowed_remote_ips = "${var.master_allowed_remote_ips}"
k8s_allowed_remote_ips = "${var.k8s_allowed_remote_ips}"
k8s_allowed_egress_ips = "${var.k8s_allowed_egress_ips}"
supplementary_master_groups = "${var.supplementary_master_groups}"
supplementary_node_groups = "${var.supplementary_node_groups}"
worker_allowed_ports = "${var.worker_allowed_ports}"
wait_for_floatingip = "${var.wait_for_floatingip}"
use_access_ip = "${var.use_access_ip}"
use_server_groups = "${var.use_server_groups}"
network_id = "${module.network.router_id}"
}
@ -72,7 +89,7 @@ output "router_id" {
}
output "k8s_master_fips" {
value = "${module.ips.k8s_master_fips}"
value = "${concat(module.ips.k8s_master_fips, module.ips.k8s_master_no_etcd_fips)}"
}
output "k8s_node_fips" {

View File

@ -1,43 +1,55 @@
data "openstack_images_image_v2" "vm_image" {
name = "${var.image}"
}
data "openstack_images_image_v2" "gfs_image" {
name = "${var.image_gfs == "" ? var.image : var.image_gfs}"
}
resource "openstack_compute_keypair_v2" "k8s" {
name = "kubernetes-${var.cluster_name}"
public_key = "${chomp(file(var.public_key_path))}"
}
resource "openstack_networking_secgroup_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master"
description = "${var.cluster_name} - Kubernetes Master"
name = "${var.cluster_name}-k8s-master"
description = "${var.cluster_name} - Kubernetes Master"
delete_default_rules = true
}
resource "openstack_networking_secgroup_rule_v2" "k8s_master" {
count = "${length(var.master_allowed_remote_ips)}"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "6443"
port_range_max = "6443"
remote_ip_prefix = "0.0.0.0/0"
remote_ip_prefix = "${var.master_allowed_remote_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.k8s_master.id}"
}
resource "openstack_networking_secgroup_v2" "bastion" {
name = "${var.cluster_name}-bastion"
count = "${var.number_of_bastions ? 1 : 0}"
description = "${var.cluster_name} - Bastion Server"
name = "${var.cluster_name}-bastion"
count = "${var.number_of_bastions != "" ? 1 : 0}"
description = "${var.cluster_name} - Bastion Server"
delete_default_rules = true
}
resource "openstack_networking_secgroup_rule_v2" "bastion" {
count = "${var.number_of_bastions ? length(var.bastion_allowed_remote_ips) : 0}"
count = "${var.number_of_bastions != "" ? length(var.bastion_allowed_remote_ips) : 0}"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
remote_ip_prefix = "${var.bastion_allowed_remote_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.bastion.id}"
security_group_id = "${openstack_networking_secgroup_v2.bastion[count.index].id}"
}
resource "openstack_networking_secgroup_v2" "k8s" {
name = "${var.cluster_name}-k8s"
description = "${var.cluster_name} - Kubernetes"
name = "${var.cluster_name}-k8s"
description = "${var.cluster_name} - Kubernetes"
delete_default_rules = true
}
resource "openstack_networking_secgroup_rule_v2" "k8s" {
@ -47,9 +59,29 @@ resource "openstack_networking_secgroup_rule_v2" "k8s" {
security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
}
resource "openstack_networking_secgroup_rule_v2" "k8s_allowed_remote_ips" {
count = "${length(var.k8s_allowed_remote_ips)}"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
remote_ip_prefix = "${var.k8s_allowed_remote_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
}
resource "openstack_networking_secgroup_rule_v2" "egress" {
count = "${length(var.k8s_allowed_egress_ips)}"
direction = "egress"
ethertype = "IPv4"
remote_ip_prefix = "${var.k8s_allowed_egress_ips[count.index]}"
security_group_id = "${openstack_networking_secgroup_v2.k8s.id}"
}
resource "openstack_networking_secgroup_v2" "worker" {
name = "${var.cluster_name}-k8s-worker"
description = "${var.cluster_name} - Kubernetes worker nodes"
name = "${var.cluster_name}-k8s-worker"
description = "${var.cluster_name} - Kubernetes worker nodes"
delete_default_rules = true
}
resource "openstack_networking_secgroup_rule_v2" "worker" {
@ -63,9 +95,27 @@ resource "openstack_networking_secgroup_rule_v2" "worker" {
security_group_id = "${openstack_networking_secgroup_v2.worker.id}"
}
resource "openstack_compute_servergroup_v2" "k8s_master" {
count = "%{ if var.use_server_groups }1%{else}0%{endif}"
name = "k8s-master-srvgrp"
policies = ["anti-affinity"]
}
resource "openstack_compute_servergroup_v2" "k8s_node" {
count = "%{ if var.use_server_groups }1%{else}0%{endif}"
name = "k8s-node-srvgrp"
policies = ["anti-affinity"]
}
resource "openstack_compute_servergroup_v2" "k8s_etcd" {
count = "%{ if var.use_server_groups }1%{else}0%{endif}"
name = "k8s-etcd-srvgrp"
policies = ["anti-affinity"]
}
resource "openstack_compute_instance_v2" "bastion" {
name = "${var.cluster_name}-bastion-${count.index+1}"
count = "${var.number_of_bastions}"
count = "${var.bastion_root_volume_size_in_gb == 0 ? var.number_of_bastions : 0}"
image_name = "${var.image}"
flavor_id = "${var.flavor_bastion}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
@ -75,24 +125,60 @@ resource "openstack_compute_instance_v2" "bastion" {
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.bastion.name}",
"default",
"${element(openstack_networking_secgroup_v2.bastion.*.name, count.index)}",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "bastion"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > contrib/terraform/group_vars/no-floating.yml"
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "bastion_custom_volume_size" {
name = "${var.cluster_name}-bastion-${count.index+1}"
count = "${var.bastion_root_volume_size_in_gb > 0 ? var.number_of_bastions : 0}"
image_name = "${var.image}"
flavor_id = "${var.flavor_bastion}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.bastion_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${element(openstack_networking_secgroup_v2.bastion.*.name, count.index)}",
]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "bastion"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master-${count.index+1}"
count = "${var.number_of_k8s_masters}"
count = "${var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
@ -102,28 +188,76 @@ resource "openstack_compute_instance_v2" "k8s_master" {
name = "${var.network_name}"
}
# The join() hack is described here: https://github.com/hashicorp/terraform/issues/11566
# As a workaround for creating "dynamic" lists (when, for example, no bastion host is created)
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
security_groups = ["${compact(list(
openstack_networking_secgroup_v2.k8s_master.name,
join(" ", openstack_networking_secgroup_v2.bastion.*.id),
openstack_networking_secgroup_v2.k8s.name,
"default",
))}"]
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master_custom_volume_size" {
name = "${var.cluster_name}-k8s-master-${count.index+1}"
count = "${var.master_root_volume_size_in_gb > 0 ? var.number_of_k8s_masters : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.master_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
count = "${var.number_of_k8s_masters_no_etcd}"
count = "${var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_etcd : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
@ -133,26 +267,76 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.network_name}"
}
security_groups = ["${compact(list(
openstack_networking_secgroup_v2.k8s_master.name,
join(" ", openstack_networking_secgroup_v2.bastion.*.id),
openstack_networking_secgroup_v2.k8s.name,
))}"]
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd_custom_volume_size" {
name = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
count = "${var.master_root_volume_size_in_gb > 0 ? var.number_of_k8s_masters_no_etcd : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.master_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "etcd" {
name = "${var.cluster_name}-etcd-${count.index+1}"
count = "${var.number_of_etcd}"
count = "${var.etcd_root_volume_size_in_gb == 0 ? var.number_of_etcd : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}"
@ -164,16 +348,62 @@ resource "openstack_compute_instance_v2" "etcd" {
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_etcd[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_etcd[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,vault,no-floating"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_instance_v2" "etcd_custom_volume_size" {
name = "${var.cluster_name}-etcd-${count.index+1}"
count = "${var.etcd_root_volume_size_in_gb > 0 ? var.number_of_etcd : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_etcd}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.etcd_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_etcd[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_etcd[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,vault,no-floating"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip}"
count = "${var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_floating_ip : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
@ -185,19 +415,66 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
"default",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_custom_volume_size" {
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
count = "${var.master_root_volume_size_in_gb > 0 ? var.number_of_k8s_masters_no_floating_ip : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.master_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
count = "${var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_floating_ip_no_etcd : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_master}"
@ -210,47 +487,65 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.number_of_k8s_nodes}"
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd_custom_volume_size" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
count = "${var.master_root_volume_size_in_gb > 0 ? var.number_of_k8s_masters_no_floating_ip_no_etcd : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
flavor_id = "${var.flavor_k8s_master}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.master_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${compact(list(
openstack_networking_secgroup_v2.k8s_master.name,
join(" ", openstack_networking_secgroup_v2.bastion.*.id),
openstack_networking_secgroup_v2.k8s.name,
"default",
))}"]
security_groups = ["${openstack_networking_secgroup_v2.k8s_master.name}",
"${openstack_networking_secgroup_v2.k8s.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_master[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,${var.supplementary_node_groups}"
kubespray_groups = "kube-master,${var.supplementary_master_groups},k8s-cluster,vault,no-floating"
depends_on = "${var.network_id}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > contrib/terraform/group_vars/no-floating.yml"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
count = "${var.number_of_k8s_nodes_no_floating_ip}"
resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.node_root_volume_size_in_gb == 0 ? var.number_of_k8s_nodes : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
@ -262,44 +557,213 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.worker.name}",
"default",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_node[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_node_custom_volume_size" {
name = "${var.cluster_name}-k8s-node-${count.index+1}"
count = "${var.node_root_volume_size_in_gb > 0 ? var.number_of_k8s_nodes : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.node_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.worker.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_node[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ../../contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element( concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > group_vars/no-floating.yml"
}
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
count = "${var.node_root_volume_size_in_gb == 0 ? var.number_of_k8s_nodes_no_floating_ip : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.worker.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_node[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,no-floating,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip_custom_volume_size" {
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
count = "${var.node_root_volume_size_in_gb > 0 ? var.number_of_k8s_nodes_no_floating_ip : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_k8s_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.vm_image.id}"
source_type = "image"
volume_size = "${var.node_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"${openstack_networking_secgroup_v2.worker.name}",
]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_node[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user}"
kubespray_groups = "kube-node,k8s-cluster,no-floating,${var.supplementary_node_groups}"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_floatingip_associate_v2" "bastion" {
count = "${var.number_of_bastions}"
floating_ip = "${var.bastion_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.bastion.*.id, count.index)}"
count = "${var.bastion_root_volume_size_in_gb == 0 ? var.number_of_bastions : 0}"
floating_ip = "${var.bastion_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.bastion.*.id, count.index)}"
wait_until_associated = "${var.wait_for_floatingip}"
}
resource "openstack_compute_floatingip_associate_v2" "bastion_custom_volume_size" {
count = "${var.bastion_root_volume_size_in_gb > 0 ? var.number_of_bastions : 0}"
floating_ip = "${var.bastion_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.bastion_custom_volume_size.*.id, count.index)}"
wait_until_associated = "${var.wait_for_floatingip}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master" {
count = "${var.number_of_k8s_masters}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master.*.id, count.index)}"
floating_ip = "${var.k8s_master_fips[count.index]}"
count = "${var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters : 0}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master.*.id, count.index)}"
floating_ip = "${var.k8s_master_fips[count.index]}"
wait_until_associated = "${var.wait_for_floatingip}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master_custom_volume_size" {
count = "${var.master_root_volume_size_in_gb > 0 ? var.number_of_k8s_masters : 0}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master_custom_volume_size.*.id, count.index)}"
floating_ip = "${var.k8s_master_fips[count.index]}"
wait_until_associated = "${var.wait_for_floatingip}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master_no_etcd" {
count = "${var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_etcd : 0}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master_no_etcd.*.id, count.index)}"
floating_ip = "${var.k8s_master_no_etcd_fips[count.index]}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master_no_etcd_custom_volume_size" {
count = "${var.master_root_volume_size_in_gb > 0 ? var.number_of_k8s_masters_no_etcd : 0}"
instance_id = "${element(openstack_compute_instance_v2.k8s_master_no_etcd_custom_volume_size.*.id, count.index)}"
floating_ip = "${var.k8s_master_no_etcd_fips[count.index]}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_node" {
count = "${var.number_of_k8s_nodes}"
floating_ip = "${var.k8s_node_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.k8s_node.*.id, count.index)}"
count = "${var.node_root_volume_size_in_gb == 0 ? var.number_of_k8s_nodes : 0}"
floating_ip = "${var.k8s_node_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.k8s_node.*.id, count.index)}"
wait_until_associated = "${var.wait_for_floatingip}"
}
resource "openstack_compute_floatingip_associate_v2" "k8s_node_custom_volume_size" {
count = "${var.node_root_volume_size_in_gb > 0 ? var.number_of_k8s_nodes : 0}"
floating_ip = "${var.k8s_node_fips[count.index]}"
instance_id = "${element(openstack_compute_instance_v2.k8s_node_custom_volume_size.*.id, count.index)}"
wait_until_associated = "${var.wait_for_floatingip}"
}
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
name = "${var.cluster_name}-glusterfs_volume-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
count = "${var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0}"
description = "Non-ephemeral volume for GlusterFS"
size = "${var.gfs_volume_size_in_gb}"
}
resource "openstack_blockstorage_volume_v2" "glusterfs_volume_custom_volume_size" {
name = "${var.cluster_name}-glusterfs_volume-${count.index+1}"
count = "${var.gfs_root_volume_size_in_gb > 0 ? var.number_of_gfs_nodes_no_floating_ip : 0}"
description = "Non-ephemeral volume for GlusterFS"
size = "${var.gfs_volume_size_in_gb}"
}
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
count = "${var.number_of_gfs_nodes_no_floating_ip}"
count = "${var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image_gfs}"
flavor_id = "${var.flavor_gfs_node}"
@ -309,19 +773,69 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}",
"default",
]
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_node[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user_gfs}"
kubespray_groups = "gfs-cluster,network-storage,no-floating"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip_custom_volume_size" {
name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
count = "${var.gfs_root_volume_size_in_gb > 0 ? var.number_of_gfs_nodes_no_floating_ip : 0}"
availability_zone = "${element(var.az_list, count.index)}"
image_name = "${var.image}"
flavor_id = "${var.flavor_gfs_node}"
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
block_device {
uuid = "${data.openstack_images_image_v2.gfs_image.id}"
source_type = "image"
volume_size = "${var.gfs_root_volume_size_in_gb}"
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
network {
name = "${var.network_name}"
}
security_groups = ["${openstack_networking_secgroup_v2.k8s.name}"]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = "${openstack_compute_servergroup_v2.k8s_node[0].id}"
}
}
metadata = {
ssh_user = "${var.ssh_user_gfs}"
kubespray_groups = "gfs-cluster,network-storage,no-floating"
depends_on = "${var.network_id}"
use_access_ip = "${var.use_access_ip}"
}
}
resource "openstack_compute_volume_attach_v2" "glusterfs_volume" {
count = "${var.number_of_gfs_nodes_no_floating_ip}"
count = "${var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0}"
instance_id = "${element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip.*.id, count.index)}"
volume_id = "${element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)}"
}
resource "openstack_compute_volume_attach_v2" "glusterfs_volume_custom_root_volume_size" {
count = "${var.gfs_root_volume_size_in_gb > 0 ? var.number_of_gfs_nodes_no_floating_ip : 0}"
instance_id = "${element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip_custom_volume_size.*.id, count.index)}"
volume_id = "${element(openstack_blockstorage_volume_v2.glusterfs_volume_custom_volume_size.*.id, count.index)}"
}


@ -22,6 +22,16 @@ variable "number_of_bastions" {}
variable "number_of_gfs_nodes_no_floating_ip" {}
variable "bastion_root_volume_size_in_gb" {}
variable "etcd_root_volume_size_in_gb" {}
variable "master_root_volume_size_in_gb" {}
variable "node_root_volume_size_in_gb" {}
variable "gfs_root_volume_size_in_gb" {}
variable "gfs_volume_size_in_gb" {}
variable "public_key_path" {}
@ -54,6 +64,10 @@ variable "k8s_master_fips" {
type = "list"
}
variable "k8s_master_no_etcd_fips" {
type = "list"
}
variable "k8s_node_fips" {
type = "list"
}
@ -66,6 +80,20 @@ variable "bastion_allowed_remote_ips" {
type = "list"
}
variable "master_allowed_remote_ips" {
type = "list"
}
variable "k8s_allowed_remote_ips" {
type = "list"
}
variable "k8s_allowed_egress_ips" {
type = "list"
}
variable "wait_for_floatingip" {}
variable "supplementary_master_groups" {
default = ""
}
@ -77,3 +105,9 @@ variable "supplementary_node_groups" {
variable "worker_allowed_ports" {
type = "list"
}
variable "use_access_ip" {}
variable "use_server_groups" {
type = bool
}


@ -1,5 +1,5 @@
resource "null_resource" "dummy_dependency" {
triggers {
triggers = {
dependency_id = "${var.router_id}"
}
}
@ -10,6 +10,12 @@ resource "openstack_networking_floatingip_v2" "k8s_master" {
depends_on = ["null_resource.dummy_dependency"]
}
resource "openstack_networking_floatingip_v2" "k8s_master_no_etcd" {
count = "${var.number_of_k8s_masters_no_etcd}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
}
resource "openstack_networking_floatingip_v2" "k8s_node" {
count = "${var.number_of_k8s_nodes}"
pool = "${var.floatingip_pool}"


@ -1,11 +1,15 @@
output "k8s_master_fips" {
value = ["${openstack_networking_floatingip_v2.k8s_master.*.address}"]
value = "${openstack_networking_floatingip_v2.k8s_master[*].address}"
}
output "k8s_master_no_etcd_fips" {
value = "${openstack_networking_floatingip_v2.k8s_master_no_etcd[*].address}"
}
output "k8s_node_fips" {
value = ["${openstack_networking_floatingip_v2.k8s_node.*.address}"]
value = "${openstack_networking_floatingip_v2.k8s_node[*].address}"
}
output "bastion_fips" {
value = ["${openstack_networking_floatingip_v2.bastion.*.address}"]
value = "${openstack_networking_floatingip_v2.bastion[*].address}"
}


@ -14,4 +14,4 @@ variable "network_name" {}
variable "router_id" {
default = ""
}
}


@ -8,13 +8,14 @@ resource "openstack_networking_router_v2" "k8s" {
resource "openstack_networking_network_v2" "k8s" {
name = "${var.network_name}"
count = "${var.use_neutron}"
dns_domain = var.network_dns_domain != null ? "${var.network_dns_domain}" : null
admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "k8s" {
name = "${var.cluster_name}-internal-network"
count = "${var.use_neutron}"
network_id = "${openstack_networking_network_v2.k8s.id}"
network_id = "${openstack_networking_network_v2.k8s[count.index].id}"
cidr = "${var.subnet_cidr}"
ip_version = 4
dns_nameservers = "${var.dns_nameservers}"
@ -22,6 +23,6 @@ resource "openstack_networking_subnet_v2" "k8s" {
resource "openstack_networking_router_interface_v2" "k8s" {
count = "${var.use_neutron}"
router_id = "${openstack_networking_router_v2.k8s.id}"
subnet_id = "${openstack_networking_subnet_v2.k8s.id}"
router_id = "${openstack_networking_router_v2.k8s[count.index].id}"
subnet_id = "${openstack_networking_subnet_v2.k8s[count.index].id}"
}


@ -2,6 +2,8 @@ variable "external_net" {}
variable "network_name" {}
variable "network_dns_domain" {}
variable "cluster_name" {}
variable "dns_nameservers" {


@ -1,6 +1,9 @@
# your Kubernetes cluster name here
cluster_name = "i-didnt-read-the-docs"
# list of availability zones available in your OpenStack cluster
#az_list = ["nova"]
# SSH key to use for access to nodes
public_key_path = "~/.ssh/id_rsa.pub"


@ -44,6 +44,26 @@ variable "number_of_gfs_nodes_no_floating_ip" {
default = 0
}
variable "bastion_root_volume_size_in_gb" {
default = 0
}
variable "etcd_root_volume_size_in_gb" {
default = 0
}
variable "master_root_volume_size_in_gb" {
default = 0
}
variable "node_root_volume_size_in_gb" {
default = 0
}
variable "gfs_root_volume_size_in_gb" {
default = 0
}
variable "gfs_volume_size_in_gb" {
default = 75
}
@ -55,12 +75,12 @@ variable "public_key_path" {
variable "image" {
description = "the image to use"
default = "ubuntu-14.04"
default = ""
}
variable "image_gfs" {
description = "Glance image to use for GlusterFS"
default = "ubuntu-16.04"
default = ""
}
variable "ssh_user" {
@ -103,6 +123,12 @@ variable "network_name" {
default = "internal"
}
variable "network_dns_domain" {
description = "dns_domain for the internal network"
type = "string"
default = null
}
variable "use_neutron" {
description = "Use neutron"
default = 1
@ -125,6 +151,11 @@ variable "floatingip_pool" {
default = "external"
}
variable "wait_for_floatingip" {
description = "Terraform will poll the instance until the floating IP has been associated."
default = "false"
}
variable "external_net" {
description = "uuid of the external/public network"
}
@ -145,6 +176,24 @@ variable "bastion_allowed_remote_ips" {
default = ["0.0.0.0/0"]
}
variable "master_allowed_remote_ips" {
description = "An array of CIDRs allowed to access API of masters"
type = "list"
default = ["0.0.0.0/0"]
}
variable "k8s_allowed_remote_ips" {
description = "An array of CIDRs allowed to SSH to hosts"
type = "list"
default = []
}
variable "k8s_allowed_egress_ips" {
description = "An array of CIDRs allowed for egress traffic"
type = "list"
default = ["0.0.0.0/0"]
}
variable "worker_allowed_ports" {
type = "list"
@ -157,3 +206,11 @@ variable "worker_allowed_ports" {
},
]
}
variable "use_access_ip" {
default = 1
}
variable "use_server_groups" {
default = false
}


@ -38,7 +38,7 @@ now six total etcd replicas.
## SSH Key Setup
An SSH keypair is required so Ansible can access the newly provisioned nodes (bare metal Packet hosts). By default, the public SSH key defined in cluster.tf will be installed in authorized_key on the newly provisioned nodes (~/.ssh/id_rsa.pub). Terraform will upload this public key and then it will be distributed out to all the nodes. If you have already set this public key in Packet (i.e. via the portal), then set the public keyfile name in cluster.tf to blank to prevent the duplicate key from being uploaded which will cause an error.
An SSH keypair is required so Ansible can access the newly provisioned nodes (bare metal Packet hosts). By default, the public SSH key defined in cluster.tfvars will be installed in authorized_key on the newly provisioned nodes (~/.ssh/id_rsa.pub). Terraform will upload this public key and then it will be distributed out to all the nodes. If you have already set this public key in Packet (i.e. via the portal), then set the public keyfile name in cluster.tfvars to blank to prevent the duplicate key from being uploaded which will cause an error.
If you don't already have a keypair generated (~/.ssh/id_rsa and ~/.ssh/id_rsa.pub), then a new keypair can be generated with the command:
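A typical invocation (key type and output path shown here are one common choice, not a requirement of this guide) would be:

```ShellSession
# Generate a new RSA keypair at the default location used by the docs above.
ssh-keygen -t rsa -f ~/.ssh/id_rsa
```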
@ -72,7 +72,7 @@ If someone gets this key, they can startup/shutdown hosts in your project!
For more information on how to generate an API key or find your project ID, please see:
https://support.packet.com/kb/articles/api-integrations
The Packet Project ID associated with the key will be set later in cluster.tf.
The Packet Project ID associated with the key will be set later in cluster.tfvars.
For more information about the API, please see:
https://www.packet.com/developers/api/
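The key itself is normally handed to Terraform through the provider's environment variable rather than written into a tracked file; a minimal sketch, assuming the provider reads `PACKET_AUTH_TOKEN` and using a placeholder value:

```ShellSession
# Export the API key only in the shell session that runs Terraform; keep it out of version control.
export PACKET_AUTH_TOKEN="your-packet-api-key"
```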
@ -88,7 +88,7 @@ Note that to deploy several clusters within the same project you need to use [te
The construction of the cluster is driven by values found in
[variables.tf](variables.tf).
For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
The `cluster_name` is used to set a tag on each server deployed as part of this cluster.
This helps when identifying which hosts are associated with each cluster.
@ -138,7 +138,7 @@ This should finish fairly quickly telling you Terraform has successfully initial
You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
$ terraform apply -var-file=cluster.tf ../../contrib/terraform/packet
$ terraform apply -var-file=cluster.tfvars ../../contrib/terraform/packet
$ export ANSIBLE_HOST_KEY_CHECKING=False
$ ansible-playbook -i hosts ../../cluster.yml
```
@ -147,7 +147,7 @@ $ ansible-playbook -i hosts ../../cluster.yml
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
$ terraform destroy -var-file=cluster.tf ../../contrib/terraform/packet
$ terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/packet
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:


@ -1,60 +1,63 @@
# Configure the Packet Provider
provider "packet" {}
provider "packet" {
version = "~> 2.0"
}
resource "packet_ssh_key" "k8s" {
count = "${var.public_key_path != "" ? 1 : 0}"
count = var.public_key_path != "" ? 1 : 0
name = "kubernetes-${var.cluster_name}"
public_key = "${chomp(file(var.public_key_path))}"
public_key = chomp(file(var.public_key_path))
}
resource "packet_device" "k8s_master" {
depends_on = ["packet_ssh_key.k8s"]
depends_on = [packet_ssh_key.k8s]
count = "${var.number_of_k8s_masters}"
hostname = "${var.cluster_name}-k8s-master-${count.index+1}"
plan = "${var.plan_k8s_masters}"
facility = "${var.facility}"
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
count = var.number_of_k8s_masters
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
plan = var.plan_k8s_masters
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master", "etcd", "kube-node"]
}
resource "packet_device" "k8s_master_no_etcd" {
depends_on = ["packet_ssh_key.k8s"]
depends_on = [packet_ssh_key.k8s]
count = "${var.number_of_k8s_masters_no_etcd}"
hostname = "${var.cluster_name}-k8s-master-${count.index+1}"
plan = "${var.plan_k8s_masters_no_etcd}"
facility = "${var.facility}"
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
count = var.number_of_k8s_masters_no_etcd
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
plan = var.plan_k8s_masters_no_etcd
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-master"]
}
resource "packet_device" "k8s_etcd" {
depends_on = ["packet_ssh_key.k8s"]
depends_on = [packet_ssh_key.k8s]
count = "${var.number_of_etcd}"
hostname = "${var.cluster_name}-etcd-${count.index+1}"
plan = "${var.plan_etcd}"
facility = "${var.facility}"
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
count = var.number_of_etcd
hostname = "${var.cluster_name}-etcd-${count.index + 1}"
plan = var.plan_etcd
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
tags = ["cluster-${var.cluster_name}", "etcd"]
}
resource "packet_device" "k8s_node" {
depends_on = ["packet_ssh_key.k8s"]
depends_on = [packet_ssh_key.k8s]
count = "${var.number_of_k8s_nodes}"
hostname = "${var.cluster_name}-k8s-node-${count.index+1}"
plan = "${var.plan_k8s_nodes}"
facility = "${var.facility}"
operating_system = "${var.operating_system}"
billing_cycle = "${var.billing_cycle}"
project_id = "${var.packet_project_id}"
count = var.number_of_k8s_nodes
hostname = "${var.cluster_name}-k8s-node-${count.index + 1}"
plan = var.plan_k8s_nodes
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
tags = ["cluster-${var.cluster_name}", "k8s-cluster", "kube-node"]
}


@ -1,15 +1,16 @@
output "k8s_masters" {
value = "${packet_device.k8s_master.*.access_public_ipv4}"
value = packet_device.k8s_master.*.access_public_ipv4
}
output "k8s_masters_no_etc" {
value = "${packet_device.k8s_master_no_etcd.*.access_public_ipv4}"
value = packet_device.k8s_master_no_etcd.*.access_public_ipv4
}
output "k8s_etcds" {
value = "${packet_device.k8s_etcd.*.access_public_ipv4}"
value = packet_device.k8s_etcd.*.access_public_ipv4
}
output "k8s_nodes" {
value = "${packet_device.k8s_node.*.access_public_ipv4}"
value = packet_device.k8s_node.*.access_public_ipv4
}


@ -54,3 +54,4 @@ variable "number_of_etcd" {
variable "number_of_k8s_nodes" {
default = 0
}


@ -0,0 +1,4 @@
terraform {
required_version = ">= 0.12"
}


@ -1,4 +1,4 @@
#!/usr/bin/env python2
#!/usr/bin/env python3
#
# Copyright 2015 Cisco Systems, Inc.
#
@ -20,15 +20,15 @@
Dynamic inventory for Terraform - finds all `.tfstate` files below the working
directory and generates an inventory based on them.
"""
from __future__ import unicode_literals, print_function
import argparse
from collections import defaultdict
import random
from functools import wraps
import json
import os
import re
VERSION = '0.3.0pre'
VERSION = '0.4.0pre'
def tfstates(root=None):
@ -38,15 +38,58 @@ def tfstates(root=None):
if os.path.splitext(name)[-1] == '.tfstate':
yield os.path.join(dirpath, name)
def convert_to_v3_structure(attributes, prefix=''):
""" Convert the attributes from v4 to v3
Receives a dict and returns a dictionary """
result = {}
if isinstance(attributes, str):
# In the case when we receive a string (e.g. values for security_groups)
return {'{}{}'.format(prefix, random.randint(1,10**10)): attributes}
for key, value in attributes.items():
if isinstance(value, list):
if len(value):
result['{}{}.#'.format(prefix, key)] = len(value)
for i, v in enumerate(value):
result.update(convert_to_v3_structure(v, '{}{}.{}.'.format(prefix, key, i)))
elif isinstance(value, dict):
result['{}{}.%'.format(prefix, key)] = len(value)
for k, v in value.items():
result['{}{}.{}'.format(prefix, key, k)] = v
else:
result['{}{}'.format(prefix, key)] = value
return result
def iterresources(filenames):
for filename in filenames:
with open(filename, 'r') as json_file:
state = json.load(json_file)
for module in state['modules']:
name = module['path'][-1]
for key, resource in module['resources'].items():
yield name, key, resource
tf_version = state['version']
if tf_version == 3:
for module in state['modules']:
name = module['path'][-1]
for key, resource in module['resources'].items():
yield name, key, resource
elif tf_version == 4:
# In version 4 the structure changes so we need to iterate
# each instance inside the resource branch.
for resource in state['resources']:
name = resource['provider'].split('.')[-1]
for instance in resource['instances']:
key = "{}.{}".format(resource['type'], resource['name'])
if 'index_key' in instance:
key = "{}.{}".format(key, instance['index_key'])
data = {}
data['type'] = resource['type']
data['provider'] = resource['provider']
data['depends_on'] = instance.get('depends_on', [])
data['primary'] = {'attributes': convert_to_v3_structure(instance['attributes'])}
if 'id' in instance['attributes']:
data['primary']['id'] = instance['attributes']['id']
data['primary']['meta'] = instance['attributes'].get('meta',{})
yield name, key, data
else:
raise KeyError('tfstate version %d not supported' % tf_version)
## READ RESOURCES
PARSERS = {}
@ -109,7 +152,7 @@ def calculate_mantl_vars(func):
def _parse_prefix(source, prefix, sep='.'):
for compkey, value in source.items():
for compkey, value in list(source.items()):
try:
curprefix, rest = compkey.split(sep, 1)
except ValueError:
@ -127,7 +170,7 @@ def parse_attr_list(source, prefix, sep='.'):
idx, key = compkey.split(sep, 1)
attrs[idx][key] = value
return attrs.values()
return list(attrs.values())
def parse_dict(source, prefix, sep='.'):
@ -139,6 +182,9 @@ def parse_list(source, prefix, sep='.'):
def parse_bool(string_form):
if type(string_form) is bool:
return string_form
token = string_form.lower()[0]
if token == 't':
@ -149,75 +195,6 @@ def parse_bool(string_form):
raise ValueError('could not convert %r to a bool' % string_form)
@parses('triton_machine')
@calculate_mantl_vars
def triton_machine(resource, module_name):
raw_attrs = resource['primary']['attributes']
name = raw_attrs.get('name')
groups = []
attrs = {
'id': raw_attrs['id'],
'dataset': raw_attrs['dataset'],
'disk': raw_attrs['disk'],
'firewall_enabled': parse_bool(raw_attrs['firewall_enabled']),
'image': raw_attrs['image'],
'ips': parse_list(raw_attrs, 'ips'),
'memory': raw_attrs['memory'],
'name': raw_attrs['name'],
'networks': parse_list(raw_attrs, 'networks'),
'package': raw_attrs['package'],
'primary_ip': raw_attrs['primaryip'],
'root_authorized_keys': raw_attrs['root_authorized_keys'],
'state': raw_attrs['state'],
'tags': parse_dict(raw_attrs, 'tags'),
'type': raw_attrs['type'],
'user_data': raw_attrs['user_data'],
'user_script': raw_attrs['user_script'],
# ansible
'ansible_ssh_host': raw_attrs['primaryip'],
'ansible_ssh_port': 22,
'ansible_ssh_user': 'root', # it's "root" on Triton by default
# generic
'public_ipv4': raw_attrs['primaryip'],
'provider': 'triton',
}
# private IPv4
for ip in attrs['ips']:
if ip.startswith('10') or ip.startswith('192.168'): # private IPs
attrs['private_ipv4'] = ip
break
if 'private_ipv4' not in attrs:
attrs['private_ipv4'] = attrs['public_ipv4']
# attrs specific to Mantl
attrs.update({
'consul_dc': _clean_dc(attrs['tags'].get('dc', 'none')),
'role': attrs['tags'].get('role', 'none'),
'ansible_python_interpreter': attrs['tags'].get('python_bin', 'python')
})
# add groups based on attrs
groups.append('triton_image=' + attrs['image'])
groups.append('triton_package=' + attrs['package'])
groups.append('triton_state=' + attrs['state'])
groups.append('triton_firewall_enabled=%s' % attrs['firewall_enabled'])
groups.extend('triton_tags_%s=%s' % item
for item in attrs['tags'].items())
groups.extend('triton_network=' + network
for network in attrs['networks'])
# groups specific to Mantl
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
@parses('packet_device')
def packet_device(resource, tfvars=None):
raw_attrs = resource['primary']['attributes']
@ -226,7 +203,7 @@ def packet_device(resource, tfvars=None):
attrs = {
'id': raw_attrs['id'],
'facility': raw_attrs['facility'],
'facilities': parse_list(raw_attrs, 'facilities'),
'hostname': raw_attrs['hostname'],
'operating_system': raw_attrs['operating_system'],
'locked': parse_bool(raw_attrs['locked']),
@ -236,7 +213,7 @@ def packet_device(resource, tfvars=None):
'state': raw_attrs['state'],
# ansible
'ansible_ssh_host': raw_attrs['network.0.address'],
'ansible_ssh_user': 'root', # it's always "root" on Packet
'ansible_ssh_user': 'root', # Use root by default in packet
# generic
'ipv4_address': raw_attrs['network.0.address'],
'public_ipv4': raw_attrs['network.0.address'],
@ -246,8 +223,11 @@ def packet_device(resource, tfvars=None):
'provider': 'packet',
}
if raw_attrs['operating_system'] == 'coreos_stable':
# For CoreOS set the ssh_user to core
attrs.update({'ansible_ssh_user': 'core'})
# add groups based on attrs
groups.append('packet_facility=' + attrs['facility'])
groups.append('packet_operating_system=' + attrs['operating_system'])
groups.append('packet_locked=%s' % attrs['locked'])
groups.append('packet_state=' + attrs['state'])
@ -259,94 +239,6 @@ def packet_device(resource, tfvars=None):
return name, attrs, groups
@parses('digitalocean_droplet')
@calculate_mantl_vars
def digitalocean_host(resource, tfvars=None):
raw_attrs = resource['primary']['attributes']
name = raw_attrs['name']
groups = []
attrs = {
'id': raw_attrs['id'],
'image': raw_attrs['image'],
'ipv4_address': raw_attrs['ipv4_address'],
'locked': parse_bool(raw_attrs['locked']),
'metadata': json.loads(raw_attrs.get('user_data', '{}')),
'region': raw_attrs['region'],
'size': raw_attrs['size'],
'ssh_keys': parse_list(raw_attrs, 'ssh_keys'),
'status': raw_attrs['status'],
# ansible
'ansible_ssh_host': raw_attrs['ipv4_address'],
'ansible_ssh_port': 22,
'ansible_ssh_user': 'root', # it's always "root" on DO
# generic
'public_ipv4': raw_attrs['ipv4_address'],
'private_ipv4': raw_attrs.get('ipv4_address_private',
raw_attrs['ipv4_address']),
'provider': 'digitalocean',
}
# attrs specific to Mantl
attrs.update({
'consul_dc': _clean_dc(attrs['metadata'].get('dc', attrs['region'])),
'role': attrs['metadata'].get('role', 'none'),
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
})
# add groups based on attrs
groups.append('do_image=' + attrs['image'])
groups.append('do_locked=%s' % attrs['locked'])
groups.append('do_region=' + attrs['region'])
groups.append('do_size=' + attrs['size'])
groups.append('do_status=' + attrs['status'])
groups.extend('do_metadata_%s=%s' % item
for item in attrs['metadata'].items())
# groups specific to Mantl
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
@parses('softlayer_virtualserver')
@calculate_mantl_vars
def softlayer_host(resource, module_name):
raw_attrs = resource['primary']['attributes']
name = raw_attrs['name']
groups = []
attrs = {
'id': raw_attrs['id'],
'image': raw_attrs['image'],
'ipv4_address': raw_attrs['ipv4_address'],
'metadata': json.loads(raw_attrs.get('user_data', '{}')),
'region': raw_attrs['region'],
'ram': raw_attrs['ram'],
'cpu': raw_attrs['cpu'],
'ssh_keys': parse_list(raw_attrs, 'ssh_keys'),
'public_ipv4': raw_attrs['ipv4_address'],
'private_ipv4': raw_attrs['ipv4_address_private'],
'ansible_ssh_host': raw_attrs['ipv4_address'],
'ansible_ssh_port': 22,
'ansible_ssh_user': 'root',
'provider': 'softlayer',
}
# attrs specific to Mantl
attrs.update({
'consul_dc': _clean_dc(attrs['metadata'].get('dc', attrs['region'])),
'role': attrs['metadata'].get('role', 'none'),
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
})
# groups specific to Mantl
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
def openstack_floating_ips(resource):
raw_attrs = resource['primary']['attributes']
attrs = {
@ -397,10 +289,16 @@ def openstack_host(resource, module_name):
attrs['private_ipv4'] = raw_attrs['network.0.fixed_ip_v4']
try:
attrs.update({
'ansible_ssh_host': raw_attrs['access_ip_v4'],
'publicly_routable': True,
})
if 'metadata.prefer_ipv6' in raw_attrs and raw_attrs['metadata.prefer_ipv6'] == "1":
attrs.update({
'ansible_ssh_host': re.sub("[\[\]]", "", raw_attrs['access_ip_v6']),
'publicly_routable': True,
})
else:
attrs.update({
'ansible_ssh_host': raw_attrs['access_ip_v4'],
'publicly_routable': True,
})
except (KeyError, ValueError):
attrs.update({'ansible_ssh_host': '', 'publicly_routable': False})
@ -410,9 +308,9 @@ def openstack_host(resource, module_name):
if 'metadata.ssh_user' in raw_attrs:
attrs['ansible_ssh_user'] = raw_attrs['metadata.ssh_user']
if 'volume.#' in raw_attrs.keys() and int(raw_attrs['volume.#']) > 0:
if 'volume.#' in list(raw_attrs.keys()) and int(raw_attrs['volume.#']) > 0:
device_index = 1
for key, value in raw_attrs.items():
for key, value in list(raw_attrs.items()):
match = re.search("^volume.*.device$", key)
if match:
attrs['disk_volume_device_'+str(device_index)] = value
@ -430,7 +328,7 @@ def openstack_host(resource, module_name):
groups.append('os_image=' + attrs['image']['name'])
groups.append('os_flavor=' + attrs['flavor']['name'])
groups.extend('os_metadata_%s=%s' % item
for item in attrs['metadata'].items())
for item in list(attrs['metadata'].items()))
groups.append('os_region=' + attrs['region'])
# groups specific to Mantl
@ -444,293 +342,24 @@ def openstack_host(resource, module_name):
return name, attrs, groups
@parses('aws_instance')
@calculate_mantl_vars
def aws_host(resource, module_name):
name = resource['primary']['attributes']['tags.Name']
raw_attrs = resource['primary']['attributes']
groups = []
attrs = {
'ami': raw_attrs['ami'],
'availability_zone': raw_attrs['availability_zone'],
'ebs_block_device': parse_attr_list(raw_attrs, 'ebs_block_device'),
'ebs_optimized': parse_bool(raw_attrs['ebs_optimized']),
'ephemeral_block_device': parse_attr_list(raw_attrs,
'ephemeral_block_device'),
'id': raw_attrs['id'],
'key_name': raw_attrs['key_name'],
'private': parse_dict(raw_attrs, 'private',
sep='_'),
'public': parse_dict(raw_attrs, 'public',
sep='_'),
'root_block_device': parse_attr_list(raw_attrs, 'root_block_device'),
'security_groups': parse_list(raw_attrs, 'security_groups'),
'subnet': parse_dict(raw_attrs, 'subnet',
sep='_'),
'tags': parse_dict(raw_attrs, 'tags'),
'tenancy': raw_attrs['tenancy'],
'vpc_security_group_ids': parse_list(raw_attrs,
'vpc_security_group_ids'),
# ansible-specific
'ansible_ssh_port': 22,
'ansible_ssh_host': raw_attrs['public_ip'],
# generic
'public_ipv4': raw_attrs['public_ip'],
'private_ipv4': raw_attrs['private_ip'],
'provider': 'aws',
}
# attrs specific to Ansible
if 'tags.sshUser' in raw_attrs:
attrs['ansible_ssh_user'] = raw_attrs['tags.sshUser']
if 'tags.sshPrivateIp' in raw_attrs:
attrs['ansible_ssh_host'] = raw_attrs['private_ip']
# attrs specific to Mantl
attrs.update({
'consul_dc': _clean_dc(attrs['tags'].get('dc', module_name)),
'role': attrs['tags'].get('role', 'none'),
'ansible_python_interpreter': attrs['tags'].get('python_bin','python')
})
# groups specific to Mantl
groups.extend(['aws_ami=' + attrs['ami'],
'aws_az=' + attrs['availability_zone'],
'aws_key_name=' + attrs['key_name'],
'aws_tenancy=' + attrs['tenancy']])
groups.extend('aws_tag_%s=%s' % item for item in attrs['tags'].items())
groups.extend('aws_vpc_security_group=' + group
for group in attrs['vpc_security_group_ids'])
groups.extend('aws_subnet_%s=%s' % subnet
for subnet in attrs['subnet'].items())
# groups specific to Mantl
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
@parses('google_compute_instance')
@calculate_mantl_vars
def gce_host(resource, module_name):
name = resource['primary']['id']
raw_attrs = resource['primary']['attributes']
groups = []
# network interfaces
interfaces = parse_attr_list(raw_attrs, 'network_interface')
for interface in interfaces:
interface['access_config'] = parse_attr_list(interface,
'access_config')
for key in interface.keys():
if '.' in key:
del interface[key]
# general attrs
attrs = {
'can_ip_forward': raw_attrs['can_ip_forward'] == 'true',
'disks': parse_attr_list(raw_attrs, 'disk'),
'machine_type': raw_attrs['machine_type'],
'metadata': parse_dict(raw_attrs, 'metadata'),
'network': parse_attr_list(raw_attrs, 'network'),
'network_interface': interfaces,
'self_link': raw_attrs['self_link'],
'service_account': parse_attr_list(raw_attrs, 'service_account'),
'tags': parse_list(raw_attrs, 'tags'),
'zone': raw_attrs['zone'],
# ansible
'ansible_ssh_port': 22,
'provider': 'gce',
}
# attrs specific to Ansible
if 'metadata.ssh_user' in raw_attrs:
attrs['ansible_ssh_user'] = raw_attrs['metadata.ssh_user']
# attrs specific to Mantl
attrs.update({
'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
'role': attrs['metadata'].get('role', 'none'),
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
})
try:
attrs.update({
'ansible_ssh_host': interfaces[0]['access_config'][0]['nat_ip'] or interfaces[0]['access_config'][0]['assigned_nat_ip'],
'public_ipv4': interfaces[0]['access_config'][0]['nat_ip'] or interfaces[0]['access_config'][0]['assigned_nat_ip'],
'private_ipv4': interfaces[0]['address'],
'publicly_routable': True,
})
except (KeyError, ValueError):
attrs.update({'ansible_ssh_host': '', 'publicly_routable': False})
# add groups based on attrs
groups.extend('gce_image=' + disk['image'] for disk in attrs['disks'])
groups.append('gce_machine_type=' + attrs['machine_type'])
groups.extend('gce_metadata_%s=%s' % (key, value)
for (key, value) in attrs['metadata'].items()
if key not in set(['sshKeys']))
groups.extend('gce_tag=' + tag for tag in attrs['tags'])
groups.append('gce_zone=' + attrs['zone'])
if attrs['can_ip_forward']:
groups.append('gce_ip_forward')
if attrs['publicly_routable']:
groups.append('gce_publicly_routable')
# groups specific to Mantl
groups.append('role=' + attrs['metadata'].get('role', 'none'))
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
@parses('vsphere_virtual_machine')
@calculate_mantl_vars
def vsphere_host(resource, module_name):
raw_attrs = resource['primary']['attributes']
network_attrs = parse_dict(raw_attrs, 'network_interface')
network = parse_dict(network_attrs, '0')
ip_address = network.get('ipv4_address', network['ip_address'])
name = raw_attrs['name']
groups = []
attrs = {
'id': raw_attrs['id'],
'ip_address': ip_address,
'private_ipv4': ip_address,
'public_ipv4': ip_address,
'metadata': parse_dict(raw_attrs, 'custom_configuration_parameters'),
'ansible_ssh_port': 22,
'provider': 'vsphere',
}
try:
attrs.update({
'ansible_ssh_host': ip_address,
})
except (KeyError, ValueError):
attrs.update({'ansible_ssh_host': '', })
attrs.update({
'consul_dc': _clean_dc(attrs['metadata'].get('consul_dc', module_name)),
'role': attrs['metadata'].get('role', 'none'),
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
})
# attrs specific to Ansible
if 'ssh_user' in attrs['metadata']:
attrs['ansible_ssh_user'] = attrs['metadata']['ssh_user']
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
@parses('azure_instance')
@calculate_mantl_vars
def azure_host(resource, module_name):
name = resource['primary']['attributes']['name']
raw_attrs = resource['primary']['attributes']
groups = []
attrs = {
'automatic_updates': raw_attrs['automatic_updates'],
'description': raw_attrs['description'],
'hosted_service_name': raw_attrs['hosted_service_name'],
'id': raw_attrs['id'],
'image': raw_attrs['image'],
'ip_address': raw_attrs['ip_address'],
'location': raw_attrs['location'],
'name': raw_attrs['name'],
'reverse_dns': raw_attrs['reverse_dns'],
'security_group': raw_attrs['security_group'],
'size': raw_attrs['size'],
'ssh_key_thumbprint': raw_attrs['ssh_key_thumbprint'],
'subnet': raw_attrs['subnet'],
'username': raw_attrs['username'],
'vip_address': raw_attrs['vip_address'],
'virtual_network': raw_attrs['virtual_network'],
'endpoint': parse_attr_list(raw_attrs, 'endpoint'),
# ansible
'ansible_ssh_port': 22,
'ansible_ssh_user': raw_attrs['username'],
'ansible_ssh_host': raw_attrs['vip_address'],
}
# attrs specific to mantl
attrs.update({
'consul_dc': attrs['location'].lower().replace(" ", "-"),
'role': attrs['description']
})
# groups specific to mantl
groups.extend(['azure_image=' + attrs['image'],
'azure_location=' + attrs['location'].lower().replace(" ", "-"),
'azure_username=' + attrs['username'],
'azure_security_group=' + attrs['security_group']])
# groups specific to mantl
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
@parses('clc_server')
@calculate_mantl_vars
def clc_server(resource, module_name):
raw_attrs = resource['primary']['attributes']
name = raw_attrs.get('id')
groups = []
md = parse_dict(raw_attrs, 'metadata')
attrs = {
'metadata': md,
'ansible_ssh_port': md.get('ssh_port', 22),
'ansible_ssh_user': md.get('ssh_user', 'root'),
'provider': 'clc',
'publicly_routable': False,
}
try:
attrs.update({
'public_ipv4': raw_attrs['public_ip_address'],
'private_ipv4': raw_attrs['private_ip_address'],
'ansible_ssh_host': raw_attrs['public_ip_address'],
'publicly_routable': True,
})
except (KeyError, ValueError):
attrs.update({
'ansible_ssh_host': raw_attrs['private_ip_address'],
'private_ipv4': raw_attrs['private_ip_address'],
})
attrs.update({
'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
'role': attrs['metadata'].get('role', 'none'),
})
groups.append('role=' + attrs['role'])
groups.append('dc=' + attrs['consul_dc'])
return name, attrs, groups
def iter_host_ips(hosts, ips):
'''Update hosts that have an entry in the floating IP list'''
for host in hosts:
host_id = host[1]['id']
if host_id in ips:
ip = ips[host_id]
host[1].update({
'access_ip_v4': ip,
'access_ip': ip,
'public_ipv4': ip,
'ansible_ssh_host': ip,
})
if 'use_access_ip' in host[1]['metadata'] and host[1]['metadata']['use_access_ip'] == "0":
host[1].pop('access_ip')
yield host


@ -13,7 +13,7 @@
/usr/local/share/ca-certificates/vault-ca.crt
{%- elif ansible_os_family == "RedHat" -%}
/etc/pki/ca-trust/source/anchors/vault-ca.crt
{%- elif ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] -%}
{%- elif ansible_os_family in ["Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"] -%}
/etc/ssl/certs/vault-ca.pem
{%- endif %}
@ -25,7 +25,7 @@
- name: bootstrap/ca_trust | update ca-certificates (Debian/Ubuntu/CoreOS)
command: update-ca-certificates
when: vault_ca_cert.changed and ansible_os_family in ["Debian", "CoreOS", "Container Linux by CoreOS"]
when: vault_ca_cert.changed and ansible_os_family in ["Debian", "CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
- name: bootstrap/ca_trust | update ca-certificates (RedHat)
command: update-ca-trust extract


@ -21,7 +21,7 @@
- name: bootstrap/sync_secrets | Print out warning message if secrets are not available and vault is initialized
pause:
prompt: >
Vault orchestration may not be able to proceed. The Vault cluster is initialzed, but
Vault orchestration may not be able to proceed. The Vault cluster is initialized, but
'root_token' or 'unseal_keys' were not found in {{ vault_secrets_dir }}. These are
needed for many vault orchestration steps.
when: vault_cluster_is_initialized and not vault_secrets_available


@ -11,7 +11,7 @@
until: vault_etcd_health_check.status == 200 or vault_etcd_health_check.status == 401
retries: 3
delay: 2
delegate_to: "{{groups['etcd'][0]}}"
delegate_to: "{{ groups['etcd'][0] }}"
run_once: true
failed_when: false
register: vault_etcd_health_check


@ -1,7 +1,7 @@
---
# Stop temporary Vault if it's running (can linger if playbook fails out)
- name: stop vault-temp container
shell: docker stop {{ vault_temp_container_name }} || rkt stop {{ vault_temp_container_name }}
shell: docker stop {{ vault_temp_container_name }}
failed_when: false
register: vault_temp_stop
changed_when: vault_temp_stop is succeeded


@ -5,17 +5,19 @@
set_fact:
sync_file_dir: "{{ sync_file_path | dirname }}"
sync_file: "{{ sync_file_path | basename }}"
when: sync_file_path is defined and sync_file_path != ''
when:
- sync_file_path is defined
- sync_file_path
- name: "sync_file | Set fact for sync_file_path when undefined"
set_fact:
sync_file_path: "{{ (sync_file_dir, sync_file)|join('/') }}"
when: sync_file_path is not defined or sync_file_path == ''
when: sync_file_path is not defined or not sync_file_path
- name: "sync_file | Set fact for key path name"
set_fact:
sync_file_key_path: "{{ sync_file_path.rsplit('.', 1)|first + '-key.' + sync_file_path.rsplit('.', 1)|last }}"
when: sync_file_key_path is not defined or sync_file_key_path == ''
when: sync_file_key_path is not defined or not sync_file_key_path
- name: "sync_file | Check if {{sync_file_path}} file exists"
stat:
@ -46,17 +48,17 @@
- name: "sync_file | Remove sync sources with files that do not match sync_file_srcs|first"
set_fact:
_: "{% if inventory_hostname in sync_file_srcs %}{{ sync_file_srcs.remove(inventory_hostname) }}{% endif %}"
when: >-
sync_file_srcs|d([])|length > 1 and
inventory_hostname != sync_file_srcs|first
when:
- sync_file_srcs|d([])|length > 1
- inventory_hostname != sync_file_srcs|first
- name: "sync_file | Remove sync sources with keys that do not match sync_file_srcs|first"
set_fact:
_: "{% if inventory_hostname in sync_file_srcs %}{{ sync_file_srcs.remove(inventory_hostname) }}{% endif %}"
when: >-
sync_file_is_cert|d() and
sync_file_key_srcs|d([])|length > 1 and
inventory_hostname != sync_file_key_srcs|first
when:
- sync_file_is_cert|d()
- sync_file_key_srcs|d([])|length > 1
- inventory_hostname != sync_file_key_srcs|first
- name: "sync_file | Consolidate file and key sources"
set_fact:


@ -1,45 +0,0 @@
[Unit]
Description=hashicorp vault on rkt
Documentation=https://github.com/hashicorp/vault
Wants=network.target
[Service]
User=root
Restart=on-failure
RestartSec=10s
TimeoutStartSec=5
LimitNOFILE=40000
# Container has the following internal mount points:
# /vault/file/ # File backend storage location
# /vault/logs/ # Log files
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/vault.uuid
ExecStart=/usr/bin/rkt run \
--insecure-options=image \
--volume hosts,kind=host,source=/etc/hosts,readOnly=true \
--mount volume=hosts,target=/etc/hosts \
--volume=volume-vault-file,kind=host,source=/var/lib/vault \
--volume=volume-vault-logs,kind=host,source={{ vault_log_dir }} \
--volume=vault-cert-dir,kind=host,source={{ vault_cert_dir }} \
--mount=volume=vault-cert-dir,target={{ vault_cert_dir }} \
--volume=vault-conf-dir,kind=host,source={{ vault_config_dir }} \
--mount=volume=vault-conf-dir,target={{ vault_config_dir }} \
--volume=vault-secrets-dir,kind=host,source={{ vault_secrets_dir }} \
--mount=volume=vault-secrets-dir,target={{ vault_secrets_dir }} \
--volume=vault-roles-dir,kind=host,source={{ vault_roles_dir }} \
--mount=volume=vault-roles-dir,target={{ vault_roles_dir }} \
--volume=etcd-cert-dir,kind=host,source={{ etcd_cert_dir }} \
--mount=volume=etcd-cert-dir,target={{ etcd_cert_dir }} \
docker://{{ vault_image_repo }}:{{ vault_image_tag }} \
--uuid-file-save=/var/run/vault.uuid \
--name={{ vault_container_name }} \
--net=host \
--caps-retain=CAP_IPC_LOCK \
--exec vault -- \
server \
--config={{ vault_config_dir }}/config.json
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/vault.uuid
[Install]
WantedBy=multi-user.target


@ -93,6 +93,6 @@ Potential Work
- Change the Vault role to not run certain tasks when ``root_token`` and
``unseal_keys`` are not present. Alternatively, allow user input for these
values when missing.
- Add the ability to start temp Vault with Host, Rkt, or Docker
- Add the ability to start temp Vault with Host or Docker
- Add a dynamic way to change out the backend role creation during Bootstrap,
so other services can be used (such as Consul)


@ -3,7 +3,6 @@
* [Getting started](/docs/getting-started.md)
* [Ansible](docs/ansible.md)
* [Variables](/docs/vars.md)
* [Ansible](/docs/ansible.md)
* Operations
* [Integration](docs/integration.md)
* [Upgrades](/docs/upgrades.md)
@ -20,6 +19,7 @@
* [AWS](docs/aws.md)
* [Azure](docs/azure.md)
* [OpenStack](/docs/openstack.md)
* [Packet](/docs/packet.md)
* [vSphere](/docs/vsphere.md)
* Operating Systems
* [Atomic](docs/atomic.md)


@ -1,9 +1,7 @@
Ansible variables
===============
# Ansible variables
## Inventory
Inventory
-------------
The inventory is composed of 3 groups:
* **kube-node** : list of kubernetes nodes where the pods will run.
@ -14,7 +12,7 @@ Note: do not modify the children of _k8s-cluster_, like putting
the _etcd_ group into the _k8s-cluster_, unless you are certain
to do that and you have it fully contained in the latter:
```
```ShellSession
k8s-cluster ⊂ etcd => kube-node ∩ etcd = etcd
```
@ -32,15 +30,15 @@ There are also two special groups:
Below is a complete inventory example:
```
```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_ssh_host=95.54.0.12 ip=10.3.0.1
node2 ansible_ssh_host=95.54.0.13 ip=10.3.0.2
node3 ansible_ssh_host=95.54.0.14 ip=10.3.0.3
node4 ansible_ssh_host=95.54.0.15 ip=10.3.0.4
node5 ansible_ssh_host=95.54.0.16 ip=10.3.0.5
node6 ansible_ssh_host=95.54.0.17 ip=10.3.0.6
node1 ansible_host=95.54.0.12 ip=10.3.0.1
node2 ansible_host=95.54.0.13 ip=10.3.0.2
node3 ansible_host=95.54.0.14 ip=10.3.0.3
node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6
[kube-master]
node1
@ -63,17 +61,16 @@ kube-node
kube-master
```
Group vars and overriding variables precedence
----------------------------------------------
## Group vars and overriding variables precedence
The group variables to control main deployment options are located in the directory ``inventory/sample/group_vars``.
Optional variables are located in the `inventory/sample/group_vars/all.yml`.
Mandatory variables that are common for at least one role (or a node group) can be found in the
`inventory/sample/group_vars/k8s-cluster.yml`.
There are also role vars for docker, rkt, kubernetes preinstall and master roles.
There are also role vars for docker, kubernetes preinstall and master roles.
According to the [ansible docs](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from the group vars. In order to override, one should use
the `-e ` runtime flags (most simple way) or other layers described in the docs.
the `-e` runtime flags (most simple way) or other layers described in the docs.
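For instance, a single variable or a whole file of overrides can be passed at run time; the variable name and file name below are illustrative only:

```ShellSession
# Override one variable inline, or load a file of overrides with @.
ansible-playbook -i inventory/sample/hosts.ini cluster.yml -e kube_network_plugin=calico
ansible-playbook -i inventory/sample/hosts.ini cluster.yml -e @my-overrides.yml
```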
Kubespray uses only a few layers to override things (or expect them to
be overridden for roles):
@ -97,8 +94,8 @@ block vars (only for tasks in block) | Kubespray overrides for internal roles' l
task vars (only for the task) | Unused for roles, but only for helper scripts
**extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``
Ansible tags
------------
## Ansible tags
The following tags are defined in playbooks:
| Tag name | Used for
@ -145,21 +142,25 @@ Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
tags found in the codebase. New tags will be listed with the empty "Used for"
field.
Example commands
----------------
## Example commands
Example command to filter and apply only DNS configuration tasks and skip
everything else related to host OS configuration and downloading images of containers:
```
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
```
And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
```
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf
```
And this prepares all container images locally (at the ansible runner node) without installing
or upgrading related stuff or trying to upload container to K8s cluster nodes:
```
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
-e download_run_once=true -e download_localhost=true \
--tags download --skip-tags upload,upgrade
@ -167,15 +168,16 @@ ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
Bastion host
--------------
## Bastion host
If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
you can use a so called *bastion* host to connect to your nodes. To specify and use a bastion,
simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the
bastion host.
```
bastion ansible_ssh_host=x.x.x.x
```ShellSession
[bastion]
bastion ansible_host=x.x.x.x
```
For more information about Ansible and bastion hosts, read

docs/arch.md Normal file

@ -0,0 +1,17 @@
# Architecture compatibility
The following table shows the impact of the CPU architecture on compatible features:
- amd64: Cluster using only x86/amd64 CPUs
- arm64: Cluster using only arm64 CPUs
- amd64 + arm64: Cluster with a mix of x86/amd64 and arm64 CPUs
| kube_network_plugin | amd64 | arm64 | amd64 + arm64 |
| ------------------- | ----- | ----- | ------------- |
| Calico | Y | Y | Y |
| Weave | Y | Y | Y |
| Flannel | Y | N | N |
| Canal | Y | N | N |
| Cilium | Y | N | N |
| Contiv | Y | N | N |
| kube-router | Y | N | N |


@ -1,23 +1,22 @@
Atomic host bootstrap
=====================
# Atomic host bootstrap
Atomic host testing has been done with the network plugin flannel. Change the inventory var `kube_network_plugin: flannel`.
Note: Flannel is the only plugin that has currently been tested with atomic
### Vagrant
## Vagrant
* For bootstrapping with Vagrant, use box centos/atomic-host or fedora/atomic-host
* For bootstrapping with Vagrant, use box centos/atomic-host or fedora/atomic-host
* Update VagrantFile variable `local_release_dir` to `/var/vagrant/temp`.
* Update `vm_memory = 2048` and `vm_cpus = 2`
* Networking on vagrant hosts has to be brought up manually once they are booted.
```
```ShellSession
vagrant ssh
sudo /sbin/ifup enp0s8
```
* For users of vagrant-libvirt download centos/atomic-host qcow2 format from https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/
* For users of vagrant-libvirt download fedora/atomic-host qcow2 format from https://getfedora.org/en/atomic/download/
* For users of vagrant-libvirt download centos/atomic-host qcow2 format from <https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/>
* For users of vagrant-libvirt download fedora/atomic-host qcow2 format from <https://dl.fedoraproject.org/pub/alt/atomic/stable/>
Then you can proceed to [cluster deployment](#run-deployment)


@ -1,11 +1,10 @@
AWS
===============
# AWS
To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`. Refer to the [Kubespray Configuration](#kubespray-configuration) for customizing the provider.
Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
You would also need to tag the resources in your VPC accordingly for the aws provider to utilize them. Tag the subnets, route tables and all instances that kubernetes will be run on with key `kubernetes.io/cluster/$cluster_name` (`$cluster_name` must be a unique identifier for the cluster). Tag the subnets that must be targetted by external ELBs with the key `kubernetes.io/role/elb` and internal ELBs with the key `kubernetes.io/role/internal-elb`.
You would also need to tag the resources in your VPC accordingly for the aws provider to utilize them. Tag the subnets, route tables and all instances that kubernetes will be run on with key `kubernetes.io/cluster/$cluster_name` (`$cluster_name` must be a unique identifier for the cluster). Tag the subnets that must be targeted by external ELBs with the key `kubernetes.io/role/elb` and internal ELBs with the key `kubernetes.io/role/internal-elb`.
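As an illustration only (the resource IDs, the `owned` tag value, the `my-cluster` name, and the use of the AWS CLI are assumptions, not part of this guide), tagging a subnet and an instance could look like:

```ShellSession
# Tag a subnet and an instance so the AWS cloud provider can discover them.
aws ec2 create-tags --resources subnet-0123456789abcdef0 i-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=owned
```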
Make sure your VPC has both DNS Hostnames support and Private DNS enabled.
@ -13,11 +12,13 @@ The next step is to make sure the hostnames in your `inventory` file are identic
You can now create your cluster!
### Dynamic Inventory ###
## Dynamic Inventory
There is also a dynamic inventory script for AWS that can be used if desired. However, be aware that it makes certain assumptions about how you'll create your inventory. It also does not handle all use cases and groups that we may use as part of more advanced deployments. Additions welcome.
This will produce an inventory that is passed into Ansible that looks like the following:
```
```json
{
"_meta": {
"hostvars": {
@ -48,15 +49,18 @@ This will produce an inventory that is passed into Ansible that looks like the f
```
Guide:
- Create instances in AWS as needed.
- Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube-master`, `etcd`, or `kube-node`. You can also share roles like `kube-master, etcd`
- Copy the `kubespray-aws-inventory.py` script from `kubespray/contrib/aws_inventory` to the `kubespray/inventory` directory.
- Set the following AWS credentials and info as environment variables in your terminal:
```
```ShellSession
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="yyyyy"
export REGION="us-east-2"
```
- We will now create our cluster. There will be either one or two small changes. The first is that we will specify `-i inventory/kubespray-aws-inventory.py` as our inventory script. The other is conditional. If your AWS instances are public facing, you can set the `VPC_VISIBILITY` variable to `public` and that will result in public IP and DNS names being passed into the inventory. This causes your cluster.yml command to look like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml`
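Spelled out with the dynamic inventory script mentioned above (the exact flags are an illustration, not a prescribed command line), the command might look like:

```ShellSession
# Public-facing instances: expose public IPs/DNS names to the inventory.
VPC_VISIBILITY="public" ansible-playbook -i inventory/kubespray-aws-inventory.py cluster.yml
```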
## Kubespray configuration
@ -75,4 +79,3 @@ aws_kubernetes_cluster_id|string|KubernetesClusterID is the cluster id we'll use
aws_disable_security_group_ingress|bool|The aws provider creates an inbound rule per load balancer on the node security group. However, this can run into the AWS security group rule limit of 50 if many LoadBalancers are created. This flag disables the automatic ingress creation. It requires that the user has set up a rule that allows inbound traffic on kubelet ports from the local VPC subnet (so load balancers can access it). E.g. 10.82.0.0/16 30000-32000.
aws_elb_security_group|string|Only in Kubelet version >= 1.7 : AWS has a hard limit of 500 security groups. For large clusters creating a security group for each ELB can cause the max number of security groups to be reached. If this is set instead of creating a new Security group for each ELB this security group will be used instead.
aws_disable_strict_zone_check|bool|During the instantiation of a new AWS cloud provider, the detected region is validated against a known set of regions. In a non-standard, AWS-like environment (e.g. Eucalyptus), this check may be undesirable. Setting this to true will disable the check and provide a warning that the check was skipped. Please note that this is an experimental feature and work-in-progress for the moment.


@ -1,5 +1,4 @@
Azure
===============
# Azure
To deploy Kubernetes on [Azure](https://azure.microsoft.com) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'azure'`.
@ -7,49 +6,77 @@ All your instances are required to run in a resource group and a routing table h
Not all features are supported yet though, for a list of the current status have a look [here](https://github.com/colemickens/azure-kubernetes-status)
### Parameters
## Parameters
Before creating the instances you must first set the `azure_` variables in the `group_vars/all.yml` file.
All of the values can be retrieved using the azure cli tool which can be downloaded here: https://docs.microsoft.com/en-gb/azure/xplat-cli-install
All of the values can be retrieved using the azure cli tool which can be downloaded here: <https://docs.microsoft.com/en-gb/azure/xplat-cli-install>
After installation you have to run `azure login` to get access to your account.
### azure\_tenant\_id + azure\_subscription\_id
#### azure\_tenant\_id + azure\_subscription\_id
run `azure account show` to retrieve your subscription id and tenant id:
`azure_tenant_id` -> Tenant ID field
`azure_subscription_id` -> ID field
### azure\_location
#### azure\_location
The region your instances are located, can be something like `westeurope` or `westcentralus`. A full list of region names can be retrieved via `azure location list`
### azure\_resource\_group
#### azure\_resource\_group
The name of the resource group your instances are in, can be retrieved via `azure group list`
#### azure\_vnet\_name
### azure\_vnet\_name
The name of the virtual network your instances are in, can be retrieved via `azure network vnet list`
#### azure\_subnet\_name
The name of the subnet your instances are in, can be retrieved via `azure network vnet subnet list RESOURCE_GROUP VNET_NAME`
### azure\_subnet\_name
The name of the subnet your instances are in, can be retrieved via `azure network vnet subnet list --resource-group RESOURCE_GROUP --vnet-name VNET_NAME`
### azure\_security\_group\_name
#### azure\_security\_group\_name
The name of the network security group your instances are in, can be retrieved via `azure network nsg list`
#### azure\_aad\_client\_id + azure\_aad\_client\_secret
### azure\_aad\_client\_id + azure\_aad\_client\_secret
These will have to be generated first:
- Create an Azure AD Application with:
`azure ad app create --name kubernetes --identifier-uris http://kubernetes --home-page http://example.com --password CLIENT_SECRET`
The name, identifier-uri, home-page and the password can be choosen
`azure ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET`
display name, identifier-uri, homepage and the password can be chosen
Note the AppId in the output.
- Create Service principal for the application with:
`azure ad sp create --applicationId AppId`
`azure ad sp create --id AppId`
This is the AppId from the last command
- Create the role assignment with:
`azure role assignment create --spn http://kubernetes -o "Owner" -c /subscriptions/SUBSCRIPTION_ID`
`azure role assignment create --role "Owner" --assignee http://kubernetes --subscription SUBSCRIPTION_ID`
azure\_aad\_client\_id must be set to the AppId, azure\_aad\_client\_secret is your choosen secret.
azure\_aad\_client\_id must be set to the AppId, azure\_aad\_client\_secret is your chosen secret.
### azure\_loadbalancer\_sku
Sku of Load Balancer and Public IP. Candidate values are: basic and standard.
### azure\_exclude\_master\_from\_standard\_lb
azure\_exclude\_master\_from\_standard\_lb excludes master nodes from the `standard` load balancer.
### azure\_disable\_outbound\_snat
azure\_disable\_outbound\_snat disables the outbound SNAT for public load balancer rules. It should only be set when azure\_loadbalancer\_sku is `standard`.
### azure\_primary\_availability\_set\_name
(Optional) The name of the availability set that should be used as the load balancer backend. If this is set, the Azure
cloudprovider will only add nodes from that availability set to the load balancer backend pool. If this is not set, and
multiple agent pools (availability sets) are used, then the cloudprovider will try to add all nodes to a single backend
pool, which is forbidden. In other words, if you use multiple agent pools (availability sets), you MUST set this field.
### azure\_use\_instance\_metadata
Use instance metadata service where possible
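Putting the parameters together, a minimal sketch of the resulting `azure_*` block in `group_vars/all.yml` could look like this (all values are placeholders to be replaced with the output of the azure cli commands above):

```yml
azure_tenant_id: "00000000-0000-0000-0000-000000000000"
azure_subscription_id: "00000000-0000-0000-0000-000000000000"
azure_location: "westeurope"
azure_resource_group: "my-resource-group"
azure_vnet_name: "my-vnet"
azure_subnet_name: "my-subnet"
azure_security_group_name: "my-nsg"
azure_aad_client_id: "00000000-0000-0000-0000-000000000000"  # AppId of the AD application
azure_aad_client_secret: "CLIENT_SECRET"
azure_loadbalancer_sku: "basic"
azure_use_instance_metadata: true
```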
## Provisioning Azure with Resource Group Templates


@ -1,82 +1,83 @@
Calico
===========
# Calico
---
**N.B. Version 2.6.5 upgrade to 3.1.1 is upgrading etcd store to etcdv3**
If you create automated backups of etcdv2 please switch for creating etcdv3 backups, as kubernetes and calico now uses etcdv3
N.B. **Version 2.6.5 upgrade to 3.1.1 is upgrading etcd store to etcdv3**
If you create automated backups of etcdv2 please switch to creating etcdv3 backups, as kubernetes and calico now use etcdv3
After migration you can check the `/tmp/calico_upgrade/` directory for items converted to etcdv3.
**PLEASE TEST upgrade before upgrading production cluster.**
---
Check if the calico-node container is running
```
```ShellSession
docker ps | grep calico
```
The **calicoctl** command allows you to check the status of the network workloads.
* Check the status of Calico nodes
```
```ShellSession
calicoctl node status
```
or for versions prior to *v1.0.0*:
```
```ShellSession
calicoctl status
```
* Show the configured network subnet for containers
```
```ShellSession
calicoctl get ippool -o wide
```
or for versions prior to *v1.0.0*:
```
```ShellSession
calicoctl pool show
```
* Show the workloads (ip addresses of containers and where they are located)
```
```ShellSession
calicoctl get workloadEndpoint -o wide
```
and
```
```ShellSession
calicoctl get hostEndpoint -o wide
```
or for versions prior *v1.0.0*:
```
```ShellSession
calicoctl endpoint show --detail
```
##### Optional : Define network backend
## Configuration
### Optional : Define network backend
In some cases you may want to define the Calico network backend. Allowed values are 'bird', 'gobgp' or 'none'. Bird is the default value.
To re-define you need to edit the inventory and add a group variable `calico_network_backend`
```
```yml
calico_network_backend: none
```
##### Optional : Define the default pool CIDR
### Optional : Define the default pool CIDR
By default, `kube_pods_subnet` is used as the IP range CIDR for the default IP Pool.
In some cases you may want to add several pools and not have them considered by Kubernetes as external (which means that they must be within or equal to the range defined in `kube_pods_subnet`); it starts with the default IP Pool, whose IP range CIDR can be defined in group_vars (k8s-cluster/k8s-net-calico.yml):
```
```yml
calico_pool_cidr: 10.233.64.0/20
```
##### Optional : BGP Peering with border routers
### Optional : BGP Peering with border routers
In some cases you may want to route the pods subnet and so NAT is not needed on the nodes.
For instance if you have a cluster spread over different locations and you want your pods to talk to each other no matter where they are located.
@ -84,11 +85,11 @@ The following variables need to be set:
`peer_with_router` to enable the peering with the datacenter's border router (default value: false).
You'll need to edit the inventory and add a hostvar `local_as` per node.
```
```ShellSession
node1 ansible_ssh_host=95.54.0.12 local_as=xxxxxx
```
##### Optional : Defining BGP peers
### Optional : Defining BGP peers
Peers can be defined using the `peers` variable (see docs/calico_peer_example examples).
In order to define global peers, the `peers` variable can be defined in group_vars with the "scope" attribute of each global peer set to "global".
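A minimal sketch of what such a `peers` definition might look like in group_vars; the exact keys should be taken from the docs/calico_peer_example files referenced above, and the addresses/AS numbers here are purely illustrative:

```yml
peers:
  - router_id: "10.99.0.34"  # address of the BGP peer (illustrative)
    as: "65xxx"              # its AS number
  - router_id: "10.99.0.35"
    as: "65xxx"
    scope: "global"          # per the note above, marks this peer as global
```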
@ -97,16 +98,17 @@ NB: Ansible's `hash_behaviour` is by default set to "replace", thus defining bot
Since calico 3.4, Calico supports advertising Kubernetes service cluster IPs over BGP, just as it advertises pod IPs.
This can be enabled by setting the following variable as follows in group_vars (k8s-cluster/k8s-net-calico.yml):
```
```yml
calico_advertise_cluster_ips: true
```
##### Optional : Define global AS number
### Optional : Define global AS number
Optional parameter `global_as_num` defines Calico global AS number (`/calico/bgp/v1/global/as_num` etcd key).
It defaults to "64512".
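To override the default, something like this can be set in group_vars (the value is illustrative):

```yml
global_as_num: "65000"
```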
##### Optional : BGP Peering with route reflectors
### Optional : BGP Peering with route reflectors
At large scale you may want to disable full node-to-node mesh in order to
optimize your BGP topology and improve `calico-node` containers' start times.
@ -114,20 +116,20 @@ optimize your BGP topology and improve `calico-node` containers' start times.
To do so you can deploy BGP route reflectors and peer `calico-node` with them as
recommended here:
* https://hub.docker.com/r/calico/routereflector/
* https://docs.projectcalico.org/v3.1/reference/private-cloud/l3-interconnect-fabric
* <https://hub.docker.com/r/calico/routereflector/>
* <https://docs.projectcalico.org/v3.1/reference/private-cloud/l3-interconnect-fabric>
You need to edit your inventory and add:
* `calico-rr` group with nodes in it. At the moment it's incompatible with
`kube-node` due to BGP port conflict with `calico-node` container. So you
should not have nodes in both `calico-rr` and `kube-node` groups.
* `calico-rr` group with nodes in it. `calico-rr` can be combined with
`kube-node` and/or `kube-master`. `calico-rr` group also must be a child
group of `k8s-cluster` group.
* `cluster_id` by route reflector node/group (see details
[here](https://hub.docker.com/r/calico/routereflector/))
Here's an example of Kubespray inventory with route reflectors:
Here's an example of Kubespray inventory with standalone route reflectors:
```
```ini
[all]
rr0 ansible_ssh_host=10.210.1.10 ip=10.210.1.10
rr1 ansible_ssh_host=10.210.1.11 ip=10.210.1.11
@ -154,6 +156,7 @@ node5
[k8s-cluster:children]
kube-node
kube-master
calico-rr
[calico-rr]
rr0
@ -176,35 +179,35 @@ The inventory above will deploy the following topology assuming that calico's
![Image](figures/kubespray-calico-rr.png?raw=true)
##### Optional : Define default endpoint to host action
By default Calico blocks traffic from endpoints to the host itself by using an iptables DROP action. When using it in kubernetes the action has to be changed to RETURN (default in kubespray) or ACCEPT (see https://github.com/projectcalico/felix/issues/660 and https://github.com/projectcalico/calicoctl/issues/1389). Otherwise all network packets from pods (with hostNetwork=False) to services endpoints (with hostNetwork=True) within the same node are dropped.
### Optional : Define default endpoint to host action
By default Calico blocks traffic from endpoints to the host itself by using an iptables DROP action. When using it in kubernetes the action has to be changed to RETURN (default in kubespray) or ACCEPT (see <https://github.com/projectcalico/felix/issues/660> and <https://github.com/projectcalico/calicoctl/issues/1389>). Otherwise all network packets from pods (with hostNetwork=False) to services endpoints (with hostNetwork=True) within the same node are dropped.
To re-define default action please set the following variable in your inventory:
```
```yml
calico_endpoint_to_host_action: "ACCEPT"
```
##### Optional : Define address on which Felix will respond to health requests
## Optional : Define address on which Felix will respond to health requests
Since Calico 3.2.0, HealthCheck default behavior changed from listening on all interfaces to just listening on localhost.
To re-define health host please set the following variable in your inventory:
```
```yml
calico_healthhost: "0.0.0.0"
```
Cloud providers configuration
=============================
## Cloud providers configuration
Please refer to the official documentation, for example [GCE configuration](http://docs.projectcalico.org/v1.5/getting-started/docker/installation/gce) requires a security rule for calico ip-ip tunnels. Note, calico is always configured with ``ipip: true`` if the cloud provider was defined.
##### Optional : Ignore kernel's RPF check setting
### Optional : Ignore kernel's RPF check setting
By default the felix agent (calico-node) will abort if the Kernel RPF setting is not 'strict'. If you want Calico to ignore the Kernel setting:
```
```yml
calico_node_ignorelooserpf: true
```
@ -212,7 +215,7 @@ Note that in OpenStack you must allow `ipip` traffic in your security groups,
otherwise you will experience timeouts.
To do this you must add a rule which allows it, for example:
```
```ShellSession
neutron security-group-rule-create --protocol 4 --direction egress k8s-a0tp4t
neutron security-group-rule-create --protocol 4 --direction ingress k8s-a0tp4t
```
