Compare commits


603 Commits

c3814bb258 Check kube-apiserver up on all masters before upgrade (#7193) (#7197)
Only checking the Kubernetes API on the first master when upgrading is not enough.
Each master needs to be checked before its upgrade.
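
For illustration only, a minimal sketch of such a per-master check (task shape and variable names assumed, not the actual playbook code; it also assumes /healthz answers anonymously):

```yaml
# Hypothetical sketch: verify the apiserver answers on the master being upgraded
- name: Check kube-apiserver is up on this master before upgrading it
  uri:
    url: "https://{{ ip | default(ansible_default_ipv4.address) }}:{{ kube_apiserver_port | default(6443) }}/healthz"
    validate_certs: false
  register: apiserver_check
  until: apiserver_check.status == 200
  retries: 10
  delay: 5
```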

Signed-off-by: Rick Haan <rickhaan94@gmail.com>
2021-01-29 02:33:40 -08:00
90e6e19403 update hashes for 1.18 and 1.19 for kubespray-2.14 (#7207) 2021-01-27 03:53:40 -08:00
7e419310ce Update azure cloud config (#7208) (#7220)
* Allow configurable vni and port for flannel overlay

* additional options for azure cloud config
2021-01-27 03:47:40 -08:00
11b72e2408 [2.14] Backport: 6758, 6853 and 7003 to fix CRI-O pkg (#7209)
* cherry-pick bump crio version to 1.19 (#6758)

cherry-pick modifications:
* keep default at 1.17, as release 2.14 came with it
* don't change readme with newer versions

* bump crio version to 1.19

* crio package name has changed for debian/ubuntu
* crio upgrade does not work, see #6757

* update crio info in docs

* Install cri-o with package version (#6853)

and thereby support upgrade from e.g. 1.18.x to 1.19.y

Included OSes:
- Centos7/8
- Ubuntu18/20

New variables for overriding the packages installed by default:
- centos_crio_packages
- ubuntu_crio_packages

* add support crio version for various k8s vers (#7003)

* add support crio version for various k8s vers

* regexp in pkg versions

Co-authored-by: Hans Feldt <2808287+hafe@users.noreply.github.com>
Co-authored-by: Sergey <s.bondarev@southbridge.ru>
2021-01-26 07:18:34 -08:00
c267d427ce Fix proxy and module_hotfixes (#7067)
This fixes the Containerd + EL8 case that was missed in 7d1ab3374e

On CentOS 8 with a proxy configured, Ansible rendered the `proxy` and `module_hotfixes` options concatenated on a single line.

For example:
```
proxy=http://127.0.0.1:3128module_hotfixes=True
```

But expected result:
```
proxy=http://127.0.0.1:3128
module_hotfixes=True
```
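
Conceptually, the fix is to render each option on its own line; a hedged sketch of what the rendered repo snippet should contain (variable name hypothetical, not the actual role code):

```yaml
# Hypothetical group_vars sketch: repo options that must each land on their own line
containerd_repo_extra_options: |
  proxy=http://127.0.0.1:3128
  module_hotfixes=True
```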

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit 03f316e7a2)
2020-12-22 04:54:26 -08:00
6d37c3cde6 Bump nodelocaldns to 1.16.0 (#7068)
This new version uses the same base image as kube-proxy
(k8s.gcr.io/build-image/debian-iptables)
This allow to automatically pick iptables-legacy or iptables-nft,
and be compatible with RHEL/CentOS 8
https://github.com/kubernetes/dns/pull/367

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit e909f84966)
2020-12-22 04:48:28 -08:00
af84e56099 Fix nf_conntrack_ipv4 modprobe (#7014)
RedHat 8.3 merged nf_conntrack_ipv4 into nf_conntrack but still advertises kernel 4.18,
so just try to modprobe and decide depending on whether it succeeds.
Also, nf_conntrack is a dependency of ip_vs, so there is no need to care about it separately.
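
A minimal sketch of that approach (task and fact names assumed):

```yaml
# Try the old module name; on RHEL 8.3 it no longer exists as a separate module
- name: Try to load nf_conntrack_ipv4
  modprobe:
    name: nf_conntrack_ipv4
    state: present
  register: modprobe_nf_conntrack_ipv4
  ignore_errors: true

# Decide based on success rather than on the advertised kernel version
- name: Pick the conntrack module to reference
  set_fact:
    conntrack_module: "{{ 'nf_conntrack_ipv4' if modprobe_nf_conntrack_ipv4 is succeeded else 'nf_conntrack' }}"
```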

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
(cherry picked from commit 00e0f3bd2b)
2020-12-18 06:00:25 -08:00
d3954a5590 [2.14] fix ci (#7021)
* fix flake8 errors in Kubespray CI - tox-inventory-builder

* fix flake8 errors in Kubespray CI - tox-inventory-builder

* Invalidate CRI-O kubic repo's cache

Signed-off-by: Victor Morales <v.morales@samsung.com>

* add support to configure pkg install retries

and use in CI job tf-ovh_ubuntu18-calico (due to it failing often)

* Switch Calico and Cilium image repos to Quay.io

Co-authored-by: Victor Morales <v.morales@samsung.com>
Co-authored-by: Barry Melbourne <9964974+bmelbourne@users.noreply.github.com>

Conflicts:
	roles/download/defaults/main.yml

* up vagrant box to fedora/33-cloud-base in cri-o molecule tests

(cherry picked from commit 06ec5393d7)

* add Google proxy-mirror-cache for docker hub to CI tests

(cherry picked from commit d739a6bb2f)

* containerd docker hub registry mirror support

* containerd docker hub registry mirror support

* add docs

* fix typo

* fix yamllint

* fix indent in sample
and ansible-playbook param in testcases_run

* fix md

* mv common vars to tests/common/_docker_hub_registry_mirror.yml

* checkout vars to upgrade tests

(cherry picked from commit 4a8a52bad9)

* Exclude .git/ from shellcheck

If a branch name contains '.sh', the current shellcheck run checks the branch
file under .git/ and reports an error because its format is not that of a
shell script.
This makes shellcheck exclude files under .git/ to avoid the issue.

(cherry picked from commit e2467d87b6)

Co-authored-by: Hans Feldt <2808287+hafe@users.noreply.github.com>
Co-authored-by: Sergey <s.bondarev@southbridge.ru>
Co-authored-by: Kenichi Omichi <ken-oomichi@wx.jp.nec.com>
2020-12-17 08:07:09 -08:00
75d648cae5 Fix unintended SIGPIPE (#6817) 2020-10-21 03:24:20 -07:00
087d9c204f Fix cinder & external_openstack cacert deployment (#6745) (#6832)
The CA cert was only deployed on master nodes
2020-10-21 01:48:20 -07:00
775cadda62 Update hashes and set default to 1.18.10 (#6842) 2020-10-21 01:30:23 -07:00
19c000c127 Set ansible_python_interpreter to python3 on debian (fix error with mitogen) (#6633) (#6744)
Co-authored-by: Florian Ruynat <16313165+floryut@users.noreply.github.com>
2020-10-01 06:16:54 -07:00
b39a196cfb properly generate extravolumes in kubeadmconfig for centos (#6707) 2020-09-23 01:42:08 -07:00
9fc14b3e6c Make sure node_ip is set if node is in etcd group (#6720) 2020-09-23 00:46:08 -07:00
f9a7dce7ca Add Kubernetes hashes 1.19.2/1.18.9/1.17.12 and set default (#6699) 2020-09-18 14:44:28 -07:00
fbbbd90732 fix kubelet_flexvolumes_plugins_dir undefined (#6645) (#6670)
Co-authored-by: w33dw0r7d <w33dw0r7d@gmail.com>
2020-09-18 02:14:46 -07:00
9869b46432 Move from widehat.opensuse to download.opensuse for crio centos (#6682) (#6704) 2020-09-18 02:04:45 -07:00
6cd33700f5 NetworkManager lists must be separated by , (#6649) 2020-09-11 00:30:15 -07:00
a1f04e9869 Cleanup v1.16 hashes (#6635) 2020-09-08 01:51:43 -07:00
961149b865 Update kube_version_min_required for 2.14 release (#6634) 2020-09-07 23:59:43 -07:00
597c810ef0 Resolve Vagrant etcd unhealthy cluster error (#6630) 2020-09-07 12:09:41 -07:00
2de6a5676d Fedora coreos networkmanager global dns and bootstrapping fix (#6577)
* remove podman cni plugin

* configure networkmanager global dns

* allow installation of python3-libselinux by temporarily disabling the update repo

* remove ipv4 section because it is not a valid configuration
2020-09-07 02:27:41 -07:00
050578da94 Update Cilium to 1.8.3 (#6629) 2020-09-07 02:11:49 -07:00
5a437add01 Fix upgrade playbook name (#6625)
* Fix upgrade playbook name

* Fix my fix :)
2020-09-07 02:11:42 -07:00
6fc73e3038 Add Kubernetes 1.16.15 hashes (#6624) 2020-09-07 01:23:41 -07:00
d97e9b9e50 Fix oracle linux repo (#6627) 2020-09-07 01:15:41 -07:00
fa0eb11bf4 Update kubernetes dashboard (#6623) 2020-09-04 05:29:41 -07:00
f660c29348 Declare port 10254 in nginx ingress pod template (#6609) 2020-09-04 04:54:11 -07:00
6613895de0 remove kubelet startup warnings for non docker container runtime (#6605)
Removes these startup warnings:

Warning: For remote container runtime, --pod-infra-container-image is ignored in kubelet, which should be set in that remote runtime instead
Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
2020-09-04 04:54:04 -07:00
803d52ffce kubernetes: remove unused variables (#6601) 2020-09-04 04:53:56 -07:00
fc61f8d52e Update cert manager to 0.16.1 (#6600)
* Update cert manager to 0.16.1

* Update cert manager to 0.16.1

Co-authored-by: Barry Melbourne <9964974+bmelbourne@users.noreply.github.com>
2020-09-04 04:53:48 -07:00
0553814b4f Add selectable dns policy for kube-router (#6586) 2020-09-04 04:53:41 -07:00
f1566cb8c2 Add protectKernelDefaults option (default true) to kubelet config file (#6611) 2020-09-03 07:41:41 -07:00
c1ba8e1b3a Rotate kubelet server certificate. (#6453)
* Rotate kubelet server certificate.

* CI test kubelet server cert rotation

* Approve kubelet serving certificates in tests.
2020-09-03 07:25:41 -07:00
2ff7ab8d40 Add snapshot-controller for CSI drivers and snapshot CRDs, add a default volumesnapshotclass when running cinder CSI (#6537)
* add snapshot-controller and v1beta1 snapshot api

* fix typo

* update manifest to v1beta1

* update

* update manifests

* fix spelling

* wait until crd is applied

* fix missing info in kube module

* revert snapshotclass

* add snapshot crds before applying the csi driver

* add crds, missed them in last commit

* use pull policy from kubespray
2020-09-03 04:01:43 -07:00
93698a8f73 Calico: update crds to v1 and cr (#6360)
* Update CustomResourceDefinition for kubecontrollersconfigurations.crd.projectcalico.org to v1
* Align ClusterRole for kube-controllers with upstream (calico)
2020-09-03 00:51:40 -07:00
6245587dc8 Fix E306 in roles/network_plugin (#6516)
Signed-off-by: Miouge1 <maxime@root314.com>
2020-09-02 23:55:40 -07:00
2faf53b039 Check node_ip is defined when removing etcd node (#6603) 2020-09-01 01:05:58 -07:00
e0b1787740 Use crictl 1.19.0 for k8s 1.19.x (#6598) 2020-09-01 01:05:50 -07:00
9849dba5d3 Update cni plugins with minor fix (#6592) 2020-08-31 05:16:21 -07:00
03c9c091f2 Docker: Set Cgroup driver by default to systemd (#6563)
* Set Docker Cgroup driver to systemd

* Add docker_cgroup_driver in Docker defaults
2020-08-31 04:56:20 -07:00
5a8b68a429 Add support for openstack application credentials (#6534)
* Add support for openstack application credentials

* Add some lines for readability

* Update external_openstack_tenant_id check

Do not check external_openstack_tenant_id when application credentials are defined

* Add check for external_openstack_domain_id

* Fix typo
2020-08-31 03:30:28 -07:00
34d88ea6d9 Fix Ansible-lint E303 (#6409) 2020-08-31 03:30:20 -07:00
0665b45e61 Update nginx ingress to 0.35.0 (#6599) 2020-08-31 03:24:21 -07:00
648fcf3a2e Fix E306 in roles/etcd (#6515) 2020-08-31 03:20:20 -07:00
058438a25d Remove support for CoreOS Container Linux (#6576) 2020-08-28 02:28:53 -07:00
6e938a3106 Fix E306 in other roles (#6517) 2020-08-28 01:20:53 -07:00
2f93d62aa5 Update nginx ingress to 0.34.1 (#6571) 2020-08-27 10:15:53 -07:00
8ba3d7ec75 Add Kubernetes 1.19 hashes (#6593) 2020-08-27 09:45:53 -07:00
9e2d282709 cri-o: add variable to configure insecure pull (#6568)
By default do not allow "unqualified" (without a registry) images
because it is considered insecure and subject to MITM attacks.

To enable such pulls, configure for example:

```
crio_registries:
  - "docker.io"
  - "quay.io"
```
2020-08-27 09:09:53 -07:00
706c7cb4f1 etcd should not fail when adding an already existing member (#6587) 2020-08-27 02:33:01 -07:00
5884eeb606 Remove ethtool workaround, issue is now fixed (#6579) 2020-08-27 02:29:01 -07:00
e7ee19bd66 Update bunch of dependencies with minor fixes (#6570) 2020-08-27 02:25:01 -07:00
2f8fc92182 make it possible to open additional ports on master nodes (#6547) 2020-08-27 02:07:13 -07:00
f59d3fc4a3 Deviceroutesourceaddress (#6508)
* add FELIX_DEVICEROUTESOURCEADDRESS calico option

* add calico_use_default_route_src_ipaddr option 

add calico_use_default_route_src_ipaddr option to use FELIX_DEVICEROUTESOURCEADDRESS calico option

* Update k8s-net-calico.yml
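
A group_vars sketch using the variable named by the commit (the exact wiring into FELIX_DEVICEROUTESOURCEADDRESS lives in the calico templates):

```yaml
# Make Felix use the IP of the default-route interface as the source
# address for device routes (maps to FELIX_DEVICEROUTESOURCEADDRESS)
calico_use_default_route_src_ipaddr: true
```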
2020-08-27 02:07:01 -07:00
8e2bae0f2a Fix Ansible Lint warnings (No such file or directory) (#6581) 2020-08-26 23:19:10 -07:00
e6dae03a0d Add cilium hubble server in config (#6575)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-08-26 23:19:02 -07:00
2f2ed116f7 Improve metallb template for bgp peers (#6574)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-08-26 23:15:03 -07:00
e91c6a7bd1 update the ovn4nfv-k8s-plugin image version to v1.1.0 (#6531)
Signed-off-by: Kuralamudhan Ramakrishnan <kuralamudhan.ramakrishnan@intel.com>
2020-08-26 23:11:03 -07:00
1ff95e85f4 Rollback coredns, should not have been updated before 1.19 (#6573) 2020-08-26 03:30:03 -07:00
36924b63dc Allow webhook authorization (#6502) 2020-08-24 06:29:41 -07:00
0c80d3d9fa Add proxy_env calculation to reset.yml (#6558) 2020-08-21 02:03:46 -07:00
411510cbe6 Use proper openssl command to differentiate between host and ip in API certificate check (#6392)
* Use proper openssl command to differentiate between host and ip in current certificate check

* fixup! Use proper openssl command to differentiate between host and ip in current certificate check
2020-08-21 02:03:39 -07:00
6e2b8a5750 Add timeout to Get current version of calico cluster version, again (#6493) 2020-08-21 00:13:51 -07:00
ca66a96d0a make pre-remove node draining a failable task (#6442)
and add configuration to allow ungraceful removal
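
A hedged sketch of what a failable drain can look like (task shape and variable names assumed, not the actual role code):

```yaml
- name: Drain node prior to removal
  command: >-
    {{ bin_dir }}/kubectl drain {{ kube_override_hostname | default(inventory_hostname) }}
    --ignore-daemonsets --delete-local-data --timeout=120s
  register: drain_result
  # only fail when ungraceful removal is not explicitly allowed
  failed_when:
    - drain_result.rc != 0
    - not (allow_ungraceful_removal | default(false))
```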
2020-08-21 00:13:39 -07:00
0c09ec5d13 Bump Openstack cloud controller image verison to 1.18.2 (#6562) 2020-08-21 00:10:03 -07:00
a8e2110b2d #6552 Update extras_rh_repo_base_url (#6556) 2020-08-21 00:09:55 -07:00
250541d29d Use proper pypy download url in bootstrap script (#6555)
The bootstrap-os role uses a bootstrap script to provision a
Python interpreter on Flatcar and Container OS hosts. As the
pypy project switched to another host, the download URL changed.

If applied, this will use the new pypy download URL in the bootstrap script.
2020-08-21 00:09:47 -07:00
142b9e1eff Update k8s hashes and set default version to 1.18.8 (#6532) 2020-08-21 00:09:39 -07:00
f204212963 Add docs for 'setting up your first cluster' (#6544) 2020-08-21 00:05:40 -07:00
91ae87fa60 Fix setting node label if kube_override_hostname is defined (#6557) 2020-08-20 06:23:30 -07:00
85646c96ad Add docs about CI setup (#6397) 2020-08-20 04:37:23 -07:00
d6456d13c2 Update coredns to 1.7.0 (#6538) 2020-08-20 04:33:44 -07:00
98f7485303 Update weave to 2.7.0 + minor update to Cilium (#6501) 2020-08-20 04:33:36 -07:00
a42d811420 fix scale playbook (#6482) 2020-08-20 04:33:23 -07:00
bf6fdce339 Fix cert-manager E305 ansible-lint error (#6549) 2020-08-20 04:25:45 -07:00
fa378f09c3 Edited pre-upgrade task to uncordon a node failing to drain (#6546) 2020-08-20 04:25:36 -07:00
d9d11e2291 Update sonobuoy dependency (#6536) 2020-08-20 04:25:23 -07:00
73b2683697 Allow hosts with hyphen in name (#6529) 2020-08-18 00:53:30 -07:00
d8a749fd27 Update apiserver-audit-policy.yaml.j2 (#6526) 2020-08-18 00:49:37 -07:00
f2d2d080f6 add master_volume_type variable (#6524) 2020-08-18 00:49:29 -07:00
78ceef6b15 Remove unused variable (#6522) 2020-08-18 00:45:29 -07:00
ca8e59fa85 Add new cilium options for native routing (#6519)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-08-18 00:39:42 -07:00
b0210567aa Fixed Kubespray container-engine/docker role to populate docker.service (#6518) 2020-08-18 00:39:30 -07:00
33ec13293b Fix cilium_deploy_additionally with kubeadm etcd (#6514)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-08-18 00:35:36 -07:00
bedb411d06 improve Cilium metrics support (#6513)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-08-18 00:35:29 -07:00
ef3e98807e tlsminversion and tlsciphersuites kubelet (#6490) 2020-08-13 02:48:13 -07:00
49158dbe40 Minor Ambassador docs updates (#6503)
Signed-off-by: Alvaro Saurin <alvaro.saurin@gmail.com>
2020-08-06 08:37:42 -07:00
35682b5228 Fix cilium strict kube proxy replacement in HA (#6473)
* Update the cilium svc proxy test to HA mode

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>

* Fix cilium strict kube-proxy in HA

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>

* Add a single global endpoint variable

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>

* Add cilium docs about kube-proxy replacement

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>

* Fix issues in docs

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-08-06 00:14:55 -07:00
9cc70e9e70 Upgrade JetStack Cert-Manager to v0.15.2 (#6414)
* Upgrade JetStack Cert-Manager to v0.15.2

* Add README.md table of contents
2020-08-05 23:26:55 -07:00
50598d9d47 Fix E306 in tests/ (#6495) 2020-08-05 13:22:55 -07:00
fc23f37af7 Fix E306 in roles/kubernetes (#6500) 2020-08-05 07:56:28 -07:00
bfe143808f Allows tls verify skip on webhook auth url (#6472) 2020-08-05 05:02:29 -07:00
91742055e0 Fix E306 in scripts/ (#6496) 2020-08-05 01:56:28 -07:00
6c41f64a98 Correct sample inventory to pass yamllint (#6499)
Nit alert. The sample inventory throws an error when processed
by yamllint. The default line is currently commented out.
However, uncommenting it makes our linters fail.
2020-08-05 01:52:48 -07:00
e72dbf3dfc Option for MetalLB to talk BGP (#6383)
* Option for MetalLB to talk BGP

* Check for BGP peers when metallb_protocol is bgp

* README clarification

* Commented values as documentation only in the sample inventory

* layer 2 or BGP, not both
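
An illustrative inventory snippet (variable shapes assumed from the sample inventory, peer values hypothetical):

```yaml
metallb_protocol: "bgp"
metallb_ip_range:
  - "10.5.0.0/16"
metallb_peers:
  - peer_address: 192.0.2.1   # hypothetical upstream router
    peer_asn: 64512
    my_asn: 4200000000
```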
2020-08-05 01:52:40 -07:00
c3b78c3255 bootstrap-os for remove-node (#6154) 2020-08-05 01:52:28 -07:00
fb666c44b3 Quoted type constraints are deprecated (#6497) 2020-08-05 01:32:28 -07:00
58b5bf7886 Update base image to v2.13.3 (#6494) 2020-08-05 01:28:29 -07:00
cc70200a07 Fix Flexvolume mount in Openstack Controller (#6480) 2020-08-04 05:28:35 -07:00
ffbd98fec6 Remove hvac dependency (#6476) 2020-08-04 05:28:28 -07:00
f3c17361da Create a PodDisruptionBudget for the Cinder CSI controllerplugin (#6385) 2020-08-04 05:28:19 -07:00
bdf0238328 Upgrade molecule to v3 (#6468)
Signed-off-by: Victor Morales <v.morales@samsung.com>
2020-08-04 05:24:19 -07:00
39b907cdfb Remove workaround for kubeadm upgrade (#6478)
https://github.com/kubernetes/kubeadm/issues/1498 was closed
2020-08-03 01:17:40 -07:00
24a7878e7c Update kube-router to 1.0.1 and kube-ovn to 1.3.0 (#6479) 2020-08-01 00:34:04 -07:00
2364a84579 fix src for audit webhook config yaml (#6470) 2020-08-01 00:33:56 -07:00
c6e5be91e9 crio: align template crio.conf with upstream (#6432)
* log level by default increased to 'info'
* cgroup manager by default set to 'systemd'
* stream port (used by kubelet) bound to 127.0.0.1 for security reasons
* metrics can be enabled and port specified
2020-08-01 00:33:48 -07:00
ce22c0e6a4 Add option to configure IPVS timeouts in kube-proxy configration manifest. (#6396) 2020-08-01 00:33:40 -07:00
bd60df97aa Fix download calico policy condition (#6474) 2020-08-01 00:29:48 -07:00
94df580674 Moved docker_dns_options to defaults so it can be overridden (#6394)
* Moved docker_dns_options to defaults so it can be overridden

* Fixed yaml indentation and markdown

* Moved docker_dns_search_domains to defaults
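
With the move to defaults, an inventory override might look like this (values illustrative):

```yaml
docker_dns_options:
  - ndots:2
  - timeout:2
  - attempts:2
docker_dns_search_domains:
  - "default.svc.{{ dns_domain }}"
  - "svc.{{ dns_domain }}"
```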
2020-08-01 00:29:41 -07:00
90e5f8ffe1 adding ovn4nfv in kubespray (#6381)
Signed-off-by: Kuralamudhan Ramakrishnan <kuralamudhan.ramakrishnan@intel.com>
2020-07-31 07:33:08 -07:00
bf6168fca8 Move fedora30 jobs to fedora32 (#6426) 2020-07-30 23:31:07 -07:00
a78e861a89 Fix test if openstack_cacert is a base64 string (#6421) 2020-07-30 13:15:17 -07:00
3550e3c145 Adding kube-proxy-replacement support in cilium (#6334)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-07-30 02:46:31 -07:00
8425c2363b Replaced a broken link (#6467) 2020-07-30 00:58:31 -07:00
15ec44901d azure csi typo (#6469) 2020-07-30 00:52:31 -07:00
924cc11af6 Upgrade to kubernetes 1.18.6 (#6405)
- Add 1.17.9 and 1.16.13 SHAs
2020-07-29 14:54:09 -07:00
0fa5a252b9 Documentation for Ingress (#6378)
Signed-off-by: Alvaro Saurin <alvaro.saurin@gmail.com>
2020-07-29 06:55:47 -07:00
fe46349786 Fix ansible-lint E301 for commands fetching data (#6465) 2020-07-28 08:39:47 -07:00
96a2b386f2 Fix shellcheck url (#6462) 2020-07-28 05:57:08 -07:00
214e08f8c9 Fix ansible-lint E305 (#6459) 2020-07-28 01:39:08 -07:00
8bd3b50e31 Fix ansible-lint E404 (#6417) 2020-07-28 01:21:08 -07:00
b8c4bd200e Update README.md and openstack.md (#6455) 2020-07-27 07:44:17 -07:00
e70f27dd79 Add noqa and disable .ansible-lint global exclusions (#6410) 2020-07-27 06:24:17 -07:00
b680cdd0e4 Move healthz check to secure ports (#6446) 2020-07-27 00:26:17 -07:00
c9f63e5016 Update multus version & crio conf (#6444) 2020-07-26 23:36:16 -07:00
d8a197ca51 Fix remove etcd broken with etcdctl_api 3 (#6448) 2020-07-26 23:32:29 -07:00
1f9841f609 update cinder csi manifests (#6434) 2020-07-26 23:32:17 -07:00
aa21edeb53 Update docker package to 19.03.12 (#6439) 2020-07-22 09:26:06 -07:00
eb69f126de * add proxy_env definition to remove_node.yml resolving #6430 (#6431) 2020-07-22 00:28:05 -07:00
70edccf7e0 Newer version of Local Path Provisioner in samples (#6437)
To make it less confusing for users who uncommented the whole local path
provisioner block [1], the samples should point at least to
version 0.0.3, which supports the helper image [2] configured by the
local_path_provisioner_helper_image_repo variable. As 0.0.3 is a bit old, the
samples could point to the current newest release, 0.0.14.

[1] 45a177e2a0 (commitcomment-38625688)
[2] 315d67fa8c
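
A sample-inventory-style sketch (only local_path_provisioner_helper_image_repo is named by the commit; the other names and values are assumed):

```yaml
local_path_provisioner_enabled: true
local_path_provisioner_version: "v0.0.14"            # assumed variable name
local_path_provisioner_helper_image_repo: "busybox"  # configurable since 0.0.3
```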
2020-07-22 00:08:11 -07:00
4b80a7f6fe Felix configuration via extraenvs of calico node (#6433) 2020-07-22 00:08:04 -07:00
e06e6895da Remove dbus-tools from coreos bootstrap (#6428)
Trying to layer this package on Fedora 32 causes the install to crash,
and furthermore it looks like the original bug linked to in the comment
has been resolved for Fedora 31.
2020-07-22 00:04:04 -07:00
50fc82acdc Minor update to Cilium and Calico (#6438) 2020-07-21 23:58:33 -07:00
ea67bb6e41 Fix typo: Modprode -> Modprobe (#6429) 2020-07-21 23:58:25 -07:00
b19f2e2d3d Update the calico_veth_mtu setting to affect IP-in-IP users (#6419)
* Update calico_veth_mtu to FELIX_IPINIP variable

calico_veth_mtu is specified in the configuration, but since it only works for WireGuard, modify it so it also works for IP-in-IP users.

* Update template with a cleaner expression
2020-07-21 23:58:18 -07:00
9c48f666ec change /etc/ssl/etcd to etcd_config_dir param (#6408)
* change /etc/ssl/etcd to etcd_config_dir param

* add use etcd_events_data_dir param
2020-07-21 23:58:05 -07:00
4990eec4a2 Replace Openstack with OpenStack (#6413)
The official spelling is OpenStack, not Openstack, as [1] shows.
This replaces it with OpenStack in the docs.

[1]: https://www.openstack.org/
2020-07-21 23:54:05 -07:00
bf8c8976dd Upgrade etcd to 3.4.3 (#5998) 2020-07-20 07:26:51 -07:00
a7ec0ed587 add audit webhook support (#6317)
* add audit webhook support

* use generic name auditsink
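
For context, audit webhook backends consume a kubeconfig-format file upstream; a generic example of that format (endpoint hypothetical):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: audit-webhook
    cluster:
      server: https://audit.example.com/   # hypothetical collector endpoint
contexts:
  - name: default
    context:
      cluster: audit-webhook
      user: ""
current-context: default
users: []
preferences: {}
```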
2020-07-20 01:32:54 -07:00
1a1fe99669 Add a way to deploy cilium alongside another CNI (#6373)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-07-17 05:57:01 -07:00
8818073ff3 Cleanup old build-cephfs-provisioner.yml playbook (#6418) 2020-07-17 04:15:00 -07:00
b35e6558bc Always enable GitLab CI artifacts for cluster-dump (#6412) 2020-07-16 13:45:00 -07:00
5e22574402 Remove allow-release-candidate-upgrades already include in experimental-upgrades flag (#6349) 2020-07-15 00:26:37 -07:00
e1873ab872 add calico-node selinux (#6359) 2020-07-15 00:22:38 -07:00
29312a3ec0 Add oomichi to reviwers of MetalLB addon (#6393)
I'd like to review PRs related to the metallb addon whenever possible to make
it better, and it will be easier to track related PRs if I become the
reviewer.
2020-07-14 20:44:37 -07:00
feeb701c13 Respect kube_override_hostname during removal/upgrade (#6347)
* respect kube_override_hostname during removal/upgrade

* Use hostvars in loop
2020-07-13 07:18:40 -07:00
b347aefd61 Fixed fedora modular repos activation for fcos (#6300)
* Enable fedora modular repos for fcos #6299

* Fixed fedora modular repos activation for fcos #6300
2020-07-13 07:18:32 -07:00
abfa1636e4 Fix kube-proxy post deployment removal (#5554)
* Fix kube-proxy removal

* Fix unwanted skipped task for kube-proxy
* Fix kube_proxy_remove default

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>

* Add test for kube-router svc proxy

Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-07-13 07:12:33 -07:00
deca5ec903 Remove old csi-attacher flag and fix RBAC for Cinder CSI (#6358)
Add proper RBAC for new csi-attacher version
2020-07-13 04:48:32 -07:00
05b9f14b76 Update cilium minimum kernel preinstall check (#6376)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-07-13 04:44:32 -07:00
4cb576da19 Add readiness probe to dns-autoscaler (#6382) 2020-07-13 02:50:34 -07:00
8cb644fbec Add Fedora CoreOS kubevirt image for tests (#6337) 2020-07-10 01:07:48 -07:00
22996babcf allow kubeadm to upgrade etcd (#6345)
Co-authored-by: Hans Feldt <hafe@users.noreply.github.com>
2020-07-07 12:36:00 -07:00
75ad868cbd crio: harden downloads with retry (#6374)
CI job 624031102 failed with:

fatal: [ubuntu1804]: FAILED! => {"changed": false, "msg": "Failed to download key at https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_18.04/Release.key: Request failed: <urlopen error [Errno -3] Temporary failure in name resolution>"}

Assuming it's a temporary problem, this should get more robust with a
couple of retries, like in other roles.
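
The usual Ansible retry idiom for such a download looks like this (a sketch, not the exact task):

```yaml
- name: Add CRI-O kubic repo key
  apt_key:
    url: "https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_18.04/Release.key"
    state: present
  register: kubic_repo_key
  until: kubic_repo_key is succeeded
  retries: 4
  delay: 3
```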

Co-authored-by: Hans Feldt <hafe@users.noreply.github.com>
2020-07-07 12:32:01 -07:00
9433fe46c8 Add workaround with include_task for mitogen (#6312) 2020-07-07 08:09:59 -07:00
935c5093e2 Enable OVH CI (#6365) 2020-07-06 01:56:51 -07:00
6bb47d8adb Fix can't remove etcd node (#6363)
* add remove_node_ip

* move remove_node_ip to remove etcd part

* fix: remove trailing space

* fix: handle ubuntu: focal
2020-07-04 02:02:48 -07:00
57eefdd458 Fix azure-cloud-config.j2 JSON syntax (#6364) 2020-07-02 23:38:47 -07:00
060d25fc79 Update MetalLB README.md (#6350)
MetalLB recently became one of the addons, and its options were renamed.
This updates MetalLB README.md for this change.
2020-07-02 07:12:54 -07:00
4ce970c0b2 Cilium: overwrite auto-detected MTU of underlying network (#6329) 2020-07-02 07:12:47 -07:00
017df7113d Patch Calico for V3.14.0 missing CR and CRD (#6276) 2020-07-01 08:44:16 -07:00
00fe3d5094 Explicitly set ETCDCTL_API and use ETCDCTL_ENDPOINTS (#6327) 2020-07-01 04:56:16 -07:00
bcac3c62a2 Add additional metadata configuration options to external Openstack CCM (kubernetes-sigs#6338) (#6339)
* Add additional metadata configuration option to external Openstack CCM (kubernetes-sigs#6338)

* Set the variable external_openstack_metadata_search_order undefined by default
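
An inventory sketch using the variable introduced here (the value mirrors the upstream CCM's search-order format):

```yaml
# Query the config drive before the metadata service (upstream default order)
external_openstack_metadata_search_order: "configDrive,metadataService"
```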
2020-07-01 04:52:17 -07:00
2a82dff3ae Remove runtime-config from kubeadm if empty (#6311) 2020-06-30 11:22:05 -07:00
16ec5939c2 Update deprecated api (#6245) 2020-06-30 09:00:07 -07:00
b064274e27 Update kube-router to 1.0.0 (#6211) 2020-06-30 08:54:06 -07:00
ae003af262 Fix kubelet cgroup driver detection for crio (#6331)
* Fix kubelet cgroup driver detection for crio

Remove fact standalone_kubelet since it is not used

* Fix yamllint complaints of roles/kubernetes/node/tasks/facts.yml

Co-authored-by: Hans Feldt <hafe@users.noreply.github.com>
2020-06-30 02:32:05 -07:00
f515898cb5 Update hashes and set default version to 1.18.5 (#6335) 2020-06-30 02:00:05 -07:00
25bab0e976 Change MetalLB to one of addons (#6238)
This changes MetalLB from a contrib playbook to one of the addons, deploying
MetalLB as part of the Kubernetes cluster deployment. By default, Kubespray
doesn't deploy the MetalLB addon.
2020-06-29 15:11:59 -07:00
8213b1802b Update calico to 1.15.0 + minor update to kube-ovn/weave (#6306) 2020-06-29 14:39:58 -07:00
4c1e0b188d Add .editorconfig file (#6307) 2020-06-29 12:39:59 -07:00
09b23f96d7 Use NetworkManager to manage resolv.conf in FedoraCoreOS (#6291) 2020-06-29 00:26:17 -07:00
56f389a9f3 Add USE_REAL_HOSTNAME to inventory.py (#6293)
inventory_builder creates a hosts.yaml file with hostnames like "node1",
"node2", etc. Even when specifying override_system_hostname=false, the
output of "kubectl get nodes" shows those hostnames ("node1", etc.)
instead of the actual hostnames.
To solve this issue, this adds an option USE_REAL_HOSTNAME to use the
actual hostnames when creating the hosts.yaml file instead of "node1", etc.
2020-06-26 00:03:47 -07:00
45e12df8a3 Cleanup OpenStack network things (#6283) 2020-06-26 00:03:39 -07:00
1892cd65f6 Add support for dns_etchosts (#6236) 2020-06-26 00:03:31 -07:00
d3ca9d1db9 kube_encryption_resources must be output as yaml (#6309) 2020-06-25 23:59:31 -07:00
16ad344c41 Gather ansible_default_ipv4 for specific groups (#6318) 2020-06-25 23:55:31 -07:00
8ca2a9a7d5 added azure_cloud parameter to Azure's cloud_config (#6321) 2020-06-25 14:35:30 -07:00
93cbcb61b8 Fix some doc links (#6328) 2020-06-25 11:56:37 -07:00
276c450759 Use connection: local when delegate_to: localhost (#6322)
This avoids an SSH connection to the local host
2020-06-25 08:14:38 -07:00
a6a6e843af Add /dev volume (#6319) 2020-06-25 06:22:38 -07:00
f54f63ec3f Update cilium to 1.8.0 (#6314) 2020-06-25 06:16:38 -07:00
93951f2ed5 fix use of ansible tags (#6316)
tags are not inherited for include_role, hence the change
from include to import

Co-authored-by: Hans Feldt <hafe@users.noreply.github.com>
2020-06-25 03:00:37 -07:00
c29b21717d Add event-ttl duration (#6310)
* Add event-ttl duration

* Fix wrong location
2020-06-24 08:15:17 -07:00
80d16e6c91 Support for Ambassador OSS as an Ingress (#6135)
Support for Ambassador OSS as an Ingress Controller when
setting `ingress_ambassador_enabled: true`.

Signed-off-by: Alvaro Saurin <alvaro.saurin@gmail.com>
2020-06-24 07:39:17 -07:00
68cfb9a053 Update OpenStack doc for external cloud provider (#6252)
The in-tree cloud provider is now deprecated, and it is recommended to use
the external cloud provider for OpenStack instead.
The doc described how to upgrade from the in-tree cloud provider, but
in the current situation it is better to describe how to deploy the
external cloud provider from scratch instead.
This updates the OpenStack doc for this use case.
2020-06-22 04:48:39 -07:00
d50fe9550c bump dashboard to 2.0.2 (#6303) 2020-06-22 01:14:40 -07:00
8f5c4dcd2e Add support for Kata Containers (#6256)
* Install Kata Containers as additional container runtime

* Create RuntimeClasses for Kata Containers

* Updated Vagrant to optionally run without Docker as container manager

* Updated Vagrant to optionally use Libvirt nested virtualization

* Add Kata Containers documentation

* Fix lint errors

* Add kata_containers_enabled to kubespray-defaults

* Fixed typo error

* Fixed typo error
2020-06-22 00:28:39 -07:00
1a802726d2 Update base image to v2.13.2 (#6296) 2020-06-19 06:47:58 -07:00
90c867b424 Update loadbalancers versions (haproxy&nginx) (#6278) 2020-06-18 07:48:19 -07:00
eeb77369cb Update hashes and set default to 1.18.4 (#6285) 2020-06-18 06:30:19 -07:00
69a48cbdd7 Add Vagrant CI for Ubuntu 20.04 (#6279) 2020-06-18 01:18:05 -07:00
33b8ad0d89 Update test-cases documentation (#6264) 2020-06-17 23:40:05 -07:00
605cfeb3e4 Test bootstrap-os on more platforms (#6277) 2020-06-17 04:52:39 -07:00
c6588856c7 Add Ubuntu 20.04 support and use Python 3 (#6157) 2020-06-16 13:04:05 -07:00
dba645421f ADD tls cipher suites support (#6024)
* ADD tls cipher suites support

yaml lint

yamllint

* update test case

* update test case
2020-06-16 04:10:05 -07:00
f437ac0b27 Fix nologin wrong path (#6272) 2020-06-16 02:30:04 -07:00
8ec6729cae Add disable_ipv6_dns: true in E2E tests (#6266) 2020-06-16 01:12:03 -07:00
19d4b5dd04 Update various dependencies (#6265) 2020-06-16 01:08:03 -07:00
78251b0304 Fix check external_openstack_tenant_name value (#6270)
We need to specify either external_openstack_tenant_name or
external_openstack_tenant_id. Those values were checked separately, for
being defined or for having actual values.
However, those values are always defined because of the following code
in openstack/defaults/main.yml:

external_openstack_tenant_id: "{{ lookup('env','OS_TENANT_ID')| default(lookup('env','OS_PROJECT_ID'),true) }}"
external_openstack_tenant_name: "{{ lookup('env','OS_TENANT_NAME')| default(lookup('env','OS_PROJECT_NAME'),true) }}"

So even when neither value is specified, those checks could not detect
the misconfiguration. This fixes the checks to detect it.
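
A hedged sketch of the kind of check that does catch empty lookups (task shape assumed, not the actual playbook code):

```yaml
- name: Ensure an OpenStack project is configured
  assert:
    that:
      # both variables are always defined, so test for non-empty values instead
      - (external_openstack_tenant_id | length > 0) or (external_openstack_tenant_name | length > 0)
    msg: "Set OS_PROJECT_ID/OS_PROJECT_NAME (or OS_TENANT_ID/OS_TENANT_NAME)"
```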
2020-06-16 01:02:03 -07:00
10e54eca26 make better condition for applying nf_conntrack kernel tweak (#6267)
* MINOR: Check kernel version before enable modprobe nf_conntrack

* CLEANUP: no more need to ignore error of this task

* MINOR: Fixing yaml and ansible lint error - remove trailing space
2020-06-16 00:34:06 -07:00
a8740c6e13 fix a few tasks falsely reporting "changed" (#6269)
Co-authored-by: Hans Feldt <hafe@users.noreply.github.com>
2020-06-16 00:24:03 -07:00
06391b6dd9 Fix kubectl.sh parameter quoting (#6239)
If the special parameter "$@" is not quoted, the following command will not work:

./kubectl.sh patch storageclass my-storage-class -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
2020-06-14 13:57:57 -07:00
8dc01df60b Oracle Linux 8 support and fixes (#6198)
* Add oraclelinux8 and disable firewalld

Add oraclelinux8 image and disable firewalld on oraclelinux VMs

* Fix Oracle Linux repositories

As documented in: http://yum.oracle.com/getting-started.html#installing-software-from-oracle-linux-yum-server
public-yum-ol7.repo was deprecated on release 7.6. Some repos were integrated into oracle-linux-ol7.repo (i.e.: ol7_latest, ol7_addons) and others are available as packages (epel). This also adds support for oraclelinux8

* Fix to use ansible_distribution_version

Instead of ansible_distribution_major_version

* Update README.md
2020-06-12 01:59:56 -07:00
a9de6dde33 Cleanup unneeded elif in kubelet env file (#6261) 2020-06-12 01:27:55 -07:00
75571ed303 manual intervention on etcd member removal aren't required anymore (#6248) 2020-06-12 01:13:54 -07:00
1912df7e3e Create /etc/gai.conf if not exists when disable_ipv6_dns is 'true' (#6258) 2020-06-12 00:55:55 -07:00
bacbb2a0ca Add custom dashboard namespace test (#6249)
Add custom dashboard namespace test
2020-06-12 00:52:03 -07:00
e1ba25a4fb Bump CSI containers to latest version (#6221)
* bump csi containers

* bump snapshotter to 2.1.1
2020-06-12 00:51:55 -07:00
10a17cfe54 Look up OS_PROJECT_NAME for OpenStack project name (#6262)
In OpenStack history, the term "tenant" used to be used for a separated
namespace; however, "project" is used now instead.
"tenant" has therefore been replaced with "project", and all "TENANT"
variables were also renamed to "PROJECT".
This makes Kubespray search for the "PROJECT" variable as well, for newer
OpenStack clouds.
2020-06-12 00:47:56 -07:00
5a311236c4 Enable portmap CNI plugin with kube-router (#6204)
... to have working `hostPort` for containers.

See: https://www.kube-router.io/docs/user-guide/#hostport-support
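
What hostPort support enables, as a plain upstream-style example (image and ports illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostport-demo
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
          hostPort: 8080   # reachable on the node's IP; needs the portmap CNI plugin
```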
2020-06-10 10:08:52 -07:00
a7b8708dfc calico: use absolute path to docker, crictl binary (#6253)
To avoid the following error (ignored when pipefail is off)

  RUNNING HANDLER [network_plugin/calico : containerd | delete calico-node containers] *******************************************************************************
  changed: [node1] => {"attempts": 1, "changed": true, "cmd": "crictl pods --name calico-node-* -q | xargs -I% --no-run-if-empty bash -c \"crictl stopp % && crictl rmp %\"", "delta": "0:00:00.004240", "end": "2020-06-10 03:32:41.316955", "rc": 0, "start": "2020-06-10 03:32:41.312715", "stderr": "/bin/sh: crictl: command not found", "stderr_lines": ["/bin/sh: crictl: command not found"], "stdout": "", "stdout_lines": []}
2020-06-10 03:22:08 -07:00
8964dc53df Add Offline docs to docs website's sidebar (#6251)
Fix the offline docs URL in README
2020-06-09 12:17:01 -07:00
ecc3a0aec5 Update kube-ovn to 1.2.0 - also update minor version for multus and weave (#6223) 2020-06-09 12:09:01 -07:00
144743e818 Fix indentation in a few places so file can be round-tripped more easily (#6178)
with the Python ruamel.yaml library

- Change True/False to true/false in a few places so the file can
  be more easily round-tripped with the Python ruamel.yaml library
2020-06-09 06:39:20 -07:00
7712bd0c76 remove ectd node in pre step, instead of post step (#6099) 2020-06-09 05:37:17 -07:00
101686c665 Remove outdated CriticalAddonsOnly toleration and critical-pod annotation (#6202) 2020-06-09 05:23:30 -07:00
f2ca929a4a Move nodes readiness test before pods readiness (#6089) 2020-06-09 05:23:18 -07:00
13f2b3d134 Improve air-gap installation instructions (#6234) 2020-06-09 03:25:17 -07:00
50204d9551 Add rpm-ostree cleanup task (#5986) 2020-06-09 02:49:17 -07:00
6852f821a5 Update nginx ingress to 0.32.0 (#6063) 2020-06-09 02:45:18 -07:00
953bc8dee2 Update docker & docker-cli to 19.03.11 (#6225) 2020-06-07 23:55:46 -07:00
9afd3f0c32 Use a random subnet for elastx CI (#6232) 2020-06-06 12:11:45 -07:00
3f443f3878 set allowVolumeExpansion in cinder csi (#6220) 2020-06-05 08:27:43 -07:00
5dd85197af Manage containerd.io package with docker CRI. (#6218)
* Manage containerd.io package with docker CRI.

* Refactor common containerd stuff to separate role

* Fix check mode and unnecessary shell.
2020-06-05 05:55:44 -07:00
764a851189 Terraform quoted references are now deprecated (#6203) 2020-06-05 00:05:43 -07:00
b98cb74f5e Use 19.03.9 in localhost CI (#6201) 2020-06-04 08:59:14 -07:00
750db9139a fix CRI-O repos for centos distributions (#6224)
* fix CRI-O repos for centos distributions

* fix CRI-O repos for centos distributions
- revert workarounds

* fix CRI-O repos for centos distributions
- use https for centos repos

* avoid 302 redirects for centos repos
2020-06-04 01:08:44 -07:00
f2c8b393e1 Upgrade calico to 3.14.1 (#6219)
* upgrade calico to 3.14.1

* add checksums for calico 3.14.1 and update readme
2020-06-03 00:38:17 -07:00
fd59556222 Add Elastx CI (#6127) 2020-06-03 00:00:17 -07:00
0b54e8e04c fix documentation example (#6216)
Signed-off-by: Wang Zhen <lazybetrayer@gmail.com>
2020-06-02 05:42:23 -07:00
85b3526617 Fix vSphere CPI configMap and vSphere CSI secret re-deploy (#6209) (#6210) 2020-06-02 05:42:15 -07:00
7ff8fc259b Support all taints in network plugins manifests (#6208)
flannel, ovn and multus network plugins did not support all taint keys. This
update changes the tolerations to support them all.

According to the documentation:

```
There are two special cases: An empty key with operator Exists matches all keys,
values and effects which means this will tolerate everything. An empty effect matches
all effects with key key.
```

Usage of the empty `key` and `effect` ensures the network plugin daemonset will
be deployed on every node (e.g. in case of custom taints, or the NoExecute effect).
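
Concretely, the catch-all toleration looks like this in a daemonset spec (generic upstream form, not copied from the manifests):

```yaml
tolerations:
  # empty key with operator Exists matches every taint, whatever its effect
  - operator: Exists
```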
2020-06-02 05:38:15 -07:00
cc507d7ace disable bird-check flag for probes of calico-node pods when calico_network_backend is not 'bird'. (#6217) 2020-06-01 12:44:14 -07:00
7c0fbe2959 dead link (#6181)
* dead link

* triggger ci
2020-06-01 09:33:56 -07:00
6bc60e021e Update minor version for dependencies (#6206) 2020-05-29 05:11:24 -07:00
54816f1217 Update containerd package to 1.2.13-3.2.el7 (#6162)
* Update containerd package to 1.2.13-3.2.el7

* Update Fedora containerd package versions.

* Update Redhat containerd stable and edge packages.
2020-05-29 05:11:16 -07:00
be3283c9ba Fix conflicting clusterIP fact between coredns and nodelocaldns (#6195) 2020-05-29 04:27:15 -07:00
249b0a2a80 Allow metallb:speaker to create events (#6147)
Since MetalLB v0.8[1], metallb:speaker has started publishing a
nodeAssigned event on k8s resources.
To support MetalLB v0.8+, this allows metallb:speaker to create events.

[1]: 5cc6e23776 (diff-60053ad6fecb5a3cfabb6f3d9e720899R246)
2020-05-29 04:17:16 -07:00
71d476b121 Auto detect github target branch in rebase script (#6187) 2020-05-28 12:37:15 -07:00
45d8797dce Fix download boolean for local_path_provisioner (#6177) 2020-05-28 06:56:02 -07:00
b6e21a18cc Modify the populate no_proxy task to use a combine rather than relying on the hash_behaviour setting to be set to merge rather than replace (#6112) 2020-05-28 06:42:03 -07:00
f959cc296f Fix metrics-server rules (#6165) 2020-05-28 03:18:02 -07:00
ab44beba17 weave: support any taint effect in daemonset tolerations (#6159)
Since weave 2.5.1, the `NoExecute` taint effect is no longer supported,
so this changes the daemonset tolerations accordingly.

Also remove the toleration key `CriticalAddonsOnly`, which is not required anymore.
2020-05-28 01:10:02 -07:00
b2a0b649fd Add new Kubernetes version hashes and set default to 1.18.3 (#6173) 2020-05-28 01:02:03 -07:00
6179405e84 Update docker default to 19.03 - cleanup docker docs & refs (#6153) 2020-05-28 00:52:02 -07:00
83d945127f Make vagrant CI normal (#6074) 2020-05-28 00:46:02 -07:00
1be15a0864 Enable crio 1.18 (#6197) 2020-05-28 00:42:15 -07:00
41b44739b1 Bump CNI plugins to 0.8.6 (#6196)
https://github.com/containernetworking/plugins/releases/tag/v0.8.6

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-05-28 00:42:03 -07:00
38ca58ae8d update pause images version: 3.2 (#6190) 2020-05-28 00:38:02 -07:00
fd7829d468 Update MetalLB version (#6139)
If running MetalLB v0.7.3 on k8s v1.18.2, metallb pods output the
following parsing error of v1.ServiceList:

  $ kubectl logs controller-dbb46cf84-fw8h8 -n metallb-system
  {
    "caller":"reflector.go:205",
    "level":"error",
    "msg":"go.universe.tf/metallb/internal/k8s/k8s.go:231:
      Failed to list *v1.Service: v1.ServiceList:
        Items: []v1.Service: v1.Service: ObjectMeta:
        v1.ObjectMeta: readObjectFieldAsBytes:
        expect : after object field, parsing 1605

As a result, an external IP address is never allocated to Services of
type LoadBalancer.
Updating MetalLB to the latest version, v0.9[1], solves this issue.

[1]: https://hub.docker.com/r/metallb/controller/tags
2020-05-27 14:10:03 -07:00
d62836f2ab Replace seccomp profile docker/default with runtime/default (#6170)
Signed-off-by: Wang Zhen <lazybetrayer@gmail.com>
2020-05-27 14:02:02 -07:00
4fd03b93f7 Rewrite download_hash in Python (#5995)
- Directly update the main.yml file with the new hashes.
2020-05-27 06:52:40 -07:00
1617a6ea8e CI upgrade from v2.13.1 (#6188) 2020-05-27 05:22:40 -07:00
e9ce7243b8 Match docker-cli version with docker-engine version (when available) (#6163) 2020-05-25 05:37:11 -07:00
d036a04d4d restart kubelet service when kube-config.yml is changed (#5402)
* fix(kubelet): exec notify restart kubelet service when kube-config.yml changed

* Revert "refactor(kubelet handler): change task name("reload kubelet") this is misleading"

This reverts commit 8f5d29560802c7c997293adb1ce9f84d3b20b6cb.

* fix(handlers,kubelet): setting right notify task name
2020-05-19 10:13:37 -07:00
35ad57674e Update containerd to 1.2.13-2 (#6156) 2020-05-18 07:57:36 -07:00
437189c213 Fix missing permissions for OpenStack cloud-controller-manager preventing metrics scraping (#6124) 2020-05-18 02:35:45 -07:00
0f5fd1edc0 update documentation to add and remove nodes (#6095)
* update documentation to add and remove nodes

* add information about parameters to change when adding multiple etcd nodes

* add information about reset_nodes

* add documentation about adding existing nodes to etcd masters.
2020-05-18 02:35:37 -07:00
b5aaaf864d Add additional network configuration options to external Openstack CCM (#6083) (#6085)
* Add additional network configuration options to external Openstack CCM (#6083)

* Change the default version of external openstack cloud controller image to v1.18.1 since there was an issue in v1.18.0 where some IPs of the private network were ignored

* Change Network section in external-openstack-cloud-config.j2 to Networking

* Add networking customization information in the openstack documentation
2020-05-18 02:31:36 -07:00
d948839320 Fix resolv.conf configuration for Fedora CoreOS. (#6138) 2020-05-18 02:27:36 -07:00
a5af58c05a Fix apiserver port when upgrading (#6136) 2020-05-18 01:21:36 -07:00
d8a61b94a9 Update MetalLB README (#6140)
This updates MetalLB README as following
- Remove unnecessary markdown to read it easily on github
- Make wording consistent (kubernetes, loadbalancer)
- Add change-required option
2020-05-18 01:17:36 -07:00
fda05df5f1 Only fix kube-proxy address on evaluating kube_master hosts (#6152)
Change-Id: I83a7101a6cd99eb531d8385de5c31aee4f474469
2020-05-17 13:05:36 -07:00
3997aa9a0f Use OS packaging default value for apparmor_profile in crio.conf (#6125) 2020-05-14 21:47:00 -07:00
81292f9cf3 Fix apt update don't access Docker’s official repository for Ubuntu (#6106) 2020-05-13 07:06:26 -07:00
167e293594 Fix erroneous variable name (docker_keepcache) (#6129) 2020-05-13 06:26:27 -07:00
1f9ccfe54d Rollback metrics-server version and enable in one CI test (#6130) 2020-05-13 06:20:26 -07:00
d3d0360526 Changed state to present instead of installed in glusterfs role for Debian (#6096) 2020-05-12 13:50:30 -07:00
826b0f384d Add installation of requirements for Azure (#6076)
Because the Azure README lacks a step to install the requirements, this
error can happen:

 "The ipaddr filter requires python's netaddr be installed on the
  ansible controller"

It is nice to add the installation step for Azure users.
2020-05-12 13:50:23 -07:00
a3131e271a Removed env vars DOCKER_NETWORK_OPTIONS and INSECURE_REGISTRY from docker.service.j2 (#6126) 2020-05-12 13:46:21 -07:00
ed12936be2 Add missing RBAC rule #6116 (#6121) 2020-05-11 04:25:51 -07:00
7c00ce5f30 Update metrics-server tag and template (#6090) 2020-05-11 03:55:50 -07:00
c87bd53352 Update calico to 3.14.0 (#6120) 2020-05-11 03:51:51 -07:00
af1c93cdfc Add option to expose metrics on separate port (#6092) 2020-05-10 12:21:51 -07:00
9ce7fc9b2c Create namespace when dashboard deployment uses customized namespace. (#6107)
* Create namespace when dashboard deployment uses customized namespace.

* Fix syntax.
2020-05-10 11:38:02 -07:00
b6243bfc1c Fix ImagePullPolicy missing variable usage (#6091) 2020-05-10 11:37:50 -07:00
21ea079896 Disable OVH CI (#6114) 2020-05-09 15:19:50 -07:00
93579773d6 Cleanup kubernetes 1.15.x hashes (and references) as it has now reached EOL (#5876) 2020-05-09 12:19:50 -07:00
0bd23f720d Fix docker fedora packages (#6097) 2020-05-08 15:39:51 -07:00
dca3bf0e80 Fix first etcd member exclusion in host group pattern (#6109) 2020-05-08 15:31:51 -07:00
c605a05c6b Update coredns to 1.6.7 (#6086) 2020-05-08 12:07:51 -07:00
c44f13114f Allow containerd runtime with fedora os (30/31) - add CI test (#6094) 2020-05-08 07:55:43 -07:00
ef7076e36f fix expected str instance, float found #6078 (#6103) 2020-05-08 05:57:42 -07:00
324106e91e Remove Kubernetes <1.16 conditionals (#6088) 2020-05-08 00:45:43 -07:00
218b2a5992 Workaround about inconsistent CRI-O YUM repo path on Kubic repos (#6101) 2020-05-07 12:59:42 -07:00
61e7afa9f0 Fix some typos and outdated docs (#6071) 2020-05-06 11:17:25 -07:00
367566adaa Fix kubernetes-dashboard template identation (#6066)
The 98e7a07fba commit updates the
dashboard version to 2.0.0, but the enable-skip-login flag wasn't
updated. This change fixes its indentation to avoid issues when
dashboard_skip_login is enabled.
2020-05-06 11:17:17 -07:00
c06f482901 Update default kubernetes version to 1.18.2 (#6064) 2020-05-06 11:17:09 -07:00
965fe1db94 Update cni spec to 0.4.0 for network plugin allowing it (#6053) 2020-05-06 11:13:09 -07:00
f6be326feb Update kube-ovn to 1.1.1 (#6060) 2020-05-06 11:05:09 -07:00
c58e5e80ce Bump pypy to 7.3.1, verify hash (#6070)
As of pypy 7.3.0, we can utilize the official pypy project as opposed to
the previously used "portable-pypy" distribution.
2020-05-06 04:49:08 -07:00
641a2a8bb4 Skip molecule tests for Ubuntu 18.04 (#6077) 2020-05-05 07:17:09 -07:00
7d497e46c5 Update calico to 3.13.3 (#6061) 2020-05-04 08:56:26 -07:00
d414588a47 Azure: Rename apply-rg_2.sh to apply-rg.sh (#6049)
apply-rg.sh was for version 1 of the Azure CLI (the "azure" command), which is
old; version 2 (the "az" command) is officially used today.
apply-rg_2.sh was for version 2. In addition, the README[1] says
we need to run apply-rg.sh to apply the templates.

This renames apply-rg_2.sh to apply-rg.sh for common usage of
version 2.

[1]: https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/azurerm#generating-and-applying
2020-05-03 12:42:26 -07:00
79de8ff169 Replace "replicas" option (CI tests) removed in latest k8s versions (#6068) 2020-05-03 12:36:34 -07:00
38daee41d5 Reorder tests in packet file (#6067) 2020-05-03 12:36:26 -07:00
f8f55bc413 Update cilium to 1.7.3 (#6069) 2020-05-03 12:32:26 -07:00
7457ce7f2d Update Kubespray CI image to v2.13.0 (#6062) 2020-05-02 00:56:26 -07:00
01dbc909be Make Vagrant CI use unsafe I/O (#6058) 2020-05-01 07:30:29 -07:00
0512c22607 Update contrib/azurerm/README.md (#6057)
ansible-playbook needs to log in to the Azure virtual machines via an
SSH keypair, and users need to set ssh_public_keys to their own
SSH public key. The change of ssh_public_keys is mandatory.
So this updates contrib/azurerm/README.md to explain that.
In addition, the path of all.yml was wrong. That also is updated with
this.
2020-04-30 23:46:12 -07:00
f0d5a96464 Update deprecated command in azure script (#6056)
apply-rg_2.sh uses the 'az group deployment' command, but the command is
deprecated, as the following warning message shows:

"This command is implicitly deprecated because command group
 'group deployment' is deprecated and will be removed in a future release.
 Use 'deployment group' instead."

This updates these deprecated commands.

FYI: The command has been deprecated since [1] on azure-cli side.
[1]: 991cb7cc7c (diff-2057bbb8441166e4910b34b09d22b58cR222)
2020-04-30 23:46:06 -07:00
361645e8b6 Fix multus missing cni and erroneous CI tests (#6051) 2020-04-30 23:38:05 -07:00
353d44a4a6 Add CI var for http_proxy (#6039) 2020-04-30 05:44:17 -07:00
680aa60429 Specify tag for OpenStack Cloud Controller image (#6048) 2020-04-30 02:02:17 -07:00
6b3cf8c4b8 Update azure with az command (#6042)
As the download page[1] shows, the command name is "az", not "azure". This
replaces the "azure" command with the "az" command to fix that.
In addition, "az account list-locations" is the correct command line to
list the available locations, as [2] shows.

[1]: https://docs.microsoft.com/en-gb/azure/xplat-cli-install
[2]: https://docs.microsoft.com/en-us/cli/azure/account?view=azure-cli-latest#az-account-list-locations
2020-04-30 00:00:26 -07:00
e41766fd58 Fix broken Octavia integration in OpenStack External Cloud Provider (#6046) 2020-04-29 11:30:25 -07:00
e4c820c35e Add molecule tests to containerd role (#6037) 2020-04-29 09:08:25 -07:00
db5f83f8c9 update dashboard access doc for 2.0.x (#6036)
* update dashboard access doc for 2.0.x

* make metrics scraper system-cluster-critical
2020-04-29 07:20:25 -07:00
412d560bcf Add CI for 16x ubuntu servers (#6040) 2020-04-29 07:14:24 -07:00
a468954519 Fix default value for standalone tests (#6043) 2020-04-29 06:34:24 -07:00
a3d3f27aaa allow dns autoscaler limits to be specified via variables (#6020) 2020-04-28 23:34:25 -07:00
72b68c7f82 Update spray version in ci/dockerfile (#6041) 2020-04-28 23:26:25 -07:00
28333d4513 Fix crio runc path on Ubuntu (#6035) 2020-04-28 05:28:06 -07:00
ed8c0ee95a Add EppO to the reviewers group (#6034) 2020-04-28 11:21:09 +03:00
724a316204 Cinder-CSI default storageclass and volumeBindingMode (#6026)
* Set volumeBindingMode in cinder CSI template (#22)

* make sure true/false is lowercase in cinder-csi storageclass
2020-04-28 00:12:04 -07:00
d70cafc1e4 vagrant: Add Flatcart images (#6029) 2020-04-28 00:08:05 -07:00
18c8e0a14a rename mitogen playbook inside makefile (#6025) 2020-04-27 01:13:29 -07:00
3ff6a2e7ff Update default (erroneous) backend value for calico (#6031) 2020-04-27 00:03:39 -07:00
1ee3ff738e Add option to enable usage reports to calico servers (#6030) 2020-04-27 00:03:30 -07:00
52edd4c9bc Fix liveness probe for cilium operator (#6016) 2020-04-26 23:59:29 -07:00
d8345c5eae MetalLB IP address range extension (#6023)
* MetalLB IP address range extension

* MetalLB IP address range extension
2020-04-26 23:55:28 -07:00
98e7a07fba bump to dashboard 2.0.0 with metrics scraper support (#5821)
* bump to dashboard 2.0 rc6 with metrics scraper

* fix missing yaml separator making Replicaset complain about missing ServiceAccount

* remove unwanted legacy gross hack forgotten before

* no need for namespace on CrBinding

* bump to 2.0.0 release

* remove dashboard_metrics_scrapper_enabled
2020-04-25 03:55:28 -07:00
3d5988577a Support Cilium from version 1.5 (#6006) 2020-04-24 06:00:10 -07:00
69603aed34 add strategy mitogen_linear when installed mitogen (#5985)
* add strategy mitogen_linear when installed mitogen

* add small docs

Rename playbook file

The raw action executes as a regular Mitogen connection, which requires Python on the target, so add strategy: linear to bootstrap-os role playbook.

* add mitogen to CI test
fix typo

* enable mitogen test on deploy-part1 tests
change version from master to release
download tar.gz archive

* run all CI tests with mitogen

* disable mitogen with upgrade CI tests

* enable mitogen on CI tests via env vars

* disable mitogen on CI test by default, enable on some different OS

* disable mitogen CI test on centos8
(got error /usr/bin/python: No such file or directory)
2020-04-24 05:20:07 -07:00
299e35ebe4 Cleanup unused/erroneous variables (#6003) 2020-04-24 01:54:07 -07:00
6674be2572 Cleanup Vagrant VMs before molecule and vagrant CI (#6009) 2020-04-24 01:30:07 -07:00
cf1566e8ed Centos, debian and fedora CRI-O repo (#6008)
* replace removed repo with kubic repository for centos 7

* add crio configuration for centos8

* add crio configurations for debian

* use correct crio version for fedora

* simplify calculation of required crio version
- gives possibility to overwrite

* change default path for runc

* change default for seccomp path

* change default for conmon
2020-04-24 01:18:07 -07:00
c6d91b89d7 Update CONTRIBUTING.md (#6012) 2020-04-23 14:36:06 -07:00
b44f7957d5 Update CI matrix (#6010) 2020-04-23 09:51:11 -07:00
aead0e3a69 bump minimal ansible version to 2.8.0 (#5984)
* bump minimal ansible version to 2.8.0

* check ansible version in separate playbook
2020-04-22 13:33:44 -07:00
b0484fe3e5 Ubuntu crio repo (#5994)
* declare kubic repo for ubuntu

* do not install crictl twice

* move fedora repo modular tasks to crio_repo file

* move centos repo tasks to crio_repo

* declare crio version matrix for ubuntu

* update documentation crio support for ubuntu
2020-04-22 13:29:45 -07:00
b8cd9403df Fix nginx template missing latest changes (#6000) 2020-04-22 08:41:52 -07:00
d7df577898 k8s-dns-node-cache 1.15.12 was released (#5999) 2020-04-22 07:43:53 -07:00
09bccc97ba Add CRI-O CI (#5460) 2020-04-22 06:09:52 -07:00
1c187e9729 Downgrade coredns to 1.6.5 due to upgrade errors while migrating coredns configmap (Corefile) (#5960) 2020-04-22 05:27:52 -07:00
8939196f0d Verify apiserver version in CI (#5918) 2020-04-21 12:31:53 -07:00
15be42abfd Update path of all.yml on Azure README (#5993)
The cloud_provider option exists in ./inventory/sample/group_vars/all/all.yml.
In addition, the quick start shows how to create a configuration by copying
./inventory/sample. So this updates the path of all.yml to fit the above.
2020-04-21 07:21:04 -07:00
ca45d5ffbe Fix retries keyword missing until instruction (#5989) 2020-04-21 07:20:56 -07:00
2bec26dba5 Add proxy support to CRI-O service (#4607)
* Add proxy support to CRI-O service

The crio.service requires proxy environment variables when it's
deployed behind a corporate network. This change creates a systemd
configuration file when the proxy variables are defined.

* Remove unnecessary crio tasks
2020-04-21 04:12:55 -07:00
03c8d0113c Add vSphere external cloud provider (#5959) 2020-04-20 08:47:39 -07:00
536606c2ed Fix kube-proxy ds win nodeselector check for 1.17 (#5982)
* Fix kube-proxy ds nodeselector for older versions

* Fix for ansible-lint
2020-04-20 08:43:39 -07:00
6e29a47784 generate flannel manifest only on first master (#5983) 2020-04-20 01:33:38 -07:00
826a440fa6 Add floryut to reviewers (#5979) 2020-04-19 22:53:38 -07:00
baff4e61cf remove image flannel cni (#5980) 2020-04-19 06:13:37 -07:00
4d7eca7d2e Add Dockerfile for vagrant image (#5977) 2020-04-18 13:53:36 -07:00
32fec3bb74 Update minor version for tools (helm, busybox, registry etc...) (#5961) 2020-04-18 07:59:36 -07:00
3134dd4c0d Drop support for Fedora 28 and add Fedora 30 and 31 (#5969) 2020-04-18 06:35:36 -07:00
56a9c7a802 Add Vagrant CI (#5487) 2020-04-18 06:09:35 -07:00
bfa468c771 Ensure upgrade CI jobs are named correctly (#5909) 2020-04-18 06:05:36 -07:00
6318bb9f96 Return the ability to start control plain from the hyperkube image (#5422) 2020-04-18 05:59:36 -07:00
8618a3119b Fix selector check for windows (#5974) 2020-04-18 00:41:35 -07:00
27a268df33 Gather just the necessary facts (#5955)
* Gather just the necessary facts

* Move fact gathering to separate playbook.
2020-04-17 16:23:36 -07:00
7930f6fa0a Ensure /etc/sysconfig/proxy for openSUSE bootstrap (#5445)
The playbook that bootstraps openSUSE servers assumes that the
/etc/sysconfig/proxy file exists, but execution fails when
this file is not present. This change guarantees its existence.
2020-04-17 14:23:35 -07:00
49bd208026 Update hashes (1.18.2/1.17.5/1.16.9) and set default to 1.17.5 (#5967) 2020-04-17 06:55:07 -07:00
83fe607f62 Cleanup deprecated labels beta.kubernetes.io/arch and beta.kubernetes.io/os (#5964) 2020-04-17 05:51:06 -07:00
ea8b799ff0 Update link to deprecated repository (#5965)
https://github.com/colemickens/azure-kubernetes-status is deprecated
and will be removed soon as the README.
So this updates the link to the repository for a new one.
2020-04-17 04:07:07 -07:00
e2d6f8d897 Update packet.md (#5963)
The Terraform installation part states that is for CentOS 7, but the echo command refers to OS X binary. Updated the echo command to use the Linux version.
2020-04-16 13:07:07 -07:00
0924c2510c Use role to copy CNI bin (#5953) 2020-04-16 10:06:45 -07:00
065292f8a4 Terraform/OpenStack: Allow free form worker node definition (#5952)
* Terraform/OpenStack: Allow free form worker node definition

* fixup! Terraform/OpenStack: Allow free form worker node definition
2020-04-16 07:52:45 -07:00
35f248dff0 assembly fallback_ips and no_proxy var only one time on localhost and… (#5957)
* assembly fallback_ips and no_proxy var only one time on localhost and populate result on all hosts

* add tag always, fix ansible lint errors

* workaround to mitogen issue dw/mitogen#663

* do not gather fact before install python on coreos like distros

* try to pass docker molecule test
2020-04-16 07:22:47 -07:00
b09fe64ff1 Calculate inventory list only once (#5956) 2020-04-16 06:12:45 -07:00
54debdbda2 Generate unique username per cluster in client kubeconfig (#5943)
* Generate unique username per cluster

* rename admin kubeconfig shell output to raw_admin_kubeconfig

* Make the linter happy

* Fix lint errors

* Cleaning up tasks
2020-04-16 05:32:45 -07:00
b6341287bb Add Molecule to Docker role (#5129)
* Add Molecule for container-engine/docker

* Add bootstrap-os to Molecule prepare stage
2020-04-15 23:28:45 -07:00
6a92e34994 Update tests names (#5904) 2020-04-15 09:24:03 -07:00
00efc63f74 Customize PodSecurityPolicies from inventory (#5920)
* Customize PodSecurityPolicies from inventory

* Fixed yaml indentation
2020-04-15 03:18:02 -07:00
b061cce913 Allow configureable vni and port for flannel overlay (#5939) 2020-04-15 03:14:02 -07:00
c929b5e82e Upgrade kube-ovn to v1.1.0 and move test from centos7 to centos8 (#5852) 2020-04-15 03:10:03 -07:00
58f48500b1 Update Flannel manifests, install script and version (0.12) + fix tests scripts (#5937)
* Add CI_TEST_VARS to tests

* Update flannel to 0.12.0 (with new manifests) and disable tx/rx
offloading in networking test
2020-04-14 23:48:02 -07:00
b5125e59ab update rbac.authorization.k8s.io to non deprecated api-groups (#5517) 2020-04-14 13:14:04 -07:00
d316b02d28 else condition required otherwise AnsibleUndefinedVariable is triggered (#5722) 2020-04-14 07:06:12 -07:00
7910198b93 fix error in templating in local-path-provisioner (#5950) 2020-04-14 06:52:12 -07:00
7b2f35c7d4 Update vars.md (#5947)
Add the `container_manager` variable as a Cluster Variable in the global Docs
2020-04-13 23:11:10 -07:00
45874a23bb Remove 1.16.x flag for packet_centos7-weave-kubeadm-sep (#5907) 2020-04-11 00:15:48 -07:00
9c3b573f8e Cleanup fedora coreos with crio container (#5887)
* fix upgrade of crio on fcos
- update documents

* install conntrack required by kube-proxy
- like commit 48c41bcbe7

* enable fedora modular repo for crio

* allow to override crio configuration
- set cgroup manager same to kubelet_cgroup_driver if defined
- path of seccomp_profile depends on distribution

* allow to override crio configuration
- fix path for ubuntu

* allow to override crio configuration
- fix cni path for fcos
2020-04-10 23:51:47 -07:00
7d6ef61491 Fix metallb speaker when podsecuritypolicy_enabled=true (#5932) (#5933) 2020-04-10 23:48:03 -07:00
6a7c3c6e3f Upgrade terraform version to 0.12.24 (#5928) 2020-04-10 23:47:56 -07:00
883194afec Fix Cilium permissions (#5923)
* added required permissions for querying endpointslice resources

* copy-pasted role permissions from cilium install manifests

* bumped cilium version to v1.7.2
2020-04-10 23:47:48 -07:00
3a63aa6b1e downgrade nodelocaldns version due bug with flood to error log (#5931)
https://github.com/kubernetes/kubernetes/issues/90043
2020-04-10 23:41:55 -07:00
337499d772 Remove hashes only for EOL version in RELEASE cycle. (#5924) 2020-04-10 23:41:47 -07:00
82123f3c4e Upgrade azure csi and fix aws csi tag (#5938) 2020-04-10 17:53:47 -07:00
8f3d820664 always download docker image on download_host when download_run_once=true (#5921) 2020-04-10 01:59:47 -07:00
7d812f8112 Set LANG in Dockerfile (#5929) 2020-04-10 01:25:46 -07:00
473a8beff0 Remove hard-coded dependance to docker.service in kubelet.service file (#5917) 2020-04-09 08:43:46 -07:00
0d675cdd1a Update Calico to v3.13.2, Multus to v3.4.1. Add ConfigMap get permission to allow calico-node access to kubeadm config. (#5912) 2020-04-09 07:27:43 -07:00
9cce46ea8c Fix idempotence issue in bootstrap-os (#5916) 2020-04-09 03:31:44 -07:00
2e67289473 Terraform/OpenStack: Fix idempotency bug in module.network.openstack_networking_router_interface_v2.k8s[0] (#5914) 2020-04-09 02:27:44 -07:00
980aeafebe Add kubernetes 1.18.1 hashes (#5915) 2020-04-09 01:53:43 -07:00
7d1ab3374e Proxy fixes (#5869)
* Fix proxy and module_hotfixes

On CentOS 8 with proxy ansible render inline `proxy` and `module_hotfixes` options.

For example:

`proxy=http://127.0.0.1:3128module_hotfixes=True`

But expected result:

```
proxy=http://127.0.0.1:3128
module_hotfixes=True
```

* Use ini_file module for work with ini files

* Prevent duplicates proxy= option in /etc/yum.conf

Module `lineinfile` is weak, use most powerful module `ini_file` and add or remove `proxy=` when `http_proxy` is defined or not.
2020-04-09 01:25:44 -07:00
01b9b263ed Remove 1.16.x flag for tf-ovh_coreos-calico (now 1.17 ready) (#5853) 2020-04-08 10:57:44 -07:00
c33a049292 Update docker RHEL/CentOS versions to the latest patch versions available. (#5872) 2020-04-08 10:09:45 -07:00
7eaa7c957a Fix conntrack for opensuse and docker support (#5880) 2020-04-08 07:37:44 -07:00
f055ba7965 Add crictl 1.18.0 hashes for k8s 1.18 (#5877) 2020-04-08 02:19:43 -07:00
157c247563 fix readonly flexvolume in fcos and coreos (#5885) 2020-04-08 01:41:43 -07:00
a35b6dc1af Fix scaling (#5889)
* etcd: etcd-events doesn't depend on etcd_cluster_setup

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* etcd: remove condition already present on include_tasks

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* etcd: fix scaling up

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* etcd: use *access_addresses, do not delegate to etcd[0]

We want to wait for the full cluster to be healthy,
so use all the cluster addresses
Also we should be able to run the playbook when etcd[0] is down
(not tested), so do not delegate to etcd[0]

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* etcd: use failed_when for health check

unhealthy cluster is expected on first run, so use failed_when
instead of ignore_errors to remove scary red messages

Also use run_once

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* kubernetes/preinstall: ensure ansible_fqdn is up to date after changing /etc/hosts

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* kubernetes/master: regenerate apiserver cert if needed

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-04-08 01:27:43 -07:00
910a821d0b Fix chicken and egg problem with proxy_env not defined on the first … (#5896)
* Fix chicken and egg problem with proxy_env not defined on the first envinronment usage.

* Disable fact gathering for the first proxy_env evaluation.

* Move proxy_env var set up from the role defaults to the root playbooks as fact.
2020-04-08 00:53:43 -07:00
2c21e7bd3a make explicit that doc is at kubespray.io (#5878) 2020-04-08 00:19:43 -07:00
45a177e2a0 add local-path-provosioner helper image def (#5817) 2020-04-07 23:51:43 -07:00
0c51352a74 remove unused kubelet options (#5903) 2020-04-07 11:51:44 -07:00
9b1980cfff Change docker.io repo to variable and upgrade alb image (#5898) 2020-04-07 08:07:42 -07:00
ae29296e20 Replace latest tags for csi drivers (#5899) 2020-04-07 06:55:44 -07:00
75e743bfae CentOS 8 CI (#5842)
* requirements.txt: Bump versions

Ansible 2.8+ allow ansible_python_interpreter autodetection

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* tests: do not force ansible_python_interpreter

we do not expect people to set ansible_python_interpreter, so we should not set it in the CI

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* Add CentOS 8 Calico to CI

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-04-07 05:49:43 -07:00
2f19d964f6 Bump requirements.txt versions / remove ansible_python_interpreter hack (#5847)
* requirements.txt: Bump versions

Ansible 2.8+ allow ansible_python_interpreter autodetection

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* tests: do not force ansible_python_interpreter

we do not expect people to set ansible_python_interpreter, so we should not set it in the CI

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-04-07 01:47:44 -07:00
0d2990510e Terraform/OpenStack: Enable usage of an existing router (#5890) 2020-04-06 02:41:46 -07:00
e732df56a1 Move packet_centos7-calico-ha-once-localhost to the appropriate CI stage (#5881) 2020-04-02 02:02:24 -07:00
2f92d6bca3 Upgrade coredns to 1.6.9 (#5871) 2020-04-01 10:58:23 -07:00
c72903e8d6 Update release policy (#5749) 2020-04-01 07:29:28 -07:00
ded58d3b66 Add molecule test for bootstrap-os (#5845) 2020-04-01 07:25:28 -07:00
be9414fabe Add cluster dump artifact in CI jobs (#5796) 2020-04-01 07:23:29 -07:00
033afe1574 Fix Docker in Docker CI jobs (#5867) 2020-04-01 07:19:28 -07:00
c35461a005 Add checksums for v1.18.0 (#5843)
* Add checksums for v1.18.0

* Add crictl version for k8s 1.18
2020-04-01 06:41:28 -07:00
a93421019b Upgrade ingress-nginx to 0.30.0 (#5870) 2020-04-01 03:57:28 -07:00
4fd3e2ece7 Fix download_run_once in packet_ubuntu18-flannel-containerd-once (#5864) 2020-04-01 03:15:28 -07:00
937adec515 Azure Disk CSI deployment (#5833)
* Azure Disk CSI deployment

* Mention Azure CSI support

* Fix: remove unnecessary file

* Typo in documentation

* Add newline to end of file
2020-04-01 00:53:27 -07:00
bce3f282f1 fix systemd cgroup driver for containerd (#5220) 2020-04-01 00:43:26 -07:00
f8ad44a99f Azure vmss - kubelet: failed to get instance ID from cloud provider: instance not found #5824 (#5855)
* kubernetes-sigs-kubespray #5824

Added support nodes which are part of Virtual Machine Scale Sets(VMSS)

* kubernetes-sigs-kubespray #5824

* kubernetes-sigs-kubespray #5824

Added comments and updatetd azure docs.

* kubernetes-sigs-kubespray #5824

Added supported values comments for "azure_vmtype" in azure.yml
2020-03-31 10:12:40 -07:00
7ee2f0d918 Hide after_script output if return code is zero (#5862) 2020-03-31 05:28:40 -07:00
9cbb373ae2 Update base CI image to v2.12.5 (#5858) 2020-03-31 01:28:40 -07:00
484df62c5a GCP Persistent Disk CSI Driver deployment (#5857)
* GCP Persistent Disk CSI Driver deployment

* Fix MD lint

* Fix Yaml lint
2020-03-31 00:06:40 -07:00
79a6b72a13 Removed deprecated label kubernetes.io/cluster-service (#5372) 2020-03-30 01:19:53 -07:00
d439564a7e disable gpgcheck if gpgkey is empty (#5621)
Signed-off-by: Chris Randles <randles.chris@gmail.com>
2020-03-30 01:13:53 -07:00
b0a5f265e3 Honor bastion host config from inventary (#5522)
Before this commit, the bastion entry in the inventary was not honored,
so machines behind firewalls or with unrouted addresses were not
reachable for ansible.
2020-03-30 01:11:53 -07:00
8800eb3492 Remove unicode chars from coredns template (#5848) 2020-03-27 11:39:54 -07:00
09308d6125 Upgrade to Kubernetes 1.174 (#5628)
* Upgrade to Kubernetes 1.17.4 - change defaults

* Update ci jobs to previous k8s release (will fix them afterward)
2020-03-27 07:40:23 -07:00
a8822e24b0 Fix terraform formatting (#5823) 2020-03-27 05:46:24 -07:00
a60e4c0a3f Remove unused kubeadm_enabled variable (#5838) 2020-03-27 04:58:23 -07:00
b2d740dd1f Add Ubuntu 20.04 RC image and test job (#5836) 2020-03-27 02:14:23 -07:00
3237b2702f Add config coredns_external_zones (#5280)
Allows to add custom zone resolving servers.
2020-03-26 23:34:23 -07:00
e8c49b0090 Improve curl invocation (#5844)
- make it follow redirects
- error out if an HTTP error is encountered
2020-03-26 23:12:23 -07:00
3dd51cd648 Add moreutils in Dockerfile (#5839) 2020-03-26 13:58:23 -07:00
e03aa795fa Move long running jobs into separate CI stage (#5837) 2020-03-26 13:56:24 -07:00
a8a05a21a4 AWS EBS CSI implementation (#5549)
* AWS EBS CSI implementation

* Fixing image repos

* Add OWNERS file

* Fix expressions

* Add csi-driver tag

* Add AWS EBS prefix to variables

* Add AWS EBS CSI Driver documentation
2020-03-25 13:10:25 -07:00
63fa406c3c Move host_architecture to kubespray-defaults (#5811)
The variable is defined in `kubernetes/preinstall` role and used in several roles. Since `kubernetes/preinstall` is not always included when `ansible-playbook` is run with tag selectors (see #5734 for reason), they will fail, or individual roles must copy the same fact definitions (as in #3846). Moving the definition to the always-included `kubespray-defaults` role will resolve the dependency problem.
2020-03-25 12:58:25 -07:00
6ad6609872 Fix certificates checking when adding etcd node to existing k8s node (#5807)
Co-authored-by: alexkomrakov <alexkomrakov@gmail.com>
2020-03-25 12:46:25 -07:00
474fbf09c4 fix wrong cilium_operator repo variable (#5819) 2020-03-25 02:17:03 -07:00
47849b8ff7 docker: Fix docker install on CentOS/RHEL 8 (#5820)
we can't set module_hotfixes=True using yum_repository ansible module
Fixes 38688a4486
(keep docker-ce.repo name)

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-03-25 01:03:03 -07:00
0379a52f03 Fix etcd install with docker and etcd_kubeadm_enabled (#5777)
- This solves issue #5721 & #5713 (dupes)
  - Provide a cleaner default usage pattern for the download role
    around etcd that supports 'host' and 'docker' properly
  - Extract the 'etcdctl' as a separate task install piece and reuse it where
    appropriate
  - Update the kubeadm-etcd task to reflect the above change
2020-03-24 08:12:47 -07:00
bc2eeb0560 use variables for cilium-operator instead of hardcoded value (#5802) 2020-03-24 07:40:47 -07:00
81f07c3783 Disable IPv6 support for canal's calico-node (#5684)
This implements the same behavior as a15a0b5eb9/roles/network_plugin/calico/templates/calico-node.yml.j2

More info: https://github.com/projectcalico/felix/issues/1447
2020-03-24 07:10:49 -07:00
f90926389a Fix wrong Docker ubuntu repo URL (#5815) 2020-03-24 04:36:46 -07:00
dcb97e775e Fix broken internal links (#5799) 2020-03-20 15:40:44 -07:00
096de82fd9 Fixup recover_control_plane with Ansible 2.9 (#5806)
Tests as filters support is removed as of Ansible 2.9
https://docs.ansible.com/ansible/latest/porting_guides/porting_guide_2.5.html#jinja-tests-used-as-filters

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-03-20 14:22:06 -07:00
db693d46df Fixed an issue where without a default GW, ansible_default_ipv4 would return an empty dict which passed as a valid fallback_ip dict item (#5394) 2020-03-20 14:06:09 -07:00
b8d628c5f3 rename handler to fix ansible 2.8 issue (#5801) 2020-03-20 13:54:08 -07:00
0aa22998e2 Bump node local dns version to 1.15.11 (#5805)
k8s-dns-node-cache now uses debian-iptables base images
to automatically use either iptables-legacy or iptables-nft
https://github.com/kubernetes/dns/pull/355
https://github.com/kubernetes/kubernetes/pull/82966

This adds support CentOS 8

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-03-20 13:44:09 -07:00
afe047a77f Add documentation for scripts/openstack-cleanup (#5803) 2020-03-20 13:18:06 -07:00
1ae794e5e4 Add script to cleanup old gitlab branches (#5795) 2020-03-20 13:16:06 -07:00
a7a204ebca Add kube_encryption_resources variable to configure which resources are encrypted at rest (#5797) 2020-03-20 04:14:36 -07:00
8774d7e4d5 Fix ERROR! the playbook: tests/testcases/020_check-nodes-ready.yml could not be found (#5798) 2020-03-20 01:14:35 -07:00
34e51ac1cb Add a test to check that nodes are Ready (#5793) 2020-03-19 04:09:14 -07:00
nmr
d152dc2e6a Update prep_download.yml (#5791)
Fix check if user can use docker without sudo.
2020-03-18 13:30:44 -07:00
8ce5a9dd19 remove atomic support because reached end of live (#5783) 2020-03-17 14:31:27 -07:00
820d8e6ce6 Adding new registry_port option (#5779)
New override are added to allow installation of the registry
on different ports than ``5000``. The default port is unchanged
from previous versions
2020-03-17 05:52:22 -07:00
3cefd60c37 Add OWNERS file for kube-router (#5782)
I propose also my help as a reviewer
2020-03-17 04:14:22 -07:00
876d4de6be Fedora CoreOS support (#5657)
* fedora coreos support
- bootstrap and new fact for

* fedora coreos support
- fix bootstrap condition

* fedora coreos support
- allow customize packages for fedora coreos bootstrap

* fedora coreos support
- prevent install ptyhon3 and epel via dnf for fedora coreos

* fedora coreos support
- handle all ostree like os in same way

* fedora coreos support
- handle all ostree like os in same way for crio

* fedora coreos support
- add fcos documentations
2020-03-17 03:12:21 -07:00
974902af31 Update Kube-router version to v0.4.0 (#5756) 2020-03-17 02:40:21 -07:00
45626a05dc fix pip requirements version (#5174)
Because using python Program create inventory it will happen error, thus I change python pip version to install kubespray requirements.
2020-03-16 05:10:35 -07:00
4b5299bb7a Add variables to configure Containerd default runtime, untrusted runt… (#5497)
* Add variables to configure Containerd default runtime, untrusted runtime and additional runtimes

* Add containerd settings to sample inventory

* Empty commit
2020-03-16 03:48:36 -07:00
ceab27c97a Add OWNERS file for recover_control_plane (#5505)
Related to #5432
2020-03-16 03:46:35 -07:00
03d1b56a8f fix check exists download cache (#5776) 2020-03-16 03:34:35 -07:00
64190dfc73 Fix deploy heketi show selector missing error. (#5738) 2020-03-16 03:32:36 -07:00
29128eb316 Add AWS ALB Ingress Controller (#5489)
* Add AWS ALB Ingress Controller Ansible role

* remove trailing spaces

* update owners

* ALB ingress: update rbac clusterrole and remove role

* Move alb-ingress role to roles/kubernetes-apps/ingress_controller folder
2020-03-16 02:58:35 -07:00
ea9f8b4258 Add document about adding/replacing a node (#5570)
* Add document about adding/replacing a node

* Update nodes.md

Amend for comments
2020-03-15 03:32:34 -07:00
1cb03a184b kubernetes 1.15.11 (#5775) 2020-03-14 07:16:34 -07:00
158d998ec4 Support configuring the Calico iptables insert mode (#5473)
* Support configuring the insert mode

Defaults to the upstream default https://docs.projectcalico.org/v3.9/reference/felix/configuration

so nothing should change for existing deployments.

This allows coexistence with other firewall management technologies.

* Add a note to the sample config
2020-03-14 06:36:35 -07:00
168241df4f Python bootstrap: upgrade pypy to 3.6-7.2.0. (#5511)
Solves problem with mitogen about 'Compress object has no attribute copy' in zlib module.
2020-03-14 06:32:35 -07:00
f5417032bf Merge OracleLinux in RedHat bootstrap-os (#5575)
* Merge OracleLinux in RedHat bootstrap-os

* Set default for use_oracle_public_repo in main.yaml
2020-03-14 06:28:34 -07:00
d69db3469e Add external zones in nodelocaldns configuration (#5591)
Allows to configure additionnal zone for domains not resolved by `upstream_dns_servers`.
2020-03-14 06:26:34 -07:00
980a4fa401 Add docker-ce 19.03 packages for Debian & Ubuntu (#5729)
* Add docker-ce 19.03 packages for Debian & Ubuntu

K8s has updated the recommended Docker version to 19.03. More
specifically it should be 19.03.4, but since we used 18.06.7 instead of
.2, I'm assuming the latest patch version should be used here as well.

* Add docker 19.03 for redhat
2020-03-14 06:24:35 -07:00
027e2e8a11 Update CoreDNS to 1.6.7 (#5761) 2020-03-14 04:20:34 -07:00
dcfda9d9d2 Change python crypto module from pycrypto to cryptography (#5769) 2020-03-14 03:30:34 -07:00
ca73e29ec5 Use k8s.gcr.io for kubernetes related images (#5764)
* Use k8s.gcr.io for kubernetes related images

* Use k8s.gcr.io in inventory sample
2020-03-13 14:41:48 -07:00
0330442c63 Kubernetes 1.16.8 (#5770)
* Kubernetes 1.16.8

* Use 1.16.8 in sample inventory and kubespray-defaults
2020-03-13 13:41:47 -07:00
221c6a8eef Use a separate runner for light CI jobs (#5771) 2020-03-13 20:29:22 +03:00
25a1e5f952 Include etcd image repository when using kubeadm etcd deployment mode (#5725) 2020-03-13 10:28:39 -07:00
38df80046e CI inventory should start at 1 instead of 0 (#5763) 2020-03-13 10:22:39 -07:00
57bb7aa5f6 Fix delete nodes task (#5747) 2020-03-13 08:36:40 -07:00
86996704ce remove unused crictl hashes (#5754) 2020-03-13 06:56:40 -07:00
f53ac2a5a0 Update metrics addon for 1.16 (#5706)
* upgrade metrics server and resizer images version

* scope "apps" api group for addon resizer
2020-03-13 06:46:40 -07:00
d0af5979c8 install csi-driver not just cinder (#5766) 2020-03-13 05:34:39 -07:00
43020bd064 Fix the command for kube-proxy cleanup (#5671) 2020-03-13 05:32:39 -07:00
dc00b96f47 Add missing Coreos OS family string (#5759) 2020-03-13 04:24:39 -07:00
71c856878c update multus to 3.4 and add crio support (#5701)
Signed-off-by: Chris Randles <randles.chris@gmail.com>
2020-03-13 04:22:39 -07:00
19865e81db Add OWNERS file for OpenStack CSI driver and cloud controller (#5753) 2020-03-13 02:52:39 -07:00
a4258b1244 Add automatic cleanup of OpenStack CI VMs (#5760) 2020-03-12 15:12:39 -07:00
e0b76b185a Failover for adding proxy when line exists in file (#5751)
The 'regexp' parameter matches last occurrence of a line starting with 'proxy=' and replaces it with the one defined in 'line' parameter. If no match - it works same way as before. This fixes resuming cluster deployments failed after that task (if there was no more than one line starting with 'proxy' in the yum.conf file - this condition should also be reassured with the change introduced here) eg. if they were initiated with Terraform.
2020-03-12 15:08:39 -07:00
c47f441b13 fix kube-proxy server address when local apiserver lb is disabled (#5730)
refs #5277

As the issue describes, when no external or local load-balanced is used,
kube-proxy won't be able to contact apiserver at 127.0.0.1. So the
config map should be left as is.
2020-03-12 10:40:39 -07:00
7c854a18bb Enable retries on SSH error during CI (#5755) 2020-03-12 10:10:39 -07:00
8df2c0a7c6 Upgrade CNI plugins to 0.8.5 (#5717) 2020-03-12 07:22:38 -07:00
e60b9f796e add calico VXLAN mode, update docs and vars in sample inventory (#5731)
* calico VXLAN mode

* check vars if calico backend defined
2020-03-12 01:20:37 -07:00
2c8bcc6722 Upgrade etcd to 3.3.12 (#5718)
* Upgrade etcd to 3.3.18

* Try with etcd 3.3.15 (kubeadm 1.16.7 default)

* Back to square one

* Try with 3.3.11

* Upgrade etcd to 3.3.18 (take 2)

* Try with 3.3.12
2020-03-11 08:25:38 -07:00
e257d92f41 Cilium updates (#5438)
* Add resources needed to deploy 1.6.4

* Use cilium v1.6.4

* Change deprecated option name

* Add update crd to clusterrole cilium

* Cilium 1.6.4 -> 1.6.5

* Make monitor-aggregation config configurable as a variable

* Change monitor-aggregation default none->medium

* Cilium 1.6.5 -> 1.6.6

* Update to 1.7.0

* v1.7.0->v1.7.1
2020-03-11 08:15:36 -07:00
f697338eec [Openstack] Install Cinder-CSI before first node is schedulable again (#5735)
* install cinder-csi before upgrading nodes

* Only run the Cinder CSI when enabled
2020-03-11 06:31:36 -07:00
e2ec7c76a4 containerd: bump to 1.2.13 (#5727)
https://github.com/containerd/containerd/releases/tag/v1.2.11
CVE-2019-16884 / CVE-2019-17596

https://github.com/containerd/containerd/releases/tag/v1.2.12
CVE-2019-19921 / CVE-2019-16884 / CVE-2019-11253

https://github.com/containerd/containerd/releases/tag/v1.2.13

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-03-11 05:39:36 -07:00
058d101bf9 Escape dots in jsonpath keys. (#5600)
+ use more secure `command` instead of `shell`
+ read-only command doesn't change state - make idempotent
+ multi-line long string
2020-03-11 05:17:36 -07:00
833794feef [Openstack] Cleanup the old in-tree openstack cloud provider (#5742)
* Added playbook to migrate openstack cloud provider

* remove old cloud provider config

* Rewrite provisioned-by annotation on Cinder PVs

* update indents

Co-authored-by: Jonathan Süssemilch Poulain <jonathan@sofiero.net>
2020-03-11 05:13:36 -07:00
a3dedb68d1 [Openstack] Make it possible to apply the new cloud provider during upgrade (#5707)
* run external cloud provider setup during upgrade

* change name of taskt to better reflect what it does

* fix typo
2020-03-11 05:11:36 -07:00
4a463567ac [Openstack] A guide on how to replace the in-tree cloudprovider with the external one (#5741)
* add documentation for how to upgrade to the new external cloud provider

* add migrate_openstack_provider playbook

* fix codeblock syntax highligth

* make docs for migrating cloud provider better

* update grammar

* fix typo

* Make sure the code is correct markdown

* remove Fenced code blocks

* fix markdown syntax

* remove extra lines and fix trailing spaces
2020-03-11 05:09:35 -07:00
9f3ed7d855 change ignore_errors: to when: in assert tasks (#5716) 2020-03-10 08:09:36 -07:00
221b429c24 move var preinstall_selinux_state: to roles/kubespray-defaults/defaults/main.yaml (#5715) 2020-03-10 07:45:35 -07:00
b937d1cd9a Bump ansible from 2.7.12 to 2.7.16 (#5739)
Bumps [ansible](https://github.com/ansible/ansible) from 2.7.12 to 2.7.16.
- [Release notes](https://github.com/ansible/ansible/releases)
- [Commits](https://github.com/ansible/ansible/compare/v2.7.12...v2.7.16)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-03-09 06:31:34 -07:00
986c46c2b6 Check ansible version >=2.7.8 in recover-control-plane.yml playbook (#5724) 2020-03-07 10:17:34 -08:00
e029216566 Update security contacts (#5719) 2020-03-06 10:47:24 -08:00
roc
2d4595887d Fix youtube url (#5582) 2020-03-06 02:13:22 -08:00
2beffe688a Make etcdctl connect to localhost out of the box (#5643)
* Make etcdctl connect to localhost out of the box

* etcdctl envs: use admin-.pem instead of member-.pem
2020-03-06 02:05:23 -08:00
66408a87ee Refactor download role (#5697)
* download file

* download containers

* fix push image to nodes

* pull if none image on host

* fix

* improve docker image tag checks.
do not pull already cached images

* rebase fix merge conflict

* add support download_run_once when upgrade and scale cluster
add some test with download_run_once

* set default values to temp flag for every download cycle

* add save,load abilty for containerd and crio when download_run_once=true

* return redefine image save/load command to  set_docker_image_facts.yml

* move set command to set_container_facts

* ctr in containerd_bin_dir

* fix order of ctr image export arguments

* temporary disable download_run_once for containerd and crio
due https://github.com/containerd/containerd/issues/4075

* remove unused files

* fix strict yaml linter warning and errors

* refactor logical conditions to pull and cache container images

* remove comment due lint check

* document role

* remove image_load_on_localhost, because cached images are always loaded to docker on remote sites

* remove XXX from debug output
2020-03-05 07:31:39 -08:00
62b418cd16 Use 'k8s.gcr.io' instead of 'gcr.io/google-containers' (#5709)
Ref: kubernetes/kubeadm/issues/2051

See: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kubernetes-sig-release/ew-k9PEBckQ/T7dFepHdCAAJ

Signed-off-by: Nguyen Hai Truong <truongnh@vn.fujitsu.com>
2020-03-05 05:44:37 -08:00
5361cc075d Use the v2.12.3 docker image for CI (#5712) 2020-03-05 05:40:37 -08:00
be12164290 Add option and defaults to configure metrics exporting in containerd (#5466)
* Add metrics exporting in containerd config

* Add containerd.yml with containerd configuration example to the sample group_vars
2020-03-04 14:46:38 -08:00
588896712e Fix kube-router config generation (#5531)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-03-04 02:11:47 -08:00
6221b94fdf Fix variable naming bug in OpenStack CCM (#5702) 2020-03-03 06:45:38 -08:00
efef80f67b Add support for HA deployment of OpenStack Cinder CSI plugin (#5691) 2020-03-03 06:33:38 -08:00
0c1a0ab966 implement max-volumes for cinder csi (#5666) 2020-03-02 03:30:43 -08:00
678ed5ced5 fix upgrade procedure when in playbook (#5695)
exists role kubernetes/preinstall and not exists role container-engine

 error 'yum_repo_dir' is undefined
2020-02-28 01:56:38 -08:00
7f87ce0362 Upgrade container-engine after draining (#5601)
* Run 'container-engine' after drain.

Move possibly disruptive role 'container-engine' to run after the node
is drained.

As that role have to be run on non-cluster nodes as well (etcd and
calico-rr), and those nodes are not drained, add play for that case.

* Check if api is up before upgrade.

If container engine is restarted in previous role, api controller can
take some time to start. This check ensures api is up before upgrade.
2020-02-27 11:47:28 -08:00
d1acf7f192 Add additional configuration options to external Openstack CCM (#5661)
- Add support for manage-security-groups flag
- Add support for internal-lb flag
2020-02-26 13:03:19 -08:00
171d2ce59c Implement topology support for Cinder CSI (#5667)
* make cinder csi topology aware

* change feature description do better reflect whats being done

* remove sameas true since it isn't required
2020-02-26 05:12:25 -08:00
c6170eb79d docs: fix some typos (#5618)
Although it is spelling mistakes, it might make affect while reading.

Signed-off-by: Nguyen Hai Truong <truongnh@vn.fujitsu.com>
2020-02-26 04:46:28 -08:00
d2c44dd4df Modifying Ansible filter 'failed' according to Ansible 2.5 porting guide (#5678) 2020-02-26 00:18:26 -08:00
9b7090ca1d add mangle table in the iptable flush task (#5672)
When kube-router is used as cni, rules might be added to the mangle table
to support external IPs. Therefore, mangle table should be flushed during
reset as well.
2020-02-26 00:04:26 -08:00
ee8e88b111 Require a complete inventory with all variables in the bugreport (#5655) 2020-02-25 23:58:25 -08:00
a901b1f0d7 convert volumes to dynamic blocks, openstack (#5673) 2020-02-24 01:20:49 -08:00
82efd95901 Remove dockerproject_.+_repo_.+ variables (#5662)
This 38688a4486 change replaces the
value for dockerproject_.+_repo_.+ docker variables but their new
value was previously defined in other variables. This change removes
the dockerproject_.+_repo_.+ docker variables in favor of the older
ones.
2020-02-22 13:28:47 -08:00
4c803d579b @ #5008 | Local path provisioner boolean annotation is rendered incorrectly and not applied (#5669) 2020-02-22 07:08:47 -08:00
b34ec6c46b Enhance ha document (#5664)
* Fix HAproxy config to avoid client offered only unsupported versions error

* Add HAproxy SSL check interval

* Fix ha mode document markdown
2020-02-22 07:04:47 -08:00
6368c626c5 Ignore assertion comparison for kube_network_node_prefix when using calico (#5632)
* Fix incorrect assertion comparison for kube_network_node_prefix

* Ignore assertion comparison for kube_network_node_prefix when using calico

* Adding more var docs description for kube_network_node_prefix

* Fixing trailing whitespaces
2020-02-20 00:39:03 -08:00
a5445d9c5c Add stable repo on all masters with helm 3.x.x (#5659) 2020-02-19 14:05:46 -08:00
da86457cda remove unused var 'kube_apiserver_admission_control' (#5648) (#5651) 2020-02-19 05:08:25 -08:00
eb00693325 Do not display skipped hosts/tasks. (#5620)
Replace deprecated callback plugin `skippy` with `default`, which
also supports ignoring skipped hosts.
2020-02-19 02:38:25 -08:00
a15a0b5eb9 Make calico iptables lock timeout configurable (#5658)
Adds `calico_iptables_lock_timeout_secs` variable to calico DS yaml.
2020-02-19 02:28:25 -08:00
646fd5f47b External OpenStack Cloud Controller Manager implementation (#5491)
* External OpenStack Cloud Controller Manager implementation

* Adding controller image tag

* Minor fixes

* Restructuring the external cloud controller to work with KubeADM
2020-02-18 04:47:28 -08:00
277b347604 add az_list_node variable to specify different AZs for kubelets (#5413)
* rebase and add az_list_node variable to specify different AZs for kubelets

* fix missing variable name change
2020-02-18 04:29:27 -08:00
12bc634ec3 helm default version 3.1.0 (#5634)
* helm default version 3.1.0

* fix newline
try to retest2
2020-02-18 03:21:29 -08:00
769e54d8f5 Fix a typo in integration.md (#5616) 2020-02-18 02:29:29 -08:00
ad50bc4ccb Cache facts for 2 hours (#5633)
Sets a 2 hour timeout value for facts caching.
2020-02-18 01:31:28 -08:00
0ca7aa126b added "Flatcar", "Flatcar Container Linux by Kinvolk" for all coreOS role (#5607) 2020-02-18 00:15:29 -08:00
d0d9967457 Fix to Vagrant config.rb apply correctly (#5525) 2020-02-18 00:13:28 -08:00
b51b52ac0e Fixing and issue where if the pids in the orphan list no longer exist then all systemd child processes would be killed. (#5636) 2020-02-17 09:33:29 -08:00
36c1f32ef9 remove legacy docker repo in kubernetes/preinstall before any packages installed (#5640) 2020-02-17 08:59:28 -08:00
fa245ffdd5 Fix some minor issues with the Cinder CSI plugin (#5561)
Add Cinder images to download role
2020-02-17 03:47:28 -08:00
f7c5f45833 Ability to define plugins.cri.containerd params (#5624)
* Ability to define plugins.cri.containerd params

* addition of containerd field commented as an example

* documentation of containerd_config
2020-02-17 02:15:29 -08:00
579976260f Added in code to allow control over pull policy for local path provis… (#5334)
* Added in code to allow control over pull policy for local path provisioner

* change to imagePullPolicy to use globally used variable k8s_image_pull_policy

* removed unusued variable from defaults

* updated contiv-etcd and cinder-csi-controllerplugin to use k8s_image_pull_policy variable
2020-02-17 02:13:30 -08:00
d56e9f6b80 Fix Cinder CSI bugs (#5492) 2020-02-17 01:49:28 -08:00
57b0b6a9b1 update Calico CNI description (#5523) 2020-02-17 01:47:28 -08:00
640190217d UPdate docs to match actuall required settings to perform an unsafe upgrade using cluster.yml playbook. Relates to https://github.com/kubernetes-sigs/kubespray/issues/4736 and https://github.com/kubernetes-sigs/kubespray/issues/4139 (#5609) 2020-02-17 01:45:29 -08:00
a08f485d76 updated links in the PR template (#5614) 2020-02-17 12:16:35 +03:00
f6b66839bd Use 'private_dns' as hostname in inventory file (#5463) 2020-02-17 00:59:28 -08:00
26700e7882 kubelet_config_extra_args and kubelet_node_config_extra_args (#5623)
* Introduce kubelet_config_extra_args and kubelet_node_config_extra_args to pass params to kubelet via YAML config

* kubelet_config_extra_args is not the alternative
2020-02-14 16:05:28 -08:00
d86229dc2b Upgrade cri-tools (crictl) to 1.17.0 (#5629) 2020-02-14 02:50:17 -08:00
f56171b513 Remove old features gates (#5608) 2020-02-14 02:24:17 -08:00
516e9a4de6 Securing http link to https link (#5617)
Fix http link to https link for security

Signed-off-by: Nguyen Hai Truong <truongnh@vn.fujitsu.com>
2020-02-13 14:46:17 -08:00
765d907ea1 added reference to calico_ip_auto_method in sample inventory group vars (#5612) 2020-02-13 13:18:36 -08:00
287421e21e Set helm 3.0 as default (#5503)
* set helm 3.0 as default

* remove trainling space in vars.yml

* switched to helm 3.0.3
2020-02-13 02:18:35 -08:00
2761fda2c9 Update bug-report.md (#5585) 2020-02-13 01:34:35 -08:00
339e36fbe6 Files to archive can be passed directly (#5571) 2020-02-12 07:50:51 -08:00
5e648b96e8 Fix default value of kube_api_server_endpoint (#5529)
Signed-off-by: Arthur Outhenin-Chalandre <arthur@cri.epita.fr>
2020-02-11 01:40:01 -08:00
ac2135e450 Fix recover-control-plane to work with etcd 3.3.x and add CI (#5500)
* Fix recover-control-plane to work with etcd 3.3.x and add CI

* Set default values for testcase

* Add actual test jobs

* Attempt to satisty gitlab ci linter

* Fix ansible targets

* Set etcd_member_name as stated in the docs...

* Recovering from 0 masters is not supported yet

* Add other master to broken_kube-master group as well

* Increase number of retries to see if etcd needs more time to heal

* Make number of retries for ETCD loops configurable, increase it for recovery CI and document it
2020-02-11 01:38:01 -08:00
68c8c05775 improve documentation about user account and connecting to API (#5415)
* improve documentation about user acct and connecting to API

* fix lint
2020-02-11 01:36:00 -08:00
14b1cab5d2 force rotate control plane certifcate on master node when upgrade cluster (#5596) 2020-02-10 06:09:54 -08:00
e570e2e736 Remove last rkt references (#5606) 2020-02-07 02:19:43 -08:00
16fd2e5d68 Fix etcd deployment type variable location (#5587)
On deployments types where etcd server is splitted from Kube Master, the deployment fails since it cannot find the variable.
2020-02-07 02:17:43 -08:00
422b25ab1f Bind Docker service to containerd.service on versions >=18.09.1 (#5477) 2020-02-07 02:15:44 -08:00
b7527399b5 fully clean docker_options from sample inventory (#5414)
* comment out docker_options

* fix yamllint
2020-02-07 02:13:43 -08:00
89bad11ad8 Update PULL_REQUEST_TEMPLATE.md (#5597) 2020-02-07 02:11:44 -08:00
aca
9d32e2c3b0 fix duplicates when scheduler_extra_volumes defined (#5566) 2020-02-07 02:09:44 -08:00
099341582a Update nginx image to latest (#5590) 2020-02-07 02:07:44 -08:00
942c98003f Add LuckySB as an approver (#5584)
Change-Id: I830d5bff9fa3c50b83a9eb1fd6dff521f8e55dc1
2020-02-05 11:21:55 -08:00
cad3bf3e8c Add CentOS 8 image for testing (#5589) 2020-01-29 02:06:16 -08:00
2ab5cc73cd Fix typo in Multus plugin. (#5568) 2020-01-29 01:28:13 -08:00
9f2dd09628 Add proxy support to containerd, improves no_proxy (#5583)
* containerd: add proxy support

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* kubespray-defaults: add kube_service_addresses / kube_pods_subnet to no_proxy

CIDR notation in no_proxy is supported by a lot of programs/languages,
including go: https://github.com/golang/go/issues/16704
Without that containerd cannot talk the the API server (kube_apiserver_ip),
but it should not go through an external proxy for the nodes/pods/services

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-01-29 01:24:14 -08:00
2798adc837 Remove stale legacy yum docker repo /etc/yum.repos.d/docker.repo (#5569)
* Remove stale legacy yum docker repo /etc/yum.repos.d/docker.repo

* move task 'Remove legacy docker repo file' to pre-upgrade.yml
2020-01-28 02:31:40 -08:00
54d9404c0e Fix hashes... kubernetes 1.17.2 (#5581) 2020-01-24 06:44:31 -08:00
f1025dce4e Update to hashes and default version (1.15.8 / 1.16.5 / 1.17.1) (#5564) 2020-01-23 03:54:49 -08:00
538f4dad9d tag role kubernetes/node-label in playbooks (#5560) 2020-01-20 11:43:36 -08:00
5323e232b2 recreate in another branch due to rebase problem (#5557) 2020-01-18 02:23:35 -08:00
5d9986ab5f Fix temp filename for debian-10 image (#5540) 2020-01-17 02:08:56 -08:00
38688a4486 Remove dockerproject org (#5548)
* Change dockerproject.org to download.docker.com

dockerproject.org was deprecated in 2017 and has gone down.

* Restore yum repo for containerd

Change-Id: I883bb512a2164a85865b1bd4fb569af0358c8c2b

Co-authored-by: Craig Rodrigues <rodrigc@crodrigues.org>
2020-01-17 00:38:55 -08:00
d640a57f9b update api-version for PriorityClass following removal in 1.17 (#5450) 2020-01-16 01:52:22 -08:00
5e9479cded Ensure we always fixup kube-proxy kubeconfig (#5524)
When running with serial != 100%, like upgrade_cluster.yml, we need to apply this fixup each time
Problem was introduced in 05dc2b3a09

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-01-14 02:45:09 -08:00
06ffe44f1f Remove downloading deprecated calico-rr image (#5528)
Change-Id: I7354d33c7db513e0ee27c9a4cc40e8501c0e1061
2020-01-14 02:41:08 -08:00
b35b816287 Raise typha max connections to 300 (#5527)
Raises limit from 100 to 300 because the default is far too low
and the pod can handle 300 with the given resources.

Change-Id: Ib1eec10da3d09d198933fcfe87291587e58d7cdb
2020-01-10 00:24:33 -08:00
bf15d06568 Update to Kubernetes 1.15.7 (#5518) 2020-01-08 17:35:40 -08:00
2c2ffa846c Calico: update to 3.11.1, allow to configure calico_iptables_backend (#5514)
I've tested this update by deploying a containerd / etcd cluster on top CentOS7,
MetalLB + NGINX Ingress. Upgrade using upgrade-cluster.yml

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
2020-01-08 02:27:40 -08:00
48c41bcbe7 kube-proxy need conntrack (#5478) 2020-01-06 02:31:35 -08:00
beb47e1c63 update ingress_nginx install guide (#5502) 2020-01-06 02:27:35 -08:00
303c3654a1 Set pipefail in case tar fails (#5506) 2020-01-06 02:25:34 -08:00
5fab610fab Clean kubectl cache after upgrade on first master (#5479)
Resolves issue where kubectl cache of <v1.16 api schema
interferes with interacting with daemonsets and deployments.

Change-Id: I63b7046958f2008eb144b6da0004c598f945e0ae
2020-01-06 02:23:35 -08:00
3c3ebc05cc Fix invalid count index (#5469) 2020-01-02 01:57:39 -08:00
94956ebde9 Fix invalid variable in host inventory script (#5481) 2019-12-20 05:01:33 -08:00
e716bed11b A fix of install instructions (#5483)
* Update from https://github.com/kubernetes-sigs/kubespray/issues/4318#issuecomment-470161397

* Woops I missed a spot
2019-12-20 04:39:32 -08:00
ccbcad9741 Ubuntu CRI-O (#5426)
* Fix crictl

* Reload systemd daemon before enabling service

* Typo

* Add crictl template

* Remove seccomp.json for ubuntu

* Set runtime path of runc for ubuntu

* Change path to conmon
2019-12-19 04:37:57 -08:00
15a8c34717 Update PULL_REQUEST_TEMPLATE.md (#5476) 2019-12-19 04:21:57 -08:00
b815f48803 Add script for generating binary hashes (#5470)
Change-Id: I4498d1c0585ee98c23856208d660caadf67cab34
2019-12-18 00:29:57 -08:00
95c97332bf Bump yamllint and ansible-lint versions (#5421) 2019-12-17 07:13:59 -08:00
9bdf6b00cc Remove inline shell in YAML for vagrant-validate (#5386) 2019-12-17 07:11:59 -08:00
91b23caa19 Remove GCE tests files (#5459) 2019-12-17 07:09:59 -08:00
5df48ef8fd [docs] Add CI matrix and script (#5461)
* Rename CI jobs from ubuntu to ubuntu16

* Add CI matrix and script
2019-12-17 07:07:59 -08:00
109078c5e0 Update CNI plugins to v0.8.3 (#5453) 2019-12-16 04:53:36 -08:00
c0b262a22a Add kube-router configuration to enable metrics exposure (#5416) 2019-12-16 04:35:36 -08:00
8bb1af9926 fix typo (#5452) 2019-12-16 02:55:36 -08:00
538f1f1a68 cri-o: redhat.yml - remove package cri-tools (#5444)
There is no cri-tools package in CentOS/EPEL/Red Hat.
Additionally, cri-tools is provided into the installation via
roles/download/defaults/main.yml:104:crictl_download_url.
2019-12-16 02:53:36 -08:00
b60ab3ae44 Update CI to use v2.12.0 image and update release process (#5448) 2019-12-13 05:42:54 -08:00
792 changed files with 40360 additions and 6335 deletions

View File

@ -2,15 +2,8 @@
parseable: true
skip_list:
# see https://docs.ansible.com/ansible-lint/rules/default_rules.html for a list of all default rules
# The following rules throw errors.
# These either still need to be corrected in the repository and the rules re-enabled or documented why they are skipped on purpose.
- '301'
- '302'
- '303'
- '305'
- '306'
- '404'
- '503'
# DO NOT add any other rules to this skip_list, instead use local `# noqa` with a comment explaining WHY it is necessary
# These rules are intentionally skipped:
#

15
.editorconfig Normal file
View File

@ -0,0 +1,15 @@
root = true
[*.{yaml,yml,yml.j2,yaml.j2}]
indent_style = space
indent_size = 2
trim_trailing_whitespace = true
insert_final_newline = true
charset = utf-8
[{Dockerfile}]
indent_style = space
indent_size = 2
trim_trailing_whitespace = true
insert_final_newline = true
charset = utf-8

View File

@ -18,6 +18,8 @@ explain why.
- **Version of Ansible** (`ansible --version`):
- **Version of Python** (`python --version`):
**Kubespray version (commit) (`git rev-parse --short HEAD`):**
@ -25,8 +27,8 @@ explain why.
**Network plugin used**:
**Copy of your inventory file:**
**Full inventory with variables (`ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"`):**
<!-- We recommend using snippets services like https://gist.github.com/ etc. -->
**Command used to invoke ansible**:

View File

@ -1,9 +1,9 @@
<!-- Thanks for sending a pull request! Here are some tips for you:
1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide
1. If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide/first-contribution.md and developer guide https://git.k8s.io/community/contributors/devel/development.md
2. Please label this pull request according to what type of issue you are addressing, especially if this is a release targeted pull request. For reference on required PR/issue labels, read here:
https://git.k8s.io/community/contributors/devel/release.md#issue-kind-label
3. Ensure you have added or ran the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/testing.md
https://git.k8s.io/community/contributors/devel/sig-release/release.md#issuepr-kind-label
3. Ensure you have added or ran the appropriate tests for your PR: https://git.k8s.io/community/contributors/devel/sig-testing/testing.md
4. If you want *faster* PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews
5. Follow the instructions for writing a release note: https://git.k8s.io/community/contributors/guide/release-notes.md
6. If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests

1
.gitignore vendored
View File

@ -1,6 +1,7 @@
.vagrant
*.retry
**/vagrant_ansible_inventory
*.iml
temp
.idea
.tox

View File

@ -4,17 +4,18 @@ stages:
- deploy-part1
- moderator
- deploy-part2
- deploy-gce
- deploy-part3
- deploy-special
variables:
KUBESPRAY_VERSION: v2.13.3
FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
# DOCKER_HOST: tcp://localhost:2375
ANSIBLE_FORCE_COLOR: "true"
MAGIC: "ci check this"
TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
CI_TEST_REGISTRY_MIRROR: "./tests/common/_docker_hub_registry_mirror.yml"
GS_ACCESS_KEY_ID: $GS_KEY
GS_SECRET_ACCESS_KEY: $GS_SECRET
CONTAINER_ENGINE: docker
@ -26,7 +27,10 @@ variables:
IDEMPOT_CHECK: "false"
RESET_CHECK: "false"
UPGRADE_TEST: "false"
LOG_LEVEL: "-vv"
MITOGEN_ENABLE: "false"
ANSIBLE_LOG_LEVEL: "-vv"
RECOVER_CONTROL_PLANE_TEST: "false"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube-master[1:]"
before_script:
- ./tests/scripts/rebase.sh
@ -37,14 +41,14 @@ before_script:
.job: &job
tags:
- packet
variables:
KUBESPRAY_VERSION: v2.11.0
image: quay.io/kubespray/kubespray:$KUBESPRAY_VERSION
artifacts:
when: always
paths:
- cluster-dump/
.testcases: &testcases
<<: *job
services:
- docker:dind
before_script:
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- ./tests/scripts/rebase.sh
@ -52,7 +56,7 @@ before_script:
script:
- ./tests/scripts/testcases_run.sh
after_script:
- ./tests/scripts/testcases_cleanup.sh
- chronic ./tests/scripts/testcases_cleanup.sh
# For failfast, at least 1 job must be defined in .gitlab-ci.yml
# Premoderated with manual actions
@ -70,3 +74,4 @@ include:
- .gitlab-ci/shellcheck.yml
- .gitlab-ci/terraform.yml
- .gitlab-ci/packet.yml
- .gitlab-ci/vagrant.yml

View File

@ -1,247 +0,0 @@
---
.gce_variables: &gce_variables
GCE_USER: travis
SSH_USER: $GCE_USER
CLOUD_MACHINE_TYPE: "g1-small"
CI_PLATFORM: "gce"
PRIVATE_KEY: $GCE_PRIVATE_KEY
.cache: &cache
cache:
key: "$CI_BUILD_REF_NAME"
paths:
- downloads/
- $HOME/.cache
.gce: &gce
extends: .testcases
<<: *cache
variables:
<<: *gce_variables
tags:
- gce
except: ['triggers']
only: [/^pr-.*$/]
.centos_weave_kubeadm_variables: &centos_weave_kubeadm_variables
# stage: deploy-part1
UPGRADE_TEST: "graceful"
.centos7_multus_calico_variables: &centos7_multus_calico_variables
# stage: deploy-gce
UPGRADE_TEST: "graceful"
# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
### PR JOBS PART1
gce_ubuntu18-flannel-aio:
stage: deploy-part1
<<: *gce
when: manual
### PR JOBS PART2
gce_coreos-calico-aio:
stage: deploy-gce
<<: *gce
when: on_success
gce_centos7-flannel-addons:
stage: deploy-gce
<<: *gce
when: manual
### MANUAL JOBS
gce_centos-weave-kubeadm-sep:
stage: deploy-gce
extends: .gce
variables:
<<: *centos_weave_kubeadm_variables
when: on_success
only: ['triggers']
except: []
gce_ubuntu-weave-sep:
stage: deploy-gce
<<: *gce
when: manual
only: ['triggers']
except: []
gce_coreos-calico-sep-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_ubuntu-canal-ha-triggers:
stage: deploy-special
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_centos7-flannel-addons-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_ubuntu-weave-sep-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
# More builds for PRs/merges (manual) and triggers (auto)
gce_ubuntu-canal-ha:
stage: deploy-special
<<: *gce
when: manual
gce_ubuntu-canal-kubeadm:
stage: deploy-gce
<<: *gce
when: manual
gce_ubuntu-canal-kubeadm-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_ubuntu-flannel-ha:
stage: deploy-gce
<<: *gce
when: manual
gce_centos-weave-kubeadm-triggers:
stage: deploy-gce
extends: .gce
variables:
<<: *centos_weave_kubeadm_variables
when: on_success
only: ['triggers']
except: []
gce_ubuntu-contiv-sep:
stage: deploy-special
<<: *gce
when: manual
gce_coreos-cilium:
stage: deploy-special
<<: *gce
when: manual
gce_ubuntu18-cilium-sep:
stage: deploy-special
<<: *gce
when: manual
gce_rhel7-weave:
stage: deploy-gce
<<: *gce
when: manual
gce_rhel7-weave-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_debian9-calico-upgrade:
stage: deploy-gce
<<: *gce
when: manual
gce_debian9-calico-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_coreos-canal:
stage: deploy-gce
<<: *gce
when: manual
gce_coreos-canal-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_rhel7-canal-sep:
stage: deploy-special
<<: *gce
when: manual
gce_rhel7-canal-sep-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_centos7-calico-ha:
stage: deploy-special
<<: *gce
when: manual
gce_centos7-calico-ha-triggers:
stage: deploy-gce
<<: *gce
when: on_success
only: ['triggers']
except: []
gce_centos7-kube-router:
stage: deploy-special
<<: *gce
when: manual
gce_centos7-multus-calico:
stage: deploy-gce
extends: .gce
variables:
<<: *centos7_multus_calico_variables
when: manual
gce_oracle-canal:
stage: deploy-gce
<<: *gce
when: manual
except: ['triggers']
only: ['master', /^pr-.*$/]
gce_opensuse-canal:
stage: deploy-gce
<<: *gce
when: manual
# no triggers yet https://github.com/kubernetes-incubator/kargo/issues/613
gce_coreos-alpha-weave-ha:
stage: deploy-special
<<: *gce
when: manual
gce_coreos-kube-router:
stage: deploy-special
<<: *gce
when: manual
gce_ubuntu-kube-router-sep:
stage: deploy-special
<<: *gce
when: manual

View File

@ -2,6 +2,7 @@
yamllint:
extends: .job
stage: unit-tests
tags: [light]
variables:
LANG: C.UTF-8
script:
@ -11,15 +12,17 @@ yamllint:
vagrant-validate:
extends: .job
stage: unit-tests
tags: [light]
variables:
VAGRANT_VERSION: 2.2.4
script:
- curl -sL https://releases.hashicorp.com/vagrant/2.2.4/vagrant_2.2.4_x86_64.deb -o /tmp/vagrant_2.2.4_x86_64.deb
- dpkg -i /tmp/vagrant_2.2.4_x86_64.deb
- vagrant validate --ignore-provider
- ./tests/scripts/vagrant-validate.sh
except: ['triggers', 'master']
ansible-lint:
extends: .job
stage: unit-tests
tags: [light]
# lint every yml/yaml file that looks like it contains Ansible plays
script: |-
grep -Rl '^- hosts: \|^ hosts: ' --include \*.yml --include \*.yaml . | xargs -P 4 -n 25 ansible-lint -v
@ -28,6 +31,7 @@ ansible-lint:
syntax-check:
extends: .job
stage: unit-tests
tags: [light]
variables:
ANSIBLE_INVENTORY: inventory/local-tests.cfg
ANSIBLE_REMOTE_USER: root
@ -43,6 +47,7 @@ syntax-check:
tox-inventory-builder:
stage: unit-tests
tags: [light]
extends: .job
before_script:
- ./tests/scripts/rebase.sh
@ -56,8 +61,16 @@ tox-inventory-builder:
markdownlint:
stage: unit-tests
tags: [light]
image: node
before_script:
- npm install -g markdownlint-cli
script:
- markdownlint README.md docs --ignore docs/_sidebar.md
ci-matrix:
stage: unit-tests
tags: [light]
image: python:3
script:
- tests/scripts/md-table/test.sh

View File

@ -14,87 +14,145 @@ packet_ubuntu18-calico-aio:
extends: .packet
when: on_success
# Future AIO job
packet_ubuntu20-calico-aio:
stage: deploy-part1
extends: .packet
when: on_success
# ### PR JOBS PART2
packet_centos7-flannel-addons:
packet_centos7-flannel-containerd-addons-ha:
extends: .packet
stage: deploy-part2
when: on_success
# ### MANUAL JOBS
packet_centos-weave-kubeadm-sep:
stage: deploy-part2
extends: .packet
when: on_success
variables:
UPGRADE_TEST: basic
MITOGEN_ENABLE: "true"
packet_ubuntu-weave-sep:
packet_centos7-crio:
extends: .packet
stage: deploy-part2
when: on_success
variables:
MITOGEN_ENABLE: "true"
packet_ubuntu18-crio:
extends: .packet
stage: deploy-part2
when: manual
variables:
MITOGEN_ENABLE: "true"
packet_ubuntu16-canal-kubeadm-ha:
stage: deploy-part2
extends: .packet
when: manual
when: on_success
# # More builds for PRs/merges (manual) and triggers (auto)
packet_ubuntu-canal-ha:
packet_ubuntu16-canal-sep:
stage: deploy-special
extends: .packet
when: manual
packet_ubuntu-canal-kubeadm:
stage: deploy-part2
extends: .packet
when: on_success
packet_ubuntu-flannel-ha:
packet_ubuntu16-flannel-ha:
stage: deploy-part2
extends: .packet
when: manual
packet_ubuntu16-kube-router-sep:
stage: deploy-part2
extends: .packet
when: manual
packet_ubuntu16-kube-router-svc-proxy:
stage: deploy-part2
extends: .packet
when: manual
packet_debian10-cilium-svc-proxy:
stage: deploy-part2
extends: .packet
when: manual
packet_debian10-containerd:
stage: deploy-part2
extends: .packet
when: on_success
variables:
MITOGEN_ENABLE: "true"
packet_centos7-calico-ha-once-localhost:
stage: deploy-part2
extends: .packet
when: on_success
variables:
# This will instruct Docker not to start over TLS.
DOCKER_TLS_CERTDIR: ""
services:
- docker:19.03.9-dind
packet_centos8-kube-ovn:
stage: deploy-part2
extends: .packet
when: on_success
packet_centos8-calico:
stage: deploy-part2
extends: .packet
when: on_success
packet_fedora32-weave:
stage: deploy-part2
extends: .packet
when: on_success
packet_opensuse-canal:
stage: deploy-part2
extends: .packet
when: on_success
packet_ubuntu18-ovn4nfv:
stage: deploy-part2
extends: .packet
when: on_success
# Contiv does not work in k8s v1.16
# packet_ubuntu-contiv-sep:
# packet_ubuntu16-contiv-sep:
# stage: deploy-part2
# extends: .packet
# when: on_success
# ### MANUAL JOBS
packet_ubuntu16-weave-sep:
stage: deploy-part2
extends: .packet
when: manual
packet_ubuntu18-cilium-sep:
stage: deploy-special
extends: .packet
when: manual
packet_ubuntu18-flannel-containerd:
packet_ubuntu18-flannel-containerd-ha:
stage: deploy-part2
extends: .packet
when: manual
packet_debian9-macvlan-sep:
packet_ubuntu18-flannel-containerd-ha-once:
stage: deploy-part2
extends: .packet
when: manual
packet_debian9-calico-upgrade:
packet_debian9-macvlan:
stage: deploy-part2
extends: .packet
when: on_success
variables:
UPGRADE_TEST: graceful
packet_debian10-containerd:
stage: deploy-part2
extends: .packet
when: on_success
when: manual
packet_centos7-calico-ha:
stage: deploy-part2
extends: .packet
when: manual
packet_centos7-kube-ovn:
stage: deploy-part2
extends: .packet
when: on_success
packet_centos7-kube-router:
stage: deploy-part2
extends: .packet
@ -105,22 +163,67 @@ packet_centos7-multus-calico:
extends: .packet
when: manual
packet_opensuse-canal:
packet_oracle7-canal-ha:
stage: deploy-part2
extends: .packet
when: manual
packet_oracle-7-canal:
packet_fedora31-flannel:
stage: deploy-part2
extends: .packet
when: manual
packet_ubuntu-kube-router-sep:
stage: deploy-part2
extends: .packet
when: manual
when: on_success
variables:
MITOGEN_ENABLE: "true"
packet_amazon-linux-2-aio:
stage: deploy-part2
extends: .packet
when: manual
packet_fedora32-kube-ovn-containerd:
stage: deploy-part2
extends: .packet
when: on_success
# ### PR JOBS PART3
# Long jobs (45min+)
packet_centos7-weave-upgrade-ha:
stage: deploy-part3
extends: .packet
when: on_success
variables:
UPGRADE_TEST: basic
MITOGEN_ENABLE: "false"
packet_debian9-calico-upgrade:
stage: deploy-part3
extends: .packet
when: on_success
variables:
UPGRADE_TEST: graceful
MITOGEN_ENABLE: "false"
packet_debian9-calico-upgrade-once:
stage: deploy-part3
extends: .packet
when: on_success
variables:
UPGRADE_TEST: graceful
MITOGEN_ENABLE: "false"
packet_ubuntu18-calico-ha-recover:
stage: deploy-part3
extends: .packet
when: on_success
variables:
RECOVER_CONTROL_PLANE_TEST: "true"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube-master[1:]"
packet_ubuntu18-calico-ha-recover-noquorum:
stage: deploy-part3
extends: .packet
when: on_success
variables:
RECOVER_CONTROL_PLANE_TEST: "true"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[1:],kube-master[1:]"


@ -2,14 +2,15 @@
shellcheck:
extends: .job
stage: unit-tests
tags: [light]
variables:
SHELLCHECK_VERSION: v0.6.0
before_script:
- ./tests/scripts/rebase.sh
- curl --silent "https://storage.googleapis.com/shellcheck/shellcheck-"${SHELLCHECK_VERSION}".linux.x86_64.tar.xz" | tar -xJv
- curl --silent --location "https://github.com/koalaman/shellcheck/releases/download/"${SHELLCHECK_VERSION}"/shellcheck-"${SHELLCHECK_VERSION}".linux.x86_64.tar.xz" | tar -xJv
- cp shellcheck-"${SHELLCHECK_VERSION}"/shellcheck /usr/bin/
- shellcheck --version
script:
# Run shellcheck for all *.sh except contrib/
- find . -name '*.sh' -not -path './contrib/*' | xargs shellcheck --severity error
- find . -name '*.sh' -not -path './contrib/*' -not -path './.git/*' | xargs shellcheck --severity error
except: ['triggers', 'master']


@ -18,10 +18,14 @@
- echo "$PACKET_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa
- chmod 400 ~/.ssh/id_rsa
- echo "$PACKET_PUBLIC_KEY" | base64 -d > ~/.ssh/id_rsa.pub
- mkdir -p group_vars
# Random subnet to avoid routing conflicts
- export TF_VAR_subnet_cidr="10.$(( $RANDOM % 256 )).$(( $RANDOM % 256 )).0/24"
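The exported `TF_VAR_subnet_cidr` picks a random `10.x.y.0/24` per run so concurrent CI jobs don't collide on routing; for illustration (bash's `$RANDOM`; the output below is just one possible value):

```ShellSession
$ echo "10.$(( RANDOM % 256 )).$(( RANDOM % 256 )).0/24"
10.203.57.0/24
```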
.terraform_validate:
extends: .terraform_install
stage: unit-tests
tags: [light]
only: ['master', /^pr-.*$/]
script:
- terraform validate -var-file=cluster.tfvars contrib/terraform/$PROVIDER
@ -29,9 +33,14 @@
.terraform_apply:
extends: .terraform_install
stage: deploy-part2
tags: [light]
stage: deploy-part3
when: manual
only: [/^pr-.*$/]
artifacts:
when: always
paths:
- cluster-dump/
variables:
ANSIBLE_INVENTORY_UNPARSED_FAILED: "true"
ANSIBLE_INVENTORY: hosts
@ -42,33 +51,33 @@
- tests/scripts/testcases_run.sh
after_script:
# Cleanup regardless of exit code
- ./tests/scripts/testcases_cleanup.sh
- chronic ./tests/scripts/testcases_cleanup.sh
tf-validate-openstack:
extends: .terraform_validate
variables:
TF_VERSION: 0.12.12
TF_VERSION: 0.12.24
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-packet:
extends: .terraform_validate
variables:
TF_VERSION: 0.12.12
TF_VERSION: 0.12.24
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
tf-validate-aws:
extends: .terraform_validate
variables:
TF_VERSION: 0.12.12
TF_VERSION: 0.12.24
PROVIDER: aws
CLUSTER: $CI_COMMIT_REF_NAME
# tf-packet-ubuntu16-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: 0.12.12
# TF_VERSION: 0.12.24
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
@ -82,7 +91,7 @@ tf-validate-aws:
# tf-packet-ubuntu18-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: 0.12.12
# TF_VERSION: 0.12.24
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
@ -104,12 +113,87 @@ tf-validate-aws:
OS_INTERFACE: public
OS_IDENTITY_API_VERSION: "3"
# Elastx is generously donating resources for Kubespray on Openstack CI
# Contacts: @gix @bl0m1
.elastx_variables: &elastx_variables
OS_AUTH_URL: https://ops.elastx.cloud:5000
OS_PROJECT_ID: 564c6b461c6b44b1bb19cdb9c2d928e4
OS_PROJECT_NAME: kubespray_ci
OS_USER_DOMAIN_NAME: Default
OS_PROJECT_DOMAIN_ID: default
OS_USERNAME: kubespray@root314.com
OS_REGION_NAME: se-sto
OS_INTERFACE: public
OS_IDENTITY_API_VERSION: "3"
TF_VAR_router_id: "ab95917c-41fb-4881-b507-3a6dfe9403df"
# Since ELASTX is in Stockholm, Mitogen helps with latency
MITOGEN_ENABLE: "false"
# Mitogen doesn't support interpreter discovery yet
ANSIBLE_PYTHON_INTERPRETER: "/usr/bin/python3"
tf-elastx_cleanup:
stage: unit-tests
tags: [light]
image: python
variables:
<<: *elastx_variables
before_script:
- pip install -r scripts/openstack-cleanup/requirements.txt
script:
- ./scripts/openstack-cleanup/main.py
tf-elastx_ubuntu18-calico:
extends: .terraform_apply
stage: deploy-part3
when: on_success
variables:
<<: *elastx_variables
TF_VERSION: 0.12.24
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
SSH_USER: ubuntu
TF_VAR_number_of_k8s_masters: "1"
TF_VAR_number_of_k8s_masters_no_floating_ip: "0"
TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
TF_VAR_number_of_etcd: "0"
TF_VAR_number_of_k8s_nodes: "1"
TF_VAR_number_of_k8s_nodes_no_floating_ip: "0"
TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
TF_VAR_number_of_bastions: "0"
TF_VAR_number_of_k8s_masters_no_etcd: "0"
TF_VAR_floatingip_pool: "elx-public1"
TF_VAR_dns_nameservers: '["1.1.1.1", "8.8.8.8", "8.8.4.4"]'
TF_VAR_use_access_ip: "0"
TF_VAR_external_net: "600b8501-78cb-4155-9c9f-23dfcba88828"
TF_VAR_network_name: "ci-$CI_JOB_ID"
TF_VAR_az_list: '["sto1"]'
TF_VAR_az_list_node: '["sto1"]'
TF_VAR_flavor_k8s_master: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
TF_VAR_flavor_k8s_node: 3f73fc93-ec61-4808-88df-2580d94c1a9b # v1-standard-2
TF_VAR_image: ubuntu-18.04-server-latest
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
tf-ovh_cleanup:
stage: unit-tests
tags: [light]
image: python
environment: ovh
variables:
<<: *ovh_variables
before_script:
- pip install -r scripts/openstack-cleanup/requirements.txt
script:
- ./scripts/openstack-cleanup/main.py
tf-ovh_ubuntu18-calico:
extends: .terraform_apply
when: on_success
environment: ovh
variables:
<<: *ovh_variables
TF_VERSION: 0.12.12
TF_VERSION: 0.12.24
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
@ -131,31 +215,3 @@ tf-ovh_ubuntu18-calico:
TF_VAR_flavor_k8s_node: "defa64c3-bd46-43b4-858a-d93bbae0a229" # s1-8
TF_VAR_image: "Ubuntu 18.04"
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'
tf-ovh_coreos-calico:
extends: .terraform_apply
when: on_success
variables:
<<: *ovh_variables
TF_VERSION: 0.12.12
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
SSH_USER: core
TF_VAR_number_of_k8s_masters: "0"
TF_VAR_number_of_k8s_masters_no_floating_ip: "1"
TF_VAR_number_of_k8s_masters_no_floating_ip_no_etcd: "0"
TF_VAR_number_of_etcd: "0"
TF_VAR_number_of_k8s_nodes: "0"
TF_VAR_number_of_k8s_nodes_no_floating_ip: "1"
TF_VAR_number_of_gfs_nodes_no_floating_ip: "0"
TF_VAR_number_of_bastions: "0"
TF_VAR_number_of_k8s_masters_no_etcd: "0"
TF_VAR_use_neutron: "0"
TF_VAR_floatingip_pool: "Ext-Net"
TF_VAR_external_net: "6011fbc9-4cbf-46a4-8452-6890a340b60b"
TF_VAR_network_name: "Ext-Net"
TF_VAR_flavor_k8s_master: "4d4fd037-9493-4f2b-9afe-b542b5248eac" # b2-7
TF_VAR_flavor_k8s_node: "4d4fd037-9493-4f2b-9afe-b542b5248eac" # b2-7
TF_VAR_image: "CoreOS Stable"
TF_VAR_k8s_allowed_remote_ips: '["0.0.0.0/0"]'

.gitlab-ci/vagrant.yml Normal file

@ -0,0 +1,54 @@
---
molecule_tests:
tags: [c3.small.x86]
only: [/^pr-.*$/]
except: ['triggers']
image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
services: []
stage: deploy-part1
before_script:
- tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/molecule_run.sh
.vagrant:
extends: .testcases
variables:
CI_PLATFORM: "vagrant"
SSH_USER: "vagrant"
VAGRANT_DEFAULT_PROVIDER: "libvirt"
KUBESPRAY_VAGRANT_CONFIG: tests/files/${CI_JOB_NAME}.rb
tags: [c3.small.x86]
only: [/^pr-.*$/]
except: ['triggers']
image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
services: []
before_script:
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/testcases_run.sh
after_script:
- chronic ./tests/scripts/testcases_cleanup.sh
vagrant_ubuntu18-flannel:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_ubuntu18-weave-medium:
stage: deploy-part2
extends: .vagrant
when: manual
vagrant_ubuntu20-flannel:
stage: deploy-part2
extends: .vagrant
when: on_success


@ -2,10 +2,30 @@
## How to become a contributor and submit your own code
### Environment setup
It is recommended to use filters to manage GitHub email notifications; see [examples for setting filters to Kubernetes Github notifications](https://github.com/kubernetes/community/blob/master/communication/best-practices.md#examples-for-setting-filters-to-kubernetes-github-notifications)
To install development dependencies you can use `pip install -r tests/requirements.txt`
#### Linting
Kubespray uses `yamllint` and `ansible-lint`. To run them locally use `yamllint .` and `./tests/scripts/ansible-lint.sh`
#### Molecule
[molecule](https://github.com/ansible-community/molecule) is designed to help the development and testing of Ansible roles. In Kubespray you can run it for all roles with `./tests/scripts/molecule_run.sh` or for a specific role (the one you are working on) with `molecule test` from the role directory (`cd roles/my-role`).
When developing or debugging a role it can be useful to run `molecule create` and `molecule converge` separately. Then you can use `molecule login` to SSH into the test environment.
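A typical local session against a single role might look like this (a sketch; `my-role` is the placeholder used above):

```ShellSession
$ cd roles/my-role
$ molecule create    # provision the test instance(s)
$ molecule converge  # apply the role to them
$ molecule login     # SSH in to inspect the result
$ molecule test      # full cycle, including verify and destroy
```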
#### Vagrant
Vagrant with the VirtualBox or libvirt driver helps you quickly spin up test clusters to test things end to end. See [README.md#vagrant](README.md)
### Contributing A Patch
1. Submit an issue describing your proposed change to the repo in question.
2. The [repo owners](OWNERS) will respond to your issue promptly.
3. Fork the desired repo, develop and test your code changes.
4. Sign the CNCF CLA (https://git.k8s.io/community/CLA.md#the-contributor-license-agreement)
4. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
5. Submit a pull request.


@ -4,15 +4,18 @@ RUN mkdir /kubespray
WORKDIR /kubespray
RUN apt update -y && \
apt install -y \
libssl-dev python3-dev sshpass apt-transport-https jq \
libssl-dev python3-dev sshpass apt-transport-https jq moreutils \
ca-certificates curl gnupg2 software-properties-common python3-pip rsync
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" \
&& apt update -y && apt-get install docker-ce -y
COPY . .
RUN /usr/bin/python3 -m pip install pip -U && /usr/bin/python3 -m pip install -r tests/requirements.txt && python3 -m pip install -r requirements.txt && update-alternatives --install /usr/bin/python python /usr/bin/python3 1
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kubectl \
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.5/bin/linux/amd64/kubectl \
&& chmod a+x kubectl && cp kubectl /usr/local/bin/kubectl
# Some tools like yamllint need this
ENV LANG=C.UTF-8
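With `moreutils` and the newer kubectl baked in, the image can be built and entered for local test runs; a hedged sketch (tag and mount are illustrative):

```ShellSession
$ docker build -t kubespray .
$ docker run --rm -it -v "$PWD/inventory:/kubespray/inventory" kubespray bash
```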


@ -1,5 +1,5 @@
mitogen:
ansible-playbook -c local mitogen.yaml -vv
ansible-playbook -c local mitogen.yml -vv
clean:
rm -rf dist/
rm *.retry


@ -9,7 +9,11 @@ aliases:
- riverzhang
- verwilst
- woopstar
- luckysb
kubespray-reviewers:
- jjungnickel
- archifleks
- holmsten
- bozzo
- floryut
- eppo


@ -2,10 +2,10 @@
![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/docs/img/kubernetes-logo.png)
If you have questions, check the [documentation](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Packet](docs/packet.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
@ -21,14 +21,14 @@ To deploy the cluster you can use :
```ShellSession
# Install dependencies from ``requirements.txt``
sudo pip install -r requirements.txt
sudo pip3 install -r requirements.txt
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
# Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/inventory.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
@ -38,7 +38,7 @@ cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```
Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
@ -75,6 +75,7 @@ vagrant up
- [Requirements](#requirements)
- [Kubespray vs ...](docs/comparisons.md)
- [Getting started](docs/getting-started.md)
- [Setting up your first cluster](docs/setting-up-your-first-cluster.md)
- [Ansible inventory and tags](docs/ansible.md)
- [Integration with existing ansible repo](docs/integration.md)
- [Deployment data variables](docs/vars.md)
@ -82,7 +83,8 @@ vagrant up
- [HA mode](docs/ha-mode.md)
- [Network plugins](#network-plugins)
- [Vagrant install](docs/vagrant.md)
- [CoreOS bootstrap](docs/coreos.md)
- [Flatcar Container Linux bootstrap](docs/flatcar.md)
- [Fedora CoreOS bootstrap](docs/fcos.md)
- [Debian Jessie setup](docs/debian.md)
- [openSUSE setup](docs/opensuse.md)
- [Downloaded artifacts](docs/downloads.md)
@ -93,54 +95,59 @@ vagrant up
- [vSphere](docs/vsphere.md)
- [Packet Host](docs/packet.md)
- [Large deployments](docs/large-deployments.md)
- [Adding/replacing a node](docs/nodes.md)
- [Upgrades basics](docs/upgrades.md)
- [Air-Gap installation](docs/offline-environment.md)
- [Roadmap](docs/roadmap.md)
## Supported Linux Distributions
- **Container Linux by CoreOS**
- **Flatcar Container Linux by Kinvolk**
- **Debian** Buster, Jessie, Stretch, Wheezy
- **Ubuntu** 16.04, 18.04
- **CentOS/RHEL** 7
- **Fedora** 28
- **Fedora/CentOS** Atomic
- **Ubuntu** 16.04, 18.04, 20.04
- **CentOS/RHEL** 7, 8 (experimental: see [centos 8 notes](docs/centos8.md))
- **Fedora** 31, 32
- **Fedora CoreOS** (experimental: see [fcos Note](docs/fcos.md))
- **openSUSE** Leap 42.3/Tumbleweed
- **Oracle Linux** 7
- **Oracle Linux** 7, 8 (experimental: [centos 8 notes](docs/centos8.md) apply)
Note: Upstart/SysV init based OS types are not supported.
## Supported Components
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.16.3
- [etcd](https://github.com/coreos/etcd) v3.3.10
- [docker](https://www.docker.com/) v18.06 (see note)
- [cri-o](http://cri-o.io/) v1.14.0 (experimental: see [CRI-O Note](docs/cri-o.md). Only on centos based OS)
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.18.10
- [etcd](https://github.com/coreos/etcd) v3.4.3
- [docker](https://www.docker.com/) v19.03 (see note)
- [containerd](https://containerd.io/) v1.2.13
- [cri-o](http://cri-o.io/) v1.17 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v0.8.1
- [calico](https://github.com/projectcalico/calico) v3.7.3
- [cni-plugins](https://github.com/containernetworking/plugins) v0.8.6
- [calico](https://github.com/projectcalico/calico) v3.15.2
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.5.5
- [cilium](https://github.com/cilium/cilium) v1.8.3
- [contiv](https://github.com/contiv/install) v1.2.1
- [flanneld](https://github.com/coreos/flannel) v0.11.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v0.2.5
- [multus](https://github.com/intel/multus-cni) v3.2.1
- [weave](https://github.com/weaveworks/weave) v2.5.2
- [flanneld](https://github.com/coreos/flannel) v0.12.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.3.0
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.0.1
- [multus](https://github.com/intel/multus-cni) v3.6.0
- [ovn4nfv](https://github.com/opnfv/ovn4nfv-k8s-plugin) v1.1.0
- [weave](https://github.com/weaveworks/weave) v2.7.0
- Application
- [ambassador](https://github.com/datawire/ambassador): v1.5
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v0.11.0
- [coredns](https://github.com/coredns/coredns) v1.6.0
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.26.1
- [cert-manager](https://github.com/jetstack/cert-manager) v0.16.1
- [coredns](https://github.com/coredns/coredns) v1.6.7
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.35.0
Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md) was updated to 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09. kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. yum versionlock plugin or apt pin).
Note: The list of validated [docker versions](https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker) is 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09 and 19.03. The recommended docker version is 19.03. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster look into e.g. the yum versionlock plugin or apt pinning.
## Requirements
- **Minimum required version of Kubernetes is v1.15**
- **Ansible v2.7.8 and python-netaddr is installed on the machine that will run Ansible commands**
- **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment))
- **Minimum required version of Kubernetes is v1.17**
- **Ansible v2.9+, Jinja 2.11+ and python-netaddr are installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
- **Your ssh key must be copied** to all the servers that are part of your inventory.
- The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
@ -163,7 +170,10 @@ You can choose between 10 network plugins. (default: `calico`, except Vagrant us
- [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.
- [calico](docs/calico.md): bgp (layer 3) networking.
- [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options
designed to give you the most efficient networking across a range of situations, including non-overlay
and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts,
pods, and (if using Istio and Envoy) applications at the service mesh layer.
- [canal](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
@ -172,6 +182,8 @@ You can choose between 10 network plugins. (default: `calico`, except Vagrant us
- [contiv](docs/contiv.md): supports vlan, vxlan, bgp and Cisco SDN networking. This plugin is able to
apply firewall policies, segregate containers in multiple network and bridging pods onto physical networks.
- [ovn4nfv](docs/ovn4nfv.md): [ovn4nfv-k8s-plugins](https://github.com/opnfv/ovn4nfv-k8s-plugin) is the network controller, OVS agent and CNI server to offer basic SFC and OVN overlay networking.
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
(Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
@ -190,12 +202,18 @@ The choice is defined with the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/netcheck.md).
## Ingress Plugins
- [ambassador](docs/ambassador.md): the Ambassador Ingress Controller and API gateway.
- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
## Community docs and resources
- [kubernetes.io/docs/setup/production-environment/tools/kubespray/](https://kubernetes.io/docs/setup/production-environment/tools/kubespray/)
- [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
- [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty
- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=N9q51JgbWu8)
- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=CJ5G4GpqDy0)
## Tools and projects on top of Kubespray
@ -204,7 +222,8 @@ See also [Network checker](docs/netcheck.md).
## CI Tests
[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/build.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/pipeline.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Packet](https://www.packet.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).
CI/end-to-end tests sponsored by Google (GCE)
See the [test matrix](docs/test_cases.md) for details.


@ -4,40 +4,45 @@ The Kubespray Project is released on an as-needed basis. The process is as follo
1. An issue is proposing a new release with a changelog since the last release
2. At least one of the [approvers](OWNERS_ALIASES) must approve this release
3. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
4. An approver creates a release branch in the form `release-vX.Y`
5. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) docker image is built and tagged
6. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
7. The release issue is closed
8. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
3. The `kube_version_min_required` variable is set to `n-1`
4. Remove hashes for [EOL versions](https://github.com/kubernetes/sig-release/blob/master/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
5. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
6. An approver creates a release branch in the form `release-X.Y`
7. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) docker images are built and tagged
8. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
9. The release issue is closed
10. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
11. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
## Major/minor releases, merge freezes and milestones
## Major/minor releases and milestones
* Kubespray maintains one branch for major releases (vX.Y). Minor releases are available only as tags.
* For major releases (vX.Y) Kubespray maintains one branch (`release-X.Y`). Minor releases (vX.Y.Z) are available only as tags.
* Security patches and bugs might be backported.
* Fixes for major releases (vX.x.0) and minor releases (vX.Y.x) are delivered
* Fixes for major releases (vX.Y) and minor releases (vX.Y.Z) are delivered
via maintenance releases (vX.Y.Z) and assigned to the corresponding open
milestone (vX.Y). That milestone remains open for the major/minor releases
support lifetime, which ends once the milestone closed. Then only a next major
or minor release can be done.
[GitHub milestone](https://github.com/kubernetes-sigs/kubespray/milestones).
That milestone remains open for the major/minor releases support lifetime,
which ends once the milestone is closed. Then only a next major or minor release
can be done.
* Kubespray major and minor releases are bound to the given ``kube_version`` major/minor
* Kubespray major and minor releases are bound to the given `kube_version` major/minor
version numbers and other components' arbitrary versions, like etcd or network plugins.
Older or newer versions are not supported and not tested for the given release.
Older or newer component versions are not supported and not tested for the given
release (even if included in the checksum variables, like `kubeadm_checksums`).
* There are no unstable releases and no APIs, thus Kubespray doesn't follow
[semver](http://semver.org/). Every version describes only a stable release.
[semver](https://semver.org/). Every version describes only a stable release.
Breaking changes, if any introduced by changed defaults or non-contrib ansible roles'
playbooks, shall be described in the release notes. Other breaking changes, if any in
the contributed addons or bound versions of Kubernetes and other components, are
considered out of Kubespray scope and are up to the components' teams to deal with and
document.
* Minor releases can change components' versions, but not the major ``kube_version``.
Greater ``kube_version`` requires a new major or minor release. For example, if Kubespray v2.0.0
is bound to ``kube_version: 1.4.x``, ``calico_version: 0.22.0``, ``etcd_version: v3.0.6``,
then Kubespray v2.1.0 may be bound to only minor changes to ``kube_version``, like v1.5.1
* Minor releases can change components' versions, but not the major `kube_version`.
Greater `kube_version` requires a new major or minor release. For example, if Kubespray v2.0.0
is bound to `kube_version: 1.4.x`, `calico_version: 0.22.0`, `etcd_version: v3.0.6`,
then Kubespray v2.1.0 may be bound to only minor changes to `kube_version`, like v1.5.1
and *any* changes to other components, like etcd v4, or calico 1.2.3.
And Kubespray v3.x.x shall be bound to ``kube_version: 2.x.x`` respectively.
And Kubespray v3.x.x shall be bound to `kube_version: 2.x.x` respectively.


@ -1,13 +1,13 @@
# Defined below are the security contacts for this repo.
#
# They are the contact point for the Product Security Team to reach out
# They are the contact point for the Product Security Committee to reach out
# to for triaging and handling of incoming issues.
#
# The below names agree to abide by the
# [Embargo Policy](https://github.com/kubernetes/sig-release/blob/master/security-release-process-documentation/security-release-process.md#embargo-policy)
# [Embargo Policy](https://git.k8s.io/security/private-distributors-list.md#embargo-policy)
# and will be removed and replaced if they violate that agreement.
#
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/
atoms
mattymo

Vagrantfile vendored

@ -7,63 +7,72 @@ require 'fileutils'
Vagrant.require_version ">= 2.0.0"
CONFIG = File.join(File.dirname(__FILE__), "vagrant/config.rb")
CONFIG = File.join(File.dirname(__FILE__), ENV['KUBESPRAY_VAGRANT_CONFIG'] || 'vagrant/config.rb')
COREOS_URL_TEMPLATE = "https://storage.googleapis.com/%s.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json"
FLATCAR_URL_TEMPLATE = "https://%s.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vagrant.json"
# Uniq disk UUID for libvirt
DISK_UUID = Time.now.utc.to_i
SUPPORTED_OS = {
"coreos-stable" => {box: "coreos-stable", user: "core", box_url: COREOS_URL_TEMPLATE % ["stable"]},
"coreos-alpha" => {box: "coreos-alpha", user: "core", box_url: COREOS_URL_TEMPLATE % ["alpha"]},
"coreos-beta" => {box: "coreos-beta", user: "core", box_url: COREOS_URL_TEMPLATE % ["beta"]},
"ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"},
"ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"},
"centos" => {box: "centos/7", user: "vagrant"},
"centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
"fedora" => {box: "fedora/28-cloud-base", user: "vagrant"},
"opensuse" => {box: "opensuse/openSUSE-15.0-x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
"flatcar-stable" => {box: "flatcar-stable", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["stable"]},
"flatcar-beta" => {box: "flatcar-beta", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["beta"]},
"flatcar-alpha" => {box: "flatcar-alpha", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["alpha"]},
"flatcar-edge" => {box: "flatcar-edge", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["edge"]},
"ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"},
"ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"},
"ubuntu2004" => {box: "generic/ubuntu2004", user: "vagrant"},
"centos" => {box: "centos/7", user: "vagrant"},
"centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
"centos8" => {box: "centos/8", user: "vagrant"},
"centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
"fedora31" => {box: "fedora/31-cloud-base", user: "vagrant"},
"fedora32" => {box: "fedora/32-cloud-base", user: "vagrant"},
"opensuse" => {box: "bento/opensuse-leap-15.1", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
"oraclelinux8" => {box: "generic/oracle8", user: "vagrant"},
}
# Defaults for config options defined in CONFIG
$num_instances = 3
$instance_name_prefix = "k8s"
$vm_gui = false
$vm_memory = 2048
$vm_cpus = 1
$shared_folders = {}
$forwarded_ports = {}
$subnet = "172.17.8"
$os = "ubuntu1804"
$network_plugin = "flannel"
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
$multi_networking = false
# The first three nodes are etcd servers
$etcd_instances = $num_instances
# The first two nodes are kube masters
$kube_master_instances = $num_instances == 1 ? $num_instances : ($num_instances - 1)
# All nodes are kube nodes
$kube_node_instances = $num_instances
# The following only works when using the libvirt provider
$kube_node_instances_with_disks = false
$kube_node_instances_with_disks_size = "20G"
$kube_node_instances_with_disks_number = 2
$override_disk_size = false
$disk_size = "20GB"
$local_path_provisioner_enabled = false
$local_path_provisioner_claim_root = "/opt/local-path-provisioner/"
$playbook = "cluster.yml"
host_vars = {}
if File.exist?(CONFIG)
require CONFIG
end
# Defaults for config options defined in CONFIG
$num_instances ||= 3
$instance_name_prefix ||= "k8s"
$vm_gui ||= false
$vm_memory ||= 2048
$vm_cpus ||= 2
$shared_folders ||= {}
$forwarded_ports ||= {}
$subnet ||= "172.18.8"
$os ||= "ubuntu1804"
$network_plugin ||= "flannel"
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
$multi_networking ||= false
$download_run_once ||= "True"
$download_force_cache ||= "True"
# The first three nodes are etcd servers
$etcd_instances ||= $num_instances
# The first two nodes are kube masters
$kube_master_instances ||= $num_instances == 1 ? $num_instances : ($num_instances - 1)
# All nodes are kube nodes
$kube_node_instances ||= $num_instances
# The following only works when using the libvirt provider
$kube_node_instances_with_disks ||= false
$kube_node_instances_with_disks_size ||= "20G"
$kube_node_instances_with_disks_number ||= 2
$override_disk_size ||= false
$disk_size ||= "20GB"
$local_path_provisioner_enabled ||= false
$local_path_provisioner_claim_root ||= "/opt/local-path-provisioner/"
$libvirt_nested ||= false
$playbook ||= "cluster.yml"
host_vars = {}
$box = SUPPORTED_OS[$os][:box]
# if $inventory is not set, try to use example
$inventory = "inventory/sample" if ! $inventory
@ -136,6 +145,8 @@ Vagrant.configure("2") do |config|
end
node.vm.provider :libvirt do |lv|
lv.nested = $libvirt_nested
lv.cpu_mode = "host-model"
lv.memory = $vm_memory
lv.cpus = $vm_cpus
lv.default_prefix = 'kubespray'
@ -176,19 +187,24 @@ Vagrant.configure("2") do |config|
# Disable swap for each vm
node.vm.provision "shell", inline: "swapoff -a"
# Disable firewalld on oraclelinux vms
if ["oraclelinux","oraclelinux8"].include? $os
node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld"
end
host_vars[vm_name] = {
"ip": ip,
"flannel_interface": "eth1",
"kube_network_plugin": $network_plugin,
"kube_network_plugin_multus": $multi_networking,
"download_run_once": "True",
"download_run_once": $download_run_once,
"download_localhost": "False",
"download_cache_dir": ENV['HOME'] + "/kubespray_cache",
# Make kubespray cache even when download_run_once is false
"download_force_cache": "True",
"download_force_cache": $download_force_cache,
# Keeping the cache on the nodes can improve provisioning speed while debugging kubespray
"download_keep_remote_cache": "False",
"docker_keepcache": "1",
"docker_rpm_keepcache": "1",
# These two settings will put kubectl and admin.config in $inventory/artifacts
"kubeconfig_localhost": "True",
"kubectl_localhost": "True",


@ -1,2 +1,2 @@
---
theme: jekyll-theme-slate


@ -11,7 +11,9 @@ host_key_checking=False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp
stdout_callback = skippy
fact_caching_timeout = 7200
stdout_callback = default
display_skipped_hosts = no
library = ./library
callback_whitelist = profile_tasks
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles

ansible_version.yml Normal file

@ -0,0 +1,15 @@
---
- hosts: localhost
gather_facts: false
become: no
vars:
minimal_ansible_version: 2.8.0
ansible_connection: local
tasks:
- name: "Check ansible version >={{ minimal_ansible_version }}"
assert:
msg: "Ansible must be {{ minimal_ansible_version }} or higher"
that:
- ansible_version.string is version(minimal_ansible_version, ">=")
tags:
- check
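The guard is imported at the top of cluster.yml (see below), but it can also be run standalone as a quick pre-flight check:

```ShellSession
$ ansible-playbook ansible_version.yml
```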


@ -1,44 +1,55 @@
---
- hosts: localhost
- name: Check ansible version
import_playbook: ansible_version.yml
- hosts: all
gather_facts: false
become: no
tags: always
tasks:
- name: "Check ansible version >=2.7.8"
assert:
msg: "Ansible must be v2.7.8 or higher"
that:
- ansible_version.string is version("2.7.8", ">=")
tags:
- check
vars:
ansible_connection: local
- name: "Set up proxy environment"
set_fact:
proxy_env:
http_proxy: "{{ http_proxy | default ('') }}"
HTTP_PROXY: "{{ http_proxy | default ('') }}"
https_proxy: "{{ https_proxy | default ('') }}"
HTTPS_PROXY: "{{ https_proxy | default ('') }}"
no_proxy: "{{ no_proxy | default ('') }}"
NO_PROXY: "{{ no_proxy | default ('') }}"
no_log: true
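The `proxy_env` fact above simply mirrors the optional `http_proxy`/`https_proxy`/`no_proxy` inventory variables; one illustrative way to supply them (values are placeholders) is via extra vars:

```ShellSession
$ ansible-playbook -i inventory/mycluster/hosts.yaml --become \
    -e http_proxy=http://proxy.example.com:3128 \
    -e no_proxy=.cluster.local,10.0.0.0/8 \
    cluster.yml
```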
- hosts: bastion[0]
gather_facts: False
roles:
- { role: kubespray-defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
- { role: kubespray-defaults }
- { role: bastion-ssh-config, tags: ["localhost", "bastion"] }
- hosts: k8s-cluster:etcd
strategy: linear
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
gather_facts: false
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: bootstrap-os, tags: bootstrap-os}
- name: Gather facts
tags: always
import_playbook: facts.yml
- hosts: k8s-cluster:etcd
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes/preinstall, tags: preinstall }
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
- { role: download, tags: download, when: "not skip_downloads" }
environment: "{{ proxy_env }}"
- hosts: etcd
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
@ -47,9 +58,10 @@
when: not etcd_kubeadm_enabled| default(false)
- hosts: k8s-cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- role: etcd
tags: etcd
vars:
@ -58,59 +70,68 @@
when: not etcd_kubeadm_enabled| default(false)
- hosts: k8s-cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes/node, tags: node }
environment: "{{ proxy_env }}"
- hosts: kube-master
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes/master, tags: master }
- { role: kubernetes/client, tags: client }
- { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
- hosts: k8s-cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes/kubeadm, tags: kubeadm}
- { role: network_plugin, tags: network }
- { role: kubernetes/node-label }
- { role: kubernetes/node-label, tags: node-label }
- hosts: calico-rr
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: network_plugin/calico/rr, tags: ['network', 'calico_rr']}
- { role: kubespray-defaults }
- { role: network_plugin/calico/rr, tags: ['network', 'calico_rr'] }
- hosts: kube-master[0]
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"]}
- { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"] }
- hosts: kube-master
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes-apps/external_cloud_controller, tags: external-cloud-controller }
- { role: kubernetes-apps/network_plugin, tags: network }
- { role: kubernetes-apps/policy_controller, tags: policy-controller }
- { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
- { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
- hosts: kube-master
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes-apps, tags: apps }
environment: "{{ proxy_env }}"
- hosts: k8s-cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubespray-defaults }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }


@ -15,8 +15,9 @@ Resource Group. It will not install Kubernetes itself, this has to be done in a
## Configuration through group_vars/all
You have to modify at least one variable in group_vars/all, which is the **cluster_name** variable. It must be globally
unique due to some restrictions in Azure. Most other variables should be self explanatory if you have some basic Kubernetes
You have to modify at least two variables in group_vars/all. The first is the **cluster_name** variable; it must be globally
unique due to some restrictions in Azure. The second is the **ssh_public_keys** variable; it must be set to your ssh public
key so you can access your Azure virtual machines. Most other variables should be self-explanatory if you have some basic Kubernetes
experience.
## Bastion host
@ -59,6 +60,7 @@ It will create the file ./inventory which can then be used with kubespray, e.g.:
```shell
$ cd kubespray-root-dir
$ ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all.yml" cluster.yml
$ sudo pip3 install -r requirements.txt
$ ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all/all.yml" cluster.yml
```


@ -9,18 +9,11 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
exit 1
fi
if az &>/dev/null; then
echo "azure cli 2.0 found, using it instead of 1.0"
./apply-rg_2.sh "$AZURE_RESOURCE_GROUP"
elif azure &>/dev/null; then
ansible-playbook generate-templates.yml
azure group deployment create -f ./.generated/network.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/storage.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/availability-sets.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/bastion.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/masters.json -g $AZURE_RESOURCE_GROUP
azure group deployment create -f ./.generated/minions.json -g $AZURE_RESOURCE_GROUP
else
echo "Azure cli not found"
fi
ansible-playbook generate-templates.yml
az deployment group create --template-file ./.generated/network.json -g $AZURE_RESOURCE_GROUP
az deployment group create --template-file ./.generated/storage.json -g $AZURE_RESOURCE_GROUP
az deployment group create --template-file ./.generated/availability-sets.json -g $AZURE_RESOURCE_GROUP
az deployment group create --template-file ./.generated/bastion.json -g $AZURE_RESOURCE_GROUP
az deployment group create --template-file ./.generated/masters.json -g $AZURE_RESOURCE_GROUP
az deployment group create --template-file ./.generated/minions.json -g $AZURE_RESOURCE_GROUP
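The consolidated script now assumes the Azure CLI 2.x `az` binary throughout; a usage sketch (the resource group name is a placeholder):

```ShellSession
$ cd contrib/azurerm
$ ./apply-rg.sh my-kubespray-rg
```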


@ -1,19 +0,0 @@
#!/usr/bin/env bash
set -e
AZURE_RESOURCE_GROUP="$1"
if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
echo "AZURE_RESOURCE_GROUP is missing"
exit 1
fi
ansible-playbook generate-templates.yml
az group deployment create --template-file ./.generated/network.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/storage.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/availability-sets.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/bastion.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/masters.json -g $AZURE_RESOURCE_GROUP
az group deployment create --template-file ./.generated/minions.json -g $AZURE_RESOURCE_GROUP


@ -9,10 +9,6 @@ if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
exit 1
fi
if az &>/dev/null; then
echo "azure cli 2.0 found, using it instead of 1.0"
./clear-rg_2.sh "$AZURE_RESOURCE_GROUP"
else
ansible-playbook generate-templates.yml
azure group deployment create -g "$AZURE_RESOURCE_GROUP" -f ./.generated/clear-rg.json -m Complete
fi
ansible-playbook generate-templates.yml
az group deployment create -g "$AZURE_RESOURCE_GROUP" --template-file ./.generated/clear-rg.json --mode Complete


@ -1,14 +0,0 @@
#!/usr/bin/env bash
set -e
AZURE_RESOURCE_GROUP="$1"
if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
echo "AZURE_RESOURCE_GROUP is missing"
exit 1
fi
ansible-playbook generate-templates.yml
az group deployment create -g "$AZURE_RESOURCE_GROUP" --template-file ./.generated/clear-rg.json --mode Complete


@ -1,6 +1,6 @@
---
- name: Query Azure VMs
- name: Query Azure VMs # noqa 301
command: azure vm list-ip-address --json {{ azure_resource_group }}
register: vm_list_cmd


@ -1,14 +1,14 @@
---
- name: Query Azure VMs IPs
- name: Query Azure VMs IPs # noqa 301
command: az vm list-ip-addresses -o json --resource-group {{ azure_resource_group }}
register: vm_ip_list_cmd
- name: Query Azure VMs Roles
- name: Query Azure VMs Roles # noqa 301
command: az vm list -o json --resource-group {{ azure_resource_group }}
register: vm_list_cmd
- name: Query Azure Load Balancer Public IP
- name: Query Azure Load Balancer Public IP # noqa 301
command: az network public-ip show -o json -g {{ azure_resource_group }} -n kubernetes-api-pubip
register: lb_pubip_cmd


@ -69,7 +69,7 @@
# Running systemd-machine-id-setup doesn't create a unique id for each node container on Debian,
# handle manually
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave)
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave) # noqa 301
raw: |
echo {{ item | hash('sha1') }} > /etc/machine-id.new
mv -b /etc/machine-id.new /etc/machine-id


@ -41,6 +41,7 @@ from ruamel.yaml import YAML
import os
import re
import subprocess
import sys
ROLES = ['all', 'kube-master', 'kube-node', 'etcd', 'k8s-cluster',
@ -69,6 +70,7 @@ MASSIVE_SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 200))
DEBUG = get_var_as_bool("DEBUG", True)
HOST_PREFIX = os.environ.get("HOST_PREFIX", "node")
USE_REAL_HOSTNAME = get_var_as_bool("USE_REAL_HOSTNAME", False)
# Configurable as shell vars end
@ -81,7 +83,7 @@ class KubesprayInventory(object):
if self.config_file:
try:
self.hosts_file = open(config_file, 'r')
self.yaml_config = yaml.load(self.hosts_file)
self.yaml_config = yaml.load_all(self.hosts_file)
except OSError:
pass
@ -167,6 +169,7 @@ class KubesprayInventory(object):
# FIXME(mattymo): Fix condition where delete then add reuses highest id
next_host_id = highest_host_id + 1
next_host = ""
all_hosts = existing_hosts.copy()
for host in changed_hosts:
@ -191,8 +194,14 @@ class KubesprayInventory(object):
self.debug("Skipping existing host {0}.".format(ip))
continue
next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
next_host_id += 1
if USE_REAL_HOSTNAME:
cmd = ("ssh -oStrictHostKeyChecking=no "
+ access_ip + " 'hostname -s'")
next_host = subprocess.check_output(cmd, shell=True)
next_host = next_host.strip().decode('ascii')
else:
next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
next_host_id += 1
all_hosts[next_host] = {'ansible_host': access_ip,
'ip': ip,
'access_ip': access_ip}
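With this change, setting `USE_REAL_HOSTNAME=true` makes the builder SSH to each new host and name it after `hostname -s` instead of the `node1`, `node2`, ... scheme; an illustrative invocation (IPs are placeholders):

```ShellSession
$ declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
$ CONFIG_FILE=inventory/mycluster/hosts.yaml USE_REAL_HOSTNAME=true \
    python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```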
@ -229,7 +238,7 @@ class KubesprayInventory(object):
return [ip_address(ip).exploded for ip in range(start, end + 1)]
for host in hosts:
if '-' in host and not host.startswith('-'):
if '-' in host and not (host.startswith('-') or host[0].isalpha()):
start, end = host.strip().split('-')
try:
reworked_hosts.extend(ips(start, end))


@ -51,7 +51,7 @@ class TestInventory(unittest.TestCase):
groups = ['group1', 'group2']
self.inv.ensure_required_groups(groups)
for group in groups:
self.assertTrue(group in self.inv.yaml_config['all']['children'])
self.assertIn(group, self.inv.yaml_config['all']['children'])
def test_get_host_id(self):
hostnames = ['node99', 'no99de01', '01node01', 'node1.domain',
@ -209,8 +209,8 @@ class TestInventory(unittest.TestCase):
('doesnotbelong2', {'whateveropts=ilike'})])
self.inv.yaml_config['all']['hosts'] = existing_hosts
self.inv.purge_invalid_hosts(proper_hostnames)
self.assertTrue(
bad_host not in self.inv.yaml_config['all']['hosts'].keys())
self.assertNotIn(
bad_host, self.inv.yaml_config['all']['hosts'].keys())
def test_add_host_to_group(self):
group = 'etcd'
@ -227,8 +227,8 @@ class TestInventory(unittest.TestCase):
host = 'node1'
self.inv.set_kube_master([host])
self.assertTrue(
host in self.inv.yaml_config['all']['children'][group]['hosts'])
self.assertIn(
host, self.inv.yaml_config['all']['children'][group]['hosts'])
def test_set_all(self):
hosts = OrderedDict([
@ -246,8 +246,8 @@ class TestInventory(unittest.TestCase):
self.inv.set_k8s_cluster()
for host in expected_hosts:
self.assertTrue(
host in
self.assertIn(
host,
self.inv.yaml_config['all']['children'][group]['children'])
def test_set_kube_node(self):
@ -255,16 +255,16 @@ class TestInventory(unittest.TestCase):
host = 'node1'
self.inv.set_kube_node([host])
self.assertTrue(
host in self.inv.yaml_config['all']['children'][group]['hosts'])
self.assertIn(
host, self.inv.yaml_config['all']['children'][group]['hosts'])
def test_set_etcd(self):
group = 'etcd'
host = 'node1'
self.inv.set_etcd([host])
self.assertTrue(
host in self.inv.yaml_config['all']['children'][group]['hosts'])
self.assertIn(
host, self.inv.yaml_config['all']['children'][group]['hosts'])
def test_scale_scenario_one(self):
num_nodes = 50


@ -1,12 +0,0 @@
# Deploy MetalLB into Kubespray/Kubernetes
```
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type "LoadBalancer" in clusters that don't run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
```
This playbook aims to automate [this](https://metallb.universe.tf/concepts/layer2/). It deploys MetalLB into kubernetes and sets up a layer 2 loadbalancer.
## Install
```
Defaults can be found in contrib/metallb/roles/provision/defaults/main.yml. You can override the defaults by copying the contents of this file to somewhere in inventory/mycluster/group_vars such as inventory/mycluster/groups_vars/k8s-cluster/addons.yml and making any adjustments as required.
ansible-playbook --ask-become -i inventory/sample/hosts.ini contrib/metallb/metallb.yml
```


@ -1 +0,0 @@
../../library


@ -1,6 +0,0 @@
---
- hosts: kube-master[0]
tags:
- "provision"
roles:
- { role: provision }


@ -1,14 +0,0 @@
---
metallb:
ip_range: "10.5.0.50-10.5.0.99"
protocol: "layer2"
# additional_address_pools:
# kube_service_pool:
# ip_range: "10.5.1.50-10.5.1.99"
# protocol: "layer2"
# auto_assign: false
limits:
cpu: "100m"
memory: "100Mi"
port: "7472"
version: v0.7.3


@ -1,23 +0,0 @@
---
- name: "Kubernetes Apps | Check cluster settings for MetalLB"
fail:
msg: "MetalLB require kube_proxy_strict_arp = true, see https://github.com/danderson/metallb/issues/153#issuecomment-518651132"
when:
- "kube_proxy_mode == 'ipvs' and not kube_proxy_strict_arp"
- name: "Kubernetes Apps | Lay Down MetalLB"
become: true
template: { src: "{{ item }}.j2", dest: "{{ kube_config_dir }}/{{ item }}" }
with_items: ["metallb.yml", "metallb-config.yml"]
register: "rendering"
when:
- "inventory_hostname == groups['kube-master'][0]"
- name: "Kubernetes Apps | Install and configure MetalLB"
kube:
name: "MetalLB"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/{{ item.item }}"
state: "{{ item.changed | ternary('latest','present') }}"
become: true
with_items: "{{ rendering.results }}"
when:
- "inventory_hostname == groups['kube-master'][0]"


@ -1,21 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: loadbalanced
protocol: {{ metallb.protocol }}
addresses:
- {{ metallb.ip_range }}
{% if metallb.additional_address_pools is defined %}{% for pool in metallb.additional_address_pools %}
- name: {{ pool }}
protocol: {{ metallb.additional_address_pools[pool].protocol }}
addresses:
- {{ metallb.additional_address_pools[pool].ip_range }}
auto-assign: {{ metallb.additional_address_pools[pool].auto_assign }}
{% endfor %}
{% endif %}


@ -1,221 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: metallb-system
labels:
app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: metallb-system
name: controller
labels:
app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: metallb-system
name: speaker
labels:
app: metallb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metallb-system:controller
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["services/status"]
verbs: ["update"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metallb-system:speaker
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["services", "endpoints", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: metallb-system
name: config-watcher
labels:
app: metallb
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create"]
---
## Role bindings
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metallb-system:controller
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: controller
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: metallb-system:controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metallb-system:speaker
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: speaker
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: metallb-system:speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: metallb-system
name: config-watcher
labels:
app: metallb
subjects:
- kind: ServiceAccount
name: controller
- kind: ServiceAccount
name: speaker
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: config-watcher
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
namespace: metallb-system
name: speaker
labels:
app: metallb
component: speaker
spec:
selector:
matchLabels:
app: metallb
component: speaker
template:
metadata:
labels:
app: metallb
component: speaker
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ metallb.port }}"
spec:
serviceAccountName: speaker
terminationGracePeriodSeconds: 0
hostNetwork: true
containers:
- name: speaker
image: metallb/speaker:{{ metallb.version }}
imagePullPolicy: IfNotPresent
args:
- --port={{ metallb.port }}
- --config=config
env:
- name: METALLB_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
ports:
- name: monitoring
containerPort: {{ metallb.port }}
resources:
limits:
cpu: {{ metallb.limits.cpu }}
memory: {{ metallb.limits.memory }}
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- all
add:
- net_raw
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: metallb-system
name: controller
labels:
app: metallb
component: controller
spec:
revisionHistoryLimit: 3
selector:
matchLabels:
app: metallb
component: controller
template:
metadata:
labels:
app: metallb
component: controller
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ metallb.port }}"
spec:
serviceAccountName: controller
terminationGracePeriodSeconds: 0
securityContext:
runAsNonRoot: true
runAsUser: 65534 # nobody
containers:
- name: controller
image: metallb/controller:{{ metallb.version }}
imagePullPolicy: IfNotPresent
args:
- --port={{ metallb.port }}
- --config=config
ports:
- name: monitoring
containerPort: {{ metallb.port }}
resources:
limits:
cpu: {{ metallb.limits.cpu }}
memory: {{ metallb.limits.memory }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- all
readOnlyRootFilesystem: true
---

View File

@ -1,5 +1,5 @@
---
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard

View File

@ -7,7 +7,7 @@
register: glusterfs_ppa_added
when: glusterfs_ppa_use
- name: Ensure GlusterFS client will reinstall if the PPA was just added.
- name: Ensure GlusterFS client will reinstall if the PPA was just added. # noqa 503
apt:
name: "{{ item }}"
state: absent
@ -18,7 +18,7 @@
- name: Ensure GlusterFS client is installed.
apt:
name: "{{ item }}"
state: installed
state: present
default_release: "{{ glusterfs_default_release }}"
with_items:
- glusterfs-client

View File

@ -7,7 +7,7 @@
register: glusterfs_ppa_added
when: glusterfs_ppa_use
- name: Ensure GlusterFS will reinstall if the PPA was just added.
- name: Ensure GlusterFS will reinstall if the PPA was just added. # noqa 503
apt:
name: "{{ item }}"
state: absent
@ -19,7 +19,7 @@
- name: Ensure GlusterFS is installed.
apt:
name: "{{ item }}"
state: installed
state: present
default_release: "{{ glusterfs_default_release }}"
with_items:
- glusterfs-server

View File

@ -8,7 +8,7 @@
{% for host in groups['gfs-cluster'] %}
{
"addresses": [
{
{
"ip": "{{hostvars[host]['ip']|default(hostvars[host].ansible_default_ipv4['address'])}}"
}
],

View File

@ -1,7 +1,7 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: glusterfs
name: glusterfs
spec:
capacity:
storage: "{{ hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb }}Gi"

View File

@ -6,7 +6,7 @@
- name: "Delete bootstrap Heketi."
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"deploy-heketi\""
when: "heketi_resources.stdout|from_json|json_query('items[*]')|length > 0"
- name: "Ensure there is nothing left over."
- name: "Ensure there is nothing left over." # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"deploy-heketi\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"

View File

@ -13,7 +13,7 @@
- name: "Copy topology configuration into container."
changed_when: false
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology."
- name: "Load heketi topology." # noqa 503
when: "render.changed"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
register: "load_heketi"

View File

@ -18,7 +18,7 @@
- name: "Provision database volume."
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} setup-openshift-heketi-storage"
when: "heketi_database_volume_exists is undefined"
- name: "Copy configuration from pod."
- name: "Copy configuration from pod." # noqa 301
become: true
command: "{{ bin_dir }}/kubectl cp {{ initial_heketi_pod_name }}:/heketi-storage.json {{ kube_config_dir }}/heketi-storage-bootstrap.json"
- name: "Get heketi volume ids."

View File

@ -10,10 +10,10 @@
template:
src: "topology.json.j2"
dest: "{{ kube_config_dir }}/topology.json"
- name: "Copy topology configuration into container."
- name: "Copy topology configuration into container." # noqa 503
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology."
- name: "Load heketi topology." # noqa 503
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
- name: "Get heketi topology."

View File

@ -12,6 +12,11 @@
}
},
"spec": {
"selector": {
"matchLabels": {
"glusterfs-node": "daemonset"
}
},
"template": {
"metadata": {
"name": "glusterfs",

View File

@ -42,6 +42,11 @@
}
},
"spec": {
"selector": {
"matchLabels": {
"name": "deploy-heketi"
}
},
"replicas": 1,
"template": {
"metadata": {

View File

@ -55,6 +55,11 @@
}
},
"spec": {
"selector": {
"matchLabels": {
"name": "heketi"
}
},
"replicas": 1,
"template": {
"metadata": {

View File

@ -22,7 +22,7 @@
ignore_errors: true
changed_when: false
- name: "Remove volume groups."
- name: "Remove volume groups." # noqa 301
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true
@ -30,7 +30,7 @@
with_items: "{{ volume_groups.stdout_lines }}"
loop_control: { loop_var: "volume_group" }
- name: "Remove physical volume from cluster disks."
- name: "Remove physical volume from cluster disks." # noqa 301
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true

View File

@ -1,43 +1,43 @@
---
- name: "Remove storage class."
- name: "Remove storage class." # noqa 301
command: "{{ bin_dir }}/kubectl delete storageclass gluster"
ignore_errors: true
- name: "Tear down heketi."
- name: "Tear down heketi." # noqa 301
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\""
ignore_errors: true
- name: "Tear down heketi."
- name: "Tear down heketi." # noqa 301
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\""
ignore_errors: true
- name: "Tear down bootstrap."
include_tasks: "../provision/tasks/bootstrap/tear-down.yml"
- name: "Ensure there is nothing left over."
include_tasks: "../../provision/tasks/bootstrap/tear-down.yml"
- name: "Ensure there is nothing left over." # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: "Ensure there is nothing left over."
- name: "Ensure there is nothing left over." # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: "Tear down glusterfs."
- name: "Tear down glusterfs." # noqa 301
command: "{{ bin_dir }}/kubectl delete daemonset.extensions/glusterfs"
ignore_errors: true
- name: "Remove heketi storage service."
- name: "Remove heketi storage service." # noqa 301
command: "{{ bin_dir }}/kubectl delete service heketi-storage-endpoints"
ignore_errors: true
- name: "Remove heketi gluster role binding"
- name: "Remove heketi gluster role binding" # noqa 301
command: "{{ bin_dir }}/kubectl delete clusterrolebinding heketi-gluster-admin"
ignore_errors: true
- name: "Remove heketi config secret"
- name: "Remove heketi config secret" # noqa 301
command: "{{ bin_dir }}/kubectl delete secret heketi-config-secret"
ignore_errors: true
- name: "Remove heketi db backup"
- name: "Remove heketi db backup" # noqa 301
command: "{{ bin_dir }}/kubectl delete secret heketi-db-backup"
ignore_errors: true
- name: "Remove heketi service account"
- name: "Remove heketi service account" # noqa 301
command: "{{ bin_dir }}/kubectl delete serviceaccount heketi-service-account"
ignore_errors: true
- name: "Get secrets"

View File

@ -22,7 +22,7 @@ export TF_VAR_AWS_SECRET_ACCESS_KEY ="xxx"
export TF_VAR_AWS_SSH_KEY_NAME="yyy"
export TF_VAR_AWS_DEFAULT_REGION="zzz"
```
- Update `contrib/terraform/aws/terraform.tfvars` with your data. By default, the Terraform scripts use CoreOS as base image. If you want to change this behaviour, see note "Using other distrib than CoreOs" below.
- Update `contrib/terraform/aws/terraform.tfvars` with your data. By default, the Terraform scripts use Ubuntu 18.04 LTS (Bionic) as the base image. If you want to change this behaviour, see the note "Using another distribution than Ubuntu" below.
- Create an AWS EC2 SSH Key
- Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply` depending if you exported your AWS credentials
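For reference, a minimal `terraform.tfvars` sketch. The variable names come from the modules wired up in `main.tf` (shown further down in this diff); the sizes, counts, and CIDR blocks are illustrative assumptions, not recommended defaults:
```
aws_cluster_name = "devtest"

aws_vpc_cidr_block       = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20", "10.250.208.0/20"]
aws_cidr_subnets_public  = ["10.250.224.0/20", "10.250.240.0/20"]

aws_bastion_size = "t2.medium"

aws_kube_master_num  = 3
aws_kube_master_size = "t2.medium"

aws_etcd_num  = 3
aws_etcd_size = "t2.medium"

aws_kube_worker_num  = 4
aws_kube_worker_size = "t2.medium"

aws_elb_api_port    = 6443
k8s_secure_api_port = 6443

default_tags = {
  Env = "devtest"
}
```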
@ -41,12 +41,12 @@ ssh -F ./ssh-bastion.conf user@$ip
- Once the infrastructure is created, you can run the kubespray playbooks and supply inventory/hosts with the `-i` flag.
Example (this one assumes you are using CoreOS)
Example (this one assumes you are using Ubuntu)
```commandline
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=core -b --become-user=root --flush-cache
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=ubuntu -b --become-user=root --flush-cache
```
***Using other distrib than CoreOs***
If you want to use another distribution than CoreOS, you can modify the search filters of the 'data "aws_ami" "distro"' in variables.tf.
***Using another distribution than Ubuntu***
If you want to use a distribution other than Ubuntu 18.04 LTS (Bionic), you can modify the search filters of the 'data "aws_ami" "distro"' block in variables.tf.
For example, to use:
- Debian Jessie, replace 'data "aws_ami" "distro"' in variables.tf with
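The replacement block itself is truncated in this diff. As a sketch, it would follow the same structure as the Ubuntu block visible in the variables.tf diff further down; the Debian name pattern and owner account ID below are assumptions to verify before use:
```
data "aws_ami" "distro" {
  most_recent = true

  filter {
    name = "name"
    # Assumed name pattern for Debian's official Jessie AMIs; verify for your region
    values = ["debian-jessie-amd64-hvm-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  # Assumed owner account for official Debian images; double-check before use
  owners = ["379101102735"]
}
```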

View File

@ -3,9 +3,9 @@ terraform {
}
provider "aws" {
access_key = "${var.AWS_ACCESS_KEY_ID}"
secret_key = "${var.AWS_SECRET_ACCESS_KEY}"
region = "${var.AWS_DEFAULT_REGION}"
access_key = var.AWS_ACCESS_KEY_ID
secret_key = var.AWS_SECRET_ACCESS_KEY
region = var.AWS_DEFAULT_REGION
}
data "aws_availability_zones" "available" {}
@ -18,30 +18,30 @@ data "aws_availability_zones" "available" {}
module "aws-vpc" {
source = "./modules/vpc"
aws_cluster_name = "${var.aws_cluster_name}"
aws_vpc_cidr_block = "${var.aws_vpc_cidr_block}"
aws_avail_zones = "${slice(data.aws_availability_zones.available.names, 0, 2)}"
aws_cidr_subnets_private = "${var.aws_cidr_subnets_private}"
aws_cidr_subnets_public = "${var.aws_cidr_subnets_public}"
default_tags = "${var.default_tags}"
aws_cluster_name = var.aws_cluster_name
aws_vpc_cidr_block = var.aws_vpc_cidr_block
aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, 2)
aws_cidr_subnets_private = var.aws_cidr_subnets_private
aws_cidr_subnets_public = var.aws_cidr_subnets_public
default_tags = var.default_tags
}
module "aws-elb" {
source = "./modules/elb"
aws_cluster_name = "${var.aws_cluster_name}"
aws_vpc_id = "${module.aws-vpc.aws_vpc_id}"
aws_avail_zones = "${slice(data.aws_availability_zones.available.names, 0, 2)}"
aws_subnet_ids_public = "${module.aws-vpc.aws_subnet_ids_public}"
aws_elb_api_port = "${var.aws_elb_api_port}"
k8s_secure_api_port = "${var.k8s_secure_api_port}"
default_tags = "${var.default_tags}"
aws_cluster_name = var.aws_cluster_name
aws_vpc_id = module.aws-vpc.aws_vpc_id
aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, 2)
aws_subnet_ids_public = module.aws-vpc.aws_subnet_ids_public
aws_elb_api_port = var.aws_elb_api_port
k8s_secure_api_port = var.k8s_secure_api_port
default_tags = var.default_tags
}
module "aws-iam" {
source = "./modules/iam"
aws_cluster_name = "${var.aws_cluster_name}"
aws_cluster_name = var.aws_cluster_name
}
/*
@ -50,22 +50,22 @@ module "aws-iam" {
*/
resource "aws_instance" "bastion-server" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_bastion_size}"
count = "${length(var.aws_cidr_subnets_public)}"
ami = data.aws_ami.distro.id
instance_type = var.aws_bastion_size
count = length(var.aws_cidr_subnets_public)
associate_public_ip_address = true
availability_zone = "${element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_public, count.index)}"
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_public, count.index)
vpc_security_group_ids = "${module.aws-vpc.aws_security_group}"
vpc_security_group_ids = module.aws-vpc.aws_security_group
key_name = "${var.AWS_SSH_KEY_NAME}"
key_name = var.AWS_SSH_KEY_NAME
tags = "${merge(var.default_tags, map(
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-bastion-${count.index}",
"Cluster", "${var.aws_cluster_name}",
"Role", "bastion-${var.aws_cluster_name}-${count.index}"
))}"
))
}
/*
@ -74,71 +74,71 @@ resource "aws_instance" "bastion-server" {
*/
resource "aws_instance" "k8s-master" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_kube_master_size}"
ami = data.aws_ami.distro.id
instance_type = var.aws_kube_master_size
count = "${var.aws_kube_master_num}"
count = var.aws_kube_master_num
availability_zone = "${element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private, count.index)}"
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = "${module.aws-vpc.aws_security_group}"
vpc_security_group_ids = module.aws-vpc.aws_security_group
iam_instance_profile = "${module.aws-iam.kube-master-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
iam_instance_profile = module.aws-iam.kube-master-profile
key_name = var.AWS_SSH_KEY_NAME
tags = "${merge(var.default_tags, map(
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-master${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "master"
))}"
))
}
resource "aws_elb_attachment" "attach_master_nodes" {
count = "${var.aws_kube_master_num}"
elb = "${module.aws-elb.aws_elb_api_id}"
instance = "${element(aws_instance.k8s-master.*.id, count.index)}"
count = var.aws_kube_master_num
elb = module.aws-elb.aws_elb_api_id
instance = element(aws_instance.k8s-master.*.id, count.index)
}
resource "aws_instance" "k8s-etcd" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_etcd_size}"
ami = data.aws_ami.distro.id
instance_type = var.aws_etcd_size
count = "${var.aws_etcd_num}"
count = var.aws_etcd_num
availability_zone = "${element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private, count.index)}"
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = "${module.aws-vpc.aws_security_group}"
vpc_security_group_ids = module.aws-vpc.aws_security_group
key_name = "${var.AWS_SSH_KEY_NAME}"
key_name = var.AWS_SSH_KEY_NAME
tags = "${merge(var.default_tags, map(
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-etcd${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "etcd"
))}"
))
}
resource "aws_instance" "k8s-worker" {
ami = "${data.aws_ami.distro.id}"
instance_type = "${var.aws_kube_worker_size}"
ami = data.aws_ami.distro.id
instance_type = var.aws_kube_worker_size
count = "${var.aws_kube_worker_num}"
count = var.aws_kube_worker_num
availability_zone = "${element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)}"
subnet_id = "${element(module.aws-vpc.aws_subnet_ids_private, count.index)}"
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, 2), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = "${module.aws-vpc.aws_security_group}"
vpc_security_group_ids = module.aws-vpc.aws_security_group
iam_instance_profile = "${module.aws-iam.kube-worker-profile}"
key_name = "${var.AWS_SSH_KEY_NAME}"
iam_instance_profile = module.aws-iam.kube-worker-profile
key_name = var.AWS_SSH_KEY_NAME
tags = "${merge(var.default_tags, map(
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-worker${count.index}",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member",
"Role", "worker"
))}"
))
}
/*
@ -146,16 +146,16 @@ resource "aws_instance" "k8s-worker" {
*
*/
data "template_file" "inventory" {
template = "${file("${path.module}/templates/inventory.tpl")}"
template = file("${path.module}/templates/inventory.tpl")
vars = {
public_ip_address_bastion = "${join("\n", formatlist("bastion ansible_host=%s", aws_instance.bastion-server.*.public_ip))}"
connection_strings_master = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
connection_strings_node = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.tags.Name, aws_instance.k8s-worker.*.private_ip))}"
connection_strings_etcd = "${join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.tags.Name, aws_instance.k8s-etcd.*.private_ip))}"
list_master = "${join("\n", aws_instance.k8s-master.*.tags.Name)}"
list_node = "${join("\n", aws_instance.k8s-worker.*.tags.Name)}"
list_etcd = "${join("\n", aws_instance.k8s-etcd.*.tags.Name)}"
public_ip_address_bastion = join("\n", formatlist("bastion ansible_host=%s", aws_instance.bastion-server.*.public_ip))
connection_strings_master = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-master.*.private_dns, aws_instance.k8s-master.*.private_ip))
connection_strings_node = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.private_dns, aws_instance.k8s-worker.*.private_ip))
connection_strings_etcd = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.private_dns, aws_instance.k8s-etcd.*.private_ip))
list_master = join("\n", aws_instance.k8s-master.*.private_dns)
list_node = join("\n", aws_instance.k8s-worker.*.private_dns)
list_etcd = join("\n", aws_instance.k8s-etcd.*.private_dns)
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
}
}
@ -166,6 +166,6 @@ resource "null_resource" "inventories" {
}
triggers = {
template = "${data.template_file.inventory.rendered}"
template = data.template_file.inventory.rendered
}
}

View File

@ -1,19 +1,19 @@
resource "aws_security_group" "aws-elb" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
vpc_id = "${var.aws_vpc_id}"
vpc_id = var.aws_vpc_id
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
))
}
resource "aws_security_group_rule" "aws-allow-api-access" {
type = "ingress"
from_port = "${var.aws_elb_api_port}"
to_port = "${var.k8s_secure_api_port}"
from_port = var.aws_elb_api_port
to_port = var.k8s_secure_api_port
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.aws-elb.id}"
security_group_id = aws_security_group.aws-elb.id
}
resource "aws_security_group_rule" "aws-allow-api-egress" {
@ -22,19 +22,19 @@ resource "aws_security_group_rule" "aws-allow-api-egress" {
to_port = 65535
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.aws-elb.id}"
security_group_id = aws_security_group.aws-elb.id
}
# Create a new AWS ELB for K8S API
resource "aws_elb" "aws-elb-api" {
name = "kubernetes-elb-${var.aws_cluster_name}"
subnets = var.aws_subnet_ids_public
security_groups = ["${aws_security_group.aws-elb.id}"]
security_groups = [aws_security_group.aws-elb.id]
listener {
instance_port = "${var.k8s_secure_api_port}"
instance_port = var.k8s_secure_api_port
instance_protocol = "tcp"
lb_port = "${var.aws_elb_api_port}"
lb_port = var.aws_elb_api_port
lb_protocol = "tcp"
}
@ -51,7 +51,7 @@ resource "aws_elb" "aws-elb-api" {
connection_draining = true
connection_draining_timeout = 400
tags = "${merge(var.default_tags, map(
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-elb-api"
))}"
))
}

View File

@ -1,7 +1,7 @@
output "aws_elb_api_id" {
value = "${aws_elb.aws-elb-api.id}"
value = aws_elb.aws-elb-api.id
}
output "aws_elb_api_fqdn" {
value = "${aws_elb.aws-elb-api.dns_name}"
value = aws_elb.aws-elb-api.dns_name
}

View File

@ -42,7 +42,7 @@ EOF
resource "aws_iam_role_policy" "kube-master" {
name = "kubernetes-${var.aws_cluster_name}-master"
role = "${aws_iam_role.kube-master.id}"
role = aws_iam_role.kube-master.id
policy = <<EOF
{
@ -77,7 +77,7 @@ EOF
resource "aws_iam_role_policy" "kube-worker" {
name = "kubernetes-${var.aws_cluster_name}-node"
role = "${aws_iam_role.kube-worker.id}"
role = aws_iam_role.kube-worker.id
policy = <<EOF
{
@ -132,10 +132,10 @@ EOF
resource "aws_iam_instance_profile" "kube-master" {
name = "kube_${var.aws_cluster_name}_master_profile"
role = "${aws_iam_role.kube-master.name}"
role = aws_iam_role.kube-master.name
}
resource "aws_iam_instance_profile" "kube-worker" {
name = "kube_${var.aws_cluster_name}_node_profile"
role = "${aws_iam_role.kube-worker.name}"
role = aws_iam_role.kube-worker.name
}

View File

@ -1,7 +1,7 @@
output "kube-master-profile" {
value = "${aws_iam_instance_profile.kube-master.name }"
value = aws_iam_instance_profile.kube-master.name
}
output "kube-worker-profile" {
value = "${aws_iam_instance_profile.kube-worker.name }"
value = aws_iam_instance_profile.kube-worker.name
}

View File

@ -1,55 +1,55 @@
resource "aws_vpc" "cluster-vpc" {
cidr_block = "${var.aws_vpc_cidr_block}"
cidr_block = var.aws_vpc_cidr_block
#DNS Related Entries
enable_dns_support = true
enable_dns_hostnames = true
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-vpc"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-vpc"
))
}
resource "aws_eip" "cluster-nat-eip" {
count = "${length(var.aws_cidr_subnets_public)}"
count = length(var.aws_cidr_subnets_public)
vpc = true
}
resource "aws_internet_gateway" "cluster-vpc-internetgw" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
vpc_id = aws_vpc.cluster-vpc.id
tags = "${merge(var.default_tags, map(
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-internetgw"
))}"
))
}
resource "aws_subnet" "cluster-vpc-subnets-public" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
count = "${length(var.aws_avail_zones)}"
availability_zone = "${element(var.aws_avail_zones, count.index)}"
cidr_block = "${element(var.aws_cidr_subnets_public, count.index)}"
vpc_id = aws_vpc.cluster-vpc.id
count = length(var.aws_avail_zones)
availability_zone = element(var.aws_avail_zones, count.index)
cidr_block = element(var.aws_cidr_subnets_public, count.index)
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public",
"kubernetes.io/cluster/${var.aws_cluster_name}", "member"
))
}
resource "aws_nat_gateway" "cluster-nat-gateway" {
count = "${length(var.aws_cidr_subnets_public)}"
allocation_id = "${element(aws_eip.cluster-nat-eip.*.id, count.index)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)}"
count = length(var.aws_cidr_subnets_public)
allocation_id = element(aws_eip.cluster-nat-eip.*.id, count.index)
subnet_id = element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)
}
resource "aws_subnet" "cluster-vpc-subnets-private" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
count = "${length(var.aws_avail_zones)}"
availability_zone = "${element(var.aws_avail_zones, count.index)}"
cidr_block = "${element(var.aws_cidr_subnets_private, count.index)}"
vpc_id = aws_vpc.cluster-vpc.id
count = length(var.aws_avail_zones)
availability_zone = element(var.aws_avail_zones, count.index)
cidr_block = element(var.aws_cidr_subnets_private, count.index)
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
))
}
#Routing in VPC
@ -57,53 +57,53 @@ resource "aws_subnet" "cluster-vpc-subnets-private" {
#TODO: Do we need two routing tables for each subnet for redundancy or is one enough?
resource "aws_route_table" "kubernetes-public" {
vpc_id = "${aws_vpc.cluster-vpc.id}"
vpc_id = aws_vpc.cluster-vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.cluster-vpc-internetgw.id}"
gateway_id = aws_internet_gateway.cluster-vpc-internetgw.id
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-routetable-public"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-routetable-public"
))
}
resource "aws_route_table" "kubernetes-private" {
count = "${length(var.aws_cidr_subnets_private)}"
vpc_id = "${aws_vpc.cluster-vpc.id}"
count = length(var.aws_cidr_subnets_private)
vpc_id = aws_vpc.cluster-vpc.id
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = "${element(aws_nat_gateway.cluster-nat-gateway.*.id, count.index)}"
nat_gateway_id = element(aws_nat_gateway.cluster-nat-gateway.*.id, count.index)
}
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
))
}
resource "aws_route_table_association" "kubernetes-public" {
count = "${length(var.aws_cidr_subnets_public)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-public.*.id,count.index)}"
route_table_id = "${aws_route_table.kubernetes-public.id}"
count = length(var.aws_cidr_subnets_public)
subnet_id = element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)
route_table_id = aws_route_table.kubernetes-public.id
}
resource "aws_route_table_association" "kubernetes-private" {
count = "${length(var.aws_cidr_subnets_private)}"
subnet_id = "${element(aws_subnet.cluster-vpc-subnets-private.*.id,count.index)}"
route_table_id = "${element(aws_route_table.kubernetes-private.*.id,count.index)}"
count = length(var.aws_cidr_subnets_private)
subnet_id = element(aws_subnet.cluster-vpc-subnets-private.*.id, count.index)
route_table_id = element(aws_route_table.kubernetes-private.*.id, count.index)
}
#Kubernetes Security Groups
resource "aws_security_group" "kubernetes" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup"
vpc_id = "${aws_vpc.cluster-vpc.id}"
vpc_id = aws_vpc.cluster-vpc.id
tags = "${merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-securitygroup"
))}"
tags = merge(var.default_tags, map(
"Name", "kubernetes-${var.aws_cluster_name}-securitygroup"
))
}
resource "aws_security_group_rule" "allow-all-ingress" {
@ -111,8 +111,8 @@ resource "aws_security_group_rule" "allow-all-ingress" {
from_port = 0
to_port = 65535
protocol = "-1"
cidr_blocks = ["${var.aws_vpc_cidr_block}"]
security_group_id = "${aws_security_group.kubernetes.id}"
cidr_blocks = [var.aws_vpc_cidr_block]
security_group_id = aws_security_group.kubernetes.id
}
resource "aws_security_group_rule" "allow-all-egress" {
@ -121,7 +121,7 @@ resource "aws_security_group_rule" "allow-all-egress" {
to_port = 65535
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.kubernetes.id}"
security_group_id = aws_security_group.kubernetes.id
}
resource "aws_security_group_rule" "allow-ssh-connections" {
@ -130,5 +130,5 @@ resource "aws_security_group_rule" "allow-ssh-connections" {
to_port = 22
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.kubernetes.id}"
security_group_id = aws_security_group.kubernetes.id
}

View File

@ -1,5 +1,5 @@
output "aws_vpc_id" {
value = "${aws_vpc.cluster-vpc.id}"
value = aws_vpc.cluster-vpc.id
}
output "aws_subnet_ids_private" {
@ -15,5 +15,5 @@ output "aws_security_group" {
}
output "default_tags" {
value = "${var.default_tags}"
value = var.default_tags
}

View File

@ -1,17 +1,17 @@
output "bastion_ip" {
value = "${join("\n", aws_instance.bastion-server.*.public_ip)}"
value = join("\n", aws_instance.bastion-server.*.public_ip)
}
output "masters" {
value = "${join("\n", aws_instance.k8s-master.*.private_ip)}"
value = join("\n", aws_instance.k8s-master.*.private_ip)
}
output "workers" {
value = "${join("\n", aws_instance.k8s-worker.*.private_ip)}"
value = join("\n", aws_instance.k8s-worker.*.private_ip)
}
output "etcd" {
value = "${join("\n", aws_instance.k8s-etcd.*.private_ip)}"
value = join("\n", aws_instance.k8s-etcd.*.private_ip)
}
output "aws_elb_api_fqdn" {
@ -19,9 +19,9 @@ output "aws_elb_api_fqdn" {
}
output "inventory" {
value = "${data.template_file.inventory.rendered}"
value = data.template_file.inventory.rendered
}
output "default_tags" {
value = "${var.default_tags}"
value = var.default_tags
}

View File

@ -25,7 +25,7 @@ data "aws_ami" "distro" {
filter {
name = "name"
values = ["CoreOS-stable-*"]
values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
}
filter {
@ -33,7 +33,7 @@ data "aws_ami" "distro" {
values = ["hvm"]
}
owners = ["595879546273"] #CoreOS
owners = ["099720109477"] # Canonical
}
//AWS VPC Variables

View File

@ -1,11 +1,11 @@
# Kubernetes on Openstack with Terraform
# Kubernetes on OpenStack with Terraform
Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on
Openstack.
OpenStack.
## Status
This will install a Kubernetes cluster on an Openstack Cloud. It should work on
This will install a Kubernetes cluster on an OpenStack Cloud. It should work on
most modern installs of OpenStack that support the basic services.
### Known compatible public clouds
@ -38,6 +38,16 @@ hosts where that makes sense. You have the option of creating bastion hosts
inside the private subnet to access the nodes there. Alternatively, a node with
a floating IP can be used as a jump host to nodes without.
#### Using an existing router
It is possible to use an existing router instead of creating one. To use an
existing router, set the `router_id` variable to the UUID of the router you
wish to use.
For example:
```
router_id = "00c542e7-6f46-4535-ae95-984c7f0391a3"
```
### Kubernetes Nodes
You can create many different kubernetes topologies by setting the number of
different classes of hosts. For each class there are options for allocating
@ -62,9 +72,9 @@ specify:
- Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks
- Other properties related to provisioning the hosts
Even if you are using Container Linux by CoreOS for your cluster, you will still
Even if you are using Flatcar Container Linux by Kinvolk for your cluster, you will still
need the GlusterFS VMs to be based on either Debian or RedHat based images.
Container Linux by CoreOS cannot serve GlusterFS, but can connect to it through
Flatcar Container Linux by Kinvolk cannot serve GlusterFS, but can connect to it through
binaries available on hyperkube v1.4.3_coreos.0 or higher.
## Requirements
@ -254,6 +264,107 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`etcd_root_volume_size_in_gb` | Size of the root volume for etcd nodes, 0 to use ephemeral storage |
|`bastion_root_volume_size_in_gb` | Size of the root volume for bastions, 0 to use ephemeral storage |
|`use_server_group` | Create and use openstack nova servergroups, default: false |
|`k8s_nodes` | Map containing worker node definition, see explanation below |
##### k8s_nodes
Allows a custom definition of worker nodes, giving the operator full control over individual node flavor and
availability zone placement. To enable this mode, set the `number_of_k8s_nodes` and
`number_of_k8s_nodes_no_floating_ip` variables to 0, then define your desired worker node configuration
using the `k8s_nodes` variable.
For example:
```
k8s_nodes = {
"1" = {
"az" = "sto1"
"flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
"floating_ip" = true
},
"2" = {
"az" = "sto2"
"flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
"floating_ip" = true
},
"3" = {
"az" = "sto3"
"flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
"floating_ip" = true
}
}
```
Would result in the same configuration as:
```
number_of_k8s_nodes = 3
flavor_k8s_node = "83d8b44a-26a0-4f02-a981-079446926445"
az_list = ["sto1", "sto2", "sto3"]
```
And:
```
k8s_nodes = {
"ing-1" = {
"az" = "sto1"
"flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
"floating_ip" = true
},
"ing-2" = {
"az" = "sto2"
"flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
"floating_ip" = true
},
"ing-3" = {
"az" = "sto3"
"flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
"floating_ip" = true
},
"big-1" = {
"az" = "sto1"
"flavor" = "3f73fc93-ec61-4808-88df-2580d94c1a9b"
"floating_ip" = false
},
"big-2" = {
"az" = "sto2"
"flavor" = "3f73fc93-ec61-4808-88df-2580d94c1a9b"
"floating_ip" = false
},
"big-3" = {
"az" = "sto3"
"flavor" = "3f73fc93-ec61-4808-88df-2580d94c1a9b"
"floating_ip" = false
},
"small-1" = {
"az" = "sto1"
"flavor" = "7a6a998f-ac7f-4fb8-a534-2175b254f75e"
"floating_ip" = false
},
"small-2" = {
"az" = "sto2"
"flavor" = "7a6a998f-ac7f-4fb8-a534-2175b254f75e"
"floating_ip" = false
},
"small-3" = {
"az" = "sto3"
"flavor" = "7a6a998f-ac7f-4fb8-a534-2175b254f75e"
"floating_ip" = false
}
}
```
Would result in three nodes in each availability zone, each with its own naming,
flavor, and floating IP configuration.
The "schema":
```
k8s_nodes = {
"key | node name suffix, must be unique" = {
"az" = string
"flavor" = string
"floating_ip" = bool
},
}
```
All values are required.
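Putting the pieces together, a minimal `cluster.tfvars` sketch for switching to this mode — the availability zone and flavor UUID below are placeholders you must replace with values from your cloud:
```
# Disable the count-based worker variables so k8s_nodes takes effect
number_of_k8s_nodes                = 0
number_of_k8s_nodes_no_floating_ip = 0

# One explicitly defined worker; all three keys are required
k8s_nodes = {
  "1" = {
    "az"          = "nova"                    # placeholder availability zone
    "flavor"      = "<openstack-flavor-uuid>" # replace with a real flavor UUID
    "floating_ip" = true
  }
}
```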
#### Terraform state files
@ -371,7 +482,7 @@ So, either a bastion host, or at least master/node with a floating IP are requir
#### Test access
Make sure you can connect to the hosts. Note that Container Linux by CoreOS will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
Make sure you can connect to the hosts. Note that Flatcar Container Linux by Kinvolk will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
```
$ ansible -i inventory/$CLUSTER/hosts -m ping all
@ -399,7 +510,7 @@ Edit `inventory/$CLUSTER/group_vars/all/all.yml`:
# Directory where the binaries will be installed
# Default:
# bin_dir: /usr/local/bin
# For Container Linux by CoreOS:
# For Flatcar Container Linux by Kinvolk:
bin_dir: /opt/bin
```
- and **cloud_provider**:
@ -420,7 +531,7 @@ kube_network_plugin: flannel
# Can be docker_dns, host_resolvconf or none
# Default:
# resolvconf_mode: docker_dns
# For Container Linux by CoreOS:
# For Flatcar Container Linux by Kinvolk:
resolvconf_mode: host_resolvconf
```
- Set max amount of attached cinder volume per host (default 256)
@ -494,3 +605,81 @@ $ ansible-playbook --become -i inventory/$CLUSTER/hosts ./contrib/network-storag
## What's next
Try out your new Kubernetes cluster with the [Hello Kubernetes service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/).
## Appendix
### Migration from `number_of_k8s_nodes*` to `k8s_nodes`
If you currently have a cluster defined using the `number_of_k8s_nodes*` variables and wish
to migrate to the `k8s_nodes` style you can do it like so:
```ShellSession
$ terraform state list
module.compute.data.openstack_images_image_v2.gfs_image
module.compute.data.openstack_images_image_v2.vm_image
module.compute.openstack_compute_floatingip_associate_v2.k8s_master[0]
module.compute.openstack_compute_floatingip_associate_v2.k8s_node[0]
module.compute.openstack_compute_floatingip_associate_v2.k8s_node[1]
module.compute.openstack_compute_floatingip_associate_v2.k8s_node[2]
module.compute.openstack_compute_instance_v2.k8s_master[0]
module.compute.openstack_compute_instance_v2.k8s_node[0]
module.compute.openstack_compute_instance_v2.k8s_node[1]
module.compute.openstack_compute_instance_v2.k8s_node[2]
module.compute.openstack_compute_keypair_v2.k8s
module.compute.openstack_compute_servergroup_v2.k8s_etcd[0]
module.compute.openstack_compute_servergroup_v2.k8s_master[0]
module.compute.openstack_compute_servergroup_v2.k8s_node[0]
module.compute.openstack_networking_secgroup_rule_v2.bastion[0]
module.compute.openstack_networking_secgroup_rule_v2.egress[0]
module.compute.openstack_networking_secgroup_rule_v2.k8s
module.compute.openstack_networking_secgroup_rule_v2.k8s_allowed_remote_ips[0]
module.compute.openstack_networking_secgroup_rule_v2.k8s_allowed_remote_ips[1]
module.compute.openstack_networking_secgroup_rule_v2.k8s_allowed_remote_ips[2]
module.compute.openstack_networking_secgroup_rule_v2.k8s_master[0]
module.compute.openstack_networking_secgroup_rule_v2.worker[0]
module.compute.openstack_networking_secgroup_rule_v2.worker[1]
module.compute.openstack_networking_secgroup_rule_v2.worker[2]
module.compute.openstack_networking_secgroup_rule_v2.worker[3]
module.compute.openstack_networking_secgroup_rule_v2.worker[4]
module.compute.openstack_networking_secgroup_v2.bastion[0]
module.compute.openstack_networking_secgroup_v2.k8s
module.compute.openstack_networking_secgroup_v2.k8s_master
module.compute.openstack_networking_secgroup_v2.worker
module.ips.null_resource.dummy_dependency
module.ips.openstack_networking_floatingip_v2.k8s_master[0]
module.ips.openstack_networking_floatingip_v2.k8s_node[0]
module.ips.openstack_networking_floatingip_v2.k8s_node[1]
module.ips.openstack_networking_floatingip_v2.k8s_node[2]
module.network.openstack_networking_network_v2.k8s[0]
module.network.openstack_networking_router_interface_v2.k8s[0]
module.network.openstack_networking_router_v2.k8s[0]
module.network.openstack_networking_subnet_v2.k8s[0]
$ terraform state mv 'module.compute.openstack_compute_floatingip_associate_v2.k8s_node[0]' 'module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes["1"]'
Move "module.compute.openstack_compute_floatingip_associate_v2.k8s_node[0]" to "module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes[\"1\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_floatingip_associate_v2.k8s_node[1]' 'module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes["2"]'
Move "module.compute.openstack_compute_floatingip_associate_v2.k8s_node[1]" to "module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes[\"2\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_floatingip_associate_v2.k8s_node[2]' 'module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes["3"]'
Move "module.compute.openstack_compute_floatingip_associate_v2.k8s_node[2]" to "module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes[\"3\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_instance_v2.k8s_node[0]' 'module.compute.openstack_compute_instance_v2.k8s_node["1"]'
Move "module.compute.openstack_compute_instance_v2.k8s_node[0]" to "module.compute.openstack_compute_instance_v2.k8s_node[\"1\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_instance_v2.k8s_node[1]' 'module.compute.openstack_compute_instance_v2.k8s_node["2"]'
Move "module.compute.openstack_compute_instance_v2.k8s_node[1]" to "module.compute.openstack_compute_instance_v2.k8s_node[\"2\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_instance_v2.k8s_node[2]' 'module.compute.openstack_compute_instance_v2.k8s_node["3"]'
Move "module.compute.openstack_compute_instance_v2.k8s_node[2]" to "module.compute.openstack_compute_instance_v2.k8s_node[\"3\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.ips.openstack_networking_floatingip_v2.k8s_node[0]' 'module.ips.openstack_networking_floatingip_v2.k8s_node["1"]'
Move "module.ips.openstack_networking_floatingip_v2.k8s_node[0]" to "module.ips.openstack_networking_floatingip_v2.k8s_node[\"1\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.ips.openstack_networking_floatingip_v2.k8s_node[1]' 'module.ips.openstack_networking_floatingip_v2.k8s_node["2"]'
Move "module.ips.openstack_networking_floatingip_v2.k8s_node[1]" to "module.ips.openstack_networking_floatingip_v2.k8s_node[\"2\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.ips.openstack_networking_floatingip_v2.k8s_node[2]' 'module.ips.openstack_networking_floatingip_v2.k8s_node["3"]'
Move "module.ips.openstack_networking_floatingip_v2.k8s_node[2]" to "module.ips.openstack_networking_floatingip_v2.k8s_node[\"3\"]"
Successfully moved 1 object(s).
```
For nodes without floating IPs, those steps can of course be omitted.

View File

@ -5,97 +5,104 @@ provider "openstack" {
module "network" {
source = "./modules/network"
external_net = "${var.external_net}"
network_name = "${var.network_name}"
subnet_cidr = "${var.subnet_cidr}"
cluster_name = "${var.cluster_name}"
dns_nameservers = "${var.dns_nameservers}"
network_dns_domain = "${var.network_dns_domain}"
use_neutron = "${var.use_neutron}"
external_net = var.external_net
network_name = var.network_name
subnet_cidr = var.subnet_cidr
cluster_name = var.cluster_name
dns_nameservers = var.dns_nameservers
network_dns_domain = var.network_dns_domain
use_neutron = var.use_neutron
router_id = var.router_id
}
module "ips" {
source = "./modules/ips"
number_of_k8s_masters = "${var.number_of_k8s_masters}"
number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}"
number_of_k8s_nodes = "${var.number_of_k8s_nodes}"
floatingip_pool = "${var.floatingip_pool}"
number_of_bastions = "${var.number_of_bastions}"
external_net = "${var.external_net}"
network_name = "${var.network_name}"
router_id = "${module.network.router_id}"
number_of_k8s_masters = var.number_of_k8s_masters
number_of_k8s_masters_no_etcd = var.number_of_k8s_masters_no_etcd
number_of_k8s_nodes = var.number_of_k8s_nodes
floatingip_pool = var.floatingip_pool
number_of_bastions = var.number_of_bastions
external_net = var.external_net
network_name = var.network_name
router_id = module.network.router_id
k8s_nodes = var.k8s_nodes
}
module "compute" {
source = "./modules/compute"
cluster_name = "${var.cluster_name}"
az_list = "${var.az_list}"
number_of_k8s_masters = "${var.number_of_k8s_masters}"
number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}"
number_of_etcd = "${var.number_of_etcd}"
number_of_k8s_masters_no_floating_ip = "${var.number_of_k8s_masters_no_floating_ip}"
number_of_k8s_masters_no_floating_ip_no_etcd = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
number_of_k8s_nodes = "${var.number_of_k8s_nodes}"
number_of_bastions = "${var.number_of_bastions}"
number_of_k8s_nodes_no_floating_ip = "${var.number_of_k8s_nodes_no_floating_ip}"
number_of_gfs_nodes_no_floating_ip = "${var.number_of_gfs_nodes_no_floating_ip}"
bastion_root_volume_size_in_gb = "${var.bastion_root_volume_size_in_gb}"
etcd_root_volume_size_in_gb = "${var.etcd_root_volume_size_in_gb}"
master_root_volume_size_in_gb = "${var.master_root_volume_size_in_gb}"
node_root_volume_size_in_gb = "${var.node_root_volume_size_in_gb}"
gfs_root_volume_size_in_gb = "${var.gfs_root_volume_size_in_gb}"
gfs_volume_size_in_gb = "${var.gfs_volume_size_in_gb}"
public_key_path = "${var.public_key_path}"
image = "${var.image}"
image_gfs = "${var.image_gfs}"
ssh_user = "${var.ssh_user}"
ssh_user_gfs = "${var.ssh_user_gfs}"
flavor_k8s_master = "${var.flavor_k8s_master}"
flavor_k8s_node = "${var.flavor_k8s_node}"
flavor_etcd = "${var.flavor_etcd}"
flavor_gfs_node = "${var.flavor_gfs_node}"
network_name = "${var.network_name}"
flavor_bastion = "${var.flavor_bastion}"
k8s_master_fips = "${module.ips.k8s_master_fips}"
k8s_master_no_etcd_fips = "${module.ips.k8s_master_no_etcd_fips}"
k8s_node_fips = "${module.ips.k8s_node_fips}"
bastion_fips = "${module.ips.bastion_fips}"
bastion_allowed_remote_ips = "${var.bastion_allowed_remote_ips}"
master_allowed_remote_ips = "${var.master_allowed_remote_ips}"
k8s_allowed_remote_ips = "${var.k8s_allowed_remote_ips}"
k8s_allowed_egress_ips = "${var.k8s_allowed_egress_ips}"
supplementary_master_groups = "${var.supplementary_master_groups}"
supplementary_node_groups = "${var.supplementary_node_groups}"
worker_allowed_ports = "${var.worker_allowed_ports}"
wait_for_floatingip = "${var.wait_for_floatingip}"
use_access_ip = "${var.use_access_ip}"
use_server_groups = "${var.use_server_groups}"
cluster_name = var.cluster_name
az_list = var.az_list
az_list_node = var.az_list_node
number_of_k8s_masters = var.number_of_k8s_masters
number_of_k8s_masters_no_etcd = var.number_of_k8s_masters_no_etcd
number_of_etcd = var.number_of_etcd
number_of_k8s_masters_no_floating_ip = var.number_of_k8s_masters_no_floating_ip
number_of_k8s_masters_no_floating_ip_no_etcd = var.number_of_k8s_masters_no_floating_ip_no_etcd
number_of_k8s_nodes = var.number_of_k8s_nodes
number_of_bastions = var.number_of_bastions
number_of_k8s_nodes_no_floating_ip = var.number_of_k8s_nodes_no_floating_ip
number_of_gfs_nodes_no_floating_ip = var.number_of_gfs_nodes_no_floating_ip
k8s_nodes = var.k8s_nodes
bastion_root_volume_size_in_gb = var.bastion_root_volume_size_in_gb
etcd_root_volume_size_in_gb = var.etcd_root_volume_size_in_gb
master_root_volume_size_in_gb = var.master_root_volume_size_in_gb
node_root_volume_size_in_gb = var.node_root_volume_size_in_gb
gfs_root_volume_size_in_gb = var.gfs_root_volume_size_in_gb
gfs_volume_size_in_gb = var.gfs_volume_size_in_gb
master_volume_type = var.master_volume_type
public_key_path = var.public_key_path
image = var.image
image_gfs = var.image_gfs
ssh_user = var.ssh_user
ssh_user_gfs = var.ssh_user_gfs
flavor_k8s_master = var.flavor_k8s_master
flavor_k8s_node = var.flavor_k8s_node
flavor_etcd = var.flavor_etcd
flavor_gfs_node = var.flavor_gfs_node
network_name = var.network_name
flavor_bastion = var.flavor_bastion
k8s_master_fips = module.ips.k8s_master_fips
k8s_master_no_etcd_fips = module.ips.k8s_master_no_etcd_fips
k8s_node_fips = module.ips.k8s_node_fips
k8s_nodes_fips = module.ips.k8s_nodes_fips
bastion_fips = module.ips.bastion_fips
bastion_allowed_remote_ips = var.bastion_allowed_remote_ips
master_allowed_remote_ips = var.master_allowed_remote_ips
k8s_allowed_remote_ips = var.k8s_allowed_remote_ips
k8s_allowed_egress_ips = var.k8s_allowed_egress_ips
supplementary_master_groups = var.supplementary_master_groups
supplementary_node_groups = var.supplementary_node_groups
master_allowed_ports = var.master_allowed_ports
worker_allowed_ports = var.worker_allowed_ports
wait_for_floatingip = var.wait_for_floatingip
use_access_ip = var.use_access_ip
use_server_groups = var.use_server_groups
network_id = "${module.network.router_id}"
network_id = module.network.router_id
}
output "private_subnet_id" {
value = "${module.network.subnet_id}"
value = module.network.subnet_id
}
output "floating_network_id" {
value = "${var.external_net}"
value = var.external_net
}
output "router_id" {
value = "${module.network.router_id}"
value = module.network.router_id
}
output "k8s_master_fips" {
value = "${concat(module.ips.k8s_master_fips, module.ips.k8s_master_no_etcd_fips)}"
value = concat(module.ips.k8s_master_fips, module.ips.k8s_master_no_etcd_fips)
}
output "k8s_node_fips" {
value = "${module.ips.k8s_node_fips}"
value = var.number_of_k8s_nodes > 0 ? module.ips.k8s_node_fips : [for key, value in module.ips.k8s_nodes_fips : value.address]
}
output "bastion_fips" {
value = "${module.ips.bastion_fips}"
value = module.ips.bastion_fips
}

File diff suppressed because it is too large

View File

@ -1,7 +1,11 @@
variable "cluster_name" {}
variable "az_list" {
type = "list"
type = list(string)
}
variable "az_list_node" {
type = list(string)
}
variable "number_of_k8s_masters" {}
@ -34,6 +38,8 @@ variable "gfs_root_volume_size_in_gb" {}
variable "gfs_volume_size_in_gb" {}
variable "master_volume_type" {}
variable "public_key_path" {}
variable "image" {}
@ -61,37 +67,43 @@ variable "network_id" {
}
variable "k8s_master_fips" {
type = "list"
type = list
}
variable "k8s_master_no_etcd_fips" {
type = "list"
type = list
}
variable "k8s_node_fips" {
type = "list"
type = list
}
variable "k8s_nodes_fips" {
type = map
}
variable "bastion_fips" {
type = "list"
type = list
}
variable "bastion_allowed_remote_ips" {
type = "list"
type = list
}
variable "master_allowed_remote_ips" {
type = "list"
type = list
}
variable "k8s_allowed_remote_ips" {
type = "list"
type = list
}
variable "k8s_allowed_egress_ips" {
type = "list"
type = list
}
variable "k8s_nodes" {}
variable "wait_for_floatingip" {}
variable "supplementary_master_groups" {
@ -102,12 +114,16 @@ variable "supplementary_node_groups" {
default = ""
}
variable "master_allowed_ports" {
type = list
}
variable "worker_allowed_ports" {
type = "list"
type = list
}
variable "use_access_ip" {}
variable "use_server_groups" {
type = bool
}
}

View File

@ -1,29 +1,36 @@
resource "null_resource" "dummy_dependency" {
triggers = {
dependency_id = "${var.router_id}"
dependency_id = var.router_id
}
}
resource "openstack_networking_floatingip_v2" "k8s_master" {
count = "${var.number_of_k8s_masters}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
count = var.number_of_k8s_masters
pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency]
}
resource "openstack_networking_floatingip_v2" "k8s_master_no_etcd" {
count = "${var.number_of_k8s_masters_no_etcd}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
count = var.number_of_k8s_masters_no_etcd
pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency]
}
resource "openstack_networking_floatingip_v2" "k8s_node" {
count = "${var.number_of_k8s_nodes}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
count = var.number_of_k8s_nodes
pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency]
}
resource "openstack_networking_floatingip_v2" "bastion" {
count = "${var.number_of_bastions}"
pool = "${var.floatingip_pool}"
depends_on = ["null_resource.dummy_dependency"]
count = var.number_of_bastions
pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency]
}
resource "openstack_networking_floatingip_v2" "k8s_nodes" {
for_each = var.number_of_k8s_nodes == 0 ? { for key, value in var.k8s_nodes : key => value if value.floating_ip } : {}
pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency]
}

View File

@ -1,15 +1,19 @@
output "k8s_master_fips" {
value = "${openstack_networking_floatingip_v2.k8s_master[*].address}"
value = openstack_networking_floatingip_v2.k8s_master[*].address
}
output "k8s_master_no_etcd_fips" {
value = "${openstack_networking_floatingip_v2.k8s_master_no_etcd[*].address}"
value = openstack_networking_floatingip_v2.k8s_master_no_etcd[*].address
}
output "k8s_node_fips" {
value = "${openstack_networking_floatingip_v2.k8s_node[*].address}"
value = openstack_networking_floatingip_v2.k8s_node[*].address
}
output "k8s_nodes_fips" {
value = openstack_networking_floatingip_v2.k8s_nodes
}
output "bastion_fips" {
value = "${openstack_networking_floatingip_v2.bastion[*].address}"
value = openstack_networking_floatingip_v2.bastion[*].address
}

View File

@ -14,4 +14,6 @@ variable "network_name" {}
variable "router_id" {
default = ""
}
}
variable "k8s_nodes" {}

View File

@ -1,28 +1,33 @@
resource "openstack_networking_router_v2" "k8s" {
name = "${var.cluster_name}-router"
count = "${var.use_neutron}"
count = var.use_neutron == 1 && var.router_id == null ? 1 : 0
admin_state_up = "true"
external_network_id = "${var.external_net}"
external_network_id = var.external_net
}
data "openstack_networking_router_v2" "k8s" {
router_id = var.router_id
count = var.use_neutron == 1 && var.router_id != null ? 1 : 0
}
resource "openstack_networking_network_v2" "k8s" {
name = "${var.network_name}"
count = "${var.use_neutron}"
dns_domain = var.network_dns_domain != null ? "${var.network_dns_domain}" : null
name = var.network_name
count = var.use_neutron
dns_domain = var.network_dns_domain != null ? var.network_dns_domain : null
admin_state_up = "true"
}
resource "openstack_networking_subnet_v2" "k8s" {
name = "${var.cluster_name}-internal-network"
count = "${var.use_neutron}"
network_id = "${openstack_networking_network_v2.k8s[count.index].id}"
cidr = "${var.subnet_cidr}"
count = var.use_neutron
network_id = openstack_networking_network_v2.k8s[count.index].id
cidr = var.subnet_cidr
ip_version = 4
dns_nameservers = "${var.dns_nameservers}"
dns_nameservers = var.dns_nameservers
}
resource "openstack_networking_router_interface_v2" "k8s" {
count = "${var.use_neutron}"
router_id = "${openstack_networking_router_v2.k8s[count.index].id}"
subnet_id = "${openstack_networking_subnet_v2.k8s[count.index].id}"
count = var.use_neutron
router_id = "%{if openstack_networking_router_v2.k8s != []}${openstack_networking_router_v2.k8s[count.index].id}%{else}${var.router_id}%{endif}"
subnet_id = openstack_networking_subnet_v2.k8s[count.index].id
}

View File

@ -1,11 +1,11 @@
output "router_id" {
value = "${element(concat(openstack_networking_router_v2.k8s.*.id, list("")), 0)}"
value = "%{if var.use_neutron == 1} ${var.router_id == null ? element(concat(openstack_networking_router_v2.k8s.*.id, [""]), 0) : var.router_id} %{else} %{endif}"
}
output "router_internal_port_id" {
value = "${element(concat(openstack_networking_router_interface_v2.k8s.*.id, list("")), 0)}"
value = element(concat(openstack_networking_router_interface_v2.k8s.*.id, [""]), 0)
}
output "subnet_id" {
value = "${element(concat(openstack_networking_subnet_v2.k8s.*.id, list("")), 0)}"
value = element(concat(openstack_networking_subnet_v2.k8s.*.id, [""]), 0)
}

View File

@ -7,9 +7,11 @@ variable "network_dns_domain" {}
variable "cluster_name" {}
variable "dns_nameservers" {
type = "list"
type = list
}
variable "subnet_cidr" {}
variable "use_neutron" {}
variable "router_id" {}

View File

@ -3,8 +3,14 @@ variable "cluster_name" {
}
variable "az_list" {
description = "List of Availability Zones available in your OpenStack cluster"
type = "list"
description = "List of Availability Zones to use for masters in your OpenStack cluster"
type = list(string)
default = ["nova"]
}
variable "az_list_node" {
description = "List of Availability Zones to use for nodes in your OpenStack cluster"
type = list(string)
default = ["nova"]
}
@ -68,6 +74,10 @@ variable "gfs_volume_size_in_gb" {
default = 75
}
variable "master_volume_type" {
default = "Default"
}
variable "public_key_path" {
description = "The path of the ssh pub key"
default = "~/.ssh/id_rsa.pub"
@ -125,7 +135,7 @@ variable "network_name" {
variable "network_dns_domain" {
description = "dns_domain for the internal network"
type = "string"
type = string
default = null
}
@ -136,13 +146,13 @@ variable "use_neutron" {
variable "subnet_cidr" {
description = "Subnet CIDR block."
type = "string"
type = string
default = "10.0.0.0/24"
}
variable "dns_nameservers" {
description = "An array of DNS name server names used by hosts in this subnet."
type = "list"
type = list
default = []
}
@ -172,30 +182,36 @@ variable "supplementary_node_groups" {
variable "bastion_allowed_remote_ips" {
description = "An array of CIDRs allowed to SSH to hosts"
type = "list"
type = list(string)
default = ["0.0.0.0/0"]
}
variable "master_allowed_remote_ips" {
description = "An array of CIDRs allowed to access API of masters"
type = "list"
type = list(string)
default = ["0.0.0.0/0"]
}
variable "k8s_allowed_remote_ips" {
description = "An array of CIDRs allowed to SSH to hosts"
type = "list"
type = list(string)
default = []
}
variable "k8s_allowed_egress_ips" {
description = "An array of CIDRs allowed for egress traffic"
type = "list"
type = list(string)
default = ["0.0.0.0/0"]
}
variable "master_allowed_ports" {
type = list
default = []
}
variable "worker_allowed_ports" {
type = "list"
type = list
default = [
{
@ -213,4 +229,14 @@ variable "use_access_ip" {
variable "use_server_groups" {
default = false
}
}
variable "router_id" {
description = "uuid of an externally defined router to use"
default = null
}
variable "k8s_nodes" {
default = {}
}

View File

@ -176,7 +176,7 @@ If you have deployed and destroyed a previous iteration of your cluster, you wil
#### Test access
Make sure you can connect to the hosts. Note that Container Linux by CoreOS will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
Make sure you can connect to the hosts. Note that Flatcar Container Linux by Kinvolk will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
```
$ ansible -i inventory/$CLUSTER/hosts -m ping all

View File

@ -223,8 +223,8 @@ def packet_device(resource, tfvars=None):
'provider': 'packet',
}
if raw_attrs['operating_system'] == 'coreos_stable':
# For CoreOS set the ssh_user to core
if raw_attrs['operating_system'] == 'flatcar_stable':
# For Flatcar set the ssh_user to core
attrs.update({'ansible_ssh_user': 'core'})
# add groups based on attrs
@ -319,9 +319,7 @@ def openstack_host(resource, module_name):
# attrs specific to Mantl
attrs.update({
'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
'role': attrs['metadata'].get('role', 'none'),
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
'role': attrs['metadata'].get('role', 'none')
})
# add groups based on attrs
@ -331,10 +329,6 @@ def openstack_host(resource, module_name):
for item in list(attrs['metadata'].items()))
groups.append('os_region=' + attrs['region'])
# groups specific to Mantl
groups.append('role=' + attrs['metadata'].get('role', 'none'))
groups.append('dc=' + attrs['consul_dc'])
# groups specific to kubespray
for group in attrs['metadata'].get('kubespray_groups', "").split(","):
groups.append(group)
@ -357,7 +351,7 @@ def iter_host_ips(hosts, ips):
'ansible_ssh_host': ip,
})
if 'use_access_ip' in host[1]['metadata'] and ihost[1]['metadata']['use_access_ip'] == "0":
if 'use_access_ip' in host[1]['metadata'] and host[1]['metadata']['use_access_ip'] == "0":
host[1].pop('access_ip')
yield host

View File

@ -13,7 +13,7 @@
/usr/local/share/ca-certificates/vault-ca.crt
{%- elif ansible_os_family == "RedHat" -%}
/etc/pki/ca-trust/source/anchors/vault-ca.crt
{%- elif ansible_os_family in ["Coreos", "Container Linux by CoreOS"] -%}
{%- elif ansible_os_family in ["Flatcar Container Linux by Kinvolk"] -%}
/etc/ssl/certs/vault-ca.pem
{%- endif %}
@ -23,9 +23,9 @@
dest: "{{ ca_cert_path }}"
register: vault_ca_cert
- name: bootstrap/ca_trust | update ca-certificates (Debian/Ubuntu/CoreOS)
- name: bootstrap/ca_trust | update ca-certificates (Debian/Ubuntu/Flatcar)
command: update-ca-certificates
when: vault_ca_cert.changed and ansible_os_family in ["Debian", "CoreOS", "Coreos", "Container Linux by CoreOS"]
when: vault_ca_cert.changed and ansible_os_family in ["Debian", "Flatcar Container Linux by Kinvolk"]
- name: bootstrap/ca_trust | update ca-certificates (RedHat)
command: update-ca-trust extract

View File

@ -6,11 +6,11 @@
- name: bootstrap/start_vault_temp | Start single node Vault with file backend
command: >
docker run -d --cap-add=IPC_LOCK --name {{ vault_temp_container_name }}
-p {{ vault_port }}:{{ vault_port }}
-e 'VAULT_LOCAL_CONFIG={{ vault_temp_config|to_json }}'
-v /etc/vault:/etc/vault
{{ vault_image_repo }}:{{ vault_version }} server
docker run -d --cap-add=IPC_LOCK --name {{ vault_temp_container_name }}
-p {{ vault_port }}:{{ vault_port }}
-e 'VAULT_LOCAL_CONFIG={{ vault_temp_config|to_json }}'
-v /etc/vault:/etc/vault
{{ vault_image_repo }}:{{ vault_version }} server
- name: bootstrap/start_vault_temp | Start again single node Vault with file backend
command: docker start {{ vault_temp_container_name }}

View File

@ -21,9 +21,9 @@
- name: bootstrap/sync_secrets | Print out warning message if secrets are not available and vault is initialized
pause:
prompt: >
Vault orchestration may not be able to proceed. The Vault cluster is initialized, but
'root_token' or 'unseal_keys' were not found in {{ vault_secrets_dir }}. These are
needed for many vault orchestration steps.
Vault orchestration may not be able to proceed. The Vault cluster is initialized, but
'root_token' or 'unseal_keys' were not found in {{ vault_secrets_dir }}. These are
needed for many vault orchestration steps.
when: vault_cluster_is_initialized and not vault_secrets_available
- name: bootstrap/sync_secrets | Cat root_token from a vault host

View File

@ -25,6 +25,6 @@
- name: check_etcd | Fail if etcd is not available and needed
fail:
msg: >
Unable to start Vault cluster! Etcd is not available at
{{ vault_etcd_url.split(',') | first }} however it is needed by Vault as a backend.
Unable to start Vault cluster! Etcd is not available at
{{ vault_etcd_url.split(',') | first }} however it is needed by Vault as a backend.
when: vault_etcd_needed|d() and not vault_etcd_available

View File

@ -36,6 +36,7 @@
{{ etcd_access_addresses.split(',') | first }}/v3alpha/kv/range
register: vault_etcd_exists
retries: 4
until: vault_etcd_exists.status == 200
delay: "{{ retry_stagger | random + 3 }}"
run_once: true
when: not vault_is_running and vault_etcd_available
@ -45,7 +46,7 @@
set_fact:
vault_cluster_is_initialized: >-
{{ vault_is_initialized or
hostvars[item]['vault_is_initialized'] or
('value' in vault_etcd_exists.stdout|default('')) }}
hostvars[item]['vault_is_initialized'] or
('value' in vault_etcd_exists.stdout|default('')) }}
with_items: "{{ groups.vault }}"
run_once: true

View File

@ -6,9 +6,9 @@
ca_cert: "{{ vault_cert_dir }}/ca.pem"
name: "{{ create_role_name }}"
rules: >-
{%- if create_role_policy_rules|d("default") == "default" -%}
{{
{ 'path': {
{%- if create_role_policy_rules|d("default") == "default" -%}
{{
{ 'path': {
create_role_mount_path + '/issue/' + create_role_name: {'policy': 'write'},
create_role_mount_path + '/roles/' + create_role_name: {'policy': 'read'}
}} | to_json + '\n'
@ -24,13 +24,13 @@
ca_cert: "{{ vault_cert_dir }}/ca.pem"
secret: "{{ create_role_mount_path }}/roles/{{ create_role_name }}"
data: |
{%- if create_role_options|d("default") == "default" -%}
{
allow_any_name: true
}
{%- else -%}
{{ create_role_options | to_json }}
{%- endif -%}
{%- if create_role_options|d("default") == "default" -%}
{
allow_any_name: true
}
{%- else -%}
{{ create_role_options | to_json }}
{%- endif -%}
## Userpass based auth method

View File

@ -18,8 +18,8 @@
- name: shared/gen_userpass | Copy credentials to all hosts in the group
copy:
content: >
{{
{'username': gen_userpass_username,
'password': gen_userpass_password} | to_nice_json(indent=4)
}}
{{
{'username': gen_userpass_username,
'password': gen_userpass_password} | to_nice_json(indent=4)
}}
dest: "{{ vault_roles_dir }}/{{ gen_userpass_role }}/userpass"

View File

@ -1,2 +1,2 @@
[Service]
Environment={% if http_proxy %}"HTTP_PROXY={{ http_proxy }}"{% endif %} {% if https_proxy %}"HTTPS_PROXY={{ https_proxy }}"{% endif %} {% if no_proxy %}"NO_PROXY={{ no_proxy }}"{% endif %}
Environment={% if http_proxy %}"HTTP_PROXY={{ http_proxy }}"{% endif %} {% if https_proxy %}"HTTPS_PROXY={{ https_proxy }}"{% endif %} {% if no_proxy %}"NO_PROXY={{ no_proxy }}"{% endif %}

View File

@ -7,7 +7,9 @@
* [Integration](docs/integration.md)
* [Upgrades](/docs/upgrades.md)
* [HA Mode](docs/ha-mode.md)
* [Adding/replacing a node](docs/nodes.md)
* [Large deployments](docs/large-deployments.md)
* [Air-Gap Installation](docs/offline-environment.md)
* CNI
* [Calico](docs/calico.md)
* [Contiv](docs/contiv.md)
@ -15,6 +17,8 @@
* [Kube Router](docs/kube-router.md)
* [Weave](docs/weave.md)
* [Multus](docs/multus.md)
* Ingress
* [Ambassador](docs/ambassador.md)
* [Cloud providers](docs/cloud.md)
* [AWS](docs/aws.md)
* [Azure](docs/azure.md)
@ -22,9 +26,9 @@
* [Packet](/docs/packet.md)
* [vSphere](/docs/vsphere.md)
* Operating Systems
* [Atomic](docs/atomic.md)
* [Debian](docs/debian.md)
* [Coreos](docs/coreos.md)
* [Flatcar Container Linux](docs/flatcar.md)
* [Fedora CoreOS](docs/fcos.md)
* [OpenSUSE](docs/opensuse.md)
* Advanced
* [Proxy](/docs/proxy.md)
@ -36,4 +40,6 @@
* Developers
* [Test cases](docs/test_cases.md)
* [Vagrant](docs/vagrant.md)
* [CI Matrix](docs/ci.md)
* [CI Setup](docs/ci-setup.md)
* [Roadmap](docs/roadmap.md)

87
docs/ambassador.md Normal file
View File

@ -0,0 +1,87 @@
# Ambassador
The [Ambassador API Gateway](https://github.com/datawire/ambassador) provides all the functionality of a traditional ingress controller
(e.g., path-based routing) while exposing many additional capabilities such as authentication,
URL rewriting, CORS, rate limiting, and automatic metrics collection.
## Installation
### Configuration
* `ingress_ambassador_namespace` (default `ambassador`): namespace for installing Ambassador.
* `ingress_ambassador_update_window` (default `0 0 * * SUN`): _crontab_-like expression
for specifying when the Operator should try to update the Ambassador API Gateway.
* `ingress_ambassador_version` (default: `*`): SemVer rule for versions allowed for
installation/updates.
* `ingress_ambassador_secure_port` (default: 443): HTTPS port to listen at.
* `ingress_ambassador_insecure_port` (default: 80): HTTP port to listen at.
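A minimal sketch of how these options might sit in your inventory group vars (the `ingress_ambassador_enabled` toggle is an assumption based on Kubespray's usual addon naming; verify it against your sample inventory; the other values are the documented defaults):
```yaml
ingress_ambassador_enabled: true          # assumed addon toggle
ingress_ambassador_namespace: "ambassador"
ingress_ambassador_version: "*"           # SemVer rule for allowed versions
ingress_ambassador_secure_port: 443
ingress_ambassador_insecure_port: 80
ingress_ambassador_update_window: "0 0 * * SUN"
```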
### Ambassador Operator
This Ambassador addon deploys the Ambassador Operator, which in turn will install
the [Ambassador API Gateway](https://github.com/datawire/ambassador) in
a Kubernetes cluster.
The Ambassador Operator is a Kubernetes Operator that controls Ambassador's complete lifecycle
in your cluster, automating many of the repeatable tasks you would otherwise have to perform
yourself. Once installed, the Operator will complete installations and seamlessly upgrade to new
versions of Ambassador as they become available.
## Usage
The following example creates simple http-echo services and an `Ingress` object
to route to these services.
Note well that the [Ambassador API Gateway](https://github.com/datawire/ambassador) will automatically load balance `Ingress` resources
that include the annotation `kubernetes.io/ingress.class=ambassador`. All other
resources are simply ignored.
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: foo-app
  labels:
    app: foo
spec:
  containers:
    - name: foo-app
      image: hashicorp/http-echo
      args:
        - "-text=foo"
---
kind: Service
apiVersion: v1
metadata:
  name: foo-service
spec:
  selector:
    app: foo
  ports:
    # Default port used by the image
    - port: 5678
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: ambassador
spec:
  rules:
    - http:
        paths:
          - path: /foo
            backend:
              serviceName: foo-service
              servicePort: 5678
```
Now you can test that the ingress is working with curl:
```console
$ export AMB_IP=$(kubectl get service ambassador -n ambassador -o 'go-template={{range .status.loadBalancer.ingress}}{{print .ip "\n"}}{{end}}')
$ curl $AMB_IP/foo
foo
```

View File

@ -68,7 +68,7 @@ Optional variables are located in the `inventory/sample/group_vars/all.yml`.
Mandatory variables that are common for at least one role (or a node group) can be found in the
`inventory/sample/group_vars/k8s-cluster.yml`.
There are also role vars for docker, kubernetes preinstall and master roles.
According to the [ansible docs](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
According to the [ansible docs](https://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from the group vars. In order to override, one should use
the `-e` runtime flags (the simplest way) or other layers described in the docs.
@ -137,6 +137,8 @@ The following tags are defined in playbooks:
| upgrade | Upgrading, e.g. container images/binaries
| upload | Distributing images/binaries across hosts
| weave | Network plugin Weave
| ingress_alb | AWS ALB Ingress Controller
| ambassador | Ambassador Ingress Controller
Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
tags found in the codebase. New tags will be listed with the empty "Used for"
@ -181,4 +183,8 @@ bastion ansible_host=x.x.x.x
```
For more information about Ansible and bastion hosts, read
[Running Ansible Through an SSH Bastion Host](http://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
[Running Ansible Through an SSH Bastion Host](https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
## Mitogen
You can use [mitogen](mitogen.md) to speed up kubespray.

View File

@ -1,22 +0,0 @@
# Atomic host bootstrap
Atomic host testing has been done with the network plugin flannel. Change the inventory var `kube_network_plugin: flannel`.
Note: Flannel is the only plugin that has currently been tested with atomic
## Vagrant
* For bootstrapping with Vagrant, use box centos/atomic-host or fedora/atomic-host
* Update VagrantFile variable `local_release_dir` to `/var/vagrant/temp`.
* Update `vm_memory = 2048` and `vm_cpus = 2`
* Networking on vagrant hosts has to be brought up manually once they are booted.
```ShellSession
vagrant ssh
sudo /sbin/ifup enp0s8
```
* For users of vagrant-libvirt download centos/atomic-host qcow2 format from <https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/>
* For users of vagrant-libvirt download fedora/atomic-host qcow2 format from <https://dl.fedoraproject.org/pub/alt/atomic/stable/>
Then you can proceed to [cluster deployment](#run-deployment)

87
docs/aws-ebs-csi.md Normal file
View File

@ -0,0 +1,87 @@
# AWS EBS CSI Driver
The AWS EBS CSI driver allows you to provision EBS volumes for pods running on EC2 instances. The old in-tree AWS cloud provider is deprecated and will be removed in future versions of Kubernetes, so transitioning to the CSI driver is advised.
To enable AWS EBS CSI driver, uncomment the `aws_ebs_csi_enabled` option in `group_vars/all/aws.yml` and set it to `true`.
To set the number of replicas for the AWS CSI controller, you can change `aws_ebs_csi_controller_replicas` option in `group_vars/all/aws.yml`.
Make sure to add an IAM role to your EC2 instances hosting Kubernetes that allows them to request and attach EBS volumes: [AWS CSI Policy](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/example-iam-policy.json)
If you want to deploy the AWS EBS storage class used with the CSI Driver, you should set `persistent_volumes_enabled` in `group_vars/k8s-cluster/k8s-cluster.yml` to `true`.
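Putting these options together, a sketch of the relevant settings (variable names as documented above; the replica count shown is only illustrative):
```yml
# group_vars/all/aws.yml (sketch)
aws_ebs_csi_enabled: true
aws_ebs_csi_controller_replicas: 1   # illustrative value

# group_vars/k8s-cluster/k8s-cluster.yml (sketch)
persistent_volumes_enabled: true
```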
You can now run the kubespray playbook (cluster.yml) to deploy Kubernetes over AWS EC2 with EBS CSI Driver enabled.
## Usage example
To check if AWS EBS CSI Driver is deployed properly, check that the ebs-csi pods are running:
```ShellSession
$ kubectl -n kube-system get pods | grep ebs
ebs-csi-controller-85d86bccc5-8gtq5 4/4 Running 4 40s
ebs-csi-node-n4b99 3/3 Running 3 40s
```
Check the associated storage class (if you enabled persistent_volumes):
```ShellSession
$ kubectl get storageclass
NAME PROVISIONER AGE
ebs-sc ebs.csi.aws.com 45s
```
You can run a PVC and an example Pod using this file `ebs-pod.yml`:
```yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: ebs-claim
```
Apply this configuration to your cluster: `kubectl apply -f ebs-pod.yml`
You should see the PVC provisioned and bound:
```ShellSession
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ebs-claim Bound pvc-0034cb9e-1ddd-4b3f-bb9e-0b5edbf5194c 1Gi RWO ebs-sc 50s
```
And the volume mounted to the example Pod (wait until the Pod is Running):
```ShellSession
$ kubectl exec -it app -- df -h | grep data
/dev/nvme1n1 1014M 34M 981M 4% /data
```
## More info
For further information about the AWS EBS CSI Driver, you can refer to this page: [AWS EBS Driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/).

119
docs/azure-csi.md Normal file
View File

@ -0,0 +1,119 @@
# Azure Disk CSI Driver
The Azure Disk CSI driver allows you to provision volumes for pods with a Kubernetes deployment over Azure Cloud. The CSI driver replaces the volume provisioning done by the in-tree Azure cloud provider, which is deprecated.
This documentation is an updated version of the in-tree Azure cloud provider documentation (azure.md).
To deploy Azure Disk CSI driver, uncomment the `azure_csi_enabled` option in `group_vars/all/azure.yml` and set it to `true`.
## Azure Disk CSI Storage Class
If you want to deploy the Azure Disk storage class to provision volumes dynamically, you should set `persistent_volumes_enabled` in `group_vars/k8s-cluster/k8s-cluster.yml` to `true`.
## Parameters
Before creating the instances you must first set the `azure_csi_` variables in the `group_vars/all.yml` file.
All of the values can be retrieved using the azure cli tool which can be downloaded here: <https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest>
After installation you have to run `az login` to get access to your account.
### azure\_csi\_tenant\_id + azure\_csi\_subscription\_id
Run `az account show` to retrieve your subscription id and tenant id:
`azure_csi_tenant_id` -> tenantId field
`azure_csi_subscription_id` -> id field
### azure\_csi\_location
The region your instances are located in; it can be something like `francecentral` or `norwayeast`. A full list of region names can be retrieved via `az account list-locations`
### azure\_csi\_resource\_group
The name of the resource group your instances are in, a list of your resource groups can be retrieved via `az group list`
Or you can do `az vm list | grep resourceGroup` and get the resource group corresponding to the VMs of your cluster.
The resource group name is not case sensitive.
### azure\_csi\_vnet\_name
The name of the virtual network your instances are in, can be retrieved via `az network vnet list`
### azure\_csi\_vnet\_resource\_group
The name of the resource group your vnet is in, can be retrieved via `az network vnet list | grep resourceGroup` and get the resource group corresponding to the vnet of your cluster.
### azure\_csi\_subnet\_name
The name of the subnet your instances are in, can be retrieved via `az network vnet subnet list --resource-group RESOURCE_GROUP --vnet-name VNET_NAME`
### azure\_csi\_security\_group\_name
The name of the network security group your instances are in, can be retrieved via `az network nsg list`
### azure\_csi\_aad\_client\_id + azure\_csi\_aad\_client\_secret
These will have to be generated first:
- Create an Azure AD Application with:
`az ad app create --display-name kubespray --identifier-uris http://kubespray --homepage http://kubespray.com --password CLIENT_SECRET`
Display name, identifier-uri, homepage and the password can be chosen
Note the AppId in the output.
- Create Service principal for the application with:
`az ad sp create --id AppId`
This is the AppId from the last command
- Create the role assignment with:
`az role assignment create --role "Owner" --assignee http://kubespray --subscription SUBSCRIPTION_ID`
azure\_csi\_aad\_client\_id must be set to the AppId, azure\_csi\_aad\_client\_secret is your chosen secret.
### azure\_csi\_use\_instance\_metadata
Use instance metadata service where possible. Boolean value.
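Collecting the parameters above, a sketch of what the finished variable file might look like (all values are placeholders to be replaced with the output of the `az` commands above):
```yml
# group_vars/all/azure.yml (sketch, placeholder values)
azure_csi_enabled: true
azure_csi_tenant_id: "00000000-0000-0000-0000-000000000000"
azure_csi_subscription_id: "00000000-0000-0000-0000-000000000000"
azure_csi_aad_client_id: "AppId-from-az-ad-app-create"
azure_csi_aad_client_secret: "CLIENT_SECRET"
azure_csi_location: "francecentral"
azure_csi_resource_group: "my-resource-group"
azure_csi_vnet_name: "my-vnet"
azure_csi_vnet_resource_group: "my-vnet-resource-group"
azure_csi_subnet_name: "my-subnet"
azure_csi_security_group_name: "my-nsg"
azure_csi_use_instance_metadata: true
```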
## Test the Azure Disk CSI driver
To test the dynamic provisioning using Azure CSI driver, make sure to have the storage class deployed (through persistent volumes), and apply the following manifest:
```yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: disk.csi.azure.com
---
kind: Pod
apiVersion: v1
metadata:
  name: nginx-azuredisk
spec:
  nodeSelector:
    kubernetes.io/os: linux
  containers:
    - image: nginx
      name: nginx-azuredisk
      command:
        - "/bin/sh"
        - "-c"
        - while true; do echo $(date) >> /mnt/azuredisk/outfile; sleep 1; done
      volumeMounts:
        - name: azuredisk
          mountPath: "/mnt/azuredisk"
  volumes:
    - name: azuredisk
      persistentVolumeClaim:
        claimName: pvc-azuredisk
```

View File

@ -1,57 +1,84 @@
# Azure
To deploy Kubernetes on [Azure](https://azure.microsoft.com) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'azure'`.
To deploy Kubernetes on [Azure](https://azure.microsoft.com) uncomment the `cloud_provider` option in `group_vars/all/all.yml` and set it to `'azure'`.
All your instances are required to run in a resource group and a routing table has to be attached to the subnet your instances are in.
Not all features are supported yet though, for a list of the current status have a look [here](https://github.com/colemickens/azure-kubernetes-status)
Not all features are supported yet though, for a list of the current status have a look [here](https://github.com/Azure/AKS)
## Parameters
Before creating the instances you must first set the `azure_` variables in the `group_vars/all.yml` file.
Before creating the instances you must first set the `azure_` variables in the `group_vars/all/all.yml` file.
All of the values can be retrieved using the azure cli tool which can be downloaded here: <https://docs.microsoft.com/en-gb/azure/xplat-cli-install>
After installation you have to run `azure login` to get access to your account.
After installation you have to run `az login` to get access to your account.
### azure_cloud
Azure Stack has different API endpoints, depending on the Azure Stack deployment. These need to be provided to the Azure SDK.
Possible values are: `AzureChinaCloud`, `AzureGermanCloud`, `AzurePublicCloud` and `AzureUSGovernmentCloud`.
The full list of existing settings for the AzureChinaCloud, AzureGermanCloud, AzurePublicCloud and AzureUSGovernmentCloud
is available in the source code [here](https://github.com/kubernetes-sigs/cloud-provider-azure/blob/master/docs/cloud-provider-config.md)
### azure\_tenant\_id + azure\_subscription\_id
run `azure account show` to retrieve your subscription id and tenant id:
run `az account show` to retrieve your subscription id and tenant id:
`azure_tenant_id` -> Tenant ID field
`azure_subscription_id` -> ID field
### azure\_location
The region your instances are located, can be something like `westeurope` or `westcentralus`. A full list of region names can be retrieved via `azure location list`
The region your instances are located in; it can be something like `westeurope` or `westcentralus`. A full list of region names can be retrieved via `az account list-locations`
### azure\_resource\_group
The name of the resource group your instances are in, can be retrieved via `azure group list`
The name of the resource group your instances are in, can be retrieved via `az group list`
### azure\_vmtype
The type of the VM. Supported values are `standard` or `vmss`. If the VM is a `Virtual Machine` then the value is `standard`; if it is part of a `Virtual Machine Scale Set` then the value is `vmss`
### azure\_vnet\_name
The name of the virtual network your instances are in, can be retrieved via `azure network vnet list`
The name of the virtual network your instances are in, can be retrieved via `az network vnet list`
### azure\_vnet\_resource\_group
The name of the resource group that contains the vnet.
### azure\_subnet\_name
The name of the subnet your instances are in, can be retrieved via `azure network vnet subnet list --resource-group RESOURCE_GROUP --vnet-name VNET_NAME`
The name of the subnet your instances are in, can be retrieved via `az network vnet subnet list --resource-group RESOURCE_GROUP --vnet-name VNET_NAME`
### azure\_security\_group\_name
The name of the network security group your instances are in, can be retrieved via `azure network nsg list`
The name of the network security group your instances are in, can be retrieved via `az network nsg list`
### azure\_security\_group\_resource\_group
The name of the resource group that contains the network security group. Defaults to `azure_vnet_resource_group`
### azure\_route\_table\_name
The name of the route table used with your instances.
### azure\_route\_table\_resource\_group
The name of the resource group that contains the route table. Defaults to `azure_vnet_resource_group`
### azure\_aad\_client\_id + azure\_aad\_client\_secret
These will have to be generated first:
- Create an Azure AD Application with:
`azure ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET`
`az ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET`
display name, identifier-uri, homepage and the password can be chosen
Note the AppId in the output.
- Create Service principal for the application with:
`azure ad sp create --id AppId`
`az ad sp create --id AppId`
This is the AppId from the last command
- Create the role assignment with:
`azure role assignment create --role "Owner" --assignee http://kubernetes --subscription SUBSCRIPTION_ID`
`az role assignment create --role "Owner" --assignee http://kubernetes --subscription SUBSCRIPTION_ID`
azure\_aad\_client\_id must be set to the AppId, azure\_aad\_client\_secret is your chosen secret.
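Putting the parameters above together, a sketch of the finished configuration (placeholder values; variable names as documented on this page):
```yml
# group_vars/all/all.yml (sketch, placeholder values)
cloud_provider: azure
azure_cloud: AzurePublicCloud
azure_tenant_id: "00000000-0000-0000-0000-000000000000"
azure_subscription_id: "00000000-0000-0000-0000-000000000000"
azure_aad_client_id: "AppId-from-az-ad-app-create"
azure_aad_client_secret: "CLIENT_SECRET"
azure_location: "westeurope"
azure_resource_group: "my-resource-group"
azure_vmtype: standard
azure_vnet_name: "my-vnet"
azure_vnet_resource_group: "my-vnet-resource-group"
azure_subnet_name: "my-subnet"
azure_security_group_name: "my-nsg"
azure_route_table_name: "my-route-table"
```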

View File

@ -12,55 +12,55 @@ Check if the calico-node container is running
docker ps | grep calico
```
The **calicoctl** command allows to check the status of the network workloads.
**calicoctl.sh** is a wrapper script around calicoctl with the access credentials pre-configured; it allows you to check the status of the network workloads.
* Check the status of Calico nodes
```ShellSession
calicoctl node status
calicoctl.sh node status
```
or for versions prior to *v1.0.0*:
```ShellSession
calicoctl status
calicoctl.sh status
```
* Show the configured network subnet for containers
```ShellSession
calicoctl get ippool -o wide
calicoctl.sh get ippool -o wide
```
or for versions prior to *v1.0.0*:
```ShellSession
calicoctl pool show
calicoctl.sh pool show
```
* Show the workloads (ip addresses of containers and their located)
* Show the workloads (ip addresses of containers and their location)
```ShellSession
calicoctl get workloadEndpoint -o wide
calicoctl.sh get workloadEndpoint -o wide
```
and
```ShellSession
calicoctl get hostEndpoint -o wide
calicoctl.sh get hostEndpoint -o wide
```
or for versions prior to *v1.0.0*:
```ShellSession
calicoctl endpoint show --detail
calicoctl.sh endpoint show --detail
```
## Configuration
### Optional : Define network backend
In some cases you may want to define Calico network backend. Allowed values are 'bird', 'gobgp' or 'none'. Bird is a default value.
In some cases you may want to define the Calico network backend. Allowed values are `bird`, `vxlan` or `none`; `bird` is the default value.
To re-define it, you need to edit the inventory and add the group variable `calico_network_backend`.
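For example, to switch to the VXLAN backend (one of the allowed values above):
```yml
calico_network_backend: vxlan
```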
@ -199,9 +199,29 @@ To re-define health host please set the following variable in your inventory:
calico_healthhost: "0.0.0.0"
```
## Config encapsulation for cross server traffic
Calico supports two types of encapsulation: [VXLAN and IP in IP](https://docs.projectcalico.org/v3.11/networking/vxlan-ipip). VXLAN is supported in some environments where IP in IP is not (for example, Azure).
*IP in IP* and *VXLAN* are mutually exclusive modes.
Configure IP in IP mode. Possible values are `Always`, `CrossSubnet`, `Never`.
```yml
calico_ipip_mode: 'Always'
```
Configure VXLAN mode. Possible values are `Always`, `CrossSubnet`, `Never`.
```yml
calico_vxlan_mode: 'Never'
```
If you use VXLAN mode, BGP networking is not required. You can disable BGP to reduce the moving parts in your cluster by setting `calico_network_backend: vxlan`.
## Cloud providers configuration
Please refer to the official documentation, for example [GCE configuration](http://docs.projectcalico.org/v1.5/getting-started/docker/installation/gce) requires a security rule for calico ip-ip tunnels. Note, calico is always configured with ``ipip: true`` if the cloud provider was defined.
Please refer to the official documentation, for example [GCE configuration](http://docs.projectcalico.org/v1.5/getting-started/docker/installation/gce) requires a security rule for calico ip-ip tunnels. Note, calico is always configured with ``calico_ipip_mode: Always`` if the cloud provider was defined.
### Optional : Ignore kernel's RPF check setting
@ -215,6 +235,15 @@ Note that in OpenStack you must allow `ipip` traffic in your security groups,
otherwise you will experience timeouts.
To do this you must add a rule which allows it, for example:
### Optional : Felix configuration via extraenvs of calico node
Possible environment variable parameters for [configuring Felix](https://docs.projectcalico.org/reference/felix/configuration)
```yml
calico_node_extra_envs:
  FELIX_DEVICEROUTESOURCEADDRESS: 172.17.0.1
```
```ShellSession
neutron security-group-rule-create --protocol 4 --direction egress k8s-a0tp4t
neutron security-group-rule-create --protocol 4 --direction ingress k8s-a0tp4t

9
docs/centos8.md Normal file
View File

@ -0,0 +1,9 @@
# RHEL / CentOS 8
RHEL / CentOS 8 ships only with iptables-nft (i.e. without iptables-legacy).
The only tested configuration for now is using the Calico CNI.
You need to use K8S 1.17+ and add `calico_iptables_backend: "NFT"` or `calico_iptables_backend: "Auto"` to your configuration.
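For example, a minimal sketch of the group var (either documented value works):
```yml
calico_iptables_backend: "Auto"
```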
If you have containers that are using iptables in the host network namespace (`hostNetwork=true`),
you need to ensure they are using iptables-nft.
An example of how k8s does the autodetection can be found [in this PR](https://github.com/kubernetes/kubernetes/pull/82966)

20
docs/ci-setup.md Normal file
View File

@ -0,0 +1,20 @@
# CI Setup
## Pipeline
1. unit-tests: fast jobs for fast feedback (linting, etc...)
2. deploy-part1: small number of jobs to test if the PR works with default settings
3. deploy-part2: slow jobs testing different platforms, OS, settings, CNI, etc...
4. deploy-part3: very slow jobs (upgrades, etc...)
## Runners
Kubespray has 3 types of GitLab runners:
- packet runners: used for E2E jobs (usually long)
- light runners: used for short lived jobs
- auto scaling runners: used for on-demand resources, see [GitLab docs](https://docs.gitlab.com/runner/configuration/autoscale.html) for more info
## Vagrant
Vagrant jobs are using the [quay.io/kubespray/vagrant](/test-infra/vagrant-docker/Dockerfile) docker image with `/var/run/libvirt/libvirt-sock` exposed from the host, allowing the container to boot VMs on the host.

54
docs/ci.md Normal file
View File

@ -0,0 +1,54 @@
# CI test coverage
To generate this Matrix run `./tests/scripts/md-table/main.py`
## docker
| OS / CNI | calico | canal | cilium | contiv | flannel | kube-ovn | kube-router | macvlan | ovn4nfv | weave |
|---| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
amazon | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
centos7 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :white_check_mark: |
centos8 | :white_check_mark: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: | :x: |
debian10 | :x: | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian9 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: |
fedora31 | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
fedora32 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :white_check_mark: |
opensuse | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
oracle7 | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu16 | :x: | :white_check_mark: | :x: | :white_check_mark: | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :white_check_mark: |
ubuntu18 | :white_check_mark: | :x: | :white_check_mark: | :x: | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :white_check_mark: |
ubuntu20 | :white_check_mark: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
## crio
| OS / CNI | calico | canal | cilium | contiv | flannel | kube-ovn | kube-router | macvlan | ovn4nfv | weave |
|---| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
centos7 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
centos8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian10 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora31 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora32 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
opensuse | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
oracle7 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu16 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu18 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
## containerd
| OS / CNI | calico | canal | cilium | contiv | flannel | kube-ovn | kube-router | macvlan | ovn4nfv | weave |
|---| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
amazon | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
centos7 | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
centos8 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian10 | :white_check_mark: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
debian9 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora31 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
fedora32 | :x: | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: | :x: |
opensuse | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
oracle7 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu16 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |
ubuntu18 | :x: | :x: | :x: | :x: | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
ubuntu20 | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: | :x: |

13
docs/cilium.md Normal file
View File

@ -0,0 +1,13 @@
# Cilium
## Kube-proxy replacement with Cilium
Cilium can run without kube-proxy by setting `cilium_kube_proxy_replacement`
to `strict`.
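A minimal sketch of the inventory setting (variable name as documented above):
```yml
cilium_kube_proxy_replacement: strict
```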
Without kube-proxy, cilium needs to know the address of the kube-apiserver
and this must be set globally for all cilium components (agents and operators).
Hence, in this configuration in Kubespray, Cilium will always contact
the external load balancer (even from a node in the control plane),
and if there is no external load balancer it will ignore any local load
balancer deployed by Kubespray and **only contact the first master**.

View File

@ -1,4 +1,4 @@
# Comparaison
# Comparison
## Kubespray vs [Kops](https://github.com/kubernetes/kops)

31
docs/containerd.md Normal file
View File

@ -0,0 +1,31 @@
# containerd
[containerd] is an industry-standard container runtime with an emphasis on simplicity, robustness and portability.
Kubespray supports basic functionality for using containerd as the default container runtime in a cluster.
_To use the containerd container runtime set the following variables:_
## k8s-cluster.yml
```yaml
container_manager: containerd
```
## Containerd config
Example: define registry mirror for docker hub
```yaml
containerd_config:
  grpc:
    max_recv_message_size: 16777216
    max_send_message_size: 16777216
  debug:
    level: ""
  registries:
    "docker.io":
      - "https://mirror.gcr.io"
      - "https://registry-1.docker.io"
```
[containerd]: https://containerd.io/

Some files were not shown because too many files have changed in this diff.