Compare commits: `release-2.` … `reduce-vm-` — 394 commits (first 2214071a82, last 5ffdb7355a). The commit table in the compare view carried only abbreviated SHA1 hashes; the Author, Date, and message columns were empty.
@@ -36,3 +36,4 @@ exclude_paths:
   # Generated files
+  - tests/files/custom_cni/cilium.yaml
   - venv
   - .github
@@ -5,4 +5,4 @@ roles/kubernetes/control-plane/defaults/main/main.yml jinja[spacing]
 roles/kubernetes/kubeadm/defaults/main.yml jinja[spacing]
 roles/kubernetes/node/defaults/main.yml jinja[spacing]
 roles/kubernetes/preinstall/defaults/main.yml jinja[spacing]
-roles/kubespray-defaults/defaults/main.yaml jinja[spacing]
+roles/kubespray-defaults/defaults/main/main.yml jinja[spacing]
.gitattributes (vendored, new file, 1 line)
@@ -0,0 +1 @@
+docs/_sidebar.md linguist-generated=true
.github/ISSUE_TEMPLATE/bug-report.md (vendored, deleted, 44 lines)
@@ -1,44 +0,0 @@
----
-name: Bug Report
-about: Report a bug encountered while operating Kubernetes
-labels: kind/bug
-
----
-<!--
-Please, be ready for followup questions, and please respond in a timely
-manner. If we can't reproduce a bug or think a feature already exists, we
-might close your issue. If we're wrong, PLEASE feel free to reopen it and
-explain why.
--->
-
-**Environment**:
-- **Cloud provider or hardware configuration:**
-
-- **OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`):**
-
-- **Version of Ansible** (`ansible --version`):
-
-- **Version of Python** (`python --version`):
-
-
-**Kubespray version (commit) (`git rev-parse --short HEAD`):**
-
-
-**Network plugin used**:
-
-
-**Full inventory with variables (`ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"`):**
-<!-- We recommend using snippets services like https://gist.github.com/ etc. -->
-
-**Command used to invoke ansible**:
-
-
-**Output of ansible run**:
-<!-- We recommend using snippets services like https://gist.github.com/ etc. -->
-
-**Anything else do we need to know**:
-<!-- By running scripts/collect-info.yaml you can get a lot of useful informations.
-Script can be started by:
-ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
-(If you using CoreOS remember to add '-e ansible_python_interpreter=/opt/bin/python').
-After running this command you can find logs in `pwd`/logs.tar.gz. You can even upload somewhere entire file and paste link here.-->
.github/ISSUE_TEMPLATE/bug-report.yaml (vendored, new file, 124 lines)
@@ -0,0 +1,124 @@
+---
+name: Bug Report
+description: Report a bug encountered while using Kubespray
+labels: kind/bug
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Please, be ready for followup questions, and please respond in a timely
+        manner. If we can't reproduce a bug or think a feature already exists, we
+        might close your issue. If we're wrong, PLEASE feel free to reopen it and
+        explain why.
+  - type: textarea
+    id: problem
+    attributes:
+      label: What happened?
+      description: |
+        Please provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner.
+    validations:
+      required: true
+  - type: textarea
+    id: expected
+    attributes:
+      label: What did you expect to happen?
+    validations:
+      required: true
+
+  - type: textarea
+    id: repro
+    attributes:
+      label: How can we reproduce it (as minimally and precisely as possible)?
+    validations:
+      required: true
+
+  - type: markdown
+    attributes:
+      value: '### Environment'
+
+  - type: textarea
+    id: os
+    attributes:
+      label: OS
+      placeholder: 'printf "$(uname -srm)\n$(cat /etc/os-release)\n"'
+    validations:
+      required: true
+
+  - type: textarea
+    id: ansible_version
+    attributes:
+      label: Version of Ansible
+      placeholder: 'ansible --version'
+    validations:
+      required: true
+
+  - type: input
+    id: python_version
+    attributes:
+      label: Version of Python
+      placeholder: 'python --version'
+    validations:
+      required: true
+
+  - type: input
+    id: kubespray_version
+    attributes:
+      label: Version of Kubespray (commit)
+      placeholder: 'git rev-parse --short HEAD'
+    validations:
+      required: true
+
+  - type: dropdown
+    id: network_plugin
+    attributes:
+      label: Network plugin used
+      options:
+        - calico
+        - cilium
+        - cni
+        - custom_cni
+        - flannel
+        - kube-ovn
+        - kube-router
+        - macvlan
+        - meta
+        - multus
+        - ovn4nfv
+        - weave
+    validations:
+      required: true
+
+  - type: textarea
+    id: inventory
+    attributes:
+      label: Full inventory with variables
+      placeholder: 'ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"'
+      description: We recommend using snippets services like https://gist.github.com/ etc.
+    validations:
+      required: true
+
+  - type: input
+    id: ansible_command
+    attributes:
+      label: Command used to invoke ansible
+    validations:
+      required: true
+
+  - type: textarea
+    id: ansible_output
+    attributes:
+      label: Output of ansible run
+      description: We recommend using snippets services like https://gist.github.com/ etc.
+    validations:
+      required: true
+
+  - type: textarea
+    id: anything_else
+    attributes:
+      label: Anything else we need to know
+      description: |
+        By running scripts/collect-info.yaml you can get a lot of useful informations.
+        Script can be started by:
+        ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
+        (If you using CoreOS remember to add '-e ansible_python_interpreter=/opt/bin/python').
+        After running this command you can find logs in `pwd`/logs.tar.gz. You can even upload somewhere entire file and paste link here
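The OS field's placeholder above is itself a runnable shell one-liner; it prints the kernel string followed by the distribution's os-release metadata. A standalone check (assumes a Linux host where `/etc/os-release` exists):

```shell
# Same command as the OS placeholder in the issue form above
printf "$(uname -srm)\n$(cat /etc/os-release)\n"
```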
.github/ISSUE_TEMPLATE/config.yml (vendored, new file, 5 lines)
@@ -0,0 +1,5 @@
+---
+contact_links:
+  - name: Support Request
+    url: https://kubernetes.slack.com/channels/kubespray
+    about: Support request or question relating to Kubernetes
.github/ISSUE_TEMPLATE/enhancement.md (vendored, deleted, 11 lines)
@@ -1,11 +0,0 @@
----
-name: Enhancement Request
-about: Suggest an enhancement to the Kubespray project
-labels: kind/feature
-
----
-<!-- Please only use this template for submitting enhancement requests -->
-
-**What would you like to be added**:
-
-**Why is this needed**:
.github/ISSUE_TEMPLATE/enhancement.yaml (vendored, new file, 20 lines)
@@ -0,0 +1,20 @@
+---
+name: Enhancement Request
+description: Suggest an enhancement to the Kubespray project
+labels: kind/feature
+body:
+  - type: markdown
+    attributes:
+      value: Please only use this template for submitting enhancement requests
+  - type: textarea
+    id: what
+    attributes:
+      label: What would you like to be added
+    validations:
+      required: true
+  - type: textarea
+    id: why
+    attributes:
+      label: Why is this needed
+    validations:
+      required: true
.github/ISSUE_TEMPLATE/failing-test.md (vendored, deleted, 20 lines)
@@ -1,20 +0,0 @@
----
-name: Failing Test
-about: Report test failures in Kubespray CI jobs
-labels: kind/failing-test
-
----
-
-<!-- Please only use this template for submitting reports about failing tests in Kubespray CI jobs -->
-
-**Which jobs are failing**:
-
-**Which test(s) are failing**:
-
-**Since when has it been failing**:
-
-**Testgrid link**:
-
-**Reason for failure**:
-
-**Anything else we need to know**:
.github/ISSUE_TEMPLATE/failing-test.yaml (vendored, new file, 41 lines)
@@ -0,0 +1,41 @@
+---
+name: Failing Test
+description: Report test failures in Kubespray CI jobs
+labels: kind/failing-test
+body:
+  - type: markdown
+    attributes:
+      value: Please only use this template for submitting reports about failing tests in Kubespray CI jobs
+  - type: textarea
+    id: failing_jobs
+    attributes:
+      label: Which jobs are failing ?
+    validations:
+      required: true
+
+  - type: textarea
+    id: failing_tests
+    attributes:
+      label: Which tests are failing ?
+    validations:
+      required: true
+
+  - type: input
+    id: since_when
+    attributes:
+      label: Since when has it been failing ?
+    validations:
+      required: true
+
+  - type: textarea
+    id: failure_reason
+    attributes:
+      label: Reason for failure
+      description: If you don't know and have no guess, just put "Unknown"
+    validations:
+      required: true
+
+  - type: textarea
+    id: anything_else
+    attributes:
+      label: Anything else we need to know
.github/ISSUE_TEMPLATE/support.md (vendored, deleted, 18 lines)
@@ -1,18 +0,0 @@
----
-name: Support Request
-about: Support request or question relating to Kubespray
-labels: kind/support
-
----
-
-<!--
-STOP -- PLEASE READ!
-
-GitHub is not the right place for support requests.
-
-If you're looking for help, check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubespray) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
-
-You can also post your question on the [Kubernetes Slack](http://slack.k8s.io/) or the [Discuss Kubernetes](https://discuss.kubernetes.io/) forum.
-
-If the matter is security related, please disclose it privately via https://kubernetes.io/security/.
--->
.github/dependabot.yml (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
+version: 2
+updates:
+  - package-ecosystem: "pip"
+    directory: "/"
+    schedule:
+      interval: "weekly"
+    labels: [ "dependencies" ]
.gitignore (vendored, 2 additions)
@@ -3,6 +3,8 @@
 **/vagrant_ansible_inventory
 *.iml
 temp
+contrib/offline/container-images
+contrib/offline/container-images.tar.gz
 contrib/offline/offline-files
 contrib/offline/offline-files.tar.gz
 .idea
@@ -9,7 +9,7 @@ stages:
   - deploy-special

 variables:
-  KUBESPRAY_VERSION: v2.22.1
+  KUBESPRAY_VERSION: v2.25.0
   FAILFASTCI_NAMESPACE: 'kargo-ci'
   GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
   ANSIBLE_FORCE_COLOR: "true"

@@ -77,7 +77,6 @@ ci-authorized:
 include:
   - .gitlab-ci/build.yml
   - .gitlab-ci/lint.yml
-  - .gitlab-ci/shellcheck.yml
   - .gitlab-ci/terraform.yml
   - .gitlab-ci/packet.yml
   - .gitlab-ci/vagrant.yml
@@ -1,110 +1,40 @@
 ---
-yamllint:
-  extends: .job
-  stage: unit-tests
-  tags: [light]
-  variables:
-    LANG: C.UTF-8
+generate-pre-commit:
+  image: 'mikefarah/yq@sha256:bcb889a1f9bdb0613c8a054542d02360c2b1b35521041be3e1bd8fbd0534d411'
+  stage: build
+  before_script: []
   script:
-    - yamllint --strict .
-  except: ['triggers', 'master']
+    - >
+      yq -r < .pre-commit-config.yaml '.repos[].hooks[].id' |
+      sed 's/^/        - /' |
+      cat .gitlab-ci/pre-commit-dynamic-stub.yml - > pre-commit-generated.yml
+  artifacts:
+    paths:
+      - pre-commit-generated.yml
+
+run-pre-commit:
+  stage: unit-tests
+  trigger:
+    include:
+      - artifact: pre-commit-generated.yml
+        job: generate-pre-commit
+    strategy: depend

 vagrant-validate:
   extends: .job
   stage: unit-tests
   tags: [light]
   variables:
-    VAGRANT_VERSION: 2.3.4
+    VAGRANT_VERSION: 2.3.7
   script:
     - ./tests/scripts/vagrant-validate.sh
   except: ['triggers', 'master']

-ansible-lint:
-  extends: .job
-  stage: unit-tests
-  tags: [light]
-  script:
-    - ansible-lint -v
-  except: ['triggers', 'master']
-
 syntax-check:
   extends: .job
   stage: unit-tests
   tags: [light]
   variables:
     ANSIBLE_INVENTORY: inventory/local-tests.cfg
     ANSIBLE_REMOTE_USER: root
     ANSIBLE_BECOME: "true"
     ANSIBLE_BECOME_USER: root
     ANSIBLE_VERBOSITY: "3"
   script:
-    - ansible-playbook --syntax-check cluster.yml
+    - ansible-playbook --syntax-check playbooks/cluster.yml
-    - ansible-playbook --syntax-check upgrade-cluster.yml
+    - ansible-playbook --syntax-check playbooks/upgrade_cluster.yml
-    - ansible-playbook --syntax-check reset.yml
+    - ansible-playbook --syntax-check playbooks/reset.yml
-    - ansible-playbook --syntax-check extra_playbooks/upgrade-only-k8s.yml
   except: ['triggers', 'master']

-collection-build-install-sanity-check:
-  extends: .job
-  stage: unit-tests
-  tags: [light]
-  variables:
-    ANSIBLE_COLLECTIONS_PATH: "./ansible_collections"
-  script:
-    - ansible-galaxy collection build
-    - ansible-galaxy collection install kubernetes_sigs-kubespray-$(grep "^version:" galaxy.yml | awk '{print $2}').tar.gz
-    - ansible-galaxy collection list $(egrep -i '(name:\s+|namespace:\s+)' galaxy.yml | awk '{print $2}' | tr '\n' '.' | sed 's|\.$||g') | grep "^kubernetes_sigs.kubespray"
-    - test -f ansible_collections/kubernetes_sigs/kubespray/playbooks/cluster.yml
-    - test -f ansible_collections/kubernetes_sigs/kubespray/playbooks/reset.yml
-  except: ['triggers', 'master']
-
-tox-inventory-builder:
-  stage: unit-tests
-  tags: [light]
-  extends: .job
-  before_script:
-    - ./tests/scripts/rebase.sh
-  script:
-    - pip3 install tox
-    - cd contrib/inventory_builder && tox
-  except: ['triggers', 'master']
-
-markdownlint:
-  stage: unit-tests
-  tags: [light]
-  image: node
-  before_script:
-    - npm install -g markdownlint-cli@0.22.0
-  script:
-    - markdownlint $(find . -name '*.md' | grep -vF './.git') --ignore docs/_sidebar.md --ignore contrib/dind/README.md
-
-check-readme-versions:
-  stage: unit-tests
-  tags: [light]
-  image: python:3
-  script:
-    - tests/scripts/check_readme_versions.sh
-
-# TODO: convert to pre-commit hook
-check-galaxy-version:
-  stage: unit-tests
-  tags: [light]
-  image: python:3
-  script:
-    - tests/scripts/check_galaxy_version.sh
-
-check-typo:
-  stage: unit-tests
-  tags: [light]
-  image: python:3
-  script:
-    - tests/scripts/check_typo.sh
-
-ci-matrix:
-  stage: unit-tests
-  tags: [light]
-  image: python:3
-  script:
-    - tests/scripts/md-table/test.sh
@@ -61,23 +61,23 @@ molecule_cri-o:
 molecule_kata:
   extends: .molecule
   stage: deploy-part3
-  allow_failure: true
   script:
     - ./tests/scripts/molecule_run.sh -i container-engine/kata-containers
-  when: on_success
+  when: manual
+# FIXME: this test is broken (perma-failing)

 molecule_gvisor:
   extends: .molecule
   stage: deploy-part3
-  allow_failure: true
   script:
     - ./tests/scripts/molecule_run.sh -i container-engine/gvisor
-  when: on_success
+  when: manual
+# FIXME: this test is broken (perma-failing)

 molecule_youki:
   extends: .molecule
   stage: deploy-part3
-  allow_failure: true
   script:
     - ./tests/scripts/molecule_run.sh -i container-engine/youki
-  when: on_success
+  when: manual
+# FIXME: this test is broken (perma-failing)
@@ -31,8 +31,8 @@ packet_cleanup_old:
     - make cleanup-packet
   after_script: []

-# The ubuntu20-calico-aio jobs are meant as early stages to prevent running the full CI if something is horribly broken
-packet_ubuntu20-calico-aio:
+# The ubuntu20-calico-all-in-one jobs are meant as early stages to prevent running the full CI if something is horribly broken
+packet_ubuntu20-calico-all-in-one:
   stage: deploy-part1
   extends: .packet_pr
   when: on_success

@@ -41,22 +41,37 @@ packet_ubuntu20-calico-all-in-one:

 # ### PR JOBS PART2

-packet_ubuntu20-aio-docker:
+packet_ubuntu20-all-in-one-docker:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success

-packet_ubuntu20-calico-aio-hardening:
+packet_ubuntu20-calico-all-in-one-hardening:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success

-packet_ubuntu22-aio-docker:
+packet_ubuntu22-all-in-one-docker:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success

-packet_ubuntu22-calico-aio:
+packet_ubuntu22-calico-all-in-one:
   stage: deploy-part2
   extends: .packet_pr
   when: on_success

+packet_ubuntu24-all-in-one-docker:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: on_success
+
+packet_ubuntu24-calico-all-in-one:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: on_success
+
+packet_ubuntu24-calico-etcd-datastore:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: on_success

@@ -169,6 +184,11 @@ packet_almalinux8-docker:
   extends: .packet_pr
   when: on_success

+packet_amazon-linux-2-all-in-one:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: on_success
+
 packet_fedora38-docker-weave:
   stage: deploy-part2
   extends: .packet_pr

@@ -178,7 +198,7 @@ packet_fedora38-docker-weave:

 packet_opensuse-docker-cilium:
   stage: deploy-part2
   extends: .packet_pr
-  when: manual
+  when: on_success

 # ### MANUAL JOBS

@@ -235,11 +255,6 @@ packet_fedora37-calico-swap-selinux:
   extends: .packet_pr
   when: manual

-packet_amazon-linux-2-aio:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
 packet_almalinux8-calico-nodelocaldns-secondary:
   stage: deploy-part2
   extends: .packet_pr

@@ -260,6 +275,11 @@ packet_debian11-kubelet-csr-approver:
   extends: .packet_pr
   when: manual

+packet_debian12-custom-cni-helm:
+  stage: deploy-part2
+  extends: .packet_pr
+  when: manual
+
 # ### PR JOBS PART3
 # Long jobs (45min+)
.gitlab-ci/pre-commit-dynamic-stub.yml (new file, 17 lines)
@@ -0,0 +1,17 @@
+---
+# stub pipeline for dynamic generation
+pre-commit:
+  tags:
+    - light
+  image: 'ghcr.io/pre-commit-ci/runner-image@sha256:aaf2c7b38b22286f2d381c11673bec571c28f61dd086d11b43a1c9444a813cef'
+  variables:
+    PRE_COMMIT_HOME: /pre-commit-cache
+  script:
+    - pre-commit run -a $HOOK_ID
+  cache:
+    key: pre-commit-$HOOK_ID
+    paths:
+      - /pre-commit-cache
+  parallel:
+    matrix:
+      - HOOK_ID:
@@ -1,16 +0,0 @@
----
-shellcheck:
-  extends: .job
-  stage: unit-tests
-  tags: [light]
-  variables:
-    SHELLCHECK_VERSION: v0.7.1
-  before_script:
-    - ./tests/scripts/rebase.sh
-    - curl --silent --location "https://github.com/koalaman/shellcheck/releases/download/"${SHELLCHECK_VERSION}"/shellcheck-"${SHELLCHECK_VERSION}".linux.x86_64.tar.xz" | tar -xJv
-    - cp shellcheck-"${SHELLCHECK_VERSION}"/shellcheck /usr/bin/
-    - shellcheck --version
-  script:
-    # Run shellcheck for all *.sh
-    - find . -name '*.sh' -not -path './.git/*' | xargs shellcheck --severity error
-  except: ['triggers', 'master']
@@ -18,12 +18,12 @@
     - ./tests/scripts/testcases_run.sh
   after_script:
     - chronic ./tests/scripts/testcases_cleanup.sh
-  allow_failure: true

 vagrant_ubuntu20-calico-dual-stack:
   stage: deploy-part2
   extends: .vagrant
-  when: on_success
+  when: manual
+# FIXME: this test if broken (perma-failing)

 vagrant_ubuntu20-weave-medium:
   stage: deploy-part2

@@ -55,7 +55,8 @@ vagrant_ubuntu20-kube-router-svc-proxy:
 vagrant_fedora37-kube-router:
   stage: deploy-part2
   extends: .vagrant
-  when: on_success
+  when: manual
+# FIXME: this test if broken (perma-failing)

 vagrant_centos7-kube-router:
   stage: deploy-part2
@@ -1,3 +0,0 @@
----
-MD013: false
-MD029: false
.md_style.rb (new file, 4 lines)
@@ -0,0 +1,4 @@
+all
+exclude_rule 'MD013'
+exclude_rule 'MD029'
+rule 'MD007', :indent => 2
@ -1,8 +1,7 @@
|
||||
---
|
||||
repos:
|
||||
|
||||
- repo: https://github.com/pre-commit/pre-commit-hooks
|
||||
rev: v3.4.0
|
||||
rev: v4.6.0
|
||||
hooks:
|
||||
- id: check-added-large-files
|
||||
- id: check-case-conflict
|
||||
@ -16,47 +15,60 @@ repos:
|
||||
- id: trailing-whitespace
|
||||
|
||||
- repo: https://github.com/adrienverge/yamllint.git
|
||||
rev: v1.27.1
|
||||
rev: v1.35.1
|
||||
hooks:
|
||||
- id: yamllint
|
||||
args: [--strict]
|
||||
|
||||
- repo: https://github.com/markdownlint/markdownlint
|
||||
rev: v0.11.0
|
||||
rev: v0.12.0
|
||||
hooks:
|
||||
- id: markdownlint
|
||||
args: [ -r, "~MD013,~MD029" ]
|
||||
exclude: "^.git"
|
||||
exclude: "^.github|(^docs/_sidebar\\.md$)"
|
||||
|
||||
- repo: https://github.com/jumanjihouse/pre-commit-hooks
|
||||
rev: 3.0.0
|
||||
- repo: https://github.com/shellcheck-py/shellcheck-py
|
||||
rev: v0.10.0.1
|
||||
hooks:
|
||||
- id: shellcheck
|
||||
args: [ --severity, "error" ]
|
||||
args: ["--severity=error"]
|
||||
exclude: "^.git"
|
||||
files: "\\.sh$"
|
||||
|
||||
- repo: local
|
||||
- repo: https://github.com/ansible/ansible-lint
|
||||
rev: v24.5.0
|
||||
hooks:
  - id: ansible-lint
    name: ansible-lint
    entry: ansible-lint -v
    language: python
    pass_filenames: false
    additional_dependencies:
      - .[community]
      - ansible==9.5.1
      - jsonschema==4.22.0
      - jmespath==1.0.1
      - netaddr==1.2.1
      - distlib

- repo: https://github.com/VannTen/misspell
  # Waiting on https://github.com/golangci/misspell/pull/19 to get merged
  rev: 8592a4e
  hooks:
    - id: misspell
      exclude: "OWNERS_ALIASES$"

- repo: local
  hooks:
    - id: ansible-syntax-check
      name: ansible-syntax-check
      entry: env ANSIBLE_INVENTORY=inventory/local-tests.cfg ANSIBLE_REMOTE_USER=root ANSIBLE_BECOME="true" ANSIBLE_BECOME_USER=root ANSIBLE_VERBOSITY="3" ansible-playbook --syntax-check
      language: python
      files: "^cluster.yml|^upgrade-cluster.yml|^reset.yml|^extra_playbooks/upgrade-only-k8s.yml"
      additional_dependencies:
        - ansible==9.5.1

    - id: tox-inventory-builder
      name: tox-inventory-builder
      entry: bash -c "cd contrib/inventory_builder && tox"
      language: python
      pass_filenames: false
      additional_dependencies:
        - tox==4.15.0

    - id: check-readme-versions
      name: check-readme-versions
@@ -64,8 +76,36 @@ repos:
      language: script
      pass_filenames: false

    - id: ci-matrix
      name: ci-matrix
      entry: tests/scripts/md-table/test.sh
    - id: collection-build-install
      name: Build and install kubernetes-sigs.kubespray Ansible collection
      language: python
      additional_dependencies:
        - ansible-core>=2.16.4
        - distlib
      entry: tests/scripts/collection-build-install.sh
      pass_filenames: false

    - id: generate-docs-sidebar
      name: generate-docs-sidebar
      entry: scripts/gen_docs_sidebar.sh
      language: script
      pass_filenames: false

    - id: ci-matrix
      name: ci-matrix
      entry: tests/scripts/md-table/main.py
      language: python
      pass_filenames: false
      additional_dependencies:
        - jinja2
        - pathlib
        - pyaml

    - id: jinja-syntax-check
      name: jinja-syntax-check
      entry: tests/scripts/check-templates.py
      language: python
      types:
        - jinja
      additional_dependencies:
        - jinja2
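For context on how the `files:` key of the ansible-syntax-check hook behaves: pre-commit treats the value as a regular expression matched against each file path. A quick sketch (the sample paths below are illustrative, not from the repo):

```shell
# Match some sample paths against the hook's files: pattern with grep -E.
pattern='^cluster.yml|^upgrade-cluster.yml|^reset.yml|^extra_playbooks/upgrade-only-k8s.yml'
printf '%s\n' cluster.yml reset.yml roles/foo/tasks/main.yml | grep -E "$pattern"
# cluster.yml and reset.yml match; roles/foo/tasks/main.yml does not
```

Because each alternative is anchored with `^`, only the top-level playbooks (and the one extra playbook) trigger the syntax check.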
@@ -3,6 +3,7 @@ extends: default

ignore: |
  .git/
  .github/
  # Generated file
  tests/files/custom_cni/cilium.yaml
61 Dockerfile
@@ -1,5 +1,8 @@
# syntax=docker/dockerfile:1

# Use immutable image tags rather than mutable tags (like ubuntu:22.04)
FROM ubuntu:jammy-20230308
FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37

# Some tools like yamllint need this
# Pip needs this as well at the moment to install ansible
# (and potentially other packages)
@@ -7,7 +10,37 @@ FROM ubuntu:jammy-20230308
ENV LANG=C.UTF-8 \
    DEBIAN_FRONTEND=noninteractive \
    PYTHONDONTWRITEBYTECODE=1

WORKDIR /kubespray

# hadolint ignore=DL3008
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    apt-get update -q \
    && apt-get install -yq --no-install-recommends \
       curl \
       python3 \
       python3-pip \
       sshpass \
       vim \
       rsync \
       openssh-client \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /var/log/*

RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
    --mount=type=cache,sharing=locked,id=pipcache,mode=0777,target=/root/.cache/pip \
    pip install --no-compile --no-cache-dir -r requirements.txt \
    && find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;

SHELL ["/bin/bash", "-o", "pipefail", "-c"]

RUN --mount=type=bind,source=roles/kubespray-defaults/defaults/main/main.yml,target=roles/kubespray-defaults/defaults/main/main.yml \
    KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main/main.yml) \
    OS_ARCHITECTURE=$(dpkg --print-architecture) \
    && curl -L "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${OS_ARCHITECTURE}/kubectl" -o /usr/local/bin/kubectl \
    && echo "$(curl -L "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${OS_ARCHITECTURE}/kubectl.sha256")" /usr/local/bin/kubectl | sha256sum --check \
    && chmod a+x /usr/local/bin/kubectl
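The kubectl step above relies on two shell idioms: extracting a value with `sed -n 's/.../p'` and verifying a download with `sha256sum --check`. A minimal sketch of both, using a throwaway file (the file contents here are illustrative samples, not the real defaults file):

```shell
# Extract a YAML scalar the same way the Dockerfile does.
tmp=$(mktemp)
printf 'kube_owner: root\nkube_version: v1.29.5\n' > "$tmp"
KUBE_VERSION=$(sed -n 's/^kube_version: //p' "$tmp")
echo "$KUBE_VERSION"    # prints v1.29.5

# sha256sum --check reads "<hash>  <path>" lines from stdin and verifies the file.
echo "$(sha256sum "$tmp" | awk '{print $1}')  $tmp" | sha256sum --check
rm -f "$tmp"
```

In the real Dockerfile the hash comes from `kubectl.sha256` published alongside the binary, so the build fails fast if the download is corrupted.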
COPY *.yml ./
COPY *.cfg ./
COPY roles ./roles
@@ -17,29 +50,3 @@ COPY library ./library
COPY extra_playbooks ./extra_playbooks
COPY playbooks ./playbooks
COPY plugins ./plugins

RUN apt update -q \
    && apt install -yq --no-install-recommends \
       curl \
       python3 \
       python3-pip \
       sshpass \
       vim \
       rsync \
       openssh-client \
    && pip install --no-compile --no-cache-dir \
       ansible==7.6.0 \
       ansible-core==2.14.6 \
       cryptography==41.0.1 \
       jinja2==3.1.2 \
       netaddr==0.8.0 \
       jmespath==1.0.1 \
       MarkupSafe==2.1.3 \
       ruamel.yaml==0.17.21 \
       passlib==1.7.4 \
    && KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
    && curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl -o /usr/local/bin/kubectl \
    && echo $(curl -L https://dl.k8s.io/release/$KUBE_VERSION/bin/linux/$(dpkg --print-architecture)/kubectl.sha256) /usr/local/bin/kubectl | sha256sum --check \
    && chmod a+x /usr/local/bin/kubectl \
    && rm -rf /var/lib/apt/lists/* /var/log/* \
    && find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;
@@ -1,31 +1,24 @@
aliases:
  kubespray-approvers:
    - mattymo
    - chadswen
    - mirwan
    - miouge1
    - luckysb
    - cristicalin
    - floryut
    - oomichi
    - cristicalin
    - liupeng0518
    - yankay
    - mzaian
  kubespray-reviewers:
    - holmsten
    - bozzo
    - eppo
    - oomichi
    - jayonlau
    - cristicalin
    - liupeng0518
    - yankay
    - cyclinder
    - mzaian
    - mrfreezeex
    - erikjiang
  kubespray-emeritus_approvers:
    - riverzhang
    - atoms
    - ant31
  kubespray-reviewers:
    - cyclinder
    - erikjiang
    - mrfreezeex
    - mzaian
    - vannten
    - yankay
  kubespray-emeritus_approvers:
    - atoms
    - chadswen
    - luckysb
    - mattymo
    - miouge1
    - riverzhang
    - woopstar
143 README.md
@@ -5,7 +5,7 @@
If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)

- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Equinix Metal](docs/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- Can be deployed on **[AWS](docs/cloud_providers/aws.md), GCE, [Azure](docs/cloud_providers/azure.md), [OpenStack](docs/cloud_providers/openstack.md), [vSphere](docs/cloud_providers/vsphere.md), [Equinix Metal](docs/cloud_providers/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
@@ -19,7 +19,7 @@ Below are several ways to use Kubespray to deploy a Kubernetes cluster.

#### Usage

Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
then run the following steps:

```ShellSession
@@ -75,18 +75,18 @@ You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mou
to access the inventory and SSH key in the container, like this:

```ShellSession
git checkout v2.22.1
docker pull quay.io/kubespray/kubespray:v2.22.1
git checkout v2.25.0
docker pull quay.io/kubespray/kubespray:v2.25.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
  --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
  quay.io/kubespray/kubespray:v2.22.1 bash
  quay.io/kubespray/kubespray:v2.25.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```

#### Collection

See [here](docs/ansible_collection.md) if you wish to use this repository as an Ansible collection
See [here](docs/ansible/ansible_collection.md) if you wish to use this repository as an Ansible collection

### Vagrant

@@ -99,7 +99,7 @@ python -V && pip -V

If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>

Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
then run the following step:

```ShellSession
@@ -109,80 +109,79 @@ vagrant up
## Documents

- [Requirements](#requirements)
- [Kubespray vs ...](docs/comparisons.md)
- [Getting started](docs/getting-started.md)
- [Setting up your first cluster](docs/setting-up-your-first-cluster.md)
- [Ansible inventory and tags](docs/ansible.md)
- [Integration with existing ansible repo](docs/integration.md)
- [Deployment data variables](docs/vars.md)
- [DNS stack](docs/dns-stack.md)
- [HA mode](docs/ha-mode.md)
- [Kubespray vs ...](docs/getting_started/comparisons.md)
- [Getting started](docs/getting_started/getting-started.md)
- [Setting up your first cluster](docs/getting_started/setting-up-your-first-cluster.md)
- [Ansible inventory and tags](docs/ansible/ansible.md)
- [Integration with existing ansible repo](docs/operations/integration.md)
- [Deployment data variables](docs/ansible/vars.md)
- [DNS stack](docs/advanced/dns-stack.md)
- [HA mode](docs/operations/ha-mode.md)
- [Network plugins](#network-plugins)
- [Vagrant install](docs/vagrant.md)
- [Flatcar Container Linux bootstrap](docs/flatcar.md)
- [Fedora CoreOS bootstrap](docs/fcos.md)
- [Debian Jessie setup](docs/debian.md)
- [openSUSE setup](docs/opensuse.md)
- [Downloaded artifacts](docs/downloads.md)
- [Cloud providers](docs/cloud.md)
- [OpenStack](docs/openstack.md)
- [AWS](docs/aws.md)
- [Azure](docs/azure.md)
- [vSphere](docs/vsphere.md)
- [Equinix Metal](docs/equinix-metal.md)
- [Large deployments](docs/large-deployments.md)
- [Adding/replacing a node](docs/nodes.md)
- [Upgrades basics](docs/upgrades.md)
- [Air-Gap installation](docs/offline-environment.md)
- [NTP](docs/ntp.md)
- [Hardening](docs/hardening.md)
- [Mirror](docs/mirror.md)
- [Roadmap](docs/roadmap.md)
- [Vagrant install](docs/developers/vagrant.md)
- [Flatcar Container Linux bootstrap](docs/operating_systems/flatcar.md)
- [Fedora CoreOS bootstrap](docs/operating_systems/fcos.md)
- [openSUSE setup](docs/operating_systems/opensuse.md)
- [Downloaded artifacts](docs/advanced/downloads.md)
- [Cloud providers](docs/cloud_providers/cloud.md)
- [OpenStack](docs/cloud_providers/openstack.md)
- [AWS](docs/cloud_providers/aws.md)
- [Azure](docs/cloud_providers/azure.md)
- [vSphere](docs/cloud_providers/vsphere.md)
- [Equinix Metal](docs/cloud_providers/equinix-metal.md)
- [Large deployments](docs/operations/large-deployments.md)
- [Adding/replacing a node](docs/operations/nodes.md)
- [Upgrades basics](docs/operations/upgrades.md)
- [Air-Gap installation](docs/operations/offline-environment.md)
- [NTP](docs/advanced/ntp.md)
- [Hardening](docs/operations/hardening.md)
- [Mirror](docs/operations/mirror.md)
- [Roadmap](docs/roadmap/roadmap.md)
## Supported Linux Distributions

- **Flatcar Container Linux by Kinvolk**
- **Debian** Bookworm, Bullseye, Buster
- **Ubuntu** 20.04, 22.04
- **CentOS/RHEL** 7, [8, 9](docs/centos.md#centos-8)
- **Ubuntu** 20.04, 22.04, 24.04
- **CentOS/RHEL** 7, [8, 9](docs/operating_systems/centos.md#centos-8)
- **Fedora** 37, 38
- **Fedora CoreOS** (see [fcos Note](docs/fcos.md))
- **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
- **openSUSE** Leap 15.x/Tumbleweed
- **Oracle Linux** 7, [8, 9](docs/centos.md#centos-8)
- **Alma Linux** [8, 9](docs/centos.md#centos-8)
- **Rocky Linux** [8, 9](docs/centos.md#centos-8)
- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/kylinlinux.md))
- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/amazonlinux.md))
- **UOS Linux** (experimental: see [uos linux notes](docs/uoslinux.md))
- **openEuler** (experimental: see [openEuler notes](docs/openeuler.md))
- **Oracle Linux** 7, [8, 9](docs/operating_systems/centos.md#centos-8)
- **Alma Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
- **Rocky Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/operating_systems/kylinlinux.md))
- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/operating_systems/amazonlinux.md))
- **UOS Linux** (experimental: see [uos linux notes](docs/operating_systems/uoslinux.md))
- **openEuler** (experimental: see [openEuler notes](docs/operating_systems/openeuler.md))

Note: Upstart/SysV init based OS types are not supported.

## Supported Components

- Core
  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.27.5
  - [etcd](https://github.com/etcd-io/etcd) v3.5.7
  - [docker](https://www.docker.com/) v20.10 (see note)
  - [containerd](https://containerd.io/) v1.7.5
  - [cri-o](http://cri-o.io/) v1.27 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.29.5
  - [etcd](https://github.com/etcd-io/etcd) v3.5.12
  - [docker](https://www.docker.com/) v26.1
  - [containerd](https://containerd.io/) v1.7.16
  - [cri-o](http://cri-o.io/) v1.29.1 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
  - [cni-plugins](https://github.com/containernetworking/plugins) v1.2.0
  - [calico](https://github.com/projectcalico/calico) v3.25.2
  - [cilium](https://github.com/cilium/cilium) v1.13.4
  - [calico](https://github.com/projectcalico/calico) v3.27.3
  - [cilium](https://github.com/cilium/cilium) v1.15.4
  - [flannel](https://github.com/flannel-io/flannel) v0.22.0
  - [kube-ovn](https://github.com/alauda/kube-ovn) v1.11.5
  - [kube-router](https://github.com/cloudnativelabs/kube-router) v1.5.1
  - [kube-router](https://github.com/cloudnativelabs/kube-router) v2.0.0
  - [multus](https://github.com/k8snetworkplumbingwg/multus-cni) v3.8
  - [weave](https://github.com/weaveworks/weave) v2.8.1
  - [kube-vip](https://github.com/kube-vip/kube-vip) v0.5.12
  - [kube-vip](https://github.com/kube-vip/kube-vip) v0.8.0
- Application
  - [cert-manager](https://github.com/jetstack/cert-manager) v1.11.1
  - [coredns](https://github.com/coredns/coredns) v1.10.1
  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.8.1
  - [cert-manager](https://github.com/jetstack/cert-manager) v1.13.2
  - [coredns](https://github.com/coredns/coredns) v1.11.1
  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.10.1
  - [krew](https://github.com/kubernetes-sigs/krew) v0.4.4
  - [argocd](https://argoproj.github.io/) v2.8.0
  - [helm](https://helm.sh/) v3.12.3
  - [argocd](https://argoproj.github.io/) v2.11.0
  - [helm](https://helm.sh/) v3.14.2
  - [metallb](https://metallb.universe.tf/) v0.13.9
  - [registry](https://github.com/distribution/distribution) v2.8.1
- Storage Plugin
@@ -190,21 +189,21 @@ Note: Upstart/SysV init based OS types are not supported.
  - [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
  - [aws-ebs-csi-plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) v0.5.0
  - [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) v1.10.0
  - [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) v1.22.0
  - [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) v1.29.0
  - [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) v1.9.2
  - [local-path-provisioner](https://github.com/rancher/local-path-provisioner) v0.0.24
  - [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) v2.5.0
  - [node-feature-discovery](https://github.com/kubernetes-sigs/node-feature-discovery) v0.14.2

## Container Runtime Notes

- Supported Docker versions are 18.09, 19.03, 20.10, 23.0 and 24.0. The *recommended* Docker version is 20.10 (except on Debian Bookworm, which no longer supports 20.10 and below). `Kubelet` might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the YUM ``versionlock`` plugin or ``apt pin``.
- The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)

## Requirements

- **Minimum required version of Kubernetes is v1.25**
- **Minimum required version of Kubernetes is v1.28**
- **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/operations/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
- If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
- The **firewalls are not managed**: you'll need to implement your own rules as you have before.
@@ -225,7 +224,7 @@ These limits are safeguarded by Kubespray. Actual requirements for your workload

You can choose among ten network plugins. (default: `calico`, except Vagrant uses `flannel`)

- [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.
- [flannel](docs/CNI/flannel.md): gre/vxlan (layer 2) networking.

- [Calico](https://docs.tigera.io/calico/latest/about/) is a networking and network policy provider. Calico supports a flexible set of networking options
  designed to give you the most efficient networking across a range of situations, including non-overlay
@@ -234,32 +233,32 @@ You can choose among ten network plugins. (default: `calico`, except Vagrant use

- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.

- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
- [weave](docs/CNI/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
  (Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).

- [kube-ovn](docs/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
- [kube-ovn](docs/CNI/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.

- [kube-router](docs/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
- [kube-router](docs/CNI/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
  simplicity and high performance: it uses IPVS to provide Kube Services Proxy (if set up to replace kube-proxy),
  iptables for network policies, and BGP for pods' L3 networking (with optional BGP peering with out-of-cluster BGP peers).
  It can also optionally advertise routes to Kubernetes cluster Pods CIDRs, ClusterIPs, ExternalIPs and LoadBalancerIPs.

- [macvlan](docs/macvlan.md): Macvlan is a Linux network driver. Pods have their own unique MAC and IP address, connected directly to the physical (layer 2) network.
- [macvlan](docs/CNI/macvlan.md): Macvlan is a Linux network driver. Pods have their own unique MAC and IP address, connected directly to the physical (layer 2) network.

- [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
- [multus](docs/CNI/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.

- [custom_cni](roles/network-plugin/custom_cni/): You can specify manifests that will be applied to the cluster to bring your own CNI, including ones not supported by Kubespray.
  See `tests/files/custom_cni/README.md` and `tests/files/custom_cni/values.yaml` for an example with a CNI provided by a Helm Chart.

The network plugin to use is defined by the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/netcheck.md).
See also [Network checker](docs/advanced/netcheck.md).

## Ingress Plugins

- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.

- [metallb](docs/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
- [metallb](docs/ingress/metallb.md): the MetalLB bare-metal service LoadBalancer provider.

## Community docs and resources

@@ -280,4 +279,4 @@ See also [Network checker](docs/netcheck.md).

CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Equinix Metal](https://metal.equinix.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).

See the [test matrix](docs/test_cases.md) for details.
See the [test matrix](docs/developers/test_cases.md) for details.
24 RELEASE.md
@@ -3,17 +3,19 @@
The Kubespray Project is released on an as-needed basis. The process is as follows:

1. An issue is proposing a new release with a changelog since the last release. Please see [a good sample issue](https://github.com/kubernetes-sigs/kubespray/issues/8325)
2. At least one of the [approvers](OWNERS_ALIASES) must approve this release
3. The `kube_version_min_required` variable is set to `n-1`
4. Remove hashes for [EOL versions](https://github.com/kubernetes/website/blob/main/content/en/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
5. Create the release note with [Kubernetes Release Notes Generator](https://github.com/kubernetes/release/blob/master/cmd/release-notes/README.md). See the following `Release note creation` section for the details.
6. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
7. An approver creates a release branch in the form `release-X.Y`
8. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) container images are built and tagged. See the following `Container image creation` section for the details.
9. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
10. The release issue is closed
11. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
12. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
1. At least one of the [approvers](OWNERS_ALIASES) must approve this release
1. (Only for major releases) The `kube_version_min_required` variable is set to `n-1`
1. (Only for major releases) Remove hashes for [EOL versions](https://github.com/kubernetes/website/blob/main/content/en/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
1. Create the release note with [Kubernetes Release Notes Generator](https://github.com/kubernetes/release/blob/master/cmd/release-notes/README.md). See the following `Release note creation` section for the details.
1. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
1. (Only for major releases) An approver creates a release branch in the form `release-X.Y`
1. (For major releases) On the `master` branch: bump the version in `galaxy.yml` to the next expected major release (X.y.0 with y = Y + 1), make a Pull Request.
1. (For minor releases) On the `release-X.Y` branch: bump the version in `galaxy.yml` to the next expected minor release (X.Y.z with z = Z + 1), make a Pull Request.
1. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) container images are built and tagged. See the following `Container image creation` section for the details.
1. (Only for major releases) The `KUBESPRAY_VERSION` in `.gitlab-ci.yml` is upgraded to the version we just released # TODO clarify this, this variable is for testing upgrades.
1. The release issue is closed
1. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
1. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
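The two `galaxy.yml` bump steps can be sketched with plain shell arithmetic (the tag value here is an example):

```shell
# Split a release tag vX.Y.Z and derive the next expected versions.
tag=v2.25.0
ver=${tag#v}
X=${ver%%.*}; rest=${ver#*.}; Y=${rest%%.*}; Z=${rest#*.}
echo "$X.$((Y + 1)).0"      # next minor series on master: 2.26.0
echo "$X.$Y.$((Z + 1))"     # next patch on release-2.25: 2.25.1
```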

## Major/minor releases and milestones
31 Vagrantfile vendored
@@ -21,13 +21,15 @@ SUPPORTED_OS = {
  "flatcar-edge" => {box: "flatcar-edge", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["edge"]},
  "ubuntu2004" => {box: "generic/ubuntu2004", user: "vagrant"},
  "ubuntu2204" => {box: "generic/ubuntu2204", user: "vagrant"},
  "ubuntu2404" => {box: "bento/ubuntu-24.04", user: "vagrant"},
  "centos" => {box: "centos/7", user: "vagrant"},
  "centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
  "centos8" => {box: "centos/8", user: "vagrant"},
  "centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
  "almalinux8" => {box: "almalinux/8", user: "vagrant"},
  "almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"},
  "rockylinux8" => {box: "generic/rocky8", user: "vagrant"},
  "rockylinux8" => {box: "rockylinux/8", user: "vagrant"},
  "rockylinux9" => {box: "rockylinux/9", user: "vagrant"},
  "fedora37" => {box: "fedora/37-cloud-base", user: "vagrant"},
  "fedora38" => {box: "fedora/38-cloud-base", user: "vagrant"},
  "opensuse" => {box: "opensuse/Leap-15.4.x86_64", user: "vagrant"},
@@ -36,6 +38,8 @@ SUPPORTED_OS = {
  "oraclelinux8" => {box: "generic/oracle8", user: "vagrant"},
  "rhel7" => {box: "generic/rhel7", user: "vagrant"},
  "rhel8" => {box: "generic/rhel8", user: "vagrant"},
  "debian11" => {box: "debian/bullseye64", user: "vagrant"},
  "debian12" => {box: "debian/bookworm64", user: "vagrant"},
}

if File.exist?(CONFIG)
@@ -77,7 +81,10 @@ $libvirt_nested ||= false
$ansible_verbosity ||= false
$ansible_tags ||= ENV['VAGRANT_ANSIBLE_TAGS'] || ""

$vagrant_dir ||= File.join(File.dirname(__FILE__), ".vagrant")

$playbook ||= "cluster.yml"
$extra_vars ||= {}

host_vars = {}

@@ -96,7 +103,7 @@ $inventory = File.absolute_path($inventory, File.dirname(__FILE__))
# if $inventory has a hosts.ini file use it, otherwise copy over
# vars etc to where vagrant expects dynamic inventory to be
if ! File.exist?(File.join(File.dirname($inventory), "hosts.ini"))
  $vagrant_ansible = File.join(File.dirname(__FILE__), ".vagrant", "provisioners", "ansible")
  $vagrant_ansible = File.join(File.absolute_path($vagrant_dir), "provisioners", "ansible")
  FileUtils.mkdir_p($vagrant_ansible) if ! File.exist?($vagrant_ansible)
  $vagrant_inventory = File.join($vagrant_ansible, "inventory")
  FileUtils.rm_f($vagrant_inventory)
@@ -182,6 +189,14 @@ Vagrant.configure("2") do |config|
          lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "scsi"
        end
      end
      node.vm.provider :virtualbox do |vb|
        # always make /dev/sd{a/b/c} so that CI can ensure that
        # virtualbox and libvirt will have the same devices to use for OSDs
        (1..$kube_node_instances_with_disks_number).each do |d|
          vb.customize ['createhd', '--filename', "disk-#{i}-#{driverletters[d]}-#{DISK_UUID}.disk", '--size', $kube_node_instances_with_disks_size] # 10GB disk
          vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', d, '--device', 0, '--type', 'hdd', '--medium', "disk-#{i}-#{driverletters[d]}-#{DISK_UUID}.disk", '--nonrotational', 'on', '--mtype', 'normal']
        end
      end
    end

    if $expose_docker_tcp
@@ -232,6 +247,13 @@ Vagrant.configure("2") do |config|
      SHELL
    end

    # Rocky Linux boxes need UEFI
    if ["rockylinux8", "rockylinux9"].include? $os
      config.vm.provider "libvirt" do |domain|
        domain.loader = "/usr/share/OVMF/x64/OVMF_CODE.fd"
      end
    end

    # Disable firewalld on oraclelinux/redhat vms
    if ["oraclelinux","oraclelinux8","rhel7","rhel8","rockylinux8"].include? $os
      node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld"
@@ -255,7 +277,8 @@ Vagrant.configure("2") do |config|
      "kubectl_localhost": "True",
      "local_path_provisioner_enabled": "#{$local_path_provisioner_enabled}",
      "local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}",
      "ansible_ssh_user": SUPPORTED_OS[$os][:user]
      "ansible_ssh_user": SUPPORTED_OS[$os][:user],
      "unsafe_show_logs": "True"
    }

    # Only execute the Ansible provisioner once, when all the machines are up and ready.
@@ -263,6 +286,7 @@ Vagrant.configure("2") do |config|
    if i == $num_instances
      node.vm.provision "ansible" do |ansible|
        ansible.playbook = $playbook
        ansible.compatibility_mode = "2.0"
        ansible.verbose = $ansible_verbosity
        $ansible_inventory_path = File.join($inventory, "hosts.ini")
        if File.exist?($ansible_inventory_path)
@@ -273,6 +297,7 @@ Vagrant.configure("2") do |config|
          ansible.host_key_checking = false
          ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
          ansible.host_vars = host_vars
          ansible.extra_vars = $extra_vars
          if $ansible_tags != ""
            ansible.tags = [$ansible_tags]
          end
@@ -1,6 +1,6 @@
[ssh_connection]
pipelining=True
ansible_ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
[defaults]
# https://github.com/ansible/ansible/issues/56930 (to ignore group names with - and .)
@@ -49,7 +49,7 @@ If you need to delete all resources from a resource group, simply call:

## Installing Ansible and the dependencies

Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)

## Generating an inventory for kubespray

@@ -5,13 +5,17 @@
Container image collecting script for offline deployment

This script has two features:
(1) Get container images from an environment which is deployed online.
(1) Get container images from an environment which is deployed online, or set the IMAGES_FROM_FILE
environment variable to get images from a file (e.g. temp/images.list after running the
./generate_list.sh script).
(2) Deploy local container registry and register the container images to the registry.

Step(1) should be done at an online site as preparation, then we bring the collected images
to the target offline environment. If images are from a private registry,
you need to set the `PRIVATE_REGISTRY` environment variable.
Then we will run step(2) for registering the images to the local registry.
Then we will run step(2) for registering the images to the local registry, or to an existing
registry set by the `DESTINATION_REGISTRY` environment variable. By default, the local registry
will run on port 5000. This can be changed with the `REGISTRY_PORT` environment variable.

Step(1) can be operated with:

@@ -27,7 +31,7 @@ manage-offline-container-images.sh register

## generate_list.sh

This script generates the list of downloaded files and the list of container images from the `roles/download/defaults/main/main.yml` file.
This script generates the list of downloaded files and the list of container images from the `roles/kubespray-defaults/defaults/main/download.yml` file.

Running this script will execute the `generate_list.yml` playbook in the kubespray root directory and generate four files:
all downloaded file URLs in files.list, all container images in images.list, and jinja2 templates in *.template.

@@ -5,7 +5,7 @@ CURRENT_DIR=$(cd $(dirname $0); pwd)
TEMP_DIR="${CURRENT_DIR}/temp"
REPO_ROOT_DIR="${CURRENT_DIR%/contrib/offline}"

: ${DOWNLOAD_YML:="roles/download/defaults/main/main.yml"}
: ${DOWNLOAD_YML:="roles/kubespray-defaults/defaults/main/download.yml"}

mkdir -p ${TEMP_DIR}

@@ -19,7 +19,7 @@ sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/\"//g' > ${TEMP_DIR}/images.list.template

# add kube-* images to images list template
# Those container images are downloaded by kubeadm, so roles/download/defaults/main/main.yml
# Those container images are downloaded by kubeadm, so roles/kubespray-defaults/defaults/main/download.yml
# doesn't contain those images. That is why those images need to be added to the
# list separately.
KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"

@@ -12,27 +12,40 @@ RETRY_COUNT=5
function create_container_image_tar() {
set -e

IMAGES=$(kubectl describe pods --all-namespaces | grep " Image:" | awk '{print $2}' | sort | uniq)
if [ -z "${IMAGES_FROM_FILE}" ]; then
echo "Getting images from current \"$(kubectl config current-context)\""

IMAGES=$(mktemp --suffix=-images)
trap 'rm -f "${IMAGES}"' EXIT

kubectl describe cronjobs,jobs,pods --all-namespaces | grep " Image:" | awk '{print $2}' | sort | uniq > "${IMAGES}"
# NOTE: etcd and pause cannot be seen as pods.
# The pause image is used for --pod-infra-container-image option of kubelet.
EXT_IMAGES=$(kubectl cluster-info dump | egrep "quay.io/coreos/etcd:|registry.k8s.io/pause:" | sed s@\"@@g)
IMAGES="${IMAGES} ${EXT_IMAGES}"
kubectl cluster-info dump | grep -E "quay.io/coreos/etcd:|registry.k8s.io/pause:" | sed s@\"@@g >> "${IMAGES}"
else
echo "Getting images from file \"${IMAGES_FROM_FILE}\""
if [ ! -f "${IMAGES_FROM_FILE}" ]; then
echo "${IMAGES_FROM_FILE} is not a file"
exit 1
fi
IMAGES=$(realpath $IMAGES_FROM_FILE)
fi

rm -f ${IMAGE_TAR_FILE}
rm -rf ${IMAGE_DIR}
mkdir ${IMAGE_DIR}
cd ${IMAGE_DIR}

sudo docker pull registry:latest
sudo docker save -o registry-latest.tar registry:latest
sudo ${runtime} pull registry:latest
sudo ${runtime} save -o registry-latest.tar registry:latest

for image in ${IMAGES}
while read -r image
do
FILE_NAME="$(echo ${image} | sed s@"/"@"-"@g | sed s/":"/"-"/g)".tar
FILE_NAME="$(echo ${image} | sed s@"/"@"-"@g | sed s/":"/"-"/g | sed -E 's/\@.*//g')".tar
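The extended FILE_NAME pipeline can be checked in isolation: slashes and colons collapse to dashes, and the newly added third sed drops any digest suffix. A standalone sketch, with a made-up image reference:

```shell
# Made-up image reference carrying a digest suffix (example only)
image="registry.k8s.io/kube-apiserver:v1.29.0@sha256:0123abcd"

# Same pipeline as the script: "/" -> "-", ":" -> "-", then strip "@..."
FILE_NAME="$(echo ${image} | sed s@"/"@"-"@g | sed s/":"/"-"/g | sed -E 's/\@.*//g')".tar
echo "${FILE_NAME}"
```

The digest strip keeps tar file names stable whether or not the image was pinned by digest.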
set +e
for step in $(seq 1 ${RETRY_COUNT})
do
sudo docker pull ${image}
sudo ${runtime} pull ${image}
if [ $? -eq 0 ]; then
break
fi
@@ -42,24 +55,26 @@ function create_container_image_tar() {
fi
done
set -e
sudo docker save -o ${FILE_NAME} ${image}
sudo ${runtime} save -o ${FILE_NAME} ${image}

# NOTE: The following repo parts are removed from each image
# so that they can be replaced by Kubespray.
# - kube_image_repo: "registry.k8s.io"
# - gcr_image_repo: "gcr.io"
# - ghcr_image_repo: "ghcr.io"
# - docker_image_repo: "docker.io"
# - quay_image_repo: "quay.io"
FIRST_PART=$(echo ${image} | awk -F"/" '{print $1}')
if [ "${FIRST_PART}" = "registry.k8s.io" ] ||
[ "${FIRST_PART}" = "gcr.io" ] ||
[ "${FIRST_PART}" = "ghcr.io" ] ||
[ "${FIRST_PART}" = "docker.io" ] ||
[ "${FIRST_PART}" = "quay.io" ] ||
[ "${FIRST_PART}" = "${PRIVATE_REGISTRY}" ]; then
image=$(echo ${image} | sed s@"${FIRST_PART}/"@@)
image=$(echo ${image} | sed s@"${FIRST_PART}/"@@ | sed -E 's/\@.*/\n/g')
fi
echo "${FILE_NAME} ${image}" >> ${IMAGE_LIST}
done
done < "${IMAGES}"

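The prefix-stripping branch above can likewise be exercised on sample input; only the first path component is compared against the known registries. A sketch with an illustrative image name (the digest strip here uses a plain delete rather than the script's `\n` replacement):

```shell
image="quay.io/coreos/etcd:v3.5.12@sha256:feedbeef"

# First "/"-separated component is the registry host
FIRST_PART=$(echo ${image} | awk -F"/" '{print $1}')
if [ "${FIRST_PART}" = "quay.io" ]; then
  # Drop the registry prefix and any digest suffix
  image=$(echo ${image} | sed s@"${FIRST_PART}/"@@ | sed -E 's/\@.*//g')
fi
echo "${image}"
```

What remains is the repository path and tag, which Kubespray later re-prefixes with its own `*_image_repo` settings.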
cd ..
sudo chown ${USER} ${IMAGE_DIR}/*
@@ -72,6 +87,16 @@ function create_container_image_tar() {
}

function register_container_images() {
create_registry=false
REGISTRY_PORT=${REGISTRY_PORT:-"5000"}

if [ -z "${DESTINATION_REGISTRY}" ]; then
echo "DESTINATION_REGISTRY not set, will create local registry"
create_registry=true
DESTINATION_REGISTRY="$(hostname):${REGISTRY_PORT}"
fi
echo "Images will be pushed to ${DESTINATION_REGISTRY}"

if [ ! -f ${IMAGE_TAR_FILE} ]; then
echo "${IMAGE_TAR_FILE} should exist."
exit 1
@@ -81,39 +106,47 @@ function register_container_images() {
fi

# To avoid "http: server gave http response to https client" error.
LOCALHOST_NAME=$(hostname)
if [ -d /etc/docker/ ]; then
set -e
# Ubuntu18.04, RHEL7/CentOS7
cp ${CURRENT_DIR}/docker-daemon.json ${TEMP_DIR}/docker-daemon.json
sed -i s@"HOSTNAME"@"${LOCALHOST_NAME}"@ ${TEMP_DIR}/docker-daemon.json
sed -i s@"HOSTNAME"@"$(hostname)"@ ${TEMP_DIR}/docker-daemon.json
sudo cp ${TEMP_DIR}/docker-daemon.json /etc/docker/daemon.json
elif [ -d /etc/containers/ ]; then
set -e
# RHEL8/CentOS8
cp ${CURRENT_DIR}/registries.conf ${TEMP_DIR}/registries.conf
sed -i s@"HOSTNAME"@"${LOCALHOST_NAME}"@ ${TEMP_DIR}/registries.conf
sed -i s@"HOSTNAME"@"$(hostname)"@ ${TEMP_DIR}/registries.conf
sudo cp ${TEMP_DIR}/registries.conf /etc/containers/registries.conf
else
echo "docker package(docker-ce, etc.) should be installed"
echo "runtime package(docker-ce, podman, nerdctl, etc.) should be installed"
exit 1
fi
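The HOSTNAME substitution applied to docker-daemon.json and registries.conf is a plain sed template replace; it can be sketched standalone (the JSON line below is a hypothetical stand-in for the shipped template, not its actual content):

```shell
# Hypothetical template content; the real files ship with the repo
template='{ "insecure-registries": ["HOSTNAME:5000"] }'

# Same substitution the script applies in place with sed -i
rendered=$(echo "${template}" | sed s@"HOSTNAME"@"$(hostname)"@)
echo "${rendered}"
```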

tar -zxvf ${IMAGE_TAR_FILE}
sudo docker load -i ${IMAGE_DIR}/registry-latest.tar

if [ "${create_registry}" ]; then
sudo ${runtime} load -i ${IMAGE_DIR}/registry-latest.tar
set +e
sudo docker container inspect registry >/dev/null 2>&1

sudo ${runtime} container inspect registry >/dev/null 2>&1
if [ $? -ne 0 ]; then
sudo docker run --restart=always -d -p 5000:5000 --name registry registry:latest
sudo ${runtime} run --restart=always -d -p "${REGISTRY_PORT}":"${REGISTRY_PORT}" --name registry registry:latest
fi
set -e
fi

while read -r line; do
file_name=$(echo ${line} | awk '{print $1}')
raw_image=$(echo ${line} | awk '{print $2}')
new_image="${LOCALHOST_NAME}:5000/${raw_image}"
org_image=$(sudo docker load -i ${IMAGE_DIR}/${file_name} | head -n1 | awk '{print $3}')
image_id=$(sudo docker image inspect ${org_image} | grep "\"Id\":" | awk -F: '{print $3}'| sed s/'\",'//)
new_image="${DESTINATION_REGISTRY}/${raw_image}"
load_image=$(sudo ${runtime} load -i ${IMAGE_DIR}/${file_name} | head -n1)
org_image=$(echo "${load_image}" | awk '{print $3}')
# special case for tags containing the digest when using docker or podman as the container runtime
if [ "${org_image}" == "ID:" ]; then
org_image=$(echo "${load_image}" | awk '{print $4}')
fi
image_id=$(sudo ${runtime} image inspect ${org_image} | grep "\"Id\":" | awk -F: '{print $3}'| sed s/'\",'//)
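The `load` output parsing and its digest special case can be traced with sample strings (both lines below are assumed output formats for illustration, not captured from a real run): field 3 normally holds the image name, and the `Loaded image ID:` form shifts it to field 4.

```shell
parse_loaded_image() {
  # Field 3 normally holds the image name
  org_image=$(echo "$1" | awk '{print $3}')
  # "Loaded image ID: sha256:..." shifts the interesting field to position 4
  if [ "${org_image}" = "ID:" ]; then
    org_image=$(echo "$1" | awk '{print $4}')
  fi
  echo "${org_image}"
}

parse_loaded_image "Loaded image: registry:latest"
parse_loaded_image "Loaded image ID: sha256:0123abcd"
```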
if [ -z "${file_name}" ]; then
echo "Failed to get file_name for line ${line}"
exit 1
@@ -130,32 +163,48 @@ function register_container_images() {
echo "Failed to get image_id for file ${file_name}"
exit 1
fi
sudo docker load -i ${IMAGE_DIR}/${file_name}
sudo docker tag ${image_id} ${new_image}
sudo docker push ${new_image}
sudo ${runtime} load -i ${IMAGE_DIR}/${file_name}
sudo ${runtime} tag ${image_id} ${new_image}
sudo ${runtime} push ${new_image}
done <<< "$(cat ${IMAGE_LIST})"

echo "Successfully registered container images to the registry."
echo "Please specify ${LOCALHOST_NAME}:5000 for the following options in your inventory:"
echo "Please specify \"${DESTINATION_REGISTRY}\" for the following options in your inventory:"
echo "- kube_image_repo"
echo "- gcr_image_repo"
echo "- docker_image_repo"
echo "- quay_image_repo"
}

# get runtime command
if command -v nerdctl 1>/dev/null 2>&1; then
runtime="nerdctl"
elif command -v podman 1>/dev/null 2>&1; then
runtime="podman"
elif command -v docker 1>/dev/null 2>&1; then
runtime="docker"
else
echo "No supported container runtime found"
exit 1
fi

if [ "${OPTION}" == "create" ]; then
create_container_image_tar
elif [ "${OPTION}" == "register" ]; then
register_container_images
else
echo "This script has two features:"
echo "(1) Get container images from an environment which is deployed online."
echo "(1) Get container images from an environment which is deployed online, or set IMAGES_FROM_FILE"
echo "    environment variable to get images from a file (e.g. temp/images.list after running the"
echo "    ./generate_list.sh script)."
echo "(2) Deploy local container registry and register the container images to the registry."
echo ""
echo "Step(1) should be done at an online site as preparation, then we bring"
echo "the collected images to the target offline environment. If images are from"
echo "a private registry, you need to set the PRIVATE_REGISTRY environment variable."
echo "Then we will run step(2) for registering the images to the local registry."
echo "Then we will run step(2) for registering the images to the local registry, or to an existing"
echo "registry set by the DESTINATION_REGISTRY environment variable. By default, the local registry"
echo "will run on port 5000. This can be changed with the REGISTRY_PORT environment variable."
echo ""
echo "${IMAGE_TAR_FILE} is created to contain your container images."
echo "Please keep this file and bring it to your offline environment."

@@ -17,7 +17,12 @@ rm -rf "${OFFLINE_FILES_DIR}"
rm "${OFFLINE_FILES_ARCHIVE}"
mkdir "${OFFLINE_FILES_DIR}"

wget -x -P "${OFFLINE_FILES_DIR}" -i "${FILES_LIST}"
while read -r url; do
if ! wget -x -P "${OFFLINE_FILES_DIR}" "${url}"; then
exit 1
fi
done < "${FILES_LIST}"
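The replacement of `wget -i` with an explicit loop makes the download step fail fast on the first bad URL instead of continuing past it. The control flow can be sketched with a stand-in fetch function (all names here are illustrative):

```shell
# Stand-in for wget: fails only for the literal URL "bad"
fetch() { [ "$1" != "bad" ]; }

files_list=$(mktemp)
printf 'good-1\nbad\ngood-2\n' > "${files_list}"

download_all() {
  while read -r url; do
    fetch "${url}" || return 1   # stop at the first failure
  done < "${files_list}"
}

download_all || echo "download failed"
```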

tar -czvf "${OFFLINE_FILES_ARCHIVE}" "${OFFLINE_FILES_DIR_NAME}"

[ -n "$NO_HTTP_SERVER" ] && echo "skip to run nginx" && exit 0
@@ -38,7 +43,7 @@ sudo "${runtime}" container inspect nginx >/dev/null 2>&1
if [ $? -ne 0 ]; then
sudo "${runtime}" run \
--restart=always -d -p ${NGINX_PORT}:80 \
--volume "${OFFLINE_FILES_DIR}:/usr/share/nginx/html/download" \
--volume "${OFFLINE_FILES_DIR}":/usr/share/nginx/html/download \
--volume "${CURRENT_DIR}"/nginx.conf:/etc/nginx/nginx.conf \
--name nginx nginx:alpine
fi

@@ -1,5 +1,3 @@
# See the OWNERS docs at https://go.k8s.io/owners

approvers:
- holmsten
- miouge1

@@ -50,70 +50,32 @@ Example (this one assumes you are using Ubuntu)
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=ubuntu -b --become-user=root --flush-cache
```

***Using other distrib than Ubuntu***
If you want to use another distribution than Ubuntu 18.04 (Bionic) LTS, you can modify the search filters of the 'data "aws_ami" "distro"' in variables.tf.
## Using other distributions than Ubuntu

For example, to use:
To use a Linux distribution other than Ubuntu 18.04 (Bionic) LTS, adjust the AMI search filters of the 'data "aws_ami" "distro"' block through variables in your `terraform.tfvars` file. This keeps the configuration flexible across distributions without editing the core Terraform files.

- Debian Jessie, replace 'data "aws_ami" "distro"' in variables.tf with
### Example Usages

```ini
data "aws_ami" "distro" {
most_recent = true
- **Debian Jessie**: To use Debian Jessie, insert the following lines into your `terraform.tfvars`:

filter {
name = "name"
values = ["debian-jessie-amd64-hvm-*"]
}
```hcl
ami_name_pattern = "debian-jessie-amd64-hvm-*"
ami_owners = ["379101102735"]
```

filter {
name = "virtualization-type"
values = ["hvm"]
}
- **Ubuntu 16.04**: To use Ubuntu 16.04 instead, apply the following configuration in your `terraform.tfvars`:

owners = ["379101102735"]
}
```
```hcl
ami_name_pattern = "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"
ami_owners = ["099720109477"]
```

- Ubuntu 16.04, replace 'data "aws_ami" "distro"' in variables.tf with
- **CentOS 7**: To use CentOS 7, add these lines to your `terraform.tfvars`:

```ini
data "aws_ami" "distro" {
most_recent = true

filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"]
}

filter {
name = "virtualization-type"
values = ["hvm"]
}

owners = ["099720109477"]
}
```

- CentOS 7, replace 'data "aws_ami" "distro"' in variables.tf with

```ini
data "aws_ami" "distro" {
most_recent = true

filter {
name = "name"
values = ["dcos-centos7-*"]
}

filter {
name = "virtualization-type"
values = ["hvm"]
}

owners = ["688023202711"]
}
```
```hcl
ami_name_pattern = "dcos-centos7-*"
ami_owners = ["688023202711"]
```

## Connecting to Kubernetes

@@ -20,20 +20,38 @@ variable "aws_cluster_name" {
description = "Name of AWS Cluster"
}

variable "ami_name_pattern" {
description = "The name pattern to use for AMI lookup"
type = string
default = "debian-10-amd64-*"
}

variable "ami_virtualization_type" {
description = "The virtualization type to use for AMI lookup"
type = string
default = "hvm"
}

variable "ami_owners" {
description = "The owners to use for AMI lookup"
type = list(string)
default = ["136693071363"]
}

data "aws_ami" "distro" {
most_recent = true

filter {
name = "name"
values = ["debian-10-amd64-*"]
values = [var.ami_name_pattern]
}

filter {
name = "virtualization-type"
values = ["hvm"]
values = [var.ami_virtualization_type]
}

owners = ["136693071363"] # Debian-10
owners = var.ami_owners
}

//AWS VPC Variables

@@ -35,7 +35,7 @@ now six total etcd replicas.
## Requirements

- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- [Install Ansible dependencies](/docs/ansible.md#installing-ansible)
- [Install Ansible dependencies](/docs/ansible/ansible.md#installing-ansible)
- Account with Equinix Metal
- An SSH key pair

@@ -7,7 +7,7 @@ terraform {
required_providers {
equinix = {
source = "equinix/equinix"
version = "~> 1.14"
version = "1.24.0"
}
}
}

@@ -12,7 +12,7 @@ ssh_public_keys = [
machines = {
"master-0" : {
"node_type" : "master",
"size" : "Medium",
"size" : "standard.medium",
"boot_disk" : {
"image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
"root_partition_size" : 50,
@@ -22,7 +22,7 @@ machines = {
},
"worker-0" : {
"node_type" : "worker",
"size" : "Large",
"size" : "standard.large",
"boot_disk" : {
"image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
"root_partition_size" : 50,
@@ -32,7 +32,7 @@ machines = {
},
"worker-1" : {
"node_type" : "worker",
"size" : "Large",
"size" : "standard.large",
"boot_disk" : {
"image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
"root_partition_size" : 50,
@@ -42,7 +42,7 @@ machines = {
},
"worker-2" : {
"node_type" : "worker",
"size" : "Large",
"size" : "standard.large",
"boot_disk" : {
"image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
"root_partition_size" : 50,

@@ -1,29 +1,25 @@
data "exoscale_compute_template" "os_image" {
data "exoscale_template" "os_image" {
for_each = var.machines

zone = var.zone
name = each.value.boot_disk.image_name
}

data "exoscale_compute" "master_nodes" {
for_each = exoscale_compute.master
data "exoscale_compute_instance" "master_nodes" {
for_each = exoscale_compute_instance.master

id = each.value.id

# Since private IP address is not assigned until the nics are created we need this
depends_on = [exoscale_nic.master_private_network_nic]
zone = var.zone
}

data "exoscale_compute" "worker_nodes" {
for_each = exoscale_compute.worker
data "exoscale_compute_instance" "worker_nodes" {
for_each = exoscale_compute_instance.worker

id = each.value.id

# Since private IP address is not assigned until the nics are created we need this
depends_on = [exoscale_nic.worker_private_network_nic]
zone = var.zone
}

resource "exoscale_network" "private_network" {
resource "exoscale_private_network" "private_network" {
zone = var.zone
name = "${var.prefix}-network"

@@ -34,25 +30,29 @@ resource "exoscale_network" "private_network" {
netmask = cidrnetmask(var.private_network_cidr)
}

resource "exoscale_compute" "master" {
resource "exoscale_compute_instance" "master" {
for_each = {
for name, machine in var.machines :
name => machine
if machine.node_type == "master"
}

display_name = "${var.prefix}-${each.key}"
template_id = data.exoscale_compute_template.os_image[each.key].id
size = each.value.size
name = "${var.prefix}-${each.key}"
template_id = data.exoscale_template.os_image[each.key].id
type = each.value.size
disk_size = each.value.boot_disk.root_partition_size + each.value.boot_disk.node_local_partition_size + each.value.boot_disk.ceph_partition_size
state = "Running"
zone = var.zone
security_groups = [exoscale_security_group.master_sg.name]
security_group_ids = [exoscale_security_group.master_sg.id]
network_interface {
network_id = exoscale_private_network.private_network.id
}
elastic_ip_ids = [exoscale_elastic_ip.control_plane_lb.id]

user_data = templatefile(
"${path.module}/templates/cloud-init.tmpl",
{
eip_ip_address = exoscale_ipaddress.ingress_controller_lb.ip_address
eip_ip_address = exoscale_elastic_ip.ingress_controller_lb.ip_address
node_local_partition_size = each.value.boot_disk.node_local_partition_size
ceph_partition_size = each.value.boot_disk.ceph_partition_size
root_partition_size = each.value.boot_disk.root_partition_size
@@ -62,25 +62,29 @@ resource "exoscale_compute" "master" {
)
}

resource "exoscale_compute" "worker" {
resource "exoscale_compute_instance" "worker" {
for_each = {
for name, machine in var.machines :
name => machine
if machine.node_type == "worker"
}

display_name = "${var.prefix}-${each.key}"
template_id = data.exoscale_compute_template.os_image[each.key].id
size = each.value.size
name = "${var.prefix}-${each.key}"
template_id = data.exoscale_template.os_image[each.key].id
type = each.value.size
disk_size = each.value.boot_disk.root_partition_size + each.value.boot_disk.node_local_partition_size + each.value.boot_disk.ceph_partition_size
state = "Running"
zone = var.zone
security_groups = [exoscale_security_group.worker_sg.name]
security_group_ids = [exoscale_security_group.worker_sg.id]
network_interface {
network_id = exoscale_private_network.private_network.id
}
elastic_ip_ids = [exoscale_elastic_ip.ingress_controller_lb.id]

user_data = templatefile(
"${path.module}/templates/cloud-init.tmpl",
{
eip_ip_address = exoscale_ipaddress.ingress_controller_lb.ip_address
eip_ip_address = exoscale_elastic_ip.ingress_controller_lb.ip_address
node_local_partition_size = each.value.boot_disk.node_local_partition_size
ceph_partition_size = each.value.boot_disk.ceph_partition_size
root_partition_size = each.value.boot_disk.root_partition_size
@@ -90,41 +94,33 @@ resource "exoscale_compute" "worker" {
)
}

resource "exoscale_nic" "master_private_network_nic" {
for_each = exoscale_compute.master

compute_id = each.value.id
network_id = exoscale_network.private_network.id
}

resource "exoscale_nic" "worker_private_network_nic" {
for_each = exoscale_compute.worker

compute_id = each.value.id
network_id = exoscale_network.private_network.id
}

resource "exoscale_security_group" "master_sg" {
name = "${var.prefix}-master-sg"
description = "Security group for Kubernetes masters"
}

resource "exoscale_security_group_rules" "master_sg_rules" {
resource "exoscale_security_group_rule" "master_sg_rule_ssh" {
security_group_id = exoscale_security_group.master_sg.id

for_each = toset(var.ssh_whitelist)
# SSH
ingress {
type = "INGRESS"
start_port = 22
end_port = 22
protocol = "TCP"
cidr_list = var.ssh_whitelist
ports = ["22"]
}
cidr = each.value
}

resource "exoscale_security_group_rule" "master_sg_rule_k8s_api" {
security_group_id = exoscale_security_group.master_sg.id

for_each = toset(var.api_server_whitelist)
# Kubernetes API
ingress {
type = "INGRESS"
start_port = 6443
end_port = 6443
protocol = "TCP"
cidr_list = var.api_server_whitelist
ports = ["6443"]
}
cidr = each.value
}

resource "exoscale_security_group" "worker_sg" {
@@ -132,62 +128,64 @@ resource "exoscale_security_group" "worker_sg" {
description = "security group for kubernetes worker nodes"
}

resource "exoscale_security_group_rules" "worker_sg_rules" {
resource "exoscale_security_group_rule" "worker_sg_rule_ssh" {
security_group_id = exoscale_security_group.worker_sg.id

# SSH
ingress {
for_each = toset(var.ssh_whitelist)
type = "INGRESS"
start_port = 22
end_port = 22
protocol = "TCP"
cidr_list = var.ssh_whitelist
ports = ["22"]
}
cidr = each.value
}

resource "exoscale_security_group_rule" "worker_sg_rule_http" {
security_group_id = exoscale_security_group.worker_sg.id

# HTTP(S)
ingress {
for_each = toset(["80", "443"])
type = "INGRESS"
start_port = each.value
end_port = each.value
protocol = "TCP"
cidr_list = ["0.0.0.0/0"]
ports = ["80", "443"]
}
cidr = "0.0.0.0/0"
}

# Kubernetes Nodeport
ingress {

resource "exoscale_security_group_rule" "worker_sg_rule_nodeport" {
security_group_id = exoscale_security_group.worker_sg.id

# Kubernetes Nodeport
for_each = toset(var.nodeport_whitelist)
type = "INGRESS"
start_port = 30000
end_port = 32767
protocol = "TCP"
cidr_list = var.nodeport_whitelist
ports = ["30000-32767"]
cidr = each.value
}

resource "exoscale_elastic_ip" "ingress_controller_lb" {
zone = var.zone
healthcheck {
mode = "http"
port = 80
uri = "/healthz"
interval = 10
timeout = 2
strikes_ok = 2
strikes_fail = 3
}
}

resource "exoscale_ipaddress" "ingress_controller_lb" {
resource "exoscale_elastic_ip" "control_plane_lb" {
zone = var.zone
healthcheck_mode = "http"
healthcheck_port = 80
healthcheck_path = "/healthz"
healthcheck_interval = 10
healthcheck_timeout = 2
healthcheck_strikes_ok = 2
healthcheck_strikes_fail = 3
}

resource "exoscale_secondary_ipaddress" "ingress_controller_lb" {
for_each = exoscale_compute.worker

compute_id = each.value.id
ip_address = exoscale_ipaddress.ingress_controller_lb.ip_address
}

resource "exoscale_ipaddress" "control_plane_lb" {
zone = var.zone
healthcheck_mode = "tcp"
healthcheck_port = 6443
healthcheck_interval = 10
healthcheck_timeout = 2
healthcheck_strikes_ok = 2
healthcheck_strikes_fail = 3
}

resource "exoscale_secondary_ipaddress" "control_plane_lb" {
for_each = exoscale_compute.master

compute_id = each.value.id
ip_address = exoscale_ipaddress.control_plane_lb.ip_address
healthcheck {
mode = "tcp"
port = 6443
interval = 10
timeout = 2
strikes_ok = 2
strikes_fail = 3
}
}

@@ -1,19 +1,19 @@
output "master_ip_addresses" {
value = {
for key, instance in exoscale_compute.master :
for key, instance in exoscale_compute_instance.master :
instance.name => {
"private_ip" = contains(keys(data.exoscale_compute.master_nodes), key) ? data.exoscale_compute.master_nodes[key].private_network_ip_addresses[0] : ""
"public_ip" = exoscale_compute.master[key].ip_address
"private_ip" = contains(keys(data.exoscale_compute_instance.master_nodes), key) ? data.exoscale_compute_instance.master_nodes[key].private_network_ip_addresses[0] : ""
"public_ip" = exoscale_compute_instance.master[key].ip_address
}
}
}

output "worker_ip_addresses" {
value = {
for key, instance in exoscale_compute.worker :
for key, instance in exoscale_compute_instance.worker :
instance.name => {
"private_ip" = contains(keys(data.exoscale_compute.worker_nodes), key) ? data.exoscale_compute.worker_nodes[key].private_network_ip_addresses[0] : ""
"public_ip" = exoscale_compute.worker[key].ip_address
"private_ip" = contains(keys(data.exoscale_compute_instance.worker_nodes), key) ? data.exoscale_compute_instance.worker_nodes[key].private_network_ip_addresses[0] : ""
"public_ip" = exoscale_compute_instance.worker[key].ip_address
}
}
}
@@ -23,9 +23,9 @@ output "cluster_private_network_cidr" {
}

output "ingress_controller_lb_ip_address" {
value = exoscale_ipaddress.ingress_controller_lb.ip_address
value = exoscale_elastic_ip.ingress_controller_lb.ip_address
}

output "control_plane_lb_ip_address" {
value = exoscale_ipaddress.control_plane_lb.ip_address
value = exoscale_elastic_ip.control_plane_lb.ip_address
}

@@ -72,6 +72,7 @@ The setup looks like following

```bash
./generate-inventory.sh > sample-inventory/inventory.ini
```

* Export Variables:

contrib/terraform/openstack/.gitignore (vendored, 2 changes)
@@ -1,5 +1,5 @@
.terraform
*.tfvars
!sample-inventory\/cluster.tfvars
!sample-inventory/cluster.tfvars
*.tfstate
*.tfstate.backup

|
@ -24,6 +24,7 @@ most modern installs of OpenStack that support the basic services.
|
||||
- [Ultimum](https://ultimum.io/)
|
||||
- [VexxHost](https://vexxhost.com/)
|
||||
- [Zetta](https://www.zetta.io/)
|
||||
- [Cloudify](https://www.cloudify.ro/en)
|
||||
|
||||
## Approach
|
||||
|
||||
@@ -97,9 +98,10 @@ binaries available on hyperkube v1.4.3_coreos.0 or higher.

## Module Architecture

The configuration is divided into three modules:
The configuration is divided into four modules:

- Network
- Loadbalancer
- IPs
- Compute

@@ -269,11 +271,18 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
 |`supplementary_master_groups` | To add ansible groups to the masters, such as `kube_node` for tainting them as nodes, empty by default. |
 |`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube_ingress` for running ingress controller pods, empty by default. |
 |`bastion_allowed_remote_ips` | List of CIDR allowed to initiate an SSH connection, `["0.0.0.0/0"]` by default |
+|`bastion_allowed_remote_ipv6_ips` | List of IPv6 CIDR allowed to initiate an SSH connection, `["::/0"]` by default |
 |`master_allowed_remote_ips` | List of CIDR blocks allowed to initiate an API connection, `["0.0.0.0/0"]` by default |
+|`master_allowed_remote_ipv6_ips` | List of IPv6 CIDR blocks allowed to initiate an API connection, `["::/0"]` by default |
 |`bastion_allowed_ports` | List of ports to open on bastion node, `[]` by default |
+|`bastion_allowed_ports_ipv6` | List of ports to open on bastion node for IPv6 CIDR blocks, `[]` by default |
 |`k8s_allowed_remote_ips` | List of CIDR allowed to initiate an SSH connection, empty by default |
+|`k8s_allowed_remote_ips_ipv6` | List of IPv6 CIDR allowed to initiate an SSH connection, empty by default |
+|`k8s_allowed_egress_ipv6_ips` | List of IPv6 CIDRs allowed for egress traffic, `["::/0"]` by default |
 |`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
+|`worker_allowed_ports_ipv6` | List of ports to open on worker nodes for IPv6 CIDR blocks, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "::/0"}]` by default |
 |`master_allowed_ports` | List of ports to open on master nodes, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "0.0.0.0/0"}]`, empty by default |
+|`master_allowed_ports_ipv6` | List of ports to open on master nodes for IPv6 CIDR blocks, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "::/0"}]`, empty by default |
 |`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
 |`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
 |`master_volume_type` | Volume type of the root volume for control_plane, 'Default' by default |
@@ -290,6 +299,10 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
 |`force_null_port_security` | Set `null` instead of `true` or `false` for `port_security`. `false` by default |
 |`k8s_nodes` | Map containing worker node definition, see explanation below |
 |`k8s_masters` | Map containing master node definition, see explanation for k8s_nodes and `sample-inventory/cluster.tfvars` |
+|`k8s_master_loadbalancer_enabled` | Enable and use an Octavia load balancer for the K8s master nodes |
+|`k8s_master_loadbalancer_listener_port` | Define via which port the K8s API should be exposed. `6443` by default |
+|`k8s_master_loadbalancer_server_port` | Define via which port the K8s API is available on the master nodes. `6443` by default |
+|`k8s_master_loadbalancer_public_ip` | Specify if an existing floating IP should be used for the load balancer. A new floating IP is assigned by default |
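The four `k8s_master_loadbalancer_*` settings can be combined in `inventory/$CLUSTER/cluster.tfvars`; a minimal sketch (the values are illustrative, not taken from the source):

```hcl
# Illustrative only: front the K8s API with an Octavia load balancer on the
# default port, and leave k8s_master_loadbalancer_public_ip empty so that a
# new floating IP is allocated for the load balancer.
k8s_master_loadbalancer_enabled       = true
k8s_master_loadbalancer_listener_port = "6443"
k8s_master_loadbalancer_server_port   = "6443"
k8s_master_loadbalancer_public_ip     = ""
```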
##### k8s_nodes
 
@@ -318,6 +331,7 @@ k8s_nodes:
     mount_path: string # Path to where the partition should be mounted
     partition_start: string # Where the partition should start (e.g. 10GB). Note: if you set partition_start to 0 there will be no space left for the root partition
     partition_end: string # Where the partition should end (e.g. 10GB or -1 for end of volume)
+    netplan_critical_dhcp_interface: string # Name of the interface on which to set the dhcp flag critical = true, to circumvent [this issue](https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1776013).
 ```
 
 For example:
@@ -605,7 +619,7 @@ Edit `inventory/$CLUSTER/group_vars/k8s_cluster/k8s_cluster.yml`:
 
 - Set variable **kube_network_plugin** to your desired networking plugin.
   - **flannel** works out-of-the-box
-  - **calico** requires [configuring OpenStack Neutron ports](/docs/openstack.md) to allow service and pod subnets
+  - **calico** requires [configuring OpenStack Neutron ports](/docs/cloud_providers/openstack.md) to allow service and pod subnets
 
 ```yml
 # Choose network plugin (calico, weave or flannel)
@@ -77,14 +77,21 @@ module "compute" {
   k8s_nodes_fips = module.ips.k8s_nodes_fips
   bastion_fips = module.ips.bastion_fips
   bastion_allowed_remote_ips = var.bastion_allowed_remote_ips
+  bastion_allowed_remote_ipv6_ips = var.bastion_allowed_remote_ipv6_ips
   master_allowed_remote_ips = var.master_allowed_remote_ips
+  master_allowed_remote_ipv6_ips = var.master_allowed_remote_ipv6_ips
   k8s_allowed_remote_ips = var.k8s_allowed_remote_ips
+  k8s_allowed_remote_ips_ipv6 = var.k8s_allowed_remote_ips_ipv6
   k8s_allowed_egress_ips = var.k8s_allowed_egress_ips
+  k8s_allowed_egress_ipv6_ips = var.k8s_allowed_egress_ipv6_ips
   supplementary_master_groups = var.supplementary_master_groups
   supplementary_node_groups = var.supplementary_node_groups
   master_allowed_ports = var.master_allowed_ports
+  master_allowed_ports_ipv6 = var.master_allowed_ports_ipv6
   worker_allowed_ports = var.worker_allowed_ports
+  worker_allowed_ports_ipv6 = var.worker_allowed_ports_ipv6
   bastion_allowed_ports = var.bastion_allowed_ports
+  bastion_allowed_ports_ipv6 = var.bastion_allowed_ports_ipv6
   use_access_ip = var.use_access_ip
   master_server_group_policy = var.master_server_group_policy
   node_server_group_policy = var.node_server_group_policy
@@ -105,6 +112,24 @@ module "compute" {
   ]
 }
 
+module "loadbalancer" {
+  source = "./modules/loadbalancer"
+
+  cluster_name = var.cluster_name
+  subnet_id = module.network.subnet_id
+  floatingip_pool = var.floatingip_pool
+  k8s_master_ips = module.compute.k8s_master_ips
+  k8s_master_loadbalancer_enabled = var.k8s_master_loadbalancer_enabled
+  k8s_master_loadbalancer_listener_port = var.k8s_master_loadbalancer_listener_port
+  k8s_master_loadbalancer_server_port = var.k8s_master_loadbalancer_server_port
+  k8s_master_loadbalancer_public_ip = var.k8s_master_loadbalancer_public_ip
+
+  depends_on = [
+    module.compute.k8s_master
+  ]
+}
 
 output "private_subnet_id" {
   value = module.network.subnet_id
 }
@@ -19,8 +19,8 @@ data "cloudinit_config" "cloudinit" {
   part {
     content_type = "text/cloud-config"
     content = templatefile("${path.module}/templates/cloudinit.yaml.tmpl", {
-      # template_file doesn't support lists
-      extra_partitions = ""
+      extra_partitions = [],
+      netplan_critical_dhcp_interface = ""
     })
   }
 }
@@ -70,6 +70,36 @@ resource "openstack_networking_secgroup_rule_v2" "k8s_master_ports" {
   security_group_id = openstack_networking_secgroup_v2.k8s_master.id
 }
 
+resource "openstack_networking_secgroup_rule_v2" "k8s_master_ipv6_ingress" {
+  count = length(var.master_allowed_remote_ipv6_ips)
+  direction = "ingress"
+  ethertype = "IPv6"
+  protocol = "tcp"
+  port_range_min = "6443"
+  port_range_max = "6443"
+  remote_ip_prefix = var.master_allowed_remote_ipv6_ips[count.index]
+  security_group_id = openstack_networking_secgroup_v2.k8s_master.id
+}
+
+resource "openstack_networking_secgroup_rule_v2" "k8s_master_ports_ipv6_ingress" {
+  count = length(var.master_allowed_ports_ipv6)
+  direction = "ingress"
+  ethertype = "IPv6"
+  protocol = lookup(var.master_allowed_ports_ipv6[count.index], "protocol", "tcp")
+  port_range_min = lookup(var.master_allowed_ports_ipv6[count.index], "port_range_min")
+  port_range_max = lookup(var.master_allowed_ports_ipv6[count.index], "port_range_max")
+  remote_ip_prefix = lookup(var.master_allowed_ports_ipv6[count.index], "remote_ip_prefix", "::/0")
+  security_group_id = openstack_networking_secgroup_v2.k8s_master.id
+}
+
+resource "openstack_networking_secgroup_rule_v2" "master_egress_ipv6" {
+  count = length(var.k8s_allowed_egress_ipv6_ips)
+  direction = "egress"
+  ethertype = "IPv6"
+  remote_ip_prefix = var.k8s_allowed_egress_ipv6_ips[count.index]
+  security_group_id = openstack_networking_secgroup_v2.k8s_master.id
+}
+
 resource "openstack_networking_secgroup_v2" "bastion" {
   name = "${var.cluster_name}-bastion"
   count = var.number_of_bastions != "" ? 1 : 0
@@ -99,6 +129,28 @@ resource "openstack_networking_secgroup_rule_v2" "k8s_bastion_ports" {
   security_group_id = openstack_networking_secgroup_v2.bastion[0].id
 }
 
+resource "openstack_networking_secgroup_rule_v2" "bastion_ipv6_ingress" {
+  count = var.number_of_bastions != "" ? length(var.bastion_allowed_remote_ipv6_ips) : 0
+  direction = "ingress"
+  ethertype = "IPv6"
+  protocol = "tcp"
+  port_range_min = "22"
+  port_range_max = "22"
+  remote_ip_prefix = var.bastion_allowed_remote_ipv6_ips[count.index]
+  security_group_id = openstack_networking_secgroup_v2.bastion[0].id
+}
+
+resource "openstack_networking_secgroup_rule_v2" "k8s_bastion_ports_ipv6_ingress" {
+  count = length(var.bastion_allowed_ports_ipv6)
+  direction = "ingress"
+  ethertype = "IPv6"
+  protocol = lookup(var.bastion_allowed_ports_ipv6[count.index], "protocol", "tcp")
+  port_range_min = lookup(var.bastion_allowed_ports_ipv6[count.index], "port_range_min")
+  port_range_max = lookup(var.bastion_allowed_ports_ipv6[count.index], "port_range_max")
+  remote_ip_prefix = lookup(var.bastion_allowed_ports_ipv6[count.index], "remote_ip_prefix", "::/0")
+  security_group_id = openstack_networking_secgroup_v2.bastion[0].id
+}
 resource "openstack_networking_secgroup_v2" "k8s" {
   name = "${var.cluster_name}-k8s"
   description = "${var.cluster_name} - Kubernetes"
@@ -112,6 +164,13 @@ resource "openstack_networking_secgroup_rule_v2" "k8s" {
   security_group_id = openstack_networking_secgroup_v2.k8s.id
 }
 
+resource "openstack_networking_secgroup_rule_v2" "k8s_ipv6" {
+  direction = "ingress"
+  ethertype = "IPv6"
+  remote_group_id = openstack_networking_secgroup_v2.k8s.id
+  security_group_id = openstack_networking_secgroup_v2.k8s.id
+}
+
 resource "openstack_networking_secgroup_rule_v2" "k8s_allowed_remote_ips" {
   count = length(var.k8s_allowed_remote_ips)
   direction = "ingress"
@@ -123,6 +182,17 @@ resource "openstack_networking_secgroup_rule_v2" "k8s_allowed_remote_ips" {
   security_group_id = openstack_networking_secgroup_v2.k8s.id
 }
 
+resource "openstack_networking_secgroup_rule_v2" "k8s_allowed_remote_ips_ipv6" {
+  count = length(var.k8s_allowed_remote_ips_ipv6)
+  direction = "ingress"
+  ethertype = "IPv6"
+  protocol = "tcp"
+  port_range_min = "22"
+  port_range_max = "22"
+  remote_ip_prefix = var.k8s_allowed_remote_ips_ipv6[count.index]
+  security_group_id = openstack_networking_secgroup_v2.k8s.id
+}
+
 resource "openstack_networking_secgroup_rule_v2" "egress" {
   count = length(var.k8s_allowed_egress_ips)
   direction = "egress"
@@ -131,6 +201,14 @@ resource "openstack_networking_secgroup_rule_v2" "egress" {
   security_group_id = openstack_networking_secgroup_v2.k8s.id
 }
 
+resource "openstack_networking_secgroup_rule_v2" "egress_ipv6" {
+  count = length(var.k8s_allowed_egress_ipv6_ips)
+  direction = "egress"
+  ethertype = "IPv6"
+  remote_ip_prefix = var.k8s_allowed_egress_ipv6_ips[count.index]
+  security_group_id = openstack_networking_secgroup_v2.k8s.id
+}
+
 resource "openstack_networking_secgroup_v2" "worker" {
   name = "${var.cluster_name}-k8s-worker"
   description = "${var.cluster_name} - Kubernetes worker nodes"
@@ -155,6 +233,17 @@ resource "openstack_networking_secgroup_rule_v2" "worker" {
   security_group_id = openstack_networking_secgroup_v2.worker.id
 }
 
+resource "openstack_networking_secgroup_rule_v2" "worker_ipv6_ingress" {
+  count = length(var.worker_allowed_ports_ipv6)
+  direction = "ingress"
+  ethertype = "IPv6"
+  protocol = lookup(var.worker_allowed_ports_ipv6[count.index], "protocol", "tcp")
+  port_range_min = lookup(var.worker_allowed_ports_ipv6[count.index], "port_range_min")
+  port_range_max = lookup(var.worker_allowed_ports_ipv6[count.index], "port_range_max")
+  remote_ip_prefix = lookup(var.worker_allowed_ports_ipv6[count.index], "remote_ip_prefix", "::/0")
+  security_group_id = openstack_networking_secgroup_v2.worker.id
+}
+
 resource "openstack_compute_servergroup_v2" "k8s_master" {
   count = var.master_server_group_policy != "" ? 1 : 0
   name = "k8s-master-srvgrp"
@@ -304,6 +393,10 @@ resource "openstack_networking_port_v2" "k8s_master_port" {
     }
   }
 
+  lifecycle {
+    ignore_changes = [ allowed_address_pairs ]
+  }
+
   depends_on = [
     var.network_router_id
   ]
@@ -370,6 +463,10 @@ resource "openstack_networking_port_v2" "k8s_masters_port" {
     }
   }
 
+  lifecycle {
+    ignore_changes = [ allowed_address_pairs ]
+  }
+
   depends_on = [
     var.network_router_id
   ]
@@ -434,6 +531,10 @@ resource "openstack_networking_port_v2" "k8s_master_no_etcd_port" {
     }
   }
 
+  lifecycle {
+    ignore_changes = [ allowed_address_pairs ]
+  }
+
   depends_on = [
     var.network_router_id
   ]
@@ -560,6 +661,10 @@ resource "openstack_networking_port_v2" "k8s_master_no_floating_ip_port" {
     }
   }
 
+  lifecycle {
+    ignore_changes = [ allowed_address_pairs ]
+  }
+
   depends_on = [
     var.network_router_id
   ]
@@ -620,6 +725,10 @@ resource "openstack_networking_port_v2" "k8s_master_no_floating_ip_no_etcd_port"
     }
   }
 
+  lifecycle {
+    ignore_changes = [ allowed_address_pairs ]
+  }
+
   depends_on = [
     var.network_router_id
   ]
@@ -681,6 +790,10 @@ resource "openstack_networking_port_v2" "k8s_node_port" {
     }
   }
 
+  lifecycle {
+    ignore_changes = [ allowed_address_pairs ]
+  }
+
   depends_on = [
     var.network_router_id
   ]
@@ -747,6 +860,10 @@ resource "openstack_networking_port_v2" "k8s_node_no_floating_ip_port" {
     }
   }
 
+  lifecycle {
+    ignore_changes = [ allowed_address_pairs ]
+  }
+
   depends_on = [
     var.network_router_id
   ]
@@ -808,6 +925,10 @@ resource "openstack_networking_port_v2" "k8s_nodes_port" {
     }
   }
 
+  lifecycle {
+    ignore_changes = [ allowed_address_pairs ]
+  }
+
   depends_on = [
     var.network_router_id
   ]
@@ -821,7 +942,8 @@ resource "openstack_compute_instance_v2" "k8s_nodes" {
   flavor_id = each.value.flavor
   key_pair = openstack_compute_keypair_v2.k8s.name
   user_data = each.value.cloudinit != null ? templatefile("${path.module}/templates/cloudinit.yaml.tmpl", {
-    extra_partitions = each.value.cloudinit.extra_partitions
+    extra_partitions = each.value.cloudinit.extra_partitions,
+    netplan_critical_dhcp_interface = each.value.cloudinit.netplan_critical_dhcp_interface,
   }) : data.cloudinit_config.cloudinit.rendered
 
   dynamic "block_device" {
@@ -850,7 +972,7 @@ resource "openstack_compute_instance_v2" "k8s_nodes" {
 
   metadata = {
     ssh_user = var.ssh_user
-    kubespray_groups = "kube_node,k8s_cluster,%{if each.value.floating_ip == false}no_floating,%{endif}${var.supplementary_node_groups}${each.value.extra_groups != null ? ",${each.value.extra_groups}" : ""}"
+    kubespray_groups = "kube_node,k8s_cluster,%{if !each.value.floating_ip}no_floating,%{endif}${var.supplementary_node_groups}${each.value.extra_groups != null ? ",${each.value.extra_groups}" : ""}"
     depends_on = var.network_router_id
     use_access_ip = var.use_access_ip
   }
3 contrib/terraform/openstack/modules/compute/outputs.tf Normal file
@@ -0,0 +1,3 @@
output "k8s_master_ips" {
  value = concat(openstack_compute_instance_v2.k8s_master_no_floating_ip.*, openstack_compute_instance_v2.k8s_master_no_floating_ip_no_etcd.*)
}
@@ -1,4 +1,4 @@
-%{~ if length(extra_partitions) > 0 }
+%{~ if length(extra_partitions) > 0 || netplan_critical_dhcp_interface != "" }
 #cloud-config
 bootcmd:
 %{~ for idx, partition in extra_partitions }
@@ -8,11 +8,26 @@ bootcmd:
 %{~ endfor }
 
 runcmd:
+%{~ if netplan_critical_dhcp_interface != "" }
+  - netplan apply
+%{~ endif }
 %{~ for idx, partition in extra_partitions }
   - mkdir -p ${partition.mount_path}
   - chown nobody:nogroup ${partition.mount_path}
   - mount ${partition.partition_path} ${partition.mount_path}
-%{~ endfor }
+%{~ endfor ~}
 
+%{~ if netplan_critical_dhcp_interface != "" }
+write_files:
+  - path: /etc/netplan/90-critical-dhcp.yaml
+    content: |
+      network:
+        version: 2
+        ethernets:
+          ${ netplan_critical_dhcp_interface }:
+            dhcp4: true
+            critical: true
+%{~ endif }
 
 mounts:
 %{~ for idx, partition in extra_partitions }
@@ -104,18 +104,34 @@ variable "bastion_allowed_remote_ips" {
   type = list
 }
 
+variable "bastion_allowed_remote_ipv6_ips" {
+  type = list
+}
+
 variable "master_allowed_remote_ips" {
   type = list
 }
 
+variable "master_allowed_remote_ipv6_ips" {
+  type = list
+}
+
 variable "k8s_allowed_remote_ips" {
   type = list
 }
 
+variable "k8s_allowed_remote_ips_ipv6" {
+  type = list
+}
+
 variable "k8s_allowed_egress_ips" {
   type = list
 }
 
+variable "k8s_allowed_egress_ipv6_ips" {
+  type = list
+}
+
 variable "k8s_masters" {
   type = map(object({
     az = string
@@ -142,13 +158,14 @@ variable "k8s_nodes" {
     additional_server_groups = optional(list(string))
     server_group = optional(string)
     cloudinit = optional(object({
-      extra_partitions = list(object({
+      extra_partitions = optional(list(object({
         volume_path = string
         partition_path = string
         partition_start = string
         partition_end = string
         mount_path = string
-      }))
+      })), [])
+      netplan_critical_dhcp_interface = optional(string, "")
     }))
   }))
 }
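The `cloudinit` object defined above feeds the cloudinit template; a hypothetical `k8s_nodes` entry exercising both optional fields might look like the following (the device, mount path and interface name are made up for illustration, and the node's other attributes are elided):

```hcl
k8s_nodes = {
  node-1 = {
    # ... regular node attributes (flavor, az, floating_ip, ...) elided ...
    cloudinit = {
      extra_partitions = [{
        volume_path     = "/dev/vdb"       # illustrative extra volume
        partition_path  = "/dev/vdb1"
        partition_start = "10GB"
        partition_end   = "-1"             # use the rest of the volume
        mount_path      = "/var/lib/data"  # illustrative mount point
      }]
      netplan_critical_dhcp_interface = "ens3"  # illustrative interface name
    }
  }
}
```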
@@ -171,14 +188,26 @@ variable "master_allowed_ports" {
   type = list
 }
 
+variable "master_allowed_ports_ipv6" {
+  type = list
+}
+
 variable "worker_allowed_ports" {
   type = list
 }
 
+variable "worker_allowed_ports_ipv6" {
+  type = list
+}
+
 variable "bastion_allowed_ports" {
   type = list
 }
 
+variable "bastion_allowed_ports_ipv6" {
+  type = list
+}
+
 variable "use_access_ip" {}
 
 variable "master_server_group_policy" {
54 contrib/terraform/openstack/modules/loadbalancer/main.tf Normal file
@@ -0,0 +1,54 @@
resource "openstack_lb_loadbalancer_v2" "k8s_lb" {
  count = var.k8s_master_loadbalancer_enabled ? 1 : 0
  name = "${var.cluster_name}-api-loadbalancer"
  vip_subnet_id = var.subnet_id
}

resource "openstack_lb_listener_v2" "api_listener" {
  count = var.k8s_master_loadbalancer_enabled ? 1 : 0
  name = "api-listener"
  protocol = "TCP"
  protocol_port = var.k8s_master_loadbalancer_listener_port
  loadbalancer_id = openstack_lb_loadbalancer_v2.k8s_lb[0].id
  depends_on = [ openstack_lb_loadbalancer_v2.k8s_lb ]
}

resource "openstack_lb_pool_v2" "api_pool" {
  count = var.k8s_master_loadbalancer_enabled ? 1 : 0
  name = "api-pool"
  protocol = "TCP"
  lb_method = "ROUND_ROBIN"
  listener_id = openstack_lb_listener_v2.api_listener[0].id
  depends_on = [ openstack_lb_listener_v2.api_listener ]
}

resource "openstack_lb_member_v2" "lb_member" {
  count = var.k8s_master_loadbalancer_enabled ? length(var.k8s_master_ips) : 0
  name = var.k8s_master_ips[count.index].name
  pool_id = openstack_lb_pool_v2.api_pool[0].id
  address = var.k8s_master_ips[count.index].access_ip_v4
  protocol_port = var.k8s_master_loadbalancer_server_port
  depends_on = [ openstack_lb_pool_v2.api_pool ]
}

resource "openstack_lb_monitor_v2" "monitor" {
  count = var.k8s_master_loadbalancer_enabled ? 1 : 0
  name = "Api Monitor"
  pool_id = openstack_lb_pool_v2.api_pool[0].id
  type = "TCP"
  delay = 10
  timeout = 5
  max_retries = 5
}

resource "openstack_networking_floatingip_v2" "floatip_1" {
  count = var.k8s_master_loadbalancer_enabled && var.k8s_master_loadbalancer_public_ip == "" ? 1 : 0
  pool = var.floatingip_pool
}

resource "openstack_networking_floatingip_associate_v2" "public_ip" {
  count = var.k8s_master_loadbalancer_enabled ? 1 : 0
  floating_ip = var.k8s_master_loadbalancer_public_ip != "" ? var.k8s_master_loadbalancer_public_ip : openstack_networking_floatingip_v2.floatip_1[0].address
  port_id = openstack_lb_loadbalancer_v2.k8s_lb[0].vip_port_id
  depends_on = [ openstack_lb_loadbalancer_v2.k8s_lb ]
}
@@ -0,0 +1,15 @@
variable "cluster_name" {}

variable "subnet_id" {}

variable "floatingip_pool" {}

variable "k8s_master_ips" {}

variable "k8s_master_loadbalancer_enabled" {}

variable "k8s_master_loadbalancer_listener_port" {}

variable "k8s_master_loadbalancer_server_port" {}

variable "k8s_master_loadbalancer_public_ip" {}
@@ -0,0 +1,8 @@
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
  required_version = ">= 0.12.26"
}
@@ -220,30 +220,60 @@ variable "bastion_allowed_remote_ips" {
   default = ["0.0.0.0/0"]
 }
 
+variable "bastion_allowed_remote_ipv6_ips" {
+  description = "An array of IPv6 CIDRs allowed to SSH to hosts"
+  type = list(string)
+  default = ["::/0"]
+}
+
 variable "master_allowed_remote_ips" {
   description = "An array of CIDRs allowed to access API of masters"
   type = list(string)
   default = ["0.0.0.0/0"]
 }
 
+variable "master_allowed_remote_ipv6_ips" {
+  description = "An array of IPv6 CIDRs allowed to access API of masters"
+  type = list(string)
+  default = ["::/0"]
+}
+
 variable "k8s_allowed_remote_ips" {
   description = "An array of CIDRs allowed to SSH to hosts"
   type = list(string)
   default = []
 }
 
+variable "k8s_allowed_remote_ips_ipv6" {
+  description = "An array of IPv6 CIDRs allowed to SSH to hosts"
+  type = list(string)
+  default = []
+}
+
 variable "k8s_allowed_egress_ips" {
   description = "An array of CIDRs allowed for egress traffic"
   type = list(string)
   default = ["0.0.0.0/0"]
 }
 
+variable "k8s_allowed_egress_ipv6_ips" {
+  description = "An array of CIDRs allowed for egress IPv6 traffic"
+  type = list(string)
+  default = ["::/0"]
+}
+
 variable "master_allowed_ports" {
   type = list(any)
 
   default = []
 }
 
+variable "master_allowed_ports_ipv6" {
+  type = list(any)
+
+  default = []
+}
+
 variable "worker_allowed_ports" {
   type = list(any)
 
@@ -257,12 +287,31 @@ variable "worker_allowed_ports" {
   ]
 }
 
+variable "worker_allowed_ports_ipv6" {
+  type = list(any)
+
+  default = [
+    {
+      "protocol" = "tcp"
+      "port_range_min" = 30000
+      "port_range_max" = 32767
+      "remote_ip_prefix" = "::/0"
+    },
+  ]
+}
+
 variable "bastion_allowed_ports" {
   type = list(any)
 
   default = []
 }
 
+variable "bastion_allowed_ports_ipv6" {
+  type = list(any)
+
+  default = []
+}
+
 variable "use_access_ip" {
   default = 1
 }
@@ -340,3 +389,23 @@ variable "group_vars_path" {
   type = string
   default = "./group_vars"
 }
 
+variable "k8s_master_loadbalancer_enabled" {
+  type = bool
+  default = false
+}
+
+variable "k8s_master_loadbalancer_listener_port" {
+  type = string
+  default = "6443"
+}
+
+variable "k8s_master_loadbalancer_server_port" {
+  type = string
+  default = "6443"
+}
+
+variable "k8s_master_loadbalancer_public_ip" {
+  type = string
+  default = ""
+}
@@ -140,4 +140,4 @@ terraform destroy --var-file cluster-settings.tfvars \
 * `backend_servers`: List of servers that traffic to the port should be forwarded to.
 * `server_groups`: Group servers together
   * `servers`: The servers that should be included in the group.
-  * `anti_affinity`: If anti-affinity should be enabled, try to spread the VMs out on separate nodes.
+  * `anti_affinity_policy`: Defines whether a server group is an anti-affinity group. The value can be "strict", "yes" or "no". "strict" never allows servers in the same group to run on the same compute host; "yes" is a best-effort policy that tries to place servers on different hosts, but this is not guaranteed.
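The `server_groups` shape described above can be written out as, for instance (group and server names are illustrative):

```hcl
# Illustrative: a strict anti-affinity group for the control-plane nodes.
server_groups = {
  "control-plane" = {
    servers = [
      "control-plane-0"
    ]
    anti_affinity_policy = "strict"
  }
}
```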
@@ -18,7 +18,7 @@ ssh_public_keys = [
 
 # check list of available plans https://developers.upcloud.com/1.3/7-plans/
 machines = {
-  "master-0" : {
+  "control-plane-0" : {
     "node_type" : "master",
     # plan to use instead of custom cpu/mem
     "plan" : null,
@@ -118,6 +118,7 @@ worker_allowed_ports = []
 
 loadbalancer_enabled = false
 loadbalancer_plan = "development"
+loadbalancer_proxy_protocol = false
 loadbalancers = {
   # "http" : {
   #   "port" : 80,
@@ -133,9 +134,9 @@ loadbalancers = {
 server_groups = {
   # "control-plane" = {
   #   servers = [
-  #     "master-0"
+  #     "control-plane-0"
   #   ]
-  #   anti_affinity = true
+  #   anti_affinity_policy = "strict"
   #   },
   # "workers" = {
   #   servers = [
@@ -143,6 +144,6 @@ server_groups = {
   #     "worker-1",
   #     "worker-2"
   #   ]
-  #   anti_affinity = true
+  #   anti_affinity_policy = "yes"
   #   }
 }
@@ -33,6 +33,7 @@ module "kubernetes" {
 
   loadbalancer_enabled = var.loadbalancer_enabled
   loadbalancer_plan = var.loadbalancer_plan
+  loadbalancer_outbound_proxy_protocol = var.loadbalancer_proxy_protocol ? "v2" : ""
   loadbalancers = var.loadbalancers
 
   server_groups = var.server_groups
@@ -22,7 +22,7 @@ locals {
 
   # If prefix is set, all resources will be prefixed with "${var.prefix}-"
   # Else don't prefix with anything
-  resource-prefix = "%{ if var.prefix != ""}${var.prefix}-%{ endif }"
+  resource-prefix = "%{if var.prefix != ""}${var.prefix}-%{endif}"
 }
 
 resource "upcloud_network" "private" {
@@ -38,7 +38,7 @@ resource "upcloud_network" "private" {
 
 resource "upcloud_storage" "additional_disks" {
   for_each = {
-    for disk in local.disks: "${disk.node_name}_${disk.disk_name}" => disk.disk
+    for disk in local.disks : "${disk.node_name}_${disk.disk_name}" => disk.disk
   }
 
   size = each.value.size
@ -165,7 +165,7 @@ resource "upcloud_firewall_rules" "master" {
|
||||
for_each = upcloud_server.master
|
||||
server_id = each.value.id
|
||||
|
||||
dynamic firewall_rule {
|
||||
dynamic "firewall_rule" {
|
||||
for_each = var.master_allowed_remote_ips
|
||||
|
||||
content {
|
||||
@ -181,7 +181,7 @@ resource "upcloud_firewall_rules" "master" {
|
||||
}
|
||||
}
|
||||
|
||||
dynamic firewall_rule {
|
||||
dynamic "firewall_rule" {
|
||||
     for_each = length(var.master_allowed_remote_ips) > 0 ? [1] : []

     content {
@@ -197,7 +197,7 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.k8s_allowed_remote_ips

     content {
@@ -213,7 +213,7 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = length(var.k8s_allowed_remote_ips) > 0 ? [1] : []

     content {
@@ -229,7 +229,7 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.master_allowed_ports

     content {
@@ -245,7 +245,7 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
@@ -261,7 +261,7 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
@@ -277,7 +277,7 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
@@ -293,7 +293,7 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
@@ -309,7 +309,7 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["udp"] : []

     content {
@@ -325,7 +325,7 @@ resource "upcloud_firewall_rules" "master" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["udp"] : []

     content {
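The change repeated in the hunks above is purely syntactic: HCL writes the label of a `dynamic` block in quotes, and `terraform fmt` normalizes it that way. A minimal sketch of the conditional-rule pattern these resources rely on (the attribute names inside `content` are illustrative, since the hunks elide them):

```hcl
dynamic "firewall_rule" {
  # A one-element list emits exactly one firewall_rule block;
  # an empty list emits none, so the rule disappears when no IPs are allowed.
  for_each = length(var.master_allowed_remote_ips) > 0 ? [1] : []

  content {
    action    = "accept"
    direction = "in"
    # ... remaining rule attributes elided in this hunk ...
  }
}
```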
@@ -354,7 +354,7 @@ resource "upcloud_firewall_rules" "k8s" {
   for_each = upcloud_server.worker
   server_id = each.value.id

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.k8s_allowed_remote_ips

     content {
@@ -370,7 +370,7 @@ resource "upcloud_firewall_rules" "k8s" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = length(var.k8s_allowed_remote_ips) > 0 ? [1] : []

     content {
@@ -386,7 +386,7 @@ resource "upcloud_firewall_rules" "k8s" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.worker_allowed_ports

     content {
@@ -402,7 +402,7 @@ resource "upcloud_firewall_rules" "k8s" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
@@ -418,7 +418,7 @@ resource "upcloud_firewall_rules" "k8s" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
@@ -434,7 +434,7 @@ resource "upcloud_firewall_rules" "k8s" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
@@ -450,7 +450,7 @@ resource "upcloud_firewall_rules" "k8s" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["tcp", "udp"] : []

     content {
@@ -466,7 +466,7 @@ resource "upcloud_firewall_rules" "k8s" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["udp"] : []

     content {
@@ -482,7 +482,7 @@ resource "upcloud_firewall_rules" "k8s" {
     }
   }

-  dynamic firewall_rule {
+  dynamic "firewall_rule" {
     for_each = var.firewall_default_deny_in ? ["udp"] : []

     content {
@@ -521,6 +521,9 @@ resource "upcloud_loadbalancer_backend" "lb_backend" {

   loadbalancer = upcloud_loadbalancer.lb[0].id
   name         = "lb-backend-${each.key}"
+  properties {
+    outbound_proxy_protocol = var.loadbalancer_outbound_proxy_protocol
+  }
 }

 resource "upcloud_loadbalancer_frontend" "lb_frontend" {
@@ -535,7 +538,7 @@ resource "upcloud_loadbalancer_frontend" "lb_frontend" {

 resource "upcloud_loadbalancer_static_backend_member" "lb_backend_member" {
   for_each = {
-    for be_server in local.lb_backend_servers:
+    for be_server in local.lb_backend_servers :
     "${be_server.server_name}-lb-backend-${be_server.lb_name}" => be_server
     if var.loadbalancer_enabled
   }
@@ -552,7 +555,7 @@ resource "upcloud_loadbalancer_static_backend_member" "lb_backend_member" {
 resource "upcloud_server_group" "server_groups" {
   for_each = var.server_groups
   title    = each.key
-  anti_affinity = each.value.anti_affinity
+  anti_affinity_policy = each.value.anti_affinity_policy
   labels   = {}
   members = [for server in each.value.servers : merge(upcloud_server.master, upcloud_server.worker)[server].id]
 }
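With the provider 2.12 schema in the hunk above, each entry of `var.server_groups` carries an `anti_affinity_policy` string rather than a boolean. A value matching the `map(object({...}))` type declared elsewhere in this diff might look like this (group and server names are illustrative; `"strict"` and `"yes"` are the policy values used in the sample tfvars later in the diff):

```hcl
server_groups = {
  "control-plane" = {
    # "strict" refuses to start servers that cannot be anti-affinity placed
    anti_affinity_policy = "strict"
    servers              = ["control-plane-0"]
  }
}
```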
@@ -3,8 +3,8 @@ output "master_ip" {
   value = {
     for instance in upcloud_server.master :
     instance.hostname => {
-      "public_ip": instance.network_interface[0].ip_address
-      "private_ip": instance.network_interface[1].ip_address
+      "public_ip" : instance.network_interface[0].ip_address
+      "private_ip" : instance.network_interface[1].ip_address
     }
   }
 }
@@ -13,8 +13,8 @@ output "worker_ip" {
   value = {
     for instance in upcloud_server.worker :
     instance.hostname => {
-      "public_ip": instance.network_interface[0].ip_address
-      "private_ip": instance.network_interface[1].ip_address
+      "public_ip" : instance.network_interface[0].ip_address
+      "private_ip" : instance.network_interface[1].ip_address
     }
   }
 }
@@ -85,6 +85,10 @@ variable "loadbalancer_plan" {
   type = string
 }

+variable "loadbalancer_outbound_proxy_protocol" {
+  type = string
+}
+
 variable "loadbalancers" {
   description = "Load balancers"

@@ -99,7 +103,7 @@ variable "server_groups" {
   description = "Server groups"

   type = map(object({
-    anti_affinity = bool
+    anti_affinity_policy = string
     servers = list(string)
   }))
 }
@@ -3,7 +3,7 @@ terraform {
   required_providers {
     upcloud = {
       source = "UpCloudLtd/upcloud"
-      version = "~>2.7.1"
+      version = "~>2.12.0"
     }
   }
   required_version = ">= 0.13"
@@ -18,7 +18,7 @@ ssh_public_keys = [

 # check list of available plan https://developers.upcloud.com/1.3/7-plans/
 machines = {
-  "master-0" : {
+  "control-plane-0" : {
     "node_type" : "master",
     # plan to use instead of custom cpu/mem
     "plan" : null,
@@ -28,7 +28,7 @@ machines = {
     "mem" : "4096"
     # The size of the storage in GB
     "disk_size" : 250
-    "additional_disks": {}
+    "additional_disks" : {}
   },
   "worker-0" : {
     "node_type" : "worker",
@@ -40,7 +40,7 @@ machines = {
     "mem" : "4096"
     # The size of the storage in GB
     "disk_size" : 250
-    "additional_disks": {
+    "additional_disks" : {
       # "some-disk-name-1": {
       #   "size": 100,
       #   "tier": "maxiops",
@@ -61,7 +61,7 @@ machines = {
     "mem" : "4096"
     # The size of the storage in GB
     "disk_size" : 250
-    "additional_disks": {
+    "additional_disks" : {
       # "some-disk-name-1": {
       #   "size": 100,
       #   "tier": "maxiops",
@@ -82,7 +82,7 @@ machines = {
     "mem" : "4096"
     # The size of the storage in GB
     "disk_size" : 250
-    "additional_disks": {
+    "additional_disks" : {
       # "some-disk-name-1": {
       #   "size": 100,
       #   "tier": "maxiops",
@@ -134,9 +134,9 @@ loadbalancers = {
 server_groups = {
   # "control-plane" = {
   #   servers = [
-  #     "master-0"
+  #     "control-plane-0"
   #   ]
-  #   anti_affinity = true
+  #   anti_affinity_policy = "strict"
   # },
   # "workers" = {
   #   servers = [
@@ -144,6 +144,6 @@ server_groups = {
   #     "worker-1",
   #     "worker-2"
   #   ]
-  #   anti_affinity = true
+  #   anti_affinity_policy = "yes"
   # }
 }
@@ -121,6 +121,11 @@ variable "loadbalancer_plan" {
   default = "development"
 }

+variable "loadbalancer_proxy_protocol" {
+  type    = bool
+  default = false
+}
+
 variable "loadbalancers" {
   description = "Load balancers"

@@ -136,7 +141,7 @@ variable "server_groups" {
   description = "Server groups"

   type = map(object({
-    anti_affinity = bool
+    anti_affinity_policy = string
     servers = list(string)
   }))

@@ -3,7 +3,7 @@ terraform {
   required_providers {
     upcloud = {
       source = "UpCloudLtd/upcloud"
-      version = "~>2.7.1"
+      version = "~>2.12.0"
     }
   }
   required_version = ">= 0.13"
@@ -222,6 +222,14 @@ calico_node_livenessprobe_timeout: 10
 calico_node_readinessprobe_timeout: 10
 ```

+### Optional : Enable NAT with IPv6
+
+To allow outgoing IPv6 traffic going from pods to the Internet, enable the following:
+
+```yml
+nat_outgoing_ipv6: true  # NAT outgoing ipv6 (default value: false).
+```
+
 ## Config encapsulation for cross server traffic

 Calico supports two types of encapsulation: [VXLAN and IP in IP](https://docs.projectcalico.org/v3.11/networking/vxlan-ipip). VXLAN is the more mature implementation and enabled by default, please check your environment if you need *IP in IP* encapsulation.
@@ -235,7 +243,7 @@ If you are running your cluster with the default calico settings and are upgradi
 * perform a manual migration to vxlan before upgrading kubespray (see migrating from IP in IP to VXLAN below)
 * pin the pre-2.19 settings in your ansible inventory (see IP in IP mode settings below)

-**Note:**: Vxlan in ipv6 only supported when kernel >= 3.12. So if your kernel version < 3.12, Please don't set `calico_vxlan_mode_ipv6: vxlanAlways`. More details see [#Issue 6877](https://github.com/projectcalico/calico/issues/6877).
+**Note:**: Vxlan in ipv6 only supported when kernel >= 3.12. So if your kernel version < 3.12, Please don't set `calico_vxlan_mode_ipv6: Always`. More details see [#Issue 6877](https://github.com/projectcalico/calico/issues/6877).

 ### IP in IP mode

@@ -374,7 +382,7 @@ To clean up any ipvs leftovers:

 Calico node, typha and kube-controllers need to be able to talk to the kubernetes API. Please reference the [Enabling eBPF Calico Docs](https://docs.projectcalico.org/maintenance/ebpf/enabling-bpf) for guidelines on how to do this.

-Kubespray sets up the `kubernetes-services-endpoint` configmap based on the contents of the `loadbalancer_apiserver` inventory variable documented in [HA Mode](/docs/ha-mode.md).
+Kubespray sets up the `kubernetes-services-endpoint` configmap based on the contents of the `loadbalancer_apiserver` inventory variable documented in [HA Mode](/docs/operations/ha-mode.md).

 If no external loadbalancer is used, Calico eBPF can also use the localhost loadbalancer option. We are able to do so only if you use the same port for the localhost apiserver loadbalancer and the kube-apiserver. In this case Calico Automatic Host Endpoints need to be enabled to allow services like `coredns` and `metrics-server` to communicate with the kubernetes host endpoint. See [this blog post](https://www.projectcalico.org/securing-kubernetes-nodes-with-calico-automatic-host-endpoints/) on enabling automatic host endpoints.
@@ -141,7 +141,7 @@ cilium_encryption_enabled: true
 cilium_encryption_type: "ipsec"
 ```

-The third variable is `cilium_ipsec_key.` You need to create a secret key string for this variable.
+The third variable is `cilium_ipsec_key`. You need to create a secret key string for this variable.
 Kubespray does not automate this process.
 Cilium documentation currently recommends creating a key using the following command:

@@ -149,7 +149,11 @@ Cilium documentation currently recommends creating a key using the following com
 echo "3 rfc4106(gcm(aes)) $(echo $(dd if=/dev/urandom count=20 bs=1 2> /dev/null | xxd -p -c 64)) 128"
 ```

-Note that Kubespray handles secret creation. So you only need to pass the key as the `cilium_ipsec_key` variable.
+Note that Kubespray handles secret creation. So you only need to pass the key as the `cilium_ipsec_key` variable, base64 encoded:
+
+```shell
+echo "cilium_ipsec_key: "$(echo -n "3 rfc4106(gcm(aes)) $(echo $(dd if=/dev/urandom count=20 bs=1 2> /dev/null | xxd -p -c 64)) 128" | base64 -w0)
+```
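As a quick sanity check on the encoding step above (an illustrative snippet, not part of the committed docs): the base64 value must decode back to the exact key string. The sketch below builds the same shape of key but uses `od` instead of `xxd`, since `od` is more commonly preinstalled:

```shell
# Build a Cilium IPsec key string of the same shape as the docs command:
# "<spi> <algorithm> <hex key> <key length in bits>"
hex=$(dd if=/dev/urandom count=20 bs=1 2>/dev/null | od -An -tx1 | tr -d ' \n')
key="3 rfc4106(gcm(aes)) ${hex} 128"

# Encode it for the cilium_ipsec_key variable, then round-trip it back
encoded=$(printf '%s' "$key" | base64 -w0)
decoded=$(printf '%s' "$encoded" | base64 -d)

[ "$decoded" = "$key" ] && echo "round-trip ok"
```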

 ### Wireguard Encryption

@@ -35,15 +35,24 @@ containerd_registries_mirrors:
     skip_verify: false
 ```

 `containerd_registries_mirrors` is ignored for pulling images when `image_command_tool=nerdctl`
 (the default for `container_manager=containerd`). Use `crictl` instead, it supports
 `containerd_registries_mirrors` but lacks proper multi-arch support (see
 [#8375](https://github.com/kubernetes-sigs/kubespray/issues/8375)):
+containerd falls back to `https://{{ prefix }}` when none of the mirrors have the image.
+This can be changed with the [`server` field](https://github.com/containerd/containerd/blob/main/docs/hosts.md#server-field):

 ```yaml
 image_command_tool: crictl
 containerd_registries_mirrors:
   - prefix: docker.io
     mirrors:
     - host: https://mirror.gcr.io
       capabilities: ["pull", "resolve"]
       skip_verify: false
+    - host: https://registry-1.docker.io
+      capabilities: ["pull", "resolve"]
+      skip_verify: false
+    server: https://mirror.example.org
 ```

 The `containerd_registries` and `containerd_insecure_registries` configs are deprecated.

 ### Containerd Runtimes

 Containerd supports multiple runtime configurations that can be used with
@@ -130,3 +139,13 @@ containerd_registries_mirrors:
 [RuntimeClass]: https://kubernetes.io/docs/concepts/containers/runtime-class/
 [runtime classes in containerd]: https://github.com/containerd/containerd/blob/main/docs/cri/config.md#runtime-classes
 [runtime-spec]: https://github.com/opencontainers/runtime-spec
+
+### Optional : NRI
+
+[Node Resource Interface](https://github.com/containerd/nri) (NRI) is disabled by default for the containerd. If you
+are using contained version v1.7.0 or above, then you can enable it with the
+following configuration:
+
+```yaml
+nri_enabled: true
+```
@@ -42,6 +42,22 @@ crio_registries:

 [CRI-O]: https://cri-o.io/

+The following is a method to enable insecure registries.
+
+```yaml
+crio_insecure_registries:
+  - 10.0.0.2:5000
+```
+
+And you can config authentication for these registries after `crio_insecure_registries`.
+
+```yaml
+crio_registry_auth:
+  - registry: 10.0.0.2:5000
+    username: user
+    password: pass
+```
+
 ## Note about user namespaces

 CRI-O has support for user namespaces. This feature is optional and can be enabled by setting the following two variables.
@@ -62,3 +78,13 @@ The `allowed_annotations` configures `crio.conf` accordingly.

 The `crio_remap_enable` configures the `/etc/subuid` and `/etc/subgid` files to add an entry for the **containers** user.
 By default, 16M uids and gids are reserved for user namespaces (256 pods * 65536 uids/gids) at the end of the uid/gid space.
+
+## Optional : NRI
+
+[Node Resource Interface](https://github.com/containerd/nri) (NRI) is disabled by default for the CRI-O. If you
+are using CRI-O version v1.26.0 or above, then you can enable it with the
+following configuration:
+
+```yaml
+nri_enabled: true
+```
@@ -97,3 +97,9 @@ Adding extra options to pass to the docker daemon:
 ## This string should be exactly as you wish it to appear.
 docker_options: ""
 ```
+
+For Debian based distributions, set the path to store the GPG key to avoid using the default one used in `apt_key` module (e.g. /etc/apt/trusted.gpg)
+
+```yaml
+docker_repo_key_keyring: /etc/apt/trusted.gpg.d/docker.gpg
+```
docs/_sidebar.md (generated): 154 lines changed
@@ -1,66 +1,94 @@
 * [Readme](/)
-* [Comparisons](/docs/comparisons.md)
-* [Getting started](/docs/getting-started.md)
-* [Ansible](docs/ansible.md)
-* [Variables](/docs/vars.md)
-* Operations
-  * [Integration](docs/integration.md)
-  * [Upgrades](/docs/upgrades.md)
-  * [HA Mode](docs/ha-mode.md)
-  * [Adding/replacing a node](docs/nodes.md)
-  * [Large deployments](docs/large-deployments.md)
-  * [Air-Gap Installation](docs/offline-environment.md)
-* CNI
-  * [Calico](docs/calico.md)
-  * [Flannel](docs/flannel.md)
-  * [Kube Router](docs/kube-router.md)
-  * [Kube OVN](docs/kube-ovn.md)
-  * [Weave](docs/weave.md)
-  * [Multus](docs/multus.md)
-* Ingress
-  * [kube-vip](docs/kube-vip.md)
-  * [ALB Ingress](docs/ingress_controller/alb_ingress_controller.md)
-  * [MetalLB](docs/metallb.md)
-  * [Nginx Ingress](docs/ingress_controller/ingress_nginx.md)
-* [Cloud providers](docs/cloud.md)
-  * [AWS](docs/aws.md)
-  * [Azure](docs/azure.md)
-  * [OpenStack](/docs/openstack.md)
-  * [Equinix Metal](/docs/equinix-metal.md)
-  * [vSphere](/docs/vsphere.md)
-* [Operating Systems](docs/bootstrap-os.md)
-  * [Debian](docs/debian.md)
-  * [Flatcar Container Linux](docs/flatcar.md)
-  * [Fedora CoreOS](docs/fcos.md)
-  * [OpenSUSE](docs/opensuse.md)
-  * [RedHat Enterprise Linux](docs/rhel.md)
-  * [CentOS/OracleLinux/AlmaLinux/Rocky Linux](docs/centos.md)
-  * [Kylin Linux Advanced Server V10](docs/kylinlinux.md)
-  * [Amazon Linux 2](docs/amazonlinux.md)
-  * [UOS Linux](docs/uoslinux.md)
-  * [openEuler notes](docs/openeuler.md)
-* CRI
-  * [Containerd](docs/containerd.md)
-  * [Docker](docs/docker.md)
-  * [CRI-O](docs/cri-o.md)
-  * [Kata Containers](docs/kata-containers.md)
-  * [gVisor](docs/gvisor.md)
 * Advanced
-  * [Proxy](/docs/proxy.md)
-  * [Downloads](docs/downloads.md)
-  * [Netcheck](docs/netcheck.md)
-  * [Cert Manager](docs/cert_manager.md)
-  * [DNS Stack](docs/dns-stack.md)
-  * [Kubernetes reliability](docs/kubernetes-reliability.md)
-  * [Local Registry](docs/kubernetes-apps/registry.md)
-  * [NTP](docs/ntp.md)
-* External Storage Provisioners
-  * [RBD Provisioner](docs/kubernetes-apps/rbd_provisioner.md)
-  * [CEPHFS Provisioner](docs/kubernetes-apps/cephfs_provisioner.md)
-  * [Local Volume Provisioner](docs/kubernetes-apps/local_volume_provisioner.md)
+  * [Arch](/docs/advanced/arch.md)
+  * [Cert Manager](/docs/advanced/cert_manager.md)
+  * [Dns-stack](/docs/advanced/dns-stack.md)
+  * [Downloads](/docs/advanced/downloads.md)
+  * [Gcp-lb](/docs/advanced/gcp-lb.md)
+  * [Kubernetes-reliability](/docs/advanced/kubernetes-reliability.md)
+  * [Mitogen](/docs/advanced/mitogen.md)
+  * [Netcheck](/docs/advanced/netcheck.md)
+  * [Ntp](/docs/advanced/ntp.md)
+  * [Proxy](/docs/advanced/proxy.md)
+  * [Registry](/docs/advanced/registry.md)
+* Ansible
+  * [Ansible](/docs/ansible/ansible.md)
+  * [Ansible Collection](/docs/ansible/ansible_collection.md)
+  * [Vars](/docs/ansible/vars.md)
+* Cloud Providers
+  * [Aws](/docs/cloud_providers/aws.md)
+  * [Azure](/docs/cloud_providers/azure.md)
+  * [Cloud](/docs/cloud_providers/cloud.md)
+  * [Equinix-metal](/docs/cloud_providers/equinix-metal.md)
+  * [Openstack](/docs/cloud_providers/openstack.md)
+  * [Vsphere](/docs/cloud_providers/vsphere.md)
+* CNI
+  * [Calico](/docs/CNI/calico.md)
+  * [Cilium](/docs/CNI/cilium.md)
+  * [Cni](/docs/CNI/cni.md)
+  * [Flannel](/docs/CNI/flannel.md)
+  * [Kube-ovn](/docs/CNI/kube-ovn.md)
+  * [Kube-router](/docs/CNI/kube-router.md)
+  * [Macvlan](/docs/CNI/macvlan.md)
+  * [Multus](/docs/CNI/multus.md)
+  * [Weave](/docs/CNI/weave.md)
+* CRI
+  * [Containerd](/docs/CRI/containerd.md)
+  * [Cri-o](/docs/CRI/cri-o.md)
+  * [Docker](/docs/CRI/docker.md)
+  * [Gvisor](/docs/CRI/gvisor.md)
+  * [Kata-containers](/docs/CRI/kata-containers.md)
+* CSI
+  * [Aws-ebs-csi](/docs/CSI/aws-ebs-csi.md)
+  * [Azure-csi](/docs/CSI/azure-csi.md)
+  * [Cinder-csi](/docs/CSI/cinder-csi.md)
+  * [Gcp-pd-csi](/docs/CSI/gcp-pd-csi.md)
+  * [Vsphere-csi](/docs/CSI/vsphere-csi.md)
 * Developers
-  * [Test cases](docs/test_cases.md)
-  * [Vagrant](docs/vagrant.md)
-  * [CI Matrix](docs/ci.md)
-  * [CI Setup](docs/ci-setup.md)
-* [Roadmap](docs/roadmap.md)
+  * [Ci-setup](/docs/developers/ci-setup.md)
+  * [Ci](/docs/developers/ci.md)
+  * [Test Cases](/docs/developers/test_cases.md)
+  * [Vagrant](/docs/developers/vagrant.md)
+* External Storage Provisioners
+  * [Cephfs Provisioner](/docs/external_storage_provisioners/cephfs_provisioner.md)
+  * [Local Volume Provisioner](/docs/external_storage_provisioners/local_volume_provisioner.md)
+  * [Rbd Provisioner](/docs/external_storage_provisioners/rbd_provisioner.md)
+  * [Scheduler Plugins](/docs/external_storage_provisioners/scheduler_plugins.md)
+* Getting Started
+  * [Comparisons](/docs/getting_started/comparisons.md)
+  * [Getting-started](/docs/getting_started/getting-started.md)
+  * [Setting-up-your-first-cluster](/docs/getting_started/setting-up-your-first-cluster.md)
+* Ingress
+  * [Alb Ingress Controller](/docs/ingress/alb_ingress_controller.md)
+  * [Ingress Nginx](/docs/ingress/ingress_nginx.md)
+  * [Kube-vip](/docs/ingress/kube-vip.md)
+  * [Metallb](/docs/ingress/metallb.md)
+* Operating Systems
+  * [Amazonlinux](/docs/operating_systems/amazonlinux.md)
+  * [Bootstrap-os](/docs/operating_systems/bootstrap-os.md)
+  * [Centos](/docs/operating_systems/centos.md)
+  * [Fcos](/docs/operating_systems/fcos.md)
+  * [Flatcar](/docs/operating_systems/flatcar.md)
+  * [Kylinlinux](/docs/operating_systems/kylinlinux.md)
+  * [Openeuler](/docs/operating_systems/openeuler.md)
+  * [Opensuse](/docs/operating_systems/opensuse.md)
+  * [Rhel](/docs/operating_systems/rhel.md)
+  * [Uoslinux](/docs/operating_systems/uoslinux.md)
+* Operations
+  * [Cgroups](/docs/operations/cgroups.md)
+  * [Encrypting-secret-data-at-rest](/docs/operations/encrypting-secret-data-at-rest.md)
+  * [Etcd](/docs/operations/etcd.md)
+  * [Ha-mode](/docs/operations/ha-mode.md)
+  * [Hardening](/docs/operations/hardening.md)
+  * [Integration](/docs/operations/integration.md)
+  * [Large-deployments](/docs/operations/large-deployments.md)
+  * [Mirror](/docs/operations/mirror.md)
+  * [Nodes](/docs/operations/nodes.md)
+  * [Offline-environment](/docs/operations/offline-environment.md)
+  * [Port-requirements](/docs/operations/port-requirements.md)
+  * [Recover-control-plane](/docs/operations/recover-control-plane.md)
+  * [Upgrades](/docs/operations/upgrades.md)
+* Roadmap
+  * [Roadmap](/docs/roadmap/roadmap.md)
+* Upgrades
+  * [Migrate Docker2containerd](/docs/upgrades/migrate_docker2containerd.md)
@@ -143,6 +143,22 @@ coredns_default_zone_cache_block: |
     }
 ```

+### Handle old/extra dns_domains
+
+If you need to change the dns_domain of your cluster for whatever reason (switching to or from `cluster.local` for example),
+and you have workloads that embed it in their configuration you can use the variable `old_dns_domains`.
+This will add some configuration to coredns and nodelocaldns to ensure the DNS requests using the old domain are handled correctly.
+Example:
+
+```yaml
+old_dns_domains:
+  - example1.com
+  - example2.com
+dns_domain: cluster.local
+```
+
+will make `my-svc.my-ns.svc.example1.com`, `my-svc.my-ns.svc.example2.com` and `my-svc.my-ns.svc.cluster.local` have the same DNS answer.
+
 ### systemd_resolved_disable_stub_listener

 Whether or not to set `DNSStubListener=no` when using systemd-resolved. Defaults to `true` on Flatcar.
@@ -32,7 +32,7 @@ Based on the table below and the available python version for your ansible host

 | Ansible Version | Python Version |
 |-----------------|----------------|
-| 2.14            | 3.9-3.11       |
+| >= 2.16.4       | 3.10-3.12      |

 ## Inventory

@@ -59,7 +59,7 @@ not _kube_node_.

 There are also two special groups:

-* **calico_rr** : explained for [advanced Calico networking cases](/docs/calico.md)
+* **calico_rr** : explained for [advanced Calico networking cases](/docs/CNI/calico.md)
 * **bastion** : configure a bastion host if your nodes are not directly reachable

 Below is a complete inventory example:
@@ -231,6 +231,7 @@ The following tags are defined in playbooks:
 | services | Remove services (etcd, kubelet etc...) when resetting |
 | snapshot | Enabling csi snapshot |
+| snapshot-controller | Configuring csi snapshot controller |
 | system-packages | Install packages using OS package manager |
 | upgrade | Upgrading, f.e. container images/binaries |
 | upload | Distributing images/binaries across hosts |
 | vsphere-csi-driver | Configuring csi driver: vsphere |
@@ -285,7 +286,7 @@ For more information about Ansible and bastion hosts, read

 ## Mitogen

-Mitogen support is deprecated, please see [mitogen related docs](/docs/mitogen.md) for usage and reasons for deprecation.
+Mitogen support is deprecated, please see [mitogen related docs](/docs/advanced/mitogen.md) for usage and reasons for deprecation.

 ## Beyond ansible 2.9

@@ -15,7 +15,7 @@ Kubespray can be installed as an [Ansible collection](https://docs.ansible.com/a
   collections:
     - name: https://github.com/kubernetes-sigs/kubespray
       type: git
-      version: v2.22.1
+      version: master # use the appropriate tag or branch for the version you need
 ```

 2. Install your collection
@@ -34,10 +34,10 @@ Some variables of note include:

 ## Addressing variables

-* *ip* - IP to use for binding services (host var)
+* *ip* - IP to use for binding services (host var). This would **usually** be the public ip.
 * *access_ip* - IP for other hosts to use to connect to. Often required when
   deploying from a cloud, such as OpenStack or GCE and you have separate
-  public/floating and private IPs.
+  public/floating and private IPs. This would **usually** be the private ip.
 * *ansible_default_ipv4.address* - Not Kubespray-specific, but it is used if ip
   and access_ip are undefined
 * *ip6* - IPv6 address to use for binding services. (host var)
@@ -46,11 +46,11 @@ Some variables of note include:
 * *loadbalancer_apiserver* - If defined, all hosts will connect to this
   address instead of localhost for kube_control_planes and kube_control_plane[0] for
   kube_nodes. See more details in the
-  [HA guide](/docs/ha-mode.md).
+  [HA guide](/docs/operations/ha-mode.md).
 * *loadbalancer_apiserver_localhost* - makes all hosts to connect to
   the apiserver internally load balanced endpoint. Mutual exclusive to the
   `loadbalancer_apiserver`. See more details in the
-  [HA guide](/docs/ha-mode.md).
+  [HA guide](/docs/operations/ha-mode.md).

 ## Cluster variables

@@ -186,6 +186,8 @@ Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.m
 * *containerd_additional_runtimes* - Sets the additional Containerd runtimes used by the Kubernetes CRI plugin.
   [Default config](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/container-engine/containerd/defaults/main.yml) can be overridden in inventory vars.

+* *crio_criu_support_enabled* - When set to `true`, enables the container checkpoint/restore in CRI-O. It's required to install [CRIU](https://criu.org/Installation) on the host when dumping/restoring checkpoints. And it's recommended to enable the feature gate `ContainerCheckpoint` so that the kubelet get a higher level API to simplify the operations (**Note**: It's still in experimental stage, just for container analytics so far). You can follow the [documentation](https://kubernetes.io/blog/2022/12/05/forensic-container-checkpointing-alpha/).
+
 * *http_proxy/https_proxy/no_proxy/no_proxy_exclude_workers/additional_no_proxy* - Proxy variables for deploying behind a
   proxy. Note that no_proxy defaults to all internal cluster IPs and hostnames
   that correspond to each node.
@@ -218,6 +220,14 @@ Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.m

 * *kubelet_cpu_manager_policy* - If set to `static`, allows pods with certain resource characteristics to be granted increased CPU affinity and exclusivity on the node. And it should be set with `kube_reserved` or `system-reserved`, enable this with the following guide:[Control CPU Management Policies on the Node](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/)

+* *kubelet_cpu_manager_policy_options* - A dictionary of cpuManagerPolicyOptions to enable. Keep in mind to enable the corresponding feature gates and make sure to pass the booleans as string (i.e. don't forget the quotes)!
+
+```yml
+kubelet_cpu_manager_policy_options:
+  distribute-cpus-across-numa: "true"
+  full-pcpus-only: "true"
+```
+
 * *kubelet_topology_manager_policy* - Control the behavior of the allocation of CPU and Memory from different [NUMA](https://en.wikipedia.org/wiki/Non-uniform_memory_access) Nodes. Enable this with the following guide: [Control Topology Management Policies on a node](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager).

 * *kubelet_topology_manager_scope* - The Topology Manager can deal with the alignment of resources in a couple of distinct scopes: `container` and `pod`. See [Topology Manager Scopes](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/#topology-manager-scopes).
@@ -243,7 +253,7 @@ node_labels:
   label2_name: label2_value
 ```

-* *node_taints* - Taints applied to nodes via kubelet --register-with-taints parameter.
+* *node_taints* - Taints applied to nodes via `kubectl taint node`.
   For example, taints can be set in the inventory as variables or more widely in group_vars.
   *node_taints* has to be defined as a list of strings in format `key=value:effect`, e.g.:

@@ -252,8 +262,6 @@ node_taints:
 - "node.example.com/external=true:NoSchedule"
 ```

-* *podsecuritypolicy_enabled* - When set to `true`, enables the PodSecurityPolicy admission controller and defines two policies `privileged` (applying to all resources in `kube-system` namespace and kubelet) and `restricted` (applying all other namespaces).
-  Addons deployed in kube-system namespaces are handled.
 * *kubernetes_audit* - When set to `true`, enables Auditing.
   The auditing parameters can be tuned via the following variables (which default values are shown below):
   * `audit_log_path`: /var/log/audit/kube-apiserver-audit.log
@@ -271,6 +279,12 @@ node_taints:
   * `audit_webhook_mode`: batch
   * `audit_webhook_batch_max_size`: 100
   * `audit_webhook_batch_max_wait`: 1s
 * *kubectl_alias* - Bash alias of kubectl to interact with Kubernetes cluster much easier.

+* *remove_anonymous_access* - When set to `true`, removes the `kubeadm:bootstrap-signer-clusterinfo` rolebinding created by kubeadm.
+  By default, kubeadm creates a rolebinding in the `kube-public` namespace which grants permissions to anonymous users. This rolebinding allows kubeadm to discover and validate cluster information during the join phase.
+  In a nutshell, this option removes the rolebinding after the init phase of the first control plane node and then configures kubeadm to use file discovery for the join phase of other nodes.
+  This option does not remove the anonymous authentication feature of the API server.
+
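The option in the hunk above is a plain boolean in the inventory; a minimal sketch (the variable name is taken from the bullet above, and the default behaviour is to keep the rolebinding):

```yml
remove_anonymous_access: true
```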

 ### Custom flags for Kube Components
Some files were not shown because too many files have changed in this diff.