Compare commits
997 Commits
SHA1 | Author | Date
---|---|---
5fd2b151b9 | |||
3c107ef4dc | |||
a5f93d6013 | |||
38338e848d | |||
e9518072a8 | |||
10dbd0afbd | |||
1dce56e2f8 | |||
1f0b2eac12 | |||
d9539e0f27 | |||
0909368339 | |||
091b634ea1 | |||
d18804b0bb | |||
a8b5b856d1 | |||
1d2a18b355 | |||
4a59340182 | |||
aa33613b98 | |||
96372c15e2 | |||
f365b32c60 | |||
5af2c42bde | |||
c0400e9db5 | |||
f7447837c5 | |||
a4dbee3e38 | |||
fb7899aa06 | |||
6d54d9f49a | |||
6546869c42 | |||
aa79a02f9c | |||
447febcdd6 | |||
61732847b6 | |||
fcd9d97f10 | |||
b6b5d52f78 | |||
4b6f29d5e1 | |||
f5d5230034 | |||
8dc19374cc | |||
a8f2af0503 | |||
d8a2941e9e | |||
55b6d0bbdd | |||
a3c044b657 | |||
4a2abc1a46 | |||
410c78f2e5 | |||
3b5830a1cf | |||
ab7df10a7d | |||
93663e987c | |||
6114266b84 | |||
97f96a6376 | |||
58062be2a3 | |||
031cf565ec | |||
5ec4efe88e | |||
e02aae71a1 | |||
1f9f885379 | |||
80509673d2 | |||
b902110d75 | |||
2c23027794 | |||
15589dd88f | |||
1a7f52c889 | |||
24cbf2287c | |||
a56d9de502 | |||
95e14ffb54 | |||
6139ee3add | |||
f0c0390646 | |||
e7a1949d85 | |||
ff8cb46bb9 | |||
399cb9707a | |||
6d9cd2d720 | |||
622537bd33 | |||
9169f840c2 | |||
79996b557b | |||
be8e5e1fdc | |||
bb0c3537cb | |||
36a5143478 | |||
7b86b87dca | |||
53affb9bc0 | |||
0fe2b66097 | |||
385f7f6e75 | |||
9f1e3db906 | |||
b63d900625 | |||
ac295de64c | |||
111571b67a | |||
a4bce333a3 | |||
c53a6eca86 | |||
7c2785e083 | |||
aab4149ab0 | |||
89a4b92753 | |||
5414a410bd | |||
ad796d188d | |||
de8cd5cd7f | |||
cc93c4fe12 | |||
c456a311d6 | |||
ed4b4b8482 | |||
8d9f207836 | |||
2a3164e040 | |||
f10d1327d4 | |||
d314174149 | |||
9885fe73dc | |||
f2cf323ecf | |||
cf4f2b4f14 | |||
fbc13ea6dc | |||
b8bc8eee41 | |||
11380769cd | |||
ee62c99eb1 | |||
843d439898 | |||
8d5da5cfca | |||
5a2c75a3cb | |||
c1e4cef75b | |||
5d73b9ccc5 | |||
9efe1fe09d | |||
4bbec963e6 | |||
348fc5b109 | |||
101864c050 | |||
fe150d4e4d | |||
048ac264a3 | |||
add7570a94 | |||
db77bd9588 | |||
768fe05eea | |||
1c48a001df | |||
a7276901a3 | |||
b0fa189b3c | |||
cc57152cc0 | |||
046f3eebcb | |||
ea874899c7 | |||
1782d19e1f | |||
e2476fbd0b | |||
07cd81ef58 | |||
92f542938c | |||
9df2306ee9 | |||
495d0b659a | |||
a2f8f17270 | |||
0e2329b59e | |||
7c011b14df | |||
ad68b23d8a | |||
670d977dfb | |||
70143d87bf | |||
e21ca5433a | |||
68ad4ff4d9 | |||
d7b0ff3de6 | |||
725f9ea3bd | |||
a9684648ab | |||
6b1dfa4ae6 | |||
9cc73bdf08 | |||
114ab5e4e6 | |||
77ebf4531c | |||
1551fe01f9 | |||
29874baf8a | |||
e6fe9d5807 | |||
81317505eb | |||
d57c27ffcf | |||
8d7b25d4f0 | |||
8e809aed01 | |||
b4c87c669b | |||
bca704e7e9 | |||
d50eb60827 | |||
dbd9aaf1ea | |||
d20d5e648f | |||
96640e68e2 | |||
3e007df97c | |||
06584ee3aa | |||
26e3142c95 | |||
33585fa673 | |||
665ce82d71 | |||
fb78bfaaae | |||
b4ce221002 | |||
444b1dafdc | |||
d6174b22e9 | |||
c75f394707 | |||
94ce99eb0a | |||
0515814e0c | |||
c87f4f613e | |||
f12e9fa22a | |||
3ca11b70c4 | |||
1cfaf927c9 | |||
45135ad3e4 | |||
9c06dd2863 | |||
b2088b72dd | |||
4e721bfd9d | |||
f52ed9f91e | |||
88f3b86410 | |||
3117858dcd | |||
8c36915ea0 | |||
5176e5c968 | |||
e95c733a81 | |||
15c2919ecc | |||
774f4dbbf7 | |||
b1e852a785 | |||
42ea4d2cfd | |||
9fd14cb6ea | |||
4e34803b1e | |||
7abcf6e0b9 | |||
e5ad0836bc | |||
2c50f20429 | |||
a15d626771 | |||
fd9b26675e | |||
eb33f085b6 | |||
fb774d4317 | |||
459bee6d2c | |||
6e080cd9b0 | |||
8a5ba6b20c | |||
c3ec3ff902 | |||
284a21012c | |||
7897c34ba3 | |||
8cc84e132a | |||
00ad151186 | |||
4265149463 | |||
ee8d6ab4fc | |||
a80745b5bd | |||
bd3f2d5cef | |||
e9c591e6de | |||
710d5ae48e | |||
fc769eb870 | |||
eec2ed5809 | |||
f7dd20f21c | |||
bfc9bcb8c7 | |||
8eb26c21be | |||
3c66e4cdba | |||
f0f2b81276 | |||
45ed6de315 | |||
c9290182be | |||
893538d8e6 | |||
246c8209c1 | |||
36fe2cb5ea | |||
9d6cc3a8d5 | |||
8870178a2d | |||
b0079ccd77 | |||
1772d122b2 | |||
756ae926ba | |||
2c1db56213 | |||
d672cef21c | |||
27e239c8d6 | |||
f1d7af11ee | |||
59a097b255 | |||
d40783022b | |||
7a3a473ccf | |||
2cdf752481 | |||
26f93feb2d | |||
d4aba0af48 | |||
42d12afbc6 | |||
022468ae3e | |||
3bb42cc66a | |||
8b5b27bb51 | |||
7328e0e1ac | |||
eeaf2ea4cf | |||
42eb8e4663 | |||
c13d0db0cc | |||
dba2026002 | |||
a62f74259c | |||
a2331fec55 | |||
b6872a0be3 | |||
bc7a73ca2c | |||
c405944e9d | |||
7eab889c07 | |||
bb55f68f95 | |||
658543c949 | |||
5b382668f5 | |||
b7692fad09 | |||
fbdda81515 | |||
7484888e42 | |||
f783a638a3 | |||
2d18e19263 | |||
ff7d489f2d | |||
6d29a5981c | |||
10b75d1d51 | |||
aa447585c4 | |||
f6c32c3ea3 | |||
d208896c46 | |||
08506f5139 | |||
2c4b11f321 | |||
d890d2f277 | |||
793f3990a0 | |||
9d439d2e5b | |||
db03f17486 | |||
dff78f616e | |||
d3a4d8dc24 | |||
dc58159d16 | |||
b60d5647a2 | |||
2bcfb3fea3 | |||
66f27ed1f3 | |||
cb84b93930 | |||
32a5453473 | |||
97d126ac8b | |||
deea7bb87b | |||
1bd1825ecb | |||
20e36191bb | |||
769566f36c | |||
ddd230485b | |||
5ee0cbaa42 | |||
ff675d40f9 | |||
0eebe43c08 | |||
069636e5b4 | |||
a03540dabc | |||
f6d69d0a00 | |||
cc2f26b8e9 | |||
3e687bbe9a | |||
c5113d3352 | |||
4d9712a3ef | |||
5b9b2c0973 | |||
a5af87758a | |||
8b11de5425 | |||
ff928e0e66 | |||
952191db99 | |||
61adca2a6d | |||
9872ed4bb2 | |||
3aa2d56da9 | |||
6a398724b6 | |||
af3823bced | |||
1e601bb2ef | |||
e4d240b1b7 | |||
e3470b28c5 | |||
e9a48770a7 | |||
0322b69f63 | |||
e587e82f7f | |||
5f5199bf53 | |||
876c4df1b6 | |||
e68ec257a3 | |||
216e0b2a52 | |||
ab0ff2ab3c | |||
5cd65f9c45 | |||
4e47c267fb | |||
cb47bbf753 | |||
c41d200a95 | |||
771d537ff3 | |||
8ca1f4ce44 | |||
625ec529ff | |||
caa81f3ac2 | |||
8092f57695 | |||
965a1234d3 | |||
15bc445a9c | |||
bb72de0dc9 | |||
6da0ecfa55 | |||
1ccc10baf8 | |||
45c2900e71 | |||
eb583dd2f3 | |||
f6233ffc9a | |||
46ee9faca9 | |||
f320b79c0c | |||
6cc05c103a | |||
88577b9889 | |||
5821f9748a | |||
c58bd33af7 | |||
cf7c60029b | |||
046e315bfd | |||
251800eb16 | |||
fe16fecd8f | |||
9ea9604b3f | |||
a32cd85eb7 | |||
95b460ae94 | |||
57e467c03c | |||
764a2fd5a8 | |||
d197130148 | |||
39d68822ed | |||
4ece73d432 | |||
60a217766f | |||
309240cd6f | |||
6b0d26ddf0 | |||
5aa8df163e | |||
881dc8172c | |||
aff441a01f | |||
44a14d0b3e | |||
f106bf5bc4 | |||
39b8336f3f | |||
a6bc284abd | |||
6b7b8a2303 | |||
8f20d90f88 | |||
047f098660 | |||
3b2554217b | |||
672d50393c | |||
d4467ab1c6 | |||
ebeb57ee7c | |||
f9355ea14d | |||
2ca6819cdf | |||
437372021d | |||
78ac01add7 | |||
3b3938c6a6 | |||
36fc05d2fd | |||
7abc747b56 | |||
9f976e568d | |||
9d7142f476 | |||
50f77cca1d | |||
33ebf124c4 | |||
03e162b342 | |||
d8b06f3e2f | |||
d6f206b5fd | |||
357a15ffd4 | |||
a3f892c76c | |||
2778ac61a4 | |||
c7b00caeaa | |||
7fe255e5bb | |||
93f7a26896 | |||
3d617fbf88 | |||
c59c3a1bcf | |||
4c0bf6225a | |||
b11662a887 | |||
11f1f71b3b | |||
0e9d1e09e3 | |||
65d2a3b0e5 | |||
8165da3f3d | |||
4b7347f1cd | |||
e6902d8ecc | |||
a5137affeb | |||
a423927ac9 | |||
31c2922752 | |||
7e81855e24 | |||
2510092599 | |||
6113a3f350 | |||
7d6fc1d680 | |||
91a101c855 | |||
1de127470f | |||
40de468413 | |||
c402feffbd | |||
f74d6b084b | |||
dd022f2dbc | |||
19928dea2b | |||
21273926ce | |||
c03bab3246 | |||
71347322d6 | |||
c9769965b8 | |||
52cee1f57f | |||
056f4b6c00 | |||
3919d666c1 | |||
8c8d978cd8 | |||
dea4210da1 | |||
a6344f7561 | |||
c490e5c8a1 | |||
84052ff0b6 | |||
9ca374a88d | |||
648aa7422d | |||
41aefd131b | |||
2e90d3fe76 | |||
4f33c6cfe6 | |||
f4e6fdc193 | |||
9d069d54d6 | |||
fb0ee9d84a | |||
016b7893c6 | |||
1724772b20 | |||
a6a5d0e068 | |||
d548cb6ac2 | |||
d9641771ed | |||
aaa3f1c491 | |||
5889f7af0e | |||
5579cddbdb | |||
2b6866484e | |||
34a27b0127 | |||
948d1d61ff | |||
c96a9bfdfd | |||
4e80ac1cb3 | |||
550bda951e | |||
6b27508c93 | |||
6684766c5f | |||
5fd43b7cf0 | |||
336e2b8c84 | |||
ee69ac857e | |||
6caf5b0ac3 | |||
0f461282c8 | |||
ab7c110880 | |||
5046466dae | |||
0cc581b2da | |||
7dde23e60b | |||
e4a48cf53b | |||
a3fe1e78df | |||
5f2bb3319b | |||
429b08a408 | |||
ec0317e0e4 | |||
613e3b65ac | |||
dfb9063b3f | |||
284354c0da | |||
82ee60fe8b | |||
73a8c24089 | |||
d313be4420 | |||
83750d14e3 | |||
123532d2a4 | |||
1a05b5980f | |||
a3a772be7b | |||
42a5055d3c | |||
a93639650f | |||
71a230a4fa | |||
0643ed968f | |||
1572aaf6ca | |||
5803de1ac5 | |||
13874f4610 | |||
341ea5a6ea | |||
93be5afb60 | |||
5ed3916f82 | |||
7760f75ae0 | |||
390764c2b4 | |||
9926395e5b | |||
422428908a | |||
76c43f62e2 | |||
b69d5f6e6e | |||
0db441b28f | |||
e3ebabc3b0 | |||
d0867c8d03 | |||
b46458a18f | |||
3ae29d763e | |||
125cb0aa64 | |||
783871a253 | |||
8294a9f1db | |||
ef43b21597 | |||
6fdcaa1a63 | |||
d47a2d03b4 | |||
739cf59953 | |||
2e386dfbdc | |||
ccbb2ee3ae | |||
eb78ce4c4e | |||
6084e05a6b | |||
da8a604c4c | |||
df2b2d7417 | |||
d87b0344b5 | |||
2606e8e1c8 | |||
b62de1dcb1 | |||
37057ba969 | |||
b58512bbda | |||
c90045c506 | |||
8b91a43576 | |||
602dba45ea | |||
d240073f65 | |||
69f09e0f18 | |||
cca26ae3d7 | |||
1c1894cdd3 | |||
26a0406669 | |||
a746d63177 | |||
0fc5e70c18 | |||
b74c2f89f0 | |||
7ac7fc33a7 | |||
9339903a85 | |||
33c8d0a1a7 | |||
5488571108 | |||
28fbfbbbe7 | |||
18cdab3671 | |||
311baeed5d | |||
f4d4d490af | |||
256a4e1f29 | |||
c50c6672f3 | |||
1345dd07f7 | |||
e83010b739 | |||
0dbde9e923 | |||
d4193bbd22 | |||
b92404fd0a | |||
9f01331595 | |||
82076f90a3 | |||
e165bb19a0 | |||
8168689caa | |||
c71b078c8e | |||
caa8efbf86 | |||
bcec5553c5 | |||
9ac744779c | |||
4e76bced53 | |||
60f263b629 | |||
ea57ce7514 | |||
439a2e2678 | |||
346eca5748 | |||
643b28f9d3 | |||
1938c96239 | |||
5dc8f5229f | |||
f35e5e864f | |||
47b4242613 | |||
92c4428cfd | |||
d97673c13f | |||
f61071312a | |||
234608433e | |||
36b6ae9a3c | |||
977f82c32c | |||
1f6dce61ba | |||
6f07da9f41 | |||
ac7cc4b7d1 | |||
d591b59205 | |||
c6f2102073 | |||
612266f3e5 | |||
5fbfa1481e | |||
430a87d887 | |||
0c953101ff | |||
07c144d8a6 | |||
298ab8e89e | |||
8812be1e14 | |||
4268996680 | |||
34232a170a | |||
0fa90ec9e8 | |||
f073ee91ea | |||
cf502735e9 | |||
252a30aee8 | |||
677c4c4cb6 | |||
6a457720a4 | |||
f2de250b10 | |||
6cb9bd2619 | |||
e727bd52f1 | |||
d2c57142d3 | |||
9be099466d | |||
acae5d4286 | |||
637eabccce | |||
e6cfbe42db | |||
15aec7cd87 | |||
b5d3f9b2fe | |||
e38258381f | |||
e8a1c7a53f | |||
5bf9b5345e | |||
2af71f31b4 | |||
4662b41de6 | |||
ff5a48c9f9 | |||
4fb4ac120b | |||
c7fef6cb76 | |||
6a7308d5c7 | |||
4419662fa0 | |||
b91f8630a3 | |||
5668e5f767 | |||
aa0d7ea5d0 | |||
c52c5f5056 | |||
90fc407420 | |||
9fb391fed5 | |||
fbc55da2bf | |||
1b1f5f22d4 | |||
66da43bbbc | |||
731d32afda | |||
b4688701ea | |||
af4c41f32e | |||
d0a1e15ef3 | |||
a4da0e4ee2 | |||
7d816aecf1 | |||
a63b05efbc | |||
7f212ca9cb | |||
296eccd238 | |||
f94eb0b997 | |||
a70c3b661e | |||
0f246bfba4 | |||
8141b72d5e | |||
277c5d74cc | |||
7a86b6c73e | |||
52a85d5757 | |||
a76e5dbb11 | |||
c3e5aac18e | |||
10b38ab9ff | |||
32cd6e99b2 | |||
a2540e3318 | |||
0b874e8db2 | |||
192136df20 | |||
ab8fdba484 | |||
342e6d6823 | |||
dfe7bfd127 | |||
51f55f3748 | |||
a709cd9aa1 | |||
d4dfdf68a6 | |||
a5c21ab2e8 | |||
c1690c91c2 | |||
e8195b65e4 | |||
c9cff5c845 | |||
da20d9eda4 | |||
a2bdcabc33 | |||
1e8ee99d1a | |||
a07260959d | |||
5fdea4b947 | |||
83da5d7657 | |||
1761f9f891 | |||
b3282cd0bb | |||
65ece3bc1d | |||
e2d6b92370 | |||
bcd912e854 | |||
8251781efb | |||
3b7eaf66b6 | |||
1d148e9755 | |||
d84ed1b4b3 | |||
baf80b7d7e | |||
9777b3c177 | |||
d2151500b6 | |||
e101b72a72 | |||
b847a43c61 | |||
19f5093034 | |||
585102ee20 | |||
ee7ac22f0d | |||
0b67c23d42 | |||
f1ba247844 | |||
2fa7ee0cf9 | |||
40fbb3691d | |||
d9b1435621 | |||
72ab34f210 | |||
67ca186dd1 | |||
85fa3efc06 | |||
8531ec9186 | |||
8c3f5f0831 | |||
c4beee38f6 | |||
247a1a6e6e | |||
a4396cfca0 | |||
53b72920a5 | |||
536454b079 | |||
95bb8075f5 | |||
708d2fbd61 | |||
103f09b470 | |||
c4c312c2e6 | |||
d7babeba2e | |||
9e59c74c24 | |||
d94253ff6a | |||
dc90c594c6 | |||
094c2c75f3 | |||
9ddace0566 | |||
47061a31e2 | |||
33d897bcb6 | |||
bf94d6f45e | |||
1556d1c63e | |||
c2093b128d | |||
153b82a803 | |||
587c8f4701 | |||
922c6897d1 | |||
eb6025a184 | |||
c43f9bc705 | |||
cd2847c1b9 | |||
309d6a49b6 | |||
8281b98e19 | |||
0e99cbb0ad | |||
7c7adc7198 | |||
c1ebd84bf0 | |||
26aeebb372 | |||
c7de2a524b | |||
e924504928 | |||
63908108b2 | |||
9bc5da9780 | |||
4a7d8c6fea | |||
722aacb633 | |||
ab0581e114 | |||
68808534b3 | |||
0500f27db8 | |||
cb92b30c25 | |||
67147cf435 | |||
96a2439c38 | |||
e8f97aa437 | |||
87757d4fcf | |||
33de89b69f | |||
9e86f1672b | |||
28aade3e06 | |||
35276de37e | |||
492218a3e1 | |||
a740e521d2 | |||
bdc183114a | |||
7de87d958e | |||
ffce277c0c | |||
c226b4e5cb | |||
094f4d02b8 | |||
ba615ff94e | |||
5240465f39 | |||
ef6a59bbd3 | |||
cd123f7f24 | |||
0984b23f0e | |||
d9dca20d7f | |||
d8bebcd201 | |||
f576d70b3c | |||
ae5ff890d4 | |||
24ee97d558 | |||
f949bfd46c | |||
242e96d251 | |||
66d9a6ebbc | |||
4e28f1de4e | |||
9b8a757526 | |||
a894a8c7bc | |||
962155e463 | |||
c90c981bb2 | |||
04fe83daa0 | |||
50d0ab2944 | |||
608e7dfab2 | |||
c6e3a8dbbd | |||
1884d89d3b | |||
ed95f9ab81 | |||
9f8466a186 | |||
8c869a2e3e | |||
743ad0eb5c | |||
5253b3ec13 | |||
ebf8231c9a | |||
adceaf60e1 | |||
96c63cc0b6 | |||
5f2fa6d76f | |||
bd064e8094 | |||
8f4e879ca7 | |||
a19ab91b1b | |||
4f627baf71 | |||
3914d51a7e | |||
bd6c12d686 | |||
9135c134ec | |||
59d71740b1 | |||
078d27c0f1 | |||
180f2d1fde | |||
391b155a98 | |||
47982ea21c | |||
d0e31f9457 | |||
7803ae6acb | |||
97de82bbcc | |||
bd1d49fabd | |||
928bbeaf0f | |||
343a26434d | |||
107da007b1 | |||
fb980e4542 | |||
f12ad6a56f | |||
5691086ba2 | |||
831a54e9b7 | |||
fd64f4d2a0 | |||
3cd89bed45 | |||
5b2568adf1 | |||
48a85ce8f8 | |||
936927a54f | |||
8418daa544 | |||
5c22133492 | |||
e69b9f6dcb | |||
b03093be73 | |||
bc44d5deb3 | |||
8ab86ac49d | |||
850b7466cd | |||
652cbedee5 | |||
bf96b92def | |||
ab21f4d169 | |||
64a39fdb86 | |||
2192bcccbd | |||
7237a925eb | |||
34ed6e1a08 | |||
8cbdf73eba | |||
624a964cda | |||
a14dfe74e1 | |||
f2e822810a | |||
a192111e6a | |||
4271dd6645 | |||
457ed11b49 | |||
9f8da6c225 | |||
ed9a521d6d | |||
68fafd030d | |||
f49926413a | |||
e8aec5f4f0 | |||
c51ed4bbb7 | |||
ba4ad51c26 | |||
785b84fd43 | |||
15ce66b2f5 | |||
9949c2b34e | |||
7e6d7caf4b | |||
48c64a1f72 | |||
6297e5ea93 | |||
0c315e0ff4 | |||
b7fcabea7b | |||
999141f0fd | |||
f5f6e44369 | |||
0c2183c10a | |||
cd38ecc378 | |||
1771f18437 | |||
72807965a8 | |||
611c7744a1 | |||
9baf9e569b | |||
ede3aad2ab | |||
143a75ccde | |||
62218c1497 | |||
8a238cda3d | |||
706d8c7968 | |||
cb3cc6f523 | |||
87fd8415da | |||
edcd5bf67f | |||
9528caa1d7 | |||
3f32e5973f | |||
a17e466a29 | |||
ff03c82151 | |||
152c409022 | |||
a46d4efba6 | |||
fca384e24c | |||
ec64eda2bc | |||
20adb604cc | |||
57a1ce28c4 | |||
39caf94790 | |||
ba4c89a12e | |||
b013b125bc | |||
e786010584 | |||
01397678df | |||
fae77970ac | |||
e737ed8105 | |||
b2dd01a0b0 | |||
323ff78206 | |||
8659693c76 | |||
c3a8f379e8 | |||
ad18f229c5 | |||
2feac2956a | |||
60d6195a9e | |||
c0cf506fb4 | |||
a649aa8b7e | |||
7fef64dacd | |||
91fca69aa0 | |||
3fef552978 | |||
50364ab571 | |||
a4e32c748a | |||
c48bc34a34 | |||
451ee18c4a | |||
4ee3699933 | |||
caa2555b1d | |||
09851621de | |||
05c8a29688 | |||
793d665db4 | |||
50da691d45 | |||
6f1fe0cda2 | |||
ab007e4ab8 | |||
03dd43e97d | |||
4f92417a5d | |||
3016ab79cb | |||
b2d6626363 | |||
98e2d6957a | |||
779299de15 | |||
bf5582b01f | |||
7e94d31c8b | |||
896f59267a | |||
21b0a3649d | |||
3bb6066558 | |||
64be24dd20 | |||
f8ffe53709 | |||
4d3f6c6533 | |||
6163fe166e | |||
6eff3f0fce | |||
6358cf788f | |||
6915278f65 | |||
b33713da4a | |||
83c1bd516d | |||
5d24cabc83 | |||
7127e6de54 | |||
cea8f1d381 | |||
bedcca922c | |||
faf50ea698 | |||
a323335d36 | |||
f15dda0248 | |||
8d71d56809 | |||
cf472a6b4c | |||
fd6ac61afc | |||
16a1926f94 | |||
839974bad0 | |||
4566d60e6f | |||
49a7278563 | |||
8676f8761f | |||
b9781fa7c2 | |||
08052f60da | |||
44230a4e86 | |||
90ffb8489a | |||
238f6e8a0b | |||
ef7cf3bf11 | |||
e7d5b7af67 | |||
359e55f6e4 | |||
dd29c8064f | |||
c7bd2a2a1e | |||
87fa167efa | |||
baaa6efc2b | |||
cece179bd4 | |||
56b92812fa | |||
2cbbcee351 | |||
f5508b1794 | |||
8f7d552401 | |||
bcd6ecb7fb | |||
65666fc28a | |||
b4734c280a | |||
dd61f685b8 | |||
641ce3358a | |||
4984b57aa2 | |||
87d8d87c6e | |||
283c4169ac | |||
d5f11b2442 | |||
5edc81c627 | |||
391413f7e7 | |||
c05c60a5d2 | |||
87b42e34e0 | |||
be0bec9eab | |||
cb59559835 | |||
078b67c50f | |||
e95c4739f5 | |||
32877bdc7b | |||
5e3af86c26 | |||
ec1073def8 | |||
28e530e005 | |||
9e9aba4e3a | |||
de038530ef | |||
337977e868 | |||
1c2bdbacb1 | |||
9715962356 | |||
5afbe181ce | |||
a5094f2a6a | |||
9156d1ecfd | |||
fe5ec398bf | |||
babf42f03a | |||
859f6322a0 | |||
815c5fa43c | |||
10b2466d82 | |||
f68d8f3757 | |||
9b083b62cf | |||
59614fc60d | |||
b54af6b42f | |||
7cab7e5fef | |||
4c5735cef8 | |||
58e1db6aae | |||
63ae6ba5b5 | |||
f58b4d3dd6 | |||
d3a8584212 | |||
51f1ae1e9e | |||
4271126bae | |||
049f5015c1 | |||
6ab671c88b | |||
d73ac90acf | |||
adf6e2f7b1 | |||
fb0803cf4c | |||
806834a6e9 | |||
8415634016 | |||
319f687ced | |||
8127e8f8e8 | |||
dd46cc64a4 | |||
2d5862a94d | |||
3d45a81006 | |||
51a0996087 | |||
80ac2ec6fc | |||
5d61b5e813 | |||
b769636435 |
.github/ISSUE_TEMPLATE.md (vendored, new file, 47 lines)
@@ -0,0 +1,47 @@

<!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->

**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):

<!--
If this is a BUG REPORT, please:
  - Fill in as much of the template below as you can. If you leave out
    information, we can't help you as well.

If this is a FEATURE REQUEST, please:
  - Describe *in detail* the feature/behavior/change you'd like to see.

In both cases, be ready for follow-up questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->

**Environment**:
- **Cloud provider or hardware configuration:**

- **OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`):**

- **Version of Ansible** (`ansible --version`):


**Kargo version (commit) (`git rev-parse --short HEAD`):**


**Network plugin used**:


**Copy of your inventory file:**


**Command used to invoke ansible**:


**Output of ansible run**:
<!-- We recommend using snippets services like https://gist.github.com/ etc. -->

**Anything else we need to know**:
<!-- By running scripts/collect-info.yaml you can gather a lot of useful information.
The script can be started with:
  ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
(If you are using CoreOS, remember to add '-e ansible_python_interpreter=/opt/bin/python'.)
After running this command you can find the logs in `pwd`/logs.tar.gz. You can upload the entire file somewhere and paste the link here.-->
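For bug reports, the environment details this template asks for can be gathered with the same commands it references, plus the bundled collect-info playbook. A sketch of a typical sequence; the inventory path and SSH user are placeholders, exactly as in the template:

```sh
# Environment details requested by the template
printf "$(uname -srm)\n$(cat /etc/os-release)\n"    # OS
ansible --version                                   # Version of Ansible
git rev-parse --short HEAD                          # Kargo version (commit)

# Gather logs with the bundled playbook; on CoreOS hosts also add
# -e ansible_python_interpreter=/opt/bin/python
ansible-playbook -i <inventory_file_path> -u <ssh_user> -e ansible_ssh_user=<ssh_user> \
  -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
# The resulting archive lands in `pwd`/logs.tar.gz
```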
.gitignore (vendored, new file, 13 lines)
@@ -0,0 +1,13 @@

.vagrant
*.retry
inventory/vagrant_ansible_inventory
temp
.idea
.tox
.cache
*.egg-info
*.pyc
*.pyo
*.tfstate
*.tfstate.backup
/ssh-bastion.conf
.gitlab-ci.yml (new file, 455 lines)
@@ -0,0 +1,455 @@
|
||||
stages:
|
||||
- unit-tests
|
||||
- deploy-gce-part1
|
||||
- deploy-gce-part2
|
||||
- deploy-gce-special
|
||||
|
||||
variables:
|
||||
FAILFASTCI_NAMESPACE: 'kargo-ci'
|
||||
# DOCKER_HOST: tcp://localhost:2375
|
||||
ANSIBLE_FORCE_COLOR: "true"
|
||||
|
||||
# asia-east1-a
|
||||
# asia-northeast1-a
|
||||
# europe-west1-b
|
||||
# us-central1-a
|
||||
# us-east1-b
|
||||
# us-west1-a
|
||||
|
||||
before_script:
|
||||
- pip install ansible
|
||||
- pip install netaddr
|
||||
- pip install apache-libcloud==0.20.1
|
||||
- pip install boto==2.9.0
|
||||
- mkdir -p /.ssh
|
||||
- cp tests/ansible.cfg .
|
||||
|
||||
.job: &job
|
||||
tags:
|
||||
- kubernetes
|
||||
- docker
|
||||
image: quay.io/ant31/kargo:master
|
||||
|
||||
.docker_service: &docker_service
|
||||
services:
|
||||
- docker:dind
|
||||
|
||||
.create_cluster: &create_cluster
|
||||
<<: *job
|
||||
<<: *docker_service
|
||||
|
||||
.gce_variables: &gce_variables
|
||||
GCE_USER: travis
|
||||
SSH_USER: $GCE_USER
|
||||
TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
|
||||
CONTAINER_ENGINE: docker
|
||||
PRIVATE_KEY: $GCE_PRIVATE_KEY
|
||||
GS_ACCESS_KEY_ID: $GS_KEY
|
||||
GS_SECRET_ACCESS_KEY: $GS_SECRET
|
||||
ANSIBLE_KEEP_REMOTE_FILES: "1"
|
||||
BOOTSTRAP_OS: none
|
||||
RESOLVCONF_MODE: docker_dns
|
||||
LOG_LEVEL: "-vv"
|
||||
ETCD_DEPLOYMENT: "docker"
|
||||
KUBELET_DEPLOYMENT: "docker"
|
||||
MAGIC: "ci check this"
|
||||
|
||||
.gce: &gce
|
||||
<<: *job
|
||||
<<: *docker_service
|
||||
cache:
|
||||
key: "$CI_BUILD_REF_NAME"
|
||||
paths:
|
||||
- downloads/
|
||||
- $HOME/.cache
|
||||
stage: deploy-gce
|
||||
before_script:
|
||||
- docker info
|
||||
- pip install ansible==2.1.3.0
|
||||
- pip install netaddr
|
||||
- pip install apache-libcloud==0.20.1
|
||||
- pip install boto==2.9.0
|
||||
- mkdir -p /.ssh
|
||||
- cp tests/ansible.cfg .
|
||||
- mkdir -p $HOME/.ssh
|
||||
- echo $PRIVATE_KEY | base64 -d > $HOME/.ssh/id_rsa
|
||||
- echo $GCE_PEM_FILE | base64 -d > $HOME/.ssh/gce
|
||||
- echo $GCE_CREDENTIALS > $HOME/.ssh/gce.json
|
||||
- chmod 400 $HOME/.ssh/id_rsa
|
||||
- ansible-playbook --version
|
||||
- cp tests/ansible.cfg .
|
||||
- export PYPATH=$([ $BOOTSTRAP_OS = none ] && echo /usr/bin/python || echo /opt/bin/python)
|
||||
script:
|
||||
- pwd
|
||||
- ls
|
||||
- echo ${PWD}
|
||||
- >
|
||||
ansible-playbook tests/cloud_playbooks/create-gce.yml -i tests/local_inventory/hosts.cfg -c local $LOG_LEVEL
|
||||
-e mode=${CLUSTER_MODE}
|
||||
-e test_id=${TEST_ID}
|
||||
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
|
||||
-e gce_project_id=${GCE_PROJECT_ID}
|
||||
-e gce_service_account_email=${GCE_ACCOUNT}
|
||||
-e gce_credentials_file=${HOME}/.ssh/gce.json
|
||||
-e cloud_image=${CLOUD_IMAGE}
|
||||
-e inventory_path=${PWD}/inventory/inventory.ini
|
||||
-e cloud_region=${CLOUD_REGION}
|
||||
|
||||
# Create cluster
|
||||
- >
|
||||
ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
|
||||
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
|
||||
--private-key=${HOME}/.ssh/id_rsa
|
||||
-e bootstrap_os=${BOOTSTRAP_OS}
|
||||
-e ansible_python_interpreter=${PYPATH}
|
||||
-e download_run_once=true
|
||||
-e download_localhost=true
|
||||
-e deploy_netchecker=true
|
||||
-e resolvconf_mode=${RESOLVCONF_MODE}
|
||||
-e local_release_dir=${PWD}/downloads
|
||||
-e etcd_deployment_type=${ETCD_DEPLOYMENT}
|
||||
-e kubelet_deployment_type=${KUBELET_DEPLOYMENT}
|
||||
cluster.yml
|
||||
|
||||
|
||||
# Tests Cases
|
||||
## Test Master API
|
||||
- ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root tests/testcases/010_check-apiserver.yml $LOG_LEVEL
|
||||
|
||||
## Ping between 2 pods
|
||||
- ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root tests/testcases/030_check-network.yml $LOG_LEVEL
|
||||
|
||||
## Advanced DNS checks
|
||||
- ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root tests/testcases/040_check-network-adv.yml $LOG_LEVEL
|
||||
|
||||
after_script:
|
||||
- >
|
||||
ansible-playbook -i inventory/inventory.ini tests/cloud_playbooks/delete-gce.yml -c local $LOG_LEVEL
|
||||
-e mode=${CLUSTER_MODE}
|
||||
-e test_id=${TEST_ID}
|
||||
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
|
||||
-e gce_project_id=${GCE_PROJECT_ID}
|
||||
-e gce_service_account_email=${GCE_ACCOUNT}
|
||||
-e gce_credentials_file=${HOME}/.ssh/gce.json
|
||||
-e cloud_image=${CLOUD_IMAGE}
|
||||
-e inventory_path=${PWD}/inventory/inventory.ini
|
||||
-e cloud_region=${CLOUD_REGION}
|
||||
|
||||
# Test matrix. Leave the comments for markup scripts.
|
||||
.coreos_calico_sep_variables: &coreos_calico_sep_variables
|
||||
# stage: deploy-gce-part1
|
||||
KUBE_NETWORK_PLUGIN: calico
|
||||
CLOUD_IMAGE: coreos-stable
|
||||
CLOUD_REGION: us-west1-b
|
||||
CLUSTER_MODE: separated
|
||||
BOOTSTRAP_OS: coreos
|
||||
RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
|
||||
|
||||
.debian8_canal_ha_variables: &debian8_canal_ha_variables
|
||||
# stage: deploy-gce-part1
|
||||
KUBE_NETWORK_PLUGIN: canal
|
||||
CLOUD_IMAGE: debian-8-kubespray
|
||||
CLOUD_REGION: us-east1-b
|
||||
CLUSTER_MODE: ha
|
||||
|
||||
.rhel7_weave_variables: &rhel7_weave_variables
|
||||
# stage: deploy-gce-part1
|
||||
KUBE_NETWORK_PLUGIN: weave
|
||||
CLOUD_IMAGE: rhel-7
|
||||
CLOUD_REGION: europe-west1-b
|
||||
CLUSTER_MODE: default
|
||||
|
||||
.centos7_flannel_variables: &centos7_flannel_variables
|
||||
# stage: deploy-gce-part2
|
||||
KUBE_NETWORK_PLUGIN: flannel
|
||||
CLOUD_IMAGE: centos-7
|
||||
CLOUD_REGION: us-west1-a
|
||||
CLUSTER_MODE: default
|
||||
|
||||
.debian8_calico_variables: &debian8_calico_variables
|
||||
# stage: deploy-gce-part2
|
||||
KUBE_NETWORK_PLUGIN: calico
|
||||
CLOUD_IMAGE: debian-8-kubespray
|
||||
CLOUD_REGION: us-central1-b
|
||||
CLUSTER_MODE: default
|
||||
|
||||
.coreos_canal_variables: &coreos_canal_variables
|
||||
# stage: deploy-gce-part2
|
||||
KUBE_NETWORK_PLUGIN: canal
|
||||
CLOUD_IMAGE: coreos-stable
|
||||
CLOUD_REGION: us-east1-b
|
||||
CLUSTER_MODE: default
|
||||
BOOTSTRAP_OS: coreos
|
||||
RESOLVCONF_MODE: host_resolvconf # This is required as long as the CoreOS stable channel uses docker < 1.12
|
||||
|
||||
.rhel7_canal_sep_variables: &rhel7_canal_sep_variables
|
||||
# stage: deploy-gce-special
|
||||
KUBE_NETWORK_PLUGIN: canal
|
||||
CLOUD_IMAGE: rhel-7
|
||||
CLOUD_REGION: us-east1-b
|
||||
CLUSTER_MODE: separated
|
||||
|
||||
.ubuntu_weave_sep_variables: &ubuntu_weave_sep_variables
|
||||
# stage: deploy-gce-special
|
||||
KUBE_NETWORK_PLUGIN: weave
|
||||
CLOUD_IMAGE: ubuntu-1604-xenial
|
||||
CLOUD_REGION: us-central1-b
|
||||
CLUSTER_MODE: separated
|
||||
|
||||
.centos7_calico_ha_variables: &centos7_calico_ha_variables
|
||||
# stage: deploy-gce-special
|
||||
KUBE_NETWORK_PLUGIN: calico
|
||||
CLOUD_IMAGE: centos-7
|
||||
CLOUD_REGION: europe-west1-b
|
||||
CLUSTER_MODE: ha
|
||||
|
||||
.coreos_alpha_weave_ha_variables: &coreos_alpha_weave_ha_variables
|
||||
# stage: deploy-gce-special
|
||||
KUBE_NETWORK_PLUGIN: weave
|
||||
CLOUD_IMAGE: coreos-alpha
|
||||
CLOUD_REGION: us-west1-a
|
||||
CLUSTER_MODE: ha
|
||||
BOOTSTRAP_OS: coreos
|
||||
|
||||
.ubuntu_rkt_sep_variables: &ubuntu_rkt_sep_variables
|
||||
# stage: deploy-gce-part1
|
||||
KUBE_NETWORK_PLUGIN: flannel
|
||||
CLOUD_IMAGE: ubuntu-1604-xenial
|
||||
CLOUD_REGION: us-central1-b
|
||||
CLUSTER_MODE: separated
|
||||
ETCD_DEPLOYMENT: rkt
|
||||
KUBELET_DEPLOYMENT: rkt
|
||||
|
||||
# Builds for PRs only (premoderated by unit-tests step) and triggers (auto)
|
||||
coreos-calico-sep:
|
||||
stage: deploy-gce-part1
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *coreos_calico_sep_variables
|
||||
when: on_success
|
||||
except: ['triggers']
|
||||
only: [/^pr-.*$/]
|
||||
|
||||
coreos-calico-sep-triggers:
|
||||
stage: deploy-gce-part1
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *coreos_calico_sep_variables
|
||||
when: on_success
|
||||
only: ['triggers']
|
||||
|
||||
centos7-flannel:
|
||||
stage: deploy-gce-part2
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *centos7_flannel_variables
|
||||
when: on_success
|
||||
except: ['triggers']
|
||||
only: [/^pr-.*$/]
|
||||
|
||||
centos7-flannel-triggers:
|
||||
stage: deploy-gce-part1
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *centos7_flannel_variables
|
||||
when: on_success
|
||||
only: ['triggers']
|
||||
|
||||
ubuntu-weave-sep:
|
||||
stage: deploy-gce-special
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *ubuntu_weave_sep_variables
|
||||
when: on_success
|
||||
except: ['triggers']
|
||||
only: [/^pr-.*$/]
|
||||
|
||||
ubuntu-weave-sep-triggers:
|
||||
stage: deploy-gce-part1
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *ubuntu_weave_sep_variables
|
||||
when: on_success
|
||||
only: ['triggers']
|
||||
|
||||
# More builds for PRs/merges (manual) and triggers (auto)
|
||||
debian8-canal-ha:
|
||||
stage: deploy-gce-part1
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *debian8_canal_ha_variables
|
||||
when: manual
|
||||
except: ['triggers']
|
||||
only: ['master', /^pr-.*$/]
|
||||
|
||||
debian8-canal-ha-triggers:
|
||||
stage: deploy-gce-part1
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *debian8_canal_ha_variables
|
||||
when: on_success
|
||||
only: ['triggers']
|
||||
|
||||
rhel7-weave:
|
||||
stage: deploy-gce-part1
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *rhel7_weave_variables
|
||||
when: manual
|
||||
except: ['triggers']
|
||||
only: ['master', /^pr-.*$/]
|
||||
|
||||
rhel7-weave-triggers:
|
||||
stage: deploy-gce-part1
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *rhel7_weave_variables
|
||||
when: on_success
|
||||
only: ['triggers']
|
||||
|
||||
debian8-calico:
|
||||
stage: deploy-gce-part2
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *debian8_calico_variables
|
||||
when: manual
|
||||
except: ['triggers']
|
||||
only: ['master', /^pr-.*$/]
|
||||
|
||||
debian8-calico-triggers:
|
||||
stage: deploy-gce-part1
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *debian8_calico_variables
|
||||
when: on_success
|
||||
only: ['triggers']
|
||||
|
||||
coreos-canal:
|
||||
stage: deploy-gce-part2
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *coreos_canal_variables
|
||||
when: manual
|
||||
except: ['triggers']
|
||||
only: ['master', /^pr-.*$/]
|
||||
|
||||
coreos-canal-triggers:
|
||||
stage: deploy-gce-part1
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *coreos_canal_variables
|
||||
when: on_success
|
||||
only: ['triggers']
|
||||
|
||||
rhel7-canal-sep:
|
||||
stage: deploy-gce-special
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *rhel7_canal_sep_variables
|
||||
when: manual
|
||||
except: ['triggers']
|
||||
only: ['master', /^pr-.*$/,]
|
||||
|
||||
rhel7-canal-sep-triggers:
|
||||
stage: deploy-gce-part1
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *rhel7_canal_sep_variables
|
||||
when: on_success
|
||||
only: ['triggers']
|
||||
|
||||
centos7-calico-ha:
|
||||
stage: deploy-gce-special
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *centos7_calico_ha_variables
|
||||
when: manual
|
||||
except: ['triggers']
|
||||
only: ['master', /^pr-.*$/]
|
||||
|
||||
centos7-calico-ha-triggers:
|
||||
stage: deploy-gce-part1
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *centos7_calico_ha_variables
|
||||
when: on_success
|
||||
only: ['triggers']
|
||||
|
||||
# no triggers yet https://github.com/kubernetes-incubator/kargo/issues/613
|
||||
coreos-alpha-weave-ha:
|
||||
stage: deploy-gce-special
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *coreos_alpha_weave_ha_variables
|
||||
when: manual
|
||||
except: ['triggers']
|
||||
only: ['master', /^pr-.*$/]
|
||||
|
||||
ubuntu-rkt-sep:
|
||||
stage: deploy-gce-part1
|
||||
<<: *job
|
||||
<<: *gce
|
||||
variables:
|
||||
<<: *gce_variables
|
||||
<<: *ubuntu_rkt_sep_variables
|
||||
when: manual
|
||||
except: ['triggers']
|
||||
only: ['master', /^pr-.*$/]
|
||||
|
||||
# Premoderated with manual actions
|
||||
syntax-check:
|
||||
<<: *job
|
||||
stage: unit-tests
|
||||
before_script:
|
||||
- apt-get -y install jq
|
||||
script:
|
||||
- ansible-playbook -i inventory/local-tests.cfg -u root -e ansible_ssh_user=root -b --become-user=root cluster.yml -vvv --syntax-check
|
||||
- /bin/sh scripts/premoderator.sh
|
||||
except: ['triggers', 'master']
|
||||
|
||||
tox-inventory-builder:
|
||||
stage: unit-tests
|
||||
<<: *job
|
||||
script:
|
||||
- pip install tox
|
||||
- cd contrib/inventory_builder && tox
|
||||
when: manual
|
||||
except: ['triggers', 'master']
|
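The CI file above composes its many near-identical GCE jobs from YAML anchors (`&name`) and merge keys (`<<: *name`): hidden `.`-prefixed keys hold the shared runner settings and variable sets, and each concrete job merges them and overrides only what differs. A trimmed sketch of that pattern, using values copied from the file; the real jobs also merge the larger `.gce` block carrying cache, create/test/teardown scripts:

```yaml
# Hidden templates: keys starting with '.' are ignored by GitLab CI.
.job: &job
  tags:
    - kubernetes
    - docker
  image: quay.io/ant31/kargo:master

.gce_variables: &gce_variables
  GCE_USER: travis
  CONTAINER_ENGINE: docker

.coreos_calico_sep_variables: &coreos_calico_sep_variables
  KUBE_NETWORK_PLUGIN: calico
  CLOUD_IMAGE: coreos-stable
  CLUSTER_MODE: separated

# A concrete job merges the templates; keys set directly on the job win.
coreos-calico-sep:
  stage: deploy-gce-part1
  <<: *job
  variables:
    <<: *gce_variables
    <<: *coreos_calico_sep_variables
  when: on_success
  except: ['triggers']
  only: [/^pr-.*$/]
```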
.gitmodules (vendored, deleted, 49 lines)
@@ -1,49 +0,0 @@

[submodule "roles/apps/k8s-kube-ui"]
    path = roles/apps/k8s-kube-ui
    url = https://github.com/ansibl8s/k8s-kube-ui.git
    branch = v1.0
[submodule "roles/apps/k8s-kubedns"]
    path = roles/apps/k8s-kubedns
    url = https://github.com/ansibl8s/k8s-kubedns.git
    branch = v1.0
[submodule "roles/apps/k8s-common"]
    path = roles/apps/k8s-common
    url = https://github.com/ansibl8s/k8s-common.git
    branch = v1.0
[submodule "roles/apps/k8s-redis"]
    path = roles/apps/k8s-redis
    url = https://github.com/ansibl8s/k8s-redis.git
    branch = v1.0
[submodule "roles/apps/k8s-elasticsearch"]
    path = roles/apps/k8s-elasticsearch
    url = https://github.com/ansibl8s/k8s-elasticsearch.git
[submodule "roles/apps/k8s-fabric8"]
    path = roles/apps/k8s-fabric8
    url = https://github.com/ansibl8s/k8s-fabric8.git
    branch = v1.0
[submodule "roles/apps/k8s-memcached"]
    path = roles/apps/k8s-memcached
    url = https://github.com/ansibl8s/k8s-memcached.git
    branch = v1.0
[submodule "roles/apps/k8s-postgres"]
    path = roles/apps/k8s-postgres
    url = https://github.com/ansibl8s/k8s-postgres.git
    branch = v1.0
[submodule "roles/apps/k8s-kubedash"]
    path = roles/apps/k8s-kubedash
    url = https://github.com/ansibl8s/k8s-kubedash.git
[submodule "roles/apps/k8s-heapster"]
    path = roles/apps/k8s-heapster
    url = https://github.com/ansibl8s/k8s-heapster.git
[submodule "roles/apps/k8s-influxdb"]
    path = roles/apps/k8s-influxdb
    url = https://github.com/ansibl8s/k8s-influxdb.git
[submodule "roles/apps/k8s-kube-logstash"]
    path = roles/apps/k8s-kube-logstash
    url = https://github.com/ansibl8s/k8s-kube-logstash.git
[submodule "roles/apps/k8s-etcd"]
    path = roles/apps/k8s-etcd
    url = https://github.com/ansibl8s/k8s-etcd.git
[submodule "roles/apps/k8s-rabbitmq"]
    path = roles/apps/k8s-rabbitmq
    url = https://github.com/ansibl8s/k8s-rabbitmq.git
.travis.yml (deleted, 41 lines)
@@ -1,41 +0,0 @@

sudo: required
dist: trusty
language: python
python: "2.7"

addons:
  hosts:
    - node1

env:
  - SITE=cluster.yml

before_install:
  - sudo apt-get update -qq

install:
  # Install Ansible.
  - sudo -H pip install ansible
  - sudo -H pip install netaddr

cache:
  directories:
    - $HOME/releases
    - $HOME/.cache/pip

before_script:
  - export PATH=$PATH:/usr/local/bin

script:
  # Check the role/playbook's syntax.
  - "sudo -H ansible-playbook -i inventory/local-tests.cfg $SITE --syntax-check"

  # Run the role/playbook with ansible-playbook.
  - "sudo -H ansible-playbook -i inventory/local-tests.cfg $SITE --connection=local"

  # Run the role/playbook again, checking to make sure it's idempotent.
  - >
    sudo -H ansible-playbook -i inventory/local-tests.cfg $SITE --connection=local
    | tee /dev/stderr | grep -q 'changed=0.*failed=0'
    && (echo 'Idempotence test: pass' && exit 0)
    || (echo 'Idempotence test: fail' && exit 1)
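The final script entry in the removed .travis.yml is a standard Ansible idempotence check: run the playbook a second time and fail unless every host reports `changed=0`. The same check can be reproduced by hand with the inventory and playbook named in that file; a minimal sketch:

```sh
# First pass applies changes; the second pass should be a no-op.
sudo -H ansible-playbook -i inventory/local-tests.cfg cluster.yml --connection=local
sudo -H ansible-playbook -i inventory/local-tests.cfg cluster.yml --connection=local \
  | tee /dev/stderr | grep -q 'changed=0.*failed=0' \
  && echo 'Idempotence test: pass' \
  || echo 'Idempotence test: fail'
```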
.travis.yml.bak (new file, 161 lines)
@@ -0,0 +1,161 @@
|
||||
sudo: required
|
||||
|
||||
services:
|
||||
- docker
|
||||
|
||||
git:
|
||||
depth: 5
|
||||
|
||||
env:
|
||||
global:
|
||||
GCE_USER=travis
|
||||
SSH_USER=$GCE_USER
|
||||
TEST_ID=$TRAVIS_JOB_NUMBER
|
||||
CONTAINER_ENGINE=docker
|
||||
PRIVATE_KEY=$GCE_PRIVATE_KEY
|
||||
GS_ACCESS_KEY_ID=$GS_KEY
|
||||
GS_SECRET_ACCESS_KEY=$GS_SECRET
|
||||
ANSIBLE_KEEP_REMOTE_FILES=1
|
||||
CLUSTER_MODE=default
|
||||
BOOTSTRAP_OS=none
|
||||
matrix:
|
||||
# Debian Jessie
|
||||
- >-
|
||||
KUBE_NETWORK_PLUGIN=canal
|
||||
CLOUD_IMAGE=debian-8-kubespray
|
||||
CLOUD_REGION=asia-east1-a
|
||||
CLUSTER_MODE=ha
|
||||
- >-
|
||||
KUBE_NETWORK_PLUGIN=calico
|
||||
CLOUD_IMAGE=debian-8-kubespray
|
||||
CLOUD_REGION=europe-west1-c
|
||||
CLUSTER_MODE=default
|
||||
|
||||
# Centos 7
|
||||
- >-
|
||||
KUBE_NETWORK_PLUGIN=flannel
|
||||
CLOUD_IMAGE=centos-7
|
||||
CLOUD_REGION=asia-northeast1-c
|
||||
CLUSTER_MODE=default
|
||||
- >-
|
||||
KUBE_NETWORK_PLUGIN=calico
|
||||
CLOUD_IMAGE=centos-7
|
||||
CLOUD_REGION=us-central1-b
|
||||
CLUSTER_MODE=ha
|
||||
|
||||
# Redhat 7
|
||||
- >-
|
||||
KUBE_NETWORK_PLUGIN=weave
|
||||
CLOUD_IMAGE=rhel-7
|
||||
CLOUD_REGION=us-east1-c
|
||||
CLUSTER_MODE=default
|
||||
|
||||
# CoreOS stable
|
||||
#- >-
|
||||
# KUBE_NETWORK_PLUGIN=weave
|
||||
# CLOUD_IMAGE=coreos-stable
|
||||
# CLOUD_REGION=europe-west1-b
|
||||
# CLUSTER_MODE=ha
|
||||
# BOOTSTRAP_OS=coreos
|
||||
- >-
|
||||
KUBE_NETWORK_PLUGIN=canal
|
||||
CLOUD_IMAGE=coreos-stable
|
||||
CLOUD_REGION=us-west1-b
|
||||
CLUSTER_MODE=default
|
||||
BOOTSTRAP_OS=coreos
|
||||
|
||||
# Extra cases for separated roles
|
||||
- >-
|
||||
KUBE_NETWORK_PLUGIN=canal
|
||||
CLOUD_IMAGE=rhel-7
|
||||
CLOUD_REGION=asia-northeast1-b
|
||||
CLUSTER_MODE=separate
|
||||
- >-
|
||||
KUBE_NETWORK_PLUGIN=weave
|
||||
CLOUD_IMAGE=ubuntu-1604-xenial
|
||||
CLOUD_REGION=europe-west1-d
|
||||
CLUSTER_MODE=separate
|
||||
- >-
|
||||
KUBE_NETWORK_PLUGIN=calico
|
||||
CLOUD_IMAGE=coreos-stable
|
||||
CLOUD_REGION=us-central1-f
|
||||
CLUSTER_MODE=separate
|
||||
BOOTSTRAP_OS=coreos
|
||||
|
||||
matrix:
|
||||
allow_failures:
|
||||
- env: KUBE_NETWORK_PLUGIN=weave CLOUD_IMAGE=coreos-stable CLOUD_REGION=europe-west1-b CLUSTER_MODE=ha BOOTSTRAP_OS=coreos
|
||||
|
||||
before_install:
|
||||
# Install Ansible.
|
||||
- pip install --user ansible
|
||||
- pip install --user netaddr
|
||||
# W/A https://github.com/ansible/ansible-modules-core/issues/5196#issuecomment-253766186
|
||||
- pip install --user apache-libcloud==0.20.1
|
||||
- pip install --user boto==2.9.0 -U
|
||||
# Load cached docker images
|
||||
- if [ -d /var/tmp/releases ]; then find /var/tmp/releases -type f -name "*.tar" | xargs -I {} sh -c "zcat {} | docker load"; fi
|
||||
|
||||
cache:
|
||||
- directories:
|
||||
- $HOME/.cache/pip
|
||||
- $HOME/.local
|
||||
- /var/tmp/releases
|
||||
|
||||
before_script:
|
||||
- echo "RUN $TRAVIS_JOB_NUMBER $KUBE_NETWORK_PLUGIN $CONTAINER_ENGINE "
|
||||
- mkdir -p $HOME/.ssh
|
||||
- echo $PRIVATE_KEY | base64 -d > $HOME/.ssh/id_rsa
|
||||
- echo $GCE_PEM_FILE | base64 -d > $HOME/.ssh/gce
|
||||
- chmod 400 $HOME/.ssh/id_rsa
|
||||
- chmod 755 $HOME/.local/bin/ansible-playbook
|
||||
- $HOME/.local/bin/ansible-playbook --version
|
||||
- cp tests/ansible.cfg .
|
||||
- export PYPATH=$([ $BOOTSTRAP_OS = none ] && echo /usr/bin/python || echo /opt/bin/python)
|
||||
# - "echo $HOME/.local/bin/ansible-playbook -i inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root -e '{\"cloud_provider\": true}' $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN} setup-kubernetes/cluster.yml"
|
||||
|
||||
script:
|
||||
- >
|
||||
$HOME/.local/bin/ansible-playbook tests/cloud_playbooks/create-gce.yml -i tests/local_inventory/hosts.cfg -c local $LOG_LEVEL
|
||||
-e mode=${CLUSTER_MODE}
|
||||
-e test_id=${TEST_ID}
|
||||
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
|
||||
-e gce_project_id=${GCE_PROJECT_ID}
|
||||
-e gce_service_account_email=${GCE_ACCOUNT}
|
||||
-e gce_pem_file=${HOME}/.ssh/gce
|
||||
-e cloud_image=${CLOUD_IMAGE}
|
||||
-e inventory_path=${PWD}/inventory/inventory.ini
|
||||
-e cloud_region=${CLOUD_REGION}
|
||||
|
||||
# Create cluster with netchecker app deployed
|
||||
- >
|
||||
$HOME/.local/bin/ansible-playbook -i inventory/inventory.ini -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS
|
||||
-b --become-user=root -e cloud_provider=gce $LOG_LEVEL -e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
|
||||
-e bootstrap_os=${BOOTSTRAP_OS}
|
||||
-e ansible_python_interpreter=${PYPATH}
|
||||
-e download_run_once=true
|
||||
-e download_localhost=true
|
||||
-e local_release_dir=/var/tmp/releases
|
||||
-e deploy_netchecker=true
|
||||
cluster.yml
|
||||
|
||||
# Tests Cases
|
||||
## Test Master API
|
||||
- $HOME/.local/bin/ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root tests/testcases/010_check-apiserver.yml $LOG_LEVEL
|
||||
## Ping between 2 pods
|
||||
- $HOME/.local/bin/ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root tests/testcases/030_check-network.yml $LOG_LEVEL
|
||||
## Advanced DNS checks
|
||||
- $HOME/.local/bin/ansible-playbook -i inventory/inventory.ini -e ansible_python_interpreter=${PYPATH} -u $SSH_USER -e ansible_ssh_user=$SSH_USER $SSH_ARGS -b --become-user=root tests/testcases/040_check-network-adv.yml $LOG_LEVEL
|
||||
|
||||
after_script:
|
||||
- >
|
||||
$HOME/.local/bin/ansible-playbook -i inventory/inventory.ini tests/cloud_playbooks/delete-gce.yml -c local $LOG_LEVEL
|
||||
-e mode=${CLUSTER_MODE}
|
||||
-e test_id=${TEST_ID}
|
||||
-e kube_network_plugin=${KUBE_NETWORK_PLUGIN}
|
||||
-e gce_project_id=${GCE_PROJECT_ID}
|
||||
-e gce_service_account_email=${GCE_ACCOUNT}
|
||||
-e gce_pem_file=${HOME}/.ssh/gce
|
||||
-e cloud_image=${CLOUD_IMAGE}
|
||||
-e inventory_path=${PWD}/inventory/inventory.ini
|
||||
-e cloud_region=${CLOUD_REGION}
|
CONTRIBUTING.md (new file, 10 lines)
@@ -0,0 +1,10 @@

# Contributing guidelines

## How to become a contributor and submit your own code

### Contributing A Patch

1. Submit an issue describing your proposed change to the repo in question.
2. The [repo owners](OWNERS) will respond to your issue promptly.
3. Fork the desired repo, develop and test your code changes.
4. Submit a pull request.
LICENSE (new file, 201 lines)
@@ -0,0 +1,201 @@
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "{}"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright 2016 Kubespray
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
OWNERS (new file, 9 lines)
@@ -0,0 +1,9 @@

# See the OWNERS file documentation:
# https://github.com/kubernetes/kubernetes/blob/master/docs/devel/owners.md

owners:
  - Smana
  - ant31
  - bogdando
  - mattymo
  - rsmitty
326
README.md
326
README.md
@ -1,279 +1,101 @@
|
||||
[](https://travis-ci.org/ansibl8s/setup-kubernetes)
|
||||
kubernetes-ansible
|
||||
========
|
||||

|
||||
|
||||
Install and configure a Multi-Master/HA kubernetes cluster including network plugin.
|
||||
##Deploy a production ready kubernetes cluster
|
||||
|
||||
### Requirements
|
||||
Tested on **Debian Wheezy/Jessie** and **Ubuntu** (14.10, 15.04, 15.10).
|
||||
Should work on **RedHat/Fedora/Centos** platforms (to be tested)
|
||||
* The target servers must have access to the Internet in order to pull docker imaqes.
|
||||
* The firewalls are not managed, you'll need to implement your own rules the way you used to.
|
||||
* Ansible v1.9.x and python-netaddr
|
||||
If you have questions, join us on the [kubernetes slack](https://slack.k8s.io), channel **#kargo**.
|
||||
|
||||
### Components
|
||||
* [kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.1.3
|
||||
* [etcd](https://github.com/coreos/etcd/releases) v2.2.2
|
||||
* [calicoctl](https://github.com/projectcalico/calico-docker/releases) v0.13.0
|
||||
* [flanneld](https://github.com/coreos/flannel/releases) v0.5.5
|
||||
* [docker](https://www.docker.com/) v1.9.1
|
||||
- Can be deployed on **AWS, GCE, Azure, OpenStack or Baremetal**
|
||||
- **High available** cluster
|
||||
- **Composable** (Choice of the network plugin for instance)
|
||||
- Support most popular **Linux distributions**
|
||||
- **Continuous integration tests**
|
||||
|
||||
Quickstart
|
||||
-------------------------
|
||||
The following steps will quickly setup a kubernetes cluster with default configuration.
|
||||
These defaults are good for tests purposes.
|
||||
|
||||
Edit the inventory according to the number of servers
|
||||
```
|
||||
[downloader]
|
||||
localhost ansible_connection=local ansible_python_interpreter=python2
|
||||
To deploy the cluster you can use:
|
||||
|
||||
[kube-master]
|
||||
10.115.99.31
|
||||
[**kargo-cli**](https://github.com/kubespray/kargo-cli) <br>
|
||||
**Ansible** usual commands and [**inventory builder**](https://github.com/kubernetes-incubator/kargo/blob/master/contrib/inventory_builder/inventory.py) <br>
|
||||
**vagrant** by simply running `vagrant up` (for test purposes) <br>
|
||||
|
||||
[etcd]
|
||||
10.115.99.31
|
||||
10.115.99.32
|
||||
10.115.99.33
|
||||
|
||||
[kube-node]
|
||||
10.115.99.32
|
||||
10.115.99.33
|
||||
* [Requirements](#requirements)
|
||||
* [Kargo vs ...](docs/comparisons.md)
|
||||
* [Getting started](docs/getting-started.md)
|
||||
* [Ansible inventory and tags](docs/ansible.md)
|
||||
* [Deployment data variables](docs/vars.md)
|
||||
* [DNS stack](docs/dns-stack.md)
|
||||
* [HA mode](docs/ha-mode.md)
|
||||
* [Network plugins](#network-plugins)
|
||||
* [Vagrant install](docs/vagrant.md)
|
||||
* [CoreOS bootstrap](docs/coreos.md)
|
||||
* [Downloaded artifacts](docs/downloads.md)
|
||||
* [Cloud providers](docs/cloud.md)
|
||||
* [OpenStack](docs/openstack.md)
|
||||
* [AWS](docs/aws.md)
|
||||
* [Azure](docs/azure.md)
|
||||
* [Large deployments](docs/large-deployments.md)
|
||||
* [Upgrades basics](docs/upgrades.md)
|
||||
* [Roadmap](docs/roadmap.md)
|
||||
|
||||
[k8s-cluster:children]
|
||||
kube-node
|
||||
kube-master
|
||||
```
|
||||
Supported Linux distributions
|
||||
===============
|
||||
|
||||
Run the playbook
|
||||
```
|
||||
ansible-playbook -i inventory/inventory.cfg cluster.yml -u root
|
||||
```
|
||||
* **Container Linux by CoreOS**
|
||||
* **Debian** Jessie
|
||||
* **Ubuntu** 16.04
|
||||
* **CentOS/RHEL** 7
|
||||
|
||||
You can jump directly to "*Available apps, installation procedure*"
|
||||
Note: Upstart/SysV init based OS types are not supported.
|
||||
|
||||
Versions of supported components
|
||||
--------------------------------
|
||||
|
||||
Ansible
|
||||
-------------------------
|
||||
### Variables
|
||||
The main variables to change are located in the file ```inventory/group_vars/all.yml```.
|
||||
[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.5.1 <br>
|
||||
[etcd](https://github.com/coreos/etcd/releases) v3.0.6 <br>
|
||||
[flanneld](https://github.com/coreos/flannel/releases) v0.6.2 <br>
|
||||
[calicoctl](https://github.com/projectcalico/calico-docker/releases) v0.23.0 <br>
|
||||
[canal](https://github.com/projectcalico/canal) (given calico/flannel versions) <br>
|
||||
[weave](http://weave.works/) v1.6.1 <br>
|
||||
[docker](https://www.docker.com/) v1.12.5 <br>
|
||||
[rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 <br>
|
||||
|
||||
### Inventory
|
||||
Below is an example of an inventory.
|
||||
Note: the bgp vars local_as and peers are not mandatory if the var **'peer_with_router'** is set to false.
|
||||
By default this variable is set to false and therefore all the nodes are configured in **'node-mesh'** mode.
|
||||
In node-mesh mode each node peers with all the other nodes in order to exchange routes.
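A hedged sketch only (the variable name comes from the note above; the file location and any router peer definitions are assumptions you should adapt to your own inventory):
```
# group_vars sketch -- illustrative values, not part of this repository
peer_with_router: true   # defaults to false, i.e. node-mesh mode
```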
|
||||
Note: rkt support as docker alternative is limited to control plane (etcd and
|
||||
kubelet). Docker is still used for Kubernetes cluster workloads and network
|
||||
plugins' related OS services. Also note, only one of the supported network
|
||||
plugins can be deployed for a given single cluster.
|
||||
|
||||
```
|
||||
Requirements
|
||||
--------------
|
||||
|
||||
[downloader]
|
||||
localhost ansible_connection=local ansible_python_interpreter=python2
|
||||
* The target servers must have **access to the Internet** in order to pull docker images.
|
||||
* The **firewalls are not managed**; you'll need to implement your own rules the way you used to.
|
||||
In order to avoid any issues during deployment you should disable your firewall.
|
||||
* The target servers are configured to allow **IPv4 forwarding**.
|
||||
* **Copy your ssh keys** to all the servers that are part of your inventory.
|
||||
* **Ansible v2.2 (or newer) and python-netaddr**
|
||||
|
||||
[kube-master]
|
||||
node1 ansible_ssh_host=10.99.0.26
|
||||
node2 ansible_ssh_host=10.99.0.27
|
||||
|
||||
[etcd]
|
||||
node1 ansible_ssh_host=10.99.0.26
|
||||
node2 ansible_ssh_host=10.99.0.27
|
||||
node3 ansible_ssh_host=10.99.0.4
|
||||
## Network plugins
|
||||
You can choose between 4 network plugins. (default: `flannel` with vxlan backend)
|
||||
|
||||
[kube-node]
|
||||
node2 ansible_ssh_host=10.99.0.27
|
||||
node3 ansible_ssh_host=10.99.0.4
|
||||
node4 ansible_ssh_host=10.99.0.5
|
||||
node5 ansible_ssh_host=10.99.0.36
|
||||
node6 ansible_ssh_host=10.99.0.37
|
||||
* [**flannel**](docs/flannel.md): gre/vxlan (layer 2) networking.
|
||||
|
||||
[paris]
|
||||
node1 ansible_ssh_host=10.99.0.26
|
||||
node3 ansible_ssh_host=10.99.0.4 local_as=xxxxxxxx
|
||||
node4 ansible_ssh_host=10.99.0.5 local_as=xxxxxxxx
|
||||
* [**calico**](docs/calico.md): bgp (layer 3) networking.
|
||||
|
||||
[new-york]
|
||||
node2 ansible_ssh_host=10.99.0.27
|
||||
node5 ansible_ssh_host=10.99.0.36 local_as=xxxxxxxx
|
||||
node6 ansible_ssh_host=10.99.0.37 local_as=xxxxxxxx
|
||||
* [**canal**](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
|
||||
|
||||
[k8s-cluster:children]
|
||||
kube-node
|
||||
kube-master
|
||||
```
|
||||
* **weave**: Weave is a lightweight container overlay network that doesn't require an external K/V database cluster. <br>
|
||||
(Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html)).
|
||||
|
||||
### Playbook
|
||||
```
|
||||
---
|
||||
- hosts: downloader
|
||||
sudo: no
|
||||
roles:
|
||||
- { role: download, tags: download }
|
||||
The choice is defined with the variable `kube_network_plugin`. There is also an
|
||||
option to leverage built-in cloud provider networking instead.
|
||||
See also [Network checker](docs/netcheck.md).
|
||||
|
||||
- hosts: k8s-cluster
|
||||
roles:
|
||||
- { role: kubernetes/preinstall, tags: preinstall }
|
||||
- { role: docker, tags: docker }
|
||||
- { role: kubernetes/node, tags: node }
|
||||
- { role: etcd, tags: etcd }
|
||||
- { role: dnsmasq, tags: dnsmasq }
|
||||
- { role: network_plugin, tags: ['calico', 'flannel', 'network'] }
|
||||
## CI Tests
|
||||
|
||||
- hosts: kube-master
|
||||
roles:
|
||||
- { role: kubernetes/master, tags: master }
|
||||

|
||||
|
||||
```
|
||||
[](https://gitlab.com/kargo-ci/kubernetes-incubator__kargo/pipelines) </br>
|
||||
|
||||
### Run
|
||||
It is possible to define variables for different environments.
|
||||
For instance, in order to deploy the cluster on the 'dev' environment, run the following command.
|
||||
```
|
||||
ansible-playbook -i inventory/dev/inventory.cfg cluster.yml -u root
|
||||
```
|
||||
|
||||
Kubernetes
|
||||
-------------------------
|
||||
### Multi master notes
|
||||
* You can choose where to install the master components. If you want your master node to act both as master (api,scheduler,controller) and node (e.g. accept workloads, create pods ...),
|
||||
the server address has to be present in both groups 'kube-master' and 'kube-node'.
|
||||
|
||||
* Almost all kubernetes components run in pods, except *kubelet*. These pods are managed by kubelet, which ensures they're always running
|
||||
|
||||
* For safety reasons, you should have at least two master nodes and three etcd servers
|
||||
|
||||
* Kube-proxy doesn't support multiple apiservers on startup ([Issue 18174](https://github.com/kubernetes/kubernetes/issues/18174)). An external loadbalancer needs to be configured.
|
||||
In order to do so, two variables have to be set: '**loadbalancer_apiserver**' and '**apiserver_loadbalancer_domain_name**'.
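A minimal sketch of what these settings might look like in your group vars (the address, domain name, and exact structure below are placeholders; check docs/ha-mode.md for the authoritative format):
```
# illustrative values only
apiserver_loadbalancer_domain_name: "lb-apiserver.example.local"
loadbalancer_apiserver:
  address: 10.99.0.100   # VIP of your external loadbalancer
  port: 443
```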
|
||||
|
||||
|
||||
### Network Overlay
|
||||
You can choose between 2 network plugins; exactly one must be chosen.
|
||||
|
||||
* **flannel**: gre/vxlan (layer 2) networking. ([official docs](https://github.com/coreos/flannel))
|
||||
|
||||
* **calico**: bgp (layer 3) networking. ([official docs](http://docs.projectcalico.org/en/0.13/))
|
||||
|
||||
The choice is defined with the variable '**kube_network_plugin**'.
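For illustration, the selection might look like this in ```inventory/group_vars/all.yml``` (the file path is an assumption; the plugin names are the ones listed in this README):
```
kube_network_plugin: calico   # or flannel, canal, weave
```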
|
||||
|
||||
### Expose a service
|
||||
There are several loadbalancing solutions.
|
||||
The ones I found suitable for kubernetes are [Vulcand](http://vulcand.io/) and [Haproxy](http://www.haproxy.org/)
|
||||
|
||||
My cluster works with haproxy, and kubernetes services are configured with the loadbalancing type '**nodePort**'.
|
||||
e.g. each node opens the same tcp port and forwards the traffic to the target pod wherever it is located.
|
||||
|
||||
Then Haproxy can be configured to query the kubernetes api in order to loadbalance on the proper tcp port on the nodes.
|
||||
|
||||
Please refer to the proper kubernetes documentation on [Services](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md)
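To make the nodePort idea concrete, here is a minimal, hypothetical Service manifest of that type (names and ports are placeholders, not part of this repository); haproxy can then be pointed at the chosen node port on every node:
```
# hypothetical example of a NodePort service
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # cluster-internal service port
      targetPort: 8080  # port the pods listen on
      nodePort: 30080   # same tcp port opened on every node
```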
|
||||
|
||||
### Check cluster status
|
||||
|
||||
#### Kubernetes components
|
||||
|
||||
* Check the status of the processes
|
||||
```
|
||||
systemctl status kubelet
|
||||
```
|
||||
|
||||
* Check the logs
|
||||
```
|
||||
journalctl -ae -u kubelet
|
||||
```
|
||||
|
||||
* Check the NAT rules
|
||||
```
|
||||
iptables -nLv -t nat
|
||||
```
|
||||
|
||||
For the master nodes you'll have to check the docker logs for the apiserver
|
||||
```
|
||||
docker logs [apiserver docker id]
|
||||
```
|
||||
|
||||
|
||||
### Available apps, installation procedure
|
||||
|
||||
There are two ways of installing new apps
|
||||
|
||||
#### Ansible galaxy
|
||||
|
||||
Additional apps can be installed with ```ansible-galaxy```.
|
||||
|
||||
You'll need to edit the file '*requirements.yml*' in order to choose the needed apps.
|
||||
The list of available apps can be found [there](https://github.com/ansibl8s)
|
||||
|
||||
For instance it is **strongly recommended** to install a dns server which resolves kubernetes service names.
|
||||
In order to use this role you'll need the following entries in the file '*requirements.yml*'
|
||||
Please refer to the [k8s-kubedns readme](https://github.com/ansibl8s/k8s-kubedns) for additional info.
|
||||
```
|
||||
- src: https://github.com/ansibl8s/k8s-common.git
|
||||
path: roles/apps
|
||||
# version: v1.0
|
||||
|
||||
- src: https://github.com/ansibl8s/k8s-kubedns.git
|
||||
path: roles/apps
|
||||
# version: v1.0
|
||||
```
|
||||
**Note**: the role common is required by all the apps and provides the tasks and libraries needed.
|
||||
|
||||
Next, empty the apps directory
|
||||
```
|
||||
rm -rf roles/apps/*
|
||||
```
|
||||
|
||||
Then download the roles with ansible-galaxy
|
||||
```
|
||||
ansible-galaxy install -r requirements.yml
|
||||
```
|
||||
|
||||
#### Git submodules
|
||||
Alternatively the roles can be installed as git submodules.
|
||||
That way is easier if you want to make some changes and commit them.
|
||||
|
||||
You can list available submodules with the following command:
|
||||
```
|
||||
grep path .gitmodules | sed 's/.*= //'
|
||||
```
|
||||
|
||||
In order to install the dns addon you'll need to follow these steps
|
||||
```
|
||||
git submodule init roles/apps/k8s-common roles/apps/k8s-kubedns
|
||||
git submodule update
|
||||
```
|
||||
|
||||
Finally update the playbook ```apps.yml``` with the chosen roles, and run it
|
||||
```
|
||||
...
|
||||
- hosts: kube-master
|
||||
roles:
|
||||
- { role: apps/k8s-kubedns, tags: ['kubedns', 'apps'] }
|
||||
...
|
||||
```
|
||||
|
||||
```
|
||||
ansible-playbook -i inventory/inventory.cfg apps.yml -u root
|
||||
```
|
||||
|
||||
|
||||
#### Calico networking
|
||||
Check if the calico-node container is running
|
||||
```
|
||||
docker ps | grep calico
|
||||
```
|
||||
|
||||
The **calicoctl** command allows you to check the status of the network workloads.
|
||||
* Check the status of Calico nodes
|
||||
```
|
||||
calicoctl status
|
||||
```
|
||||
|
||||
* Show the configured network subnet for containers
|
||||
```
|
||||
calicoctl pool show
|
||||
```
|
||||
|
||||
* Show the workloads (ip addresses of containers and where they are located)
|
||||
```
|
||||
calicoctl endpoint show --detail
|
||||
```
|
||||
#### Flannel networking
|
||||
|
||||
Congrats! Now you can walk through [kubernetes basics](http://kubernetes.io/v1.1/basicstutorials.html)
|
||||
CI/end-to-end tests sponsored by Google (GCE), and [teuto.net](https://teuto.net/) for OpenStack.
|
||||
See the [test matrix](docs/test_cases.md) for details.
|
||||
|
9
RELEASE.md
Normal file
@ -0,0 +1,9 @@
|
||||
# Release Process
|
||||
|
||||
The Kargo Project is released on an as-needed basis. The process is as follows:
|
||||
|
||||
1. An issue is opened proposing a new release, with a changelog since the last release
|
||||
2. At least one of the [OWNERS](OWNERS) must LGTM this release
|
||||
3. An OWNER runs `git tag -s $VERSION`, inserts the changelog, and pushes the tag with `git push $VERSION`
|
||||
4. The release issue is closed
|
||||
5. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kargo $VERSION is released`
|
128
Vagrantfile
vendored
Normal file
@ -0,0 +1,128 @@
|
||||
# -*- mode: ruby -*-
|
||||
# # vi: set ft=ruby :
|
||||
|
||||
require 'fileutils'
|
||||
|
||||
Vagrant.require_version ">= 1.8.0"
|
||||
|
||||
CONFIG = File.join(File.dirname(__FILE__), "vagrant/config.rb")
|
||||
|
||||
# Defaults for config options defined in CONFIG
|
||||
$num_instances = 3
|
||||
$instance_name_prefix = "k8s"
|
||||
$vm_gui = false
|
||||
$vm_memory = 1536
|
||||
$vm_cpus = 1
|
||||
$shared_folders = {}
|
||||
$forwarded_ports = {}
|
||||
$subnet = "172.17.8"
|
||||
$box = "bento/ubuntu-16.04"
|
||||
|
||||
host_vars = {}
|
||||
|
||||
if File.exist?(CONFIG)
|
||||
require CONFIG
|
||||
end
|
||||
|
||||
# if $inventory is not set, try to use example
|
||||
$inventory = File.join(File.dirname(__FILE__), "inventory") if ! $inventory
|
||||
|
||||
# if $inventory has a hosts file use it, otherwise copy over vars etc
|
||||
# to where vagrant expects dynamic inventory to be.
|
||||
if ! File.exist?(File.join(File.dirname($inventory), "hosts"))
|
||||
$vagrant_ansible = File.join(File.dirname(__FILE__), ".vagrant",
|
||||
"provisioners", "ansible")
|
||||
FileUtils.mkdir_p($vagrant_ansible) if ! File.exist?($vagrant_ansible)
|
||||
if ! File.exist?(File.join($vagrant_ansible,"inventory"))
|
||||
FileUtils.ln_s($inventory, $vagrant_ansible)
|
||||
end
|
||||
end
|
||||
|
||||
if Vagrant.has_plugin?("vagrant-proxyconf")
|
||||
$no_proxy = ENV['NO_PROXY'] || ENV['no_proxy'] || "127.0.0.1,localhost"
|
||||
(1..$num_instances).each do |i|
|
||||
$no_proxy += ",#{$subnet}.#{i+100}"
|
||||
end
|
||||
end
|
||||
|
||||
Vagrant.configure("2") do |config|
|
||||
# always use Vagrant's insecure key
|
||||
config.ssh.insert_key = false
|
||||
config.vm.box = $box
|
||||
|
||||
# plugin conflict
|
||||
if Vagrant.has_plugin?("vagrant-vbguest") then
|
||||
config.vbguest.auto_update = false
|
||||
end
|
||||
|
||||
(1..$num_instances).each do |i|
|
||||
config.vm.define vm_name = "%s-%02d" % [$instance_name_prefix, i] do |config|
|
||||
config.vm.hostname = vm_name
|
||||
|
||||
if Vagrant.has_plugin?("vagrant-proxyconf")
|
||||
config.proxy.http = ENV['HTTP_PROXY'] || ENV['http_proxy'] || ""
|
||||
config.proxy.https = ENV['HTTPS_PROXY'] || ENV['https_proxy'] || ""
|
||||
config.proxy.no_proxy = $no_proxy
|
||||
end
|
||||
|
||||
if $expose_docker_tcp
|
||||
config.vm.network "forwarded_port", guest: 2375, host: ($expose_docker_tcp + i - 1), auto_correct: true
|
||||
end
|
||||
|
||||
$forwarded_ports.each do |guest, host|
|
||||
config.vm.network "forwarded_port", guest: guest, host: host, auto_correct: true
|
||||
end
|
||||
|
||||
["vmware_fusion", "vmware_workstation"].each do |vmware|
|
||||
config.vm.provider vmware do |v|
|
||||
v.vmx['memsize'] = $vm_memory
|
||||
v.vmx['numvcpus'] = $vm_cpus
|
||||
end
|
||||
end
|
||||
|
||||
config.vm.provider :virtualbox do |vb|
|
||||
vb.gui = $vm_gui
|
||||
vb.memory = $vm_memory
|
||||
vb.cpus = $vm_cpus
|
||||
end
|
||||
|
||||
ip = "#{$subnet}.#{i+100}"
|
||||
host_vars[vm_name] = {
|
||||
"ip" => ip,
|
||||
#"access_ip" => ip,
|
||||
"flannel_interface" => ip,
|
||||
"flannel_backend_type" => "host-gw",
|
||||
"local_release_dir" => "/vagrant/temp",
|
||||
"download_run_once" => "False"
|
||||
}
|
||||
config.vm.network :private_network, ip: ip
|
||||
|
||||
# Only execute the Ansible provisioner once,
|
||||
# when all the machines are up and ready.
|
||||
if i == $num_instances
|
||||
config.vm.provision "ansible" do |ansible|
|
||||
ansible.playbook = "cluster.yml"
|
||||
if File.exist?(File.join(File.dirname($inventory), "hosts"))
|
||||
ansible.inventory_path = $inventory
|
||||
end
|
||||
ansible.sudo = true
|
||||
ansible.limit = "all"
|
||||
ansible.host_key_checking = false
|
||||
ansible.raw_arguments = ["--forks=#{$num_instances}"]
|
||||
ansible.host_vars = host_vars
|
||||
#ansible.tags = ['download']
|
||||
ansible.groups = {
|
||||
# The first three nodes should be etcd servers
|
||||
"etcd" => ["#{$instance_name_prefix}-0[1:3]"],
|
||||
# The first two nodes should be masters
|
||||
"kube-master" => ["#{$instance_name_prefix}-0[1:2]"],
|
||||
# all nodes should be kube nodes
|
||||
"kube-node" => ["#{$instance_name_prefix}-0[1:#{$num_instances}]"],
|
||||
"k8s-cluster:children" => ["kube-master", "kube-node"],
|
||||
}
|
||||
end
|
||||
end
|
||||
|
||||
end
|
||||
end
|
||||
end
|
9
ansible.cfg
Normal file
@ -0,0 +1,9 @@
|
||||
[ssh_connection]
|
||||
pipelining=True
|
||||
#ssh_args = -F ./ssh-bastion.conf -o ControlMaster=auto -o ControlPersist=30m
|
||||
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
|
||||
[defaults]
|
||||
host_key_checking=False
|
||||
gathering = smart
|
||||
fact_caching = jsonfile
|
||||
fact_caching_connection = /tmp
|
29
apps.yml
@ -1,29 +0,0 @@
|
||||
---
|
||||
- hosts: kube-master
|
||||
roles:
|
||||
# System
|
||||
- { role: apps/k8s-kubedns, tags: ['kubedns', 'kube-system'] }
|
||||
|
||||
# Databases
|
||||
- { role: apps/k8s-postgres, tags: 'postgres' }
|
||||
- { role: apps/k8s-elasticsearch, tags: 'elasticsearch' }
|
||||
- { role: apps/k8s-memcached, tags: 'memcached' }
|
||||
- { role: apps/k8s-redis, tags: 'redis' }
|
||||
|
||||
# Msg Broker
|
||||
- { role: apps/k8s-rabbitmq, tags: 'rabbitmq' }
|
||||
|
||||
# Monitoring
|
||||
- { role: apps/k8s-influxdb, tags: ['influxdb', 'kube-system']}
|
||||
- { role: apps/k8s-heapster, tags: ['heapster', 'kube-system']}
|
||||
- { role: apps/k8s-kubedash, tags: ['kubedash', 'kube-system']}
|
||||
|
||||
# logging
|
||||
- { role: apps/k8s-kube-logstash, tags: 'kube-logstash'}
|
||||
|
||||
# Console
|
||||
- { role: apps/k8s-fabric8, tags: 'fabric8' }
|
||||
- { role: apps/k8s-kube-ui, tags: ['kube-ui', 'kube-system']}
|
||||
|
||||
# ETCD
|
||||
- { role: apps/k8s-etcd, tags: 'etcd'}
|
63
cluster.yml
@ -1,18 +1,67 @@
|
||||
---
|
||||
- hosts: downloader
|
||||
sudo: no
|
||||
- hosts: localhost
|
||||
gather_facts: False
|
||||
roles:
|
||||
- { role: download, tags: download }
|
||||
- bastion-ssh-config
|
||||
tags: [localhost, bastion]
|
||||
|
||||
- hosts: k8s-cluster
|
||||
- hosts: k8s-cluster:etcd:calico-rr
|
||||
any_errors_fatal: true
|
||||
gather_facts: false
|
||||
vars:
|
||||
# Need to disable pipelining for bootstrap-os as some systems have requiretty in sudoers set, which makes pipelining
|
||||
# fail. bootstrap-os fixes this on these systems, so in later plays it can be enabled.
|
||||
ansible_ssh_pipelining: false
|
||||
roles:
|
||||
- bootstrap-os
|
||||
tags:
|
||||
- bootstrap-os
|
||||
|
||||
- hosts: k8s-cluster:etcd:calico-rr
|
||||
any_errors_fatal: true
|
||||
vars:
|
||||
ansible_ssh_pipelining: true
|
||||
gather_facts: true
|
||||
|
||||
- hosts: k8s-cluster:etcd:calico-rr
|
||||
any_errors_fatal: true
|
||||
roles:
|
||||
- { role: kubernetes/preinstall, tags: preinstall }
|
||||
- { role: docker, tags: docker }
|
||||
- { role: kubernetes/node, tags: node }
|
||||
- { role: rkt, tags: rkt, when: "'rkt' in [ etcd_deployment_type, kubelet_deployment_type ]" }
|
||||
|
||||
- hosts: etcd:!k8s-cluster
|
||||
any_errors_fatal: true
|
||||
roles:
|
||||
- { role: etcd, tags: etcd }
|
||||
- { role: dnsmasq, tags: dnsmasq }
|
||||
- { role: network_plugin, tags: ['calico', 'flannel', 'network'] }
|
||||
|
||||
- hosts: k8s-cluster
|
||||
any_errors_fatal: true
|
||||
roles:
|
||||
- { role: etcd, tags: etcd }
|
||||
- { role: kubernetes/node, tags: node }
|
||||
- { role: network_plugin, tags: network }
|
||||
|
||||
- hosts: kube-master
|
||||
any_errors_fatal: true
|
||||
roles:
|
||||
- { role: kubernetes/master, tags: master }
|
||||
- { role: kubernetes-apps/lib, tags: apps }
|
||||
- { role: kubernetes-apps/network_plugin, tags: network }
|
||||
|
||||
- hosts: calico-rr
|
||||
any_errors_fatal: true
|
||||
roles:
|
||||
- { role: network_plugin/calico/rr, tags: network }
|
||||
|
||||
- hosts: k8s-cluster
|
||||
any_errors_fatal: true
|
||||
roles:
|
||||
- { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
|
||||
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf }
|
||||
|
||||
- hosts: kube-master[0]
|
||||
any_errors_fatal: true
|
||||
roles:
|
||||
- { role: kubernetes-apps/lib, tags: apps }
|
||||
- { role: kubernetes-apps, tags: apps }
|
||||
|
59
code-of-conduct.md
Normal file
@ -0,0 +1,59 @@
|
||||
## Kubernetes Community Code of Conduct
|
||||
|
||||
### Contributor Code of Conduct
|
||||
|
||||
As contributors and maintainers of this project, and in the interest of fostering
|
||||
an open and welcoming community, we pledge to respect all people who contribute
|
||||
through reporting issues, posting feature requests, updating documentation,
|
||||
submitting pull requests or patches, and other activities.
|
||||
|
||||
We are committed to making participation in this project a harassment-free experience for
|
||||
everyone, regardless of level of experience, gender, gender identity and expression,
|
||||
sexual orientation, disability, personal appearance, body size, race, ethnicity, age,
|
||||
religion, or nationality.
|
||||
|
||||
Examples of unacceptable behavior by participants include:
|
||||
|
||||
* The use of sexualized language or imagery
|
||||
* Personal attacks
|
||||
* Trolling or insulting/derogatory comments
|
||||
* Public or private harassment
|
||||
* Publishing others' private information, such as physical or electronic addresses,
|
||||
without explicit permission
|
||||
* Other unethical or unprofessional conduct.
|
||||
|
||||
Project maintainers have the right and responsibility to remove, edit, or reject
|
||||
comments, commits, code, wiki edits, issues, and other contributions that are not
|
||||
aligned to this Code of Conduct. By adopting this Code of Conduct, project maintainers
|
||||
commit themselves to fairly and consistently applying these principles to every aspect
|
||||
of managing this project. Project maintainers who do not follow or enforce the Code of
|
||||
Conduct may be permanently removed from the project team.
|
||||
|
||||
This code of conduct applies both within project spaces and in public spaces
|
||||
when an individual is representing the project or its community.
|
||||
|
||||
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by
|
||||
opening an issue or contacting one or more of the project maintainers.
|
||||
|
||||
This Code of Conduct is adapted from the Contributor Covenant
|
||||
(http://contributor-covenant.org), version 1.2.0, available at
|
||||
http://contributor-covenant.org/version/1/2/0/
|
||||
|
||||
### Kubernetes Events Code of Conduct
|
||||
|
||||
Kubernetes events are working conferences intended for professional networking and collaboration in the
|
||||
Kubernetes community. Attendees are expected to behave according to professional standards and in accordance
|
||||
with their employer's policies on appropriate workplace behavior.
|
||||
|
||||
While at Kubernetes events or related social networking opportunities, attendees should not engage in
|
||||
discriminatory or offensive speech or actions regarding gender, sexuality, race, or religion. Speakers should
|
||||
be especially aware of these concerns.
|
||||
|
||||
The Kubernetes team does not condone any statements by speakers contrary to these standards. The Kubernetes
|
||||
team reserves the right to deny entrance and/or eject from an event (without refund) any individual found to
|
||||
be engaging in discriminatory or offensive speech or actions.
|
||||
|
||||
Please bring any concerns to the immediate attention of Kubernetes event staff.
|
||||
|
||||
|
||||
|
2
contrib/azurerm/.gitignore
vendored
Normal file
@ -0,0 +1,2 @@
|
||||
.generated
|
||||
/inventory
|
64
contrib/azurerm/README.md
Normal file
@ -0,0 +1,64 @@
|
||||
# Kubernetes on Azure with Azure Resource Group Templates
|
||||
|
||||
Provision the base infrastructure for a Kubernetes cluster by using [Azure Resource Group Templates](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates)
|
||||
|
||||
## Status
|
||||
|
||||
This will provision the base infrastructure (vnet, vms, nics, ips, ...) needed for Kubernetes in Azure into the specified
|
||||
Resource Group. It will not install Kubernetes itself; this has to be done in a later step by yourself (using kargo of course).
|
||||
|
||||
## Requirements
|
||||
|
||||
- [Install azure-cli](https://docs.microsoft.com/en-us/azure/xplat-cli-install)
|
||||
- [Login with azure-cli](https://docs.microsoft.com/en-us/azure/xplat-cli-connect)
|
||||
- Dedicated Resource Group created in the Azure Portal or through azure-cli
|
||||
|
||||
## Configuration through group_vars/all
|
||||
|
||||
You have to modify at least one variable in group_vars/all, which is the **cluster_name** variable. It must be globally
|
||||
unique due to some restrictions in Azure. Most other variables should be self-explanatory if you have some basic Kubernetes
|
||||
experience.
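For example, the one strictly required change could look like this (the value is only a placeholder; pick something globally unique):
```
cluster_name: mykube-prod-weu01
```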
|
||||
|
||||
## Bastion host
|
||||
|
||||
You can enable the use of a Bastion Host by changing **use_bastion** in group_vars/all to **true**. The generated
|
||||
templates will then include an additional bastion VM which can then be used to connect to the masters and nodes. The option
|
||||
also removes all public IPs from all other VMs.
|
||||
|
||||
## Generating and applying
|
||||
|
||||
To generate and apply the templates, call:
|
||||
|
||||
```shell
|
||||
$ ./apply-rg.sh <resource_group_name>
|
||||
```
|
||||
|
||||
If you change something in the configuration (e.g. number of nodes) later, you can call this again and Azure will
|
||||
take care of creating/modifying whatever is needed.
|
||||
|
||||
## Clearing a resource group
|
||||
|
||||
If you need to delete all resources from a resource group, simply call:
|
||||
|
||||
```shell
|
||||
$ ./clear-rg.sh <resource_group_name>
|
||||
```
|
||||
|
||||
**WARNING** this really deletes everything from your resource group, including everything that was later created by you!
|
||||
|
||||
|
||||
## Generating an inventory for kargo
|
||||
|
||||
After you have applied the templates, you can generate an inventory with this call:
|
||||
|
||||
```shell
|
||||
$ ./generate-inventory.sh <resource_group_name>
|
||||
```
|
||||
|
||||
It will create the file ./inventory which can then be used with kargo, e.g.:
|
||||
|
||||
```shell
|
||||
$ cd kargo-root-dir
|
||||
$ ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/group_vars/all.yml" cluster.yml
|
||||
```
|
||||
|
19
contrib/azurerm/apply-rg.sh
Executable file
@ -0,0 +1,19 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -e
|
||||
|
||||
AZURE_RESOURCE_GROUP="$1"
|
||||
|
||||
if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
|
||||
echo "AZURE_RESOURCE_GROUP is missing"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
ansible-playbook generate-templates.yml
|
||||
|
||||
azure group deployment create -f ./.generated/network.json -g $AZURE_RESOURCE_GROUP
|
||||
azure group deployment create -f ./.generated/storage.json -g $AZURE_RESOURCE_GROUP
|
||||
azure group deployment create -f ./.generated/availability-sets.json -g $AZURE_RESOURCE_GROUP
|
||||
azure group deployment create -f ./.generated/bastion.json -g $AZURE_RESOURCE_GROUP
|
||||
azure group deployment create -f ./.generated/masters.json -g $AZURE_RESOURCE_GROUP
|
||||
azure group deployment create -f ./.generated/minions.json -g $AZURE_RESOURCE_GROUP
|
14
contrib/azurerm/clear-rg.sh
Executable file
@ -0,0 +1,14 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -e
|
||||
|
||||
AZURE_RESOURCE_GROUP="$1"
|
||||
|
||||
if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
|
||||
echo "AZURE_RESOURCE_GROUP is missing"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
ansible-playbook generate-templates.yml
|
||||
|
||||
azure group deployment create -g "$AZURE_RESOURCE_GROUP" -f ./.generated/clear-rg.json -m Complete
|
12
contrib/azurerm/generate-inventory.sh
Executable file
@ -0,0 +1,12 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -e
|
||||
|
||||
AZURE_RESOURCE_GROUP="$1"
|
||||
|
||||
if [ "$AZURE_RESOURCE_GROUP" == "" ]; then
|
||||
echo "AZURE_RESOURCE_GROUP is missing"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
ansible-playbook generate-inventory.yml -e azure_resource_group="$AZURE_RESOURCE_GROUP"
|
5
contrib/azurerm/generate-inventory.yml
Normal file
@ -0,0 +1,5 @@
|
||||
---
|
||||
- hosts: localhost
|
||||
gather_facts: False
|
||||
roles:
|
||||
- generate-inventory
|
5
contrib/azurerm/generate-templates.yml
Normal file
@ -0,0 +1,5 @@
|
||||
---
|
||||
- hosts: localhost
|
||||
gather_facts: False
|
||||
roles:
|
||||
- generate-templates
|
26
contrib/azurerm/group_vars/all
Normal file
@ -0,0 +1,26 @@
|
||||
|
||||
# Due to some Azure limitations, this name must be globally unique
|
||||
cluster_name: example
|
||||
|
||||
# Set this to true if you do not want to have public IPs for your masters and minions. This will provision a bastion
|
||||
# node that can be used to access the masters and minions
|
||||
use_bastion: false
|
||||
|
||||
number_of_k8s_masters: 3
|
||||
number_of_k8s_nodes: 3
|
||||
|
||||
masters_vm_size: Standard_A2
|
||||
masters_os_disk_size: 1000
|
||||
|
||||
minions_vm_size: Standard_A2
|
||||
minions_os_disk_size: 1000
|
||||
|
||||
admin_username: devops
|
||||
admin_password: changeme
|
||||
ssh_public_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLRzcxbsFDdEibiyXCSdIFh7bKbXso1NqlKjEyPTptf3aBXHEhVil0lJRjGpTlpfTy7PHvXFbXIOCdv9tOmeH1uxWDDeZawgPFV6VSZ1QneCL+8bxzhjiCn8133wBSPZkN8rbFKd9eEUUBfx8ipCblYblF9FcidylwtMt5TeEmXk8yRVkPiCuEYuDplhc2H0f4PsK3pFb5aDVdaDT3VeIypnOQZZoUxHWqm6ThyHrzLJd3SrZf+RROFWW1uInIDf/SZlXojczUYoffxgT1lERfOJCHJXsqbZWugbxQBwqsVsX59+KPxFFo6nV88h3UQr63wbFx52/MXkX4WrCkAHzN ablock-vwfs@dell-lappy"
|
||||
|
||||
# Azure CIDRs
|
||||
azure_vnet_cidr: 10.0.0.0/8
|
||||
azure_admin_cidr: 10.241.2.0/24
|
||||
azure_masters_cidr: 10.0.4.0/24
|
||||
azure_minions_cidr: 10.240.0.0/16
|
11
contrib/azurerm/roles/generate-inventory/tasks/main.yml
Normal file
@ -0,0 +1,11 @@
|
||||
---
|
||||
|
||||
- name: Query Azure VMs
|
||||
command: azure vm list-ip-address --json {{ azure_resource_group }}
|
||||
register: vm_list_cmd
|
||||
|
||||
- set_fact:
|
||||
vm_list: "{{ vm_list_cmd.stdout }}"
|
||||
|
||||
- name: Generate inventory
|
||||
template: src=inventory.j2 dest="{{playbook_dir}}/inventory"
|
@ -0,0 +1,33 @@
|
||||
|
||||
{% for vm in vm_list %}
|
||||
{% if not use_bastion or vm.name == 'bastion' %}
|
||||
{{ vm.name }} ansible_ssh_host={{ vm.networkProfile.networkInterfaces[0].expanded.ipConfigurations[0].publicIPAddress.expanded.ipAddress }} ip={{ vm.networkProfile.networkInterfaces[0].expanded.ipConfigurations[0].privateIPAddress }}
|
||||
{% else %}
|
||||
{{ vm.name }} ansible_ssh_host={{ vm.networkProfile.networkInterfaces[0].expanded.ipConfigurations[0].privateIPAddress }}
|
||||
{% endif %}
|
||||
{% endfor %}
|
||||
|
||||
[kube-master]
|
||||
{% for vm in vm_list %}
|
||||
{% if 'kube-master' in vm.tags.roles %}
|
||||
{{ vm.name }}
|
||||
{% endif %}
|
||||
{% endfor %}
|
||||
|
||||
[etcd]
|
||||
{% for vm in vm_list %}
|
||||
{% if 'etcd' in vm.tags.roles %}
|
||||
{{ vm.name }}
|
||||
{% endif %}
|
||||
{% endfor %}
|
||||
|
||||
[kube-node]
|
||||
{% for vm in vm_list %}
|
||||
{% if 'kube-node' in vm.tags.roles %}
|
||||
{{ vm.name }}
|
||||
{% endif %}
|
||||
{% endfor %}
|
||||
|
||||
[k8s-cluster:children]
|
||||
kube-node
|
||||
kube-master
|
37
contrib/azurerm/roles/generate-templates/defaults/main.yml
Normal file
@ -0,0 +1,37 @@
|
||||
apiVersion: "2015-06-15"
|
||||
|
||||
virtualNetworkName: "KubVNET"
|
||||
|
||||
subnetAdminName: "ad-subnet"
|
||||
subnetMastersName: "master-subnet"
|
||||
subnetMinionsName: "minion-subnet"
|
||||
|
||||
routeTableName: "routetable"
|
||||
securityGroupName: "secgroup"
|
||||
|
||||
nameSuffix: "{{cluster_name}}"
|
||||
|
||||
availabilitySetMasters: "master-avs"
|
||||
availabilitySetMinions: "minion-avs"
|
||||
|
||||
faultDomainCount: 3
|
||||
updateDomainCount: 10
|
||||
|
||||
bastionVmSize: Standard_A0
|
||||
bastionVMName: bastion
|
||||
bastionIPAddressName: bastion-pubip
|
||||
|
||||
disablePasswordAuthentication: true
|
||||
|
||||
sshKeyPath: "/home/{{admin_username}}/.ssh/authorized_keys"
|
||||
|
||||
imageReference:
|
||||
publisher: "OpenLogic"
|
||||
offer: "CentOS"
|
||||
sku: "7.2"
|
||||
version: "latest"
|
||||
imageReferenceJson: "{{imageReference|to_json}}"
|
||||
|
||||
storageAccountName: "sa{{nameSuffix | replace('-', '')}}"
|
||||
storageAccountType: "Standard_LRS"
|
||||
|
14
contrib/azurerm/roles/generate-templates/tasks/main.yml
Normal file
@ -0,0 +1,14 @@
|
||||
- set_fact:
|
||||
base_dir: "{{playbook_dir}}/.generated/"
|
||||
|
||||
- file: path={{base_dir}} state=directory recurse=true
|
||||
|
||||
- template: src={{item}} dest="{{base_dir}}/{{item}}"
|
||||
with_items:
|
||||
- network.json
|
||||
- storage.json
|
||||
- availability-sets.json
|
||||
- bastion.json
|
||||
- masters.json
|
||||
- minions.json
|
||||
- clear-rg.json
|
@ -0,0 +1,30 @@
|
||||
{
|
||||
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
|
||||
"contentVersion": "1.0.0.0",
|
||||
"parameters": {
|
||||
},
|
||||
"variables": {
|
||||
},
|
||||
"resources": [
|
||||
{
|
||||
"type": "Microsoft.Compute/availabilitySets",
|
||||
"name": "{{availabilitySetMasters}}",
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"location": "[resourceGroup().location]",
|
||||
"properties": {
|
||||
"PlatformFaultDomainCount": "{{faultDomainCount}}",
|
||||
"PlatformUpdateDomainCount": "{{updateDomainCount}}"
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "Microsoft.Compute/availabilitySets",
|
||||
"name": "{{availabilitySetMinions}}",
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"location": "[resourceGroup().location]",
|
||||
"properties": {
|
||||
"PlatformFaultDomainCount": "{{faultDomainCount}}",
|
||||
"PlatformUpdateDomainCount": "{{updateDomainCount}}"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
@ -0,0 +1,99 @@
|
||||
{
|
||||
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
|
||||
"contentVersion": "1.0.0.0",
|
||||
"parameters": {
|
||||
},
|
||||
"variables": {
|
||||
"vnetID": "[resourceId('Microsoft.Network/virtualNetworks', '{{virtualNetworkName}}')]",
|
||||
"subnetAdminRef": "[concat(variables('vnetID'),'/subnets/', '{{subnetAdminName}}')]"
|
||||
},
|
||||
"resources": [
|
||||
{% if use_bastion %}
|
||||
{
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"type": "Microsoft.Network/publicIPAddresses",
|
||||
"name": "{{bastionIPAddressName}}",
|
||||
"location": "[resourceGroup().location]",
|
||||
"properties": {
|
||||
"publicIPAllocationMethod": "Static"
|
||||
}
|
||||
},
|
||||
{
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"type": "Microsoft.Network/networkInterfaces",
|
||||
"name": "{{bastionVMName}}-nic",
|
||||
"location": "[resourceGroup().location]",
|
||||
"dependsOn": [
|
||||
"[concat('Microsoft.Network/publicIPAddresses/', '{{bastionIPAddressName}}')]"
|
||||
],
|
||||
"properties": {
|
||||
"ipConfigurations": [
|
||||
{
|
||||
"name": "BastionIpConfig",
|
||||
"properties": {
|
||||
"privateIPAllocationMethod": "Dynamic",
|
||||
"publicIPAddress": {
|
||||
"id": "[resourceId('Microsoft.Network/publicIPAddresses', '{{bastionIPAddressName}}')]"
|
||||
},
|
||||
"subnet": {
|
||||
"id": "[variables('subnetAdminRef')]"
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"type": "Microsoft.Compute/virtualMachines",
|
||||
"name": "{{bastionVMName}}",
|
||||
"location": "[resourceGroup().location]",
|
||||
"dependsOn": [
|
||||
"[concat('Microsoft.Network/networkInterfaces/', '{{bastionVMName}}-nic')]"
|
||||
],
|
||||
"tags": {
|
||||
"roles": "bastion"
|
||||
},
|
||||
"properties": {
|
||||
"hardwareProfile": {
|
||||
"vmSize": "{{bastionVmSize}}"
|
||||
},
|
||||
"osProfile": {
|
||||
"computerName": "{{bastionVMName}}",
|
||||
"adminUsername": "{{admin_username}}",
|
||||
"adminPassword": "{{admin_password}}",
|
||||
"linuxConfiguration": {
|
||||
"disablePasswordAuthentication": "true",
|
||||
"ssh": {
|
||||
"publicKeys": [
|
||||
{
|
||||
"path": "{{sshKeyPath}}",
|
||||
"keyData": "{{ssh_public_key}}"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"storageProfile": {
|
||||
"imageReference": {{imageReferenceJson}},
|
||||
"osDisk": {
|
||||
"name": "osdisk",
|
||||
"vhd": {
|
||||
"uri": "[concat('http://', '{{storageAccountName}}', '.blob.core.windows.net/vhds/', '{{bastionVMName}}', '-osdisk.vhd')]"
|
||||
},
|
||||
"caching": "ReadWrite",
|
||||
"createOption": "FromImage"
|
||||
}
|
||||
},
|
||||
"networkProfile": {
|
||||
"networkInterfaces": [
|
||||
{
|
||||
"id": "[resourceId('Microsoft.Network/networkInterfaces', '{{bastionVMName}}-nic')]"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
{% endif %}
|
||||
]
|
||||
}
|
@ -0,0 +1,8 @@
|
||||
{
|
||||
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
|
||||
"contentVersion": "1.0.0.0",
|
||||
"parameters": {},
|
||||
"variables": {},
|
||||
"resources": [],
|
||||
"outputs": {}
|
||||
}
|
196
contrib/azurerm/roles/generate-templates/templates/masters.json
Normal file
@ -0,0 +1,196 @@
|
||||
{
|
||||
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
|
||||
"contentVersion": "1.0.0.0",
|
||||
"parameters": {
|
||||
},
|
||||
"variables": {
|
||||
"lbDomainName": "{{nameSuffix}}-api",
|
||||
"lbPublicIPAddressName": "kubernetes-api-pubip",
|
||||
"lbPublicIPAddressType": "Static",
|
||||
"lbPublicIPAddressID": "[resourceId('Microsoft.Network/publicIPAddresses',variables('lbPublicIPAddressName'))]",
|
||||
"lbName": "kubernetes-api",
|
||||
"lbID": "[resourceId('Microsoft.Network/loadBalancers',variables('lbName'))]",
|
||||
|
||||
"vnetID": "[resourceId('Microsoft.Network/virtualNetworks', '{{virtualNetworkName}}')]",
|
||||
"kubeMastersSubnetRef": "[concat(variables('vnetID'),'/subnets/', '{{subnetMastersName}}')]"
|
||||
},
|
||||
"resources": [
|
||||
{
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"type": "Microsoft.Network/publicIPAddresses",
|
||||
"name": "[variables('lbPublicIPAddressName')]",
|
||||
"location": "[resourceGroup().location]",
|
||||
"properties": {
|
||||
"publicIPAllocationMethod": "[variables('lbPublicIPAddressType')]",
|
||||
"dnsSettings": {
|
||||
"domainNameLabel": "[variables('lbDomainName')]"
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"name": "[variables('lbName')]",
|
||||
"type": "Microsoft.Network/loadBalancers",
|
||||
"location": "[resourceGroup().location]",
|
||||
"dependsOn": [
|
||||
"[concat('Microsoft.Network/publicIPAddresses/', variables('lbPublicIPAddressName'))]"
|
||||
],
|
||||
"properties": {
|
||||
"frontendIPConfigurations": [
|
||||
{
|
||||
"name": "kube-api-frontend",
|
||||
"properties": {
|
||||
"publicIPAddress": {
|
||||
"id": "[variables('lbPublicIPAddressID')]"
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"backendAddressPools": [
|
||||
{
|
||||
"name": "kube-api-backend"
|
||||
}
|
||||
],
|
||||
"loadBalancingRules": [
|
||||
{
|
||||
"name": "kube-api",
|
||||
"properties": {
|
||||
"frontendIPConfiguration": {
|
||||
"id": "[concat(variables('lbID'), '/frontendIPConfigurations/kube-api-frontend')]"
|
||||
},
|
||||
"backendAddressPool": {
|
||||
"id": "[concat(variables('lbID'), '/backendAddressPools/kube-api-backend')]"
|
||||
},
|
||||
"protocol": "tcp",
|
||||
"frontendPort": 443,
|
||||
"backendPort": 443,
|
||||
"enableFloatingIP": false,
|
||||
"idleTimeoutInMinutes": 5,
|
||||
"probe": {
|
||||
"id": "[concat(variables('lbID'), '/probes/kube-api')]"
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"probes": [
|
||||
{
|
||||
"name": "kube-api",
|
||||
"properties": {
|
||||
"protocol": "tcp",
|
||||
"port": 443,
|
||||
"intervalInSeconds": 5,
|
||||
"numberOfProbes": 2
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
{% for i in range(number_of_k8s_masters) %}
|
||||
{% if not use_bastion %}
|
||||
{
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"type": "Microsoft.Network/publicIPAddresses",
|
||||
"name": "master-{{i}}-pubip",
|
||||
"location": "[resourceGroup().location]",
|
||||
"properties": {
|
||||
"publicIPAllocationMethod": "Static"
|
||||
}
|
||||
},
|
||||
{% endif %}
|
||||
{
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"type": "Microsoft.Network/networkInterfaces",
|
||||
"name": "master-{{i}}-nic",
|
||||
"location": "[resourceGroup().location]",
|
||||
"dependsOn": [
|
||||
{% if not use_bastion %}
|
||||
"[concat('Microsoft.Network/publicIPAddresses/', 'master-{{i}}-pubip')]",
|
||||
{% endif %}
|
||||
"[concat('Microsoft.Network/loadBalancers/', variables('lbName'))]"
|
||||
],
|
||||
"properties": {
|
||||
"ipConfigurations": [
|
||||
{
|
||||
"name": "MastersIpConfig",
|
||||
"properties": {
|
||||
"privateIPAllocationMethod": "Dynamic",
|
||||
{% if not use_bastion %}
|
||||
"publicIPAddress": {
|
||||
"id": "[resourceId('Microsoft.Network/publicIPAddresses', 'master-{{i}}-pubip')]"
|
||||
},
|
||||
{% endif %}
|
||||
"subnet": {
|
||||
"id": "[variables('kubeMastersSubnetRef')]"
|
||||
},
|
||||
"loadBalancerBackendAddressPools": [
|
||||
{
|
||||
"id": "[concat(variables('lbID'), '/backendAddressPools/kube-api-backend')]"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
],
|
||||
"networkSecurityGroup": {
|
||||
"id": "[resourceId('Microsoft.Network/networkSecurityGroups', '{{securityGroupName}}')]"
|
||||
},
|
||||
"enableIPForwarding": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "Microsoft.Compute/virtualMachines",
|
||||
"name": "master-{{i}}",
|
||||
"location": "[resourceGroup().location]",
|
||||
"dependsOn": [
|
||||
"[concat('Microsoft.Network/networkInterfaces/', 'master-{{i}}-nic')]"
|
||||
],
|
||||
"tags": {
|
||||
"roles": "kube-master,etcd"
|
||||
},
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"properties": {
|
||||
"availabilitySet": {
|
||||
"id": "[resourceId('Microsoft.Compute/availabilitySets', '{{availabilitySetMasters}}')]"
|
||||
},
|
||||
"hardwareProfile": {
|
||||
"vmSize": "{{masters_vm_size}}"
|
||||
},
|
||||
"osProfile": {
|
||||
"computerName": "master-{{i}}",
|
||||
"adminUsername": "{{admin_username}}",
|
||||
"adminPassword": "{{admin_password}}",
|
||||
"linuxConfiguration": {
|
||||
"disablePasswordAuthentication": "{{disablePasswordAuthentication}}",
|
||||
"ssh": {
|
||||
"publicKeys": [
|
||||
{
|
||||
"path": "{{sshKeyPath}}",
|
||||
"keyData": "{{ssh_public_key}}"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"storageProfile": {
|
||||
"imageReference": {{imageReferenceJson}},
|
||||
"osDisk": {
|
||||
"name": "ma{{nameSuffix}}{{i}}",
|
||||
"vhd": {
|
||||
"uri": "[concat('http://','{{storageAccountName}}','.blob.core.windows.net/vhds/master-{{i}}.vhd')]"
|
||||
},
|
||||
"caching": "ReadWrite",
|
||||
"createOption": "FromImage",
|
||||
"diskSizeGB": "{{masters_os_disk_size}}"
|
||||
}
|
||||
},
|
||||
"networkProfile": {
|
||||
"networkInterfaces": [
|
||||
{
|
||||
"id": "[resourceId('Microsoft.Network/networkInterfaces', 'master-{{i}}-nic')]"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
} {% if not loop.last %},{% endif %}
|
||||
{% endfor %}
|
||||
]
|
||||
}
|
113
contrib/azurerm/roles/generate-templates/templates/minions.json
Normal file
@ -0,0 +1,113 @@
|
||||
{
|
||||
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
|
||||
"contentVersion": "1.0.0.0",
|
||||
"parameters": {
|
||||
},
|
||||
"variables": {
|
||||
"vnetID": "[resourceId('Microsoft.Network/virtualNetworks', '{{virtualNetworkName}}')]",
|
||||
"kubeMinionsSubnetRef": "[concat(variables('vnetID'),'/subnets/', '{{subnetMinionsName}}')]"
|
||||
},
|
||||
"resources": [
|
||||
{% for i in range(number_of_k8s_nodes) %}
|
||||
{% if not use_bastion %}
|
||||
{
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"type": "Microsoft.Network/publicIPAddresses",
|
||||
"name": "minion-{{i}}-pubip",
|
||||
"location": "[resourceGroup().location]",
|
||||
"properties": {
|
||||
"publicIPAllocationMethod": "Static"
|
||||
}
|
||||
},
|
||||
{% endif %}
|
||||
{
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"type": "Microsoft.Network/networkInterfaces",
|
||||
"name": "minion-{{i}}-nic",
|
||||
"location": "[resourceGroup().location]",
|
||||
"dependsOn": [
|
||||
{% if not use_bastion %}
|
||||
"[concat('Microsoft.Network/publicIPAddresses/', 'minion-{{i}}-pubip')]"
|
||||
{% endif %}
|
||||
],
|
||||
"properties": {
|
||||
"ipConfigurations": [
|
||||
{
|
||||
"name": "MinionsIpConfig",
|
||||
"properties": {
|
||||
"privateIPAllocationMethod": "Dynamic",
|
||||
{% if not use_bastion %}
|
||||
"publicIPAddress": {
|
||||
"id": "[resourceId('Microsoft.Network/publicIPAddresses', 'minion-{{i}}-pubip')]"
|
||||
},
|
||||
{% endif %}
|
||||
"subnet": {
|
||||
"id": "[variables('kubeMinionsSubnetRef')]"
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"networkSecurityGroup": {
|
||||
"id": "[resourceId('Microsoft.Network/networkSecurityGroups', '{{securityGroupName}}')]"
|
||||
},
|
||||
"enableIPForwarding": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "Microsoft.Compute/virtualMachines",
|
||||
"name": "minion-{{i}}",
|
||||
"location": "[resourceGroup().location]",
|
||||
"dependsOn": [
|
||||
"[concat('Microsoft.Network/networkInterfaces/', 'minion-{{i}}-nic')]"
|
||||
],
|
||||
"tags": {
|
||||
"roles": "kube-node"
|
||||
},
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"properties": {
|
||||
"availabilitySet": {
|
||||
"id": "[resourceId('Microsoft.Compute/availabilitySets', '{{availabilitySetMinions}}')]"
|
||||
},
|
||||
"hardwareProfile": {
|
||||
"vmSize": "{{minions_vm_size}}"
|
||||
},
|
||||
"osProfile": {
|
||||
"computerName": "minion-{{i}}",
|
||||
"adminUsername": "{{admin_username}}",
|
||||
"adminPassword": "{{admin_password}}",
|
||||
"linuxConfiguration": {
|
||||
"disablePasswordAuthentication": "{{disablePasswordAuthentication}}",
|
||||
"ssh": {
|
||||
"publicKeys": [
|
||||
{
|
||||
"path": "{{sshKeyPath}}",
|
||||
"keyData": "{{ssh_public_key}}"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"storageProfile": {
|
||||
"imageReference": {{imageReferenceJson}},
|
||||
"osDisk": {
|
||||
"name": "mi{{nameSuffix}}{{i}}",
|
||||
"vhd": {
|
||||
"uri": "[concat('http://','{{storageAccountName}}','.blob.core.windows.net/vhds/minion-{{i}}.vhd')]"
|
||||
},
|
||||
"caching": "ReadWrite",
|
||||
"createOption": "FromImage",
|
||||
"diskSizeGB": "{{minions_os_disk_size}}"
|
||||
}
|
||||
},
|
||||
"networkProfile": {
|
||||
"networkInterfaces": [
|
||||
{
|
||||
"id": "[resourceId('Microsoft.Network/networkInterfaces', 'minion-{{i}}-nic')]"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
} {% if not loop.last %},{% endif %}
|
||||
{% endfor %}
|
||||
]
|
||||
}
|
109
contrib/azurerm/roles/generate-templates/templates/network.json
Normal file
@ -0,0 +1,109 @@
|
||||
{
|
||||
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
|
||||
"contentVersion": "1.0.0.0",
|
||||
"parameters": {
|
||||
},
|
||||
"variables": {
|
||||
},
|
||||
"resources": [
|
||||
{
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"type": "Microsoft.Network/routeTables",
|
||||
"name": "{{routeTableName}}",
|
||||
"location": "[resourceGroup().location]",
|
||||
"properties": {
|
||||
"routes": [
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "Microsoft.Network/virtualNetworks",
|
||||
"name": "{{virtualNetworkName}}",
|
||||
"location": "[resourceGroup().location]",
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"dependsOn": [
|
||||
"[concat('Microsoft.Network/routeTables/', '{{routeTableName}}')]"
|
||||
],
|
||||
"properties": {
|
||||
"addressSpace": {
|
||||
"addressPrefixes": [
|
||||
"{{azure_vnet_cidr}}"
|
||||
]
|
||||
},
|
||||
"subnets": [
|
||||
{
|
||||
"name": "{{subnetMastersName}}",
|
||||
"properties": {
|
||||
"addressPrefix": "{{azure_masters_cidr}}",
|
||||
"routeTable": {
|
||||
"id": "[resourceId('Microsoft.Network/routeTables', '{{routeTableName}}')]"
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "{{subnetMinionsName}}",
|
||||
"properties": {
|
||||
"addressPrefix": "{{azure_minions_cidr}}",
|
||||
"routeTable": {
|
||||
"id": "[resourceId('Microsoft.Network/routeTables', '{{routeTableName}}')]"
|
||||
}
|
||||
}
|
||||
}
|
||||
{% if use_bastion %}
|
||||
,{
|
||||
"name": "{{subnetAdminName}}",
|
||||
"properties": {
|
||||
"addressPrefix": "{{azure_admin_cidr}}",
|
||||
"routeTable": {
|
||||
"id": "[resourceId('Microsoft.Network/routeTables', '{{routeTableName}}')]"
|
||||
}
|
||||
}
|
||||
}
|
||||
{% endif %}
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"type": "Microsoft.Network/networkSecurityGroups",
|
||||
"name": "{{securityGroupName}}",
|
||||
"location": "[resourceGroup().location]",
|
||||
"properties": {
|
||||
"securityRules": [
|
||||
{% if not use_bastion %}
|
||||
{
|
||||
"name": "ssh",
|
||||
"properties": {
|
||||
"description": "Allow SSH",
|
||||
"protocol": "Tcp",
|
||||
"sourcePortRange": "*",
|
||||
"destinationPortRange": "22",
|
||||
"sourceAddressPrefix": "Internet",
|
||||
"destinationAddressPrefix": "*",
|
||||
"access": "Allow",
|
||||
"priority": 100,
|
||||
"direction": "Inbound"
|
||||
}
|
||||
},
|
||||
{% endif %}
|
||||
{
|
||||
"name": "kube-api",
|
||||
"properties": {
|
||||
"description": "Allow secure kube-api",
|
||||
"protocol": "Tcp",
|
||||
"sourcePortRange": "*",
|
||||
"destinationPortRange": "443",
|
||||
"sourceAddressPrefix": "Internet",
|
||||
"destinationAddressPrefix": "*",
|
||||
"access": "Allow",
|
||||
"priority": 101,
|
||||
"direction": "Inbound"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"resources": [],
|
||||
"dependsOn": []
|
||||
}
|
||||
]
|
||||
}
|
@ -0,0 +1,19 @@
|
||||
{
|
||||
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
|
||||
"contentVersion": "1.0.0.0",
|
||||
"parameters": {
|
||||
},
|
||||
"variables": {
|
||||
},
|
||||
"resources": [
|
||||
{
|
||||
"type": "Microsoft.Storage/storageAccounts",
|
||||
"name": "{{storageAccountName}}",
|
||||
"location": "[resourceGroup().location]",
|
||||
"apiVersion": "{{apiVersion}}",
|
||||
"properties": {
|
||||
"accountType": "{{storageAccountType}}"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
297
contrib/inventory_builder/inventory.py
Normal file
@ -0,0 +1,297 @@
|
||||
#!/usr/bin/python3
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Usage: inventory.py ip1 [ip2 ...]
# Examples: inventory.py 10.10.1.3 10.10.1.4 10.10.1.5
#
# Advanced usage:
# Add another host after initial creation: inventory.py 10.10.1.5
# Delete a host: inventory.py -10.10.1.3
# Delete a host by id: inventory.py -node1
#
# Load a YAML or JSON file with inventory data: inventory.py load hosts.yaml
# YAML file should be in the following format:
# group1:
#   host1:
#     ip: X.X.X.X
#     var: val
# group2:
#   host2:
#     ip: X.X.X.X

from collections import OrderedDict

try:
    import configparser
except ImportError:
    import ConfigParser as configparser

import os
import re
import sys

ROLES = ['kube-master', 'all', 'k8s-cluster:children', 'kube-node', 'etcd']
PROTECTED_NAMES = ROLES
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'load']
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
                   '0': False, 'no': False, 'false': False, 'off': False}


def get_var_as_bool(name, default):
    """Read an environment variable and coerce it to a boolean."""
    value = os.environ.get(name, '')
    return _boolean_states.get(value.lower(), default)

CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory.cfg")
DEBUG = get_var_as_bool("DEBUG", True)
HOST_PREFIX = os.environ.get("HOST_PREFIX", "node")


class KargoInventory(object):

    def __init__(self, changed_hosts=None, config_file=None):
        self.config = configparser.ConfigParser(allow_no_value=True,
                                                delimiters=('\t', ' '))
        self.config_file = config_file
        if self.config_file:
            self.config.read(self.config_file)

        # Commands (help, print_cfg, ...) short-circuit the normal build.
        if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
            self.parse_command(changed_hosts[0], changed_hosts[1:])
            sys.exit(0)

        self.ensure_required_groups(ROLES)

        if changed_hosts:
            self.hosts = self.build_hostnames(changed_hosts)
            self.purge_invalid_hosts(self.hosts.keys(), PROTECTED_NAMES)
            self.set_kube_master(list(self.hosts.keys())[:2])
            self.set_all(self.hosts)
            self.set_k8s_cluster()
            self.set_kube_node(self.hosts.keys())
            self.set_etcd(list(self.hosts.keys())[:3])
        else:  # Show help if no options
            self.show_help()
            sys.exit(0)

        self.write_config(self.config_file)

    def write_config(self, config_file):
        if config_file:
            with open(config_file, 'w') as f:
                self.config.write(f)
        else:
            print("WARNING: Unable to save config. Make sure you set "
                  "CONFIG_FILE env var.")

    def debug(self, msg):
        if DEBUG:
            print("DEBUG: {0}".format(msg))

    def get_ip_from_opts(self, optstring):
        opts = optstring.split(' ')
        for opt in opts:
            if '=' not in opt:
                continue
            k, v = opt.split('=')
            if k == "ip":
                return v
        raise ValueError("IP parameter not found in options")

    def ensure_required_groups(self, groups):
        for group in groups:
            try:
                self.debug("Adding group {0}".format(group))
                self.config.add_section(group)
            except configparser.DuplicateSectionError:
                pass

    def get_host_id(self, host):
        '''Returns integer host ID (without padding) from a given hostname.'''
        try:
            short_hostname = host.split('.')[0]
            return int(re.findall(r"\d+$", short_hostname)[-1])
        except IndexError:
            raise ValueError("Host name must end in an integer")

    def build_hostnames(self, changed_hosts):
        existing_hosts = OrderedDict()
        highest_host_id = 0
        try:
            for host, opts in self.config.items('all'):
                existing_hosts[host] = opts
                host_id = self.get_host_id(host)
                if host_id > highest_host_id:
                    highest_host_id = host_id
        except configparser.NoSectionError:
            pass

        # FIXME(mattymo): Fix condition where delete then add reuses highest id
        next_host_id = highest_host_id + 1

        all_hosts = existing_hosts.copy()
        for host in changed_hosts:
            if host[0] == "-":
                # "-node1" or "-10.10.1.3" removes a host by name or by IP.
                realhost = host[1:]
                if self.exists_hostname(all_hosts, realhost):
                    self.debug("Marked {0} for deletion.".format(realhost))
                    all_hosts.pop(realhost)
                elif self.exists_ip(all_hosts, realhost):
                    self.debug("Marked {0} for deletion.".format(realhost))
                    self.delete_host_by_ip(all_hosts, realhost)
            elif host[0].isdigit():
                # A bare IP adds a new host named HOST_PREFIX + next id.
                if self.exists_hostname(all_hosts, host):
                    self.debug("Skipping existing host {0}.".format(host))
                    continue
                elif self.exists_ip(all_hosts, host):
                    self.debug("Skipping existing host {0}.".format(host))
                    continue

                next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
                next_host_id += 1
                all_hosts[next_host] = "ansible_host={0} ip={1}".format(
                    host, host)
            elif host[0].isalpha():
                raise Exception("Adding hosts by hostname is not supported.")

        return all_hosts

    def exists_hostname(self, existing_hosts, hostname):
        return hostname in existing_hosts.keys()

    def exists_ip(self, existing_hosts, ip):
        for host_opts in existing_hosts.values():
            if ip == self.get_ip_from_opts(host_opts):
                return True
        return False

    def delete_host_by_ip(self, existing_hosts, ip):
        for hostname, host_opts in existing_hosts.items():
            if ip == self.get_ip_from_opts(host_opts):
                del existing_hosts[hostname]
                return
        raise ValueError("Unable to find host by IP: {0}".format(ip))

    def purge_invalid_hosts(self, hostnames, protected_names=[]):
        for role in self.config.sections():
            for host, _ in self.config.items(role):
                if host not in hostnames and host not in protected_names:
                    self.debug(
                        "Host {0} removed from role {1}".format(host, role))
                    self.config.remove_option(role, host)

    def add_host_to_group(self, group, host, opts=""):
        self.debug("adding host {0} to group {1}".format(host, group))
        self.config.set(group, host, opts)

    def set_kube_master(self, hosts):
        for host in hosts:
            self.add_host_to_group('kube-master', host)

    def set_all(self, hosts):
        for host, opts in hosts.items():
            self.add_host_to_group('all', host, opts)

    def set_k8s_cluster(self):
        self.add_host_to_group('k8s-cluster:children', 'kube-node')
        self.add_host_to_group('k8s-cluster:children', 'kube-master')

    def set_kube_node(self, hosts):
        for host in hosts:
            self.add_host_to_group('kube-node', host)

    def set_etcd(self, hosts):
        for host in hosts:
            self.add_host_to_group('etcd', host)

    def load_file(self, files=None):
        '''Directly loads JSON or YAML files into the inventory.'''

        if not files:
            raise Exception("No input file specified.")

        import json
        import yaml

        for filename in list(files):
            # Try JSON, then YAML
            try:
                with open(filename, 'r') as f:
                    data = json.load(f)
            except ValueError:
                try:
                    with open(filename, 'r') as f:
                        data = yaml.load(f)
                        self.debug("Loaded {0} as YAML".format(filename))
                except ValueError:
                    raise Exception(
                        "Cannot read {0} as JSON or YAML".format(filename))

            self.ensure_required_groups(ROLES)
            self.set_k8s_cluster()
            for group, hosts in data.items():
                self.ensure_required_groups([group])
                for host, opts in hosts.items():
                    optstring = "ansible_host={0} ip={0}".format(opts['ip'])
                    for key, val in opts.items():
                        if key == "ip":
                            continue
                        optstring += " {0}={1}".format(key, val)

                    self.add_host_to_group('all', host, optstring)
                    self.add_host_to_group(group, host)
            self.write_config(self.config_file)

    def parse_command(self, command, args=None):
        if command == 'help':
            self.show_help()
        elif command == 'print_cfg':
            self.print_config()
        elif command == 'print_ips':
            self.print_ips()
        elif command == 'load':
            self.load_file(args)
        else:
            raise Exception("Invalid command specified.")

    def show_help(self):
        help_text = '''Usage: inventory.py ip1 [ip2 ...]
Examples: inventory.py 10.10.1.3 10.10.1.4 10.10.1.5

Available commands:
help - Display this message
print_cfg - Write inventory file to stdout
print_ips - Write a space-delimited list of IPs from "all" group
load - Load hosts from a YAML or JSON file

Advanced usage:
Add another host after initial creation: inventory.py 10.10.1.5
Delete a host: inventory.py -10.10.1.3
Delete a host by id: inventory.py -node1'''
        print(help_text)

    def print_config(self):
        self.config.write(sys.stdout)

    def print_ips(self):
        ips = []
        for host, opts in self.config.items('all'):
            ips.append(self.get_ip_from_opts(opts))
        print(' '.join(ips))


def main(argv=None):
    if not argv:
        argv = sys.argv[1:]
    KargoInventory(argv, CONFIG_FILE)


if __name__ == "__main__":
    sys.exit(main())
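For orientation, here is a minimal sketch of driving the builder from Python rather than the shell, mirroring how the unit tests import the module directly. The IP addresses and output path are illustrative only; the resulting group layout follows from the constructor above (first two hosts become kube-master, first three etcd, all of them kube-node).

```
# Minimal usage sketch (hypothetical IPs and output path).
import os

os.environ["CONFIG_FILE"] = "/tmp/inventory.cfg"   # must be set before the module is imported
import inventory

# Creates node1..node3: node1/node2 in kube-master, all three in kube-node and etcd.
inventory.main(["10.10.1.3", "10.10.1.4", "10.10.1.5"])

# A later call re-reads the existing config and appends node4.
inventory.main(["10.10.1.6"])

# Print the space-delimited IP list from the "all" group (exits after printing).
inventory.main(["print_ips"])
```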
1
contrib/inventory_builder/requirements.txt
Normal file
@ -0,0 +1 @@
configparser>=3.3.0
48
contrib/inventory_builder/requirements.yml
Normal file
48
contrib/inventory_builder/requirements.yml
Normal file
@ -0,0 +1,48 @@
|
||||
---
|
||||
- src: https://gitlab.com/kubespray-ansibl8s/k8s-common.git
|
||||
path: roles/apps
|
||||
scm: git
|
||||
|
||||
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-dashboard.git
|
||||
# path: roles/apps
|
||||
# scm: git
|
||||
#
|
||||
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-kubedns.git
|
||||
# path: roles/apps
|
||||
# scm: git
|
||||
#
|
||||
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-elasticsearch.git
|
||||
# path: roles/apps
|
||||
# scm: git
|
||||
#
|
||||
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-redis.git
|
||||
# path: roles/apps
|
||||
# scm: git
|
||||
#
|
||||
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-memcached.git
|
||||
# path: roles/apps
|
||||
# scm: git
|
||||
#
|
||||
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-postgres.git
|
||||
# path: roles/apps
|
||||
# scm: git
|
||||
#
|
||||
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-pgbouncer.git
|
||||
# path: roles/apps
|
||||
# scm: git
|
||||
#
|
||||
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-heapster.git
|
||||
# path: roles/apps
|
||||
# scm: git
|
||||
#
|
||||
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-influxdb.git
|
||||
# path: roles/apps
|
||||
# scm: git
|
||||
#
|
||||
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-kubedash.git
|
||||
# path: roles/apps
|
||||
# scm: git
|
||||
#
|
||||
#- src: https://gitlab.com/kubespray-ansibl8s/k8s-kube-logstash.git
|
||||
# path: roles/apps
|
||||
# scm: git
|
3
contrib/inventory_builder/setup.cfg
Normal file
3
contrib/inventory_builder/setup.cfg
Normal file
@ -0,0 +1,3 @@
|
||||
[metadata]
|
||||
name = kargo-inventory-builder
|
||||
version = 0.1
|
29
contrib/inventory_builder/setup.py
Normal file
29
contrib/inventory_builder/setup.py
Normal file
@ -0,0 +1,29 @@
|
||||
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
|
||||
import setuptools
|
||||
|
||||
# In python < 2.7.4, a lazy loading of package `pbr` will break
|
||||
# setuptools if some other modules registered functions in `atexit`.
|
||||
# solution from: http://bugs.python.org/issue15881#msg170215
|
||||
try:
|
||||
import multiprocessing # noqa
|
||||
except ImportError:
|
||||
pass
|
||||
|
||||
setuptools.setup(
|
||||
setup_requires=[],
|
||||
pbr=False)
|
3
contrib/inventory_builder/test-requirements.txt
Normal file
3
contrib/inventory_builder/test-requirements.txt
Normal file
@ -0,0 +1,3 @@
|
||||
hacking>=0.10.2
|
||||
pytest>=2.8.0
|
||||
mock>=1.3.0
|
212
contrib/inventory_builder/tests/test_inventory.py
Normal file
212
contrib/inventory_builder/tests/test_inventory.py
Normal file
@ -0,0 +1,212 @@
|
||||
# Copyright 2016 Mirantis, Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
import mock
|
||||
import unittest
|
||||
|
||||
from collections import OrderedDict
|
||||
import sys
|
||||
|
||||
path = "./contrib/inventory_builder/"
|
||||
if path not in sys.path:
|
||||
sys.path.append(path)
|
||||
|
||||
import inventory
|
||||
|
||||
|
||||
class TestInventory(unittest.TestCase):
|
||||
@mock.patch('inventory.sys')
|
||||
def setUp(self, sys_mock):
|
||||
sys_mock.exit = mock.Mock()
|
||||
super(TestInventory, self).setUp()
|
||||
self.data = ['10.90.3.2', '10.90.3.3', '10.90.3.4']
|
||||
self.inv = inventory.KargoInventory()
|
||||
|
||||
def test_get_ip_from_opts(self):
|
||||
optstring = "ansible_host=10.90.3.2 ip=10.90.3.2"
|
||||
expected = "10.90.3.2"
|
||||
result = self.inv.get_ip_from_opts(optstring)
|
||||
self.assertEqual(expected, result)
|
||||
|
||||
def test_get_ip_from_opts_invalid(self):
|
||||
optstring = "notanaddr=value something random!chars:D"
|
||||
self.assertRaisesRegexp(ValueError, "IP parameter not found",
|
||||
self.inv.get_ip_from_opts, optstring)
|
||||
|
||||
def test_ensure_required_groups(self):
|
||||
groups = ['group1', 'group2']
|
||||
self.inv.ensure_required_groups(groups)
|
||||
for group in groups:
|
||||
self.assertTrue(group in self.inv.config.sections())
|
||||
|
||||
def test_get_host_id(self):
|
||||
hostnames = ['node99', 'no99de01', '01node01', 'node1.domain',
|
||||
'node3.xyz123.aaa']
|
||||
expected = [99, 1, 1, 1, 3]
|
||||
for hostname, expected in zip(hostnames, expected):
|
||||
result = self.inv.get_host_id(hostname)
|
||||
self.assertEqual(expected, result)
|
||||
|
||||
def test_get_host_id_invalid(self):
|
||||
bad_hostnames = ['node', 'no99de', '01node', 'node.111111']
|
||||
for hostname in bad_hostnames:
|
||||
self.assertRaisesRegexp(ValueError, "Host name must end in an",
|
||||
self.inv.get_host_id, hostname)
|
||||
|
||||
def test_build_hostnames_add_one(self):
|
||||
changed_hosts = ['10.90.0.2']
|
||||
expected = OrderedDict([('node1',
|
||||
'ansible_host=10.90.0.2 ip=10.90.0.2')])
|
||||
result = self.inv.build_hostnames(changed_hosts)
|
||||
self.assertEqual(expected, result)
|
||||
|
||||
def test_build_hostnames_add_duplicate(self):
|
||||
changed_hosts = ['10.90.0.2']
|
||||
expected = OrderedDict([('node1',
|
||||
'ansible_host=10.90.0.2 ip=10.90.0.2')])
|
||||
self.inv.config['all'] = expected
|
||||
result = self.inv.build_hostnames(changed_hosts)
|
||||
self.assertEqual(expected, result)
|
||||
|
||||
def test_build_hostnames_add_two(self):
|
||||
changed_hosts = ['10.90.0.2', '10.90.0.3']
|
||||
expected = OrderedDict([
|
||||
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
|
||||
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
|
||||
self.inv.config['all'] = OrderedDict()
|
||||
result = self.inv.build_hostnames(changed_hosts)
|
||||
self.assertEqual(expected, result)
|
||||
|
||||
def test_build_hostnames_delete_first(self):
|
||||
changed_hosts = ['-10.90.0.2']
|
||||
existing_hosts = OrderedDict([
|
||||
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
|
||||
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
|
||||
self.inv.config['all'] = existing_hosts
|
||||
expected = OrderedDict([
|
||||
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
|
||||
result = self.inv.build_hostnames(changed_hosts)
|
||||
self.assertEqual(expected, result)
|
||||
|
||||
def test_exists_hostname_positive(self):
|
||||
hostname = 'node1'
|
||||
expected = True
|
||||
existing_hosts = OrderedDict([
|
||||
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
|
||||
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
|
||||
result = self.inv.exists_hostname(existing_hosts, hostname)
|
||||
self.assertEqual(expected, result)
|
||||
|
||||
def test_exists_hostname_negative(self):
|
||||
hostname = 'node99'
|
||||
expected = False
|
||||
existing_hosts = OrderedDict([
|
||||
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
|
||||
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
|
||||
result = self.inv.exists_hostname(existing_hosts, hostname)
|
||||
self.assertEqual(expected, result)
|
||||
|
||||
def test_exists_ip_positive(self):
|
||||
ip = '10.90.0.2'
|
||||
expected = True
|
||||
existing_hosts = OrderedDict([
|
||||
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
|
||||
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
|
||||
result = self.inv.exists_ip(existing_hosts, ip)
|
||||
self.assertEqual(expected, result)
|
||||
|
||||
def test_exists_ip_negative(self):
|
||||
ip = '10.90.0.200'
|
||||
expected = False
|
||||
existing_hosts = OrderedDict([
|
||||
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
|
||||
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
|
||||
result = self.inv.exists_ip(existing_hosts, ip)
|
||||
self.assertEqual(expected, result)
|
||||
|
||||
def test_delete_host_by_ip_positive(self):
|
||||
ip = '10.90.0.2'
|
||||
expected = OrderedDict([
|
||||
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
|
||||
existing_hosts = OrderedDict([
|
||||
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
|
||||
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
|
||||
self.inv.delete_host_by_ip(existing_hosts, ip)
|
||||
self.assertEqual(expected, existing_hosts)
|
||||
|
||||
def test_delete_host_by_ip_negative(self):
|
||||
ip = '10.90.0.200'
|
||||
existing_hosts = OrderedDict([
|
||||
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
|
||||
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3')])
|
||||
self.assertRaisesRegexp(ValueError, "Unable to find host",
|
||||
self.inv.delete_host_by_ip, existing_hosts, ip)
|
||||
|
||||
def test_purge_invalid_hosts(self):
|
||||
proper_hostnames = ['node1', 'node2']
|
||||
bad_host = 'doesnotbelong2'
|
||||
existing_hosts = OrderedDict([
|
||||
('node1', 'ansible_host=10.90.0.2 ip=10.90.0.2'),
|
||||
('node2', 'ansible_host=10.90.0.3 ip=10.90.0.3'),
|
||||
('doesnotbelong2', 'whateveropts=ilike')])
|
||||
self.inv.config['all'] = existing_hosts
|
||||
self.inv.purge_invalid_hosts(proper_hostnames)
|
||||
self.assertTrue(bad_host not in self.inv.config['all'].keys())
|
||||
|
||||
def test_add_host_to_group(self):
|
||||
group = 'etcd'
|
||||
host = 'node1'
|
||||
opts = 'ip=10.90.0.2'
|
||||
|
||||
self.inv.add_host_to_group(group, host, opts)
|
||||
self.assertEqual(self.inv.config[group].get(host), opts)
|
||||
|
||||
def test_set_kube_master(self):
|
||||
group = 'kube-master'
|
||||
host = 'node1'
|
||||
|
||||
self.inv.set_kube_master([host])
|
||||
self.assertTrue(host in self.inv.config[group])
|
||||
|
||||
def test_set_all(self):
|
||||
group = 'all'
|
||||
hosts = OrderedDict([
|
||||
('node1', 'opt1'),
|
||||
('node2', 'opt2')])
|
||||
|
||||
self.inv.set_all(hosts)
|
||||
for host, opt in hosts.items():
|
||||
self.assertEqual(self.inv.config[group].get(host), opt)
|
||||
|
||||
def test_set_k8s_cluster(self):
|
||||
group = 'k8s-cluster:children'
|
||||
expected_hosts = ['kube-node', 'kube-master']
|
||||
|
||||
self.inv.set_k8s_cluster()
|
||||
for host in expected_hosts:
|
||||
self.assertTrue(host in self.inv.config[group])
|
||||
|
||||
def test_set_kube_node(self):
|
||||
group = 'kube-node'
|
||||
host = 'node1'
|
||||
|
||||
self.inv.set_kube_node([host])
|
||||
self.assertTrue(host in self.inv.config[group])
|
||||
|
||||
def test_set_etcd(self):
|
||||
group = 'etcd'
|
||||
host = 'node1'
|
||||
|
||||
self.inv.set_etcd([host])
|
||||
self.assertTrue(host in self.inv.config[group])
|
28
contrib/inventory_builder/tox.ini
Normal file
28
contrib/inventory_builder/tox.ini
Normal file
@ -0,0 +1,28 @@
|
||||
[tox]
|
||||
minversion = 1.6
|
||||
skipsdist = True
|
||||
envlist = pep8, py27
|
||||
|
||||
[testenv]
|
||||
whitelist_externals = py.test
|
||||
usedevelop = True
|
||||
deps =
|
||||
-r{toxinidir}/requirements.txt
|
||||
-r{toxinidir}/test-requirements.txt
|
||||
setenv = VIRTUAL_ENV={envdir}
|
||||
passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
|
||||
commands = py.test -vv #{posargs:./tests}
|
||||
|
||||
[testenv:pep8]
|
||||
usedevelop = False
|
||||
whitelist_externals = bash
|
||||
commands =
|
||||
bash -c "find {toxinidir}/* -type f -name '*.py' -print0 | xargs -0 flake8"
|
||||
|
||||
[testenv:venv]
|
||||
commands = {posargs}
|
||||
|
||||
[flake8]
|
||||
show-source = true
|
||||
builtins = _
|
||||
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg
|
92
contrib/network-storage/glusterfs/README.md
Normal file
92
contrib/network-storage/glusterfs/README.md
Normal file
@ -0,0 +1,92 @@
|
||||
# Deploying a Kargo Kubernetes Cluster with GlusterFS
|
||||
|
||||
You can either deploy using Ansible on its own by supplying your own inventory file or by using Terraform to create the VMs and then providing a dynamic inventory to Ansible. The following two sections are self-contained, you don't need to go through one to use the other. So, if you want to provision with Terraform, you can skip the **Using an Ansible inventory** section, and if you want to provision with a pre-built ansible inventory, you can neglect the **Using Terraform and Ansible** section.
|
||||
|
||||
## Using an Ansible inventory
|
||||
|
||||
In the same directory as this README you should find a file named `inventory.example`, which contains an example setup. Please note that, in addition to the Kubernetes nodes/masters, we define a set of machines for GlusterFS and add them to the group `[gfs-cluster]`, which in turn is added to the larger `[network-storage]` group as a child group.
|
||||
|
||||
Change that file to reflect your local setup (adding or removing machines and setting the appropriate IP addresses), and save it to `inventory/k8s_gfs_inventory`. Make sure the settings in `inventory/group_vars/all.yml` make sense for your deployment. Then change to the kargo root folder and execute (assuming the machines all run Ubuntu):
|
||||
|
||||
```
|
||||
ansible-playbook -b --become-user=root -i inventory/k8s_gfs_inventory --user=ubuntu ./cluster.yml
|
||||
```
|
||||
|
||||
This will provision your Kubernetes cluster. Then, to provision and configure the GlusterFS cluster, from the same directory execute:
|
||||
|
||||
```
|
||||
ansible-playbook -b --become-user=root -i inventory/k8s_gfs_inventory --user=ubuntu ./contrib/network-storage/glusterfs/glusterfs.yml
|
||||
```
|
||||
|
||||
If your machines are not running Ubuntu, change `--user=ubuntu` to the correct user. Alternatively, if your Kubernetes machines run one OS and your GlusterFS machines another, you can instead specify the `ansible_ssh_user=<correct-user>` variable in the inventory file that you just created, for each machine/VM:
|
||||
|
||||
```
|
||||
k8s-master-1 ansible_ssh_host=192.168.0.147 ip=192.168.0.147 ansible_ssh_user=core
|
||||
k8s-master-node-1 ansible_ssh_host=192.168.0.148 ip=192.168.0.148 ansible_ssh_user=core
|
||||
k8s-master-node-2 ansible_ssh_host=192.168.0.146 ip=192.168.0.146 ansible_ssh_user=core
|
||||
```
|
||||
|
||||
## Using Terraform and Ansible
|
||||
|
||||
The first step is to fill in a `my-kargo-gluster-cluster.tfvars` file with the desired specification for your cluster. An example with all required variables would look like:
|
||||
|
||||
```
|
||||
cluster_name = "cluster1"
|
||||
number_of_k8s_masters = "1"
|
||||
number_of_k8s_masters_no_floating_ip = "2"
|
||||
number_of_k8s_nodes_no_floating_ip = "0"
|
||||
number_of_k8s_nodes = "0"
|
||||
public_key_path = "~/.ssh/my-desired-key.pub"
|
||||
image = "Ubuntu 16.04"
|
||||
ssh_user = "ubuntu"
|
||||
flavor_k8s_node = "node-flavor-id-in-your-openstack"
|
||||
flavor_k8s_master = "master-flavor-id-in-your-openstack"
|
||||
network_name = "k8s-network"
|
||||
floatingip_pool = "net_external"
|
||||
|
||||
# GlusterFS variables
|
||||
flavor_gfs_node = "gluster-flavor-id-in-your-openstack"
|
||||
image_gfs = "Ubuntu 16.04"
|
||||
number_of_gfs_nodes_no_floating_ip = "3"
|
||||
gfs_volume_size_in_gb = "50"
|
||||
ssh_user_gfs = "ubuntu"
|
||||
```
|
||||
|
||||
As explained in the general terraform/openstack guide, you need to source your OpenStack credentials file, add your ssh key to the ssh-agent, and set up environment variables for Terraform:
|
||||
|
||||
```
|
||||
$ source ~/.stackrc
|
||||
$ eval $(ssh-agent -s)
|
||||
$ ssh-add ~/.ssh/my-desired-key
|
||||
$ echo Setting up Terraform creds && \
|
||||
export TF_VAR_username=${OS_USERNAME} && \
|
||||
export TF_VAR_password=${OS_PASSWORD} && \
|
||||
export TF_VAR_tenant=${OS_TENANT_NAME} && \
|
||||
export TF_VAR_auth_url=${OS_AUTH_URL}
|
||||
```
|
||||
|
||||
Then, from the kargo directory (the root of the Git checkout), issue the following terraform command to create the VMs for the cluster:
|
||||
|
||||
```
|
||||
terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kargo-gluster-cluster.tfvars contrib/terraform/openstack
|
||||
```
|
||||
|
||||
This will create both your Kubernetes and Gluster VMs. Make sure that the ansible file `contrib/terraform/openstack/group_vars/all.yml` includes any ansible variable that you want to set (for instance, the type of machine used for bootstrapping).
|
||||
|
||||
Then, provision your Kubernetes (Kargo) cluster with the following ansible call:
|
||||
|
||||
```
|
||||
ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./cluster.yml
|
||||
```
|
||||
|
||||
Finally, provision the glusterfs nodes and add the Persistent Volume setup for GlusterFS in Kubernetes through the following ansible call:
|
||||
|
||||
```
|
||||
ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
|
||||
```
|
||||
|
||||
If you need to destroy the cluster, you can run:
|
||||
|
||||
```
|
||||
terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kargo-gluster-cluster.tfvars contrib/terraform/openstack
|
||||
```
|
17
contrib/network-storage/glusterfs/glusterfs.yml
Normal file
17
contrib/network-storage/glusterfs/glusterfs.yml
Normal file
@ -0,0 +1,17 @@
|
||||
---
|
||||
- hosts: all
|
||||
gather_facts: true
|
||||
|
||||
- hosts: gfs-cluster
|
||||
roles:
|
||||
- { role: glusterfs/server }
|
||||
|
||||
- hosts: k8s-cluster
|
||||
roles:
|
||||
- { role: glusterfs/client }
|
||||
|
||||
- hosts: kube-master[0]
|
||||
roles:
|
||||
- { role: kubernetes-pv/lib }
|
||||
- { role: kubernetes-pv }
|
||||
|
44
contrib/network-storage/glusterfs/inventory.example
Normal file
44
contrib/network-storage/glusterfs/inventory.example
Normal file
@ -0,0 +1,44 @@
|
||||
# ## Configure 'ip' variable to bind kubernetes services on a
|
||||
# ## different ip than the default iface
|
||||
# node1 ansible_ssh_host=95.54.0.12 # ip=10.3.0.1
|
||||
# node2 ansible_ssh_host=95.54.0.13 # ip=10.3.0.2
|
||||
# node3 ansible_ssh_host=95.54.0.14 # ip=10.3.0.3
|
||||
# node4 ansible_ssh_host=95.54.0.15 # ip=10.3.0.4
|
||||
# node5 ansible_ssh_host=95.54.0.16 # ip=10.3.0.5
|
||||
# node6 ansible_ssh_host=95.54.0.17 # ip=10.3.0.6
|
||||
#
|
||||
# ## GlusterFS nodes
|
||||
# ## Set disk_volume_device_1 to desired device for gluster brick, if different to /dev/vdb (default).
|
||||
# ## As in the previous case, you can set ip to give direct communication on internal IPs
|
||||
# gfs_node1 ansible_ssh_host=95.54.0.18 # disk_volume_device_1=/dev/vdc ip=10.3.0.7
|
||||
# gfs_node2 ansible_ssh_host=95.54.0.19 # disk_volume_device_1=/dev/vdc ip=10.3.0.8
|
||||
# gfs_node1 ansible_ssh_host=95.54.0.20 # disk_volume_device_1=/dev/vdc ip=10.3.0.9
|
||||
|
||||
# [kube-master]
|
||||
# node1
|
||||
# node2
|
||||
|
||||
# [etcd]
|
||||
# node1
|
||||
# node2
|
||||
# node3
|
||||
|
||||
# [kube-node]
|
||||
# node2
|
||||
# node3
|
||||
# node4
|
||||
# node5
|
||||
# node6
|
||||
|
||||
# [k8s-cluster:children]
|
||||
# kube-node
|
||||
# kube-master
|
||||
|
||||
# [gfs-cluster]
|
||||
# gfs_node1
|
||||
# gfs_node2
|
||||
# gfs_node3
|
||||
|
||||
# [network-storage:children]
|
||||
# gfs-cluster
|
||||
|
44
contrib/network-storage/glusterfs/roles/glusterfs/README.md
Normal file
44
contrib/network-storage/glusterfs/roles/glusterfs/README.md
Normal file
@ -0,0 +1,44 @@
|
||||
# Ansible Role: GlusterFS
|
||||
|
||||
[](https://travis-ci.org/geerlingguy/ansible-role-glusterfs)
|
||||
|
||||
Installs and configures GlusterFS on Linux.
|
||||
|
||||
## Requirements
|
||||
|
||||
For GlusterFS to connect between servers, TCP ports `24007`, `24008`, and `24009`/`49152`+ (that port, plus an additional incremented port for each additional server in the cluster; the latter if GlusterFS is version 3.4+), and TCP/UDP port `111` must be open. You can open these using whatever firewall you wish (this can easily be configured using the `geerlingguy.firewall` role).
|
||||
|
||||
This role performs basic installation and setup of Gluster, but it does not configure or mount bricks (volumes), since that step is easier to do in a series of plays in your own playbook. Ansible 1.9+ includes the [`gluster_volume`](https://docs.ansible.com/gluster_volume_module.html) module to ease the management of Gluster volumes.
|
||||
|
||||
## Role Variables
|
||||
|
||||
Available variables are listed below, along with default values (see `defaults/main.yml`):
|
||||
|
||||
glusterfs_default_release: ""
|
||||
|
||||
You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).
|
||||
|
||||
glusterfs_ppa_use: yes
|
||||
glusterfs_ppa_version: "3.5"
|
||||
|
||||
For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](http://www.gluster.org/community/documentation/index.php/Getting_started_install) for more info.
|
||||
|
||||
## Dependencies
|
||||
|
||||
None.
|
||||
|
||||
## Example Playbook
|
||||
|
||||
- hosts: server
|
||||
roles:
|
||||
- geerlingguy.glusterfs
|
||||
|
||||
For a real-world use example, read through [Simple GlusterFS Setup with Ansible](http://www.jeffgeerling.com/blog/simple-glusterfs-setup-ansible), a blog post by this role's author, which is included in Chapter 8 of [Ansible for DevOps](https://www.ansiblefordevops.com/).
|
||||
|
||||
## License
|
||||
|
||||
MIT / BSD
|
||||
|
||||
## Author Information
|
||||
|
||||
This role was created in 2015 by [Jeff Geerling](http://www.jeffgeerling.com/), author of [Ansible for DevOps](https://www.ansiblefordevops.com/).
|
@ -0,0 +1,11 @@
|
||||
---
|
||||
# For Ubuntu.
|
||||
glusterfs_default_release: ""
|
||||
glusterfs_ppa_use: yes
|
||||
glusterfs_ppa_version: "3.8"
|
||||
|
||||
# Gluster configuration.
|
||||
gluster_mount_dir: /mnt/gluster
|
||||
gluster_volume_node_mount_dir: /mnt/xfs-drive-gluster
|
||||
gluster_brick_dir: "{{ gluster_volume_node_mount_dir }}/brick"
|
||||
gluster_brick_name: gluster
|
@ -0,0 +1,30 @@
|
||||
---
|
||||
dependencies: []
|
||||
|
||||
galaxy_info:
|
||||
author: geerlingguy
|
||||
description: GlusterFS installation for Linux.
|
||||
company: "Midwestern Mac, LLC"
|
||||
license: "license (BSD, MIT)"
|
||||
min_ansible_version: 2.0
|
||||
platforms:
|
||||
- name: EL
|
||||
versions:
|
||||
- 6
|
||||
- 7
|
||||
- name: Ubuntu
|
||||
versions:
|
||||
- precise
|
||||
- trusty
|
||||
- xenial
|
||||
- name: Debian
|
||||
versions:
|
||||
- wheezy
|
||||
- jessie
|
||||
galaxy_tags:
|
||||
- system
|
||||
- networking
|
||||
- cloud
|
||||
- clustering
|
||||
- files
|
||||
- sharing
|
@ -0,0 +1,16 @@
|
||||
---
|
||||
# This is meant for Ubuntu and RedHat installations, where apparently the glusterfs-client is not used from inside
|
||||
# hyperkube and needs to be installed as part of the system.
|
||||
|
||||
# Setup/install tasks.
|
||||
- include: setup-RedHat.yml
|
||||
when: ansible_os_family == 'RedHat' and groups['gfs-cluster'] is defined
|
||||
|
||||
- include: setup-Debian.yml
|
||||
when: ansible_os_family == 'Debian' and groups['gfs-cluster'] is defined
|
||||
|
||||
- name: Ensure Gluster mount directories exist.
|
||||
file: "path={{ item }} state=directory mode=0775"
|
||||
with_items:
|
||||
- "{{ gluster_mount_dir }}"
|
||||
when: ansible_os_family in ["Debian","RedHat"] and groups['gfs-cluster'] is defined
|
@ -0,0 +1,24 @@
|
||||
---
|
||||
- name: Add PPA for GlusterFS.
|
||||
apt_repository:
|
||||
repo: 'ppa:gluster/glusterfs-{{ glusterfs_ppa_version }}'
|
||||
state: present
|
||||
update_cache: yes
|
||||
register: glusterfs_ppa_added
|
||||
when: glusterfs_ppa_use
|
||||
|
||||
- name: Ensure GlusterFS client will reinstall if the PPA was just added.
|
||||
apt:
|
||||
name: "{{ item }}"
|
||||
state: absent
|
||||
with_items:
|
||||
- glusterfs-client
|
||||
when: glusterfs_ppa_added.changed
|
||||
|
||||
- name: Ensure GlusterFS client is installed.
|
||||
apt:
|
||||
name: "{{ item }}"
|
||||
state: installed
|
||||
default_release: "{{ glusterfs_default_release }}"
|
||||
with_items:
|
||||
- glusterfs-client
|
@ -0,0 +1,10 @@
|
||||
---
|
||||
- name: Install Prerequisites
|
||||
yum: name={{ item }} state=present
|
||||
with_items:
|
||||
- "centos-release-gluster{{ glusterfs_default_release }}"
|
||||
|
||||
- name: Install Packages
|
||||
yum: name={{ item }} state=present
|
||||
with_items:
|
||||
- glusterfs-client
|
@ -0,0 +1,13 @@
|
||||
---
|
||||
# For Ubuntu.
|
||||
glusterfs_default_release: ""
|
||||
glusterfs_ppa_use: yes
|
||||
glusterfs_ppa_version: "3.8"
|
||||
|
||||
# Gluster configuration.
|
||||
gluster_mount_dir: /mnt/gluster
|
||||
gluster_volume_node_mount_dir: /mnt/xfs-drive-gluster
|
||||
gluster_brick_dir: "{{ gluster_volume_node_mount_dir }}/brick"
|
||||
gluster_brick_name: gluster
|
||||
# Default device to mount for xfs formatting, terraform overrides this by setting the variable in the inventory.
|
||||
disk_volume_device_1: /dev/vdb
|
@ -0,0 +1,30 @@
|
||||
---
|
||||
dependencies: []
|
||||
|
||||
galaxy_info:
|
||||
author: geerlingguy
|
||||
description: GlusterFS installation for Linux.
|
||||
company: "Midwestern Mac, LLC"
|
||||
license: "license (BSD, MIT)"
|
||||
min_ansible_version: 2.0
|
||||
platforms:
|
||||
- name: EL
|
||||
versions:
|
||||
- 6
|
||||
- 7
|
||||
- name: Ubuntu
|
||||
versions:
|
||||
- precise
|
||||
- trusty
|
||||
- xenial
|
||||
- name: Debian
|
||||
versions:
|
||||
- wheezy
|
||||
- jessie
|
||||
galaxy_tags:
|
||||
- system
|
||||
- networking
|
||||
- cloud
|
||||
- clustering
|
||||
- files
|
||||
- sharing
|
@ -0,0 +1,82 @@
|
||||
---
|
||||
# Include variables and define needed variables.
|
||||
- name: Include OS-specific variables.
|
||||
include_vars: "{{ ansible_os_family }}.yml"
|
||||
|
||||
# Install xfs package
|
||||
- name: install xfs Debian
|
||||
apt: name=xfsprogs state=present
|
||||
when: ansible_os_family == "Debian"
|
||||
|
||||
- name: install xfs RedHat
|
||||
yum: name=xfsprogs state=present
|
||||
when: ansible_os_family == "RedHat"
|
||||
|
||||
# Format external volumes in xfs
|
||||
- name: Format volumes in xfs
|
||||
filesystem: "fstype=xfs dev={{ disk_volume_device_1 }}"
|
||||
|
||||
# Mount external volumes
|
||||
- name: mounting new xfs filesystem
|
||||
mount: "name={{ gluster_volume_node_mount_dir }} src={{ disk_volume_device_1 }} fstype=xfs state=mounted"
|
||||
|
||||
# Setup/install tasks.
|
||||
- include: setup-RedHat.yml
|
||||
when: ansible_os_family == 'RedHat'
|
||||
|
||||
- include: setup-Debian.yml
|
||||
when: ansible_os_family == 'Debian'
|
||||
|
||||
- name: Ensure GlusterFS is started and enabled at boot.
|
||||
service: "name={{ glusterfs_daemon }} state=started enabled=yes"
|
||||
|
||||
- name: Ensure Gluster brick and mount directories exist.
|
||||
file: "path={{ item }} state=directory mode=0775"
|
||||
with_items:
|
||||
- "{{ gluster_brick_dir }}"
|
||||
- "{{ gluster_mount_dir }}"
|
||||
|
||||
- name: Configure Gluster volume.
|
||||
gluster_volume:
|
||||
state: present
|
||||
name: "{{ gluster_brick_name }}"
|
||||
brick: "{{ gluster_brick_dir }}"
|
||||
replicas: "{{ groups['gfs-cluster'] | length }}"
|
||||
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
|
||||
host: "{{ inventory_hostname }}"
|
||||
force: yes
|
||||
run_once: true
|
||||
|
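The `cluster:` expression in the task above is easy to misread; it simply joins the `ip` variable (or, failing that, the default IPv4 address) of every host in the `gfs-cluster` group with commas. A rough Python equivalent, using made-up inventory data, would be:

```
# Rough equivalent of the Jinja loop used for the gluster_volume "cluster" parameter.
def cluster_string(gfs_hosts, hostvars):
    """Join each gfs-cluster host's 'ip' var (falling back to its default IPv4) with commas."""
    ips = [hostvars[h].get("ip") or hostvars[h]["ansible_default_ipv4"]["address"]
           for h in gfs_hosts]
    return ",".join(ips)

# Example with illustrative inventory data:
hostvars = {
    "gfs_node1": {"ip": "10.3.0.7"},
    "gfs_node2": {"ansible_default_ipv4": {"address": "95.54.0.19"}},
}
print(cluster_string(["gfs_node1", "gfs_node2"], hostvars))  # -> "10.3.0.7,95.54.0.19"
```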
||||
- name: Mount glusterfs to retrieve disk size
|
||||
mount:
|
||||
name: "{{ gluster_mount_dir }}"
|
||||
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
|
||||
fstype: glusterfs
|
||||
opts: "defaults,_netdev"
|
||||
state: mounted
|
||||
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
|
||||
|
||||
- name: Get Gluster disk size
|
||||
setup: filter=ansible_mounts
|
||||
register: mounts_data
|
||||
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
|
||||
|
||||
- name: Set Gluster disk size to variable
|
||||
set_fact:
|
||||
gluster_disk_size_gb: "{{ (mounts_data.ansible_facts.ansible_mounts | selectattr('mount', 'equalto', gluster_mount_dir) | map(attribute='size_total') | first | int / (1024*1024*1024)) | int }}"
|
||||
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
|
||||
|
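The filter chain in that `set_fact` can be hard to parse; in plain Python it amounts to picking the mount entry for `gluster_mount_dir` and truncating its total size to whole GiB. The mount data shape below is a sketch modelled on Ansible's `ansible_mounts` fact, with an illustrative size value:

```
# Plain-Python reading of the gluster_disk_size_gb filter chain above.
def gluster_disk_size_gb(ansible_mounts, gluster_mount_dir="/mnt/gluster"):
    matching = [m for m in ansible_mounts if m["mount"] == gluster_mount_dir]  # selectattr(...)
    size_total = int(matching[0]["size_total"])                                # map(...) | first | int
    return int(size_total / (1024 * 1024 * 1024))                              # bytes -> GiB, truncated

# Example: a brick reporting exactly 50 GiB of usable space.
print(gluster_disk_size_gb([{"mount": "/mnt/gluster", "size_total": 53687091200}]))  # -> 50
```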
||||
- name: Create file on GlusterFS
|
||||
template:
|
||||
dest: "{{ gluster_mount_dir }}/.test-file.txt"
|
||||
src: test-file.txt
|
||||
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
|
||||
|
||||
- name: Unmount glusterfs
|
||||
mount:
|
||||
name: "{{ gluster_mount_dir }}"
|
||||
fstype: glusterfs
|
||||
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
|
||||
state: unmounted
|
||||
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
|
||||
|
@ -0,0 +1,26 @@
|
||||
---
|
||||
- name: Add PPA for GlusterFS.
|
||||
apt_repository:
|
||||
repo: 'ppa:gluster/glusterfs-{{ glusterfs_ppa_version }}'
|
||||
state: present
|
||||
update_cache: yes
|
||||
register: glusterfs_ppa_added
|
||||
when: glusterfs_ppa_use
|
||||
|
||||
- name: Ensure GlusterFS will reinstall if the PPA was just added.
|
||||
apt:
|
||||
name: "{{ item }}"
|
||||
state: absent
|
||||
with_items:
|
||||
- glusterfs-server
|
||||
- glusterfs-client
|
||||
when: glusterfs_ppa_added.changed
|
||||
|
||||
- name: Ensure GlusterFS is installed.
|
||||
apt:
|
||||
name: "{{ item }}"
|
||||
state: installed
|
||||
default_release: "{{ glusterfs_default_release }}"
|
||||
with_items:
|
||||
- glusterfs-server
|
||||
- glusterfs-client
|
@ -0,0 +1,11 @@
|
||||
---
|
||||
- name: Install Prerequisites
|
||||
yum: name={{ item }} state=present
|
||||
with_items:
|
||||
- "centos-release-gluster{{ glusterfs_default_release }}"
|
||||
|
||||
- name: Install Packages
|
||||
yum: name={{ item }} state=present
|
||||
with_items:
|
||||
- glusterfs-server
|
||||
- glusterfs-client
|
@ -0,0 +1 @@
|
||||
test file
|
@ -0,0 +1,5 @@
|
||||
---
|
||||
- hosts: all
|
||||
|
||||
roles:
|
||||
- role_under_test
|
@ -0,0 +1,2 @@
|
||||
---
|
||||
glusterfs_daemon: glusterfs-server
|
@ -0,0 +1,2 @@
|
||||
---
|
||||
glusterfs_daemon: glusterd
|
@ -0,0 +1,19 @@
|
||||
---
|
||||
- name: Kubernetes Apps | Lay Down k8s GlusterFS Endpoint and PV
|
||||
template: src={{item.file}} dest={{kube_config_dir}}/{{item.dest}}
|
||||
with_items:
|
||||
- { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
|
||||
- { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}
|
||||
register: gluster_pv
|
||||
when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined
|
||||
|
||||
- name: Kubernetes Apps | Set GlusterFS endpoint and PV
|
||||
kube:
|
||||
name: glusterfs
|
||||
namespace: default
|
||||
kubectl: "{{bin_dir}}/kubectl"
|
||||
resource: "{{item.item.type}}"
|
||||
filename: "{{kube_config_dir}}/{{item.item.dest}}"
|
||||
state: "{{item.changed | ternary('latest','present') }}"
|
||||
with_items: "{{ gluster_pv.results }}"
|
||||
when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined
|
@ -0,0 +1,24 @@
|
||||
{
|
||||
"kind": "Endpoints",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "glusterfs"
|
||||
},
|
||||
"subsets": [
|
||||
{% for host in groups['gfs-cluster'] %}
|
||||
{
|
||||
"addresses": [
|
||||
{
|
||||
"ip": "{{hostvars[host]['ip']|default(hostvars[host].ansible_default_ipv4['address'])}}"
|
||||
}
|
||||
],
|
||||
"ports": [
|
||||
{
|
||||
"port": 1
|
||||
}
|
||||
]
|
||||
}{%- if not loop.last %}, {% endif -%}
|
||||
{% endfor %}
|
||||
]
|
||||
}
|
||||
|
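Rendered, the template above produces a single `Endpoints` object with one address per GlusterFS host and a dummy port. A small Python sketch of the same structure, with placeholder IPs, for readers who find the Jinja loop hard to follow:

```
import json

# Same structure the Jinja template renders, one subset entry per gfs-cluster host (placeholder IPs).
def glusterfs_endpoints(ips, port=1):
    return {
        "kind": "Endpoints",
        "apiVersion": "v1",
        "metadata": {"name": "glusterfs"},
        "subsets": [{"addresses": [{"ip": ip}], "ports": [{"port": port}]} for ip in ips],
    }

print(json.dumps(glusterfs_endpoints(["10.3.0.7", "10.3.0.8", "10.3.0.9"]), indent=2))
```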
@ -0,0 +1,14 @@
|
||||
apiVersion: v1
|
||||
kind: PersistentVolume
|
||||
metadata:
|
||||
name: glusterfs
|
||||
spec:
|
||||
capacity:
|
||||
storage: "{{ hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb }}Gi"
|
||||
accessModes:
|
||||
- ReadWriteMany
|
||||
glusterfs:
|
||||
endpoints: glusterfs
|
||||
path: gluster
|
||||
readOnly: false
|
||||
persistentVolumeReclaimPolicy: Retain
|
1
contrib/network-storage/glusterfs/roles/kubernetes-pv/lib
Symbolic link
1
contrib/network-storage/glusterfs/roles/kubernetes-pv/lib
Symbolic link
@ -0,0 +1 @@
|
||||
../../../../../roles/kubernetes-apps/lib
|
@ -0,0 +1,2 @@
|
||||
dependencies:
|
||||
- {role: kubernetes-pv/ansible, tags: apps}
|
2
contrib/terraform/aws/.gitignore
vendored
Normal file
2
contrib/terraform/aws/.gitignore
vendored
Normal file
@ -0,0 +1,2 @@
|
||||
*.tfstate*
|
||||
inventory
|
261
contrib/terraform/aws/00-create-infrastructure.tf
Executable file
261
contrib/terraform/aws/00-create-infrastructure.tf
Executable file
@ -0,0 +1,261 @@
|
||||
variable "deploymentName" {
|
||||
type = "string"
|
||||
description = "The desired name of your deployment."
|
||||
}
|
||||
|
||||
variable "numControllers"{
|
||||
type = "string"
|
||||
description = "Desired # of controllers."
|
||||
}
|
||||
|
||||
variable "numEtcd" {
|
||||
type = "string"
|
||||
description = "Desired # of etcd nodes. Should be an odd number."
|
||||
}
|
||||
|
||||
variable "numNodes" {
|
||||
type = "string"
|
||||
description = "Desired # of nodes."
|
||||
}
|
||||
|
||||
variable "volSizeController" {
|
||||
type = "string"
|
||||
description = "Volume size for the controllers (GB)."
|
||||
}
|
||||
|
||||
variable "volSizeEtcd" {
|
||||
type = "string"
|
||||
description = "Volume size for etcd (GB)."
|
||||
}
|
||||
|
||||
variable "volSizeNodes" {
|
||||
type = "string"
|
||||
description = "Volume size for nodes (GB)."
|
||||
}
|
||||
|
||||
variable "subnet" {
|
||||
type = "string"
|
||||
description = "The subnet in which to put your cluster."
|
||||
}
|
||||
|
||||
variable "securityGroups" {
|
||||
type = "string"
|
||||
description = "The sec. groups in which to put your cluster."
|
||||
}
|
||||
|
||||
variable "ami"{
|
||||
type = "string"
|
||||
description = "AMI to use for all VMs in cluster."
|
||||
}
|
||||
|
||||
variable "SSHKey" {
|
||||
type = "string"
|
||||
description = "SSH key to use for VMs."
|
||||
}
|
||||
|
||||
variable "master_instance_type" {
|
||||
type = "string"
|
||||
description = "Size of VM to use for masters."
|
||||
}
|
||||
|
||||
variable "etcd_instance_type" {
|
||||
type = "string"
|
||||
description = "Size of VM to use for etcd."
|
||||
}
|
||||
|
||||
variable "node_instance_type" {
|
||||
type = "string"
|
||||
description = "Size of VM to use for nodes."
|
||||
}
|
||||
|
||||
variable "terminate_protect" {
|
||||
type = "string"
|
||||
default = "false"
|
||||
}
|
||||
|
||||
variable "awsRegion" {
|
||||
type = "string"
|
||||
}
|
||||
|
||||
provider "aws" {
|
||||
region = "${var.awsRegion}"
|
||||
}
|
||||
|
||||
variable "iam_prefix" {
|
||||
type = "string"
|
||||
description = "Prefix name for IAM profiles"
|
||||
}
|
||||
|
||||
resource "aws_iam_instance_profile" "kubernetes_master_profile" {
|
||||
name = "${var.iam_prefix}_kubernetes_master_profile"
|
||||
roles = ["${aws_iam_role.kubernetes_master_role.name}"]
|
||||
}
|
||||
|
||||
resource "aws_iam_role" "kubernetes_master_role" {
|
||||
name = "${var.iam_prefix}_kubernetes_master_role"
|
||||
assume_role_policy = <<EOF
|
||||
{
|
||||
"Version": "2012-10-17",
|
||||
"Statement": [
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Principal": { "Service": "ec2.amazonaws.com"},
|
||||
"Action": "sts:AssumeRole"
|
||||
}
|
||||
]
|
||||
}
|
||||
EOF
|
||||
}
|
||||
|
||||
resource "aws_iam_role_policy" "kubernetes_master_policy" {
|
||||
name = "${var.iam_prefix}_kubernetes_master_policy"
|
||||
role = "${aws_iam_role.kubernetes_master_role.id}"
|
||||
policy = <<EOF
|
||||
{
|
||||
"Version": "2012-10-17",
|
||||
"Statement": [
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": ["ec2:*"],
|
||||
"Resource": ["*"]
|
||||
},
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": ["elasticloadbalancing:*"],
|
||||
"Resource": ["*"]
|
||||
},
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": "s3:*",
|
||||
"Resource": "*"
|
||||
}
|
||||
]
|
||||
}
|
||||
EOF
|
||||
}
|
||||
|
||||
resource "aws_iam_instance_profile" "kubernetes_node_profile" {
|
||||
name = "${var.iam_prefix}_kubernetes_node_profile"
|
||||
roles = ["${aws_iam_role.kubernetes_node_role.name}"]
|
||||
}
|
||||
|
||||
resource "aws_iam_role" "kubernetes_node_role" {
|
||||
name = "${var.iam_prefix}_kubernetes_node_role"
|
||||
assume_role_policy = <<EOF
|
||||
{
|
||||
"Version": "2012-10-17",
|
||||
"Statement": [
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Principal": { "Service": "ec2.amazonaws.com"},
|
||||
"Action": "sts:AssumeRole"
|
||||
}
|
||||
]
|
||||
}
|
||||
EOF
|
||||
}
|
||||
|
||||
resource "aws_iam_role_policy" "kubernetes_node_policy" {
|
||||
name = "${var.iam_prefix}_kubernetes_node_policy"
|
||||
role = "${aws_iam_role.kubernetes_node_role.id}"
|
||||
policy = <<EOF
|
||||
{
|
||||
"Version": "2012-10-17",
|
||||
"Statement": [
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": "s3:*",
|
||||
"Resource": "*"
|
||||
},
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": "ec2:Describe*",
|
||||
"Resource": "*"
|
||||
},
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": "ec2:AttachVolume",
|
||||
"Resource": "*"
|
||||
},
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": "ec2:DetachVolume",
|
||||
"Resource": "*"
|
||||
}
|
||||
]
|
||||
}
|
||||
EOF
|
||||
}
|
||||
|
||||
resource "aws_instance" "master" {
|
||||
count = "${var.numControllers}"
|
||||
ami = "${var.ami}"
|
||||
instance_type = "${var.master_instance_type}"
|
||||
subnet_id = "${var.subnet}"
|
||||
vpc_security_group_ids = ["${var.securityGroups}"]
|
||||
key_name = "${var.SSHKey}"
|
||||
disable_api_termination = "${var.terminate_protect}"
|
||||
iam_instance_profile = "${aws_iam_instance_profile.kubernetes_master_profile.id}"
|
||||
root_block_device {
|
||||
volume_size = "${var.volSizeController}"
|
||||
}
|
||||
tags {
|
||||
Name = "${var.deploymentName}-master-${count.index + 1}"
|
||||
}
|
||||
}
|
||||
|
||||
resource "aws_instance" "etcd" {
|
||||
count = "${var.numEtcd}"
|
||||
ami = "${var.ami}"
|
||||
instance_type = "${var.etcd_instance_type}"
|
||||
subnet_id = "${var.subnet}"
|
||||
vpc_security_group_ids = ["${var.securityGroups}"]
|
||||
key_name = "${var.SSHKey}"
|
||||
disable_api_termination = "${var.terminate_protect}"
|
||||
root_block_device {
|
||||
volume_size = "${var.volSizeEtcd}"
|
||||
}
|
||||
tags {
|
||||
Name = "${var.deploymentName}-etcd-${count.index + 1}"
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
resource "aws_instance" "minion" {
|
||||
count = "${var.numNodes}"
|
||||
ami = "${var.ami}"
|
||||
instance_type = "${var.node_instance_type}"
|
||||
subnet_id = "${var.subnet}"
|
||||
vpc_security_group_ids = ["${var.securityGroups}"]
|
||||
key_name = "${var.SSHKey}"
|
||||
disable_api_termination = "${var.terminate_protect}"
|
||||
iam_instance_profile = "${aws_iam_instance_profile.kubernetes_node_profile.id}"
|
||||
root_block_device {
|
||||
volume_size = "${var.volSizeNodes}"
|
||||
}
|
||||
tags {
|
||||
Name = "${var.deploymentName}-minion-${count.index + 1}"
|
||||
}
|
||||
}
|
||||
|
||||
output "kubernetes_master_profile" {
|
||||
value = "${aws_iam_instance_profile.kubernetes_master_profile.id}"
|
||||
}
|
||||
|
||||
output "kubernetes_node_profile" {
|
||||
value = "${aws_iam_instance_profile.kubernetes_node_profile.id}"
|
||||
}
|
||||
|
||||
output "master-ip" {
|
||||
value = "${join(", ", aws_instance.master.*.private_ip)}"
|
||||
}
|
||||
|
||||
output "etcd-ip" {
|
||||
value = "${join(", ", aws_instance.etcd.*.private_ip)}"
|
||||
}
|
||||
|
||||
output "minion-ip" {
|
||||
value = "${join(", ", aws_instance.minion.*.private_ip)}"
|
||||
}
|
||||
|
||||
|
37
contrib/terraform/aws/01-create-inventory.tf
Executable file
37
contrib/terraform/aws/01-create-inventory.tf
Executable file
@ -0,0 +1,37 @@
|
||||
variable "SSHUser" {
|
||||
type = "string"
|
||||
description = "SSH User for VMs."
|
||||
}
|
||||
|
||||
resource "null_resource" "ansible-provision" {
|
||||
|
||||
depends_on = ["aws_instance.master","aws_instance.etcd","aws_instance.minion"]
|
||||
|
||||
##Create Master Inventory
|
||||
provisioner "local-exec" {
|
||||
command = "echo \"[kube-master]\" > inventory"
|
||||
}
|
||||
provisioner "local-exec" {
|
||||
command = "echo \"${join("\n",formatlist("%s ansible_ssh_user=%s", aws_instance.master.*.private_ip, var.SSHUser))}\" >> inventory"
|
||||
}
|
||||
|
||||
##Create ETCD Inventory
|
||||
provisioner "local-exec" {
|
||||
command = "echo \"\n[etcd]\" >> inventory"
|
||||
}
|
||||
provisioner "local-exec" {
|
||||
command = "echo \"${join("\n",formatlist("%s ansible_ssh_user=%s", aws_instance.etcd.*.private_ip, var.SSHUser))}\" >> inventory"
|
||||
}
|
||||
|
||||
##Create Nodes Inventory
|
||||
provisioner "local-exec" {
|
||||
command = "echo \"\n[kube-node]\" >> inventory"
|
||||
}
|
||||
provisioner "local-exec" {
|
||||
command = "echo \"${join("\n",formatlist("%s ansible_ssh_user=%s", aws_instance.minion.*.private_ip, var.SSHUser))}\" >> inventory"
|
||||
}
|
||||
|
||||
provisioner "local-exec" {
|
||||
command = "echo \"\n[k8s-cluster:children]\nkube-node\nkube-master\" >> inventory"
|
||||
}
|
||||
}
|
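The `local-exec` provisioners above just echo an INI-style Ansible inventory into a local `inventory` file. A short Python sketch of the file they produce, with placeholder IPs and user, to make the layout explicit:

```
# Shape of the inventory file written by the local-exec provisioners (placeholder IPs/user).
def render_inventory(masters, etcds, nodes, ssh_user="core"):
    def group(name, ips):
        lines = ["[{0}]".format(name)]
        lines += ["{0} ansible_ssh_user={1}".format(ip, ssh_user) for ip in ips]
        return "\n".join(lines)

    return "\n\n".join([
        group("kube-master", masters),
        group("etcd", etcds),
        group("kube-node", nodes),
        "[k8s-cluster:children]\nkube-node\nkube-master",
    ]) + "\n"

print(render_inventory(["10.0.0.10", "10.0.0.11"],
                       ["10.0.0.20", "10.0.0.21", "10.0.0.22"],
                       ["10.0.0.30", "10.0.0.31"]))
```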
28
contrib/terraform/aws/README.md
Normal file
28
contrib/terraform/aws/README.md
Normal file
@ -0,0 +1,28 @@
|
||||
## Kubernetes on AWS with Terraform
|
||||
|
||||
**Overview:**
|
||||
|
||||
- This will create nodes in a VPC inside of AWS
|
||||
|
||||
- A dynamic number of masters, etcd, and nodes can be created
|
||||
|
||||
- These scripts currently expect private IP connectivity with the nodes that are created. This means that you may need a tunnel to your VPC, or you may need to run these scripts from a VM inside the VPC. How to work around this will be investigated later.
|
||||
|
||||
**How to Use:**
|
||||
|
||||
- Export the variables for your Amazon credentials:
|
||||
|
||||
```
|
||||
export AWS_ACCESS_KEY_ID="xxx"
|
||||
export AWS_SECRET_ACCESS_KEY="yyy"
|
||||
```
|
||||
|
||||
- Update contrib/terraform/aws/terraform.tfvars with your data
|
||||
|
||||
- Run with `terraform apply`
|
||||
|
||||
- Once the infrastructure is created, you can run the kubespray playbooks and supply contrib/terraform/aws/inventory with the `-i` flag.
|
||||
|
||||
**Future Work:**
|
||||
|
||||
- Update the inventory creation file to be something a little more reasonable. It's just a local-exec from Terraform right now; using terraform.py or something similar may make sense in the future.
|
22
contrib/terraform/aws/terraform.tfvars
Normal file
22
contrib/terraform/aws/terraform.tfvars
Normal file
@ -0,0 +1,22 @@
|
||||
deploymentName="test-kube-deploy"
|
||||
|
||||
numControllers="2"
|
||||
numEtcd="3"
|
||||
numNodes="2"
|
||||
|
||||
volSizeController="20"
|
||||
volSizeEtcd="20"
|
||||
volSizeNodes="20"
|
||||
|
||||
awsRegion="us-west-2"
|
||||
subnet="subnet-xxxxx"
|
||||
ami="ami-32a85152"
|
||||
securityGroups="sg-xxxxx"
|
||||
SSHUser="core"
|
||||
SSHKey="my-key"
|
||||
|
||||
master_instance_type="m3.xlarge"
|
||||
etcd_instance_type="m3.xlarge"
|
||||
node_instance_type="m3.xlarge"
|
||||
|
||||
terminate_protect="false"
|
171
contrib/terraform/openstack/README.md
Normal file
171
contrib/terraform/openstack/README.md
Normal file
@ -0,0 +1,171 @@
|
||||
# Kubernetes on Openstack with Terraform
|
||||
|
||||
Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on
|
||||
Openstack.
|
||||
|
||||
## Status
|
||||
|
||||
This will install a Kubernetes cluster on an Openstack Cloud. It has been tested on an
|
||||
OpenStack Cloud provided by [BlueBox](https://www.blueboxcloud.com/) and on OpenStack at [EMBL-EBI's](http://www.ebi.ac.uk/) [EMBASSY Cloud](http://www.embassycloud.org/). This should work on most modern installs of OpenStack that support the basic
|
||||
services.
|
||||
|
||||
There are some assumptions made to try and ensure it will work on your openstack cluster.
|
||||
|
||||
* floating-ips are used for access, but you can have masters and nodes that don't use floating-ips if needed. You currently need at least 1 floating ip, which we suggest using on a master.
|
||||
* you already have a suitable OS image in glance
|
||||
* you already have both an internal network and a floating-ip pool created
|
||||
* you have security-groups enabled
|
||||
|
||||
|
||||
## Requirements
|
||||
|
||||
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
|
||||
|
||||
## Terraform
|
||||
|
||||
Terraform will be used to provision all of the OpenStack resources. It is also used to deploy and provision the software
|
||||
requirements.
|
||||
|
||||
### Prep
|
||||
|
||||
#### OpenStack
|
||||
|
||||
Ensure your OpenStack credentials are loaded in environment variables. This can be done by downloading a credentials .rc file from your OpenStack dashboard and sourcing it:
|
||||
|
||||
```
|
||||
$ source ~/.stackrc
|
||||
```
|
||||
|
||||
You will need two networks before installing: an internal network and
an external (floating IP pool) network. The internal network can be shared, as
we use security groups to provide network segregation. Due to the many
differences between OpenStack installs, the Terraform configuration does not
attempt to create these for you.
|
||||
|
||||
By default Terraform will expect that your networks are called `internal` and
|
||||
`external`. You can change this by altering the Terraform variables `network_name` and `floatingip_pool`. This can be done in a new variables file or through environment variables.
|
||||
|
||||
A full list of variables you can change can be found at [variables.tf](variables.tf).
|
||||
|
||||
All OpenStack resources will use the Terraform variable `cluster_name` (
|
||||
default `example`) in their name to make it easier to track. For example the
|
||||
first compute resource will be named `example-kubernetes-1`.
|
||||
|
||||
#### Terraform
|
||||
|
||||
Ensure your local ssh-agent is running and your ssh key has been added. This
|
||||
step is required by the terraform provisioner:
|
||||
|
||||
```
|
||||
$ eval $(ssh-agent -s)
|
||||
$ ssh-add ~/.ssh/id_rsa
|
||||
```
|
||||
|
||||
|
||||
Ensure that you have your Openstack credentials loaded into Terraform
|
||||
environment variables. Likely via a command similar to:
|
||||
|
||||
```
|
||||
$ echo Setting up Terraform creds && \
|
||||
export TF_VAR_username=${OS_USERNAME} && \
|
||||
export TF_VAR_password=${OS_PASSWORD} && \
|
||||
export TF_VAR_tenant=${OS_TENANT_NAME} && \
|
||||
export TF_VAR_auth_url=${OS_AUTH_URL}
|
||||
```
|
||||
|
||||
If you want to provision master or node VMs that don't use floating ips, write a `my-terraform-vars.tfvars` file, for example:
|
||||
|
||||
```
|
||||
number_of_k8s_masters = "1"
|
||||
number_of_k8s_masters_no_floating_ip = "2"
|
||||
number_of_k8s_nodes_no_floating_ip = "1"
|
||||
number_of_k8s_nodes = "0"
|
||||
```
|
||||
This will provision one VM as master using a floating ip, two additional masters using no floating ips (these will only have private ips inside your tenancy) and one VM as node, again without a floating ip.
|
||||
|
||||
Additionally, the terraform-based installation now supports provisioning a GlusterFS shared file system on a separate set of VMs, running either a Debian- or RedHat-based distribution. To enable this, add the following variables to your `my-terraform-vars.tfvars`:
|
||||
|
||||
```
|
||||
# The flavor depends on your openstack installation; you can list the available flavors with `nova flavor-list`
|
||||
flavor_gfs_node = "af659280-5b8a-42b5-8865-a703775911da"
|
||||
# This is the name of an image already available in your openstack installation.
|
||||
image_gfs = "Ubuntu 15.10"
|
||||
number_of_gfs_nodes_no_floating_ip = "3"
|
||||
# This is the size of the non-ephemeral volumes to be attached to store the GlusterFS bricks.
|
||||
gfs_volume_size_in_gb = "50"
|
||||
# The user needed for the image chosen for GlusterFS.
|
||||
ssh_user_gfs = "ubuntu"
|
||||
```
|
||||
|
||||
If these variables are provided, this will give rise to a new ansible group called `gfs-cluster`, for which we have added ansible roles to execute in the ansible provisioning step. If you are using Container Linux by CoreOS, these GlusterFS VMs need to be either Debian- or RedHat-based: Container Linux by CoreOS cannot serve GlusterFS, but it can connect to it through the binaries available in hyperkube v1.4.3_coreos.0 or higher.
|
||||
|
||||
|
||||
# Provision a Kubernetes Cluster on OpenStack
|
||||
|
||||
If not using a tfvars file for your setup, then execute:
|
||||
```
|
||||
terraform apply -state=contrib/terraform/openstack/terraform.tfstate contrib/terraform/openstack
|
||||
openstack_compute_secgroup_v2.k8s_master: Creating...
|
||||
description: "" => "example - Kubernetes Master"
|
||||
name: "" => "example-k8s-master"
|
||||
rule.#: "" => "<computed>"
|
||||
...
|
||||
...
|
||||
Apply complete! Resources: 9 added, 0 changed, 0 destroyed.
|
||||
|
||||
The state of your infrastructure has been saved to the path
|
||||
below. This state is required to modify and destroy your
|
||||
infrastructure, so keep it safe. To inspect the complete state
|
||||
use the `terraform show` command.
|
||||
|
||||
State path: contrib/terraform/openstack/terraform.tfstate
|
||||
```
|
||||
|
||||
Alternatively, if you wrote your Terraform variables in a file `my-terraform-vars.tfvars`, your command would look like:
|
||||
```
|
||||
terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-terraform-vars.tfvars contrib/terraform/openstack
|
||||
```
|
||||
|
||||
If you choose to add masters or nodes without floating IPs (only internal IPs in your OpenStack tenancy), this will also create a file `contrib/terraform/openstack/group_vars/k8s-cluster.yml` with an SSH command that lets Ansible reach those machines by tunneling through the first floating IP used. If you want to handle the SSH tunneling to these machines manually, delete or move that file. If you want to use it, just leave it there, as Ansible will pick it up automatically.
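As a sketch of what that generated file contains (assuming `ssh_user` is the default `ubuntu`, and `172.16.0.10` is a made-up first floating IP), it is a single Ansible variable:

```
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -W %h:%p -q ubuntu@172.16.0.10"'
```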
|
||||
|
||||
Make sure you can connect to the hosts:
|
||||
|
||||
```
|
||||
$ ansible -i contrib/terraform/openstack/hosts -m ping all
|
||||
example-k8s_node-1 | SUCCESS => {
|
||||
"changed": false,
|
||||
"ping": "pong"
|
||||
}
|
||||
example-etcd-1 | SUCCESS => {
|
||||
"changed": false,
|
||||
"ping": "pong"
|
||||
}
|
||||
example-k8s-master-1 | SUCCESS => {
|
||||
"changed": false,
|
||||
"ping": "pong"
|
||||
}
|
||||
```
|
||||
|
||||
If you are deploying a system that needs bootstrapping, like Container Linux by CoreOS, the hosts might report a state of `FAILED` because Container Linux by CoreOS does not ship with Python. As long as the state is not `UNREACHABLE`, this is fine.
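If that is your case, you would typically set the bootstrap option before running the playbook; a minimal sketch of the relevant `group_vars/all.yml` lines (based on the defaults shown further below, with `coreos` as the assumed choice):

```
# Valid bootstrap options (required): ubuntu, coreos, none
bootstrap_os: coreos

# Directory where the python binary is installed on Container Linux by CoreOS
ansible_python_interpreter: "/opt/bin/python"
```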
|
||||
|
||||
If it fails, try to connect manually via SSH; it could be something as simple as a stale host key.
|
||||
|
||||
Deploy Kubernetes:
|
||||
|
||||
```
|
||||
$ ansible-playbook --become -i contrib/terraform/openstack/hosts cluster.yml
|
||||
```
|
||||
|
||||
# Clean up
|
||||
|
||||
```
|
||||
$ terraform destroy
|
||||
Do you really want to destroy?
|
||||
Terraform will delete all your managed infrastructure.
|
||||
There is no undo. Only 'yes' will be accepted to confirm.
|
||||
|
||||
Enter a value: yes
|
||||
...
|
||||
...
|
||||
Apply complete! Resources: 0 added, 0 changed, 12 destroyed.
|
||||
```
|
1
contrib/terraform/openstack/ansible_bastion_template.txt
Normal file
@ -0,0 +1 @@
|
||||
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -W %h:%p -q USER@BASTION_ADDRESS"'
|
165
contrib/terraform/openstack/group_vars/all.yml
Normal file
@ -0,0 +1,165 @@
|
||||
# Valid bootstrap options (required): ubuntu, coreos, none
|
||||
bootstrap_os: none
|
||||
|
||||
# Directory where the binaries will be installed
|
||||
bin_dir: /usr/local/bin
|
||||
|
||||
# Where the binaries will be downloaded.
|
||||
# Note: ensure that you've enough disk space (about 1G)
|
||||
local_release_dir: "/tmp/releases"
|
||||
# Random shifts for retrying failed ops like pushing/downloading
|
||||
retry_stagger: 5
|
||||
|
||||
# Uncomment this line for Container Linux by CoreOS only.
|
||||
# Directory where python binary is installed
|
||||
# ansible_python_interpreter: "/opt/bin/python"
|
||||
|
||||
# This is the group that the cert creation scripts chgrp the
|
||||
# cert files to. Not really changeable...
|
||||
kube_cert_group: kube-cert
|
||||
|
||||
# Cluster Loglevel configuration
|
||||
kube_log_level: 2
|
||||
|
||||
# Users to create for basic auth in Kubernetes API via HTTP
|
||||
kube_api_pwd: "changeme"
|
||||
kube_users:
|
||||
kube:
|
||||
pass: "{{kube_api_pwd}}"
|
||||
role: admin
|
||||
root:
|
||||
pass: "changeme"
|
||||
role: admin
|
||||
|
||||
# Kubernetes cluster name, also will be used as DNS domain
|
||||
cluster_name: cluster.local
|
||||
# Subdomains of DNS domain to be resolved via /etc/resolv.conf
|
||||
ndots: 5
|
||||
# Deploy the netchecker app to verify DNS resolution as an HTTP service
|
||||
deploy_netchecker: false
|
||||
|
||||
# For some environments, each node has a publicly accessible
|
||||
# address and an address it should bind services to. These are
|
||||
# really inventory level variables, but described here for consistency.
|
||||
#
|
||||
# When advertising access, the access_ip will be used, but will defer to
|
||||
# ip and then the default ansible ip when unspecified.
|
||||
#
|
||||
# When binding to restrict access, the ip variable will be used, but will
|
||||
# defer to the default ansible ip when unspecified.
|
||||
#
|
||||
# The ip variable is used for specific address binding, e.g. listen address
|
||||
# for etcd. This is used to help with environments like Vagrant or multi-nic
|
||||
# systems where one address should be preferred over another.
|
||||
# ip: 10.2.2.2
|
||||
#
|
||||
# The access_ip variable is used to define how other nodes should access
|
||||
# the node. This is used in flannel to allow other flannel nodes to see
|
||||
# this node for example. The access_ip is really useful in AWS and Google
|
||||
# environments where the nodes are accessed remotely by the "public" ip,
|
||||
# but don't know about that address themselves.
|
||||
# access_ip: 1.1.1.1
|
||||
|
||||
# Etcd access modes:
|
||||
# Enable multiaccess to configure clients to access all of the etcd members directly
|
||||
# as the "http://hostX:port, http://hostY:port, ..." and ignore the proxy loadbalancers.
|
||||
# This may be the case if clients support and loadbalance multiple etcd servers natively.
|
||||
etcd_multiaccess: true
|
||||
|
||||
# Assume no internal loadbalancers for apiservers exist and the apiservers listen on
|
||||
# kube_apiserver_port (default 443)
|
||||
loadbalancer_apiserver_localhost: true
|
||||
|
||||
# Choose network plugin (calico, weave or flannel)
|
||||
kube_network_plugin: flannel
|
||||
|
||||
# Kubernetes internal network for services, unused block of space.
|
||||
kube_service_addresses: 10.233.0.0/18
|
||||
|
||||
# internal network. When used, it will assign IP
|
||||
# addresses from this range to individual pods.
|
||||
# This network must be unused in your network infrastructure!
|
||||
kube_pods_subnet: 10.233.64.0/18
|
||||
|
||||
# internal network total size (optional). This is the prefix of the
|
||||
# entire network. Must be unused in your environment.
|
||||
# kube_network_prefix: 18
|
||||
|
||||
# internal network node size allocation (optional). This is the size allocated
|
||||
# to each node on your network. With these defaults you should have
|
||||
# room for 4096 nodes with 254 pods per node.
|
||||
kube_network_node_prefix: 24
|
||||
|
||||
# With calico it is possible to distribute routes with border routers of the datacenter.
|
||||
peer_with_router: false
|
||||
# Warning : enabling router peering will disable calico's default behavior ('node mesh').
|
||||
# The subnets of each node will be distributed by the datacenter router
|
||||
|
||||
# The port the API Server will be listening on.
|
||||
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
|
||||
kube_apiserver_port: 443 # (https)
|
||||
kube_apiserver_insecure_port: 8080 # (http)
|
||||
|
||||
# Internal DNS configuration.
|
||||
# Kubernetes can create and maintain its own DNS server to resolve service names
|
||||
# into appropriate IP addresses. It's highly advisable to run such a DNS server,
|
||||
# as it greatly simplifies configuration of your applications - you can use
|
||||
# service names instead of magic environment variables.
|
||||
|
||||
# Can be dnsmasq_kubedns, kubedns or none
|
||||
dns_mode: dnsmasq_kubedns
|
||||
|
||||
# Can be docker_dns, host_resolvconf or none
|
||||
resolvconf_mode: docker_dns
|
||||
|
||||
## Upstream dns servers used by dnsmasq
|
||||
#upstream_dns_servers:
|
||||
# - 8.8.8.8
|
||||
# - 8.8.4.4
|
||||
|
||||
dns_domain: "{{ cluster_name }}"
|
||||
|
||||
# Ip address of the kubernetes skydns service
|
||||
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
|
||||
dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
|
||||
|
||||
# There are some changes specific to the cloud providers
|
||||
# for instance we need to encapsulate packets with some network plugins
|
||||
# If set the possible values are either 'gce', 'aws', 'azure' or 'openstack'
|
||||
# When openstack is used make sure to source in the openstack credentials
|
||||
# like you would do when using nova-client before starting the playbook.
|
||||
# When azure is used, you need to also set the following variables.
|
||||
# cloud_provider:
|
||||
|
||||
# see docs/azure.md for details on how to get these values
|
||||
#azure_tenant_id:
|
||||
#azure_subscription_id:
|
||||
#azure_aad_client_id:
|
||||
#azure_aad_client_secret:
|
||||
#azure_resource_group:
|
||||
#azure_location:
|
||||
#azure_subnet_name:
|
||||
#azure_security_group_name:
|
||||
#azure_vnet_name:
|
||||
|
||||
|
||||
## Set these proxy values in order to update docker daemon to use proxies
|
||||
# http_proxy: ""
|
||||
# https_proxy: ""
|
||||
# no_proxy: ""
|
||||
|
||||
# Path used to store Docker data
|
||||
docker_daemon_graph: "/var/lib/docker"
|
||||
|
||||
## A string of extra options to pass to the docker daemon.
|
||||
## This string should be exactly as you wish it to appear.
|
||||
## An obvious use case is allowing insecure-registry access
|
||||
## to self hosted registries like so:
|
||||
docker_options: "--insecure-registry={{ kube_service_addresses }} --graph={{ docker_daemon_graph }}"
|
||||
|
||||
# K8s image pull policy (imagePullPolicy)
|
||||
k8s_image_pull_policy: IfNotPresent
|
||||
|
||||
# default packages to install within the cluster
|
||||
kpm_packages: []
|
||||
# - name: kube-system/grafana
|
1
contrib/terraform/openstack/hosts
Symbolic link
@ -0,0 +1 @@
|
||||
../terraform.py
|
167
contrib/terraform/openstack/kubespray.tf
Normal file
@ -0,0 +1,167 @@
|
||||
resource "openstack_networking_floatingip_v2" "k8s_master" {
|
||||
count = "${var.number_of_k8s_masters}"
|
||||
pool = "${var.floatingip_pool}"
|
||||
}
|
||||
|
||||
resource "openstack_networking_floatingip_v2" "k8s_node" {
|
||||
count = "${var.number_of_k8s_nodes}"
|
||||
pool = "${var.floatingip_pool}"
|
||||
}
|
||||
|
||||
|
||||
resource "openstack_compute_keypair_v2" "k8s" {
|
||||
name = "kubernetes-${var.cluster_name}"
|
||||
public_key = "${file(var.public_key_path)}"
|
||||
}
|
||||
|
||||
resource "openstack_compute_secgroup_v2" "k8s_master" {
|
||||
name = "${var.cluster_name}-k8s-master"
|
||||
description = "${var.cluster_name} - Kubernetes Master"
|
||||
}
|
||||
|
||||
resource "openstack_compute_secgroup_v2" "k8s" {
|
||||
name = "${var.cluster_name}-k8s"
|
||||
description = "${var.cluster_name} - Kubernetes"
|
||||
rule {
|
||||
ip_protocol = "tcp"
|
||||
from_port = "22"
|
||||
to_port = "22"
|
||||
cidr = "0.0.0.0/0"
|
||||
}
|
||||
rule {
|
||||
ip_protocol = "icmp"
|
||||
from_port = "-1"
|
||||
to_port = "-1"
|
||||
cidr = "0.0.0.0/0"
|
||||
}
|
||||
rule {
|
||||
ip_protocol = "tcp"
|
||||
from_port = "1"
|
||||
to_port = "65535"
|
||||
self = true
|
||||
}
|
||||
rule {
|
||||
ip_protocol = "udp"
|
||||
from_port = "1"
|
||||
to_port = "65535"
|
||||
self = true
|
||||
}
|
||||
rule {
|
||||
ip_protocol = "icmp"
|
||||
from_port = "-1"
|
||||
to_port = "-1"
|
||||
self = true
|
||||
}
|
||||
}
|
||||
|
||||
resource "openstack_compute_instance_v2" "k8s_master" {
|
||||
name = "${var.cluster_name}-k8s-master-${count.index+1}"
|
||||
count = "${var.number_of_k8s_masters}"
|
||||
image_name = "${var.image}"
|
||||
flavor_id = "${var.flavor_k8s_master}"
|
||||
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
|
||||
network {
|
||||
name = "${var.network_name}"
|
||||
}
|
||||
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
|
||||
"${openstack_compute_secgroup_v2.k8s.name}" ]
|
||||
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_master.*.address, count.index)}"
|
||||
metadata = {
|
||||
ssh_user = "${var.ssh_user}"
|
||||
kubespray_groups = "etcd,kube-master,kube-node,k8s-cluster"
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
|
||||
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
|
||||
name = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
|
||||
count = "${var.number_of_k8s_masters_no_floating_ip}"
|
||||
image_name = "${var.image}"
|
||||
flavor_id = "${var.flavor_k8s_master}"
|
||||
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
|
||||
network {
|
||||
name = "${var.network_name}"
|
||||
}
|
||||
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}",
|
||||
"${openstack_compute_secgroup_v2.k8s.name}" ]
|
||||
metadata = {
|
||||
ssh_user = "${var.ssh_user}"
|
||||
kubespray_groups = "etcd,kube-master,kube-node,k8s-cluster"
|
||||
}
|
||||
provisioner "local-exec" {
|
||||
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/k8s-cluster.yml"
|
||||
}
|
||||
}
|
||||
|
||||
resource "openstack_compute_instance_v2" "k8s_node" {
|
||||
name = "${var.cluster_name}-k8s-node-${count.index+1}"
|
||||
count = "${var.number_of_k8s_nodes}"
|
||||
image_name = "${var.image}"
|
||||
flavor_id = "${var.flavor_k8s_node}"
|
||||
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
|
||||
network {
|
||||
name = "${var.network_name}"
|
||||
}
|
||||
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}" ]
|
||||
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_node.*.address, count.index)}"
|
||||
metadata = {
|
||||
ssh_user = "${var.ssh_user}"
|
||||
kubespray_groups = "kube-node,k8s-cluster"
|
||||
}
|
||||
}
|
||||
|
||||
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
|
||||
name = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
|
||||
count = "${var.number_of_k8s_nodes_no_floating_ip}"
|
||||
image_name = "${var.image}"
|
||||
flavor_id = "${var.flavor_k8s_node}"
|
||||
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
|
||||
network {
|
||||
name = "${var.network_name}"
|
||||
}
|
||||
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}" ]
|
||||
metadata = {
|
||||
ssh_user = "${var.ssh_user}"
|
||||
kubespray_groups = "kube-node,k8s-cluster"
|
||||
}
|
||||
provisioner "local-exec" {
|
||||
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/k8s-cluster.yml"
|
||||
}
|
||||
}
|
||||
|
||||
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
|
||||
name = "${var.cluster_name}-gfs-nephe-vol-${count.index+1}"
|
||||
count = "${var.number_of_gfs_nodes_no_floating_ip}"
|
||||
description = "Non-ephemeral volume for GlusterFS"
|
||||
size = "${var.gfs_volume_size_in_gb}"
|
||||
}
|
||||
|
||||
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
|
||||
name = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
|
||||
count = "${var.number_of_gfs_nodes_no_floating_ip}"
|
||||
image_name = "${var.image_gfs}"
|
||||
flavor_id = "${var.flavor_gfs_node}"
|
||||
key_pair = "${openstack_compute_keypair_v2.k8s.name}"
|
||||
network {
|
||||
name = "${var.network_name}"
|
||||
}
|
||||
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}" ]
|
||||
metadata = {
|
||||
ssh_user = "${var.ssh_user_gfs}"
|
||||
kubespray_groups = "gfs-cluster,network-storage"
|
||||
}
|
||||
volume {
|
||||
volume_id = "${element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)}"
|
||||
}
|
||||
provisioner "local-exec" {
|
||||
command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/gfs-cluster.yml"
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
|
||||
|
||||
#output "msg" {
|
||||
# value = "Your hosts are ready to go!\nYour ssh hosts are: ${join(", ", openstack_networking_floatingip_v2.k8s_master.*.address )}"
|
||||
#}
|
90
contrib/terraform/openstack/variables.tf
Normal file
@ -0,0 +1,90 @@
|
||||
variable "cluster_name" {
|
||||
default = "example"
|
||||
}
|
||||
|
||||
variable "number_of_k8s_masters" {
|
||||
default = 2
|
||||
}
|
||||
|
||||
variable "number_of_k8s_masters_no_floating_ip" {
|
||||
default = 2
|
||||
}
|
||||
|
||||
variable "number_of_k8s_nodes" {
|
||||
default = 1
|
||||
}
|
||||
|
||||
variable "number_of_k8s_nodes_no_floating_ip" {
|
||||
default = 1
|
||||
}
|
||||
|
||||
variable "number_of_gfs_nodes_no_floating_ip" {
|
||||
default = 0
|
||||
}
|
||||
|
||||
variable "gfs_volume_size_in_gb" {
|
||||
default = 75
|
||||
}
|
||||
|
||||
variable "public_key_path" {
|
||||
description = "The path of the ssh pub key"
|
||||
default = "~/.ssh/id_rsa.pub"
|
||||
}
|
||||
|
||||
variable "image" {
|
||||
description = "the image to use"
|
||||
default = "ubuntu-14.04"
|
||||
}
|
||||
|
||||
variable "image_gfs" {
|
||||
description = "Glance image to use for GlusterFS"
|
||||
default = "ubuntu-16.04"
|
||||
}
|
||||
|
||||
variable "ssh_user" {
|
||||
description = "used to fill out tags for ansible inventory"
|
||||
default = "ubuntu"
|
||||
}
|
||||
|
||||
variable "ssh_user_gfs" {
|
||||
description = "used to fill out tags for ansible inventory"
|
||||
default = "ubuntu"
|
||||
}
|
||||
|
||||
variable "flavor_k8s_master" {
|
||||
default = 3
|
||||
}
|
||||
|
||||
variable "flavor_k8s_node" {
|
||||
default = 3
|
||||
}
|
||||
|
||||
variable "flavor_gfs_node" {
|
||||
default = 3
|
||||
}
|
||||
|
||||
variable "network_name" {
|
||||
description = "name of the internal network to use"
|
||||
default = "internal"
|
||||
}
|
||||
|
||||
variable "floatingip_pool" {
|
||||
description = "name of the floating ip pool to use"
|
||||
default = "external"
|
||||
}
|
||||
|
||||
variable "username" {
|
||||
description = "Your openstack username"
|
||||
}
|
||||
|
||||
variable "password" {
|
||||
description = "Your openstack password"
|
||||
}
|
||||
|
||||
variable "tenant" {
|
||||
description = "Your openstack tenant/project"
|
||||
}
|
||||
|
||||
variable "auth_url" {
|
||||
description = "Your openstack auth URL"
|
||||
}
|
746
contrib/terraform/terraform.py
Executable file
@ -0,0 +1,746 @@
|
||||
#!/usr/bin/env python
|
||||
#
|
||||
# Copyright 2015 Cisco Systems, Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
#
|
||||
# original: https://github.com/CiscoCloud/terraform.py
|
||||
|
||||
"""\
|
||||
Dynamic inventory for Terraform - finds all `.tfstate` files below the working
|
||||
directory and generates an inventory based on them.
|
||||
"""
|
||||
from __future__ import unicode_literals, print_function
|
||||
import argparse
|
||||
from collections import defaultdict
|
||||
from functools import wraps
|
||||
import json
|
||||
import os
|
||||
import re
|
||||
|
||||
VERSION = '0.3.0pre'
|
||||
|
||||
|
||||
def tfstates(root=None):
|
||||
root = root or os.getcwd()
|
||||
for dirpath, _, filenames in os.walk(root):
|
||||
for name in filenames:
|
||||
if os.path.splitext(name)[-1] == '.tfstate':
|
||||
yield os.path.join(dirpath, name)
|
||||
|
||||
|
||||
def iterresources(filenames):
|
||||
for filename in filenames:
|
||||
with open(filename, 'r') as json_file:
|
||||
state = json.load(json_file)
|
||||
for module in state['modules']:
|
||||
name = module['path'][-1]
|
||||
for key, resource in module['resources'].items():
|
||||
yield name, key, resource
|
||||
|
||||
## READ RESOURCES
|
||||
PARSERS = {}
|
||||
|
||||
|
||||
def _clean_dc(dcname):
|
||||
# Consul DCs are strictly alphanumeric with underscores and hyphens -
|
||||
# ensure that the consul_dc attribute meets these requirements.
|
||||
return re.sub('[^\w_\-]', '-', dcname)
|
||||
|
||||
|
||||
def iterhosts(resources):
|
||||
'''yield host tuples of (name, attributes, groups)'''
|
||||
for module_name, key, resource in resources:
|
||||
resource_type, name = key.split('.', 1)
|
||||
try:
|
||||
parser = PARSERS[resource_type]
|
||||
except KeyError:
|
||||
continue
|
||||
|
||||
yield parser(resource, module_name)
|
||||
|
||||
|
||||
def parses(prefix):
|
||||
def inner(func):
|
||||
PARSERS[prefix] = func
|
||||
return func
|
||||
|
||||
return inner
|
||||
|
||||
|
||||
def calculate_mantl_vars(func):
|
||||
"""calculate Mantl vars"""
|
||||
|
||||
@wraps(func)
|
||||
def inner(*args, **kwargs):
|
||||
name, attrs, groups = func(*args, **kwargs)
|
||||
|
||||
# attrs
|
||||
if attrs.get('role', '') == 'control':
|
||||
attrs['consul_is_server'] = True
|
||||
else:
|
||||
attrs['consul_is_server'] = False
|
||||
|
||||
# groups
|
||||
if attrs.get('publicly_routable', False):
|
||||
groups.append('publicly_routable')
|
||||
|
||||
return name, attrs, groups
|
||||
|
||||
return inner
|
||||
|
||||
|
||||
def _parse_prefix(source, prefix, sep='.'):
|
||||
for compkey, value in source.items():
|
||||
try:
|
||||
curprefix, rest = compkey.split(sep, 1)
|
||||
except ValueError:
|
||||
continue
|
||||
|
||||
if curprefix != prefix or rest == '#':
|
||||
continue
|
||||
|
||||
yield rest, value
|
||||
|
||||
|
||||
def parse_attr_list(source, prefix, sep='.'):
|
||||
attrs = defaultdict(dict)
|
||||
for compkey, value in _parse_prefix(source, prefix, sep):
|
||||
idx, key = compkey.split(sep, 1)
|
||||
attrs[idx][key] = value
|
||||
|
||||
return attrs.values()
|
||||
|
||||
|
||||
def parse_dict(source, prefix, sep='.'):
|
||||
return dict(_parse_prefix(source, prefix, sep))
|
||||
|
||||
|
||||
def parse_list(source, prefix, sep='.'):
|
||||
return [value for _, value in _parse_prefix(source, prefix, sep)]
|
||||
|
||||
|
||||
def parse_bool(string_form):
|
||||
token = string_form.lower()[0]
|
||||
|
||||
if token == 't':
|
||||
return True
|
||||
elif token == 'f':
|
||||
return False
|
||||
else:
|
||||
raise ValueError('could not convert %r to a bool' % string_form)
|
||||
|
||||
|
||||
@parses('triton_machine')
|
||||
@calculate_mantl_vars
|
||||
def triton_machine(resource, module_name):
|
||||
raw_attrs = resource['primary']['attributes']
|
||||
name = raw_attrs.get('name')
|
||||
groups = []
|
||||
|
||||
attrs = {
|
||||
'id': raw_attrs['id'],
|
||||
'dataset': raw_attrs['dataset'],
|
||||
'disk': raw_attrs['disk'],
|
||||
'firewall_enabled': parse_bool(raw_attrs['firewall_enabled']),
|
||||
'image': raw_attrs['image'],
|
||||
'ips': parse_list(raw_attrs, 'ips'),
|
||||
'memory': raw_attrs['memory'],
|
||||
'name': raw_attrs['name'],
|
||||
'networks': parse_list(raw_attrs, 'networks'),
|
||||
'package': raw_attrs['package'],
|
||||
'primary_ip': raw_attrs['primaryip'],
|
||||
'root_authorized_keys': raw_attrs['root_authorized_keys'],
|
||||
'state': raw_attrs['state'],
|
||||
'tags': parse_dict(raw_attrs, 'tags'),
|
||||
'type': raw_attrs['type'],
|
||||
'user_data': raw_attrs['user_data'],
|
||||
'user_script': raw_attrs['user_script'],
|
||||
|
||||
# ansible
|
||||
'ansible_ssh_host': raw_attrs['primaryip'],
|
||||
'ansible_ssh_port': 22,
|
||||
'ansible_ssh_user': 'root', # it's "root" on Triton by default
|
||||
|
||||
# generic
|
||||
'public_ipv4': raw_attrs['primaryip'],
|
||||
'provider': 'triton',
|
||||
}
|
||||
|
||||
# private IPv4
|
||||
for ip in attrs['ips']:
|
||||
if ip.startswith('10') or ip.startswith('192.168'): # private IPs
|
||||
attrs['private_ipv4'] = ip
|
||||
break
|
||||
|
||||
if 'private_ipv4' not in attrs:
|
||||
attrs['private_ipv4'] = attrs['public_ipv4']
|
||||
|
||||
# attrs specific to Mantl
|
||||
attrs.update({
|
||||
'consul_dc': _clean_dc(attrs['tags'].get('dc', 'none')),
|
||||
'role': attrs['tags'].get('role', 'none'),
|
||||
'ansible_python_interpreter': attrs['tags'].get('python_bin', 'python')
|
||||
})
|
||||
|
||||
# add groups based on attrs
|
||||
groups.append('triton_image=' + attrs['image'])
|
||||
groups.append('triton_package=' + attrs['package'])
|
||||
groups.append('triton_state=' + attrs['state'])
|
||||
groups.append('triton_firewall_enabled=%s' % attrs['firewall_enabled'])
|
||||
groups.extend('triton_tags_%s=%s' % item
|
||||
for item in attrs['tags'].items())
|
||||
groups.extend('triton_network=' + network
|
||||
for network in attrs['networks'])
|
||||
|
||||
# groups specific to Mantl
|
||||
groups.append('role=' + attrs['role'])
|
||||
groups.append('dc=' + attrs['consul_dc'])
|
||||
|
||||
return name, attrs, groups
|
||||
|
||||
|
||||
@parses('digitalocean_droplet')
|
||||
@calculate_mantl_vars
|
||||
def digitalocean_host(resource, tfvars=None):
|
||||
raw_attrs = resource['primary']['attributes']
|
||||
name = raw_attrs['name']
|
||||
groups = []
|
||||
|
||||
attrs = {
|
||||
'id': raw_attrs['id'],
|
||||
'image': raw_attrs['image'],
|
||||
'ipv4_address': raw_attrs['ipv4_address'],
|
||||
'locked': parse_bool(raw_attrs['locked']),
|
||||
'metadata': json.loads(raw_attrs.get('user_data', '{}')),
|
||||
'region': raw_attrs['region'],
|
||||
'size': raw_attrs['size'],
|
||||
'ssh_keys': parse_list(raw_attrs, 'ssh_keys'),
|
||||
'status': raw_attrs['status'],
|
||||
# ansible
|
||||
'ansible_ssh_host': raw_attrs['ipv4_address'],
|
||||
'ansible_ssh_port': 22,
|
||||
'ansible_ssh_user': 'root', # it's always "root" on DO
|
||||
# generic
|
||||
'public_ipv4': raw_attrs['ipv4_address'],
|
||||
'private_ipv4': raw_attrs.get('ipv4_address_private',
|
||||
raw_attrs['ipv4_address']),
|
||||
'provider': 'digitalocean',
|
||||
}
|
||||
|
||||
# attrs specific to Mantl
|
||||
attrs.update({
|
||||
'consul_dc': _clean_dc(attrs['metadata'].get('dc', attrs['region'])),
|
||||
'role': attrs['metadata'].get('role', 'none'),
|
||||
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
|
||||
})
|
||||
|
||||
# add groups based on attrs
|
||||
groups.append('do_image=' + attrs['image'])
|
||||
groups.append('do_locked=%s' % attrs['locked'])
|
||||
groups.append('do_region=' + attrs['region'])
|
||||
groups.append('do_size=' + attrs['size'])
|
||||
groups.append('do_status=' + attrs['status'])
|
||||
groups.extend('do_metadata_%s=%s' % item
|
||||
for item in attrs['metadata'].items())
|
||||
|
||||
# groups specific to Mantl
|
||||
groups.append('role=' + attrs['role'])
|
||||
groups.append('dc=' + attrs['consul_dc'])
|
||||
|
||||
return name, attrs, groups
|
||||
|
||||
|
||||
@parses('softlayer_virtualserver')
|
||||
@calculate_mantl_vars
|
||||
def softlayer_host(resource, module_name):
|
||||
raw_attrs = resource['primary']['attributes']
|
||||
name = raw_attrs['name']
|
||||
groups = []
|
||||
|
||||
attrs = {
|
||||
'id': raw_attrs['id'],
|
||||
'image': raw_attrs['image'],
|
||||
'ipv4_address': raw_attrs['ipv4_address'],
|
||||
'metadata': json.loads(raw_attrs.get('user_data', '{}')),
|
||||
'region': raw_attrs['region'],
|
||||
'ram': raw_attrs['ram'],
|
||||
'cpu': raw_attrs['cpu'],
|
||||
'ssh_keys': parse_list(raw_attrs, 'ssh_keys'),
|
||||
'public_ipv4': raw_attrs['ipv4_address'],
|
||||
'private_ipv4': raw_attrs['ipv4_address_private'],
|
||||
'ansible_ssh_host': raw_attrs['ipv4_address'],
|
||||
'ansible_ssh_port': 22,
|
||||
'ansible_ssh_user': 'root',
|
||||
'provider': 'softlayer',
|
||||
}
|
||||
|
||||
# attrs specific to Mantl
|
||||
attrs.update({
|
||||
'consul_dc': _clean_dc(attrs['metadata'].get('dc', attrs['region'])),
|
||||
'role': attrs['metadata'].get('role', 'none'),
|
||||
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
|
||||
})
|
||||
|
||||
# groups specific to Mantl
|
||||
groups.append('role=' + attrs['role'])
|
||||
groups.append('dc=' + attrs['consul_dc'])
|
||||
|
||||
return name, attrs, groups
|
||||
|
||||
|
||||
@parses('openstack_compute_instance_v2')
|
||||
@calculate_mantl_vars
|
||||
def openstack_host(resource, module_name):
|
||||
raw_attrs = resource['primary']['attributes']
|
||||
name = raw_attrs['name']
|
||||
groups = []
|
||||
|
||||
attrs = {
|
||||
'access_ip_v4': raw_attrs['access_ip_v4'],
|
||||
'access_ip_v6': raw_attrs['access_ip_v6'],
|
||||
'ip': raw_attrs['network.0.fixed_ip_v4'],
|
||||
'flavor': parse_dict(raw_attrs, 'flavor',
|
||||
sep='_'),
|
||||
'id': raw_attrs['id'],
|
||||
'image': parse_dict(raw_attrs, 'image',
|
||||
sep='_'),
|
||||
'key_pair': raw_attrs['key_pair'],
|
||||
'metadata': parse_dict(raw_attrs, 'metadata'),
|
||||
'network': parse_attr_list(raw_attrs, 'network'),
|
||||
'region': raw_attrs.get('region', ''),
|
||||
'security_groups': parse_list(raw_attrs, 'security_groups'),
|
||||
# ansible
|
||||
'ansible_ssh_port': 22,
|
||||
# workaround for an OpenStack bug where hosts have a different domain
|
||||
# after they're restarted
|
||||
'host_domain': 'novalocal',
|
||||
'use_host_domain': True,
|
||||
# generic
|
||||
'public_ipv4': raw_attrs['access_ip_v4'],
|
||||
'private_ipv4': raw_attrs['access_ip_v4'],
|
||||
'provider': 'openstack',
|
||||
}
|
||||
|
||||
if 'floating_ip' in raw_attrs:
|
||||
attrs['private_ipv4'] = raw_attrs['network.0.fixed_ip_v4']
|
||||
|
||||
try:
|
||||
attrs.update({
|
||||
'ansible_ssh_host': raw_attrs['access_ip_v4'],
|
||||
'publicly_routable': True,
|
||||
})
|
||||
except (KeyError, ValueError):
|
||||
attrs.update({'ansible_ssh_host': '', 'publicly_routable': False})
|
||||
|
||||
# attrs specific to Ansible
|
||||
if 'metadata.ssh_user' in raw_attrs:
|
||||
attrs['ansible_ssh_user'] = raw_attrs['metadata.ssh_user']
|
||||
|
||||
if 'volume.#' in raw_attrs.keys() and int(raw_attrs['volume.#']) > 0:
|
||||
device_index = 1
|
||||
for key, value in raw_attrs.items():
|
||||
match = re.search("^volume.*.device$", key)
|
||||
if match:
|
||||
attrs['disk_volume_device_'+str(device_index)] = value
|
||||
device_index += 1
|
||||
|
||||
|
||||
# attrs specific to Mantl
|
||||
attrs.update({
|
||||
'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
|
||||
'role': attrs['metadata'].get('role', 'none'),
|
||||
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
|
||||
})
|
||||
|
||||
# add groups based on attrs
|
||||
groups.append('os_image=' + attrs['image']['name'])
|
||||
groups.append('os_flavor=' + attrs['flavor']['name'])
|
||||
groups.extend('os_metadata_%s=%s' % item
|
||||
for item in attrs['metadata'].items())
|
||||
groups.append('os_region=' + attrs['region'])
|
||||
|
||||
# groups specific to Mantl
|
||||
groups.append('role=' + attrs['metadata'].get('role', 'none'))
|
||||
groups.append('dc=' + attrs['consul_dc'])
|
||||
|
||||
# groups specific to kubespray
|
||||
for group in attrs['metadata'].get('kubespray_groups', "").split(","):
|
||||
groups.append(group)
|
||||
|
||||
return name, attrs, groups
|
||||
|
||||
|
||||
@parses('aws_instance')
|
||||
@calculate_mantl_vars
|
||||
def aws_host(resource, module_name):
|
||||
name = resource['primary']['attributes']['tags.Name']
|
||||
raw_attrs = resource['primary']['attributes']
|
||||
|
||||
groups = []
|
||||
|
||||
attrs = {
|
||||
'ami': raw_attrs['ami'],
|
||||
'availability_zone': raw_attrs['availability_zone'],
|
||||
'ebs_block_device': parse_attr_list(raw_attrs, 'ebs_block_device'),
|
||||
'ebs_optimized': parse_bool(raw_attrs['ebs_optimized']),
|
||||
'ephemeral_block_device': parse_attr_list(raw_attrs,
|
||||
'ephemeral_block_device'),
|
||||
'id': raw_attrs['id'],
|
||||
'key_name': raw_attrs['key_name'],
|
||||
'private': parse_dict(raw_attrs, 'private',
|
||||
sep='_'),
|
||||
'public': parse_dict(raw_attrs, 'public',
|
||||
sep='_'),
|
||||
'root_block_device': parse_attr_list(raw_attrs, 'root_block_device'),
|
||||
'security_groups': parse_list(raw_attrs, 'security_groups'),
|
||||
'subnet': parse_dict(raw_attrs, 'subnet',
|
||||
sep='_'),
|
||||
'tags': parse_dict(raw_attrs, 'tags'),
|
||||
'tenancy': raw_attrs['tenancy'],
|
||||
'vpc_security_group_ids': parse_list(raw_attrs,
|
||||
'vpc_security_group_ids'),
|
||||
# ansible-specific
|
||||
'ansible_ssh_port': 22,
|
||||
'ansible_ssh_host': raw_attrs['public_ip'],
|
||||
# generic
|
||||
'public_ipv4': raw_attrs['public_ip'],
|
||||
'private_ipv4': raw_attrs['private_ip'],
|
||||
'provider': 'aws',
|
||||
}
|
||||
|
||||
# attrs specific to Ansible
|
||||
if 'tags.sshUser' in raw_attrs:
|
||||
attrs['ansible_ssh_user'] = raw_attrs['tags.sshUser']
|
||||
if 'tags.sshPrivateIp' in raw_attrs:
|
||||
attrs['ansible_ssh_host'] = raw_attrs['private_ip']
|
||||
|
||||
# attrs specific to Mantl
|
||||
attrs.update({
|
||||
'consul_dc': _clean_dc(attrs['tags'].get('dc', module_name)),
|
||||
'role': attrs['tags'].get('role', 'none'),
|
||||
'ansible_python_interpreter': attrs['tags'].get('python_bin','python')
|
||||
})
|
||||
|
||||
# groups specific to Mantl
|
||||
groups.extend(['aws_ami=' + attrs['ami'],
|
||||
'aws_az=' + attrs['availability_zone'],
|
||||
'aws_key_name=' + attrs['key_name'],
|
||||
'aws_tenancy=' + attrs['tenancy']])
|
||||
groups.extend('aws_tag_%s=%s' % item for item in attrs['tags'].items())
|
||||
groups.extend('aws_vpc_security_group=' + group
|
||||
for group in attrs['vpc_security_group_ids'])
|
||||
groups.extend('aws_subnet_%s=%s' % subnet
|
||||
for subnet in attrs['subnet'].items())
|
||||
|
||||
# groups specific to Mantl
|
||||
groups.append('role=' + attrs['role'])
|
||||
groups.append('dc=' + attrs['consul_dc'])
|
||||
|
||||
return name, attrs, groups
|
||||
|
||||
|
||||
@parses('google_compute_instance')
|
||||
@calculate_mantl_vars
|
||||
def gce_host(resource, module_name):
|
||||
name = resource['primary']['id']
|
||||
raw_attrs = resource['primary']['attributes']
|
||||
groups = []
|
||||
|
||||
# network interfaces
|
||||
interfaces = parse_attr_list(raw_attrs, 'network_interface')
|
||||
for interface in interfaces:
|
||||
interface['access_config'] = parse_attr_list(interface,
|
||||
'access_config')
|
||||
for key in interface.keys():
|
||||
if '.' in key:
|
||||
del interface[key]
|
||||
|
||||
# general attrs
|
||||
attrs = {
|
||||
'can_ip_forward': raw_attrs['can_ip_forward'] == 'true',
|
||||
'disks': parse_attr_list(raw_attrs, 'disk'),
|
||||
'machine_type': raw_attrs['machine_type'],
|
||||
'metadata': parse_dict(raw_attrs, 'metadata'),
|
||||
'network': parse_attr_list(raw_attrs, 'network'),
|
||||
'network_interface': interfaces,
|
||||
'self_link': raw_attrs['self_link'],
|
||||
'service_account': parse_attr_list(raw_attrs, 'service_account'),
|
||||
'tags': parse_list(raw_attrs, 'tags'),
|
||||
'zone': raw_attrs['zone'],
|
||||
# ansible
|
||||
'ansible_ssh_port': 22,
|
||||
'provider': 'gce',
|
||||
}
|
||||
|
||||
# attrs specific to Ansible
|
||||
if 'metadata.ssh_user' in raw_attrs:
|
||||
attrs['ansible_ssh_user'] = raw_attrs['metadata.ssh_user']
|
||||
|
||||
# attrs specific to Mantl
|
||||
attrs.update({
|
||||
'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
|
||||
'role': attrs['metadata'].get('role', 'none'),
|
||||
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
|
||||
})
|
||||
|
||||
try:
|
||||
attrs.update({
|
||||
'ansible_ssh_host': interfaces[0]['access_config'][0]['nat_ip'] or interfaces[0]['access_config'][0]['assigned_nat_ip'],
|
||||
'public_ipv4': interfaces[0]['access_config'][0]['nat_ip'] or interfaces[0]['access_config'][0]['assigned_nat_ip'],
|
||||
'private_ipv4': interfaces[0]['address'],
|
||||
'publicly_routable': True,
|
||||
})
|
||||
except (KeyError, ValueError):
|
||||
attrs.update({'ansible_ssh_host': '', 'publicly_routable': False})
|
||||
|
||||
# add groups based on attrs
|
||||
groups.extend('gce_image=' + disk['image'] for disk in attrs['disks'])
|
||||
groups.append('gce_machine_type=' + attrs['machine_type'])
|
||||
groups.extend('gce_metadata_%s=%s' % (key, value)
|
||||
for (key, value) in attrs['metadata'].items()
|
||||
if key not in set(['sshKeys']))
|
||||
groups.extend('gce_tag=' + tag for tag in attrs['tags'])
|
||||
groups.append('gce_zone=' + attrs['zone'])
|
||||
|
||||
if attrs['can_ip_forward']:
|
||||
groups.append('gce_ip_forward')
|
||||
if attrs['publicly_routable']:
|
||||
groups.append('gce_publicly_routable')
|
||||
|
||||
# groups specific to Mantl
|
||||
groups.append('role=' + attrs['metadata'].get('role', 'none'))
|
||||
groups.append('dc=' + attrs['consul_dc'])
|
||||
|
||||
return name, attrs, groups
|
||||
|
||||
|
||||
@parses('vsphere_virtual_machine')
|
||||
@calculate_mantl_vars
|
||||
def vsphere_host(resource, module_name):
|
||||
raw_attrs = resource['primary']['attributes']
|
||||
network_attrs = parse_dict(raw_attrs, 'network_interface')
|
||||
network = parse_dict(network_attrs, '0')
|
||||
ip_address = network.get('ipv4_address', network['ip_address'])
|
||||
name = raw_attrs['name']
|
||||
groups = []
|
||||
|
||||
attrs = {
|
||||
'id': raw_attrs['id'],
|
||||
'ip_address': ip_address,
|
||||
'private_ipv4': ip_address,
|
||||
'public_ipv4': ip_address,
|
||||
'metadata': parse_dict(raw_attrs, 'custom_configuration_parameters'),
|
||||
'ansible_ssh_port': 22,
|
||||
'provider': 'vsphere',
|
||||
}
|
||||
|
||||
try:
|
||||
attrs.update({
|
||||
'ansible_ssh_host': ip_address,
|
||||
})
|
||||
except (KeyError, ValueError):
|
||||
attrs.update({'ansible_ssh_host': '', })
|
||||
|
||||
attrs.update({
|
||||
'consul_dc': _clean_dc(attrs['metadata'].get('consul_dc', module_name)),
|
||||
'role': attrs['metadata'].get('role', 'none'),
|
||||
'ansible_python_interpreter': attrs['metadata'].get('python_bin','python')
|
||||
})
|
||||
|
||||
# attrs specific to Ansible
|
||||
if 'ssh_user' in attrs['metadata']:
|
||||
attrs['ansible_ssh_user'] = attrs['metadata']['ssh_user']
|
||||
|
||||
groups.append('role=' + attrs['role'])
|
||||
groups.append('dc=' + attrs['consul_dc'])
|
||||
|
||||
return name, attrs, groups
|
||||
|
||||
@parses('azure_instance')
|
||||
@calculate_mantl_vars
|
||||
def azure_host(resource, module_name):
|
||||
name = resource['primary']['attributes']['name']
|
||||
raw_attrs = resource['primary']['attributes']
|
||||
|
||||
groups = []
|
||||
|
||||
attrs = {
|
||||
'automatic_updates': raw_attrs['automatic_updates'],
|
||||
'description': raw_attrs['description'],
|
||||
'hosted_service_name': raw_attrs['hosted_service_name'],
|
||||
'id': raw_attrs['id'],
|
||||
'image': raw_attrs['image'],
|
||||
'ip_address': raw_attrs['ip_address'],
|
||||
'location': raw_attrs['location'],
|
||||
'name': raw_attrs['name'],
|
||||
'reverse_dns': raw_attrs['reverse_dns'],
|
||||
'security_group': raw_attrs['security_group'],
|
||||
'size': raw_attrs['size'],
|
||||
'ssh_key_thumbprint': raw_attrs['ssh_key_thumbprint'],
|
||||
'subnet': raw_attrs['subnet'],
|
||||
'username': raw_attrs['username'],
|
||||
'vip_address': raw_attrs['vip_address'],
|
||||
'virtual_network': raw_attrs['virtual_network'],
|
||||
'endpoint': parse_attr_list(raw_attrs, 'endpoint'),
|
||||
# ansible
|
||||
'ansible_ssh_port': 22,
|
||||
'ansible_ssh_user': raw_attrs['username'],
|
||||
'ansible_ssh_host': raw_attrs['vip_address'],
|
||||
}
|
||||
|
||||
# attrs specific to mantl
|
||||
attrs.update({
|
||||
'consul_dc': attrs['location'].lower().replace(" ", "-"),
|
||||
'role': attrs['description']
|
||||
})
|
||||
|
||||
# groups specific to mantl
|
||||
groups.extend(['azure_image=' + attrs['image'],
|
||||
'azure_location=' + attrs['location'].lower().replace(" ", "-"),
|
||||
'azure_username=' + attrs['username'],
|
||||
'azure_security_group=' + attrs['security_group']])
|
||||
|
||||
# groups specific to mantl
|
||||
groups.append('role=' + attrs['role'])
|
||||
groups.append('dc=' + attrs['consul_dc'])
|
||||
|
||||
return name, attrs, groups
|
||||
|
||||
|
||||
@parses('clc_server')
|
||||
@calculate_mantl_vars
|
||||
def clc_server(resource, module_name):
|
||||
raw_attrs = resource['primary']['attributes']
|
||||
name = raw_attrs.get('id')
|
||||
groups = []
|
||||
md = parse_dict(raw_attrs, 'metadata')
|
||||
attrs = {
|
||||
'metadata': md,
|
||||
'ansible_ssh_port': md.get('ssh_port', 22),
|
||||
'ansible_ssh_user': md.get('ssh_user', 'root'),
|
||||
'provider': 'clc',
|
||||
'publicly_routable': False,
|
||||
}
|
||||
|
||||
try:
|
||||
attrs.update({
|
||||
'public_ipv4': raw_attrs['public_ip_address'],
|
||||
'private_ipv4': raw_attrs['private_ip_address'],
|
||||
'ansible_ssh_host': raw_attrs['public_ip_address'],
|
||||
'publicly_routable': True,
|
||||
})
|
||||
except (KeyError, ValueError):
|
||||
attrs.update({
|
||||
'ansible_ssh_host': raw_attrs['private_ip_address'],
|
||||
'private_ipv4': raw_attrs['private_ip_address'],
|
||||
})
|
||||
|
||||
attrs.update({
|
||||
'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
|
||||
'role': attrs['metadata'].get('role', 'none'),
|
||||
})
|
||||
|
||||
groups.append('role=' + attrs['role'])
|
||||
groups.append('dc=' + attrs['consul_dc'])
|
||||
return name, attrs, groups
|
||||
|
||||
|
||||
|
||||
## QUERY TYPES
|
||||
def query_host(hosts, target):
|
||||
for name, attrs, _ in hosts:
|
||||
if name == target:
|
||||
return attrs
|
||||
|
||||
return {}
|
||||
|
||||
|
||||
def query_list(hosts):
|
||||
groups = defaultdict(dict)
|
||||
meta = {}
|
||||
|
||||
for name, attrs, hostgroups in hosts:
|
||||
for group in set(hostgroups):
|
||||
groups[group].setdefault('hosts', [])
|
||||
groups[group]['hosts'].append(name)
|
||||
|
||||
meta[name] = attrs
|
||||
|
||||
groups['_meta'] = {'hostvars': meta}
|
||||
return groups
|
||||
|
||||
|
||||
def query_hostfile(hosts):
|
||||
out = ['## begin hosts generated by terraform.py ##']
|
||||
out.extend(
|
||||
'{}\t{}'.format(attrs['ansible_ssh_host'].ljust(16), name)
|
||||
for name, attrs, _ in hosts
|
||||
)
|
||||
|
||||
out.append('## end hosts generated by terraform.py ##')
|
||||
return '\n'.join(out)
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(
|
||||
__file__, __doc__,
|
||||
formatter_class=argparse.ArgumentDefaultsHelpFormatter, )
|
||||
modes = parser.add_mutually_exclusive_group(required=True)
|
||||
modes.add_argument('--list',
|
||||
action='store_true',
|
||||
help='list all variables')
|
||||
modes.add_argument('--host', help='list variables for a single host')
|
||||
modes.add_argument('--version',
|
||||
action='store_true',
|
||||
help='print version and exit')
|
||||
modes.add_argument('--hostfile',
|
||||
action='store_true',
|
||||
help='print hosts as a /etc/hosts snippet')
|
||||
parser.add_argument('--pretty',
|
||||
action='store_true',
|
||||
help='pretty-print output JSON')
|
||||
parser.add_argument('--nometa',
|
||||
action='store_true',
|
||||
help='with --list, exclude hostvars')
|
||||
default_root = os.environ.get('TERRAFORM_STATE_ROOT',
|
||||
os.path.abspath(os.path.join(os.path.dirname(__file__),
|
||||
'..', '..', )))
|
||||
parser.add_argument('--root',
|
||||
default=default_root,
|
||||
help='custom root to search for `.tfstate`s in')
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
if args.version:
|
||||
print('%s %s' % (__file__, VERSION))
|
||||
parser.exit()
|
||||
|
||||
hosts = iterhosts(iterresources(tfstates(args.root)))
|
||||
if args.list:
|
||||
output = query_list(hosts)
|
||||
if args.nometa:
|
||||
del output['_meta']
|
||||
print(json.dumps(output, indent=4 if args.pretty else None))
|
||||
elif args.host:
|
||||
output = query_host(hosts, args.host)
|
||||
print(json.dumps(output, indent=4 if args.pretty else None))
|
||||
elif args.hostfile:
|
||||
output = query_hostfile(hosts)
|
||||
print(output)
|
||||
|
||||
parser.exit()
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
164
docs/ansible.md
Normal file
@ -0,0 +1,164 @@
|
||||
Ansible variables
|
||||
===============
|
||||
|
||||
|
||||
Inventory
|
||||
-------------
|
||||
The inventory is composed of 3 groups:
|
||||
|
||||
* **kube-node** : list of kubernetes nodes where the pods will run.
|
||||
* **kube-master** : list of servers where kubernetes master components (apiserver, scheduler, controller) will run.
|
||||
Note: if you want the server to act both as master and node, it must be defined in both the _kube-master_ and _kube-node_ groups.
|
||||
* **etcd** : list of servers that compose the etcd cluster. You should have at least 3 servers for failover purposes.
|
||||
|
||||
Below is a complete inventory example:
|
||||
|
||||
```
|
||||
## Configure 'ip' variable to bind kubernetes services on a
|
||||
## different ip than the default iface
|
||||
node1 ansible_ssh_host=95.54.0.12 # ip=10.3.0.1
|
||||
node2 ansible_ssh_host=95.54.0.13 # ip=10.3.0.2
|
||||
node3 ansible_ssh_host=95.54.0.14 # ip=10.3.0.3
|
||||
node4 ansible_ssh_host=95.54.0.15 # ip=10.3.0.4
|
||||
node5 ansible_ssh_host=95.54.0.16 # ip=10.3.0.5
|
||||
node6 ansible_ssh_host=95.54.0.17 # ip=10.3.0.6
|
||||
|
||||
[kube-master]
|
||||
node1
|
||||
node2
|
||||
|
||||
[etcd]
|
||||
node1
|
||||
node2
|
||||
node3
|
||||
|
||||
[kube-node]
|
||||
node2
|
||||
node3
|
||||
node4
|
||||
node5
|
||||
node6
|
||||
|
||||
[k8s-cluster:children]
|
||||
kube-node
|
||||
kube-master
|
||||
etcd
|
||||
```
|
||||
|
||||
Group vars and overriding variables precedence
|
||||
----------------------------------------------
|
||||
|
||||
The group variables to control main deployment options are located in the directory ``inventory/group_vars``.
|
||||
|
||||
There are also role vars for docker, rkt, kubernetes preinstall and master roles.
|
||||
According to the [ansible docs](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
|
||||
those cannot be overridden from the group vars. In order to override, one should use
|
||||
the `-e` runtime flag (the simplest way) or other layers described in the docs.
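For example (a sketch; the chosen variable and the override file name below are just illustrations), overrides can be passed at runtime either inline or from a YAML file:

```
ansible-playbook -i inventory/inventory.ini cluster.yml -e kube_network_plugin=calico
ansible-playbook -i inventory/inventory.ini cluster.yml -e @my-overrides.yml
```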
|
||||
|
||||
Kargo uses only a few layers to override things (or expects them to
|
||||
be overridden for roles):
|
||||
|
||||
Layer | Comment
|
||||
------|--------
|
||||
**role defaults** | provides best UX to override things for Kargo deployments
|
||||
inventory vars | Unused
|
||||
**inventory group_vars** | Expects users to use ``all.yml``,``k8s-cluster.yml`` etc. to override things
|
||||
inventory host_vars | Unused
|
||||
playbook group_vars | Unused
|
||||
playbook host_vars | Unused
|
||||
**host facts** | Kargo overrides for internal roles' logic, like state flags
|
||||
play vars | Unused
|
||||
play vars_prompt | Unused
|
||||
play vars_files | Unused
|
||||
registered vars | Unused
|
||||
set_facts | Kargo overrides those, for some places
|
||||
**role and include vars** | Provides bad UX to override things! Use extra vars to enforce
|
||||
block vars (only for tasks in block) | Kargo overrides for internal roles' logic
|
||||
task vars (only for the task) | Unused for roles, but only for helper scripts
|
||||
**extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``
|
||||
|
||||
Ansible tags
|
||||
------------
|
||||
The following tags are defined in playbooks:
|
||||
|
||||
| Tag name | Used for
|
||||
|--------------------------|---------
|
||||
| apps | K8s apps definitions
|
||||
| azure | Cloud-provider Azure
|
||||
| bastion | Setup ssh config for bastion
|
||||
| bootstrap-os | Anything related to host OS configuration
|
||||
| calico | Network plugin Calico
|
||||
| canal | Network plugin Canal
|
||||
| cloud-provider | Cloud-provider related tasks
|
||||
| dnsmasq | Configuring DNS stack for hosts and K8s apps
|
||||
| docker | Configuring docker for hosts
|
||||
| download | Fetching container images to a delegate host
|
||||
| etcd | Configuring etcd cluster
|
||||
| etcd-pre-upgrade | Upgrading etcd cluster
|
||||
| etcd-secrets | Configuring etcd certs/keys
|
||||
| etchosts | Configuring /etc/hosts entries for hosts
|
||||
| facts | Gathering facts and misc check results
|
||||
| flannel | Network plugin flannel
|
||||
| gce | Cloud-provider GCP
|
||||
| hyperkube | Manipulations with K8s hyperkube image
|
||||
| k8s-pre-upgrade | Upgrading K8s cluster
|
||||
| k8s-secrets | Configuring K8s certs/keys
|
||||
| kpm | Installing K8s apps definitions with KPM
|
||||
| kube-apiserver | Configuring self-hosted kube-apiserver
|
||||
| kube-controller-manager | Configuring self-hosted kube-controller-manager
|
||||
| kubectl | Installing kubectl and bash completion
|
||||
| kubelet | Configuring kubelet service
|
||||
| kube-proxy | Configuring self-hosted kube-proxy
|
||||
| kube-scheduler | Configuring self-hosted kube-scheduler
|
||||
| localhost | Special steps for the localhost (ansible runner)
|
||||
| master | Configuring K8s master node role
|
||||
| netchecker | Installing netchecker K8s app
|
||||
| network | Configuring networking plugins for K8s
|
||||
| nginx | Configuring LB for kube-apiserver instances
|
||||
| node | Configuring K8s minion (compute) node role
|
||||
| openstack | Cloud-provider OpenStack
|
||||
| preinstall | Preliminary configuration steps
|
||||
| resolvconf | Configuring /etc/resolv.conf for hosts/apps
|
||||
| upgrade | Upgrading, f.e. container images/binaries
|
||||
| upload | Distributing images/binaries across hosts
|
||||
| weave | Network plugin Weave
|
||||
|
||||
Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
|
||||
tags found in the codebase. New tags will be listed with the empty "Used for"
|
||||
field.
|
||||
|
||||
Example commands
|
||||
----------------
|
||||
Example command to filter and apply only DNS configuration tasks and skip
|
||||
everything else related to host OS configuration and downloading images of containers:
|
||||
|
||||
```
|
||||
ansible-playbook -i inventory/inventory.ini cluster.yml --tags preinstall,dnsmasq,facts --skip-tags=download,bootstrap-os
|
||||
```
|
||||
And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
|
||||
```
|
||||
ansible-playbook -i inventory/inventory.ini -e dns_server='' cluster.yml --tags resolvconf
|
||||
```
|
||||
And this prepares all container images locally (at the Ansible runner node) without installing
|
||||
or upgrading related components or trying to upload containers to the K8s cluster nodes:
|
||||
```
|
||||
ansible-playbook -i inventory/inventory.ini cluster.yml \
|
||||
-e download_run_once=true -e download_localhost=true \
|
||||
--tags download --skip-tags upload,upgrade
|
||||
```
|
||||
|
||||
Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
|
||||
|
||||
Bastion host
|
||||
--------------
|
||||
If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
|
||||
you can use a so-called *bastion* host to connect to your nodes. To specify and use a bastion,
|
||||
simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the
|
||||
bastion host.
|
||||
|
||||
```
|
||||
bastion ansible_ssh_host=x.x.x.x
|
||||
```
|
||||
|
||||
For more information about Ansible and bastion hosts, read
|
||||
[Running Ansible Through an SSH Bastion Host](http://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
|
10
docs/aws.md
Normal file
@ -0,0 +1,10 @@
|
||||
AWS
|
||||
===============
|
||||
|
||||
To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`.
|
||||
|
||||
Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes/kubernetes/tree/master/cluster/aws/templates/iam). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
|
||||
|
||||
The next step is to make sure the hostnames in your `inventory` file are identical to your internal hostnames in AWS. This may look something like `ip-111-222-333-444.us-west-2.compute.internal`. You can then specify how Ansible connects to these instances with `ansible_ssh_host` and `ansible_ssh_user`.
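A minimal sketch of such an inventory entry (the hostname follows the pattern above; the address and user are hypothetical):

```
ip-111-222-333-444.us-west-2.compute.internal ansible_ssh_host=55.66.77.88 ansible_ssh_user=ubuntu
```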
|
||||
|
||||
You can now create your cluster!
|
56
docs/azure.md
Normal file
@ -0,0 +1,56 @@
|
||||
Azure
|
||||
===============
|
||||
|
||||
To deploy kubespray on [Azure](https://azure.microsoft.com) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'azure'`.
|
||||
|
||||
All your instances are required to run in a resource group and a routing table has to be attached to the subnet your instances are in.
|
||||
|
||||
Not all features are supported yet; for a list of the current status, have a look [here](https://github.com/colemickens/azure-kubernetes-status)
|
||||
|
||||
### Parameters
|
||||
|
||||
Before creating the instances you must first set the `azure_` variables in the `group_vars/all.yml` file.
|
||||
|
||||
All of the values can be retrieved using the azure cli tool which can be downloaded here: https://docs.microsoft.com/en-gb/azure/xplat-cli-install
|
||||
After installation you have to run `azure login` to get access to your account.
|
||||
|
||||
|
||||
#### azure\_tenant\_id + azure\_subscription\_id
|
||||
run `azure account show` to retrieve your subscription id and tenant id:
|
||||
`azure_tenant_id` -> Tenant ID field
|
||||
`azure_subscription_id` -> ID field
|
||||
|
||||
|
||||
#### azure\_location
|
||||
The region your instances are located in; can be something like `westeurope` or `westcentralus`. A full list of region names can be retrieved via `azure location list`
|
||||
|
||||
|
||||
#### azure\_resource\_group
|
||||
The name of the resource group your instances are in, can be retrieved via `azure group list`
|
||||
|
||||
#### azure\_vnet\_name
|
||||
The name of the virtual network your instances are in, can be retrieved via `azure network vnet list`
|
||||
|
||||
#### azure\_subnet\_name
|
||||
The name of the subnet your instances are in, can be retrieved via `azure network vnet subnet list RESOURCE_GROUP VNET_NAME`
|
||||
|
||||
#### azure\_security\_group\_name
|
||||
The name of the network security group your instances are in, can be retrieved via `azure network nsg list`
|
||||
|
||||
#### azure\_aad\_client\_id + azure\_aad\_client\_secret
|
||||
These will have to be generated first:
|
||||
- Create an Azure AD Application with:
|
||||
`azure ad app create --name kubernetes --identifier-uris http://kubernetes --home-page http://example.com --password CLIENT_SECRET`
|
||||
The name, identifier-uri, home-page and the password can be chosen
|
||||
Note the AppId in the output.
|
||||
- Create Service principal for the application with:
|
||||
`azure ad sp create --applicationId AppId`
|
||||
This is the AppId from the last command
|
||||
- Create the role assignment with:
|
||||
`azure role assignment create --spn http://kubernetes -o "Owner" -c /subscriptions/SUBSCRIPTION_ID`
|
||||
|
||||
azure\_aad\_client\_id must be set to the AppId, azure\_aad\_client\_secret is your chosen secret.
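Putting it together, a sketch of the relevant `group_vars/all.yml` section (every value below is a placeholder you must replace with your own):

```
cloud_provider: azure

azure_tenant_id: "00000000-0000-0000-0000-000000000000"
azure_subscription_id: "00000000-0000-0000-0000-000000000000"
azure_aad_client_id: "00000000-0000-0000-0000-000000000000"
azure_aad_client_secret: "CLIENT_SECRET"
azure_resource_group: "my-resource-group"
azure_location: "westeurope"
azure_subnet_name: "my-subnet"
azure_security_group_name: "my-nsg"
azure_vnet_name: "my-vnet"
```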
|
||||
|
||||
## Provisioning Azure with Resource Group Templates
|
||||
|
||||
You'll find Resource Group Templates and scripts to provision the required infrastructure on Azure in [*contrib/azurerm*](../contrib/azurerm/README.md)
|
153
docs/calico.md
Normal file
@ -0,0 +1,153 @@
|
||||
Calico
|
||||
===========
|
||||
|
||||
Check if the calico-node container is running
|
||||
|
||||
```
|
||||
docker ps | grep calico
|
||||
```
|
||||
|
||||
The **calicoctl** command allows you to check the status of the network workloads.
|
||||
* Check the status of Calico nodes
|
||||
|
||||
```
|
||||
calicoctl node status
|
||||
```
|
||||
|
||||
or for versions prior *v1.0.0*:
|
||||
|
||||
```
|
||||
calicoctl status
|
||||
```
|
||||
|
||||
* Show the configured network subnet for containers
|
||||
|
||||
```
|
||||
calicoctl get ippool -o wide
|
||||
```
|
||||
|
||||
or for versions prior *v1.0.0*:
|
||||
|
||||
```
|
||||
calicoctl pool show
|
||||
```
|
||||
|
||||
* Show the workloads (IP addresses of containers and where they are located)
|
||||
|
||||
```
|
||||
calicoctl get workloadEndpoint -o wide
|
||||
```
|
||||
|
||||
and
|
||||
|
||||
```
|
||||
calicoctl get hostEndpoint -o wide
|
||||
```
|
||||
|
||||
or for versions prior *v1.0.0*:
|
||||
|
||||
```
|
||||
calicoctl endpoint show --detail
|
||||
```
|
||||
|
||||
##### Optional : Define network backend
|
||||
|
||||
In some cases you may want to define the Calico network backend. Allowed values are 'bird', 'gobgp' or 'none'. 'bird' is the default value.
|
||||
|
||||
To override it, edit the inventory and add a group variable `calico_network_backend`:
|
||||
|
||||
```
|
||||
calico_network_backend: none
|
||||
```
|
||||
|
||||
##### Optional : BGP Peering with border routers
|
||||
|
||||
In some cases you may want to route the pods' subnet so that NAT is not needed on the nodes.
|
||||
For instance, if you have a cluster spread across different locations and you want your pods to talk to each other no matter where they are located.
|
||||
The following variables need to be set:
|
||||
`peer_with_router` to enable the peering with the datacenter's border router (default value: false).
|
||||
You'll need to edit the inventory and add a hostvar `local_as` per node.
|
||||
|
||||
```
|
||||
node1 ansible_ssh_host=95.54.0.12 local_as=xxxxxx
|
||||
```
|
||||
|
||||
##### Optional : Define global AS number
|
||||
|
||||
Optional parameter `global_as_num` defines Calico global AS number (`/calico/bgp/v1/global/as_num` etcd key).
|
||||
It defaults to "64512".
|
||||
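For example, to use the AS number from the route reflector topology shown further below, set it as a group variable:

```
global_as_num: "65400"
```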
|
||||
##### Optional : BGP Peering with route reflectors
|
||||
|
||||
At large scale you may want to disable full node-to-node mesh in order to
|
||||
optimize your BGP topology and improve `calico-node` containers' start times.
|
||||
|
||||
To do so you can deploy BGP route reflectors and peer `calico-node` with them as
|
||||
recommended here:
|
||||
|
||||
* https://hub.docker.com/r/calico/routereflector/
|
||||
* http://docs.projectcalico.org/v2.0/reference/private-cloud/l3-interconnect-fabric
|
||||
|
||||
You need to edit your inventory and add:
|
||||
|
||||
* `calico-rr` group with nodes in it. At the moment it's incompatible with
|
||||
`kube-node` due to BGP port conflict with `calico-node` container. So you
|
||||
should not have nodes in both `calico-rr` and `kube-node` groups.
|
||||
* `cluster_id` per route reflector node/group (see details
|
||||
[here](https://hub.docker.com/r/calico/routereflector/))
|
||||
|
||||
Here's an example of Kargo inventory with route reflectors:
|
||||
|
||||
```
|
||||
[all]
|
||||
rr0 ansible_ssh_host=10.210.1.10 ip=10.210.1.10
|
||||
rr1 ansible_ssh_host=10.210.1.11 ip=10.210.1.11
|
||||
node2 ansible_ssh_host=10.210.1.12 ip=10.210.1.12
|
||||
node3 ansible_ssh_host=10.210.1.13 ip=10.210.1.13
|
||||
node4 ansible_ssh_host=10.210.1.14 ip=10.210.1.14
|
||||
node5 ansible_ssh_host=10.210.1.15 ip=10.210.1.15
|
||||
|
||||
[kube-master]
|
||||
node2
|
||||
node3
|
||||
|
||||
[etcd]
|
||||
node2
|
||||
node3
|
||||
node4
|
||||
|
||||
[kube-node]
|
||||
node2
|
||||
node3
|
||||
node4
|
||||
node5
|
||||
|
||||
[k8s-cluster:children]
|
||||
kube-node
|
||||
kube-master
|
||||
|
||||
[calico-rr]
|
||||
rr0
|
||||
rr1
|
||||
|
||||
[rack0]
|
||||
rr0
|
||||
rr1
|
||||
node2
|
||||
node3
|
||||
node4
|
||||
node5
|
||||
|
||||
[rack0:vars]
|
||||
cluster_id="1.0.0.1"
|
||||
```
|
||||
|
||||
The inventory above will deploy the following topology assuming that calico's
|
||||
`global_as_num` is set to `65400`:
|
||||
|
||||

|
||||
|
||||
Cloud providers configuration
|
||||
=============================
|
||||
|
||||
Please refer to the official documentation; for example, [GCE configuration](http://docs.projectcalico.org/v1.5/getting-started/docker/installation/gce) requires a security rule for calico ip-ip tunnels. Note that calico is always configured with ``ipip: true`` if a cloud provider is defined.
|
22
docs/cloud.md
Normal file
22
docs/cloud.md
Normal file
@ -0,0 +1,22 @@
|
||||
Cloud providers
|
||||
==============
|
||||
|
||||
#### Provisioning
|
||||
|
||||
You can use kargo-cli to start new instances on a cloud provider.
|
||||
Here's an example:
|
||||
```
|
||||
kargo [aws|gce] --nodes 2 --etcd 3 --cluster-name test-smana
|
||||
```
|
||||
|
||||
#### Deploy kubernetes
|
||||
|
||||
With kargo-cli
|
||||
```
|
||||
kargo deploy [--aws|--gce] -u admin
|
||||
```
|
||||
|
||||
Or with the ansible-playbook command:
|
||||
```
|
||||
ansible-playbook -u smana -e ansible_ssh_user=admin -e cloud_provider=[aws|gce] -b --become-user=root -i inventory/single.cfg cluster.yml
|
||||
```
|
25
docs/comparisons.md
Normal file
25
docs/comparisons.md
Normal file
@ -0,0 +1,25 @@
|
||||
Kargo vs [Kops](https://github.com/kubernetes/kops)
|
||||
---------------
|
||||
|
||||
Kargo runs on bare metal and most clouds, using Ansible as its substrate for
|
||||
provisioning and orchestration. Kops performs the provisioning and orchestration
|
||||
itself, and as such is less flexible in deployment platforms. For people with
|
||||
familiarity with Ansible, existing Ansible deployments or the desire to run a
|
||||
Kubernetes cluster across multiple platforms, Kargo is a good choice. Kops,
|
||||
however, is more tightly integrated with the unique features of the clouds it
|
||||
supports so it could be a better choice if you know that you will only be using
|
||||
one platform for the foreseeable future.
|
||||
|
||||
Kargo vs [Kubeadm](https://github.com/kubernetes/kubeadm)
|
||||
------------------
|
||||
|
||||
Kubeadm provides domain knowledge of Kubernetes clusters' life cycle
|
||||
management, including self-hosted layouts, dynamic discovery services and so
|
||||
on. Had it belonged to the new [operators world](https://coreos.com/blog/introducing-operators.html),
|
||||
it would likely have been named a "Kubernetes cluster operator". Kargo, however,
|
||||
does generic configuration management tasks from the "OS operators" ansible
|
||||
world, plus some initial K8s clustering (with networking plugins included) and
|
||||
control plane bootstrapping. Kargo [strives](https://github.com/kubernetes-incubator/kargo/issues/553)
|
||||
to adopt kubeadm as a tool in order to consume life cycle management domain
|
||||
knowledge from it and offload generic OS configuration things from it, which
|
||||
hopefully benefits both sides.
|
24
docs/coreos.md
Normal file
24
docs/coreos.md
Normal file
@ -0,0 +1,24 @@
|
||||
CoreOS bootstrap
|
||||
===============
|
||||
|
||||
Example with **kargo-cli**:
|
||||
|
||||
```
|
||||
kargo deploy --gce --coreos
|
||||
```
|
||||
|
||||
Or with Ansible:
|
||||
|
||||
Before running the cluster playbook you must satisfy the following requirements:
|
||||
|
||||
* On each CoreOS node, a writable directory **/opt/bin** (~400M of disk space)
|
||||
|
||||
* Uncomment the variable **ansible\_python\_interpreter** in the file `inventory/group_vars/all.yml`
|
||||
|
||||
* Run the Python bootstrap playbook:
|
||||
|
||||
```
|
||||
ansible-playbook -u smana -e ansible_ssh_user=smana -b --become-user=root -i inventory/inventory.cfg coreos-bootstrap.yml
|
||||
```
|
||||
|
||||
Then you can proceed to [cluster deployment](#run-deployment)
|
142
docs/dns-stack.md
Normal file
142
docs/dns-stack.md
Normal file
@ -0,0 +1,142 @@
|
||||
K8s DNS stack by Kargo
|
||||
======================
|
||||
|
||||
For K8s cluster nodes, kargo configures a [Kubernetes DNS](http://kubernetes.io/docs/admin/dns/)
|
||||
[cluster add-on](http://releases.k8s.io/master/cluster/addons/README.md)
|
||||
to serve as an authoritative DNS server for a given ``dns_domain`` and its
|
||||
``svc, default.svc`` default subdomains (a total of ``ndots: 5`` max levels).
|
||||
|
||||
Other nodes in the inventory, like external storage nodes or a separate etcd cluster
|
||||
node group, are considered non-cluster nodes; configuring DNS resolution for them is left up to the user.
|
||||
|
||||
|
||||
DNS variables
|
||||
=============
|
||||
|
||||
There are several global variables which can be used to modify DNS settings:
|
||||
|
||||
#### ndots
|
||||
ndots value to be used in ``/etc/resolv.conf``
|
||||
|
||||
It is important to note that multiple search domains combined with high ``ndots``
|
||||
values lead to poor performance of the DNS stack, so please choose them wisely.
|
||||
The dnsmasq DaemonSet can accept lower ``ndots`` values and return NXDOMAIN
|
||||
replies for [bogus internal FQDNS](https://github.com/kubernetes/kubernetes/issues/19634#issuecomment-253948954)
|
||||
before it even hits the kubedns app. This enables dnsmasq to serve as a
|
||||
protective, but still recursive resolver in front of kubedns.
|
||||
|
||||
#### searchdomains
|
||||
Custom search domains to be added in addition to the cluster search domains (``default.svc.{{ dns_domain }}, svc.{{ dns_domain }}``).
|
||||
|
||||
Most Linux systems limit the total number of search domains to 6 and the total length of all search domains
|
||||
to 256 characters. Depending on the length of ``dns_domain``, you're limited to fewer than the total limit.
|
||||
|
||||
Please note that ``resolvconf_mode: docker_dns`` will automatically add your system's search domains as
|
||||
additional search domains. Please take this into account when considering the limits.
|
||||
|
||||
#### nameservers
|
||||
This variable is only used by ``resolvconf_mode: host_resolvconf``. These nameservers are added to the host's
|
||||
``/etc/resolv.conf`` *after* ``upstream_dns_servers`` and thus serve as backup nameservers. If this variable
|
||||
is not set, a default resolver is chosen (depending on cloud provider or 8.8.8.8 when no cloud provider is specified).
|
||||
|
||||
#### upstream_dns_servers
|
||||
DNS servers to be added *after* the cluster DNS. Used by all ``resolvconf_mode`` modes. These serve as backup
|
||||
DNS servers in early cluster deployment when no cluster DNS is available yet. These are also added as upstream
|
||||
DNS servers used by ``dnsmasq`` (when deployed with ``dns_mode: dnsmasq_kubedns``).
|
||||
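As a sketch, these variables might be combined in your group vars like this (the domain and server addresses are placeholders):

```
ndots: 2
searchdomains:
  - corp.example.com
nameservers:
  - 8.8.8.8
upstream_dns_servers:
  - 10.0.0.53
```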
|
||||
DNS modes supported by kargo
|
||||
============================
|
||||
|
||||
You can modify how kargo sets up DNS for your cluster with the variables ``dns_mode`` and ``resolvconf_mode``.
|
||||
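For example, the defaults described below correspond to:

```
dns_mode: dnsmasq_kubedns
resolvconf_mode: docker_dns
```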
|
||||
## dns_mode
|
||||
``dns_mode`` configures how kargo will set up cluster DNS. There are three modes available:
|
||||
|
||||
#### dnsmasq_kubedns (default)
|
||||
This installs an additional dnsmasq DaemonSet which gives more flexibility and lifts some
|
||||
limitations (e.g. number of nameservers). Kubelet is instructed to use dnsmasq instead of kubedns/skydns.
|
||||
It is configured to forward all DNS queries belonging to cluster services to kubedns/skydns. All
|
||||
other queries are forwarded to the nameservers found in ``upstream_dns_servers`` or the ``default_resolver``.
|
||||
|
||||
#### kubedns
|
||||
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use kubedns/skydns for
|
||||
all queries.
|
||||
|
||||
#### none
|
||||
This installs neither dnsmasq nor kubedns/skydns. This basically disables cluster DNS completely and
|
||||
leaves you with a non-functional cluster.
|
||||
|
||||
## resolvconf_mode
|
||||
``resolvconf_mode`` configures how kargo will set up DNS for ``hostNetwork: true`` PODs and non-k8s containers.
|
||||
There are three modes available:
|
||||
|
||||
#### docker_dns (default)
|
||||
This sets up the docker daemon with additional --dns/--dns-search/--dns-opt flags.
|
||||
|
||||
The following nameservers are added to the docker daemon (in the same order as listed here):
|
||||
* cluster nameserver (depends on dns_mode)
|
||||
* content of optional upstream_dns_servers variable
|
||||
* host system nameservers (read from hosts /etc/resolv.conf)
|
||||
|
||||
The following search domains are added to the docker daemon (in the same order as listed here):
|
||||
* cluster domains (``default.svc.{{ dns_domain }}``, ``svc.{{ dns_domain }}``)
|
||||
* content of optional searchdomains variable
|
||||
* host system search domains (read from hosts /etc/resolv.conf)
|
||||
|
||||
The following dns options are added to the docker daemon
|
||||
* ndots:{{ ndots }}
|
||||
* timeout:2
|
||||
* attempts:2
|
||||
|
||||
For normal PODs, k8s will ignore these options and set up its own DNS settings for the PODs, taking
|
||||
the --cluster_dns (either dnsmasq or kubedns, depending on dns_mode) kubelet option into account.
|
||||
For ``hostNetwork: true`` PODs, however, k8s will let docker set up DNS settings. Docker containers which
|
||||
are not started/managed by k8s will also use these docker options.
|
||||
|
||||
The host system name servers are added to ensure name resolution is also working while cluster DNS is not
|
||||
running yet. This is especially important in early stages of cluster deployment. In this early stage,
|
||||
DNS queries to the cluster DNS will timeout after a few seconds, resulting in the system nameserver being
|
||||
used as a backup nameserver. After cluster DNS is running, all queries will be answered by the cluster DNS
|
||||
servers, which in turn will forward queries to the system nameserver if required.
|
||||
|
||||
#### host_resolvconf
|
||||
This activates the classic kargo behaviour that modifies the host's ``/etc/resolv.conf`` file and dhclient
|
||||
configuration to point to the cluster dns server (either dnsmasq or kubedns, depending on dns_mode).
|
||||
|
||||
As cluster DNS is not available on early deployment stage, this mode is split into 2 stages. In the first
|
||||
stage (``dns_early: true``), ``/etc/resolv.conf`` is configured to use the DNS servers found in ``upstream_dns_servers``
|
||||
and ``nameservers``. Later, ``/etc/resolv.conf`` is reconfigured to use the cluster DNS server first, leaving
|
||||
the other nameservers as backups.
|
||||
|
||||
Also note, existing records will be purged from the `/etc/resolv.conf`,
|
||||
including resolvconf's base/head/cloud-init config files and those that come from dhclient.
|
||||
|
||||
#### none
|
||||
Does nothing regarding ``/etc/resolv.conf``. This leaves you with a cluster that works as expected in most cases.
|
||||
The only exception is that ``hostNetwork: true`` PODs and non-k8s managed containers will not be able to resolve
|
||||
cluster service names.
|
||||
|
||||
|
||||
Limitations
|
||||
-----------
|
||||
|
||||
* Kargo has no way yet to configure the Kubedns add-on to forward requests that SkyDns cannot
|
||||
answer authoritatively to arbitrary recursive resolvers. This task is left
|
||||
for the future. See the [official SkyDns docs](https://github.com/skynetservices/skydns)
|
||||
for details.
|
||||
|
||||
* There is
|
||||
[no way to specify a custom value](https://github.com/kubernetes/kubernetes/issues/33554)
|
||||
for the SkyDNS ``ndots`` param via an
|
||||
[option for KubeDNS](https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-dns/app/options/options.go)
|
||||
add-on, even though SkyDNS itself supports it.
|
||||
|
||||
* The ``searchdomains`` are limited to 6 names and 256 characters in total
|
||||
length. Due to default ``svc, default.svc`` subdomains, the actual
|
||||
limits are 4 names and 239 characters respectively.
|
||||
|
||||
* The ``nameservers`` are limited to 3 servers, although there
|
||||
is a way to mitigate that with ``upstream_dns_servers``
|
||||
(see the section above). In any case, ``nameservers`` can take no more than two
|
||||
custom DNS servers because one slot is reserved for the Kubernetes
|
||||
cluster's needs.
|
42
docs/downloads.md
Normal file
42
docs/downloads.md
Normal file
@ -0,0 +1,42 @@
|
||||
Downloading binaries and containers
|
||||
===================================
|
||||
|
||||
Kargo supports several download/upload modes. The default is:
|
||||
|
||||
* Each node downloads binaries and container images on its own, which is
|
||||
``download_run_once: False``.
|
||||
* For K8s apps, pull policy is ``k8s_image_pull_policy: IfNotPresent``.
|
||||
* For system managed containers, like kubelet or etcd, pull policy is
|
||||
``download_always_pull: False``, which means pull only if the wanted repo and
|
||||
tag/sha256 digest differ from what the host already has.
|
||||
|
||||
There is also a "pull once, push many" mode:
|
||||
|
||||
* Override the ``download_run_once: True`` to download container images only once
|
||||
then push to cluster nodes in batches. The default delegate node
|
||||
for pushing images is the first `kube-master`.
|
||||
* If your ansible runner node (aka the admin node) has password-less sudo and
|
||||
docker enabled, you may want to define the ``download_localhost: True``, which
|
||||
makes that node a delegate for pushing images while running the deployment with
|
||||
ansible. This may be the case if cluster nodes cannot access each other via ssh
|
||||
or you want to use local docker images as a cache for multiple clusters.
|
||||
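For instance, to enable the "pull once, push many" mode with the admin node pushing images, you could set (a sketch, assuming the admin node meets the requirements above):

```
download_run_once: true
download_localhost: true
```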
|
||||
Container images and binary files are described by the vars like ``foo_version``,
|
||||
``foo_download_url``, ``foo_checksum`` for binaries and ``foo_image_repo``,
|
||||
``foo_image_tag`` or optional ``foo_digest_checksum`` for containers.
|
||||
|
||||
Container images may be defined by their repo and tag, for example:
|
||||
`andyshinn/dnsmasq:2.72`. Or by repo and tag and sha256 digest:
|
||||
`andyshinn/dnsmasq@sha256:7c883354f6ea9876d176fe1d30132515478b2859d6fc0cbf9223ffdc09168193`.
|
||||
|
||||
Note, the sha256 digest and the image tag must be both specified and correspond
|
||||
to each other. The given example above is represented by the following vars:
|
||||
```
|
||||
dnsmasq_digest_checksum: 7c883354f6ea9876d176fe1d30132515478b2859d6fc0cbf9223ffdc09168193
|
||||
dnsmasq_image_repo: andyshinn/dnsmasq
|
||||
dnsmasq_image_tag: '2.72'
|
||||
```
|
||||
The full list of available vars may be found in the download role's Ansible defaults.
|
||||
Those also allow you to specify custom URLs and local repositories for binaries and container
|
||||
images. See also the DNS stack docs for the related intranet configuration,
|
||||
so the hosts can resolve those URLs and repos.
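For example, to pull the dnsmasq image from a local registry instead of the default repo (the registry host below is hypothetical):

```
dnsmasq_image_repo: "registry.intranet.example.com/andyshinn/dnsmasq"
```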
|
BIN
docs/figures/kargo-calico-rr.png
Normal file
BIN
docs/figures/kargo-calico-rr.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 40 KiB |
BIN
docs/figures/loadbalancer_localhost.png
Normal file
BIN
docs/figures/loadbalancer_localhost.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 57 KiB |
51
docs/flannel.md
Normal file
51
docs/flannel.md
Normal file
@ -0,0 +1,51 @@
|
||||
Flannel
|
||||
==============
|
||||
|
||||
* A Flannel configuration file should have been created on each node:
|
||||
|
||||
```
|
||||
cat /run/flannel/subnet.env
|
||||
FLANNEL_NETWORK=10.233.0.0/18
|
||||
FLANNEL_SUBNET=10.233.16.1/24
|
||||
FLANNEL_MTU=1450
|
||||
FLANNEL_IPMASQ=false
|
||||
```
|
||||
|
||||
* Check if the network interface has been created
|
||||
|
||||
```
|
||||
ip a show dev flannel.1
|
||||
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
|
||||
link/ether e2:f3:a7:0f:bf:cb brd ff:ff:ff:ff:ff:ff
|
||||
inet 10.233.16.0/18 scope global flannel.1
|
||||
valid_lft forever preferred_lft forever
|
||||
inet6 fe80::e0f3:a7ff:fe0f:bfcb/64 scope link
|
||||
valid_lft forever preferred_lft forever
|
||||
```
|
||||
|
||||
* Docker must be configured with a bridge ip in the flannel subnet.
|
||||
|
||||
```
|
||||
ps aux | grep docker
|
||||
root 20196 1.7 2.7 1260616 56840 ? Ssl 10:18 0:07 /usr/bin/docker daemon --bip=10.233.16.1/24 --mtu=1450
|
||||
```
|
||||
|
||||
* Try to run a container and check its ip address
|
||||
|
||||
```
|
||||
kubectl run test --image=busybox --command -- tail -f /dev/null
|
||||
replicationcontroller "test" created
|
||||
|
||||
kubectl describe po test-34ozs | grep ^IP
|
||||
IP: 10.233.16.2
|
||||
```
|
||||
|
||||
```
|
||||
kubectl exec test-34ozs -- ip a show dev eth0
|
||||
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
|
||||
link/ether 02:42:0a:e9:2b:03 brd ff:ff:ff:ff:ff:ff
|
||||
inet 10.233.16.2/24 scope global eth0
|
||||
valid_lft forever preferred_lft forever
|
||||
inet6 fe80::42:aff:fee9:2b03/64 scope link tentative flags 08
|
||||
valid_lft forever preferred_lft forever
|
||||
```
|
32
docs/getting-started.md
Normal file
32
docs/getting-started.md
Normal file
@ -0,0 +1,32 @@
|
||||
Getting started
|
||||
===============
|
||||
|
||||
The easiest way to run the deployment is to use the **kargo-cli** tool.
|
||||
Complete documentation can be found in its [GitHub repository](https://github.com/kubespray/kargo-cli).
|
||||
|
||||
Here is a simple example on AWS:
|
||||
|
||||
* Create instances and generate the inventory
|
||||
|
||||
```
|
||||
kargo aws --instances 3
|
||||
```
|
||||
|
||||
* Run the deployment
|
||||
|
||||
```
|
||||
kargo deploy --aws -u centos -n calico
|
||||
```
|
||||
|
||||
Building your own inventory
|
||||
---------------------------
|
||||
|
||||
Ansible inventory can be stored in 3 formats: YAML, JSON, or INI file. There is
|
||||
an example inventory located
|
||||
[here](https://github.com/kubernetes-incubator/kargo/blob/master/inventory/inventory.example).
|
||||
|
||||
You can use an
|
||||
[inventory generator](https://github.com/kubernetes-incubator/kargo/blob/master/contrib/inventory_generator/inventory_generator.py)
|
||||
to create or modify an Ansible inventory. Currently, it is limited in
|
||||
functionality and is only used for making a basic Kargo cluster, but it does
|
||||
support creating large clusters.
|
105
docs/ha-mode.md
Normal file
105
docs/ha-mode.md
Normal file
@ -0,0 +1,105 @@
|
||||
HA endpoints for K8s
|
||||
====================
|
||||
|
||||
The following components require highly available endpoints:
|
||||
* etcd cluster,
|
||||
* kube-apiserver service instances.
|
||||
|
||||
The latter relies on third-party reverse proxies, like Nginx or HAProxy, to
|
||||
achieve the same goal.
|
||||
|
||||
Etcd
|
||||
----
|
||||
|
||||
Etcd proxies are deployed on each node in the `k8s-cluster` group. A proxy is
|
||||
a separate etcd process. It has a `localhost:2379` frontend and all of the etcd
|
||||
cluster members as backends. Note that the `access_ip` is used as the backend
|
||||
IP, if specified. Frontend endpoints cannot be accessed externally as they are
|
||||
bound to a localhost only.
|
||||
|
||||
The `etcd_access_endpoint` fact provides an access pattern for clients. And the
|
||||
`etcd_multiaccess` (defaults to `false`) group var controls that behavior.
|
||||
When enabled, it makes deployed components access the etcd cluster members
|
||||
directly: `http://ip1:2379, http://ip2:2379,...`. This mode assumes the clients
|
||||
do load balancing and handle HA for connections. Note that the pod definition of the
|
||||
flannel networking plugin always uses a single `--etcd-server` endpoint!
|
||||
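For example, to have components reach the etcd members directly, a group vars sketch:

```
etcd_multiaccess: true
```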
|
||||
|
||||
Kube-apiserver
|
||||
--------------
|
||||
|
||||
K8s components require a loadbalancer to access the apiservers via a reverse
|
||||
proxy. Kargo includes support for an nginx-based proxy that resides on each
|
||||
non-master Kubernetes node. This is referred to as localhost loadbalancing. It
|
||||
is less efficient than a dedicated load balancer because it creates extra
|
||||
health checks on the Kubernetes apiserver, but is more practical for scenarios
|
||||
where an external LB or virtual IP management is inconvenient.
|
||||
|
||||
This option is configured by the variable `loadbalancer_apiserver_localhost`.
|
||||
If you choose not to use it, you will need to configure your own loadbalancer to achieve HA. Note that
|
||||
deploying a loadbalancer is up to a user and is not covered by ansible roles
|
||||
in Kargo. By default, it only configures a non-HA endpoint, which points to
|
||||
the `access_ip` or IP address of the first server node in the `kube-master`
|
||||
group. It can also configure clients to use endpoints for a given loadbalancer
|
||||
type. The following diagram shows how traffic to the apiserver is directed.
|
||||
|
||||

|
||||
|
||||
Note: Kubernetes master nodes still use insecure localhost access because
|
||||
there are bugs in Kubernetes <1.5.0 in using TLS auth on master role
|
||||
services. This means backends receive unencrypted traffic, which may be a
|
||||
security issue when interconnecting different nodes, or maybe not, if those
|
||||
belong to an isolated management network without external access.
|
||||
|
||||
A user may opt to use an external loadbalancer (LB) instead. An external LB
|
||||
provides access for external clients, while the internal LB accepts client
|
||||
connections only to the localhost.
|
||||
Given a frontend `VIP` address and `IP1, IP2` addresses of backends, here is
|
||||
an example configuration for a HAProxy service acting as an external LB:
|
||||
```
|
||||
listen kubernetes-apiserver-https
|
||||
bind <VIP>:8383
|
||||
option ssl-hello-chk
|
||||
mode tcp
|
||||
timeout client 3h
|
||||
timeout server 3h
|
||||
server master1 <IP1>:443
|
||||
server master2 <IP2>:443
|
||||
balance roundrobin
|
||||
```
|
||||
|
||||
And the corresponding example global vars config:
|
||||
```
|
||||
apiserver_loadbalancer_domain_name: "lb-apiserver.kubernetes.local"
|
||||
loadbalancer_apiserver:
|
||||
address: <VIP>
|
||||
port: 8383
|
||||
```
|
||||
|
||||
This domain name, or default "lb-apiserver.kubernetes.local", will be inserted
|
||||
into the `/etc/hosts` file of all servers in the `k8s-cluster` group. Note that
|
||||
the HAProxy service itself should also be HA and requires VIP management, which
|
||||
is out of scope of this doc. Specifying an external LB overrides any internal
|
||||
localhost LB configuration.
|
||||
|
||||
Note: In order to achieve HA for HAProxy instances, those must be running on
|
||||
each node in the `k8s-cluster` group as well, but they require no VIP, thus
|
||||
no VIP management.
|
||||
|
||||
Access endpoints are evaluated automagically, as follows:
|
||||
|
||||
| Endpoint type | kube-master | non-master |
|
||||
|------------------------------|---------------|---------------------|
|
||||
| Local LB | http://lc:p | https://lc:sp |
|
||||
| External LB, no internal | https://lb:lp | https://lb:lp |
|
||||
| No ext/int LB (default) | http://lc:p | https://m[0].aip:sp |
|
||||
|
||||
Where:
|
||||
* `m[0]` - the first node in the `kube-master` group;
|
||||
* `lb` - LB FQDN, `apiserver_loadbalancer_domain_name`;
|
||||
* `lc` - localhost;
|
||||
* `p` - insecure port, `kube_apiserver_insecure_port`
|
||||
* `sp` - secure port, `kube_apiserver_port`;
|
||||
* `lp` - LB port, `loadbalancer_apiserver.port`, defers to the secure port;
|
||||
* `ip` - the node IP, defers to the ansible IP;
|
||||
* `aip` - `access_ip`, defers to the ip.
|
31
docs/large-deployments.md
Normal file
31
docs/large-deployments.md
Normal file
@ -0,0 +1,31 @@
|
||||
Large deployments of K8s
|
||||
========================
|
||||
|
||||
For large scale deployments, consider the following configuration changes:
|
||||
|
||||
* Tune [ansible settings](http://docs.ansible.com/ansible/intro_configuration.html)
|
||||
for `forks` and `timeout` vars to fit large numbers of nodes being deployed.
|
||||
|
||||
* Override containers' `foo_image_repo` vars to point to intranet registry.
|
||||
|
||||
* Override the ``download_run_once: true`` and/or ``download_localhost: true``.
|
||||
See download modes for details.
|
||||
|
||||
* Adjust the `retry_stagger` global var as appropriate. It should provide sane
|
||||
load on a delegate (the first K8s master node) when retrying failed
|
||||
push or download operations.
|
||||
|
||||
* Tune parameters for DNS related applications (dnsmasq daemon set, kubedns
|
||||
replication controller). Those are ``dns_replicas``, ``dns_cpu_limit``,
|
||||
``dns_cpu_requests``, ``dns_memory_limit``, ``dns_memory_requests``.
|
||||
Please note that limits must always be greater than or equal to requests.
|
||||
|
||||
* Tune CPU/memory limits and requests. Those are located in roles' defaults
|
||||
and named like ``foo_memory_limit``, ``foo_memory_requests`` and
|
||||
``foo_cpu_limit``, ``foo_cpu_requests``. Note that 'Mi' memory units for K8s
|
||||
will be submitted as 'M', if applied for ``docker run``, and cpu K8s units will
|
||||
end up with the 'm' skipped for docker as well. This is required as docker does not
|
||||
understand k8s units well.
|
||||
|
||||
For example, when deploying 200 nodes, you may want to run ansible with
|
||||
``--forks=50``, ``--timeout=600`` and define the ``retry_stagger: 60``.
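A sketch of such an invocation (the inventory path is an assumption; adjust it to your setup):

```
ansible-playbook -i inventory/inventory.cfg -b --become-user=root \
  --forks=50 --timeout=600 -e retry_stagger=60 cluster.yml
```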
|
41
docs/netcheck.md
Normal file
41
docs/netcheck.md
Normal file
@ -0,0 +1,41 @@
|
||||
Network Checker Application
|
||||
===========================
|
||||
|
||||
With the ``deploy_netchecker`` var enabled (defaults to false), Kargo deploys a
|
||||
Network Checker Application from the third-party `l23network/mcp-netchecker` docker
|
||||
images. It consists of a server and agents that try to reach the server through
|
||||
the usual network connectivity means available to Kubernetes applications. Therefore, this
|
||||
automagically verifies pod-to-pod connectivity via the cluster IP and checks
|
||||
if DNS resolution is functioning as well.
|
||||
|
||||
The checks are run by agents on a periodic basis and cover standard and host network
|
||||
pods as well. The history of performed checks may be found in the agents' application
|
||||
logs.
|
||||
|
||||
To get the most recent and cluster-wide network connectivity report, run from
|
||||
any of the cluster nodes:
|
||||
```
|
||||
curl http://localhost:31081/api/v1/connectivity_check
|
||||
```
|
||||
Note that Kargo does not invoke the check but only deploys the application, if
|
||||
requested.
|
||||
|
||||
There are related application-specific variables:
|
||||
```
|
||||
netchecker_port: 31081
|
||||
agent_report_interval: 15
|
||||
netcheck_namespace: default
|
||||
agent_img: "quay.io/l23network/mcp-netchecker-agent:v0.1"
|
||||
server_img: "quay.io/l23network/mcp-netchecker-server:v0.1"
|
||||
```
|
||||
|
||||
Note that the application verifies DNS resolution only for FQDNs built from the
|
||||
combination of the ``netcheck_namespace`` and ``dns_domain`` vars, for example
|
||||
``netchecker-service.default.cluster.local``. If you want to deploy the application
|
||||
to a non-default namespace, make sure to adjust the ``searchdomains`` var as well,
|
||||
so that the resulting search domain records contain that namespace, like:
|
||||
|
||||
```
|
||||
search: foospace.cluster.local default.cluster.local ...
|
||||
nameserver: ...
|
||||
```
|
48
docs/openstack.md
Normal file
48
docs/openstack.md
Normal file
@ -0,0 +1,48 @@
|
||||
OpenStack
|
||||
===============
|
||||
|
||||
To deploy kubespray on [OpenStack](https://www.openstack.org/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'openstack'`.
|
||||
|
||||
After that make sure to source in your OpenStack credentials like you would do when using `nova-client` by using `source path/to/your/openstack-rc`.
|
||||
|
||||
The next step is to make sure the hostnames in your `inventory` file are identical to your instance names in OpenStack.
|
||||
Otherwise [cinder](https://wiki.openstack.org/wiki/Cinder) won't work as expected.
|
||||
|
||||
Unless you are using calico you can now run the playbook.
|
||||
|
||||
**Additional step needed when using calico:**
|
||||
|
||||
Calico does not encapsulate all packets with the host's IP addresses. Instead, packets are routed with the pods' IP addresses directly.
|
||||
OpenStack will filter and drop all packets from IPs it does not know in order to prevent spoofing.
|
||||
|
||||
In order to make calico work on OpenStack, you will need to tell OpenStack to allow calico's packets by allowing the network it uses.
|
||||
|
||||
First you will need the IDs of your OpenStack instances that will run kubernetes:
|
||||
|
||||
nova list --tenant Your-Tenant
|
||||
+--------------------------------------+--------+----------------------------------+--------+-------------+
|
||||
| ID | Name | Tenant ID | Status | Power State |
|
||||
+--------------------------------------+--------+----------------------------------+--------+-------------+
|
||||
| e1f48aad-df96-4bce-bf61-62ae12bf3f95 | k8s-1 | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running |
|
||||
| 725cd548-6ea3-426b-baaa-e7306d3c8052 | k8s-2 | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running |
|
||||
|
||||
Then you can use the instance ids to find the connected [neutron](https://wiki.openstack.org/wiki/Neutron) ports:
|
||||
|
||||
neutron port-list -c id -c device_id
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| id | device_id |
|
||||
+--------------------------------------+--------------------------------------+
|
||||
| 5662a4e0-e646-47f0-bf88-d80fbd2d99ef | e1f48aad-df96-4bce-bf61-62ae12bf3f95 |
|
||||
| e5ae2045-a1e1-4e99-9aac-4353889449a7 | 725cd548-6ea3-426b-baaa-e7306d3c8052 |
|
||||
|
||||
Given the port ids on the left, you can set the `allowed_address_pairs` in neutron:
|
||||
|
||||
# allow kube_service_addresses network
|
||||
neutron port-update 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed_address_pairs list=true type=dict ip_address=10.233.0.0/18
|
||||
neutron port-update e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed_address_pairs list=true type=dict ip_address=10.233.0.0/18
|
||||
|
||||
# allow kube_pods_subnet network
|
||||
neutron port-update 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed_address_pairs list=true type=dict ip_address=10.233.64.0/18
|
||||
neutron port-update e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed_address_pairs list=true type=dict ip_address=10.233.64.0/18
|
||||
|
||||
Now you can finally run the playbook.
|
93
docs/roadmap.md
Normal file
93
docs/roadmap.md
Normal file
@ -0,0 +1,93 @@
|
||||
Kargo's roadmap
|
||||
=================
|
||||
|
||||
### Kubeadm
|
||||
- Propose kubeadm as an option in order to setup the kubernetes cluster.
|
||||
That would probably improve deployment speed and certs management [#553](https://github.com/kubespray/kargo/issues/553)
|
||||
|
||||
### Self deployment (pull-mode) [#320](https://github.com/kubespray/kargo/issues/320)
|
||||
- the playbook would install and configure docker/rkt and the etcd cluster
|
||||
- the following data would be inserted into etcd: certs,tokens,users,inventory,group_vars.
|
||||
- a "kubespray" container would be deployed (kargo-cli, ansible-playbook, kpm)
|
||||
- to be discussed, a way to provide the inventory
|
||||
- **self deployment** of the node from inside a container [#321](https://github.com/kubespray/kargo/issues/321)
|
||||
|
||||
### Provisioning and cloud providers
|
||||
- Terraform to provision instances on **GCE, AWS, Openstack, Digital Ocean, Azure**
|
||||
- On AWS autoscaling, multi AZ
|
||||
- On Azure autoscaling, create loadbalancer [#297](https://github.com/kubespray/kargo/issues/297)
|
||||
- On GCE be able to create a loadbalancer automatically (IAM ?) [#280](https://github.com/kubespray/kargo/issues/280)
|
||||
- **TLS bootstrap** support for kubelet [#234](https://github.com/kubespray/kargo/issues/234)
|
||||
(related issues: https://github.com/kubernetes/kubernetes/pull/20439 <br>
|
||||
https://github.com/kubernetes/kubernetes/issues/18112)
|
||||
|
||||
### Tests
|
||||
- Run kubernetes e2e tests
|
||||
- migrate to jenkins
|
||||
(a test is currently a deployment on a 3 node cluster, testing the k8s api and ping between 2 pods)
|
||||
- Full tests on GCE per day (All OS's, all network plugins)
|
||||
- trigger a single test per pull request
|
||||
- single test with the Ansible version n-1 per day
|
||||
- Test idempotency on a single OS but for all network plugins/container engines
|
||||
- single test on AWS per day
|
||||
- test different architectures:
|
||||
- 3 instances, 3 are members of the etcd cluster, 2 of them acting as master and node, 1 as node
|
||||
- 5 instances, 3 are etcd and nodes, 2 are masters only
|
||||
- 7 instances, 3 etcd only, 2 masters, 2 nodes
|
||||
- test scale up cluster: +1 etcd, +1 master, +1 node
|
||||
|
||||
### Lifecycle
|
||||
- Adopt the kubeadm tool by delegating to it the CM tasks it is capable of accomplishing well [#553](https://github.com/kubespray/kargo/issues/553)
|
||||
- Drain worker node when upgrading k8s components in a worker node. [#154](https://github.com/kubespray/kargo/issues/154)
|
||||
- Drain worker node when shutting down/deleting an instance
|
||||
|
||||
### Networking
|
||||
- romana.io support [#160](https://github.com/kubespray/kargo/issues/160)
|
||||
- Configure network policy for Calico. [#159](https://github.com/kubespray/kargo/issues/159)
|
||||
- Opencontrail
|
||||
- Canal
|
||||
- Cloud Provider native networking (instead of our network plugins)
|
||||
|
||||
### High availability
|
||||
- (to be discussed) option to set up a loadbalancer for the apiservers, like ucarp/pacemaker/keepalived,
|
||||
while waiting for the issue [kubernetes/kubernetes#18174](https://github.com/kubernetes/kubernetes/issues/18174) to be fixed.
|
||||
|
||||
### Kargo-cli
|
||||
- Delete instances
|
||||
- `kargo vagrant` to setup a test cluster locally
|
||||
- `kargo azure` for Microsoft Azure support
|
||||
- switch to Terraform instead of Ansible for provisioning
|
||||
- update $HOME/.kube/config when a cluster is deployed. Optionally switch to this context
|
||||
|
||||
### Kargo API
|
||||
- Perform all actions through an **API**
|
||||
- Store inventories / configurations of multiple clusters
|
||||
- make sure that state of cluster is completely saved in no more than one config file beyond hosts inventory
|
||||
|
||||
### Addons (with kpm)
|
||||
Include optional deployments to init the cluster:
|
||||
##### Monitoring
|
||||
- Heapster / Grafana ....
|
||||
- **Prometheus**
|
||||
|
||||
##### Others
|
||||
|
||||
##### Dashboards:
|
||||
- kubernetes-dashboard
|
||||
- Fabric8
|
||||
- Tectonic
|
||||
- Cockpit
|
||||
|
||||
##### Paas like
|
||||
- Openshift Origin
|
||||
- Openstack
|
||||
- Deis Workflow
|
||||
|
||||
### Others
|
||||
- remove nodes (adding is already supported)
|
||||
- being able to choose any k8s version (almost done)
|
||||
- **rkt** support [#59](https://github.com/kubespray/kargo/issues/59)
|
||||
- Review documentation (split in categories)
|
||||
- **consul** -> if officialy supported by k8s
|
||||
- flex volumes options (e.g. **torrus** support) [#312](https://github.com/kubespray/kargo/issues/312)
|
||||
- Clusters federation option (aka **ubernetes**) [#329](https://github.com/kubespray/kargo/issues/329)
|
54
docs/test_cases.md
Normal file
54
docs/test_cases.md
Normal file
@ -0,0 +1,54 @@
|
||||
Travis CI test matrix
|
||||
=====================
|
||||
|
||||
GCE instances
|
||||
-------------
|
||||
|
||||
Here is the test matrix for the Travis CI gates:
|
||||
|
||||
| Network plugin| OS type| GCE region| Nodes layout|
|
||||
|-------------------------|-------------------------|-------------------------|-------------------------|
|
||||
| canal| debian-8-kubespray| asia-east1-a| ha|
|
||||
| calico| debian-8-kubespray| europe-west1-c| default|
|
||||
| flannel| centos-7| asia-northeast1-c| default|
|
||||
| calico| centos-7| us-central1-b| ha|
|
||||
| weave| rhel-7| us-east1-c| default|
|
||||
| canal| coreos-stable| us-west1-b| default|
|
||||
| canal| rhel-7| asia-northeast1-b| separate|
|
||||
| weave| ubuntu-1604-xenial| europe-west1-d| separate|
|
||||
| calico| coreos-stable| us-central1-f| separate|
|
||||
|
||||
Where the nodes layout `default` is a non-HA two node setup with a separate `kube-node`
|
||||
and the `etcd` group merged with the `kube-master`. The `separate` layout is when
|
||||
there is only one node of each type: a kube master, compute node and etcd cluster member.
|
||||
And the `ha` layout stands for two etcd nodes, two masters and a single worker node,
|
||||
with the groups partially intersecting.
|
||||
|
||||
Note that the canal network plugin deploys flannel as well as the calico policy controller.
|
||||
|
||||
Hint: the command
|
||||
```
|
||||
bash scripts/gen_matrix.sh
|
||||
```
|
||||
will (hopefully) generate the CI test cases from the current ``.travis.yml``.
|
||||
|
||||
Gitlab CI test matrix
|
||||
=====================
|
||||
|
||||
GCE instances
|
||||
-------------
|
||||
|
||||
| Stage| Network plugin| OS type| GCE region| Nodes layout
|
||||
|--------------------|--------------------|--------------------|--------------------|--------------------|
|
||||
| part1| calico| coreos-stable| us-west1-b| separated|
|
||||
| part1| canal| debian-8-kubespray| us-east1-b| ha|
|
||||
| part1| weave| rhel-7| europe-west1-b| default|
|
||||
| part2| flannel| centos-7| us-west1-a| default|
|
||||
| part2| calico| debian-8-kubespray| us-central1-b| default|
|
||||
| part2| canal| coreos-stable| us-east1-b| default|
|
||||
| special| canal| rhel-7| us-east1-b| separated|
|
||||
| special| weave| ubuntu-1604-xenial| us-central1-b| separated|
|
||||
| special| calico| centos-7| europe-west1-b| ha|
|
||||
| special| weave| coreos-alpha| us-west1-a| ha|
|
||||
|
||||
The "Stage" means a build step of the build pipeline. The steps are ordered as `part1->part2->special`.
|
47
docs/upgrades.md
Normal file
47
docs/upgrades.md
Normal file
@ -0,0 +1,47 @@
|
||||
Upgrading Kubernetes in Kargo
|
||||
=============================
|
||||
|
||||
#### Description
|
||||
|
||||
Kargo handles upgrades the same way it handles initial deployment. That is to
|
||||
say that each component is laid down in a fixed order. You should be able to
|
||||
upgrade from Kargo tag 2.0 up to the current master without difficulty. You can
|
||||
also individually control versions of components by explicitly defining their
|
||||
versions. Here are all version vars for each component:
|
||||
|
||||
* docker_version
|
||||
* kube_version
|
||||
* etcd_version
|
||||
* calico_version
|
||||
* calico_cni_version
|
||||
* weave_version
|
||||
* flannel_version
|
||||
* kubedns_version
|
||||
|
||||
#### Example
|
||||
|
||||
If you wanted to upgrade just kube_version from v1.4.3 to v1.4.6, you could
|
||||
deploy the following way:
|
||||
|
||||
```
|
||||
ansible-playbook cluster.yml -i inventory/inventory.cfg -e kube_version=v1.4.3
|
||||
```
|
||||
|
||||
And then repeat with v1.4.6 as kube_version:
|
||||
|
||||
```
|
||||
ansible-playbook cluster.yml -i inventory/inventory.cfg -e kube_version=v1.4.6
|
||||
```
|
||||
|
||||
#### Upgrade order
|
||||
|
||||
As mentioned above, components are upgraded in the order in which they were
|
||||
installed in the Ansible playbook. The order of component installation is as
|
||||
follows:
|
||||
|
||||
1. Docker
|
||||
2. etcd
|
||||
3. kubelet and kube-proxy
|
||||
4. network_plugin (such as Calico or Weave)
|
||||
5. kube-apiserver, kube-scheduler, and kube-controller-manager
|
||||
6. Add-ons (such as KubeDNS)
|
41
docs/vagrant.md
Normal file
41
docs/vagrant.md
Normal file
@ -0,0 +1,41 @@
|
||||
Vagrant Install
|
||||
=================
|
||||
|
||||
Assuming you have Vagrant (1.8+) installed with virtualbox (it may work
|
||||
with vmware, but is untested) you should be able to launch a 3 node
|
||||
Kubernetes cluster by simply running `$ vagrant up`.<br />
|
||||
|
||||
This will spin up 3 VMs and install kubernetes on them. Once they are
|
||||
completed you can connect to any of them by running <br />
|
||||
`$ vagrant ssh k8s-0[1..3]`.
|
||||
|
||||
```
|
||||
$ vagrant up
|
||||
Bringing machine 'k8s-01' up with 'virtualbox' provider...
|
||||
Bringing machine 'k8s-02' up with 'virtualbox' provider...
|
||||
Bringing machine 'k8s-03' up with 'virtualbox' provider...
|
||||
==> k8s-01: Box 'bento/ubuntu-14.04' could not be found. Attempting to find and install...
|
||||
...
|
||||
...
|
||||
k8s-03: Running ansible-playbook...
|
||||
|
||||
PLAY [k8s-cluster] *************************************************************
|
||||
|
||||
TASK [setup] *******************************************************************
|
||||
ok: [k8s-03]
|
||||
ok: [k8s-01]
|
||||
ok: [k8s-02]
|
||||
...
|
||||
...
|
||||
PLAY RECAP *********************************************************************
|
||||
k8s-01 : ok=157 changed=66 unreachable=0 failed=0
|
||||
k8s-02 : ok=137 changed=59 unreachable=0 failed=0
|
||||
k8s-03 : ok=86 changed=51 unreachable=0 failed=0
|
||||
|
||||
$ vagrant ssh k8s-01
|
||||
vagrant@k8s-01:~$ kubectl get nodes
|
||||
NAME STATUS AGE
|
||||
k8s-01 Ready 45s
|
||||
k8s-02 Ready 45s
|
||||
k8s-03 Ready 45s
|
||||
```
|
100
docs/vars.md
Normal file
100
docs/vars.md
Normal file
@ -0,0 +1,100 @@
|
||||
Configurable Parameters in Kargo
|
||||
================================
|
||||
|
||||
#### Generic Ansible variables
|
||||
|
||||
You can view facts gathered by Ansible automatically
|
||||
[here](http://docs.ansible.com/ansible/playbooks_variables.html#information-discovered-from-systems-facts).
|
||||
|
||||
Some variables of note include:
|
||||
|
||||
* *ansible_user*: user to connect to via SSH
|
||||
* *ansible_default_ipv4.address*: IP address Ansible automatically chooses.
|
||||
Generated based on the output from the command ``ip -4 route get 8.8.8.8``
|
||||
|
||||
#### Common vars that are used in Kargo
|
||||
|
||||
* *calico_version* - Specify version of Calico to use
|
||||
* *calico_cni_version* - Specify version of Calico CNI plugin to use
|
||||
* *docker_version* - Specify version of Docker to use (should be a quoted
|
||||
string)
|
||||
* *etcd_version* - Specify version of ETCD to use
|
||||
* *ipip* - Enables Calico ipip encapsulation by default
|
||||
* *hyperkube_image_repo* - Specify the Docker repository where Hyperkube
|
||||
resides
|
||||
* *hyperkube_image_tag* - Specify the Docker tag where Hyperkube resides
|
||||
* *kube_network_plugin* - Changes k8s plugin to Calico
|
||||
* *kube_proxy_mode* - Changes k8s proxy mode to iptables mode
|
||||
* *kube_version* - Specify a given Kubernetes hyperkube version
|
||||
* *searchdomains* - Array of DNS domains to search when looking up hostnames
|
||||
* *nameservers* - Array of nameservers to use for DNS lookup
|
||||
|
||||
#### Addressing variables
|
||||
|
||||
* *ip* - IP to use for binding services (host var)
|
||||
* *access_ip* - IP for other hosts to use to connect to. Often required when
|
||||
deploying from a cloud, such as OpenStack or GCE and you have separate
|
||||
public/floating and private IPs.
|
||||
* *ansible_default_ipv4.address* - Not Kargo-specific, but it is used if ip
|
||||
and access_ip are undefined
|
||||
* *loadbalancer_apiserver* - If defined, all hosts will connect to this
|
||||
address instead of localhost for kube-masters and kube-master[0] for
|
||||
kube-nodes. See more details in the
|
||||
[HA guide](https://github.com/kubernetes-incubator/kargo/blob/master/docs/ha-mode.md).
|
||||
* *loadbalancer_apiserver_localhost* - If enabled, all hosts will connect to
|
||||
the apiserver internally load balanced endpoint. See more details in the
|
||||
[HA guide](https://github.com/kubernetes-incubator/kargo/blob/master/docs/ha-mode.md).
|
||||
|
||||
#### Cluster variables
|
||||
|
||||
Kubernetes needs some parameters in order to get deployed. These are the
|
||||
following default cluster parameters:
|
||||
|
||||
* *cluster_name* - Name of cluster (default is cluster.local)
|
||||
* *domain_name* - Name of cluster DNS domain (default is cluster.local)
|
||||
* *kube_network_plugin* - Plugin to use for container networking
|
||||
* *kube_service_addresses* - Subnet for cluster IPs (default is
|
||||
10.233.0.0/18). Must not overlap with kube_pods_subnet
|
||||
* *kube_pods_subnet* - Subnet for Pod IPs (default is 10.233.64.0/18). Must not
|
||||
overlap with kube_service_addresses.
|
||||
* *kube_network_node_prefix* - Subnet allocated per-node for pod IPs. Remaining
|
||||
bits in kube_pods_subnet dictate how many kube-nodes can be in the cluster.
|
||||
* *dns_setup* - Enables dnsmasq
|
||||
* *dns_server* - Cluster IP for dnsmasq (default is 10.233.0.2)
|
||||
* *skydns_server* - Cluster IP for KubeDNS (default is 10.233.0.3)
|
||||
* *cloud_provider* - Enable extra Kubelet option if operating inside GCE or
|
||||
OpenStack (default is unset)
|
||||
* *kube_hostpath_dynamic_provisioner* - Required for use of PetSets type in
|
||||
Kubernetes
|
||||
|
||||
Note, if cloud providers make any use of ``10.233.0.0/16``, like instances'
|
||||
private addresses, make sure to pick other values for ``kube_service_addresses``
|
||||
and ``kube_pods_subnet``, for example from the ``172.18.0.0/16``.
|
||||
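A sketch of these cluster parameters in group vars, using the defaults listed above (the node prefix value is illustrative):

```
cluster_name: cluster.local
kube_network_plugin: calico
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
kube_network_node_prefix: 24
```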
|
||||
#### DNS variables
|
||||
|
||||
By default, dnsmasq gets set up with 8.8.8.8 as an upstream DNS server and all
|
||||
other settings from your existing /etc/resolv.conf are lost. Set the following
|
||||
variables to match your requirements.
|
||||
|
||||
* *upstream_dns_servers* - Array of upstream DNS servers configured on host in
|
||||
addition to Kargo deployed DNS
|
||||
* *nameservers* - Array of DNS servers configured for use in dnsmasq
|
||||
* *searchdomains* - Array of up to 4 search domains
|
||||
* *skip_dnsmasq* - Don't set up dnsmasq (use only KubeDNS)
|
||||
|
||||
For more information, see [DNS
|
||||
Stack](https://github.com/kubernetes-incubator/kargo/blob/master/docs/dns-stack.md).
|
||||
|
||||
#### Other service variables
|
||||
|
||||
* *docker_options* - Commonly used to set
|
||||
``--insecure-registry=myregistry.mydomain:5000``
|
||||
* *http_proxy/https_proxy/no_proxy* - Proxy variables for deploying behind a
|
||||
proxy
|
||||
|
||||
#### User accounts
|
||||
|
||||
Kargo sets up two Kubernetes accounts by default: ``root`` and ``kube``. Their
|
||||
passwords default to ``changeme``. You can change them by setting ``kube_api_pwd``.
|
||||
|