Compare commits

..

94 Commits

Author SHA1 Message Date
ec77f046fb Update CCP installation instructions 2016-07-13 16:48:11 +02:00
66a178c614 Another minor fix in readme 2016-07-13 16:08:02 +02:00
95a2bcdd9d Fixed docs 2016-07-13 16:07:11 +02:00
7ab62170e0 All patches are merged, no need to pull reviews 2016-07-13 15:33:02 +02:00
8c37d0aa1f Update the list of patchsets for ccp-neutron 2016-07-13 13:38:12 +02:00
438a4bdeca Update ccp-neutron patch sets 2016-07-12 19:14:17 +02:00
da0a973dd4 Added a list of openstack cli commands for demo 2016-07-12 18:57:50 +02:00
d8bef773ee Add pulling of custom patches for ccp-neutron 2016-07-12 16:56:41 +00:00
64608f06cf Bugfixes
- Don't create registry pod if it already exists
- Fix shell commands
2016-07-12 17:56:40 +02:00
d450e1f06f Refactor CCP deployment part 2016-07-12 17:20:58 +02:00
569d0081d3 Minor fixes in deploy-config.yaml style and doc 2016-07-12 12:15:01 +02:00
ba5466cacf Added some OpenStack CLI command examples 2016-07-11 17:26:06 +02:00
6b094db607 Fix nodes in label-nodes.sh 2016-07-11 14:16:02 +02:00
d0dd69399e Change nodes generation for Vagrant
Master IP is .10, nodes start from .11
2016-07-11 13:00:57 +02:00
f33f447b3d Fix dynamic ansible inventory 2016-07-11 10:28:25 +00:00
efaf6328a2 Add python to provision 2016-07-08 19:11:13 +02:00
a5a34c98a5 Move node bootstrap to ansible 2016-07-08 18:52:42 +02:00
6a287973d9 Update default nodes specs for vagrant lab 2016-07-08 18:32:30 +02:00
d066f0c9e9 Move args to env 2016-07-08 16:04:47 +00:00
b7f3ff5ce9 We need python-ipaddr on master node for dynamic inv 2016-07-08 17:52:26 +02:00
48ec698314 Merge branch 'master' of https://github.com/adidenko/vagrant-k8s 2016-07-08 15:46:31 +00:00
737a83788f Add reading group variables 2016-07-08 15:45:02 +00:00
9471173f6a Update docs 2016-07-08 17:23:43 +02:00
c4e3266031 Fix a bug in nodes_to_inv.py 2016-07-08 17:22:52 +02:00
8d3abdb489 Switch to dynamic ansible inventory
We don't need to install and use kargo cli anymore.
2016-07-08 17:10:09 +02:00
e89f4ac7ee Add labeling nodes 2016-07-08 15:04:31 +02:00
99db440287 Added check for CCP images build 2016-07-08 14:32:43 +02:00
e6358d825e Bugfixes in ccp playbooks 2016-07-08 14:23:00 +02:00
8b3112d287 Switch to using upstream fuel-ccp project 2016-07-07 19:50:16 +02:00
b4dfd8c973 Update to deploy of ccp 2016-07-06 13:17:08 +02:00
a153ac231a Refactor to use new deploy config in ccp 2016-07-06 13:14:23 +02:00
5b4c365b8c Change kube version to 1.2.4 2016-07-05 16:26:51 +02:00
a08fb131fb Remove unneeded curl from netchecker deploy script 2016-07-05 15:24:09 +02:00
ba2c3f052f Fix namespace deletion 2016-07-05 15:14:14 +02:00
c8a488cfbe Added missing parameter to sed command 2016-07-05 14:53:28 +02:00
fad80d8595 Add another workaround for hostnetwork pods 2016-07-05 14:51:01 +02:00
21f1c82fb0 Setup an ip ro workaround for cluster IPs 2016-07-05 12:43:35 +02:00
6ec957a255 Add DNS servers 2016-07-04 17:16:56 +02:00
76b49bfe30 Enable build back 2016-07-04 15:51:35 +02:00
7d14763cf0 Add prebuilt images as option 2016-07-04 15:22:30 +02:00
4c300a57b5 Add ip route workaround for DNS clusterIP 2016-07-04 15:13:49 +02:00
e68d6575cd Add dnsutils to vagrant nodes 2016-07-04 11:32:33 +02:00
11b6e31c55 Fix mcp.conf for prebuilt packages 2016-07-01 18:13:19 +02:00
4d295d567b Switch to using prebuilt CCP images 2016-07-01 17:19:41 +02:00
ca8ef29ae4 Add temp fix for bug in mcp builder 2016-07-01 16:34:20 +02:00
9be65f8c19 Update mcp.conf according to upstream 2016-07-01 16:24:42 +02:00
b70b8a7c39 Update README.md 2016-07-01 16:02:47 +02:00
687cc01151 Fix typos in readme 2016-07-01 15:59:21 +02:00
25b986ede7 Fix list nesting 2016-07-01 15:58:39 +02:00
1e294b25c1 Update readme 2016-07-01 15:57:41 +02:00
5c369d6d40 Added deploy-netchecker.sh script 2016-07-01 15:55:57 +02:00
7b1e29f855 Comment out hacking of resolv.conf 2016-07-01 15:32:45 +02:00
80ee1f2d9e Hack resolv.conf for ALL containers 2016-06-30 19:04:41 +02:00
87856513c6 Hack for resolv.conf is back, a bit different this time 2016-06-30 19:01:01 +02:00
f7f560de2e No need to hack resolv.conf in dockerfiles 2016-06-30 18:18:33 +02:00
7a8ead07d8 Added some packages for provisioning 2016-06-30 18:01:45 +02:00
46f99befee Rename chapter in readme 2016-06-30 17:19:21 +02:00
3563dbe9e8 Add useful k8s commands 2016-06-30 17:18:36 +02:00
fec601a238 Minor improvements in commands and readme 2016-06-30 16:56:52 +02:00
aad4edaf47 Fix paths in playbooks 2016-06-30 16:50:29 +02:00
bb3a57a719 Hack all problem images and put resolv.conf there 2016-06-30 16:12:26 +02:00
6be93a3b87 Another fix in path for inventory 2016-06-30 15:56:57 +02:00
333d9daea8 Fix readme 2016-06-30 15:47:28 +02:00
8b53ff8ef7 Remove kpm from provisioning, kargo installs it 2016-06-30 15:46:02 +02:00
8334a9e1e4 Fix paths to kargo 2016-06-30 15:35:00 +02:00
f1e5bc81f8 Fix path to nodes list 2016-06-30 15:32:46 +02:00
26646b4a79 Fix cwd 2016-06-30 15:31:30 +02:00
eddd1251eb Fix paths 2016-06-30 15:28:36 +02:00
ba710ade23 Fix typo 2016-06-30 15:27:25 +02:00
25d19720c0 Huge refactoring
Split scripts and instructions into two parts: lab preparation and
deployment.
2016-06-30 15:22:39 +02:00
17e3108b0c Save output of mcp-microservice to separate logs 2016-06-30 12:08:14 +02:00
f304dd4cf3 Switch to new vagrant image and update ccp-pull 2016-06-30 11:53:54 +02:00
7dcc7c31f6 Update CCP repos 2016-06-29 17:45:51 +02:00
4ca2931ae9 Add new virtual drive for /var/lib/docker
Otherwise 10G is not enough to host all the CCP images and it
leads to deployment failures.
2016-06-29 16:40:34 +02:00
9744972f4a Replace sleep with wait loop 2016-06-29 14:21:34 +02:00
f770ae82e6 Updated readme in examples 2016-06-29 12:48:21 +02:00
aa9578ba99 Updated examples readme 2016-06-29 12:46:32 +02:00
898e79a49e Update CCP refs 2016-06-29 11:30:29 +02:00
8d80265392 Use ansible instead of kargo-cli to deploy k8s 2016-06-29 11:24:20 +02:00
8acd4396d6 Remove some hardcode for CCP installation 2016-06-28 18:26:51 +02:00
a47f9394bb Updated CCP example with nodePort 2016-06-28 16:56:34 +02:00
5cc37db4bf Minor improvements in nodes list generation 2016-06-28 16:54:34 +02:00
3eb2ec101e Added curl to bootstrap scripts 2016-06-28 15:32:15 +02:00
84d85e41a9 Added example for exposing Horizon via nodePort 2016-06-28 14:42:36 +02:00
96add56527 Update node labels according to new CCP nodeSelector 2016-06-28 14:06:53 +02:00
d7f9d4a590 Fix bug in shell command 2016-06-28 12:58:05 +02:00
26e61fc9be Another update related to CCP upstream changes 2016-06-28 10:56:46 +02:00
7546d75513 Update reviews for ccp and fix missing script 2016-06-28 10:55:23 +02:00
1fb6f36e9c Remove extra space 2016-06-27 18:04:14 +02:00
df4fe074f0 Added CCP deployment scripts 2016-06-27 17:57:29 +02:00
0c9826c60f Added kubedash external service to examples 2016-06-23 16:53:13 +02:00
d7a11887f6 Added example how to expose k8s dashboard 2016-06-23 16:43:11 +02:00
39dd4c1aaa New playbooks for k8s service and examples
- kubedns moved to playbooks dir
- new ansible playbooks added for kubedash and kube-dashboard
- examples for k8s deployments and services added
2016-06-22 18:43:39 +02:00
9c5c0f2697 Minor update in README 2016-06-20 16:12:18 +02:00
33 changed files with 804 additions and 73 deletions

1
.gitignore vendored

@@ -1 +1,2 @@
ssh
nodes

148
README.md

@@ -4,30 +4,158 @@ Scripts to create libvirt lab with vagrant and prepare some stuff for `k8s` deployment
Requirements
============
------------
* `libvirt`
* `vagrant`
* `vagrant-libvirt` plugin
* `vagrant-libvirt` plugin (`vagrant plugin install vagrant-libvirt`)
* `$USER` should be able to connect to libvirt (test with `virsh list --all`)
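If `virsh list --all` fails with a permission error, the usual fix (assuming your distro uses the `libvirt` group; some name it `libvirtd`) is:
```bash
sudo usermod -aG libvirt $USER
# log out and back in for the group change to take effect
```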
How-to
======
Vagrant lab preparation
-----------------------
* Change default IP pool for vagrant networks if you want:
```bash
export VAGRANT_POOL="10.100.0.0/16"
```
* Clone this repo
```bash
git clone https://github.com/adidenko/vagrant-k8s
cd vagrant-k8s
```
* Prepare the virtual lab:
```bash
export VAGRANT_POOL="10.100.0.0/16"
git clone https://github.com/adidenko/vagrant-k8s
cd vagrant-k8s
vagrant up
```
* Login to master node and deploy k8s with kargo:
Deployment on a lab
-------------------
* Login to master node and sudo to root:
```bash
vagrant ssh $USER-k8s-01
# Inside your master VM run this:
vagrant ssh $USER-k8s-00
sudo su -
```
* Clone this repo
```bash
git clone https://github.com/adidenko/vagrant-k8s ~/mcp
```
* Install required software and pull needed repos:
```bash
cd ~/mcp
./bootstrap-master.sh
```
* Check `nodes` list and make sure you have SSH access to them
```bash
cd ~/mcp
cat nodes
ansible all -m ping -i nodes_to_inv.py
```
* Deploy k8s using kargo playbooks
```bash
cd ~/mcp
./deploy-k8s.kargo.sh
```
* Deploy OpenStack CCP:
```bash
cd ~/mcp
# Build CCP images
ansible-playbook -i nodes_to_inv.py playbooks/ccp-build.yaml
# Deploy CCP
ansible-playbook -i nodes_to_inv.py playbooks/ccp-deploy.yaml
```
* Wait for CCP deployment to complete
```bash
# On k8s master node
# Check CCP pods, all should become running
kubectl --namespace=openstack get pods -o wide
# Check CCP jobs status, wait until all complete
kubectl --namespace=openstack get jobs
```
* Check Horizon:
```bash
# On k8s master node check nodePort of Horizon service
HORIZON_PORT=$(kubectl --namespace=openstack get svc/horizon -o go-template='{{(index .spec.ports 0).nodePort}}')
echo $HORIZON_PORT
# Access Horizon via nodePort
curl -i -s $ANY_K8S_NODE_IP:$HORIZON_PORT
```
Working with kubernetes
-----------------------
* Login to one of your kube-master nodes and run:
```bash
# List images in registry
curl -s 127.0.0.1:31500/v2/_catalog | python -mjson.tool
# Check CCP jobs status
kubectl --namespace=openstack get jobs
# Check CCP pods
kubectl --namespace=openstack get pods -o wide
```
* Troubleshooting
```bash
# Get logs from pod
kubectl --namespace=openstack logs $POD_NAME
# Exec command from pod
kubectl --namespace=openstack exec $POD_NAME -- cat /etc/resolv.conf
kubectl --namespace=openstack exec $POD_NAME -- curl http://etcd-client:2379/health
# Run a container
docker run -t -i 127.0.0.1:31500/mcp/neutron-dhcp-agent /bin/bash
```
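Two more standard `kubectl` commands that often help when a pod is stuck (not part of the original list above, but safe additions):
```bash
# Show pod events and container state
kubectl --namespace=openstack describe pod $POD_NAME
# Show recent cluster events
kubectl --namespace=openstack get events
```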
* Network checker
```bash
cd ~/mcp
./deploy-netchecker.sh
# or in openstack namespace
./deploy-netchecker.sh openstack
```
* CCP
```bash
# Run a bash in one of containers
docker run -t -i 127.0.0.1:31500/mcp/nova-base /bin/bash
# Inside container export credentials
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_AUTH_URL=http://keystone:35357
# Run CLI commands
openstack service list
neutron agent-list
```

48
Vagrantfile vendored

@@ -1,32 +1,37 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
pool = ENV["VAGRANT_POOL"] || "10.250.0.0/16"
ENV["VAGRANT_DEFAULT_PROVIDER"] = "libvirt"
pool = ENV["VAGRANT_POOL"] || "10.210.0.0/16"
prefix = pool.gsub(/\.\d+\.\d+\/16$/, "")
$num_instances = 7
$vm_memory = 2048
$num_instances = 4
$vm_memory = 6144
$vm_cpus = 2
$master_memory = 1024
$master_cpus = 1
$user = ENV["USER"]
$public_subnet = prefix.to_s + ".0"
$private_subnet = prefix.to_s + ".1"
$mgmt_cidr = prefix.to_s + ".2.0/24"
$neutron_subnet = "172.30.250"
$instance_name_prefix = "#{$user}-k8s"
# Boxes with libvirt provider support:
#$box = "yk0/ubuntu-xenial" #900M
#$box = "centos/7"
$box = "nrclark/xenial64-minimal-libvirt"
#$box = "nrclark/xenial64-minimal-libvirt"
$box = "peru/ubuntu-16.04-server-amd64"
# Create SSH keys for future lab
system 'bash ssh-keygen.sh'
system 'bash vagrant-scripts/ssh-keygen.sh'
# Create nodes list for future kargo deployment
nodes=""
(2..$num_instances).each do |i|
(1..$num_instances-1).each do |i|
ip = "#{$private_subnet}.#{i+10}"
nodes = "#{nodes}#{ip}\n"
end
@@ -34,13 +39,9 @@ File.open("nodes", 'w') { |file| file.write(nodes) }
# Create the lab
Vagrant.configure("2") do |config|
(1..$num_instances).each do |i|
(0..$num_instances-1).each do |i|
# First node would be master node
if i == 1
master = true
else
master = false
end
master = i == 0
config.ssh.insert_key = false
vm_name = "%s-%02d" % [$instance_name_prefix, i]
@@ -52,8 +53,13 @@ Vagrant.configure("2") do |config|
# Libvirt provider settings
test_vm.vm.provider :libvirt do |domain|
domain.uri = "qemu+unix:///system"
if master
domain.memory = $master_memory
domain.cpus = $master_cpus
else
domain.memory = $vm_memory
domain.cpus = $vm_cpus
end
domain.driver = "kvm"
domain.host = "localhost"
domain.connect_via_ssh = false
@@ -66,6 +72,8 @@ Vagrant.configure("2") do |config|
domain.cpu_mode = "host-passthrough"
domain.volume_cache = "unsafe"
domain.disk_bus = "virtio"
# DISABLED: switched to new box which has 100G / partition
#domain.storage :file, :type => 'qcow2', :bus => 'virtio', :size => '20G', :device => 'vdb'
end
# Networks and interfaces
@@ -85,17 +93,21 @@ Vagrant.configure("2") do |config|
:libvirt__network_name => "#{$instance_name_prefix}-private",
:libvirt__dhcp_enabled => false,
:libvirt__forward_mode => "none"
# "neutron" isolated network
test_vm.vm.network :private_network,
:ip => "#{$neutron_subnet}.#{i+10}",
:model_type => "e1000",
:libvirt__network_name => "#{$instance_name_prefix}-neutron",
:libvirt__dhcp_enabled => false,
:libvirt__forward_mode => "none"
# Provisioning
config.vm.provision "file", source: "ssh", destination: "~/ssh"
if master
config.vm.provision "deploy-k8s", type: "file", source: "deploy-k8s.kargo.sh", destination: "~/deploy-k8s.kargo.sh"
config.vm.provision "custom.yaml", type: "file", source: "custom.yaml", destination: "~/custom.yaml"
config.vm.provision "kubedns.yaml", type: "file", source: "kubedns.yaml", destination: "~/kubedns.yaml"
config.vm.provision "nodes", type: "file", source: "nodes", destination: "~/nodes"
config.vm.provision "bootstrap", type: "shell", path: "bootstrap-master.sh"
config.vm.provision "nodes", type: "file", source: "nodes", destination: "/var/tmp/nodes"
config.vm.provision "bootstrap", type: "shell", path: "vagrant-scripts/provision-master.sh"
else
config.vm.provision "bootstrap", type: "shell", path: "bootstrap-node.sh"
config.vm.provision "bootstrap", type: "shell", path: "vagrant-scripts/provision-node.sh"
end
end
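For reference, with the default pool `10.210.0.0/16` and `$num_instances = 4`, the nodes-list loop above writes the three worker IPs to `nodes` (the master gets `.10`, per the commit log):
```bash
$ cat nodes
10.210.1.11
10.210.1.12
10.210.1.13
```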

11
bak/deploy-ccp.sh Executable file

@@ -0,0 +1,11 @@
#!/bin/bash
set -e
INVENTORY="nodes_to_inv.py"
echo "Createing repository and CCP images, it may take a while..."
ansible-playbook -i $INVENTORY playbooks/ccp-build.yaml
echo "Deploying up OpenStack CCP..."
ansible-playbook -i $INVENTORY playbooks/ccp-deploy.yaml

bootstrap-master.sh

@@ -1,31 +1,22 @@
#!/bin/bash
echo master > /var/tmp/role
# Packages
sudo apt-get --yes update
sudo apt-get --yes upgrade
sudo apt-get --yes install git screen vim telnet tcpdump python-setuptools gcc python-dev python-pip libssl-dev libffi-dev software-properties-common
apt-get --yes update
apt-get --yes upgrade
apt-get --yes install git screen vim telnet tcpdump python-setuptools gcc python-dev python-pip libssl-dev libffi-dev software-properties-common curl python-netaddr
# Get ansible-2.1+, vanilla ubuntu-16.04 ansible (2.0.0.2) is broken due to https://github.com/ansible/ansible/issues/13876
sudo sh -c 'apt-add-repository -y ppa:ansible/ansible;apt-get update;apt-get install -y ansible'
ansible --version || (
apt-add-repository -y ppa:ansible/ansible
apt-get update
apt-get install -y ansible
)
# Kargo-cli
sudo git clone https://github.com/kubespray/kargo-cli.git /root/kargo-cli
sudo sh -c 'cd /root/kargo-cli && python setup.py install'
# Copy/create nodes list
test -f ./nodes || cp /var/tmp/nodes ./nodes
# k8s deploy script and configs
sudo sh -c 'cp -a ~vagrant/deploy-k8s.kargo.sh /root/ && chmod 755 /root/deploy-k8s.kargo.sh'
sudo cp -a ~vagrant/custom.yaml /root/custom.yaml
sudo cp -a ~vagrant/kubedns.yaml /root/kubedns.yaml
# Either pull or copy microservices repos
cp -a /var/tmp/microservices* ./ccp/ || touch /var/tmp/ccp-download
# SSH keys and config
sudo rm -rf /root/.ssh
sudo mv ~vagrant/ssh /root/.ssh
sudo echo -e 'Host 10.*\n\tStrictHostKeyChecking no\n\tUserKnownHostsFile=/dev/null' >> /root/.ssh/config
sudo chown -R root: /root/.ssh
# Copy nodes list
sudo cp ~vagrant/nodes /root/nodes
# README
sudo echo 'cd /root/kargo ; ansible-playbook -vvv -i inv/inventory.cfg cluster.yml -u root -f 7' > /root/README
# Pull kargo
git clone https://github.com/kubespray/kargo ~/kargo

2
ccp/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
microservices-repos
microservices

16
ccp/ccp.conf Normal file

@@ -0,0 +1,16 @@
[DEFAULT]
deploy_config = /root/ccp/deploy-config.yaml
[builder]
push = True
[registry]
address = "127.0.0.1:31500"
[kubernetes]
namespace = "openstack"
[repositories]
skip_empty = True
protocol = https
port = 443
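With this file in place, the CCP cli calls used by the playbooks below point at it explicitly, e.g.:
```bash
mcp-microservices --config-file=/root/ccp/ccp.conf fetch
mcp-microservices --config-file=/root/ccp/ccp.conf build
```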

6
ccp/deploy-config.yaml Normal file

@@ -0,0 +1,6 @@
configs:
  public_interface: "eth1"
  private_interface: "eth2"
  neutron_external_interface: "eth3"
  neutron_logging_debug: "true"
  neutron_plugin_agent: "openvswitch"

25
ccp/label-nodes.sh Executable file

@@ -0,0 +1,25 @@
#!/bin/bash
set -e
# FIXME: hardcoded roles
declare -A nodes
nodes=( \
["node1"]="openstack-controller=true"
["node2"]="openstack-compute=true"
["node3"]="openstack-compute=true"
)
label_nodes() {
  all_label='openstack-compute-controller=true'
  for i in "${!nodes[@]}"
  do
    node=$i
    label=${nodes[$i]}
    kubectl get nodes $node --show-labels | grep -q "$label" || kubectl label nodes $node $label
    kubectl get nodes $node --show-labels | grep -q "$all_label" || kubectl label nodes $node $all_label
  done
}
label_nodes
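A quick way to verify the result (node names follow the `node1..nodeN` convention from the dynamic inventory):
```bash
kubectl get nodes --show-labels | grep openstack
```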

16
ccp/registry_pod.yaml Normal file

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
  name: registry
  labels:
    app: registry
spec:
  containers:
  - name: registry
    image: registry:2
    env:
    imagePullPolicy: Always
    ports:
    - containerPort: 5000
      hostPort: 5000

15
ccp/registry_svc.yaml Normal file

@@ -0,0 +1,15 @@
kind: "Service"
apiVersion: "v1"
metadata:
  name: "registry"
spec:
  selector:
    app: "registry"
  ports:
  -
    protocol: "TCP"
    port: 5000
    targetPort: 5000
    nodePort: 31500
  type: "NodePort"

custom.yaml

@@ -1,3 +1,13 @@
# Kubernetes version
kube_version: "v1.2.4"
# Switch network to calico
kube_network_plugin: "calico"
# Kube-proxy should be iptables for calico
kube_proxy_mode: "iptables"
# Use non-tmpfs tmp dir
local_release_dir: "/var/tmp/releases"
# Upstream DNS servers with mirantis.net
upstream_dns_servers:
- 8.8.8.8
- 8.8.4.4
- /mirantis.net/172.18.32.6

deploy-k8s.kargo.sh

@@ -1,26 +1,19 @@
#!/bin/bash
INVENTORY="kargo/inventory/inventory.cfg"
INVENTORY="nodes_to_inv.py"
nodes=""
i=1
for nodeip in `cat /root/nodes` ; do
i=$(( $i+1 ))
nodes+=" node${i}[ansible_ssh_host=${nodeip},ip=${nodeip}]"
done
if [ -f "$INVENTORY" ] ; then
echo "$INVENTORY already exists, if you want to recreate, pls remove it and re-run this script"
else
echo "Preparing inventory..."
kargo prepare -y --nodes $nodes
fi
echo "Installing requirements on nodes..."
ansible-playbook -i $INVENTORY playbooks/bootstrap-nodes.yaml
echo "Running deployment..."
kargo deploy -y --ansible-opts="-e @custom.yaml"
ansible-playbook -i $INVENTORY /root/kargo/cluster.yml -e @custom.yaml
deploy_res=$?
if [ "$deploy_res" -eq "0" ]; then
echo "Setting up kubedns..."
ansible-playbook -i $INVENTORY kubedns.yaml
ansible-playbook -i $INVENTORY playbooks/kubedns.yaml
echo "Setting up kubedashboard..."
ansible-playbook -i $INVENTORY playbooks/kubedashboard.yaml
echo "Setting up ip route work-around for DNS clusterIP availability..."
ansible-playbook -i $INVENTORY playbooks/ipro_for_cluster_ips.yaml
fi

36
deploy-netchecker.sh Executable file

@@ -0,0 +1,36 @@
#!/bin/bash
if [ -n "$1" ] ; then
NS="--namespace=$1"
fi
kubectl get nodes || exit 1
echo "Installing netchecker server"
git clone https://github.com/adidenko/netchecker-server
pushd netchecker-server
pushd docker
docker build -t 127.0.0.1:31500/netchecker/server:latest .
docker push 127.0.0.1:31500/netchecker/server:latest
popd
kubectl create -f netchecker-server_pod.yaml $NS
kubectl create -f netchecker-server_svc.yaml $NS
popd
echo "Installing netchecker agents"
git clone https://github.com/adidenko/netchecker-agent
pushd netchecker-agent
pushd docker
docker build -t 127.0.0.1:31500/netchecker/agent:latest .
docker push 127.0.0.1:31500/netchecker/agent:latest
popd
kubectl get nodes | grep Ready | awk '{print $1}' | xargs -I {} kubectl label nodes {} netchecker=agent
NUMNODES=`kubectl get nodes --show-labels | grep Ready | grep netchecker=agent | wc -l`
sed -e "s/replicas:.*/replicas: $NUMNODES/g" -i netchecker-agent_rc.yaml
kubectl create -f netchecker-agent_rc.yaml $NS
popd
echo "DONE"
echo
echo "use the following command to check agents:"
echo "curl -s -X GET 'http://localhost:31081/api/v1/agents/' | python -mjson.tool"


@@ -0,0 +1,25 @@
CCP examples
============
Some examples for OpenStack CCP.
Expose Horizon
==============
* Get nodePort of Horizon service:
```bash
echo $(kubectl --namespace=openstack get svc/horizon -o go-template='{{(index .spec.ports 0).nodePort}}')
```
* NAT on your router/jump-box to any k8s minion public IP and nodePort to provide external access:
```bash
iptables -t nat -I PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 10.210.0.12:32643
iptables -t nat -I POSTROUTING -d 10.210.0.12 ! -s 10.210.0.0/24 -j MASQUERADE
iptables -I FORWARD -d 10.210.0.12 -j ACCEPT
```
Where `10.210.0.12` is the IP of one of your k8s minions and `32643` is the nodePort of the Horizon service.
* You can do the same for novnc:
```bash
echo $(kubectl --namespace=openstack get svc/nova-novncproxy -o go-template='{{(index .spec.ports 0).nodePort}}')
```


@@ -0,0 +1,36 @@
# This script should be executed inside k8s:
# docker run -t -i 127.0.0.1:31500/mcp/nova-base /bin/bash
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_AUTH_URL=http://keystone:35357
# Key
nova keypair-add test > test.pem
chmod 600 test.pem
# Flavor
nova flavor-create demo --is-public true auto 128 2 1
# Image
curl -O http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
glance image-create --name cirros --disk-format qcow2 --container-format bare --file cirros-0.3.4-x86_64-disk.img
# Aggregates
node2=`openstack hypervisor list | grep -o '[a-z]\+-k8s-02'`
node3=`openstack hypervisor list | grep -o '[a-z]\+-k8s-03'`
nova aggregate-create n2 n2
nova aggregate-add-host n2 $node2
nova aggregate-create n3 n3
nova aggregate-add-host n3 $node3
# Network
neutron net-create net1 --provider:network-type vxlan
neutron subnet-create net1 172.20.0.0/24 --name subnet1
# Instances
net_id=`neutron net-list | grep net1 | awk '{print $2}'`
nova boot ti02 --image cirros --flavor demo --nic net-id=$net_id --key-name test --availability-zone n2
nova boot ti03 --image cirros --flavor demo --nic net-id=$net_id --key-name test --availability-zone n3
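To watch the two instances come up, plain CLI calls (not part of the script) will do:
```bash
openstack server list
# or the older client
nova list
```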


@@ -0,0 +1,45 @@
Examples how to expose k8s services
===================================
Exposing dashboard via frontend and externalIPs
-----------------------------------------------
* Edit `kubernetes-dashboard.yaml` and update `externalIPs` to the list of external IPs of your k8s minions
* Run:
```bash
kubectl create -f kubernetes-dashboard.yaml --namespace=kube-system
```
* Access:
```bash
curl $ANY_MINION_EXTERNAL_IP:9090
```
Exposing dashboard via nodePort
-------------------------------
* Get nodePort of the service:
```bash
echo $(kubectl --namespace=kube-system get svc/kubernetes-dashboard -o go-template='{{(index .spec.ports 0).nodePort}}')
```
* NAT on your router/jump-box to any k8s minion public IP and nodePort to provide external access:
```bash
iptables -t nat -I PREROUTING -p tcp --dport 9090 -j DNAT --to-destination 10.210.0.12:32005
iptables -t nat -I POSTROUTING -d 10.210.0.12 ! -s 10.210.0.0/24 -j MASQUERADE
iptables -I FORWARD -d 10.210.0.12 -j ACCEPT
```
Where `10.210.0.12` is the public IP of one of your k8s minions and `32005` is the nodePort of the `kubernetes-dashboard` service.
* Access:
```bash
curl 10.210.0.12:9090
```


@@ -0,0 +1,22 @@
apiVersion: v1
kind: Service
metadata:
  name: kubedash-frontend
  labels:
    app: kubedash-frontend
    tier: frontend
spec:
  externalIPs:
  - 10.210.0.12
  - 10.210.0.13
  - 10.210.0.14
  - 10.210.0.15
  - 10.210.0.16
  - 10.210.0.17
  ports:
  - name: http
    port: 8289
    protocol: TCP
    targetPort: 8289
  selector:
    name: kubedash


@@ -0,0 +1,22 @@
apiVersion: v1
kind: Service
metadata:
  name: dashboard-frontend
  labels:
    app: dashboard-frontend
    tier: frontend
spec:
  externalIPs:
  - 10.210.0.12
  - 10.210.0.13
  - 10.210.0.14
  - 10.210.0.15
  - 10.210.0.16
  - 10.210.0.17
  ports:
  - name: http
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app: kubernetes-dashboard


@@ -0,0 +1,18 @@
Nginx example with external IPs
===============================
* Edit `nginx-frontend.yaml` and update `externalIPs` to the list of external IPs of your k8s minions
* Deploy:
```bash
kubectl create -f nginx-backends.yaml
kubectl create -f nginx-frontend.yaml
```
* Check:
```bash
curl $ANY_MINION_EXTERNAL_IP
```

nginx-backends.yaml

@@ -0,0 +1,24 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-backend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx-backend
        tier: backend
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80

nginx-frontend.yaml

@@ -0,0 +1,22 @@
apiVersion: v1
kind: Service
metadata:
  name: nginx-frontend
  labels:
    app: nginx-frontend
    tier: frontend
spec:
  externalIPs:
  - 10.210.0.12
  - 10.210.0.13
  - 10.210.0.14
  - 10.210.0.15
  - 10.210.0.16
  - 10.210.0.17
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-backend

97
nodes_to_inv.py Executable file

@@ -0,0 +1,97 @@
#!/usr/bin/env python
# A simple dynamic replacement of 'kargo prepare'
# Generates ansible inventory from a list of IPs in the 'nodes' file.
import argparse
import json
import os

import yaml


def read_nodes_from_file(filename):
    f = open(filename, 'r')
    content = [x.strip('\n') for x in f.readlines()]
    return content


def read_vars_from_file(src="/root/kargo/inventory/group_vars/all.yml"):
    with open(src, 'r') as f:
        content = yaml.load(f)
    return content


def nodes_to_hash(nodes_list, masters, group_vars):
    nodes = {
        'all': {
            'hosts': [],
            'vars': group_vars
        },
        'etcd': {
            'hosts': [],
        },
        'kube-master': {
            'hosts': [],
        },
        'kube-node': {
            'hosts': [],
        },
        'k8s-cluster': {
            'children': ['kube-node', 'kube-master']
        },
        '_meta': {
            'hostvars': {}
        }
    }
    i = 1
    for node_ip in nodes_list:
        node_name = "node%s" % i
        nodes['all']['hosts'].append(node_name)
        nodes['_meta']['hostvars'][node_name] = {
            'ansible_ssh_host': node_ip,
            'ip': node_ip,
        }
        nodes['kube-node']['hosts'].append(node_name)
        if i <= masters:
            nodes['kube-master']['hosts'].append(node_name)
        if i <= 3:
            nodes['etcd']['hosts'].append(node_name)
        i += 1
    return nodes


def main():
    parser = argparse.ArgumentParser(description='Kargo inventory simulator')
    parser.add_argument('--list', action='store_true')
    parser.add_argument('--host', default=False)
    args = parser.parse_args()
    # Read params from ENV since ansible does not support passing args to dynamic inv scripts
    if os.environ.get('K8S_NODES_FILE'):
        nodes_file = os.environ['K8S_NODES_FILE']
    else:
        nodes_file = 'nodes'
    if os.environ.get('K8S_MASTERS'):
        masters = int(os.environ['K8S_MASTERS'])
    else:
        masters = 2
    if os.environ.get('KARGO_GROUP_VARS'):
        vars_file = os.environ['KARGO_GROUP_VARS']
    else:
        vars_file = "/root/kargo/inventory/group_vars/all.yml"
    nodes_list = read_nodes_from_file(nodes_file)
    if len(nodes_list) < 3:
        print "Error: requires at least 3 nodes"
        return
    nodes = nodes_to_hash(nodes_list, masters, read_vars_from_file(vars_file))
    if args.host:
        print json.dumps(nodes['_meta']['hostvars'][args.host])
    else:
        print json.dumps(nodes)


if __name__ == "__main__":
    main()
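Assuming a `nodes` file in the current directory, the script can be exercised by hand:
```bash
# Full inventory as JSON
./nodes_to_inv.py --list | python -mjson.tool
# Hostvars of a single node
./nodes_to_inv.py --host node1
# Override the defaults via environment variables
K8S_MASTERS=1 K8S_NODES_FILE=/root/nodes ./nodes_to_inv.py --list
```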

playbooks/bootstrap-nodes.yaml

@@ -0,0 +1,17 @@
- hosts: all
  tasks:
  - name: Install packages
    package: name={{ item }} state=latest
    with_items:
    - python-pip
    - screen
    - vim
    - telnet
    - tcpdump
    - traceroute
    - iperf3
    - nmap
    - ethtool
    - curl
    - git
    - dnsutils
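This is the playbook `deploy-k8s.kargo.sh` runs before the cluster deployment; it can also be run on its own:
```bash
ansible-playbook -i nodes_to_inv.py playbooks/bootstrap-nodes.yaml
```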

69
playbooks/ccp-build.yaml Normal file

@@ -0,0 +1,69 @@
- hosts: kube-master
  pre_tasks:
  - name: Download fuel-ccp
    git:
      repo: https://git.openstack.org/openstack/fuel-ccp
      dest: /usr/local/src/fuel-ccp
      version: master
  - name: Upload ccp configs to master nodes
    synchronize:
      src: ../ccp/
      dest: /root/ccp/
  tasks:
  - name: Install CCP cli tool
    shell: pip install -U fuel-ccp/
    args:
      chdir: /usr/local/src
      creates: /usr/local/bin/mcp-microservices
  - name: Get pods
    shell: kubectl get pods
    register: get_pod
    run_once: true
  - name: Get services
    shell: kubectl get svc
    register: get_svc
    run_once: true
  - name: Create registry pod
    shell: kubectl create -f registry_pod.yaml
    args:
      chdir: /root/ccp
    run_once: true
    when: get_pod.stdout.find('registry') == -1
  - name: Create registry svc
    shell: kubectl create -f registry_svc.yaml
    args:
      chdir: /root/ccp
    run_once: true
    when: get_svc.stdout.find('registry') == -1
  - name: Fetch CCP images
    shell: mcp-microservices --config-file=/root/ccp/ccp.conf fetch
    run_once: true
  # - name: Patch fuel-ccp-neutron
  #   run_once: true
  #   args:
  #     chdir: /root/microservices-repos/fuel-ccp-neutron
  #   shell: git fetch https://git.openstack.org/openstack/fuel-ccp-neutron {{ item }} && git cherry-pick FETCH_HEAD
  #   with_items:
  #   - "refs/changes/96/340496/6"
  - name: Build CCP images
    shell: mcp-microservices --config-file=/root/ccp/ccp.conf build
    run_once: true

- hosts: k8s-cluster
  tasks:
  - name: Check number of built images
    shell: test $(curl -s 127.0.0.1:31500/v2/_catalog | python -mjson.tool | grep mcp/ | wc -l) -ge 29

27
playbooks/ccp-deploy.yaml Normal file

@@ -0,0 +1,27 @@
- hosts: kube-master
  pre_tasks:
  - name: Rsync CCP configs
    synchronize:
      src: ../ccp/
      dest: /root/ccp/
  tasks:
  - name: Label nodes
    shell: ./label-nodes.sh
    args:
      chdir: /root/ccp
    run_once: true
  - name: Get namespaces
    shell: kubectl get namespace
    register: get_ns
    run_once: true
  - name: Deploy CCP
    shell: mcp-microservices --config-file=/root/ccp/ccp.conf deploy
    args:
      chdir: /root/ccp
    run_once: true
    when: get_ns.stdout.find('openstack') == -1

playbooks/ipro_for_cluster_ips.yaml

@@ -0,0 +1,24 @@
# FIXME: add persistent routing rule
- hosts: kube-master
  tasks:
  - name: Get kube service net
    shell: grep KUBE_SERVICE_ADDRESSES /etc/kubernetes/kube-apiserver.env | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}\b"
    register: kube_service_addresses
    run_once: true

- hosts: all
  tasks:
  - name: Get local IP
    shell: "calicoctl status | grep IP: | awk '{print $2}'"
    register: local_ip
  - name: Get route
    shell: ip ro ls | grep "^{{ hostvars[groups['kube-master'][0]]['kube_service_addresses']['stdout'] }}" || echo ""
    register: local_route
  - name: Clean up route
    shell: ip ro del {{ hostvars[groups['kube-master'][0]]['kube_service_addresses']['stdout'] }} || true
    when: local_route.stdout.find('{{ local_ip.stdout }}') == -1
  - name: Setup route
    shell: ip ro add {{ hostvars[groups['kube-master'][0]]['kube_service_addresses']['stdout'] }} via {{ local_ip.stdout }}
    when: local_route.stdout.find('{{ local_ip.stdout }}') == -1
  - name: Add openstack namespace to resolv.conf
    shell: grep openstack.svc.cluster.local /etc/resolv.conf || sed '/^search / s/$/ openstack.svc.cluster.local/' -i /etc/resolv.conf
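The net effect on each node is one static route sending service traffic via the node's calico IP. The values below are illustrative only: the playbook reads the real CIDR from `kube-apiserver.env`, and `10.233.0.0/18` is assumed here as kargo's default `kube_service_addresses`:
```bash
# Illustrative: service CIDR via this node's calico IP
ip ro add 10.233.0.0/18 via 10.210.1.11
```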

5
playbooks/kubedash.yaml Normal file

@@ -0,0 +1,5 @@
- hosts: kube-master
  tasks:
  - name: setup-kubedash
    shell: kpm deploy kube-system/kubedash --namespace=kube-system
    run_once: true

playbooks/kubedashboard.yaml

@@ -0,0 +1,5 @@
- hosts: kube-master
  tasks:
  - name: setup-kubedashboard
    shell: kpm deploy kube-system/kubernetes-dashboard --namespace=kube-system
    run_once: true

vagrant-scripts/provision-master.sh

@@ -0,0 +1,14 @@
#!/bin/bash
echo master > /var/tmp/role
# Packages
sudo apt-get --yes update
sudo apt-get --yes upgrade
sudo apt-get --yes install screen git
# SSH keys and config
sudo rm -rf /root/.ssh
sudo mv ~vagrant/ssh /root/.ssh
sudo echo -e 'Host 10.*\n\tStrictHostKeyChecking no\n\tUserKnownHostsFile=/dev/null' >> /root/.ssh/config
sudo chown -R root: /root/.ssh

vagrant-scripts/provision-node.sh

@@ -4,10 +4,7 @@ echo node > /var/tmp/role
# Packages
sudo apt-get --yes update
sudo apt-get --yes upgrade
sudo apt-get --yes install screen vim telnet tcpdump python-pip traceroute iperf3 nmap ethtool
# Pip
sudo pip install kpm
sudo apt-get --yes install python
# SSH
sudo rm -rf /root/.ssh