# OpenStack

## Known compatible public clouds

Kubespray has been tested on a number of OpenStack Public Clouds including (in alphabetical order):

## The OpenStack cloud provider

In Kubespray, the external OpenStack cloud provider is configured to use Octavia by default.

- Enable the external OpenStack cloud provider in `group_vars/all/all.yml`:

  ```yaml
  cloud_provider: external
  external_cloud_provider: openstack
  ```
- Enable Cinder CSI in `group_vars/all/openstack.yml`:

  ```yaml
  cinder_csi_enabled: true
  ```
- Enable topology support (optional). If your OpenStack provider has custom zone names, you can override the default "nova" zone by setting the variable `cinder_topology_zones`:

  ```yaml
  cinder_topology: true
  ```
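  For example, a provider with custom zone names might also set the following (the zone names below are placeholders, not real values):

  ```yaml
  cinder_topology_zones:
    - zone-a
    - zone-b
  ```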
    
- Enabling `cinder_csi_ignore_volume_az: true` ignores the volume AZ and schedules on any of the available node AZs:

  ```yaml
  cinder_csi_ignore_volume_az: true
  ```
    
- If you are using OpenStack load balancer(s), replace `openstack_lbaas_subnet_id` with the new `external_openstack_lbaas_subnet_id`. Note: the new cloud provider uses Octavia instead of Neutron LBaaS by default!

- If your OpenStack VMs are multi-NIC (see kubernetes/cloud-provider-openstack#407 and #6083 for an explanation), you should override the default OpenStack networking configuration:

  ```yaml
  external_openstack_network_ipv6_disabled: false
  external_openstack_network_internal_networks: []
  external_openstack_network_public_networks: []
  ```
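  For instance, on a dual-NIC VM you might pin the networks explicitly (the network names below are placeholders):

  ```yaml
  external_openstack_network_internal_networks:
    - my-internal-network
  external_openstack_network_public_networks:
    - my-public-network
  ```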
    
- You can override the default OpenStack metadata configuration (see #6338 for explanation):

  ```yaml
  external_openstack_metadata_search_order: "configDrive,metadataService"
  ```
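  For example, to query the metadata service before falling back to the config drive, you could reverse the order (an illustration, not a recommendation):

  ```yaml
  external_openstack_metadata_search_order: "metadataService,configDrive"
  ```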
    
- Available variables for configuring LBaaS:

  ```yaml
  external_openstack_lbaas_enabled: true
  external_openstack_lbaas_floating_network_id: "Neutron network ID to get floating IP from"
  external_openstack_lbaas_floating_subnet_id: "Neutron subnet ID to get floating IP from"
  external_openstack_lbaas_method: ROUND_ROBIN
  external_openstack_lbaas_provider: amphora
  external_openstack_lbaas_subnet_id: "Neutron subnet ID to create LBaaS VIP"
  external_openstack_lbaas_network_id: "Neutron network ID to create LBaaS VIP"
  external_openstack_lbaas_manage_security_groups: false
  external_openstack_lbaas_create_monitor: false
  external_openstack_lbaas_monitor_delay: 5
  external_openstack_lbaas_monitor_max_retries: 1
  external_openstack_lbaas_monitor_timeout: 3
  external_openstack_lbaas_internal_lb: false
  ```
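  A minimal sketch enabling load balancers in `group_vars/all/openstack.yml` (the UUIDs below are placeholders; use the IDs from your own cloud):

  ```yaml
  external_openstack_lbaas_enabled: true
  external_openstack_lbaas_floating_network_id: "11111111-2222-3333-4444-555555555555"
  external_openstack_lbaas_subnet_id: "66666666-7777-8888-9999-000000000000"
  ```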
    
    
- Run `source path/to/your/openstack-rc` to read your OpenStack credentials, such as `OS_AUTH_URL`, `OS_USERNAME`, `OS_PASSWORD`, etc. Those variables are used for accessing OpenStack from the external cloud provider.
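  A quick sanity check after sourcing (the RC file path is only an example):

  ```bash
  source ~/openstack-rc
  # confirm the credentials are exported into the environment
  env | grep '^OS_'
  ```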

- Run the `cluster.yml` playbook
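  A typical invocation, assuming your inventory lives at `inventory/mycluster/hosts.yaml` (adjust the path to your setup):

  ```bash
  ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml
  ```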

## Additional step needed when using calico or kube-router

Being L3 CNIs, calico and kube-router do not encapsulate all packets with the hosts' IP addresses. Instead, the packets are routed with the pods' IP addresses directly.

OpenStack will filter and drop all packets from IPs it does not know, to prevent spoofing.

In order to make L3 CNIs work on OpenStack, you need to tell OpenStack to allow the pods' packets by allowing the networks they use.

First you will need the IDs of your OpenStack instances that will run Kubernetes:

```bash
openstack server list --project YOUR_PROJECT
+--------------------------------------+--------+----------------------------------+--------+-------------+
| ID                                   | Name   | Tenant ID                        | Status | Power State |
+--------------------------------------+--------+----------------------------------+--------+-------------+
| e1f48aad-df96-4bce-bf61-62ae12bf3f95 | k8s-1  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
| 725cd548-6ea3-426b-baaa-e7306d3c8052 | k8s-2  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
+--------------------------------------+--------+----------------------------------+--------+-------------+
```

Then you can use the instance IDs to find the connected Neutron ports (note that these are nowadays managed through the unified `openstack` CLI rather than the legacy `neutron` client):

```bash
openstack port list -c id -c device_id --project YOUR_PROJECT
+--------------------------------------+--------------------------------------+
| id                                   | device_id                            |
+--------------------------------------+--------------------------------------+
| 5662a4e0-e646-47f0-bf88-d80fbd2d99ef | e1f48aad-df96-4bce-bf61-62ae12bf3f95 |
| e5ae2045-a1e1-4e99-9aac-4353889449a7 | 725cd548-6ea3-426b-baaa-e7306d3c8052 |
+--------------------------------------+--------------------------------------+
```

Given the port IDs on the left, you can set the two allowed address pairs in OpenStack. Note that you have to allow both `kube_service_addresses` (default `10.233.0.0/18`) and `kube_pods_subnet` (default `10.233.64.0/18`).

```bash
# allow kube_service_addresses and kube_pods_subnet network
openstack port set 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
openstack port set e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
```

If all the VMs in the tenant belong to the Kubespray deployment, you can apply the command above to all ports in one sweep:

```bash
openstack port list --device-owner=compute:nova -c ID -f value | xargs -tI@ openstack port set @ --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
```
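To verify the result, you can inspect a port's allowed address pairs (the port ID below is one of the examples from above):

```bash
openstack port show 5662a4e0-e646-47f0-bf88-d80fbd2d99ef -c allowed_address_pairs
```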

Now you can finally run the playbook.