Requirements
The entire aic-helm project is, obviously, helm driven. All components should work with helm 2.1.0 or later. Versions prior to 2.1.0 will have issues with components that require StatefulSets, as well as with Kubernetes 1.5, which is itself a requirement for Ceph Persistent Volume Claim provisioning.
The aic-helm project assumes Canonical's MaaS as the foundational bootstrap provider. We create the MaaS service inside Kubernetes for ease of deployment and upgrades. This has a few requirements of its own and we continue to refine this service.
At the moment, PVC support is not optional. We will strive to provide a non-PVC path in all charts using an alternative persistent storage approach, even if that is simply host directories. Currently, however, all charts assume that dynamic volume provisioning is supported and target the storage class general, which is of course configurable. A sketch of such a class is shown below.
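For reference, here is a minimal sketch of what an rbd-backed general storage class could look like; the monitor address, pool, and secret names are assumptions and must match your Ceph deployment:

```bash
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: general
provisioner: kubernetes.io/rbd
parameters:
  # Assumed values: point these at your ceph-mon service, pool, and secrets.
  monitors: ceph-mon.ceph:6789
  adminId: admin
  adminSecretName: pvc-ceph-conf-combined-storageclass
  adminSecretNamespace: ceph
  pool: rbd
  userId: admin
  userSecretName: pvc-ceph-client-key
EOF
```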
To support dynamic volume provisioning, the aic-helm project requires Kubernetes 1.5 in order to obtain rbd dynamic volume support. Although rbd volumes are supported in the stable v1.4 release, dynamic rbd volumes backing PVCs are only supported in 1.5.0 and beyond. Note that while you can still use helm 2.0.0 with Kubernetes 1.5, you will not be able to use PetSets. We strongly recommend that you do not try to use aic-helm without helm 2.1.0, which provides Kubernetes 1.5 support. See the following helm issue for details.
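You can verify that both your helm client and the tiller server meet this requirement:

```bash
helm version   # both the Client and Server lines should report v2.1.0 or later
```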
Obtaining Kubernetes 1.5 can be accomplished with a kubeadm based cluster install:

```bash
kubeadm init --use-kubernetes-version v1.5.0
```
Note that in addition to Kubernetes 1.5.0, you will need to replace the kube-controller-manager container with one that supports the rbd utilities. We have made a convenient container that you can drop in as a replacement: an Ubuntu based image with the Ceph tools and the kube-controller-manager binary from the 1.5.0 release, available as a Dockerfile or as a quay.io image. You can update your kubeadm manifest /etc/kubernetes/manifests/kube-controller-manager.json directly with image: quay.io/attcomdev/kube-controller-manager:v1.5.0
The kubelet should pick up the change and restart the container.
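One way to make that swap is an in-place edit of the static pod manifest; this is only a sketch, so verify the resulting JSON before relying on it:

```bash
# Point the kubeadm static pod manifest at the replacement image; the
# kubelet watches this directory and will recreate the container.
sed -i 's|"image": "[^"]*"|"image": "quay.io/attcomdev/kube-controller-manager:v1.5.0"|' \
  /etc/kubernetes/manifests/kube-controller-manager.json
```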
For the kube-controller-manager to be able to talk to the ceph-mon instances, ensure it can resolve ceph-mon.ceph (assuming you install the ceph chart into the ceph namespace). This is done by ensuring that both the baremetal host running the kubelet process and the kube-controller-manager container have the SkyDNS address and the appropriate search string in /etc/resolv.conf. This is covered in more detail in the ceph chart's README, but a typical resolv.conf would look like this:
```
nameserver 10.32.0.2 ### skydns instance ip
nameserver 8.8.8.8
nameserver 8.8.4.4
search svc.cluster.local
```
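A quick sanity check (a sketch; 10.32.0.2 is the example SkyDNS address above) is to resolve the ceph-mon service both from the host and from inside the controller-manager container:

```bash
# From the baremetal host, query SkyDNS directly with the FQDN:
nslookup ceph-mon.ceph.svc.cluster.local 10.32.0.2

# From inside the kube-controller-manager container, exercising its own
# /etc/resolv.conf search path:
docker exec $(docker ps -q --filter name=kube-controller-manager) \
  getent hosts ceph-mon.ceph
```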
Finally, you need to install Sigil to help in the generation of Ceph Secrets. You can do this by running the following command as root:
```bash
curl -L https://github.com/gliderlabs/sigil/releases/download/v0.4.0/sigil_0.4.0_Linux_x86_64.tgz | tar -zxC /usr/local/bin
```
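A quick check that the binary landed where expected:

```bash
ls -l /usr/local/bin/sigil
```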
The following charts are meant to provide libraries, lookup routines, and other common elements to all other charts. They are not meant to be installed directly.
- common
- bootstrap
The common chart houses endpoint lookup routines and template blocks we intend to share across charts. The bootstrap chart initializes a namespace for openstack-helm chart installation, which mainly ensures Ceph secrets are present. If you manage these yourself, you do not need to leverage the bootstrap chart.
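As an illustration only (the release name, namespace, and local chart repository below are assumptions), initializing a namespace with the bootstrap chart might look like:

```bash
helm install local/bootstrap --name=bootstrap-ceph --namespace=ceph
```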
The following charts form the foundation to help establish an OpenStack control plane, including shared storage and bare metal provisioning:
- ceph
- maas
These charts form the foundational layers necessary to bootstrap an environment and may run in separate namespaces. The intention is to layer them, as sketched below. Each chart contains a README that may have further instructions or requirements for that foundational chart.
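For example (a sketch, with the chart repository and release names assumed), the layering might look like:

```bash
# Storage first, then bare metal provisioning, each in its own namespace:
helm install local/ceph --name=ceph --namespace=ceph
helm install local/maas --name=maas --namespace=maas
```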
- mariadb
- rabbitmq
- memcached
These, of course, provide the infrastructure elements necessary to run OpenStack.
- keystone
- glance
- nova (WIP)
The OpenStack charts are under development. They are loosely based on the work SAP is doing in their openstack-helm repository, but also borrow from the stackanetes project. Please see the Design document for more details. As a fundamental difference, we want each OpenStack chart to be independently manageable as its own helm chart installation, without requiring an OpenStack parent chart to drive environmental specifics. Instead, each chart is fed values overrides from an operator-provided environmental override values file.
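For example, installing a single OpenStack chart with an operator-provided override file might look like this (the file name and chart repository are illustrative):

```bash
helm install local/keystone --name=keystone --namespace=openstack \
  --values=my-environment-overrides.yaml
```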