26 commits
* 102882c Github Actions EKS provisioning (isandesh1986, Jun 29, 2022)
* 5943b18 Few edits (isandesh1986, Jun 29, 2022)
* f301476 Edits (isandesh1986, Jun 29, 2022)
* ae0a26d Merge branch 'devonfw:master' into EKS_Provisioning_Github (isandesh1986, Jul 7, 2022)
* 877773a adsads (isandesh1986, Jul 7, 2022)
* bbedb76 Create setup-eks-provisioning-pipeline.asciidoc (isandesh1986, Jul 7, 2022)
* 3a6f08f Automatic generation of documentation (isandesh1986, Jul 7, 2022)
* 9711450 Create setup-eks-provisioning-pipeline.asciidoc (isandesh1986, Jul 7, 2022)
* 2f53b3b Automatic generation of documentation (isandesh1986, Jul 7, 2022)
* da83b8e Update setup-eks-provisioning-pipeline.asciidoc (isandesh1986, Jul 7, 2022)
* 522d286 Automatic generation of documentation (isandesh1986, Jul 7, 2022)
* bcbe0fe Update setup-eks-provisioning-pipeline.asciidoc (isandesh1986, Jul 7, 2022)
* a78fb0e Automatic generation of documentation (isandesh1986, Jul 7, 2022)
* 21aeb3b Update setup-eks-provisioning-pipeline.asciidoc (isandesh1986, Jul 7, 2022)
* e3e2190 Automatic generation of documentation (isandesh1986, Jul 7, 2022)
* c6f91e2 Update setup-eks-provisioning-pipeline.asciidoc (isandesh1986, Jul 7, 2022)
* c45bfaa Automatic generation of documentation (isandesh1986, Jul 7, 2022)
* c515838 Update eks-pipeline.cfg (ultymatom, Jul 20, 2022)
* d5bac0a Update eks-pipeline.cfg (ultymatom, Aug 5, 2022)
* 4177209 Update eks-pipeline.cfg (ultymatom, Aug 5, 2022)
* af5913d Update eks-provisioning.yml.template (ultymatom, Aug 5, 2022)
* 933fa5c Merge branch 'master' into EKS_Provisioning_Github (ultymatom, Aug 22, 2022)
* ab11b2a Automatic generation of documentation (ultymatom, Aug 22, 2022)
* 8db1caf Update eks-pipeline.cfg (ultymatom, Aug 23, 2022)
* 3a29448 adding -p when creating var folder (ultymatom, Aug 23, 2022)
* fc50959 Update eks-provisioning.yml.template (ultymatom, Aug 25, 2022)
69 changes: 28 additions & 41 deletions documentation/azure-devops/setup-eks-provisioning-pipeline.asciidoc
:provider: Azure DevOps
:pipeline_type: pipeline
:path_provider: azure-devops
:trigger_sentence_azure: This workflow will be configured to be executed inside a CI pipeline
:toc: macro
toc::[]
:idprefix:
:idseparator: -

= Setting up the AWS EKS provisioning {pipeline_type} on {provider}
In this section we will create a {pipeline_type} which will provision an AWS EKS cluster. This {pipeline_type} will be configured to be manually triggered by the user. As part of EKS cluster provisioning, an NGINX Ingress controller is deployed and a .env file with the name `eks-variables` is created in the `.github` folder, which contains, among others, the DNS name of the Ingress controller, which you will need to add as a CNAME record on the domains used in your application Ingress manifest files. Refer to the appendix to retrieve the DNS name of the Ingress controller independently.
The creation of the {pipeline_type} will follow the project workflow, so a new branch named `feature/eks-provisioning` will be created, and the YAML file for the {pipeline_type} and the Terraform files for creating the cluster will be pushed to it.
Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in the `-b` flag). The PR will be automatically merged if the repository policies are met. If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using the `-w` flag.

The script located at `/scripts/pipelines/{path_provider}/pipeline_generator.sh` will automatically create this new branch, create the EKS provisioning {pipeline_type} based on the YAML template, create the Pull Request and, if possible, merge this new branch into the specified branch.
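
For orientation, the steps the generator automates roughly correspond to the commands below. This is an illustrative sketch, not the actual script: the staged paths are assumptions, and PR creation is shown via the Azure DevOps CLI as one example.

```
# Illustrative outline of what pipeline_generator.sh automates (not the actual script):
git checkout -b feature/eks-provisioning        # new branch following the project workflow
git add .pipelines/ terraform/                  # paths assumed for the YAML and Terraform files
git commit -m "Add EKS provisioning pipeline"
git push -u origin feature/eks-provisioning
# PR creation, here via the Azure DevOps CLI as an example:
az repos pr create --source-branch feature/eks-provisioning --target-branch develop
```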

== Prerequisites

* Install the https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks[Terraform extension] for Azure DevOps.
* Create a service connection for connecting to an AWS account (as explained in the above Terraform extension link) and name it `AWS-Terraform-Connection`. If you already have a service connection available or you need a specific connection name, please update `eks-pipeline.cfg` accordingly.

* An S3 bucket. You can use an existing one or https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html#using-s3-commands-managing-buckets-creating[create a new one] with the following command:
```
aws s3 mb s3://<bucket name>
# Example: aws s3 mb s3://terraform-state-bucket
```

* An AWS IAM user with https://github.com/devonfw/hangar/blob/master/documentation/aws/setup-aws-account-iam-for-eks.asciidoc#check-iam-user-permissions[required permissions] to provision the EKS cluster.

* This script will commit and push the corresponding YAML template into your repository, so please make sure your local repository is up to date (i.e., you have pulled the latest changes with `git pull`). A quick pre-flight check is sketched below.
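
As a minimal, illustrative pre-flight check (the bucket name is the one from the example above; adapt it to yours), you might run:

```
# Illustrative pre-flight checks before running the generator:
aws s3api head-bucket --bucket terraform-state-bucket   # the state bucket exists and is reachable
aws sts get-caller-identity                             # which IAM identity the CLI is acting as
git pull                                                # local repository is up to date
```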

== Creating the {pipeline_type} using the provided script

Before executing the {pipeline_type} generator, you will need to customize some input variables about the environment. Also, you may want to use existing VPC and subnets instead of creating new ones. To do so, you can either edit the `terraform.tfvars` file or take advantage of the `set-terraform-variables.sh` script located at `/scripts/environment-provisioning/aws/eks`, which allows you to create or update values for the required variables, passing them as flags.

Example: creating a new VPC on cluster creation:

```
./set-terraform-variables.sh --region <region name> --instance_type <workers instance type> --vpc_name <vpc name> --vpc_cidr_block <vpc cidr block>
```

Example: reusing existing VPC and subnets:
```
./set-terraform-variables.sh --region <region name> --instance_type <workers instance type> --existing_vpc_id <vpc id> --existing_vpc_private_subnets <array of subnet ids>
```
* Rancher is installed by default on the cluster after provisioning. If you wish to change this, please update `eks-pipeline.cfg` accordingly.

=== Usage
```
pipeline_generator.sh \
-c <config file path> \
-n <pipeline name> \
-d <project local path> \
--cluster-name <cluster name> \
--s3-bucket <s3 bucket name> \
--s3-key-path <s3 key path> \
[-b <branch>] \
[-w]
```

NOTE: The config file for the EKS provisioning {pipeline_type} is located at `/scripts/pipelines/{path_provider}/templates/eks/eks-pipeline.cfg`.

=== Flags
```
-c, --config-file       [Required] Configuration file containing the workflow definition.
-n, --pipeline-name     [Required] Name that will be set to the workflow.
-d, --local-directory   [Required] Local directory of your project (the path should always use '/' and not '\').
--cluster-name          [Required] Name for the cluster.
--s3-bucket             [Required] Name of the S3 bucket where the Terraform state of the cluster will be stored.
--s3-key-path           [Required] Path within the S3 bucket where the Terraform state of the cluster will be stored.
-b, --target-branch                Name of the branch the Pull Request will target. The PR is not created if this flag is not provided.
-w                                 Open the Pull Request in the web browser if it cannot be automatically merged. Requires the -b flag.
```

=== Example

```
./pipeline_generator.sh -c ./templates/eks/eks-pipeline.cfg -n eks-provisioning -d C:/Users/$USERNAME/Desktop/quarkus-project --cluster-name hangar-eks-cluster --s3-bucket terraform-state-bucket --s3-key-path eks/state -b develop -w
```

NOTE: Rancher is installed on the cluster after provisioning when using the above command.

== Appendix: Interacting with the cluster

First, generate a `kubeconfig` file for accessing the AWS EKS cluster:
Expand All @@ -101,15 +96,7 @@ To get the DNS name of the NGINX Ingress controller on the EKS cluster, run the
kubectl get svc --namespace nginx-ingress nginx-ingress-nginx-ingress-controller -o jsonpath={.status.loadBalancer.ingress[0].hostname}
```
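
As an illustrative follow-up (the variable name is ours, not the pipeline's), you can capture that hostname and verify it resolves before creating the CNAME records:

```
# Capture the Ingress controller hostname and check it resolves (illustrative):
INGRESS_HOST=$(kubectl get svc --namespace nginx-ingress nginx-ingress-nginx-ingress-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
nslookup "$INGRESS_HOST"
```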

Rancher will be available on `https://<ingress controller domain>/dashboard`.

== Appendix: Rancher resources

* https://rancher.com/docs/rancher/v2.6/en/cluster-admin/cluster-access/kubectl/[Downloading `kubeconfig`].
* https://rancher.com/docs/rancher/v2.6/en/admin-settings/rbac/[RBAC]
* https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/[Monitoring]
* https://rancher.com/docs/rancher/v2.6/en/logging/[Logging]
105 changes: 105 additions & 0 deletions documentation/github/setup-eks-provisioning-pipeline.asciidoc
:provider: GitHub
:pipeline_type: workflow
:path_provider: github
:trigger_sentence_github: This workflow will be configured to be executed inside a CI pipeline
:toc: macro
toc::[]
:idprefix:
:idseparator: -

= Setting up the AWS EKS provisioning {pipeline_type} on {provider}
In this section we will create a {pipeline_type} which will provision an AWS EKS cluster. This {pipeline_type} will be configured to be manually triggered by the user. As part of EKS cluster provisioning, an NGINX Ingress controller is deployed and a .env file with the name `eks-variables` is created in the `.github` folder, which contains, among others, the DNS name of the Ingress controller, which you will need to add as a CNAME record on the domains used in your application Ingress manifest files. Refer to the appendix to retrieve the DNS name of the Ingress controller independently.

The creation of the {pipeline_type} will follow the project workflow, so a new branch named `feature/eks-provisioning` will be created, and the YAML file for the {pipeline_type} and the Terraform files for creating the cluster will be pushed to it.

Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in the `-b` flag). The PR will be automatically merged if the repository policies are met. If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using the `-w` flag.

The script located at `/scripts/pipelines/{path_provider}/pipeline_generator.sh` will automatically create this new branch, create the EKS provisioning {pipeline_type} based on the YAML template, create the Pull Request and, if possible, merge this new branch into the specified branch.
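
For orientation, the PR step of the generator roughly corresponds to the following command; this is an illustrative sketch using the GitHub CLI (branch names taken from above), not the script itself:

```
# Illustrative equivalent of the generator's PR step, using the GitHub CLI:
gh pr create --head feature/eks-provisioning --base develop --title "Add EKS provisioning workflow"
```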

== Prerequisites

* Add AWS credentials as https://docs.github.com/en/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository[GitHub Secrets] in your repository (see the sketch after this list).

* An S3 bucket. You can use an existing one or https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html#using-s3-commands-managing-buckets-creating[create a new one] with the following command:
```
aws s3 mb s3://<bucket name>
# Example: aws s3 mb s3://terraform-state-bucket
```

* An AWS IAM user with https://github.com/devonfw/hangar/blob/master/documentation/aws/setup-aws-account-iam-for-eks.asciidoc#check-iam-user-permissions[required permissions] to provision the EKS cluster.

* This script will commit and push the corresponding YAML template into your repository, so please make sure your local repository is up to date (i.e., you have pulled the latest changes with `git pull`).
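
As a minimal sketch of the credentials step, you could store the AWS credentials with the GitHub CLI; the secret names below are assumptions, so match them to whatever names the workflow template actually references:

```
# Illustrative: store AWS credentials as repository secrets with the GitHub CLI.
# Secret names are assumptions; align them with the workflow template.
gh secret set AWS_ACCESS_KEY_ID --body "<aws access key id>"
gh secret set AWS_SECRET_ACCESS_KEY --body "<aws secret access key>"
```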

== Creating the {pipeline_type} using the provided script

Before executing the {pipeline_type} generator, you will need to customize some input variables about the environment. Also, you may want to use existing VPC and subnets instead of creating new ones. To do so, you can either edit the `terraform.tfvars` file or take advantage of the `set-terraform-variables.sh` script located at `/scripts/environment-provisioning/aws/eks`, which allows you to create or update values for the required variables, passing them as flags.

Example: creating a new VPC on cluster creation:

```
./set-terraform-variables.sh --region <region name> --instance_type <workers instance type> --vpc_name <vpc name> --vpc_cidr_block <vpc cidr block>
```
Example: reusing existing VPC and subnets:
```
./set-terraform-variables.sh --region <region name> --instance_type <workers instance type> --existing_vpc_id <vpc id> --existing_vpc_private_subnets <array of subnet ids>
```
* Rancher is installed by default on the cluster after provisioning. If you wish to change this, please update `eks-pipeline.cfg` accordingly.

=== Usage
```
pipeline_generator.sh \
-c <config file path> \
-n <pipeline name> \
-d <project local path> \
--cluster-name <cluster name> \
--s3-bucket <s3 bucket name> \
--s3-key-path <s3 key path> \
[-b <branch>] \
[-w]
```

NOTE: The config file for the EKS provisioning workflow is located at `/scripts/pipelines/{path_provider}/templates/eks/eks-pipeline.cfg`.

=== Flags
```
-c, --config-file       [Required] Configuration file containing the workflow definition.
-n, --pipeline-name     [Required] Name that will be set to the workflow.
-d, --local-directory   [Required] Local directory of your project (the path should always use '/' and not '\').
--cluster-name          [Required] Name for the cluster.
--s3-bucket             [Required] Name of the S3 bucket where the Terraform state of the cluster will be stored.
--s3-key-path           [Required] Path within the S3 bucket where the Terraform state of the cluster will be stored.
-b, --target-branch                Name of the branch the Pull Request will target. The PR is not created if this flag is not provided.
-w                                 Open the Pull Request in the web browser if it cannot be automatically merged. Requires the -b flag.
```

=== Example

```
./pipeline_generator.sh -c ./templates/eks/eks-pipeline.cfg -n eks-provisioning -d C:/Users/$USERNAME/Desktop/quarkus-project --cluster-name hangar-eks-cluster --s3-bucket terraform-state-bucket --s3-key-path eks/state -b develop -w
```

== Appendix: Interacting with the cluster

First, generate a `kubeconfig` file for accessing the AWS EKS cluster:

```
aws eks update-kubeconfig --name <cluster name> --region <aws region>
```
Now you can use the `kubectl` tool to communicate with the cluster.
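
For instance, a quick smoke test of the new kubeconfig context might look like this (illustrative):

```
# Illustrative smoke test: list worker nodes and running pods.
kubectl get nodes
kubectl get pods --all-namespaces
```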

To enable an IAM user to connect to the EKS cluster, please refer to the https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html[AWS documentation].

To get the DNS name of the NGINX Ingress controller on the EKS cluster, run the following command:
```
kubectl get svc --namespace nginx-ingress nginx-ingress-nginx-ingress-controller -o jsonpath={.status.loadBalancer.ingress[0].hostname}
```
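
If your domains are hosted in Route 53, a CNAME record for an application domain could be created roughly as follows; this is an illustrative sketch, where the hosted zone ID is a placeholder and `app.example.com` stands for whatever domain your Ingress manifests use:

```
# Illustrative: point an application domain at the Ingress controller via a CNAME record.
aws route53 change-resource-record-sets --hosted-zone-id <hosted zone id> --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.example.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "<ingress controller DNS name>"}]
    }
  }]
}'
```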

Rancher will be available on `https://<ingress controller domain>/dashboard`.

== Appendix: Rancher resources

* https://rancher.com/docs/rancher/v2.6/en/cluster-admin/cluster-access/kubectl/[Downloading `kubeconfig`].
* https://rancher.com/docs/rancher/v2.6/en/admin-settings/rbac/[RBAC]
* https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/[Monitoring]
* https://rancher.com/docs/rancher/v2.6/en/logging/[Logging]
New file: 5 additions & 0 deletions
:provider: Azure DevOps
:pipeline_type: pipeline
:path_provider: azure-devops
:trigger_sentence_azure: This workflow will be configured to be executed inside a CI pipeline
include::../common_templates/setup-eks-provisioning-pipeline.asciidoc[]