This project has been given the name nrlf, which stands for National Records Locator (Futures), as it is a replacement for the existing NRL.
This project uses a Makefile to build, test and deploy. This ensures that a developer can reproduce any operation that the CI/CD pipelines perform, meaning they can test the application locally or on their own deployed dev environment.
- Before You Begin
- Getting Started
- Deploying
- Feature Tests
- OAuth Tokens for API request
- Route53 & Hosted Zones
- Sandbox
- Releases
- Reports
Before you start using this repository, you will need to:
- Follow the instructions in the Developer Onboarding Guide in Confluence
- Install asdf using https://asdf-vm.com/guide/getting-started.html
Confirm asdf is installed and is working with:
asdf --version
Then install all the dependency packages with:
make configure
There are several ways to set up your AWS CLI access. The recommended way is to use granted. Follow the instructions on their website to install and configure granted.
One gotcha with granted is that you need to source the environment variables into your shell session. You can do this by running:
source assume <profile>
Where <profile> is one of the profiles which should be in your ~/.aws/config. You can customize the profile names to your liking.
From here on, you can use the AWS CLI as normal and run commands that need AWS access on that terminal session.
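For example, a typical session might look like this (the profile name nrlf-mgmt is illustrative; use whichever names appear in your own ~/.aws/config):
# Assume a role and export its credentials into the current shell session
source assume nrlf-mgmt
# Sanity check: confirm which account/role this session is now using
aws sts get-caller-identity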
As a short guideline about profiles to assume for a typical workflow:
- Assume the mgmt account for stack-specific Terraform deployment, as indicated in terraform/infrastructure/README.md.
- Assume the specific environment for running feature tests against that environment.
To execute make commands that need AWS access, you will need to pull the NRLF certificates. To do this, make sure you have the AWS CLI installed and configured, assume the mgmt account, then run:
make ENV=env truststore-pull-all
Where env is one of dev, qa, int, perftest, ref or prod.
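As a sketch of the full workflow (the profile name nrlf-mgmt is illustrative):
source assume nrlf-mgmt          # assume the mgmt account first
make ENV=dev truststore-pull-all # then pull the dev certificates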
To build packages:
make
To run the linters over your changes:
make lint
To run the unit tests:
make test
To run the local feature tests:
make test-features
To check your environment:
make check
This will provide a report of the dependencies in your environment and should highlight anything that is not configured correctly or is missing.
For the integration tests, you need to have deployed your infrastructure (using Terraform).
To run integration tests:
make test-integration
To run the Firehose integration tests:
make test-firehose-integration
To run all the feature integration tests:
make test-features-integration
To run individual feature test scenario(s) using the custom tag:
- Add @custom_tag before each 'Scenario' that needs to be run (in each .feature file)
- Run the command below:
make integration-test-with-custom_tag
To run all the feature integration tests and generate an interactive Allure report thereafter:
make test-features-integration-report
Integration tests can be debugged directly in VS Code using a launch configuration. Instructions on how to set this up for the first time and run the debugger are below.
To get started:
- Open VS Code and press Ctrl+Shift+P
- Search for “Add Configuration” and select it
- Choose Python > Module
- Replace the generated entry in .vscode/launch.json with an appropriate configuration. Below is an example which can be modified as required:
// .vscode/launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": "Debug Behave",
"type": "debugpy",
"request": "launch",
"module": "behave",
"args": [
"-D",
"env=example-env",
"tests/features",
"-D",
"integration_test=true",
"--tags=@custom_tag"
],
"console": "integratedTerminal",
"justMyCode": true,
"env": {
"PYTHONPATH": "${workspaceFolder}" // adds your project code to Python path
},
"cwd": "${workspaceFolder}" // resolves to the root directory
}
]
}
Once steps 1-4 are done, "Debug Behave" should appear in the Run and Debug panel.
You can tailor the args section in .vscode/launch.json to suit your specific environment, tags, or test structure.
For example, to run only tests with the @api tag, set --tags accordingly:
"args": ["--tags=@api", ...]
To start debugging using the launch configuration from VS Code:
- Go to the Run and Debug panel (Ctrl+Shift+D)
- Ensure “Debug Behave” (if this is the name used in launch.json) is selected from the dropdown at the top
- Press F5 to start debugging
For smoke tests, you need to have deployed your infrastructure (using Terraform).
Before the first run of the smoke tests, you need to set the required permissions in your deployment. You can do this by running:
make set-smoketest-perms
To run the internal smoke tests against your stack, do this:
make test-smoke-internal
To run the smoke tests against the public access endpoints (via APIGEE proxies), do this:
make test-smoke-public
If the API changes, the API documentation needs to be updated in the appropriate API repo. This is done by making changes to the API specification .yaml files in each repo.
For Consumer API changes, update NRL Consumer API - consumer.yaml
For Producer API changes, update NRL Producer API - producer.yaml
Changes to the files in those repos will be reflected when each one is released. See the documentation in each repo for this process.
The NRLF is deployed using Terraform. The infrastructure is split into two parts.
All account-wide resources, like Route 53 hosted zones or IAM roles, are found in terraform/account-wide-infrastructure.
All resources that are not account-specific (lambdas, API gateways, etc.) can be found in terraform/infrastructure.
Information on deploying these two parts:
Referring to the sample feature test below:
Scenario: Successfully create a Document Pointer of type Mental health crisis plan
Given {ACTOR TYPE} "{ACTOR}" (Organisation ID "{ORG_ID}") is requesting to {ACTION} Document Pointers
And {ACTOR TYPE} "{ACTOR}" is registered in the system for application "APP 1" (ID "{APP ID 1}") for document types
| system | value |
| http://snomed.info/sct | 736253002 |
And {ACTOR TYPE} "{ACTOR}" has authorisation headers for application "APP 2" (ID "{APP ID 2}")
When {ACTOR TYPE} "{ACTOR}" {ACTION} a Document Reference from DOCUMENT template
| property | value |
| identifier | 1234567890 |
| type | 736253002 |
| custodian | {ORG_ID} |
| subject | 9278693472 |
| contentType | application/pdf |
| url | https://example.org/my-doc.pdf |
Then the operation is {RESULT}
The following notes should be made:
- ACTOR TYPE, ACTOR and ACTION are forced to be consistent throughout your test
- ACTOR TYPE, ACTOR, ACTION, ORG_ID, APP, APP ID, and RESULT are enums: their values are restricted to a predefined set
- ACTOR is equivalent to both custodian and organisation
- The request method (GET, POST, ...) and slug (e.g. DocumentReference/_search) for ACTION is taken from the Swagger definition.
- "Given ... is requesting to" is mandatory: it sets up the base request
- "And ... is registered to" sets up an org:app:doc-types entry in the Auth table
- "And ... has authorisation headers" sets up the authorisation headers
Clients must provide OAuth access tokens when making requests to the NRLF APIs.
To create an access token for the dev environment, you can do the following:
make get-access-token
To create an access token for another environment:
$ make ENV=[env-name] get-access-token
Valid [env-name] values are dev, int, ref and prod for each associated NRLF environment.
Once you have your access token, you provide it as a bearer token in your API requests using the Authorization header, like this:
Authorization: Bearer <token>
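As a rough sketch with curl (the host is a placeholder rather than a real NRLF endpoint, and the path reuses the DocumentReference/_search slug mentioned elsewhere in this document):
# <token> comes from `make get-access-token`; replace <nrlf-host> with the
# API host for your target environment.
curl -H "Authorization: Bearer <token>" \
     "https://<nrlf-host>/DocumentReference/_search"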
If you need to get an API token for the nrl_sync application, the command is:
$ make ENV=[env-name] APP_ALIAS=nrl_sync get-access-token
There are two parts to the Route53 configuration:
In terraform/account-wide-infrastructure/prod/route53.tf, we have a Hosted Zone:
resource "aws_route53_zone" "dev-ns" {
name = "dev.internal.record-locator.devspineservices.nhs.uk"
}
In terraform/account-wide-infrastructure/mgmt/route53.tf we have both a Hosted Zone and a Record per environment:
resource "aws_route53_zone" "prodspine" {
name = "record-locator.spineservices.nhs.uk"
tags = {
Environment = terraform.workspace
}
}
resource "aws_route53_record" "prodspine" {
zone_id = aws_route53_zone.prodspine.zone_id
name = "prod.internal.record-locator.spineservices.nhs.uk"
records = ["ns-904.awsdns-49.net.",
"ns-1539.awsdns-00.co.uk.",
"ns-1398.awsdns-46.org.",
"ns-300.awsdns-37.com."
]
ttl = 300
type = "NS"
}
The records property is derived by first deploying to a specific environment (in this instance, production), then navigating in the AWS Console to the Route53 Hosted Zone that was just deployed and copying the "Value/Route traffic to" information into the records property. Finally, deploy to the mgmt account with the new information.
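If you prefer the CLI to the console, the same name server values can be read with the AWS CLI (the zone ID below is a placeholder):
# Find the hosted zone ID, then read its delegated name servers
aws route53 list-hosted-zones --query 'HostedZones[].{Id:Id,Name:Name}'
aws route53 get-hosted-zone --id Z0123456789ABC \
    --query 'DelegationSet.NameServers'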
The public-facing sandbox is an additional persistent workspace (int-sandbox) deployed in our INT (int / test) environment, alongside the persistent workspace named ref. It is identical to our live API, except it is open to the world via Apigee (which implements rate limiting on our behalf).
In order to deploy to a sandbox environment (dev-sandbox, qa-sandbox, int-sandbox) you should use the GitHub Action for persistent environments, where you should select the option to deploy to the sandbox workspace.
Any workspace suffixed with -sandbox has a small amount of additional infrastructure deployed to clear and reseed the DynamoDB tables (auth and document pointers), using a Lambda running on a cron schedule. This Lambda can be found in the cron/seed_sandbox directory in the root of this project, and the data used to seed the DynamoDB tables can be found in the cron/seed_sandbox/data directory.
The configuration of organisations' auth/permissions is dealt with in the "apigee" repos, i.e.:
- https://github.com/NHSDigital/record-locator/producer
- https://github.com/NHSDigital/record-locator/consumer
Specifically, the configuration can be found in the file proxies/sandbox/apiproxy/resources/jsc/ConnectionMetadata.SetRequestHeaders.js in these repos.
💡 Developers should make sure that these align between the three repos according to any user journeys that they envisage.
Additionally, and less importantly, there are also fixed organization details in proxies/sandbox/apiproxy/resources/jsc/ClientRPDetailsHeader.SetRequestHeaders.js in these repos.
The process to create a new release is as follows:
- In GitHub Releases, press the "Draft new release" button.
- Press "Choose a tag" and enter the version of the release, say v3.0.1. This will be the tag we use to release from.
- Select develop for the release Target.
- Press the "Generate release notes" button. This will populate the description with everything that's changed since the last release.
- Enter the version of the release into the Release Title field, say v3.0.1.
- Arrange and update the description to accurately represent the highlights of the release.
- Make sure the "Set as a pre-release" checkbox is set.
- Press the "Publish release" button to complete the release process.
Once your new release has been created, you can then deploy this release through the NRLF environments using the "Persistent Environment Deploy" GitHub Action. Once your release has been deployed to prod, edit the release and set the "Set as the latest release" checkbox.
If the Consumer API has changed, or the documentation for that API has changed, you will also need to release NRL Consumer API.
If the Producer API has changed, or the documentation for that API has changed, you will also need to release NRL Producer API.
Once you have a new release version ready, you can deploy it through our environments as follows:
- Use the "Persistent Environment Deploy" Github Action workflow to deploy the release tag to
dev,dev-sandbox,qa,qa-sandbox,int,int-sandboxandperftestenvironments. - If any issues arise in the deployment, fix the issues, create a new release version and start this process again.
- Once the deployments are complete, use the "Persistent Environment Deploy" Github Action workflow to deploy the release version to
ref. - Once that is complete, use the "Persistent Environment Deploy" workflow to deploy the release version to
prod.
Reports are provided as scripts in the reports/ directory. To run a report:
- Login to your AWS account on the command line, choosing the account that contains the resources you want to report on.
- Run your chosen report script, giving the script the resource names and parameters it requires. See each report script for details.
For example, to count the number of pointers from X26 in the pointers table in the dev environment:
$ poetry run python ./scripts/count_pointers_for_custodian.py \
nhsd-nrlf--dev-pointers-table \
X26
The report scripts may use resources in ways that could affect the performance of the live production system. Because of this, it is recommended that you take steps to minimise this impact before running reports.
If you are running a report against the DynamoDB pointers table in prod, you should create a copy (or restore a PITR backup) of the table and run your report against the copy.
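For example, a point-in-time restore might look like this (the table names are illustrative, following the naming pattern shown above):
# Restore the latest recoverable state of the live table into a new copy,
# then point your report script at the copy instead of the live table.
aws dynamodb restore-table-to-point-in-time \
    --source-table-name nhsd-nrlf--prod-pointers-table \
    --target-table-name nhsd-nrlf--prod-pointers-table-report-copy \
    --use-latest-restorable-time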
Please ensure any duplicated resource/data is deleted from the prod environment once you have finished using it.