diff --git a/README.md b/README.md
index 9c1b85a..22ff510 100644
--- a/README.md
+++ b/README.md
@@ -36,7 +36,9 @@ Available APIs at the moment:
### Build API
-`POST /system/api/v1/build` - Perform the build of a custom image and push it to repository.
+`POST /system/api/v1/build/start` - Perform the build of a custom image and push it to the repository.
+
+`POST /system/api/v1/build/cleanup` - Clean up build jobs older than 24 hours (or a different number of hours, if specified).
More informations [Here](docs/DEPLOYER.md)
@@ -71,6 +73,7 @@ Taskfile supports the following tasks:
* buildx: Build the docker image using buildx. Set PUSH=1 to push the image to the registry.
* docker-login: Login to the docker registry. Set REGISTRY=ghcr or REGISTRY=dockerhub in .env to use the respective registry.
* image-tag: Create a new tag for the current git commit.
+* builder:clean: Cleanup old build jobs via api
* builder:cleanjobs: Clean up old jobs
* builder:delete-image: Delete an image from the registry
* builder:get-image: Get an image from the registry
diff --git a/TODO.md b/TODO.md
index 04dc708..b0335f1 100644
--- a/TODO.md
+++ b/TODO.md
@@ -20,10 +20,12 @@
# TODO
## Tests
-Add integration and unit tests
+- [ ] Add integration tests
+- [X] Add unit tests
+- [ ] Add more unit tests
## Various
- [ ] `openserverless.common.whis_user_data.py` - Add `with_` blocks for other new OpenServerless Services
- [ ] `openserverless.common.whisk_user_generator` - Check if `generate_whisk_user_yaml` is complete
-- [ ] cleanup config maps and builds
+- [X] cleanup config maps and builds
diff --git a/TaskfileBuilder.yml b/TaskfileBuilder.yml
index 3477f37..7ce98dc 100644
--- a/TaskfileBuilder.yml
+++ b/TaskfileBuilder.yml
@@ -31,7 +31,7 @@ tasks:
      - if test -z "{{.KIND}}"; then echo "KIND IS NOT SET" && exit 1; fi
      - |
        echo '{"source": "{{.SOURCE}}", "target": "{{.TARGET}}", "kind": "{{.KIND}}", "file": "{{.REQUIREMENTS}}" }' | \
-        curl -X POST $ADMIN_API_URL/api/v1/build -H 
"Content-Type: application/json" -H "Authorization: {{.AUTH}}" -d @- + curl -X POST $ADMIN_API_URL/api/v1/build/start -H "Content-Type: application/json" -H "Authorization: {{.AUTH}}" -d @- - sleep 5 - task: logs deps: @@ -39,6 +39,24 @@ tasks: # - updatetoml silent: true + clean: + desc: Cleanup old build jobs via api + vars: + AUTH: + sh: cat ~/.wskprops | grep "AUTH" | cut -d'=' -f2 | xargs -I {} + MAX_AGE_HOURS: + sh: | + if test -z "{{.MAX_AGE_HOURS}}"; + then echo "24"; + else echo "{{.MAX_AGE_HOURS}}"; + fi + cmds: + - | + echo '{"max_age_hours": "{{.MAX_AGE_HOURS}}" }' | \ + curl -X POST $ADMIN_API_URL/api/v1/build/cleanup -H "Content-Type: application/json" -H "Authorization: {{.AUTH}}" -d @- + silent: false + + logs: desc: Show logs of the last build job cmds: @@ -66,7 +84,7 @@ tasks: desc: List catalogs in the registry cmds: - curl -u $REGISTRY_USER:$REGISTRY_PASS $REGISTRY_HOST/v2/_catalog - silent: false + silent: true list-images: desc: List images in a specific catalog @@ -75,7 +93,7 @@ tasks: cmds: - if test -z "{{.CATALOG}}"; then echo "CATALOG IS NOT SET" && exit 1; fi - curl -u $REGISTRY_USER:$REGISTRY_PASS $REGISTRY_HOST/v2/{{.CATALOG}}/tags/list - silent: false + silent: true get-image: desc: Get an image from the registry diff --git a/docs/DEPLOYER.md b/docs/DEPLOYER.md index 6f4b9c1..52fd36b 100644 --- a/docs/DEPLOYER.md +++ b/docs/DEPLOYER.md @@ -19,15 +19,75 @@ --> # Deployer -These tasks are useful to interact with OpenServerless Admin Api Builder +Deployer is the implementation of the feature described +in [OpenServerless Issue 156](https://github.com/apache/openserverless/issues/156). -There are some tasks to interact with OpenServerless internal registry too. +Specifically, the deployer can extend a default runtime with user-defined +"requirements" by generating a new "extended" user runtime and pushing it to +OpenServerless’ internal Docker registry. 
+
+Currently, the supported "requirements" are listed in the following table:
+
+| kind   | requirement file |
+|:-------|:-----------------|
+| go     | go.mod           |
+| java   | pom.xml          |
+| nodejs | package.json     |
+| php    | composer.json    |
+| python | requirements.txt |
+| ruby   | Gemfile          |
+| dotnet | project.json     |
+
+*NOTE*: this list will be extended as new extendable runtimes become ready.
+
+The "requirement" can be passed as a base64-encoded string inside the `file` attribute
+of the JSON body payload:
+
+```json
+{
+  "source": "apache/openserverless-runtime-python:v3.13-2506091954",
+  "target": "devel:python3.12-custom",
+  "kind": "python",
+  "file": "Z25ld3MKYmVhdXRpZnVsc291cDQ="
+}
+```
+
+By default the deployer pushes to the OpenServerless internal Docker registry.
+To detect the host, it uses the `registry_host` annotation inside the Operator's
+config map.
+To authenticate, it uses the imagePullSecret named `registry-pull-secret`
+(these credentials are valid to push to and pull from the internal registry).
+
+The deployer also supports pushing to an external private Docker registry, using
+ops env:
+
+- `REGISTRY_HOST` - the hostname:port of the external private registry.
+- `REGISTRY_SECRET` - the name of a Kubernetes secret containing an
+imagePullSecret able to push to the registry specified by `REGISTRY_HOST`.
+
+This project also provides support tasks:
+
+- to test the build.
+- to interact with the OpenServerless internal registry.
+
+See the [Examples](#examples) section.
+
+## Endpoints
+
+`POST /system/api/v1/build/start` - Perform the build of a custom image and push it to the repository.
+
+`POST /system/api/v1/build/cleanup` - Clean up build jobs older than 24 hours (or a different number of hours, if specified).
+
+Both endpoints require the wsk token in an `authorization` header.
+The token is used to check the user (the target image must always be in the
+format `user:image-tag`).
## Available tasks

task: Available tasks for this project:

```
+* builder:clean: Cleanup old build jobs via api
* builder:cleanjobs: Clean up old jobs
* builder:delete-image: Delete an image from the registry
* builder:get-image: Get an image from the registry
@@ -44,6 +104,12 @@ task: Available tasks for this project:

`task builder:send SOURCE=apache/openserverless-runtime-python:v3.13-2506091954 TARGET=devel:python3.13-custom KIND=python REQUIREMENTS=$(base64 -i deploy/samples/requirements.txt)`

+### Cleanup of old jobs via API
+
+`task builder:clean MAX_AGE_HOURS=2`
+
+If not specified, MAX_AGE_HOURS defaults to 24.
+
### List images for the user

`task builder:list-images CATALOG=devel`
diff --git a/openserverless/common/kube_api_client.py b/openserverless/common/kube_api_client.py
index d718ba2..a3cd82a 100644
--- a/openserverless/common/kube_api_client.py
+++ b/openserverless/common/kube_api_client.py
@@ -118,7 +118,7 @@ def create_whisk_user(self, whisk_user_dict, namespace="nuvolaris"):
            logging.error("create_whisk_user %s", ex)
            return False

-    def delete_whisk_user(self, username, namespace="nuvolaris"):
+    def delete_whisk_user(self, username: str, namespace="nuvolaris"):
        """
        " Delete a whisk user using a DELETE operation
        param: username of the whisksusers resource to delete
@@ -147,7 +147,7 @@ def delete_whisk_user(self, username, namespace="nuvolaris"):
            logging.error(f"delete_whisk_user {ex}")
            return False

-    def get_whisk_user(self, username, namespace="nuvolaris"):
+    def get_whisk_user(self, username: str, namespace="nuvolaris"):
        """
        " Get a whisk user using a GET operation
        param: username of the whisksusers resource to delete
@@ -210,7 +210,7 @@ def update_whisk_user(self, whisk_user_dict, namespace="nuvolaris"):
            logging.error(f"update_whisk_user {ex}")
            return False

-    def get_config_map(self, cm_name, namespace="nuvolaris"):
+    def get_config_map(self, cm_name: str, namespace="nuvolaris"):
        """
        Get a ConfigMap by name.
:param cm_name: Name of the ConfigMap. @@ -238,7 +238,7 @@ def get_config_map(self, cm_name, namespace="nuvolaris"): logging.error(f"get_config_map {ex}") return None - def post_config_map(self, cm_name, file_or_dir, namespace="nuvolaris"): + def post_config_map(self, cm_name: str, file_or_dir: str, namespace="nuvolaris"): """ Create a ConfigMap from a file or directory. :param cm_name: Name of the ConfigMap. @@ -291,7 +291,7 @@ def post_config_map(self, cm_name, file_or_dir, namespace="nuvolaris"): logging.error(f"post_config_map {ex}") return None - def delete_config_map(self, cm_name, namespace="nuvolaris"): + def delete_config_map(self, cm_name: str, namespace="nuvolaris"): """ Delete a ConfigMap by name. :param cm_name: Name of the ConfigMap to delete. @@ -320,7 +320,7 @@ def delete_config_map(self, cm_name, namespace="nuvolaris"): logging.error(f"delete_config_map {ex}") return False - def get_secret(self, secret_name, namespace="nuvolaris"): + def get_secret(self, secret_name: str, namespace="nuvolaris"): """ Get a Kubernetes secret by name. :param secret_name: Name of the secret. @@ -348,7 +348,7 @@ def get_secret(self, secret_name, namespace="nuvolaris"): logging.error(f"get_secret {ex}") return None - def post_secret(self, secret_name, secret_data, namespace="nuvolaris"): + def post_secret(self, secret_name: str, secret_data: dict, namespace="nuvolaris"): """ Create a Kubernetes secret. :param secret_name: Name of the secret. @@ -385,7 +385,7 @@ def post_secret(self, secret_name, secret_data, namespace="nuvolaris"): logging.error(f"post_secret {ex}") return None - def delete_secret(self, secret_name, namespace="nuvolaris"): + def delete_secret(self, secret_name: str, namespace="nuvolaris"): """ Delete a Kubernetes secret. :param secret_name: Name of the secret to delete. 
@@ -412,11 +412,70 @@ def delete_secret(self, secret_name, namespace="nuvolaris"): except Exception as ex: logging.error(f"delete_secret {ex}") return False + + def get_jobs(self, name_filter: str = None, namespace="nuvolaris"): + """ + Get all Kubernetes jobs in a specific namespace. + :param namespace: Namespace to list jobs from. + :return: List of jobs or None if failed. + """ + url = f"{self.host}/apis/batch/v1/namespaces/{namespace}/jobs" + headers = {"Authorization": self.token} + try: + logging.info(f"GET request to {url}") + response = req.get(url, headers=headers, verify=self.ssl_ca_cert) + + if response.status_code in [200, 202]: + logging.debug( + f"GET to {url} succeeded with {response.status_code}. Body {response.text}" + ) + + if name_filter: + jobs = json.loads(response.text)["items"] + filtered_jobs = [job for job in jobs if name_filter in job["metadata"]["name"]] + return filtered_jobs + + return json.loads(response.text)["items"] - def post_job(self, job_name, job_manifest, namespace="nuvolaris"): + logging.error( + f"GET to {url} failed with {response.status_code}. Body {response.text}" + ) + return None + except Exception as ex: + logging.error(f"get_jobs {ex}") + return None + + def delete_job(self, job_name: str, namespace="nuvolaris"): + """ + Delete a Kubernetes job by name. + :param job_name: Name of the job to delete. + :param namespace: Namespace where the job is located. + :return: True if deletion was successful, False otherwise. + """ + url = f"{self.host}/apis/batch/v1/namespaces/{namespace}/jobs/{job_name}" + headers = {"Authorization": self.token} + + try: + logging.info(f"DELETE request to {url}") + response = req.delete(url, headers=headers, verify=self.ssl_ca_cert) + + if response.status_code in [200, 202]: + logging.debug( + f"DELETE to {url} succeeded with {response.status_code}. Body {response.text}" + ) + return True + + logging.error( + f"DELETE to {url} failed with {response.status_code}. 
Body {response.text}"
+            )
+            return False
+        except Exception as ex:
+            logging.error(f"delete_job {ex}")
+            return False
+
+    def post_job(self, job_manifest: dict, namespace="nuvolaris"):
        """
        Create a Kubernetes job.
-        :param job_name: Name of the job.
        :param job_manifest: Dictionary containing the job manifest.
        :param namespace: Namespace where the job will be created.
        :return: The created job or None if failed.
@@ -440,7 +499,7 @@ def post_job(self, job_name, job_manifest, namespace="nuvolaris"):
            logging.error(f"post_job {ex}")
            return None

-    def get_pod_by_job_name(self, job_name, namespace="nuvolaris"):
+    def get_pod_by_job_name(self, job_name: str, namespace="nuvolaris"):
        """
        Get the pod name associated with a job by its name.
        :param job_name: Name of the job.
@@ -474,7 +533,7 @@ def get_pod_by_job_name(self, job_name, namespace="nuvolaris"):
            logging.error(f"get_pod_by_job_name {ex}")
            return None

-    def stream_pod_logs(self, pod_name, namespace="nuvolaris"):
+    def stream_pod_logs(self, pod_name: str, namespace="nuvolaris"):
        """
        Stream logs from a specific pod.
        :param pod_name: Name of the pod to stream logs from.
@@ -487,7 +546,7 @@ def stream_pod_logs(self, pod_name, namespace="nuvolaris"):
            if line:
                print(line.decode())

-    def check_job_status(self, job_name, namespace="nuvolaris"):
+    def check_job_status(self, job_name: str, namespace="nuvolaris"):
        """
        Check the status of a job by its name.
        :param job_name: Name of the job to check.
diff --git a/openserverless/impl/builder/build_service.py b/openserverless/impl/builder/build_service.py index 9f5d5b5..86e0086 100644 --- a/openserverless/impl/builder/build_service.py +++ b/openserverless/impl/builder/build_service.py @@ -19,48 +19,74 @@ from openserverless.common.kube_api_client import KubeApiClient import os import uuid +import logging +from datetime import datetime, timezone, timedelta +from types import SimpleNamespace JOB_NAME = "build" CM_NAME = "cm" - class BuildService: - def __init__(self, build_config, user_env=None): - self.build_config = build_config + """ + BuildService is responsible for managing the build process in a Kubernetes environment. + It handles the creation of Dockerfiles, ConfigMaps, and Kubernetes Jobs to build Docker images + based on the provided build configuration. + """ + def __init__(self, user_env=None): # A super userful Kube Api Client - self.kube_client = KubeApiClient() + self.kube_client = KubeApiClient() # generate a unique ID for the build self.id = str(uuid.uuid4()) - # define a unique ConfigMap and Job name based on the ID - self.cm = f"{CM_NAME}-{self.id}" - self.job_name = f"{JOB_NAME}-{self.id}" - # user environment variables self.user_env = user_env if user_env is not None else {} + self.user = self.user_env.get('wsk_user_name', '') + + # define a unique ConfigMap and Job name based on the ID + if len(self.user) > 0: + self.cm = f"{CM_NAME}-{self.user}-{self.id}" + self.job_name = f"{JOB_NAME}-{self.user}-{self.id}" + else: + self.cm = f"{CM_NAME}-{self.id}" + self.job_name = f"{JOB_NAME}-{self.id}" + # define registry host self.registry_host = self.get_registry_host() + logging.info(f"Using registry host: {self.registry_host}") + + # define registry auth + self.registry_auth = self.get_registry_auth() + logging.info(f"Using registry auth: {self.registry_auth}") - self.init() + # define demo mode + self.demo_mode = int(os.environ.get("DEMO_MODE", 0)) == 1 + logging.info(f"Using demo mode: 
{self.demo_mode}")

-    def init(self):
+    def init(self, build_config: dict):
        """
        Initialize the build service by creating the necessary ConfigMap.
        """
+        logging.info("Initializing BuildService")
+
+        self.build_config = build_config

        # install the nuvolaris-buildkitd-conf ConfigMap if not present
-        cm = self.kube_client.get_config_map("nuvolaris-buildkitd-conf", namespace="nuvolaris")
+        cm = self.kube_client.get_config_map("nuvolaris-buildkitd-conf")
        if cm is None:
-            self.kube_client.post_config_map(
+            logging.info("Adding nuvolaris-buildkitd-conf ConfigMap")
+            status = self.kube_client.post_config_map(
                cm_name="nuvolaris-buildkitd-conf",
                file_or_dir="deploy/buildkit/buildkitd.toml",
                namespace="nuvolaris",
            )
+            if status is None:
+                logging.error("Failed to create nuvolaris-buildkitd-conf ConfigMap")

-    def get_registry_host(self):
+
+    def get_registry_host(self) -> str:
        """
        Retrieve the registry host
        - firstly, check if the user environment has a registry host set
@@ -77,10 +103,20 @@ def get_registry_host(self):
            annotations = ops_config_map['metadata']['annotations']
            if 'registry_host' in annotations:
                registry_host = annotations['registry_host']
-
+
        return registry_host
+
+    def get_registry_auth(self) -> str:
+        """
+        Get the name of the registry auth secret. If the user environment has a registry auth set, use it.
+        Otherwise, use the default 'registry-pull-secret'.
+        """
+        if self.user_env.get('REGISTRY_SECRET') is not None:
+            return self.user_env.get('REGISTRY_SECRET')
+        return 'registry-pull-secret'
+
    def create_docker_file(self) -> str:
        """
        Create a Dockerfile in the current directory.
@@ -91,7 +127,11 @@ def create_docker_file(self) -> str: if 'file' in self.build_config: requirement_file = self.get_requirements_file_from_kind() dockerfile_content += f"COPY ./{requirement_file} /tmp/{requirement_file}\n" - dockerfile_content += "RUN echo \"/bin/extend\"\n" + if self.demo_mode: + dockerfile_content += "RUN echo \"/bin/extend\"\n" + else: + dockerfile_content += "RUN \"/bin/extend\"\n" + return dockerfile_content def get_requirements_file_from_kind(self) -> str: @@ -123,48 +163,111 @@ def build(self, image_name: str) -> str: """ import tempfile import base64 + + # firstly remove old build jobs + self.delete_old_build_jobs() tmpdirname = tempfile.mkdtemp() + logging.info(f"Starting the build to: {tmpdirname}") if 'file' in self.build_config: + logging.info("Decoding the requirements file from base64") # decode base64 self.build_config.get('file') - requirements = base64.b64decode(self.build_config.get('file')).decode('utf-8') + try: + requirements = base64.b64decode(self.build_config.get('file')).decode('utf-8') - requirement_file = self.get_requirements_file_from_kind() - with open(os.path.join(tmpdirname, requirement_file), 'w') as f: - f.write(requirements) + requirement_file = self.get_requirements_file_from_kind() + with open(os.path.join(tmpdirname, requirement_file), 'w') as f: + f.write(requirements) + + except Exception as e: + logging.error(f"Failed to decode the requirements file: {e}") + return None dockerfile_path = os.path.join(tmpdirname, "Dockerfile") + logging.info(f"Creating Dockerfile at: {dockerfile_path}") with open(dockerfile_path, "w") as dockerfile: dockerfile.write(self.create_docker_file()) - # check if the unzipped directory contains a Dockerfile and is not empty. - if not self.check_unzip_dir(tmpdirname): + # check if the directory contains a Dockerfile and is not empty. 
+ if not self.check_build_dir(tmpdirname): return None # Create a ConfigMap for the build context + logging.info(f"Creating ConfigMap {self.cm} with build context") cm = self.kube_client.post_config_map( cm_name=self.cm, file_or_dir=tmpdirname, - namespace="nuvolaris", ) + logging.info(f"Removing temporary build directory: {tmpdirname}") shutil.rmtree(tmpdirname) if not cm: return None - # retrieve credentials to access the registry - # - + logging.info(f"ConfigMap {self.cm} created successfully") job_template = self.create_build_job(image_name) - job = self.kube_client.post_job(self.job_name, job_template) + job = self.kube_client.post_job(job_template) if not job: + logging.error(f"Failed to create job {self.job_name}") return None + + if not self.kube_client.delete_config_map(cm_name=self.cm): + logging.error(f"Failed to delete ConfigMap {self.cm}") return job + + def delete_old_build_jobs(self, max_age_hours: int = 24) -> int: + name_filter = f"build-{self.user}-" if self.user else "build" + jobs = self.kube_client.get_jobs(name_filter=name_filter) + + try: + cutoff_time = datetime.now(timezone.utc) - timedelta(hours=max_age_hours) + count = 0 + + for j in jobs: + job = SimpleNamespace(**j) + metadata = SimpleNamespace(**job.metadata) + status = SimpleNamespace(**job.status) + + if not metadata or not status: + continue + + job_name = metadata.name + + completed = False + # Check if job is completed + for c in status.conditions: + condition = SimpleNamespace(**c) + if condition.type == "Complete" and condition.status == "True": + completed = True + break + + if not completed: + continue + + # Check completion time + completion_time = status.completionTime + if not completion_time: + continue + job_completion_time = datetime.strptime(completion_time,"%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc) + + if job_completion_time < cutoff_time: + logging.info (f"Deleting job {job_name} (completed at {completion_time})") + status = 
self.kube_client.delete_job(job_name=job_name) + if not status: + logging.error(f"Failed to delete job {job_name}") + else: + count+=1 + logging.info(f"Job {job_name} deleted successfully") + + return count + except Exception as e: + logging.error(f"Error deleting old build jobs: {e}") + return -1 - def check_unzip_dir(self, unzip_dir: str) -> bool: + def check_build_dir(self, unzip_dir: str) -> bool: """ Check if the unzipped directory contains a Dockerfile and is not empty.""" if not os.path.exists(unzip_dir): @@ -208,7 +311,7 @@ def create_build_job(self, image_name: str) -> dict: { "name": "docker-config", "secret": { - "secretName": "registry-pull-secret", + "secretName": self.registry_auth, "items": [ { "key": ".dockerconfigjson", diff --git a/openserverless/rest/build.py b/openserverless/rest/build.py index f19a186..f27a35a 100644 --- a/openserverless/rest/build.py +++ b/openserverless/rest/build.py @@ -17,7 +17,7 @@ # from openserverless import app from http import HTTPStatus -from flask import request +from flask import request, Response import openserverless.common.response_builder as res_builder from openserverless.common.utils import env_to_dict @@ -25,7 +25,23 @@ from openserverless.impl.builder.build_service import BuildService from openserverless.common.openwhisk_authorize import OpenwhiskAuthorize -@app.route('/system/api/v1/build', methods=['POST']) +def authorize() -> Response | dict: + normalized_headers = {key.lower(): value for key, value in request.headers.items()} + auth_header = normalized_headers.get('authorization', None) + + if auth_header is None: + return res_builder.build_error_message("Missing authorization header", 401) + + oa = OpenwhiskAuthorize() + try: + user_data = oa.login(auth_header) + return user_data + + + except AuthorizationError: + return res_builder.build_error_message("Invalid authorization", 401) + +@app.route('/system/api/v1/build/start', methods=['POST']) def build(): """ Build Endpoint @@ -72,47 +88,135 @@ def 
build(): description: Internal Server Error. Build process failed. schema: $ref: '#/definitions/Message' - """ + """ + auth_result = authorize() + if isinstance(auth_result, Response): + return auth_result + + env = env_to_dict(auth_result) + if env is None: + return res_builder.build_error_message("User environment not found", status_code=HTTPStatus.UNAUTHORIZED) - normalized_headers = {key.lower(): value for key, value in request.headers.items()} - auth_header = normalized_headers.get('authorization', None) + if (request.json is None): + return res_builder.build_error_message("No JSON payload provided for build.", status_code=HTTPStatus.BAD_REQUEST) + + json_data = request.json + if 'source' not in json_data: + return res_builder.build_error_message("No source provided for build.", status_code=HTTPStatus.BAD_REQUEST) + if 'target' not in json_data: + return res_builder.build_error_message("No target provided for build.", status_code=HTTPStatus.BAD_REQUEST) + if 'kind' not in json_data: + return res_builder.build_error_message("No kind provided for build.", status_code=HTTPStatus.BAD_REQUEST) + - if auth_header is None: - return res_builder.build_error_message("Missing authorization header", 401) + # validate the target + wsk_user_name = auth_result.get('login','').lower() + target = json_data.get('target') + target_user = str(target).split(':')[0] + if wsk_user_name != target_user: + return res_builder.build_error_message("Invalid target for the build.", status_code=HTTPStatus.BAD_REQUEST) - oa = OpenwhiskAuthorize() - try: - user_data = oa.login(auth_header) - env = env_to_dict(user_data) - if env is None: - return res_builder.build_error_message("User environment not found", status_code=HTTPStatus.UNAUTHORIZED) + env['wsk_user_name'] = wsk_user_name + build_service = BuildService(user_env=env) + build_service.init(build_config=json_data) + build_success = build_service.build(json_data.get('target')) # Replace with your desired image name - if (request.json is 
None): - return res_builder.build_error_message("No JSON payload provided for build.", status_code=HTTPStatus.BAD_REQUEST) - - json_data = request.json - if 'source' not in json_data: - return res_builder.build_error_message("No source provided for build.", status_code=HTTPStatus.BAD_REQUEST) - if 'target' not in json_data: - return res_builder.build_error_message("No target provided for build.", status_code=HTTPStatus.BAD_REQUEST) - if 'kind' not in json_data: - return res_builder.build_error_message("No kind provided for build.", status_code=HTTPStatus.BAD_REQUEST) - + if not build_success: + return res_builder.build_error_message("Build process failed.", status_code=HTTPStatus.INTERNAL_SERVER_ERROR) + + return res_builder.build_response_message("Build process initiated successfully.", status_code=HTTPStatus.OK) - # validate the target - target = json_data.get('target') - target_user = str(target).split(':')[0] - if user_data.get('login') != target_user: - return res_builder.build_error_message("Invalid target for the build.", status_code=HTTPStatus.BAD_REQUEST) +@app.route('/system/api/v1/build/cleanup', methods=['POST']) +def clean(): + """ + Cleanup Endpoint + --- + summary: Clean up old build jobs for the authenticated user. + description: > + This endpoint deletes build jobs older than a specified number of hours for the authenticated user. + The user must provide a valid JSON payload with the optional parameter `max_age_hours` to specify the age threshold. + If not provided, the default is 24 hours. + tags: + - Build + security: + - openwhiskBasicAuth: [] + consumes: + - application/json + operationId: cleanUpJobs + parameters: + - in: body + name: BuildRequest + required: true + schema: + type: object + properties: + max_age_hours: + type: integer + description: Maximum age of build jobs (in hours) to be deleted. + default: 24 + + responses: + '200': + description: Successfully cleaned up old build jobs. 
+ content: + application/json: + schema: + type: object + properties: + message: + type: string + example: Cleaned up 5 jobs successfully. + '400': + description: Bad request. No JSON payload provided for cleanup. + content: + application/json: + schema: + type: object + properties: + error: + type: string + example: No JSON payload provided for cleanup. + '401': + description: Unauthorized. User environment not found. + content: + application/json: + schema: + type: object + properties: + error: + type: string + example: User environment not found + '500': + description: Internal server error. Failed to clean up old build jobs. + content: + application/json: + schema: + type: object + properties: + error: + type: string + example: Failed to clean up old build jobs. + """ - - build_service = BuildService(build_config=json_data, user_env=env) - build_success = build_service.build(json_data.get('target')) # Replace with your desired image name - - if not build_success: - return res_builder.build_error_message("Build process failed.", status_code=HTTPStatus.INTERNAL_SERVER_ERROR) - - return res_builder.build_response_message("Build process initiated successfully.", status_code=HTTPStatus.OK) + auth_result = authorize() + if isinstance(auth_result, Response): + return auth_result - except AuthorizationError: - return res_builder.build_error_message("Invalid authorization", 401) \ No newline at end of file + env = env_to_dict(auth_result) + if env is None: + return res_builder.build_error_message("User environment not found", status_code=HTTPStatus.UNAUTHORIZED) + + if (request.json is None): + return res_builder.build_error_message("No JSON payload provided for cleanup.", status_code=HTTPStatus.BAD_REQUEST) + + wsk_user_name = auth_result.get('login','').lower() + env['wsk_user_name'] = wsk_user_name + json_data = request.json + max_age_hours = int(json_data.get('max_age_hours', 24)) + + build_service = BuildService(user_env=env) + clean_result = 
build_service.delete_old_build_jobs(max_age_hours=max_age_hours) + if clean_result == -1: + return res_builder.build_error_message("Failed to clean up old build jobs.", status_code=HTTPStatus.INTERNAL_SERVER_ERROR) + + return res_builder.build_response_message(f"Cleaned up {clean_result} jobs successfully.", status_code=HTTPStatus.OK) \ No newline at end of file
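The payload that `task builder:send` and the `/system/api/v1/build/start` endpoint exchange (documented in the DEPLOYER.md hunk above) can be assembled in a few lines of Python. This is an illustrative sketch: `make_build_payload` is a hypothetical helper, not part of the patch; only the JSON shape and the base64 `file` encoding come from the documentation.

```python
import base64
import json


def make_build_payload(source: str, target: str, kind: str, requirements_text: str) -> str:
    """Build the JSON body expected by POST /system/api/v1/build/start.

    The requirements file travels base64-encoded in the `file` attribute,
    matching the sample payload in docs/DEPLOYER.md.
    """
    encoded = base64.b64encode(requirements_text.encode("utf-8")).decode("ascii")
    return json.dumps({
        "source": source,
        "target": target,
        "kind": kind,
        "file": encoded,
    })


# Encoding "gnews\nbeautifulsoup4" reproduces the sample `file` value
# "Z25ld3MKYmVhdXRpZnVsc291cDQ=" shown in docs/DEPLOYER.md.
payload = make_build_payload(
    "apache/openserverless-runtime-python:v3.13-2506091954",
    "devel:python3.12-custom",
    "python",
    "gnews\nbeautifulsoup4",
)
print(payload)
```

Note that the `target` must start with the authenticated user's name (`user:image-tag`), since the `/build/start` handler rejects any target whose prefix does not match the login from the `authorization` header.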
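The age check inside `delete_old_build_jobs` boils down to parsing the job's `status.completionTime` (Kubernetes timestamp format) and comparing it against a UTC cutoff. The standalone helper below is a sketch of that logic for illustration; `is_expired` is a hypothetical name and is not part of the patch.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


def is_expired(completion_time: str, max_age_hours: int = 24,
               now: Optional[datetime] = None) -> bool:
    """Return True if a job completed more than max_age_hours ago.

    `completion_time` uses the Kubernetes timestamp format, e.g.
    "2024-01-01T12:00:00Z", as found in Job.status.completionTime.
    """
    now = now or datetime.now(timezone.utc)
    completed = datetime.strptime(
        completion_time, "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    # A job is eligible for deletion when it finished before the cutoff.
    return completed < now - timedelta(hours=max_age_hours)


# With "now" pinned to 2024-01-02T12:00:00Z and a 24h window:
now = datetime(2024, 1, 2, 12, 0, 0, tzinfo=timezone.utc)
print(is_expired("2024-01-01T10:00:00Z", 24, now))  # → True
print(is_expired("2024-01-02T11:00:00Z", 24, now))  # → False
```

This mirrors why `/build/cleanup` defaults `max_age_hours` to 24: only jobs whose `Complete` condition is set and whose completion time falls before the cutoff are deleted.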