From 9874ca2b89ef68aa10749a99adc1eb01643598a7 Mon Sep 17 00:00:00 2001
From: Iain Lane
Date: Fri, 16 May 2025 10:41:46 +0100
Subject: [PATCH 1/7] feat(control-plane): add support for handling multiple
 events in a single invocation

Currently we restrict the `scale-up` Lambda to handling a single event at a
time. In very busy environments this can become a bottleneck: each invocation
makes calls to the GitHub and AWS APIs, and those can take long enough that we
can't process job-queued events as fast as they arrive.

In our environment we are also using a pool, and we have typically responded
to the resulting alerts (SQS queue length growing) by expanding the size of
the pool. This helps because we more often find that we don't need to scale
up, which lets the Lambdas exit a bit earlier, so we get through the queue
faster. But it makes the environment much less responsive to changes in usage
patterns.

At its core, this Lambda's task is to construct an EC2 `CreateFleet` call to
create instances, after working out how many are needed. That is a job which
can be batched: we can take any number of events, calculate the diff between
our current state and the number of jobs we have, cap it at the maximum, and
then issue a single call.

The thing to be careful about is partial failure, where EC2 creates some of
the instances we wanted but not all of them. Lambda has a configurable
function response type which can be set to `ReportBatchItemFailures`. In this
mode, we return a list of failed messages from our handler and only those are
retried. We use this to hand back as many events as we failed to process.

Now that we are potentially processing multiple events in a single
invocation, one thing we should optimise for is not recreating GitHub API
clients. We need one client for the app itself, which we use to look up
installation IDs, and then one client for each installation that is relevant
to the batch of events we are processing. This is done by creating a new
client the first time we see an event for a given installation.

We also remove the same `batch_size = 1` constraint from the `job-retry`
Lambda and make it configurable instead, using AWS's SQS default of 10 when
not configured. This Lambda is used to retry events that previously failed.
However, instead of reporting failures to be retried, here we keep the
pre-existing fault-tolerant behaviour where errors are logged but explicitly
do not cause message retries, avoiding infinite loops from persistent GitHub
API issues or malformed events.

Tests are added for all of this.
---
 README.md                                          |   2 +
 .../control-plane/src/aws/runners.test.ts          | 130 ++-
 .../control-plane/src/aws/runners.ts               | 101 +-
 .../control-plane/src/lambda.test.ts               | 185 +++-
 lambdas/functions/control-plane/src/lambda.ts      |  54 +-
 lambdas/functions/control-plane/src/local.ts       |  42 +-
 .../control-plane/src/pool/pool.test.ts            |  24 +-
 .../functions/control-plane/src/pool/pool.ts       |   2 +-
 .../src/scale-runners/ScaleError.test.ts           |  76 ++
 .../src/scale-runners/ScaleError.ts                |  26 +-
 .../src/scale-runners/job-retry.test.ts            |  92 ++
 .../src/scale-runners/scale-up.test.ts             | 944 +++++++++++++++---
 .../src/scale-runners/scale-up.ts                  | 275 +++--
 .../aws-powertools-util/src/logger/index.ts        |  10 +-
 main.tf                                            |  46 +-
 modules/multi-runner/README.md                     |   2 +
 modules/multi-runner/runners.tf                    |  46 +-
 modules/multi-runner/variables.tf                  |  12 +
 modules/runners/README.md                          |   2 +
 modules/runners/job-retry.tf                       |  50 +-
 modules/runners/job-retry/README.md                |   2 +-
 modules/runners/job-retry/main.tf                  |   7 +-
 modules/runners/job-retry/variables.tf             |  16 +-
 modules/runners/scale-up.tf                        |  10 +-
 modules/runners/variables.tf                       |  20 +
 variables.tf                                       |  16 +
 26 files changed, 1729 insertions(+), 463 deletions(-)
 create mode 100644
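The per-installation client reuse described in the commit message can be sketched as a small memoisation pattern. The names below (`createInstallationClient`, `getOrCreateClient`, `OctokitLike`) are illustrative stand-ins, not the identifiers used in `scale-up.ts`; in the real code the factory authenticates via the GitHub App.

```typescript
// Stand-in for an authenticated Octokit client; the real type comes from
// the GitHub SDK.
type OctokitLike = { installationId: number };

const clientCache = new Map<number, OctokitLike>();

// Hypothetical factory; in the real implementation this performs GitHub App
// installation authentication, which is the expensive step we want to avoid
// repeating for every event in the batch.
function createInstallationClient(installationId: number): OctokitLike {
  return { installationId };
}

// Create a client the first time we see an event for a given installation,
// then reuse it for every subsequent event in the same batch.
function getOrCreateClient(installationId: number): OctokitLike {
  let client = clientCache.get(installationId);
  if (client === undefined) {
    client = createInstallationClient(installationId);
    clientCache.set(installationId, client);
  }
  return client;
}
```

The cache lives for the duration of one invocation's batch, so a batch touching three installations makes exactly three authentication round-trips instead of one per event.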
lambdas/functions/control-plane/src/scale-runners/ScaleError.test.ts diff --git a/README.md b/README.md index 80243b35ca..dbbae19ce8 100644 --- a/README.md +++ b/README.md @@ -155,6 +155,8 @@ Join our discord community via [this invite link](https://discord.gg/bxgXW8jJGh) | [key\_name](#input\_key\_name) | Key pair name | `string` | `null` | no | | [kms\_key\_arn](#input\_kms\_key\_arn) | Optional CMK Key ARN to be used for Parameter Store. This key must be in the current account. | `string` | `null` | no | | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | +| [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no | +| [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no | | [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | diff --git a/lambdas/functions/control-plane/src/aws/runners.test.ts b/lambdas/functions/control-plane/src/aws/runners.test.ts index a02f62cd36..c4fd922fd0 100644 --- a/lambdas/functions/control-plane/src/aws/runners.test.ts +++ b/lambdas/functions/control-plane/src/aws/runners.test.ts @@ -1,26 +1,26 @@ +import { tracer } from '@aws-github-runner/aws-powertools-util'; import { CreateFleetCommand, - CreateFleetCommandInput, - CreateFleetInstance, - CreateFleetResult, + type CreateFleetCommandInput, + type CreateFleetInstance, + type CreateFleetResult, CreateTagsCommand, + type DefaultTargetCapacityType, DeleteTagsCommand, - DefaultTargetCapacityType, DescribeInstancesCommand, - DescribeInstancesResult, + type DescribeInstancesResult, EC2Client, SpotAllocationStrategy, TerminateInstancesCommand, } from '@aws-sdk/client-ec2'; -import { GetParameterCommand, GetParameterResult, PutParameterCommand, SSMClient } from '@aws-sdk/client-ssm'; -import { tracer } from '@aws-github-runner/aws-powertools-util'; +import { GetParameterCommand, type GetParameterResult, PutParameterCommand, SSMClient } from '@aws-sdk/client-ssm'; import { mockClient } from 'aws-sdk-client-mock'; import 'aws-sdk-client-mock-jest/vitest'; +import { beforeEach, describe, expect, it, vi } from 'vitest'; import ScaleError from './../scale-runners/ScaleError'; -import { createRunner, listEC2Runners, tag, untag, terminateRunner } from './runners'; -import { RunnerInfo, RunnerInputParameters, RunnerType } from './runners.d'; -import { describe, it, expect, beforeEach, vi } from 'vitest'; +import { createRunner, listEC2Runners, tag, terminateRunner, untag } from './runners'; +import type { RunnerInfo, 
RunnerInputParameters, RunnerType } from './runners.d'; process.env.AWS_REGION = 'eu-east-1'; const mockEC2Client = mockClient(EC2Client); @@ -110,7 +110,10 @@ describe('list instances', () => { it('check orphan tag.', async () => { const instances: DescribeInstancesResult = mockRunningInstances; - instances.Reservations![0].Instances![0].Tags!.push({ Key: 'ghr:orphan', Value: 'true' }); + instances.Reservations![0].Instances![0].Tags!.push({ + Key: 'ghr:orphan', + Value: 'true', + }); mockEC2Client.on(DescribeInstancesCommand).resolves(instances); const resp = await listEC2Runners(); @@ -132,7 +135,11 @@ describe('list instances', () => { it('filters instances on repo name', async () => { mockEC2Client.on(DescribeInstancesCommand).resolves(mockRunningInstances); - await listEC2Runners({ runnerType: 'Repo', runnerOwner: REPO_NAME, environment: undefined }); + await listEC2Runners({ + runnerType: 'Repo', + runnerOwner: REPO_NAME, + environment: undefined, + }); expect(mockEC2Client).toHaveReceivedCommandWith(DescribeInstancesCommand, { Filters: [ { Name: 'instance-state-name', Values: ['running', 'pending'] }, @@ -145,7 +152,11 @@ describe('list instances', () => { it('filters instances on org name', async () => { mockEC2Client.on(DescribeInstancesCommand).resolves(mockRunningInstances); - await listEC2Runners({ runnerType: 'Org', runnerOwner: ORG_NAME, environment: undefined }); + await listEC2Runners({ + runnerType: 'Org', + runnerOwner: ORG_NAME, + environment: undefined, + }); expect(mockEC2Client).toHaveReceivedCommandWith(DescribeInstancesCommand, { Filters: [ { Name: 'instance-state-name', Values: ['running', 'pending'] }, @@ -249,7 +260,9 @@ describe('terminate runner', () => { }; await terminateRunner(runner.instanceId); - expect(mockEC2Client).toHaveReceivedCommandWith(TerminateInstancesCommand, { InstanceIds: [runner.instanceId] }); + expect(mockEC2Client).toHaveReceivedCommandWith(TerminateInstancesCommand, { + InstanceIds: [runner.instanceId], + }); }); 
}); @@ -324,7 +337,10 @@ describe('create runner', () => { await createRunner(createRunnerConfig({ ...defaultRunnerConfig, type: type })); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, type: type }), + ...expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + type: type, + }), }); }); @@ -333,24 +349,36 @@ describe('create runner', () => { mockEC2Client.on(CreateFleetCommand).resolves({ Instances: instances }); - await createRunner({ ...createRunnerConfig(defaultRunnerConfig), numberOfRunners: 2 }); + await createRunner({ + ...createRunnerConfig(defaultRunnerConfig), + numberOfRunners: 2, + }); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, totalTargetCapacity: 2 }), + ...expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + totalTargetCapacity: 2, + }), }); }); it('calls create fleet of 1 instance with the on-demand capacity', async () => { await createRunner(createRunnerConfig({ ...defaultRunnerConfig, capacityType: 'on-demand' })); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, capacityType: 'on-demand' }), + ...expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + capacityType: 'on-demand', + }), }); }); it('calls run instances with the on-demand capacity', async () => { await createRunner(createRunnerConfig({ ...defaultRunnerConfig, maxSpotPrice: '0.1' })); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, maxSpotPrice: '0.1' }), + ...expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + maxSpotPrice: '0.1', + }), }); }); @@ -367,8 +395,16 @@ describe('create runner', () => { }, }; 
mockSSMClient.on(GetParameterCommand).resolves(paramValue); - await createRunner(createRunnerConfig({ ...defaultRunnerConfig, amiIdSsmParameterName: 'my-ami-id-param' })); - const expectedRequest = expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, imageId: 'ami-123' }); + await createRunner( + createRunnerConfig({ + ...defaultRunnerConfig, + amiIdSsmParameterName: 'my-ami-id-param', + }), + ); + const expectedRequest = expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + imageId: 'ami-123', + }); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, expectedRequest); expect(mockSSMClient).toHaveReceivedCommandWith(GetParameterCommand, { Name: 'my-ami-id-param', @@ -380,7 +416,10 @@ describe('create runner', () => { await createRunner(createRunnerConfig({ ...defaultRunnerConfig, tracingEnabled: true })); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, tracingEnabled: true }), + ...expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + tracingEnabled: true, + }), }); }); }); @@ -419,9 +458,12 @@ describe('create runner with errors', () => { }); it('test ScaleError with multiple error.', async () => { - createFleetMockWithErrors(['UnfulfillableCapacity', 'SomeError']); + createFleetMockWithErrors(['UnfulfillableCapacity', 'MaxSpotInstanceCountExceeded', 'NotMappedError']); - await expect(createRunner(createRunnerConfig(defaultRunnerConfig))).rejects.toBeInstanceOf(ScaleError); + await expect(createRunner(createRunnerConfig(defaultRunnerConfig))).rejects.toMatchObject({ + name: 'ScaleError', + failedInstanceCount: 2, + }); expect(mockEC2Client).toHaveReceivedCommandWith( CreateFleetCommand, expectedCreateFleetRequest(defaultExpectedFleetRequestValues), @@ -465,7 +507,12 @@ describe('create runner with errors', () => { mockSSMClient.on(GetParameterCommand).rejects(new Error('Some error')); await expect( - 
createRunner(createRunnerConfig({ ...defaultRunnerConfig, amiIdSsmParameterName: 'my-ami-id-param' })), + createRunner( + createRunnerConfig({ + ...defaultRunnerConfig, + amiIdSsmParameterName: 'my-ami-id-param', + }), + ), ).rejects.toBeInstanceOf(Error); expect(mockEC2Client).not.toHaveReceivedCommand(CreateFleetCommand); expect(mockSSMClient).not.toHaveReceivedCommand(PutParameterCommand); @@ -530,7 +577,7 @@ describe('create runner with errors fail over to OnDemand', () => { }), }); - // second call with with OnDemand failback + // second call with with OnDemand fallback expect(mockEC2Client).toHaveReceivedNthCommandWith(2, CreateFleetCommand, { ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, @@ -540,17 +587,25 @@ describe('create runner with errors fail over to OnDemand', () => { }); }); - it('test InsufficientInstanceCapacity no failback.', async () => { + it('test InsufficientInstanceCapacity no fallback.', async () => { await expect( - createRunner(createRunnerConfig({ ...defaultRunnerConfig, onDemandFailoverOnError: [] })), + createRunner( + createRunnerConfig({ + ...defaultRunnerConfig, + onDemandFailoverOnError: [], + }), + ), ).rejects.toBeInstanceOf(Error); }); - it('test InsufficientInstanceCapacity with mutlipte instances and fallback to on demand .', async () => { + it('test InsufficientInstanceCapacity with multiple instances and fallback to on demand .', async () => { const instancesIds = ['i-123', 'i-456']; createFleetMockWithWithOnDemandFallback(['InsufficientInstanceCapacity'], instancesIds); - const instancesResult = await createRunner({ ...createRunnerConfig(defaultRunnerConfig), numberOfRunners: 2 }); + const instancesResult = await createRunner({ + ...createRunnerConfig(defaultRunnerConfig), + numberOfRunners: 2, + }); expect(instancesResult).toEqual(instancesIds); expect(mockEC2Client).toHaveReceivedCommandTimes(CreateFleetCommand, 2); @@ -580,7 +635,10 @@ describe('create runner with errors fail over to OnDemand', () 
=> { createFleetMockWithWithOnDemandFallback(['UnfulfillableCapacity'], instancesIds); await expect( - createRunner({ ...createRunnerConfig(defaultRunnerConfig), numberOfRunners: 2 }), + createRunner({ + ...createRunnerConfig(defaultRunnerConfig), + numberOfRunners: 2, + }), ).rejects.toBeInstanceOf(Error); expect(mockEC2Client).toHaveReceivedCommandTimes(CreateFleetCommand, 1); @@ -626,7 +684,10 @@ function createFleetMockWithWithOnDemandFallback(errors: string[], instances?: s mockEC2Client .on(CreateFleetCommand) - .resolvesOnce({ Instances: [instanceesFirstCall], Errors: errors.map((e) => ({ ErrorCode: e })) }) + .resolvesOnce({ + Instances: [instanceesFirstCall], + Errors: errors.map((e) => ({ ErrorCode: e })), + }) .resolvesOnce({ Instances: [instancesSecondCall] }); } @@ -673,7 +734,10 @@ interface ExpectedFleetRequestValues { function expectedCreateFleetRequest(expectedValues: ExpectedFleetRequestValues): CreateFleetCommandInput { const tags = [ { Key: 'ghr:Application', Value: 'github-action-runner' }, - { Key: 'ghr:created_by', Value: expectedValues.totalTargetCapacity > 1 ? 'pool-lambda' : 'scale-up-lambda' }, + { + Key: 'ghr:created_by', + Value: expectedValues.totalTargetCapacity > 1 ? 'pool-lambda' : 'scale-up-lambda', + }, { Key: 'ghr:Type', Value: expectedValues.type }, { Key: 'ghr:Owner', Value: REPO_NAME }, ]; diff --git a/lambdas/functions/control-plane/src/aws/runners.ts b/lambdas/functions/control-plane/src/aws/runners.ts index 6779dd39d2..d95dc99fa4 100644 --- a/lambdas/functions/control-plane/src/aws/runners.ts +++ b/lambdas/functions/control-plane/src/aws/runners.ts @@ -166,53 +166,62 @@ async function processFleetResult( ): Promise { const instances: string[] = fleet.Instances?.flatMap((i) => i.InstanceIds?.flatMap((j) => j) || []) || []; - if (instances.length !== runnerParameters.numberOfRunners) { - logger.warn( - `${ - instances.length === 0 ? 
'No' : instances.length + ' off ' + runnerParameters.numberOfRunners - } instances created.`, - { data: fleet }, - ); - const errors = fleet.Errors?.flatMap((e) => e.ErrorCode || '') || []; - - // Educated guess of errors that would make sense to retry based on the list - // https://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html - const scaleErrors = [ - 'UnfulfillableCapacity', - 'MaxSpotInstanceCountExceeded', - 'TargetCapacityLimitExceededException', - 'RequestLimitExceeded', - 'ResourceLimitExceeded', - 'MaxSpotInstanceCountExceeded', - 'MaxSpotFleetRequestCountExceeded', - 'InsufficientInstanceCapacity', - ]; - - if ( - errors.some((e) => runnerParameters.onDemandFailoverOnError?.includes(e)) && - runnerParameters.ec2instanceCriteria.targetCapacityType === 'spot' - ) { - logger.warn(`Create fleet failed, initatiing fall back to on demand instances.`); - logger.debug('Create fleet failed.', { data: fleet.Errors }); - const numberOfInstances = runnerParameters.numberOfRunners - instances.length; - const instancesOnDemand = await createRunner({ - ...runnerParameters, - numberOfRunners: numberOfInstances, - onDemandFailoverOnError: ['InsufficientInstanceCapacity'], - ec2instanceCriteria: { ...runnerParameters.ec2instanceCriteria, targetCapacityType: 'on-demand' }, - }); - instances.push(...instancesOnDemand); - return instances; - } else if (errors.some((e) => scaleErrors.includes(e))) { - logger.warn('Create fleet failed, ScaleError will be thrown to trigger retry for ephemeral runners.'); - logger.debug('Create fleet failed.', { data: fleet.Errors }); - throw new ScaleError('Failed to create instance, create fleet failed.'); - } else { - logger.warn('Create fleet failed, error not recognized as scaling error.', { data: fleet.Errors }); - throw Error('Create fleet failed, no instance created.'); - } + if (instances.length === runnerParameters.numberOfRunners) { + return instances; } - return instances; + + logger.warn( + `${ + 
instances.length === 0 ? 'No' : instances.length + ' off ' + runnerParameters.numberOfRunners + } instances created.`, + { data: fleet }, + ); + + const errors = fleet.Errors?.flatMap((e) => e.ErrorCode || '') || []; + + if ( + errors.some((e) => runnerParameters.onDemandFailoverOnError?.includes(e)) && + runnerParameters.ec2instanceCriteria.targetCapacityType === 'spot' + ) { + logger.warn(`Create fleet failed, initatiing fall back to on demand instances.`); + logger.debug('Create fleet failed.', { data: fleet.Errors }); + const numberOfInstances = runnerParameters.numberOfRunners - instances.length; + const instancesOnDemand = await createRunner({ + ...runnerParameters, + numberOfRunners: numberOfInstances, + onDemandFailoverOnError: ['InsufficientInstanceCapacity'], + ec2instanceCriteria: { ...runnerParameters.ec2instanceCriteria, targetCapacityType: 'on-demand' }, + }); + instances.push(...instancesOnDemand); + return instances; + } + + // Educated guess of errors that would make sense to retry based on the list + // https://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html + const scaleErrors = [ + 'UnfulfillableCapacity', + 'MaxSpotInstanceCountExceeded', + 'TargetCapacityLimitExceededException', + 'RequestLimitExceeded', + 'ResourceLimitExceeded', + 'MaxSpotInstanceCountExceeded', + 'MaxSpotFleetRequestCountExceeded', + 'InsufficientInstanceCapacity', + ]; + + const failedCount = countScaleErrors(errors, scaleErrors); + if (failedCount > 0) { + logger.warn('Create fleet failed, ScaleError will be thrown to trigger retry for ephemeral runners.'); + logger.debug('Create fleet failed.', { data: fleet.Errors }); + throw new ScaleError(failedCount); + } + + logger.warn('Create fleet failed, error not recognized as scaling error.', { data: fleet.Errors }); + throw Error('Create fleet failed, no instance created.'); +} + +function countScaleErrors(errors: string[], scaleErrors: string[]): number { + return errors.reduce((acc, e) => 
(scaleErrors.includes(e) ? acc + 1 : acc), 0); } async function getAmiIdOverride(runnerParameters: Runners.RunnerInputParameters): Promise { diff --git a/lambdas/functions/control-plane/src/lambda.test.ts b/lambdas/functions/control-plane/src/lambda.test.ts index 2c54a4d541..3e6a897e88 100644 --- a/lambdas/functions/control-plane/src/lambda.test.ts +++ b/lambdas/functions/control-plane/src/lambda.test.ts @@ -28,11 +28,11 @@ const sqsRecord: SQSRecord = { }, awsRegion: '', body: JSON.stringify(body), - eventSource: 'aws:SQS', + eventSource: 'aws:sqs', eventSourceARN: '', md5OfBody: '', messageAttributes: {}, - messageId: '', + messageId: 'abcd1234', receiptHandle: '', }; @@ -70,19 +70,33 @@ vi.mock('@aws-github-runner/aws-powertools-util'); vi.mock('@aws-github-runner/aws-ssm-util'); describe('Test scale up lambda wrapper.', () => { - it('Do not handle multiple record sets.', async () => { - await testInvalidRecords([sqsRecord, sqsRecord]); + it('Do not handle empty record sets.', async () => { + const sqsEventMultipleRecords: SQSEvent = { + Records: [], + }; + + await expect(scaleUpHandler(sqsEventMultipleRecords, context)).resolves.not.toThrow(); }); - it('Do not handle empty record sets.', async () => { - await testInvalidRecords([]); + it('Ignores non-sqs event sources.', async () => { + const record = { + ...sqsRecord, + eventSource: 'aws:non-sqs', + }; + + const sqsEventMultipleRecordsNonSQS: SQSEvent = { + Records: [record], + }; + + await expect(scaleUpHandler(sqsEventMultipleRecordsNonSQS, context)).resolves.not.toThrow(); + expect(scaleUp).toHaveBeenCalledWith([]); }); it('Scale without error should resolve.', async () => { const mock = vi.fn(scaleUp); mock.mockImplementation(() => { return new Promise((resolve) => { - resolve(); + resolve([]); }); }); await expect(scaleUpHandler(sqsEvent, context)).resolves.not.toThrow(); @@ -95,37 +109,150 @@ describe('Test scale up lambda wrapper.', () => { await expect(scaleUpHandler(sqsEvent, 
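The refactored `processFleetResult` above reduces the fleet's error codes to a count of retry-worthy failures and passes that count to `ScaleError`. Extracted as a standalone sketch (note that the list in the patch contains `MaxSpotInstanceCountExceeded` twice; it is listed once here, which changes nothing since `includes` only tests membership):

```typescript
// Error codes treated as retryable scaling failures, mirroring the
// "educated guess" list in processFleetResult.
const retryableScaleErrors = [
  'UnfulfillableCapacity',
  'MaxSpotInstanceCountExceeded',
  'TargetCapacityLimitExceededException',
  'RequestLimitExceeded',
  'ResourceLimitExceeded',
  'MaxSpotFleetRequestCountExceeded',
  'InsufficientInstanceCapacity',
];

// Reduce the CreateFleet error codes down to the number of failures that
// justify handing messages back to SQS for retry.
function countScaleErrors(errors: string[], scaleErrors: string[] = retryableScaleErrors): number {
  return errors.reduce((acc, e) => (scaleErrors.includes(e) ? acc + 1 : acc), 0);
}
```

This matches the updated test expectation in `runners.test.ts`, where `['UnfulfillableCapacity', 'MaxSpotInstanceCountExceeded', 'NotMappedError']` yields a `failedInstanceCount` of 2.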
context)).resolves.not.toThrow(); }); - it('Scale should be rejected', async () => { - const error = new ScaleError('Scale should be rejected'); + it('Scale should create a batch failure message', async () => { + const error = new ScaleError(); const mock = vi.fn() as MockedFunction; mock.mockImplementation(() => { return Promise.reject(error); }); vi.mocked(scaleUp).mockImplementation(mock); - await expect(scaleUpHandler(sqsEvent, context)).rejects.toThrow(error); + await expect(scaleUpHandler(sqsEvent, context)).resolves.toEqual({ + batchItemFailures: [{ itemIdentifier: sqsRecord.messageId }], + }); }); -}); -async function testInvalidRecords(sqsRecords: SQSRecord[]) { - const mock = vi.fn(scaleUp); - const logWarnSpy = vi.spyOn(logger, 'warn'); - mock.mockImplementation(() => { - return new Promise((resolve) => { - resolve(); + describe('Batch processing', () => { + beforeEach(() => { + vi.clearAllMocks(); + }); + + const createMultipleRecords = (count: number, eventSource = 'aws:sqs'): SQSRecord[] => { + return Array.from({ length: count }, (_, i) => ({ + ...sqsRecord, + eventSource, + messageId: `message-${i}`, + body: JSON.stringify({ + ...body, + id: i + 1, + }), + })); + }; + + it('Should handle multiple SQS records in a single invocation', async () => { + const records = createMultipleRecords(3); + const multiRecordEvent: SQSEvent = { Records: records }; + + const mock = vi.fn(scaleUp); + mock.mockImplementation(() => Promise.resolve([])); + vi.mocked(scaleUp).mockImplementation(mock); + + await expect(scaleUpHandler(multiRecordEvent, context)).resolves.not.toThrow(); + expect(scaleUp).toHaveBeenCalledWith( + expect.arrayContaining([ + expect.objectContaining({ messageId: 'message-0' }), + expect.objectContaining({ messageId: 'message-1' }), + expect.objectContaining({ messageId: 'message-2' }), + ]), + ); + }); + + it('Should return batch item failures for rejected messages', async () => { + const records = createMultipleRecords(3); + const 
multiRecordEvent: SQSEvent = { Records: records }; + + const mock = vi.fn(scaleUp); + mock.mockImplementation(() => Promise.resolve(['message-1', 'message-2'])); + vi.mocked(scaleUp).mockImplementation(mock); + + const result = await scaleUpHandler(multiRecordEvent, context); + expect(result).toEqual({ + batchItemFailures: [{ itemIdentifier: 'message-1' }, { itemIdentifier: 'message-2' }], + }); + }); + + it('Should filter out non-SQS event sources', async () => { + const sqsRecords = createMultipleRecords(2, 'aws:sqs'); + const nonSqsRecords = createMultipleRecords(1, 'aws:sns'); + const mixedEvent: SQSEvent = { + Records: [...sqsRecords, ...nonSqsRecords], + }; + + const mock = vi.fn(scaleUp); + mock.mockImplementation(() => Promise.resolve([])); + vi.mocked(scaleUp).mockImplementation(mock); + + await scaleUpHandler(mixedEvent, context); + expect(scaleUp).toHaveBeenCalledWith( + expect.arrayContaining([ + expect.objectContaining({ messageId: 'message-0' }), + expect.objectContaining({ messageId: 'message-1' }), + ]), + ); + expect(scaleUp).not.toHaveBeenCalledWith( + expect.arrayContaining([expect.objectContaining({ messageId: 'message-2' })]), + ); + }); + + it('Should sort messages by retry count', async () => { + const records = [ + { + ...sqsRecord, + messageId: 'high-retry', + body: JSON.stringify({ ...body, retryCounter: 5 }), + }, + { + ...sqsRecord, + messageId: 'low-retry', + body: JSON.stringify({ ...body, retryCounter: 1 }), + }, + { + ...sqsRecord, + messageId: 'no-retry', + body: JSON.stringify({ ...body }), + }, + ]; + const multiRecordEvent: SQSEvent = { Records: records }; + + const mock = vi.fn(scaleUp); + mock.mockImplementation((messages) => { + // Verify messages are sorted by retry count (ascending) + expect(messages[0].messageId).toBe('no-retry'); + expect(messages[1].messageId).toBe('low-retry'); + expect(messages[2].messageId).toBe('high-retry'); + return Promise.resolve([]); + }); + vi.mocked(scaleUp).mockImplementation(mock); + + await 
scaleUpHandler(multiRecordEvent, context); + }); + + it('Should return all failed messages when scaleUp throws non-ScaleError', async () => { + const records = createMultipleRecords(2); + const multiRecordEvent: SQSEvent = { Records: records }; + + const mock = vi.fn(scaleUp); + mock.mockImplementation(() => Promise.reject(new Error('Generic error'))); + vi.mocked(scaleUp).mockImplementation(mock); + + const result = await scaleUpHandler(multiRecordEvent, context); + expect(result).toEqual({ batchItemFailures: [] }); + }); + + it('Should throw when scaleUp throws ScaleError', async () => { + const records = createMultipleRecords(2); + const multiRecordEvent: SQSEvent = { Records: records }; + + const error = new ScaleError(2); + const mock = vi.fn(scaleUp); + mock.mockImplementation(() => Promise.reject(error)); + vi.mocked(scaleUp).mockImplementation(mock); + + await expect(scaleUpHandler(multiRecordEvent, context)).resolves.toEqual({ + batchItemFailures: [{ itemIdentifier: 'message-0' }, { itemIdentifier: 'message-1' }], + }); }); }); - const sqsEventMultipleRecords: SQSEvent = { - Records: sqsRecords, - }; - - await expect(scaleUpHandler(sqsEventMultipleRecords, context)).resolves.not.toThrow(); - - expect(logWarnSpy).toHaveBeenCalledWith( - expect.stringContaining( - 'Event ignored, only one record at the time can be handled, ensure the lambda batch size is set to 1.', - ), - ); -} +}); describe('Test scale down lambda wrapper.', () => { it('Scaling down no error.', async () => { diff --git a/lambdas/functions/control-plane/src/lambda.ts b/lambdas/functions/control-plane/src/lambda.ts index 3e3ab90557..e2a0451c95 100644 --- a/lambdas/functions/control-plane/src/lambda.ts +++ b/lambdas/functions/control-plane/src/lambda.ts @@ -1,34 +1,66 @@ import middy from '@middy/core'; import { logger, setContext } from '@aws-github-runner/aws-powertools-util'; import { captureLambdaHandler, tracer } from '@aws-github-runner/aws-powertools-util'; -import { Context, SQSEvent 
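The handler contract exercised by the tests above reduces to a small pattern: with `ReportBatchItemFailures` enabled on the event source mapping, SQS retries only the message IDs named in the returned `batchItemFailures` list. A minimal sketch, where `processBatch` is a hypothetical stand-in for the real `scaleUp` call and the interfaces are trimmed versions of the `aws-lambda` types:

```typescript
// Trimmed stand-ins for the 'aws-lambda' types used by the real handler.
interface SQSRecord { messageId: string; body: string; }
interface SQSEvent { Records: SQSRecord[]; }
interface SQSBatchResponse { batchItemFailures: { itemIdentifier: string }[]; }

// Hypothetical batch processor: returns the IDs of messages it could not
// handle (the real scaleUp returns rejected message IDs the same way).
async function processBatch(records: SQSRecord[]): Promise<string[]> {
  return records.filter((r) => JSON.parse(r.body).fail === true).map((r) => r.messageId);
}

// Only the IDs listed in batchItemFailures are returned to the queue and
// retried; the rest of the batch is deleted as successfully consumed.
async function handler(event: SQSEvent): Promise<SQSBatchResponse> {
  const failedIds = await processBatch(event.Records);
  return { batchItemFailures: failedIds.map((id) => ({ itemIdentifier: id })) };
}
```

Returning an empty `batchItemFailures` array marks the whole batch as processed, which is why the non-`ScaleError` path in the patch resolves with `{ batchItemFailures: [] }` rather than throwing.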
} from 'aws-lambda'; +import { Context, type SQSBatchItemFailure, type SQSBatchResponse, SQSEvent } from 'aws-lambda'; import { PoolEvent, adjust } from './pool/pool'; import ScaleError from './scale-runners/ScaleError'; import { scaleDown } from './scale-runners/scale-down'; -import { scaleUp } from './scale-runners/scale-up'; +import { type ActionRequestMessage, type ActionRequestMessageSQS, scaleUp } from './scale-runners/scale-up'; import { SSMCleanupOptions, cleanSSMTokens } from './scale-runners/ssm-housekeeper'; import { checkAndRetryJob } from './scale-runners/job-retry'; -export async function scaleUpHandler(event: SQSEvent, context: Context): Promise { +export async function scaleUpHandler(event: SQSEvent, context: Context): Promise { setContext(context, 'lambda.ts'); logger.logEventIfEnabled(event); - if (event.Records.length !== 1) { - logger.warn('Event ignored, only one record at the time can be handled, ensure the lambda batch size is set to 1.'); - return Promise.resolve(); + const sqsMessages: ActionRequestMessageSQS[] = []; + const warnedEventSources = new Set(); + + for (const { body, eventSource, messageId } of event.Records) { + if (eventSource !== 'aws:sqs') { + if (!warnedEventSources.has(eventSource)) { + logger.warn('Ignoring non-sqs event source', { eventSource }); + warnedEventSources.add(eventSource); + } + + continue; + } + + const payload = JSON.parse(body) as ActionRequestMessage; + sqsMessages.push({ ...payload, messageId }); } + // Sort messages by their retry count, so that we retry the same messages if + // there's a persistent failure. This should cause messages to be dropped + // quicker than if we retried in an arbitrary order. + sqsMessages.sort((l, r) => { + return (l.retryCounter ?? 0) - (r.retryCounter ?? 
0); + }); + + const batchItemFailures: SQSBatchItemFailure[] = []; + try { - await scaleUp(event.Records[0].eventSource, JSON.parse(event.Records[0].body)); - return Promise.resolve(); + const rejectedMessageIds = await scaleUp(sqsMessages); + + for (const messageId of rejectedMessageIds) { + batchItemFailures.push({ + itemIdentifier: messageId, + }); + } + + return { batchItemFailures }; } catch (e) { if (e instanceof ScaleError) { - return Promise.reject(e); + batchItemFailures.push(...e.toBatchItemFailures(sqsMessages)); + logger.warn(`${e.detailedMessage} A retry will be attempted via SQS.`, { error: e }); } else { - logger.warn(`Ignoring error: ${e}`); - return Promise.resolve(); + logger.error(`Error processing batch (size: ${sqsMessages.length}): ${(e as Error).message}, ignoring batch`, { + error: e, + }); } + + return { batchItemFailures }; } } diff --git a/lambdas/functions/control-plane/src/local.ts b/lambdas/functions/control-plane/src/local.ts index 2166da58fd..0b06335c8a 100644 --- a/lambdas/functions/control-plane/src/local.ts +++ b/lambdas/functions/control-plane/src/local.ts @@ -1,21 +1,21 @@ import { logger } from '@aws-github-runner/aws-powertools-util'; -import { ActionRequestMessage, scaleUp } from './scale-runners/scale-up'; +import { scaleUpHandler } from './lambda'; +import { Context, SQSEvent } from 'aws-lambda'; -const sqsEvent = { +const sqsEvent: SQSEvent = { Records: [ { messageId: 'e8d74d08-644e-42ca-bf82-a67daa6c4dad', receiptHandle: - // eslint-disable-next-line max-len 'AQEBCpLYzDEKq4aKSJyFQCkJduSKZef8SJVOperbYyNhXqqnpFG5k74WygVAJ4O0+9nybRyeOFThvITOaS21/jeHiI5fgaM9YKuI0oGYeWCIzPQsluW5CMDmtvqv1aA8sXQ5n2x0L9MJkzgdIHTC3YWBFLQ2AxSveOyIHwW+cHLIFCAcZlOaaf0YtaLfGHGkAC4IfycmaijV8NSlzYgDuxrC9sIsWJ0bSvk5iT4ru/R4+0cjm7qZtGlc04k9xk5Fu6A+wRxMaIyiFRY+Ya19ykcevQldidmEjEWvN6CRToLgclk=', - body: { + body: JSON.stringify({ repositoryName: 'self-hosted', repositoryOwner: 'test-runners', eventType: 'workflow_job', id: 987654, installationId: 123456789, 
-      },
+      }),
       attributes: {
         ApproximateReceiveCount: '1',
         SentTimestamp: '1626450047230',
@@ -34,12 +34,32 @@ const sqsEvent = {
   ],
 };
 
+const context: Context = {
+  awsRequestId: '1',
+  callbackWaitsForEmptyEventLoop: false,
+  functionName: '',
+  functionVersion: '',
+  getRemainingTimeInMillis: () => 0,
+  invokedFunctionArn: '',
+  logGroupName: '',
+  logStreamName: '',
+  memoryLimitInMB: '',
+  done: () => {
+    return;
+  },
+  fail: () => {
+    return;
+  },
+  succeed: () => {
+    return;
+  },
+};
+
 export function run(): void {
-  scaleUp(sqsEvent.Records[0].eventSource, sqsEvent.Records[0].body as ActionRequestMessage)
-    .then()
-    .catch((e) => {
-      logger.error(e);
-    });
+  scaleUpHandler(sqsEvent, context).catch((e: unknown) => {
+    const message = e instanceof Error ? e.message : `${e}`;
+    logger.error(message, e instanceof Error ? { error: e } : {});
+  });
 }
 
 run();
diff --git a/lambdas/functions/control-plane/src/pool/pool.test.ts b/lambdas/functions/control-plane/src/pool/pool.test.ts
index 6dd389873b..c05a8b8cb7 100644
--- a/lambdas/functions/control-plane/src/pool/pool.test.ts
+++ b/lambdas/functions/control-plane/src/pool/pool.test.ts
@@ -190,11 +190,7 @@ describe('Test simple pool.', () => {
     it('Top up pool with pool size 2 registered.', async () => {
       await adjust({ poolSize: 3 });
       expect(createRunners).toHaveBeenCalledTimes(1);
-      expect(createRunners).toHaveBeenCalledWith(
-        expect.anything(),
-        expect.objectContaining({ numberOfRunners: 1 }),
-        expect.anything(),
-      );
+      expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 1, expect.anything());
     });
 
     it('Should not top up if pool size is reached.', async () => {
@@ -270,11 +266,7 @@ describe('Test simple pool.', () => {
     it('Top up if the pool size is set to 5', async () => {
       await adjust({ poolSize: 5 });
       // 2 idle, top up with 3 to match a pool of 5
-      expect(createRunners).toHaveBeenCalledWith(
-        expect.anything(),
-        expect.objectContaining({ numberOfRunners: 3 }),
-        expect.anything(),
-      );
+      expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 3, expect.anything());
     });
   });
 
@@ -289,11 +281,7 @@ describe('Test simple pool.', () => {
     it('Top up if the pool size is set to 5', async () => {
       await adjust({ poolSize: 5 });
       // 2 idle, top up with 3 to match a pool of 5
-      expect(createRunners).toHaveBeenCalledWith(
-        expect.anything(),
-        expect.objectContaining({ numberOfRunners: 3 }),
-        expect.anything(),
-      );
+      expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 3, expect.anything());
     });
   });
 
@@ -343,11 +331,7 @@ describe('Test simple pool.', () => {
       await adjust({ poolSize: 5 });
 
       // 2 idle, 2 prefixed idle top up with 1 to match a pool of 5
-      expect(createRunners).toHaveBeenCalledWith(
-        expect.anything(),
-        expect.objectContaining({ numberOfRunners: 1 }),
-        expect.anything(),
-      );
+      expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 1, expect.anything());
     });
   });
 });
diff --git a/lambdas/functions/control-plane/src/pool/pool.ts b/lambdas/functions/control-plane/src/pool/pool.ts
index 07477572ce..aa690e97f6 100644
--- a/lambdas/functions/control-plane/src/pool/pool.ts
+++ b/lambdas/functions/control-plane/src/pool/pool.ts
@@ -92,11 +92,11 @@ export async function adjust(event: PoolEvent): Promise<void> {
         environment,
         launchTemplateName,
         subnets,
-        numberOfRunners: topUp,
         amiIdSsmParameterName,
         tracingEnabled,
         onDemandFailoverOnError,
       },
+      topUp,
       githubInstallationClient,
     );
   } else {
diff --git a/lambdas/functions/control-plane/src/scale-runners/ScaleError.test.ts b/lambdas/functions/control-plane/src/scale-runners/ScaleError.test.ts
new file mode 100644
index 0000000000..0a7478c12f
--- /dev/null
+++ b/lambdas/functions/control-plane/src/scale-runners/ScaleError.test.ts
@@ -0,0 +1,76 @@
+import { describe, expect, it } from 'vitest';
+import type { ActionRequestMessageSQS } from './scale-up';
+import ScaleError from './ScaleError';
+
+describe('ScaleError', () => {
+  describe('detailedMessage', () => {
+    it('should format message for single instance failure', () => {
+      const error = new ScaleError(1);
+
+      expect(error.detailedMessage).toBe(
+        'Failed to create instance, create fleet failed. (Failed to create 1 instance)',
+      );
+    });
+
+    it('should format message for multiple instance failures', () => {
+      const error = new ScaleError(3);
+
+      expect(error.detailedMessage).toBe(
+        'Failed to create instance, create fleet failed. (Failed to create 3 instances)',
+      );
+    });
+  });
+
+  describe('toBatchItemFailures', () => {
+    const mockMessages: ActionRequestMessageSQS[] = [
+      { messageId: 'msg-1', id: 1, eventType: 'workflow_job' },
+      { messageId: 'msg-2', id: 2, eventType: 'workflow_job' },
+      { messageId: 'msg-3', id: 3, eventType: 'workflow_job' },
+      { messageId: 'msg-4', id: 4, eventType: 'workflow_job' },
+    ];
+
+    it.each([
+      { failedCount: 1, expected: [{ itemIdentifier: 'msg-1' }], description: 'default instance count' },
+      {
+        failedCount: 2,
+        expected: [{ itemIdentifier: 'msg-1' }, { itemIdentifier: 'msg-2' }],
+        description: 'less than message count',
+      },
+      {
+        failedCount: 4,
+        expected: [
+          { itemIdentifier: 'msg-1' },
+          { itemIdentifier: 'msg-2' },
+          { itemIdentifier: 'msg-3' },
+          { itemIdentifier: 'msg-4' },
+        ],
+        description: 'equal to message count',
+      },
+      {
+        failedCount: 10,
+        expected: [
+          { itemIdentifier: 'msg-1' },
+          { itemIdentifier: 'msg-2' },
+          { itemIdentifier: 'msg-3' },
+          { itemIdentifier: 'msg-4' },
+        ],
+        description: 'more than message count',
+      },
+      { failedCount: 0, expected: [], description: 'zero failed instances' },
+      { failedCount: -1, expected: [], description: 'negative failed instances' },
+      { failedCount: -10, expected: [], description: 'large negative failed instances' },
+    ])('should handle $description (failedCount=$failedCount)', ({ failedCount, expected }) => {
+      const error = new ScaleError(failedCount);
+      const failures = error.toBatchItemFailures(mockMessages);
+
+      expect(failures).toEqual(expected);
+    });
+
+    it('should handle empty message array', () => {
+      const error = new ScaleError(3);
+      const failures = error.toBatchItemFailures([]);
+
+      expect(failures).toEqual([]);
+    });
+  });
+});
diff --git a/lambdas/functions/control-plane/src/scale-runners/ScaleError.ts b/lambdas/functions/control-plane/src/scale-runners/ScaleError.ts
index d7e71f8c33..9c1f474d17 100644
--- a/lambdas/functions/control-plane/src/scale-runners/ScaleError.ts
+++ b/lambdas/functions/control-plane/src/scale-runners/ScaleError.ts
@@ -1,8 +1,28 @@
+import type { SQSBatchItemFailure } from 'aws-lambda';
+import type { ActionRequestMessageSQS } from './scale-up';
+
 class ScaleError extends Error {
-  constructor(public message: string) {
-    super(message);
+  constructor(public readonly failedInstanceCount: number = 1) {
+    super('Failed to create instance, create fleet failed.');
     this.name = 'ScaleError';
-    this.stack = new Error().stack;
+  }
+
+  /**
+   * Gets a formatted error message including the failed instance count
+   */
+  public get detailedMessage(): string {
+    return `${this.message} (Failed to create ${this.failedInstanceCount} instance${this.failedInstanceCount !== 1 ? 's' : ''})`;
+  }
+
+  /**
+   * Generate SQS batch item failures for the failed instances
+   */
+  public toBatchItemFailures(messages: ActionRequestMessageSQS[]): SQSBatchItemFailure[] {
+    // Ensure we don't retry negative counts or more messages than available
+    const messagesToRetry = Math.max(0, Math.min(this.failedInstanceCount, messages.length));
+    return messages.slice(0, messagesToRetry).map(({ messageId }) => ({
+      itemIdentifier: messageId,
+    }));
+  }
 }
diff --git a/lambdas/functions/control-plane/src/scale-runners/job-retry.test.ts b/lambdas/functions/control-plane/src/scale-runners/job-retry.test.ts
index c401ab4c2d..f807d06d8a 100644
--- a/lambdas/functions/control-plane/src/scale-runners/job-retry.test.ts
+++ b/lambdas/functions/control-plane/src/scale-runners/job-retry.test.ts
@@ -2,9 +2,11 @@ import { publishMessage } from '../aws/sqs';
 import { publishRetryMessage, checkAndRetryJob } from './job-retry';
 import { ActionRequestMessage, ActionRequestMessageRetry } from './scale-up';
 import { getOctokit } from '../github/octokit';
+import { jobRetryCheck } from '../lambda';
 import { Octokit } from '@octokit/rest';
 import { createSingleMetric } from '@aws-github-runner/aws-powertools-util';
 import { describe, it, expect, beforeEach, vi } from 'vitest';
+import type { SQSRecord } from 'aws-lambda';
 
 vi.mock('../aws/sqs', async () => ({
   publishMessage: vi.fn(),
@@ -269,3 +271,93 @@ describe(`Test job retry check`, () => {
     expect(publishMessage).not.toHaveBeenCalled();
   });
 });
+
+describe('Test job retry handler (batch processing)', () => {
+  const context = {
+    requestId: 'request-id',
+    functionName: 'function-name',
+    functionVersion: 'function-version',
+    invokedFunctionArn: 'invoked-function-arn',
+    memoryLimitInMB: '128',
+    awsRequestId: 'aws-request-id',
+    logGroupName: 'log-group-name',
+    logStreamName: 'log-stream-name',
+    remainingTimeInMillis: () => 30000,
+    done: () => {},
+    fail: () => {},
+    succeed: () => {},
+    getRemainingTimeInMillis: () => 30000,
+    callbackWaitsForEmptyEventLoop: false,
+  };
+
+  function createSQSRecord(messageId: string): SQSRecord {
+    return {
+      messageId,
+      receiptHandle: 'receipt-handle',
+      body: JSON.stringify({
+        eventType: 'workflow_job',
+        id: 123,
+        installationId: 456,
+        repositoryName: 'test-repo',
+        repositoryOwner: 'test-owner',
+        repoOwnerType: 'Organization',
+        retryCounter: 0,
+      }),
+      attributes: {
+        ApproximateReceiveCount: '1',
+        SentTimestamp: '1234567890',
+        SenderId: 'sender-id',
+        ApproximateFirstReceiveTimestamp: '1234567891',
+      },
+      messageAttributes: {},
+      md5OfBody: 'md5',
+      eventSource: 'aws:sqs',
+      eventSourceARN: 'arn:aws:sqs:region:account:queue',
+      awsRegion: 'us-east-1',
+    };
+  }
+
+  beforeEach(() => {
+    vi.clearAllMocks();
+    process.env.ENABLE_ORGANIZATION_RUNNERS = 'true';
+    process.env.JOB_QUEUE_SCALE_UP_URL = 'https://sqs.example.com/queue';
+  });
+
+  it('should handle multiple records in a single batch', async () => {
+    mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({
+      data: {
+        status: 'queued',
+      },
+      headers: {},
+    }));
+
+    const event = {
+      Records: [createSQSRecord('msg-1'), createSQSRecord('msg-2'), createSQSRecord('msg-3')],
+    };
+
+    await expect(jobRetryCheck(event, context)).resolves.not.toThrow();
+    expect(publishMessage).toHaveBeenCalledTimes(3);
+  });
+
+  it('should continue processing other records when one fails', async () => {
+    mockCreateOctokitClient
+      .mockResolvedValueOnce(new Octokit()) // First record succeeds
+      .mockRejectedValueOnce(new Error('API error')) // Second record fails
+      .mockResolvedValueOnce(new Octokit()); // Third record succeeds
+
+    mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({
+      data: {
+        status: 'queued',
+      },
+      headers: {},
+    }));
+
+    const event = {
+      Records: [createSQSRecord('msg-1'), createSQSRecord('msg-2'), createSQSRecord('msg-3')],
+    };
+
+    await expect(jobRetryCheck(event, context)).resolves.not.toThrow();
+    // There were two successful calls to publishMessage
+    expect(publishMessage).toHaveBeenCalledTimes(2);
+  });
+});
diff --git a/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts b/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts
index 477ef147fb..b876d31d50 100644
--- a/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts
+++ b/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts
@@ -1,5 +1,4 @@
 import { PutParameterCommand, SSMClient } from '@aws-sdk/client-ssm';
-import { Octokit } from '@octokit/rest';
 import { mockClient } from 'aws-sdk-client-mock';
 import 'aws-sdk-client-mock-jest/vitest'; // Using vi.mocked instead of jest-mock
@@ -9,10 +8,10 @@ import { performance } from 'perf_hooks';
 import * as ghAuth from '../github/auth';
 import { createRunner, listEC2Runners } from './../aws/runners';
 import { RunnerInputParameters } from './../aws/runners.d';
-import ScaleError from './ScaleError';
 import * as scaleUpModule from './scale-up';
 import { getParameter } from '@aws-github-runner/aws-ssm-util';
 import { describe, it, expect, beforeEach, vi } from 'vitest';
+import type { Octokit } from '@octokit/rest';
 
 const mockOctokit = {
   paginate: vi.fn(),
@@ -29,6 +28,7 @@ const mockOctokit = {
     getRepoInstallation: vi.fn(),
   },
 };
 
+const mockCreateRunner = vi.mocked(createRunner);
 const mockListRunners = vi.mocked(listEC2Runners);
 const mockSSMClient = mockClient(SSMClient);
@@ -68,26 +68,33 @@ export type RunnerType = 'ephemeral' | 'non-ephemeral';
 // for ephemeral and non-ephemeral runners
 const RUNNER_TYPES: RunnerType[] = ['ephemeral', 'non-ephemeral'];
 
-const mocktokit = Octokit as vi.MockedClass<typeof Octokit>;
 const mockedAppAuth = vi.mocked(ghAuth.createGithubAppAuth);
 const mockedInstallationAuth = vi.mocked(ghAuth.createGithubInstallationAuth);
 const mockCreateClient = vi.mocked(ghAuth.createOctokitClient);
 
-const TEST_DATA: scaleUpModule.ActionRequestMessage = {
+const TEST_DATA_SINGLE: scaleUpModule.ActionRequestMessageSQS = {
   id: 1,
   eventType: 'workflow_job',
   repositoryName: 'hello-world',
   repositoryOwner: 'Codertocat',
   installationId: 2,
   repoOwnerType: 'Organization',
+  messageId: 'foobar',
 };
 
+const TEST_DATA: scaleUpModule.ActionRequestMessageSQS[] = [
+  {
+    ...TEST_DATA_SINGLE,
+    messageId: 'foobar',
+  },
+];
+
 const cleanEnv = process.env;
 
 const EXPECTED_RUNNER_PARAMS: RunnerInputParameters = {
   environment: 'unit-test-environment',
   runnerType: 'Org',
-  runnerOwner: TEST_DATA.repositoryOwner,
+  runnerOwner: TEST_DATA_SINGLE.repositoryOwner,
   numberOfRunners: 1,
   launchTemplateName: 'lt-1',
   ec2instanceCriteria: {
@@ -134,14 +141,14 @@ beforeEach(() => {
       instanceId: 'i-1234',
       launchTime: new Date(),
       type: 'Org',
-      owner: TEST_DATA.repositoryOwner,
+      owner: TEST_DATA_SINGLE.repositoryOwner,
     },
   ]);
 
   mockedAppAuth.mockResolvedValue({
     type: 'app',
     token: 'token',
-    appId: TEST_DATA.installationId,
+    appId: TEST_DATA_SINGLE.installationId,
    expiresAt: 'some-date',
  });
  mockedInstallationAuth.mockResolvedValue({
@@ -155,7 +162,7 @@ beforeEach(() => {
     installationId: 0,
   });
 
-  mockCreateClient.mockResolvedValue(new mocktokit());
+  mockCreateClient.mockResolvedValue(mockOctokit as unknown as Octokit);
 });
 
 describe('scaleUp with GHES', () => {
@@ -163,17 +170,12 @@ describe('scaleUp with GHES', () => {
     process.env.GHES_URL = 'https://github.enterprise.something';
   });
 
-  it('ignores non-sqs events', async () => {
-    expect.assertions(1);
-    await expect(scaleUpModule.scaleUp('aws:s3', TEST_DATA)).rejects.toEqual(Error('Cannot handle non-SQS events!'));
-  });
-
   it('checks queued workflows', async () => {
-    await scaleUpModule.scaleUp('aws:sqs', TEST_DATA);
+    await scaleUpModule.scaleUp(TEST_DATA);
     expect(mockOctokit.actions.getJobForWorkflowRun).toBeCalledWith({
-      job_id: TEST_DATA.id,
-      owner: TEST_DATA.repositoryOwner,
-      repo: TEST_DATA.repositoryName,
+      job_id: TEST_DATA_SINGLE.id,
+      owner: TEST_DATA_SINGLE.repositoryOwner,
+      repo: TEST_DATA_SINGLE.repositoryName,
     });
   });
 
@@ -181,7 +183,7 @@ describe('scaleUp with GHES', () => {
     mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({
       data: { total_count: 0 },
     }));
-    await scaleUpModule.scaleUp('aws:sqs', TEST_DATA);
+    await scaleUpModule.scaleUp(TEST_DATA);
     expect(listEC2Runners).not.toBeCalled();
   });
 
@@ -200,18 +202,18 @@ describe('scaleUp with GHES', () => {
     });
 
     it('gets the current org level runners', async () => {
-      await scaleUpModule.scaleUp('aws:sqs', TEST_DATA);
+      await scaleUpModule.scaleUp(TEST_DATA);
       expect(listEC2Runners).toBeCalledWith({
         environment: 'unit-test-environment',
         runnerType: 'Org',
-        runnerOwner: TEST_DATA.repositoryOwner,
+        runnerOwner: TEST_DATA_SINGLE.repositoryOwner,
       });
     });
 
     it('does not create a token when maximum runners has been reached', async () => {
       process.env.RUNNERS_MAXIMUM_COUNT = '1';
       process.env.ENABLE_EPHEMERAL_RUNNERS = 'false';
-      await scaleUpModule.scaleUp('aws:sqs', TEST_DATA);
+      await scaleUpModule.scaleUp(TEST_DATA);
       expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled();
       expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled();
     });
 
@@ -219,35 +221,35 @@ describe('scaleUp with GHES', () => {
     it('does create a runner if maximum is set to -1', async () => {
       process.env.RUNNERS_MAXIMUM_COUNT = '-1';
       process.env.ENABLE_EPHEMERAL_RUNNERS = 'false';
-      await scaleUpModule.scaleUp('aws:sqs', TEST_DATA);
+      await scaleUpModule.scaleUp(TEST_DATA);
       expect(listEC2Runners).not.toHaveBeenCalled();
       expect(createRunner).toHaveBeenCalled();
     });
 
     it('creates a token when maximum runners has not been reached', async () => {
       process.env.ENABLE_EPHEMERAL_RUNNERS = 'false';
-      await scaleUpModule.scaleUp('aws:sqs', TEST_DATA);
+      await scaleUpModule.scaleUp(TEST_DATA);
       expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalledWith({
-        org: TEST_DATA.repositoryOwner,
+        org: TEST_DATA_SINGLE.repositoryOwner,
       });
       expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled();
     });
 
     it('creates a runner with correct config', async () => {
-      await scaleUpModule.scaleUp('aws:sqs', TEST_DATA);
+      await scaleUpModule.scaleUp(TEST_DATA);
       expect(createRunner).toBeCalledWith(expectedRunnerParams);
     });
 
     it('creates a runner with labels in a specific group', async () => {
       process.env.RUNNER_LABELS = 'label1,label2';
       process.env.RUNNER_GROUP_NAME = 'TEST_GROUP';
-      await scaleUpModule.scaleUp('aws:sqs', TEST_DATA);
+      await scaleUpModule.scaleUp(TEST_DATA);
       expect(createRunner).toBeCalledWith(expectedRunnerParams);
     });
 
     it('creates a runner with ami id override from ssm parameter', async () => {
       process.env.AMI_ID_SSM_PARAMETER_NAME = 'my-ami-id-param';
-      await scaleUpModule.scaleUp('aws:sqs', TEST_DATA);
+      await scaleUpModule.scaleUp(TEST_DATA);
       expect(createRunner).toBeCalledWith({ ...expectedRunnerParams, amiIdSsmParameterName: 'my-ami-id-param' });
     });
 
@@ -256,15 +258,15 @@ describe('scaleUp with GHES', () => {
       mockSSMgetParameter.mockImplementation(async () => {
         throw new Error('ParameterNotFound');
       });
-      await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toBeInstanceOf(Error);
+      await expect(scaleUpModule.scaleUp(TEST_DATA)).rejects.toBeInstanceOf(Error);
       expect(mockOctokit.paginate).toHaveBeenCalledTimes(1);
     });
 
     it('Discards event if it is a User repo and org level runners is enabled', async () => {
       process.env.ENABLE_ORGANIZATION_RUNNERS = 'true';
-      const USER_REPO_TEST_DATA = { ...TEST_DATA };
-      USER_REPO_TEST_DATA.repoOwnerType = 'User';
-      await scaleUpModule.scaleUp('aws:sqs', USER_REPO_TEST_DATA);
+      const USER_REPO_TEST_DATA = structuredClone(TEST_DATA);
+      USER_REPO_TEST_DATA[0].repoOwnerType = 'User';
+      await scaleUpModule.scaleUp(USER_REPO_TEST_DATA);
       expect(createRunner).not.toHaveBeenCalled();
     });
 
@@ -272,7 +274,7 @@ describe('scaleUp with GHES', () => {
       mockSSMgetParameter.mockImplementation(async () => {
         throw new Error('ParameterNotFound');
       });
-      await scaleUpModule.scaleUp('aws:sqs', TEST_DATA);
+      await scaleUpModule.scaleUp(TEST_DATA);
expect(mockOctokit.paginate).toHaveBeenCalledTimes(1); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 2); expect(mockSSMClient).toHaveReceivedNthSpecificCommandWith(1, PutParameterCommand, { @@ -283,7 +285,7 @@ describe('scaleUp with GHES', () => { }); it('Does not create SSM parameter for runner group id if it exists', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.paginate).toHaveBeenCalledTimes(0); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 1); }); @@ -291,9 +293,9 @@ describe('scaleUp with GHES', () => { it('create start runner config for ephemeral runners ', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.generateRunnerJitconfigForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, name: 'unit-test-i-12345', runner_group_id: 1, labels: ['label1', 'label2'], @@ -314,7 +316,7 @@ describe('scaleUp with GHES', () => { it('create start runner config for non-ephemeral runners ', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; process.env.RUNNERS_MAXIMUM_COUNT = '2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.generateRunnerJitconfigForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalled(); expect(mockSSMClient).toHaveReceivedNthSpecificCommandWith(1, PutParameterCommand, { @@ -385,7 +387,7 @@ describe('scaleUp with GHES', () => { 'i-150', 'i-151', ]; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); const endTime = performance.now(); expect(endTime - startTime).toBeGreaterThan(1000); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 40); @@ -399,87 +401,307 @@ 
describe('scaleUp with GHES', () => { process.env.RUNNER_NAME_PREFIX = 'unit-test'; expectedRunnerParams = { ...EXPECTED_RUNNER_PARAMS }; expectedRunnerParams.runnerType = 'Repo'; - expectedRunnerParams.runnerOwner = `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`; - // `--url https://github.enterprise.something/${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + expectedRunnerParams.runnerOwner = `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`; + // `--url https://github.enterprise.something/${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, // `--token 1234abcd`, // ]; }); it('gets the current repo level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Repo', - runnerOwner: `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + runnerOwner: `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a token when maximum runners has not been reached', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('uses the 
default runner max count', async () => { process.env.RUNNERS_MAXIMUM_COUNT = undefined; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('creates a runner with correct config and labels', async () => { process.env.RUNNER_LABELS = 'label1,label2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner and ensure the group argument is ignored', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP_IGNORED'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('Check error is thrown', async () => { const mockCreateRunners = vi.mocked(createRunner); mockCreateRunners.mockRejectedValue(new Error('no retry')); - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toThrow('no retry'); + await expect(scaleUpModule.scaleUp(TEST_DATA)).rejects.toThrow('no retry'); mockCreateRunners.mockReset(); }); }); -}); -describe('scaleUp with public GH', () => { - it('ignores non-sqs events', async () => { - expect.assertions(1); - await expect(scaleUpModule.scaleUp('aws:s3', TEST_DATA)).rejects.toEqual(Error('Cannot handle non-SQS events!')); + describe('Batch processing', () => { + beforeEach(() => { + process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + process.env.RUNNERS_MAXIMUM_COUNT = '10'; + }); + + const createTestMessages = ( + count: number, + overrides: Partial[] = [], + ): scaleUpModule.ActionRequestMessageSQS[] => { + return Array.from({ length: count }, 
(_, i) => ({ + ...TEST_DATA_SINGLE, + id: i + 1, + messageId: `message-${i}`, + ...overrides[i], + })); + }; + + it('Should handle multiple messages for the same organization', async () => { + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(1); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 3, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, + }), + ); + }); + + it('Should handle multiple messages for different organizations', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'org1' }, + { repositoryOwner: 'org2' }, + { repositoryOwner: 'org1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'org1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'org2', + }), + ); + }); + + it('Should handle multiple messages for different repositories when org-level is disabled', async () => { + process.env.ENABLE_ORGANIZATION_RUNNERS = 'false'; + const messages = createTestMessages(3, [ + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + { repositoryOwner: 'owner1', repositoryName: 'repo2' }, + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'owner1/repo1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'owner1/repo2', + }), + ); + }); + + it('Should reject messages when maximum runners limit is reached', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '1'; // Set to 1 so with 1 existing, no new 
ones can be created + mockListRunners.mockImplementation(async () => [ + { + instanceId: 'i-existing', + launchTime: new Date(), + type: 'Org', + owner: TEST_DATA_SINGLE.repositoryOwner, + }, + ]); + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); // No runners should be created + expect(rejectedMessages).toHaveLength(3); // All 3 messages should be rejected + }); + + it('Should handle partial EC2 instance creation failures', async () => { + mockCreateRunner.mockImplementation(async () => ['i-12345']); // Only creates 1 instead of requested 3 + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(rejectedMessages).toHaveLength(2); // 3 requested - 1 created = 2 failed + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should filter out invalid event types for ephemeral runners', async () => { + const messages = createTestMessages(3, [ + { eventType: 'workflow_job' }, + { eventType: 'check_run' }, + { eventType: 'workflow_job' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only workflow_job events processed + }), + ); + expect(rejectedMessages).toContain('message-1'); // check_run event rejected + }); + + it('Should skip invalid repo owner types but not reject them', async () => { + const messages = createTestMessages(3, [ + { repoOwnerType: 'Organization' }, + { repoOwnerType: 'User' }, // Invalid for org-level runners + { repoOwnerType: 'Organization' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only Organization events processed + }), + ); + expect(rejectedMessages).not.toContain('message-1'); // User repo 
not rejected, just skipped + }); + + it('Should skip messages when jobs are not queued', async () => { + mockOctokit.actions.getJobForWorkflowRun.mockImplementation((params) => { + const isQueued = params.job_id === 1 || params.job_id === 3; // Only jobs 1 and 3 are queued + return { + data: { + status: isQueued ? 'queued' : 'completed', + }, + }; + }); + + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only queued jobs processed + }), + ); + }); + + it('Should create separate GitHub clients for different installations', async () => { + // Override the default mock to return different installation IDs + mockOctokit.apps.getOrgInstallation.mockReset(); + mockOctokit.apps.getOrgInstallation.mockImplementation((params) => ({ + data: { + id: params.org === 'org1' ? 100 : 200, + }, + })); + + const messages = createTestMessages(2, [ + { repositoryOwner: 'org1', installationId: 0 }, + { repositoryOwner: 'org2', installationId: 0 }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(3); // 1 app client, 2 repo installation clients + expect(mockedInstallationAuth).toHaveBeenCalledWith(100, 'https://github.enterprise.something/api/v3'); + expect(mockedInstallationAuth).toHaveBeenCalledWith(200, 'https://github.enterprise.something/api/v3'); + }); + + it('Should reuse GitHub clients for same installation', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(2); // 1 app client, 1 installation client + expect(mockedInstallationAuth).toHaveBeenCalledTimes(1); + }); + + it('Should return empty array when no valid messages to process', async () => { + process.env.ENABLE_EPHEMERAL_RUNNERS = 
'true'; + const messages = createTestMessages(2, [ + { eventType: 'check_run' }, // Invalid for ephemeral + { eventType: 'check_run' }, // Invalid for ephemeral + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should handle unlimited runners configuration', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '-1'; + const messages = createTestMessages(10); + + await scaleUpModule.scaleUp(messages); + + expect(listEC2Runners).not.toHaveBeenCalled(); // No need to check current runners + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 10, // All messages processed + }), + ); + }); }); +}); +describe('scaleUp with public GH', () => { it('checks queued workflows', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).toBeCalledWith({ - job_id: TEST_DATA.id, - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + job_id: TEST_DATA_SINGLE.id, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('not checking queued workflows', async () => { process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).not.toBeCalled(); }); @@ -487,7 +709,7 @@ describe('scaleUp with public GH', () => { mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({ data: { status: 'completed' }, })); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).not.toBeCalled(); }); @@ -499,38 +721,38 @@ describe('scaleUp with public GH', () => { }); it('gets the current org level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', 
TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Org', - runnerOwner: TEST_DATA.repositoryOwner, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a token when maximum runners has not been reached', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, }); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a runner with correct config', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with labels in a specific group', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); }); @@ -543,44 +765,44 @@ describe('scaleUp with public GH', () => { process.env.RUNNER_NAME_PREFIX = 'unit-test'; expectedRunnerParams = { ...EXPECTED_RUNNER_PARAMS }; expectedRunnerParams.runnerType = 'Repo'; - expectedRunnerParams.runnerOwner = `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`; + expectedRunnerParams.runnerOwner = `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`; }); it('gets
the current repo level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Repo', - runnerOwner: `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + runnerOwner: `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a token when maximum runners has not been reached', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('creates a runner with correct config and labels', async () => { process.env.RUNNER_LABELS = 'label1,label2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with correct config and labels and on demand failover enabled.', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.ENABLE_ON_DEMAND_FAILOVER_FOR_ERRORS = JSON.stringify(['InsufficientInstanceCapacity']); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith({ ...expectedRunnerParams, onDemandFailoverOnError: 
['InsufficientInstanceCapacity'], @@ -590,26 +812,25 @@ describe('scaleUp with public GH', () => { it('creates a runner and ensure the group argument is ignored', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP_IGNORED'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('ephemeral runners only run with workflow_job event, others should fail.', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; - await expect( - scaleUpModule.scaleUp('aws:sqs', { - ...TEST_DATA, - eventType: 'check_run', - }), - ).rejects.toBeInstanceOf(Error); + + const CHECK_RUN_TEST_DATA = structuredClone(TEST_DATA); + CHECK_RUN_TEST_DATA[0].eventType = 'check_run'; + + await expect(scaleUpModule.scaleUp(CHECK_RUN_TEST_DATA)).resolves.toEqual(['foobar']); }); it('creates an ephemeral runner with JIT config.', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; process.env.SSM_TOKEN_PATH = '/github-action-runners/default/runners/config'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).not.toBeCalled(); expect(createRunner).toBeCalledWith(expectedRunnerParams); @@ -631,7 +852,7 @@ describe('scaleUp with public GH', () => { process.env.ENABLE_JIT_CONFIG = 'false'; process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; process.env.SSM_TOKEN_PATH = '/github-action-runners/default/runners/config'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).not.toBeCalled(); expect(createRunner).toBeCalledWith(expectedRunnerParams); @@ -654,7 +875,7 @@ describe('scaleUp with public GH', () => { process.env.ENABLE_JOB_QUEUED_CHECK = 'false';
process.env.RUNNER_LABELS = 'jit'; process.env.SSM_TOKEN_PATH = '/github-action-runners/default/runners/config'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).not.toBeCalled(); expect(createRunner).toBeCalledWith(expectedRunnerParams); @@ -674,21 +895,247 @@ describe('scaleUp with public GH', () => { it('creates an ephemeral runner after checking job is queued.', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; process.env.ENABLE_JOB_QUEUED_CHECK = 'true'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).toBeCalled(); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('disable auto update on the runner.', async () => { process.env.DISABLE_RUNNER_AUTOUPDATE = 'true'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); - it('Scaling error should cause reject so retry can be triggered.', async () => { + it('Scaling error should return failed message IDs so retry can be triggered.', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toBeInstanceOf(ScaleError); + await expect(scaleUpModule.scaleUp(TEST_DATA)).resolves.toEqual(['foobar']); }); + }); + + describe('Batch processing', () => { + const createTestMessages = ( + count: number, + overrides: Partial<scaleUpModule.ActionRequestMessageSQS>[] = [], + ): scaleUpModule.ActionRequestMessageSQS[] => { + return Array.from({ length: count }, (_, i) => ({ + ...TEST_DATA_SINGLE, + id: i + 1, + messageId: `message-${i}`, + ...overrides[i], + })); + }; + + beforeEach(() => { + setDefaults(); + process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + process.env.RUNNERS_MAXIMUM_COUNT
= '10'; + }); + + it('Should handle multiple messages for the same organization', async () => { + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(1); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 3, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, + }), + ); + }); + + it('Should handle multiple messages for different organizations', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'org1' }, + { repositoryOwner: 'org2' }, + { repositoryOwner: 'org1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'org1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'org2', + }), + ); + }); + + it('Should handle multiple messages for different repositories when org-level is disabled', async () => { + process.env.ENABLE_ORGANIZATION_RUNNERS = 'false'; + const messages = createTestMessages(3, [ + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + { repositoryOwner: 'owner1', repositoryName: 'repo2' }, + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'owner1/repo1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'owner1/repo2', + }), + ); + }); + + it('Should reject messages when maximum runners limit is reached', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '1'; // Set to 1 so with 1 existing, no new ones can be created + mockListRunners.mockImplementation(async () => [ + { + instanceId: 
'i-existing', + launchTime: new Date(), + type: 'Org', + owner: TEST_DATA_SINGLE.repositoryOwner, + }, + ]); + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); // No runners should be created + expect(rejectedMessages).toHaveLength(3); // All 3 messages should be rejected + }); + + it('Should handle partial EC2 instance creation failures', async () => { + mockCreateRunner.mockImplementation(async () => ['i-12345']); // Only creates 1 instead of requested 3 + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(rejectedMessages).toHaveLength(2); // 3 requested - 1 created = 2 failed + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should filter out invalid event types for ephemeral runners', async () => { + const messages = createTestMessages(3, [ + { eventType: 'workflow_job' }, + { eventType: 'check_run' }, + { eventType: 'workflow_job' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only workflow_job events processed + }), + ); + expect(rejectedMessages).toContain('message-1'); // check_run event rejected + }); + + it('Should skip invalid repo owner types but not reject them', async () => { + const messages = createTestMessages(3, [ + { repoOwnerType: 'Organization' }, + { repoOwnerType: 'User' }, // Invalid for org-level runners + { repoOwnerType: 'Organization' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only Organization events processed + }), + ); + expect(rejectedMessages).not.toContain('message-1'); // User repo not rejected, just skipped + }); + + it('Should skip messages when jobs are not queued', 
async () => { + mockOctokit.actions.getJobForWorkflowRun.mockImplementation((params) => { + const isQueued = params.job_id === 1 || params.job_id === 3; // Only jobs 1 and 3 are queued + return { + data: { + status: isQueued ? 'queued' : 'completed', + }, + }; + }); + + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only queued jobs processed + }), + ); + }); + + it('Should create separate GitHub clients for different installations', async () => { + // Override the default mock to return different installation IDs + mockOctokit.apps.getOrgInstallation.mockReset(); + mockOctokit.apps.getOrgInstallation.mockImplementation((params) => ({ + data: { + id: params.org === 'org1' ? 100 : 200, + }, + })); + + const messages = createTestMessages(2, [ + { repositoryOwner: 'org1', installationId: 0 }, + { repositoryOwner: 'org2', installationId: 0 }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(3); // 1 app client, 2 repo installation clients + expect(mockedInstallationAuth).toHaveBeenCalledWith(100, ''); + expect(mockedInstallationAuth).toHaveBeenCalledWith(200, ''); + }); + + it('Should reuse GitHub clients for same installation', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(2); // 1 app client, 1 installation client + expect(mockedInstallationAuth).toHaveBeenCalledTimes(1); + }); + + it('Should return empty array when no valid messages to process', async () => { + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + const messages = createTestMessages(2, [ + { eventType: 'check_run' }, // Invalid for ephemeral + { eventType: 'check_run' }, // Invalid for ephemeral + ]); + + const 
rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should handle unlimited runners configuration', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '-1'; + const messages = createTestMessages(10); + + await scaleUpModule.scaleUp(messages); + + expect(listEC2Runners).not.toHaveBeenCalled(); // No need to check current runners + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 10, // All messages processed + }), + ); }); }); }); @@ -698,17 +1145,12 @@ describe('scaleUp with Github Data Residency', () => { process.env.GHES_URL = 'https://companyname.ghe.com'; }); - it('ignores non-sqs events', async () => { - expect.assertions(1); - await expect(scaleUpModule.scaleUp('aws:s3', TEST_DATA)).rejects.toEqual(Error('Cannot handle non-SQS events!')); - }); - it('checks queued workflows', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).toBeCalledWith({ - job_id: TEST_DATA.id, - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + job_id: TEST_DATA_SINGLE.id, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); @@ -716,7 +1158,7 @@ describe('scaleUp with Github Data Residency', () => { mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({ data: { total_count: 0 }, })); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).not.toBeCalled(); }); @@ -735,18 +1177,18 @@ describe('scaleUp with Github Data Residency', () => { }); it('gets the current org level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 
'Org', - runnerOwner: TEST_DATA.repositoryOwner, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); @@ -754,35 +1196,35 @@ describe('scaleUp with Github Data Residency', () => { it('does create a runner if maximum is set to -1', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '-1'; process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).not.toHaveBeenCalled(); expect(createRunner).toHaveBeenCalled(); }); it('creates a token when maximum runners has not been reached', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, }); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a runner with correct config', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with labels in a specific group', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with ami id override from ssm 
parameter', async () => { process.env.AMI_ID_SSM_PARAMETER_NAME = 'my-ami-id-param'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith({ ...expectedRunnerParams, amiIdSsmParameterName: 'my-ami-id-param' }); }); @@ -791,15 +1233,15 @@ describe('scaleUp with Github Data Residency', () => { mockSSMgetParameter.mockImplementation(async () => { throw new Error('ParameterNotFound'); }); - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toBeInstanceOf(Error); + await expect(scaleUpModule.scaleUp(TEST_DATA)).rejects.toBeInstanceOf(Error); expect(mockOctokit.paginate).toHaveBeenCalledTimes(1); }); it('Discards event if it is a User repo and org level runners is enabled', async () => { process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; - const USER_REPO_TEST_DATA = { ...TEST_DATA }; - USER_REPO_TEST_DATA.repoOwnerType = 'User'; - await scaleUpModule.scaleUp('aws:sqs', USER_REPO_TEST_DATA); + const USER_REPO_TEST_DATA = structuredClone(TEST_DATA); + USER_REPO_TEST_DATA[0].repoOwnerType = 'User'; + await scaleUpModule.scaleUp(USER_REPO_TEST_DATA); expect(createRunner).not.toHaveBeenCalled(); }); @@ -807,7 +1249,7 @@ describe('scaleUp with Github Data Residency', () => { mockSSMgetParameter.mockImplementation(async () => { throw new Error('ParameterNotFound'); }); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.paginate).toHaveBeenCalledTimes(1); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 2); expect(mockSSMClient).toHaveReceivedNthSpecificCommandWith(1, PutParameterCommand, { @@ -818,7 +1260,7 @@ describe('scaleUp with Github Data Residency', () => { }); it('Does not create SSM parameter for runner group id if it exists', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.paginate).toHaveBeenCalledTimes(0); 
expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 1); }); @@ -826,9 +1268,9 @@ describe('scaleUp with Github Data Residency', () => { it('create start runner config for ephemeral runners ', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.generateRunnerJitconfigForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, name: 'unit-test-i-12345', runner_group_id: 1, labels: ['label1', 'label2'], @@ -849,7 +1291,7 @@ describe('scaleUp with Github Data Residency', () => { it('create start runner config for non-ephemeral runners ', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; process.env.RUNNERS_MAXIMUM_COUNT = '2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.generateRunnerJitconfigForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalled(); expect(mockSSMClient).toHaveReceivedNthSpecificCommandWith(1, PutParameterCommand, { @@ -920,7 +1362,7 @@ describe('scaleUp with Github Data Residency', () => { 'i-150', 'i-151', ]; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); const endTime = performance.now(); expect(endTime - startTime).toBeGreaterThan(1000); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 40); @@ -934,67 +1376,295 @@ describe('scaleUp with Github Data Residency', () => { process.env.RUNNER_NAME_PREFIX = 'unit-test'; expectedRunnerParams = { ...EXPECTED_RUNNER_PARAMS }; expectedRunnerParams.runnerType = 'Repo'; - expectedRunnerParams.runnerOwner = `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`; - // `--url https://companyname.ghe.com${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + expectedRunnerParams.runnerOwner = 
`${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`; + // `--url https://companyname.ghe.com${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, // `--token 1234abcd`, // ]; }); it('gets the current repo level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Repo', - runnerOwner: `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + runnerOwner: `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a token when maximum runners has not been reached', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('uses the default runner max count', async () => { process.env.RUNNERS_MAXIMUM_COUNT = undefined; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('creates a 
runner with correct config and labels', async () => { process.env.RUNNER_LABELS = 'label1,label2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner and ensure the group argument is ignored', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP_IGNORED'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('Check error is thrown', async () => { const mockCreateRunners = vi.mocked(createRunner); mockCreateRunners.mockRejectedValue(new Error('no retry')); - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toThrow('no retry'); + await expect(scaleUpModule.scaleUp(TEST_DATA)).rejects.toThrow('no retry'); mockCreateRunners.mockReset(); }); }); + + describe('Batch processing', () => { + const createTestMessages = ( + count: number, + overrides: Partial<scaleUpModule.ActionRequestMessageSQS>[] = [], + ): scaleUpModule.ActionRequestMessageSQS[] => { + return Array.from({ length: count }, (_, i) => ({ + ...TEST_DATA_SINGLE, + id: i + 1, + messageId: `message-${i}`, + ...overrides[i], + })); + }; + + beforeEach(() => { + setDefaults(); + process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + process.env.RUNNERS_MAXIMUM_COUNT = '10'; + }); + + it('Should handle multiple messages for the same organization', async () => { + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(1); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 3, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, + }), + ); + }); + + it('Should handle multiple messages for different organizations', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'org1' }, + {
repositoryOwner: 'org2' }, + { repositoryOwner: 'org1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'org1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'org2', + }), + ); + }); + + it('Should handle multiple messages for different repositories when org-level is disabled', async () => { + process.env.ENABLE_ORGANIZATION_RUNNERS = 'false'; + const messages = createTestMessages(3, [ + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + { repositoryOwner: 'owner1', repositoryName: 'repo2' }, + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'owner1/repo1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'owner1/repo2', + }), + ); + }); + + it('Should reject messages when maximum runners limit is reached', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '2'; + mockListRunners.mockImplementation(async () => [ + { + instanceId: 'i-existing', + launchTime: new Date(), + type: 'Org', + owner: TEST_DATA_SINGLE.repositoryOwner, + }, + ]); + + const messages = createTestMessages(5); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, // 2 max - 1 existing = 1 new + }), + ); + expect(rejectedMessages).toHaveLength(4); // 5 requested - 1 created = 4 rejected + }); + + it('Should handle partial EC2 instance creation failures', async () => { + mockCreateRunner.mockImplementation(async () => ['i-12345']); // Only creates 1 instead of 
requested 3 + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(rejectedMessages).toHaveLength(2); // 3 requested - 1 created = 2 failed + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should filter out invalid event types for ephemeral runners', async () => { + const messages = createTestMessages(3, [ + { eventType: 'workflow_job' }, + { eventType: 'check_run' }, + { eventType: 'workflow_job' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only workflow_job events processed + }), + ); + expect(rejectedMessages).toContain('message-1'); // check_run event rejected + }); + + it('Should skip invalid repo owner types but not reject them', async () => { + const messages = createTestMessages(3, [ + { repoOwnerType: 'Organization' }, + { repoOwnerType: 'User' }, // Invalid for org-level runners + { repoOwnerType: 'Organization' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only Organization events processed + }), + ); + expect(rejectedMessages).not.toContain('message-1'); // User repo not rejected, just skipped + }); + + it('Should skip messages when jobs are not queued', async () => { + mockOctokit.actions.getJobForWorkflowRun.mockImplementation((params) => { + const isQueued = params.job_id === 1 || params.job_id === 3; // Only jobs 1 and 3 are queued + return { + data: { + status: isQueued ? 
'queued' : 'completed', + }, + }; + }); + + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only queued jobs processed + }), + ); + }); + + it('Should create separate GitHub clients for different installations', async () => { + mockOctokit.apps.getOrgInstallation.mockImplementation((params) => ({ + data: { + id: params.org === 'org1' ? 100 : 200, + }, + })); + + const messages = createTestMessages(2, [ + { repositoryOwner: 'org1', installationId: 0 }, + { repositoryOwner: 'org2', installationId: 0 }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(3); // 1 app client, 2 repo installation clients + expect(mockedInstallationAuth).toHaveBeenCalledWith(100, ''); + expect(mockedInstallationAuth).toHaveBeenCalledWith(200, ''); + }); + + it('Should reuse GitHub clients for same installation', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(2); // 1 app client, 1 installation client + expect(mockedInstallationAuth).toHaveBeenCalledTimes(1); + }); + + it('Should return empty array when no valid messages to process', async () => { + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + const messages = createTestMessages(2, [ + { eventType: 'check_run' }, // Invalid for ephemeral + { eventType: 'check_run' }, // Invalid for ephemeral + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should handle unlimited runners configuration', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '-1'; + const messages = createTestMessages(10); + + await 
scaleUpModule.scaleUp(messages); + + expect(listEC2Runners).not.toHaveBeenCalled(); // No need to check current runners + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 10, // All messages processed + }), + ); + }); + }); }); function defaultOctokitMockImpl() { @@ -1034,12 +1704,12 @@ function defaultOctokitMockImpl() { }; const mockInstallationIdReturnValueOrgs = { data: { - id: TEST_DATA.installationId, + id: TEST_DATA_SINGLE.installationId, }, }; const mockInstallationIdReturnValueRepos = { data: { - id: TEST_DATA.installationId, + id: TEST_DATA_SINGLE.installationId, }, }; diff --git a/lambdas/functions/control-plane/src/scale-runners/scale-up.ts b/lambdas/functions/control-plane/src/scale-runners/scale-up.ts index 638edd3232..35df7ea5d7 100644 --- a/lambdas/functions/control-plane/src/scale-runners/scale-up.ts +++ b/lambdas/functions/control-plane/src/scale-runners/scale-up.ts @@ -6,8 +6,6 @@ import yn from 'yn'; import { createGithubAppAuth, createGithubInstallationAuth, createOctokitClient } from '../github/auth'; import { createRunner, listEC2Runners, tag } from './../aws/runners'; import { RunnerInputParameters } from './../aws/runners.d'; -import ScaleError from './ScaleError'; -import { publishRetryMessage } from './job-retry'; import { metricGitHubAppRateLimit } from '../github/rate-limit'; const logger = createChildLogger('scale-up'); @@ -33,6 +31,10 @@ export interface ActionRequestMessage { retryCounter?: number; } +export interface ActionRequestMessageSQS extends ActionRequestMessage { + messageId: string; +} + export interface ActionRequestMessageRetry extends ActionRequestMessage { retryCounter: number; } @@ -114,7 +116,7 @@ function removeTokenFromLogging(config: string[]): string[] { } export async function getInstallationId( - ghesApiUrl: string, + githubAppClient: Octokit, enableOrgLevel: boolean, payload: ActionRequestMessage, ): Promise<number> { @@ -122,16 +124,14 @@ export async function
getInstallationId( return payload.installationId; } - const ghAuth = await createGithubAppAuth(undefined, ghesApiUrl); - const githubClient = await createOctokitClient(ghAuth.token, ghesApiUrl); return enableOrgLevel ? ( - await githubClient.apps.getOrgInstallation({ + await githubAppClient.apps.getOrgInstallation({ org: payload.repositoryOwner, }) ).data.id : ( - await githubClient.apps.getRepoInstallation({ + await githubAppClient.apps.getRepoInstallation({ owner: payload.repositoryOwner, repo: payload.repositoryName, }) @@ -211,23 +211,27 @@ async function getRunnerGroupByName(ghClient: Octokit, githubRunnerConfig: Creat export async function createRunners( githubRunnerConfig: CreateGitHubRunnerConfig, ec2RunnerConfig: CreateEC2RunnerConfig, + numberOfRunners: number, ghClient: Octokit, -): Promise { +): Promise { const instances = await createRunner({ runnerType: githubRunnerConfig.runnerType, runnerOwner: githubRunnerConfig.runnerOwner, - numberOfRunners: 1, + numberOfRunners, ...ec2RunnerConfig, }); if (instances.length !== 0) { await createStartRunnerConfig(githubRunnerConfig, instances, ghClient); } + + return instances; } -export async function scaleUp(eventSource: string, payload: ActionRequestMessage): Promise { - logger.info(`Received ${payload.eventType} from ${payload.repositoryOwner}/${payload.repositoryName}`); +export async function scaleUp(payloads: ActionRequestMessageSQS[]): Promise { + logger.info('Received scale up requests', { + n_requests: payloads.length, + }); - if (eventSource !== 'aws:sqs') throw Error('Cannot handle non-SQS events!'); const enableOrgLevel = yn(process.env.ENABLE_ORGANIZATION_RUNNERS, { default: true }); const maximumRunners = parseInt(process.env.RUNNERS_MAXIMUM_COUNT || '3'); const runnerLabels = process.env.RUNNER_LABELS || ''; @@ -252,103 +256,202 @@ export async function scaleUp(eventSource: string, payload: ActionRequestMessage ? 
(JSON.parse(process.env.ENABLE_ON_DEMAND_FAILOVER_FOR_ERRORS) as [string]) : []; - if (ephemeralEnabled && payload.eventType !== 'workflow_job') { - logger.warn(`${payload.eventType} event is not supported in combination with ephemeral runners.`); - throw Error( - `The event type ${payload.eventType} is not supported in combination with ephemeral runners.` + - `Please ensure you have enabled workflow_job events.`, - ); - } + const { ghesApiUrl, ghesBaseUrl } = getGitHubEnterpriseApiUrl(); - if (!isValidRepoOwnerTypeIfOrgLevelEnabled(payload, enableOrgLevel)) { - logger.warn( - `Repository ${payload.repositoryOwner}/${payload.repositoryName} does not belong to a GitHub` + - `organization and organization runners are enabled. This is not supported. Not scaling up for this event.` + - `Not throwing error to prevent re-queueing and just ignoring the event.`, - ); - return; + const ghAuth = await createGithubAppAuth(undefined, ghesApiUrl); + const githubAppClient = await createOctokitClient(ghAuth.token, ghesApiUrl); + + // A map of either owner or owner/repo name to Octokit client, so we use a + // single client per installation (set of messages), depending on how the app + // is installed. This is for a couple of reasons: + // - Sharing clients opens up the possibility of caching API calls. + // - Fetching a client for an installation actually requires a couple of API + // calls itself, which would get expensive if done for every message in a + // batch. + type MessagesWithClient = { + messages: ActionRequestMessageSQS[]; + githubInstallationClient: Octokit; + }; + + const validMessages = new Map(); + const invalidMessages: string[] = []; + for (const payload of payloads) { + const { eventType, messageId, repositoryName, repositoryOwner } = payload; + if (ephemeralEnabled && eventType !== 'workflow_job') { + logger.warn( + 'Event is not supported in combination with ephemeral runners. 
Please ensure you have enabled workflow_job events.', + { eventType, messageId }, + ); + + invalidMessages.push(messageId); + + continue; + } + + if (!isValidRepoOwnerTypeIfOrgLevelEnabled(payload, enableOrgLevel)) { + logger.warn( + `Repository does not belong to a GitHub organization and organization runners are enabled. This is not supported. Not scaling up for this event. Not throwing error to prevent re-queueing and just ignoring the event.`, + { + repository: `${repositoryOwner}/${repositoryName}`, + messageId, + }, + ); + + continue; + } + + const key = enableOrgLevel ? payload.repositoryOwner : `${payload.repositoryOwner}/${payload.repositoryName}`; + + let entry = validMessages.get(key); + + // If we've not seen this owner/repo before, we'll need to create a GitHub + // client for it. + if (entry === undefined) { + const installationId = await getInstallationId(githubAppClient, enableOrgLevel, payload); + const ghAuth = await createGithubInstallationAuth(installationId, ghesApiUrl); + const githubInstallationClient = await createOctokitClient(ghAuth.token, ghesApiUrl); + + entry = { + messages: [], + githubInstallationClient, + }; + + validMessages.set(key, entry); + } + + entry.messages.push(payload); } - const ephemeral = ephemeralEnabled && payload.eventType === 'workflow_job'; const runnerType = enableOrgLevel ? 'Org' : 'Repo'; - const runnerOwner = enableOrgLevel ? 
payload.repositoryOwner : `${payload.repositoryOwner}/${payload.repositoryName}`; addPersistentContextToChildLogger({ runner: { + ephemeral: ephemeralEnabled, type: runnerType, - owner: runnerOwner, namePrefix: runnerNamePrefix, - }, - github: { - event: payload.eventType, - workflow_job_id: payload.id.toString(), + n_events: Array.from(validMessages.values()).reduce((acc, group) => acc + group.messages.length, 0), }, }); - logger.info(`Received event`); + logger.info(`Received events`); - const { ghesApiUrl, ghesBaseUrl } = getGitHubEnterpriseApiUrl(); + for (const [group, { githubInstallationClient, messages }] of validMessages.entries()) { + // Work out how much we want to scale up by. + let scaleUp = 0; - const installationId = await getInstallationId(ghesApiUrl, enableOrgLevel, payload); - const ghAuth = await createGithubInstallationAuth(installationId, ghesApiUrl); - const githubInstallationClient = await createOctokitClient(ghAuth.token, ghesApiUrl); + for (const message of messages) { + const messageLogger = logger.createChild({ + persistentKeys: { + eventType: message.eventType, + group, + messageId: message.messageId, + repository: `${message.repositoryOwner}/${message.repositoryName}`, + }, + }); - if (!enableJobQueuedCheck || (await isJobQueued(githubInstallationClient, payload))) { - let scaleUp = true; - if (maximumRunners !== -1) { - const currentRunners = await listEC2Runners({ - environment, - runnerType, - runnerOwner, + if (enableJobQueuedCheck && !(await isJobQueued(githubInstallationClient, message))) { + messageLogger.info('No runner will be created, job is not queued.'); + + continue; + } + + scaleUp++; + } + + if (scaleUp === 0) { + logger.info('No runners will be created for this group, no valid messages found.'); + + continue; + } + + // Don't call the EC2 API if we can create an unlimited number of runners. + const currentRunners = + maximumRunners === -1 ? 
0 : (await listEC2Runners({ environment, runnerType, runnerOwner: group })).length; + + logger.info('Current runners', { + currentRunners, + maximumRunners, + }); + + // Calculate how many runners we want to create. + const newRunners = + maximumRunners === -1 + ? // If we don't have an upper limit, scale up by the number of new jobs. + scaleUp + : // Otherwise, we do have a limit, so work out if `scaleUp` would exceed it. + Math.min(scaleUp, maximumRunners - currentRunners); + + const missingInstanceCount = Math.max(0, scaleUp - newRunners); + + if (missingInstanceCount > 0) { + logger.info('Not all runners will be created for this group, maximum number of runners reached.', { + desiredNewRunners: scaleUp, }); - logger.info(`Current runners: ${currentRunners.length} of ${maximumRunners}`); - scaleUp = currentRunners.length < maximumRunners; + + if (ephemeralEnabled) { + // This removes `missingInstanceCount` items from the start of the array + // so that, if we retry more messages later, we pick fresh ones. + invalidMessages.push(...messages.splice(0, missingInstanceCount).map(({ messageId }) => messageId)); + } + + // No runners will be created, so skip calling the EC2 API. 
+ if (missingInstanceCount === scaleUp) { + continue; + } } - if (scaleUp) { - logger.info(`Attempting to launch a new runner`); + logger.info(`Attempting to launch new runners`, { + newRunners, + }); - await createRunners( - { - ephemeral, - enableJitConfig, - ghesBaseUrl, - runnerLabels, - runnerGroup, - runnerNamePrefix, - runnerOwner, - runnerType, - disableAutoUpdate, - ssmTokenPath, - ssmConfigPath, - }, - { - ec2instanceCriteria: { - instanceTypes, - targetCapacityType: instanceTargetCapacityType, - maxSpotPrice: instanceMaxSpotPrice, - instanceAllocationStrategy: instanceAllocationStrategy, - }, - environment, - launchTemplateName, - subnets, - amiIdSsmParameterName, - tracingEnabled, - onDemandFailoverOnError, + const instances = await createRunners( + { + ephemeral: ephemeralEnabled, + enableJitConfig, + ghesBaseUrl, + runnerLabels, + runnerGroup, + runnerNamePrefix, + runnerOwner: group, + runnerType, + disableAutoUpdate, + ssmTokenPath, + ssmConfigPath, + }, + { + ec2instanceCriteria: { + instanceTypes, + targetCapacityType: instanceTargetCapacityType, + maxSpotPrice: instanceMaxSpotPrice, + instanceAllocationStrategy: instanceAllocationStrategy, }, - githubInstallationClient, - ); + environment, + launchTemplateName, + subnets, + amiIdSsmParameterName, + tracingEnabled, + onDemandFailoverOnError, + }, + newRunners, + githubInstallationClient, + ); - await publishRetryMessage(payload); - } else { - logger.info('No runner will be created, maximum number of runners reached.'); - if (ephemeral) { - throw new ScaleError('No runners create: maximum of runners reached.'); - } + // Not all runners we wanted were created, let's reject enough items so that + // number of entries will be retried. 
+ if (instances.length !== newRunners) { + const failedInstanceCount = newRunners - instances.length; + + logger.warn('Some runners failed to be created, rejecting some messages so the requests are retried', { + wanted: newRunners, + got: instances.length, + failedInstanceCount, + }); + + invalidMessages.push(...messages.slice(0, failedInstanceCount).map(({ messageId }) => messageId)); } - } else { - logger.info('No runner will be created, job is not queued.'); } + + return invalidMessages; } export function getGitHubEnterpriseApiUrl() { diff --git a/lambdas/libs/aws-powertools-util/src/logger/index.ts b/lambdas/libs/aws-powertools-util/src/logger/index.ts index 195b552a74..2bad191a83 100644 --- a/lambdas/libs/aws-powertools-util/src/logger/index.ts +++ b/lambdas/libs/aws-powertools-util/src/logger/index.ts @@ -9,7 +9,7 @@ const defaultValues = { }; function setContext(context: Context, module?: string) { - logger.addPersistentLogAttributes({ + logger.appendPersistentKeys({ 'aws-request-id': context.awsRequestId, 'function-name': context.functionName, module: module, @@ -17,7 +17,7 @@ function setContext(context: Context, module?: string) { // Add the context to all child loggers childLoggers.forEach((childLogger) => { - childLogger.addPersistentLogAttributes({ + childLogger.appendPersistentKeys({ 'aws-request-id': context.awsRequestId, 'function-name': context.functionName, }); @@ -25,14 +25,14 @@ function setContext(context: Context, module?: string) { } const logger = new Logger({ - persistentLogAttributes: { + persistentKeys: { ...defaultValues, }, }); function createChildLogger(module: string): Logger { const childLogger = logger.createChild({ - persistentLogAttributes: { + persistentKeys: { module: module, }, }); @@ -47,7 +47,7 @@ type LogAttributes = { function addPersistentContextToChildLogger(attributes: LogAttributes) { childLoggers.forEach((childLogger) => { - childLogger.addPersistentLogAttributes(attributes); + 
childLogger.appendPersistentKeys(attributes); }); } diff --git a/main.tf b/main.tf index 69a2a5a82d..65e9b30a83 100644 --- a/main.tf +++ b/main.tf @@ -210,28 +210,30 @@ module "runners" { credit_specification = var.runner_credit_specification cpu_options = var.runner_cpu_options - enable_runner_binaries_syncer = var.enable_runner_binaries_syncer - lambda_s3_bucket = var.lambda_s3_bucket - runners_lambda_s3_key = var.runners_lambda_s3_key - runners_lambda_s3_object_version = var.runners_lambda_s3_object_version - lambda_runtime = var.lambda_runtime - lambda_architecture = var.lambda_architecture - lambda_zip = var.runners_lambda_zip - lambda_scale_up_memory_size = var.runners_scale_up_lambda_memory_size - lambda_scale_down_memory_size = var.runners_scale_down_lambda_memory_size - lambda_timeout_scale_up = var.runners_scale_up_lambda_timeout - lambda_timeout_scale_down = var.runners_scale_down_lambda_timeout - lambda_subnet_ids = var.lambda_subnet_ids - lambda_security_group_ids = var.lambda_security_group_ids - lambda_tags = var.lambda_tags - tracing_config = var.tracing_config - logging_retention_in_days = var.logging_retention_in_days - logging_kms_key_id = var.logging_kms_key_id - enable_cloudwatch_agent = var.enable_cloudwatch_agent - cloudwatch_config = var.cloudwatch_config - runner_log_files = var.runner_log_files - runner_group_name = var.runner_group_name - runner_name_prefix = var.runner_name_prefix + enable_runner_binaries_syncer = var.enable_runner_binaries_syncer + lambda_s3_bucket = var.lambda_s3_bucket + runners_lambda_s3_key = var.runners_lambda_s3_key + runners_lambda_s3_object_version = var.runners_lambda_s3_object_version + lambda_runtime = var.lambda_runtime + lambda_architecture = var.lambda_architecture + lambda_event_source_mapping_batch_size = var.lambda_event_source_mapping_batch_size + lambda_event_source_mapping_maximum_batching_window_in_seconds = var.lambda_event_source_mapping_maximum_batching_window_in_seconds + lambda_zip = 
var.runners_lambda_zip + lambda_scale_up_memory_size = var.runners_scale_up_lambda_memory_size + lambda_scale_down_memory_size = var.runners_scale_down_lambda_memory_size + lambda_timeout_scale_up = var.runners_scale_up_lambda_timeout + lambda_timeout_scale_down = var.runners_scale_down_lambda_timeout + lambda_subnet_ids = var.lambda_subnet_ids + lambda_security_group_ids = var.lambda_security_group_ids + lambda_tags = var.lambda_tags + tracing_config = var.tracing_config + logging_retention_in_days = var.logging_retention_in_days + logging_kms_key_id = var.logging_kms_key_id + enable_cloudwatch_agent = var.enable_cloudwatch_agent + cloudwatch_config = var.cloudwatch_config + runner_log_files = var.runner_log_files + runner_group_name = var.runner_group_name + runner_name_prefix = var.runner_name_prefix scale_up_reserved_concurrent_executions = var.scale_up_reserved_concurrent_executions diff --git a/modules/multi-runner/README.md b/modules/multi-runner/README.md index 90ae802242..63dcb1c74d 100644 --- a/modules/multi-runner/README.md +++ b/modules/multi-runner/README.md @@ -137,6 +137,8 @@ module "multi-runner" { | [key\_name](#input\_key\_name) | Key pair name | `string` | `null` | no | | [kms\_key\_arn](#input\_kms\_key\_arn) | Optional CMK Key ARN to be used for Parameter Store. | `string` | `null` | no | | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | +| [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. 
| `number` | `10` | no | +| [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no | | [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | diff --git a/modules/multi-runner/runners.tf b/modules/multi-runner/runners.tf index 811ab36260..d58e61f6ac 100644 --- a/modules/multi-runner/runners.tf +++ b/modules/multi-runner/runners.tf @@ -58,28 +58,30 @@ module "runners" { credit_specification = each.value.runner_config.credit_specification cpu_options = each.value.runner_config.cpu_options - enable_runner_binaries_syncer = each.value.runner_config.enable_runner_binaries_syncer - lambda_s3_bucket = var.lambda_s3_bucket - runners_lambda_s3_key = var.runners_lambda_s3_key - runners_lambda_s3_object_version = var.runners_lambda_s3_object_version - lambda_runtime = var.lambda_runtime - lambda_architecture = var.lambda_architecture - lambda_zip = var.runners_lambda_zip - lambda_scale_up_memory_size = var.scale_up_lambda_memory_size - lambda_timeout_scale_up = var.runners_scale_up_lambda_timeout - lambda_scale_down_memory_size = var.scale_down_lambda_memory_size - lambda_timeout_scale_down = var.runners_scale_down_lambda_timeout - lambda_subnet_ids = var.lambda_subnet_ids - lambda_security_group_ids = var.lambda_security_group_ids - lambda_tags = var.lambda_tags - tracing_config = var.tracing_config - logging_retention_in_days = var.logging_retention_in_days - logging_kms_key_id = var.logging_kms_key_id - enable_cloudwatch_agent = each.value.runner_config.enable_cloudwatch_agent - cloudwatch_config = try(coalesce(each.value.runner_config.cloudwatch_config, var.cloudwatch_config), null) - runner_log_files = each.value.runner_config.runner_log_files - runner_group_name = each.value.runner_config.runner_group_name - runner_name_prefix = each.value.runner_config.runner_name_prefix + enable_runner_binaries_syncer = 
each.value.runner_config.enable_runner_binaries_syncer + lambda_s3_bucket = var.lambda_s3_bucket + runners_lambda_s3_key = var.runners_lambda_s3_key + runners_lambda_s3_object_version = var.runners_lambda_s3_object_version + lambda_runtime = var.lambda_runtime + lambda_architecture = var.lambda_architecture + lambda_zip = var.runners_lambda_zip + lambda_scale_up_memory_size = var.scale_up_lambda_memory_size + lambda_event_source_mapping_batch_size = var.lambda_event_source_mapping_batch_size + lambda_event_source_mapping_maximum_batching_window_in_seconds = var.lambda_event_source_mapping_maximum_batching_window_in_seconds + lambda_timeout_scale_up = var.runners_scale_up_lambda_timeout + lambda_scale_down_memory_size = var.scale_down_lambda_memory_size + lambda_timeout_scale_down = var.runners_scale_down_lambda_timeout + lambda_subnet_ids = var.lambda_subnet_ids + lambda_security_group_ids = var.lambda_security_group_ids + lambda_tags = var.lambda_tags + tracing_config = var.tracing_config + logging_retention_in_days = var.logging_retention_in_days + logging_kms_key_id = var.logging_kms_key_id + enable_cloudwatch_agent = each.value.runner_config.enable_cloudwatch_agent + cloudwatch_config = try(coalesce(each.value.runner_config.cloudwatch_config, var.cloudwatch_config), null) + runner_log_files = each.value.runner_config.runner_log_files + runner_group_name = each.value.runner_config.runner_group_name + runner_name_prefix = each.value.runner_config.runner_name_prefix scale_up_reserved_concurrent_executions = each.value.runner_config.scale_up_reserved_concurrent_executions diff --git a/modules/multi-runner/variables.tf b/modules/multi-runner/variables.tf index 0cf8607c09..a63c60c6b8 100644 --- a/modules/multi-runner/variables.tf +++ b/modules/multi-runner/variables.tf @@ -724,3 +724,15 @@ variable "user_agent" { type = string default = "github-aws-runners" } + +variable "lambda_event_source_mapping_batch_size" { + description = "Maximum number of records to pass to 
the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used." + type = number + default = 10 +} + +variable "lambda_event_source_mapping_maximum_batching_window_in_seconds" { + description = "Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch_size is greater than 10. Defaults to 0." + type = number + default = 0 +} diff --git a/modules/runners/README.md b/modules/runners/README.md index 0f2db503e5..2397f1f576 100644 --- a/modules/runners/README.md +++ b/modules/runners/README.md @@ -177,6 +177,8 @@ yarn run dist | [key\_name](#input\_key\_name) | Key pair name | `string` | `null` | no | | [kms\_key\_arn](#input\_kms\_key\_arn) | Optional CMK Key ARN to be used for Parameter Store. | `string` | `null` | no | | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | +| [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no | +| [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | | [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. 
This is an alternative to providing local files directly. | `string` | `null` | no | | [lambda\_scale\_down\_memory\_size](#input\_lambda\_scale\_down\_memory\_size) | Memory size limit in MB for scale down lambda. | `number` | `512` | no | diff --git a/modules/runners/job-retry.tf b/modules/runners/job-retry.tf index e51c3903d4..130992667f 100644 --- a/modules/runners/job-retry.tf +++ b/modules/runners/job-retry.tf @@ -3,30 +3,32 @@ locals { job_retry_enabled = var.job_retry != null && var.job_retry.enable ? true : false job_retry = { - prefix = var.prefix - tags = local.tags - aws_partition = var.aws_partition - architecture = var.lambda_architecture - runtime = var.lambda_runtime - security_group_ids = var.lambda_security_group_ids - subnet_ids = var.lambda_subnet_ids - kms_key_arn = var.kms_key_arn - lambda_tags = var.lambda_tags - log_level = var.log_level - logging_kms_key_id = var.logging_kms_key_id - logging_retention_in_days = var.logging_retention_in_days - metrics = var.metrics - role_path = var.role_path - role_permissions_boundary = var.role_permissions_boundary - s3_bucket = var.lambda_s3_bucket - s3_key = var.runners_lambda_s3_key - s3_object_version = var.runners_lambda_s3_object_version - zip = var.lambda_zip - tracing_config = var.tracing_config - github_app_parameters = var.github_app_parameters - enable_organization_runners = var.enable_organization_runners - sqs_build_queue = var.sqs_build_queue - ghes_url = var.ghes_url + prefix = var.prefix + tags = local.tags + aws_partition = var.aws_partition + architecture = var.lambda_architecture + runtime = var.lambda_runtime + security_group_ids = var.lambda_security_group_ids + subnet_ids = var.lambda_subnet_ids + kms_key_arn = var.kms_key_arn + lambda_tags = var.lambda_tags + log_level = var.log_level + logging_kms_key_id = var.logging_kms_key_id + logging_retention_in_days = var.logging_retention_in_days + metrics = var.metrics + role_path = var.role_path + role_permissions_boundary = 
var.role_permissions_boundary + s3_bucket = var.lambda_s3_bucket + s3_key = var.runners_lambda_s3_key + s3_object_version = var.runners_lambda_s3_object_version + zip = var.lambda_zip + tracing_config = var.tracing_config + github_app_parameters = var.github_app_parameters + enable_organization_runners = var.enable_organization_runners + sqs_build_queue = var.sqs_build_queue + ghes_url = var.ghes_url + lambda_event_source_mapping_batch_size = var.lambda_event_source_mapping_batch_size + lambda_event_source_mapping_maximum_batching_window_in_seconds = var.lambda_event_source_mapping_maximum_batching_window_in_seconds } } diff --git a/modules/runners/job-retry/README.md b/modules/runners/job-retry/README.md index 91089a213b..4f4c80921c 100644 --- a/modules/runners/job-retry/README.md +++ b/modules/runners/job-retry/README.md @@ -42,7 +42,7 @@ The module is an inner module and used by the runner module when the opt-in feat | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| -| [config](#input\_config) | Configuration for the spot termination watcher lambda function.

`aws_partition`: Partition for the base arn if not 'aws'
`architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions.
`environment_variables`: Environment variables for the lambda.
`enable_organization_runners`: Enable organization runners.
`enable_metric`: Enable metric for the lambda. If `spot_warning` is set to true, the lambda will emit a metric when it detects a spot termination warning.
'ghes\_url': Optional GitHub Enterprise Server URL.
'user\_agent': Optional User-Agent header for GitHub API requests.
'github\_app\_parameters': Parameter Store for GitHub App Parameters.
'kms\_key\_arn': Optional CMK Key ARN instead of using the default AWS managed key.
`lambda_principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.
`lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
`logging_kms_key_id`: Specifies the kms key id to encrypt the logs with
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
`memory_size`: Memory size linit in MB of the lambda.
`metrics`: Configuration to enable metrics creation by the lambda.
`prefix`: The prefix used for naming resources.
`role_path`: The path that will be added to the role, if not set the environment name will be used.
`role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda.
`runtime`: AWS Lambda runtime.
`s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly.
`s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas.
`s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket.
`security_group_ids`: List of security group IDs associated with the Lambda function.
'sqs\_build\_queue': SQS queue for build events to re-publish job request.
`subnet_ids`: List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`.
`tag_filters`: Map of tags that will be used to filter the resources to be tracked. Only for which all tags are present and starting with the same value as the value in the map will be tracked.
`tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`timeout`: Time out of the lambda in seconds.
`tracing_config`: Configuration for lambda tracing.
`zip`: File location of the lambda zip file. |
object({
aws_partition = optional(string, null)
architecture = optional(string, null)
enable_organization_runners = bool
environment_variables = optional(map(string), {})
ghes_url = optional(string, null)
user_agent = optional(string, null)
github_app_parameters = object({
key_base64 = map(string)
id = map(string)
})
kms_key_arn = optional(string, null)
lambda_tags = optional(map(string), {})
log_level = optional(string, null)
logging_kms_key_id = optional(string, null)
logging_retention_in_days = optional(number, null)
memory_size = optional(number, null)
metrics = optional(object({
enable = optional(bool, false)
namespace = optional(string, null)
metric = optional(object({
enable_github_app_rate_limit = optional(bool, true)
enable_job_retry = optional(bool, true)
}), {})
}), {})
prefix = optional(string, null)
principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
queue_encryption = optional(object({
kms_data_key_reuse_period_seconds = optional(number, null)
kms_master_key_id = optional(string, null)
sqs_managed_sse_enabled = optional(bool, true)
}), {})
role_path = optional(string, null)
role_permissions_boundary = optional(string, null)
runtime = optional(string, null)
security_group_ids = optional(list(string), [])
subnet_ids = optional(list(string), [])
s3_bucket = optional(string, null)
s3_key = optional(string, null)
s3_object_version = optional(string, null)
sqs_build_queue = object({
url = string
arn = string
})
tags = optional(map(string), {})
timeout = optional(number, 30)
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
zip = optional(string, null)
})
| n/a | yes | +| [config](#input\_config) | Configuration for the job retry lambda function.

`aws_partition`: Partition for the base arn if not 'aws'
`architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions.
`environment_variables`: Environment variables for the lambda.
`enable_organization_runners`: Enable organization runners.
`enable_metric`: Enable metric for the lambda. If `spot_warning` is set to true, the lambda will emit a metric when it detects a spot termination warning.
'ghes\_url': Optional GitHub Enterprise Server URL.
'user\_agent': Optional User-Agent header for GitHub API requests.
'github\_app\_parameters': Parameter Store for GitHub App Parameters.
'kms\_key\_arn': Optional CMK Key ARN instead of using the default AWS managed key.
`lambda_event_source_mapping_batch_size`: Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default will be used.
`lambda_event_source_mapping_maximum_batching_window_in_seconds`: Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10.
`lambda_principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.
`lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
`logging_kms_key_id`: Specifies the kms key id to encrypt the logs with.
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
`memory_size`: Memory size limit in MB of the lambda.
`metrics`: Configuration to enable metrics creation by the lambda.
`prefix`: The prefix used for naming resources.
`role_path`: The path that will be added to the role, if not set the environment name will be used.
`role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda.
`runtime`: AWS Lambda runtime.
`s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly.
`s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas.
`s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket.
`security_group_ids`: List of security group IDs associated with the Lambda function.
'sqs\_build\_queue': SQS queue for build events to re-publish job request.
`subnet_ids`: List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`.
`tag_filters`: Map of tags that will be used to filter the resources to be tracked. Only resources for which all tags are present and start with the same value as the value in the map will be tracked.
`tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`timeout`: Timeout of the lambda in seconds.
`tracing_config`: Configuration for lambda tracing.
`zip`: File location of the lambda zip file. |
object({
  aws_partition               = optional(string, null)
  architecture                = optional(string, null)
  enable_organization_runners = bool
  environment_variables       = optional(map(string), {})
  ghes_url                    = optional(string, null)
  user_agent                  = optional(string, null)
  github_app_parameters = object({
    key_base64 = map(string)
    id         = map(string)
  })
  kms_key_arn                                                    = optional(string, null)
  lambda_event_source_mapping_batch_size                         = optional(number, 10)
  lambda_event_source_mapping_maximum_batching_window_in_seconds = optional(number, 0)
  lambda_tags                                                    = optional(map(string), {})
  log_level                                                      = optional(string, null)
  logging_kms_key_id                                             = optional(string, null)
  logging_retention_in_days                                      = optional(number, null)
  memory_size                                                    = optional(number, null)
  metrics = optional(object({
    enable    = optional(bool, false)
    namespace = optional(string, null)
    metric = optional(object({
      enable_github_app_rate_limit = optional(bool, true)
      enable_job_retry             = optional(bool, true)
    }), {})
  }), {})
  prefix = optional(string, null)
  principals = optional(list(object({
    type        = string
    identifiers = list(string)
  })), [])
  queue_encryption = optional(object({
    kms_data_key_reuse_period_seconds = optional(number, null)
    kms_master_key_id                 = optional(string, null)
    sqs_managed_sse_enabled           = optional(bool, true)
  }), {})
  role_path                 = optional(string, null)
  role_permissions_boundary = optional(string, null)
  runtime                   = optional(string, null)
  security_group_ids        = optional(list(string), [])
  subnet_ids                = optional(list(string), [])
  s3_bucket                 = optional(string, null)
  s3_key                    = optional(string, null)
  s3_object_version         = optional(string, null)
  sqs_build_queue = object({
    url = string
    arn = string
  })
  tags    = optional(map(string), {})
  timeout = optional(number, 30)
  tracing_config = optional(object({
    mode                  = optional(string, null)
    capture_http_requests = optional(bool, false)
    capture_error         = optional(bool, false)
  }), {})
  zip = optional(string, null)
})
| n/a | yes |

## Outputs

diff --git a/modules/runners/job-retry/main.tf b/modules/runners/job-retry/main.tf
index 9561c7db71..612c515f8c 100644
--- a/modules/runners/job-retry/main.tf
+++ b/modules/runners/job-retry/main.tf
@@ -44,9 +44,10 @@ module "job_retry" {
 }
 
 resource "aws_lambda_event_source_mapping" "job_retry" {
-  event_source_arn = aws_sqs_queue.job_retry_check_queue.arn
-  function_name    = module.job_retry.lambda.function.arn
-  batch_size       = 1
+  event_source_arn                   = aws_sqs_queue.job_retry_check_queue.arn
+  function_name                      = module.job_retry.lambda.function.arn
+  batch_size                         = var.config.lambda_event_source_mapping_batch_size
+  maximum_batching_window_in_seconds = var.config.lambda_event_source_mapping_maximum_batching_window_in_seconds
 }
 
 resource "aws_lambda_permission" "job_retry" {
diff --git a/modules/runners/job-retry/variables.tf b/modules/runners/job-retry/variables.tf
index 4a8fe19fbf..f40bec1ba7 100644
--- a/modules/runners/job-retry/variables.tf
+++ b/modules/runners/job-retry/variables.tf
@@ -11,6 +11,8 @@ variable "config" {
     'user_agent': Optional User-Agent header for GitHub API requests.
     'github_app_parameters': Parameter Store for GitHub App Parameters.
     'kms_key_arn': Optional CMK Key ARN instead of using the default AWS managed key.
+    `lambda_event_source_mapping_batch_size`: Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default will be used.
+    `lambda_event_source_mapping_maximum_batching_window_in_seconds`: Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch_size is greater than 10.
     `lambda_principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.
     `lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
     `log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
@@ -45,12 +47,14 @@ variable "config" {
       key_base64 = map(string)
       id         = map(string)
     })
-    kms_key_arn               = optional(string, null)
-    lambda_tags               = optional(map(string), {})
-    log_level                 = optional(string, null)
-    logging_kms_key_id        = optional(string, null)
-    logging_retention_in_days = optional(number, null)
-    memory_size               = optional(number, null)
+    kms_key_arn                                                    = optional(string, null)
+    lambda_event_source_mapping_batch_size                         = optional(number, 10)
+    lambda_event_source_mapping_maximum_batching_window_in_seconds = optional(number, 0)
+    lambda_tags                                                    = optional(map(string), {})
+    log_level                                                      = optional(string, null)
+    logging_kms_key_id                                             = optional(string, null)
+    logging_retention_in_days                                      = optional(number, null)
+    memory_size                                                    = optional(number, null)
     metrics = optional(object({
       enable    = optional(bool, false)
       namespace = optional(string, null)
diff --git a/modules/runners/scale-up.tf b/modules/runners/scale-up.tf
index 89d95a50d0..b1ea88652d 100644
--- a/modules/runners/scale-up.tf
+++ b/modules/runners/scale-up.tf
@@ -87,10 +87,12 @@ resource "aws_cloudwatch_log_group" "scale_up" {
 }
 
 resource "aws_lambda_event_source_mapping" "scale_up" {
-  event_source_arn = var.sqs_build_queue.arn
-  function_name    = aws_lambda_function.scale_up.arn
-  batch_size       = 1
-  tags             = var.tags
+  event_source_arn                   = var.sqs_build_queue.arn
+  function_name                      = aws_lambda_function.scale_up.arn
+  function_response_types            = ["ReportBatchItemFailures"]
+  batch_size                         = var.lambda_event_source_mapping_batch_size
+  maximum_batching_window_in_seconds = var.lambda_event_source_mapping_maximum_batching_window_in_seconds
+  tags                               = var.tags
 }
 
 resource "aws_lambda_permission" "scale_runners_lambda" {
diff --git a/modules/runners/variables.tf b/modules/runners/variables.tf
index 856014564c..6310b8a442 100644
--- a/modules/runners/variables.tf
+++ b/modules/runners/variables.tf
@@ -770,3 +770,23 @@ variable "user_agent" {
   type        = string
   default     = null
 }
+
+variable "lambda_event_source_mapping_batch_size" {
+  description = "Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used."
+  type        = number
+  default     = 10
+  validation {
+    condition     = var.lambda_event_source_mapping_batch_size >= 1 && var.lambda_event_source_mapping_batch_size <= 1000
+    error_message = "The batch size for the lambda event source mapping must be between 1 and 1000."
+  }
+}
+
+variable "lambda_event_source_mapping_maximum_batching_window_in_seconds" {
+  description = "Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch_size is greater than 10. Defaults to 0."
+  type        = number
+  default     = 0
+  validation {
+    condition     = var.lambda_event_source_mapping_maximum_batching_window_in_seconds >= 0 && var.lambda_event_source_mapping_maximum_batching_window_in_seconds <= 300
+    error_message = "Maximum batching window must be between 0 and 300 seconds."
+  }
+}
diff --git a/variables.tf b/variables.tf
index 6d6a895873..f0d310f20f 100644
--- a/variables.tf
+++ b/variables.tf
@@ -1022,3 +1022,19 @@ variable "user_agent" {
   type        = string
   default     = "github-aws-runners"
 }
+
+variable "lambda_event_source_mapping_batch_size" {
+  description = "Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used."
+  type        = number
+  default     = 10
+}
+
+variable "lambda_event_source_mapping_maximum_batching_window_in_seconds" {
+  description = "Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch_size is greater than 10. Defaults to 0."
+  type        = number
+  default     = 0
+  validation {
+    condition     = var.lambda_event_source_mapping_maximum_batching_window_in_seconds >= 0 && var.lambda_event_source_mapping_maximum_batching_window_in_seconds <= 300
+    error_message = "Maximum batching window must be between 0 and 300 seconds."
+  }
+}

From 82cb27f38522b5eb5e9a250c89bf034e567cc0b5 Mon Sep 17 00:00:00 2001
From: Iain Lane
Date: Thu, 27 Nov 2025 15:36:34 +0000
Subject: [PATCH 2/7] fix mock setup

---
 lambdas/functions/control-plane/src/lambda.test.ts | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/lambdas/functions/control-plane/src/lambda.test.ts b/lambdas/functions/control-plane/src/lambda.test.ts
index 3e6a897e88..bf5b648402 100644
--- a/lambdas/functions/control-plane/src/lambda.test.ts
+++ b/lambdas/functions/control-plane/src/lambda.test.ts
@@ -8,7 +8,7 @@ import { scaleDown } from './scale-runners/scale-down';
 import { ActionRequestMessage, scaleUp } from './scale-runners/scale-up';
 import { cleanSSMTokens } from './scale-runners/ssm-housekeeper';
 import { checkAndRetryJob } from './scale-runners/job-retry';
-import { describe, it, expect, vi, MockedFunction } from 'vitest';
+import { describe, it, expect, vi, MockedFunction, beforeEach } from 'vitest';
 
 const body: ActionRequestMessage = {
   eventType: 'workflow_job',
@@ -160,9 +160,7 @@ describe('Test scale up lambda wrapper.', () => {
     const records = createMultipleRecords(3);
     const multiRecordEvent: SQSEvent = { Records: records };
 
-    const mock = vi.fn(scaleUp);
-    mock.mockImplementation(() => Promise.resolve(['message-1', 'message-2']));
-    vi.mocked(scaleUp).mockImplementation(mock);
+    vi.mocked(scaleUp).mockResolvedValue(['message-1', 'message-2']);
 
     const result = await scaleUpHandler(multiRecordEvent, context);
     expect(result).toEqual({
@@ -243,9 +241,7 @@ describe('Test scale up lambda wrapper.', () => {
     const multiRecordEvent: SQSEvent = { Records: records };
     const error = new ScaleError(2);
 
-    const mock = vi.fn(scaleUp);
-    mock.mockImplementation(() => Promise.reject(error));
-    vi.mocked(scaleUp).mockImplementation(mock);
+    vi.mocked(scaleUp).mockRejectedValue(error);
 
     await expect(scaleUpHandler(multiRecordEvent, context)).resolves.toEqual({
       batchItemFailures: [{ itemIdentifier: 'message-0' }, { itemIdentifier: 'message-1' }],

From 3f63afc349431838bc208bb996a3d2a5dcdb1f66 Mon Sep 17 00:00:00 2001
From: Iain Lane
Date: Thu, 27 Nov 2025 15:41:29 +0000
Subject: [PATCH 3/7] fix mocks to use `vi.mock<...>Value`

This is a shorter and better way to set up mocks.

---
 .../control-plane/src/lambda.test.ts | 77 ++++---------------
 1 file changed, 15 insertions(+), 62 deletions(-)

diff --git a/lambdas/functions/control-plane/src/lambda.test.ts b/lambdas/functions/control-plane/src/lambda.test.ts
index bf5b648402..2c9a98e420 100644
--- a/lambdas/functions/control-plane/src/lambda.test.ts
+++ b/lambdas/functions/control-plane/src/lambda.test.ts
@@ -93,29 +93,19 @@ describe('Test scale up lambda wrapper.', () => {
   });
 
   it('Scale without error should resolve.', async () => {
-    const mock = vi.fn(scaleUp);
-    mock.mockImplementation(() => {
-      return new Promise((resolve) => {
-        resolve([]);
-      });
-    });
+    vi.mocked(scaleUp).mockResolvedValue([]);
     await expect(scaleUpHandler(sqsEvent, context)).resolves.not.toThrow();
   });
 
   it('Non scale should resolve.', async () => {
     const error = new Error('Non scale should resolve.');
-    const mock = vi.fn(scaleUp);
-    mock.mockRejectedValue(error);
+    vi.mocked(scaleUp).mockRejectedValue(error);
     await expect(scaleUpHandler(sqsEvent, context)).resolves.not.toThrow();
   });
 
   it('Scale should create a batch failure message', async () => {
     const error = new ScaleError();
-    const mock = vi.fn() as MockedFunction;
-    mock.mockImplementation(() => {
-      return Promise.reject(error);
-    });
-    vi.mocked(scaleUp).mockImplementation(mock);
+    vi.mocked(scaleUp).mockRejectedValue(error);
 
     await expect(scaleUpHandler(sqsEvent, context)).resolves.toEqual({
       batchItemFailures: [{ itemIdentifier: sqsRecord.messageId }],
     });
@@ -142,9 +132,7 @@ describe('Test scale up lambda wrapper.', () => {
     const records = createMultipleRecords(3);
     const multiRecordEvent: SQSEvent = { Records: records };
 
-    const mock = vi.fn(scaleUp);
-    mock.mockImplementation(() => Promise.resolve([]));
-    vi.mocked(scaleUp).mockImplementation(mock);
+    vi.mocked(scaleUp).mockResolvedValue([]);
 
     await expect(scaleUpHandler(multiRecordEvent, context)).resolves.not.toThrow();
     expect(scaleUp).toHaveBeenCalledWith(
@@ -175,9 +163,7 @@ describe('Test scale up lambda wrapper.', () => {
       Records: [...sqsRecords, ...nonSqsRecords],
     };
 
-    const mock = vi.fn(scaleUp);
-    mock.mockImplementation(() => Promise.resolve([]));
-    vi.mocked(scaleUp).mockImplementation(mock);
+    vi.mocked(scaleUp).mockResolvedValue([]);
 
     await scaleUpHandler(mixedEvent, context);
     expect(scaleUp).toHaveBeenCalledWith(
@@ -211,15 +197,13 @@ describe('Test scale up lambda wrapper.', () => {
     ];
     const multiRecordEvent: SQSEvent = { Records: records };
 
-    const mock = vi.fn(scaleUp);
-    mock.mockImplementation((messages) => {
+    vi.mocked(scaleUp).mockImplementation((messages) => {
       // Verify messages are sorted by retry count (ascending)
       expect(messages[0].messageId).toBe('no-retry');
       expect(messages[1].messageId).toBe('low-retry');
       expect(messages[2].messageId).toBe('high-retry');
       return Promise.resolve([]);
     });
-    vi.mocked(scaleUp).mockImplementation(mock);
 
     await scaleUpHandler(multiRecordEvent, context);
   });
@@ -228,9 +212,7 @@ describe('Test scale up lambda wrapper.', () => {
     const records = createMultipleRecords(2);
     const multiRecordEvent: SQSEvent = { Records: records };
 
-    const mock = vi.fn(scaleUp);
-    mock.mockImplementation(() => Promise.reject(new Error('Generic error')));
-    vi.mocked(scaleUp).mockImplementation(mock);
+    vi.mocked(scaleUp).mockRejectedValue(new Error('Generic error'));
 
     const result = await scaleUpHandler(multiRecordEvent, context);
     expect(result).toEqual({ batchItemFailures: [] });
@@ -252,41 +234,26 @@ describe('Test scale up lambda wrapper.', () => {
 
 describe('Test scale down lambda wrapper.', () => {
   it('Scaling down no error.', async () => {
-    const mock = vi.fn(scaleDown);
-    mock.mockImplementation(() => {
-      return new Promise((resolve) => {
-        resolve();
-      });
-    });
+    vi.mocked(scaleDown).mockResolvedValue();
     await expect(scaleDownHandler({}, context)).resolves.not.toThrow();
   });
 
   it('Scaling down with error.', async () => {
     const error = new Error('Scaling down with error.');
-    const mock = vi.fn(scaleDown);
-    mock.mockRejectedValue(error);
+    vi.mocked(scaleDown).mockRejectedValue(error);
     await expect(scaleDownHandler({}, context)).resolves.not.toThrow();
   });
 });
 
 describe('Adjust pool.', () => {
   it('Receive message to adjust pool.', async () => {
-    const mock = vi.fn(adjust);
-    mock.mockImplementation(() => {
-      return new Promise((resolve) => {
-        resolve();
-      });
-    });
+    vi.mocked(adjust).mockResolvedValue();
     await expect(adjustPool({ poolSize: 2 }, context)).resolves.not.toThrow();
   });
 
   it('Handle error for adjusting pool.', async () => {
     const error = new Error('Handle error for adjusting pool.');
-    const mock = vi.fn() as MockedFunction;
-    mock.mockImplementation(() => {
-      return Promise.reject(error);
-    });
-    vi.mocked(adjust).mockImplementation(mock);
+    vi.mocked(adjust).mockRejectedValue(error);
     const logSpy = vi.spyOn(logger, 'error');
     await adjustPool({ poolSize: 0 }, context);
     expect(logSpy).toHaveBeenCalledWith(`Handle error for adjusting pool. ${error.message}`, { error });
@@ -303,12 +270,7 @@ describe('Test middleware', () => {
 
 describe('Test ssm housekeeper lambda wrapper.', () => {
   it('Invoke without errors.', async () => {
-    const mock = vi.fn(cleanSSMTokens);
-    mock.mockImplementation(() => {
-      return new Promise((resolve) => {
-        resolve();
-      });
-    });
+    vi.mocked(cleanSSMTokens).mockResolvedValue();
 
     process.env.SSM_CLEANUP_CONFIG = JSON.stringify({
       dryRun: false,
@@ -320,29 +282,20 @@ describe('Test ssm housekeeper lambda wrapper.', () => {
   });
 
   it('Errors not throws.', async () => {
-    const mock = vi.fn(cleanSSMTokens);
-    mock.mockRejectedValue(new Error());
+    vi.mocked(cleanSSMTokens).mockRejectedValue(new Error());
     await expect(ssmHousekeeper({}, context)).resolves.not.toThrow();
   });
 });
 
 describe('Test job retry check wrapper', () => {
   it('Handle without error should resolve.', async () => {
-    const mock = vi.fn() as MockedFunction;
-    mock.mockImplementation(() => {
-      return Promise.resolve();
-    });
-    vi.mocked(checkAndRetryJob).mockImplementation(mock);
+    vi.mocked(checkAndRetryJob).mockResolvedValue();
     await expect(jobRetryCheck(sqsEvent, context)).resolves.not.toThrow();
   });
 
   it('Handle with error should resolve and log only a warning.', async () => {
     const error = new Error('Error handling retry check.');
-    const mock = vi.fn() as MockedFunction;
-    mock.mockImplementation(() => {
-      return Promise.reject(error);
-    });
-    vi.mocked(checkAndRetryJob).mockImplementation(mock);
+    vi.mocked(checkAndRetryJob).mockRejectedValue(error);
     const logSpyWarn = vi.spyOn(logger, 'warn');
 
     await expect(jobRetryCheck(sqsEvent, context)).resolves.not.toThrow();

From 135a00673b789c156c13b63f80ffa26185217d6b Mon Sep 17 00:00:00 2001
From: Niek Palm
Date: Sat, 29 Nov 2025 10:22:10 +0100
Subject: [PATCH 4/7] chore: create dedicated environment for release

---
 .github/workflows/release.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml
index 752d8db93e..15cd21a781 100644
--- a/.github/workflows/release.yml
+++ b/.github/workflows/release.yml
@@ -3,7 +3,6 @@ on:
   push:
     branches:
       - main
-      - v1
   workflow_dispatch:
 
 concurrency:
@@ -22,6 +21,7 @@ jobs:
       actions: write # for release-please-action to trigger other workflows
       id-token: write # for actions/attest-build-provenance to generate attestations
       attestations: write # for actions/attest-build-provenance to write attestations
+    environment: release
     steps:
       - name: Harden the runner (Audit all outbound calls)
         uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
@@ -30,7 +30,7 @@ jobs:
 
       - uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
         with:
-          node-version: 22
+          node-version: 24
           package-manager-cache: false
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:

From c316ef68e5489dc5ff49c22b28b5b27cc1569e34 Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Wed, 3 Dec 2025 11:29:02 +0100
Subject: [PATCH 5/7] fix(lambda): bump the aws group in /lambdas with 7
 updates (#4924)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Bumps the aws group in /lambdas with 7 updates:

| Package | From | To |
| --- | --- | --- |
| [@aws-sdk/client-ec2](https://github.com/aws/aws-sdk-js-v3/tree/HEAD/clients/client-ec2) | `3.938.0` | `3.940.0` |
| [@aws-sdk/client-ssm](https://github.com/aws/aws-sdk-js-v3/tree/HEAD/clients/client-ssm) | `3.936.0` | `3.940.0` |
| [@aws-sdk/types](https://github.com/aws/aws-sdk-js-v3/tree/HEAD/packages/types) | `3.930.0` | `3.936.0` |
| [@aws-sdk/client-sqs](https://github.com/aws/aws-sdk-js-v3/tree/HEAD/clients/client-sqs) | `3.936.0` | `3.940.0` |
| [@aws-sdk/client-s3](https://github.com/aws/aws-sdk-js-v3/tree/HEAD/clients/client-s3) | `3.937.0` | `3.940.0` |
| [@aws-sdk/lib-storage](https://github.com/aws/aws-sdk-js-v3/tree/HEAD/lib/lib-storage) | `3.937.0` | `3.940.0` |
| [@aws-sdk/client-eventbridge](https://github.com/aws/aws-sdk-js-v3/tree/HEAD/clients/client-eventbridge) | `3.936.0` | `3.940.0` |

Updates `@aws-sdk/client-ec2` from 3.938.0 to 3.940.0
Release notes

Sourced from @​aws-sdk/client-ec2's releases.

v3.940.0

3.940.0(2025-11-25)

New Features
  • clients: update client endpoints as of 2025-11-25 (e2770904)
  • client-network-firewall: Network Firewall release of the Proxy feature. (0eb20e88)
  • client-organizations: Add support for policy operations on the S3_POLICY and BEDROCK_POLICY policy type. (75e196ee)
  • client-route-53: Adds support for new route53 feature: accelerated recovery. (dbe0a58f)
  • client-ec2: This release adds support to view Network firewall proxy appliances attached to an existing NAT Gateway via DescribeNatGateways API NatGatewayAttachedAppliance structure. (7d70b063)
Bug Fixes
  • core/protocols: performance improvements for shape serde traversal (#7523) (b20a25ea)
Tests

For list of updated packages, view updated-packages.md in assets-3.940.0.zip

v3.939.0

3.939.0(2025-11-24)

Chores
  • scripts: reduce api validation to packages/lib only (#7519) (eb74d6a0)
New Features
  • client-cloudwatch-logs: New CloudWatch Logs feature - LogGroup Deletion Protection, a capability that allows customers to safeguard their critical CloudWatch log groups from accidental or unintended deletion. (02360329)
  • client-cloudfront: Add TrustStore, ConnectionFunction APIs to CloudFront SDK (168505ee)
Bug Fixes
  • clients: export enum objects for string shapes (#7521) (62f648df)
  • cloudfront-signer: skip extended encoding for query parameters in the base url (#7515) (954d411e)
Tests

For list of updated packages, view updated-packages.md in assets-3.939.0.zip

Changelog

Sourced from @​aws-sdk/client-ec2's changelog.

3.940.0 (2025-11-25)

Features

  • client-ec2: This release adds support to view Network firewall proxy appliances attached to an existing NAT Gateway via DescribeNatGateways API NatGatewayAttachedAppliance structure. (7d70b06)

3.939.0 (2025-11-24)

Note: Version bump only for package @​aws-sdk/client-ec2

Commits
  • e9962f1 Publish v3.940.0
  • 7d70b06 feat(client-ec2): This release adds support to view Network firewall proxy ap...
  • 1592379 Publish v3.939.0
  • See full diff in compare view

Updates `@aws-sdk/client-ssm` from 3.936.0 to 3.940.0
Changelog

Sourced from @​aws-sdk/client-ssm's changelog.

3.940.0 (2025-11-25)

Note: Version bump only for package @​aws-sdk/client-ssm

3.939.0 (2025-11-24)

Note: Version bump only for package @​aws-sdk/client-ssm

Commits

Updates `@aws-sdk/types` from 3.930.0 to 3.936.0
Release notes

Sourced from @​aws-sdk/types's releases.

v3.936.0

3.936.0(2025-11-19)

New Features
  • credential-provider-login: add login credential provider (#7512) (2c08b1e0)

For list of updated packages, view updated-packages.md in assets-3.936.0.zip

v3.935.0

3.935.0(2025-11-19)

Chores
New Features
  • clients: update client endpoints as of 2025-11-19 (d7b51c49)
  • client-sts: IAM now supports outbound identity federation via the STS GetWebIdentityToken API, enabling AWS workloads to securely authenticate with external services using short-lived JSON Web Tokens. (f9fed01c)
  • client-dynamodb: Extended Global Secondary Index (GSI) composite keys to support up to 8 attributes. (622ef038)
  • client-medialive: MediaLive is adding support for MediaConnect Router by supporting a new input type called MEDIACONNECT_ROUTER. This new input type will provide seamless encrypted transport between MediaConnect Router and your MediaLive channel. (1667189e)
  • client-bcm-pricing-calculator: Add GroupSharingPreference, CostCategoryGroupSharingPreferenceArn, and CostCategoryGroupSharingPreferenceEffectiveDate to Bill Estimate. Add GroupSharingPreference and CostCategoryGroupSharingPreferenceArn to Bill Scenario. (e0dc140c)
  • client-backup: Amazon GuardDuty Malware Protection now supports AWS Backup, extending malware detection capabilities to EC2, EBS, and S3 backups. (498dcf3d)
  • client-connectcampaignsv2: This release added support for ring timer configuration for campaign calls. (1155c3c4)
  • client-ecs: Added support for Amazon ECS Managed Instances infrastructure optimization configuration. (2ee0c3f3)
  • client-ecr: Add support for ECR archival storage class and Inspector org policy for scanning (ed5e232d)
  • client-sagemaker: Added support for enhanced metrics for SageMaker AI Endpoints. This features provides Utilization Metrics at instance and container granularity and also provides easy configuration of metric publish frequency from 10 sec -> 5 mins (ad2587c7)
  • client-apigatewayv2: Support for API Gateway portals and portal products. (fc064256)
  • client-billingconductor: This release adds support for Billing Transfers, enabling management of billing transfers with billing groups on AWS Billing Conductor. (4e32b65d)
  • client-cloudwatch-logs: Adding support for ocsf version 1.5, add optional parameter MappingVersion (2a15be86)
  • client-api-gateway: API Gateway now supports response streaming and new security policies for REST APIs and custom domain names. (e1d2d6b1)
  • client-cost-optimization-hub: Release ListEfficiencyMetrics API (2b031582)
  • client-bedrock-runtime: This release includes support for Search Results. (40ffa77a)
  • client-cloudtrail: AWS CloudTrail now supports Insights for data events, expanding beyond management events to automatically detect unusual activity on data plane operations. (f8570665)
  • client-health: Adds actionability and personas properties to Health events exposed through DescribeEvents, DescribeEventsForOrganization, DescribeEventDetails, and DescribeEventTypes APIs. Adds filtering by actionabilities and personas in EventFilter, OrganizationEventFilter, EventTypeFilter. (c754b242)
  • client-networkflowmonitor: Added new enum value (AWS::EKS::Cluster) for type field under MonitorLocalResource (66729787)
  • client-invoicing: Add support for adding Billing transfers in Invoice configuration (2e493490)
  • client-s3: Adds support for blocking SSE-C writes to general purpose buckets. (cee2e72f)
  • client-network-firewall: Partner Managed Rulegroup feature support (2e8472d6)
  • client-emr: Add CloudWatch Logs integration for Spark driver, executor and step logs (7e6e1684)
  • client-fsx: Adding File Server Resource Manager configuration to FSx Windows (2e3c0c96)
  • client-guardduty: Add support for scanning and viewing scan results for backup resource types (231cf06b)
  • client-sfn: Adds support to TestState for mocked results and exceptions, along with additional inspection data. (1b18be75)
  • client-partnercentral-channel: Initial GA launch of Partner Central Channel (b77d1682)
  • client-secrets-manager: Adds support to create, update, retrieve, rotate, and delete managed external secrets. (c13b6f97)
  • client-iam: Added the EnableOutboundWebIdentityFederation, DisableOutboundWebIdentityFederation and GetOutboundWebIdentityFederationInfo APIs for the IAM outbound federation feature. (5774faa2)

... (truncated)

Changelog

Sourced from @​aws-sdk/types's changelog.

3.936.0 (2025-11-19)

Features

  • credential-provider-login: add login credential provider (#7512) (2c08b1e)
Commits

Updates `@aws-sdk/client-sqs` from 3.936.0 to 3.940.0
Changelog

Sourced from @​aws-sdk/client-sqs's changelog.

3.940.0 (2025-11-25)

Note: Version bump only for package @​aws-sdk/client-sqs

3.939.0 (2025-11-24)

Note: Version bump only for package @​aws-sdk/client-sqs

Commits

Updates `@aws-sdk/client-s3` from 3.937.0 to 3.940.0
Release notes

Sourced from @​aws-sdk/client-s3's releases.

v3.940.0

3.940.0(2025-11-25)

New Features
  • clients: update client endpoints as of 2025-11-25 (e2770904)
  • client-network-firewall: Network Firewall release of the Proxy feature. (0eb20e88)
  • client-organizations: Add support for policy operations on the S3_POLICY and BEDROCK_POLICY policy type. (75e196ee)
  • client-route-53: Adds support for new route53 feature: accelerated recovery. (dbe0a58f)
  • client-ec2: This release adds support to view Network firewall proxy appliances attached to an existing NAT Gateway via DescribeNatGateways API NatGatewayAttachedAppliance structure. (7d70b063)
Bug Fixes
  • core/protocols: performance improvements for shape serde traversal (#7523) (b20a25ea)
Tests

For list of updated packages, view updated-packages.md in assets-3.940.0.zip

v3.939.0

3.939.0 (2025-11-24)

Chores
  • scripts: reduce api validation to packages/lib only (#7519) (eb74d6a0)
New Features
  • client-cloudwatch-logs: New CloudWatch Logs feature - LogGroup Deletion Protection, a capability that allows customers to safeguard their critical CloudWatch log groups from accidental or unintended deletion. (02360329)
  • client-cloudfront: Add TrustStore, ConnectionFunction APIs to CloudFront SDK (168505ee)
Bug Fixes
  • clients: export enum objects for string shapes (#7521) (62f648df)
  • cloudfront-signer: skip extended encoding for query parameters in the base url (#7515) (954d411e)
Tests

For list of updated packages, view updated-packages.md in assets-3.939.0.zip

... (truncated)

Changelog

Sourced from @​aws-sdk/client-s3's changelog.

3.940.0 (2025-11-25)

Note: Version bump only for package @​aws-sdk/client-s3

3.939.0 (2025-11-24)

Note: Version bump only for package @​aws-sdk/client-s3

Commits

Updates `@aws-sdk/lib-storage` from 3.937.0 to 3.940.0
Release notes

Sourced from @​aws-sdk/lib-storage's releases.

v3.940.0

3.940.0 (2025-11-25)

New Features
  • clients: update client endpoints as of 2025-11-25 (e2770904)
  • client-network-firewall: Network Firewall release of the Proxy feature. (0eb20e88)
  • client-organizations: Add support for policy operations on the S3_POLICY and BEDROCK_POLICY policy type. (75e196ee)
  • client-route-53: Adds support for new route53 feature: accelerated recovery. (dbe0a58f)
  • client-ec2: This release adds support to view Network firewall proxy appliances attached to an existing NAT Gateway via DescribeNatGateways API NatGatewayAttachedAppliance structure. (7d70b063)
Bug Fixes
  • core/protocols: performance improvements for shape serde traversal (#7523) (b20a25ea)
Tests

For list of updated packages, view updated-packages.md in assets-3.940.0.zip

v3.939.0

3.939.0 (2025-11-24)

Chores
  • scripts: reduce api validation to packages/lib only (#7519) (eb74d6a0)
New Features
  • client-cloudwatch-logs: New CloudWatch Logs feature - LogGroup Deletion Protection, a capability that allows customers to safeguard their critical CloudWatch log groups from accidental or unintended deletion. (02360329)
  • client-cloudfront: Add TrustStore, ConnectionFunction APIs to CloudFront SDK (168505ee)
Bug Fixes
  • clients: export enum objects for string shapes (#7521) (62f648df)
  • cloudfront-signer: skip extended encoding for query parameters in the base url (#7515) (954d411e)
Tests

For list of updated packages, view updated-packages.md in assets-3.939.0.zip

... (truncated)

Changelog

Sourced from @​aws-sdk/lib-storage's changelog.

3.940.0 (2025-11-25)

Note: Version bump only for package @​aws-sdk/lib-storage

3.939.0 (2025-11-24)

Note: Version bump only for package @​aws-sdk/lib-storage

Commits

Updates `@aws-sdk/client-eventbridge` from 3.936.0 to 3.940.0
Release notes

Sourced from @​aws-sdk/client-eventbridge's releases.

v3.940.0

3.940.0 (2025-11-25)

New Features
  • clients: update client endpoints as of 2025-11-25 (e2770904)
  • client-network-firewall: Network Firewall release of the Proxy feature. (0eb20e88)
  • client-organizations: Add support for policy operations on the S3_POLICY and BEDROCK_POLICY policy type. (75e196ee)
  • client-route-53: Adds support for new route53 feature: accelerated recovery. (dbe0a58f)
  • client-ec2: This release adds support to view Network firewall proxy appliances attached to an existing NAT Gateway via DescribeNatGateways API NatGatewayAttachedAppliance structure. (7d70b063)
Bug Fixes
  • core/protocols: performance improvements for shape serde traversal (#7523) (b20a25ea)
Tests

For list of updated packages, view updated-packages.md in assets-3.940.0.zip

v3.939.0

3.939.0 (2025-11-24)

Chores
  • scripts: reduce api validation to packages/lib only (#7519) (eb74d6a0)
New Features
  • client-cloudwatch-logs: New CloudWatch Logs feature - LogGroup Deletion Protection, a capability that allows customers to safeguard their critical CloudWatch log groups from accidental or unintended deletion. (02360329)
  • client-cloudfront: Add TrustStore, ConnectionFunction APIs to CloudFront SDK (168505ee)
Bug Fixes
  • clients: export enum objects for string shapes (#7521) (62f648df)
  • cloudfront-signer: skip extended encoding for query parameters in the base url (#7515) (954d411e)
Tests

For list of updated packages, view updated-packages.md in assets-3.939.0.zip

... (truncated)

Changelog

Sourced from @​aws-sdk/client-eventbridge's changelog.

3.940.0 (2025-11-25)

Note: Version bump only for package @​aws-sdk/client-eventbridge

3.939.0 (2025-11-24)

Note: Version bump only for package @​aws-sdk/client-eventbridge

Commits

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • `@dependabot ignore minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • `@dependabot ignore ` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • `@dependabot unignore ` will remove all of the ignore conditions of the specified dependency
  • `@dependabot unignore ` will remove the ignore condition of the specified dependency and ignore conditions
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- .../functions/ami-housekeeper/package.json | 6 +- lambdas/functions/control-plane/package.json | 6 +- .../functions/gh-agent-syncer/package.json | 6 +- .../termination-watcher/package.json | 4 +- lambdas/functions/webhook/package.json | 4 +- lambdas/libs/aws-ssm-util/package.json | 4 +- lambdas/yarn.lock | 354 +++++++++--------- 7 files changed, 187 insertions(+), 197 deletions(-) diff --git a/lambdas/functions/ami-housekeeper/package.json b/lambdas/functions/ami-housekeeper/package.json index 8107e367c8..39559d2e44 100644 --- a/lambdas/functions/ami-housekeeper/package.json +++ b/lambdas/functions/ami-housekeeper/package.json @@ -17,7 +17,7 @@ "all": "yarn build && yarn format && yarn lint && yarn test" }, "devDependencies": { - "@aws-sdk/types": "^3.930.0", + "@aws-sdk/types": "^3.936.0", "@types/aws-lambda": "^8.10.155", "@vercel/ncc": "^0.38.4", "aws-sdk-client-mock": "^4.1.0", @@ -26,8 +26,8 @@ "dependencies": { "@aws-github-runner/aws-powertools-util": "*", "@aws-github-runner/aws-ssm-util": "*", - "@aws-sdk/client-ec2": "^3.934.0", - "@aws-sdk/client-ssm": "^3.934.0", + "@aws-sdk/client-ec2": "^3.940.0", + "@aws-sdk/client-ssm": "^3.940.0", "cron-parser": "^5.4.0" }, "nx": { diff --git a/lambdas/functions/control-plane/package.json b/lambdas/functions/control-plane/package.json index adea74c329..43350b70ca 100644 --- a/lambdas/functions/control-plane/package.json +++ b/lambdas/functions/control-plane/package.json @@ -17,7 +17,7 @@ "all": "yarn build && yarn format && yarn lint && yarn test" }, "devDependencies": { - "@aws-sdk/types": "^3.930.0", + "@aws-sdk/types": "^3.936.0", "@octokit/types": "^16.0.0", "@types/aws-lambda": "^8.10.155", "@types/node": "^22.19.0", @@ -33,8 +33,8 @@ "@aws-github-runner/aws-powertools-util": "*", "@aws-github-runner/aws-ssm-util": "*", "@aws-lambda-powertools/parameters": "^2.28.1", - "@aws-sdk/client-ec2": 
"^3.934.0", - "@aws-sdk/client-sqs": "^3.934.0", + "@aws-sdk/client-ec2": "^3.940.0", + "@aws-sdk/client-sqs": "^3.940.0", "@middy/core": "^6.4.5", "@octokit/auth-app": "8.1.2", "@octokit/core": "7.0.6", diff --git a/lambdas/functions/gh-agent-syncer/package.json b/lambdas/functions/gh-agent-syncer/package.json index 22253ceeb2..d00e0ff171 100644 --- a/lambdas/functions/gh-agent-syncer/package.json +++ b/lambdas/functions/gh-agent-syncer/package.json @@ -17,7 +17,7 @@ "all": "yarn build && yarn format && yarn lint && yarn test" }, "devDependencies": { - "@aws-sdk/types": "^3.930.0", + "@aws-sdk/types": "^3.936.0", "@types/aws-lambda": "^8.10.155", "@types/node": "^22.19.0", "@types/request": "^2.48.13", @@ -28,8 +28,8 @@ }, "dependencies": { "@aws-github-runner/aws-powertools-util": "*", - "@aws-sdk/client-s3": "^3.934.0", - "@aws-sdk/lib-storage": "^3.934.0", + "@aws-sdk/client-s3": "^3.940.0", + "@aws-sdk/lib-storage": "^3.940.0", "@middy/core": "^6.4.5", "@octokit/rest": "22.0.1", "axios": "^1.13.2" diff --git a/lambdas/functions/termination-watcher/package.json b/lambdas/functions/termination-watcher/package.json index 3182ada2f5..2b7e2b326f 100644 --- a/lambdas/functions/termination-watcher/package.json +++ b/lambdas/functions/termination-watcher/package.json @@ -15,7 +15,7 @@ "all": "yarn build && yarn format && yarn lint && yarn test" }, "devDependencies": { - "@aws-sdk/types": "^3.930.0", + "@aws-sdk/types": "^3.936.0", "@types/aws-lambda": "^8.10.155", "@types/node": "^22.19.0", "@vercel/ncc": "^0.38.4", @@ -24,7 +24,7 @@ }, "dependencies": { "@aws-github-runner/aws-powertools-util": "*", - "@aws-sdk/client-ec2": "^3.934.0", + "@aws-sdk/client-ec2": "^3.940.0", "@middy/core": "^6.4.5" }, "nx": { diff --git a/lambdas/functions/webhook/package.json b/lambdas/functions/webhook/package.json index d074b29812..44d68ade08 100644 --- a/lambdas/functions/webhook/package.json +++ b/lambdas/functions/webhook/package.json @@ -17,7 +17,7 @@ "all": "yarn build && yarn 
format && yarn lint && yarn test" }, "devDependencies": { - "@aws-sdk/client-eventbridge": "^3.934.0", + "@aws-sdk/client-eventbridge": "^3.940.0", "@octokit/webhooks-types": "^7.6.1", "@types/aws-lambda": "^8.10.155", "@types/express": "^5.0.3", @@ -30,7 +30,7 @@ "dependencies": { "@aws-github-runner/aws-powertools-util": "*", "@aws-github-runner/aws-ssm-util": "*", - "@aws-sdk/client-sqs": "^3.934.0", + "@aws-sdk/client-sqs": "^3.940.0", "@middy/core": "^6.4.5", "@octokit/rest": "22.0.1", "@octokit/types": "^16.0.0", diff --git a/lambdas/libs/aws-ssm-util/package.json b/lambdas/libs/aws-ssm-util/package.json index a32c6f19cc..98ad2c1d71 100644 --- a/lambdas/libs/aws-ssm-util/package.json +++ b/lambdas/libs/aws-ssm-util/package.json @@ -15,7 +15,7 @@ "all": "yarn build && yarn format && yarn lint && yarn test" }, "devDependencies": { - "@aws-sdk/types": "^3.930.0", + "@aws-sdk/types": "^3.936.0", "@types/aws-lambda": "^8.10.155", "@types/node": "^22.19.0", "aws-sdk-client-mock": "^4.1.0", @@ -23,7 +23,7 @@ }, "dependencies": { "@aws-github-runner/aws-powertools-util": "*", - "@aws-sdk/client-ssm": "^3.934.0" + "@aws-sdk/client-ssm": "^3.940.0" }, "nx": { "includedScripts": [ diff --git a/lambdas/yarn.lock b/lambdas/yarn.lock index aaed7917f8..cd32ab95d4 100644 --- a/lambdas/yarn.lock +++ b/lambdas/yarn.lock @@ -103,9 +103,9 @@ __metadata: dependencies: "@aws-github-runner/aws-powertools-util": "npm:*" "@aws-github-runner/aws-ssm-util": "npm:*" - "@aws-sdk/client-ec2": "npm:^3.934.0" - "@aws-sdk/client-ssm": "npm:^3.934.0" - "@aws-sdk/types": "npm:^3.930.0" + "@aws-sdk/client-ec2": "npm:^3.940.0" + "@aws-sdk/client-ssm": "npm:^3.940.0" + "@aws-sdk/types": "npm:^3.936.0" "@types/aws-lambda": "npm:^8.10.155" "@vercel/ncc": "npm:^0.38.4" aws-sdk-client-mock: "npm:^4.1.0" @@ -133,8 +133,8 @@ __metadata: resolution: "@aws-github-runner/aws-ssm-util@workspace:libs/aws-ssm-util" dependencies: "@aws-github-runner/aws-powertools-util": "npm:*" - "@aws-sdk/client-ssm": 
"npm:^3.934.0" - "@aws-sdk/types": "npm:^3.930.0" + "@aws-sdk/client-ssm": "npm:^3.940.0" + "@aws-sdk/types": "npm:^3.936.0" "@types/aws-lambda": "npm:^8.10.155" "@types/node": "npm:^22.19.0" aws-sdk-client-mock: "npm:^4.1.0" @@ -149,9 +149,9 @@ __metadata: "@aws-github-runner/aws-powertools-util": "npm:*" "@aws-github-runner/aws-ssm-util": "npm:*" "@aws-lambda-powertools/parameters": "npm:^2.28.1" - "@aws-sdk/client-ec2": "npm:^3.934.0" - "@aws-sdk/client-sqs": "npm:^3.934.0" - "@aws-sdk/types": "npm:^3.930.0" + "@aws-sdk/client-ec2": "npm:^3.940.0" + "@aws-sdk/client-sqs": "npm:^3.940.0" + "@aws-sdk/types": "npm:^3.936.0" "@middy/core": "npm:^6.4.5" "@octokit/auth-app": "npm:8.1.2" "@octokit/core": "npm:7.0.6" @@ -176,9 +176,9 @@ __metadata: resolution: "@aws-github-runner/gh-agent-syncer@workspace:functions/gh-agent-syncer" dependencies: "@aws-github-runner/aws-powertools-util": "npm:*" - "@aws-sdk/client-s3": "npm:^3.934.0" - "@aws-sdk/lib-storage": "npm:^3.934.0" - "@aws-sdk/types": "npm:^3.930.0" + "@aws-sdk/client-s3": "npm:^3.940.0" + "@aws-sdk/lib-storage": "npm:^3.940.0" + "@aws-sdk/types": "npm:^3.936.0" "@middy/core": "npm:^6.4.5" "@octokit/rest": "npm:22.0.1" "@types/aws-lambda": "npm:^8.10.155" @@ -197,8 +197,8 @@ __metadata: resolution: "@aws-github-runner/termination-watcher@workspace:functions/termination-watcher" dependencies: "@aws-github-runner/aws-powertools-util": "npm:*" - "@aws-sdk/client-ec2": "npm:^3.934.0" - "@aws-sdk/types": "npm:^3.930.0" + "@aws-sdk/client-ec2": "npm:^3.940.0" + "@aws-sdk/types": "npm:^3.936.0" "@middy/core": "npm:^6.4.5" "@types/aws-lambda": "npm:^8.10.155" "@types/node": "npm:^22.19.0" @@ -214,8 +214,8 @@ __metadata: dependencies: "@aws-github-runner/aws-powertools-util": "npm:*" "@aws-github-runner/aws-ssm-util": "npm:*" - "@aws-sdk/client-eventbridge": "npm:^3.934.0" - "@aws-sdk/client-sqs": "npm:^3.934.0" + "@aws-sdk/client-eventbridge": "npm:^3.940.0" + "@aws-sdk/client-sqs": "npm:^3.940.0" "@middy/core": 
"npm:^6.4.5" "@octokit/rest": "npm:22.0.1" "@octokit/types": "npm:^16.0.0" @@ -315,24 +315,24 @@ __metadata: languageName: node linkType: hard -"@aws-sdk/client-ec2@npm:^3.934.0": - version: 3.938.0 - resolution: "@aws-sdk/client-ec2@npm:3.938.0" +"@aws-sdk/client-ec2@npm:^3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/client-ec2@npm:3.940.0" dependencies: "@aws-crypto/sha256-browser": "npm:5.2.0" "@aws-crypto/sha256-js": "npm:5.2.0" - "@aws-sdk/core": "npm:3.936.0" - "@aws-sdk/credential-provider-node": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" + "@aws-sdk/credential-provider-node": "npm:3.940.0" "@aws-sdk/middleware-host-header": "npm:3.936.0" "@aws-sdk/middleware-logger": "npm:3.936.0" "@aws-sdk/middleware-recursion-detection": "npm:3.936.0" "@aws-sdk/middleware-sdk-ec2": "npm:3.936.0" - "@aws-sdk/middleware-user-agent": "npm:3.936.0" + "@aws-sdk/middleware-user-agent": "npm:3.940.0" "@aws-sdk/region-config-resolver": "npm:3.936.0" "@aws-sdk/types": "npm:3.936.0" "@aws-sdk/util-endpoints": "npm:3.936.0" "@aws-sdk/util-user-agent-browser": "npm:3.936.0" - "@aws-sdk/util-user-agent-node": "npm:3.936.0" + "@aws-sdk/util-user-agent-node": "npm:3.940.0" "@smithy/config-resolver": "npm:^4.4.3" "@smithy/core": "npm:^3.18.5" "@smithy/fetch-http-handler": "npm:^5.3.6" @@ -360,28 +360,28 @@ __metadata: "@smithy/util-utf8": "npm:^4.2.0" "@smithy/util-waiter": "npm:^4.2.5" tslib: "npm:^2.6.2" - checksum: 10c0/6a33ec4c59a16410e9507bb2003e9c96316f69d94a0a4c9a02d4c438c751f233673f4a2cf24ee722f1497d2fd17c5685aee0556ef964f3fc09d1197b3af855f5 + checksum: 10c0/c4a6104b403fb08c76db2dff4fb3f7d52ff6a96184d2f120f9505639418daeb848c0dfb7c49f49ad7d6c046ce2ca88409bec8f4d0c5f8df7adf76a147f1b1175 languageName: node linkType: hard -"@aws-sdk/client-eventbridge@npm:^3.934.0": - version: 3.936.0 - resolution: "@aws-sdk/client-eventbridge@npm:3.936.0" +"@aws-sdk/client-eventbridge@npm:^3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/client-eventbridge@npm:3.940.0" dependencies: 
"@aws-crypto/sha256-browser": "npm:5.2.0" "@aws-crypto/sha256-js": "npm:5.2.0" - "@aws-sdk/core": "npm:3.936.0" - "@aws-sdk/credential-provider-node": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" + "@aws-sdk/credential-provider-node": "npm:3.940.0" "@aws-sdk/middleware-host-header": "npm:3.936.0" "@aws-sdk/middleware-logger": "npm:3.936.0" "@aws-sdk/middleware-recursion-detection": "npm:3.936.0" - "@aws-sdk/middleware-user-agent": "npm:3.936.0" + "@aws-sdk/middleware-user-agent": "npm:3.940.0" "@aws-sdk/region-config-resolver": "npm:3.936.0" - "@aws-sdk/signature-v4-multi-region": "npm:3.936.0" + "@aws-sdk/signature-v4-multi-region": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@aws-sdk/util-endpoints": "npm:3.936.0" "@aws-sdk/util-user-agent-browser": "npm:3.936.0" - "@aws-sdk/util-user-agent-node": "npm:3.936.0" + "@aws-sdk/util-user-agent-node": "npm:3.940.0" "@smithy/config-resolver": "npm:^4.4.3" "@smithy/core": "npm:^3.18.5" "@smithy/fetch-http-handler": "npm:^5.3.6" @@ -408,35 +408,35 @@ __metadata: "@smithy/util-retry": "npm:^4.2.5" "@smithy/util-utf8": "npm:^4.2.0" tslib: "npm:^2.6.2" - checksum: 10c0/6d77a28b363077de0d5bbf8ad8f0b1ee5991dfe631c704468ebe348b519a6c3d3eb49cb1ca61a39ac8a2327de0f116339ac886d6b3dabd0222f795b473145082 + checksum: 10c0/3eaf7d50424127a463c444742656985911d16f7de693483c9c6d1fc54a1e4454b7229096def242b905f31182a5973b3d0d9ee8dde1309626183c7111e675ffbe languageName: node linkType: hard -"@aws-sdk/client-s3@npm:^3.934.0": - version: 3.937.0 - resolution: "@aws-sdk/client-s3@npm:3.937.0" +"@aws-sdk/client-s3@npm:^3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/client-s3@npm:3.940.0" dependencies: "@aws-crypto/sha1-browser": "npm:5.2.0" "@aws-crypto/sha256-browser": "npm:5.2.0" "@aws-crypto/sha256-js": "npm:5.2.0" - "@aws-sdk/core": "npm:3.936.0" - "@aws-sdk/credential-provider-node": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" + "@aws-sdk/credential-provider-node": "npm:3.940.0" "@aws-sdk/middleware-bucket-endpoint": 
"npm:3.936.0" "@aws-sdk/middleware-expect-continue": "npm:3.936.0" - "@aws-sdk/middleware-flexible-checksums": "npm:3.936.0" + "@aws-sdk/middleware-flexible-checksums": "npm:3.940.0" "@aws-sdk/middleware-host-header": "npm:3.936.0" "@aws-sdk/middleware-location-constraint": "npm:3.936.0" "@aws-sdk/middleware-logger": "npm:3.936.0" "@aws-sdk/middleware-recursion-detection": "npm:3.936.0" - "@aws-sdk/middleware-sdk-s3": "npm:3.936.0" + "@aws-sdk/middleware-sdk-s3": "npm:3.940.0" "@aws-sdk/middleware-ssec": "npm:3.936.0" - "@aws-sdk/middleware-user-agent": "npm:3.936.0" + "@aws-sdk/middleware-user-agent": "npm:3.940.0" "@aws-sdk/region-config-resolver": "npm:3.936.0" - "@aws-sdk/signature-v4-multi-region": "npm:3.936.0" + "@aws-sdk/signature-v4-multi-region": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@aws-sdk/util-endpoints": "npm:3.936.0" "@aws-sdk/util-user-agent-browser": "npm:3.936.0" - "@aws-sdk/util-user-agent-node": "npm:3.936.0" + "@aws-sdk/util-user-agent-node": "npm:3.940.0" "@smithy/config-resolver": "npm:^4.4.3" "@smithy/core": "npm:^3.18.5" "@smithy/eventstream-serde-browser": "npm:^4.2.5" @@ -471,28 +471,28 @@ __metadata: "@smithy/util-utf8": "npm:^4.2.0" "@smithy/util-waiter": "npm:^4.2.5" tslib: "npm:^2.6.2" - checksum: 10c0/f9bdb7d52f0074170478cb6f407522c2de1b6f5d7a6a8a592af40c3d6d7989288078b5e51a58ad5db16ede96d8a1d0ee6e52033af48a528156021f1f416db5b7 + checksum: 10c0/349b6afcf0940e453472137b0fe13a57896946926a7b03333f2435665ba79000a54c70072e011d76aa7c52512d88a0195ef57794eb996d31a24253e906526b03 languageName: node linkType: hard -"@aws-sdk/client-sqs@npm:^3.934.0": - version: 3.936.0 - resolution: "@aws-sdk/client-sqs@npm:3.936.0" +"@aws-sdk/client-sqs@npm:^3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/client-sqs@npm:3.940.0" dependencies: "@aws-crypto/sha256-browser": "npm:5.2.0" "@aws-crypto/sha256-js": "npm:5.2.0" - "@aws-sdk/core": "npm:3.936.0" - "@aws-sdk/credential-provider-node": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" + 
"@aws-sdk/credential-provider-node": "npm:3.940.0" "@aws-sdk/middleware-host-header": "npm:3.936.0" "@aws-sdk/middleware-logger": "npm:3.936.0" "@aws-sdk/middleware-recursion-detection": "npm:3.936.0" "@aws-sdk/middleware-sdk-sqs": "npm:3.936.0" - "@aws-sdk/middleware-user-agent": "npm:3.936.0" + "@aws-sdk/middleware-user-agent": "npm:3.940.0" "@aws-sdk/region-config-resolver": "npm:3.936.0" "@aws-sdk/types": "npm:3.936.0" "@aws-sdk/util-endpoints": "npm:3.936.0" "@aws-sdk/util-user-agent-browser": "npm:3.936.0" - "@aws-sdk/util-user-agent-node": "npm:3.936.0" + "@aws-sdk/util-user-agent-node": "npm:3.940.0" "@smithy/config-resolver": "npm:^4.4.3" "@smithy/core": "npm:^3.18.5" "@smithy/fetch-http-handler": "npm:^5.3.6" @@ -520,27 +520,27 @@ __metadata: "@smithy/util-retry": "npm:^4.2.5" "@smithy/util-utf8": "npm:^4.2.0" tslib: "npm:^2.6.2" - checksum: 10c0/7a626809fa3814d3cbd463deba96633c939bd3953365516387bb2b50777b108f663eeb7022e88d356dd03756a490699428bd29bfffb089760e9c21b8a1f7d4cb + checksum: 10c0/991ca050e885f7a88bcffac579c97bf3071a3599c299024ee8a583aa269756eb5ba9c76cab2e0fb0369f9396fa64e4ff9541f2de781612565053f04697b4258a languageName: node linkType: hard -"@aws-sdk/client-ssm@npm:^3.934.0": - version: 3.936.0 - resolution: "@aws-sdk/client-ssm@npm:3.936.0" +"@aws-sdk/client-ssm@npm:^3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/client-ssm@npm:3.940.0" dependencies: "@aws-crypto/sha256-browser": "npm:5.2.0" "@aws-crypto/sha256-js": "npm:5.2.0" - "@aws-sdk/core": "npm:3.936.0" - "@aws-sdk/credential-provider-node": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" + "@aws-sdk/credential-provider-node": "npm:3.940.0" "@aws-sdk/middleware-host-header": "npm:3.936.0" "@aws-sdk/middleware-logger": "npm:3.936.0" "@aws-sdk/middleware-recursion-detection": "npm:3.936.0" - "@aws-sdk/middleware-user-agent": "npm:3.936.0" + "@aws-sdk/middleware-user-agent": "npm:3.940.0" "@aws-sdk/region-config-resolver": "npm:3.936.0" "@aws-sdk/types": "npm:3.936.0" 
"@aws-sdk/util-endpoints": "npm:3.936.0" "@aws-sdk/util-user-agent-browser": "npm:3.936.0" - "@aws-sdk/util-user-agent-node": "npm:3.936.0" + "@aws-sdk/util-user-agent-node": "npm:3.940.0" "@smithy/config-resolver": "npm:^4.4.3" "@smithy/core": "npm:^3.18.5" "@smithy/fetch-http-handler": "npm:^5.3.6" @@ -568,26 +568,26 @@ __metadata: "@smithy/util-utf8": "npm:^4.2.0" "@smithy/util-waiter": "npm:^4.2.5" tslib: "npm:^2.6.2" - checksum: 10c0/ea5b297fe626e27f674cd803d366a447df776e8cb05956da3748fbb8fb87e3553938b1338dd7982a977c18b8a2fd886a527e7dd4350d4898c26b7fcaf8e93da4 + checksum: 10c0/51bc53731a1cfdc90dc525a80f5eb5eb46fe9622687f42091e25f40a9fdb2469086e7de1bc059cfb4860431be5f59f89c3505527d94bbcad2c49228ad8d34db0 languageName: node linkType: hard -"@aws-sdk/client-sso@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/client-sso@npm:3.936.0" +"@aws-sdk/client-sso@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/client-sso@npm:3.940.0" dependencies: "@aws-crypto/sha256-browser": "npm:5.2.0" "@aws-crypto/sha256-js": "npm:5.2.0" - "@aws-sdk/core": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" "@aws-sdk/middleware-host-header": "npm:3.936.0" "@aws-sdk/middleware-logger": "npm:3.936.0" "@aws-sdk/middleware-recursion-detection": "npm:3.936.0" - "@aws-sdk/middleware-user-agent": "npm:3.936.0" + "@aws-sdk/middleware-user-agent": "npm:3.940.0" "@aws-sdk/region-config-resolver": "npm:3.936.0" "@aws-sdk/types": "npm:3.936.0" "@aws-sdk/util-endpoints": "npm:3.936.0" "@aws-sdk/util-user-agent-browser": "npm:3.936.0" - "@aws-sdk/util-user-agent-node": "npm:3.936.0" + "@aws-sdk/util-user-agent-node": "npm:3.940.0" "@smithy/config-resolver": "npm:^4.4.3" "@smithy/core": "npm:^3.18.5" "@smithy/fetch-http-handler": "npm:^5.3.6" @@ -614,13 +614,13 @@ __metadata: "@smithy/util-retry": "npm:^4.2.5" "@smithy/util-utf8": "npm:^4.2.0" tslib: "npm:^2.6.2" - checksum: 
10c0/5b86e09a7d64b8ff3559fa0ff253893549280c349adaefd92868c38e1fd8528b538947875d7a15843fb2d864978e288d87e8f5defcde4ec9c31871087aa187e8 + checksum: 10c0/c0f6c8bc4ad55f2b573fbc40f472b974679c11c6e2bc224b1b9a4f4a9134895b37127eaaa588d56cb2e32522de4921dd813ae7229f5db4fedeeea1d06500e74c languageName: node linkType: hard -"@aws-sdk/core@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/core@npm:3.936.0" +"@aws-sdk/core@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/core@npm:3.940.0" dependencies: "@aws-sdk/types": "npm:3.936.0" "@aws-sdk/xml-builder": "npm:3.930.0" @@ -635,28 +635,28 @@ __metadata: "@smithy/util-middleware": "npm:^4.2.5" "@smithy/util-utf8": "npm:^4.2.0" tslib: "npm:^2.6.2" - checksum: 10c0/fd1e194e9e9b4bac9d3a61d7044bfb85fa61e210a2f64cc92a6310dad5f6031920664525c138a60b09ada0d59655d1d4ffc415e633a9900a89cb04f7d7f240b4 + checksum: 10c0/090b960007d3fe7a6f54d6e9a739f7de51c25d6e8f7519821ed94d8760508a9a1f034bc4ffc8b87a797eba485baf7024d45fc86556ce224b35da2530fe85af20 languageName: node linkType: hard -"@aws-sdk/credential-provider-env@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/credential-provider-env@npm:3.936.0" +"@aws-sdk/credential-provider-env@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/credential-provider-env@npm:3.940.0" dependencies: - "@aws-sdk/core": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@smithy/property-provider": "npm:^4.2.5" "@smithy/types": "npm:^4.9.0" tslib: "npm:^2.6.2" - checksum: 10c0/2a12b64625b75e8e0a533810f52ff19d67f5c17372bbbcf087664e8723a9938afd12af016cde7417441e5aa7759f7d96a39f8daeddeda1c2b89112bb77380ef8 + checksum: 10c0/538ede72ad6357ccc613957b11bcd254789cd502e14938c26870c326ff1518df9bb5b23fd4d1139bac77b4394ea6a1a621ad025609d62f86d7b363636ca73e5e languageName: node linkType: hard -"@aws-sdk/credential-provider-http@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/credential-provider-http@npm:3.936.0" 
+"@aws-sdk/credential-provider-http@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/credential-provider-http@npm:3.940.0" dependencies: - "@aws-sdk/core": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@smithy/fetch-http-handler": "npm:^5.3.6" "@smithy/node-http-handler": "npm:^4.4.5" @@ -666,116 +666,116 @@ __metadata: "@smithy/types": "npm:^4.9.0" "@smithy/util-stream": "npm:^4.5.6" tslib: "npm:^2.6.2" - checksum: 10c0/692afe243b1077f75b01cb5eed74964931f88d4596c9e9f75c6b868f5564e5081f1a2c2b702d1f447a83d3c4423de3e56d550aa6c84f129d4ef4d2e7b41d69fb + checksum: 10c0/a3092b60041cb5be3d07891c1be959b14420a5d630372030877970c7d111c0ca8881daeb6740c16767c3a587a9a65d5e6aa8081a73a58a6cccefc98f9307a9e3 languageName: node linkType: hard -"@aws-sdk/credential-provider-ini@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/credential-provider-ini@npm:3.936.0" - dependencies: - "@aws-sdk/core": "npm:3.936.0" - "@aws-sdk/credential-provider-env": "npm:3.936.0" - "@aws-sdk/credential-provider-http": "npm:3.936.0" - "@aws-sdk/credential-provider-login": "npm:3.936.0" - "@aws-sdk/credential-provider-process": "npm:3.936.0" - "@aws-sdk/credential-provider-sso": "npm:3.936.0" - "@aws-sdk/credential-provider-web-identity": "npm:3.936.0" - "@aws-sdk/nested-clients": "npm:3.936.0" +"@aws-sdk/credential-provider-ini@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/credential-provider-ini@npm:3.940.0" + dependencies: + "@aws-sdk/core": "npm:3.940.0" + "@aws-sdk/credential-provider-env": "npm:3.940.0" + "@aws-sdk/credential-provider-http": "npm:3.940.0" + "@aws-sdk/credential-provider-login": "npm:3.940.0" + "@aws-sdk/credential-provider-process": "npm:3.940.0" + "@aws-sdk/credential-provider-sso": "npm:3.940.0" + "@aws-sdk/credential-provider-web-identity": "npm:3.940.0" + "@aws-sdk/nested-clients": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@smithy/credential-provider-imds": "npm:^4.2.5" "@smithy/property-provider": "npm:^4.2.5" 
"@smithy/shared-ini-file-loader": "npm:^4.4.0" "@smithy/types": "npm:^4.9.0" tslib: "npm:^2.6.2" - checksum: 10c0/ff35034cbacca404dab5afc2c560bfe6ef3966846d4f62451eabec23b98e89def2522974ca193940ef2f0d6c6f3828f86a43a521665d8eb56cc16119ea273715 + checksum: 10c0/28b78575da447ea9a8f21c926fe0b1ef037e886a1676d60e168702abbeb070241a869b758bab1522e9e97ad7940376e30e7866c72201fab46a3dd67c4073af94 languageName: node linkType: hard -"@aws-sdk/credential-provider-login@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/credential-provider-login@npm:3.936.0" +"@aws-sdk/credential-provider-login@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/credential-provider-login@npm:3.940.0" dependencies: - "@aws-sdk/core": "npm:3.936.0" - "@aws-sdk/nested-clients": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" + "@aws-sdk/nested-clients": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@smithy/property-provider": "npm:^4.2.5" "@smithy/protocol-http": "npm:^5.3.5" "@smithy/shared-ini-file-loader": "npm:^4.4.0" "@smithy/types": "npm:^4.9.0" tslib: "npm:^2.6.2" - checksum: 10c0/7b38c8a3fe4df64361e667233a35e38bf9c821b9412c7a6ff3a594a8ce1fc55220167a29a1c0e698db319712f5103eb173549903998fd5e1cce994202dbf078c + checksum: 10c0/a408b413bf13c73c25bec80323e0cb59a86cf44b724156db6fd34cd8ae72b55af81a0c7c6325d1f99b85bd5f04aa64edadd06910c4f7ab0e5f8a714c54aad26e languageName: node linkType: hard -"@aws-sdk/credential-provider-node@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/credential-provider-node@npm:3.936.0" - dependencies: - "@aws-sdk/credential-provider-env": "npm:3.936.0" - "@aws-sdk/credential-provider-http": "npm:3.936.0" - "@aws-sdk/credential-provider-ini": "npm:3.936.0" - "@aws-sdk/credential-provider-process": "npm:3.936.0" - "@aws-sdk/credential-provider-sso": "npm:3.936.0" - "@aws-sdk/credential-provider-web-identity": "npm:3.936.0" +"@aws-sdk/credential-provider-node@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/credential-provider-node@npm:3.940.0" + 
dependencies: + "@aws-sdk/credential-provider-env": "npm:3.940.0" + "@aws-sdk/credential-provider-http": "npm:3.940.0" + "@aws-sdk/credential-provider-ini": "npm:3.940.0" + "@aws-sdk/credential-provider-process": "npm:3.940.0" + "@aws-sdk/credential-provider-sso": "npm:3.940.0" + "@aws-sdk/credential-provider-web-identity": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@smithy/credential-provider-imds": "npm:^4.2.5" "@smithy/property-provider": "npm:^4.2.5" "@smithy/shared-ini-file-loader": "npm:^4.4.0" "@smithy/types": "npm:^4.9.0" tslib: "npm:^2.6.2" - checksum: 10c0/06d191fd5679a9fe0f7c6cd9695c15d250ea6a01c1401f299a9efb221f11c3097751aa02ec2a9ad08588d3aac50aa5d14a6e6c1ccfde0741e6a19a7dc2849624 + checksum: 10c0/ecaa866d4cf9bce5cdf71e67d76e3e1b35e0f57b266f2b3447c08ccd5555c5b19d83a015cc153d2b6165ff6b1fce0c55d08eb306dcde909583741200ae287469 languageName: node linkType: hard -"@aws-sdk/credential-provider-process@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/credential-provider-process@npm:3.936.0" +"@aws-sdk/credential-provider-process@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/credential-provider-process@npm:3.940.0" dependencies: - "@aws-sdk/core": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@smithy/property-provider": "npm:^4.2.5" "@smithy/shared-ini-file-loader": "npm:^4.4.0" "@smithy/types": "npm:^4.9.0" tslib: "npm:^2.6.2" - checksum: 10c0/71d1c680be0b2e22d2cbd3e70cc1957e4d3f8137791ddcf29e3b46219d9c3dfb809c355c7dec50fc7f3763c4f3182504cba0e939b837fda22d82650679efca3e + checksum: 10c0/42aba573606be61f5d82120fa5379ff6eaf819be0972b20b08422a25b7f41c2113eaa762476590a08912ca248bd7eddf3504bd6620b18a98574450315b4962d0 languageName: node linkType: hard -"@aws-sdk/credential-provider-sso@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/credential-provider-sso@npm:3.936.0" +"@aws-sdk/credential-provider-sso@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/credential-provider-sso@npm:3.940.0" 
dependencies: - "@aws-sdk/client-sso": "npm:3.936.0" - "@aws-sdk/core": "npm:3.936.0" - "@aws-sdk/token-providers": "npm:3.936.0" + "@aws-sdk/client-sso": "npm:3.940.0" + "@aws-sdk/core": "npm:3.940.0" + "@aws-sdk/token-providers": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@smithy/property-provider": "npm:^4.2.5" "@smithy/shared-ini-file-loader": "npm:^4.4.0" "@smithy/types": "npm:^4.9.0" tslib: "npm:^2.6.2" - checksum: 10c0/0ef158e59790af455b0d58beda7cad753e83d0f21a0bbfd27d58f1214605ee74018c2e90b6cfe1e19c76156497378c5fecdc0aef5657e0d60059dec9e58dfd95 + checksum: 10c0/fd6397d6df02ce23b1151a4453d35fd123b15a71322aab3e50885268ecac21cd441bc02063b0ad834d57ce57e70c3cf07f1e6ad75814e7baf74741a5919d3e9c languageName: node linkType: hard -"@aws-sdk/credential-provider-web-identity@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/credential-provider-web-identity@npm:3.936.0" +"@aws-sdk/credential-provider-web-identity@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/credential-provider-web-identity@npm:3.940.0" dependencies: - "@aws-sdk/core": "npm:3.936.0" - "@aws-sdk/nested-clients": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" + "@aws-sdk/nested-clients": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@smithy/property-provider": "npm:^4.2.5" "@smithy/shared-ini-file-loader": "npm:^4.4.0" "@smithy/types": "npm:^4.9.0" tslib: "npm:^2.6.2" - checksum: 10c0/92691aedc9fa566a6ce5c1fe111ed7085173c04fcd6e9a2fca90c752475d4397f56c2faab37ddfb9af6eb79ee4c716adc27f298da32296a00b98d145940fd3b6 + checksum: 10c0/9967bbde6603372b89a600cfed211caa769709e34b27f90f627ee5b60c5994b6db0f17b4bbd1ea4ac133092691dc94a0776ba82a187075e875005c864eb7e851 languageName: node linkType: hard -"@aws-sdk/lib-storage@npm:^3.934.0": - version: 3.937.0 - resolution: "@aws-sdk/lib-storage@npm:3.937.0" +"@aws-sdk/lib-storage@npm:^3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/lib-storage@npm:3.940.0" dependencies: "@smithy/abort-controller": "npm:^4.2.5" "@smithy/middleware-endpoint": 
"npm:^4.3.12" @@ -785,8 +785,8 @@ __metadata: stream-browserify: "npm:3.0.0" tslib: "npm:^2.6.2" peerDependencies: - "@aws-sdk/client-s3": ^3.937.0 - checksum: 10c0/0afa951877300582eabbc3dcb30e4bcfc76bccf807712a5f084c24faeceac7feb6d9c9e3581d71206f3b83045c740996901c3006121b95cc6c8304ba92fb4ccc + "@aws-sdk/client-s3": ^3.940.0 + checksum: 10c0/e3c3118fa352f588f816a603ab91a2be10fd1d9c2008837d9d1ff1026b6116360f002da92e574f7c3c09c6a9c576f41ac42c8064b847c43510efa46188b6e80c languageName: node linkType: hard @@ -817,14 +817,14 @@ __metadata: languageName: node linkType: hard -"@aws-sdk/middleware-flexible-checksums@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/middleware-flexible-checksums@npm:3.936.0" +"@aws-sdk/middleware-flexible-checksums@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/middleware-flexible-checksums@npm:3.940.0" dependencies: "@aws-crypto/crc32": "npm:5.2.0" "@aws-crypto/crc32c": "npm:5.2.0" "@aws-crypto/util": "npm:5.2.0" - "@aws-sdk/core": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@smithy/is-array-buffer": "npm:^4.2.0" "@smithy/node-config-provider": "npm:^4.3.5" @@ -834,7 +834,7 @@ __metadata: "@smithy/util-stream": "npm:^4.5.6" "@smithy/util-utf8": "npm:^4.2.0" tslib: "npm:^2.6.2" - checksum: 10c0/7f035289080ebe075bc88efabcb088a028b1df5010b779d5e8e68b0b7a17506a02285a2df1b48a2483c5e3c6744d9f7a5964424fa6c7f1edbccf5b3fbc4f2ee5 + checksum: 10c0/d5d0b549baf03c1f103fe6a266f980d9f93116ac12d98285819192a29e1a0e58eb2ca12a9e5f6f3290294ec8fac85d0c0bb992105b895d93c4328dbc7aa665fb languageName: node linkType: hard @@ -901,11 +901,11 @@ __metadata: languageName: node linkType: hard -"@aws-sdk/middleware-sdk-s3@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/middleware-sdk-s3@npm:3.936.0" +"@aws-sdk/middleware-sdk-s3@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/middleware-sdk-s3@npm:3.940.0" dependencies: - "@aws-sdk/core": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" 
"@aws-sdk/types": "npm:3.936.0" "@aws-sdk/util-arn-parser": "npm:3.893.0" "@smithy/core": "npm:^3.18.5" @@ -919,7 +919,7 @@ __metadata: "@smithy/util-stream": "npm:^4.5.6" "@smithy/util-utf8": "npm:^4.2.0" tslib: "npm:^2.6.2" - checksum: 10c0/057c78cc565a596ab5308089281fbce3ecfd6788bc973f3da40d6e5cee4227b682788c2436947a7bc5aae22043e5849ddff9cff0bc240f3e791b98260f432d46 + checksum: 10c0/ecd85d7c391f53d5dc26658289428c2444781b1e612d98c8d86dca4d8ff07ac473886cfd07f396592ab14abc59651d7954e2fa532e2efbb84e29f5ecbc69f00f languageName: node linkType: hard @@ -948,37 +948,37 @@ __metadata: languageName: node linkType: hard -"@aws-sdk/middleware-user-agent@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/middleware-user-agent@npm:3.936.0" +"@aws-sdk/middleware-user-agent@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/middleware-user-agent@npm:3.940.0" dependencies: - "@aws-sdk/core": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@aws-sdk/util-endpoints": "npm:3.936.0" "@smithy/core": "npm:^3.18.5" "@smithy/protocol-http": "npm:^5.3.5" "@smithy/types": "npm:^4.9.0" tslib: "npm:^2.6.2" - checksum: 10c0/ef0fef610b48339bf6d7aaf862f435797a4d72e45291c2a056a8ac656178d64b5cbf89141916b3bf393b0e887e4ed812cfeea3b357afb0ab6bd8bf59ffdc02c6 + checksum: 10c0/1756e35c96c5289857c65c8620d9e3afe5b14259fb0bb1290f8f08d879dd62a44569b28c505e2a56e641300df4e15fd7f29e788d1301ee2a0926caab6d2d0b9f languageName: node linkType: hard -"@aws-sdk/nested-clients@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/nested-clients@npm:3.936.0" +"@aws-sdk/nested-clients@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/nested-clients@npm:3.940.0" dependencies: "@aws-crypto/sha256-browser": "npm:5.2.0" "@aws-crypto/sha256-js": "npm:5.2.0" - "@aws-sdk/core": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" "@aws-sdk/middleware-host-header": "npm:3.936.0" "@aws-sdk/middleware-logger": "npm:3.936.0" "@aws-sdk/middleware-recursion-detection": "npm:3.936.0" - 
"@aws-sdk/middleware-user-agent": "npm:3.936.0" + "@aws-sdk/middleware-user-agent": "npm:3.940.0" "@aws-sdk/region-config-resolver": "npm:3.936.0" "@aws-sdk/types": "npm:3.936.0" "@aws-sdk/util-endpoints": "npm:3.936.0" "@aws-sdk/util-user-agent-browser": "npm:3.936.0" - "@aws-sdk/util-user-agent-node": "npm:3.936.0" + "@aws-sdk/util-user-agent-node": "npm:3.940.0" "@smithy/config-resolver": "npm:^4.4.3" "@smithy/core": "npm:^3.18.5" "@smithy/fetch-http-handler": "npm:^5.3.6" @@ -1005,7 +1005,7 @@ __metadata: "@smithy/util-retry": "npm:^4.2.5" "@smithy/util-utf8": "npm:^4.2.0" tslib: "npm:^2.6.2" - checksum: 10c0/0b7bc80fb6b14872e0d559c39b0c7ed24b9b9709c30d8254370044cb2dfa835bbb96fb46499b152c52cf36f1e82932317291435a7af18b4e50d7d582be37d7ad + checksum: 10c0/6695cd044d5b43f26a6d2ae533dcd56f6a8780dc0a19e152af1dfb1017fa1a1813c1e742ca7ba608561f881f4cd4e18f957293698d880d857b460dd715b8ac50 languageName: node linkType: hard @@ -1022,36 +1022,36 @@ __metadata: languageName: node linkType: hard -"@aws-sdk/signature-v4-multi-region@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/signature-v4-multi-region@npm:3.936.0" +"@aws-sdk/signature-v4-multi-region@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/signature-v4-multi-region@npm:3.940.0" dependencies: - "@aws-sdk/middleware-sdk-s3": "npm:3.936.0" + "@aws-sdk/middleware-sdk-s3": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@smithy/protocol-http": "npm:^5.3.5" "@smithy/signature-v4": "npm:^5.3.5" "@smithy/types": "npm:^4.9.0" tslib: "npm:^2.6.2" - checksum: 10c0/177d19a0082cc9c56b1f078734462056b50e262351e527c353bdaafb5303d87ae809f1351803fec28b5575acc1209271fa16a30a9b53f2e1c8da4b27e0aa50e4 + checksum: 10c0/877127f4f3a64e62e110b80b7f1c0f6e99a670e2263d4efa2c46c5ae249ee9cf5081a1e38e1f8c3df2fedffb772f6a33f348f95b2301246c9b37b46c32aa055e languageName: node linkType: hard -"@aws-sdk/token-providers@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/token-providers@npm:3.936.0" 
+"@aws-sdk/token-providers@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/token-providers@npm:3.940.0" dependencies: - "@aws-sdk/core": "npm:3.936.0" - "@aws-sdk/nested-clients": "npm:3.936.0" + "@aws-sdk/core": "npm:3.940.0" + "@aws-sdk/nested-clients": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@smithy/property-provider": "npm:^4.2.5" "@smithy/shared-ini-file-loader": "npm:^4.4.0" "@smithy/types": "npm:^4.9.0" tslib: "npm:^2.6.2" - checksum: 10c0/dbc40d1d03a670c8c14401f94e76d6824cc6a56e1c2241b19cda45afe81a6c6a42ca20420afe5d5092c78c1aa197ccafb4fa755a911a09eed6fac63a8afdd388 + checksum: 10c0/6dc90385d521521124eb65a1acdc28c792f5c353c15cc61ba08f7e2dae45f3ad81e02603eb0c244f453409becf73ec7c4e92a32048a464f07e85055a84faf0d7 languageName: node linkType: hard -"@aws-sdk/types@npm:3.936.0": +"@aws-sdk/types@npm:3.936.0, @aws-sdk/types@npm:^3.936.0": version: 3.936.0 resolution: "@aws-sdk/types@npm:3.936.0" dependencies: @@ -1071,16 +1071,6 @@ __metadata: languageName: node linkType: hard -"@aws-sdk/types@npm:^3.930.0": - version: 3.930.0 - resolution: "@aws-sdk/types@npm:3.930.0" - dependencies: - "@smithy/types": "npm:^4.9.0" - tslib: "npm:^2.6.2" - checksum: 10c0/8487d53c953cb8dc7437d9160c98438314c5f9f0d17f02ced2e8661f19aaaf71e860b700a8ec83bdda4bd831f71b3776e871953b5eb10db59d4d5067557f873b - languageName: node - linkType: hard - "@aws-sdk/util-arn-parser@npm:3.893.0": version: 3.893.0 resolution: "@aws-sdk/util-arn-parser@npm:3.893.0" @@ -1136,11 +1126,11 @@ __metadata: languageName: node linkType: hard -"@aws-sdk/util-user-agent-node@npm:3.936.0": - version: 3.936.0 - resolution: "@aws-sdk/util-user-agent-node@npm:3.936.0" +"@aws-sdk/util-user-agent-node@npm:3.940.0": + version: 3.940.0 + resolution: "@aws-sdk/util-user-agent-node@npm:3.940.0" dependencies: - "@aws-sdk/middleware-user-agent": "npm:3.936.0" + "@aws-sdk/middleware-user-agent": "npm:3.940.0" "@aws-sdk/types": "npm:3.936.0" "@smithy/node-config-provider": "npm:^4.3.5" "@smithy/types": 
"npm:^4.9.0" @@ -1150,7 +1140,7 @@ __metadata: peerDependenciesMeta: aws-crt: optional: true - checksum: 10c0/727ce52a013513b68ad85c6ae08614a983f55fd96825fc531053d726e69e225ccfb71e35cf341a9e2d20086c6d915c342da809d142151e7e32248f679a1653aa + checksum: 10c0/0287c87d3e4bb8f679c54123314ed164013b357ad7a8eefd1685ecef14c6fed062e31e9a689c6e761acc49a1f3eb1903a95d450f823c76fb89f49a4729a83a93 languageName: node linkType: hard From 0f2457e7d3eab13db18f5e9f2ec8126552a42c74 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Wed, 3 Dec 2025 11:43:46 +0100 Subject: [PATCH 6/7] fix(lambda): bump the aws-powertools group in /lambdas with 4 updates (#4925) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps the aws-powertools group in /lambdas with 4 updates: [@aws-lambda-powertools/parameters](https://github.com/aws-powertools/powertools-lambda-typescript), [@aws-lambda-powertools/logger](https://github.com/aws-powertools/powertools-lambda-typescript), [@aws-lambda-powertools/metrics](https://github.com/aws-powertools/powertools-lambda-typescript) and [@aws-lambda-powertools/tracer](https://github.com/aws-powertools/powertools-lambda-typescript). Updates `@aws-lambda-powertools/parameters` from 2.28.1 to 2.29.0
Release notes

Sourced from `@aws-lambda-powertools/parameters`'s releases.

v2.29.0

Summary

🎉 Powertools for AWS Lambda (TypeScript) - Event Handler Utility is now Generally Available (GA)

Docs

We're excited to announce that the Event Handler utility is now production-ready! 🚀 Event Handler provides lightweight routing to reduce boilerplate for API Gateway REST/HTTP API, ALB and Lambda Function URLs.

⭐ Congratulations to @yoshi-taka, @iamgerg, @fidelisojeah, and @benthorner for their first PR merged in the project 🎉

Import path update

With Event Handler moving to GA, the import path has changed from the experimental namespace to a stable one.

// Before
import { Router } from '@aws-lambda-powertools/event-handler/experimental-rest';

// Now
import { Router } from '@aws-lambda-powertools/event-handler/http';

Support for HTTP APIs, ALB, and Function URL

Event Handler now supports HTTP APIs (API Gateway v2), Application Load Balancers (ALB), and Lambda Function URLs in addition to the existing REST API support. This means you can use the same routing API across different AWS services, making it easier to build and migrate serverless applications regardless of your chosen architecture.

import { Router } from '@aws-lambda-powertools/event-handler/http';
import type {
  ALBEvent,
  APIGatewayProxyEvent,
  APIGatewayProxyEventV2,
  Context,
  LambdaFunctionURLEvent,
} from 'aws-lambda';

const app = new Router();
app.get('/hello', () => {
  return { message: 'Hello Event Handler!' };
});

// Works across different services without any changes
export const restApiHandler = (event: APIGatewayProxyEvent, context: Context) =>
  app.resolve(event, context);

export const httpApiHandler = (

... (truncated)

Changelog

Sourced from `@aws-lambda-powertools/parameters`'s changelog.

2.29.0 (2025-11-21)

Improvements

  • commons Make trace ID access more robust (#4693) (b26cd2c)

Bug Fixes

  • logger infinite loop on log buffer when item size is max bytes (#4741) (f0677d4)
  • logger not passing persistent keys to children (#4740) (eafbe13)
  • event-handler moved the response mutation logic to the composeMiddleware function (#4773) (2fe04e3)
  • event-handler handle repeated queryString values (#4755) (5d3cf2d)
  • event-handler allow event handler response to return array (#4725) (eef92ca)

Features

  • logger use async local storage for logger (#4668) (4507fcc)
  • metrics use async local storage for metrics (#4663) (#4694) (2e08f74)
  • parser add type for values parsed by DynamoDBStreamRecord (#4793) (c2bd849)
  • batch use async local storage for batch processing (#4700) (67a8de7)
  • event-handler add support for ALB (#4759) (a470892)
  • event-handler expose response streaming in public API (#4743) (be4e4e2)
  • event-handler add first-class support for binary responses (#4723) (13dbcdc)
  • event-handler Add support for HTTP APIs (API Gateway v2) (#4714) (2f70018)

Maintenance

  • tracer bump aws-xray-sdk-core from 3.11.0 to 3.12.0 (#4792) (afb5678)
  • event-handler unflag http handler from experimental (#4801) (a2deb8d)
Commits
  • fa726e0 chore(ci): bump version to 2.29.0 (#4802)
  • a2deb8d chore(event-handler): unflag http handler from experimental (#4801)
  • c2bd849 feat(parser): add type for values parsed by DynamoDBStreamRecord (#4793)
  • afb5678 chore(deps): bump aws-xray-sdk-core from 3.11.0 to 3.12.0 (#4792)
  • 8806cad docs(event-handler): added documentation for support for HTTP API, ALB and FU...
  • d2e0fcc chore(deps): upgrade InvokeStore to v0.2.1 (#4794)
  • bccd0b1 docs(event-handler): add response streaming docs (#4786)
  • 2279f9b chore(deps): bump mkdocs-llmstxt from 0.4.0 to 0.5.0 in /docs (#4789)
  • 12c5e63 chore(deps): bump actions/checkout from 5.0.1 to 6.0.0 (#4788)
  • 943bb4f docs(event-handler): update binary response docs (#4783)
  • Additional commits viewable in compare view
Maintainer changes

This version was pushed to npm by GitHub Actions, a new releaser for `@aws-lambda-powertools/parameters` since your current version.


Updates `@aws-lambda-powertools/logger` from 2.28.1 to 2.29.0


Updates `@aws-lambda-powertools/metrics` from 2.28.1 to 2.29.0


Updates `@aws-lambda-powertools/tracer` from 2.28.1 to 2.29.0


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • `@dependabot ignore minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • `@dependabot ignore ` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • `@dependabot unignore ` will remove all of the ignore conditions of the specified dependency
  • `@dependabot unignore ` will remove the ignore condition of the specified dependency and ignore conditions
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- lambdas/functions/control-plane/package.json | 2 +- lambdas/libs/aws-powertools-util/package.json | 6 +- lambdas/yarn.lock | 80 ++++++++++--------- 3 files changed, 45 insertions(+), 43 deletions(-) diff --git a/lambdas/functions/control-plane/package.json b/lambdas/functions/control-plane/package.json index 43350b70ca..ca8af2f115 100644 --- a/lambdas/functions/control-plane/package.json +++ b/lambdas/functions/control-plane/package.json @@ -32,7 +32,7 @@ "dependencies": { "@aws-github-runner/aws-powertools-util": "*", "@aws-github-runner/aws-ssm-util": "*", - "@aws-lambda-powertools/parameters": "^2.28.1", + "@aws-lambda-powertools/parameters": "^2.29.0", "@aws-sdk/client-ec2": "^3.940.0", "@aws-sdk/client-sqs": "^3.940.0", "@middy/core": "^6.4.5", diff --git a/lambdas/libs/aws-powertools-util/package.json b/lambdas/libs/aws-powertools-util/package.json index 076a612585..0117543ea2 100644 --- a/lambdas/libs/aws-powertools-util/package.json +++ b/lambdas/libs/aws-powertools-util/package.json @@ -20,9 +20,9 @@ "body-parser": "^2.2.1" }, "dependencies": { - "@aws-lambda-powertools/logger": "^2.28.1", - "@aws-lambda-powertools/metrics": "^2.28.1", - "@aws-lambda-powertools/tracer": "^2.28.1", + "@aws-lambda-powertools/logger": "^2.29.0", + "@aws-lambda-powertools/metrics": "^2.29.0", + "@aws-lambda-powertools/tracer": "^2.29.0", "aws-lambda": "^1.0.7" }, "nx": { diff --git a/lambdas/yarn.lock b/lambdas/yarn.lock index cd32ab95d4..235b3ad812 100644 --- a/lambdas/yarn.lock +++ b/lambdas/yarn.lock @@ -118,9 +118,9 @@ __metadata: version: 0.0.0-use.local resolution: "@aws-github-runner/aws-powertools-util@workspace:libs/aws-powertools-util" dependencies: - "@aws-lambda-powertools/logger": "npm:^2.28.1" - "@aws-lambda-powertools/metrics": "npm:^2.28.1" - "@aws-lambda-powertools/tracer": "npm:^2.28.1" + "@aws-lambda-powertools/logger": "npm:^2.29.0" + 
"@aws-lambda-powertools/metrics": "npm:^2.29.0" + "@aws-lambda-powertools/tracer": "npm:^2.29.0" "@types/aws-lambda": "npm:^8.10.155" "@types/node": "npm:^22.19.0" aws-lambda: "npm:^1.0.7" @@ -148,7 +148,7 @@ __metadata: dependencies: "@aws-github-runner/aws-powertools-util": "npm:*" "@aws-github-runner/aws-ssm-util": "npm:*" - "@aws-lambda-powertools/parameters": "npm:^2.28.1" + "@aws-lambda-powertools/parameters": "npm:^2.29.0" "@aws-sdk/client-ec2": "npm:^3.940.0" "@aws-sdk/client-sqs": "npm:^3.940.0" "@aws-sdk/types": "npm:^3.936.0" @@ -232,50 +232,53 @@ __metadata: languageName: unknown linkType: soft -"@aws-lambda-powertools/commons@npm:2.28.1": - version: 2.28.1 - resolution: "@aws-lambda-powertools/commons@npm:2.28.1" - checksum: 10c0/edc6f6f1fb07c56108ef2af7904c9f8c58d08be8328ecdb04a58e33563c325c618e21abc0e3e639edd1b823eb5596df706f95d72b4d816d8dc7e6f633af3f78f +"@aws-lambda-powertools/commons@npm:2.29.0": + version: 2.29.0 + resolution: "@aws-lambda-powertools/commons@npm:2.29.0" + dependencies: + "@aws/lambda-invoke-store": "npm:0.2.1" + checksum: 10c0/62796c0380614ff4ba8907487c479c6d5b8298cc730afc4932ba717983e61f8f20bb87e302bdd1162b1bf965a069f2ee7c51468bb237231ee835771406d49e1b languageName: node linkType: hard -"@aws-lambda-powertools/logger@npm:^2.28.1": - version: 2.28.1 - resolution: "@aws-lambda-powertools/logger@npm:2.28.1" +"@aws-lambda-powertools/logger@npm:^2.29.0": + version: 2.29.0 + resolution: "@aws-lambda-powertools/logger@npm:2.29.0" dependencies: - "@aws-lambda-powertools/commons": "npm:2.28.1" + "@aws-lambda-powertools/commons": "npm:2.29.0" + "@aws/lambda-invoke-store": "npm:0.2.1" lodash.merge: "npm:^4.6.2" peerDependencies: - "@aws-lambda-powertools/jmespath": 2.28.1 + "@aws-lambda-powertools/jmespath": 2.29.0 "@middy/core": 4.x || 5.x || 6.x peerDependenciesMeta: "@aws-lambda-powertools/jmespath": optional: true "@middy/core": optional: true - checksum: 
10c0/2be2860f8be1094486c5eb537c1a26816fbb710179b24cd152bcf66a35788ab09e47ecf8f862ec6bab4ab2bf3c05058f713b48f2f1f86a27cd71d0918b57288b + checksum: 10c0/ad9b50067ba3eb0a9dd96e1f20fd36d461acb62566c504bb8accda7b94dcb90c514ebc12f91b4cbf1c663d45b62f9187eab49ee1b5162b4c405c6ea9f2730ca0 languageName: node linkType: hard -"@aws-lambda-powertools/metrics@npm:^2.28.1": - version: 2.28.1 - resolution: "@aws-lambda-powertools/metrics@npm:2.28.1" +"@aws-lambda-powertools/metrics@npm:^2.29.0": + version: 2.29.0 + resolution: "@aws-lambda-powertools/metrics@npm:2.29.0" dependencies: - "@aws-lambda-powertools/commons": "npm:2.28.1" + "@aws-lambda-powertools/commons": "npm:2.29.0" peerDependencies: "@middy/core": 4.x || 5.x || 6.x peerDependenciesMeta: "@middy/core": optional: true - checksum: 10c0/eb0a7deed897be7cce335a309e7af59a83398e4f4e5b30e4c9aaf550e8adf0f2d4969db24766711e41938e4bf5ce9d6a3e42213b91d5dbb4fb9a3fa318583a91 + checksum: 10c0/5e93880c0f76975c2bfd2c8ca3d4fa1c0d6802570ff718fb4b98353bcc6f55dc874df739526057c9a4122a4a38eb98d43097ff3b518297ca702113f40d34123f languageName: node linkType: hard -"@aws-lambda-powertools/parameters@npm:^2.28.1": - version: 2.28.1 - resolution: "@aws-lambda-powertools/parameters@npm:2.28.1" +"@aws-lambda-powertools/parameters@npm:^2.29.0": + version: 2.29.0 + resolution: "@aws-lambda-powertools/parameters@npm:2.29.0" dependencies: - "@aws-lambda-powertools/commons": "npm:2.28.1" + "@aws-lambda-powertools/commons": "npm:2.29.0" peerDependencies: "@aws-sdk/client-appconfigdata": ">=3.x" "@aws-sdk/client-dynamodb": ">=3.x" @@ -296,22 +299,22 @@ __metadata: optional: true "@middy/core": optional: true - checksum: 10c0/c5b123a9d8a58e50d82c3b923ec769bded4557af55f431a8a609c2d0545812300543e3fab574bb9f10552eff2cbd1b8ed4873c2e3e47783a00668af2f6a98fbe + checksum: 10c0/3293cbf2fd3b7214ca906a94578af0ed583b4321a76b5962472636653da05348e68cec7fb42b5b7f3ae18b36f4d686a52938375010350ac752677bbadb49273a languageName: node linkType: hard 
-"@aws-lambda-powertools/tracer@npm:^2.28.1": - version: 2.28.1 - resolution: "@aws-lambda-powertools/tracer@npm:2.28.1" +"@aws-lambda-powertools/tracer@npm:^2.29.0": + version: 2.29.0 + resolution: "@aws-lambda-powertools/tracer@npm:2.29.0" dependencies: - "@aws-lambda-powertools/commons": "npm:2.28.1" - aws-xray-sdk-core: "npm:^3.11.0" + "@aws-lambda-powertools/commons": "npm:2.29.0" + aws-xray-sdk-core: "npm:^3.12.0" peerDependencies: "@middy/core": 4.x || 5.x || 6.x peerDependenciesMeta: "@middy/core": optional: true - checksum: 10c0/6cd975ddf36942810850689756d7a21ddff0ea90e599cf22795e34041b4731f641a921cc5537dbc9c5c8e611e73d9e2a7396fecb4d1c95cc718b81947b412494 + checksum: 10c0/7db935bdbbfb3034e21a811e8df21f3e00a24d7ff85eb6f631fd670b57e4e1a43a4f1709c6c741b8f2ae8458cdbe915d2af89c28047595603f7d787bd4427f26 languageName: node linkType: hard @@ -1155,10 +1158,10 @@ __metadata: languageName: node linkType: hard -"@aws/lambda-invoke-store@npm:^0.0.1": - version: 0.0.1 - resolution: "@aws/lambda-invoke-store@npm:0.0.1" - checksum: 10c0/0bbf3060014a462177fb743e132e9b106a6743ad9cd905df4bd26e9ca8bfe2cc90473b03a79938fa908934e45e43f366f57af56a697991abda71d9ac92f5018f +"@aws/lambda-invoke-store@npm:0.2.1": + version: 0.2.1 + resolution: "@aws/lambda-invoke-store@npm:0.2.1" + checksum: 10c0/7fdfd6e4b175d36dae522556efc51b0f7445af3d55e516acee0f4e52946833ec9655be45cb3bdefec5974c0c6e5bcca3ad1bce7d397eb5f7a2393623867fb4b2 languageName: node linkType: hard @@ -6004,18 +6007,17 @@ __metadata: languageName: node linkType: hard -"aws-xray-sdk-core@npm:^3.11.0": - version: 3.11.0 - resolution: "aws-xray-sdk-core@npm:3.11.0" +"aws-xray-sdk-core@npm:^3.12.0": + version: 3.12.0 + resolution: "aws-xray-sdk-core@npm:3.12.0" dependencies: "@aws-sdk/types": "npm:^3.4.1" - "@aws/lambda-invoke-store": "npm:^0.0.1" "@smithy/service-error-classification": "npm:^2.0.4" "@types/cls-hooked": "npm:^4.3.3" atomic-batcher: "npm:^1.0.2" cls-hooked: "npm:^4.2.2" semver: "npm:^7.5.3" - checksum: 
10c0/fa9fe964a1c78dc3717d36baa57658f360bb07d3beaf46689d2c321f40ad8fabe2b21c689b7399c3f0ba486dbf2dd056c1cfe772e613f0458979ca7614ad56c5 + checksum: 10c0/6750bf432c0e7e35844d4f5a317896e0b277eb7d3623e2f1934e5c917dad961f2fc1d100b5abff3ba92d551a9fe2d716b1207ae7687515140c8053e7f605864f languageName: node linkType: hard From a4c2a9e039923210ebaf7ee125f4191d2b770c7d Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Sat, 6 Dec 2025 18:16:48 +0000 Subject: [PATCH 7/7] docs: auto update terraform docs --- modules/runners/job-retry/README.md | 2 +- modules/webhook-github-app/README.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/runners/job-retry/README.md b/modules/runners/job-retry/README.md index 6491c2019a..f54b943855 100644 --- a/modules/runners/job-retry/README.md +++ b/modules/runners/job-retry/README.md @@ -42,7 +42,7 @@ The module is an inner module and used by the runner module when the opt-in feat | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| -| [config](#input\_config) | Configuration for the spot termination watcher lambda function.

`aws_partition`: Partition for the base arn if not 'aws'
`architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions.
`environment_variables`: Environment variables for the lambda.
`enable_organization_runners`: Enable organization runners.
`enable_metric`: Enable metric for the lambda. If `spot_warning` is set to true, the lambda will emit a metric when it detects a spot termination warning.
'ghes\_url': Optional GitHub Enterprise Server URL.
'user\_agent': Optional User-Agent header for GitHub API requests.
'github\_app\_parameters': Parameter Store for GitHub App Parameters.
'kms\_key\_arn': Optional CMK Key ARN instead of using the default AWS managed key.
`lambda_event_source_mapping_batch_size`: Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default will be used.
`lambda_event_source_mapping_maximum_batching_window_in_seconds`: Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10.
`lambda_principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.
`lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
`logging_kms_key_id`: Specifies the kms key id to encrypt the logs with
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
`memory_size`: Memory size linit in MB of the lambda.
`metrics`: Configuration to enable metrics creation by the lambda.
`prefix`: The prefix used for naming resources.
`role_path`: The path that will be added to the role, if not set the environment name will be used.
`role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda.
`runtime`: AWS Lambda runtime.
`s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly.
`s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas.
`s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket.
`security_group_ids`: List of security group IDs associated with the Lambda function.
'sqs\_build\_queue': SQS queue for build events to re-publish job request.
`subnet_ids`: List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`.
`tag_filters`: Map of tags that will be used to filter the resources to be tracked. Only for which all tags are present and starting with the same value as the value in the map will be tracked.
`tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`timeout`: Time out of the lambda in seconds.
`tracing_config`: Configuration for lambda tracing.
`zip`: File location of the lambda zip file. |
object({
aws_partition = optional(string, null)
architecture = optional(string, null)
enable_organization_runners = bool
environment_variables = optional(map(string), {})
ghes_url = optional(string, null)
user_agent = optional(string, null)
github_app_parameters = object({
key_base64 = map(string)
id = map(string)
})
kms_key_arn = optional(string, null)
lambda_event_source_mapping_batch_size = optional(number, 10)
lambda_event_source_mapping_maximum_batching_window_in_seconds = optional(number, 0)
lambda_tags = optional(map(string), {})
log_level = optional(string, null)
logging_kms_key_id = optional(string, null)
logging_retention_in_days = optional(number, null)
memory_size = optional(number, null)
metrics = optional(object({
enable = optional(bool, false)
namespace = optional(string, null)
metric = optional(object({
enable_github_app_rate_limit = optional(bool, true)
enable_job_retry = optional(bool, true)
}), {})
}), {})
prefix = optional(string, null)
principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
queue_encryption = optional(object({
kms_data_key_reuse_period_seconds = optional(number, null)
kms_master_key_id = optional(string, null)
sqs_managed_sse_enabled = optional(bool, true)
}), {})
role_path = optional(string, null)
role_permissions_boundary = optional(string, null)
runtime = optional(string, null)
security_group_ids = optional(list(string), [])
subnet_ids = optional(list(string), [])
s3_bucket = optional(string, null)
s3_key = optional(string, null)
s3_object_version = optional(string, null)
sqs_build_queue = object({
url = string
arn = string
})
tags = optional(map(string), {})
timeout = optional(number, 30)
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
zip = optional(string, null)
})
| n/a | yes | +| [config](#input\_config) | Configuration for the job retry lambda function.<br/>

`aws_partition`: Partition for the base arn if not 'aws'
`architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions.
`environment_variables`: Environment variables for the lambda.
`enable_organization_runners`: Enable organization runners.
`enable_metric`: Enable metric for the lambda.<br/>
`ghes_url`: Optional GitHub Enterprise Server URL.<br/>
`user_agent`: Optional User-Agent header for GitHub API requests.<br/>
`github_app_parameters`: Parameter Store parameters for the GitHub App.<br/>
`kms_key_arn`: Optional CMK Key ARN instead of using the default AWS managed key.<br/>
`lambda_event_source_mapping_batch_size`: Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default will be used.
`lambda_event_source_mapping_maximum_batching_window_in_seconds`: Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10.
`principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.<br/>
`lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
`logging_kms_key_id`: Specifies the kms key id to encrypt the logs with
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
`memory_size`: Memory size limit in MB of the lambda.
`metrics`: Configuration to enable metrics creation by the lambda.
`prefix`: The prefix used for naming resources.
`role_path`: The path that will be added to the role, if not set the environment name will be used.
`role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda.
`runtime`: AWS Lambda runtime.
`s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly.
`s3_key`: S3 key for the lambda function. Required if using an S3 bucket to specify lambdas.<br/>
`s3_object_version`: S3 object version for the lambda function. Useful if S3 versioning is enabled on the source bucket.<br/>
`security_group_ids`: List of security group IDs associated with the Lambda function.
`sqs_build_queue`: SQS queue for build events to re-publish job requests.<br/>
`subnet_ids`: List of subnets in which the action runners will be launched; the subnets need to be in the `vpc_id`.<br/>
`tag_filters`: Map of tags used to filter the resources to be tracked. Only resources for which all tags are present and start with the same values as in the map will be tracked.<br/>
`tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`timeout`: Timeout of the lambda in seconds.<br/>
`tracing_config`: Configuration for lambda tracing.
`zip`: File location of the lambda zip file. |
object({
aws_partition = optional(string, null)
architecture = optional(string, null)
enable_organization_runners = bool
environment_variables = optional(map(string), {})
ghes_url = optional(string, null)
user_agent = optional(string, null)
github_app_parameters = object({
key_base64 = map(string)
id = map(string)
})
kms_key_arn = optional(string, null)
lambda_event_source_mapping_batch_size = optional(number, 10)
lambda_event_source_mapping_maximum_batching_window_in_seconds = optional(number, 0)
lambda_tags = optional(map(string), {})
log_level = optional(string, null)
logging_kms_key_id = optional(string, null)
logging_retention_in_days = optional(number, null)
memory_size = optional(number, null)
metrics = optional(object({
enable = optional(bool, false)
namespace = optional(string, null)
metric = optional(object({
enable_github_app_rate_limit = optional(bool, true)
enable_job_retry = optional(bool, true)
}), {})
}), {})
prefix = optional(string, null)
principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
queue_encryption = optional(object({
kms_data_key_reuse_period_seconds = optional(number, null)
kms_master_key_id = optional(string, null)
sqs_managed_sse_enabled = optional(bool, true)
}), {})
role_path = optional(string, null)
role_permissions_boundary = optional(string, null)
runtime = optional(string, null)
security_group_ids = optional(list(string), [])
subnet_ids = optional(list(string), [])
s3_bucket = optional(string, null)
s3_key = optional(string, null)
s3_object_version = optional(string, null)
sqs_build_queue = object({
url = string
arn = string
})
tags = optional(map(string), {})
timeout = optional(number, 30)
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
zip = optional(string, null)
})
| n/a | yes | ## Outputs diff --git a/modules/webhook-github-app/README.md b/modules/webhook-github-app/README.md index 0c09a761c5..6de85ee30d 100644 --- a/modules/webhook-github-app/README.md +++ b/modules/webhook-github-app/README.md @@ -34,7 +34,7 @@ No modules. | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| -| [github\_app](#input\_github\_app) | GitHub app parameters, see your github app. Ensure the key is the base64-encoded `.pem` file (the output of `base64 app.private-key.pem`, not the content of `private-key.pem`). |
object({
key_base64 = string
id = string
webhook_secret = string
})
| n/a | yes | +| [github\_app](#input\_github\_app) | GitHub app parameters, see your GitHub app. Ensure the key is the base64-encoded `.pem` file (the output of `base64 app.private-key.pem`, not the content of `private-key.pem`). |
object({
key_base64 = string
id = string
webhook_secret = string
})
| n/a | yes | | [webhook\_endpoint](#input\_webhook\_endpoint) | The endpoint to use for the webhook, defaults to the endpoint of the runners module. | `string` | n/a | yes | ## Outputs