[OpenAPI] Add archive for collection #8978
base: main
Conversation
Pull request overview
This PR introduces a new OpenAPI collection archive system for packaging and managing OpenAPI endpoint and model definitions. The implementation adds a ZIP-based container format with comprehensive APIs for creating, reading, and modifying archives.
Key changes:
- Introduces the `HotChocolate.Adapters.OpenApi.Packaging` project with archive functionality
- Adds comprehensive test coverage through `Adapters.OpenApi.Packaging.Tests`
- Centralizes target framework configuration using the `$(OpenApiTargetFrameworks)` property
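The archive format described above is a ZIP container. The sketch below is not the PR's `OpenApiCollectionArchive` API — it only illustrates the underlying idea (a ZIP file holding a metadata entry plus endpoint/model definitions) using the standard `System.IO.Compression` types; the entry names `metadata.json` and `endpoints/get-users.json` are assumptions for illustration.

```csharp
using System;
using System.IO;
using System.IO.Compression;

var path = Path.Combine(Path.GetTempPath(), "collection.zip");
File.Delete(path); // ensure a clean slate; Delete is a no-op if the file is missing

// Create the archive with a metadata entry and one endpoint definition.
using (var zip = ZipFile.Open(path, ZipArchiveMode.Create))
{
    var metadata = zip.CreateEntry("metadata.json");
    using (var writer = new StreamWriter(metadata.Open()))
    {
        writer.Write("{\"formatVersion\":1}");
    }

    var endpoint = zip.CreateEntry("endpoints/get-users.json");
    using (var writer = new StreamWriter(endpoint.Open()))
    {
        writer.Write("{\"path\":\"/users\",\"method\":\"GET\"}");
    }
}

// Read the metadata entry back.
using (var zip = ZipFile.OpenRead(path))
{
    using var reader = new StreamReader(zip.GetEntry("metadata.json")!.Open());
    Console.WriteLine(reader.ReadToEnd()); // {"formatVersion":1}
}
```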
Reviewed changes
Copilot reviewed 22 out of 22 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `src/Directory.Build.props` | Defines shared `OpenApiTargetFrameworks` property for consistent framework targeting |
| `src/HotChocolate/Adapters/src/Adapters.OpenApi.Packaging/OpenApiCollectionArchive.cs` | Main archive API with create/open/read/write operations for endpoints and models |
| `src/HotChocolate/Adapters/src/Adapters.OpenApi.Packaging/ArchiveSession.cs` | Manages archive session state and temporary file operations |
| `src/HotChocolate/Adapters/src/Adapters.OpenApi.Packaging/ArchiveMetadata.cs` | Defines metadata structure for archive format versioning and content tracking |
| `src/HotChocolate/Adapters/src/Adapters.OpenApi.Packaging/Serializers/ArchiveMetadataSerializer.cs` | JSON serialization logic for archive metadata |
| `src/HotChocolate/Adapters/src/Adapters.OpenApi.Packaging/NameValidator.cs` | Validates endpoint and model names using regex patterns |
| `src/HotChocolate/Adapters/src/Adapters.OpenApi.Packaging/*.cs` | Supporting types (FileNames, FileKind, OpenApiEndpoint, OpenApiModel, enums, options) |
| `src/HotChocolate/Adapters/src/Adapters.OpenApi.Packaging/HotChocolate.Adapters.OpenApi.Packaging.csproj` | New project file for packaging library |
| `src/HotChocolate/Adapters/test/Adapters.OpenApi.Packaging.Tests/OpenApiCollectionArchiveTests.cs` | Comprehensive test suite with 30+ test methods covering archive operations |
| `src/HotChocolate/Adapters/test/Adapters.OpenApi.Packaging.Tests/HotChocolate.Adapters.OpenApi.Packaging.Tests.csproj` | Test project configuration |
| `src/HotChocolate/Adapters/src/Adapters.OpenApi*/HotChocolate.*.csproj` | Updated to use centralized target frameworks property |
| `src/HotChocolate/Adapters/Adapters.slnx` | Added new packaging projects to solution |
| `src/All.slnx` | Added new packaging projects to main solution |
```csharp
public static ArchiveMetadata Parse(ReadOnlyMemory<byte> data)
{
    var document = JsonDocument.Parse(data);
```
**Copilot AI** · Dec 5, 2025
The `JsonDocument` created on line 43 is never disposed, causing a resource leak. Wrap it in a `using` statement to ensure proper disposal: `using var document = JsonDocument.Parse(data);`
Suggested change:
```diff
-var document = JsonDocument.Parse(data);
+using var document = JsonDocument.Parse(data);
```
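`JsonDocument` implements `IDisposable` because it rents its backing memory from a shared pool; disposing it returns that memory. A minimal standalone sketch of the suggested pattern (the sample JSON is illustrative, not the PR's metadata schema):

```csharp
using System;
using System.Text.Json;

// `using var` disposes the document at the end of the enclosing scope,
// returning its pooled buffer instead of leaking it.
using var document = JsonDocument.Parse("{\"formatVersion\":1}");
var version = document.RootElement.GetProperty("formatVersion").GetInt32();
Console.WriteLine(version); // prints 1
```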
```csharp
}
file ??= FileEntry.Created(path);
var stream = File.Open(file.TempPath, FileMode.Create, FileAccess.Write);
_files.Add(path, file);
```
**Copilot AI** · Dec 5, 2025
When a file entry is created on lines 115-116 but the path already exists in the archive, the entry is not added to the `_files` dictionary before the write stream is opened on line 120. On line 121, `_files.Add(path, file)` succeeds only because the entry created on line 115 was never added to the dictionary, which makes the logic flow confusing. If the intent is to handle replacement, the file should be added to `_files` immediately after line 116, not after line 120. Otherwise, multiple calls to `OpenWrite` for the same path between lines 113-117 could create multiple `FileEntry` instances.
Suggested change:
```diff
-}
-file ??= FileEntry.Created(path);
-var stream = File.Open(file.TempPath, FileMode.Create, FileAccess.Write);
-_files.Add(path, file);
+    _files.Add(path, file);
+}
+file ??= FileEntry.Created(path);
+var stream = File.Open(file.TempPath, FileMode.Create, FileAccess.Write);
+if (!_files.ContainsKey(path))
+{
+    _files.Add(path, file);
+}
```
```csharp
await using var readStream = zipEntry.Open();
await using var writeStream = File.Open(fileEntry.TempPath, FileMode.Create, FileAccess.Write);

int read;
while ((read = await readStream.ReadAsync(buffer, cancellationToken)) > 0)
{
    consumed += read;

    if (consumed > maxAllowedSize)
    {
        throw new InvalidOperationException(
            $"File is too large and exceeds the allowed size of {maxAllowedSize}.");
    }

    await writeStream.WriteAsync(buffer.AsMemory(0, read), cancellationToken);
}
```
**Copilot AI** · Dec 5, 2025
The rented buffer from `ArrayPool<byte>.Shared.Rent(4096)` is never returned to the pool. This causes a memory leak. Add a `try`/`finally` block to ensure the buffer is returned with `ArrayPool<byte>.Shared.Return(buffer)` before the method exits, even if an exception is thrown.
Suggested change:
```diff
-await using var readStream = zipEntry.Open();
-await using var writeStream = File.Open(fileEntry.TempPath, FileMode.Create, FileAccess.Write);
-int read;
-while ((read = await readStream.ReadAsync(buffer, cancellationToken)) > 0)
-{
-    consumed += read;
-    if (consumed > maxAllowedSize)
-    {
-        throw new InvalidOperationException(
-            $"File is too large and exceeds the allowed size of {maxAllowedSize}.");
-    }
-    await writeStream.WriteAsync(buffer.AsMemory(0, read), cancellationToken);
-}
+try
+{
+    await using var readStream = zipEntry.Open();
+    await using var writeStream = File.Open(fileEntry.TempPath, FileMode.Create, FileAccess.Write);
+    int read;
+    while ((read = await readStream.ReadAsync(buffer, cancellationToken)) > 0)
+    {
+        consumed += read;
+        if (consumed > maxAllowedSize)
+        {
+            throw new InvalidOperationException(
+                $"File is too large and exceeds the allowed size of {maxAllowedSize}.");
+        }
+        await writeStream.WriteAsync(buffer.AsMemory(0, read), cancellationToken);
+    }
+}
+finally
+{
+    ArrayPool<byte>.Shared.Return(buffer);
+}
```
```csharp
private void TryReturnBuffer(ArrayBufferWriter<byte> buffer)
{
    buffer.Clear();

    var currentBuffer = _buffer;
    var currentCapacity = _buffer?.Capacity ?? 0;

    if (currentCapacity < buffer.Capacity)
    {
        Interlocked.CompareExchange(ref _buffer, buffer, currentBuffer);
    }
```
**Copilot AI** · Dec 5, 2025
There's a race condition in `TryReturnBuffer`. Between reading `_buffer` on line 422 and calling `Interlocked.CompareExchange` on line 427, another thread could update `_buffer`, making `currentBuffer` stale. This could cause the newer buffer to be overwritten with an older one. Consider reading `_buffer` once and using that value consistently, or restructure the logic to handle concurrent updates correctly.
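One way to avoid the stale read (a sketch, not the PR's code) is to take a single snapshot of the field and use that same snapshot for both the capacity check and the compare-exchange. If another thread updates the field in between, the CAS simply fails and that thread's buffer is kept:

```csharp
using System;
using System.Buffers;
using System.Threading;

var cache = new BufferCache();
var writer = new ArrayBufferWriter<byte>(8192);
cache.TryReturnBuffer(writer);
Console.WriteLine(cache.CachedCapacity >= 8192); // True

class BufferCache
{
    private ArrayBufferWriter<byte>? _buffer;

    public void TryReturnBuffer(ArrayBufferWriter<byte> buffer)
    {
        buffer.Clear();

        // Read the field exactly once; reuse the snapshot for the CAS.
        var snapshot = Volatile.Read(ref _buffer);
        if ((snapshot?.Capacity ?? 0) < buffer.Capacity)
        {
            // Succeeds only if _buffer is still `snapshot`; otherwise another
            // thread already installed a buffer and we leave theirs in place.
            Interlocked.CompareExchange(ref _buffer, buffer, snapshot);
        }
    }

    public int CachedCapacity => _buffer?.Capacity ?? 0;
}
```

A failed CAS here is benign: the worst case is that a candidate buffer is not cached, never that a newer buffer is silently replaced by an older one.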
## 🚀 Fusion Gateway Performance Results

### Simple Composite Query

**Constant Load (50 VUs)**
📊 Response Time Metrics

**Ramping Load (0→50→500→50 VUs)**
📊 Response Time Metrics

**Executed Query**

```graphql
fragment User on User {
  id
  username
  name
}

fragment Review on Review {
  id
  body
}

fragment Product on Product {
  inStock
  name
  price
  shippingEstimate
  upc
  weight
}

query TestQuery {
  topProducts(first: 5) {
    ...Product
    reviews {
      ...Review
      author {
        ...User
      }
    }
  }
}
```

### Deep Recursion Query

**Constant Load (50 VUs)**
📊 Response Time Metrics

**Ramping Load (0→50→500→50 VUs)**
📊 Response Time Metrics

**Executed Query**

```graphql
fragment User on User {
  id
  username
  name
}

fragment Review on Review {
  id
  body
}

fragment Product on Product {
  inStock
  name
  price
  shippingEstimate
  upc
  weight
}

query TestQuery {
  users {
    ...User
    reviews {
      ...Review
      product {
        ...Product
        reviews {
          ...Review
          author {
            ...User
            reviews {
              ...Review
              product {
                ...Product
              }
            }
          }
        }
      }
    }
  }
  topProducts(first: 5) {
    ...Product
    reviews {
      ...Review
      author {
        ...User
        reviews {
          ...Review
          product {
            ...Product
          }
        }
      }
    }
  }
}
```

### Variable Batching Throughput

**Constant Load (50 VUs)**
📊 Response Time Metrics

**Ramping Load (0→50→500→50 VUs)**
📊 Response Time Metrics

**Executed Query**

```graphql
query TestQuery_8f7a46ce_2(
  $__fusion_1_upc: ID!
  $__fusion_2_price: Long!
  $__fusion_2_weight: Long!
) {
  productByUpc(upc: $__fusion_1_upc) {
    inStock
    shippingEstimate(weight: $__fusion_2_weight, price: $__fusion_2_price)
  }
}
```

**Variables (5 sets batched in single request)**

```json
[
  { "__fusion_1_upc": "1", "__fusion_2_price": 899, "__fusion_2_weight": 100 },
  { "__fusion_1_upc": "2", "__fusion_2_price": 1299, "__fusion_2_weight": 1000 },
  { "__fusion_1_upc": "3", "__fusion_2_price": 15, "__fusion_2_weight": 20 },
  { "__fusion_1_upc": "4", "__fusion_2_price": 499, "__fusion_2_weight": 100 },
  { "__fusion_1_upc": "5", "__fusion_2_price": 1299, "__fusion_2_weight": 1000 }
]
```

No baseline data available for comparison.

Run 19971294427 • Commit 6ded803 • Fri, 05 Dec 2025 18:05:55 GMT
No description provided.