diff --git a/docs/reference/experimental/async/submission.md b/docs/reference/experimental/async/submission.md
new file mode 100644
index 000000000..21f4a0c9a
--- /dev/null
+++ b/docs/reference/experimental/async/submission.md
@@ -0,0 +1,19 @@
+# Submission
+
+Contained within this file are experimental interfaces for working with the Synapse Python
+Client. Unless otherwise noted these interfaces are subject to change at any time. Use
+at your own risk.
+
+## API Reference
+
+::: synapseclient.models.Submission
+ options:
+ inherited_members: true
+ members:
+ - store_async
+ - get_async
+ - delete_async
+ - cancel_async
+ - get_evaluation_submissions_async
+ - get_user_submissions_async
+ - get_submission_count_async
diff --git a/docs/reference/experimental/async/submission_bundle.md b/docs/reference/experimental/async/submission_bundle.md
new file mode 100644
index 000000000..39c43a90a
--- /dev/null
+++ b/docs/reference/experimental/async/submission_bundle.md
@@ -0,0 +1,14 @@
+# Submission Bundle
+
+Contained within this file are experimental interfaces for working with the Synapse Python
+Client. Unless otherwise noted these interfaces are subject to change at any time. Use
+at your own risk.
+
+## API Reference
+
+::: synapseclient.models.SubmissionBundle
+ options:
+ inherited_members: true
+ members:
+ - get_evaluation_submission_bundles_async
+ - get_user_submission_bundles_async
diff --git a/docs/reference/experimental/async/submission_status.md b/docs/reference/experimental/async/submission_status.md
new file mode 100644
index 000000000..c1adfa346
--- /dev/null
+++ b/docs/reference/experimental/async/submission_status.md
@@ -0,0 +1,16 @@
+# Submission Status
+
+Contained within this file are experimental interfaces for working with the Synapse Python
+Client. Unless otherwise noted these interfaces are subject to change at any time. Use
+at your own risk.
+
+## API Reference
+
+::: synapseclient.models.SubmissionStatus
+ options:
+ inherited_members: true
+ members:
+ - get_async
+ - store_async
+ - get_all_submission_statuses_async
+ - batch_update_submission_statuses_async
diff --git a/docs/reference/experimental/sync/submission.md b/docs/reference/experimental/sync/submission.md
new file mode 100644
index 000000000..533c8c84c
--- /dev/null
+++ b/docs/reference/experimental/sync/submission.md
@@ -0,0 +1,19 @@
+# Submission
+
+Contained within this file are experimental interfaces for working with the Synapse Python
+Client. Unless otherwise noted these interfaces are subject to change at any time. Use
+at your own risk.
+
+## API Reference
+
+::: synapseclient.models.Submission
+ options:
+ inherited_members: true
+ members:
+ - store
+ - get
+ - delete
+ - cancel
+ - get_evaluation_submissions
+ - get_user_submissions
+ - get_submission_count
diff --git a/docs/reference/experimental/sync/submission_bundle.md b/docs/reference/experimental/sync/submission_bundle.md
new file mode 100644
index 000000000..e7440ee8d
--- /dev/null
+++ b/docs/reference/experimental/sync/submission_bundle.md
@@ -0,0 +1,14 @@
+# Submission Bundle
+
+Contained within this file are experimental interfaces for working with the Synapse Python
+Client. Unless otherwise noted these interfaces are subject to change at any time. Use
+at your own risk.
+
+## API Reference
+
+::: synapseclient.models.SubmissionBundle
+ options:
+ inherited_members: true
+ members:
+ - get_evaluation_submission_bundles
+ - get_user_submission_bundles
diff --git a/docs/reference/experimental/sync/submission_status.md b/docs/reference/experimental/sync/submission_status.md
new file mode 100644
index 000000000..9a2e482a6
--- /dev/null
+++ b/docs/reference/experimental/sync/submission_status.md
@@ -0,0 +1,16 @@
+# Submission Status
+
+Contained within this file are experimental interfaces for working with the Synapse Python
+Client. Unless otherwise noted these interfaces are subject to change at any time. Use
+at your own risk.
+
+## API Reference
+
+::: synapseclient.models.SubmissionStatus
+ options:
+ inherited_members: true
+ members:
+ - get
+ - store
+ - get_all_submission_statuses
+ - batch_update_submission_statuses
diff --git a/docs/tutorials/python/submission.md b/docs/tutorials/python/submission.md
new file mode 100644
index 000000000..a5f999ba7
--- /dev/null
+++ b/docs/tutorials/python/submission.md
@@ -0,0 +1,131 @@
+# Submissions, SubmissionStatuses, SubmissionBundles
+Users can work with Submissions in the Synapse Python client, since these objects are part of the Evaluation API data model. A participant in a Synapse Evaluation can submit a Synapse Entity as a Submission to that Evaluation. Submission data is owned by the parent Evaluation and is immutable.
+
+The data model includes additional objects to support scoring of Submissions and convenient data access:
+
+- **SubmissionStatus**: An object used to track scoring information for a single Submission. This object is intended to be modified by the users (or test harnesses) managing the Evaluation.
+- **SubmissionBundle**: A convenience object to transport a Submission and its accompanying SubmissionStatus in a single web service call.
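
The relationships above can be sketched with plain dataclasses standing in for the server-side objects (an illustrative model only; the class and field names here are abridged stand-ins, not the client's actual classes):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass(frozen=True)  # Submission data is immutable once created
class Submission:
    id: str
    evaluation_id: str
    entity_id: str

@dataclass  # mutable scoring record, keyed by the same ID as its Submission
class SubmissionStatus:
    id: str
    status: str = "RECEIVED"
    submission_annotations: Dict[str, List] = field(default_factory=dict)

@dataclass  # pairs the two objects for transport in a single call
class SubmissionBundle:
    submission: Submission
    submission_status: SubmissionStatus

sub = Submission(id="9751234", evaluation_id="9617645", entity_id="syn123")
status = SubmissionStatus(id=sub.id)
status.submission_annotations["accuracy"] = [0.85]  # organizers mutate the status
bundle = SubmissionBundle(submission=sub, submission_status=status)
```

Attempting to modify `sub` raises `dataclasses.FrozenInstanceError`, mirroring the immutability of real Submissions.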
+
+This tutorial demonstrates how to work with all three object types using the Python client for two different use cases:
+
+1. Participating in a Synapse challenge
+1. Organizing a Synapse challenge
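
Both roles revolve around the `status` field of a SubmissionStatus, which organizers advance as they validate and score entries while participants poll it. A minimal sketch of that progression (RECEIVED, VALIDATED, and SCORED are the status strings used in this tutorial's scripts; Synapse defines the full enumeration):

```python
# Scoring flow used in this tutorial (a subset of the statuses Synapse defines).
SCORING_FLOW = ("RECEIVED", "VALIDATED", "SCORED")

def advance(status: str) -> str:
    """Move a submission one step through the scoring flow, stopping at the end."""
    i = SCORING_FLOW.index(status)
    return SCORING_FLOW[min(i + 1, len(SCORING_FLOW) - 1)]

status = "RECEIVED"        # set when the participant submits
status = advance(status)   # organizer validates -> "VALIDATED"
status = advance(status)   # organizer scores    -> "SCORED"
```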
+
+
+## Tutorial Purpose
+In this tutorial:
+
+As a participant of a Synapse challenge, you will
+
+1. Make a submission to an existing evaluation queue on Synapse
+1. Fetch your existing submission
+1. Count your submissions
+1. Fetch all of your submissions from an existing evaluation queue on Synapse
+1. Check the status of your submission
+1. Cancel your submission
+
+As an organizer of a Synapse challenge, you will
+
+1. Annotate a submission to score it
+1. Batch-update submission statuses
+1. Fetch the submission bundle for a given submission
+1. Allow cancellation of submissions
+1. Delete submissions
+
+## Prerequisites
+* You have completed the [Evaluation](./evaluation.md) tutorial, or have an existing Evaluation on Synapse to work from
+* You have an existing entity with which to make a submission (can be a [File](./file.md) or Docker Repository)
+* You have the correct permissions on the Evaluation queue for your desired tutorial section (participant or organizer)
+
+## 1. Participating in a Synapse challenge
+
+### 1. Make a submission to an existing evaluation queue on Synapse
+
+```python
+{!docs/tutorials/python/tutorial_scripts/submission_participant.py!lines=30-51}
+```
+
+### 2. Fetch your existing submission
+
+```python
+{!docs/tutorials/python/tutorial_scripts/submission_participant.py!lines=56-71}
+```
+
+### 3. Count your submissions
+
+```python
+{!docs/tutorials/python/tutorial_scripts/submission_participant.py!lines=72-92}
+```
+
+### 4. Fetch all of your submissions from an existing evaluation queue on Synapse
+
+```python
+{!docs/tutorials/python/tutorial_scripts/submission_participant.py!lines=93-107}
+```
+
+### 5. Check the status of your submission
+
+```python
+{!docs/tutorials/python/tutorial_scripts/submission_participant.py!lines=108-130}
+```
+
+### 6. Cancel your submission
+
+```python
+{!docs/tutorials/python/tutorial_scripts/submission_participant.py!lines=131-157}
+```
+
+## 2. Organizing a Synapse challenge
+
+### 1. Annotate a submission to score it
+
+```python
+{!docs/tutorials/python/tutorial_scripts/submission_organizer.py!lines=29-57}
+```
+
+### 2. Batch-update submission statuses
+
+```python
+{!docs/tutorials/python/tutorial_scripts/submission_organizer.py!lines=58-101}
+```
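
The batch endpoint accepts at most 500 statuses per call, so larger updates go out as a series of calls flagged with `is_first_batch` and `is_last_batch`. A sketch of that chunking logic, with a hypothetical `send_batch` callable standing in for the client's batch-update call:

```python
def chunked(items, size=500):
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i : i + size]

def batch_update(statuses, send_batch):
    """Send statuses in batches, flagging the first and last calls."""
    chunks = list(chunked(statuses))
    for i, chunk in enumerate(chunks):
        send_batch(
            chunk,
            is_first_batch=(i == 0),
            is_last_batch=(i == len(chunks) - 1),
        )

# Record the calls a 1200-item update would produce: 500 + 500 + 200.
calls = []
batch_update(list(range(1200)), lambda chunk, **flags: calls.append((len(chunk), flags)))
```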
+
+### 3. Fetch the submission bundle for a given submission
+
+```python
+{!docs/tutorials/python/tutorial_scripts/submission_organizer.py!lines=102-142}
+```
+
+### 4. Allow cancellation of submissions
+
+```python
+{!docs/tutorials/python/tutorial_scripts/submission_organizer.py!lines=143-195}
+```
+
+### 5. Delete submissions
+
+```python
+{!docs/tutorials/python/tutorial_scripts/submission_organizer.py!lines=196-229}
+```
+
+## Source code for this tutorial
+
+<details class="quote">
+  <summary>Click to show me (source code for Participant)</summary>
+
+```python
+{!docs/tutorials/python/tutorial_scripts/submission_participant.py!}
+```
+
+</details>
+
+<details class="quote">
+  <summary>Click to show me (source code for Organizer)</summary>
+
+```python
+{!docs/tutorials/python/tutorial_scripts/submission_organizer.py!}
+```
+
+</details>
+
+## References
+- [Evaluation][synapseclient.models.Evaluation]
+- [File][synapseclient.models.File]
+- [syn.login][synapseclient.Synapse.login]
diff --git a/docs/tutorials/python/tutorial_scripts/submission_organizer.py b/docs/tutorials/python/tutorial_scripts/submission_organizer.py
new file mode 100644
index 000000000..a6aa303dd
--- /dev/null
+++ b/docs/tutorials/python/tutorial_scripts/submission_organizer.py
@@ -0,0 +1,211 @@
+"""
+Submission Organizer Tutorial - Code for working with Submissions as a challenge organizer.
+
+This tutorial demonstrates how to:
+1. Annotate a submission to score it
+2. Batch-update submission statuses
+3. Fetch the submission bundle for a given submission
+4. Allow cancellation of submissions
+5. Delete submissions
+"""
+
+from synapseclient import Synapse
+from synapseclient.models import Submission, SubmissionBundle, SubmissionStatus
+
+syn = Synapse()
+syn.login()
+
+# REQUIRED: Set these to your actual Synapse IDs
+# Do NOT leave these as None - the script will not work properly
+EVALUATION_ID = None # Replace with the evaluation queue ID you manage
+SUBMISSION_ID = None # Replace with a submission ID from your evaluation
+
+assert (
+ EVALUATION_ID is not None
+), "EVALUATION_ID must be set to the evaluation queue ID you manage"
+assert (
+ SUBMISSION_ID is not None
+), "SUBMISSION_ID must be set to a submission ID from your evaluation"
+
+print(f"Working with Evaluation: {EVALUATION_ID}")
+print(f"Managing Submission: {SUBMISSION_ID}")
+
+# ==============================================================================
+# 1. Annotate a submission to score it
+# ==============================================================================
+
+print("\n=== 1. Annotating a submission with scores ===")
+
+# First, get the submission status
+status = SubmissionStatus(id=SUBMISSION_ID).get()
+print(f"Retrieved submission status for submission {SUBMISSION_ID}")
+print(f"Current status: {status.status}")
+
+# Update the submission status with scoring information
+status.status = "SCORED"
+status.submission_annotations = {
+ "accuracy": [0.85],
+ "precision": [0.82],
+ "feedback": ["Good performance!"],
+ "validation_errors": "None detected",
+ "score_errors": "None detected",
+}
+
+# Store the updated status
+updated_status = status.store()
+print("Successfully scored submission!")
+print(f"Status: {updated_status.status}")
+print("Annotations added:")
+for key, value in updated_status.submission_annotations.items():
+ print(f" {key}: {value}")
+
+# ==============================================================================
+# 2. Batch-update submission statuses
+# ==============================================================================
+
+print("\n=== 2. Batch updating submission statuses ===")
+
+# First, get all submission statuses that need updating
+statuses_to_update = SubmissionStatus.get_all_submission_statuses(
+ evaluation_id=EVALUATION_ID,
+ status="RECEIVED", # Get submissions that haven't been scored yet
+ limit=50, # Limit to 50 for this example (max is 500 for batch operations)
+)
+
+print(f"Found {len(statuses_to_update)} submissions to batch update")
+
+if statuses_to_update:
+ # Update each status with validation information
+ for i, status in enumerate(statuses_to_update):
+ status.status = "VALIDATED"
+ status.submission_annotations = {
+ "validation_status": ["PASSED"],
+ "validation_timestamp": ["2024-11-24T10:30:00Z"],
+ "batch_number": [i + 1],
+ "validator": ["automated_system"],
+ }
+
+ # Perform batch update
+ batch_response = SubmissionStatus.batch_update_submission_statuses(
+ evaluation_id=EVALUATION_ID,
+ statuses=statuses_to_update,
+ is_first_batch=True,
+ is_last_batch=True,
+ )
+
+    print("Batch update completed successfully!")
+ print(f"Batch response: {batch_response}")
+else:
+ print("No submissions found with 'RECEIVED' status to update")
+
+# ==============================================================================
+# 3. Fetch the submission bundle for a given submission
+# ==============================================================================
+
+print("\n=== 3. Fetching submission bundle ===")
+
+# Get all submission bundles for the evaluation
+print("Fetching all submission bundles for the evaluation...")
+
+bundles = list(
+ SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=EVALUATION_ID, status="SCORED" # Only get scored submissions
+ )
+)
+
+print(f"Found {len(bundles)} scored submission bundles")
+
+for i, bundle in enumerate(bundles[:5]): # Show first 5
+ submission = bundle.submission
+ status = bundle.submission_status
+
+ print(f"\nBundle {i + 1}:")
+ if submission:
+ print(f" Submission ID: {submission.id}")
+ print(f" Submitter: {submission.submitter_alias}")
+ print(f" Entity ID: {submission.entity_id}")
+ print(f" Created: {submission.created_on}")
+
+ if status:
+ print(f" Status: {status.status}")
+ print(f" Modified: {status.modified_on}")
+ if status.submission_annotations:
+            print("  Scores:")
+ for key, value in status.submission_annotations.items():
+ if key in ["accuracy", "f1_score", "precision", "recall"]:
+ print(f" {key}: {value}")
+
+# ==============================================================================
+# 4. Allow cancellation of submissions
+# ==============================================================================
+
+print("\n=== 4. Managing submission cancellation ===")
+
+# First, check if any submissions have requested cancellation
+all_statuses = SubmissionStatus.get_all_submission_statuses(
+ evaluation_id=EVALUATION_ID, limit=100
+)
+
+cancellation_requests = [status for status in all_statuses if status.cancel_requested]
+
+print(f"Found {len(cancellation_requests)} submissions with cancellation requests")
+
+# Process cancellation requests
+for status in cancellation_requests:
+ print(f"Processing cancellation request for submission {status.id}")
+
+ # Update to allow cancellation (organizer decision)
+ status.can_cancel = True
+ status.status = "CANCELLED"
+ status.submission_annotations.update(
+ {
+ "cancellation_reason": ["User requested cancellation"],
+ "cancelled_by": ["organizer"],
+ "cancellation_date": ["2024-11-24"],
+ }
+ )
+
+ # Store the update
+ updated_status = status.store()
+ print(f" Approved cancellation for submission {updated_status.id}")
+
+# Example: Proactively allow cancellation for a specific submission
+print("\nEnabling cancellation for a specific submission...")
+target_status = SubmissionStatus(id=SUBMISSION_ID).get()
+target_status.can_cancel = True
+target_status = target_status.store()
+print(f"Cancellation enabled for submission {SUBMISSION_ID}")
+
+# ==============================================================================
+# 5. Delete submissions
+# ==============================================================================
+
+# print("\n=== 5. Deleting submissions ===")
+# print("Finding and deleting submissions that have been requested for cancellation...")
+
+# # Get all submission statuses to check for cancellation requests
+# all_statuses = SubmissionStatus.get_all_submission_statuses(
+# evaluation_id=EVALUATION_ID,
+# )
+
+# # Find submissions that have been requested for cancellation
+# submissions_to_delete = []
+# for status in all_statuses:
+# if status.cancel_requested:
+# submissions_to_delete.append(status.id)
+
+# print(f"Found {len(submissions_to_delete)} submissions with cancellation requests")
+
+# # Delete each submission that was requested for cancellation
+# for submission_id in submissions_to_delete:
+# submission = Submission(id=submission_id).get()
+# submission.delete()
+# print(f"Successfully deleted submission {submission_id}")
+
+# if submissions_to_delete:
+# print(f"Completed deletion of {len(submissions_to_delete)} requested submissions")
+
+print("\nDeletion step is commented out by default.")
+print("Uncomment the deletion code if you want to test this functionality.")
+
+print("\n=== Organizer tutorial completed! ===")
diff --git a/docs/tutorials/python/tutorial_scripts/submission_participant.py b/docs/tutorials/python/tutorial_scripts/submission_participant.py
new file mode 100644
index 000000000..83fb2b0d2
--- /dev/null
+++ b/docs/tutorials/python/tutorial_scripts/submission_participant.py
@@ -0,0 +1,147 @@
+"""
+Submission Participant Tutorial - Code for working with Submissions as a challenge participant.
+
+This tutorial demonstrates how to:
+1. Make a submission to an existing evaluation queue
+2. Fetch your existing submission
+3. Count your submissions
+4. Fetch all of your submissions from an evaluation queue
+5. Check the status of your submission
+6. Cancel your submission
+"""
+
+from synapseclient import Synapse
+from synapseclient.models import Submission, SubmissionStatus
+
+syn = Synapse()
+syn.login()
+
+# REQUIRED: Replace these sample IDs with your own Synapse IDs
+EVALUATION_ID = "9617645"  # The evaluation queue ID you want to submit to
+ENTITY_ID = "syn59211514"  # The entity ID you want to submit
+
+assert EVALUATION_ID is not None, "EVALUATION_ID must be set to the evaluation queue ID"
+assert (
+ ENTITY_ID is not None
+), "ENTITY_ID must be set to the entity ID you want to submit"
+
+print(f"Working with Evaluation: {EVALUATION_ID}")
+print(f"Submitting Entity: {ENTITY_ID}")
+
+# ==============================================================================
+# 1. Make a submission to an existing evaluation queue on Synapse
+# ==============================================================================
+
+print("\n=== 1. Making a submission ===")
+
+# Create a new Submission object
+submission = Submission(
+ entity_id=ENTITY_ID, evaluation_id=EVALUATION_ID, name="My Tutorial Submission"
+)
+
+# Submit the entity to the evaluation queue
+submission = submission.store()
+
+print("Submission created successfully!")
+print(f"Submission ID: {submission.id}")
+print(f"Submitted Entity: {submission.entity_id}")
+print(f"Evaluation: {submission.evaluation_id}")
+print(f"Submission Name: {submission.name}")
+print(f"Created On: {submission.created_on}")
+
+# Store the submission ID for later use
+submission_id = submission.id
+
+# ==============================================================================
+# 2. Fetch your existing submission
+# ==============================================================================
+
+print("\n=== 2. Fetching existing submission ===")
+
+# Retrieve the submission we just created
+retrieved_submission = Submission(id=submission_id).get()
+
+print("Retrieved submission:")
+print(f" ID: {retrieved_submission.id}")
+print(f" Name: {retrieved_submission.name}")
+print(f" Entity ID: {retrieved_submission.entity_id}")
+print(f" Submitter: {retrieved_submission.submitter_alias}")
+print(f" Created On: {retrieved_submission.created_on}")
+
+# ==============================================================================
+# 3. Count your submissions
+# ==============================================================================
+
+print("\n=== 3. Counting submissions ===")
+
+# Get the total count of submissions for this evaluation
+submission_count = Submission.get_submission_count(evaluation_id=EVALUATION_ID)
+
+print(f"Total submissions in evaluation: {submission_count}")
+
+# Get count of submissions with specific status (optional)
+scored_count = Submission.get_submission_count(
+ evaluation_id=EVALUATION_ID, status="SCORED"
+)
+
+print(f"SCORED submissions in evaluation: {scored_count}")
+
+# ==============================================================================
+# 4. Fetch all of your submissions from an existing evaluation queue
+# ==============================================================================
+
+print("\n=== 4. Fetching all your submissions ===")
+
+# Get all of your submissions for this evaluation
+user_submissions = list(Submission.get_user_submissions(evaluation_id=EVALUATION_ID))
+
+print(f"Found {len(user_submissions)} submissions from the current user:")
+for i, sub in enumerate(user_submissions, 1):
+ print(f" {i}. ID: {sub.id}, Name: {sub.name}, Created: {sub.created_on}")
+
+# ==============================================================================
+# 5. Check the status of your submission
+# ==============================================================================
+
+print("\n=== 5. Checking submission status ===")
+
+# Fetch the status of our submission
+status = SubmissionStatus(id=submission_id).get()
+
+print("Submission status details:")
+print(f" Status: {status.status}")
+print(f" Modified On: {status.modified_on}")
+print(f" Can Cancel: {status.can_cancel}")
+print(f" Cancel Requested: {status.cancel_requested}")
+
+# Check if there are any submission annotations (scores, feedback, etc.)
+if status.submission_annotations:
+    print("  Submission Annotations:")
+    for key, value in status.submission_annotations.items():
+        print(f"    {key}: {value}")
+else:
+    print("  No submission annotations available")
+
+# ==============================================================================
+# 6. Cancel your submission (optional)
+# ==============================================================================
+
+print("\n=== 6. Cancelling submission ===")
+
+# Note: Only cancel if the submission allows it
+# Uncomment the following lines if you want to test cancellation:
+
+# cancelled_submission = submission.cancel()
+# print(f"Submission {cancelled_submission.id} has been requested for cancellation")
+#
+# # Check the updated status
+# updated_status = SubmissionStatus(id=submission_id).get()
+# print(f"Cancel requested: {updated_status.cancel_requested}")
+
+print("\nCancellation is commented out by default.")
+print("Uncomment the cancellation code if you want to test this functionality.")
+
+print("\n=== Tutorial completed! ===")
+print(f"Your submission {submission_id} is ready for evaluation.")
+print("Check back later to see if the organizers have scored your submission.")
diff --git a/mkdocs.yml b/mkdocs.yml
index cca09f73a..864a9a4d6 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -28,6 +28,7 @@ nav:
- Folder: tutorials/python/folder.md
- File: tutorials/python/file.md
- Evaluation: tutorials/python/evaluation.md
+ - Submission: tutorials/python/submission.md
- Annotation: tutorials/python/annotation.md
# - Versions: tutorials/python/versions.md
# - Activity/Provenance: tutorials/python/activity.md
@@ -89,6 +90,9 @@ nav:
- Folder: reference/experimental/sync/folder.md
- File: reference/experimental/sync/file.md
- Evaluation: reference/experimental/sync/evaluation.md
+ - Submission: reference/experimental/sync/submission.md
+ - SubmissionStatus: reference/experimental/sync/submission_status.md
+ - SubmissionBundle: reference/experimental/sync/submission_bundle.md
- Table: reference/experimental/sync/table.md
- VirtualTable: reference/experimental/sync/virtualtable.md
- Dataset: reference/experimental/sync/dataset.md
@@ -113,6 +117,9 @@ nav:
- Folder: reference/experimental/async/folder.md
- File: reference/experimental/async/file.md
- Evaluation: reference/experimental/async/evaluation.md
+ - Submission: reference/experimental/async/submission.md
+ - SubmissionStatus: reference/experimental/async/submission_status.md
+ - SubmissionBundle: reference/experimental/async/submission_bundle.md
- Table: reference/experimental/async/table.md
- VirtualTable: reference/experimental/async/virtualtable.md
- Dataset: reference/experimental/async/dataset.md
diff --git a/synapseclient/annotations.py b/synapseclient/annotations.py
index f9d6ac124..5e4414d29 100644
--- a/synapseclient/annotations.py
+++ b/synapseclient/annotations.py
@@ -228,6 +228,127 @@ def to_submission_status_annotations(annotations, is_private=True):
return synapseAnnos
+def to_submission_annotations(
+ id: typing.Union[str, int],
+ etag: str,
+ annotations: typing.Dict[str, typing.Any],
+ logger: typing.Optional[typing.Any] = None,
+) -> typing.Dict[str, typing.Any]:
+ """
+ Converts a normal dictionary to the format used for submission annotations, which is different from the format
+ used to annotate entities.
+
+ This function creates the proper nested structure that includes id, etag, and annotations in the format
+ expected by the submissionAnnotations field of a SubmissionStatus request body.
+
+ Arguments:
+ id: The unique ID of the submission being annotated.
+ etag: The etag of the submission status for optimistic concurrency control.
+ annotations: A normal Python dictionary comprised of the annotations to be added.
+ logger: An optional logger instance. If not provided, a default logger will be used.
+
+ Returns:
+ A dictionary in the format expected by submissionAnnotations with nested structure containing
+ id, etag, and annotations object with type/value format.
+
+ Example: Using this function
+ Converting annotations to submission format
+
+ from synapseclient.annotations import to_submission_annotations
+
+ # Input annotations
+ my_annotations = {
+ "score": 85,
+ "feedback": "Good work!"
+ }
+
+            # Convert to submission annotations format
+            submission_annos = to_submission_annotations(
+                id="9999999",
+                etag="abc123",
+                annotations=my_annotations,
+            )
+
+            # Result:
+            # {
+            #     "id": "9999999",
+            #     "etag": "abc123",
+            #     "annotations": {
+            #         "score": {"type": "LONG", "value": [85]},
+            #         "feedback": {"type": "STRING", "value": ["Good work!"]},
+            #     }
+            # }
+
+    Note:
+        This function is designed specifically for the submissionAnnotations field format,
+        which is part of the creation of a SubmissionStatus request body.
+ """
+ # Create the base structure
+ submission_annos = {"id": str(id), "etag": str(etag), "annotations": {}}
+
+ # Convert each annotation to the proper nested format
+ for key, value in annotations.items():
+ # Ensure value is a list
+ if not isinstance(value, list):
+ value_list = [value]
+ else:
+ value_list = value
+
+ # Warn about empty annotation values and skip them
+ if not value_list:
+ if logger:
+ logger.warning(
+ f"Annotation '{key}' has an empty value list and will be skipped"
+ )
+ else:
+ from synapseclient import Synapse
+
+ client = Synapse.get_client()
+ client.logger.warning(
+ f"Annotation '{key}' has an empty value list and will be skipped"
+ )
+ continue
+
+ # Determine type based on the first element
+ first_element = value_list[0]
+
+ if isinstance(first_element, str):
+ submission_annos["annotations"][key] = {
+ "type": "STRING",
+ "value": value_list,
+ }
+ elif isinstance(first_element, bool):
+ # Convert booleans to lowercase strings
+ submission_annos["annotations"][key] = {
+ "type": "STRING",
+ "value": [str(v).lower() for v in value_list],
+ }
+ elif isinstance(first_element, int):
+ submission_annos["annotations"][key] = {"type": "LONG", "value": value_list}
+ elif isinstance(first_element, float):
+ submission_annos["annotations"][key] = {
+ "type": "DOUBLE",
+ "value": value_list,
+ }
+ elif is_date(first_element):
+ # Convert dates to unix timestamps
+ submission_annos["annotations"][key] = {
+ "type": "LONG",
+ "value": [to_unix_epoch_time(v) for v in value_list],
+ }
+ else:
+ # Default to string representation
+ submission_annos["annotations"][key] = {
+ "type": "STRING",
+ "value": [str(v) for v in value_list],
+ }
+
+ return submission_annos
+
+
# TODO: this should accept a status object and return its annotations or an empty dict if there are none
def from_submission_status_annotations(annotations) -> dict:
"""
diff --git a/synapseclient/api/__init__.py b/synapseclient/api/__init__.py
index 09c578ed2..1e02bf24c 100644
--- a/synapseclient/api/__init__.py
+++ b/synapseclient/api/__init__.py
@@ -64,15 +64,28 @@
update_entity_acl,
)
from .evaluation_services import (
+ batch_update_submission_statuses,
+ cancel_submission,
create_or_update_evaluation,
+ create_submission,
delete_evaluation,
+ delete_submission,
get_all_evaluations,
+ get_all_submission_statuses,
get_available_evaluations,
get_evaluation,
get_evaluation_acl,
get_evaluation_permissions,
+ get_evaluation_submission_bundles,
+ get_evaluation_submissions,
get_evaluations_by_project,
+ get_submission,
+ get_submission_count,
+ get_submission_status,
+ get_user_submission_bundles,
+ get_user_submissions,
update_evaluation_acl,
+ update_submission_status,
)
from .file_services import (
AddPartResponse,
@@ -282,4 +295,18 @@
"get_evaluation_acl",
"update_evaluation_acl",
"get_evaluation_permissions",
+ # submission-related evaluation services
+ "create_submission",
+ "get_submission",
+ "get_evaluation_submissions",
+ "get_user_submissions",
+ "get_submission_count",
+ "delete_submission",
+ "cancel_submission",
+ "get_submission_status",
+ "update_submission_status",
+ "get_all_submission_statuses",
+ "batch_update_submission_statuses",
+ "get_evaluation_submission_bundles",
+ "get_user_submission_bundles",
]
diff --git a/synapseclient/api/evaluation_services.py b/synapseclient/api/evaluation_services.py
index 8149618b3..80fee69a8 100644
--- a/synapseclient/api/evaluation_services.py
+++ b/synapseclient/api/evaluation_services.py
@@ -4,7 +4,9 @@
"""
import json
-from typing import TYPE_CHECKING, List, Optional
+from typing import TYPE_CHECKING, Any, AsyncGenerator, Dict, List, Optional
+
+from synapseclient.api.api_client import rest_get_paginated_async
if TYPE_CHECKING:
from synapseclient import Synapse
@@ -386,3 +388,465 @@ async def get_evaluation_permissions(
uri = f"/evaluation/{evaluation_id}/permissions"
return await client.rest_get_async(uri)
+
+
+async def create_submission(
+ request_body: dict, etag: str, synapse_client: Optional["Synapse"] = None
+) -> dict:
+ """
+ Creates a Submission and sends a submission notification email to the submitter's team members.
+
+
+
+ Arguments:
+ request_body: The request body to send to the server.
+ etag: The current eTag of the Entity being submitted.
+ synapse_client: If not passed in and caching was not disabled by `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+    Returns:
+        The created Submission.
+ """
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ uri = "/evaluation/submission"
+
+ # Add etag as query parameter if provided
+ params = {"etag": etag}
+
+ response = await client.rest_post_async(
+ uri, body=json.dumps(request_body), params=params
+ )
+
+ return response
+
+
+async def get_submission(
+ submission_id: str, synapse_client: Optional["Synapse"] = None
+) -> dict:
+ """
+ Retrieves a Submission by its ID.
+
+
+
+ Arguments:
+ submission_id: The ID of the submission to fetch.
+ synapse_client: If not passed in and caching was not disabled by `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ The requested Submission.
+ """
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ uri = f"/evaluation/submission/{submission_id}"
+
+ response = await client.rest_get_async(uri)
+
+ return response
+
+
+async def get_evaluation_submissions(
+ evaluation_id: str,
+ status: Optional[str] = None,
+ *,
+ synapse_client: Optional["Synapse"] = None,
+) -> AsyncGenerator[Dict[str, Any], None]:
+ """
+ Generator to get all Submissions for a specified Evaluation queue.
+
+
+
+ Arguments:
+ evaluation_id: The ID of the evaluation queue.
+ status: Optionally filter submissions by a submission status, such as SCORED, VALID,
+ INVALID, OPEN, CLOSED or EVALUATION_IN_PROGRESS.
+ synapse_client: If not passed in and caching was not disabled by `Synapse.allow_client_caching(False)`
+ this will use the last created instance from the Synapse class constructor.
+
+ Yields:
+ Individual Submission objects from each page of the response.
+ """
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ uri = f"/evaluation/{evaluation_id}/submission/all"
+ query_params = {}
+
+ if status:
+ query_params["status"] = status
+
+ async for item in rest_get_paginated_async(
+ uri=uri, params=query_params, synapse_client=client
+ ):
+ yield item
+
+
+async def get_user_submissions(
+ evaluation_id: str,
+ user_id: Optional[str] = None,
+ *,
+ synapse_client: Optional["Synapse"] = None,
+) -> AsyncGenerator[Dict[str, Any], None]:
+ """
+ Generator to get all user Submissions for a specified Evaluation queue.
+ If user_id is omitted, this returns the submissions of the caller.
+
+ Arguments:
+ evaluation_id: The ID of the evaluation queue.
+ user_id: Optionally specify the ID of the user whose submissions will be returned.
+ If omitted, this returns the submissions of the caller.
+ synapse_client: If not passed in and caching was not disabled by `Synapse.allow_client_caching(False)`
+ this will use the last created instance from the Synapse class constructor.
+
+ Yields:
+ Individual Submission objects from each page of the response.
+ """
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ uri = f"/evaluation/{evaluation_id}/submission"
+ query_params = {}
+
+ if user_id:
+ query_params["userId"] = user_id
+
+ async for item in rest_get_paginated_async(
+ uri=uri, params=query_params, synapse_client=client
+ ):
+ yield item
+
+
+async def get_submission_count(
+ evaluation_id: str,
+ status: Optional[str] = None,
+ synapse_client: Optional["Synapse"] = None,
+) -> dict:
+ """
+ Gets the number of Submissions for a specified Evaluation queue, optionally filtered by submission status.
+
+ Arguments:
+ evaluation_id: The ID of the evaluation queue.
+ status: Optionally filter submissions by a submission status, such as SCORED, VALID,
+ INVALID, OPEN, CLOSED or EVALUATION_IN_PROGRESS.
+ synapse_client: If not passed in and caching was not disabled by `Synapse.allow_client_caching(False)`
+ this will use the last created instance from the Synapse class constructor.
+
+ Returns:
+ A response JSON containing the submission count.
+ """
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ uri = f"/evaluation/{evaluation_id}/submission/count"
+ query_params = {}
+
+ if status:
+ query_params["status"] = status
+
+ response = await client.rest_get_async(uri, params=query_params)
+
+ return response
+
+
+async def delete_submission(
+ submission_id: str, synapse_client: Optional["Synapse"] = None
+) -> None:
+ """
+ Deletes a Submission and its SubmissionStatus.
+
+ Arguments:
+ submission_id: The ID of the submission to delete.
+ synapse_client: If not passed in and caching was not disabled by `Synapse.allow_client_caching(False)`
+ this will use the last created instance from the Synapse class constructor.
+ """
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ uri = f"/evaluation/submission/{submission_id}"
+
+ await client.rest_delete_async(uri)
+
+
+async def cancel_submission(
+ submission_id: str, synapse_client: Optional["Synapse"] = None
+) -> dict:
+ """
+ Cancels a Submission. Only the user who created the Submission may cancel it.
+
+ Arguments:
+ submission_id: The ID of the submission to cancel.
+ synapse_client: If not passed in and caching was not disabled by `Synapse.allow_client_caching(False)`
+ this will use the last created instance from the Synapse class constructor.
+
+ Returns:
+ The Submission response object for the canceled submission as a JSON dict.
+ """
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ uri = f"/evaluation/submission/{submission_id}/cancellation"
+
+ response = await client.rest_put_async(uri)
+
+ return response
+
+
+async def get_submission_status(
+ submission_id: str, synapse_client: Optional["Synapse"] = None
+) -> dict:
+ """
+ Gets the SubmissionStatus object associated with a specified Submission.
+
+ Arguments:
+ submission_id: The ID of the submission to get the status for.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ The SubmissionStatus object as a JSON dict.
+
+ Note:
+ The caller must be granted the ACCESS_TYPE.READ on the specified Evaluation.
+ Furthermore, the caller must be granted the ACCESS_TYPE.READ_PRIVATE_SUBMISSION
+ to see all data marked as "private" in the SubmissionStatus.
+ """
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ uri = f"/evaluation/submission/{submission_id}/status"
+
+ response = await client.rest_get_async(uri)
+
+ return response
+
+
+async def update_submission_status(
+ submission_id: str, request_body: dict, synapse_client: Optional["Synapse"] = None
+) -> dict:
+ """
+ Updates a SubmissionStatus object.
+
+ Arguments:
+ submission_id: The ID of the SubmissionStatus being updated.
+ request_body: The SubmissionStatus object to update as a dictionary.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ The updated SubmissionStatus object as a JSON dict.
+
+ Note:
+ Synapse employs an Optimistic Concurrency Control (OCC) scheme to handle
+ concurrent updates. Each time a SubmissionStatus is updated a new etag will be
+ issued to the SubmissionStatus. When an update is requested, Synapse will compare
+ the etag of the passed SubmissionStatus with the current etag of the SubmissionStatus.
+ If the etags do not match, then the update will be rejected with a PRECONDITION_FAILED
+ (412) response. When this occurs, the caller should fetch the latest copy of the
+ SubmissionStatus and re-apply any changes, then re-attempt the SubmissionStatus update.
+
+ The caller must be granted the ACCESS_TYPE.UPDATE_SUBMISSION on the specified Evaluation.
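+
+    Example: Retrying after a PRECONDITION_FAILED (412) response
+
+    A minimal, self-contained sketch of the fetch, re-apply, retry loop described
+    in the note above. The `get_status` and `put_status` callables are hypothetical
+    stand-ins for `get_submission_status` and `update_submission_status`; they are
+    not part of this module.
+
+    ```python
+    class PreconditionFailedError(Exception):
+        # Stand-in for the PRECONDITION_FAILED (412) error raised on stale etags
+        pass
+
+
+    def update_with_retry(get_status, put_status, changes, max_attempts=3):
+        for _ in range(max_attempts):
+            status = get_status()  # fetch the latest copy (with its etag)
+            status.update(changes)  # re-apply the desired changes
+            try:
+                return put_status(status)  # rejected with 412 if the etag is stale
+            except PreconditionFailedError:
+                continue  # another writer won; fetch the latest copy and retry
+        raise RuntimeError("SubmissionStatus update failed after retries")
+    ```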
+ """
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ uri = f"/evaluation/submission/{submission_id}/status"
+
+ response = await client.rest_put_async(uri, body=json.dumps(request_body))
+
+ return response
+
+
+async def get_all_submission_statuses(
+ evaluation_id: str,
+ status: Optional[str] = None,
+ limit: int = 10,
+ offset: int = 0,
+ synapse_client: Optional["Synapse"] = None,
+) -> dict:
+ """
+ Gets a collection of SubmissionStatuses to a specified Evaluation.
+
+ Arguments:
+ evaluation_id: The ID of the specified Evaluation.
+ status: Optionally filter submission statuses by status.
+        limit: Limits the number of entities that will be fetched for this page.
+            Maximum value 100. Defaults to 10.
+        offset: The offset index determines where this page will start from.
+            An index of 0 is the first entity. Defaults to 0.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ A PaginatedResults object as a JSON dict containing
+ a paginated list of submission statuses for the evaluation queue.
+
+ Note:
+ The caller must be granted the ACCESS_TYPE.READ on the specified Evaluation.
+ Furthermore, the caller must be granted the ACCESS_TYPE.READ_PRIVATE_SUBMISSION
+ to see all data marked as "private" in the SubmissionStatuses.
+ """
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ uri = f"/evaluation/{evaluation_id}/submission/status/all"
+ query_params = {"limit": limit, "offset": offset}
+
+ if status:
+ query_params["status"] = status
+
+ response = await client.rest_get_async(uri, params=query_params)
+
+ return response
+
+
+async def batch_update_submission_statuses(
+ evaluation_id: str, request_body: dict, synapse_client: Optional["Synapse"] = None
+) -> dict:
+ """
+ Update multiple SubmissionStatuses. The maximum batch size is 500.
+
+ Arguments:
+ evaluation_id: The ID of the Evaluation to which the SubmissionStatus objects belong.
+ request_body: The SubmissionStatusBatch object as a dictionary containing:
+ - statuses: List of SubmissionStatus objects to update
+ - isFirstBatch: Boolean indicating if this is the first batch in the series
+ - isLastBatch: Boolean indicating if this is the last batch in the series
+ - batchToken: Token from previous batch response (required for all but first batch)
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ A BatchUploadResponse object as a JSON dict containing the batch token
+ and other response information.
+
+ Note:
+ To allow upload of more than the maximum batch size (500), the system supports
+ uploading a series of batches. Synapse employs optimistic concurrency on the
+ series in the form of a batch token. Each request (except the first) must include
+ the 'batch token' returned in the response to the previous batch. If another client
+ begins batch upload simultaneously, a PRECONDITION_FAILED (412) response will be
+ generated and upload must restart from the first batch.
+
+ After the final batch is uploaded, the data for the Evaluation queue will be
+ mirrored to the tables which support querying. Therefore uploaded data will not
+ appear in Evaluation queries until after the final batch is successfully uploaded.
+
+ It is the client's responsibility to note in each batch request:
+ 1. Whether it is the first batch in the series (isFirstBatch)
+ 2. Whether it is the last batch (isLastBatch)
+
+ For a single batch both flags are set to 'true'.
+
+ The caller must be granted the ACCESS_TYPE.UPDATE_SUBMISSION on the specified Evaluation.
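+
+    Example: Splitting updates into a batch series
+
+    A pure-Python sketch (no service calls) of how a client might split a large
+    list of status updates into batches of at most 500 and set the flags described
+    above. `all_statuses` is placeholder data; in a real series, every request
+    after the first would also carry the `batchToken` from the previous response.
+
+    ```python
+    BATCH_SIZE = 500  # maximum number of statuses accepted per batch
+
+    all_statuses = [{"id": str(i)} for i in range(1200)]  # placeholder statuses
+
+    batches = [
+        all_statuses[i : i + BATCH_SIZE]
+        for i in range(0, len(all_statuses), BATCH_SIZE)
+    ]
+
+    requests = []
+    for index, batch in enumerate(batches):
+        request_body = {
+            "statuses": batch,
+            "isFirstBatch": index == 0,
+            "isLastBatch": index == len(batches) - 1,
+        }
+        # A real client would add request_body["batchToken"] here for every
+        # batch after the first, using the token from the previous response.
+        requests.append(request_body)
+    ```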
+ """
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ uri = f"/evaluation/{evaluation_id}/statusBatch"
+
+ response = await client.rest_put_async(uri, body=json.dumps(request_body))
+
+ return response
+
+
+async def get_evaluation_submission_bundles(
+ evaluation_id: str,
+ status: Optional[str] = None,
+ *,
+ synapse_client: Optional["Synapse"] = None,
+) -> AsyncGenerator[Dict[str, Any], None]:
+ """
+ Generator to get all bundled Submissions and SubmissionStatuses to a given Evaluation.
+
+ Arguments:
+ evaluation_id: The ID of the specified Evaluation.
+ status: Optionally filter submission bundles by status.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Yields:
+ Individual SubmissionBundle objects from each page of the response.
+
+ Note:
+ The caller must be granted the ACCESS_TYPE.READ_PRIVATE_SUBMISSION on the specified Evaluation.
+ """
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ uri = f"/evaluation/{evaluation_id}/submission/bundle/all"
+ query_params = {}
+
+ if status:
+ query_params["status"] = status
+
+ async for item in rest_get_paginated_async(
+ uri=uri, params=query_params, synapse_client=client
+ ):
+ yield item
+
+
+async def get_user_submission_bundles(
+ evaluation_id: str,
+ *,
+ synapse_client: Optional["Synapse"] = None,
+) -> AsyncGenerator[Dict[str, Any], None]:
+ """
+    Generator to get the requesting user's bundled Submissions and SubmissionStatuses for a specified Evaluation.
+
+ Arguments:
+ evaluation_id: The ID of the specified Evaluation.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Yields:
+ Individual SubmissionBundle objects from each page of the response.
+ """
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ uri = f"/evaluation/{evaluation_id}/submission/bundle"
+ query_params = {}
+
+ async for item in rest_get_paginated_async(
+ uri=uri, params=query_params, synapse_client=client
+ ):
+ yield item
diff --git a/synapseclient/evaluation.py b/synapseclient/evaluation.py
index 507896543..8a84e2aec 100644
--- a/synapseclient/evaluation.py
+++ b/synapseclient/evaluation.py
@@ -208,10 +208,17 @@ def putACLURI(self):
return "/evaluation/acl"
+@deprecated(
+ version="4.11.0",
+ reason="To be removed in 5.0.0. "
+ "Use the Submission model from synapseclient.models.submission instead.",
+)
class Submission(DictObject):
"""
Builds a Synapse submission object.
+ WARNING - This class is deprecated and will no longer be maintained. Please use the Submission model from synapseclient.models.submission instead.
+
Arguments:
name: Name of submission
entityId: Synapse ID of the Entity to submit
diff --git a/synapseclient/models/__init__.py b/synapseclient/models/__init__.py
index 9a5322727..700087ab5 100644
--- a/synapseclient/models/__init__.py
+++ b/synapseclient/models/__init__.py
@@ -25,6 +25,9 @@
from synapseclient.models.recordset import RecordSet
from synapseclient.models.schema_organization import JSONSchema, SchemaOrganization
from synapseclient.models.services import FailureStrategy
+from synapseclient.models.submission import Submission
+from synapseclient.models.submission_bundle import SubmissionBundle
+from synapseclient.models.submission_status import SubmissionStatus
from synapseclient.models.submissionview import SubmissionView
from synapseclient.models.table import Table
from synapseclient.models.table_components import (
@@ -128,6 +131,9 @@
"EntityRef",
"DatasetCollection",
# Submission models
+ "Submission",
+ "SubmissionBundle",
+ "SubmissionStatus",
"SubmissionView",
# JSON Schema models
"SchemaOrganization",
diff --git a/synapseclient/models/submission.py b/synapseclient/models/submission.py
new file mode 100644
index 000000000..f90660d26
--- /dev/null
+++ b/synapseclient/models/submission.py
@@ -0,0 +1,920 @@
+from dataclasses import dataclass, field
+from typing import AsyncGenerator, Dict, Generator, List, Optional, Protocol, Union
+
+from typing_extensions import Self
+
+from synapseclient import Synapse
+from synapseclient.api import evaluation_services
+from synapseclient.core.async_utils import (
+ async_to_sync,
+ otel_trace_method,
+ skip_async_to_sync,
+ wrap_async_generator_to_sync_generator,
+)
+from synapseclient.models.mixins.access_control import AccessControllable
+
+
+class SubmissionSynchronousProtocol(Protocol):
+ """Protocol defining the synchronous interface for Submission operations."""
+
+ def store(
+ self,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> "Self":
+ """
+ Store the submission in Synapse. This creates a new submission in an evaluation queue.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ The Submission object with the ID set.
+
+ Raises:
+ ValueError: If the submission is missing required fields, or if unable to fetch entity etag.
+
+ Example: Creating a submission
+
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+ submission = Submission(
+ entity_id="syn123456",
+ evaluation_id="9614543",
+ name="My Submission"
+ ).store()
+ print(submission.id)
+ ```
+ """
+ return self
+
+ def get(
+ self,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> "Self":
+ """
+ Retrieve a Submission from Synapse.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ The Submission instance retrieved from Synapse.
+
+ Raises:
+ ValueError: If the submission does not have an ID to get.
+
+ Example: Retrieving a submission by ID
+
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+        submission = Submission(id="9999999").get()
+ print(submission)
+ ```
+ """
+ return self
+
+ def delete(self, *, synapse_client: Optional[Synapse] = None) -> None:
+ """
+ Delete a Submission from Synapse.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Raises:
+ ValueError: If the submission does not have an ID to delete.
+
+ Example: Delete a submission
+
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+        submission = Submission(id="9999999")
+ submission.delete()
+ print("Deleted Submission.")
+ ```
+ """
+ pass
+
+ def cancel(
+ self,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> "Self":
+ """
+ Cancel a Submission. Only the user who created the Submission may cancel it.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ The updated Submission object.
+
+ Raises:
+ ValueError: If the submission does not have an ID to cancel.
+
+ Example: Cancel a submission
+
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+        submission = Submission(id="9999999")
+ canceled_submission = submission.cancel()
+ ```
+ """
+ return self
+
+ @classmethod
+ def get_evaluation_submissions(
+ cls,
+ evaluation_id: str,
+ status: Optional[str] = None,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> Generator["Submission", None, None]:
+ """
+ Retrieves all Submissions for a specified Evaluation queue.
+
+ Arguments:
+ evaluation_id: The ID of the evaluation queue.
+            status: Optionally filter submissions by a submission status, such as
+                SCORED, VALID, INVALID, OPEN, CLOSED or EVALUATION_IN_PROGRESS.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+        Yields:
+            Submission objects as they are retrieved from the API.
+
+ Example: Getting submissions for an evaluation
+
+ Get SCORED submissions from a specific evaluation.
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+ submissions = list(Submission.get_evaluation_submissions(
+ evaluation_id="9999999",
+ status="SCORED"
+ ))
+ print(f"Found {len(submissions)} submissions")
+ ```
+ """
+ yield from wrap_async_generator_to_sync_generator(
+ async_gen_func=cls.get_evaluation_submissions_async,
+ evaluation_id=evaluation_id,
+ status=status,
+ synapse_client=synapse_client,
+ )
+
+ @classmethod
+ def get_user_submissions(
+ cls,
+ evaluation_id: str,
+ user_id: Optional[str] = None,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> Generator["Submission", None, None]:
+ """
+ Retrieves all user Submissions for a specified Evaluation queue.
+ If user_id is omitted, this returns the submissions of the caller.
+
+ Arguments:
+ evaluation_id: The ID of the evaluation queue.
+ user_id: Optionally specify the ID of the user whose submissions will be returned.
+ If omitted, this returns the submissions of the caller.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+        Yields:
+            Submission objects as they are retrieved from the API.
+
+ Example: Getting user submissions
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+ submissions = list(Submission.get_user_submissions(
+ evaluation_id="9999999",
+ user_id="123456"
+ ))
+ print(f"Found {len(submissions)} user submissions")
+ ```
+ """
+ yield from wrap_async_generator_to_sync_generator(
+ async_gen_func=cls.get_user_submissions_async,
+ evaluation_id=evaluation_id,
+ user_id=user_id,
+ synapse_client=synapse_client,
+ )
+
+ @staticmethod
+ def get_submission_count(
+ evaluation_id: str,
+ status: Optional[str] = None,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> Dict:
+ """
+ Gets the number of Submissions for a specified Evaluation queue, optionally filtered by submission status.
+
+ Arguments:
+ evaluation_id: The ID of the evaluation queue.
+ status: Optionally filter submissions by a submission status, such as SCORED, VALID,
+ INVALID, OPEN, CLOSED or EVALUATION_IN_PROGRESS.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ A response JSON containing the submission count.
+
+ Example: Getting submission count
+
+ Get the total number of SCORED submissions from a specific evaluation.
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+ response = Submission.get_submission_count(
+ evaluation_id="9999999",
+ status="SCORED"
+ )
+        print(f"Submission count response: {response}")
+ ```
+ """
+ return {}
+
+
+@dataclass
+@async_to_sync
+class Submission(
+ SubmissionSynchronousProtocol,
+ AccessControllable,
+):
+ """A `Submission` object represents a Synapse Submission, which is created when a user
+ submits an entity to an evaluation queue.
+
+ Attributes:
+ id: The unique ID of this Submission.
+ user_id: The ID of the user that submitted this Submission.
+ submitter_alias: The name of the user that submitted this Submission.
+ entity_id: The ID of the entity being submitted.
+ version_number: The version number of the entity at submission.
+ evaluation_id: The ID of the Evaluation to which this Submission belongs.
+ name: The name of this Submission.
+ created_on: The date this Submission was created.
+ team_id: The ID of the team that submitted this submission (if it's a team submission).
+ contributors: User IDs of team members who contributed to this submission (if it's a team submission).
+ submission_status: The status of this Submission.
+ entity_bundle_json: The bundled entity information at submission. This includes the entity, annotations,
+ file handles, and other metadata.
+ docker_repository_name: For Docker repositories, the repository name.
+ docker_digest: For Docker repositories, the digest of the submitted Docker image.
+
+ Example: Retrieve a Submission.
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+    submission = Submission(id="9999999").get()
+ print(submission)
+ ```
+ """
+
+ id: Optional[str] = None
+ """
+ The unique ID of this Submission.
+ """
+
+ user_id: Optional[str] = None
+ """
+ The ID of the user that submitted this Submission.
+ """
+
+ submitter_alias: Optional[str] = None
+ """
+ The name of the user that submitted this Submission.
+ """
+
+ entity_id: Optional[str] = None
+ """
+ The ID of the entity being submitted.
+ """
+
+ version_number: Optional[int] = field(default=None, compare=False)
+ """
+    The version number of the entity at submission. If not provided, it will be
+    automatically retrieved from the entity. If the entity is a Docker repository,
+    this attribute should be ignored in favor of `docker_digest` or `docker_tag`.
+ """
+
+ evaluation_id: Optional[str] = None
+ """
+ The ID of the Evaluation to which this Submission belongs.
+ """
+
+ name: Optional[str] = None
+ """
+ The name of this Submission.
+ """
+
+ created_on: Optional[str] = field(default=None, compare=False)
+ """
+ The date this Submission was created.
+ """
+
+ team_id: Optional[str] = None
+ """
+ The ID of the team that submitted this submission (if it's a team submission).
+ """
+
+ contributors: List[str] = field(default_factory=list)
+ """
+ User IDs of team members who contributed to this submission (if it's a team submission).
+ """
+
+ submission_status: Optional[Dict] = None
+ """
+ The status of this Submission.
+ """
+
+ entity_bundle_json: Optional[str] = None
+ """
+ The bundled entity information at submission. This includes the entity, annotations,
+ file handles, and other metadata.
+ """
+
+ docker_repository_name: Optional[str] = None
+ """
+ For Docker repositories, the repository name.
+ """
+
+ docker_digest: Optional[str] = None
+ """
+ For Docker repositories, the digest of the submitted Docker image.
+ """
+
+ docker_tag: Optional[str] = None
+ """
+ For Docker repositories, the tag of the submitted Docker image.
+ """
+
+ etag: Optional[str] = None
+ """The current eTag of the Entity being submitted. If not provided, it will be automatically retrieved."""
+
+ def fill_from_dict(
+ self, synapse_submission: Dict[str, Union[bool, str, int, List]]
+ ) -> "Submission":
+ """
+ Converts a response from the REST API into this dataclass.
+
+ Arguments:
+ synapse_submission: The response from the REST API.
+
+ Returns:
+ The Submission object.
+ """
+ self.id = synapse_submission.get("id", None)
+ self.user_id = synapse_submission.get("userId", None)
+ self.submitter_alias = synapse_submission.get("submitterAlias", None)
+ self.entity_id = synapse_submission.get("entityId", None)
+ self.version_number = synapse_submission.get("versionNumber", None)
+ self.evaluation_id = synapse_submission.get("evaluationId", None)
+ self.name = synapse_submission.get("name", None)
+ self.created_on = synapse_submission.get("createdOn", None)
+ self.team_id = synapse_submission.get("teamId", None)
+ self.contributors = synapse_submission.get("contributors", [])
+ self.submission_status = synapse_submission.get("submissionStatus", None)
+ self.entity_bundle_json = synapse_submission.get("entityBundleJSON", None)
+ self.docker_repository_name = synapse_submission.get(
+ "dockerRepositoryName", None
+ )
+ self.docker_digest = synapse_submission.get("dockerDigest", None)
+
+ return self
+
+ async def _fetch_latest_entity(
+ self, *, synapse_client: Optional[Synapse] = None
+ ) -> Dict:
+ """
+ Fetch the latest entity information from Synapse.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ Dictionary containing entity information from the REST API.
+
+ Raises:
+ ValueError: If entity_id is not set or if unable to fetch entity information.
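+
+        Example: Selecting the latest Docker tag
+
+        A self-contained sketch (on placeholder data) of the "latest entry by
+        createdOn" selection this method applies to the docker tag results:
+
+        ```python
+        from datetime import datetime
+
+        # Placeholder results; real entries come from the dockerTag endpoint
+        results = [
+            {"tag": "v1", "digest": "sha256:aaa", "createdOn": "2024-01-01T00:00:00.000Z"},
+            {"tag": "v2", "digest": "sha256:bbb", "createdOn": "2024-06-01T00:00:00.000Z"},
+        ]
+
+        # Normalize the trailing "Z" so datetime.fromisoformat can parse it
+        latest = max(
+            results,
+            key=lambda x: datetime.fromisoformat(x["createdOn"].replace("Z", "+00:00")),
+        )
+        ```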
+ """
+ if not self.entity_id:
+ raise ValueError("entity_id must be set to fetch entity information")
+
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+
+ try:
+ entity_info = await client.rest_get_async(f"/entity/{self.entity_id}")
+
+ # If this is a DockerRepository, fetch docker image tag & digest, and add it to the entity_info dict
+ if (
+ entity_info.get("concreteType")
+ == "org.sagebionetworks.repo.model.docker.DockerRepository"
+ ):
+ docker_tag_response = await client.rest_get_async(
+ f"/entity/{self.entity_id}/dockerTag"
+ )
+
+ # Get the latest digest from the docker tag results
+ if "results" in docker_tag_response and docker_tag_response["results"]:
+ # Sort by createdOn timestamp to get the latest entry
+ # Convert ISO timestamp strings to datetime objects for comparison
+ from datetime import datetime
+
+ latest_result = max(
+ docker_tag_response["results"],
+ key=lambda x: datetime.fromisoformat(
+ x["createdOn"].replace("Z", "+00:00")
+ ),
+ )
+
+ # Add the latest result to entity_info
+ entity_info.update(latest_result)
+
+ return entity_info
+        except Exception as e:
+            raise ValueError(
+                f"Unable to fetch entity information for {self.entity_id}: {e}"
+            ) from e
+
+ def to_synapse_request(self) -> Dict:
+ """Creates a request body expected of the Synapse REST API for the Submission model.
+
+ Returns:
+ A dictionary containing the request body for creating a submission.
+
+ Raises:
+ ValueError: If any required attributes are missing.
+ """
+ # These attributes are required for creating a submission
+ required_attributes = ["entity_id", "evaluation_id"]
+
+ for attribute in required_attributes:
+ if not getattr(self, attribute):
+ raise ValueError(
+ f"Your submission object is missing the '{attribute}' attribute. This attribute is required to create a submission"
+ )
+
+ # Build a request body for creating a submission
+ request_body = {
+ "entityId": self.entity_id,
+ "evaluationId": self.evaluation_id,
+ "versionNumber": self.version_number,
+ }
+
+ # Add optional fields if they are set
+ if self.name is not None:
+ request_body["name"] = self.name
+ if self.team_id is not None:
+ request_body["teamId"] = self.team_id
+ if self.contributors:
+ request_body["contributors"] = self.contributors
+ if self.docker_repository_name is not None:
+ request_body["dockerRepositoryName"] = self.docker_repository_name
+ if self.docker_digest is not None:
+ request_body["dockerDigest"] = self.docker_digest
+
+ return request_body
+
+ @otel_trace_method(
+ method_to_trace_name=lambda self, **kwargs: f"Submission_Store: {self.id if self.id else 'new_submission'}"
+ )
+ async def store_async(
+ self,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> "Submission":
+ """
+ Store the submission in Synapse. This creates a new submission in an evaluation queue.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ The Submission object with the ID set.
+
+ Raises:
+ ValueError: If the submission is missing required fields, or if unable to fetch entity etag.
+
+ Example: Creating a submission
+
+ ```python
+ import asyncio
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+ async def create_submission_example():
+
+ submission = Submission(
+ entity_id="syn123456",
+ evaluation_id="9614543",
+ name="My Submission"
+ )
+ submission = await submission.store_async()
+ print(submission.id)
+
+ asyncio.run(create_submission_example())
+ ```
+ """
+
+        if self.entity_id:
+            entity_info = await self._fetch_latest_entity(synapse_client=synapse_client)
+
+            # Honor a caller-supplied etag; otherwise use the entity's current etag
+            self.etag = self.etag or entity_info.get("etag")
+
+            if (
+                entity_info.get("concreteType")
+                == "org.sagebionetworks.repo.model.FileEntity"
+            ):
+                self.version_number = entity_info.get("versionNumber")
+            elif (
+                entity_info.get("concreteType")
+                == "org.sagebionetworks.repo.model.docker.DockerRepository"
+            ):
+                self.docker_repository_name = entity_info.get("repositoryName")
+                self.docker_digest = entity_info.get("digest")
+                self.docker_tag = entity_info.get("tag")
+                # All docker repositories are assigned version number 1, even if they have multiple tags
+                self.version_number = 1
+        else:
+            raise ValueError("entity_id is required to create a submission")
+
+        if not self.etag:
+            raise ValueError("Unable to fetch etag for entity")
+
+        # Build the request body now that all the necessary dataclass attributes are set
+        request_body = self.to_synapse_request()
+
+        response = await evaluation_services.create_submission(
+            request_body, self.etag, synapse_client=synapse_client
+        )
+ self.fill_from_dict(response)
+ return self
+
+ @otel_trace_method(
+ method_to_trace_name=lambda self, **kwargs: f"Submission_Get: {self.id}"
+ )
+ async def get_async(
+ self,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> "Submission":
+ """
+ Retrieve a Submission from Synapse.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ The Submission instance retrieved from Synapse.
+
+ Raises:
+ ValueError: If the submission does not have an ID to get.
+
+ Example: Retrieving a submission by ID
+
+ ```python
+ import asyncio
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+ async def get_submission_example():
+
+ submission = await Submission(id="9999999").get_async()
+ print(submission)
+
+ asyncio.run(get_submission_example())
+ ```
+ """
+ if not self.id:
+ raise ValueError("The submission must have an ID to get.")
+
+ response = await evaluation_services.get_submission(
+ submission_id=self.id, synapse_client=synapse_client
+ )
+
+ self.fill_from_dict(response)
+
+ return self
+
+ @skip_async_to_sync
+ @classmethod
+ async def get_evaluation_submissions_async(
+ cls,
+ evaluation_id: str,
+ status: Optional[str] = None,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> AsyncGenerator["Submission", None]:
+ """
+ Generator to get all Submissions for a specified Evaluation queue.
+
+ Arguments:
+ evaluation_id: The ID of the evaluation queue.
+ status: Optionally filter submissions by a submission status, such as SCORED,
+ VALID, INVALID, OPEN, CLOSED or EVALUATION_IN_PROGRESS.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Yields:
+ Individual Submission objects from each page of the response.
+
+ Example: Getting submissions for an evaluation
+
+ Get SCORED submissions from a specific evaluation.
+ ```python
+ import asyncio
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+ async def get_evaluation_submissions_example():
+ submissions = []
+ async for submission in Submission.get_evaluation_submissions_async(
+ evaluation_id="9999999",
+ status="SCORED"
+ ):
+ submissions.append(submission)
+ print(f"Found {len(submissions)} submissions")
+
+ asyncio.run(get_evaluation_submissions_example())
+ ```
+ """
+ async for submission_data in evaluation_services.get_evaluation_submissions(
+ evaluation_id=evaluation_id,
+ status=status,
+ synapse_client=synapse_client,
+ ):
+ submission_object = cls().fill_from_dict(synapse_submission=submission_data)
+ yield submission_object
+
+ @skip_async_to_sync
+ @classmethod
+ async def get_user_submissions_async(
+ cls,
+ evaluation_id: str,
+ user_id: Optional[str] = None,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> AsyncGenerator["Submission", None]:
+ """
+ Generator to get all user Submissions for a specified Evaluation queue.
+ If user_id is omitted, this returns the submissions of the caller.
+
+ Arguments:
+ evaluation_id: The ID of the evaluation queue.
+ user_id: Optionally specify the ID of the user whose submissions will be returned.
+ If omitted, this returns the submissions of the caller.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Yields:
+ Individual Submission objects from each page of the response.
+
+ Example: Getting user submissions
+ ```python
+ import asyncio
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+ async def get_user_submissions_example():
+ submissions = []
+ async for submission in Submission.get_user_submissions_async(
+ evaluation_id="9999999",
+ user_id="123456"
+ ):
+ submissions.append(submission)
+ print(f"Found {len(submissions)} user submissions")
+
+ asyncio.run(get_user_submissions_example())
+ ```
+ """
+ async for submission_data in evaluation_services.get_user_submissions(
+ evaluation_id=evaluation_id,
+ user_id=user_id,
+ synapse_client=synapse_client,
+ ):
+ submission_object = cls().fill_from_dict(synapse_submission=submission_data)
+ yield submission_object
+
+ @staticmethod
+ async def get_submission_count_async(
+ evaluation_id: str,
+ status: Optional[str] = None,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> Dict:
+ """
+ Gets the number of Submissions for a specified Evaluation queue, optionally filtered by submission status.
+
+ Arguments:
+ evaluation_id: The ID of the evaluation queue.
+ status: Optionally filter submissions by a submission status, such as SCORED, VALID,
+ INVALID, OPEN, CLOSED or EVALUATION_IN_PROGRESS.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ A response JSON containing the submission count.
+
+ Example: Getting submission count
+
+ Get the total number of SCORED submissions from a specific evaluation.
+ ```python
+ import asyncio
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+ async def get_submission_count_example():
+ response = await Submission.get_submission_count_async(
+ evaluation_id="9999999",
+ status="SCORED"
+ )
+ print(f"Submission count response: {response}")
+
+ asyncio.run(get_submission_count_example())
+ ```
+ """
+ return await evaluation_services.get_submission_count(
+ evaluation_id=evaluation_id, status=status, synapse_client=synapse_client
+ )
+
+ @otel_trace_method(
+ method_to_trace_name=lambda self, **kwargs: f"Submission_Delete: {self.id}"
+ )
+ async def delete_async(
+ self,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> None:
+ """
+ Delete a Submission from Synapse.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Raises:
+ ValueError: If the submission does not have an ID to delete.
+
+ Example: Delete a submission
+
+ ```python
+ import asyncio
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+ async def delete_submission_example():
+ submission = Submission(id="9999999")
+ await submission.delete_async()
+ print("Submission deleted successfully")
+
+ # Run the async function
+ asyncio.run(delete_submission_example())
+ ```
+ """
+ if not self.id:
+ raise ValueError("The submission must have an ID to delete.")
+
+ await evaluation_services.delete_submission(
+ submission_id=self.id, synapse_client=synapse_client
+ )
+
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+ logger = client.logger
+ logger.info(f"Submission {self.id} has successfully been deleted.")
+
+ @otel_trace_method(
+ method_to_trace_name=lambda self, **kwargs: f"Submission_Cancel: {self.id}"
+ )
+ async def cancel_async(
+ self,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> "Submission":
+ """
+ Cancel a Submission. Only the user who created the Submission may cancel it.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ The updated Submission object.
+
+ Raises:
+ ValueError: If the submission does not have an ID to cancel.
+
+ Example: Cancel a submission
+
+ ```python
+ import asyncio
+ from synapseclient import Synapse
+ from synapseclient.models import Submission
+
+ syn = Synapse()
+ syn.login()
+
+ async def cancel_submission_example():
+ submission = Submission(id="9999999")
+ canceled_submission = await submission.cancel_async()
+ print(f"Canceled submission: {canceled_submission.id}")
+
+ # Run the async function
+ asyncio.run(cancel_submission_example())
+ ```
+ """
+ if not self.id:
+ raise ValueError("The submission must have an ID to cancel.")
+
+ await evaluation_services.cancel_submission(
+ submission_id=self.id, synapse_client=synapse_client
+ )
+
+ from synapseclient import Synapse
+
+ client = Synapse.get_client(synapse_client=synapse_client)
+ logger = client.logger
+ logger.info(f"A request to cancel Submission {self.id} has been submitted.")
diff --git a/synapseclient/models/submission_bundle.py b/synapseclient/models/submission_bundle.py
new file mode 100644
index 000000000..ffd56d489
--- /dev/null
+++ b/synapseclient/models/submission_bundle.py
@@ -0,0 +1,335 @@
+from dataclasses import dataclass
+from typing import (
+ TYPE_CHECKING,
+ AsyncGenerator,
+ Dict,
+ Generator,
+ Optional,
+ Protocol,
+ Union,
+)
+
+from synapseclient import Synapse
+from synapseclient.api import evaluation_services
+from synapseclient.core.async_utils import (
+ async_to_sync,
+ skip_async_to_sync,
+ wrap_async_generator_to_sync_generator,
+)
+
+if TYPE_CHECKING:
+ from synapseclient.models.submission import Submission
+ from synapseclient.models.submission_status import SubmissionStatus
+
+
+class SubmissionBundleSynchronousProtocol(Protocol):
+ """Protocol defining the synchronous interface for SubmissionBundle operations."""
+
+ @classmethod
+ def get_evaluation_submission_bundles(
+ cls,
+ evaluation_id: str,
+ status: Optional[str] = None,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> Generator["SubmissionBundle", None, None]:
+ """
+ Retrieves bundled Submissions and SubmissionStatuses for a given Evaluation.
+
+ Arguments:
+ evaluation_id: The ID of the specified Evaluation.
+ status: Optionally filter submission bundles by status.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Yields:
+ SubmissionBundle objects as they are retrieved from the API.
+
+ Note:
+ The caller must be granted the ACCESS_TYPE.READ_PRIVATE_SUBMISSION on the specified Evaluation.
+
+ Example: Getting submission bundles for an evaluation
+
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionBundle
+
+ syn = Synapse()
+ syn.login()
+
+ bundles = list(SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id="9614543",
+ status="SCORED"
+ ))
+ print(f"Found {len(bundles)} submission bundles")
+ for bundle in bundles:
+ print(f"Submission ID: {bundle.submission.id if bundle.submission else 'N/A'}")
+ ```
+ """
+ yield from wrap_async_generator_to_sync_generator(
+ async_gen_func=cls.get_evaluation_submission_bundles_async,
+ evaluation_id=evaluation_id,
+ status=status,
+ synapse_client=synapse_client,
+ )
+
+ @classmethod
+ def get_user_submission_bundles(
+ cls,
+ evaluation_id: str,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> Generator["SubmissionBundle", None, None]:
+ """
+ Retrieves all user bundled Submissions and SubmissionStatuses for a specified Evaluation.
+
+ Arguments:
+ evaluation_id: The ID of the specified Evaluation.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Yields:
+ SubmissionBundle objects as they are retrieved from the API.
+
+ Example: Getting user submission bundles
+
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionBundle
+
+ syn = Synapse()
+ syn.login()
+
+ bundles = list(SubmissionBundle.get_user_submission_bundles(
+ evaluation_id="9999999"
+ ))
+ print(f"Found {len(bundles)} user submission bundles")
+ for bundle in bundles:
+ print(f"Submission ID: {bundle.submission.id}")
+ ```
+ """
+ yield from wrap_async_generator_to_sync_generator(
+ async_gen_func=cls.get_user_submission_bundles_async,
+ evaluation_id=evaluation_id,
+ synapse_client=synapse_client,
+ )
+
+
+@dataclass
+@async_to_sync
+class SubmissionBundle(SubmissionBundleSynchronousProtocol):
+ """A `SubmissionBundle` object represents a bundle containing a Synapse Submission
+ and its accompanying SubmissionStatus. This bundle provides convenient access to both
+ the submission data and its current status in a single object.
+
+ Attributes:
+ submission: A Submission to a Synapse Evaluation is a pointer to a versioned Entity.
+ Submissions are immutable, so we archive a copy of the EntityBundle at the time of submission.
+ submission_status: A SubmissionStatus is a secondary, mutable object associated with a Submission.
+ This object should be used to contain scoring data about the Submission.
+
+ Example: Retrieve submission bundles for an evaluation.
+
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionBundle
+
+ syn = Synapse()
+ syn.login()
+
+ # Get all submission bundles for an evaluation
+ bundles = SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id="9614543",
+ status="SCORED"
+ )
+
+ for bundle in bundles:
+ print(f"Submission ID: {bundle.submission.id if bundle.submission else 'N/A'}")
+ print(f"Status: {bundle.submission_status.status if bundle.submission_status else 'N/A'}")
+ ```
+
+ Example: Retrieve user submission bundles for an evaluation.
+
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionBundle
+
+ syn = Synapse()
+ syn.login()
+
+ # Get current user's submission bundles for an evaluation
+ user_bundles = SubmissionBundle.get_user_submission_bundles(
+ evaluation_id="9999999"
+ )
+
+ for bundle in user_bundles:
+ print(f"Submission ID: {bundle.submission.id}")
+ print(f"Status: {bundle.submission_status.status}")
+ ```
+ """
+
+ submission: Optional["Submission"] = None
+ """
+ A Submission to a Synapse Evaluation is a pointer to a versioned Entity.
+ Submissions are immutable, so we archive a copy of the EntityBundle at the time of submission.
+ """
+
+ submission_status: Optional["SubmissionStatus"] = None
+ """
+ A SubmissionStatus is a secondary, mutable object associated with a Submission.
+ This object should be used to contain scoring data about the Submission.
+ """
+
+ def fill_from_dict(
+ self,
+ synapse_submission_bundle: Dict[str, Union[bool, str, int, Dict]],
+ ) -> "SubmissionBundle":
+ """
+ Converts a response from the REST API into this dataclass.
+
+ Arguments:
+ synapse_submission_bundle: The response from the REST API.
+
+ Returns:
+ The SubmissionBundle object.
+ """
+ from synapseclient.models.submission import Submission
+ from synapseclient.models.submission_status import SubmissionStatus
+
+ submission_dict = synapse_submission_bundle.get("submission", None)
+ if submission_dict:
+ self.submission = Submission().fill_from_dict(submission_dict)
+ else:
+ self.submission = None
+
+ submission_status_dict = synapse_submission_bundle.get("submissionStatus", None)
+ if submission_status_dict:
+ self.submission_status = SubmissionStatus().fill_from_dict(
+ submission_status_dict
+ )
+ # Manually set evaluation_id from the submission data if available
+ if (
+ self.submission_status
+ and self.submission
+ and self.submission.evaluation_id
+ ):
+ self.submission_status.evaluation_id = self.submission.evaluation_id
+ else:
+ self.submission_status = None
+
+ return self
+
+ @skip_async_to_sync
+ @classmethod
+ async def get_evaluation_submission_bundles_async(
+ cls,
+ evaluation_id: str,
+ status: Optional[str] = None,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> AsyncGenerator["SubmissionBundle", None]:
+ """
+ Generator to get all bundled Submissions and SubmissionStatuses for a given Evaluation.
+
+ Arguments:
+ evaluation_id: The ID of the specified Evaluation.
+ status: Optionally filter submission bundles by status.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Yields:
+ Individual SubmissionBundle objects from each page of the response.
+
+ Note:
+ The caller must be granted the ACCESS_TYPE.READ_PRIVATE_SUBMISSION on the specified Evaluation.
+
+ Example: Getting submission bundles for an evaluation
+
+ ```python
+ import asyncio
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionBundle
+
+ syn = Synapse()
+ syn.login()
+
+ async def get_submission_bundles_example():
+ bundles = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id="9999999",
+ status="SCORED"
+ ):
+ bundles.append(bundle)
+ print(f"Found {len(bundles)} submission bundles")
+ for bundle in bundles:
+ print(f"Submission ID: {bundle.submission.id}")
+
+ asyncio.run(get_submission_bundles_example())
+ ```
+ """
+ async for bundle_data in evaluation_services.get_evaluation_submission_bundles(
+ evaluation_id=evaluation_id,
+ status=status,
+ synapse_client=synapse_client,
+ ):
+ bundle = cls()
+ bundle.fill_from_dict(bundle_data)
+ yield bundle
+
+ @skip_async_to_sync
+ @classmethod
+ async def get_user_submission_bundles_async(
+ cls,
+ evaluation_id: str,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> AsyncGenerator["SubmissionBundle", None]:
+ """
+ Generator to get all user bundled Submissions and SubmissionStatuses for a specified Evaluation.
+
+ Arguments:
+ evaluation_id: The ID of the specified Evaluation.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Yields:
+ Individual SubmissionBundle objects from each page of the response.
+
+ Example: Getting user submission bundles
+
+ ```python
+ import asyncio
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionBundle
+
+ syn = Synapse()
+ syn.login()
+
+ async def get_user_submission_bundles_example():
+ bundles = []
+ async for bundle in SubmissionBundle.get_user_submission_bundles_async(
+ evaluation_id="9999999"
+ ):
+ bundles.append(bundle)
+ print(f"Found {len(bundles)} user submission bundles")
+ for bundle in bundles:
+ print(f"Submission ID: {bundle.submission.id}")
+
+ asyncio.run(get_user_submission_bundles_example())
+ ```
+ """
+ async for bundle_data in evaluation_services.get_user_submission_bundles(
+ evaluation_id=evaluation_id,
+ synapse_client=synapse_client,
+ ):
+ bundle = cls()
+ bundle.fill_from_dict(bundle_data)
+ yield bundle
diff --git a/synapseclient/models/submission_status.py b/synapseclient/models/submission_status.py
new file mode 100644
index 000000000..d8be2c913
--- /dev/null
+++ b/synapseclient/models/submission_status.py
@@ -0,0 +1,844 @@
+from dataclasses import dataclass, field, replace
+from datetime import date, datetime
+from typing import Dict, List, Optional, Protocol, Union
+
+from typing_extensions import Self
+
+from synapseclient import Synapse
+from synapseclient.annotations import (
+ to_submission_annotations,
+ to_submission_status_annotations,
+)
+from synapseclient.api import evaluation_services
+from synapseclient.core.async_utils import async_to_sync, otel_trace_method
+from synapseclient.core.utils import merge_dataclass_entities
+from synapseclient.models import Annotations
+from synapseclient.models.mixins.access_control import AccessControllable
+
+
+class SubmissionStatusSynchronousProtocol(Protocol):
+ """Protocol defining the synchronous interface for SubmissionStatus operations."""
+
+ def get(
+ self,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> "Self":
+ """
+ Retrieve a SubmissionStatus from Synapse.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ The SubmissionStatus instance retrieved from Synapse.
+
+ Raises:
+ ValueError: If the submission status does not have an ID to get.
+
+ Example: Retrieving a submission status by ID
+
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionStatus
+
+ syn = Synapse()
+ syn.login()
+
+ status = SubmissionStatus(id="9999999").get()
+ print(status)
+ ```
+ """
+ return self
+
+ def store(
+ self,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> "Self":
+ """
+ Store (update) the SubmissionStatus in Synapse.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ The updated SubmissionStatus object.
+
+ Raises:
+ ValueError: If the submission status is missing required fields.
+
+ Example: Update a submission status
+
+ Update an existing submission status by first retrieving it, then modifying fields and storing the changes.
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionStatus
+
+ syn = Synapse()
+ syn.login()
+
+ # Get existing status
+ status = SubmissionStatus(id="9999999").get()
+
+ # Update fields
+ status.status = "SCORED"
+ status.submission_annotations = {"score": [85.5]}
+
+ # Store the update
+ status = status.store()
+ print("Updated status:")
+ print(status)
+ ```
+ """
+ return self
+
+ @staticmethod
+ def get_all_submission_statuses(
+ evaluation_id: str,
+ status: Optional[str] = None,
+ limit: int = 10,
+ offset: int = 0,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> List["SubmissionStatus"]:
+ """
+ Gets a collection of SubmissionStatuses to a specified Evaluation.
+
+ Arguments:
+ evaluation_id: The ID of the specified Evaluation.
+ status: Optionally filter submission statuses by status.
+ limit: Limits the number of submission statuses fetched for this page.
+ Defaults to 10; the maximum value is 100.
+ offset: The index at which this page starts. An index of 0 is the
+ first entity. Defaults to 0.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ A list of SubmissionStatus objects for the evaluation queue.
+
+ Example: Getting all submission statuses for an evaluation
+
+ Retrieve a list of submission statuses for a specific evaluation, optionally filtered by status.
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionStatus
+
+ syn = Synapse()
+ syn.login()
+
+ statuses = SubmissionStatus.get_all_submission_statuses(
+ evaluation_id="9614543",
+ status="SCORED",
+ limit=50
+ )
+ print(f"Found {len(statuses)} submission statuses")
+ for status in statuses:
+ print(f"Status ID: {status.id}, Status: {status.status}")
+ ```
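+
+ Example: Paging through all submission statuses
+
+ A sketch of retrieving every status page by page using `limit` and
+ `offset`; 100 is the documented maximum page size.
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionStatus
+
+ syn = Synapse()
+ syn.login()
+
+ all_statuses = []
+ offset = 0
+ while True:
+ page = SubmissionStatus.get_all_submission_statuses(
+ evaluation_id="9614543",
+ limit=100,
+ offset=offset
+ )
+ all_statuses.extend(page)
+ if len(page) < 100:
+ break
+ offset += 100
+ print(f"Found {len(all_statuses)} submission statuses")
+ ```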
+ """
+ return []
+
+ @staticmethod
+ def batch_update_submission_statuses(
+ evaluation_id: str,
+ statuses: List["SubmissionStatus"],
+ is_first_batch: bool = True,
+ is_last_batch: bool = True,
+ batch_token: Optional[str] = None,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> Dict:
+ """
+ Update multiple SubmissionStatuses. The maximum batch size is 500.
+
+ Arguments:
+ evaluation_id: The ID of the Evaluation to which the SubmissionStatus objects belong.
+ statuses: List of SubmissionStatus objects to update.
+ is_first_batch: Boolean indicating if this is the first batch in the series. Default True.
+ is_last_batch: Boolean indicating if this is the last batch in the series. Default True.
+ batch_token: Token from previous batch response (required for all but first batch).
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ A BatchUploadResponse object as a JSON dict containing the batch token
+ and other response information.
+
+ Example: Batch update submission statuses
+
+ Update multiple submission statuses in a single batch operation for efficiency.
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionStatus
+
+ syn = Synapse()
+ syn.login()
+
+ # Retrieve existing statuses to update
+ statuses = SubmissionStatus.get_all_submission_statuses(
+ evaluation_id="9614543",
+ status="RECEIVED"
+ )
+
+ # Modify statuses as needed
+ for status in statuses:
+ status.status = "SCORED"
+
+ # Update statuses in batch
+ response = SubmissionStatus.batch_update_submission_statuses(
+ evaluation_id="9614543",
+ statuses=statuses,
+ is_first_batch=True,
+ is_last_batch=True
+ )
+ print(f"Batch update completed: {response}")
+ ```
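+
+ Example: Updating more than 500 statuses in chained batches
+
+ A sketch of splitting a large update into batches of 500 and chaining
+ the `batch_token` between calls. The `nextUploadToken` key is an
+ assumption here; confirm it against the BatchUploadResponse you receive.
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionStatus
+
+ syn = Synapse()
+ syn.login()
+
+ statuses = SubmissionStatus.get_all_submission_statuses(
+ evaluation_id="9614543"
+ )
+ for status in statuses:
+ status.status = "SCORED"
+
+ BATCH_SIZE = 500
+ batches = [statuses[i:i + BATCH_SIZE] for i in range(0, len(statuses), BATCH_SIZE)]
+ token = None
+ for index, batch in enumerate(batches):
+ response = SubmissionStatus.batch_update_submission_statuses(
+ evaluation_id="9614543",
+ statuses=batch,
+ is_first_batch=(index == 0),
+ is_last_batch=(index == len(batches) - 1),
+ batch_token=token,
+ )
+ token = response.get("nextUploadToken")
+ ```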
+ """
+ return {}
+
+
+@dataclass
+@async_to_sync
+class SubmissionStatus(
+ SubmissionStatusSynchronousProtocol,
+ AccessControllable,
+):
+ """A SubmissionStatus is a secondary, mutable object associated with a Submission.
+ This object should be used to contain scoring data about the Submission.
+
+ Attributes:
+ id: The unique, immutable Synapse ID of the Submission.
+ etag: Synapse employs an Optimistic Concurrency Control (OCC) scheme to handle
+ concurrent updates. The eTag changes every time a SubmissionStatus is updated;
+ it is used to detect when a client's copy of a SubmissionStatus is out-of-date.
+ modified_on: The date on which this SubmissionStatus was last modified.
+ status: The possible states of a Synapse Submission (e.g., RECEIVED, VALIDATED, SCORED).
+ score: This field is deprecated and should not be used. Use the 'submission_annotations' field instead.
+ report: This field is deprecated and should not be used. Use the 'submission_annotations' field instead.
+ annotations: Primary container object for Annotations on a Synapse object. These annotations use the legacy
+ format and do not show up in a submission view. The visibility is controlled by private_status_annotations.
+ submission_annotations: Submission Annotations are additional key-value pair metadata that are associated with an object.
+ These annotations use the modern nested format and show up in a submission view.
+ private_status_annotations: Indicates whether the annotations (not to be confused with submission annotations) are private (True) or public (False).
+ Default is True. This controls the visibility of the 'annotations' field.
+ entity_id: The Synapse ID of the Entity in this Submission.
+ evaluation_id: The ID of the Evaluation to which this Submission belongs. This field is automatically
+ populated when retrieving a SubmissionStatus via get() and is required when updating annotations.
+ version_number: The version number of the Entity in this Submission.
+ status_version: A version of the status, auto-generated and auto-incremented by the system and read-only to the client.
+ can_cancel: Can this submission be cancelled? By default, this will be set to False. Users can read this value.
+ Only the queue's scoring application can change this value.
+ cancel_requested: Has user requested to cancel this submission? By default, this will be set to False.
+ Submission owner can read and request to change this value.
+
+ Example: Retrieve and update a SubmissionStatus.
+
+ This example demonstrates the basic workflow of retrieving an existing submission status, updating its fields, and storing the changes back to Synapse.
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionStatus
+
+ syn = Synapse()
+ syn.login()
+
+ # Get a submission status
+ status = SubmissionStatus(id="9999999").get()
+
+ # Update the status
+ status.status = "SCORED"
+ status.submission_annotations = {"score": [85.5], "feedback": ["Good work!"]}
+ status = status.store()
+ print(status)
+ ```
+
+ Example: Get all submission statuses for an evaluation.
+
+ Retrieve multiple submission statuses for an evaluation queue with optional filtering.
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionStatus
+
+ syn = Synapse()
+ syn.login()
+
+ # Get all RECEIVED statuses for an evaluation
+ statuses = SubmissionStatus.get_all_submission_statuses(
+ evaluation_id="9999999",
+ status="RECEIVED",
+ limit=100
+ )
+
+ print(f"Found {len(statuses)} submission statuses")
+ for status in statuses:
+ print(f"Status ID: {status.id}, Status: {status.status}")
+ ```
+
+ Example: Batch update multiple submission statuses.
+
+ Efficiently update multiple submission statuses in a single operation.
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionStatus
+
+ syn = Synapse()
+ syn.login()
+
+ # Retrieve statuses to update
+ statuses = SubmissionStatus.get_all_submission_statuses(
+ evaluation_id="9999999",
+ status="RECEIVED"
+ )
+
+ # Update each status
+ for status in statuses:
+ status.status = "SCORED"
+ status.submission_annotations = {
+ "validation_score": 95.0,
+ "comments": "Passed validation"
+ }
+
+ # Batch update all statuses
+ response = SubmissionStatus.batch_update_submission_statuses(
+ evaluation_id="9999999",
+ statuses=statuses
+ )
+ print(f"Batch update completed: {response}")
+ ```
+ """
+
+ id: Optional[str] = None
+ """
+ The unique, immutable Synapse ID of the Submission.
+ """
+
+ etag: Optional[str] = field(default=None, compare=False)
+ """
+ Synapse employs an Optimistic Concurrency Control (OCC) scheme to handle
+ concurrent updates. The eTag changes every time a SubmissionStatus is updated;
+ it is used to detect when a client's copy of a SubmissionStatus is out-of-date.
+ """
+
+ modified_on: Optional[str] = field(default=None, compare=False)
+ """
+ The date on which this SubmissionStatus was last modified.
+ """
+
+ status: Optional[str] = None
+ """
+ The possible states of a Synapse Submission (e.g., RECEIVED, VALIDATED, SCORED).
+ """
+
+ score: Optional[float] = None
+ """
+ This field is deprecated and should not be used. Use the 'submission_annotations' field instead.
+ """
+
+ report: Optional[str] = None
+ """
+ This field is deprecated and should not be used. Use the 'submission_annotations' field instead.
+ """
+
+ annotations: Optional[
+ Dict[
+ str,
+ Union[
+ List[str],
+ List[bool],
+ List[float],
+ List[int],
+ List[date],
+ List[datetime],
+ ],
+ ]
+ ] = field(default_factory=dict)
+ """Primary container object for Annotations on a Synapse object."""
+
+ submission_annotations: Optional[
+ Dict[
+ str,
+ Union[
+ List[str],
+ List[bool],
+ List[float],
+ List[int],
+ List[date],
+ List[datetime],
+ ],
+ ]
+ ] = field(default_factory=dict)
+ """Annotations are additional key-value pair metadata that are associated with an object."""
+
+ private_status_annotations: Optional[bool] = field(default=True)
+ """Indicates whether the submission status annotations (NOT to be confused with submission annotations) are private (True) or public (False). Default is True."""
+
+ entity_id: Optional[str] = None
+ """
+ The Synapse ID of the Entity in this Submission.
+ """
+
+ evaluation_id: Optional[str] = None
+ """
+ The ID of the Evaluation to which this Submission belongs.
+ """
+
+ version_number: Optional[int] = field(default=None, compare=False)
+ """
+ The version number of the Entity in this Submission.
+ """
+
+ status_version: Optional[int] = field(default=None, compare=False)
+ """
+ A version of the status, auto-generated and auto-incremented by the system and read-only to the client.
+ """
+
+ can_cancel: Optional[bool] = field(default=False)
+ """
+ Can this submission be cancelled? By default, this will be set to False. Users can read this value.
+ Only the queue's scoring application can change this value.
+ """
+
+ cancel_requested: Optional[bool] = field(default=False, compare=False)
+ """
+    Has the user requested to cancel this submission? Defaults to False.
+    The submission owner can read this value and request that it be changed.
+ """
+
+ _last_persistent_instance: Optional["SubmissionStatus"] = field(
+ default=None, repr=False, compare=False
+ )
+ """The last persistent instance of this object. This is used to determine if the
+ object has been changed and needs to be updated in Synapse."""
+
+ @property
+ def has_changed(self) -> bool:
+ """Determines if the object has been newly created OR changed since last retrieval, and needs to be updated in Synapse."""
+ if not self._last_persistent_instance:
+ return True
+
+ model_attributes_changed = self._last_persistent_instance != self
+ annotations_changed = (
+ self._last_persistent_instance.annotations != self.annotations
+ )
+ submission_annotations_changed = (
+ self._last_persistent_instance.submission_annotations
+ != self.submission_annotations
+ )
+
+ return (
+ model_attributes_changed
+ or annotations_changed
+ or submission_annotations_changed
+ )
+
+ def _set_last_persistent_instance(self) -> None:
+        """Stash a snapshot of this object from its last interaction with Synapse. This
+        is used to determine if the object has been changed and needs to be updated in
+        Synapse."""
+ self._last_persistent_instance = replace(self)
+
+ def fill_from_dict(
+ self,
+ synapse_submission_status: Dict[str, Union[bool, str, int, float, List]],
+ ) -> "SubmissionStatus":
+ """
+ Converts a response from the REST API into this dataclass.
+
+ Arguments:
+ synapse_submission_status: The response from the REST API.
+
+ Returns:
+ The SubmissionStatus object.
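+
+        Example: Filling from a REST API response
+
+        A minimal sketch with hypothetical values, showing how camelCase REST keys
+        map onto this dataclass's snake_case fields:
+
+        ```python
+        status = SubmissionStatus().fill_from_dict({
+            "id": "9999999",
+            "etag": "0000-1111",
+            "status": "RECEIVED",
+            "statusVersion": 0,
+        })
+        print(status.status_version)
+        ```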
+ """
+ self.id = synapse_submission_status.get("id", None)
+ self.etag = synapse_submission_status.get("etag", None)
+ self.modified_on = synapse_submission_status.get("modifiedOn", None)
+ self.status = synapse_submission_status.get("status", None)
+ self.score = synapse_submission_status.get("score", None)
+ self.report = synapse_submission_status.get("report", None)
+ self.entity_id = synapse_submission_status.get("entityId", None)
+ self.version_number = synapse_submission_status.get("versionNumber", None)
+ self.status_version = synapse_submission_status.get("statusVersion", None)
+ self.can_cancel = synapse_submission_status.get("canCancel", False)
+ self.cancel_requested = synapse_submission_status.get("cancelRequested", False)
+
+ # Handle annotations
+ annotations_dict = synapse_submission_status.get("annotations", {})
+ if annotations_dict:
+ self.annotations = Annotations.from_dict(annotations_dict)
+
+ # Handle submission annotations
+ submission_annotations_dict = synapse_submission_status.get(
+ "submissionAnnotations", {}
+ )
+ if submission_annotations_dict:
+ self.submission_annotations = Annotations.from_dict(
+ submission_annotations_dict
+ )
+
+ return self
+
+ def to_synapse_request(self, synapse_client: Optional[Synapse] = None) -> Dict:
+ """
+ Creates a request body expected by the Synapse REST API for the SubmissionStatus model.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ A dictionary containing the request body for updating a submission status.
+
+ Raises:
+ ValueError: If any required attributes are missing.
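+
+        Example: Building a request body for an updated status
+
+        A minimal sketch; the ID is a placeholder, and the required 'id', 'etag',
+        and 'status_version' fields are populated by first retrieving the status
+        from Synapse:
+
+        ```python
+        from synapseclient import Synapse
+        from synapseclient.models import SubmissionStatus
+
+        syn = Synapse()
+        syn.login()
+
+        status = await SubmissionStatus(id="9999999").get_async()
+        status.status = "SCORED"
+        request_body = status.to_synapse_request()
+        print(request_body)
+        ```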
+ """
+ # Get the client for logging
+ client = Synapse.get_client(synapse_client=synapse_client)
+ logger = client.logger
+
+ # These attributes are required for updating a submission status
+ required_attributes = ["id", "etag", "status_version"]
+
+ for attribute in required_attributes:
+ if getattr(self, attribute) is None:
+ raise ValueError(
+ f"Your submission status object is missing the '{attribute}' attribute. This attribute is required to update a submission status"
+ )
+
+ # Build request body with required fields
+ request_body = {
+ "id": self.id,
+ "etag": self.etag,
+ "statusVersion": self.status_version,
+ }
+
+ # Add optional fields only if they have values
+ if self.status is not None:
+ request_body["status"] = self.status
+ if self.score is not None:
+ request_body["score"] = self.score
+ if self.report is not None:
+ request_body["report"] = self.report
+ if self.entity_id is not None:
+ request_body["entityId"] = self.entity_id
+ if self.version_number is not None:
+ request_body["versionNumber"] = self.version_number
+ if self.can_cancel is not None:
+ request_body["canCancel"] = self.can_cancel
+ if self.cancel_requested is not None:
+ request_body["cancelRequested"] = self.cancel_requested
+
+        if self.annotations:
+ # evaluation_id is required when annotations are provided for scopeId
+ if self.evaluation_id is None:
+ raise ValueError(
+ "Your submission status object is missing the 'evaluation_id' attribute. This attribute is required when submissions are updated with annotations. Please retrieve your submission status with .get() to populate this field."
+ )
+
+ # Add required objectId and scopeId to annotations dict as per Synapse API requirements
+ # https://rest-docs.synapse.org/rest/org/sagebionetworks/repo/model/annotation/Annotations.html
+ annotations_with_metadata = self.annotations.copy()
+ annotations_with_metadata["objectId"] = self.id
+ annotations_with_metadata["scopeId"] = self.evaluation_id
+
+ request_body["annotations"] = to_submission_status_annotations(
+ annotations_with_metadata, self.private_status_annotations
+ )
+
+        if self.submission_annotations:
+ request_body["submissionAnnotations"] = to_submission_annotations(
+ id=self.id,
+ etag=self.etag,
+ annotations=self.submission_annotations,
+ logger=logger,
+ )
+
+ return request_body
+
+ @otel_trace_method(
+ method_to_trace_name=lambda self, **kwargs: f"SubmissionStatus_Get: {self.id}"
+ )
+ async def get_async(
+ self,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> "SubmissionStatus":
+ """
+ Retrieve a SubmissionStatus from Synapse.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ The SubmissionStatus instance retrieved from Synapse.
+
+ Raises:
+ ValueError: If the submission status does not have an ID to get.
+
+ Example: Retrieving a submission status by ID
+
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionStatus
+
+ syn = Synapse()
+ syn.login()
+
+ status = await SubmissionStatus(id="9999999").get_async()
+ print(status)
+ ```
+ """
+ if not self.id:
+ raise ValueError("The submission status must have an ID to get.")
+
+ response = await evaluation_services.get_submission_status(
+ submission_id=self.id, synapse_client=synapse_client
+ )
+ self.fill_from_dict(response)
+
+ # Fetch evaluation_id from the associated submission since it's not in the SubmissionStatus response
+ if not self.evaluation_id:
+ submission_response = await evaluation_services.get_submission(
+ submission_id=self.id, synapse_client=synapse_client
+ )
+ self.evaluation_id = submission_response.get("evaluationId", None)
+
+ self._set_last_persistent_instance()
+ return self
+
+ @otel_trace_method(
+ method_to_trace_name=lambda self, **kwargs: f"SubmissionStatus_Store: {self.id if self.id else 'new_status'}"
+ )
+ async def store_async(
+ self,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> "SubmissionStatus":
+ """
+ Store (update) the SubmissionStatus in Synapse.
+
+ Arguments:
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ The updated SubmissionStatus object.
+
+ Raises:
+ ValueError: If the submission status is missing required fields.
+
+ Example: Update a submission status
+
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionStatus
+
+ syn = Synapse()
+ syn.login()
+
+ # Get existing status
+ status = await SubmissionStatus(id="9999999").get_async()
+
+ # Update fields
+ status.status = "SCORED"
+ status.submission_annotations = {"score": [85.5]}
+
+ # Store the update
+ status = await status.store_async()
+ print(f"Updated status: {status.status}")
+ ```
+ """
+ if not self.id:
+ raise ValueError("The submission status must have an ID to update.")
+
+ # Get the client for logging
+ client = Synapse.get_client(synapse_client=synapse_client)
+ logger = client.logger
+
+ # Check if there are changes to apply
+ if self._last_persistent_instance and self.has_changed:
+ # Merge with the last persistent instance to preserve system-managed fields
+ merge_dataclass_entities(
+ source=self._last_persistent_instance,
+ destination=self,
+ fields_to_preserve_from_source=[
+ "id",
+ "etag",
+ "modified_on",
+ "entity_id",
+ "version_number",
+ ],
+ logger=logger,
+ )
+ elif self._last_persistent_instance and not self.has_changed:
+ logger.warning(
+ f"SubmissionStatus (ID: {self.id}) has not changed since last 'store' or 'get' event, so it will not be updated in Synapse. Please get the submission status again if you want to refresh its state."
+ )
+ return self
+
+        request_body = self.to_synapse_request(synapse_client=synapse_client)
+
+ response = await evaluation_services.update_submission_status(
+ submission_id=self.id,
+ request_body=request_body,
+ synapse_client=synapse_client,
+ )
+
+ self.fill_from_dict(response)
+ self._set_last_persistent_instance()
+ return self
+
+ @staticmethod
+ async def get_all_submission_statuses_async(
+ evaluation_id: str,
+ status: Optional[str] = None,
+ limit: int = 10,
+ offset: int = 0,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> List["SubmissionStatus"]:
+ """
+        Gets a collection of SubmissionStatuses for a specified Evaluation.
+
+ Arguments:
+ evaluation_id: The ID of the specified Evaluation.
+            status: Optionally filter the results by submission status.
+            limit: Limits the number of submission statuses that will be fetched for
+                this page. Maximum value is 100. Defaults to 10.
+            offset: The offset index determines where this page will start from.
+                An index of 0 is the first submission status. Defaults to 0.
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ A list of SubmissionStatus objects for the evaluation queue.
+
+ Example: Getting all submission statuses for an evaluation
+
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionStatus
+
+ syn = Synapse()
+ syn.login()
+
+ statuses = await SubmissionStatus.get_all_submission_statuses_async(
+ evaluation_id="9614543",
+ status="SCORED",
+ limit=50
+ )
+ print(f"Found {len(statuses)} submission statuses")
+ for status in statuses:
+ print(f"Status ID: {status.id}, Status: {status.status}")
+ ```
+ """
+ response = await evaluation_services.get_all_submission_statuses(
+ evaluation_id=evaluation_id,
+ status=status,
+ limit=limit,
+ offset=offset,
+ synapse_client=synapse_client,
+ )
+
+ # Convert each result to a SubmissionStatus object
+ submission_statuses = []
+ for status_dict in response.get("results", []):
+ submission_status = SubmissionStatus()
+ submission_status.fill_from_dict(status_dict)
+ # Manually set evaluation_id since it's not in the SubmissionStatus response
+ submission_status.evaluation_id = evaluation_id
+ submission_status._set_last_persistent_instance()
+ submission_statuses.append(submission_status)
+
+ return submission_statuses
+
+ @staticmethod
+ async def batch_update_submission_statuses_async(
+ evaluation_id: str,
+ statuses: List["SubmissionStatus"],
+ is_first_batch: bool = True,
+ is_last_batch: bool = True,
+ batch_token: Optional[str] = None,
+ *,
+ synapse_client: Optional[Synapse] = None,
+ ) -> Dict:
+ """
+ Update multiple SubmissionStatuses. The maximum batch size is 500.
+
+ Arguments:
+ evaluation_id: The ID of the Evaluation to which the SubmissionStatus objects belong.
+ statuses: List of SubmissionStatus objects to update.
+ is_first_batch: Boolean indicating if this is the first batch in the series. Default True.
+ is_last_batch: Boolean indicating if this is the last batch in the series. Default True.
+            batch_token: Token from the previous batch response (required for every batch after the first).
+ synapse_client: If not passed in and caching was not disabled by
+ `Synapse.allow_client_caching(False)` this will use the last created
+ instance from the Synapse class constructor.
+
+ Returns:
+ A BatchUploadResponse object as a JSON dict containing the batch token
+ and other response information.
+
+ Example: Batch update submission statuses
+
+ ```python
+ from synapseclient import Synapse
+ from synapseclient.models import SubmissionStatus
+
+ syn = Synapse()
+ syn.login()
+
+        # Retrieve existing statuses to update
+        statuses = await SubmissionStatus.get_all_submission_statuses_async(
+            evaluation_id="9614543",
+            status="RECEIVED"
+        )
+
+ # Modify statuses as needed
+ for status in statuses:
+ status.status = "SCORED"
+
+ # Update statuses in batch
+ response = await SubmissionStatus.batch_update_submission_statuses_async(
+ evaluation_id="9614543",
+ statuses=statuses,
+ is_first_batch=True,
+ is_last_batch=True
+ )
+ print(f"Batch update completed: {response}")
+ ```
+ """
+        # Convert SubmissionStatus objects to request dictionaries
+        status_dicts = [
+            status.to_synapse_request(synapse_client=synapse_client)
+            for status in statuses
+        ]
+
+ # Prepare the batch request body
+ request_body = {
+ "statuses": status_dicts,
+ "isFirstBatch": is_first_batch,
+ "isLastBatch": is_last_batch,
+ }
+
+ # Add batch token if provided (required for all but first batch)
+ if batch_token:
+ request_body["batchToken"] = batch_token
+
+ return await evaluation_services.batch_update_submission_statuses(
+ evaluation_id=evaluation_id,
+ request_body=request_body,
+ synapse_client=synapse_client,
+ )
diff --git a/tests/integration/synapseclient/models/async/test_submission_async.py b/tests/integration/synapseclient/models/async/test_submission_async.py
new file mode 100644
index 000000000..f7cb4fdda
--- /dev/null
+++ b/tests/integration/synapseclient/models/async/test_submission_async.py
@@ -0,0 +1,674 @@
+"""Async integration tests for the synapseclient.models.Submission class."""
+
+import uuid
+from typing import Callable
+
+import pytest
+import pytest_asyncio
+
+from synapseclient import Synapse
+from synapseclient.core.exceptions import SynapseHTTPError
+from synapseclient.models import Evaluation, File, Project, Submission
+
+
+class TestSubmissionCreationAsync:
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest_asyncio.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ """Create a test project for submission tests."""
+ project = await Project(name=f"test_project_{uuid.uuid4()}").store_async(
+ synapse_client=syn
+ )
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest_asyncio.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission tests",
+ content_source=test_project.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = await evaluation.store_async(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest_asyncio.fixture(scope="function")
+ async def test_file(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission tests."""
+ import os
+ import tempfile
+
+ # Create a temporary file
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = await File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=test_project.id,
+ ).store_async(synapse_client=syn)
+ schedule_for_cleanup(file.id)
+ return file
+ finally:
+ # Clean up the temporary file
+ os.unlink(temp_file_path)
+
+ async def test_store_submission_successfully_async(
+ self, test_evaluation: Evaluation, test_file: File
+ ):
+ # WHEN I create a submission with valid data using async method
+ submission = Submission(
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission {uuid.uuid4()}",
+ )
+ created_submission = await submission.store_async(synapse_client=self.syn)
+ self.schedule_for_cleanup(created_submission.id)
+
+ # THEN the submission should be created successfully
+ assert created_submission.id is not None
+ assert created_submission.entity_id == test_file.id
+ assert created_submission.evaluation_id == test_evaluation.id
+ assert created_submission.name == submission.name
+ assert created_submission.user_id is not None
+ assert created_submission.created_on is not None
+ assert created_submission.version_number is not None
+
+ async def test_store_submission_without_entity_id_async(
+ self, test_evaluation: Evaluation
+ ):
+ # WHEN I try to create a submission without entity_id using async method
+ submission = Submission(
+ evaluation_id=test_evaluation.id,
+ name="Test Submission",
+ )
+
+ # THEN it should raise a ValueError
+ with pytest.raises(
+ ValueError, match="entity_id is required to create a submission"
+ ):
+ await submission.store_async(synapse_client=self.syn)
+
+ async def test_store_submission_without_evaluation_id_async(self, test_file: File):
+ # WHEN I try to create a submission without evaluation_id using async method
+ submission = Submission(
+ entity_id=test_file.id,
+ name="Test Submission",
+ )
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'evaluation_id' attribute"):
+ await submission.store_async(synapse_client=self.syn)
+
+ # async def test_store_submission_with_docker_repository_async(
+ # self, test_evaluation: Evaluation
+ # ):
+ # # GIVEN we would need a Docker repository entity (mocked for this test)
+ # # This test demonstrates the expected behavior for Docker repository submissions
+
+ # # WHEN I create a submission for a Docker repository entity using async method
+ # # TODO: This would require a real Docker repository entity in a full integration test
+ # submission = Submission(
+ # entity_id="syn123456789", # Would be a Docker repository ID
+ # evaluation_id=test_evaluation.id,
+ # name=f"Docker Submission {uuid.uuid4()}",
+ # )
+
+ # # THEN the submission should handle Docker-specific attributes
+ # # (This test would need to be expanded with actual Docker repository setup)
+ # assert submission.entity_id == "syn123456789"
+ # assert submission.evaluation_id == test_evaluation.id
+
+
+class TestSubmissionRetrievalAsync:
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest_asyncio.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ """Create a test project for submission tests."""
+ project = await Project(name=f"test_project_{uuid.uuid4()}").store_async(
+ synapse_client=syn
+ )
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest_asyncio.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission tests",
+ content_source=test_project.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = await evaluation.store_async(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest_asyncio.fixture(scope="function")
+ async def test_file(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission tests."""
+ import os
+ import tempfile
+
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = await File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=test_project.id,
+ ).store_async(synapse_client=syn)
+ schedule_for_cleanup(file.id)
+ return file
+ finally:
+ os.unlink(temp_file_path)
+
+ @pytest_asyncio.fixture(scope="function")
+ async def test_submission(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Submission:
+ """Create a test submission for retrieval tests."""
+ submission = Submission(
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission {uuid.uuid4()}",
+ )
+ created_submission = await submission.store_async(synapse_client=syn)
+ schedule_for_cleanup(created_submission.id)
+ return created_submission
+
+ async def test_get_submission_by_id_async(
+ self, test_submission: Submission, test_evaluation: Evaluation, test_file: File
+ ):
+ # WHEN I get a submission by ID using async method
+ retrieved_submission = await Submission(id=test_submission.id).get_async(
+ synapse_client=self.syn
+ )
+
+ # THEN the submission should be retrieved correctly
+ assert retrieved_submission.id == test_submission.id
+ assert retrieved_submission.entity_id == test_file.id
+ assert retrieved_submission.evaluation_id == test_evaluation.id
+ assert retrieved_submission.name == test_submission.name
+ assert retrieved_submission.user_id is not None
+ assert retrieved_submission.created_on is not None
+
+ async def test_get_evaluation_submissions_async(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ # WHEN I get all submissions for an evaluation using async generator
+ submissions = []
+ async for submission in Submission.get_evaluation_submissions_async(
+ evaluation_id=test_evaluation.id, synapse_client=self.syn
+ ):
+ submissions.append(submission)
+
+ # THEN I should get a list of submission objects
+ assert len(submissions) > 0
+ assert all(isinstance(sub, Submission) for sub in submissions)
+
+ # AND the test submission should be in the results
+ submission_ids = [sub.id for sub in submissions]
+ assert test_submission.id in submission_ids
+
+ async def test_get_evaluation_submissions_with_status_filter_async(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ # WHEN I get submissions filtered by status using async generator
+ submissions = []
+ async for submission in Submission.get_evaluation_submissions_async(
+ evaluation_id=test_evaluation.id,
+ status="RECEIVED",
+ synapse_client=self.syn,
+ ):
+ submissions.append(submission)
+
+ # The test submission should be in the results (initially in RECEIVED status)
+ submission_ids = [sub.id for sub in submissions]
+ assert test_submission.id in submission_ids
+
+ async def test_get_evaluation_submissions_async_generator_behavior(
+ self, test_evaluation: Evaluation
+ ):
+ # WHEN I get submissions using the async generator
+ submissions_generator = Submission.get_evaluation_submissions_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+
+ # THEN I should be able to iterate through the results
+ submissions = []
+ async for submission in submissions_generator:
+ assert isinstance(submission, Submission)
+ submissions.append(submission)
+
+ # AND all submissions should be valid Submission objects
+ assert all(isinstance(sub, Submission) for sub in submissions)
+
+ async def test_get_user_submissions_async(self, test_evaluation: Evaluation):
+ # WHEN I get submissions for the current user using async generator
+ submissions = []
+ async for submission in Submission.get_user_submissions_async(
+ evaluation_id=test_evaluation.id, synapse_client=self.syn
+ ):
+ submissions.append(submission)
+
+ # THEN all submissions should be valid Submission objects
+ # Note: Could be empty if user hasn't made submissions to this evaluation
+ assert all(isinstance(sub, Submission) for sub in submissions)
+
+ async def test_get_user_submissions_async_generator_behavior(
+ self, test_evaluation: Evaluation
+ ):
+ # WHEN I get user submissions using the async generator
+ submissions_generator = Submission.get_user_submissions_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+
+ # THEN I should be able to iterate through the results
+ submissions = []
+ async for submission in submissions_generator:
+ assert isinstance(submission, Submission)
+ submissions.append(submission)
+
+ # AND all submissions should be valid Submission objects
+ assert all(isinstance(sub, Submission) for sub in submissions)
+
+ async def test_get_submission_count_async(self, test_evaluation: Evaluation):
+ # WHEN I get the submission count for an evaluation using async method
+ response = await Submission.get_submission_count_async(
+ evaluation_id=test_evaluation.id, synapse_client=self.syn
+ )
+
+ # THEN I should get a count response
+ assert isinstance(response, int)
+
+
+class TestSubmissionDeletionAsync:
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest_asyncio.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ """Create a test project for submission tests."""
+ project = await Project(name=f"test_project_{uuid.uuid4()}").store_async(
+ synapse_client=syn
+ )
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest_asyncio.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission tests",
+ content_source=test_project.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = await evaluation.store_async(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest_asyncio.fixture(scope="function")
+ async def test_file(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission tests."""
+ import os
+ import tempfile
+
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = await File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=test_project.id,
+ ).store_async(synapse_client=syn)
+ schedule_for_cleanup(file.id)
+ return file
+ finally:
+ os.unlink(temp_file_path)
+
+ async def test_delete_submission_successfully_async(
+ self, test_evaluation: Evaluation, test_file: File
+ ):
+ # GIVEN a submission created with async method
+ submission = Submission(
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission for Deletion {uuid.uuid4()}",
+ )
+ created_submission = await submission.store_async(synapse_client=self.syn)
+
+ # WHEN I delete the submission using async method
+ await created_submission.delete_async(synapse_client=self.syn)
+
+ # THEN attempting to retrieve it should raise an error
+ with pytest.raises(SynapseHTTPError):
+ await Submission(id=created_submission.id).get_async(
+ synapse_client=self.syn
+ )
+
+ async def test_delete_submission_without_id_async(self):
+ # WHEN I try to delete a submission without an ID using async method
+ submission = Submission(entity_id="syn123", evaluation_id="456")
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="must have an ID to delete"):
+ await submission.delete_async(synapse_client=self.syn)
+
+
+class TestSubmissionCancelAsync:
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest_asyncio.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ """Create a test project for submission tests."""
+ project = await Project(name=f"test_project_{uuid.uuid4()}").store_async(
+ synapse_client=syn
+ )
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest_asyncio.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission tests",
+ content_source=test_project.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = await evaluation.store_async(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest_asyncio.fixture(scope="function")
+ async def test_file(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission tests."""
+ import os
+ import tempfile
+
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = await File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=test_project.id,
+ ).store_async(synapse_client=syn)
+ schedule_for_cleanup(file.id)
+ return file
+ finally:
+ os.unlink(temp_file_path)
+
+ # async def test_cancel_submission_successfully_async(
+ # self, test_evaluation: Evaluation, test_file: File
+ # ):
+ # # GIVEN a submission created with async method
+ # submission = Submission(
+ # entity_id=test_file.id,
+ # evaluation_id=test_evaluation.id,
+ # name=f"Test Submission for Cancellation {uuid.uuid4()}",
+ # )
+ # created_submission = await submission.store_async(synapse_client=self.syn)
+ # self.schedule_for_cleanup(created_submission.id)
+
+ # # WHEN I cancel the submission using async method
+ # cancelled_submission = await created_submission.cancel_async(synapse_client=self.syn)
+
+ # # THEN the submission should be cancelled
+ # assert cancelled_submission.id == created_submission.id
+
+ async def test_cancel_submission_without_id_async(self):
+ # WHEN I try to cancel a submission without an ID using async method
+ submission = Submission(entity_id="syn123", evaluation_id="456")
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="must have an ID to cancel"):
+ await submission.cancel_async(synapse_client=self.syn)
+
+
+class TestSubmissionValidationAsync:
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ async def test_get_submission_without_id_async(self):
+ # WHEN I try to get a submission without an ID using async method
+ submission = Submission(entity_id="syn123", evaluation_id="456")
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="must have an ID to get"):
+ await submission.get_async(synapse_client=self.syn)
+
+ async def test_to_synapse_request_missing_entity_id_async(self):
+ # WHEN I try to create a request without entity_id
+ submission = Submission(evaluation_id="456", name="Test")
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'entity_id' attribute"):
+ submission.to_synapse_request()
+
+ async def test_to_synapse_request_missing_evaluation_id_async(self):
+ # WHEN I try to create a request without evaluation_id
+ submission = Submission(entity_id="syn123", name="Test")
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'evaluation_id' attribute"):
+ submission.to_synapse_request()
+
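+    # Hedged addition (not in the original change set): a sketch exercising the
+    # case where both required attributes are absent. Which missing attribute the
+    # validation reports first is an assumption, so the match pattern is loose.
+    async def test_to_synapse_request_missing_both_ids_async(self):
+        # WHEN I try to create a request with neither entity_id nor evaluation_id
+        submission = Submission(name="Test")
+
+        # THEN it should raise a ValueError for a missing required attribute
+        with pytest.raises(ValueError, match="missing the"):
+            submission.to_synapse_request()
+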
+ async def test_to_synapse_request_valid_data_async(self):
+ # WHEN I create a request with valid required data
+ submission = Submission(
+ entity_id="syn123456",
+ evaluation_id="789",
+ name="Test Submission",
+ team_id="team123",
+ contributors=["user1", "user2"],
+ docker_repository_name="test/repo",
+ docker_digest="sha256:abc123",
+ )
+
+ request_body = submission.to_synapse_request()
+
+ # THEN it should create a valid request body
+ assert request_body["entityId"] == "syn123456"
+ assert request_body["evaluationId"] == "789"
+ assert request_body["name"] == "Test Submission"
+ assert request_body["teamId"] == "team123"
+ assert request_body["contributors"] == ["user1", "user2"]
+ assert request_body["dockerRepositoryName"] == "test/repo"
+ assert request_body["dockerDigest"] == "sha256:abc123"
+
+ async def test_to_synapse_request_minimal_data_async(self):
+ # WHEN I create a request with only required data
+ submission = Submission(entity_id="syn123456", evaluation_id="789")
+
+ request_body = submission.to_synapse_request()
+
+ # THEN it should create a minimal request body
+ assert request_body["entityId"] == "syn123456"
+ assert request_body["evaluationId"] == "789"
+ assert "name" not in request_body
+ assert "teamId" not in request_body
+ assert "contributors" not in request_body
+ assert "dockerRepositoryName" not in request_body
+ assert "dockerDigest" not in request_body
+
+    async def test_fetch_latest_entity_with_unfetchable_id_async(self):
+        # GIVEN a submission whose entity_id cannot be resolved by the test user
+        submission = Submission(entity_id="syn123456", evaluation_id="789")
+
+        # WHEN I try to fetch entity information for it
+        # THEN it should raise a ValueError
+        with pytest.raises(ValueError, match="Unable to fetch entity information"):
+            await submission._fetch_latest_entity(synapse_client=self.syn)
+
+ async def test_fetch_latest_entity_without_entity_id_async(self):
+ # GIVEN a submission without entity_id
+ submission = Submission(evaluation_id="789")
+
+ # WHEN I try to fetch entity information
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="entity_id must be set"):
+ await submission._fetch_latest_entity(synapse_client=self.syn)
+
+
+class TestSubmissionDataMappingAsync:
+    """Tests for mapping REST API responses onto Submission fields."""
+
+ async def test_fill_from_dict_complete_data_async(self):
+ # GIVEN a complete submission response from the REST API
+ api_response = {
+ "id": "123456",
+ "userId": "user123",
+ "submitterAlias": "testuser",
+ "entityId": "syn789",
+ "versionNumber": 1,
+ "evaluationId": "eval456",
+ "name": "Test Submission",
+ "createdOn": "2023-01-01T10:00:00.000Z",
+ "teamId": "team123",
+ "contributors": ["user1", "user2"],
+ "submissionStatus": {"status": "RECEIVED"},
+ "entityBundleJSON": '{"entity": {"id": "syn789"}}',
+ "dockerRepositoryName": "test/repo",
+ "dockerDigest": "sha256:abc123",
+ }
+
+ # WHEN I fill a submission object from the dict
+ submission = Submission()
+ submission.fill_from_dict(api_response)
+
+ # THEN all fields should be mapped correctly
+ assert submission.id == "123456"
+ assert submission.user_id == "user123"
+ assert submission.submitter_alias == "testuser"
+ assert submission.entity_id == "syn789"
+ assert submission.version_number == 1
+ assert submission.evaluation_id == "eval456"
+ assert submission.name == "Test Submission"
+ assert submission.created_on == "2023-01-01T10:00:00.000Z"
+ assert submission.team_id == "team123"
+ assert submission.contributors == ["user1", "user2"]
+ assert submission.submission_status == {"status": "RECEIVED"}
+ assert submission.entity_bundle_json == '{"entity": {"id": "syn789"}}'
+ assert submission.docker_repository_name == "test/repo"
+ assert submission.docker_digest == "sha256:abc123"
+
+ async def test_fill_from_dict_minimal_data_async(self):
+ # GIVEN a minimal submission response from the REST API
+ api_response = {
+ "id": "123456",
+ "entityId": "syn789",
+ "evaluationId": "eval456",
+ }
+
+ # WHEN I fill a submission object from the dict
+ submission = Submission()
+ submission.fill_from_dict(api_response)
+
+ # THEN required fields should be set and optional fields should have defaults
+ assert submission.id == "123456"
+ assert submission.entity_id == "syn789"
+ assert submission.evaluation_id == "eval456"
+ assert submission.user_id is None
+ assert submission.submitter_alias is None
+ assert submission.version_number is None
+ assert submission.name is None
+ assert submission.created_on is None
+ assert submission.team_id is None
+ assert submission.contributors == []
+ assert submission.submission_status is None
+ assert submission.entity_bundle_json is None
+ assert submission.docker_repository_name is None
+ assert submission.docker_digest is None
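+
+    # Hedged addition (not in the original change set): a round-trip sketch
+    # checking that fields populated by fill_from_dict survive conversion back
+    # through to_synapse_request, using only key mappings the tests above verify.
+    async def test_fill_from_dict_round_trip_async(self):
+        # GIVEN a submission filled from a REST API response
+        api_response = {
+            "id": "123456",
+            "entityId": "syn789",
+            "evaluationId": "eval456",
+            "name": "Test Submission",
+        }
+        submission = Submission()
+        submission.fill_from_dict(api_response)
+
+        # WHEN I convert it back into a request body
+        request_body = submission.to_synapse_request()
+
+        # THEN the camelCase keys should round-trip unchanged
+        assert request_body["entityId"] == "syn789"
+        assert request_body["evaluationId"] == "eval456"
+        assert request_body["name"] == "Test Submission"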
diff --git a/tests/integration/synapseclient/models/async/test_submission_bundle_async.py b/tests/integration/synapseclient/models/async/test_submission_bundle_async.py
new file mode 100644
index 000000000..b24f8c0ef
--- /dev/null
+++ b/tests/integration/synapseclient/models/async/test_submission_bundle_async.py
@@ -0,0 +1,653 @@
+"""Integration tests for the synapseclient.models.SubmissionBundle class async methods."""
+
+import uuid
+from typing import Callable
+
+import pytest
+
+from synapseclient import Synapse
+from synapseclient.core.exceptions import SynapseHTTPError
+from synapseclient.models import (
+ Evaluation,
+ File,
+ Project,
+ Submission,
+ SubmissionBundle,
+ SubmissionStatus,
+)
+
+
+class TestSubmissionBundleRetrievalAsync:
+ """Tests for retrieving SubmissionBundle objects using async methods."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ project = await Project(name=f"test_project_{uuid.uuid4()}").store_async(
+ synapse_client=syn
+ )
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ evaluation = await Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="Test evaluation for SubmissionBundle async testing",
+ content_source=test_project.id,
+ submission_instructions_message="Submit your files here",
+ submission_receipt_message="Thank you for your submission!",
+ ).store_async(synapse_client=syn)
+ schedule_for_cleanup(evaluation.id)
+ return evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+        import os
+
+        file_content = (
+            f"Test file content for submission bundle async tests {uuid.uuid4()}"
+        )
+        file_path = "test_file_for_submission_bundle_async.txt"
+        with open(file_path, "w") as f:
+            f.write(file_content)
+
+        try:
+            file_entity = await File(
+                path=file_path,
+                name=f"test_submission_file_async_{uuid.uuid4()}",
+                parent_id=test_project.id,
+            ).store_async(synapse_client=syn)
+            schedule_for_cleanup(file_entity.id)
+            return file_entity
+        finally:
+            os.unlink(file_path)
+
+ @pytest.fixture(scope="function")
+ async def test_submission(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Submission:
+ submission = await Submission(
+ name=f"test_submission_{uuid.uuid4()}",
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ submitter_alias="test_user_bundle_async",
+ ).store_async(synapse_client=syn)
+ schedule_for_cleanup(submission.id)
+ return submission
+
+ @pytest.fixture(scope="function")
+ async def multiple_submissions(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> list[Submission]:
+ """Create multiple submissions for testing pagination and filtering."""
+ submissions = []
+ for i in range(3):
+ submission = await Submission(
+ name=f"test_submission_{uuid.uuid4()}_{i}",
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ submitter_alias=f"test_user_async_{i}",
+ ).store_async(synapse_client=syn)
+ schedule_for_cleanup(submission.id)
+ submissions.append(submission)
+ return submissions
+
+ async def test_get_evaluation_submission_bundles_basic_async(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ """Test getting submission bundles for an evaluation using async methods."""
+ # WHEN I get submission bundles for an evaluation using async generator
+ bundles = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ ):
+ bundles.append(bundle)
+
+ # THEN the bundles should be retrieved
+ assert len(bundles) >= 1 # At least our test submission
+
+        # AND each bundle should have proper structure
+        found_test_bundle = False
+        for bundle in bundles:
+            assert isinstance(bundle, SubmissionBundle)
+            assert bundle.submission is not None
+            assert bundle.submission.id is not None
+            assert bundle.submission.evaluation_id == test_evaluation.id
+
+            if bundle.submission.id == test_submission.id:
+                found_test_bundle = True
+                assert bundle.submission.entity_id == test_submission.entity_id
+                assert bundle.submission.name == test_submission.name
+
+        # AND our test submission should be found
+        assert found_test_bundle, "Test submission should be found in bundles"
+
+ async def test_get_evaluation_submission_bundles_async_generator_behavior(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ # WHEN I get submission bundles using the async generator
+ bundles_generator = SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+
+ # THEN I should be able to iterate through the results
+ bundles = []
+ async for bundle in bundles_generator:
+ assert isinstance(bundle, SubmissionBundle)
+ bundles.append(bundle)
+
+ # AND all bundles should be valid SubmissionBundle objects
+        assert all(isinstance(bundle, SubmissionBundle) for bundle in bundles)
+
+ async def test_get_evaluation_submission_bundles_with_status_filter_async(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ """Test getting submission bundles filtered by status using async methods."""
+ # WHEN I get submission bundles filtered by "RECEIVED" status
+ bundles = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ status="RECEIVED",
+ synapse_client=self.syn,
+ ):
+ bundles.append(bundle)
+
+ # THEN the bundles should be retrieved
+ assert bundles is not None
+
+ # AND all bundles should have RECEIVED status (if any exist)
+ for bundle in bundles:
+ if bundle.submission_status:
+ assert bundle.submission_status.status == "RECEIVED"
+
+ # WHEN I attempt to get submission bundles with an invalid status
+ with pytest.raises(SynapseHTTPError) as exc_info:
+ bundles = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ status="NONEXISTENT_STATUS",
+ synapse_client=self.syn,
+ ):
+ bundles.append(bundle)
+ # THEN it should raise a SynapseHTTPError (400 for invalid enum)
+ assert exc_info.value.response.status_code == 400
+ assert "No enum constant" in str(exc_info.value)
+ assert "NONEXISTENT_STATUS" in str(exc_info.value)
+
+ async def test_get_evaluation_submission_bundles_with_automatic_pagination_async(
+ self, test_evaluation: Evaluation, multiple_submissions: list[Submission]
+ ):
+ """Test automatic pagination when getting submission bundles using async generator methods."""
+ # WHEN I get submission bundles using async generator (handles pagination automatically)
+ all_bundles = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ ):
+ all_bundles.append(bundle)
+
+ # THEN I should get all bundles for the evaluation
+ assert all_bundles is not None
+ assert len(all_bundles) >= len(
+ multiple_submissions
+ ) # At least our test submissions
+
+ # AND each bundle should be valid
+ for bundle in all_bundles:
+ assert isinstance(bundle, SubmissionBundle)
+ assert bundle.submission is not None
+ assert bundle.submission.evaluation_id == test_evaluation.id
+
+ # AND all our test submissions should be found
+ found_submission_ids = {
+ bundle.submission.id for bundle in all_bundles if bundle.submission
+ }
+ test_submission_ids = {submission.id for submission in multiple_submissions}
+ assert test_submission_ids.issubset(
+ found_submission_ids
+ ), "All test submissions should be found in the results"
+
+ async def test_get_evaluation_submission_bundles_invalid_evaluation_async(self):
+ """Test getting submission bundles for invalid evaluation ID using async methods."""
+ # WHEN I try to get submission bundles for a non-existent evaluation
+ with pytest.raises(SynapseHTTPError) as exc_info:
+ bundles = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id="syn999999999999",
+ synapse_client=self.syn,
+ ):
+ bundles.append(bundle)
+
+ # THEN it should raise a SynapseHTTPError (likely 403 or 404)
+ assert exc_info.value.response.status_code in [403, 404]
+
+ async def test_get_user_submission_bundles_basic_async(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ """Test getting user submission bundles for an evaluation using async methods."""
+ # WHEN I get user submission bundles for an evaluation using async generator
+ bundles = []
+ async for bundle in SubmissionBundle.get_user_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ ):
+ bundles.append(bundle)
+
+ # THEN the bundles should be retrieved
+ assert bundles is not None
+ assert len(bundles) >= 1 # At least our test submission
+
+ # AND each bundle should have proper structure
+ found_test_bundle = False
+ for bundle in bundles:
+ assert isinstance(bundle, SubmissionBundle)
+ assert bundle.submission is not None
+ assert bundle.submission.id is not None
+ assert bundle.submission.evaluation_id == test_evaluation.id
+
+ if bundle.submission.id == test_submission.id:
+ found_test_bundle = True
+ assert bundle.submission.entity_id == test_submission.entity_id
+ assert bundle.submission.name == test_submission.name
+
+ # AND our test submission should be found
+ assert found_test_bundle, "Test submission should be found in user bundles"
+
+ async def test_get_user_submission_bundles_with_automatic_pagination_async(
+ self, test_evaluation: Evaluation, multiple_submissions: list[Submission]
+ ):
+ """Test automatic pagination when getting user submission bundles using async generator methods."""
+ # WHEN I get user submission bundles using async generator (handles pagination automatically)
+ all_bundles = []
+ async for bundle in SubmissionBundle.get_user_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ ):
+ all_bundles.append(bundle)
+
+ # THEN I should get all bundles for the user in this evaluation
+ assert all_bundles is not None
+ assert len(all_bundles) >= len(
+ multiple_submissions
+ ) # At least our test submissions
+
+ # AND each bundle should be valid
+ for bundle in all_bundles:
+ assert isinstance(bundle, SubmissionBundle)
+ assert bundle.submission is not None
+ assert bundle.submission.evaluation_id == test_evaluation.id
+
+ # AND all our test submissions should be found
+ found_submission_ids = {
+ bundle.submission.id for bundle in all_bundles if bundle.submission
+ }
+ test_submission_ids = {submission.id for submission in multiple_submissions}
+ assert test_submission_ids.issubset(
+ found_submission_ids
+ ), "All test submissions should be found in the user results"
+
+
+class TestSubmissionBundleDataIntegrityAsync:
+ """Tests for data integrity and relationships in SubmissionBundle objects using async methods."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ project = await Project(name=f"test_project_{uuid.uuid4()}").store_async(
+ synapse_client=syn
+ )
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ evaluation = await Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="Test evaluation for data integrity async testing",
+ content_source=test_project.id,
+ submission_instructions_message="Submit your files here",
+ submission_receipt_message="Thank you for your submission!",
+ ).store_async(synapse_client=syn)
+ schedule_for_cleanup(evaluation.id)
+ return evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+        import os
+
+        file_content = (
+            f"Test file content for data integrity async tests {uuid.uuid4()}"
+        )
+        file_path = "test_file_for_data_integrity_async.txt"
+        with open(file_path, "w") as f:
+            f.write(file_content)
+
+        try:
+            file_entity = await File(
+                path=file_path,
+                name=f"test_integrity_file_async_{uuid.uuid4()}",
+                parent_id=test_project.id,
+            ).store_async(synapse_client=syn)
+            schedule_for_cleanup(file_entity.id)
+            return file_entity
+        finally:
+            os.unlink(file_path)
+
+ @pytest.fixture(scope="function")
+ async def test_submission(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Submission:
+ submission = await Submission(
+ name=f"test_submission_{uuid.uuid4()}",
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ submitter_alias="test_user_integrity_async",
+ ).store_async(synapse_client=syn)
+ schedule_for_cleanup(submission.id)
+ return submission
+
+ async def test_submission_bundle_data_consistency_async(
+ self, test_evaluation: Evaluation, test_submission: Submission, test_file: File
+ ):
+ """Test that submission bundles maintain data consistency between submission and status using async methods."""
+ # WHEN I get submission bundles for the evaluation using async generator
+ bundles = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ ):
+ bundles.append(bundle)
+
+ # THEN I should find our test submission
+ test_bundle = None
+ for bundle in bundles:
+ if bundle.submission and bundle.submission.id == test_submission.id:
+ test_bundle = bundle
+ break
+
+ assert test_bundle is not None, "Test submission bundle should be found"
+
+ # AND the submission data should be consistent
+ assert test_bundle.submission.id == test_submission.id
+ assert test_bundle.submission.entity_id == test_file.id
+ assert test_bundle.submission.evaluation_id == test_evaluation.id
+ assert test_bundle.submission.name == test_submission.name
+
+ # AND if there's a submission status, it should reference the same entities
+ if test_bundle.submission_status:
+ assert test_bundle.submission_status.id == test_submission.id
+ assert test_bundle.submission_status.entity_id == test_file.id
+ assert test_bundle.submission_status.evaluation_id == test_evaluation.id
+
+ async def test_submission_bundle_status_updates_reflected_async(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ """Test that submission status updates are reflected in bundles using async methods."""
+ # GIVEN a submission status that I can update
+ submission_status = await SubmissionStatus(id=test_submission.id).get_async(
+ synapse_client=self.syn
+ )
+ original_status = submission_status.status
+
+ # WHEN I update the submission status
+ submission_status.status = "VALIDATED"
+ submission_status.submission_annotations = {
+ "test_score": 95.5,
+ "test_feedback": "Excellent work!",
+ }
+ updated_status = await submission_status.store_async(synapse_client=self.syn)
+
+ # AND I get submission bundles again using async generator
+ bundles = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ ):
+ bundles.append(bundle)
+
+ # THEN the bundle should reflect the updated status
+ test_bundle = None
+ for bundle in bundles:
+ if bundle.submission and bundle.submission.id == test_submission.id:
+ test_bundle = bundle
+ break
+
+ assert test_bundle is not None
+ assert test_bundle.submission_status is not None
+ assert test_bundle.submission_status.status == "VALIDATED"
+ assert test_bundle.submission_status.submission_annotations is not None
+ assert "test_score" in test_bundle.submission_status.submission_annotations
+ assert test_bundle.submission_status.submission_annotations["test_score"] == [
+ 95.5
+ ]
+
+ # CLEANUP: Reset the status back to original
+ submission_status.status = original_status
+ submission_status.submission_annotations = {}
+ await submission_status.store_async(synapse_client=self.syn)
+
+ async def test_submission_bundle_evaluation_id_propagation_async(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ """Test that evaluation_id is properly propagated from submission to status using async methods."""
+ # WHEN I get submission bundles using async generator
+ bundles = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ ):
+ bundles.append(bundle)
+
+ # THEN find our test bundle
+ test_bundle = None
+ for bundle in bundles:
+ if bundle.submission and bundle.submission.id == test_submission.id:
+ test_bundle = bundle
+ break
+
+ assert test_bundle is not None
+
+ # AND both submission and status should have the correct evaluation_id
+ assert test_bundle.submission.evaluation_id == test_evaluation.id
+ if test_bundle.submission_status:
+ assert test_bundle.submission_status.evaluation_id == test_evaluation.id
+
+
+class TestSubmissionBundleEdgeCasesAsync:
+ """Tests for edge cases and error handling in SubmissionBundle operations using async methods."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ project = await Project(name=f"test_project_{uuid.uuid4()}").store_async(
+ synapse_client=syn
+ )
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ evaluation = await Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="Test evaluation for edge case async testing",
+ content_source=test_project.id,
+ submission_instructions_message="Submit your files here",
+ submission_receipt_message="Thank you for your submission!",
+ ).store_async(synapse_client=syn)
+ schedule_for_cleanup(evaluation.id)
+ return evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+        import os
+
+        file_content = f"Test file content for edge case async tests {uuid.uuid4()}"
+        file_path = "test_file_for_edge_case_async.txt"
+        with open(file_path, "w") as f:
+            f.write(file_content)
+
+        try:
+            file_entity = await File(
+                path=file_path,
+                name=f"test_edge_case_file_async_{uuid.uuid4()}",
+                parent_id=test_project.id,
+            ).store_async(synapse_client=syn)
+            schedule_for_cleanup(file_entity.id)
+            return file_entity
+        finally:
+            os.unlink(file_path)
+
+ @pytest.fixture(scope="function")
+ async def test_submission(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Submission:
+ submission = await Submission(
+ name=f"test_submission_{uuid.uuid4()}",
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ submitter_alias="test_user_edge_case_async",
+ ).store_async(synapse_client=syn)
+ schedule_for_cleanup(submission.id)
+ return submission
+
+ async def test_get_evaluation_submission_bundles_empty_evaluation_async(
+ self, test_evaluation: Evaluation
+ ):
+ """Test getting submission bundles from an evaluation with no submissions using async methods."""
+ # WHEN I get submission bundles from an evaluation with no submissions using async generator
+ bundles = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ ):
+ bundles.append(bundle)
+
+ # THEN it should return an empty list (not None or error)
+ assert bundles is not None
+ assert isinstance(bundles, list)
+ assert len(bundles) == 0
+
+ async def test_get_user_submission_bundles_empty_evaluation_async(
+ self, test_evaluation: Evaluation
+ ):
+ """Test getting user submission bundles from an evaluation with no submissions using async methods."""
+ # WHEN I get user submission bundles from an evaluation with no submissions using async generator
+ bundles = []
+ async for bundle in SubmissionBundle.get_user_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ ):
+ bundles.append(bundle)
+
+ # THEN it should return an empty list (not None or error)
+ assert bundles is not None
+ assert isinstance(bundles, list)
+ assert len(bundles) == 0
+
+ async def test_get_evaluation_submission_bundles_all_results_async(
+ self, test_evaluation: Evaluation
+ ):
+ """Test getting all submission bundles using async generator methods."""
+ # WHEN I request all bundles using async generator (no limit needed)
+ bundles = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ ):
+ bundles.append(bundle)
+
+ # THEN it should work without error
+ assert bundles is not None
+ assert isinstance(bundles, list)
+ # The actual count doesn't matter since the evaluation is empty
+
+ async def test_get_user_submission_bundles_empty_results_async(
+ self, test_evaluation: Evaluation
+ ):
+ """Test getting user submission bundles when no results exist using async generator methods."""
+ # WHEN I request bundles from an evaluation with no user submissions using async generator
+ bundles = []
+ async for bundle in SubmissionBundle.get_user_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ ):
+ bundles.append(bundle)
+
+ # THEN it should return an empty list (not error)
+ assert bundles is not None
+ assert isinstance(bundles, list)
+ assert len(bundles) == 0
+
+ async def test_get_submission_bundles_with_default_parameters_async(
+ self, test_evaluation: Evaluation
+ ):
+ """Test that default parameters work correctly using async methods."""
+ # WHEN I call methods without optional parameters using async generators
+ eval_bundles = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ ):
+ eval_bundles.append(bundle)
+
+ user_bundles = []
+ async for bundle in SubmissionBundle.get_user_submission_bundles_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ ):
+ user_bundles.append(bundle)
+
+ # THEN both should work with default values
+ assert eval_bundles is not None
+ assert user_bundles is not None
+ assert isinstance(eval_bundles, list)
+ assert isinstance(user_bundles, list)
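+
+    # Hedged addition (not in the original change set): a sketch confirming the
+    # async generator can be abandoned early. This relies only on standard
+    # Python async-generator semantics, not on documented SubmissionBundle API.
+    async def test_generator_early_termination_async(
+        self, test_evaluation: Evaluation, test_submission: Submission
+    ):
+        # WHEN I break out of iteration after the first yielded bundle
+        async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+            evaluation_id=test_evaluation.id,
+            synapse_client=self.syn,
+        ):
+            # THEN the first item is a valid bundle and iteration can stop early
+            assert isinstance(bundle, SubmissionBundle)
+            break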
diff --git a/tests/integration/synapseclient/models/async/test_submission_status_async.py b/tests/integration/synapseclient/models/async/test_submission_status_async.py
new file mode 100644
index 000000000..bf72a8158
--- /dev/null
+++ b/tests/integration/synapseclient/models/async/test_submission_status_async.py
@@ -0,0 +1,909 @@
+"""Integration tests for the synapseclient.models.SubmissionStatus class async methods."""
+
+import os
+import tempfile
+import uuid
+from typing import Callable
+
+import pytest
+
+from synapseclient import Synapse
+from synapseclient.annotations import from_submission_status_annotations
+from synapseclient.core.exceptions import SynapseHTTPError
+from synapseclient.models import Evaluation, File, Project, Submission, SubmissionStatus
+
+
+class TestSubmissionStatusRetrieval:
+    """Tests for retrieving SubmissionStatus objects using async methods."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission status tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission status tests",
+ content_source=project_model.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = await evaluation.store_async(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission status tests."""
+ # Create a temporary file
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission status testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=project_model.id,
+ )
+ stored_file = await file.store_async(synapse_client=syn)
+ schedule_for_cleanup(stored_file.id)
+ return stored_file
+ finally:
+ # Clean up the temporary file
+ os.unlink(temp_file_path)
+
+ @pytest.fixture(scope="function")
+ async def test_submission(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Submission:
+ """Create a test submission for status tests."""
+ submission = Submission(
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission {uuid.uuid4()}",
+ )
+ created_submission = await submission.store_async(synapse_client=syn)
+ schedule_for_cleanup(created_submission.id)
+ return created_submission
+
+ async def test_get_submission_status_by_id(
+ self, test_submission: Submission, test_evaluation: Evaluation
+ ):
+ """Test retrieving a submission status by ID async."""
+ # WHEN I get a submission status by ID
+ submission_status = await SubmissionStatus(id=test_submission.id).get_async(
+ synapse_client=self.syn
+ )
+
+ # THEN the submission status should be retrieved correctly
+ assert submission_status.id == test_submission.id
+ assert submission_status.entity_id == test_submission.entity_id
+ assert submission_status.evaluation_id == test_evaluation.id
+ assert (
+ submission_status.status is not None
+ ) # Should have some status (e.g., "RECEIVED")
+ assert submission_status.etag is not None
+ assert submission_status.status_version is not None
+ assert submission_status.modified_on is not None
+
+ async def test_get_submission_status_without_id(self):
+        """Test that getting a submission status without an ID raises a ValueError async."""
+ # WHEN I try to get a submission status without an ID
+ submission_status = SubmissionStatus()
+
+ # THEN it should raise a ValueError
+ with pytest.raises(
+ ValueError, match="The submission status must have an ID to get"
+ ):
+ await submission_status.get_async(synapse_client=self.syn)
+
+ async def test_get_submission_status_with_invalid_id(self):
+        """Test that getting a submission status with an invalid ID raises an exception async."""
+ # WHEN I try to get a submission status with an invalid ID
+ submission_status = SubmissionStatus(id="syn999999999999")
+
+ # THEN it should raise a SynapseHTTPError (404)
+ with pytest.raises(SynapseHTTPError):
+ await submission_status.get_async(synapse_client=self.syn)
+
+
+class TestSubmissionStatusUpdates:
+ """Tests for updating SubmissionStatus objects async."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission status tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission status tests",
+ content_source=project_model.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = await evaluation.store_async(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission status tests."""
+ # Create a temporary file
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission status testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=project_model.id,
+ )
+ stored_file = await file.store_async(synapse_client=syn)
+ schedule_for_cleanup(stored_file.id)
+ return stored_file
+ finally:
+ # Clean up the temporary file
+ os.unlink(temp_file_path)
+
+ @pytest.fixture(scope="function")
+ async def test_submission(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Submission:
+ """Create a test submission for status tests."""
+ submission = Submission(
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission {uuid.uuid4()}",
+ )
+ created_submission = await submission.store_async(synapse_client=syn)
+ schedule_for_cleanup(created_submission.id)
+ return created_submission
+
+ @pytest.fixture(scope="function")
+ async def test_submission_status(
+ self, test_submission: Submission
+ ) -> SubmissionStatus:
+ """Create a test submission status by getting the existing one."""
+ submission_status = await SubmissionStatus(id=test_submission.id).get_async(
+ synapse_client=self.syn
+ )
+ return submission_status
+
+ async def test_store_submission_status_with_status_change(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test updating a submission status with a status change async."""
+ # GIVEN a submission status that exists
+ original_status = test_submission_status.status
+ original_etag = test_submission_status.etag
+ original_status_version = test_submission_status.status_version
+
+ # WHEN I update the status
+ test_submission_status.status = "VALIDATED"
+ updated_status = await test_submission_status.store_async(
+ synapse_client=self.syn
+ )
+
+ # THEN the submission status should be updated
+ assert updated_status.id == test_submission_status.id
+ assert updated_status.status == "VALIDATED"
+ assert updated_status.status != original_status
+ assert updated_status.etag != original_etag # etag should change
+ assert updated_status.status_version > original_status_version
+
+ async def test_store_submission_status_with_submission_annotations(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test updating a submission status with submission annotations async."""
+ # WHEN I add submission annotations and store
+ test_submission_status.submission_annotations = {
+ "score": 85.5,
+ "feedback": "Good work!",
+ }
+ updated_status = await test_submission_status.store_async(
+ synapse_client=self.syn
+ )
+
+ # THEN the submission annotations should be saved
+ assert updated_status.submission_annotations is not None
+ assert "score" in updated_status.submission_annotations
+ assert updated_status.submission_annotations["score"] == [85.5]
+ assert updated_status.submission_annotations["feedback"] == ["Good work!"]
+
+ async def test_store_submission_status_with_legacy_annotations(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test updating a submission status with legacy annotations async."""
+ # WHEN I add legacy annotations and store
+ test_submission_status.annotations = {
+ "internal_score": 92.3,
+ "reviewer_notes": "Excellent submission",
+ }
+ updated_status = await test_submission_status.store_async(
+ synapse_client=self.syn
+ )
+ assert updated_status.annotations is not None
+
+ converted_annotations = from_submission_status_annotations(
+ updated_status.annotations
+ )
+
+ # THEN the legacy annotations should be saved
+ assert "internal_score" in converted_annotations
+ assert converted_annotations["internal_score"] == 92.3
+ assert converted_annotations["reviewer_notes"] == "Excellent submission"
+
+ async def test_store_submission_status_with_combined_annotations(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test updating a submission status with both types of annotations async."""
+ # WHEN I add both submission and legacy annotations
+ test_submission_status.submission_annotations = {
+ "public_score": 78.0,
+ "category": "Bronze",
+ }
+ test_submission_status.annotations = {
+ "internal_review": True,
+ "notes": "Needs minor improvements",
+ }
+ updated_status = await test_submission_status.store_async(
+ synapse_client=self.syn
+ )
+
+ # THEN both types of annotations should be saved
+ assert updated_status.submission_annotations is not None
+ assert "public_score" in updated_status.submission_annotations
+ assert updated_status.submission_annotations["public_score"] == [78.0]
+
+ assert updated_status.annotations is not None
+ converted_annotations = from_submission_status_annotations(
+ updated_status.annotations
+ )
+ assert "internal_review" in converted_annotations
+ assert converted_annotations["internal_review"] == "true"
+
+ async def test_store_submission_status_with_private_annotations_false(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test updating a submission status with private_status_annotations set to False async."""
+ # WHEN I add legacy annotations with private_status_annotations set to False
+ test_submission_status.annotations = {
+ "public_internal_score": 88.5,
+ "public_notes": "This should be visible",
+ }
+ test_submission_status.private_status_annotations = False
+
+ # WHEN I store the submission status
+ updated_status = await test_submission_status.store_async(
+ synapse_client=self.syn
+ )
+
+ # THEN they should be properly stored
+ assert updated_status.annotations is not None
+ converted_annotations = from_submission_status_annotations(
+ updated_status.annotations
+ )
+ assert "public_internal_score" in converted_annotations
+ assert converted_annotations["public_internal_score"] == 88.5
+ assert converted_annotations["public_notes"] == "This should be visible"
+
+ # AND the annotations should be marked as not private
+ for annos_type in ["stringAnnos", "doubleAnnos"]:
+ annotations = updated_status.annotations[annos_type]
+ assert all(not anno["isPrivate"] for anno in annotations)
+
+ async def test_store_submission_status_with_private_annotations_true(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test updating a submission status with private_status_annotations set to True (default) async."""
+ # WHEN I add legacy annotations with private_status_annotations set to True (default)
+ test_submission_status.annotations = {
+ "private_internal_score": 95.0,
+ "private_notes": "This should be private",
+ }
+ test_submission_status.private_status_annotations = True
+
+ # WHEN I store the submission status
+ updated_status = await test_submission_status.store_async(
+ synapse_client=self.syn
+ )
+
+ # THEN they should be properly stored
+ assert updated_status.annotations is not None
+ converted_annotations = from_submission_status_annotations(
+ updated_status.annotations
+ )
+ assert "private_internal_score" in converted_annotations
+ assert converted_annotations["private_internal_score"] == 95.0
+ assert converted_annotations["private_notes"] == "This should be private"
+
+ # AND the annotations should be marked as private
+ for annos_type in ["stringAnnos", "doubleAnnos"]:
+ annotations = updated_status.annotations[annos_type]
+ assert all(anno["isPrivate"] for anno in annotations)
+
+ async def test_store_submission_status_without_id(self):
+        """Test that storing a submission status without an ID raises a ValueError async."""
+ # WHEN I try to store a submission status without an ID
+ submission_status = SubmissionStatus(status="SCORED")
+
+ # THEN it should raise a ValueError
+ with pytest.raises(
+ ValueError, match="The submission status must have an ID to update"
+ ):
+ await submission_status.store_async(synapse_client=self.syn)
+
+ async def test_store_submission_status_without_changes(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test that storing a submission status without changes shows warning async."""
+ # GIVEN a submission status that hasn't been modified
+ # (it already has _last_persistent_instance set from get())
+
+ # WHEN I try to store it without making changes
+ result = await test_submission_status.store_async(synapse_client=self.syn)
+
+ # THEN it should return the same instance (no update sent to Synapse)
+ assert result is test_submission_status
+
+ async def test_store_submission_status_change_tracking(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test that change tracking works correctly async."""
+ # GIVEN a submission status that was retrieved (has_changed should be False)
+ assert not test_submission_status.has_changed
+
+ # WHEN I make a change
+ test_submission_status.status = "SCORED"
+
+ # THEN has_changed should be True
+ assert test_submission_status.has_changed
+
+ # WHEN I store the changes
+ updated_status = await test_submission_status.store_async(
+ synapse_client=self.syn
+ )
+
+ # THEN has_changed should be False again
+ assert not updated_status.has_changed
+
+ async def test_has_changed_property_edge_cases(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test the has_changed property with various edge cases and detailed scenarios async."""
+ # GIVEN a submission status that was just retrieved
+ assert not test_submission_status.has_changed
+ original_annotations = (
+ test_submission_status.annotations.copy()
+ if test_submission_status.annotations
+ else {}
+ )
+
+ # WHEN I modify only annotations (not submission_annotations)
+ test_submission_status.annotations = {"test_key": "test_value"}
+
+ # THEN has_changed should be True
+ assert test_submission_status.has_changed
+
+ # WHEN I reset annotations to the original value (should be the same as the persistent instance)
+ test_submission_status.annotations = original_annotations
+
+ # THEN has_changed should be False (same as original)
+ assert not test_submission_status.has_changed
+
+ # WHEN I add a different annotation value
+ test_submission_status.annotations = {"different_key": "different_value"}
+
+ # THEN has_changed should be True
+ assert test_submission_status.has_changed
+
+ # WHEN I store and get a fresh copy
+ updated_status = await test_submission_status.store_async(
+ synapse_client=self.syn
+ )
+ fresh_status = await SubmissionStatus(id=updated_status.id).get_async(
+ synapse_client=self.syn
+ )
+
+ # THEN the fresh copy should not have changes
+ assert not fresh_status.has_changed
+
+ # WHEN I modify only submission_annotations
+ fresh_status.submission_annotations = {"new_key": ["new_value"]}
+
+ # THEN has_changed should be True
+ assert fresh_status.has_changed
+
+ # WHEN I modify a scalar field
+ fresh_status.status = "VALIDATED"
+
+ # THEN has_changed should still be True
+ assert fresh_status.has_changed
+
+
+class TestSubmissionStatusBulkOperations:
+ """Tests for bulk SubmissionStatus operations async."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission status tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission status tests",
+ content_source=project_model.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = await evaluation.store_async(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_files(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> list[File]:
+ """Create multiple test files for submission status tests."""
+ files = []
+ for i in range(3):
+ # Create a temporary file
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write(
+ f"This is test content {i} for submission status testing."
+ )
+ temp_file_path = temp_file.name
+
+ try:
+ file = File(
+ path=temp_file_path,
+ name=f"test_file_{i}_{uuid.uuid4()}.txt",
+ parent_id=project_model.id,
+ )
+ stored_file = await file.store_async(synapse_client=syn)
+ schedule_for_cleanup(stored_file.id)
+ files.append(stored_file)
+ finally:
+ # Clean up the temporary file
+ os.unlink(temp_file_path)
+
+ return files
+
+ @pytest.fixture(scope="function")
+ async def test_submissions(
+ self,
+ test_evaluation: Evaluation,
+ test_files: list[File],
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> list[Submission]:
+ """Create multiple test submissions for status tests."""
+ submissions = []
+ for i, file in enumerate(test_files):
+ submission = Submission(
+ entity_id=file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission {i} {uuid.uuid4()}",
+ )
+ created_submission = await submission.store_async(synapse_client=syn)
+ schedule_for_cleanup(created_submission.id)
+ submissions.append(created_submission)
+ return submissions
+
+ async def test_get_all_submission_statuses(
+ self, test_evaluation: Evaluation, test_submissions: list[Submission]
+ ):
+ """Test getting all submission statuses for an evaluation async."""
+ # WHEN I get all submission statuses for the evaluation
+ statuses = await SubmissionStatus.get_all_submission_statuses_async(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+
+ # THEN I should get submission statuses for all submissions
+ assert len(statuses) >= len(test_submissions)
+ status_ids = [status.id for status in statuses]
+
+ # AND all test submissions should have their statuses in the results
+ for submission in test_submissions:
+ assert submission.id in status_ids
+
+ # AND each status should have proper attributes
+ for status in statuses:
+ assert status.id is not None
+ assert status.evaluation_id == test_evaluation.id
+ assert status.status is not None
+ assert status.etag is not None
+
+ async def test_get_all_submission_statuses_with_status_filter(
+ self, test_evaluation: Evaluation, test_submissions: list[Submission]
+ ):
+ """Test getting submission statuses with status filter async."""
+ # WHEN I get submission statuses filtered by status
+ statuses = await SubmissionStatus.get_all_submission_statuses_async(
+ evaluation_id=test_evaluation.id,
+ status="RECEIVED",
+ synapse_client=self.syn,
+ )
+
+ # THEN I should only get statuses with the specified status
+ for status in statuses:
+ assert status.status == "RECEIVED"
+ assert status.evaluation_id == test_evaluation.id
+
+ async def test_get_all_submission_statuses_with_pagination(
+ self, test_evaluation: Evaluation, test_submissions: list[Submission]
+ ):
+ """Test getting submission statuses with pagination async."""
+ # WHEN I get submission statuses with pagination
+ statuses_page1 = await SubmissionStatus.get_all_submission_statuses_async(
+ evaluation_id=test_evaluation.id,
+ limit=2,
+ offset=0,
+ synapse_client=self.syn,
+ )
+
+ # THEN I should get at most 2 statuses
+ assert len(statuses_page1) <= 2
+
+ # WHEN I get the next page
+ statuses_page2 = await SubmissionStatus.get_all_submission_statuses_async(
+ evaluation_id=test_evaluation.id,
+ limit=2,
+ offset=2,
+ synapse_client=self.syn,
+ )
+
+ # THEN the results should be different (assuming more than 2 submissions exist)
+ if len(statuses_page1) == 2 and len(statuses_page2) > 0:
+ page1_ids = {status.id for status in statuses_page1}
+ page2_ids = {status.id for status in statuses_page2}
+ assert page1_ids != page2_ids # Should be different sets
+
+ async def test_batch_update_submission_statuses(
+ self, test_evaluation: Evaluation, test_submissions: list[Submission]
+ ):
+ """Test batch updating multiple submission statuses async."""
+ # GIVEN multiple submission statuses
+ statuses = []
+ for submission in test_submissions:
+ status = await SubmissionStatus(id=submission.id).get_async(
+ synapse_client=self.syn
+ )
+ # Update each status
+ status.status = "VALIDATED"
+ status.submission_annotations = {
+ "batch_score": 90.0 + (len(statuses) * 2),
+ "batch_processed": True,
+ }
+ statuses.append(status)
+
+ # WHEN I batch update the statuses
+ response = await SubmissionStatus.batch_update_submission_statuses_async(
+ evaluation_id=test_evaluation.id,
+ statuses=statuses,
+ synapse_client=self.syn,
+ )
+
+ # THEN the batch update should succeed
+ assert response is not None
+ assert "batchToken" in response or response == {} # Response format may vary
+
+ # AND I should be able to verify the updates by retrieving the statuses
+ for original_status in statuses:
+ updated_status = await SubmissionStatus(id=original_status.id).get_async(
+ synapse_client=self.syn
+ )
+ assert updated_status.status == "VALIDATED"
+ converted_submission_annotations = from_submission_status_annotations(
+ updated_status.submission_annotations
+ )
+ assert "batch_score" in converted_submission_annotations
+ assert converted_submission_annotations["batch_processed"] == ["true"]
+
+
+class TestSubmissionStatusCancellation:
+ """Tests for SubmissionStatus cancellation functionality async."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission status tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission status tests",
+ content_source=project_model.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = await evaluation.store_async(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission status tests."""
+ # Create a temporary file
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission status testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=project_model.id,
+ )
+ stored_file = await file.store_async(synapse_client=syn)
+ schedule_for_cleanup(stored_file.id)
+ return stored_file
+ finally:
+ # Clean up the temporary file
+ os.unlink(temp_file_path)
+
+ @pytest.fixture(scope="function")
+ async def test_submission(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Submission:
+ """Create a test submission for status tests."""
+ submission = Submission(
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission {uuid.uuid4()}",
+ )
+ created_submission = await submission.store_async(synapse_client=syn)
+ schedule_for_cleanup(created_submission.id)
+ return created_submission
+
+ async def test_submission_cancellation_workflow(self, test_submission: Submission):
+ """Test the complete submission cancellation workflow async."""
+ # GIVEN a submission that exists
+ submission_id = test_submission.id
+
+ # WHEN I get the initial submission status
+ initial_status = await SubmissionStatus(id=submission_id).get_async(
+ synapse_client=self.syn
+ )
+
+ # THEN initially it should not be cancellable or cancelled
+ assert initial_status.can_cancel is False
+ assert initial_status.cancel_requested is False
+
+ # WHEN I update the submission status to allow cancellation
+ initial_status.can_cancel = True
+ updated_status = await initial_status.store_async(synapse_client=self.syn)
+
+ # THEN the submission should be marked as cancellable
+ assert updated_status.can_cancel is True
+ assert updated_status.cancel_requested is False
+
+ # WHEN I cancel the submission
+        await test_submission.cancel_async(synapse_client=self.syn)
+
+ # THEN I should be able to retrieve the updated status showing cancellation was requested
+ final_status = await SubmissionStatus(id=submission_id).get_async(
+ synapse_client=self.syn
+ )
+ assert final_status.can_cancel is True
+ assert final_status.cancel_requested is True
+
+
+class TestSubmissionStatusValidation:
+ """Tests for SubmissionStatus validation and error handling async."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ async def test_to_synapse_request_missing_required_attributes(self):
+ """Test that to_synapse_request validates required attributes async."""
+ # WHEN I try to create a request with missing required attributes
+ submission_status = SubmissionStatus(id="123") # Missing etag, status_version
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'etag' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ # WHEN I add etag but still missing status_version
+ submission_status.etag = "some-etag"
+
+ # THEN it should raise a ValueError for status_version
+ with pytest.raises(ValueError, match="missing the 'status_version' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ async def test_to_synapse_request_with_annotations_missing_evaluation_id(self):
+ """Test that annotations require evaluation_id async."""
+ # WHEN I try to create a request with annotations but no evaluation_id
+ submission_status = SubmissionStatus(
+ id="123", etag="some-etag", status_version=1, annotations={"test": "value"}
+ )
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'evaluation_id' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ async def test_to_synapse_request_valid_attributes(self):
+ """Test that to_synapse_request works with valid attributes async."""
+ # WHEN I create a request with all required attributes
+ submission_status = SubmissionStatus(
+ id="123",
+ etag="some-etag",
+ status_version=1,
+ status="SCORED",
+ evaluation_id="eval123",
+ submission_annotations={"score": 85.5},
+ )
+
+ # THEN it should create a valid request body
+ request_body = submission_status.to_synapse_request(synapse_client=self.syn)
+
+ # AND the request should have the required fields
+ assert request_body["id"] == "123"
+ assert request_body["etag"] == "some-etag"
+ assert request_body["statusVersion"] == 1
+ assert request_body["status"] == "SCORED"
+ assert "submissionAnnotations" in request_body
+
+ async def test_fill_from_dict_with_complete_response(self):
+ """Test filling a SubmissionStatus from a complete API response async."""
+ # GIVEN a complete API response
+ api_response = {
+ "id": "123456",
+ "etag": "abcd-1234",
+ "modifiedOn": "2023-01-01T00:00:00.000Z",
+ "status": "SCORED",
+ "entityId": "syn789",
+ "versionNumber": 1,
+ "statusVersion": 2,
+ "canCancel": False,
+ "cancelRequested": False,
+ "annotations": {
+ "objectId": "123456",
+ "scopeId": "9617645",
+ "stringAnnos": [
+ {
+ "key": "internal_note",
+ "isPrivate": True,
+ "value": "This is internal",
+ },
+ {
+ "key": "reviewer_notes",
+ "isPrivate": True,
+ "value": "Excellent work",
+ },
+ ],
+ "doubleAnnos": [
+ {"key": "validation_score", "isPrivate": True, "value": 95.0}
+ ],
+ "longAnnos": [],
+ },
+ "submissionAnnotations": {"feedback": ["Great work!"], "score": [92.5]},
+ }
+
+ # WHEN I fill a SubmissionStatus from the response
+ submission_status = SubmissionStatus()
+ result = submission_status.fill_from_dict(api_response)
+
+ # THEN all fields should be populated correctly
+ assert result.id == "123456"
+ assert result.etag == "abcd-1234"
+ assert result.modified_on == "2023-01-01T00:00:00.000Z"
+ assert result.status == "SCORED"
+ assert result.entity_id == "syn789"
+ assert result.version_number == 1
+ assert result.status_version == 2
+ assert result.can_cancel is False
+ assert result.cancel_requested is False
+
+ # The annotations field should contain the raw submission status format
+ assert result.annotations is not None
+ assert "objectId" in result.annotations
+ assert "scopeId" in result.annotations
+ assert "stringAnnos" in result.annotations
+ assert "doubleAnnos" in result.annotations
+ assert len(result.annotations["stringAnnos"]) == 2
+ assert len(result.annotations["doubleAnnos"]) == 1
+
+ # The submission_annotations should be in simple key-value format
+ assert "feedback" in result.submission_annotations
+ assert "score" in result.submission_annotations
+ assert result.submission_annotations["feedback"] == ["Great work!"]
+ assert result.submission_annotations["score"] == [92.5]
+
+ async def test_fill_from_dict_with_minimal_response(self):
+ """Test filling a SubmissionStatus from a minimal API response async."""
+ # GIVEN a minimal API response
+ api_response = {"id": "123456", "status": "RECEIVED"}
+
+ # WHEN I fill a SubmissionStatus from the response
+ submission_status = SubmissionStatus()
+ result = submission_status.fill_from_dict(api_response)
+
+ # THEN basic fields should be populated
+ assert result.id == "123456"
+ assert result.status == "RECEIVED"
+ # AND optional fields should have default values
+ assert result.etag is None
+ assert result.can_cancel is False
+ assert result.cancel_requested is False
diff --git a/tests/integration/synapseclient/models/synchronous/test_submission.py b/tests/integration/synapseclient/models/synchronous/test_submission.py
new file mode 100644
index 000000000..e89800042
--- /dev/null
+++ b/tests/integration/synapseclient/models/synchronous/test_submission.py
@@ -0,0 +1,647 @@
+"""Integration tests for the synapseclient.models.Submission class."""
+
+import uuid
+from typing import Callable
+
+import pytest
+
+from synapseclient import Synapse
+from synapseclient.core.exceptions import SynapseHTTPError
+from synapseclient.models import Evaluation, File, Project, Submission
+
+
+class TestSubmissionCreation:
+    """Tests for creating Submission objects."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ """Create a test project for submission tests."""
+ project = Project(name=f"test_project_{uuid.uuid4()}").store(synapse_client=syn)
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission tests",
+ content_source=test_project.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = evaluation.store(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission tests."""
+ import os
+ import tempfile
+
+ # Create a temporary file
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=test_project.id,
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(file.id)
+ return file
+ finally:
+ # Clean up the temporary file
+ os.unlink(temp_file_path)
+
+ async def test_store_submission_successfully(
+ self, test_evaluation: Evaluation, test_file: File
+ ):
+ # WHEN I create a submission with valid data
+ submission = Submission(
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission {uuid.uuid4()}",
+ )
+ created_submission = submission.store(synapse_client=self.syn)
+ self.schedule_for_cleanup(created_submission.id)
+
+ # THEN the submission should be created successfully
+ assert created_submission.id is not None
+ assert created_submission.entity_id == test_file.id
+ assert created_submission.evaluation_id == test_evaluation.id
+ assert created_submission.name == submission.name
+ assert created_submission.user_id is not None
+ assert created_submission.created_on is not None
+ assert created_submission.version_number is not None
+
+ async def test_store_submission_without_entity_id(
+ self, test_evaluation: Evaluation
+ ):
+ # WHEN I try to create a submission without entity_id
+ submission = Submission(
+ evaluation_id=test_evaluation.id,
+ name="Test Submission",
+ )
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="entity_id is required"):
+ submission.store(synapse_client=self.syn)
+
+ async def test_store_submission_without_evaluation_id(self, test_file: File):
+ # WHEN I try to create a submission without evaluation_id
+ submission = Submission(
+ entity_id=test_file.id,
+ name="Test Submission",
+ )
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'evaluation_id' attribute"):
+ submission.store(synapse_client=self.syn)
+
+ async def test_store_submission_with_docker_repository(
+ self, test_evaluation: Evaluation
+ ):
+        # GIVEN a Docker repository submission would require a real Docker
+        # repository entity; this test only verifies local attribute handling
+        # with a placeholder ID
+
+        # WHEN I create a submission for a Docker repository entity
+        # TODO: use a real Docker repository entity in a full integration test
+ submission = Submission(
+ entity_id="syn123456789", # Would be a Docker repository ID
+ evaluation_id=test_evaluation.id,
+ name=f"Docker Submission {uuid.uuid4()}",
+ )
+
+ # THEN the submission should handle Docker-specific attributes
+ # (This test would need to be expanded with actual Docker repository setup)
+ assert submission.entity_id == "syn123456789"
+ assert submission.evaluation_id == test_evaluation.id
+
+
+class TestSubmissionRetrieval:
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ """Create a test project for submission tests."""
+ project = Project(name=f"test_project_{uuid.uuid4()}").store(synapse_client=syn)
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission tests",
+ content_source=test_project.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = evaluation.store(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission tests."""
+ import os
+ import tempfile
+
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=test_project.id,
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(file.id)
+ return file
+ finally:
+ os.unlink(temp_file_path)
+
+ @pytest.fixture(scope="function")
+ async def test_submission(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Submission:
+ """Create a test submission for retrieval tests."""
+ submission = Submission(
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission {uuid.uuid4()}",
+ )
+ created_submission = submission.store(synapse_client=syn)
+ schedule_for_cleanup(created_submission.id)
+ return created_submission
+
+ async def test_get_submission_by_id(
+ self, test_submission: Submission, test_evaluation: Evaluation, test_file: File
+ ):
+ # WHEN I get a submission by ID
+ retrieved_submission = Submission(id=test_submission.id).get(
+ synapse_client=self.syn
+ )
+
+ # THEN the submission should be retrieved correctly
+ assert retrieved_submission.id == test_submission.id
+ assert retrieved_submission.entity_id == test_file.id
+ assert retrieved_submission.evaluation_id == test_evaluation.id
+ assert retrieved_submission.name == test_submission.name
+ assert retrieved_submission.user_id is not None
+ assert retrieved_submission.created_on is not None
+
+ async def test_get_evaluation_submissions(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ # WHEN I get all submissions for an evaluation
+ submissions = list(
+ Submission.get_evaluation_submissions(
+ evaluation_id=test_evaluation.id, synapse_client=self.syn
+ )
+ )
+
+ # THEN I should get a list of submission objects
+ assert len(submissions) > 0
+ assert all(isinstance(sub, Submission) for sub in submissions)
+
+ # AND the test submission should be in the results
+ submission_ids = [sub.id for sub in submissions]
+ assert test_submission.id in submission_ids
+
+ async def test_get_evaluation_submissions_with_status_filter(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ # WHEN I get submissions filtered by status
+ submissions = list(
+ Submission.get_evaluation_submissions(
+ evaluation_id=test_evaluation.id,
+ status="RECEIVED",
+ synapse_client=self.syn,
+ )
+ )
+
+        # THEN the test submission should be in the results (new submissions start in RECEIVED status)
+ submission_ids = [sub.id for sub in submissions]
+ assert test_submission.id in submission_ids
+
+ async def test_get_evaluation_submissions_generator_behavior(
+ self, test_evaluation: Evaluation
+ ):
+ # WHEN I get submissions using the generator
+ submissions_generator = Submission.get_evaluation_submissions(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+
+ # THEN I should be able to iterate through the results
+ submissions = []
+ for submission in submissions_generator:
+ assert isinstance(submission, Submission)
+ submissions.append(submission)
+
+ # AND all submissions should be valid Submission objects
+ assert all(isinstance(sub, Submission) for sub in submissions)
+
+ async def test_get_user_submissions(self, test_evaluation: Evaluation):
+ # WHEN I get submissions for the current user
+ submissions_generator = Submission.get_user_submissions(
+ evaluation_id=test_evaluation.id, synapse_client=self.syn
+ )
+
+ # THEN I should get a generator that yields Submission objects
+ submissions = list(submissions_generator)
+ # Note: Could be empty if user hasn't made submissions to this evaluation
+ assert all(isinstance(sub, Submission) for sub in submissions)
+
+ async def test_get_user_submissions_generator_behavior(
+ self, test_evaluation: Evaluation
+ ):
+ # WHEN I get user submissions using the generator
+ submissions_generator = Submission.get_user_submissions(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+
+ # THEN I should be able to iterate through the results
+ submissions = []
+ for submission in submissions_generator:
+ assert isinstance(submission, Submission)
+ submissions.append(submission)
+
+ # AND all submissions should be valid Submission objects
+ assert all(isinstance(sub, Submission) for sub in submissions)
+
+ async def test_get_submission_count(self, test_evaluation: Evaluation):
+ # WHEN I get the submission count for an evaluation
+ response = Submission.get_submission_count(
+ evaluation_id=test_evaluation.id, synapse_client=self.syn
+ )
+
+        # THEN I should get a non-negative integer count
+ assert isinstance(response, int)
+ assert response >= 0
+
+
+class TestSubmissionDeletion:
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ """Create a test project for submission tests."""
+ project = Project(name=f"test_project_{uuid.uuid4()}").store(synapse_client=syn)
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission tests",
+ content_source=test_project.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = evaluation.store(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission tests."""
+ import os
+ import tempfile
+
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=test_project.id,
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(file.id)
+ return file
+ finally:
+ os.unlink(temp_file_path)
+
+ async def test_delete_submission_successfully(
+ self, test_evaluation: Evaluation, test_file: File
+ ):
+ # GIVEN a submission
+ submission = Submission(
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission for Deletion {uuid.uuid4()}",
+ )
+ created_submission = submission.store(synapse_client=self.syn)
+
+ # WHEN I delete the submission
+ created_submission.delete(synapse_client=self.syn)
+
+ # THEN attempting to retrieve it should raise an error
+ with pytest.raises(SynapseHTTPError):
+ Submission(id=created_submission.id).get(synapse_client=self.syn)
+
+ async def test_delete_submission_without_id(self):
+ # WHEN I try to delete a submission without an ID
+ submission = Submission(entity_id="syn123", evaluation_id="456")
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="must have an ID to delete"):
+ submission.delete(synapse_client=self.syn)
+
+
+class TestSubmissionCancel:
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ """Create a test project for submission tests."""
+ project = Project(name=f"test_project_{uuid.uuid4()}").store(synapse_client=syn)
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission tests",
+ content_source=test_project.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = evaluation.store(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission tests."""
+ import os
+ import tempfile
+
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=test_project.id,
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(file.id)
+ return file
+ finally:
+ os.unlink(temp_file_path)
+
+    # TODO: Enable once the SubmissionStatus model tests are in place
+ # async def test_cancel_submission_successfully(
+ # self, test_evaluation: Evaluation, test_file: File
+ # ):
+ # # GIVEN a submission
+ # submission = Submission(
+ # entity_id=test_file.id,
+ # evaluation_id=test_evaluation.id,
+ # name=f"Test Submission for Cancellation {uuid.uuid4()}",
+ # )
+ # created_submission = submission.store(synapse_client=self.syn)
+ # self.schedule_for_cleanup(created_submission.id)
+
+ # # WHEN I cancel the submission
+ # cancelled_submission = created_submission.cancel(synapse_client=self.syn)
+
+ # # THEN the submission should be cancelled
+ # assert cancelled_submission.id == created_submission.id
+
+ async def test_cancel_submission_without_id(self):
+ # WHEN I try to cancel a submission without an ID
+ submission = Submission(entity_id="syn123", evaluation_id="456")
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="must have an ID to cancel"):
+ submission.cancel(synapse_client=self.syn)
+
+
+class TestSubmissionValidation:
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ async def test_get_submission_without_id(self):
+ # WHEN I try to get a submission without an ID
+ submission = Submission(entity_id="syn123", evaluation_id="456")
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="must have an ID to get"):
+ submission.get(synapse_client=self.syn)
+
+ async def test_to_synapse_request_missing_entity_id(self):
+ # WHEN I try to create a request without entity_id
+ submission = Submission(evaluation_id="456", name="Test")
+
+ # THEN it should raise a ValueError
+ with pytest.raises(
+ ValueError,
+ match="Your submission object is missing the 'entity_id' attribute",
+ ):
+ submission.to_synapse_request()
+
+ async def test_to_synapse_request_missing_evaluation_id(self):
+ # WHEN I try to create a request without evaluation_id
+ submission = Submission(entity_id="syn123", name="Test")
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'evaluation_id' attribute"):
+ submission.to_synapse_request()
+
+ async def test_to_synapse_request_valid_data(self):
+ # WHEN I create a request with valid required data
+ submission = Submission(
+ entity_id="syn123456",
+ evaluation_id="789",
+ name="Test Submission",
+ team_id="team123",
+ contributors=["user1", "user2"],
+ docker_repository_name="test/repo",
+ docker_digest="sha256:abc123",
+ )
+
+ request_body = submission.to_synapse_request()
+
+ # THEN it should create a valid request body
+ assert request_body["entityId"] == "syn123456"
+ assert request_body["evaluationId"] == "789"
+ assert request_body["name"] == "Test Submission"
+ assert request_body["teamId"] == "team123"
+ assert request_body["contributors"] == ["user1", "user2"]
+ assert request_body["dockerRepositoryName"] == "test/repo"
+ assert request_body["dockerDigest"] == "sha256:abc123"
+
+ async def test_to_synapse_request_minimal_data(self):
+ # WHEN I create a request with only required data
+ submission = Submission(entity_id="syn123456", evaluation_id="789")
+
+ request_body = submission.to_synapse_request()
+
+ # THEN it should create a minimal request body
+ assert request_body["entityId"] == "syn123456"
+ assert request_body["evaluationId"] == "789"
+ assert "name" not in request_body
+ assert "teamId" not in request_body
+ assert "contributors" not in request_body
+ assert "dockerRepositoryName" not in request_body
+ assert "dockerDigest" not in request_body
+
+
+class TestSubmissionDataMapping:
+ async def test_fill_from_dict_complete_data(self):
+ # GIVEN a complete submission response from the REST API
+ api_response = {
+ "id": "123456",
+ "userId": "user123",
+ "submitterAlias": "testuser",
+ "entityId": "syn789",
+ "versionNumber": 1,
+ "evaluationId": "eval456",
+ "name": "Test Submission",
+ "createdOn": "2023-01-01T10:00:00.000Z",
+ "teamId": "team123",
+ "contributors": ["user1", "user2"],
+ "submissionStatus": {"status": "RECEIVED"},
+ "entityBundleJSON": '{"entity": {"id": "syn789"}}',
+ "dockerRepositoryName": "test/repo",
+ "dockerDigest": "sha256:abc123",
+ }
+
+ # WHEN I fill a submission object from the dict
+ submission = Submission()
+ submission.fill_from_dict(api_response)
+
+ # THEN all fields should be mapped correctly
+ assert submission.id == "123456"
+ assert submission.user_id == "user123"
+ assert submission.submitter_alias == "testuser"
+ assert submission.entity_id == "syn789"
+ assert submission.version_number == 1
+ assert submission.evaluation_id == "eval456"
+ assert submission.name == "Test Submission"
+ assert submission.created_on == "2023-01-01T10:00:00.000Z"
+ assert submission.team_id == "team123"
+ assert submission.contributors == ["user1", "user2"]
+ assert submission.submission_status == {"status": "RECEIVED"}
+ assert submission.entity_bundle_json == '{"entity": {"id": "syn789"}}'
+ assert submission.docker_repository_name == "test/repo"
+ assert submission.docker_digest == "sha256:abc123"
+
+ async def test_fill_from_dict_minimal_data(self):
+ # GIVEN a minimal submission response from the REST API
+ api_response = {
+ "id": "123456",
+ "entityId": "syn789",
+ "evaluationId": "eval456",
+ }
+
+ # WHEN I fill a submission object from the dict
+ submission = Submission()
+ submission.fill_from_dict(api_response)
+
+ # THEN required fields should be set and optional fields should have defaults
+ assert submission.id == "123456"
+ assert submission.entity_id == "syn789"
+ assert submission.evaluation_id == "eval456"
+ assert submission.user_id is None
+ assert submission.submitter_alias is None
+ assert submission.version_number is None
+ assert submission.name is None
+ assert submission.created_on is None
+ assert submission.team_id is None
+ assert submission.contributors == []
+ assert submission.submission_status is None
+ assert submission.entity_bundle_json is None
+ assert submission.docker_repository_name is None
+ assert submission.docker_digest is None
diff --git a/tests/integration/synapseclient/models/synchronous/test_submission_bundle.py b/tests/integration/synapseclient/models/synchronous/test_submission_bundle.py
new file mode 100644
index 000000000..70d2bbf79
--- /dev/null
+++ b/tests/integration/synapseclient/models/synchronous/test_submission_bundle.py
@@ -0,0 +1,613 @@
+"""Integration tests for the synapseclient.models.SubmissionBundle class."""
+
+import uuid
+from typing import Callable
+
+import pytest
+
+from synapseclient import Synapse
+from synapseclient.core.exceptions import SynapseHTTPError
+from synapseclient.models import (
+ Evaluation,
+ File,
+ Project,
+ Submission,
+ SubmissionBundle,
+ SubmissionStatus,
+)
+
+
+class TestSubmissionBundleRetrieval:
+ """Tests for retrieving SubmissionBundle objects."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ project = Project(name=f"test_project_{uuid.uuid4()}").store(synapse_client=syn)
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="Test evaluation for SubmissionBundle testing",
+ content_source=test_project.id,
+ submission_instructions_message="Submit your files here",
+ submission_receipt_message="Thank you for your submission!",
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(evaluation.id)
+ return evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+        import os
+        import tempfile
+
+        # Write the content to a temporary file so nothing is left behind in
+        # the working directory after the test run
+        with tempfile.NamedTemporaryFile(
+            mode="w", delete=False, suffix=".txt"
+        ) as temp_file:
+            temp_file.write(
+                f"Test file content for submission bundle tests {uuid.uuid4()}"
+            )
+            temp_file_path = temp_file.name
+
+        try:
+            file_entity = File(
+                path=temp_file_path,
+                name=f"test_submission_file_{uuid.uuid4()}",
+                parent_id=test_project.id,
+            ).store(synapse_client=syn)
+            schedule_for_cleanup(file_entity.id)
+            return file_entity
+        finally:
+            os.unlink(temp_file_path)
+
+ @pytest.fixture(scope="function")
+ async def test_submission(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Submission:
+ submission = Submission(
+ name=f"test_submission_{uuid.uuid4()}",
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ submitter_alias="test_user_bundle",
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(submission.id)
+ return submission
+
+ @pytest.fixture(scope="function")
+ async def multiple_submissions(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> list[Submission]:
+ """Create multiple submissions for testing pagination and filtering."""
+ submissions = []
+ for i in range(3):
+ submission = Submission(
+ name=f"test_submission_{uuid.uuid4()}_{i}",
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ submitter_alias=f"test_user_{i}",
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(submission.id)
+ submissions.append(submission)
+ return submissions
+
+ async def test_get_evaluation_submission_bundles_basic(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ """Test getting submission bundles for an evaluation."""
+ # WHEN I get submission bundles for an evaluation using generator
+ bundles = list(
+ SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN I should get at least our test submission
+ assert len(bundles) >= 1 # At least our test submission
+
+ # AND each bundle should have proper structure
+ found_test_bundle = False
+ for bundle in bundles:
+ assert isinstance(bundle, SubmissionBundle)
+ assert bundle.submission is not None
+ assert bundle.submission.id is not None
+ assert bundle.submission.evaluation_id == test_evaluation.id
+
+ if bundle.submission.id == test_submission.id:
+ found_test_bundle = True
+ assert bundle.submission.entity_id == test_submission.entity_id
+ assert bundle.submission.name == test_submission.name
+
+ # AND our test submission should be found
+ assert found_test_bundle, "Test submission should be found in bundles"
+
+ async def test_get_evaluation_submission_bundles_generator_behavior(
+ self, test_evaluation: Evaluation
+ ):
+ """Test that the generator returns SubmissionBundle objects correctly."""
+ # WHEN I get submission bundles using the generator
+ bundles_generator = SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+
+ # THEN I should be able to iterate through the results
+ bundles = []
+ for bundle in bundles_generator:
+ assert isinstance(bundle, SubmissionBundle)
+ bundles.append(bundle)
+
+ # AND all bundles should be valid SubmissionBundle objects
+ assert all(isinstance(bundle, SubmissionBundle) for bundle in bundles)
+
+ async def test_get_evaluation_submission_bundles_with_status_filter(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ """Test getting submission bundles filtered by status."""
+ # WHEN I get submission bundles filtered by "RECEIVED" status
+ bundles = list(
+ SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ status="RECEIVED",
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN the bundles should be retrieved
+ assert bundles is not None
+
+ # AND all bundles should have RECEIVED status (if any exist)
+ for bundle in bundles:
+ if bundle.submission_status:
+ assert bundle.submission_status.status == "RECEIVED"
+
+ # WHEN I attempt to get submission bundles with an invalid status
+ with pytest.raises(SynapseHTTPError) as exc_info:
+ list(
+ SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ status="NONEXISTENT_STATUS",
+ synapse_client=self.syn,
+ )
+ )
+ # THEN it should raise a SynapseHTTPError (400 for invalid enum)
+ assert exc_info.value.response.status_code == 400
+ assert "No enum constant" in str(exc_info.value)
+ assert "NONEXISTENT_STATUS" in str(exc_info.value)
+
+ async def test_get_evaluation_submission_bundles_generator_behavior_with_multiple(
+ self, test_evaluation: Evaluation, multiple_submissions: list[Submission]
+ ):
+ """Test generator behavior when getting submission bundles with multiple submissions."""
+ # WHEN I get submission bundles using the generator
+ bundles = list(
+ SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN I should get all available bundles (at least the ones we created)
+ assert bundles is not None
+ assert len(bundles) >= len(multiple_submissions)
+
+        # AND creating a fresh generator should yield the same results,
+        # since a single generator can only be consumed once
+ bundles_generator = SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+
+ # THEN I should get the same bundles when iterating again
+ bundles_second_iteration = list(bundles_generator)
+ assert len(bundles_second_iteration) == len(bundles)
+
+ # AND all created submissions should be found
+ bundle_submission_ids = {
+ bundle.submission.id for bundle in bundles if bundle.submission
+ }
+ created_submission_ids = {sub.id for sub in multiple_submissions}
+ assert created_submission_ids.issubset(
+ bundle_submission_ids
+ ), "All created submissions should be found in bundles"
+
+ async def test_get_evaluation_submission_bundles_invalid_evaluation(self):
+ """Test getting submission bundles for invalid evaluation ID."""
+ # WHEN I try to get submission bundles for a non-existent evaluation
+ with pytest.raises(SynapseHTTPError) as exc_info:
+ list(
+ SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id="syn999999999999",
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN it should raise a SynapseHTTPError (likely 403 or 404)
+ assert exc_info.value.response.status_code in [403, 404]
+
+ async def test_get_user_submission_bundles_basic(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ """Test getting user submission bundles for an evaluation."""
+ # WHEN I get user submission bundles for an evaluation
+ bundles = list(
+ SubmissionBundle.get_user_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN the bundles should be retrieved
+ assert bundles is not None
+ assert len(bundles) >= 1 # At least our test submission
+
+ # AND each bundle should have proper structure
+ found_test_bundle = False
+ for bundle in bundles:
+ assert isinstance(bundle, SubmissionBundle)
+ assert bundle.submission is not None
+ assert bundle.submission.id is not None
+ assert bundle.submission.evaluation_id == test_evaluation.id
+
+ if bundle.submission.id == test_submission.id:
+ found_test_bundle = True
+ assert bundle.submission.entity_id == test_submission.entity_id
+ assert bundle.submission.name == test_submission.name
+
+ # AND our test submission should be found
+ assert found_test_bundle, "Test submission should be found in user bundles"
+
+ async def test_get_user_submission_bundles_generator_behavior_with_multiple(
+ self, test_evaluation: Evaluation, multiple_submissions: list[Submission]
+ ):
+ """Test generator behavior when getting user submission bundles with multiple submissions."""
+ # WHEN I get user submission bundles using the generator
+ bundles = list(
+ SubmissionBundle.get_user_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN I should get all available bundles (at least the ones we created)
+ assert bundles is not None
+ assert len(bundles) >= len(multiple_submissions)
+
+        # AND creating a fresh generator should yield the same results,
+        # since a single generator can only be consumed once
+ bundles_generator = SubmissionBundle.get_user_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+
+ # THEN I should get the same bundles when iterating again
+ bundles_second_iteration = list(bundles_generator)
+ assert len(bundles_second_iteration) == len(bundles)
+
+ # AND all created submissions should be found
+ bundle_submission_ids = {
+ bundle.submission.id for bundle in bundles if bundle.submission
+ }
+ created_submission_ids = {sub.id for sub in multiple_submissions}
+ assert created_submission_ids.issubset(
+ bundle_submission_ids
+ ), "All created submissions should be found in user bundles"
+
+
+class TestSubmissionBundleDataIntegrity:
+ """Tests for data integrity and relationships in SubmissionBundle objects."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ project = Project(name=f"test_project_{uuid.uuid4()}").store(synapse_client=syn)
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="Test evaluation for data integrity testing",
+ content_source=test_project.id,
+ submission_instructions_message="Submit your files here",
+ submission_receipt_message="Thank you for your submission!",
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(evaluation.id)
+ return evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+        import os
+        import tempfile
+
+        # Write the content to a temporary file so nothing is left behind in
+        # the working directory after the test run
+        with tempfile.NamedTemporaryFile(
+            mode="w", delete=False, suffix=".txt"
+        ) as temp_file:
+            temp_file.write(
+                f"Test file content for data integrity tests {uuid.uuid4()}"
+            )
+            temp_file_path = temp_file.name
+
+        try:
+            file_entity = File(
+                path=temp_file_path,
+                name=f"test_integrity_file_{uuid.uuid4()}",
+                parent_id=test_project.id,
+            ).store(synapse_client=syn)
+            schedule_for_cleanup(file_entity.id)
+            return file_entity
+        finally:
+            os.unlink(temp_file_path)
+
+ @pytest.fixture(scope="function")
+ async def test_submission(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Submission:
+ submission = Submission(
+ name=f"test_submission_{uuid.uuid4()}",
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ submitter_alias="test_user_integrity",
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(submission.id)
+ return submission
+
+ async def test_submission_bundle_data_consistency(
+ self, test_evaluation: Evaluation, test_submission: Submission, test_file: File
+ ):
+ """Test that submission bundles maintain data consistency between submission and status."""
+ # WHEN I get submission bundles for the evaluation
+ bundles = list(
+ SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN I should find our test submission
+ test_bundle = None
+ for bundle in bundles:
+ if bundle.submission and bundle.submission.id == test_submission.id:
+ test_bundle = bundle
+ break
+
+ assert test_bundle is not None, "Test submission bundle should be found"
+
+ # AND the submission data should be consistent
+ assert test_bundle.submission.id == test_submission.id
+ assert test_bundle.submission.entity_id == test_file.id
+ assert test_bundle.submission.evaluation_id == test_evaluation.id
+ assert test_bundle.submission.name == test_submission.name
+
+ # AND if there's a submission status, it should reference the same entities
+ if test_bundle.submission_status:
+ assert test_bundle.submission_status.id == test_submission.id
+ assert test_bundle.submission_status.entity_id == test_file.id
+ assert test_bundle.submission_status.evaluation_id == test_evaluation.id
+
+ async def test_submission_bundle_status_updates_reflected(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ """Test that submission status updates are reflected in bundles."""
+ # GIVEN a submission status that I can update
+ submission_status = SubmissionStatus(id=test_submission.id).get(
+ synapse_client=self.syn
+ )
+ original_status = submission_status.status
+
+ # WHEN I update the submission status
+ submission_status.status = "VALIDATED"
+ submission_status.submission_annotations = {
+ "test_score": 95.5,
+ "test_feedback": "Excellent work!",
+ }
+ updated_status = submission_status.store(synapse_client=self.syn)
+
+ # AND I get submission bundles again
+ bundles = list(
+ SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN the bundle should reflect the updated status
+ test_bundle = None
+ for bundle in bundles:
+ if bundle.submission and bundle.submission.id == test_submission.id:
+ test_bundle = bundle
+ break
+
+ assert test_bundle is not None
+ assert test_bundle.submission_status is not None
+ assert test_bundle.submission_status.status == "VALIDATED"
+ assert test_bundle.submission_status.submission_annotations is not None
+ assert "test_score" in test_bundle.submission_status.submission_annotations
+ assert test_bundle.submission_status.submission_annotations["test_score"] == [
+ 95.5
+ ]
+
+ # CLEANUP: Reset the status back to original
+ submission_status.status = original_status
+ submission_status.submission_annotations = {}
+ submission_status.store(synapse_client=self.syn)
+
+ async def test_submission_bundle_evaluation_id_propagation(
+ self, test_evaluation: Evaluation, test_submission: Submission
+ ):
+ """Test that evaluation_id is properly propagated from submission to status."""
+ # WHEN I get submission bundles
+ bundles = list(
+ SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN find our test bundle
+ test_bundle = None
+ for bundle in bundles:
+ if bundle.submission and bundle.submission.id == test_submission.id:
+ test_bundle = bundle
+ break
+
+ assert test_bundle is not None
+
+ # AND both submission and status should have the correct evaluation_id
+ assert test_bundle.submission.evaluation_id == test_evaluation.id
+ if test_bundle.submission_status:
+ assert test_bundle.submission_status.evaluation_id == test_evaluation.id
+
+
+class TestSubmissionBundleEdgeCases:
+ """Tests for edge cases and error handling in SubmissionBundle operations."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_project(
+ self, syn: Synapse, schedule_for_cleanup: Callable[..., None]
+ ) -> Project:
+ project = Project(name=f"test_project_{uuid.uuid4()}").store(synapse_client=syn)
+ schedule_for_cleanup(project.id)
+ return project
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ test_project: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="Test evaluation for edge case testing",
+ content_source=test_project.id,
+ submission_instructions_message="Submit your files here",
+ submission_receipt_message="Thank you for your submission!",
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(evaluation.id)
+ return evaluation
+
+ async def test_get_evaluation_submission_bundles_empty_evaluation(
+ self, test_evaluation: Evaluation
+ ):
+ """Test getting submission bundles from an evaluation with no submissions."""
+ # WHEN I get submission bundles from an evaluation with no submissions
+ bundles = list(
+ SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN it should return an empty list (not None or error)
+ assert bundles is not None
+ assert isinstance(bundles, list)
+ assert len(bundles) == 0
+
+ async def test_get_user_submission_bundles_empty_evaluation(
+ self, test_evaluation: Evaluation
+ ):
+ """Test getting user submission bundles from an evaluation with no submissions."""
+ # WHEN I get user submission bundles from an evaluation with no submissions
+ bundles = list(
+ SubmissionBundle.get_user_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN it should return an empty list (not None or error)
+ assert bundles is not None
+ assert isinstance(bundles, list)
+ assert len(bundles) == 0
+
+    async def test_get_evaluation_submission_bundles_generator_consistency(
+        self, test_evaluation: Evaluation
+    ):
+        """Test that the generator produces consistent results across multiple iterations."""
+        # WHEN I request bundles using the generator twice
+        first_pass = list(
+            SubmissionBundle.get_evaluation_submission_bundles(
+                evaluation_id=test_evaluation.id,
+                synapse_client=self.syn,
+            )
+        )
+        second_pass = list(
+            SubmissionBundle.get_evaluation_submission_bundles(
+                evaluation_id=test_evaluation.id,
+                synapse_client=self.syn,
+            )
+        )
+
+        # THEN both iterations should produce the same results
+        assert isinstance(first_pass, list)
+        assert len(first_pass) == len(second_pass)
+
+ async def test_get_user_submission_bundles_generator_empty_results(
+ self, test_evaluation: Evaluation
+ ):
+ """Test that user submission bundles generator handles empty results correctly."""
+ # WHEN I request bundles from an empty evaluation
+ bundles = list(
+ SubmissionBundle.get_user_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN it should return an empty list (not error)
+ assert bundles is not None
+ assert isinstance(bundles, list)
+ assert len(bundles) == 0
+
+ async def test_get_submission_bundles_with_default_parameters(
+ self, test_evaluation: Evaluation
+ ):
+ """Test that default parameters work correctly."""
+ # WHEN I call methods without optional parameters
+ eval_bundles = list(
+ SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+ )
+ user_bundles = list(
+ SubmissionBundle.get_user_submission_bundles(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN both should work with default values
+ assert eval_bundles is not None
+ assert user_bundles is not None
+ assert isinstance(eval_bundles, list)
+ assert isinstance(user_bundles, list)
diff --git a/tests/integration/synapseclient/models/synchronous/test_submission_status.py b/tests/integration/synapseclient/models/synchronous/test_submission_status.py
new file mode 100644
index 000000000..b4ae45372
--- /dev/null
+++ b/tests/integration/synapseclient/models/synchronous/test_submission_status.py
@@ -0,0 +1,883 @@
+"""Integration tests for the synapseclient.models.SubmissionStatus class."""
+
+import os
+import tempfile
+import uuid
+from typing import Callable
+
+import pytest
+
+from synapseclient import Synapse
+from synapseclient.annotations import from_submission_status_annotations
+from synapseclient.core.exceptions import SynapseHTTPError
+from synapseclient.models import Evaluation, File, Project, Submission, SubmissionStatus
+
+
+class TestSubmissionStatusRetrieval:
+ """Tests for retrieving SubmissionStatus objects."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission status tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission status tests",
+ content_source=project_model.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = evaluation.store(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission status tests."""
+ # Create a temporary file
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission status testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=project_model.id,
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(file.id)
+ return file
+ finally:
+ # Clean up the temporary file
+ os.unlink(temp_file_path)
+
+ @pytest.fixture(scope="function")
+ async def test_submission(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Submission:
+ """Create a test submission for status tests."""
+ submission = Submission(
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission {uuid.uuid4()}",
+ )
+ created_submission = submission.store(synapse_client=syn)
+ schedule_for_cleanup(created_submission.id)
+ return created_submission
+
+ async def test_get_submission_status_by_id(
+ self, test_submission: Submission, test_evaluation: Evaluation
+ ):
+ """Test retrieving a submission status by ID."""
+ # WHEN I get a submission status by ID
+ submission_status = SubmissionStatus(id=test_submission.id).get(
+ synapse_client=self.syn
+ )
+
+ # THEN the submission status should be retrieved correctly
+ assert submission_status.id == test_submission.id
+ assert submission_status.entity_id == test_submission.entity_id
+ assert submission_status.evaluation_id == test_evaluation.id
+ assert (
+ submission_status.status is not None
+ ) # Should have some status (e.g., "RECEIVED")
+ assert submission_status.etag is not None
+ assert submission_status.status_version is not None
+ assert submission_status.modified_on is not None
+
+ async def test_get_submission_status_without_id(self):
+ """Test that getting a submission status without ID raises ValueError."""
+ # WHEN I try to get a submission status without an ID
+ submission_status = SubmissionStatus()
+
+ # THEN it should raise a ValueError
+ with pytest.raises(
+ ValueError, match="The submission status must have an ID to get"
+ ):
+ submission_status.get(synapse_client=self.syn)
+
+ async def test_get_submission_status_with_invalid_id(self):
+ """Test that getting a submission status with invalid ID raises exception."""
+ # WHEN I try to get a submission status with an invalid ID
+ submission_status = SubmissionStatus(id="syn999999999999")
+
+ # THEN it should raise a SynapseHTTPError (404)
+ with pytest.raises(SynapseHTTPError):
+ submission_status.get(synapse_client=self.syn)
+
+
+class TestSubmissionStatusUpdates:
+ """Tests for updating SubmissionStatus objects."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission status tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission status tests",
+ content_source=project_model.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = evaluation.store(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission status tests."""
+ # Create a temporary file
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission status testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=project_model.id,
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(file.id)
+ return file
+ finally:
+ # Clean up the temporary file
+ os.unlink(temp_file_path)
+
+ @pytest.fixture(scope="function")
+ async def test_submission(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Submission:
+ """Create a test submission for status tests."""
+ submission = Submission(
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission {uuid.uuid4()}",
+ )
+ created_submission = submission.store(synapse_client=syn)
+ schedule_for_cleanup(created_submission.id)
+ return created_submission
+
+ @pytest.fixture(scope="function")
+ async def test_submission_status(
+ self, test_submission: Submission
+ ) -> SubmissionStatus:
+ """Create a test submission status by getting the existing one."""
+ submission_status = SubmissionStatus(id=test_submission.id).get(
+ synapse_client=self.syn
+ )
+ return submission_status
+
+ async def test_store_submission_status_with_status_change(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test updating a submission status with a status change."""
+ # GIVEN a submission status that exists
+ original_status = test_submission_status.status
+ original_etag = test_submission_status.etag
+ original_status_version = test_submission_status.status_version
+
+ # WHEN I update the status
+ test_submission_status.status = "VALIDATED"
+ updated_status = test_submission_status.store(synapse_client=self.syn)
+
+ # THEN the submission status should be updated
+ assert updated_status.id == test_submission_status.id
+ assert updated_status.status == "VALIDATED"
+ assert updated_status.status != original_status
+ assert updated_status.etag != original_etag # etag should change
+ assert updated_status.status_version > original_status_version
+
+ async def test_store_submission_status_with_submission_annotations(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test updating a submission status with submission annotations."""
+ # WHEN I add submission annotations and store
+ test_submission_status.submission_annotations = {
+ "score": 85.5,
+ "feedback": "Good work!",
+ }
+ updated_status = test_submission_status.store(synapse_client=self.syn)
+
+ # THEN the submission annotations should be saved
+ assert updated_status.submission_annotations is not None
+ assert "score" in updated_status.submission_annotations
+ assert updated_status.submission_annotations["score"] == [85.5]
+ assert updated_status.submission_annotations["feedback"] == ["Good work!"]
+
+ async def test_store_submission_status_with_legacy_annotations(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test updating a submission status with legacy annotations."""
+ # WHEN I add legacy annotations and store
+ test_submission_status.annotations = {
+ "internal_score": 92.3,
+ "reviewer_notes": "Excellent submission",
+ }
+ updated_status = test_submission_status.store(synapse_client=self.syn)
+ assert updated_status.annotations is not None
+
+ converted_annotations = from_submission_status_annotations(
+ updated_status.annotations
+ )
+
+ # THEN the legacy annotations should be saved
+ assert "internal_score" in converted_annotations
+ assert converted_annotations["internal_score"] == 92.3
+ assert converted_annotations["reviewer_notes"] == "Excellent submission"
+
+ async def test_store_submission_status_with_combined_annotations(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test updating a submission status with both types of annotations."""
+ # WHEN I add both submission and legacy annotations
+ test_submission_status.submission_annotations = {
+ "public_score": 78.0,
+ "category": "Bronze",
+ }
+ test_submission_status.annotations = {
+ "internal_review": True,
+ "notes": "Needs minor improvements",
+ }
+ updated_status = test_submission_status.store(synapse_client=self.syn)
+
+ # THEN both types of annotations should be saved
+ assert updated_status.submission_annotations is not None
+ assert "public_score" in updated_status.submission_annotations
+ assert updated_status.submission_annotations["public_score"] == [78.0]
+
+ assert updated_status.annotations is not None
+ converted_annotations = from_submission_status_annotations(
+ updated_status.annotations
+ )
+ assert "internal_review" in converted_annotations
+ assert converted_annotations["internal_review"] == "true"
+
+ async def test_store_submission_status_with_private_annotations_false(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test updating a submission status with private_status_annotations set to False."""
+ # WHEN I add legacy annotations with private_status_annotations set to False
+ test_submission_status.annotations = {
+ "public_internal_score": 88.5,
+ "public_notes": "This should be visible",
+ }
+ test_submission_status.private_status_annotations = False
+
+ # WHEN I store the submission status
+ updated_status = test_submission_status.store(synapse_client=self.syn)
+
+ # THEN they should be properly stored
+ assert updated_status.annotations is not None
+ converted_annotations = from_submission_status_annotations(
+ updated_status.annotations
+ )
+ assert "public_internal_score" in converted_annotations
+ assert converted_annotations["public_internal_score"] == 88.5
+ assert converted_annotations["public_notes"] == "This should be visible"
+
+ # AND the annotations should be marked as not private
+ for annos_type in ["stringAnnos", "doubleAnnos"]:
+ annotations = updated_status.annotations[annos_type]
+ assert all(not anno["isPrivate"] for anno in annotations)
+
+ async def test_store_submission_status_with_private_annotations_true(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test updating a submission status with private_status_annotations set to True (default)."""
+ # WHEN I add legacy annotations with private_status_annotations set to True (default)
+ test_submission_status.annotations = {
+ "private_internal_score": 95.0,
+ "private_notes": "This should be private",
+ }
+ test_submission_status.private_status_annotations = True
+
+ # WHEN I store the submission status
+ updated_status = test_submission_status.store(synapse_client=self.syn)
+
+ # THEN they should be properly stored
+ assert updated_status.annotations is not None
+ converted_annotations = from_submission_status_annotations(
+ updated_status.annotations
+ )
+ assert "private_internal_score" in converted_annotations
+ assert converted_annotations["private_internal_score"] == 95.0
+ assert converted_annotations["private_notes"] == "This should be private"
+
+ # AND the annotations should be marked as private
+ for annos_type in ["stringAnnos", "doubleAnnos"]:
+ annotations = updated_status.annotations[annos_type]
+ assert all(anno["isPrivate"] for anno in annotations)
+
+ async def test_store_submission_status_without_id(self):
+ """Test that storing a submission status without ID raises ValueError."""
+ # WHEN I try to store a submission status without an ID
+ submission_status = SubmissionStatus(status="SCORED")
+
+ # THEN it should raise a ValueError
+ with pytest.raises(
+ ValueError, match="The submission status must have an ID to update"
+ ):
+ submission_status.store(synapse_client=self.syn)
+
+ async def test_store_submission_status_without_changes(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test that storing a submission status without changes shows warning."""
+ # GIVEN a submission status that hasn't been modified
+ # (it already has _last_persistent_instance set from get())
+
+ # WHEN I try to store it without making changes
+ result = test_submission_status.store(synapse_client=self.syn)
+
+ # THEN it should return the same instance (no update sent to Synapse)
+ assert result is test_submission_status
+
+ async def test_store_submission_status_change_tracking(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test that change tracking works correctly."""
+ # GIVEN a submission status that was retrieved (has_changed should be False)
+ assert not test_submission_status.has_changed
+
+ # WHEN I make a change
+ test_submission_status.status = "SCORED"
+
+ # THEN has_changed should be True
+ assert test_submission_status.has_changed
+
+ # WHEN I store the changes
+ updated_status = test_submission_status.store(synapse_client=self.syn)
+
+ # THEN has_changed should be False again
+ assert not updated_status.has_changed
+
+ async def test_has_changed_property_edge_cases(
+ self, test_submission_status: SubmissionStatus
+ ):
+ """Test the has_changed property with various edge cases and detailed scenarios."""
+ # GIVEN a submission status that was just retrieved
+ assert not test_submission_status.has_changed
+ original_annotations = (
+ test_submission_status.annotations.copy()
+ if test_submission_status.annotations
+ else {}
+ )
+
+ # WHEN I modify only annotations (not submission_annotations)
+ test_submission_status.annotations = {"test_key": "test_value"}
+
+ # THEN has_changed should be True
+ assert test_submission_status.has_changed
+
+ # WHEN I reset annotations to the original value (should be the same as the persistent instance)
+ test_submission_status.annotations = original_annotations
+
+ # THEN has_changed should be False (same as original)
+ assert not test_submission_status.has_changed
+
+ # WHEN I add a different annotation value
+ test_submission_status.annotations = {"different_key": "different_value"}
+
+ # THEN has_changed should be True
+ assert test_submission_status.has_changed
+
+ # WHEN I store and get a fresh copy
+ updated_status = test_submission_status.store(synapse_client=self.syn)
+ fresh_status = SubmissionStatus(id=updated_status.id).get(
+ synapse_client=self.syn
+ )
+
+ # THEN the fresh copy should not have changes
+ assert not fresh_status.has_changed
+
+ # WHEN I modify only submission_annotations
+ fresh_status.submission_annotations = {"new_key": ["new_value"]}
+
+ # THEN has_changed should be True
+ assert fresh_status.has_changed
+
+ # WHEN I modify a scalar field
+ fresh_status.status = "VALIDATED"
+
+ # THEN has_changed should still be True
+ assert fresh_status.has_changed
+
+
+class TestSubmissionStatusBulkOperations:
+ """Tests for bulk SubmissionStatus operations."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission status tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission status tests",
+ content_source=project_model.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = evaluation.store(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_files(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> list[File]:
+ """Create multiple test files for submission status tests."""
+ files = []
+ for i in range(3):
+ # Create a temporary file
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write(
+ f"This is test content {i} for submission status testing."
+ )
+ temp_file_path = temp_file.name
+
+ try:
+ file = File(
+ path=temp_file_path,
+ name=f"test_file_{i}_{uuid.uuid4()}.txt",
+ parent_id=project_model.id,
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(file.id)
+ files.append(file)
+ finally:
+ # Clean up the temporary file
+ os.unlink(temp_file_path)
+ return files
+
+ @pytest.fixture(scope="function")
+ async def test_submissions(
+ self,
+ test_evaluation: Evaluation,
+ test_files: list[File],
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> list[Submission]:
+ """Create multiple test submissions for status tests."""
+ submissions = []
+ for i, file in enumerate(test_files):
+ submission = Submission(
+ entity_id=file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission {i} {uuid.uuid4()}",
+ )
+ created_submission = submission.store(synapse_client=syn)
+ schedule_for_cleanup(created_submission.id)
+ submissions.append(created_submission)
+ return submissions
+
+ async def test_get_all_submission_statuses(
+ self, test_evaluation: Evaluation, test_submissions: list[Submission]
+ ):
+ """Test getting all submission statuses for an evaluation."""
+ # WHEN I get all submission statuses for the evaluation
+ statuses = SubmissionStatus.get_all_submission_statuses(
+ evaluation_id=test_evaluation.id,
+ synapse_client=self.syn,
+ )
+
+ # THEN I should get submission statuses for all submissions
+ assert len(statuses) >= len(test_submissions)
+ status_ids = [status.id for status in statuses]
+
+ # AND all test submissions should have their statuses in the results
+ for submission in test_submissions:
+ assert submission.id in status_ids
+
+ # AND each status should have proper attributes
+ for status in statuses:
+ assert status.id is not None
+ assert status.evaluation_id == test_evaluation.id
+ assert status.status is not None
+ assert status.etag is not None
+
+ async def test_get_all_submission_statuses_with_status_filter(
+ self, test_evaluation: Evaluation, test_submissions: list[Submission]
+ ):
+ """Test getting submission statuses with status filter."""
+ # WHEN I get submission statuses filtered by status
+ statuses = SubmissionStatus.get_all_submission_statuses(
+ evaluation_id=test_evaluation.id,
+ status="RECEIVED",
+ synapse_client=self.syn,
+ )
+
+ # THEN I should only get statuses with the specified status
+ for status in statuses:
+ assert status.status == "RECEIVED"
+ assert status.evaluation_id == test_evaluation.id
+
+ async def test_get_all_submission_statuses_with_pagination(
+ self, test_evaluation: Evaluation, test_submissions: list[Submission]
+ ):
+ """Test getting submission statuses with pagination."""
+ # WHEN I get submission statuses with pagination
+ statuses_page1 = SubmissionStatus.get_all_submission_statuses(
+ evaluation_id=test_evaluation.id,
+ limit=2,
+ offset=0,
+ synapse_client=self.syn,
+ )
+
+ # THEN I should get at most 2 statuses
+ assert len(statuses_page1) <= 2
+
+ # WHEN I get the next page
+ statuses_page2 = SubmissionStatus.get_all_submission_statuses(
+ evaluation_id=test_evaluation.id,
+ limit=2,
+ offset=2,
+ synapse_client=self.syn,
+ )
+
+ # THEN the results should be different (assuming more than 2 submissions exist)
+ if len(statuses_page1) == 2 and len(statuses_page2) > 0:
+ page1_ids = {status.id for status in statuses_page1}
+ page2_ids = {status.id for status in statuses_page2}
+ assert page1_ids != page2_ids # Should be different sets
+
+ async def test_batch_update_submission_statuses(
+ self, test_evaluation: Evaluation, test_submissions: list[Submission]
+ ):
+ """Test batch updating multiple submission statuses."""
+ # GIVEN multiple submission statuses
+ statuses = []
+ for submission in test_submissions:
+ status = SubmissionStatus(id=submission.id).get(synapse_client=self.syn)
+ # Update each status
+ status.status = "VALIDATED"
+ status.submission_annotations = {
+ "batch_score": 90.0 + (len(statuses) * 2),
+ "batch_processed": True,
+ }
+ statuses.append(status)
+
+ # WHEN I batch update the statuses
+ response = SubmissionStatus.batch_update_submission_statuses(
+ evaluation_id=test_evaluation.id,
+ statuses=statuses,
+ synapse_client=self.syn,
+ )
+
+ # THEN the batch update should succeed
+ assert response is not None
+ assert "batchToken" in response or response == {} # Response format may vary
+
+ # AND I should be able to verify the updates by retrieving the statuses
+ for original_status in statuses:
+ updated_status = SubmissionStatus(id=original_status.id).get(
+ synapse_client=self.syn
+ )
+ assert updated_status.status == "VALIDATED"
+ converted_submission_annotations = from_submission_status_annotations(
+ updated_status.submission_annotations
+ )
+ assert "batch_score" in converted_submission_annotations
+ assert converted_submission_annotations["batch_processed"] == ["true"]
+
+
+class TestSubmissionStatusCancellation:
+ """Tests for SubmissionStatus cancellation functionality."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ @pytest.fixture(scope="function")
+ async def test_evaluation(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Evaluation:
+ """Create a test evaluation for submission status tests."""
+ evaluation = Evaluation(
+ name=f"test_evaluation_{uuid.uuid4()}",
+ description="A test evaluation for submission status tests",
+ content_source=project_model.id,
+ submission_instructions_message="Please submit your results",
+ submission_receipt_message="Thank you!",
+ )
+ created_evaluation = evaluation.store(synapse_client=syn)
+ schedule_for_cleanup(created_evaluation.id)
+ return created_evaluation
+
+ @pytest.fixture(scope="function")
+ async def test_file(
+ self,
+ project_model: Project,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> File:
+ """Create a test file for submission status tests."""
+ # Create a temporary file
+ with tempfile.NamedTemporaryFile(
+ mode="w", delete=False, suffix=".txt"
+ ) as temp_file:
+ temp_file.write("This is test content for submission status testing.")
+ temp_file_path = temp_file.name
+
+ try:
+ file = File(
+ path=temp_file_path,
+ name=f"test_file_{uuid.uuid4()}.txt",
+ parent_id=project_model.id,
+ ).store(synapse_client=syn)
+ schedule_for_cleanup(file.id)
+ return file
+ finally:
+ # Clean up the temporary file
+ os.unlink(temp_file_path)
+
+ @pytest.fixture(scope="function")
+ async def test_submission(
+ self,
+ test_evaluation: Evaluation,
+ test_file: File,
+ syn: Synapse,
+ schedule_for_cleanup: Callable[..., None],
+ ) -> Submission:
+ """Create a test submission for status tests."""
+ submission = Submission(
+ entity_id=test_file.id,
+ evaluation_id=test_evaluation.id,
+ name=f"Test Submission {uuid.uuid4()}",
+ )
+ created_submission = submission.store(synapse_client=syn)
+ schedule_for_cleanup(created_submission.id)
+ return created_submission
+
+ async def test_submission_cancellation_workflow(self, test_submission: Submission):
+ """Test the complete submission cancellation workflow."""
+ # GIVEN a submission that exists
+ submission_id = test_submission.id
+
+ # WHEN I get the initial submission status
+ initial_status = SubmissionStatus(id=submission_id).get(synapse_client=self.syn)
+
+ # THEN initially it should not be cancellable or cancelled
+ assert initial_status.can_cancel is False
+ assert initial_status.cancel_requested is False
+
+ # WHEN I update the submission status to allow cancellation
+ initial_status.can_cancel = True
+ updated_status = initial_status.store(synapse_client=self.syn)
+
+ # THEN the submission should be marked as cancellable
+ assert updated_status.can_cancel is True
+ assert updated_status.cancel_requested is False
+
+ # WHEN I cancel the submission
+ test_submission.cancel()
+
+ # THEN I should be able to retrieve the updated status showing cancellation was requested
+ final_status = SubmissionStatus(id=submission_id).get(synapse_client=self.syn)
+ assert final_status.can_cancel is True
+ assert final_status.cancel_requested is True
+
+
+class TestSubmissionStatusValidation:
+ """Tests for SubmissionStatus validation and error handling."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init(self, syn: Synapse, schedule_for_cleanup: Callable[..., None]) -> None:
+ self.syn = syn
+ self.schedule_for_cleanup = schedule_for_cleanup
+
+ async def test_to_synapse_request_missing_required_attributes(self):
+ """Test that to_synapse_request validates required attributes."""
+ # WHEN I try to create a request with missing required attributes
+ submission_status = SubmissionStatus(id="123") # Missing etag, status_version
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'etag' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ # WHEN I add etag but still missing status_version
+ submission_status.etag = "some-etag"
+
+ # THEN it should raise a ValueError for status_version
+ with pytest.raises(ValueError, match="missing the 'status_version' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ async def test_to_synapse_request_with_annotations_missing_evaluation_id(self):
+ """Test that annotations require evaluation_id."""
+ # WHEN I try to create a request with annotations but no evaluation_id
+ submission_status = SubmissionStatus(
+ id="123", etag="some-etag", status_version=1, annotations={"test": "value"}
+ )
+
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'evaluation_id' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ async def test_to_synapse_request_valid_attributes(self):
+ """Test that to_synapse_request works with valid attributes."""
+ # WHEN I create a request with all required attributes
+ submission_status = SubmissionStatus(
+ id="123",
+ etag="some-etag",
+ status_version=1,
+ status="SCORED",
+ evaluation_id="eval123",
+ submission_annotations={"score": 85.5},
+ )
+
+ # THEN it should create a valid request body
+ request_body = submission_status.to_synapse_request(synapse_client=self.syn)
+
+ # AND the request should have the required fields
+ assert request_body["id"] == "123"
+ assert request_body["etag"] == "some-etag"
+ assert request_body["statusVersion"] == 1
+ assert request_body["status"] == "SCORED"
+ assert "submissionAnnotations" in request_body
+
+ async def test_fill_from_dict_with_complete_response(self):
+ """Test filling a SubmissionStatus from a complete API response."""
+ # GIVEN a complete API response
+ api_response = {
+ "id": "123456",
+ "etag": "abcd-1234",
+ "modifiedOn": "2023-01-01T00:00:00.000Z",
+ "status": "SCORED",
+ "entityId": "syn789",
+ "versionNumber": 1,
+ "statusVersion": 2,
+ "canCancel": False,
+ "cancelRequested": False,
+ "annotations": {
+ "objectId": "123456",
+ "scopeId": "9617645",
+ "stringAnnos": [
+ {
+ "key": "internal_note",
+ "isPrivate": True,
+ "value": "This is internal",
+ },
+ {
+ "key": "reviewer_notes",
+ "isPrivate": True,
+ "value": "Excellent work",
+ },
+ ],
+ "doubleAnnos": [
+ {"key": "validation_score", "isPrivate": True, "value": 95.0}
+ ],
+ "longAnnos": [],
+ },
+ "submissionAnnotations": {"feedback": ["Great work!"], "score": [92.5]},
+ }
+
+ # WHEN I fill a SubmissionStatus from the response
+ submission_status = SubmissionStatus()
+ result = submission_status.fill_from_dict(api_response)
+
+ # THEN all fields should be populated correctly
+ assert result.id == "123456"
+ assert result.etag == "abcd-1234"
+ assert result.modified_on == "2023-01-01T00:00:00.000Z"
+ assert result.status == "SCORED"
+ assert result.entity_id == "syn789"
+ assert result.version_number == 1
+ assert result.status_version == 2
+ assert result.can_cancel is False
+ assert result.cancel_requested is False
+
+ # The annotations field should contain the raw submission status format
+ assert result.annotations is not None
+ assert "objectId" in result.annotations
+ assert "scopeId" in result.annotations
+ assert "stringAnnos" in result.annotations
+ assert "doubleAnnos" in result.annotations
+ assert len(result.annotations["stringAnnos"]) == 2
+ assert len(result.annotations["doubleAnnos"]) == 1
+
+ # The submission_annotations should be in simple key-value format
+ assert "feedback" in result.submission_annotations
+ assert "score" in result.submission_annotations
+ assert result.submission_annotations["feedback"] == ["Great work!"]
+ assert result.submission_annotations["score"] == [92.5]
+
+ async def test_fill_from_dict_with_minimal_response(self):
+ """Test filling a SubmissionStatus from a minimal API response."""
+ # GIVEN a minimal API response
+ api_response = {"id": "123456", "status": "RECEIVED"}
+
+ # WHEN I fill a SubmissionStatus from the response
+ submission_status = SubmissionStatus()
+ result = submission_status.fill_from_dict(api_response)
+
+ # THEN basic fields should be populated
+ assert result.id == "123456"
+ assert result.status == "RECEIVED"
+ # AND optional fields should have default values
+ assert result.etag is None
+ assert result.can_cancel is False
+ assert result.cancel_requested is False
diff --git a/tests/unit/synapseclient/models/async/unit_test_submission_async.py b/tests/unit/synapseclient/models/async/unit_test_submission_async.py
new file mode 100644
index 000000000..3587c0cdb
--- /dev/null
+++ b/tests/unit/synapseclient/models/async/unit_test_submission_async.py
@@ -0,0 +1,596 @@
+"""Async unit tests for the synapseclient.models.Submission class."""
+import uuid
+from typing import Dict, List, Union
+from unittest.mock import AsyncMock, MagicMock, call, patch
+
+import pytest
+
+from synapseclient import Synapse
+from synapseclient.core.exceptions import SynapseHTTPError
+from synapseclient.models import Submission
+
+SUBMISSION_ID = "9614543"
+USER_ID = "123456"
+SUBMITTER_ALIAS = "test_user"
+ENTITY_ID = "syn789012"
+VERSION_NUMBER = 1
+EVALUATION_ID = "9999999"
+SUBMISSION_NAME = "Test Submission"
+CREATED_ON = "2023-01-01T10:00:00.000Z"
+TEAM_ID = "team123"
+CONTRIBUTORS = ["user1", "user2", "user3"]
+SUBMISSION_STATUS = {"status": "RECEIVED", "score": 85.5}
+ENTITY_BUNDLE_JSON = '{"entity": {"id": "syn789012", "name": "test_entity"}}'
+DOCKER_REPOSITORY_NAME = "test/repository"
+DOCKER_DIGEST = "sha256:abc123def456"
+ETAG = "etag_value"
+
+
+class TestSubmissionAsync:
+ """Async tests for the synapseclient.models.Submission class."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init_syn(self, syn: Synapse) -> None:
+ self.syn = syn
+
+ def get_example_submission_response(self) -> Dict[str, Union[str, int, List, Dict]]:
+ """Get a complete example submission response from the REST API."""
+ return {
+ "id": SUBMISSION_ID,
+ "userId": USER_ID,
+ "submitterAlias": SUBMITTER_ALIAS,
+ "entityId": ENTITY_ID,
+ "versionNumber": VERSION_NUMBER,
+ "evaluationId": EVALUATION_ID,
+ "name": SUBMISSION_NAME,
+ "createdOn": CREATED_ON,
+ "teamId": TEAM_ID,
+ "contributors": CONTRIBUTORS,
+ "submissionStatus": SUBMISSION_STATUS,
+ "entityBundleJSON": ENTITY_BUNDLE_JSON,
+ "dockerRepositoryName": DOCKER_REPOSITORY_NAME,
+ "dockerDigest": DOCKER_DIGEST,
+ }
+
+ def get_minimal_submission_response(self) -> Dict[str, str]:
+ """Get a minimal example submission response from the REST API."""
+ return {
+ "id": SUBMISSION_ID,
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ }
+
+ def get_example_entity_response(self) -> Dict[str, Union[str, int]]:
+ """Get an example entity response for testing entity fetching."""
+ return {
+ "id": ENTITY_ID,
+ "etag": ETAG,
+ "versionNumber": VERSION_NUMBER,
+ "name": "test_entity",
+ "concreteType": "org.sagebionetworks.repo.model.FileEntity",
+ }
+
+ def get_example_docker_entity_response(self) -> Dict[str, Union[str, int]]:
+ """Get an example Docker repository entity response for testing."""
+ return {
+ "id": ENTITY_ID,
+ "etag": ETAG,
+ "name": "test_docker_repo",
+ "concreteType": "org.sagebionetworks.repo.model.docker.DockerRepository",
+ "repositoryName": "test/repository",
+ }
+
+ def get_example_docker_tag_response(self) -> Dict[str, Union[str, int, List]]:
+ """Get an example Docker tag response for testing."""
+ return {
+ "totalNumberOfResults": 2,
+ "results": [
+ {
+ "tag": "v1.0",
+ "digest": "sha256:older123def456",
+ "createdOn": "2024-01-01T10:00:00.000Z",
+ },
+ {
+ "tag": "v2.0",
+ "digest": "sha256:latest456abc789",
+ "createdOn": "2024-06-01T15:30:00.000Z",
+ },
+ ],
+ }
+
+ def get_complex_docker_tag_response(self) -> Dict[str, Union[str, int, List]]:
+ """Get a more complex Docker tag response with multiple versions to test sorting."""
+ return {
+ "totalNumberOfResults": 4,
+ "results": [
+ {
+ "tag": "v1.0",
+ "digest": "sha256:version1",
+ "createdOn": "2024-01-01T10:00:00.000Z",
+ },
+ {
+ "tag": "v3.0",
+ "digest": "sha256:version3",
+ "createdOn": "2024-08-15T12:00:00.000Z", # This should be selected (latest)
+ },
+ {
+ "tag": "v2.0",
+ "digest": "sha256:version2",
+ "createdOn": "2024-06-01T15:30:00.000Z",
+ },
+ {
+ "tag": "v1.5",
+ "digest": "sha256:version1_5",
+ "createdOn": "2024-03-15T08:45:00.000Z",
+ },
+ ],
+ }
+
+ def test_fill_from_dict_complete_data_async(self) -> None:
+ # GIVEN a complete submission response from the REST API
+ # WHEN I call fill_from_dict with the example submission response
+ submission = Submission().fill_from_dict(self.get_example_submission_response())
+
+ # THEN the Submission object should be filled with all the data
+ assert submission.id == SUBMISSION_ID
+ assert submission.user_id == USER_ID
+ assert submission.submitter_alias == SUBMITTER_ALIAS
+ assert submission.entity_id == ENTITY_ID
+ assert submission.version_number == VERSION_NUMBER
+ assert submission.evaluation_id == EVALUATION_ID
+ assert submission.name == SUBMISSION_NAME
+ assert submission.created_on == CREATED_ON
+ assert submission.team_id == TEAM_ID
+ assert submission.contributors == CONTRIBUTORS
+ assert submission.submission_status == SUBMISSION_STATUS
+ assert submission.entity_bundle_json == ENTITY_BUNDLE_JSON
+ assert submission.docker_repository_name == DOCKER_REPOSITORY_NAME
+ assert submission.docker_digest == DOCKER_DIGEST
+
+ def test_to_synapse_request_complete_data_async(self) -> None:
+ # GIVEN a submission with all optional fields set
+ submission = Submission(
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ name=SUBMISSION_NAME,
+ team_id=TEAM_ID,
+ contributors=CONTRIBUTORS,
+ docker_repository_name=DOCKER_REPOSITORY_NAME,
+ docker_digest=DOCKER_DIGEST,
+ version_number=VERSION_NUMBER,
+ )
+
+ # WHEN I call to_synapse_request
+ request_body = submission.to_synapse_request()
+
+ # THEN the request body should contain all fields in the correct format
+ assert request_body["entityId"] == ENTITY_ID
+ assert request_body["evaluationId"] == EVALUATION_ID
+ assert request_body["versionNumber"] == VERSION_NUMBER
+ assert request_body["name"] == SUBMISSION_NAME
+ assert request_body["teamId"] == TEAM_ID
+ assert request_body["contributors"] == CONTRIBUTORS
+ assert request_body["dockerRepositoryName"] == DOCKER_REPOSITORY_NAME
+ assert request_body["dockerDigest"] == DOCKER_DIGEST
+
+ @pytest.mark.asyncio
+ async def test_fetch_latest_entity_success_async(self) -> None:
+ # GIVEN a submission with an entity_id
+ submission = Submission(entity_id=ENTITY_ID, evaluation_id=EVALUATION_ID)
+
+ # WHEN I call _fetch_latest_entity with a mocked successful response
+ with patch.object(
+ self.syn,
+ "rest_get_async",
+ new_callable=AsyncMock,
+ return_value=self.get_example_entity_response(),
+ ) as mock_rest_get:
+ entity_info = await submission._fetch_latest_entity(synapse_client=self.syn)
+
+ # THEN it should return the entity information
+ assert entity_info["id"] == ENTITY_ID
+ assert entity_info["etag"] == ETAG
+ assert entity_info["versionNumber"] == VERSION_NUMBER
+ mock_rest_get.assert_called_once_with(f"/entity/{ENTITY_ID}")
+
+ @pytest.mark.asyncio
+ async def test_fetch_latest_entity_docker_repository_async(self) -> None:
+ # GIVEN a submission with a Docker repository entity_id
+ submission = Submission(entity_id=ENTITY_ID, evaluation_id=EVALUATION_ID)
+
+ # WHEN I call _fetch_latest_entity with mocked Docker repository responses
+ with patch.object(
+ self.syn,
+ "rest_get_async",
+ new_callable=AsyncMock,
+ ) as mock_rest_get:
+ # Configure the mock to return different responses for different URLs
+ def side_effect(url):
+ if url == f"/entity/{ENTITY_ID}":
+ return self.get_example_docker_entity_response()
+ elif url == f"/entity/{ENTITY_ID}/dockerTag":
+ return self.get_example_docker_tag_response()
+
+ mock_rest_get.side_effect = side_effect
+
+ entity_info = await submission._fetch_latest_entity(synapse_client=self.syn)
+
+ # THEN it should return the entity information with latest docker tag info
+ assert entity_info["id"] == ENTITY_ID
+ assert entity_info["etag"] == ETAG
+ assert entity_info["repositoryName"] == "test/repository"
+ # Should have the latest tag information (v2.0 based on createdOn date)
+ assert entity_info["tag"] == "v2.0"
+ assert entity_info["digest"] == "sha256:latest456abc789"
+ assert entity_info["createdOn"] == "2024-06-01T15:30:00.000Z"
+
+ # Verify both API calls were made
+ expected_calls = [
+ call(f"/entity/{ENTITY_ID}"),
+ call(f"/entity/{ENTITY_ID}/dockerTag"),
+ ]
+ mock_rest_get.assert_has_calls(expected_calls)
+
+ @pytest.mark.asyncio
+ async def test_fetch_latest_entity_docker_empty_results_async(self) -> None:
+ # GIVEN a submission with a Docker repository entity_id
+ submission = Submission(entity_id=ENTITY_ID, evaluation_id=EVALUATION_ID)
+
+ # WHEN I call _fetch_latest_entity with empty docker tag results
+ with patch.object(
+ self.syn,
+ "rest_get_async",
+ new_callable=AsyncMock,
+ ) as mock_rest_get:
+ # Configure the mock to return empty docker tag results
+ def side_effect(url):
+ if url == f"/entity/{ENTITY_ID}":
+ return self.get_example_docker_entity_response()
+ elif url == f"/entity/{ENTITY_ID}/dockerTag":
+ return {"totalNumberOfResults": 0, "results": []}
+
+ mock_rest_get.side_effect = side_effect
+
+ entity_info = await submission._fetch_latest_entity(synapse_client=self.syn)
+
+ # THEN it should return the entity information without docker tag info
+ assert entity_info["id"] == ENTITY_ID
+ assert entity_info["etag"] == ETAG
+ assert entity_info["repositoryName"] == "test/repository"
+ # Should not have docker tag fields since results were empty
+ assert "tag" not in entity_info
+ assert "digest" not in entity_info
+ assert "createdOn" not in entity_info
+
+ @pytest.mark.asyncio
+ async def test_fetch_latest_entity_docker_complex_tag_selection_async(self) -> None:
+ # GIVEN a submission with a Docker repository with multiple tags
+ submission = Submission(entity_id=ENTITY_ID, evaluation_id=EVALUATION_ID)
+
+ # WHEN I call _fetch_latest_entity with multiple docker tags with different dates
+ with patch.object(
+ self.syn,
+ "rest_get_async",
+ new_callable=AsyncMock,
+ ) as mock_rest_get:
+ # Configure the mock to return complex docker tag results
+ def side_effect(url):
+ if url == f"/entity/{ENTITY_ID}":
+ return self.get_example_docker_entity_response()
+ elif url == f"/entity/{ENTITY_ID}/dockerTag":
+ return self.get_complex_docker_tag_response()
+
+ mock_rest_get.side_effect = side_effect
+
+ entity_info = await submission._fetch_latest_entity(synapse_client=self.syn)
+
+ # THEN it should select the tag with the latest createdOn timestamp (v3.0)
+ assert entity_info["tag"] == "v3.0"
+ assert entity_info["digest"] == "sha256:version3"
+ assert entity_info["createdOn"] == "2024-08-15T12:00:00.000Z"
+
+ @pytest.mark.asyncio
+ async def test_store_async_success(self) -> None:
+ # GIVEN a submission with valid data
+ submission = Submission(
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ name=SUBMISSION_NAME,
+ )
+
+ # WHEN I call store_async with mocked dependencies
+ with patch.object(
+ submission,
+ "_fetch_latest_entity",
+ new_callable=AsyncMock,
+ return_value=self.get_example_entity_response(),
+ ) as mock_fetch_entity, patch(
+ "synapseclient.api.evaluation_services.create_submission",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_response(),
+ ) as mock_create_submission:
+ stored_submission = await submission.store_async(synapse_client=self.syn)
+
+ # THEN it should fetch entity information, create the submission, and fill the object
+ mock_fetch_entity.assert_called_once_with(synapse_client=self.syn)
+ mock_create_submission.assert_called_once()
+
+ # Verify the submission is filled with response data
+ assert stored_submission.id == SUBMISSION_ID
+ assert stored_submission.entity_id == ENTITY_ID
+ assert stored_submission.evaluation_id == EVALUATION_ID
+
+ @pytest.mark.asyncio
+ async def test_store_async_docker_repository_success(self) -> None:
+ # GIVEN a submission with valid data for Docker repository
+ submission = Submission(
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ name=SUBMISSION_NAME,
+ )
+
+ # WHEN I call store_async with mocked Docker repository entity
+ docker_entity_with_tag = self.get_example_docker_entity_response()
+ docker_entity_with_tag.update(
+ {
+ "tag": "v2.0",
+ "digest": "sha256:latest456abc789",
+ "createdOn": "2024-06-01T15:30:00.000Z",
+ }
+ )
+
+ with patch.object(
+ submission,
+ "_fetch_latest_entity",
+ new_callable=AsyncMock,
+ return_value=docker_entity_with_tag,
+ ) as mock_fetch_entity, patch(
+ "synapseclient.api.evaluation_services.create_submission",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_response(),
+ ) as mock_create_submission:
+ stored_submission = await submission.store_async(synapse_client=self.syn)
+
+ # THEN it should handle Docker repository specific logic
+ mock_fetch_entity.assert_called_once_with(synapse_client=self.syn)
+ mock_create_submission.assert_called_once()
+
+ # Verify Docker repository attributes are set correctly
+ assert stored_submission.version_number == 1 # Docker repos get version 1
+ assert stored_submission.docker_repository_name == "test/repository"
+ assert stored_submission.docker_digest == DOCKER_DIGEST
+
+ @pytest.mark.asyncio
+ async def test_store_async_with_team_data_success(self) -> None:
+ # GIVEN a submission with team information
+ submission = Submission(
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ name=SUBMISSION_NAME,
+ team_id=TEAM_ID,
+ contributors=CONTRIBUTORS,
+ )
+
+ # WHEN I call store_async with mocked dependencies
+ with patch.object(
+ submission,
+ "_fetch_latest_entity",
+ new_callable=AsyncMock,
+ return_value=self.get_example_entity_response(),
+ ) as mock_fetch_entity, patch(
+ "synapseclient.api.evaluation_services.create_submission",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_response(),
+ ) as mock_create_submission:
+ stored_submission = await submission.store_async(synapse_client=self.syn)
+
+ # THEN it should preserve team information in the stored submission
+ mock_fetch_entity.assert_called_once_with(synapse_client=self.syn)
+ mock_create_submission.assert_called_once()
+
+ # Verify team data is preserved
+ assert stored_submission.team_id == TEAM_ID
+ assert stored_submission.contributors == CONTRIBUTORS
+ assert stored_submission.id == SUBMISSION_ID
+ assert stored_submission.entity_id == ENTITY_ID
+ assert stored_submission.evaluation_id == EVALUATION_ID
+
+ @pytest.mark.asyncio
+ async def test_get_async_success(self) -> None:
+ # GIVEN a submission with an ID
+ submission = Submission(id=SUBMISSION_ID)
+
+ # WHEN I call get_async with a mocked successful response
+ with patch(
+ "synapseclient.api.evaluation_services.get_submission",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_response(),
+ ) as mock_get_submission:
+ retrieved_submission = await submission.get_async(synapse_client=self.syn)
+
+ # THEN it should call the API and fill the object
+ mock_get_submission.assert_called_once_with(
+ submission_id=SUBMISSION_ID,
+ synapse_client=self.syn,
+ )
+ assert retrieved_submission.id == SUBMISSION_ID
+ assert retrieved_submission.entity_id == ENTITY_ID
+ assert retrieved_submission.evaluation_id == EVALUATION_ID
+
+ @pytest.mark.asyncio
+ async def test_delete_async_success(self) -> None:
+ # GIVEN a submission with an ID
+ submission = Submission(id=SUBMISSION_ID)
+
+ # WHEN I call delete_async with mocked dependencies
+ with patch(
+ "synapseclient.api.evaluation_services.delete_submission",
+ new_callable=AsyncMock,
+ ) as mock_delete_submission, patch(
+ "synapseclient.Synapse.get_client",
+ return_value=self.syn,
+ ):
+ # Mock the logger
+ self.syn.logger = MagicMock()
+
+ await submission.delete_async(synapse_client=self.syn)
+
+ # THEN it should call the API and log the deletion
+ mock_delete_submission.assert_called_once_with(
+ submission_id=SUBMISSION_ID,
+ synapse_client=self.syn,
+ )
+ self.syn.logger.info.assert_called_once_with(
+ f"Submission {SUBMISSION_ID} has successfully been deleted."
+ )
+
+ @pytest.mark.asyncio
+ async def test_cancel_async_success(self) -> None:
+ # GIVEN a submission with an ID
+ submission = Submission(id=SUBMISSION_ID)
+
+ # WHEN I call cancel_async with mocked dependencies
+ with patch(
+ "synapseclient.api.evaluation_services.cancel_submission",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_response(),
+ ) as mock_cancel_submission, patch(
+ "synapseclient.Synapse.get_client",
+ return_value=self.syn,
+ ):
+ # Mock the logger
+ self.syn.logger = MagicMock()
+
+ cancelled_submission = await submission.cancel_async(
+ synapse_client=self.syn
+ )
+
+        # THEN it should call the API, log the cancellation, and update the object
+        mock_cancel_submission.assert_called_once_with(
+            submission_id=SUBMISSION_ID,
+            synapse_client=self.syn,
+        )
+        self.syn.logger.info.assert_called_once_with(
+            f"A request to cancel Submission {SUBMISSION_ID} has been submitted."
+        )
+        # Verify the object was refreshed from the cancellation response
+        assert cancelled_submission.id == SUBMISSION_ID
+
+ @pytest.mark.asyncio
+ async def test_get_evaluation_submissions_async(self) -> None:
+ # GIVEN evaluation parameters
+ evaluation_id = EVALUATION_ID
+ status = "SCORED"
+
+ # WHEN I call get_evaluation_submissions_async
+ with patch(
+ "synapseclient.api.evaluation_services.get_evaluation_submissions"
+ ) as mock_get_submissions:
+ # Create an async generator function that yields submission data
+ async def mock_async_gen(*args, **kwargs):
+ submission_data = self.get_example_submission_response()
+ yield submission_data
+
+ # Make the mock return our async generator when called
+ mock_get_submissions.side_effect = mock_async_gen
+
+ submissions = []
+ async for submission in Submission.get_evaluation_submissions_async(
+ evaluation_id=evaluation_id,
+ status=status,
+ synapse_client=self.syn,
+ ):
+ submissions.append(submission)
+
+ # THEN it should call the API with correct parameters and yield Submission objects
+ mock_get_submissions.assert_called_once_with(
+ evaluation_id=evaluation_id,
+ status=status,
+ synapse_client=self.syn,
+ )
+ assert len(submissions) == 1
+ assert isinstance(submissions[0], Submission)
+ assert submissions[0].id == SUBMISSION_ID
+
+ @pytest.mark.asyncio
+ async def test_get_user_submissions_async(self) -> None:
+ # GIVEN user submission parameters
+ evaluation_id = EVALUATION_ID
+ user_id = USER_ID
+
+ # WHEN I call get_user_submissions_async
+ with patch(
+ "synapseclient.api.evaluation_services.get_user_submissions"
+ ) as mock_get_user_submissions:
+ # Create an async generator function that yields submission data
+ async def mock_async_gen(*args, **kwargs):
+ submission_data = self.get_example_submission_response()
+ yield submission_data
+
+ # Make the mock return our async generator when called
+ mock_get_user_submissions.side_effect = mock_async_gen
+
+ submissions = []
+ async for submission in Submission.get_user_submissions_async(
+ evaluation_id=evaluation_id,
+ user_id=user_id,
+ synapse_client=self.syn,
+ ):
+ submissions.append(submission)
+
+ # THEN it should call the API with correct parameters and yield Submission objects
+ mock_get_user_submissions.assert_called_once_with(
+ evaluation_id=evaluation_id,
+ user_id=user_id,
+ synapse_client=self.syn,
+ )
+ assert len(submissions) == 1
+ assert isinstance(submissions[0], Submission)
+ assert submissions[0].id == SUBMISSION_ID
+
+ @pytest.mark.asyncio
+ async def test_get_submission_count_async(self) -> None:
+ # GIVEN submission count parameters
+ evaluation_id = EVALUATION_ID
+ status = "VALID"
+
+ expected_response = 42
+
+ # WHEN I call get_submission_count_async
+ with patch(
+ "synapseclient.api.evaluation_services.get_submission_count",
+ new_callable=AsyncMock,
+ return_value=expected_response,
+ ) as mock_get_count:
+ response = await Submission.get_submission_count_async(
+ evaluation_id=evaluation_id,
+ status=status,
+ synapse_client=self.syn,
+ )
+
+ # THEN it should call the API with correct parameters
+ mock_get_count.assert_called_once_with(
+ evaluation_id=evaluation_id,
+ status=status,
+ synapse_client=self.syn,
+ )
+ assert response == expected_response
+
+ def test_to_synapse_request_minimal_data_async(self) -> None:
+ # GIVEN a submission with only required fields
+ submission = Submission(
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ version_number=VERSION_NUMBER,
+ )
+
+ # WHEN I call to_synapse_request
+ request_body = submission.to_synapse_request()
+
+ # THEN the request body should contain only required fields
+ assert request_body["entityId"] == ENTITY_ID
+ assert request_body["evaluationId"] == EVALUATION_ID
+ assert request_body["versionNumber"] == VERSION_NUMBER
+ assert "name" not in request_body
+ assert "teamId" not in request_body
+ assert "contributors" not in request_body
+ assert "dockerRepositoryName" not in request_body
+ assert "dockerDigest" not in request_body
diff --git a/tests/unit/synapseclient/models/async/unit_test_submission_bundle_async.py b/tests/unit/synapseclient/models/async/unit_test_submission_bundle_async.py
new file mode 100644
index 000000000..5253c2f27
--- /dev/null
+++ b/tests/unit/synapseclient/models/async/unit_test_submission_bundle_async.py
@@ -0,0 +1,478 @@
+"""Unit tests for the async methods of the synapseclient.models.SubmissionBundle class."""
+
+from typing import Dict, Union
+from unittest.mock import AsyncMock, patch
+
+import pytest
+
+from synapseclient import Synapse
+from synapseclient.models import Submission, SubmissionBundle, SubmissionStatus
+
+SUBMISSION_ID = "9999999"
+SUBMISSION_STATUS_ID = "9999999"
+ENTITY_ID = "syn123456"
+EVALUATION_ID = "9614543"
+USER_ID = "123456"
+ETAG = "etag_value"
+MODIFIED_ON = "2023-01-01T00:00:00.000Z"
+CREATED_ON = "2023-01-01T00:00:00.000Z"
+STATUS = "RECEIVED"
+
+
+class TestSubmissionBundle:
+    """Tests for the async methods of the synapseclient.models.SubmissionBundle class."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init_syn(self, syn: Synapse) -> None:
+ self.syn = syn
+
+ def get_example_submission_dict(self) -> Dict[str, Union[str, int, Dict]]:
+ """Return example submission data from REST API."""
+ return {
+ "id": SUBMISSION_ID,
+ "userId": USER_ID,
+ "submitterAlias": "test_user",
+ "entityId": ENTITY_ID,
+ "versionNumber": 1,
+ "name": "Test Submission",
+ "createdOn": CREATED_ON,
+ "evaluationId": EVALUATION_ID,
+ "entityBundle": {
+ "entity": {
+ "id": ENTITY_ID,
+ "name": "test_entity",
+ "concreteType": "org.sagebionetworks.repo.model.FileEntity",
+ },
+ "entityType": "org.sagebionetworks.repo.model.FileEntity",
+ },
+ }
+
+ def get_example_submission_status_dict(
+ self,
+ ) -> Dict[str, Union[str, int, bool, Dict]]:
+        """Return example submission status data from the REST API."""
+ return {
+ "id": SUBMISSION_STATUS_ID,
+ "etag": ETAG,
+ "modifiedOn": MODIFIED_ON,
+ "status": STATUS,
+ "entityId": ENTITY_ID,
+ "versionNumber": 1,
+ "statusVersion": 1,
+ "canCancel": False,
+ "cancelRequested": False,
+ "submissionAnnotations": {"score": [85.5], "feedback": ["Good work!"]},
+ }
+
+ def get_example_submission_bundle_dict(self) -> Dict[str, Dict]:
+        """Return example submission bundle data from the REST API."""
+ return {
+ "submission": self.get_example_submission_dict(),
+ "submissionStatus": self.get_example_submission_status_dict(),
+ }
+
+ def get_example_submission_bundle_minimal_dict(self) -> Dict[str, Dict]:
+        """Return example minimal submission bundle data from the REST API."""
+ return {
+ "submission": {
+ "id": SUBMISSION_ID,
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ },
+ "submissionStatus": None,
+ }
+
+ def test_init_submission_bundle(self) -> None:
+ """Test creating a SubmissionBundle with basic attributes."""
+ # GIVEN submission and submission status objects
+ submission = Submission(
+ id=SUBMISSION_ID,
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ )
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ entity_id=ENTITY_ID,
+ )
+
+ # WHEN I create a SubmissionBundle object
+ bundle = SubmissionBundle(
+ submission=submission,
+ submission_status=submission_status,
+ )
+
+ # THEN the SubmissionBundle should have the expected attributes
+ assert bundle.submission == submission
+ assert bundle.submission_status == submission_status
+ assert bundle.submission.id == SUBMISSION_ID
+ assert bundle.submission_status.id == SUBMISSION_STATUS_ID
+
+ def test_init_submission_bundle_empty(self) -> None:
+ """Test creating an empty SubmissionBundle."""
+ # WHEN I create an empty SubmissionBundle object
+ bundle = SubmissionBundle()
+
+ # THEN the SubmissionBundle should have None attributes
+ assert bundle.submission is None
+ assert bundle.submission_status is None
+
+ def test_fill_from_dict_complete(self) -> None:
+ """Test filling a SubmissionBundle from complete REST API response."""
+ # GIVEN a complete submission bundle response
+ bundle_data = self.get_example_submission_bundle_dict()
+
+ # WHEN I fill a SubmissionBundle from the response
+ bundle = SubmissionBundle().fill_from_dict(bundle_data)
+
+ # THEN all fields should be populated correctly
+ assert bundle.submission is not None
+ assert bundle.submission_status is not None
+
+ # Check submission fields
+ assert bundle.submission.id == SUBMISSION_ID
+ assert bundle.submission.entity_id == ENTITY_ID
+ assert bundle.submission.evaluation_id == EVALUATION_ID
+ assert bundle.submission.user_id == USER_ID
+
+ # Check submission status fields
+ assert bundle.submission_status.id == SUBMISSION_STATUS_ID
+ assert bundle.submission_status.status == STATUS
+ assert bundle.submission_status.entity_id == ENTITY_ID
+ assert (
+ bundle.submission_status.evaluation_id == EVALUATION_ID
+ ) # set from submission
+
+ # Check submission annotations
+ assert "score" in bundle.submission_status.submission_annotations
+ assert bundle.submission_status.submission_annotations["score"] == [85.5]
+
+ def test_fill_from_dict_minimal(self) -> None:
+ """Test filling a SubmissionBundle from minimal REST API response."""
+ # GIVEN a minimal submission bundle response
+ bundle_data = self.get_example_submission_bundle_minimal_dict()
+
+ # WHEN I fill a SubmissionBundle from the response
+ bundle = SubmissionBundle().fill_from_dict(bundle_data)
+
+ # THEN submission should be populated but submission_status should be None
+ assert bundle.submission is not None
+ assert bundle.submission_status is None
+
+ # Check submission fields
+ assert bundle.submission.id == SUBMISSION_ID
+ assert bundle.submission.entity_id == ENTITY_ID
+ assert bundle.submission.evaluation_id == EVALUATION_ID
+
+ def test_fill_from_dict_no_submission(self) -> None:
+ """Test filling a SubmissionBundle with no submission data."""
+ # GIVEN a bundle response with no submission
+ bundle_data = {
+ "submission": None,
+ "submissionStatus": self.get_example_submission_status_dict(),
+ }
+
+ # WHEN I fill a SubmissionBundle from the response
+ bundle = SubmissionBundle().fill_from_dict(bundle_data)
+
+ # THEN submission should be None but submission_status should be populated
+ assert bundle.submission is None
+ assert bundle.submission_status is not None
+ assert bundle.submission_status.id == SUBMISSION_STATUS_ID
+ assert bundle.submission_status.status == STATUS
+
+ def test_fill_from_dict_evaluation_id_setting(self) -> None:
+ """Test that evaluation_id is properly set from submission to submission_status."""
+ # GIVEN a bundle response where submission_status doesn't have evaluation_id
+ submission_dict = self.get_example_submission_dict()
+ status_dict = self.get_example_submission_status_dict()
+ # Remove evaluation_id from status_dict to simulate API response
+ status_dict.pop("evaluationId", None)
+
+ bundle_data = {
+ "submission": submission_dict,
+ "submissionStatus": status_dict,
+ }
+
+ # WHEN I fill a SubmissionBundle from the response
+ bundle = SubmissionBundle().fill_from_dict(bundle_data)
+
+ # THEN submission_status should get evaluation_id from submission
+ assert bundle.submission is not None
+ assert bundle.submission_status is not None
+ assert bundle.submission.evaluation_id == EVALUATION_ID
+ assert bundle.submission_status.evaluation_id == EVALUATION_ID
+
+ async def test_get_evaluation_submission_bundles_async(self) -> None:
+ """Test getting submission bundles for an evaluation."""
+ # GIVEN mock response data
+ mock_response = {
+ "results": [
+ {
+ "submission": {
+ "id": "123",
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ "userId": USER_ID,
+ },
+ "submissionStatus": {
+ "id": "123",
+ "status": "RECEIVED",
+ "entityId": ENTITY_ID,
+ },
+ },
+ {
+ "submission": {
+ "id": "456",
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ "userId": USER_ID,
+ },
+ "submissionStatus": {
+ "id": "456",
+ "status": "SCORED",
+ "entityId": ENTITY_ID,
+ },
+ },
+ ]
+ }
+
+ # WHEN I call get_evaluation_submission_bundles_async
+ with patch(
+ "synapseclient.api.evaluation_services.get_evaluation_submission_bundles"
+ ) as mock_get_bundles:
+ # Create an async generator function that yields bundle data
+ async def mock_async_gen(*args, **kwargs):
+ for bundle_data in mock_response["results"]:
+ yield bundle_data
+
+ # Make the mock return our async generator when called
+ mock_get_bundles.side_effect = mock_async_gen
+
+ result = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id=EVALUATION_ID,
+ status="RECEIVED",
+ synapse_client=self.syn,
+ ):
+ result.append(bundle)
+
+ # THEN the service should be called with correct parameters
+ mock_get_bundles.assert_called_once_with(
+ evaluation_id=EVALUATION_ID,
+ status="RECEIVED",
+ synapse_client=self.syn,
+ )
+
+ # AND the result should contain SubmissionBundle objects
+ assert len(result) == 2
+ assert all(isinstance(bundle, SubmissionBundle) for bundle in result)
+
+ # Check first bundle
+ assert result[0].submission is not None
+ assert result[0].submission.id == "123"
+ assert result[0].submission_status is not None
+ assert result[0].submission_status.id == "123"
+ assert result[0].submission_status.status == "RECEIVED"
+ assert (
+ result[0].submission_status.evaluation_id == EVALUATION_ID
+ ) # set from submission
+
+ # Check second bundle
+ assert result[1].submission is not None
+ assert result[1].submission.id == "456"
+ assert result[1].submission_status is not None
+ assert result[1].submission_status.id == "456"
+ assert result[1].submission_status.status == "SCORED"
+
+ async def test_get_evaluation_submission_bundles_async_empty_response(self) -> None:
+ """Test getting submission bundles with empty response."""
+ # GIVEN empty mock response
+ mock_response = {"results": []}
+
+ # WHEN I call get_evaluation_submission_bundles_async
+ with patch(
+ "synapseclient.api.evaluation_services.get_evaluation_submission_bundles"
+ ) as mock_get_bundles:
+ # Create an async generator function that yields no data
+ async def mock_async_gen(*args, **kwargs):
+ return
+ yield # This will never execute
+
+ # Make the mock return our async generator when called
+ mock_get_bundles.side_effect = mock_async_gen
+
+ result = []
+ async for bundle in SubmissionBundle.get_evaluation_submission_bundles_async(
+ evaluation_id=EVALUATION_ID,
+ synapse_client=self.syn,
+ ):
+ result.append(bundle)
+
+ # THEN the service should be called
+ mock_get_bundles.assert_called_once_with(
+ evaluation_id=EVALUATION_ID,
+ status=None,
+ synapse_client=self.syn,
+ )
+
+ # AND the result should be an empty list
+ assert len(result) == 0
+
+ async def test_get_user_submission_bundles_async(self) -> None:
+ """Test getting user submission bundles."""
+ # GIVEN mock response data
+ mock_response = {
+ "results": [
+ {
+ "submission": {
+ "id": "789",
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ "userId": USER_ID,
+ "name": "User Submission 1",
+ },
+ "submissionStatus": {
+ "id": "789",
+ "status": "VALIDATED",
+ "entityId": ENTITY_ID,
+ },
+ },
+ ]
+ }
+
+ # WHEN I call get_user_submission_bundles_async
+ with patch(
+ "synapseclient.api.evaluation_services.get_user_submission_bundles"
+ ) as mock_get_user_bundles:
+ # Create an async generator function that yields bundle data
+ async def mock_async_gen(*args, **kwargs):
+ for bundle_data in mock_response["results"]:
+ yield bundle_data
+
+ # Make the mock return our async generator when called
+ mock_get_user_bundles.side_effect = mock_async_gen
+
+ result = []
+ async for bundle in SubmissionBundle.get_user_submission_bundles_async(
+ evaluation_id=EVALUATION_ID,
+ synapse_client=self.syn,
+ ):
+ result.append(bundle)
+
+ # THEN the service should be called with correct parameters
+ mock_get_user_bundles.assert_called_once_with(
+ evaluation_id=EVALUATION_ID,
+ synapse_client=self.syn,
+ )
+
+ # AND the result should contain SubmissionBundle objects
+ assert len(result) == 1
+ assert isinstance(result[0], SubmissionBundle)
+
+ # Check bundle contents
+ assert result[0].submission is not None
+ assert result[0].submission.id == "789"
+ assert result[0].submission.name == "User Submission 1"
+ assert result[0].submission_status is not None
+ assert result[0].submission_status.id == "789"
+ assert result[0].submission_status.status == "VALIDATED"
+ assert result[0].submission_status.evaluation_id == EVALUATION_ID
+
+ async def test_get_user_submission_bundles_async_default_params(self) -> None:
+ """Test getting user submission bundles with default parameters."""
+ # GIVEN mock response
+ mock_response = {"results": []}
+
+ # WHEN I call get_user_submission_bundles_async with defaults
+ with patch(
+ "synapseclient.api.evaluation_services.get_user_submission_bundles"
+ ) as mock_get_user_bundles:
+ # Create an async generator function that yields no data
+ async def mock_async_gen(*args, **kwargs):
+ return
+                yield  # This will never execute
+
+ # Make the mock return our async generator when called
+ mock_get_user_bundles.side_effect = mock_async_gen
+
+ result = []
+ async for bundle in SubmissionBundle.get_user_submission_bundles_async(
+ evaluation_id=EVALUATION_ID,
+ synapse_client=self.syn,
+ ):
+ result.append(bundle)
+
+ # THEN the service should be called with default parameters
+ mock_get_user_bundles.assert_called_once_with(
+ evaluation_id=EVALUATION_ID,
+ synapse_client=self.syn,
+ )
+
+ # AND the result should be empty
+ assert len(result) == 0
+
+ def test_dataclass_equality(self) -> None:
+ """Test dataclass equality comparison."""
+ # GIVEN two SubmissionBundle objects with the same data
+ submission = Submission(id=SUBMISSION_ID, entity_id=ENTITY_ID)
+ status = SubmissionStatus(id=SUBMISSION_STATUS_ID, status=STATUS)
+
+ bundle1 = SubmissionBundle(submission=submission, submission_status=status)
+ bundle2 = SubmissionBundle(submission=submission, submission_status=status)
+
+ # THEN they should be equal
+ assert bundle1 == bundle2
+
+ # WHEN I modify one of them
+ bundle2.submission_status = SubmissionStatus(id="different", status="DIFFERENT")
+
+ # THEN they should not be equal
+ assert bundle1 != bundle2
+
+ def test_dataclass_equality_with_none(self) -> None:
+ """Test dataclass equality with None values."""
+ # GIVEN two SubmissionBundle objects with None values
+ bundle1 = SubmissionBundle(submission=None, submission_status=None)
+ bundle2 = SubmissionBundle(submission=None, submission_status=None)
+
+ # THEN they should be equal
+ assert bundle1 == bundle2
+
+ # WHEN I add a submission to one
+ bundle2.submission = Submission(id=SUBMISSION_ID)
+
+ # THEN they should not be equal
+ assert bundle1 != bundle2
+
+ def test_repr_and_str(self) -> None:
+ """Test string representation of SubmissionBundle."""
+ # GIVEN a SubmissionBundle with some data
+ submission = Submission(id=SUBMISSION_ID, entity_id=ENTITY_ID)
+ status = SubmissionStatus(id=SUBMISSION_STATUS_ID, status=STATUS)
+ bundle = SubmissionBundle(submission=submission, submission_status=status)
+
+ # WHEN I get the string representation
+ repr_str = repr(bundle)
+ str_str = str(bundle)
+
+ # THEN it should contain relevant information
+ assert "SubmissionBundle" in repr_str
+ assert SUBMISSION_ID in repr_str
+ assert SUBMISSION_STATUS_ID in repr_str
+
+ # AND str should be the same as repr for dataclasses
+ assert str_str == repr_str
+
+ def test_repr_with_none_values(self) -> None:
+ """Test string representation with None values."""
+ # GIVEN a SubmissionBundle with None values
+ bundle = SubmissionBundle(submission=None, submission_status=None)
+
+ # WHEN I get the string representation
+ repr_str = repr(bundle)
+
+ # THEN it should show None values
+ assert "SubmissionBundle" in repr_str
+ assert "submission=None" in repr_str
+ assert "submission_status=None" in repr_str
diff --git a/tests/unit/synapseclient/models/async/unit_test_submission_status_async.py b/tests/unit/synapseclient/models/async/unit_test_submission_status_async.py
new file mode 100644
index 000000000..c48ad2e1e
--- /dev/null
+++ b/tests/unit/synapseclient/models/async/unit_test_submission_status_async.py
@@ -0,0 +1,599 @@
+"""Unit tests for the synapseclient.models.SubmissionStatus class."""
+
+from typing import Dict, Union
+from unittest.mock import AsyncMock, patch
+
+import pytest
+
+from synapseclient import Synapse
+from synapseclient.models import SubmissionStatus
+
+SUBMISSION_STATUS_ID = "9999999"
+ENTITY_ID = "syn123456"
+EVALUATION_ID = "9614543"
+ETAG = "etag_value"
+MODIFIED_ON = "2023-01-01T00:00:00.000Z"
+STATUS = "RECEIVED"
+SCORE = 85.5
+REPORT = "Test report"
+VERSION_NUMBER = 1
+STATUS_VERSION = 1
+CAN_CANCEL = False
+CANCEL_REQUESTED = False
+PRIVATE_STATUS_ANNOTATIONS = True
+
+
+class TestSubmissionStatus:
+ """Tests for the synapseclient.models.SubmissionStatus class."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init_syn(self, syn: Synapse) -> None:
+ self.syn = syn
+
+ def get_example_submission_status_dict(
+ self,
+ ) -> Dict[str, Union[str, int, bool, Dict]]:
+ """Return example submission status data from REST API."""
+ return {
+ "id": SUBMISSION_STATUS_ID,
+ "etag": ETAG,
+ "modifiedOn": MODIFIED_ON,
+ "status": STATUS,
+ "score": SCORE,
+ "report": REPORT,
+ "entityId": ENTITY_ID,
+ "versionNumber": VERSION_NUMBER,
+ "statusVersion": STATUS_VERSION,
+ "canCancel": CAN_CANCEL,
+ "cancelRequested": CANCEL_REQUESTED,
+ "annotations": {
+ "objectId": SUBMISSION_STATUS_ID,
+ "scopeId": EVALUATION_ID,
+ "stringAnnos": [
+ {
+ "key": "internal_note",
+ "isPrivate": True,
+ "value": "This is internal",
+ }
+ ],
+ "doubleAnnos": [
+ {"key": "validation_score", "isPrivate": True, "value": 95.0}
+ ],
+ "longAnnos": [],
+ },
+ "submissionAnnotations": {"feedback": ["Great work!"], "score": [92.5]},
+ }
+
+    def get_example_submission_dict(self) -> Dict[str, Union[str, int]]:
+ """Return example submission data from REST API."""
+ return {
+ "id": SUBMISSION_STATUS_ID,
+ "evaluationId": EVALUATION_ID,
+ "entityId": ENTITY_ID,
+ "versionNumber": VERSION_NUMBER,
+ "userId": "123456",
+ "submitterAlias": "test_user",
+ "createdOn": "2023-01-01T00:00:00.000Z",
+ }
+
+ def test_init_submission_status(self) -> None:
+ """Test creating a SubmissionStatus with basic attributes."""
+ # WHEN I create a SubmissionStatus object
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ )
+
+ # THEN the SubmissionStatus should have the expected attributes
+ assert submission_status.id == SUBMISSION_STATUS_ID
+ assert submission_status.status == STATUS
+ assert submission_status.entity_id == ENTITY_ID
+ assert submission_status.evaluation_id == EVALUATION_ID
+ assert submission_status.can_cancel is False # default value
+ assert submission_status.cancel_requested is False # default value
+ assert submission_status.private_status_annotations is True # default value
+
+ def test_fill_from_dict(self) -> None:
+ """Test filling a SubmissionStatus from a REST API response."""
+ # GIVEN an example submission status response
+ submission_status_data = self.get_example_submission_status_dict()
+
+ # WHEN I fill a SubmissionStatus from the response
+ submission_status = SubmissionStatus().fill_from_dict(submission_status_data)
+
+ # THEN all fields should be populated correctly
+ assert submission_status.id == SUBMISSION_STATUS_ID
+ assert submission_status.etag == ETAG
+ assert submission_status.modified_on == MODIFIED_ON
+ assert submission_status.status == STATUS
+ assert submission_status.score == SCORE
+ assert submission_status.report == REPORT
+ assert submission_status.entity_id == ENTITY_ID
+ assert submission_status.version_number == VERSION_NUMBER
+ assert submission_status.status_version == STATUS_VERSION
+ assert submission_status.can_cancel is CAN_CANCEL
+ assert submission_status.cancel_requested is CANCEL_REQUESTED
+
+ # Check annotations
+ assert submission_status.annotations is not None
+ assert "objectId" in submission_status.annotations
+ assert "scopeId" in submission_status.annotations
+ assert "stringAnnos" in submission_status.annotations
+ assert "doubleAnnos" in submission_status.annotations
+
+ # Check submission annotations
+ assert "feedback" in submission_status.submission_annotations
+ assert "score" in submission_status.submission_annotations
+ assert submission_status.submission_annotations["feedback"] == ["Great work!"]
+ assert submission_status.submission_annotations["score"] == [92.5]
+
+ def test_fill_from_dict_minimal(self) -> None:
+ """Test filling a SubmissionStatus from minimal REST API response."""
+ # GIVEN a minimal submission status response
+ minimal_data = {"id": SUBMISSION_STATUS_ID, "status": STATUS}
+
+ # WHEN I fill a SubmissionStatus from the response
+ submission_status = SubmissionStatus().fill_from_dict(minimal_data)
+
+ # THEN basic fields should be populated
+ assert submission_status.id == SUBMISSION_STATUS_ID
+ assert submission_status.status == STATUS
+ # AND optional fields should have default values
+ assert submission_status.etag is None
+ assert submission_status.can_cancel is False
+ assert submission_status.cancel_requested is False
+
+ async def test_get_async(self) -> None:
+ """Test retrieving a SubmissionStatus by ID."""
+ # GIVEN a SubmissionStatus with an ID
+ submission_status = SubmissionStatus(id=SUBMISSION_STATUS_ID)
+
+ # WHEN I call get_async
+ with patch(
+ "synapseclient.api.evaluation_services.get_submission_status",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_status_dict(),
+ ) as mock_get_status, patch(
+ "synapseclient.api.evaluation_services.get_submission",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_dict(),
+ ) as mock_get_submission:
+ result = await submission_status.get_async(synapse_client=self.syn)
+
+ # THEN the submission status should be retrieved
+ mock_get_status.assert_called_once_with(
+ submission_id=SUBMISSION_STATUS_ID, synapse_client=self.syn
+ )
+ mock_get_submission.assert_called_once_with(
+ submission_id=SUBMISSION_STATUS_ID, synapse_client=self.syn
+ )
+
+ # AND the result should have the expected data
+ assert result.id == SUBMISSION_STATUS_ID
+ assert result.status == STATUS
+ assert result.evaluation_id == EVALUATION_ID
+ assert result._last_persistent_instance is not None
+
+ async def test_get_async_without_id(self) -> None:
+ """Test that getting a SubmissionStatus without ID raises ValueError."""
+ # GIVEN a SubmissionStatus without an ID
+ submission_status = SubmissionStatus()
+
+ # WHEN I call get_async
+ # THEN it should raise a ValueError
+ with pytest.raises(
+ ValueError, match="The submission status must have an ID to get"
+ ):
+ await submission_status.get_async(synapse_client=self.syn)
+
+ async def test_store_async(self) -> None:
+ """Test storing a SubmissionStatus."""
+ # GIVEN a SubmissionStatus with required attributes
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ etag=ETAG,
+ status_version=STATUS_VERSION,
+ status="SCORED",
+ evaluation_id=EVALUATION_ID,
+ )
+ submission_status._set_last_persistent_instance()
+
+ # AND I modify the status
+ submission_status.status = "VALIDATED"
+
+ # WHEN I call store_async
+ with patch(
+ "synapseclient.api.evaluation_services.update_submission_status",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_status_dict(),
+ ) as mock_update:
+ result = await submission_status.store_async(synapse_client=self.syn)
+
+ # THEN the submission status should be updated
+ mock_update.assert_called_once()
+ call_args = mock_update.call_args
+ assert call_args.kwargs["submission_id"] == SUBMISSION_STATUS_ID
+ assert call_args.kwargs["synapse_client"] == self.syn
+
+ # AND the result should have updated data
+ assert result.id == SUBMISSION_STATUS_ID
+ assert result.status == STATUS # from mock response
+ assert result._last_persistent_instance is not None
+
+ async def test_store_async_without_id(self) -> None:
+ """Test that storing a SubmissionStatus without ID raises ValueError."""
+ # GIVEN a SubmissionStatus without an ID
+ submission_status = SubmissionStatus(status="SCORED")
+
+ # WHEN I call store_async
+ # THEN it should raise a ValueError
+ with pytest.raises(
+ ValueError, match="The submission status must have an ID to update"
+ ):
+ await submission_status.store_async(synapse_client=self.syn)
+
+ async def test_store_async_without_changes(self) -> None:
+ """Test storing a SubmissionStatus without changes."""
+ # GIVEN a SubmissionStatus that hasn't been modified
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ etag=ETAG,
+ status_version=STATUS_VERSION,
+ status=STATUS,
+ )
+ submission_status._set_last_persistent_instance()
+
+ # WHEN I call store_async without making changes
+ result = await submission_status.store_async(synapse_client=self.syn)
+
+ # THEN it should return the same instance (no update sent to Synapse)
+ assert result is submission_status
+
+ def test_to_synapse_request_missing_id(self) -> None:
+ """Test to_synapse_request with missing ID."""
+ # GIVEN a SubmissionStatus without an ID
+ submission_status = SubmissionStatus()
+
+ # WHEN I call to_synapse_request
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'id' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ def test_to_synapse_request_missing_etag(self) -> None:
+ """Test to_synapse_request with missing etag."""
+ # GIVEN a SubmissionStatus with ID but no etag
+ submission_status = SubmissionStatus(id=SUBMISSION_STATUS_ID)
+
+ # WHEN I call to_synapse_request
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'etag' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ def test_to_synapse_request_missing_status_version(self) -> None:
+ """Test to_synapse_request with missing status_version."""
+ # GIVEN a SubmissionStatus with ID and etag but no status_version
+ submission_status = SubmissionStatus(id=SUBMISSION_STATUS_ID, etag=ETAG)
+
+ # WHEN I call to_synapse_request
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'status_version' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ def test_to_synapse_request_missing_evaluation_id_with_annotations(self) -> None:
+ """Test to_synapse_request with annotations but missing evaluation_id."""
+ # GIVEN a SubmissionStatus with annotations but no evaluation_id
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ etag=ETAG,
+ status_version=STATUS_VERSION,
+ annotations={"test": "value"},
+ )
+
+ # WHEN I call to_synapse_request
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'evaluation_id' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ def test_to_synapse_request_valid(self) -> None:
+ """Test to_synapse_request with valid attributes."""
+ # GIVEN a SubmissionStatus with all required attributes
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ etag=ETAG,
+ status_version=STATUS_VERSION,
+ status="SCORED",
+ evaluation_id=EVALUATION_ID,
+ submission_annotations={"score": 85.5},
+ annotations={"internal_note": "test"},
+ )
+
+ # WHEN I call to_synapse_request
+ request_body = submission_status.to_synapse_request(synapse_client=self.syn)
+
+ # THEN the request should have the required fields
+ assert request_body["id"] == SUBMISSION_STATUS_ID
+ assert request_body["etag"] == ETAG
+ assert request_body["statusVersion"] == STATUS_VERSION
+ assert request_body["status"] == "SCORED"
+ assert "submissionAnnotations" in request_body
+ assert "annotations" in request_body
+
+ def test_has_changed_property_new_instance(self) -> None:
+ """Test has_changed property for a new instance."""
+ # GIVEN a new SubmissionStatus instance
+ submission_status = SubmissionStatus(id=SUBMISSION_STATUS_ID)
+
+ # THEN has_changed should be True (no persistent instance)
+ assert submission_status.has_changed is True
+
+ def test_has_changed_property_after_get(self) -> None:
+ """Test has_changed property after retrieving from Synapse."""
+ # GIVEN a SubmissionStatus that was retrieved (has persistent instance)
+ submission_status = SubmissionStatus(id=SUBMISSION_STATUS_ID, status=STATUS)
+ submission_status._set_last_persistent_instance()
+
+ # THEN has_changed should be False
+ assert submission_status.has_changed is False
+
+ # WHEN I modify a field
+ submission_status.status = "VALIDATED"
+
+ # THEN has_changed should be True
+ assert submission_status.has_changed is True
+
+ def test_has_changed_property_annotations(self) -> None:
+ """Test has_changed property with annotation changes."""
+ # GIVEN a SubmissionStatus with annotations
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ annotations={"original": "value"},
+ submission_annotations={"score": 85.0},
+ )
+ submission_status._set_last_persistent_instance()
+
+ # THEN has_changed should be False initially
+ assert submission_status.has_changed is False
+
+ # WHEN I modify annotations
+ submission_status.annotations = {"modified": "value"}
+
+ # THEN has_changed should be True
+ assert submission_status.has_changed is True
+
+ # WHEN I reset annotations to original and modify submission_annotations
+ submission_status.annotations = {"original": "value"}
+ submission_status.submission_annotations = {"score": 90.0}
+
+ # THEN has_changed should still be True
+ assert submission_status.has_changed is True
+
+ async def test_get_all_submission_statuses_async(self) -> None:
+ """Test getting all submission statuses for an evaluation."""
+ # GIVEN mock response data
+ mock_response = {
+ "results": [
+ {
+ "id": "123",
+ "status": "RECEIVED",
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ },
+ {
+ "id": "456",
+ "status": "SCORED",
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ },
+ ]
+ }
+
+ # WHEN I call get_all_submission_statuses_async
+ with patch(
+ "synapseclient.api.evaluation_services.get_all_submission_statuses",
+ new_callable=AsyncMock,
+ return_value=mock_response,
+ ) as mock_get_all:
+ result = await SubmissionStatus.get_all_submission_statuses_async(
+ evaluation_id=EVALUATION_ID,
+ status="RECEIVED",
+ limit=50,
+ offset=0,
+ synapse_client=self.syn,
+ )
+
+ # THEN the service should be called with correct parameters
+ mock_get_all.assert_called_once_with(
+ evaluation_id=EVALUATION_ID,
+ status="RECEIVED",
+ limit=50,
+ offset=0,
+ synapse_client=self.syn,
+ )
+
+ # AND the result should contain SubmissionStatus objects
+ assert len(result) == 2
+ assert all(isinstance(status, SubmissionStatus) for status in result)
+ assert result[0].id == "123"
+ assert result[0].status == "RECEIVED"
+ assert result[1].id == "456"
+ assert result[1].status == "SCORED"
+
+ async def test_batch_update_submission_statuses_async(self) -> None:
+ """Test batch updating submission statuses."""
+ # GIVEN a list of SubmissionStatus objects
+ statuses = [
+ SubmissionStatus(
+ id="123",
+ etag="etag1",
+ status_version=1,
+ status="VALIDATED",
+ evaluation_id=EVALUATION_ID,
+ ),
+ SubmissionStatus(
+ id="456",
+ etag="etag2",
+ status_version=1,
+ status="SCORED",
+ evaluation_id=EVALUATION_ID,
+ ),
+ ]
+
+ # AND mock response
+ mock_response = {"batchToken": "token123"}
+
+ # WHEN I call batch_update_submission_statuses_async
+ with patch(
+ "synapseclient.api.evaluation_services.batch_update_submission_statuses",
+ new_callable=AsyncMock,
+ return_value=mock_response,
+ ) as mock_batch_update:
+ result = await SubmissionStatus.batch_update_submission_statuses_async(
+ evaluation_id=EVALUATION_ID,
+ statuses=statuses,
+ is_first_batch=True,
+ is_last_batch=True,
+ synapse_client=self.syn,
+ )
+
+ # THEN the service should be called with correct parameters
+ mock_batch_update.assert_called_once()
+ call_args = mock_batch_update.call_args
+ assert call_args.kwargs["evaluation_id"] == EVALUATION_ID
+ assert call_args.kwargs["synapse_client"] == self.syn
+
+ # Check request body structure
+ request_body = call_args.kwargs["request_body"]
+ assert request_body["isFirstBatch"] is True
+ assert request_body["isLastBatch"] is True
+ assert "statuses" in request_body
+ assert len(request_body["statuses"]) == 2
+
+ # AND the result should be the mock response
+ assert result == mock_response
+
+ async def test_batch_update_with_batch_token(self) -> None:
+ """Test batch update with batch token for subsequent batches."""
+ # GIVEN a list of SubmissionStatus objects and a batch token
+ statuses = [
+ SubmissionStatus(
+ id="123",
+ etag="etag1",
+ status_version=1,
+ status="VALIDATED",
+ evaluation_id=EVALUATION_ID,
+ )
+ ]
+ batch_token = "previous_batch_token"
+
+ # WHEN I call batch_update_submission_statuses_async with a batch token
+ with patch(
+ "synapseclient.api.evaluation_services.batch_update_submission_statuses",
+ new_callable=AsyncMock,
+ return_value={},
+ ) as mock_batch_update:
+ await SubmissionStatus.batch_update_submission_statuses_async(
+ evaluation_id=EVALUATION_ID,
+ statuses=statuses,
+ is_first_batch=False,
+ is_last_batch=True,
+ batch_token=batch_token,
+ synapse_client=self.syn,
+ )
+
+ # THEN the batch token should be included in the request
+ call_args = mock_batch_update.call_args
+ request_body = call_args.kwargs["request_body"]
+ assert request_body["batchToken"] == batch_token
+ assert request_body["isFirstBatch"] is False
+
+ def test_set_last_persistent_instance(self) -> None:
+ """Test setting the last persistent instance."""
+ # GIVEN a SubmissionStatus
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ annotations={"test": "value"},
+ )
+
+ # WHEN I set the last persistent instance
+ submission_status._set_last_persistent_instance()
+
+ # THEN the persistent instance should be set
+ assert submission_status._last_persistent_instance is not None
+ assert submission_status._last_persistent_instance.id == SUBMISSION_STATUS_ID
+ assert submission_status._last_persistent_instance.status == STATUS
+ assert submission_status._last_persistent_instance.annotations == {
+ "test": "value"
+ }
+
+ # AND modifying the current instance shouldn't affect the persistent one
+ submission_status.status = "MODIFIED"
+ assert submission_status._last_persistent_instance.status == STATUS
+
+ def test_dataclass_equality(self) -> None:
+ """Test dataclass equality comparison."""
+ # GIVEN two SubmissionStatus objects with the same data
+ status1 = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ entity_id=ENTITY_ID,
+ )
+ status2 = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ entity_id=ENTITY_ID,
+ )
+
+ # THEN they should be equal
+ assert status1 == status2
+
+ # WHEN I modify one of them
+ status2.status = "DIFFERENT"
+
+ # THEN they should not be equal
+ assert status1 != status2
+
+ def test_dataclass_fields_excluded_from_comparison(self) -> None:
+ """Test that certain fields are excluded from comparison."""
+ # GIVEN two SubmissionStatus objects that differ only in comparison-excluded fields
+ status1 = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ etag="etag1",
+ modified_on="2023-01-01",
+ cancel_requested=False,
+ )
+ status2 = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ etag="etag2", # different etag
+ modified_on="2023-01-02", # different modified_on
+ cancel_requested=True, # different cancel_requested
+ )
+
+ # THEN they should still be equal (these fields are excluded from comparison)
+ assert status1 == status2
+
+ def test_repr_and_str(self) -> None:
+ """Test string representation of SubmissionStatus."""
+ # GIVEN a SubmissionStatus with some data
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ entity_id=ENTITY_ID,
+ )
+
+ # WHEN I get the string representation
+ repr_str = repr(submission_status)
+ str_str = str(submission_status)
+
+ # THEN it should contain the relevant information
+ assert SUBMISSION_STATUS_ID in repr_str
+ assert STATUS in repr_str
+ assert ENTITY_ID in repr_str
+ assert "SubmissionStatus" in repr_str
+
+ # AND str should be the same as repr for dataclasses
+ assert str_str == repr_str
diff --git a/tests/unit/synapseclient/models/synchronous/unit_test_submission.py b/tests/unit/synapseclient/models/synchronous/unit_test_submission.py
new file mode 100644
index 000000000..b7d6f2721
--- /dev/null
+++ b/tests/unit/synapseclient/models/synchronous/unit_test_submission.py
@@ -0,0 +1,801 @@
+"""Unit tests for the synapseclient.models.Submission class."""
+from typing import Dict, List, Union
+from unittest.mock import AsyncMock, MagicMock, call, patch
+
+import pytest
+
+from synapseclient import Synapse
+from synapseclient.core.exceptions import SynapseHTTPError
+from synapseclient.models import Submission
+
+SUBMISSION_ID = "9614543"
+USER_ID = "123456"
+SUBMITTER_ALIAS = "test_user"
+ENTITY_ID = "syn789012"
+VERSION_NUMBER = 1
+EVALUATION_ID = "9999999"
+SUBMISSION_NAME = "Test Submission"
+CREATED_ON = "2023-01-01T10:00:00.000Z"
+TEAM_ID = "team123"
+CONTRIBUTORS = ["user1", "user2", "user3"]
+SUBMISSION_STATUS = {"status": "RECEIVED", "score": 85.5}
+ENTITY_BUNDLE_JSON = '{"entity": {"id": "syn789012", "name": "test_entity"}}'
+DOCKER_REPOSITORY_NAME = "test/repository"
+DOCKER_DIGEST = "sha256:abc123def456"
+ETAG = "etag_value"
+
+
+class TestSubmission:
+ """Tests for the synapseclient.models.Submission class."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init_syn(self, syn: Synapse) -> None:
+ self.syn = syn
+
+ def get_example_submission_response(self) -> Dict[str, Union[str, int, List, Dict]]:
+ """Get a complete example submission response from the REST API."""
+ return {
+ "id": SUBMISSION_ID,
+ "userId": USER_ID,
+ "submitterAlias": SUBMITTER_ALIAS,
+ "entityId": ENTITY_ID,
+ "versionNumber": VERSION_NUMBER,
+ "evaluationId": EVALUATION_ID,
+ "name": SUBMISSION_NAME,
+ "createdOn": CREATED_ON,
+ "teamId": TEAM_ID,
+ "contributors": CONTRIBUTORS,
+ "submissionStatus": SUBMISSION_STATUS,
+ "entityBundleJSON": ENTITY_BUNDLE_JSON,
+ "dockerRepositoryName": DOCKER_REPOSITORY_NAME,
+ "dockerDigest": DOCKER_DIGEST,
+ }
+
+ def get_minimal_submission_response(self) -> Dict[str, str]:
+ """Get a minimal example submission response from the REST API."""
+ return {
+ "id": SUBMISSION_ID,
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ }
+
+ def get_example_entity_response(self) -> Dict[str, Union[str, int]]:
+ """Get an example entity response for testing entity fetching."""
+ return {
+ "id": ENTITY_ID,
+ "etag": ETAG,
+ "versionNumber": VERSION_NUMBER,
+ "name": "test_entity",
+ "concreteType": "org.sagebionetworks.repo.model.FileEntity",
+ }
+
+ def get_example_docker_entity_response(self) -> Dict[str, Union[str, int]]:
+ """Get an example Docker repository entity response for testing."""
+ return {
+ "id": ENTITY_ID,
+ "etag": ETAG,
+ "name": "test_docker_repo",
+ "concreteType": "org.sagebionetworks.repo.model.docker.DockerRepository",
+ "repositoryName": "test/repository",
+ }
+
+ def get_example_docker_tag_response(self) -> Dict[str, Union[str, int, List]]:
+ """Get an example Docker tag response for testing."""
+ return {
+ "totalNumberOfResults": 2,
+ "results": [
+ {
+ "tag": "v1.0",
+ "digest": "sha256:older123def456",
+ "createdOn": "2024-01-01T10:00:00.000Z",
+ },
+ {
+ "tag": "v2.0",
+ "digest": "sha256:latest456abc789",
+ "createdOn": "2024-06-01T15:30:00.000Z",
+ },
+ ],
+ }
+
+ def get_complex_docker_tag_response(self) -> Dict[str, Union[str, int, List]]:
+ """Get a more complex Docker tag response with multiple versions to test sorting."""
+ return {
+ "totalNumberOfResults": 4,
+ "results": [
+ {
+ "tag": "v1.0",
+ "digest": "sha256:version1",
+ "createdOn": "2024-01-01T10:00:00.000Z",
+ },
+ {
+ "tag": "v3.0",
+ "digest": "sha256:version3",
+ "createdOn": "2024-08-15T12:00:00.000Z", # This should be selected (latest)
+ },
+ {
+ "tag": "v2.0",
+ "digest": "sha256:version2",
+ "createdOn": "2024-06-01T15:30:00.000Z",
+ },
+ {
+ "tag": "v1.5",
+ "digest": "sha256:version1_5",
+ "createdOn": "2024-03-15T08:45:00.000Z",
+ },
+ ],
+ }
+
+ def test_fill_from_dict_complete_data(self) -> None:
+ # GIVEN a complete submission response from the REST API
+ # WHEN I call fill_from_dict with the example submission response
+ submission = Submission().fill_from_dict(self.get_example_submission_response())
+
+ # THEN the Submission object should be filled with all the data
+ assert submission.id == SUBMISSION_ID
+ assert submission.user_id == USER_ID
+ assert submission.submitter_alias == SUBMITTER_ALIAS
+ assert submission.entity_id == ENTITY_ID
+ assert submission.version_number == VERSION_NUMBER
+ assert submission.evaluation_id == EVALUATION_ID
+ assert submission.name == SUBMISSION_NAME
+ assert submission.created_on == CREATED_ON
+ assert submission.team_id == TEAM_ID
+ assert submission.contributors == CONTRIBUTORS
+ assert submission.submission_status == SUBMISSION_STATUS
+ assert submission.entity_bundle_json == ENTITY_BUNDLE_JSON
+ assert submission.docker_repository_name == DOCKER_REPOSITORY_NAME
+ assert submission.docker_digest == DOCKER_DIGEST
+
+ def test_fill_from_dict_minimal_data(self) -> None:
+ # GIVEN a minimal submission response from the REST API
+ # WHEN I call fill_from_dict with the minimal submission response
+ submission = Submission().fill_from_dict(self.get_minimal_submission_response())
+
+ # THEN the Submission object should be filled with required data and defaults for optional data
+ assert submission.id == SUBMISSION_ID
+ assert submission.entity_id == ENTITY_ID
+ assert submission.evaluation_id == EVALUATION_ID
+ assert submission.user_id is None
+ assert submission.submitter_alias is None
+ assert submission.version_number is None
+ assert submission.name is None
+ assert submission.created_on is None
+ assert submission.team_id is None
+ assert submission.contributors == []
+ assert submission.submission_status is None
+ assert submission.entity_bundle_json is None
+ assert submission.docker_repository_name is None
+ assert submission.docker_digest is None
+
+ def test_to_synapse_request_complete_data(self) -> None:
+ # GIVEN a submission with all optional fields set
+ submission = Submission(
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ name=SUBMISSION_NAME,
+ team_id=TEAM_ID,
+ contributors=CONTRIBUTORS,
+ docker_repository_name=DOCKER_REPOSITORY_NAME,
+ docker_digest=DOCKER_DIGEST,
+ version_number=VERSION_NUMBER,
+ )
+
+ # WHEN I call to_synapse_request
+ request_body = submission.to_synapse_request()
+
+ # THEN the request body should contain all fields in the correct format
+ assert request_body["entityId"] == ENTITY_ID
+ assert request_body["evaluationId"] == EVALUATION_ID
+ assert request_body["versionNumber"] == VERSION_NUMBER
+ assert request_body["name"] == SUBMISSION_NAME
+ assert request_body["teamId"] == TEAM_ID
+ assert request_body["contributors"] == CONTRIBUTORS
+ assert request_body["dockerRepositoryName"] == DOCKER_REPOSITORY_NAME
+ assert request_body["dockerDigest"] == DOCKER_DIGEST
+
+ def test_to_synapse_request_minimal_data(self) -> None:
+ # GIVEN a submission with only required fields
+ submission = Submission(
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ version_number=VERSION_NUMBER,
+ )
+
+ # WHEN I call to_synapse_request
+ request_body = submission.to_synapse_request()
+
+ # THEN the request body should contain only required fields
+ assert request_body["entityId"] == ENTITY_ID
+ assert request_body["evaluationId"] == EVALUATION_ID
+ assert request_body["versionNumber"] == VERSION_NUMBER
+ assert "name" not in request_body
+ assert "teamId" not in request_body
+ assert "contributors" not in request_body
+ assert "dockerRepositoryName" not in request_body
+ assert "dockerDigest" not in request_body
+
+ def test_to_synapse_request_missing_entity_id(self) -> None:
+ # GIVEN a submission without entity_id
+ submission = Submission(evaluation_id=EVALUATION_ID)
+
+ # WHEN I call to_synapse_request
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'entity_id' attribute"):
+ submission.to_synapse_request()
+
+ def test_to_synapse_request_missing_evaluation_id(self) -> None:
+ # GIVEN a submission without evaluation_id
+ submission = Submission(entity_id=ENTITY_ID)
+
+ # WHEN I call to_synapse_request
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'evaluation_id' attribute"):
+ submission.to_synapse_request()
+
+ @pytest.mark.asyncio
+ async def test_fetch_latest_entity_success(self) -> None:
+ # GIVEN a submission with an entity_id
+ submission = Submission(entity_id=ENTITY_ID, evaluation_id=EVALUATION_ID)
+
+ # WHEN I call _fetch_latest_entity with a mocked successful response
+ with patch.object(
+ self.syn,
+ "rest_get_async",
+ new_callable=AsyncMock,
+ return_value=self.get_example_entity_response(),
+ ) as mock_rest_get:
+ entity_info = await submission._fetch_latest_entity(synapse_client=self.syn)
+
+ # THEN it should return the entity information
+ assert entity_info["id"] == ENTITY_ID
+ assert entity_info["etag"] == ETAG
+ assert entity_info["versionNumber"] == VERSION_NUMBER
+ mock_rest_get.assert_called_once_with(f"/entity/{ENTITY_ID}")
+
+ @pytest.mark.asyncio
+ async def test_fetch_latest_entity_docker_repository(self) -> None:
+ # GIVEN a submission with a Docker repository entity_id
+ submission = Submission(entity_id=ENTITY_ID, evaluation_id=EVALUATION_ID)
+
+ # WHEN I call _fetch_latest_entity with mocked Docker repository responses
+ with patch.object(
+ self.syn,
+ "rest_get_async",
+ new_callable=AsyncMock,
+ ) as mock_rest_get:
+ # Configure the mock to return different responses for different URLs
+ def side_effect(url):
+ if url == f"/entity/{ENTITY_ID}":
+ return self.get_example_docker_entity_response()
+ elif url == f"/entity/{ENTITY_ID}/dockerTag":
+ return self.get_example_docker_tag_response()
+
+ mock_rest_get.side_effect = side_effect
+
+ entity_info = await submission._fetch_latest_entity(synapse_client=self.syn)
+
+ # THEN it should return the entity information with latest docker tag info
+ assert entity_info["id"] == ENTITY_ID
+ assert entity_info["etag"] == ETAG
+ assert entity_info["repositoryName"] == "test/repository"
+ # Should have the latest tag information (v2.0 based on createdOn date)
+ assert entity_info["tag"] == "v2.0"
+ assert entity_info["digest"] == "sha256:latest456abc789"
+ assert entity_info["createdOn"] == "2024-06-01T15:30:00.000Z"
+
+ # Verify both API calls were made
+ expected_calls = [
+ call(f"/entity/{ENTITY_ID}"),
+ call(f"/entity/{ENTITY_ID}/dockerTag"),
+ ]
+ mock_rest_get.assert_has_calls(expected_calls)
+
+ @pytest.mark.asyncio
+ async def test_fetch_latest_entity_docker_empty_results(self) -> None:
+ # GIVEN a submission with a Docker repository entity_id
+ submission = Submission(entity_id=ENTITY_ID, evaluation_id=EVALUATION_ID)
+
+ # WHEN I call _fetch_latest_entity with empty docker tag results
+ with patch.object(
+ self.syn,
+ "rest_get_async",
+ new_callable=AsyncMock,
+ ) as mock_rest_get:
+ # Configure the mock to return empty docker tag results
+ def side_effect(url):
+ if url == f"/entity/{ENTITY_ID}":
+ return self.get_example_docker_entity_response()
+ elif url == f"/entity/{ENTITY_ID}/dockerTag":
+ return {"totalNumberOfResults": 0, "results": []}
+
+ mock_rest_get.side_effect = side_effect
+
+ entity_info = await submission._fetch_latest_entity(synapse_client=self.syn)
+
+ # THEN it should return the entity information without docker tag info
+ assert entity_info["id"] == ENTITY_ID
+ assert entity_info["etag"] == ETAG
+ assert entity_info["repositoryName"] == "test/repository"
+ # Should not have docker tag fields since results were empty
+ assert "tag" not in entity_info
+ assert "digest" not in entity_info
+ assert "createdOn" not in entity_info
+
+ @pytest.mark.asyncio
+ async def test_fetch_latest_entity_docker_complex_tag_selection(self) -> None:
+ # GIVEN a submission with a Docker repository with multiple tags
+ submission = Submission(entity_id=ENTITY_ID, evaluation_id=EVALUATION_ID)
+
+ # WHEN I call _fetch_latest_entity with multiple docker tags with different dates
+ with patch.object(
+ self.syn,
+ "rest_get_async",
+ new_callable=AsyncMock,
+ ) as mock_rest_get:
+ # Configure the mock to return complex docker tag results
+ def side_effect(url):
+ if url == f"/entity/{ENTITY_ID}":
+ return self.get_example_docker_entity_response()
+ elif url == f"/entity/{ENTITY_ID}/dockerTag":
+ return self.get_complex_docker_tag_response()
+
+ mock_rest_get.side_effect = side_effect
+
+ entity_info = await submission._fetch_latest_entity(synapse_client=self.syn)
+
+ # THEN it should select the tag with the latest createdOn timestamp (v3.0)
+ assert entity_info["tag"] == "v3.0"
+ assert entity_info["digest"] == "sha256:version3"
+ assert entity_info["createdOn"] == "2024-08-15T12:00:00.000Z"
+
+ @pytest.mark.asyncio
+ async def test_fetch_latest_entity_without_entity_id(self) -> None:
+ # GIVEN a submission without entity_id
+ submission = Submission(evaluation_id=EVALUATION_ID)
+
+ # WHEN I call _fetch_latest_entity
+ # THEN it should raise a ValueError
+ with pytest.raises(
+ ValueError, match="entity_id must be set to fetch entity information"
+ ):
+ await submission._fetch_latest_entity(synapse_client=self.syn)
+
+ @pytest.mark.asyncio
+ async def test_fetch_latest_entity_api_error(self) -> None:
+ # GIVEN a submission with an entity_id
+ submission = Submission(entity_id=ENTITY_ID, evaluation_id=EVALUATION_ID)
+
+ # WHEN I call _fetch_latest_entity and the API returns an error
+ with patch.object(
+ self.syn,
+ "rest_get_async",
+ new_callable=AsyncMock,
+ side_effect=SynapseHTTPError("Entity not found"),
+        ):
+ # THEN it should raise a ValueError with context about the original error
+ with pytest.raises(
+ ValueError, match=f"Unable to fetch entity information for {ENTITY_ID}"
+ ):
+ await submission._fetch_latest_entity(synapse_client=self.syn)
+
+ @pytest.mark.asyncio
+ async def test_store_async_success(self) -> None:
+ # GIVEN a submission with valid data
+ submission = Submission(
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ name=SUBMISSION_NAME,
+ )
+
+ # WHEN I call store_async with mocked dependencies
+ with patch.object(
+ submission,
+ "_fetch_latest_entity",
+ new_callable=AsyncMock,
+ return_value=self.get_example_entity_response(),
+ ) as mock_fetch_entity, patch(
+ "synapseclient.api.evaluation_services.create_submission",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_response(),
+ ) as mock_create_submission:
+ stored_submission = await submission.store_async(synapse_client=self.syn)
+
+ # THEN it should fetch entity information, create the submission, and fill the object
+ mock_fetch_entity.assert_called_once_with(synapse_client=self.syn)
+ mock_create_submission.assert_called_once()
+
+ # Check the call arguments to create_submission
+ call_args = mock_create_submission.call_args
+ request_body = call_args[0][0]
+ etag = call_args[0][1]
+
+ assert request_body["entityId"] == ENTITY_ID
+ assert request_body["evaluationId"] == EVALUATION_ID
+ assert request_body["name"] == SUBMISSION_NAME
+ assert request_body["versionNumber"] == VERSION_NUMBER
+ assert etag == ETAG
+
+ # Verify the submission is filled with response data
+ assert stored_submission.id == SUBMISSION_ID
+ assert stored_submission.entity_id == ENTITY_ID
+ assert stored_submission.evaluation_id == EVALUATION_ID
+
+ @pytest.mark.asyncio
+ async def test_store_async_docker_repository_success(self) -> None:
+ # GIVEN a submission with valid data for Docker repository
+ submission = Submission(
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ name=SUBMISSION_NAME,
+ )
+
+ # WHEN I call store_async with mocked Docker repository entity
+ docker_entity_with_tag = self.get_example_docker_entity_response()
+ docker_entity_with_tag.update(
+ {
+ "tag": "v2.0",
+ "digest": "sha256:latest456abc789",
+ "createdOn": "2024-06-01T15:30:00.000Z",
+ }
+ )
+
+ with patch.object(
+ submission,
+ "_fetch_latest_entity",
+ new_callable=AsyncMock,
+ return_value=docker_entity_with_tag,
+ ) as mock_fetch_entity, patch(
+ "synapseclient.api.evaluation_services.create_submission",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_response(),
+ ) as mock_create_submission:
+ stored_submission = await submission.store_async(synapse_client=self.syn)
+
+ # THEN it should handle Docker repository specific logic
+ mock_fetch_entity.assert_called_once_with(synapse_client=self.syn)
+ mock_create_submission.assert_called_once()
+
+ # Verify Docker repository attributes are set correctly
+ assert submission.version_number == 1 # Docker repos get version 1
+ assert submission.docker_repository_name == "test/repository"
+ assert stored_submission.docker_digest == DOCKER_DIGEST
+
+ @pytest.mark.asyncio
+ async def test_store_async_with_team_data_success(self) -> None:
+ # GIVEN a submission with team information
+ submission = Submission(
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ name=SUBMISSION_NAME,
+ team_id=TEAM_ID,
+ contributors=CONTRIBUTORS,
+ )
+
+ # WHEN I call store_async with mocked dependencies
+ with patch.object(
+ submission,
+ "_fetch_latest_entity",
+ new_callable=AsyncMock,
+ return_value=self.get_example_entity_response(),
+ ) as mock_fetch_entity, patch(
+ "synapseclient.api.evaluation_services.create_submission",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_response(),
+ ) as mock_create_submission:
+ stored_submission = await submission.store_async(synapse_client=self.syn)
+
+ # THEN it should preserve team information in the stored submission
+ mock_fetch_entity.assert_called_once_with(synapse_client=self.syn)
+ mock_create_submission.assert_called_once()
+
+ # Verify team data is preserved
+ assert stored_submission.team_id == TEAM_ID
+ assert stored_submission.contributors == CONTRIBUTORS
+ assert stored_submission.id == SUBMISSION_ID
+ assert stored_submission.entity_id == ENTITY_ID
+ assert stored_submission.evaluation_id == EVALUATION_ID
+
+ @pytest.mark.asyncio
+ async def test_store_async_missing_entity_id(self) -> None:
+ # GIVEN a submission without entity_id
+ submission = Submission(evaluation_id=EVALUATION_ID, name=SUBMISSION_NAME)
+
+ # WHEN I call store_async
+        # THEN it should raise a ValueError before any API call is made
+ with pytest.raises(
+ ValueError, match="entity_id is required to create a submission"
+ ):
+ await submission.store_async(synapse_client=self.syn)
+
+ @pytest.mark.asyncio
+ async def test_store_async_entity_fetch_failure(self) -> None:
+ # GIVEN a submission with valid data but entity fetch fails
+ submission = Submission(
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ name=SUBMISSION_NAME,
+ )
+
+ # WHEN I call store_async and entity fetching fails
+ with patch.object(
+ submission,
+ "_fetch_latest_entity",
+ new_callable=AsyncMock,
+ side_effect=ValueError("Unable to fetch entity information"),
+        ):
+ # THEN it should propagate the ValueError
+ with pytest.raises(ValueError, match="Unable to fetch entity information"):
+ await submission.store_async(synapse_client=self.syn)
+
+ @pytest.mark.asyncio
+ async def test_get_async_success(self) -> None:
+ # GIVEN a submission with an ID
+ submission = Submission(id=SUBMISSION_ID)
+
+ # WHEN I call get_async with a mocked successful response
+ with patch(
+ "synapseclient.api.evaluation_services.get_submission",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_response(),
+ ) as mock_get_submission:
+ retrieved_submission = await submission.get_async(synapse_client=self.syn)
+
+ # THEN it should call the API and fill the object
+ mock_get_submission.assert_called_once_with(
+ submission_id=SUBMISSION_ID,
+ synapse_client=self.syn,
+ )
+ assert retrieved_submission.id == SUBMISSION_ID
+ assert retrieved_submission.entity_id == ENTITY_ID
+ assert retrieved_submission.evaluation_id == EVALUATION_ID
+
+ @pytest.mark.asyncio
+ async def test_get_async_without_id(self) -> None:
+ # GIVEN a submission without an ID
+ submission = Submission(entity_id=ENTITY_ID, evaluation_id=EVALUATION_ID)
+
+ # WHEN I call get_async
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="must have an ID to get"):
+ await submission.get_async(synapse_client=self.syn)
+
+ @pytest.mark.asyncio
+ async def test_delete_async_success(self) -> None:
+ # GIVEN a submission with an ID
+ submission = Submission(id=SUBMISSION_ID)
+
+ # WHEN I call delete_async with mocked dependencies
+ with patch(
+ "synapseclient.api.evaluation_services.delete_submission",
+ new_callable=AsyncMock,
+ ) as mock_delete_submission, patch(
+ "synapseclient.Synapse.get_client",
+ return_value=self.syn,
+ ):
+ # Mock the logger
+ self.syn.logger = MagicMock()
+
+ await submission.delete_async(synapse_client=self.syn)
+
+ # THEN it should call the API and log the deletion
+ mock_delete_submission.assert_called_once_with(
+ submission_id=SUBMISSION_ID,
+ synapse_client=self.syn,
+ )
+ self.syn.logger.info.assert_called_once_with(
+ f"Submission {SUBMISSION_ID} has successfully been deleted."
+ )
+
+ @pytest.mark.asyncio
+ async def test_delete_async_without_id(self) -> None:
+ # GIVEN a submission without an ID
+ submission = Submission(entity_id=ENTITY_ID, evaluation_id=EVALUATION_ID)
+
+ # WHEN I call delete_async
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="must have an ID to delete"):
+ await submission.delete_async(synapse_client=self.syn)
+
+ @pytest.mark.asyncio
+ async def test_cancel_async_success(self) -> None:
+ # GIVEN a submission with an ID
+ submission = Submission(id=SUBMISSION_ID)
+
+ # WHEN I call cancel_async with mocked dependencies
+ with patch(
+ "synapseclient.api.evaluation_services.cancel_submission",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_response(),
+ ) as mock_cancel_submission, patch(
+ "synapseclient.Synapse.get_client",
+ return_value=self.syn,
+ ):
+ # Mock the logger
+ self.syn.logger = MagicMock()
+
+ await submission.cancel_async(synapse_client=self.syn)
+
+ # THEN it should call the API, log the cancellation, and update the object
+ mock_cancel_submission.assert_called_once_with(
+ submission_id=SUBMISSION_ID,
+ synapse_client=self.syn,
+ )
+ self.syn.logger.info.assert_called_once_with(
+ f"A request to cancel Submission {SUBMISSION_ID} has been submitted."
+ )
+
+ @pytest.mark.asyncio
+ async def test_cancel_async_without_id(self) -> None:
+ # GIVEN a submission without an ID
+ submission = Submission(entity_id=ENTITY_ID, evaluation_id=EVALUATION_ID)
+
+ # WHEN I call cancel_async
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="must have an ID to cancel"):
+ await submission.cancel_async(synapse_client=self.syn)
+
+ @pytest.mark.asyncio
+ async def test_get_evaluation_submissions_async(self) -> None:
+ # GIVEN evaluation parameters
+ evaluation_id = EVALUATION_ID
+ status = "SCORED"
+
+ # WHEN I call get_evaluation_submissions_async
+ with patch(
+ "synapseclient.api.evaluation_services.get_evaluation_submissions"
+ ) as mock_get_submissions:
+ # Create an async generator function that yields submission data
+ async def mock_async_gen(*args, **kwargs):
+ submission_data = self.get_example_submission_response()
+ yield submission_data
+
+ # Make the mock return our async generator when called
+ mock_get_submissions.side_effect = mock_async_gen
+
+ submissions = []
+ async for submission in Submission.get_evaluation_submissions_async(
+ evaluation_id=evaluation_id,
+ status=status,
+ synapse_client=self.syn,
+ ):
+ submissions.append(submission)
+
+ # THEN it should call the API with correct parameters and yield Submission objects
+ mock_get_submissions.assert_called_once_with(
+ evaluation_id=evaluation_id,
+ status=status,
+ synapse_client=self.syn,
+ )
+ assert len(submissions) == 1
+ assert isinstance(submissions[0], Submission)
+ assert submissions[0].id == SUBMISSION_ID
+
+ @pytest.mark.asyncio
+ async def test_get_user_submissions_async(self) -> None:
+ # GIVEN user submission parameters
+ evaluation_id = EVALUATION_ID
+ user_id = USER_ID
+
+ # WHEN I call get_user_submissions_async
+ with patch(
+ "synapseclient.api.evaluation_services.get_user_submissions"
+ ) as mock_get_user_submissions:
+ # Create an async generator function that yields submission data
+ async def mock_async_gen(*args, **kwargs):
+ submission_data = self.get_example_submission_response()
+ yield submission_data
+
+ # Make the mock return our async generator when called
+ mock_get_user_submissions.side_effect = mock_async_gen
+
+ submissions = []
+ async for submission in Submission.get_user_submissions_async(
+ evaluation_id=evaluation_id,
+ user_id=user_id,
+ synapse_client=self.syn,
+ ):
+ submissions.append(submission)
+
+ # THEN it should call the API with correct parameters and yield Submission objects
+ mock_get_user_submissions.assert_called_once_with(
+ evaluation_id=evaluation_id,
+ user_id=user_id,
+ synapse_client=self.syn,
+ )
+ assert len(submissions) == 1
+ assert isinstance(submissions[0], Submission)
+ assert submissions[0].id == SUBMISSION_ID
+
+ @pytest.mark.asyncio
+ async def test_get_submission_count_async(self) -> None:
+ # GIVEN submission count parameters
+ evaluation_id = EVALUATION_ID
+ status = "VALID"
+
+ expected_response = 42
+
+ # WHEN I call get_submission_count_async
+ with patch(
+ "synapseclient.api.evaluation_services.get_submission_count",
+ new_callable=AsyncMock,
+ return_value=expected_response,
+ ) as mock_get_count:
+ response = await Submission.get_submission_count_async(
+ evaluation_id=evaluation_id,
+ status=status,
+ synapse_client=self.syn,
+ )
+
+ # THEN it should call the API with correct parameters
+ mock_get_count.assert_called_once_with(
+ evaluation_id=evaluation_id,
+ status=status,
+ synapse_client=self.syn,
+ )
+ assert response == expected_response
+
+ def test_default_values(self) -> None:
+ # GIVEN a new Submission object with no parameters
+ submission = Submission()
+
+ # THEN all attributes should have their default values
+ assert submission.id is None
+ assert submission.user_id is None
+ assert submission.submitter_alias is None
+ assert submission.entity_id is None
+ assert submission.version_number is None
+ assert submission.evaluation_id is None
+ assert submission.name is None
+ assert submission.created_on is None
+ assert submission.team_id is None
+ assert submission.contributors == []
+ assert submission.submission_status is None
+ assert submission.entity_bundle_json is None
+ assert submission.docker_repository_name is None
+ assert submission.docker_digest is None
+ assert submission.etag is None
+
+ def test_constructor_with_values(self) -> None:
+ # GIVEN specific values for submission attributes
+ # WHEN I create a Submission object with those values
+ submission = Submission(
+ id=SUBMISSION_ID,
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ name=SUBMISSION_NAME,
+ team_id=TEAM_ID,
+ contributors=CONTRIBUTORS,
+ docker_repository_name=DOCKER_REPOSITORY_NAME,
+ docker_digest=DOCKER_DIGEST,
+ )
+
+ # THEN the object should be initialized with those values
+ assert submission.id == SUBMISSION_ID
+ assert submission.entity_id == ENTITY_ID
+ assert submission.evaluation_id == EVALUATION_ID
+ assert submission.name == SUBMISSION_NAME
+ assert submission.team_id == TEAM_ID
+ assert submission.contributors == CONTRIBUTORS
+ assert submission.docker_repository_name == DOCKER_REPOSITORY_NAME
+ assert submission.docker_digest == DOCKER_DIGEST
+
+ def test_to_synapse_request_with_none_values(self) -> None:
+ # GIVEN a submission with some None values for optional fields
+ submission = Submission(
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ name=None, # Explicitly None
+ team_id=None, # Explicitly None
+ contributors=[], # Empty list (falsy)
+ )
+
+ # WHEN I call to_synapse_request
+ request_body = submission.to_synapse_request()
+
+ # THEN None and empty values should not be included
+ assert request_body["entityId"] == ENTITY_ID
+ assert request_body["evaluationId"] == EVALUATION_ID
+ assert "name" not in request_body
+ assert "teamId" not in request_body
+ assert "contributors" not in request_body
diff --git a/tests/unit/synapseclient/models/synchronous/unit_test_submission_bundle.py b/tests/unit/synapseclient/models/synchronous/unit_test_submission_bundle.py
new file mode 100644
index 000000000..cb5d44018
--- /dev/null
+++ b/tests/unit/synapseclient/models/synchronous/unit_test_submission_bundle.py
@@ -0,0 +1,488 @@
+"""Unit tests for the synapseclient.models.SubmissionBundle class synchronous methods."""
+
+from typing import Dict, Union
+from unittest.mock import AsyncMock, patch
+
+import pytest
+
+from synapseclient import Synapse
+from synapseclient.models import Submission, SubmissionBundle, SubmissionStatus
+
+SUBMISSION_ID = "9999999"
+SUBMISSION_STATUS_ID = "9999999"
+ENTITY_ID = "syn123456"
+EVALUATION_ID = "9614543"
+USER_ID = "123456"
+ETAG = "etag_value"
+MODIFIED_ON = "2023-01-01T00:00:00.000Z"
+CREATED_ON = "2023-01-01T00:00:00.000Z"
+STATUS = "RECEIVED"
+
+
+class TestSubmissionBundleSync:
+ """Tests for the synapseclient.models.SubmissionBundle class synchronous methods."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init_syn(self, syn: Synapse) -> None:
+ self.syn = syn
+
+ def get_example_submission_dict(self) -> Dict[str, Union[str, int, Dict]]:
+ """Return example submission data from REST API."""
+ return {
+ "id": SUBMISSION_ID,
+ "userId": USER_ID,
+ "submitterAlias": "test_user",
+ "entityId": ENTITY_ID,
+ "versionNumber": 1,
+ "name": "Test Submission",
+ "createdOn": CREATED_ON,
+ "evaluationId": EVALUATION_ID,
+ "entityBundle": {
+ "entity": {
+ "id": ENTITY_ID,
+ "name": "test_entity",
+ "concreteType": "org.sagebionetworks.repo.model.FileEntity",
+ },
+ "entityType": "org.sagebionetworks.repo.model.FileEntity",
+ },
+ }
+
+ def get_example_submission_status_dict(
+ self,
+ ) -> Dict[str, Union[str, int, bool, Dict]]:
+ """Return example submission status data from REST API."""
+ return {
+ "id": SUBMISSION_STATUS_ID,
+ "etag": ETAG,
+ "modifiedOn": MODIFIED_ON,
+ "status": STATUS,
+ "entityId": ENTITY_ID,
+ "versionNumber": 1,
+ "statusVersion": 1,
+ "canCancel": False,
+ "cancelRequested": False,
+ "submissionAnnotations": {"score": [85.5], "feedback": ["Good work!"]},
+ }
+
+ def get_example_submission_bundle_dict(self) -> Dict[str, Dict]:
+ """Return example submission bundle data from REST API."""
+ return {
+ "submission": self.get_example_submission_dict(),
+ "submissionStatus": self.get_example_submission_status_dict(),
+ }
+
+ def get_example_submission_bundle_minimal_dict(self) -> Dict[str, Dict]:
+ """Return example minimal submission bundle data from REST API."""
+ return {
+ "submission": {
+ "id": SUBMISSION_ID,
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ },
+ "submissionStatus": None,
+ }
+
+ def test_init_submission_bundle(self) -> None:
+ """Test creating a SubmissionBundle with basic attributes."""
+ # GIVEN submission and submission status objects
+ submission = Submission(
+ id=SUBMISSION_ID,
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ )
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ entity_id=ENTITY_ID,
+ )
+
+ # WHEN I create a SubmissionBundle object
+ bundle = SubmissionBundle(
+ submission=submission,
+ submission_status=submission_status,
+ )
+
+ # THEN the SubmissionBundle should have the expected attributes
+ assert bundle.submission == submission
+ assert bundle.submission_status == submission_status
+ assert bundle.submission.id == SUBMISSION_ID
+ assert bundle.submission_status.id == SUBMISSION_STATUS_ID
+
+ def test_init_submission_bundle_empty(self) -> None:
+ """Test creating an empty SubmissionBundle."""
+ # WHEN I create an empty SubmissionBundle object
+ bundle = SubmissionBundle()
+
+ # THEN the SubmissionBundle should have None attributes
+ assert bundle.submission is None
+ assert bundle.submission_status is None
+
+ def test_fill_from_dict_complete(self) -> None:
+ """Test filling a SubmissionBundle from a complete REST API response."""
+ # GIVEN a complete submission bundle response
+ bundle_data = self.get_example_submission_bundle_dict()
+
+ # WHEN I fill a SubmissionBundle from the response
+ bundle = SubmissionBundle().fill_from_dict(bundle_data)
+
+ # THEN all fields should be populated correctly
+ assert bundle.submission is not None
+ assert bundle.submission_status is not None
+
+ # Check submission fields
+ assert bundle.submission.id == SUBMISSION_ID
+ assert bundle.submission.entity_id == ENTITY_ID
+ assert bundle.submission.evaluation_id == EVALUATION_ID
+ assert bundle.submission.user_id == USER_ID
+
+ # Check submission status fields
+ assert bundle.submission_status.id == SUBMISSION_STATUS_ID
+ assert bundle.submission_status.status == STATUS
+ assert bundle.submission_status.entity_id == ENTITY_ID
+ assert (
+ bundle.submission_status.evaluation_id == EVALUATION_ID
+ ) # set from submission
+
+ # Check submission annotations
+ assert "score" in bundle.submission_status.submission_annotations
+ assert bundle.submission_status.submission_annotations["score"] == [85.5]
+
+ def test_fill_from_dict_minimal(self) -> None:
+ """Test filling a SubmissionBundle from a minimal REST API response."""
+ # GIVEN a minimal submission bundle response
+ bundle_data = self.get_example_submission_bundle_minimal_dict()
+
+ # WHEN I fill a SubmissionBundle from the response
+ bundle = SubmissionBundle().fill_from_dict(bundle_data)
+
+ # THEN submission should be populated but submission_status should be None
+ assert bundle.submission is not None
+ assert bundle.submission_status is None
+
+ # Check submission fields
+ assert bundle.submission.id == SUBMISSION_ID
+ assert bundle.submission.entity_id == ENTITY_ID
+ assert bundle.submission.evaluation_id == EVALUATION_ID
+
+ def test_fill_from_dict_no_submission(self) -> None:
+ """Test filling a SubmissionBundle with no submission data."""
+ # GIVEN a bundle response with no submission
+ bundle_data = {
+ "submission": None,
+ "submissionStatus": self.get_example_submission_status_dict(),
+ }
+
+ # WHEN I fill a SubmissionBundle from the response
+ bundle = SubmissionBundle().fill_from_dict(bundle_data)
+
+ # THEN submission should be None but submission_status should be populated
+ assert bundle.submission is None
+ assert bundle.submission_status is not None
+ assert bundle.submission_status.id == SUBMISSION_STATUS_ID
+ assert bundle.submission_status.status == STATUS
+
+ def test_fill_from_dict_evaluation_id_setting(self) -> None:
+ """Test that evaluation_id is properly set from submission to submission_status."""
+ # GIVEN a bundle response where submission_status doesn't have evaluation_id
+ submission_dict = self.get_example_submission_dict()
+ status_dict = self.get_example_submission_status_dict()
+ # Ensure evaluationId is absent from status_dict, matching an API
+ # response that omits it
+ status_dict.pop("evaluationId", None)
+
+ bundle_data = {
+ "submission": submission_dict,
+ "submissionStatus": status_dict,
+ }
+
+ # WHEN I fill a SubmissionBundle from the response
+ bundle = SubmissionBundle().fill_from_dict(bundle_data)
+
+ # THEN submission_status should get evaluation_id from submission
+ assert bundle.submission is not None
+ assert bundle.submission_status is not None
+ assert bundle.submission.evaluation_id == EVALUATION_ID
+ assert bundle.submission_status.evaluation_id == EVALUATION_ID
+
+ def test_get_evaluation_submission_bundles(self) -> None:
+ """Test getting submission bundles for an evaluation using the sync method."""
+ # GIVEN mock response data
+ mock_response = {
+ "results": [
+ {
+ "submission": {
+ "id": "123",
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ "userId": USER_ID,
+ },
+ "submissionStatus": {
+ "id": "123",
+ "status": "RECEIVED",
+ "entityId": ENTITY_ID,
+ },
+ },
+ {
+ "submission": {
+ "id": "456",
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ "userId": USER_ID,
+ },
+ "submissionStatus": {
+ "id": "456",
+ "status": "SCORED",
+ "entityId": ENTITY_ID,
+ },
+ },
+ ]
+ }
+
+ # WHEN I call get_evaluation_submission_bundles (sync method)
+ with patch(
+ "synapseclient.api.evaluation_services.get_evaluation_submission_bundles"
+ ) as mock_get_bundles:
+ # Create an async generator function that yields bundle data
+ async def mock_async_gen(*args, **kwargs):
+ for bundle_data in mock_response["results"]:
+ yield bundle_data
+
+ # Make the mock return our async generator when called
+ mock_get_bundles.side_effect = mock_async_gen
+
+ result = list(
+ SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=EVALUATION_ID,
+ status="RECEIVED",
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN the service should be called with correct parameters
+ mock_get_bundles.assert_called_once_with(
+ evaluation_id=EVALUATION_ID,
+ status="RECEIVED",
+ synapse_client=self.syn,
+ )
+
+ # AND the result should contain SubmissionBundle objects
+ assert len(result) == 2
+ assert all(isinstance(bundle, SubmissionBundle) for bundle in result)
+
+ # Check first bundle
+ assert result[0].submission is not None
+ assert result[0].submission.id == "123"
+ assert result[0].submission_status is not None
+ assert result[0].submission_status.id == "123"
+ assert result[0].submission_status.status == "RECEIVED"
+ assert (
+ result[0].submission_status.evaluation_id == EVALUATION_ID
+ ) # set from submission
+
+ # Check second bundle
+ assert result[1].submission is not None
+ assert result[1].submission.id == "456"
+ assert result[1].submission_status is not None
+ assert result[1].submission_status.id == "456"
+ assert result[1].submission_status.status == "SCORED"
+
+ def test_get_evaluation_submission_bundles_empty_response(self) -> None:
+ """Test getting submission bundles with an empty response using the sync method."""
+ # GIVEN an API that returns no submission bundles
+
+ # WHEN I call get_evaluation_submission_bundles (sync method)
+ with patch(
+ "synapseclient.api.evaluation_services.get_evaluation_submission_bundles"
+ ) as mock_get_bundles:
+ # Create an async generator function that yields no data
+ async def mock_async_gen(*args, **kwargs):
+ return
+ yield  # This will never execute
+
+ # Make the mock return our async generator when called
+ mock_get_bundles.side_effect = mock_async_gen
+
+ result = list(
+ SubmissionBundle.get_evaluation_submission_bundles(
+ evaluation_id=EVALUATION_ID,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN the service should be called
+ mock_get_bundles.assert_called_once_with(
+ evaluation_id=EVALUATION_ID,
+ status=None,
+ synapse_client=self.syn,
+ )
+
+ # AND the result should be an empty list
+ assert len(result) == 0
+
+ def test_get_user_submission_bundles(self) -> None:
+ """Test getting user submission bundles using the sync method."""
+ # GIVEN mock response data
+ mock_response = {
+ "results": [
+ {
+ "submission": {
+ "id": "789",
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ "userId": USER_ID,
+ "name": "User Submission 1",
+ },
+ "submissionStatus": {
+ "id": "789",
+ "status": "VALIDATED",
+ "entityId": ENTITY_ID,
+ },
+ },
+ ]
+ }
+
+ # WHEN I call get_user_submission_bundles (sync method)
+ with patch(
+ "synapseclient.api.evaluation_services.get_user_submission_bundles"
+ ) as mock_get_user_bundles:
+ # Create an async generator function that yields bundle data
+ async def mock_async_gen(*args, **kwargs):
+ for bundle_data in mock_response["results"]:
+ yield bundle_data
+
+ # Make the mock return our async generator when called
+ mock_get_user_bundles.side_effect = mock_async_gen
+
+ result = list(
+ SubmissionBundle.get_user_submission_bundles(
+ evaluation_id=EVALUATION_ID,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN the service should be called with correct parameters
+ mock_get_user_bundles.assert_called_once_with(
+ evaluation_id=EVALUATION_ID,
+ synapse_client=self.syn,
+ )
+
+ # AND the result should contain SubmissionBundle objects
+ assert len(result) == 1
+ assert isinstance(result[0], SubmissionBundle)
+
+ # Check bundle contents
+ assert result[0].submission is not None
+ assert result[0].submission.id == "789"
+ assert result[0].submission.name == "User Submission 1"
+ assert result[0].submission_status is not None
+ assert result[0].submission_status.id == "789"
+ assert result[0].submission_status.status == "VALIDATED"
+ assert result[0].submission_status.evaluation_id == EVALUATION_ID
+
+ def test_get_user_submission_bundles_default_params(self) -> None:
+ """Test getting user submission bundles with default parameters using the sync method."""
+ # GIVEN an API that returns no submission bundles
+
+ # WHEN I call get_user_submission_bundles with defaults (sync method)
+ with patch(
+ "synapseclient.api.evaluation_services.get_user_submission_bundles"
+ ) as mock_get_user_bundles:
+ # Create an async generator function that yields no data
+ async def mock_async_gen(*args, **kwargs):
+ return
+ yield # This will never execute
+
+ # Make the mock return our async generator when called
+ mock_get_user_bundles.side_effect = mock_async_gen
+
+ result = list(
+ SubmissionBundle.get_user_submission_bundles(
+ evaluation_id=EVALUATION_ID,
+ synapse_client=self.syn,
+ )
+ )
+
+ # THEN the service should be called with default parameters
+ mock_get_user_bundles.assert_called_once_with(
+ evaluation_id=EVALUATION_ID,
+ synapse_client=self.syn,
+ )
+
+ # AND the result should be empty
+ assert len(result) == 0
+
+ def test_dataclass_equality(self) -> None:
+ """Test dataclass equality comparison."""
+ # GIVEN two SubmissionBundle objects with the same data
+ submission = Submission(id=SUBMISSION_ID, entity_id=ENTITY_ID)
+ status = SubmissionStatus(id=SUBMISSION_STATUS_ID, status=STATUS)
+
+ bundle1 = SubmissionBundle(submission=submission, submission_status=status)
+ bundle2 = SubmissionBundle(submission=submission, submission_status=status)
+
+ # THEN they should be equal
+ assert bundle1 == bundle2
+
+ # WHEN I modify one of them
+ bundle2.submission_status = SubmissionStatus(id="different", status="DIFFERENT")
+
+ # THEN they should not be equal
+ assert bundle1 != bundle2
+
+ def test_dataclass_equality_with_none(self) -> None:
+ """Test dataclass equality with None values."""
+ # GIVEN two SubmissionBundle objects with None values
+ bundle1 = SubmissionBundle(submission=None, submission_status=None)
+ bundle2 = SubmissionBundle(submission=None, submission_status=None)
+
+ # THEN they should be equal
+ assert bundle1 == bundle2
+
+ # WHEN I add a submission to one
+ bundle2.submission = Submission(id=SUBMISSION_ID)
+
+ # THEN they should not be equal
+ assert bundle1 != bundle2
+
+ def test_repr_and_str(self) -> None:
+ """Test string representation of SubmissionBundle."""
+ # GIVEN a SubmissionBundle with some data
+ submission = Submission(id=SUBMISSION_ID, entity_id=ENTITY_ID)
+ status = SubmissionStatus(id=SUBMISSION_STATUS_ID, status=STATUS)
+ bundle = SubmissionBundle(submission=submission, submission_status=status)
+
+ # WHEN I get the string representation
+ repr_str = repr(bundle)
+ str_str = str(bundle)
+
+ # THEN it should contain relevant information
+ assert "SubmissionBundle" in repr_str
+ assert SUBMISSION_ID in repr_str
+ assert SUBMISSION_STATUS_ID in repr_str
+
+ # AND str should be the same as repr for dataclasses
+ assert str_str == repr_str
+
+ def test_repr_with_none_values(self) -> None:
+ """Test string representation with None values."""
+ # GIVEN a SubmissionBundle with None values
+ bundle = SubmissionBundle(submission=None, submission_status=None)
+
+ # WHEN I get the string representation
+ repr_str = repr(bundle)
+
+ # THEN it should show None values
+ assert "SubmissionBundle" in repr_str
+ assert "submission=None" in repr_str
+ assert "submission_status=None" in repr_str
+
+ def test_protocol_implementation(self) -> None:
+ """Test that SubmissionBundle implements the synchronous protocol correctly."""
+ # THEN it should have all the required synchronous methods
+ assert hasattr(SubmissionBundle, "get_evaluation_submission_bundles")
+ assert hasattr(SubmissionBundle, "get_user_submission_bundles")
+
+ # AND the methods should be callable
+ assert callable(SubmissionBundle.get_evaluation_submission_bundles)
+ assert callable(SubmissionBundle.get_user_submission_bundles)
diff --git a/tests/unit/synapseclient/models/synchronous/unit_test_submission_status.py b/tests/unit/synapseclient/models/synchronous/unit_test_submission_status.py
new file mode 100644
index 000000000..e724e64f0
--- /dev/null
+++ b/tests/unit/synapseclient/models/synchronous/unit_test_submission_status.py
@@ -0,0 +1,597 @@
+"""Unit tests for the synchronous methods of the synapseclient.models.SubmissionStatus class."""
+
+from typing import Dict, Union
+from unittest.mock import AsyncMock, patch
+
+import pytest
+
+from synapseclient import Synapse
+from synapseclient.models import SubmissionStatus
+
+SUBMISSION_STATUS_ID = "9999999"
+ENTITY_ID = "syn123456"
+EVALUATION_ID = "9614543"
+ETAG = "etag_value"
+MODIFIED_ON = "2023-01-01T00:00:00.000Z"
+STATUS = "RECEIVED"
+SCORE = 85.5
+REPORT = "Test report"
+VERSION_NUMBER = 1
+STATUS_VERSION = 1
+CAN_CANCEL = False
+CANCEL_REQUESTED = False
+PRIVATE_STATUS_ANNOTATIONS = True
+
+
+class TestSubmissionStatusSync:
+ """Tests for the synchronous methods of the synapseclient.models.SubmissionStatus class."""
+
+ @pytest.fixture(autouse=True, scope="function")
+ def init_syn(self, syn: Synapse) -> None:
+ self.syn = syn
+
+ def get_example_submission_status_dict(
+ self,
+ ) -> Dict[str, Union[str, int, bool, Dict]]:
+ """Return example submission status data from the REST API."""
+ return {
+ "id": SUBMISSION_STATUS_ID,
+ "etag": ETAG,
+ "modifiedOn": MODIFIED_ON,
+ "status": STATUS,
+ "score": SCORE,
+ "report": REPORT,
+ "entityId": ENTITY_ID,
+ "versionNumber": VERSION_NUMBER,
+ "statusVersion": STATUS_VERSION,
+ "canCancel": CAN_CANCEL,
+ "cancelRequested": CANCEL_REQUESTED,
+ "annotations": {
+ "objectId": SUBMISSION_STATUS_ID,
+ "scopeId": EVALUATION_ID,
+ "stringAnnos": [
+ {
+ "key": "internal_note",
+ "isPrivate": True,
+ "value": "This is internal",
+ }
+ ],
+ "doubleAnnos": [
+ {"key": "validation_score", "isPrivate": True, "value": 95.0}
+ ],
+ "longAnnos": [],
+ },
+ "submissionAnnotations": {"feedback": ["Great work!"], "score": [92.5]},
+ }
+
+ def get_example_submission_dict(self) -> Dict[str, str]:
+ """Return example submission data from the REST API."""
+ return {
+ "id": SUBMISSION_STATUS_ID,
+ "evaluationId": EVALUATION_ID,
+ "entityId": ENTITY_ID,
+ "versionNumber": VERSION_NUMBER,
+ "userId": "123456",
+ "submitterAlias": "test_user",
+ "createdOn": "2023-01-01T00:00:00.000Z",
+ }
+
+ def test_init_submission_status(self) -> None:
+ """Test creating a SubmissionStatus with basic attributes."""
+ # WHEN I create a SubmissionStatus object
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ entity_id=ENTITY_ID,
+ evaluation_id=EVALUATION_ID,
+ )
+
+ # THEN the SubmissionStatus should have the expected attributes
+ assert submission_status.id == SUBMISSION_STATUS_ID
+ assert submission_status.status == STATUS
+ assert submission_status.entity_id == ENTITY_ID
+ assert submission_status.evaluation_id == EVALUATION_ID
+ assert submission_status.can_cancel is False # default value
+ assert submission_status.cancel_requested is False # default value
+ assert submission_status.private_status_annotations is True # default value
+
+ def test_fill_from_dict(self) -> None:
+ """Test filling a SubmissionStatus from a REST API response."""
+ # GIVEN an example submission status response
+ submission_status_data = self.get_example_submission_status_dict()
+
+ # WHEN I fill a SubmissionStatus from the response
+ submission_status = SubmissionStatus().fill_from_dict(submission_status_data)
+
+ # THEN all fields should be populated correctly
+ assert submission_status.id == SUBMISSION_STATUS_ID
+ assert submission_status.etag == ETAG
+ assert submission_status.modified_on == MODIFIED_ON
+ assert submission_status.status == STATUS
+ assert submission_status.score == SCORE
+ assert submission_status.report == REPORT
+ assert submission_status.entity_id == ENTITY_ID
+ assert submission_status.version_number == VERSION_NUMBER
+ assert submission_status.status_version == STATUS_VERSION
+ assert submission_status.can_cancel is CAN_CANCEL
+ assert submission_status.cancel_requested is CANCEL_REQUESTED
+
+ # Check annotations
+ assert submission_status.annotations is not None
+ assert "objectId" in submission_status.annotations
+ assert "scopeId" in submission_status.annotations
+ assert "stringAnnos" in submission_status.annotations
+ assert "doubleAnnos" in submission_status.annotations
+
+ # Check submission annotations
+ assert "feedback" in submission_status.submission_annotations
+ assert "score" in submission_status.submission_annotations
+ assert submission_status.submission_annotations["feedback"] == ["Great work!"]
+ assert submission_status.submission_annotations["score"] == [92.5]
+
+ def test_fill_from_dict_minimal(self) -> None:
+ """Test filling a SubmissionStatus from a minimal REST API response."""
+ # GIVEN a minimal submission status response
+ minimal_data = {"id": SUBMISSION_STATUS_ID, "status": STATUS}
+
+ # WHEN I fill a SubmissionStatus from the response
+ submission_status = SubmissionStatus().fill_from_dict(minimal_data)
+
+ # THEN basic fields should be populated
+ assert submission_status.id == SUBMISSION_STATUS_ID
+ assert submission_status.status == STATUS
+ # AND optional fields should have default values
+ assert submission_status.etag is None
+ assert submission_status.can_cancel is False
+ assert submission_status.cancel_requested is False
+
+ def test_get(self) -> None:
+ """Test retrieving a SubmissionStatus by ID using the sync method."""
+ # GIVEN a SubmissionStatus with an ID
+ submission_status = SubmissionStatus(id=SUBMISSION_STATUS_ID)
+
+ # WHEN I call get (sync method)
+ with patch(
+ "synapseclient.api.evaluation_services.get_submission_status",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_status_dict(),
+ ) as mock_get_status, patch(
+ "synapseclient.api.evaluation_services.get_submission",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_dict(),
+ ) as mock_get_submission:
+ result = submission_status.get(synapse_client=self.syn)
+
+ # THEN the submission status should be retrieved
+ mock_get_status.assert_called_once_with(
+ submission_id=SUBMISSION_STATUS_ID, synapse_client=self.syn
+ )
+ mock_get_submission.assert_called_once_with(
+ submission_id=SUBMISSION_STATUS_ID, synapse_client=self.syn
+ )
+
+ # AND the result should have the expected data
+ assert result.id == SUBMISSION_STATUS_ID
+ assert result.status == STATUS
+ assert result.evaluation_id == EVALUATION_ID
+
+ def test_get_without_id(self) -> None:
+ """Test that getting a SubmissionStatus without ID raises ValueError."""
+ # GIVEN a SubmissionStatus without an ID
+ submission_status = SubmissionStatus()
+
+ # WHEN I call get
+ # THEN it should raise a ValueError
+ with pytest.raises(
+ ValueError, match="The submission status must have an ID to get"
+ ):
+ submission_status.get(synapse_client=self.syn)
+
+ def test_store(self) -> None:
+ """Test storing a SubmissionStatus using the sync method."""
+ # GIVEN a SubmissionStatus with required attributes
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ etag=ETAG,
+ status_version=STATUS_VERSION,
+ status="SCORED",
+ evaluation_id=EVALUATION_ID,
+ )
+ submission_status._set_last_persistent_instance()
+
+ # AND I modify the status
+ submission_status.status = "VALIDATED"
+
+ # WHEN I call store (sync method)
+ with patch(
+ "synapseclient.api.evaluation_services.update_submission_status",
+ new_callable=AsyncMock,
+ return_value=self.get_example_submission_status_dict(),
+ ) as mock_update:
+ result = submission_status.store(synapse_client=self.syn)
+
+ # THEN the submission status should be updated
+ mock_update.assert_called_once()
+ call_args = mock_update.call_args
+ assert call_args.kwargs["submission_id"] == SUBMISSION_STATUS_ID
+ assert call_args.kwargs["synapse_client"] == self.syn
+
+ # AND the result should have updated data
+ assert result.id == SUBMISSION_STATUS_ID
+ assert result.status == STATUS # from mock response
+
+ def test_store_without_id(self) -> None:
+ """Test that storing a SubmissionStatus without ID raises ValueError."""
+ # GIVEN a SubmissionStatus without an ID
+ submission_status = SubmissionStatus(status="SCORED")
+
+ # WHEN I call store
+ # THEN it should raise a ValueError
+ with pytest.raises(
+ ValueError, match="The submission status must have an ID to update"
+ ):
+ submission_status.store(synapse_client=self.syn)
+
+ def test_store_without_changes(self) -> None:
+ """Test storing a SubmissionStatus without changes."""
+ # GIVEN a SubmissionStatus that hasn't been modified
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ etag=ETAG,
+ status_version=STATUS_VERSION,
+ status=STATUS,
+ )
+ submission_status._set_last_persistent_instance()
+
+ # WHEN I call store without making changes
+ result = submission_status.store(synapse_client=self.syn)
+
+ # THEN it should return the same instance (no update sent to Synapse)
+ assert result is submission_status
+
+ def test_to_synapse_request_missing_id(self) -> None:
+ """Test to_synapse_request with missing ID."""
+ # GIVEN a SubmissionStatus without an ID
+ submission_status = SubmissionStatus()
+
+ # WHEN I call to_synapse_request
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'id' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ def test_to_synapse_request_missing_etag(self) -> None:
+ """Test to_synapse_request with missing etag."""
+ # GIVEN a SubmissionStatus with ID but no etag
+ submission_status = SubmissionStatus(id=SUBMISSION_STATUS_ID)
+
+ # WHEN I call to_synapse_request
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'etag' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ def test_to_synapse_request_missing_status_version(self) -> None:
+ """Test to_synapse_request with missing status_version."""
+ # GIVEN a SubmissionStatus with ID and etag but no status_version
+ submission_status = SubmissionStatus(id=SUBMISSION_STATUS_ID, etag=ETAG)
+
+ # WHEN I call to_synapse_request
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'status_version' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ def test_to_synapse_request_missing_evaluation_id_with_annotations(self) -> None:
+ """Test to_synapse_request with annotations but missing evaluation_id."""
+ # GIVEN a SubmissionStatus with annotations but no evaluation_id
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ etag=ETAG,
+ status_version=STATUS_VERSION,
+ annotations={"test": "value"},
+ )
+
+ # WHEN I call to_synapse_request
+ # THEN it should raise a ValueError
+ with pytest.raises(ValueError, match="missing the 'evaluation_id' attribute"):
+ submission_status.to_synapse_request(synapse_client=self.syn)
+
+ def test_to_synapse_request_valid(self) -> None:
+ """Test to_synapse_request with valid attributes."""
+ # GIVEN a SubmissionStatus with all required attributes
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ etag=ETAG,
+ status_version=STATUS_VERSION,
+ status="SCORED",
+ evaluation_id=EVALUATION_ID,
+ submission_annotations={"score": 85.5},
+ annotations={"internal_note": "test"},
+ )
+
+ # WHEN I call to_synapse_request
+ request_body = submission_status.to_synapse_request(synapse_client=self.syn)
+
+ # THEN the request should have the required fields
+ assert request_body["id"] == SUBMISSION_STATUS_ID
+ assert request_body["etag"] == ETAG
+ assert request_body["statusVersion"] == STATUS_VERSION
+ assert request_body["status"] == "SCORED"
+ assert "submissionAnnotations" in request_body
+ assert "annotations" in request_body
+
+ def test_has_changed_property_new_instance(self) -> None:
+ """Test has_changed property for a new instance."""
+ # GIVEN a new SubmissionStatus instance
+ submission_status = SubmissionStatus(id=SUBMISSION_STATUS_ID)
+
+ # THEN has_changed should be True (no persistent instance)
+ assert submission_status.has_changed is True
+
+ def test_has_changed_property_after_get(self) -> None:
+ """Test has_changed property after retrieving from Synapse."""
+ # GIVEN a SubmissionStatus that was retrieved (has persistent instance)
+ submission_status = SubmissionStatus(id=SUBMISSION_STATUS_ID, status=STATUS)
+ submission_status._set_last_persistent_instance()
+
+ # THEN has_changed should be False
+ assert submission_status.has_changed is False
+
+ # WHEN I modify a field
+ submission_status.status = "VALIDATED"
+
+ # THEN has_changed should be True
+ assert submission_status.has_changed is True
+
+ def test_has_changed_property_annotations(self) -> None:
+ """Test has_changed property with annotation changes."""
+ # GIVEN a SubmissionStatus with annotations
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ annotations={"original": "value"},
+ submission_annotations={"score": 85.0},
+ )
+ submission_status._set_last_persistent_instance()
+
+ # THEN has_changed should be False initially
+ assert submission_status.has_changed is False
+
+ # WHEN I modify annotations
+ submission_status.annotations = {"modified": "value"}
+
+ # THEN has_changed should be True
+ assert submission_status.has_changed is True
+
+ # WHEN I reset annotations to original and modify submission_annotations
+ submission_status.annotations = {"original": "value"}
+ submission_status.submission_annotations = {"score": 90.0}
+
+ # THEN has_changed should still be True
+ assert submission_status.has_changed is True
+
+ def test_get_all_submission_statuses(self) -> None:
+ """Test getting all submission statuses using the sync method."""
+ # GIVEN mock response data
+ mock_response = {
+ "results": [
+ {
+ "id": "123",
+ "status": "RECEIVED",
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ },
+ {
+ "id": "456",
+ "status": "SCORED",
+ "entityId": ENTITY_ID,
+ "evaluationId": EVALUATION_ID,
+ },
+ ]
+ }
+
+ # WHEN I call get_all_submission_statuses (sync method)
+ with patch(
+ "synapseclient.api.evaluation_services.get_all_submission_statuses",
+ new_callable=AsyncMock,
+ return_value=mock_response,
+ ) as mock_get_all:
+ result = SubmissionStatus.get_all_submission_statuses(
+ evaluation_id=EVALUATION_ID,
+ status="RECEIVED",
+ limit=50,
+ offset=0,
+ synapse_client=self.syn,
+ )
+
+ # THEN the service should be called with correct parameters
+ mock_get_all.assert_called_once_with(
+ evaluation_id=EVALUATION_ID,
+ status="RECEIVED",
+ limit=50,
+ offset=0,
+ synapse_client=self.syn,
+ )
+
+ # AND the result should contain SubmissionStatus objects
+ assert len(result) == 2
+ assert all(isinstance(status, SubmissionStatus) for status in result)
+ assert result[0].id == "123"
+ assert result[0].status == "RECEIVED"
+ assert result[1].id == "456"
+ assert result[1].status == "SCORED"
+
+ def test_batch_update_submission_statuses(self) -> None:
+ """Test batch updating submission statuses using the sync method."""
+ # GIVEN a list of SubmissionStatus objects
+ statuses = [
+ SubmissionStatus(
+ id="123",
+ etag="etag1",
+ status_version=1,
+ status="VALIDATED",
+ evaluation_id=EVALUATION_ID,
+ ),
+ SubmissionStatus(
+ id="456",
+ etag="etag2",
+ status_version=1,
+ status="SCORED",
+ evaluation_id=EVALUATION_ID,
+ ),
+ ]
+
+ # AND mock response
+ mock_response = {"batchToken": "token123"}
+
+ # WHEN I call batch_update_submission_statuses (sync method)
+ with patch(
+ "synapseclient.api.evaluation_services.batch_update_submission_statuses",
+ new_callable=AsyncMock,
+ return_value=mock_response,
+ ) as mock_batch_update:
+ result = SubmissionStatus.batch_update_submission_statuses(
+ evaluation_id=EVALUATION_ID,
+ statuses=statuses,
+ is_first_batch=True,
+ is_last_batch=True,
+ synapse_client=self.syn,
+ )
+
+ # THEN the service should be called with correct parameters
+ mock_batch_update.assert_called_once()
+ call_args = mock_batch_update.call_args
+ assert call_args.kwargs["evaluation_id"] == EVALUATION_ID
+ assert call_args.kwargs["synapse_client"] == self.syn
+
+ # Check request body structure
+ request_body = call_args.kwargs["request_body"]
+ assert request_body["isFirstBatch"] is True
+ assert request_body["isLastBatch"] is True
+ assert "statuses" in request_body
+ assert len(request_body["statuses"]) == 2
+
+ # AND the result should be the mock response
+ assert result == mock_response
+
+ def test_batch_update_with_batch_token(self) -> None:
+ """Test batch update with batch token for subsequent batches."""
+ # GIVEN a list of SubmissionStatus objects and a batch token
+ statuses = [
+ SubmissionStatus(
+ id="123",
+ etag="etag1",
+ status_version=1,
+ status="VALIDATED",
+ evaluation_id=EVALUATION_ID,
+ )
+ ]
+ batch_token = "previous_batch_token"
+
+ # WHEN I call batch_update_submission_statuses with a batch token
+ with patch(
+ "synapseclient.api.evaluation_services.batch_update_submission_statuses",
+ new_callable=AsyncMock,
+ return_value={},
+ ) as mock_batch_update:
+ SubmissionStatus.batch_update_submission_statuses(
+ evaluation_id=EVALUATION_ID,
+ statuses=statuses,
+ is_first_batch=False,
+ is_last_batch=True,
+ batch_token=batch_token,
+ synapse_client=self.syn,
+ )
+
+ # THEN the batch token should be included in the request
+ call_args = mock_batch_update.call_args
+ request_body = call_args.kwargs["request_body"]
+ assert request_body["batchToken"] == batch_token
+ assert request_body["isFirstBatch"] is False
+
+ def test_set_last_persistent_instance(self) -> None:
+ """Test setting the last persistent instance."""
+ # GIVEN a SubmissionStatus
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ annotations={"test": "value"},
+ )
+
+ # WHEN I set the last persistent instance
+ submission_status._set_last_persistent_instance()
+
+ # THEN the persistent instance should be set
+ assert submission_status._last_persistent_instance is not None
+ assert submission_status._last_persistent_instance.id == SUBMISSION_STATUS_ID
+ assert submission_status._last_persistent_instance.status == STATUS
+ assert submission_status._last_persistent_instance.annotations == {
+ "test": "value"
+ }
+
+ # AND modifying the current instance shouldn't affect the persistent one
+ submission_status.status = "MODIFIED"
+ assert submission_status._last_persistent_instance.status == STATUS
+
+ def test_dataclass_equality(self) -> None:
+ """Test dataclass equality comparison."""
+ # GIVEN two SubmissionStatus objects with the same data
+ status1 = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ entity_id=ENTITY_ID,
+ )
+ status2 = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ entity_id=ENTITY_ID,
+ )
+
+ # THEN they should be equal
+ assert status1 == status2
+
+ # WHEN I modify one of them
+ status2.status = "DIFFERENT"
+
+ # THEN they should not be equal
+ assert status1 != status2
+
+ def test_dataclass_fields_excluded_from_comparison(self) -> None:
+ """Test that certain fields are excluded from comparison."""
+ # GIVEN two SubmissionStatus objects that differ only in comparison-excluded fields
+ status1 = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ etag="etag1",
+ modified_on="2023-01-01",
+ cancel_requested=False,
+ )
+ status2 = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ etag="etag2", # different etag
+ modified_on="2023-01-02", # different modified_on
+ cancel_requested=True, # different cancel_requested
+ )
+
+ # THEN they should still be equal (these fields are excluded from comparison)
+ assert status1 == status2
+
+ def test_repr_and_str(self) -> None:
+ """Test string representation of SubmissionStatus."""
+ # GIVEN a SubmissionStatus with some data
+ submission_status = SubmissionStatus(
+ id=SUBMISSION_STATUS_ID,
+ status=STATUS,
+ entity_id=ENTITY_ID,
+ )
+
+ # WHEN I get the string representation
+ repr_str = repr(submission_status)
+ str_str = str(submission_status)
+
+ # THEN it should contain the relevant information
+ assert SUBMISSION_STATUS_ID in repr_str
+ assert STATUS in repr_str
+ assert ENTITY_ID in repr_str
+ assert "SubmissionStatus" in repr_str
+
+ # AND str should be the same as repr for dataclasses
+ assert str_str == repr_str