
Conversation

@egegunes (Contributor) commented Dec 18, 2025

K8SPSMDB-1527

CHANGE DESCRIPTION

Problem:
When backup is enabled, the cluster could reach the "ready" state before PBM (Percona Backup for MongoDB) was fully configured, so operations could proceed while the backup infrastructure was not yet available.

Cause:
Cluster readiness did not account for PBM state: nothing in the status reflected whether the PBM version had been detected and the PBM configuration applied.

Solution:
Gate readiness on PBM: add a pbmInProgress check to status reconciliation, introduce a PBMReady condition to track PBM configuration state, and reconcile the PBM version without requiring the cluster to already be ready.

CHECKLIST

Jira

  • Is the Jira ticket created and referenced properly?
  • Does the Jira ticket have the proper statuses for documentation (Needs Doc) and QA (Needs QA)?
  • Does the Jira ticket link to the proper milestone (Fix Version field)?

Tests

  • Is an E2E test/test case added for the new feature/change?
  • Are unit tests added where appropriate?
  • Are OpenShift compare files changed for E2E tests (compare/*-oc.yml)?

Config/Logging/Testability

  • Are all needed new/changed options added to default YAML files?
  • Are all needed new/changed options added to the Helm Chart?
  • Did we add proper logging messages for operator actions?
  • Did we ensure compatibility with the previous version or cluster upgrade process?
  • Does the change support the oldest and newest supported MongoDB versions?
  • Does the change support the oldest and newest supported Kubernetes versions?

@egegunes egegunes added this to the v1.22.0 milestone Dec 18, 2025
Copilot AI review requested due to automatic review settings December 18, 2025 14:15
@pull-request-size pull-request-size bot added the size/XL 500-999 lines label Dec 18, 2025
logf "sigs.k8s.io/controller-runtime/pkg/log"

"github.com/percona/percona-backup-mongodb/pbm/config"

[goimports-reviser] reported by reviewdog 🐶

Suggested change

"github.com/percona/percona-backup-mongodb/pbm/config"

pbmVersion "github.com/percona/percona-backup-mongodb/pbm/version"
psmdbv1 "github.com/percona/percona-server-mongodb-operator/pkg/apis/psmdb/v1"
[goimports-reviser] reported by reviewdog 🐶

Suggested change
psmdbv1 "github.com/percona/percona-server-mongodb-operator/pkg/apis/psmdb/v1"
psmdbv1 "github.com/percona/percona-server-mongodb-operator/pkg/apis/psmdb/v1"

Backup: api.BackupSpec{
Enabled: true,
Storages: map[string]api.BackupStorageSpec{
"s3": api.BackupStorageSpec{
[gofmt] reported by reviewdog 🐶

Suggested change
- "s3": api.BackupStorageSpec{
+ "s3": {

Backup: api.BackupSpec{
Enabled: true,
Storages: map[string]api.BackupStorageSpec{
"s3": api.BackupStorageSpec{
[gofmt] reported by reviewdog 🐶

Suggested change
- "s3": api.BackupStorageSpec{
+ "s3": {

Copilot AI left a comment

Pull request overview

This PR ensures that, when backup is enabled, a MongoDB cluster cannot reach the "ready" state until PBM (Percona Backup for MongoDB) is fully configured and ready. This prevents operations from proceeding while the backup infrastructure is not yet available.

Key changes:

  • Added pbmInProgress check to prevent the cluster from becoming ready before the PBM version and configuration are established (see the sketch after this list)
  • Introduced PBMReady condition type to track PBM configuration state
  • Moved reconcileBackupVersion function from backup.go to pbm.go and removed the requirement for cluster to be ready before checking PBM version
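The first bullet is the core of the change, referenced above. A minimal, self-contained sketch of such a readiness gate follows; the type and field names are illustrative assumptions, not the operator's actual API:

```go
package main

import "fmt"

// Stand-in types; the real operator reads these from the CR status.
type BackupStatus struct {
	Version    string // set once the PBM version has been detected
	ConfigHash string // set once the PBM config has been applied
}

type ClusterStatus struct {
	BackupEnabled bool
	Backup        BackupStatus
}

// pbmInProgress mirrors the described gate: with backup enabled, the cluster
// is held out of "ready" until both PBM version and config hash are recorded.
func pbmInProgress(s ClusterStatus) bool {
	return s.BackupEnabled && (s.Backup.Version == "" || s.Backup.ConfigHash == "")
}

func main() {
	fmt.Println(pbmInProgress(ClusterStatus{BackupEnabled: true}))                                           // true: PBM not configured yet
	fmt.Println(pbmInProgress(ClusterStatus{BackupEnabled: true, Backup: BackupStatus{"2.11.0", "abc123"}})) // false: PBM ready
	fmt.Println(pbmInProgress(ClusterStatus{BackupEnabled: false}))                                          // false: backup disabled
}
```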

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 2 comments.

Summary per file:

  • pkg/controller/perconaservermongodb/status.go: added a pbmInProgress flag that checks whether the backup version or config hash is empty, preventing the cluster ready state until PBM is configured
  • pkg/controller/perconaservermongodb/status_test.go: added comprehensive table-driven tests covering PBM readiness scenarios, including backup disabled, version empty, config hash empty, and fully ready states
  • pkg/controller/perconaservermongodb/pbm.go: added checks for replset initialization and shard status before PBM config reconciliation; moved reconcileBackupVersion here and removed the ready-state requirement; added PBMReady condition updates
  • pkg/controller/perconaservermongodb/backup.go: removed the reconcileBackupVersion function (moved to pbm.go) and cleaned up imports
  • pkg/controller/perconaservermongodb/conditions.go: new file with a helper function updateCondition to update cluster conditions and write the status
  • pkg/apis/psmdb/v1/psmdb_types.go: added the ConditionTypePBMReady constant; moved the AddCondition method earlier in the file for better organization


continue
}

if cr.Spec.Sharding.Enabled && status.AddedAsShard == nil || !*status.AddedAsShard {
Copilot AI commented Dec 18, 2025

The logical condition relies on operator precedence in a way that looks unintended. In Go, && binds tighter than ||, so the expression parses as (cr.Spec.Sharding.Enabled && status.AddedAsShard == nil) || !*status.AddedAsShard. As a result, when sharding is disabled the right-hand side still evaluates !*status.AddedAsShard, which panics if status.AddedAsShard is nil. The condition should be grouped as: cr.Spec.Sharding.Enabled && (status.AddedAsShard == nil || !*status.AddedAsShard).

Suggested change
- if cr.Spec.Sharding.Enabled && status.AddedAsShard == nil || !*status.AddedAsShard {
+ if cr.Spec.Sharding.Enabled && (status.AddedAsShard == nil || !*status.AddedAsShard) {
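To make the precedence issue concrete, here is a small runnable demonstration of why the ungrouped expression panics when the pointer is nil (variable names are illustrative):

```go
package main

import "fmt"

func main() {
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r) // nil pointer dereference
		}
	}()

	shardingEnabled := false
	var addedAsShard *bool // nil, as when the shard status is not yet set

	// && binds tighter than ||, so this parses as
	// (shardingEnabled && addedAsShard == nil) || !*addedAsShard.
	// With sharding disabled the left side is false, and the right side
	// still dereferences the nil pointer, which panics.
	if shardingEnabled && addedAsShard == nil || !*addedAsShard {
		fmt.Println("would add as shard")
	}
}
```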


func (r *ReconcilePerconaServerMongoDB) updateCondition(ctx context.Context, cr *psmdbv1.PerconaServerMongoDB, c psmdbv1.ClusterCondition) error {
	cr.Status.AddCondition(c)
	return r.writeStatus(ctx, cr)
}
Copilot AI commented Dec 18, 2025

The updateCondition function calls writeStatus which immediately persists the status to the API server. This is called multiple times during reconciliation (lines 166 and 191 in pbm.go), but there's also a deferred updateStatus call at the end of the Reconcile function. While the CR is passed by reference so the conditions should be preserved, calling writeStatus multiple times during a single reconciliation loop is not ideal as it creates unnecessary API calls and potential race conditions. Consider accumulating condition changes and writing them only once at the end of reconciliation, similar to how other status fields are handled.

Suggested change
- return r.writeStatus(ctx, cr)
+ return nil
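A self-contained sketch of the accumulate-then-flush pattern this comment suggests; the types and the writeStatus stand-in are illustrative, not the operator's actual code:

```go
package main

import "fmt"

// Stand-ins for the operator's types; only the pattern matters here.
type ClusterCondition struct {
	Type   string
	Status string
}

type Status struct{ Conditions []ClusterCondition }

// AddCondition mutates only the in-memory status; nothing hits the API server.
func (s *Status) AddCondition(c ClusterCondition) {
	s.Conditions = append(s.Conditions, c)
}

// writeStatus stands in for the single API call at the end of Reconcile.
func writeStatus(s *Status) {
	fmt.Printf("persisting %d conditions in one status write\n", len(s.Conditions))
}

func main() {
	s := &Status{}
	defer writeStatus(s) // one deferred write, as in Reconcile

	// Conditions accumulate in memory during reconciliation.
	s.AddCondition(ClusterCondition{Type: "PBMReady", Status: "False"})
	s.AddCondition(ClusterCondition{Type: "PBMReady", Status: "True"})
}
```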

@hors hors requested a review from mayankshah1607 December 18, 2025 20:01
@JNKPercona (Collaborator)

Test Name Result Time
arbiter passed 00:11:38
balancer passed 00:18:19
cross-site-sharded passed 00:18:56
custom-replset-name passed 00:10:22
custom-tls passed 00:13:50
custom-users-roles failure 00:58:55
custom-users-roles-sharded passed 00:12:06
data-at-rest-encryption passed 00:13:22
data-sharded passed 00:22:53
demand-backup failure 00:59:35
demand-backup-eks-credentials-irsa passed 00:00:08
demand-backup-fs failure 00:59:18
demand-backup-if-unhealthy failure 00:58:13
demand-backup-incremental failure 00:59:39
demand-backup-incremental-sharded passed 01:03:57
demand-backup-physical-parallel failure 00:59:33
demand-backup-physical-aws failure 00:58:42
demand-backup-physical-azure failure 00:58:45
demand-backup-physical-gcp-s3 failure 00:58:39
demand-backup-physical-gcp-native failure 00:57:52
demand-backup-physical-minio failure 00:56:48
demand-backup-physical-minio-native failure 00:54:25
demand-backup-physical-sharded-parallel passed 00:11:37
demand-backup-physical-sharded-aws passed 00:18:33
demand-backup-physical-sharded-azure passed 00:18:25
demand-backup-physical-sharded-gcp-native passed 00:18:05
demand-backup-physical-sharded-minio passed 00:17:48
demand-backup-physical-sharded-minio-native passed 00:17:59
demand-backup-sharded passed 00:25:08
expose-sharded failure 00:16:43
finalizer passed 00:10:00
ignore-labels-annotations passed 00:07:44
init-deploy failure 00:21:27
ldap passed 00:09:03
ldap-tls passed 00:12:50
limits passed 00:06:24
liveness failure 00:16:42
mongod-major-upgrade passed 00:12:13
mongod-major-upgrade-sharded passed 00:22:37
monitoring-2-0 passed 00:25:30
monitoring-pmm3 passed 00:28:51
multi-cluster-service passed 00:15:27
multi-storage failure 00:01:33
non-voting-and-hidden failure 00:00:58
one-pod failure 00:00:58
operator-self-healing-chaos passed 00:13:40
pitr failure 00:00:56
pitr-physical failure 00:00:58
pitr-sharded passed 00:22:34
pitr-to-new-cluster failure 00:00:57
pitr-physical-backup-source passed 00:57:06
preinit-updates passed 00:07:37
pvc-resize failure 00:11:19
recover-no-primary passed 00:27:16
replset-overrides failure 01:04:55
replset-remapping failure 00:30:08
replset-remapping-sharded passed 00:22:06
rs-shard-migration failure 01:30:07
scaling passed 00:17:11
scheduled-backup failure 00:29:53
security-context failure 00:21:40
self-healing-chaos passed 00:16:55
service-per-pod passed 00:19:23
serviceless-external-nodes passed 00:07:59
smart-update passed 00:08:36
split-horizon passed 00:08:17
stable-resource-version passed 00:04:55
storage passed 00:08:03
tls-issue-cert-manager passed 00:29:33
upgrade passed 00:10:15
upgrade-consistency passed 00:06:26
upgrade-consistency-sharded-tls passed 00:54:31
upgrade-sharded passed 00:20:26
upgrade-partial-backup failure 00:09:10
users failure 00:53:04
version-service passed 00:25:57
Summary Value
Tests Run 76/76
Job Duration 07:19:35
Total Test Time 32:05:01

commit: 4461031
image: perconalab/percona-server-mongodb-operator:PR-2152-4461031b
