
Cluster sync adj in p&a flavour#1814

Open
M4KIF wants to merge 2 commits intofeature/database-controllersfrom
jkoterba/feature/cluster-sync-adj

Conversation


@M4KIF M4KIF commented Apr 2, 2026

Description

Rewrote the sync logic.

  • moved each component's checks into separate objects, all implementing the Condition interface, which we iterate over to check health.

Key Changes

  • interfaced the health check
  • implemented the above for the provisioner, pooler, configmap, and secret
  • added a short unit test
  • added constants and a state check via bitmask

Testing and Verification

The local integration suite (make test) passes, and the unit tests (go test in pkg/postgres/cluster/core) pass.

Related Issues

Jira tickets, GitHub issues, Support tickets...

PR Checklist

  • [I don't know] Code changes adhere to the project's coding standards.
  • Relevant unit and integration tests are included.
  • Documentation has been updated accordingly.
  • All tests pass locally.
  • The PR description follows the project's guidelines.

Contributor

github-actions bot commented Apr 2, 2026

CLA Assistant Lite bot:
Thank you for your submission, we really appreciate it. Like many open-source projects, we ask that you sign our Contribution License Agreement before we can accept your contribution. You can sign the CLA by just posting a Pull Request Comment with the exact sentence copied from below.


I have read the CLA Document and I hereby sign the CLA


You can retrigger this bot by commenting recheck in this Pull Request

@M4KIF M4KIF changed the base branch from main to feature/database-controllers April 2, 2026 14:35
CNPG cluster
Poolers
Access resources (configmap and secret)
And all of them need to be set to Ready for our PostgresCluster phase to become Ready?

yes, that's correct

Rewrite to consider taking the state of other objects into account
before declaring readiness.

CNPG cluster

Correct me if I'm wrong, but what I'm missing in the code is a check whether the CNPG cluster is ready and, if so, updating our ClusterReady condition to true, so that at the end (here) we can check whether all conditions are true and set the whole custom resource status to ready.

Author

Currently? Yes, currently it's mostly scaffolding into which I will place the business logic.

}
return ctrl.Result{}, patchErr
default:
if statusErr := updateStatus(clusterReady, metav1.ConditionFalse, reasonClusterBuildSucceeded,
@limak9182 limak9182 Apr 3, 2026

Here we update the status with the clusterReady condition set to False when we patched the CNPG Cluster, and requeue.
Shouldn't the next reconciliation then run this check again to see whether the CNPG cluster is in the desired state:

!equality.Semantic.DeepEqual(currentNormalized, desiredNormalized)

and, if it is (we don't go inside this if), set the clusterReady condition to true?
Something like:

statusErr := updateStatus(clusterReady, metav1.ConditionTrue...

just after this big if block?

Maybe one more CNPG cluster check is needed, just to be sure it's in a healthy and ready state; if not, requeue or exit with an error?


Actually, after thinking about it, it should probably go after reconcileManagedRoles, as that's the last thing we do with the CNPG cluster.

Author

In general I think it's a bit misleading that we do ClusterReady with condition == false. At least for me, it should be sth like ClusterErrorRetry, like HTTP codes.

@limak9182 limak9182 Apr 3, 2026

Sorry, I think I don't fully understand that. By cluster do you mean our PostgresCluster CR or the ClusterReady condition (the actual CNPG cluster)?

Collaborator

@mploski mploski Apr 7, 2026

In general I think it's a bit misleading that we do ClusterReady with condition == false. At least for me, it should be sth like ClusterErrorRetry, like HTTP codes.

This is a standard pattern in k8s; we should stick to it. Take a look at our design docs, where we map phases and conditions to the current situation.

// return ctrl.Result{}, nil
}

type clusterReadynessCheck interface {
Collaborator

Sounds like a great thing to add to the specific port capabilities :-)

Author

Then the interface here would be implemented by the ports. Clean and nice idea.

pgcConstants "github.com/splunk/splunk-operator/pkg/postgresql/cluster/business/core/types/constants"
)

type Provisioner interface {
Collaborator

I think our secondary ports should reflect that we create a cluster and a database, and we should map our interfaces around that.

*/

// basically a sync logic
state := pgcConstants.EmptyState
Collaborator

I think the idea here was to decouple the status check from the cnpg status. At the same time, we also check health after every stage and move forward only if we are OK; if it is still in progress, we requeue or raise an error. Here we have code we can use to check where we are with the status iteratively, but I don't see yet how it solves our core problem.

Author

We build our state here; after checking all ports for readiness/not dying, this is the moment for us to decide what happened and how it happened.
We don't set our state as state == cnpgVariable mapped to ours; we decide what we want to do with the fact that the cm's, secrets, provisioner etc. are ready.

cnpgv1 "github.com/cloudnative-pg/cloudnative-pg/api/v1"
enterprisev4 "github.com/splunk/splunk-operator/api/v4"
clustercore "github.com/splunk/splunk-operator/pkg/postgresql/cluster/core"
clustercore "github.com/splunk/splunk-operator/pkg/postgresql/cluster/business/core"
Collaborator

Tbh, the business string is redundant here. Core itself is already a domain.

Author

I agree it's redundant; here it's a tradeoff for verbosity and segregation of components, and a service pattern at once, i.e. service/ is the primary port (the reconciler that we provide) implementation, core/ is the core, and ports/ are the contracts that the core needs to work. They can grow large, hence the whole separate dir for ports.

cnpgCluster *cnpgv1.Cluster
}

func (c *provisionerHealthCheck) Condition() (pgcConstants.State, error) {
Collaborator

I think all of these condition checks should be part of our ports. Also, how do you want to map conditions to phases?

Author

I agree; they were placed here as what I need. Solving it like you say is the thing I'm hoping for: for the provisioner/cluster etc. ports to include an interface for checking their state.

The adapter would then essentially map the dependency state to our abstraction of its state, i.e. we have cluster ready, provisioning, failover, cupcake, coffee etc.
Mapping condition to phase would be the job of the facade, i.e. cluster.go. That would be the whole operational brain behind it, i.e. lots of individual pieces funneled into our business decisions.

Author

Something like a stateMapper, or another object that specialises in deciding what phase/condition we're in, could also be born. The bitmask could be used for covering phase 1, i.e. state = FinaliserNotAdded && !ClusterProvisioning etc.
Phase 2 -> state == Finaliser && ClusterReady etc.

}

func (p *poolerHealthCheck) Condition() (pgcConstants.State, error) {
return pgcConstants.PoolerReady, nil
Collaborator

How do we want to check the actual condition the component has in its status?

Author

Wdym? It's kind of the job of the adapter to test and guarantee that the state is current. I.e. if we place this as a method of a port and implement it via adapters, we actually won't work on the real state of the component in our core, only on our understanding of it.

Author

Currently it would just be a copy-paste of what we do inside cluster.go, i.e. the resource obj. of the Pooler, k8s.Get(obj, ...) and all similar, as there is no abstraction currently.

)

const (
ComponentsReady = PoolerReady | ProvisionerReady | SecretReady | ConfigMapReady
Collaborator

@mploski mploski Apr 7, 2026

What are we trying to achieve here? Is it a bitmask? Since you use iota, don't we end up with just sequential integers?
It would probably be simpler to just use a struct keeping the state like this:

type ClusterState struct {
    Provisioner ComponentPhase
    Pooler      ComponentPhase
    ConfigMap   ComponentPhase
    Secret      ComponentPhase
}

Author

It's a bitmask, and it does the same job as keeping a struct with an additional field.
It also kinda solves the case of having to create new types for each component: we just add additional states to the state machine, i.e. values in the "enum". The iota usage is an enum in Go: https://yourbasic.org/golang/iota/
It's the first usage in this file, hence it's basically an enum from 0.

Collaborator

@mploski mploski Apr 9, 2026

And It kinda solves the case of having to create new types just for each component.

We already create types for a new component for many reasons, so what problem does it really solve? I agree it is a smart way of doing this, but nevertheless, if a new component arises you need to add it to the const and extend the types. I feel like we trade Go readability for a really small C-like optimisation, especially with this model of bitwise comparison later:

state |= componentHealth.State
if state&pgcConstants.ComponentsReady == pgcConstants.ComponentsReady

Also, if we build state incrementally, it means that the LAST successful state in the state machine is the final success. With this in mind, we don't need to check every other component state afterwards.

Can we do this simpler, so that when I wake up at 5am in the morning I can easily understand the code?

Author

I agree that this state check, after the iterative health check passes, is redundant here.
Taking into consideration the potential future work, which could include a file division,
I could expand the *healthCheck types to return a *(component)StateDto instead of relying on generic state bits.

Author

Because later, in the very near future, it seems that the project could follow this footstep of having some separation into phases and their crucial elements, like we discussed in the p&a ideas brainstorm.

rc.emitPoolerReadyTransition(postgresCluster, poolerOldConditions)
}

if state&pgcConstants.ComponentsReady != 0 {
Collaborator

@mploski mploski Apr 7, 2026

Not sure this logic isn't broken. What happens if we set ProvisionerReady but later in a stage we set failed for something? Due to the way we set state and components, this would pass. I think it is because iota increments by 1, not by powers of 2. I think we should rely not on bitmasking here, but on a simple struct with a state for every stage; if all are good, we are good.

Author

With unsetting the bits at any place, this condition starts failing. The same with not setting the bits: the values don't AND, hence if we mark ProvisionerReady and then set masks for ConfigMapFailed, it won't fire.
And to prove how this logic works, there would be tests that make sure any misfires aren't possible.

@M4KIF M4KIF force-pushed the jkoterba/feature/cluster-sync-adj branch from b0b2f12 to ad7aaec Compare April 8, 2026 08:02
@M4KIF M4KIF marked this pull request as ready for review April 8, 2026 08:03
oldPhase = *postgresCluster.Status.Phase
// Aggregate component readiness from iterative health checks.
state := pgcConstants.EmptyState
conditions := []clusterReadynessCheck{
Collaborator

So after every phase that is not immediate, like cluster creation, we should also incorporate a state readiness check. I think we discussed that we don't really need to check at the end, assuming we check intermediary statuses per phase?

Author

I agree with that, but isn't then the scope == refactor the reconciler?
I've tried to stick with changing the sync logic and doing the groundwork for more changes in coming tickets and the potential p&a rework.

for _, check := range conditions {
componentHealth, err := check.Condition(ctx)
if err != nil {
if statusErr := updateStatus(componentHealth.Condition, metav1.ConditionFalse, componentHealth.Reason, componentHealth.Message, componentHealth.Phase); statusErr != nil {
Collaborator

If we run this at the end of reconciliation, it seems that some of the code is dead, i.e. if we are here, we cannot have an orphaned configmap or secret. If we do, it should be discovered during this phase and we should requeue/err.

@M4KIF M4KIF requested review from limak9182 and mploski April 9, 2026 06:13
}
return ctrl.Result{}, statusErr
}
logger.Error(err, "Component health check reported issues",
Collaborator

return componentHealth.Result, err
}

if isPendingState(componentHealth.State) {
Collaborator

If we run this code on every phase separately, we should requeue here.

if postgresCluster.Status.Phase != nil {
newPhase = *postgresCluster.Status.Phase

if state&pgcConstants.ComponentsReady == pgcConstants.ComponentsReady {
Collaborator

This could potentially be a method on State so it reads naturally at 4 am, i.e.:

func (s State) HasAll(required State) bool {
    return s&required == required
}

if state.HasAll(pgcConstants.ComponentsReady) {}

return &provisionerHealthCheck{cluster: cluster, cnpgCluster: cnpgCluster}
}

func (c *provisionerHealthCheck) Condition(_ context.Context) (StateInformationDto, error) {
Collaborator

I like providing an interface like that! One doubt I have is about the pureness and responsibility of those condition methods: they check conditions but also fetch k8s objects. IMO the k8s objects should be evaluated at reconciler kickoff and propagated down.

Collaborator

mploski commented Apr 9, 2026

I like the direction we are heading in!
A few concerns to discuss:

  1. Should we aggregate the status check at the end, or should we check per phase, so that when we pass everything in reconcile we know we are good? Considering we build state incrementally and sequentially, this makes sense IMO.
  2. Function pureness.
  3. Bitmask usage in that form: if 1 is true, then we might not need it. If we need it, maybe we can wrap it in helper functions/refactor the approach so we understand the code better in the morning when we have a P0.
  4. I believe we should start with drawing the current state machine and how we set phases and conditions. Did you do that?
