
OPRUN-4099: OLMv1 Deployment Configuration API #1915

Merged
openshift-merge-bot[bot] merged 15 commits into openshift:master from oceanc80:olmv1-subscription-config on Mar 17, 2026

Conversation

@oceanc80
Contributor

@oceanc80 oceanc80 commented Jan 2, 2026

Enhancement extending OLMv1's ClusterExtension API to support deployment configuration in order to provide feature parity with OLMv0's SubscriptionConfig.

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Jan 2, 2026
@openshift-ci openshift-ci Bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jan 2, 2026
@openshift-ci-robot

openshift-ci-robot commented Jan 2, 2026

@oceanc80: This pull request references OPRUN-4099 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the task to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Enhancement extending OLMv1's ClusterExtension API to support deployment configuration in order to provide feature parity with OLMv0's SubscriptionConfig.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci
Contributor

openshift-ci Bot commented Jan 2, 2026

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

Comment thread enhancements/olm/olmv1-deployment-configuration-api.md Outdated
- As a cluster extension admin, I want to attach custom storage volumes to operator pods, so that I can provide persistent storage or configuration files to operators.
- As a cluster extension admin, I want to configure pod affinity rules for operator deployments, so that I can control how operator pods are distributed across cluster nodes.
- As a cluster extension admin, I want to add custom annotations to operator deployments, so that I can integrate with monitoring and observability tools.
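
Under the proposed API, those user stories could translate into something like the following sketch (illustrative only; the field names under `deploymentConfig` are assumptions modeled on OLMv0's SubscriptionConfig, and `my-operator` is a hypothetical extension):

```yaml
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: my-operator   # hypothetical extension name
spec:
  source:
    sourceType: Catalog
    catalog:
      packageName: my-operator
  config:
    inline:
      deploymentConfig:
        # Custom storage volume for persistent data or configuration files
        volumes:
          - name: operator-data
            persistentVolumeClaim:
              claimName: operator-data-pvc
        volumeMounts:
          - name: operator-data
            mountPath: /var/lib/operator
        # Affinity rules controlling how operator pods spread across nodes
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app: my-operator
                  topologyKey: kubernetes.io/hostname
        # Custom annotations for monitoring/observability integration
        annotations:
          prometheus.io/scrape: "true"
```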


I wonder what the story for the Selector is. I wonder if it's to handle changes in the pod label selector in the operator's controller deployment between versions (the label selector in the deployment spec is immutable). This configuration could provide upgrade continuity across this type of breaking change.

Contributor Author

I could also see it being used for blue/green deployments or other similar deployment strategies.

Member

The selector of a deployment is immutable, iirc. I dove into the history of this field, and it looks like it was basically there from the beginning with no real explanation that I could find, and it has never been honored as far as I can tell.

Chalk it up to how fast and loose the early days of OLM were.

Comment thread enhancements/olm/olmv1-deployment-configuration-api.md Outdated
@kuiwang02

@oceanc80 I know the PR is in WIP, and I'm not sure whether the following comments are within the scope of this EP.

If they are out of scope, or this is not the right time to raise them, feel free to ignore them:


The deploymentConfig API design looks well thought out. I noticed that the proposal currently focuses on initial installation scenarios, and I was wondering if you could clarify the behavior for runtime configuration updates, which I expect will be a common operational workflow.

Question 1: Modifying Existing deploymentConfig Values

Scenario: After creating a ClusterExtension with deploymentConfig, a user wants to update some values (e.g., changing memory limits from 256Mi to 512Mi, or adding a new nodeSelector).

Could you clarify:

  1. Will users be able to modify spec.config.inline.deploymentConfig values after ClusterExtension creation?
  2. If supported, will the changes automatically reconcile and apply to the existing Deployment?
  3. Which configuration field changes will trigger a pod rolling update?
  4. Are there any fields that don't support runtime updates?

Example:

  # Initial configuration
  deploymentConfig:
    resources:
      limits:
        memory: "256Mi"

  # User updates to:
  deploymentConfig:
    resources:
      limits:
        memory: "512Mi"      # ← modified
    nodeSelector:            # ← added
      infrastructure: "dedicated"

Question 2: Adding deploymentConfig After Creation

Scenario: A user creates a ClusterExtension without defining deploymentConfig initially, then later wants to add deployment configuration.

Could you clarify:

  1. Is it supported to add deploymentConfig to an existing ClusterExtension that was created without it?
  2. If supported, will the changes automatically reconcile and apply to the existing Deployment?
  3. Which newly added configuration fields will trigger a pod rolling update?
  4. Are there any fields that don't support being added at runtime?
  5. Similarly, what happens if a user removes deploymentConfig entirely?

Example:

  # Initial creation (no deploymentConfig)
  apiVersion: olm.operatorframework.io/v1
  kind: ClusterExtension
  metadata:
    name: my-operator
  spec:
    source:
      sourceType: Catalog
      catalog:
        packageName: my-operator
    # Note: no config.inline.deploymentConfig

  ---

  # Later, user adds deploymentConfig
  spec:
    config:
      inline:
        deploymentConfig:    # ← newly added
          nodeSelector:
            infrastructure: "dedicated"

@perdasilva

@kuiwang02 let me try to reply to your questions

Question 1: Modifying Existing deploymentConfig Values

  1. Will users be able to modify spec.config.inline.deploymentConfig values after ClusterExtension creation?

From the perspective of OLMv1, bundle configuration is opaque. It takes the user input, validates it against the configuration schema provided by the bundle, and applies it to generate the final manifests. So any configuration can be changed at runtime. This does mean that some user configurations might generate manifests that cannot be applied, or that lead to unintended or bad consequences. If there are errors, the extension will be in a broken state until the configuration is fixed.

  2. If supported, will the changes automatically reconcile and apply to the existing Deployment?

Yes. The Deployment will be regenerated with the new values and applied to the cluster.

  3. Which configuration field changes will trigger a pod rolling update?

Any change that lands in the pod template should trigger a new ReplicaSet, and the Deployment will roll over to it.

  4. Are there any fields that don't support runtime updates?

This is a good question. I know there are fields in the Deployment spec that are immutable (e.g. the label selector); that's the only one I can think of. I believe the configuration options under the deployment config are all mutable.
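
To make the rollout behavior concrete, a rough sketch (field names assumed to mirror OLMv0's SubscriptionConfig; which rendered Deployment fields they map to is an assumption):

```yaml
deploymentConfig:
  # These land in the Deployment's pod template, so changing them
  # creates a new ReplicaSet and triggers a rolling update:
  resources:
    limits:
      memory: "512Mi"
  nodeSelector:
    infrastructure: "dedicated"
  env:
    - name: LOG_LEVEL
      value: debug
  # By contrast, spec.selector on a Deployment is immutable after
  # creation; the Selector field is accepted but ignored, so it never
  # reaches the rendered Deployment anyway.
```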

Question 2: Adding deploymentConfig After Creation

  1. Is it supported to add deploymentConfig to an existing ClusterExtension that was created without it?

Yes. For the same reasons in Q1.1

  2. If supported, will the changes automatically reconcile and apply to the existing Deployment?

Yes.

  3. Which newly added configuration fields will trigger a pod rolling update?

Same as above: any change that lands in the pod template will trigger a rollout.

  4. Are there any fields that don't support being added at runtime?

I don't think so.

  5. Similarly, what happens if a user removes deploymentConfig entirely?

Then we are back to the Deployment spec defined in the bundle by the author.

The mental model here is really no different than:

  1. Create a Deployment
  2. Modify the deployment

AFAIK only the pod label selector is immutable once set.

@kuiwang02

> immutable

@perdasilva Thanks for your great reply. I got it.

@oceanc80 oceanc80 marked this pull request as ready for review February 17, 2026 18:07
@openshift-ci openshift-ci Bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 17, 2026
@anik120
Contributor

anik120 commented Feb 17, 2026

@JoelSpeed PTAL, thanks!

Comment thread enhancements/olm/olmv1-deployment-configuration-api.md
}
```

The `Selector` field in the `SubscriptionConfig` is present but never extracted or used by OLMv0. OLMv1 will maintain this behavior, so the field will be accepted but ignored.
Contributor

Accepting but ignoring a field is bad practice. Why not create a new type for the deployment config? It doesn't look like it'll be particularly complex to implement

Contributor

Same answer as above:

we had this discussion upstream where I'd proposed a new, completely separate structure for v1, but it was vetoed in favor of keeping v0 and v1 in sync.

It was discussed that reusing the v0 structure means carrying over technical debt, but the cost was judged acceptable for long-term maintainability.

Member

I agree with @JoelSpeed 's point here. Even if we re-use the type, we are also in control of the schema generation for that type, right? So at a minimum we could specifically exclude that field from the generated schema.

Contributor

Big +1 to excluding this field from the schema downstream if it's not going to be supported

Contributor

Looks like there's some confusion here: we are NOT exposing the field in the schema, even though it's being carried over because of the usage of v1alpha1.SubscriptionConfig.

Wrote a test to make that more explicit: operator-framework/operator-controller#2525

Also @oceanc80 fyi oceanc80#2

Comment thread enhancements/olm/olmv1-deployment-configuration-api.md
## Open Questions / Considerations

### Track changes to underlying kubernetes corev1 structures?
SubscriptionConfig uses many Kubernetes `corev1` structures from the standard kube library. This means the OLMv0 Subscription API tracks changes to those structures (e.g. if a new Volume type is added to the API). We need to decide whether we want the same behavior here and, if so, how we'd like to implement it. For example, we could have a process that downloads and mines the OpenAPI specs for the kube library version pinned in go.mod, and have `make verify` fail when they change. We'd also need to think about how to handle any CEL expressions in those corev1 structures when doing the validation (and whether we want to handle them at all).
Contributor

Are you doing any processing of these fields, or just setting them directly on the deployment that you're rendering and applying? If you aren't processing them and are just passing them through, then this is probably fine
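
For the `make verify` idea in the quoted open question, one possible shape is a script that records the `k8s.io/api` version pinned in go.mod and fails when it drifts, signalling that the mined OpenAPI schema needs refreshing. This is purely a hypothetical sketch; the marker file name and pinning scheme are invented for illustration:

```shell
#!/usr/bin/env sh
# Hypothetical verify step: fail when the k8s.io/api version pinned in
# go.mod no longer matches the version the OpenAPI schema was mined from.
set -eu

# Extract the k8s.io/api version from go.mod (e.g. "v0.31.2").
KUBE_API_VERSION="$(grep -E '^[[:space:]]*k8s\.io/api[[:space:]]' go.mod | awk '{print $2}')"

PINNED_FILE=".kube-api-version"   # invented marker file
if [ ! -f "$PINNED_FILE" ]; then
  # First run: record the current version.
  printf '%s\n' "$KUBE_API_VERSION" > "$PINNED_FILE"
elif [ "$(cat "$PINNED_FILE")" != "$KUBE_API_VERSION" ]; then
  echo "k8s.io/api changed to $KUBE_API_VERSION: re-mine the OpenAPI spec and update $PINNED_FILE" >&2
  exit 1
fi
```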


@anik120
Contributor

anik120 commented Mar 2, 2026

@JoelSpeed can we move this along? Our downstreaming efforts are currently blocked on this EP merging (the feature is planned for TPNU for this release).

@JoelSpeed
Contributor

None of my comments here are blocking, though I do recommend you double check on #1915 (comment)

/override ci/prow/markdownlint

New sections were added to the template after you started this

/assign @joelanford

Joe is listed as your approver

@openshift-ci
Contributor

openshift-ci Bot commented Mar 2, 2026

@JoelSpeed: Overrode contexts on behalf of JoelSpeed: ci/prow/markdownlint

Details

In response to this:

None of my comments here are blocking, though I do recommend you double check on #1915 (comment)

/override ci/prow/markdownlint

New sections were added to the template after you started this

/assign @joelanford

Joe is listed as your approver

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Comment thread enhancements/olm/olmv1-deployment-configuration-api.md
Comment on lines +343 to +344
1. OLM Team (primary)
2. Layered Product Team
Member

Flip this around. Layered product teams are better positioned to answer questions about issues with their specific operator configurations and interplay with their CSV's stated defaults. I would expect:

  1. Customer escalates to layered product team
  2. If LP team can't diagnose, LP team escalates to OLM team.

@tmshort
Contributor

tmshort commented Mar 11, 2026

/test markdownlint

tmshort and others added 2 commits March 11, 2026 14:48
This should allow this to pass markdownlint

Signed-off-by: Todd Short <todd.short@me.com>
- everettraven
creation-date: 2025-12-30
last-updated: 2025-12-30
tracking-link:
Member

Add `status: provisional` (or `implementable`?). Not sure which is appropriate for "ready for tech preview implementation".

1. Layered Product Teams (primary)
2. OLM Team

## Support Procedures
Member

The template lists this as a section without the "optional" label, and there are some examples of the kinds of things that are useful to document here. Not necessary for TechPreview, but something we'll need to populate before promoting to GA.

Comment on lines +323 to +329
## Upgrade / Downgrade Strategy

### Upgrade

### Downgrade

## Version Skew Strategy
Member

We need to populate these sections. Descriptions are present in the template.

Not a merge blocker, but something that needs to be done prior to GA promotion.

Comment on lines +321 to +322
### Removing a deprecated feature

Member

Non-blocking: Remove this section title, since this EP is not about removal of a deprecated feature.

Contributor

It's required per the markdownlint CI.

Comment thread enhancements/olm/olmv1-deployment-configuration-api.md
api-approvers:
- everettraven
creation-date: 2025-12-30
last-updated: 2025-12-30
Member

Update this to today?


Example inline configuration structure:

```yaml
Member

Nit: yaml code block, but JSON content. Align these?

@joelanford
Member

/approve

@openshift-ci
Contributor

openshift-ci Bot commented Mar 13, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: joelanford

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci Bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 13, 2026
@tmshort
Contributor

tmshort commented Mar 13, 2026

@JoelSpeed How does this look now?

@tmshort
Contributor

tmshort commented Mar 13, 2026

or @everettraven ?

@JoelSpeed
Contributor

/lgtm

@openshift-ci openshift-ci Bot added the lgtm Indicates that a PR is ready to be merged. label Mar 17, 2026
@openshift-ci
Contributor

openshift-ci Bot commented Mar 17, 2026

@oceanc80: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-merge-bot openshift-merge-bot Bot merged commit 2b38513 into openshift:master Mar 17, 2026
2 checks passed
@oceanc80 oceanc80 deleted the olmv1-subscription-config branch March 17, 2026 10:57