# Zitadel Resources Operator
A Kubernetes operator for managing Zitadel resources across single and multi-cluster environments.
## Description
The Zitadel Resources Operator enables declarative management of Zitadel resources through Kubernetes custom resources. It supports both traditional same-cluster deployments and cross-cluster scenarios where resources can reference Zitadel entities across different Kubernetes clusters.
### Key Features
- **Cross-Cluster Support**: Reference Zitadel resources across different Kubernetes clusters using direct Zitadel IDs
- **Declarative Management**: Define Zitadel resources as Kubernetes custom resources
- **Automatic Reconciliation**: Ensures desired state is maintained in Zitadel
- **Flexible Reference Types**: Support for both Kubernetes object references and direct Zitadel ID references
- **Resource Hierarchy**: Connection → Organization → Project → Applications
- **Validation**: Built-in validation rules to ensure correct resource configuration
## Architecture
### Resource Hierarchy
The operator follows this resource hierarchy:
```
Connection (Zitadel instance connection)
└── Organization (Zitadel organization)
    └── Project (Zitadel project)
        └── Applications (OIDCApp, APIApp, MachineUser, etc.)
```
### Supported Resources
- **Connection**: Zitadel instance connection configuration
- **Organization**: Zitadel organization management
- **Project**: Zitadel project with roles and grants
- **OIDCApp**: OIDC application configuration
- **APIApp**: API application configuration
- **MachineUser**: Machine user management
- **Action**: Custom actions
- **Flow**: Flow configurations
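Once the operator is installed, you can confirm these kinds are registered by querying the API group (`zitadel.github.com`, per the `apiVersion` used throughout this README):
```bash
# List every resource kind the operator registers under its API group
kubectl api-resources --api-group=zitadel.github.com
```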
## Cross-Cluster Support
The operator supports two reference modes for flexible deployment scenarios:
### Same-Cluster References (Traditional)
Use Kubernetes object references when resources exist in the same cluster:
```yaml
apiVersion: zitadel.github.com/v1alpha1
kind: Project
metadata:
  name: my-project
  namespace: default
spec:
  organizationRef:
    name: my-organization
    namespace: default
  projectName: my-project
```
### Cross-Cluster References (New)
Use direct Zitadel ID references when resources span multiple clusters:
```yaml
apiVersion: zitadel.github.com/v1alpha1
kind: Project
metadata:
  name: my-project
  namespace: workload-dev
spec:
  organizationRef:
    id: "367990024731427343"
    connectionRef:
      name: mgmt-prod-connection
      namespace: zitadel-system
  projectName: my-project
```
### Reference Validation
The operator enforces these validation rules:
- **Mutual Exclusivity**: Must provide either `name` (K8s reference) or `id` (Zitadel ID), but not both
- **Required Field**: At least one reference method must be provided
- **Connection Requirement**: When using `id`, `connectionRef.name` is required
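As an illustration, the following intentionally invalid manifest breaks the mutual-exclusivity rule and should be rejected (a hypothetical sketch, not a working example):
```yaml
# INVALID: organizationRef sets both a Kubernetes object name and a Zitadel ID
apiVersion: zitadel.github.com/v1alpha1
kind: Project
metadata:
  name: bad-project
  namespace: default
spec:
  organizationRef:
    name: my-organization        # Kubernetes reference...
    id: "367990024731427343"     # ...and a direct Zitadel ID: not allowed together
  projectName: bad-project
```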
## Usage Examples
### Complete Cross-Cluster Setup
#### 1. Create Connection in Management Cluster
```yaml
apiVersion: zitadel.github.com/v1alpha1
kind: Connection
metadata:
  name: mgmt-prod-connection
  namespace: zitadel-system
spec:
  host: id.corredorconect.com
  secure: true
  authentication:
    pat:
      tokenSecretKey:
        name: mgmt-prod-secret
        key: pat
```
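The referenced PAT secret must exist before the Connection can authenticate. Mirroring the pattern from the Configuration section below, it might be created like this (the token value is a placeholder):
```bash
# Create the PAT secret that tokenSecretKey points at
kubectl create secret generic mgmt-prod-secret \
  --from-literal=pat=<your-zitadel-pat-token> \
  -n zitadel-system
```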
#### 2. Create Organization in Management Cluster
```yaml
apiVersion: zitadel.github.com/v1alpha1
kind: Organization
metadata:
  name: corredorconect
  namespace: zitadel-system
spec:
  connectionRef:
    name: mgmt-prod-connection
    namespace: zitadel-system
  organizationName: corredorconect
```
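The cross-cluster steps below reference this organization by its Zitadel ID. Assuming the operator publishes the created organization's ID in the resource status (the `.status.id` path is an assumption; inspect the full object if it differs), you could read it from the management cluster:
```bash
# Read the Zitadel organization ID from the resource status (field path assumed)
kubectl get organization corredorconect -n zitadel-system -o jsonpath='{.status.id}'
```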
#### 3. Create Project in Workload Cluster (Cross-Cluster)
```yaml
apiVersion: zitadel.github.com/v1alpha1
kind: Project
metadata:
  name: seguros-dev
  namespace: zitadel-resources-operator
spec:
  organizationRef:
    id: "367990024731427343"
    connectionRef:
      name: zitadel-connection
      namespace: zitadel-resources-operator
  projectName: segurOS-dev
  projectRoleAssertion: true
  projectRoleCheck: true
  hasProjectCheck: true
  roles:
    - key: admin
      displayName: Admin
      group: system
```
#### 4. Create OIDC App in Workload Cluster (Cross-Cluster)
```yaml
apiVersion: zitadel.github.com/v1alpha1
kind: OIDCApp
metadata:
  name: my-oidc-app
  namespace: workload-dev
spec:
  projectRef:
    id: "987654321098765432"
    connectionRef:
      name: zitadel-connection
      namespace: zitadel-resources-operator
  oidcAppName: my-app
  redirectUris:
    - https://example.com/callback
  responseTypes:
    - OIDC_RESPONSE_TYPE_CODE
  grantTypes:
    - OIDC_GRANT_TYPE_AUTHORIZATION_CODE
  appType: OIDC_APP_TYPE_WEB
  authMethodType: OIDC_AUTH_METHOD_TYPE_BASIC
```
## Reference Types
### OrganizationRef
References an organization either by Kubernetes object name or direct Zitadel ID.
**Fields:**
- `name` (string): Kubernetes object name (same-cluster)
- `id` (string): Direct Zitadel organization ID (cross-cluster)
- `connectionRef` (ConnectionRef): Connection for cross-cluster references
- Standard `ObjectReference` fields: `namespace`, `kind`, `apiVersion`, etc.
**Validation:**
- Must provide either `name` or `id`, but not both
- When using `id`, `connectionRef.name` is required
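Both forms, condensed from the examples earlier in this README:
```yaml
# Same-cluster form: Kubernetes object reference
organizationRef:
  name: my-organization
  namespace: default
---
# Cross-cluster form: direct Zitadel ID plus an explicit connection
organizationRef:
  id: "367990024731427343"
  connectionRef:
    name: mgmt-prod-connection
    namespace: zitadel-system
```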
### ProjectRef
References a project either by Kubernetes object name or direct Zitadel ID.
**Fields:**
- `name` (string): Kubernetes object name (same-cluster)
- `id` (string): Direct Zitadel project ID (cross-cluster)
- `connectionRef` (ConnectionRef): Connection for cross-cluster references
- Standard `ObjectReference` fields: `namespace`, `kind`, `apiVersion`, etc.
**Validation:**
- Must provide either `name` or `id`, but not both
- When using `id`, `connectionRef.name` is required
## Getting Started
You'll need a Kubernetes cluster to run against. You can use [KIND](https://sigs.k8s.io/kind) to get a local cluster for testing, or run against a remote cluster.
**Note:** Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).
### Prerequisites
- Kubernetes cluster (v1.25+)
- kubectl configured to communicate with your cluster
- Zitadel instance with appropriate credentials
### Installation
1. **Clone the repository:**
```bash
git clone <repository-url>
cd zitadel-resources-operator
```
2. **Install CRDs:**
```bash
make install
```
3. **Deploy the operator:**
```bash
make deploy IMG=<your-image-registry>/zitadel-resources-operator:tag
```
4. **Verify installation:**
```bash
kubectl get pods -n zitadel-resources-operator-system
```
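You can also wait for the controller Deployment to finish rolling out (the Deployment name matches the one used in the Troubleshooting section):
```bash
kubectl rollout status \
  deployment/zitadel-resources-operator-controller-manager \
  -n zitadel-resources-operator-system
```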
### Configuration
Create a Connection resource to authenticate with your Zitadel instance:
```yaml
apiVersion: zitadel.github.com/v1alpha1
kind: Connection
metadata:
  name: zitadel-connection
  namespace: zitadel-resources-operator
spec:
  host: your-zitadel-instance.com
  secure: true
  authentication:
    pat:
      tokenSecretKey:
        name: zitadel-credentials
        key: pat
```
Create the secret with your credentials:
```bash
kubectl create secret generic zitadel-credentials \
--from-literal=pat=your-zitadel-pat-token \
-n zitadel-resources-operator
```
### Development
#### Running Locally
1. **Install CRDs:**
```bash
make install
```
2. **Run the operator locally:**
```bash
make run
```
**NOTE:** You can also run this in one step by running: `make install run`
#### Building and Testing
1. **Build the operator:**
```bash
make build
```
2. **Run tests:**
```bash
make test
```
3. **Generate manifests:**
```bash
make manifests
```
#### Modifying API Definitions
When modifying API types, regenerate the CRDs:
```bash
make manifests
make install
```
More information can be found in the [Kubebuilder Documentation](https://book.kubebuilder.io/introduction.html).
## Troubleshooting
### Common Issues
**Issue:** `Organization.zitadel.github.com "" not found`
- **Solution:** Ensure you're using either `name` or `id` in references, not both. Check validation rules.
**Issue:** Cross-cluster references not working
- **Solution:** Verify that `connectionRef` is properly specified when using `id` references.
**Issue:** Resources not reconciling
- **Solution:** Check operator logs: `kubectl logs -n zitadel-resources-operator-system deployment/zitadel-resources-operator-controller-manager`
### Debug Mode
Enable debug logging by setting the log level:
```bash
kubectl set env deployment/zitadel-resources-operator-controller-manager \
-n zitadel-resources-operator-system \
--containers=manager \
LOG_LEVEL=debug
```
## Contributing
Contributions are welcome! Please follow these guidelines:
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Submit a pull request
### Development Workflow
1. **Make changes to the code**
2. **Run tests:** `make test`
3. **Generate manifests:** `make manifests`
4. **Build locally:** `make build`
5. **Test in cluster:** `make install && make run`
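The same loop as shell commands, all taken from the targets above:
```bash
make test         # run the test suite
make manifests    # regenerate CRDs and RBAC after API changes
make build        # compile the manager binary
make install run  # install CRDs and run the controller against the current context
```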
## License
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.