Service Foundry: User Guide
- What Is Service Foundry?
- GitOps-Driven Platform Management
- Core Components of Service Foundry
- Why Use Service Foundry?
- Rapid Bootstrap with a Single Command
- Service Foundry Console Features
- Dashboard Overview
- GitOps Section
- Managed Applications
- Enterprise Applications
- Open Source Software
- GitOps Applications
- Storage & Volumes Section
- Storage Classes
- Persistent Volumes
- Persistent Volume Claims
- Kubernetes Stack Orchestration
- Framework Core
- Shared Components
- Observability Stack
- Single Sign-On (SSO) Stack
- Spring Backend Stack (Work In Progress)
- Big Data Stack (Work In Progress)
Young Gyu Kim <credemol@gmail.com>
Helping you manage your Kubernetes apps with ease - no deep tech knowledge needed!
End-to-End GitOps Orchestration for Kubernetes Workloads
A Modular GitOps Framework for Kubernetes-Native Platform Engineering
What Is Service Foundry?
Service Foundry is a Kubernetes-native platform engineering framework designed to help teams provision, deploy, secure, and observe workloads using GitOps workflows.
It integrates popular open-source tools into a cohesive automation system, enabling you to:
- Use standardized GitOps patterns to manage infrastructure and workloads
- Collaborate through version-controlled configuration
- Deploy complete application stacks (e.g., observability, identity, data pipelines)
- Secure resources using Single Sign-On (SSO) and encrypted secrets
It bridges the gap between developers and operations by providing a UI-driven GitOps console, while preserving the auditable, declarative nature of Git-based infrastructure as code.
GitOps-Driven Platform Management
Service Foundry adopts a GitOps-first approach, meaning:
- All Kubernetes resources (infra modules, apps, secrets, etc.) are declared in YAML and stored in Git
- Changes are reviewed via Pull Requests and automatically applied by Argo CD
- The entire state of the cluster is version-controlled, auditable, and reproducible
This enables:
- Predictable deployments — Git is the single source of truth
- Safe rollbacks — Easily revert to previous versions
- Transparent ops — Operational changes are traceable via Git history
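Because every change is a Git commit, a rollback is an ordinary `git revert` rather than a cluster-side procedure. The sketch below simulates the flow in a throwaway local repository (file names and values are illustrative; in practice you would revert and push in the GitOps repository that Argo CD watches):

```shell
set -e
# Throwaway repo standing in for the GitOps repository Argo CD watches
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name demo

# v1 of a manifest: 2 replicas
printf 'replicas: 2\n' > values.yaml
git add values.yaml && git commit -qm 'set replicas to 2'

# A bad change: 0 replicas
printf 'replicas: 0\n' > values.yaml
git commit -qam 'set replicas to 0'

# Roll back: revert the last commit. Pushing the revert commit would
# make Argo CD reconcile the cluster back to 2 replicas.
git revert --no-edit HEAD
cat values.yaml   # replicas: 2
```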
Core Components of Service Foundry
| Component | Purpose |
|---|---|
| Argo CD | Reconciles Kubernetes state with Git |
| SealedSecrets | Encrypts secrets for safe Git storage |
| Keycloak | Acts as the OAuth2 identity provider (SSO) |
| Traefik | Kubernetes-native ingress and routing |
| OAuth2 Proxy | Authentication middleware for UIs |
| Service Foundry Console | Web UI to generate, review, and manage GitOps manifests |
| Service Foundry Builder | CLI/backend tool to generate manifests and bootstrap environments |
| Source Generator | Generates Kubernetes manifests from templates |
Why Use Service Foundry?
- Declarative + Automated: All infrastructure and app configurations are versioned and applied automatically
- Built-in Security: Seamless OAuth2-based login, secret encryption, and role-based access
- Production-Grade Observability: OpenTelemetry, Prometheus, Grafana, and Loki are pre-integrated
- Unified Management Interface: Web console for all configuration and lifecycle tasks
- Modular Architecture: Deploy only the modules you need — SSO, Observability, Big Data, etc.
- Developer & Platform Collaboration: Developers install services; operators manage infrastructure — all within GitOps
Rapid Bootstrap with a Single Command
Service Foundry provides a CLI tool and Helm charts to bootstrap your platform in one step:
- Installs Argo CD, Keycloak, Traefik, SealedSecrets, and Service Foundry Console
- Sets up initial Git repository structure
- Deploys infrastructure modules and app scaffolds using GitOps
- Secures access with SSO
$ helm install service-foundry-builder \
service-foundry/service-foundry-builder \
--set command=bootstrap \
-n service-foundry --create-namespace \
--version $SF_BUILDER_CHART_VERSION
After setup, platform users can log in to the console and provision applications from a form-based UI — with YAML manifests automatically committed to Git.
Service Foundry Console
The Service Foundry Console provides a self-service web interface for platform users to provision and manage workloads. By default, it is accessible at:
Note: Ensure that your DNS is configured to point to the Traefik load balancer.
All components, including the console itself, are deployed as Argo CD Applications. Their manifests are stored and versioned in the GitOps repository, ensuring full traceability and reproducibility. Users can operate either via the console or directly using Git workflows.
Git Repository
The Git repository initialized during bootstrap stores all Kubernetes manifest files managed by Service Foundry — including infrastructure modules, Argo CD Applications, Helm values files, SealedSecrets and Kubernetes Resources.
This centralized structure supports both manual Git operations (clone, commit, push, etc.) and visual editing via the Service Foundry Console.
SealedSecrets for Secure Configuration
Sensitive data (e.g., passwords, tokens, API keys) are encrypted using Bitnami SealedSecrets before being committed to Git. During bootstrap, Service Foundry automatically converts Kubernetes Secret manifests into SealedSecret manifests.
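Conceptually, a plain Secret is replaced by a SealedSecret whose `encryptedData` can only be decrypted by the controller running in the cluster. A sketch of the transformation (names and values are illustrative; real ciphertext is much longer):

```yaml
# Plain Secret — never committed to Git
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: prj1
stringData:
  password: s3cr3t
---
# SealedSecret — safe to commit (ciphertext shortened for illustration)
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: prj1
spec:
  encryptedData:
    password: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq...
```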
Argo CD Integration
All components of the Service Foundry platform, Open Source Software applications, and user-defined applications are managed as Argo CD Applications. This ensures that the desired state defined in Git is continuously reconciled with the actual state in the Kubernetes cluster.
The GitOps Repository is configured in Argo CD Settings as a repository, allowing Argo CD to sync the applications defined in the repository to the Kubernetes cluster.
Repository Configuration
Service Foundry automatically registers your GitOps repository in Argo CD during bootstrap, allowing continuous sync of application states.
Argo CD Project Scope
A dedicated Argo CD Project named service-foundry is created by default to scope all applications deployed via Service Foundry.
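An Argo CD Application generated by Service Foundry follows this general shape, scoped to the service-foundry project (repository URL, paths, and names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prj1-myapp
  namespace: argocd
spec:
  project: service-foundry          # dedicated project created at bootstrap
  source:
    repoURL: https://git.example.com/platform/gitops-repo.git
    targetRevision: main
    path: apps/prj1-myapp
  destination:
    server: https://kubernetes.default.svc
    namespace: prj1
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```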
Service Foundry Console Features
The Service Foundry Console provides a visual control plane for managing, observing, and operating applications and platform components deployed across your Kubernetes clusters. It offers a simplified interface for interacting with GitOps-managed resources and platform services.
GitOps-Centric Application Management
- Managed Applications: View the full list of Argo CD–managed applications, monitor their sync and health status, and perform lifecycle operations such as uninstall or update.
- Enterprise Applications: Deploy and manage proprietary or internal software packages, typically sourced from private Helm registries.
- Open Source Software: Browse and install curated open-source packages (e.g., Redis, Postgres, Kafka) with Helm chart–based GitOps workflows.
- Raw GitOps Applications: Directly edit or remove raw Argo CD Applications stored in the GitOps repository.
Storage & Volumes
- Storage Classes: Inspect and manage StorageClasses available in the cluster.
- Persistent Volumes: View and manage Persistent Volumes (PVs) provisioned in the cluster.
- Persistent Volume Claims: Create and manage Persistent Volume Claims (PVCs) bound to PVs for application storage needs.
Kubernetes Stack Orchestration
- Framework Core: Inspect and manage foundational components installed during platform bootstrap (e.g., Argo CD, Traefik, SealedSecrets).
- Shared Components: Control cluster-wide operators and services that support multiple stacks — such as Prometheus Operator, OpenTelemetry Operator, Spark Operator.
- Observability Stack: Manage observability tools like Grafana, Prometheus, Loki, Tempo, and their provisioning.
- Single Sign-On (SSO): Configure identity providers and manage user access via Keycloak and OAuth2 Proxy.
- Big Data Stack: Deploy scalable analytics infrastructure — including Apache Spark, Airflow, Neo4j, and OpenSearch.
- Spring Backend Stack: Easily deploy and operate Spring Boot–based microservices using standard templates and Helm charts.
Console Settings
- Global Configuration: Manage environment-wide settings related to GitOps, authentication, UI preferences, and more.
Dashboard Overview
The Dashboard provides a high-level overview of the Kubernetes cluster and the Service Foundry platform status. It displays key metrics, recent activities, and quick links to various sections of the console.
Features
- Enable Observability: Quickly enable or disable the observability stack (OpenTelemetry, Prometheus, Grafana, Loki, Tempo) across the cluster.
- Quick Links: Shortcuts to key operational features:
  - Install Enterprise Application
  - Install Open Source Software
  - Enable EFS / S3 CSI Drivers
  - Create Persistent Volume
  - Create Persistent Volume Claim
  - Install Shared Components
  - Configure SSO Server
  - SSO Resource Servers
- Application Status
  - Healthy Applications: Number of applications in a healthy state.
  - Synced Applications: Number of applications that are in sync with the Git repository.
- Application Categories Overview
  - Enterprise Applications: Total number of enterprise applications deployed.
  - Open Source Applications: Total number of open-source applications installed.
  - Core Applications: Number of core platform components running.
  - Shared Applications: Number of shared services and operators deployed.
  - Observability Applications: Number of observability tools active.
  - Single Sign-On Applications: Number of SSO-related components.
GitOps Section
The GitOps section provides a comprehensive interface for managing all applications and resources defined in the GitOps repository. It is divided into four main subsections:
- Managed Applications
- Enterprise Applications
- Open Source Software
- GitOps Applications
Managed Applications
The Managed Applications section provides a centralized dashboard for all Argo CD applications deployed in the Kubernetes cluster. From here, users can monitor, modify, and manage applications through a GitOps-driven workflow.
Platform users can:
- View the sync and health status of all deployed Argo CD applications
- Uninstall selected applications directly from the UI
- Filter between Open Source and Enterprise applications
- Navigate to application-specific views for further inspection
Application Filters and Actions
| Button | Action |
|---|---|
| | Toggle to display only Enterprise Applications. |
| | Toggle to display only Open Source Software applications. |
| | Click to uninstall the selected applications from the cluster and remove them from the Git repository. |

Each row also provides a per-application action:

| Button | Action |
|---|---|
| | Uninstall the application in its row. |
Clicking on any application name opens the detailed application view with the following tabs:
Application Files
Users can inspect and modify manifest files directly in the browser-based editor. The Console supports a Git-aware editing workflow:
| Button | Action |
|---|---|
| | Hide the file tree. |
| | Show the file tree. |
| | Refresh the file tree to reflect the current state from Git. |
| | Enable editing mode for the selected manifest file. |
| | Discard edits and revert to the last committed version. |
| | Save changes made to the manifest files. This stages the changes for commit. |
| | Discard all unsaved changes across files. |
| | Commit and push changes to the Git repository (triggers an Argo CD sync). |
| | Add a Git commit message before pushing. |
| | Decrease the font size in the file editor. |
| | Increase the font size in the file editor. |
Application Details Tab
The Details tab shows metadata for the selected Argo CD application, including:
- Application name and namespace
- Argo CD project
- Sync status (Synced / OutOfSync)
- Health status (Healthy / Degraded / Missing)
- Source repository path and revision
Kubernetes Resources Tab
The Resources tab lists all Kubernetes resources associated with the application. It allows users to:
- View resource type, name, namespace, and status
- Drill down into specific workloads (e.g., Deployments, Services, Secrets)
- Monitor resource state and lifecycle
Enterprise Applications
The Enterprise Applications section enables teams to define, deploy, and manage internal business applications sourced from private container registries. Service Foundry supports both Helm-based and Kustomize-based GitOps workflows, allowing users to scaffold applications using reusable templates and manage deployments through Argo CD.
To create a new application, click “Add New Application”, then select either a Helm or Kustomize deployment model depending on your application packaging format.
Create Enterprise Application
Users can scaffold enterprise applications by filling in a guided form. Service Foundry generates the required manifest files and Argo CD Application resources, commits them to the GitOps repository, and deploys them automatically.
Common Fields
These fields are shared between Helm and Kustomize application types and are used to populate Kubernetes manifests and Argo CD configuration files.
| Field name | Description | Example |
|---|---|---|
| Project Code | Logical identifier used as a prefix for Kubernetes resources. | prj1 |
| Application Name | Application-specific identifier used in resource naming. | myapp |
| Namespace | Target Kubernetes namespace for deployment. | prj1 |
| Version | Chart version (for Helm) or application version tag. | 0.1.0 |
| Image Registry | Hostname of the Docker image registry. | ghcr.io |
| Image Repository | Repository path to the container image. | o11y-otel-spring-example |
| Image Tag | Docker image tag to be deployed. | 0.1.0 |
| Replica Count | Number of application pods to deploy. | 2 |
| Container Port | Port the application container listens on. | 8080 |
| Service Type | Kubernetes Service type. Options: ClusterIP, NodePort, LoadBalancer. | ClusterIP |
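For a Helm-based application, the common fields map roughly onto a generated values file along these lines (a sketch; the exact keys depend on the chart template Service Foundry generates):

```yaml
replicaCount: 2

image:
  registry: ghcr.io
  repository: o11y-otel-spring-example
  tag: "0.1.0"

service:
  type: ClusterIP
  port: 8080
```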
Kustomize-Based Application
For applications defined using raw Kubernetes manifests, Service Foundry provides a visual resource composer for generating Kustomize-based applications. Users can dynamically add/remove/rename Kubernetes resource templates (e.g., Deployment, Service, ConfigMap, Secret, Ingress, etc.).
| Button | Action |
|---|---|
| Add Resource | Insert a new resource (e.g., Deployment, Service) into the manifest set. |
| Remove | Delete the selected resource from the Kustomize structure. |
| Rename | Rename an existing resource before creation. |
Once resources are configured, users can review and edit the generated YAML files in the built-in editor.
Deploy via GitOps
Click “Create Application” to generate the necessary manifest and Argo CD application files. These files are automatically committed to the GitOps repository and Argo CD will detect and deploy the new application to the Kubernetes cluster.
The newly created application will also appear in the Managed Applications section of the Console.
You can click on the application name to:
- Inspect deployment manifests
- View sync and health status
- Access related Kubernetes resources
All generated manifest files remain fully editable from the Console. Once modified, changes can be committed and pushed directly to the Git repository, triggering Argo CD to re-sync the application and apply updates to the cluster.
Open Source Software
Service Foundry provides a curated catalog of popular open-source applications that can be deployed seamlessly using Helm charts from public registries. These applications can be provisioned with just a few clicks and are fully integrated into the GitOps workflow.
To install an application, select it from the catalog and click “Install”. Service Foundry guides you through a streamlined setup process using Helm-based configuration templates.
Example: Installing PostgreSQL
When installing PostgreSQL, users are prompted to configure essential parameters such as:
- Database username and password
- Initial database name
- Helm chart version
After customization, clicking “Install Application” will:
- Generate Helm-based Kubernetes manifests
- Commit them to the GitOps repository
- Create a corresponding Argo CD application to manage the deployment
During installation, the Job Status is displayed in the header area to track progress in real time.
GitOps Repository Integration
Once installation is complete, all manifests and configuration files are stored in the GitOps repository under a versioned path.
The deployed application will also appear in the Managed Applications section of the Console, alongside other enterprise or custom workloads.
Application Details & Management
Click on any deployed open-source application to view detailed information such as:
- Kubernetes manifests
- Helm release values
- Resource status and health
- Argo CD sync history
Applications can be deployed into different namespaces, and each instance is managed as an isolated Argo CD application with its own configuration and lifecycle.
Application Actions
Buttons to manage the application:
| Button | Action |
|---|---|
| Helm App | View Helm-specific values and manifests associated with the deployment. |
| Kustomize App | View Kustomize manifests if the application was scaffolded using Kustomize. |
| UNINSTALL | Remove the application from the Kubernetes cluster and delete the associated Argo CD application. Note: manifest files remain in the Git repository for audit purposes. |
GitOps Applications
The GitOps Applications section enables full lifecycle management of raw Kubernetes manifests stored in the Git repository. Users can create, edit, deploy, and remove applications using a Git-centric workflow—without needing to leave the console.
Service Foundry allows you to reuse existing Kubernetes manifests in your GitOps repository to scaffold new Argo CD applications.
Application List — Actions
Each application entry supports quick access to common actions:
| Button | Action |
|---|---|
| | Copy the full path to the Argo CD application file for reference or external tooling. |
| | Edit the Argo CD application manifest directly in the built-in file editor. |
| | Deploy the application to the cluster by creating the corresponding Argo CD application. Available only if the application is not yet installed. |
| | Uninstall the application from the cluster and remove all associated Kubernetes resources. Available only if already installed. |
| | Permanently delete the application manifest from the Git repository. This action cannot be undone. |
Batch Operations — Header Actions
The header also provides multi-application controls for bulk operations:
| Button | Description |
|---|---|
| | Delete selected GitOps application manifests from the repository. Irreversible. |
| | Create Argo CD applications for selected manifests and deploy them to the cluster. |
| | Uninstall selected applications and remove their Kubernetes resources from the cluster. |
Click an application name to view detailed information and edit files in place.
View GitOps Application
Users can drill down into each GitOps application to inspect and manage its manifests. This includes editing files, creating Argo CD apps, and managing installation status.
The following buttons are available in the header for application-specific actions:

| Button | Action |
|---|---|
| | Open the file tree to browse and edit application manifests. |
| | Create a new Argo CD application to deploy the manifest to the Kubernetes cluster. |
| | Remove the Argo CD application and associated resources from the cluster. |
| | Permanently delete the application manifest from the Git repository. This action is irreversible. |
Any edits to the manifest files can be committed directly from the console, triggering Argo CD to sync changes and apply them to the cluster automatically.
Storage & Volumes Section
The Storage & Volumes section provides a unified interface for managing persistent storage resources in the Kubernetes cluster. Users can inspect, create, and manage StorageClasses, PersistentVolumes (PVs), and PersistentVolumeClaims (PVCs) through a GitOps-driven workflow.
It is divided into three main subsections:
- Storage Classes
- Persistent Volumes
- Persistent Volume Claims
Storage Classes
The Storage Classes section lists all StorageClasses available in the Kubernetes cluster. Users can view details such as provisioner type, parameters, and reclaim policy.
Enable EFS CSI Driver
To use AWS Elastic File System (EFS) for dynamic volume provisioning, the EFS CSI driver must be enabled in the cluster. Service Foundry provides a guided workflow to set this up.
To enable the EFS CSI driver, click "Enable EFS CSI Driver". The following steps are performed automatically:
- Create an IAM Service Account for the EFS CSI Driver
- Install the EFS CSI Driver as a Helm chart
- Create an Elastic File System (EFS) in AWS named '${cluster-name}-efs'
- Create a StorageClass named 'efs-sc' for EFS using the File System ID
Enable S3 CSI Driver
To use AWS S3-compatible object storage for dynamic volume provisioning, the S3 CSI driver must be enabled in the cluster. Service Foundry provides a guided workflow to set this up.
To enable the S3 CSI driver, click "Enable S3 CSI Driver". The following steps are performed automatically:
- Create an IAM Service Account for the S3 CSI Driver
- Install the S3 CSI Driver as a Helm chart
- Create an S3 Bucket in AWS named '${cluster-name}-s3-bucket'
- Create a StorageClass named 's3-sc' for S3 using the Bucket name
List Storage Classes after Enabling Drivers
New StorageClasses named 'efs-sc' and 's3-sc' will appear in the list after enabling the respective CSI drivers.
Add Storage Class
The following StorageClass types are supported:
- EBS (Elastic Block Store) - for block storage, suitable for single-node access
- EFS (Elastic File System) - for shared file storage, suitable for multi-node access
- S3 (Simple Storage Service) - for object storage, suitable for unstructured data
Add EBS Storage Class
Fill out the form with the following fields:
- Storage Class Type: Select 'EBS' for Elastic Block Store.
- Storage Class Name: A unique name for the StorageClass.
- Provisioner: 'ebs.csi.aws.com' for EBS.
- Volume Binding Mode: 'WaitForFirstConsumer' is recommended for dynamic provisioning.
- Reclaim Policy: Choose between 'Retain' or 'Delete' based on your data retention needs.
- Type: Select the EBS volume type (e.g., gp2, gp3, io1).
- FsType: File system type (e.g., ext4, xfs).
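These fields correspond to a StorageClass manifest along these lines (the name and parameter values are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3-sc              # illustrative name
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
parameters:
  type: gp3
  csi.storage.k8s.io/fstype: ext4
```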
Add EFS Storage Class
Fill out the form with the following fields:
- Storage Class Type: Select 'EFS' for Elastic File System.
- Storage Class Name: A unique name for the StorageClass.
- Provisioner: 'efs.csi.aws.com' for EFS.
- Volume Binding Mode: 'Immediate' or 'WaitForFirstConsumer' based on your use case.
- Reclaim Policy: Choose between 'Retain' or 'Delete' based on your data retention needs.
- Provisioning Mode: 'efs-ap'
- File System ID: The ID of the EFS file system to use.
- Directory Perms: Permissions for the root directory (e.g., 755).
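A corresponding EFS StorageClass manifest might look like this (the file system ID is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
volumeBindingMode: Immediate
reclaimPolicy: Retain
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0   # illustrative EFS ID
  directoryPerms: "755"
```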
Add S3 Storage Class
Fill out the form with the following fields:
- Storage Class Type: Select 'S3' for Simple Storage Service.
- Storage Class Name: A unique name for the StorageClass.
- Provisioner: 's3.csi.aws.com' for S3.
- Volume Binding Mode: 'Immediate' or 'WaitForFirstConsumer' based on your use case.
- Reclaim Policy: Choose between 'Retain' or 'Delete' based on your data retention needs.
- S3 Bucket Name: The name of the S3 bucket to use for storage.
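A sketch of the resulting S3 StorageClass, following the field names above (the bucket name is illustrative, and the exact parameter keys may differ between S3 CSI driver versions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: s3-sc
provisioner: s3.csi.aws.com
volumeBindingMode: Immediate
reclaimPolicy: Retain
parameters:
  bucketName: my-cluster-s3-bucket   # illustrative bucket name
```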
Persistent Volumes
The Persistent Volumes section lists all PersistentVolumes (PVs) provisioned in the Kubernetes cluster. Users can view details such as capacity, access modes, reclaim policy, and status.
View Persistent Volume
Click the PV name to view detailed information about the volume, including its specifications and current status.
To delete a Persistent Volume, click the "DELETE PV" button in the header. This removes the PV from the cluster and deletes its manifest from the Git repository. Any bound Persistent Volume Claims (PVCs) must be deleted before the PV can be removed.
Add Persistent Volume
Click "Add Persistent Volume" to create a new PV.
Add EBS Persistent Volume
For EBS volumes, dynamic provisioning is recommended. To use static provisioning with this page, ensure that you have already created an EBS volume in AWS and have its volume ID ready.
Select a StorageClass of type 'EBS' using the toggle button in the top-right corner.
Fill out the form with the following fields:
- Volume Name: A unique name for the Persistent Volume.
- Storage Class: Select an existing StorageClass to associate with the PV.
- Storage Capacity: Specify the size of the volume (e.g., 10Gi).
- Access Modes: Choose one or more access modes. ReadWriteOnce (RWO) is common for EBS.
- EBS Volume ID: The ID of the existing EBS volume to use.
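The generated static PV manifest would take roughly this shape (names, StorageClass, and the volume ID are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-ebs-pv                 # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-gp3-sc    # an existing EBS-type StorageClass
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0   # existing EBS volume ID
```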
Add EFS Persistent Volume
Select a StorageClass of type 'EFS' using the toggle button in the top-right corner.
Fill out the form with the following fields:
- Volume Name: A unique name for the Persistent Volume.
- Storage Class: Select an existing StorageClass to associate with the PV.
- Storage Capacity: Specify the size of the volume (e.g., 10Gi).
- Access Modes: Choose one or more access modes. ReadWriteMany (RWX) is common for EFS.
- EFS File System ID: The ID of the EFS file system to use. Use the same ID as specified in the StorageClass.
- Access Point ID: (Optional) The ID of the EFS Access Point to use. You can create an access point in the 'Add Access Point' tab, or select one from the 'Access Point List' tab.
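A sketch of the resulting EFS PV (IDs and names are illustrative; the EFS CSI driver accepts the file system ID, optionally suffixed with an access point ID, as the volume handle):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-efs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    # fileSystemId, optionally followed by "::<access-point-id>"
    volumeHandle: fs-0123456789abcdef0::fsap-0123456789abcdef0
```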
Add S3 Persistent Volume
Select a StorageClass of type 'S3' using the toggle button in the top-right corner.
Fill out the form with the following fields:
- Volume Name: A unique name for the Persistent Volume.
- Storage Class: Select an existing StorageClass to associate with the PV.
- Storage Capacity: Specify the size of the volume (e.g., 10Gi).
- Access Modes: Choose one or more access modes. ReadWriteMany (RWX) is common for S3.
- S3 Bucket Name: The name of the S3 bucket to use for storage. Use the same bucket name as specified in the StorageClass.
- Bucket Prefix: (Optional) A prefix within the S3 bucket to use for this volume.
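A sketch of the resulting S3-backed PV (names and the bucket are illustrative; the exact attribute and mount-option keys depend on the installed S3 CSI driver version):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-s3-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: s3-sc
  csi:
    driver: s3.csi.aws.com
    volumeHandle: my-s3-pv-handle        # illustrative handle
    volumeAttributes:
      bucketName: my-cluster-s3-bucket   # same bucket as the StorageClass
```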
Persistent Volume Claims
The Persistent Volume Claims section lists all PersistentVolumeClaims (PVCs) created in the Kubernetes cluster. Users can view details such as requested storage, access modes, status, and associated PersistentVolumes.
View Persistent Volume Claim
Click the PVC name to view detailed information about the claim, including its specifications and current status.
To delete a Persistent Volume Claim, click the "DELETE PVC" button in the header. This will remove the PVC from the cluster and delete its manifest from the Git repository. Ensure that any applications using the PVC are updated or deleted first to avoid binding issues.
Add Persistent Volume Claim
Click "Add Persistent Volume Claim" to create a new PVC.
Add EBS Persistent Volume Claim
For EBS volumes, dynamic provisioning is recommended. To use static provisioning with this page, ensure that you have already created a Persistent Volume (PV) in the cluster that matches your claim requirements.
For EBS volumes of the gp2 type, set Dynamic Provisioning to 'Yes' to create a PVC that automatically binds to a dynamically provisioned EBS volume.
Fill out the form with the following fields:
- PVC Name: A unique name for the Persistent Volume Claim.
- Namespace: The Kubernetes namespace where the PVC will be created.
- Storage Class: Select an existing StorageClass to associate with the PVC.
- Storage Capacity: Specify the size of the volume (e.g., 10Gi).
- Access Modes: Choose one or more access modes. ReadWriteOnce (RWO) is common for EBS.
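For dynamic provisioning, the generated PVC needs only a StorageClass and a size; the driver creates and binds the EBS volume on demand. A sketch (names are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
  namespace: prj1
spec:
  storageClassName: ebs-gp3-sc   # an existing EBS-type StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```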
Add EFS Persistent Volume Claim
Fill out the form with the following fields:
- PVC Name: A unique name for the Persistent Volume Claim.
- Namespace: The Kubernetes namespace where the PVC will be created.
- Volume Name: Select an existing Persistent Volume to bind to.
- Storage Class: Select an existing StorageClass to associate with the PVC.
- Storage Capacity: Specify the size of the volume (e.g., 10Gi).
- Access Modes: Choose one or more access modes. ReadWriteMany (RWX) is common for EFS.
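Because this PVC binds to an existing PV, the generated manifest carries an explicit `volumeName`. A sketch (names are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
  namespace: prj1
spec:
  volumeName: my-efs-pv      # bind to an existing EFS-backed PV
  storageClassName: efs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```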
Add S3 Persistent Volume Claim
Fill out the form with the following fields:
- PVC Name: A unique name for the Persistent Volume Claim.
- Namespace: The Kubernetes namespace where the PVC will be created.
- Volume Name: Select an existing Persistent Volume to bind to.
- Storage Class: Select an existing StorageClass to associate with the PVC.
- Storage Capacity: Specify the size of the volume (e.g., 10Gi).
- Access Modes: Choose one or more access modes. ReadWriteMany (RWX) is common for S3.
Kubernetes Stack Orchestration
Service Foundry supports modular, stack-based orchestration for managing complex Kubernetes workloads. Each stack bundles a set of components into a cohesive deployment unit, enabling teams to layer functionalities in a controlled and scalable manner.
Users can selectively deploy the stacks they need, in any order, while respecting inter-stack dependencies. For example, the Observability Stack can be layered on top of the Framework Core and Shared Components.
Available Stacks:
- Framework Core
- Shared Components
- Observability Stack
- Single Sign-On (SSO) Stack
- Spring Backend Stack (Work In Progress)
- Big Data Stack (Work In Progress)
Framework Core
The Framework Core is the foundation of Service Foundry. It includes critical services required for the platform to operate reliably from initial setup onward.
Each component in this stack is preconfigured for seamless integration. Users may review configuration details and adjust settings as needed—but it is not recommended to uninstall any components from this stack, as they are essential for overall platform stability and functionality.
Shared Components
The Shared Components stack includes reusable services and Kubernetes Operators that provide cross-cutting functionality across multiple application domains and stacks.
Operators such as prometheus-operator, opentelemetry-operator, and spark-operator are part of this stack. These are not mandatory, and users can choose to install only the components relevant to their use case.
For example, if you’re deploying the Observability Stack, it’s recommended to install the prometheus-operator and opentelemetry-operator beforehand.
Click Orchestrate to generate Kubernetes manifests, commit them to the GitOps repository, and deploy them via Argo CD.
Applicable Domain
| Domain | Required Components |
|---|---|
| Observability | cert-manager, prometheus-operator, opentelemetry-operator |
| Big Data | cert-manager, spark-operator |
Upon deployment, manifests are organized under the infra-apps directory in the GitOps repository:
The stack will also appear in the Managed Applications section of the console:
Observability Stack
The Observability Stack provides full support for logs, metrics, and traces. It is designed to adapt to different environments—Development, Staging, and Production—by offering tailored profiles for each.
Component dependencies are considered during orchestration. For example, the Otel Collector may require services like Loki, Tempo, and Prometheus to be deployed together.
Click Orchestrate to start the deployment process.
Available Profiles
| Profile | Description |
|---|---|
| Dev Profile | A lightweight configuration including Prometheus, Grafana, Loki, and Otel Collector—ideal for development and local testing. |
| Staging Profile | Adds components like OpenSearch and S3-compatible object storage to support staging environments and persistent data retention. |
| Production Profile | A comprehensive observability suite including Jaeger, Cassandra, and high-availability configurations suitable for production workloads. |
Each profile triggers manifest generation and GitOps deployment, with resources organized under the observability-apps directory:
Once deployed, the application appears in the Managed Applications section:
Single Sign-On (SSO) Stack
The SSO Stack enables authentication and secure access management across the platform. It integrates Keycloak, OAuth2 Proxy, and Traefik to provide seamless SSO for internal and third-party applications.
Traefik IngressRoute and OAuth2 Proxy are preconfigured to secure access to UIs such as Grafana and the Service Foundry Console.
Deploying the SSO Stack
Click Orchestrate to deploy the SSO stack. This creates manifest files, commits them to Git, and provisions the stack via Argo CD.
SSO Configuration
OAuth2 Proxy Ingress Form
The IngressRoute configuration form allows you to define routing rules for SSO-protected applications. Key fields include:
| Field name | Description | Example |
|---|---|---|
| Name | Unique name for the Ingress resource | o11y-sso-ingress |
| Namespace | Kubernetes namespace | o11y |
| Service Name | Kubernetes service to route to | grafana |
| Port Name | Target port name on the service | service |
| Subdomain | Subdomain for routing | grafana |
A subdomain like grafana with root domain nsa2.com will create a route: http://grafana.nsa2.com.
You can verify the deployed route:
$ kubectl -n o11y get ingressroutes o11y-sso-ingress-route -o yaml | yq '.spec'
Sample IngressRoute manifest
entryPoints:
  - web
routes:
  - kind: Rule
    match: Host(`grafana.nsa2.com`)
    middlewares:
      - name: cors-headers
      - name: forward-auth
    services:
      - name: grafana
        port: service
Once deployed, the SSO application is visible in both the GitOps repository and the Service Foundry Console:
Resource Servers
SSO-protected applications, such as Grafana, are listed under Resource Servers. These services inherit the same credentials and session state for unified access control.
When accessing Grafana, users are redirected to the Keycloak login page for authentication. After successful login, they are granted access to Grafana without needing to re-enter credentials.
Spring Backend Stack (Work In Progress)
The upcoming Spring Backend Stack is designed to support enterprise-grade Java applications using Spring Boot. It will include runtime dependencies such as PostgreSQL, Redis, RabbitMQ, and configuration tools tailored for microservice development and deployment.
Big Data Stack (Work In Progress)
The Big Data Stack will enable scalable data processing using popular open-source technologies. It includes support for Apache Spark, Airflow, OpenSearch, Neo4j, MinIO, and dbt. This stack is intended for teams building data pipelines, graph analytics, or large-scale ETL workflows.