Observability for Legacy Spring Apps - No Code Changes Required
- Overview
- Why Observability Matters (Even for Legacy Apps)
- Zero Code Changes with OpenTelemetry Java Agent
- Creating the OpenTelemetry Agent Container
- Injecting the Agent via Init Container
- Enable Prometheus Metrics
- Enable the Observability Stack On-Demand
- Deploy and Visualize
- Traffic Generation with Swagger UI
- Explore in Grafana
- Wrapping Up
Overview
This guide walks you through enabling full observability for legacy Spring Boot applications using OpenTelemetry on Kubernetes — all without modifying your existing application code or Docker images. You’ll learn how to configure an init container to inject the OpenTelemetry Java Agent, set up the agent’s config via a ConfigMap, and expose metrics, traces, and logs for seamless integration with your observability stack.
Why Observability Matters (Even for Legacy Apps)
Observability helps you gain visibility into application behavior, performance bottlenecks, and errors — even during the development phase. By using OpenTelemetry, you can collect metrics, traces, and logs that reveal insights into system health and user behavior, making debugging and performance optimization much easier.
Zero Code Changes with OpenTelemetry Java Agent
OpenTelemetry provides a Java agent that can be attached to any JVM-based app — including Spring Boot — without requiring any code changes. In this setup, we use an init container to inject the agent and optional extensions into a shared volume, which your legacy Spring container mounts at runtime.
The main application remains untouched:
- No code changes
- No base image rebuilds
- No redeployment logic rewrites
Creating the OpenTelemetry Agent Container
Here’s a lightweight Dockerfile that fetches and prepares the OpenTelemetry Java Agent. It uses a minimal busybox base to keep the image small and secure.
# --- Stage 1: fetch agent ---
FROM alpine:3.20 AS fetcher
ARG OTEL_JAVA_AGENT_VERSION=2.20.1
RUN apk add --no-cache curl \
 && curl -L -o /tmp/opentelemetry-javaagent.jar \
    https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/download/v${OTEL_JAVA_AGENT_VERSION}/opentelemetry-javaagent.jar

# --- Stage 2: minimal runtime with sh/cp (busybox) ---
FROM busybox:1.36
# run as nonroot (Dockerfile comments must be on their own line)
USER 65532:65532
WORKDIR /opt/otel
COPY --from=fetcher /tmp/opentelemetry-javaagent.jar /opt/otel/opentelemetry-javaagent.jar

# Place your own extensions in the /opt/otel/ directory.
#COPY nsa2-otel-extension-1.0-all.jar /opt/otel/

# Optional default entrypoint; the initContainer overrides it anyway.
ENTRYPOINT ["sh","-c","cp -f /opt/otel/opentelemetry-javaagent.jar \"$OTEL_AGENT_OUT_DIR/$OTEL_AGENT_FILENAME\""]
Prebuilt image: credemol/otel-java-agent:2.20.1
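If you build your own image, a quick local smoke test might look like this (the image tag and output directory are just examples; on Linux you may need to relax permissions on the host directory, since the container runs as a non-root user):

# Build the agent image, pinning the agent version at build time
docker build --build-arg OTEL_JAVA_AGENT_VERSION=2.20.1 -t otel-java-agent:2.20.1 .

# Run the default entrypoint against a bind-mounted directory and confirm
# the agent jar is copied out, just as the initContainer will do later
mkdir -p out && chmod 777 out
docker run --rm \
  -e OTEL_AGENT_OUT_DIR=/out \
  -e OTEL_AGENT_FILENAME=opentelemetry-javaagent.jar \
  -v "$PWD/out:/out" \
  otel-java-agent:2.20.1
ls -l out/opentelemetry-javaagent.jar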
Injecting the Agent via Init Container
In your Deployment, add an initContainer to copy the agent and optional custom JARs to a shared volume.
initContainers:
  - name: otel-java-agent-init
    image: credemol/otel-java-agent:2.20.1
    imagePullPolicy: IfNotPresent
    env:
      - name: OTEL_AGENT_OUT_DIR
        value: /otel/agent
      - name: OTEL_AGENT_FILENAME
        value: opentelemetry-javaagent.jar
    command: ["/bin/sh","-c"]
    args:
      # copy + set sane perms; chown is helpful if your app runs as a specific uid
      - |
        cp -f /opt/otel/*.jar /otel/agent/
        chmod 0644 /otel/agent/*.jar
    volumeMounts:
      - name: otel-agent
        mountPath: /otel/agent
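Once a pod is scheduled, you can confirm that the init container ran and that the agent jar landed on the shared volume. The namespace, label, and container name below are the ones used in this guide; adjust them to your deployment:

kubectl -n qc get pods -l app=postgresql-example
kubectl -n qc exec deploy/postgresql-example -c app -- ls -l /otel/agent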
Custom Agent Config via ConfigMap
For more details about the OpenTelemetry Java agent properties, refer to the official documentation.

Commonly used instrumentation switches:

- JDBC - otel.instrumentation.jdbc.enabled (default: true)
- Logback - otel.instrumentation.logback-appender.enabled (default: true)
- Logback MDC - otel.instrumentation.logback-mdc.enabled (default: true)
- Spring Web - otel.instrumentation.spring-web.enabled (default: true)
- Spring Web MVC - otel.instrumentation.spring-webmvc.enabled (default: true)
- Spring WebFlux - otel.instrumentation.spring-webflux.enabled (default: true)
- Kafka - otel.instrumentation.kafka.enabled (default: true)
- MongoDB - otel.instrumentation.mongo.enabled (default: true)
- Micrometer - otel.instrumentation.micrometer.enabled (default: false)
- R2DBC (reactive JDBC) - otel.instrumentation.r2dbc.enabled (default: true)
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-java-agent-config
data:
  agent.properties: |
    otel.instrumentation.jdbc.enabled=true
    otel.instrumentation.spring-webmvc.enabled=true
  # Optional: customize logback configuration
  logback.xml: |
    <configuration>
      <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
          <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
      </appender>
      <root level="INFO">
        <appender-ref ref="STDOUT" />
      </root>
    </configuration>
This ConfigMap is mounted at /otel/config/ in the application container, and the agent is pointed at agent.properties there via OTEL_JAVAAGENT_CONFIGURATION_FILE, as shown in the Deployment below.
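Note that the agent reads its configuration once at JVM startup, so changes to this ConfigMap only take effect after the pods restart. A minimal sketch, assuming the manifest file name and the deployment name used in this guide:

kubectl -n qc apply -f otel-java-agent-config.yaml
kubectl -n qc rollout restart deployment/postgresql-example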
Enabling OpenTelemetry in the Application Container
# omitted for brevity
# volumes
volumes:
  - name: otel-agent
    emptyDir: {} # or { medium: Memory } for tmpfs
  - name: otel-agent-config # optional
    configMap:
      name: otel-java-agent-config
      items:
        - key: agent.properties
          path: agent.properties
        - key: logback.xml
          path: logback.xml
initContainers:
  # omitted for brevity. See the previous section for details.
containers:
  - name: app
    image: credemol/postgresql-example:0.1.0
    ports:
      - containerPort: 8080
        name: http
        protocol: TCP
      # Metrics endpoint for the Target Allocator to scrape
      - containerPort: 9464
        name: metrics
        protocol: TCP
    volumeMounts:
      - name: otel-agent
        mountPath: /otel/agent
        readOnly: true
      - name: otel-agent-config # optional
        mountPath: /otel/config
        readOnly: true
    env:
      # 1) Inject the javaagent
      - name: JAVA_TOOL_OPTIONS
        value: "-javaagent:/otel/agent/opentelemetry-javaagent.jar"
      - name: OTEL_JAVAAGENT_EXTENSIONS
        value: "/otel/agent/nsa2-otel-extension-1.0-all.jar"
      # 2) Core OTel config
      - name: OTEL_SERVICE_NAME
        value: "postgresql-example"
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://otel-collector.o11y.svc.cluster.local:4317"
      - name: OTEL_EXPORTER_OTLP_PROTOCOL
        value: "grpc"
      - name: OTEL_METRICS_EXPORTER
        value: "prometheus"
      # (Optional) add metadata & sampling
      - name: OTEL_RESOURCE_ATTRIBUTES
        value: "service.namespace=default,service.version=1.0.0,env=dev"
      - name: OTEL_TRACES_SAMPLER
        value: "parentbased_traceidratio"
      - name: OTEL_TRACES_SAMPLER_ARG
        value: "1.0" # 100% sampling for troubleshooting
      # (Optional) point the agent to a properties file
      - name: OTEL_JAVAAGENT_CONFIGURATION_FILE
        value: "/otel/config/agent.properties"
    envFrom:
      - configMapRef:
          name: postgresql-example-configmap
          optional: true
      - secretRef:
          name: postgresql-example-secret
          optional: true
    resources:
      requests: { cpu: "100m", memory: "256Mi" }
      limits: { cpu: "1000m", memory: "1024Mi" }
Key environment variables:

- JAVA_TOOL_OPTIONS: Attaches the Java agent to the JVM. Set it to "-javaagent:/path/to/opentelemetry-javaagent.jar".
- OTEL_JAVAAGENT_EXTENSIONS: Path to any additional extensions for the OpenTelemetry Java agent.
- OTEL_JAVAAGENT_CONFIGURATION_FILE: Path to the agent.properties file, if you want to customize the agent properties.
- OTEL_EXPORTER_OTLP_ENDPOINT: Address of the OpenTelemetry Collector in your Kubernetes cluster.
- OTEL_SERVICE_NAME: Name used to identify the service in the telemetry data.
- OTEL_METRICS_EXPORTER: Metrics exporter to use. Set it to "prometheus" so the Target Allocator can scrape metrics.
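A quick way to confirm the agent actually attached is to look for its startup banner in the application logs (the deployment name and namespace are the examples used in this guide):

kubectl -n qc logs deploy/postgresql-example | grep -i "otel.javaagent"
# Expect a startup line reporting the opentelemetry-javaagent version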
Enable Prometheus Metrics
To expose metrics on port 9464 (for scraping by Prometheus or the Target Allocator), set:

- OTEL_METRICS_EXPORTER=prometheus

Make sure to expose this port in both your container and Service definitions.
apiVersion: v1
kind: Service
metadata:
  name: postgresql-example
  labels:
    # unique name of the application, required for the ServiceMonitor
    app.kubernetes.io/name: postgresql-example
    provider: service-foundry
spec:
  type: ClusterIP # ClusterIP, NodePort, or LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: http
    # Metrics endpoint for the Target Allocator to scrape
    - port: 9464
      targetPort: 9464
      protocol: TCP
      name: metrics
  selector:
    app: postgresql-example
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: postgresql-example-servicemonitor
  namespace: qc
  labels:
    # The Target Allocator defined in the OtelCollector looks for this label to discover ServiceMonitors
    metrics-unit: o11y
spec:
  selector:
    matchLabels:
      # must match the label in the Service definition
      app.kubernetes.io/name: postgresql-example
  endpoints:
    - port: metrics
      interval: 30s
      scheme: http
      path: /metrics
The Target Allocator will automatically discover the ServiceMonitor and start scraping metrics from the application.
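As a sanity check before relying on the Target Allocator, you can port-forward the Service and read the Prometheus endpoint directly (the namespace is the one used in this guide):

kubectl -n qc port-forward svc/postgresql-example 9464:9464 &
curl -s http://localhost:9464/metrics | head -n 20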
Metrics from Spring Boot applications include:

- JVM CPU
- JVM memory
- JVM GC
- JVM threads
- HTTP server requests
- Datasource (JDBC connection pool)
- Logback appender (if Logback is used)
Enable the Observability Stack On-Demand
Using the Service Foundry Console, you can easily enable or disable the full observability stack — including Prometheus, Grafana, Tempo, Loki, and the OpenTelemetry Collector — as needed, saving cluster resources when not in use.
After a while, you should see the observability stack components running in the o11y namespace.
When you no longer need the observability stack, click the 'Disable Observability' button to tear it down and free up the resources.
Deploy and Visualize
Once everything is deployed:
- Metrics are collected automatically
- Traces are available via Tempo
- Logs stream to Loki
- Everything is visualized in pre-configured Grafana dashboards
You can deploy the application through the 'Enterprise Applications' feature, just as you would any regular application.
When the application is deployed, you should see the application running in the qc namespace (or the namespace you specified).
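You can verify the rollout from the command line as well:

kubectl -n qc get pods
kubectl -n qc get svc postgresql-example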
Traffic Generation with Swagger UI
Use the built-in Swagger UI of your Spring Boot app to trigger some traffic (POST /users, GET /users) and generate traces and metrics.
Go to http://postgresql-example.your-root-domain/swagger-ui/index.html to access the Swagger UI.
Use the POST /users endpoint to create a new user. You can use the following JSON payload to create a user:
{
  "name": "John Doe",
  "email": "john@nsa2.com"
}
After creating a user, you can use the GET /users endpoint to retrieve the list of users.
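The same traffic can be generated from the command line with curl (the hostname is a placeholder, and the payload follows the sample above):

# Create a user (POST /users)
curl -s -X POST http://postgresql-example.your-root-domain/users \
  -H "Content-Type: application/json" \
  -d '{"name": "John Doe", "email": "john@nsa2.com"}'

# Retrieve the list of users (GET /users)
curl -s http://postgresql-example.your-root-domain/users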
Explore in Grafana
Go to http://grafana.your-root-domain to access the Grafana dashboard, or navigate to the Single Sign-On (SSO) → Resource Servers page and click the Grafana link.
The default username is 'devops' and the password is 'password'.
Grafana Data Sources
The Grafana instance is pre-configured with the following data sources:
- Tempo (for traces)
- Loki (for logs)
- Prometheus (for metrics)
Click the 'Explore' menu to explore the telemetry data.
Trace Data
Click 'Explore' on the Tempo data source to explore the trace data.
You should see traces for the requests sent to the application. Example trace:

- Service Name: postgresql-example
- Span Name: GET /users
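You can also query for these traces directly in Tempo's TraceQL tab; a minimal filter on the service resource attribute, using the service name configured earlier, looks like:

{ resource.service.name = "postgresql-example" }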
Log Data
Click 'Explore' on the Loki data source to explore the log data. You can filter by the following label:

- service_name: postgresql-example
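In Loki's query editor, a minimal LogQL query using that label might look like this (assuming the collector exports service.name as the service_name label, as the label above suggests; the "users" line filter is optional):

{service_name="postgresql-example"} |= "users"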
Metrics Data
Metrics data collectors:

- Kubelet cAdvisor Collector: collects node and pod metrics from the kubelet cAdvisor endpoint.
- OpenTelemetry Target Allocator: collects application metrics from the ServiceMonitor endpoints.
Go to Drilldown → Metrics to explore the metrics data.
There are more than 130 metrics available to explore. For example, click the 'jvm_memory_used_bytes' metric to see the application's JVM memory usage.
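The same metric can be queried directly against the Prometheus data source. A sketch, assuming the jvm_memory_type label from current JVM semantic conventions (label names may differ by agent version):

# JVM heap vs non-heap usage for the example app
sum by (jvm_memory_type) (jvm_memory_used_bytes)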
Wrapping Up
With this approach, you get production-grade observability without modifying your legacy application code. Whether for debugging, performance monitoring, or system auditing, OpenTelemetry + Kubernetes + Service Foundry offers a clean, scalable, and developer-friendly solution.