Author: Geertjan Wielenga
Original post on Foojay.
“From Zero to Full Observability with Dash0” showed you how to add observability with auto-instrumentation: the Dash0 Kubernetes Operator collected infrastructure metrics (pod CPU, memory, node resources) and auto-instrumented your application to capture traces, logs, and metrics, all without code changes. Auto-instrumentation works for any JVM application, but it’s generic. This guide shows you how to get even richer, framework-aware telemetry for Spring Boot specifically by using Spring Observability, and how Dash0 collects, correlates, and visualizes both infrastructure and application telemetry together.
Spring Observability produces richer application telemetry than auto-instrumentation. It understands Spring’s abstractions like RestTemplate, WebClient, JdbcTemplate, and @Scheduled methods, so it captures semantic details that generic auto-instrumentation misses: which HTTP client you’re using, SQL query text from database calls, cache hit/miss rates, scheduled task execution. It propagates trace context into logs automatically and exports everything via OTLP. This is framework-aware instrumentation, not generic bytecode manipulation.
Dash0 collects telemetry from both layers. The Dash0 Kubernetes Operator injects OTLP collectors into your pods to receive Spring Observability’s telemetry. It also collects infrastructure metrics directly from Kubernetes: pod CPU and memory usage, node resources, container restarts. It correlates application metrics with infrastructure metrics automatically so you can see how your code performance relates to cluster resource usage. All of this flows into Dash0’s platform where you can query and visualize it without writing collector YAML for every service.
This guide shows you how to set up both layers. You add Spring Observability dependencies to produce application-level telemetry. The Dash0 Kubernetes Operator collects it via OTLP, collects infrastructure metrics from Kubernetes, and correlates everything automatically. The result is complete observability from business logic down to container resources.
Here’s what each component provides:
| Component | What It Provides |
|---|---|
| Spring Boot | Application framework: REST controllers, dependency injection, configuration management |
| Spring Observability | Application telemetry production: Instruments HTTP requests, database queries, cache operations, scheduled tasks. Produces traces, metrics, and logs with framework-aware context. Exports via OTLP. |
| Dash0 Kubernetes Operator | Telemetry collection: Injects OTLP collectors into pods to receive Spring Observability’s telemetry. Collects infrastructure metrics directly from Kubernetes (pod CPU/memory, node resources, container events). Forwards all telemetry to Dash0. |
| Dash0 | Observability platform: Receives telemetry from the operator, correlates application metrics with infrastructure metrics, stores everything, and provides querying and visualization in one UI. |
The setup uses the same deployment workflow as “From Zero to Full Observability with Dash0”: GitHub Actions builds and pushes the Docker image, and a GitHub Codespace runs the Kubernetes cluster using kind (Kubernetes in Docker). The difference is that the application now includes Spring Observability dependencies and configuration.
Prerequisites
You need the following before starting:
- GitHub account with access to GitHub Actions and Codespaces
- Docker Hub account for storing container images
- Dash0 account with an auth token and ingress endpoint (available in Settings)
- Basic Kubernetes knowledge including kubectl commands and YAML manifests
- Java 25 or later (handled by GitHub Actions, not required locally)
- Maven 3.9+ (also handled by GitHub Actions)
This guide uses Spring Boot 4, Spring Observability 2.0, and Micrometer Tracing 2.0. These versions support OTLP export and work with the Dash0 Kubernetes Operator without additional configuration.
Set up the Spring Boot application
The application from “From Zero to Full Observability with Dash0” had no observability dependencies. The Dash0 Kubernetes Operator’s auto-instrumentation added telemetry at runtime by injecting a Java agent. That approach works for any JVM application, but for Spring Boot specifically, you get richer telemetry by adding Spring Observability dependencies. Spring Observability produces framework-aware traces, metrics, and logs that Dash0 will collect via OTLP.
1. Create the pom.xml. This includes the web starter, Micrometer Tracing, the OpenTelemetry OTLP bridge, and the Spring Boot Actuator for exposing metrics.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>4.0.0</version>
    </parent>
    <groupId>com.example</groupId>
    <artifactId>order-service</artifactId>
    <version>1.0.0</version>
    <properties>
        <java.version>25</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-tracing-bridge-otel</artifactId>
        </dependency>
        <dependency>
            <groupId>io.opentelemetry</groupId>
            <artifactId>opentelemetry-exporter-otlp</artifactId>
        </dependency>
    </dependencies>
    <build>
        <finalName>order-service</finalName>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
The key dependencies here are micrometer-tracing-bridge-otel, which bridges Spring’s tracing abstraction to OpenTelemetry, and opentelemetry-exporter-otlp, which exports telemetry to an OTLP endpoint. These dependencies enable framework-aware instrumentation. Spring knows when you call RestTemplate, JdbcTemplate, or any other Spring component, and it produces traces and metrics with semantic context that auto-instrumentation can’t provide. This telemetry will be exported via OTLP to Dash0’s collector.
2. Create the main application class. This is the standard Spring Boot entry point with no additional configuration required.
Create src/main/java/com/example/demo/DemoApplication.java:
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
3. Create the controller. This controller defines two endpoints. Spring Observability instruments them automatically and produces spans with Spring-specific metadata: the controller class, the mapping path, and the HTTP method, not just generic method entry and exit points. Dash0 will collect these spans via OTLP and make them queryable.
Create src/main/java/com/example/demo/OrderController.java:
package com.example.demo;

import org.springframework.web.bind.annotation.*;

import java.util.List;

@RestController
public class OrderController {

    @GetMapping("/orders")
    public List<String> getOrders() {
        return List.of("order-1", "order-2", "order-3");
    }

    @PostMapping("/orders")
    public String createOrder(@RequestBody String order) {
        return "Created: " + order;
    }
}
4. Configure OTLP export. This is where Spring Observability’s telemetry production meets Dash0’s collection. Spring Observability exports traces, metrics, and logs via OTLP. The Dash0 Kubernetes Operator provides an OTLP collector as a sidecar in each instrumented pod. You configure Spring to send telemetry to localhost:4318, which is where Dash0’s collector will listen and forward everything to the Dash0 platform.
Create src/main/resources/application.yml:
management:
  endpoints:
    web:
      exposure:
        include: health,metrics,prometheus
  tracing:
    sampling:
      probability: 1.0
  otlp:
    tracing:
      endpoint: http://localhost:4318/v1/traces
This configuration exposes actuator endpoints for health checks and metrics, sets trace sampling to 100% (every request is traced), and configures the OTLP endpoint. The endpoint points to localhost:4318, which is the default HTTP OTLP port. When you deploy to Kubernetes, the Dash0 Kubernetes Operator injects an OTLP collector sidecar that listens on this port. The collector receives Spring Observability’s telemetry and forwards it to Dash0’s platform where it gets stored, correlated with infrastructure metrics, and made queryable.
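If you later need to point different environments at different collector endpoints, Spring Boot's relaxed binding lets you override this property with an environment variable instead of rebuilding the image. A minimal sketch (the env block below is an illustration, not part of the manifest used later in this guide; the image name is a placeholder):

```yaml
# Hypothetical per-environment override: Spring Boot maps the environment
# variable MANAGEMENT_OTLP_TRACING_ENDPOINT onto the property
# management.otlp.tracing.endpoint via relaxed binding.
containers:
  - name: order-service
    image: your-dockerhub-username/order-service:latest
    env:
      - name: MANAGEMENT_OTLP_TRACING_ENDPOINT
        value: http://localhost:4318/v1/traces
```

Values set this way take precedence over application.yml, so the file can keep a sensible default while the Deployment decides the final endpoint.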
Further reading:
- Spring Boot Observability documentation
- Micrometer Tracing documentation
- OpenTelemetry Java documentation
Build and containerize the application
The application is now ready to build and package. The workflow is the same as “From Zero to Full Observability with Dash0”: GitHub Actions builds the Maven project and pushes the Docker image to Docker Hub. The difference is that this image contains Spring Observability dependencies.
1. Build the application locally to verify. This step is optional but useful for catching configuration errors before pushing to GitHub.
mvn package -DskipTests
This creates target/order-service.jar, which is the executable JAR that gets copied into the Docker image.
2. Create the Dockerfile. This uses the same Azul Zulu base image.
FROM azul/zulu-openjdk:25-jre
COPY target/order-service.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
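If you want to smoke-test the image before wiring up CI, you can build and run it locally (a sketch, assuming a local Docker daemon; the tag order-service:local is just an illustration):

```shell
# Build the image from the Dockerfile above, then run it,
# mapping the container's port 8080 to the host.
docker build -t order-service:local .
docker run --rm -p 8080:8080 order-service:local
# In another terminal: curl http://localhost:8080/orders
```

Expect OTLP export warnings in the local logs, since nothing listens on localhost:4318 outside the cluster.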
3. Create the GitHub Actions workflow. This workflow builds the Maven project, logs in to Docker Hub, and pushes the image. It triggers on every push to main.
Create .github/workflows/build.yml:
name: Build and Push Docker Image

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up JDK 25
        uses: actions/setup-java@v4
        with:
          java-version: '25'
          distribution: 'zulu'
      - name: Build with Maven
        run: mvn package -DskipTests
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ secrets.DOCKER_USERNAME }}/order-service:latest
4. Add Docker Hub secrets to GitHub. Go to your GitHub repository Settings → Secrets and variables → Actions, then add:
- DOCKER_USERNAME: your Docker Hub username
- DOCKER_PASSWORD: your Docker Hub password or access token
Push the code to GitHub and the workflow will build and push the image automatically.
Deploy to Kubernetes
With the image on Docker Hub, you can deploy the application to Kubernetes. Use a GitHub Codespace as the environment and kind to create a cluster inside the Codespace.
1. Open a Codespace. Go to your GitHub repository, click Code → Codespaces → Create codespace on main. Wait for the Codespace to initialize.
2. Install kind and create a cluster. These commands download kind, make it executable, move it to the system path, and create a cluster.
curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
kind create cluster
Wait for the cluster to be ready. You should see a message confirming that the control plane is up.
3. Create the Kubernetes manifest. This defines a Deployment with one replica and a Service that exposes it on port 80.
Create deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: your-dockerhub-username/order-service:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
    - port: 80
      targetPort: 8080
4. Deploy the application. Apply the manifest and update the image reference to point to your Docker Hub image.
kubectl apply -f deployment.yaml
kubectl set image deployment/order-service order-service=your-dockerhub-username/order-service:latest
kubectl get pods
Wait until the pod shows Running. At this point the application is running and Spring Observability is producing telemetry, but there’s no OTLP collector yet to receive it. The telemetry is being generated but discarded. Once you install the Dash0 Kubernetes Operator in the next section, it will inject an OTLP collector to capture this telemetry.
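Until the collector exists, you can still confirm the application itself is healthy by checking its logs (a sketch; the label selector matches the manifest above):

```shell
# Tail recent application logs. Normal Spring Boot startup output is expected;
# OTLP export errors at this stage are also expected, since nothing is
# listening on localhost:4318 yet.
kubectl logs deployment/order-service --tail=50
```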
Install the Dash0 Kubernetes Operator
Spring Observability is producing application telemetry via OTLP, but nothing is collecting it yet. Without the Dash0 Kubernetes Operator, you’d need to deploy and configure an OpenTelemetry Collector manually for each service or as shared infrastructure. The Dash0 Kubernetes Operator automates this: it injects an OTLP collector into each instrumented pod to receive Spring Observability’s traces, metrics, and logs. It also collects infrastructure metrics directly from Kubernetes: pod CPU and memory, node resources, container events. All of this telemetry flows to Dash0’s platform where it gets correlated and stored.
This is the same operator used in “From Zero to Full Observability with Dash0”, but configured differently. In that article, the operator injected auto-instrumentation to capture application traces and also collected infrastructure metrics from Kubernetes. In this article, the operator doesn’t need to inject auto-instrumentation because Spring Observability already produces application telemetry. Instead, the operator injects an OTLP collector to receive that telemetry, and it still collects infrastructure metrics from Kubernetes the same way.
1. Add the Helm repository and install the operator. Replace <your-region> with the region from your Dash0 ingress endpoint, and <your-auth-token> with an auth token from your Dash0 organization. Both are available in Dash0 under Settings.
helm repo add dash0-operator https://dash0hq.github.io/dash0-operator
helm repo update dash0-operator
helm install dash0-operator dash0-operator/dash0-operator \
  --namespace dash0-system \
  --create-namespace \
  --set operator.dash0Export.endpoint=ingress.<your-region>.dash0.com:4317 \
  --set operator.dash0Export.token=<your-auth-token>
2. Verify the operator is running.
kubectl get pods -n dash0-system
Wait until the operator pod shows Running.
3. Create the operator configuration resource. This tells the operator where to forward telemetry.
cat <<EOF | kubectl apply -f -
apiVersion: operator.dash0.com/v1alpha1
kind: Dash0OperatorConfiguration
metadata:
  name: dash0-operator-configuration
spec:
  export:
    dash0:
      endpoint: ingress.<your-region>.dash0.com:4317
      authorization:
        token: <your-auth-token>
EOF
4. Enable monitoring for the default namespace. This tells the operator to instrument workloads in the default namespace.
cat <<EOF | kubectl apply -f -
apiVersion: operator.dash0.com/v1alpha1
kind: Dash0Monitoring
metadata:
  name: dash0-monitoring
  namespace: default
spec:
  export:
    dash0:
      endpoint: ingress.<your-region>.dash0.com:4317
      authorization:
        token: <your-auth-token>
EOF
5. Restart the deployment. The operator injects the OTLP collector at pod startup, so the pod needs to be restarted for the injection to take effect.
kubectl rollout restart deployment/order-service
kubectl get pods
Wait until the new pod shows Running. At this point the pod has an OTLP collector sidecar listening on localhost:4318, which matches the endpoint configured in application.yml. Spring Observability exports application telemetry (traces, metrics, logs) to the sidecar. The Dash0 Kubernetes Operator also collects infrastructure metrics from Kubernetes. The sidecar forwards all telemetry to Dash0’s platform where it gets correlated: you can see application request rates alongside pod CPU usage, database query spans alongside memory consumption.
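One quick way to confirm the injection took effect is to list the containers in the new pod (a sketch; the sidecar's container name is operator-defined, so treat the exact output as illustrative):

```shell
# Print all container names in the order-service pod. Seeing more than one
# name indicates the operator injected its collector alongside the app.
kubectl get pods -l app=order-service \
  -o jsonpath='{.items[0].spec.containers[*].name}'
```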
Further reading:
- Dash0 Kubernetes Operator documentation
- Dash0 Kubernetes Operator Helm chart
- Dash0 Kubernetes Operator GitHub repository
Verify complete observability
Dash0 is now collecting and correlating telemetry from both layers. Spring Observability produces application-level telemetry (request rate, latency, database queries). The Dash0 Kubernetes Operator collects this via OTLP and also collects infrastructure-level metrics from Kubernetes (pod CPU, memory, container restarts). All of this flows into Dash0’s platform where you can query it, visualize it, and see how application performance correlates with infrastructure resource usage.
1. Start a port-forward to the service.
kubectl port-forward svc/order-service 8080:80
2. Generate traffic. Open a second terminal in the Codespace and run:
while true; do curl http://localhost:8080/orders; sleep 1; done
This generates one request per second to the /orders endpoint.
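To exercise the POST endpoint as well, you can run a second loop alongside the first (a sketch under the same assumption that the port-forward from step 1 is active; the order payload is arbitrary):

```shell
# Send one POST per second so traces and metrics appear for both endpoints.
while true; do
  curl -X POST -H 'Content-Type: text/plain' \
       -d 'order-42' http://localhost:8080/orders
  sleep 1
done
```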
3. Open the Dash0 UI. Go to app.dash0.com and check the following:
- Monitoring → Services shows order-service with live request rate, error rate, and latency. This is application-level telemetry: Spring Observability produces these metrics via Micrometer as requests flow through your Spring Boot controllers, and Dash0 collects them via OTLP.
- Telemetry → Tracing shows individual traces for each /orders request. Each trace includes spans for the HTTP request. If you add database or cache operations to the controller, Spring Observability will automatically produce spans for JdbcTemplate queries with SQL details and cache operations with hit/miss rates. Dash0 collects and stores these traces so you can query them.
- Telemetry → Logging shows structured logs from the application. Spring Observability propagates trace IDs into logs, Dash0 collects them via the OTLP collector, and the Dash0 platform lets you correlate logs with traces directly in the UI.
- Monitoring → Resources shows pod and node resource usage: CPU consumption, memory usage, container restarts, network I/O. This is infrastructure-level telemetry: the Dash0 Kubernetes Operator collects these metrics directly from Kubernetes.
Dash0 correlates both layers automatically. You can see a slow HTTP request in Monitoring → Services (application layer), then check Monitoring → Resources (infrastructure layer) to see if the pod was CPU-throttled at that time. You can see a trace span for a database query (application layer) and correlate it with pod memory usage (infrastructure layer) in the same platform. This is what complete observability gives you: Dash0 collects telemetry from both Spring Observability and Kubernetes, correlates it, and makes it queryable in one place.
Final thoughts
You set up complete observability for a Spring Boot application on Kubernetes. Spring Observability produces application-level telemetry: HTTP request metrics, database query spans, cache hit rates, scheduled task execution. The Dash0 Kubernetes Operator collects this telemetry via OTLP and also collects infrastructure-level metrics from Kubernetes: pod CPU and memory, node resources, container restarts, deployment events. Dash0’s platform receives all of this telemetry, correlates it automatically, and makes it queryable in one place. You see the complete picture from business logic down to container resources.
This is the recommended approach for Spring Boot on Kubernetes. Spring Observability produces framework-aware telemetry that understands Spring abstractions like RestTemplate, JdbcTemplate, and @Scheduled methods. Dash0 collects telemetry from both Spring Observability (via OTLP) and Kubernetes (via the operator), correlates it, and visualizes it. You don’t write collector YAML for every service, and you don’t lose visibility into either layer.
To see richer application telemetry, add a database or cache to this application. Spring Observability will automatically produce spans for JdbcTemplate queries with SQL details and cache operations with hit/miss rates. Dash0 will collect these spans and let you correlate them with infrastructure metrics. You can also add custom spans by injecting a Tracer bean where you need application-specific instrumentation. Dash0 collects everything automatically.
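As a sketch of that custom-span approach (assuming micrometer-tracing is on the classpath, which the earlier pom.xml provides; the service class, method, span name, and tag are all hypothetical, not part of this guide's application):

```java
package com.example.demo;

import io.micrometer.tracing.Span;
import io.micrometer.tracing.Tracer;
import org.springframework.stereotype.Service;

// Hypothetical service showing a custom span around a piece of business logic.
// Spring injects the Tracer bean that micrometer-tracing-bridge-otel provides.
@Service
public class OrderEnrichmentService {

    private final Tracer tracer;

    public OrderEnrichmentService(Tracer tracer) {
        this.tracer = tracer;
    }

    public String enrich(String order) {
        Span span = tracer.nextSpan().name("enrich-order");
        // Put the span in scope so trace context propagates to logs
        // and to any instrumented calls made inside the block.
        try (Tracer.SpanInScope ignored = tracer.withSpan(span.start())) {
            span.tag("order.id", order);   // custom attribute on the span
            return order + " (enriched)";  // stand-in for real business logic
        } finally {
            span.end();
        }
    }
}
```

Spans created this way nest under the HTTP request span that Spring Observability already produces, so they appear inside the same trace in Dash0.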
For production deployments, consider reducing the trace sampling probability in application.yml from 1.0 (100%) to a lower value like 0.1 (10%) to reduce telemetry volume. Dash0’s Cost Control features can also filter out telemetry at the source before it leaves your cluster, which gives you more control over cost without sacrificing observability.