Basic Setup
The basic setup of Kadeck (DSH) consists of a single Portal instance. This configuration is recommended for:
- Proof-of-Concepts (PoCs)
- Small environments or projects
- Isolated network segments
- Rapid evaluation and onboarding
This guide describes the complete deployment process for this setup. Once completed, you will be able to:
- Log in to the Portal
- Monitor your clusters and applications via Argus
- Explore your streaming data interactively
Deployment Overview
To bring up a fully functional environment, complete the following steps in order:
- Preparation
- Access & Credentials
- Configuration
- Deployment
- Testing
1. Preparation
Before deployment, ensure your infrastructure and access are correctly configured.
Portal Preparation
Container Setup
Portal is distributed as a Docker image and can be deployed as:
- A standalone container
- A Kubernetes Pod
- A Helm-based deployment
Ingress configuration is often required. Portal communicates over HTTP + WebSocket (WS) or HTTPS + Secure WebSocket (WSS). If an ingress controller is used, ensure that the appropriate headers (e.g., Connection: Upgrade) are forwarded to support WebSocket upgrades.
For required ports and network policies, refer to the Interoperability & Network section.
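As an illustration, a minimal Ingress manifest for the NGINX ingress controller might look like the following (hostname, service name, and timeout values are placeholders for your environment). The NGINX ingress controller forwards the Connection: Upgrade header automatically; with other controllers you may need to enable WebSocket support explicitly.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portal
  annotations:
    # Long-lived WebSocket connections need generous proxy timeouts
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  rules:
    - host: portal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: portal
                port:
                  number: 8080
```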
Database Configuration
Portal requires an external database (Postgres or H2) and a database user with full read and write access to a dedicated schema (e.g., dshportal). The default schema is PUBLIC, but a custom schema can be used in secure environments. We recommend using a dedicated database for Portal.
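For Postgres, the dedicated database, user, and schema can be prepared with statements along these lines (database, schema, user, and password names are examples, not requirements):

```sql
CREATE DATABASE dshportal;
CREATE USER dshportal_user WITH PASSWORD 'change-me';

-- Run the following against the dshportal database:
CREATE SCHEMA IF NOT EXISTS dshportal AUTHORIZATION dshportal_user;
GRANT ALL PRIVILEGES ON SCHEMA dshportal TO dshportal_user;
```

If you use a custom schema instead of PUBLIC, make sure the configured database user owns it or has full privileges on it.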
SSL/TLS Certificate
Deploying Portal with SSL/TLS enabled is strongly recommended.
Certificates must be embedded in a Java Keystore (JKS) and mounted into the container at runtime.
Instructions for generating a custom certificate are provided in the FAQs section.
2. Access & Credentials
Before deploying Portal, you must obtain the necessary credentials from Xeotek. These credentials are required to:
- Pull container images from the private registry
- Configure system secrets and initialize the platform
Container Images
Container images for Portal are hosted in a private repository on Docker Hub.
Kadeck operates in high-security environments such as large enterprises and government agencies. For security reasons, direct access to the images is restricted and not publicly available.
Please contact your representative at Xeotek to obtain access to the Kadeck images.
System Credentials
In addition to image access, you will receive your Kadeck team and secret. These are required to activate and operate your Kadeck installation.
Both the container registry credentials and the team and secret are provided by Xeotek during onboarding.
Important:
- Store all credentials securely and restrict access.
- The team and secret must be configured before system startup.
- Follow your organization's policies for handling confidential deployment artifacts.
3. Configuration
For a full list of configuration options for Kadeck, please visit the Configuration Table page.
Portal Configuration
| Key | Values | Description |
|---|---|---|
| xtk_kadeck_team | String | Your team's id |
| xtk_kadeck_secret | String | Your team's secret |
| xtk_kadeck_port | 8080 (or 8443) | The port through which the web user interface is accessible. |
| xtk_kadeck_loglevel | debug | Start with "debug" and change it to "warn" later. |
| xtk_kadeck_db_url | jdbc:postgresql://dshportal.acme.com:5432/dshportal | JDBC connection string to your Postgres or H2 database. |
| xtk_kadeck_db_username | String | Username of the database user with read/write access to the PUBLIC schema, if not specified otherwise. |
| xtk_kadeck_db_password | String | Password of the database user. |
| xtk_kadeck_keystore_path | /etc/selfsigned.jks | Path to mounted JKS |
| xtk_kadeck_keystore_pass | String | Password to access the certificate |
| xtk_kadeck_keystore_alias | String | Alias of the certificate |
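Put together, a minimal standalone-container configuration might look like the following docker-compose sketch (the image name and tag are placeholders; use the image reference provided by Xeotek, and consult the Configuration Table page for the authoritative option list):

```yaml
services:
  portal:
    image: xeotek/kadeck-portal:latest   # placeholder; use the image provided by Xeotek
    ports:
      - "8080:8080"
    environment:
      xtk_kadeck_team: "acme"
      xtk_kadeck_secret: "s3cr3t"
      xtk_kadeck_port: "8080"
      xtk_kadeck_loglevel: "debug"
      xtk_kadeck_db_url: "jdbc:postgresql://dshportal.acme.com:5432/dshportal"
      xtk_kadeck_db_username: "dshportal_user"
      xtk_kadeck_db_password: "change-me"
```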
4. Deployment
Portal can be deployed using a Helm chart, as a standalone container, or as a Kubernetes workload (including OpenShift). This section outlines recommended practices for each option with an emphasis on production readiness and operational maintainability.
Deployment Targets
- Helm (preferred for Kubernetes-based environments)
- Pod/Deployment YAMLs (for manual control or OpenShift customization)
- Docker run (for local development or PoC-only usage)
Helm Deployment
All Kadeck (DSH) components are available as Helm charts for streamlined deployment in Kubernetes-based environments.
Adding the Helm Repository
Add the Kadeck Helm repository:
helm repo add kadeckdash https://dl.cloudsmith.io/public/xeotek/kadeckdash/helm/charts/
helm repo update
Image Pull Secrets
Because Kadeck container images are hosted in a private registry, an image pull secret must be configured.
Create the secret using the credentials provided by Xeotek:
kubectl create secret docker-registry dsh-registry-secret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-xeotek-username> \
  --docker-password=<your-xeotek-password> \
  --namespace=<kadeckdash-system-namespace>
Reference the secret in your custom Helm values:
image:
  imagePullSecrets:
    - name: dsh-registry-secret
Best practice: Always scope the secret to the same namespace where Kadeck components are deployed.
Keystore Mounting (TLS)
When TLS is enabled, a Java Keystore (JKS) must be mounted into the Portal container at runtime. There are multiple strategies to manage this securely:
Option 1: Volume Mount
Mount a pre-generated .jks file into the container:
volumeMounts:
  - name: tls-keystore
    mountPath: /opt/portal/keystore
    readOnly: true
volumes:
  - name: tls-keystore
    secret:
      secretName: portal-tls
Option 2: External Secret Manager
Use a cloud-native secret manager (e.g., AWS Secrets Manager, HashiCorp Vault) with a sidecar injector or an operator to mount the keystore dynamically.
Best practice: Avoid hardcoding passwords and TLS paths in environment variables or values files. Use Kubernetes secrets or external secret managers.
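As one possible sketch using the External Secrets Operator (the SecretStore name, remote key path, and target secret name are assumptions for illustration):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: portal-tls
  namespace: kadeckdash-system
spec:
  secretStoreRef:
    name: vault-backend        # assumed SecretStore pointing at your Vault/cloud backend
    kind: SecretStore
  target:
    name: portal-tls           # Kubernetes secret consumed by the keystore volume mount
  data:
    - secretKey: selfsigned.jks
      remoteRef:
        key: kadeck/portal-keystore   # assumed path in the external secret store
```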
Namespace & Isolation
Use a dedicated namespace for the Kadeck components (e.g., kadeckdash-system) to isolate the environment and simplify resource control and RBAC management.
kubectl create namespace kadeckdash-system
Best practice: Apply namespace-specific resource quotas and network policies to ensure isolation and enforce limits.
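For example, a basic ResourceQuota and a default-deny NetworkPolicy for the namespace could look like this (the quota values are illustrative, not sizing guidance):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: kadeckdash-quota
  namespace: kadeckdash-system
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: kadeckdash-system
spec:
  podSelector: {}       # applies to all pods in the namespace
  policyTypes:
    - Ingress           # deny all ingress unless explicitly allowed
```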
Health Endpoints
Portal exposes the following endpoints for orchestration and monitoring:
| Endpoint | Description | Usage |
|---|---|---|
| /health | Overall health status | System dashboards, alerts |
| /live | Liveness probe | Kubernetes liveness checks |
| /ready | Readiness probe | Kubernetes readiness checks |
When deploying via Helm or custom manifests, ensure probes are configured as:
livenessProbe:
  httpGet:
    path: /live
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
Additional Recommendations
- Use readiness gates to delay service exposure until the database and keystore are available.
- Enable resource requests/limits to ensure predictable performance and avoid node contention.
- Configure anti-affinity rules when deploying Portal in an HA setup to spread replicas across availability zones or nodes.
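The resource and anti-affinity recommendations above can be sketched as pod-spec fragments (the values and the app label are starting points, not prescriptions):

```yaml
resources:
  requests:
    cpu: "500m"
    memory: 1Gi
  limits:
    cpu: "2"
    memory: 2Gi
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: topology.kubernetes.io/zone
          labelSelector:
            matchLabels:
              app: portal   # assumed pod label
```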
5. Testing
After deploying Portal and Argus, validate that the system is operational before connecting additional components or exposing it to end users.
Browser Access
Open a browser and navigate to the Portal endpoint:
http(s)://<your-ingress-or-service-endpoint>
Log in using the default credentials:
Username: admin
Password: admin
Note: Change the default password immediately after login in production environments.
Log Inspection
Check the Portal logs to confirm successful startup. You should see output similar to:
INFO Server started at: http://0.0.0.0:8080
No stack traces or repeated warnings should appear during startup. All components (Portal, Argus) should reach a ready state.
Logs are written to the container’s local file system and output to the console.
The default log file location is:
/root/.xtk_kadeck_log
The console output and the log file contents are identical.
Health Checks
Verify that health endpoints return expected HTTP 200 status codes:
GET /live → 200 OK
GET /ready → 200 OK
GET /health → 200 OK
These endpoints can also be queried manually or tested via:
curl http://<pod-ip>:8080/live
Troubleshooting
If you encounter problems during deployment or runtime, detailed logs are critical for diagnosis.
Configuring Log Levels
Control log verbosity by setting environment variables at startup.
Main application log level:
xtk_kadeck_loglevel
Additional component log levels:
- xtk_kadeck_loglevel_kafka — Apache Kafka client libraries
- xtk_kadeck_loglevel_netty — Netty networking
Available log levels:
- ERROR — Critical failures only
- WARN — Warnings and potential issues (default)
- INFO — General operational information
- DEBUG — Detailed internal state information (for troubleshooting only)
Recommendation:
Use DEBUG level only temporarily during troubleshooting.
Use WARN or INFO for regular operations.
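As a container-env sketch, a temporary troubleshooting configuration might look like this (the per-component values shown are illustrative):

```yaml
env:
  - name: xtk_kadeck_loglevel
    value: "DEBUG"   # temporary, for troubleshooting only
  - name: xtk_kadeck_loglevel_kafka
    value: "INFO"
  - name: xtk_kadeck_loglevel_netty
    value: "WARN"
```

Remember to revert the main log level to WARN or INFO once the issue is resolved.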