Ktor - Deploy

Deployment

Suppose you have a Ktor application named app and you want to deploy it to a remote server.

Fat JAR

Deploying a Ktor application involves packaging your code into a runnable format (usually a “Fat JAR”) and configuring a remote server to run it continuously while handling incoming web traffic.

Executable JAR file

To run the application on a server, you need to bundle your code and all its dependencies into a single, executable JAR file.

Open your terminal in the root of your project and run the build command:

Terminal window
./gradlew buildFatJar

Once the build succeeds, find the packaged application in build/libs/. The Ktor plugin appends an -all suffix to the fat JAR, so it will be named something like app-all.jar.
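The buildFatJar task comes from the Ktor Gradle plugin, so make sure it is applied in your build script. A minimal build.gradle.kts sketch — the plugin versions and the main class name here are assumptions; match them to your project:

```kotlin
plugins {
    kotlin("jvm") version "1.9.22"          // version is an assumption
    id("io.ktor.plugin") version "2.3.8"    // provides buildFatJar; version is an assumption
}

application {
    // The fat JAR's manifest needs a main class; adjust to your entry point
    mainClass.set("com.example.ApplicationKt")
}
```

The Ktor plugin applies Gradle's application plugin for you, which is why the application block is available.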

Server

Connect to your server via SSH and install the Java Runtime Environment (JRE) so it can execute your JAR file.

Terminal window
ssh username@your_server_ip

Update your package manager and install Java (replace 17 with 21 if your project uses Java 21):

Terminal window
sudo apt update
sudo apt install openjdk-17-jre-headless -y

Create a directory to hold your application:

Terminal window
sudo mkdir /opt/app
sudo chown $USER:$USER /opt/app

Transfer the Application to the Server

You need to copy the Fat JAR from your local machine to the server.

Open a new terminal window on your local machine (do not close the SSH session) and use scp (Secure Copy Protocol):

Terminal window
scp build/libs/app-all.jar username@your_server_ip:/opt/app/app.jar

(Adjust the JAR name to match what buildFatJar actually produced in build/libs/.)

Service

If you just run the JAR file in the terminal, it will stop the moment you close your SSH connection.

To keep it running in the background and ensure it automatically restarts if the server reboots, we will create a systemd service.

Back in your server’s SSH session, create a new service file:

Terminal window
sudo nano /etc/systemd/system/app.service

Paste the following configuration (adjust the User if you want to run it under a specific service account instead of root/default):

[Unit]
Description=App Application
After=network.target
[Service]
User=root
# The path to your application directory
WorkingDirectory=/opt/app
# The command to start the app
ExecStart=/usr/bin/java -jar /opt/app/app.jar
SuccessExitStatus=143
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

Save and exit (Ctrl+O, Enter, Ctrl+X).
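A quick note on SuccessExitStatus=143: when systemd stops a service it sends SIGTERM (signal 15), and the JVM exits with code 128 + 15 = 143. Listing 143 as a success keeps systemctl stop from flagging a clean shutdown as a failure. You can verify the convention in any shell — a sketch using sleep as a stand-in for the JVM:

```shell
# Start a long-running process (standing in for the JVM)
sleep 30 &
pid=$!

# Send it the same signal systemd uses on "systemctl stop"
kill -TERM "$pid"

# Collect its exit status: 128 + 15 (SIGTERM) = 143
wait "$pid"
echo "exit code: $?"   # prints: exit code: 143
```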

Reload systemd to recognize the new service, then start and enable it:

Terminal window
sudo systemctl daemon-reload
sudo systemctl start app
sudo systemctl enable app

Check the status to ensure it’s running cleanly:

Terminal window
sudo systemctl status app

Reverse Proxy

By default, Ktor usually runs on port 8080. It is best practice not to expose this port directly to the web, but instead use a robust web server like Nginx to intercept standard HTTP/HTTPS traffic (ports 80 and 443) and forward it to your Ktor app.
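For reference, the port Ktor listens on is typically set in src/main/resources/application.conf. A sketch in HOCON, assuming the configuration-file setup rather than embeddedServer — the module path is a placeholder for your own:

```hocon
ktor {
    deployment {
        port = 8080
    }
    application {
        modules = [ com.example.ApplicationKt.module ]
    }
}
```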

Install Nginx on your server:

Terminal window
sudo apt install nginx -y

Create a new Nginx configuration file for your app:

Terminal window
sudo nano /etc/nginx/sites-available/app

Paste the following block (replace your_domain.com with your domain name or server IP):

server {
    listen 80;
    server_name your_domain.com;

    location / {
        proxy_pass http://localhost:8080; # Points to your Ktor port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Enable the site by creating a symlink:

Terminal window
sudo ln -s /etc/nginx/sites-available/app /etc/nginx/sites-enabled/

Test the Nginx configuration and restart the service:

Terminal window
sudo nginx -t
sudo systemctl restart nginx

You should now be able to access your Ktor application by navigating to http://your_domain.com (or your server’s IP) in your web browser!

Secure with HTTPS

If you attached a domain name to your server, you can secure it with a free SSL certificate from Let’s Encrypt.

Terminal window
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d your_domain.com

Follow the prompts, and Certbot will automatically rewrite your Nginx configuration to support HTTPS. The certbot package also installs a scheduled renewal task, so the certificate renews itself before it expires; you can test this with sudo certbot renew --dry-run.

Docker

Using Docker is a fantastic next step. It isolates your application, its dependencies, and its runtime environment into a single, portable container. This means that if it runs on your machine, it will run the same way on your server, with far fewer “it works on my machine” headaches!

Here is how to transition your Ktor deployment to a Dockerized setup using a Multi-Stage Docker Build. This approach is great because it builds your Fat JAR inside a temporary container, meaning you don’t even need Java or Gradle installed on your server.

Gradle

The Ktor Gradle plugin also ships Docker tasks of its own: buildImage builds a Docker image of your application, and runDocker builds it and runs it locally. These are handy for local testing, though the handwritten Dockerfile below gives you full control over the image you deploy:

Terminal window
./gradlew buildImage
./gradlew runDocker

Dockerfile

In the root directory of your Ktor project (next to your build.gradle.kts), create a file simply named Dockerfile (no extension). Paste the following configuration:

# ==========================================
# Stage 1: Build the Fat JAR
# ==========================================
# Use an official Gradle image to build the app
FROM gradle:8.5-jdk17 AS build
# Copy your source code into the container
COPY --chown=gradle:gradle . /home/gradle/src
WORKDIR /home/gradle/src
# Run the build command (creates the Fat JAR)
RUN gradle buildFatJar --no-daemon
# ==========================================
# Stage 2: Run the Application
# ==========================================
# Use a lightweight Java Runtime image for the final container
FROM eclipse-temurin:17-jre-alpine
# Create a directory for the app
WORKDIR /app
# Copy ONLY the built JAR from the previous stage
COPY --from=build /home/gradle/src/build/libs/*-all.jar app.jar
# Expose the port your Ktor app runs on (usually 8080)
EXPOSE 8080
# Command to run when the container starts
ENTRYPOINT ["java", "-jar", "app.jar"]

While you can run plain Docker commands, using Docker Compose is much better for managing server deployments. It allows you to define how your container should restart and run in the background.

In the same root directory, create a docker-compose.yml file:

version: '3.8'
services:
  ktor-web:
    build: .
    container_name: ktor_backend
    ports:
      - "8080:8080" # Maps server port 8080 to container port 8080
    restart: unless-stopped # Automatically restarts on crash or server reboot
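One small companion file worth adding next to the Dockerfile: a .dockerignore keeps local build output and Git history out of the Docker build context, which makes image builds faster. A minimal sketch:

```
# .dockerignore — keep the build context small
.git
.gradle
build
*.log
```

Excluding build/ is safe here because the fat JAR is compiled inside stage 1 of the multi-stage build, not copied from your machine.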

Prepare Your Server

If you previously set up the systemd service from the earlier section, you’ll want to stop and disable it so it doesn’t clash with Docker on port 8080:

Terminal window
sudo systemctl stop app
sudo systemctl disable app

Next, you need to install Docker and Docker Compose on your Ubuntu server. SSH into your server and run:

Terminal window
# Update packages
sudo apt update
# Install Docker
sudo apt install docker.io -y
# Install Docker Compose plugin
sudo apt install docker-compose-v2 -y
# Ensure Docker starts on boot
sudo systemctl enable docker
sudo systemctl start docker

Deploy on the Server

Instead of manually copying JAR files using scp, the cleanest way to deploy Docker apps is to use version control (like Git).

  1. Push your project (including the new Dockerfile and docker-compose.yml) to a Git repository (GitHub, GitLab, etc.).
  2. On your server, clone the repository:
Terminal window
git clone https://github.com/your-username/your-ktor-repo.git /opt/ktor-docker
cd /opt/ktor-docker
  3. Build and start your container in the background using Docker Compose:
Terminal window
sudo docker compose up -d --build

Note: The --build flag forces Docker to execute your multi-stage Dockerfile, compiling your code fresh. The -d flag runs it in “detached” mode (in the background).

Check Your Work

To verify your Ktor container is running smoothly, you can view the live logs:

Terminal window
sudo docker compose logs -f

(Press Ctrl+C to exit the log view).

What about Nginx?

If you set up Nginx as a reverse proxy in the previous tutorial, you don’t need to change anything! Nginx is already listening on port 80/443 and forwarding traffic to localhost:8080. Docker is now exposing your Ktor app on that exact same local port, so Nginx will seamlessly route external traffic right into your Docker container.

