Introduction
Suppose you have a Ktor application named app and you want to deploy it to a remote server.
Fat JAR
Deploying a Ktor application involves packaging your code into a runnable format (usually a “Fat JAR”) and configuring a remote server to run it continuously while handling incoming web traffic.
Executable JAR file
To run the application on a server, you need to bundle your code and all its dependencies into a single, executable JAR file.
Open your terminal in the root of your project and run the build command:
```shell
./gradlew buildFatJar
```

Once the build succeeds, locate your packaged application in build/libs/. It will likely be named something like app.jar (the Ktor plugin's default adds an -all suffix, so check the exact name).
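If you are unsure what the build produced, a small shell check can confirm it. This is a sketch that assumes the default build/libs output directory; adjust the glob if your build names the artifact differently:

```shell
# Locate the newest JAR produced by the build (build/libs is the
# Gradle default output directory for JAR artifacts).
LIBS_DIR="build/libs"
JAR=$(ls -t "$LIBS_DIR"/*.jar 2>/dev/null | head -n 1)
if [ -n "$JAR" ]; then
  echo "Found JAR: $JAR"
else
  echo "No JAR found in $LIBS_DIR - did the build succeed?" >&2
fi
```

Use the printed path in the scp command later in this guide.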
Server
Connect to your server via SSH and install the Java Runtime Environment (JRE) so it can execute your JAR file.
```shell
ssh username@your_server_ip
```

Update your package manager and install Java (replace 17 with 21 if your project uses Java 21):
```shell
sudo apt update
sudo apt install openjdk-17-jre-headless -y
```

Create a directory to hold your application:
```shell
sudo mkdir /opt/app
sudo chown $USER:$USER /opt/app
```

Transfer the Application to the Server
You need to copy the Fat JAR from your local machine to the server.
Open a new terminal window on your local machine (do not close the SSH session) and use scp (Secure Copy Protocol):
```shell
scp build/libs/app.jar username@your_server_ip:/opt/app/app.jar
```

Service
If you just run the JAR file in the terminal, it will stop the moment you close your SSH connection.
To keep it running in the background and ensure it automatically restarts if the server reboots, we will create a systemd service.
Back in your server’s SSH session, create a new service file:
```shell
sudo nano /etc/systemd/system/app.service
```

Paste the following configuration (adjust the User directive if you want to run it under a dedicated service account instead of root):
```ini
[Unit]
Description=App Application
After=network.target

[Service]
User=root
# The path to your application directory
WorkingDirectory=/opt/app
# The command to start the app
ExecStart=/usr/bin/java -jar /opt/app/app.jar
SuccessExitStatus=143
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Save and exit (Ctrl+O, Enter, Ctrl+X).
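If your application reads settings from environment variables, you can supply them in the unit file before reloading. A sketch (the variable names here are examples, not something Ktor requires):

```ini
[Service]
# Example environment variables for the app; Ktor only picks these up
# if your code or application.conf references them, e.g. via ${?PORT}
# HOCON substitution.
Environment=PORT=8080
Environment=KTOR_ENV=production
```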
Reload systemd to recognize the new service, then start and enable it:
```shell
sudo systemctl daemon-reload
sudo systemctl start app
sudo systemctl enable app
```

Check the status to ensure it's running cleanly:

```shell
sudo systemctl status app
```

Reverse Proxy
By default, Ktor usually runs on port 8080. It is best practice not to expose this port directly to the web, but instead use a robust web server like Nginx to intercept standard HTTP/HTTPS traffic (ports 80 and 443) and forward it to your Ktor app.
Install Nginx on your server:
```shell
sudo apt install nginx -y
```

Create a new Nginx configuration file for your app:

```shell
sudo nano /etc/nginx/sites-available/app
```

Paste the following block (replace your_domain.com with your domain name or server IP):
```nginx
server {
    listen 80;
    server_name your_domain.com;

    location / {
        proxy_pass http://localhost:8080;  # Points to your Ktor port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```

Enable the site by creating a symlink:
```shell
sudo ln -s /etc/nginx/sites-available/app /etc/nginx/sites-enabled/
```

Test the Nginx configuration and restart the service:

```shell
sudo nginx -t
sudo systemctl restart nginx
```

You should now be able to access your Ktor application by navigating to http://your_domain.com (or your server's IP) in your web browser!
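Beyond clicking around in a browser, a quick scripted check confirms that Nginx is answering. The URL below is a placeholder; substitute your domain or server IP:

```shell
# Print the HTTP status code for the site. A 200 means Nginx reached
# the Ktor app; a 502 usually means the app behind the proxy is down.
URL="http://your_domain.com"
STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$URL")
echo "HTTP status for $URL: $STATUS"
```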
Secure with HTTPS
If you attached a domain name to your server, you can secure it with a free SSL certificate from Let’s Encrypt.
```shell
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d your_domain.com
```

Follow the prompts, and Certbot will automatically rewrite your Nginx configuration to support HTTPS.
Docker
Using Docker is a natural next step. It isolates your application, its dependencies, and its runtime environment into a single, portable container, so an image that runs on your machine will run the same way on your server. No more "it works on my machine" headaches!
Here is how to transition your Ktor deployment to a Dockerized setup using a Multi-Stage Docker Build. This approach is great because it builds your Fat JAR inside a temporary container, meaning you don’t even need Java or Gradle installed on your server.
Gradle

The Ktor Gradle plugin also provides tasks for building and running a Docker image directly from Gradle:

```shell
./gradlew buildImage
./gradlew runDocker
```

Step 1: Create a Dockerfile
In the root directory of your Ktor project (next to your build.gradle.kts), create a file simply named Dockerfile (no extension). Paste the following configuration:
```dockerfile
# ==========================================
# Stage 1: Build the Fat JAR
# ==========================================
# Use an official Gradle image to build the app
FROM gradle:8.5-jdk17 AS build

# Copy your source code into the container
COPY --chown=gradle:gradle . /home/gradle/src
WORKDIR /home/gradle/src

# Run the build command (creates the Fat JAR)
RUN gradle buildFatJar --no-daemon

# ==========================================
# Stage 2: Run the Application
# ==========================================
# Use a lightweight Java Runtime image for the final container
FROM eclipse-temurin:17-jre-alpine

# Create a directory for the app
WORKDIR /app

# Copy ONLY the built JAR from the previous stage
COPY --from=build /home/gradle/src/build/libs/*-all.jar app.jar

# Expose the port your Ktor app runs on (usually 8080)
EXPOSE 8080

# Command to run when the container starts
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Step 2: Create a docker-compose.yml (Recommended)
While you can run plain Docker commands, using Docker Compose is much better for managing server deployments. It allows you to define how your container should restart and run in the background.
In the same root directory, create a docker-compose.yml file:
```yaml
version: '3.8'

services:
  ktor-web:
    build: .
    container_name: ktor_backend
    ports:
      - "8080:8080"        # Maps server port 8080 to container port 8080
    restart: unless-stopped # Automatically restarts on crash or server reboot
```

Step 3: Prepare Your Server
If you previously set up the systemd service from the Service section above, stop and disable it so it doesn't clash with Docker on port 8080:

```shell
sudo systemctl stop app
sudo systemctl disable app
```

Next, you need to install Docker and Docker Compose on your Ubuntu server. SSH into your server and run:
```shell
# Update packages
sudo apt update

# Install Docker
sudo apt install docker.io -y

# Install the Docker Compose plugin
sudo apt install docker-compose-v2 -y

# Ensure Docker starts on boot
sudo systemctl enable docker
sudo systemctl start docker
```

Step 4: Deploy on the Server
Instead of manually copying JAR files using scp, the cleanest way to deploy Docker apps is to use version control (like Git).
- Push your project (including the new Dockerfile and docker-compose.yml) to a Git repository (GitHub, GitLab, etc.).
- On your server, clone the repository:

```shell
git clone https://github.com/your-username/your-ktor-repo.git /opt/ktor-docker
cd /opt/ktor-docker
```

- Build and start your container in the background using Docker Compose:
```shell
sudo docker compose up -d --build
```

Note: The --build flag forces Docker to execute your multi-stage Dockerfile, compiling your code fresh. The -d flag runs it in "detached" mode (in the background).
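For subsequent deploys, the pull-and-rebuild steps can be wrapped in a small helper. This is a sketch assuming the /opt/ktor-docker layout from the clone step; define it once on the server (e.g. in ~/.bashrc) and run `redeploy` after each push:

```shell
# Hypothetical redeploy helper: pulls the latest code and rebuilds
# the container. APP_DIR can be overridden if you cloned elsewhere.
redeploy() {
  cd "${APP_DIR:-/opt/ktor-docker}" || return 1
  git pull || return 1
  sudo docker compose up -d --build
}
```

Running `redeploy` performs exactly the three commands from this step, stopping early if the directory is missing or the pull fails.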
Step 5: Check Your Work
To verify your Ktor container is running smoothly, you can view the live logs:
```shell
sudo docker compose logs -f
```

(Press Ctrl+C to exit the log view.)
What about Nginx?
If you set up Nginx as a reverse proxy in the previous tutorial, you don’t need to change anything! Nginx is already listening on port 80/443 and forwarding traffic to localhost:8080. Docker is now exposing your Ktor app on that exact same local port, so Nginx will seamlessly route external traffic right into your Docker container.