Self-hosted Sentry: error tracking without a $30+ subscription

May 08, 2026 · 7 min read · Valebyte Team
For stable operation of self-hosted Sentry on 10-20 projects with a volume of up to 50,000 events per month, you will need a VPS with at least 8 GB RAM, 4 vCPUs, and an 80 GB NVMe drive. This allows you to completely opt out of a Sentry.io subscription starting at $29/month and gain full control over your data.

Sentry is the industry standard for error monitoring and application performance monitoring (APM). However, the SaaS version's pricing often becomes a barrier for growing teams: the first paid tier starts at $29/month and the Business plan at $249/month. Switching to on-premise Sentry not only saves budget but also helps meet compliance requirements (GDPR and similar), since error traces and user data never leave your infrastructure.

Hardware Requirements: Why does self-hosted Sentry require 8 GB of RAM?

Deploying your own Sentry is not just about running a single container. Sentry's modern architecture consists of 25+ microservices, including heavyweight components like Kafka, ClickHouse, and Snuba. Attempting to run the system on a server with 4 GB RAM will inevitably trigger the OOM Killer within the first few hours of operation.

Minimum and Recommended VPS Specifications

For smooth operation, an error-tracking VPS must have a resource margin. Below are the requirements based on real-world benchmarks at a load of 10-15 events per second (EPS).

| Specification | Minimum (Dev/Small) | Recommended (Production) | High Load (100+ EPS) |
|---------------|---------------------|--------------------------|----------------------|
| CPU (vCores)  | 4 cores (2.5 GHz+)  | 8 cores                  | 16+ cores            |
| RAM           | 8 GB                | 16 GB                    | 32 GB+               |
| Disk type     | NVMe SSD            | NVMe SSD (RAID 1)        | NVMe SSD (Enterprise)|
| Disk space    | 80 GB               | 200 GB                   | 500 GB+              |
| OS            | Ubuntu 22.04 LTS    | Debian 12 / Ubuntu 24.04 | Debian 12            |

It is important to understand that ClickHouse and Kafka are very sensitive to the speed of the disk subsystem. Using standard HDDs or slow network drives will lead to massive lags in event processing. If you plan to store logs for a long time, check out the topic of moving from AWS Lightsail to dedicated servers to get maximum IOPS for the same money.
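A quick way to sanity-check a provider's disk before committing is a direct sequential write. This is only a rough proxy (fio gives proper IOPS numbers if you can install it), and the test file name is arbitrary:

```shell
# Rough sequential-write check: ~512 MB written with the page cache
# bypassed (oflag=direct). NVMe should report hundreds of MB/s.
# Note: some filesystems (e.g. tmpfs) do not support O_DIRECT.
dd if=/dev/zero of=./dd-testfile bs=1M count=512 oflag=direct conv=fsync
rm -f ./dd-testfile
```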

Sentry On-Premise Architecture: ClickHouse, Postgres, and Kafka

Modern on-premise Sentry has moved away from the simple "Postgres + Redis" model; it is now a complex data-processing pipeline. Understanding its structure is critical for troubleshooting and for configuring retention policies.

Role of Stack Components

  • PostgreSQL: Stores metadata — users, project settings, API keys, team structures. The errors themselves are not stored here.
  • Redis: Used as a broker for Celery tasks and for caching intermediate data.
  • Kafka: Acts as a buffer. All incoming events first go into Kafka, allowing Sentry to withstand peak loads without data loss.
  • ClickHouse: The primary storage for all events and timings. It is thanks to ClickHouse that Sentry instantly builds graphs across millions of records.
  • Snuba: A middleware service that translates Sentry queries into SQL queries for ClickHouse.
  • Symbolicator: Processes native stack traces (C++, Rust, Android) and matches them with debug symbols.

This separation allows the error-tracking VPS to scale horizontally, but on a single server it requires strict resource limits for each Docker container. If you are building a comprehensive monitoring system, you might also be interested in implementing self-hosted analytics without Google for full stack independence.
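On a single box, those limits can live in a docker-compose.override.yml next to the stock compose file, which Docker Compose picks up automatically. A sketch with illustrative values only (tune them to your RAM budget, and make sure the service names match those in the shipped docker-compose.yml):

```yaml
# docker-compose.override.yml - example memory caps for an 8-16 GB host
# (values are assumptions, not official recommendations)
services:
  clickhouse:
    mem_limit: 4g
  kafka:
    mem_limit: 2g
  postgres:
    mem_limit: 1g
```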


Step-by-Step Installation via Sentry Docker Compose

The official deployment method is the getsentry/self-hosted repository: a set of scripts that automates setup of the Sentry Docker environment, creating the required networks and volumes.

Step 1: System Preparation

Update packages and install necessary dependencies. You will need Docker and Docker Compose (V2 plugin).

sudo apt update && sudo apt upgrade -y
sudo apt install -y git curl build-essential

# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

Step 2: Cloning and Configuration

We will use the latest stable version. It is not recommended to use the master branch for production.

git clone https://github.com/getsentry/self-hosted.git
cd self-hosted

# Pin to the latest stable release tag; the master branch is not for production
git checkout $(git describe --tags --abbrev=0)

# Check system requirements compliance:
# install.sh will check RAM, but it's better to verify beforehand

Step 3: Running the Installer

The ./install.sh script will perform database migrations, set up Kafka topics, and create an admin user. The process takes 10 to 20 minutes depending on disk and CPU speed.

./install.sh

During installation, you will be asked to create an administrator account. Be sure to save these credentials. Once finished, start the containers:

docker compose up -d

After startup, your self-hosted Sentry will be available at http://YOUR_SERVER_IP:9000. For production use, it is highly recommended to set up a Reverse Proxy (Nginx or Traefik) with SSL support. Securing access to the control panel is just as important as protecting team passwords via self-hosted Vaultwarden.
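A minimal Nginx vhost for this looks roughly like the sketch below (sentry.example.com and the certificate paths are placeholders; Sentry itself keeps listening on port 9000 behind the proxy):

```nginx
# Hypothetical reverse-proxy vhost in front of self-hosted Sentry
server {
    listen 443 ssl;
    server_name sentry.example.com;

    ssl_certificate     /etc/letsencrypt/live/sentry.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sentry.example.com/privkey.pem;

    # Source maps and minidumps can be large
    client_max_body_size 100m;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```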

Migration from SaaS (Sentry.io) to your own VPS

Many teams start with a free or cheap plan on Sentry.io but eventually hit its limits. Moving data from the cloud to self-hosted Sentry is possible, but with caveats: there is no one-click database import, due to differences in API versions and the cloud's ClickHouse architecture.

Export via API and CLI

The primary migration method is using the built-in export command. However, it only transfers the structure (projects, teams, keys), not the historical error events themselves.

  1. Create an Auth Token in the Sentry.io panel with org:read permissions.
  2. Use the Sentry Docker container to perform the export:
    docker compose run --rm web export /home/sentry-export.json
  3. For the cloud version, you will have to use the API to download project settings and recreate them locally via Terraform or custom Python scripts.
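As a starting point for step 3, the organization's project list can be pulled with plain curl (SENTRY_TOKEN and ORG_SLUG are placeholders for your own auth token and organization slug; jq is assumed to be installed):

```shell
# List project slugs from Sentry.io so they can be recreated locally.
# Requires a token with sufficient read scopes on the organization.
curl -s -H "Authorization: Bearer ${SENTRY_TOKEN}" \
  "https://sentry.io/api/0/organizations/${ORG_SLUG}/projects/" \
  | jq -r '.[].slug'
```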

Changing DSN: A Critical Step

After setting up on-premise Sentry, the DSNs (Data Source Names) for all your projects will change. You will need to update configs in every microservice and frontend application. If you have many Next.js projects, the process can be automated by following the principles of migrating Next.js to your own VPS.

Tip: Do not delete your Sentry.io account immediately. Keep it as a fallback for 2 weeks until all clients have updated their cached DSNs in their browsers or apps.
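A quick way to confirm nothing still points at the cloud is to grep your deployments for the old ingest host (the pattern below assumes your old DSNs used the default ingest.sentry.io domain; adjust the include globs to where your configs actually live):

```shell
# Find leftover Sentry.io DSNs across a checkout
grep -rn "ingest.sentry.io" \
  --include='*.env*' --include='*.js' --include='*.ts' --include='*.py' \
  . || echo "No stale cloud DSNs found."
```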

Retention Setup: How to avoid filling the disk with logs

By default, a Sentry Docker deployment stores events for 90 days. With a high volume of errors (e.g., a looping frontend bug affecting thousands of users), ClickHouse can quickly eat up all free space on the NVMe.

Changing Data Retention Period

Retention settings are defined in the .env file in the self-hosted directory. To change the retention period to 30 days (a good default for most teams), edit the variable:

# In the .env file in the self-hosted directory
SENTRY_EVENT_RETENTION_DAYS=30

After the change, you must restart the containers and wait for the workers to clear old partitions in ClickHouse.
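To pick a sensible retention value, a back-of-the-envelope estimate helps. The sketch below assumes ~30 KB stored per event including indexes, which is a guess rather than a measured figure; substitute your own averages:

```shell
# Rough ClickHouse storage estimate for a given retention window
events_per_month=50000     # from the sizing section above
kb_per_event=30            # assumption: average stored size incl. indexes
retention_days=30

estimate_mb=$(( events_per_month * kb_per_event * retention_days / 30 / 1024 ))
echo "Approximate ClickHouse usage: ${estimate_mb} MB"
```

At 50,000 events/month this lands around 1.4 GB, comfortably inside an 80 GB disk even with Kafka's own log segments on top.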

Cleaning Up Unused Data

Sometimes Docker volumes start taking up too much space because of the containers' own logs. You can clean up everything that does not belong to the Sentry stack (first check that the label value matches COMPOSE_PROJECT_NAME in your .env; in recent getsentry/self-hosted releases it defaults to sentry-self-hosted):

docker system prune -a --volumes --filter "label!=com.docker.compose.project=sentry-self-hosted"

Be careful: this removes all unused images and volumes outside that project, so double-check before running it on a shared host.

It is also useful to configure Docker log rotation in /etc/docker/daemon.json so that a single container cannot create a 50 GB log file.
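A typical rotation config for /etc/docker/daemon.json looks like this (the size and file-count values are illustrative; restart the Docker daemon after changing it):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}
```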

Performance Optimization and ClickHouse Tuning

If your error-tracking VPS starts to slow down, the first thing to check is ClickHouse memory consumption. By default, it tries to occupy up to 50% of available RAM; on an 8-16 GB server this can starve Kafka.

In the clickhouse/config.xml file, you can cap resource usage (the ~4 GB value below suits an 8 GB host; adjust to your own budget):

<!-- cap ClickHouse at ~4 GB so Kafka and Postgres keep headroom -->
<max_server_memory_usage>4000000000</max_server_memory_usage>
<max_thread_pool_size>8</max_thread_pool_size>

For developers who actively use AI to fix bugs found in Sentry, an excellent addition would be the combination of Continue.dev + Ollama, deployed on the same or a neighboring server. This allows analyzing error traces with a local neural network without sending code to the outside world.

Monitoring Kafka Queues

If errors appear in the interface with a delay of several minutes, it means Kafka cannot keep up with the flow. You can check this with the command:

docker compose run --rm kafka kafka-consumer-groups --bootstrap-server kafka:9092 --describe --group snuba-consumers

If the LAG field is growing, you either need to increase the number of CPU cores or optimize the number of events sent on the client side (sampling).
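To watch the trend rather than eyeball the table, the LAG column (sixth in the kafka-consumer-groups output) can be summed with awk — a convenience wrapper around the same command, not an official Sentry tool:

```shell
# Total consumer lag for the snuba-consumers group; run repeatedly
# (e.g. under watch) - a steadily growing total means Kafka is behind.
docker compose run --rm kafka kafka-consumer-groups \
  --bootstrap-server kafka:9092 --describe --group snuba-consumers \
  | awk '$6 ~ /^[0-9]+$/ { lag += $6 } END { print "total lag:", lag + 0 }'
```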

Conclusion

Self-hosted Sentry is a powerful solution that pays for itself in the first month if you have more than three active projects, requiring only a quality VPS with 8-16 GB of RAM. For stable operation, it is critical to use NVMe drives and to set a retention policy of 14-30 days so that ClickHouse storage does not overflow. Deploy Sentry on a clean Debian 12 or Ubuntu LTS system via the official installer, with a reverse proxy configured in front to protect access to your data.

