Components of the Hive
A detailed look at the physical and cloud infrastructure that powers Honeycluster — from rack-mounted on-prem servers to cloud services.
Honeycluster Team
Education · Infrastructure

Honeycluster is not a single server. It is a coordinated system of physical hardware in a colocation facility and cloud services at the edge, designed to serve XRPL data reliably at scale. This post breaks down every component in the stack — what it is, why we chose it, and what role it plays.

## The rack

Everything starts in the datacenter. Our colocation rack houses the networking gear, database cluster, and XRPL nodes that form the backbone of Honeycluster's infrastructure.

### Networking

Two pieces of hardware handle all traffic in and out of the rack:

| Component | Model | Role |
| --- | --- | --- |
| Router | TP-Link ER8411 | Enterprise 10G VPN router with up to 10 WAN ports, SPI firewall, Omada SDN support, load balancing, and lightning protection |
| Switch | Cisco Catalyst C1300-24T-4G | 24-port gigabit managed switch with 4 SFP uplinks |

The TP-Link ER8411 gives us multi-WAN failover and load balancing across upstream connections. The Cisco switch handles internal traffic between all servers in the rack, with managed VLANs to isolate database traffic from public-facing services.

A patch panel sits between the switch and the servers, keeping cabling organized and making it straightforward to re-patch connections without tracing cables through the rack.

A note on patch panels

While we currently use a patch panel for cable management, we would not recommend it for new installations. Every RJ45 pass-through on a patch panel introduces an additional connection point — and every connection point is a potential failure. We have traced intermittent network drops back to loose patch panel connections more than once. If you are building a rack from scratch, run cables directly from the switch to the servers. The cable management is slightly messier, but you eliminate an entire class of connectivity issues.

### ScyllaDB cluster — 6x Dell PowerEdge R630

The ScyllaDB cluster is the heaviest component in the rack. Six Dell R630 servers handle all indexed XRPL data — ledger history, transaction records, and account state that Clio serves to API consumers.

| Spec | Per Node |
| --- | --- |
| Processors | 2x Intel Xeon E5-2697 v3 @ 2.3 GHz (18 cores each) |
| Memory | 384 GB DDR4 |
| Storage | 15.36 TB NVMe SSD |
| Network | 10 GbE + 1 GbE ports |
| Power | Dual 1100W redundant PSU |

With 36 cores, 384 GB of RAM, and over 15 TB of NVMe per node, each R630 is built for the sustained write and read throughput that ScyllaDB demands. Across six nodes, the cluster provides 2.3 TB of RAM and 92 TB of NVMe storage with full replication and fault tolerance.
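As a sanity check, those totals follow directly from the per-node specs. A quick back-of-the-envelope script (the replication factor of 3 used for usable capacity is a typical ScyllaDB setting, not a figure from this post):

```python
# Aggregate the per-node ScyllaDB specs into cluster-wide totals.
NODES = 6
CORES_PER_NODE = 36          # 2x 18-core Xeon E5-2697 v3
RAM_GB_PER_NODE = 384
NVME_TB_PER_NODE = 15.36

total_cores = NODES * CORES_PER_NODE            # 216 cores
total_ram_tb = NODES * RAM_GB_PER_NODE / 1000   # ~2.3 TB of RAM
total_nvme_tb = NODES * NVME_TB_PER_NODE        # ~92.2 TB raw NVMe

# Raw capacity is not usable capacity: with replication factor 3
# (assumed here for illustration), each row is stored on three nodes.
usable_tb = total_nvme_tb / 3

print(total_cores, round(total_ram_tb, 2), round(total_nvme_tb, 2), round(usable_tb, 1))
```

The raw-versus-usable distinction is worth keeping in mind when sizing a cluster like this: the 92 TB figure above is before replication.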

Power draw matters

Six R630s under heavy ScyllaDB write loads push significant power draw. We have exceeded our 20 kW rack allocation more than once during peak compaction and repair cycles. Understand your power budget before scaling a cluster like this — a tripped breaker takes the entire rack offline.

### XRPL nodes — 3x Quanta QuantaGrid D51B-1U

Three Quanta servers run rippled and clio, providing direct connections to the XRPL peer network and serving API requests.

| Spec | Per Node |
| --- | --- |
| Processors | 2x Intel Xeon E5-2697 v3 @ 2.3 GHz (18 cores each) |
| Memory | 64 GB DDR4 |
| Storage | 960 GB NVMe SSD |
| Form factor | 1U, 10x 2.5" SFF NVMe bays |

These nodes are lighter on memory and storage than the database servers because rippled and clio have different resource profiles. The rippled process needs fast I/O for ledger validation and peer communication but does not require terabytes of local storage — that is what the ScyllaDB cluster is for. Clio connects to ScyllaDB for historical queries and uses rippled for real-time validated data.

### Full-history node — Dell PowerEdge R640

One R640 serves as the dedicated full-history rippled node, storing every ledger and transaction since the XRPL genesis.

| Spec | Details |
| --- | --- |
| Processors | 2x Intel Xeon Platinum @ 2.0 GHz (26 cores each) |
| Memory | 768 GB DDR4 |
| Storage | 5x 15.36 TB NVMe SSD (76.8 TB total) |
| RAID | 12 Gb/s SAS RAID controller |
| Network | 10 GbE + 1 GbE ports |
| Power | Dual 1100W redundant PSU |

This is the most heavily provisioned machine in the rack. Full-history storage on the XRPL is measured in tens of terabytes and growing. The R640's 76.8 TB of NVMe, 768 GB of RAM, and 52 cores give it the headroom to handle both the current dataset and years of future growth. The RAID controller provides an additional layer of data protection beyond NVMe drive redundancy.

Full-history is not optional for us

Full-history access is a core requirement for analytics platforms, compliance tools, and block explorers that depend on Honeycluster. Losing even a single ledger range would break downstream consumers. This machine is sized to ensure that never happens.

## AWS edge services

The colocation rack handles data storage and XRPL connectivity. AWS handles everything between the rack and the outside world — traffic routing, authentication, rate limiting, and logging.

### Application Load Balancer

An AWS ALB sits at the entry point for all public traffic. It handles TLS termination, health checking, and request routing to the upstream proxy layer. The ALB distributes traffic based on connection type — HTTP JSON-RPC requests route differently than WebSocket upgrades.
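The connection-type split works because a WebSocket client announces itself in its opening HTTP request via the `Connection` and `Upgrade` headers. A simplified Python sketch of that listener-rule decision (the target-group names here are illustrative, not our actual configuration):

```python
def route(headers: dict) -> str:
    """Pick a target group based on connection type, as ALB listener rules do.

    A WebSocket client sends `Connection: Upgrade` and `Upgrade: websocket`
    in its opening request; anything else is treated as HTTP JSON-RPC.
    """
    connection = headers.get("connection", "").lower()
    upgrade = headers.get("upgrade", "").lower()
    if "upgrade" in connection and upgrade == "websocket":
        return "websocket-proxies"
    return "jsonrpc-proxies"

print(route({"connection": "Upgrade", "upgrade": "websocket"}))  # websocket-proxies
print(route({"content-type": "application/json"}))               # jsonrpc-proxies
```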

### Upstream proxies

Three proxy instances sit behind the ALB, handling request validation, protocol translation, and connection management before traffic reaches the colocation servers. Running three provides redundancy — any single proxy can go down without affecting availability.

### Redis cluster

A three-node Redis cluster handles two critical functions:

  • Authentication — API key validation and session management for authenticated endpoints

  • Rate limiting — Per-key and per-IP request throttling to protect backend infrastructure from abuse and ensure fair access

Redis was chosen for its sub-millisecond latency on lookups. At the volumes we handle, any overhead in the auth/rate-limit path directly impacts request latency for every consumer.

### Web server

A dedicated web server hosts the Honeycluster dashboard, documentation, and public-facing tooling. This is deliberately separated from the API infrastructure so that a traffic spike on the marketing site cannot impact API availability.

### MySQL RDS

An AWS RDS MySQL instance stores upstream access logs, request metadata, and operational metrics from the proxy layer. This data feeds internal dashboards and is used for usage analytics, debugging, and capacity planning. RDS handles backups, failover, and maintenance automatically.

## Where we sourced hardware

Building a physical cluster means sourcing hardware from multiple vendors. Our primary suppliers:

| Vendor | What We Sourced |
| --- | --- |
| ServerStore | Refurbished Dell R630 and R640 servers — significantly lower cost than buying new for equivalent specs |
| Amazon | Networking equipment, cables, and rack accessories |
| Provantage | Cisco switching and specialty components at competitive pricing |

Refurbished enterprise hardware

Every server in the rack is refurbished enterprise-grade equipment. Dell PowerEdge R630 and R640 servers are widely available on the secondary market at a fraction of their original cost. For infrastructure workloads where raw compute and storage matter more than warranty coverage, refurbished hardware is a practical choice that dramatically reduces upfront capital expenditure.

## How it all connects

The flow from a user request to a response looks like this:

  1. A request hits the AWS ALB over HTTPS or WSS

  2. The ALB routes to one of three upstream proxies

  3. The proxy checks Redis for authentication and rate limits

  4. Valid requests forward to the colocation rack over a secure tunnel

  5. The router directs traffic to the appropriate server via the switch

  6. For real-time data, the request hits a Quanta node running rippled/clio

  7. For historical queries, clio reads from the ScyllaDB cluster

  8. The response returns through the same path

Every layer has redundancy. No single component failure takes the system offline.
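The split in steps 6 and 7 can be sketched as a dispatch decision. This is a simplified illustration: in practice Clio makes this choice internally based on the requested ledger, and the validated ledger index below is made up:

```python
# Assumed current validated ledger index; illustrative only.
VALIDATED_LEDGER = 1_000_000

def backend_for(request: dict) -> str:
    """Decide which backend serves a request, mirroring steps 6-7 above.

    Requests for the latest validated state go to rippled on a Quanta node;
    requests for earlier ledgers are historical reads served from the
    ScyllaDB cluster via Clio.
    """
    ledger_index = request.get("ledger_index", "validated")
    if ledger_index == "validated" or ledger_index >= VALIDATED_LEDGER:
        return "rippled (Quanta node)"
    return "clio -> ScyllaDB cluster"

print(backend_for({"ledger_index": "validated"}))
print(backend_for({"ledger_index": 123_456}))
```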

## What we would change

If we were building a second rack from scratch, we would make several different choices based on what we have learned operating this one.

### Networking

We would replace the TP-Link router and Cisco switch with Ubiquiti gear across the board. A Ubiquiti gateway for routing and Ubiquiti high-throughput switches for inter-node connectivity — particularly for the ScyllaDB cluster, where cross-node replication and repair traffic demands consistent, low-latency switching between all six database servers.

Ubiquiti's unified management layer simplifies monitoring and configuration compared to managing separate vendor ecosystems for routing and switching.

### No patch panel

We already noted the problems with patch panels above. In a new build, we would eliminate the patch panel entirely. It adds an unnecessary failure point with no meaningful benefit in a rack of this size. Direct runs from the switch to each server are simpler and more reliable.

### Cabling

Thinner Cat6 cables at 5-foot lengths for every run. Our current rack has a mix of cable lengths that creates bulk and makes airflow management harder than it needs to be. Standardizing on short, slim Cat6 keeps the rack clean and reduces cable mass in the cable management arms.

### Power

C13/C14 power cords instead of NEMA 5-15P connectors. C13/C14 is the standard IEC connector on enterprise server power supplies and PDUs. NEMA 5-15P works — it is what we use now — but C13/C14 connections are more secure in a rack environment, lock into the PDU receptacle more firmly, and are the expected standard for datacenter equipment. For a new installation, there is no reason to use NEMA.

Small changes, big impact

None of these changes are dramatic. They are the kind of refinements that only become obvious after operating a rack under production load for months. The current setup works — but a second build would be tighter, cleaner, and easier to maintain.

Built to grow

This architecture is designed to scale horizontally. Adding ScyllaDB nodes increases storage and throughput. Adding Quanta nodes increases XRPL connectivity and API capacity. Adding proxies increases edge throughput. The colocation rack has room for additional hardware, and the AWS layer scales on demand.
