Zen 4 architecture — up to 64 cores, 1152 GB DDR5 ECC, and 25 Gbps uplink. Available across five datacenters: Frankfurt, Strasbourg, New York, Miami, and Singapore. From €499/mo.
Architecture: Zen 4 (5nm TSMC)
Core Range: 32 to 64 cores (1P)
Memory Type: DDR5-4800 ECC Registered
Max RAM (1P): 1152 GB per socket
Storage: PCIe Gen 4 NVMe · Up to 15.4 TB
Network Options: 1 Gbps or 25 Gbps Unmetered
Why AMD EPYC for Bare Metal
AMD EPYC Genoa (9004 series) sets the current benchmark for server workloads demanding massive parallelism and memory capacity in a single socket. Built on TSMC’s 5nm process, Zen 4 cores deliver roughly 13% higher IPC than the previous generation, while the industry-unique 12-channel DDR5 memory controller enables up to 1152 GB per socket — more than any competing single-socket platform at this price point. All EPYC configurations ship with DDR5 ECC Registered memory and Gen 4 NVMe. No DDR4, no SATA, no compromise.
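The bandwidth figures quoted on this page follow directly from channel count and transfer rate. A quick back-of-envelope check (theoretical peak, not measured throughput):

```python
# Peak DRAM bandwidth: channels x transfer rate (MT/s) x 8 bytes per 64-bit bus.

def peak_bandwidth_gbs(channels: int, transfers_mts: int, bus_bytes: int = 8) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return channels * transfers_mts * bus_bytes / 1000

epyc_9004 = peak_bandwidth_gbs(12, 4800)  # 12-channel DDR5-4800
xeon_4th = peak_bandwidth_gbs(8, 4800)    # 8-channel DDR5-4800
epyc_7003 = peak_bandwidth_gbs(8, 3200)   # 8-channel DDR4-3200

print(epyc_9004, xeon_4th, epyc_7003)  # 460.8 307.2 204.8
```

Sustained bandwidth in real workloads lands below these peaks, but the ~1.5× gap between the 12-channel and 8-channel DDR5 platforms holds either way.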
EPYC 9004 Series (Genoa)
Zen 4 · 5nm · Up to 96 cores
DDR5-4800 ECC Registered
12 channels · Max 1152 GB
PCIe Gen 5 · 128 Lanes
Gen 4 NVMe · No CPU steal
Platform Comparison
EPYC 9004 vs Competing Platforms
| Specification | AMD EPYC 9004 | Intel Xeon (4th Gen) | EPYC 7003 (prev) |
|---|---|---|---|
| Architecture | Zen 4 (5nm TSMC) | Golden Cove (Intel 7) | Zen 3 (7nm TSMC) |
| Max cores (1 socket) | 96 cores | 60 cores | 64 cores |
| Max RAM (1 socket) | 1152 GB DDR5 | 1024 GB DDR5 | 512 GB DDR4 |
| Memory channels | 12 | 8 | 8 |
| Memory bandwidth | ~460 GB/s | ~307 GB/s | ~204 GB/s |
| PCIe lanes | 128 (Gen 5) | 80 (Gen 5) | 128 (Gen 4) |
AMD EPYC Configurations
EPYC Configurations. 5 Global Locations.
Every AMD EPYC server ships with DDR5 ECC Registered memory, Gen 4 NVMe, full root + IPMI, and 24/7 engineer support. Choose your location at checkout. No setup fees.
Workload Compatibility
Built for Demanding Workloads
AMD EPYC 9004 was designed specifically for workloads where core count, memory capacity, and memory bandwidth are the limiting factors. These are the use cases where EPYC consistently outperforms alternatives.
AI & LLM Inference
With up to 1152 GB DDR5 ECC and ~460 GB/s memory bandwidth, EPYC 9004 servers can hold Llama 3 70B entirely in RAM for low-latency inference without NVMe swap. Eliminates the throughput bottleneck that makes GPU-less LLM serving impractical on DDR4 platforms.
Up to 1152 GB RAM
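The fit-in-RAM claim above can be sanity-checked with simple arithmetic. This sketch assumes FP16 weights and that batch-1 decoding streams the full weight set once per token — a common approximation for memory-bandwidth-bound CPU inference; the throughput numbers are upper bounds, not benchmarks:

```python
# Does a 70B-parameter model fit in 1152 GB of RAM, and what token rate
# does ~460 GB/s of memory bandwidth permit at batch size 1?

def model_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB (overheads ignored)."""
    return params_billions * bytes_per_param  # 1e9 params x bytes / 1e9

ram_gb = 1152
bw_gbs = 460  # quoted peak; sustained bandwidth is lower in practice

fp16 = model_gb(70, 2)  # 140 GB at FP16
q8 = model_gb(70, 1)    # 70 GB at 8-bit quantisation
assert fp16 < ram_gb and q8 < ram_gb  # both fit entirely in RAM

# Upper bound: each generated token streams the full weights once.
print(round(bw_gbs / fp16, 1))  # 3.3 tokens/s at FP16
print(round(bw_gbs / q8, 1))    # 6.6 tokens/s at 8-bit
```

The same arithmetic on a DDR4 platform (~204 GB/s) roughly halves those rates, which is the bottleneck the paragraph above refers to.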
In-Memory Databases
Redis, Aerospike, Memcached, Apache Ignite, and similar in-memory data stores scale linearly with RAM capacity. At 1152 GB, a single EPYC 9575F holds a dataset that would require a 6-node cluster on standard 192 GB servers — eliminating sharding complexity and cross-node latency.
~460 GB/s bandwidth
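The 6-node consolidation claim is straightforward capacity math. A minimal sketch, ignoring replication and memory fragmentation overheads for simplicity:

```python
import math

# How many standard nodes does an in-memory dataset need, versus one
# 1152 GB EPYC host?

def nodes_needed(dataset_gb: float, node_ram_gb: float) -> int:
    return math.ceil(dataset_gb / node_ram_gb)

dataset_gb = 1100  # hypothetical working set that just fits one 1152 GB host

print(nodes_needed(dataset_gb, 192))   # 6 nodes at 192 GB each
print(nodes_needed(dataset_gb, 1152))  # 1 EPYC host
```

Beyond the node count itself, the single-host layout removes cross-node hops from every read path, which is where the latency win comes from.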
HPC & Parallel Workloads
64 physical cores with SMT give 128 hardware threads per socket. Scientific simulations, Monte Carlo engines, genome sequencing pipelines, rendering farms, and financial risk models that scale with thread count see near-linear throughput gains on EPYC compared to 12-core or 24-core alternatives.
64 cores · 128 threads
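A minimal example of the embarrassingly parallel pattern described above — a Monte Carlo π estimate fanned out across worker processes. On a 64-core EPYC with SMT, `os.cpu_count()` reports 128; the sample counts here are illustrative:

```python
import os
import random
from multiprocessing import Pool

def hits(args: tuple[int, int]) -> int:
    """Count random points falling inside the unit quarter-circle."""
    seed, n = args
    rng = random.Random(seed)  # independent stream per worker
    return sum(1 for _ in range(n) if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def estimate_pi(samples: int, workers: int) -> float:
    chunk = samples // workers
    with Pool(workers) as pool:
        total = sum(pool.map(hits, [(seed, chunk) for seed in range(workers)]))
    return 4.0 * total / (chunk * workers)

if __name__ == "__main__":
    print(estimate_pi(1_000_000, os.cpu_count() or 1))  # ~3.14
```

Because each worker is independent, throughput scales near-linearly with core count until memory bandwidth or cache contention intervenes — the same profile as the simulation and rendering workloads named above.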
High-Density Virtualisation
64 cores and 1152 GB RAM allow running 50–100+ VMs or containers on a single bare metal host — making EPYC the most cost-effective platform for private cloud, VPS hosting, and multi-tenant SaaS. PCIe Gen 5 with 128 lanes eliminates I/O contention between high-bandwidth virtual machines.
PCIe Gen 5 · 128 lanes
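The density figures above come from simple bin-packing against cores and RAM. A sketch with hypothetical guest profiles and overcommit ratios (not product limits):

```python
# How many guests of a given profile fit on one host, reserving some RAM
# for the hypervisor? CPU is typically overcommitted; RAM is not.

def guests_per_host(host_cores: int, host_ram_gb: int,
                    vcpus: int, ram_gb: int,
                    cpu_overcommit: float = 2.0, reserve_gb: int = 32) -> int:
    by_cpu = int(host_cores * cpu_overcommit // vcpus)
    by_ram = (host_ram_gb - reserve_gb) // ram_gb
    return min(by_cpu, by_ram)

# 64-core / 1152 GB host, 2 vCPU / 8 GB guests, 2x CPU overcommit:
print(guests_per_host(64, 1152, 2, 8))  # 64 guests (CPU-bound; RAM allows 140)
```

With the example profile the host runs out of vCPUs before RAM, so lighter guests (or higher overcommit) push the count into the 100+ range the paragraph mentions.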
Configuration Guide
Which EPYC Config Is Right for You?
Three distinct tiers of EPYC configurations — each built for a different bottleneck. Choose based on what your workload is actually constrained by.
Entry EPYC — Core-Count Bottleneck
EPYC 9354P · 32 cores · From €209/mo
The EPYC 9354P (32c × 3.25 GHz, 192 GB DDR5 ECC) is the right choice when your workload is CPU-bound rather than memory-bound: multi-threaded web servers (PHP-FPM, Node.js cluster), CI/CD build farms, video transcoding, and parallel data processing pipelines. Available in Strasbourg from €209/mo or Frankfurt from €499/mo. The Strasbourg config is our most cost-efficient EPYC option in Europe.
The EPYC 9555P (64c × 3.20 GHz, 384 GB DDR5 ECC, 2× 3.84 TB NVMe) is the workhorse for database servers, in-memory analytics, medium LLM inference, and high-density virtualisation. 64 cores + 12-channel DDR5 delivers ~460 GB/s memory bandwidth — roughly 2.2× what DDR4 platforms offer at this price. Available in New York, Singapore, and Frankfurt.
When the entire working dataset must fit in RAM — vector databases, full-scale LLM serving (70B+ models), large Redis clusters, real-time fraud detection across millions of records — the 1152 GB configurations are the only single-socket option. The EPYC 9575F (3.30 GHz boost) in New York targets latency-critical HFT and financial applications. The EPYC 9554P XL in Miami provides the same memory footprint at lower cost for LATAM-facing workloads.
All four are AMD EPYC Genoa (9004 series) on the same Zen 4 platform. The 9354P (32 cores at 3.25 GHz base) is the entry point. The 9554P (64 cores at 3.1 GHz) is our Miami flagship. The 9555P (64 cores at 3.2 GHz) clocks slightly higher than the 9554P and is available in New York, Frankfurt, and Singapore. The 9575F (64 cores at 3.3 GHz) is the highest-clocked variant, making it the preferred choice for latency-sensitive workloads like HFT, where per-core speed matters as much as parallelism. All ship with DDR5 ECC Registered memory and Gen 4 NVMe.
AMD EPYC Genoa implements a 12-channel DDR5 memory controller, while Intel Xeon Scalable (4th Gen Sapphire Rapids) uses an 8-channel DDR5 controller. With 12 DIMM slots and 96 GB RDIMMs, EPYC reaches 12 × 96 GB = 1152 GB. Intel maxes out at 8 × 128 GB = 1024 GB using more expensive 128 GB RDIMMs. The 12-channel EPYC controller also delivers significantly higher aggregate memory bandwidth (~460 GB/s vs ~307 GB/s), which matters more than clock speed for memory-bound workloads.
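The capacity arithmetic from that answer, spelled out (one DIMM per channel assumed):

```python
# Per-socket RAM ceiling = memory channels x per-module RDIMM capacity.

epyc_channels, epyc_dimm_gb = 12, 96   # Genoa: 12 channels, 96 GB RDIMMs
xeon_channels, xeon_dimm_gb = 8, 128   # Sapphire Rapids: 8 channels, 128 GB RDIMMs

print(epyc_channels * epyc_dimm_gb)  # 1152 GB per EPYC socket
print(xeon_channels * xeon_dimm_gb)  # 1024 GB per Xeon socket
```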
No. EPYC wins decisively on multi-threaded throughput, memory capacity, memory bandwidth, and total cost of ownership for large deployments. Intel Xeon wins on single-threaded clock speed in the high-end SKUs (some Xeon Max variants boost higher), AVX-512 performance for certain vector workloads, and has a longer history in highly-regulated environments where software stacks were certified on Xeon first. For the workloads where memory capacity or parallel throughput is the bottleneck — databases, virtualisation, AI inference, HPC — EPYC is the better platform. For legacy enterprise software certified on Xeon, or single-threaded performance-critical applications, Xeon or Ryzen 9 may be more appropriate.
ECC (Error-Correcting Code) memory detects and corrects single-bit memory errors in hardware, preventing data corruption and silent crashes that standard non-ECC memory cannot catch. Registered (RDIMM) means the memory module includes a register buffer between the CPU and memory chips, enabling higher DIMM counts and larger capacities per channel. All EPYC configurations at CoreNetHub ship exclusively with DDR5 ECC Registered memory — the same memory class used in hyperscaler datacenters. This is a mandatory requirement for production database servers, in-memory applications, and any workload where data integrity is non-negotiable.
Yes. EPYC configurations are available across our five datacenters — Frankfurt, Strasbourg, New York, Miami, and Singapore. Specific SKUs (like the EPYC 9575F or the 1152 GB XL configs) are available in specific locations based on hardware stock. The pricing table above shows the current location for each configuration. If you need a specific CPU model in a location not currently listed, contact us — we can often provision custom configurations within 2–5 business days.
The 25G suffix means this configuration ships with a 25 Gbps network uplink instead of the standard 1 Gbps. Bandwidth remains unmetered in both cases. The 25G config also includes the maximum memory (1152 GB DDR5 ECC) and 4× 3.84 TB NVMe (15.4 TB raw). It’s designed for workloads where both internal throughput and external bandwidth are bottlenecks simultaneously: LLM model serving with large batch sizes, real-time streaming analytics, or CDN/object storage backends that need to saturate network bandwidth while keeping a large working set in RAM.
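In transfer terms, the two uplink options work out as follows (line rate; protocol overhead ignored, and the 140 GB figure is a hypothetical FP16 70B model image used purely for illustration):

```python
# Convert link speed in Gbps to GB/s, then estimate bulk-transfer times.

def gbps_to_gb_per_s(gbps: float) -> float:
    return gbps / 8  # 8 bits per byte

print(gbps_to_gb_per_s(25))  # 3.125 GB/s
print(gbps_to_gb_per_s(1))   # 0.125 GB/s

# Time to move a 140 GB image on or off the host:
print(round(140 / gbps_to_gb_per_s(25)))  # ~45 seconds at 25 Gbps
print(round(140 / gbps_to_gb_per_s(1)))   # ~1120 seconds at 1 Gbps
```

That 25× gap is why the 25G configuration targets workloads that must saturate the network while a large working set stays resident in RAM.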