Intel Xeon Gold 5218 Dedicated Servers

Enterprise-class servers with Intel Xeon Gold 5218 — 16 cores, 32 threads, 2.3GHz base with 3.9GHz turbo. Cascade Lake architecture with AVX-512 for AI inference and enterprise applications.

From

$186 /mo
28 servers
1–2h setup
99.9% uptime
| Location | CPU | RAM | Disk | Network | Price |
|---|---|---|---|---|---|
| Montreal, CA | [Dual] Intel Xeon Gold … | 64 GB | 2 TB | Unmetered | $186 |
| Stockholm, SE | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $679 |
| Atlanta, US | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $679 |
| Bucharest, RO | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $679 |
| Sofia, BG | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $679 |
| Las Vegas, US | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $691 |
| Budapest, HU | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $693 |
| Madrid, ES | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $705 |
| Barcelona, ES | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $705 |
| Copenhagen, DK | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $718 |
| Hong Kong, HK | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $727 |
| Tokyo, JP | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $727 |
| Oslo, NO | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $756 |
| Brussels, BE | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $769 |
| Belgrade, RS | [Dual] Intel Xeon Gold … | 64 GB | 6 TB | 5 TB | $769 |

Advantages

  • 16 cores / 32 threads with Cascade Lake architecture
  • AVX-512 instructions for AI/ML inference
  • Dual-socket capable — up to 64 threads
  • Intel DL Boost for deep learning workloads
  • Enterprise security with SGX support
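Once a server is provisioned, the advertised CPU features can be confirmed from the OS. Below is a minimal sketch, assuming a Linux guest, that reads `/proc/cpuinfo` and checks for the kernel's flag names corresponding to the bullets above (`avx512f` for AVX-512 Foundation, `avx512_vnni` for Intel DL Boost, `sgx` for Software Guard Extensions); the helper names are illustrative, not part of any provider API.

```python
# Verify advertised CPU features on a Linux host by parsing /proc/cpuinfo.
# Flag names follow the Linux kernel's x86 feature-flag naming.

def cpu_flags(cpuinfo_text):
    """Return the set of feature flags from /proc/cpuinfo contents."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def check_features(flags, wanted=("avx512f", "avx512_vnni", "sgx")):
    """Map each wanted feature flag to whether the CPU reports it.

    avx512f      - AVX-512 Foundation
    avx512_vnni  - Vector Neural Network Instructions (Intel DL Boost)
    sgx          - Software Guard Extensions
    """
    return {f: f in flags for f in wanted}

if __name__ == "__main__":
    with open("/proc/cpuinfo") as fh:
        for feature, present in check_features(cpu_flags(fh.read())).items():
            print(f"{feature}: {'yes' if present else 'no'}")
```

On a Cascade Lake server all three flags should be reported; on older Xeon generations `avx512_vnni` in particular will be absent.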

Best for

AI Inference
Enterprise ERP & CRM
SQL Server & Oracle
Compliance-Sensitive Workloads
