Five design principles to get your data center out of first gear
William_Choe
11-07-2023 07:00 AM
Like high-performance sports cars, today's data centers must perform faster and more efficiently than ever before. Advanced compute and storage I/O require higher-speed top-of-rack connectivity to connect fabrics into 400G spines (with 800G on the horizon).
Because most network operations haven't kept up with new API- and automation-driven practices, legacy enterprise data centers are like a sports car stuck in first gear, unable to take full advantage of its speed. Manual network provisioning and configuration can't match the velocity of modern development practices and microservice applications. Bolted-on appliances, agents, and complex traffic engineering introduce further drag, and the ability to deliver logs and telemetry to analytics tools that generate meaningful, actionable output is limited.
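To make the contrast concrete, here is a minimal sketch of API-driven provisioning: one payload template rendered for every ToR switch in a fabric, instead of typing CLI commands box by box. The switch names, JSON fields, and workflow are hypothetical illustrations, not a real product API.

```python
import json

# Hypothetical fabric inventory; in practice this would come from a
# source-of-truth system rather than a hard-coded list.
SWITCHES = ["tor-01", "tor-02", "tor-03"]

def vlan_payload(vlan_id: int, name: str) -> str:
    """Render the request body one HTTP call would carry per switch."""
    return json.dumps({"vlan": vlan_id, "name": name, "admin": "up"})

# In a real pipeline each body would be pushed to the switch's REST
# endpoint; here we only build the per-device request bodies.
requests_out = {sw: vlan_payload(100, "app-tier") for sw in SWITCHES}
```

The point is not the payload itself but that one template fans out to the whole fabric, which is what manual per-box configuration cannot keep pace with.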
Five design principles for next-gen data centers
How do you move past the limitations of legacy architectures to a modern data center that runs like a Ferrari? Get your data center out of first gear and operating at full speed with these five design principles:
1. Modernize with DPU-enabled switches
Data processing units (DPUs) are processors that offload and accelerate network and security functions. Originally designed for servers, DPUs have been adopted at scale by hyperscalers, proving out the technology.
The HPE Aruba Networking CX 10000 Switch Series with AMD Pensando is the first to fully integrate (dual) DPUs into an L2/3 switch, moving stateful firewalling, NAT, and microsegmentation closer to workloads without impacting switch processing performance.
Embedding DPUs into data center switches instead of installing them in servers simplifies brownfield deployment and lowers total cost. Instead of buying DPUs for each server in the rack, a DPU-enabled ToR switch provides similar benefits at a fraction of the price, without the need to unrack and crack open every server to install the new silicon. DPU-enabled switches mean you can adopt a distributed services model in existing data center environments at the rack or pod level, without painful upgrades or long deployment times.
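A back-of-the-envelope sketch of the per-rack economics described above, using made-up placeholder prices rather than vendor list prices:

```python
# All figures are illustrative assumptions, not quotes.
SERVERS_PER_RACK = 24
DPU_NIC_PRICE = 2_500        # hypothetical per-server DPU NIC cost
DPU_SWITCH_PREMIUM = 15_000  # hypothetical premium for DPU-enabled ToR switching

per_server_approach = SERVERS_PER_RACK * DPU_NIC_PRICE  # every server gets a DPU
switch_approach = DPU_SWITCH_PREMIUM                    # DPUs live in the ToR instead
savings_per_rack = per_server_approach - switch_approach
```

Whatever the actual prices, the structure of the comparison holds: the per-server approach scales cost with server count, while the ToR approach pays once per rack.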
2. Bring network and security services closer to workloads with a distributed services architecture
Security services in traditional data centers are typically delivered in two ways:
- Hardware appliances that hang off the data center network, requiring traffic engineering to direct flows out to the security cluster through a stack of appliances, then back into the network fabric, adding operational complexity and latency.
- Software agents that run in VMs or containers on servers, requiring installation of a host of agents and drivers that take device memory and CPU away from application processing and add a new tier of licensing and management costs.
Running firewall, NAT, and segmentation services within the network fabric applies these services closer to workloads and traffic flows while avoiding complex traffic engineering and cost and management burdens of server-based agents. DPU-enabled switches enable easier adoption of distributed services architectures in brownfield data centers, modernizing infrastructure at a lower cost and with less operational disruption.
3. Extend Zero Trust closer to applications
Zero Trust allows finer-grained control of application and service communications than typical port/protocol rules or ACLs, but it requires visibility into all your traffic. In modern hypervisor- or microservices-based environments, most data center traffic runs east-west and passes through ToR switches. Distributing stateful firewall and microsegmentation services on ToR DPUs takes advantage of the visibility switches already have into these communications to apply and enforce precise rules on host-to-host communication, without the need to hairpin traffic out to security appliances.
And because you can inspect every packet or flow that passes through your ToR layer, you dramatically increase your chances of spotting, and stopping, the kind of lateral movement that attackers use to burrow into your infrastructure.
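The default-deny stance behind microsegmentation can be sketched in a few lines. The subnets, tiers, and ports below are hypothetical examples; a real policy engine would match on far richer context.

```python
from ipaddress import ip_address, ip_network

# Allow-list of permitted east-west flows: (src subnet, dst subnet, proto, dst port).
ALLOW_RULES = [
    ("10.1.10.0/24", "10.1.20.0/24", "tcp", 5432),  # web tier -> db tier only
]

def allowed(src: str, dst: str, proto: str, port: int) -> bool:
    """Zero Trust stance: a flow passes only if a rule explicitly permits it."""
    return any(
        ip_address(src) in ip_network(s)
        and ip_address(dst) in ip_network(d)
        and proto == p and port == prt
        for s, d, p, prt in ALLOW_RULES
    )
```

Under this model, the database query from the web tier passes, while an unexpected SSH session between the same hosts, the classic lateral-movement move, is dropped by default.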
4. Blend network and security AIOps
Network and security data is priceless: it can be analyzed for security, troubleshooting, performance monitoring, and other uses. A new generation of analytics tools uses AI and machine learning to extract actionable insights from that data and provide predictive analytics that spot small issues before they become big problems.
Until now, network operations teams have had to rely on probes and taps to get this data, requiring either building a second network to monitor the first or limiting the data sample. DPU-based switches from HPE Aruba Networking collect and export standards-based IPFIX flow records and extend telemetry to include syslogs from the stateful firewalls that run on the DPU. The DPU can export syslogs to third-party security tools, including SIEMs and XDR systems, helping reduce blind spots and enabling network operators to respond to issues faster and more effectively.
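As a hedged illustration of what a SIEM can do with fabric telemetry, here is a crude lateral-movement heuristic over simplified flow records. The fields are a small subset of what IPFIX carries, and all addresses and counts are made up.

```python
from collections import Counter

# Simplified flow records of the kind a DPU-enabled ToR might export.
flows = [
    {"src": "10.1.10.5", "dst": "10.1.20.7", "dport": 5432, "bytes": 48_000},
    {"src": "10.1.10.5", "dst": "10.1.20.8", "dport": 22,   "bytes": 900},
    {"src": "10.1.10.5", "dst": "10.1.20.9", "dport": 22,   "bytes": 850},
]

# Flag any host opening SSH to multiple peers inside the fabric -- a
# fan-out pattern consistent with an attacker probing laterally.
ssh_fanout = Counter(f["src"] for f in flows if f["dport"] == 22)
suspects = sorted(h for h, n in ssh_fanout.items() if n >= 2)
```

Real detections are far more sophisticated, but they all depend on the same thing: complete flow visibility at the ToR layer rather than a sampled tap.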
5. Incorporate edge, colocation, and IaaS
Distributing services directly on a DPU-based switch extends network, security, and telemetry capabilities beyond the data center to locations like colocation facilities, factories, branch offices, and public cloud edges. The HPE Aruba Networking CX 10000 can dramatically simplify a private 400G site-to-site IPsec handoff to Microsoft Azure or AWS, or across on-premises and globally adjacent colocated hybrid cloud services such as HPE GreenLake.
Leveraging designs that combine colocation and infrastructure as a service (IaaS) offers additional benefits, including low-latency, high-bandwidth connections to major cloud providers, improved transaction speed, and data sovereignty. These integrated solutions also help reduce costs by limiting upfront CapEx, paying only for what you use, and avoiding public cloud egress charges.
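The egress-cost argument can be sketched with simple arithmetic. Both rates below are illustrative placeholders, not actual provider pricing.

```python
# Hypothetical monthly comparison: public cloud egress vs. a flat colo
# cross-connect for the same traffic volume.
MONTHLY_EGRESS_GB = 50_000       # assumed traffic leaving the cloud per month
CLOUD_EGRESS_PER_GB = 0.09       # assumed $/GB egress rate
CROSS_CONNECT_MONTHLY = 500.0    # assumed flat cross-connect fee

egress_cost = MONTHLY_EGRESS_GB * CLOUD_EGRESS_PER_GB
monthly_savings = egress_cost - CROSS_CONNECT_MONTHLY
```

The metered rate scales with traffic while the cross-connect is flat, so the savings grow with volume, which is why egress-heavy workloads favor the colocation design.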
The next generation of distributed services architecture supports applications in a variety of locations where critical data needs to be collected, processed, inspected, or passed along to the public cloud.
Accelerate your data center
High-performing sports cars are out of reach for many of us. A data center built for speed with a distributed services architecture, enabled by industry-first DPU-enabled switches, is not. Today it's possible to transform your data center to meet workload needs without rebuilding it from the ground up. A next-gen solution extends Zero Trust deep into the data center, leverages network and security AIOps, and brings critical network and security capabilities to edge locations.
All to say: you might not get that Ferrari, but with a distributed services architecture, you can have a data center that runs like one.
About the Author
William_Choe
William Choe is the VP of Product Management for Campus and Data Center Solutions, Instant On portfolio, and Global Product Operations at HPE Aruba Networking. William previously held executive roles at Dell, Cisco, and multiple startups. In his free time, William loves to ski, mountain bike, and hike.