Fibre Channel: The lifeblood of storage connectivity in the data center
Why Fibre Channel? Fibre Channel is still the most secure, reliable, cost-effective, and scalable protocol for connecting servers and storage together, and the only protocol that is purpose-built for handling storage traffic.
We all know that the amount of data continues to grow exponentially and that data itself is the new currency businesses rely on. The ability to act on that data in a timely manner can make or break how a business competes in the marketplace. Quick and reliable access to data is therefore paramount, and the underlying infrastructure that connects users to data storage systems is more critical than ever before.
In today's data center, architects have many different connectivity options to choose from, but Fibre Channel has been, and will remain, the lifeblood of shared storage connectivity. This is because Fibre Channel is the most secure, reliable, cost-effective, and scalable protocol for connecting servers and storage together, and the only protocol that is purpose-built for handling storage traffic.
Fibre Channel has been around for decades and is still far and away the primary choice for connectivity to shared storage in the data center. With Fibre Channel, a dedicated storage network is created, and SCSI storage commands are routed between server and storage devices at bandwidths up to 28.05 Gbps (32GFC) and with IOPS in excess of 1 million. Because it was designed from the ground up for storage traffic, Fibre Channel delivers high-performance connectivity very reliably. HPE StoreFabric 16GFC and 32GFC adapters and switching infrastructure provide the bandwidth, IOPS, and low latency required in the data center today and for years to come.
Advancements in Fibre Channel technology keep it ahead of the curve when it comes to connectivity.
For example, HPE StoreFabric 16GFC and 32GFC infrastructure is already capable of supporting NVMe storage traffic, even before NVMe-native storage arrays are mainstream. Other advanced capabilities include advanced diagnostics, simplified deployment and orchestration, and enhanced reliability features such as T10-PI, dual-port isolation, and more.
The other popular option for storage connectivity is iSCSI. With iSCSI, storage commands travel over a standard TCP/IP network, which makes it a good fit for low-end to midrange systems where performance and security are not the primary requirements. A popular misconception about Fibre Channel is that, because it uses a dedicated storage network, it is more expensive than iSCSI. While iSCSI can run on the same Ethernet network as all the regular network traffic, delivering the performance most customers need from their storage systems requires running iSCSI on a segmented or dedicated Ethernet network, isolated from the regular traffic. That means complex VLAN configurations and security policies, or a completely dedicated Ethernet network, just like Fibre Channel. The only real cost difference between FC and iSCSI arises when direct-attach copper (DAC) cables are used in iSCSI implementations, but DACs have a distance limit of 5 meters. That may work fine for an SMB customer with only a single storage array, but DACs do not work well in the large-scale data center.
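As a rough illustration of the segmentation described above, isolating iSCSI traffic on its own VLAN on a Linux host might look like the following sketch. The interface name, VLAN ID, and addresses are hypothetical, and the commands require root privileges:

```
# Hypothetical example: isolate iSCSI traffic on VLAN 100 (names/IDs are illustrative).
# Create a VLAN sub-interface on the NIC that carries storage traffic.
ip link add link eth0 name eth0.100 type vlan id 100

# Give the storage sub-interface its own address on the dedicated storage subnet.
ip addr add 192.168.100.10/24 dev eth0.100
ip link set dev eth0.100 up

# Optionally enable jumbo frames; this only helps if every switch port and
# the storage array on the path are configured for the same MTU.
ip link set dev eth0 mtu 9000
ip link set dev eth0.100 mtu 9000
```

Multiply this by security policies, switch-side VLAN trunking, and a second path for redundancy, and the "free" iSCSI network starts to resemble the dedicated fabric it was supposed to avoid.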
When you look at the topology of a storage network, the best practice is identical between iSCSI and Fibre Channel. For resiliency and to eliminate downtime, the storage area network (SAN) design has two identical networking paths between servers and storage.
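On the host side, the dual-path design described above is typically paired with multipathing software that fails over between the two fabrics. On Linux, for example, a minimal dm-multipath configuration might look like the sketch below; the values shown are illustrative defaults, not a tuned, vendor-validated configuration:

```
# /etc/multipath.conf -- minimal sketch for a dual-fabric SAN (values illustrative)
defaults {
    # Present stable /dev/mapper/mpathX names instead of raw WWIDs.
    user_friendly_names yes

    # Only create multipath devices for LUNs that actually have multiple paths.
    find_multipaths     yes
}
```

With two independent fabrics, each LUN appears to the host once per path; the multipath layer collapses them into a single device and redirects I/O if a fabric, HBA, or controller port fails.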
One big difference however is that Fibre Channel networks are not nearly as susceptible to security breaches as Ethernet.
When is the last time you heard that a Fibre Channel network was hacked? Never? How about an Ethernet network?
Security is one of the main reasons that Fibre Channel will remain a mainstay in the data center for many years to come.
As we look forward, the SCSI command set will be replaced by Non-Volatile Memory Express, or NVMe, commands. NVMe is a streamlined command set designed for SSDs and storage-class memory that is much more efficient than SCSI. In addition, NVMe is a multi-queue architecture with up to 64K I/O queues, with each I/O queue supporting up to 64K commands. Compared to SCSI, with a single queue and 64 commands, NVMe can deliver significantly higher performance.
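The queue arithmetic in the paragraph above can be made concrete. This short sketch compares only the stated protocol maximums, not measured performance:

```python
# Compare the queueing capacity figures from the text:
# SCSI: a single queue of 64 outstanding commands.
# NVMe: up to 64K I/O queues, each up to 64K commands deep.
SCSI_QUEUES, SCSI_QUEUE_DEPTH = 1, 64
NVME_QUEUES, NVME_QUEUE_DEPTH = 64 * 1024, 64 * 1024

scsi_outstanding = SCSI_QUEUES * SCSI_QUEUE_DEPTH  # 64 commands in flight
nvme_outstanding = NVME_QUEUES * NVME_QUEUE_DEPTH  # 4,294,967,296 commands in flight

print(f"SCSI max outstanding commands: {scsi_outstanding:,}")
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
print(f"Ratio: {nvme_outstanding // scsi_outstanding:,}x")
```

The theoretical ceiling is tens of millions of times higher; in practice, drivers allocate far fewer queues (often one per CPU core), but the parallelism is what lets NVMe keep modern SSDs busy.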
Today's HPE StoreFabric 16GFC and 32GFC infrastructure that supports SCSI commands can also carry NVMe commands in the SAN, known as NVMe over Fabrics. With Ethernet, customers will need to implement low-latency RDMA over Converged Ethernet (RoCE) to take full advantage of NVMe. However, this approach requires a complex lossless Ethernet implementation using Data Center Bridging (DCB) and Priority Flow Control (PFC). The network complexity of NVMe over Ethernet will be a huge barrier for most customers, especially when the FC SAN deployed today works just fine with the NVMe storage of tomorrow.
The bottom line is that Fibre Channel will remain the lifeblood for connectivity between servers and shared storage.
HPE has a legacy of SAN leadership and a full complement of Fibre Channel HBAs and switching technology to support customers today and tomorrow. For more information on HPE Storage Networking Fibre Channel technology, go to HPE Storage Networking.
Meet Around the Storage Block blogger Prabhu Punniamurthy, HPE StoreFabric, HPE Storage.
Storage Experts
Hewlett Packard Enterprise
twitter.com/HPE_Storage
linkedin.com/showcase/hpestorage/
hpe.com/storage