Array Setup and Networking

Windows server and Nimble CS1000 SAN without a Switch

 
SOLVED
MrFisher
Visitor

Can I "network" my Nimble directly to my Dell/Windows server without a switch? I've been partily successful with connecting the four 10Gbps DAC (TwinAx) directly to server's four. I have the subnet for the 10Gbps interfaces set with the following:
Traffic Type: Data only
Traffic Assignment: iSCSI + Group
IP Address Zone: Single
MTU: Jumbo

On the Windows Server 2019 host I have the physical 10Gbps interfaces set with:
Jumbo frames: 9014
Bridged the four interfaces (using the Microsoft MAC bridge)
A single IP address assigned to the bridged (Microsoft Network Adapter Multiplexor) interface.

There is no switch involved.
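
For reference, the Nimble "Jumbo" MTU of 9000 should line up with the Windows "Jumbo Packet" value of 9014 (a 9000-byte payload plus the 14-byte Ethernet header). A rough way to confirm that 9000-byte frames actually survive the bridge is sketched below; the "Jumbo Packet" display name varies by NIC driver, and 172.16.103.51 is the tg1 data IP from the array output further down:

# MTU that Windows is actually using on each interface, including the bridge
Get-NetIPInterface -AddressFamily IPv4 | Select-Object InterfaceAlias, NlMtu

# Jumbo-frame setting on each physical 10GbE port (property name varies by driver)
Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Select-Object Name, DisplayValue

# Do-not-fragment ping with an 8972-byte payload (9000 minus 28 bytes of IP/ICMP headers)
# against the tg1 data IP
ping -f -l 8972 172.16.103.51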

While it is "working", I'm getting very poor performance (~2 MiB/s) and there are multiple errors in the Windows System event log from source iScsiPrt:
"Target did not respond in time for a SCSI request. The CDB is given in the dump data."; ID 9
"Connection to the target was lost. The initiator will attempt to retry the connection."; ID 20
"The initiator could not send an iSCSI PDU. Error status is given in the dump data."; ID 7

Are my troubles because I'm missing some configuration?
OR
Is this never going to function in this configuration?
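
In case it helps with diagnosis, the initiator-side state can be captured while those iScsiPrt errors occur with a sketch like the one below (built-in iSCSI and event-log cmdlets only, nothing Nimble-specific):

# List configured target portals and discovered targets
Get-IscsiTargetPortal
Get-IscsiTarget | Select-Object NodeAddress, IsConnected

# Live sessions and their TCP connections (initiator/target address pairs)
Get-IscsiSession | Select-Object TargetNodeAddress, IsConnected, NumberOfConnections
Get-IscsiConnection | Select-Object ConnectionIdentifier, InitiatorAddress, TargetAddress

# The iScsiPrt events quoted above (IDs 7, 9 and 20) from the System log
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'iScsiPrt'; Id = 7, 9, 20 } |
    Select-Object -First 20 TimeCreated, Id, Message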

Array configuration details for reference:

array --info ****
Model: CS1000
Extended Model: CS1000-2P-21T-1440F
Serial: ****
Version: 5.2.1.800-930936-opt
All-Flash: No
Array name: ****
Supported configuration: Yes
Link-local IP address: 169.254.84.165
Group ID: 2777738894916429475
Member GID: 1
Group Management IP: ****
1G/10G_T/SFP/FC NIC: 0/0/2/0
Total array capacity (MiB): 15553104
Total array usage (MiB): 102367
Total array cache capacity (MiB): 1373568
Volume compression: 0.00X
Uncompressed snapshot usage including pending deletes (MiB): 32
Snapshot compression: 1.89X
Pending Deletes (MiB): 0
Available space (MiB): 15450736
Dedupe capacity (MiB): 10485760
Dedupe usage (MiB): 0
Member of pool: default
Status: reachable


ctrlr --info A
Name: A
Serial number: ****-C1
State: active
Hostname: ****
Support IP address: ****
Support IP netmask: 255.255.255.0
Support IP nic: eth1
Power supply: OK
power-supply1 at left rear: ok
power-supply2 at right rear: ok
Cooling fans: OK
fan3 at rear of controller A: ok, speed: 10237rpm
fan4 at rear of controller A: ok, speed: 11000rpm
fan1 at front of controller A: ok, speed: 7850rpm
fan2 at front of controller A: ok, speed: 8200rpm
Temperature sensors: OK
motherboard at motherboard: ok, temperature: 32C
bp-temp1 at left-side backplane: ok, temperature: 26C
System partition status: OK
SCM accelerator: N/A
Last AutoSupport contact: N/A
ctrlr --info B
Name: B
Serial number: ****-C2
State: standby
Hostname: ****-B
Support IP address: ****
Support IP netmask: 255.255.255.0
Support IP nic: eth1
Power supply: OK
power-supply1 at left rear: ok
power-supply2 at right rear: ok
Cooling fans: OK
fan1 at front of controller B: ok, speed: 7350rpm
fan2 at front of controller B: ok, speed: 7500rpm
fan3 at rear of controller B: ok, speed: 10300rpm
fan4 at rear of controller B: ok, speed: 11100rpm
Temperature sensors: OK
motherboard at motherboard: ok, temperature: 28C
bp-temp2 at right-side backplane: ok, temperature: 25C
System partition status: OK
SCM accelerator: N/A
Last AutoSupport contact: N/A

ip --list
---------------+---------+------+----------+---------------+--------------------
IP Address      NIC       Status Type       Array           Controller
---------------+---------+------+----------+---------------+--------------------
10.3.10.51      eth1      up     management ****            A
10.3.10.151     eth1      up     support    ****            A
10.3.10.50      eth1      up     discovery  ****            A
172.16.103.51   tg1       up     data       ****            A
172.16.103.50   tg1       up     discovery  ****            A
172.16.103.52   tg2       up     data       ****            A
10.3.10.152     eth1      up     support    ****            B


netconfig --info active
Group Management IP: 10.3.10.51
Group Secondary Management IP: 10.3.10.52
Group leader array: ****
Member array(s): ****
ISCSI Automatic connection method: Yes
ISCSI Connection rebalancing : Yes

Routes:
---------------+---------------+---------------
Destination     Netmask         Gateway
---------------+---------------+---------------
0.0.0.0         0.0.0.0         10.3.10.1

Subnets:
------------------------+------------------+---------+---------------+----+-----
Label                    Network            Type      Discovery IP    VLAN MTU
------------------------+------------------+---------+---------------+----+-----
Mgmt.Network             10.3.10.0/24       Mgmt      10.3.10.50      0    1500
iSCSI.VLAN103            172.16.103.0/24    Data      172.16.103.50   0    9000

Array Network Configuration: ****
Controller A IP: 10.3.10.151
Controller B IP: 10.3.10.152
---------+------------------------+---------------+------
NIC       Subnet Label             Data IP Address Tagged
---------+------------------------+---------------+------
eth1      Mgmt.Network             N/A             No
eth2      Mgmt.Network             N/A             No
tg1       iSCSI.VLAN103            172.16.103.51   No
tg2       iSCSI.VLAN103            172.16.103.52   No

1 REPLY
Mahesh202
HPE Pro
Solution

Re: Windows server and Nimble CS1000 SAN without a Switch

Hi MrFisher,

Thank you for coming to the HPE Community forums.

I am afraid to say that under no circumstances is an iSCSI direct connect supported or advisable:

Failover, software updates, and HA will all fail if you direct-attach the server to the array. Always use a switch, Fabric Interconnect, etc.

You can reference the "Network Connections" and "Network Topology" designs in the appropriate Hardware Guide.
We do have documentation on what has been qualified:
X10:
https://infosight.hpe.com/InfoSight/media/cms/active/pubs_Hardware_Guide_AF1000__AF3000__AF5000__AF7000__AF9000__AFS2_Guide_doc_version_family.whz/jte1484627671048.html
Gen5:
https://infosight.hpe.com/InfoSight/media/cms/active/pubs_Hardware_Guide_-_AFxx.whz/jte1484627671048.html
It is also documented in the Validated Configuration Matrix, under iSCSI software initiators: "2. iSCSI Direct Attach to a Nimble Storage Array does not function correctly and is not supported."
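
For what it is worth, even with a switch in place the Windows host is normally connected with one iSCSI session per physical NIC plus MPIO, not a bridged adapter with a single IP. A rough host-side sketch is below; the initiator IPs 172.16.103.61/.62 are placeholders, it assumes a single target, and the Nimble vendor/product strings should be verified against what your array actually reports. In practice the Nimble Windows Toolkit usually handles the MPIO/DSM piece and connection management for you.

# Enable MPIO and claim volumes that report as Nimble/Server (a reboot is
# typically required before the MPIO claim takes effect)
Install-WindowsFeature -Name Multipath-IO
New-MSDSMSupportedHW -VendorId "Nimble" -ProductId "Server"

# Point discovery at the data-subnet discovery IP (172.16.103.50 in your output)
New-IscsiTargetPortal -TargetPortalAddress 172.16.103.50

# One persistent session per initiator NIC (placeholder initiator IPs)
$iqn = (Get-IscsiTarget | Select-Object -First 1).NodeAddress   # assumes a single target
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 172.16.103.50 `
    -InitiatorPortalAddress 172.16.103.61 -IsPersistent $true -IsMultipathEnabled $true
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 172.16.103.50 `
    -InitiatorPortalAddress 172.16.103.62 -IsPersistent $true -IsMultipathEnabled $true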

Hope this helps!


Regards
Mahesh



I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]