Windows server and Nimble CS1000 SAN without a Switch
05-06-2022 02:22 PM
Can I "network" my Nimble directly to my Dell/Windows server without a switch? I've been partily successful with connecting the four 10Gbps DAC (TwinAx) directly to server's four. I have the subnet for the 10Gbps interfaces set with the following:
Triffice Type: Data only
Traffic Assignment: ISCSI + Group
IP Address Zone: Single
MTU: Jumbo
On the Windows Server 2019 host I have the physical 10Gbps interfaces set with:
Jumbo frames: 9014
Bridged the four interfaces (using Microsoft MAC bridge)
A single IP address is set to the bridged (Microsoft Network Adapter Multiplexor) interface.
There is no switch involved.
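For anyone checking a similar setup, the host-side settings above can be inspected from PowerShell; this is only a sketch, and the adapter and interface names are examples:
# Verify the jumbo-frame setting on the physical NICs
Get-NetAdapterAdvancedProperty -Name "NIC1","NIC2","NIC3","NIC4" -RegistryKeyword "*JumboPacket"
# The MAC bridge shows up as a "Microsoft Network Adapter Multiplexor" device
Get-NetAdapter | Where-Object InterfaceDescription -like "*Multiplexor*"
# Confirm the single IP bound to the bridge interface
Get-NetIPAddress -InterfaceAlias "Network Bridge" -AddressFamily IPv4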
While it is "working", I'm getting very poor preformance (~2 MiB/s) and there are multiple errors in the Windows system event log from source iScsiPrt:
"Target did not respond in time for a SCSI request. The CDB is given in the dump data."; ID 9
"Connection to the target was lost. The initiator will attempt to retry the connection."; ID 20
"The initiator could not send an iSCSI PDU. Error status is given in the dump data."; ID 7
Is my trouble because I'm missing some configuration?
OR
Is this never going to function in this configuration?
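For reference, the iScsiPrt entries above can be pulled from PowerShell with a filter along these lines (a minimal sketch; the provider name is the event source shown above):
# List recent iScsiPrt errors 7, 9 and 20 from the System log
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'iScsiPrt'; Id = 7, 9, 20 } -MaxEvents 50 |
    Format-Table TimeCreated, Id, Message -AutoSize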
array --info ****
Model: CS1000
Extended Model: CS1000-2P-21T-1440F
Serial: ****
Version: 5.2.1.800-930936-opt
All-Flash: No
Array name: ****
Supported configuration: Yes
Link-local IP address: 169.254.84.165
Group ID: 2777738894916429475
Member GID: 1
Group Management IP: ****
1G/10G_T/SFP/FC NIC: 0/0/2/0
Total array capacity (MiB): 15553104
Total array usage (MiB): 102367
Total array cache capacity (MiB): 1373568
Volume compression: 0.00X
Uncompressed snapshot usage including pending deletes (MiB): 32
Snapshot compression: 1.89X
Pending Deletes (MiB): 0
Available space (MiB): 15450736
Dedupe capacity (MiB): 10485760
Dedupe usage (MiB): 0
Member of pool: default
Status: reachable
ctrlr --info A
Name: A
Serial number: ****-C1
State: active
Hostname: ****
Support IP address: ****
Support IP netmask: 255.255.255.0
Support IP nic: eth1
Power supply: OK
power-supply1 at left rear: ok
power-supply2 at right rear: ok
Cooling fans: OK
fan3 at rear of controller A: ok, speed: 10237rpm
fan4 at rear of controller A: ok, speed: 11000rpm
fan1 at front of controller A: ok, speed: 7850rpm
fan2 at front of controller A: ok, speed: 8200rpm
Temperature sensors: OK
motherboard at motherboard: ok, temperature: 32C
bp-temp1 at left-side backplane: ok, temperature: 26C
System partition status: OK
SCM accelerator: N/A
Last AutoSupport contact: N/A
ctrlr --info B
Name: B
Serial number: ****-C2
State: standby
Hostname: ****-B
Support IP address: ****
Support IP netmask: 255.255.255.0
Support IP nic: eth1
Power supply: OK
power-supply1 at left rear: ok
power-supply2 at right rear: ok
Cooling fans: OK
fan1 at front of controller B: ok, speed: 7350rpm
fan2 at front of controller B: ok, speed: 7500rpm
fan3 at rear of controller B: ok, speed: 10300rpm
fan4 at rear of controller B: ok, speed: 11100rpm
Temperature sensors: OK
motherboard at motherboard: ok, temperature: 28C
bp-temp2 at right-side backplane: ok, temperature: 25C
System partition status: OK
SCM accelerator: N/A
Last AutoSupport contact: N/A
ip --list
---------------+---------+------+----------+---------------+--------------------
IP Address NIC Status Type Array Controller
---------------+---------+------+----------+---------------+--------------------
10.3.10.51 eth1 up management **** A
10.3.10.151 eth1 up support **** A
10.3.10.50 eth1 up discovery **** A
172.16.103.51 tg1 up data **** A
172.16.103.50 tg1 up discovery **** A
172.16.103.52 tg2 up data **** A
10.3.10.152 eth1 up support **** B
netconfig --info active
Group Management IP: 10.3.10.51
Group Secondary Management IP: 10.3.10.52
Group leader array: ****
Member array(s): ****
ISCSI Automatic connection method: Yes
ISCSI Connection rebalancing : Yes
Routes:
---------------+---------------+---------------
Destination Netmask Gateway
---------------+---------------+---------------
0.0.0.0 0.0.0.0 10.3.10.1
Subnets:
------------------------+------------------+---------+---------------+----+-----
Label Network Type Discovery IP VLAN MTU
------------------------+------------------+---------+---------------+----+-----
Mgmt.Network 10.3.10.0/24 Mgmt 10.3.10.50 0 1500
iSCSI.VLAN103 172.16.103.0/24 Data 172.16.103.50 0 9000
Array Network Configuration: ****
Controller A IP: 10.3.10.151
Controller B IP: 10.3.10.152
---------+------------------------+---------------+------
NIC Subnet Label Data IP Address Tagged
---------+------------------------+---------------+------
eth1 Mgmt.Network N/A No
eth2 Mgmt.Network N/A No
tg1 iSCSI.VLAN103 172.16.103.51 No
tg2 iSCSI.VLAN103 172.16.103.52 No
05-09-2022 02:11 AM
Solution
Hi MrFisher,
Thank you for coming to the HPE Community forums.
I am afraid to say that under no circumstances is an iSCSI direct connect supported or advisable:
Failover, software updates, and HA will all fail if you direct-attach the server to the array. Always use a switch, Fabric Interconnect, etc.
You can reference the "Network Connections" and "Network Topology" designs in the appropriate Hardware Guide.
We do have documentation on what has been qualified.
X10:
https://infosight.hpe.com/InfoSight/media/cms/active/pubs_Hardware_Guide_AF1000__AF3000__AF5000__AF7000__AF9000__AFS2_Guide_doc_version_family.whz/jte1484627671048.html
Gen5:
https://infosight.hpe.com/InfoSight/media/cms/active/pubs_Hardware_Guide_-_AFxx.whz/jte1484627671048.html
It is also documented in the Validated Configuration Matrix now for iSCSI software initiators: "2. iSCSI Direct Attach to a Nimble Storage Array does not function correctly and is not supported."
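Once a switch is in place, the usual software-initiator pattern on the Windows side is one IP per NIC plus MPIO rather than a bridge. A minimal sketch, assuming the in-box iSCSI and MPIO PowerShell cmdlets (adapter names and host addresses are examples; the discovery IP is the one from your output):
# Give each 10GbE NIC its own IP on the iSCSI subnet (no bridge)
New-NetIPAddress -InterfaceAlias "NIC1" -IPAddress 172.16.103.101 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "NIC2" -IPAddress 172.16.103.102 -PrefixLength 24
# Let MPIO claim iSCSI disks (requires the MPIO feature; a reboot may be needed)
Enable-MSDSMAutomaticClaim -BusType iSCSI
# Point the initiator at the array's discovery IP and connect with multipath enabled
New-IscsiTargetPortal -TargetPortalAddress 172.16.103.50
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true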
Hope this helps!
Regards
Mahesh
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
