<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic [LINUX KERNEL] Add. Sense: Invalid field in cdb in Operating System - VMware</title>
    <link>https://community.hpe.com/t5/operating-system-vmware/linux-kernel-add-sense-invalid-field-in-cdb/m-p/7236112#M4274</link>
    <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;Issue encountered:&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;Since the middle of November 2024, around 75% of my virtual machines have been spamming SCSI error messages to their journalctl logs:&lt;/P&gt;&lt;PRE&gt;Feb 25 16:29:06 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#667 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Feb 25 16:29:06 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#667 Sense Key : Illegal Request [current]&amp;nbsp;
Feb 25 16:29:06 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#667 Add. Sense: Invalid field in cdb
Feb 25 16:29:06 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#667 CDB: Write same(16) 93 08 X X X X X X X X 00 00 00 18 00 00
Feb 25 16:29:06 virtualmachine kernel: blk_update_request: critical target error, dev sdb, sector 14855984 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;It is always the same disk, /dev/sdb (the 2nd disk), with the same error messages:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Sense Key : Illegal Request [current] 
Add. Sense: Invalid field in cdb
CDB: Write same(16) 93 08
blk_update_request: critical target error, dev sdb, sector X op 0x9:(WRITE_ZEROES)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;When does it happen?&lt;/FONT&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;It happens at totally random times of the day. A virtual machine can have around 1000 such lines in journalctl on one day and none on another.&lt;/LI&gt;&lt;LI&gt;No logs that seem related to the problem appear in the vmkernel.log of any ESXi host.&lt;/LI&gt;&lt;LI&gt;The affected virtual machines share no common ESXi host or datastore; about 3/4 of the virtual machines are affected, across all ESXi hosts and datastores.&lt;/LI&gt;&lt;LI&gt;The messages appear on all the different virtual machine templates (databases, app, GUI...).&lt;/LI&gt;&lt;LI&gt;Some virtual machines belonging to instances (groups of around 24 virtual machines) do the same work as others, with the same versions and configurations, but do not show these messages.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;Technical environment specifications:&lt;/FONT&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Storage vendor / model: HP 3PAR 9450&lt;/LI&gt;&lt;LI&gt;Storage OS version: 3.3.2 MU1 (P15)&lt;/LI&gt;&lt;LI&gt;SAN switch vendor between storage and BladeCenter: Brocade&lt;/LI&gt;&lt;LI&gt;SAN switch integrated into the BladeCenter: Brocade 16Gb/28c SAN Switch&lt;/LI&gt;&lt;LI&gt;BladeCenter vendor / model: HPE BladeSystem c7000 Enclosure G3&lt;/LI&gt;&lt;LI&gt;BladeCenter firmware version: 4.90&lt;/LI&gt;&lt;LI&gt;Blade server vendor and models: HPE ProLiant BL460c Gen9 (32 servers) &amp;amp; ProLiant BL460c Gen10 (16 servers)&lt;/LI&gt;&lt;LI&gt;OS installed on blade servers: VMware ESXi 7.0.3 Build 23794027&lt;/LI&gt;&lt;LI&gt;Datastore types: VMFS 5 &amp;amp; 6&lt;/LI&gt;&lt;LI&gt;Disk provisioning type on guest OS side: Thick Provision Lazy Zeroed&lt;/LI&gt;&lt;LI&gt;Guest OS (virtual machine) vendor and versions: Red Hat Enterprise Linux 8.4 to 8.10&lt;/LI&gt;&lt;LI&gt;Kernel versions on guest OS:&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;4.18.0-305.el8.x86_64,&lt;/P&gt;&lt;P&gt;4.18.0-425.3.1.el8.x86_64 and&lt;/P&gt;&lt;P&gt;4.18.0-553.8.1.el8_10.x86_64&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;VMware Tools version: 12.3.5.46049 (build-22544099)&lt;/LI&gt;&lt;LI&gt;Hardware compatibility: version 19&lt;/LI&gt;&lt;LI&gt;Disk architecture inside the Linux virtual machines: LVM&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Firmware of the Gen9 blade servers:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;HP FlexFabric 10Gb 2-port 534M Adapter 7.18.82 Slot 1&lt;/LI&gt;&lt;LI&gt;HP FlexFabric 10Gb 2-port 536FLB Adapter 7.18.82 Embedded&lt;/LI&gt;&lt;LI&gt;HP QMH2572 8Gb 2P FC HBA - FC 08.08.01 Slot 2&lt;/LI&gt;&lt;LI&gt;iLO 2.82 Feb 06 2023 System Board&lt;/LI&gt;&lt;LI&gt;Intelligent Platform Abstraction Data 25.00 System Board&lt;/LI&gt;&lt;LI&gt;Intelligent Provisioning 2.50.164 System Board&lt;/LI&gt;&lt;LI&gt;Power Management Controller Firmware 1.0.9 System Board&lt;/LI&gt;&lt;LI&gt;Power Management Controller FW Bootloader 1.0 System Board&lt;/LI&gt;&lt;LI&gt;Redundant System ROM I36 v2.60 (05/21/2018) System Board&lt;/LI&gt;&lt;LI&gt;SAS Programmable Logic Device Version 0x03 System Board&lt;/LI&gt;&lt;LI&gt;Server Platform Services (SPS) Firmware 3.1.3.21.4 System Board&lt;/LI&gt;&lt;LI&gt;Smart HBA H244br 7.00 Embedded&lt;/LI&gt;&lt;LI&gt;System Programmable Logic Device Version 0x17 System Board&lt;/LI&gt;&lt;LI&gt;System ROM I36 v2.90 (04/29/2021) System Board&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Firmware of the Gen10 blade servers:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Drive HPG4 Port=1I:Box=1:Bay=1&lt;/LI&gt;&lt;LI&gt;Drive HPG4 Port=1I:Box=1:Bay=2&lt;/LI&gt;&lt;LI&gt;Embedded Video Controller 2.5 Embedded Device&lt;/LI&gt;&lt;LI&gt;HP FlexFabric 10Gb 2-port 534M Adapter 7.18.82 Mezzanine Slot 2&lt;/LI&gt;&lt;LI&gt;HP FlexFabric 10Gb 2-port 536FLB Adapter 7.18.82 Embedded ALOM&lt;/LI&gt;&lt;LI&gt;HP QMH2672 16Gb FC HBA for BladeSystem c-Class 8.08.232 Mezzanine Slot 1&lt;/LI&gt;&lt;LI&gt;HPE Smart Array P204i-b SR Gen10 4.11 Embedded RAID&lt;/LI&gt;&lt;LI&gt;HPE Smart Storage Energy Pack 1 Firmware 0.70 Embedded Device&lt;/LI&gt;&lt;LI&gt;iLO 5 2.55 Oct 01 2021 System Board&lt;/LI&gt;&lt;LI&gt;Innovation Engine (IE) Firmware 0.2.2.3 System Board&lt;/LI&gt;&lt;LI&gt;Intelligent Platform Abstraction Data 9.4.0 Build 18 System Board&lt;/LI&gt;&lt;LI&gt;Intelligent Provisioning 3.31.63 System Board&lt;/LI&gt;&lt;LI&gt;Power Management Controller Firmware 1.0.7 System Board&lt;/LI&gt;&lt;LI&gt;Power Management Controller FW Bootloader 1.1 System Board&lt;/LI&gt;&lt;LI&gt;Redundant System ROM I41 v2.54 (09/03/2021) System Board&lt;/LI&gt;&lt;LI&gt;Server Platform Services (SPS) Descriptor 1.2 0 System Board&lt;/LI&gt;&lt;LI&gt;Server Platform Services (SPS) Firmware 4.1.4.505 System Board&lt;/LI&gt;&lt;LI&gt;System Programmable Logic Device 0x1E System Board&lt;/LI&gt;&lt;LI&gt;System ROM I41 v3.34 (09/30/2024) System Board&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;Impact&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;So far, no actual impact has been detected, but since the word "critical" appears in the message, it generates a large number of tickets in our monitoring tool.&lt;/P&gt;&lt;P&gt;It also makes things harder for the application vendor when there is an application issue to debug.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;What I tried&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;First of all, it is very difficult to know what is causing the problem, as there are many intermediaries between the storage and the Linux virtual machines.&lt;/P&gt;&lt;P&gt;On my side, I am only in charge of the layers from the BladeCenters up to the Linux virtual machines. The SAN switches and the storage are managed by another team in my company, but I am currently working with them to resolve this.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As described above, this is not new: the issue was detected around the middle of November 2024.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Timeline:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Beginning of September 2024 to beginning of December 2024: major server application updates of 5 instances (groups of around 24 virtual machines), which also required OS, kernel and system package updates.&lt;/LI&gt;&lt;LI&gt;End of October 2024 to end of November 2024 (48 ESXi hosts to update): minor VMware ESXi update, 7.0.3 build 19482537 to build 23794027.&lt;/LI&gt;&lt;LI&gt;Middle of November: a massive number of tickets from the monitoring tool was detected, all on the same error message.&lt;/LI&gt;&lt;LI&gt;Middle of November: a VMware ticket was opened. They said VMware Tools was not updated to the latest version; I updated it during the application updates of the 5 instances.&lt;/LI&gt;&lt;LI&gt;End of January 2025: the messages came back on these 5 updated instances (not on all virtual machines), and remained on the non-updated instances.&lt;/LI&gt;&lt;LI&gt;Beginning of February 2025: a VMware ticket was opened. They said to ask the storage vendor.&lt;/LI&gt;&lt;LI&gt;Beginning of February 2025: a Red Hat ticket was opened. They also said to ask the storage vendor.&lt;/LI&gt;&lt;LI&gt;Middle of February 2025: a ticket was opened with the storage team on my company's side. It is still in progress.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It probably began before the middle of November, but that is when it started to become really disruptive.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;However, someone from my company analyzed my issue and told me:&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;It's strange because:&lt;/P&gt;&lt;P&gt;1 - Your virtual machine disks are&amp;nbsp;&lt;STRONG&gt;Thick&lt;/STRONG&gt; provisioned&lt;/P&gt;&lt;P&gt;2 - Verification of the VPD page in-guest:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[root@virtualmachine ~]# sg_vpd --page=0xb2 /dev/sdb&lt;BR /&gt;Logical block provisioning VPD page (SBC):&lt;BR /&gt;&amp;nbsp; Unmap command supported (LBPU): 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&lt;STRONG&gt;Write same (16) with unmap bit supported (LBPWS): 0&lt;/STRONG&gt;&amp;nbsp;&amp;nbsp;&lt;STRONG&gt;&amp;nbsp;&amp;lt;------&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp; Write same (10) with unmap bit supported (LBPWS10): 0&lt;BR /&gt;&amp;nbsp; Logical block provisioning read zeros (LBPRZ): 0&lt;BR /&gt;&amp;nbsp; Anchored LBAs supported (ANC_SUP): 0&lt;BR /&gt;&amp;nbsp; Threshold exponent: 1&lt;BR /&gt;&amp;nbsp; Descriptor present (DP): 0&lt;BR /&gt;&amp;nbsp; Minimum percentage: 0 [not reported]&lt;BR /&gt;&amp;nbsp; Provisioning type: 0 (not known or fully provisioned)&lt;BR /&gt;&amp;nbsp; Threshold percentage: 0 [percentages not supported]&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;3 - Why does the OS / application seem to send WRITE_SAME(16) commands?&lt;/P&gt;&lt;P&gt;Example:&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;Feb 01 20:33:51 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#130 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Feb 01 20:33:51 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#130 Sense Key : Illegal Request [current]
Feb 01 20:33:51 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#130 Add. Sense: Invalid field in cdb
Feb 01 20:33:51 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#130 CDB: Write same(16) 93 08 X X X X X X X X 00 00 00 08 00 00
Feb 01 20:33:51 virtualmachine kernel: blk_update_request: critical target error, dev sdb, sector 30670800 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;It's indeed the storage that responds with an error to the write_same command (driver_sense), but what we need to understand is why the OS sends these SCSI commands when it has been told they are not supported.&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I analyzed whether a service was the cause of the problem by using the pidstat command:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;pidstat -d 1&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But no service or process seems to be writing to the disk when the message appears:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;Feb 25 16:24:04 virtualmachine kernel: blk_update_request: critical target error, dev sdb

04:24:02 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
04:24:03 PM   247   2152101      0.00     16.00      0.00       0  java

04:24:03 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
04:24:04 PM     0      1227    256.00    144.00      0.00       0  systemd-journal
04:24:04 PM   247   2152101      0.00   2052.00      0.00       0  java

04:24:04 PM     0   3220187      0.00      0.00      0.00       1  kworker/u256:0-flush-253:3
04:24:04 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
04:24:05 PM     0   3672057      0.00      8.00      0.00       0  rsyslogd

04:24:05 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
04:24:06 PM     0      1230      0.00     32.00      0.00       0  jbd2/dm-3-8&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;Feb 25 12:01:36 othervirtualmachine kernel: blk_update_request: critical target error, dev sdb,

12:01:33 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
12:01:34 PM     0      1214      0.00     24.00      0.00       0  jbd2/dm-10-8
12:01:34 PM     0      1257      0.00     60.00      0.00       0  systemd-journal
12:01:34 PM     0      2588      0.00      0.00      8.00       0  xxxxxxx

12:01:34 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command

12:01:35 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
12:01:36 PM     0      1257      0.00    116.00      0.00       0  systemd-journal
12:01:36 PM     0   1449141      0.00      0.00      0.00       1  kworker/u256:1-events_unbound

12:01:36 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
12:01:37 PM     0   1369065      0.00      4.00      0.00       0  pidstat
12:01:37 PM     0   3553469      0.00      4.00      0.00       0  vmtoolsd

12:01:37 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
12:01:38 PM     0       757      0.00     24.00      0.00       1  jbd2/dm-0-8
12:01:38 PM     0      1235      0.00     32.00      0.00       0  jbd2/dm-13-8
12:01:38 PM     0      1239      0.00      4.00      0.00       0  jbd2/dm-8-8
12:01:38 PM     0      1251      0.00     12.00      0.00       0  jbd2/dm-14-8
12:01:38 PM     0      1257      0.00     16.00      0.00       0  systemd-journal
12:01:38 PM     0      1260      0.00     48.00      0.00       0  jbd2/dm-6-8&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;Feb 25 09:53:44 othervirtualmachine kernel: blk_update_request: critical target error, dev sdb,

09:53:41 AM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
09:53:42 AM     0      1079      0.00      0.00      0.00       1  jbd2/dm-4-8
09:53:42 AM    27    275798    578.22    827.72      0.00       0  mysqld

09:53:42 AM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
09:53:43 AM    27    275798    376.00    656.00      0.00       0  mysqld

09:53:43 AM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
09:53:44 AM    27    275798     88.00    416.00      0.00       0  mysqld

09:53:44 AM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
09:53:45 AM     0      1047      0.00      4.00      0.00       0  jbd2/dm-5-8
09:53:45 AM     0      1059    348.00    156.00      0.00       0  systemd-journal
09:53:45 AM     0      1064      0.00      8.00      0.00       0  jbd2/dm-9-8
09:53:45 AM     0      1079      0.00      4.00      0.00       1  jbd2/dm-4-8
09:53:45 AM    27    275798    472.00    760.00      0.00       0  mysqld
09:53:45 AM     0   1135691      0.00      4.00      0.00       0  pidstat
09:53:45 AM     0   1143255      0.00      0.00      0.00       1  kworker/u256:0-events_unbound

09:53:45 AM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
09:53:46 AM    27    275798  13408.00    948.00      0.00       0  mysqld
09:53:46 AM     0   1169146      0.00      0.00      0.00       1  kworker/0:2-events_power_efficient

09:53:46 AM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
09:53:47 AM    27    275798  18688.00    744.00      0.00       0  mysqld&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Other remarks:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;I can force the messages to appear by reformatting an LVM partition to ext4 and remounting it.&lt;/LI&gt;&lt;LI&gt;The same happens when, on another disk (/dev/sdc for example), I create a new partition and mount it on a newly created folder.&lt;/LI&gt;&lt;LI&gt;The messages seem to go away after a virtual machine reboot, but they come back about 2 weeks later.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you help me find out what tests I can still perform on the virtual servers,&lt;BR /&gt;or on the VMware side?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards.&lt;/P&gt;</description>
    <pubDate>Thu, 27 Feb 2025 07:59:09 GMT</pubDate>
    <dc:creator>RAPHAELLEB</dc:creator>
    <dc:date>2025-02-27T07:59:09Z</dc:date>
    <item>
      <title>[LINUX KERNEL] Add. Sense: Invalid field in cdb</title>
      <link>https://community.hpe.com/t5/operating-system-vmware/linux-kernel-add-sense-invalid-field-in-cdb/m-p/7236112#M4274</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;Issue encountered:&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;Since the middle of November 2024, around 75% of my virtual machines have been spamming SCSI error messages to their journalctl logs:&lt;/P&gt;&lt;PRE&gt;Feb 25 16:29:06 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#667 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Feb 25 16:29:06 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#667 Sense Key : Illegal Request [current]&amp;nbsp;
Feb 25 16:29:06 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#667 Add. Sense: Invalid field in cdb
Feb 25 16:29:06 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#667 CDB: Write same(16) 93 08 X X X X X X X X 00 00 00 18 00 00
Feb 25 16:29:06 virtualmachine kernel: blk_update_request: critical target error, dev sdb, sector 14855984 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;It is always the same disk, /dev/sdb (the 2nd disk), with the same error messages:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Sense Key : Illegal Request [current] 
Add. Sense: Invalid field in cdb
CDB: Write same(16) 93 08
blk_update_request: critical target error, dev sdb, sector X op 0x9:(WRITE_ZEROES)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;When does it happen?&lt;/FONT&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;It happens at totally random times of the day. A virtual machine can have around 1000 such lines in journalctl on one day and none on another.&lt;/LI&gt;&lt;LI&gt;No logs that seem related to the problem appear in the vmkernel.log of any ESXi host.&lt;/LI&gt;&lt;LI&gt;The affected virtual machines share no common ESXi host or datastore; about 3/4 of the virtual machines are affected, across all ESXi hosts and datastores.&lt;/LI&gt;&lt;LI&gt;The messages appear on all the different virtual machine templates (databases, app, GUI...).&lt;/LI&gt;&lt;LI&gt;Some virtual machines belonging to instances (groups of around 24 virtual machines) do the same work as others, with the same versions and configurations, but do not show these messages.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;Technical environment specifications:&lt;/FONT&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Storage vendor / model: HP 3PAR 9450&lt;/LI&gt;&lt;LI&gt;Storage OS version: 3.3.2 MU1 (P15)&lt;/LI&gt;&lt;LI&gt;SAN switch vendor between storage and BladeCenter: Brocade&lt;/LI&gt;&lt;LI&gt;SAN switch integrated into the BladeCenter: Brocade 16Gb/28c SAN Switch&lt;/LI&gt;&lt;LI&gt;BladeCenter vendor / model: HPE BladeSystem c7000 Enclosure G3&lt;/LI&gt;&lt;LI&gt;BladeCenter firmware version: 4.90&lt;/LI&gt;&lt;LI&gt;Blade server vendor and models: HPE ProLiant BL460c Gen9 (32 servers) &amp;amp; ProLiant BL460c Gen10 (16 servers)&lt;/LI&gt;&lt;LI&gt;OS installed on blade servers: VMware ESXi 7.0.3 Build 23794027&lt;/LI&gt;&lt;LI&gt;Datastore types: VMFS 5 &amp;amp; 6&lt;/LI&gt;&lt;LI&gt;Disk provisioning type on guest OS side: Thick Provision Lazy Zeroed&lt;/LI&gt;&lt;LI&gt;Guest OS (virtual machine) vendor and versions: Red Hat Enterprise Linux 8.4 to 8.10&lt;/LI&gt;&lt;LI&gt;Kernel versions on guest OS:&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;4.18.0-305.el8.x86_64,&lt;/P&gt;&lt;P&gt;4.18.0-425.3.1.el8.x86_64 and&lt;/P&gt;&lt;P&gt;4.18.0-553.8.1.el8_10.x86_64&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;VMware Tools version: 12.3.5.46049 (build-22544099)&lt;/LI&gt;&lt;LI&gt;Hardware compatibility: version 19&lt;/LI&gt;&lt;LI&gt;Disk architecture inside the Linux virtual machines: LVM&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Firmware of the Gen9 blade servers:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;HP FlexFabric 10Gb 2-port 534M Adapter 7.18.82 Slot 1&lt;/LI&gt;&lt;LI&gt;HP FlexFabric 10Gb 2-port 536FLB Adapter 7.18.82 Embedded&lt;/LI&gt;&lt;LI&gt;HP QMH2572 8Gb 2P FC HBA - FC 08.08.01 Slot 2&lt;/LI&gt;&lt;LI&gt;iLO 2.82 Feb 06 2023 System Board&lt;/LI&gt;&lt;LI&gt;Intelligent Platform Abstraction Data 25.00 System Board&lt;/LI&gt;&lt;LI&gt;Intelligent Provisioning 2.50.164 System Board&lt;/LI&gt;&lt;LI&gt;Power Management Controller Firmware 1.0.9 System Board&lt;/LI&gt;&lt;LI&gt;Power Management Controller FW Bootloader 1.0 System Board&lt;/LI&gt;&lt;LI&gt;Redundant System ROM I36 v2.60 (05/21/2018) System Board&lt;/LI&gt;&lt;LI&gt;SAS Programmable Logic Device Version 0x03 System Board&lt;/LI&gt;&lt;LI&gt;Server Platform Services (SPS) Firmware 3.1.3.21.4 System Board&lt;/LI&gt;&lt;LI&gt;Smart HBA H244br 7.00 Embedded&lt;/LI&gt;&lt;LI&gt;System Programmable Logic Device Version 0x17 System Board&lt;/LI&gt;&lt;LI&gt;System ROM I36 v2.90 (04/29/2021) System Board&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Firmware of the Gen10 blade servers:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Drive HPG4 Port=1I:Box=1:Bay=1&lt;/LI&gt;&lt;LI&gt;Drive HPG4 Port=1I:Box=1:Bay=2&lt;/LI&gt;&lt;LI&gt;Embedded Video Controller 2.5 Embedded Device&lt;/LI&gt;&lt;LI&gt;HP FlexFabric 10Gb 2-port 534M Adapter 7.18.82 Mezzanine Slot 2&lt;/LI&gt;&lt;LI&gt;HP FlexFabric 10Gb 2-port 536FLB Adapter 7.18.82 Embedded ALOM&lt;/LI&gt;&lt;LI&gt;HP QMH2672 16Gb FC HBA for BladeSystem c-Class 8.08.232 Mezzanine Slot 1&lt;/LI&gt;&lt;LI&gt;HPE Smart Array P204i-b SR Gen10 4.11 Embedded RAID&lt;/LI&gt;&lt;LI&gt;HPE Smart Storage Energy Pack 1 Firmware 0.70 Embedded Device&lt;/LI&gt;&lt;LI&gt;iLO 5 2.55 Oct 01 2021 System Board&lt;/LI&gt;&lt;LI&gt;Innovation Engine (IE) Firmware 0.2.2.3 System Board&lt;/LI&gt;&lt;LI&gt;Intelligent Platform Abstraction Data 9.4.0 Build 18 System Board&lt;/LI&gt;&lt;LI&gt;Intelligent Provisioning 3.31.63 System Board&lt;/LI&gt;&lt;LI&gt;Power Management Controller Firmware 1.0.7 System Board&lt;/LI&gt;&lt;LI&gt;Power Management Controller FW Bootloader 1.1 System Board&lt;/LI&gt;&lt;LI&gt;Redundant System ROM I41 v2.54 (09/03/2021) System Board&lt;/LI&gt;&lt;LI&gt;Server Platform Services (SPS) Descriptor 1.2 0 System Board&lt;/LI&gt;&lt;LI&gt;Server Platform Services (SPS) Firmware 4.1.4.505 System Board&lt;/LI&gt;&lt;LI&gt;System Programmable Logic Device 0x1E System Board&lt;/LI&gt;&lt;LI&gt;System ROM I41 v3.34 (09/30/2024) System Board&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;Impact&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;So far, no actual impact has been detected, but since the word "critical" appears in the message, it generates a large number of tickets in our monitoring tool.&lt;/P&gt;&lt;P&gt;It also makes things harder for the application vendor when there is an application issue to debug.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="5"&gt;What I tried&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;First of all, it is very difficult to know what is causing the problem, as there are many intermediaries between the storage and the Linux virtual machines.&lt;/P&gt;&lt;P&gt;On my side, I am only in charge of the layers from the BladeCenters up to the Linux virtual machines. The SAN switches and the storage are managed by another team in my company, but I am currently working with them to resolve this.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As described above, this is not new: the issue was detected around the middle of November 2024.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Timeline:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Beginning of September 2024 to beginning of December 2024: major server application updates of 5 instances (groups of around 24 virtual machines), which also required OS, kernel and system package updates.&lt;/LI&gt;&lt;LI&gt;End of October 2024 to end of November 2024 (48 ESXi hosts to update): minor VMware ESXi update, 7.0.3 build 19482537 to build 23794027.&lt;/LI&gt;&lt;LI&gt;Middle of November: a massive number of tickets from the monitoring tool was detected, all on the same error message.&lt;/LI&gt;&lt;LI&gt;Middle of November: a VMware ticket was opened. They said VMware Tools was not updated to the latest version; I updated it during the application updates of the 5 instances.&lt;/LI&gt;&lt;LI&gt;End of January 2025: the messages came back on these 5 updated instances (not on all virtual machines), and remained on the non-updated instances.&lt;/LI&gt;&lt;LI&gt;Beginning of February 2025: a VMware ticket was opened. They said to ask the storage vendor.&lt;/LI&gt;&lt;LI&gt;Beginning of February 2025: a Red Hat ticket was opened. They also said to ask the storage vendor.&lt;/LI&gt;&lt;LI&gt;Middle of February 2025: a ticket was opened with the storage team on my company's side. It is still in progress.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It probably began before the middle of November, but that is when it started to become really disruptive.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;However, someone from my company analyzed my issue and told me:&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;It's strange because:&lt;/P&gt;&lt;P&gt;1 - Your virtual machine disks are&amp;nbsp;&lt;STRONG&gt;Thick&lt;/STRONG&gt; provisioned&lt;/P&gt;&lt;P&gt;2 - Verification of the VPD page in-guest:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[root@virtualmachine ~]# sg_vpd --page=0xb2 /dev/sdb&lt;BR /&gt;Logical block provisioning VPD page (SBC):&lt;BR /&gt;&amp;nbsp; Unmap command supported (LBPU): 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&lt;STRONG&gt;Write same (16) with unmap bit supported (LBPWS): 0&lt;/STRONG&gt;&amp;nbsp;&amp;nbsp;&lt;STRONG&gt;&amp;nbsp;&amp;lt;------&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp; Write same (10) with unmap bit supported (LBPWS10): 0&lt;BR /&gt;&amp;nbsp; Logical block provisioning read zeros (LBPRZ): 0&lt;BR /&gt;&amp;nbsp; Anchored LBAs supported (ANC_SUP): 0&lt;BR /&gt;&amp;nbsp; Threshold exponent: 1&lt;BR /&gt;&amp;nbsp; Descriptor present (DP): 0&lt;BR /&gt;&amp;nbsp; Minimum percentage: 0 [not reported]&lt;BR /&gt;&amp;nbsp; Provisioning type: 0 (not known or fully provisioned)&lt;BR /&gt;&amp;nbsp; Threshold percentage: 0 [percentages not supported]&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;3 - Why does the OS / application seem to send WRITE_SAME(16) commands?&lt;/P&gt;&lt;P&gt;Example:&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;Feb 01 20:33:51 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#130 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Feb 01 20:33:51 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#130 Sense Key : Illegal Request [current]
Feb 01 20:33:51 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#130 Add. Sense: Invalid field in cdb
Feb 01 20:33:51 virtualmachine kernel: sd 0:0:1:0: [sdb] tag#130 CDB: Write same(16) 93 08 X X X X X X X X 00 00 00 08 00 00
Feb 01 20:33:51 virtualmachine kernel: blk_update_request: critical target error, dev sdb, sector 30670800 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;It's indeed the storage that responds with an error to the write_same command (driver_sense), but what we need to understand is why the OS sends these SCSI commands when it has been told they are not supported.&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I analyzed whether a service was the cause of the problem by using the pidstat command:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;pidstat -d 1&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But no service or process seems to be writing to the disk when the message appears:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;Feb 25 16:24:04 virtualmachine kernel: blk_update_request: critical target error, dev sdb

04:24:02 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
04:24:03 PM   247   2152101      0.00     16.00      0.00       0  java

04:24:03 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
04:24:04 PM     0      1227    256.00    144.00      0.00       0  systemd-journal
04:24:04 PM   247   2152101      0.00   2052.00      0.00       0  java

04:24:04 PM     0   3220187      0.00      0.00      0.00       1  kworker/u256:0-flush-253:3
04:24:04 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
04:24:05 PM     0   3672057      0.00      8.00      0.00       0  rsyslogd

04:24:05 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
04:24:06 PM     0      1230      0.00     32.00      0.00       0  jbd2/dm-3-8&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;Feb 25 12:01:36 othervirtualmachine kernel: blk_update_request: critical target error, dev sdb,

12:01:33 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
12:01:34 PM     0      1214      0.00     24.00      0.00       0  jbd2/dm-10-8
12:01:34 PM     0      1257      0.00     60.00      0.00       0  systemd-journal
12:01:34 PM     0      2588      0.00      0.00      8.00       0  xxxxxxx

12:01:34 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command

12:01:35 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
12:01:36 PM     0      1257      0.00    116.00      0.00       0  systemd-journal
12:01:36 PM     0   1449141      0.00      0.00      0.00       1  kworker/u256:1-events_unbound

12:01:36 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
12:01:37 PM     0   1369065      0.00      4.00      0.00       0  pidstat
12:01:37 PM     0   3553469      0.00      4.00      0.00       0  vmtoolsd

12:01:37 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
12:01:38 PM     0       757      0.00     24.00      0.00       1  jbd2/dm-0-8
12:01:38 PM     0      1235      0.00     32.00      0.00       0  jbd2/dm-13-8
12:01:38 PM     0      1239      0.00      4.00      0.00       0  jbd2/dm-8-8
12:01:38 PM     0      1251      0.00     12.00      0.00       0  jbd2/dm-14-8
12:01:38 PM     0      1257      0.00     16.00      0.00       0  systemd-journal
12:01:38 PM     0      1260      0.00     48.00      0.00       0  jbd2/dm-6-8&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;Feb 25 09:53:44 othervirtualmachine kernel: blk_update_request: critical target error, dev sdb,

09:53:41 AM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
09:53:42 AM     0      1079      0.00      0.00      0.00       1  jbd2/dm-4-8
09:53:42 AM    27    275798    578.22    827.72      0.00       0  mysqld

09:53:42 AM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
09:53:43 AM    27    275798    376.00    656.00      0.00       0  mysqld

09:53:43 AM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
09:53:44 AM    27    275798     88.00    416.00      0.00       0  mysqld

09:53:44 AM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
09:53:45 AM     0      1047      0.00      4.00      0.00       0  jbd2/dm-5-8
09:53:45 AM     0      1059    348.00    156.00      0.00       0  systemd-journal
09:53:45 AM     0      1064      0.00      8.00      0.00       0  jbd2/dm-9-8
09:53:45 AM     0      1079      0.00      4.00      0.00       1  jbd2/dm-4-8
09:53:45 AM    27    275798    472.00    760.00      0.00       0  mysqld
09:53:45 AM     0   1135691      0.00      4.00      0.00       0  pidstat
09:53:45 AM     0   1143255      0.00      0.00      0.00       1  kworker/u256:0-events_unbound

09:53:45 AM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
09:53:46 AM    27    275798  13408.00    948.00      0.00       0  mysqld
09:53:46 AM     0   1169146      0.00      0.00      0.00       1  kworker/0:2-events_power_efficient

09:53:46 AM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
09:53:47 AM    27    275798  18688.00    744.00      0.00       0  mysqld&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Other remarks:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;I can force the messages by formatting an LVM partition to ext4 and remounting it.&lt;/LI&gt;&lt;LI&gt;Also by creating a new partition on another disk (/dev/sdc for example), formatting it, and mounting it on a new folder.&lt;/LI&gt;&lt;LI&gt;The messages seem to go away when a virtual machine is rebooted, but they come back about two weeks later.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you help me find out what other tests I can perform on the virtual servers?&lt;BR /&gt;Or on the VMware side?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards.&lt;/P&gt;</description>
      <pubDate>Thu, 27 Feb 2025 07:59:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-vmware/linux-kernel-add-sense-invalid-field-in-cdb/m-p/7236112#M4274</guid>
      <dc:creator>RAPHAELLEB</dc:creator>
      <dc:date>2025-02-27T07:59:09Z</dc:date>
    </item>
    <item>
      <title>[LINUX KERNEL] Add. Sense: Invalid field in cdb</title>
      <link>https://community.hpe.com/t5/operating-system-vmware/linux-kernel-add-sense-invalid-field-in-cdb/m-p/7236750#M4276</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi&amp;nbsp;RAPHAELLEB,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The error messages indicate that the storage target rejected the kernel's request, so this is likely an issue on the other side, not the OS end.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Kindly ask VMware and the 3PAR team to review from their end.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Please remember: it is not an OS issue.&lt;/SPAN&gt;&lt;/P&gt;
</description>
      <pubDate>Thu, 06 Mar 2025 14:21:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-vmware/linux-kernel-add-sense-invalid-field-in-cdb/m-p/7236750#M4276</guid>
      <dc:creator>utnoor</dc:creator>
      <dc:date>2025-03-06T14:21:01Z</dc:date>
    </item>
    <item>
      <title>Re: [LINUX KERNEL] Add. Sense: Invalid field in cdb</title>
      <link>https://community.hpe.com/t5/operating-system-vmware/linux-kernel-add-sense-invalid-field-in-cdb/m-p/7236755#M4277</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I forgot to mention some information:&lt;/P&gt;&lt;P&gt;The storage is shared internally with other projects on other ESXi platforms, and none of those projects have this problem.&lt;BR /&gt;So the problem does not seem to be a storage issue, especially since the ESXi hosts log no errors.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I finally found the difference between the Linux virtual machines that have the problem and those that do not.&lt;BR /&gt;The file "/sys/class/scsi_disk/0\:0\:1\:0/provisioning_mode" is set to "disabled" on the machines that have the problem, while the machines that do not are set to “unmap” or “full”.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The "disabled" virtual machines seem to switch to "full" mode after a simple reboot.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But that does not explain why provisioning_mode changes at random on VMs built from the same OS template with the same packages.&lt;/P&gt;</description>
      <pubDate>Thu, 06 Mar 2025 14:34:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-vmware/linux-kernel-add-sense-invalid-field-in-cdb/m-p/7236755#M4277</guid>
      <dc:creator>RAPHAELLEB</dc:creator>
      <dc:date>2025-03-06T14:34:13Z</dc:date>
    </item>
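The provisioning_mode check described in the reply above can be scripted to survey every SCSI disk at once; a minimal sketch, assuming only the sysfs path quoted in the post (the `SYSFS` override and function name are illustrative):

```shell
#!/bin/sh
# Report the provisioning_mode of every SCSI disk. "disabled" is the
# state the post correlates with the "Invalid field in cdb" spam.
# SYSFS is overridable so the loop can be exercised against a fake tree.
list_provisioning_modes() {
  sysfs="${SYSFS:-/sys}"
  for f in "$sysfs"/class/scsi_disk/*/provisioning_mode; do
    [ -e "$f" ] || continue   # glob may match nothing on some hosts
    printf '%s: %s\n' "$(basename "$(dirname "$f")")" "$(cat "$f")"
  done
  return 0
}
list_provisioning_modes
```

Running this across the fleet (e.g. via ssh) would quickly separate the "disabled" machines from the "unmap"/"full" ones without rebooting anything.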
    <item>
      <title>Re: [LINUX KERNEL] Add. Sense: Invalid field in cdb</title>
      <link>https://community.hpe.com/t5/operating-system-vmware/linux-kernel-add-sense-invalid-field-in-cdb/m-p/7236757#M4278</link>
      <description>&lt;P&gt;&lt;SPAN data-teams="true"&gt;Hi&amp;nbsp;RAPHAELLEB,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-teams="true"&gt;If you have a Red Hat login ID, you can review this article =&amp;gt;&amp;nbsp;&lt;A href="https://access.redhat.com/solutions/1256863" target="_blank"&gt;https://access.redhat.com/solutions/1256863&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-teams="true"&gt;As per the Red Hat verified article, it is not an OS issue. Those messages indicate that the server successfully submitted the I/O to the target, but the target rejected the I/O request with an error.&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-teams="true"&gt;This may indicate that the LUNs have been unpresented from the storage system without the server being aware of the change.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-teams="true"&gt;Ask your storage vendor to ascertain why, and under what circumstances, the storage is returning the Illegal Request sense to the server.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-teams="true"&gt;The SCSI additional sense codes can be reviewed at&amp;nbsp;&lt;A href="https://www.t10.org/lists/asc-num.htm" target="_blank"&gt;https://www.t10.org/lists/asc-num.htm&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
</description>
      <pubDate>Thu, 06 Mar 2025 14:55:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-vmware/linux-kernel-add-sense-invalid-field-in-cdb/m-p/7236757#M4278</guid>
      <dc:creator>utnoor</dc:creator>
      <dc:date>2025-03-06T14:55:11Z</dc:date>
    </item>
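Beyond the advice in the replies, a commonly cited mitigation for repeated WRITE SAME rejections is to zero the device's `write_same_max_bytes` limit so the block layer stops issuing the command; the kernel does this by itself after the first failure, which would be consistent with the messages stopping and then returning after a reboot resets the limit. A sketch, assuming the device name sdb from the post (the `SYSBLOCK` override and helper name are illustrative, not from the thread):

```shell
#!/bin/sh
# Possible mitigation (not suggested in the thread itself): set a
# device's write_same_max_bytes to 0 so the kernel never sends
# WRITE SAME(16) to it again. Requires root on a real host.
# SYSBLOCK is overridable so the helper can be tested on a fake tree.
disable_write_same() {
  dev="$1"
  limit="${SYSBLOCK:-/sys/block}/$dev/queue/write_same_max_bytes"
  if [ -w "$limit" ]; then
    echo 0 > "$limit"        # 0 bytes = never use WRITE SAME
    echo "WRITE SAME disabled for $dev"
  else
    echo "cannot write $limit (need root, or device absent)" >&2
    return 1
  fi
}
# Example invocation; harmless if sdb is absent or not writable:
disable_write_same sdb || true
```

Note this setting does not survive a reboot, so it treats the symptom only; the root cause (why provisioning_mode ends up "disabled") still needs the storage-side review suggested above.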
  </channel>
</rss>

