Pool Master
-
I need to reboot one of my host servers, which is the pool master. There are no VMs on it. I tried to change the pool master to another server last night and got the error "Internal Error - Missing Column". The pool master has been updated, along with 2 of the other 4 hosts. The host that I am trying to move the pool master to is up to date as well.
-
Hi,
You don't switch the master while doing upgrades in a pool. Just reboot it.
-
@olivierlambert Ok, so just rebooting the master won't cause any issues?
-
No, it's perfectly fine to reboot the master. Obviously, without HA enabled on the pool.
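For reference, you can confirm that HA is disabled before rebooting the master. This is only a sketch using the standard `xe` CLI from the host console; it assumes a single-pool host, and the command should print `false`:

```shell
# Look up the pool UUID, then read its ha-enabled flag.
# Expect "false" before rebooting the pool master.
POOL_UUID=$(xe pool-list --minimal)
xe pool-param-get uuid="$POOL_UUID" param-name=ha-enabled
```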
-
Ok. Thank you, I appreciate it!
-
I rebooted the pool master. Even though it looks like it came up ok, it is not connected to storage. In XCP-ng Center it is showing a red X for the SR, and these are the errors in dmesg:
[Tue Jul 8 14:43:19 2025] Loading iSCSI transport class v2.0-870.
[Tue Jul 8 14:43:19 2025] iscsi: registered transport (tcp)
[Tue Jul 8 14:43:20 2025] scsi host7: iSCSI Initiator over TCP/IP
[Tue Jul 8 14:43:20 2025] scsi 7:0:0:1: Direct-Access SYNOLOGY Storage 4.0 PQ: 0 ANSI: 5
[Tue Jul 8 14:43:20 2025] scsi 7:0:0:1: alua: supports implicit TPGS
[Tue Jul 8 14:43:20 2025] scsi 7:0:0:1: alua: device naa.60014059c706a83d1e77d47a8da5c5d1 port group 0 rel port 1
[Tue Jul 8 14:43:20 2025] sd 7:0:0:1: Attached scsi generic sg1 type 0
[Tue Jul 8 14:43:20 2025] sd 7:0:0:1: alua: transition timeout set to 60 seconds
[Tue Jul 8 14:43:20 2025] sd 7:0:0:1: alua: port group 00 state A non-preferred supports TOlUSNA
[Tue Jul 8 14:43:25 2025] sd 7:0:0:1: [sdb] 26109542400 512-byte logical blocks: (13.4 TB/12.2 TiB)
[Tue Jul 8 14:43:25 2025] sd 7:0:0:1: [sdb] Write Protect is off
[Tue Jul 8 14:43:25 2025] sd 7:0:0:1: [sdb] Mode Sense: 43 00 10 08
[Tue Jul 8 14:43:25 2025] sd 7:0:0:1: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
[Tue Jul 8 14:43:25 2025] sd 7:0:0:1: [sdb] Attached SCSI disk
[Tue Jul 8 14:43:26 2025] sd 7:0:0:1: [sdb] Synchronizing SCSI cache
[Tue Jul 8 14:43:32 2025] Buffer I/O error on dev sdb, logical block 3263692798, async page read
[Tue Jul 8 14:43:32 2025] scsi 7:0:0:1: alua: Detached
[Tue Jul 8 14:43:40 2025] scsi host7: iSCSI Initiator over TCP/IP
[Tue Jul 8 14:43:40 2025] scsi 7:0:0:1: Direct-Access SYNOLOGY Storage 4.0 PQ: 0 ANSI: 5
[Tue Jul 8 14:43:40 2025] scsi 7:0:0:1: alua: supports implicit TPGS
[Tue Jul 8 14:43:40 2025] scsi 7:0:0:1: alua: device naa.60014059c706a83d1e77d47a8da5c5d1 port group 0 rel port 1
[Tue Jul 8 14:43:40 2025] sd 7:0:0:1: Attached scsi generic sg1 type 0
[Tue Jul 8 14:43:40 2025] sd 7:0:0:1: alua: transition timeout set to 60 seconds
[Tue Jul 8 14:43:40 2025] sd 7:0:0:1: alua: port group 00 state A non-preferred supports TOlUSNA
[Tue Jul 8 14:43:46 2025] sd 7:0:0:1: [sdb] 26109542400 512-byte logical blocks: (13.4 TB/12.2 TiB)
[Tue Jul 8 14:43:46 2025] sd 7:0:0:1: [sdb] Write Protect is off
[Tue Jul 8 14:43:46 2025] sd 7:0:0:1: [sdb] Mode Sense: 43 00 10 08
[Tue Jul 8 14:43:46 2025] sd 7:0:0:1: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
[Tue Jul 8 14:43:49 2025] ldm_validate_partition_table(): Disk read failed.
[Tue Jul 8 14:43:49 2025] sdb: unable to read partition table
[Tue Jul 8 14:43:49 2025] sd 7:0:0:1: [sdb] Attached SCSI disk
[Tue Jul 8 14:43:49 2025] sd 7:0:0:1: [sdb] Synchronizing SCSI cache
[Tue Jul 8 14:43:49 2025] scsi 7:0:0:1: alua: Detached
-
You have an issue with your storage configuration. It's hard to tell more without knowing more or digging more.
-
@olivierlambert Dang, ok. I waited a few minutes, then clicked Connect in XOA for that host and it connected. Not sure what to do, really.
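Clicking Connect in XOA re-plugs the SR's PBD on that host. The same step can be done from the host CLI; this is a sketch, where `<sr-uuid>` is a placeholder for your iSCSI SR's UUID:

```shell
# Find this host's PBD for the SR (the per-host link between SR and
# host), then plug it to re-attach the storage.
HOST_UUID=$(xe host-list name-label="$(hostname)" --minimal)
PBD_UUID=$(xe pbd-list sr-uuid=<sr-uuid> host-uuid="$HOST_UUID" --minimal)
xe pbd-plug uuid="$PBD_UUID"
```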