XCP-ng

    Logs Partition Full

    Xen Orchestra
    x-rayd

      Any idea what the problem is?

      [20:31 df-c01-node04 log]# tail -50 xensource.log
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|16 ||xenops_server] TASK.signal 599093 (object deleted)
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 ||xenops_server] Queue.pop returned ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|xenops_server] Task 599094 reference events: ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |events|xenops_server] Received an event on managed VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |queue|xenops_server] Queue.push ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"] onto redirected f828ce90-06e0-024f-9c9b-3f30b1a959b4:[  ]
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: processing event for VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Will update VM.allowed_operations because guest_agent has changed.
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: Updating VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 domid 9 guest_agent
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404650 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:d1538d321269 created by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404650 UNIX /var/lib/xcp/xapi|session.slave_login D:8d7d2a8f0ca6|xapi] Session.create trackid=59cc70e9542b52fdcd1622725fa443c3 pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |events|xenops_server] Received an event on managed VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |queue|xenops_server] Queue.push ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"] onto redirected f828ce90-06e0-024f-9c9b-3f30b1a959b4:[ ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"] ]
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404651 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:6320774722f1 created by task D:8d7d2a8f0ca6
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404652 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VM.update_allowed_operations D:e5300a15f1ea created by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404652 UNIX /var/lib/xcp/xapi|dispatch:VM.update_allowed_operations D:e5300a15f1ea|taskhelper] task VM.update_allowed_operations R:2852f822293e (uuid:ecd676c2-0a35-ad5c-7f38-48fc3e383d52) created (trackid=59cc70e9542b52fdcd1622725fa443c3) by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404652 UNIX /var/lib/xcp/xapi|VM.update_allowed_operations R:2852f822293e|audit] VM.update_allowed_operations: VM = 'f828ce90-06e0-024f-9c9b-3f30b1a959b4 (DF-server24.df-webhosting.de)'
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|xenops_server] VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 is not requesting any attention
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|xenops_server] VM_DB.signal f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|task_server] Task 599094 completed; duration = 0
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 ||xenops_server] TASK.signal 599094 (object deleted)
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 ||xenops_server] Queue.pop returned ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|xenops_server] Task 599095 reference events: ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404653 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:e038fd45a92c created by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404653 UNIX /var/lib/xcp/xapi|session.logout D:624f223cdcff|xapi] Session.destroy trackid=59cc70e9542b52fdcd1622725fa443c3
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Processing event: ["Vm","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenops event on VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|632725 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops_server] VM.stat f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|xenops_server] VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 is not requesting any attention
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|xenops_server] VM_DB.signal f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|task_server] Task 599095 completed; duration = 0
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 ||xenops_server] TASK.signal 599095 (object deleted)
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: processing event for VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Will update VM.allowed_operations because guest_agent has changed.
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: Updating VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 domid 9 guest_agent
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404654 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:3041ad7a5d1d created by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404654 UNIX /var/lib/xcp/xapi|session.slave_login D:fa655deb1721|xapi] Session.create trackid=aa48585d92a36631054ef9468218522c pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404655 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:a1e04c4c79aa created by task D:fa655deb1721
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1235 |xapi events D:f60b314e49a9|dummytaskhelper] task timeboxed_rpc D:98ed995dc8af created by task D:f60b314e49a9
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404656 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:event.from D:c9f2b3e839ca created by task D:f60b314e49a9
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404657 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VM.update_allowed_operations D:4a10137d6bc3 created by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404657 UNIX /var/lib/xcp/xapi|dispatch:VM.update_allowed_operations D:4a10137d6bc3|taskhelper] task VM.update_allowed_operations R:e4f434e89b02 (uuid:3e5a0652-8d8f-b774-53d9-f325c4cc63a1) created (trackid=aa48585d92a36631054ef9468218522c) by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404657 UNIX /var/lib/xcp/xapi|VM.update_allowed_operations R:e4f434e89b02|audit] VM.update_allowed_operations: VM = 'f828ce90-06e0-024f-9c9b-3f30b1a959b4 (DF-server24.df-webhosting.de)'
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404658 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:9eeefa7124f9 created by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404658 UNIX /var/lib/xcp/xapi|session.logout D:b558e450cae3|xapi] Session.destroy trackid=aa48585d92a36631054ef9468218522c
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Processing event: ["Vm","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenops event on VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|632727 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops_server] VM.stat f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: ignoring event for VM f828ce90-06e0-024f-9c9b-3f30b1a959b4: metadata has not changed
      Jul  1 20:32:06 df-c01-node04 xcp-rrdd: [ warn|df-c01-node04|0 monitor|main|rrdd_server] setting skip-cycles-after-error for plugin tap-31773-25 to 256
      Jul  1 20:32:06 df-c01-node04 xcp-rrdd: [ warn|df-c01-node04|0 monitor|main|rrdd_server] Failed to process plugin: tap-31773-25 (Rrd_protocol.Invalid_header_string)
      [20:32 df-c01-node04 log]#
      
    x-rayd

      xensource.log was too big: 2 GB.
      I deleted the log file, but the disk space was not freed. Now I can't see xensource.log any more. Why?

    stormi (Vates 🪐 XCP-ng Team)

      Because the file descriptor is still held open by the process that was using it, so the file isn't really gone until that descriptor is released.

          Try xe-toolstack-restart
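
      If you want to see this behaviour for yourself, here is a toy demonstration (nothing XCP-ng specific; the file name and the tail process are just placeholders):

      dd if=/dev/zero of=/tmp/demo bs=1M count=100   # create a 100 MB file
      tail -f /tmp/demo & TAILPID=$!                 # keep a descriptor open on it
      rm /tmp/demo                                   # the name is gone...
      df -h /tmp                                     # ...but the space is still in use
      kill $TAILPID                                  # release the descriptor
      df -h /tmp                                     # now the space is actually freed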

    x-rayd

            @stormi said in Logs Partition Full:

            xe-toolstack-restart

      That didn't help! What can I do now?

    stormi (Vates 🪐 XCP-ng Team)

      What's the output from lsof +L1? (+L1 lists open files with a link count below 1, i.e. files that have been deleted but are still held open.)

    x-rayd

                @stormi said in Logs Partition Full:

                lsof +L1

                [10:38 df-c01-node04 ~]# lsof +L1
                COMMAND     PID USER   FD   TYPE DEVICE   SIZE/OFF NLINK     NODE NAME
                rsyslogd   1070 root   11w   REG    8,5 1945839857     0       15 /var/log/xensource.log (deleted)
                monitor    2002 root    7u   REG    8,1        141     0   180233 /tmp/tmpf6sKcHH (deleted)
                ovsdb-ser  2003 root    7u   REG    8,1        141     0   180233 /tmp/tmpf6sKcHH (deleted)
                stunnel   13458 root    2w   REG   0,19        861     0 97301510 /run/nonpersistent/forkexecd/stunnelcd2b39.log (deleted)
                [17:21 df-c01-node04 ~]#
                
    stormi (Vates 🪐 XCP-ng Team)

      So the deleted file is still held open by rsyslogd. Restart that service.
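
      As an aside: if you need the space back immediately, you can truncate the deleted file in place through the descriptor that is still open. A sketch using the PID (1070) and FD (11) shown in the lsof output above:

      : > /proc/1070/fd/11    # truncate the deleted xensource.log to zero bytes
      df -h /var/log          # the space should be released right away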

    x-rayd

      What is the command?

    x-rayd

                      ??

    stormi (Vates 🪐 XCP-ng Team)

      Restarting a service on Linux is not something only we can answer, so you could already have found the answer on the net. The only catch is that the service name is actually rsyslog, not rsyslogd, as systemctl | grep rsyslog shows.

                        systemctl restart rsyslog
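
      Afterwards you can verify that the space really came back and that no process still holds a deleted log file:

      df -h /var/log    # usage should have dropped
      lsof +L1          # ideally no more /var/log entries with NLINK 0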
                        
    x-rayd

      Thanks, that helped!

    x-rayd

      What is the problem here?

                            [1041538.079027] block tdbb: sector-size: 512/512 capacity: 104857600
                            [1041614.490832] block tdag: sector-size: 512/512 capacity: 41943040
                            [1041665.112325] block tdah: sector-size: 512/512 capacity: 83886080
                            [1041682.920728] block tdal: sector-size: 512/512 capacity: 83886080
                            [1041687.977055] block tdan: sector-size: 512/512 capacity: 83886080
                            [1041775.983839] block tdar: sector-size: 512/512 capacity: 314572800
                            [1041784.022923] block tdat: sector-size: 512/512 capacity: 314572800
                            [1041788.477265] block tdau: sector-size: 512/512 capacity: 314572800
                            [1041797.083981] Buffer I/O error on dev dm-65, logical block 13134816, async page read
                            [1041975.964437] Buffer I/O error on dev dm-65, logical block 13134756, async page read
                            [1041987.007220] block tdad: sector-size: 512/512 capacity: 52428800
                            [1042011.762333] block tdad: sector-size: 512/512 capacity: 314572800
                            [1042025.513177] block tdal: sector-size: 512/512 capacity: 314572800
                            [1042030.608114] block tdal: sector-size: 512/512 capacity: 314572800
                            [1042046.800416] block tdad: sector-size: 512/512 capacity: 52428800
                            [1042051.052154] block tdah: sector-size: 512/512 capacity: 52428800
                            [1042060.751052] block tdal: sector-size: 512/512 capacity: 52428800
                            [1042075.720653] block tdan: sector-size: 512/512 capacity: 52428800
                            [1042092.955679] block tdbb: sector-size: 512/512 capacity: 52428800
                            [1042098.149160] block tdbe: sector-size: 512/512 capacity: 52428800
                            [1042163.718485] Buffer I/O error on dev dm-46, logical block 10508192, async page read
                            [1042234.057144] Buffer I/O error on dev dm-34, logical block 39400446, async page read
                            [1042533.212350] Buffer I/O error on dev dm-34, logical block 13134816, async page read
                            [1042721.551045] Buffer I/O error on dev dm-34, logical block 264073, async page read
                            [1044849.455053] Buffer I/O error on dev dm-12, logical block 39400062, async page read
                            [1046391.419666] Buffer I/O error on dev dm-12, logical block 3941302, async page read
                            [1049772.497399] Buffer I/O error on dev dm-12, logical block 6567872, async page read
                            [1049857.595545] Buffer I/O error on dev dm-12, logical block 6567550, async page read
                            [1049929.102838] Buffer I/O error on dev dm-12, logical block 6567822, async page read
                            [1050167.988714] Buffer I/O error on dev dm-12, logical block 26267563, async page read
                            [1050366.554847] Buffer I/O error on dev dm-12, logical block 6567862, async page read
                            [1050776.365052] Buffer I/O error on dev dm-12, logical block 65665963, async page read
                            [1051056.348013] Buffer I/O error on dev dm-12, logical block 5255136, async page read
                            [1051092.751391] Buffer I/O error on dev dm-12, logical block 13134724, async page read
                            [1051328.387483] Buffer I/O error on dev dm-12, logical block 3941344, async page read
                            [1051711.573576] Buffer I/O error on dev dm-12, logical block 13134752, async page read
                            [1051848.129739] Buffer I/O error on dev dm-12, logical block 6567844, async page read
                            [1051992.984716] Buffer I/O error on dev dm-12, logical block 105064334, async page read
                            [1052434.107654] Buffer I/O error on dev dm-12, logical block 39400326, async page read
                            [1052695.987730] Buffer I/O error on dev dm-12, logical block 13134724, async page read
                            [1052923.659130] Buffer I/O error on dev dm-12, logical block 13134726, async page read
                            [1053136.646307] Buffer I/O error on dev dm-12, logical block 64153536, async page read
                            [1053612.719918] Buffer I/O error on dev dm-12, logical block 6567808, async page read
                            [1053646.789183] Buffer I/O error on dev dm-12, logical block 6567920, async page read
                            [1053778.875359] Buffer I/O error on dev dm-12, logical block 6567808, async page read
                            [1053838.326806] Buffer I/O error on dev dm-12, logical block 10508203, async page read
                            [1054000.750328] Buffer I/O error on dev dm-12, logical block 5255076, async page read
                            [1054451.772637] Buffer I/O error on dev dm-12, logical block 6567814, async page read
                            [1103083.191699] device vif7.0 left promiscuous mode
                            [1103208.344685] block tdh: sector-size: 512/512 capacity: 104857600
                            [1103217.894245] block tdh: sector-size: 512/512 capacity: 104857600
                            [1103218.758711] device vif47.0 entered promiscuous mode
                            [1103219.717538] vif vif-47-0 vif47.0: Guest Rx ready
                            [1103286.054941] device vif11.0 left promiscuous mode
                            [1103306.933523] block tdk: sector-size: 512/512 capacity: 104857600
                            [1103316.719467] block tdk: sector-size: 512/512 capacity: 104857600
                            [1103317.536966] device vif48.0 entered promiscuous mode
                            [1103319.391521] vif vif-48-0 vif48.0: Guest Rx ready
                            
    Danp (Pro Support Team) @jtbw911

                              @jtbw911 said in Logs Partition Full:

      This often occurs when you're having storage issues (whether they are readily apparent or not), which may or may not be related to intermittent networking, and which fill up the log files.

      @x-rayd I quoted the above as it seems pertinent to the issue at hand. Do you have support for your Equallogic system? If so, I would recommend reaching out to the vendor for assistance.

    x-rayd

                                @Danp said in Logs Partition Full:

                                Quoted the above as it seems pertinent to the issue at hand. Do you have support for your Equallogic system? If so, then I would recommend reaching out to the vendor for assistance.

      Why do you think it is an Equallogic problem?

    Danp (Pro Support Team) @x-rayd

                                  @x-rayd I could be wrong, but to me "I/O error on dev xxx" indicates a hardware issue. On top of that, you continue to have issues with full log partitions and snapshots that aren't properly removed. Therefore, I assumed that you are having an issue with your storage system.
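
      If you want to rule out the local disks, one quick check (a sketch, assuming smartmontools is available in dom0 and /dev/sda is one of the local SSDs):

      smartctl -H /dev/sda                        # overall SMART health verdict
      smartctl -a /dev/sda | grep -i reallocated  # look for reallocated sectors
      dmesg | grep -c 'Buffer I/O error'          # how often the errors keep recurring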

    x-rayd

                                    @Danp said in Logs Partition Full:

                                    I could be wrong, but to me "I/O error on dev xxx" indicates a hardware issue. On top of that, you continue to have issues with full log partitions and snapshots that aren't properly removed. Therefore, I assumed that you are having an issue with your storage system.

      All VMs are running on local SSD storage.
      Only the backup snapshots run on the Equallogic.

    x-rayd

      What is dm-12?
      A virtual server? How can I check which one?

      [1053612.719918] Buffer I/O error on dev dm-12, logical block 6567808, async page read
      [1053646.789183] Buffer I/O error on dev dm-12, logical block 6567920, async page read
    Danp (Pro Support Team)

                                        Not my area of expertise. Perhaps this will point you in the right direction -- http://kb.eclipseinc.com/kb/can-i-safely-ignore-io-errors-on-dm-devices/
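
      If you want to trace it yourself, one approach (a sketch, assuming dm-12 belongs to an LVM-based SR, where the device-mapper name embeds the VDI UUID; <VDI-uuid> is a placeholder):

      ls -l /dev/mapper | grep dm-12   # find the mapper name behind dm-12
      # on an LVM SR it looks like VG_XenStorage--<SR-uuid>-VHD--<VDI-uuid>
      xe vdi-list uuid=<VDI-uuid> params=name-label,sr-name-label
      xe vbd-list vdi-uuid=<VDI-uuid> params=vm-name-label   # which VM uses that disk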

    x-rayd

                                          How can I determine which VM is causing this problem?

                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830066:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830198:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830311:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830458:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830605:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830712:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830797:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830885:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831006:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831094:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831185:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831260:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831333:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831407:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831478:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831551:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.884264:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.884519:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.884615:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.884721:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.923891:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924054:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924207:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924319:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924505:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924675:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924803:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924913:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927026:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927185:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927303:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927436:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927657:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927772:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927903:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927993:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.928136:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.928255:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.928396:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.928479:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.932786:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
                                          Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.932906:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
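
      The qemu-dm-25 in these lines is the device model for Xen domain ID 25, and c0000017 is the Windows status code STATUS_NO_MEMORY, i.e. the PV drivers in that guest are failing to allocate memory. A sketch for mapping the domain ID back to a VM (note that dom-id changes every time a VM starts):

      xe vm-list dom-id=25 params=name-label,uuid
      list_domains   # alternatively: prints domid -> VM UUID for all running domains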
                                          