XCP-ng

    Logs Partition Full

    stormi (Vates 🪐 XCP-ng Team):

      You can check what takes up space with du -sh /var/log/*

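      For example, to see the biggest items first and confirm how full the partition actually is (a quick sketch, assuming the usual coreutils in dom0):

      du -sh /var/log/* | sort -rh | head -20
      df -h /var/log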
      x-rayd:

        @stormi said in Logs Partition Full:

        du -sh /var/log/*

        5.1M    /var/log/audit.log
        24M     /var/log/audit.log.1
        0       /var/log/audit.log.1.gz
        3.5M    /var/log/audit.log.2.gz
        0       /var/log/audit.log.3.gz
        9.3M    /var/log/blktap
        0       /var/log/boot.log
        24K     /var/log/boot.log.1
        4.0K    /var/log/boot.log.2.gz
        4.0K    /var/log/boot.log.3.gz
        4.0K    /var/log/boot.log.4.gz
        4.0K    /var/log/boot.log.5.gz
        4.0K    /var/log/btmp
        4.0K    /var/log/btmp.1
        0       /var/log/btmp.1.gz
        4.0K    /var/log/cluster
        8.0K    /var/log/crit.log
        0       /var/log/crit.log.1
        0       /var/log/crit.log.1.gz
        2.9M    /var/log/cron
        40K     /var/log/cron.1
        0       /var/log/cron.1.gz
        4.0K    /var/log/grubby_prune_debug
        1004K   /var/log/installer
        4.0K    /var/log/interface-rename.log
        0       /var/log/interface-rename.log.1
        0       /var/log/interface-rename.log.1.gz
        1.3M    /var/log/kern.log
        20K     /var/log/kern.log.1
        0       /var/log/kern.log.1.gz
        16K     /var/log/lost+found
        164K    /var/log/maillog
        4.0K    /var/log/maillog.32.gz
        0       /var/log/messages
        0       /var/log/messages.1
        0       /var/log/messages.1.gz
        4.0K    /var/log/ntpstats
        4.0K    /var/log/openvswitch
        4.0K    /var/log/ovs-ctl.log
        4.0K    /var/log/ovs-ctl.log.32.gz
        4.0K    /var/log/ovsdb-server.log
        4.0K    /var/log/ovsdb-server.log.1
        0       /var/log/ovsdb-server.log.1.gz
        60K     /var/log/ovs-vswitchd.log
        4.0K    /var/log/ovs-vswitchd.log.1
        0       /var/log/ovs-vswitchd.log.1.gz
        4.0K    /var/log/ovs-xapi-sync.log
        4.0K    /var/log/ovs-xapi-sync.log.32.gz
        8.0K    /var/log/pbis-open-install.log
        4.0K    /var/log/pyperthreading-plugin.log
        114M    /var/log/sa
        8.0K    /var/log/samba
        37M     /var/log/secure
        5.6M    /var/log/SMlog
        29M     /var/log/SMlog.1
        0       /var/log/SMlog.1.gz
        5.0M    /var/log/SMlog.2.gz
        0       /var/log/spooler
        4.0K    /var/log/spooler.32.gz
        0       /var/log/tallylog
        0       /var/log/updater-plugin.log
        972K    /var/log/user.log
        4.0K    /var/log/user.log.1
        0       /var/log/user.log.1.gz
        696K    /var/log/VMSSlog
        20K     /var/log/VMSSlog.1
        0       /var/log/VMSSlog.1.gz
        68K     /var/log/wtmp
        8.0K    /var/log/xcp-rrdd-plugins.log
        13M     /var/log/xcp-rrdd-plugins.log.1
        0       /var/log/xcp-rrdd-plugins.log.1.gz
        240K    /var/log/xcp-rrdd-plugins.log.2.gz
        0       /var/log/xcp-rrdd-plugins.log.3.gz
        220K    /var/log/xen
        96M     /var/log/xensource.log
        18M     /var/log/xensource.log.1
        0       /var/log/xensource.log.1.gz
        8.2M    /var/log/xensource.log.2.gz
        9.2M    /var/log/xensource.log.3.gz
        6.5M    /var/log/xensource.log.4.gz
        8.0M    /var/log/xenstored-access.log
        33M     /var/log/xenstored-access.log.1
        0       /var/log/xenstored-access.log.1.gz
        2.3M    /var/log/xenstored-access.log.2.gz
        4.0K    /var/log/yum.log
        36K     /var/log/yum.log.1
        

        Filesystem      Size  Used Avail Use% Mounted on
        devtmpfs        3.9G  112K  3.9G   1% /dev
        tmpfs           3.9G 1004K  3.9G   1% /dev/shm
        tmpfs           3.9G   12M  3.9G   1% /run
        tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
        /dev/sda1        18G  2.4G   15G  14% /
        xenstore        3.9G     0  3.9G   0% /var/lib/xenstored
        /dev/sda5       3.9G  3.9G     0 100% /var/log

        stormi (Vates 🪐 XCP-ng Team):

           I'm not sure, but I suppose the file descriptors for the removed files may still be open somewhere, so the files still exist in some way. Try restarting the toolstack: xe-toolstack-restart.

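           If you want to confirm that before restarting anything, something along these lines should list any deleted files that are still held open; the space can then usually be reclaimed by truncating the file through /proc (untested sketch, with <pid> and <fd> as placeholders for the values lsof reports):

           lsof +L1 | grep deleted
           : > /proc/<pid>/fd/<fd>    # truncates the still-open file and frees its blocks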
          x-rayd:

             xe-toolstack-restart did not help 😞

            stormi (Vates 🪐 XCP-ng Team):

              https://serverfault.com/a/315945/520838 may help

              x-rayd:

                 Any idea what the problem is?

                [20:31 df-c01-node04 log]# tail -50 xensource.log
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|16 ||xenops_server] TASK.signal 599093 (object deleted)
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 ||xenops_server] Queue.pop returned ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|xenops_server] Task 599094 reference events: ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |events|xenops_server] Received an event on managed VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |queue|xenops_server] Queue.push ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"] onto redirected f828ce90-06e0-024f-9c9b-3f30b1a959b4:[  ]
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: processing event for VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Will update VM.allowed_operations because guest_agent has changed.
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: Updating VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 domid 9 guest_agent
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404650 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:d1538d321269 created by task D:c22abe907392
                Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404650 UNIX /var/lib/xcp/xapi|session.slave_login D:8d7d2a8f0ca6|xapi] Session.create trackid=59cc70e9542b52fdcd1622725fa443c3 pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |events|xenops_server] Received an event on managed VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |queue|xenops_server] Queue.push ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"] onto redirected f828ce90-06e0-024f-9c9b-3f30b1a959b4:[ ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"] ]
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404651 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:6320774722f1 created by task D:8d7d2a8f0ca6
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404652 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VM.update_allowed_operations D:e5300a15f1ea created by task D:c22abe907392
                Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404652 UNIX /var/lib/xcp/xapi|dispatch:VM.update_allowed_operations D:e5300a15f1ea|taskhelper] task VM.update_allowed_operations R:2852f822293e (uuid:ecd676c2-0a35-ad5c-7f38-48fc3e383d52) created (trackid=59cc70e9542b52fdcd1622725fa443c3) by task D:c22abe907392
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404652 UNIX /var/lib/xcp/xapi|VM.update_allowed_operations R:2852f822293e|audit] VM.update_allowed_operations: VM = 'f828ce90-06e0-024f-9c9b-3f30b1a959b4 (DF-server24.df-webhosting.de)'
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|xenops_server] VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 is not requesting any attention
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|xenops_server] VM_DB.signal f828ce90-06e0-024f-9c9b-3f30b1a959b4
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|task_server] Task 599094 completed; duration = 0
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 ||xenops_server] TASK.signal 599094 (object deleted)
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 ||xenops_server] Queue.pop returned ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|xenops_server] Task 599095 reference events: ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404653 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:e038fd45a92c created by task D:c22abe907392
                Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404653 UNIX /var/lib/xcp/xapi|session.logout D:624f223cdcff|xapi] Session.destroy trackid=59cc70e9542b52fdcd1622725fa443c3
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Processing event: ["Vm","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenops event on VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|632725 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops_server] VM.stat f828ce90-06e0-024f-9c9b-3f30b1a959b4
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|xenops_server] VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 is not requesting any attention
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|xenops_server] VM_DB.signal f828ce90-06e0-024f-9c9b-3f30b1a959b4
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|task_server] Task 599095 completed; duration = 0
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 ||xenops_server] TASK.signal 599095 (object deleted)
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: processing event for VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Will update VM.allowed_operations because guest_agent has changed.
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: Updating VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 domid 9 guest_agent
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404654 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:3041ad7a5d1d created by task D:c22abe907392
                Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404654 UNIX /var/lib/xcp/xapi|session.slave_login D:fa655deb1721|xapi] Session.create trackid=aa48585d92a36631054ef9468218522c pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404655 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:a1e04c4c79aa created by task D:fa655deb1721
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1235 |xapi events D:f60b314e49a9|dummytaskhelper] task timeboxed_rpc D:98ed995dc8af created by task D:f60b314e49a9
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404656 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:event.from D:c9f2b3e839ca created by task D:f60b314e49a9
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404657 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VM.update_allowed_operations D:4a10137d6bc3 created by task D:c22abe907392
                Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404657 UNIX /var/lib/xcp/xapi|dispatch:VM.update_allowed_operations D:4a10137d6bc3|taskhelper] task VM.update_allowed_operations R:e4f434e89b02 (uuid:3e5a0652-8d8f-b774-53d9-f325c4cc63a1) created (trackid=aa48585d92a36631054ef9468218522c) by task D:c22abe907392
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404657 UNIX /var/lib/xcp/xapi|VM.update_allowed_operations R:e4f434e89b02|audit] VM.update_allowed_operations: VM = 'f828ce90-06e0-024f-9c9b-3f30b1a959b4 (DF-server24.df-webhosting.de)'
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404658 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:9eeefa7124f9 created by task D:c22abe907392
                Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404658 UNIX /var/lib/xcp/xapi|session.logout D:b558e450cae3|xapi] Session.destroy trackid=aa48585d92a36631054ef9468218522c
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Processing event: ["Vm","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenops event on VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
                Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|632727 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops_server] VM.stat f828ce90-06e0-024f-9c9b-3f30b1a959b4
                Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: ignoring event for VM f828ce90-06e0-024f-9c9b-3f30b1a959b4: metadata has not changed
                Jul  1 20:32:06 df-c01-node04 xcp-rrdd: [ warn|df-c01-node04|0 monitor|main|rrdd_server] setting skip-cycles-after-error for plugin tap-31773-25 to 256
                Jul  1 20:32:06 df-c01-node04 xcp-rrdd: [ warn|df-c01-node04|0 monitor|main|rrdd_server] Failed to process plugin: tap-31773-25 (Rrd_protocol.Invalid_header_string)
                [20:32 df-c01-node04 log]#
                
                x-rayd:

                   xensource.log is too big: 2 GB.
                   I deleted the logfile, but the disk space is not freed. Now I can't see xensource.log anymore. Why?

                  stormi (Vates 🪐 XCP-ng Team):

                     Because the file descriptor is still held open by the process that was using it; the file isn't really gone until that descriptor is released.

                    Try xe-toolstack-restart

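                     You can reproduce the effect in isolation: rm only removes the directory entry, and df does not change until the last descriptor on the file is closed (illustrative sketch only; big.log is just a stand-in name):

                     tail -f /var/log/big.log &    # keep a descriptor open on the file
                     rm /var/log/big.log           # remove the directory entry
                     df -h /var/log                # space still shows as used
                     kill %1                       # close the descriptor
                     df -h /var/log                # space is released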
                    x-rayd:

                      @stormi said in Logs Partition Full:

                      xe-toolstack-restart

                       That did not help! What can I do now?

                      stormi (Vates 🪐 XCP-ng Team):

                        What's the output from lsof +L1?

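                         The +L1 option restricts lsof to files with a link count below 1, i.e. files that have been deleted but are still held open by some process. To narrow it to the log partition, something like this should do:

                         lsof +L1 | grep /var/log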
                        x-rayd:

                          @stormi said in Logs Partition Full:

                          lsof +L1

                          [10:38 df-c01-node04 ~]# lsof +L1
                          COMMAND     PID USER   FD   TYPE DEVICE   SIZE/OFF NLINK     NODE NAME
                          rsyslogd   1070 root   11w   REG    8,5 1945839857     0       15 /var/log/xensource.log (deleted)
                          monitor    2002 root    7u   REG    8,1        141     0   180233 /tmp/tmpf6sKcHH (deleted)
                          ovsdb-ser  2003 root    7u   REG    8,1        141     0   180233 /tmp/tmpf6sKcHH (deleted)
                          stunnel   13458 root    2w   REG   0,19        861     0 97301510 /run/nonpersistent/forkexecd/stunnelcd2b39.log (deleted)
                          [17:21 df-c01-node04 ~]#
                          
                          stormi (Vates 🪐 XCP-ng Team):

                             So the file is held open by rsyslogd. Restart that service.

                            x-rayd:

                               What is the command?

                              x-rayd:

                                ??

                                stormi (Vates 🪐 XCP-ng Team):

                                   Restarting a service on Linux is not something that only we can answer, so you could already have found the answer on the net. The only catch is that the service name is actually rsyslog, not rsyslogd, as systemctl | grep rsyslog shows.

                                  systemctl restart rsyslog
                                  
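                                   Afterwards it is worth checking that the space actually came back and that nothing else is still holding a deleted log open, for example:

                                   df -h /var/log
                                   lsof +L1 | grep /var/log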
                                  x-rayd:

                                     Thanks, that helped!!!

                                    x-rayd:

                                       What is the problem?

                                      [1041538.079027] block tdbb: sector-size: 512/512 capacity: 104857600
                                      [1041614.490832] block tdag: sector-size: 512/512 capacity: 41943040
                                      [1041665.112325] block tdah: sector-size: 512/512 capacity: 83886080
                                      [1041682.920728] block tdal: sector-size: 512/512 capacity: 83886080
                                      [1041687.977055] block tdan: sector-size: 512/512 capacity: 83886080
                                      [1041775.983839] block tdar: sector-size: 512/512 capacity: 314572800
                                      [1041784.022923] block tdat: sector-size: 512/512 capacity: 314572800
                                      [1041788.477265] block tdau: sector-size: 512/512 capacity: 314572800
                                      [1041797.083981] Buffer I/O error on dev dm-65, logical block 13134816, async page read
                                      [1041975.964437] Buffer I/O error on dev dm-65, logical block 13134756, async page read
                                      [1041987.007220] block tdad: sector-size: 512/512 capacity: 52428800
                                      [1042011.762333] block tdad: sector-size: 512/512 capacity: 314572800
                                      [1042025.513177] block tdal: sector-size: 512/512 capacity: 314572800
                                      [1042030.608114] block tdal: sector-size: 512/512 capacity: 314572800
                                      [1042046.800416] block tdad: sector-size: 512/512 capacity: 52428800
                                      [1042051.052154] block tdah: sector-size: 512/512 capacity: 52428800
                                      [1042060.751052] block tdal: sector-size: 512/512 capacity: 52428800
                                      [1042075.720653] block tdan: sector-size: 512/512 capacity: 52428800
                                      [1042092.955679] block tdbb: sector-size: 512/512 capacity: 52428800
                                      [1042098.149160] block tdbe: sector-size: 512/512 capacity: 52428800
                                      [1042163.718485] Buffer I/O error on dev dm-46, logical block 10508192, async page read
                                      [1042234.057144] Buffer I/O error on dev dm-34, logical block 39400446, async page read
                                      [1042533.212350] Buffer I/O error on dev dm-34, logical block 13134816, async page read
                                      [1042721.551045] Buffer I/O error on dev dm-34, logical block 264073, async page read
                                      [1044849.455053] Buffer I/O error on dev dm-12, logical block 39400062, async page read
                                      [1046391.419666] Buffer I/O error on dev dm-12, logical block 3941302, async page read
                                      [1049772.497399] Buffer I/O error on dev dm-12, logical block 6567872, async page read
                                      [1049857.595545] Buffer I/O error on dev dm-12, logical block 6567550, async page read
                                      [1049929.102838] Buffer I/O error on dev dm-12, logical block 6567822, async page read
                                      [1050167.988714] Buffer I/O error on dev dm-12, logical block 26267563, async page read
                                      [1050366.554847] Buffer I/O error on dev dm-12, logical block 6567862, async page read
                                      [1050776.365052] Buffer I/O error on dev dm-12, logical block 65665963, async page read
                                      [1051056.348013] Buffer I/O error on dev dm-12, logical block 5255136, async page read
                                      [1051092.751391] Buffer I/O error on dev dm-12, logical block 13134724, async page read
                                      [1051328.387483] Buffer I/O error on dev dm-12, logical block 3941344, async page read
                                      [1051711.573576] Buffer I/O error on dev dm-12, logical block 13134752, async page read
                                      [1051848.129739] Buffer I/O error on dev dm-12, logical block 6567844, async page read
                                      [1051992.984716] Buffer I/O error on dev dm-12, logical block 105064334, async page read
                                      [1052434.107654] Buffer I/O error on dev dm-12, logical block 39400326, async page read
                                      [1052695.987730] Buffer I/O error on dev dm-12, logical block 13134724, async page read
                                      [1052923.659130] Buffer I/O error on dev dm-12, logical block 13134726, async page read
                                      [1053136.646307] Buffer I/O error on dev dm-12, logical block 64153536, async page read
                                      [1053612.719918] Buffer I/O error on dev dm-12, logical block 6567808, async page read
                                      [1053646.789183] Buffer I/O error on dev dm-12, logical block 6567920, async page read
                                      [1053778.875359] Buffer I/O error on dev dm-12, logical block 6567808, async page read
                                      [1053838.326806] Buffer I/O error on dev dm-12, logical block 10508203, async page read
                                      [1054000.750328] Buffer I/O error on dev dm-12, logical block 5255076, async page read
                                      [1054451.772637] Buffer I/O error on dev dm-12, logical block 6567814, async page read
                                      [1103083.191699] device vif7.0 left promiscuous mode
                                      [1103208.344685] block tdh: sector-size: 512/512 capacity: 104857600
                                      [1103217.894245] block tdh: sector-size: 512/512 capacity: 104857600
                                      [1103218.758711] device vif47.0 entered promiscuous mode
                                      [1103219.717538] vif vif-47-0 vif47.0: Guest Rx ready
                                      [1103286.054941] device vif11.0 left promiscuous mode
                                      [1103306.933523] block tdk: sector-size: 512/512 capacity: 104857600
                                      [1103316.719467] block tdk: sector-size: 512/512 capacity: 104857600
                                      [1103317.536966] device vif48.0 entered promiscuous mode
                                      [1103319.391521] vif vif-48-0 vif48.0: Guest Rx ready
                                      
                                      Danp (Pro Support Team), in reply to @jtbw911:

                                        @jtbw911 said in Logs Partition Full:

                                         This often occurs when you're having storage issues (whether they are readily apparent or not), which may or may not be related to networking intermittency, and which fill up the log files.

                                         @x-rayd I quoted the above as it seems pertinent to the issue at hand. Do you have support for your Equallogic system? If so, I would recommend reaching out to the vendor for assistance.

                                        x-rayd:

                                          @Danp said in Logs Partition Full:

                                          Quoted the above as it seems pertinent to the issue at hand. Do you have support for your Equallogic system? If so, then I would recommend reaching out to the vendor for assistance.

                                           Why do you think it is an Equallogic problem?

                                          Danp (Pro Support Team), in reply to @x-rayd:

                                            @x-rayd I could be wrong, but to me "I/O error on dev xxx" indicates a hardware issue. On top of that, you continue to have issues with full log partitions and snapshots that aren't properly removed. Therefore, I assumed that you are having an issue with your storage system.

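                                             As a first step you could map the dm-N names from those errors back to their device-mapper/LVM volumes and look for matching errors in the storage manager log (a sketch, assuming the standard LVM tooling in dom0):

                                             dmsetup ls                               # device-mapper volumes with their (major:minor) numbers
                                             dmsetup info /dev/dm-12                  # name and UUID of the volume behind dm-12
                                             grep -i error /var/log/SMlog | tail -50  # recent storage-layer errors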