XCP-ng

    Logs Partition Full

    Category: Xen Orchestra · 59 Posts · 6 Posters · 25.2k Views
    • olivierlambert (Vates 🪐 Co-Founder CEO)

      I'm not surprised you have issues with Windows Server 2003 (it is probably generating a ton of logs).

      After all, this OS is 17 years old, so "why doesn't it work" after installing modern tools on it is almost a rhetorical question.

      It's also no longer officially supported by Citrix, and I suppose there are good reasons for that (as with Windows XP).

      The real question is: shouldn't the logs be rotated before the partition fills up anyway? I think they should, yes, and that's the problem.
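      For reference, rotation on the host is driven by logrotate; a minimal sketch of a size-based rule (the path, thresholds and options here are illustrative, not the config XCP-ng actually ships):

      ```
      /var/log/daemon.log {
          rotate 5        # keep five rotated copies
          size 100M       # rotate once the file exceeds 100 MB
          compress        # gzip old copies
          missingok       # don't error if the file is absent
          notifempty      # skip rotation when the file is empty
      }
      ```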

      • Danp (Pro Support Team) @x-rayd

        @x-rayd Try with XT 6.5 (this is what I use on the one 2003 server that I still support).

        • x-rayd

          Last lines of daemon.log:

          Jun 17 00:23:08 df-c01-node04 squeezed[1275]: [4150926.54] watch /control/feature-balloon <- 1
          Jun 17 00:23:08 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /control/feature-balloon <- 1
          Jun 17 00:23:08 df-c01-node04 squeezed[1275]: [4150926.58] watch /data/updated <- Wed Jun 17 00:23:08 2020
          Jun 17 00:23:08 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated <- Wed Jun 17 00:23:08 2020
          Jun 17 00:23:08 df-c01-node04 squeezed[1275]: [4150926.75] watch /data/updated <- Wed Jun 17 00:23:08 CEST 2020
          Jun 17 00:23:08 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated <- Wed Jun 17 00:23:08 CEST 2020
          Jun 17 00:23:10 df-c01-node04 squeezed[1275]: [4150928.15] watch /data/updated <- Wed Jun 17 00:23:10 2020
          Jun 17 00:23:10 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated <- Wed Jun 17 00:23:10 2020
          Jun 17 00:23:10 df-c01-node04 squeezed[1275]: [4150928.15] watch /data/updated <- Wed Jun 17 00:23:10 2020
          Jun 17 00:23:10 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated <- Wed Jun 17 00:23:10 2020
          Jun 17 00:23:16 df-c01-node04 squeezed[1275]: [4150934.19] watch /control/feature-balloon <- 1
          Jun 17 00:23:16 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /control/feature-balloon <- 1
          Jun 17 00:23:16 df-c01-node04 squeezed[1275]: [4150934.26] watch /data/updated <- Wed Jun 17 00:23:16 2020
          Jun 17 00:23:16 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated <- Wed Jun 17 00:23:16 2020
          Jun 17 00:23:30 df-c01-node04 squeezed[1275]: [4150948.51] watch /data/updated <- Wed Jun 17 00:23:30 CEST 2020
          Jun 17 00:23:30 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated <- Wed Jun 17 00:23:30 CEST 2020
          Jun 17 00:23:41 df-c01-node04 squeezed[1275]: [4150958.91] watch /data/updated <- 1
          Jun 17 00:23:41 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated <- 1
          Jun 17 00:23:41 df-c01-node04 squeezed[1275]: [4150959.51] watch /data/updated <- Wed Jun 17 00:23:41 CEST 2020
          Jun 17 00:23:41 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated <- Wed Jun 17 00:23:41 CEST 2020
          Jun 17 00:23:45 df-c01-node04 squeezed[1275]: [4150963.15] watch /control/feature-balloon <- 1
          Jun 17 00:23:45 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /control/feature-balloon <- 1
          Jun 17 00:23:45 df-c01-node04 squeezed[1275]: [4150963.26] watch /data/updated <- Wed Jun 17 00:23:45 2020
          Jun 17 00:23:45 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated <- Wed Jun 17 00:23:45 2020
          Jun 17 00:23:45 df-c01-node04 squeezed[1275]: [4150963.33] watch /data/updated <- Wed Jun 17 00:23:45 CEST 2020
          Jun 17 00:23:45 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated <- Wed Jun 17 00:23:45 CEST 2020
          Jun 17 00:23:50 df-c01-node04 squeezed[1275]: [4150967.96] watch /data/updated <- Wed Jun 17 00:23:50 CEST 2020
          Jun 17 00:23:50 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated <- Wed Jun 17 00:23:50 CEST 2020
          Jun 17 00:23:52 df-c01-node04 squeezed[1275]: [4150970.10] watch /data/updated <- Wed Jun 17 00:23:52 2020
          Jun 17 00:23:52 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated <- Wed Jun 17 00:23:52 2020
          Jun 17 00:23:53 df-c01-node04 squeezed[1275]: [4150971.35] watch /data/updated <- Wed Jun 17 00:23:53 2020
          Jun 17 00:23:53 df-c01-node04 squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated <- Wed Jun 17 00:23:53 2020
          [16:56 df-c01-node04 log]#
          • x-rayd

            @x-rayd said in Logs Partition Full:

            squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated

            Any idea?

            • stormi (Vates 🪐 XCP-ng Team)

              Is that last log indicating a problem to you? One message per second is still a lot, but not enough to fill your log partition, contrary to the messages you posted earlier about XENBUS and XENVIF, which were flooding at a really quick pace.

              • x-rayd

                I deleted daemon.log, but df -h still says the partition is full. Why?
                /dev/sda5 3.9G 3.9G 0 100% /var/log

                • stormi (Vates 🪐 XCP-ng Team)

                  You can check what is taking up the space with du -sh /var/log/*
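                  To spot the biggest entries at a glance, the listing can be piped through sort -h. A self-contained sketch using a throwaway directory (/tmp/du_demo and the file names are just demo values, not anything on the host):

                  ```shell
                  # Build a demo directory with one large and one small file,
                  # then list entries sorted by human-readable size, largest last.
                  mkdir -p /tmp/du_demo
                  dd if=/dev/zero of=/tmp/du_demo/big.log bs=1024 count=2048 2>/dev/null
                  dd if=/dev/zero of=/tmp/du_demo/small.log bs=1024 count=8 2>/dev/null
                  du -sh /tmp/du_demo/* | sort -h
                  ```

                  On the host itself the same idea becomes du -sh /var/log/* | sort -h.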

                  • x-rayd

                    @stormi said in Logs Partition Full:

                    du -sh /var/log/*

                    5.1M    /var/log/audit.log
                    24M     /var/log/audit.log.1
                    0       /var/log/audit.log.1.gz
                    3.5M    /var/log/audit.log.2.gz
                    0       /var/log/audit.log.3.gz
                    9.3M    /var/log/blktap
                    0       /var/log/boot.log
                    24K     /var/log/boot.log.1
                    4.0K    /var/log/boot.log.2.gz
                    4.0K    /var/log/boot.log.3.gz
                    4.0K    /var/log/boot.log.4.gz
                    4.0K    /var/log/boot.log.5.gz
                    4.0K    /var/log/btmp
                    4.0K    /var/log/btmp.1
                    0       /var/log/btmp.1.gz
                    4.0K    /var/log/cluster
                    8.0K    /var/log/crit.log
                    0       /var/log/crit.log.1
                    0       /var/log/crit.log.1.gz
                    2.9M    /var/log/cron
                    40K     /var/log/cron.1
                    0       /var/log/cron.1.gz
                    4.0K    /var/log/grubby_prune_debug
                    1004K   /var/log/installer
                    4.0K    /var/log/interface-rename.log
                    0       /var/log/interface-rename.log.1
                    0       /var/log/interface-rename.log.1.gz
                    1.3M    /var/log/kern.log
                    20K     /var/log/kern.log.1
                    0       /var/log/kern.log.1.gz
                    16K     /var/log/lost+found
                    164K    /var/log/maillog
                    4.0K    /var/log/maillog.32.gz
                    0       /var/log/messages
                    0       /var/log/messages.1
                    0       /var/log/messages.1.gz
                    4.0K    /var/log/ntpstats
                    4.0K    /var/log/openvswitch
                    4.0K    /var/log/ovs-ctl.log
                    4.0K    /var/log/ovs-ctl.log.32.gz
                    4.0K    /var/log/ovsdb-server.log
                    4.0K    /var/log/ovsdb-server.log.1
                    0       /var/log/ovsdb-server.log.1.gz
                    60K     /var/log/ovs-vswitchd.log
                    4.0K    /var/log/ovs-vswitchd.log.1
                    0       /var/log/ovs-vswitchd.log.1.gz
                    4.0K    /var/log/ovs-xapi-sync.log
                    4.0K    /var/log/ovs-xapi-sync.log.32.gz
                    8.0K    /var/log/pbis-open-install.log
                    4.0K    /var/log/pyperthreading-plugin.log
                    114M    /var/log/sa
                    8.0K    /var/log/samba
                    37M     /var/log/secure
                    5.6M    /var/log/SMlog
                    29M     /var/log/SMlog.1
                    0       /var/log/SMlog.1.gz
                    5.0M    /var/log/SMlog.2.gz
                    0       /var/log/spooler
                    4.0K    /var/log/spooler.32.gz
                    0       /var/log/tallylog
                    0       /var/log/updater-plugin.log
                    972K    /var/log/user.log
                    4.0K    /var/log/user.log.1
                    0       /var/log/user.log.1.gz
                    696K    /var/log/VMSSlog
                    20K     /var/log/VMSSlog.1
                    0       /var/log/VMSSlog.1.gz
                    68K     /var/log/wtmp
                    8.0K    /var/log/xcp-rrdd-plugins.log
                    13M     /var/log/xcp-rrdd-plugins.log.1
                    0       /var/log/xcp-rrdd-plugins.log.1.gz
                    240K    /var/log/xcp-rrdd-plugins.log.2.gz
                    0       /var/log/xcp-rrdd-plugins.log.3.gz
                    220K    /var/log/xen
                    96M     /var/log/xensource.log
                    18M     /var/log/xensource.log.1
                    0       /var/log/xensource.log.1.gz
                    8.2M    /var/log/xensource.log.2.gz
                    9.2M    /var/log/xensource.log.3.gz
                    6.5M    /var/log/xensource.log.4.gz
                    8.0M    /var/log/xenstored-access.log
                    33M     /var/log/xenstored-access.log.1
                    0       /var/log/xenstored-access.log.1.gz
                    2.3M    /var/log/xenstored-access.log.2.gz
                    4.0K    /var/log/yum.log
                    36K     /var/log/yum.log.1
                    

                    Filesystem Size Used Avail Use% Mounted on
                    devtmpfs 3.9G 112K 3.9G 1% /dev
                    tmpfs 3.9G 1004K 3.9G 1% /dev/shm
                    tmpfs 3.9G 12M 3.9G 1% /run
                    tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
                    /dev/sda1 18G 2.4G 15G 14% /
                    xenstore 3.9G 0 3.9G 0% /var/lib/xenstored
                    /dev/sda5 3.9G 3.9G 0 100% /var/log

                    • stormi (Vates 🪐 XCP-ng Team)

                      I'm not sure, but I suppose the file descriptors for the removed files may still be open somewhere, so in a way the files still exist. Try restarting the toolstack: xe-toolstack-restart.
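                      The deleted-but-still-open effect is easy to reproduce in a shell; a minimal sketch (Linux-only, relies on /proc; the file name is just an example):

                      ```shell
                      # Open fd 3 on a temp file, write to it, then unlink it.
                      exec 3> /tmp/held_demo.log
                      echo "some data" >&3
                      rm /tmp/held_demo.log
                      # The inode is still alive: /proc shows the fd pointing at a deleted file.
                      ls -l /proc/$$/fd/3     # target reads "/tmp/held_demo.log (deleted)"
                      # Closing the descriptor is what finally releases the disk space.
                      exec 3>&-
                      ```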

                      • x-rayd

                        xe-toolstack-restart didn't help 😞

                        • stormi (Vates 🪐 XCP-ng Team)

                          https://serverfault.com/a/315945/520838 may help

                          • x-rayd

                            Any idea what the problem is?

                            [20:31 df-c01-node04 log]# tail -50 xensource.log
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|16 ||xenops_server] TASK.signal 599093 (object deleted)
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 ||xenops_server] Queue.pop returned ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|xenops_server] Task 599094 reference events: ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |events|xenops_server] Received an event on managed VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |queue|xenops_server] Queue.push ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"] onto redirected f828ce90-06e0-024f-9c9b-3f30b1a959b4:[  ]
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: processing event for VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Will update VM.allowed_operations because guest_agent has changed.
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: Updating VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 domid 9 guest_agent
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404650 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:d1538d321269 created by task D:c22abe907392
                            Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404650 UNIX /var/lib/xcp/xapi|session.slave_login D:8d7d2a8f0ca6|xapi] Session.create trackid=59cc70e9542b52fdcd1622725fa443c3 pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |events|xenops_server] Received an event on managed VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |queue|xenops_server] Queue.push ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"] onto redirected f828ce90-06e0-024f-9c9b-3f30b1a959b4:[ ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"] ]
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404651 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:6320774722f1 created by task D:8d7d2a8f0ca6
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404652 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VM.update_allowed_operations D:e5300a15f1ea created by task D:c22abe907392
                            Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404652 UNIX /var/lib/xcp/xapi|dispatch:VM.update_allowed_operations D:e5300a15f1ea|taskhelper] task VM.update_allowed_operations R:2852f822293e (uuid:ecd676c2-0a35-ad5c-7f38-48fc3e383d52) created (trackid=59cc70e9542b52fdcd1622725fa443c3) by task D:c22abe907392
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404652 UNIX /var/lib/xcp/xapi|VM.update_allowed_operations R:2852f822293e|audit] VM.update_allowed_operations: VM = 'f828ce90-06e0-024f-9c9b-3f30b1a959b4 (DF-server24.df-webhosting.de)'
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|xenops_server] VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 is not requesting any attention
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|xenops_server] VM_DB.signal f828ce90-06e0-024f-9c9b-3f30b1a959b4
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|task_server] Task 599094 completed; duration = 0
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 ||xenops_server] TASK.signal 599094 (object deleted)
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 ||xenops_server] Queue.pop returned ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|xenops_server] Task 599095 reference events: ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404653 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:e038fd45a92c created by task D:c22abe907392
                            Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404653 UNIX /var/lib/xcp/xapi|session.logout D:624f223cdcff|xapi] Session.destroy trackid=59cc70e9542b52fdcd1622725fa443c3
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Processing event: ["Vm","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenops event on VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|632725 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops_server] VM.stat f828ce90-06e0-024f-9c9b-3f30b1a959b4
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|xenops_server] VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 is not requesting any attention
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|xenops_server] VM_DB.signal f828ce90-06e0-024f-9c9b-3f30b1a959b4
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|task_server] Task 599095 completed; duration = 0
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 ||xenops_server] TASK.signal 599095 (object deleted)
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: processing event for VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Will update VM.allowed_operations because guest_agent has changed.
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: Updating VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 domid 9 guest_agent
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404654 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:3041ad7a5d1d created by task D:c22abe907392
                            Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404654 UNIX /var/lib/xcp/xapi|session.slave_login D:fa655deb1721|xapi] Session.create trackid=aa48585d92a36631054ef9468218522c pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404655 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:a1e04c4c79aa created by task D:fa655deb1721
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1235 |xapi events D:f60b314e49a9|dummytaskhelper] task timeboxed_rpc D:98ed995dc8af created by task D:f60b314e49a9
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404656 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:event.from D:c9f2b3e839ca created by task D:f60b314e49a9
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404657 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VM.update_allowed_operations D:4a10137d6bc3 created by task D:c22abe907392
                            Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404657 UNIX /var/lib/xcp/xapi|dispatch:VM.update_allowed_operations D:4a10137d6bc3|taskhelper] task VM.update_allowed_operations R:e4f434e89b02 (uuid:3e5a0652-8d8f-b774-53d9-f325c4cc63a1) created (trackid=aa48585d92a36631054ef9468218522c) by task D:c22abe907392
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404657 UNIX /var/lib/xcp/xapi|VM.update_allowed_operations R:e4f434e89b02|audit] VM.update_allowed_operations: VM = 'f828ce90-06e0-024f-9c9b-3f30b1a959b4 (DF-server24.df-webhosting.de)'
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404658 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:9eeefa7124f9 created by task D:c22abe907392
                            Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404658 UNIX /var/lib/xcp/xapi|session.logout D:b558e450cae3|xapi] Session.destroy trackid=aa48585d92a36631054ef9468218522c
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Processing event: ["Vm","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenops event on VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
                            Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|632727 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops_server] VM.stat f828ce90-06e0-024f-9c9b-3f30b1a959b4
                            Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: ignoring event for VM f828ce90-06e0-024f-9c9b-3f30b1a959b4: metadata has not changed
                            Jul  1 20:32:06 df-c01-node04 xcp-rrdd: [ warn|df-c01-node04|0 monitor|main|rrdd_server] setting skip-cycles-after-error for plugin tap-31773-25 to 256
                            Jul  1 20:32:06 df-c01-node04 xcp-rrdd: [ warn|df-c01-node04|0 monitor|main|rrdd_server] Failed to process plugin: tap-31773-25 (Rrd_protocol.Invalid_header_string)
                            [20:32 df-c01-node04 log]#
                            
                            • x-rayd

                              xensource.log was too big: 2 GB.
                              I deleted the log file, but the disk space was not freed. Now I can't see xensource.log at all. Why?

                              • stormi (Vates 🪐 XCP-ng Team)

                                Because the file descriptor is still held open by the process that was using the file, so it isn't really gone until the descriptor is released.

                                Try xe-toolstack-restart
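                                Side note: truncating a log in place avoids this problem entirely, since the writing process keeps its descriptor while the blocks are freed immediately. A small sketch with a demo file (the path is illustrative):

                                ```shell
                                # Fill a demo file, truncate it in place, confirm it is now empty.
                                printf 'lots of log data\n' > /tmp/trunc_demo.log
                                : > /tmp/trunc_demo.log        # equivalent: truncate -s 0 /tmp/trunc_demo.log
                                wc -c < /tmp/trunc_demo.log    # prints 0
                                ```

                                On the host the same idea would be `: > /var/log/xensource.log` instead of `rm`.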

                                • x-rayd

                                  @stormi said in Logs Partition Full:

                                  xe-toolstack-restart

                                  That didn't help! What can I do now?

                                  • stormi (Vates 🪐 XCP-ng Team)

                                    What's the output of lsof +L1?

                                    • x-rayd

                                      @stormi said in Logs Partition Full:

                                      lsof +L1

                                      [10:38 df-c01-node04 ~]# lsof +L1
                                      COMMAND     PID USER   FD   TYPE DEVICE   SIZE/OFF NLINK     NODE NAME
                                      rsyslogd   1070 root   11w   REG    8,5 1945839857     0       15 /var/log/xensource.log (deleted)
                                      monitor    2002 root    7u   REG    8,1        141     0   180233 /tmp/tmpf6sKcHH (deleted)
                                      ovsdb-ser  2003 root    7u   REG    8,1        141     0   180233 /tmp/tmpf6sKcHH (deleted)
                                      stunnel   13458 root    2w   REG   0,19        861     0 97301510 /run/nonpersistent/forkexecd/stunnelcd2b39.log (deleted)
                                      [17:21 df-c01-node04 ~]#
                                      
                                      • stormi (Vates 🪐 XCP-ng Team)

                                        So the file is owned by rsyslogd. Restart that service.
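                                        On a CentOS-based dom0 such as XCP-ng, the restart would typically go through systemd (a sketch; it needs root on the actual host, hence the fallbacks):

                                        ```shell
                                        # Restart the syslog daemon so it releases the deleted file's descriptor;
                                        # fall back to the sysvinit wrapper, then to a hint when unprivileged.
                                        systemctl restart rsyslog 2>/dev/null \
                                          || service rsyslog restart 2>/dev/null \
                                          || echo "run this as root on the XCP-ng host"
                                        # Afterwards, lsof +L1 should no longer list xensource.log as (deleted).
                                        ```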

                                        • x-rayd

                                          What is the command?

                                            • x-rayd

                                            ??
