XCP-ng

    FYI - Applying 11/3/2022 and 11/4/2022 Commits in XO from Sources

    Xen Orchestra
    22 Posts 9 Posters 4.7k Views 7 Watching
    • JamfoFL
      last edited by JamfoFL

      Just an FYI for anyone who will be applying the plethora of new commits that were released yesterday and today.

      I am running XO from Sources on a VM with 2 CPUs and 2 GB RAM. I was up to date through the last commit from October 31 (0623d83) and ran the normal "git pull", "yarn", "yarn build" procedure to update to the latest commits (17df749 at the time of writing).
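For anyone newer to XO from sources, that is the standard update sequence; here it is as a sketch, assuming the checkout lives in ~/xen-orchestra (the path is an example, adjust to your own setup):

```shell
# Standard XO-from-sources update sequence (sketch; adjust the path to
# your own xen-orchestra checkout):
cd ~/xen-orchestra
git pull       # fetch and merge the latest commits
yarn           # refresh dependencies
yarn build     # rebuild all packages (this is the memory-hungry step)
```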

      It looks like some part of this process is exceedingly memory intensive. After about five minutes or so (and there are parts of these updates that take quite some time to proceed from step to step), I started getting "out of memory" messages during the "yarn build" process.

      Fortunately, the new commits are well written, and the process was eventually able to run to completion. All told, it took about 15 minutes. While XO ran perfectly afterward, I did see a number of errors during the process.

      My solution was to boost the RAM available to my XO VM to 4 GB and re-run "yarn build". This time, I could see it progress past several steps where it had errored out during the first run, and the build completed in only a few minutes. Memory usage did hit the full 4 GB a few times during this second build, so the extra RAM appears to have helped.

      So... I just wanted to put this out there for anyone else in the same situation who may be getting ready to apply the new commits released yesterday and today. If you have the extra memory, it might be a good idea to add some to your XO VM (if you are running a 2 GB machine) so it can more easily process these updates.

      After the entire process is complete, the amount of CPU and RAM used by the XO VM settles back down to normal.

      Just FYI for the community!

      • Danp Pro Support Team
        last edited by Danp

        Interesting observations. FWIW, I haven't seen any "out of memory" messages. However, I have experienced my bash script mysteriously failing to continue after yarn build. I tried bumping memory from 2 GB to 3 GB without any improvement. I'll try again with 4 GB.

        Edit: Also seeing this warning that I believe is new --

        (!) Some chunks are larger than 500 KiB after minification. Consider:
        - Using dynamic import() to code-split the application
        - Use build.rollupOptions.output.manualChunks to improve chunking: https://rollupjs.org/guide/en/#outputmanualchunks
        - Adjust chunk size limit for this warning via build.chunkSizeWarningLimit.
        
        • JamfoFL @Danp
          last edited by JamfoFL

          @Danp I saw a bunch of different messages I had never seen before when running these updates... some of them went by rather quickly, so I wasn't able to take notes.

          I did see a line for "vite v2.9.14 building for production" that I don't recall ever seeing before... followed by a few other lines for services being updated that I also don't recall from previous updates. One of those was among the errors I got the first time through.

          I also noted that I received that same "(!) Some chunks are larger than 500 KiB after minification..." message that you got, as well.

          • Danp Pro Support Team
            last edited by Danp

            For anyone following along, this is the command that stopped working correctly --

            curl <link to install script> | bash

            Changing it to the following appears to correct the problem --

            bash -c "$(curl <link to install script>)"
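A plausible explanation (my assumption; I haven't traced the actual script): with `curl … | bash`, bash executes the script while it streams in on stdin, so any command inside the script that itself reads stdin can swallow the remainder of the script. The `bash -c "$(curl …)"` form downloads the whole script before running any of it. A minimal demonstration, with printf standing in for curl:

```shell
# Simulate `curl url | bash`: the script arrives on stdin, and the
# `read` inside it consumes the next line of the script itself,
# so `echo "got: $line"` is treated as data, never executed.
printf 'read line\necho "got: $line"\necho after' | bash

# Fetch-then-run: the whole script is passed as an argument to bash -c,
# so nothing in it can eat unread script text from stdin.
bash -c "$(printf 'echo ran to completion')"
```

The first pipeline prints only "after", because the read consumed the middle line of its own script.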

            • olivierlambert Vates πŸͺ Co-Founder CEO
              last edited by

              Note that Vates does not promote any one-liner for third-party XO install scripts, since we can't vouch for what's behind them πŸ™‚

              • Danp Pro Support Team @olivierlambert
                last edited by

                @olivierlambert It seems that the redirection was affected by a change in how the build process works.

                I edited the prior post to remove the complete links as they weren't pertinent to the discussion.

                • iLix
                  last edited by

                  Compiling with less than 4 GB RAM has often failed for me.

                  • Danp Pro Support Team @iLix
                    last edited by

                    @iLix I recall this being an issue in the past, but I've been building XO with 2GB for a while now.

                    • iLix @Danp
                      last edited by

                      @Danp OK, good to know. I just set it to 4 GB and never looked back πŸ™‚

                      • gskger Top contributor @Danp
                        last edited by

                        @Danp Same here. I've been running XO from script with 4 GB since I ran into problems about a year ago. No problems since then πŸ˜ƒ

                        • Danp Pro Support Team
                          last edited by

                          FWIW, today I'm having trouble building with 4GB --

                          [08:34:41] Starting 'copyAssets'...
                          transforming (10) ../../node_modules/@vue/runtime-dom/dist/runtime-dom.esm-bundler.jsKilled
                          error Command failed with exit code 137.
                          info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
                          ERROR: "type-check" exited with 137.
                          * @xen-orchestra/lite:build βˆ’ Error: 1
                          <snip>
                          [08:37:56] Finished 'copyAssets' after 3.23 min
                          [08:39:37] Finished 'buildScripts' after 4.92 min
                          [08:39:37] Finished 'build' after 4.92 min
                          βœ– 1
                          error Command failed with exit code 1.
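A side note on that exit code: 137 is how the shell reports a process killed by signal 9 (128 + 9), i.e. SIGKILL, which is what the kernel's OOM killer sends; that matches the "Killed" fused onto the transforming line above. A quick way to see the encoding:

```shell
# A process killed by SIGKILL (signal 9) exits with status 128 + 9 = 137,
# the same code yarn reported. On the VM itself, the kernel log
# (e.g. dmesg) should show which process the OOM killer chose.
sh -c 'kill -9 $$'
echo "exit status: $?"   # prints: exit status: 137
```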
                          
                          • JamfoFL @Danp
                            last edited by

                            I kept my VM at 4 GB after making my report the other day and haven't had any issues since. I even updated again this morning (after noticing the large number of commits from yesterday and this morning) and everything ran just fine. The only thing out of the ordinary is that I am still getting the "chunk" warning during the process.

                            (!) Some chunks are larger than 500 KiB after minification. Consider:
                            - Using dynamic import() to code-split the application
                            - Use build.rollupOptions.output.manualChunks to improve chunking: https://rollupjs.org/guide/en/#outputmanualchunks
                            - Adjust chunk size limit for this warning via build.chunkSizeWarningLimit.
                            

                            That hasn't caused any issues, as far as I see. Everything appears to be working as it should!

                            • hoerup @JamfoFL
                              last edited by

                              When I (re)build on my 3 GB VM, I set this beforehand to keep Node.js in check:

                              export NODE_OPTIONS='--max-old-space-size=3072'
                              

                              And then it runs smoothly
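For context on what the flag does (assuming the `node` on your path is the one yarn uses): --max-old-space-size caps V8's old-generation heap in MiB, so Node garbage-collects harder instead of growing until the kernel OOM killer steps in. You can check the limit a Node process actually picked up:

```shell
# The cap is in MiB; V8's reported heap_size_limit includes a little
# overhead on top of old space, so expect a value slightly above 3072.
export NODE_OPTIONS='--max-old-space-size=3072'
node -p 'Math.round(require("v8").getHeapStatistics().heap_size_limit / 1048576)'
```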

                              • Danp Pro Support Team @hoerup
                                last edited by

                                @hoerup Unfortunately, that doesn't help in my situation. I recently upgraded the VM to Ubuntu 22.10, so maybe that is contributing to the problem.

                                • JamfoFL @Danp
                                  last edited by

                                  @Danp I'm using Debian Bullseye... I am able to run the updates now, but still see that same "chunk size" error notice you initially reported. Still, it seems you have it even worse...

                                  • ronivay Top contributor
                                    last edited by ronivay

                                    I'm running daily installations from sources on multiple different OSes, all with the same specs: 2 vCPU / 4 GB RAM. This has worked flawlessly for a long time. Recently (starting from Nov 4th/5th) I've started to see OOM errors almost daily during yarn build, which then cause it to fail with the following error:

                                    Using polyfills: No polyfills were added, since the `useBuiltIns` option was not set.
                                    [01:23:25] Finished 'copyAssets' after 36 s
                                    [01:24:33] Finished 'buildScripts' after 1.73 min
                                    [01:24:33] Finished 'build' after 1.73 min
                                    βœ– 1
                                    error Command failed with exit code 1.
                                    

                                    It isn't consistent; sometimes it's Debian that fails, sometimes Ubuntu, sometimes CentOS/AlmaLinux, and so on. Something has definitely changed in the build procedure that eats more RAM than it used to.

                                    I’m fine with increasing the RAM if needed. Just wanted to point this out if there’s something out of the ordinary with latest changes.

                                    • Danp Pro Support Team @ronivay
                                      last edited by

                                      @ronivay said in FYI - Applying 11/3/2022 and 11/4/2022 Commits in XO from Sources:

                                      Something has definitely changed in the build procedure that eats more RAM than it used to.

                                      Agreed. Maybe @julien-f can add some insight into what has changed and how to successfully build from sources.

                                      • julien-f Vates πŸͺ Co-Founder XO Team @Danp
                                        last edited by

                                        I believe this was due to the inclusion of XO Lite on the master branch.

                                        I've limited the number of packages built concurrently: https://github.com/vatesfr/xen-orchestra/commit/08298d3284119ad855552af36a810a3a9a006759

                                        Tell me if that helps πŸ™‚

                                        julien-f committed to vatesfr/xen-orchestra:
                                        feat: limit concurrency of root build script
                                        Should fixes https://xcp-ng.org/forum/post/54567

                                        • JamfoFL @julien-f
                                          last edited by

                                          @julien-f I just ran a "yarn build" this morning, and other than still seeing the chunk warning:

                                          (!) Some chunks are larger than 500 KiB after minification. Consider:
                                          - Using dynamic import() to code-split the application
                                          - Use build.rollupOptions.output.manualChunks to improve chunking: https://rollupjs.org/guide/en/#outputmanualchunks
                                          - Adjust chunk size limit for this warning via build.chunkSizeWarningLimit.
                                          

                                          Everything else ran fine... no errors or OOM issues.

                                            • julien-f Vates πŸͺ Co-Founder XO Team @JamfoFL
                                              last edited by

                                            @JamfoFL Great!

                                            Yes, the warning is unrelated (ping @pdonias).
