FYI - Applying 11/3/2022 and 11/4/2022 Commits in XO from Sources
-
Just an FYI for anyone who will be applying the plethora of new commits that were released yesterday and today.
I am running XO from Sources on a VM with 2 CPUs and 2 GB RAM. I was up to date through the last commit from October 31 (0623d83), and ran the normal "git pull", "yarn", "yarn build" procedure to update to all of the latest commits from today (17df749 at the time of writing).
It looks like some part of this process is exceedingly memory-intensive. After about five minutes or so (and some steps of this update take quite a while to proceed from one to the next), I started getting "out of memory" messages during the "yarn build" process.
Fortunately, the new commits are well written, and the entire process was, finally, able to run to completion. All told, it took about 15 minutes until the process was done. While XO ran perfectly afterward, I did see a number of errors during the process.
My solution was to boost the RAM available to my XO VM to 4 GB and re-run "yarn build". This time, I could see it progress past several steps where it had errored out during the first run, and the build completed in only a few minutes. There were a few points during this second build where memory usage hit the full 4 GB available, so it looks like the extra RAM helped.
So... I just wanted to put this out there for anyone else in the same situation who may be getting ready to apply the new commits released yesterday and today. If you have the extra memory, it might be a good idea to add some to your XO VM (if you are running a 2 GB machine) so it can more easily process these updates.
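If you do bump your VM's allocation, a quick sanity check (assuming a Linux guest) confirms the guest actually sees the new RAM before you kick off the build:

```shell
# Confirm the guest sees the new memory allocation before rebuilding
free -h                       # human-readable total/used/free
grep MemTotal /proc/meminfo   # raw total in kB
```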
After the entire process is complete, the amount of CPU and RAM used by the XO VM settles back down to normal.
Just FYI for the community!
-
Interesting observations. FWIW, I haven't seen any "out of memory" messages. However, I have experienced my bash script mysteriously failing to continue after
yarn build
. I tried bumping memory from 2 --> 3 GB without any improvement. I'll try it again with 4 GB.
Edit: Also seeing this warning that I believe is new --
(!) Some chunks are larger than 500 KiB after minification. Consider:
- Using dynamic import() to code-split the application
- Use build.rollupOptions.output.manualChunks to improve chunking: https://rollupjs.org/guide/en/#outputmanualchunks
- Adjust chunk size limit for this warning via build.chunkSizeWarningLimit.
-
@Danp I saw a bunch of different messages I had never seen before when running updates... some of them went by rather quickly, so I wasn't able to take notes.
I did see a line for "vite v2.9.14 building for production" that I don't recall ever seeing before, followed by a few other lines for services being updated that I also don't recall from previous updates. One of those was part of the errors I got the first time through.
I also noted that I received that same "(!) Some chunks are larger than 500 KiB after minification..." message that you got, as well.
-
For anyone following along, this is the command that stopped working correctly --
curl <link to install script> | bash
Changing it to the following appears to correct the problem --
bash -c "$(curl <link to install script>)"
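A plausible explanation for the difference (an illustration of the general behavior, not a diagnosis of this particular script): when you pipe into bash, the script shares stdin with the pipe, so any command inside the script that reads stdin swallows the rest of the script text, while `bash -c "$(curl ...)"` downloads the whole script before execution begins. A local simulation (the `script` variable and `line` name are just placeholders, no network needed):

```shell
# A script whose middle command reads from stdin
script='echo first
read line   # consumes whatever comes next on stdin
echo second'

# Piped: "read" eats the remainder of the script itself
printf '%s\n' "$script" | bash       # prints only "first"

# Whole script in memory before bash starts: "read" just fails and
# execution continues
bash -c "$script" </dev/null         # prints "first" then "second"
```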
-
Note that Vates does not promote any one-liner for third-party XO install scripts, since we can't vouch for what's behind them.
-
@olivierlambert It seems that the redirection was affected by a change in how the build process works.
I edited the prior post to remove the complete links as they weren't pertinent to the discussion.
-
Compiling with less than 4 GB RAM has often failed for me.
-
@iLix I recall this being an issue in the past, but I've been building XO with 2GB for a while now.
-
@Danp OK, good to know. I just set it to 4 GB and never looked back.
-
@Danp Same here. I've been running XO from script with 4 GB since I ran into problems, I think a year ago. No problems since then.
-
FWIW, today I'm having trouble building with 4GB --
[08:34:41] Starting 'copyAssets'...
transforming (10) ../../node_modules/@vue/runtime-dom/dist/runtime-dom.esm-bundler.js
Killed
error Command failed with exit code 137.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
ERROR: "type-check" exited with 137.
* @xen-orchestra/lite:build ✖ Error: 1
<snip>
[08:37:56] Finished 'copyAssets' after 3.23 min
[08:39:37] Finished 'buildScripts' after 4.92 min
[08:39:37] Finished 'build' after 4.92 min
✖ 1 error
Command failed with exit code 1.
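For what it's worth, exit code 137 is 128 + 9, i.e. the process died from SIGKILL, which on Linux is usually the kernel OOM killer at work. The numbering convention is easy to demonstrate:

```shell
# A shell reports a signal-killed child as 128 + signal number,
# so SIGKILL (9) shows up as exit status 137
sh -c 'kill -9 $$'   # the child kills itself with SIGKILL
echo $?              # prints 137
```

If you suspect the OOM killer, checking the kernel log (e.g. `dmesg | grep -i 'out of memory'` as root) should show which process it killed.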
-
I kept my VM at 4 GB after making the report the other day and haven't had any issues since. I even updated again this morning (after I noticed the large number of commits from yesterday and this morning) and everything ran just fine. The only thing out of the ordinary I noticed was that I am still getting the "chunk" warning message during the process.
(!) Some chunks are larger than 500 KiB after minification. Consider:
- Using dynamic import() to code-split the application
- Use build.rollupOptions.output.manualChunks to improve chunking: https://rollupjs.org/guide/en/#outputmanualchunks
- Adjust chunk size limit for this warning via build.chunkSizeWarningLimit.
That hasn't caused any issues as far as I can see. Everything appears to be working as it should!
-
When I (re)build on my 3 GB VM, I use this beforehand in order to keep Node.js memory usage at bay --
export NODE_OPTIONS='--max-old-space-size=3072'
And then it runs smoothly
-
@hoerup Unfortunately, that doesn't help in my situation. I recently upgraded the VM to Ubuntu 22.10, so maybe that is contributing to the problem.
-
@Danp I'm using Debian Bullseye... I am able to run the updates now, but I still see that same "chunk size" warning you initially reported. Still, it seems you have it even worse...
-
I'm running daily installations from sources on multiple different OSes. All have the same specs: 2 vCPU / 4 GB RAM. This has worked flawlessly for a long time. Recently (starting from 4th/5th Nov) I've started to see OOM errors almost daily during
yarn build
which then causes it to fail with the following error:
Using polyfills: No polyfills were added, since the `useBuiltIns` option was not set.
[01:23:25] Finished 'copyAssets' after 36 s
[01:24:33] Finished 'buildScripts' after 1.73 min
[01:24:33] Finished 'build' after 1.73 min
✖ 1 error
Command failed with exit code 1.
It isn't consistent, sometimes it's debian that fails, sometimes ubuntu, sometimes centos/almalinux and so on. Something has definitely changed in the build procedure that eats more RAM than it used to.
I'm fine with increasing the RAM if needed. Just wanted to point this out in case there's something out of the ordinary with the latest changes.
-
@ronivay said in FYI - Applying 11/3/2022 and 11/4/2022 Commits in XO from Sources:
Something has definitely changed in the build procedure that eats more RAM than it used to.
Agreed. Maybe @julien-f can add some insight into what has changed and how to successfully build from sources.
-
I believe this was due to the inclusion of XO Lite on the
master
branch.
I've limited the number of packages built concurrently: https://github.com/vatesfr/xen-orchestra/commit/08298d3284119ad855552af36a810a3a9a006759
Tell me if that helps
-
@julien-f I just ran a "yarn build" this morning, and other than still seeing the chunk warning message:
(!) Some chunks are larger than 500 KiB after minification. Consider:
- Using dynamic import() to code-split the application
- Use build.rollupOptions.output.manualChunks to improve chunking: https://rollupjs.org/guide/en/#outputmanualchunks
- Adjust chunk size limit for this warning via build.chunkSizeWarningLimit.
Everything else ran fine... no errors or OOM issues.
-