Subject: Alpine Linux general performances
From: Éloi Rivard
To: ~alpine/users@lists.alpinelinux.org
Date: Tue, 02 Nov 2021 10:19:14 +0100

Hi.

My company has a cluster of LXC/LXD hypervisors that run Alpine and Debian containers, mostly for classic web services (nginx + uwsgi + Python apps + ZEO/ZODB database). We started investigating after noticing a "general slowness" on our Alpine containers. With some testing we could rule out most of the I/O factors. In the end we found a significant difference between Debian and Alpine containers on CPU-heavy operations, for instance compiling Python.

I ran a simple experiment: compile Python 3.10.0 on a fresh Alpine container, then again on a fresh Debian container. I did this twice, once on my personal computer (Ryzen 7 1800X, 8 cores / 16 threads, 16 GB of RAM) and once on our production hypervisor (Xeon E5-2609 0 @ 2.40GHz, 8 cores, 64 GB of RAM). Here are the results for the 'make' command (measured with 'time'):

# hypervisor
# alpine
real  5m13
user  4m40
sys   0m33
# debian
real  3m01
user  2m47
sys   0m13

# my personal computer
# alpine
real  3m50
user  3m27
sys   0m20
# debian
real  2m17
user  2m07
sys   0m07

In both cases the Alpine containers take roughly 70% more time than the Debian ones to compile Python (5m13 vs 3m01 on the hypervisor, 3m50 vs 2m17 on my computer). We also compiled Python directly on one of the hypervisors, and observed results very close to the Debian container. [1]

I wonder whether this experiment correlates with the "general slowness" we felt on our production Alpine containers. Some people have written interesting things [2] about memory allocation in musl being slower than in glibc. This is quite technical, and since memory allocation is not really my field of expertise, I am not sure what to think of it.
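To give the question something concrete to hang on, here is a rough sketch of the kind of malloc/free micro-benchmark I have in mind to compare the two libcs; the slot count, sizes, and iteration count are arbitrary values I picked, not anything taken from the articles:

    /* alloc-bench.c -- a crude malloc/free micro-benchmark.
       Build and run it identically in each container, e.g.:
         gcc -O2 -o alloc-bench alloc-bench.c && time ./alloc-bench
       (on Alpine, gcc comes with 'apk add build-base') */
    #include <stdlib.h>

    int main(void)
    {
        enum { ITERS = 10000000, SLOTS = 1024 };
        static void *slots[SLOTS];   /* static, so every slot starts NULL */

        srand(42);                   /* fixed seed for comparable runs */
        for (int i = 0; i < ITERS; i++) {
            /* free a pseudo-random slot, then reallocate it with a new size */
            int slot = rand() % SLOTS;
            size_t size = 16 + (size_t)(rand() % 4096);
            free(slots[slot]);       /* free(NULL) is a harmless no-op */
            slots[slot] = malloc(size);
            if (!slots[slot])
                return 1;            /* out of memory */
        }
        for (int i = 0; i < SLOTS; i++)
            free(slots[i]);
        return 0;
    }

If musl's allocator really is the bottleneck, I would expect a gap of the same order as the compilation times here, but I may well be measuring the wrong thing.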
In the end, I am not even sure that poor memory allocation performance is the cause of the slow Python compilation, or of the "slowness feeling" I mentioned. So I wanted to ask what you all think of this:

- Is this situation well-known?
- Is musl's memory allocator a good lead to explain the performance difference?
- Can memory allocation explain the whole thing?
- Is my experiment flawed because of some other factor?

Éloi

[1] https://discuss.linuxcontainers.org/t/performance-problem-container-slower-than-host-x1-2/12291/4
[2] https://www.linkedin.com/pulse/testing-alternative-c-memory-allocators-pt-2-musl-mystery-gomes/