From: Mark H Weaver
To: ludo@gnu.org (Ludovic Courtès)
Cc: 24937@debbugs.gnu.org
Subject: Re: bug#24937: "deleting unused links" GC phase is too slow
Date: Tue, 13 Dec 2016 07:48:19 -0500

ludo@gnu.org (Ludovic Courtès) writes:

> I did some measurements with the attached program on chapters, which is
> a Xen VM with spinning disks underneath, similar to hydra.gnu.org.  It
> has 600k entries in /gnu/store/.links.

I just want to point out that 600k inodes use about 150 megabytes of
disk space on ext4 (at 256 bytes per inode), which is small enough to
fit in the cache, so the disk I/O will not be multiplied for such a
small test case.

> Here’s a comparison of the “optimal” mode (bulk stats after we’ve
> fetched all the dirents) vs. the “semi-interleaved” mode (doing bulk
> stats every 100,000 dirents):
>
> ludo@guix:~$ gcc -std=gnu99 -Wall links-traversal.c -DMODE=3
> ludo@guix:~$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
> ludo@guix:~$ time ./a.out
> 603858 dir_entries, 157 seconds
> stat took 1 seconds
>
> real    2m38.508s
> user    0m0.324s
> sys     0m1.824s
> ludo@guix:~$ gcc -std=gnu99 -Wall links-traversal.c -DMODE=2
> ludo@guix:~$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
> ludo@guix:~$ time ./a.out
> 3852 dir_entries, 172 seconds (including stat)
>
> real    2m51.827s
> user    0m0.312s
> sys     0m1.808s
>
> Semi-interleaved is ~12% slower here (not sure how reproducible that is
> though).

This directory you're testing on is more than an order of magnitude
smaller than Hydra's when it's full.  Unlike in your test above, the
inodes in Hydra's store won't all fit in the cache.

In my opinion, the reason Hydra performs so poorly is that efficiency
and scalability are apparently very low priorities in the design of the
software running on it.  Unfortunately, I feel that my advice in this
area is discarded more often than not.
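To make sure we're talking about the same access pattern, here is a
minimal sketch of the kind of traversal I have in mind: read every
dirent from .links first, sort the entries by inode number, and only
then stat them in that order.  This is my own illustration, not your
links-traversal.c (which I haven't reproduced here), so the names and
the "unused link" test are only stand-ins, and error handling is
omitted:

/* Sketch: readdir all of .links, sort by inode number, stat in inode
   order.  Illustration only; malloc/realloc results are unchecked.  */
#include <sys/types.h>
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

struct entry { ino_t ino; char name[256]; };

static int by_inode (const void *a, const void *b)
{
  ino_t ia = ((const struct entry *) a)->ino;
  ino_t ib = ((const struct entry *) b)->ino;
  return (ia > ib) - (ia < ib);
}

int main (void)
{
  const char *dir = "/gnu/store/.links";
  DIR *d = opendir (dir);
  if (d == NULL) { perror (dir); return 1; }

  size_t n = 0, cap = 1024;
  struct entry *entries = malloc (cap * sizeof *entries);

  struct dirent *de;
  while ((de = readdir (d)) != NULL)
    {
      if (de->d_name[0] == '.')
        continue;                      /* skip "." and ".." */
      if (n == cap)
        entries = realloc (entries, (cap *= 2) * sizeof *entries);
      entries[n].ino = de->d_ino;
      snprintf (entries[n].name, sizeof entries[n].name, "%s", de->d_name);
      n++;
    }
  closedir (d);

  /* Statting in inode order keeps reads of the inode table roughly
     sequential, which is what matters once it no longer fits in cache.  */
  qsort (entries, n, sizeof *entries, by_inode);

  size_t unused = 0;
  for (size_t i = 0; i < n; i++)
    {
      char path[4096];
      struct stat st;
      snprintf (path, sizeof path, "%s/%s", dir, entries[i].name);
      if (lstat (path, &st) == 0 && st.st_nlink <= 1)
        unused++;                      /* what the GC would delete */
    }

  printf ("%zu entries, %zu unused\n", n, unused);
  free (entries);
  return 0;
}

Note that holding every entry in memory like this is exactly the O(N)
cost that piping the list through 'sort' avoids; more on that below.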
>>>> Why not just use GNU sort?  It already exists, and does exactly what
>>>> we need.
>>>
>>> Does ‘sort’ manage to avoid reading whole files in memory?
>>
>> Yes, it does.  I monitored the 'sort' process when I first ran my
>> optimized pipeline.  It created about 10 files in /tmp, approximately
>> 70 megabytes each as I recall, and then read them all concurrently
>> while writing the sorted output.
>>
>> My guess is that it reads a manageable chunk of the input, sorts it in
>> memory, and writes it to a temporary file.  I guess it repeats this
>> process, writing multiple temporary files, until the entire input is
>> consumed, and then reads all of those temporary files, merging them
>> together into the output stream.
>
> OK.  That seems to be that the comment above ‘sortlines’ in sort.c
> describes.

Also, see .  This is a well-studied problem with a long history.

>>>> If you object to using an external program for some reason, I would
>>>> prefer to re-implement a similar algorithm in the daemon.
>>>
>>> Yeah, I’d rather avoid serializing the list of file names/inode number
>>> pairs just to invoke ‘sort’ on that.

I'm fairly sure that the overhead of serializing the file names and
inode numbers is *far* less than the overhead you would add by
iterating over the inodes in multiple passes.

>> Sure, I agree that it would be better to avoid that, but IMO not at the
>> cost of using O(N) memory instead of O(1) memory, nor at the cost of
>> multiplying the amount of disk I/O by a non-trivial factor.
>
> Understood.
>
> sort.c in Coreutils is very big, and we surely don’t want to duplicate
> all that.  Yet, I’d rather not shell out to ‘sort’.

The "shell" would not be involved here at all, just the "sort" program.

I guess you dislike launching external processes?  Can you explain why?
Guix-daemon launches external processes for building derivations, so
why is using one for garbage collection a problem?

Emacs, a program that you cite in your talks as having many qualities
that we seek to emulate, does not shy away from using external
programs.

> Do you know how many entries are in .links on hydra.gnu.org?

"df -i /gnu" indicates that it currently has about 5.5M inodes, but
that's with only 29% of the disk in use.  A few days ago, when the disk
was full, assuming that the average file size is the same, it may have
had closer to 5.5M / 0.29 ~= 19M inodes, which is over 30 times as many
as used in your measurements above.  On ext4, which uses 256-byte
inodes, that's about 5 gigabytes of inodes.

      Mark
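P.S. To illustrate what I mean by "no shell involved": below is a
rough, untested sketch of feeding "inode name" lines to 'sort -n'
through pipes using plain fork/exec.  It is written as a standalone C
program rather than as daemon code, and the two sample lines are made
up, so treat it as a sketch of the approach only:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Start "sort -n" with its stdin and stdout connected to pipes, using
   fork/exec directly -- no /bin/sh anywhere.  Error handling is
   trimmed to keep the sketch short.  */
static pid_t spawn_sort (FILE **to, FILE **from)
{
  int in[2], out[2];
  pipe (in);
  pipe (out);

  pid_t pid = fork ();
  if (pid == 0)
    {
      dup2 (in[0], STDIN_FILENO);
      dup2 (out[1], STDOUT_FILENO);
      close (in[0]); close (in[1]);
      close (out[0]); close (out[1]);
      execlp ("sort", "sort", "-n", (char *) NULL);
      _exit (127);
    }

  close (in[0]);
  close (out[1]);
  *to = fdopen (in[1], "w");
  *from = fdopen (out[0], "r");
  return pid;
}

int main (void)
{
  FILE *to_sort, *from_sort;
  pid_t pid = spawn_sort (&to_sort, &from_sort);

  /* The daemon would emit one "inode filename" line per entry in
     .links; these two lines are stand-ins.  */
  fprintf (to_sort, "%llu %s\n", 7340033ULL, "1a2b3c-example-link");
  fprintf (to_sort, "%llu %s\n", 524289ULL, "4d5e6f-example-link");
  fclose (to_sort);             /* EOF lets 'sort' start writing output */

  /* Read the entries back in inode order.  */
  char line[4096];
  while (fgets (line, sizeof line, from_sort) != NULL)
    fputs (line, stdout);

  fclose (from_sort);
  waitpid (pid, NULL, 0);
  return 0;
}

Since 'sort' spills to temporary files once its buffer fills and then
merges them, the writer's memory use stays constant no matter how many
links there are.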