Commit log: git.kernel.org/pub/scm/git/git.git

2023-07-20  wrapper: use trace2 counters to collect fsync stats  (Beat Bolli)

As mentioned in the thread starting at [1], trace2 counters should be used to count events instead of ad-hoc static variables. Convert the two fsync static variables to trace2 counters, reducing the coupling between wrapper.c and the trace2 subsystem.

Adjust t/t5351 to match the trace2 counter output format.

The counters are not per-thread because the ones being replaced also were not.

[1] https://lore.kernel.org/git/20230627195251.1973421-2-calvinwan@google.com/

Signed-off-by: Beat Bolli <dev+git@drbeat.li>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

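For illustration (not part of the commit, and the event payload shape is an assumption that may vary by Git version), the converted counters can be observed via the trace2 event target:

    # Hedged sketch: write loose objects with batched fsync enabled, then
    # look for trace2 "counter" events in the emitted JSON lines.
    GIT_TRACE2_EVENT="$PWD/trace.json" \
    git -C dest.git -c core.fsync=loose-object -c core.fsyncMethod=batch \
        unpack-objects <my.pack &&
    grep '"event":"counter"' trace.json
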
2022-08-12  Merge branch 'ab/leak-check'  (Junio C Hamano)

Extend SANITIZE=leak checking and declare more tests "currently leak-free".

* ab/leak-check:
  CI: use "GIT_TEST_SANITIZE_LEAK_LOG=true" in linux-leaks
  upload-pack: fix a memory leak in create_pack_file()
  leak tests: mark passing SANITIZE=leak tests as leak-free
  leak tests: don't skip some tests under SANITIZE=leak
  test-lib: have the "check" mode for SANITIZE=leak consider leak logs
  test-lib: add a GIT_TEST_PASSING_SANITIZE_LEAK=check mode
  test-lib: simplify by removing test_external
  tests: move copy/pasted PERL + Test::More checks to a lib-perl.sh
  t/Makefile: don't remove test-results in "clean-except-prove-cache"
  test-lib: add a SANITIZE=leak logging mode
  t/README: reword the "GIT_TEST_PASSING_SANITIZE_LEAK" description
  test-lib: add a --invert-exit-code switch
  test-lib: fix GIT_EXIT_OK logic errors, use BAIL_OUT
  test-lib: don't set GIT_EXIT_OK before calling test_atexit_handler
  test-lib: use $1, not $@ in test_known_broken_{ok,failure}_

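As an illustration of the workflow these topics enable (the test script name is just an example):

    # Build with LSan, then run one test script while keeping the leak
    # logs in t/test-results/, even if the script ignores a bad exit code.
    make SANITIZE=leak &&
    cd t &&
    GIT_TEST_SANITIZE_LEAK_LOG=true ./t0000-basic.sh -v
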
2022-07-29  t5351: avoid using `test_cmp` for binary data  (Johannes Schindelin)

The `test_cmp` function is meant to provide nicer output than `cmp` when the expected and actual output of Git commands disagree. The implicit assumption is that the output is line-based and human-readable.

However, aaf81223f48 (unpack-objects: use stream_loose_object() to unpack large objects, 2022-06-11) introduced a call that compares the contents of pack files, which are distinctly neither line-based nor human-readable.

This causes problems because on Windows we hand off to the Bash function `mingw_test_cmp`, which compares the lines while ignoring line-ending differences. And this Bash function spends an insane number of cycles trying to read in that binary pack file, so that it is almost indistinguishable from an infinite loop. For example, t5351 took 1486 seconds to complete in the CI run at https://github.com/git/git/runs/7398490747?check_suite_focus=true#step:5:171. And yes, that is almost half an hour.

Since Git's tests already use `cmp` consistently when comparing pack files, let's change this instance to use `cmp` instead of `test_cmp`, too, and fix that performance problem. Now t5351 takes all of 22 seconds.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

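A minimal sketch of the resulting pattern (the test title and file names here are illustrative, not copied from t5351):

    # Pack files are binary: compare them byte-for-byte with cmp.
    # test_cmp is only for line-oriented, human-readable output.
    test_expect_success 'pack round-trips unchanged' '
        cmp expect.pack actual.pack
    '
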
2022-07-29  t5351: avoid relying on `core.fsyncMethod = batch` to be supported  (Johannes Schindelin)

On FreeBSD, this mode is not supported. But since 3a251bac0d1a (trace2: only include "fsync" events if we git_fsync(), 2022-07-18), t5351 will fail if this mode is unsupported.

Let's address this in the minimal fashion, by detecting that the mode is unsupported and expecting a different count of hardware flushes in that case. This fixes the CI/PR builds on FreeBSD again.

Note: a better way would be to test only what is relevant in t5351.6 "unpack big object in stream (core.fsyncmethod=batch)" again, instead of blindly comparing the output against some exact text. But that would pretty much revert the idea of the above-mentioned commit, and that commit was _just_ accepted into Git's main branch, so one must assume that it was intentional.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

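A hedged sketch of the detection approach (the warning text and flush counts are assumptions for illustration, not copied from t5351):

    # When batch mode is unsupported, Git warns and falls back, so every
    # batched writeout is accounted for as a full hardware flush instead.
    git -c core.fsync=loose-object -c core.fsyncMethod=batch \
        unpack-objects <test.pack 2>err
    if grep -q "core.fsyncMethod = batch is unsupported" err
    then
        expect_flushes=7    # fallback: one full flush per object
    else
        expect_flushes=1    # batch: a single trailing hardware flush
    fi
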
2022-07-28  leak tests: mark passing SANITIZE=leak tests as leak-free  (Ævar Arnfjörð Bjarmason)

Mark those remaining tests that pass when run under SANITIZE=leak with TEST_PASSES_SANITIZE_LEAK=true. These were either omitted in f346fcb62a0 (Merge branch 'ab/mark-leak-free-tests-even-more', 2021-12-15) and 5a4f8381b68 (Merge branch 'ab/mark-leak-free-tests', 2021-10-25), or have had their memory leaks fixed since then.

With this change there's now a one-to-one mapping between those tests that we have opted in via "TEST_PASSES_SANITIZE_LEAK=true" and those that pass with the new "check" mode:

    GIT_TEST_PASSING_SANITIZE_LEAK=check \
    GIT_TEST_SANITIZE_LEAK_LOG=true \
    make test SANITIZE=leak

Note that the "GIT_TEST_SANITIZE_LEAK_LOG=true" is needed due to the edge cases noted in a preceding commit, i.e. in some cases we'd pass the test itself but still have outstanding leaks due to ignored exit codes. The "GIT_TEST_SANITIZE_LEAK_LOG=true" corrects for that: we're only marking those tests as passing that really don't have any leaks, whether that was reflected in their exit code or not.

Note that the change here to "t9100-git-svn-basic.sh" is marking that test as passing under SANITIZE=leak by removing a "TEST_FAILS_SANITIZE_LEAK=true" line, not adding a "TEST_PASSES_SANITIZE_LEAK=true" one. See 7a98d9ab00d (revisions API: have release_revisions() release "cmdline", 2022-04-13) for the introduction of that t/lib-git-svn.sh-specific variable.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

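For context, this is how a test script opts in: the flag is set before sourcing test-lib.sh (shown here as the existing convention in t/, not as part of this commit's diff):

    # At the top of a t/tNNNN-*.sh script:
    test_description='describe the tests here'
    TEST_PASSES_SANITIZE_LEAK=true
    . ./test-lib.sh
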
2022-07-18  trace2: only include "fsync" events if we git_fsync()  (Ævar Arnfjörð Bjarmason)

Fix the overly verbose trace2 logging added in 9a4987677d3 (trace2: add stats for fsync operations, 2022-03-30), first released with v2.36.0.

Since that change every single "git" command invocation has included these "data" events, even though we'll only make use of these with core.fsyncMethod=batch, and even then only have non-zero values if we're writing object data to disk. See c0f4752ed2f (core.fsyncmethod: batched disk flushes for loose-objects, 2022-04-04) for that feature.

As we're needing to indent the trace2_data_intmax() lines, let's introduce helper variables to ensure that our resulting lines (which were already too long) don't exceed the recommendations of the CodingGuidelines. Doing that requires either wrapping them twice or introducing short throwaway variable names; let's do the latter.

The result was that e.g. "git version" would previously emit a total of 6 trace2 events with the GIT_TRACE2_EVENT target (version, start, cmd_ancestry, cmd_name, exit, atexit), but afterwards would emit 8. We'd emit 2 "data" events before the "exit" event.

The reason we didn't catch this was that the trace2 unit tests added in a15860dca3f (trace2: t/helper/test-trace2, t0210.sh, t0211.sh, t0212.sh, 2019-02-22) would omit any "data" events that weren't the ones they cared about. Before this change to the C code, 6/7 of our "t/t0212-trace2-event.sh" tests would fail if this change was applied to "t/t0212/parse_events.perl".

Let's make the trace2 testing more strict, and further append any new event types we don't know about in "t/t0212/parse_events.perl". Since we only invoke "test-tool trace2" there's no guarantee that we'll catch other overly verbose events in the future, but we'll at least notice if we start emitting new events that are issued every time we log anything with trace2's JSON target.

We exclude the "data_json" event type; we'd otherwise fail on both "win test" and "win+VS test" CI due to the logging added in 353d3d77f4f (trace2: collect Windows-specific process information, 2019-02-22). It looks like that logging should really be using trace2_cmd_ancestry() instead, which was introduced later in 2f732bf15e6 (tr2: log parent process name, 2021-07-21), but let's leave it for now.

The fix-up to aaf81223f48 (unpack-objects: use stream_loose_object() to unpack large objects, 2022-06-11) is needed because we're changing the behavior of these events as discussed above. Since we'd always emit a "hardware-flush" event, the test added in aaf81223f48 wasn't testing anything except that this trace2 data was unconditionally logged. Even if "core.fsyncMethod" wasn't set to "batch" we'd pass the test.

Now we'll check the expected number of "writeout" vs. "flush" calls under "core.fsyncMethod=batch". Note that this doesn't actually test whether we carried out the sync using that method: on a platform where we'd have to fall back to fsync(), each of those "writeout" calls would really be a "flush" (i.e. a full fsync()). But in this case what we're testing is that the logic in "unpack-objects" behaves as expected, not the OS-specific question of whether we actually were able to use the "bulk" method.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

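To see the difference this fix makes (the event counts come from the commit message above; counting lines is one way to tally events, since the event target emits one JSON object per line):

    # After the fix, a trivial command emits 6 events (version, start,
    # cmd_ancestry, cmd_name, exit, atexit) instead of 8: the two fsync
    # "data" events are gone.
    GIT_TRACE2_EVENT="$PWD/events.json" git version &&
    wc -l <events.json
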
2022-06-13  unpack-objects: use stream_loose_object() to unpack large objects  (Han Xin)

Make use of the stream_loose_object() function introduced in the preceding commit to unpack large objects. Before this we'd need to malloc() the size of the blob before unpacking it, which could cause OOM with very large blobs.

We could use the new streaming interface to unpack all blobs, but doing so would be much slower, as demonstrated e.g. with this benchmark using git-hyperfine[0]:

    rm -rf /tmp/scalar.git &&
    git clone --bare https://github.com/Microsoft/scalar.git /tmp/scalar.git &&
    mv /tmp/scalar.git/objects/pack/*.pack /tmp/scalar.git/my.pack &&
    git hyperfine \
        -r 2 --warmup 1 \
        -L rev origin/master,HEAD -L v "10,512,1k,1m" \
        -s 'make' \
        -p 'git init --bare dest.git' \
        -c 'rm -rf dest.git' \
        './git -C dest.git -c core.bigFileThreshold={v} unpack-objects </tmp/scalar.git/my.pack'

Here we'll perform worse with lower core.bigFileThreshold settings with this change in terms of speed, but we're getting lower memory use in return:

    Summary
      './git -C dest.git -c core.bigFileThreshold=10 unpack-objects </tmp/scalar.git/my.pack' in 'origin/master' ran
        1.01 ± 0.01 times faster than './git -C dest.git -c core.bigFileThreshold=1k unpack-objects </tmp/scalar.git/my.pack' in 'origin/master'
        1.01 ± 0.01 times faster than './git -C dest.git -c core.bigFileThreshold=1m unpack-objects </tmp/scalar.git/my.pack' in 'origin/master'
        1.01 ± 0.02 times faster than './git -C dest.git -c core.bigFileThreshold=1m unpack-objects </tmp/scalar.git/my.pack' in 'HEAD'
        1.02 ± 0.00 times faster than './git -C dest.git -c core.bigFileThreshold=512 unpack-objects </tmp/scalar.git/my.pack' in 'origin/master'
        1.09 ± 0.01 times faster than './git -C dest.git -c core.bigFileThreshold=1k unpack-objects </tmp/scalar.git/my.pack' in 'HEAD'
        1.10 ± 0.00 times faster than './git -C dest.git -c core.bigFileThreshold=512 unpack-objects </tmp/scalar.git/my.pack' in 'HEAD'
        1.11 ± 0.00 times faster than './git -C dest.git -c core.bigFileThreshold=10 unpack-objects </tmp/scalar.git/my.pack' in 'HEAD'

A better benchmark to demonstrate the benefits of this change is this one, which creates an artificial repo with a 1, 25, 50, 75 and 100MB blob:

    rm -rf /tmp/repo &&
    git init /tmp/repo &&
    (
        cd /tmp/repo &&
        for i in 1 25 50 75 100
        do
            dd if=/dev/urandom of=blob.$i count=$(($i*1024)) bs=1024
        done &&
        git add blob.* &&
        git commit -mblobs &&
        git gc &&
        PACK=$(echo .git/objects/pack/pack-*.pack) &&
        cp "$PACK" my.pack
    ) &&
    git hyperfine \
        --show-output \
        -L rev origin/master,HEAD -L v "512,50m,100m" \
        -s 'make' \
        -p 'git init --bare dest.git' \
        -c 'rm -rf dest.git' \
        '/usr/bin/time -v ./git -C dest.git -c core.bigFileThreshold={v} unpack-objects </tmp/repo/my.pack 2>&1 | grep Maximum'

Using this test we'll always use >100MB of memory on origin/master (around ~105MB), but max out at e.g. ~55MB if we set core.bigFileThreshold=50m.

The relevant "Maximum resident set size" lines were manually added below the relevant benchmark:

    '/usr/bin/time -v ./git -C dest.git -c core.bigFileThreshold=50m unpack-objects </tmp/repo/my.pack 2>&1 | grep Maximum' in 'origin/master' ran
        Maximum resident set size (kbytes): 107080
      1.02 ± 0.78 times faster than '/usr/bin/time -v ./git -C dest.git -c core.bigFileThreshold=512 unpack-objects </tmp/repo/my.pack 2>&1 | grep Maximum' in 'origin/master'
        Maximum resident set size (kbytes): 106968
      1.09 ± 0.79 times faster than '/usr/bin/time -v ./git -C dest.git -c core.bigFileThreshold=100m unpack-objects </tmp/repo/my.pack 2>&1 | grep Maximum' in 'origin/master'
        Maximum resident set size (kbytes): 107032
      1.42 ± 1.07 times faster than '/usr/bin/time -v ./git -C dest.git -c core.bigFileThreshold=100m unpack-objects </tmp/repo/my.pack 2>&1 | grep Maximum' in 'HEAD'
        Maximum resident set size (kbytes): 107072
      1.83 ± 1.02 times faster than '/usr/bin/time -v ./git -C dest.git -c core.bigFileThreshold=50m unpack-objects </tmp/repo/my.pack 2>&1 | grep Maximum' in 'HEAD'
        Maximum resident set size (kbytes): 55704
      2.16 ± 1.19 times faster than '/usr/bin/time -v ./git -C dest.git -c core.bigFileThreshold=512 unpack-objects </tmp/repo/my.pack 2>&1 | grep Maximum' in 'HEAD'
        Maximum resident set size (kbytes): 4564

This shows that if you have enough memory this new streaming method is slower the lower you set the streaming threshold, but the benefit is more bounded memory use.

An earlier version of this patch introduced a new "core.bigFileStreamingThreshold" setting instead of re-using the existing "core.bigFileThreshold" variable[1]. As noted in a detailed overview of its users in [2], that variable already has several different meanings. Still, we consider it good enough to simply re-use it. While it's possible that someone might want to e.g. consider objects "small" for the purposes of diffing but "big" for the purposes of writing them, such use-cases are probably too obscure to worry about. We can always split up "core.bigFileThreshold" in the future if there's a need for that.

0. https://github.com/avar/git-hyperfine/
1. https://lore.kernel.org/git/20211210103435.83656-1-chiyutianyi@gmail.com/
2. https://lore.kernel.org/git/20220120112114.47618-5-chiyutianyi@gmail.com/

Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Helped-by: Derrick Stolee <stolee@gmail.com>
Helped-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Han Xin <chiyutianyi@gmail.com>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2022-06-13  unpack-objects: low memory footprint for get_data() in dry_run mode  (Han Xin)

As the name implies, "get_data(size)" will allocate and return a given amount of memory. Allocating memory for a large blob object may cause the system to run out of memory. In preparation for replacing the calls to "get_data()" that unpack large blob objects in later commits, refactor "get_data()" to reduce the memory footprint in dry_run mode.

Because in dry_run mode "get_data()" is only used to check the integrity of the data and the returned buffer is not used at all, we can allocate a smaller buffer and use it as the zstream output. Make the function return NULL in dry-run mode, as no callers use the returned buffer.

The "find [...]objects/?? -type f | wc -l" test idiom being used here is adapted from the same "find" use added to another test in d9545c7f465 (fast-import: implement unpack limit, 2016-04-25).

Suggested-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Han Xin <chiyutianyi@gmail.com>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

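Spelled out (the destination path is an example; loose objects live under two-hex-digit fan-out directories):

    # Count loose objects written to the destination repository; with
    # --dry-run nothing should be written, so this should print 0.
    find dest.git/objects/?? -type f 2>/dev/null | wc -l
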