
git.kernel.org/pub/scm/git/git.git
author    Taylor Blau <me@ttaylorr.com>    2023-05-08 20:38:12 +0300
committer Junio C Hamano <gitster@pobox.com>    2023-05-08 22:05:55 +0300
commit    b0afdce5dab61f224fd66c13768facc36a7f8705 (patch)
tree      53d162dc5673ac2f894baaa3d167f548586c6d0e    /pack-bitmap.c
parent    47ff853f02a0b4af6d01727b7e45046b61b0a9b4 (diff)
pack-bitmap.c: use commit boundary during bitmap traversal
When reachability bitmap coverage exists in a repository, Git will use a
different (and hopefully faster) traversal to compute revision walks.

Consider a set of positive and negative tips (which we'll refer to with
their standard bitmap parlance by "wants", and "haves"). In order to
figure out what objects exist between the tips, the existing traversal
in `prepare_bitmap_walk()` does something like:

  1. Consider if we can even compute the set of objects with bitmaps,
     and fall back to the usual traversal if we cannot. For example,
     pathspec limiting traversals can't be computed using bitmaps (since
     they don't know which objects are at which paths). The same is true
     of certain kinds of non-trivial object filters.

  2. If we can compute the traversal with bitmaps, partition the
     (dereferenced) tips into two object lists, "haves", and "wants",
     based on whether or not the objects have the UNINTERESTING flag,
     respectively.

  3. Fall back to the ordinary object traversal if either (a) there are
     more than zero haves, none of which are in the bitmapped pack or
     MIDX, or (b) there are no wants.

  4. Construct a reachability bitmap for the "haves" side by walking
     from the revision tips down to any existing bitmaps, OR-ing in any
     bitmaps as they are found.

  5. Then do the same for the "wants" side, stopping at any objects that
     appear in the "haves" bitmap.

  6. Filter the results if any object filter (that can be easily
     computed with bitmaps alone) was given, and then return back to the
     caller.

When there is good bitmap coverage relative to the traversal tips, this
walk is often significantly faster than an ordinary object traversal
because it can visit far fewer objects.

But in certain cases, it can be significantly *slower* than the usual
object traversal. Why? Because we need to compute complete bitmaps on
either side of the walk. If either one (or both) of the sides require
walking many (or all!) objects before they get to an existing bitmap,
the extra bitmap machinery is mostly or all overhead.

One of the benefits, however, is that even if the walk is slower, bitmap
traversals are guaranteed to provide an *exact* answer. Unlike the
traditional object traversal algorithm, which can over-count the results
by not opening trees for older commits, the bitmap walk builds an exact
reachability bitmap for either side, meaning the results are never
over-counted.

But producing non-exact results is OK for our traversal here (both in
the bitmap case and not), as long as the results are over-counted, not
under.

Relaxing the bitmap traversal to allow it to produce over-counted
results gives us the opportunity to make some significant improvements.
Instead of the above, the new algorithm only has to walk from the
*boundary* down to the nearest bitmap, instead of from each of the
UNINTERESTING tips.

The boundary-based approach still has degenerate cases, but we'll show
in a moment that it is often a significant improvement.

The new algorithm works as follows:

  1. Build a (partial) bitmap of the haves side by first OR-ing any
     bitmap(s) that already exist for UNINTERESTING commits between the
     haves and the boundary.

  2. For each commit along the boundary, add it as a fill-in traversal
     tip (where the traversal terminates once an existing bitmap is
     found), and perform fill-in traversal.

  3. Build up a complete bitmap of the wants side as usual, stopping any
     time we intersect the (partial) haves side.

  4. Return the results.
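To make step (2) concrete: the boundary commits that seed the fill-in
traversal are exactly the commits `git rev-list --boundary` prints with
a leading "-". A rough way to see that set from the command line
(assuming $WANTS and $HAVES hold the positive and negative tips, as in
the invocation below) is:

    $ git rev-list --boundary $WANTS --not $HAVES | sed -n 's/^-//p'

which is the same set the perl pipeline below extracts.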
The new algorithm is more-or-less equivalent to using the *old*
algorithm with this invocation:

    $ git rev-list --objects --use-bitmap-index $WANTS --not \
        $(git rev-list --objects --boundary $WANTS --not $HAVES |
          perl -lne 'print $1 if /^-(.*)/')

The new result performs significantly better in many cases, particularly
when the distance from the boundary commit(s) to an existing bitmap is
shorter than the distance from (all of) the have tips to the nearest
bitmapped commit.

Note that with the old bitmap traversal algorithm, the query can be
*slower* than without bitmaps! Under the new algorithm, the result is
computed faster with bitmaps than without (at the cost of over-counting
the true number of objects in a similar fashion as the non-bitmap
traversal):

    # (Computing the number of tagged objects not on any branches
    # without bitmaps).
    $ time git rev-list --count --objects --tags --not --branches
    20

    real    0m1.388s
    user    0m1.092s
    sys     0m0.296s

    # (Computing the same query using the old bitmap traversal).
    $ time git rev-list --count --objects --tags --not --branches --use-bitmap-index
    19

    real    0m22.709s
    user    0m21.628s
    sys     0m1.076s

    # (this commit)
    $ time git.compile rev-list --count --objects --tags --not --branches --use-bitmap-index
    19

    real    0m1.518s
    user    0m1.234s
    sys     0m0.284s

The new algorithm is still slower than not using bitmaps at all, but it
is nearly a 15-fold improvement over the existing traversal.

In a more realistic setting (using my local copy of git.git), I can
observe a similar (if more modest) speed-up:

    $ argv="--count --objects --branches --not --tags"
    hyperfine \
      -n 'no bitmaps' "git.compile rev-list $argv" \
      -n 'existing traversal' "git.compile rev-list --use-bitmap-index $argv" \
      -n 'boundary traversal' "git.compile -c pack.useBitmapBoundaryTraversal=true rev-list --use-bitmap-index $argv"
    Benchmark 1: no bitmaps
      Time (mean ± σ):     124.6 ms ±   2.1 ms    [User: 103.7 ms, System: 20.8 ms]
      Range (min … max):   122.6 ms … 133.1 ms    22 runs

    Benchmark 2: existing traversal
      Time (mean ± σ):     368.6 ms ±   3.0 ms    [User: 325.3 ms, System: 43.1 ms]
      Range (min … max):   365.1 ms … 374.8 ms    10 runs

    Benchmark 3: boundary traversal
      Time (mean ± σ):     167.6 ms ±   0.9 ms    [User: 139.5 ms, System: 27.9 ms]
      Range (min … max):   166.1 ms … 169.2 ms    17 runs

    Summary
      'no bitmaps' ran
        1.34 ± 0.02 times faster than 'boundary traversal'
        2.96 ± 0.05 times faster than 'existing traversal'

Here, the new algorithm is also still slower than not using bitmaps, but
represents a more than 2-fold improvement over the existing traversal in
a more modest example.

Since this algorithm was originally written (nearly a year and a half
ago, at the time of writing), the bitmap lookup table shipped, making
the new algorithm's result more competitive.

A few other future directions that could improve bitmap traversal times
further, ideally beyond not using bitmaps at all:

  - Decrease the cost to decompress and OR together many bitmaps
    (particularly when enumerating the uninteresting side of the walk).
    Here we could explore more efficient bitmap storage techniques, like
    Roaring+Run, and/or use SIMD instructions to speed up OR-ing them
    together.

  - Store pseudo-merge bitmaps, which could allow us to OR together
    fewer "summary" bitmaps (which would also help with the above).

Helped-by: Jeff King <peff@peff.net>
Helped-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
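As the patch below shows, the new traversal is opt-in: it is gated on
the `pack.useBitmapBoundaryTraversal` repository setting, with the
`GIT_TEST_PACK_USE_BITMAP_BOUNDARY_TRAVERSAL` test knob taking
precedence when set. A minimal sketch of flipping it on, assuming a
build containing this patch (and that the test macro maps to an
environment variable of the same name, as is conventional):

    # Enable the boundary-based traversal for one repository:
    $ git config pack.useBitmapBoundaryTraversal true

    # Or force it on (or off, with =0) for a single invocation:
    $ GIT_TEST_PACK_USE_BITMAP_BOUNDARY_TRAVERSAL=1 \
          git rev-list --count --objects --use-bitmap-index --branches --not --tags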
Diffstat (limited to 'pack-bitmap.c')
-rw-r--r--    pack-bitmap.c    182
1 file changed, 169 insertions, 13 deletions
diff --git a/pack-bitmap.c b/pack-bitmap.c
index 5d2cc6ac96..e8a1579b16 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -1077,6 +1077,126 @@ static struct bitmap *fill_in_bitmap(struct bitmap_index *bitmap_git,
return base;
}
+struct bitmap_boundary_cb {
+ struct bitmap_index *bitmap_git;
+ struct bitmap *base;
+
+ struct object_array boundary;
+};
+
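+/*
+ * Callback for the boundary-finding traversal: remember any commit on
+ * the boundary so it can seed the later fill-in walk, and OR an
+ * existing bitmap (if one exists) into the partial haves bitmap for
+ * each UNINTERESTING commit we see.
+ */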
+static void show_boundary_commit(struct commit *commit, void *_data)
+{
+ struct bitmap_boundary_cb *data = _data;
+
+ if (commit->object.flags & BOUNDARY)
+ add_object_array(&commit->object, "", &data->boundary);
+
+ if (commit->object.flags & UNINTERESTING) {
+ if (bitmap_walk_contains(data->bitmap_git, data->base,
+ &commit->object.oid))
+ return;
+
+ add_commit_to_bitmap(data->bitmap_git, &data->base, commit);
+ }
+}
+
+static void show_boundary_object(struct object *object,
+ const char *name, void *data)
+{
+ BUG("should not be called");
+}
+
+static struct bitmap *find_boundary_objects(struct bitmap_index *bitmap_git,
+ struct rev_info *revs,
+ struct object_list *roots)
+{
+ struct bitmap_boundary_cb cb;
+ struct object_list *root;
+ unsigned int i;
+ unsigned int tmp_blobs, tmp_trees, tmp_tags;
+ int any_missing = 0;
+
+ cb.bitmap_git = bitmap_git;
+ cb.base = bitmap_new();
+ object_array_init(&cb.boundary);
+
+ revs->ignore_missing_links = 1;
+
+ /*
+ * OR in any existing reachability bitmaps among `roots` into
+ * `cb.base`.
+ */
+ for (root = roots; root; root = root->next) {
+ struct object *object = root->item;
+ if (object->type != OBJ_COMMIT ||
+ bitmap_walk_contains(bitmap_git, cb.base, &object->oid))
+ continue;
+
+ if (add_commit_to_bitmap(bitmap_git, &cb.base,
+ (struct commit *)object))
+ continue;
+
+ any_missing = 1;
+ }
+
+ if (!any_missing)
+ goto cleanup;
+
+ tmp_blobs = revs->blob_objects;
+ tmp_trees = revs->tree_objects;
+ tmp_tags = revs->tag_objects;
+ revs->blob_objects = 0;
+ revs->tree_objects = 0;
+ revs->tag_objects = 0;
+
+ /*
+ * We didn't have complete coverage of the roots. First set up a
+ * revision walk to (a) OR in any existing bitmaps for UNINTERESTING
+ * commits between the tips and the boundary, and (b) record the
+ * boundary.
+ */
+ trace2_region_enter("pack-bitmap", "boundary-prepare", the_repository);
+ if (prepare_revision_walk(revs))
+ die("revision walk setup failed");
+ trace2_region_leave("pack-bitmap", "boundary-prepare", the_repository);
+
+ trace2_region_enter("pack-bitmap", "boundary-traverse", the_repository);
+ revs->boundary = 1;
+ traverse_commit_list_filtered(revs,
+ show_boundary_commit,
+ show_boundary_object,
+ &cb, NULL);
+ revs->boundary = 0;
+ trace2_region_leave("pack-bitmap", "boundary-traverse", the_repository);
+
+ revs->blob_objects = tmp_blobs;
+ revs->tree_objects = tmp_trees;
+ revs->tag_objects = tmp_tags;
+
+ reset_revision_walk();
+ clear_object_flags(UNINTERESTING);
+
+ /*
+ * Then add the boundary commit(s) as fill-in traversal tips.
+ */
+ trace2_region_enter("pack-bitmap", "boundary-fill-in", the_repository);
+ for (i = 0; i < cb.boundary.nr; i++) {
+ struct object *obj = cb.boundary.objects[i].item;
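+ /*
+ * Boundary commits already covered by the partial haves bitmap
+ * need no further walking; the rest become pending tips for the
+ * fill-in traversal below.
+ */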
+ if (bitmap_walk_contains(bitmap_git, cb.base, &obj->oid))
+ obj->flags |= SEEN;
+ else
+ add_pending_object(revs, obj, "");
+ }
+ if (revs->pending.nr)
+ cb.base = fill_in_bitmap(bitmap_git, revs, cb.base, NULL);
+ trace2_region_leave("pack-bitmap", "boundary-fill-in", the_repository);
+
+cleanup:
+ object_array_clear(&cb.boundary);
+ revs->ignore_missing_links = 0;
+
+ return cb.base;
+}
+
static struct bitmap *find_objects(struct bitmap_index *bitmap_git,
struct rev_info *revs,
struct object_list *roots,
@@ -1142,8 +1262,21 @@ static struct bitmap *find_objects(struct bitmap_index *bitmap_git,
}
}
- if (needs_walk)
+ if (needs_walk) {
+ /*
+ * This fill-in traversal may walk over some objects
+ * again, since we have already traversed in order to
+ * find the boundary.
+ *
+ * But this extra walk should be extremely cheap, since
+ * all commit objects are loaded into memory, and
+ * because we skip walking to parents that are
+ * UNINTERESTING, since they will already be marked in the
+ * haves bitmap (or have an on-disk bitmap of their own, in
+ * which case OR-ing it in covers all of their ancestors).
+ */
base = fill_in_bitmap(bitmap_git, revs, base, seen);
+ }
return base;
}
@@ -1535,6 +1668,7 @@ struct bitmap_index *prepare_bitmap_walk(struct rev_info *revs,
int filter_provided_objects)
{
unsigned int i;
+ int use_boundary_traversal;
struct object_list *wants = NULL;
struct object_list *haves = NULL;
@@ -1585,13 +1719,21 @@ struct bitmap_index *prepare_bitmap_walk(struct rev_info *revs,
object_list_insert(object, &wants);
}
- /*
- * if we have a HAVES list, but none of those haves is contained
- * in the packfile that has a bitmap, we don't have anything to
- * optimize here
- */
- if (haves && !in_bitmapped_pack(bitmap_git, haves))
- goto cleanup;
+ use_boundary_traversal = git_env_bool(GIT_TEST_PACK_USE_BITMAP_BOUNDARY_TRAVERSAL, -1);
+ if (use_boundary_traversal < 0) {
+ prepare_repo_settings(revs->repo);
+ use_boundary_traversal = revs->repo->settings.pack_use_bitmap_boundary_traversal;
+ }
+
+ if (!use_boundary_traversal) {
+ /*
+ * if we have a HAVES list, but none of those haves is contained
+ * in the packfile that has a bitmap, we don't have anything to
+ * optimize here
+ */
+ if (haves && !in_bitmapped_pack(bitmap_git, haves))
+ goto cleanup;
+ }
/* if we don't want anything, we're done here */
if (!wants)
@@ -1605,18 +1747,32 @@ struct bitmap_index *prepare_bitmap_walk(struct rev_info *revs,
if (load_bitmap(revs->repo, bitmap_git) < 0)
goto cleanup;
- object_array_clear(&revs->pending);
+ if (!use_boundary_traversal)
+ object_array_clear(&revs->pending);
if (haves) {
- revs->ignore_missing_links = 1;
- haves_bitmap = find_objects(bitmap_git, revs, haves, NULL);
- reset_revision_walk();
- revs->ignore_missing_links = 0;
+ if (use_boundary_traversal) {
+ trace2_region_enter("pack-bitmap", "haves/boundary", the_repository);
+ haves_bitmap = find_boundary_objects(bitmap_git, revs, haves);
+ trace2_region_leave("pack-bitmap", "haves/boundary", the_repository);
+ } else {
+ trace2_region_enter("pack-bitmap", "haves/classic", the_repository);
+ revs->ignore_missing_links = 1;
+ haves_bitmap = find_objects(bitmap_git, revs, haves, NULL);
+ reset_revision_walk();
+ revs->ignore_missing_links = 0;
+ trace2_region_leave("pack-bitmap", "haves/classic", the_repository);
+ }
if (!haves_bitmap)
BUG("failed to perform bitmap walk");
}
+ if (use_boundary_traversal) {
+ object_array_clear(&revs->pending);
+ reset_revision_walk();
+ }
+
wants_bitmap = find_objects(bitmap_git, revs, wants, haves_bitmap);
if (!wants_bitmap)