git.kernel.org/pub/scm/git/git.git
author     Taylor Blau <me@ttaylorr.com>       2023-07-11 00:12:28 +0300
committer  Junio C Hamano <gitster@pobox.com>  2023-07-11 00:48:55 +0300
commit     59c35fac54054b3f781b0275eac7d7c54468f0d5 (patch)
tree       b8ebfa5947fc1fa31dbaba7a71b5e83e262695d4 /refs
parent     d22d941ac08aa31266bb0b38d78c79c3ffdc4f86 (diff)
refs/packed-backend.c: implement jump lists to avoid excluded pattern(s)
When iterating through the `packed-refs` file in order to answer a
query like:

    $ git for-each-ref --exclude=refs/__hidden__

it would be useful to avoid walking over all of the entries in
`refs/__hidden__/*` when possible, since we know that the ref-filter
code is going to throw them away anyways.

In certain circumstances, doing so is possible. The algorithm for doing
so is as follows:

  - For each excluded pattern, find the first record that matches it,
    and the first record that *doesn't* match it (i.e. the location
    you'd next want to consider when excluding that pattern).

  - Sort the set of excluded regions from the previous step in
    ascending order of the first location within the `packed-refs`
    file that matches.

  - Clean up the results from the previous step: discard empty regions,
    and combine adjacent regions. The set of regions which remains is
    referred to as the "jump list", and never contains any references
    which should be included in the result set.

Then when iterating through the `packed-refs` file, if `iter->pos` is
ever contained in one of the regions from the previous steps, advance
`iter->pos` past the end of that region, and continue enumeration.

Note that we only perform this optimization when none of the excluded
pattern(s) have special meta-characters in them. For a pattern like
"refs/foo[ac]", the excluded regions ("refs/fooa", "refs/fooc", and
everything underneath them) are not connected. A future implementation
that handles this case may split the character class (pretending as if
two patterns were excluded: "refs/fooa", and "refs/fooc").

There are a few other gotchas worth considering. First, note that the
jump list is sorted, so once we jump past a region, we can avoid
considering it (or any regions preceding it) again. The member
`jump_cur` is used to track the first next-possible region to jump
through.

Second, note that the jump list is best-effort, since we do not handle
loose references, and because of the meta-character issue above. The
jump list may not skip past all references which won't appear in the
results, but will never skip over a reference which does appear in the
result set.

In repositories with a large number of hidden references, the speed-up
can be significant.
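To make the iteration half of this concrete, here is a minimal
standalone sketch of the idea. It is illustrative only: `struct region`
and `skip_excluded()` are hypothetical names, not the patch's code,
which lives in `next_record()` in the diff below.

    #include <stddef.h>

    /* One coalesced excluded region: [start, end) within the buffer. */
    struct region {
            const char *start;
            const char *end;
    };

    /*
     * Advance `pos` past an excluded region that contains it. `*cur`
     * tracks the first region that might still apply; because the
     * list is sorted and coalesced, each region is considered at most
     * once over the whole iteration, and a single jump suffices.
     */
    static const char *skip_excluded(const char *pos,
                                     const struct region *jump,
                                     size_t jump_nr, size_t *cur)
    {
            while (*cur < jump_nr) {
                    const struct region *r = &jump[*cur];
                    if (pos < r->start)
                            break;          /* not at the next region yet */
                    (*cur)++;
                    if (pos < r->end)
                            return r->end;  /* inside the region: jump past it */
            }
            return pos;
    }

The patch does the equivalent inline in `next_record()`, using
`iter->jump`, `iter->jump_nr`, and `iter->jump_cur`.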
Tests here are done with a copy of linux.git with a reference
"refs/pull/N" pointing at every commit, as in:

    $ git rev-list HEAD | awk '{ print "create refs/pull/" NR " " $0 }' |
        git update-ref --stdin
    $ git pack-refs --all

With that setup, it is significantly faster to have `for-each-ref` jump
over the excluded references, as opposed to filtering them out after
the fact:

    $ hyperfine \
        'git for-each-ref --format="%(objectname) %(refname)" | grep -vE "^[0-9a-f]{40} refs/pull/"' \
        'git.prev for-each-ref --format="%(objectname) %(refname)" --exclude="refs/pull"' \
        'git.compile for-each-ref --format="%(objectname) %(refname)" --exclude="refs/pull"'

    Benchmark 1: git for-each-ref --format="%(objectname) %(refname)" | grep -vE "^[0-9a-f]{40} refs/pull/"
      Time (mean ± σ):     798.1 ms ±   3.3 ms    [User: 687.6 ms, System: 146.4 ms]
      Range (min … max):   794.5 ms … 805.5 ms    10 runs

    Benchmark 2: git.prev for-each-ref --format="%(objectname) %(refname)" --exclude="refs/pull"
      Time (mean ± σ):      98.9 ms ±   1.4 ms    [User: 93.1 ms, System: 5.7 ms]
      Range (min … max):    97.0 ms … 104.0 ms    29 runs

    Benchmark 3: git.compile for-each-ref --format="%(objectname) %(refname)" --exclude="refs/pull"
      Time (mean ± σ):       4.5 ms ±   0.2 ms    [User: 0.7 ms, System: 3.8 ms]
      Range (min … max):     4.1 ms …   5.8 ms    524 runs

    Summary
      'git.compile for-each-ref --format="%(objectname) %(refname)" --exclude="refs/pull"' ran
        21.87 ± 1.05 times faster than 'git.prev for-each-ref --format="%(objectname) %(refname)" --exclude="refs/pull"'
       176.52 ± 8.19 times faster than 'git for-each-ref --format="%(objectname) %(refname)" | grep -vE "^[0-9a-f]{40} refs/pull/"'

(Comparing stock git and this patch isn't quite fair, since an earlier
commit in this series adds a naive implementation of the `--exclude`
option. `git.prev` is built from the previous commit and includes this
naive implementation.)

Using the jump list is fairly straightforward (see the changes to
`refs/packed-backend.c::next_record()`), but constructing the list is
not. To ensure that the construction is correct, add a new suite of
tests in t1419 covering various corner cases (overlapping regions,
partially overlapping regions, adjacent regions, etc.).

Co-authored-by: Jeff King <peff@peff.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
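As a rough sketch of the construction-side corner cases those tests
exercise (overlapping, partially overlapping, and adjacent regions),
the following self-contained helper merges a start-sorted region list.
It is illustrative only; `coalesce_regions()` is a hypothetical name
and a simplified stand-in for the patch's `populate_excluded_jump_list()`.

    #include <stddef.h>

    struct region {
            const char *start;
            const char *end;        /* exclusive */
    };

    /*
     * Given regions sorted by `start`, drop empty ones and merge any
     * that touch or overlap, returning the new count. Afterwards,
     * entering a region means a single jump to its `end` is enough.
     */
    static size_t coalesce_regions(struct region *r, size_t nr)
    {
            size_t i, j = 0;

            for (i = 0; i < nr; i++) {
                    if (r[i].start >= r[i].end)
                            continue;               /* empty, discard */
                    if (j && r[i].start <= r[j - 1].end) {
                            /* overlapping or adjacent: extend previous */
                            if (r[i].end > r[j - 1].end)
                                    r[j - 1].end = r[i].end;
                    } else {
                            r[j++] = r[i];          /* disjoint: keep */
                    }
            }
            return j;
    }

With pointers A < B < C < D, the input [A, B], [B, C], [D, ...]
coalesces to [A, C] and [D, ...], matching the [[A, B], [B, C]] example
in the comment added to `populate_excluded_jump_list()` below.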
Diffstat (limited to 'refs')
-rw-r--r--  refs/packed-backend.c  | 163
1 file changed, 157 insertions(+), 6 deletions(-)
diff --git a/refs/packed-backend.c b/refs/packed-backend.c
index 33639f73e1..092b50fa84 100644
--- a/refs/packed-backend.c
+++ b/refs/packed-backend.c
@@ -303,7 +303,8 @@ static int cmp_packed_ref_records(const void *v1, const void *v2)
* Compare a snapshot record at `rec` to the specified NUL-terminated
* refname.
*/
-static int cmp_record_to_refname(const char *rec, const char *refname)
+static int cmp_record_to_refname(const char *rec, const char *refname,
+ int start)
{
const char *r1 = rec + the_hash_algo->hexsz + 1;
const char *r2 = refname;
@@ -312,7 +313,7 @@ static int cmp_record_to_refname(const char *rec, const char *refname)
if (*r1 == '\n')
return *r2 ? -1 : 0;
if (!*r2)
- return 1;
+ return start ? 1 : -1;
if (*r1 != *r2)
return (unsigned char)*r1 < (unsigned char)*r2 ? -1 : +1;
r1++;
@@ -528,7 +529,8 @@ static int load_contents(struct snapshot *snapshot)
}
static const char *find_reference_location_1(struct snapshot *snapshot,
- const char *refname, int mustexist)
+ const char *refname, int mustexist,
+ int start)
{
/*
* This is not *quite* a garden-variety binary search, because
@@ -558,7 +560,7 @@ static const char *find_reference_location_1(struct snapshot *snapshot,
mid = lo + (hi - lo) / 2;
rec = find_start_of_record(lo, mid);
- cmp = cmp_record_to_refname(rec, refname);
+ cmp = cmp_record_to_refname(rec, refname, start);
if (cmp < 0) {
lo = find_end_of_record(mid, hi);
} else if (cmp > 0) {
@@ -591,7 +593,22 @@ static const char *find_reference_location_1(struct snapshot *snapshot,
static const char *find_reference_location(struct snapshot *snapshot,
const char *refname, int mustexist)
{
- return find_reference_location_1(snapshot, refname, mustexist);
+ return find_reference_location_1(snapshot, refname, mustexist, 1);
+}
+
+/*
+ * Find the place in `snapshot->buf` after the end of the record for
+ * `refname`. In other words, find the location of first thing *after*
+ * `refname`.
+ *
+ * Other semantics are identical to the ones in
+ * `find_reference_location()`.
+ */
+static const char *find_reference_location_end(struct snapshot *snapshot,
+ const char *refname,
+ int mustexist)
+{
+ return find_reference_location_1(snapshot, refname, mustexist, 0);
}
/*
@@ -785,6 +802,13 @@ struct packed_ref_iterator {
/* The end of the part of the buffer that will be iterated over: */
const char *eof;
+ struct jump_list_entry {
+ const char *start;
+ const char *end;
+ } *jump;
+ size_t jump_nr, jump_alloc;
+ size_t jump_cur;
+
/* Scratch space for current values: */
struct object_id oid, peeled;
struct strbuf refname_buf;
@@ -802,14 +826,35 @@ struct packed_ref_iterator {
*/
static int next_record(struct packed_ref_iterator *iter)
{
- const char *p = iter->pos, *eol;
+ const char *p, *eol;
strbuf_reset(&iter->refname_buf);
+ /*
+ * If iter->pos is contained within a skipped region, jump past
+ * it.
+ *
+ * Note that each skipped region is considered at most once,
+ * since they are ordered based on their starting position.
+ */
+ while (iter->jump_cur < iter->jump_nr) {
+ struct jump_list_entry *curr = &iter->jump[iter->jump_cur];
+ if (iter->pos < curr->start)
+ break; /* not to the next jump yet */
+
+ iter->jump_cur++;
+ if (iter->pos < curr->end) {
+ iter->pos = curr->end;
+ /* jumps are coalesced, so only one jump is necessary */
+ break;
+ }
+ }
+
if (iter->pos == iter->eof)
return ITER_DONE;
iter->base.flags = REF_ISPACKED;
+ p = iter->pos;
if (iter->eof - p < the_hash_algo->hexsz + 2 ||
parse_oid_hex(p, &iter->oid, &p) ||
@@ -917,6 +962,7 @@ static int packed_ref_iterator_abort(struct ref_iterator *ref_iterator)
int ok = ITER_DONE;
strbuf_release(&iter->refname_buf);
+ free(iter->jump);
release_snapshot(iter->snapshot);
base_ref_iterator_free(ref_iterator);
return ok;
@@ -928,6 +974,108 @@ static struct ref_iterator_vtable packed_ref_iterator_vtable = {
.abort = packed_ref_iterator_abort
};
+static int jump_list_entry_cmp(const void *va, const void *vb)
+{
+ const struct jump_list_entry *a = va;
+ const struct jump_list_entry *b = vb;
+
+ if (a->start < b->start)
+ return -1;
+ if (a->start > b->start)
+ return 1;
+ return 0;
+}
+
+static int has_glob_special(const char *str)
+{
+ const char *p;
+ for (p = str; *p; p++) {
+ if (is_glob_special(*p))
+ return 1;
+ }
+ return 0;
+}
+
+static void populate_excluded_jump_list(struct packed_ref_iterator *iter,
+ struct snapshot *snapshot,
+ const char **excluded_patterns)
+{
+ size_t i, j;
+ const char **pattern;
+ struct jump_list_entry *last_disjoint;
+
+ if (!excluded_patterns)
+ return;
+
+ for (pattern = excluded_patterns; *pattern; pattern++) {
+ struct jump_list_entry *e;
+ const char *start, *end;
+
+ /*
+ * We can't feed any excludes with globs in them to the
+ * refs machinery. It only understands prefix matching.
+ * We likewise can't even feed the string leading up to
+ * the first meta-character, as something like "foo[a]"
+ * should not exclude "foobar" (but the prefix "foo"
+ * would match that and mark it for exclusion).
+ */
+ if (has_glob_special(*pattern))
+ continue;
+
+ start = find_reference_location(snapshot, *pattern, 0);
+ end = find_reference_location_end(snapshot, *pattern, 0);
+
+ if (start == end)
+ continue; /* nothing to jump over */
+
+ ALLOC_GROW(iter->jump, iter->jump_nr + 1, iter->jump_alloc);
+
+ e = &iter->jump[iter->jump_nr++];
+ e->start = start;
+ e->end = end;
+ }
+
+ if (!iter->jump_nr) {
+ /*
+ * Every entry in exclude_patterns has a meta-character,
+ * nothing to do here.
+ */
+ return;
+ }
+
+ QSORT(iter->jump, iter->jump_nr, jump_list_entry_cmp);
+
+ /*
+ * As an optimization, merge adjacent entries in the jump list
+ * to jump forwards as far as possible when entering a skipped
+ * region.
+ *
+ * For example, if we have two skipped regions:
+ *
+ * [[A, B], [B, C]]
+ *
+ * we want to combine that into a single entry jumping from A to
+ * C.
+ */
+ last_disjoint = iter->jump;
+
+ for (i = 1, j = 1; i < iter->jump_nr; i++) {
+ struct jump_list_entry *ours = &iter->jump[i];
+ if (ours->start <= last_disjoint->end) {
+ /* overlapping regions extend the previous one */
+ last_disjoint->end = last_disjoint->end > ours->end
+ ? last_disjoint->end : ours->end;
+ } else {
+ /* otherwise, insert a new region */
+ iter->jump[j++] = *ours;
+ last_disjoint = ours;
+ }
+ }
+
+ iter->jump_nr = j;
+ iter->jump_cur = 0;
+}
+
static struct ref_iterator *packed_ref_iterator_begin(
struct ref_store *ref_store,
const char *prefix, const char **exclude_patterns,
@@ -963,6 +1111,9 @@ static struct ref_iterator *packed_ref_iterator_begin(
ref_iterator = &iter->base;
base_ref_iterator_init(ref_iterator, &packed_ref_iterator_vtable, 1);
+ if (exclude_patterns)
+ populate_excluded_jump_list(iter, snapshot, exclude_patterns);
+
iter->snapshot = snapshot;
acquire_snapshot(snapshot);