git.kernel.org/pub/scm/git/git.git
author    Elijah Newren <newren@gmail.com>   2020-11-02 21:55:04 +0300
committer Junio C Hamano <gitster@pobox.com> 2020-11-02 23:15:50 +0300
commit    33f20d82177871225e17d9dd44169a52a36c9f1d (patch)
tree      049a082449a6253743b11ca5c88cf88b4bb90b2a /hashmap.c
parent    b7879b0ba6ee1306a42227f7fd7f4e5f50409184 (diff)
hashmap: introduce a new hashmap_partial_clear()
merge-ort is a heavy user of strmaps, which are built on hashmap.[ch].
clear_or_reinit_internal_opts() in merge-ort was taking about 12% of
overall runtime in my testcase involving rebasing 35 patches of
linux.git across a big rename.  clear_or_reinit_internal_opts() was
calling hashmap_free() followed by hashmap_init(), meaning that not
only was it freeing all the memory associated with each of the strmaps
just to immediately allocate a new array again, it was allocating a new
array that was likely smaller than needed (thus resulting in a later
need to rehash things).

The ending size of the map table on the previous commit was likely
almost perfectly sized for the next commit we wanted to pick, and not
dropping and reallocating the table immediately is a win.

Add some new API to hashmap to clear a hashmap of entries without
freeing map->table (and instead only zeroing it out like alloc_table()
would do, along with zeroing the count of items in the table and the
shrink_at field).

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
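A minimal caller-side sketch of the difference described above (not part of this patch): the reset_* helpers and the cmp_fn/cmp_data arguments are hypothetical stand-ins; only hashmap_init(), hashmap_free_() and the new hashmap_partial_clear_() are real API from hashmap.[ch].

#include "git-compat-util.h"
#include "hashmap.h"

/* Old pattern: tear the whole map down, then rebuild it from scratch. */
static void reset_map_old_way(struct hashmap *map,
			      hashmap_cmp_fn cmp_fn, const void *cmp_data)
{
	hashmap_free_(map, -1);			/* frees map->table as well */
	hashmap_init(map, cmp_fn, cmp_data, 0);	/* reallocates a small table */
}

/* New pattern: keep map->table, already grown to about the right size. */
static void reset_map_new_way(struct hashmap *map)
{
	/*
	 * Zeroes the buckets, the item count and shrink_at; passing an
	 * offset >= 0 instead of -1 would also free each entry.
	 */
	hashmap_partial_clear_(map, -1);
}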
Diffstat (limited to 'hashmap.c')
-rw-r--r--   hashmap.c   39
1 file changed, 27 insertions, 12 deletions
diff --git a/hashmap.c b/hashmap.c
index bb7c9979b8..922ed07954 100644
--- a/hashmap.c
+++ b/hashmap.c
@@ -174,22 +174,37 @@ void hashmap_init(struct hashmap *map, hashmap_cmp_fn equals_function,
 	map->do_count_items = 1;
 }
 
+static void free_individual_entries(struct hashmap *map, ssize_t entry_offset)
+{
+	struct hashmap_iter iter;
+	struct hashmap_entry *e;
+
+	hashmap_iter_init(map, &iter);
+	while ((e = hashmap_iter_next(&iter)))
+		/*
+		 * like container_of, but using caller-calculated
+		 * offset (caller being hashmap_free_entries)
+		 */
+		free((char *)e - entry_offset);
+}
+
+void hashmap_partial_clear_(struct hashmap *map, ssize_t entry_offset)
+{
+	if (!map || !map->table)
+		return;
+	if (entry_offset >= 0) /* called by hashmap_clear_entries */
+		free_individual_entries(map, entry_offset);
+	memset(map->table, 0, map->tablesize * sizeof(struct hashmap_entry *));
+	map->shrink_at = 0;
+	map->private_size = 0;
+}
+
 void hashmap_free_(struct hashmap *map, ssize_t entry_offset)
 {
 	if (!map || !map->table)
 		return;
-	if (entry_offset >= 0) { /* called by hashmap_free_entries */
-		struct hashmap_iter iter;
-		struct hashmap_entry *e;
-
-		hashmap_iter_init(map, &iter);
-		while ((e = hashmap_iter_next(&iter)))
-			/*
-			 * like container_of, but using caller-calculated
-			 * offset (caller being hashmap_free_entries)
-			 */
-			free((char *)e - entry_offset);
-	}
+	if (entry_offset >= 0) /* called by hashmap_free_entries */
+		free_individual_entries(map, entry_offset);
 	free(map->table);
 	memset(map, 0, sizeof(*map));
 }