git.kernel.org/pub/scm/git/git.git
author    Junio C Hamano <gitster@pobox.com>  2018-09-17 23:53:53 +0300
committer Junio C Hamano <gitster@pobox.com>  2018-09-17 23:53:53 +0300
commit    7e794d0a3f7ad4a37541539b823d5b9afdc10ce3 (patch)
tree      e0ee853dbcf38a57195490ffc547039e4348de5f /preload-index.c
parent    1b7a91da71d42759dfb83fa3a17be54ad01f0132 (diff)
parent    5f4436a7211de7cd7552f6cf3bbb35147db1a070 (diff)
Merge branch 'nd/unpack-trees-with-cache-tree'
The unpack_trees() API used in checking out a branch and merging walks one
or more trees along with the index.  When the cache-tree in the index tells
us that we are walking a tree whose flattened contents are already known
(i.e. match a span in the index), the walk can be optimized to linearly scan
that span of the index, which is much more efficient than recursively opening
tree objects and listing their entries.  That optimization is what this topic
does.

* nd/unpack-trees-with-cache-tree:
  Document update for nd/unpack-trees-with-cache-tree
  cache-tree: verify valid cache-tree in the test suite
  unpack-trees: add missing cache invalidation
  unpack-trees: reuse (still valid) cache-tree from src_index
  unpack-trees: reduce malloc in cache-tree walk
  unpack-trees: optimize walking same trees with cache-tree
  unpack-trees: add performance tracing
  trace.h: support nested performance tracing
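To make the idea described above concrete, here is a minimal, self-contained
sketch.  It is not git's actual unpack-trees.c code; the types and the helpers
use_index_entry() and read_tree_recursive_slow() are hypothetical stand-ins
for the real machinery in cache-tree.h and unpack-trees.c.  The point it
illustrates is only this: when the cache-tree node covering the current prefix
is valid and names the same tree that is being walked, the walker can consume
entry_count consecutive index entries instead of recursing into tree objects.

/*
 * Illustrative sketch only; not git's unpack-trees.c.
 */
#include <string.h>

struct index_entry {
	const char *path;
	unsigned char oid[20];
};

struct cache_tree_node {
	int entry_count;        /* index entries covered; < 0 if invalid */
	unsigned char oid[20];  /* tree this node was computed from */
};

/* Hypothetical helpers standing in for the real tree/index walkers. */
void use_index_entry(const struct index_entry *ce);
int read_tree_recursive_slow(const unsigned char *tree_oid,
			     struct index_entry *index, int pos);

static int walk_tree(const struct cache_tree_node *ct,
		     const unsigned char *walked_tree_oid,
		     struct index_entry *index, int pos)
{
	if (ct->entry_count >= 0 && !memcmp(ct->oid, walked_tree_oid, 20)) {
		/*
		 * Fast path: the flattened contents of the walked tree are
		 * exactly the next entry_count index entries, so scan that
		 * span linearly instead of opening tree objects.
		 */
		int i;
		for (i = 0; i < ct->entry_count; i++)
			use_index_entry(&index[pos + i]);
		return ct->entry_count;
	}
	/* Slow path: recursively read the tree object and list its entries. */
	return read_tree_recursive_slow(walked_tree_oid, index, pos);
}

This roughly corresponds to the "unpack-trees: optimize walking same trees
with cache-tree" topic in the list above; the real code additionally checks
that all input trees being walked are identical before taking the fast path,
and falls back to the recursive walk when they are not.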
Diffstat (limited to 'preload-index.c')
 preload-index.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/preload-index.c b/preload-index.c
index 71cd2437a3..f7365761f4 100644
--- a/preload-index.c
+++ b/preload-index.c
@@ -78,7 +78,6 @@ static void preload_index(struct index_state *index,
 {
 	int threads, i, work, offset;
 	struct thread_data data[MAX_PARALLEL];
-	uint64_t start = getnanotime();
 
 	if (!core_preload_index)
 		return;
@@ -88,6 +87,7 @@ static void preload_index(struct index_state *index,
 		threads = 2;
 	if (threads < 2)
 		return;
+	trace_performance_enter();
 	if (threads > MAX_PARALLEL)
 		threads = MAX_PARALLEL;
 	offset = 0;
@@ -109,7 +109,7 @@ static void preload_index(struct index_state *index,
 		if (pthread_join(p->pthread, NULL))
 			die("unable to join threaded lstat");
 	}
-	trace_performance_since(start, "preload index");
+	trace_performance_leave("preload index");
 }
 #endif
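The hunks above replace a manually captured getnanotime() start value plus
trace_performance_since() with the paired trace_performance_enter() and
trace_performance_leave() calls added by "trace.h: support nested performance
tracing"; the pair keeps the start timestamp internally, which is why the
local uint64_t start variable can be dropped, and it lets timed regions nest.
A stand-alone sketch of such an enter/leave pair, in the same spirit but not
git's trace.c (perf_enter(), perf_leave() and MAX_NESTING are made-up names
for illustration), could look like:

#include <stdio.h>
#include <time.h>

#define MAX_NESTING 16

static double start_stack[MAX_NESTING];
static int depth;

static double now_seconds(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void perf_enter(void)
{
	/* Record the start time for this nesting level. */
	if (depth < MAX_NESTING)
		start_stack[depth] = now_seconds();
	depth++;
}

static void perf_leave(const char *label)
{
	depth--;
	/* Report elapsed time, indented by nesting depth. */
	if (depth >= 0 && depth < MAX_NESTING)
		fprintf(stderr, "%*sperformance: %.6f s: %s\n",
			2 * depth, "", now_seconds() - start_stack[depth], label);
}

/*
 * Usage mirrors the diff above: enter at the top of the timed region,
 * leave with a label at the exit point.
 */
static void preload_index_sketch(void)
{
	perf_enter();
	/* ... the threaded lstat work would happen here ... */
	perf_leave("preload index");
}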