
git.kernel.org/pub/scm/git/git.git
2023-12-26  treewide: remove unnecessary includes in source files  (Elijah Newren)

Each of these was checked with

    gcc -E -I. ${SOURCE_FILE} | grep ${HEADER_FILE}

to ensure that removing the direct inclusion of the header actually resulted in that header no longer being included at all (i.e. that no other header pulled it in transitively).

...except for a few cases where we verified that although the header was brought in transitively, nothing from it was directly used in that source file. These cases were:
  * builtin/credential-cache.c
  * builtin/pull.c
  * builtin/send-pack.c

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-09-26  bulk-checkin: only support blobs in index_bulk_checkin  (Eric W. Biederman)

As the code is written today, index_bulk_checkin only accepts blobs. Remove the enum object_type parameter and rename index_bulk_checkin to index_blob_bulk_checkin, index_stream to index_blob_stream, deflate_to_pack to deflate_blob_to_pack, and stream_to_pack to stream_blob_to_pack, to make this explicit.

Not supporting commits, tags, or trees has no downside, as they are not currently supported, and commits, tags, and trees, being smaller by design, do not have the problem that index_bulk_checkin was built to solve.

Before we start adding code to support the hash function transition, supporting additional object types in index_bulk_checkin has no real additional cost, just an extra function parameter to know what the object type is. Once we begin the hash function transition, this is not the case.

The hash function transition document specifies that a repository with compatObjectFormat enabled will compute and store both the SHA-1 and SHA-256 hash of every object in the repository. What makes this a challenge is that it is not just an additional hash over the same object. Instead, the hash function transition document specifies that the compatibility hash (specified with compatObjectFormat) be computed over the equivalent object that another git repository, one whose storage hash (specified with objectFormat) is that compatibility hash, would store. When comparing equivalent repositories built with different storage hash functions, the oids embedded in objects used to refer to other objects differ and the location of signatures within objects differs.

As blob objects have neither oids referring to other objects nor stored signatures, their storage hash and their compatibility hash are computed over the same object. The other kinds of objects (trees, commits, and tags) all store oids referring to other objects. Signatures are stored in commit and tag objects. As oids and the tags used to store signatures are not the same size in repositories built with different storage hashes, the sizes of the equivalent objects are also different.

A version of index_bulk_checkin that supports more than just blobs when computing both the SHA-1 and the SHA-256 of every object added would need a different, and more expensive, structure. The structure is more expensive because it would be required to temporarily buffer the equivalent object that the compatibility hash needs to be computed over. A temporary object is needed because, before a hash over an object can be computed, its object header needs to be computed. One of the members of the object header is the entire size of the object. To know the size of an equivalent object, an entire pass over the original object needs to be made, as trees, commits, and tags are composed of a variable number of variable-sized pieces. Unfortunately there is no formula to compute the size of an equivalent object from just the size of the original object.

Avoid all of those future complications by limiting index_bulk_checkin to only work on blobs.

Inspired-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
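For illustration only, here is a rough sketch of the interface change described above (the exact parameter list in bulk-checkin.h may differ; the names follow the renames in this commit message):

    /* Before: callers passed the object type they were indexing. */
    int index_bulk_checkin(struct object_id *oid, int fd, size_t size,
                           enum object_type type, const char *path,
                           unsigned flags);

    /* After: the interface only deals in blobs, so the type parameter is gone. */
    int index_blob_bulk_checkin(struct object_id *oid, int fd, size_t size,
                                const char *path, unsigned flags);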
2023-07-25  Merge branch 'jk/unused-parameter'  (Junio C Hamano)

Mark-up unused parameters in the code so that we can eventually enable -Wunused-parameter by default.

* jk/unused-parameter:
  t/helper: mark unused callback void data parameters
  tag: mark unused parameters in each_tag_name_fn callbacks
  rev-parse: mark unused parameter in for_each_abbrev callback
  replace: mark unused parameter in each_mergetag_fn callback
  replace: mark unused parameter in ref callback
  merge-tree: mark unused parameter in traverse callback
  fsck: mark unused parameters in various fsck callbacks
  revisions: drop unused "opt" parameter in "tweak" callbacks
  count-objects: mark unused parameter in alternates callback
  am: mark unused keep_cr parameters
  http-push: mark unused parameter in xml callback
  http: mark unused parameters in curl callbacks
  do_for_each_ref_helper(): mark unused repository parameter
  test-ref-store: drop unimplemented reflog-expire command
2023-07-14  fsck: mark unused parameters in various fsck callbacks  (Jeff King)
There are a few callback functions which are used with the fsck code, but it's natural that not all callbacks need all parameters. For reporting, even something as obvious as "the oid of the object which had a problem" is not always used, as some callers are only checking a single object in the first place. And for both reporting and walking, things like void data pointers and the fsck_options aren't always necessary. But since each such parameter is used by _some_ callback, we have to keep them in the interface. Mark the unused ones in specific callbacks to avoid triggering -Wunused-parameter. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
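To illustrate the pattern, here is a minimal self-contained sketch (git's real UNUSED annotation lives in git-compat-util.h and the real fsck callback typedefs live in fsck.h; the macro and callback below are simplified stand-ins):

    #include <stdio.h>

    /* Simplified stand-in for git's UNUSED parameter annotation. */
    #if defined(__GNUC__)
    #define UNUSED __attribute__((unused))
    #else
    #define UNUSED
    #endif

    /* A reporting callback that only needs the message, not the other arguments. */
    static int report_problem(void *cb_data UNUSED, const char *oid_hex UNUSED,
                              const char *msg)
    {
            fprintf(stderr, "error: %s\n", msg);
            return 1;
    }

    int main(void)
    {
            /* The extra parameters are accepted but deliberately ignored. */
            report_problem(NULL, "0000000000000000000000000000000000000000",
                           "example fsck message");
            return 0;
    }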
2023-07-05  git-compat-util: move alloc macros to git-compat-util.h  (Calvin Wan)
alloc_nr, ALLOC_GROW, and ALLOC_GROW_BY are commonly used macros for dynamic array allocation. Moving these macros to git-compat-util.h with the other alloc macros focuses alloc.[ch] on allocation for Git objects and additionally allows us to remove inclusions of alloc.h from files that solely used the above macros.

Signed-off-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
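For context, here is a minimal self-contained sketch of how this family of macros is typically used (the definitions below are simplified re-statements of the growth pattern, not the exact macros from git-compat-util.h):

    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified versions of the growth helpers described above. */
    #define alloc_nr(x) (((x) + 16) * 3 / 2)
    #define ALLOC_GROW(x, nr, alloc) \
            do { \
                    if ((nr) > (alloc)) { \
                            (alloc) = alloc_nr(alloc) < (nr) ? (nr) : alloc_nr(alloc); \
                            (x) = realloc((x), (alloc) * sizeof(*(x))); \
                            if (!(x)) \
                                    exit(1); \
                    } \
            } while (0)

    int main(void)
    {
            int *items = NULL;
            size_t nr = 0, alloc = 0;
            size_t i;

            /* Append 100 elements, growing the array geometrically as needed. */
            for (i = 0; i < 100; i++) {
                    ALLOC_GROW(items, nr + 1, alloc);
                    items[nr++] = (int)i;
            }
            printf("stored %zu items, capacity %zu\n", nr, alloc);
            free(items);
            return 0;
    }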
2023-07-05  treewide: remove unnecessary includes for wrapper.h  (Calvin Wan)
Signed-off-by: Calvin Wan <calvinwan@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-06-21  repository: remove unnecessary include of path.h  (Elijah Newren)
This also made it clear that several .c files that depended upon path.h were missing a #include for it; add the missing includes while at it. Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-05-10  Merge branch 'en/header-split-cache-h-part-2'  (Junio C Hamano)

More header clean-up.

* en/header-split-cache-h-part-2: (22 commits)
  reftable: ensure git-compat-util.h is the first (indirect) include
  diff.h: reduce unnecessary includes
  object-store.h: reduce unnecessary includes
  commit.h: reduce unnecessary includes
  fsmonitor: reduce includes of cache.h
  cache.h: remove unnecessary headers
  treewide: remove cache.h inclusion due to previous changes
  cache,tree: move basic name compare functions from read-cache to tree
  cache,tree: move cmp_cache_name_compare from tree.[ch] to read-cache.c
  hash-ll.h: split out of hash.h to remove dependency on repository.h
  tree-diff.c: move S_DIFFTREE_IFXMIN_NEQ define from cache.h
  dir.h: move DTYPE defines from cache.h
  versioncmp.h: move declarations for versioncmp.c functions from cache.h
  ws.h: move declarations for ws.c functions from cache.h
  match-trees.h: move declarations for match-trees.c functions from cache.h
  pkt-line.h: move declarations for pkt-line.c functions from cache.h
  base85.h: move declarations for base85.c functions from cache.h
  copy.h: move declarations for copy.c functions from cache.h
  server-info.h: move declarations for server-info.c functions from cache.h
  packfile.h: move pack_window and pack_entry from cache.h
  ...
2023-04-25  Merge branch 'ps/fix-geom-repack-with-alternates'  (Junio C Hamano)

Geometric repacking ("git repack --geometric=<n>") in a repository that borrows from an alternate object database had various corner case bugs, which have been corrected.

* ps/fix-geom-repack-with-alternates:
  repack: disable writing bitmaps when doing a local repack
  repack: honor `-l` when calculating pack geometry
  t/helper: allow chmtime to print verbosely without modifying mtime
  pack-objects: extend test coverage of `--stdin-packs` with alternates
  pack-objects: fix error when same packfile is included and excluded
  pack-objects: fix error when packing same pack twice
  pack-objects: split out `--stdin-packs` tests into separate file
  repack: fix generating multi-pack-index with only non-local packs
  repack: fix trying to use preferred pack in alternates
  midx: fix segfault with no packs and invalid preferred pack
2023-04-24  object-store.h: reduce unnecessary includes  (Elijah Newren)
Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-04-24  treewide: remove cache.h inclusion due to previous changes  (Elijah Newren)
Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-04-14  repack: disable writing bitmaps when doing a local repack  (Patrick Steinhardt)

In order to write a bitmap, we need to have full coverage of all objects that are about to be packed. In the traditional non-multi-pack-index world this meant we needed to do a full repack of all objects into a single packfile. But in the new multi-pack-index world we can get away with writing bitmaps when we have multiple packfiles, as long as the multi-pack-index covers all objects.

This is not always the case though. When asked to perform a repack of local objects only, we cannot guarantee full coverage of all objects regardless of whether we do a full repack or a repack with a multi-pack-index. The end result is that writing the bitmap will fail in both worlds:

    $ git multi-pack-index write --stdin-packs --bitmap <packfiles
    warning: Failed to write bitmap index. Packfile doesn't have full closure (object 1529341d78cf45377407369acb0f4ff2b5cdae42 is missing)
    error: could not write multi-pack bitmap

Now there are two different ways to fix this. The first one would be to amend git-multi-pack-index(1) to disable writing bitmaps when we notice that we don't have full object coverage. That approach has downsides though:

  - We don't have enough information in git-multi-pack-index(1) in order to tell whether the local repository _should_ have full coverage. Because even when connected to an alternate object directory, it may be the case that we still have all objects around in the main object database.

  - git-multi-pack-index(1) is quite a low-level tool. Automatically disabling functionality that it was asked to provide does not feel like the right thing to do.

We can easily fix it at a higher level in git-repack(1) though. When asked to only include local objects via `-l` and when connected to an alternate object directory, we will override the user's ask and disable writing bitmaps with a warning. This is similar to what we do in git-pack-objects(1), where we also disable writing bitmaps in case we omit an object from the pack.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-04-11  object-file.h: move declarations for object-file.c functions from cache.h  (Elijah Newren)
Signed-off-by: Elijah Newren <newren@gmail.com> Acked-by: Calvin Wan <calvinwan@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-04-11  treewide: be explicit about dependence on convert.h  (Elijah Newren)
Signed-off-by: Elijah Newren <newren@gmail.com> Acked-by: Calvin Wan <calvinwan@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-04-04  Merge branch 'ab/remove-implicit-use-of-the-repository' into en/header-split-cache-h  (Junio C Hamano)

* ab/remove-implicit-use-of-the-repository:
  libs: use "struct repository *" argument, not "the_repository"
  post-cocci: adjust comments for recent repo_* migration
  cocci: apply the "revision.h" part of "the_repository.pending"
  cocci: apply the "rerere.h" part of "the_repository.pending"
  cocci: apply the "refs.h" part of "the_repository.pending"
  cocci: apply the "promisor-remote.h" part of "the_repository.pending"
  cocci: apply the "packfile.h" part of "the_repository.pending"
  cocci: apply the "pretty.h" part of "the_repository.pending"
  cocci: apply the "object-store.h" part of "the_repository.pending"
  cocci: apply the "diff.h" part of "the_repository.pending"
  cocci: apply the "commit.h" part of "the_repository.pending"
  cocci: apply the "commit-reach.h" part of "the_repository.pending"
  cocci: apply the "cache.h" part of "the_repository.pending"
  cocci: add missing "the_repository" macros to "pending"
  cocci: sort "the_repository" rules by header
  cocci: fix incorrect & verbose "the_repository" rules
  cocci: remove dead rule from "the_repository.pending.cocci"
2023-03-28  post-cocci: adjust comments for recent repo_* migration  (Ævar Arnfjörð Bjarmason)
In preceding commits we changed many calls to macros that were providing a "the_repository" argument to invoke corresponding repo_*() function instead. Let's follow-up and adjust references to those in comments, which coccinelle didn't (and inherently can't) catch. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-03-28  cocci: apply the "object-store.h" part of "the_repository.pending"  (Ævar Arnfjörð Bjarmason)
Apply the part of "the_repository.pending.cocci" pertaining to "object-store.h". Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-03-21  csum-file.h: remove unnecessary inclusion of cache.h  (Elijah Newren)

With the change in the last commit to move several functions to write-or-die.h, csum-file.h no longer needs to include cache.h. However, removing that include forces several other C files, which directly or indirectly depended upon csum-file.h's inclusion of cache.h, to now be more explicit about their dependencies.

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-03-21  setup.h: move declarations for setup.c functions from cache.h  (Elijah Newren)
Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-03-21  environment.h: move declarations for environment.c functions from cache.h  (Elijah Newren)
Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-03-21  wrapper.h: move declarations for wrapper.c functions from cache.h  (Elijah Newren)
Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-03-21  abspath.h: move absolute path functions from cache.h  (Elijah Newren)
This is another step towards letting us remove the include of cache.h in strbuf.c. It does mean that we also need to add includes of abspath.h in a number of C files. Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-03-21  treewide: be explicit about dependence on gettext.h  (Elijah Newren)
Dozens of files made use of gettext functions, without explicitly including gettext.h. This made it more difficult to find which files could remove a dependence on cache.h. Make C files explicitly include gettext.h if they are using it. However, while compat/fsmonitor/fsm-ipc-darwin.c should also gain an include of gettext.h, it was left out to avoid conflicting with an in-flight topic. Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
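As a hypothetical example of the kind of change this implies (the file and function below are made up; only gettext.h and the _() macro are real), a source file that formats translatable messages must now include gettext.h directly rather than relying on cache.h to pull it in:

    #include "git-compat-util.h"
    #include "gettext.h"   /* provides _(); no longer implied by cache.h */

    static void report_progress(unsigned long done, unsigned long total)
    {
            fprintf(stderr, _("processed %lu of %lu objects\n"), done, total);
    }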
2023-03-18  Merge branch 'jk/unused-post-2.39-part2'  (Junio C Hamano)

More work towards -Wunused.

* jk/unused-post-2.39-part2: (21 commits)
  help: mark unused parameter in git_unknown_cmd_config()
  run_processes_parallel: mark unused callback parameters
  userformat_want_item(): mark unused parameter
  for_each_commit_graft(): mark unused callback parameter
  rewrite_parents(): mark unused callback parameter
  fetch-pack: mark unused parameter in callback function
  notes: mark unused callback parameters
  prio-queue: mark unused parameters in comparison functions
  for_each_object: mark unused callback parameters
  list-objects: mark unused callback parameters
  mark unused parameters in signal handlers
  run-command: mark error routine parameters as unused
  mark "pointless" data pointers in callbacks
  ref-filter: mark unused callback parameters
  http-backend: mark unused parameters in virtual functions
  http-backend: mark argc/argv unused
  object-name: mark unused parameters in disambiguate callbacks
  serve: mark unused parameters in virtual functions
  serve: use repository pointer to get config
  ls-refs: drop config caching
  ...
2023-02-24  for_each_object: mark unused callback parameters  (Jeff King)
The for_each_{loose,packed}_object interface uses callback functions, but not every callback needs all of the parameters. Mark the unused ones to satisfy -Wunused-parameter. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-24  cache.h: remove dependence on hex.h; make other files include it explicitly  (Elijah Newren)
Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-24  alloc.h: move ALLOC_GROW() functions from cache.h  (Elijah Newren)
This allows us to replace includes of cache.h with includes of the much smaller alloc.h in many places. It does mean that we also need to add includes of alloc.h in a number of C files. Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-01-31  Merge branch 'jk/hash-object-fsck'  (Junio C Hamano)

"git hash-object" now checks that the resulting object is well formed with the same code as "git fsck".

* jk/hash-object-fsck:
  fsck: do not assume NUL-termination of buffers
  hash-object: use fsck for object checks
  fsck: provide a function to fsck buffer without object struct
  t: use hash-object --literally when created malformed objects
  t7030: stop using invalid tag name
  t1006: stop using 0-padded timestamps
  t1007: modernize malformed object tests
2023-01-18  hash-object: use fsck for object checks  (Jeff King)

Since c879daa237 (Make hash-object more robust against malformed objects, 2011-02-05), we've done some rudimentary checks against objects we're about to write by running them through our usual parsers for trees, commits, and tags.

These parsers catch some problems, but they are not nearly as careful as the fsck functions (which makes sense; the parsers are designed to be fast and forgiving, bailing only when the input is unintelligible). We are better off doing the more thorough fsck checks when writing objects. Doing so at write time is much better than writing garbage only to find out later (after building more history atop it!) that fsck complains about it, or hosts with transfer.fsckObjects reject it.

This is obviously going to be a user-visible behavior change, and the test changes earlier in this series show the scope of the impact. But I'd argue that this is OK:

  - the documentation for hash-object is already vague about which checks we might do, saying that --literally will allow "any garbage[...] which might not otherwise pass standard object parsing or git-fsck checks". So we are already covered under the documented behavior.

  - users don't generally run hash-object anyway. There are a lot of spots in the tests that needed to be updated because creating garbage objects is something that Git's tests disproportionately do.

  - it's hard to imagine anyone thinking the new behavior is worse. Any object we reject would be a potential problem down the road for the user. And if they really want to create garbage, --literally is already the escape hatch they need.

Note that the change here is actually in index_mem(), which handles the HASH_FORMAT_CHECK flag passed by hash-object. That flag is also used by "git-replace --edit" to sanity-check the result. Covering that with more thorough checks likewise seems like a good thing.

Besides being more thorough, there are a few other bonuses:

  - we get rid of some questionable stack allocations of object structs. These don't seem to currently cause any problems in practice, but they subtly violate some of the assumptions made by the rest of the code (e.g., the "struct commit" we put on the stack and zero-initialize will not have a proper index from alloc_commit_index()).

  - likewise, those parsed object structs are the source of some small memory leaks

  - the resulting messages are much better. For example:

      [before]
      $ echo 'tree 123' | git hash-object -t commit --stdin
      error: bogus commit object 0000000000000000000000000000000000000000
      fatal: corrupt commit

      [after]
      $ echo 'tree 123' | git.compile hash-object -t commit --stdin
      error: object fails fsck: badTreeSha1: invalid 'tree' line format - bad sha1
      fatal: refusing to create malformed object

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-01-13  object-file: fix indent-with-space  (Jeff King)
Commit b25562e63f (object-file: inline calls to read_object(), 2023-01-07) accidentally indented a conditional block with spaces instead of a tab. Reported-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Jeff King <peff@peff.net> Acked-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-01-08  repo_read_object_file(): stop wrapping read_object_file_extended()  (Jeff King)
The only caller of read_object_file_extended() is the thin wrapper of repo_read_object_file(). Instead of wrapping, let's just rename the inner function and let people call it directly. This cleans up the namespace and reduces confusion. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-01-08  read_object_file_extended(): drop lookup_replace option  (Jeff King)
Our sole caller always passes in "1", so we can just drop the parameter entirely. Anybody who doesn't want this behavior could easily call oid_object_info_extended() themselves, as we're just a thin wrapper around it. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-01-08  object-file: inline calls to read_object()  (Jeff King)

Since read_object() is these days just a thin wrapper around oid_object_info_extended(), and since it only has two callers, let's just inline those calls. This has a few positive outcomes:

  - it's a net reduction in source code lines

  - even though the callers end up with a few extra lines, they're now more flexible and can use object_info flags directly. So no more need to convert die_if_corrupt between parameter/flag, and we can ask for lookup replacement with a flag rather than doing it ourselves.

  - there's one fewer function in an already crowded namespace (e.g., the difference between read_object() and read_object_file() was not immediately obvious; now we only have one of them).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
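For illustration, a rough sketch of what an inlined call site looks like in spirit. It assumes the object_info fields and flags described above (typep/sizep/contentp, OBJECT_INFO_LOOKUP_REPLACE, OBJECT_INFO_DIE_IF_CORRUPT), and read_commit_buffer() is a made-up caller, so treat the details as approximate rather than as the exact code in object-file.c:

    static void *read_commit_buffer(struct repository *r,
                                    const struct object_id *oid,
                                    unsigned long *size)
    {
            enum object_type type;
            void *contents = NULL;
            struct object_info oi = OBJECT_INFO_INIT;

            oi.typep = &type;
            oi.sizep = size;
            oi.contentp = &contents;

            /* Replacement lookup and die-on-corruption are requested via flags. */
            if (oid_object_info_extended(r, oid, &oi,
                                         OBJECT_INFO_LOOKUP_REPLACE |
                                         OBJECT_INFO_DIE_IF_CORRUPT) < 0)
                    return NULL;
            if (type != OBJ_COMMIT) {
                    free(contents);
                    return NULL;
            }
            return contents;
    }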
2023-01-05  Merge branch 'jt/avoid-lazy-fetch-commits'  (Junio C Hamano)

Even in a repository with a promisor remote, it is useless to attempt lazily fetching an object that is expected to be a commit, because no "filter" mode omits commit objects. Take advantage of this assumption to fail fast on errors.

* jt/avoid-lazy-fetch-commits:
  commit: don't lazy-fetch commits
  object-file: emit corruption errors when detected
  object-file: refactor map_loose_object_1()
  object-file: remove OBJECT_INFO_IGNORE_LOOSE
2022-12-15  object-file: emit corruption errors when detected  (Jonathan Tan)

Instead of relying on errno being preserved across function calls, teach do_oid_object_info_extended() to itself report object corruption when it first detects it. There are 3 types of corruption being detected:

  - when a replacement object is missing
  - when a loose object is corrupt
  - when a packed object is corrupt and the object cannot be read in another way

Note that in the RHS of this patch's diff, a check for ENOENT that was introduced in 3ba7a06552 (A loose object is not corrupt if it cannot be read due to EMFILE, 2010-10-28) is also removed. The purpose of this check is to avoid a false report of corruption if the errno contains something like EMFILE (or anything that is not ENOENT), in which case a more generic report is presented. Because, as of this patch, we no longer rely on such a heuristic to determine corruption, but surface the error message at the point when we read something that we did not expect, this check is no longer necessary.

Besides being more resilient, this also prepares for a future patch in which an indirect caller of do_oid_object_info_extended() will need such functionality.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-12-15  object-file: refactor map_loose_object_1()  (Jonathan Tan)

This function can do 3 things:

  1. Gets an fd given a path
  2. Simultaneously gets a path and fd given an OID
  3. Memory maps an fd

Keep 3 (renaming the function accordingly) and inline 1 and 2 into their respective callers.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-12-15  object-file: remove OBJECT_INFO_IGNORE_LOOSE  (Jonathan Tan)
Its last user was removed in 97b2fa08b6 (fetch-pack: drop custom loose object cache, 2018-11-12), so we can remove it. Helped-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-12-14  object-file: inline write_buffer()  (René Scharfe)
write_buffer() reports the OS error if it is unable to write. Its only caller dies in that case, giving some more context in its last message. Inline this function and show only a single error message that includes both the context (writing a loose object file) and the OS error. This shortens the code and simplifies the output. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-11-25  object-file: use real paths when adding alternates  (Glen Choo)

When adding an alternate ODB, we check if the alternate has the same path as the object dir, and if so, we do nothing. However, that comparison does not resolve symlinks. This makes it possible to add the object dir as an alternate, which may result in bad behavior. For example, it can trick "git repack -a -l -d" (possibly run by "git gc") into thinking that all packs come from an alternate and delete all objects.

    rm -rf test &&
    git clone https://github.com/git/git test &&
    (
      cd test &&
      ln -s objects .git/alt-objects &&
      # -c repack.updateserverinfo=false silences a warning about not
      # being able to update "info/refs", it isn't needed to show the
      # bad behavior
      GIT_ALTERNATE_OBJECT_DIRECTORIES=".git/alt-objects" git \
        -c repack.updateserverinfo=false repack -a -l -d &&
      # It's broken!
      git status
      # Because there are no more objects!
      ls .git/objects/pack
    )

Fix this by resolving symlinks and relative paths before comparing the alternate and object dir. This lets us clean up a number of issues noted in 37a95862c6 (alternates: re-allow relative paths from environment, 2016-11-07):

  - Now that we compare the real paths, duplicate detection is no longer foiled by relative paths.

  - Using strbuf_realpath() allows us to "normalize" paths that strbuf_normalize_path() can't, so we can stop silently ignoring errors when "normalizing" paths from the environment.

  - We now store an absolute path based on getcwd() (the "future direction" named in 37a95862c6), so chdir()-ing in the process no longer changes the directory pointed to by the alternate. This is a change in behavior, but a desirable one.

Signed-off-by: Glen Choo <chooglen@google.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
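A minimal sketch of the idea behind the fix, assuming git's strbuf_realpath() helper (which resolves symlinks and relative components); the helper function name below is hypothetical and the real change lives in the alternates setup code:

    static int alternate_duplicates_objdir(const char *objdir, const char *alternate)
    {
            struct strbuf real_objdir = STRBUF_INIT;
            struct strbuf real_alt = STRBUF_INIT;
            int duplicate;

            /* Compare fully resolved paths, not the literal strings. */
            strbuf_realpath(&real_objdir, objdir, 1);
            strbuf_realpath(&real_alt, alternate, 1);
            duplicate = !strcmp(real_objdir.buf, real_alt.buf);

            strbuf_release(&real_objdir);
            strbuf_release(&real_alt);
            return duplicate;
    }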
2022-10-28  Merge branch 'jk/unused-anno-more'  (Junio C Hamano)

More UNUSED annotation to help using -Wunused option with the compiler.

* jk/unused-anno-more:
  ll-merge: mark unused parameters in callbacks
  diffcore-pickaxe: mark unused parameters in pickaxe functions
  convert: mark unused parameter in null stream filter
  apply: mark unused parameters in noop error/warning routine
  apply: mark unused parameters in handlers
  date: mark unused parameters in handler functions
  string-list: mark unused callback parameters
  object-file: mark unused parameters in hash_unknown functions
  mark unused parameters in trivial compat functions
  update-index: drop unused argc from do_reupdate()
  submodule--helper: drop unused argc from module_list_compute()
  diffstat_consume(): assert non-zero length
2022-10-18  object-file: mark unused parameters in hash_unknown functions  (Jeff King)

The 0'th entry of our hash_algos array fills out the virtual methods with a series of functions which simply BUG(). This is the right thing to do, since the point is to catch use of an invalid algo parameter, but we need to annotate them to appease -Wunused-parameter.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-10-18  Merge branch 'jt/promisor-remote-fetch-tweak'  (Junio C Hamano)

Remove error detection from a function that fetches from promisor remotes, and make it die when such a fetch fails to bring all the requested objects, to give an early failure to various operations.

* jt/promisor-remote-fetch-tweak:
  promisor-remote: die upon failing fetch
  promisor-remote: remove a return value
2022-10-05  promisor-remote: die upon failing fetch  (Jonathan Tan)

In a partial clone, an attempt to read a missing object results in an attempt to fetch that single object. In order to avoid multiple sequential fetches, which would occur when multiple objects are missing (which is the typical case), some commands have been taught to prefetch in a batch: such a command would, in a partial clone, notice that several objects that it will eventually need are missing, and call promisor_remote_get_direct() with all such objects at once.

When this batch prefetch fails, these commands fall back to the sequential fetches. But at $DAYJOB we have noticed that this results in a bad user experience: a command would take unexpectedly long to finish (and possibly use up a lot of bandwidth) if the batch prefetch failed for some intermittent reason, but all subsequent fetches worked. It would be a better user experience for such a command to just fail.

Therefore, make it a fatal error if the prefetch fails and at least one object being fetched is known to be a promisor object. (The latter criterion is to make sure that we are not misleading the user that such an object would be present from the promisor remote. For example, a missing object may be a result of repository corruption and not because it is expectedly missing due to the repository being a partial clone.)

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-07-15  Merge branch 'hx/unpack-streaming'  (Junio C Hamano)

Allow large objects read from a packstream to be streamed into a loose object file straight, without having to keep it in-core as a whole.

* hx/unpack-streaming:
  unpack-objects: use stream_loose_object() to unpack large objects
  core doc: modernize core.bigFileThreshold documentation
  object-file.c: add "stream_loose_object()" to handle large object
  object-file.c: factor out deflate part of write_loose_object()
  object-file.c: refactor write_loose_object() to several steps
  unpack-objects: low memory footprint for get_data() in dry_run mode
2022-06-13  object-file.c: add "stream_loose_object()" to handle large object  (Han Xin)

If we want to unpack and write a loose object using "write_loose_object", we have to feed it with a buffer with the same size as the object, which will consume lots of memory and may cause OOM. This can be improved by feeding data to "stream_loose_object()" in a stream.

Add a new function "stream_loose_object()", which is a stream version of "write_loose_object()" but with a low memory footprint. We will use this function to unpack large blob objects in a later commit.

Another difference with "write_loose_object()" is that we have no chance to run "write_object_file_prepare()" to calculate the oid in advance. In "write_loose_object()", we know the oid and we can write the temporary file in the same directory as the final object, but for an object with an undetermined oid, we don't know the exact directory for the object. Still, we need to save the temporary file we're preparing somewhere. We'll do that in the top-level ".git/objects/" directory (or whatever "GIT_OBJECT_DIRECTORY" is set to). Once we've streamed it we'll know the OID, and will move it to its canonical path.

"freshen_packed_object()" or "freshen_loose_object()" will be called inside "stream_loose_object()" after obtaining the "oid". After the temporary file is written, we want to mark the object as recent, and we may find that the object is in fact already present. In that case we should remove the temporary file and not leave a new copy of the object.

Helped-by: René Scharfe <l.s.r@web.de>
Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Helped-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Han Xin <chiyutianyi@gmail.com>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-06-13  object-file.c: factor out deflate part of write_loose_object()  (Ævar Arnfjörð Bjarmason)
Split out the part of write_loose_object() that deals with calling git_deflate() into a utility function, a subsequent commit will introduce another function that'll make use of it. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-06-13  object-file.c: refactor write_loose_object() to several steps  (Han Xin)

When writing a large blob using "write_loose_object()", we have to pass a buffer with the whole content of the blob, and this behavior will consume lots of memory and may cause OOM. We will introduce a stream version function ("stream_loose_object()") in a later commit to resolve this issue.

Before introducing that streaming function, do some refactoring on "write_loose_object()" to reuse code for both versions. Rewrite "write_loose_object()" as follows:

  1. Figure out a path for the (temp) object file. This step is only used in "write_loose_object()".
  2. Move common steps for starting to write loose objects into a new function "start_loose_object_common()".
  3. Compress data.
  4. Move common steps for ending the zlib stream into a new function "end_loose_object_common()".
  5. Close fd and finalize the object file.

Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Helped-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Han Xin <chiyutianyi@gmail.com>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-06-11  Merge branch 'ab/env-array'  (Junio C Hamano)

Rename .env_array member to .env in the child_process structure.

* ab/env-array:
  run-command API users: use "env" not "env_array" in comments & names
  run-command API: rename "env_array" to "env"
2022-06-04  Merge branch 'tb/cruft-packs'  (Junio C Hamano)

A mechanism to pack unreachable objects into a "cruft pack", instead of ejecting them into loose form to be reclaimed later, has been introduced.

* tb/cruft-packs:
  sha1-file.c: don't freshen cruft packs
  builtin/gc.c: conditionally avoid pruning objects via loose
  builtin/repack.c: add cruft packs to MIDX during geometric repack
  builtin/repack.c: use named flags for existing_packs
  builtin/repack.c: allow configuring cruft pack generation
  builtin/repack.c: support generating a cruft pack
  builtin/pack-objects.c: --cruft with expiration
  reachable: report precise timestamps from objects in cruft packs
  reachable: add options to add_unseen_recent_objects_to_traversal
  builtin/pack-objects.c: --cruft without expiration
  builtin/pack-objects.c: return from create_object_entry()
  t/helper: add 'pack-mtimes' test-tool
  pack-mtimes: support writing pack .mtimes files
  chunk-format.h: extract oid_version()
  pack-write: pass 'struct packing_data' to 'stage_tmp_packfiles'
  pack-mtimes: support reading .mtimes files
  Documentation/technical: add cruft-packs.txt
2022-06-04  Merge branch 'ds/object-file-unpack-loose-header-fix'  (Junio C Hamano)

Coding style fix.

* ds/object-file-unpack-loose-header-fix:
  object-file: convert 'switch' back to 'if'