
git.kernel.org/pub/scm/git/git.git

2021-10-04  Merge branch 'jk/clone-unborn-head-in-bare'  Junio C Hamano

"git clone" from a repository whose HEAD is unborn into a bare repository didn't follow the branch name the other side used, which is corrected.

* jk/clone-unborn-head-in-bare:
  clone: handle unborn branch in bare repos

2021-09-21  clone: handle unborn branch in bare repos  Jeff King

When cloning a repository with an unborn HEAD, we'll set the local HEAD to match it only if the local repository is non-bare. This is inconsistent with all other combinations:

    remote HEAD      | local repo | local HEAD
    -----------------------------------------------
    points to commit | non-bare   | same as remote
    points to commit | bare       | same as remote
    unborn           | non-bare   | same as remote
    unborn           | bare       | local default

So I don't think this is some clever or subtle behavior, but just a bug in 4f37d45706 (clone: respect remote unborn HEAD, 2021-02-05). And it's easy to see how we ended up there. Before that commit, the code to set up the HEAD for an empty repo was guarded by "if (!option_bare)". That's because the only thing it did was call install_branch_config(), and we don't want to do so for a bare repository (unborn HEAD or not).

That commit put the handling of unborn HEADs into the same block, since those also need to call install_branch_config(). But the unborn case has an additional side effect of calling create_symref(), and we want that to happen whether we are bare or not.

This patch just pulls all of the "figure out the default branch" code out of the "!option_bare" block. Only the actual config installation is kept there.

Note that this does mean we might allocate "ref" and not use it (if the remote is empty but did not advertise an unborn HEAD). But that's not really a big deal since this isn't a hot code path, and it keeps the code simple. The alternative would be handling unborn_head_target separately, but that gets confusing since its memory ownership is tangled up with the "ref" variable.

There's just one new test, for the case we're fixing. The other ones in the table are handled elsewhere (the unborn non-bare case just above, and the actually-born cases in t5601, t5606, and t5609, as they do not require v2's "unborn" protocol extension).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

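A minimal way to observe the corrected behavior, assuming a local empty repository and an illustrative branch name (not taken from the patch):

    # server: empty repository whose unborn HEAD points at "main"
    git init --bare --initial-branch=main server.git

    # bare clone over protocol v2; HEAD now follows the remote instead of the local default
    git -c protocol.version=2 clone --bare server.git mirror.git
    git -C mirror.git symbolic-ref HEAD    # expect refs/heads/main
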
2021-09-08  Merge branch 'ab/ls-remote-packet-trace'  Junio C Hamano

Debugging aid fix.

* ab/ls-remote-packet-trace:
  ls-remote: set packet_trace_identity(<name>)

2021-08-25  ls-remote: set packet_trace_identity(<name>)  Ævar Arnfjörð Bjarmason

Set packet_trace_identity() for ls-remote. This replaces the generic "git" identity in GIT_TRACE_PACKET=<file> traces to "ls-remote", e.g.:

    [...] packet: upload-pack> version 2
    [...] packet: upload-pack> agent=git/2.32.0-dev
    [...] packet: ls-remote< version 2
    [...] packet: ls-remote< agent=git/2.32.0-dev

Where in a "git ls-remote file://<path>" dialog ">" is the sender (or "to the server") and "<" is the recipient (or "received by the client").

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

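To capture such a trace yourself, point GIT_TRACE_PACKET at a file; the repository path below is just a placeholder:

    GIT_TRACE_PACKET=/tmp/packet.trace git -c protocol.version=2 ls-remote file:///path/to/repo.git
    grep 'packet: ls-remote' /tmp/packet.trace
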
2021-07-08  fetch: fix segfault in --negotiate-only without --negotiation-tip=*  Ævar Arnfjörð Bjarmason

The recent --negotiate-only option would segfault in the call to oid_array_for_each() in negotiate_using_fetch() unless one or more --negotiation-tip=* options were provided.

All of the other tests for the feature combine both, but nothing was checking this assumption, let's do that and add a test for it.

Fixes a bug in 9c1e657a8fd (fetch: teach independent negotiation (no packfile), 2021-05-04).

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

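The supported usage pairs the two options, along these lines (ref and remote name illustrative):

    git fetch --negotiate-only --negotiation-tip=refs/heads/main origin
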
2021-05-05  fetch: teach independent negotiation (no packfile)  Jonathan Tan

Currently, the packfile negotiation step within a Git fetch cannot be done independent of sending the packfile, even though there is at least one application wherein this is useful. Therefore, make it possible for this negotiation step to be done independently. A subsequent commit will use this for one such application - push negotiation.

This feature is for protocol v2 only. (An implementation for protocol v0 would require a separate implementation in the fetch, transport, and transport helper code.)

In the protocol, the main hindrance towards independent negotiation is that the server can unilaterally decide to send the packfile. This is solved by a "wait-for-done" argument: the server will then wait for the client to say "done". In practice, the client will never say it; instead it will cease requests once it is satisfied.

In the client, the main change lies in the transport and transport helper code. fetch_refs_via_pack() performs everything needed - protocol version and capability checks, and the negotiation itself.

There are 2 code paths that do not go through fetch_refs_via_pack() that needed to be individually excluded: the bundle transport (excluded through requiring smart_options, which the bundle transport doesn't support) and transport helpers that do not support takeover. If or when we support independent negotiation for protocol v0, we will need to modify these 2 code paths to support it. But for now, report failure if independent negotiation is requested in these cases.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

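A rough sketch of the negotiation-only exchange described above (logical lines only; pkt-line framing and capability advertisement omitted, object ids are placeholders):

    # client request, one round of negotiation
    command=fetch
    ...capabilities...
    wait-for-done
    want <oid>
    have <oid>
    # client flushes and reads the reply

    # server reply: an "acknowledgments" section only; with wait-for-done it
    # never volunteers the packfile, because the client has not said "done"
    acknowledgments
    ACK <oid>
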
2021-03-09  Merge branch 'jt/transfer-fsck-across-packs-fix'  Junio C Hamano

The code to fsck objects received across multiple packs during a single git fetch session has been broken when the packfile URI feature was in use. A workaround has been added by disabling the codepath to avoid keeping a packfile that is too small.

* jt/transfer-fsck-across-packs-fix:
  fetch-pack: do not mix --pack_header and packfile uri

2021-03-06  fetch-pack: do not mix --pack_header and packfile uri  Jonathan Tan

When fetching (as opposed to cloning) from a repository with packfile URIs enabled, an error like this may occur:

    fatal: pack has bad object at offset 12: unknown object type 5
    fatal: finish_http_pack_request gave result -1
    fatal: fetch-pack: expected keep then TAB at start of http-fetch output

This bug was introduced in b664e9ffa1 ("fetch-pack: with packfile URIs, use index-pack arg", 2021-02-22), when the index-pack args used when processing the inline packfile of a fetch response and when processing packfile URIs were unified.

This bug happens because fetch, by default, partially reads (and consumes) the header of the inline packfile to determine if it should store the downloaded objects as a packfile or loose objects, and thus passes --pack_header=<...> to index-pack to inform it that some bytes are missing. However, when it subsequently fetches the additional packfiles linked by URIs, it reuses the same index-pack arguments, thus wrongly passing --index-pack-arg=--pack_header=<...> when no bytes are missing.

This does not happen when cloning because "git clone" always passes do_keep, which instructs the fetch mechanism to always retain the packfile, eliminating the need to read the header.

There are a few ways to fix this, including filtering out pack_header arguments when downloading the additional packfiles, but I decided to stick to always using index-pack throughout when packfile URIs are present - thus, Git no longer needs to read the bytes, and no longer needs --pack_header here.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2021-03-02  Merge branch 'jt/transfer-fsck-across-packs'  Junio C Hamano

The approach to "fsck" the incoming objects in "index-pack" is attractive for performance reasons (we have them already in core, inflated and ready to be inspected), but fundamentally cannot be applied fully when we receive more than one pack stream, as a tree object in one pack may refer to a blob object in another pack as ".gitmodules", when we want to inspect blobs that are used as ".gitmodules" file, for example. Teach "index-pack" to emit objects that must be inspected later and check them in the calling "fetch-pack" process.

* jt/transfer-fsck-across-packs:
  fetch-pack: print and use dangling .gitmodules
  fetch-pack: with packfile URIs, use index-pack arg
  http-fetch: allow custom index-pack args
  http: allow custom index-pack args

2021-02-22  fetch-pack: print and use dangling .gitmodules  Jonathan Tan

Teach index-pack to print dangling .gitmodules links after its "keep" or "pack" line instead of declaring an error, and teach fetch-pack to check such lines printed.

This allows the tree side of the .gitmodules link to be in one packfile and the blob side to be in another without failing the fsck check, because it is now fetch-pack which checks such objects after all packfiles have been downloaded and indexed (and not index-pack on an individual packfile, as it is before this commit).

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2021-02-18  Merge branch 'jt/clone-unborn-head'  Junio C Hamano

"git clone" tries to locally check out the branch pointed at by HEAD of the remote repository after it is done, but the protocol did not convey the information necessary to do so when copying an empty repository. The protocol v2 learned how to do so.

* jt/clone-unborn-head:
  clone: respect remote unborn HEAD
  connect, transport: encapsulate arg in struct
  ls-refs: report unborn targets of symrefs

2021-02-06  clone: respect remote unborn HEAD  Jonathan Tan

Teach Git to use the "unborn" feature introduced in a previous patch as follows: Git will always send the "unborn" argument if it is supported by the server. During "git clone", if cloning an empty repository, Git will use the new information to determine the local branch to create. In all other cases, Git will ignore it.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2021-01-26  t: prepare for GIT_TEST_WRITE_REV_INDEX  Taylor Blau

In the next patch, we'll add support for unconditionally enabling the 'pack.writeReverseIndex' setting with a new GIT_TEST_WRITE_REV_INDEX environment variable.

This causes a little bit of fallout with tests that, for example, compare the list of files in the pack directory being unprepared to see .rev files in its output.

Those locations can be cleaned up to look for specific file extensions, rather than take everything in the pack directory (for instance) and then grep out unwanted items.

Once the pack.writeReverseIndex option has been thoroughly tested, we will default it to 'true', removing GIT_TEST_WRITE_REV_INDEX, and making it possible to revert this patch.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

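The kind of cleanup described above looks roughly like this (paths illustrative, not taken from the patch):

    # before: breaks once .rev files start appearing in the pack directory
    ls .git/objects/pack >actual

    # after: list only the file types the test actually cares about
    ls .git/objects/pack/*.pack .git/objects/pack/*.idx >actual
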
2020-11-20  t5[6-9]*: adjust the references to the default branch name "main"  Johannes Schindelin

This trick was performed via

    $ (cd t &&
       sed -i -e 's/master/main/g' -e 's/MASTER/MAIN/g' \
           -e 's/Master/Main/g' -- t5[6-9]*.sh)

This allows us to define `GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=main` for those tests.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-11-20  tests: mark tests relying on the current default for `init.defaultBranch`  Johannes Schindelin

In addition to the manual adjustment to let the `linux-gcc` CI job run the test suite with `master` and then with `main`, this patch makes sure that GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME is set in all test scripts that currently rely on the initial branch name being `master` by default.

To determine which test scripts to mark up, the first step was to force-set the default branch name to `master` in

- all test scripts that contain the keyword `master`,
- t4211, which expects `t/t4211/history.export` with a hard-coded ref to initialize the default branch,
- t5560 because it sources `t/t556x_common` which uses `master`,
- t8002 and t8012 because both source `t/annotate-tests.sh` which also uses `master`.

This trick was performed by this command:

    $ sed -i '/^ *\. \.\/\(test-lib\|lib-\(bash\|cvs\|git-svn\)\|gitweb-lib\)\.sh$/i\
    GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=master\
    export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME\
    ' $(git grep -l master t/t[0-9]*.sh) \
    t/t4211*.sh t/t5560*.sh t/t8002*.sh t/t8012*.sh

After that, careful, manual inspection revealed that some of the test scripts containing the needle `master` do not actually rely on a specific default branch name: either they mention `master` only in a comment, or they initialize that branch specifically, or they do not actually refer to the current default branch.

Therefore, the aforementioned modification was undone in those test scripts thusly:

    $ git checkout HEAD -- \
        t/t0027-auto-crlf.sh t/t0060-path-utils.sh \
        t/t1011-read-tree-sparse-checkout.sh \
        t/t1305-config-include.sh t/t1309-early-config.sh \
        t/t1402-check-ref-format.sh t/t1450-fsck.sh \
        t/t2024-checkout-dwim.sh \
        t/t2106-update-index-assume-unchanged.sh \
        t/t3040-subprojects-basic.sh t/t3301-notes.sh \
        t/t3308-notes-merge.sh t/t3423-rebase-reword.sh \
        t/t3436-rebase-more-options.sh \
        t/t4015-diff-whitespace.sh t/t4257-am-interactive.sh \
        t/t5323-pack-redundant.sh t/t5401-update-hooks.sh \
        t/t5511-refspec.sh t/t5526-fetch-submodules.sh \
        t/t5529-push-errors.sh t/t5530-upload-pack-error.sh \
        t/t5548-push-porcelain.sh \
        t/t5552-skipping-fetch-negotiator.sh \
        t/t5572-pull-submodule.sh t/t5608-clone-2gb.sh \
        t/t5614-clone-submodules-shallow.sh \
        t/t7508-status.sh t/t7606-merge-custom.sh \
        t/t9302-fast-import-unpack-limit.sh

We excluded one set of test scripts in these commands, though: the range of `git p4` tests. The reason? `git p4` stores the (foreign) remote branch in the branch called `p4/master`, which is obviously not the default branch. Manual analysis revealed that only five of these tests actually require a specific default branch name to pass; They were modified thusly:

    $ sed -i '/^ *\. \.\/lib-git-p4\.sh$/i\
    GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=master\
    export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME\
    ' t/t980[0167]*.sh t/t9811*.sh

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-08-25  fetch-pack: make packfile URIs work with transfer.fsckobjects  Jonathan Tan

When fetching with packfile URIs and transfer.fsckobjects=1, use the --fsck-objects instead of the --strict flag when invoking index-pack so that links are not checked, only objects. This is because incomplete links are expected. (A subsequent connectivity check will be done when all the packs have been downloaded regardless of whether transfer.fsckobjects is set.)

This is similar to 98a2ea46c2 ("fetch-pack: do not check links for partial fetch", 2018-03-15), but for packfile URIs instead of partial clones.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

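The distinction between the two index-pack modes, roughly (the pack filename is a placeholder):

    git index-pack --strict       pack-<hash>.pack   # fsck objects and the links between them
    git index-pack --fsck-objects pack-<hash>.pack   # fsck the objects only; links may dangle
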
2020-07-30  t: remove test_oid_init in tests  brian m. carlson

Now that we call test_oid_init in the setup for all test scripts, there's no point in calling it individually. Remove all of the places where we've done so to help keep tests tidy.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Reviewed-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-07-30  builtin/verify-pack: implement an --object-format option  brian m. carlson

A recently added test in t5702 started using git verify-pack outside of a repository. While this poses no problems with SHA-1, with SHA-256 we implicitly rely on the setup of the repository to initialize our hash algorithm settings. Since we're not in a repository here, we need to give git verify-pack some help to set things up properly.

git index-pack already knows an --object-format option, so let's accept one as well and pass it down to our git index-pack invocation. Since we're now dynamically adjusting the elements in argv, let's switch to using struct argv_array to manage them.

Finally, let's make t5702 pass the proper argument on down to its git verify-pack caller.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Reviewed-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

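Outside a repository the hash algorithm has to be stated explicitly, along these lines (file names are placeholders):

    git index-pack  --object-format=sha256 pack-<hash>.pack
    git verify-pack --object-format=sha256 pack-<hash>.idx
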
2020-07-07  Merge branch 'bc/sha-256-part-2'  Junio C Hamano

SHA-256 migration work continues.

* bc/sha-256-part-2: (44 commits)
  remote-testgit: adapt for object-format
  bundle: detect hash algorithm when reading refs
  t5300: pass --object-format to git index-pack
  t5704: send object-format capability with SHA-256
  t5703: use object-format serve option
  t5702: offer an object-format capability in the test
  t/helper: initialize the repository for test-sha1-array
  remote-curl: avoid truncating refs with ls-remote
  t1050: pass algorithm to index-pack when outside repo
  builtin/index-pack: add option to specify hash algorithm
  remote-curl: detect algorithm for dumb HTTP by size
  builtin/ls-remote: initialize repository based on fetch
  t5500: make hash independent
  serve: advertise object-format capability for protocol v2
  connect: parse v2 refs with correct hash algorithm
  connect: pass full packet reader when parsing v2 refs
  Documentation/technical: document object-format for protocol v2
  t1302: expect repo format version 1 for SHA-256
  builtin/show-index: provide options to determine hash algo
  t5302: modernize test formatting
  ...

2020-06-30  Merge branch 'xl/upgrade-repo-format'  Junio C Hamano

Allow runtime upgrade of the repository format version, which needs to be done carefully. There is a rather unpleasant backward compatibility worry with the last step of this series, but it is the right thing to do in the longer term.

* xl/upgrade-repo-format:
  check_repository_format_gently(): refuse extensions for old repositories
  sparse-checkout: upgrade repository to version 1 when enabling extension
  fetch: allow adding a filter after initial clone
  repository: add a helper function to perform repository format upgrade

2020-06-25  Merge branch 'jt/cdn-offload'  Junio C Hamano

The "fetch/clone" protocol has been updated to allow the server to instruct the clients to grab pre-packaged packfile(s) in addition to the packed object data coming over the wire.

* jt/cdn-offload:
  upload-pack: fix a sparse '0 as NULL pointer' warning
  upload-pack: send part of packfile response as uri
  fetch-pack: support more than one pack lockfile
  upload-pack: refactor reading of pack-objects out
  Documentation: add Packfile URIs design doc
  Documentation: order protocol v2 sections
  http-fetch: support fetching packfiles by URL
  http-fetch: refactor into function
  http: refactor finish_http_pack_request()
  http: use --stdin when indexing dumb HTTP pack

2020-06-20  t5702: offer an object-format capability in the test  brian m. carlson

In order to make this test work with SHA-256, offer an object-format capability so that both sides use the same algorithm.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-06-11  upload-pack: send part of packfile response as uri  Jonathan Tan

Teach upload-pack to send part of its packfile response as URIs.

An administrator may configure a repository with one or more "uploadpack.blobpackfileuri" lines, each line containing an OID, a pack hash, and a URI. A client may configure fetch.uriprotocols to be a comma-separated list of protocols that it is willing to use to fetch additional packfiles - this list will be sent to the server.

Whenever an object with one of those OIDs would appear in the packfile transmitted by upload-pack, the server may exclude that object, and instead send the URI. The client will then download the packs referred to by those URIs before performing the connectivity check.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

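A sketch of the configuration described above (the OID, pack hash, and URL are all placeholders):

    # server side: offload one blob's pack to a CDN
    git config --add uploadpack.blobpackfileuri \
        "<blob-oid> <pack-hash> https://cdn.example.com/<pack-hash>.pack"

    # client side: declare which protocols may be used for the extra packfiles
    git -c protocol.version=2 -c fetch.uriprotocols=https \
        clone https://example.com/repo.git
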
2020-06-05  fetch: allow adding a filter after initial clone  Xin Li

Retroactively adding a filter can be useful for existing shallow clones as they allow users to see earlier change histories without downloading all git objects in a regular --unshallow fetch.

Without this patch, users can make a clone partial by editing the repository configuration to convert the remote into a promisor, like:

    git config core.repositoryFormatVersion 1
    git config extensions.partialClone origin
    git fetch --unshallow --filter=blob:none origin

Since the hard part of making this work is already in place and such edits can be error-prone, teach Git to perform the required configuration change automatically instead.

Note that this change does not modify the existing git behavior which recognizes setting extensions.partialClone without changing repositoryFormatVersion.

Signed-off-by: Xin Li <delphij@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

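With the change, a single fetch in an existing shallow clone should be enough, with Git flipping the promisor configuration itself (an inference from the message above, not text from the patch):

    git fetch --unshallow --filter=blob:none origin
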
2020-05-25  stateless-connect: send response end packet  Denton Liu

Currently, remote-curl acts as a proxy and blindly forwards packets between an HTTP server and fetch-pack. In the case of a stateless RPC connection where the connection is terminated before the transaction is complete, remote-curl will blindly forward the packets before waiting on more input from fetch-pack. Meanwhile, fetch-pack will read the transaction and continue reading, expecting more input to continue the transaction. This results in a deadlock between the two processes.

This can be seen in the following command which does not terminate:

    $ git -c protocol.version=2 clone https://github.com/git/git.git --shallow-since=20151012
    Cloning into 'git'...

whereas the v1 version does terminate as expected:

    $ git -c protocol.version=1 clone https://github.com/git/git.git --shallow-since=20151012
    Cloning into 'git'...
    fatal: the remote end hung up unexpectedly

Instead of blindly forwarding packets, make remote-curl insert a response end packet after proxying the responses from the remote server when using stateless_connect(). On the RPC client side, ensure that each response ends as described.

A separate control packet is chosen because we need to be able to differentiate between what the remote server sends and remote-curl's control packets. By ensuring in the remote-curl code that a server cannot send response end packets, we prevent a malicious server from being able to perform a denial of service attack in which they spoof a response end packet and cause the described deadlock to happen.

Reported-by: Force Charlie <charlieio@outlook.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

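For context, pkt-line reserves a few length values as control packets; the response end packet added here is the third of these (a summary of the protocol-common documentation, not text from this commit):

    0000  flush-pkt         end of the message
    0001  delim-pkt         separates sections within a message
    0002  response-end-pkt  end of a stateless response (inserted by remote-curl)
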
2020-05-25  remote-curl: error on incomplete packet  Denton Liu

Currently, remote-curl acts as a proxy and blindly forwards packets between an HTTP server and fetch-pack. In the case of a stateless RPC connection where the connection is terminated with a partially written packet, remote-curl will blindly send the partially written packet before waiting on more input from fetch-pack. Meanwhile, fetch-pack will read the partial packet and continue reading, expecting more input. This results in a deadlock between the two processes.

For a stateless connection, inspect packets before sending them and error out if a pkt-line packet is incomplete.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-02-27  t/lib-httpd: avoid using macOS' sed  Johannes Schindelin

Among other differences relative to GNU sed, macOS' sed always ends its output with a trailing newline, even if the input did not have such a trailing newline.

Surprisingly, this makes three httpd-based tests fail on macOS: t5616, t5702 and t5703. ("Surprisingly" because those tests have been around for some time, but apparently nobody runs them on macOS with a working Apache2 setup.)

The reason is that we use `sed` in those tests to filter the response of the web server. Apart from the fact that we use GNU constructs (such as using a space after the `c` command instead of a backslash and a newline), we have another problem: macOS' sed produces LF-only newlines while webservers are supposed to use CR/LF ones.

Even worse, t5616 uses `sed` to replace a binary part of the response with a new binary part (kind of hoping that the replaced binary part does not contain a 0x0a byte which would be interpreted as a newline). To that end, it calls on Perl to read the binary pack file and hex-encode it, then calls on `sed` to prefix every hex digit pair with a `\x` in order to construct the text that the `c` statement of the `sed` invocation is supposed to insert. So we call Perl and sed to construct a sed statement. The final nail in the coffin is that macOS' sed does not even interpret those `\x<hex>` constructs.

Let's just replace all of that by Perl snippets. With Perl, at least, we do not have to deal with GNU vs macOS semantics, we do not have to worry about unwanted trailing newlines, and we do not have to spawn commands to construct arguments for other commands to be spawned (i.e. we can avoid a whole lot of shell scripting complexity).

The upshot is that this fixes t5616, t5702 and t5703 on macOS with Apache2.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

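The trailing-newline difference is easy to reproduce on its own (a minimal illustration, not taken from the tests):

    printf 'no newline' | sed 's/x/y/'       # BSD/macOS sed appends a newline
    printf 'no newline' | perl -pe 's/x/y/'  # output stays byte-for-byte identical
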
2020-02-14  Merge branch 'jk/no-flush-upon-disconnecting-slrpc-transport' into maint  Junio C Hamano

Reduce unnecessary round-trip when running "ls-remote" over the stateless RPC mechanism.

* jk/no-flush-upon-disconnecting-slrpc-transport:
  transport: don't flush when disconnecting stateless-rpc helper

2020-01-08  transport: don't flush when disconnecting stateless-rpc helper  Jeff King

Since ba227857d2 (Reduce the number of connects when fetching, 2008-02-04), when we disconnect a git transport, we send a final flush packet. This cleanly tells the other side that we're done, and avoids the other side complaining "the remote end hung up unexpectedly" (though we'd only see that for transports that pass along the server stderr, like ssh or local-host).

But when we've initiated a v2 stateless-connect session over a transport helper, there's no point in sending this flush packet. Each operation we've performed is self-contained, and the other side is fine with us hanging up between operations.

But much worse, by sending the flush packet we may cause the helper to issue an entirely new request _just_ to send the flush packet. So we can incur an extra network request just to say "by the way, we have nothing more to send".

Let's drop this extra flush packet. As the test shows, this reduces the number of POSTs required for a v2 ls-remote over http from 2 to 1.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-11-10  Fix spelling errors in names of tests  Elijah Newren

Reported-by: Jens Schleusener <Jens.Schleusener@fossies.org>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-11-10  Fix spelling errors in comments of testcases  Elijah Newren

Reported-by: Jens Schleusener <Jens.Schleusener@fossies.org>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-08-23  transport-helper: skip ls-refs if unnecessary  Jonathan Tan

Commit e70a3030e7 ("fetch: do not list refs if fetching only hashes", 2018-10-07) and its ancestors taught Git, as an optimization, to skip the ls-refs step when it is not necessary during a protocol v2 fetch (for example, when lazy fetching a missing object in a partial clone, or when running "git fetch --no-tags <remote> <SHA-1>"). But that was only done for natively supported protocols; in particular, HTTP was not supported.

Teach Git to skip ls-refs when using remote helpers that support connect or stateless-connect.

To do this, fetch() is made an acceptable entry point. Because fetch() can now be the first function in the vtable called, "get_helper(transport);" has to be added to the beginning of that function to set the transport up (if not yet set up) before process_connect() is invoked.

When fetch() is called, the transport could be taken over (this happens if "connect" or "stateless-connect" is successfully run without any "fallback" response), or not. If the transport is taken over, execution continues like execution for natively supported protocols (fetch_refs_via_pack() is executed, which will fetch refs using ls-refs if needed). If not, the remote helper interface will invoke get_refs_list() if it hasn't been invoked yet, preserving existing behavior.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-08-02  t: warn against adding non-httpd-specific tests after sourcing 'lib-httpd'  SZEDER Gábor

We have a couple of test scripts that are not completely httpd-specific, but do run a few httpd-specific tests at the end. These test scripts source 'lib-httpd.sh' somewhere mid-script, which then skips all the rest of the test script if the dependencies for running httpd tests are not fulfilled.

As the previous two patches in this series show, already on two occasions non-httpd-specific tests were appended at the end of such test scripts, and, consequently, they were skipped as well when httpd tests couldn't be run.

Add a comment at the end of these test scripts to warn against adding non-httpd-specific tests at the end, in the hope that they will help prevent similar issues in the future.

Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-07-02  t5702: use test_commit_bulk  Jeff King

There are two loops that create 32 commits each using test_commit. Using test_commit_bulk speeds this up from:

    Benchmark #1: ./t5702-protocol-v2.sh --root=/var/ram/git-tests
      Time (mean ± σ):      5.409 s ±  0.513 s    [User: 2.382 s, System: 2.466 s]
      Range (min … max):    4.633 s …  5.927 s    10 runs

to:

    Benchmark #1: ./t5702-protocol-v2.sh --root=/var/ram/git-tests
      Time (mean ± σ):      3.956 s ±  0.242 s    [User: 1.775 s, System: 1.627 s]
      Range (min … max):    3.449 s …  4.239 s    10 runs

for an average savings of over 25%.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

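The shape of the conversion, roughly (commit names are made up for illustration):

    # before: 32 separate commits, several processes spawned per commit
    for i in $(test_seq 32)
    do
        test_commit "commit-$i" || return 1
    done

    # after: a single fast-import stream creates all 32 commits
    test_commit_bulk --id=commit 32
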
2019-05-08  Merge branch 'js/misc-doc-fixes'  Junio C Hamano

"make check-docs", "git help -a", etc. did not account for cases where a particular build may deliberately omit some subcommands, which has been corrected.

* js/misc-doc-fixes:
  Turn `git serve` into a test helper
  test-tool: handle the `-C <directory>` option just like `git`
  check-docs: do not bother checking for legacy scripts' documentation
  docs: exclude documentation for commands that have been excluded
  check-docs: allow command-list.txt to contain excluded commands
  help -a: do not list commands that are excluded from the build
  Makefile: drop the NO_INSTALL variable
  remote-testgit: move it into the support directory for t5801

2019-05-08  Merge branch 'jt/clone-server-option'  Junio C Hamano

"git clone" learned a new --server-option option when talking over the protocol version 2.

* jt/clone-server-option:
  clone: send server options when using protocol v2
  transport: die if server options are unsupported

2019-04-19  Turn `git serve` into a test helper  Johannes Schindelin

The `git serve` built-in was introduced in ed10cb952d31 (serve: introduce git-serve, 2018-03-15) as a backend to serve Git protocol v2, probably originally intended to be spawned by `git upload-pack`.

However, in the version that the protocol v2 patches made it into core Git, `git upload-pack` calls the `serve()` function directly instead of spawning `git serve`. The only reason in life for `git serve` to survive as a built-in command is to provide a way to test the protocol v2 functionality.

Meaning that it does not even have to be a built-in that is installed with end-user facing Git installations, but it can be a test helper instead. Let's make it so.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-04-18  clone: send server options when using protocol v2  Jonathan Tan

Commit 5e3548ef16 ("fetch: send server options when using protocol v2", 2018-04-24) taught "fetch" the ability to send server options when using protocol v2, but not "clone". This ability is triggered by "-o" or "--server-option".

Teach "clone" the same ability, except that because "clone" already has "-o" for another parameter, teach "clone" only to receive "--server-option".

Explain in the documentation, both for clone and for fetch, that server handling of server options are server-specific. This is similar to receive-pack's handling of push options - currently, they are just sent to hooks to interpret as they see fit.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

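Typical invocation after this change (option strings and URL are placeholders; how the server interprets the options is entirely up to the server):

    git -c protocol.version=2 clone \
        --server-option=trace=1 --server-option=priority=low \
        https://example.com/repo.git
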
2019-04-18  transport: die if server options are unsupported  Jonathan Tan

Server options were added in commit 5e3548ef16 ("fetch: send server options when using protocol v2", 2018-04-24), supported only for protocol version 2. But if the user specifies server options, and the protocol version being used doesn't support them, the server options are silently ignored.

Teach any transport users to die instead in this situation, just like how "push" dies if push options are provided when the server doesn't support them.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-03-14  tests: use 'test_atexit' to stop httpd  SZEDER Gábor

Use 'test_atexit' to run cleanup commands to stop httpd at the end of the test script or upon interrupt or failure, as it is shorter, simpler, and more robust than registering such cleanup commands in the trap on EXIT in the test scripts.

Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-03-03  remote-curl: use post_rpc() for protocol v2 also  Jonathan Tan

When transmitting and receiving POSTs for protocol v0 and v1, remote-curl uses post_rpc() (and associated functions), but when doing the same for protocol v2, it uses a separate set of functions (proxy_rpc() and others). Besides duplication of code, this has caused at least one bug: the auth retry mechanism that was implemented in v0/v1 was not implemented in v2.

To fix this issue and avoid it in the future, make remote-curl also use post_rpc() when handling protocol v2. Because line lengths are written to the HTTP request in protocol v2 (unlike in protocol v0/v1), this necessitates changes in post_rpc() and some of the functions it uses; perform these changes too.

A test has been included to ensure that the code for both the unchunked and chunked variants of the HTTP request is exercised.

Note: stateless_connect() has been updated to use the lower-level packet reading functions instead of struct packet_reader. The low-level control is necessary here because we cannot change the destination buffer of struct packet_reader while it is being used; struct packet_reader has a peeking mechanism which relies on the destination buffer being present in between a peek and a read.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-02-06  Merge branch 'jt/namespaced-ls-refs-fix'  Junio C Hamano

Fix namespace support in protocol v2.

* jt/namespaced-ls-refs-fix:
  ls-refs: filter refs using namespace-stripped name

2019-02-06  Merge branch 'jt/fetch-v2-sideband'  Junio C Hamano

"git fetch" and "git upload-pack" learned to send all exchange over the sideband channel while talking the v2 protocol.

* jt/fetch-v2-sideband:
  tests: define GIT_TEST_SIDEBAND_ALL
  {fetch,upload}-pack: sideband v2 fetch response
  sideband: reverse its dependency on pkt-line
  pkt-line: introduce struct packet_writer
  pack-protocol.txt: accept error packets in any context
  Use packet_reader instead of packet_read_line

2019-01-18  ls-refs: filter refs using namespace-stripped name  Jonathan Tan

If a user fetches refs/heads/master from a repo with namespace "ns", the remote is expected to (1) not send the real refs/heads/master, and (2) send refs/namespaces/ns/refs/heads/master with the name refs/heads/master.

(1) indeed happens now, but not (2) - Git only sends refs that have the user-given prefix, but it checks them against the full name of the ref (the one starting with refs/namespaces), and not the namespace-stripped one. This is demonstrated by the patch in the test. Currently, it results in "fatal: couldn't find remote ref refs/heads/master" despite both unnamespaced and namespaced master being present. With the code change, it produces the expected result.

Check the ref prefixes against the namespace-stripped name.

This bug was discovered through applying patches [1] that override protocol.version to 2 in repositories when running tests, allowing us to notice differences in behavior across different protocol versions.

[1] https://public-inbox.org/git/cover.1547677183.git.jonathantanmy@google.com/

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

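One way to exercise a namespaced server locally, roughly (path and namespace name are illustrative, not from the patch):

    # the real ref lives at refs/namespaces/ns/refs/heads/master on the server
    git -c protocol.version=2 clone \
        --upload-pack='git --namespace=ns upload-pack' /path/to/repo.git work
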
2019-01-17  tests: define GIT_TEST_SIDEBAND_ALL  Jonathan Tan

Define a GIT_TEST_SIDEBAND_ALL environment variable meant to be used from tests. When set to true, this overrides uploadpack.allowsidebandall to true, allowing the entire test suite to be run as if this configuration is in place for all repositories.

As of this patch, all tests pass whether GIT_TEST_SIDEBAND_ALL is unset or set to 1.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

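Like the other GIT_TEST_* knobs, it can be flipped for a single script or the whole suite (the script name is just an example; run from the t/ directory):

    GIT_TEST_SIDEBAND_ALL=1 ./t5702-protocol-v2.sh   # one script
    GIT_TEST_SIDEBAND_ALL=1 make -C t                # the whole suite
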
2019-01-11  upload-pack: teach deepen-relative in protocol v2  Jonathan Tan

Commit 685fbd3291 ("fetch-pack: perform a fetch using v2", 2018-03-15) attempted to teach Git deepen-relative in protocol v2 (among other things), but it didn't work:

(1) fetch-pack.c needs to emit "deepen-relative".

(2) upload-pack.c needs to ensure that the correct deepen_relative variable is passed to deepen() (there are two - the static variable and the one in struct upload_pack_data).

(3) Before deepen() computes the list of reachable shallows, it first needs to mark all "our" refs as OUR_REF. v2 currently does not do this, because unlike v0, it is not needed otherwise.

Fix all this and include a test demonstrating that it works now. For (2), the static variable deepen_relative is also eliminated, removing a source of confusion.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Reviewed-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

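On the command line, deepen-relative is what backs "--deepen", which extends an existing shallow history by a number of commits relative to the current boundary (remote name illustrative):

    git -c protocol.version=2 fetch --deepen=1 origin
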
2019-01-11  fetch-pack: do not take shallow lock unnecessarily  Jonathan Tan

When fetching using protocol v2, the remote may send a "shallow-info" section if the client is shallow. If so, Git as the client currently takes the shallow file lock, even if the "shallow-info" section is empty.

This is not a problem except that Git does not support taking the shallow file lock after modifying the shallow file, because is_repository_shallow() stores information that is never cleared. And this take-after-modify occurs when Git does a tag-following fetch from a shallow repository on a transport that does not support tag following (since in this case, 2 fetches are performed).

To solve this issue, take the shallow file lock (and perform all other shallow processing) only if the "shallow-info" section is non-empty; otherwise, behave as if it were empty.

A full solution (probably, ensuring that any action of committing shallow file locks also includes clearing the information stored by is_repository_shallow()) would solve the issue without need for this patch, but this patch is independently useful (as an optimization to prevent writing a file in an unnecessary case), hence why I wrote it. I have included a NEEDSWORK outlining the full solution.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-11-13  Merge branch 'jt/tighten-fetch-proto-v2-response'  Junio C Hamano

"git fetch" was a bit loose in parsing responses from the other side when talking over the protocol v2.

* jt/tighten-fetch-proto-v2-response:
  fetch-pack: be more precise in parsing v2 response

2018-11-06  Merge branch 'jt/upload-pack-v2-fix-shallow'  Junio C Hamano

"git fetch" over protocol v2 into a shallow repository failed to fetch full history behind a new tip of history that was diverged before the cut-off point of the history that was previously fetched shallowly.

* jt/upload-pack-v2-fix-shallow:
  upload-pack: clear flags before each v2 request
  upload-pack: make want_obj not global
  upload-pack: make have_obj not global

2018-11-01  fetch-pack: be more precise in parsing v2 response  Jonathan Tan

Each section in a protocol v2 response is followed by either a DELIM packet (indicating more sections to follow) or a FLUSH packet (indicating none to follow). But when parsing the "acknowledgments" section, do_fetch_pack_v2() is liberal in accepting both, but determines whether to continue reading or not based solely on the contents of the "acknowledgments" section, not on whether DELIM or FLUSH was read.

There is no issue with a protocol-compliant server, but can result in confusing error messages when communicating with a server that serves unexpected additional sections. Consider a server that sends "new-section" after "acknowledgments":

- client writes request
- client reads the "acknowledgments" section which contains no "ready", then DELIM
- since there was no "ready", client needs to continue negotiation, and writes request
- client reads "new-section", and reports to the end user "expected 'acknowledgments', received 'new-section'"

For the person debugging the involved Git implementation(s), the error message is confusing in that "new-section" was not received in response to the latest request, but to the first one.

One solution is to always continue reading after DELIM, but in this case, we can do better. We know from the protocol that "ready" means at least the packfile section is coming (hence, DELIM) and that no "ready" means that no sections are to follow (hence, FLUSH). So teach process_acks() to enforce this.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
