|
Praefect runs the request finalizer even if the request deadline was
exceeded in order to ensure the required metadata updates are made.
If the context deadline is exceeded while the request finalizer runs, no
error is returned, as the metadata updates don't observe it. This causes
Praefect to return, log and observe a status of OK for requests that
exceeded their deadline or were canceled while the request finalizer was
running. This commit returns the error from the original context if no
other error is raised from the requestFinalizer, to ensure the status
gets set correctly.
A `desc` field is added to the tests, as they currently abuse the
`change` field as the test name.
Changelog: fixed
|
|
Remove 'exact_pagination_token_match' feature flag
See merge request gitlab-org/gitaly!4724
|
|
Fixes for a range of typos
See merge request gitlab-org/gitaly!4751
|
|
|
|
|
|
|
|
operations: Remove UserRebaseConfirmableImprovedErrorHandling FF
See merge request gitlab-org/gitaly!4746
|
|
ruby: Fix dependencies in Gemfile.lock
See merge request gitlab-org/gitaly!4750
|
|
With 4da75e5814680fe0d657bb734099527c74b76905 we attempted to update
actionview to resolve a security issue. However, in doing so we updated
the version of actionpack and its dependencies, but not actionview
itself. This has led to builds emitting the following warning:
Your lockfile doesn't include a valid resolution.
You can fix this by regenerating your lockfile or trying to manually editing the bad locked gems to a version that satisfies all dependencies.
The unmet dependencies are:
* actionview (= 6.1.5.1), depended upon actionpack-6.1.5.1, unsatisfied by actionview-6.1.4.7
* activesupport (= 6.1.5.1), depended upon actionpack-6.1.5.1, unsatisfied by activesupport-6.1.4.7
Update Gemfile.lock to update all related dependencies to matched
versions and stop this error.
|
|
Update the stage label to systems for danger
See merge request gitlab-org/gitaly!4748
|
|
|
|
Now that the UserRebaseConfirmableImprovedErrorHandling feature flag has
been enabled in production without issues, we can remove it.
|
|
git: Don't advertise internal references via git-upload-pack(1)
Closes #4358
See merge request gitlab-org/gitaly!4727
|
|
Gemfile.lock: Update actionview gem
See merge request gitlab-org/gitaly!4739
|
|
ssh: Improve errors for fetches canceled by the user
Closes #4331
See merge request gitlab-org/gitaly!4731
|
|
|
|
|
|
In recent weeks we have started to see an uptick of incidents caused by
a single user that was repeatedly fetching from a specific repository
via the SSHUploadPackWithSidechannel RPC. The error has always been the
same:
fatal: the remote end hung up unexpectedly
This error is returned by git-upload-pack(1) in case it sees a short
read of data and ultimately indicates that the user has probably killed
git-fetch(1). This is not an actionable error on our side as it is a
totally expected outcome that a killed fetch will cause the fetch to
fail.
Now that we have a lot more tests in this area, we can be reasonably
sure that a short read on an otherwise normally closed network
connection is seemingly the only condition that causes the above error.
So let's put a gRPC error code of `Canceled` on this error so that it
doesn't set off our alerts anymore.
Changelog: fixed
|
|
Unfortunately, the test added by that commit is flaky. And while I could
drop that commit before we merge this patch series I think that the
logic added in there is interesting enough to keep around for future
reference if we ever want to revisit this topic. Also, even though
it's flaky, it still served its purpose of demonstrating that it is fine
to wrap errors with a `Canceled` gRPC code in case we see "fatal: the
remote end hung up unexpectedly".
So let's just keep this commit in our history and revert it.
|
|
On gitlab.com there has been a recent uptick in incidents caused by
SSHUploadPackWithSidechannel failing with the following error message:
fatal: the remote end hung up unexpectedly
While we have been reasonably sure that this error really only happens
in case the client has cancelled the fetch, we couldn't really tell
whether any other conditions might trigger the same error. Most notably,
we were not sure if this might've indicated network connectivity issues.
Add tests to exercise this RPC call with injected network errors. As it
turns out, none of the injected errors cause the error to appear, which
is a good thing.
|
|
Add two more tests that verify reference negotiation with invalid and
missing objects fails as expected.
|
|
The testhelper to start the gRPC server for the ssh service is only
exposing the socket address of the server. Add a new function that
returns the `GitalyServer` to make it accessible to tests.
|
|
Make the gRPC server accessible to outside callers so that they can for
example register additional listeners for the server.
|
|
Add a function to format gRPC errors with an error code of `Canceled`.
|
|
|
|
Revert "Change FindTag err when tag not found to NotFound"
See merge request gitlab-org/gitaly!4736
|
|
This reverts merge request !4735
|
|
[ci skip]
|
|
Merge branch '4366-findtag-returns-an-internal-error-code-when-it-cannot-find-a-specific-tag' into 'master'
Change FindTag err when tag not found to NotFound
Closes #4366
See merge request gitlab-org/gitaly!4735
|
|
What
---
Update the error code to be `NotFound` rather than `Internal` when a
tag is not found.
Why
---
We burn through our error budget when a single user sends a bunch of
requests for tags that don't exist, because we classify the error as
`Internal` when it should instead be `NotFound`.
Reference: https://gitlab.com/gitlab-org/gitaly/-/issues/4366
Signed-off-by: Steve Azzopardi <sazzopardi@gitlab.com>
|
|
Gitaly knows two different types of internal references: those that are
read-only and thus hidden from git-receive-pack(1), and those that
should only be used internally and are thus neither readable nor
writeable. We already handle the former type of internal reference by
setting up `receive.hideRefs`, but we don't handle the latter type.
Fix this by setting up hidden references for git-upload-pack(1). This
causes us to not advertise references with a prefix of `refs/tmp` or
`refs/keep-around` when fetching from repositories.
Note that because we set `transfer.hideRefs=!refs/` in our gitaly-ssh
executable, this change does not impact internal fetches. This is
expected and required to keep e.g. `ReplicateRepository()` working
correctly.
Changelog: fixed
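The setup amounts to passing per-prefix `-c` flags to Git; a rough Go sketch (the helper name is illustrative):

```go
package main

import "fmt"

// uploadPackHideRefsFlags builds the git -c flags that hide internal
// references from git-upload-pack(1)'s ref advertisement, as done here
// for the `refs/tmp` and `refs/keep-around` prefixes.
func uploadPackHideRefsFlags(prefixes []string) []string {
	var flags []string
	for _, prefix := range prefixes {
		flags = append(flags, "-c", "uploadpack.hideRefs="+prefix)
	}
	return flags
}

func main() {
	fmt.Println(uploadPackHideRefsFlags([]string{"refs/tmp", "refs/keep-around"}))
}
```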
|
|
We've got two different types of internal references: those that are
interesting to the user but shouldn't be written to by them, and those
that are really only required for our own internal housekeeping and
which should be neither readable nor writeable.
Introduce a new `InternalReferenceType` to mark references accordingly.
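A sketch of such a type; the constant names follow the commit's description and may not match Gitaly's exact identifiers:

```go
package main

import "fmt"

// InternalReferenceType distinguishes the two kinds of internal
// references Gitaly knows about.
type InternalReferenceType int

const (
	// ReadonlyReference refs are advertised to the user but hidden from
	// git-receive-pack(1), so they cannot be written to by clients.
	ReadonlyReference InternalReferenceType = iota + 1
	// HiddenReference refs exist for internal housekeeping only and are
	// neither readable nor writeable by clients.
	HiddenReference
)

// advertisedToUser reports whether refs of the given type show up in
// git-upload-pack(1)'s ref advertisement.
func advertisedToUser(t InternalReferenceType) bool {
	return t == ReadonlyReference
}

func main() {
	fmt.Println(advertisedToUser(ReadonlyReference), advertisedToUser(HiddenReference))
}
```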
|
|
We are using a poor man's static assert via the `init()` function to
verify that we properly set up `receive.hideRefs` for each of our
internal references. This static assert doesn't make a whole lot of
sense anymore since be8a34e94 (receive.hideRefs use InternalRefPrefixes,
2021-06-21) though, which directly implemented setup of these config
entries via the `InternalRefPrefixes` variable.
Remove the static assert to clean up the code.
|
|
ReplicateRepository must know to replicate all internal references, even
when they're hidden by default. Add testcases to assert that this works
as expected both for the initial seeding and when performing incremental
replication.
|
|
The internal references known by Gitaly are not all the same: while some
of them are internal as in read-only, others are internal as in they
should not be visible to the client at all. We don't currently configure
this directly and thus announce them to the client.
Add testcases that demonstrate this misconfiguration.
|
|
The tests for InfoRefs endpoints in the smarthttp test suite verify that
we advertise a subset of expected references. This isn't ideal though
because we cannot check for the absence of specific references like
this.
Refactor the tests to create their test state via an empty repository
so that we end up with only a small set of references. Like this, we
can migrate the tests to check the full set of references, now that
there are only two or three of them instead of hundreds.
|
|
Rename tests exercising smarthttp's InfoRefs endpoints to match modern
best practices.
|
|
Add a helper function that returns a pktline-formatted string. While at
it, add missing calls to `t.Helper()`.
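Such a helper boils down to Git's pkt-line framing: four hex digits encoding the total length (payload plus the 4-byte header), then the payload. A minimal sketch, with an illustrative name:

```go
package main

import "fmt"

// pktlinef renders a pkt-line: a 4-digit hex length that counts the
// payload plus the 4-byte length header itself, followed by the payload.
func pktlinef(format string, a ...interface{}) string {
	payload := fmt.Sprintf(format, a...)
	return fmt.Sprintf("%04x%s", len(payload)+4, payload)
}

func main() {
	fmt.Print(pktlinef("%s\n", "want deadbeef"))
}
```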
|
|
ci: Enable use of fastzip to speed up cache handling
See merge request gitlab-org/gitaly!4729
|
|
git: Encapsulate object hash specific information
See merge request gitlab-org/gitaly!4720
|
|
command: Fix Goroutines sticking around until context cancellation
Closes #4188
See merge request gitlab-org/gitaly!4719
|
|
repository: Fix test writing into Gitaly's repository
See merge request gitlab-org/gitaly!4718
|
|
Creating and extracting caches takes up a significant amount of time in
our CI pipelines: extracting the cache takes almost 2 minutes, while
creating the cache takes around 4:30 minutes. This is quite excessive and
diminishes the returns we get from the cache.
Luckily, CI folks have realized that the current caching mechanism is
not ideal and have implemented a "fastzip" strategy [1] that can be
optionally enabled by setting the `FF_USE_FASTZIP` feature flag. This
results in a significant speedup: extracting the cache only takes about
40 seconds compared to 2 minutes, and saving the cache is sped up from
4:30 minutes to about 50 seconds.
Let's enable the fastzip feature for Gitaly to speed up our pipelines.
[1]: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/26490
|
|
updateref: Ignore failures of custom post-receive hooks
Closes #3595
See merge request gitlab-org/gitaly!4714
|
|
One of our tests for `cloneFromURLCommand()` tries to clone into a
subdirectory of the Gitaly repository itself. This is wrong: all test
data should always end up in a temporary directory.
Fix this by properly cloning into a temporary directory. While at it,
add a call to `cmd.Wait()` to not leak the process.
|
|
Add a new function `DetectObjectHash()` that, given a repository, can
detect the object hash that's used by the repository by inspecting the
`extensions.objectFormat` config entry.
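The core of the detection is a mapping from the config value to a hash, where an unset `extensions.objectFormat` means SHA1 per Git's defaults. A sketch operating on the already-read config value (reading it via git-config(1) is omitted):

```go
package main

import "fmt"

// detectObjectHash maps a repository's extensions.objectFormat config
// value to the object hash in use. An empty value means the config
// entry is unset, which implies SHA1.
func detectObjectHash(objectFormat string) (string, error) {
	switch objectFormat {
	case "", "sha1":
		return "sha1", nil
	case "sha256":
		return "sha256", nil
	default:
		return "", fmt.Errorf("unknown object format: %q", objectFormat)
	}
}

func main() {
	name, err := detectObjectHash("")
	fmt.Println(name, err == nil)
}
```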
|
|
Split out the ObjectID-related tests into a separate package so that we
can easily make use of the gittest package, which otherwise would cause
a cyclic dependency.
|
|
The gittest package is currently re-exporting the functionality provided
by the `ObjectHash` structure. Depending on whether we are testing with
SHA1 or SHA256, we export either the implementation details of the
`ObjectHashSHA1` or `ObjectHashSHA256` structure.
Now that we have the hash-specific information neatly encapsulated in
its own structure there is no need to re-export the separate details of
that structure anymore. Instead, refactor the gittest package to only
export a single `DefaultObjectHash` variable that points to one of the
above structures, depending on what we're currently testing.
|
|
Remove the separate `sha256` package in favor of the new `ObjectHash`
structure that encapsulates SHA1- and SHA256-specific info.
|
|
Move the `ZeroOID` variable into the `ObjectHash` structure to make it
dependent on the hash function used.
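The shape of this is roughly the following (field names are modeled on the commit messages, not necessarily Gitaly's exact definitions); the zero OID is 40 hex zeroes for SHA1 and 64 for SHA256:

```go
package main

import (
	"fmt"
	"strings"
)

// ObjectHash bundles hash-specific details. ZeroOID is the all-zeroes
// object ID, whose length depends on the hash function.
type ObjectHash struct {
	Name    string
	ZeroOID string
}

var (
	ObjectHashSHA1   = ObjectHash{Name: "sha1", ZeroOID: strings.Repeat("0", 40)}
	ObjectHashSHA256 = ObjectHash{Name: "sha256", ZeroOID: strings.Repeat("0", 64)}
)

func main() {
	fmt.Println(len(ObjectHashSHA1.ZeroOID), len(ObjectHashSHA256.ZeroOID))
}
```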
|