|
Refactor Project#create_or_update_import_data
See merge request gitlab-org/gitlab-ce!23701
|
Add basic implementation of CI/CD bridge job
See merge request gitlab-org/gitlab-ce!23730
|
Merge branch '54650-send-an-email-to-project-owners-when-a-mirror-update-fails' into 'master'
Send a notification email on mirror update errors
Closes #54650
See merge request gitlab-org/gitlab-ce!23595
|
The email, containing the last mirror update error, is sent to project
maintainers. This will allow maintainers to set up alarms and react
accordingly.
|
In https://gitlab.com/gitlab-org/release/framework/issues/28 we found
that this method was changed a lot over the years: 43 times, if our
calculations are correct. Looking at the method, it had quite a few
branches:
```
def create_or_update_import_data(data: nil, credentials: nil)
  return if data.nil? && credentials.nil?

  project_import_data = import_data || build_import_data

  if data
    project_import_data.data ||= {}
    project_import_data.data = project_import_data.data.merge(data)
  end

  if credentials
    project_import_data.credentials ||= {}
    project_import_data.credentials =
      project_import_data.credentials.merge(credentials)
  end

  project_import_data
end
```
If we turn the || and ||= operators into regular if statements, we can
see a bit more clearly that this method has quite a lot of branches in
it:
```
def create_or_update_import_data(data: nil, credentials: nil)
  if data.nil? && credentials.nil?
    return
  else
    project_import_data =
      if import_data
        import_data
      else
        build_import_data
      end

    if data
      if project_import_data.data
        # nothing
      else
        project_import_data.data = {}
      end

      project_import_data.data =
        project_import_data.data.merge(data)
    end

    if credentials
      if project_import_data.credentials
        # nothing
      else
        project_import_data.credentials = {}
      end

      project_import_data.credentials =
        project_import_data.credentials.merge(credentials)
    end

    project_import_data
  end
end
```
The number of if statements and branches here makes it easy to make
mistakes. To resolve this, we refactor the code in such a way that we
can get rid of all but the first `if data.nil? && credentials.nil?`
statement. We can do this by simply sending `to_h` to `nil` in the right
places, which removes the need for statements such as `if data`.
Since this data gets written to a database, in ProjectImportData we
make sure not to write empty Hash values. This requires an `unless`
(which is really an `if !`), but the resulting code is still very easy
to read.
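A minimal sketch of what the refactored method could look like; the
`merge_data`/`merge_credentials` helpers are assumptions, not
necessarily the final API:
```
def create_or_update_import_data(data: nil, credentials: nil)
  return if data.nil? && credentials.nil?

  project_import_data = import_data || build_import_data

  # `nil.to_h` returns {}, so the `if data` / `if credentials` branches
  # are no longer needed.
  project_import_data.merge_data(data.to_h)
  project_import_data.merge_credentials(credentials.to_h)

  project_import_data
end

# In ProjectImportData: skip writing empty Hash values to the database.
def merge_data(hash)
  self.data = data.to_h.merge(hash) unless hash.empty?
end

def merge_credentials(hash)
  self.credentials = credentials.to_h.merge(hash) unless hash.empty?
end
```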
|
It adds a base class for CompareTestReportsService containing the code
it shares with CompareLicenseManagementReportsService, which is present
in GitLab Enterprise Edition.
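A hedged sketch of the shape of that extraction; apart from the two
service names above, the names here are illustrative:
```
module Ci
  # Shared comparison flow; subclasses plug in their comparer.
  class CompareReportsBaseService
    def execute(base_report, head_report)
      comparer_class.new(base_report, head_report)
    end
  end

  class CompareTestReportsService < CompareReportsBaseService
    def comparer_class
      Gitlab::Ci::Reports::TestReportsComparer
    end
  end
end
```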
|
'master'"
This reverts commit 793be43b35bc8cd2a9effe38280417ee198647cb, reversing
changes made to 8d0b4872ba3ff787c4067618f48b60bd24466c74.
For projects not using any CI, enabling 'merge only when pipeline
succeeds' caused merge requests to be in an unmergeable state, which
caused significant confusion.
See https://gitlab.com/gitlab-org/gitlab-ce/issues/55144 for more details.
|
Avoid caching BroadcastMessage as an ActiveRecord object
Closes #55034
See merge request gitlab-org/gitlab-ce!23662
|
Remove unnecessary includes of ShellAdapter
See merge request gitlab-org/gitlab-ce!23607
|
Merge branch '54626-able-to-download-a-single-archive-file-with-api-by-ref-name' into 'master'
Add endpoint to download single artifact by ref
Closes #54626
See merge request gitlab-org/gitlab-ce!23538
|
When a Rails 4 host serializes a BroadcastMessage, it will serialize
`ActiveRecord::ConnectionAdapters::PostgreSQL::OID::Integer`, which does
not exist in Rails 5. This will cause Error 500s on a Rails 5 host
reading from this cache.
To make Rails 4 and 5 play well together, store the data as JSON and
construct the ActiveRecord objects from JSON.
Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/55034
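A hedged sketch of the approach, assuming a
`current_and_future_messages` scope; the cache key and helper names are
illustrative:
```
def self.current
  json = Rails.cache.fetch('broadcast_message_current_json') do
    current_and_future_messages.to_json
  end

  # Rebuild ActiveRecord objects from the JSON payload so the cache no
  # longer embeds Rails-version-specific column types.
  JSON.parse(json).map { |attributes| new(attributes) }
end
```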
|
Allow public forks to be deduplicated
See merge request gitlab-org/gitlab-ce!23508
|
When a project is forked, the new repository used to be a deep copy of
everything stored on disk, created with `git clone`. This works well and
makes isolation between repositories easy. However, at the start the
clone is 100% identical to the origin repository, and in the case of the
objects in the object directory this almost always means a lot of
duplication.
Object pools are a way to create a third repository that essentially
only exists for its 'objects' subdirectory. This third repository's
object directory is set as an alternate object location, meaning that
when an object is missing from the local repository, Git will look in
the alternate location: the object pool repository.
When Git performs garbage collection, it's smart enough to check the
alternate location. When objects are duplicated, Git is allowed to throw
one copy away: the copy in the local repository is removed, while the
pool's copy remains as is.
These pools have an origin location, which for now will always be a
repository that is itself not a fork. When the root of a fork network is
forked by a user, the fork still clones the full repository; the pool
repository is created asynchronously.
Either of these processes can finish before the other. To handle this
race condition, the Join ObjectPool operation is idempotent: we can
schedule it twice with the same effect.
To accommodate the holding of state, two migrations have been added
(sketched below):
1. A state column was added to the pool_repositories table. This column
is managed by the state machine, allowing for hooks on transitions.
2. pool_repositories now has a source_project_id. This column is
convenient for multiple reasons: it has a unique index, allowing the
database to handle race conditions when creating a new record, and it
records which project is the pool's source, which is a short link to
the fork network's root.
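A hedged sketch of those two migrations; class names and the Rails
version tag are assumptions:
```
class AddStateToPoolRepositories < ActiveRecord::Migration[5.0]
  def change
    # Managed by the state machine, allowing hooks on transitions.
    add_column :pool_repositories, :state, :string
  end
end

class AddSourceProjectIdToPoolRepositories < ActiveRecord::Migration[5.0]
  def change
    # The unique index lets the database handle race conditions when
    # creating a new record.
    add_reference :pool_repositories, :source_project, index: { unique: true }
  end
end
```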
Object pools are only available for public projects that use hashed
storage, and only when forking from the root of the fork network. (That
is, the project being forked from is not itself a fork.)
This commit message uses both ObjectPool and PoolRepository, which are
alike but distinct: ObjectPool refers to what is stored on disk and
managed by Gitaly, while PoolRepository is the record in the database.
|
Backports changes made to 'One notification per code review'
See merge request gitlab-org/gitlab-ce!23656
|
Log and pass correlation-id between Unicorn, Sidekiq and Gitaly
See merge request gitlab-org/gitlab-ce!22844
|
Add a new endpoint
`projects/:id/jobs/artifacts/:ref_name/raw/*artifact_path?job=name`
which closely mirrors the web URL for consistency's sake. This endpoint
can be used to download a single file from the artifacts of the
specified ref and job.
Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/54626
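An illustrative way to call the new endpoint from Ruby; the host,
project ID, ref, artifact path, job name, and token are placeholders:
```
require 'net/http'
require 'uri'

uri = URI('https://gitlab.example.com/api/v4/projects/1/jobs/artifacts/master/raw/build/output.txt?job=compile')

request = Net::HTTP::Get.new(uri)
request['PRIVATE-TOKEN'] = ENV['GITLAB_TOKEN']

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end

# Save the single artifact file if the request succeeded.
File.binwrite('output.txt', response.body) if response.is_a?(Net::HTTPSuccess)
```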
|
The EE merge request can be found here:
https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8442
|
Use FastDestroy for deleting uploads
Closes #46069
See merge request gitlab-org/gitlab-ce!20977
|
Add CI/CD build encrypted tokens (after revert)
Closes #52342
See merge request gitlab-org/gitlab-ce!23649
|
Brings back 1e8f1de0 reverted in !23644
Closes #52342
See merge request gitlab-org/gitlab-ce!23436
|
Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/54862
|
This reverts commit 1e8f1de034aa9b6a60b640b2b091f60c4d3ba365, reversing
changes made to 62d971129da99936a3cdc04f3740d26f16a0c7a6.
|
It gathers a list of file paths to delete before destroying the parent
object. Then, after the parent object is destroyed, these paths are
scheduled for deletion asynchronously.
CarrierWave needed the associated model to delete an upload's file. To
avoid this requirement, the simple Fog/File layer is used directly for
file deletion, which allows us to work with just a simple list of paths.
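A hedged sketch of the pattern described above; the concern and worker
names are illustrative:
```
module FastDestroyUploads
  extend ActiveSupport::Concern

  included do
    before_destroy do
      # Gather the file paths while the upload records still exist.
      @upload_paths = uploads.pluck(:path)
    end

    after_commit on: :destroy do
      # Schedule asynchronous deletion; only paths are needed, no models.
      DeleteStoredFilesWorker.perform_async(@upload_paths)
    end
  end
end
```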
|
The Correlation ID is taken from the received X-Request-ID header, or
generated if absent. It is then passed to all executed services (Sidekiq
workers and Gitaly calls).
The Correlation ID is logged in all structured logs as `correlation_id`.
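A hedged sketch of the Rack middleware side of this; the class name and
storage mechanism are illustrative:
```
require 'securerandom'

class CorrelationIdMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    # Take the Correlation ID from X-Request-ID, or generate one.
    correlation_id = env['HTTP_X_REQUEST_ID'] || SecureRandom.uuid

    # Expose it so structured loggers and outbound Sidekiq/Gitaly calls
    # can attach it as `correlation_id`.
    Thread.current[:correlation_id] = correlation_id

    @app.call(env)
  end
end
```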
|
Expose merge request pipeline variables
See merge request gitlab-org/gitlab-ce!23398
|
permissions but permissions are not overridden"
|
Introduce the following variables (a sketch of how they might be
exposed follows the list):
- CI_MERGE_REQUEST_ID
- CI_MERGE_REQUEST_IID
- CI_MERGE_REQUEST_REF_PATH
- CI_MERGE_REQUEST_PROJECT_ID
- CI_MERGE_REQUEST_PROJECT_PATH
- CI_MERGE_REQUEST_PROJECT_URL
- CI_MERGE_REQUEST_TARGET_BRANCH_NAME
- CI_MERGE_REQUEST_SOURCE_PROJECT_ID
- CI_MERGE_REQUEST_SOURCE_PROJECT_PATH
- CI_MERGE_REQUEST_SOURCE_PROJECT_URL
- CI_MERGE_REQUEST_SOURCE_BRANCH_NAME
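A hedged sketch; apart from the variable names above, the method and
class names are illustrative:
```
def predefined_variables
  Gitlab::Ci::Variables::Collection.new.tap do |variables|
    variables.append(key: 'CI_MERGE_REQUEST_ID', value: id.to_s)
    variables.append(key: 'CI_MERGE_REQUEST_IID', value: iid.to_s)
    variables.append(key: 'CI_MERGE_REQUEST_REF_PATH', value: ref_path)
    variables.append(key: 'CI_MERGE_REQUEST_TARGET_BRANCH_NAME', value: target_branch)
    # ...and so on for the remaining CI_MERGE_REQUEST_* variables.
  end
end
```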
|
Encrypt CI/CD builds tokens
Closes #52342
See merge request gitlab-org/gitlab-ce!23436
|
Determined by running the script:
```
included = `git grep --name-only ShellAdapter`.chomp.split("\n")
used = `git grep --name-only gitlab_shell`.chomp.split("\n")
included - used
```
|
Use group clusters when deploying (DeploymentPlatform)
See merge request gitlab-org/gitlab-ce!22308
|
Improve help and validation sections of maximum build timeout inputs
Closes #49434
See merge request gitlab-org/gitlab-ce!23586
|
Merge request pipelines
See merge request gitlab-org/gitlab-ce!23217
|
For CE, #lfs_http_url_to_repo calls #http_url_to_repo, whereas for EE
we check for a Geo setup so we can support push to secondary for LFS.
|
This also means we need to apply the `current_scope`, otherwise this
method will return all clusters associated with the groups, regardless
of any scopes applied to this method.
|
With this MR, group clusters are now functional, so they default to
enabled. There is a single setting on the root ancestor group to enable
or disable the group clusters feature as a whole.
|
- Rename ordered_group_clusters_for_project ->
ancestor_clusters_for_clusterable
- Improve the name of the order option: it makes much more sense to have
`hierarchy_order: :asc` and `hierarchy_order: :desc`
- Allow ancestor_clusters_for_clusterable for groups
- Re-use code already present in Project
|
AFAIK the only relevant place is Projects::CreateService; this gets
called when a user creates a new project, forks a project, or does
those things via the API.
Also create a k8s namespace for the new group hierarchy when
transferring a project between groups, using the new Refresh service to
create k8s namespaces.
- Ensure we use Cluster#cluster_project: if a project has multiple
clusters (EE), using Project#cluster_project is not guaranteed to
return the cluster_project for this cluster, so switch to using
Cluster#cluster_project. At this stage a cluster can only have one
cluster_project.
Also, remove the rescue so that Sidekiq can retry.
|
For the project level, it's the directly associated project. For the
group level, it's the projects under that group.
|
This returns a union of the project-level clusters and the group-level
clusters associated with this project.
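A hedged sketch of that union; `from_union` is GitLab's UNION helper,
and the rest of the names here are illustrative:
```
def all_clusters
  group_ids = group ? group.self_and_ancestors.select(:id) : []

  group_clusters = Clusters::Cluster
    .joins(:groups)
    .where(cluster_groups: { group_id: group_ids })

  Clusters::Cluster.from_union([clusters, group_clusters])
end
```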
|