git.kernel.org/pub/scm/git/git.git
author    Matt Cooper <vtbassmatt@gmail.com>    2021-11-02 18:46:08 +0300
committer Junio C Hamano <gitster@pobox.com>    2021-11-03 21:22:27 +0300
commit    e9aa762cc72e6cf8fd76fefe5ca2b5064be1a821
tree      5ac40538a76de8671af8b1dcc77822a22c1f99d6
parent    b79541af7a1a7b7f2438f43196b1774d7b71e852
odb: teach read_blob_entry to use size_t
There is mixed use of size_t and unsigned long to deal with sizes in the
codebase. Recall that Windows defines unsigned long as 32 bits even on
64-bit platforms, meaning that converting size_t to unsigned long narrows
the range. This mostly doesn't cause a problem since Git rarely deals
with files larger than 2^32 bytes.

But adjunct systems such as Git LFS, which use smudge/clean filters to
keep huge files out of the repository, may have huge file contents passed
through some of the functions in entry.c and convert.c. On Windows, this
results in a truncated file being written to the workdir. I traced this to
one specific use of unsigned long in write_entry (and a similar instance
in write_pc_item_to_fd for parallel checkout). That appeared to be for
the call to read_blob_entry, which expects a pointer to unsigned long.

By altering the signature of read_blob_entry to expect a size_t,
write_entry can be switched to use size_t internally (which all of its
callers and most of its callees already used). To avoid touching dozens of
additional files, read_blob_entry uses a local unsigned long to call a
chain of functions which aren't prepared to accept size_t.

Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Matt Cooper <vtbassmatt@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
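As a rough illustration of the narrowing described above (not part of the
patch; the print_truncation helper is made up for this sketch), the
following C snippet shows how a size_t value larger than 2^32 loses its
high bits when funneled through unsigned long on an LLP64 platform such
as 64-bit Windows:

    #include <stdio.h>
    #include <stddef.h>

    /* Demonstrates the silent narrowing from size_t to unsigned long. */
    static void print_truncation(size_t real_size)
    {
            /* On LLP64, unsigned long is 32 bits: high bits are dropped. */
            unsigned long narrowed = (unsigned long)real_size;

            printf("size_t value:        %llu\n", (unsigned long long)real_size);
            printf("after unsigned long: %llu\n", (unsigned long long)narrowed);
    }

    int main(void)
    {
            /* A 5 GiB blob, as a Git LFS smudge filter might produce. */
            print_truncation((size_t)5 * 1024 * 1024 * 1024);
            return 0;
    }

On LP64 systems (Linux, macOS) unsigned long is 64 bits and both printed
values match; on 64-bit Windows the second value is reduced modulo 2^32
(5 GiB becomes 1 GiB), which is why the truncated-checkout bug only shows
up there.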
-rw-r--r--    entry.c                          8
-rw-r--r--    entry.h                          2
-rw-r--r--    parallel-checkout.c              2
-rwxr-xr-x    t/t1051-large-conversion.sh      2
4 files changed, 8 insertions, 6 deletions
diff --git a/entry.c b/entry.c
index 711ee0693c..4cb3942dbd 100644
--- a/entry.c
+++ b/entry.c
@@ -82,11 +82,13 @@ static int create_file(const char *path, unsigned int mode)
return open(path, O_WRONLY | O_CREAT | O_EXCL, mode);
}
-void *read_blob_entry(const struct cache_entry *ce, unsigned long *size)
+void *read_blob_entry(const struct cache_entry *ce, size_t *size)
{
enum object_type type;
- void *blob_data = read_object_file(&ce->oid, &type, size);
+ unsigned long ul;
+ void *blob_data = read_object_file(&ce->oid, &type, &ul);
+ *size = ul;
if (blob_data) {
if (type == OBJ_BLOB)
return blob_data;
@@ -270,7 +272,7 @@ static int write_entry(struct cache_entry *ce, char *path, struct conv_attrs *ca
int fd, ret, fstat_done = 0;
char *new_blob;
struct strbuf buf = STRBUF_INIT;
- unsigned long size;
+ size_t size;
ssize_t wrote;
size_t newsize = 0;
struct stat st;
diff --git a/entry.h b/entry.h
index b8c0e170dc..61ee8c1760 100644
--- a/entry.h
+++ b/entry.h
@@ -51,7 +51,7 @@ int finish_delayed_checkout(struct checkout *state, int *nr_checkouts);
*/
void unlink_entry(const struct cache_entry *ce);
-void *read_blob_entry(const struct cache_entry *ce, unsigned long *size);
+void *read_blob_entry(const struct cache_entry *ce, size_t *size);
int fstat_checkout_output(int fd, const struct checkout *state, struct stat *st);
void update_ce_after_write(const struct checkout *state, struct cache_entry *ce,
struct stat *st);
diff --git a/parallel-checkout.c b/parallel-checkout.c
index 6b1af32bb3..b6f4a25642 100644
--- a/parallel-checkout.c
+++ b/parallel-checkout.c
@@ -261,7 +261,7 @@ static int write_pc_item_to_fd(struct parallel_checkout_item *pc_item, int fd,
struct stream_filter *filter;
struct strbuf buf = STRBUF_INIT;
char *blob;
- unsigned long size;
+ size_t size;
ssize_t wrote;
/* Sanity check */
diff --git a/t/t1051-large-conversion.sh b/t/t1051-large-conversion.sh
index e7f9f0bdc5..e6d52f98b1 100755
--- a/t/t1051-large-conversion.sh
+++ b/t/t1051-large-conversion.sh
@@ -85,7 +85,7 @@ test_expect_success 'ident converts on output' '
# This smudge filter prepends 5GB of zeros to the file it checks out. This
# ensures that smudging doesn't mangle large files on 64-bit Windows.
-test_expect_failure EXPENSIVE,SIZE_T_IS_64BIT,!LONG_IS_64BIT \
+test_expect_success EXPENSIVE,SIZE_T_IS_64BIT,!LONG_IS_64BIT \
'files over 4GB convert on output' '
test_commit test small "a small file" &&
small_size=$(test_file_size small) &&