author | Campbell Barton <ideasman42@gmail.com> | 2021-07-20 15:52:31 +0300
committer | Campbell Barton <ideasman42@gmail.com> | 2021-07-20 15:58:14 +0300
commit | c3a400b73fbf46f1b8cfa1a5735e2a96254974b2 (patch)
tree | 71d54f9f4cd25eed51a3ff20a96b29ece7979608 /source/blender/blenlib/intern/array_store.c
parent | 48a45c43e4902e1dd431bf1472e7b1dfb2ec64e3 (diff)
Cleanup: use single back-tick quoting in comments
While Doxygen supports both single and double back-tick quoting, conform to our style guide.
Note that single back-ticks are already used in the majority of comments.
Diffstat (limited to 'source/blender/blenlib/intern/array_store.c')
-rw-r--r-- | source/blender/blenlib/intern/array_store.c | 6
1 file changed, 3 insertions(+), 3 deletions(-)
```diff
diff --git a/source/blender/blenlib/intern/array_store.c b/source/blender/blenlib/intern/array_store.c
index e1a7ee98ce5..5ad57a7bec8 100644
--- a/source/blender/blenlib/intern/array_store.c
+++ b/source/blender/blenlib/intern/array_store.c
@@ -1403,16 +1403,16 @@ static BChunkList *bchunk_list_from_data_merge(const BArrayInfo *info,
  * Create a new array store, which can store any number of arrays
  * as long as their stride matches.
  *
- * \param stride: ``sizeof()`` each element,
+ * \param stride: `sizeof()` each element,
  *
- * \note while a stride of ``1`` will always work,
+ * \note while a stride of `1` will always work,
  * its less efficient since duplicate chunks of memory will be searched
  * at positions unaligned with the array data.
  *
  * \param chunk_count: Number of elements to split each chunk into.
  * - A small value increases the ability to de-duplicate chunks,
  *   but adds overhead by increasing the number of chunks to look up when searching for duplicates,
- *   as well as some overhead constructing the original array again, with more calls to ``memcpy``.
+ *   as well as some overhead constructing the original array again, with more calls to `memcpy`.
  * - Larger values reduce the *book keeping* overhead,
  *   but increase the chance a small,
  *   isolated change will cause a larger amount of data to be duplicated.
```
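For context, the `stride` and `chunk_count` documented in the touched comment are the two arguments to the store's constructor. Below is a minimal sketch of how they are used through the public wrappers declared in `BLI_array_store.h` (`BLI_array_store_create`, `BLI_array_store_state_add`, `BLI_array_store_state_data_get`, `BLI_array_store_state_remove`, `BLI_array_store_destroy`); exact signatures may differ between Blender versions, so treat this as illustrative rather than definitive.

```c
#include "BLI_array_store.h"

static void array_store_example(void)
{
  /* Each element is a float triplet; chunks are split into 32 elements,
   * so each chunk spans 32 * stride bytes. */
  const unsigned int stride = sizeof(float[3]);
  BArrayStore *bs = BLI_array_store_create(stride, 32);

  float data[128][3] = {{0.0f}};

  /* Store one state. `data_len` is in bytes and must be a multiple of
   * `stride`. Passing NULL means there is no reference state to share
   * chunks with. */
  BArrayState *state_a = BLI_array_store_state_add(bs, data, sizeof(data), NULL);

  /* A small, isolated change: with a reference state, unchanged chunks
   * are de-duplicated and only the modified chunk is stored again. */
  data[0][0] = 1.0f;
  BArrayState *state_b = BLI_array_store_state_add(bs, data, sizeof(data), state_a);

  /* Expand a stored state back into a plain array (more `memcpy` calls
   * for smaller chunk sizes, as the comment above notes). */
  float copy[128][3];
  BLI_array_store_state_data_get(state_b, copy);

  BLI_array_store_state_remove(bs, state_b);
  BLI_array_store_state_remove(bs, state_a);
  BLI_array_store_destroy(bs);
}
```

The trade-off from the comment is visible here: a smaller `chunk_count` would let `state_b` share more of `state_a`'s data after the one-element edit, at the cost of more chunk lookups and copies when expanding a state.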