
github.com/littlefs-project/littlefs.git
path: root/lfs.h
2018-10-18  Collapsed recursive deorphans into a single pass  (Christopher Haster)

Because a block can go bad at any time, if we're unlucky, we may end up generating multiple orphans in a single metadata write. This is exacerbated by the early eviction in dynamic wear-leveling.

We can't track _all_ orphans, because that would require unbounded storage and significantly complicate things, but there are a handful of intentional orphans we do track because they are easy to resolve without the O(n^2) deorphan scan. These occur any time we intentionally remove a metadata-pair.

Initially we cleaned up orphans as they occurred with whatever knowledge we did have, and just accepted the extra O(n^2) deorphan scans in the unlucky case. However we can do a bit better by being lazy and leaving deorphaning up to the next metadata write. This needs to work with the known orphans while still setting the orphan flag on disk correctly. To accomplish this we replace the internal flag with a small counter. Note, this means that our internal representation of orphans differs from what's on disk. This is annoying but not the end of the world.
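A rough sketch of the counter-versus-flag split described above, with hypothetical names; littlefs's actual bookkeeping lives in its internal structs and differs in detail:

    #include <stdbool.h>
    #include <stdint.h>

    // internal bookkeeping: how many known, intentional orphans exist
    static uint8_t known_orphans = 0;

    static void orphan_created(void)  { known_orphans += 1; }  // a metadata-pair was intentionally removed
    static void orphan_resolved(void) { known_orphans -= 1; }  // fixed lazily on a later metadata write

    // on disk only a single bit is committed: "are any orphans present?"
    static bool orphan_flag_for_disk(void) {
        return known_orphans > 0;
    }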
2018-10-18  Dropped lfs_fs_getattr for the more implicit lfs_getattr("/")  (Christopher Haster)

This was a pretty simple oversight on my part. Conceptually, there's no difference between lfs_fs_getattr and lfs_getattr("/"). Any operations on directories can be applied "globally" by referring to the root directory.

Implementation-wise, this actually fixes the "corner case" of storing attributes on the root directory, which was broken since the root directory doesn't have a related entry. Instead we need to use the root superblock for this purpose.

Fewer functions mean less code to document and maintain, so this is a nice benefit. Now we just have a single lfs_getattr/setattr/removeattr set of functions along with the ability to access attributes atomically in lfs_file_opencfg.
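A minimal usage sketch: with lfs_fs_getattr dropped, a filesystem-wide attribute is read by addressing the root directory. The lfs_getattr signature follows the littlefs v2 headers; the attribute type 0x74 and buffer size here are illustrative choices, not part of the API.

    #include "lfs.h"

    // read a filesystem-wide attribute from the root superblock
    static int read_fs_attr(lfs_t *lfs, void *buffer, lfs_size_t size) {
        lfs_ssize_t res = lfs_getattr(lfs, "/", 0x74, buffer, size);
        if (res < 0) {
            return (int)res;  // e.g. LFS_ERR_NOATTR if the attribute is missing
        }
        return 0;             // res holds the attribute's stored size
    }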
2018-10-18  Added allocation randomization for dynamic wear-leveling  (Christopher Haster)

This implements the second step of full dynamic wear-leveling, block allocation randomization. This is the key part that uniformly distributes wear across the filesystem, even through reboots.

The entropy actually comes from the filesystem itself, by xoring together all of the CRCs in the metadata-pairs on the filesystem. While this sounds like a ridiculous operation, it's easy to do when we already scan the metadata-pairs at mount time. This gives us a random number we can use for block allocation. Unfortunately it's not a great general purpose random generator, as the output only changes every filesystem write. Fortunately that's exactly when we need our allocator.

Additionally, the randomization created a mess for the testing framework. Fortunately, this method of randomization is deterministic, a very useful property for reproducing bugs.
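A hedged sketch of the idea (not littlefs's internal code): while scanning metadata-pairs at mount time, xor each pair's CRC into a seed that later offsets where the allocator starts looking.

    #include <stdint.h>

    static uint32_t alloc_seed = 0;

    // called once per metadata-pair while mounting
    static void seed_from_crc(uint32_t metadata_crc) {
        alloc_seed ^= metadata_crc;
    }

    // randomize where allocation scanning begins on this boot
    static uint32_t alloc_start_block(uint32_t block_count) {
        return alloc_seed % block_count;
    }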
2018-10-18  Added building blocks for dynamic wear-leveling  (Christopher Haster)

Initially, littlefs relied entirely on bad-block detection for wear-leveling. Conceptually, at the end of a device's lifespan, all blocks would be worn evenly, even if they weren't worn out at the same time. However, this doesn't work for all devices; rather than causing corruption during writes, wear reduces a device's "sticking power", causing bits to flip over time. This means for many devices, true wear-leveling (dynamic or static) is required.

Fortunately, way back at the beginning, littlefs was designed to do full dynamic wear-leveling, only dropping it when making the retrospectively short-sighted realization that bad-block detection is theoretically sufficient. We can enable dynamic wear-leveling with only a few tweaks to littlefs, and these can be implemented without breaking backwards compatibility.

1. Evict metadata-pairs after a certain number of writes. Eviction in this case is identical to a relocation to recover from a bad block. We move our data and stick the old block back into our pool of blocks. For knowing when to evict, we already have a revision count for each metadata-pair which gives us enough information. We add the configuration option block_cycles and evict when our revision count is a multiple of this value (see the sketch below).

2. Now all blocks participate in COW behaviour. However we don't store the state of our allocator, so every boot cycle we reuse the first blocks on storage. This is very bad on a microcontroller, where we may reboot often. We need a way to spread our usage across the disk. To pull this off, we can simply randomize which block we start our allocator at. But we need a random number generator that is different on each boot. Fortunately we have a great source of entropy: our filesystem. So we seed our block allocator with a simple hash of the CRCs on our metadata-pairs. This can be done for free since we already need to scan the metadata-pairs during mount.

What we end up with is a uniform distribution of wear on storage. The wear is not perfect; if a block is used for metadata it gets more wear, and the randomization may not be exact. But we can never actually get perfect wear-leveling, since we're already resigned to dynamic wear-leveling at the file level.

With the addition of metadata logging, we end up with a really interesting two-stage wear-leveling algorithm. At the low level, metadata is statically wear-leveled. At the high level, blocks are dynamically wear-leveled.

This specific commit implements the first step, eviction of metadata-pairs. Intertwining this into the already complicated compact logic was a bit annoying, however we can combine the logic for superblock expansion with the logic for metadata-pair eviction.
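A hedged sketch of point 1 above (hypothetical helper name): a metadata-pair is evicted, exactly like a bad-block relocation, when its revision count reaches a multiple of the block_cycles configuration.

    #include <stdbool.h>
    #include <stdint.h>

    static bool should_evict(uint32_t revision_count, uint32_t block_cycles) {
        // block_cycles == 0 (or -1 in later littlefs versions) disables
        // wear-level-driven eviction entirely
        return block_cycles && (revision_count % block_cycles) == 0;
    }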
2018-10-17  Modified CTZ struct type to make space for erased files in the future  (Christopher Haster)

In v1, littlefs didn't trust blocks that had been previously erased and conservatively erased any blocks before writing to them. This was a part of the design since the beginning because of the complexity of managing erased blocks when we can lose power at any time.

However, we theoretically could keep track of files that have been properly erased by marking them with an "erased bit". A file marked this way could be opened and appended to without needing to COW the last block. The requirement would be that the "erased bit" is cleared during a write, since a power-loss would require that littlefs no longer trust the erased state of the file.

This commit just shuffles the struct types around to make space for an "erased bit" in the struct type field to be added in the future. This ordering also makes more sense, since there will likely be more file representations than directory representations on disk.
2018-10-17  Added root entry and expanding superblocks  (Christopher Haster)

Expanding superblocks has been on my wishlist for a while. The basic idea is that instead of maintaining a fixed offset from blocks {0, 1} to the root directory (1 pointer), we maintain a dynamically sized linked-list of superblocks that point to the actual root. If the number of writes to the root exceeds some value, we increase the size of the superblock linked-list.

This can leverage existing metadata-pair operations. The revision count for metadata-pairs provides some knowledge of how much wear we've put on the superblock, and the threaded linked-list can also be reused for this purpose. This means superblock expansion is both optional and cheap to implement.

Expanding superblocks helps both extremely small and extremely large filesystems (extreme being relative, of course). On the small end, we can actually collapse the superblock into the root directory and drop the hard requirement of 4 blocks for the superblock. On the large end, our superblock will now last longer than the rest of the filesystem. Each time we expand, the number of cycles until the superblock dies is increased by a power.

Before we were stuck with this layout:

level  cycles  limit    layout
1      E^2     390 MiB  s0 -> root

Now we expand every time a fixed offset is exceeded:

level  cycles  limit    layout
0      E       4 KiB    s0+root
1      E^2     390 MiB  s0 -> root
2      E^3     37 TiB   s0 -> s1 -> root
3      E^4     3.6 EiB  s0 -> s1 -> s2 -> root
...

Here the cycles are the number of cycles before death, and the limit is the worst-case size of a filesystem where early superblock death becomes a concern (all writes going to the root, using this formula: E^|s| = E*B, E = erase cycles = 100000, B = block count, assuming 4096-byte blocks).

Note we can also store copies of the superblock entry on the expanded superblocks. This may help filesystem recovery tools in the future.
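For concreteness, the limits in the table follow from solving E^(level+1) >= E*B for the block count B, i.e. B = E^level. A small sketch under the stated assumptions (E = 100000 erase cycles, 4096-byte blocks) reproduces the 4 KiB / 390 MiB / 37 TiB / 3.6 EiB figures in bytes:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double E = 100000;          // erase cycles per block
        double block_size = 4096;   // bytes per block
        for (int level = 0; level <= 3; level++) {
            // superblock death at this level takes E^(level+1) root updates;
            // it stops being the limiting factor once the filesystem has no
            // more than B = E^level blocks
            double blocks = pow(E, level);
            printf("level %d: cycles E^%d, limit %g blocks (%g bytes)\n",
                   level, level + 1, blocks, blocks * block_size);
        }
        return 0;
    }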
2018-10-16  Merge remote-tracking branch 'origin/master' into v2-alpha  (Christopher Haster)
2018-10-16  Changed name of upper-limits from blah_size to blah_max  (Christopher Haster)
This standardizes the naming between the LFS_BLAH_MAX macros and the blah_max configuration in the lfs_config structure.
2018-10-16  Changed LFS_ERR_CORRUPT to match EILSEQ instead of EBADE  (Christopher Haster)

LFS_ERR_CORRUPT is unfortunately not a well-defined error code. It's very important in the context of littlefs, but missing from the standard error codes defined in Linux. After some discussions with other developers, I was encouraged to use the encoding for EILSEQ over EBADE for representing on-disk corruption, as EILSEQ implies that there is something wrong with the data. I've changed this now to take advantage of the breaking changes in v2 to avoid a risky change to a return value.
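For concreteness, the numeric mapping this implies under Linux's asm-generic errno numbering (EBADE is 52, EILSEQ is 84, and littlefs encodes errors as negative values). The _V1/_V2 names below are illustrative only; lfs.h defines a single LFS_ERR_CORRUPT.

    enum {
        LFS_ERR_CORRUPT_V1 = -52,  // matched EBADE ("invalid exchange")
        LFS_ERR_CORRUPT_V2 = -84,  // matches EILSEQ ("illegal byte sequence")
    };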
2018-10-16  Updated custom attribute documentation and tweaked nonexistent attributes  (Christopher Haster)

Because of limitations in how littlefs manages attributes on disk, littlefs views zero-length attributes and missing attributes as the same thing. The simplest implementation of attributes mirrors this behaviour transparently for the user.
2018-10-16  Cleaned up config options  (Christopher Haster)

- Updated documentation where needed
- Added asserts which take into account relationships with the new cache_size configuration
- Restructured ordering to be consistent for the three main configurables: LFS_ATTR_MAX, LFS_NAME_MAX, and LFS_INLINE_MAX
2018-10-16  Introduced cache_size as alternative to hardware read/write sizes  (Christopher Haster)

The introduction of an explicit cache_size configuration allows customization of the cache buffers independently from the hardware read/write sizes.

This has been one of littlefs's main handicaps. Without a distinction between cache units and hardware limitations, littlefs isn't able to read or program _less_ than the cache size. This leads to the counter-intuitive case where larger cache sizes can actually be harmful, since larger read/prog sizes require sending more data over the bus if we're only accessing a small set of data (for example the CTZ skip-list traversal). This is compounded with metadata logging, since a large program size limits the number of commits we can write out in a single metadata block. It really doesn't make sense to link program size + cache size here.

With a separate cache_size configuration, we can be much smarter about what we actually read/write from disk. This also simplifies cache handling a bit. Before there were two possible cache sizes, but these were rarely used.

Note that the cache_size is NOT written to the superblock and can be freely changed without breaking backwards compatibility.
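A hedged configuration sketch; the field names follow the littlefs v2 lfs_config and may differ slightly in this snapshot of the tree. The block device callbacks and remaining geometry are omitted.

    #include "lfs.h"

    const struct lfs_config cfg_cache = {
        // ... block device callbacks and remaining geometry omitted ...
        .read_size  = 16,    // minimum read from the hardware
        .prog_size  = 16,    // minimum program to the hardware
        .block_size = 4096,
        .cache_size = 256,   // RAM per cache buffer: a multiple of the read/prog
                             // sizes and a factor of the block size; not stored
                             // in the superblock, so free to change between mounts
    };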
2018-10-16  Simplified the internal xored-globals implementation  (Christopher Haster)
There wasn't much use (and inconsistent compiler support) for storing small values next to the unaligned lfs_global_t struct. So instead, I've rounded the struct up to the nearest word to try to take advantage of the alignment in xor and memset operations. I've also moved the global fetching into lfs_mount, since that was the only use of the operation. This allows for some variable reuse in the mount function.
2018-10-16  Squished in-flight files/dirs into single list  (Christopher Haster)

This is an effort to try to consolidate the handling of in-flight files and dirs opened by the user (and possibly opened internally). Both files and dirs have metadata state that needs to be kept in sync by the commit logic.

This metadata state is mostly contained in the lfs_mdir_t type, which is present in both the lfs_file_t and lfs_dir_t. Unfortunately both of these structs have some relatively unrelated metadata that needs to be kept in sync:

- Files store an id representing the open file
- Dirs store an id during iteration

While these take up the same space, they unfortunately need to be managed differently by the commit logic. The best solution I can come up with is to simply store a general purpose list and tag both structures with LFS_TYPE_REG and LFS_TYPE_DIR respectively. This is kinda funky, but wins out over duplicating the commit logic.
2018-10-16  Cleaned up several TODOs  (Christopher Haster)

Other than removing outdated TODOs, there are several tweaks:

- Standardized naming of fs-level functions (mostly internal names)
- Tweaked low-level use of subtype to hopefully take advantage of redundant code removal
- Moved root-handling into lfs_dir_getinfo
- Updated DEBUG statements around move/orphan fixes
- Removed trailing 1s in type fields
- Removed unused code
2018-10-16  Added orphan bit to xored-globals  (Christopher Haster)

Unfortunately for us, even with the new ability to store global state, orphans can not be handled as gracefully as moves. This is due to the fact that directory operations can create an unbounded number of orphans. It's usually small, but the fact that it's unbounded means we can't store the orphan info in xored-globals.

However, one thing we can do to leverage the xored-global state is store a bit indicating if _any_ orphans are present. This means in the common case we can completely avoid the deorphan step, while only using a single bit of the global state, which is effectively free since we can store it in the globals tag itself.

If a littlefs drive does not want to consider the orphan bit, it's free to use the previous behaviour of always checking for orphans on first write.
2018-10-16  Cleaned up config usage in file logic  (Christopher Haster)

The main change here was to drop the in-place twiddling of custom attributes to match the internal attribute structures. The original thought was that this could allow the compiler to garbage collect more of the custom attribute logic when not used, but since this occurs in the common lfs_file_opencfg function, gc can't really happen.

Not twiddling the user's structure is the polite thing to do, opens up the ability to store the lfs_attr structure in ROM, and avoids surprising the user if they attempt to use the structure for their own purposes. This means we can make the lfs_attr structure const and rely on the list in the lfs_file_config structure, similar to how we rely on the global lfs_config structure.

Some other tweaks:

- Dropped the global file_buffer, replaced entirely by per-file buffers
- Updated LFS_INLINE_MAX and LFS_ATTR_MAX to correct values
- Added workaround for compiler bug related to zero initializer: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53119
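A hedged usage sketch of the lfs_file_config attribute list: attributes attached at open time are read on open and committed atomically on sync/close. Struct and function names follow the littlefs v2 headers and may differ slightly in this snapshot; the attribute type 0x74 is an illustrative choice.

    #include "lfs.h"

    static uint32_t timestamp;  // backing storage for attribute 0x74

    static struct lfs_attr file_attrs[] = {
        {.type = 0x74, .buffer = &timestamp, .size = sizeof(timestamp)},
    };

    static const struct lfs_file_config file_cfg = {
        .attrs = file_attrs,
        .attr_count = 1,
    };

    static int open_with_attrs(lfs_t *lfs, lfs_file_t *file, const char *path) {
        return lfs_file_opencfg(lfs, file, path,
                LFS_O_RDWR | LFS_O_CREAT, &file_cfg);
    }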
2018-10-16  Removed the implicit lfs_t parameter to lfs_traverse  (Christopher Haster)

This is a very minor thing but it has been bugging me. On one hand, all a callback ever needs is a single pointer for context. On the other hand, you could make the argument that in the context of littlefs, the lfs_t struct represents global state and should always be available to callbacks passed to littlefs.

In the end I'm sticking with only a single context pointer, since this satisfies the minimum requirements and has the highest chance of function reuse. If a user needs access to the lfs_t struct, it can be passed by reference in the context provided to the callback.

This also matches callbacks used in other languages with more emphasis on objects and classes. Usually the callback doesn't get a reference to the caller.
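A minimal sketch of the pattern described above: the lfs_t is threaded through the single context pointer rather than being passed implicitly. The traversal function is spelled lfs_fs_traverse as in current littlefs; at this point in the tree it was still lfs_traverse.

    #include "lfs.h"

    struct traverse_ctx {
        lfs_t *lfs;          // pass the filesystem along if the callback needs it
        lfs_size_t count;    // blocks seen so far
    };

    static int check_block(void *p, lfs_block_t block) {
        struct traverse_ctx *ctx = p;
        // the lfs_t is available because the caller put it in the context
        if (block >= ctx->lfs->cfg->block_count) {
            return LFS_ERR_CORRUPT;  // block address out of range
        }
        ctx->count += 1;
        return 0;
    }

    static lfs_ssize_t count_valid_blocks(lfs_t *lfs) {
        struct traverse_ctx ctx = {.lfs = lfs, .count = 0};
        int err = lfs_fs_traverse(lfs, check_block, &ctx);
        if (err) {
            return err;
        }
        return (lfs_ssize_t)ctx.count;
    }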
2018-10-16  Fixed the orphan test to handle logging metadata-pairs  (Christopher Haster)
The main issue here was that the old orphan test relied on deleting the block that contained the most recent update. In the new design this doesn't really work since updates get appended to metadata-pairs incrementally. This is fixed by instead using the truncate command on the appropriate block. We're now passing orphan tests.
2018-10-16  Added support for custom attributes leveraging the new metadata logging  (Christopher Haster)

Now that littlefs has been rebuilt almost from the ground up with the intention to support custom attributes, adding in custom attribute support is relatively easy. The highest bit in the 9-bit type structure indicates that an attribute is a user-specified custom attribute. The user then has a full 8 bits to specify the attribute type. Other than that, custom attributes are treated the same as system-level attributes.

Also made some tweaks to custom attributes:

- Adopted the opencfg for file-level attributes provided by dpgeorge
- Changed setattrs/getattrs to the simpler setattr/getattr functions users will probably be more familiar with. Note that multiple attributes can still be committed atomically with files, though not with directories.
- Changed LFS_ATTRS_MAX -> LFS_ATTR_MAX since there's no longer a global limit on the sum of attribute sizes, which was rather confusing. Though they are still limited by what can fit in a metadata-pair.
2018-10-16  Refactored the updates of in-flight files/dirs  (Christopher Haster)
Updated to account for changes as a result of commits/compacts. And changed instances of iteration over both files and dirs to use a single nested loop. This does rely implicitly on the structure layout of dirs/files and their location in lfs_t, which isn't great. But it gets the job done with less code duplication.
2018-10-16  Cleaned up commit logic and function organization  (Christopher Haster)

Restructured function organization to make a bit more sense, and made some small refactoring tweaks, specifically around the commit logic and global related functions.
2018-10-16  Cleaned up attributes and related logic  (Christopher Haster)

The biggest change here is to make littlefs less obsessed with the lfs_mattr_t struct. It was limiting our flexibility and can be entirely replaced by passing the tag + data explicitly. The remaining use of lfs_mattr_t is specific to the commit logic, where it replaces the lfs_mattrlist_t struct.

Other changes:

- Added global lfs_diskoff struct for embedding disk references inside the lfs_mattr_t
- Reordered lfs_mattrlist_t to squeeze out some code savings
- Added commit_get for explicit access to entries from unfinished metadata-pairs
- Parameterized the "stop_at_commit" flag instead of hackily storing it in the lfs_mdir_t temporarily
- Changed return value of lfs_pred to error-only with ENOENT representing a missing predecessor
- Adopted const where possible
2018-10-16  Changed internal functions to return tags over pointers  (Christopher Haster)

One neat (if gimmicky) trick is that each tag has a valid bit in the highest bit position of the 32-bit word. This is used to determine when to stop a fetch operation, but after fetch, the bit is free to use in the driver. This means we can create a typed union of sorts with error codes and tags, returning both as the return value from a function.

Say what you will about this trick, it does have a significant impact on code size. I suspect this is primarily due to the compiler having a hard time optimizing around pointer access.
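A hedged sketch of the trick with a hypothetical lookup function: because the valid bit sits in the sign bit of a 32-bit word, a signed tag type can carry both littlefs error codes (negative) and in-RAM tags (non-negative) in one return value. The typedefs mirror the internal ones in the implementation.

    #include <stdint.h>

    typedef uint32_t lfs_tag_t;
    typedef int32_t  lfs_stag_t;   // signed tag: negative values are errors

    // stand-in for an internal fetch; a real one returns a tag or a negative
    // littlefs error code
    static lfs_stag_t hypothetical_lookup(void) {
        return 0x12345;
    }

    static int use_lookup(void) {
        lfs_stag_t res = hypothetical_lookup();
        if (res < 0) {
            return (int)res;             // propagate the error code
        }
        lfs_tag_t tag = (lfs_tag_t)res;  // otherwise it's a usable tag
        (void)tag;
        return 0;
    }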
2018-10-16  Renamed tag functions and macros  (Christopher Haster)

- lfs_tagverb -> lfs_tag_verb
- lfs_mktag -> LFS_MKTAG (it's a macro now)
- LFS_STRUCT_THING -> LFS_THINGSTRUCT
2018-10-16  Dropped "has id" bit encoding in favor of invalid id  (Christopher Haster)

I've been trying to keep tag types organized with an encoding that hints if a tag uses its id field for file ids. However this seems to have been a mistake. Using a null id of 0x3ff greatly simplified quite a bit of the logic around managing file related tags.

The downside is one less id we can use, but if we look at the encoding cost, donating one full bit costs us 2^9 id permutations vs 1 id permutation. So even if we had a perfect encoding it's in our favor to use a null id. The cost of null ids is code size, but with the complexity around figuring out if a type uses its id or not, it just works out better to use a null id.
2018-10-16  Restructured types to use a more flexible bit encoding  (Christopher Haster)

Recall that the 32-bit tag structure contains a 9-bit type. The type structure then decomposes into a bit more information:

[--- 9 ---]
[1|- 4 -|- 4 -]
 ^   ^     ^- specific type
 |   \------- subtype
 \----------- user bit

The main change is an observation from moving the type info from the struct tag to the name tag. Since we don't need the type info in the struct tag, we can significantly simplify the type structure.
2018-10-16  Changed type info to be retrieved from name tag instead of struct tag  (Christopher Haster)

Originally, I had type info encoded in the struct tag. This initially made sense because the type info only directly impacts the struct tag. However this was a case of focusing too much on the details instead of the bigger picture. More file operations need to figure out the type of a file, but it's only actually a small number of file operations that need to interact with the file's structure. For the common case, providing the type of the file early shortens operations by a full tag access.

Additionally, by storing the type in the file name tag, this opens up the struct tag to use those bits for storing more struct descriptions.
2018-10-16  Removed old move logic, now passing move tests  (Christopher Haster)
The introduction of xored-globals required quite a bit of work to integrate. But now that that is working, we can strip out the old move logic. It's worth noting that the xored-globals integration with commits is relatively complex and subtle.
2018-10-15  Fixed bug where globals were poisoning move commits  (Christopher Haster)

The issue lies in the reuse of the id field for globals. Before globals, the only tags with a non-null (0x3ff) id field were names, structs, and other file-specific metadata. But globals are also using this field for the indirect delete, since otherwise the globals structure would be very unaligned (74 bits long).

To make matters worse, the id field for globals contains the delta used to reconstruct the globals at mount time. This means the id field could take on very absurd values and break the dir fetch logic if we're not careful.

The solution is to use the scope portion of the type field where necessary, although unfortunately this does add some code cost.
2018-10-14  Switched back to simple deorphan-step on directory remove  (Christopher Haster)

Originally I tried to reuse the indirect delete to accomplish truly atomic directory removes, however this fell apart when it came to implementing directory removes as a side-effect of renames.

A single indirect-delete simply can't handle renames with removes as a side effect. When copying an entry to its destination, we need to atomically delete both the old entry and the source of our copy. We can't delete both with only a single indirect-delete. It is possible to accomplish this with two indirect-deletes, but this is such an uncommon case that it's really not worth supporting efficiently due to how expensive globals are.

I also dropped indirect-deletes for normal directory removes. I may add them back later, but at the moment it's extra code cost for a path that's not traveled very often.

As a result, I restructured the indirect delete handling to be a bit more generic, now with a multipurpose lfs_globals_t struct instead of the delete-specific lfs_entry_t struct. Also worked on integrating xored-globals, now with several primitive global operations to manage fetching/updating globals on disk.
2018-10-14  Restructured tags to better support xored-globals  (Christopher Haster)

32-bit tag structure:

[--- 32 ---]
[1|- 9 -|- 10 -|-- 12 --]
 ^   ^     ^        ^- entry length
 |   |     \--------- file id
 |   \--------------- tag type
 \------------------- valid

In this tag, the type decomposes into some more information:

[--- 9 ---]
[1|- 2 -|- 3 -|- 3 -]
 ^   ^     ^     ^- struct
 |   |     \------- type
 |   \------------- scope
 \----------------- user

The change in this encoding is the addition of a global scope:

LFS_SCOPE_STRUCT = 0 00 xxx xxx
LFS_SCOPE_ENTRY  = 0 01 xxx xxx
LFS_SCOPE_DIR    = 0 10 xxx xxx
LFS_SCOPE_FS     = 0 11 xxx xxx
LFS_SCOPE_USER   = 1 xx xxx xxx
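A hedged sketch of packing a tag under the [1 valid | 9 type | 10 id | 12 length] layout above. The macro name and shifts are illustrative; the real tag macro lives in the implementation and its exact layout has changed over time.

    #include <stdint.h>

    // pack type into bits 22-30, id into bits 12-21, length into bits 0-11
    #define MKTAG(type, id, size) \
        (((uint32_t)(type) << 22) | ((uint32_t)(id) << 12) | (uint32_t)(size))

    // example: a user-scoped type (top bit of the 9-bit type set, 0x100)
    // attached to file id 3 with an 8-byte payload
    // uint32_t tag = MKTAG(0x100 | 0x02, 3, 8);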
2018-10-14  Introduced xored-globals logic to fix fundamental problem with moves  (Christopher Haster)

This was a big roadblock for a while: with the new feature of inlined files, the existing move logic was fundamentally flawed.

To pull off atomic moves between two different metadata-pairs, littlefs uses a simple, if a bit clumsy, trick:

1. Mark the entry as "moving"
2. Copy the entry to the new metadata-pair
3. Delete the old entry

If power is lost before the move operation is completed, we will find the "moving" tag. This means there may or may not be an incomplete move on the filesystem. In this case, we simply search for the moved entry; if we find it, we remove the old entry, otherwise we just remove the "moving" tag.

This worked perfectly, until we introduced inlined files. See, unlike the existing directory and ctz entries, inlined files have no guarantee they are unique. There is nothing we can search for that will allow us to find a moved file unless we assign entries globally-unique ids. (Note that moves are fundamentally rename operations, so searching for names does not make sense.)

Solving this problem required completely restructuring how littlefs handled moves, and pulled out a really old idea that had been left on the cutting room floor back when littlefs was going through many designs: xored-globals.

The problem xored-globals solves is the need to maintain some global state via commits to these distributed, independent metadata-pairs. The idea is that we can use some sort of symmetric operation, such as xor, to introduce deltas of the global state that can be committed atomically along with any other info to these metadata-pairs. To figure out our global state, we xor together the global delta stored in every metadata-pair. This means any commit can update the global state atomically, opening up a whole new set of atomic possibilities.

There are a couple of downsides. These globals may end up with deltas on every single metadata-pair, effectively duplicating the data for each block. Additionally, these globals need to have multiple copies in RAM. This means globals need to be a bounded size and very small, since even small globals will have a large footprint.

On top of xored-globals, it's trivial to fix our move logic. Here we've added an indirect delete tag which allows us to atomically specify a delete of any entry on the filesystem. Our move operation is now:

1. Copy the entry to the new metadata-pair and atomically xor globals to indirectly delete our original entry.
2. Delete the original entry and xor globals to remove the indirect delete.

Extra exciting is that this turns our relatively clumsy move operation into a sexy guaranteed O(1) move operation with no searching necessary (though we do need to xor globals during mount).

Also reintroduced the entry struct, now with a specific purpose: to describe the metadata-pair + id combo needed by indirect deletes to locate an entry.
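A hedged sketch of the xored-globals idea with hypothetical types: the current global state is the xor of the deltas committed to every metadata-pair, so any single commit can update it atomically by including one more delta.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct globals {
        uint8_t data[8];   // must stay small: a copy lives in every pair
    } globals_t;

    // accumulate one pair's stored delta while scanning at mount time
    static void globals_xor(globals_t *state, const globals_t *delta) {
        for (size_t i = 0; i < sizeof(state->data); i++) {
            state->data[i] ^= delta->data[i];
        }
    }

    // to change the global state from 'prev' to 'next', commit this delta
    // along with whatever else is in the commit
    static void globals_delta(globals_t *delta,
                              const globals_t *prev, const globals_t *next) {
        for (size_t i = 0; i < sizeof(delta->data); i++) {
            delta->data[i] = prev->data[i] ^ next->data[i];
        }
    }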
2018-10-14  Added the internal meta-directory structure  (Christopher Haster)

Similarly to the internal "meta-attributes", I was finding quite a bit of need for an internal structure that mirrors the user-facing directory structure, for when I need to do an operation on a metadata-pair but don't need all of the state associated with a fully iterable directory chain.

lfs_mdir_t - meta-directory, describes a single metadata-pair
lfs_dir_t  - directory, describes an iterable directory chain

While it may seem complex to have all these structures lying around, they only complicate the code at compile time. To the machine, any number of nested structures all looks the same.
2018-10-14  Renamed lfs_entry_t -> lfs_mattr_t  (Christopher Haster)
Attributes are used to describe more than just entries, so calling these list of attributes "entries" was inaccurate. However, the name "attributes" would conflict with "user attributes", user-facing attributes with a very similar purpose. "user attributes" must be kept distinct due to differences in binary layout (internal attributes can use a more compact tag+buffer representation, but expecting users to jump through hoops to get their data to look like that isn't very user-friendly). Decided to go with "mattr" as shorthand for "meta-attributes", similar to "metadata".
2018-10-14  Continued progress with reintroducing testing on the new metadata logging  (Christopher Haster)

Now with some tweaks to commit/compact, and committers for entrylists and moves specifically. No longer relying on a commitwith callback, the types of commits are now inferred from their tags. This means we can now commit things atomically with special commits, such as moves. Now lfs_rename can move entries to new names correctly.
2018-10-14  More testing progress, combined dir/commit traversal  (Christopher Haster)

Passing more tests now with the journalling change, but still have more work to do.

The most humorous bug was one where, during the three-step move process, the entry move logic would dumbly copy over any tags associated with the moving entry, including the tag used to temporarily mark the entry as "moving".

Also combined dir and commit traversal using a "stop_at_commit" flag in the directory struct as a short-term hack to combine the code paths.
2018-10-13  Cleaned up enough things to pass basic file testing  (Christopher Haster)
2018-10-13  Reorganized the internal operations to make more sense  (Christopher Haster)

Also refactored lfs_dir_compact a bit, adding begin and end as arguments since they simplify a bit of the logic and can be found much more easily earlier in the commit logic. Also changed add -> append and drop -> delete and cleaned up some of the logic around there.
2018-10-13  Completed transition of files with journalling metadata  (Christopher Haster)
This was the simpler part of transitioning since file operations only interact with metadata at sync time. Also switched from array to linked-list of entries.
2018-10-13  More progress integrating journaling  (Christopher Haster)

- Integrated into lfs_file_t_, duplicating functions where necessary
- Added lfs_dir_fetchwith_ as common parent to both lfs_dir_fetch_ and lfs_dir_find_
- Added similar parent with lfs_dir_commitwith_
- Made matching find/get operations with getbuffer/getentry and findbuffer/findentry
- lfs_dir_alloc now populates tail, since almost all directory block allocations need to populate tail
2018-10-13  Progressed integration of journaling metadata pairs  (Christopher Haster)

- Integrated journaling into lfs_dir_t_ struct and operations, duplicating functions where necessary
- Added internal lfs_tag_t and lfs_stag_t
- Consolidated lfs_region and lfs_entry structures
2018-10-13  Added rudimentary framework for journaling metadata pairs  (Christopher Haster)
This is a big change stemming from the fact that resizable entries were surprisingly complicated to implement and came in with a sizable code cost. The theory is that the journalling has a comparable cost to resizable entries. Both need to handle overflowing blocks, and managing offsets is comparable to managing attribute IDs. But by jumping all the way to full journaling, we can statically wear-level the metadata written to metadata pairs. The idea of journaling littlefs's metadata has been mentioned several times in discussions and fits well into how littlefs works. You could even view the existing metadata log as a log of size 2. The downside of this approach is that changing the metadata in this way would break compatibility from the existing layout on disk. Something that resizable entries does not do. That being said, adopting journaling at the metadata layer offers a big improvement to littlefs's performance and wear-leveling, with very little cost (maybe even none or negative after resizable entries?).
2018-10-10  Added lfs_fs_size for finding a count of used blocks  (Christopher Haster)
This has existed for some time in the form of the lfs_traverse function, through which a user could provide a simple callback that would just count the number of blocks lfs_traverse finds. However, this approach is relatively unconventional and has proven to be confusing for most users.
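A usage sketch of the new function, which replaces the hand-written counting callback described above; lfs_fs_size returns the number of blocks currently in use, or a negative error code. The signature follows the littlefs headers.

    #include "lfs.h"
    #include <stdio.h>

    static int report_usage(lfs_t *lfs) {
        lfs_ssize_t in_use = lfs_fs_size(lfs);
        if (in_use < 0) {
            return (int)in_use;
        }
        printf("%ld of %ld blocks in use\n",
               (long)in_use, (long)lfs->cfg->block_count);
        return 0;
    }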
2018-10-10  Added file-level and fs-level custom attribute APIs  (Christopher Haster)
In the form of lfs_file_setattr, lfs_file_getattr, lfs_fs_setattr, lfs_fs_getattr. This enables atomic updates of custom attributes as described in 6c754c8, and provides a custom attribute API that allows custom attributes to be stored on the filesystem itself.
2018-10-10  Added support for atomically committing custom attributes  (Christopher Haster)

Although it's simple and probably what most users expect, the previous custom attributes API suffered from one problem: the inability to update attributes atomically. If we consider our timestamp use case, updating a file would require:

1. Update the file
2. Update the timestamp

If a power loss occurs during this sequence of updates, we could end up with a file with an incorrect timestamp. Is this a big deal? Probably not, but it could be a surprise only found after a power-loss. And littlefs was developed specifically to avoid surprises during power-loss.

littlefs is perfectly capable of bundling multiple attribute updates in a single directory commit. That's kind of what it was designed to do. So all we need is a new committer opcode for a list of attributes, and then poking that list of attributes through the API. We could provide the single-attribute functions, but don't, because fewer functions make for a smaller codebase, and these are already the more advanced functions so we can expect more from users. This also changes semantics about what happens when we don't find an attribute, since erroring would throw away all of the other attributes we're processing.

To atomically commit both custom attributes and file updates, we need a new API, lfs_file_setattr. Unfortunately the semantics are a bit more confusing than lfs_setattr, since the attributes aren't written out immediately.
2018-10-10  Added simple custom attributes  (Christopher Haster)

A much requested feature (mostly because of littlefs's notable lack of timestamps), this commit adds support for user-specified custom attributes. Planned (though underestimated) since v1, custom attributes provide a route for OSs and applications to provide their own metadata in littlefs, without limiting portability.

However, unlike the custom attributes found on much more powerful PC filesystems, these custom attributes are very limited, intended for only a handful of bytes of very important metadata. Each attribute has only a single byte to identify the attribute, and the size of all attributes attached to a file is limited to 64 bytes.

Custom attributes can be accessed through the lfs_getattr, lfs_setattr, and lfs_removeattr functions.
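A usage sketch for the timestamp use case mentioned above. Signatures follow the current littlefs headers and may differ slightly in this early snapshot; the attribute type 0x74 is an arbitrary, application-chosen id.

    #include "lfs.h"

    static int stamp_file(lfs_t *lfs, const char *path, uint32_t timestamp) {
        return lfs_setattr(lfs, path, 0x74, &timestamp, sizeof(timestamp));
    }

    static int read_stamp(lfs_t *lfs, const char *path, uint32_t *timestamp) {
        lfs_ssize_t res = lfs_getattr(lfs, path, 0x74,
                timestamp, sizeof(*timestamp));
        if (res < 0) {
            return (int)res;   // e.g. LFS_ERR_NOATTR if the file was never stamped
        }
        return 0;
    }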
2018-10-10  Bumped versions, cleaned up some TODOs and missing comments  (Christopher Haster)
2018-10-10  Expanded inline files up to a limit of 1023 bytes  (Christopher Haster)

One of the big benefits of inline files is that small files no longer need to take up a full block. This opens up an opportunity to provide much better support for storage devices with only a handful of very large blocks, such as the internal flash found on most microcontrollers.

After investigating some use cases for a filesystem on internal flash, it has become apparent that the 255-byte limit is going to be too restrictive to be useful in many cases. Most uses I found needed files ~4-64 bytes in size, but it wasn't uncommon to find files ~512 bytes in length.

To try to remedy this, I've pushed the 255-byte limit up to 1023 bytes, by stealing some bits from the previously unused attribute size. Unfortunately this limits attributes to 63 bytes in total and has a minor code cost, but I'm not sure even 1023 bytes will be sufficient for a lot of cases.

littlefs will probably never be as efficient with internal flash as other filesystems such as SPIFFS; it just wasn't designed for this sort of limited geometry. However, this feature has been heavily requested, even with limitations, because of the opportunity for code reuse on microcontrollers with both internal and external flash.
2018-10-10  Added disk-backed limits on the name/attrs/inline sizes  (Christopher Haster)

Being a portable, microcontroller-scale embedded filesystem, littlefs is presented with a relatively unique challenge. The amount of RAM available is on completely different scales from machine to machine, and what is normally a reasonable RAM assumption may break completely on an embedded system.

A great example of this is file names. On almost every PC these days, the limit for a file name is 255 bytes. It's a very convenient limit for a number of reasons. However, on microcontrollers, allocating 255 bytes of RAM to do a file search can be unreasonable.

The simplest solution (and one that has existed in littlefs for a while) is to let this limit be redefined to a smaller value on devices that need to save RAM. However, this presents an interesting portability issue. If these devices are plugged into a PC with relatively infinite RAM, nothing stops the PC from writing files with full 255-byte file names, which can't be read on the small device.

One solution here is to store this limit on the superblock during format time. When mounting a disk, the filesystem implementation is responsible for checking this limit in the superblock. If it's larger than what can be read, raise an error. If it's smaller, respect the limit on the superblock and raise an error if the user attempts to exceed it.

In this commit, this strategy is adopted for file names, inline files, and the size of all attributes, since these could impact the memory consumption of the filesystem. (Recording the attribute limit is iffy, but it is the only other arbitrary limit and could be used for disabling support of custom attributes.)

Note! This change makes it very important to configure littlefs correctly at format time. If littlefs is formatted on a PC without changing the limits appropriately, it will be rejected by a smaller device.
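A hedged configuration sketch of the disk-backed limits: the field names follow the current lfs_config (name_max, attr_max, file_max); in this snapshot the inline-file limit had its own field and the attribute limit covered the sum of attributes rather than a single attribute.

    #include "lfs.h"

    const struct lfs_config cfg_limits = {
        // ... block device callbacks and geometry omitted ...
        .name_max = 64,    // written to the superblock at format time; a mount
                           // fails if the on-disk value exceeds what this build
                           // can handle, and a smaller on-disk value is respected
        .attr_max = 64,    // cap on custom attribute size, also recorded on disk
    };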