From d5e800575daaf871ed629d5f52af2c4fe1a4cc29 Mon Sep 17 00:00:00 2001
From: Christopher Haster
Date: Mon, 13 Aug 2018 09:03:13 -0500
Subject: Collapsed recursive deorphans into a single pass

Because a block can go bad at any time, if we're unlucky, we may end up
generating multiple orphans in a single metadata write. This is
exacerbated by the early eviction in dynamic wear-leveling.

We can't track _all_ orphans, because that would require unbounded
storage and significantly complicate things, but there are a handful of
intentional orphans we do track because they are easy to resolve without
the O(n^2) deorphan scan. These are created anytime we intentionally
remove a metadata pair.

Initially we cleaned up orphans as they occurred with whatever knowledge
we did have, and just accepted the extra O(n^2) deorphan scans in the
unlucky case. However, we can do a bit better by being lazy and leaving
deorphaning up to the next metadata write. This needs to work with the
known orphans while still setting the orphan flag on disk correctly. To
accomplish this we replace the internal flag with a small counter.

Note, this means that our internal representation of orphans differs
from what's on disk. This is annoying, but not the end of the world.
---
 lfs.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/lfs.h b/lfs.h
index e816515..839cf83 100644
--- a/lfs.h
+++ b/lfs.h
@@ -302,8 +302,13 @@ typedef union lfs_global {
     struct {
         lfs_block_t movepair[2];
         uint16_t moveid;
-        bool deorphaned;
-    } s;
+        uint8_t deorphaned;
+    } l;
+    struct {
+        lfs_block_t movepair[2];
+        uint16_t moveid;
+        uint8_t orphans;
+    } g;
 } lfs_global_t;
 
 typedef struct lfs_mdir {
-- 
cgit v1.2.3