Since it’s kind of related, here’s my anecdote/data point on the bit rot topic: I did a 'btrfs scrub' (checksum verification) on my two 8 TB Samsung 870 QVO drives. One of them has been always on (10k hours), while the other hasn’t been powered on at all in the last 9 months, and only once in the last 16 months.
No issues were found on either of them.
I wonder how long those drives can stay powered off before they lose the data, and how much longer before they lose all functionality because the controller’s critical bookkeeping data disappears.
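For anyone who wants to run the same kind of check, here’s a minimal sketch; the mount point /mnt/archive is just a placeholder for wherever the btrfs filesystem lives:

  # run a scrub in the foreground and print per-device statistics when it finishes
  btrfs scrub start -B -d /mnt/archive

  # or kick it off in the background and poll progress / error counts later
  btrfs scrub start /mnt/archive
  btrfs scrub status /mnt/archive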
I’ve had enough consumer SSDs fail on me that I ended up building a NAS with mirrored enterprise ones... but 2nd hand ones. Figured that between the mirroring and the enterprise rating, it’s an OK gamble.
Still to be seen how that works out in the long run, but so far so good.
I wonder what’s the best SATA SSD (M.2 2280) one could get now?
I have an old Asus with an M.2 2280 slot that only takes SATA III.
I recall the current drive is an 840 EVO M.2 (if my memory serves me right), but finding a new replacement doesn’t seem straightforward: most SATA drives are 2.5 in, and when it is the right M.2 2280 form factor, it’s usually NVMe.
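If it helps to double-check what the existing drive actually is before shopping, the transport Linux reports usually settles the SATA-vs-NVMe question (device names below are just examples):

  # TRAN shows 'sata' for M.2 SATA drives and 'nvme' for NVMe ones
  lsblk -o NAME,MODEL,TRAN,SIZE

  # smartctl -i also prints the interface, e.g. a "SATA Version is: ..." line for SATA drives
  smartctl -i /dev/sda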
> The reported SSD lifetime is reported to be around 94%, with over 170+ TB of data written
Glad for the guy, but here’s a somewhat different view of the same QVO series:
Device Model: Samsung SSD 870 QVO 1TB
User Capacity: 1,000,204,886,016 bytes [1.00 TB]
== /dev/sda
9 Power_On_Hours 0x0032 091 091 000 Old_age Always - 40779
177 Wear_Leveling_Count 0x0013 059 059 000 Pre-fail Always - 406
241 Total_LBAs_Written 0x0032 099 099 000 Old_age Always - 354606366027
== /dev/sdb
9 Power_On_Hours 0x0032 091 091 000 Old_age Always - 40779
177 Wear_Leveling_Count 0x0013 060 060 000 Pre-fail Always - 402
241 Total_LBAs_Written 0x0032 099 099 000 Old_age Always - 354366033251
== /dev/sdc
9 Power_On_Hours 0x0032 091 091 000 Old_age Always - 40779
177 Wear_Leveling_Count 0x0013 059 059 000 Pre-fail Always - 409
241 Total_LBAs_Written 0x0032 099 099 000 Old_age Always - 352861545042
== /dev/sdd
9 Power_On_Hours 0x0032 091 091 000 Old_age Always - 40778
177 Wear_Leveling_Count 0x0013 060 060 000 Pre-fail Always - 403
241 Total_LBAs_Written 0x0032 099 099 000 Old_age Always - 354937764042
== /dev/sde
9 Power_On_Hours 0x0032 091 091 000 Old_age Always - 40779
177 Wear_Leveling_Count 0x0013 059 059 000 Pre-fail Always - 408
241 Total_LBAs_Written 0x0032 099 099 000 Old_age Always - 353743891717
NB: you need to look at the first decimal number (the normalized VALUE column) in attribute 177 Wear_Leveling_Count to get the 'remaining endurance percent' value, i.e. 59 and 60 here.

While overall it's not that bad - losing only 40% after 4.5 years - it means that in another 3-4 years it would be down to 20%, assuming the usage pattern doesn't change and the system doesn't hit write amplification. Sure, someone had that "brilliant" idea ~5 years ago to use desktop-grade QLC flash as ZFS storage for PVE...
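If you want to pull just that number for a bunch of drives, here's a quick sketch (assuming bash plus smartmontools, and that the attribute line looks like the smartctl -A output above, where the normalized VALUE is the fourth field):

  for d in /dev/sd{a..e}; do
    # field 4 of the attribute line is the normalized VALUE, i.e. remaining endurance %
    echo -n "$d: "
    smartctl -A "$d" | awk '/Wear_Leveling_Count/ {print $4}'
  done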
> Overall, I haven’t seen many issues with the drives, and when I did, it was a Linux kernel issue.
Reading the linked post, it's not a Linux kernel issue. Rather, the Linux kernel was forced to disable queued TRIM (and maybe even NCQ) for these drives, due to issues in the drives themselves.
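If you want to see how your own drives ended up, one way (the exact kernel message wording varies between versions, so treat the grep as a rough filter) is to check what the block layer reports for discard support and what libata logged at boot:

  # non-zero DISC-GRAN / DISC-MAX means TRIM (discard) is available at all
  lsblk --discard /dev/sda

  # libata usually logs a note when it applies a quirk such as disabling queued TRIM
  dmesg | grep -i -E 'queued trim|ncq'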