Abishia: the maximum amount of times I can boot from an SD was 3 times with a Linux system
Lots of Raspberry Pi users repeatedly boot from an SD card just fine, me included. I think I've had my Raspberry Pi 4 for 1½ years now, or so.

It is constant writing to the SD card that I want to avoid, not reading from it. Hence I have moved especially the swap file, but also the rest of the Linux filesystem except for /boot, off the SD card to an external USB hard drive. So the RPi4 boots from the SD card every time, but uses the USB HDD for the rest. The root filesystem, /home, /opt, /var etc. are all on the USB hard drive.

Not sure when exactly the system writes to the SD card now; I presume it's when there are kernel updates or I manually change some config files under /boot. At least it shouldn't constantly write to the SD card anymore... I hope.
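For anyone curious, the split is roughly this (just a sketch of the usual Raspberry Pi OS way; /dev/sda2 and the mount point are placeholder names for wherever your USB drive ends up):

```
# copy the current root filesystem onto a partition on the USB drive
sudo rsync -axHAX / /mnt/usbroot/

# point the kernel at the new root by editing root= in /boot/cmdline.txt,
# e.g. root=/dev/mmcblk0p2  ->  root=/dev/sda2  (or root=PARTUUID=... to be safe)

# finally, /etc/fstab on the USB drive should mount / from that same device,
# while /boot keeps coming from the SD card's first partition
```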

(It is possible nowadays to set up RPi4 to boot completely from an external USB device, not using a swappable SD card at all.)


Lately though, I've been educating myself more on how to make the battery of all these battery-powered devices (including the Steam Deck) last longer. I've been reading about these things mainly because I just got an electric car, and obviously want to keep its battery in good condition as long as I can. Apparently the idea is:

- Try to avoid recharging the battery to 100%, at least all the time. It is much better to limit the max charge level to e.g. 80% or 90%, and only charge it to 100% when you really need it.

- Similarly, try to avoid draining the battery to 0%. So generally try to keep your battery level between 20% and 80% most of the time, if possible.

- If you are going to store a battery(-powered device) for longer periods of time without using it at all, the optimal battery level for storage is around 50-60%. Both 0% and 100% are apparently bad for long-term storage.

Just today, I googled how to do such max battery level limiting on laptops. Some laptops have software for it; with my Dell work laptop there is a BIOS setting for it. So I just changed it so that by default my laptop battery is charged only up to 80%, and it won't start charging until it is down to 50%.

Maybe a higher lower limit would be better for most users, but I have that laptop connected to power 95% of the time... Too bad I have to go into the BIOS to change that setting; it would be nicer if there was a toggle on the desktop where I could just tell it when I want it to charge to 100%, and when only to e.g. 80%.
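(As an aside: on some laptops under Linux the same limits are reportedly exposed through sysfs, so no BIOS trip is needed. Whether these files exist at all depends on the model and kernel, and BAT0 is just the usual battery name, so take this as a sketch:)

```
# stop charging at 80%, and only resume charging below 50%, if the firmware supports it
echo 80 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold
echo 50 | sudo tee /sys/class/power_supply/BAT0/charge_control_start_threshold
```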

Then again, if the Steam Deck is primarily used on battery power, I guess it will not be sitting constantly at a 100% battery level like many laptops are (because those are still mainly connected to power most of the time).
Post edited October 17, 2021 by timppu
timppu: (It is possible nowadays to set up RPi4 to boot completely from an external USB device, not using a swappable SD card at all.)
Also possible with the 3; you just need to boot with an SD card to make a permanent change if it's not the 3B+.

The Raspberry Pi Zero (and Zero W) supports a USB device mode boot, which allows you to boot it from a directory on a host machine; very useful if you're making your own root filesystem and want to test it without having to repeatedly write it to an SD card.

The Raspberry Pi 4 I'm using is running off a USB connected SSD, with no SD card at all.
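(For anyone who wants to set that up: on the Pi 4 the boot order lives in the bootloader EEPROM. Roughly like this, assuming a reasonably recent bootloader; double-check the BOOT_ORDER values against the official docs before writing anything:)

```
# show the current bootloader configuration
vcgencmd bootloader_config

# open the config for editing; BOOT_ORDER=0xf14 means try USB mass storage
# first, then the SD card, then start over
sudo -E rpi-eeprom-config --edit
```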
timppu: It is constant writing to the SD card that I want to avoid, not reading from it. Hence I have moved especially the swap file, but also the rest of the Linux filesystem except for /boot, off the SD card to an external USB hard drive. So the RPi4 boots from the SD card every time, but uses the USB HDD for the rest. The root filesystem, /home, /opt, /var etc. are all on the USB hard drive.

Not sure when exactly the system writes to the SD card now; I presume it's when there are kernel updates or I manually change some config files under /boot. At least it shouldn't constantly write to the SD card anymore... I hope.
One thing I could point out is that modern web browsers, for whatever reason, write to the SD card a lot; this was causing severe stalls when I was running my Pi from an SD card.

From what I gather, in terms of performance, drives can be ordered something like this:
SD Card < eMMC < SATA SSD < NVMe SSD

A SATA SSD is still generally fast enough for most purposes, though; the average user won't notice a need to upgrade further.

A spinning hard drive seems to be comparable to eMMC, though it may vary based on the actual workload.

(A RAM disk is even faster, but the caching on modern OSes like Linux (but not Windows XP) should make the difference not noticeable for workloads that don't do heavy writing (and that don't load more than fits into RAM), except when first starting up.)
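(If anyone wants to see where their own storage lands in that ordering, a quick sequential read test gives a rough idea; the device and file names below are just placeholders:)

```
# raw sequential read speed of the device itself (read-only, needs root)
sudo hdparm -t /dev/sda

# or a crude test through the filesystem, bypassing the page cache
dd if=some_big_file of=/dev/null bs=1M iflag=direct
```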

rtcvb32: Thinking about it, if you have enough memory (2GB and more) you can likely also disable or disregard swap files/partitions if that is giving you issues. zram/zswap is pretty popular and increases the RAM limits for more limited systems, and with larger RAM amounts it has less of a penalty (based on how much you fill before it starts working)
zswap requires a physical swap file to work. It tends to be enabled by default on modern Linux distributions, and isn't something the average user will notice.

zram does not, and can serve as a swap device, but it has other uses; it can be used for anything that you might use a RAM disk for. I've run virtual machines from zram, for example, or put a root filesystem there (running in a VM).
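(For reference, manually setting up a zram swap device looks roughly like this; a sketch that assumes the zram module and util-linux's zramctl are available, with the size and compression algorithm as examples only:)

```
# load the module and create a 2 GB compressed block device in RAM
sudo modprobe zram
sudo zramctl --find --size 2G --algorithm zstd

# use it as swap, with a higher priority than any disk-backed swap
# (assuming zramctl reported /dev/zram0 as the device it created)
sudo mkswap /dev/zram0
sudo swapon --priority 100 /dev/zram0
```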
Post edited October 17, 2021 by dtgreene
Serren: SD cards use erase or discard operations which function similarly to a trim command.
rtcvb32: Using Linux you have to enable said options, and the filesystem and/or hardware has to support it. Say for a RAM drive it would free the memory or zeroize it; for a filesystem it might do something else.
Some details on what might happen:
* RAM disk: The space is deallocated and returned to system RAM. (Also happens with zram on Linux, if the kernel is new enough to support it.)
* device-mapper (including LVM): Typically passed down to the underlying storage.
* dm-thin: Storage is de-allocated from the thin pool, and the discard is passed down.
* Emulated disk (in a VM, for example): Passed down to the underlying storage. If the storage is an image file in raw format, a hole is punched in the file, making it a sparse file. If the storage is something like qcow2, the file is updated to have that area of the virtual disk no longer stored in it.
* Image file (Linux's loop devices, for example): A hole is punched in the file.
* SMR hard drive: Similar to flash storage, the area of the drive is marked as unused by the firmware so that data can be copied there for better performance (when deleting something, for example). (I don't understand this well.)
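(From userspace, the usual way to hand those unused blocks back down the stack is fstrim, or mounting with the discard option; a small sketch, with the mount point and file name as placeholders:)

```
# ask the filesystem to issue discards for all of its free space
sudo fstrim -v /

# or, on filesystems that support it, punch a hole in a single file
# so the blocks underneath it are freed
fallocate --punch-hole --offset 0 --length 1M some_file
```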
Abishia: now I read the Steam Deck works and boots from an SD chip?
my thoughts: HOW?
My understanding is that the low-end model runs from eMMC, which is better than SD storage. My small laptop used to run from eMMC, and it worked fine for many years without the stalls that happened on my Pi 4 running from an SD card.
Post edited October 17, 2021 by dtgreene
Dark_art_: The Raspberry Pi 3 is in 24/7 duty and has been rebooted every day for the last couple of years, from SD card.
Out of curiosity, why do you reboot it daily?
I have one that's been up 10-11 months by now without issue. (Really ought to upgrade the packages one of these days. :-) )
Dark_art_: The Raspberry Pi 3 is in 24/7 duty and has been rebooted every day for the last couple of years, from SD card.
brouer: Out of curiosity, why do you reboot it daily?
I have one that's been up 10-11 months by now without issue. (Really ought to upgrade the packages one of these days. :-) )
'Cause the device it's connected to is picky, and for me it's easier to schedule a reboot than to properly fix the issue :D
My Pi is at 25 days uptime. Sooner or later I should reboot just so that the kernel can be updated, but doing so means I lose the current desktop state.
Not related to endurance, but something I would like to point out, since the storage on the Deck is expandable with an SD card and it is marketed as an "AAAAAAAAAAAAAAA games 300+fps" capable PC (we all know what 15W of SOC power can do, right?). After using games on SD card for a couple of years, my experience is that many games run just fine, with the naturally longer load times compared to SSDs. More intensive games stutter from time to time; one example is XCOM Enemy Unknown/Within, which stutters when taking a type of shot that has not been done previously on a map. My guess is that the game loads the animations on-the-fly. I'm using a SanDisk Ultra 128GB, A1 rated.
Dark_art_: Not related to endurance, but something I would like to point out, since the storage on the Deck is expandable with an SD card and it is marketed as an "AAAAAAAAAAAAAAA games 300+fps" capable PC (we all know what 15W of SOC power can do, right?). After using games on SD card for a couple of years, my experience is that many games run just fine, with the naturally longer load times compared to SSDs. More intensive games stutter from time to time; one example is XCOM Enemy Unknown/Within, which stutters when taking a type of shot that has not been done previously on a map. My guess is that the game loads the animations on-the-fly. I'm using a SanDisk Ultra 128GB, A1 rated.
Worth noting that your workload on the card is likely read-heavy, but not write-heavy, so you're not hurting the card's endurance.

(This is unlike, say, modern web browsers, which seem to want to write to the disk constantly.)

The behavior in the example you listed, where the shot takes longer the first time it's used, could be explained by the caching in modern OSes like Linux (and probably Windows 7+). When the game needs to load the animation, the OS reads it from disk and keeps a copy cached in RAM. Then, if the program needs to load the animation again, the OS realizes that it's still cached and just returns the copy in RAM instead of going to disk. This is one real benefit to having extra RAM in the system (and using an OS that does this). (If RAM runs out, the OS might decide to just drop the animation from RAM if it hasn't been used in a while, since it can be loaded later if needed; and because it's still on disk, the OS need not write it to swap space.)
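(You can see this for yourself on Linux if you're curious; the file name is just an example, and dropping the caches needs root and is only for testing:)

```
# flush the page cache, then read the same file twice
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
time cat some_big_file > /dev/null    # first read: comes from disk
time cat some_big_file > /dev/null    # second read: served from RAM

# 'free -h' shows how much RAM is currently being used as buff/cache
free -h
```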
Abishia: the maximum amount of times I can boot from an SD was 3 times with a Linux system
You did what? Tell that to my cluster of Raspberry Pi 4s, which have been running off of SD cards for over 2 years now. Either you're buying cheapo SD cards, which can of course go bad whenever they feel like it, or you're doing something wrong I'm afraid.
dtgreene: Some details on what might happen:
* RAM disk: The space is deallocated and returned to system RAM. (Also happens with zram on Linux, if the kernel is new enough to support it.)
If the block is full of nulls/zeros, zram will deallocate it rather than try to compress it. At least that's my experience. (Test with, say, vfat: add a bunch of files, then delete them; they will still be there. Zeroize all the free sectors afterwards (by making one big file with dd from /dev/zero and then deleting it) and it frees said blocks of memory.) Thus trim for some filesystems may be zeroizing the unused blocks.

tmpfs, on the other hand, would likely just deallocate it and return it to the system.
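(For anyone who wants to try that zero-fill trick, it's roughly this; /mnt/zramdisk is a hypothetical mount point for a filesystem sitting on a zram device:)

```
# fill all free space with zeros, then delete the file;
# zram can then drop the zero-filled pages instead of keeping them compressed
dd if=/dev/zero of=/mnt/zramdisk/zerofill bs=1M || true   # dd stops when the space runs out
rm /mnt/zramdisk/zerofill
```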
dtgreene: Some details on what might happen:
* RAM disk: The space is deallocated and returned to system RAM. (Also happens with zram on Linux, if the kernel is new enough to support it.)
rtcvb32: If the block is full of nulls/zeros, zram will deallocate it rather than try to compress it. At least that's my experience. (Test with, say, vfat: add a bunch of files, then delete them; they will still be there. Zeroize all the free sectors afterwards (by making one big file with dd from /dev/zero and then deleting it) and it frees said blocks of memory.) Thus trim for some filesystems may be zeroizing the unused blocks.

tmpfs, on the other hand, would likely just deallocate it and return it to the system.
I have flashbacks of trying to compile a large program on a low-memory system years ago; it ran out of space over an hour into the compilation, and I had to manually increase the tmpfs size.
Post edited October 18, 2021 by cowtipper-
dtgreene: Some details on what might happen:
* RAM disk: The space is deallocated and returned to system RAM. (Also happens with zram on Linux, if the kernel is new enough to support it.)
rtcvb32: If the block is full of nulls/zeros, zram will deallocate it rather than try to compress it. At least that's my experience. (Test with, say, vfat: add a bunch of files, then delete them; they will still be there. Zeroize all the free sectors afterwards (by making one big file with dd from /dev/zero and then deleting it) and it frees said blocks of memory.) Thus trim for some filesystems may be zeroizing the unused blocks.

tmpfs, on the other hand, would likely just deallocate it and return it to the system.
Thing is, a block that's full of zeroes compresses really easily, to the point where deallocating it would not be much of a space saving over compressing it.

In fact, zeros compress so well that an antivirus program might even consider the file a decompression bomb if there are too many of them. I've seen a site with compressed blank disk images for download, and the download is small yet decompresses to gigabytes of zeroes. It's quite easy to describe the precise contents of, say, 4GB of zeroes without needing much text; in fact, I just did!

(A file filled with any other bit pattern, like all ones, would similarly compress very easily.)
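(Easy to demonstrate; the file name is just an example:)

```
# a gigabyte of zeros...
dd if=/dev/zero of=zeros.img bs=1M count=1024

# ...shrinks to roughly a megabyte with plain gzip
gzip --keep zeros.img
ls -lh zeros.img zeros.img.gz
```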
Abishia: the maximum amount of times I can boot from an SD was 3 times with a Linux system
WinterSnowfall: You did what? Tell that to my cluster of Raspberry Pi 4s, which have been running off of SD cards for over 2 years now. Either you're buying cheapo SD cards, which can of course go bad whenever they feel like it, or you're doing something wrong I'm afraid.
I used good quality SD cards (EVO 32GB).

One card was totally destroyed, broken beyond repair, and that was my expensive SanDisk Extreme Pro 32GB, 300 MB a second.
I think a Raspberry Pi isn't comparable with a normal desktop boot; those flash cards just break down after a few boots.
dtgreene: Thing is, a block that's full of zeroes compresses really easily, to the point where deallocating it would not be much of a space saving over compressing it.

In fact, zeros compress so well that an antivirus program might even consider the file a decompression bomb if there are too many of them. I've seen a site with compressed blank disk images for download, and the download is small yet decompresses to gigabytes of zeroes. It's quite easy to describe the precise contents of, say, 4GB of zeroes without needing much text; in fact, I just did!

(A file filled with any other bit pattern, like all ones, would similarly compress very easily.)
Well, I think by default an unallocated 'block' is all zeros, so you can read the block, but it's all zeros if unallocated. Thus when you zeroize it, it's back to its default and zram just deallocates it (as there's no reason not to).

With a minimal swap device it takes up like 200 bytes but compresses to like 62. I haven't looked closely at the code, but each block of say 4k-64k is likely compressed separately, so while a huge block of zeros will compress well, you'd have repeating RLE or the same block (uncompressed 4 bytes, lookback 4 bytes, length 1024 bytes, 4 times), which may take... 16-32 bytes? Depending on the method and how it compresses.

Files or blocks filled with other patterns would too, but I think zero is special.

Some archivers actually compensate for the zero sections of sparse files and don't include those. Or in one case with my xor program (where it kept crashing on a 300MB xor diff), I'd programmed it to identify said locations and write a special 4-byte header which basically said how long the block of zeros was, which then let the whole thing compress together much better.

So it's hard to say what's best in those cases.

On the other hand, you COULD make a huge block that is a bomb and decompress it to zram, and it would not take much space in reality... since it never allocates the zeroized blocks.
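(Related to the sparse-file point above: the gap between apparent size and allocated size is easy to see directly; the file name is just an example:)

```
# create a 4 GB file that allocates no blocks at all
truncate --size 4G sparse.img

# apparent size vs. blocks actually allocated on disk
ls -lh sparse.img    # reports 4 GB
du -h sparse.img     # reports (almost) nothing
```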
Abishia: One card was totally destroyed, broken beyond repair, and that was my expensive SanDisk Extreme Pro 32GB, 300 MB a second.
I think a Raspberry Pi isn't comparable with a normal desktop boot; those flash cards just break down after a few boots.
I have SanDisk Extreme (non-Pro) microSDs in 4 Raspberry Pis and they've been going strong for 2 years... more reboots than I can count.

You may simply have gotten a bad batch... or a counterfeited one.
Post edited October 18, 2021 by WinterSnowfall