
  • Some more criteria which I think are meaningful:

    • How often are you willing to upgrade or re-install your system?
    • How reliable does your system need to be? Would it inconvenience you, or even be a risk for your livelihood, if it stops working tomorrow morning for a few days until you find time to fix it?
    • If some software package has a breaking change, do you want to see the consequences of that change (a) invariably together with the next minor software update, or (b) only with the next major system upgrade, whose timing you can choose?
    • How quickly do you need security updates applied? For how long do you need security updates? (By the way, this point is an important difference between Debian and Ubuntu, as older Ubuntu LTS releases receive reduced security updates!)
    • Is security of your system and privacy of the user data a top concern for you?
    • Are you an open source software developer, or do you otherwise have a strong need to run the latest software versions? And how old would be the oldest version you are willing to tolerate?
    • Do you want an easy way to get involved with the open source development community?

  • I am thinking this could be neat for people new to Linux to help them select a first distribution.

    A few more points:

    • There are a lot of choices

    • There are also a lot of different valuable qualities.

    • Consequently, there are no distributions that are “good” or “bad”.

    • It is nice to try out things! And trying out things will change what appeals to you.

    • That said, perhaps you don’t want to try out too many things now, instead right now you’d prefer something that just works…

    • Also, your needs and your capabilities will change over time. The needs of a young student who wants to learn programming, a PC gamer, or somebody who likes to learn and understand Linux in detail might be quite different from those of a busy parent or a young professional who just needs to write job applications!

    • So, what matches your needs best will likely also change over time.

    Finally, the choice of distributions is not an either-or or black-and-white thing. You can run Linux, and on top of it Windows in a virtual machine (basically an entire simulated computer). You can also run another Linux distribution in a virtual machine if it matches a specific use case.








  • Scenarios like this one are why you need backups. Always.

    And yes, Ext4 is a journaling file system, which makes it much more robust in the scenario of a power outage, but journaling won’t protect you e.g. from faulty RAM or corruption of kernel data structures.

    On top of that, it can also be an adequate solution to use BTRFS (plus backups) for a system install of a few GB, and Ext4 (plus backups!) for user data.

    Also, since a sibling comment mentions LVM: it is great, and solid, but it has its own complexity, which introduces extra chances for user error. And user error is a major cause of data loss, so it is no silver bullet either.
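To make the backup point concrete, here is a minimal sketch. The paths are hypothetical stand-ins (mktemp directories instead of a real home folder and backup drive), and in practice you would use a proper tool like rsync or borg; plain cp -a just makes the idea of dated snapshots visible:

```shell
# Minimal dated-snapshot backup sketch; paths are throwaway stand-ins.
SRC=$(mktemp -d)   # stand-in for /home/user
DST=$(mktemp -d)   # stand-in for a mounted backup drive
mkdir -p "$SRC/Documents"
echo "important" > "$SRC/Documents/notes.txt"

# one snapshot folder per backup date; -a preserves modes, times, links
SNAP="$DST/$(date +%F)"
cp -a "$SRC" "$SNAP"

cat "$SNAP/Documents/notes.txt"
```

The same layout works unchanged whether the system partition is BTRFS or Ext4, which is the point: backups protect against failure modes that no file system choice can.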


  • root partition / file system

    For my needs, the simplest approach is to keep an extra partition in reserve for installing the next distribution release. So you get:

    Partition A: Debian 12’s root

    Partition B: /home

    Partition C: Debian 13’s root

    And swap A and C for the next upgrade. It is really nice to have a whole compatible fallback system.
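As a sketch, the corresponding /etc/fstab of the active system might look like this (the labels and numbers are hypothetical; identifying partitions by label or UUID is what makes swapping A and C easy):

```
# /etc/fstab of the currently active system (hypothetical labels)
LABEL=deb13root  /      ext4  defaults  0  1
LABEL=home       /home  ext4  defaults  0  2
# Partition A (LABEL=deb12root) stays untouched as the fallback system;
# for the next upgrade, install into it and swap the root lines.
```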

    Alternative 1: BTRFS. A possibly quicker way to do this is to use one larger BTRFS file system, create subvolumes from snapshots, and mount those. When the subvolumes are no longer needed, they can be deleted like any folder.
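A dry-run sketch of that subvolume workflow could look as follows. The mount point /mnt/btr and the subvolume names are assumptions; the helper only echoes the commands, so nothing is touched until you swap the echo for the real call:

```shell
# Dry-run sketch of the snapshot-based root scheme. Assumed layout:
# a BTRFS file system mounted at /mnt/btr, current root in subvolume @deb12.
run() { echo "+ $*"; }   # change the body to '"$@"' to really execute

# snapshot the current root as the starting point for the next release:
run btrfs subvolume snapshot /mnt/btr/@deb12 /mnt/btr/@deb13
# mount @deb13, dist-upgrade inside it; once the new system is verified:
run btrfs subvolume delete /mnt/btr/@deb12
```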

    Alternative 2: LVM. One can also use LVM, the Logical Volume Manager, for the same effect. It plays nicely with LUKS encryption for laptops. But I think BTRFS is simpler most of the time.

    How to move all packages over

    One can copy the system, e.g. using a tar backup, fix the mount points by changing the volume label (which identifies the mount point), and then do a dist-upgrade. I use tar here because it keeps all file attributes.
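A sketch of such a copy (the mount points below are temp-dir stand-ins; on a real system you would run as root and add flags like --xattrs and --acls):

```shell
# Copy one tree into another with tar, preserving ownership and modes.
# mktemp dirs stand in for the mounted old and new root partitions.
OLDROOT=$(mktemp -d)
NEWROOT=$(mktemp -d)
mkdir -p "$OLDROOT/etc"
echo "Debian 12" > "$OLDROOT/etc/motd"

# -p preserves permissions; piping avoids an intermediate archive file
tar -C "$OLDROOT" -cpf - . | tar -C "$NEWROOT" -xpf -

cat "$NEWROOT/etc/motd"
# afterwards, relabel the new partition so fstab selects it as root, e.g.:
# e2label /dev/sdXN deb13root
```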

    I guess that’s the best way to do it on a server. But for desktop systems, I now think it is better to make a list of manually installed packages (there are tools which help with that), and to re-install only the packages that are still needed from that list. This has two advantages:

    1. One gets rid of cruft and experimental installs that are no longer needed, which is really important in the long term. (If you have ever worked in a shop where software, files and configurations were not upgraded for ten or twenty years, you might know what I am talking about: IT systems absolutely need their old stuff cleared out, too, and that makes the whole concept of “stability” a lot less important.)
    2. Some systems (I am looking at you, GNOME) can break in an ugly way if doing an upgrade instead of a re-install. Very bad behaviour, but it can happen. (And this might answer the question whether Debian is more stable than Arch: Yes, as long as you don’t upgrade GNOME).
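On Debian-family systems, one such tool is apt-mark. A sketch of the export-and-replay step, with the actual package-manager calls commented out and an example list in their place so the flow is visible:

```shell
# Export the list of manually installed packages on the old system.
# apt-mark showmanual > manual-packages.txt
printf '%s\n' vim git vim > manual-packages.txt   # example list, with a dup

sort -u manual-packages.txt -o manual-packages.txt  # dedupe for review
cat manual-packages.txt

# After pruning the list by hand, replay it on the fresh install:
# xargs -a manual-packages.txt sudo apt-get install --no-install-recommends
```

Pruning the list by hand before replaying it is exactly the step that clears out the cruft mentioned above.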

    keeping dot files, the copy-and-modify way

    And one more thing I do for the dot files:

    Say my home folder is /home/hvb. Then, I install Debian 13 and set /home/hvb/deb13 as my home folder (by editing /etc/passwd). I put my data in /home/hvb/Documents and /home/hvb/Photos/ and sym-link these folders into /home/hvb/deb13.

    Now, default dot files are automatically created in /home/hvb/deb13 or /home/hvb/deb13/.config.

    When I upgrade, I first create a new folder /home/hvb/deb14, copy my dot files over from deb13, and install the new root partition with my home set to /home/hvb/deb14. Then, I again link my data folders (documents and media such as /home/hvb/Documents) into /home/hvb/deb14. The reason I do this is that new versions of programs can upgrade the dot files to a new syntax or new features, but when I boot back into Debian 13, the old versions can’t necessarily read the newer-version config files (the changes are mostly promised to be backward-compatible, but not forward-compatible).
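The steps above can be sketched like this. Everything happens inside a throwaway temp directory standing in for /home/hvb, and the dot file name is just an example:

```shell
# Per-release home directories with shared, symlinked data folders.
# A mktemp dir stands in for /home/hvb; nothing outside it is touched.
HVB=$(mktemp -d)

# shared data lives at the top level, outside any release's home:
mkdir -p "$HVB/Documents" "$HVB/Photos"

# home for the Debian 13 install (what /etc/passwd would point to):
mkdir -p "$HVB/deb13"
ln -s "$HVB/Documents" "$HVB/deb13/Documents"
ln -s "$HVB/Photos"    "$HVB/deb13/Photos"
echo "theme=old" > "$HVB/deb13/.gitconfig"   # example dot file

# preparing the Debian 14 home: copy the dot files, re-link the data:
mkdir -p "$HVB/deb14"
cp -a "$HVB/deb13/.gitconfig" "$HVB/deb14/"
ln -s "$HVB/Documents" "$HVB/deb14/Documents"

# both homes now share Documents, but keep separate dot files
ls "$HVB/deb14"
```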

    All in all, this is a very conservative approach, but it has worked for me, running Debian for many years now in a rather large desktop setup.

    And the above also worked well for me with distro-hopping. Though nowadays it is more advisable to install parallel dual-boot distros on a separate removable disk, since such installs can also modify the GRUB and EFI setup, early graphics drivers and so on. In theory dual-boot installs should be completely independent, but in my experience that is no longer always guaranteed (especially if you have an NVidia graphics card which needs extra support in EFI, but well… in that case you asked for pain).

    For that last reason, I now also run Arch in a VM managed by virt-manager - this also allows running both systems at once.

    (What I want to point out is that there is nothing you can do with an Arch host and Debian in a VM that you can’t do with Debian as the host. The differences are not really that large - Arch just often has newer software, and can be nice if you want to participate in the FLOSS community and contribute packages.)









    • As mentioned in the article, guix pull is sloow.

    This one has been discussed on several forums discussing the original blog post, like here, or also here on lobste.rs

    Part of the reason for slow pulls is that the GNU project’s Savannah server, which Guix was using so far, is not fast, especially with git repos. Luckily, this is already being improved, because Guix is moving to codeberg.org, a FOSS nonprofit organisation which is hosted in Europe. So if one changes the configured server URL, pulls are faster. (On top of that, interested people might use the opportunity to get directly involved, and donate to Codeberg so that they can afford even better hardware 😉).
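The server URL lives in Guix’s channel file. As a sketch, ~/.config/guix/channels.scm could look like the following; the URL here is an assumption, so please check the Guix manual for the currently recommended channel definition (including the channel introduction used for authentication) before switching:

```scheme
;; ~/.config/guix/channels.scm (sketch; verify the URL and the channel
;; introduction against the Guix manual before switching)
(list (channel
        (name 'guix)
        (url "https://codeberg.org/guix/guix.git")
        (branch "master")))
```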




  • Yes, having programmed bash and its predecessors for 30 years, and several Lisps (Clojure, Racket, Guile, a little SBCL) in the last 15 years, I very much prefer the Scheme version in this place.

    Why?

    • This code fragment is part of a much larger system, so readability and consistency count.
    • The Guile version supports more powerful functionality, namely that evaluation of a package can have several extra results (called outputs). It is over a year since I read about that in the Guix documentation, and yet I recognize it immediately.
    • The code tells me that it is removing examples.
    • The code fits neatly into a tidy system of several stages of building and packaging.
    • The code uses a structured loop. Of course you can do that in shell as well - I am pointing this out because the bash version is a bit shorter precisely because it does not use a loop.
    • Scheme has much safer and more robust string handling. The code will not do harmful things if a file name contains white space or happens to be equal to 'echo a; rm -rf /etc/*'.
    • Scheme strings handle Unicode well.
    • If there is an error, it will not be silently ignored, as is the norm in shell scripts not written by experts - it will be thrown.
    • The code has less redundancy. For example, the bash version mentions the subfolder “lib” three times, the Guile version only once. This makes it easier to refactor the code later.
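To illustrate the whitespace point from the shell side: a hypothetical cleanup loop over file names is only safe with careful quoting and NUL-separated input, which is exactly the kind of care Scheme gives you for free:

```shell
# Why unquoted shell loops are dangerous with odd file names.
# Everything happens in a throwaway temp directory.
DIR=$(mktemp -d)
touch "$DIR/plain.txt" "$DIR/with space.txt"

# Naive word splitting would miscount (and worse, mangle names):
# for f in $(ls "$DIR"); do ... done   # sees 3 words, not 2 files

# The robust bash idiom: NUL-separated names, quoted everywhere.
find "$DIR" -type f -print0 |
while IFS= read -r -d '' f; do
    printf 'seen: %s\n' "$f"   # e.g. rm -- "$f" in a real cleanup
done
```

None of this extra ceremony is needed in the Guile version, where a file name is just an opaque string element of a list.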