Kernel? Wayland? PipeWire? The most important Linux terms, explained
What is the command line, and what does systemd do? What are Flatpaks, and what are they for? All of this is explained with the appropriate background – greatly expanded in this second version.
Anyone who says in 2021 that they don’t use Linux is lying – or doesn’t know any better. There is no way around the free operating system anymore. This is mainly due to the dominance of Linux in the server and cloud environment: If you call up a website, for example, there is a very good chance that a Linux system is behind it. But Linux also forms the basis of Android and, thus, the world’s most widely used operating system. It’s the same with Chrome OS, and even Windows now comes with its own Linux subsystem.
That doesn’t change the fact that the Linux world, with all its terms, remains closed to many users. After all, it continues to play a subordinate role in the classic desktop area, and on Android, it’s pretty well hidden under the Google interface. And even those who have been using Linux for a long time do not necessarily know exactly what is meant by things like systemd, Wayland, or Flatpak – or why these projects are relevant.
The following is an attempt to explain the most important terms relating to Linux as simply as possible. As always, comments and clarifications in the forum are gratefully accepted, which does not change the fact that the author is always right. The article should also be updated regularly if necessary to reflect current developments. After all, what is said should still be correct even if the article is later referred to.
Kernel
The kernel is the part of an operating system that is literally at the core of all processes. It establishes the connection between hardware and software, organizes how workloads are distributed, and forms the basic infrastructure for other system components and the execution of programs.
The kernel is also where it all started: In 1991, the Finnish student Linus Torvalds presented the first version of his – and, as the name suggests: his – hobby project: Linux. But the name carries a second allusion, namely to an older operating system called Unix. It was created in the late 1960s and, in its structure, served as an “inspiration” for Linux in many respects.
The inventor of one of the most important pieces of software of modern times, and yet there is practically no recent photo of him in the agency archives: Linus Torvalds, here at a conference in 2000.
Torvalds’ goal was to create a kind of open-source alternative to all the commercial – and then mostly quite expensive – Unix variants of the time. This plan can be described as a success: Linux has practically completely replaced Unix in most application areas over the years. However, there are still some other Unix-like operating systems, such as FreeBSD or OpenBSD, that retain some importance today. Apple has also drawn on the Unix world: the Darwin basis for macOS and iOS goes back to free Unix variants.
Bonus note for purists: Strictly speaking, the kernel is not the central part of Linux – it is Linux. Over the years, however, it has become common to use the name for the system as a whole, including the many components surrounding the kernel. If you like being right, you can note this fact under every article about Linux as a system as a factual correction. Or you can voluntarily refrain, so as not to make yourself unpopular with everyone else.
Open Source
“Open Source” has already been mentioned briefly, but what does that mean? To put it simply: the source code is publicly available. Anyone with the necessary know-how can see how the program is put together and what it does. Above all, the availability of the code also allows you to make your own changes to the software and develop it further. This is the main difference from proprietary operating systems such as Windows or macOS: their manufacturers keep the code strictly secret, and users only receive the programs in a machine-readable version – the so-called binary code.
Like so many things in this environment, the term “open source” is not without controversy. It is directly linked to one of the oldest conflicts in the Linux environment. Many prefer the more political term “free software,” whereby “free” is to be understood in the sense of “freedom” and not “free of charge.” It emphasizes that this is not just about the publication of the code but also about an accompanying philosophy intended to give users greater control over the software: one that grants the freedom to adapt it at will, but whose associated licensing models also impose obligations when passing the software on.
Free Software Licenses
That brings us to the next point, which is inextricably linked: software licenses. Every piece of free software is released under a certain license that sets the rules of the game. Historically, version 2 of the GNU General Public License (GPL) has dominated Linux – the license under which the kernel is still released today. Although it allows the software’s free use and modification, it also stipulates that all modifications must be placed under the GPL again.
This is an intentional decision, meant to promote the spread of free software by encouraging surrounding projects and software components that build on GPL code to use this license as well. As you can imagine, this “virality” is not universally popular. It prompted Microsoft boss Steve Ballmer in 2001 to describe Linux as a “cancer.”
However, the GPLv2 is not the only free software license. Companies, in particular, prefer to choose other licenses for their open-source projects – at least as far as possible. For example, Google uses the Apache 2.0 license for Android, allowing users to adopt the code in commercial projects without returning their changes. Of course, Google and all the smartphone manufacturers still have to publish their adaptations of the Linux kernel under the GPLv2 – something that the latter, in particular, sometimes had to be “gently” reminded of. In general, there have been repeated legal disputes in recent years involving code from open-source projects that companies have used in violation of the license terms. After all, every license model is only worth something if it can be enforced.
GNU
If you want to learn what a recursive acronym is, you’re in good hands with the GNU Project – the abbreviation GNU stands for “GNU’s Not Unix.” But probably more important for the frame of reference of this text: The project in question is responsible for some of the tools used around the kernel to set up the rest of the system. This role is why some in the open-source world insist that the correct name of a Linux system should be GNU/Linux.
It will probably no longer surprise anyone: Not everyone shares this position either, and most Linux distributions – we will come to this term later – simply ignore the addition. The GNU Project, by the way, has also been working on its own kernel, called the Hurd, for decades. This knowledge is essential, especially for cynical marginal notes in forum postings.
Desktop
The graphical user interface of an operating system is referred to as a desktop – so far, this should be generally known. But while there is a fixed solution under macOS or Windows, under Linux, you can choose from a wide range of different options, which all look different and, in some cases, also follow fundamentally different concepts.
The two best-known projects are called KDE and GNOME. Both emerged at the end of the 1990s, and for a long time they fought each other fiercely, with the respective advocates not always treating one another kindly. Over the years, however, a certain pragmatism has prevailed, and there has long been collaboration behind the scenes on the basic infrastructure.
Most distributions these days use GNOME as the default desktop, with Ubuntu and Fedora being prominent examples. Others use GNOME derivatives such as Cinnamon (Linux Mint) or Mate – a fork of an older GNOME generation, created because some did not want to follow the changes of GNOME 3.
In addition to KDE, there are also several other, largely independent desktop projects such as Xfce, Deepin, Budgie, and LXQt. These can be installed later on many Linux distributions, and some distributions also offer download variants in which these alternative desktops are set up by default.
Toolkits
It would be very unpleasant if every developer had to reinvent the basic graphic elements for a program – both for said developers and the users, who would inevitably have to struggle with a wild mixture of inconsistent operating concepts. For this reason, there are widget toolkits that provide the most important elements in a standardized way for easy access. They are the construction kit for the user interface of a program.
Under Linux, GTK+ and Qt dominate, which are primarily associated with GNOME and KDE but are also used as a basis for other desktop projects. Historically, there have been plenty of other toolkits, but they have lost a lot of their relevance over the years.
Of course, this free choice is nice for developers, but it also has certain disadvantages: A program developed for Qt/KDE does not always fit visually into a GNOME desktop, and the same goes for a GTK+/GNOME program under KDE. Although the two projects have removed many of the hurdles in this regard over the years, this effect cannot be entirely denied.
Distribution
Now we finally come to that mysterious word mentioned a couple of times before: distribution. Its relevance can be explained by a simple circumstance: There is no such thing as “Linux” per se. Instead, various providers put together a functioning system from all the parts – from the kernel to the desktop to selected programs – called a distribution, or “distro” for short. Such distributions exist for almost every purpose; the range extends from the “Internet of Things” to the desktop, server, and cloud. If you want to know more, a look at Distrowatch.com shows just how many different offerings there are.
Of course, there are larger and smaller projects here as well. One of the best known is Debian, on which many other distributions are based, such as Ubuntu. The offerings from Red Hat and SUSE also have a long tradition, especially their community distributions Fedora and openSUSE. Both companies, however, make their money with offerings for the corporate sector – and Red Hat is now part of IBM anyway.
Also worth mentioning is Arch Linux, which has enjoyed increasing popularity over the years, especially among advanced Linux users. For entry into this world, Arch-based distributions such as Manjaro or EndeavourOS are recommended, as they make setting up the system considerably simpler.
Just how important the role of the distribution – and thus also its choice – is, is shown by another circumstance: Traditionally, it is practically entirely responsible for the delivery and maintenance of all software. It is the distribution developers who create the executable binary files from the source code supplied by the individual projects.
However, there are exceptions to this model. Gentoo, for example, is a distribution in which all packages are created directly on the users’ computers from the source code – a process called compiling. In general, it is also possible to build your own packages on other Linux distributions. However, this is only recommended to a limited extent for the general public; after all, it is a very time- and energy-consuming process.
Rolling release or classic
Another important distinction: Most distributions follow a concept in which there are major new releases at regular intervals, while in between the focus is mainly on bug fixes. Debian is a prime example of this approach, which is primarily designed for maximum system stability.
At the same time, some distributions follow a “rolling release” model in which all components are constantly kept up to date. Many distributions intended for desktop use fall somewhere between these two approaches.
For example, Fedora and Ubuntu constantly update desktop programs such as Firefox or LibreOffice. In contrast, the desktop environment is updated with a major version jump every six months. With Fedora, even the Linux kernel and many other system components are constantly upgraded to new versions.
Live Systems
The many available Linux distributions do not make the choice easy for those interested. Simply installing a system on the off chance is only advisable to a limited extent. Luckily, that is not necessary either, as the Linux world has an excellent answer in this area.
Live systems allow Linux distributions to be tried out directly from a USB stick – or permanently installed on your computer.
Most desktop-oriented distributions are offered as so-called live images. With the help of simple tools such as the Fedora Media Writer, these can be copied to a USB stick, from which the system can then be started directly – i.e., without any installation. And if you like what you see, an installer is usually available at this point to set up the system on the computer.
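For those who prefer the command line over graphical tools, the same step can be sketched roughly as follows; the image name and /dev/sdX are placeholders, not real values, so the write command itself is shown only as a comment:

```shell
# Hypothetical sketch of writing a downloaded live image to a USB stick.
# /dev/sdX is a placeholder -- check the real device name with `lsblk`
# first, because dd overwrites its target without asking:
#
#   sudo dd if=Some-Distro-Live.iso of=/dev/sdX bs=4M status=progress oflag=sync
#
# Verifying the download against the published checksum is always
# worthwhile before writing it. Demonstrated here with a stand-in file:
printf 'pretend this is an ISO\n' > image.iso
sha256sum image.iso
```

The graphical tools such as Fedora Media Writer do essentially the same thing behind the scenes, just with fewer opportunities to pick the wrong device.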
However, some distributions are primarily designed to be used from USB sticks; the anonymizing Tails could be mentioned here, for example. In these cases, it is often possible to store personal data and settings in a specially allocated area – something that is not usually the case with live systems.
Package Manager
This brings us to the perfect transition to the next topic: package managers. This is the software responsible for managing the installed software. If that sounds familiar: Yes, this central access can certainly be seen as a kind of ancestor of all the app stores in the mobile world – even if there are differences in the details.
The most important package manager under Linux is certainly DPKG with the associated tool Apt, which comes from the Debian world and has been adopted by many other distributions. Other distributions prefer RPM with tools like DNF, Zypper, or (the older) YUM. Some distributions also use their own package managers, such as Pacman in Arch Linux.
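Which of these tools a given system uses can be detected with a small, generic sketch (the list of candidates is illustrative, not exhaustive):

```shell
# Detect which well-known package-manager frontend is installed by
# checking the candidates in turn. `command -v` succeeds only if the
# named tool exists on the PATH.
for pm in apt dnf zypper pacman; do
    if command -v "$pm" >/dev/null 2>&1; then
        echo "Found package manager: $pm"
        break
    fi
done
echo "Detection finished."
```

Scripts that need to work across distributions often start with exactly this kind of check, since the install commands differ from family to family.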
Although this central responsibility of the distribution has generally proven its worth over the years, it also has certain disadvantages. First of all, this system makes it quite difficult for developers of non-free software to support Linux as a platform. After all, they would have to offer their programs in adapted versions for all possible distributions. In practice, it often means that there are only packages for Ubuntu – and even then, only for selected versions.
But even among open-source developers, not everyone is satisfied with this approach. Large projects such as LibreOffice or Firefox, in particular, often struggle with error reports where it is not entirely clear whether the problem lies in their own software or in the adjustments and optimizations of the respective distribution. In addition, when new versions are released, developers cannot determine when they will reach all users. Distributions often ship outdated versions, which is pleasant neither for developers nor for users.
Snap and Flatpak
Whatever the package format, it is usually hidden behind a graphical software center for the user.
Accordingly, there are currently several attempts to establish “more modern” package formats, of which Snap and Flatpak are the best known. The idea behind them is that the same package should run on all distributions and can therefore be maintained directly by the respective developers instead of by the distribution.
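As a sketch of what this looks like in practice, the typical Flatpak commands are identical on every distribution – which is the whole point of the format. They are shown here as comments, since neither the flatpak tool nor the Flathub remote is guaranteed to be present on a given system:

```shell
# Typical Flatpak workflow (package names follow a reverse-DNS convention):
#
#   flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
#   flatpak install flathub org.mozilla.firefox
#   flatpak update          # updates all installed Flatpaks at once
#   flatpak list            # shows what is installed
#
# Only the availability check actually runs here:
command -v flatpak >/dev/null 2>&1 && echo "flatpak is installed" || echo "flatpak is not installed"
```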
In addition, the programs delivered in this way should run isolated from the rest of the system, which promises considerable advantages from a security and privacy perspective. While classic desktop programs under Linux theoretically have unrestricted access to all of the user’s private data, with Flatpaks, all of this is to be handled via a permission system otherwise known from smartphones.
It must be emphasized that this vision has not yet been fully implemented, since it also requires changes in other projects – above all, the desktops. But the direction of travel is set. The idea that program developers should take care of their Flatpaks and Snaps themselves is also still in its infancy, although this is already the case in some large projects like Firefox and LibreOffice.
Of course, the new package formats are not without controversy either. The biggest point of criticism – without wanting to go into too much detail – is that such a system comes with a certain overhead. Above all, more storage space is used on the data carrier. Various tricks are used to minimize this effect, but classic package managers are still more economical in this regard.
Flatpak support is now available on most major distributions, and some even base their offerings entirely around it. On the other hand, Snaps are an invention of Ubuntu manufacturer Canonical, whose importance is largely limited to this distribution.
In addition to Flatpak and Snap, other formats aim to make software delivery under Linux independent of the distributions. An ancestor of this concept is AppImage, which goes back to 2004 under its former name “klik.” Conceptually, it is kept a bit simpler: the security advantages of Flatpaks and Snaps, for example, do not exist here. On the other hand, AppImages often start faster, especially compared to Snaps, which have frequently been criticized for their startup times.
By the way: It all sounds very complicated, but in practice, one shouldn’t forget that most users don’t notice much of it. After all, even with Linux desktops, the programs can be easily installed via a graphical software center, whether based on a Deb package or a Flatpak. But we are here to explain the background.
Immutable Systems / OSTree
As much as individual distributions may differ in their selection of programs, their basic structure around a classic package manager is very similar – and has remained the same for many years. This is different with a new generation of distributions: Whether Fedora Silverblue / Kinoite or Endless OS, they all try to structurally modernize Linux systems.
The idea is that there is a core system that users cannot modify – hence “immutable.” If there is an update for these components, the old data is not overwritten; instead, a new version of the system is created. The tool used for this, called OSTree, is often described as “version management for operating systems.”
Only at the next reboot does the system switch to the new version. This has one main advantage: If there are problems with an update, the old system state can simply be booted again. In this way, even large version jumps can be completed within a few minutes without fear that something will go wrong.
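Assuming an rpm-ostree-based system such as Fedora Silverblue, the update cycle just described looks roughly like this; the commands are shown as comments since they only work on such a system:

```shell
# Update flow on an OSTree-based distribution:
#
#   rpm-ostree upgrade      # stage a new system version alongside the old one
#   systemctl reboot        # the switch happens at the next boot
#   rpm-ostree rollback     # boot back into the previous version if needed
#   rpm-ostree status       # list the available deployments
#
echo "The running system is never modified in place."
```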
However, an immutable system also means that users cannot simply install programs into it afterward (okay, actually, that is still possible at the moment, but it is not in the spirit of the concept, so we will set that aside for now). These distributions rely entirely on Flatpaks for program delivery, since these can also be installed within the user account.
X Window System / X11
Graphical interfaces may have been a fixed part of everyday computer life for a long time, but that doesn’t change the fact that there was a time when communication with the computer happened exclusively via text, without any icons or windows. Under Unix, the X Window System developed at MIT heralded the age of graphical interfaces. Introduced in 1984, the most recent generation of the underlying protocol followed in 1987 – called X11. Of course, there have been all sorts of extensions over the years, but the basics have remained the same.
Even modern desktops, such as KDE Plasma pictured here, were initially created on an X11 basis – and still carry all sorts of dependencies on it.
When Linux appeared on the scene in the 1990s, X11 was the obvious choice to give the new operating system a graphical user interface. For a long time, XFree86 was the preferred implementation of X11 under Linux, but after all sorts of conflicts, the X.org project split off in 2004, and all the important companies and projects quickly got behind it. The project can also boast one of the best domain names ever: x.org.
The fork to X.org brought some fresh impetus into development, but that didn’t change the fact that the architecture dates from a time when modern desktops – and all the technical challenges that come with them – were not even dreamed of. X11, with its concept of a comprehensive graphics server, was very well suited for remote operation – i.e., access from another computer – but the developers found it increasingly difficult to update the code for modern desktop requirements. So much of that energy went into a completely new development, namely:
Wayland
The simplest description of Wayland is that it is the successor to X.org – that is, it takes over the task of displaying graphical content. Strictly speaking, of course, it’s a bit more complicated: Wayland does not see itself as the central solution for everything but as a protocol that has to be implemented by the individual desktops. At the same time, many tasks that X.org had historically taken over were moved elsewhere, such as into the kernel, graphics libraries, or even the desktop itself. Nevertheless, the classification as an X.org successor is quite permissible, with a certain tolerance for imprecision.
The problem with this: X.org is historically so firmly anchored in the Linux world that the switch to Wayland is a Herculean task. Everything has to be adapted: the desktops, the graphical toolkits, the graphics drivers, and finally, the application programs themselves. This also means that although the project was started in 2008, Wayland has only recently become the default choice for many distributions. Ubuntu, for example, has only just switched its desktop to Wayland, mainly because it took a long time before Nvidia offered a fully compatible version of its own – proprietary – drivers. In addition, the Wayland developers still had to close some functional gaps compared to X.org – especially concerning the remote capabilities that are so central to X11. Individual distributions such as Fedora, which rely entirely on free drivers, have nevertheless shipped Wayland as the standard solution for some time.
Whether X11 or Wayland: At first glance, users see no difference. However, you can usually choose between these options when logging in.
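This can also be checked from a running session: login managers set the XDG_SESSION_TYPE environment variable to “wayland” or “x11.” A minimal check (in a headless environment, such as an SSH session, the variable may simply be unset):

```shell
# Print which display technology the current session uses.
# Falls back to "unknown" if the variable is not set.
echo "Session type: ${XDG_SESSION_TYPE:-unknown}"
```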
Currently (as of the end of 2021), GNOME and GNOME-based desktops work perfectly with Wayland – in some respects even better than with X.org, because some new features are no longer implemented for the old solution due to the high technical hurdles mentioned. Wayland support around KDE has also recently made massive progress. Nevertheless, a lot of water will flow down the open-source stream before X.org completely disappears from the Linux desktop – if that ever happens. This is because, as mentioned, the application programs also have to be adapted, and that, in turn, is a lot of work. Firefox, for example, made this change only recently, while Chrome/Chromium is still working on it. And for older programs that are no longer actively developed, things look bad anyway.
XWayland
This, of course, raises the question: how do all these programs run on a modern Wayland-based desktop? Via a compatibility solution called XWayland, which opens its own X11 graphics server within the rest of the desktop for the respective program. The users don’t notice anything – or, let’s say, almost nothing. To be honest, a few bugs in the interaction of these worlds keep popping up, but those will (hopefully) be fixed.
Systemd
After all the controversy, let’s finally get back to a completely undisputed topic: systemd. At its core, it is primarily responsible for starting the system and all the services required for this, but over the years, it has also taken on all sorts of related tasks. For example, it now contains tools for recording all activities (logging), setting up the network, or even setting the time and date. It is thus one of the most important – and biggest – projects for current Linux systems.
Time to admit something: the notion that systemd is uncontroversial may not have been entirely true. On the contrary, there are few projects over which Linux fans prefer to bang their heads virtually. On the one hand, systemd has taken on many tasks that were historically handled by an assortment of individual tools – which, for purists, contradicts the classic Unix philosophy of one tool for one task. On the other hand, this is precisely a strength of systemd, since it has standardized many of these tasks.
In earlier years, the system start was handled by other solutions such as SysVinit. Even before systemd, there were attempts to establish alternatives, including Upstart from Ubuntu manufacturer Canonical. In the meantime, however, practically all major distributions – including Ubuntu – have settled on systemd. Of course, the open-source world also has room for different opinions here: If you want, you can use distributions such as the Debian derivative Devuan, in which other programs completely replace systemd. Current alternatives to systemd include OpenRC and Runit.
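For the curious, a few everyday interactions with systemd can be sketched as follows; the systemctl and journalctl calls are shown as comments, since they require a running systemd instance (the service name is just an example):

```shell
# Common systemctl/journalctl invocations:
#
#   systemctl status NetworkManager   # state of a single service
#   systemctl list-units --failed     # anything that did not start cleanly
#   journalctl -b                     # the log since the last boot
#
# Whether systemd is actually in charge can be read from process 1,
# since the init system is always the first process on the system:
cat /proc/1/comm
```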
PipeWire vs. PulseAudio
Audio and Linux: In the past, it was not an easy relationship, to put it mildly. For years, the default solution from a desktop perspective has been the sound server PulseAudio – a solution not everyone was satisfied with due to various weaknesses, so alternatives such as Jack were established for the professional sector.
In the meantime, however, there is something like a designated successor: PipeWire is supposed to replace all the different Linux solutions in the audio area. Not least, it promises significantly better performance than PulseAudio; its lower latency values also make it suitable for the professional tasks mentioned. In addition, Bluetooth audio support is significantly better with PipeWire. Also important for the future of the Linux desktop: It ties in with Flatpak’s permission system.
What is particularly pleasing is that PipeWire can be used as a drop-in replacement for PulseAudio. Programs therefore do not have to be adapted, which makes the switch to the new technology easier. Strictly speaking, PipeWire is not just an audio solution but a multimedia solution, as it is also used for tasks such as screen sharing under Wayland.
Command Line
At this point, we recall what was mentioned earlier in the discussion of X.org, namely that there was a time when computers did not yet have graphical interfaces. The command line is something like the further development of the terminals of the time – a kind of text interface for the computer.
A lot can be done via the command line. Here, for example, are details of all snap packages installed on a current Ubuntu system.
The rumor persists that a Linux desktop requires certain basic knowledge of the command line. That may have been true in earlier years, but with a modern desktop from Ubuntu, Fedora, or Linux Mint, it is nothing more than that: a rumor. There, you can get along very well with just the graphical tools.
At the same time, it can be worthwhile to deal with this topic. Such a modern command line is also an extremely powerful thing that can be used to customize and control the system at will. Above all, you learn how such a system is structured and what is responsible for what. You don’t have to be interested, but it’s there as an option.
There is a wealth of terminal programs here for every taste, and within them, different shells can be run – each with its own options and peculiarities in how this text interface is implemented. The most widely used shell on Linux is bash. In fact, the average user is unlikely to touch anything else, since it’s the default choice pretty much everywhere.
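A small taste of that power: the classic Unix approach of chaining small tools with pipes, where the output of one program becomes the input of the next, works in bash and every other common shell. A minimal sketch:

```shell
# Three lines of text are generated, then sorted, then counted --
# each step is its own small tool, glued together with pipes.
printf 'wayland\nkernel\nflatpak\n' | sort | head -n 1   # prints: flatpak
printf 'wayland\nkernel\nflatpak\n' | wc -l              # prints: 3
```

The same pattern scales from toy examples like this to processing log files with millions of lines.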
Root / Sudo
For security reasons, the options for regular users on a Linux system are severely limited. There are good reasons for this: After all, it is intended to prevent a careless change from rendering the computer unbootable, or a seemingly harmless program with malicious intentions from setting up total surveillance of the system. In addition, on systems used by several people, it is important to separate their data cleanly so that no one can see all the private information of the others.
For administration purposes, however, comprehensive access is often required. In Linux, the user called “root” is responsible for this; the counterpart in the Windows world would be the “administrator.” Historically, root is a separate user with its own password. However, the “sudo” tool also gives normal users the option of running individual programs with root permissions – if they are authorized to do so. As an additional safeguard, the user password must be entered again the first time sudo is run in a session.
“Sudo” has been around for a long time, but in recent years, many desktop distributions have dispensed entirely with setting up a separate root account. Instead, the first user created automatically gets sudo permissions. The use of sudo is very simple: the term is simply placed in front of the actual command on the command line. And if you don’t know what you’re doing, you shouldn’t do it anyway.
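In practice, the pattern looks like this; the dnf invocation is just an illustrative example (it assumes an RPM-based system and is shown as a comment, since sudo prompts for a password):

```shell
# sudo simply prefixes the command that should run with root rights:
#
#   sudo dnf upgrade        # run the package manager as root
#
# Whether a shell is currently running as root can be checked via the
# numeric user ID -- root is always ID 0:
if [ "$(id -u)" -eq 0 ]; then
    echo "running as root"
else
    echo "running as a regular user"
fi
```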