2
Welcome to Linux

2.1 Learning the Linux Command Line

Working with a text-based Command-Line Interface (CLI), without a Graphical User Interface (GUI), can be intimidating at first glance, as most of us are accustomed to using a GUI. Understanding the command-line environment, however, reveals how powerful and efficient it is.

bash
echo "Hello, Linux!"
text
Hello, Linux!

Most senior programmers in the industry and veteran Linux system administrators use the Command-Line Interface (CLI) almost exclusively for their day-to-day interaction with the computer. The reason is that the GUI was designed to simplify human interaction with computers rather than to improve the computer’s efficiency at doing tasks.

This chapter introduces the fundamentals of working with the Linux command line using a very common shell called Bash, which will be important later when working with ROS (Robot Operating System) and in any future endeavour the reader may pursue in fields related to computer science. In this chapter, we will:

  • Learn what the command line is and how it works,

  • Work with files and folders,

  • See how Linux protects files from unauthorised access with permissions,

  • Get familiar with common commands and learn how to connect commands together with pipes,

  • Get an introduction to some more complex command-line tasks.

This part of the lecture-book aims to give practical knowledge on working with the widely used Bash shell, in case you choose to extend your learning into user management, network configuration, programming and development, system administration, or if you catch the tinkerer-bug.

2.1.1 A Short History of Computer Interfaces

The CLI evolved from a form of dialogue conducted by humans over teleprinter (TTY) machines, in which human operators remotely exchanged information with one another. Early computer systems often used teleprinter machines as the means of interaction with a human operator.

The computer became one end of the human-to-human teleprinter model.

The mechanical teleprinter was then replaced by a terminal: a keyboard and screen emulating the teleprinter. Smart terminals permitted additional functions, such as cursor movement over the entire screen, or local editing of data on the terminal for transmission to the computer.

As the microcomputer revolution replaced the traditional systems, hardware terminals were replaced by terminal emulators: Personal Computer (PC) software that interpreted terminal signals sent through the PC’s serial ports. These were typically used to interface an organisation’s new PCs with their existing mini- or mainframe computers, or to connect PC to PC. Some of these PCs were running Bulletin Board System software.

Early operating system CLIs were implemented as part of resident monitor programs, and could not easily be replaced. The first implementation of the shell as a replaceable component was part of the Multics time-sharing operating system. In 1964, MIT Computation Center staff member Louis Pouzin developed the RUNCOM tool for executing command scripts while allowing argument substitution.

Pouzin coined the term “shell” to describe the technique of using commands like a programming language, and wrote a paper about how to implement the idea in the Multics operating system. Pouzin returned to his native France in 1965, and the first Multics shell was developed by Glenda Schroeder. The first Unix shell, the V6 shell, was developed by Ken Thompson at Bell Labs in 1971 and was modelled after Schroeder’s Multics shell. The Bourne shell was introduced in 1977 as a replacement for the V6 shell. Although it is used as an interactive command interpreter, it was also intended as a scripting language and contains most of the features that are commonly considered to produce structured programs.

The Bourne shell led to the development of the KornShell (ksh), the Almquist shell (ash), and the popular Bourne-again shell (or bash). Early microcomputers themselves were based on a CLI such as CP/M, DOS or AppleSoft BASIC. During the 1980s and 1990s, the introduction of the Apple Macintosh and of Microsoft Windows on PCs saw the command-line interface replaced as the primary user interface by the graphical user interface. The command line remained available as an alternative user interface, often used by system administrators and other advanced users for system administration, computer programming and batch processing.

Shells in other Operating Systems
Windows

In November 2006, Microsoft released version 1.0 of Windows PowerShell, which combined features of traditional Unix shells with their proprietary object-oriented .NET Framework. MinGW and Cygwin are open-source packages for Windows that offer a Unix-like CLI. Microsoft provides MKS Inc.’s ksh implementation MKS Korn shell for Windows through their Services for UNIX add-on.

Macintosh

Since 2001, the Macintosh operating system macOS has been based on a Unix-like operating system called Darwin. On these computers, users can access a Unix-like CLI by running the terminal emulator program called Terminal, or by remotely logging into the machine using ssh. The Z shell is the default shell for macOS (as of macOS Catalina), with bash, tcsh, and the KornShell also provided.

Before macOS Catalina, bash was the default shell.
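
As a quick first exercise, you can check which shell you are running and change your default one. This is a minimal sketch; it assumes the shell you pick (here /bin/bash) is installed and listed in /etc/shells:

bash
# Print the shell started at login for the current user
echo $SHELL
# List the shells permitted on this system
cat /etc/shells
# Change the default login shell to bash (path assumed to be /bin/bash)
chsh -s /bin/bash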

2.1.2 Linux in a Nutshell

Figure 2.4: The kernel mapping of the Linux operating system.
A Brief Description of What Linux Does

Linux is a general-purpose computer operating system, originally released in 1991 by Linus Torvalds, and began as his personal project [39]. The aim was to create a new free operating system kernel, and the resulting kernel has been marked by constant growth throughout its history. There were alternative OSs on the market, such as MINIX, but MINIX was under a proprietary licence and only became open-source in 2000. This was one of the reasons why Linux was attempted in the first place: to create a truly open-source implementation of UNIX.

Linux is defined by its kernel, called the Linux kernel, which is the core component of the system. This kernel interacts with the computer hardware to allow software and other hardware to exchange information, which you can see in Fig. 2.4.

Imagine the kernel as the middle-man between your software and the hardware. This allows you to write a program without worrying too much about what the hardware is.
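
You can ask the kernel to identify itself from the command line; the uname utility, part of every standard Linux install, reports the kernel name, release and architecture (the output shown in the comment is an example and will differ per machine):

bash
# Print kernel name, kernel release and hardware architecture
uname -s -r -m
# Example output: Linux 5.15.0-91-generic x86_64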

As Linux is an open-source project, and probably one of the greatest collaborative software works in history, it has a rich history. It was inspired by MINIX which, in turn, was inspired by UNIX, with UNIX being the first portable operating system ever designed [40], as it was mostly written in the C programming language [41].

Open Source v. Closed Source

In programming there are two main approaches when it comes to sharing code:

  • Closed source, which means you are not allowed to edit the code the program is running on,

  • Open source, which means you are free to edit and share the code as you see fit.

Linux is based on a philosophy of software and operating systems being free .

Software should be free of cost and freely modifiable.

The software licence which allows this, in the case of the Linux kernel, is called the GNU General Public License (GPL). GPL licences are a series of widely used free software licences, or copyleft licences, that guarantee end users the freedoms to run, study, share, or modify the software. This emphasis on freedom, both of cost and of modification, has helped Linux become popular for many different applications and purposes, from tinkering and programming to the massive databases of major companies.

Linux has popped up everywhere: from the majority of the servers that run the web services we all use, to supercomputers, Wi-Fi routers, cars, mobile phones, and everywhere in between. Odds are that you are close to a device that uses some part of the Linux kernel. In the midst of all these different kinds of Linux installations, the most important distinction you’ll need to be aware of is the genealogy of Linux.

2.1.3 Linux Distributions

While the Linux kernel is more or less the same across nearly all installations of Linux, the software that surrounds the kernel, providing capabilities like software package management, control of services, and the location of configuration files, differs between them. Many of the tools that come packaged with Linux come from the GNU Project and aren’t actually a part of Linux; taken together, the combination of the kernel and these common tools is often referred to as GNU/Linux. Different groups of software and configuration choices that are maintained by individuals or groups of people are called distributions, or distros. Most major distributions of Linux fall into categories based on the original distribution from which they were derived. The most popular of these are listed in Table 2.1; the entire family history of Linux can be viewed in the Distribution Timeline.

Linux Mint
  Advantages: Superb collection of custom tools developed in-house; hundreds of user-friendly enhancements; inclusion of multimedia codecs; open to users’ suggestions.
  Disadvantages: The project does not issue security advisories.

Ubuntu
  Advantages: Fixed release cycle and support period; long-term support (LTS) variants with five years of security updates; novice-friendly; wealth of documentation, both official and user-contributed.
  Disadvantages: Lacks compatibility with Debian; frequent major changes tend to drive some users away; non-LTS releases come with only nine months of security support.

Arch Linux
  Advantages: Excellent software management infrastructure; unparalleled customisation and tweaking options; superb on-line documentation.
  Disadvantages: Occasional instability and risk of breakdown.

Gentoo
  Advantages: Highly flexible; endlessly customisable; able to use a range of compile-time configurations and init systems, and to run on many architectures.
  Disadvantages: Requires a higher degree of knowledge to use; upgrading packages from source can be time-consuming.

Slackware Linux
  Advantages: Considered highly stable, clean and largely bug-free; strong adherence to UNIX principles.
  Disadvantages: Limited number of officially supported applications; conservative in terms of base package selection; complex upgrade procedure.

Debian
  Advantages: Very stable; remarkable quality control; includes over 30,000 software packages; supports more processor architectures than any other Linux distribution.
  Disadvantages: Conservative, since support for many processor architectures means newer technologies are not always included; slow release cycle (one stable release every 2-3 years); discussions on developer mailing lists and blogs can be uncultured at times.

Fedora
  Advantages: Highly innovative; outstanding security features; large number of supported packages; strict adherence to the free software philosophy; availability of live spins featuring many popular desktop environments.
  Disadvantages: Priorities tend to lean towards enterprise features rather than desktop usability; some bleeding-edge features, such as switching early to KDE 4 and GNOME 3, occasionally alienate some desktop users.

openSUSE
  Advantages: Comprehensive and intuitive configuration tool; large repository of software packages; excellent web site infrastructure and printed documentation; Btrfs with boot environments by default.
  Disadvantages: Its resource-heavy desktop setup and graphical utilities are sometimes seen as “bloated and slow”.

Red Hat
  Advantages: Long-term, commercial support of ten years or more; stability.
  Disadvantages: Lacks latest Linux technologies; small software repositories; licensing restrictions.

FreeBSD
  Advantages: Fast and stable; availability of over 24,000 software applications (or "ports") for installation; very good documentation; native ZFS support and boot environments.
  Disadvantages: Tends to lag behind Linux in terms of support for new and exotic hardware; limited availability of commercial applications; lacks graphical configuration tools.

Table 2.1: Most popular distributions according to DistroWatch.

Depending on the reader’s future work or study area, they will likely end up learning to use the command line on a system that inherits from one of these distributions, most likely one derived from Debian or Red Hat. Linux Mint, Ubuntu, Elementary OS, and Kali Linux are all derived from Debian; CentOS, Fedora, and Red Hat Enterprise Linux are derived from Red Hat.

The history of all of these different distributions of Linux is beyond the scope of this document. What it means at its core, though, is that you need to be aware of which system is in use and be ready to adapt to account for differences between distributions. As we begin working with Linux through the command line, it will become apparent that most of what can be done is the same across the major distributions.
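
A first step in adapting is finding out which distribution you are on. A minimal sketch follows; the /etc/os-release file is present on most modern distributions, while lsb_release may need to be installed separately:

bash
# Identify the running distribution and the family it derives from
cat /etc/os-release
# Alternative, if the lsb-release package is installed
lsb_release -a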

2.2 Installation

There are a wide variety of ways of installing Linux on a computer. These include:

Creating a virtual environment

Allows you to install Linux within your primary Operating System (OS). There are a great many benefits to this approach, as it allows you to test an experimental OS without affecting your primary setup. It also allows you to run simple applications without leaving your primary OS, and can offer various interoperability options such as shared folders and shared network settings.

The disadvantages include some hit to performance, as both the host and the virtual OS have to share the same pool of resources. In addition, the virtual graphics memory is typically limited (to 256 MB in VirtualBox, for example).

Creating a partition on your computer

and installing Linux alongside your primary OS. This option is generally chosen by people who know what they are doing, as they often have one or two pieces of software with no Linux alternative and therefore would like to keep their primary OS for that specific software.

Using a container

One of the more popular options in recent times. It works similarly to a virtual machine, but heavily stripped down: a container houses just enough components to run an application. Think of running Linux per application instead of a full-blown OS.

There are many merits and demerits to each option; in this lecture we will focus on building a container image for both Linux programming and Robot Operating System 2 (ROS 2) applications.

2.3 Docker

Figure 2.5: The Docker logo.

Docker is an open-source platform that has completely changed the way we develop, deploy, and use applications. The application development lifecycle is a dynamic process, and developers are always looking for ways to make it more efficient. Docker enables developers to package their work and all of its dependencies into standardised units called containers by utilising containerisation technology.

By separating applications from the underlying infrastructure, these lightweight containers provide reliable performance and functionality in a variety of environments. This makes Docker a game-changer for developers, freeing them up to concentrate on creating great software rather than handling difficult infrastructure.

Regardless of your level of experience, Docker provides an extensive feature set and a strong toolset that can greatly enhance your development process. In this section, we will provide you with a thorough understanding of Docker, going over its main features, advantages, and ways to use it to develop, launch, and distribute applications more quickly and easily.

Docker is not the only containerisation software available; there is also Podman, which is almost fully compatible with Docker’s CLI and is developed by Red Hat.

Information: Docker v. Podman

The biggest difference is the underlying architecture each is built on. Docker relies heavily on a daemon, while Podman is daemonless. Think of a daemon as a process that runs in the background on the host OS. In Docker’s case, its daemon is responsible for managing Docker objects (such as images and containers) and communicating with other systems. To run its daemon, Docker uses a package called dockerd.

Daemons typically require root-level access to the machine they run on, which lends itself to security vulnerabilities: if a bad actor gains access to the daemon, they now have access to the entire machine.

Podman’s daemonless architecture comes with a few benefits. Since running daemons almost always requires root privileges, a daemonless architecture can be thought of as “rootless.” This means that users who don’t have system-level access to the machine their containers are running on can still use Podman, which isn’t always the case with Docker. Instead of a daemon, Podman integrates with systemd, which is native to the Linux operating system; for this reason Podman is often considered more lightweight than Docker and will usually see faster container spin-up times.
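
Because Podman mirrors Docker’s command-line syntax, most Docker commands in this chapter should also work under Podman by simply swapping the binary name. A minimal sketch (the alias is only a convenience, and the image name is written fully qualified because Podman prefers it that way):

bash
# Podman mirrors the Docker CLI, so an alias is often enough
alias docker=podman
# The familiar sub-commands then work as before, e.g.
docker run --rm docker.io/library/hello-world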

2.3.1 Dockerfile

A Dockerfile is a text document in which you can lay down all the instructions you want for an image to be created.

  • The first entry in the file specifies the base image, which is a pre-made image containing all the dependencies you need for your application.

  • Then there are instructions you can add to the Dockerfile to install additional software, copy files, or run scripts.

The result is a Docker image:

a self-sufficient, executable file with all the information needed to run an application.

Dockerfiles are an easy way to create and deploy applications. They help in creating environments consistently and reproducibly, with less effort.

A Dockerfile is used to create new custom images, prepared individually according to specific needs. For instance, a Docker image can contain a particular version of a web server or a database server, or, as in our case, an entire Linux OS with ROS installed.
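
To get a feel for the shape of such a document before dissecting the lecture’s own Dockerfile, here is a minimal sketch; the base image and the installed package are illustrative choices only, not the ones used below:

dockerfile
# Start from a pre-made base image
FROM ubuntu:22.04
# Install one extra package on top of it
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*
# Command executed when a container starts from this image
CMD ["curl", "--version"]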

For the preparation of this lecture, the following Dockerfile was written. It is shown in snippets below, which is a convenient way to explain how the document works:

dockerfile
# Declare the ubuntu version
FROM ubuntu:jammy-20250404

Here we are declaring a base image. A base image is a bare-bones OS and/or application on which we build our software. In this case it is Ubuntu Jammy Jellyfish (22.04).

dockerfile
# Define the target-platform and the current maintainer
ARG TARGETPLATFORM
LABEL maintainer="dtm@mci4me.at"

We define an ARG, which is a build-time variable. We set TARGETPLATFORM, which we can use to change the architecture of the install. LABEL attaches metadata to the image, in this case a simple email address for the current maintainer.

dockerfile
# Execute the following command as string
SHELL ["/bin/bash", "-c"]

Next we configure the shell of the container to use bash; the -c flag makes bash treat the given input as a command string.

dockerfile
# Upgrade Ubuntu Jammy and remove downloaded list of packages
RUN apt-get update -q && \
    DEBIAN_FRONTEND=noninteractive apt-get upgrade -y && \
    apt-get autoclean && \
    apt-get autoremove && \
    rm -rf /var/lib/apt/lists/*

We now start building the OS. First we declare an update of the OS and ask it to run non-interactively. Once the system is upgraded, we remove the downloaded package lists to save space.

dockerfile
# Install Ubuntu Mate desktop and remove downloaded list of packages
RUN apt-get update -q && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
        ubuntu-mate-desktop && \
    apt-get autoclean && \
    apt-get autoremove && \
    rm -rf /var/lib/apt/lists/*

To make our programming easier and to create a friendlier environment, we shall install a GUI. There is a wide variety of desktop environments for Linux, such as GNOME, KDE and Xfce, but for our application we shall use MATE, one of the desktop environments shipped with Linux Mint.

dockerfile
# Add important packages
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
        tigervnc-standalone-server tigervnc-common \
        supervisor wget curl gosu git sudo python3-pip tini nano \
        build-essential vim sudo lsb-release locales info \
        bash-completion tzdata emacs \
        dos2unix && \
    apt-get autoclean && \
    apt-get autoremove && \
    rm -rf /var/lib/apt/lists/*

We now start installing software on top of the OS. Remember, we are installing a bare-bones system, which means it is stripped of all the software suites we take for granted.

dockerfile
# Install noVNC and Websockify
RUN git clone \
    https://github.com/AtsushiSaito/noVNC.git \
    -b add_clipboard_support /usr/lib/novnc
RUN pip install git+https://github.com/novnc/websockify.git@v0.10.0
RUN ln -s /usr/lib/novnc/vnc.html /usr/lib/novnc/index.html

# Set remote resize function enabled by default
RUN sed -i \
    "s/UI.initSetting('resize', 'off');/UI.initSetting('resize', 'remote');/g" \
    /usr/lib/novnc/app/ui.js

# Disable auto update and crash report
RUN sed -i 's/Prompt=.*/Prompt=never/' /etc/update-manager/release-upgrades
RUN sed -i 's/enabled=1/enabled=0/g' /etc/default/apport

This section is all about installing Virtual Network Computing (VNC), a graphical desktop-sharing system which allows remote control of another computer. It transmits the keyboard and mouse input from one computer to another, relaying the graphical-screen updates over a network. We also do some text manipulation with sed to fix some glitches.

dockerfile
# Install Firefox and its configuration
RUN DEBIAN_FRONTEND=noninteractive add-apt-repository ppa:mozillateam/ppa -y && \
    echo 'Package: *' > /etc/apt/preferences.d/mozilla-firefox && \
    echo 'Pin: release o=LP-PPA-mozillateam' \
        >> /etc/apt/preferences.d/mozilla-firefox && \
    echo 'Pin-Priority: 1001' >> /etc/apt/preferences.d/mozilla-firefox && \
    apt-get update -q && \
    apt-get install -y firefox && \
    apt-get autoclean && \
    apt-get autoremove && \
    rm -rf /var/lib/apt/lists/*

To aid in easy navigation and searching for information, we shall also install Firefox into this Docker container.

dockerfile
# Install VSCodium for people who are accustomed to VSCode but
# prefer to keep it open-source
RUN wget https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg \
        -O /usr/share/keyrings/vscodium-archive-keyring.asc && \
    echo 'deb [ signed-by=/usr/share/keyrings/vscodium-archive-keyring.asc ] https://paulcarroty.gitlab.io/vscodium-deb-rpm-repo/debs vscodium main' \
        | tee /etc/apt/sources.list.d/vscodium.list && \
    apt-get update -q && \
    apt-get install -y codium && \
    apt-get autoclean && \
    apt-get autoremove && \
    rm -rf /var/lib/apt/lists/*

To make programming easier, we shall install VSCodium, an open-source build of VSCode.

dockerfile
# Install ROS Humble version
ENV ROS_DISTRO=humble
# Install Desktop version
ARG INSTALL_PACKAGE=desktop

RUN apt-get update -q && \
    apt-get install -y curl gnupg2 lsb-release && \
    curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key \
        -o /usr/share/keyrings/ros-archive-keyring.gpg && \
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(lsb_release -cs) main" \
        | tee /etc/apt/sources.list.d/ros2.list > /dev/null && \
    apt-get update -q && \
    apt-get install -y ros-${ROS_DISTRO}-${INSTALL_PACKAGE} \
        python3-argcomplete \
        python3-colcon-common-extensions \
        python3-rosdep python3-vcstool && \
    rosdep init && \
    rm -rf /var/lib/apt/lists/*
RUN rosdep update

# Install simulation package only on amd64
RUN if [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
        apt-get update -q && \
        apt-get install -y \
            ros-${ROS_DISTRO}-gazebo-ros-pkgs \
            ros-${ROS_DISTRO}-ros-ign && \
        rm -rf /var/lib/apt/lists/*; \
    fi

Now that everything else is installed, we start the installation of ROS and its dependencies. Here we define an ENV variable, ROS_DISTRO, which allows us to set which version of ROS to install.

dockerfile
# Download the Linux Tutorial file from repo
ARG ZIPFILE="https://github.com/dTmC0945/L-MCI-BSc-Mobile-Robotics/raw/refs/heads/main/datasets/linux-tutorials.zip"

# Create some user directories to simulate a desktop environment
RUN mkdir -p \
    /home/ubuntu/Downloads \
    /home/ubuntu/Desktop

# Download the tutorial files to their correct place
RUN cd "/home/ubuntu/Desktop" && \
    wget -O "linux.zip" "$ZIPFILE" && \
    unzip "linux.zip" && \
    rm "linux.zip"

# Enable apt-get completion after running `apt-get update` in the container
RUN rm /etc/apt/apt.conf.d/docker-clean

# Copy the entrypoint.sh into the image
COPY ./entrypoint.sh /

# Convert file for linux compatibility
RUN dos2unix /entrypoint.sh

ENTRYPOINT [ "/bin/bash", "-c", "/entrypoint.sh" ]

# Define user and password
ENV USER=ubuntu
ENV PASSWD=ubuntu

Finally, we download some additional files from a repository: a folder containing tutorial files for use in learning Linux. We then unzip it to the Desktop and remove the archive. To finish off, we define a user called ubuntu with a password of ubuntu.

As mentioned, a Dockerfile is a text document which includes all the different steps and instructions on how to build a Docker image. In a general sense, the main elements described in the Dockerfile are:

  • the base image,

  • required dependencies, and

  • commands to execute application deployment within a container.

Let’s have a look in detail at what is going on under the hood. Here we examine the main instructions used in the file; for a more detailed explanation, please have a look at the file itself.

FROM

This instruction sets the base image upon which the new image is going to be built. It is usually the first instruction in a Dockerfile.

In this case we are downloading the official ubuntu:jammy-20250404 image from the Docker repository.
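
The same base image can also be fetched manually, which is what Docker does behind the scenes on the first build:

bash
# Pull the pinned Ubuntu base image used in our Dockerfile
docker pull ubuntu:jammy-20250404
# Confirm it is now stored locally
docker images ubuntu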

ARG

These define variables used during the building of the image.

Here, we define a variable called TARGETPLATFORM to control the architecture the OS is built for.
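
An ARG can be set from the command line at build time with --build-arg; when building with docker buildx and the --platform flag, TARGETPLATFORM is populated automatically. A sketch (the platform value is an example and must be one the base image supports):

bash
# Set the TARGETPLATFORM build argument explicitly
docker build --build-arg TARGETPLATFORM=linux/arm64 -t mci:ros2 .
# Or let buildx derive it from the requested platform
docker buildx build --platform linux/arm64 -t mci:ros2 .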

LABEL

Allows writing metadata to the Docker image. This could be, for example, the maintainer of the code or the version.

RUN

This instruction executes commands inside the container while the image is being built. It is typically used to install an application, update libraries, or do general setup.

COPY

Allows us to copy a file from the host to the image. In this case we are copying a special file called entrypoint.sh which sets up the container with various configurations.

ENV

Sets environment variables inside the image.

To build this image we need to use the CLI: move to the directory where both Dockerfile and entrypoint.sh are present and run the following command to build the image.

bash
docker build . -t mci:ros2 -f Dockerfile
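
If the build completes without errors, the new image should appear in the local image list:

bash
# Verify the freshly built image is stored locally
docker images mci
# Expected output: a repository "mci" with the tag "ros2"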

2.3.2 Running the Container

Now that we have built the image, we need to create a container. The following command will create a new container from an image.

This command will create a container every time it is invoked.

An image is a read-only, self-contained template containing instructions for building a Docker container, like a blueprint for a building. A container is a running instance of that image, like the building itself, and is a fully isolated environment for running applications. Images are used to create containers, and multiple containers can be created from the same image.
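
This distinction is easy to see on the command line, since images and containers are listed by separate commands:

bash
# List locally stored images (the blueprints)
docker images
# List containers (the buildings); --all includes stopped ones
docker ps --all

Creating a second container from the same image is simply a second docker run with a different --name. With that distinction in mind, we create and start our container with the options below.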

bash
docker run \
    --volume ~/Documents/docker-documents:/home/ubuntu/Desktop/Host \
    --publish 6080:80 \
    --name="ros2linux" \
    --security-opt seccomp=unconfined \
    --shm-size=512m \
    mci:ros2

Let’s have a look at all the options given here and understand what’s going on:

docker run

Our main command. It runs a command in a new container, pulling the image if needed and starting the container. Since we have already built the image locally, there won’t be any additional downloads when this is executed.

--volume

Binds a volume between the host and the container. Here we take a path from our host computer, ~/Documents/docker-documents, and link it to /home/ubuntu/Desktop/Host inside the Docker container, allowing us to share files between host and container.

The left side of the path may need to be adjusted for your computer.

--publish

Publishes a container’s port(s) to the host. Here we map port 80 inside the container, which is used by noVNC, to port 6080 (TCP) on the host computer.

--name

Gives a specific name to the container. If this option is not set, a random name will be generated.

--security-opt

Additional security options. Here we disable the default seccomp profile, running the container without that layer of confinement. This is done due to the requirements of noVNC.

--shm-size

Sets the amount of shared memory allotted to the Docker container: a temporary file-storage filesystem using Random Access Memory (RAM) for storing files.

Once the container is created, please use the following to close it properly.

bash
docker container stop "ros2linux"

To re-run the container and continue where you left off, use the following.

bash
docker container start "ros2linux"
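
Two further commands are worth knowing at this point (they assume the container name used above): docker exec opens a shell inside a running container, and docker container rm deletes a stopped container for good:

bash
# Open an interactive bash shell inside the running container
docker exec -it ros2linux bash
# Delete the container entirely once it has been stopped
docker container rm "ros2linux"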