Deploying Embedded Linux Systems

From DAVE Developer's Wiki
Applies to: Linux


Introduction

Deployment of an Embedded Linux system is the operation that typically follows the development phase. When the application is ready and fully tested in the development environment, it is time to take the system to the field for the "real work". This phase brings a number of concerns to cope with, for example creating a suitable root file system, saving data properly, and implementing reliable on-the-field update strategies. This how-to guide explains how to solve the problems connected to the deployment of an embedded Linux system.

The development environment

The following figure illustrates the typical development environment for an Embedded Linux system: it is composed of a host machine and a target machine.

[Figure: typical development environment, with host machine and target machine]

The host (usually a PC or a virtual machine running the Linux operating system) is used by the developer to (cross-)compile the code that will run on the target, for example a DAVE Embedded Systems ARM CPU module such as Lizard or Naon. The Linux kernel running on the target is able to mount the root file system from different physical media. During software development, it is very common to use a directory exported via NFS by the host for this purpose. Moreover, the Linux kernel is usually retrieved via a simple network transfer protocol such as TFTP.
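As a sketch of this setup, the following U-Boot console session fetches the kernel via TFTP and mounts the root file system via NFS. All IP addresses, memory addresses, paths and file names are hypothetical examples and depend on the board and network:

```shell
# U-Boot console sketch (all values are examples, adapt to your board)
setenv serverip 192.168.0.1
setenv ipaddr 192.168.0.10
setenv bootargs console=ttyS0,115200 root=/dev/nfs rw nfsroot=192.168.0.1:/srv/nfs/target ip=192.168.0.10
tftpboot 0x82000000 uImage       # download the kernel image into RAM
bootm 0x82000000                 # boot it
```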

Moving to the field

When the system is ready to move to the field, in most cases the link between host and target must be removed. In the worst case, the system must run without any NFS file system or file transfer services, relying only on its own hardware resources (for example, the on-board RAM and flash memories), which are obviously limited, due to the nature of embedded systems. Generally speaking, the procedure used to deploy the system configuration highly depends on the specific application; however, some topics are quite common. The following sections shed light on these topics.

Root file systems

Linux needs a root file system: a root file system must contain everything needed to support the Linux system (applications, settings, data, ...). The root file system is the file system contained on the same partition on which the root directory is located. At the end of its startup stage, the Linux kernel mounts the root file system on the configured root device and finally launches /sbin/init, the first user-space process and ancestor of all other processes. An example of a root file system is shown below:


drwxr-xr-x  2 root     root     4096 2011-05-03 11:23 bin/
drwxr-xr-x  2 root     root     4096 2011-04-01 17:20 boot/
drwxr-xr-x  3 root     root     4096 2011-07-07 12:17 dev/
drwxr-xr-x 44 root     root     4096 2011-05-03 19:02 etc/
drwxr-xr-x  4 root     root     4096 2011-04-01 17:35 home/
drwxr-xr-x  5 root     root     4096 2011-05-03 11:23 lib/
drwxr-xr-x 12 root     root     4096 2011-07-07 12:03 media/
drwxr-xr-x  6 root     root     4096 2011-05-19 16:39 mnt/
drwxr-xr-x  2 root     root     4096 2011-03-11 05:21 proc/
drwxr-xr-x  2 root     root     4096 2011-05-03 11:23 sbin/
drwxr-xr-x  2 root     root     4096 2011-03-11 05:21 sys/
lrwxrwxrwx  1 root     root        8 2011-05-30 12:19 tmp -> /var/tmp/
drwxr-xr-x 11 root     root     4096 2010-11-05 19:36 usr/
drwxr-xr-x  8 root     root     4096 2010-12-12 11:30 var/

For more information on the Linux file system, please refer to The Linux filesystem explained.


Strategies

The integrity of the root file system is mandatory to allow the kernel to complete the boot process, and usually the whole file system is not required to be writable. For these reasons, the file system is usually split into (at least) two parts, as shown in the following table:

Part                      File system type                   Access           Physical medium
Minimal root file system  ext2, cramfs, ...                  write-protected  ramdisk (*)
Storage file system       UBIFS, JFFS2, YAFFS2, ext2/3, ...  read/write       NOR and NAND flashes, SSD, hard disk, ...

(*) As this file system is mounted over a volatile memory, modifications will be lost when the system is turned off.

The first part is the actual root file system: it contains the minimum components to allow the system to boot properly and usually does not require on-the-field upgrading. The other part is used to store application binaries and files created and/or modified by the user, so it must be mounted over a non-volatile memory device.
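This split can be expressed in /etc/fstab. The fragment below is a minimal sketch, assuming a read-only root and a writable UBIFS data partition; all device names and mount points are hypothetical and board-specific:

```shell
# /etc/fstab sketch: read-only root plus a writable data partition
# (device names and mount points are examples only)
/dev/root     /           auto    ro           0  0
proc          /proc       proc    defaults     0  0
sysfs         /sys        sysfs   defaults     0  0
ubi0:data     /mnt/data   ubifs   rw,noatime   0  0
```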

Creating the root file system

Building a root file system from scratch is definitely a complex task, because several well-known directories must be created and populated with a lot of files that must follow some standard rules. Usually, it is a good idea to start with a pre-packaged root file system, in order to skip the actual creation step and work directly on the customization of the file system. You have two options:

  1. start from a big file system and remove all the components (packages, libraries, application binaries, ..) that you don't need
  2. start from a small file system and add all the components (packages, libraries, application binaries, ..) that you need

Option #2 is generally preferable, because it leads to a very space-optimized root file system, but it can be more demanding, especially when you only need to save a little storage space compared to the size of the original root file system (in this case, you can easily go for Option #1).

Please see the Embedded distros article for an introduction on Embedded Linux distributions.

If you prefer to build the entire root file system, there are several possibilities that are described in the following sections.

OpenEmbedded

OpenEmbedded is a build framework for Embedded Linux. It offers a cross-compile environment which allows developers to create a complete Linux Distribution for embedded systems. Some of the OpenEmbedded advantages include:

  • support for many hardware architectures
  • multiple releases for those architectures
  • tools for speeding up the process of recreating the base after changes have been made
  • easy to customize
  • runs on any Linux distribution
  • cross-compiles thousands of packages including GTK+, Qt, the X Window System, Mono, Java, ...

OpenEmbedded is the basis of some well-known distributions, such as Angstrom and OpenMoko, and it can target many different boards and architectures. Primarily, the project maintains and develops a collection of BitBake recipes (BitBake is a task execution manager derived from Gentoo's Portage). A recipe specifies the source URL of the package, its dependencies and its compile and install options. During the build process, recipes are used to track dependencies, cross-compile the package and pack it up in a format suitable for installation on the target device. It is also possible to create complete images, consisting of root file system and kernel. As a first step the framework builds a cross-compiler toolchain for the target platform; then the build system builds all the packages included in the selected BitBake recipe, which can range from a single application to an entire Linux distribution.

Yocto

The Yocto project is an open source collaboration project that provides templates, tools and methods to help you create custom Linux-based systems for embedded products. It is derived from OpenEmbedded, but it provides a less steep learning curve, a graphical interface for BitBake and very good documentation.

Yocto is sponsored by the Linux Foundation.

Recent versions of the Embedded Linux Development Kit from Denx are based on Yocto.

Arago

The Arago Project targets the TI OMAP, Sitara and DaVinci platforms, providing a verified, tested and supported subset of packages; it has been created to simplify the standard OpenEmbedded approach (mainly setup and interaction). In fact, setting up a complete OE/BitBake system is a task recommended only for experienced users/developers, so the availability of an SDK that allows building applications for the target without learning OE/BitBake is very important for the less experienced audience.

Buildroot

Buildroot is a set of scripts and patches for the creation of a cross-compilation toolchain as well as the creation of a complete root file system.

Linux From Scratch

Linux From Scratch is a way to install a working Linux system by building all of its components manually. In particular, Cross Linux From Scratch allows the cross-compilation of a Linux root file system for embedded targets. The advantages of this method are a compact, flexible and secure system and a greater understanding of the internal workings of Linux-based operating systems; this comes at the price of a time-consuming and quite complex process.

Customizing the root file system

This step is clearly required to add your custom application files (libraries, binaries, configuration files, ...) to the basic root file system.

Application dependencies

The application executables that you have developed might depend on libraries that are not provided by the basic root file system. In this case, these libraries must be added. To find out which libraries your applications depend on, you can use the ldd and readelf tools. Please note that you will often need to rebuild some libraries, cross-compiling them to match your target architecture.
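For example, the shared-library dependencies of a hypothetical application binary "myapp" can be inspected as follows (the cross-toolchain prefix arm-linux- is an example and depends on your toolchain):

```shell
# On the host, using the cross toolchain's readelf:
arm-linux-readelf -d myapp | grep NEEDED
# On the target itself, ldd gives the same information:
ldd myapp
```

Every library listed as NEEDED must be present in the target's /lib or /usr/lib (together with its own dependencies).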

Specific devices

Usually the embedded system provides custom devices for which the developers had to write specific device drivers. To enable support for these devices, the proper device files must be created in /dev.

Drivers built as modules

In embedded systems, device drivers are typically statically linked to the kernel. If they are built as modules, you have to:

  • install them in the root file system
  • provide the command line utilities (*) required to handle the modules

Since loadable kernel modules are a huge topic, it is recommended to read the Linux Loadable Kernel Module HOWTO, available at this URL: http://www.tldp.org/HOWTO/Module-HOWTO/.

(*) for 2.4 kernels you must use modutils (http://www.kernel.org/pub/linux/utils/kernel/modutils/)
(*) for 2.6 kernels you must use module-init-tools (http://www.kernel.org/pub/linux/utils/kernel/module-init-tools/)
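Installing and loading a module on the target typically looks like the following sketch, where "mydriver" is a hypothetical module name:

```shell
# Copy the module where modprobe expects it, then load it
mkdir -p /lib/modules/$(uname -r)
cp mydriver.ko /lib/modules/$(uname -r)/
depmod -a            # rebuild the module dependency database
modprobe mydriver    # load the module, resolving its dependencies
lsmod                # verify that the module is loaded
```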

Boot scripts

After the kernel is booted and initialized, it starts init, the first user-space application, which commonly is /sbin/init. Init is responsible for starting the system processes defined in the /etc/inittab file. Init will typically start at least one instance of getty, which waits for console logins and spawns the user's shell process. Upon shutdown, init controls the sequence and processes of the shutdown. The init process is never shut down. It is a user process, not a kernel system process, although it runs as root.

The boot scripts (typically /etc/inittab and /etc/rc.sh) must be modified in order to automatically execute some operations at boot, such as launching user-space applications and mounting file systems (e.g. sysfs).
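A minimal /etc/inittab for a BusyBox-style init might look like the sketch below; the serial device and baud rate are examples and depend on the board:

```shell
# /etc/inittab sketch for BusyBox init (format: id:runlevel:action:process)
::sysinit:/etc/rc.sh                  # run the boot script once (mounts, daemons, user app)
::respawn:/sbin/getty 115200 ttyS0    # login console on the serial port
::shutdown:/bin/umount -a -r          # unmount file systems on shutdown
```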

Please note that init implementations can differ: BusyBox offers its own version, while other distributions, like Angstrom, can provide the classic System V version. More modern distros, like Ubuntu, implement Upstart, a complete replacement of the init daemon. For further information on the init process, please visit this page.

Watchdog

Typically, during application development the watchdog device included in the embedded system is turned off. Before moving to the field, enabling the watchdog is mandatory. The use of this peripheral is a little tricky because it involves both U-Boot and Linux. The following sequence shows the typical scenario when the system is working on the field:

  1. Processor comes out of reset; internal watchdog is disabled
  2. U-Boot enables watchdog (timeout = 5 s); U-Boot main loop will take care of refreshing it
  3. Before giving control to the Linux kernel, U-Boot sets up a long timeout (e.g. 180 seconds). This is required to allow the kernel to complete the boot stage and to run the application that will handle the watchdog refresh
  4. Once the kernel boot process has completed, the watchdog application opens the watchdog device file and takes care of its refresh (timeout = 10 s)

To enable watchdog support in U-Boot, the source code must be modified and the bootloader must be recompiled. Usually this means enabling CONFIG_WATCHDOG and CONFIG_xxx (where xxx is the name of the watchdog device).

Once Linux is started (and if the kernel is compiled with watchdog support), watchdog is refreshed by a simple application like the one shown below:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
        /* Opening the device starts the watchdog timer */
        int fd = open("/dev/watchdog", O_WRONLY);

        if (fd == -1) {
                perror("watchdog");
                exit(1);
        }

        /* Refresh the watchdog well within its 10 s timeout */
        while (1) {
                write(fd, "\0", 1);
                fsync(fd);
                sleep(5);
        }
}

This requires the character device file:

crw-r--r-- 1 root root 10, 130 Oct 3 2006 /dev/watchdog

which can be created using the following command:

mknod /dev/watchdog c 10 130

Startup sequence

The serial port

U-Boot implements a text console on the serial port. This console can be used to stop the startup sequence and allow an interactive session with the human operator. This is very useful during debugging but, by default, any character the bootloader receives during the startup sequence stops the process. As a side effect, any spurious character received on the serial port devoted to the console can prevent the bootloader from completing the automatic boot process (and, typically, from starting the operating system).

Autoboot configuration

For this reason, before moving to the field it is highly recommended to configure the bootloader to halt the boot sequence only when a specific string is received. Fortunately, the autoboot process is deeply configurable: parameters defining the retry behaviour and the strings used to stop booting can be specified. Please read the README.autoboot file, provided inside the documentation directory of the U-Boot sources, for more details.
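As a sketch, the classic configuration macros described in README.autoboot are set in the board configuration header; the stop string and the timeout value below are arbitrary examples:

```c
/* Board configuration header sketch (values are examples only,
 * see README.autoboot in the U-Boot sources for the full list) */
#define CONFIG_AUTOBOOT_KEYED            /* stop autoboot only on a key string */
#define CONFIG_AUTOBOOT_STOP_STR  "stop" /* string that halts the autoboot    */
#define CONFIG_BOOT_RETRY_TIME    30     /* retry booting after 30 s idle     */
```

With this setup, spurious characters on the console no longer stop the boot: only the exact stop string does, and an operator session left at the prompt eventually retries the boot.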

Setting the MAC address

If the system provides an Ethernet interface, it must be guaranteed that each device is delivered with a unique MAC address. MAC addresses are managed by the IEEE Registration Authority:

IEEE Registration Authority
IEEE Standards Department
445 Hoes Lane
Piscataway NJ 08854
Phone: (732) 562-3813
Fax: (732) 562-1571
http://standards.ieee.org/contact/form.html

DAVE Embedded Systems owns an IAB (Individual Address Block, a set of 4096 addresses), which is in the public listing, so anyone can find out that an address is associated with DAVE Embedded Systems. Note that the registration authority provides only IABs and OUIs (an OUI covers more than 16 million addresses), and that a company is not allowed to request another IAB until at least 95% of the MAC addresses of the previous IAB have been used.

Customers who build their products using DAVE Embedded Systems' SOMs (Naon, Lizard, Qong, Zefeer, ...) usually provide MAC addresses themselves, acquiring them from the IEEE. There are several reasons for this; three can be stressed:

  • A CPU module is NOT an end product. It is not a product that goes directly to the final user, such as a LAN PCI board or a printer server. So, in the case of CPU modules, whoever gets a CPU module and builds their own product with it is responsible for handling the MAC address.
  • Even if DAVE Embedded Systems programs the MAC address in flash (as an example) at the manufacturing stage, the customer may erase, overwrite or modify this number on the actual CPU module. Also, the strategy and the position (NOR, NAND, E2PROM, ...) of the MAC address may vary. In other words, DAVE Embedded Systems cannot guarantee that the MAC address is maintained in the form and position it had when delivered.
  • An end product hosting a DAVE Embedded Systems CPU module is not always a DAVE Embedded Systems product. When it is (and there are some examples), DAVE Embedded Systems puts the proper MAC address on the product. When it is not, DAVE cannot provide MAC addresses: as already stated, the list of DAVE's MAC addresses is public, and by reading this list everybody would conclude that the product manufacturer is DAVE Embedded Systems, which is not true.

On-the-field software upgrades

One of the greatest challenges for embedded systems manufacturers is to guarantee that the software on the system can be updated in the easiest possible way. On-the-field software upgradability is a major requirement, as it makes it possible to fix buggy code and to enhance application features. How to perform this operation is highly platform-dependent. The following section describes a specific situation in detail.

U-Boot/Linux system

We assume the system software/firmware is composed of the following components stored in flash memory (NOR and/or NAND):

  • U-Boot bootloader with redundant environments
  • Linux kernel
  • Root file system (read/write but not persistent)
  • Additional file system (read/write, persistent)

We also assume that the Linux MTD subsystem provides the corresponding partitions:

  • /dev/mtd0 -> U-Boot code
  • /dev/mtd1 -> U-Boot environment #1
  • /dev/mtd2 -> U-Boot environment #2
  • /dev/mtd3 -> Linux kernel
  • /dev/mtd4 -> root file system
  • /dev/mtd5 -> additional file system

Basically we can upgrade the system through the bootloader or through the kernel.

Finally, we assume that only the aforementioned components should be upgraded. If the system includes external microcontrollers, FPGAs, CPLDs, etc., different strategies must be taken into account depending on the particular case.


Warning: please note that, in case of problems (e.g. power failures) in the middle of a U-Boot upgrade, the system might end up in an unrecoverable state.


Upgrade approaches

We can outline the following approaches to on-the-field upgrading, depending on the system capabilities and operating environments:


U-Boot-based upgrading

  • With the help of the U-Boot commands (tftpboot, protect, erase and cp), we can download and store kernel images, file system images and U-Boot itself on the target system.
  • The main disadvantage is that this procedure usually requires physical access to the system, attaching to the serial console through a serial cable and using a PC with terminal emulator software.
  • Implementing software upgrade procedures in U-Boot, though possible, is not easy, due to the limited set of commands provided by the U-Boot shell. Moreover, U-Boot usually doesn't support all the available storage devices (for example, on a system with both NOR and NAND flash, it's possible that U-Boot supports just the NOR, making it impossible to program the NAND flash from the command line).
  • Due to the previous considerations, automatic upgrade procedures are hard to implement.
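A typical U-Boot kernel-upgrade session looks like the sketch below. All flash addresses, partition sizes and file names are hypothetical and must be taken from the board's actual memory map:

```shell
# U-Boot console sketch of a kernel upgrade on NOR flash
# (addresses, sizes and file names are examples only)
tftpboot 0x82000000 uImage              # download the new kernel into RAM
protect off 0x08040000 +0x300000        # unprotect the kernel partition
erase 0x08040000 +0x300000              # erase it
cp.b 0x82000000 0x08040000 ${filesize}  # copy the downloaded image into flash
protect on 0x08040000 +0x300000         # restore write protection
```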


Linux-based upgrading

  • Systems running Linux can be updated from user space using standard applications and tools. Most of the time, the upgrade procedures can be created using common shell commands and scripts.
  • Usually, when the system provides a GUI, the upgrade function is integrated into the application interface and can be activated and controlled by the user through graphical elements.
  • If the network is available, it is very useful if the embedded system is able to run programs like a TFTP client, an FTP server/client, an SSH client (with the scp program) or the wget program: with these tools, the system can easily retrieve the upgrade packages from the network.
  • When the network is unavailable, a typical approach is to provide the end user with a storage device (e.g. a USB pen drive or SD card) containing the software upgrade packages. This device can then be plugged into the system to run the upgrade.
  • When preparing the final root file system, it is fundamental to add all the application binaries and libraries required to implement the upgrade procedures.
  • It is always possible to access the U-Boot environment variables from user space, both for read and write operations. These operations can be performed using the fw_printenv/fw_setenv programs contained in the tools/env directory of the U-Boot sources.
  • In some cases, the upgrade procedures can be activated automatically by:
    1. running periodic checks on some resource on the network
    2. running periodic checks on some place on the local storage (e.g. a directory on the local file system which can be remotely written via FTP)
    3. triggering the upgrade when an attached storage device containing the software upgrade is detected
  • A typical strategy on headless systems is to create custom init scripts that perform checks on the file system at boot, looking for upgrade packages and triggering the upgrade procedure when required.
  • Please note that, in order to erase and write MTD flash partitions, their writability flag must be set in Linux. Usually the MTD partition dedicated to U-Boot is write-protected in Linux, so a kernel update (removing the write protection from that partition) is required before a new U-Boot image can be stored. Updating U-Boot is not a common operation during the system lifetime, but sometimes it is required to solve bugs or implement new features.
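The boot-time check described above can be sketched as a small init script. The upgrade directory, the package naming convention and the device-specific actions in the comments (flash_erase/nandwrite from mtd-utils, fw_setenv) are assumptions to be adapted to the real system:

```shell
#!/bin/sh
# Sketch of a boot-time upgrade check; UPGRADE_DIR and the package
# naming convention are hypothetical and system-specific.
UPGRADE_DIR="${UPGRADE_DIR:-/mnt/storage/upgrade}"

find_upgrade_package() {
    # Print the path of the first upgrade package found, if any
    for pkg in "$UPGRADE_DIR"/upgrade-*.tar.gz; do
        if [ -f "$pkg" ]; then
            echo "$pkg"
            return 0
        fi
    done
    return 1
}

if pkg=$(find_upgrade_package); then
    echo "found $pkg"
    # Typical device-specific actions, shown only as comments:
    #   flash_erase /dev/mtd3 0 0 && nandwrite -p /dev/mtd3 new-uImage
    #   fw_setenv upgrade_done 1
else
    echo "no upgrade package"
fi
```

Such a script would be hooked into the init sequence (e.g. from /etc/rc.sh) so that it runs once at every boot.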


Local upgrading

When the system doesn't allow remote access, an operator must access the system locally through its user interface (in the best case) or through the serial port (in the worst case). If the system provides a USB port or an SD, MMC or PCMCIA slot, it can retrieve the upgrade packages from those memory devices; if the user interface features some sort of upgrade command, the operator just has to plug in the device and activate the upgrade function. The serial port is the last option: connect to the system and manually send the proper commands to complete the upgrade procedure.


Remote upgrading

When the system features a LAN or Internet connection, a remote update strategy can be implemented. If the machine can be contacted (for example using a telnet or SSH connection), it is quite simple to activate a script that executes all the commands necessary to complete the upgrade procedure.
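From the host side, such a remote upgrade can be sketched with scp and ssh; the host name, paths and the upgrade script name are hypothetical:

```shell
# Push the upgrade package to the target and run its upgrade script
# ("target", the paths and do-upgrade.sh are example names)
scp upgrade-2.0.tar.gz root@target:/tmp/
ssh root@target '/usr/sbin/do-upgrade.sh /tmp/upgrade-2.0.tar.gz'
```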

Note: building Dropbear SSH server for ARM platform

Dropbear is a lightweight SSH suite, with client and server applications. It can be built as a multi-binary application, like the famous BusyBox: a single executable that serves as SSH server, SSH client, scp, etc.

For example, to build Dropbear for the Zefeer platform, the user must:

  • Set the environment variables:
    1. export PATH=/usr/local/eldk41arm/usr/bin:$PATH
    2. export ARCH=arm
    3. export CROSS_COMPILE=arm-linux
  • Run the configuration tool:
    ./configure --host=$CROSS_COMPILE
  • Run the make tool:
    make PROGRAMS="dropbear dbclient scp" MULTI=1 STATIC=1

More information is available in the README, INSTALL and MULTI files included in the Dropbear distribution. Please note that for recent systems, such as Lizard and Naon, Dropbear can be installed from pre-built packages (please refer to the distribution's package-manager documentation).