// -*- mode:doc; -*-
// vim: set syntax=asciidoc:

[[configure]]
Details on Buildroot configuration
----------------------------------

All the configuration options in +make *config+ have a help text
providing details about the option. However, a number of topics
require additional details that cannot easily be covered in the help
text and are therefore covered in the following sections.
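
For instance, the configuration interface itself is started from the
top-level Buildroot directory with one of the usual +*config+ targets:

---------------
$ make menuconfig   # ncurses-based configuration interface
$ make xconfig      # Qt-based configuration interface
$ make gconfig      # GTK-based configuration interface
---------------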

Cross-compilation toolchain
~~~~~~~~~~~~~~~~~~~~~~~~~~~

A compilation toolchain is the set of tools that allows you to compile
code for your system. It consists of a compiler (in our case, +gcc+),
binary utilities like the assembler and linker (in our case,
+binutils+) and a C standard library (for example
http://www.gnu.org/software/libc/libc.html[GNU Libc] or
http://www.uclibc.org/[uClibc]).

The system installed on your development station certainly already has
a compilation toolchain that you can use to compile an application
that runs on your system. If you're using a PC, your compilation
toolchain runs on an x86 processor and generates code for an x86
processor. Under most Linux systems, the compilation toolchain uses
the GNU libc (glibc) as the C standard library. This compilation
toolchain is called the "host compilation toolchain". The machine on
which it is running, and on which you're working, is called the "host
system" footnote:[This terminology differs from what is used by GNU
configure, where the host is the machine on which the application will
run (which is usually the same as the target)].

The compilation toolchain is provided by your distribution, and
Buildroot has nothing to do with it (other than using it to build a
cross-compilation toolchain and other tools that are run on the
development host).

As said above, the compilation toolchain that comes with your system
runs on and generates code for the processor in your host system. As
your embedded system has a different processor, you need a
cross-compilation toolchain - a compilation toolchain that runs on
your _host system_ but generates code for your _target system_ (and
target processor). For example, if your host system uses x86 and your
target system uses ARM, the regular compilation toolchain on your host
runs on x86 and generates code for x86, while the cross-compilation
toolchain runs on x86 and generates code for ARM.

Buildroot provides different solutions to build a cross-compilation
toolchain, or to use existing ones:

* The *internal toolchain backend*, called +Buildroot toolchain+ in
  the configuration interface.

* The *external toolchain backend*, called +External toolchain+ in
  the configuration interface.

* The *Crosstool-NG toolchain backend*, called +Crosstool-NG
  toolchain+ in the configuration interface.

The choice between these three solutions is made using the +Toolchain
Type+ option in the +Toolchain+ menu. Once one solution has been
chosen, a number of configuration options appear; they are detailed in
the following sections.

[[internal-toolchain-backend]]
Internal toolchain backend
^^^^^^^^^^^^^^^^^^^^^^^^^^

The _internal toolchain backend_ is the backend where Buildroot builds
a cross-compilation toolchain by itself, before building the userspace
applications and libraries for your target embedded system.

This backend is the historical backend of Buildroot, and was for a
long time limited to the use of the http://www.uclibc.org[uClibc C
library]. Support for the _eglibc_ C library was added in 2013 and is
at this point considered experimental. See the _External toolchain
backend_ and _Crosstool-NG toolchain backend_ for other solutions to
use _glibc_ or _eglibc_.

Once you have selected this backend, a number of options appear. The
most important ones allow you to:

* Change the version of the Linux kernel headers used to build the
  toolchain. This item deserves a few explanations. In the process of
  building a cross-compilation toolchain, the C library is being
  built. This library provides the interface between userspace
  applications and the Linux kernel. In order to know how to "talk"
  to the Linux kernel, the C library needs to have access to the
  _Linux kernel headers_ (i.e., the +.h+ files from the kernel), which
  define the interface between userspace and the kernel (system
  calls, data structures, etc.). Since this interface is backward
  compatible, the version of the Linux kernel headers used to build
  your toolchain does not need to match _exactly_ the version of the
  Linux kernel you intend to run on your embedded system. It only
  needs to be equal to or older than the version of the Linux kernel
  you intend to run. If you use kernel headers that are more recent
  than the Linux kernel you run on your embedded system, then the C
  library might be using interfaces that are not provided by your
  Linux kernel.

* Change the version and the configuration of the uClibc C library
  (if uClibc is selected). The default options are usually
  fine. However, if you really need to specifically customize the
  configuration of your uClibc C library, you can pass a specific
  configuration file here. Alternatively, you can run the +make
  uclibc-menuconfig+ command to get access to uClibc's configuration
  interface, as shown in the example below. Note that all packages in
  Buildroot are tested against the default uClibc configuration
  bundled in Buildroot: if you deviate from this configuration by
  removing features from uClibc, some packages may no longer build.

* Change the version of the GCC compiler and binutils.

* Select a number of toolchain options (uClibc only): whether the
  toolchain should have largefile support (i.e., support for files
  larger than 2 GB on 32-bit systems), IPv6 support, RPC support
  (used mainly for NFS), wide-char support, locale support (for
  internationalization), C++ support and thread support. Depending on
  which options you choose, the number of userspace applications and
  libraries visible in Buildroot menus will change: many applications
  and libraries require certain toolchain options to be enabled. Most
  packages show a comment when a certain toolchain option is required
  to be able to enable those packages.

It is worth noting that whenever one of those options is modified,
the entire toolchain and system must be rebuilt. See
xref:full-rebuild[].
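
As a quick illustration of the uClibc customization described above,
the following sequence uses only the targets mentioned in this
section; +make clean all+ is one way to trigger the full rebuild that
a toolchain option change requires:

---------------
$ make uclibc-menuconfig   # adjust the uClibc configuration
$ make clean all           # toolchain option changed: full rebuild
---------------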

Advantages of this backend:

* Well integrated with Buildroot

* Fast, only builds what's necessary

Drawbacks of this backend:

* Rebuilding the toolchain is needed when doing +make clean+, which
  takes time. If you're trying to reduce your build time, consider
  using the _External toolchain backend_.

[[external-toolchain-backend]]
External toolchain backend
^^^^^^^^^^^^^^^^^^^^^^^^^^

The _external toolchain backend_ allows you to use existing pre-built
cross-compilation toolchains. Buildroot knows about a number of
well-known cross-compilation toolchains (from
http://www.linaro.org[Linaro] for ARM,
http://www.mentor.com/embedded-software/sourcery-tools/sourcery-codebench/editions/lite-edition/[Sourcery
CodeBench] for ARM, x86, x86-64, PowerPC, MIPS and SuperH,
https://blackfin.uclinux.org/gf/project/toolchain[Blackfin toolchains
from ADI], http://git.xilinx.com/[Xilinx toolchains for Microblaze],
etc.) and is capable of downloading them automatically, or it can be
pointed to a custom toolchain, either available for download or
installed locally.

Then, you have three solutions to use an external toolchain:

* Use a predefined external toolchain profile, and let Buildroot
  download, extract and install the toolchain. Buildroot already knows
  about a few CodeSourcery, Linaro, Blackfin and Xilinx toolchains.
  Just select the toolchain profile in +Toolchain+ from the
  available ones. This is definitely the easiest solution.

* Use a predefined external toolchain profile, but instead of having
  Buildroot download and extract the toolchain, you can tell Buildroot
  where your toolchain is already installed on your system. Just
  select the toolchain profile in +Toolchain+ from the available
  ones, unselect +Download toolchain automatically+, and fill the
  +Toolchain path+ text entry with the path to your cross-compiling
  toolchain.

* Use a completely custom external toolchain. This is particularly
  useful for toolchains generated using crosstool-NG. To do this,
  select the +Custom toolchain+ solution in the +Toolchain+ list. You
  need to fill in the +Toolchain path+, +Toolchain prefix+ and
  +External toolchain C library+ options. Then, you have to tell
  Buildroot what your external toolchain supports. If your external
  toolchain uses the 'glibc' library, you only have to tell whether
  your toolchain supports C\+\+ or not and whether it has built-in RPC
  support. If your external toolchain uses the 'uClibc' library, then
  you have to tell Buildroot if it supports largefile, IPv6, RPC,
  wide-char, locale, program invocation, threads and C\+\+. At the
  beginning of the execution, Buildroot will tell you if the selected
  options do not match the toolchain configuration.
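
For the fully custom case, the resulting Buildroot +.config+ fragment
looks roughly like the sketch below. The path and prefix are examples
for a hypothetical ARM toolchain installed in +/opt/arm-toolchain+,
and the exact option names may differ between Buildroot versions, so
check the help texts in the +Toolchain+ menu:

---------------
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_PATH="/opt/arm-toolchain"
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_PREFIX="arm-linux-gnueabi"
# plus the 'External toolchain C library' choice and the
# feature options (C++, RPC, largefile, etc.) described above
---------------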

Our external toolchain support has been tested with toolchains from
CodeSourcery and Linaro, toolchains generated by
http://crosstool-ng.org[crosstool-NG], and toolchains generated by
Buildroot itself. In general, all toolchains that support the
'sysroot' feature should work. If not, do not hesitate to contact the
developers.

We do not support toolchains from Denx's
http://www.denx.de/wiki/DULG/ELDK[ELDK], for two reasons:

* The ELDK does not contain a pure toolchain (i.e., just the compiler,
  binutils, the C and C++ libraries), but a toolchain that comes with
  a very large set of pre-compiled libraries and programs. Therefore,
  Buildroot cannot import the 'sysroot' of the toolchain, as it would
  contain hundreds of megabytes of pre-compiled libraries that are
  normally built by Buildroot.

* The ELDK toolchains have a completely non-standard custom mechanism
  to handle multiple library variants. Instead of using the standard
  GCC 'multilib' mechanism, the ARM ELDK uses different symbolic links
  to the compiler to differentiate between library variants (for ARM
  soft-float and ARM VFP), and the PowerPC ELDK compiler uses a
  +CROSS_COMPILE+ environment variable. This non-standard behaviour
  makes it difficult to support ELDK in Buildroot.

We also do not support using the distribution toolchain (i.e., the
gcc/binutils/C library installed by your distribution) as the
toolchain to build software for the target. This is because your
distribution toolchain is not a "pure" toolchain (i.e., one with only
the C/C++ library), so we cannot import it properly into the Buildroot
build environment. So even if you are building a system for an x86 or
x86_64 target, you have to generate a cross-compilation toolchain with
Buildroot or crosstool-NG.

If you want to generate a custom toolchain for your project, that can
be used as an external toolchain in Buildroot, our recommendation is
definitely to build it with http://crosstool-ng.org[crosstool-NG]. We
recommend building the toolchain separately from Buildroot, and then
_importing_ it into Buildroot using the external toolchain backend.

Advantages of this backend:

* Allows the use of well-known and well-tested cross-compilation
  toolchains.

* Avoids the build time of the cross-compilation toolchain, which is
  often very significant in the overall build time of an embedded
  Linux system.

* Not limited to uClibc: glibc and eglibc toolchains are supported.

Drawbacks of this backend:

* If your pre-built external toolchain has a bug, it may be hard to
  get a fix from the toolchain vendor, unless you build your external
  toolchain yourself using Crosstool-NG.

[[crosstool-ng-toolchain-backend]]
Crosstool-NG toolchain backend
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The _Crosstool-NG toolchain backend_ integrates the
http://crosstool-ng.org[Crosstool-NG] project with
Buildroot. Crosstool-NG is a highly-configurable, versatile and
well-maintained tool to build cross-compilation toolchains.

If you select the +Crosstool-NG toolchain+ option in +Toolchain Type+,
then you will be able to:

* Choose which C library you want to use. Crosstool-NG supports the
  three most important C libraries used in Linux systems: glibc,
  eglibc and uClibc.

* Choose a custom Crosstool-NG configuration file. Buildroot has its
  own default configuration file (one per C library choice), but you
  can provide your own. Another option is to run +make
  ctng-menuconfig+ to get access to the Crosstool-NG configuration
  interface. However, note that all Buildroot packages have only been
  tested with the default Crosstool-NG configurations.

* Choose a number of toolchain options (rather limited if glibc or
  eglibc is used, numerous if uClibc is used).

When you start the Buildroot build process, Buildroot will
download and install the Crosstool-NG tool, build and install its
required dependencies, and then run Crosstool-NG with the provided
configuration.
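
In practice, the workflow with this backend boils down to a few
commands (a sketch; +make ctng-menuconfig+ is the optional step
described above):

---------------
$ make menuconfig        # select 'Crosstool-NG toolchain' as Toolchain Type
$ make ctng-menuconfig   # optional: fine-tune the Crosstool-NG configuration
$ make                   # builds the toolchain, then the rest of the system
---------------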

Advantages of this backend:

* Not limited to uClibc: glibc and eglibc are supported.

* Vast possibilities of toolchain configuration.

Drawbacks of this backend:

* Crosstool-NG is not perfectly integrated with Buildroot. For
  example, Crosstool-NG has its own download infrastructure, not
  integrated with the one in Buildroot (for example a Buildroot +make
  source+ will not download all the source code tarballs needed by
  Crosstool-NG).

* The toolchain is completely rebuilt from scratch if you do a +make
  clean+.

/dev management
~~~~~~~~~~~~~~~

On a Linux system, the +/dev+ directory contains special files, called
_device files_, that allow userspace applications to access the
hardware devices managed by the Linux kernel. Without these _device
files_, your userspace applications would not be able to use the
hardware devices, even if they are properly recognized by the Linux
kernel.

Under +System configuration+, +/dev management+, Buildroot offers four
different solutions to handle the +/dev+ directory:

* The first solution is *Static using device table*. This is the old
  classical way of handling device files in Linux. With this method,
  the device files are persistently stored in the root filesystem
  (i.e., they persist across reboots), and there is nothing that will
  automatically create and remove those device files when hardware
  devices are added to or removed from the system. Buildroot therefore
  creates a standard set of device files using a _device table_, the
  default one being stored in +system/device_table_dev.txt+ in the
  Buildroot source code. This file is processed when Buildroot
  generates the final root filesystem image, and the _device files_
  are therefore not visible in the +output/target+ directory. The
  +BR2_ROOTFS_STATIC_DEVICE_TABLE+ option allows you to change the
  default device table used by Buildroot, or to add an additional
  device table, so that additional _device files_ are created by
  Buildroot during the build. So, if you use this method, and a
  _device file_ is missing in your system, you can for example create
  a +board/<yourcompany>/<yourproject>/device_table_dev.txt+ file
  that contains the description of your additional _device files_,
  and then you can set +BR2_ROOTFS_STATIC_DEVICE_TABLE+ to
  +system/device_table_dev.txt
  board/<yourcompany>/<yourproject>/device_table_dev.txt+. For more
  details about the format of the device table file, see
  xref:makedev-syntax[] and the example after this list.

* The second solution is *Dynamic using devtmpfs only*. _devtmpfs_ is
  a virtual filesystem inside the Linux kernel that was introduced in
  kernel 2.6.32 (if you use an older kernel, it is not possible to
  use this option). When mounted in +/dev+, this virtual filesystem
  will automatically make _device files_ appear and disappear as
  hardware devices are added to and removed from the system. This
  filesystem is not persistent across reboots: it is filled
  dynamically by the kernel. Using _devtmpfs_ requires the following
  kernel configuration options to be enabled: +CONFIG_DEVTMPFS+ and
  +CONFIG_DEVTMPFS_MOUNT+. When Buildroot is in charge of building
  the Linux kernel for your embedded device, it makes sure that those
  two options are enabled. However, if you build your Linux kernel
  outside of Buildroot, then it is your responsibility to enable
  those two options (if you fail to do so, your Buildroot system will
  not boot).

* The third solution is *Dynamic using mdev*. This method also relies
  on the _devtmpfs_ virtual filesystem detailed above (so the
  requirement to have +CONFIG_DEVTMPFS+ and +CONFIG_DEVTMPFS_MOUNT+
  enabled in the kernel configuration still applies), but adds the
  +mdev+ userspace utility on top of it. +mdev+ is a program, part of
  Busybox, that the kernel will call every time a device is added or
  removed. Thanks to the +/etc/mdev.conf+ configuration file, +mdev+
  can be configured to, for example, set specific permissions or
  ownership on a device file, call a script or application whenever a
  device appears or disappears, etc. Basically, it allows _userspace_
  to react to device addition and removal events. +mdev+ can for
  example be used to automatically load kernel modules when devices
  appear on the system. +mdev+ is also important if you have devices
  that require firmware, as it will be responsible for pushing the
  firmware contents to the kernel. +mdev+ is a lightweight
  implementation (with fewer features) of +udev+. For more details
  about +mdev+ and the syntax of its configuration file, see
  http://git.busybox.net/busybox/tree/docs/mdev.txt.

* The fourth solution is *Dynamic using udev*. This method also
  relies on the _devtmpfs_ virtual filesystem detailed above, but
  adds the +udev+ userspace daemon on top of it. +udev+ is a daemon
  that runs in the background, and gets called by the kernel when a
  device gets added to or removed from the system. It is a more
  heavyweight solution than +mdev+, but provides higher flexibility
  and is sometimes mandatory for some system components (systemd for
  example). +udev+ is the mechanism used in most desktop Linux
  distributions. For more details about +udev+, see
  http://en.wikipedia.org/wiki/Udev.
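
To make the static device table method more concrete, here is what an
additional device table for the hypothetical
+board/<yourcompany>/<yourproject>+ directory could contain. The
columns follow the format documented in xref:makedev-syntax[], and the
entries below are just illustrative (major/minor numbers for the
classic serial and framebuffer devices):

---------------
# name       type mode uid gid major minor start inc count
/dev/ttyS0   c    666  0   0   4     64    -     -   -
/dev/fb0     c    640  0   0   29    0     -     -   -
---------------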

The Buildroot developers' recommendation is to start with the *Dynamic
using devtmpfs only* solution, until you have the need for userspace
to be notified when devices are added/removed, or if firmware files
are needed, in which case *Dynamic using mdev* is usually a good
solution.
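
If you settle on the *Dynamic using mdev* solution, a minimal
+/etc/mdev.conf+ could look like the sketch below (the helper script
is hypothetical; see the mdev documentation linked above for the
exact syntax):

---------------
# <device regex> <uid>:<gid> <permissions> [@ command run on creation]
ttyUSB[0-9]*       root:root 660
mmcblk[0-9]p[0-9]  root:root 660 @/usr/sbin/mount-sdcard.sh
---------------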

init system
~~~~~~~~~~~

The _init_ program is the first userspace program started by the
kernel (it carries the PID number 1), and is responsible for starting
the userspace services and programs (for example: web server,
graphical applications, other network servers, etc.).

Buildroot allows you to use three different types of init systems,
which can be chosen from +System configuration+, +Init system+:

* The first solution is *Busybox*. Amongst many programs, Busybox has
  an implementation of a basic +init+ program, which is sufficient
  for most embedded systems. Enabling the +BR2_INIT_BUSYBOX+ option
  will ensure Busybox builds and installs its +init+ program. This is
  the default solution in Buildroot. The Busybox +init+ program will
  read the +/etc/inittab+ file at boot to know what to do. The syntax
  of this file can be found in
  http://git.busybox.net/busybox/tree/examples/inittab (note that
  Busybox +inittab+ syntax is special: do not use a random +inittab+
  documentation from the Internet to learn about Busybox
  +inittab+). The default +inittab+ in Buildroot is stored in
  +system/skeleton/etc/inittab+. Apart from mounting a few important
  filesystems, the main job the default inittab does is to start the
  +/etc/init.d/rcS+ shell script, and start a +getty+ program (which
  provides a login prompt). See the example after this list.

* The second solution is *systemV*. This solution uses the old
  traditional _sysvinit_ program, packaged in Buildroot in
  +package/sysvinit+. This was the solution used in most desktop
  Linux distributions, until they switched to more recent
  alternatives such as Upstart or Systemd. +sysvinit+ also works with
  an +inittab+ file (which has a slightly different syntax than the
  one from Busybox). The default +inittab+ installed with this init
  solution is located in +package/sysvinit/inittab+.

* The third solution is *systemd*. +systemd+ is the new generation
  init system for Linux. It does far more than traditional _init_
  programs: it has aggressive parallelization capabilities, uses
  socket and D-Bus activation for starting services, offers on-demand
  starting of daemons, keeps track of processes using Linux control
  groups, supports snapshotting and restoring of the system state,
  etc. +systemd+ will be useful on relatively complex embedded
  systems, for example the ones requiring D-Bus and services
  communicating with each other. It is worth noting that +systemd+
  brings a fairly big number of large dependencies: +dbus+, +glib+
  and more. For more details about +systemd+, see
  http://www.freedesktop.org/wiki/Software/systemd.
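
To give an idea of the Busybox +inittab+ syntax mentioned above, here
is a short sketch in the spirit of the default
+system/skeleton/etc/inittab+; the console device and baud rate are
examples to adapt to your board:

---------------
# Format: <id>::<action>:<process>
# Run the startup script once at boot
::sysinit:/etc/init.d/rcS
# Keep a login prompt running on the serial console
ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100
# Reboot cleanly on ctrl-alt-del
::ctrlaltdel:/sbin/reboot
---------------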

The solution recommended by Buildroot developers is to use the
*Busybox init* as it is sufficient for most embedded
systems. *systemd* can be used for more complex situations.