#
# Block device driver configuration
#

menuconfig MD
	bool "Multiple devices driver support (RAID and LVM)"
	depends on BLOCK
	select SRCU
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

if MD

config BLK_DEV_MD
	tristate "RAID support"
	---help---
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.

	  If unsure, say N.

config MD_AUTODETECT
	bool "Autodetect RAID arrays during kernel boot"
	depends on BLK_DEV_MD=y
	default y
	---help---
	  If you say Y here, then the kernel will try to autodetect RAID
	  arrays as part of its boot process.

	  If you don't use RAID and say Y, this autodetection can add a
	  several-second delay to the boot time due to the synchronisation
	  steps it performs.

	  If unsure, say Y.
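
# Illustrative note: boot-time autodetection only considers partitions
# whose type is set to 0xfd ("Linux raid autodetect"); it can be turned
# off with the raid=noautodetect kernel parameter. Many distributions
# assemble arrays from the initramfs with mdadm instead.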

config MD_LINEAR
	tristate "Linear (append) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.

	  If unsure, say Y.

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.

	  If unsure, say Y.
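
# Worked example of the striping described above: three 100 GB
# partitions on three separate disks combine into a single 300 GB
# device, with consecutive chunks spread across all three spindles.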

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other. In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error free MD (multiple device) to the higher levels of the
	  kernel. In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of (N - 1)
	  drives.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y. To compile this code
	  as a module, choose M here: the module will be called raid1.

	  If unsure, say Y.
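
# Worked example of the capacity rule above: three 2 TB drives in a
# RAID-1 set yield one 2 TB device that stays available as long as at
# least one drive survives, i.e. it tolerates N - 1 = 2 drive failures.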

config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode"
	depends on BLK_DEV_MD
	---help---
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.

	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or at least, only the capacity of the smallest
	  device will be used).

	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:

	  https://www.kernel.org/pub/linux/utils/raid/mdadm/

	  If unsure, say Y.
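
# Illustrative example (device names are placeholders): a four-drive
# RAID-10 array is typically created with mdadm, e.g.
#   mdadm --create /dev/md0 --level=10 --raid-devices=4 \
#         /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1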

config MD_RAID456
	tristate "RAID-4/RAID-5/RAID-6 mode"
	depends on BLK_DEV_MD
	select RAID6_PQ
	select LIBCRC32C
	select ASYNC_MEMCPY
	select ASYNC_XOR
	select ASYNC_PQ
	select ASYNC_RAID6_RECOV
	---help---
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y. To
	  compile this code as a module, choose M here: the module
	  will be called raid456.

	  If unsure, say Y.
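
# Worked example of the capacity formulas above: with N = 4 drives of
# C = 2000 MB each, RAID-5 provides 2000 * (4 - 1) = 6000 MB and
# survives one drive failure, while RAID-6 provides
# 2000 * (4 - 2) = 4000 MB and survives any two drive failures.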

config MD_MULTIPATH
	tristate "Multipath I/O support"
	depends on BLK_DEV_MD
	help
	  MD_MULTIPATH provides a simple multi-path personality for use
	  within the MD framework. It is not under active development. New
	  projects should consider using DM_MULTIPATH, which has more
	  features and more testing.

	  If unsure, say N.

config MD_FAULTY
	tristate "Faulty test module for MD"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally
	  returns read or write errors. It is useful for testing.

	  If unsure, say N.

config MD_CLUSTER
	tristate "Cluster Support for MD (EXPERIMENTAL)"
	depends on BLK_DEV_MD
	depends on DLM
	default n
	---help---
	  Clustering support for MD devices. This enables locking and
	  synchronization across multiple systems on the cluster, so all
	  nodes in the cluster can access the MD devices simultaneously.

	  This brings the redundancy (and uptime) of RAID levels across the
	  nodes of the cluster.

	  If unsure, say N.

source "drivers/md/bcache/Kconfig"

config BLK_DEV_DM_BUILTIN
	bool

config BLK_DEV_DM
	tristate "Device mapper support"
	select BLK_DEV_DM_BUILTIN
	select DAX
	---help---
	  Device-mapper is a low level volume manager. It works by allowing
	  people to specify mappings for ranges of logical sectors. Various
	  mapping types are available; in addition, people may write their own
	  modules containing custom mappings if they wish.

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.

	  If unsure, say N.
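
# Illustration of a sector-range mapping (device name is a placeholder):
# with the dmsetup tool from the LVM2/device-mapper userspace package,
#   dmsetup create example --table "0 204800 linear /dev/sdb1 0"
# maps sectors 0-204799 (100 MiB) of the new device onto /dev/sdb1.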

config DM_MQ_DEFAULT
	bool "request-based DM: use blk-mq I/O path by default"
	depends on BLK_DEV_DM
	---help---
	  This option enables the blk-mq based I/O path for request-based
	  DM devices by default. With this option the dm_mod.use_blk_mq
	  module/boot option defaults to Y; without it, to N. Either way it
	  can still be overridden.

	  If unsure, say N.

config DM_DEBUG
	bool "Device mapper debugging support"
	depends on BLK_DEV_DM
	---help---
	  Enable this for messages that may help debug device-mapper problems.

	  If unsure, say N.

config DM_BUFIO
	tristate
	depends on BLK_DEV_DM
	---help---
	  This interface allows you to do buffered I/O on a device and acts
	  as a cache, holding recently-read blocks in memory and performing
	  delayed writes.

config DM_DEBUG_BLOCK_MANAGER_LOCKING
	bool "Block manager locking"
	depends on DM_BUFIO
	---help---
	  Block manager locking can catch various metadata corruption issues.

	  If unsure, say N.

config DM_DEBUG_BLOCK_STACK_TRACING
	bool "Keep stack trace of persistent data block lock holders"
	depends on STACKTRACE_SUPPORT && DM_DEBUG_BLOCK_MANAGER_LOCKING
	select STACKTRACE
	---help---
	  Enable this for messages that may help debug problems with the
	  block manager locking used by thin provisioning and caching.

	  If unsure, say N.

config DM_BIO_PRISON
	tristate
	depends on BLK_DEV_DM
	---help---
	  Some bio locking schemes used by other device-mapper targets,
	  including thin provisioning.

source "drivers/md/persistent-data/Kconfig"

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_CBC
	---help---
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  For further information on dm-crypt and userspace tools see:
	  <https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.

	  If unsure, say N.
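
# Typical usage example (device and mapping names are placeholders):
# dm-crypt devices are normally managed with the cryptsetup utility, e.g.
#   cryptsetup luksFormat /dev/sdb1
#   cryptsetup open /dev/sdb1 secretdata
# which creates the encrypted mapping /dev/mapper/secretdata.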

config DM_SNAPSHOT
	tristate "Snapshot target"
	depends on BLK_DEV_DM
	select DM_BUFIO
	---help---
	  Allow volume managers to take writable snapshots of a device.

config DM_THIN_PROVISIONING
	tristate "Thin provisioning target"
	depends on BLK_DEV_DM
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  Provides thin provisioning and snapshots that share a data store.

config DM_CACHE
	tristate "Cache target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  dm-cache attempts to improve performance of a block device by
	  moving frequently used data to a smaller, higher performance
	  device. Different 'policy' plugins can be used to change the
	  algorithms used to select which blocks are promoted, demoted,
	  cleaned etc. It supports writeback and writethrough modes.
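
# Usage note (illustrative; volume names are placeholders): dm-cache is
# usually driven through LVM's lvmcache(7) rather than hand-written dm
# tables, e.g. something like
#   lvconvert --type cache --cachepool vg/fastpool vg/slowlv
# (see lvmcache(7) for the exact setup steps).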

config DM_CACHE_SMQ
	tristate "Stochastic MQ Cache Policy (EXPERIMENTAL)"
	depends on DM_CACHE
	default y
	---help---
	  A cache policy that uses a multiqueue ordered by recent hits
	  to select which blocks should be promoted and demoted.
	  This is meant to be a general purpose policy. It prioritises
	  reads over writes. This SMQ policy (vs MQ) offers the promise
	  of less memory utilization, improved performance and increased
	  adaptability in the face of changing workloads.

config DM_ERA
	tristate "Era target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  dm-era tracks which parts of a block device are written to
	  over time. Useful for maintaining cache coherency when using
	  vendor snapshots.

config DM_MIRROR
	tristate "Mirror target"
	depends on BLK_DEV_DM
	---help---
	  Allow volume managers to mirror logical volumes, also
	  needed for live data migration tools such as 'pvmove'.

config DM_LOG_USERSPACE
	tristate "Mirror userspace logging"
	depends on DM_MIRROR && NET
	select CONNECTOR
	---help---
	  The userspace logging module provides a mechanism for
	  relaying the dm-dirty-log API to userspace. Log designs
	  which are more suited to userspace implementation (e.g.
	  shared storage logs) or experimental logs can be implemented
	  by leveraging this framework.

config DM_RAID
	tristate "RAID 1/4/5/6/10 target"
	depends on BLK_DEV_DM
	select MD_RAID0
	select MD_RAID1
	select MD_RAID10
	select MD_RAID456
	select BLK_DEV_MD
	---help---
	  A dm target that supports RAID1, RAID10, RAID4, RAID5 and RAID6
	  mappings.

	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

config DM_ZERO
	tristate "Zero target"
	depends on BLK_DEV_DM
	---help---
	  A target that discards writes, and returns all zeroes for
	  reads. Useful in some recovery situations.
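
# Illustrative example: a 1 GiB zero device can be created with
#   dmsetup create zeroed --table "0 2097152 zero"
# (2097152 sectors * 512 bytes = 1 GiB); writes are discarded and
# reads return zeroes.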

config DM_MULTIPATH
	tristate "Multipath target"
	depends on BLK_DEV_DM
	# nasty syntax, but it means: make DM_MULTIPATH independent of
	# SCSI_DH if the latter isn't defined; but if it is, DM_MULTIPATH
	# must depend on it. We get a build error if SCSI_DH=m and
	# DM_MULTIPATH=y.
	depends on !SCSI_DH || SCSI
	---help---
	  Allow volume managers to support multipath hardware.

config DM_MULTIPATH_QL
	tristate "I/O Path Selector based on the number of in-flight I/Os"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path with the least number of in-flight I/Os.

	  If unsure, say N.

config DM_MULTIPATH_ST
	tristate "I/O Path Selector based on the service time"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time.

	  If unsure, say N.

config DM_DELAY
	tristate "I/O delaying target"
	depends on BLK_DEV_DM
	---help---
	  A target that delays reads and/or writes and can send
	  them to different devices. Useful for testing.

	  If unsure, say N.
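
# Illustrative example (device name is a placeholder):
#   dmsetup create delayed --table "0 204800 delay /dev/sdb1 0 100"
# maps 100 MiB of /dev/sdb1 with every I/O delayed by 100 ms; a second
# device/offset/delay triple can be given to delay writes differently.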

config DM_UEVENT
	bool "DM uevents"
	depends on BLK_DEV_DM
	---help---
	  Generate udev events for DM events.

config DM_FLAKEY
	tristate "Flakey target"
	depends on BLK_DEV_DM
	---help---
	  A target that intermittently fails I/O for debugging purposes.
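
# Illustrative example (device name is a placeholder):
#   dmsetup create flaky --table "0 204800 flakey /dev/sdb1 0 10 5"
# passes I/O through for 10 seconds, then fails it for 5 seconds, and
# repeats; see the dm-flakey documentation for the optional feature
# arguments.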

config DM_VERITY
	tristate "Verity target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_HASH
	select DM_BUFIO
	---help---
	  This device-mapper target creates a read-only device that
	  transparently validates the data on one underlying device against
	  a pre-generated tree of cryptographic checksums stored on a second
	  device.

	  You'll need to activate the digests you're going to use in the
	  cryptoapi configuration.

	  To compile this code as a module, choose M here: the module will
	  be called dm-verity.

	  If unsure, say N.
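
# Usage sketch (device names are placeholders): the hash tree is
# normally generated with veritysetup from the cryptsetup package, e.g.
#   veritysetup format /dev/sdb1 /dev/sdc1
# which writes the hash tree to /dev/sdc1 and prints the root hash
# needed later to activate the verified device.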

config DM_VERITY_FEC
	bool "Verity forward error correction support"
	depends on DM_VERITY
	select REED_SOLOMON
	select REED_SOLOMON_DEC8
	---help---
	  Add forward error correction support to dm-verity. This option
	  makes it possible to use pre-generated error correction data to
	  recover from corrupted blocks.

	  If unsure, say N.

config DM_SWITCH
	tristate "Switch target support (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	---help---
	  This device-mapper target creates a device that supports an arbitrary
	  mapping of fixed-size regions of I/O across a fixed set of paths.
	  The path used for any specific region can be switched dynamically
	  by sending the target a message.

	  To compile this code as a module, choose M here: the module will
	  be called dm-switch.

	  If unsure, say N.

config DM_LOG_WRITES
	tristate "Log writes target support"
	depends on BLK_DEV_DM
	---help---
	  This device-mapper target takes two devices, one device to use
	  normally, one to log all write operations done to the first device.
	  This is for use by file system developers wishing to verify that
	  their fs is writing a consistent file system at all times by allowing
	  them to replay the log in a variety of ways and to check the
	  contents.

	  To compile this code as a module, choose M here: the module will
	  be called dm-log-writes.

	  If unsure, say N.

config DM_INTEGRITY
	tristate "Integrity target support"
	depends on BLK_DEV_DM
	select BLK_DEV_INTEGRITY
	select DM_BUFIO
	select CRYPTO
	select ASYNC_XOR
	---help---
	  This device-mapper target emulates a block device that has
	  additional per-sector tags that can be used for storing
	  integrity information.

	  This integrity target is used with the dm-crypt target to
	  provide authenticated disk encryption, or it can be used
	  standalone.

	  To compile this code as a module, choose M here: the module will
	  be called dm-integrity.

	  If unsure, say N.
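
# Usage sketch (device and mapping names are placeholders): standalone
# dm-integrity devices are typically set up with integritysetup from
# the cryptsetup package, e.g.
#   integritysetup format /dev/sdb1
#   integritysetup open /dev/sdb1 protected
# When used underneath dm-crypt, cryptsetup sets this up automatically
# for authenticated encryption modes.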

endif # MD