commit 6fc1ddc
doc: Auto-calculated sizes and RAID generators
ancorgs committed Oct 16, 2024
Showing 1 changed file: doc/auto_storage.md (161 additions, 5 deletions)

```
MdRaid
  level [<string>]
  chunkSize [<number>]
  devices [<<string|Search>[]>]
  size [<Size>]
  encryption [<Encryption>]
  filesystem [<Filesystem>]
  ptableType [<string>]
```

If the size is completely omitted for a device that already exists (i.e. combined with `search`),
then Agama acts as if both min and max limits had been set to "current" (which implies the
partition or logical volume will not be resized).
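
For instance, in the following minimal sketch (the device names and mount path are illustrative)
the re-used partition carries no `size`, so both limits default to "current" and the partition
keeps its size:

```json
"storage": {
  "drives": [
    {
      "search": "/dev/sda",
      "partitions": [
        {
          "search": "/dev/sda2",
          "filesystem": { "path": "/home" }
        }
      ]
    }
  ]
}
```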

Moreover, if the size is omitted for a device that will be created, Agama will determine the size
limits when possible. There are basically two kinds of situations in which that automatic size
calculation can be performed.

On the one hand, the device may directly contain a `filesystem` entry specifying a mount point.
Agama will then use the settings of the product to set the size limits. From a more technical point
of view, that translates into the following (see the sketch after the list):

- If the mount path corresponds to a volume supporting `auto_size`, that feature will be used.
- If it corresponds to a volume without `auto_size`, the min and max sizes of the volume will be
  used.
- If there is no volume for that mount path, the sizes of the default volume will be used.
- If the product does not specify a default volume, the behavior is still not defined (there are
  several reasonable options).
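
The following is a minimal sketch of that mechanism (the disk is illustrative): since the new
partition specifies a `filesystem` with a mount path but no `size`, the limits would be taken from
whatever the product configuration defines for "/".

```json
"storage": {
  "drives": [
    {
      "search": "/dev/sda",
      "partitions": [
        { "filesystem": { "path": "/" } }
      ]
    }
  ]
}
```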

On the other hand, the size limits of some devices can be omitted if they can be inferred from other
related devices following some rules.

- For an MD RAID defined on top of new partitions, it is possible to specify the sizes of all the
  partitions that will become members of the RAID. But it is also possible to specify the desired
  size for the resulting MD RAID; the size limits of each partition will then be inferred
  automatically, with a small margin of error of a few MiBs (see the sketch after this list).
- Something similar happens with a partition that acts as the **only** physical volume of a new LVM
  volume group. Specifying the sizes of the logical volumes can be enough; the size limits of the
  underlying partition will then match the values needed to make the logical volumes fit. In this
  case the calculated partition size is fully accurate.
- The two previous scenarios can be combined. For a new MD RAID that acts as the **only** physical
  volume of a new LVM volume group, the sizes of the logical volumes can be used to precisely
  determine the size of the MD array and, based on that, the almost exact size of the underlying
  new partitions defined to act as members of the RAID.
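
A minimal sketch of the first case in that list (the RAID level, mount path and device names are
illustrative): the member partitions define no sizes, so they would be sized to accommodate a
40 GiB RAID.

```json
"storage": {
  "drives": [
    { "search": "/dev/sda", "partitions": [ { "alias": "md-a" } ] },
    { "search": "/dev/sdb", "partitions": [ { "alias": "md-b" } ] }
  ],
  "mdRaids": [
    {
      "devices": [ "md-a", "md-b" ],
      "level": "raid1",
      "size": "40 GiB",
      "filesystem": { "path": "/home" }
    }
  ]
}
```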

The two described mechanisms to automatically determine size limits can be combined, making it
possible to create a configuration with no explicit sizes at all, like the following example.

```json
"storage": {
  "drives": [
    {
      "partitions": [
        { "alias": "pv" }
      ]
    }
  ],
  "volumeGroups": [
    {
      "name": "system",
      "physicalVolumes": [ "pv" ],
      "logicalVolumes": [
        { "filesystem": { "path": "/" } },
        { "filesystem": { "path": "swap" } }
      ]
    }
  ]
}
```

Assuming the product configuration specifies a root filesystem with a minimum size of 5 GiB and a
maximum of 20 GiB, and states that the swap must be as big as the RAM of the system, those values
would be applied to the logical volumes and the partition with alias "pv" would be sized
accordingly, taking into account all the overheads and roundings introduced by the technologies
involved, such as LVM or the chosen partition table type.
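
For instance, on a hypothetical machine with 8 GiB of RAM, the root logical volume would get limits
of 5 to 20 GiB and the swap volume would get 8 GiB, so the "pv" partition would end up with size
limits of roughly 13 to 28 GiB, slightly increased to absorb those overheads.
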
#### Under Discussion

### Generating Partitions as MD RAID members

MD arrays can be configured to explicitly use a set of devices by adding their aliases to the
`devices` property.

```json
"storage": {
  "drives": [
    {
      "search": "/dev/sda",
      "partitions": [
        { "alias": "sda-40", "size": "40 GiB" }
      ]
    },
    {
      "search": "/dev/sdb",
      "partitions": [
        { "alias": "sdb-40", "size": "40 GiB" }
      ]
    }
  ],
  "mdRaids": [
    {
      "devices": [ "sda-40", "sdb-40" ],
      "level": "raid0"
    }
  ]
}
```

The partitions acting as members can be automatically generated by simply indicating the target
disks that will hold the partitions. For that, the `devices` section must contain a `generate`
entry.

```json
"storage": {
  "drives": [
    { "search": "/dev/sda", "alias": "sda" },
    { "search": "/dev/sdb", "alias": "sdb" }
  ],
  "mdRaids": [
    {
      "devices": [
        {
          "generate": {
            "targetDisks": ["sda", "sdb"],
            "size": "40 GiB"
          }
        }
      ],
      "level": "raid0"
    }
  ]
}
```

As explained in the section about sizes, it is also possible to set the size of the new RAID and
let Agama calculate the corresponding sizes of the partitions used as members. That makes it
possible to use the short syntax for `generate`.

```json
"storage": {
  "drives": [
    { "search": "/dev/sda", "alias": "sda" },
    { "search": "/dev/sdb", "alias": "sdb" }
  ],
  "mdRaids": [
    {
      "devices": [ { "generate": ["sda", "sdb"] } ],
      "level": "raid0",
      "size": "40 GiB"
    }
  ]
}
```

### Generating Default Volumes

Every product provides a configuration which defines the storage volumes (e.g., feasible file
systems, default sizes, etc.).
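
A `generate` entry makes it possible to create those volumes without listing them one by one. As a
minimal sketch (the drive definition is illustrative, and the *default* keyword is assumed to work
for partitions in the same way it is shown below for MD arrays), the following would generate all
the default product volumes as new partitions:

```json
"storage": {
  "drives": [
    {
      "partitions": [
        { "generate": "default" }
      ]
    }
  ]
}
```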
Expand Down Expand Up @@ -810,6 +938,7 @@ The *mandatory* keyword can be used for only generating the mandatory partitions
"storage": {
"volumeGroups": [
{
"name": "system",
"logicalVolumes": [
{ "generate": "mandatory" }
]
Expand All @@ -818,6 +947,30 @@ The *mandatory* keyword can be used for only generating the mandatory partitions
}
```

The *default* and *mandatory* keywords can also be used to generate a set of formatted MD arrays.
Assuming the default volumes are "/", "/home" and "swap", the following snippet would generate three
RAIDs of the appropriate sizes and the corresponding six partitions needed to support them.

```json
"storage": {
  "drives": [
    { "search": "/dev/sda", "alias": "sda" },
    { "search": "/dev/sdb", "alias": "sdb" }
  ],
  "mdRaids": [
    {
      "generate": {
        "mdRaids": "default",
        "level": "raid0",
        "devices": [
          { "generate": ["sda", "sdb"] }
        ]
      }
    }
  ]
}
```

### Generating Physical Volumes

Volume groups can be configured to explicitly use a set of devices as physical volumes. The aliases
of the devices to use are added to the list of physical volumes:

```json
"storage": {
  "drives": [
    {
      "search": "/dev/vda",
      "partitions": [
        { "alias": "pv2", "size": "100 GiB" },
        { "alias": "pv1", "size": "20 GiB" }
      ]
    }
  ],
  "volumeGroups": [
    {
      "name": "system",
      "physicalVolumes": ["pv1", "pv2"]
    }
  ]
}
```

The partitions acting as physical volumes can also be generated automatically, by adding a
`generate` entry with the aliases of the target disks to the list of physical volumes:

```json
"storage": {
  "drives": [
    { "search": "/dev/vda", "alias": "pvs-disk" }
  ],
  "volumeGroups": [
    {
      "name": "system",
      "physicalVolumes": [
        { "generate": ["pvs-disk"] }
      ]
    }
  ]
}
```

If some settings have to be applied to the automatically generated partitions (e.g., if they must
be encrypted), the expanded syntax of the *generate* section is used:

```json
"storage": {
  "drives": [
    { "search": "/dev/vda", "alias": "pvs-disk" }
  ],
  "volumeGroups": [
    {
      "name": "system",
      "physicalVolumes": [
        {
          "generate": {
            "targetDevices": ["pvs-disk"],
            "encryption": {
              "luks2": { "password": "12345" }
            }
          }
        }
      ]
    }
  ]
}
```

0 comments on commit 6fc1ddc

Please sign in to comment.