
Document new ideas for RAID management on the storage configuration #1671

Draft
wants to merge 3 commits into master

Conversation

@ancorgs (Contributor) commented on Oct 16, 2024

We are considering the best way to integrate RAID configuration into Agama. That should cover both the configuration (used, for example, for unattended installation) and the web UI.

This pull request documents some of the ideas we have been discussing.

Bonus: moved some of the content of auto_storage.md to a separate file to make the document more focused.

@ancorgs (Contributor, Author) commented on Oct 16, 2024

Let me draft here a possible algorithm that could handle the auto-calculation of sizes explained in the document. It would imply adding some fields to Planned::Md to store the desired sizes (if they are not specified by the members).

  • For each entry in drives

    • Generate a planned disk or a planned MD.
    • For its partitions, generate planned partitions pointing to the parent device → set sizes if possible.
  • For each entry in md_raids

    • Generate a planned MD.
    • Create all its planned partitions pointing to the parent device → set sizes if possible.
    • If the members are expressed with a generate, create planned partitions for the chosen disks → set sizes if the generator specifies them.
    • Assign sizes to the members if sizes are specified for the MD RAID (maybe this can be postponed until after LVM); see the sketch after this list.
      • Use the MD sizes as the base for the sizes of the member partitions.
      • Maybe report an error if there is already a size specified for those member partitions (could that be detected in advance?).
      • Maybe report an error if some of the members are disks (could that be detected in advance?).
  • For each entry in volume_groups

    • Generate a planned VG (with pvs_candidate_devices if needed).
    • Generate all planned LVs → set sizes (this is always possible).
    • For each PV expressed as an alias, set the size of the corresponding planned device if it has no size yet (the size can propagate if the PV is an MD).
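
As referenced in the md_raids step above, here is a minimal self-contained Ruby sketch of the MD-to-member size propagation. PlannedMd and PlannedPartition are simplified stand-ins rather than the real Planned::Md and planned partition classes, and the error check simply follows the "maybe report an error" idea from the list:

# Simplified planned partition: only the attributes relevant to sizes.
PlannedPartition = Struct.new(:name, :min_size, :max_size, keyword_init: true)

# Simplified planned MD carrying the extra "desired sizes" fields mentioned above.
class PlannedMd
  attr_reader :members
  attr_accessor :min_size, :max_size

  def initialize(min_size: nil, max_size: nil)
    @min_size = min_size
    @max_size = max_size
    @members = []
  end

  # Copies the sizes specified for the MD RAID to every member that does not
  # specify sizes of its own.
  def propagate_sizes!
    return unless min_size || max_size

    members.each do |member|
      if member.min_size || member.max_size
        raise ArgumentError, "size specified both for #{member.name} and for its MD"
      end

      member.min_size = min_size
      member.max_size = max_size
    end
  end
end

# Usage: two members with no explicit sizes, sizes given at the MD level.
md = PlannedMd.new(min_size: 40, max_size: 100) # GiB, kept as plain numbers here
md.members << PlannedPartition.new(name: "sda1")
md.members << PlannedPartition.new(name: "sdb1")
md.propagate_sizes!
md.members.each { |m| puts "#{m.name}: #{m.min_size}-#{m.max_size} GiB" }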

@ancorgs (Contributor, Author) commented on Oct 16, 2024

Generally, the different members of an MD array should have the same size. In fact, mdadm complains if the variance among the devices is greater than 1%, with the exception of RAID0.

That does not play very well with size ranges. For example, assume two members of the same RAID1, both with a min of 40 GiB and a max of 100 GiB. Imagine a chosen PartitionsDistribution in which one gets located in a space where it can grow up to 100 GiB and the other in a space where it can only grow up to 45 GiB. It actually makes no sense for the first one to grow that much (and maybe that space could be used by another partition that could also make use of it).

To mitigate that, we can process the spaces of the distribution in a sensible order, starting with those that contain RAID members but have less extra size. After processing each space, we can register the smallest member of each MD so far. When processing the next space, we can then limit the growth of members of the same MD.

This looks like a good place to insert that logic:

class Y2Storage::Proposal::PartitionCreator
  def create_partitions(distribution)
    self.devicegraph = original_graph.duplicate

    # Process every space of the distribution, accumulating the devices
    # created in each of them.
    devices_map = distribution.spaces.reduce({}) do |devices, space|
      new_devices = process_free_space(
        space.disk_space, space.partitions, space.usable_size, space.num_logical
      )
      devices.merge(new_devices)
    end

    CreatorResult.new(devicegraph, devices_map)
  end
end
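
As a rough illustration of that ordering and capping idea (not the actual implementation; Space and Partition below are minimal stand-ins rather than the real free disk spaces and planned devices), the processing could look like this:

# Stand-ins: a space has some extra (growable) size and planned partitions;
# a partition may belong to an MD, identified here just by a name.
Space = Struct.new(:name, :extra_size, :partitions, keyword_init: true)
Partition = Struct.new(:name, :md, :min_size, :max_size, keyword_init: true)

# Spaces containing RAID members go first, those with less extra size before
# the ones with more, so the most constrained members set the cap early.
def sorted_spaces(spaces)
  spaces.sort_by do |space|
    [space.partitions.any?(&:md) ? 0 : 1, space.extra_size]
  end
end

# Walk the spaces in that order, remembering the smallest member assigned so
# far to each MD and capping the max size of later members of the same MD.
def cap_md_members(spaces)
  smallest = {}

  sorted_spaces(spaces).each do |space|
    space.partitions.each do |part|
      next unless part.md

      cap = smallest[part.md]
      part.max_size = [part.max_size, cap].min if cap
      # Naive estimation of the size the member ends up with in this space:
      # it grows until it hits its max or consumes the extra room.
      assigned = [part.min_size + space.extra_size, part.max_size].min
      smallest[part.md] = [smallest[part.md], assigned].compact.min
    end
  end
end

# The RAID1 example above: both members ask for 40-100 GiB, but one space only
# has room to grow up to 45 GiB. That space is processed first, so the member
# located in the bigger space gets its max capped from 100 GiB down to 45 GiB.
spaces = [
  Space.new(name: "big", extra_size: 60,
            partitions: [Partition.new(name: "sda1", md: "md0", min_size: 40, max_size: 100)]),
  Space.new(name: "small", extra_size: 5,
            partitions: [Partition.new(name: "sdb1", md: "md0", min_size: 40, max_size: 100)])
]
cap_md_members(spaces)
spaces.each { |s| s.partitions.each { |p| puts "#{p.name}: max #{p.max_size} GiB" } }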

If needed, we could also consider adding a "best distribution" criterion that minimizes the chances of locating members of the same array in spaces that would lead to situations like the one described above.
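
A purely hypothetical sketch of what such a criterion could measure, reusing the Space and Partition stand-ins from the previous sketch and the same deliberately naive estimation of how much a member could grow in a given space:

# The score of a candidate distribution is, for every MD, the gap between how
# much its most favored and its most constrained members could grow; lower
# means a more balanced placement. It would be computed before any capping.
def md_imbalance_score(spaces)
  growth_by_md = Hash.new { |h, k| h[k] = [] }

  spaces.each do |space|
    space.partitions.each do |part|
      next unless part.md

      growth_by_md[part.md] << [part.min_size + space.extra_size, part.max_size].min
    end
  end

  growth_by_md.values.sum { |sizes| sizes.max - sizes.min }
end

# For the RAID1 example, the imbalanced placement scores 55 (100 vs 45 GiB),
# while a distribution giving both members a similar growth margin would score
# close to 0 and would therefore be preferred.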

Comment on lines +962 to +967
"generate": {
"mdRaids": "default",
"level": "raid0",
"devices": [
{ "generate": ["sda", "sdb"] }
]
Contributor
Amazing :)

{
  "devices": [ { "generate": ["sda", "sdb"] } ],
  "level": "raid0",
  "size": "40 GiB"
Contributor

What happens if both generate and the raid indicate a size?

Contributor Author

That doesn't make much sense, I would say.
