
Some parameter values need to be changed in NVMe/TCP in order to support 256 connections #160

Closed
gbregman opened this issue Jul 11, 2023 · 4 comments · Fixed by #90

@gbregman (Contributor)

When the NVMe/TCP support for Ceph is GAed, we need to support 256 host connections. To get this working, I had to change some parameter values so that I wouldn't run out of memory and fail some of the connections.

I increased the hugepages value from 2048 to 4096:

diff --git a/Makefile b/Makefile
index 8cc8668..9354412 100644
--- a/Makefile
+++ b/Makefile
@@ -1,4 +1,4 @@
-HUGEPAGES = 2048 # 4 GB
+HUGEPAGES = 4096 # 8 GB
 HUGEPAGES_DIR = /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
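For anyone who wants to try the larger allocation without rebuilding, the same value can also be written at runtime to the sysfs path referenced by HUGEPAGES_DIR. This is only a minimal Python sketch of that idea (needs root, assumes 2 MB hugepages as in the Makefile), not code from this repository:

# Sketch: request 4096 x 2 MB hugepages (8 GB) via sysfs and verify the grant.
NR_HUGEPAGES = "/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"

def set_hugepages(count: int = 4096) -> int:
    """Request `count` 2 MB hugepages and return what the kernel actually granted."""
    with open(NR_HUGEPAGES, "w") as f:
        f.write(str(count))
    with open(NR_HUGEPAGES) as f:
        return int(f.read().strip())

granted = set_hugepages(4096)
if granted < 4096:
    raise SystemExit(f"only {granted} hugepages allocated; free or defragment memory first")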

I also decreased the number of queues from 128 to 8, which is what we use in our NVMe/TCP implementation in SVC, and increased the size of the data capsule:

diff --git a/ceph-nvmeof.conf b/ceph-nvmeof.conf
index 7721148..e08363b 100644
--- a/ceph-nvmeof.conf
+++ b/ceph-nvmeof.conf
@@ -42,4 +42,4 @@ log_level = WARN
 # transports = tcp

 # Example value: {"max_queue_depth" : 16, "max_io_size" : 4194304, "io_unit_size" : 1048576, "zcopy" : false}
-# transport_tcp_options =
+transport_tcp_options = {"in_capsule_data_size" : 8192, "max_io_qpairs_per_ctrlr" : 7}
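For context only (this is an illustration, not the gateway's actual code), a JSON-valued option like the one above can be read from an INI-style conf file with the Python standard library; the section name "spdk" below is an assumption based on the file's layout:

# Sketch: parse transport_tcp_options from ceph-nvmeof.conf into a dict.
import configparser
import json

def read_tcp_options(path: str = "ceph-nvmeof.conf") -> dict:
    cfg = configparser.ConfigParser()
    cfg.read(path)
    # Section name "spdk" is an assumption; adjust to wherever the option actually lives.
    raw = cfg.get("spdk", "transport_tcp_options", fallback="{}")
    return json.loads(raw)

print(read_tcp_options())
# e.g. {'in_capsule_data_size': 8192, 'max_io_qpairs_per_ctrlr': 7}
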
rkachach commented Jul 11, 2023

@epuertat in the current service spec we only have transports and no support for the transport_tcp_options configuration option. Should we add this to the service spec as well?

@rkachach

@gbregman can you please provide a sample of the ceph-nvmeof.conf you normally use for your testing? (Just to make sure we are covering all the fields from cephadm.)

rkachach commented Jul 11, 2023

Support has been added as part of PR ceph/ceph#50423; the user can provide additional options such as:

service_type: nvmeof
service_id: nvmeof.test
placement:
  hosts:
    - ceph-node-0
spec:
  pool: rbd
  name: myname
  group: mygroup
  transport_tcp_options:
    in_capsule_data_size: 4096
    max_io_qpairs_per_ctrlr: 3

By default, if nothing is provided, it will generate:

transport_tcp_options = {"in_capsule_data_size" : 8192, "max_io_qpairs_per_ctrlr" : 7}
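
To make the mapping concrete, here is a rough Python sketch (my own illustration, not the actual cephadm template code) of how the spec field could be rendered into the generated conf line, falling back to the defaults above when nothing is provided:

# Sketch: render the transport_tcp_options conf line from the service spec.
import json

DEFAULT_TCP_OPTIONS = {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr": 7}

def render_tcp_options(spec_options=None) -> str:
    opts = spec_options if spec_options else DEFAULT_TCP_OPTIONS
    return "transport_tcp_options = " + json.dumps(opts)

# Using the spec from this comment:
print(render_tcp_options({"in_capsule_data_size": 4096, "max_io_qpairs_per_ctrlr": 3}))
# Nothing provided -> the default line shown above:
print(render_tcp_options())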

@gbregman (Contributor, Author)

@rkachach

ceph-nvmeof.conf.txt

epuertat linked a pull request Jul 13, 2023 that will close this issue