
ack-frequency #319

Merged: 6 commits merged into master on Apr 10, 2020

Conversation

@kazuho kazuho (Member) commented Apr 7, 2020

The aim of this PR is to build the framework for using ACK Frequency frames.

Specifically, the PR implements the following:

  • Send and receive ACK frequency frames.
  • As a receiver, allow the peer to change packet and reorder tolerance.
  • As a sender, utilize packet tolerance. Packet tolerance is updated once every 4 PTOs (i.e., the sentmap expiration time) to 1/8 of the CWND size; see the sketch below.

We can look into expanding / improving the use of ACK Frequency in follow-up PRs.
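
As a rough illustration of the sender-side rule in the last bullet, here is a minimal sketch. All names (update_at, pto, cwnd_bytes, max_udp_payload_size) are stand-ins for illustration, not quicly's exact internals:

    #include <stdint.h>

    /* Sketch of the sender-side update described above: once every 4 PTOs
     * (the sentmap expiration period), recompute packet tolerance as 1/8 of
     * CWND, counted in packets. Names are stand-ins, not quicly's internals. */
    static uint64_t update_packet_tolerance(int64_t now, int64_t *update_at, int64_t pto,
                                            uint32_t cwnd_bytes, uint16_t max_udp_payload_size,
                                            uint64_t current_tolerance)
    {
        if (now < *update_at)
            return current_tolerance; /* not yet time to recompute */
        *update_at = now + 4 * pto;   /* next update after the sentmap expiration time */
        uint64_t tolerance = (uint64_t)cwnd_bytes / 8 / max_udp_payload_size;
        return tolerance > 0 ? tolerance : 1;
    }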

@kazuho kazuho marked this pull request as ready for review April 8, 2020 04:38
@kazuho kazuho (Member, Author) commented Apr 8, 2020

@janaiyengar I think the PR is ready for review. PTAL.

@kazuho kazuho requested a review from janaiyengar April 8, 2020 04:42
@kazuho kazuho mentioned this pull request Apr 8, 2020
@janaiyengar janaiyengar (Collaborator) left a comment

Looks great! A couple of comments.

if (packet_tolerance > QUICLY_MAX_PACKET_TOLERANCE)
    packet_tolerance = QUICLY_MAX_PACKET_TOLERANCE;
s->dst = quicly_encode_ack_frequency_frame(s->dst, conn->egress.ack_frequency.sequence++, packet_tolerance,
                                           conn->super.peer.transport_params.max_ack_delay * 1000, 0);
janaiyengar (Collaborator) commented on the hunk above:

This should be set to some fraction of RTT, probably 1/8th also, but we can do this later.
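
For reference, a minimal sketch of what that later change could look like, assuming the smoothed RTT is tracked in milliseconds (rtt_smoothed_ms is a stand-in variable, not quicly's actual field):

    /* Suggested follow-up (sketch): request an ack delay of RTT/8, in
     * microseconds, instead of the peer's max_ack_delay. rtt_smoothed_ms
     * stands in for however the implementation tracks smoothed RTT. */
    uint64_t max_ack_delay_usec = (uint64_t)rtt_smoothed_ms * 1000 / 8;
    s->dst = quicly_encode_ack_frequency_frame(s->dst, conn->egress.ack_frequency.sequence++, packet_tolerance,
                                               max_ack_delay_usec, 0);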

(two further review threads on lib/quicly.c, resolved)
@janaiyengar janaiyengar (Collaborator) left a comment

Ok, one minor comment, but LGTM. Thanks for working on this!

(review thread on lib/quicly.c, resolved)
@kazuho kazuho merged commit db0468f into master Apr 10, 2020
@kazuho kazuho (Member, Author) commented Apr 10, 2020

Thank you for the review. Confirmed interoperability with picoquic. Merging.

@larryliu2018 larryliu2018 commented Jun 30, 2022

I simulated a server and a client connected through a switch in mininet; the topology is: client-------switch-------server

Commands:

server:
./cli -k ./t/assets/server.key -c ./t/assets/server.crt -f 0.125 10.0.0.20 4433

client:
time ./cli 10.0.0.20 4433 -p /www/test-50M -f 0.125 > /dev/null

I keep adjusting the value of the -f parameter, but the ratio of ack packets to data packets is always approximately 1:2, with no obvious change. I don't know why this is; is my command written incorrectly, or is something else going on? I hope you can help, thank you!

[attached screenshot of the packet capture]
@kazuho @janaiyengar

@kazuho kazuho (Member, Author) commented Jun 30, 2022

@larryliu2018 I'm only guessing, but presumably, there is no loss being observed.

Until a loss is observed (and therefore the connection exits slow start), the ack frequency is held at 2:1, i.e. one ack per two packets. The rationale is that during slow start, timely receipt of acks is essential for growing CWND at 2x per RTT.
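
In other words, the tolerance choice can be sketched as follows. The function and the in_slow_start flag are hypothetical names, and the QUICLY_MAX_PACKET_TOLERANCE value below is illustrative (the real clamp appears in the review hunk earlier in this thread):

    #include <stdint.h>

    #define QUICLY_MAX_PACKET_TOLERANCE 63 /* illustrative bound, not the real value */

    /* Sketch of the sender-side policy described above. */
    static uint64_t choose_packet_tolerance(int in_slow_start, uint32_t cwnd_bytes, uint16_t max_udp_payload_size)
    {
        if (in_slow_start)
            return 2; /* ack every second packet so CWND can double each RTT */
        uint64_t tolerance = (uint64_t)cwnd_bytes / 8 / max_udp_payload_size; /* 1/8 of CWND, in packets */
        if (tolerance < 2)
            tolerance = 2;
        if (tolerance > QUICLY_MAX_PACKET_TOLERANCE)
            tolerance = QUICLY_MAX_PACKET_TOLERANCE;
        return tolerance;
    }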

@larryliu2018 larryliu2018 commented Jun 30, 2022, quoting the reply above:

Thank you very much for your reply; now I understand why. Indeed, when I simulated the link I did not set a packet loss rate, and the receive and transmit buffers were also set very large.
Also, one more question: while testing quicly, I found that appending "> /dev/null" to the client command gives noticeably better throughput than running without it. Do you know why that is?

Tested topology: client-------switch1------------switch2------------server

The link between switch1 and switch2 is set to 1000 Mbps, and the RTT is 1 ms.
The test result shows the transfer time for transferring a 1000 MB file.

[attached screenshot of the test results]

The picture may be a bit blurry; roughly, the transfer time without "> /dev/null" is about 25 s, and with it about 9 s.
@kazuho
