Use u32 to describe vsock related buffer sizes #4637

Status: Closed (wants to merge 3 commits)
90 changes: 51 additions & 39 deletions src/vmm/src/devices/virtio/iovec.rs
@@ -108,23 +108,25 @@
pub fn read_exact_volatile_at(
&self,
mut buf: &mut [u8],
offset: usize,
offset: u32,
) -> Result<(), VolatileMemoryError> {
if offset < self.len() as usize {
let expected = buf.len();
if offset < self.len() {
let expected = u32::try_from(buf.len()).unwrap();
Contributor: Here, we can translate the error from `try_from` into some `VolatileMemoryError` to indicate that it's impossible to read more than `u32::MAX` bytes from a descriptor chain.
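A minimal sketch of the translation this comment suggests. The enum below is a local stand-in for the crate's `VolatileMemoryError` (the real type lives in vm-memory), and reusing `OutOfBounds` for the oversized-buffer case is an assumption; only the `map_err` pattern is the point:

```rust
use std::convert::TryFrom;

// Stand-in for the crate's error type; only the translation pattern matters.
#[derive(Debug, PartialEq)]
enum VolatileMemoryError {
    OutOfBounds { addr: usize },
}

// Instead of `u32::try_from(buf.len()).unwrap()`, surface a proper error
// when the host-side buffer exceeds what a descriptor chain can address.
fn expected_len(buf: &[u8]) -> Result<u32, VolatileMemoryError> {
    u32::try_from(buf.len()).map_err(|_| VolatileMemoryError::OutOfBounds { addr: buf.len() })
}

fn main() {
    assert_eq!(expected_len(&[0u8; 4]), Ok(4));
}
```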

let bytes_read = self.read_volatile_at(&mut buf, offset, expected)?;

if bytes_read != expected {
return Err(VolatileMemoryError::PartialBuffer {
expected,
completed: bytes_read,
expected: expected as usize,
completed: bytes_read as usize,
Comment on lines +119 to +120
Contributor: Let's use `expected.into()` instead of `as` here.
Contributor: ...and everywhere else in this commit. We should always be able to use `.into()` or `.try_into().unwrap()` where appropriate.
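A sketch of the conversions the reviewers prefer over bare `as` casts. One wrinkle: std deliberately does not implement `From<u32> for usize` (16-bit targets exist), which is why the fallback is `try_into().unwrap()` rather than `.into()` in the usize direction:

```rust
use std::convert::TryFrom;

// Lossless widening: `.into()` compiles only where a `From` impl exists.
fn widen_to_u64(n: u32) -> u64 {
    n.into()
}

// std has no `From<u32> for usize`, so widening to usize goes through
// `try_from`; the unwrap cannot fire on Firecracker's 64-bit hosts.
fn widen_to_usize(n: u32) -> usize {
    usize::try_from(n).unwrap()
}

fn main() {
    assert_eq!(widen_to_u64(300), 300u64);
    assert_eq!(widen_to_usize(300), 300usize);
    // The hazard checked conversions avoid: `as` silently truncates.
    assert_eq!((u64::from(u32::MAX) + 1) as u32, 0);
}
```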

});
}

Ok(())
} else {
// If `offset` is past size, there's nothing to read.
Err(VolatileMemoryError::OutOfBounds { addr: offset })
Err(VolatileMemoryError::OutOfBounds {
addr: offset as usize,
})
}
}

@@ -134,29 +136,30 @@
pub fn read_volatile_at<W: WriteVolatile>(
&self,
dst: &mut W,
mut offset: usize,
mut len: usize,
) -> Result<usize, VolatileMemoryError> {
mut offset: u32,
mut len: u32,
) -> Result<u32, VolatileMemoryError> {
let mut total_bytes_read = 0;

for iov in &self.vecs {
if len == 0 {
break;
}

if offset >= iov.iov_len {
offset -= iov.iov_len;
let iov_len = u32::try_from(iov.iov_len).unwrap();
if offset >= iov_len {
offset -= iov_len;
Comment on lines +149 to +151
Contributor: Mh, I think here it's better to have a `let mut offset = offset as usize` in line 142, so that we do not need any casts around this arithmetic and the `VolatileSlice` constructor.
Contributor: I agree; however, let's do `let mut offset = u32::try_into(offset).unwrap()`. We should also add a comment on why this is ok. Something like:

`iov.iov_len` is a `usize`, but it gets assigned from `DescriptorChain::len`, which is a `u32`, so the guest cannot pass to us something that is bigger than `u32`. As a result, the conversion cannot fail.
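The invariant under discussion, sketched as a stand-alone loop over iovec lengths (the slice of `usize` lengths is an illustrative stand-in for the crate's `iovec` array; the guest-supplied `DescriptorChain::len` is a `u32`, so each length fits and the `try_from` cannot fail):

```rust
use std::convert::TryFrom;

// Walk the iovec lengths, consuming `offset` whole entries at a time;
// return the first entry the offset lands in, plus the remaining offset.
fn skip_to_offset(iov_lens: &[usize], mut offset: u32) -> Option<(usize, u32)> {
    for (i, &l) in iov_lens.iter().enumerate() {
        // Each length originates from a guest-provided u32, so this is safe.
        let iov_len = u32::try_from(l).unwrap();
        if offset >= iov_len {
            offset -= iov_len; // skip this whole iovec
            continue;
        }
        return Some((i, offset)); // offset now relative to iovec `i`
    }
    None
}

fn main() {
    assert_eq!(skip_to_offset(&[100, 200], 150), Some((1, 50)));
}
```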

continue;
}

let mut slice =
// SAFETY: the constructor IoVecBufferMut::from_descriptor_chain ensures that
// all iovecs contained point towards valid ranges of guest memory
unsafe { VolatileSlice::new(iov.iov_base.cast(), iov.iov_len).offset(offset)? };
unsafe { VolatileSlice::new(iov.iov_base.cast(), iov.iov_len).offset(offset as usize)? };
offset = 0;

if slice.len() > len {
slice = slice.subslice(0, len)?;
if u32::try_from(slice.len()).unwrap() > len {
Contributor (suggested change):
-            if u32::try_from(slice.len()).unwrap() > len {
+            if slice.len() > len as usize {

Or also just a `let len = len as usize;` at the top of the method :). We mostly care that the API of these functions expresses the right thing, but casting up to usize inside of them is fine, since from there, the usize won't "escape" again.
Contributor: Again, agree.

slice = slice.subslice(0, len as usize)?;
}

let bytes_read = loop {
@@ -166,13 +169,13 @@
{
continue
}
Ok(bytes_read) => break bytes_read,
Ok(bytes_read) => break u32::try_from(bytes_read).unwrap(),
Err(volatile_memory_error) => return Err(volatile_memory_error),
}
};
total_bytes_read += bytes_read;

if bytes_read < slice.len() {
if slice.len() > bytes_read as usize {
break;
}
len -= bytes_read;
@@ -248,23 +251,25 @@
pub fn write_all_volatile_at(
&mut self,
mut buf: &[u8],
offset: usize,
offset: u32,
) -> Result<(), VolatileMemoryError> {
if offset < self.len() as usize {
let expected = buf.len();
if offset < self.len() {
let expected = u32::try_from(buf.len()).unwrap();
Contributor: see above.

let bytes_written = self.write_volatile_at(&mut buf, offset, expected)?;

if bytes_written != expected {
return Err(VolatileMemoryError::PartialBuffer {
expected,
completed: bytes_written,
expected: expected as usize,
completed: bytes_written as usize,

Codecov / codecov/patch warning on line 263 in src/vmm/src/devices/virtio/iovec.rs: added lines #L262 - L263 were not covered by tests.
});
}

Ok(())
} else {
// We cannot write past the end of the `IoVecBufferMut`.
Err(VolatileMemoryError::OutOfBounds { addr: offset })
Err(VolatileMemoryError::OutOfBounds {
addr: offset as usize,
})
}
}

@@ -274,29 +279,30 @@
pub fn write_volatile_at<W: ReadVolatile>(
&mut self,
src: &mut W,
mut offset: usize,
mut len: usize,
) -> Result<usize, VolatileMemoryError> {
mut offset: u32,
mut len: u32,
) -> Result<u32, VolatileMemoryError> {
let mut total_bytes_read = 0;

for iov in &self.vecs {
if len == 0 {
break;
}

if offset >= iov.iov_len {
offset -= iov.iov_len;
let iov_len = u32::try_from(iov.iov_len).unwrap();
Contributor: see above.

if offset >= iov_len {
offset -= iov_len;
continue;
}

let mut slice =
// SAFETY: the constructor IoVecBufferMut::from_descriptor_chain ensures that
// all iovecs contained point towards valid ranges of guest memory
unsafe { VolatileSlice::new(iov.iov_base.cast(), iov.iov_len).offset(offset)? };
unsafe { VolatileSlice::new(iov.iov_base.cast(), iov.iov_len).offset(offset as usize)? };
offset = 0;

if slice.len() > len {
slice = slice.subslice(0, len)?;
if u32::try_from(slice.len()).unwrap() > len {
Contributor: see above.

slice = slice.subslice(0, len as usize)?;
}

let bytes_read = loop {
@@ -306,13 +312,13 @@
{
continue
}
Ok(bytes_read) => break bytes_read,
Ok(bytes_read) => break u32::try_from(bytes_read).unwrap(),
Err(volatile_memory_error) => return Err(volatile_memory_error),
}
};
total_bytes_read += bytes_read;

if bytes_read < slice.len() {
if slice.len() > bytes_read as usize {
break;
}
len -= bytes_read;
@@ -587,7 +593,9 @@
// 5 bytes at offset 252 (only 4 bytes left).
test_vec4[60..64].copy_from_slice(&buf[0..4]);
assert_eq!(
iovec.write_volatile_at(&mut &*buf, 252, buf.len()).unwrap(),
iovec
.write_volatile_at(&mut &*buf, 252, buf.len().try_into().unwrap())
.unwrap(),
4
);
vq.dtable[0].check_data(&test_vec1);
@@ -731,11 +739,13 @@
assert_eq!(
iov.read_volatile_at(
&mut KaniBuffer(&mut buf),
offset as usize,
GUEST_MEMORY_SIZE
offset,
GUEST_MEMORY_SIZE.try_into().unwrap()
)
.unwrap(),
buf.len().min(iov.len().saturating_sub(offset) as usize)
u32::try_from(buf.len())
.unwrap()
.min(iov.len().saturating_sub(offset))
);
}

@@ -761,11 +771,13 @@
iov_mut
.write_volatile_at(
&mut KaniBuffer(&mut buf),
offset as usize,
GUEST_MEMORY_SIZE
offset,
GUEST_MEMORY_SIZE.try_into().unwrap()
)
.unwrap(),
buf.len().min(iov_mut.len().saturating_sub(offset) as usize)
u32::try_from(buf.len())
.unwrap()
.min(iov_mut.len().saturating_sub(offset))
);
}
}
6 changes: 3 additions & 3 deletions src/vmm/src/devices/virtio/net/device.rs
@@ -445,7 +445,7 @@ impl Net {
net_metrics: &NetDeviceMetrics,
) -> Result<bool, NetError> {
// Read the frame headers from the IoVecBuffer
let max_header_len = headers.len();
let max_header_len = u32::try_from(headers.len()).unwrap();
let header_len = frame_iovec
.read_volatile_at(&mut &mut *headers, 0, max_header_len)
.map_err(|err| {
@@ -454,7 +454,7 @@
NetError::VnetHeaderMissing
})?;

let headers = frame_bytes_from_buf(&headers[..header_len]).map_err(|e| {
let headers = frame_bytes_from_buf(&headers[..header_len as usize]).map_err(|e| {
error!("VNET headers missing in TX frame");
net_metrics.tx_malformed_frames.inc();
e
@@ -466,7 +466,7 @@
// Ok to unwrap here, because we are passing a buffer that has the exact size
// of the `IoVecBuffer` minus the VNET headers.
frame_iovec
.read_exact_volatile_at(&mut frame, vnet_hdr_len())
.read_exact_volatile_at(&mut frame, vnet_hdr_len().try_into().unwrap())
.unwrap();
let _ = ns.detour_frame(&frame);
METRICS.mmds.rx_accepted.inc();
27 changes: 13 additions & 14 deletions src/vmm/src/devices/virtio/vsock/csm/connection.rs
@@ -235,11 +235,8 @@ where
} else {
// On a successful data read, we fill in the packet with the RW op, and
// length of the read data.
// Safe to unwrap because read_cnt is no more than max_len, which is bounded
// by self.peer_avail_credit(), a u32 internally.
pkt.set_op(uapi::VSOCK_OP_RW)
.set_len(u32::try_from(read_cnt).unwrap());
METRICS.rx_bytes_count.add(read_cnt as u64);
pkt.set_op(uapi::VSOCK_OP_RW).set_len(read_cnt);
METRICS.rx_bytes_count.add(read_cnt.into());
}
self.rx_cnt += Wrapping(pkt.len());
self.last_fwd_cnt_to_peer = self.fwd_cnt;
@@ -605,7 +602,7 @@ where
/// Raw data can either be sent straight to the host stream, or to our TX buffer, if the
/// former fails.
fn send_bytes(&mut self, pkt: &VsockPacket) -> Result<(), VsockError> {
let len = pkt.len() as usize;
let len = pkt.len();

// If there is data in the TX buffer, that means we're already registered for EPOLLOUT
// events on the underlying stream. Therefore, there's no point in attempting a write
@@ -635,8 +632,8 @@
};
// Move the "forwarded bytes" counter ahead by how much we were able to send out.
// Safe to unwrap because the maximum value is pkt.len(), which is a u32.
self.fwd_cnt += wrap_usize_to_u32(written);
METRICS.tx_bytes_count.add(written as u64);
self.fwd_cnt += written;
METRICS.tx_bytes_count.add(written.into());

// If we couldn't write the whole slice, we'll need to push the remaining data to our
// buffer.
@@ -662,8 +659,8 @@

/// Get the maximum number of bytes that we can send to our peer, without overflowing its
/// buffer.
fn peer_avail_credit(&self) -> usize {
(Wrapping(self.peer_buf_alloc) - (self.rx_cnt - self.peer_fwd_cnt)).0 as usize
fn peer_avail_credit(&self) -> u32 {
(Wrapping(self.peer_buf_alloc) - (self.rx_cnt - self.peer_fwd_cnt)).0
}
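The credit computation above relies on `Wrapping<u32>` arithmetic, which is what lets it return a `u32` directly now. A self-contained sketch of why it stays correct even after the counters wrap past `u32::MAX` (the concrete values are illustrative, not from the PR):

```rust
use std::num::Wrapping;

// peer_avail_credit = peer_buf_alloc - (rx_cnt - peer_fwd_cnt), all mod 2^32.
fn peer_avail_credit(peer_buf_alloc: u32, rx_cnt: Wrapping<u32>, peer_fwd_cnt: Wrapping<u32>) -> u32 {
    (Wrapping(peer_buf_alloc) - (rx_cnt - peer_fwd_cnt)).0
}

fn main() {
    // Counters have wrapped: rx_cnt is numerically "behind" peer_fwd_cnt,
    // yet modular subtraction still recovers the 3 bytes in flight.
    let credit = peer_avail_credit(64 * 1024, Wrapping(1), Wrapping(u32::MAX - 1));
    assert_eq!(credit, 64 * 1024 - 3);
}
```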

/// Prepare a packet header for transmission to our peer.
@@ -918,7 +915,7 @@ mod tests {
assert!(credit < self.conn.peer_buf_alloc);
self.conn.peer_fwd_cnt = Wrapping(0);
self.conn.rx_cnt = Wrapping(self.conn.peer_buf_alloc - credit);
assert_eq!(self.conn.peer_avail_credit(), credit as usize);
assert_eq!(self.conn.peer_avail_credit(), credit);
}

fn send(&mut self) {
@@ -943,11 +940,13 @@
}

fn init_data_tx_pkt(&mut self, mut data: &[u8]) -> &VsockPacket {
assert!(data.len() <= self.tx_pkt.buf_size());
assert!(data.len() <= self.tx_pkt.buf_size() as usize);
self.init_tx_pkt(uapi::VSOCK_OP_RW, u32::try_from(data.len()).unwrap());

let len = data.len();
self.rx_pkt.read_at_offset_from(&mut data, 0, len).unwrap();
self.rx_pkt
.read_at_offset_from(&mut data, 0, len.try_into().unwrap())
.unwrap();
&self.tx_pkt
}
}
@@ -1284,7 +1283,7 @@ mod tests {
ctx.set_stream(stream);

// Fill up the TX buffer.
let data = vec![0u8; ctx.tx_pkt.buf_size()];
let data = vec![0u8; ctx.tx_pkt.buf_size() as usize];
ctx.init_data_tx_pkt(data.as_slice());
for _i in 0..(csm_defs::CONN_TX_BUF_SIZE as usize / data.len()) {
ctx.send();