From: Jens Axboe <[email protected]>
To: [email protected]
Subject: [PATCHSET RFC 0/3] Add support for incremental buffer consumption
Date: Mon, 12 Aug 2024 09:51:11 -0600
Message-ID: <[email protected]>
Hi,
The recommended way to use io_uring for networking workloads is to use
ring provided buffers. The application sets up a ring (or several) for
buffers, and puts buffers for receiving data into them. When a recv
completes, the completion contains information on which buffer data was
received into. You can even use bundles with receive, and receive data
into multiple buffers at the same time.
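For reference, the existing flow looks roughly like this with liburing. This is a sketch, not a complete program: BGID, NR_BUFS, BUF_SIZE, sockfd, and bufs[] are placeholders, and error handling is omitted.

```c
/* Sketch of the current ring-provided-buffer recv flow with liburing.
 * BGID, NR_BUFS, BUF_SIZE, sockfd and bufs[] are placeholders. */
#include <liburing.h>

static void recv_with_buf_ring(struct io_uring *ring, int sockfd,
			       void *bufs[])
{
	struct io_uring_buf_ring *br;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int ret, i;

	/* Set up a buffer ring for group BGID and hand NR_BUFS
	 * small buffers to the kernel. */
	br = io_uring_setup_buf_ring(ring, NR_BUFS, BGID, 0, &ret);
	for (i = 0; i < NR_BUFS; i++)
		io_uring_buf_ring_add(br, bufs[i], BUF_SIZE, i,
				      io_uring_buf_ring_mask(NR_BUFS), i);
	io_uring_buf_ring_advance(br, NR_BUFS);

	/* Ask the kernel to pick a buffer from group BGID for this recv */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_recv(sqe, sockfd, NULL, 0, 0);
	sqe->flags |= IOSQE_BUFFER_SELECT;
	sqe->buf_group = BGID;
	io_uring_submit(ring);

	/* The CQE says which buffer the data landed in. Today the
	 * whole buffer is consumed, however many bytes arrived. */
	io_uring_wait_cqe(ring, &cqe);
	if (cqe->flags & IORING_CQE_F_BUFFER) {
		unsigned bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
		/* cqe->res bytes of data are in bufs[bid] */
	}
	io_uring_cqe_seen(ring, cqe);
}
```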
This all works fine, but has some limitations in that a buffer is always
fully consumed. This patchset adds support for partial consumption of
a buffer. This, in turn, allows an application to supply fewer buffers
for receives, but of a much larger size. For example, rather than add
a ton of 1500b buffers for receiving data, the application can just add
one large buffer. Whenever data is received, only the current head part
of the buffer is consumed and used. This leads to less iteration of
buffers, and also eliminates any potential wastage of memory if some
of the receives only partially fill a provided buffer.
Patchset is lightly tested, passes current tests and also the new test
cases I wrote for it. The liburing 'pbuf-ring-inc' branch has extra
tests and support for this.
Using incrementally consumed buffers from an application point of view
is fairly trivial. Just pass the flag IOU_PBUF_RING_INC to
io_uring_setup_buf_ring(), and this marks this buffer group ID as being
incrementally consumed. Outside of that, the application just needs to
keep track of where the current read/recv point is at. See patch 3
for details.
Patches 1 and 2 are just basic prep patches, patch 3 is the meat of it. But
still pretty darn simple. Note that this feature ONLY works with ring
provided buffers, not with legacy/classic provided buffers. Code can also
be found here, along with some other patches on top which aren't strictly
related:
https://git.kernel.dk/cgit/linux/log/?h=io_uring-net-coalesce
and it's based on 6.11-rc3 with the pending 6.12 io_uring patches pulled
in first.
Comments/reviews welcome! I'll add support for this to examples/proxy
in the liburing repo, and can share some performance results after
that.
include/uapi/linux/io_uring.h | 8 ++++++
io_uring/io_uring.c | 2 +-
io_uring/kbuf.c | 28 +++++++++---------
io_uring/kbuf.h | 54 ++++++++++++++++++++++++++---------
io_uring/net.c | 8 +++---
io_uring/rw.c | 8 +++---
6 files changed, 71 insertions(+), 37 deletions(-)
--
Jens Axboe
Thread overview:
  2024-08-12 15:51 ` [PATCH 1/3] io_uring/kbuf: add io_kbuf_commit() helper (Jens Axboe)
  2024-08-12 15:51 ` [PATCH 2/3] io_uring/kbuf: move io_ring_head_to_buf() to kbuf.h (Jens Axboe)
  2024-08-12 15:51 ` [PATCH 3/3] io_uring/kbuf: add support for incremental buffer consumption (Jens Axboe)