* [PATCHSET v3 0/4] Provide more efficient buffer registration
@ 2024-09-12 16:38 Jens Axboe
2024-09-12 16:38 ` [PATCH 1/4] io_uring/rsrc: clear 'slot' entry upfront Jens Axboe
` (3 more replies)
0 siblings, 4 replies; 7+ messages in thread
From: Jens Axboe @ 2024-09-12 16:38 UTC (permalink / raw)
To: io-uring
Hi,
Pretty much what the subject line says: it's about 25k to 40k times
faster to duplicate an existing ring's buffer registration than it is
to manually map/pin/register the buffers again with a new ring.
Patch 1 is just a prep patch, patch 2 adds refs to struct
io_mapped_ubuf, patch 3 abstracts out a helper, and patch 4 finally adds
the register opcode to allow a ring to duplicate the registered mappings
from one ring to another.
This came about from discussions with the Varnish Cache project about
the registration overhead for cases with more dynamic ring/thread
creation.
Also see the buf-copy liburing branch for support and test code:
https://git.kernel.dk/cgit/liburing/log/?h=buf-copy
include/uapi/linux/io_uring.h | 13 +++++
io_uring/register.c | 60 ++++++++++++++--------
io_uring/register.h | 1 +
io_uring/rsrc.c | 96 ++++++++++++++++++++++++++++++++++-
io_uring/rsrc.h | 2 +
5 files changed, 150 insertions(+), 22 deletions(-)
Since v2:
- Ensure that it works for registered rings (both src/dst)
- Little cleanups
--
Jens Axboe
* [PATCH 1/4] io_uring/rsrc: clear 'slot' entry upfront
2024-09-12 16:38 [PATCHSET v3 0/4] Provide more efficient buffer registration Jens Axboe
@ 2024-09-12 16:38 ` Jens Axboe
2024-09-12 16:38 ` [PATCH 2/4] io_uring/rsrc: add reference count to struct io_mapped_ubuf Jens Axboe
` (2 subsequent siblings)
3 siblings, 0 replies; 7+ messages in thread
From: Jens Axboe @ 2024-09-12 16:38 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
No functional changes in this patch, but clearing the slot pointer
earlier will be required by a later change.
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/rsrc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 7d639a996f28..d42114845fac 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -114,6 +114,7 @@ static void io_buffer_unmap(struct io_ring_ctx *ctx, struct io_mapped_ubuf **slo
struct io_mapped_ubuf *imu = *slot;
unsigned int i;
+ *slot = NULL;
if (imu != &dummy_ubuf) {
for (i = 0; i < imu->nr_bvecs; i++)
unpin_user_page(imu->bvec[i].bv_page);
@@ -121,7 +122,6 @@ static void io_buffer_unmap(struct io_ring_ctx *ctx, struct io_mapped_ubuf **slo
io_unaccount_mem(ctx, imu->acct_pages);
kvfree(imu);
}
- *slot = NULL;
}
static void io_rsrc_put_work(struct io_rsrc_node *node)
--
2.45.2
* [PATCH 2/4] io_uring/rsrc: add reference count to struct io_mapped_ubuf
2024-09-12 16:38 [PATCHSET v3 0/4] Provide more efficient buffer registration Jens Axboe
2024-09-12 16:38 ` [PATCH 1/4] io_uring/rsrc: clear 'slot' entry upfront Jens Axboe
@ 2024-09-12 16:38 ` Jens Axboe
2024-09-12 16:38 ` [PATCH 3/4] io_uring/register: provide helper to get io_ring_ctx from 'fd' Jens Axboe
2024-09-12 16:38 ` [PATCH 4/4] io_uring: add IORING_REGISTER_COPY_BUFFERS method Jens Axboe
3 siblings, 0 replies; 7+ messages in thread
From: Jens Axboe @ 2024-09-12 16:38 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
Currently a mapped buffer has a single ring owner, and hence the
reference count will always be 1 when it's torn down and freed. However,
in preparation for being able to link io_mapped_ubuf to different spots,
add a reference count to manage its lifetime.
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/rsrc.c | 3 +++
io_uring/rsrc.h | 1 +
2 files changed, 4 insertions(+)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index d42114845fac..28f98de3c304 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -116,6 +116,8 @@ static void io_buffer_unmap(struct io_ring_ctx *ctx, struct io_mapped_ubuf **slo
*slot = NULL;
if (imu != &dummy_ubuf) {
+ if (!refcount_dec_and_test(&imu->refs))
+ return;
for (i = 0; i < imu->nr_bvecs; i++)
unpin_user_page(imu->bvec[i].bv_page);
if (imu->acct_pages)
@@ -990,6 +992,7 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
imu->folio_shift = data.folio_shift;
imu->folio_mask = ~((1UL << data.folio_shift) - 1);
}
+ refcount_set(&imu->refs, 1);
off = (unsigned long) iov->iov_base & ~imu->folio_mask;
*pimu = imu;
ret = 0;
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index 3d0dda3556e6..98a253172c27 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -47,6 +47,7 @@ struct io_mapped_ubuf {
unsigned int folio_shift;
unsigned long acct_pages;
unsigned long folio_mask;
+ refcount_t refs;
struct bio_vec bvec[] __counted_by(nr_bvecs);
};
--
2.45.2
* [PATCH 3/4] io_uring/register: provide helper to get io_ring_ctx from 'fd'
2024-09-12 16:38 [PATCHSET v3 0/4] Provide more efficient buffer registration Jens Axboe
2024-09-12 16:38 ` [PATCH 1/4] io_uring/rsrc: clear 'slot' entry upfront Jens Axboe
2024-09-12 16:38 ` [PATCH 2/4] io_uring/rsrc: add reference count to struct io_mapped_ubuf Jens Axboe
@ 2024-09-12 16:38 ` Jens Axboe
2024-09-12 16:38 ` [PATCH 4/4] io_uring: add IORING_REGISTER_COPY_BUFFERS method Jens Axboe
3 siblings, 0 replies; 7+ messages in thread
From: Jens Axboe @ 2024-09-12 16:38 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
Can be done in one of two ways:
1) Regular file descriptor, just fget()
2) Registered ring, index into our own table for that
In preparation for another register opcode that needs to get a ctx from
a file descriptor, abstract out this helper and use it in the main
register syscall as well.
Signed-off-by: Jens Axboe <[email protected]>
---
io_uring/register.c | 54 +++++++++++++++++++++++++++------------------
io_uring/register.h | 1 +
2 files changed, 34 insertions(+), 21 deletions(-)
diff --git a/io_uring/register.c b/io_uring/register.c
index 57cb85c42526..d90159478045 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -550,21 +550,16 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
return ret;
}
-SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
- void __user *, arg, unsigned int, nr_args)
+/*
+ * Given an 'fd' value, return the ctx associated with it. If 'registered' is
+ * true, then the registered index is used. Otherwise, the normal fd table.
+ * Caller must call fput() on the returned file, unless it's an ERR_PTR.
+ */
+struct file *io_uring_register_get_file(int fd, bool registered)
{
- struct io_ring_ctx *ctx;
- long ret = -EBADF;
struct file *file;
- bool use_registered_ring;
- use_registered_ring = !!(opcode & IORING_REGISTER_USE_REGISTERED_RING);
- opcode &= ~IORING_REGISTER_USE_REGISTERED_RING;
-
- if (opcode >= IORING_REGISTER_LAST)
- return -EINVAL;
-
- if (use_registered_ring) {
+ if (registered) {
/*
* Ring fd has been registered via IORING_REGISTER_RING_FDS, we
* need only dereference our task private array to find it.
@@ -572,27 +567,44 @@ SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
struct io_uring_task *tctx = current->io_uring;
if (unlikely(!tctx || fd >= IO_RINGFD_REG_MAX))
- return -EINVAL;
+ return ERR_PTR(-EINVAL);
fd = array_index_nospec(fd, IO_RINGFD_REG_MAX);
file = tctx->registered_rings[fd];
- if (unlikely(!file))
- return -EBADF;
} else {
file = fget(fd);
- if (unlikely(!file))
- return -EBADF;
- ret = -EOPNOTSUPP;
- if (!io_is_uring_fops(file))
- goto out_fput;
}
+ if (unlikely(!file))
+ return ERR_PTR(-EBADF);
+ if (io_is_uring_fops(file))
+ return file;
+ fput(file);
+ return ERR_PTR(-EOPNOTSUPP);
+}
+
+SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
+ void __user *, arg, unsigned int, nr_args)
+{
+ struct io_ring_ctx *ctx;
+ long ret = -EBADF;
+ struct file *file;
+ bool use_registered_ring;
+
+ use_registered_ring = !!(opcode & IORING_REGISTER_USE_REGISTERED_RING);
+ opcode &= ~IORING_REGISTER_USE_REGISTERED_RING;
+
+ if (opcode >= IORING_REGISTER_LAST)
+ return -EINVAL;
+
+ file = io_uring_register_get_file(fd, use_registered_ring);
+ if (IS_ERR(file))
+ return PTR_ERR(file);
ctx = file->private_data;
mutex_lock(&ctx->uring_lock);
ret = __io_uring_register(ctx, opcode, arg, nr_args);
mutex_unlock(&ctx->uring_lock);
trace_io_uring_register(ctx, opcode, ctx->nr_user_files, ctx->nr_user_bufs, ret);
-out_fput:
if (!use_registered_ring)
fput(file);
return ret;
diff --git a/io_uring/register.h b/io_uring/register.h
index c9da997d503c..cc69b88338fe 100644
--- a/io_uring/register.h
+++ b/io_uring/register.h
@@ -4,5 +4,6 @@
int io_eventfd_unregister(struct io_ring_ctx *ctx);
int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id);
+struct file *io_uring_register_get_file(int fd, bool registered);
#endif
--
2.45.2
* [PATCH 4/4] io_uring: add IORING_REGISTER_COPY_BUFFERS method
2024-09-12 16:38 [PATCHSET v3 0/4] Provide more efficient buffer registration Jens Axboe
` (2 preceding siblings ...)
2024-09-12 16:38 ` [PATCH 3/4] io_uring/register: provide helper to get io_ring_ctx from 'fd' Jens Axboe
@ 2024-09-12 16:38 ` Jens Axboe
2024-09-17 16:41 ` Gabriel Krisman Bertazi
3 siblings, 1 reply; 7+ messages in thread
From: Jens Axboe @ 2024-09-12 16:38 UTC (permalink / raw)
To: io-uring; +Cc: Jens Axboe
Buffers can be registered with io_uring, which allows skipping the
repeated pin_pages and unpin/unref of pages for each O_DIRECT operation.
This reduces the overhead of O_DIRECT IO.
However, registering buffers can take some time. Normally this isn't an
issue as it's done at initialization time (and hence less critical), but
for cases where rings can be created and destroyed as part of an IO
thread pool, registering the same buffers for multiple rings becomes a
more time-sensitive proposition. As an example, let's say an application
has an IO memory pool of 500G. Initial registration takes:
Got 500 huge pages (each 1024MB)
Registered 500 pages in 409 msec
or about 0.4 seconds. If we go higher to 900 1GB huge pages being
registered:
Registered 900 pages in 738 msec
which is, as expected, fully linear scaling.
Rather than have each ring pin/map/register the same buffer pool,
provide an io_uring_register(2) opcode to simply duplicate the buffers
that are registered with another ring. Adding the same 900GB of
registered buffers to the target ring can then be accomplished in:
Copied 900 pages in 17 usec
While timing differs a bit, this provides around a 25,000-40,000x
speedup for this use case.
Signed-off-by: Jens Axboe <[email protected]>
---
include/uapi/linux/io_uring.h | 13 +++++
io_uring/register.c | 6 +++
io_uring/rsrc.c | 91 +++++++++++++++++++++++++++++++++++
io_uring/rsrc.h | 1 +
4 files changed, 111 insertions(+)
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index a275f91d2ac0..9dc5bb428c8a 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -609,6 +609,9 @@ enum io_uring_register_op {
IORING_REGISTER_CLOCK = 29,
+ /* copy registered buffers from source ring to current ring */
+ IORING_REGISTER_COPY_BUFFERS = 30,
+
/* this goes last */
IORING_REGISTER_LAST,
@@ -694,6 +697,16 @@ struct io_uring_clock_register {
__u32 __resv[3];
};
+enum {
+ IORING_REGISTER_SRC_REGISTERED = 1,
+};
+
+struct io_uring_copy_buffers {
+ __u32 src_fd;
+ __u32 flags;
+ __u32 pad[6];
+};
+
struct io_uring_buf {
__u64 addr;
__u32 len;
diff --git a/io_uring/register.c b/io_uring/register.c
index d90159478045..dab0f8024ddf 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -542,6 +542,12 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
break;
ret = io_register_clock(ctx, arg);
break;
+ case IORING_REGISTER_COPY_BUFFERS:
+ ret = -EINVAL;
+ if (!arg || nr_args != 1)
+ break;
+ ret = io_register_copy_buffers(ctx, arg);
+ break;
default:
ret = -EINVAL;
break;
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 28f98de3c304..40696a395f0a 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -17,6 +17,7 @@
#include "openclose.h"
#include "rsrc.h"
#include "memmap.h"
+#include "register.h"
struct io_rsrc_update {
struct file *file;
@@ -1137,3 +1138,93 @@ int io_import_fixed(int ddir, struct iov_iter *iter,
return 0;
}
+
+static int io_copy_buffers(struct io_ring_ctx *ctx, struct io_ring_ctx *src_ctx)
+{
+ struct io_mapped_ubuf **user_bufs;
+ struct io_rsrc_data *data;
+ int i, ret, nbufs;
+
+ /*
+ * Drop our own lock here. We'll setup the data we need and reference
+ * the source buffers, then re-grab, check, and assign at the end.
+ */
+ mutex_unlock(&ctx->uring_lock);
+
+ mutex_lock(&src_ctx->uring_lock);
+ ret = -ENXIO;
+ nbufs = src_ctx->nr_user_bufs;
+ if (!nbufs)
+ goto out_unlock;
+ ret = io_rsrc_data_alloc(ctx, IORING_RSRC_BUFFER, NULL, nbufs, &data);
+ if (ret)
+ goto out_unlock;
+
+ ret = -ENOMEM;
+ user_bufs = kcalloc(nbufs, sizeof(*ctx->user_bufs), GFP_KERNEL);
+ if (!user_bufs)
+ goto out_free_data;
+
+ for (i = 0; i < nbufs; i++) {
+ struct io_mapped_ubuf *src = src_ctx->user_bufs[i];
+
+ refcount_inc(&src->refs);
+ user_bufs[i] = src;
+ }
+
+ /* Have a ref on the bufs now, drop src lock and re-grab our own lock */
+ mutex_unlock(&src_ctx->uring_lock);
+ mutex_lock(&ctx->uring_lock);
+ if (!ctx->user_bufs) {
+ ctx->user_bufs = user_bufs;
+ ctx->buf_data = data;
+ ctx->nr_user_bufs = nbufs;
+ return 0;
+ }
+
+ /* someone raced setting up buffers, dump ours */
+ for (i = 0; i < nbufs; i++)
+ io_buffer_unmap(ctx, &user_bufs[i]);
+ io_rsrc_data_free(data);
+ kfree(user_bufs);
+ return -EBUSY;
+out_free_data:
+ io_rsrc_data_free(data);
+out_unlock:
+ mutex_unlock(&src_ctx->uring_lock);
+ mutex_lock(&ctx->uring_lock);
+ return ret;
+}
+
+/*
+ * Copy the registered buffers from the source ring whose file descriptor
+ * is given in the src_fd to the current ring. This is identical to registering
+ * the buffers with ctx, except faster as mappings already exist.
+ *
+ * Since the memory is already accounted once, don't account it again.
+ */
+int io_register_copy_buffers(struct io_ring_ctx *ctx, void __user *arg)
+{
+ struct io_uring_copy_buffers buf;
+ bool registered_src;
+ struct file *file;
+ int ret;
+
+ if (ctx->user_bufs || ctx->nr_user_bufs)
+ return -EBUSY;
+ if (copy_from_user(&buf, arg, sizeof(buf)))
+ return -EFAULT;
+ if (buf.flags & ~IORING_REGISTER_SRC_REGISTERED)
+ return -EINVAL;
+ if (memchr_inv(buf.pad, 0, sizeof(buf.pad)))
+ return -EINVAL;
+
+ registered_src = (buf.flags & IORING_REGISTER_SRC_REGISTERED) != 0;
+ file = io_uring_register_get_file(buf.src_fd, registered_src);
+ if (IS_ERR(file))
+ return PTR_ERR(file);
+ ret = io_copy_buffers(ctx, file->private_data);
+ if (!registered_src)
+ fput(file);
+ return ret;
+}
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index 98a253172c27..93546ab337a6 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -68,6 +68,7 @@ int io_import_fixed(int ddir, struct iov_iter *iter,
struct io_mapped_ubuf *imu,
u64 buf_addr, size_t len);
+int io_register_copy_buffers(struct io_ring_ctx *ctx, void __user *arg);
void __io_sqe_buffers_unregister(struct io_ring_ctx *ctx);
int io_sqe_buffers_unregister(struct io_ring_ctx *ctx);
int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
--
2.45.2
* Re: [PATCH 4/4] io_uring: add IORING_REGISTER_COPY_BUFFERS method
2024-09-12 16:38 ` [PATCH 4/4] io_uring: add IORING_REGISTER_COPY_BUFFERS method Jens Axboe
@ 2024-09-17 16:41 ` Gabriel Krisman Bertazi
2024-09-20 6:16 ` Jens Axboe
0 siblings, 1 reply; 7+ messages in thread
From: Gabriel Krisman Bertazi @ 2024-09-17 16:41 UTC (permalink / raw)
To: Jens Axboe; +Cc: io-uring
Jens Axboe <[email protected]> writes:
> Buffers can be registered with io_uring, which allows skipping the
> repeated pin_pages and unpin/unref of pages for each O_DIRECT operation.
> This reduces the overhead of O_DIRECT IO.
>
> However, registering buffers can take some time. Normally this isn't an
> issue as it's done at initialization time (and hence less critical), but
> for cases where rings can be created and destroyed as part of an IO
> thread pool, registering the same buffers for multiple rings becomes a
> more time-sensitive proposition. As an example, let's say an application
> has an IO memory pool of 500G. Initial registration takes:
>
> Got 500 huge pages (each 1024MB)
> Registered 500 pages in 409 msec
>
> or about 0.4 seconds. If we go higher to 900 1GB huge pages being
> registered:
>
> Registered 900 pages in 738 msec
>
> which is, as expected, a fully linear scaling.
>
> Rather than have each ring pin/map/register the same buffer pool,
> provide an io_uring_register(2) opcode to simply duplicate the buffers
> that are registered with another ring. Adding the same 900GB of
> registered buffers to the target ring can then be accomplished in:
>
> Copied 900 pages in 17 usec
>
> While timing differs a bit, this provides around a 25,000-40,000x
> speedup for this use case.
Looks good, but I couldn't get it to apply on top of your branches. I
have only one comment, if you are doing a v4:
>
> Signed-off-by: Jens Axboe <[email protected]>
> ---
> include/uapi/linux/io_uring.h | 13 +++++
> io_uring/register.c | 6 +++
> io_uring/rsrc.c | 91 +++++++++++++++++++++++++++++++++++
> io_uring/rsrc.h | 1 +
> 4 files changed, 111 insertions(+)
>
> diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
> --- a/io_uring/rsrc.c
> +++ b/io_uring/rsrc.c
> @@ -17,6 +17,7 @@
> #include "openclose.h"
> #include "rsrc.h"
> #include "memmap.h"
> +#include "register.h"
>
> struct io_rsrc_update {
> struct file *file;
> @@ -1137,3 +1138,93 @@ int io_import_fixed(int ddir, struct iov_iter *iter,
>
> return 0;
> }
> +
> +static int io_copy_buffers(struct io_ring_ctx *ctx, struct io_ring_ctx *src_ctx)
The error handling code in this function is a bit hairy, IMO. I think
if you check nbufs unlocked and validate it later, it could be much
simpler:
static int io_copy_buffers(struct io_ring_ctx *ctx, struct io_ring_ctx *src_ctx)
{
	struct io_mapped_ubuf **user_bufs;
	struct io_rsrc_data *data;
	int i, ret, nbufs;

	/* Read nr_user_bufs unlocked. Must be validated later */
	nbufs = READ_ONCE(src_ctx->nr_user_bufs);
	if (!nbufs)
		return -ENXIO;

	ret = io_rsrc_data_alloc(ctx, IORING_RSRC_BUFFER, NULL, nbufs, &data);
	if (ret)
		return ret;

	user_bufs = kcalloc(nbufs, sizeof(*ctx->user_bufs), GFP_KERNEL);
	if (!user_bufs) {
		ret = -ENOMEM;
		goto out_free_data;
	}

	mutex_unlock(&ctx->uring_lock);
	mutex_lock(&src_ctx->uring_lock);

	ret = -EBUSY;
	if (nbufs != src_ctx->nr_user_bufs) {
		mutex_unlock(&src_ctx->uring_lock);
		mutex_lock(&ctx->uring_lock);
		goto out;
	}

	for (i = 0; i < nbufs; i++) {
		struct io_mapped_ubuf *src = src_ctx->user_bufs[i];

		refcount_inc(&src->refs);
		user_bufs[i] = src;
	}

	/* Have a ref on the bufs now, drop src lock and re-grab our own lock */
	mutex_unlock(&src_ctx->uring_lock);
	mutex_lock(&ctx->uring_lock);
	if (ctx->user_bufs)
		goto out_unmap;

	ctx->user_bufs = user_bufs;
	ctx->buf_data = data;
	ctx->nr_user_bufs = nbufs;
	return 0;

out_unmap:
	/* someone raced setting up buffers, dump ours */
	for (i = 0; i < nbufs; i++)
		io_buffer_unmap(ctx, &user_bufs[i]);
out:
	kfree(user_bufs);
out_free_data:
	io_rsrc_data_free(data);
	return ret;
}
Thanks,
--
Gabriel Krisman Bertazi
* Re: [PATCH 4/4] io_uring: add IORING_REGISTER_COPY_BUFFERS method
2024-09-17 16:41 ` Gabriel Krisman Bertazi
@ 2024-09-20 6:16 ` Jens Axboe
0 siblings, 0 replies; 7+ messages in thread
From: Jens Axboe @ 2024-09-20 6:16 UTC (permalink / raw)
To: Gabriel Krisman Bertazi; +Cc: io-uring
On 9/17/24 10:41 AM, Gabriel Krisman Bertazi wrote:
> Jens Axboe <[email protected]> writes:
>
>> Buffers can be registered with io_uring, which allows skipping the
>> repeated pin_pages and unpin/unref of pages for each O_DIRECT operation.
>> This reduces the overhead of O_DIRECT IO.
>>
>> However, registering buffers can take some time. Normally this isn't an
>> issue as it's done at initialization time (and hence less critical), but
>> for cases where rings can be created and destroyed as part of an IO
>> thread pool, registering the same buffers for multiple rings becomes a
>> more time-sensitive proposition. As an example, let's say an application
>> has an IO memory pool of 500G. Initial registration takes:
>>
>> Got 500 huge pages (each 1024MB)
>> Registered 500 pages in 409 msec
>>
>> or about 0.4 seconds. If we go higher to 900 1GB huge pages being
>> registered:
>>
>> Registered 900 pages in 738 msec
>>
>> which is, as expected, a fully linear scaling.
>>
>> Rather than have each ring pin/map/register the same buffer pool,
>> provide an io_uring_register(2) opcode to simply duplicate the buffers
>> that are registered with another ring. Adding the same 900GB of
>> registered buffers to the target ring can then be accomplished in:
>>
>> Copied 900 pages in 17 usec
>>
>> While timing differs a bit, this provides around a 25,000-40,000x
>> speedup for this use case.
>
> Looks good, but I couldn't get it to apply on top of your branches. I
> have only one comment, if you are doing a v4:
>>
>> Signed-off-by: Jens Axboe <[email protected]>
>> ---
>> include/uapi/linux/io_uring.h | 13 +++++
>> io_uring/register.c | 6 +++
>> io_uring/rsrc.c | 91 +++++++++++++++++++++++++++++++++++
>> io_uring/rsrc.h | 1 +
>> 4 files changed, 111 insertions(+)
>>
>> diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
>
>> --- a/io_uring/rsrc.c
>> +++ b/io_uring/rsrc.c
>> @@ -17,6 +17,7 @@
>> #include "openclose.h"
>> #include "rsrc.h"
>> #include "memmap.h"
>> +#include "register.h"
>>
>> struct io_rsrc_update {
>> struct file *file;
>> @@ -1137,3 +1138,93 @@ int io_import_fixed(int ddir, struct iov_iter *iter,
>>
>> return 0;
>> }
>> +
>> +static int io_copy_buffers(struct io_ring_ctx *ctx, struct io_ring_ctx *src_ctx)
>
>
> The error handling code in this function is a bit hairy, IMO. I think
> if you check nbufs unlocked and validate it later, it could be much
> simpler:
Sorry, I missed this due to travel - this is upstream in this merge window.
If you want to send a cleanup against for-6.12/io_uring, then please do!
--
Jens Axboe