public inbox for [email protected]
* [PATCHSET 0/2] Provided buffer improvements
@ 2022-03-09 18:32 Jens Axboe
From: Jens Axboe @ 2022-03-09 18:32 UTC (permalink / raw)
  To: io-uring

Hi,

One functional improvement for recycling provided buffers when we don't
know when the readiness trigger comes in, and one optimization for how
we index them.

-- 
Jens Axboe





* [PATCH 1/2] io_uring: recycle provided buffers if request goes async
From: Jens Axboe @ 2022-03-09 18:32 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

If we are using provided buffers, it's less than useful to have a buffer
selected and pinned if a request needs to go async or arms poll to get
notified of when we can process it.

Recycle the buffer in those events, so we don't pin it for the duration
of the request.
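
The idea can be sketched in plain userspace C; the struct and function
names below are illustrative stand-ins, not the kernel types (the real
code uses struct io_buffer and an xarray keyed by buffer group id). A
request that selected a buffer but must wait simply pushes the buffer
back onto its group's free list:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical userspace stand-ins for the kernel structures. */
struct buf {
	struct buf *next;	/* free-list link within the group */
	unsigned short bid;	/* buffer id */
	unsigned short bgid;	/* group id, so we know where to return it */
};

struct group {
	struct buf *free;	/* head of the group's free list */
};

struct request {
	struct buf *kbuf;	/* selected buffer, NULL if none */
};

/* Select: pop a buffer from the group's free list and pin it. */
static void select_buf(struct request *req, struct group *grp)
{
	struct buf *b = grp->free;

	if (b) {
		grp->free = b->next;
		req->kbuf = b;
	}
}

/* Recycle: the request must wait (go async / arm poll), so hand the
 * buffer back to its group instead of pinning it for the duration. */
static void recycle_buf(struct request *req, struct group *grp)
{
	struct buf *b = req->kbuf;

	if (!b)
		return;
	b->next = grp->free;
	grp->free = b;
	req->kbuf = NULL;
}
```

With this shape, a recycled buffer is immediately visible to the next
request that selects from the same group, rather than sitting idle until
the waiting request completes.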

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index aca76e731c70..fa637e00062d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -268,6 +268,7 @@ struct io_buffer {
 	__u64 addr;
 	__u32 len;
 	__u16 bid;
+	__u16 bgid;
 };
 
 struct io_restriction {
@@ -1335,6 +1336,34 @@ static inline unsigned int io_put_kbuf(struct io_kiocb *req,
 	return cflags;
 }
 
+static void io_kbuf_recycle(struct io_kiocb *req)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+	struct io_buffer *head, *buf;
+
+	if (likely(!(req->flags & REQ_F_BUFFER_SELECTED)))
+		return;
+
+	lockdep_assert_held(&ctx->uring_lock);
+
+	buf = req->kbuf;
+
+	head = xa_load(&ctx->io_buffers, buf->bgid);
+	if (head) {
+		list_add(&buf->list, &head->list);
+	} else {
+		int ret;
+
+		/* if we fail, just leave buffer attached */
+		ret = xa_insert(&ctx->io_buffers, buf->bgid, buf, GFP_KERNEL);
+		if (unlikely(ret < 0))
+			return;
+	}
+
+	req->flags &= ~REQ_F_BUFFER_SELECTED;
+	req->kbuf = NULL;
+}
+
 static bool io_match_task(struct io_kiocb *head, struct task_struct *task,
 			  bool cancel_all)
 	__must_hold(&req->ctx->timeout_lock)
@@ -4690,6 +4719,7 @@ static int io_add_buffers(struct io_ring_ctx *ctx, struct io_provide_buf *pbuf,
 		buf->addr = addr;
 		buf->len = min_t(__u32, pbuf->len, MAX_RW_COUNT);
 		buf->bid = bid;
+		buf->bgid = pbuf->bgid;
 		addr += pbuf->len;
 		bid++;
 		if (!*head) {
@@ -7203,6 +7233,8 @@ static void io_queue_sqe_arm_apoll(struct io_kiocb *req)
 {
 	struct io_kiocb *linked_timeout = io_prep_linked_timeout(req);
 
+	io_kbuf_recycle(req);
+
 	switch (io_arm_poll_handler(req)) {
 	case IO_APOLL_READY:
 		io_req_task_queue(req);
-- 
2.34.1




* [PATCH 2/2] io_uring: use unlocked xarray helpers
From: Jens Axboe @ 2022-03-09 18:32 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

io_uring provides its own locking for the xarray manipulations, so use
the helpers that bypass the xarray internal locking.
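
The locked vs. unlocked helper split can be illustrated outside the
kernel (all names below are hypothetical, not the xarray API): when the
caller already serializes access with its own lock or critical section,
taking the data structure's internal lock again on every call is
redundant work, so a `__`-prefixed variant that assumes the lock is held
is used instead.

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical userspace analog of the locked/unlocked helper pattern. */
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static int table[16];

/* Unlocked variant: the caller must already hold table_lock. */
static void __table_store(unsigned int idx, int val)
{
	table[idx] = val;
}

/* Locked variant: takes and drops the lock around a single store. */
static void table_store(unsigned int idx, int val)
{
	pthread_mutex_lock(&table_lock);
	__table_store(idx, val);
	pthread_mutex_unlock(&table_lock);
}

/* A caller doing a multi-step update under its own critical section
 * uses the __ helpers directly, avoiding a redundant lock/unlock on
 * every call. */
static void store_pair(unsigned int a, unsigned int b, int val)
{
	pthread_mutex_lock(&table_lock);
	__table_store(a, val);
	__table_store(b, val);
	pthread_mutex_unlock(&table_lock);
}
```

The patch below makes the same substitution: the call sites already run
under io_uring's own serialization, so the plain `xa_insert()` and
`xa_erase()` calls become their `__`-prefixed counterparts.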

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index fa637e00062d..2f3aedbffd24 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1355,7 +1355,7 @@ static void io_kbuf_recycle(struct io_kiocb *req)
 		int ret;
 
 		/* if we fail, just leave buffer attached */
-		ret = xa_insert(&ctx->io_buffers, buf->bgid, buf, GFP_KERNEL);
+		ret = __xa_insert(&ctx->io_buffers, buf->bgid, buf, GFP_KERNEL);
 		if (unlikely(ret < 0))
 			return;
 	}
@@ -3330,7 +3330,7 @@ static struct io_buffer *io_buffer_select(struct io_kiocb *req, size_t *len,
 			list_del(&kbuf->list);
 		} else {
 			kbuf = head;
-			xa_erase(&req->ctx->io_buffers, bgid);
+			__xa_erase(&req->ctx->io_buffers, bgid);
 		}
 		if (*len > kbuf->len)
 			*len = kbuf->len;
@@ -4594,7 +4594,7 @@ static int __io_remove_buffers(struct io_ring_ctx *ctx, struct io_buffer *buf,
 		cond_resched();
 	}
 	i++;
-	xa_erase(&ctx->io_buffers, bgid);
+	__xa_erase(&ctx->io_buffers, bgid);
 
 	return i;
 }
@@ -4749,7 +4749,7 @@ static int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
 
 	ret = io_add_buffers(ctx, p, &head);
 	if (ret >= 0 && !list) {
-		ret = xa_insert(&ctx->io_buffers, p->bgid, head, GFP_KERNEL);
+		ret = __xa_insert(&ctx->io_buffers, p->bgid, head, GFP_KERNEL);
 		if (ret < 0)
 			__io_remove_buffers(ctx, head, p->bgid, -1U);
 	}
-- 
2.34.1




* Re: [PATCHSET 0/2] Provided buffer improvements
From: Jens Axboe @ 2022-03-09 21:52 UTC (permalink / raw)
  To: io-uring

On 3/9/22 11:32 AM, Jens Axboe wrote:
> Hi,
> 
> One functional improvement for recycling provided buffers when we don't
> know when the readiness trigger comes in, and one optimization for how
> we index them.

I'll spin a v2 of this; a few changes are needed.

-- 
Jens Axboe



