public inbox for [email protected]
* [PATCHSET v2 0/3] Provided buffer improvements
@ 2022-03-10 16:59 Jens Axboe
  2022-03-10 16:59 ` [PATCH 1/3] io_uring: retry early for reads if we can poll Jens Axboe
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Jens Axboe @ 2022-03-10 16:59 UTC (permalink / raw)
  To: io-uring

Hi,

This series has one functional improvement, recycling provided buffers when
we don't know when the readiness trigger will come in, one optimization for
how we index them, and one fix for READ/READV re-selecting a buffer when we
attempt to issue the request.

-- 
Jens Axboe




* [PATCH 1/3] io_uring: retry early for reads if we can poll
  2022-03-10 16:59 [PATCHSET v2 0/3] Provided buffer improvements Jens Axboe
@ 2022-03-10 16:59 ` Jens Axboe
  2022-03-10 16:59 ` [PATCH 2/3] io_uring: ensure reads re-import for selected buffers Jens Axboe
  2022-03-10 16:59 ` [PATCH 3/3] io_uring: recycle provided buffers if request goes async Jens Axboe
  2 siblings, 0 replies; 4+ messages in thread
From: Jens Axboe @ 2022-03-10 16:59 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

Most of the logic in io_read() deals with regular files, and in some ways
it would make sense to split the handling into S_IFREG and others. But
at least for retry, we don't need to bother setting up a bunch of state
just to abort in the loop later. In particular, don't bother forcing
setup of async data for a normal non-vectored read when we don't need it.
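
For illustration (not part of the patch), the case this targets is a plain,
non-vectored read on something pollable, such as an empty pipe. Assuming a
ring set up as in the cover letter sketch and an empty pipe whose read end
sits in pipefd[0] (a made-up name), the request below hits the -EAGAIN path
and now retries via poll rather than setting up async data first:

	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
	char buf[256];

	/* plain IORING_OP_READ on a pollable fd that isn't ready yet */
	io_uring_prep_read(sqe, pipefd[0], buf, sizeof(buf), 0);
	io_uring_submit(&ring);
	/* completion arrives once the pipe becomes readable */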

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3d30f7b07677..4d8366bc226f 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3773,6 +3773,9 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 
 	if (ret == -EAGAIN || (req->flags & REQ_F_REISSUE)) {
 		req->flags &= ~REQ_F_REISSUE;
+		/* if we can poll, just do that */
+		if (req->opcode == IORING_OP_READ && file_can_poll(req->file))
+			return -EAGAIN;
 		/* IOPOLL retry should happen for io-wq threads */
 		if (!force_nonblock && !(req->ctx->flags & IORING_SETUP_IOPOLL))
 			goto done;
-- 
2.35.1



* [PATCH 2/3] io_uring: ensure reads re-import for selected buffers
  2022-03-10 16:59 [PATCHSET v2 0/3] Provided buffer improvements Jens Axboe
  2022-03-10 16:59 ` [PATCH 1/3] io_uring: retry early for reads if we can poll Jens Axboe
@ 2022-03-10 16:59 ` Jens Axboe
  2022-03-10 16:59 ` [PATCH 3/3] io_uring: recycle provided buffers if request goes async Jens Axboe
  2 siblings, 0 replies; 4+ messages in thread
From: Jens Axboe @ 2022-03-10 16:59 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

If we drop the selected buffer when scheduling a retry, then we need to
re-import it when we start the request again.
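
To illustrate the flow (again reusing the made-up liburing setup from the
cover letter sketch): a buffer-select read on a file that isn't ready yet
drops its selected buffer while it waits for readiness, so a buffer has to
be re-imported when the read is actually retried. The buffer id reported in
the CQE is whatever was picked at retry time:

	/* buffer-select read on an empty pipe: the buffer dropped before the
	 * retry is re-imported (re-selected) when the pipe becomes readable */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, pipefd[0], NULL, BUF_LEN, 0);
	sqe->flags |= IOSQE_BUFFER_SELECT;
	sqe->buf_group = BGID;
	io_uring_submit(&ring);

	io_uring_wait_cqe(&ring, &cqe);
	if (cqe->res >= 0 && (cqe->flags & IORING_CQE_F_BUFFER))
		printf("used buffer id %u\n", cqe->flags >> IORING_CQE_BUFFER_SHIFT);
	io_uring_cqe_seen(&ring, cqe);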

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4d8366bc226f..584b36dcd0aa 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3737,6 +3737,16 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 		if (unlikely(ret < 0))
 			return ret;
 	} else {
+		/*
+		 * Safe and required to re-import if we're using provided
+		 * buffers, as we dropped the selected one before retry.
+		 */
+		if (req->flags & REQ_F_BUFFER_SELECT) {
+			ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
+			if (unlikely(ret < 0))
+				return ret;
+		}
+
 		rw = req->async_data;
 		s = &rw->s;
 		/*
-- 
2.35.1



* [PATCH 3/3] io_uring: recycle provided buffers if request goes async
  2022-03-10 16:59 [PATCHSET v2 0/3] Provided buffer improvements Jens Axboe
  2022-03-10 16:59 ` [PATCH 1/3] io_uring: retry early for reads if we can poll Jens Axboe
  2022-03-10 16:59 ` [PATCH 2/3] io_uring: ensure reads re-import for selected buffers Jens Axboe
@ 2022-03-10 16:59 ` Jens Axboe
  2 siblings, 0 replies; 4+ messages in thread
From: Jens Axboe @ 2022-03-10 16:59 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

If we are using provided buffers, it's less than useful to have a buffer
selected and pinned if a request needs to go async or arms poll to get
notified of when we can process it.

Recycle the buffer in those cases, so we don't pin it for the duration of
the request.
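
For completeness, the userspace half of the buffer lifecycle looks roughly
like the below (made-up names, reusing the liburing setup from the cover
letter sketch): once the application has consumed the data from a completed
buffer-select request, it hands that one buffer back to the kernel by
re-providing it under the same group and buffer id. The recycling added here
is the kernel-internal counterpart for requests that never ended up
consuming their selected buffer.

	/* after processing a completion that used buffer 'bid' from group
	 * BGID, give that single buffer back to the kernel for reuse */
	unsigned bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
	void *buf = (char *)bufs + bid * BUF_LEN;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_provide_buffers(sqe, buf, BUF_LEN, 1, BGID, bid);
	io_uring_submit(&ring);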

Signed-off-by: Jens Axboe <[email protected]>
---
 fs/io_uring.c | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 584b36dcd0aa..3145c9cacee0 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -269,6 +269,7 @@ struct io_buffer {
 	__u64 addr;
 	__u32 len;
 	__u16 bid;
+	__u16 bgid;
 };
 
 struct io_restriction {
@@ -1351,6 +1352,36 @@ static inline unsigned int io_put_kbuf(struct io_kiocb *req,
 	return cflags;
 }
 
+static void io_kbuf_recycle(struct io_kiocb *req)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+	struct io_buffer *head, *buf;
+
+	if (likely(!(req->flags & REQ_F_BUFFER_SELECTED)))
+		return;
+
+	lockdep_assert_held(&ctx->uring_lock);
+
+	buf = req->kbuf;
+
+	head = xa_load(&ctx->io_buffers, buf->bgid);
+	if (head) {
+		list_add(&buf->list, &head->list);
+	} else {
+		int ret;
+
+		INIT_LIST_HEAD(&buf->list);
+
+		/* if we fail, just leave buffer attached */
+		ret = xa_insert(&ctx->io_buffers, buf->bgid, buf, GFP_KERNEL);
+		if (unlikely(ret < 0))
+			return;
+	}
+
+	req->flags &= ~REQ_F_BUFFER_SELECTED;
+	req->kbuf = NULL;
+}
+
 static bool io_match_task(struct io_kiocb *head, struct task_struct *task,
 			  bool cancel_all)
 	__must_hold(&req->ctx->timeout_lock)
@@ -4763,6 +4794,7 @@ static int io_add_buffers(struct io_ring_ctx *ctx, struct io_provide_buf *pbuf,
 		buf->addr = addr;
 		buf->len = min_t(__u32, pbuf->len, MAX_RW_COUNT);
 		buf->bid = bid;
+		buf->bgid = pbuf->bgid;
 		addr += pbuf->len;
 		bid++;
 		if (!*head) {
@@ -7395,8 +7427,12 @@ static void io_queue_sqe_arm_apoll(struct io_kiocb *req)
 		 * Queued up for async execution, worker will release
 		 * submit reference when the iocb is actually submitted.
 		 */
+		io_kbuf_recycle(req);
 		io_queue_async_work(req, NULL);
 		break;
+	case IO_APOLL_OK:
+		io_kbuf_recycle(req);
+		break;
 	}
 
 	if (linked_timeout)
-- 
2.35.1


