public inbox for [email protected]
* [PATCHSET 0/3] Add support for multishot reads
@ 2023-09-11 20:40 Jens Axboe
  2023-09-11 20:40 ` [PATCH 1/3] io_uring/rw: split io_read() into a helper Jens Axboe
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Jens Axboe @ 2023-09-11 20:40 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

Hi,

We support multishot for other request types, generally in the shape of
a flag for the request. A flag-based approach isn't straightforward for
reads, however, as the read/write flags live in the RWF_ space. Instead,
add a separate opcode for this, IORING_OP_READ_MULTISHOT.

This can only be used with provided buffers, like the other multishot
request types that read or receive data.

It can also only be used with pollable file types, such as tun devices
or pipes. File types that are always readable (or seekable), like
regular files, cannot be used with multishot reads.

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/3] io_uring/rw: split io_read() into a helper
  2023-09-11 20:40 [PATCHSET 0/3] Add support for multishot reads Jens Axboe
@ 2023-09-11 20:40 ` Jens Axboe
  2023-09-11 20:40 ` [PATCH 2/3] io_uring/rw: mark readv/writev as vectored in the opcode definition Jens Axboe
  2023-09-11 20:40 ` [PATCH 3/3] io_uring/rw: add support for IORING_OP_READ_MULTISHOT Jens Axboe
  2 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2023-09-11 20:40 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence, Jens Axboe

Add __io_read(), which does the grunt work, leaving the completion
side to the new io_read(). No functional changes in this patch.

Signed-off-by: Jens Axboe <[email protected]>
---
 io_uring/rw.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/io_uring/rw.c b/io_uring/rw.c
index c8c822fa7980..402e8bf002d6 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -708,7 +708,7 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
 	return 0;
 }
 
-int io_read(struct io_kiocb *req, unsigned int issue_flags)
+static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
 {
 	struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
 	struct io_rw_state __s, *s = &__s;
@@ -853,6 +853,17 @@ int io_read(struct io_kiocb *req, unsigned int issue_flags)
 	/* it's faster to check here then delegate to kfree */
 	if (iovec)
 		kfree(iovec);
+	return ret;
+}
+
+int io_read(struct io_kiocb *req, unsigned int issue_flags)
+{
+	int ret;
+
+	ret = __io_read(req, issue_flags);
+	if (unlikely(ret < 0))
+		return ret;
+
 	return kiocb_done(req, ret, issue_flags);
 }
 
-- 
2.40.1



* [PATCH 2/3] io_uring/rw: mark readv/writev as vectored in the opcode definition
  2023-09-11 20:40 [PATCHSET 0/3] Add support for multishot reads Jens Axboe
  2023-09-11 20:40 ` [PATCH 1/3] io_uring/rw: split io_read() into a helper Jens Axboe
@ 2023-09-11 20:40 ` Jens Axboe
  2023-09-11 20:40 ` [PATCH 3/3] io_uring/rw: add support for IORING_OP_READ_MULTISHOT Jens Axboe
  2 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2023-09-11 20:40 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence, Jens Axboe

This is cleaner than gating on the opcode type, particularly as more
read/write style opcodes may be added.

We can then use that flag for the data import, and in __io_read() to
decide whether or not we need to copy state.

Signed-off-by: Jens Axboe <[email protected]>
---
 io_uring/opdef.c |  2 ++
 io_uring/opdef.h |  2 ++
 io_uring/rw.c    | 10 ++++++----
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index 1d26ef10fc16..bfb7c53389c0 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -65,6 +65,7 @@ const struct io_issue_def io_issue_defs[] = {
 		.ioprio			= 1,
 		.iopoll			= 1,
 		.iopoll_queue		= 1,
+		.vectored		= 1,
 		.prep			= io_prep_rw,
 		.issue			= io_read,
 	},
@@ -78,6 +79,7 @@ const struct io_issue_def io_issue_defs[] = {
 		.ioprio			= 1,
 		.iopoll			= 1,
 		.iopoll_queue		= 1,
+		.vectored		= 1,
 		.prep			= io_prep_rw,
 		.issue			= io_write,
 	},
diff --git a/io_uring/opdef.h b/io_uring/opdef.h
index c22c8696e749..9e5435ec27d0 100644
--- a/io_uring/opdef.h
+++ b/io_uring/opdef.h
@@ -29,6 +29,8 @@ struct io_issue_def {
 	unsigned		iopoll_queue : 1;
 	/* opcode specific path will handle ->async_data allocation if needed */
 	unsigned		manual_alloc : 1;
+	/* vectored opcode, set if 1) vectored, and 2) handler needs to know */
+	unsigned		vectored : 1;
 
 	int (*issue)(struct io_kiocb *, unsigned int);
 	int (*prep)(struct io_kiocb *, const struct io_uring_sqe *);
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 402e8bf002d6..c3bf38419230 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -388,8 +388,7 @@ static struct iovec *__io_import_iovec(int ddir, struct io_kiocb *req,
 	buf = u64_to_user_ptr(rw->addr);
 	sqe_len = rw->len;
 
-	if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE ||
-	    (req->flags & REQ_F_BUFFER_SELECT)) {
+	if (!io_issue_defs[opcode].vectored || req->flags & REQ_F_BUFFER_SELECT) {
 		if (io_do_buffer_select(req)) {
 			buf = io_buffer_select(req, &sqe_len, issue_flags);
 			if (!buf)
@@ -776,8 +775,11 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
 
 	if (ret == -EAGAIN || (req->flags & REQ_F_REISSUE)) {
 		req->flags &= ~REQ_F_REISSUE;
-		/* if we can poll, just do that */
-		if (req->opcode == IORING_OP_READ && file_can_poll(req->file))
+		/*
+		 * If we can poll, just do that. For a vectored read, we'll
+		 * need to copy state first.
+		 */
+		if (file_can_poll(req->file) && !io_issue_defs[req->opcode].vectored)
 			return -EAGAIN;
 		/* IOPOLL retry should happen for io-wq threads */
 		if (!force_nonblock && !(req->ctx->flags & IORING_SETUP_IOPOLL))
-- 
2.40.1



* [PATCH 3/3] io_uring/rw: add support for IORING_OP_READ_MULTISHOT
  2023-09-11 20:40 [PATCHSET 0/3] Add support for multishot reads Jens Axboe
  2023-09-11 20:40 ` [PATCH 1/3] io_uring/rw: split io_read() into a helper Jens Axboe
  2023-09-11 20:40 ` [PATCH 2/3] io_uring/rw: mark readv/writev as vectored in the opcode definition Jens Axboe
@ 2023-09-11 20:40 ` Jens Axboe
  2023-09-11 23:57   ` Gabriel Krisman Bertazi
  2023-09-12  0:38   ` Gabriel Krisman Bertazi
  2 siblings, 2 replies; 9+ messages in thread
From: Jens Axboe @ 2023-09-11 20:40 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence, Jens Axboe

This behaves like IORING_OP_READ, except:

1) It only supports pollable files (e.g. pipes, sockets, etc). Note that
   for sockets, you probably want to use recv/recvmsg with multishot
   instead.

2) It supports multishot mode, meaning it will repeatedly trigger a
   read and fill a buffer when data is available. This allows usage
   similar to recv/recvmsg, but on non-sockets: a single request will
   repeatedly post a CQE whenever data is read from the file.

3) Because of 2), it must be used with provided buffers. This is
   uniformly true across all request types that support multishot and
   transfer data, the reason being that it's not possible to pass in a
   single buffer for the data: multiple reads may well trigger before
   an application has had a chance to process previous CQEs and the
   data passed with them.

Signed-off-by: Jens Axboe <[email protected]>
---
 include/uapi/linux/io_uring.h |  1 +
 io_uring/opdef.c              | 13 +++++++
 io_uring/rw.c                 | 66 +++++++++++++++++++++++++++++++++++
 io_uring/rw.h                 |  2 ++
 4 files changed, 82 insertions(+)

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index daa363d1a502..c35438af679a 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -246,6 +246,7 @@ enum io_uring_op {
 	IORING_OP_FUTEX_WAIT,
 	IORING_OP_FUTEX_WAKE,
 	IORING_OP_FUTEX_WAITV,
+	IORING_OP_READ_MULTISHOT,
 
 	/* this goes last, obviously */
 	IORING_OP_LAST,
diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index bfb7c53389c0..03e1a6f26fa5 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -460,6 +460,16 @@ const struct io_issue_def io_issue_defs[] = {
 		.prep			= io_eopnotsupp_prep,
 #endif
 	},
+	[IORING_OP_READ_MULTISHOT] = {
+		.needs_file		= 1,
+		.unbound_nonreg_file	= 1,
+		.pollin			= 1,
+		.buffer_select		= 1,
+		.audit_skip		= 1,
+		.ioprio			= 1,
+		.prep			= io_read_mshot_prep,
+		.issue			= io_read_mshot,
+	},
 };
 
 const struct io_cold_def io_cold_defs[] = {
@@ -692,6 +702,9 @@ const struct io_cold_def io_cold_defs[] = {
 	[IORING_OP_FUTEX_WAITV] = {
 		.name			= "FUTEX_WAITV",
 	},
+	[IORING_OP_READ_MULTISHOT] = {
+		.name			= "READ_MULTISHOT",
+	},
 };
 
 const char *io_uring_get_opcode(u8 opcode)
diff --git a/io_uring/rw.c b/io_uring/rw.c
index c3bf38419230..7305792fbbbf 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -123,6 +123,22 @@ int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	return 0;
 }
 
+/*
+ * Multishot read is prepared just like a normal read/write request, only
+ * difference is that we set the MULTISHOT flag.
+ */
+int io_read_mshot_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+{
+	int ret;
+
+	ret = io_prep_rw(req, sqe);
+	if (unlikely(ret))
+		return ret;
+
+	req->flags |= REQ_F_APOLL_MULTISHOT;
+	return 0;
+}
+
 void io_readv_writev_cleanup(struct io_kiocb *req)
 {
 	struct io_async_rw *io = req->async_data;
@@ -869,6 +885,56 @@ int io_read(struct io_kiocb *req, unsigned int issue_flags)
 	return kiocb_done(req, ret, issue_flags);
 }
 
+int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
+{
+	unsigned int cflags = 0;
+	int ret;
+
+	/*
+	 * Multishot MUST be used on a pollable file
+	 */
+	if (!file_can_poll(req->file))
+		return -EBADFD;
+
+	ret = __io_read(req, issue_flags);
+
+	/*
+	 * If we get -EAGAIN, recycle our buffer and just let normal poll
+	 * handling arm it.
+	 */
+	if (ret == -EAGAIN) {
+		io_kbuf_recycle(req, issue_flags);
+		return -EAGAIN;
+	}
+
+	/*
+	 * Any error will terminate a multishot request
+	 */
+	if (ret <= 0) {
+finish:
+		io_req_set_res(req, ret, cflags);
+		if (issue_flags & IO_URING_F_MULTISHOT)
+			return IOU_STOP_MULTISHOT;
+		return IOU_OK;
+	}
+
+	/*
+	 * Put our buffer and post a CQE. If we fail to post a CQE, then
+	 * jump to the termination path. This request is then done.
+	 */
+	cflags = io_put_kbuf(req, issue_flags);
+
+	if (io_fill_cqe_req_aux(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
+				ret, cflags | IORING_CQE_F_MORE)) {
+		if (issue_flags & IO_URING_F_MULTISHOT)
+			return IOU_ISSUE_SKIP_COMPLETE;
+		else
+			return -EAGAIN;
+	}
+
+	goto finish;
+}
+
 int io_write(struct io_kiocb *req, unsigned int issue_flags)
 {
 	struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
diff --git a/io_uring/rw.h b/io_uring/rw.h
index 4b89f9659366..c5aed03d42a4 100644
--- a/io_uring/rw.h
+++ b/io_uring/rw.h
@@ -23,3 +23,5 @@ int io_writev_prep_async(struct io_kiocb *req);
 void io_readv_writev_cleanup(struct io_kiocb *req);
 void io_rw_fail(struct io_kiocb *req);
 void io_req_rw_complete(struct io_kiocb *req, struct io_tw_state *ts);
+int io_read_mshot_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
+int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags);
-- 
2.40.1



* Re: [PATCH 3/3] io_uring/rw: add support for IORING_OP_READ_MULTISHOT
  2023-09-11 20:40 ` [PATCH 3/3] io_uring/rw: add support for IORING_OP_READ_MULTISHOT Jens Axboe
@ 2023-09-11 23:57   ` Gabriel Krisman Bertazi
  2023-09-12  0:46     ` Jens Axboe
  2023-09-12  0:38   ` Gabriel Krisman Bertazi
  1 sibling, 1 reply; 9+ messages in thread
From: Gabriel Krisman Bertazi @ 2023-09-11 23:57 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, asml.silence

Jens Axboe <[email protected]> writes:

> This behaves like IORING_OP_READ, except:
>
> 1) It only supports pollable files (eg pipes, sockets, etc). Note that
>    for sockets, you probably want to use recv/recvmsg with multishot
>    instead.
>
> 2) It supports multishot mode, meaning it will repeatedly trigger a
>    read and fill a buffer when data is available. This allows similar
>    use to recv/recvmsg but on non-sockets, where a single request will
>    repeatedly post a CQE whenever data is read from it.
>
> 3) Because of #2, it must be used with provided buffers. This is
>    uniformly true across any request type that supports multishot and
>    transfers data, with the reason being that it's obviously not
>    possible to pass in a single buffer for the data, as multiple reads
>    may very well trigger before an application has a chance to process
>    previous CQEs and the data passed from them.
>
> Signed-off-by: Jens Axboe <[email protected]>

This is a really cool feature.  Just two comments inline.


> +/*
> + * Multishot read is prepared just like a normal read/write request, only
> + * difference is that we set the MULTISHOT flag.
> + */
> +int io_read_mshot_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
> +{
> +	int ret;
> +
> +	ret = io_prep_rw(req, sqe);
> +	if (unlikely(ret))
> +		return ret;
> +
> +	req->flags |= REQ_F_APOLL_MULTISHOT;
> +	return 0;
> +}
> +
>  void io_readv_writev_cleanup(struct io_kiocb *req)
>  {
>  	struct io_async_rw *io = req->async_data;
> @@ -869,6 +885,56 @@ int io_read(struct io_kiocb *req, unsigned int issue_flags)
>  	return kiocb_done(req, ret, issue_flags);
>  }
>  
> +int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
> +{
> +	unsigned int cflags = 0;
> +	int ret;
> +
> +	/*
> +	 * Multishot MUST be used on a pollable file
> +	 */
> +	if (!file_can_poll(req->file))
> +		return -EBADFD;

io_uring itself is pollable, so I think you also want to reject the
case where req->file->f_op == &io_uring_fops, to avoid the loop where a
ring monitoring itself causes a recursive completion? Or maybe this
can't happen here for some reason I'm missing?

> +
> +	ret = __io_read(req, issue_flags);
> +
> +	/*
> +	 * If we get -EAGAIN, recycle our buffer and just let normal poll
> +	 * handling arm it.
> +	 */
> +	if (ret == -EAGAIN) {
> +		io_kbuf_recycle(req, issue_flags);
> +		return -EAGAIN;
> +	}
> +
> +	/*
> +	 * Any error will terminate a multishot request
> +	 */
> +	if (ret <= 0) {
> +finish:
> +		io_req_set_res(req, ret, cflags);
> +		if (issue_flags & IO_URING_F_MULTISHOT)
> +			return IOU_STOP_MULTISHOT;
> +		return IOU_OK;

Just a style detail, but I'd prefer to unfold this at the end of the
function instead of jumping backwards here.

> +	}
> +
> +	/*
> +	 * Put our buffer and post a CQE. If we fail to post a CQE, then
> +	 * jump to the termination path. This request is then done.
> +	 */
> +	cflags = io_put_kbuf(req, issue_flags);
> +
> +	if (io_fill_cqe_req_aux(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
> +				ret, cflags | IORING_CQE_F_MORE)) {
> +		if (issue_flags & IO_URING_F_MULTISHOT)
> +			return IOU_ISSUE_SKIP_COMPLETE;
> +		else
> +			return -EAGAIN;
> +	}
> +
> +	goto finish;
> +}
> +
> [...]

-- 
Gabriel Krisman Bertazi


* Re: [PATCH 3/3] io_uring/rw: add support for IORING_OP_READ_MULTISHOT
  2023-09-11 20:40 ` [PATCH 3/3] io_uring/rw: add support for IORING_OP_READ_MULTISHOT Jens Axboe
  2023-09-11 23:57   ` Gabriel Krisman Bertazi
@ 2023-09-12  0:38   ` Gabriel Krisman Bertazi
  2023-09-12  0:47     ` Jens Axboe
  1 sibling, 1 reply; 9+ messages in thread
From: Gabriel Krisman Bertazi @ 2023-09-12  0:38 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, asml.silence

Jens Axboe <[email protected]> writes:

> [...]
> +int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
> +{
> +	unsigned int cflags = 0;
> +	int ret;
> +
> +	/*
> +	 * Multishot MUST be used on a pollable file
> +	 */
> +	if (!file_can_poll(req->file))
> +		return -EBADFD;
> +

Please disregard my previous comment about checking for
io_uring_fops. It is not necessary because this kind of file can't be
read in the first place, so it would never get here.

(Also, things seem to be misbehaving on my MUA archive and Lore hasn't
received my own message yet, so I'm not replying directly to it)

-- 
Gabriel Krisman Bertazi


* Re: [PATCH 3/3] io_uring/rw: add support for IORING_OP_READ_MULTISHOT
  2023-09-11 23:57   ` Gabriel Krisman Bertazi
@ 2023-09-12  0:46     ` Jens Axboe
  2023-09-12  0:53       ` Jens Axboe
  0 siblings, 1 reply; 9+ messages in thread
From: Jens Axboe @ 2023-09-12  0:46 UTC (permalink / raw)
  To: Gabriel Krisman Bertazi; +Cc: io-uring, asml.silence

On 9/11/23 5:57 PM, Gabriel Krisman Bertazi wrote:
>> +int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
>> +{
>> +	unsigned int cflags = 0;
>> +	int ret;
>> +
>> +	/*
>> +	 * Multishot MUST be used on a pollable file
>> +	 */
>> +	if (!file_can_poll(req->file))
>> +		return -EBADFD;
> 
> io_uring is pollable, so I think you want to also reject when
> req->file->f_ops == io_uring_fops to avoid the loop where a ring
> monitoring itself will cause a recursive completion? Maybe this can't
> happen here for some reason I miss?

I saw your followup, but we do actually handle that case - if this fd is
an io_uring context, then we track the inflight state of it so we can
appropriately cancel to break that loop.

But yeah, doesn't matter for this case, as you cannot read or write to
an io_uring fd in the first place.

>> +	ret = __io_read(req, issue_flags);
>> +
>> +	/*
>> +	 * If we get -EAGAIN, recycle our buffer and just let normal poll
>> +	 * handling arm it.
>> +	 */
>> +	if (ret == -EAGAIN) {
>> +		io_kbuf_recycle(req, issue_flags);
>> +		return -EAGAIN;
>> +	}
>> +
>> +	/*
>> +	 * Any error will terminate a multishot request
>> +	 */
>> +	if (ret <= 0) {
>> +finish:
>> +		io_req_set_res(req, ret, cflags);
>> +		if (issue_flags & IO_URING_F_MULTISHOT)
>> +			return IOU_STOP_MULTISHOT;
>> +		return IOU_OK;
> 
> Just a style detail, but I'd prefer to unfold this at the end of the
> function instead of jumping backwards here.

Sure, that might look better. I'll make the edit.

-- 
Jens Axboe



* Re: [PATCH 3/3] io_uring/rw: add support for IORING_OP_READ_MULTISHOT
  2023-09-12  0:38   ` Gabriel Krisman Bertazi
@ 2023-09-12  0:47     ` Jens Axboe
  0 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2023-09-12  0:47 UTC (permalink / raw)
  To: Gabriel Krisman Bertazi; +Cc: io-uring, asml.silence

On 9/11/23 6:38 PM, Gabriel Krisman Bertazi wrote:
> (Also, things seem to be misbehaving on my MUA archive and Lore hasn't
> received my own message yet, so I'm not replying directly to it)

I'm having all sorts of delays too, both last week and this week...
Pretty unfortunate for probably the most core of kernel development
infrastructure.

-- 
Jens Axboe



* Re: [PATCH 3/3] io_uring/rw: add support for IORING_OP_READ_MULTISHOT
  2023-09-12  0:46     ` Jens Axboe
@ 2023-09-12  0:53       ` Jens Axboe
  0 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2023-09-12  0:53 UTC (permalink / raw)
  To: Gabriel Krisman Bertazi; +Cc: io-uring, asml.silence

On 9/11/23 6:46 PM, Jens Axboe wrote:
>>> +	ret = __io_read(req, issue_flags);
>>> +
>>> +	/*
>>> +	 * If we get -EAGAIN, recycle our buffer and just let normal poll
>>> +	 * handling arm it.
>>> +	 */
>>> +	if (ret == -EAGAIN) {
>>> +		io_kbuf_recycle(req, issue_flags);
>>> +		return -EAGAIN;
>>> +	}
>>> +
>>> +	/*
>>> +	 * Any error will terminate a multishot request
>>> +	 */
>>> +	if (ret <= 0) {
>>> +finish:
>>> +		io_req_set_res(req, ret, cflags);
>>> +		if (issue_flags & IO_URING_F_MULTISHOT)
>>> +			return IOU_STOP_MULTISHOT;
>>> +		return IOU_OK;
>>
>> Just a style detail, but I'd prefer to unfold this at the end of the
>> function instead of jumping backwards here.
> 
> Sure, that might look better. I'll make the edit.

Actually we can just indent the next case and get rid of the goto
completely:

int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
{
	unsigned int cflags = 0;
	int ret;

	/*
	 * Multishot MUST be used on a pollable file
	 */
	if (!file_can_poll(req->file))
		return -EBADFD;

	ret = __io_read(req, issue_flags);

	/*
	 * If we get -EAGAIN, recycle our buffer and just let normal poll
	 * handling arm it.
	 */
	if (ret == -EAGAIN) {
		io_kbuf_recycle(req, issue_flags);
		return -EAGAIN;
	}

	/*
	 * Any successful return value will keep the multishot read armed.
	 */
	if (ret > 0) {
		/*
		 * Put our buffer and post a CQE. If we fail to post a CQE, then
		 * jump to the termination path. This request is then done.
		 */
		cflags = io_put_kbuf(req, issue_flags);

		if (io_fill_cqe_req_aux(req,
					issue_flags & IO_URING_F_COMPLETE_DEFER,
					ret, cflags | IORING_CQE_F_MORE)) {
			if (issue_flags & IO_URING_F_MULTISHOT)
				return IOU_ISSUE_SKIP_COMPLETE;
			return -EAGAIN;
		}
	}

	/*
	 * Either an error, or we've hit overflow posting the CQE. For any
	 * multishot request, hitting overflow will terminate it.
	 */
	io_req_set_res(req, ret, cflags);
	if (issue_flags & IO_URING_F_MULTISHOT)
		return IOU_STOP_MULTISHOT;
	return IOU_OK;
}

-- 
Jens Axboe


