public inbox for [email protected]
* [PATCH for-next 0/4] io_uring: force async only ops to go async
@ 2023-01-27 13:52 Dylan Yudaken
  2023-01-27 13:52 ` [PATCH for-next 1/4] io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async Dylan Yudaken
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Dylan Yudaken @ 2023-01-27 13:52 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

Many ops such as statx do not support nonblock issuing (for now). At the
moment the code goes through the issue call just to receive -EAGAIN and
then be run async. There is no need for this as internally the
REQ_F_FORCE_ASYNC flag can just be added on.

The upside of this is generally minimal, and you may decide that it's not
worth the extra risk of bugs. Though as far as I can tell, the risk is
simply doing a blocking call from io_uring_enter(2), which, while still a
bug, is not disastrous.

While doing this I noticed that linked requests are actually issued with
IO_URING_F_NONBLOCK first regardless of the IOSQE_ASYNC flag, so I fixed
this at the same time. The difference should be negligible, I assume.

Note this depends on the drain fix I have just sent for 6.2.

Patch 1 is the fix.
Patch 2 forces async for the easy cases where it is always required.
Patches 3/4 are for fadvise/open, which need a bit of logic to determine
whether or not to force async.
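
For reference, the pattern patches 2-4 apply boils down to setting the flag
at prep time and asserting the blocking context at issue time. A condensed
sketch using statx (illustrative only; the actual diffs are in the patches
below):

int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
	/* ... existing field setup elided ... */
	req->flags |= REQ_F_NEED_CLEANUP;
	/* statx has no nonblocking path, so always punt it to io-wq */
	req->flags |= REQ_F_FORCE_ASYNC;
	return 0;
}

int io_statx(struct io_kiocb *req, unsigned int issue_flags)
{
	struct io_statx *sx = io_kiocb_to_cmd(req, struct io_statx);
	int ret;

	/* the nonblocking issue path should never reach this op now */
	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);

	ret = do_statx(sx->dfd, sx->filename, sx->flags, sx->mask, sx->buffer);
	io_req_set_res(req, ret, 0);
	return IOU_OK;
}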

Dylan Yudaken (4):
  io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async
  io_uring: for requests that require async, force it
  io_uring: always go async for unsupported fadvise flags
  io_uring: always go async for unsupported open flags

 io_uring/advise.c    | 29 +++++++++++++++++------------
 io_uring/fs.c        | 20 ++++++++++----------
 io_uring/io_uring.c  |  8 +++++---
 io_uring/net.c       |  4 ++--
 io_uring/openclose.c | 18 ++++++++++++------
 io_uring/splice.c    |  7 +++----
 io_uring/statx.c     |  4 ++--
 io_uring/sync.c      | 14 ++++++++------
 io_uring/xattr.c     | 14 ++++++--------
 9 files changed, 65 insertions(+), 53 deletions(-)


base-commit: cea6756d62abfc4791efc81d1f6afa016ed8df8c
-- 
2.30.2


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH for-next 1/4] io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async
  2023-01-27 13:52 [PATCH for-next 0/4] io_uring: force async only ops to go async Dylan Yudaken
@ 2023-01-27 13:52 ` Dylan Yudaken
  2023-01-29 22:57   ` Jens Axboe
  2023-01-27 13:52 ` [PATCH for-next 2/4] io_uring: for requests that require async, force it Dylan Yudaken
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Dylan Yudaken @ 2023-01-27 13:52 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

REQ_F_FORCE_ASYNC was being ignored for re-queueing linked
requests. Instead obey that flag.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/io_uring.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index db623b3185c8..980ba4fda101 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1365,10 +1365,12 @@ void io_req_task_submit(struct io_kiocb *req, bool *locked)
 {
 	io_tw_lock(req->ctx, locked);
 	/* req->task == current here, checking PF_EXITING is safe */
-	if (likely(!(req->task->flags & PF_EXITING)))
-		io_queue_sqe(req);
-	else
+	if (unlikely(req->task->flags & PF_EXITING))
 		io_req_defer_failed(req, -EFAULT);
+	else if (req->flags & REQ_F_FORCE_ASYNC)
+		io_queue_iowq(req, locked);
+	else
+		io_queue_sqe(req);
 }
 
 void io_req_task_queue_fail(struct io_kiocb *req, int ret)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH for-next 2/4] io_uring: for requests that require async, force it
  2023-01-27 13:52 [PATCH for-next 0/4] io_uring: force async only ops to go async Dylan Yudaken
  2023-01-27 13:52 ` [PATCH for-next 1/4] io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async Dylan Yudaken
@ 2023-01-27 13:52 ` Dylan Yudaken
  2023-01-27 13:52 ` [PATCH for-next 3/4] io_uring: always go async for unsupported fadvise flags Dylan Yudaken
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: Dylan Yudaken @ 2023-01-27 13:52 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

Some requests must be run async as they do not support non-blocking
issue. Instead of trying to issue these requests, getting -EAGAIN and
then queueing them for async issue, just force async upfront.

Add a WARN_ON_ONCE to make sure surprising code paths do not come up;
even if they do, the bug would end up being a blocking io_uring_enter(2),
which should not be critical.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/advise.c |  4 ++--
 io_uring/fs.c     | 20 ++++++++++----------
 io_uring/net.c    |  4 ++--
 io_uring/splice.c |  7 +++----
 io_uring/statx.c  |  4 ++--
 io_uring/sync.c   | 14 ++++++++------
 io_uring/xattr.c  | 14 ++++++--------
 7 files changed, 33 insertions(+), 34 deletions(-)

diff --git a/io_uring/advise.c b/io_uring/advise.c
index 449c6f14649f..cf600579bffe 100644
--- a/io_uring/advise.c
+++ b/io_uring/advise.c
@@ -39,6 +39,7 @@ int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	ma->addr = READ_ONCE(sqe->addr);
 	ma->len = READ_ONCE(sqe->len);
 	ma->advice = READ_ONCE(sqe->fadvise_advice);
+	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 #else
 	return -EOPNOTSUPP;
@@ -51,8 +52,7 @@ int io_madvise(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_madvise *ma = io_kiocb_to_cmd(req, struct io_madvise);
 	int ret;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	ret = do_madvise(current->mm, ma->addr, ma->len, ma->advice);
 	io_req_set_res(req, ret, 0);
diff --git a/io_uring/fs.c b/io_uring/fs.c
index 7100c293c13a..f6a69a549fd4 100644
--- a/io_uring/fs.c
+++ b/io_uring/fs.c
@@ -74,6 +74,7 @@ int io_renameat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	}
 
 	req->flags |= REQ_F_NEED_CLEANUP;
+	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -82,8 +83,7 @@ int io_renameat(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_rename *ren = io_kiocb_to_cmd(req, struct io_rename);
 	int ret;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	ret = do_renameat2(ren->old_dfd, ren->oldpath, ren->new_dfd,
 				ren->newpath, ren->flags);
@@ -123,6 +123,7 @@ int io_unlinkat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		return PTR_ERR(un->filename);
 
 	req->flags |= REQ_F_NEED_CLEANUP;
+	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -131,8 +132,7 @@ int io_unlinkat(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_unlink *un = io_kiocb_to_cmd(req, struct io_unlink);
 	int ret;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	if (un->flags & AT_REMOVEDIR)
 		ret = do_rmdir(un->dfd, un->filename);
@@ -170,6 +170,7 @@ int io_mkdirat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		return PTR_ERR(mkd->filename);
 
 	req->flags |= REQ_F_NEED_CLEANUP;
+	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -178,8 +179,7 @@ int io_mkdirat(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_mkdir *mkd = io_kiocb_to_cmd(req, struct io_mkdir);
 	int ret;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	ret = do_mkdirat(mkd->dfd, mkd->filename, mkd->mode);
 
@@ -220,6 +220,7 @@ int io_symlinkat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	}
 
 	req->flags |= REQ_F_NEED_CLEANUP;
+	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -228,8 +229,7 @@ int io_symlinkat(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_link *sl = io_kiocb_to_cmd(req, struct io_link);
 	int ret;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	ret = do_symlinkat(sl->oldpath, sl->new_dfd, sl->newpath);
 
@@ -265,6 +265,7 @@ int io_linkat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	}
 
 	req->flags |= REQ_F_NEED_CLEANUP;
+	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -273,8 +274,7 @@ int io_linkat(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_link *lnk = io_kiocb_to_cmd(req, struct io_link);
 	int ret;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	ret = do_linkat(lnk->old_dfd, lnk->oldpath, lnk->new_dfd,
 				lnk->newpath, lnk->flags);
diff --git a/io_uring/net.c b/io_uring/net.c
index 90326b279965..aeb1d016e2e9 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -90,6 +90,7 @@ int io_shutdown_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		return -EINVAL;
 
 	shutdown->how = READ_ONCE(sqe->len);
+	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -99,8 +100,7 @@ int io_shutdown(struct io_kiocb *req, unsigned int issue_flags)
 	struct socket *sock;
 	int ret;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	sock = sock_from_file(req->file);
 	if (unlikely(!sock))
diff --git a/io_uring/splice.c b/io_uring/splice.c
index 53e4232d0866..2a4bbb719531 100644
--- a/io_uring/splice.c
+++ b/io_uring/splice.c
@@ -34,6 +34,7 @@ static int __io_splice_prep(struct io_kiocb *req,
 	if (unlikely(sp->flags & ~valid_flags))
 		return -EINVAL;
 	sp->splice_fd_in = READ_ONCE(sqe->splice_fd_in);
+	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -52,8 +53,7 @@ int io_tee(struct io_kiocb *req, unsigned int issue_flags)
 	struct file *in;
 	long ret = 0;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	if (sp->flags & SPLICE_F_FD_IN_FIXED)
 		in = io_file_get_fixed(req, sp->splice_fd_in, issue_flags);
@@ -94,8 +94,7 @@ int io_splice(struct io_kiocb *req, unsigned int issue_flags)
 	struct file *in;
 	long ret = 0;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	if (sp->flags & SPLICE_F_FD_IN_FIXED)
 		in = io_file_get_fixed(req, sp->splice_fd_in, issue_flags);
diff --git a/io_uring/statx.c b/io_uring/statx.c
index d8fc933d3f59..abb874209caa 100644
--- a/io_uring/statx.c
+++ b/io_uring/statx.c
@@ -48,6 +48,7 @@ int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	}
 
 	req->flags |= REQ_F_NEED_CLEANUP;
+	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -56,8 +57,7 @@ int io_statx(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_statx *sx = io_kiocb_to_cmd(req, struct io_statx);
 	int ret;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	ret = do_statx(sx->dfd, sx->filename, sx->flags, sx->mask, sx->buffer);
 	io_req_set_res(req, ret, 0);
diff --git a/io_uring/sync.c b/io_uring/sync.c
index 64e87ea2b8fb..255f68c37e55 100644
--- a/io_uring/sync.c
+++ b/io_uring/sync.c
@@ -32,6 +32,8 @@ int io_sfr_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	sync->off = READ_ONCE(sqe->off);
 	sync->len = READ_ONCE(sqe->len);
 	sync->flags = READ_ONCE(sqe->sync_range_flags);
+	req->flags |= REQ_F_FORCE_ASYNC;
+
 	return 0;
 }
 
@@ -41,8 +43,7 @@ int io_sync_file_range(struct io_kiocb *req, unsigned int issue_flags)
 	int ret;
 
 	/* sync_file_range always requires a blocking context */
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	ret = sync_file_range(req->file, sync->off, sync->len, sync->flags);
 	io_req_set_res(req, ret, 0);
@@ -62,6 +63,7 @@ int io_fsync_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
 	sync->off = READ_ONCE(sqe->off);
 	sync->len = READ_ONCE(sqe->len);
+	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -72,8 +74,7 @@ int io_fsync(struct io_kiocb *req, unsigned int issue_flags)
 	int ret;
 
 	/* fsync always requires a blocking context */
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	ret = vfs_fsync_range(req->file, sync->off, end > 0 ? end : LLONG_MAX,
 				sync->flags & IORING_FSYNC_DATASYNC);
@@ -91,6 +92,7 @@ int io_fallocate_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	sync->off = READ_ONCE(sqe->off);
 	sync->len = READ_ONCE(sqe->addr);
 	sync->mode = READ_ONCE(sqe->len);
+	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -100,8 +102,8 @@ int io_fallocate(struct io_kiocb *req, unsigned int issue_flags)
 	int ret;
 
 	/* fallocate always requiring blocking context */
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
+
 	ret = vfs_fallocate(req->file, sync->mode, sync->off, sync->len);
 	if (ret >= 0)
 		fsnotify_modify(req->file);
diff --git a/io_uring/xattr.c b/io_uring/xattr.c
index 6201a9f442c6..e1c810e0b85a 100644
--- a/io_uring/xattr.c
+++ b/io_uring/xattr.c
@@ -75,6 +75,7 @@ static int __io_getxattr_prep(struct io_kiocb *req,
 	}
 
 	req->flags |= REQ_F_NEED_CLEANUP;
+	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -109,8 +110,7 @@ int io_fgetxattr(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_xattr *ix = io_kiocb_to_cmd(req, struct io_xattr);
 	int ret;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	ret = do_getxattr(mnt_idmap(req->file->f_path.mnt),
 			req->file->f_path.dentry,
@@ -127,8 +127,7 @@ int io_getxattr(struct io_kiocb *req, unsigned int issue_flags)
 	struct path path;
 	int ret;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 retry:
 	ret = filename_lookup(AT_FDCWD, ix->filename, lookup_flags, &path, NULL);
@@ -174,6 +173,7 @@ static int __io_setxattr_prep(struct io_kiocb *req,
 	}
 
 	req->flags |= REQ_F_NEED_CLEANUP;
+	req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -222,8 +222,7 @@ int io_fsetxattr(struct io_kiocb *req, unsigned int issue_flags)
 {
 	int ret;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 	ret = __io_setxattr(req, issue_flags, &req->file->f_path);
 	io_xattr_finish(req, ret);
@@ -237,8 +236,7 @@ int io_setxattr(struct io_kiocb *req, unsigned int issue_flags)
 	struct path path;
 	int ret;
 
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK);
 
 retry:
 	ret = filename_lookup(AT_FDCWD, ix->filename, lookup_flags, &path, NULL);
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH for-next 3/4] io_uring: always go async for unsupported fadvise flags
  2023-01-27 13:52 [PATCH for-next 0/4] io_uring: force async only ops to go async Dylan Yudaken
  2023-01-27 13:52 ` [PATCH for-next 1/4] io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async Dylan Yudaken
  2023-01-27 13:52 ` [PATCH for-next 2/4] io_uring: for requests that require async, force it Dylan Yudaken
@ 2023-01-27 13:52 ` Dylan Yudaken
  2023-01-27 13:52 ` [PATCH for-next 4/4] io_uring: always go async for unsupported open flags Dylan Yudaken
  2023-01-29 22:20 ` [PATCH for-next 0/4] io_uring: force async only ops to go async Jens Axboe
  4 siblings, 0 replies; 11+ messages in thread
From: Dylan Yudaken @ 2023-01-27 13:52 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

No point in issuing -> return -EAGAIN -> go async, when it can be done upfront.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/advise.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/io_uring/advise.c b/io_uring/advise.c
index cf600579bffe..7085804c513c 100644
--- a/io_uring/advise.c
+++ b/io_uring/advise.c
@@ -62,6 +62,18 @@ int io_madvise(struct io_kiocb *req, unsigned int issue_flags)
 #endif
 }
 
+static bool io_fadvise_force_async(struct io_fadvise *fa)
+{
+	switch (fa->advice) {
+	case POSIX_FADV_NORMAL:
+	case POSIX_FADV_RANDOM:
+	case POSIX_FADV_SEQUENTIAL:
+		return false;
+	default:
+		return true;
+	}
+}
+
 int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	struct io_fadvise *fa = io_kiocb_to_cmd(req, struct io_fadvise);
@@ -72,6 +84,8 @@ int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	fa->offset = READ_ONCE(sqe->off);
 	fa->len = READ_ONCE(sqe->len);
 	fa->advice = READ_ONCE(sqe->fadvise_advice);
+	if (io_fadvise_force_async(fa))
+		req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -80,16 +94,7 @@ int io_fadvise(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_fadvise *fa = io_kiocb_to_cmd(req, struct io_fadvise);
 	int ret;
 
-	if (issue_flags & IO_URING_F_NONBLOCK) {
-		switch (fa->advice) {
-		case POSIX_FADV_NORMAL:
-		case POSIX_FADV_RANDOM:
-		case POSIX_FADV_SEQUENTIAL:
-			break;
-		default:
-			return -EAGAIN;
-		}
-	}
+	WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK && io_fadvise_force_async(fa));
 
 	ret = vfs_fadvise(req->file, fa->offset, fa->len, fa->advice);
 	if (ret < 0)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH for-next 4/4] io_uring: always go async for unsupported open flags
  2023-01-27 13:52 [PATCH for-next 0/4] io_uring: force async only ops to go async Dylan Yudaken
                   ` (2 preceding siblings ...)
  2023-01-27 13:52 ` [PATCH for-next 3/4] io_uring: always go async for unsupported fadvise flags Dylan Yudaken
@ 2023-01-27 13:52 ` Dylan Yudaken
  2023-01-29 22:20 ` [PATCH for-next 0/4] io_uring: force async only ops to go async Jens Axboe
  4 siblings, 0 replies; 11+ messages in thread
From: Dylan Yudaken @ 2023-01-27 13:52 UTC (permalink / raw)
  To: Jens Axboe, Pavel Begunkov; +Cc: io-uring, kernel-team, Dylan Yudaken

No point in issuing -> return -EAGAIN -> go async, when it can be done upfront.

Signed-off-by: Dylan Yudaken <[email protected]>
---
 io_uring/openclose.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/io_uring/openclose.c b/io_uring/openclose.c
index 67178e4bb282..a1b98c81a52d 100644
--- a/io_uring/openclose.c
+++ b/io_uring/openclose.c
@@ -31,6 +31,15 @@ struct io_close {
 	u32				file_slot;
 };
 
+static bool io_openat_force_async(struct io_open *open)
+{
+	/*
+	 * Don't bother trying for O_TRUNC, O_CREAT, or O_TMPFILE open,
+	 * it'll always -EAGAIN
+	 */
+	return open->how.flags & (O_TRUNC | O_CREAT | O_TMPFILE);
+}
+
 static int __io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	struct io_open *open = io_kiocb_to_cmd(req, struct io_open);
@@ -61,6 +70,8 @@ static int __io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
 
 	open->nofile = rlimit(RLIMIT_NOFILE);
 	req->flags |= REQ_F_NEED_CLEANUP;
+	if (io_openat_force_async(open))
+		req->flags |= REQ_F_FORCE_ASYNC;
 	return 0;
 }
 
@@ -108,12 +119,7 @@ int io_openat2(struct io_kiocb *req, unsigned int issue_flags)
 	nonblock_set = op.open_flag & O_NONBLOCK;
 	resolve_nonblock = open->how.resolve & RESOLVE_CACHED;
 	if (issue_flags & IO_URING_F_NONBLOCK) {
-		/*
-		 * Don't bother trying for O_TRUNC, O_CREAT, or O_TMPFILE open,
-		 * it'll always -EAGAIN
-		 */
-		if (open->how.flags & (O_TRUNC | O_CREAT | O_TMPFILE))
-			return -EAGAIN;
+		WARN_ON_ONCE(io_openat_force_async(open));
 		op.lookup_flags |= LOOKUP_CACHED;
 		op.open_flag |= O_NONBLOCK;
 	}
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH for-next 0/4] io_uring: force async only ops to go async
  2023-01-27 13:52 [PATCH for-next 0/4] io_uring: force async only ops to go async Dylan Yudaken
                   ` (3 preceding siblings ...)
  2023-01-27 13:52 ` [PATCH for-next 4/4] io_uring: always go async for unsupported open flags Dylan Yudaken
@ 2023-01-29 22:20 ` Jens Axboe
  4 siblings, 0 replies; 11+ messages in thread
From: Jens Axboe @ 2023-01-29 22:20 UTC (permalink / raw)
  To: Pavel Begunkov, Dylan Yudaken; +Cc: io-uring, kernel-team


On Fri, 27 Jan 2023 05:52:23 -0800, Dylan Yudaken wrote:
> Many ops such as statx do not support nonblock issuing (for now). At the
> moment the code goes through the issue call just to receive -EAGAIN and
> then be run async. There is no need for this as internally the
> REQ_F_FORCE_ASYNC flag can just be added on.
> 
> The upside for this is generally minimal, and possibly you may decide that
> it's not worth the extra risk of bugs. Though as far as I can tell the
> risk is simply doing a blocking call from io_uring_enter(2), which while
> still a bug is not disastrous.
> 
> [...]

Applied, thanks!

[1/4] io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async
      (no commit info)
[2/4] io_uring: for requests that require async, force it
      (no commit info)
[3/4] io_uring: always go async for unsupported fadvise flags
      (no commit info)
[4/4] io_uring: always go async for unsupported open flags
      (no commit info)

Best regards,
-- 
Jens Axboe




^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH for-next 1/4] io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async
  2023-01-27 13:52 ` [PATCH for-next 1/4] io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async Dylan Yudaken
@ 2023-01-29 22:57   ` Jens Axboe
  2023-01-29 23:17     ` Jens Axboe
  0 siblings, 1 reply; 11+ messages in thread
From: Jens Axboe @ 2023-01-29 22:57 UTC (permalink / raw)
  To: Dylan Yudaken, Pavel Begunkov; +Cc: io-uring, kernel-team

On 1/27/23 6:52 AM, Dylan Yudaken wrote:
> REQ_F_FORCE_ASYNC was being ignored for re-queueing linked
> requests. Instead obey that flag.
> 
> Signed-off-by: Dylan Yudaken <[email protected]>
> ---
>  io_uring/io_uring.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index db623b3185c8..980ba4fda101 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -1365,10 +1365,12 @@ void io_req_task_submit(struct io_kiocb *req, bool *locked)
>  {
>  	io_tw_lock(req->ctx, locked);
>  	/* req->task == current here, checking PF_EXITING is safe */
> -	if (likely(!(req->task->flags & PF_EXITING)))
> -		io_queue_sqe(req);
> -	else
> +	if (unlikely(req->task->flags & PF_EXITING))
>  		io_req_defer_failed(req, -EFAULT);
> +	else if (req->flags & REQ_F_FORCE_ASYNC)
> +		io_queue_iowq(req, locked);
> +	else
> +		io_queue_sqe(req);
>  }
>  
>  void io_req_task_queue_fail(struct io_kiocb *req, int ret)

This one causes a failure for me with test/multicqes_drain.t, which
doesn't quite make sense to me (just yet), but it is a reliable timeout.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH for-next 1/4] io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async
  2023-01-29 22:57   ` Jens Axboe
@ 2023-01-29 23:17     ` Jens Axboe
  2023-01-30 10:45       ` Dylan Yudaken
  0 siblings, 1 reply; 11+ messages in thread
From: Jens Axboe @ 2023-01-29 23:17 UTC (permalink / raw)
  To: Dylan Yudaken, Pavel Begunkov; +Cc: io-uring, kernel-team

On 1/29/23 3:57 PM, Jens Axboe wrote:
> On 1/27/23 6:52 AM, Dylan Yudaken wrote:
>> REQ_F_FORCE_ASYNC was being ignored for re-queueing linked
>> requests. Instead obey that flag.
>>
>> Signed-off-by: Dylan Yudaken <[email protected]>
>> ---
>>  io_uring/io_uring.c | 8 +++++---
>>  1 file changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>> index db623b3185c8..980ba4fda101 100644
>> --- a/io_uring/io_uring.c
>> +++ b/io_uring/io_uring.c
>> @@ -1365,10 +1365,12 @@ void io_req_task_submit(struct io_kiocb *req, bool *locked)
>>  {
>>  	io_tw_lock(req->ctx, locked);
>>  	/* req->task == current here, checking PF_EXITING is safe */
>> -	if (likely(!(req->task->flags & PF_EXITING)))
>> -		io_queue_sqe(req);
>> -	else
>> +	if (unlikely(req->task->flags & PF_EXITING))
>>  		io_req_defer_failed(req, -EFAULT);
>> +	else if (req->flags & REQ_F_FORCE_ASYNC)
>> +		io_queue_iowq(req, locked);
>> +	else
>> +		io_queue_sqe(req);
>>  }
>>  
>>  void io_req_task_queue_fail(struct io_kiocb *req, int ret)
> 
> This one causes a failure for me with test/multicqes_drain.t, which
> doesn't quite make sense to me (just yet), but it is a reliable timeout.

OK, quick look and I think this is a bad assumption in the test case.
It's assuming that a POLL_ADD already succeeded, and hence that a
subsequent POLL_REMOVE will succeed. But now it's getting ENOENT as
we can't find it just yet, which means the cancelation itself isn't
being done. So we just end up waiting for something that doesn't happen.

Or could be an internal race with lookup/issue. In any case, it's
definitely being exposed by this patch.

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH for-next 1/4] io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async
  2023-01-29 23:17     ` Jens Axboe
@ 2023-01-30 10:45       ` Dylan Yudaken
  2023-01-30 15:53         ` Jens Axboe
  0 siblings, 1 reply; 11+ messages in thread
From: Dylan Yudaken @ 2023-01-30 10:45 UTC (permalink / raw)
  To: [email protected], [email protected]
  Cc: Kernel Team, [email protected]

[-- Attachment #1: Type: text/plain, Size: 3187 bytes --]

On Sun, 2023-01-29 at 16:17 -0700, Jens Axboe wrote:
> On 1/29/23 3:57 PM, Jens Axboe wrote:
> > On 1/27/23 6:52 AM, Dylan Yudaken wrote:
> > > REQ_F_FORCE_ASYNC was being ignored for re-queueing linked
> > > requests. Instead obey that flag.
> > > 
> > > Signed-off-by: Dylan Yudaken <[email protected]>
> > > ---
> > >  io_uring/io_uring.c | 8 +++++---
> > >  1 file changed, 5 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > > index db623b3185c8..980ba4fda101 100644
> > > --- a/io_uring/io_uring.c
> > > +++ b/io_uring/io_uring.c
> > > @@ -1365,10 +1365,12 @@ void io_req_task_submit(struct io_kiocb
> > > *req, bool *locked)
> > >  {
> > >         io_tw_lock(req->ctx, locked);
> > >         /* req->task == current here, checking PF_EXITING is safe
> > > */
> > > -       if (likely(!(req->task->flags & PF_EXITING)))
> > > -               io_queue_sqe(req);
> > > -       else
> > > +       if (unlikely(req->task->flags & PF_EXITING))
> > >                 io_req_defer_failed(req, -EFAULT);
> > > +       else if (req->flags & REQ_F_FORCE_ASYNC)
> > > +               io_queue_iowq(req, locked);
> > > +       else
> > > +               io_queue_sqe(req);
> > >  }
> > >  
> > >  void io_req_task_queue_fail(struct io_kiocb *req, int ret)
> > 
> > This one causes a failure for me with test/multicqes_drain.t, which
> > doesn't quite make sense to me (just yet), but it is a reliable
> > timeout.
> 
> OK, quick look and I think this is a bad assumption in the test case.
> It's assuming that a POLL_ADD already succeeded, and hence that a
> subsequent POLL_REMOVE will succeed. But now it's getting ENOENT as
> we can't find it just yet, which means the cancelation itself isn't
> being done. So we just end up waiting for something that doesn't
> happen.
> 
> Or could be an internal race with lookup/issue. In any case, it's
> definitely being exposed by this patch.
> 

That is a bit of an unpleasant test.
Essentially it triggers a pipe and reads from the pipe immediately
after. The test expects to see a CQE for that trigger; however, if
anything ran asynchronously then there is a race between the read and
the poll logic running.
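
Roughly, the racy sequence is (an illustrative sketch, not the actual test
code; assumes an initialized ring and a pipe):

	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
	char buf[8];

	/* arm a poll on the read side of the pipe */
	io_uring_prep_poll_add(sqe, pipefd[0], POLLIN);
	io_uring_submit(&ring);

	/* trigger the poll, then immediately drain the pipe again */
	write(pipefd[1], "foo", 3);
	read(pipefd[0], buf, 3);

	/*
	 * If the poll add was punted to io-wq, it may only run after the
	 * read above, see no data ready and never post the expected CQE.
	 */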

The attached patch fixes the test, but the reason my patches trigger it
is a bit weird.

This occurs on the second loop of the test, after the initial drain.
Essentially ctx->drain_active is still true when the second set of
polls is added, since drain_active is only cleared inside the next
io_drain_req. So the first poll will have REQ_F_FORCE_ASYNC set.

Previously those FORCE_ASYNCs were being ignored, but now with
"io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async"
they get sent to the work thread, which causes the race.

I wonder if drain_active should actually be cleared earlier, perhaps
before setting the REQ_F_FORCE_ASYNC flag?
The drain logic is pretty complex though, so I am not terribly keen to
start changing it if it's not generally useful.



[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: patch.diff --]
[-- Type: text/x-patch; name="patch.diff", Size: 1863 bytes --]

commit d362fb231310a52a79c8b9f72165a708bfd8aa44
Author: Dylan Yudaken <[email protected]>
Date:   Mon Jan 30 01:49:57 2023 -0800

    multicqes_drain: make trigger event wait before reading
    
    trigger_event is used to generate CQEs on the poll requests. However there
    is a race if that poll request is running asynchronously, where the
    read_pipe will complete before the poll is run, and the poll result will
    be that there is no data ready.
    
    Instead sleep and force an io_uring_get_events in order to give the poll a
    chance to run before reading from the pipe.
    
    Signed-off-by: Dylan Yudaken <[email protected]>

diff --git a/test/multicqes_drain.c b/test/multicqes_drain.c
index 3755beec42c7..6c4d5f2ba887 100644
--- a/test/multicqes_drain.c
+++ b/test/multicqes_drain.c
@@ -71,13 +71,15 @@ static void read_pipe(int pipe)
 		perror("read");
 }
 
-static int trigger_event(int p[])
+static int trigger_event(struct io_uring *ring, int p[])
 {
 	int ret;
 	if ((ret = write_pipe(p[1], "foo")) != 3) {
 		fprintf(stderr, "bad write return %d\n", ret);
 		return 1;
 	}
+	usleep(1000);
+	io_uring_get_events(ring);
 	read_pipe(p[0]);
 	return 0;
 }
@@ -236,10 +238,8 @@ static int test_generic_drain(struct io_uring *ring)
 		if (si[i].op != multi && si[i].op != single)
 			continue;
 
-		if (trigger_event(pipes[i]))
+		if (trigger_event(ring, pipes[i]))
 			goto err;
-
-		io_uring_get_events(ring);
 	}
 	sleep(1);
 	i = 0;
@@ -317,13 +317,11 @@ static int test_simple_drain(struct io_uring *ring)
 	}
 
 	for (i = 0; i < 2; i++) {
-		if (trigger_event(pipe1))
+		if (trigger_event(ring, pipe1))
 			goto err;
-		io_uring_get_events(ring);
 	}
-	if (trigger_event(pipe2))
-			goto err;
-	io_uring_get_events(ring);
+	if (trigger_event(ring, pipe2))
+		goto err;
 
 	for (i = 0; i < 2; i++) {
 		sqe[i] = io_uring_get_sqe(ring);

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH for-next 1/4] io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async
  2023-01-30 10:45       ` Dylan Yudaken
@ 2023-01-30 15:53         ` Jens Axboe
  2023-01-30 16:21           ` Pavel Begunkov
  0 siblings, 1 reply; 11+ messages in thread
From: Jens Axboe @ 2023-01-30 15:53 UTC (permalink / raw)
  To: Dylan Yudaken, [email protected]
  Cc: Kernel Team, [email protected]

On 1/30/23 3:45 AM, Dylan Yudaken wrote:
> On Sun, 2023-01-29 at 16:17 -0700, Jens Axboe wrote:
>> On 1/29/23 3:57 PM, Jens Axboe wrote:
>>> On 1/27/23 6:52 AM, Dylan Yudaken wrote:
>>>> REQ_F_FORCE_ASYNC was being ignored for re-queueing linked
>>>> requests. Instead obey that flag.
>>>>
>>>> Signed-off-by: Dylan Yudaken <[email protected]>
>>>> ---
>>>>  io_uring/io_uring.c | 8 +++++---
>>>>  1 file changed, 5 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>>>> index db623b3185c8..980ba4fda101 100644
>>>> --- a/io_uring/io_uring.c
>>>> +++ b/io_uring/io_uring.c
>>>> @@ -1365,10 +1365,12 @@ void io_req_task_submit(struct io_kiocb
>>>> *req, bool *locked)
>>>>  {
>>>>         io_tw_lock(req->ctx, locked);
>>>>         /* req->task == current here, checking PF_EXITING is safe
>>>> */
>>>> -       if (likely(!(req->task->flags & PF_EXITING)))
>>>> -               io_queue_sqe(req);
>>>> -       else
>>>> +       if (unlikely(req->task->flags & PF_EXITING))
>>>>                 io_req_defer_failed(req, -EFAULT);
>>>> +       else if (req->flags & REQ_F_FORCE_ASYNC)
>>>> +               io_queue_iowq(req, locked);
>>>> +       else
>>>> +               io_queue_sqe(req);
>>>>  }
>>>>  
>>>>  void io_req_task_queue_fail(struct io_kiocb *req, int ret)
>>>
>>> This one causes a failure for me with test/multicqes_drain.t, which
>>> doesn't quite make sense to me (just yet), but it is a reliable
>>> timeout.
>>
>> OK, quick look and I think this is a bad assumption in the test case.
>> It's assuming that a POLL_ADD already succeeded, and hence that a
>> subsequent POLL_REMOVE will succeed. But now it's getting ENOENT as
>> we can't find it just yet, which means the cancelation itself isn't
>> being done. So we just end up waiting for something that doesn't
>> happen.
>>
>> Or could be an internal race with lookup/issue. In any case, it's
>> definitely being exposed by this patch.
>>
> 
> That is a bit of an unpleasant test.
> Essentially it triggers a pipe, and reads from the pipe immediately
> after. The test expects to see a CQE for that trigger, however if
> anything ran asynchronously then there is a race between the read and
> the poll logic running.
> 
> The attached patch fixes the test, but the reason my patches trigger it
> is a bit weird.
> 
> This occurs on the second loop of the test, after the initial drain.
> Essentially ctx->drain_active is still true when the second set of
> polls are added, since drain_active is only cleared inside the next
> io_drain_req. So then the first poll will have REQ_F_FORCE_ASYNC set.
> 
> Previously those FORCE_ASYNC's were being ignored, but now with
> "io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async"
> they get sent to the work thread, which causes the race. 
> 
> I wonder if drain_active should actually be cleared earlier? perhaps
> before setting the REQ_F_FORCE_ASYNC flag?
> The drain logic is pretty complex though, so I am not terribly keen to
> start changing it if it's not generally useful.

Pavel, any input on the drain logic? I think you know that part the
best.

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH for-next 1/4] io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async
  2023-01-30 15:53         ` Jens Axboe
@ 2023-01-30 16:21           ` Pavel Begunkov
  0 siblings, 0 replies; 11+ messages in thread
From: Pavel Begunkov @ 2023-01-30 16:21 UTC (permalink / raw)
  To: Jens Axboe, Dylan Yudaken; +Cc: Kernel Team, [email protected]

On 1/30/23 15:53, Jens Axboe wrote:
> On 1/30/23 3:45 AM, Dylan Yudaken wrote:
>> On Sun, 2023-01-29 at 16:17 -0700, Jens Axboe wrote:
>>> On 1/29/23 3:57 PM, Jens Axboe wrote:
>>>> On 1/27/23 6:52 AM, Dylan Yudaken wrote:
>>>>> REQ_F_FORCE_ASYNC was being ignored for re-queueing linked
>>>>> requests. Instead obey that flag.
>>>>>
>>>>> Signed-off-by: Dylan Yudaken <[email protected]>
>>>>> ---
>>>>>   io_uring/io_uring.c | 8 +++++---
>>>>>   1 file changed, 5 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>>>>> index db623b3185c8..980ba4fda101 100644
>>>>> --- a/io_uring/io_uring.c
>>>>> +++ b/io_uring/io_uring.c
>>>>> @@ -1365,10 +1365,12 @@ void io_req_task_submit(struct io_kiocb
>>>>> *req, bool *locked)
>>>>>   {
>>>>>          io_tw_lock(req->ctx, locked);
>>>>>          /* req->task == current here, checking PF_EXITING is safe
>>>>> */
>>>>> -       if (likely(!(req->task->flags & PF_EXITING)))
>>>>> -               io_queue_sqe(req);
>>>>> -       else
>>>>> +       if (unlikely(req->task->flags & PF_EXITING))
>>>>>                  io_req_defer_failed(req, -EFAULT);
>>>>> +       else if (req->flags & REQ_F_FORCE_ASYNC)
>>>>> +               io_queue_iowq(req, locked);
>>>>> +       else
>>>>> +               io_queue_sqe(req);
>>>>>   }
>>>>>   
>>>>>   void io_req_task_queue_fail(struct io_kiocb *req, int ret)
>>>>
>>>> This one causes a failure for me with test/multicqes_drain.t, which
>>>> doesn't quite make sense to me (just yet), but it is a reliable
>>>> timeout.
>>>
>>> OK, quick look and I think this is a bad assumption in the test case.
>>> It's assuming that a POLL_ADD already succeeded, and hence that a
>>> subsequent POLL_REMOVE will succeed. But now it's getting ENOENT as
>>> we can't find it just yet, which means the cancelation itself isn't
>>> being done. So we just end up waiting for something that doesn't
>>> happen.
>>>
>>> Or could be an internal race with lookup/issue. In any case, it's
>>> definitely being exposed by this patch.
>>>
>>
>> That is a bit of an unpleasant test.
>> Essentially it triggers a pipe, and reads from the pipe immediately
>> after. The test expects to see a CQE for that trigger, however if
>> anything ran asynchronously then there is a race between the read and
>> the poll logic running.
>>
>> The attached patch fixes the test, but the reason my patches trigger it
>> is a bit weird.
>>
>> This occurs on the second loop of the test, after the initial drain.
>> Essentially ctx->drain_active is still true when the second set of
>> polls are added, since drain_active is only cleared inside the next
>> io_drain_req. So then the first poll will have REQ_F_FORCE_ASYNC set.
>>
>> Previously those FORCE_ASYNC's were being ignored, but now with
>> "io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async"
>> they get sent to the work thread, which causes the race.

And that sounds like a userspace problem: any request might be executed
async on an io_uring whim.

>> I wonder if drain_active should actually be cleared earlier? perhaps
>> before setting the REQ_F_FORCE_ASYNC flag?
>> The drain logic is pretty complex though, so I am not terribly keen to
>> start changing it if it's not generally useful.

No, that won't work. As a drain request forces the ring to delay all
future requests until all previous requests are completed, we can't
skip draining checks based on the state of the currently prepared request.
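
A minimal userspace sketch of that ordering (assuming an initialized ring
and open file descriptors):

	struct io_uring_sqe *sqe;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd1, buf1, sizeof(buf1), 0);

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd2, buf2, sizeof(buf2), 0);

	/* drained request: held back until both reads above have completed */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_nop(sqe);
	sqe->flags |= IOSQE_IO_DRAIN;

	io_uring_submit(&ring);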

Draining is currently out of the hot paths, and I'm happy about it. There
might be some ways (we could use a new flag instead of REQ_F_FORCE_ASYNC
to force it into the slow path and return the request to the normal path
if draining is not needed), but we don't care and really should not care.
I'd argue even further: it's time to mark DRAIN deprecated; it's too slow,
doesn't fit the io_uring model well and has edge-case behaviour problems.

> Pavel, any input on the drain logic? I think you know that part the
> best.

-- 
Pavel Begunkov

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2023-01-30 16:22 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-27 13:52 [PATCH for-next 0/4] io_uring: force async only ops to go async Dylan Yudaken
2023-01-27 13:52 ` [PATCH for-next 1/4] io_uring: if a linked request has REQ_F_FORCE_ASYNC then run it async Dylan Yudaken
2023-01-29 22:57   ` Jens Axboe
2023-01-29 23:17     ` Jens Axboe
2023-01-30 10:45       ` Dylan Yudaken
2023-01-30 15:53         ` Jens Axboe
2023-01-30 16:21           ` Pavel Begunkov
2023-01-27 13:52 ` [PATCH for-next 2/4] io_uring: for requests that require async, force it Dylan Yudaken
2023-01-27 13:52 ` [PATCH for-next 3/4] io_uring: always go async for unsupported fadvise flags Dylan Yudaken
2023-01-27 13:52 ` [PATCH for-next 4/4] io_uring: always go async for unsupported open flags Dylan Yudaken
2023-01-29 22:20 ` [PATCH for-next 0/4] io_uring: force async only ops to go async Jens Axboe
