public inbox for [email protected]
* [PATCH] io_uring: ensure REQ_F_ISREG is set for async offload
@ 2022-07-21 15:11 Jens Axboe
  2022-07-21 23:43 ` Yin Fengwei
  0 siblings, 1 reply; 3+ messages in thread
From: Jens Axboe @ 2022-07-21 15:11 UTC (permalink / raw)
  To: io-uring; +Cc: Yin Fengwei

If we're offloading requests directly to io-wq because IOSQE_ASYNC was
set in the sqe, we can miss hashing writes appropriately because we
haven't set REQ_F_ISREG yet. This can cause a performance regression
with buffered writes, as io-wq then no longer correctly serializes writes
to that file.

Ensure that we set the flags in io_prep_async_work(), which will cause
the io-wq work item to be hashed appropriately.
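To illustrate the serialization the commit message relies on: io-wq hashes buffered-write work items by the file's inode, and items that land in the same hash bucket execute one at a time. A minimal userspace sketch of that bucketing follows; the bucket count, hash function, and names are invented for illustration and are not the kernel's actual implementation.

```c
#include <stdint.h>

/* Illustrative model only: io-wq keys hashed work by file_inode(req->file),
 * and work items in the same bucket run serially, so two buffered writes to
 * one regular file cannot execute concurrently. NR_BUCKETS and the
 * multiplicative hash below are made up for this sketch. */
#define NR_BUCKETS 64

static unsigned wq_hash_bucket(uintptr_t inode_id)
{
	/* Knuth-style multiplicative hash, truncated to 32 bits. */
	uint32_t h = (uint32_t)inode_id * 2654435761u;
	return (h >> 16) % NR_BUCKETS;
}

/* Two writes serialize when they hash to the same bucket, which is
 * guaranteed when they target the same inode. */
static int writes_serialize(uintptr_t inode_a, uintptr_t inode_b)
{
	return wq_hash_bucket(inode_a) == wq_hash_bucket(inode_b);
}
```

If REQ_F_ISREG is not yet set when io_prep_async_work() makes the hashing decision, the work item is never hashed at all, so the serialization sketched above simply does not happen; that is the regression the patch fixes.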

Fixes: 584b0180f0f4 ("io_uring: move read/write file prep state into actual opcode handler")
Link: https://lore.kernel.org/io-uring/20220608080054.GB22428@xsang-OptiPlex-9020/
Reported-and-tested-by: Yin Fengwei <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>

---

Yin, this is the 5.20 version. Once this lands upstream, I'll get the
5.19/18 variant sent to stable as well. Thanks!

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 624535c62565..4b3fd645d023 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -406,6 +406,9 @@ static void io_prep_async_work(struct io_kiocb *req)
 	if (req->flags & REQ_F_FORCE_ASYNC)
 		req->work.flags |= IO_WQ_WORK_CONCURRENT;
 
+	if (req->file && !io_req_ffs_set(req))
+		req->flags |= io_file_get_flags(req->file) << REQ_F_SUPPORT_NOWAIT_BIT;
+
 	if (req->flags & REQ_F_ISREG) {
 		if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL))
 			io_wq_hash_work(&req->work, file_inode(req->file));
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 868f45d55543..5db0a60dc04e 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -41,6 +41,11 @@ struct file *io_file_get_normal(struct io_kiocb *req, int fd);
 struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
 			       unsigned issue_flags);
 
+static inline bool io_req_ffs_set(struct io_kiocb *req)
+{
+	return req->flags & REQ_F_FIXED_FILE;
+}
+
 bool io_is_uring_fops(struct file *file);
 bool io_alloc_async_data(struct io_kiocb *req);
 void io_req_task_work_add(struct io_kiocb *req);
diff --git a/io_uring/rw.c b/io_uring/rw.c
index ade3e235f277..c50a0d66d67a 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -647,11 +647,6 @@ static bool need_read_all(struct io_kiocb *req)
 		S_ISBLK(file_inode(req->file)->i_mode);
 }
 
-static inline bool io_req_ffs_set(struct io_kiocb *req)
-{
-	return req->flags & REQ_F_FIXED_FILE;
-}
-
 static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
 {
 	struct io_rw *rw = io_kiocb_to_cmd(req);
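The io_uring.c hunk above derives per-request flags from the file by shifting the compact result of io_file_get_flags() into the request flag word at REQ_F_SUPPORT_NOWAIT_BIT, so REQ_F_ISREG is populated before the hashing check runs. A standalone sketch of that bit-packing idiom follows; the bit positions, names, and layout are invented for illustration and do not match the kernel's actual flag values.

```c
#include <stdint.h>

/* Hypothetical layout: assume the file-flag helper returns bit 0 = supports
 * NOWAIT and bit 1 = regular file, and the request flag word reserves two
 * adjacent bits for them starting at SUPPORT_NOWAIT_BIT. Because the file
 * bits are contiguous and ordered to match, one shift-and-OR transfers both. */
enum {
	SUPPORT_NOWAIT_BIT = 20,
	REQ_SUPPORT_NOWAIT = 1u << SUPPORT_NOWAIT_BIT,
	REQ_ISREG          = 1u << (SUPPORT_NOWAIT_BIT + 1),
};

#define FILE_SUPPORTS_NOWAIT 0x1u
#define FILE_IS_REGULAR      0x2u

/* Mimics: req->flags |= io_file_get_flags(file) << REQ_F_SUPPORT_NOWAIT_BIT */
static uint32_t propagate_file_flags(uint32_t req_flags, uint32_t file_flags)
{
	return req_flags | (file_flags << SUPPORT_NOWAIT_BIT);
}
```

The design point the patch depends on is that this derivation is cheap and idempotent, so repeating it in io_prep_async_work() for the IOSQE_ASYNC offload path costs little while guaranteeing the hashing check sees a correct REQ_F_ISREG.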

-- 
Jens Axboe



* Re: [PATCH] io_uring: ensure REQ_F_ISREG is set for async offload
  2022-07-21 15:11 [PATCH] io_uring: ensure REQ_F_ISREG is set for async offload Jens Axboe
@ 2022-07-21 23:43 ` Yin Fengwei
  2022-07-22  1:36   ` Jens Axboe
  0 siblings, 1 reply; 3+ messages in thread
From: Yin Fengwei @ 2022-07-21 23:43 UTC (permalink / raw)
  To: Jens Axboe, io-uring


On 7/21/2022 11:11 PM, Jens Axboe wrote:
> If we're offloading requests directly to io-wq because IOSQE_ASYNC was
> set in the sqe, we can miss hashing writes appropriately because we
> haven't set REQ_F_ISREG yet. This can cause a performance regression
> with buffered writes, as io-wq then no longer correctly serializes writes
> to that file.
> 
> Ensure that we set the flags in io_prep_async_work(), which will cause
> the io-wq work item to be hashed appropriately.
> 
> Fixes: 584b0180f0f4 ("io_uring: move read/write file prep state into actual opcode handler")
> Link: https://lore.kernel.org/io-uring/20220608080054.GB22428@xsang-OptiPlex-9020/
> Reported-and-tested-by: Yin Fengwei <[email protected]>
This issue was originally reported by the kernel test robot; from the original report:

If you fix the issue, kindly add following tag
Reported-by: kernel test robot <[email protected]>



Regards
Yin, Fengwei

> Signed-off-by: Jens Axboe <[email protected]>
> 
> ---
> 
> Yin, this is the 5.20 version. Once this lands upstream, I'll get the
> 5.19/18 variant sent to stable as well. Thanks!
> 
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 624535c62565..4b3fd645d023 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -406,6 +406,9 @@ static void io_prep_async_work(struct io_kiocb *req)
>  	if (req->flags & REQ_F_FORCE_ASYNC)
>  		req->work.flags |= IO_WQ_WORK_CONCURRENT;
>  
> +	if (req->file && !io_req_ffs_set(req))
> +		req->flags |= io_file_get_flags(req->file) << REQ_F_SUPPORT_NOWAIT_BIT;
> +
>  	if (req->flags & REQ_F_ISREG) {
>  		if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL))
>  			io_wq_hash_work(&req->work, file_inode(req->file));
> diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
> index 868f45d55543..5db0a60dc04e 100644
> --- a/io_uring/io_uring.h
> +++ b/io_uring/io_uring.h
> @@ -41,6 +41,11 @@ struct file *io_file_get_normal(struct io_kiocb *req, int fd);
>  struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
>  			       unsigned issue_flags);
>  
> +static inline bool io_req_ffs_set(struct io_kiocb *req)
> +{
> +	return req->flags & REQ_F_FIXED_FILE;
> +}
> +
>  bool io_is_uring_fops(struct file *file);
>  bool io_alloc_async_data(struct io_kiocb *req);
>  void io_req_task_work_add(struct io_kiocb *req);
> diff --git a/io_uring/rw.c b/io_uring/rw.c
> index ade3e235f277..c50a0d66d67a 100644
> --- a/io_uring/rw.c
> +++ b/io_uring/rw.c
> @@ -647,11 +647,6 @@ static bool need_read_all(struct io_kiocb *req)
>  		S_ISBLK(file_inode(req->file)->i_mode);
>  }
>  
> -static inline bool io_req_ffs_set(struct io_kiocb *req)
> -{
> -	return req->flags & REQ_F_FIXED_FILE;
> -}
> -
>  static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
>  {
>  	struct io_rw *rw = io_kiocb_to_cmd(req);
> 


* Re: [PATCH] io_uring: ensure REQ_F_ISREG is set for async offload
  2022-07-21 23:43 ` Yin Fengwei
@ 2022-07-22  1:36   ` Jens Axboe
  0 siblings, 0 replies; 3+ messages in thread
From: Jens Axboe @ 2022-07-22  1:36 UTC (permalink / raw)
  To: Yin Fengwei, io-uring

On 7/21/22 5:43 PM, Yin Fengwei wrote:
> 
> On 7/21/2022 11:11 PM, Jens Axboe wrote:
>> If we're offloading requests directly to io-wq because IOSQE_ASYNC was
>> set in the sqe, we can miss hashing writes appropriately because we
>> haven't set REQ_F_ISREG yet. This can cause a performance regression
>> with buffered writes, as io-wq then no longer correctly serializes writes
>> to that file.
>>
>> Ensure that we set the flags in io_prep_async_work(), which will cause
>> the io-wq work item to be hashed appropriately.
>>
>> Fixes: 584b0180f0f4 ("io_uring: move read/write file prep state into actual opcode handler")
>> Link: https://lore.kernel.org/io-uring/20220608080054.GB22428@xsang-OptiPlex-9020/
>> Reported-and-tested-by: Yin Fengwei <[email protected]>
> This issue was originally reported by the kernel test robot; from the original report:
> 
> If you fix the issue, kindly add following tag
> Reported-by: kernel test robot <[email protected]>

Sorry, I missed that in the original. I'll be rebasing this branch this
weekend anyway, I'll try and remember to make the edit.

-- 
Jens Axboe

