public inbox for [email protected]
From: Xiaoguang Wang <[email protected]>
To: Jens Axboe <[email protected]>, [email protected]
Cc: [email protected]
Subject: Re: [PATCH 2/2] io_uring: add memory barrier to synchronize io_kiocb's result and iopoll_completed
Date: Mon, 15 Jun 2020 23:32:31 +0800	[thread overview]
Message-ID: <d37647d3-da74-7c30-94c8-f18d0afcd958@linux.alibaba.com>
In-Reply-To: <[email protected]>

hi,

> On 6/15/20 8:48 AM, Xiaoguang Wang wrote:
>> hi,
>>
>>> On 6/15/20 3:24 AM, Xiaoguang Wang wrote:
>>>> In io_complete_rw_iopoll(), the stores to io_kiocb's result and
>>>> iopoll_completed are two independent store operations. To ensure that
>>>> once iopoll_completed is true, req->result is also visible to the cpu
>>>> executing io_do_iopoll(), a proper memory barrier should be used.
>>>>
>>>> And in io_do_iopoll(), we check whether req->result is -EAGAIN; if it
>>>> is, we'll need to issue this io request using io-wq again. In order to
>>>> issue just a single smp_rmb() on the completion side, move the
>>>> re-submit work to io_iopoll_complete().
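
(For illustration, the pairing the patch aims at is the usual publish
pattern; the store side is not shown in this diff, so this is only a
sketch of the intent:

	/* io_complete_rw_iopoll(): publish ->result, then mark completed */
	WRITE_ONCE(req->result, res);
	/* order the ->result store before the ->iopoll_completed store */
	smp_wmb();
	WRITE_ONCE(req->iopoll_completed, 1);

	/* io_iopoll_complete(): pair with the smp_wmb() above */
	smp_rmb();
	if (READ_ONCE(req->result) == -EAGAIN)
		...

so that a request observed as completed also carries its final ->result.)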
>>>
>>> Did you actually test this one?
>> I only ran the test cases in liburing/test in a VM.
>>
>>>
>>>> @@ -1736,11 +1748,20 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
>>>>    {
>>>>    	struct req_batch rb;
>>>>    	struct io_kiocb *req;
>>>> +	LIST_HEAD(again);
>>>> +
>>>> +	/* order with ->result store in io_complete_rw_iopoll() */
>>>> +	smp_rmb();
>>>>    
>>>>    	rb.to_free = rb.need_iter = 0;
>>>>    	while (!list_empty(done)) {
>>>>    		int cflags = 0;
>>>>    
>>>> +		if (READ_ONCE(req->result) == -EAGAIN) {
>>>> +			req->iopoll_completed = 0;
>>>> +			list_move_tail(&req->list, &again);
>>>> +			continue;
>>>> +		}
>>>>    		req = list_first_entry(done, struct io_kiocb, list);
>>>>    		list_del(&req->list);
>>>>    
>>>
>>> You're using 'req' here before you initialize it...
>> Sorry; the next time I submit patches, I'll construct test cases that
>> cover my code changes.
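
For reference, the intent was presumably to initialize req before the
check, i.e. something like (untested):

	while (!list_empty(done)) {
		int cflags = 0;

		req = list_first_entry(done, struct io_kiocb, list);
		if (READ_ONCE(req->result) == -EAGAIN) {
			req->iopoll_completed = 0;
			list_move_tail(&req->list, &again);
			continue;
		}
		list_del(&req->list);
		...
	}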
> 
> I'm surprised the compiler didn't complain, or that the regular testing
> didn't barf on it.
I'm also surprised; I'll try to find the reason.
And indeed the iopoll test case failed, but the command below displayed nothing:
[lege@localhost test]$ sudo ./iopoll
so I wrongly took the test case to have passed.

dmesg shows the errors:
[  127.806945] ==================================================================
[  127.806983] BUG: KASAN: use-after-free in io_iopoll_complete+0xbb/0x980
[  127.806989] Read of size 4 at addr ffff8886e3e98808 by task io_uring-sq/1643

[  127.806999] CPU: 16 PID: 1643 Comm: io_uring-sq Not tainted 5.7.0+ #501
[  127.807013] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.1-0-g0551a4be2c-prebuilt.qemu-project.org 04/01/2014
[  127.807021] Call Trace:
[  127.807040]  dump_stack+0x75/0xa0
[  127.807047]  ? io_iopoll_complete+0xbb/0x980
[  127.807062]  print_address_description.constprop.6+0x1a/0x220
[  127.807086]  ? _raw_write_lock_irqsave+0xd0/0xd0
[  127.807092]  ? io_free_req_many.part.79+0x208/0x2a0
[  127.807107]  ? __rcu_read_unlock+0x37/0x200
[  127.807112]  ? io_iopoll_complete+0xbb/0x980
[  127.807117]  ? io_iopoll_complete+0xbb/0x980
[  127.807122]  kasan_report.cold.9+0x1f/0x42
[  127.807128]  ? io_iopoll_complete+0xbb/0x980
[  127.807133]  io_iopoll_complete+0xbb/0x980
[  127.807138]  ? io_timeout_fn+0x140/0x140
[  127.807150]  ? __switch_to+0x2e9/0x5a0
[  127.807157]  io_iopoll_getevents+0x287/0x310
[  127.807163]  ? io_iopoll_complete+0x980/0x980
[  127.807172]  ? finish_wait+0xcb/0xf0
[  127.807179]  io_sq_thread+0x1c1/0x600
[  127.807185]  ? __ia32_sys_io_uring_enter+0x450/0x450
[  127.807194]  ? preempt_count_add+0x77/0xd0
[  127.807200]  ? _raw_spin_lock_irqsave+0x84/0xd0
[  127.807206]  ? _raw_write_lock_irqsave+0xd0/0xd0
[  127.807210]  ? finish_wait+0xf0/0xf0
[  127.807215]  ? preempt_count_sub+0x18/0xc0
[  127.807224]  ? __kthread_parkme+0xaf/0xd0
[  127.807231]  ? __ia32_sys_io_uring_enter+0x450/0x450
[  127.807235]  kthread+0x1e4/0x210
[  127.807241]  ? kthread_create_on_node+0xa0/0xa0
[  127.807246]  ret_from_fork+0x22/0x30

> 
> Don't think you need a new test case for this, the iopoll test case
> should cover it, if you limit the queue depth on the device by
> setting /sys/block/<dev>/queue/nr_requests to 2 or something like
> that.
Thanks, I'll try later.
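
(Something like the following should do it; the device name here is only
an example and depends on the test setup:

	$ echo 2 | sudo tee /sys/block/nvme0n1/queue/nr_requests

With the queue depth that small, submissions should hit -EAGAIN and
exercise the re-submit path above.)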

Regards,
Xiaoguang Wang

> 


Thread overview: 11+ messages
2020-06-15  9:24 [PATCH 0/2] add proper memory barrier for IOPOLL mode Xiaoguang Wang
2020-06-15  9:24 ` [PATCH 1/2] io_uring: don't fail links for EAGAIN error in " Xiaoguang Wang
2020-06-15  9:24 ` [PATCH 2/2] io_uring: add memory barrier to synchronize io_kiocb's result and iopoll_completed Xiaoguang Wang
2020-06-15 14:36   ` Jens Axboe
2020-06-15 14:48     ` Xiaoguang Wang
2020-06-15 15:02       ` Jens Axboe
2020-06-15 15:09         ` Pavel Begunkov
2020-06-15 15:32         ` Xiaoguang Wang [this message]
2020-06-15 15:35           ` Jens Axboe
2020-06-15 16:51             ` Jens Axboe
2020-06-15 17:53               ` Xiaoguang Wang
