public inbox for [email protected]
From: Jens Axboe <[email protected]>
To: Brian Foster <[email protected]>, [email protected]
Cc: [email protected]
Subject: Re: occasional metadata I/O errors (-EOPNOTSUPP) on XFS + io_uring
Date: Wed, 16 Sep 2020 10:55:08 -0600	[thread overview]
Message-ID: <[email protected]> (raw)
In-Reply-To: <20200916131957.GB1681377@bfoster>

On 9/16/20 7:19 AM, Brian Foster wrote:
> On Tue, Sep 15, 2020 at 07:33:27AM -0400, Brian Foster wrote:
>> Hi Jens,
>>
>> I'm seeing an occasional metadata (read) I/O error (EOPNOTSUPP) when
>> running Zorro's recent io_uring enabled fsstress on XFS (fsstress -d
>> <mnt> -n 99999999 -p 8). The storage is a 50GB dm-linear device on a
>> virtio disk (within a KVM guest). The full callstack of the I/O
>> submission path is appended below [2], acquired via inserting a
>> WARN_ON() in my local tree.
>>
>> From tracing around a bit, it looks like what happens is that fsstress
>> calls into io_uring, the latter starts a plug and sets plug.nowait =
>> true (via io_submit_sqes() -> io_submit_state_start()) and eventually
>> XFS needs to read an inode cluster buffer in the context of this task.
>> That buffer read ultimately fails due to submit_bio_checks() setting
>> REQ_NOWAIT on the bio and the following logic in the same function
>> causing a BLK_STS_NOTSUPP status:
>>
>> 	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_mq(q))
>> 		goto not_supported;
>>
>> In turn, this leads to the following behavior in XFS:
>>
>> [ 3839.273519] XFS (dm-2): metadata I/O error in "xfs_imap_to_bp+0x116/0x2c0 [xfs]" at daddr 0x323a5a0 len 32 error 95
>> [ 3839.303283] XFS (dm-2): log I/O error -95
>> [ 3839.321437] XFS (dm-2): xfs_do_force_shutdown(0x2) called from line 1196 of file fs/xfs/xfs_log.c. Return address = ffffffffc12dea8a
>> [ 3839.323554] XFS (dm-2): Log I/O Error Detected. Shutting down filesystem
>> [ 3839.324773] XFS (dm-2): Please unmount the filesystem and rectify the problem(s)
>>
>> I suppose it's possible fsstress is making an invalid request based on
>> my setup, but I find it a little strange that this state appears to leak
>> into filesystem I/O requests. What's more concerning is that this also
>> seems to impact an immediately subsequent log write submission, which is
>> a fatal error and causes the filesystem to shut down.
>>
>> Finally, note that I've seen your patch associated with Zorro's recent
>> bug report [1] and that does seem to prevent the problem. I'm still
>> sending this report because the connection between the plug and that
>> change is not obvious to me, so I wanted to 1.) confirm this is intended
>> to fix this problem and 2.) try to understand whether this plugging
>> behavior introduces any constraints on the fs when invoked in io_uring
>> context. Thoughts? Thanks.
>>
> 
> To expand on this a bit, I was playing more with the aforementioned fix
> yesterday while waiting for this email's several hour trip to the
> mailing list to complete and eventually realized that I don't think the
> plug.nowait thing properly accommodates XFS' use of multiple devices. A
> simple example is XFS on a data device with mq support and an external
> log device without mq support. Presumably io_uring requests could thus
> enter XFS with plug.nowait set to true, and then any log bio submission
> that happens to occur in that context is doomed to fail and shut down
> the fs.

Do we ever read from the logdev? It'll only be a concern on the read
side. And even from there, you'd need nested reads from the log device.

In general, the 'can async' check should be advisory: the -EAGAIN or
-EOPNOTSUPP should be caught and the request reissued. The failure path
was just this happening off the retry path, when arming the async
buffered callback.

-- 
Jens Axboe



Thread overview: 5+ messages
2020-09-15 11:33 occasional metadata I/O errors (-EOPNOTSUPP) on XFS + io_uring Brian Foster
2020-09-16 13:19 ` Brian Foster
2020-09-16 16:55   ` Jens Axboe [this message]
2020-09-16 18:05     ` Brian Foster
2020-09-16 18:12       ` Jens Axboe
