On 15/02/2020 09:01, Jens Axboe wrote:
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index fb94b8bac638..530dcd91fa53 100644
> @@ -4630,6 +4753,14 @@ static void __io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe)
>  	 */
>  	if (ret == -EAGAIN && (!(req->flags & REQ_F_NOWAIT) ||
>  	    (req->flags & REQ_F_MUST_PUNT))) {
> +
> +		if (io_arm_poll_handler(req, &retry_count)) {
> +			if (retry_count == 1)
> +				goto issue;

Better to set sqe = NULL before retrying, so it won't re-read the sqe
and try to init the req twice (see the first sketch below).

Also, the second sync issue may return -EAGAIN again, and as I
remember, read/write/etc. will try to copy the iovec into req->io. But
the iovec is already in req->io, so it will memcpy() onto itself. Not
a good thing.

> +			else if (!retry_count)
> +				goto done_req;
> +			INIT_IO_WORK(&req->work, io_wq_submit_work);

It's not nice to reset it like this (see the second sketch below):
- prep() could have set some work.flags
- a custom work.func is more performant (this adds an extra switch)
- some requests may rely on their specified work.func being called,
  e.g. close(), even though it doesn't participate in the scheme

> +		}
>  punt:
>  		if (io_op_defs[req->opcode].file_table) {
>  			ret = io_grab_files(req);

> @@ -5154,26 +5285,40 @@ void io_uring_task_handler(struct task_struct *tsk)
>  {
>  	LIST_HEAD(local_list);
>  	struct io_kiocb *req;
> +	long state;
>  
>  	spin_lock_irq(&tsk->uring_lock);
>  	if (!list_empty(&tsk->uring_work))

-- 
Pavel Begunkov
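
A minimal sketch of the first point (untested; it assumes the
surrounding __io_queue_sqe() code from the patch, with "issue",
"done_req" and retry_count taken from the quoted hunk):

	if (io_arm_poll_handler(req, &retry_count)) {
		if (retry_count == 1) {
			/*
			 * The req was already initialised from the sqe
			 * on the first pass; clear sqe so the retry
			 * doesn't re-read it and prep the req twice.
			 */
			sqe = NULL;
			goto issue;
		} else if (!retry_count) {
			goto done_req;
		}
		INIT_IO_WORK(&req->work, io_wq_submit_work);
	}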
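
And a sketch of the second point (also untested; REQ_F_WORK_PREPPED is
a made-up flag standing in for "prep() already configured req->work"):

	/*
	 * Hypothetical flag, set by prep() whenever it installs
	 * op-specific work.flags or a custom work.func.
	 */
	if (!(req->flags & REQ_F_WORK_PREPPED))
		INIT_IO_WORK(&req->work, io_wq_submit_work);

This would keep whatever flags prep() set and leave a custom work.func
(e.g. close()'s) in place instead of unconditionally resetting both.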