* [PATCH 0/2] io_uring: Pass whole sqe to commands
@ 2023-04-19 10:29 Breno Leitao
2023-04-19 10:29 ` [PATCH 1/2] " Breno Leitao
2023-04-19 10:29 ` [PATCH 2/2] io_uring: Remove unnecessary BUILD_BUG_ON Breno Leitao
0 siblings, 2 replies; 11+ messages in thread
From: Breno Leitao @ 2023-04-19 10:29 UTC (permalink / raw)
To: io-uring, linux-nvme, asml.silence, axboe
Cc: leit, linux-kernel, linux-block, sagi, hch, kbusch, ming.lei
These two patches prepare for the sock support in the io_uring cmd, as
described in the following RFC:
https://lore.kernel.org/lkml/[email protected]/
Since the support described above depends on other refactors, such as the sock
ioctl() refactor[1], I would like to start integrating patches that have
consensus and can bring value right now. This will also reduce the patchset
size later.
Regarding these two patches: they are simple changes that make the io_uring
cmd subsystem more flexible (by passing the whole SQE to the command) and
clean up an unnecessary compile-time check.
These patches were tested by creating a filesystem and mounting an NVMe disk
using ubdsrv/ublkb0.
[1] https://lore.kernel.org/lkml/[email protected]/
Breno Leitao (2):
io_uring: Pass whole sqe to commands
io_uring: Remove unnecessary BUILD_BUG_ON
drivers/block/ublk_drv.c | 24 ++++++++++++------------
drivers/nvme/host/ioctl.c | 2 +-
include/linux/io_uring.h | 2 +-
io_uring/opdef.c | 2 +-
io_uring/uring_cmd.c | 14 ++++++--------
io_uring/uring_cmd.h | 8 --------
6 files changed, 21 insertions(+), 31 deletions(-)
--
2.34.1
* [PATCH 1/2] io_uring: Pass whole sqe to commands
2023-04-19 10:29 [PATCH 0/2] io_uring: Pass whole sqe to commands Breno Leitao
@ 2023-04-19 10:29 ` Breno Leitao
2023-04-20 4:57 ` Christoph Hellwig
2023-04-19 10:29 ` [PATCH 2/2] io_uring: Remove unnecessary BUILD_BUG_ON Breno Leitao
1 sibling, 1 reply; 11+ messages in thread
From: Breno Leitao @ 2023-04-19 10:29 UTC (permalink / raw)
To: io-uring, linux-nvme, asml.silence, axboe
Cc: leit, linux-kernel, linux-block, sagi, hch, kbusch, ming.lei
Currently the uring CMD operation relies on having large SQEs, but
future operations might want to use normal-sized SQEs.
The io_uring_cmd currently only saves the payload (cmd) part of the SQE,
but, for commands that use the normal SQE size, it might be necessary to
access the initial SQE fields outside of the payload/cmd block. So, save
the whole SQE rather than just the pdu.
This slightly changes how io_uring_cmd works, since the SQE is no longer
opaque to the command callbacks. I.e., the callbacks can look at any SQE
field, not only at the cmd structure.
The main advantage is that we don't need to create custom structures for
simple commands.
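As a rough illustration (hypothetical driver code, not part of this
patch; foo_uring_cmd()/foo_handle() and the error handling are made up),
a cmd handler can now reach regular SQE fields in addition to the
16-byte cmd payload:

	static int foo_uring_cmd(struct io_uring_cmd *ioucmd,
				 unsigned int issue_flags)
	{
		const struct io_uring_sqe *sqe = ioucmd->sqe;	/* whole SQE */
		u64 uaddr = READ_ONCE(sqe->addr);	/* regular SQE field */
		u32 len = READ_ONCE(sqe->len);		/* regular SQE field */

		/* the 16-byte payload is still reachable as sqe->cmd */
		if (!uaddr || !len)
			return -EINVAL;

		return foo_handle(ioucmd, uaddr, len, issue_flags);
	}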
Suggested-by: Pavel Begunkov <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
Reviewed-by: Keith Busch <[email protected]>
---
drivers/block/ublk_drv.c | 24 ++++++++++++------------
drivers/nvme/host/ioctl.c | 2 +-
include/linux/io_uring.h | 2 +-
io_uring/opdef.c | 2 +-
io_uring/uring_cmd.c | 11 ++++++-----
5 files changed, 21 insertions(+), 20 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index c73cc57ec547..ec23a3c9fac8 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1263,7 +1263,7 @@ static void ublk_handle_need_get_data(struct ublk_device *ub, int q_id,
static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
{
- struct ublksrv_io_cmd *ub_cmd = (struct ublksrv_io_cmd *)cmd->cmd;
+ struct ublksrv_io_cmd *ub_cmd = (struct ublksrv_io_cmd *)cmd->sqe->cmd;
struct ublk_device *ub = cmd->file->private_data;
struct ublk_queue *ubq;
struct ublk_io *io;
@@ -1567,7 +1567,7 @@ static struct ublk_device *ublk_get_device_from_id(int idx)
static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
{
- struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+ struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->sqe->cmd;
int ublksrv_pid = (int)header->data[0];
struct gendisk *disk;
int ret = -EINVAL;
@@ -1630,7 +1630,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
static int ublk_ctrl_get_queue_affinity(struct ublk_device *ub,
struct io_uring_cmd *cmd)
{
- struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+ struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->sqe->cmd;
void __user *argp = (void __user *)(unsigned long)header->addr;
cpumask_var_t cpumask;
unsigned long queue;
@@ -1681,7 +1681,7 @@ static inline void ublk_dump_dev_info(struct ublksrv_ctrl_dev_info *info)
static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
{
- struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+ struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->sqe->cmd;
void __user *argp = (void __user *)(unsigned long)header->addr;
struct ublksrv_ctrl_dev_info info;
struct ublk_device *ub;
@@ -1844,7 +1844,7 @@ static int ublk_ctrl_del_dev(struct ublk_device **p_ub)
static inline void ublk_ctrl_cmd_dump(struct io_uring_cmd *cmd)
{
- struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+ struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->sqe->cmd;
pr_devel("%s: cmd_op %x, dev id %d qid %d data %llx buf %llx len %u\n",
__func__, cmd->cmd_op, header->dev_id, header->queue_id,
@@ -1863,7 +1863,7 @@ static int ublk_ctrl_stop_dev(struct ublk_device *ub)
static int ublk_ctrl_get_dev_info(struct ublk_device *ub,
struct io_uring_cmd *cmd)
{
- struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+ struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->sqe->cmd;
void __user *argp = (void __user *)(unsigned long)header->addr;
if (header->len < sizeof(struct ublksrv_ctrl_dev_info) || !header->addr)
@@ -1894,7 +1894,7 @@ static void ublk_ctrl_fill_params_devt(struct ublk_device *ub)
static int ublk_ctrl_get_params(struct ublk_device *ub,
struct io_uring_cmd *cmd)
{
- struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+ struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->sqe->cmd;
void __user *argp = (void __user *)(unsigned long)header->addr;
struct ublk_params_header ph;
int ret;
@@ -1925,7 +1925,7 @@ static int ublk_ctrl_get_params(struct ublk_device *ub,
static int ublk_ctrl_set_params(struct ublk_device *ub,
struct io_uring_cmd *cmd)
{
- struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+ struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->sqe->cmd;
void __user *argp = (void __user *)(unsigned long)header->addr;
struct ublk_params_header ph;
int ret = -EFAULT;
@@ -1983,7 +1983,7 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
static int ublk_ctrl_start_recovery(struct ublk_device *ub,
struct io_uring_cmd *cmd)
{
- struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+ struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->sqe->cmd;
int ret = -EINVAL;
int i;
@@ -2025,7 +2025,7 @@ static int ublk_ctrl_start_recovery(struct ublk_device *ub,
static int ublk_ctrl_end_recovery(struct ublk_device *ub,
struct io_uring_cmd *cmd)
{
- struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+ struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->sqe->cmd;
int ublksrv_pid = (int)header->data[0];
int ret = -EINVAL;
@@ -2092,7 +2092,7 @@ static int ublk_char_dev_permission(struct ublk_device *ub,
static int ublk_ctrl_uring_cmd_permission(struct ublk_device *ub,
struct io_uring_cmd *cmd)
{
- struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+ struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->sqe->cmd;
bool unprivileged = ub->dev_info.flags & UBLK_F_UNPRIVILEGED_DEV;
void __user *argp = (void __user *)(unsigned long)header->addr;
char *dev_path = NULL;
@@ -2171,7 +2171,7 @@ static int ublk_ctrl_uring_cmd_permission(struct ublk_device *ub,
static int ublk_ctrl_uring_cmd(struct io_uring_cmd *cmd,
unsigned int issue_flags)
{
- struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->cmd;
+ struct ublksrv_ctrl_cmd *header = (struct ublksrv_ctrl_cmd *)cmd->sqe->cmd;
struct ublk_device *ub = NULL;
int ret = -EINVAL;
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index d24ea2e05156..351dff872fa0 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -552,7 +552,7 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
struct io_uring_cmd *ioucmd, unsigned int issue_flags, bool vec)
{
struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
- const struct nvme_uring_cmd *cmd = ioucmd->cmd;
+ const struct nvme_uring_cmd *cmd = (struct nvme_uring_cmd *)ioucmd->sqe->cmd;
struct request_queue *q = ns ? ns->queue : ctrl->admin_q;
struct nvme_uring_data d;
struct nvme_command c;
diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
index 35b9328ca335..2dfc81dd6d1a 100644
--- a/include/linux/io_uring.h
+++ b/include/linux/io_uring.h
@@ -24,7 +24,7 @@ enum io_uring_cmd_flags {
struct io_uring_cmd {
struct file *file;
- const void *cmd;
+ const struct io_uring_sqe *sqe;
union {
/* callback to defer completions to task context */
void (*task_work_cb)(struct io_uring_cmd *cmd, unsigned);
diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index cca7c5b55208..3b9c6489b8b6 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -627,7 +627,7 @@ const struct io_cold_def io_cold_defs[] = {
},
[IORING_OP_URING_CMD] = {
.name = "URING_CMD",
- .async_size = uring_cmd_pdu_size(1),
+ .async_size = 2 * sizeof(struct io_uring_sqe),
.prep_async = io_uring_cmd_prep_async,
},
[IORING_OP_SEND_ZC] = {
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 5113c9a48583..5cb2e39e99f9 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -69,15 +69,16 @@ EXPORT_SYMBOL_GPL(io_uring_cmd_done);
int io_uring_cmd_prep_async(struct io_kiocb *req)
{
struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
- size_t cmd_size;
+ size_t size = sizeof(struct io_uring_sqe);
BUILD_BUG_ON(uring_cmd_pdu_size(0) != 16);
BUILD_BUG_ON(uring_cmd_pdu_size(1) != 80);
- cmd_size = uring_cmd_pdu_size(req->ctx->flags & IORING_SETUP_SQE128);
+ if (req->ctx->flags & IORING_SETUP_SQE128)
+ size <<= 1;
- memcpy(req->async_data, ioucmd->cmd, cmd_size);
- ioucmd->cmd = req->async_data;
+ memcpy(req->async_data, ioucmd->sqe, size);
+ ioucmd->sqe = req->async_data;
return 0;
}
@@ -103,7 +104,7 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
req->imu = ctx->user_bufs[index];
io_req_set_rsrc_node(req, ctx, 0);
}
- ioucmd->cmd = sqe->cmd;
+ ioucmd->sqe = sqe;
ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
return 0;
}
--
2.34.1
* [PATCH 2/2] io_uring: Remove unnecessary BUILD_BUG_ON
2023-04-19 10:29 [PATCH 0/2] io_uring: Pass whole sqe to commands Breno Leitao
2023-04-19 10:29 ` [PATCH 1/2] " Breno Leitao
@ 2023-04-19 10:29 ` Breno Leitao
1 sibling, 0 replies; 11+ messages in thread
From: Breno Leitao @ 2023-04-19 10:29 UTC (permalink / raw)
To: io-uring, linux-nvme, asml.silence, axboe
Cc: leit, linux-kernel, linux-block, sagi, hch, kbusch, ming.lei
In io_uring_cmd_prep_async() there is an unnecessary compile-time check
that verifies whether cmd is correctly placed at offset 48 of the SQE.
This is unnecessary, since the check is already in place in
io_uring_init():
BUILD_BUG_SQE_ELEM(48, __u64, addr3);
Remove it, together with the uring_cmd_pdu_size() macro, which is no
longer used.
Keith started a discussion about this topic in the following thread:
https://lore.kernel.org/lkml/[email protected]/
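For reference, the layout guarantee the removed asserts were encoding can
be written as a plain compile-time check (a sketch, not the actual
BUILD_BUG_SQE_ELEM() expansion):

	/* The cmd payload starts at byte 48 of the 64-byte SQE, so
	 * uring_cmd_pdu_size() was 64 - 48 = 16 (single SQE) and
	 * 128 - 48 = 80 (SQE128).
	 */
	static_assert(offsetof(struct io_uring_sqe, cmd) == 48,
		      "uring_cmd payload must start at SQE offset 48");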
Signed-off-by: Breno Leitao <[email protected]>
---
io_uring/uring_cmd.c | 3 ---
io_uring/uring_cmd.h | 8 --------
2 files changed, 11 deletions(-)
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 5cb2e39e99f9..fccc497bab59 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -71,9 +71,6 @@ int io_uring_cmd_prep_async(struct io_kiocb *req)
struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
size_t size = sizeof(struct io_uring_sqe);
- BUILD_BUG_ON(uring_cmd_pdu_size(0) != 16);
- BUILD_BUG_ON(uring_cmd_pdu_size(1) != 80);
-
if (req->ctx->flags & IORING_SETUP_SQE128)
size <<= 1;
diff --git a/io_uring/uring_cmd.h b/io_uring/uring_cmd.h
index 7c6697d13cb2..8117684ec3ca 100644
--- a/io_uring/uring_cmd.h
+++ b/io_uring/uring_cmd.h
@@ -3,11 +3,3 @@
int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags);
int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
int io_uring_cmd_prep_async(struct io_kiocb *req);
-
-/*
- * The URING_CMD payload starts at 'cmd' in the first sqe, and continues into
- * the following sqe if SQE128 is used.
- */
-#define uring_cmd_pdu_size(is_sqe128) \
- ((1 + !!(is_sqe128)) * sizeof(struct io_uring_sqe) - \
- offsetof(struct io_uring_sqe, cmd))
--
2.34.1
* Re: [PATCH 1/2] io_uring: Pass whole sqe to commands
2023-04-19 10:29 ` [PATCH 1/2] " Breno Leitao
@ 2023-04-20 4:57 ` Christoph Hellwig
2023-04-20 12:29 ` Breno Leitao
2023-04-21 15:11 ` Breno Leitao
0 siblings, 2 replies; 11+ messages in thread
From: Christoph Hellwig @ 2023-04-20 4:57 UTC (permalink / raw)
To: Breno Leitao
Cc: io-uring, linux-nvme, asml.silence, axboe, leit, linux-kernel,
linux-block, sagi, hch, kbusch, ming.lei
On Wed, Apr 19, 2023 at 03:29:29AM -0700, Breno Leitao wrote:
> struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
> - const struct nvme_uring_cmd *cmd = ioucmd->cmd;
> + const struct nvme_uring_cmd *cmd = (struct nvme_uring_cmd *)ioucmd->sqe->cmd;
Please don't add the pointless cast. And in general avoid the overly
long lines.
I suspect most other users should also just define their structures
const instead of adding all these casts that are a sign of problems,
but that's a pre-existing issue.
> struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
> - size_t cmd_size;
> + size_t size = sizeof(struct io_uring_sqe);
>
> BUILD_BUG_ON(uring_cmd_pdu_size(0) != 16);
> BUILD_BUG_ON(uring_cmd_pdu_size(1) != 80);
>
> - cmd_size = uring_cmd_pdu_size(req->ctx->flags & IORING_SETUP_SQE128);
> + if (req->ctx->flags & IORING_SETUP_SQE128)
> + size <<= 1;
Why does this stop using uring_cmd_pdu_size()?
* Re: [PATCH 1/2] io_uring: Pass whole sqe to commands
2023-04-20 4:57 ` Christoph Hellwig
@ 2023-04-20 12:29 ` Breno Leitao
2023-04-20 12:31 ` Christoph Hellwig
2023-04-21 15:11 ` Breno Leitao
1 sibling, 1 reply; 11+ messages in thread
From: Breno Leitao @ 2023-04-20 12:29 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, linux-nvme, asml.silence, axboe, leit, linux-kernel,
linux-block, sagi, kbusch, ming.lei
Hello Christoph,
On Thu, Apr 20, 2023 at 06:57:12AM +0200, Christoph Hellwig wrote:
> On Wed, Apr 19, 2023 at 03:29:29AM -0700, Breno Leitao wrote:
> > struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
> > - const struct nvme_uring_cmd *cmd = ioucmd->cmd;
> > + const struct nvme_uring_cmd *cmd = (struct nvme_uring_cmd *)ioucmd->sqe->cmd;
>
> Please don't add the pointless cast. And in general avoid the overly
> long lines.
Ack!
>
> I suspect most other users should also just define their structures
> const instead of adding all these casts that are a sign of problems,
> but that's a pre-existing issue.
> > struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
> > - size_t cmd_size;
> > + size_t size = sizeof(struct io_uring_sqe);
> >
> > BUILD_BUG_ON(uring_cmd_pdu_size(0) != 16);
> > BUILD_BUG_ON(uring_cmd_pdu_size(1) != 80);
> >
> > - cmd_size = uring_cmd_pdu_size(req->ctx->flags & IORING_SETUP_SQE128);
> > + if (req->ctx->flags & IORING_SETUP_SQE128)
> > + size <<= 1;
>
>
> Why does this stop using uring_cmd_pdu_size()?
Before, only the cmd payload (sqe->cmd) was being copied to the async
structure. We are copying over the whole sqe now, since we can use SQE
fields inside the ioctl callbacks (instead of only cmd fields). So, the
copy now is 64 bytes for single SQE or 128 for double SQEs.
This has two major advantages:
* It is not necessary to create a cmd structure (which is mapped onto
sqe->cmd) for every command operation just to pass arguments. The
arguments can be passed as regular SQE fields.
* sqe->cmd is only 16 bytes on single SQEs. Passing the whole SQE to the
command reduces the need to use double SQEs for operations that
require large fields, such as {g,s}etsockopt().
There are some discussions about it also at
https://lkml.org/lkml/2023/4/6/786
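To put the numbers above in code, the copy size is simply the per-slot
SQE size; a sketch of a possible helper (illustrative, not part of this
patch):

	/* Sketch: bytes occupied by one SQE slot for this ring. */
	static inline size_t uring_sqe_size(struct io_ring_ctx *ctx)
	{
		if (ctx->flags & IORING_SETUP_SQE128)
			return 2 * sizeof(struct io_uring_sqe);	/* 128 */
		return sizeof(struct io_uring_sqe);		/* 64 */
	}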
Thanks for the review!
* Re: [PATCH 1/2] io_uring: Pass whole sqe to commands
2023-04-20 12:29 ` Breno Leitao
@ 2023-04-20 12:31 ` Christoph Hellwig
2023-04-20 12:38 ` Breno Leitao
0 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2023-04-20 12:31 UTC (permalink / raw)
To: Breno Leitao
Cc: Christoph Hellwig, io-uring, linux-nvme, asml.silence, axboe,
leit, linux-kernel, linux-block, sagi, kbusch, ming.lei
On Thu, Apr 20, 2023 at 05:29:18AM -0700, Breno Leitao wrote:
> > > - cmd_size = uring_cmd_pdu_size(req->ctx->flags & IORING_SETUP_SQE128);
> > > + if (req->ctx->flags & IORING_SETUP_SQE128)
> > > + size <<= 1;
> >
> >
> > Why does this stop using uring_cmd_pdu_size()?
>
> Before, only the cmd payload (sqe->cmd) was being copied to the async
> structure. We are copying over the whole sqe now, since we can use SQE
> fields inside the ioctl callbacks (instead of only cmd fields). So, the
> copy now is 64 bytes for single SQE or 128 for double SQEs.
That's the point of this series and I get it. But why do we remove
the nice and self-documenting helper that returns once or twice
the size of the SQE structure and instead add a magic open-coded
left shift?
* Re: [PATCH 1/2] io_uring: Pass whole sqe to commands
2023-04-20 12:31 ` Christoph Hellwig
@ 2023-04-20 12:38 ` Breno Leitao
2023-04-20 12:46 ` Christoph Hellwig
0 siblings, 1 reply; 11+ messages in thread
From: Breno Leitao @ 2023-04-20 12:38 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, linux-nvme, asml.silence, axboe, leit, linux-kernel,
linux-block, sagi, kbusch, ming.lei
On Thu, Apr 20, 2023 at 02:31:39PM +0200, Christoph Hellwig wrote:
> On Thu, Apr 20, 2023 at 05:29:18AM -0700, Breno Leitao wrote:
> > > > - cmd_size = uring_cmd_pdu_size(req->ctx->flags & IORING_SETUP_SQE128);
> > > > + if (req->ctx->flags & IORING_SETUP_SQE128)
> > > > + size <<= 1;
> > >
> > >
> > > Why does this stop using uring_cmd_pdu_size()?
> >
> > Before, only the cmd payload (sqe->cmd) was being copied to the async
> > structure. We are copying over the whole sqe now, since we can use SQE
> > fields inside the ioctl callbacks (instead of only cmd fields). So, the
> > copy now is 64 bytes for single SQE or 128 for double SQEs.
>
> That's the point of this series and I get it. But why do we remove
> the nice and self-documenting helper that returns once or twice
> > the size of the SQE structure and instead add a magic open-coded
> left shift?
uring_cmd_pdu_size() returns the size of the payload, not the size of
the SQE structure. Basically it returns 16 bytes for a single SQE or 80
for a double SQE.
Since we are not copying the payload anymore, this is not necessary. Now
we are copying 64 bytes for a single SQE or 128 bytes for a double SQE.
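To make those numbers concrete (a sketch assuming the standard 64-byte
SQE layout with the cmd payload at offset 48):

	/*
	 * uring_cmd_pdu_size() (payload only):  1 * 64 - 48 = 16
	 *                                       2 * 64 - 48 = 80
	 * whole-SQE copy (this patch):          1 * 64      = 64
	 *                                       2 * 64      = 128
	 */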
Do you prefer I create a helper that returns the SQE size, instead of
doing the left shift?
Thank you!
* Re: [PATCH 1/2] io_uring: Pass whole sqe to commands
2023-04-20 12:38 ` Breno Leitao
@ 2023-04-20 12:46 ` Christoph Hellwig
2023-04-30 14:37 ` Breno Leitao
0 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2023-04-20 12:46 UTC (permalink / raw)
To: Breno Leitao
Cc: Christoph Hellwig, io-uring, linux-nvme, asml.silence, axboe,
leit, linux-kernel, linux-block, sagi, kbusch, ming.lei
On Thu, Apr 20, 2023 at 05:38:02AM -0700, Breno Leitao wrote:
> Since we are not copying the payload anymore, this is not necessary. Now
> we are copying 64 bytes for a single SQE or 128 bytes for a double SQE.
>
> Do you prefer I create a helper that returns the SQE size, instead of
> doing the left shift?
I think a helper would be nice. And adding another sizeof(sqe) seems
more self-documenting than the shift, but if you really prefer the
shift at least write a good comment explaining it.
* Re: [PATCH 1/2] io_uring: Pass whole sqe to commands
2023-04-20 4:57 ` Christoph Hellwig
2023-04-20 12:29 ` Breno Leitao
@ 2023-04-21 15:11 ` Breno Leitao
2023-04-24 5:08 ` Christoph Hellwig
1 sibling, 1 reply; 11+ messages in thread
From: Breno Leitao @ 2023-04-21 15:11 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, linux-nvme, asml.silence, axboe, leit, linux-kernel,
linux-block, sagi, kbusch, ming.lei
On Thu, Apr 20, 2023 at 06:57:12AM +0200, Christoph Hellwig wrote:
> On Wed, Apr 19, 2023 at 03:29:29AM -0700, Breno Leitao wrote:
> > struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
> > - const struct nvme_uring_cmd *cmd = ioucmd->cmd;
> > + const struct nvme_uring_cmd *cmd = (struct nvme_uring_cmd *)ioucmd->sqe->cmd;
>
> Please don't add the pointless cast. And in general avoid the overly
> long lines.
If I don't add this cast, the compiler complains with the following error:
drivers/nvme/host/ioctl.c: In function ‘nvme_uring_cmd_io’:
drivers/nvme/host/ioctl.c:555:37: error: initialization of ‘const struct nvme_uring_cmd *’ from incompatible pointer type ‘const __u8 *’ {aka ‘const unsigned char *’} [-Werror=incompatible-pointer-types]
const struct nvme_uring_cmd *cmd = ioucmd->sqe->cmd;
* Re: [PATCH 1/2] io_uring: Pass whole sqe to commands
2023-04-21 15:11 ` Breno Leitao
@ 2023-04-24 5:08 ` Christoph Hellwig
0 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2023-04-24 5:08 UTC (permalink / raw)
To: Breno Leitao
Cc: Christoph Hellwig, io-uring, linux-nvme, asml.silence, axboe,
leit, linux-kernel, linux-block, sagi, kbusch, ming.lei
On Fri, Apr 21, 2023 at 08:11:31AM -0700, Breno Leitao wrote:
> On Thu, Apr 20, 2023 at 06:57:12AM +0200, Christoph Hellwig wrote:
> > On Wed, Apr 19, 2023 at 03:29:29AM -0700, Breno Leitao wrote:
> > > struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
> > > - const struct nvme_uring_cmd *cmd = ioucmd->cmd;
> > > + const struct nvme_uring_cmd *cmd = (struct nvme_uring_cmd *)ioucmd->sqe->cmd;
> >
> > Please don't add the pointless cast. And in general avoid the overly
> > long lines.
>
> If I don't add this cast, the compiler complains with the follow error:
>
> drivers/nvme/host/ioctl.c: In function ‘nvme_uring_cmd_io’:
> drivers/nvme/host/ioctl.c:555:37: error: initialization of ‘const struct nvme_uring_cmd *’ from incompatible pointer type ‘const __u8 *’ {aka ‘const unsigned char *’} [-Werror=incompatible-pointer-types]
> const struct nvme_uring_cmd *cmd = ioucmd->sqe->cmd;
Oh. I think then we need a helper to get the private data from
the io_uring_cmd, as an interface that requires casts in all callers
is asking for bugs.
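One possible shape for such a helper (a sketch; name and placement are
illustrative):

	/* Let the core hand out the payload so callers don't need casts. */
	static inline const void *io_uring_sqe_cmd(const struct io_uring_sqe *sqe)
	{
		return sqe->cmd;
	}

	/* e.g. in nvme_uring_cmd_io(): */
	const struct nvme_uring_cmd *cmd = io_uring_sqe_cmd(ioucmd->sqe);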
* Re: [PATCH 1/2] io_uring: Pass whole sqe to commands
2023-04-20 12:46 ` Christoph Hellwig
@ 2023-04-30 14:37 ` Breno Leitao
0 siblings, 0 replies; 11+ messages in thread
From: Breno Leitao @ 2023-04-30 14:37 UTC (permalink / raw)
To: Christoph Hellwig
Cc: io-uring, linux-nvme, asml.silence, axboe, leit, linux-kernel,
linux-block, sagi, kbusch, ming.lei
On Thu, Apr 20, 2023 at 02:46:15PM +0200, Christoph Hellwig wrote:
> On Thu, Apr 20, 2023 at 05:38:02AM -0700, Breno Leitao wrote:
> > Since we are not copying the payload anymore, this is not necessary. Now
> > we are copying 64 bytes for a single SQE or 128 bytes for a double SQE.
> >
> > Do you prefer I create a helper that returns the SQE size, instead of
> > doing the left shift?
>
> I think a helper would be nice. And adding another sizeof(sqe) seems
> more self-documenting than the shift, but if you really prefer the
> shift at least write a good comment explaining it.
Agreed, this is a good idea. I've fixed it in the nvme code by creating a
helper function.
The same problem happens in ublk_drv, and I've also fixed it there
in a new patch.
https://lkml.org/lkml/2023/4/30/94
Thanks for the review.