From: Bernd Schubert <[email protected]>
To: Bernd Schubert <[email protected]>, Miklos Szeredi <[email protected]>
Cc: Amir Goldstein <[email protected]>,
"[email protected]" <[email protected]>,
"[email protected]" <[email protected]>,
Kent Overstreet <[email protected]>,
Josef Bacik <[email protected]>,
Joanne Koong <[email protected]>,
Jens Axboe <[email protected]>
Subject: Re: [PATCH RFC v2 00/19] fuse: fuse-over-io-uring
Date: Fri, 30 Aug 2024 00:32:16 +0200
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
[I shortened the CC list, as the long one only came about due to mmap and optimizations]
On 6/12/24 16:56, Bernd Schubert wrote:
> On 6/12/24 16:07, Miklos Szeredi wrote:
>> On Wed, 12 Jun 2024 at 15:33, Bernd Schubert <[email protected]> wrote:
>>
>>> I didn't do that yet, as we are going to use the ring buffer for requests,
>>> i.e. the ring buffer immediately gets all the data from the network, there
>>> is no copy. Even if the ring buffer were to get data from a local disk,
>>> there would be no need to use a separate application buffer anymore. And
>>> with that there is just no extra copy.
>>
>> Let's just tackle this shared request buffer, as it seems to be a
>> central part of your design.
>>
>> You say the shared buffer is used to immediately get the data from the
>> network (or various other sources), which is completely viable.
>>
>> And then the kernel will do the copy from the shared buffer. Single copy, fine.
>>
>> But if the buffer wasn't shared? What would be the difference?
>> Single copy also.
>>
>> Why is the shared buffer better? I mean it may even be worse due to
>> cache aliasing issues on certain architectures. copy_to_user() /
>> copy_from_user() are pretty darn efficient.
>
> Right now we have:
>
> - Application thread writes into the buffer, then calls io_uring_cmd_done
>
> I can try to do without mmap and set a pointer to the user buffer in the
> 80B section of the SQE. I'm not sure if the application is allowed to
> write into that buffer; possibly/probably we will be forced to use
> io_uring_cmd_complete_in_task() in all cases (without 19/19 we have that
> anyway). My greatest fear here is that the extra task has performance
> implications for sync requests.
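To illustrate what the pointer in the 80B section could look like from the
server side, here is a rough userspace sketch (the struct layout, the cmd_op
value and all names are placeholders for illustration, not the ABI of this
series):

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <liburing.h>

/* hypothetical payload for the 80-byte cmd section of a big SQE */
struct fuse_uring_cmd_req {
	uint64_t buf_ptr;	/* user buffer the server reads/writes */
	uint32_t buf_len;
	uint32_t qid;		/* ring queue this entry belongs to */
};

/* the ring must be created with IORING_SETUP_SQE128 so that sqe->cmd
 * really has 80 bytes available */
static int fuse_uring_queue_buf(struct io_uring *ring, int fuse_dev_fd,
				void *buf, size_t len, unsigned int qid)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct fuse_uring_cmd_req req = {
		.buf_ptr = (uint64_t)(uintptr_t)buf,
		.buf_len = (uint32_t)len,
		.qid	 = qid,
	};

	if (!sqe)
		return -EAGAIN;

	io_uring_prep_rw(IORING_OP_URING_CMD, sqe, fuse_dev_fd, NULL, 0, 0);
	sqe->cmd_op = 0;			/* placeholder FUSE cmd opcode */
	memcpy(sqe->cmd, &req, sizeof(req));	/* lands in the 80B section */

	return io_uring_submit(ring);
}

The kernel side would then take the buffer pointer out of the cmd section
and copy to/from that user address, instead of going through the mmap'ed
queue buffer.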
>
>
>>
>> Why is it better to have that buffer managed by kernel? Being locked
>> in memory (being unswappable) is probably a disadvantage as well. And
>> if locking is required, it can be done on the user buffer.
>
> Well, let me try to pass the buffer in the 80B section.
>
>>
>> And there are all the setup and teardown complexities...
>
> If the buffer in the 80B section works, setup becomes easier: mmap and
> ioctls go away. Teardown, well, we still need the workaround, as we need
> to handle io_uring_cmd_done, but if you could live with that for the
> moment, I would ask Jens or Pavel or Ming for help to see if we could
> solve that in io-uring itself.
> Is the ring workaround in fuse_dev_release() acceptable for you? Or do
> you have any other idea about it?
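Roughly, the kind of thing such a teardown workaround has to do (hedged
sketch, all struct/list/function names are placeholders; similar in spirit
to how ublk aborts its queued uring commands):

#include <linux/errno.h>
#include <linux/io_uring/cmd.h>
#include <linux/io_uring_types.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/* hypothetical per-queue state */
struct fuse_ring_ent {
	struct list_head list;
	struct io_uring_cmd *cmd;	/* uring cmd waiting for a request */
};

struct fuse_ring_queue {
	spinlock_t lock;
	struct list_head ent_avail_queue;
};

/* Complete every command the server still has queued, so io_uring does
 * not wait for them forever once the fuse device goes away. */
static void fuse_uring_stop_queue(struct fuse_ring_queue *queue)
{
	struct fuse_ring_ent *ent;

	spin_lock(&queue->lock);
	list_for_each_entry(ent, &queue->ent_avail_queue, list) {
		if (!ent->cmd)
			continue;
		/* the server gets -ENOTCONN and stops using the ring */
		io_uring_cmd_done(ent->cmd, -ENOTCONN, 0, IO_URING_F_UNLOCKED);
		ent->cmd = NULL;
	}
	spin_unlock(&queue->lock);
}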
>
>>
>> Note: the ring buffer used by io_uring is different. It literally
>> allows communication without invoking any system calls in certain
>> cases. That shared buffer doesn't add anything like that. At least I
>> don't see what it actually adds.
>>
>> Hmm?
>
> The application can write into the buffer. We won't need shared queue
> buffers if we can solve the same with a user pointer.
I wanted to send out a new series today,
https://github.com/bsbernd/linux/tree/fuse-uring-for-6.10-rfc3-without-mmap
but then noticed a teardown issue.
[ 1525.905504] KASAN: null-ptr-deref in range [0x00000000000001a0-0x00000000000001a7]
[ 1525.910431] CPU: 15 PID: 183 Comm: kworker/15:1 Tainted: G O 6.10.0+ #48
[ 1525.916449] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 1525.922470] Workqueue: events io_fallback_req_func
[ 1525.925840] RIP: 0010:__lock_acquire+0x74/0x7b80
[ 1525.929010] Code: 89 bc 24 80 00 00 00 0f 85 1c 5f 00 00 83 3d 6e 80 b0 02 00 0f 84 1d 12 00 00 83 3d 65 c7 67 02 00 74 27 48 89 f8 48 c1 e8 03 <42> 80 3c 30 00 74 0d e8 50 44 42 00 48 8b bc 24 80 00 00 00 48 c7
[ 1525.942211] RSP: 0018:ffff88810b2af490 EFLAGS: 00010002
[ 1525.945672] RAX: 0000000000000034 RBX: 0000000000000000 RCX: 0000000000000001
[ 1525.950421] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00000000000001a0
[ 1525.955200] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
[ 1525.959979] R10: dffffc0000000000 R11: fffffbfff07b1cbe R12: 0000000000000000
[ 1525.964252] R13: 0000000000000001 R14: dffffc0000000000 R15: 0000000000000001
[ 1525.968225] FS: 0000000000000000(0000) GS:ffff88875b200000(0000) knlGS:0000000000000000
[ 1525.973932] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1525.976694] CR2: 00005555b6a381f0 CR3: 000000012f5f1000 CR4: 00000000000006f0
[ 1525.980030] Call Trace:
[ 1525.981371] <TASK>
[ 1525.982567] ? __die_body+0x66/0xb0
[ 1525.984376] ? die_addr+0xc1/0x100
[ 1525.986111] ? exc_general_protection+0x1c6/0x330
[ 1525.988401] ? asm_exc_general_protection+0x22/0x30
[ 1525.990864] ? __lock_acquire+0x74/0x7b80
[ 1525.992901] ? mark_lock+0x9f/0x360
[ 1525.994635] ? __lock_acquire+0x1420/0x7b80
[ 1525.996629] ? attach_entity_load_avg+0x47d/0x550
[ 1525.998765] ? hlock_conflict+0x5a/0x1f0
[ 1526.000515] ? __bfs+0x2dc/0x5a0
[ 1526.001993] lock_acquire+0x1fb/0x3d0
[ 1526.004727] ? gup_fast_fallback+0x13f/0x1d80
[ 1526.006586] ? gup_fast_fallback+0x13f/0x1d80
[ 1526.008412] gup_fast_fallback+0x158/0x1d80
[ 1526.010170] ? gup_fast_fallback+0x13f/0x1d80
[ 1526.011999] ? __lock_acquire+0x2b07/0x7b80
[ 1526.013793] __iov_iter_get_pages_alloc+0x36e/0x980
[ 1526.015876] ? do_raw_spin_unlock+0x5a/0x8a0
[ 1526.017734] iov_iter_get_pages2+0x56/0x70
[ 1526.019491] fuse_copy_fill+0x48e/0x980 [fuse]
[ 1526.021400] fuse_copy_args+0x174/0x6a0 [fuse]
[ 1526.023199] fuse_uring_prepare_send+0x319/0x6c0 [fuse]
[ 1526.025178] fuse_uring_send_req_in_task+0x42/0x100 [fuse]
[ 1526.027163] io_fallback_req_func+0xb4/0x170
[ 1526.028737] ? process_scheduled_works+0x75b/0x1160
[ 1526.030445] process_scheduled_works+0x85c/0x1160
[ 1526.032073] worker_thread+0x8ba/0xce0
[ 1526.033388] kthread+0x23e/0x2b0
[ 1526.035404] ? pr_cont_work_flush+0x290/0x290
[ 1526.036958] ? kthread_blkcg+0xa0/0xa0
[ 1526.038321] ret_from_fork+0x30/0x60
[ 1526.039600] ? kthread_blkcg+0xa0/0xa0
[ 1526.040942] ret_from_fork_asm+0x11/0x20
[ 1526.042353] </TASK>
We probably need to call iov_iter_get_pages2() immediately when the fuse
server submits the buffer, and not only once the pages are needed.
I had planned to do that as an optimization later on; I think it is also
needed to avoid io_uring_cmd_complete_in_task().
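I.e. pin the pages at submission time, roughly along these lines (hedged
sketch: everything except the iov_iter calls is a placeholder, and I used
the iov_iter_get_pages_alloc2() variant here so the sketch does not have
to size the page array itself):

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/uio.h>

/* hypothetical per-ring-entry state */
struct fuse_ring_ent_buf {
	struct page **pages;
	unsigned int nr_pages;
	size_t page_off;	/* offset into the first page */
};

/*
 * Pin the server's buffer while we are still in the server's task
 * context (at SQE submission), so that later copies from task-work or
 * a workqueue never have to walk the submitter's mm like in the trace
 * above.  The pages have to be released with put_page() and the array
 * with kvfree() once the request is done.
 */
static int fuse_uring_pin_buf(void __user *ubuf, size_t len,
			      struct fuse_ring_ent_buf *buf)
{
	struct iovec iov = { .iov_base = ubuf, .iov_len = len };
	struct iov_iter iter;
	unsigned int i;
	ssize_t pinned;

	iov_iter_init(&iter, ITER_DEST, &iov, 1, len);

	pinned = iov_iter_get_pages_alloc2(&iter, &buf->pages, len,
					   &buf->page_off);
	if (pinned < 0)
		return pinned;

	buf->nr_pages = DIV_ROUND_UP(buf->page_off + pinned, PAGE_SIZE);
	if ((size_t)pinned != len) {
		/* partial pin - drop everything again (sketch only) */
		for (i = 0; i < buf->nr_pages; i++)
			put_page(buf->pages[i]);
		kvfree(buf->pages);
		return -EFAULT;
	}

	return 0;
}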
The part I don't like here is that with mmap we had a complex
initialization, but then it either worked or it didn't - no failures at
IO time, and run time was just a copy into the buffer.
Without mmap, initialization is much simpler, but the complexity now
shifts to IO time.
Thanks,
Bernd