Subject: Re: [alpine-devel] udev replacement on Alpine Linux
To: alpine-devel@lists.alpinelinux.org
From: Laurent Bercot
Message-ID: <569CD71C.2020407@skarnet.org>
Date: Mon, 18 Jan 2016 13:14:20 +0100
List-Id: Alpine Development
On 16/01/2016 18:48, Jude Nelson wrote:
> This sounds reasonable. In fact, within vdevd there are already
> distinct netlink listener and data gatherer threads that communicate
> over a producer/consumer queue. Splitting them into separate
> processes connected by a pipe is consistent with the current design,
> and would also help with portability.

I have a standalone netlink listener:
http://skarnet.org/software/s6-linux-utils/s6-uevent-listener.html

Any data gatherer / event dispatcher program can be used behind it. I'm
currently using it as "s6-uevent-listener s6-uevent-spawner mdev", which
spawns an mdev instance per uevent. Ideally, I should be able to use it
as something like "s6-uevent-listener vdev-data-gatherer
vdev-event-dispatcher" and have a pipeline of 3 long-lived processes,
each process being independently replaceable on the command line by any
other implementation that uses the same API.

> I think this is one of the things Plan 9 got right--letting a process
> expose whatever fate-sharing state it wanted through the VFS.

The more I keep hearing about Plan 9, the more I tell myself I really
need to try it out. The day when I actually do is getting closer and
closer - I'm just afraid that once I do, I'll realize how horrible Unix
is and won't ever want to work with Unix again, which would be bad for
my financial well-being. XD

> * Unlike netlink sockets, a program cannot
> control the size of an inotify descriptor's "receive" buffer. This
> is a system-wide constant, defined in
> /proc/sys/fs/inotify/max_queued_events.
> However, libudev offers
> clients the ability to do just this (via
> udev_monitor_set_receive_buffer_size). This is what I originally
> meant--libudev-compat needs to ensure that the desired receive buffer
> size is honored.

Reading the udev_monitor doc pages stirs up horrible memories of the
D-Bus API. Urge to destroy world rising.

It looks like udev_monitor_set_receive_buffer_size() could be
completely stubbed out in your inotify-based implementation. It is only
useful when events queue up in the kernel buffer because a client isn't
reading them fast enough; but in your system, events are stored in the
filesystem, so they can never be lost - there's no such thing as a
meaningful "kernel buffer" in your case, and nobody cares what its size
is: clients will always have access to the full set of events.
"return 0;" is the implementation you want here.

> To work around these constraints, libudev-compat routes a
> udev_monitor's events through an internal socket pair.
> (cut layers upon layers of hacks to emulate udev_monitor filters)

Blech. I understand the API is inherently complex and kinda enforces
the system's architecture - which is very similar to what systemd does,
so it's very unsurprising to me that systemd absorbed udev: those two
were *made* for each other - but by deciding to do things differently
while still providing compatibility, you ended up coding something
that's just as complex as the original, and more convoluted (since
you're not using the original mechanisms). The filter mechanism is
horribly specific and does not leave much room for alternative
implementations, so I know it's hard to do correctly, but it seems to
me that your implementation gets the worst of both worlds:

- One of your implementation's advantages is that clients can never
lose events; but by piling your socketpair thingy onto it for an
"accurate" udev_monitor emulation, you make it so clients can actually
shoot themselves in the foot.
It may be accurate, but it's lower quality than your idea permits.

- The original udev implementation's advantage is that clients are
never woken up when an event arrives that doesn't pass the filter.
Here, your application will indeed never be woken up, but libudev-compat
will be, since you will get readability on your inotify descriptor.
Filters are not server-side (or even kernel-side) as udev intended;
they're client-side, and that's not efficient.

I believe you'd be much better off simply using a normal Unix socket
connection from the client to an event dispatcher daemon, and
implementing a small protocol where the udev_monitor_filter primitives
just write strings to the socket, and the server reads them and
implements filters server-side by *not* linking filtered events into
the client's event directory. This way, clients really aren't woken up
by events that do not pass the filter.

> But I'm uncomfortable with the technical debt it can introduce to the
> ecosystem--for example, a message bus has its own semantics that
> effectively require a bus-specific library, clients' design choices
> can require a message bus daemon to be running at all times,
> pervasive use of the message bus by system-level software can make
> the implementation a hard requirement for having a usable system,
> etc. (in short, we get dbus again).

Huh? I wasn't suggesting using a generic bus. I was suggesting that
the natural architecture for an event dispatcher is that of a single
publisher (the server) with multiple subscribers (the clients), and
that this is similar to a bus - except simpler, because you don't even
have multiple publishers. It's not about using a system bus or anything
of the kind.
It's about writing the event dispatcher and the client library as you'd
write a bus server and a bus client library (and please, forget about
the insane D-Bus model of message-passing between symmetrical peers - a
client-server model is much simpler, and easier to implement, at least
on Unix).

Good luck with your Ph.D. thesis!

--
Laurent

---
Unsubscribe: alpine-devel+unsubscribe@lists.alpinelinux.org
Help: alpine-devel+help@lists.alpinelinux.org
---