Mail archive

Re: [alpine-devel] udev replacement on Alpine Linux

From: Laurent Bercot <>
Date: Mon, 18 Jan 2016 13:14:20 +0100

On 16/01/2016 18:48, Jude Nelson wrote:
> This sounds reasonable. In fact, within vdevd there are already
> distinct netlink listener and data gatherer threads that communicate
> over a producer/consumer queue. Splitting them into separate
> processes connected by a pipe is consistent with the current design,
> and would also help with portability.

  I have a standalone netlink listener. Any data gatherer / event
dispatcher program can be used behind it.
I'm currently using it as "s6-uevent-listener s6-uevent-spawner mdev",
which spawns an mdev instance per uevent.
  Ideally, I should be able to use it as something like
"s6-uevent-listener vdev-data-gatherer vdev-event-dispatcher" and have
a pipeline of 3 long-lived processes, every process being independently
replaceable on the command-line by any other implementation that uses
the same API.

> I think this is one of the things Plan 9 got right--letting a process
> expose whatever fate-sharing state it wanted through the VFS.

  The more I keep hearing about Plan 9, the more I tell myself I really
need to try it out. The day where I actually do it is getting closer and
closer - I'm just afraid that once I do, I'll realize how horrible Unix
is and won't ever want to work with Unix again, which would be bad for
my financial well-being. XD

> * Unlike netlink sockets, a program cannot
> control the size of an inotify descriptor's "receive" buffer. This
> is a system-wide constant, defined in
> /proc/sys/fs/inotify/max_queued_events. However, libudev offers
> clients the ability to do just this (via
> udev_monitor_set_receive_buffer_size). This is what I originally
> meant--libudev-compat needs to ensure that the desired receive buffer
> size is honored.

  Reading the udev_monitor doc pages stirs up horrible memories of the
D-Bus API. Urge to destroy world rising.

  It looks like udev_monitor_set_receive_buffer_size() could be
completely stubbed out for your inotify-based implementation. It is only
useful when events queue up in the kernel buffer because a client isn't
reading them fast enough; but in your system, events are stored in the
filesystem, so they can never be lost. There's no such thing as a
meaningful "kernel buffer" in your case, and nobody cares what its
size is: clients will always have access to the full set of events.
"return 0;" is the implementation you want here.

> To work around these constraints, libudev-compat routes a
> udev_monitor's events through an internal socket pair.
> (cut layers upon layers of hacks to emulate udev_monitor filters)

  I understand the API is inherently complex and practically enforces
the system's architecture - which is very similar to what systemd does,
so it's very unsurprising to me that systemd phagocytosed udev: those
two were *made* to be together - but it looks like by deciding to do
things differently while still wanting to provide compatibility, you
ended up coding something that's just as complex as the original, and
more convoluted (since you're not using the original mechanisms).

  The filter mechanism is horribly specific and does not leave much
room for alternative implementations, so I know it's hard to do
correctly, but it seems to me that your implementation gets the worst
of both worlds:
- one of your implementation's advantages is that clients can never
lose events, but by piling your socketpair thingy onto it for an "accurate"
udev_monitor emulation, you make it so clients can actually shoot
themselves in the foot. It may be accurate, but it's lower quality than
your idea permits.
- the original udev implementation's advantage is that clients are never
woken up by an event that doesn't pass the filter. Here, your
application indeed won't be woken up, but libudev-compat will be, since
you will get readability on your inotify descriptor. Filters are not
server-side (or even kernel-side) as udev intended, they're client-side,
and that's not efficient.

  I believe that you'd be much better off simply using a normal Unix
socket connection from the client to an event dispatcher daemon, and
implementing a small protocol where udev_monitor_filter primitives just
write strings to the socket, and the server reads them and implements
filters server-side by *not* linking filtered events to the
client's event directory. This way, clients really aren't woken up by
events that do not pass the filter.

> But I'm uncomfortable with the technical debt it can introduce to the
> ecosystem--for example, a message bus has its own semantics that
> effectively require a bus-specific library, clients' design choices
> can require a message bus daemon to be running at all times,
> pervasive use of the message bus by system-level software can make
> the implementation a hard requirement for having a usable system,
> etc. (in short, we get dbus again).

  I wasn't suggesting using a generic bus.
  I was suggesting that the natural architecture for an event dispatcher
was that of a single publisher (the server) with multiple subscribers
(the clients). And that was similar to a bus - except simpler, because
you don't even have multiple publishers.

  It's not about using a system bus or anything of the kind. It's about
writing the event dispatcher and the client library as you'd write a bus
server and a bus client library (and please, forget about the insane
D-Bus model of message-passing between symmetrical peers - a client-server
model is much simpler, and easier to implement, at least on Unix).

  Good luck with your Ph.D. thesis!

Received on Mon Jan 18 2016 - 13:14:20 UTC