[alpine-user] Alpine limit on file descriptors?

From: Alex Butler <alex_at_ewinkle.com>
Date: Tue, 14 Aug 2018 18:23:04 +0100

We've been having some issues with what looks like a limit on the maximum
number of file descriptors (or possibly a /dev/shm semaphore limitation). Our
application is in Python and uses the standard Python multiprocessing library
to create processes and associated queues for communication (typically between
20 and 200 processes at start-up, depending on hardware configuration). It
runs fine on Raspbian/Debian with any number of processes we choose (within
reason!) and runs fine under Alpine when we use low numbers of processes.

 

However, it always barfs with larger numbers of processes under Alpine, and
the reported OSError suggests it is running out of file descriptors. [That
might be a red herring; it may instead be related to the OS semaphore
management in /dev/shm. We're just not sure!]

 

Anyway, after trying quite a few things we've narrowed it down: it fails on
every stock flavour of Alpine we've tried (x64, Raspberry Pi, etc.) but never
happens on the various flavours of Raspbian/Debian/Ubuntu.

 

Is there some Alpine setting/limit we haven't yet found that sets the maximum
number of file descriptors (or some other subtle Alpine difference)? We've
tried all the "obvious" Linux file descriptor changes, such as ulimit and
sysctl tweaks.
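
For reference, checking the per-process limit and the current descriptor
count from inside Python looks something like this (a minimal sketch using
the standard resource and os modules; the /proc/self/fd path is
Linux-specific):

import os
import resource

# Soft/hard per-process limits on open file descriptors (what "ulimit -n" reports)
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("RLIMIT_NOFILE: soft=%d hard=%d" % (soft, hard))

# Descriptors currently open by this process (Linux-specific)
print("open fds: %d" % len(os.listdir("/proc/self/fd")))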

 

To help recreate this we've created a simple Python script (attached).
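
In outline, each pair does roughly this (a simplified sketch pieced together
from the output and traceback below rather than the attached file verbatim;
names like worker and the exact loop structure are illustrative):

from multiprocessing import Process, Queue

MAX_PAIRS = 86   # 85 runs clean for us under Alpine; 86 dies at the last pair

def worker(n, q):
    # each worker pushes nine values onto its own queue and exits
    q.put([n * 1000 + i for i in range(1, 10)])

if __name__ == "__main__":
    pairs = []
    for n in range(1, MAX_PAIRS + 1):
        q = Queue()                      # the call that dies with "OSError: [Errno 24]"
        p = Process(target=worker, args=(n, q))
        p.start()
        print("data for %d was %r" % (n, q.get()))
        p.join()
        pairs.append((p, q))             # keep references so nothing is cleaned up between pairs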

 

Under Alpine (Raspberry Pi) it fails after the 85th process pair. If
MAX_PAIRS is set to 85 it works fine, i.e. no exceptions. Put anything bigger
into MAX_PAIRS and we always get the following error at the 86th:

 

---
data for 83 was [83001, 83002, 83003, 83004, 83005, 83006, 83007, 83008, 83009]
data for 84 was [84001, 84002, 84003, 84004, 84005, 84006, 84007, 84008, 84009]
data for 85 was [85001, 85002, 85003, 85004, 85005, 85006, 85007, 85008, 85009]
Traceback (most recent call last):
  File "queue_test.py", line 41, in <module>
    q = Queue()
  File "/usr/lib/python2.7/multiprocessing/__init__.py", line 218, in Queue
    return Queue(maxsize)
  File "/usr/lib/python2.7/multiprocessing/queues.py", line 68, in __init__
    self._wlock = Lock()
  File "/usr/lib/python2.7/multiprocessing/synchronize.py", line 147, in __init__
    SemLock.__init__(self, SEMAPHORE, 1, 1)
  File "/usr/lib/python2.7/multiprocessing/synchronize.py", line 75, in __init__
    sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
OSError: [Errno 24] No file descriptors available
---
 
As I said, on other Linux distros this code runs fine.  We'd _really_ like
to use Alpine for a variety of obvious reasons.  It's not clear what is
going on, and not being able to run multiprocessing at the level of
parallelism we need might be a deal-breaker.
 
Incidentally, at MAX_PAIRS = 85 (when the test code runs fine), a system-wide
"lsof | wc -l" reveals about 29991 file descriptors (roughly 30k).
 
I've attached a copy of the test Python code for ease of replication.  We
just run it as root with "/usr/bin/python queue_test.py".
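
In case it helps with the arithmetic behind that lsof figure, counting the
test process's own descriptors before and after creating a single Queue (via
the Linux-specific /proc/self/fd) shows what each queue costs on a given
platform; this is just an illustrative snippet, not part of the attached test:

import os
from multiprocessing import Queue

def fd_count():
    # descriptors currently open by this process (Linux-specific)
    return len(os.listdir("/proc/self/fd"))

before = fd_count()
q = Queue()
print("one Queue() costs %d descriptor(s) here" % (fd_count() - before))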
 
Any help or suggestions as to what might be going on gratefully received!
 
Cheers,
 
Alex Butler
UK





---
Unsubscribe:  alpine-user+unsubscribe_at_lists.alpinelinux.org
Help:         alpine-user+help_at_lists.alpinelinux.org
---
Received on Tue Aug 14 2018 - 18:23:04 GMT