Erik Rigtorp
2009-12-30 22:34:48 UTC
Hi!
I've read the two discussions on using ZeroMQ for IPC. I think ZeroMQ
should support IPC and in-process communication.
TCP is nice to work with but it has one problem: on Linux (and others),
TCP over loopback doesn't bypass the TCP stack, which makes the latency
several times higher than using pipes or Unix domain sockets. I know
that on Solaris this is optimized so that a loopback TCP connection
becomes more or less a pipe. For low-latency IPC on Linux, ZeroMQ needs
pipes or Unix domain sockets.
For ultra-low-latency IPC there is only one way to go, and that is to
use shared memory. I took a look at yqueue.hpp in zeromq2 and it's a
good start. We only need to add a lock-free memory allocator (which
can be implemented using a lock-free queue) or implement a lock-free
ring buffer that holds a fixed number of messages and blocks the
writer when it's full. For signaling I suggest implementing two
different approaches: one using pthread condition variables and one
using busy-waiting.