Discussion:
[zeromq-dev] Comparing OpenDDS and ZeroMQ Usage and Performance
Martin Sustrik
2010-06-18 17:15:57 UTC
Permalink
I haven't read the article yet but it looks interesting:

http://mnb.ociweb.com/mnb/MiddlewareNewsBrief-201004.html

Martin
Jon Dyte
2010-06-19 17:44:14 UTC
Permalink
Post by Martin Sustrik
http://mnb.ociweb.com/mnb/MiddlewareNewsBrief-201004.html
Martin
seems to boil down to OpenDDS, with its strongly typed (CORBA IDL or
something similar) messages, doing a round trip of 205 microseconds
versus 0MQ + Google Protocol Buffers coming in at 216 microseconds. I'm
surprised 0MQ isn't faster ....

Jon
Apps, John
2010-06-19 19:32:52 UTC
Permalink
These are the summary data points.

OpenDDS Raw Buffer 185 usec
ZeroMQ Raw Buffer 170 usec
Boost.Asio Raw Buffer 75 usec

OpenDDS .NET Object streamed through a Raw Buffer 630 usec
ZeroMQ .NET Object streamed through a Raw Buffer 537 usec
Boost.Asio .NET Object streamed through a Raw Buffer 413 usec

OpenDDS Strongly Typed Data 205 usec
ZeroMQ Strongly Typed Data with Boost Serialization 577 usec
Boost.Asio Strongly Typed Data with Boost Serialization 396 usec
ZeroMQ Strongly Typed Data with Google Protocol Buffers 216 usec

I think an expert eye should be cast over these numbers... In addition, a message length of 1,000 bytes is probably a bit more than 0MQ is optimized for?

-- ***@hp.com | +491718691813 | http://twitter.com/johnapps --


Michael P. McCormick
2010-06-19 21:20:50 UTC
Permalink
Wonder what portion of that round trip is just Google Protocol Buffers.
Pieter Hintjens
2010-06-19 21:24:49 UTC
Permalink
As the article says for each test, each run produces different figures. It
is somewhat deceptive to quote usec figures on boxes that run random
processes with non-RT kernels. A proper test would use two clean boxes and a
dedicated switch. Note that these tests were done by the guys who make
OpenDDS.

-Pieter

Sent from my Android mobile phone.

Armin Steinhoff
2010-06-20 11:39:37 UTC
Permalink
Post by Pieter Hintjens
As the article says for each test, each run produces different
figures. It is somewhat deceptive to quote usec figures on boxes that
run random processes with non-RT kernels. A proper test would use two
clean boxes and a dedicated switch.
Note that these tests were done by the guys who make OpenDDS.
ZeroMQ is faster than DDS in 2 of 3 cases ... based on these numbers
it's a little bit strange to conclude that DDS is in general faster
than ZeroMQ!

But this shows the intention behind that "performance test" by the DDS guys :)

--Armin
Martin Sustrik
2010-06-21 10:19:09 UTC
Permalink
Hi,
Post by Pieter Hintjens
ZeroMQ Raw Buffer 170 usec
This seems a bit too high for 1kB messages. Even on a lousy kernel like
Windows I wouldn't expect it to be more than ~75us. Is anyone seeing
this kind of latency (170us)? If so, it should be considered a
regression IMO.

Martin
Malcolm Spence
2010-06-24 18:39:14 UTC
Permalink
We are an open source shop. We use many open source products with clients as
we build out their systems. Some of the products we develop ourselves, as we
want to contribute to communities, not just be free-riders. We use whatever is
appropriate for the client. We try to optimize the entire system for low
latency, throughput, etc., not just the individual pieces, which might come
at the expense of other elements.

We were asked to do a very quick "positioning" of the technologies with a
financial market data sweet spot. Hence the message size, which is pretty
much standard.

We did not withhold the findings. We published them, along with the code, so
that others can build on them. The more information that is out there, the
better the choices that will be made.

Regards,
Malcolm Spence
Director of Business Development
OCI St. Louis MO USA
TEL: 1-314-590-0206
Vitaly Mayatskikh
2010-06-20 09:49:39 UTC
Permalink
Post by Jon Dyte
seems to boil down to OpenDDS, with its strongly typed (CORBA IDL or
something similar) messages, doing a round trip of 205 microseconds
versus 0MQ + Google Protocol Buffers coming in at 216 microseconds. I'm
surprised 0MQ isn't faster ....
Windows XP... There should be an RT kernel to make OS latencies more
predictable.
--
wbr, Vitaly
Michael Santy
2010-06-21 13:50:35 UTC
Permalink
Post by Martin Sustrik
Post by Pieter Hintjens
ZeroMQ Raw Buffer 170 usec
This seems a bit too high for 1kB messages. Even on a lousy kernel like
Windows I wouldn't expect it to be more than ~75us. Is anyone seeing
this kind of latency (170us)? If so, it should be considered a
regression IMO.
Martin
I've never seen this kind of latency for 1KB messages, but then again I've
only used 0MQ on Linux. I don't have access to a system with a real-time
kernel, but I'd be willing to try to duplicate the raw buffer comparison
tests on Linux w/ Ethernet and 20Gb Infiniband. It will be a couple of
weeks before I can get around to it though.

Mike
Martin Sustrik
2010-06-21 14:24:51 UTC
Permalink
Michael,
Post by Michael Santy
I've never seen this kind of latency for 1KB messages, but then again I've
only used 0MQ on Linux. I don't have access to a system with a real-time
kernel, but I'd be willing to try to duplicate the raw buffer comparison
tests on Linux w/ Ethernet and 20Gb Infiniband. It will be a couple of
weeks before I can get around to it though.
That would be great. Thanks!

Martin
Don Busch
2010-06-24 17:16:54 UTC
Permalink
Hi,
Post by Martin Sustrik
Michael,
Post by Michael Santy
I've never seen this kind of latency for 1KB messages, but then again I've
only used 0MQ on Linux. I don't have access to a system with a real-time
kernel, but I'd be willing to try to duplicate the raw buffer comparison
tests on Linux w/ Ethernet and 20Gb Infiniband. It will be a couple of
weeks before I can get around to it though.
That would be great. Thanks!
Martin
I wrote the paper. If you'd like to run the tests on a faster box with an RT
kernel, that would be great. That's why the source code is attached, and the
paper encouraged users to build the code and run the tests themselves.

The laptop I ran them on is really old and slow, and the paper said as much,
so the raw numbers aren't worth much. It's the relative numbers that are more
interesting. I needed a Windows box for the .NET comparisons.

The point of the paper is that the decision to use ZeroMQ or OpenDDS depends
a lot on what you're using it for. If all you're doing is sending raw buffers
over the wire, and your collection of participating processes doesn't change
throughout the execution of the system, then ZeroMQ will be faster than
OpenDDS. But if you are sending something like C++ structs, then you have to
build that capability on top of ZeroMQ one way or another and that will kill
any performance advantage that ZeroMQ has. Also, if your system consists of
processes that intermittently come and go, then OpenDDS can handle that right
out of the box whereas you'd have to build it on top of ZeroMQ. OpenDDS also
has a load of QoS capabilities that the article doesn't really talk much
about, capabilities that you'd have to build on top of ZeroMQ if you
want them.
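
For illustration, a minimal sketch of that marshalling step using Google
Protocol Buffers, as in the article (QuoteMsg and quote.pb.h are made-up
names rather than the article's actual schema, and zmq_send() is shown
with the current C API signature, not the 2.0-era one):

#include <zmq.h>
#include <string>
#include "quote.pb.h"   // hypothetical protoc-generated header

void send_quote (void *socket, const QuoteMsg &quote)
{
    // Typed message -> raw bytes: this serialization layer is what
    // you have to build (or borrow) on top of ZeroMQ.
    std::string wire;
    quote.SerializeToString (&wire);

    // Raw bytes -> the wire: this part is all that 0MQ itself sees.
    zmq_send (socket, wire.data (), wire.size (), 0);
}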

So to summarize, I guess, we'd encourage users to look at their full end-to-end
use cases to figure out what they really need. If they find themselves wanting
to build things on top of ZeroMQ that are already in OpenDDS, then they will
most likely eliminate the performance advantage of ZeroMQ while also creating
a lot of extra work for themselves.

But, yes, I certainly encourage people to take the code and build and run it
themselves. ZeroMQ looks like an interesting product.

Best Regards,

Don Busch
Principal Software Engineer
Object Computing, Inc.
Nicholas Piël
2010-06-24 18:32:05 UTC
Permalink
Hi Don,

It seems that you have used the default settings for ProtoBuf. By default it optimizes for code size; when you add the line "option optimize_for = SPEED;" to your .proto file, ProtoBuf will show a significant performance increase.
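
For example, with a hypothetical schema (the benchmark's actual .proto
will differ):

// quote.proto -- illustrative schema only.
// Generate speed-optimized serialization code instead of the default
// size-optimized code:
option optimize_for = SPEED;

message QuoteMsg {
  required string symbol = 1;
  required double price = 2;
}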

Cheers,
Nicholas
Martin Sustrik
2010-06-24 19:53:39 UTC
Permalink
Hi Don,

First of all, thanks for doing the benchmarks and writing the paper!
Post by Don Busch
I wrote the paper. If you'd like to run the tests on a faster box with an RT
kernel, that would be great. That's why the source code is attached, and the
paper encouraged users to build the code and run the tests themselves.
I don't think the RT kernel would make a difference. An RT kernel is good
for eliminating latency peaks; however, average latency tends to be the
same as, or even worse than, with a non-RT kernel.
Post by Don Busch
The laptop I ran them on is really old and slow, and the paper stated as such,
so the raw numbers aren't worth much. It's the relative numbers that are more
interesting. I needed a Windows box for the .NET comparisons.
I see. That could mean that there's no Windows platform performance
regression as I thought, just that the computer was slow.

I'll check on my box. Just to be sure: it was run on a single box, and
10.201.200.72 resolved to the TCP loopback interface, right?
Post by Don Busch
The point of the paper is that the decision to use ZeroMQ or OpenDDS depends
a lot on what you're using it for. If all you're doing is sending raw buffers
over the wire, and your collection of participating processes doesn't change
throughout the execution of the system, then ZeroMQ will be faster than
OpenDDS. But if you are sending something like C++ structs, then you have to
build that capability on top of ZeroMQ one way or another and that will kill
any performance advantage that ZeroMQ has. Also, if your system consists of
processes that intermittently come and go, then OpenDDS can handle that right
out of the box whereas you'd have to build it on top of ZeroMQ. OpenDDS also
has a load of QoS capabilities that the article doesn't really talk much
about, capabilities that you'd have to build on top of ZeroMQ if you
want them.
Ack.
Martin
Don Busch
2010-06-24 20:38:02 UTC
Permalink
Martin,
Post by Martin Sustrik
I see. That could mean that there's no Windows platform performance
regression as I thought, just that the computer was slow.
I'll check on my box. Just to be sure: it was run on a single box, and
10.201.200.72 resolved to the TCP loopback interface, right?
Yes, it did. Thanks for the feedback!

Regards,

Don
Martin Sustrik
2010-06-25 07:55:50 UTC
Permalink
Don,
Post by Don Busch
Post by Martin Sustrik
I see. That could mean that there's no Windows platform performance
regression as I thought, just that the computer was slow.
I'll check on my box. Just to be sure: it was run on a single box, and
10.201.200.72 resolved to the TCP loopback interface, right?
Yes, it did. Thanks for the feedback!
OK. I've run a basic latency test on a Windows box via TCP loopback (1000
1000-byte messages, as in your test).

For 0MQ/2.0.7 the latency was ~85us; for 0MQ/2.0.6 it was ~65us.
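
For reference, here is a minimal sketch of this kind of round-trip test,
in the spirit of the local_lat/remote_lat perf tests shipped with 0MQ. It
is written against the current libzmq C API (the 2.0-era
zmq_init()/zmq_send() signatures differed) and assumes an echoing ZMQ_REP
peer is bound at the same endpoint:

#include <zmq.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main ()
{
    const int roundtrips = 1000;
    const size_t msg_size = 1000;        // 1000-byte messages, as in the test

    void *ctx = zmq_ctx_new ();
    void *req = zmq_socket (ctx, ZMQ_REQ);
    zmq_connect (req, "tcp://127.0.0.1:5555");   // TCP loopback

    std::vector<char> buf (msg_size, 'x');
    auto start = std::chrono::steady_clock::now ();
    for (int i = 0; i != roundtrips; i++) {
        zmq_send (req, buf.data (), buf.size (), 0);   // ping
        zmq_recv (req, buf.data (), buf.size (), 0);   // pong
    }
    auto us = std::chrono::duration_cast<std::chrono::microseconds> (
        std::chrono::steady_clock::now () - start).count ();

    // One-way latency is half the average round-trip time.
    printf ("average latency: %.1f us\n", us / (2.0 * roundtrips));

    zmq_close (req);
    zmq_ctx_term (ctx);
    return 0;
}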

This regression results from removing some kernel-bypass functionality
(namely lock-free polling) in exchange for more functionality (namely
allowing more than 63 threads to use 0MQ sockets).

The severity of the regression depends on the efficiency of the underlying
kernel. On Linux it's almost negligible. On Windows... well, the best
solution would be to optimise the kernel code, but once again... it's
Windows :(

Anyway, this was just the standard latency test supplied with 0MQ. Later
on I'll try to run your test to see whether it shows some other
regression.

Martin
Peter Alexander
2010-06-25 08:25:21 UTC
Permalink
Hi Martin,
Post by Martin Sustrik
This regression results from removing some kernel-bypass functionality
(namely lock-free polling) in exchange for more functionality (namely
allowing more than 63 threads to use 0MQ sockets).
Just out of curiosity: if a regression is involved for an offhand use
case (more than 63 threads), why not make this an optional configuration
flag at compile time?

But I'm glad to hear that Linux kernels make this concern negligible.

thanks.. ~Peter
Martin Sustrik
2010-06-25 09:06:03 UTC
Permalink
Peter,
Post by Peter Alexander
Post by Martin Sustrik
This regression results from removing some kernel-bypass functionality
(namely lock-free polling) in exchange for more functionality (namely
allowing more than 63 threads to use 0MQ sockets).
Just out of curiosity: if a regression is involved for an offhand use
case (more than 63 threads), why not make this an optional configuration
flag at compile time?
The change cuts through most of the 0MQ codebase. Thus you would end up
maintaining what are virtually two separate codebases.

Moreover, lock-free polling doesn't provide enough functionality to
implement zmq_poll, so this function would have to be disabled in the
"optimised" branch.

Also, you would have to specify the number of threads you are going to use
0MQ from in advance (this has API implications).

Finally, the work on migrating 0MQ sockets between OS threads that's
going on now wouldn't be possible with lock-free polling.

All in all, if someone feels that maintaining a highly optimised but
less functional version of 0MQ for Windows is worth the effort, just
go on!

Martin
Peter Alexander
2010-06-25 09:23:13 UTC
Permalink
As I mentally absorb the 0MQ source code, this type of information is
very useful.

Thank you for such a detailed explanation. :)
Pieter Hintjens
2010-06-25 09:27:53 UTC
Permalink
Post by Martin Sustrik
The change cuts through most of the 0MQ codebase. Thus you would end up
maintaining what are virtually two separate codebases.
Is there something special about the number 63?
Post by Martin Sustrik
Also, you would have to specify the number of threads you are going to use
0MQ from in advance (this has API implications).
It still strikes me as odd that the context is configured at creation.
It seems a fragile API choice. Would it make sense to add a
setcontextopt()/getcontextopt() method pair, so that the number of I/O
threads, and perhaps non-portable limits like this, could be set on a
newly created context?
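
Something like the following, say (a purely hypothetical sketch, not an
existing 0MQ API; the option name is made up):

/* Hypothetical context-option pair, mirroring zmq_setsockopt () --
   not an existing 0MQ API; option names are made up. */
int zmq_setcontextopt (void *context, int option,
    const void *optval, size_t optvallen);
int zmq_getcontextopt (void *context, int option,
    void *optval, size_t *optvallen);

/* Usage sketch:
     int io_threads = 2;
     zmq_setcontextopt (ctx, ZMQ_CTX_IO_THREADS,
         &io_threads, sizeof io_threads);                */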

-Pieter
Martin Sustrik
2010-06-25 09:43:18 UTC
Permalink
Post by Pieter Hintjens
Post by Martin Sustrik
The change cuts through most of the 0MQ codebase. Thus you would end up
maintaining what are virtually two separate codebases.
Is there something special about the number 63?
Lock-free algorithms work on words. A word on a 64-bit CPU has 64 bits.
One bit is used by the algorithm itself, which leaves 63 bits for signals
from different threads.
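
A toy sketch of the idea (illustrative only, in modern C++ for brevity;
not 0MQ's actual implementation):

#include <atomic>
#include <cstdint>

// One 64-bit word as a signaling bitmask: bits 0-62 identify up to
// 63 sender threads, and bit 63 is reserved for the algorithm itself
// (here, a "receiver is waiting" flag).
class signaler_t
{
public:
    // Sender thread 'id' (0..62) raises its bit; returns true if the
    // receiver was waiting and should be woken up.
    bool signal (unsigned id)
    {
        uint64_t old = word.fetch_or (uint64_t (1) << id,
            std::memory_order_release);
        return (old & waiting) != 0;
    }

    // Receiver atomically grabs and clears all pending signals.
    uint64_t check ()
    {
        return word.exchange (0, std::memory_order_acquire) & ~waiting;
    }

private:
    static constexpr uint64_t waiting = uint64_t (1) << 63;
    std::atomic<uint64_t> word {0};
};
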
Post by Pieter Hintjens
Post by Martin Sustrik
Also, you would have to specify the number of threads you are going to use
0MQ from in advance (this has API implications).
It still strikes me as odd that the context is configured at creation.
It seems a fragile API choice. Would it make sense to add a
setcontextopt()/getcontextopt() method pair, so that the number of I/O
threads, and perhaps non-portable limits like this, could be set on a
newly created context?
Thread-pool resizing? Doable, but is it worth the effort? Migrating
existing I/O objects to another I/O thread when the pool size is decreased
would be pretty painful. Most people use a thread pool of size 1 anyway.

Martin
Pieter Hintjens
2010-06-25 09:49:03 UTC
Permalink
Post by Martin Sustrik
Lock-free algorithms work on words. A word on a 64-bit CPU has 64 bits.
One bit is used by the algorithm itself, which leaves 63 bits for signals
from different threads.
Right.
Post by Martin Sustrik
Thread-pool resizing? Doable, but is it worth the effort? Migrating
existing I/O objects to another I/O thread when the pool size is decreased
would be pretty painful. Most people use a thread pool of size 1 anyway.
I'd do it as for sockets: allow configuration only on a virgin object.
Brute force is to destroy the old context and create a new one with the new
properties. The real win is to remove that hard-coded set of options
from the API, allowing expansion in the future. It also seems wrong to
have to specify the sane default (1 I/O thread) explicitly in every app.

-Pieter
Martin Sustrik
2010-06-25 09:54:22 UTC
Permalink
Post by Pieter Hintjens
I'd do it as for sockets: allow configuration only on a virgin object.
Brute force is to destroy the old context and create a new one with the new
properties. The real win is to remove that hard-coded set of options
from the API, allowing expansion in the future. It also seems wrong to
have to specify the sane default (1 I/O thread) explicitly in every app.
An easy solution would be to state in the API guidelines document that
the io_threads parameter should have a default value of 1.

Martin
Pieter Hintjens
2010-06-25 07:13:36 UTC
Permalink
Post by Don Busch
I wrote the paper. If you'd like to run the tests on a faster box with an RT
kernel, that would be great. That's why the source code is attached, and the
paper encouraged users to build the code and run the tests themselves.
Hi Don, Malcolm,

I have to admit being slightly skeptical at first when reading your
paper, but you've done a really good job of comparing the three
products IMO. It certainly helps position OpenDDS in this growing
space. Have you considered that OpenDDS might profitably run over 0MQ
in the future?

It's rare to see vendors make open and reproducible benchmarks with
source code; kudos for that. Some time ago we started sketching a
standard for benchmarks (http://wiki.amqp.org/spec:5); this might be
worth exhuming and fleshing out.

However, I suspect users will want to compare more than just performance.

-
Pieter Hintjens
iMatix
Don Busch
2010-06-25 21:19:02 UTC
Permalink
Post by Pieter Hintjens
I have to admit being slightly skeptical at first when reading your
paper, but you've done a really good job of comparing the three
products IMO. It certainly helps position OpenDDS in this growing
space. Have you considered that OpenDDS might profitably run over 0MQ
in the future?
Thanks for the kind words.

As I was looking at 0MQ, it did cross my mind that it might work as another
pluggable transport underneath the OpenDDS transport framework. I haven't
thought it through, though, so I'm not yet completely sure how the pieces fit
together. But it's definitely been in the back of my head.
Regards,

Don Busch
Object Computing, Inc.
Martin Sustrik
2010-06-26 07:05:30 UTC
Permalink
Post by Don Busch
As I was looking at 0MQ, it did cross my mind that it might work as another
pluggable transport underneath the OpenDDS transport framework. I haven't
thought it through, though, so I'm not yet completely sure how the pieces fit
together. But it's definitely been in the back of my head.
It could possibly make sense. 0MQ was never intended to be a full-fledged
messaging middleware, but rather a low-level layer in the stack providing
primitives for building one. I'm not sure how that would play with OMG's DDS
standard (I suppose OpenDDS is focused on implementing DDS).

Martin
