performance test-time between the sending and the reception


I have been working with CoreDX DDS for two weeks, and I have some questions about latency performance.

As a first step, I am working with the hello_c example.
I would like to know the time between the "NameDataWriter_write" and "NameDataReader_take" calls, to measure the time the middleware spends communicating.

The source code is unchanged, so the listener is asynchronous, the publisher sends one sample each second, and the QoS is the default.

I measure the time with two methods:
--> the first one uses the "gettimeofday()" function. The publisher timestamp is taken just before sending the data, and the subscriber timestamp is taken when the code enters the listener callback.
For the moment the publisher and the subscriber are on the same machine.
The average time is more or less 10ms.

--> the second method uses the "reception_timestamp" and "source_timestamp" fields of the "DDS_SampleInfo" structure, available on the subscriber side, inside the "if ( si->valid_data )" branch.
The measured time is between 60µs and 120µs (0.12ms).

For my first test, I don't understand why the time is so long.
The time measured with the timestamps is fine for me.
But if my first test is correct (algorithm, code), which function takes so long between the sending and the reception of the data?

Maybe you have already done these tests, on a local machine or between two remote machines. Do you have results to share?

Thanks a lot.



RE: performance test-time between the sending and the reception

Hello Guillaume,

First, I think some of your text was lost in the forum post. Perhaps you are trying to include some HTML or other mark-up that our web server didn't accept...

Regardless, I believe that I understand your general question: you are attempting to measure the time elapsed between performing a 'write' operation on a DataWriter and receiving the data at a DataReader.

The first important point is that the default DataReader QoS settings (in use in the hello_c example code) configure a 'latency_budget' of 10ms. This is consistent with the average time you measured in the first method. The 'latency_budget' is a powerful setting that allocates time to the middleware. The CoreDX DDS middleware uses this time budget to reduce overhead in several ways. First, the middleware can collect multiple data samples together for combined transmission on the wire - this reduces per-sample network overhead. Further, the middleware can potentially collect multiple samples together at the DataReader before notifying the application of the DATA_AVAILABLE condition. This helps reduce context swaps, and other overhead of data access.

The configuration of latency_budget is very application specific. Setting it to zero at both the DataReader and DataWriter will cause the middleware to make every effort to deliver data immediately, and will result in the lowest 'latency' of data. However, it may incur greater overhead and CPU utilization depending on your application behaviour.

I am not sure that I understand your description in the second method... can you repeat?

Also, you may be interested in looking at the example/latency_test source code. This example shows QoS policy configuration to obtain very low data latencies.

When we execute the latency_test under Linux, we measure latencies on the order of 40µs on a single machine, and 60µs over a 1Gbps network switch.

I hope this helps!


Thanks a lot, I fixed the

Thanks a lot,

I set the latency_budget to 0, and now the average time of the first test is more or less the same as with the second method.

The second method uses the timestamps in the DDS_SampleInfo structure on the subscriber side. There are two fields that give the time at the source and at the reader:

DDS_SampleInfo si;

My question is: where is this timestamp taken? In which function? And what is the accuracy of the time?

I have a new question: I am trying to modify the QoS to test this: I start the publisher and the subscriber. The topic contains a counter that increments by 1 each cycle, and the subscriber displays it, so we can see that every sample is received and there is no gap.
Now, I stop the subscriber. The publisher keeps running: samples are still sent. These samples are not received, but if the QoS is configured for it, they are buffered... and when the subscriber is restarted, I would like to read all the samples from the point where it stopped (if the buffer size allows it).

For the moment I changed the QoS of the subscriber and the publisher to this:


but the subscriber reads only the last sample; there is a gap in the counter.
Do you have an example to test this?
Or just a few hints?
I read the buffer as in the hello_c example; maybe that's not the right way...

I hope my question is clear.

Thanks a lot,


Re: Thanks a lot, I fixed the

I'm glad that helped.

The source timestamp is initialized during the call to DataWriter::write(). It is sent on the wire with the sample only if required (that is, only if QoS.destination_order.kind == BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS).

The reception timestamp is initialized at the moment the data sample is received by the DataReader.

In both cases, the precision of the timestamp depends on the precision of the host's clock_gettime() routine.

The accuracy of source_timestamp can vary if multiple data samples are aggregated together into one network packet. In that case, the timestamp will represent the 'oldest' sample contained in the group. The accuracy of the reception_timestamp is not impacted by this, and should always be very accurate.

Concerning 'durability' of data: I think the settings you present are correct with the exception of reliability. Reliability should be set to RELIABLE to enable transmission of historical data to late-joining readers.