There are two changes -
1. Encode txPort in Ttag packets and use it in TxTtagStats and
   RxPcapStats to identify Tx and Rx packets respectively
2. Don't use pcap_sendqueue_transmit() if stream timing is in use,
   since we can't modify Ttag packets inside that API
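A hedged sketch of that dispatch decision; transmitQueue() and the
Ttag-patching placeholder are illustrative assumptions, not the actual
Ostinato code (pcap_sendqueue_* is a WinPcap/Npcap API):

```cpp
#include <pcap.h>

// Sketch: when stream timing is on, send packets one at a time so the
// Ttag bytes can be rewritten per packet; otherwise hand the whole queue
// to pcap_sendqueue_transmit(), which doesn't let us touch packets mid-send.
void transmitQueue(pcap_t *handle, pcap_send_queue *queue,
                   bool streamTimingEnabled)
{
    if (!streamTimingEnabled) {
        pcap_sendqueue_transmit(handle, queue, /*sync=*/0);
        return;
    }
    // Each queue entry is a pcap_pkthdr followed by the packet bytes
    char *pos = queue->buffer;
    char *end = queue->buffer + queue->len;
    while (pos < end) {
        struct pcap_pkthdr *hdr = (struct pcap_pkthdr *) pos;
        u_char *pkt = (u_char *) (pos + sizeof(*hdr));
        // ... patch Ttag fields and checksum in pkt here ...
        pcap_sendpacket(handle, pkt, (int) hdr->caplen);
        pos += sizeof(*hdr) + hdr->caplen;
    }
}
```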
In a previous commit, we start/stop these timers based on the number of
ports tracking stream stats, as triggered by RPCs. However, timers cannot
be started/stopped across threads (the RPC thread and the main thread in
this case). This fix uses a queued connection to post the start/stop to
the other thread's event queue.
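A minimal sketch of the queued-connection pattern, assuming hypothetical
class and method names (StatsTracker, startTracking()) rather than the
actual Ostinato ones:

```cpp
#include <QMetaObject>
#include <QObject>
#include <QTimer>

// A QTimer must be started/stopped from the thread it lives in
// (the thread that created its owner).
class StatsTracker : public QObject
{
public:
    void startTracking() { timer_.start(1000); } // runs in tracker's thread
    void stopTracking()  { timer_.stop(); }
private:
    QTimer timer_;
};

// Called from the RPC thread: Qt::QueuedConnection posts an event to the
// tracker's thread, whose event loop then invokes the method safely.
// (The pointer-to-member overload of invokeMethod needs Qt >= 5.10.)
void onRpcStartStats(StatsTracker *tracker)
{
    QMetaObject::invokeMethod(tracker, &StatsTracker::startTracking,
                              Qt::QueuedConnection);
}
```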
If and when we remove PacketSequence::ttagL4ChecksumOffset, we will decide
whether to revert to passing seq->sendQueue instead of seq at that time.
The stream timingHash is only read by getStreamStats(), whereas
processRecords() both reads and writes it. The latter is the more frequent
operation, so there's no real benefit to using a read-write lock instead
of a simple mutex (see the sketch below).
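A minimal sketch of that locking choice; the class and member names
(StreamTiming, timingHash_, mutex_) are assumed, not the actual ones:

```cpp
#include <QHash>
#include <QMutex>

class StreamTiming
{
public:
    void processRecords(uint guid, qint64 delta)   // frequent: read + write
    {
        QMutexLocker locker(&mutex_);
        timingHash_[guid] += delta;
    }
    qint64 getStreamStats(uint guid) const         // infrequent: read only
    {
        QMutexLocker locker(&mutex_);
        return timingHash_.value(guid);
    }
private:
    // A plain QMutex suffices; a QReadWriteLock wouldn't help since the
    // frequent path (processRecords) needs the write lock anyway.
    mutable QMutex mutex_;
    QHash<uint, qint64> timingHash_;
};
```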
PcapTxTtagStats inherits from PcapSession, which already includes a
protected handle_ member.
This removal was likely overlooked when PcapTxTtagStats started inheriting
from PcapSession.
The previous commit changed the algo that determines which packets are
Ttag'd, but those changes covered only interleaved mode.
This commit adds the changes required for sequential mode.
The algo works for the following cases of interleaved streams (see the
sketch after this list) -
* pktListDuration < ttagTimeInterval
* pktListDuration > ttagTimeInterval
* some streams have Ttag, some don't
- first stream has Ttag
- first stream does NOT have Ttag
* no streams have Ttag
Changes for sequential mode are pending
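A hedged sketch of the interval math behind these cases; ttagRepeatInterval()
and its parameters are illustrative assumptions, not the actual Ostinato
function:

```cpp
#include <cmath>

// Space Ttag'd packets roughly ttagTimeInterval apart, whether one pass
// over the packet list is shorter or longer than that interval.
// Returns how many passes over the packet list to make per Ttag.
int ttagRepeatInterval(double pktListDuration, double ttagTimeInterval)
{
    if (pktListDuration >= ttagTimeInterval)
        return 1;   // pktListDuration > ttagTimeInterval: tag every pass
    // pktListDuration < ttagTimeInterval: tag only every Nth pass
    return static_cast<int>(std::ceil(ttagTimeInterval / pktListDuration));
}
```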
Interleaved mode used an implicitly added packet set in both the base and
Turbo code. This has been changed to use an explicit packet set to keep
things consistent.
The Turbo code still has the implicit packet set related code - that needs
to be removed once the explicit packet set code is validated and tested.
This singleton class will keep track of Ttag timing across all ports and
GUIDs. A bunch of FIXMEs/TODOs are pending for this class implementation;
also, this class has not been hooked up to the rest of the code yet.
For now we just debug-print the timestamp with the TtagId and GUID. We
need to store this tuple and compare when we Rx the same - this will be
done in an upcoming commit. A sketch of the intended shape follows.
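A minimal sketch of such a singleton, under stated assumptions (the class,
method, and member names are hypothetical; only the timestamp/TtagId/GUID
tuple comes from the commit above):

```cpp
#include <QHash>
#include <QMutex>
#include <QPair>

// Hypothetical sketch, not the actual Ostinato class: a process-wide
// singleton recording Tx timestamps keyed by (guid, ttagId) so the Rx
// path can later look up the tuple and compare timestamps.
class TtagTimeTracker
{
public:
    static TtagTimeTracker* instance()
    {
        static TtagTimeTracker tracker;  // one instance across all ports
        return &tracker;
    }

    void recordTxTime(uint guid, uint ttagId, qint64 timestampNsec)
    {
        QMutexLocker locker(&mutex_);    // ports may call from many threads
        txTime_[qMakePair(guid, ttagId)] = timestampNsec;
    }

    qint64 txTime(uint guid, uint ttagId) const
    {
        QMutexLocker locker(&mutex_);
        return txTime_.value(qMakePair(guid, ttagId), -1);
    }

private:
    TtagTimeTracker() = default;
    mutable QMutex mutex_;
    QHash<QPair<uint, uint>, qint64> txTime_;
};
```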
As part of the Turbo changes, we started creating explicit packet sets,
but the base code continued to create implicit packet sets in some cases.
With this change, we no longer create any implicit packet sets.
This change needs to be tested thoroughly across multiple cases.
The problem happens for bidirectional flows. With the current code, the
sequence of events when you start Tx on ports p1 and p2 is -
1. Clear stream stats on p1
2. Start Tx on p1
3. Clear stream stats on p2
4. Start Tx on p2
By the time #3 executes, p2 may have already received packets from p1.
Those packets are wrongly cleared and consequently show up as dropped
instead.
The fix is to change the order like this (see the sketch after this
list) -
1. Clear stream stats on p1
2. Clear stream stats on p2
3. Start Tx on p1
4. Start Tx on p2
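A minimal sketch of that reordering, assuming hypothetical Port methods
(clearStreamStats(), startTransmit()) rather than Ostinato's actual API:

```cpp
#include <vector>

struct Port {
    void clearStreamStats();
    void startTransmit();
};

// Clear stats on ALL ports before starting Tx on any of them, so packets
// already in flight from an earlier-started port aren't wiped out.
void startTxAll(std::vector<Port*> &ports)
{
    for (Port *port : ports)       // pass 1: clear everywhere first
        port->clearStreamStats();
    for (Port *port : ports)       // pass 2: only now start transmit
        port->startTransmit();
}
```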
Unidirectional flows will not see this problem, as long as startTx is
done only on the Tx port and not the Rx port.
This bug is a regression caused by the code changes introduced for the
stream stats rates feature implemented in 1.2.0.