Bugzilla – Full Text Bug Listing
| Summary: | Calculate[Bytes, Bits]TxTime should avoid double to time conversions when possible | | |
|---|---|---|---|
| Product: | ns-3 | Reporter: | Tommaso Pecorella <tommaso.pecorella> |
| Component: | network | Assignee: | Tommaso Pecorella <tommaso.pecorella> |
| Status: | RESOLVED WONTFIX | | |
| Severity: | enhancement | CC: | tomh |
| Priority: | P5 | | |
| Version: | ns-3-dev | | |
| Hardware: | All | | |
| OS: | All | | |
| Attachments: | patch, true patch | | |
Description
Tommaso Pecorella 2014-09-05 16:36:51 UTC
Created attachment 1871 [details]
patch
Speaking of...
```cpp
/**
 * \brief Calculate transmission time
 *
 * Calculates the transmission time at this data rate
 * \param bytes The number of bytes (not bits) for which to calculate
 * \return The transmission time for the number of bytes specified
 */
Time CalculateTxTime (uint32_t bytes) const;
```
There's no way to ask for the Tx time of a given number of bits. Not every "packet" is a multiple of 8 bits; especially at L2, it is common to have fields that are not a multiple of an octet.
The function should be marked as deprecated, and replaced with:
- `Time CalculateBytesTxTime (uint32_t bytes) const;`
- `Time CalculateBitsTxTime (uint32_t bits) const;`
Created attachment 1872 [details]
true patch
Added two functions, deprecated the old one.
I agree with the proposed patch and API change.
Regarding this:
```cpp
// \todo avoid to use double (if possible)
return Seconds (static_cast<double> (bytes) * 8 / m_bps);
```
m_bps will typically be on the order of 10^6, plus or minus a few orders of magnitude, while the number of bits or bytes typically ranges from small values up to packet size. So, if we scale up the bits or bytes first, we may be able to perform exact integer division in many cases.
Something like this might help:
```cpp
uint64_t scaledBytes = bytes * 8000000000ULL;
if ((scaledBytes % m_bps) == 0)
  {
    // avoid integer to double conversion when possible
    return NanoSeconds (scaledBytes / m_bps);
  }
else
  {
    return Seconds (static_cast<double> (bytes) * 8 / m_bps);
  }
```
but I'd suggest profiling it on our test suite (to see what the usage pattern is and whether it helps) before committing this type of enhancement. So, I'd suggest committing the original patch, leaving this open in Patch Wanted, and requesting a second patch (with some profiling results) to avoid the conversions.
changeset 11226:d1185f77286f
Reopened with different scope (and name): avoid double conversion when possible
Comment on attachment 1872 [details]
true patch
This patch was committed. A new one is wanted to avoid double conversions (when possible).
I did a bit of thinking on the problem of avoiding the double conversion. The main issue is that "NanoSeconds" may not be available due to a different time resolution. A clean solution could be to store the data rate in reverse format (time per bit); however, this would have to be stored in the smallest available time resolution. This, in turn, is an even bigger problem, because the time resolution is not known until the simulation starts. In other words, the only reliable place to do the checks is inside Calculate[Bytes, Bits]TxTime, which would slow down the function. I'd mark this as wontfix, at least until someone shows evidence that these functions are a real bottleneck.