Bugzilla – Full Text Bug Listing
| Summary: | The auto tests in the TimeTests class are incomplete | | |
|---|---|---|---|
| Product: | ns-3 | Reporter: | Emmanuelle Laprise <emmanuelle.laprise> |
| Component: | samples | Assignee: | Emmanuelle Laprise <emmanuelle.laprise> |
| Status: | RESOLVED FIXED | | |
| Severity: | minor | CC: | emmanuelle.laprise, mathieu.lacage |
| Priority: | P5 | | |
| Version: | pre-release | | |
| Hardware: | PC | | |
| OS: | Linux | | |
Description
Emmanuelle Laprise
2007-05-09 12:23:15 UTC
(In reply to comment #0)

> The function TimeTests::RunTests is incomplete in the sense that it does some
> calculations but does not check each result. If it did, it would fail because
> the values used with the default precision of 1 ns would cause an overflow.
>
> For example, it does the following operation: (-1 sec * 10 sec) / 11 sec, which
> should give about -0.9 sec. This does not work (it gives +0.768 sec) because it
> causes an overflow when using a precision of 1 ns (which is the default):
> -1 sec = -10^9, 10 sec = 10^10; if you multiply them, it gives -10^19, whose
> magnitude is greater than the maximum number that can be represented with a
> signed 64-bit number.

If you try out this example, you will notice that the output is actually -0.9 and not what you expect, because Time objects are not 64-bit objects: they hold the nanosecond value in 64.64 fixed-point integers. When these Time objects are used by the simulation core, through a call to Simulator::Schedule, they are converted to a 64-bit nanosecond integer by discarding the 64 bits of sub-nanosecond precision. So the simulator uses a 64-bit integer to keep track of nanosecond-based time, but the Time objects which the user employs for simulation-time arithmetic keep 64 extra bits of precision to avoid the kind of scenario you describe here.

> I will modify the tests to use smaller numbers and add a check for each
> calculation, if that is ok.

More tests would be nice, but I don't think that the current tests need to be modified as you suggest.

After reading the ns-dev list, you are right: the tests should pass. You should get the extra precision when you multiply and then divide, but it is not working on my machine.
I get an overflow (I think), with the following result:

FAIL ops 2a Expected:-1 Actual: 0.844674 Precision: 1e-09
FAIL ops 2 Expected:-0.909091 Actual: 0.767886 Precision: 1e-09
FAIL ops 3 Expected:-0.909091 Actual: 0.767886 Precision: 1e-09

2a: t3 = t2 * t0 / t0
2:  t3 = t2 * t0 / t1
3:  t3 = t0 * t2 / t1

t0 is 10 sec, t1 is 11 sec, t2 is -1 sec.

I modified all of the tests to auto-check the answers, and this is how I discovered that some of them don't give the expected answers. I haven't figured out why they overflow, though. If t0 is changed to 10 ms, t1 to 11 ms and t2 to -1 ms, the tests pass no problem.

Here is the function that I am using to check the answers:

    void
    TimeTests::CheckTimeSec (std::string test_id, double actual, double expected,
                             bool *flag, double precMultFactor, bool verbose)
    {
      double prec = pow (10, -ns3::m_tsPrecision) * precMultFactor;
      if ((actual < (expected - prec)) || (actual > (expected + prec)))
        {
          std::cout << "FAIL " << test_id << " Expected:" << expected
                    << " Actual: " << actual << " Precision: " << prec
                    << std::endl;
          *flag = false;
        }
      else if (verbose)
        {
          std::cout << "PASS " << test_id << " Expected:" << expected
                    << " Actual: " << actual << " Precision: " << prec
                    << std::endl;
        }
    }

Wait, I think that I made a mistake and that it should overflow. If it is 64.64 fixed-point math, then this does not change the maximum value of the number, just its precision. The maximum number that you can represent is still about 2^63 - 1 (~9.2e18) if it is signed. You have more precision, but not a larger maximum number.