Bugzilla – Full Text Bug Listing

Summary: Allow the user to trade off the time precision with the maximum simulation length

| Field | Value | Field | Value |
|---|---|---|---|
| Product: | ns-3 | Reporter: | Emmanuelle Laprise <emmanuelle.laprise> |
| Component: | core | Assignee: | Emmanuelle Laprise <emmanuelle.laprise> |
| Status: | RESOLVED FIXED | CC: | emmanuelle.laprise, mathieu.lacage |
| Severity: | enhancement | Keywords: | feature-request |
| Priority: | P4 | Version: | pre-release |
| Hardware: | PC | OS: | Linux |

Attachments:
- This is a patch that allows the user to specify the precision of the simulation timestep
- A patch to fix the problems that were noted about the first patch
---

Description (Emmanuelle Laprise, 2007-05-04 21:30:20 UTC)
Created attachment 15 [details]: This is a patch that allows the user to specify the precision of the simulation timestep.

The attached file is a proposed way to allow the user to set the simulation timestep size. Except for time.cc and nstime.h, most of the changes to the other files consist only of renaming the functions that used to contain "ns" or "NanoSecond" to "ts" and "TimeStep". This is because the timestep is no longer necessarily specified in nanoseconds, so function and variable names referring to ns would have been confusing.

The changes in time.cc and nstime.h add a static variable (not sure if it is in the right place, though) that specifies the unit of the data stored in a Time object: m_tsPrecision. There are two static functions (not sure if they are in the right place either) that allow this variable to be set and retrieved (SetTsPrecision and GetTsPrecision). There is an enum, PrecisionType, that allows the user to use the abbreviations NS, US, MS, SEC, PS, and FS when setting the precision. But the actual type for the precision is ts_precision_t, an 8-bit number, which allows the user to set the precision to 10 ns or 100 ns, etc.

I've added a CheckTime function for the tests in time.cc. This makes sure that the results of Time calculations are within the specified precision of the selected timestep. The precision that can be obtained may actually be greater than the timestep in some cases, and I plan to look at that as part of bug 22. I will also add checks for overflow as part of bug 22; these are commented out because they are not quite complete. I've also added functions to check the conversion between the different units at some of the more common timestep units, and I've temporarily changed some of the test values due to the pending resolution of bug 22.

I hope that this code will be useful to you and that it will be possible to include it in ns3 in order to allow the user to specify the timestep unit/precision. Let me know if you wish me to make any modifications, or feel free to modify as you wish.

Emmanuelle
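To make the described interface concrete, here is a minimal sketch of what it might look like. The names (SetTsPrecision, GetTsPrecision, PrecisionType, ts_precision_t, m_tsPrecision) come from the description above, but the signatures and the encoding of the precision as a power of ten below one second are assumptions for illustration, not the patch's actual code:

```cpp
// Illustration only: names follow the bug description; the signatures and
// the precision encoding are assumptions, not the patch's code.
#include <stdint.h>
#include <iostream>

typedef uint8_t ts_precision_t;

class Time
{
public:
  // Abbreviations the description says can be used to set the precision.
  enum PrecisionType { SEC = 0, MS = 3, US = 6, NS = 9, PS = 12, FS = 15 };

  static void SetTsPrecision (ts_precision_t tsPrecision) { m_tsPrecision = tsPrecision; }
  static ts_precision_t GetTsPrecision (void) { return m_tsPrecision; }

private:
  // Unit of the values stored in Time objects, as a power of ten below
  // one second; the thread says the default precision is one nanosecond.
  static ts_precision_t m_tsPrecision;
};

ts_precision_t Time::m_tsPrecision = Time::NS;

int main (void)
{
  // Trade precision for a longer maximum simulation length.
  Time::SetTsPrecision (Time::US);
  std::cout << "precision: 1e-" << (int) Time::GetTsPrecision () << " s" << std::endl;
  return 0;
}
```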
---

hi emmanuelle,

1) int32_t et al. types are defined in the system header stdint.h, i.e., #include <stdint.h> should work to get them defined. You do not need cairo-wideint-xxxx.h.

2) In nstime.h, I do not think we need to provide doxygen documentation for the TimeStep function, since it is really expected to be used from simulator.cc.

3) In nstime.h, it would be nice to avoid exporting the m_tsPrecision and m_tsPrecisionFactor variables in a header. Could we export a setter and a getter instead?

4) Do we really need the ability to specify a "seconds-based" precision? The initial design of these interfaces made a commitment that we would never decrease the precision of the simulation, to ensure perfectly reproducible results. We might increase it, but the goal was to ensure that existing models would keep the exact same behavior as before. So, your work seems to be implicitly killing this initial commitment. Hence my question: do we really need the ability to decrease the precision from the default nanoseconds-based precision?

5) In time.cc, I am somewhat worried about the code in the ::GetXXSeconds and the XXXSeconds methods. These methods now use doubles where they had been very carefully written not to use any: I fear mostly precision problems, but I am concerned about performance too. So, there are two questions:

- Do you think you could try to benchmark these functions and compare them to the previous functions?
- Do you think that this conversion to double is not going to lose any precision? I.e., if I call NanoSeconds (1) and the precision is nanoseconds, is the output going to be stable? Could I not risk getting 0 or 2 depending on the floating point rounding mode?

Part of the rationale for trying to use integers is that various platforms have slightly different behaviors with regard to double rounding et al. So, generally, I am really worried when I see a conversion back and forth to double just for some arithmetic. Could we rewrite these functions to use only integer arithmetic? If so, what would be the impact on performance?

6) Despite my comments and questions, that patch is awesome! Thanks a lot.

---

Thanks for your comments. I have made most of the changes, which I'll include in a new patch.

1) Changed the include to <stdint.h>.

2) Removed the doxygen comment in nstime.h and the ones in time.cc.

3) Done. The variable was moved to the .cc file. The implementation of the Set/Get functions was placed in the .cc file, with only the function declaration in the .h file.

4) In this case, you let the user decide by letting him call the global command SetTsPrecision(). (It might be better as a compilation switch or set via an environment variable; I am not sure, suggestions welcome.) If the function is not called, the default precision is 1 nanosecond. Once you allow the user to increase the precision, you get the ability to reduce it for free. A user will need to consciously change the setting to reduce the precision, and will hopefully figure out the effect before making the call. We could forbid reducing the precision, but then you lose the tradeoff of longer simulations for less precision. In some simulators, less precision decreases the simulation run time, but I don't think that this is the case with how this simulator is implemented, though it could have an impact with some scheduling algorithms, I think.

Also, I am not convinced that increasing the precision has no impact on the reproducibility of simulation results. For example, suppose the user had scheduled events at a: 1575 ns, b: 1550 ns, c: 1500 ns, and d: 1525 ns. With a 1 us precision, they all land on the same timestep and may execute in the order in which they were received: a, b, c, d. With a 1 ns precision, they are scheduled in a different order: c, d, b, a. Could this impact the simulation results?

5a) I have to think about this some more, but I have one question first: what makes the GetXXXSeconds() functions use doubles? Is it because I am dividing by a double, or does just using division do it? I will be happy to remove them if I can figure out how they get there. (GetSeconds() directly uses double, but that's because it did before.) I'll add this in the new patch once I figure out what to do about the doubles.

Emmanuelle
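The reordering example is easy to check numerically. A self-contained sketch (not code from the patch) that truncates the four event times to timesteps at both precisions:

```cpp
// Illustration of the reordering example above; not code from the patch.
#include <stdint.h>
#include <iostream>

int main (void)
{
  const char *names[] = { "a", "b", "c", "d" };
  uint64_t eventNs[] = { 1575, 1550, 1500, 1525 };

  for (int i = 0; i < 4; i++)
    {
      // At 1 us precision the times truncate to the same timestep, so the
      // scheduler may fall back to insertion order: a, b, c, d.
      uint64_t usStep = eventNs[i] / 1000;
      // At 1 ns precision each event keeps a distinct timestep, so they
      // execute in time order: c, d, b, a.
      std::cout << names[i] << ": " << eventNs[i] << " ns -> "
                << usStep << " (1 us steps), "
                << eventNs[i] << " (1 ns steps)" << std::endl;
    }
  return 0;
}
```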
---

> 3) Done. The variable was moved to the .cc file. The implementation of the
> Set/Get functions was placed in the .cc file, with only the function
> declaration in the .h file.

You probably also want to export this variable with the DefaultValue mechanism, potentially using the EnumDefaultValue object (see src/core/default-value.h).

> 4) In this case, you let the user decide by letting him call the global
> command SetTsPrecision(). [...] Also, I am not convinced that increasing the
> precision has no impact on the reproducibility of simulation results. [...]
> With a 1 ns precision, they are scheduled in a different order: c, d, b, a.
> Could this impact the simulation results?

This is a very good point. It makes a lot of sense.

> 5a) I have to think about this some more, but I have one question first:
> what makes the GetXXXSeconds() functions use doubles? Is it because I am
> dividing by a double, or does just using division do it? [...]

To perform the division by a double, the compiler first converts the integer to a double, performs the division, and then converts the double back to an integer. I have not given much thought to this issue, but I suspect that part of the solution might be to write something like the following:

```cpp
Time MicroSeconds (uint64_t us)
{
  switch (precision) {
  case S: {
    uint64_t divider = 1000000;
    uint64_t ts = us / divider;
    return TimeStep (ts);
  } break;
  case MS: {
    uint64_t divider = 1000;
    uint64_t ts = us / divider;
    return TimeStep (ts);
  } break;
  case US: {
    return TimeStep (us);
  } break;
  case NS: {
    uint64_t multiplier = 1000;
    uint64_t ts = us * multiplier;
    return TimeStep (ts);
  } break;
  ...
  }
}
```

Wholly untested, but I think it should give you an idea of what can be done to avoid the conversion to double.
---

Created attachment 16 [details]: A patch to fix the problems that were noted about the first patch.

This patch must be applied on top of units.patch. It hopefully fixes the problems that were found with the original version of the patch, most significantly by removing the conversion to and from double.
---

Note about unit2.patch: I have defined the constants NS, MS, PS, etc., but these might need to be renamed. I have also not yet added the DefaultValue capability; I'll try to figure that out soon. Let me know if I should change the constant names.

Emmanuelle

---

Ok, I have created a new patch (that needs to be applied on top of the first patch) that I think removes the conversion to double. I have done it slightly differently than what you suggested, but the result is similar. The other changes that I mentioned earlier are also included. I have created the following two functions:

```cpp
int64_t
TimeUnit<1>::ConvertToUnits (int64_t timeValue, uint64_t unitFactor) const
{
  uint64_t precFactor;
  // In order to avoid conversion to double, precFactor can't be less than 1
  if (tsPrecFactor < unitFactor)
    {
      precFactor = unitFactor / tsPrecFactor;
      timeValue = timeValue * precFactor;
    }
  else
    {
      precFactor = tsPrecFactor / unitFactor;
      timeValue = timeValue / precFactor;
    }
  return timeValue;
}
```

and

```cpp
uint64_t
TimeUnit<1>::UnitsToTimestep (uint64_t unitValue, uint64_t unitFactor)
{
  uint64_t precFactor;
  // In order to avoid conversion to double, precFactor can't be less than 1
  if (tsPrecFactor < unitFactor)
    {
      precFactor = unitFactor / tsPrecFactor;
      unitValue = unitValue / precFactor;
    }
  else
    {
      precFactor = tsPrecFactor / unitFactor;
      unitValue = unitValue * precFactor;
    }
  return unitValue;
}
```

I've also made a couple of other changes to remove the need to use the pow function all the time, because I thought that it might be slow. Let me know what you think.

Thanks, Emmanuelle

---

Applied patches
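The thread ends before showing how the pow calls were removed. One plausible way to do it, sketched here purely as an assumption (the names PowerOfTen and g_powersOfTen are hypothetical, not from the patch), is to precompute the powers of ten once:

```cpp
// Sketch of one way to avoid repeated pow () calls; this is an assumed
// implementation of the optimization mentioned above, not the patch's code.
#include <stdint.h>
#include <iostream>

// Powers of ten from 10^0 to 10^15 (the femtosecond range); the largest
// value still fits comfortably in 64 bits.
static const uint64_t g_powersOfTen[] = {
  1ULL, 10ULL, 100ULL, 1000ULL, 10000ULL, 100000ULL,
  1000000ULL, 10000000ULL, 100000000ULL, 1000000000ULL,
  10000000000ULL, 100000000000ULL, 1000000000000ULL,
  10000000000000ULL, 100000000000000ULL, 1000000000000000ULL
};

// Table lookup in place of pow (10, exponent); the caller must keep the
// exponent within 0..15.
static inline uint64_t PowerOfTen (uint8_t exponent)
{
  return g_powersOfTen[exponent];
}

int main (void)
{
  // For example, the factor between nanoseconds and femtoseconds:
  std::cout << PowerOfTen (6) << std::endl; // prints 1000000
  return 0;
}
```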