Bugzilla – Bug 1284
./test.py performance tests do not exist
Last modified: 2012-09-10 20:17:09 UTC
'./test.py -c performance' crashes because we have no performance tests. I would like to start adding these; this tracker issue is just to discuss enabling some initial ones (such as creating a large number of packets, or calling "GetValue()" on a RandomVariable a large number of times).

For starters, I suggest that the performance tests should be explicitly enabled by users, with either:

  ./test.py -c performance

or:

  ./test.py --constrain=performance

but not if the user just types './test.py'.

Here is our current basic test output:

  PASS: TestSuite global-value

When doing performance tests, it may make sense to add a timing field, expressed as (hh:mm:ss.ss), so we might see something like:

  PASS (00:00:10.23): TestSuite global-value-performance

For that matter, it may make sense to add the timing field to all of our tests so that performance is not a special case.
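The proposed (hh:mm:ss.ss) timing field could be produced by a small helper. This is a minimal sketch, not the actual test.py code; `format_elapsed` and `report` are hypothetical names, and it assumes the elapsed time arrives as a float in seconds:

```python
def format_elapsed(seconds):
    """Format an elapsed time in seconds as hh:mm:ss.ss."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return "%02d:%02d:%05.2f" % (hours, minutes, secs)

def report(suite_name, passed, elapsed):
    """Print one result line with the proposed timing field, e.g.
    PASS (00:00:10.23): TestSuite global-value-performance"""
    status = "PASS" if passed else "FAIL"
    print("%s (%s): TestSuite %s" % (status, format_elapsed(elapsed), suite_name))
```

With this helper, `report("global-value-performance", True, 10.23)` would print the example line shown above.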
All of this information is already in the system, because I see that Craig has output this to XML:

  <Test>
    <Name>watchdog</Name>
    <Result>PASS</Result>
    <Time real="0.000" user="0.000" system="0.000"/>
    <Test>
      <Name>Check that we can keepalive a watchdog</Name>
      <Result>PASS</Result>
      <Time real="0.000" user="0.000" system="0.000"/>
    </Test>
  </Test>

So I guess it boils down to:

1) make ./test.py -c performance not crash by adding a performance test
2) decide whether to expose the timing information to the simple stdout output
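For reference, the real/user/system split seen in the XML can be gathered in Python when the suite runs as a child process. This is a sketch under that assumption; `run_and_time` is a hypothetical helper, not code from test.py:

```python
import os
import subprocess
import time

def run_and_time(cmd):
    """Run a command and return (real, user, system) seconds, mirroring
    the <Time real= user= system=> fields in the XML output."""
    wall_start = time.time()
    before = os.times()   # os.times() includes accumulated child CPU times
    subprocess.call(cmd)
    after = os.times()
    real = time.time() - wall_start
    user = after.children_user - before.children_user
    system = after.children_system - before.children_system
    return real, user, system
```

Exposing this on stdout (item 2 above) would then just be a matter of formatting the returned tuple into the result line.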
Created attachment 1264 [details]
Patch for bug 1284

I am attaching a patch that makes test.py's output include the elapsed time in seconds, like this:

  PASS (0.104): TestSuite traced-callback
  PASS (0.104): TestSuite type-traits
  PASS (0.095): TestSuite time
  PASS (0.102): TestSuite timer
  PASS (0.095): TestSuite simulator
  PASS (0.093): TestSuite sample
  PASS (0.092): TestSuite ptr
  PASS (0.167): TestSuite basic-random-number
  ...
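Output in this shape is also easy to consume downstream (e.g. to sort suites by elapsed time). A hypothetical parser sketch; the regex and `parse_result_line` are assumptions for illustration, not part of the patch:

```python
import re

# Matches lines like: PASS (0.104): TestSuite traced-callback
LINE_RE = re.compile(r"^(PASS|FAIL) \(([\d.]+)\): TestSuite (\S+)$")

def parse_result_line(line):
    """Return (status, elapsed_seconds, suite_name), or None if the
    line is not a timed result line."""
    m = LINE_RE.match(line.strip())
    if not m:
        return None
    status, elapsed, name = m.groups()
    return status, float(elapsed), name
```

For example, `parse_result_line("PASS (0.104): TestSuite traced-callback")` yields `("PASS", 0.104, "traced-callback")`.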
Should performance tests be examples with main functions or should they be test suites?
(In reply to comment #3)
> Should performance tests be examples with main functions or should they be
> test suites?

Can they easily be both? i.e., can examples-to-run.py allow categorization of examples as either performance or not?
All of the current categories of tests run by test.py are test suites of these types:

  core_kinds = ["bvt", "core", "system", "unit"]

To be consistent, and to minimize the work of modifying all of the examples-to-run.py files, I am proposing to add a performance category:

  core_kinds = ["bvt", "core", "system", "unit", "performance"]

That would allow this to work:

  TestSuite ("many-random-variables-test-suite", PERFORMANCE)
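How the extra category would interact with suite selection can be sketched in Python. This is an illustrative model only (`select_suites` and the sample suite list are assumptions, not test.py code), under the proposal's assumption that performance suites are skipped unless explicitly requested:

```python
core_kinds = ["bvt", "core", "system", "unit", "performance"]

# Hypothetical registry of (suite name, kind) pairs.
suites = [
    ("global-value", "unit"),
    ("many-random-variables-test-suite", "performance"),
]

def select_suites(suites, constraint=None):
    """Return the suites to run. With no --constrain option, skip
    performance suites so a plain './test.py' does not run them;
    with a constraint, run only suites of that kind."""
    if constraint is None:
        return [(n, k) for (n, k) in suites if k != "performance"]
    return [(n, k) for (n, k) in suites if k == constraint]
```

Under this model, `./test.py` would run only `global-value`, while `./test.py -c performance` would run only the performance suite.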
(In reply to comment #5)
> All of the current categories of tests run by test.py are test suites of
> these types:
>
>   core_kinds = ["bvt", "core", "system", "unit"]
>
> To be consistent and to make less work modifying all of the
> examples-to-run.py files, I am proposing to add a performance category:
>
>   core_kinds = ["bvt", "core", "system", "unit", "performance"]
>
> That would allow this to work:
>
>   TestSuite ("many-random-variables-test-suite", PERFORMANCE)

Agreed.
Bug closed. ns-3-dev changeset: 7bfaded450be