c++ - Measuring Runtime - Not Precise Enough
I got an assignment. The programming side should be relatively simple, but I can't seem to get usable results. We are to implement merge sort and radix sort, then test them on arrays of random numbers: ten trials each for n = 10, n = 100, n = 1000 and n = 10000, averaging the ten trials to get the average runtime. Sounds simple enough, but I can't get any results at all for n = 10, and the results for n = 100 and n = 1000 are not very precise; the most resolution I can get out of them is .0001.
The point of the assignment is to compare these runtimes against the theoretical runtimes, but we were never taught how to actually measure runtime, so all of us are left to figure it out from whatever Google turns up.
I have tried a few different approaches, and none of them produces precise enough results. The most recent attempt uses chrono and high_resolution_clock. Still no good; I even tried running it with my computer under a 100% load stress test, and it still doesn't work.
for (int cnt = 0; cnt < 10; cnt++) {
    populate(numbers, 10);
    high_resolution_clock::time_point t1 = high_resolution_clock::now();
    mergeSort(numbers, 0, numbers.size() - 1);
    high_resolution_clock::time_point t2 = high_resolution_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count();
    dur += duration;
}
cout << endl << "For n = 10, merge sort average runtime: " << dur / 10;
dur = 0;
Instead of a loop of 10 sorts on one array, use 10 arrays (even 10 arrays of 10,000 elements won't be a memory usage issue). Populate all 10 arrays with random numbers, then time the 10 calls to MergeSort() that sort the 10 arrays. That will help a bit, but it may not be enough, which is why I ran some tests with very large arrays:
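A minimal sketch of that approach, assuming signatures like the ones in the question; the populate() and mergeSort() bodies below are stand-ins of my own (mergeSort is replaced by std::sort), so substitute the real implementations:

#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Stand-in for the question's populate(): fill v with n pseudo-random values.
void populate(std::vector<uint64_t>& v, int n) {
    static std::mt19937_64 rng(12345);
    v.resize(n);
    for (auto& x : v) x = rng();
}

// Stand-in for the question's mergeSort(v, lo, hi); substitute the real one.
void mergeSort(std::vector<uint64_t>& v, int lo, int hi) {
    std::sort(v.begin() + lo, v.begin() + hi + 1);
}

int main() {
    using namespace std::chrono;
    const int trials = 10;
    const int n = 10000;

    // Populate all arrays before starting the timer.
    std::vector<std::vector<uint64_t>> arrays(trials);
    for (auto& a : arrays) populate(a, n);

    // Time one region covering all ten sorts, then divide by ten.
    auto t1 = high_resolution_clock::now();
    for (auto& a : arrays) mergeSort(a, 0, static_cast<int>(a.size()) - 1);
    auto t2 = high_resolution_clock::now();

    auto total_ns = duration_cast<nanoseconds>(t2 - t1).count();
    std::cout << "n = " << n << ", merge sort average runtime: "
              << total_ns / trials << " ns\n";
}

Timing one region around all ten sorts keeps the measured interval well above the clock's resolution, which is the problem the per-call timing in the question runs into.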
The times for sorting 4,194,304 64-bit elements of pseudo-random data:
radix sort: 203 ms
merge sort: 297 ms
Test environment: 64-bit mode, Windows XP64, Visual Studio 2005 compiler, Intel i7 2600 at 3.4 GHz, DP67BG motherboard, 4 GB RAM.
You could use more arrays, but if the number of arrays gets large enough, call overhead becomes a factor. Radix sort is linear, O(n), in time, whereas merge sort is O(n log2(n)). You can test large arrays, then use those formulas to estimate the times for small arrays.
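A sketch of that extrapolation, using my own arithmetic against the figures above (the target size n is just an example):

#include <cmath>
#include <cstdio>

int main() {
    // Measured on the large-array test above.
    const double big_n = 4194304.0;
    const double merge_ms = 297.0;   // O(n log2 n)
    const double radix_ms = 203.0;   // O(n)

    // Estimate for a smaller array size.
    const double n = 10000.0;
    double merge_est = merge_ms * (n * std::log2(n)) / (big_n * std::log2(big_n));
    double radix_est = radix_ms * n / big_n;

    std::printf("estimated merge sort time for n=%.0f: %.3f ms\n", n, merge_est);
    std::printf("estimated radix sort time for n=%.0f: %.3f ms\n", n, radix_est);
}

Roughly, this predicts sub-millisecond times for the array sizes in the assignment, which is why a coarse timer struggles to resolve them directly.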
For a higher-precision timer, you can read the CPU cycle count with the x86 RDTSC instruction, either in assembly or via an intrinsic if your compiler supports one. Since it is a cycle count, it is unaffected if the CPU has an independent overclocking mode such as Intel's Turbo Boost.
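A minimal sketch of reading the cycle counter through a compiler intrinsic, assuming MSVC's <intrin.h> or GCC/Clang's <x86intrin.h>, both of which provide __rdtsc(); std::sort stands in for whatever sort is being measured:

#ifdef _MSC_VER
#include <intrin.h>      // __rdtsc() on MSVC
#else
#include <x86intrin.h>   // __rdtsc() on GCC/Clang
#endif
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v(10000);
    for (int i = 0; i < static_cast<int>(v.size()); ++i)
        v[i] = static_cast<int>(v.size()) - i;   // reverse-ordered test data

    uint64_t start = __rdtsc();        // read the time-stamp counter
    std::sort(v.begin(), v.end());     // stand-in for the sort being measured
    uint64_t end = __rdtsc();

    std::cout << "elapsed cycles: " << (end - start) << "\n";
    // Divide by the TSC frequency (e.g. the nominal clock rate) to convert to seconds.
}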