Coding for Performance

C/AL, the Microsoft Dynamics NAV programming language, is very flexible. There are many ways to do the same thing, but not all of them are created equal. This typically doesn’t matter for smaller customers, where there simply isn’t the transaction volume to produce a noticeable difference, so some developers may not see the need to take code execution performance into account.
As customers grow, these processes begin to break down, and the time it takes to execute each line of code can have a serious impact on system performance. Customers might begin to think Microsoft Dynamics NAV cannot handle their growing business, when nothing could be further from the truth. I don’t want your customers to be upset because they had to close for the month and the Adjust Cost routine showed no sign of finishing.
So let’s take a look at a few small things we can all do right away to keep this from happening.
Filtering Record Sets
GLEntry.SETRANGE("Entry No.", 1);   // 2.0407 ms
GLEntry.SETFILTER("Entry No.", '1'); // 3.2125 ms

These two commands achieve exactly the same result. For the non-developers out there, SETRANGE creates a simple equality or range filter on the records in a table, while SETFILTER allows more complex filter expressions using operators such as "not equals".
As you can see, there is a 1.2 ms (57%) increase in execution time between the two statements. That’s about 490 statements per second for SETRANGE, but only about 311 per second for SETFILTER. Now I know many of you are saying "Who cares about milliseconds?", but let’s extend this very basic example. What if you were filtering on five fields? That’s 10 ms vs. 16 ms. What if you need to execute that same routine 1,000 times? That’s 10 seconds vs. 16 seconds. 1,000,000 times? That’s 2.7 hours vs. 4.4 hours, or 1.7 hours of extra time to get the same result.
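As a rough sketch of the rule of thumb above (the filters, dates, and variable names here are illustrative, not from the original post): use SETRANGE wherever a simple equality or range filter will do, and reserve SETFILTER for expressions SETRANGE cannot build, such as "not equals".

```
// Prefer SETRANGE for simple equality and range filters - it is the cheaper call.
GLEntry.SETRANGE("Posting Date", StartDate, EndDate);
GLEntry.SETRANGE("G/L Account No.", AccountNo);

// Reserve SETFILTER for expressions SETRANGE cannot build,
// such as a "not equals" (<>) filter.
GLEntry.SETFILTER("Document Type", '<>%1', GLEntry."Document Type"::Payment);
```

The point is not that SETFILTER is bad; it is that paying its extra cost only makes sense when you actually need the richer filter syntax.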
Finding Records

GLEntry.FINDSET;   // 125 ms
GLEntry.FINDFIRST; // 141 ms
GLEntry.FINDLAST;  // 172 ms

Again, these statements all retrieve records from the same table, but each one should only be used in certain scenarios. FINDSET is what should be used when looping through a record set. The others should be used when you only need to retrieve a single record from the database, such as when you want to know the last entry number used in a ledger entry table.
The times above represent retrieving 10,000 uncached records from the database. Remember, this is just reading the records; we are not doing anything with them. There are no changes to values, no calculations, no writing back to the database. This is as basic as it gets, and you’re already looking at a 16 ms to 47 ms penalty for using the wrong command. Extend that out to 1,000,000 records and you start seeing some major time increases.
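The two usage patterns described above might be sketched as follows (variable names such as TotalAmount and NextEntryNo are illustrative, not from the original post):

```
// Looping: FINDSET signals that you intend to read the whole filtered set.
GLEntry.SETRANGE("Posting Date", StartDate, EndDate);
IF GLEntry.FINDSET THEN
  REPEAT
    TotalAmount := TotalAmount + GLEntry.Amount;
  UNTIL GLEntry.NEXT = 0;

// Single record: FINDLAST fetches only the last row in the current sort order,
// e.g. to determine the next entry number before an insert.
IF GLEntry.FINDLAST THEN
  NextEntryNo := GLEntry."Entry No." + 1
ELSE
  NextEntryNo := 1;
```

Using FINDFIRST or FINDLAST inside a loop, or FINDSET when you only want one record, gives the server the wrong hint about how much data you plan to read, which is where the time differences above come from.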
These simple changes can save hours and hours of execution time in the long run, and they take no extra effort during development.