
Mar 10

Overview of Code Refactoring Part II

It has been a few weeks since part I, but I wanted to get back to the topic, point out a few resources on the different techniques for refactoring code, and give you some assistance on where to get started. If you want to refresh your memory or haven’t had a chance to read part I, you can find it here.

What is Code Refactoring?

I already mentioned this in part I, but I wanted to take another stab at a definition of code refactoring in my own words: code refactoring is the activity of taking existing code and rewriting it so that it still provides the same functionality, but is more readable, easier to troubleshoot, more reusable, and less intrusive.

At least that’s what I would consider the basics. It comes down to the fact that you, as a developer, may need to change the way you have been developing and evolve with the changing technology. Or the project was done so long ago that, back then, there was no other way to develop some specific functionality, and it was then carried through the version upgrades unchanged. (That is also the main issue from part I: you should always do some code refactoring during upgrades, especially when it is the first upgrade you do for that client or the technology has changed – but the customer doesn’t want to pay for it.)

How to get started?

This really depends on the individual project, but I would suggest first identifying the areas or objects with the largest amount of changed code – and I am not talking about percentages, since 2 lines added to a 5-line object is a rather large change. Historically, those areas are the posting routines and the release codeunits.

As an alternative, you can also look at all tables that have been customized. You will see that the changes to those tables are often more than just added fields. If you start with the tables, you might be able to knock out a large number of changes in a small amount of time. Redesigning changes in a posting routine, together with testing, etc., will take quite some time; updating a bunch of tables with the changes described below might not.

Tables

Triggers

Refactoring tables is somewhat “simple”, but not always. Let’s take care of the simple things first: I am assuming that everyone is upgrading nowadays to 2016 or 2017, so the event pattern is available. What does this mean? It means that any code that was added to any of the standard triggers in a standard table should be moved out. You create a function in a codeunit and declare it as an event subscriber to the table trigger or field trigger event that is available. You still might have to rewrite the code a bit, but often enough it’s just a matter of moving the code and you are done. Ideally, you would create one codeunit per modified table and add the events for just this table in there – together with any support functions that are needed. This is, unfortunately, not always feasible, since customers still have to pay for individual objects (I can’t believe that this has not been removed yet – get rid of it, just like customers no longer have to pay for database sizes).
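As a sketch of what that move can look like – shown in modern AL syntax for readability; in C/AL you create the same subscriber by setting the function’s Event, EventPublisherObject, and EventFunction properties. The codeunit number and name and the moved check are placeholders, not standard objects:

```al
codeunit 50100 "Customer Table Events"
{
    // Runs after the standard OnValidate trigger of the chosen Customer field.
    [EventSubscriber(ObjectType::Table, Database::Customer, 'OnAfterValidateEvent', 'Credit Limit (LCY)', false, false)]
    local procedure OnAfterValidateCreditLimit(var Rec: Record Customer; var xRec: Record Customer; CurrFieldNo: Integer)
    begin
        // The code that previously lived in the field trigger goes here,
        // often unchanged – e.g. a made-up custom check:
        if Rec."Credit Limit (LCY)" < 0 then
            Error('The credit limit must not be negative.');
    end;
}
```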

Custom Functions

So, table triggers and field triggers are the easy part. Often, new functions are added to tables. While adding new functions to tables was well-intentioned – to keep things “object oriented” – it also causes issues, because ideally you do not change anything in a table other than adding new fields. Why? Because that is when Microsoft’s upgrade tools work most efficiently.

So, what do we do with those custom functions? Some of them were created to assist with changes in triggers – usually these are the local functions that were added. Those can hopefully just be moved into the new codeunit(s) you created and serve as support functions for the code behind the trigger events.

The global functions, which provide additional functionality to code outside of the object, can also be handled easily. They can likewise be moved into the codeunit created for the table (or into a central codeunit), but you will need to add a parameter to each function. I would add it at the beginning, and it should be the table itself: declare a parameter of type Record, using the table you are moving the function from. Make sure you mark it as VAR, so it is passed by reference and you can make changes to the data in the record. Then delete the function from the table and compile all objects to see where it is used (there is no “where used” in the development environment – but you could use the merge tool 😊). Finally, replace the calls to the function on the table with calls to the codeunit, passing the record as the parameter.
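A hypothetical before/after sketch of such a move, in AL syntax – the codeunit number, function name, and body are made up for illustration:

```al
codeunit 50101 "Customer Functions"
{
    // Formerly a global function on the Customer table. The record is now
    // the first parameter, passed by VAR so changes flow back to the caller.
    procedure UpdateSearchName(var Customer: Record Customer)
    begin
        // body moved from the table function, unchanged
        Customer."Search Name" := UpperCase(Customer.Name);
    end;
}

// Call sites change from
//   Customer.UpdateSearchName();
// to
//   CustomerFunctions.UpdateSearchName(Customer);
```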

Custom Code in Standard Functions

Often, code is added to standard functions in tables. Especially if this is a block of code and not just a single line, you will need to change that. There are different ways of handling this, depending on the type of change. If it’s just one line of code, you could choose to leave it there – but only if there is absolutely no way to move the code somewhere else.

For instance, if the function is an “InitRecord” function and you initialize a value of a custom field, there are different options how to deal with this:

  • You can set the InitValue of the field to a value
  • You can create an event subscriber to all triggers this function is called from and add your custom “InitRecord” function that only sets your values
  • You create an event subscriber to the OnInsert trigger and set the initial values, if the fields don’t have a value.
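The third option from that list could look roughly like this in AL syntax – the custom field, object number, and default value are made up:

```al
codeunit 50102 "Sales Header Defaults"
{
    // Default a custom field on insert, but only when no value was supplied.
    [EventSubscriber(ObjectType::Table, Database::"Sales Header", 'OnBeforeInsertEvent', '', false, false)]
    local procedure OnBeforeInsertSalesHeader(var Rec: Record "Sales Header"; RunTrigger: Boolean)
    begin
        if Rec."My Source Code" = '' then
            Rec."My Source Code" := 'DEFAULT';
    end;
}
```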

If there is a block of code, or even worse, a whole bunch of code blocks in a standard function, a bit more effort goes in. Look at the code to see whether it can be redesigned so that all the changes sit in one block (ideally at the end of the standard function). The idea is that you let the standard code do whatever it needs to do and then have your custom functionality execute afterwards and make the changes you need – or have the custom functionality run instead of the standard functionality, based on certain criteria. Once you have found the best place for the code and made sure it is only one block, you can apply the hook pattern: move the custom code into a different codeunit and call that function from the original place.
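A sketch of the hook pattern in AL syntax – the codeunit, function names, and the check are placeholders, not standard objects:

```al
// In the standard function, the whole custom block shrinks to a single call:
//
//   SalesPostHook.OnAfterStandardChecks(SalesLine);
//
// The moved code lives in a dedicated hook codeunit:
codeunit 50103 "Sales-Post Hook"
{
    procedure OnAfterStandardChecks(var SalesLine: Record "Sales Line")
    begin
        // the custom block, moved here unchanged – e.g. a made-up check:
        if (SalesLine.Type = SalesLine.Type::Item) and (SalesLine."Unit Price" = 0) then
            Error('Unit Price must not be 0 for item lines.');
    end;
}
```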

Pages

Since you have been following best practices for a while now, you have no pages or forms (when upgrading from classic) with any code on them. We all know that’s not true. So, what can we do here? Ideally, you again apply the event pattern and do as much as possible through that. Unfortunately, events won’t let you show or hide fields dynamically. The only way to do that through events would be to add fields holding temporary values that drive the visibility – not really the best option, although it could be one. For something like this, you might have to keep some code on the pages, and there you should also apply the hook pattern.

Reports, XMLPorts, and Queries

These are the object types that, unfortunately, are the biggest concerns during upgrades. Why? Because there is no good way to make changes without a “larger footprint”. Obviously, if you make changes in the code of those objects, you should apply the techniques mentioned above. But if you change the layout of a report or change the dataset, you just have to make those changes.

Now, there is always the discussion – should you update the standard report or make a copy of it and make your changes there? If you ask 10 people, you probably get 12 different opinions. So, I am giving you mine: It depends…

It depends on the nature of the changes. If you, for instance, add a couple of fields to a report and also want to make some layout changes to support those fields, make them in the standard report. If you use the report as the base for a new report and basically rewrite a lot of it, make a copy.

The advantage of keeping the standard report and making changes to it is that, if the report gets updated or fixed, you can easily apply the fixes. You won’t remember 5 years from now that you have a custom report that should get the same changes as a specific standard report. Just apply the same patterns: use hooks wherever you can, make changes only in custom areas or at the beginning or end of the standard code, etc. If you change the layout of the report, check whether you can make those changes in a Word layout and create a custom Word layout for it – this is easier to migrate to a new version as well.

Codeunits

Codeunits are important to refactor, because changes in codeunits are difficult to merge automatically. But they are also somewhat difficult to refactor, because there are often not a lot of events available and the code is usually spread all over the place. In general, the first step again is to look at the code and see how you can move some of it into events. For instance, there are events in the posting and release codeunits, and almost all custom checks (meaning, decisions on whether to throw an error or not) can be moved into those events. The CopyDocument codeunit now has events, so a lot can be moved there. Once you have exhausted the events, you can still apply the hook pattern to create hooks and reduce the impact of changes in the standard codeunits. Basically, all the adjustments to make are a summary of the ones described before for tables, pages, and the other object types.
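For instance, a custom check can move out of a modified posting codeunit into a subscriber like the following sketch (AL syntax). OnBeforePostSalesDoc is one of the events added to the Sales-Post codeunit in NAV 2016; the codeunit number, name, and the check itself are made up:

```al
codeunit 50104 "Sales Posting Checks"
{
    // Runs before Sales-Post starts posting the document.
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Sales-Post", 'OnBeforePostSalesDoc', '', false, false)]
    local procedure OnBeforePostSalesDoc(var SalesHeader: Record "Sales Header")
    begin
        // hypothetical custom check, moved out of the modified standard code:
        if SalesHeader."External Document No." = '' then
            Error('External Document No. must be filled in before posting.');
    end;
}
```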

Additional information

I would suggest reading the different patterns described in the design patterns section on Dynamics Communities. It has a lot of good information about proper development techniques and low-impact changes. Although some of the patterns only make sense for product development, a lot of them are worth keeping in mind for every single line of code that you write.

2 comments

  1. Frédéric Vercaemst

    Hi Peter,

    We pretty much apply the same rules when refactoring code, although some colleagues apply a variant on the hooks pattern instead. In places where you need to add customizations in between standard code, a – custom created – event publisher is called instead of the hook from the hooks codeunit. The custom created (hook) codeunit then contains an event subscriber that runs the actual functions.

    What would be the (dis)advantage of applying the events pattern instead of the hooks pattern in this scenario? The final result is approx. the same (low footprint on NAV code, limited to 1 line of code, event or hook call / one codeunit containing all functions) although the event approach requires one more step, being the definition of the event publisher.

    What’s your view on this for customer / project development? (For add-ons / ISV’s, the approach might be different)

    1. pzentner

      I am sorry it took a little while, but I finally got around to the comments.

      You can use both approaches; the event and hook patterns are very similar. I like the hook pattern because you don’t need to define the event publisher, so you have a smaller footprint, and – which can be an advantage for product development – it is backwards compatible with older versions.

      The other issue could be that someone subscribes to your event publisher and implements other functionality. If they know how everything is designed, that could be a good thing, but it could also open the way for some unexpected behavior – the biggest problem with events is that you can’t control the order in which subscribers are executed.

      However, for both custom development and product development, both are valid patterns to apply, with the hook pattern being backwards compatible and stricter in its control over the code flow.

      Peter
