The encapsulation bug that was a feature.

The recent release of RAD Studio 10.1 Berlin saw a fix to a long-standing bug in the Delphi compiler. That is, helper classes are no longer able to access the private members of the class they extend, as they had been able to do in the past.
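To make the change concrete, here is a minimal sketch (all names are hypothetical) of the kind of code that compiled before the fix and is now rejected:

```pascal
// WidgetUnit.pas - the original author's unit.
unit WidgetUnit;

interface

type
  TWidget = class
  private
    FSecret: Integer; // hidden from code outside this unit
  end;

implementation

end.

// WidgetHack.pas - a second unit that bolts a helper onto TWidget.
unit WidgetHack;

interface

uses
  WidgetUnit;

type
  TWidgetHelper = class helper for TWidget
    procedure Poke;
  end;

implementation

procedure TWidgetHelper.Poke;
begin
  // Before 10.1 Berlin the compiler resolved this private field from
  // inside a helper; since the fix it reports an error along the lines
  // of "E2361 Cannot access private symbol TWidget.FSecret".
  Self.FSecret := 42;
end;

end.
```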

This put me in an unusual situation. I found myself in a discussion with a customer who has taken advantage of this bug and considers it a feature. It is somewhat unusual (at least in my experience) in the software industry to have a customer demanding that you reinstate a fixed bug!

I understand my customer's position on this: the repair of this bug means that code which relied on the bad behavior is now broken. Those customers will face some cost in repairing their code should they upgrade to the latest version of our product. Believe me, I'm not arguing without compassion for their plight; however, using a bug as a feature is simply bad practice.

Take a look at the wiki page on encapsulation. You'll notice that while the URL ends with '#Encapsulation', the page is titled 'Information hiding'. This is no accident: the concept of encapsulation in software is that of hiding information. This is precisely what the 'private' keyword is intended to do: it allows the developer writing a class to hide information about how that class works from subsequent developers deriving from the class.

This is known as the "segregation of design decisions", which means that the original author of a class need not concern themselves with how their class may be misused; they can prevent its misuse by making parts of the code private.

In Delphi you can use 'strict private' to hide a class member from everything outside the class itself, and 'private', which permits access to the member only from code in the same unit; that is, from those with access to the source code. This allows you to permit modification of the class's behavior to members of your own engineering team, for example, while preventing a third party from meddling in places you never intended them to.
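A short sketch of the difference between the two levels, again with hypothetical names:

```pascal
unit AccountUnit;

interface

type
  TAccount = class
  strict private
    FInternalId: Integer; // hidden from everything outside TAccount itself
  private
    FBalance: Currency;   // visible to any code in AccountUnit, hidden elsewhere
  public
    procedure Deposit(Amount: Currency);
  end;

  // Declared in the same unit, so plain 'private' members are reachable.
  TAccountAuditor = class
    procedure Inspect(Acct: TAccount);
  end;

implementation

uses
  System.SysUtils;

procedure TAccount.Deposit(Amount: Currency);
begin
  FBalance := FBalance + Amount;
end;

procedure TAccountAuditor.Inspect(Acct: TAccount);
begin
  WriteLn(CurrToStr(Acct.FBalance)); // legal: same unit as TAccount
  // WriteLn(Acct.FInternalId);      // illegal: 'strict private' excludes even unit-mates
end;

end.
```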

Arguing that the compiler ought to let you override this functionality is arguing to REMOVE a protection feature from the compiler.

So what do you do if you want to alter the behavior of a class? Well, this is what the 'protected' encapsulation level is for. When the author of a class deems that it should be flexible in its behavior, they can place members at the protected level to provide this flexibility.
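This is typically done with a protected virtual method, which a subclass may then override. A minimal sketch, with hypothetical names:

```pascal
unit ParserUnit;

interface

type
  TParser = class
  protected
    // Deliberately exposed as an extension point for subclasses.
    procedure HandleToken(const Token: string); virtual;
  public
    procedure Run(const Tokens: array of string);
  end;

  TLoggingParser = class(TParser)
  protected
    procedure HandleToken(const Token: string); override;
  end;

implementation

procedure TParser.HandleToken(const Token: string);
begin
  // Base behavior; subclasses may extend or replace this.
end;

procedure TParser.Run(const Tokens: array of string);
var
  Token: string;
begin
  for Token in Tokens do
    HandleToken(Token);
end;

procedure TLoggingParser.HandleToken(const Token: string);
begin
  WriteLn('token: ', Token);
  inherited HandleToken(Token); // keep the base behavior too
end;

end.
```

The subclass changes the behavior, but only at the point the original author chose to make flexible.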

The problem, as discovered by our customers, is that developers don't always place members at the protected level when you believe they should have. Now, you may be correct in believing they should have done so, or you may not; in fact, that's the point of the private level: to keep you from second-guessing the original author.

In cases where the original author really should have raised the protection level to protected, the fault lies with them for not having done so. Fault does not lie with the compiler when it forbids you from accessing private members; that's what it's supposed to do!

My customer argued that we should add an additional keyword permitting the overriding of a private member. Again, this violates encapsulation. What is the point of the compiler telling you "No, the original author does not want you changing this" if you can then say "but I insist" and get an "okay…" in response? Effectively, this cheats the original author out of the protection they thought they had in the first place, which is why the bug was fixed.

Ultimately, you really should be blaming either the original author of the class, or your own poor practice in violating the private encapsulation level, rather than blaming the compiler for doing what it should.

Suppose you have a unit which represents a memory heap. Within the implementation of that unit is a constant named 'Granularity', set to a value of 512. This means that when the memory unit is used to allocate chunks of memory, it will always allocate in blocks of half a kilobyte, which is not unusual, since many hardware vendors historically used this value to page memory. (Don't quote me on this, but I believe MS-Windows still uses this granularity for its paged memory management.)

So now you find yourself on specialized server hardware, or a server OS which supports memory pagination of 1024 bytes (1 kilobyte), and you realize that memory allocation would be twice as fast if it could allocate memory with this larger granularity value. But you're unable to alter the value, because it's a constant and you don't have access to its source code!

Do you blame the compiler for not allowing you to use a constant as a variable? No, you blame the author of the unit for not having thought ahead to provide you with some means of setting the granularity at run-time.
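What that forward thinking might look like, sketched with hypothetical names: keep the default private to the unit, but expose the decision as a constructor parameter rather than a hard-coded constant.

```pascal
unit HeapUnit;

interface

type
  THeap = class
  strict private
    FGranularity: Cardinal;
  public
    // The default preserves the old behavior; callers on unusual
    // hardware can request a different block size up front.
    constructor Create(AGranularity: Cardinal = 512);
    function Allocate(Size: Cardinal): Pointer;
    property Granularity: Cardinal read FGranularity;
  end;

implementation

constructor THeap.Create(AGranularity: Cardinal);
begin
  inherited Create;
  FGranularity := AGranularity;
end;

function THeap.Allocate(Size: Cardinal): Pointer;
var
  Blocks: Cardinal;
begin
  // Round the request up to a whole number of blocks.
  Blocks := (Size + FGranularity - 1) div FGranularity;
  GetMem(Result, Blocks * FGranularity);
end;

end.
```

A caller on the hypothetical 1 KB server would simply write `THeap.Create(1024)`, and nobody needs to touch the unit's internals.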

Suppose now you’d been using a hack to locate the address of the constant and modify it by directly accessing the memory. Your compiler vendor realizes the compiler is putting constants into a writable memory location, alters the compiler to place constants in a memory page flagged as read-only, and the OS throws an error when you attempt to write to that location. You may want to blame your compiler vendor for making this change, but let’s be honest: the fault is not with them, it’s with you. You should not have been modifying constants at run-time; that’s why they’re called constants.

Yes, I'm aware that for historical reasons there is a compiler option to allow assignment to constants; it's a concession to another instance of the same issue! At one time the compiler allowed assignment to constants. This was a bug, and it was fixed. The override option in the compiler was offered for those customers who had taken advantage of the bug. It remains true that this was a bug, and that the correct default behavior of the compiler is to honor the constant constraint.
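For reference, the switch in question is the $J ("writeable typed constants") directive, which applies to typed constants. A sketch:

```pascal
{$WRITEABLECONST ON} // long form of {$J+}; off by default in modern Delphi

const
  Counter: Integer = 0; // a "typed constant": under $J+ it is really an initialized variable

procedure Bump;
begin
  // Compiles only because writeable typed constants are enabled above;
  // under the default {$J-} state this assignment is a compile-time error.
  Counter := Counter + 1;
end;
```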

Nobody likes to be told their code is the problem, particularly if it’s going to cost time and/or money to correct. I understand that, and it’s easy to look at the compiler vendor and become angry with them for causing your code to break. I get that.

It’s also true that some developers (on some kind of power trip, I think) feel the compiler should just let you do as you please, and that it’s your responsibility to ensure your code is doing what it should.

The truth, however, is that this is not a professional attitude. Following good coding practice is essential to good software design, and to avoiding bugs. A compiler which helps you follow that practice, by preventing you from making mistakes, is a compiler doing its job.

The Delphi compiler is now doing what it should have been doing all along; the bug is fixed. As a class author you now have a restored feature: the ability to keep others from meddling in code you don’t want them to meddle in. Use it responsibly, and provide flexible protected-level functionality wherever it can be useful.

It’s regrettable that some of our customers will suffer because of this change, and I do not yet know whether our product and R&D teams have a solution for them. However, I believe this is for the best and only serves to improve the product.

Thanks for reading!