The Essence of Code.

I've spent the largest part of my career as a software developer working on line-of-business applications in an object-oriented programming arena. This has served me well and paid the bills, but the more experience I gain, the more I'm reminded that what I've been doing for 20+ years is not what I used to really enjoy about programming, and it comes down to the "Essence of Code."

I'd like to start by explaining my relationship with OOP (Object Oriented Programming).

OOP really began in 1967, in a programming language called Simula, which as I understand it was an extension of Algol. The software industry back then was a very different space, barely even an industry in comparison to what it is today. It's long before my time, I wasn't born until a decade later, and even in my younger years the software industry as we know it now was burgeoning, but still small. By the time I entered school to learn formally about programming, in the mid 90's, Object Oriented Programming was only starting to become popular. Today I think of OOP as the "Industry Standard", but despite its true age, back then it was still considered new.

I'd grown up learning to program 8-bit microcomputers at home, reading every text I could find at the library. For those of you that don't remember libraries, they were buildings where you could borrow books. Um, books were like blogs, but written on paper... Well, you get my point. I'd learned a language called BASIC, and quickly realized that it wasn't good enough for what I wanted to make, which, as a young boy growing up through the 80's and 90's, was video games of course. I'd therefore learned to write code in assembler for the various 8-bit machines, and later for my 32-bit Amiga. I'd also begun my initial delve into Pascal. The Pascal compiler that I had at that time did not have OOP features; it was traditional procedural Pascal. I was aware of, and knew a little, C, which was also procedural, but I'd not encountered OOP in practice.

Finding my way into programming education in England in those days was not straightforward, particularly as I grew up in an industrial town. I remember I once had a careers advisor interview me. When I explained that I wanted to write software, he recommended I go to the vocational college to take a course called C.L.A.I.T (Computer Literacy and Information Technology). To explain, this would have been a course in learning to use Microsoft Windows 95, Word, Excel... I was dumbfounded and tried to explain: I don't want to eat the cake, I can already do that, I want to bake the cake. The advisor had nothing for me.

I eventually found myself in a vocational curriculum in computing which did include a programming module, as well as C.L.A.I.T, and a module on assembling computer hardware (really, you just slot this PCB into that PCB and you have a course to teach this?). Options were relatively thin on the ground. However, as I said, there was a module in programming, and the languages of choice were Pascal and C++ - Object Oriented Turbo Pascal, and Turbo C++. At around the same time, I came across a copy of Delphi personal edition, distributed on a magazine cover disk. This was my introduction to OOP.

I won't go into too much detail on how this turn in my life came about, but I ended up seeking employment in programming. Application programming, not writing games as I'd had a desire to do. I did make several efforts, but the path was not easy, and ultimately I needed to be earning money. As it turns out, application development has generally been more lucrative and an easier market to enter. I'd used Delphi to create an HTML editing application which I'd distributed as freeware, and it was surprisingly popular. It was this effort that eventually served as a "portfolio" piece to help me get hired into application development. I don't regret my career at all, but if I could go back and do anything differently, I think instead of writing an HTML editor, I'd write a video game. I think that had I made that decision instead, my career in the video games industry might have started as I'd wished.

Anyhow, all of this is to get around to explaining that I actually struggled to learn OOP. While it all seemed quite logical at an abstract level, it didn't mix well with anything that I'd taught myself. You see, by the time I got to college, the latest and hottest computer available would be based around the Intel 486, running at 66MHz with as much as 8 or 16MB of RAM. That's megahertz not gigahertz, and megabytes not gigabytes, and even then it was a vast step up in resources over the machines I'd learned on. The Amiga had its 2MB of RAM as standard, and a 32-bit hybrid CPU running at 14MHz. The Commodore 64 before it had only 64KB of RAM and a 1MHz CPU. What I'd learned is that every CPU cycle matters, and that every byte of RAM matters. OOP was at odds with this way of thinking. It seems obvious to me now why OOP took three decades to go from its initial stages in the late 60's to its rise in popularity through the mid-to-late 90's: it was resource hungry!

OOP is resource hungry, even today. The dramatic rise in computing resources, with faster CPUs and higher availability of RAM, has essentially masked just how hungry OOP is. I've had concerns about OOP since day one, but having been an application developer using Delphi, to say so for much of my career would have been heresy. Today, I try to remain pragmatic, but there is a partitioning in my way of thinking about programming which doesn't feel good. I feel like I'm lying, in blog posts, or on those occasions that I've programmed in public. I feel dishonest. I'm not being dishonest, but rather, I'm being genuine within a given context.

For example, I have been heard saying things like "You should always use interfaces, even when not using multiple implementations." What I wasn't able to articulate is that the use of interfaces encourages composition and dependency injection over inheritance. I wrote a blog post a short while back covering the shortcomings of using interfaces, in which I feel I've finally done a fair job of explaining my discomfort with OOP inheritance. Still, this way of thinking is a bit of a lie, and so I'd like to explain further.

You see, I don't think you should always use interfaces. In fact, I don't even think you should always use OOP. One of the things that I've enjoyed about working with interfaces is that in Delphi they are ARC memory managed, and I've talked about how great that is... but I don't really believe you should always work within a memory manager. I think that this whole way of thinking about programming is wrong! It's bothered me for some time.

Well, let me walk that back a little. Today, the software industry is complex, mature, and firmly rooted in the soil of OOP. The language that has made my career would not have been nearly so popular as it was if it hadn't depended on OOP. The same is true of other popular application development tools. Consider C# for example, with its OOP and its interfaces and its generics and its garbage collection. It would be simply foolish to say that all of these features are bad or wrong in some way without at least some acknowledgement that they have contributed hugely to the software industry, and that they are popular for good reasons.

When I suggest you "should" code a particular way, I'm not conveying the context. Perhaps a better way to say it would be: if you are programming in an OOP setting today, then you should use interfaces. In particular, in the Delphi world, given that just about everything you write will be OOP based, you should be using interfaces. Why? Well, for the many reasons I discussed in my previous post about interfaces, they help to decouple code, which makes the code base more flexible to change. Often, this flexibility for change is more important to the software house that you're employed at than the technical costs of OOP.
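To make that concrete, here's a minimal Delphi-style sketch of what I mean by decoupling through an interface. The names (ILogger, TConsoleLogger, TOrderProcessor) are hypothetical, invented purely for illustration; it's a sketch of the pattern, not anything from a real code base:

    program DecouplingSketch;
    {$APPTYPE CONSOLE}

    type
      // The consumer depends only on this contract, never on a concrete class.
      ILogger = interface
        procedure Write( const aMessage: string );
      end;

      // One concrete implementation; others could be swapped in later.
      TConsoleLogger = class( TInterfacedObject, ILogger )
        procedure Write( const aMessage: string );
      end;

      TOrderProcessor = class
      private
        fLogger: ILogger; // injected, reference counted
      public
        constructor Create( const aLogger: ILogger );
        procedure Process;
      end;

    procedure TConsoleLogger.Write( const aMessage: string );
    begin
      Writeln( aMessage );
    end;

    constructor TOrderProcessor.Create( const aLogger: ILogger );
    begin
      inherited Create;
      fLogger := aLogger;
    end;

    procedure TOrderProcessor.Process;
    begin
      fLogger.Write( 'Processing an order...' );
    end;

    var
      Processor: TOrderProcessor;
    begin
      Processor := TOrderProcessor.Create( TConsoleLogger.Create );
      try
        Processor.Process;
      finally
        Processor.Free;
      end;
    end.

TOrderProcessor never names TConsoleLogger, so a different ILogger implementation can be handed in later without touching it; that is the flexibility for change I'm referring to.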

I've picked at OOP a little here, but it's not just OOP that bothers me. In fact, many modern programming features are discomforting, but I believe that most of them stem from the industry adopting OOP to begin with.

From OOP comes the overuse of reference types, which, when used as they are in modern software, are a disaster for performance. VMTs (virtual method tables) are a disaster for performance; heck, even a callback is not great if it can be avoided, yet we have "event driven" programming. Allocating tiny objects one at a time is horrible for performance, and potentially for memory fragmentation too. On top of this we have initialization, which adds checking code to every one of our allocations. In the case of "safe references" we add a check to every use of a reference too. Cascading the de-allocation of a tree of references when using ARC is another crime against performance.

Using interfaces is worse still, because an interface brings a level of polymorphism which means that the compiler cannot know at compile time which VMT it will be calling upon; it adds the need for a run-time look-up of the VMT, adding yet more performance cost. You see, as we unwind all of the modern programming features related to memory safety, decoupling, and so on down the list, we ultimately get back to OOP being the root cause of a huge amount of performance cost.
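As a rough illustration of the call chain I'm describing, here's a small hypothetical sketch in Delphi-style Pascal. All of the names are invented, and it's not a benchmark, just a way of showing where the indirection creeps in:

    program DispatchSketch;
    {$APPTYPE CONSOLE}

    type
      TShape = class
        procedure DrawStatic;           // direct call: the address is fixed at compile time
        procedure DrawVirtual; virtual; // call made through the class VMT
      end;

      IDrawable = interface
        procedure Draw;
      end;

      TCircle = class( TInterfacedObject, IDrawable )
        procedure Draw;                 // reached via the interface's method table at run time
      end;

    procedure TShape.DrawStatic;  begin Writeln( 'static' );    end;
    procedure TShape.DrawVirtual; begin Writeln( 'virtual' );   end;
    procedure TCircle.Draw;       begin Writeln( 'interface' ); end;

    var
      Shape: TShape;
      Drawable: IDrawable;
    begin
      Shape := TShape.Create;
      try
        Shape.DrawStatic;   // the compiler emits a plain call instruction
        Shape.DrawVirtual;  // fetch the VMT pointer, index into it, call indirectly
      finally
        Shape.Free;
      end;

      Drawable := TCircle.Create; // reference counted from this point on
      Drawable.Draw;              // an extra hop: locate the interface table, then call through it
    end.

Each step down that list adds another indirection before any useful work happens, and that cost stays invisible until you go looking for it.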

These days, we've doubled down on our approach time and time again, and the situation just continues to get worse. Many applications today are written with systems that generate the application as, essentially, HTML/CSS and JS, which is then embedded into a browser component to run on a modern desktop or tablet. Why? To create cross-platform applications while avoiding the inconvenience of the different UI systems on those platforms. I can't even begin to estimate how resource-intensive and CPU-hungry such applications are, and yet they are being made, and the developers using these tools actually think it's a great idea.

The simple fact is that every single convenience feature that is offered by your compiler, or your tool-chain, or your "tech stack" is another performance cost. There's no getting around it. CPUs are essentially a digital electronic form of clockwork technology. Despite the way we've been encouraged to think about computers being insanely powerful, and they are, they are still finite. We could be doing a whole lot more with our available CPU cycles than we are doing - much more, orders of magnitude more. So why do we pay this price?

Frankly, it's a business issue. To state the obvious, the software industry is made up of businesses that make money by selling software. In order to do this, they need to hire software engineers and developers, and they need that staff to all understand the same code base, the same practices, and to be able to work together. Though they may claim to be looking for higher levels of skill, they're actually not; they're looking for a common denominator.

For the longest time, the demand for developers was so high that companies had little choice but to take the employees that they could find. The economy in the software world looks a little different today, but this is the foundation on which we stand. Companies didn't, and still don't, want mavericks that do things in a different way to everyone else; that just ends up costing them down the line. They want the construction of software to be akin to a factory line, with the simplest steps possible for each worker at each step down that line. The OOP model has worked well for them, offering the mental simplicity of only having to work on the smallest piece of code at a time, the "object."

The features that have come along since have been mechanisms to compensate for this. If your development team are all working in an OOP setting, then they're all focusing on small, individual "objects", of which there will be an increasing number to manage as the sophistication of your software grows. Well then, they'd better not forget to free objects, or else memory will leak and the application will eventually fall down. Okay, so introduce memory management features. Inheritance has led to tight coupling, which makes it difficult to make changes; we can't have that when the business wants to sell a new feature, so bring in interfaces as a means of decoupling code. Managing all of these objects takes a lot of "management" code, much of which is the same with only minor differences to account for types, so let's invent "parameterized types" or "generics" so that we can write our management code only once and reuse it. Now that we have generics, and multiple implementations, let's use them to create "mocks" so that we can test that they work in "every case."
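To be clear about what I mean by writing the management code once, here's a small sketch using Delphi's Generics.Collections. TObjectList is the real library class; TCustomer is a hypothetical type invented for the example:

    program GenericsSketch;
    {$APPTYPE CONSOLE}

    uses
      Generics.Collections;

    type
      TCustomer = class
        Name: string;
      end;

    var
      Customers: TObjectList<TCustomer>; // the same generic class can manage any object type
    begin
      Customers := TObjectList<TCustomer>.Create( True ); // True: the list owns, and will free, its items
      try
        Customers.Add( TCustomer.Create );
        Writeln( Customers.Count ); // prints 1
      finally
        Customers.Free; // frees the list and, because it owns them, its items too
      end;
    end.

The same TObjectList<T> manages customers, orders, or anything else, which is exactly the "write it once and reuse it" appeal.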

In order to be a good citizen at an employer, you must understand OOP, and you must understand all of the other features that have been piled on top in order to make working this way rational... or at least to appear rational.

I may have also been heard saying things like "good code is that which is more easily understood by others." While I still believe this to be true, it also comes with some additional context, as it can be taken too far. There is, or at least there ought to be, some lowest common denominator. You can't be expected to write code that is so easy for others to understand that any non-programmer could understand it, right? There has to be some minimum expected level of competency required in order to work in your code, and you *should* strive to keep this code at a level that is easy to understand for that level of competency... That is, if you wish to be a good citizen among the developers employed alongside you.

This is the explanation of my own duality. "Good code" means different things in different contexts. The vast majority of modern application code is certainly not "good code" in terms of resource usage, or performance, or anything based in the physics of a computer system. "Good code" in this context is code that other developers find enjoyable, or at least tolerable, to work with. In my mind, there is nothing "good" about this, not because I care too much about performance, though that is a factor, but because I believe that the gap between what is good code in each context could have been far narrower. Even in this context, the very term "Good Code" I believe to be something of a fallacy. Consider it a legitimized fiction, if you will.

I feel that the entire software industry has taken a misstep and gone down the wrong path, and yet I've played along. I contributed my share, and took my share, and have made efforts to be a good citizen, even schooling others on how to be a good citizen in this setting. You see, as an employee your code can be an asset to the business or it can be a liability. It's not in your interests for the code to become a liability, because that'll either cause you more work, or ultimately it could damage the business and therefore your own finances or career. Being a good citizen means giving your best effort to adopt the best and most financially viable solutions to the problems that you have. It does not always mean fixing the root cause.

This brings me to the sentiment that inspired me to write this post. While I've been a good citizen, as I've explained, all the while feeling a little uncomfortable about it, the more brash among our developer kin have taken this one step further. On several occasions now, when expressing sentiments such as these, I've been accused of being too rigid.

For instance, one of the reasons that I've appreciated the Pascal language is that it is strongly typed. I know that there are those that prefer languages that are not strongly typed, you do you, but the counter that I was given is that it's more important to focus on the "essence of code" than the details. At first my thought was that this sounded a little too much like aromatherapy to be taken seriously. My thoughts then became a little more practical. Why would anyone employed as a software developer not care about the details of how their tools work, or how their program runs? Surely this must be the primary concern for any software developer; it is what they're paid for, after all, isn't it?

Actually, these days, no it's not!

When I first entered the programming workforce, using a compiler that was OOP and which did not have any memory manager to speak of, and while computers were still crawling out of their infancy, those of us working in software knew that we were primarily responsible for managing resources. The two primary resources were CPU cycles and memory, but by extension also anything which had a 'handle' attached, be it a socket or a file or anything else. We had to care about these things, because the compiler wasn't going to care about them for us. Even if the compiler offered tools to help, largely it was left up to us, the software developers. We did, and still do, care that the code is readable. This is one of the reasons I enjoyed Pascal: sure, it's a little verbose, but it was generally very readable. Ultimately however, the syntax didn't matter quite so much as our responsibility to ensure that the code was performant, because it would have been very easy to exceed the CPU capabilities of machines at that time, and that it ran in an amount of memory that the end user was likely to have installed on their system.
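For anyone who has never had to work that way, here is a minimal, hypothetical Pascal example of that mindset: the file handle is ours from the moment we open it, and nothing is going to close it on our behalf (the file name is invented for illustration):

    program HandleSketch;
    {$APPTYPE CONSOLE}

    var
      F: TextFile;
    begin
      AssignFile( F, 'report.txt' ); // hypothetical file name, just for illustration
      Rewrite( F );                  // from here, the handle is our responsibility
      try
        Writeln( F, 'hello' );
      finally
        CloseFile( F );              // nobody is going to release this for us
      end;
    end.

The same try..finally discipline applied to sockets, window handles, memory, and everything else we touched.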

Today however, it seems that the priority for many developers is to do the least amount of work possible. They'll grab some "framework" which appears to solve many of the problems in the programming paradigm, especially favoring it if it saves a few keystrokes here and there. This "essence of code" idea is one of taking polymorphism and dependency injection to such an extreme that, at any time that you're working in the code, you need not understand what is actually going on underneath it, only what it should, in "essence", do. The strict requirement to understand the CPU and memory is long since forgotten. On top of this, hardware has become more sophisticated, such that even keeping up that understanding is a race in itself.

Most modern software, to someone who has been around for a few decades, is horrifying.

As an example, I recently started up a word processor on my old Amiga 2000 machine, and it started near instantly. If I open Microsoft Word on my modern high-core-count, high-RAM desktop, it takes several seconds just to open the application. I tried both the 365 desktop version and the much older 2003 version; the result was the same, give or take half a second or so.

Now, to be fair in this race, the vintage Amiga is slow in many places, particularly in boot-up time since it has to boot from an emulated hard disk or floppies, but when I compare it to the time taken to start up Windows 11... it's a shade faster! I say "a shade" not having timed it accurately, but having raced them. Maybe I'll go back and bench them, but that's not the point. What is going on with my modern desktop machine, which is so much more capable on a hardware level, that it's slower to use than a 30-year-old vintage machine?!

Yes, the newer software does have more features, but does it really need all those CPU cycles to prepare the software for use? Ultimately, with either the older application or the new one, I'm able to edit a rich text document, with pretty fonts, and should I wish to, send it to a printer. The fundamental function of a word processor has not changed. What about the new features is so advanced that it costs the CPU so much start-up time? Nothing.

What's happening is that software, built on top of costly programming paradigms, is consuming all of the available CPU cycles to do... well, work that doesn't need to be done. When I start up a word processor, does it actually need to start up a sandboxed instance of the Edge browser component, complete with all of the features that it offers to JavaScript? Does it need to load the application as JavaScript code, or byte code, and compile it "Just-In-Time"? No. Such a component didn't even exist on the vintage machine; it's not necessary for word processing, not at all.

Consider modern web applications. Sure, I get their convenience, it's blatant, but think about how they work for a moment. These are applications delivered essentially as source code, in textual form, to be compiled and executed inside the web browser. This is, well, all modern web applications. So we use a narrow-bandwidth network to carry entire user experiences as textual source code, over a text-based transport protocol. When this technology was invented, bandwidth was even narrower than it is today. It was hardly possible to have a good UX using an X server over a LAN, much less a graphical UI system over a WAN, yet this was the solution that the software industry decided was a good idea and channeled all of its effort into, in order to build the web that we know today. It's so wasteful that even with much improved internet connection speeds, and with compression, round-trip times and exchanging data between the 'back end' and 'front end' in a timely way remain a big concern to this day.

History could have been quite different. Imagine for a moment, if you will, a timeline in which OOP never existed. Compilers could still have offered modern features, but with fewer costs. Imagine then that, instead of allocating an object every time you call new() or .Create(), the compiler encouraged you to allocate arrays of structured data. Instead of trying to manage when you free each object, it could simply invalidate or reuse those structures and ultimately dispose of the entire allocation when you were done with it. How much faster might applications be if they weren't wasting time allocating and releasing tiny blocks of memory to store your class with only two member variables? What if there were no virtual method tables, but every method had a fixed place in the executable, such that CPU branch prediction need not deal with indirect pointers? How much faster might applications run if they didn't have to clear their memory caches for every "event" called? It's all possible and doable.
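Nothing about that requires an imaginary compiler, by the way. Here's a rough, hypothetical sketch of the same idea in plain Pascal (TParticle is an invented type), using one array of records in place of thousands of individually allocated objects:

    program ParticleSketch;
    {$APPTYPE CONSOLE}

    type
      TParticle = record      // plain structured data: no VMT, no hidden header fields
        X, Y: Double;
        Alive: Boolean;
      end;

    var
      Particles: array of TParticle; // one contiguous block of memory
      i: Integer;
    begin
      SetLength( Particles, 10000 ); // a single allocation covers all 10,000 entries

      for i := 0 to High( Particles ) do
      begin
        Particles[ i ].X := 0;
        Particles[ i ].Y := 0;
        Particles[ i ].Alive := True;
      end;

      // "Freeing" an entry is nothing more than invalidating it, ready for reuse...
      Particles[ 42 ].Alive := False;

      // ...and when we're finished, the whole block is released in one operation.
      SetLength( Particles, 0 );
    end.

No per-object allocation, no cascading destructors, and the data sits contiguously in memory where the CPU's caches can actually make use of it.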

How much more enjoyable and responsive might web applications be if we'd been able to predict their rise and provide a compressed binary protocol instead of HTTP, and if, instead of compiling applications written in textual JavaScript, the browsers had been designed to act as remote UI displays rather than as a tool to display what are essentially RTF documents? Feel free to do the math, but let me give you an example to get you started...

One of the first dedicated 3D graphics cards for desktops was the Voodoo by 3dfx, and it provided high-paced graphical experiences driven by a 133MB/s PCI bus. By comparison, I just ran a speed test on my internet connection which measured 481.8Mbps; that's actually reasonably fast (though not nearly as advertised, I need to call my ISP). Notice the difference in units however: the local PCI bus is 133MB/s (megabytes per second), while my internet connection is 481.8Mbps (megabits per second), which works out to only a little over 60MB/s. My internet connection today is less than half as fast as the PCI bus that delivered a graphical experience in 1996. Because of limited bandwidth on the connecting bus, graphics cards are generally driven by binary data, and they are many orders of magnitude faster today with modern bus speeds... but can you imagine for a moment, if graphics cards were driven by textual data, transferred as text, how much slower our graphical experiences would be? Yet this is the standard that we've held for two decades now to deliver user experiences over a far more limited bandwidth connection to some remote web server. It's not sane; it's just where we've ended up.

So, go ahead, spend a few thousand dollars on the latest and greatest hardware, and rest assured that the software industry will suck up every last CPU cycle you have with its wasteful practices, all so that modern developers can kick up their feet and pontificate on the "essence of code."