
I was downloading a software application for my computer the other
day, a product that shall remain nameless. Typically, I start the
download and then work on something else and come back to it later to
complete the install. This time, for no particular reason,
I decided to watch the download as it occurred. It was over 145 megabytes. Keep in mind, this was a business application, not an operating system, and not some complex solution with multiple subsystems and moving parts. Needless to say, I cancelled the download
and moved on. What in the world would require this install package to be
over 145 megabytes?
Think about all of the software you have
installed on your computer and how much disk space it takes up. Yes, I know, disk is cheap, you have loads of unused space, and you don’t really care, but what about memory? If these installs are so bloated in download size, what are they going to be like when they are running? How much memory are they going to consume? And what if you need to run three or four or more of these hogs on your system? Have you
recently opened Task Manager, clicked on the Processes tab, and sorted the list top to bottom by private memory utilization and CPU? If you have never done this, or haven’t done it lately, try it. You might be surprised to find out what is using up the memory and processor on your computer.
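If you prefer a script to clicking around, here is a minimal sketch of the memory side of that exercise. It assumes the third-party Python package psutil is installed (pip install psutil) and simply lists the ten largest processes by resident memory, which is roughly what sorting Task Manager by memory shows.

```python
# Minimal sketch: list the ten largest processes by resident memory,
# roughly what sorting Task Manager's Processes tab by memory shows.
# Assumes the third-party psutil package is installed (pip install psutil).
import psutil

procs = []
for p in psutil.process_iter(["name", "memory_info"]):
    mem = p.info["memory_info"]
    if mem is None:
        continue  # no permission to read this process's memory counters
    mem_mb = mem.rss / (1024 * 1024)  # resident memory, in megabytes
    procs.append((mem_mb, p.info["name"] or "?"))

# Print the ten largest memory consumers, largest first.
for mem_mb, name in sorted(procs, reverse=True)[:10]:
    print(f"{mem_mb:9.1f} MB  {name}")
```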
There is a computing adage I came across a while back called Wirth’s law, which states that software is getting slower more rapidly than hardware is getting faster. There is a variant, Gates’ law, which borrows its name from Bill Gates and makes the more humorous point that commercial software generally slows by 50 percent every 18 months, thereby negating all of the benefits of Moore’s law and its steady improvements in the capabilities of computing devices.
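Taken at face value, the two trends cancel each other out almost exactly. Here is a rough back-of-the-envelope sketch using the folklore figures only (a doubling and a halving every 18 months), not measured data:

```python
# Back-of-the-envelope sketch of Moore's law vs. Gates' law, using the
# folklore figures only: hardware doubles and software halves in speed
# every 18 months, so the speed the user perceives never improves.
hardware = 1.0  # relative hardware speed, arbitrary units
software = 1.0  # relative software efficiency, arbitrary units

for period in range(1, 5):  # four 18-month periods, i.e. six years
    hardware *= 2.0
    software *= 0.5
    perceived = hardware * software
    print(f"after {period * 18:2d} months: hardware x{hardware:5.1f}, "
          f"software x{software:.4f}, perceived speed x{perceived:.1f}")
```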
What exactly is the point of all of this? The point is code quality and efficiency, because inefficient code impedes your ability to work productively. When a programmer needs new functionality, instead of writing a new, more efficient function, they take a shortcut and link in a library on top of the existing code. As this process is repeated over and over with each new release, and new libraries are linked into larger libraries, the composite code begins to resemble a house built on top of a house, built on top of another house. Over the course of time, the code begins to look like a pile of tangled houses after a tornado, with the bottom layers becoming dead code. The negative effect on the performance of the application becomes clearly noticeable to users.
When I started out in IT in a datacenter back in the 1990s,
I worked with some people who would share stories about programming on
hardware with 64K of memory. In that environment, every line of code was carefully considered, written, or rewritten, because memory was limited and costly. It was simply not possible to link in a new library and continue on. As a result, the code was very efficient and compact. When programmers got a welcome memory bump on the computer, it was a big deal because it came with quite a jump in performance.
Fast
forward to today. Servers can have 256 GB or more of memory, while even laptops and desktops can have 8 GB or more. Memory is plentiful, so who cares? With plentiful memory, it would appear that efficient coding is no longer needed. Think again. The problem does not necessarily lie with one application that is a poorly written memory hog. Rather, it manifests itself in business environments that are running dozens or more of these applications. If Wirth’s law proves to be correct, hardware vendors will not be able to rescue us from the performance and utilization hardship caused by poorly coded software.
Some have concluded that poorly written code is the result of lazy programmers focused on cranking code out with no eye to the efficiency of what they are writing. Others say it is the software organization’s fault for not providing developers the time or direction needed to rewrite existing code in a more efficient manner. Regardless of the cause, it is clear that efficient coding is, in most corners, a lost art, and apparently is no longer taught in schools, if it ever was.
(Academicians, please sound off here and defend your institutions!)
So
where do we go from here? The solution to this dilemma lies with the business organizations that are the largest consumers of business software. These organizations have the influence because they have the checkbooks. They need to start holding the vendor community responsible for the performance of the applications they procure.
In
defense of the software vendor community – of which I am a member – vendors do generally provide minimum hardware and software requirements, but these fall far short of giving an organization the data it needs to understand how an application will perform in production. What is needed is hard data on how the application has performed in existing deployments, and the software vendors have this data from previous implementations. Demand it from them!
Here are 3 “ask fors” for your business during your next software purchase:
- Ask
for performance data from the vendor that spans smaller
implementations to larger ones. This will help you understand how well
the application scales and help you avoid issues if you are rolling out
the application in phases.
- Ask for references that are similar
in size and network configuration to yours. Put your network or
application specialists in contact with theirs, and discuss performance
pre- and post-deployment. These customer references can provide you with
some invaluable intelligence about how the application performs on a
day-to-day basis, information that the vendor might not even be aware
of.
- Develop some predetermined and agreed-upon performance
metrics with the vendor and bake them into the contract. If they are
baked into the contract, the vendor is more likely to provide you
upfront with reliable data that they will stick with.
If
more organizations become focused on the performance and resource utilization of the applications they acquire, vendors will begin to make it a business priority. Some might even come to see it as a distinctive competitive advantage, and one worth advertising. Perhaps it might even bring about a rebirth of focus on more efficient coding and avert the train wreck down the road that Wirth’s law predicts.
At Vallum Software, we take the performance and
resource utilization of the applications we create very seriously. We
place a strong emphasis on code efficiency to ensure that our customers
get the most out of our solutions within the smallest footprint, and that there are no surprises down the road if they expand their use.
What
is your take on this? What application performance issues are you seeing
in your network? For those of you who date back to the “64K days,” what differences do you see in application code today as it relates to performance?
I hope this information has been useful to you, and as always, I welcome any comments. Please check out
Vallum and our partner the
GMI-Foundation.
About the Author:
Lance
Edelman is a technology professional with 25+ years of experience in
enterprise software, security, document management and network
management. He is co-founder and CEO at Vallum Software and currently
lives in Atlanta, GA.