Monday, May 16, 2016

Why your MSP cannot keep pace with your changing business needs



Many small and medium-sized businesses (SMBs) utilize managed service providers (MSPs) for network monitoring and management (NMM) services. The reasons generally center on cost and complexity: licensing for on-premise solutions can be prohibitively expensive, and many SMBs do not have the in-house technical resources to install, maintain, and use these complex products.

MSPs tout that they are the experts and can deliver these
services more efficiently than an SMB can itself. While it is true that MSPs have the experience and the needed technical resources, the cost of an MSP service can match or exceed the licensing and support costs of on-premise solutions. The main driver for engaging an MSP is the elimination of the SMB's on-premise technical resource requirements, which in many cases are substantial, and that alone can seal the deal.

Here is an often-overlooked question: besides the cost savings, what exactly is the value proposition of utilizing an MSP? Are the service, capabilities, and responsiveness better? If you compared an SMB using an on-premise network monitoring solution with an MSP providing the same service (apples to apples), would the MSP deliver a better service? My answer to that question is no, and here is why.

The secret that many SMBs overlook in their selection of an MSP is that MSPs are themselves using third-party vendor solutions to provide network monitoring services to their clients, and these vendor solutions suffer from the very same cost, complexity, and inflexibility issues as on-premise vendor solutions. They all utilize the same centralized architectural approach, which leads to solutions becoming monolithic, difficult to maintain and enhance, inflexible, and hard to use. MSPs are therefore subject to the limitations of their vendor's solution, and are necessarily incapable of providing a better service than what an SMB could deliver on its own with the proper technical resources. In fact, I would go so far as to say that the service is diminished, given that the MSP's resources are spread across multiple customers.

Regardless of whether you are using an MSP or an on-premise vendor, if you have a new business requirement that calls for new functionality, be prepared to wait months for it to be delivered. The reason is that the requirement eventually ends up on the doorstep of a software vendor, and the centralized architectural approach that vendors employ requires a new release of their solution in order to add new functionality. These release cycles require extensive end-to-end testing of the whole solution and can take months to complete. If the vendor does not see the value proposition of your enhancement request, it may be pushed to a subsequent release, or you may not see it at all. Vendor solutions, whether MSP or on-premise, are limited by their architectures, and simply cannot keep pace with the constantly changing requirements of a dynamic business environment. Don't believe me? Here is a little test: ask your MSP for a new monitoring capability and see what happens.

There needs to be a change in vendor architectures that addresses the complexity, cost, and flexibility issues that plague network monitoring solutions. This need is the reason Vallum Software was founded. Our solution, the Halo Manager, has a NextGen decentralized, modular architectural approach that addresses these issues for organizations and MSPs alike. The modular nature of our architecture allows new functionality to be introduced without a new release; new functional capabilities can be delivered in a few days or weeks, depending on their complexity. We have a free trial download of our solution on our website. Don't be too shocked by the small download size: our architectural approach does not require a central server install.

I hope this information has been useful to you and as always, I welcome any comments. Please check out Vallum and our partner the GMI-Foundation.

About the Author:

Lance Edelman is a technology professional with 25+ years of experience in enterprise software, security, document management and network management. He is co-founder and CEO at Vallum Software and currently lives in Atlanta, GA.

Tuesday, May 10, 2016

How important are analyst evaluations of software vendor solutions, and are there any blind spots?

I came across an article the other day about NetScout, a provider of network performance management products, which is suing Gartner Research over its Magic Quadrant placement, with which it was obviously unhappy. NetScout is claiming that the vendors that pay for Gartner Research services are ranked above those that do not. The article brought a couple of questions about research analysts to mind. The first: just how important are the Magic Quadrant placement and analyst ratings in general? The second, one I have thought about for a while: do analysts have a blind spot with regard to broader market needs?

There are some organizations out there that believe analyst reports and rankings are the gold standard, and will not accept a vendor unless it is on the list. However, my experience has been that the majority take a more pragmatic approach. Many construct a fairly comprehensive vendor evaluation process, which includes requests for proposals (RFPs). Every organization is different, and a solution that makes sense for one organization may not make sense for another. It is imperative that organizations understand their environments and connect the dots between their needs and the available solutions on the market.

There are many elements that need to be considered in a software purchase decision, such as price, functionality, implementation and training costs, enhancements, and overall maintainability. Each element will carry a different weight from organization to organization; some will be more price-sensitive, while others will require different functional capabilities. Measuring and matching all of these elements against the needs of an organization can be a very tedious and time-consuming process, but it is time well spent to ensure that you end up with the best solution for your organization.

While analyst firms provide good-quality analysis of software vendor solutions, there is one area that is typically a blind spot for them. One of the metrics that analysts typically rate vendors on is product functionality. While a vendor can get dinged for having a difficult or cluttered interface, more functionality in one direction or another generally translates to a positive. The blind spot is that not every organization wants or needs all of the functionality that a solution has. It's the old 80/20 rule in software: 80% of users will only use 20% of the features of a solution. So the question is, who are the analysts targeting their reports at, the 80% or the 20%? And is it made clear in their analysis which one it is? If a software vendor targets the 80% with a smaller feature set that is much more pertinent to them, it is very likely to get dinged for not being a visionary. The unintended consequence of all of this is a functionality arms race among vendors vying to be positively rated by analysts. In many cases, they are essentially developing features for 20% of organizations that the other 80% don't need and will likely never use. Another unintended consequence is that more functionality generally translates into a larger and more complicated product. This complexity generally leads to complicated implementations, higher-end hardware requirements, and longer learning curves, all of which lead to a larger price tag.

Is Gartner Research operating a pay-to-play business model? While I have watched the marketing people in the software companies I have worked for fall all over themselves to wine and dine analysts, I have seen no indication that it bought us any favor other than a more attentive ear. Are companies really blindly outsourcing their technology decisions to third-party analysts? I don't think so, but NetScout seems to think so, or perhaps there is an unseen agenda behind all of this.

I hope this information has been useful to you and as always, I welcome any comments. Please check out Vallum and our partner the GMI-Foundation.

About the Author:

Lance Edelman is a technology professional with 25+ years of experience in enterprise software, security, document management and network management. He is co-founder and CEO at Vallum Software and currently lives in Atlanta, GA.

The case for an installed software agent, and why there is so much misinformation out there

Some of you know that the Vallum technology and the Vallum Halo Manager itself utilize an installed agent for some metrics and capabilities, but you may not know why we went in this direction or the thought process behind it. I have been asked the question on a few occasions, so I thought I would spend some time on this topic and explain the ins and outs of it. I also touched on this in a previous blog last year. Hopefully this information will help you make informed decisions if you are in a selection process for a network monitoring solution, or any solution where an agent is to be used.

Before you can really begin to understand agents, the first thing you have to cut through is all the misinformation and the agendas that are out there. You will commonly hear from vendors that their solutions are agentless, as if this is some badge of honor or big positive. They will tell you that agents have huge resource overhead, that they will crash your systems, and that "we don't use them, so our solution is better." But let's step back a moment and examine exactly what they are saying. Are agents as a category inherently bloated, with a tendency to crash your systems? Or are many just written poorly? While there is most definitely a very specific artistry to developing agents, at the end of the day they are generally no different than any other software. If you have a web browser that is constantly crashing and eating up the resources on your computer, do you blame web browsers in general, or do you blame the vendor that developed it? Of course it is the vendor's responsibility.

The reason that many software vendors bad-mouth agents is that they themselves are not very good at writing them. Many took a stab at developing them with poor results. In addition, the agents ended up being single-purpose, only able to function with their own solution. If you had three agent-based solutions, you had three agents installed on your platform. Organizations pushed back on the poor quality and the multiple agent installs. This is why many vendors pivoted toward an agentless approach and spread negative information about agents to spin the move as a positive. To be quite honest, there is an artistry to writing agents, as I previously mentioned, and it is frankly not a competency that many vendors possess or want to possess. The agent is a less visible sub-component of their core solution that needs to be maintained in conjunction with it. It is not something that vendors generally focus on or directly sell, although this does not stop many of them from selling their agents as an expensive line item. The poor agent quality, combined with the cost, leads many to erroneously avoid agents in favor of less functionally capable agentless approaches.

So now that we have hopefully cleared up the misinformation around agents, what exactly is the value proposition of agents vs. an agentless approach? A well-written agent will be much more flexible given its direct presence on, and interaction with, the platform and its services. It will have access to a greater number of metrics and to granular data that is much more accurate and timely. The presence of the agent on the platform can provide remediation capabilities, such as killing processes, without the addition of external scripts. Agents can also function independently if communication with the central manager is lost, which allows processing to be decentralized across the enterprise, providing scalability that is not possible otherwise.
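
To make the contrast concrete, here is a minimal sketch of the kind of local check-and-remediate loop an installed agent can run. This is illustrative only, not the Halo Agent's actual code; the watched process name and memory ceiling are hypothetical, and it leans on the third-party psutil library.

```python
import psutil  # third-party: pip install psutil

WATCHED_PROCESS = "report_gen"  # hypothetical runaway service
MEMORY_LIMIT_MB = 512           # hypothetical per-process ceiling

def check_and_remediate():
    """Scan local processes and kill the watched one if it exceeds its memory limit."""
    for proc in psutil.process_iter(["name", "memory_info"]):
        try:
            if proc.info["name"] == WATCHED_PROCESS:
                rss_mb = proc.info["memory_info"].rss / (1024 * 1024)
                if rss_mb > MEMORY_LIMIT_MB:
                    proc.kill()  # direct remediation, no external scripts needed
                    print(f"Killed {WATCHED_PROCESS} (pid {proc.pid}) at {rss_mb:.0f} MB")
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # process exited or is protected; skip it

if __name__ == "__main__":
    check_and_remediate()
```

Because the loop runs on the platform itself, it keeps working even if the link to a central manager goes down, which is precisely the decentralization point made above.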

Agentless approaches, on the other hand, rely on analyzing packets flowing across the network and on utilizing existing exposed APIs on the platform. Packet capture and analysis can be very resource-intensive, not only for the network but also for the solution doing the capturing. While it can provide data on service performance and availability, it cannot provide deeper, more granular data on the platforms themselves.

Agentless solutions typically rely on polling technologies such as the Simple Network Management Protocol (SNMP) and on querying Windows Management Instrumentation (WMI) for specific server metrics, neither of which the monitoring vendor has any control over. Despite its name, SNMP is not simple: you only have access to the data that the platform vendor has exposed, and it cannot easily be modified or extended. WMI is specific to Windows platforms. Neither method will provide the metrics that can be obtained by an agent, and each comes with its own complexities and security risks. Finally, both methods may or may not already be enabled in an environment, and enabling them can be quite time-consuming.
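
For comparison, here is a minimal sketch of an agentless SNMP poll using the third-party pysnmp library (written against its v4-style high-level API; the target address and community string are placeholders). Note that it can only read what the platform's MIB already exposes, in this case sysUpTime.

```python
# Minimal agentless SNMP poll; assumes pysnmp's v4-style hlapi (pip install pysnmp).
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

def poll_sysuptime(host: str, community: str = "public") -> None:
    """Read sysUpTime (OID 1.3.6.1.2.1.1.3.0) from a remote platform."""
    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),   # SNMPv2c
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),
        )
    )
    if error_indication or error_status:
        raise RuntimeError(f"SNMP poll failed: {error_indication or error_status}")
    for oid, value in var_binds:
        print(f"{oid} = {value}")

if __name__ == "__main__":
    poll_sysuptime("192.0.2.10")  # placeholder host
```

Contrast this with the agent sketch above: the poller gets whatever the MIB offers and nothing more, while the agent can read, act on, and remediate anything the platform itself can see.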

So this leaves us with two important topics: charging for agents, and making them closed and single-purpose. Agents are generally not a main selling point of the solutions that utilize them, but they are the single point of enablement for the solution: the agent has to be installed before you can manage the platform. By charging for agents, vendors are simply crippling the rollout of their own solutions. Instead of buying 500 agents, a customer may only buy 250 of them. The closed, single-purpose nature of the agents places organizations in an unpleasant position: if they have three agent-based solutions, they will have three agents installed on their platforms.

At Vallum we were painfully aware of these issues, and this is one of the reasons we partnered with the GMI-Foundation to utilize their GMI-Agent. The "GMI" stands for General Management Interface, and it is open and free to use. Vallum's version is called the Halo Agent, and it effectively addresses the issues of cost, complexity, and agents being closed and single-purpose. To that end, Vallum recently announced the release of a software development kit (SDK) for developing applications for the Halo Manager solution and the Halo Agent. The applications are called Halo Apps, and you can find a growing selection of them in the Halo App Store on Vallum's website. Halo Apps allow organizations to tailor the Halo Manager solution to their specific requirements. They can be created in a few days or weeks depending on their complexity. They are modular and reusable, and they do not require end-to-end QA testing of the Halo Manager solution, so they can be developed and deployed quickly. The bottom line is that the multi-purpose Halo Agent, along with the ability to develop new functionality via Halo Apps, allows organizations to free themselves from the bloated and inefficient agents of other vendors.
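
To illustrate the general "app store" idea in code, here is a generic plugin-loading sketch. To be clear, this is not the Halo SDK; the apps package and the register() hook are hypothetical, but the pattern shows how modular functionality can be dropped in without re-releasing the core.

```python
# Generic sketch of app-style modular loading; NOT the actual Halo SDK.
# The `apps` package and the `register(registry)` hook are hypothetical.
import importlib
import pkgutil

import apps  # hypothetical package directory holding drop-in app modules

def load_apps(registry: dict) -> None:
    """Discover every module in the `apps` package and let it register its capabilities."""
    for mod_info in pkgutil.iter_modules(apps.__path__):
        module = importlib.import_module(f"apps.{mod_info.name}")
        if hasattr(module, "register"):
            module.register(registry)  # each app adds its capabilities here

if __name__ == "__main__":
    registry: dict = {}
    load_apps(registry)
    print(f"Loaded capabilities: {sorted(registry)}")
```

Because each app is self-contained, only the app itself needs testing, which is what makes delivery in days rather than full release cycles plausible.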

I hope this information has been useful to you, and as always I welcome any comments. Please check out Vallum Software and our partner the GMI-Foundation.

About the Author:
Lance Edelman is a technology professional with 25+ years of experience in enterprise software, security, document management and network management. He is co-founder and Director of Technology at Vallum Software and currently lives in Atlanta, GA.

Monday, May 9, 2016

A day in the life of a power user, and six things you should look for in a software download to make your end-user experience stress-free

Most of us have been there. You have a business problem to solve, so you go to Google and start searching for software solutions. They are very easy to find, and there are a myriad of different vendors with loads of different solutions, some commercial and some free. Yes, they showed up in a search, but for many of the freeware solutions it is difficult to determine from the website what they actually do. Some do not explain their product very well; in some cases there is no explanation at all. Some provide no screenshots or product details to set an expectation of what you will be getting if you download and try them.

You pick one that has reasonable product details, download it, and try to install it. Two things generally happen: either you can't get it installed, or you can install it but can't get it to work. Time to read the documentation, which is oftentimes as difficult to decipher as the product descriptions on their sites. You had an easier experience with the directions that came with the toaster oven you recently bought!

Okay, getting back to our software problem... The instructions are complex, vague, and look like they were created by someone who has never sat down, installed, and documented the experience of their own product. The installation instructions presume that you are a developer and are full of jargon and acronyms, most of which you don't understand. You muddle halfway through an installation only to find that they left out a key step in their instructions. You can't go back and perform the key step, so you abandon the installation. OK, maybe it was just a bad apple; there have to be better ones in the barrel, right?

So, you go back to Google and repeat the process a couple more times with similar results. Frustration starts to set in, and it is time to reach out to support as a last resort. You revisit the least worst of the products you previously downloaded and go to its support page (if there is one). They make you register to access their support topics. There are a boatload of topics, and many of them are dated. Comparison charts, if there are any, reference older versions of competitors' solutions. You attempt a few of the fixes that you find, but nothing works. Frustration mounts, and you try to contact support directly. No phone number, just an email address. You shoot them an email, and there is no response. Who are these people? Why would you put forth the effort to build a software product and then not support it? Why would you not put forth a well-thought-out effort to explain your product on your website?

Through trial and error, you finally get a product to work. It turns out to be a limited trial version that lacks the functionality of the full enterprise version documented on their website, which is what you based your selection on in the first place. The trial version won't do what you need it to do. Do you risk it and buy the full version? No, not without seeing it first. Back to square one!

After several iterations of this process, you give up in total frustration. If you're lucky, you have some colleagues or friends you can reach out to who can recommend a product. You go back to all of the products you attempted to install in order to uninstall them, and there are no uninstall utilities for many of them. You have to delete everything manually and hope you got it all. Hopefully there were no registry entries. On top of all of this, you start getting SPAM emails from websites you would never visit, undoubtedly the result of registering for the support sites.

Appallingly, this was, almost verbatim, my experience in the last two application searches I embarked on: finding 1) a tool to enhance my Webalizer statistics, and 2) a shopping cart tool for my website. I seem to have crossed paths with some vendors that did a great job on their search engine marketing, but a very poor job of providing a favorable end-user experience (EUE). A few of the brands I encountered might surprise you, but the results were similar: vague product descriptions, clunky installs, difficult documentation (sometimes no documentation without registering, and SPAM after registration), and no help if I needed to uninstall.

Unfortunately, we live in the age of "build a website and, like magic, you're a business owner." A few clicks and $125 later, and you have registered your entity as a corporation. But what's missing, even for some enterprise software vendors, is attention to the details of the EUE. We have entered the age of the proliferation of applications, fueled in part by the mobile device and tablet explosion, and in part by an entrepreneurial spirit borne of a troubled tech economy (see the recent layoffs by Microsoft and HP, and the other 40,000+ tech jobs lost in the first 6 months of 2014[1]).

Unfortunately, for us power users, our EUE isn’t going to get any better anytime soon. If you are a developer or programmer reading this, I encourage you to put yourself in my shoes and try to see your program from my side of the keyboard. And for you end users out there, here are six things to look for that can steer you towards a favorable EUE.
  1. Where there is smoke there is usually fire and … lipstick on the pig. If you can’t immediately determine what a product does from the vendor’s website, you should probably avoid it. Poor website documentation will usually be followed by poor install/uninstall processes and instructions, followed by poor support and a poorly designed product. Usually… be aware that there are those vendors that have a poor product surrounded by layers of professional marketing. It might take a little effort to see the lipstick on the pig.
  2. Take note of the download sizes. Product downloads can dramatically vary in size. Some can be a few megabytes while others can be a few hundred megabytes. While a small download can sometimes mean the product is limited in functionality, it can also mean one that was exceptionally well written and is very “tight.” On the other hand, a very large download can mean software that is bloated and poorly written. It could be an application that is older and has had many fingers in it with each programmer taking the easy route and simply adding additional libraries to the mix instead of evaluating and rewriting code. In many cases, there is a direct correlation between a large installation file and complex installation and implementation processes.
  3. Are the requirements in the documentation? How many times have you downloaded a product only to find that it required the installation of additional software and hardware? Look for this and if you don’t already have the requirements in place, find out what is involved to install them and if there are any licensing requirements involved. Be sure to check the release requirements.
  4. How long has the product been GA? While new products are not always poor, there are those vendors that choose to have their customers QA their products. If this is the case, you can generally find feedback regarding the product and the vendor on the internet, preferably on the vendor’s support page.
  5. Search for up-to-date ratings and objective reviews. Search for up-to-date independent ratings of the products. Beware of ratings done by vendors that include their own products, as those will not be objective. Also beware of user product reviews where vendors have submitted their own positive reviews. You can spot these because they will have the same wording or focus as the vendor's website copy, and they will be nearly all 5-star positive.
  6. Scrutinize the vendor's support page and policies. Check out the vendor's support before you devote any significant time to the product. Is there phone support or only email? What do end users have to say about the support? With licensed products, does the vendor include support, and if not, what does it cost?
This blog and list were the result of many years of experience on both sides of the vendor fence: developer and end user. Hopefully they will provide you with the knowledge to sniff out the bad products and vendors and make your life a little easier.

I hope this information has been useful to you and as always, I welcome any comments. Please check out Vallum and our partner the GMI-Foundation.

About the Author:
Lance Edelman is a technology professional with 25+ years of experience in enterprise software, security, document management and network management. He is co-founder and CEO at Vallum Software and currently lives in Atlanta, GA.

Wirth’s Law, 64k memory, and 3 things to ask your software vendor for during the next purchase

I was downloading a software application for my computer the other day, a product that shall remain nameless. Typically, I start the download and then work on something else and come back to it later to complete the install. This time, for no particular reason, I decided to watch the download as it occurred. It was over 145 megabytes – keep in mind this was a business application, not an operating system – and not some complex solution with multiple subsystems and moving parts. Needless to say, I cancelled the download and moved on. What in the world would require this install package to be over 145 megabytes?

Think about all of the software you have installed on your computer and how much disk space it takes up. Yes, I know, disk is cheap and you have loads of unused space and you don't really care, but what about memory? If these installs are so bloated in download size, what are they going to be like when they are running? How much memory are they going to consume? And what if you have the need to run three or four or more of these hogs on your system? Have you recently opened Task Manager, clicked on the Processes tab, and then sorted the processes top to bottom on private memory utilization and CPU? If you have never done this or haven't done it lately, try it. You might be surprised to find what is using up the memory and processor on your computer.
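
If you would rather script that exercise than click through Task Manager, here is a small sketch using the third-party psutil library that prints the top resident-memory consumers, roughly the same sort described above.

```python
import psutil  # third-party: pip install psutil

def top_memory_processes(count: int = 10) -> None:
    """Print the `count` processes using the most resident memory (RSS)."""
    procs = []
    for proc in psutil.process_iter(["pid", "name", "memory_info"]):
        try:
            procs.append((proc.info["memory_info"].rss, proc.info["pid"], proc.info["name"]))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # process exited or is protected; skip it
    for rss, pid, name in sorted(procs, reverse=True)[:count]:
        print(f"{rss / (1024 * 1024):8.1f} MB  pid={pid:<6}  {name}")

if __name__ == "__main__":
    top_memory_processes()
```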

There is a computing adage that I came across a while back called Wirth's Law. Wirth's Law basically states that software is getting slower more rapidly than hardware becomes faster. There is a variant on Wirth's Law called Gates' Law, borrowing its name from Bill Gates. It makes a more humorous point, stating that commercial software generally slows by 50 percent every 18 months, thereby negating all of the benefits of Moore's Law, which deals with improvements in the capabilities of computing devices.

What exactly is the point of all of this? The point is code quality and efficiency, because inefficient code impedes your ability to work productively. When a programmer needs new functionality, instead of writing a new, more efficient function, they take a shortcut and link in a library on top of the existing code. As this process is repeated over and over, with each new release linking new libraries into larger libraries, the composite code begins to resemble a house built on top of a house, built on top of another house. Over the course of time, the code begins to look like a pile of tangled houses after a tornado, with the bottom layers becoming dead code. The negative effect on the performance of the application becomes clearly noticeable to users.

When I started out in IT in a datacenter back in the 1990s, I worked with some people who would share stories about programming on hardware with 64K of memory. In that environment every line of code was carefully considered, written or rewritten because memory was limited and costly. It was simply not possible to link in a new library and continue on. As a result, the code was very efficient and compact. When programmers got a welcomed memory bump to the computer, it was a big deal because they also got quite a jump in performance as well.

Fast forward to today. Servers can have 256 GB or more of memory, while even laptops and desktops can have 8 GB or more. Memory is plentiful, so who cares? With plentiful memory, it would also appear that efficient coding is no longer needed. Think again. The problem does not necessarily lie with one application that is a poorly written memory hog. Rather, it manifests itself in business environments that are running dozens or more of these applications side by side. If Wirth's Law holds, hardware advances will not be able to outrun the performance and utilization costs of poorly coded software.

Some have concluded that poorly written code is the product of lazy programmers focused on cranking code out with no eye to efficiency. Others say it is the software organization's fault for not giving developers the time or direction to rewrite existing code more efficiently. Regardless of the cause, it is clear that efficient coding is, in most corners, a lost art, and apparently is no longer taught in schools, if it ever was. (Academicians, please sound off here and defend your institutions!)

So where do we go from here? The solution to this dilemma lies with business organizations, which are the largest consumers of business software. These organizations have the influence because they have the checkbooks. They need to start holding the vendor community responsible for the performance of the applications they procure.

In defense of the software vendor community – of which I am a member – vendors generally publish minimum hardware and software requirements, but these fall far short of giving an organization the data it needs to understand how an application will perform in production. What is needed is hard data on how the application has performed in current deployments, and software vendors have this data from previous implementations. Demand it from them!

Here are 3 “ask fors” for your business during your next software purchase:
  1. Ask for performance data from the vendors that spans smaller implementations to larger ones. This will help you understand how well the application scales and help you avoid issues if you are rolling out the application in phases.
  2. Ask for references that are similar in size and network configuration to yours. Put your network or application specialists in contact with theirs, and discuss performance pre- and post-deployment. These customer references can provide you with some invaluable intelligence about how the application performs on a day-to-day basis; information that the vendor might not even be aware of.
  3. Develop some predetermined and agreed-upon performance metrics with the vendor and bake them into the contract. If they are baked into the contract, the vendor is more likely to provide you with reliable data upfront that they will stand behind. (A minimal sketch of such a check follows this list.)
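
As promised in guideline 3, here is a minimal sketch of what an automated check against contracted performance metrics might look like. The ceilings and sampling window are hypothetical stand-ins for whatever you negotiate, and it again assumes the third-party psutil library.

```python
import psutil  # third-party: pip install psutil

# Hypothetical contracted ceilings; real values come from guideline 3.
MAX_RSS_MB = 400.0
MAX_CPU_PCT = 25.0

def acceptance_check(pid: int, samples: int = 30, interval: float = 2.0) -> bool:
    """Sample a deployed application's footprint and compare it to contracted ceilings."""
    proc = psutil.Process(pid)
    worst_rss = worst_cpu = 0.0
    for _ in range(samples):
        worst_rss = max(worst_rss, proc.memory_info().rss / (1024 * 1024))
        worst_cpu = max(worst_cpu, proc.cpu_percent(interval=interval))
    print(f"peak rss={worst_rss:.1f} MB (limit {MAX_RSS_MB}), "
          f"peak cpu={worst_cpu:.1f}% (limit {MAX_CPU_PCT})")
    return worst_rss <= MAX_RSS_MB and worst_cpu <= MAX_CPU_PCT
```
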
If more organizations become focused on the performance and resource utilization of the applications they acquire, vendors will begin to make it a business priority. Hopefully, some might even see it as a distinctive competitive advantage worth advertising. Perhaps it might even bring about a rebirth of a focus on more efficient coding and avert the train wreck down the road that Wirth's Law predicts.

At Vallum Software, we take the performance and resource utilization of the applications we create very seriously. We place a strong emphasis on code efficiency to ensure that our customers get the most out of our solutions within the smallest footprint, and that there are no surprises down the road if they expand their use.

What is your take on this? What application performance issues are you seeing in your network? For those of you that date back to the “64k days,” what difference do you see these days in app code as related to performance?

I hope this information has been useful to you and as always, I welcome any comments. Please check out Vallum and our partner the GMI-Foundation.

About the Author:

Lance Edelman is a technology professional with 25+ years of experience in enterprise software, security, document management and network management. He is co-founder and CEO at Vallum Software and currently lives in Atlanta, GA.

The B2B software procurement process is burdensome and broken. 5 guidelines to help you avoid the 'software bloat' problem and have better implementations.

I was reading an article the other day that referenced the
80/20 rule with regards to software. For those of you that haven’t heard of it, it states that 80 percent of users only use 20 percent of the features of a software product. We have all used Microsoft Word at one point or another, but what percentage of Word’s features do you really use? Of all the software packages you have, do you really use more than 20 percent of any of them? And more importantly, how much does the 80 percent of the features that you never use get in the way of finding and using the 20 percent that you do use?

The 80/20 rule has me thinking about software development and how vendors go about addressing organizational needs and developing enterprise solutions. I have been on the vendor side of enterprise software for a number of years. I have seen firsthand how a large number of established vendors, and some startups, approach the process of developing enterprise software with regard to functionality. While each vendor solution generally starts off with a modest feature set, from there vendors begin furiously adding functionality on a regular release cycle, in a functionality arms race to be crowned the vendor with the most features. The functionality requests, for the most part, come from individual customers or from the vendor's own development group, with little or no vetting in the marketplace. Rarely is any functionality ever removed, because the vendor is afraid a customer – even one customer – is going to complain.

With each new release, the new functionality must be integrated into the existing product, not only at a technical level but also in the user interface. As more and more functionality is layered into the products, they become slower, more complex to use (as in harder to find the 20%), and more difficult to maintain, with some becoming unstable. The result is a user interface cluttered with a mishmash of selections, making it complex and hard to navigate. Addressing the end user's needs was the point of developing the solution in the first place, but now the end-user experience perversely ends up getting lost in the "process."

While software vendors most certainly have their share of the blame for this situation, user organizations are not completely blameless. Have you ever read an enterprise software request for proposal (RFP)? When user organizations create RFPs, they generally throw in everything including the kitchen sink. Because IT budget is so hard to come by, they ask for capabilities that they don't need and will probably never use (the 80%), which in turn obscures the ones they will use (the 20%). Vendors are then judged on their ability to deliver the kitchen sink, and their responses are in many cases yes-or-no answers. So if you are the typical enterprise software vendor, what choice do you have? You are compelled to answer "yes" to as many questions as you can, which essentially enters you into the functionality arms race whether you intended it or not.

User organizations’ requirements don’t evolve into long functionality laundry lists by accident. This is a result of the way in which enterprise software solutions are delivered. You have to make sure you get everything you can possibly think of in there because you generally have one budget approved for the solution you need to solve the problem you have. Once you select the solution, that’s it, you’re stuck with it, and you’re not going to be able to go back for a redo.

So what's the solution? At some point there must be a paradigm shift in delivering enterprise B2B solutions. Instead of continuing to deliver bloated solutions of which users will only leverage 20 percent, the functionality needs to be broken down into bite-sized, more easily consumable pieces. Smaller, more narrowly defined pieces of functionality allow organizations to target specific requirements more easily. Customers can then request the specific functionality that best fits their requirements. The result is more successful implementations, where you no longer have to pound a square behemoth solution into a round, focused problem.

Here are 5 guidelines to help you avoid the software bloat pitfall and have better implementations:
  1. Clearly define the business requirements and group them into smaller functional groups that make sense.
  2. Break down the technical and functional requirements into understandable elements and align them with the business requirements. This will help you understand the requirements better and help eliminate those that do not contribute to the business need.
  3. Don’t boil the ocean. Group the requirements into several functional groups and implement a priority weighting on each requirement and each functional group, then attack the problem piecemeal.
  4. Avoid the behemoth “everything is in there” solutions. Select smaller more specific solutions from vendors that are better suited for addressing each group. If you have a new vendor that you have not worked with before, have them prove themselves on lower priority items first before the higher priority ones.
  5. Identify all integration points between solutions early on and clearly define them. If you have more than one vendor, bring them together as partners early in the project.
In addition to the monolithic software-bloat problem I described above, the software industry also has issues with the number of installed agents that proprietary solutions require, which makes updates and patches very time-consuming and resource-intensive. Our response to this problem is the Halo Manager solution, which has a unique modular, decentralized architecture designed to have functionality added to it through specialized applications called Halo Apps. Halo Apps can have a nearly endless array of capabilities and allow organizations to tailor the Halo Manager solution to their specific needs. Think of adding apps to a mobile device and you have a good idea of our model. There is a growing selection of Halo Apps in the Halo App Store on Vallum's website. The Halo Apps are easy to download and deploy with the Halo Manager solution.

What is your take on this problem? Have you been privy to an implementation that took longer than expected, had more speed bumps than expected, and was difficult to use? This blog is the place for feedback so sound off if you have been a part of a large software deployment that you use only 20 percent of.

One of our goals for starting Vallum Software was to provide problem-solving solutions at the lowest possible cost with the quickest installations. We believe we have accomplished that goal with the Halo Manager solution and our plug-and-go Halo Apps from the Halo App Store. Our focus is on the end user experience in the network monitoring and management market. Find out more at http://vallumsoftware.com.

I hope this information has been useful to you and as always, I welcome any comments. Please check out Vallum and our partner the GMI-Foundation.

About the Author:

Lance Edelman is a technology professional with 25+ years of experience in enterprise software, security, document management and network management. He is co-founder and CEO at Vallum Software and currently lives in Atlanta, GA.

Originally, software agents were created to simplify IT, not to make it more complex and unstable. Now what?

I recently read a poll stating that 90% of systems and network admins have seen more responsibilities and demands on their time in recent years. Nine out of ten stated that there are more complexities involved with the systems they manage. Networks and server architectures are obviously getting more complex, and the task of administering them is becoming much more time-consuming.

As organizations continue on the path of automating more
and more of their processes, software vendors have responded with a growing variety of enterprise solutions. You can see this in areas such as IT Asset Management, Software Asset Management, Security Information and Event Management (SIEM), Application Performance Management (APM), and enterprise scheduling, to name a few. While these markets are all quite different in their focus, they all have one thing in common: they need to gather data from a diverse array of remote systems. In many cases they also need to execute commands on those systems.

So how is this accomplished among proprietary software systems and a myriad of hardware components with their own software complexities? Most vendors utilize a software agent, which requires the installation of a service on the remote platforms to perform these duties. Sounds simple, but there are issues that organizations face in deploying and managing agents. The first is that each of these agents is specific to one application. Let's say you have 500 servers in your environment and, conservatively, half a dozen different applications that require agents. This leaves you with 3,000 installed agents to manage and maintain, all specific to individual applications; not an easy task. The second issue is that the agents themselves are not a technology priority for most software vendors, and their development sits far down the priority list. This has opened the door to poor development, leading to high resource utilization, a lack of system stability and, in some cases, server outages. The result is that agents have understandably gained a bad reputation in the market. While some vendors have responded by going "agentless," agentless approaches lack the functionality and capabilities that a properly written agent can provide.
     
So what is the solution? The solution is for software vendors to get out of the agent business, and no, I am not suggesting that they all go agentless. Agents are still the best approach, but you need a well-written agent that is solid, multi-purpose, programmable, and fully supported. These requirements are part of what sparked the founding of Vallum Software. Vallum's solution, the Halo Manager, has a unique decentralized architecture built around a multi-purpose agent called the Halo Agent. Our goal with the Halo Agent was to provide a stable, secure, multi-purpose agent that can serve as an organization's only agent, and we believe we have accomplished that.

And so the "now what" is that the Halo Agent won't cost you an arm and a leg. Actually, it's open source, free to use, and included with the Halo Manager solution. The Halo Agent is designed to have its functionality augmented with the addition of specialized applications called Halo Apps. Halo Apps add functionality to the Halo Agent(s) and the Halo Manager solution in a modular manner, allowing an organization to tailor the solution to their specific needs. There is a growing selection of Halo Apps in the Halo App Store on Vallum's website. Can't find the functionality you're looking for in a Halo App? We can build it for you in a few days or weeks, depending on the complexity. Want to build your own Halo Apps? There is a fully documented software development kit (SDK) in the store that will allow you to build custom Halo Apps to your specific requirements. The capabilities of Halo Apps are nearly endless, providing a level of flexibility that will allow you to better manage your IT complexity and improve the speed and delivery of services to your organization.

Our ultimate goal is for network admins to have one agent to maintain across their environments and applications. It is what should have been done in the first place, but the beginning of the enterprise software era, coupled with the dotcom boom (er, bust), created a sort of technology free-range mentality, much like the open-range prairie land race of the late 19th century. The software race was on, and speed to market was more important than solid, foundational technology. We're bringing the foundation back to technology with the Vallum Halo Manager and the Halo Agent. For those of you around in the late 90s and early 2000s, what do you think? We'd like to know your take on how disparate IT environments have become. Comments welcome.

Take the Halo Manager solution and the Halo Agent for a spin. Your only cost is the time it takes to download and install, which is mere minutes. Full documentation comes with the download package, and the software is very intuitive; you will be monitoring and managing your NOC very quickly.

I hope this information has been useful to you and as always, I welcome any comments. Please check out Vallum and our partner the GMI-Foundation.

About the Author:
Lance Edelman is a technology professional with 25+ years of experience in enterprise software, security, document management and network management. He is co-founder and CEO at Vallum Software and currently lives in Atlanta, GA.