August 5 2016

Congratulations to PCIT Clients for Defying the Odds 09:59 am

As CEO of PCIT I would like to extend a heartfelt thanks to our clients who have worked with us over the last 12 months to protect their data and digital resources.  Together we have defied the odds and won against a host of hackers in a big way.  Comparing a recent global survey by the Osterman group to PCIT's operational results, we find our clients had significantly less malware, less downtime from malware, and never paid a dime to get their data restored.  According to the survey of over 540 global companies, Canadian organizations PAID the ransom 82% of the time, and approximately one third of those surveyed were forced into this situation.

If our customers had not been so proactive in educating their staff and promoting a security-conscious approach, we would never have gotten this far.  In the summer of 2015 our message to clients was that Information Technology could NOT control the security of their data and their network without everyone's participation and ownership of this concern.  At that point many of our clients had already invested in best-in-class technology but were also open to taking the next step.  Together we had great results.  If we were keeping score for 2015, it would look something like this:

| Measure | Typical Canadian organization surveyed | PCIT customers |
| --- | --- | --- |
| Profile | 5,400 staff, plus a CIO, IT Director or Chief Information Security Officer and lots of internal IT staff; respondents across Canada | 250 or fewer staff; most have PCIT as their 100% IT resource, with a few cases where PCIT is responsible for operational results and works alongside one full-time internal staff member; Peace River region focused |
| Suffered a security attack in the last 12 months | 72% | 60% |
| Lost data to ransomware and PAID between $1,000 and $50,000 to get it back | 72% | 0% |
| Lost data when they refused to pay the ransom | 82% | 0% |
| Severe downtime (more than a day spent restoring endpoint functionality) | 63% | 0% |
| More than 9 hours to remediate | 60% | 0%* |
| Upper management and C-level executives at higher risk | 8% of attacks target the C-suite, 22% target managers | Results seem very similar to the typical Canadian organization; we don't have hard data |
| High-risk impact | 43% lost revenue, 25% stopped operations | Data not available; we would estimate the actual results were much, much lower |
| Confident they can stop security issues (after all, they have lots of 'smart people' on staff, and likely someone solely in charge of security) | 51% | I really doubt most of our clients are that confident; I would guess under 20% would say they can stop security issues. Most would probably have an internal resource if they felt they could afford it and find one. However, these results are starting to speak for themselves. |

*PCIT did have a remediation that took more than 9 hours in 2014, but it came just after C-level management requested we remove one of our recent security best practices because it 'was frustrating the staff'.  Less than two weeks later, 3 million files were erased after a C-level executive experienced a security breach.  The security best practice was subsequently re-engaged and has remained in place ever since.

Results are based on an international study released in August 2016 of over 540 organizations worldwide.  Canadian-specific results were also discussed in this Digital Journal article.

To me these results stress a couple of points.  First, we have great clients who have been diligent in working with PCIT in this regard.  Very few push back and ask us to own the security results when we say we need everyone's help.  Second, our 'secret sauce' appears to be working.  In early 2015 we began benchmarking PCIT's security results across our entire client base and comparing them to individual customers' results.  In this manner we could very clearly identify when our clients were hindering or helping the protection of their data and their operations.

Finally, I believe a close analysis of the above table completely disproves the notion that having an in-house resource is the best way to support IT.  The facts suggest that no matter how smart, how helpful, how well trained, and how well intentioned internal resources are, most Canadian organizations have NO IDEA how large their security exposure is.

I can picture the conversation in most boardrooms as being sympathetic to internal IT resources after having to pay a $20,000 ransom, as the University of Calgary just did.  Executives who don't know how to manage IT try to get results by hiring, providing budget, and gauging results by how well they 'feel' about the work being done.  To most managers, having to pay a ransom can be excused because the bad guys are 'really, really bad' and they just know their 'guy(s)' or 'gal(s)' are good.  The results seem to speak otherwise.

If there are organizations who want to manage technology results by more than a 'feeling', we would love to discuss whether our approach would be a fit.

January 4 2016

When to Upgrade to Windows 10 – For the Office 04:40 pm

Using Windows 10 for several months now has generally been a pleasant experience at the office.  We thought that feedback on when and how to upgrade within an organization might help many of our customers plan for and use Windows effectively.

Before tackling when and how to upgrade, let's think about the last 10 years.  Since it is the first day back after closing 2015, reflection seems appropriate.  In that time period most offices have gone from Windows XP to Windows 7.  This is a significant change, but it in no way matches the changes that have taken place with smartphones and cloud services.  I would guess that most of our customers who use a computer at work have also become accustomed to iOS products from Apple, like the iPhone or iPad, over the last 10 years.  Many may also remember using a BlackBerry (joking).  Several more have probably used an Android device.  Across this whole cross-section of end users, we also know many have tried some form of cloud service that involves using a program via an internet browser.  This could be anything like Salesforce, ADP, Bellamy, Gmail and more.

My point in this reflection is that changes to the operating system are not nearly as significant to the user's productivity anymore.  A great new feature in the operating system will typically not make the average user significantly more productive, and neither will a poorly designed feature cost someone significant time and frustration.  In short, most of us have 'gotten over' so much change, or at least reluctantly live with it.  (I haven't mentioned Windows 8, which we actively tried to prevent from being deployed in our customer base. As the CEO I never could really figure out how to use it, and that seemed to speak for itself.)

So when we say Windows 10 is better than any other operating system for our customers to do business with, let me put that in context.  It is probably not $500 worth of improvement.  Given that a new operating system costs $99-$200 and a typical deployment takes 3-6 hours per computer, the total cost per user is likely around $500 for many organizations with under 200 users.   We like Windows 10, but if it were our money, this upgrade isn't worth THAT much.  We believe our customers could typically take that same $500 and spend it on something far more value generating.
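For readers who like to see the arithmetic, here is a minimal sketch of that per-user cost estimate. The licence range and deployment hours come from the paragraph above; the hourly labour rate is a hypothetical figure we picked purely for illustration.

```python
# A hypothetical per-user upgrade cost calculation. The licence range
# ($99-$200) and deployment time (3-6 hours) come from the post; the
# hourly labour rate is an assumed figure for illustration only.

def upgrade_cost_per_user(license_cost, deploy_hours, hourly_rate):
    """Estimated all-in cost to move one office PC to a new operating system."""
    return license_cost + deploy_hours * hourly_rate

low = upgrade_cost_per_user(license_cost=99, deploy_hours=3, hourly_rate=85)
high = upgrade_cost_per_user(license_cost=200, deploy_hours=6, hourly_rate=85)

print(f"Per-user upgrade cost: roughly ${low:.0f} to ${high:.0f}")
# With the assumed $85/hour rate this lands between about $354 and $710,
# which is where a 'roughly $500 per user' planning figure comes from.
```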

So this leaves the economics of upgrading coming down to when.  If it's not valuable now, when would the upgrade be valuable?  In this regard we believe the 20-25% of the PC fleet that gets refreshed every year is a fantastic place to upgrade the operating system.  Sure, we don't get the efficiency of driving deployment down to 1-2 hours per user by automating everything all at once.  But as already mentioned, neither can we see how our typical customer gets a significant payback by putting everyone on a new operating system at $500 a user.  If the customer ends up with a few mismatches in the PC fleet, both they and their provider can manage this variance with a little bit of automation or some old-fashioned documentation.  Even IT, over the last 10 years, has learned how to get over all this change ….

As for exciting new features, the operating system comes with two internet browsers.  So far we have found that a lot of applications and sites don't work with the new browser, called Edge.  A lot of customers also use Chrome as a browser, and many use Firefox as well.  Having to manage security loopholes for four browsers is an interesting challenge.  Let's hope that doesn't come back to cause security issues for our clients.

The improvement in BitLocker encryption technology is a welcome change for the Professional-grade operating systems.  Our clients may not get hostile with us if we try this security technology, though we haven't field-tested it.  Other encryption products we have tried have almost had PCIT 'walk the plank', so to speak.  Not having to worry about all the laptop users who may have data that should not be splattered across the internet should be a stress reducer for our clients.

Several of our customers bypass the encryption concern by simply accessing virtual applications when working remotely.  None of the data in this scenario resides on the laptop, so corporate data is not at risk should the laptop be lost or stolen.

The other really nice features of Windows 10 come with the ENTERPRISE version.  If your organization has over 50 branch-office users and wants data sharing between offices to be much faster, then we should talk about the value Enterprise may deliver.  In this case a rollout beyond the typical 20-25% refresh cycle each year could be value generating.

In summary, we believe Windows 10 should be a part of every organization's future. Getting there at the lowest possible cost for organizations with under 200 users likely means a phased deployment over 1-4 years. If someone really wants to see productivity jump or costs cut, our immediate recommendation is to look elsewhere, in areas like application development or automation technologies.

September 30 2015

What Value do you put on Operational Excellence? 11:06 am

I am surprised that operational excellence can be undervalued, but from all the research and information we have gathered across the technology industry, that belief seems to resonate from almost every corner.

What we mean by operational excellence is that the quality of the operation of technology can vary between organizations. This variance is often attributed to the foundation the technology is put on, or to the quality of the technical resources implementing the solution. Both are factors, but there is also a third factor: the overall measure of the operational quality of the technology. This third factor appears to be a key productivity driver, and organizations can pick up a quick and sizeable boost in efficiency by spending time developing their operational excellence.

Let me offer an example.  Let's assume no technology put in place is perfect, nor can it maintain perfection. Every new piece of software or hardware always needs some form of adjustment, patching, and maintenance, as well as adjustments to users' needs. But how much is normal? What defines an excellent operational level of performance? Based on our experience, no one really knows what excellent results are. Even more amazing is that even when people do know excellence exists, there is hardly any value placed on great results versus mediocre ones.

Even the respected and knowledgeable analysts from Gartner, whose entire business model is evaluating the quality of IT technology and people, suggest IT management should set the bar a little bit below excellent. These analysts suggest that IT operations is something CIOs should not push beyond 'good', as no one in the executive suite values operations much beyond good! I am paraphrasing, but I can send the source to those interested.

Our impression is that 'good enough' is somewhere slightly above the point where staff aren't complaining or frustrated right now. This seems intuitively correct: if everyone can do their job, what more should management want? Apparently the statistical average for good enough is somewhere around 1 to 1.5 requests per user per month. Citing Gartner again, this level of requests is assumed to be operationally good, and there is little value in pursuing excellence. Put another way, most users, analysts, and executive leadership seem to think it is okay to need IT support once or even twice per month.

Our results are different. We believe requests can be lowered to a single request per user every two months, three months, or in some cases even every four months. This is what we define as operational excellence. As the CEO of a technology support company, this level of excellence is very valuable: it means each support person could support 200-400% more people and still deliver the same quality of service. That seems significant. Even better is the result for our customers. It could mean that each end user we support spends as little as 25% of the time on technology-related issues compared to what many organizations and analysts would consider normal. Our understanding is that a time saving like that, multiplied by the number of staff in an organization, could be valuable.
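To show where those ranges come from, here is a minimal sketch of the arithmetic. The 'good enough' and 'excellent' request rates are taken from the figures above; everything else is simple division for illustration.

```python
# The 'good enough' benchmark (1-1.5 requests per user per month) and our
# target (one request every 2-4 months) are taken from the post; the rest
# is simple division to show where the quoted ranges come from.

benchmark_rate = 1.0        # requests per user per month (low end of 1-1.5)
excellent_rate = 1.0 / 4.0  # one request every four months = 0.25 per month

capacity_multiple = benchmark_rate / excellent_rate   # 4x the users per technician
extra_users_pct = (capacity_multiple - 1) * 100       # 300% more users

time_fraction = excellent_rate / benchmark_rate       # 25% of the usual interruptions

print(f"One support person covers about {capacity_multiple:.0f}x the users "
      f"({extra_users_pct:.0f}% more) at the excellent rate.")
print(f"Each end user spends about {time_fraction:.0%} of the 'normal' time on IT issues.")

# Comparing other ends of the two ranges (for example 1.5 requests per month
# versus one request every two months) gives multiples between roughly 2x and
# 6x, which brackets the 200-400% figure quoted above.
```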

Put another way, the key to getting everyone to work efficiently and effectively is often attributed to specific technologies. These technologies currently carry names like the cloud, business intelligence, collaboration, the right tablet, mobility, security, the right software, and so on. Just today I found a new graphic produced by a vendor that highlighted seven key things every organization must do to be productive. The list included other important subjects such as strategy, alignment, and some of the hot words already mentioned. Operational excellence once again didn't make the list. Perhaps operational excellence was number eight, but we are skeptical it even made the top 20. What if one organization bought the hot technologies and had two to three times the number of issues as a competitor who had operational excellence? Both may be effective, but that degree of differentiation could be a difference maker, especially if those requests start to arise at the time of key deadlines or heavy workloads.

We believe operational excellence should be on almost every operations-focused executive's radar. The difference in an organization's level of productivity, risk exposure, and overall staff satisfaction with technology can be sizeable. As the leveraging of technology becomes more and more pervasive across organizations of all sizes, this focus should only increase with time.

Of course, defining what counts as a request requires some clarification, but I'd welcome starting the conversation on the value others put on operational excellence.

September 22 2015

10 Mbps now the ‘Minimum’ for most offices 02:23 pm

How big an internet plan does your office need? The necessary service speeds for most offices are increasing steadily. Video services, Wi-Fi services, and voice traffic are placing much more demand on most networks than even three years ago. A new international study on behalf of internet service providers now places 10 Mbps as the minimum most users should have. That figure is for a single home consumer; offices with several users may need much more.
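As a very rough way to think about office sizing, here is a minimal sketch. Only the 10 Mbps single-consumer minimum comes from the study; the per-office scaling and the concurrency factor are our own assumptions for illustration.

```python
# A very rough office-sizing sketch. Only the 10 Mbps single-consumer
# minimum comes from the study; the concurrency factor is a hypothetical
# assumption, since not every user streams video or sits on a call at once.

def office_bandwidth_estimate(users, per_user_mbps=10, concurrency=0.5):
    """Crude estimate of the download bandwidth an office might want, in Mbps."""
    return users * per_user_mbps * concurrency

for users in (5, 10, 25):
    print(f"{users} users: ~{office_bandwidth_estimate(users):.0f} Mbps")

# Prints roughly 25, 50 and 125 Mbps; real needs depend heavily on how much
# video, voice and cloud application traffic the office actually generates.
```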

Here in Alberta, internet service providers are gearing up for the increased demand for video and voice traffic by rapidly upgrading their networks. In 2015 Telus has already completed major upgrades in the Peace River country and elsewhere to connect offices to fibre. This is great news, and it expands the types of cloud services and video services organizations can use effectively. Axia FibreNet has also upgraded the service plans available to government ministries recently and even offers a 10 Gbps service (1,000 times faster than the study's minimum). Eastlink has offered great network speeds and plans for some time. Now, with fibre connections becoming more available, Eastlink will have work to do to catch up!

At PCIT we will be reviewing each of our customers' office internet services to help ensure they are consistent with this new reality that more bandwidth is needed.

Interestingly, the Ovum study that placed 10 Mbps as the new minimum also stressed two other factors in making the consumer's experience the best: 'a stable and reliable network that delivers content within three seconds' and 'outstanding customer service' that deals with most issues at the first point of contact. We couldn't agree more.

July 6 2015

What a Feeling – 100% Uptime – All Clients! 12:51 pm

What a great feeling to look at every single client and every single system that connects their technology together and see it was 100% available.

On Friday we reviewed our company dashboard, and when we got to these results it was time to celebrate! Not one internet failure that affected our clients, not one application that broke when being updated by the vendor, not one problem with any of what we call sub-systems at our clients' sites. In the couple of months since we started carefully collecting these data points there has usually been some sort of a problem somewhere, so hitting 100% uptime was a great way to end the week.

Our process involves three different ways of analyzing this data. First there is uptime percent. Basically, if everything worked perfectly, uptime is 100%. If there was any sort of downtime on any system, we deduct that time from 100%. So if a cloud service was down for two hours of a 40-hour work week, we would express that as 95% uptime for that client that week.

The next data point we collect is client downtime. Here we measure, in hours, all of the sub-systems that were down. In the above case it would be 2 hours of downtime if the cloud service was down for 2 hours. If more than one system is down, we multiply the hours by the number of systems down. This really helps us look at downtime from our clients' perspective. They typically don't hand out congratulations for 100% uptime achievements. Rather, we get phone calls like "Look, our internet service has been down for 'x' amount of hours, can we get an update?" Even if we keep in touch in these downtime situations, customers will still remember the DOWN time. We want to see and be reminded of this from their perspective.

Both of these approaches seemed to miss a key component: some things are simply out of our hands. PCIT can work with a cloud vendor, we can implement automatic internet failover services and so on, but at the end of the day we still need other vendors to deliver a great product. If they fail, there is DOWN time and less than 100% UP time. Taking the example already used, if that cloud service is with Amazon or Azure, we have no control over the uptime (other than trying to recommend the best vendors to start with).  Also, saying there was 95% uptime hides the fact that there were lots of other business functions the client could still perform (things like email, internet usage, and other cloud applications) while that one cloud application was down. Without doing a ton of research, we came up with our own methodology.

We counted the key components of our clients' networks and averaged the results, which works out to roughly 7 sub-systems per client. We then count how many of them PCIT is directly responsible for. So, back to the case of the failed cloud service: PCIT's sub-system downtime would be 0%. If instead there was a problem we were responsible for, like a firewall, switch, or cloud server (not a cloud service), and it had 2 hours of downtime, our sub-system downtime percentage would be the downtime hours on the affected sub-systems divided by the total sub-system hours in a 40-hour week. In this case, 2 hours on one sub-system works out to 0.7% sub-system downtime (2 hours divided by 7 sub-systems × 40 hours, expressed as a percentage).
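For anyone who wants to see the three measures side by side, here is a minimal sketch using the 40-hour week and 7-sub-system average from the examples above. The function names are ours, for illustration; they are not an actual PCIT tool.

```python
# A minimal sketch of the three measures described above, using the post's
# own example numbers: a 40-hour week, an average of 7 sub-systems per
# client, and a single 2-hour outage. Deciding which outages are PCIT's
# responsibility is a per-incident judgement and is simply an input here.

HOURS_PER_WEEK = 40
SUBSYSTEMS_PER_CLIENT = 7

def uptime_percent(total_outage_hours):
    """Uptime %: 100% minus downtime as a share of the 40-hour week."""
    return 100 * (1 - total_outage_hours / HOURS_PER_WEEK)

def client_downtime_hours(outages):
    """Client downtime: hours down multiplied by the number of systems affected."""
    return sum(hours * systems_down for hours, systems_down in outages)

def subsystem_downtime_percent(pcit_responsible_hours):
    """Downtime on PCIT-managed sub-systems as a share of all sub-system hours."""
    return 100 * pcit_responsible_hours / (SUBSYSTEMS_PER_CLIENT * HOURS_PER_WEEK)

# A third-party cloud service down for 2 hours (not a PCIT-managed sub-system):
outages = [(2, 1)]                      # (hours down, systems affected)
print(uptime_percent(2))                # 95.0 -> '95% uptime for that client that week'
print(client_downtime_hours(outages))   # 2    -> '2 hours of client downtime'
print(subsystem_downtime_percent(0))    # 0.0  -> vendor outage, not on PCIT's scorecard

# A PCIT-managed firewall down for 2 hours instead:
print(subsystem_downtime_percent(2))    # ~0.71 -> the 0.7% figure in the post
```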

Perhaps no one but PCIT will ever care about our little sub-system downtime percentage methodology. However, it gives us some consolation to report 0.13% downtime across our client base instead of the tragic-looking 4 hours of downtime. Downtime, in whatever form it arrives, usually translates into pain for someone. With the sub-system approach it is helpful to picture some situations as a mosquito bite on the pain meter versus a giant kick in the shins.