Wednesday, 30 December 2015

Improvements 2016 - Not a Broken Record

Every year I say that I am not going to write a post about the coming year, but I find myself starting to think about what 2016 will look like and here I am writing about it anyway. If you were to look back year after year at posts of this type, many of the same process-centric activities keep popping up, much like dieting or quitting smoking. It sounds like a broken record. While those are still important, you might notice that the subjects I will be focusing on this year are more general.

Focus on Business
Do you know what your business does and what they are trying to achieve? In some cases we make assumptions that we know, or that we have a general idea of what they do. This level of assumption will not be enough in the year(s) going forward. We need to better understand the five W’s of the business. We also need to let the business know that we are interested in understanding their needs. Have we made a visible effort to build a relationship with them, aside from the fact that we work at the same company?

Knowing what your business needs will allow you and your team to better target the service they receive. This should be an activity that you, as a provider, look at in the big-picture sense. In many cases you might find that these efforts seem segregated from the rest of the organization. This relationship should not be a secret; if it is, it will limit its effectiveness. Be inclusive, and have people from various teams involved. Keep the group open rather than ‘invite only’. Make sure this initiative is also marketed in a way that makes all teams aware of it; again, make sure that this is not a secret. People should be excited about this.

Continual service improvement
Since the New Year is filled with initiatives to improve, it would seem like a ‘no brainer’ to have a continual service improvement initiative on this list. The trick to getting started is to keep your improvement initiatives simple. Have a larger picture in mind, and build momentum off a few small wins in the beginning. While you might naturally want one large improvement that is more visible to people, you will be better served by several small successes which lead up to something bigger. Each quarter, review what you have achieved and don’t forget to celebrate the successes. This is important to building momentum.

Broaden your framework horizons
As more and more people get their heads around ITIL, we are starting to appreciate that there are other approaches that can complement and add value to daily activities. While we may use a particular framework as the, well, framework for service delivery within IT, we should recognize that there are many other methods available that can add value. For me, I will be looking at BRM, DevOps and COBIT, and integrating them into service improvement initiatives.

Get to know your community
Whether it is online or in person, get to know the people in your community. This doesn’t always mean you have to go to a conference, although it might. Connect on social media platforms and get engaged in online learning and webinars where they are available; most of the information you can collect is free, and you can bounce ideas off one another to improve understanding. I spoke earlier about getting to know the business better, but we can also leverage our colleagues in our communities to ask questions and see what they did in particular situations. Find yourself a ‘mentor-like’ figure to give you some insight that you might not otherwise have. In turn, you should also consider mentoring others as a way to further develop your community. You will find that this type of interaction will pay dividends in your understanding of how to better serve your own business.

In addition to these subjects I look forward to continuing to blog as well as interacting with you. Feel free to connect with me on Twitter @ryanrogilvie and/or on LinkedIn

If you like these articles please take a few minutes to share on social media or comment

Monday, 21 December 2015

Problem Management and the Checkbox of Doom

An accepted drawback of being a problem manager, if you could call it that, is the fact that your efforts seem to go unrewarded when you compare them to the incident manager’s, for example. It is possible that in a meeting where you are reviewing a recent major incident you begin to daydream that you are in a large, vaulted-ceiling cavern. At the far end of the space you see the idol (root cause) you have long been searching for. Carefully making your way forward, negotiating several traps, you reach the idol, which sits upon a stone pedestal. As an expert at situations like this, you pull out a small sack and fill it with sand to match what you assume is the corresponding weight of the idol. With one swift motion you swap the sack for the idol and have it firmly in your grasp. It is then that the stone pedestal begins to drop, and this sets off a chain reaction. You begin to hear a loud, deep rumbling sound which makes the hairs on your neck stand up. You know that it is time to escape. Running back through the trap area, you make your way out into the corridor, where behind you a gigantic boulder is crushing everything it comes into contact with. As you reach the end of the passageway you leap to narrowly avoid being crushed yourself.

Snapping back into reality, you remember that problem management is not this dramatic; however, root cause analysis can have a similar cause and effect.

Think about it in these terms.

You have a business-critical application with a particular function that is not working consistently. This function may not seem business impacting, since initially it only happened briefly a few times a month, but it is now happening more often and for longer durations. To your knowledge there were no changes that could have impacted the application in a negative way, but you can’t seem to get your head around how it was working before and now seems not only to be getting worse but also to go away without intervention.

After some initial investigation it appears to have been an oversight in a backend setting of the application. It is a checkbox that needs to be unchecked. Much like in the dramatization above, your excitement peaks as you clutch for the idol (root cause). Beware, however: this could inadvertently trigger a boulder to chase you through the cavern.

A good understanding should be established of what that checkbox does and why the issue has arisen now. Don’t be too quick to swap the idol for the sand until you understand what the outcomes of that decision might be. While the checkbox might appear to be the root cause, the reason the checkbox is impacting the system now requires a deeper dive into what is really causing the issue. Once you know that, you can get the fix tested and confirm the results without fear of further impact.

Follow me on Twitter @ryanrogilvie or connect with me on LinkedIn

If you like these articles please take a few minutes to share on social media or comment

Wednesday, 16 December 2015

Navigating Categories in a Sea of Others

Recently I was at a practitioner session where we began to talk about the categorization of work. Inevitably the conversation moved to what to do with a category or subcategory of 'other'. One of the participants in our group discussion mentioned (in their words) “the extreme amount of time determining and vetting the categories and subcategories chosen.” They went on to say that their organizational culture operates in such a detail-oriented way that avoiding the use of terms like “other” is essential, and was accounted for in the timeline for this project.

In stark contrast, another participant in the group came from an organization that categorized the majority of their requests in a particular bucket, but despite that they still had quite a few “others” to contend with. Since the others were not the main services, they lived with the sea of others.

In my opinion, finding a balance that matches the organization's requirements is important. For the most part, being able to actively report against categories in order to improve in some capacity is necessary. The purpose of collecting this data in the first place is to use the information to make some assessments about what is happening in daily operations and then address it appropriately.

To achieve this level of balance we need to commit to those who will be consuming the output of collected data that all items reported as “other” will be reviewed each month. After all, it is possible that our initial requirements gathering missed something, or that a particular category was overlooked. Additionally, as our organization grows we will have new services as well as services that will be retired, so while the “others” need to be reviewed and recategorized, we should also consider what to do with categories and sub-categories that are simply not used.

To accomplish this, a monthly report might need to be run outlining all the requests and incidents whose category contains some level of “other”. While some things may stay designated as other, others may be collected into a group where we can say definitively that they belong to a particular category.
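As a rough illustration, that monthly sweep could be as simple as the Python sketch below. The field names and ticket structure are my own assumptions for illustration only, not taken from any particular ITSM tool.

```python
from collections import Counter

def other_report(tickets):
    """Split out tickets filed under 'other' and tally the named categories."""
    # flag anything whose category or subcategory is some flavour of "other"
    others = [t for t in tickets
              if "other" in (t["category"].lower(), t["subcategory"].lower())]
    # tally the properly categorized tickets for comparison
    counts = Counter(t["category"] for t in tickets
                     if t["category"].lower() != "other")
    return others, counts

tickets = [
    {"id": 1, "category": "Application Support", "subcategory": "Error"},
    {"id": 2, "category": "Other", "subcategory": "Other"},
    {"id": 3, "category": "Application Support", "subcategory": "Other"},
]
others, counts = other_report(tickets)
# "others" now holds tickets 2 and 3 for the monthly review
```

The review itself is still a human activity; the report just puts the “others” in one place so someone can decide whether each one deserves a category of its own.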

The reason this becomes important is that while the vast majority of metrics are measured properly and used by teams to look closely at what they are doing, the smaller stats are often not shared with the teams who would be best placed to find out what they are dealing with.

In this example I am showing an overly simplified look at this. While the percentage of “others” in comparison to Application Support is not that large, it is important to address what it is made of. Once we expand it, we reveal that there are security and HR escalations in there. Being able to provide this information to the teams who are impacted will allow them to build their own improvement initiatives with reportable data rather than the “gut” feeling they were dealing with in the past. This is only the beginning…

This sounds like quite a bit of work but if we want to display data in consumable and useful chunks we need to understand that these fields in our systems have a lifecycle just as much as the things they represent.

Follow me on Twitter @ryanrogilvie or connect with me on LinkedIn

If you like these articles please take a few minutes to share on social media or comment


Monday, 14 December 2015

Starting the Improvement Engine

“If you can’t do something right, then it’s not worth doing” was something a stern professor told me early in my college career. With this sage advice front of mind, I dropped his class, having identified that this method of learning might not work out well for me in the long run. The trouble is that this mode of thought makes its way into everyday activities as well as how we operate to support our business, and in some cases we end up doing nothing. We have the “we would rather not fail, so we won’t try” mentality.

On the flip side, I had another professor with a different perspective on a similar subject. His philosophy was “…that you will get nowhere until you take the first step”, a completely opposite thought process from the other professor’s. The key, he outlined, was that while you get the ball rolling you also need to learn from what you either didn’t get right or what you might have missed.

We need to come to terms with the fact that we won’t get it perfect, and decide how we will manage what isn’t.

This is where CSI (Continual Service Improvement) comes into play. While we may not get everything perfect, we need to understand how the things we are working on impact the business objectives they support. After all, if the improvements don’t support the business goals, they are already going to be fighting for scraps at the bottom.

The next things to consider are what we are currently doing and where we want to get to. This will likely involve some amount of reporting and communication on what we want to achieve. This type of gap analysis will give you a good sense of what will be involved in achieving the improvements you have identified in this step.

The key here is to keep this as manageable as possible. I find building this into a routine works best for me. I will schedule the reporting on an activity or process using a month’s worth of data, from the 15th to the 15th. I do this so that any findings can be worked through in time to accompany the month-end management reports. For me this kills two birds with one stone.
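As a hedged sketch of that routine, the 15th-to-15th window can be computed from the report date. The rule below (report on the month of data ending at the most recent 15th) is my assumption about the schedule, not a prescription.

```python
from datetime import date

def previous_month(year, month):
    """Roll (year, month) back by one month."""
    return (year - 1, 12) if month == 1 else (year, month - 1)

def reporting_window(today):
    """Return (start, end): the month of data ending on the most recent 15th."""
    if today.day >= 15:
        end = date(today.year, today.month, 15)
    else:
        end = date(*previous_month(today.year, today.month), 15)
    start = date(*previous_month(end.year, end.month), 15)
    return start, end

# Reporting on 20 March covers 15 February to 15 March.
```

Pinning the window to a function like this keeps every month's report comparable, which is what makes the gap analysis meaningful over time.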

This reporting will let me see if we still have gaps in the improvement initiative as well as whether we are achieving the goals we had set out in the beginning.

Overall you need transparency and the inclusion of all stakeholders. This is where we may have some challenges. We WANT to have things perfect, and we may have issues with showing any imperfections that make us look as though we aren’t. However, if we approach this with a strategy for handling the challenges that come our way, and with support from all the teams who are impacted, we will be able to manage the imperfections we come across and improve service delivery.

Follow me on Twitter @ryanrogilvie or connect with me on LinkedIn

If you like these articles please take a few minutes to share on social media or comment


Wednesday, 9 December 2015

CMDB - Should I or Shouldn’t I?

Even as I post this, I am wondering what will emerge from Pandora’s box…

A comment I received on a recent post asked why I didn’t write more about the CMDB, or configuration management database.

Without a valid answer I replied back to the person and asked if there was something specific that they wanted to see. They wanted to know “should I or shouldn’t I implement a CMDB?”

In my experience I have seen this activity received with groans, excitement and confusion. In many cases all of these feelings have been exhibited during the course of the team discussion of “should we or not?”

The trouble is that, more often than not, the reason this even comes up is that a tool implementation includes a component of service or infrastructure which is mapped against CIs (Configuration Items), and this needs to be addressed one way or another.

The problem with looking at this as an output from a tool is that there is little consideration of how this process or activity will be managed by the people in the organization. Remember the people; they are important in making this work. Far too often the reason this process doesn’t work well (or at all) is that we only looked at it as a means to an end from a tool perspective instead of the perspective of people, process and technology.

If I could offer some suggestions, I would take it back to the beginning and look at the scope of what we are going to manage in our CMDB. No two organizations are going to look at this the same way, so I won’t tell you there is a silver bullet on this one. In some cases you might want to start with your critical services. Whatever you choose, remember that keeping it simple will allow you to manage this successfully in the beginning, which will produce some checks in the ‘win’ column. If you attempt to boil the ocean, it will likely not work in your favor.

With a scope established you should be able to tie this in with your change management process. Keep in mind that, like your change process, there are activities in configuration management that need to be managed by people to a certain degree. Have some way to regularly review the CIs for accuracy. This will not only allow you to see where there may be issues in the configuration management process, but also whether the inputs and outputs to it are having challenges.
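One way to picture that regular accuracy review is a simple diff between the CMDB and a discovery scan. This is an illustrative sketch only; the CI names and attributes are made up, and real tooling will of course differ.

```python
def audit_cis(cmdb, discovered):
    """Compare CMDB records to discovered devices; return items needing review."""
    missing = sorted(set(cmdb) - set(discovered))       # in the CMDB, not seen on the network
    unregistered = sorted(set(discovered) - set(cmdb))  # on the network, not in the CMDB
    mismatched = sorted(name for name in set(cmdb) & set(discovered)
                        if cmdb[name] != discovered[name])  # attributes have drifted
    return missing, unregistered, mismatched

cmdb = {
    "app-srv-01": {"os": "RHEL 6"},
    "db-srv-01": {"os": "RHEL 7"},
    "old-srv-09": {"os": "RHEL 5"},
}
discovered = {
    "app-srv-01": {"os": "RHEL 7"},  # upgraded, but the CMDB was never updated
    "db-srv-01": {"os": "RHEL 7"},
    "web-srv-02": {"os": "RHEL 7"},  # never registered as a CI
}
missing, unregistered, mismatched = audit_cis(cmdb, discovered)
```

Each bucket then becomes a review queue: retire or investigate the missing CIs, register the unregistered ones, and trace the mismatches back to changes that bypassed the process.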

I can already hear you: “Our tool does this automatically, so we don’t have to worry.” While to a degree this may be true, the trouble more often than not is that the tool will automate what we tell it to do, and if we don’t fully understand what it is meant to do in the first place (process and people) we won’t tell it the right actions to take. This is why performing regular audits of your data is important to continually improve.

To summarize, think with the end in mind. Decide what you want to get out of the CMDB in the initial stages. Keep the scope small and don’t let it creep. Ensure that this process is tied into other service management processes so that, where it applies, the usage and value can be recognized across various areas within IT. Lastly, report on the successes that you get, as well as noting the issues and making corrections.

While a CMDB can be a daunting task, being pragmatic with how you look at this will ensure that you can achieve your goals.

Follow me on Twitter @ryanrogilvie or connect with me on LinkedIn

If you like these articles please take a few minutes to share on social media or comment



Monday, 7 December 2015

Blame Free Post Mortems

In Latin, mortem means "death," and post means "after," in other words something that happens after death.

This morose definition lends itself to a less-than-proactive viewpoint on a critical improvement activity. However, how it is managed will make the difference between improving service through transparency and people holding back as a result of fear.

Recently, and part of the reason I am sharing this post, I ran into a friend who was in the middle of his holiday. After exchanging the usual questions and answers, we ended up on the topic of work, as people tend to do. He was thrilled to be off work this week, as his company was going over a post mortem for a recent outage with one of its key applications.

He continued to tell me that there are very few things dreaded as much as the post mortem. People hate them so much that even during the outage they are already thinking about what actions will get them into hot water during the review. Imagine that?

He said that the post mortem at his company is lovingly referred to as “the blame game”. To add insult to injury, he said that his team, infrastructure, typically gets the lion’s share of the blame since they aren’t as “inventive” with their explanations for issues, and as such are unable to conclusively demonstrate that they aren’t responsible for the issue to some degree.

This should clearly not be the intention of a post mortem.

By their very nature, post mortems should be an exercise in understanding, sharing and learning.

These principles should be applied early on. All parties who were a part of restoring the service(s), as well as anyone with a vested interest in them, should be invited to the review meeting and should receive the document that outlines all the findings and outcomes. We need to foster transparency, and our culture should allow us to be open enough to see where we can make improvements without worrying about who to blame.

After all, the issue happened and was fixed. That was the hard part; now we need to ensure that we learn enough from this exercise to avoid repeating the same mistakes where we can.

Digging deep into the timeline will allow us to clearly see what actions we took, and why, at intervals throughout the issue. After all, we may have experienced many different symptoms which led us to make particular assessments that seemed appropriate at the time but might not afterwards.

Personally, I avoid the phrase ‘post mortem’ whenever I can and replace it with ‘incident review’. While holding these reviews is a step in the right direction, if you are not fostering a culture of collaboration and transparency, you risk details being suppressed for fear of some form of punitive action.

Key components of a blame free incident review:
  • People involved during the issue
  • What contributing factors came into play during the issue
  • What was the impact of the issue
  • What did we learn as a result of this issue
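If you want to capture those four components consistently from one review to the next, a minimal record structure might look like the sketch below; the field names are my own, offered only as a starting point.

```python
from dataclasses import dataclass, field

@dataclass
class IncidentReview:
    people_involved: list        # who took part during the issue
    contributing_factors: list   # what came into play, not who to blame
    impact: str                  # what the issue cost the business
    lessons_learned: list = field(default_factory=list)

    def is_complete(self):
        # a review isn't finished until at least one lesson is recorded
        return bool(self.people_involved and self.contributing_factors
                    and self.impact and self.lessons_learned)
```

Notice there is deliberately no “person at fault” field; the structure itself keeps the review blame free.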

Keep these activities in mind, take small actions as a result of the discussions you have after the incident, and you will be setting your team up to make improvements on these issues rather than pointing fingers.
If you like this article please take a few minutes to share on social media or comment

Follow me on Twitter @ryanrogilvie or connect with me on LinkedIn


Thursday, 12 November 2015

Problem Management is not the Incident Graveyard

For those that are unaware, the purpose of problem management is to reduce the number and severity of incidents that impact a business. It should not be a place for incidents to rot in search of a root cause. Problem management should help to drive service management from a reactive to a more proactive place. This would suggest that any team which is experiencing incidents should also logically have some level of problem management, right?

Whether it is formalized or not, the trouble is that some organizations are not looking at problem management in the context of business value. Instead they view problems from an IT, or worse yet an incident management, perspective. You might be saying to yourself "of course that's how we view it, problems are the result of incidents which impact IT", and that I may have lost my mind. But simply because we have done something from a certain perspective in the past doesn't mean we can't look at it from another perspective now.
By sticking with the same viewpoint on problems, we may be inadvertently focusing our efforts on issues which have no solution or root cause, and which may be low on the priority scale when it comes to business impact. While it’s good to be looking at these issues at all, they have low value in terms of what really matters to our business.
The reason problem management might be getting a raw deal in its ability to produce results is that when we focus on these low-impact issues, the urgency, and in turn the ability to assign resourcing of any kind, also remains low. The result is that when we talk about what problem management is doing to improve service delivery, the contribution looks relatively small, leaving leadership to ask what value it brings to the table at all.
The first thing we as practitioners need to do is stop thinking like IT. Not all incidents are going to require a technical resolution. Take a closer look at the top escalations to the service desk and see what drives them in the first place. Here is an example of some typical escalations:
  • Application errors
  • Password resets
  • Questions
  • Hardware failure
  • Network issues
While some of these are still technical in nature, things such as questions and password resets are everyday occurrences in some organizations. While your company may not have this issue to deal with, there are still many service desks that come into work on a Monday morning faced with the repetitive task of addressing the forgotten password. This costly use of resources could be resolved with an automated reset tool of some sort. The problem analysis would weigh the cost of implementing a tool, or the effort to automate this activity, against your service desk resources spending their valuable time resetting passwords.
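The cost-benefit arithmetic behind that analysis is simple enough to sketch; all the figures below are illustrative assumptions, not benchmarks.

```python
def annual_reset_cost(resets_per_month, minutes_per_reset, hourly_rate):
    """Yearly service desk cost of handling password resets manually."""
    hours_per_year = resets_per_month * 12 * minutes_per_reset / 60
    return hours_per_year * hourly_rate

def payback_months(tool_cost, resets_per_month, minutes_per_reset, hourly_rate):
    """How many months until an automated reset tool pays for itself."""
    monthly_saving = annual_reset_cost(resets_per_month, minutes_per_reset, hourly_rate) / 12
    return tool_cost / monthly_saving

# 400 resets a month at 6 minutes each and $40/hour costs about $19,200 a year,
# so a hypothetical $10,000 tool would pay back in just over six months.
```

Even a rough calculation like this gives problem management a business case to present, rather than an anecdote about Monday mornings.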

Another heavy hitter is the area of questions. This is where a knowledge repository of some type could reduce the calls from people asking how to map a network drive over and over again. People want to be able to search for and execute a simple fix themselves if given the opportunity; they are already accustomed to searching online to solve small issues or address questions they may have. Just make sure that the place where people go to consume your knowledge records provides some metrics, so that you can quantify what people are looking up and using.
What you need to keep in mind is that while we will get problems that have low impact, we also need to proactively look to address the larger issues which are impacting our business. Find that balance and you can improve the value of problem management.

Follow me on Twitter @ryanrogilvie or connect with me on LinkedIn

Thursday, 5 November 2015

My Fusion15 Experience

Over the past few days I had the good fortune to attend my first HDI and itSMF USA sponsored event, Fusion15. This year the event was in New Orleans which was a city that I hadn’t been to before so it was another first.

Like other larger conferences, I found there was a large cross-section of attendees from many geographic areas, and these practitioners supported an equally diverse sample of lines of business. The attendees may have come for the content they saw in the brochure or on the Fusion website, but I found that in many cases, through discussion, they were also able to gain insights into common issues and solutions from each other.

On a personal note I found that I was able to network on two levels:

I was able to connect with people in the ITSM community that I might not otherwise have been able to speak with ‘in person’. Prior to the conference, my level of interaction was limited to 140 characters on Twitter or ‘likes’ on Facebook or LinkedIn. Being able to have face-to-face dialogue expands that level of networking.

Secondly, as I mentioned above, I was able to connect with people who were experiencing similar operational challenges, and who had insights into areas of improvement which I may not have even considered. The trick to this is to listen to people and have an open mind to what their experiences are. In some cases what they are doing in their own organization to address specific challenges or requirements may not translate to what we are currently able to do, but it will plant a seed to get your thought process generating solutions that may not have been previously considered.

An important part of these conferences, in my opinion, is to have access to vendors but in a way that is not intrusive to the learning and sharing experience. This conference had that sewn up. There was plenty of time to meet with vendors and in some cases you were able to block off one on one sessions to discuss your needs. Even in attending the sessions I was not able to tell which of the presenters were with a particular vendor.

The staff that managed the event were excellent: always available to answer questions as well as point me in the right direction for an additional source of coffee when I needed it. The ‘app’ that was used to manage things like the agenda, events and networking also had a flavor of gamification for content shared. The scoring was intense, and despite my best efforts I didn’t quite land in the top ten. For me there were only two things that I mentioned in my survey as possible areas for improvement, and they were pretty minor. The first was that in the event app, points were awarded based on levels of sharing, for example pictures versus attendee updates and so on. What they should add is points awarded for filling out the post-session surveys. The second was that there should be a conference-branded Snuggie for all those who found the air conditioning too much in some rooms.

Overall I would recommend this conference to anyone looking to broaden their horizons and network with the greater service management community. If you have any questions feel free to ask and connect with me:

Wednesday, 4 November 2015

Is your Consultant your Partner?

Not so long ago I was watching an episode of “Kitchen Nightmares” with Gordon Ramsay, and it got me thinking about how he effortlessly swoops in, finds all the issues with the kitchen and the front of house, makes some adjustments, slaps on some new paint and voila, the issues are fixed in the course of an hour. Granted, the filming looks like it takes several days, with some degree of prep bookending the shoot. But regardless, it is an overall improvement initiative that is quick and seems to stick… or does it? Each year he revisits some of these ‘makeovers’, and with some transparency, some are still doing well and some have reverted back to where they were before he arrived.

Like any improvement initiative, service management can be like this as well. A consultant may come in for a limited period of time, facilitate a review of the current state, make some adjustments, even facilitate a new tool, but this might all be lost if there isn’t something consistent which remains with the organization after the consultant is gone.

A good consultant will tell you that one of their goals is to leave the organization in a better position than when they arrived. In the kitchen nightmares example, Chef Ramsay often has an experienced chef come in to help transition the staff at the newly improved restaurant. This is an important component of the improvement initiative. The question has to be asked, “What do we do when the consultant is gone?”

A good consulting outfit will have addressed this question before you even ask it; in fact, as part of their review of the current state and roadmapping of the future, they should have identified whether there is a gap in the long-term sustainment of whatever it is you are trying to improve in the first place. This will include resources such as staff, training and yes, even possibly a tool.

So how do you ensure that your improvement initiatives stick?

If you have a resource steering your team through this improvement initiative, ask them questions. I know this sounds obvious, but the trouble in some cases with having an expert on site guiding you through something is that it looks easy, and may make you think that it will be just as easy after they are gone.

Know what the landscape will look like during the improvement cycle. Avoid having people do this off the side of their desk, as this will always be a point of contention with resourcing and the first thing dropped when it gets busy.

Ensure that the changes you are making are small enough to show some improvement over a short period of time. Small changes are simple and easy to manage, and having some wins which we can demonstrate will generate momentum in moving the improvement cycle forward.

The key here is that while this may look easy from the outside, the reality is that a marathon of hard work is about to take place, so make sure that you are as prepared as possible by working with professionals who are looking out for your ability to achieve your improvement initiatives.

Follow me on Twitter @ryanrogilvie or connect with me on LinkedIn

Monday, 26 October 2015

The Road to Feedback is paved with Good Intentions

We have all been there at some point or another. In an effort to understand the business, we solicit information from them with a “how are we doing” button or survey. The trouble that may present itself is that while we are working to improve things from a delivery perspective, we may not have fully built out a strategy to manage the lifecycle of the feedback.

Here are a few points to consider, but as always feel free to share which areas have worked well, or not so well for you.

Decide what you want to gain from this
When I say ‘you’, I really mean the business you support. When we think about gathering feedback from people, the first thing that pops into frame is that we are looking to address some level of concern. In reality, it is about understanding what your business or customers need or want. By thinking with the end in mind we will be able to ask the right questions and target the right people, rather than firing a shotgun blast of generic questions or solicitations for feedback.

Communicate the program
We have all seen cases where the communication around a feedback program was great in the beginning but fell off as time went on. Interestingly, as this happened, the responses declined as well. Remember, people want to contribute, so ensure the audience is aware of how they can do that. Everyone gets busy, so keeping this front of mind is important. However, this is a balancing act: you want to notify without being intrusive.

Understand the channels for feedback
Be open to finding out how your audience wants to communicate feedback to you. There are many channels (social media, tools, phone, email, etc.), so make sure you manage whichever ones you choose to leverage accordingly. It would be easy to assume that email or a form inside an application is best. But assuming anything rather than asking runs counter to the whole point of a feedback process.

Feedback Management
Now that we are receiving feedback, we need to manage the information appropriately. Being able to tell the submitter that we have their information and are actually doing something with it is key. Far too often the reason cited for not submitting feedback is that “they aren’t going to do anything with it anyway”. Many tools send a canned response after submission, but people really want some direct confirmation that their feedback is being considered. Even if the reply is still vague, hearing from a human being gives a sense of connection that an automated response does not.

After collecting the information, quickly assess whether it will be acted upon or set aside for later, and let people know which it is. Being transparent about why you are not acting on someone’s feedback shows that you are listening and actually reviewing the ideas submitted. If information is sent in and the submitter never hears back, they will be less likely to participate next time.
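That lifecycle — receive, acknowledge, triage, and close the loop — can be sketched as a tiny bit of code. This is a minimal illustration only; the statuses, class names and messages below are my own assumptions, not features of any particular service desk tool:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class FeedbackStatus(Enum):
    RECEIVED = "received"
    UNDER_REVIEW = "under review"
    ACTIONED = "actioned"
    DEFERRED = "deferred"  # set aside for later, with a reason the submitter can see

@dataclass
class FeedbackItem:
    submitter: str
    summary: str
    status: FeedbackStatus = FeedbackStatus.RECEIVED
    history: list = field(default_factory=list)

    def transition(self, new_status: FeedbackStatus, note: str) -> str:
        """Move the item forward and record a note to send back to the submitter."""
        self.status = new_status
        self.history.append((date.today(), new_status.value, note))
        # A short personal reply beats a canned auto-response.
        return f"Hi {self.submitter}, your feedback is now '{new_status.value}': {note}"

item = FeedbackItem("Dana", "Password resets take too long")
reply = item.transition(FeedbackStatus.UNDER_REVIEW,
                        "We're measuring reset times this month")
print(reply)
```

The point of the `history` list is exactly the transparency discussed above: every status change carries a human-readable note, so "deferred" is something you tell people, not something that happens silently.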

Feedback Findings
Wherever possible, share the findings of the feedback regularly with the targeted audience to drive further submissions. If a topic piques the interest of people who would otherwise not respond, you may be able to steer them into submitting feedback. Use this reporting as another tool to market the need for feedback.

Overall, having a well-defined scope on the information you are soliciting, paired with engaging the target audience through communications and regular updates, will enable you to better manage the feedback over the long term and make the improvements that will make a difference in all the right ways.

Follow me on Twitter @ryanrogilvie or connect with me on LinkedIn


Monday, 19 October 2015

Driving Inertia on Service Improvement Initiatives

Newton’s first law, the law of inertia, states:

“an object either remains at rest or continues to move at a constant velocity, unless acted upon by an external force”
We have all been in this position at one time or another. We make some level of improvement and then, for whatever reason, drift back into the state we were in before the initiative was implemented. So the question of how to avoid losing momentum becomes paramount.

One of the challenges with continual service improvement initiatives is that they are continual. Unlike other initiatives with a finish line, this marathon-like work continues cycle after cycle. In the beginning there is a sense of excitement, and it shows in the work done around the processes under the CSI lens. As momentum starts to fade, the evidence shows up in those same processes.

To sustain momentum on our improvement initiatives, we need to look at how we define the improvement journey. While it is continuous in nature, we should set up timed checkpoints (quarterly, for example) where we can showcase successes or additional areas to improve. Because this process has no real beginning or end and is cyclical in nature, we need to build in our own start and end points.
As I have outlined before, we need to ensure that we are focusing on business objectives. The first step is to understand the business vision and how IT strategies line up to it. To start off on the right foot, keep the improvement initiative as agile as possible so that we do not bite off more than we can chew. In my opinion, when too large an initiative is underway, and it does not line up to the business, momentum will drop off before it even begins.

The next set of activities is to review our current capabilities and then decide what we want to improve. Remember that we are keeping it simple over several cycles of improvement, so make small moves in the beginning. We may want to fix everything at once, but then deliverables take too long to produce results, which is a detriment to momentum.
Once we know where we want to go, we can outline the actions needed to get there. Since we have chosen to keep things simple in the beginning, we should have a shorter list of activities to manage.

The last component is to measure what we have done, and then begin the cycle again, adding to what we have started to get closer to the business objective. At each measurement cycle, communicate back with your stakeholders to celebrate wins, and be transparent about areas that did not go as planned. These items are NOT losses; rather, they are areas where we can learn and re-focus our improvement efforts.
Keeping things simple over the long term will allow your teams to make iterative improvements that are visible to the teams they ultimately serve.

Follow me on Twitter @ryanrogilvie or connect with me on LinkedIn



Wednesday, 7 October 2015

IT Disaster Recovery - Practice Makes Perfect

With national fire prevention week upon us, I thought I would speak to drills, specifically those that help us practice disaster recovery skills. As our businesses become ever more reliant on IT services, we must be in a position to maintain service delivery when and if a disaster occurs.

Keep in mind that when we talk about disaster we may envision earthquakes or floods, but we are really talking about any event that impacts the ability to operate the business. This could be a transit strike, or weather that makes travel into work a significant challenge.

To ensure that we are as prepared as possible we should have a disaster recovery plan in place. A disaster recovery plan should outline the actions your IT department is required to take in the event of a service interruption or outage of any kind, regardless of the type of disaster. Having a plan in place is a good start, but you also need to ensure that you can carry the plan out.

This is where the drill comes in. You should exercise these skills regularly. Drills let your team see areas where other considerations may be needed, and familiarize your staff with the disaster recovery plan and its procedures.

Plan to do both active and passive testing. Active testing (which might be performed annually) should simulate the entire disaster from start to end as though it were the real thing. This should include full functional testing of a complete restoration of all critical hardware, network and data. At completion, a post-disaster review should cover everything that went well and the areas that still need work, with actionable items. Passive tests should walk through the procedures without the actual restoration, validating that the work could be completed even though you do not physically carry it out.
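One practical way to keep both kinds of drill from slipping is to track their cadence somewhere, even crudely. Here is a minimal sketch; the cadences, step names and dates are invented for illustration, not a standard:

```python
from datetime import date, timedelta

# Illustrative drill definitions: an annual active (full simulation) drill
# and a quarterly passive (walkthrough) drill. Adjust to your own plan.
DRILLS = {
    "active":  {"every_days": 365,
                "steps": ["restore critical hardware", "restore network",
                          "restore data", "functional test", "post-drill review"]},
    "passive": {"every_days": 90,
                "steps": ["walk through procedures", "validate runbooks",
                          "confirm contact lists", "log gaps as action items"]},
}

def drills_due(last_run: dict, today: date) -> list:
    """Return the drill types that are overdue, based on their cadence."""
    due = []
    for name, cfg in DRILLS.items():
        if today - last_run[name] >= timedelta(days=cfg["every_days"]):
            due.append(name)
    return due

last_run = {"active": date(2014, 10, 1), "passive": date(2015, 9, 1)}
print(drills_due(last_run, date(2015, 10, 7)))  # -> ['active']
```

Even a spreadsheet works for this; the point is that "annually" and "quarterly" only mean something if a date comparison somewhere says a drill is overdue.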

Like a fire drill, we want to be prepared and ensure that the plan we have in place is something that we can rely on in a real emergency.

Follow me on Twitter @ryanrogilvie or connect with me on LinkedIn

Thursday, 1 October 2015

Navigating Metrics to Improve Service Delivery

I want you to think about all the different organizations you have worked for. Whether they were in finance, communications, energy, agriculture, or transport, there was likely one similarity among them: the reporting done from an IT perspective did not produce metrics that mattered.
It’s simple: we (as an IT organization) tend to loop endlessly on metrics as they apply to IT.
We must move away from thinking that recovery from failure = value.
Recovery from critical incidents is an important part of what we do in IT, but it is not the one which ultimately defines whether we are doing a good job or not. We should be considering the needs of the business, and not just how long our networks are available or how quickly we answered the phone and fixed a PC.
One of the things I learned early on was that the marketing of IT metrics is as important as the metrics themselves. Relating them to a particular process is often confusing for the business we provide service to; we should instead be speaking in business language, because this should all tie back to a service. IT reporting, as it pertains to service management, typically talks about KPIs as they relate to a CSF. The challenge is that this does not necessarily relate to a business objective.
Start to look at it in terms of:
  • Business objectives – Understand and document the business objectives of the organization or line of business
  • CSFs – Determine which Critical Success Factors (CSFs) are needed to be successful
  • KPIs – Determine Key Performance Indicators (KPIs) based on the CSFs, with target levels, so success is clearly shown
  • Dashboards – First, share them. Present the metrics in dashboards tailored to each audience, and where it applies, ensure they can be used for trending and historical reporting in an operational capacity
Let’s look at an example:
Let’s talk about a company called Drill-Tech Industries. This small energy services company would like to take its business to the next level but always seems to hit some roadblocks. The CEO has outlined that the goals of the business are “to ensure that rig systems are available as well as ensuring a high degree of safety.”
The first step should be to get some alignment by gathering the right people from the various streams within the appropriate business units. Include a BRM if you have one, as well as some key IT stakeholders. In the beginning you might need some practice at getting the 'right' people together.
The next step is to get clarity on the objectives and goals of the organization. Rather than assuming we know what the business wants, as IT has famously done in the past, gather the right resources together to jointly identify the business objectives. Within this new steering committee, ensure that you are lining up your initiatives to the goals of the business. Clear business objectives can help you in many ways. They should have these characteristics:
  • They must be important to the business
  • There should only be a few critical ones
  • They should represent the results to be obtained
  • They should be visible and unambiguous
The third step is to map out your goals and measures. Matching your objectives up in a table against CSFs and KPIs might seem overly simplistic, but that is the point.
Make the goals measurable
To quantify the goals, you’ll need to work with your steering committee to determine the Critical Success Factors (CSFs) that will demonstrate the fulfillment of those goals. The best CSFs will be “SMART”: Specific, Measurable, Attainable, Realistic and Timely.
Once you and the steering committee have agreed on the CSFs, you’ll be able to develop Key Performance Indicators (KPIs), measures that support each CSF. It’s extremely beneficial to develop KPIs along with targets, so you and your business partners are clear on whether you’re delivering on each of the goals. The best part of this approach is that when IT and the business agree on measures and targets, it’s easy to tell when IT has delivered and when IT is not meeting the needs identified by the business.
Build the dashboards and scorecards
Once the matrix is agreed on, and the method of measuring each KPI is defined, documented and agreed on by the steering committee, the final step is to design dashboards and scorecards that represent these KPIs: graphical views of the Key Performance Indicators, showing each result in comparison to its target.
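To make the matrix concrete, the objective-to-CSF-to-KPI mapping for a company like Drill-Tech could be sketched as a small data structure plus a target comparison. Everything below — the CSF wording, KPI names, targets and actual values — is invented purely for illustration:

```python
# Hypothetical objective -> CSF -> KPI matrix for the Drill-Tech example.
matrix = {
    "Rig systems are available": {
        "csf": "Critical rig applications stay online during operations",
        "kpis": [
            {"name": "Rig system uptime (%)", "target": 99.5, "actual": 99.7,
             "higher_is_better": True},
            {"name": "Mean time to restore (hours)", "target": 4.0, "actual": 5.5,
             "higher_is_better": False},
        ],
    },
    "High degree of safety": {
        "csf": "Safety-critical systems pass their scheduled checks",
        "kpis": [
            {"name": "Safety check pass rate (%)", "target": 100.0, "actual": 100.0,
             "higher_is_better": True},
        ],
    },
}

def kpi_met(kpi: dict) -> bool:
    """A KPI is met when the actual value is on the right side of its target."""
    if kpi["higher_is_better"]:
        return kpi["actual"] >= kpi["target"]
    return kpi["actual"] <= kpi["target"]

def scorecard(matrix: dict) -> dict:
    """Roll KPI results up under each business objective for a simple scorecard."""
    return {
        objective: {kpi["name"]: ("met" if kpi_met(kpi) else "missed")
                    for kpi in entry["kpis"]}
        for objective, entry in matrix.items()
    }

print(scorecard(matrix))
```

Computing "met" or "missed" against an agreed target in one place is the design point: the same result can then feed whichever dashboard or scorecard tool the steering committee chooses.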
Benefits of the program
Providing metrics that are responsive to your business’ needs, rather than the same old IT metrics they don’t really care about, will not only improve the level of performance but also strengthen and build out the relationship between you and the rest of the business. Looking back at the reasons to measure, you can expect the following results:
  • Live dashboards show which activities are needed to drive the success of an initiative, and whether those activities are producing the expected result
  • You and your stakeholders can use the metrics to validate whether IT’s performance is contributing to the business’ ability to meet its goals and objectives
  • IT can produce metrics that support a business case for infrastructure or development projects related to the delivery of a service
  • Live dashboards let IT and the business know when there is a performance issue, so they can intervene immediately to turn it around
This helps an organization move from a purely reactive mode to a more proactive approach, integrated with the success of the business’ initiatives in mind.
Long term Success
As the business uses these dashboards and scorecards, it’s important to come back to the steering committee to evaluate the results; this is part of the “wash, rinse, repeat” process. It may lead to creating new KPIs or tweaking how they are measured, depending on the steering committee’s satisfaction with performance. In the case of our sample organization, it’s possible that the business is not meeting its objectives and may change its critical success factors, driving a need to change the measures. The point is that you should not build the dashboards and scorecards and then forget about them. Rather, meet with the steering committee regularly to review the metrics and IT’s achievements. This is also a great opportunity to talk about service improvements the business might need to support its future initiatives. Keep in mind that once you are reliably achieving targets you are proving out your ability to deliver, so continue to raise the bar.
Follow me on Twitter @ryanrogilvie or connect with me on LinkedIn