Thursday, 28 February 2013

10 Things You DON'T Want to Hear at CAB

Each week, depending on your organizational setup, a group of vested parties gets together to review, assess and prioritize the upcoming changes. This advisory board is also known as the CAB, or Change Advisory Board. Whether your organization has one official CAB or several, the following is a selection of some of the justifications I have heard in these meetings.

This is a quick 30-second change – the implementer says this as though they are not sure why the change even needs to be tracked. The truth of the matter is that, if implemented incorrectly, it could cause a lengthy application outage. As the old saying goes, "what could possibly go wrong?" Better to be prepared with a plan in case this does not go smoothly.


No user impact – while it may be believed that there is no impact to users, how did we quantify that? We need to outline what that really works out to. For example, while a network outage may not impact the site in question because it is "after hours," we should still notify the site that it will be down and that phones will not be working overnight. If someone on site were injured, they might not have a way to dial out while the service is unavailable. A communication plan would help in this case. Transparency is key.


Minor user interface change – translation: no training was provided. Despite what we may believe when we are implementing changes that impact the user interface, even something as simple as a color change may need to be communicated. Depending on the business, there may be confusion about what has happened, which may in turn increase calls into the service desk. This could have been avoided with a solid communication plan, a training plan, or a combination of the two.


No testing is really needed – this one pops up more often than you would think. The question I pose back is: how will you know it was a success? The answer I typically get is "well, you know, the lights are all on and the device is up." Even if the testing is simple, there is still a test to ensure the change was successful. Track whatever it is in the change record. This simple test may be challenged at the CAB, but that is the point – to bring to the attention of others things that might not have been identified otherwise.


No real rollback is needed. This will work – while the steamroller of progress moves this into a production environment on a mandate that it be implemented on a certain timeline, we need to ensure we have a plan established to mitigate the risks should issues arise. If we need to 'fail forward', we need to position ourselves to manage any potential issues.


This is a pretty routine change – even though the implementer is comfortable with making this change and has done it several times, there are still elements of risk and visibility to consider. If this change is truly "routine", it should be created as a standard or routine change, however you describe it in your organization.


But we have done this a million times – nothing is flawless; even with a solid standard operating procedure there could be other variables that impact the change. And even if we have done it a million times, a one percent failure rate works out to 10,000 failed changes if you do the math....


It was an emergency – this change may not even have been reviewed at CAB. Despite users' complaints to "correct this situation right away", there should be a process to handle this type of work. Understand the difference between urgency and an emergency.


This was tested in Development; we are all good – first question: are there differences between your non-prod and prod environments? Post-implementation testing should always have the same rigor as the development testing. While there may be components that can't be simulated in all environments, it is important to note and discuss them at CAB.


This is such a small change, not even sure why we need to record this – while this may be true, watch out; this is usually a red flag for issues. So little regard was given to the change that integration points with other applications may not even have been considered. This could be a Problem waiting to happen. And even if it really is that small, not tracking the change could make it harder to correct issues down the road.


Words you really don't want to hear


Shouldn't – doesn't sound very certain. The question to ask is: why won't it? How can we ensure that this "won't" happen?

Probably – sounds like you are running the odds rather than implementing a change.

Might – (see "shouldn't") if the "might" is something that is getting business sign-off (accepted risk), ensure the front-line staff get a rundown of how to handle the situation.

I think – similar to "probably", this implies that not all the details have been verified.

N/A and TBD – these are not even words; generally, if these appear in the RFC, there is much more to be filled out before it is reviewed at CAB.


Subtle laughter – when people do this, it could be a precursor to change issues....



These are a few of my favorites; please feel free to share any of yours.



Connect with me on LinkedIn or on Twitter @ryanrogilvie


If you like these articles, please take a few minutes to share them on social media or comment.

8 comments:

  1. "Oh... is that considered a change?"

  2. Nice article. Thanks. These are precisely the things you hear. It would be handy to extend the list to include effective replies. For example, an effective reply to "But we have done this a million times" might be: "Then you've had plenty of time and reason to define this as a standardized change, complete with a plan for testing, a communication and execution protocol, and a rollback plan. I suggest we start by standardizing it this time."
    For the keyword "Probably", I might elaborate on the above response with: "Can you quantify that chance, and do you have numbers to back that claim? If so, do the odds live up to our required quality standards, and why isn't the change standardized yet?"

  3. All these points assume the usual design paradigm, which amounts to a rather non-existent design. If service design is accomplished correctly, the provisioning impact is very well known, tested for all use cases, and pushing it out should be very routine. The reason these statements are so disturbing is that the operations community usually has little confidence in the design because it has been largely MIA. I've expressed this in more detail in this article: http://www.networkperformanceinnovations.com/blog/what-a-good-change-management-program-cant-solve/

  4. This only happens due to lack of governance, training and change management design.

  5. How About?

    "I could have done this already, and you wouldn't know"

    "If you reject this, it will become an Emergency change"

    "Oh, was I supposed to explain how and when I wanted to do it?"

    "There is no test environment. It's a unique 40gb network and we can't afford to duplicate any of it"

  6. Nice list, sadly heard these far too often :)
