I.T. Service Management


In this article, Feathered Owl gives the lowdown on IT Service Management…

“I.T. Service Management”; “Business Service Management”; “ITIL”. It would be very surprising indeed if you haven’t encountered at least one of these terms in the last 12 months, or received a call from a salesperson trying to sell you an “ITIL-compliant” solution of some description. Maybe you’ve been tasked with “implementing ITIL processes” or adopting a more “service-centric” way of working in your team or department. But what does it all mean?

In the Beginning there was Technology - lots of it.

Up until a few years ago, IT was about technology. Designing, building, supporting, enhancing and everything else was all about making sure that the servers, networks, databases, data storage and the many other components which made up IT infrastructures worked as well as possible. Datacentres were organised by technology “silo”, and each area had its own people specialising in that particular technology. Monitoring, measurement, reporting and perhaps even service levels were all focussed on making sure that, for example, critical network links never went above 80% utilisation, or that there was always enough storage space on the fileservers to cope with the amount of data being saved by users.

As well as all that infrastructure stuff there were applications (not that some people in datacentres seemed to notice). Usually, these were the responsibility of a completely separate group within the IT department from Infrastructure and the focus was entirely on requirements analysis, development of elegant code and functional testing to ensure that the code did what the requirements said. Once that was done the apps would be thrown over the fence to the Datacentre guys and the next interesting development project would begin. Everybody was happy. Well, almost….

Don’t forget the Users

As IT environments grew ever more complex and more and more money disappeared into them, the people who used the applications which ran on all that expensive infrastructure began to ask why they never seemed to perform properly or do precisely what was required to support the activities of the business in question. Or why, when new applications were released, something important like training the users in the new application always seemed to get overlooked.

At the top of the pile, business management started to complain to IT management that the systems just weren’t delivering to the required level, no matter how many charts they were shown of servers performing well below maximum CPU utilisation. In fact, exactly how much value were they getting for all that cash they were stumping up to fund the annual IT spend?

So what was wrong?

The problem was that, although all the component parts of IT may have been working fine in isolation, overall they were failing to provide the right services to the business users. In fact, for the most part, nobody in IT really knew what the right services were in the first place. As a rule, IT users don’t care about servers, databases or storage; what interests them are activities like sales, order processing, accounts, despatch and so on and the applications which they log on to and use to perform these business activities. Basically, IT was missing the point – effective management of technology silos alone was never going to deliver the right IT services to the users.

What’s IT Service Management?

IT Service Management (ITSM) was born out of the frustration of user departments with the quality of service they were getting from IT. As a recognisable discipline it originated in the late 1980s, when the UK government’s Central Computer and Telecommunications Agency (CCTA, since absorbed into the Office of Government Commerce, or OGC) was asked to do something to help the British public sector get better value from its IT investments. The result of this work was the IT Infrastructure Library (ITIL), a set of vendor- and technology-independent best practices and process guidelines based on received wisdom within the IT industry at large and, importantly, what was observed to work well in the organisations the agency studied.

Then there was ITIL

ITIL was released into an unsuspecting IT industry in the early 1990s. For several years not much was heard of it; this was, after all, the decade when everyone was busy getting rid of their mainframes, midrange computers and terminals and replacing them with cheaper, easier to implement and more agile distributed computing infrastructures made up of PCs and minicomputers talking to each other over IP networks.

In a way then, ITIL appeared at exactly the right moment; it’s just that nobody realised it at the time. By now we are all familiar with the headaches of managing complex distributed IT infrastructures and the applications they run to support IT services. Had it been adopted by the industry at large from the outset, ITIL could have saved everyone a whole lot of bother.

ITIL is published by the OGC as a set of manuals which give detailed information on a number of important IT practices down to the level of checklists, tasks, procedures and even roles and responsibilities. The areas covered by ITIL, divided into Service Support and Service Delivery, are summarised in the following section. Unless you’ve been living under a stone for the past few years you’ll recognise at least some of them and appreciate that they encompass most of the things that should probably have been thought about at the same time as the rush to distributed computing was under way, in order to keep it all manageable to at least some degree.

Service Support Disciplines

Service Desk

Provides a central interface and point of contact between users and IT, handling incidents reported by users and requests for new services and acting as the interface into other processes as required.

Incident Management

Provides a means of restoring normal operation as quickly as possible following a service-impacting outage, if necessary by effecting a temporary fix or workaround based on previous experience.

Problem Management

Seeks to identify the underlying root cause of incidents and to implement permanent fixes or changes to remove these and so prevent recurrence of the same or similar incidents.

Change Management

Manages the risk associated with changes to any part of the IT infrastructure to ensure that the desired outcome is achieved without adversely affecting the service in question or causing any unforeseen knock-on effects.

Release Management

Considers everything that needs to be done to ensure that a major release (such as a new application rollout) into the IT infrastructure is successful, including support staff and user training, documentation, operational handover, testing etc.

Configuration Management

Seeks to manage the configuration and versions of all technology components, applications and other IT assets, providing a logical model of the IT infrastructure and the relationships between “configuration items”.

ITIL Service Delivery Disciplines

Service Level Management

Defines expected levels of IT service, documents these in service level agreements (SLAs), implements monitoring and reporting to measure achievement against them, and seeks to “design in” the ability to meet SLAs from the outset of IT projects.
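
To make the measurement side of that concrete, here’s a minimal sketch (Python, with invented figures; ITIL itself doesn’t prescribe any particular tooling or calculation) of the kind of availability sum a monthly SLA report might contain:

```python
# Illustrative only: compare measured availability for a service
# against an SLA target. All names and figures are invented.

def availability_pct(total_minutes, downtime_minutes):
    """Availability: time the service was up, as a percentage of the
    time it was supposed to be available."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

# A 30-day month of 24x7 service with outages totalling 95 minutes
total = 30 * 24 * 60          # 43,200 minutes in the reporting period
downtime = 95

measured = availability_pct(total, downtime)
sla_target = 99.9             # the level agreed in the SLA

print(f"Measured availability: {measured:.3f}%")   # 99.780%
print("SLA met" if measured >= sla_target else "SLA breached")
```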

Availability Management

Does everything possible to ensure that IT services are available at the required times to the right people, including designing for resilience, monitoring and reporting service availability and process optimisation for availability.

Capacity Management

Performs continuous monitoring, analysis and optimisation of production IT services to ensure continued delivery in line with SLAs, supports pre-deployment performance testing and optimisation and assesses the impact of changes on service performance.
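
By way of illustration (again Python, with made-up numbers, and far cruder than anything a real capacity planner would use), the simplest form of capacity forecasting fits a trend line to historical usage and estimates how long the headroom will last:

```python
# Crude trend-based capacity forecast: fit a least-squares line to
# monthly storage usage and estimate when the volume will be full.
# Figures are invented; real capacity management uses proper
# monitoring data and more robust forecasting than this.

def months_until_full(usage_gb, capacity_gb):
    n = len(usage_gb)
    months = range(n)
    mean_x = sum(months) / n
    mean_y = sum(usage_gb) / n
    # Least-squares slope: average monthly growth in GB
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, usage_gb))
             / sum((x - mean_x) ** 2 for x in months))
    headroom = capacity_gb - usage_gb[-1]
    return headroom / slope if slope > 0 else float("inf")

# Six months of observed usage on a 500 GB volume
usage = [310, 330, 355, 370, 395, 415]
print(f"Roughly {months_until_full(usage, 500):.1f} months of headroom left")
```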

Financial Management for I.T. Services

Provides guidelines for effective IT financial management including recovery of costs through usage-based charging.

I.T. Service Continuity Management

Seeks to ensure continuity of IT service provision through effective backup and recovery, DR/failover solutions and supporting processes, so that services can be recovered in the event of a disaster in line with contingency plans for business recovery.

So what’s in it for me?

Having read the above list you’re probably thinking that this all makes perfect sense and is just what every organisation should be doing in order to manage its IT resources and services effectively. In practice, however, it can be difficult to do all this stuff well and every organisation has different specific requirements depending on its technology, people, processes and culture.

Where ITIL scores is that it doesn’t dictate a standard way of doing things using specific tools. Rather, it recommends best practices that are pragmatic and can be tailored to the requirements of virtually any organisation, large or small, which makes use of IT to go about its business. You can even leave bits out if they’re not relevant to you. Fantastic! It’s this flexibility and pragmatism (or, put another way, common sense) which has seen ITIL adopted the world over as the “industry standard” framework for managing IT as a service as opposed to a set of technology platforms.

Simply put, ITIL works. If someone had packaged it up and sold it they’d have made a fortune by now. Luckily, ITIL is in the public domain; for a few hundred pounds an organisation can buy the OGC manuals and off they go. An active ITSM community with its own representative body, the IT Service Management Forum (itSMF), constantly shares information and promotes the wider application and further development of the best practice guidelines which make up ITIL. There are now recognised professional qualifications for individuals involved in applying these in their own organisations or in providing ITIL consultancy and related services. In many cases, an ITIL qualification or accreditation of some kind is becoming a requirement rather than a nice-to-have when looking for that next position in IT, the clearest evidence of all that IT Service Management is here to stay (for a while at least).

SAN, NAS or Both?


In another of our series of technical articles, Feathered Owl gives a few pointers to help you decide whether SAN or NAS is the right storage solution for you.

Over the past twenty years there has been a worldwide migration from host-based to distributed computing. This has had numerous effects, many of them unforeseen when organisations first began augmenting or replacing their mainframe and midrange systems with mini- and microcomputers. One such effect is the increasing move toward storage consolidation and the emergence of Storage as an I.T. discipline in its own right. Today’s I.T. managers have a huge range of vendors and technologies available to them in the Storage arena, and one key decision which must be reached is whether to use Storage Area Network (SAN), Network Attached Storage (NAS), or, indeed, both to satisfy an organisation’s storage consolidation requirements.

Why Consolidate Storage?

Once many organisations had installed dozens, hundreds or even thousands of small computers in their machine rooms, each with their own locally-attached storage, the following issues typically arose: -

  • Storage attached to one computer could not be readily accessed by users or applications on another computer
  • Management of storage resources across the distributed I.T. estate became more onerous
  • Overall utilisation of storage space was inefficient, because each system’s free “slack” space was inaccessible to every other system (see the short example after this list)
  • Data backup and recovery became a significant challenge
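
As promised in the list above, here’s a back-of-the-envelope sum (a Python sketch with invented figures) showing how free space ends up stranded when every server carries its own direct-attached disk:

```python
# Back-of-the-envelope illustration (all figures invented): with
# direct-attached storage, each server's free space is stranded locally.

servers = 40              # servers, each with its own local disk
disk_per_server_gb = 100  # raw capacity per server
avg_utilisation = 0.6     # each server only fills 60% of its own disk

stranded_gb = servers * disk_per_server_gb * (1 - avg_utilisation)
print(f"{stranded_gb:.0f} GB free across the estate, "
      "yet no single server can borrow another's spare space")  # 1600 GB
```

Consolidating that storage into a shared pool is precisely what SAN and NAS set out to address.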

The Emergence of SAN and NAS

To address these problems, it is now common practice, in I.T. environments of any significant size, to treat data storage as a centralised resource and provide shared access to it via a network. The simplest form of this is the use of file servers to house personal or workgroup data. A file server can be any computer on a network whose storage has been rendered accessible to other computers via a file-sharing protocol such as NFS or CIFS (more on protocols in a while).
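
Part of the appeal is that once the operating system has mounted a share, applications neither know nor care that the storage is remote. A minimal Python sketch (the mount point /mnt/shared is hypothetical, and the mounting itself is done by the operating system, not the application):

```python
# Once the OS has mounted an NFS or CIFS share (here at the hypothetical
# path /mnt/shared), applications use ordinary file operations; the
# file-sharing protocol underneath is completely transparent to them.

from pathlib import Path

shared = Path("/mnt/shared/reports")
shared.mkdir(parents=True, exist_ok=True)   # works like a local directory

report = shared / "monthly_sales.txt"
report.write_text("Sales figures for June\n")  # written over the network

print(report.read_text())   # read back, again via the sharing protocol
```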

File servers became widespread when PC network operating systems such as NetWare and Windows for Workgroups emerged; these originally came into being to support sharing of files and printers attached to desktop PCs in office environments. NAS devices evolved from file servers, as manufacturers introduced dedicated file-serving devices to reduce the cost and management overhead associated with multiple server operating systems whose only role in life was to make storage accessible on the network.

At the other end of the spectrum, a more heavyweight solution for consolidating and sharing storage for “enterprise” applications such as databases emerged – the SAN. Historically, a SAN was a means of linking storage devices to multiple computers via a dedicated Fibre Channel network separate from the main data network.

Some More Technical Detail

So a SAN is for big enterprise storage and NAS is for workgroup files and home drives? Yes and no, or maybe not really. We’ll talk about this in a bit. But first there’s some more technical stuff which is important to help us understand the differences between the two technologies. No self-respecting technology is complete without an acronym, so here are a few explained for you in as simple a fashion as possible: -

SCSI

The Small Computer System Interface was ratified by ANSI in 1986 and quickly became an almost universal standard means of attaching storage to mini- and microcomputers via a parallel connection. Other standards which have evolved from this are SCSI-2, SCSI-3, iSCSI and Fibre Channel. As you may be aware, the Fibre Channel network protocol is very important in the world of Storage and SANs.

Fibre Channel

Fibre Channel came about as an alternative to SPI (the SCSI Parallel Interface), getting round the main limitations of parallel SCSI: -

  • Parallel SCSI cable has length limitations due to crosstalk within copper cables and external interference
  • Parallel SCSI is limited to a maximum of 16 devices on a bus
  • It’s not practical to connect more than one computer to the same storage device

Fibre Channel is a serial protocol which uses fibre optic cable, allowing single cable runs of up to 10 kilometres (Fibre Channel can also run over copper cable but that’s another story). As well as supporting longer distances, Fibre Channel supports (in theory) up to 16 million devices on the same bus, meaning that storage devices can be readily shared amongst multiple computers at the network level.
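
Incidentally, that “16 million” figure isn’t marketing licence; a switched Fibre Channel fabric identifies each port with a 24-bit address, so the theoretical ceiling falls straight out of the arithmetic:

```python
# A switched Fibre Channel fabric uses 24-bit port addresses, so the
# theoretical maximum number of addressable ports is:
print(2 ** 24)   # 16,777,216 - the "16 million devices" quoted above
```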

NFS

The Network File System, part of the TCP/IP protocol suite, was developed by Sun Microsystems and released to the public in 1984. Since then it has become the standard means of sharing filesystems in the UNIX world.

CIFS

The Common Internet File System is the commonest protocol used to share files in Windows environments, and is based on NetBIOS. Functionally, NFS and CIFS are analogous; they just tend to be found in UNIX and Windows environments respectively.

Needless to say it’s all vastly more complex than this, but, in essence, SANs are an evolution of SCSI and NAS is an evolution of NFS and CIFS.

SAN and NAS Defined

Returning now to the issue of distinguishing between SAN and NAS, rather than just considering size and physical architecture, we can use the following definitions: -

A SAN is storage shared at the device level via a serial SCSI protocol such as Fibre Channel

NAS is a computer or device dedicated to sharing files via NFS or CIFS

Technological Convergence in Storage

These definitions of SAN and NAS work well when you start to consider that, since their inceptions, the uses to which SANs and NAS are put and the hardware on which they are implemented have converged significantly.

As speed, size and reliability have improved, NAS devices have begun to be used for “enterprise” applications such as databases, email and data archiving as opposed to just home drives and shared workgroup directories. SANs have got smaller and more manageable and it is now possible to buy a “SAN in a box” – basically a SAN appliance which contains both storage and SAN fabric to which computers can simply be connected via a Fibre Channel network card or HBA (host bus adapter).

Another recent development is the NAS gateway, which allows SAN storage to be presented via NFS or CIFS over an IP network instead of as a raw chunk or logical unit (LUN) of disk over Fibre Channel.

So what’s the Difference Again?

In summary, the key difference to bear in mind is that SAN is good for making consolidated storage available to multiple computers as raw devices, whilst NAS is good for making it available via network shares. Where an application requires or works best with raw device storage (as is typically the case with enterprise database software), you need a SAN. Where an application will happily access data via a network share, NAS will, more often than not, fit the bill.
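
If it helps, the rule of thumb boils down to something like the following (a deliberately over-simplified Python sketch; real storage decisions also weigh performance, cost and manageability):

```python
# A deliberately over-simplified encoding of the SAN-vs-NAS rule of
# thumb from this article; real decisions involve many more factors.

def storage_recommendation(needs_raw_device: bool) -> str:
    if needs_raw_device:
        return "SAN: present a raw LUN over Fibre Channel"
    return "NAS: serve files over NFS or CIFS"

print(storage_recommendation(True))    # e.g. enterprise database software
print(storage_recommendation(False))   # e.g. home drives, workgroup shares
```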

Voice Over IP Anyone?

Well, we here at Feathered Owl have just picked up another project to implement a VoIP telephone solution for a new client. More and more people are going down this route; it seems that the technology is becoming increasingly mainstream. We'll be updating this post as the project progresses.

Feathered Owl Technology enters the Blogosphere

A few days ago we here at Feathered Owl made a commitment to start putting into practice some of the advice we freely dispense (actually, ideally we don't dispense it entirely for free...) to our clients and anyone else who happens to be listening. In fact, this blog is one of the results. And guess what, all that stuff about blogging being a great way to increase the visibility of your site out there on the Interweb seems to be true. Although our expectations as to how many people are actually reading this blog at this moment in time (please let us know if you are) remain on the low side, Google seems to know all about it already and indexes new posts within hours or even minutes of them being posted. We are keeping a close watch on some of the other search engines, which appear to be less interested in our outpourings here but have, at least, discovered the blog homepage itself.

In addition to search engines, there are a seemingly endless number of social networking and blog watching sites out there. The theory is that once your blog starts to be seen by Net surfers it will be bookmarked, tagged or otherwise linked to and so become more visible, since the search engines assume that if a few people have bothered to link to something it must be interesting for some reason. Thus, we are told, a virtuous cycle (or circle, whichever you prefer) is created and over time your blog and maybe even the rest of your website which contains details about what you want to sell, or whatever, becomes more and more visible. To start the ball rolling, we've created an account for ourselves on Technorati and added this blog to our list of "claimed blogs". At the time of writing we are ranked number 3,196,618 or thereabouts, so there's a bit of room for improvement...

We're going to keep monitoring the progress of all this and will keep you all in the loop. Maybe by the time the next update is posted "you all" may have ceased to be an abstract concept, who knows.

CSS – Coming Soon to a Website Near You

Actually, this website. We're going through all the pages and starting to bring them in line with some kind of up-to-date standard for HTML code. All using WordPad, which is good for our understanding of CSS/HTML but is making us seriously consider (not for the first time, but more seriously than before) investing in a copy of Adobe Dreamweaver. And maybe a Mac...